Explain Plan – Which Plan is Better

29 01 2010


A recent post appeared in the OTN forums that indirectly asked the question: which execution plan is better?  The execution plans follow:

The Unhinted Execution Plan:

```
--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |  1285M|   326G|       |    45M  (1)|178:06:59 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |  1285M|   326G|  5153M|    45M  (1)|178:06:59 |       |       |
|   3 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
|*  4 |    HASH JOIN           |             |  1261M|   287G|  2857M|    32M  (1)|124:52:03 |       |       |
|   5 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|*  6 |     HASH JOIN          |             |  1241M|   252G|  1727M|    20M  (1)| 78:33:50 |       |       |
|   7 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   8 |      PARTITION HASH ALL|             |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
|   9 |       TABLE ACCESS FULL| A           |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
--------------------------------------------------------------------------------------------------------------
```

The Hinted Execution Plan that Sets the Cardinality for Table A to 10M Rows:

```
--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |    10M|  2696M|       |  4578K  (1)| 17:48:26 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |    10M|  2696M|  2491M|  4578K  (1)| 17:48:26 |       |       |
|*  3 |    HASH JOIN           |             |    10M|  2374M|  2193M|  3996K  (1)| 15:32:36 |       |       |
|*  4 |     HASH JOIN          |             |    10M|  2079M|  1727M|  3636K  (1)| 14:08:30 |       |       |
|   5 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   6 |      PARTITION HASH ALL|             |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   7 |       TABLE ACCESS FULL| A           |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   8 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|   9 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
--------------------------------------------------------------------------------------------------------------
```

The Original Poster Stated Both Plans have the Same Predicates:

```
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(A."ID"="D"."ID")
3 - access("A"."E_ID"="C"."E_ID")
4 - access("A"."M_ID"="B"."M_ID")
```

So, which execution plan is better?  Are the plans the same?  How are they the same, and how do they differ?

While we contemplate which execution plan is optimal, the OTN thread took a slight detour into a discussion of work areas in Oracle:

A search of the documentation found this page that offered the following definition:

“work area: A private allocation of memory used for sorts, hash joins, and other operations that are memory-intensive. A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.”

There may be multiple active work areas in a single SQL statement. While not the original purpose of this blog article, the article does show how to see the amount of memory in use for active work areas.  If you have a copy of the book “Troubleshooting Oracle Performance“, I highly recommend that you read pages 434 through 439 if you are curious about Oracle work areas.  Those pages describe how hash joins work and provide a detailed description of work areas.
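As a sketch of what such monitoring might look like, the dynamic performance view V$SQL_WORKAREA_ACTIVE exposes the currently active work areas for a session (column names per recent Oracle releases; adjust for your version, and note that V$ views require appropriate privileges):

```
SELECT
  sid,
  operation_type,
  operation_id,
  actual_mem_used,
  max_mem_used,
  number_passes,
  tempseg_size
FROM
  v$sql_workarea_active
ORDER BY
  sid,
  operation_id;
```

A NUMBER_PASSES value greater than zero indicates that a work area spilled to the temp tablespace (a one-pass or multi-pass operation rather than an optimal, fully in-memory operation).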

Back to the original question.  While reading the plans, keep in mind that you are only looking at the optimizer’s estimates for the number of rows, time, memory usage, temp tablespace usage, and cost.  If you are attempting to conclude which plan is faster/better based on the estimates in the first plan and an altered plan with a hinted cardinality estimate, you might be setting yourself up for failure.  Note that the first plan has a calculated cost of about 45,000,000 while the second plan has a calculated cost of about 4,578,000.  So obviously, the second plan is more efficient.  Or is it?  With the cardinality hint, the OP effectively changed the number of rows that the optimizer expects to be returned from table A from roughly 1,241,000,000 to 10,000,000.  Additionally, one should not directly compare the calculated cost of one execution plan with that of a second execution plan.  You probably should be thinking to yourself at this point: “Have you considered actually testing the performance?”
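For illustration only, a cardinality adjustment like the one described might have been written as follows. The CARDINALITY hint is undocumented and unsupported, the select list is hypothetical, and the join predicates are reconstructed from the Predicate Information section shown earlier:

```
INSERT /*+ APPEND */ INTO e
SELECT /*+ CARDINALITY(a 10000000) */
  a.id,
  a.e_id,
  a.m_id
FROM
  a,
  b,
  c,
  d
WHERE
  a.id   = d.id
  AND a.e_id = c.e_id
  AND a.m_id = b.m_id;
```

The hint does not change the data returned; it merely overrides the optimizer’s row estimate for table A, which is why every downstream estimate (bytes, temp space, cost, time) changes as well.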

In the OTN thread Timur stated that both plans use the very same join order: B->A->C->D.  Based on my understanding of execution plans, this is a correct statement, even though the plans look a bit different.  Note that the OP was slightly incorrect in stating that the Predicate Information sections for the two plans were identical – the plan ID numbers should have been a bit different.

(Confession: I re-read the section of the book “Troubleshooting Oracle Performance” that discussed hash joins before providing the following response.)  Essentially, the difference between the two plans is which table (or row source) is the build input, and which is the probe input, at each hash join. The first table (or row source) listed below the words HASH JOIN is the source for the hash table; the optimizer typically tries to select the row source with the smaller estimated size as the build input. The second table (or row source) is fully scanned, probing the generated hash table in search of a match. By artificially altering the optimizer’s estimated number of rows returned from table A, the OP has flipped the build input and the probe input at each hash join. This could significantly increase, significantly decrease, or have no impact on the amount of time required for the query to execute, the amount of memory used, or the amount of temp space needed.
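To make the build/probe distinction concrete, here is a minimal sketch of the general hash join technique in Python (this is the textbook algorithm, not Oracle’s implementation; the row data is made up). A hash table is built from the smaller input on the join key, and the larger input is scanned, probing that table for matches:

```python
def hash_join(build_rows, probe_rows):
    """Equijoin two lists of (key, payload) tuples on the first field.

    Build phase: hash the (ideally smaller) build input on the join key.
    Probe phase: fully scan the probe input, looking up each key.
    """
    hash_table = {}
    for key, payload in build_rows:
        hash_table.setdefault(key, []).append(payload)

    result = []
    for key, payload in probe_rows:
        for match in hash_table.get(key, []):
            result.append((key, match, payload))
    return result

# Table B (smaller, the build input) joined to table A (larger, the probe input).
b = [(1, "b1"), (2, "b2")]
a = [(1, "a1"), (2, "a2"), (2, "a3"), (3, "a4")]
print(hash_join(b, a))
# → [(1, 'b1', 'a1'), (2, 'b2', 'a2'), (2, 'b2', 'a3')]
```

Swapping the arguments returns the same logical join result, but the memory consumed by the hash table is driven by the build input’s size, which is why flipping the build and probe inputs can change memory and temp space usage so dramatically.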

My suggestion to the OP is to test the performance to see which execution plan is more efficient, rather than guessing. My blog article that is referenced above has SQL statements that may be used to see the number of work areas that are active at any point, as well as the amount of RAM and temp space in use. You could continue to guess about which plan is better, but why guess?

10 responses

29 01 2010

Hi Charles,

“test the performance to see which execution plan is more efficient, rather than guessing.” Very true.

Does a decrease in cost mean a decrease in the elapsed/execution time of the query?

29 01 2010

>Additionally, one should not directly compare the calculated cost of one execution plan with that of a second execution plan
Well, on the other hand, one who understand what is the cost, what cost-based optimizer is and what’s behind CBO calculations, tend to carefully compare the figures reported by explain plan.

29 01 2010

Anand, one of the questions that seems to be brought up in discussion forums somewhat frequently is: “the cost for this plan is too high, how can I lower the cost?” It is important to keep in mind that the cost figure that appears in explain plans is just an estimate.
* If the optimizer estimates incorrectly the number of rows that will be returned from a result set, the calculated cost will be incorrect. This could happen if statistics have not been collected recently for tables and indexes, or if some of the columns are correlated (not truly independent), or if the wrong data type is selected for a column (or bind variable definition), or functions are used in the WHERE clause, or bind variable peeking does or does not take place, etc.
* If the statistics in SYS.AUX_STATS$ do not correctly reflect the performance of the system, the cost estimates could be incorrect.
* If OPTIMIZER_INDEX_COST_ADJ or OPTIMIZER_INDEX_CACHING are set incorrectly, the cost estimates could be incorrect.
* In some cases, if the DB_FILE_MULTIBLOCK_READ_COUNT is set incorrectly, the cost estimates could be incorrect.
* If a PARALLEL hint ( http://jonathanlewis.wordpress.com/2009/05/09/hints-on-hints/ ) is supplied, the calculated cost could be incorrect.
* If a CARDINALITY or OPT_ESTIMATE hint is used to change the expected number of rows returned by a result set, the calculated cost could be incorrect.
* If dynamic sampling is used, the calculated cost could be incorrect (Jonathan Lewis touches on this special case in the “Expert Oracle Practices” book).
* … (I probably left out a lot of other possibilities – any other ideas?)
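As a quick sanity check for the system statistics point in the list above, the collected values may be examined directly. A sketch of such a query follows (access to the SYS-owned table is required, and column names are per the documented DBMS_STATS system statistics storage):

```
SELECT
  pname,
  pval1
FROM
  sys.aux_stats$
WHERE
  sname = 'SYSSTATS_MAIN';
```

Values such as CPUSPEEDNW, SREADTIM, MREADTIM, and MBRC that are wildly different from the hardware’s actual performance are one hint that the cost calculations may be skewed.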

A decrease in the estimated cost probably means a decrease in the estimated execution time, and might mean a decrease in the actual execution time. I am reminded of the following blog article:

http://jonathanlewis.wordpress.com/2006/12/11/cost-is-time/

Timur, good point. By the way, I did not spend much time looking at the explain plans that are included in this blog article until you stated that the join order is exactly the same. I then thought that the plans were “interesting” since they do appear to be different at first glance.

29 01 2010

Timur/ Charles,
>Well, on the other hand, one who understand what is the cost, what cost-based
>optimizer is and what’s behind CBO calculations, tend to carefully compare the
>figures reported by explain plan.

Unfortunately, those who understand what the cost is, what the CBO is, and what is behind the CBO’s calculations are far fewer than those who do not. Hence there is a risk of such statements easily being (incorrectly) translated as “Cost is the (only) ‘Holy Grail’ of performance tuning in Oracle”. Even in the forum post mentioned here, it should be noted that the two query plans were generated by two queries that are not the same. Now, would it be right to start the tuning process by comparing the costs (and trying to figure out how they are calculated), or to start the tuning process by identifying WHY the query needs to be hinted in the first place (if the hinted query generates the more attractive plan)?
An old asktom thread comes to mind (and I agree with Tom):

29 01 2010

I do wonder about the un-named fan who posted the following on June 14, 2002:
“Comparing the same query using COST is perfect legitimate because that is what the COST is for. What we expect is the optimizer to permute all possible combinations and pick up the plan with the least cost. If you could rewrite the same SQL to get a cheaper cost theoretically it should be faster. If it is not I guess it is an optimizer bug. Also if the COST is not just based on logical reads but on the CPU, then comparing the SQL in the same instance is still legitimate.”

And then the follow-up by apparently the same un-named fan on June 27, 2002:
“Ok, let me rephrase this again. You can compare the same SQL when you are trying to tune that SQL. If you can tune the SQL to get a lower cost than whatever CBO comes up with, it should perform better (lets not get to the run time because it depends on the physical characteristics like file system, disk speed etc.) I’m not wishing CBO to behave like this but thats how CBO works.

Cost CANNOT be treated as a random number. Till 8i Cost is a metric based on the I/O (logical reads) and starting 9i there is an option to make the Cost based on CPU also along with I/O (calling the API DBMS_STATS.GATHER_SYSTEM_STATS)

The very fact COST is exposed is to give the user an idea about the unit of work or resource it is estimating to use for the SQL…

I’m not saying all the above after reading from a manual. I’ve been working on CBO for couple of years and I’m saying the above from my experience. I can also reiterate, if you can come up a tuned SQL where the COST is cheaper than the default CBO plan, then it is a CBO bug. It is not what I wish, it is how it is.”

One has to wonder what that un-named person had read, and possibly what else that person had authored based on their experience. Apparently, some of that information is still being taught or read somewhere.

From the first followup in the thread:
“When we get a query, we come up with lots of plans. Each step of the plan is assigned some relative cost. At the end, we apply a function to derive the total cost of that query for each plan. These costs (which are ONLY visible to the optimizer) can be compared as they are for the same exact SQL using the same exact environment.”

I think that it should be stated that the calculated cost is compared at *each* step of the execution plan, and that the lowest calculated cost at that stage of the plan is selected. The selection of the lowest cost-based decision at one step in a plan can cause later steps in the plan to generate higher calculated costs, such that the total cost output for the query could be significantly greater than if the cost-based optimizer selected a higher cost join or access path earlier in the execution plan. Thus, for queries with several row sources (tables, indexes, or previous join results), you probably should not compare the costs that are output with an execution plan when a different join order is used or when a different access path is selected.

Of course, the cost is time argument does make some sense (to me) at individual stages of an execution plan.

30 01 2010

Charles,

My intention was not to say ALWAYS IGNORE THE COST. But, as I mentioned as a question in my earlier response, is looking at the cost the right way to start the performance tuning process? I believe not.

1 02 2010

[...] when it comes to computer systems and performance you should read these articles by Cary Millsap. Explain Plan – Which Plan is Better Which execution plan is better? Are the plans the same? How are they the same, and how do they [...]

5 02 2010

[...] Hooper grapples the question, Which Plan is Better? Charles writes, “A recent post appeared in the OTN forums that indirectly asked the question: [...]

18 07 2012

Hi Friends,
I have generated an explain plan, and it is showing a huge cost.

```
-------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost  |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |     1 |   155 |  6509 |
|   1 |  NESTED LOOPS      |              |     1 |   155 |  6509 |
|   2 |   TABLE ACCESS FULL| TANCASE_INFO |     1 |   149 |  6508 |
|   3 |   INDEX UNIQUE SCAN| PK_CASE_ID   |     1 |     6 |     1 |
-------------------------------------------------------------------

-----------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     2 |   302 | 63295 |
|   1 |  TABLE ACCESS FULL| TANWOB_EVNT |     2 |   302 | 63295 |
-----------------------------------------------------------------

------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost  |
------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     1 |    56 |  2978 |
|   1 |  TABLE ACCESS FULL| TANWOB |     1 |    56 |  2978 |
------------------------------------------------------------

-------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |    90 |  3988 |
|   1 |  TABLE ACCESS FULL| TANCASE |     1 |    90 |  3988 |
-------------------------------------------------------------
```

Can u please some one help me, to reduce the cost.

Regards
Ravindra

18 07 2012

Ravindra,

First, a little friendly advice. When communicating on blogs, Internet forums, and other places on the Internet where information is shared with people from many different countries with primary languages other than English, please refrain from using uncommon abbreviations for words. Text message abbreviations, such as “u”, do not translate well into languages other than English, making it difficult for non-native English readers to understand and assist with any problems that you share with the world through blogs and Internet forums.

Second, it appears that you have generated four execution plans, one of which has a calculated cost that is roughly 10 times higher than the calculated cost that appears in the other execution plans. The calculated cost is used by the optimizer when developing the execution plan. While the calculated cost is intended to indicate to the Oracle query optimizer the approximate amount of time required to execute a particular portion of the execution plan (see http://jonathanlewis.wordpress.com/2006/12/11/cost-is-time/ ), that calculated cost estimate may be miscalculated due to various factors including out of date object statistics, missing or inappropriate histograms, bind variable peeking (or not peeking), silly mistakes in queries (such as unnecessary implicit data type conversions, unnecessarily using a function such as TRUNC on a column in a WHERE clause, etc.), inappropriate system (CPU) statistics, etc.

While it is important for the query optimizer to reasonably accurately calculate the cost of an execution plan, it makes little difference if the cost of a full table scan is 1, 63,295, or 1,000,000,000 if a full table scan is the only “legal” method to retrieve the data from the table due to missing indexes, inappropriate indexes (such as those with a high clustering factor that are not very selective), or indexes that cannot be used due to a mistake in the query (implicit data type conversion, using a function such as TRUNC on the indexed column, etc.).
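As a small sketch of the TRUNC mistake mentioned above (the table and column names here are hypothetical), applying a function to the column disables a normal index on that column, while an equivalent range predicate does not:

```
-- Disables a normal (non function-based) index on ORDER_DATE:
SELECT COUNT(*) FROM orders
WHERE TRUNC(order_date) = DATE '2012-07-18';

-- Equivalent range predicate that permits an index range scan:
SELECT COUNT(*) FROM orders
WHERE order_date >= DATE '2012-07-18'
  AND order_date <  DATE '2012-07-19';
```

Both queries return the same rows; only the second leaves the column bare so that an index on ORDER_DATE remains usable (alternatively, a function-based index on TRUNC(ORDER_DATE) could support the first form).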

The “Predicate Information” section of an execution plan (found in execution plans when generated with recent Oracle client versions and Oracle Database versions) may provide valuable information regarding mistakes in a particular query that resulted in an unexpected operation in an execution plan.

In short, do not tune by attempting to make a query’s calculated cost as low as possible. The calculated cost is only an estimate, and if that estimate does not reflect the actual resource utilization to perform an operation in an execution plan, the calculated cost is just a number that is meaningless for improving efficiency. Use actual timing (a 10046 trace is great for such timing), run time statistics, etc. rather than the calculated cost.
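For reference, a 10046 trace at level 8 (which includes wait events) may be enabled for the current session along the following lines; this is only a sketch, and the preferred mechanism (event syntax, DBMS_MONITOR, etc.) varies by Oracle release:

```
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'cost_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... execute the SQL statement of interest ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

The resulting trace file (in the diagnostic trace directory, with the identifier in its name) shows the actual elapsed time, block accesses, and wait events for each operation, which is far more useful for tuning than the calculated cost.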