Faulty Quotes 5 – Block Sizes


January 31, 2010 (Updated Feb 1, 2010)

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

The topic of deviating from the default 8KB block size in Oracle Database, or of using multiple block sizes in a single database, seems to surface every couple of months in the OTN forums, Oracle-L, the comp.databases.oracle.server Usenet group, and similar discussion forums.  I think that I understand why.  A lot of information has been written that advocates using multiple block sizes in a single Oracle database, or using the largest possible block size to improve “full scan” or “range scan” performance.  Such information is found in blogs, news articles, discussion forums, expert sites, books, and even Oracle’s download.oracle.com website.  So, why do people ask questions in discussion forums about using larger than default block sizes, or multiple block sizes, if there are so many sources of information that say “just do it”?  Well, chances are that the Google (or other search engine) search that found all of the sources recommending the use of non-standard settings also found several pages where people basically stated “stop, think, understand before making any changes.”  See the Faulty Quotes 3 blog article.

So, you might be curious what my Google search found.  Is it a best practice to implement multiple block sizes in a single database, and is it a best practice to move all of your indexes to a tablespace using the largest supported block size?  (See chapter 1 of Expert Oracle Practices for a discussion on the topic of implementing “best practices”.)  In the following quotes, I have attempted to quote the bare minimum of each article so that the quote is not taken too far out of context (I am attempting to avoid changing the meaning of what is being quoted).

http://download.oracle.com/docs/cd/E13214_01/wli/docs102/dbtuning/dbtuning.html

“Oracle9i introduced a new feature that allowed a single instance of the database to have data structures with multiple block sizes. This feature is useful for databases that need the flexibility of using a small block size for transaction processing applications (OLTP); and a larger block size to support batch processing applications, decision support systems (DSS), or data warehousing. It can also be used to support more efficient access to larger data types like LOBs.”

http://www.virtual-dba.com/pdfs/Xtivia_WP_Oracle_Best_Practices_2008.pdf  (page 14)

“In Oracle databases 9i, 10g, and 11g, it is a best practice to use multiple block sizes; this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces sized (block size) according to access…”

http://www.oracle.com/technology/products/database/clustering/pdf/bp_rac_dw.pdf (page 19)

“Larger oracle block sizes typically give fewer index levels and hence improved index access times to data. A single I/O will fetch many related rows and subsequent requests for the next rows will already be in the data buffer. This is one of the major benefits of a larger block size. Another benefit is that it will decrease the number of splits.”

dba-oracle.com/art_so_blocksize.htm

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace.”
“As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache. Clearly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and above are worth your investigation and trials in your own database.”

rampant-books.com/t_oracle_blocksize_disk_i_o.htm

“B-tree indexes with frequent index range scans perform best in the largest supported block size.  This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL during index range scans.  Some indexes do not perform range scans, so the DBA should make sure to identify the right indexes”

praetoriate.com/t_oracle_tuning_data_buffer_pools.htm

“This is an important concept for Oracle indexes because indexes perform better when stored in large block size tablespaces.  The indexes perform better because the b-trees may have a lower height and mode entries per index node, resulting in less overall disk overhead with sequential index node access.”

remote-dba.cc/s56.htm

“Indexes want large block sizes – B-tree indexes perform best in the largest supported block size and some experts recommend that all indexes should reside in 32K block size tablespaces. This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL performing index range scans.”
“Many DBAs make their default db_block_size 32k and place indexes, the TEMP tablespace and tables with large-table full-table scans in it, using other block sizes for objects that require a smaller fetch size.”

remote-dba.net/unix_linux/multiple_block_sizes.htm

“Large blocks – Indexes, row-ordered tables, single-table clusters, and table with frequent full-table scans should reside in tablespaces with large block sizes.”

oracle-training.cc/s54.htm

“Larger block sizes are suitable for indexes, row-ordered tables, single-table clusters, and tables with frequent full-table scans. In this way, a single I/O will retrieve many related rows, and future requests for related rows will already be available in the data buffer.”

oracle-training.cc/oracle_tips_block_sizes.htm

“Indexes want large block sizes – Indexes will always favor the largest supported blocksize. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans.  Hence, all indexes should reside in tablespaces with a 32k block size.”

oracle-training.cc/t_oracle_multiple_buffers.htm

“One of the first things the Oracle9i DBA should do is to migrate all of their Oracle indexes into a large blocksize tablespace. Indexes will always favor the largest supported blocksize.”

http://forums.oracle.com/forums/thread.jspa?messageID=2445936

“It’s pretty well established that RAC performs less pinging with 2k blocksizes”
“Large blocks gives more data transfer per I/O call.”
“Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.”

dbapool.com/articles/040902.html

“Index Branches: Larger oracle block sizes typically give fewer index levels and hence improved index access times to data .This is one of the major benefits of a larger block size.”

toadworld.com/LinkClick.aspx?fileticket=fqDqiUsip1Y=&tabid=234  (page 8)

“In Oracle9i and Oracle10g it is a good practice to use multiple block sizes, this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces according to access. For single block read type OLTP access, use 8k block sizes. For full table scan access such as with data warehouses use 16-32K block sizes. For index lookups use 8-16K block sizes. For indexes that are scanned or bitmap indexes, use 16-32K block sizes.”

dbaforums.org/oracle/index.php?s=87341768e1865563322676a1bd504db6&showtopic=83&pid=133&mode=threaded&start=#entry133

“Multiuple blocksizes are GREAT, but ONLY if your database is I/O-bound… Finally, research has proved that Oracle indexes build cleaner in large blocksizes.”

searchoracle.techtarget.com/tip/Oracle-tuning-Blocksize-and-index-tree-structures

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace… You can use the large (16-32K) blocksize data caches to contain data from indexes or tables that are the object of repeated large scans.”

dbazine.com/oracle/or-articles/burleson2

“Hence, one of the first things the Oracle9i database administrator will do is to create a 32K tablespace, a corresponding 32K data buffer, and then migrate all of the indexes in their system from their existing blocks into the 32K tablespace… Indexes will always favor the largest supported blocksize.”

statspackanalyzer.com/sample.asp

“You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.”

http://books.google.com/books?id=xxx0KAwY_ZMC&pg=PT133#v=onepage&q=&f=false

“If you have large indexes in your database, you will need a large block size for their tablespaces.”
“Oracle provides separate pools for the various block sizes, and this leads to better use of Oracle memory.”

noriegaaoracleexpert.blogspot.com/2007/08/advances-in-multiple-block-size-caches.html

“… and using multiple block caches act as an intelligent cache differentiator that automatically leverage cache performance optimization. I have successfully tested, like many other DBAs and developers, that beyond any possible SGA tuning that using multiple-block-size database can certainly improve performance through this performance approach.”

http://books.google.com/books?id=Wx6OmllCfIkC&pg=PA164#v=onepage&q=&f=false

“Simply by using the new 16K tablespace and accompanying 16K data cache, the amount of logical reads has been reduced by half.  Most assuredly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and later, are worth examination and trials in the DBA’s own database.”

http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA406#v=onepage&q=&f=false

“Objects that experience full scans and indexes with frequent range scans might benefit from being placed in a larger block size, with db_file_multiblock_read_count set to the block size for that tablespace.”

http://books.google.com/books?id=Uf2pb1c1H2AC&pg=RA1-PA317#v=onepage&q=&f=false

“Indexes want large block sizes: Indexes will always favor the largest supported block size… Hence, all indexes should reside in tablespaces with a 32K block size.”

dba-oracle.com/oracle_tips_multiple_blocksizes.htm (Added Feb 1, 2010):

“At first, beginners denounced multiple block sizes because they were invented to support transportable tablespaces.  Fortunately, Oracle has codified the benefits of multiple blocksizes, and the Oracle 11g Performance Tuning Guide notes that multiple blocksizes are indeed beneficial in large databases to eliminate superfluous I/O and isolate critical objects into a separate data buffer cache:

‘With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool…'”

————————-

Before deciding whether to deviate from the default 8KB block size (in either direction), or to add a tablespace with a larger or smaller than default block size, I suggest reviewing the following:

http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/iodesign.htm#i20394  (directly relates to Faulty Quotes 3)

“The use of multiple block sizes in a single database instance is not encouraged because of manageability issues.”

“Expert Oracle Database Architecture”
http://books.google.com/books?id=TmPoYfpeJAUC&pg=PA147#v=onepage&q=&f=false

“These multiple blocksizes were not intended as a performance or Tuning feature, but rather came about in support of transportable tablespaces…”

http://www.freelists.org/post/oracle-l/32K-block-size-tablespace-for-indexes,4

“But in most cases the administration overhead is much bigger than the performance benefit. You can easily end up with over- or undersized db_XXk_cache_size and the database can’t do anything about it. Then the performance will be better in some parts of the day and worse later on.”

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1468781700346675276

“I would not recommend going into a system planning on using multiple blocksizes – they were invented for one thing, to transport data from a transactional system to a warehouse (where you might be going from 8k OLTP to 16/32k warehouse) and to be used only to extract/transform/load the OLTP data.”

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1468781700346675276

“My block size is 4096 and my db_32k_cache_size=67108864
I want to create a tablespace with 32K and rebuild all indexes into this tablespace. These are
frequently used indexes. Do you think is there any benefit for using 32K block size in this scenerio”

“before you do something, you should have an identified goal in mind
so, tell us all – WHY would you do this? Don’t say “cause I read on some website it makes things super fast” (it doesn’t), tell us WHY you think YOU would derive benefit from this?
I do not think there is in general benefits to be gained from using multiple block size tablespaces – short of TRANSPORTING data from one block size to another for an ‘extract transform and load’ process.”

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:729373100346730466

“BUT – do not use multiple block sizes for anything other than transporting data from database A to database B where the block size in A is different from B. No silver bullets with this ‘trick’, nothing you want to do in real life. The cases whereby multiple blocksizes are useful are typically limited to benchmarks, old wives tales, and very exceptionally rare conditions.”

http://jonathanlewis.wordpress.com/2009/03/22/block-size-again/
“ORA-01555: snapshot too old” caused by large block size

“Oracle9i Performance Tuning Tips & Techniques”
http://books.google.com/books?id=59ks3deVd0UC&pg=PA9#v=onepage&q=&f=false

“Warning: Oracle development does not support the notion of using multiple block sizes for performance tuning. The nonstandard block caches are not optimized.”

http://forums.oracle.com/forums/thread.jspa?messageID=2445936

“How can I determine which block size is correct for my database.”

“Use 8k. This is right in the middle, and won’t put you in an edge condition. Call it the Goldilocks block, not to small, not to big, just right.
For both OLTP and DSS, 8k is an optimal size. I use 8k, always.
There is minimal gains to be had in messing with block sizes. Having good db design and good execution plans is a better place to worry about performance.”

Series of related articles (there are at least five articles in this series in which the author directly addresses many of the claimed benefits of fiddling with block sizes):
http://richardfoote.wordpress.com/category/index-block-size/

Summary of an OTN forums thread – likely the longest thread ever on the topic of block sizes (and very likely on multiple block sizes in the same database), from June 2008.  The thread was too large for the new OTN forum software to display for performance reasons.  Fortunately, Jonathan Lewis obtained a copy of the thread content in a PDF file:
http://jonathanlewis.wordpress.com/2008/07/19/block-sizes/

Related to the above mentioned OTN thread:
http://structureddata.org/2008/08/14/automatic-db_file_multiblock_read_count/
http://structureddata.org/2008/09/08/understanding-performance/

I posted a number of test cases in the above mentioned OTN thread in which I simulated some of the activity in a data warehouse, and some of the activity in an OLTP type database.  To a large extent, performance was very close to identical in the databases with the default 8KB block size and with a 16KB block size, with just a few exceptions.  As I recall, the 16KB database encountered performance problems when a column with a NULL value was updated, and when a rollback was performed.

Below you will find the scripts to reproduce my test cases that appeared in the above mentioned OTN thread, and the performance results that I obtained.  The OLTP test required roughly 10-12 hours to complete:
Block Size Comparison (save with a .XLS extension and open with Microsoft Excel).

I guess the message is that you should verify that the swimming pool contains water before diving in head first.





Execution Plans – What is the Plan, and Where Do I Find It?


January 30, 2010

(Forward to the Next Post in the Series)

So, what is the plan?  There are a lot of resources available to help with understanding execution plans, some of which are much more effective than others.  The most effective explanations that I have found are in the “Troubleshooting Oracle Performance” book, but the documentation (documentation2) is also helpful from time to time.  There are also a lot of ways to look at execution plans, including directly examining V$SQL_PLAN in recent Oracle releases.  This article shows some of the ways for generating execution plans, and some of the problems that might be encountered when attempting to obtain the “correct” execution plan.
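As a quick illustration of the V$SQL_PLAN approach (a sketch only – it assumes SELECT privileges on the V$ views, and the SQL_ID and child number of the cursor of interest, which may be found in V$SQL, must be substituted), the stored plan operations for a cursor may be retrieved directly:

SELECT
  ID,
  LPAD(' ',DEPTH)||OPERATION||' '||OPTIONS OPERATION,
  OBJECT_NAME,
  CARDINALITY,
  COST
FROM
  V$SQL_PLAN
WHERE
  SQL_ID='9dq71tc7vasgu'
  AND CHILD_NUMBER=0
ORDER BY
  ID;

The DBMS_XPLAN methods shown below format this same information far more readably, which is why direct queries of V$SQL_PLAN are usually reserved for special cases.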

For this blog article, the following script to create test tables will be used:

CREATE TABLE T1 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T3 (
  C1 NUMBER,
  C2 VARCHAR2(255));

CREATE TABLE T4 (
  C1 NUMBER,
  C2 VARCHAR2(255));

INSERT INTO
  T1
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T2
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T3
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T4
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)

The above script creates four tables with 1,000,000 rows each.  The first two tables have indexes on the primary key column due to the declared primary key column.  Additionally, the script collects statistics for the first and third tables, but not the second and fourth tables (to help with a couple of demonstrations).
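To confirm which of the four tables have collected statistics (a quick sanity check, assuming the tables were created in the current schema), USER_TABLES may be queried:

SELECT
  TABLE_NAME,
  NUM_ROWS,
  LAST_ANALYZED
FROM
  USER_TABLES
WHERE
  TABLE_NAME IN ('T1','T2','T3','T4')
ORDER BY
  TABLE_NAME;

Tables T2 and T4 should show NULL values in the NUM_ROWS and LAST_ANALYZED columns until statistics are gathered for those tables.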

If you are running Oracle 10.1.0.1 or higher, you can use the DBMS_XPLAN.DISPLAY_CURSOR function to display execution plans.  This function is called automatically, starting in Oracle 10.1.0.1, when AUTOTRACE is enabled in SQL*Plus.  DBMS_XPLAN.DISPLAY_CURSOR is the preferred method for displaying execution plans, so we will start with that method.

First, we will generate an execution plan for a SQL statement with the OPTIMIZER_MODE set to ALL_ROWS:

SET LINESIZE 150
ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

 C1 C2
--- ----------
  1 1AAAAAAAAA
  2 2AAAAAAAAA
  3 3AAAAAAAAA
  4 4AAAAAAAAA
  5 5AAAAAAAAA
  6 6AAAAAAAAA
  7 7AAAAAAAAA
  8 8AAAAAAAAA
  9 9AAAAAAAAA
 10 10AAAAAAAA

The typical way to display an execution plan for a SQL statement that was just executed is as follows (note that the third parameter, the FORMAT parameter, could have also been set to NULL to produce the same result):

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

SQL_ID  9dq71tc7vasgu, child number 0
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND
T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |  1893 (100)|          |
|*  1 |  HASH JOIN                   |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  2 |   TABLE ACCESS FULL          | T3           |    10 |    50 |  1888   (4)| 00:00:09 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |    10 |  2600 |     4   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |    10 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

In the above, note that the calculated cost of the execution plan is 1893.  Now, a little more experimentation, setting OPTIMIZER_MODE to FIRST_ROWS_1 (Oracle optimizes the plan assuming that only the first row will be retrieved, and the rest will likely be discarded):

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

SQL_ID  9dq71tc7vasgu, child number 1
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND
T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   575 (100)|          |
|   1 |  NESTED LOOPS                |              |     2 |   530 |   575   (4)| 00:00:03 |
|*  2 |   TABLE ACCESS FULL          | T3           |     7 |    35 |   568   (4)| 00:00:03 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     1 |   260 |     2   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

In the above, notice that changing the optimizer mode from ALL_ROWS to FIRST_ROWS_1 caused a new execution plan to be created (the child number increased by 1 and the Plan hash value changed) that now uses a nested loops join, rather than a hash join, and the calculated cost decreased.  You might be wondering why Oracle did not pick this execution plan with the nested loops join, rather than the hash join, when the OPTIMIZER_MODE was set to ALL_ROWS, since this plan has a lower calculated cost – we will leave that for another blog article (unless, of course, someone knows the answer and wants to share).  Oddly, the estimated number of rows to be returned from table T3 decreased when compared to the execution plan with the hash join, and the estimated execution time also decreased.  But, we really do not know anything about actual performance from just looking at the above plans.  So, let’s repeat the test, changing the SQL statement so that we are able to pass ‘ALLSTATS LAST’ as the format parameter in the DBMS_XPLAN call:

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  ddnbt67ftu9ds, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                   |              |      1 |      9 |     10 |00:00:04.92 |   37144 |  37125 |  1452K|  1452K| 1328K (0)|
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:00.01 |   37138 |  37125 |       |       |          |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

The query completed in 4.92 seconds, retrieved 10 rows, and used 1,328 * 1,024 bytes of memory during the in-memory hash join.  Wow, the full table scan of table T3 completed in 0.01 seconds while reading 37,125 blocks from disk.  If each multiblock disk read were 128 blocks, that would be 290.04 multiblock reads in 0.01 seconds, for an average of  0.000034 seconds per 1MB multiblock read – who needs SSD when you can just read an execution plan incorrectly and make assumptions.  :-)  (Keep reading)

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  ddnbt67ftu9ds, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |      2 |     10 |00:00:00.02 |   37171 |  37125 |
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |      7 |     10 |00:00:00.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10 |00:00:00.01 |      32 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10 |00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

As shown above, with the FIRST_ROWS_1 optimizer mode, the query completed in 0.02 seconds.  But, something is wrong.  The execution plan shows that there were 37,125 blocks read from disk, just like the previous execution plan, which would mean that each of the 1MB physical reads required 0.000068 seconds if we only look at the A-Time column of ID 1.  Who needs SSD when you can just read an execution plan incorrectly and make assumptions – oh, wait, I just said that.

The GATHER_PLAN_STATISTICS hint is helpful because it allows DBMS_XPLAN to output the actual execution statistics for the previous SQL statement.  If you do not want to use that hint, it is also possible to change the STATISTICS_LEVEL parameter to ALL at the session level, as the following demonstrates:

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';
ALTER SESSION SET STATISTICS_LEVEL='ALL';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  9dq71tc7vasgu, child number 2
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                   |              |      1 |      9 |     10 |00:00:05.59 |   37144 |  37125 |  1452K|  1452K| 1324K (0)|
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:05.59 |   37138 |  37125 |       |       |          |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

Note that the change to the STATISTICS_LEVEL caused another hard parse for the SQL statement, and the child number is now listed as 2.  Unlike the case where the GATHER_PLAN_STATISTICS hint was used, this time we see the actual time for the full table scan, rather than 0.01 seconds.  If you look closely, you will also notice that the execution time increased from 4.92 seconds to 5.59 seconds as a result of changing the STATISTICS_LEVEL parameter.

Let’s try again with the FIRST_ROWS_1 value specified for OPTIMIZER_MODE.

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  9dq71tc7vasgu, child number 3
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |      2 |     10 |00:00:04.79 |   37171 |  37125 |
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |      7 |     10 |00:00:04.79 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10 |00:00:00.01 |      32 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10 |00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

Note again that there was a hard parse, and the child number increased by 1.  This time, rather than completing in 0.02 seconds, the query required 4.79 seconds, with most of that time attributed to the full table scan.  It is odd that the optimizer predicted that only 2 rows would be returned, rather than 9 or 10 rows.

Let’s try again, this time using bind variables rather than constants (literals):

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';
ALTER SESSION SET STATISTICS_LEVEL='TYPICAL';

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=10

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3807353021

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:05.60 |   37144 |  37125 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |      9 |     10|00:00:05.60 |   37144 |  37125 |  1452K|  1452K| 1328K (0)|
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |     10|00:00:00.69 |   37138 |  37125 |       |       |          |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10|00:00:00.01 |       6 |      0 |       |       |          |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10|00:00:00.01 |       4 |      0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T1"."C1"="T3"."C1")
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)

This is the same execution plan that we saw earlier, with the same cardinality estimates due to bind variable peeking, except that there is now a FILTER operation at Id 1 with a predicate stating that the N1 bind variable must be less than or equal to the N2 bind variable.  This automatically generated predicate is caused by the BETWEEN syntax in the SQL statement.

Now we change the value of the second bind variable to a much larger value:

EXEC :N2:=500000

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3807353021

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.48 |     105K|  61283 |   4995 |       |       |          |        |
|*  2 |   HASH JOIN                   |              |      1 |      9 |    500K|00:00:17.98 |     105K|  61283 |   4995 |    15M|  3722K|   18M (1)|  43008 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |    500K|00:00:01.51 |   37138 |  37125 |      0 |       |       |          |        |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |    500K|00:00:11.50 |   68480 |  19163 |      0 |       |       |          |        |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |    500K|00:00:01.00 |   25893 |    648 |      0 |       |       |          |        |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T1"."C1"="T3"."C1")
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)
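As a side note on units: the Used-Tmp column reported by DBMS_XPLAN is expressed in units of 1024 bytes, so the 43008 figure in the plan above works out to roughly 42MB of TEMP tablespace usage.  A quick sanity check of the arithmetic:

```python
# Used-Tmp in DBMS_XPLAN output is expressed in units of 1024 bytes (KB).
used_tmp = 43008                       # value from the Used-Tmp column
spill_bytes = used_tmp * 1024          # bytes written to the TEMP tablespace
spill_mb = spill_bytes / (1024 * 1024)
print(spill_bytes, spill_mb)           # 44040192 42.0
```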

In the above, notice that there was no hard parse (same SQL_ID and child number as we saw earlier), and the E-Rows column is the same for the two DBMS_XPLAN outputs.  The Used-Tmp column indicates that the hash join spilled to disk during the previous execution, using 43008 * 1024 bytes of space in the TEMP tablespace.  Let’s repeat the test with the altered OPTIMIZER_MODE:

EXEC :N2:=10
ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3268213130

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   37171 |  37125 |
|   2 |   NESTED LOOPS                |              |      1 |      2 |     10|00:00:00.01 |   37171 |  37125 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |     10|00:00:00.01 |   37139 |  37125 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10|00:00:00.01 |      32 |      0 |
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10|00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1))

EXEC :N2:=500000

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3268213130

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.03 |    1603K|  56267 |
|   2 |   NESTED LOOPS                |              |      1 |      2 |    500K|00:00:17.53 |    1603K|  56267 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |    500K|00:00:04.51 |   70472 |  37125 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |    500K|      1 |    500K|00:00:13.43 |    1533K|  19142 |
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |    500K|      1 |    500K|00:00:04.21 |    1033K|    637 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1))

The SQL statement required 18.03 seconds to execute with OPTIMIZER_MODE set to FIRST_ROWS_1 and 18.48 seconds with OPTIMIZER_MODE set to ALL_ROWS (and 0.01 seconds compared with 5.60 seconds for the executions retrieving 10 rows).  Obviously, this means that we should be running with OPTIMIZER_MODE set to FIRST_ROWS_1 if we want to optimize performance, right?  Well, in short, no.  Maybe this is something that will be investigated in a later blog article.

Now we turn to the unanalyzed tables, modifying the original SQL statement with bind variables so that it references the two tables that lack up-to-date statistics on the tables and their indexes:

EXEC :N2:=10
ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE   T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 3374068464

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   74124 |  36361 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |     10 |     10|00:00:00.01 |   74124 |  36361 |   711K|   711K| 1087K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID| T2           |      1 |     10 |     10|00:00:00.01 |       5 |      1 |       |       |          |
|*  4 |     INDEX RANGE SCAN          | SYS_C0020555 |      1 |     10 |     10|00:00:00.01 |       3 |      0 |       |       |          |
|*  5 |    TABLE ACCESS FULL          | T4           |      1 |    408 |     10|00:00:00.01 |   74119 |  36360 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T2"."C1"="T4"."C1")
   4 - access("T2"."C1">=:N1 AND "T2"."C1"<=:N2)
   5 - filter(("T4"."C1">=:N1 AND "T4"."C1"<=:N2))

Note
-----
   - dynamic sampling used for this statement

A few things changed here: Oracle is now estimating that 10 rows will be returned rather than the 9 we saw earlier, a Note below the Predicate Information section states that dynamic sampling was used, and the order of the row sources directly below the HASH JOIN line in the plan has changed (is this a problem?).  Maybe dynamic sampling will be a topic for another blog article, but the topic is discussed in various books and articles on the Internet.

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE   T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 3374068464

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   37143 |  37120 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |     10 |     10|00:00:00.01 |   37143 |  37120 |   711K|   711K| 1087K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID| T2           |      1 |     10 |     10|00:00:00.01 |       4 |      0 |       |       |          |
|*  4 |     INDEX RANGE SCAN          | SYS_C0020555 |      1 |     10 |     10|00:00:00.01 |       3 |      0 |       |       |          |
|*  5 |    TABLE ACCESS FULL          | T4           |      1 |   2307 |     10|00:00:00.01 |   37139 |  37120 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T2"."C1"="T4"."C1")
   4 - access("T2"."C1">=:N1 AND "T2"."C1"<=:N2)
   5 - filter(("T4"."C1">=:N1 AND "T4"."C1"<=:N2))

Note
-----
   - dynamic sampling used for this statement

OK, this plan changed a bit from when the SQL statement referenced tables T1 and T3.  The execution plan no longer uses a nested loops join; in fact, it is the same plan that was used when the OPTIMIZER_MODE was set to ALL_ROWS.

Oracle 9i and earlier Oracle releases had a default OPTIMIZER_MODE of CHOOSE, so let’s see what happens when we use that optimizer mode with the same two tables:

ALTER SESSION SET OPTIMIZER_MODE='CHOOSE';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 2
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 1544755769

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |     10 |00:00:00.02 |    2039K|  39001 |
|   2 |   TABLE ACCESS FULL          | T4           |      1 |   1000K|00:00:02.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2           |   1000K|     10 |00:00:06.80 |    2001K|   1876 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020555 |   1000K|     10 |00:00:04.78 |    2001K|   1876 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."C1"="T4"."C1")
       filter(("T2"."C1"<=:N2 AND "T2"."C1">=:N1))

Note
-----
   - rule based optimizer used (consider using cbo)

Note that the Note section indicates that the rule-based optimizer was used, even though the documentation for Oracle 10.2 states that as of Oracle 10.1 the rule-based optimizer is no longer supported.  Also note that the execution plan now uses a nested loops join, the FILTER operation no longer appears, and the full table scan is listed first below the NESTED LOOPS operation, just as it was when the OPTIMIZER_MODE was set to FIRST_ROWS_1 and the query accessed tables T1 and T3.

Let’s say that we are bored and did not read chapter 15 by Pete Finnigan in the “Expert Oracle Practices” book; assume that column C1 contains a credit card number.  Now, as an experiment, we will retrieve all child cursors for SQL_ID cvq22z77c8fww along with the bind variable values that were submitted on the initial hard parse.  Be careful about who has access to this feature in a production environment, as it could expose sensitive information:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('cvq22z77c8fww',NULL,'ALLSTATS LAST +PEEKED_BINDS'));

PLAN_TABLE_OUTPUT                                                                                                                                    
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  cvq22z77c8fww, child number 0                                                                                                                
-------------------------------------                                                                                                                
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2            

Plan hash value: 3807353021                                                                                                                          

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | Used-Tmp|
------------------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10 |00:00:04.28 |   37144 |  37125 |       |       |          |         |
|*  2 |   HASH JOIN                   |              |      1 |      9 |     10 |00:00:04.28 |   37144 |  37125 |  1452K|  1452K| 2315K (0)|         |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:00.01 |   37138 |  37125 |       |       |          |         |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |         |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |         |
------------------------------------------------------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):                                                                                                               
--------------------------------------                                                                                                               
   1 - (NUMBER): 1                                                                                                                                   
   2 - (NUMBER): 10                                                                                                                                  

Predicate Information (identified by operation id):                                                                                                  
---------------------------------------------------                                                                                                  
   1 - filter(:N1<=:N2)                                                                                                                              
   2 - access("T1"."C1"="T3"."C1")                                                                                                                   
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))                                                                                                   
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)                                                                                                     

-

SQL_ID  cvq22z77c8fww, child number 1                                                                                                                
-------------------------------------                                                                                                                
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE                                                          
T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2                                                                                                          

Plan hash value: 3268213130

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |                                    
-----------------------------------------------------------------------------------------------------------------                                    
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.03 |    1603K|  56267 |                                    
|   2 |   NESTED LOOPS                |              |      1 |      2 |    500K|00:00:17.53 |    1603K|  56267 |                                    
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |    500K|00:00:04.51 |   70472 |  37125 |                                    
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |    500K|      1 |    500K|00:00:13.43 |    1533K|  19142 |                                    
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |    500K|      1 |    500K|00:00:04.21 |    1033K|    637 |                                    
-----------------------------------------------------------------------------------------------------------------                                    

Peeked Binds (identified by position):                                                                                                               
--------------------------------------                                                                                                               
   1 - (NUMBER): 1                                                                                                                                   
   2 - (NUMBER): 10                                                                                                                                  

Predicate Information (identified by operation id):                                                                                                  
---------------------------------------------------                                                                                                  
   1 - filter(:N1<=:N2)                                                                                                                              
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))                                                                                                   
   5 - access("T1"."C1"="T3"."C1")                                                                                                                   
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1)) 

-

SQL_ID  94bv1jwkzcc38, child number 2
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 1544755769

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |     10 |00:00:00.02 |    2039K|  39001 |
|   2 |   TABLE ACCESS FULL          | T4           |      1 |   1000K|00:00:02.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2           |   1000K|     10 |00:00:06.80 |    2001K|   1876 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020555 |   1000K|     10 |00:00:04.78 |    2001K|   1876 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."C1"="T4"."C1")
       filter(("T2"."C1"<=:N2 AND "T2"."C1">=:N1))

Note
-----
   - rule based optimizer used (consider using cbo)

To learn more about DBMS_XPLAN.DISPLAY_CURSOR, see the documentation.

So, what if we want to know why the child cursors were created?  We could do something like this:

DESC V$SQL_SHARED_CURSOR

SET LINESIZE 200
SET HEADING ON
BREAK ON SQL_ID SKIP 1 

SELECT
  *
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='94bv1jwkzcc38';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER USOOSLSEBPISTABDLTRIIRLIOSMUTNFAITDLDBPCSRPTMBMROPMFL
------------- ---------------- ---------------- ------------ -----------------------------------------------------
94bv1jwkzcc38 000007FF9829D308 000007FFA68C1038            0 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
              000007FF9829D308 000007FF94E0D950            1 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYNNNN
              000007FF9829D308 000007FF98046318            2 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYNNNN
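The wide column heading in the output above is simply the concatenation of the first letter of every mismatch column in V$SQL_SHARED_CURSOR, which makes the long Y/N string hard to read.  A small helper can pair each flag with its column name; this is just a sketch, and the column names shown are an illustrative subset (the actual column list varies by Oracle release, so obtain it from DESC V$SQL_SHARED_CURSOR):

```python
# Sketch: pair each V$SQL_SHARED_CURSOR mismatch column (in DESC order)
# with its Y/N flag so the reason for a new child cursor is readable.
def decode_mismatch(columns, flags):
    """Return the names of the columns whose flag is 'Y'."""
    return [name for name, flag in zip(columns, flags) if flag == 'Y']

# Illustrative subset of the column list; the real list is release-specific.
columns = ['UNBOUND_CURSOR', 'SQL_TYPE_MISMATCH', 'OPTIMIZER_MODE_MISMATCH']
print(decode_mismatch(columns, 'NNY'))  # ['OPTIMIZER_MODE_MISMATCH']
```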

The above shows that the child cursors were created because of an optimizer mode mismatch (yes, we changed the OPTIMIZER_MODE).  We could also check the bind variable definitions (not strictly needed in this case, because the undocumented PEEKED_BINDS format parameter of DBMS_XPLAN already displayed most of this information):

SELECT
  S.CHILD_NUMBER CN,
  SBM.*
FROM
  V$SQL_BIND_METADATA SBM,
  V$SQL S
WHERE
  S.SQL_ID='94bv1jwkzcc38'
  AND S.CHILD_ADDRESS=SBM.ADDRESS
ORDER BY
  S.CHILD_NUMBER,
  SBM.POSITION;

 CN ADDRESS            POSITION   DATATYPE MAX_LENGTH  ARRAY_LEN BIND_NAME
--- ---------------- ---------- ---------- ---------- ---------- ---------
  0 000007FFA68C1038          1          2         22          0 N1
  0 000007FFA68C1038          2          2         22          0 N2
  1 000007FF94E0D950          1          2         22          0 N1
  1 000007FF94E0D950          2          2         22          0 N2
  2 000007FF98046318          1          2         22          0 N1
  2 000007FF98046318          2          2         22          0 N2
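The DATATYPE column in V$SQL_BIND_METADATA reports Oracle’s internal datatype codes; code 2 corresponds to NUMBER, matching the declarations of the N1 and N2 bind variables.  A quick reference for a few of the most common codes (an assumed partial list; consult the Oracle documentation for the complete set):

```python
# Partial map of Oracle internal datatype codes, as reported in the
# DATATYPE column of V$SQL_BIND_METADATA (subset for illustration).
DATATYPE_CODES = {1: 'VARCHAR2', 2: 'NUMBER', 12: 'DATE', 96: 'CHAR'}
print(DATATYPE_CODES[2])  # NUMBER - matches the N1 and N2 bind definitions
```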

OK, now that we have wandered off on a tangent, let’s return to the topic of viewing execution plans.  The above examples show the actual execution plan that was used, which may be different from the plan produced by EXPLAIN PLAN.  So, for fun, let’s look at the EXPLAIN PLAN FOR syntax (DBMS_XPLAN.DISPLAY is valid on Oracle 9.2.0.1 and higher):

ALTER SYSTEM FLUSH SHARED_POOL;

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

EXPLAIN PLAN FOR
SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

Explained.

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY());

Plan hash value: 1519387310

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  1 |  HASH JOIN                   |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  2 |   TABLE ACCESS FULL          | T3           |    10 |    50 |  1888   (4)| 00:00:09 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |    10 |  2600 |     4   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |    10 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter("T3"."C1"<=10 AND "T3"."C1">=1)
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

The above plan appears to be identical to the first of the actual execution plans that we retrieved.  Now a test with bind variables:

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=10

EXPLAIN PLAN FOR
SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

Explained.

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY());

Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

Notice in the above that TO_NUMBER entries surround each of the bind variables in the Predicate Information section, even though those bind variables were declared as a NUMBER data type.  EXPLAIN PLAN does not peek at bind variable values and treats every bind variable as a VARCHAR2, which forces the implicit TO_NUMBER conversions and helps explain why the cardinality estimates differ from those of the actual executions.  The cost has also increased a bit.

Let’s use AUTOTRACE to see the execution plan (starting with Oracle 10.1.0.1, AUTOTRACE uses DBMS_XPLAN to output the formatted execution plan).

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

Execution Plan
----------------------------------------------------------
Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

OK, the same output as the previous example on Oracle 10.1.0.1 and above, complete with the potentially incorrect Predicate Information section (AUTOTRACE generates this output using EXPLAIN PLAN, so the same bind data type assumptions apply).  Again, displaying the runtime statistics along with the explain plan:

SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

10 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      37144  consistent gets
      37125  physical reads
          0  redo size
        579  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed

Note that the above plan is not necessarily the actual execution plan, even though actual runtime statistics appear below it.  This combination can be confusing: an EXPLAIN PLAN style guess at the execution plan is displayed alongside the actual execution statistics.
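To retrieve the actual execution plan rather than a guess, DBMS_XPLAN.DISPLAY_CURSOR (introduced in Oracle 10.1) may be used instead of AUTOTRACE.  A minimal sketch, run immediately after executing the query in the same session (adding the GATHER_PLAN_STATISTICS hint to the query, or setting STATISTICS_LEVEL to ALL, also captures rowsource-level runtime statistics):

```sql
-- Retrieve the last executed cursor's actual plan from the library
-- cache; the ALLSTATS LAST format adds actual row counts when
-- rowsource statistics were captured:
SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```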

For more information about Using EXPLAIN PLAN, see the documentation.

Now, another test.  This time we will instruct Oracle to write the actual execution plan to a trace file every time Oracle must perform a hard parse.  We will force a hard parse by adding a comment to the SQL statement:

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10132_HARD_PARSE';
ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT OFF';

If we take a look at the output in the trace file, we might see something like this:

sql_id=3g3fc5qyju0j3.
Current SQL statement for this session:
SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2

============
Plan Table
============
-----------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name        | Rows  | Bytes | Cost  | Time      |
-----------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT               |             |       |       |  1893 |           |
| 1   |  FILTER                        |             |       |       |       |           |
| 2   |   HASH JOIN                    |             |     9 |  2385 |  1893 |  00:00:09 |
| 3   |    TABLE ACCESS FULL           | T3          |    10 |    50 |  1888 |  00:00:09 |
| 4   |    TABLE ACCESS BY INDEX ROWID | T1          |    10 |  2600 |     4 |  00:00:01 |
| 5   |     INDEX RANGE SCAN           | SYS_C0020554|    10 |       |     3 |  00:00:01 |
-----------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter(:N1<=:N2)
2 - access("T1"."C1"="T3"."C1")
3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)

Content of other_xml column
===========================
  db_version     : 10.2.0.2
  parse_schema   : TESTUSER
  plan_hash      : 3807353021
Peeked Binds
============
  Bind variable information
    position=1
    datatype(code)=2
    datatype(string)=NUMBER
    precision=0
    scale=0
    max length=22
    value=1
  Bind variable information
    position=2
    datatype(code)=2
    datatype(string)=NUMBER
    precision=0
    scale=0
    max length=22
    value=10
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T3"@"SEL$1")
      INDEX(@"SEL$1" "T1"@"SEL$1" ("T1"."C1"))
      LEADING(@"SEL$1" "T3"@"SEL$1" "T1"@"SEL$1")
      USE_HASH(@"SEL$1" "T1"@"SEL$1")
    END_OUTLINE_DATA
  */

Optimizer environment:
  optimizer_mode_hinted               = false
  optimizer_features_hinted           = 0.0.0
  parallel_execution_enabled          = false
  parallel_query_forced_dop           = 0
  parallel_dml_forced_dop             = 0
  parallel_ddl_forced_degree          = 0
  parallel_ddl_forced_instances       = 0
  _query_rewrite_fudge                = 90
  optimizer_features_enable           = 10.2.0.1
  _optimizer_search_limit             = 5
  cpu_count                           = 4
  active_instance_count               = 1
  parallel_threads_per_cpu            = 2
  hash_area_size                      = 131072
  bitmap_merge_area_size              = 1048576
  sort_area_size                      = 65536
  sort_area_retained_size             = 0
  _sort_elimination_cost_ratio        = 0
  _optimizer_block_size               = 8192
  _sort_multiblock_read_count         = 2
  _hash_multiblock_io_count           = 0
  _db_file_optimizer_read_count       = 128
  _optimizer_max_permutations         = 2000
  pga_aggregate_target                = 204800 KB
  _pga_max_size                       = 204800 KB
  _query_rewrite_maxdisjunct          = 257
  _smm_auto_min_io_size               = 56 KB
  _smm_auto_max_io_size               = 248 KB
  _smm_min_size                       = 204 KB
  _smm_max_size                       = 40960 KB
  _smm_px_max_size                    = 102400 KB
  _cpu_to_io                          = 0
  _optimizer_undo_cost_change         = 10.2.0.1
  parallel_query_mode                 = enabled
  parallel_dml_mode                   = disabled
  parallel_ddl_mode                   = enabled
  optimizer_mode                      = all_rows
  sqlstat_enabled                     = false
  _optimizer_percent_parallel         = 101
  _always_anti_join                   = choose
  _always_semi_join                   = choose
  _optimizer_mode_force               = true
  _partition_view_enabled             = true
  _always_star_transformation         = false
  _query_rewrite_or_error             = false
  _hash_join_enabled                  = true
  cursor_sharing                      = exact
  _b_tree_bitmap_plans                = true
  star_transformation_enabled         = false
  _optimizer_cost_model               = choose
  _new_sort_cost_estimate             = true
  _complex_view_merging               = true
  _unnest_subquery                    = true
  _eliminate_common_subexpr           = true
  _pred_move_around                   = true
  _convert_set_to_join                = false
  _push_join_predicate                = true
  _push_join_union_view               = true
  _fast_full_scan_enabled             = true
  _optim_enhance_nnull_detection      = true
  _parallel_broadcast_enabled         = true
  _px_broadcast_fudge_factor          = 100
  _ordered_nested_loop                = true
  _no_or_expansion                    = false
  optimizer_index_cost_adj            = 100
  optimizer_index_caching             = 0
  _system_index_caching               = 0
  _disable_datalayer_sampling         = false
  query_rewrite_enabled               = true
  query_rewrite_integrity             = enforced
  _query_cost_rewrite                 = true
  _query_rewrite_2                    = true
  _query_rewrite_1                    = true
  _query_rewrite_expression           = true
  _query_rewrite_jgmigrate            = true
  _query_rewrite_fpc                  = true
  _query_rewrite_drj                  = true
  _full_pwise_join_enabled            = true
  _partial_pwise_join_enabled         = true
  _left_nested_loops_random           = true
  _improved_row_length_enabled        = true
  _index_join_enabled                 = true
  _enable_type_dep_selectivity        = true
  _improved_outerjoin_card            = true
  _optimizer_adjust_for_nulls         = true
  _optimizer_degree                   = 0
  _use_column_stats_for_function      = true
  _subquery_pruning_enabled           = true
  _subquery_pruning_mv_enabled        = false
  _or_expand_nvl_predicate            = true
  _like_with_bind_as_equality         = false
  _table_scan_cost_plus_one           = true
  _cost_equality_semi_join            = true
  _default_non_equality_sel_check     = true
  _new_initial_join_orders            = true
  _oneside_colstat_for_equijoins      = true
  _optim_peek_user_binds              = true
  _minimal_stats_aggregation          = true
  _force_temptables_for_gsets         = false
  workarea_size_policy                = auto
  _smm_auto_cost_enabled              = true
  _gs_anti_semi_join_allowed          = true
  _optim_new_default_join_sel         = true
  optimizer_dynamic_sampling          = 2
  _pre_rewrite_push_pred              = true
  _optimizer_new_join_card_computation = true
  _union_rewrite_for_gs               = yes_gset_mvs
  _generalized_pruning_enabled        = true
  _optim_adjust_for_part_skews        = true
  _force_datefold_trunc               = false
  statistics_level                    = typical
  _optimizer_system_stats_usage       = true
  skip_unusable_indexes               = true
  _remove_aggr_subquery               = true
  _optimizer_push_down_distinct       = 0
  _dml_monitoring_enabled             = true
  _optimizer_undo_changes             = false
  _predicate_elimination_enabled      = true
  _nested_loop_fudge                  = 100
  _project_view_columns               = true
  _local_communication_costing_enabled = true
  _local_communication_ratio          = 50
  _query_rewrite_vop_cleanup          = true
  _slave_mapping_enabled              = true
  _optimizer_cost_based_transformation = linear
  _optimizer_mjc_enabled              = true
  _right_outer_hash_enable            = true
  _spr_push_pred_refspr               = true
  _optimizer_cache_stats              = false
  _optimizer_cbqt_factor              = 50
  _optimizer_squ_bottomup             = true
  _fic_area_size                      = 131072
  _optimizer_skip_scan_enabled        = true
  _optimizer_cost_filter_pred         = false
  _optimizer_sortmerge_join_enabled   = true
  _optimizer_join_sel_sanity_check    = true
  _mmv_query_rewrite_enabled          = true
  _bt_mmv_query_rewrite_enabled       = true
  _add_stale_mv_to_dependency_list    = true
  _distinct_view_unnesting            = false
  _optimizer_dim_subq_join_sel        = true
  _optimizer_disable_strans_sanity_checks = 0
  _optimizer_compute_index_stats      = true
  _push_join_union_view2              = true
  _optimizer_ignore_hints             = false
  _optimizer_random_plan              = 0
  _query_rewrite_setopgrw_enable      = true
  _optimizer_correct_sq_selectivity   = true
  _disable_function_based_index       = false
  _optimizer_join_order_control       = 3
  _optimizer_cartesian_enabled        = true
  _optimizer_starplan_enabled         = true
  _extended_pruning_enabled           = true
  _optimizer_push_pred_cost_based     = true
  _sql_model_unfold_forloops          = run_time
  _enable_dml_lock_escalation         = false
  _bloom_filter_enabled               = true
  _update_bji_ipdml_enabled           = 0
  _optimizer_extended_cursor_sharing  = udo
  _dm_max_shared_pool_pct             = 1
  _optimizer_cost_hjsmj_multimatch    = true
  _optimizer_transitivity_retain      = true
  _px_pwg_enabled                     = true
  optimizer_secure_view_merging       = true
  _optimizer_join_elimination_enabled = true
  flashback_table_rpi                 = non_fbt
  _optimizer_cbqt_no_size_restriction = true
  _optimizer_enhanced_filter_push     = true
  _optimizer_filter_pred_pullup       = true
  _rowsrc_trace_level                 = 0
  _simple_view_merging                = true
  _optimizer_rownum_pred_based_fkr    = true
  _optimizer_better_inlist_costing    = all
  _optimizer_self_induced_cache_cost  = false
  _optimizer_min_cache_blocks         = 10
  _optimizer_or_expansion             = depth
  _optimizer_order_by_elimination_enabled = true
  _optimizer_outer_to_anti_enabled    = true
  _selfjoin_mv_duplicates             = true
  _dimension_skip_null                = true
  _force_rewrite_enable               = false
  _optimizer_star_tran_in_with_clause = true
  _optimizer_complex_pred_selectivity = true
  _optimizer_connect_by_cost_based    = false
  _gby_hash_aggregation_enabled       = true
  _globalindex_pnum_filter_enabled    = false
  _fix_control_key                    = 0
  _optimizer_skip_scan_guess          = false
  _enable_row_shipping                = false
  *********************************
  Bug Fix Control Environment
  ***************************
  fix  4611850 = disabled
  fix  4663804 = disabled
  fix  4663698 = disabled
  fix  4545833 = disabled
  fix  3499674 = disabled
  fix  4584065 = disabled
  fix  4602374 = disabled
  fix  4569940 = enabled
  fix  4631959 = disabled
  fix  4519340 = disabled
  fix  4550003 = enabled
  fix  4488689 = disabled
  fix  3118776 = enabled
  fix  4519016 = enabled
  fix  4487253 = enabled
  fix  4556762 = 0      
  fix  4728348 = disabled
  fix  4723244 = disabled
  fix  4554846 = disabled
  fix  4175830 = enabled
  fix  5240607 = disabled
  fix  4722900 = enabled
Query Block Registry:
*********************
SEL$1 0x122d9358 (PARSER) [FINAL]
Optimizer State Dump: call(in-use=53568, alloc=81816), compile(in-use=76816, alloc=126256)

Note in the above that we are able to see the actual execution plan, the peeked bind variables, the set of hints that will reproduce the execution plan, and a large number of normal and hidden optimizer parameters.

We could generate a 10053 trace at level 1 to determine why the above plan was selected, but we will skip that for now.
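Should the 10053 trace be needed, enabling it looks much like the 10132 example above.  A sketch (a hard parse must occur for the optimizer trace to be written, so the comment is changed to force one; the trace file appears in the same location as the other trace files):

```sql
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10053_HARD_PARSE';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

-- The changed comment forces a hard parse, triggering the trace:
SELECT /* TEST_10053 */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
```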

An example with a 10046 trace at level 4 (we must execute another SQL statement after the SQL statement under investigation so that the STAT lines for our SQL statement are written to the trace file):

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046_EXECUTION_PLAN';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 4';

SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT SYSDATE FROM DUAL;

The resulting trace file looks something like this (you did read Chapter 15 of the book, didn’t you – so be sure to delete the trace files when no longer needed):

=====================
PARSING IN CURSOR #34 len=118 dep=0 uid=31 oct=3 lid=31 tim=2087403486 hv=3172794915 ad='9811b110'
SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #34:c=0,e=162,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2087403479
BINDS #34:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13beb440  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13beb458  bln=22  avl=02  flg=01
  value=10
EXEC #34:c=0,e=170,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2087403808
FETCH #34:c=1078125,e=3875649,p=37124,cr=37142,cu=0,mis=0,r=1,dep=0,og=1,tim=2091279513
FETCH #34:c=0,e=180,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=2091280273
STAT #34 id=1 cnt=10 pid=0 pos=1 obj=0 op='FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)'
STAT #34 id=2 cnt=10 pid=1 pos=1 obj=0 op='HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)'
STAT #34 id=3 cnt=10 pid=2 pos=1 obj=114232 op='TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)'
STAT #34 id=4 cnt=10 pid=2 pos=2 obj=114228 op='TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)'
STAT #34 id=5 cnt=10 pid=4 pos=1 obj=114229 op='INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)'
=====================

We could, of course, read the actual plan directly from the 10046 trace file, along with the bind variable values and data types.  Instead, we will use TKPROF (note that the TKPROF output will not show the bind variable values).

C:\> tkprof test_ora_4436_10046_execution_plan.trc test_ora_4436_10046_execution_plan.txt

The output might look like this:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      1.07       3.87      37124      37144          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.07       3.87      37124      37144          0          10

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 31 

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)
     10   HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)
     10    TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)
     10    TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)
     10     INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)(object id 114229)

Of course, it seems to be a little too common that people use the EXPLAIN parameter of TKPROF rather than just working with the Row Source Operation lines, which show what really happened (most of the time, unless of course you read this article):

C:\>tkprof test_ora_4436_10046_execution_plan.trc test_ora_4436_10046_execution_plan.txt EXPLAIN=TESTUSER/TESTPASS

Note that the resulting output could be very confusing if the “Row Source Operation” plan is completely different from the “Execution Plan”, since the first plan is the actual execution plan, while the second is essentially just an EXPLAIN PLAN FOR type of execution plan:

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)
     10   HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)
     10    TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)
     10    TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)
     10     INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)(object id 114229)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
     10   FILTER
     10    HASH JOIN
     10     TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T3' (TABLE)
     10     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'T1'
                (TABLE)
     10      INDEX   MODE: ANALYZED (RANGE SCAN) OF 'SYS_C0020554'
                 (INDEX (UNIQUE))

So hopefully, you now know what the plan is, how to find it, and what types of messes (faulty information) to avoid accidentally stepping in.





Explain Plan – Which Plan is Better

29 01 2010

January 29, 2010

A recent post appeared in the OTN forums that indirectly asked the question: which execution plan is better?  The execution plans follow:

The Unhinted Execution Plan:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |  1285M|   326G|       |    45M  (1)|178:06:59 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |  1285M|   326G|  5153M|    45M  (1)|178:06:59 |       |       |
|   3 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
|*  4 |    HASH JOIN           |             |  1261M|   287G|  2857M|    32M  (1)|124:52:03 |       |       |
|   5 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|*  6 |     HASH JOIN          |             |  1241M|   252G|  1727M|    20M  (1)| 78:33:50 |       |       |
|   7 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   8 |      PARTITION HASH ALL|             |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
|   9 |       TABLE ACCESS FULL| A           |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
--------------------------------------------------------------------------------------------------------------

The Hinted Execution Plan that Sets the Cardinality for Table A to 10M Rows:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |    10M|  2696M|       |  4578K  (1)| 17:48:26 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |    10M|  2696M|  2491M|  4578K  (1)| 17:48:26 |       |       |
|*  3 |    HASH JOIN           |             |    10M|  2374M|  2193M|  3996K  (1)| 15:32:36 |       |       |
|*  4 |     HASH JOIN          |             |    10M|  2079M|  1727M|  3636K  (1)| 14:08:30 |       |       |
|   5 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   6 |      PARTITION HASH ALL|             |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   7 |       TABLE ACCESS FULL| A           |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   8 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|   9 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
--------------------------------------------------------------------------------------------------------------

The Original Poster Stated Both Plans have the Same Predicates:

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(A."ID"="D"."ID")
   3 - access("A"."E_ID"="C"."E_ID")
   4 - access("A"."M_ID"="B"."M_ID")

So, which execution plan is better?  Are the plans the same?  How are they the same, and how do they differ?

While we contemplate which execution plan is optimal, the OTN thread took a slight detour into a discussion of work areas in Oracle:

“Can you please help understanding workarea.”

A search of the documentation found this page that offered the following definition:

“work area: A private allocation of memory used for sorts, hash joins, and other operations that are memory-intensive. A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.”

There may be multiple active work areas in a single SQL statement.  While not the original purpose of that blog article, it does show how to see the amount of memory in use for active work areas.  If you have a copy of the book “Troubleshooting Oracle Performance”, I highly recommend reading pages 434 through 439 if you are curious about Oracle work areas.  Those pages describe how hash joins work and provide a detailed description of work areas.
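Active work areas may be monitored while a SQL statement executes.  A minimal sketch of such a monitoring query, assuming access to the V$ performance views (the available columns vary slightly by release):

```sql
-- One row per work area currently in use; ACTUAL_MEM_USED shows the
-- RAM consumed, NUMBER_PASSES whether the operation spilled, and
-- TEMPSEG_SIZE the temp tablespace usage, if any:
SELECT
  SQL_ID,
  OPERATION_TYPE,
  OPERATION_ID,
  ACTUAL_MEM_USED,
  MAX_MEM_USED,
  NUMBER_PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE;
```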

Back to the original question.  While reading the plans, keep in mind that you are only looking at the optimizer’s estimates for the number of rows, time, memory usage, temp tablespace usage, and costs.  If you are attempting to conclude which plan is faster/better based on the estimates in the first plan and an altered plan with a hinted cardinality estimate, you might be setting yourself up for failure.  Note that the first plan has a calculated cost of about 45,000,000 while the second plan has a calculated cost of about 4,578,000.  So obviously, the second plan is more efficient.  Or is it?  With the cardinality hint, the OP has effectively changed the number of rows that the optimizer expects to be returned from table A from roughly 1,241,000,000 to 10,000,000.  Additionally, one should not directly compare the calculated cost of one execution plan with that of a second execution plan.  You probably should be thinking to yourself at this point: “Have you considered actually testing the performance?”

In the OTN thread Timur stated that both plans use the very same join order: B->A->C->D.  Based on my understanding of execution plans, this is a correct statement, even though the plans look a bit different.  Note that the OP was slightly incorrect in stating that the Predicate Information sections for the two plans were identical – the plan ID numbers should have been a bit different.

(Confession: I re-read the section of the book “Troubleshooting Oracle Performance” that discussed hash joins before providing the following response.)  Essentially, the difference between the two plans is which table (or row source) is the build input, and which table is the probe input.  The first table (or row source) listed below the words HASH JOIN is the source for the hash table (the optimizer typically tries to select the smaller estimated row source of the two as the source for the hash table).  The second table (or row source) is fully scanned, probing the generated hash table in search of a match.  By artificially altering the optimizer’s estimated number of rows to be returned from table A, the OP has flipped which table (or row source) is the build input and which is the probe input at each hash join.  This could significantly increase, significantly decrease, or have no impact on the amount of time required for the query to execute, the amount of memory used, or the amount of temp space needed.

My suggestion to the OP is to test the performance to see which execution plan is more efficient, rather than guessing. My blog article that is referenced above has SQL statements that may be used to see the number of work areas that are active at any point, as well as the amount of RAM and temp space in use. You could continue to guess about which plan is better, but why guess?





Database using ASSM Tablespace Exhibits Slow Insert Performance After an Uncommitted Delete

28 01 2010

January 28, 2010 (Updated January 19, 2011)

In August 2009 an interesting test case appeared in the comp.databases.oracle.server Usenet group, with a follow-up post in a second thread.  The test case that was posted follows:

1. Create tablespace, it uses default 8K block size

create tablespace assm
extent management local uniform size 1m
segment space management auto
datafile
'/abc/db01/oracle/ABC1P/oradata/assm_01.dbf' size 1000m;

2. Create table

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace assm;

3. Populate table with 1,000,000 rows, COMMIT at the end

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

4. Insert an additional 1000 rows into the table using ***SINGLE_ROW*** inserts.  I used the following script to generate the INSERT statements (don’t forget to execute the resulting INSERT statements)

select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
        test_assm
where
        rownum < 1001;

It took 1 second to insert 1000 rows through single-row inserts.

5. Delete all rows from the table, don’t commit

6. Re-execute script that inserts 1000 rows from a different session.  Runtime > 20 min.  There were no indexes on the table.

An insert into a table containing an uncommitted DELETE should not be significantly slower than an insert into a table without the DELETE.

My test results follow.

I executed the test on 64-bit Windows with a fairly slow disk system (little front-end caching from the disk subsystem) running Oracle 11.1.0.7 with an 8KB block size, and with the __DB_CACHE_SIZE currently floating at 0.9375GB due to a much larger DB_KEEP_CACHE_SIZE value.  What do I see?

(Edit Jan 19, 2011: Script added to create the c:\insertstatements.sql file)

set linesize 1000
set trimspool on
set pagesize 2000
spool c:\insertstatements.sql

select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
        test_assm
where
        rownum < 1001;

spool off 

(Remove the creation SQL statement and the header row from the c:\insertstatements.sql file.)
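For anyone who would rather not clean up a spool file, the insertstatements.sql file could also be generated outside the database.  This is just a sketch, not the method used in the test: the output file name mirrors the test above, and the 1..1000 key values are arbitrary (the table has no constraints, so duplicate N1 values do not matter).

```python
# Sketch: generate the insertstatements.sql file outside the database,
# so there is no spooled header row or echoed SQL to clean up afterward.
VALUE = "123456789*123456789*123456789*123456789*1234567"

def build_insert(n1):
    # One single-row INSERT matching the 11-column test_assm table.
    cols = ",".join("'%s'" % VALUE for _ in range(10))
    return "insert into test_assm values(%d,%s);" % (n1, cols)

with open("insertstatements.sql", "w") as f:
    for n1 in range(1, 1001):  # 1,000 single-row INSERT statements
        f.write(build_insert(n1) + "\n")
```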

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_ASSM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql
 
DELETE FROM TEST_ASSM;
 
@c:\insertstatements.sql

1 row created.
Elapsed: 00:00:20.92
1 row created.
Elapsed: 00:00:15.98
1 row created.
Elapsed: 00:00:13.52
1 row created.
...
Elapsed: 00:00:12.41
1 row created.
Elapsed: 00:00:11.84
1 row created.
Elapsed: 00:00:12.32
1 row created.
...

Interesting… the inserts become faster as more blocks are cached.

So, what is in the trace file?

PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
WAIT #3: nam='db file sequential read' ela= 387 file#=7 block#=1900683 blocks=1 obj#=67153 tim=220841945532
WAIT #3: nam='db file sequential read' ela= 72 file#=7 block#=1900684 blocks=1 obj#=67153 tim=220841945656
WAIT #3: nam='db file sequential read' ela= 14610 file#=7 block#=1900685 blocks=1 obj#=67153 tim=220841960301
...
WAIT #3: nam='db file sequential read' ela= 28 file#=7 block#=1972309 blocks=1 obj#=67153 tim=220862843585
WAIT #3: nam='db file sequential read' ela= 29 file#=7 block#=1972325 blocks=1 obj#=67153 tim=220862843638
WAIT #3: nam='db file sequential read' ela= 69 file#=7 block#=1972341 blocks=1 obj#=67153 tim=220862843732
WAIT #3: nam='db file sequential read' ela= 41 file#=7 block#=1972102 blocks=1 obj#=67153 tim=220862843817
EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'

Looks like a lot of single block reads, some in the range of 0.017613 seconds, others in the range of 0.000028 seconds. A summary of the first insert looks like this:

First Reference: Cursor 3   Ver 1   Parse at 0.000000
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000000|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|
|EXECs        1|CPU S    3.759624|CLOCK S   20.904025|ROWs        1| PHY RD BLKs     69802|CON RD BLKs (Mem)     69793|CUR RD BLKs (Mem)     83979|
|FETCHs       0|CPU S    0.000000|CLOCK S    0.000000|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|

  CPU S 100.00%  CLOCK S 100.00%
  *   18.032425 seconds of time related data file I/O
  *    0.001392 seconds of time related to client/network events

Wait Event Summary:
db file sequential read            18.032425  On DB Server        Min Wait:     0.000022  Avg Wait:     0.000258  Max Wait:     0.071639
SQL*Net message to client           0.000003  On Client/Network   Min Wait:     0.000003  Avg Wait:     0.000003  Max Wait:     0.000003
SQL*Net message from client         0.001389  On Client/Network   Min Wait:     0.001389  Avg Wait:     0.001389  Max Wait:     0.001389

69,802 physical block reads, 69,793 consistent gets, 83,979 current mode gets, and 18.03 seconds spent performing single block reads.  This seems to be behaving similarly to the bug that Jonathan Lewis found with ASSM 16KB block size tablespaces in 2008, when column values in existing rows were changed from NULL to a value.  In that case, the current mode gets were the tipoff that there was a problem.
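Totals like the wait event summary above can be pulled out of a raw 10046 trace file with a short script.  A minimal sketch follows (not the profiler used for the summaries in this article); it assumes the WAIT lines follow the format shown in the trace excerpt, and the sample lines are copied from the trace above:

```python
import re
from collections import defaultdict

# Sketch: total the ela= microseconds per wait event in a 10046 trace.
# Assumes WAIT lines formatted as in the trace excerpt above.
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")

def wait_summary(lines):
    totals = defaultdict(lambda: [0, 0])  # event -> [count, total microseconds]
    for line in lines:
        m = WAIT_RE.search(line)
        if m:
            totals[m.group(1)][0] += 1
            totals[m.group(1)][1] += int(m.group(2))
    # Convert microseconds to seconds for reporting
    return {event: (count, us / 1_000_000.0)
            for event, (count, us) in totals.items()}

sample = [
    "WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750",
    "WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275",
]
print(wait_summary(sample))  # {'db file sequential read': (2, 0.018071)}
```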

I repeated the test with an ASSM tablespace using 1MB uniform extents.  The first insert performed 71,250 physical block reads, 71,206 consistent gets, and 85,473 current mode gets, spending 18.85 seconds performing single block reads with an elapsed time of 21.53 seconds and, for some reason, 0 CPU seconds (the next insert reported 3.59 CPU seconds).

I also repeated the test with a locally managed tablespace with 1MB uniform extents without ASSM: “SIZE 2G REUSE AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL”.  The results of the final test were a little disappointing.  The totals from the script execution for all of the inserts:

Total for Trace File:
|PARSEs    1003|CPU S    0.234002|CLOCK S    0.312034|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|
|EXECs     1003|CPU S    0.031200|CLOCK S    0.062434|ROWs     1002| PHY RD BLKs         0|CON RD BLKs (Mem)      1051|CUR RD BLKs (Mem)      1343|
|FETCHs       2|CPU S    0.000000|CLOCK S    0.000000|ROWs        1| PHY RD BLKs         0|CON RD BLKs (Mem)         3|CUR RD BLKs (Mem)         0|

Wait Event Summary:
SQL*Net message to client           0.001472  On Client/Network   Min Wait:     0.000001  Avg Wait:     0.000001  Max Wait:     0.000076
SQL*Net message from client         0.683966  On Client/Network   Min Wait:     0.000402  Avg Wait:     0.000684  Max Wait:     0.001799

The totals for all of the inserts show 0 physical block reads, 1,051 consistent gets, 1,343 current mode gets, and 0 seconds performing single block reads, with an elapsed time of 0.374468 seconds (0.312034 of that for parsing) and 0.265202 CPU seconds (0.234002 of that for parsing).

A couple of additional tests, since some of the posters in the Usenet thread reported different behavior.

CREATE SMALLFILE TABLESPACE "LOCAL_UNIFORM1M" DATAFILE 'C:\ORACLE\ORADATA\OR11\locun1MOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON NEXT 10M
    MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL;

CREATE SMALLFILE TABLESPACE "LOCAL_ASSM" LOGGING DATAFILE 'C:\Oracle\OraData\OR11\locassmOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON NEXT 10M
    MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SELECT
  TABLESPACE_NAME,
  BLOCK_SIZE,
  ALLOCATION_TYPE,SEGMENT_SPACE_MANAGEMENT,EXTENT_MANAGEMENT
FROM DBA_TABLESPACES;

TABLESPACE_NAME BLOCK_SIZE ALLOCATIO SEGMEN EXTENT_MAN
--------------- ---------- --------- ------ ----------
LOCAL_UNIFORM1M       8192 UNIFORM   MANUAL LOCAL
LOCAL_ASSM            8192 SYSTEM    AUTO   LOCAL

We now have a new locally managed tablespace with 1MB extents not using ASSM, and another new tablespace using ASSM with autoallocated extents (my original test used an old ASSM autoallocate tablespace containing other data).

(Session 1)

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_UNIFORM1M;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

Build the insertstatements.sql file using the SELECT statement provided by the OP; it will generate statements like the following:

insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
...


@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCAL1UM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Using the LOCAL_UNIFORM1M tablespace, the insert completed in less than a second.

Reconnect session 2

Repeated the test with the KEEP pool at 1MB, which allowed the default buffer pool to grow:
(Session 1)

DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement reported an elapsed time of 0.14 to 0.15 seconds.  __DB_CACHE_SIZE floated to 7,449,083,904 bytes.

Reconnect session 2

Repeating the test again with a smaller __DB_CACHE_SIZE:
(Session 1)

DROP TABLE TEST_ASSM PURGE;
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE=6G;
create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM2';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement after the first reported an elapsed time of 0.14 to 0.15 seconds.  __DB_CACHE_SIZE floated to 1,073,741,824 bytes.

The execution time was about the same as with the larger __DB_CACHE_SIZE.  Apparently only the first insert experienced a large number of ‘db file sequential read’ waits, totaling about 28 seconds based on the timing reported in SQL*Plus.

What if we flood the KEEP and DEFAULT buffer pools with blocks from other tables?

(Session 3 connected as SYS)

SET LINESIZE 150
SET PAGESIZE 10000
SPOOL C:\TABLES.SQL

SELECT
  'SELECT * FROM '||OWNER||'.'||TABLE_NAME||' ORDER BY 1;' T
FROM
  DBA_TABLES;

SPOOL OFF

Clean up the C:\TABLES.SQL file.

SET AUTOTRACE TRACEONLY STATISTICS;

@C:\TABLES.SQL

SET AUTOTRACE OFF

(Session 1)

DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM3';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement after the first reported an elapsed time of 0.17 to 0.19 seconds.  OK, that increased the time slightly, but not as much as was seen earlier.

Maybe it has to do with the process ID – luck of the draw regarding which blocks session 2 attempts to insert into, due to the way ASSM attempts to reduce block contention?  I repeated the test again using the same old ASSM tablespace which I used earlier – the insert times for the second session were roughly 0.15 seconds each after the first insert completed.  Of course, I bounced the database since the test run the day before, so maybe that has an impact?

The first couple of EXEC and STAT lines from the first of the most recent traces with the 6GB KEEP pool in effect:

EXEC #1:c=3541222,e=26125284,p=54865,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=314231018338
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=54865 pw=0 time=0 us)'
EXEC #2:c=171601,e=187295,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231205633
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
EXEC #1:c=156001,e=155942,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231361575
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
EXEC #2:c=171602,e=156113,p=0,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=314231517688
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'

The first couple of lines from the previous day’s trace file also with the 6GB KEEP pool in effect:

EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #2:c=3978025,e=15984033,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220878812196
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #1:c=3666024,e=13540824,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220892353020
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #3:c=3744024,e=13634412,p=69802,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=220905987432
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #2:c=3650423,e=13447212,p=69803,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220919434644
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69803 pw=0 time=0 us)'
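The c= (CPU microseconds), e= (elapsed microseconds), and p= (physical block read) fields in EXEC lines like these can be totaled programmatically.  A minimal sketch follows (not the profiler used in this article); the sample lines are copied from the trace output above:

```python
import re

# Sketch: total the c= (CPU microseconds), e= (elapsed microseconds) and
# p= (physical block reads) fields from 10046 EXEC lines like those above.
EXEC_RE = re.compile(r"EXEC #\d+:c=(\d+),e=(\d+),p=(\d+)")

def exec_stats(lines):
    """Return (cpu_seconds, elapsed_seconds, physical_reads) totals."""
    cpu = ela = phy = 0
    for line in lines:
        m = EXEC_RE.match(line)
        if m:
            cpu += int(m.group(1))
            ela += int(m.group(2))
            phy += int(m.group(3))
    return cpu / 1e6, ela / 1e6, phy

sample = [
    "EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163",
    "EXEC #2:c=3978025,e=15984033,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220878812196",
]
print(exec_stats(sample))  # (7.737649, 36.888058, 139604)
```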

This is partial output from the 10046 trace files captured yesterday and today, which targeted a pre-existing ASSM tablespace with roughly the same size default buffer cache in effect (note that the database was bounced between the runs, and that may be a source of the time difference).
Yesterday:

PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=220862828163 hv=1479200138 ad='2778aa958' sqlid='6su1s3tc2pmca'
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220862828163
WAIT #2: nam='db file sequential read' ela= 4517 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220862850571
WAIT #2: nam='db file sequential read' ela= 484 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220862851087
WAIT #2: nam='db file sequential read' ela= 548 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220862851684
WAIT #2: nam='db file sequential read' ela= 33 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220862851760
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=220878812196 hv=3933466223 ad='2737d43f8' sqlid='g6dd19gp77vmg'
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220878812196
WAIT #1: nam='db file sequential read' ela= 8169 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220878847836
WAIT #1: nam='db file sequential read' ela= 470 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220878848364
WAIT #1: nam='db file sequential read' ela= 510 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220878848923
WAIT #1: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220878849003
...
PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220892353020 hv=1578030285 ad='273749f28' sqlid='a7tcr1tg0xp6d'
insert into test_assm values
  (18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220892353020
WAIT #3: nam='db file sequential read' ela= 6309 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220892365476
WAIT #3: nam='db file sequential read' ela= 507 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220892366027
WAIT #3: nam='db file sequential read' ela= 476 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220892366551
WAIT #3: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220892366630
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=220905987432 hv=2708362693 ad='2737a1a18' sqlid='7mbckzyhqwpf5'
insert into test_assm values
  (19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220905987432
WAIT #2: nam='db file sequential read' ela= 7892 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220905999307
WAIT #2: nam='db file sequential read' ela= 513 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220905999847
WAIT #2: nam='db file sequential read' ela= 518 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220906000413
WAIT #2: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220906000493
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=220919434644 hv=3773067906 ad='2778aa2c8' sqlid='4gsb2p3hf8wn2'
insert into test_assm values
  (20,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220919434644
WAIT #1: nam='db file sequential read' ela= 4513 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220919467246
WAIT #1: nam='db file sequential read' ela= 483 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220919467788
WAIT #1: nam='db file sequential read' ela= 474 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220919468320
WAIT #1: nam='db file sequential read' ela= 45 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220919468416

Today, same tablespace, roughly the same default buffer size:

PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=314564780904 hv=471712922 ad='2772d7f28' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314564780904
WAIT #2: nam='db file sequential read' ela= 12462 file#=7 block#=1901256 blocks=1 obj#=67277 tim=314564795862
WAIT #2: nam='db file sequential read' ela= 531 file#=7 block#=1901257 blocks=1 obj#=67277 tim=314564796472
WAIT #2: nam='db file sequential read' ela= 72 file#=7 block#=1901258 blocks=1 obj#=67277 tim=314564796577
WAIT #2: nam='db file sequential read' ela= 331 file#=7 block#=1901259 blocks=1 obj#=67277 tim=314564796941
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=314585990954 hv=1479200138 ad='2772d7a28' sqlid='6su1s3tc2pmca'
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314585990954
EXEC #1:c=156001,e=156030,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586146984
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
WAIT #1: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586149201
WAIT #1: nam='SQL*Net message from client' ela= 1674 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586150901
CLOSE #1:c=0,e=0,dep=0,type=0,tim=314586146984
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=314586146984 hv=3933466223 ad='2734efe38' sqlid='g6dd19gp77vmg'
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314586146984
EXEC #2:c=140401,e=124827,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586271811
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
WAIT #2: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586294069
WAIT #2: nam='SQL*Net message from client' ela= 1347 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586295442
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314586271811
...

(A bit later in the trace):

PARSING IN CURSOR #2 len=533 dep=0 uid=56 oct=2 lid=56 tim=314624211019 hv=1207345385 ad='2734b6ff8' sqlid='15mbakd3zd879'
insert into test_assm values
  (239,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314624211019
WAIT #2: nam='db file sequential read' ela= 519 file#=7 block#=1972923 blocks=1 obj#=67277 tim=314624366489
EXEC #2:c=140401,e=124781,p=1,cr=69794,cu=83960,mis=0,r=1,dep=0,og=1,plh=0,tim=314624335800
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69794 pr=1 pw=0 time=0 us)'
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314624366655
WAIT #2: nam='SQL*Net message from client' ela= 814 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314624367493
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314624367010

In yesterday’s run, Oracle kept performing single block reads on exactly the same blocks for each insert statement (additional blocks were added one at a time on later inserts).  Today this only happened for the first insert statement, with occasional single block reads after that point.

Jonathan had commented somewhere that ASSM is like freelists(16) (or maybe it was freelist groups (16)).  The blocks selected for insert are dependent on the v$process.pid for the session (I have seen a couple of good descriptions of how this works, but cannot locate those descriptions right now).  See the follow up to comment 10 here:
http://jonathanlewis.wordpress.com/2009/01/14/books/

I suspect that this might have something to do with the problem I experienced yesterday, but not today (which free blocks are available to the session).
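Oracle's actual ASSM block-selection algorithm is internal and more involved, but the "like freelists(16)" comparison can be pictured with a purely hypothetical sketch: if each server process hashed to one of 16 insertion points, a session would keep returning to "its own" blocks while different sessions usually spread across different blocks.

```python
# Hypothetical sketch only -- NOT Oracle's actual ASSM algorithm.  If each
# server process hashed to one of 16 insertion points (similar in spirit to
# FREELISTS 16), a session would keep inserting into the same set of blocks,
# while sessions with different pids would usually land on different blocks.
def insert_point(process_pid, points=16):
    return process_pid % points

# The same process always maps back to the same insertion point...
assert insert_point(23) == insert_point(23)
# ...while neighboring processes map to different points.
assert insert_point(23) != insert_point(24)
```

This is only meant to show why the blocks chosen for insert could depend on V$PROCESS.PID, as described in the comment linked above.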

In this link, there are a couple of posts which describe this, or a similar problem:
asktom.oracle.com Article 1
(April 24, 2003 by Jan van Mourik)
(December 21, 2004 by Steve)

asktom.oracle.com Article 2
(December 4, 2005 by Jonathan Lewis)

On December 24, 2008 a thread was started on Oracle’s OTN forums titled “Performance degradation of repeated delete/inserts” that seems to describe a similar problem.  The thread has since disappeared from Oracle’s OTN forums.  In that thread someone told me “YOU CANNOT PROVE ANYTHING ABOUT ORACLE PERFORMANCE. EVER. NO EQUATIONS, NO PROOFS, NO WAY, NO HOW…“.  I think that it was comments like this that eventually contributed to the thread being pulled from the OTN forums.

I believe that this blog article and the contents of the two Usenet threads demonstrate the potential value of test cases.

——————————————————

Take some time to read the two Usenet threads.  I think that you will find that Jonathan Lewis and other contributors were able to identify the root cause of the performance difference between the tests with the ASSM and non-ASSM tablespaces.





Neat Tricks

January 27, 2010 (Modified December 13, 2011)

Over the years I have seen a couple of interesting approaches to performing certain tasks with Oracle databases – the sort of thing that makes me think: wow, that’s neat.  Below are a couple of the items that I have found to be interesting, but that I probably could not make interesting enough for dedicated articles.

First, an example that retrieves the DDL needed to recreate three tables and the indexes for those tables, output to a file.  The definitions are complete with foreign keys, column constraints, and storage parameters.  Note that in some cases it might be necessary to fix line wrapping problems in the resulting text file (specifically if a line is longer than 200 characters).

SET PAGESIZE 0
SET LONG 90000
SET LINESIZE 200
COLUMN OBJECT_DEF FORMAT A200
SPOOL 'GETMETA.SQL'

SELECT
  DBMS_METADATA.GET_DDL('TABLE',TABLE_NAME,OWNER) OBJECT_DEF
FROM
  DBA_TABLES
WHERE
  TABLE_NAME IN ('T1','T2','T3')
UNION ALL
SELECT
  DBMS_METADATA.GET_DDL('INDEX',INDEX_NAME,OWNER) OBJECT_DEF
FROM
  DBA_INDEXES
WHERE
  TABLE_NAME IN ('T1','T2','T3');

SPOOL OFF

——

Next, an example that creates a database link to another database.  Note that the first example fails with some configurations – this is intentional in this example.

A Failed Attempt at a DB Link:

CREATE PUBLIC DATABASE LINK TEST_LINK CONNECT TO MY_USERNAME IDENTIFIED BY MY_PASS_HERE USING 'TEST';

Now, trying to use the connection:

SQL> SELECT COUNT(*) FROM T1@TEST_LINK;
SELECT COUNT(*) FROM T1@TEST_LINK
                                  *
ERROR at line 1:
ORA-02085: database link TEST_LINK.WORLD connects to TEST.WORLD

“.WORLD”? – I don’t recall specifying that.  I guess that we should consult the documentation.

 “If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects. If the value of GLOBAL_NAMES is FALSE, and if you have changed the global name of the database, then you can specify the global name.”

Now, creating the database link correctly (note that a COMMIT or ROLLBACK should be used at some point after performing a query on a remote database):

CREATE PUBLIC DATABASE LINK TEST CONNECT TO MY_USERNAME IDENTIFIED BY MY_PASS_HERE USING 'TEST.WORLD';

SELECT COUNT(*) FROM T1@TEST;

COMMIT;

——

Next, recovering a table from the Oracle 10g (and above) recyclebin:

FLASHBACK TABLE T1 TO BEFORE DROP;

——

Output the columns in a table’s rows as semicolon delimited values (I think that I first saw this example on asktom.oracle.com):

SELECT
  REGEXP_REPLACE(COLUMN_VALUE,' *<[^>]*>[^>]*>',';')
FROM
  TABLE(XMLSEQUENCE(CURSOR(
    SELECT
      *
    FROM
      MY_TABLE)));
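The REGEXP_REPLACE pattern works because each match swallows a run of adjacent tags up to the start of the next value, leaving the values separated by semicolons.  The same substitution can be sketched in Python (the row XML below is a simplified stand-in for one XMLSEQUENCE row):

```python
import re

# Simplified stand-in for one XMLSEQUENCE row; real output is one <ROW>
# element per cursor row.
row_xml = "<ROW><C1>1</C1><C2>abc</C2></ROW>"

# Same pattern as the SQL: optional spaces, one complete tag, then everything
# up to (and including) the next '>' -- i.e., the run of tags between values.
result = re.sub(r" *<[^>]*>[^>]*>", ";", row_xml)
assert result == ";1;abc;"
```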

——

Output the columns in a table’s rows as XML:

SELECT
  *
FROM
  TABLE(XMLSEQUENCE(CURSOR(
    SELECT
      *
    FROM
      MY_TABLE)));

——

Output the text contained in a BLOB column as a VARCHAR2:

SELECT
  UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BITS,32000,1))
FROM
  T1
WHERE
  C1='123';

——

View the internal representation of data – how is it stored in the database:

SELECT
  DUMP(DESCRIPTION)
FROM
  MY_TABLE
WHERE
  PART_ID='123';

——

Disable AWR collection on Oracle 10g if a license for the Diagnostic Pack was not purchased (there is an easy way to do this in 11g):

SQL> @DBMSNOAWR.PLB

SQL> BEGIN DBMS_AWR.DISABLE_AWR(); END;

SQL> /

The DBMSNOAWR package may be downloaded from Metalink (MOS). (Note: link updated December 13, 2011)

——

View jobs scheduled in the 10g Enterprise Manager Database Control:

SELECT
  MJ.JOB_OWNER,
  MJ.JOB_NAME,
  JS.FREQUENCY_CODE,
  TO_CHAR(JS.START_TIME,'MM/DD/YYYY HH24:MI') START_TIME,
  TO_CHAR(JS.END_TIME,'MM/DD/YYYY HH24:MI') END_TIME,
  JS.EXECUTION_HOURS,
  JS.EXECUTION_MINUTES,
  JS.INTERVAL,
  JS.MONTHS,
  JS.DAYS
FROM
  SYSMAN.MGMT_JOB MJ,
  SYSMAN.MGMT_JOB_SCHEDULE JS
WHERE
  MJ.EXPIRED=0
  AND MJ.SCHEDULE_ID=JS.SCHEDULE_ID;

——

Manually set the value of MBRC in SYS.AUX_STATS$:

EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC',16) 

——

Unsetting the DB_FILE_MULTIBLOCK_READ_COUNT parameter from the spfile to allow Oracle to auto-tune the parameter value (in 10.2.0.1 and above, requires DB bounce):

ALTER SYSTEM RESET DB_FILE_MULTIBLOCK_READ_COUNT SCOPE=SPFILE SID='*';

——

Select a random number between 1 and 50 (after seeding the random number generator):

EXEC DBMS_RANDOM.SEED(0)

SELECT
  ROUND(DBMS_RANDOM.VALUE(1,50)) MY_NUMBER
FROM
  DUAL;

 MY_NUMBER
----------
        42
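One caution with ROUND here: the endpoints 1 and 50 each receive only half a unit of the uniform range, so they appear about half as often as the interior values (TRUNC(DBMS_RANDOM.VALUE(1,51)) avoids this bias).  A quick simulation of the effect:

```python
import random
from collections import Counter

random.seed(0)
# round(uniform(1, 50)) maps only half a unit of range onto each endpoint,
# so 1 and 50 occur roughly half as often as interior values such as 25.
counts = Counter(round(random.uniform(1, 50)) for _ in range(100_000))
assert counts[1] < 0.7 * counts[25]
assert counts[50] < 0.7 * counts[25]
```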

——

Select a random 10 character string:

SELECT
  DBMS_RANDOM.STRING('A',10) MY_STRING
FROM
  DUAL;

MY_STRING
----------

SRnFjRGbiw

——

Count to 10:

SELECT
  ROWNUM MY_NUMBER
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

 MY_NUMBER
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

——

Randomly order a row source (in this case the numbers from 1 to 10):

SELECT
  MY_NUMBER
FROM
  (SELECT
    ROWNUM MY_NUMBER
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10)
ORDER BY
  DBMS_RANDOM.VALUE;

 MY_NUMBER
----------
        10
         5
         4
         9
         1
         3
         7
         6
         8
         2




10046 Extended SQL Trace Interpretation 2

January 26, 2010 

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

In an earlier blog article I described several methods for enabling and disabling 10046 extended SQL traces, listed several keywords that are found in 10046 traces, and demonstrated output generated by my Toy Project for Performance Tuning as it processed the raw trace file. 

Another way to enable a 10046 trace for a session is through the use of a logon trigger created by the SYS user.  For example, the following trigger enables a 10046 trace at level 12 for any program whose name begins with the letters MS or VB, even if the path to the program is included in the PROGRAM column of V$SESSION.  With the use of DECODE, it is easy to extend the trigger to enable tracing for additional programs: 

CREATE OR REPLACE TRIGGER LOGON_10046_TRACE AFTER LOGON ON DATABASE
DECLARE
  SHOULD_EXECUTE INTEGER;
BEGIN
  SELECT DECODE(SUBSTR(UPPER(PROGRAM),1,2),'MS',1,'VB',1,0)
      +DECODE(INSTR(PROGRAM,'\',-1),0,0,DECODE(SUBSTR(UPPER(SUBSTR(PROGRAM,INSTR(PROGRAM,'\',-1)+1)),1,2),'MS',1,'VB',1,0))
    INTO SHOULD_EXECUTE FROM V$SESSION
    WHERE SID=(SELECT SID FROM V$MYSTAT WHERE ROWNUM=1);
  IF SHOULD_EXECUTE > 0 THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12''';
  END IF;
END;
/

Obviously, if you create the trigger, you should drop the trigger when it is no longer needed using the following command. 

DROP TRIGGER LOGON_10046_TRACE;

Let’s try creating a test trace file with Oracle Database 10.2.0.4.  First, we need to create a couple of test tables with 10,000 rows each and collect statistics for the tables and primary key indexes: 

CREATE TABLE T1 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

INSERT INTO
  T1
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

INSERT INTO
  T2
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE)

We now have two tables, each with 10,000 rows.  This test case in SQL*Plus will:

  1. Flush the buffer cache (twice) to force physical reads
  2. Set the fetch array size to the SQL*Plus default value of 15 rows per fetch call
  3. Disable output of the rows returned from the database to limit client-side delays
  4. Create two bind variables with the value of 1 and 2
  5. Enable a 10046 extended SQL trace at level 12
  6. Give the trace file an easy to identify name
  7. Execute a query that joins the two tables
  8. Increase the fetch array size to 50 rows per fetch call
  9. Increase the value of the second bind variable from 2 to 10,000
  10. Execute the same SQL statement executed in step 7

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

SET ARRAYSIZE 15
SET AUTOTRACE TRACEONLY STATISTICS

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER

EXEC :N1 := 1
EXEC :N2 := 2

EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>TRUE)
ALTER SESSION SET TRACEFILE_IDENTIFIER='10046_FIND_ME';

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SET ARRAYSIZE 50
EXEC :N1 := 1
EXEC :N2 := 10000

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT SYSDATE FROM DUAL;

EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

Rather than scrolling all of the rows on the screen, SQL*Plus output the following from the two executions: 

Statistics
---------------------------------------------------
          1  recursive calls
          0  db block gets
          9  consistent gets
          5  physical reads
          0  redo size
        921  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
      10982  consistent gets
        404  physical reads
          0  redo size
    2647906  bytes sent via SQL*Net to client
       2523  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10000  rows processed

From the above, the first execution required 9 consistent block gets, and 5 of those block gets involved reading the block from disk.  The server sent 921 bytes to the client in 2 round trips, and 2 rows were retrieved.  Why 2 round trips for just 2 rows?  We will see why later.  The second execution required 10,982 consistent block gets, and 404 of those involved physical reads.  The server sent about 2.53MB (10,000 rows) to the client, using 201 round trips.  Nice, but we can find out more about what happened.
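The 201 round trips reconcile neatly with the fetch behavior: SQL*Plus retrieves a single row on the first fetch call and ARRAYSIZE rows on each later call.  A quick check of the arithmetic:

```python
import math

# SQL*Plus fetches 1 row on the first call, then ARRAYSIZE rows per call.
rows, arraysize = 10_000, 50
round_trips = 1 + math.ceil((rows - 1) / arraysize)  # 1 + ceil(9999 / 50)
assert round_trips == 201

# Roughly 13KB of row data went out to the client per round trip.
bytes_sent = 2_647_906
assert 13_000 < bytes_sent / round_trips < 13_500
```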

We could process the trace file with tkprof using a command like this: 

tkprof testdb_ora_4148_10046_find_me.trc testdb_ora_4148_10046_find_me.txt

A portion of the TKPROF output might look like this (see Metalink Doc ID 41634.1 for help with reading the tkprof output): 

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch      203      0.28       0.38        409      10991          0       10002
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      208      0.28       0.38        409      10991          0       10002

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 31 

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  FILTER  (cr=9 pr=5 pw=0 time=19577 us)
      2   NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)
      2    TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)
      2     INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)(object id 114211)
      2    INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)(object id 114210)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     204        0.00          0.00
  db file sequential read                       409        0.00          0.21
  SQL*Net message from client                   204        0.00          0.07
  SQL*Net more data to client                  1200        0.00          0.02

The above is a nice summary of what was found in the trace file for our specific test SQL statement, but what is it telling us?  There were 2 parse calls (1 for each execution), and one of those parse calls resulted in a hard parse.  There were 3 execution calls (I am only able to explain 2 of the execution calls).  There were 203 fetch calls that retrieved a total of 10,002 rows – from this we could derive that on average the client fetched 49.27 rows per fetch call.  All of the time for the execution happened on the fetch calls, which required 0.28 seconds of CPU time and a total of 0.38 clock seconds to execute.  A total of 10,991 consistent gets were required and 409 blocks were read from disk. 

The execution plan displayed is a little misleading, since it shows that only 2 rows were retrieved (note that if this example were executed on Oracle 11.1.0.6 or higher, the first and second executions of the SQL statement could have had different execution plans).  An index range scan is performed on the index SYS_C0020548 (the primary key index for table T2) to locate all of the C1 values between 1 and 2 – note that the optimizer used transitive closure here since the restriction in the SQL statement was actually placed on the column C1 of table T1.  The top line in the plan indicates that in total the query required 9 consistent gets, 5 physical block reads, and 0.019577 seconds to execute.  The index range scan on SYS_C0020548 required 3 consistent gets, and 2 physical block reads were required to satisfy the 3 consistent gets.  Table T2 required an additional 2 consistent gets and 1 physical block read.  A nested loop operation was performed, driving into the primary key index for table T1 – this required an additional 4 consistent gets and 2 physical block reads.  But, what about the second execution of the SQL statement?

The wait events show 409 waits on the “db file sequential read” wait event, which indicates physically reading 1 block at a time from disk – note that this exactly matches the “disk” column in the “fetch” line of the tkprof summary.  Every block that had to be read from disk was read one block at a time, with an average read time of 0.000513 seconds per block read (the extremely fast average time likely indicates caching of blocks at the file system, RAID controller, SAN, or hard drive level).  There were 1200 waits on the “SQL*Net more data to client” wait event, indicating that the SDU size was filled 1200 times when sending the data to the client computer.
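The averages quoted above fall straight out of the tkprof totals:

```python
# Average rows per fetch call: 10,002 rows over 203 fetch calls.
rows, fetches = 10_002, 203
assert round(rows / fetches, 2) == 49.27

# Average single-block read time: 0.21 seconds over 409 waits.
total_waited, waits = 0.21, 409
assert round(total_waited / waits, 6) == 0.000513
```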

We could also run the trace file though my Toy Project (below is one of 4 outputs from my program): 

Total for Trace File:
|PARSEs       6|CPU S    0.000000|CLOCK S    0.008073|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      4|
|EXECs        6|CPU S    0.015625|CLOCK S    0.007978|ROWs        3|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      2|
|FETCHs     205|CPU S    0.281250|CLOCK S    0.382338|ROWs    10003|PHY RD BLKs       409|CON RD BLKs (Mem)     10991|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

Wait Event Summary:
SQL*Net message to client           0.000318  On Client/Network   Min Wait:     0.000001  Avg Wait:     0.000001  Max Wait:     0.000005
SQL*Net message from client         3.916585  On Client/Network   Min Wait:     0.000211  Avg Wait:     0.018302  Max Wait:     3.829540
db file sequential read             0.211736  On DB Server        Min Wait:     0.000182  Avg Wait:     0.000518  Max Wait:     0.005905
SQL*Net more data to client         0.020220  On Client/Network   Min Wait:     0.000010  Avg Wait:     0.000017  Max Wait:     0.000258

Total for Similar SQL Statements in Each Group:
----------------------------------------------------------------------------------
Similar SQL Statements in Group: 2
|PARSEs       2|CPU S    0.000000|CLOCK S    0.001231|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        2|CPU S    0.000000|CLOCK S    0.004237|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs     203|CPU S    0.281250|CLOCK S    0.382314|ROWs    10002|PHY RD BLKs       409|CON RD BLKs (Mem)     10991|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 94.74%  CLOCK S 97.34%
  *    0.211736 seconds of time related data file I/O
  *    0.096212 seconds of time related to client/network events
| +++++++++++++++++++|| +++++++++++++++++++|

Cursor 3   Ver 1   Parse at 0.000000  Similar Cnt 1
|PARSEs       1|CPU S    0.000000|CLOCK S    0.001113|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S    0.000000|CLOCK S    0.003538|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs       2|CPU S    0.015625|CLOCK S    0.019769|ROWs        2|PHY RD BLKs         5|CON RD BLKs (Mem)         9|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 5.26%  CLOCK S 6.13%
|                   +||                   +|
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

       (Rows 2)   FILTER  (cr=9 pr=5 pw=0 time=19577 us)
       (Rows 2)    NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)
       (Rows 2)     TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)
       (Rows 2)      INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)
       (Rows 2)     INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)

Bind Variables:BINDS #3:  -0.000008
   Bind#0
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
    kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
    value=1
   Bind#1
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
    kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
    value=2

------------
Cursor 4   Ver 1   Parse at 0.041427  (TD Prev 0.007600)  Similar Cnt 2
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000118|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000699|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs     201|CPU S    0.265625|CLOCK S    0.362545|ROWs    10000|PHY RD BLKs       404|CON RD BLKs (Mem)     10982|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 89.47%  CLOCK S 91.21%
|  ++++++++++++++++++||  ++++++++++++++++++|
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

   (Rows 10000)   FILTER  (cr=10982 pr=404 pw=0 time=680024 us)
   (Rows 10000)    NESTED LOOPS  (cr=10982 pr=404 pw=0 time=670018 us)
   (Rows 10000)     TABLE ACCESS BY INDEX ROWID T2 (cr=781 pr=387 pw=0 time=590006 us)
   (Rows 10000)      INDEX RANGE SCAN SYS_C0020548 (cr=218 pr=17 pw=0 time=10038 us)
   (Rows 10000)     INDEX UNIQUE SCAN SYS_C0020547 (cr=10201 pr=17 pw=0 time=77882 us)

Bind Variables:BINDS #4:  0.041421
   Bind#0
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
    kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
    value=1
   Bind#1
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
    kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
    value=10000
----------------------------------------------------------------------------------

The above shows basically the same output as tkprof, just with greater resolution, both sets of bind variables, and both sets of execution plans.

We could also use any number of other 10046 trace file parsers including TRCANLZR (see Metalink Doc ID 224270.1), TVD$XTAT (see the book “Troubleshooting Oracle Performance”), ESQLTRCPROF (see the book “Secrets of the Oracle Database”), the Hotsos Profiler (Method R), OraSRP (www.oracledba.ru/orasrp/), or one of several other programs.

I typically either look at the output from my program or the raw 10046 trace file.  That brings us to the raw 10046 trace file.  So, what does the raw output of the trace file look like?  Before diving into the raw trace file, let’s find a little information to help us later:

COLUMN TABLE_NAME FORMAT A10

SELECT
  TABLE_NAME,
  INDEX_NAME
FROM
  DBA_INDEXES
WHERE
  TABLE_NAME IN ('T1','T2')
ORDER BY
  TABLE_NAME;

TABLE_NAME INDEX_NAME
---------- ------------
T1         SYS_C0020547
T2         SYS_C0020548

COLUMN OBJECT_NAME FORMAT A15

SELECT
  OBJECT_ID,
  OBJECT_NAME,
  OBJECT_TYPE
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME IN ('T1','T2','SYS_C0020547','SYS_C0020548')
ORDER BY
  OBJECT_NAME;

 OBJECT_ID OBJECT_NAME     OBJECT_TYPE
---------- --------------- -----------
    114210 SYS_C0020547    INDEX
    114211 SYS_C0020548    INDEX
    114209 T1              TABLE
    114207 T2              TABLE

From the above output, the index on table T1 is named SYS_C0020547 and it has an OBJECT_ID of 114210.  The index on table T2 is named SYS_C0020548 and it has an OBJECT_ID of 114211.  Table T1 has an OBJECT_ID of 114209, and table T2 has an OBJECT_ID of 114207.  Now on to a portion of the raw 10046 trace file:

=====================
PARSING IN CURSOR #3 len=91 dep=0 uid=31 oct=3 lid=31 tim=4963261335 hv=3021110247 ad='982ab100'
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #3:c=0,e=1113,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=4963261327
BINDS #3:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
  value=2

From the above, we see that the first parse was a hard parse that required 0 CPU seconds and 0.001113 clock seconds.  Additionally, two bind variables were passed in.  A level 4 or level 12 10046 extended SQL trace file will include the submitted bind variable values, as shown above.  It is possible to use the following list to decode the bind variable data type (oacdty), in the process determining that the bind variables are, in fact, defined as numbers.  See Metalink Doc IDs 67701.1 and 154170.1, the Oracle OCI documentation, or Julian Dyke’s site for a more complete list of datatype constants: 

  0 - This row is a placeholder for a procedure with no arguments.
  1 - VARCHAR2 (or NVARCHAR)
  2 - NUMBER
  3 - NATIVE INTEGER (for PL/SQL's BINARY_INTEGER)
  8 - LONG
 11 - ROWID
 12 - DATE
 23 - RAW
 24 - LONG RAW
 96 - CHAR (or NCHAR)
112 - CLOB or NCLOB
113 - BLOB
114 - BFILE
106 - MLSLABEL
250 - PL/SQL RECORD
251 - PL/SQL TABLE
252 - PL/SQL BOOLEAN
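For scripted trace analysis, the same decode table can live in a small lookup.  A hypothetical helper (the table is a partial copy of the list above):

```python
# Partial oacdty decode table from the list above (hypothetical helper name).
OACDTY = {1: "VARCHAR2", 2: "NUMBER", 3: "NATIVE INTEGER", 8: "LONG",
          11: "ROWID", 12: "DATE", 23: "RAW", 24: "LONG RAW",
          96: "CHAR", 112: "CLOB/NCLOB", 113: "BLOB", 114: "BFILE"}

# Both bind variables in the trace excerpt show oacdty=02: NUMBER.
assert OACDTY[2] == "NUMBER"
```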

The trace file continues below:

EXEC #3:c=0,e=3538,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=4963265927
WAIT #3: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=4963266031
WAIT #3: nam='db file sequential read' ela= 4816 file#=4 block#=1138316 blocks=1 obj#=114211 tim=4963271120
WAIT #3: nam='db file sequential read' ela= 3934 file#=4 block#=1138318 blocks=1 obj#=114211 tim=4963275219
WAIT #3: nam='db file sequential read' ela= 4443 file#=4 block#=1138310 blocks=1 obj#=114207 tim=4963279805
WAIT #3: nam='db file sequential read' ela= 5221 file#=4 block#=1093148 blocks=1 obj#=114210 tim=4963285172
WAIT #3: nam='db file sequential read' ela= 236 file#=4 block#=1093150 blocks=1 obj#=114210 tim=4963285569
FETCH #3:c=15625,e=19589,p=5,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=4963285717
WAIT #3: nam='SQL*Net message from client' ela= 364 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963286240
WAIT #3: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963286494
FETCH #3:c=0,e=180,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,tim=4963286588

Above, we see the 5 physical block reads: blocks 1138316 and 1138318 of OBJECT_ID 114211 (index on table T2, SYS_C0020548), followed by a single block read of OBJECT_ID 114207 (table T2), and 2 single block reads of object 114210 (index on table T1, SYS_C0020547) – note that the final of the 5 physical block reads completed in 0.000236 seconds, which is roughly 17 times faster than the 0.004 seconds required for 1 revolution of a 15,000 RPM hard drive platter.  The first fetch call returned a single row, even though the array fetch size was explicitly set to 15 rows.  That fetch required 5 consistent gets, which in turn required the 5 physical block reads.  The 1 row was sent to the client, which then fetched a second row (4963286588 – 4963285717)/1,000,000 = 0.000871 seconds after the first fetch call completed.  The trace file continues:

WAIT #3: nam='SQL*Net message from client' ela= 319 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963287035
*** SESSION ID:(211.17427) 2010-01-25 15:41:01.540
STAT #3 id=1 cnt=2 pid=0 pos=1 obj=0 op='FILTER  (cr=9 pr=5 pw=0 time=19577 us)'
STAT #3 id=2 cnt=2 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)'
STAT #3 id=3 cnt=2 pid=2 pos=1 obj=114207 op='TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)'
STAT #3 id=4 cnt=2 pid=3 pos=1 obj=114211 op='INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)'
STAT #3 id=5 cnt=2 pid=2 pos=2 obj=114210 op='INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)'
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963289021
WAIT #0: nam='SQL*Net message from client' ela= 2329 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963291420
=====================
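The PARSE/EXEC/FETCH lines in the raw trace follow a regular keyword=value format, so they are easy to decompose programmatically.  A minimal sketch (not a full profiler) that pulls apart the first FETCH line shown above:

```python
import re

# Minimal sketch: decompose a PARSE/EXEC/FETCH line from a 10046 trace into
# its call type, cursor number, and a dict of the comma-separated statistics.
def parse_call_line(line):
    m = re.match(r"(\w+) #(\d+):(.*)", line)
    call, cursor, stats = m.group(1), int(m.group(2)), m.group(3)
    return call, cursor, {k: int(v)
                          for k, v in (kv.split("=") for kv in stats.split(","))}

call, cursor, s = parse_call_line(
    "FETCH #3:c=15625,e=19589,p=5,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=4963285717")
# c is CPU time and e is elapsed time (microseconds here); p is physical
# reads, cr is consistent gets, r is rows returned.
assert call == "FETCH" and cursor == 3
assert s["e"] == 19589 and s["p"] == 5 and s["r"] == 1
```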

Cursor #3 was closed, so Oracle output the STAT (row source operation) lines, as we saw in the tkprof output.  The trace file continues (with a couple of lines removed):

...
=====================
PARSING IN CURSOR #4 len=91 dep=0 uid=31 oct=3 lid=31 tim=4963302762 hv=3021110247 ad='982ab100'
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #4:c=0,e=118,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=4963302756
BINDS #4:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
  value=10000
EXEC #4:c=0,e=699,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=4963304299
WAIT #4: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963304383
FETCH #4:c=0,e=94,p=0,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=4963304564
WAIT #4: nam='SQL*Net message from client' ela= 718 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963305403
WAIT #4: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963305590
WAIT #4: nam='SQL*Net more data to client' ela= 29 driver id=1413697536 #bytes=2146 p3=0 obj#=114210 tim=4963305766
WAIT #4: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=1862 p3=0 obj#=114210 tim=4963305913
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2128 p3=0 obj#=114210 tim=4963306065
WAIT #4: nam='db file sequential read' ela= 2272 file#=4 block#=1138311 blocks=1 obj#=114207 tim=4963308471
WAIT #4: nam='SQL*Net more data to client' ela= 27 driver id=1413697536 #bytes=1868 p3=0 obj#=114207 tim=4963308686
WAIT #4: nam='SQL*Net more data to client' ela= 18 driver id=1413697536 #bytes=2122 p3=0 obj#=114207 tim=4963308841
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=2128 p3=0 obj#=114207 tim=4963309001
FETCH #4:c=0,e=3573,p=1,cr=54,cu=0,mis=0,r=50,dep=0,og=1,tim=4963309109

Note that there was no hard parse this time.  The first two fetches are complete at this point.  Again, the first fetch returned a single row, while the second fetch returned 50 rows.  Note the presence of the “SQL*Net more data to client” wait before the second fetch line printed – each of these lines indicates that the SDU size was filled on the previous send to the client.  Notice that there was only a single physical block read of OBJECT_ID 114207 (table T2), requiring 0.002272 seconds, when fetching the first 51 rows (the other blocks were already in the buffer cache).  The trace file continues below:

WAIT #4: nam='SQL*Net message from client' ela= 256 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963309476
WAIT #4: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963309654
WAIT #4: nam='db file sequential read' ela= 226 file#=4 block#=1138312 blocks=1 obj#=114207 tim=4963309995
WAIT #4: nam='SQL*Net more data to client' ela= 32 driver id=1413697536 #bytes=2116 p3=0 obj#=114207 tim=4963310197
WAIT #4: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963310353
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1834 p3=0 obj#=114207 tim=4963310488
WAIT #4: nam='db file sequential read' ela= 1762 file#=4 block#=1138308 blocks=1 obj#=114207 tim=4963312390
WAIT #4: nam='SQL*Net more data to client' ela= 23 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963312551
WAIT #4: nam='SQL*Net more data to client' ela= 16 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963312783
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1834 p3=0 obj#=114207 tim=4963312904
FETCH #4:c=0,e=3345,p=2,cr=55,cu=0,mis=0,r=50,dep=0,og=1,tim=4963312955

Two more physical block reads of OBJECT_ID 114207 (table T2) to return the next 50 rows to the client.  Jumping forward in the trace file to the last two fetches:

...
FETCH #4:c=0,e=1259,p=2,cr=55,cu=0,mis=0,r=50,dep=0,og=1,tim=4963757700
WAIT #4: nam='SQL*Net message from client' ela= 842 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963758586
WAIT #4: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963758653
WAIT #4: nam='SQL*Net more data to client' ela= 16 driver id=1413697536 #bytes=2124 p3=0 obj#=114207 tim=4963758766
WAIT #4: nam='db file sequential read' ela= 242 file#=4 block#=1773782 blocks=1 obj#=114207 tim=4963759071
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759182
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1841 p3=0 obj#=114207 tim=4963759268
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759365
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1841 p3=0 obj#=114207 tim=4963759453
WAIT #4: nam='db file sequential read' ela= 226 file#=4 block#=1773852 blocks=1 obj#=114207 tim=4963759715
WAIT #4: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759867
FETCH #4:c=0,e=1290,p=2,cr=54,cu=0,mis=0,r=49,dep=0,og=1,tim=4963759912

From the above, we see that Oracle is still performing an average of two single block physical reads per fetch call, and the final fetch call retrieved just 49 rows.  The trace file continues:

WAIT #4: nam='SQL*Net message from client' ela= 792 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963760775
*** SESSION ID:(211.17427) 2010-01-25 15:41:02.008
STAT #4 id=1 cnt=10000 pid=0 pos=1 obj=0 op='FILTER  (cr=10982 pr=404 pw=0 time=680024 us)'
STAT #4 id=2 cnt=10000 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=10982 pr=404 pw=0 time=670018 us)'
STAT #4 id=3 cnt=10000 pid=2 pos=1 obj=114207 op='TABLE ACCESS BY INDEX ROWID T2 (cr=781 pr=387 pw=0 time=590006 us)'
STAT #4 id=4 cnt=10000 pid=3 pos=1 obj=114211 op='INDEX RANGE SCAN SYS_C0020548 (cr=218 pr=17 pw=0 time=10038 us)'
STAT #4 id=5 cnt=10000 pid=2 pos=2 obj=114210 op='INDEX UNIQUE SCAN SYS_C0020547 (cr=10201 pr=17 pw=0 time=77882 us)'
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963764774
WAIT #0: nam='SQL*Net message from client' ela= 2789 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963767585

From the above, we see the execution plan for the second execution – this information was missing from the tkprof output.  A hash join with two full table scans probably would have been more efficient than a nested loop join with index lookups, especially if the number of rows returned were larger.  This is one of the potential problems with using bind variables, especially when bind variable peeking is enabled (the default in recent releases) – the execution plan is essentially locked in after the initial hard parse.  Oracle 11.1.0.6 introduced a feature known as adaptive cursor sharing that may alter the plan on a future execution if Oracle detects that different bind variable values cause significant changes in the number of rows returned.
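
On 11g, whether adaptive cursor sharing has flagged a cursor may be checked in V$SQL – a quick sketch (the SQL_ID shown is hypothetical; substitute the SQL_ID of the statement of interest):

```sql
-- IS_BIND_SENSITIVE / IS_BIND_AWARE were added to V$SQL in 11.1:
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_id = '0fr8zhn4ymu3v';  -- hypothetical SQL_ID
```

A cursor marked bind-sensitive is being watched by adaptive cursor sharing; a bind-aware child cursor indicates that a plan was generated for a specific range of bind variable selectivities.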

Incidentally, you may have noticed the keyword “oct” on the “PARSING IN CURSOR” lines in the above trace file.  This keyword identifies the Oracle command type, which is related to the V$SESSION.COMMAND column and the V$SQL.COMMAND_TYPE column. Common command type values include: 

  1 - CREATE TABLE
  2 - INSERT
  3 - SELECT
  6 - UPDATE
  7 - DELETE
  9 - CREATE INDEX

See the “Command Column of V$SESSION and Corresponding Commands” table in the Oracle Reference documentation (Table 8-2 in the Oracle Database Reference 11g Release 2 book) for a complete list of command types.
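
As a quick sketch, the same command type numbers may be seen outside of a trace file by examining the cursors currently in the library cache:

```sql
-- Distribution of command types for cached cursors; for example,
-- COMMAND_TYPE 3 identifies SELECT statements:
SELECT   command_type, COUNT(*) cursor_count
FROM     v$sql
GROUP BY command_type
ORDER BY command_type;
```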

For more information about 10046 trace files, see Chapter 8, “Understanding Performance Optimization Methods”, in the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table” (the chapter was co-written by Randolf Geist and me).  The book “Optimizing Oracle Performance” is also highly recommended.





Notes about Various Oracle Parameters

25 01 2010

January 25, 2010

There are a large number of initialization parameters that control the behavior of Oracle Database, and thus the performance of the database server.  It would likely take hours (or many, many pages) to explain the ideal value of each of these parameters – the ideal values differ from one database to another; otherwise, Oracle Database could simply default to the ideal values.  Below are some of my notes on various parameters.
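
Before changing any of the parameters below, it helps to confirm the current value and whether it is still at its default – a minimal sketch using V$PARAMETER (the SQL*Plus SHOW PARAMETER command provides similar output):

```sql
-- ISDEFAULT indicates whether the parameter retains its default value;
-- ISMODIFIED indicates whether it was changed since instance startup:
SELECT   name, value, isdefault, ismodified
FROM     v$parameter
WHERE    name IN ('processes', 'sessions', 'statistics_level')
ORDER BY name;
```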

O7_DICTIONARY_ACCESSIBILITY: when set to true, permits non-sysdba users to query the data dictionary without explicitly granting permission to the users to view the data dictionary (setting this parameter to TRUE also allows the SYS user to connect without specifying AS SYSDBA).  This parameter must be set to TRUE for certain applications’ functions to work correctly, but ideally should be set to FALSE, if possible.  This parameter is set to FALSE by default on 9.0.1 and above.  Changing the parameter requires bouncing the database.

COMPATIBLE: Sets the datafile-level binary compatibility, allowing the Oracle binaries to be rolled back to an earlier release while still being able to access the database’s datafiles.  Note that this parameter’s purpose is incorrectly described as controlling the query optimizer in a couple of Oracle related books.

PLSQL_CODE_TYPE: Watch out – there is a bug, at least on the Windows platform of 10.2.0.x, where this setting will automatically change from INTERPRETED to NATIVE (compiled) when the values of other parameters are changed using Enterprise Manager Database Control.

PROCESSES and SESSIONS: Control the maximum number of client connections that may be connected at any one time.  The SESSIONS parameter is typically set 10 to 20 greater than the value of PROCESSES.  The database must be bounced to change these parameters.  See here for a related article.

RECYCLEBIN: Controls whether or not dropped tables and indexes will be saved to an area which will permit the objects to be recovered.  If applications are using non-standard methods of determining the objects belonging to a user (for example, directly querying SYS.OBJ$ and SYS.USER$), it is possible for the objects in the recyclebin to be listed alongside the normal tables – attempting to assign permissions to objects in the recyclebin will result in the database returning errors to the client.

TIMED_STATISTICS: Should be set to TRUE to permit most forms of performance tuning; when set to FALSE, time deltas between events are not calculated.  Setting this parameter to TRUE may impose a performance penalty on some operating systems, but the penalty is typically small.  This parameter defaults to TRUE when the STATISTICS_LEVEL parameter is set to TYPICAL or ALL.

DB_DOMAIN: Allows a database SID to be suffixed with a DNS style domain name.  If set, it may cause problems when database links are created between databases (only one name is valid for a database link when the DB_DOMAIN parameter is set, a custom name cannot be assigned to the database link).

UNDO_MANAGEMENT: When set to AUTO, rollback segments are no longer used – instead, the system automatically manages undo segments.

UNDO_RETENTION: Specifies the suggested minimum number of seconds that undo information should remain available.  Used to limit the frequency of “snapshot too old” error messages.
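
A minimal sketch of adjusting UNDO_RETENTION and checking how long the longest recent query actually ran (V$UNDOSTAT reports statistics in ten-minute intervals; the 1800-second value below is an arbitrary example):

```sql
-- Suggest that committed undo information be retained for at least 30 minutes:
ALTER SYSTEM SET undo_retention = 1800 SCOPE = BOTH;

-- MAXQUERYLEN shows the longest query (in seconds) serviced per interval;
-- UNXPSTEALCNT indicates unexpired undo blocks stolen to satisfy new transactions:
SELECT   begin_time, maxquerylen, unxpstealcnt
FROM     v$undostat
ORDER BY begin_time;
```

If MAXQUERYLEN regularly exceeds the UNDO_RETENTION value, long-running queries remain at risk of “snapshot too old” errors.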

DB_RECOVERY_FILE_DEST_SIZE: Specifies a hard upper limit on the number of bytes available to store archived redo logs, backups, and other items in the flash recovery area.  Note that if files are removed from the flash recovery area using operating system commands, Oracle may incorrectly calculate the space used in the flash recovery area, potentially creating problems if copies of archived redo logs and/or backups are sent to the flash recovery area.

DB_RECOVERY_FILE_DEST: Specifies the location to be used for the flash recovery area.

DB_WRITER_PROCESSES: Do not increase from the default value of 1 unless the server has more than 8 CPUs.  See here for a related article.

STATISTICS_LEVEL: Should be set to TYPICAL; do not leave the parameter set to ALL at the system-wide level, as additional performance data must then be collected for every SQL statement executed, which will significantly slow down performance.  The performance hit when set to ALL is more significant on Oracle 10g than it is on Oracle 11g.
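
When row source execution statistics are needed for a specific investigation, the parameter may instead be changed at the session level – a sketch using a hypothetical table T1:

```sql
-- Collect row source execution statistics for this session only:
ALTER SESSION SET statistics_level = 'ALL';

-- Execute the statement of interest (T1 is a hypothetical table):
SELECT COUNT(*) FROM t1;

-- Display the actual execution statistics for the last execution:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```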

SGA_MAX_SIZE: Specifies the absolute maximum size of memory allocated to items in the system global area (SGA), defaults to the value of SGA_TARGET if not set.  Requires bouncing the database to change the parameter’s value.

SGA_TARGET: Specifies the suggested maximum amount of memory to be allocated to items in the system global area.  The value may be manually increased to the value of SGA_MAX_SIZE without bouncing the database.

SHARED_POOL_SIZE: When SGA_TARGET is specified, sets the minimum amount of memory available for caching items in the shared pool (SQL statements, packages, etc.).

TRACE_ENABLED:  Allows the database to create extended SQL trace (10046) and 10053 trace files when sessions request that those trace files be generated.  Typically defaults to a value of TRUE.

MEMORY_MAX_TARGET: Part of the new memory management parameters in Oracle 11g, specifies the absolute maximum amount of memory that may be used by Oracle.

MEMORY_TARGET: Part of the new memory management parameters in Oracle 11g, by default 80% of this memory will be allocated to the SGA and 20% to the PGA.

DB_CACHE_SIZE: Specifies the minimum amount of memory in bytes for the DEFAULT block buffer cache (KEEP and RECYCLE buffer cache sizes do not subtract from this value).

LOG_BUFFER: Specifies the amount of memory to be allocated for buffering redo information before it is written to the redo logs.  512KB to 1MB is typically sufficient on older Oracle releases; 10g and above may automatically set this parameter’s value to a size close to the memory granule size, which may be 16MB.

WORKAREA_SIZE_POLICY: When set to AUTO, allows the automatic allocation of memory for work areas from the memory specified for the PGA_AGGREGATE_TARGET.

SORT_AREA_SIZE: Has no effect when WORKAREA_SIZE_POLICY is set to AUTO (assuming dedicated sessions), specifies the amount of memory that may be used during a sorting or hashing operation when executing a SQL statement.  HASH_AREA_SIZE defaults to twice this value.

SORT_AREA_RETAINED_SIZE: Has no effect when WORKAREA_SIZE_POLICY is set to AUTO, specifies the amount of memory that may be used after a sorting operation when the client is retrieving the results of the SQL statement.  If the server has sufficient memory, set this value to the same as the value of SORT_AREA_SIZE to avoid unnecessarily spilling the results to the temp tablespace after the sort, but before the client starts retrieving the results.

OPEN_CURSORS: Specifies the maximum number of cursors that may be simultaneously open for each client’s session.  Depending on the application connecting to the database, a value between 300 and 1000 might be a safe target if there is sufficient memory on the server.
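
To see which sessions are approaching the OPEN_CURSORS limit, the V$OPEN_CURSOR view may be examined – a rough sketch (note that on recent releases this view also lists cursors cached on behalf of the session, so the counts are only an approximation of truly open cursors):

```sql
-- Sessions with the most entries in V$OPEN_CURSOR:
SELECT   sid, COUNT(*) open_cursor_count
FROM     v$open_cursor
GROUP BY sid
ORDER BY open_cursor_count DESC;
```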

SESSION_CACHED_CURSORS: On older Oracle releases, this parameter defaults to 0, and on more recent releases the parameter defaults to either 20 or 50 (this parameter controls the number of cached cursors per session).  If the value of this parameter is set to a non-zero value and the same SQL statement is submitted at least three times, the SQL statement is added to the session cursor cache and remains open even when the client explicitly closes the cursor.  This helps reduce the performance hit caused by soft parses when the client repeatedly submits the same SQL statement to be executed – on the next parse request Oracle does not need to search the library cache as would be needed during a soft parse.  A value of 50 to 100 probably would be a good target, and if server memory permits, consider setting this parameter to a higher value, possibly 200.
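
Whether the session cursor cache is actually being used may be checked with the ‘session cursor cache hits’ statistic – a minimal sketch joining V$SESSTAT and V$STATNAME:

```sql
-- Session cursor cache hits per connected session, busiest first:
SELECT   s.sid, ss.value session_cursor_cache_hits
FROM     v$sesstat ss, v$statname sn, v$session s
WHERE    ss.statistic# = sn.statistic#
AND      ss.sid = s.sid
AND      sn.name = 'session cursor cache hits'
ORDER BY ss.value DESC;
```

A consistently low hit count for sessions that repeatedly execute the same statements may justify increasing the parameter value.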

CURSOR_SHARING: Starting with Oracle 8i, it is possible for the database server to automatically convert constants (literals) submitted in SQL statements to bind variables in order to reduce the number of hard parses.  There are problems in patched Oracle 10.2.0.2 and 10.2.0.3 when this parameter is set to anything except EXACT (the October 2006 CPU for Oracle 10.2.0.2, for example, introduces problems when the CURSOR_SHARING parameter is set to FORCE – the problem may appear a couple of hours after the database is used in production).

CURSOR_SPACE_FOR_TIME: This parameter will be removed from future releases of Oracle as it is often misused (removed from 11.2.0.1?).  When set to TRUE, this parameter causes Oracle to assume that required SQL statements will not be prematurely aged out of the library cache.

OPTIMIZER_INDEX_CACHING: Tells Oracle the approximate percentage of index blocks that remain in the buffer cache – primarily has an effect during nested loop joins, affects costing of nested loop joins and in-lists.

OPTIMIZER_INDEX_COST_ADJ: Artificially lowers the calculated cost of an index access to the percentage of the original specified by this parameter.  Due to rounding problems, may cause the wrong index to be used if this parameter is set to too low a value.  If index access costs are calculated too high compared to full table scans (and fast full index scans), use CPU (system) statistics, if available, to increase the cost of full table scans, rather than using this parameter to decrease the cost of index accesses.

OPTIMIZER_FEATURES_ENABLE: When adjusted, automatically changes the value of many hidden initialization parameters to permit the query optimizer to behave similar to the optimizer in an earlier release of Oracle.

OPTIMIZER_SECURE_VIEW_MERGING: Defaults to TRUE on 10g; when set to TRUE, it may cause performance problems when a user accesses a view created by another user, while the same performance problem is not present for the view owner.

DB_FILE_MULTIBLOCK_READ_COUNT: Controls the maximum number of blocks that may be fetched in a single read operation during a full table scan or fast full index scan.  Oracle 10.2.0.x and above is able to auto-set the DB_FILE_MULTIBLOCK_READ_COUNT, which will likely set the parameter to permit multi-block reads of 1MB (128 blocks with an 8KB block size).







