Faulty Quotes 5 – Block Sizes

January 31, 2010 (Updated Feb 1, 2010)

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

The topic of deviating from the default 8KB block size in Oracle Database, or of using multiple block sizes in a single database, seems to surface every couple of months in the OTN forums, Oracle-L, the comp.databases.oracle.server Usenet group, and similar discussion forums.  I think that I understand why.  A lot of information has been written that advocates using multiple block sizes in a single Oracle database, or using the largest possible block size to improve “full scan” or “range scan” performance.  Such information is found in blogs, news articles, discussion forums, expert sites, books, and even Oracle’s download.oracle.com website.  So, why do people ask questions about using larger than default block sizes, or multiple block sizes, in discussion forums if there are so many sources of information that say “just do it”?  Well, chances are that the Google (or other search engine) search that found all of the sources recommending the use of non-standard settings also found several pages where people basically stated “stop, think, understand before making any changes.”  See the Faulty Quotes 3 blog article.

So, you might be curious what my Google search found.  Is it a best practice to implement multiple block sizes in a single database, and is it a best practice to move all of your indexes to a tablespace using the largest supported block size?  (See chapter 1 of Expert Oracle Practices for a discussion on the topic of implementing “best practices”.)  In the following quotes, I have attempted to quote the bare minimum of each article so that the quote is not taken too far out of context (I am attempting to avoid changing the meaning of what is being quoted).

http://download.oracle.com/docs/cd/E13214_01/wli/docs102/dbtuning/dbtuning.html

“Oracle9i introduced a new feature that allowed a single instance of the database to have data structures with multiple block sizes. This feature is useful for databases that need the flexibility of using a small block size for transaction processing applications (OLTP); and a larger block size to support batch processing applications, decision support systems (DSS), or data warehousing. It can also be used to support more efficient access to larger data types like LOBs.”

http://www.virtual-dba.com/pdfs/Xtivia_WP_Oracle_Best_Practices_2008.pdf  (page 14)

“In Oracle databases 9i, 10g, and 11g, it is a best practice to use multiple block sizes; this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces sized (block size) according to access…”

http://www.oracle.com/technology/products/database/clustering/pdf/bp_rac_dw.pdf (page 19)

“Larger oracle block sizes typically give fewer index levels and hence improved index access times to data. A single I/O will fetch many related rows and subsequent requests for the next rows will already be in the data buffer. This is one of the major benefits of a larger block size. Another benefit is that it will decrease the number of splits.”

dba-oracle.com/art_so_blocksize.htm

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace.”
“As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache. Clearly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and above are worth your investigation and trials in your own database.”

rampant-books.com/t_oracle_blocksize_disk_i_o.htm

“B-tree indexes with frequent index range scans perform best in the largest supported block size.  This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL during index range scans.  Some indexes do not perform range scans, so the DBA should make sure to identify the right indexes”

praetoriate.com/t_oracle_tuning_data_buffer_pools.htm

“This is an important concept for Oracle indexes because indexes perform better when stored in large block size tablespaces.  The indexes perform better because the b-trees may have a lower height and mode entries per index node, resulting in less overall disk overhead with sequential index node access.”

remote-dba.cc/s56.htm

“Indexes want large block sizes – B-tree indexes perform best in the largest supported block size and some experts recommend that all indexes should reside in 32K block size tablespaces. This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL performing index range scans.”
“Many DBAs make their default db_block_size 32k and place indexes, the TEMP tablespace and tables with large-table full-table scans in it, using other block sizes for objects that require a smaller fetch size.”

remote-dba.net/unix_linux/multiple_block_sizes.htm

“Large blocks – Indexes, row-ordered tables, single-table clusters, and table with frequent full-table scans should reside in tablespaces with large block sizes.”

oracle-training.cc/s54.htm

“Larger block sizes are suitable for indexes, row-ordered tables, single-table clusters, and tables with frequent full-table scans. In this way, a single I/O will retrieve many related rows, and future requests for related rows will already be available in the data buffer.”

oracle-training.cc/oracle_tips_block_sizes.htm

“Indexes want large block sizes – Indexes will always favor the largest supported blocksize. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans.  Hence, all indexes should reside in tablespaces with a 32k block size.”

oracle-training.cc/t_oracle_multiple_buffers.htm

“One of the first things the Oracle9i DBA should do is to migrate all of their Oracle indexes into a large blocksize tablespace. Indexes will always favor the largest supported blocksize.”

http://forums.oracle.com/forums/thread.jspa?messageID=2445936

“It’s pretty well established that RAC performs less pinging with 2k blocksizes”
“Large blocks gives more data transfer per I/O call.”
“Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.”

dbapool.com/articles/040902.html

“Index Branches: Larger oracle block sizes typically give fewer index levels and hence improved index access times to data .This is one of the major benefits of a larger block size.”

toadworld.com/LinkClick.aspx?fileticket=fqDqiUsip1Y=&tabid=234  (page 8)

“In Oracle9i and Oracle10g it is a good practice to use multiple block sizes, this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces according to access. For single block read type OLTP access, use 8k block sizes. For full table scan access such as with data warehouses use 16-32K block sizes. For index lookups use 8-16K block sizes. For indexes that are scanned or bitmap indexes, use 16-32K block sizes.”

dbaforums.org/oracle/index.php?s=87341768e1865563322676a1bd504db6&showtopic=83&pid=133&mode=threaded&start=#entry133

“Multiuple blocksizes are GREAT, but ONLY if your database is I/O-bound… Finally, research has proved that Oracle indexes build cleaner in large blocksizes.”

searchoracle.techtarget.com/tip/Oracle-tuning-Blocksize-and-index-tree-structures

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace… You can use the large (16-32K) blocksize data caches to contain data from indexes or tables that are the object of repeated large scans.”

dbazine.com/oracle/or-articles/burleson2

“Hence, one of the first things the Oracle9i database administrator will do is to create a 32K tablespace, a corresponding 32K data buffer, and then migrate all of the indexes in their system from their existing blocks into the 32K tablespace… Indexes will always favor the largest supported blocksize.”

statspackanalyzer.com/sample.asp

“You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.”

http://books.google.com/books?id=xxx0KAwY_ZMC&pg=PT133#v=onepage&q=&f=false

“If you have large indexes in your database, you will need a large block size for their tablespaces.”
“Oracle provides separate pools for the various block sizes, and this leads to better use of Oracle memory.”

noriegaaoracleexpert.blogspot.com/2007/08/advances-in-multiple-block-size-caches.html

“… and using multiple block caches act as an intelligent cache differentiator that automatically leverage cache performance optimization. I have successfully tested, like many other DBAs and developers, that beyond any possible SGA tuning that using multiple-block-size database can certainly improve performance through this performance approach.”

http://books.google.com/books?id=Wx6OmllCfIkC&pg=PA164#v=onepage&q=&f=false

“Simply by using the new 16K tablespace and accompanying 16K data cache, the amount of logical reads has been reduced by half.  Most assuredly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and later, are worth examination and trials in the DBA’s own database.”

http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA406#v=onepage&q=&f=false

“Objects that experience full scans and indexes with frequent range scans might benefit from being placed in a larger block size, with db_file_multiblock_read_count set to the block size for that tablespace.”

http://books.google.com/books?id=Uf2pb1c1H2AC&pg=RA1-PA317#v=onepage&q=&f=false

“Indexes want large block sizes: Indexes will always favor the largest supported block size… Hence, all indexes should reside in tablespaces with a 32K block size.”

dba-oracle.com/oracle_tips_multiple_blocksizes.htm (Added Feb 1, 2010):

“At first, beginners denounced multiple block sizes because they were invented to support transportable tablespaces.  Fortunately, Oracle has codified the benefits of multiple blocksizes, and the Oracle 11g Performance Tuning Guide notes that multiple blocksizes are indeed beneficial in large databases to eliminate superfluous I/O and isolate critical objects into a separate data buffer cache:

‘With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool…'”

————————-

Before deciding whether to implement a larger than default (or a very small) block size, or to add a tablespace with a larger or smaller than default block size, I suggest reviewing the following:

http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/iodesign.htm#i20394  (directly relates to Faulty Quotes 3)

“The use of multiple block sizes in a single database instance is not encouraged because of manageability issues.”

“Expert Oracle Database Architecture”
http://books.google.com/books?id=TmPoYfpeJAUC&pg=PA147#v=onepage&q=&f=false

“These multiple blocksizes were not intended as a performance or Tuning feature, but rather came about in support of transportable tablespaces…”

http://www.freelists.org/post/oracle-l/32K-block-size-tablespace-for-indexes,4

“But in most cases the administration overhead is much bigger than the performance benefit. You can easily end up with over- or undersized db_XXk_cache_size and the database can’t do anything about it. Then the performance will be better in some parts of the day and worse later on.”
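To make the administration overhead described in the above quote concrete, here is a minimal sketch of the manual steps that a non-default block size requires (the file name, cache size, and tablespace name are invented for illustration):

```sql
-- A separate buffer cache must be manually created and sized for each
-- non-default block size; these db_NNk_cache_size caches are not
-- resized by automatic SGA management.
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 64M;

-- Only after the matching cache exists may a tablespace with the
-- non-default block size be created.
CREATE TABLESPACE IND_16K
  DATAFILE '/u01/oradata/ind_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;
```

If the DBA over- or under-sizes DB_16K_CACHE_SIZE, the database cannot correct the mistake automatically, which is exactly the problem the quote describes.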

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1468781700346675276

“I would not recommend going into a system planning on using multiple blocksizes – they were invented for one thing, to transport data from a transactional system to a warehouse (where you might be going from 8k OLTP to 16/32k warehouse) and to be used only to extract/transform/load the OLTP data.”

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1468781700346675276

“My block size is 4096 and my db_32k_cache_size=67108864
I want to create a tablespace with 32K and rebuild all indexes into this tablespace. These are
frequently used indexes. Do you think is there any benefit for using 32K block size in this scenerio”

“before you do something, you should have an identified goal in mind
so, tell us all – WHY would you do this? Don’t say “cause I read on some website it makes things super fast” (it doesn’t), tell us WHY you think YOU would derive benefit from this?
I do not think there is in general benefits to be gained from using multiple block size tablespaces – short of TRANSPORTING data from one block size to another for an ‘extract transform and load’ process.”

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:729373100346730466

“BUT – do not use multiple block sizes for anything other than transporting data from database A to database B where the block size in A is different from B. No silver bullets with this ‘trick’, nothing you want to do in real life. The cases whereby multiple blocksizes are useful are typically limited to benchmarks, old wives tales, and very exceptionally rare conditions.”

http://jonathanlewis.wordpress.com/2009/03/22/block-size-again/
“ORA-01555: snapshot too old” caused by large block size

“Oracle9i Performance Tuning Tips & Techniques”
http://books.google.com/books?id=59ks3deVd0UC&pg=PA9#v=onepage&q=&f=false

“Warning: Oracle development does not support the notion of using multiple block sizes for performance tuning. The nonstandard block caches are not optimized.”

http://forums.oracle.com/forums/thread.jspa?messageID=2445936

“How can I determine which block size is correct for my database.”

“Use 8k. This is right in the middle, and won’t put you in an edge condition. Call it the Goldilocks block, not to small, not to big, just right.
For both OLTP and DSS, 8k is an optimal size. I use 8k, always.
There is minimal gains to be had in messing with block sizes. Having good db design and good execution plans is a better place to worry about performance.”

Series of related articles (there are at least 5 related articles in this series where the author directly addresses many of the claimed benefits of fiddling with block sizes):
http://richardfoote.wordpress.com/category/index-block-size/

Summary of an OTN forums thread – what was likely the longest thread ever on the topic of block sizes (and very likely multiple block sizes in the same database) from June 2008.  The message thread was too large to be supported on the new OTN forum software for performance reasons.  Fortunately, Jonathan Lewis obtained a copy of the thread content in a PDF file:
http://jonathanlewis.wordpress.com/2008/07/19/block-sizes/

Related to the above mentioned OTN thread:
http://structureddata.org/2008/08/14/automatic-db_file_multiblock_read_count/
http://structureddata.org/2008/09/08/understanding-performance/

I posted a number of test cases in the above mentioned OTN thread where I simulated some of the activity in a data warehouse, and activity in an OLTP type database.  To a large extent, the performance was very close to being identical in the databases with the default 8KB and 16KB tablespaces, with just a few exceptions.  As I recall, the 16KB database encountered performance problems when a column with a NULL value was updated, and when a rollback was performed.

Below you will find the scripts to reproduce my test cases that appeared in the above mentioned OTN thread, and the performance results that I obtained.  The OLTP test required roughly 10-12 hours to complete:
Block Size Comparison (save with a .XLS extension and open with Microsoft Excel).

I guess the message is that you should verify that the swimming pool contains water before diving in head first.





Execution Plans – What is the Plan, and Where Do I Find It?

January 30, 2010

(Forward to the Next Post in the Series)

So, what is the plan?  There are a lot of resources available to help with understanding execution plans, some of which are much more effective than others.  The most effective explanations that I have found are in the “Troubleshooting Oracle Performance” book, but the documentation (documentation2) is also helpful from time to time.  There are also a lot of ways to look at execution plans, including directly examining V$SQL_PLAN in recent Oracle releases.  This article shows some of the ways of generating execution plans, and some of the problems that might be encountered when attempting to obtain the “correct” execution plan.

For this blog article, the following script to create test tables will be used:

CREATE TABLE T1 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T3 (
  C1 NUMBER,
  C2 VARCHAR2(255));

CREATE TABLE T4 (
  C1 NUMBER,
  C2 VARCHAR2(255));

INSERT INTO
  T1
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T2
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T3
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T4
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)

The above script creates four tables with 1,000,000 rows each.  The first two tables have indexes on the primary key column due to the declared primary key column.  Additionally, the script collects statistics for the first and third tables, but not the second and fourth tables (to help with a couple of demonstrations).

If you are running Oracle 10.1.0.1 or higher, you can use the DBMS_XPLAN.DISPLAY_CURSOR function to display execution plans.  This function is called automatically, starting in Oracle 10.1.0.1, when AUTOTRACE is enabled in SQL*Plus.  DBMS_XPLAN.DISPLAY_CURSOR is the preferred method for displaying execution plans, so we will start with that method.

First, we will generate an execution plan for a SQL statement with the OPTIMIZER_MODE set to ALL_ROWS:

SET LINESIZE 150
ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

 C1 C2
--- ----------
  1 1AAAAAAAAA
  2 2AAAAAAAAA
  3 3AAAAAAAAA
  4 4AAAAAAAAA
  5 5AAAAAAAAA
  6 6AAAAAAAAA
  7 7AAAAAAAAA
  8 8AAAAAAAAA
  9 9AAAAAAAAA
 10 10AAAAAAAA

The typical way to display an execution plan for a SQL statement that was just executed is as follows (note that the third parameter, the FORMAT parameter, could have also been set to NULL to produce the same result):

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

SQL_ID  9dq71tc7vasgu, child number 0
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND
T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |  1893 (100)|          |
|*  1 |  HASH JOIN                   |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  2 |   TABLE ACCESS FULL          | T3           |    10 |    50 |  1888   (4)| 00:00:09 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |    10 |  2600 |     4   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |    10 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

In the above, note that the calculated cost of the execution plan is 1893.  Now, a little more experimentation, this time setting OPTIMIZER_MODE to FIRST_ROWS_1 (Oracle optimizes with the assumption that only the first row will be retrieved, and the rest will likely be discarded):

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

SQL_ID  9dq71tc7vasgu, child number 1
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND
T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   575 (100)|          |
|   1 |  NESTED LOOPS                |              |     2 |   530 |   575   (4)| 00:00:03 |
|*  2 |   TABLE ACCESS FULL          | T3           |     7 |    35 |   568   (4)| 00:00:03 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     1 |   260 |     2   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

In the above, notice that by changing the optimizer mode from ALL_ROWS to FIRST_ROWS_1 a new execution plan was created (the child number has increased by 1 and the Plan hash value has changed) that now uses a nested loops join, rather than a hash join, and the calculated cost has decreased.  You might be wondering why Oracle did not pick this execution plan with the nested loops join, rather than the hash join when the OPTIMIZER_MODE was set to ALL_ROWS, since this plan has a lower calculated cost – we will leave that for another blog article (unless, of course, someone knows the answer and wants to share).  Oddly, the estimated number of rows to be returned from table T3 has decreased when compared to the execution plan with the hash join, and the estimated execution time has also decreased.  But, we really do not know anything about performance from just looking at the above plans.  So, let’s repeat the test again, changing the SQL statement so that we are able to pass in ‘ALLSTATS LAST’ as the format parameter for the DBMS_XPLAN call:

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  ddnbt67ftu9ds, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                   |              |      1 |      9 |     10 |00:00:04.92 |   37144 |  37125 |  1452K|  1452K| 1328K (0)|
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:00.01 |   37138 |  37125 |       |       |          |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

The query completed in 4.92 seconds, retrieved 10 rows, and used 1,328 * 1,024 bytes of memory during the in-memory hash join.  Wow, the full table scan of table T3 completed in 0.01 seconds while reading 37,125 blocks from disk.  If each multiblock disk read were 128 blocks, that would be 290.04 multiblock reads in 0.01 seconds, for an average of  0.000034 seconds per 1MB multiblock read – who needs SSD when you can just read an execution plan incorrectly and make assumptions.  🙂  (Keep reading)

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  ddnbt67ftu9ds, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |      2 |     10 |00:00:00.02 |   37171 |  37125 |
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |      7 |     10 |00:00:00.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10 |00:00:00.01 |      32 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10 |00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

As shown above, with the FIRST_ROWS_1 optimizer mode, the query completed in 0.02 seconds.  But, something is wrong.  The execution plan shows that there were 37,125 blocks read from disk, just like the previous execution plan, which would mean that each of the 1MB physical reads required 0.000068 seconds if we only look at the A-Time column of ID 1.  Who needs SSD when you can just read an execution plan incorrectly and make assumptions – oh, wait, I just said that.

The GATHER_PLAN_STATISTICS hint is helpful because it allows DBMS_XPLAN to output the actual execution statistics for the previous SQL statement.  If you do not want to use that hint, it is also possible to change the STATISTICS_LEVEL parameter to ALL at the session level, as the following demonstrates:

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';
ALTER SESSION SET STATISTICS_LEVEL='ALL';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  9dq71tc7vasgu, child number 2
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 1519387310

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                   |              |      1 |      9 |     10 |00:00:05.59 |   37144 |  37125 |  1452K|  1452K| 1324K (0)|
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:05.59 |   37138 |  37125 |       |       |          |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

Note that the change to the STATISTICS_LEVEL caused another hard parse for the SQL statement, and the child number is now listed as 2.  Unlike the case where the GATHER_PLAN_STATISTICS hint was used, this time we see the actual time for the full table scan, rather than 0.01 seconds.  If you look closely, you will also notice that the execution time increased from 4.92 seconds to 5.59 seconds as a result of changing the STATISTICS_LEVEL parameter.
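The child cursors that have accumulated during this experiment can also be examined directly in V$SQL.  A minimal sketch (the SQL_ID is the one reported in the DBMS_XPLAN output above, and is of course specific to my test system):

```sql
-- List the child cursors built for the test SQL statement as the
-- optimizer environment changed from one execution to the next.
SELECT
  CHILD_NUMBER,
  PLAN_HASH_VALUE,
  OPTIMIZER_MODE,
  EXECUTIONS
FROM
  V$SQL
WHERE
  SQL_ID='9dq71tc7vasgu';
```

Each optimizer environment change should appear as a separate CHILD_NUMBER, with the PLAN_HASH_VALUE column matching the plan hash values shown by DBMS_XPLAN.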

Let’s try again with the FIRST_ROWS_1 value specified for OPTIMIZER_MODE.

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  9dq71tc7vasgu, child number 3
-------------------------------------
SELECT   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN 1 AND 10

Plan hash value: 2674910673

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |      2 |     10 |00:00:04.79 |   37171 |  37125 |
|*  2 |   TABLE ACCESS FULL          | T3           |      1 |      7 |     10 |00:00:04.79 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10 |00:00:00.01 |      32 |      0 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10 |00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("T3"."C1"<=10 AND "T3"."C1">=1))
   4 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=10 AND "T1"."C1">=1))

Note again that there was a hard parse, and the child number increased by 1.  This time, rather than completing in 0.02 seconds, the query required 4.79 seconds, with most of that time attributed to the full table scan.  It is odd that the optimizer predicted that only 2 rows would be returned, rather than 9 or 10 rows.

Let’s try again, this time using bind variables rather than constants (literals):

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';
ALTER SESSION SET STATISTICS_LEVEL='TYPICAL';

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=10

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3807353021

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:05.60 |   37144 |  37125 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |      9 |     10|00:00:05.60 |   37144 |  37125 |  1452K|  1452K| 1328K (0)|
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |     10|00:00:00.69 |   37138 |  37125 |       |       |          |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10|00:00:00.01 |       6 |      0 |       |       |          |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10|00:00:00.01 |       4 |      0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T1"."C1"="T3"."C1")
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)

This is the same execution plan that we saw earlier, with the same cardinality estimates due to bind variable peeking, except that there is now a FILTER operation at ID 1 with a predicate requiring that the N1 bind variable be less than or equal to the N2 bind variable. This predicate is automatically generated by the optimizer from the BETWEEN syntax in the SQL statement.
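As a quick side experiment (a sketch, not part of the original test case), that FILTER operation allows Oracle to short-circuit the query entirely when the bind values make the BETWEEN range empty:

```sql
-- Hypothetical test: make the BETWEEN range empty (:N1 > :N2).  The
-- filter(:N1<=:N2) predicate at ID 1 should evaluate to FALSE, so the
-- Starts column for the HASH JOIN and its child row sources should show 0,
-- and the query should return no rows without touching T1 or T3.
EXEC :N2:=0

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));
```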

Now we change the value of the second bind variable to a much larger value:

EXEC :N2:=500000

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3807353021

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.48 |     105K|  61283 |   4995 |       |       |          |        |
|*  2 |   HASH JOIN                   |              |      1 |      9 |    500K|00:00:17.98 |     105K|  61283 |   4995 |    15M|  3722K|   18M (1)|  43008 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |    500K|00:00:01.51 |   37138 |  37125 |      0 |       |       |          |        |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |    500K|00:00:11.50 |   68480 |  19163 |      0 |       |       |          |        |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |    500K|00:00:01.00 |   25893 |    648 |      0 |       |       |          |        |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T1"."C1"="T3"."C1")
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)

In the above, notice that there was no hard parse (same SQL_ID and child number as we saw earlier), and the E-Rows column is the same for the two DBMS_XPLAN outputs.  The Used-Tmp column indicates that the hash join spilled to disk during the previous execution, using 43008 * 1024 bytes of space in the TEMP tablespace.  Let’s repeat the test with the altered OPTIMIZER_MODE:
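If we wanted to confirm the spill independently of DBMS_XPLAN, one approach (a sketch, not from the original test script) is to query V$SQL_WORKAREA, which records whether each workarea's last execution completed in memory (OPTIMAL) or spilled (ONE PASS or MULTI-PASS), along with the temp segment size used:

```sql
-- LAST_EXECUTION shows OPTIMAL / ONE PASS / MULTI-PASS for the most recent
-- execution of each workarea; LAST_TEMPSEG_SIZE shows the temp space used.
SELECT
  OPERATION_TYPE,
  LAST_EXECUTION,
  LAST_MEMORY_USED,
  LAST_TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA
WHERE
  SQL_ID='cvq22z77c8fww';
```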

EXEC :N2:=10
ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3268213130

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   37171 |  37125 |
|   2 |   NESTED LOOPS                |              |      1 |      2 |     10|00:00:00.01 |   37171 |  37125 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |     10|00:00:00.01 |   37139 |  37125 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |     10 |      1 |     10|00:00:00.01 |      32 |      0 |
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |     10 |      1 |     10|00:00:00.01 |      22 |      0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1))

EXEC :N2:=500000

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  cvq22z77c8fww, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,
   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2

Plan hash value: 3268213130

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.03 |    1603K|  56267 |
|   2 |   NESTED LOOPS                |              |      1 |      2 |    500K|00:00:17.53 |    1603K|  56267 |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |    500K|00:00:04.51 |   70472 |  37125 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |    500K|      1 |    500K|00:00:13.43 |    1533K|  19142 |
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |    500K|      1 |    500K|00:00:04.21 |    1033K|    637 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
   5 - access("T1"."C1"="T3"."C1")
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1))

18.03 seconds to execute the SQL statement with OPTIMIZER_MODE set to FIRST_ROWS_1 and 18.48 seconds with OPTIMIZER_MODE set to ALL_ROWS (and 0.01 seconds compared to 5.60 seconds for the execution retrieving 10 rows).  Obviously, this means that we should be running with OPTIMIZER_MODE set to FIRST_ROWS_1 if we want to optimize performance, right?  In short, no.  Maybe that will be investigated in a later blog article.

Now we turn to the unanalyzed tables.  We modify the original SQL statement using bind variables to reference the two tables that lack up-to-date statistics on the tables and their indexes:

EXEC :N2:=10
ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE   T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 3374068464

-------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   74124 |  36361 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |     10 |     10|00:00:00.01 |   74124 |  36361 |   711K|   711K| 1087K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID| T2           |      1 |     10 |     10|00:00:00.01 |       5 |      1 |       |       |          |
|*  4 |     INDEX RANGE SCAN          | SYS_C0020555 |      1 |     10 |     10|00:00:00.01 |       3 |      0 |       |       |          |
|*  5 |    TABLE ACCESS FULL          | T4           |      1 |    408 |     10|00:00:00.01 |   74119 |  36360 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T2"."C1"="T4"."C1")
   4 - access("T2"."C1">=:N1 AND "T2"."C1"<=:N2)
   5 - filter(("T4"."C1">=:N1 AND "T4"."C1"<=:N2))

Note
-----
   - dynamic sampling used for this statement

The only changes here are that Oracle is now estimating that 10 rows will be returned rather than the 9 we saw earlier, that a note below the Predicate Information section states that dynamic sampling was used, and, oh, that the order of the row sources directly below the HASH JOIN line in the plan has changed (is this a problem?).  Maybe dynamic sampling will be a topic for another blog article, but the topic is discussed in various books and articles on the Internet.
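For the curious, the amount of sampling performed for a specific table can be influenced with the DYNAMIC_SAMPLING hint (this is a sketch of the syntax, not a test that was run for this article; level 0 disables sampling for the hinted table, while higher levels sample more blocks):

```sql
-- Hypothetical variation: request level 4 dynamic sampling for table T4
-- to see whether a larger sample changes the E-Rows estimates.
SELECT /*+ GATHER_PLAN_STATISTICS DYNAMIC_SAMPLING(T4 4) */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;
```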

ALTER SESSION SET OPTIMIZER_MODE='FIRST_ROWS_1';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE   T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 3374068464

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows|   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10|00:00:00.01 |   37143 |  37120 |       |       |          |
|*  2 |   HASH JOIN                   |              |      1 |     10 |     10|00:00:00.01 |   37143 |  37120 |   711K|   711K| 1087K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID| T2           |      1 |     10 |     10|00:00:00.01 |       4 |      0 |       |       |          |
|*  4 |     INDEX RANGE SCAN          | SYS_C0020555 |      1 |     10 |     10|00:00:00.01 |       3 |      0 |       |       |          |
|*  5 |    TABLE ACCESS FULL          | T4           |      1 |   2307 |     10|00:00:00.01 |   37139 |  37120 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   2 - access("T2"."C1"="T4"."C1")
   4 - access("T2"."C1">=:N1 AND "T2"."C1"<=:N2)
   5 - filter(("T4"."C1">=:N1 AND "T4"."C1"<=:N2))

Note
-----
   - dynamic sampling used for this statement

OK, this plan changed a bit from when the SQL statement referenced tables T1 and T3.  The execution plan is no longer using a nested loops join – in fact it is using the same plan as was used when the OPTIMIZER_MODE was set to ALL_ROWS.

Oracle 9i and earlier Oracle releases had a default OPTIMIZER_MODE of CHOOSE, so let’s see what happens when we use that optimizer mode with the same two tables:

ALTER SESSION SET OPTIMIZER_MODE='CHOOSE';

SELECT /*+ GATHER_PLAN_STATISTICS */
  T4.C1,
  SUBSTR(T2.C2,1,10) C2
FROM
  T4,
  T2
WHERE
  T2.C1=T4.C1
  AND T2.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

SQL_ID  94bv1jwkzcc38, child number 2
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 1544755769

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |     10 |00:00:00.02 |    2039K|  39001 |
|   2 |   TABLE ACCESS FULL          | T4           |      1 |   1000K|00:00:02.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2           |   1000K|     10 |00:00:06.80 |    2001K|   1876 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020555 |   1000K|     10 |00:00:04.78 |    2001K|   1876 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."C1"="T4"."C1")
       filter(("T2"."C1"<=:N2 AND "T2"."C1">=:N1))

Note
-----
   - rule based optimizer used (consider using cbo)

Note that the Note section indicates that the rule based optimizer was used, even though the documentation for Oracle 10.2 states that as of Oracle 10.1 the RULE based optimizer is no longer supported.  Also note that the execution plan is now using a nested loops join, the FILTER operation no longer appears, and the full table scan is listed first below the NESTED LOOPS operation, just as it was when the OPTIMIZER_MODE was set to FIRST_ROWS_1 and the query accessed tables T1 and T3.
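With OPTIMIZER_MODE set to CHOOSE, the rule based optimizer is selected when the referenced objects have no statistics, so a quick sanity check (a sketch, assuming the tables belong to the current schema) is to look at when, if ever, the tables were last analyzed:

```sql
-- LAST_ANALYZED is NULL when no statistics exist for a table, which with
-- OPTIMIZER_MODE=CHOOSE explains the fallback to the rule based optimizer.
SELECT
  TABLE_NAME,
  NUM_ROWS,
  LAST_ANALYZED
FROM
  USER_TABLES
WHERE
  TABLE_NAME IN ('T2','T4');
```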

Let’s say that we are bored, and did not read chapter 15 by Pete Finnigan in the “Expert Oracle Practices” book… assume that column C1 contains a credit card number.  Now for an experiment, we will retrieve all child cursors for SQL_ID cvq22z77c8fww with the bind variables that were submitted on the initial hard parse.  Be careful about who has access to this feature in a production environment, as it could expose sensitive information:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('cvq22z77c8fww',NULL,'ALLSTATS LAST +PEEKED_BINDS'));

PLAN_TABLE_OUTPUT                                                                                                                                    
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  cvq22z77c8fww, child number 0                                                                                                                
-------------------------------------                                                                                                                
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE   T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2            

Plan hash value: 3807353021                                                                                                                          

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | Used-Tmp|
------------------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  FILTER                       |              |      1 |        |     10 |00:00:04.28 |   37144 |  37125 |       |       |          |         |
|*  2 |   HASH JOIN                   |              |      1 |      9 |     10 |00:00:04.28 |   37144 |  37125 |  1452K|  1452K| 2315K (0)|         |
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |     10 |     10 |00:00:00.01 |   37138 |  37125 |       |       |          |         |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |      1 |     10 |     10 |00:00:00.01 |       6 |      0 |       |       |          |         |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |      1 |     10 |     10 |00:00:00.01 |       4 |      0 |       |       |          |         |
------------------------------------------------------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):                                                                                                               
--------------------------------------                                                                                                               
   1 - (NUMBER): 1                                                                                                                                   
   2 - (NUMBER): 10                                                                                                                                  

Predicate Information (identified by operation id):                                                                                                  
---------------------------------------------------                                                                                                  
   1 - filter(:N1<=:N2)                                                                                                                              
   2 - access("T1"."C1"="T3"."C1")                                                                                                                   
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))                                                                                                   
   5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)                                                                                                     


SQL_ID  cvq22z77c8fww, child number 1                                                                                                                
-------------------------------------                                                                                                                
SELECT /*+ GATHER_PLAN_STATISTICS */   T3.C1,   SUBSTR(T1.C2,1,10) C2 FROM   T3,   T1 WHERE                                                          
T1.C1=T3.C1   AND T1.C1 BETWEEN :N1 AND :N2                                                                                                          

Plan hash value: 3268213130                                                                                                                          
 
-----------------------------------------------------------------------------------------------------------------                                    
| Id  | Operation                     | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |                                    
-----------------------------------------------------------------------------------------------------------------                                    
|*  1 |  FILTER                       |              |      1 |        |    500K|00:00:18.03 |    1603K|  56267 |                                    
|   2 |   NESTED LOOPS                |              |      1 |      2 |    500K|00:00:17.53 |    1603K|  56267 |                                    
|*  3 |    TABLE ACCESS FULL          | T3           |      1 |      7 |    500K|00:00:04.51 |   70472 |  37125 |                                    
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |    500K|      1 |    500K|00:00:13.43 |    1533K|  19142 |                                    
|*  5 |     INDEX UNIQUE SCAN         | SYS_C0020554 |    500K|      1 |    500K|00:00:04.21 |    1033K|    637 |                                    
-----------------------------------------------------------------------------------------------------------------                                    

Peeked Binds (identified by position):                                                                                                               
--------------------------------------                                                                                                               
   1 - (NUMBER): 1                                                                                                                                   
   2 - (NUMBER): 10                                                                                                                                  

Predicate Information (identified by operation id):                                                                                                  
---------------------------------------------------                                                                                                  
   1 - filter(:N1<=:N2)                                                                                                                              
   3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))                                                                                                   
   5 - access("T1"."C1"="T3"."C1")                                                                                                                   
       filter(("T1"."C1"<=:N2 AND "T1"."C1">=:N1)) 


SQL_ID  94bv1jwkzcc38, child number 2
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */   T4.C1,   SUBSTR(T2.C2,1,10) C2 FROM   T4,
   T2 WHERE   T2.C1=T4.C1   AND T2.C1 BETWEEN :N1 AND :N2

Plan hash value: 1544755769

-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |              |      1 |     10 |00:00:00.02 |    2039K|  39001 |
|   2 |   TABLE ACCESS FULL          | T4           |      1 |   1000K|00:00:02.02 |   37139 |  37125 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2           |   1000K|     10 |00:00:06.80 |    2001K|   1876 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0020555 |   1000K|     10 |00:00:04.78 |    2001K|   1876 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."C1"="T4"."C1")
       filter(("T2"."C1"<=:N2 AND "T2"."C1">=:N1))

Note
-----
   - rule based optimizer used (consider using cbo)

To learn more about DBMS_XPLAN.DISPLAY_CURSOR, see the documentation.

So, what if we want to know why the child cursors were created?  We could do something like this:

DESC V$SQL_SHARED_CURSOR

SET LINESIZE 200
SET HEADING ON
BREAK ON SQL_ID SKIP 1 

SELECT
  *
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='94bv1jwkzcc38';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER USOOSLSEBPISTABDLTRIIRLIOSMUTNFAITDLDBPCSRPTMBMROPMFL
------------- ---------------- ---------------- ------------ -----------------------------------------------------
94bv1jwkzcc38 000007FF9829D308 000007FFA68C1038            0 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
              000007FF9829D308 000007FF94E0D950            1 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYNNNN
              000007FF9829D308 000007FF98046318            2 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYNNNN

The above shows that the child cursors were created because of an optimizer mode mismatch (yes, we changed the OPTIMIZER_MODE).  We could also check the bind variable definitions (not needed in this case, because the undocumented PEEKED_BINDS format parameter of DBMS_XPLAN already showed most of this information):
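Rather than decoding the single-character columns by position, the suspected reason could be listed directly (a sketch; OPTIMIZER_MODE_MISMATCH is one of the Y/N columns of V$SQL_SHARED_CURSOR):

```sql
-- Confirm that the optimizer mode mismatch is what forced the new child
-- cursors: a Y in this column means the cursor could not be shared because
-- the session's optimizer mode differed.
SELECT
  CHILD_NUMBER,
  OPTIMIZER_MODE_MISMATCH
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='94bv1jwkzcc38';
```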

SELECT
  S.CHILD_NUMBER CN,
  SBM.*
FROM
  V$SQL_BIND_METADATA SBM,
  V$SQL S
WHERE
  S.SQL_ID='94bv1jwkzcc38'
  AND S.CHILD_ADDRESS=SBM.ADDRESS
ORDER BY
  S.CHILD_NUMBER,
  SBM.POSITION;

 CN ADDRESS            POSITION   DATATYPE MAX_LENGTH  ARRAY_LEN BIND_NAME
--- ---------------- ---------- ---------- ---------- ---------- ---------
  0 000007FFA68C1038          1          2         22          0 N1
  0 000007FFA68C1038          2          2         22          0 N2
  1 000007FF94E0D950          1          2         22          0 N1
  1 000007FF94E0D950          2          2         22          0 N2
  2 000007FF98046318          1          2         22          0 N1
  2 000007FF98046318          2          2         22          0 N2

OK, now that we have moved off on a tangent, let’s return to the topic of viewing execution plans.  The above examples show the actual execution plan that was used, which may be different from the plan produced by EXPLAIN PLAN.  So, for fun, let’s look at the EXPLAIN PLAN FOR syntax (DBMS_XPLAN.DISPLAY is valid on Oracle 9.2.0.1 and higher):

ALTER SYSTEM FLUSH SHARED_POOL;

ALTER SESSION SET OPTIMIZER_MODE='ALL_ROWS';

EXPLAIN PLAN FOR
SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN 1 AND 10;

Explained.

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY());

Plan hash value: 1519387310

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  1 |  HASH JOIN                   |              |     9 |  2385 |  1893   (4)| 00:00:09 |
|*  2 |   TABLE ACCESS FULL          | T3           |    10 |    50 |  1888   (4)| 00:00:09 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T1           |    10 |  2600 |     4   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | SYS_C0020554 |    10 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T3"."C1")
   2 - filter("T3"."C1"<=10 AND "T3"."C1">=1)
   4 - access("T1"."C1">=1 AND "T1"."C1"<=10)

The above plan appears to be identical to the first of the actual plans.  Now a test with bind variables:
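As a side note, DBMS_XPLAN.DISPLAY accepts optional parameters (plan table name, statement ID, and a format string); the sketch below shows the 'ALL' format, which adds the query block name / object alias section and the column projection information to the basic plan output:

```sql
-- 'ALL' format: basic plan plus query block names, object aliases, and
-- column projection information (useful when comparing plans in detail).
SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY(NULL,NULL,'ALL'));
```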

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=10

EXPLAIN PLAN FOR
SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

Explained.

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY());

Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

Notice in the above that there are TO_NUMBER entries surrounding each of the bind variables in the Predicate Information section, even though those bind variables were declared with the NUMBER data type.  This happens because EXPLAIN PLAN does not peek at the bind variable values or data types, so the binds are treated as character data requiring implicit conversion.  The cost has also increased a bit.

Let’s use AUTOTRACE to see the execution plan (AUTOTRACE starting in Oracle 10.1.0.1 uses DBMS_XPLAN to output the formatted execution plan).

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

Execution Plan
----------------------------------------------------------
Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

OK, the same as the previous example on Oracle 10.1.0.1 and above, complete with the incorrect Predicate Information section.  Again, displaying the runtime statistics and explain plan:

SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN

SELECT
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

10 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3807353021

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  1 |  FILTER                       |              |       |       |            |          |
|*  2 |   HASH JOIN                   |              |  2499 |   646K|  2084   (5)| 00:00:10 |
|*  3 |    TABLE ACCESS FULL          | T3           |  2500 | 12500 |  1905   (5)| 00:00:10 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T1           |  2500 |   634K|   178   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN          | SYS_C0020554 |  4500 |       |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_NUMBER(:N1)<=TO_NUMBER(:N2))
   2 - access("T1"."C1"="T3"."C1")
   3 - filter("T3"."C1">=TO_NUMBER(:N1) AND "T3"."C1"<=TO_NUMBER(:N2))
   5 - access("T1"."C1">=TO_NUMBER(:N1) AND "T1"."C1"<=TO_NUMBER(:N2))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      37144  consistent gets
      37125  physical reads
          0  redo size
        579  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed

Note that the above plan is not necessarily the actual execution plan, even though we are looking at the actual runtime statistics.  This could be confusing, since we are seeing the optimizer’s rough guess at an execution plan paired with the actual execution statistics.
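If the actual execution plan for the last execution is needed, rather than a prediction, one approach (assuming Oracle 10.2 or later, with either the STATISTICS_LEVEL parameter set to ALL or a GATHER_PLAN_STATISTICS hint in the SQL statement) is to ask DBMS_XPLAN.DISPLAY_CURSOR for the plan of the most recently executed cursor in the session – a minimal sketch:

```sql
SET AUTOTRACE OFF

SELECT /*+ GATHER_PLAN_STATISTICS */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));
```

Passing NULL for the SQL_ID and child number causes DBMS_XPLAN.DISPLAY_CURSOR to report on the last statement executed by the session, and the ALLSTATS LAST format places the actual row counts and buffer gets for the last execution next to the optimizer’s estimates.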

For more information about Using EXPLAIN PLAN, see the documentation.

Now, another test.  This time we will instruct Oracle to write the actual execution plan to a trace file every time Oracle must perform a hard parse.  We will force a hard parse by adding a comment to the SQL statement:

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10132_HARD_PARSE';
ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT OFF';

If we take a look at the output in the trace file, we might see something like this:

sql_id=3g3fc5qyju0j3.
Current SQL statement for this session:
SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2

============
Plan Table
============
-----------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name        | Rows  | Bytes | Cost  | Time      |
-----------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT               |             |       |       |  1893 |           |
| 1   |  FILTER                        |             |       |       |       |           |
| 2   |   HASH JOIN                    |             |     9 |  2385 |  1893 |  00:00:09 |
| 3   |    TABLE ACCESS FULL           | T3          |    10 |    50 |  1888 |  00:00:09 |
| 4   |    TABLE ACCESS BY INDEX ROWID | T1          |    10 |  2600 |     4 |  00:00:01 |
| 5   |     INDEX RANGE SCAN           | SYS_C0020554|    10 |       |     3 |  00:00:01 |
-----------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter(:N1<=:N2)
2 - access("T1"."C1"="T3"."C1")
3 - filter(("T3"."C1"<=:N2 AND "T3"."C1">=:N1))
5 - access("T1"."C1">=:N1 AND "T1"."C1"<=:N2)

Content of other_xml column
===========================
  db_version     : 10.2.0.2
  parse_schema   : TESTUSER
  plan_hash      : 3807353021
Peeked Binds
============
  Bind variable information
    position=1
    datatype(code)=2
    datatype(string)=NUMBER
    precision=0
    scale=0
    max length=22
    value=1
  Bind variable information
    position=2
    datatype(code)=2
    datatype(string)=NUMBER
    precision=0
    scale=0
    max length=22
    value=10
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T3"@"SEL$1")
      INDEX(@"SEL$1" "T1"@"SEL$1" ("T1"."C1"))
      LEADING(@"SEL$1" "T3"@"SEL$1" "T1"@"SEL$1")
      USE_HASH(@"SEL$1" "T1"@"SEL$1")
    END_OUTLINE_DATA
  */

Optimizer environment:
  optimizer_mode_hinted               = false
  optimizer_features_hinted           = 0.0.0
  parallel_execution_enabled          = false
  parallel_query_forced_dop           = 0
  parallel_dml_forced_dop             = 0
  parallel_ddl_forced_degree          = 0
  parallel_ddl_forced_instances       = 0
  _query_rewrite_fudge                = 90
  optimizer_features_enable           = 10.2.0.1
  _optimizer_search_limit             = 5
  cpu_count                           = 4
  active_instance_count               = 1
  parallel_threads_per_cpu            = 2
  hash_area_size                      = 131072
  bitmap_merge_area_size              = 1048576
  sort_area_size                      = 65536
  sort_area_retained_size             = 0
  _sort_elimination_cost_ratio        = 0
  _optimizer_block_size               = 8192
  _sort_multiblock_read_count         = 2
  _hash_multiblock_io_count           = 0
  _db_file_optimizer_read_count       = 128
  _optimizer_max_permutations         = 2000
  pga_aggregate_target                = 204800 KB
  _pga_max_size                       = 204800 KB
  _query_rewrite_maxdisjunct          = 257
  _smm_auto_min_io_size               = 56 KB
  _smm_auto_max_io_size               = 248 KB
  _smm_min_size                       = 204 KB
  _smm_max_size                       = 40960 KB
  _smm_px_max_size                    = 102400 KB
  _cpu_to_io                          = 0
  _optimizer_undo_cost_change         = 10.2.0.1
  parallel_query_mode                 = enabled
  parallel_dml_mode                   = disabled
  parallel_ddl_mode                   = enabled
  optimizer_mode                      = all_rows
  sqlstat_enabled                     = false
  _optimizer_percent_parallel         = 101
  _always_anti_join                   = choose
  _always_semi_join                   = choose
  _optimizer_mode_force               = true
  _partition_view_enabled             = true
  _always_star_transformation         = false
  _query_rewrite_or_error             = false
  _hash_join_enabled                  = true
  cursor_sharing                      = exact
  _b_tree_bitmap_plans                = true
  star_transformation_enabled         = false
  _optimizer_cost_model               = choose
  _new_sort_cost_estimate             = true
  _complex_view_merging               = true
  _unnest_subquery                    = true
  _eliminate_common_subexpr           = true
  _pred_move_around                   = true
  _convert_set_to_join                = false
  _push_join_predicate                = true
  _push_join_union_view               = true
  _fast_full_scan_enabled             = true
  _optim_enhance_nnull_detection      = true
  _parallel_broadcast_enabled         = true
  _px_broadcast_fudge_factor          = 100
  _ordered_nested_loop                = true
  _no_or_expansion                    = false
  optimizer_index_cost_adj            = 100
  optimizer_index_caching             = 0
  _system_index_caching               = 0
  _disable_datalayer_sampling         = false
  query_rewrite_enabled               = true
  query_rewrite_integrity             = enforced
  _query_cost_rewrite                 = true
  _query_rewrite_2                    = true
  _query_rewrite_1                    = true
  _query_rewrite_expression           = true
  _query_rewrite_jgmigrate            = true
  _query_rewrite_fpc                  = true
  _query_rewrite_drj                  = true
  _full_pwise_join_enabled            = true
  _partial_pwise_join_enabled         = true
  _left_nested_loops_random           = true
  _improved_row_length_enabled        = true
  _index_join_enabled                 = true
  _enable_type_dep_selectivity        = true
  _improved_outerjoin_card            = true
  _optimizer_adjust_for_nulls         = true
  _optimizer_degree                   = 0
  _use_column_stats_for_function      = true
  _subquery_pruning_enabled           = true
  _subquery_pruning_mv_enabled        = false
  _or_expand_nvl_predicate            = true
  _like_with_bind_as_equality         = false
  _table_scan_cost_plus_one           = true
  _cost_equality_semi_join            = true
  _default_non_equality_sel_check     = true
  _new_initial_join_orders            = true
  _oneside_colstat_for_equijoins      = true
  _optim_peek_user_binds              = true
  _minimal_stats_aggregation          = true
  _force_temptables_for_gsets         = false
  workarea_size_policy                = auto
  _smm_auto_cost_enabled              = true
  _gs_anti_semi_join_allowed          = true
  _optim_new_default_join_sel         = true
  optimizer_dynamic_sampling          = 2
  _pre_rewrite_push_pred              = true
  _optimizer_new_join_card_computation = true
  _union_rewrite_for_gs               = yes_gset_mvs
  _generalized_pruning_enabled        = true
  _optim_adjust_for_part_skews        = true
  _force_datefold_trunc               = false
  statistics_level                    = typical
  _optimizer_system_stats_usage       = true
  skip_unusable_indexes               = true
  _remove_aggr_subquery               = true
  _optimizer_push_down_distinct       = 0
  _dml_monitoring_enabled             = true
  _optimizer_undo_changes             = false
  _predicate_elimination_enabled      = true
  _nested_loop_fudge                  = 100
  _project_view_columns               = true
  _local_communication_costing_enabled = true
  _local_communication_ratio          = 50
  _query_rewrite_vop_cleanup          = true
  _slave_mapping_enabled              = true
  _optimizer_cost_based_transformation = linear
  _optimizer_mjc_enabled              = true
  _right_outer_hash_enable            = true
  _spr_push_pred_refspr               = true
  _optimizer_cache_stats              = false
  _optimizer_cbqt_factor              = 50
  _optimizer_squ_bottomup             = true
  _fic_area_size                      = 131072
  _optimizer_skip_scan_enabled        = true
  _optimizer_cost_filter_pred         = false
  _optimizer_sortmerge_join_enabled   = true
  _optimizer_join_sel_sanity_check    = true
  _mmv_query_rewrite_enabled          = true
  _bt_mmv_query_rewrite_enabled       = true
  _add_stale_mv_to_dependency_list    = true
  _distinct_view_unnesting            = false
  _optimizer_dim_subq_join_sel        = true
  _optimizer_disable_strans_sanity_checks = 0
  _optimizer_compute_index_stats      = true
  _push_join_union_view2              = true
  _optimizer_ignore_hints             = false
  _optimizer_random_plan              = 0
  _query_rewrite_setopgrw_enable      = true
  _optimizer_correct_sq_selectivity   = true
  _disable_function_based_index       = false
  _optimizer_join_order_control       = 3
  _optimizer_cartesian_enabled        = true
  _optimizer_starplan_enabled         = true
  _extended_pruning_enabled           = true
  _optimizer_push_pred_cost_based     = true
  _sql_model_unfold_forloops          = run_time
  _enable_dml_lock_escalation         = false
  _bloom_filter_enabled               = true
  _update_bji_ipdml_enabled           = 0
  _optimizer_extended_cursor_sharing  = udo
  _dm_max_shared_pool_pct             = 1
  _optimizer_cost_hjsmj_multimatch    = true
  _optimizer_transitivity_retain      = true
  _px_pwg_enabled                     = true
  optimizer_secure_view_merging       = true
  _optimizer_join_elimination_enabled = true
  flashback_table_rpi                 = non_fbt
  _optimizer_cbqt_no_size_restriction = true
  _optimizer_enhanced_filter_push     = true
  _optimizer_filter_pred_pullup       = true
  _rowsrc_trace_level                 = 0
  _simple_view_merging                = true
  _optimizer_rownum_pred_based_fkr    = true
  _optimizer_better_inlist_costing    = all
  _optimizer_self_induced_cache_cost  = false
  _optimizer_min_cache_blocks         = 10
  _optimizer_or_expansion             = depth
  _optimizer_order_by_elimination_enabled = true
  _optimizer_outer_to_anti_enabled    = true
  _selfjoin_mv_duplicates             = true
  _dimension_skip_null                = true
  _force_rewrite_enable               = false
  _optimizer_star_tran_in_with_clause = true
  _optimizer_complex_pred_selectivity = true
  _optimizer_connect_by_cost_based    = false
  _gby_hash_aggregation_enabled       = true
  _globalindex_pnum_filter_enabled    = false
  _fix_control_key                    = 0
  _optimizer_skip_scan_guess          = false
  _enable_row_shipping                = false
  *********************************
  Bug Fix Control Environment
  ***************************
  fix  4611850 = disabled
  fix  4663804 = disabled
  fix  4663698 = disabled
  fix  4545833 = disabled
  fix  3499674 = disabled
  fix  4584065 = disabled
  fix  4602374 = disabled
  fix  4569940 = enabled
  fix  4631959 = disabled
  fix  4519340 = disabled
  fix  4550003 = enabled
  fix  4488689 = disabled
  fix  3118776 = enabled
  fix  4519016 = enabled
  fix  4487253 = enabled
  fix  4556762 = 0      
  fix  4728348 = disabled
  fix  4723244 = disabled
  fix  4554846 = disabled
  fix  4175830 = enabled
  fix  5240607 = disabled
  fix  4722900 = enabled
Query Block Registry:
*********************
SEL$1 0x122d9358 (PARSER) [FINAL]
Optimizer State Dump: call(in-use=53568, alloc=81816), compile(in-use=76816, alloc=126256)

Note in the above that we are able to see the actual execution plan, the peeked bind variables, the set of hints that will reproduce the execution plan, and a large number of normal and hidden optimizer parameters.

We could generate a 10053 trace at level 1 to determine why the above plan was selected, but we will skip that for now.
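If you do want to see the optimizer’s reasoning, a 10053 trace may be enabled much like the 10132 trace above – a minimal sketch (the comment in the SQL statement is changed so that another hard parse, and therefore the trace, is triggered):

```sql
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10053_HARD_PARSE';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /* TEST_10053 */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
```

The resulting trace file lists, among other things, the system statistics, the single table access paths with their calculated costs, and the join orders that were evaluated.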

An example with a 10046 trace at level 4 (level 4 captures the bind variable values; we must execute another SQL statement after the SQL statement under investigation so that the cursor closes and the STAT lines for our SQL statement are written to the trace file):

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046_EXECUTION_PLAN';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 4';

SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT SYSDATE FROM DUAL;

The resulting trace file looks something like this (you did read Chapter 15 of the book, didn’t you – so be sure to delete the trace files when no longer needed):

=====================
PARSING IN CURSOR #34 len=118 dep=0 uid=31 oct=3 lid=31 tim=2087403486 hv=3172794915 ad='9811b110'
SELECT /* TEST */
  T3.C1,
  SUBSTR(T1.C2,1,10) C2
FROM
  T3,
  T1
WHERE
  T1.C1=T3.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #34:c=0,e=162,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2087403479
BINDS #34:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13beb440  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13beb458  bln=22  avl=02  flg=01
  value=10
EXEC #34:c=0,e=170,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2087403808
FETCH #34:c=1078125,e=3875649,p=37124,cr=37142,cu=0,mis=0,r=1,dep=0,og=1,tim=2091279513
FETCH #34:c=0,e=180,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=2091280273
STAT #34 id=1 cnt=10 pid=0 pos=1 obj=0 op='FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)'
STAT #34 id=2 cnt=10 pid=1 pos=1 obj=0 op='HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)'
STAT #34 id=3 cnt=10 pid=2 pos=1 obj=114232 op='TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)'
STAT #34 id=4 cnt=10 pid=2 pos=2 obj=114228 op='TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)'
STAT #34 id=5 cnt=10 pid=4 pos=1 obj=114229 op='INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)'
=====================

We could, of course, just read the actual plan directly from the 10046 trace file, as well as the bind variable values and data types.  But we will use TKPROF instead (note that the bind variable values are not visible in the TKPROF output).

C:\> tkprof test_ora_4436_10046_execution_plan.trc test_ora_4436_10046_execution_plan.txt

The output might look like this:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      1.07       3.87      37124      37144          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.07       3.87      37124      37144          0          10

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 31 

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)
     10   HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)
     10    TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)
     10    TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)
     10     INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)(object id 114229)

Of course, it seems to be a little too common that some people will try using EXPLAIN in TKPROF rather than just working with the Row Source Operation lines – the lines that show what really happened (most of the time, at least – unless of course you read this article):

C:\>tkprof test_ora_4436_10046_execution_plan.trc test_ora_4436_10046_execution_plan.txt EXPLAIN=TESTUSER/TESTPASS

Note that the resulting output could be very confusing if the “Row Source Operation” plan is completely different from the “Execution Plan”, since the first plan is the actual execution plan, while the second is essentially just an EXPLAIN PLAN FOR type of execution plan:

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  FILTER  (cr=37144 pr=37124 pw=0 time=3875627 us)
     10   HASH JOIN  (cr=37144 pr=37124 pw=0 time=3875620 us)
     10    TABLE ACCESS FULL T3 (cr=37138 pr=37124 pw=0 time=19345 us)
     10    TABLE ACCESS BY INDEX ROWID T1 (cr=6 pr=0 pw=0 time=66 us)
     10     INDEX RANGE SCAN SYS_C0020554 (cr=4 pr=0 pw=0 time=37 us)(object id 114229)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: ALL_ROWS
     10   FILTER
     10    HASH JOIN
     10     TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T3' (TABLE)
     10     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'T1'
                (TABLE)
     10      INDEX   MODE: ANALYZED (RANGE SCAN) OF 'SYS_C0020554'
                 (INDEX (UNIQUE))

So hopefully, you now know what the plan is, how to find it, and what types of messes (faulty information) to avoid accidentally stepping in.





Explain Plan – Which Plan is Better

29 01 2010

January 29, 2010

A recent post appeared in the OTN forums that indirectly asked the question: which execution plan is better?  The execution plans follow:

The Unhinted Execution Plan:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |  1285M|   326G|       |    45M  (1)|178:06:59 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |  1285M|   326G|  5153M|    45M  (1)|178:06:59 |       |       |
|   3 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
|*  4 |    HASH JOIN           |             |  1261M|   287G|  2857M|    32M  (1)|124:52:03 |       |       |
|   5 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|*  6 |     HASH JOIN          |             |  1241M|   252G|  1727M|    20M  (1)| 78:33:50 |       |       |
|   7 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   8 |      PARTITION HASH ALL|             |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
|   9 |       TABLE ACCESS FULL| A           |  1241M|   227G|       |  3452K  (4)| 13:25:29 |     1 |    64 |
--------------------------------------------------------------------------------------------------------------

The Hinted Execution Plan that Sets the Cardinality for Table A to 10M Rows:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT       |             |    10M|  2696M|       |  4578K  (1)| 17:48:26 |       |       |
|   1 |  LOAD AS SELECT        | E           |       |       |       |            |          |       |       |
|*  2 |   HASH JOIN            |             |    10M|  2696M|  2491M|  4578K  (1)| 17:48:26 |       |       |
|*  3 |    HASH JOIN           |             |    10M|  2374M|  2193M|  3996K  (1)| 15:32:36 |       |       |
|*  4 |     HASH JOIN          |             |    10M|  2079M|  1727M|  3636K  (1)| 14:08:30 |       |       |
|   5 |      TABLE ACCESS FULL | B           |    54M|  1099M|       | 23217   (4)| 00:05:26 |       |       |
|   6 |      PARTITION HASH ALL|             |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   7 |       TABLE ACCESS FULL| A           |    10M|  1878M|       |  3362K  (1)| 13:04:42 |     1 |    64 |
|   8 |     TABLE ACCESS FULL  | C           |    76M|  1978M|       |   143K  (2)| 00:33:33 |       |       |
|   9 |    TABLE ACCESS FULL   | D           |   135M|  3607M|       |   254K  (2)| 00:59:17 |       |       |
--------------------------------------------------------------------------------------------------------------

The Original Poster Stated Both Plans have the Same Predicates:

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(A."ID"="D"."ID")
   3 - access("A"."E_ID"="C"."E_ID")
   4 - access("A"."M_ID"="B"."M_ID")

So, which execution plan is better?  Are the plans the same?  How are they the same, and how do they differ?

While we contemplate which execution plan is optimal, the OTN thread took a slight detour into a discussion of work areas in Oracle:

“Can you please help understanding workarea.”

A search of the documentation found this page that offered the following definition:

“work area: A private allocation of memory used for sorts, hash joins, and other operations that are memory-intensive. A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.”

There may be multiple active work areas in a single SQL statement. While not the original purpose of this blog article, the article does show how to see the amount of memory in use for active work areas.  If you have a copy of the book “Troubleshooting Oracle Performance“, I highly recommend that you read pages 434 through 439 if you are curious about Oracle work areas.  Those pages describe how hash joins work and provide a detailed description of work areas.
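For example, while a long-running hash join is executing, a second session might monitor the active work areas with a query similar to the following – a minimal sketch against V$SQL_WORKAREA_ACTIVE (the column list assumes Oracle 10.2 or later):

```sql
SELECT
  SID,
  OPERATION_TYPE,
  ACTUAL_MEM_USED,
  MAX_MEM_USED,
  NUMBER_PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE
ORDER BY
  SID;
```

A non-zero NUMBER_PASSES (or a non-NULL TEMPSEG_SIZE) indicates that the work area spilled to the temp tablespace as a one-pass or multi-pass operation rather than completing entirely in memory.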

Back to the original question.  While reading the plans, keep in mind that you are only looking at the optimizer’s estimates for the number of rows, time, memory usage, temp tablespace usage, and costs.  If you are attempting to conclude which plan is faster/better based on the estimates in the first plan and an altered plan with a hinted cardinality estimate, you might be setting yourself up for failure.  Note that the first plan has a calculated cost of about 45,000,000 while the second plan has a calculated cost of about 4,578,000.  So obviously, the second plan is more efficient.  Or is it?  With the cardinality hint, the OP has effectively changed the number of rows that the optimizer expects to be returned from table A from roughly 1,241,000,000 to 10,000,000.  Additionally, one should not directly compare the calculated cost of one execution plan with that of a second execution plan.  You probably should be thinking to yourself at this point: “Have you considered actually testing the performance?”

In the OTN thread Timur stated that both plans use the very same join order: B->A->C->D.  Based on my understanding of execution plans, this is a correct statement, even though the plans look a bit different.  Note that the OP was slightly incorrect in stating that the Predicate Information sections for the two plans were identical – the operation ID numbers should have been a bit different.

(Confession: I re-read the section of the book “Troubleshooting Oracle Performance” that discussed hash joins before providing the following response.)  Essentially, the difference between the two plans is which table (or row source) is the build input, and which is the probe input.  The first table (or row source) listed below the words HASH JOIN is the source for the hash table (the optimizer typically tries to select the row source with the smaller estimated size as the source for the hash table).  The second table (or row source) is fully scanned, probing the generated hash table in search of a match.  By artificially altering the optimizer’s estimate of the rows to be returned from table A, the OP has flipped which table (or row source) is the build input and which is the probe input at each hash join – this could significantly increase, significantly decrease, or have no impact on the amount of time required for the query to execute, the amount of memory used, or the amount of temp space needed.

My suggestion to the OP is to test the performance to see which execution plan is more efficient, rather than guessing. My blog article that is referenced above has SQL statements that may be used to see the number of work areas that are active at any point, as well as the amount of RAM and temp space in use. You could continue to guess about which plan is better, but why guess?
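For instance, once both variants of the statement have actually been executed, the cumulative statistics for the two cursors may be compared side by side – a minimal sketch against V$SQL (the LIKE pattern is only a placeholder; substitute something that uniquely identifies the two statements):

```sql
SELECT
  SQL_ID,
  CHILD_NUMBER,
  EXECUTIONS,
  ELAPSED_TIME,
  BUFFER_GETS,
  DISK_READS,
  DIRECT_WRITES
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'INSERT%';
```

ELAPSED_TIME is reported in microseconds, and the statistics columns are cumulative for the life of each cursor, so divide by EXECUTIONS when a cursor has been executed more than once.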





Database using ASSM Tablespace Exhibits Slow Insert Performance After an Uncommitted Delete

28 01 2010

January 28, 2010 (Updated January 19, 2011)

In August 2009 an interesting test case appeared in the comp.databases.oracle.server Usenet group, with a follow-up post in a second thread.  The test case that was posted follows:

1. Create tablespace, it uses default 8K block size

create tablespace assm
extent management local uniform size 1m
segment space management auto
datafile
'/abc/db01/oracle/ABC1P/oradata/assm_01.dbf' size 1000m;

2. Create table

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace assm;

3. Populate table with 1,000,000 rows, COMMIT at the end

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

4. Insert an additional 1,000 rows into the table using ***SINGLE_ROW*** inserts.  I used the following script to generate the INSERT statements (don’t forget to execute the resulting INSERT statements)

select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
        test_assm
where
        rownum < 1001;

It took 1 second to insert 1000 rows through single-row inserts.

5. Delete all rows from the table, don’t commit

6. Re-execute the script that inserts 1,000 rows from a different session.  Runtime > 20 min.  There were no indexes on the table.

An insert into a table containing an uncommitted DELETE should not be significantly slower than an insert into a table without the DELETE.
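Before looking at the results, note that one way to watch what ASSM is doing to the table’s blocks at each step is DBMS_SPACE.SPACE_USAGE, which reports the number of blocks in each ASSM freespace state – a minimal sketch, assuming the table is owned by a schema named TESTUSER:

```sql
SET SERVEROUTPUT ON

DECLARE
  V_UNFORMATTED_BLOCKS NUMBER;
  V_UNFORMATTED_BYTES  NUMBER;
  V_FS1_BLOCKS  NUMBER;
  V_FS1_BYTES   NUMBER;
  V_FS2_BLOCKS  NUMBER;
  V_FS2_BYTES   NUMBER;
  V_FS3_BLOCKS  NUMBER;
  V_FS3_BYTES   NUMBER;
  V_FS4_BLOCKS  NUMBER;
  V_FS4_BYTES   NUMBER;
  V_FULL_BLOCKS NUMBER;
  V_FULL_BYTES  NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE('TESTUSER','TEST_ASSM','TABLE',
    V_UNFORMATTED_BLOCKS, V_UNFORMATTED_BYTES,
    V_FS1_BLOCKS, V_FS1_BYTES,
    V_FS2_BLOCKS, V_FS2_BYTES,
    V_FS3_BLOCKS, V_FS3_BYTES,
    V_FS4_BLOCKS, V_FS4_BYTES,
    V_FULL_BLOCKS, V_FULL_BYTES);
  DBMS_OUTPUT.PUT_LINE('FS1 (0-25% free):   '||V_FS1_BLOCKS);
  DBMS_OUTPUT.PUT_LINE('FS2 (25-50% free):  '||V_FS2_BLOCKS);
  DBMS_OUTPUT.PUT_LINE('FS3 (50-75% free):  '||V_FS3_BLOCKS);
  DBMS_OUTPUT.PUT_LINE('FS4 (75-100% free): '||V_FS4_BLOCKS);
  DBMS_OUTPUT.PUT_LINE('Full blocks:        '||V_FULL_BLOCKS);
END;
/
```

Running the block before and after the uncommitted DELETE shows how many blocks ASSM currently considers full versus mostly free, which may help when investigating why the second session spends so long searching for a block with usable free space.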

My test results follow.

I executed the test on 64-bit Windows with a fairly slow disk system (little front end caching from the disk subsystem) running Oracle 11.1.0.7, 8KB block size, with the __DB_CACHE_SIZE currently floating at 0.9375GB due to a much larger DB_KEEP_CACHE_SIZE value.  What do I see?

(Edit Jan 19, 2011: Script added to create the c:\insertstatements.sql file)

set linesize 1000
set trimspool on
set pagesize 2000
spool c:\insertstatements.sql

select
'insert into test_assm values(' || n1 ||
',''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'',' ||
'''123456789*123456789*123456789*123456789*1234567'');'
from
        test_assm
where
        rownum < 1001;

spool off 

(Remove the creation SQL statement and the header row from the c:\insertstatements.sql file.)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_ASSM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql
 
DELETE FROM TEST_ASSM;
 
@c:\insertstatements.sql

1 row created.
Elapsed: 00:00:20.92
1 row created.
Elapsed: 00:00:15.98
1 row created.
Elapsed: 00:00:13.52
1 row created.
...
Elapsed: 00:00:12.41
1 row created.
Elapsed: 00:00:11.84
1 row created.
Elapsed: 00:00:12.32
1 row created.
...

Interesting… the inserts become faster as more blocks are cached.

So, what is in the trace file?

PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
WAIT #3: nam='db file sequential read' ela= 387 file#=7 block#=1900683 blocks=1 obj#=67153 tim=220841945532
WAIT #3: nam='db file sequential read' ela= 72 file#=7 block#=1900684 blocks=1 obj#=67153 tim=220841945656
WAIT #3: nam='db file sequential read' ela= 14610 file#=7 block#=1900685 blocks=1 obj#=67153 tim=220841960301
...
WAIT #3: nam='db file sequential read' ela= 28 file#=7 block#=1972309 blocks=1 obj#=67153 tim=220862843585
WAIT #3: nam='db file sequential read' ela= 29 file#=7 block#=1972325 blocks=1 obj#=67153 tim=220862843638
WAIT #3: nam='db file sequential read' ela= 69 file#=7 block#=1972341 blocks=1 obj#=67153 tim=220862843732
WAIT #3: nam='db file sequential read' ela= 41 file#=7 block#=1972102 blocks=1 obj#=67153 tim=220862843817
EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'

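The ela= values in the WAIT lines above are reported in microseconds.  A quick way to reduce a long 10046 trace to numbers is to sum those values; this is a minimal Python sketch (not part of the original test) applied to a few of the WAIT lines shown above:

```python
import re

# A few 'db file sequential read' WAIT lines copied from the 10046 trace above.
trace_lines = """\
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
""".splitlines()

# In 11g the ela= values are microseconds; convert to seconds.
wait_re = re.compile(r"nam='db file sequential read' ela=\s*(\d+)")
waits = [int(m.group(1)) / 1_000_000
         for line in trace_lines
         if (m := wait_re.search(line))]

print(f"waits: {len(waits)}  total: {sum(waits):.6f}s  "
      f"min: {min(waits):.6f}s  max: {max(waits):.6f}s")
```

The same regular expression, pointed at the full trace file rather than an inline string, produces the wait event summary shown a little later.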
Looks like a lot of single block reads, some in the range of 0.017613 seconds, others in the range of 0.000028 seconds. A summary of the first insert looks like this:

First Reference: Cursor 3   Ver 1   Parse at 0.000000
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000000|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|
|EXECs        1|CPU S    3.759624|CLOCK S   20.904025|ROWs        1| PHY RD BLKs     69802|CON RD BLKs (Mem)     69793|CUR RD BLKs (Mem)     83979|
|FETCHs       0|CPU S    0.000000|CLOCK S    0.000000|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|

  CPU S 100.00%  CLOCK S 100.00%
  *   18.032425 seconds of time related data file I/O
  *    0.001392 seconds of time related to client/network events

Wait Event Summary:
db file sequential read            18.032425  On DB Server        Min Wait:     0.000022  Avg Wait:     0.000258  Max Wait:     0.071639
SQL*Net message to client           0.000003  On Client/Network   Min Wait:     0.000003  Avg Wait:     0.000003  Max Wait:     0.000003
SQL*Net message from client         0.001389  On Client/Network   Min Wait:     0.001389  Avg Wait:     0.001389  Max Wait:     0.001389

69,802 physical block reads, 69,793 consistent gets, 83,979 current mode gets, and 18.03 seconds spent performing single block reads.  This seems to behave similarly to the bug that Jonathan Lewis found with ASSM 16KB block size tablespaces in 2008, when column values in existing rows were changed from NULL to a value.  In that case, the current mode gets were the tipoff that there was a problem.
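The average wait time in the summary can be sanity-checked against the EXEC statistics: nearly every physical block read corresponds to one 'db file sequential read' wait, so dividing the total wait time by the physical read count should reproduce the reported average.

```python
# Figures taken directly from the trace summary above.
total_wait_s = 18.032425   # total 'db file sequential read' time
physical_reads = 69_802    # PHY RD BLKs for the EXEC call

avg_wait = total_wait_s / physical_reads
print(f"average single-block read: {avg_wait:.6f}s")  # matches the 0.000258 Avg Wait
```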

I repeated the test with an ASSM tablespace with 1MB uniform extents.  The first insert performed 71,250 physical block reads, 71,206 consistent gets, and 85,473 current mode gets, spending 18.85 seconds performing single block reads, with an elapsed time of 21.53 seconds and, for some reason, 0 CPU seconds (the next insert reported 3.59 CPU seconds).

I also repeated the test with a locally managed tablespace with 1MB uniform extents without ASSM: “SIZE 2G REUSE AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL”.  The results in the final test were a little disappointing (the slow behavior did not reproduce).  The totals from the script execution for all of the inserts:

Total for Trace File:
|PARSEs    1003|CPU S    0.234002|CLOCK S    0.312034|ROWs        0| PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|
|EXECs     1003|CPU S    0.031200|CLOCK S    0.062434|ROWs     1002| PHY RD BLKs         0|CON RD BLKs (Mem)      1051|CUR RD BLKs (Mem)      1343|
|FETCHs       2|CPU S    0.000000|CLOCK S    0.000000|ROWs        1| PHY RD BLKs         0|CON RD BLKs (Mem)         3|CUR RD BLKs (Mem)         0|

Wait Event Summary:
SQL*Net message to client           0.001472  On Client/Network   Min Wait:     0.000001  Avg Wait:     0.000001  Max Wait:     0.000076
SQL*Net message from client         0.683966  On Client/Network   Min Wait:     0.000402  Avg Wait:     0.000684  Max Wait:     0.001799

In total, the inserts performed 0 physical block reads, 1,051 consistent gets, and 1,343 current mode gets, spending 0 seconds performing single block reads, with an elapsed time of 0.374468 seconds (0.312034 of that for parsing) and 0.265202 CPU seconds (0.234002 of that for parsing).
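To put the two runs side by side: the first single-row insert in the ASSM test took longer than all 1,000 inserts in the manual segment space management test combined.  A quick arithmetic check using the figures reported above:

```python
# Elapsed times taken from the test results above.
assm_first_insert_s = 20.904025  # first insert in the ASSM test
mssm_all_inserts_s = 0.374468    # ALL inserts in the manual (non-ASSM) test

ratio = assm_first_insert_s / mssm_all_inserts_s
print(f"one ASSM insert took about {ratio:.0f} times as long "
      f"as all non-ASSM inserts combined")
```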

A couple of additional tests, since some of the posters in the Usenet thread reported different behavior.

CREATE SMALLFILE TABLESPACE "LOCAL_UNIFORM1M" DATAFILE 'C:\ORACLE\ORADATA\OR11\locun1MOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON NEXT 10M
    MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL;

CREATE SMALLFILE TABLESPACE "LOCAL_ASSM" LOGGING DATAFILE 'C:\Oracle\OraData\OR11\locassmOR1101.dbf' SIZE 2G REUSE AUTOEXTEND ON NEXT 10M
    MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SELECT
  TABLESPACE_NAME,
  BLOCK_SIZE,
  ALLOCATION_TYPE,SEGMENT_SPACE_MANAGEMENT,EXTENT_MANAGEMENT
FROM DBA_TABLESPACES;

TABLESPACE_NAME BLOCK_SIZE ALLOCATIO SEGMEN EXTENT_MAN
--------------- ---------- --------- ------ ----------
LOCAL_UNIFORM1M       8192 UNIFORM   MANUAL LOCAL
LOCAL_ASSM            8192 SYSTEM    AUTO   LOCAL

We now have a new locally managed tablespace with 1MB extents not using ASSM, and another new tablespace using ASSM with autoallocated extents (my original test used an old ASSM autoallocate tablespace containing other data).

(Session 1)

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_UNIFORM1M;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

Build the insertstatements.sql file using the select statement provided by the OP, which will include statements like the following:

insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
insert into test_assm values
  (19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567');
...


@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCAL1UM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Using the LOCAL_UNIFORM1M tablespace, the insert completed in less than a second.

Reconnect session 2

I repeated the test with the KEEP pool at 1MB, which allowed the default buffer pool to grow:
(Session 1)

DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement reported an elapsed time of 0.14 to 0.15 seconds.  __DB_CACHE_SIZE floated to 7,449,083,904 bytes.

Reconnect session 2

Repeating the test again with a smaller __DB_CACHE_SIZE:
(Session 1)

DROP TABLE TEST_ASSM PURGE;
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE=6G;
create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM2';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement after the first reported an elapsed time of 0.14 to 0.15 seconds.  __DB_CACHE_SIZE floated to 1,073,741,824 bytes.

The execution time was about the same as with the larger __DB_CACHE_SIZE.  Apparently only the first insert experienced a large number of ‘db file sequential read’ waits, totaling about 28 seconds based on the timing reported in SQL*Plus.

What if we flood the KEEP and DEFAULT buffer pools:

(Session 3 connected as SYS)

SET LINESIZE 150
SET PAGESIZE 10000
SPOOL C:\TABLES.SQL

SELECT
  'SELECT * FROM '||OWNER||'.'||TABLE_NAME||' ORDER BY 1;' T
FROM
  DBA_TABLES;

SPOOL OFF

Clean up the C:\TABLES.SQL file.

SET AUTOTRACE TRACEONLY STATISTICS;

@C:\TABLES.SQL

SET AUTOTRACE OFF

(Session 1)

DROP TABLE TEST_ASSM PURGE;

create table test_assm
(
 n1 number,
 v1 varchar2(50),
 v2 varchar2(50),
 v3 varchar2(50),
 v4 varchar2(50),
 v5 varchar2(50),
 v6 varchar2(50),
 v7 varchar2(50),
 v8 varchar2(50),
 v9 varchar2(50),
v10 varchar2(50)
)
tablespace LOCAL_ASSM;

begin
for i in 1..1000000 loop
insert into test_assm values
(i,
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567',
'123456789*123456789*123456789*123456789*1234567');
end loop;
end;
/

COMMIT;

@c:\insertstatements.sql

DELETE FROM test_assm;

(Session 2)

SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'FIND_ME_TEST_LOCALAM3';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

@c:\insertstatements.sql

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

EXIT

Each insert statement after the first reported an elapsed time of 0.17 to 0.19 seconds.  OK, that increased the time slightly, but not as much as was seen earlier.

Maybe it has to do with the process ID – luck of the draw regarding which blocks session 2 attempts to insert into, due to the way ASSM attempts to reduce block contention?  I repeated the test again using the same old ASSM tablespace that I used earlier – insert times for the second session were roughly 0.15 seconds each after the first insert completed.  Of course, I bounced the database since the test run the day before, so maybe that has an impact?

The first couple of EXEC and STAT lines from the first of the most recent traces with the 6GB KEEP pool in effect:

EXEC #1:c=3541222,e=26125284,p=54865,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=314231018338
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=54865 pw=0 time=0 us)'
EXEC #2:c=171601,e=187295,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231205633
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
EXEC #1:c=156001,e=155942,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314231361575
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
EXEC #2:c=171602,e=156113,p=0,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=314231517688
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'

The first couple of lines from the previous day’s trace file also with the 6GB KEEP pool in effect:

EXEC #3:c=3759624,e=20904025,p=69802,cr=69793,cu=83979,mis=0,r=1,dep=0,og=1,plh=0,tim=220862828163
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #2:c=3978025,e=15984033,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220878812196
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #1:c=3666024,e=13540824,p=69802,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220892353020
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #3:c=3744024,e=13634412,p=69802,cr=69793,cu=83959,mis=0,r=1,dep=0,og=1,plh=0,tim=220905987432
STAT #3 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69802 pw=0 time=0 us)'
EXEC #2:c=3650423,e=13447212,p=69803,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=220919434644
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=69803 pw=0 time=0 us)'

These are partial outputs from the 10046 trace files captured yesterday and today, which targeted a pre-existing ASSM tablespace with roughly the same size default buffer cache in effect (note that the database was bounced between runs, and that may be a source of the time difference).
Yesterday:

PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220841924138 hv=471712922 ad='2778b31b8' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220841924138
WAIT #3: nam='db file sequential read' ela= 17613 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220841943750
WAIT #3: nam='db file sequential read' ela= 458 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220841944275
WAIT #3: nam='db file sequential read' ela= 617 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220841944980
WAIT #3: nam='db file sequential read' ela= 73 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220841945113
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=220862828163 hv=1479200138 ad='2778aa958' sqlid='6su1s3tc2pmca'
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220862828163
WAIT #2: nam='db file sequential read' ela= 4517 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220862850571
WAIT #2: nam='db file sequential read' ela= 484 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220862851087
WAIT #2: nam='db file sequential read' ela= 548 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220862851684
WAIT #2: nam='db file sequential read' ela= 33 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220862851760
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=220878812196 hv=3933466223 ad='2737d43f8' sqlid='g6dd19gp77vmg'
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220878812196
WAIT #1: nam='db file sequential read' ela= 8169 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220878847836
WAIT #1: nam='db file sequential read' ela= 470 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220878848364
WAIT #1: nam='db file sequential read' ela= 510 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220878848923
WAIT #1: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220878849003
...
PARSING IN CURSOR #3 len=532 dep=0 uid=56 oct=2 lid=56 tim=220892353020 hv=1578030285 ad='273749f28' sqlid='a7tcr1tg0xp6d'
insert into test_assm values
  (18,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220892353020
WAIT #3: nam='db file sequential read' ela= 6309 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220892365476
WAIT #3: nam='db file sequential read' ela= 507 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220892366027
WAIT #3: nam='db file sequential read' ela= 476 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220892366551
WAIT #3: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220892366630
...
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=220905987432 hv=2708362693 ad='2737a1a18' sqlid='7mbckzyhqwpf5'
insert into test_assm values
  (19,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220905987432
WAIT #2: nam='db file sequential read' ela= 7892 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220905999307
WAIT #2: nam='db file sequential read' ela= 513 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220905999847
WAIT #2: nam='db file sequential read' ela= 518 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220906000413
WAIT #2: nam='db file sequential read' ela= 37 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220906000493
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=220919434644 hv=3773067906 ad='2778aa2c8' sqlid='4gsb2p3hf8wn2'
insert into test_assm values
  (20,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=220919434644
WAIT #1: nam='db file sequential read' ela= 4513 file#=7 block#=1900672 blocks=1 obj#=67153 tim=220919467246
WAIT #1: nam='db file sequential read' ela= 483 file#=7 block#=1900680 blocks=1 obj#=67153 tim=220919467788
WAIT #1: nam='db file sequential read' ela= 474 file#=7 block#=1900681 blocks=1 obj#=67153 tim=220919468320
WAIT #1: nam='db file sequential read' ela= 45 file#=7 block#=1900682 blocks=1 obj#=67153 tim=220919468416

Today, same tablespace, roughly the same default buffer size:

PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=314564780904 hv=471712922 ad='2772d7f28' sqlid='dyqznk8f1vj4u'
insert into test_assm values
  (15,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314564780904
WAIT #2: nam='db file sequential read' ela= 12462 file#=7 block#=1901256 blocks=1 obj#=67277 tim=314564795862
WAIT #2: nam='db file sequential read' ela= 531 file#=7 block#=1901257 blocks=1 obj#=67277 tim=314564796472
WAIT #2: nam='db file sequential read' ela= 72 file#=7 block#=1901258 blocks=1 obj#=67277 tim=314564796577
WAIT #2: nam='db file sequential read' ela= 331 file#=7 block#=1901259 blocks=1 obj#=67277 tim=314564796941
...
PARSING IN CURSOR #1 len=532 dep=0 uid=56 oct=2 lid=56 tim=314585990954 hv=1479200138 ad='2772d7a28' sqlid='6su1s3tc2pmca'
insert into test_assm values
  (16,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314585990954
EXEC #1:c=156001,e=156030,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586146984
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
WAIT #1: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586149201
WAIT #1: nam='SQL*Net message from client' ela= 1674 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586150901
CLOSE #1:c=0,e=0,dep=0,type=0,tim=314586146984
PARSING IN CURSOR #2 len=532 dep=0 uid=56 oct=2 lid=56 tim=314586146984 hv=3933466223 ad='2734efe38' sqlid='g6dd19gp77vmg'
insert into test_assm values
  (17,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314586146984
EXEC #2:c=140401,e=124827,p=0,cr=69793,cu=83958,mis=0,r=1,dep=0,og=1,plh=0,tim=314586271811
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69793 pr=0 pw=0 time=0 us)'
WAIT #2: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586294069
WAIT #2: nam='SQL*Net message from client' ela= 1347 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314586295442
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314586271811
...

(A bit later in the trace):

PARSING IN CURSOR #2 len=533 dep=0 uid=56 oct=2 lid=56 tim=314624211019 hv=1207345385 ad='2734b6ff8' sqlid='15mbakd3zd879'
insert into test_assm values
  (239,'123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567',
  '123456789*123456789*123456789*123456789*1234567','123456789*123456789*123456789*123456789*1234567')
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=314624211019
WAIT #2: nam='db file sequential read' ela= 519 file#=7 block#=1972923 blocks=1 obj#=67277 tim=314624366489
EXEC #2:c=140401,e=124781,p=1,cr=69794,cu=83960,mis=0,r=1,dep=0,og=1,plh=0,tim=314624335800
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL (cr=69794 pr=1 pw=0 time=0 us)'
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314624366655
WAIT #2: nam='SQL*Net message from client' ela= 814 driver id=1413697536 #bytes=1 p3=0 obj#=67277 tim=314624367493
CLOSE #2:c=0,e=0,dep=0,type=0,tim=314624367010

In yesterday’s run, Oracle kept performing single block reads on exactly the same blocks for each insert statement (additional blocks were added one at a time on later inserts).  Today this only happened for the first insert statement, with occasional single block reads after that point.

Jonathan had commented somewhere that ASSM behaves like FREELISTS 16 (or maybe it was FREELIST GROUPS 16).  The blocks selected for insert depend on the session's V$PROCESS.PID (I have seen a couple of good descriptions of how this works, but cannot locate those descriptions right now).  See the follow-up to comment 10 here:
http://jonathanlewis.wordpress.com/2009/01/14/books/

I suspect that this might have something to do with the problem I experienced yesterday, but not today (which free blocks are available to the session).
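As a purely illustrative sketch (the actual block-selection algorithm is undocumented, and the modulus and slot count here are assumptions, not Oracle internals), hashing the process id into a fixed number of slots shows how two sessions could end up competing for the same insertion point while other sessions do not:

```python
# Toy model only: NOT Oracle's actual algorithm.  Assumes a freelists(16)-like
# behavior where the session's v$process.pid selects one of 16 insertion points.
def insertion_slot(process_pid, slots=16):
    """Map a process id to one of `slots` candidate insertion points."""
    return process_pid % slots

# Sessions whose pids map to the same slot would compete for the same free
# blocks, while sessions in different slots insert into different blocks.
pids = [18, 21, 34, 37]
slots = {pid: insertion_slot(pid) for pid in pids}
print(slots)  # pids 18 and 34 share slot 2; pids 21 and 37 share slot 5
```

If block availability really does depend on which slot a session hashes to, that could explain why the same test script sees different single-block read behavior from one session (or one day) to the next.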

The following links contain a couple of posts that describe this problem, or a similar one:
asktom.oracle.com Article 1
(April 24, 2003 by Jan van Mourik)
(December 21, 2004 by Steve)

asktom.oracle.com Article 2
(December 4, 2005 by Jonathan Lewis)

On December 24, 2008 a thread was started on Oracle’s OTN forums titled “Performance degradation of repeated delete/inserts” that seems to describe a similar problem.  The thread has since disappeared from Oracle’s OTN forums.  In that thread someone told me “YOU CANNOT PROVE ANYTHING ABOUT ORACLE PERFORMANCE. EVER. NO EQUATIONS, NO PROOFS, NO WAY, NO HOW…“.  I think that it was comments like this that eventually contributed to the thread being pulled from the OTN forums.

I believe that this blog article and the contents of the two Usenet threads demonstrate the potential value of test cases.

——————————————————

Take some time to read the two Usenet threads.  I think that you will find that Jonathan Lewis and other contributors were able to identify the root cause of the performance difference between the tests with the ASSM and non-ASSM tablespaces.





Neat Tricks

January 27, 2010 (Modified December 13, 2011)

Over the years I have seen a couple of interesting approaches to performing certain tasks with Oracle databases – something that makes me think – wow, that’s neat.  Below are a couple of the items that I have found to be interesting, but that I probably could not make interesting enough for a dedicated article.

First, an example that retrieves the DDL needed to recreate three tables and the indexes for those tables, output to a file.  The definitions are complete with foreign keys, column constraints, and storage parameters.  Note that in some cases it might be necessary to fix line wrapping problems in the resulting text file (specifically if a line is longer than 200 characters).

SET PAGESIZE 0
SET LONG 90000
SET LINESIZE 200
COLUMN OBJECT_DEF FORMAT A200
SPOOL 'GETMETA.SQL'

SELECT
  DBMS_METADATA.GET_DDL('TABLE',TABLE_NAME,OWNER) OBJECT_DEF
FROM
  DBA_TABLES
WHERE
  TABLE_NAME IN ('T1','T2','T3')
UNION ALL
SELECT
  DBMS_METADATA.GET_DDL('INDEX',INDEX_NAME,OWNER) OBJECT_DEF
FROM
  DBA_INDEXES
WHERE
  TABLE_NAME IN ('T1','T2','T3');

SPOOL OFF

——

Next, an example that creates a database link to another database.  Note that the first example fails with some configurations – this is intentional in this example.

A Failed Attempt at a DB Link:

CREATE PUBLIC DATABASE LINK TEST_LINK CONNECT TO MY_USERNAME IDENTIFIED BY MY_PASS_HERE USING 'TEST';

Now, trying to use the connection:

SQL> SELECT COUNT(*) FROM T1@TEST_LINK;
SELECT COUNT(*) FROM T1@TEST_LINK
                                *
ERROR at line 1:
ORA-02085: database link TEST_LINK.WORLD connects to TEST.WORLD

“.WORLD”? – I don’t recall specifying that.  I guess that we should consult the documentation.

 “If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects. If the value of GLOBAL_NAMES is FALSE, and if you have changed the global name of the database, then you can specify the global name.”

Now, creating the database link correctly (note that a COMMIT or ROLLBACK should be used at some point after performing a query on a remote database):

CREATE PUBLIC DATABASE LINK TEST CONNECT TO MY_USERNAME IDENTIFIED BY MY_PASS_HERE USING 'TEST.WORLD';

SELECT COUNT(*) FROM T1@TEST;

COMMIT;

——

Next, recovering a table from the Oracle 10g (and above) recyclebin:

FLASHBACK TABLE T1 TO BEFORE DROP;

——

Output the columns in a table’s rows as semicolon delimited values (I think that I first saw this example on asktom.oracle.com):

SELECT
  REGEXP_REPLACE(COLUMN_VALUE,' *<[^>]*>[^>]*>',';')
FROM
  TABLE(XMLSEQUENCE(CURSOR(
    SELECT
      *
    FROM
      MY_TABLE)));

——

Output the columns in a table’s rows as XML:

SELECT
  *
FROM
  TABLE(XMLSEQUENCE(CURSOR(
    SELECT
      *
    FROM
      MY_TABLE)));

——

Output the text contained in a BLOB column as a VARCHAR2:

SELECT
  UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BITS,32000,1))
FROM
  T1
WHERE
  C1='123';

——

View the internal representation of data – how is it stored in the database:

SELECT
  DUMP(DESCRIPTION)
FROM
  MY_TABLE
WHERE
  PART_ID='123';

——

Disable AWR collection on Oracle 10g if a license for the Diagnostic Pack was not purchased (there is an easy way to do this in 11g):

SQL> @DBMSNOAWR.PLB

SQL> BEGIN DBMS_AWR.DISABLE_AWR(); END;

SQL> /

The DBMSNOAWR package may be downloaded from Metalink (MOS). (Note: link updated December 13, 2011)

——

View jobs scheduled in the 10g Enterprise Manager Database Control:

SELECT
  MJ.JOB_OWNER,
  MJ.JOB_NAME,
  JS.FREQUENCY_CODE,
  TO_CHAR(JS.START_TIME,'MM/DD/YYYY HH24:MI') START_TIME,
  TO_CHAR(JS.END_TIME,'MM/DD/YYYY HH24:MI') END_TIME,
  JS.EXECUTION_HOURS,
  JS.EXECUTION_MINUTES,
  JS.INTERVAL,
  JS.MONTHS,
  JS.DAYS
FROM
  SYSMAN.MGMT_JOB MJ,
  SYSMAN.MGMT_JOB_SCHEDULE JS
WHERE
  MJ.EXPIRED=0
  AND MJ.SCHEDULE_ID=JS.SCHEDULE_ID;

——

Manually set the value of MBRC in SYS.AUX_STATS$:

EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC',16) 

——

Unsetting the DB_FILE_MULTIBLOCK_READ_COUNT parameter from the spfile to allow Oracle to auto-tune the parameter value (in 10.2.0.1 and above, requires DB bounce):

ALTER SYSTEM RESET DB_FILE_MULTIBLOCK_READ_COUNT SCOPE=SPFILE SID='*';

——

Select a random number between 1 and 50 (after seeding the random number generator):

EXEC DBMS_RANDOM.SEED(0)

SELECT
  ROUND(DBMS_RANDOM.VALUE(1,50)) MY_NUMBER
FROM
  DUAL;

 MY_NUMBER
----------
        42

——

Select a random 10 character string:

SELECT
  DBMS_RANDOM.STRING('A',10) MY_STRING
FROM
  DUAL;

MY_STRING
----------
SRnFjRGbiw

——

Count to 10:

SELECT
  ROWNUM MY_NUMBER
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

 MY_NUMBER
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

——

Randomly order a row source (in this case the numbers from 1 to 10):

SELECT
  MY_NUMBER
FROM
  (SELECT
    ROWNUM MY_NUMBER
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10)
ORDER BY
  DBMS_RANDOM.VALUE;

 MY_NUMBER
----------
        10
         5
         4
         9
         1
         3
         7
         6
         8
         2




10046 Extended SQL Trace Interpretation 2

January 26, 2010 

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

In an earlier blog article I described several methods for enabling and disabling 10046 extended SQL traces, listed several keywords that are found in 10046 traces, and demonstrated output generated by my Toy Project for Performance Tuning as it processed the raw trace file. 

Another way to enable a 10046 trace for a session is through the use of a logon trigger created by the SYS user.  For example, the following trigger enables a 10046 trace at level 12 for any program whose name begins with the letters MS or VB, even if the path to the program is included in the PROGRAM column of V$SESSION.  With the use of DECODE, it is easy to extend the trigger to enable tracing for additional programs: 

CREATE OR REPLACE TRIGGER LOGON_10046_TRACE AFTER LOGON ON DATABASE
DECLARE
  SHOULD_EXECUTE INTEGER;
BEGIN
  SELECT DECODE(SUBSTR(UPPER(PROGRAM),1,2),'MS',1,'VB',1,0)
      +DECODE(INSTR(PROGRAM,'\',-1),0,0,DECODE(SUBSTR(UPPER(SUBSTR(PROGRAM,INSTR(PROGRAM,'\',-1)+1)),1,2),'MS',1,'VB',1,0))
    INTO SHOULD_EXECUTE FROM V$SESSION
    WHERE SID=(SELECT SID FROM V$MYSTAT WHERE ROWNUM=1);
  IF SHOULD_EXECUTE > 0 THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12''';
  END IF;
END;
/

Obviously, if you create the trigger, you should drop the trigger when it is no longer needed using the following command. 

DROP TRIGGER LOGON_10046_TRACE;

Let’s try creating a test trace file with Oracle Database 10.2.0.4.  First, we need to create a couple of test tables with 10,000 rows each and collect statistics for the tables and primary key indexes: 

CREATE TABLE T1 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 VARCHAR2(255),
  PRIMARY KEY (C1));

INSERT INTO
  T1
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

INSERT INTO
  T2
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),255,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE)

We now have two tables, each with 10,000 rows.  This test case in SQL*Plus will:

  1. Flush the buffer cache (twice) to force physical reads
  2. Set the fetch array size to the SQL*Plus default value of 15 rows per fetch call
  3. Disable output of the rows returned from the database to limit client-side delays
  4. Create two bind variables with the value of 1 and 2
  5. Enable a 10046 extended SQL trace at level 12
  6. Give the trace file an easy to identify name
  7. Execute a query that joins the two tables
  8. Increase the fetch array size to 50 rows per fetch call
  9. Increase the value of the second bind variable from 2 to 10,000
  10. Execute the same SQL statement executed in step 7

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

SET ARRAYSIZE 15
SET AUTOTRACE TRACEONLY STATISTICS

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER

EXEC :N1 := 1
EXEC :N2 := 2

EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>TRUE)
ALTER SESSION SET TRACEFILE_IDENTIFIER='10046_FIND_ME';

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SET ARRAYSIZE 50
EXEC :N1 := 1
EXEC :N2 := 10000

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2;

SELECT SYSDATE FROM DUAL;

EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

Rather than scrolling all of the rows on the screen, SQL*Plus output the following from the two executions: 

Statistics
---------------------------------------------------
          1  recursive calls
          0  db block gets
          9  consistent gets
          5  physical reads
          0  redo size
        921  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
      10982  consistent gets
        404  physical reads
          0  redo size
    2647906  bytes sent via SQL*Net to client
       2523  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10000  rows processed

From the above, the first execution required 9 consistent block gets, and 5 of those block gets involved reading the block from disk.  The server sent 921 bytes to the client in 2 round trips, and 2 rows were retrieved.  2 round trips for just 2 rows (we will see why later)?  The second execution required 10,982 consistent block gets, and 404 of those involved physical reads.  The server sent about 2.53MB to the client in 10,000 rows, using 201 round trips.  Nice, but we can find out more about what happened.
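The derived numbers for the second execution can be checked quickly; a small sketch, with the values copied from the autotrace statistics above:

```python
# Figures copied from the second autotrace statistics section above.
bytes_sent = 2647906
round_trips = 201
rows = 10000

# Average bytes sent to the client per row (roughly 265 bytes, consistent
# with the 255-character C2 column plus the C1 value and overhead):
print(round(bytes_sent / rows))

# Average rows returned per SQL*Net round trip (close to the ARRAYSIZE of 50):
print(round(rows / round_trips, 2))
```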

We could process the trace file with tkprof using a command like this: 

tkprof testdb_ora_4148_10046_find_me.trc testdb_ora_4148_10046_find_me.txt

A portion of the TKPROF output might look like this (see Metalink Doc ID 41634.1 for help with reading the tkprof output): 

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch      203      0.28       0.38        409      10991          0       10002
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      208      0.28       0.38        409      10991          0       10002

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 31 

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  FILTER  (cr=9 pr=5 pw=0 time=19577 us)
      2   NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)
      2    TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)
      2     INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)(object id 114211)
      2    INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)(object id 114210)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     204        0.00          0.00
  db file sequential read                       409        0.00          0.21
  SQL*Net message from client                   204        0.00          0.07
  SQL*Net more data to client                  1200        0.00          0.02

The above is a nice summary of what was found in the trace file for our specific test SQL statement, but what is it telling us?  There were 2 parse calls (1 for each execution), and one of those parse calls resulted in a hard parse.  There were 3 execution calls (I am only able to explain 2 of the execution calls).  There were 203 fetch calls that retrieved a total of 10,002 rows – from this we could derive that on average the client fetched 49.27 rows per fetch call.  All of the time for the execution happened on the fetch calls, which required 0.28 seconds of CPU time and a total of 0.38 clock seconds to execute.  A total of 10,991 consistent gets were required and 409 blocks were read from disk. 
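The per-fetch arithmetic above falls out of simple division over the tkprof fetch line; a quick sketch:

```python
# Values from the tkprof "Fetch" line above.
fetch_calls = 203
rows_fetched = 10002
cpu_seconds = 0.28
elapsed_seconds = 0.38

# Average rows retrieved per fetch call (49.27, as stated above):
print(round(rows_fetched / fetch_calls, 2))

# Fraction of the elapsed fetch time that was on-CPU:
print(round(cpu_seconds / elapsed_seconds, 2))
```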

The execution plan displayed is a little misleading, since it shows that only 2 rows were retrieved (note that if this example were executed on Oracle 11.1.0.6 or higher, the first and second executions of the SQL statement could have had different execution plans).  An index range scan is performed on the index SYS_C0020548 (the primary key index for table T2) to locate all of the C1 values between 1 and 2 – note that the optimizer used transitive closure here since the restriction in the SQL statement was actually placed on the column C1 of table T1.  The top line in the plan indicates that in total the query required 9 consistent gets, 5 physical block reads, and 0.019577 seconds to execute.  The index range scan on SYS_C0020548 required 3 consistent gets, and 2 physical block reads were required to satisfy the 3 consistent gets.  Table T2 required an additional 2 consistent gets and 1 physical block read.  A nested loop operation was performed, driving into the primary key index for table T1 – this required an additional 4 consistent gets and 2 physical block reads.  But, what about the second execution of the SQL statement?

The wait events show 409 waits on the “db file sequential read” wait event, which indicates physically reading 1 block at a time from disk – note that this exactly matches the “disk” column in the “fetch” line of the tkprof summary.  Every block that had to be read from disk was read one block at a time, with an average read time of 0.000513 seconds per block read (the extremely fast average time likely indicates caching of blocks at the file system, RAID controller, SAN, or hard drive).  There were 1200 waits on the “SQL*Net more data to client” event, indicating that the SDU size was filled 1200 times when sending the data to the client computer.
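The 0.000513 second average is just the total wait time divided by the wait count, and comparing it with the rotation time of a 15,000 RPM drive shows why caching is the likely explanation; a quick check using the tkprof wait summary values:

```python
# Values from the tkprof wait event summary above.
waits = 409
total_waited_seconds = 0.21

# Average single-block read time, in milliseconds:
avg_read = total_waited_seconds / waits
print(round(avg_read * 1000, 3))

# One platter revolution at 15,000 RPM takes 60/15000 = 0.004 seconds, so even
# the *average* read completed several times faster than a single revolution -
# faster than the disk could possibly rotate data under the head.
revolution_seconds = 60 / 15000
print(round(revolution_seconds / avg_read, 1))
```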

We could also run the trace file though my Toy Project (below is one of 4 outputs from my program): 

Total for Trace File:
|PARSEs       6|CPU S    0.000000|CLOCK S    0.008073|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      4|
|EXECs        6|CPU S    0.015625|CLOCK S    0.007978|ROWs        3|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      2|
|FETCHs     205|CPU S    0.281250|CLOCK S    0.382338|ROWs    10003|PHY RD BLKs       409|CON RD BLKs (Mem)     10991|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

Wait Event Summary:
SQL*Net message to client           0.000318  On Client/Network   Min Wait:     0.000001  Avg Wait:     0.000001  Max Wait:     0.000005
SQL*Net message from client         3.916585  On Client/Network   Min Wait:     0.000211  Avg Wait:     0.018302  Max Wait:     3.829540
db file sequential read             0.211736  On DB Server        Min Wait:     0.000182  Avg Wait:     0.000518  Max Wait:     0.005905
SQL*Net more data to client         0.020220  On Client/Network   Min Wait:     0.000010  Avg Wait:     0.000017  Max Wait:     0.000258

Total for Similar SQL Statements in Each Group:
----------------------------------------------------------------------------------
Similar SQL Statements in Group: 2
|PARSEs       2|CPU S    0.000000|CLOCK S    0.001231|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        2|CPU S    0.000000|CLOCK S    0.004237|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs     203|CPU S    0.281250|CLOCK S    0.382314|ROWs    10002|PHY RD BLKs       409|CON RD BLKs (Mem)     10991|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 94.74%  CLOCK S 97.34%
  *    0.211736 seconds of time related data file I/O
  *    0.096212 seconds of time related to client/network events
| +++++++++++++++++++|| +++++++++++++++++++|

Cursor 3   Ver 1   Parse at 0.000000  Similar Cnt 1
|PARSEs       1|CPU S    0.000000|CLOCK S    0.001113|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S    0.000000|CLOCK S    0.003538|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs       2|CPU S    0.015625|CLOCK S    0.019769|ROWs        2|PHY RD BLKs         5|CON RD BLKs (Mem)         9|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 5.26%  CLOCK S 6.13%
|                   +||                   +|
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

       (Rows 2)   FILTER  (cr=9 pr=5 pw=0 time=19577 us)
       (Rows 2)    NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)
       (Rows 2)     TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)
       (Rows 2)      INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)
       (Rows 2)     INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)

Bind Variables:BINDS #3:  -0.000008
   Bind#0
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
    kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
    value=1
   Bind#1
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
    kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
    value=2

------------
Cursor 4   Ver 1   Parse at 0.041427  (TD Prev 0.007600)  Similar Cnt 2
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000118|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000699|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs     201|CPU S    0.265625|CLOCK S    0.362545|ROWs    10000|PHY RD BLKs       404|CON RD BLKs (Mem)     10982|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 89.47%  CLOCK S 91.21%
|  ++++++++++++++++++||  ++++++++++++++++++|
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2

   (Rows 10000)   FILTER  (cr=10982 pr=404 pw=0 time=680024 us)
   (Rows 10000)    NESTED LOOPS  (cr=10982 pr=404 pw=0 time=670018 us)
   (Rows 10000)     TABLE ACCESS BY INDEX ROWID T2 (cr=781 pr=387 pw=0 time=590006 us)
   (Rows 10000)      INDEX RANGE SCAN SYS_C0020548 (cr=218 pr=17 pw=0 time=10038 us)
   (Rows 10000)     INDEX UNIQUE SCAN SYS_C0020547 (cr=10201 pr=17 pw=0 time=77882 us)

Bind Variables:BINDS #4:  0.041421
   Bind#0
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
    kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
    value=1
   Bind#1
    oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
    oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
    kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
    value=10000
----------------------------------------------------------------------------------

The above shows basically the same output as tkprof, just with greater resolution, both sets of bind variables, and both sets of execution plans.

We could also use any number of other 10046 trace file parsers including TRCANLZR (see Metalink Doc ID 224270.1), TVD$XTAT (see the book “Troubleshooting Oracle Performance”), ESQLTRCPROF (see the book “Secrets of the Oracle Database”), the Hotsos Profiler (Method R), OraSRP (www.oracledba.ru/orasrp/), or one of several other programs.
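When none of the full-featured profilers is at hand, a few lines of scripting can pull simple summaries from the raw trace file.  Below is a minimal sketch (not a replacement for the tools above, and assuming the 10.2-style WAIT line layout shown in this article, where ela is reported in microseconds) that totals the elapsed time per wait event:

```python
import re
from collections import defaultdict

# Matches the event name and elapsed microseconds in a 10.2-style WAIT line,
# e.g.: WAIT #3: nam='db file sequential read' ela= 4816 file#=4 ...
WAIT_RE = re.compile(r"nam='([^']+)' ela= ?(\d+)")

def summarize_waits(trace_lines):
    """Total the elapsed microseconds per wait event name."""
    totals = defaultdict(int)
    for line in trace_lines:
        if line.startswith("WAIT #"):
            m = WAIT_RE.search(line)
            if m:
                totals[m.group(1)] += int(m.group(2))
    return totals

# Sample WAIT lines copied from the trace excerpts in this article:
sample = [
    "WAIT #3: nam='db file sequential read' ela= 4816 file#=4 block#=1138316 blocks=1 obj#=114211 tim=4963271120",
    "WAIT #3: nam='db file sequential read' ela= 236 file#=4 block#=1093150 blocks=1 obj#=114210 tim=4963285569",
    "WAIT #3: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=4963266031",
]
for name, ela_us in summarize_waits(sample).items():
    print(name, ela_us / 1_000_000, "seconds")
```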

I typically either look at the output from my program or the raw 10046 trace file.  That brings us to the raw 10046 trace file.  So, what does the raw output of the trace file look like?  Before diving into the raw trace file, let’s find a little information to help us later:

COLUMN TABLE_NAME FORMAT A10

SELECT
  TABLE_NAME,
  INDEX_NAME
FROM
  DBA_INDEXES
WHERE
  TABLE_NAME IN ('T1','T2')
ORDER BY
  TABLE_NAME;

TABLE_NAME INDEX_NAME
---------- ------------
T1         SYS_C0020547
T2         SYS_C0020548

COLUMN OBJECT_NAME FORMAT A15

SELECT
  OBJECT_ID,
  OBJECT_NAME,
  OBJECT_TYPE
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME IN ('T1','T2','SYS_C0020547','SYS_C0020548')
ORDER BY
  OBJECT_NAME;

 OBJECT_ID OBJECT_NAME     OBJECT_TYPE
---------- --------------- -----------
    114210 SYS_C0020547    INDEX
    114211 SYS_C0020548    INDEX
    114209 T1              TABLE
    114207 T2              TABLE

From the above output, the index on table T1 is named SYS_C0020547 and it has an OBJECT_ID of 114210.  The index on table T2 is named SYS_C0020548 and it has an OBJECT_ID of 114211.  Table T1 has an OBJECT_ID of 114209, and table T2 has an OBJECT_ID of 114207.  Now on to a portion of the raw 10046 trace file:

=====================
PARSING IN CURSOR #3 len=91 dep=0 uid=31 oct=3 lid=31 tim=4963261335 hv=3021110247 ad='982ab100'
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #3:c=0,e=1113,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=4963261327
BINDS #3:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
  value=2

From the above, we see that the first parse was a hard parse that required 0 CPU seconds and 0.001113 clock seconds.  Additionally, two bind variables were passed in.  A level 4 or level 12 10046 extended SQL trace file will include the submitted bind variable values, as shown above.  It is possible to use the following list to decode the bind variable data type (oacdty), in the process determining that the bind variables are, in fact, defined as numbers.  See Metalink Doc IDs 67701.1 and 154170.1, the Oracle OCI documentation, or Julian Dyke’s site for a more complete list of datatype constants: 

  0 - This row is a placeholder for a procedure with no arguments.
  1 - VARCHAR2 (or NVARCHAR)
  2 - NUMBER
  3 - NATIVE INTEGER (for PL/SQL's BINARY_INTEGER)
  8 - LONG
 11 - ROWID
 12 - DATE
 23 - RAW
 24 - LONG RAW
 96 - CHAR (or NCHAR)
106 - MLSLABEL
112 - CLOB or NCLOB
113 - BLOB
114 - BFILE
250 - PL/SQL RECORD
251 - PL/SQL TABLE
252 - PL/SQL BOOLEAN
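For scripted trace processing, the list above can be captured as a simple lookup table; a sketch (covering only the subset of constants listed here):

```python
# Subset of the oacdty datatype constants listed above
# (see Metalink Doc IDs 67701.1 and 154170.1 for the full list).
OACDTY = {
    1: "VARCHAR2 (or NVARCHAR)", 2: "NUMBER", 3: "NATIVE INTEGER",
    8: "LONG", 11: "ROWID", 12: "DATE", 23: "RAW", 24: "LONG RAW",
    96: "CHAR (or NCHAR)", 106: "MLSLABEL", 112: "CLOB or NCLOB",
    113: "BLOB", 114: "BFILE", 250: "PL/SQL RECORD",
    251: "PL/SQL TABLE", 252: "PL/SQL BOOLEAN",
}

# The Bind#0 and Bind#1 sections in the trace both show oacdty=02:
print(OACDTY[2])  # NUMBER
```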

The trace file continues below:

EXEC #3:c=0,e=3538,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=4963265927
WAIT #3: nam='SQL*Net message to client' ela= 5 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=4963266031
WAIT #3: nam='db file sequential read' ela= 4816 file#=4 block#=1138316 blocks=1 obj#=114211 tim=4963271120
WAIT #3: nam='db file sequential read' ela= 3934 file#=4 block#=1138318 blocks=1 obj#=114211 tim=4963275219
WAIT #3: nam='db file sequential read' ela= 4443 file#=4 block#=1138310 blocks=1 obj#=114207 tim=4963279805
WAIT #3: nam='db file sequential read' ela= 5221 file#=4 block#=1093148 blocks=1 obj#=114210 tim=4963285172
WAIT #3: nam='db file sequential read' ela= 236 file#=4 block#=1093150 blocks=1 obj#=114210 tim=4963285569
FETCH #3:c=15625,e=19589,p=5,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=4963285717
WAIT #3: nam='SQL*Net message from client' ela= 364 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963286240
WAIT #3: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963286494
FETCH #3:c=0,e=180,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,tim=4963286588

Above, we see the 5 physical block reads: blocks 1138316 and 1138318 of OBJECT_ID 114211 (index on table T2, SYS_C0020548), followed by a single block read of OBJECT_ID 114207 (table T2), and 2 single block reads of object 114210 (index on table T1, SYS_C0020547) – note that the final of the 5 physical block reads completed in 0.000236 seconds, which is roughly 17 times faster than the time required for 1 revolution of a 15,000 RPM hard drive platter.  The first fetch call returned a single row, even though the array fetch size was explicitly set to 15 rows.  That fetch required 5 consistent gets, which in turn required the 5 physical block reads.  The 1 row was sent to the client, which then fetched a second row (4963286588 – 4963285717)/1,000,000 = 0.000871 seconds later.  The trace file continues:

WAIT #3: nam='SQL*Net message from client' ela= 319 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963287035
*** SESSION ID:(211.17427) 2010-01-25 15:41:01.540
STAT #3 id=1 cnt=2 pid=0 pos=1 obj=0 op='FILTER  (cr=9 pr=5 pw=0 time=19577 us)'
STAT #3 id=2 cnt=2 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=9 pr=5 pw=0 time=19569 us)'
STAT #3 id=3 cnt=2 pid=2 pos=1 obj=114207 op='TABLE ACCESS BY INDEX ROWID T2 (cr=5 pr=3 pw=0 time=13843 us)'
STAT #3 id=4 cnt=2 pid=3 pos=1 obj=114211 op='INDEX RANGE SCAN SYS_C0020548 (cr=3 pr=2 pw=0 time=9231 us)'
STAT #3 id=5 cnt=2 pid=2 pos=2 obj=114210 op='INDEX UNIQUE SCAN SYS_C0020547 (cr=4 pr=2 pw=0 time=5788 us)'
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963289021
WAIT #0: nam='SQL*Net message from client' ela= 2329 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963291420
=====================

Cursor #3 was closed, so Oracle output the STAT (row source operation) lines, as we saw in the tkprof output.  The trace file continues (with a couple of lines removed):

...
=====================
PARSING IN CURSOR #4 len=91 dep=0 uid=31 oct=3 lid=31 tim=4963302762 hv=3021110247 ad='982ab100'
SELECT
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C1 BETWEEN :N1 AND :N2
END OF STMT
PARSE #4:c=0,e=118,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=4963302756
BINDS #4:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=13ce8870  bln=22  avl=02  flg=05
  value=1
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=13ce8888  bln=22  avl=02  flg=01
  value=10000
EXEC #4:c=0,e=699,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=4963304299
WAIT #4: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963304383
FETCH #4:c=0,e=94,p=0,cr=5,cu=0,mis=0,r=1,dep=0,og=1,tim=4963304564
WAIT #4: nam='SQL*Net message from client' ela= 718 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963305403
WAIT #4: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114210 tim=4963305590
WAIT #4: nam='SQL*Net more data to client' ela= 29 driver id=1413697536 #bytes=2146 p3=0 obj#=114210 tim=4963305766
WAIT #4: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=1862 p3=0 obj#=114210 tim=4963305913
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2128 p3=0 obj#=114210 tim=4963306065
WAIT #4: nam='db file sequential read' ela= 2272 file#=4 block#=1138311 blocks=1 obj#=114207 tim=4963308471
WAIT #4: nam='SQL*Net more data to client' ela= 27 driver id=1413697536 #bytes=1868 p3=0 obj#=114207 tim=4963308686
WAIT #4: nam='SQL*Net more data to client' ela= 18 driver id=1413697536 #bytes=2122 p3=0 obj#=114207 tim=4963308841
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=2128 p3=0 obj#=114207 tim=4963309001
FETCH #4:c=0,e=3573,p=1,cr=54,cu=0,mis=0,r=50,dep=0,og=1,tim=4963309109

Note that there was no hard parse this time.  The first two fetches are complete at this point.  Again, the first fetch returned a single row, while the second fetch returned 50 rows.  Note the presence of the “SQL*Net more data to client” wait before the second fetch line printed – each of these lines indicates that the SDU size was filled on the previous send to the client.  Notice that there was only a single physical block read of OBJECT_ID 114207 (table T2), requiring 0.002272 seconds, when fetching the first 51 rows (the other blocks were already in the buffer cache).  The trace file continues below:

WAIT #4: nam='SQL*Net message from client' ela= 256 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963309476
WAIT #4: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963309654
WAIT #4: nam='db file sequential read' ela= 226 file#=4 block#=1138312 blocks=1 obj#=114207 tim=4963309995
WAIT #4: nam='SQL*Net more data to client' ela= 32 driver id=1413697536 #bytes=2116 p3=0 obj#=114207 tim=4963310197
WAIT #4: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963310353
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1834 p3=0 obj#=114207 tim=4963310488
WAIT #4: nam='db file sequential read' ela= 1762 file#=4 block#=1138308 blocks=1 obj#=114207 tim=4963312390
WAIT #4: nam='SQL*Net more data to client' ela= 23 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963312551
WAIT #4: nam='SQL*Net more data to client' ela= 16 driver id=1413697536 #bytes=2096 p3=0 obj#=114207 tim=4963312783
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1834 p3=0 obj#=114207 tim=4963312904
FETCH #4:c=0,e=3345,p=2,cr=55,cu=0,mis=0,r=50,dep=0,og=1,tim=4963312955

Two more physical block reads of OBJECT_ID 114207 (table T2) to return the next 50 rows to the client.  Jumping forward in the trace file to the last two fetches:

...
FETCH #4:c=0,e=1259,p=2,cr=55,cu=0,mis=0,r=50,dep=0,og=1,tim=4963757700
WAIT #4: nam='SQL*Net message from client' ela= 842 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963758586
WAIT #4: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963758653
WAIT #4: nam='SQL*Net more data to client' ela= 16 driver id=1413697536 #bytes=2124 p3=0 obj#=114207 tim=4963758766
WAIT #4: nam='db file sequential read' ela= 242 file#=4 block#=1773782 blocks=1 obj#=114207 tim=4963759071
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759182
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1841 p3=0 obj#=114207 tim=4963759268
WAIT #4: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759365
WAIT #4: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1841 p3=0 obj#=114207 tim=4963759453
WAIT #4: nam='db file sequential read' ela= 226 file#=4 block#=1773852 blocks=1 obj#=114207 tim=4963759715
WAIT #4: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=2104 p3=0 obj#=114207 tim=4963759867
FETCH #4:c=0,e=1290,p=2,cr=54,cu=0,mis=0,r=49,dep=0,og=1,tim=4963759912

From the above, we see that Oracle is still performing an average of two single block physical reads per fetch call, and the final fetch call retrieved just 49 rows.  The trace file continues:

WAIT #4: nam='SQL*Net message from client' ela= 792 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963760775
*** SESSION ID:(211.17427) 2010-01-25 15:41:02.008
STAT #4 id=1 cnt=10000 pid=0 pos=1 obj=0 op='FILTER  (cr=10982 pr=404 pw=0 time=680024 us)'
STAT #4 id=2 cnt=10000 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=10982 pr=404 pw=0 time=670018 us)'
STAT #4 id=3 cnt=10000 pid=2 pos=1 obj=114207 op='TABLE ACCESS BY INDEX ROWID T2 (cr=781 pr=387 pw=0 time=590006 us)'
STAT #4 id=4 cnt=10000 pid=3 pos=1 obj=114211 op='INDEX RANGE SCAN SYS_C0020548 (cr=218 pr=17 pw=0 time=10038 us)'
STAT #4 id=5 cnt=10000 pid=2 pos=2 obj=114210 op='INDEX UNIQUE SCAN SYS_C0020547 (cr=10201 pr=17 pw=0 time=77882 us)'
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963764774
WAIT #0: nam='SQL*Net message from client' ela= 2789 driver id=1413697536 #bytes=1 p3=0 obj#=114207 tim=4963767585

From the above, we see the execution plan for the second execution – this information was missing from the tkprof output.  A hash join with two full table scans probably would have been more efficient than a nested loop join with index lookups, especially if the number of rows were larger.  This is one of the potential problems with using bind variables, especially when bind variable peeking is enabled (by default in recent releases) – the execution plan is essentially locked after the initial hard parse.  Oracle 11.1.0.6 introduced a feature known as adaptive cursor sharing that could potentially alter the plan on a future execution if Oracle senses that there will be significant changes in the number of rows returned when different bind variable values are submitted.
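
On Oracle 11g it is possible to see whether adaptive cursor sharing has flagged a cursor by querying V$SQL (a quick sketch – the SQL_ID value below is a made-up placeholder that would be replaced with the SQL_ID of the statement of interest):

```sql
SELECT
  SQL_ID,
  CHILD_NUMBER,
  IS_BIND_SENSITIVE,
  IS_BIND_AWARE,
  EXECUTIONS
FROM
  V$SQL
WHERE
  SQL_ID = 'abcd1234efgh5';  -- placeholder SQL_ID
```

A cursor marked bind sensitive is being monitored; a cursor marked bind aware has already been split into multiple child cursors with potentially different execution plans.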

Incidentally, you may have noticed the keyword “oct” on the “PARSING IN CURSOR” lines in the above trace file.  This keyword identifies the Oracle command type, which is related to the V$SESSION.COMMAND column and the V$SQL.COMMAND_TYPE column. Common command type values include: 

  1 - CREATE TABLE
  2 - INSERT
  3 - SELECT
  6 - UPDATE
  7 - DELETE
  9 - CREATE INDEX

See the “Command Column of V$SESSION and Corresponding Commands” table in the Oracle Reference documentation (Table 8-2 in the Oracle Database Reference 11g Release 2 book) for a complete list of command types.
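
For example, the distribution of command types currently in the library cache may be seen with a quick query of V$SQL:

```sql
SELECT
  COMMAND_TYPE,
  COUNT(*) AS NUM_STATEMENTS
FROM
  V$SQL
GROUP BY
  COMMAND_TYPE
ORDER BY
  COMMAND_TYPE;
```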

For more information about 10046 trace files, see Chapter 8, “Understanding Performance Optimization Methods”, in the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table” (the chapter was co-written by Randolf Geist and myself).  The book “Optimizing Oracle Performance” is also highly recommended.





Notes about Various Oracle Parameters

25 01 2010

January 25, 2010

There are a large number of initialization parameters that control the behavior of Oracle, and thus the performance of the database server.  It would likely take hours (or many, many pages) to explain the ideal value of each of these parameters – the ideal parameter values are different for different databases, otherwise Oracle Database could simply default to the ideal parameters.  Below are a couple of my notes on various parameters.
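
Before experimenting, it may help to see which parameters in a given database have already been changed from their default values (a sketch – V$PARAMETER also indicates whether each parameter may be changed at the session or system level):

```sql
SELECT
  NAME,
  VALUE,
  ISSES_MODIFIABLE,
  ISSYS_MODIFIABLE
FROM
  V$PARAMETER
WHERE
  ISDEFAULT = 'FALSE'
ORDER BY
  NAME;
```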

O7_DICTIONARY_ACCESSIBILITY: when set to true, permits non-sysdba users to query the data dictionary without explicitly granting permission to the users to view the data dictionary (setting this parameter to TRUE also allows the SYS user to connect without specifying AS SYSDBA).  This parameter must be set to TRUE for certain applications’ functions to work correctly, but ideally should be set to FALSE, if possible.  This parameter is set to FALSE by default on 9.0.1 and above.  Changing the parameter requires bouncing the database.
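
Because this parameter cannot be changed while the instance is running, the change might look like the following from a SYSDBA session in SQL*Plus (a sketch – SCOPE=SPFILE assumes an spfile is in use):

```sql
ALTER SYSTEM SET O7_DICTIONARY_ACCESSIBILITY = FALSE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```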

COMPATIBLE: Sets the datafile level binary compatibility, allowing the database binaries to be rolled back to an earlier version and still access the database’s datafiles.  Note that this parameter’s purpose is incorrectly described as controlling the query optimizer in a couple of Oracle related books.

PLSQL_CODE_TYPE: Watch out – there is a bug, at least on the Windows platform of 10.2.0.x where this setting will automatically change from interpreted to compiled when parameter values for other parameters are changed using Enterprise Manager Database Control.

PROCESSES and SESSIONS: Control the maximum number of client connections that may be connected at any one time.  The SESSIONS parameter is typically 10 to 20 greater than the value of PROCESSES.  The database must be bounced to change these parameters.  See here for a related article.

RECYCLEBIN: Controls whether dropped tables and indexes are saved to an area that permits the objects to be recovered.  If applications use non-standard methods of determining the objects belonging to a user (for example, directly querying SYS.OBJ$ and SYS.USER$), it is possible for the objects in the recycle bin to be listed with the normal tables – attempting to assign permissions to objects in the recycle bin will result in the database returning errors to the client.

TIMED_STATISTICS: Should be set to TRUE to permit most forms of performance tuning; when set to FALSE, time deltas between events are not calculated.  Setting this parameter to TRUE may impose a small performance penalty on some operating systems, but the penalty is typically small.  This parameter defaults to TRUE when the STATISTICS_LEVEL parameter is set to TYPICAL or ALL.

DB_DOMAIN: Allows a database SID to be suffixed with a DNS style domain name.  If set, it may cause problems when database links are created between databases (only one name is valid for a database link when the DB_DOMAIN parameter is set, a custom name cannot be assigned to the database link).

UNDO_MANAGEMENT: When set to AUTO, rollback segments are no longer used – instead, the system automatically manages undo segments.

UNDO_RETENTION: Specifies the suggested minimum number of seconds that undo information should remain available.  Used to limit the frequency of “snapshot too old” error messages.

DB_RECOVERY_FILE_DEST_SIZE: Specifies a hard upper limit on the number of bytes available to store archived redo logs, backups, and other items in the flash recovery area.  Note that if files are removed from the flash recovery area using operating system commands, Oracle may incorrectly calculate the space used in the flash recovery area, potentially creating problems if copies of archived redo logs and/or backups are sent to the flash recovery area.

DB_RECOVERY_FILE_DEST: Specifies the location to be used for the flash recovery area.

DB_WRITER_PROCESSES: Do not increase from the default value of 1 unless the server has more than 8 CPUs.  See here for a related article.

STATISTICS_LEVEL: Should be set to TYPICAL – do not leave the parameter set to ALL at the system-wide level, as doing so significantly slows performance because additional performance data must be collected for each SQL statement executed.  The performance hit when set to ALL is more significant on Oracle 10g than it is on Oracle 11g.

SGA_MAX_SIZE: Specifies the absolute maximum size of memory allocated to items in the system global area (SGA), defaults to the value of SGA_TARGET if not set.  Requires bouncing the database to change the parameter’s value.

SGA_TARGET: Specifies the suggested maximum amount of memory to be allocated to items in the system global area.  The value may be manually increased to the value of SGA_MAX_SIZE without bouncing the database.

SHARED_POOL_SIZE: When SGA_TARGET is specified, sets the minimum amount of memory available for caching items in the shared pool (SQL statements, packages, etc.).

TRACE_ENABLED:  Allows the database to create extended trace and 10053 trace files when sessions request that those trace files be generated.  Typically defaults to a value of TRUE.

MEMORY_MAX_TARGET: Part of the new memory management parameters in Oracle 11g, specifies the absolute maximum amount of memory that may be used by Oracle.

MEMORY_TARGET: Part of the new memory management parameters in Oracle 11g, by default 80% of this memory will be allocated to the SGA and 20% to the PGA.

DB_CACHE_SIZE: Specifies the minimum amount of memory in bytes for the DEFAULT block buffer cache (KEEP and RECYCLE buffer cache sizes do not subtract from this value).

LOG_BUFFER: Specifies the amount of memory to be allocated for buffering redo information before it is written to the redo logs.  512KB to 1MB is typically sufficient on older Oracle releases, 10g and above may automatically set this parameter’s value to a size close to the memory granule size, which may be 16MB.

WORKAREA_SIZE_POLICY: When set to AUTO, allows the automatic allocation of memory for work areas from the memory specified for the PGA_AGGREGATE_TARGET.

SORT_AREA_SIZE: Has no effect when WORKAREA_SIZE_POLICY is set to AUTO (assuming dedicated sessions), specifies the amount of memory that may be used during a sorting or hashing operation when executing a SQL statement.  HASH_AREA_SIZE defaults to twice this value.

SORT_AREA_RETAINED_SIZE: Has no effect when WORKAREA_SIZE_POLICY is set to AUTO, specifies the amount of memory that may be used after a sorting operation when the client is retrieving the results of the SQL statement.  If the server has sufficient memory, set this value to the same as the value of SORT_AREA_SIZE to avoid unnecessarily spilling the results to the temp tablespace after the sort, but before the client starts retrieving the results.

OPEN_CURSORS: Specifies the maximum number of cursors that may be simultaneously open for each client’s session.  Depending on the application connecting to the database, a value between 300 and 1000 might be a safe target if there is sufficient memory on the server.

SESSION_CACHED_CURSORS: On older Oracle releases, this parameter defaults to 0, and on more recent releases the parameter defaults to either 20 or 50 (this parameter controls the number of cached cursors per session).  If the value of this parameter is set to a non-zero value and the same SQL statement is submitted at least 3 times, the SQL statement is added to the session cursor cache and remains open even when the client explicitly closes the cursor.  This helps reduce the performance hit caused by soft parses when the client repeatedly submits the same SQL statement to be executed – on the next parse request Oracle does not need to search the library cache as would be needed during a soft parse.  A value of 50 to 100 probably would be a good target, and if server memory permits, consider setting this parameter to a higher value, possibly 200.
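
One way to judge whether the session cursor cache is helping a particular session is to compare the “session cursor cache hits” statistic to the “parse count (total)” statistic (a sketch – the SID value 123 below is a placeholder for the session of interest):

```sql
SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SN.STATISTIC# = SS.STATISTIC#
  AND SS.SID = 123  -- placeholder session ID
  AND SN.NAME IN ('session cursor cache hits', 'parse count (total)');
```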

CURSOR_SHARING: Starting with Oracle 8i, it is possible for the database server to automatically convert constants (literals) submitted in SQL statements to bind variables in order to reduce the number of hard parses.  There are problems in patched Oracle 10.2.0.2 and 10.2.0.3 when this parameter is set to anything except EXACT (the October 2006 CPU for Oracle 10.2.0.2, for example, introduces problems when the CURSOR_SHARING parameter is set to FORCE – the problem may appear a couple of hours after the database is used in production).

CURSOR_SPACE_FOR_TIME: This parameter will be removed from future releases of Oracle as it is often misused (removed from 11.2.0.1?).  When set to TRUE, this parameter causes Oracle to assume that required SQL statements will not be prematurely aged out of the library cache.

OPTIMIZER_INDEX_CACHING: Tells Oracle the approximate percentage of index blocks that remain in the buffer cache – primarily has an effect during nested loop joins, affects costing of nested loop joins and in-lists.

OPTIMIZER_INDEX_COST_ADJ: Artificially lowers the calculated cost of an index access to the percentage of the original specified by this parameter.  Due to rounding problems, may cause the wrong index to be used if this parameter is set to too low of a value.  If index access costs are calculated too high compared to full table scans (and fast full index scans), use CPU (system) statistics, if available, to increase the cost of full table scans, rather than using this parameter to decrease the cost of index accesses.

OPTIMIZER_FEATURES_ENABLE: When adjusted, automatically changes the value of many hidden initialization parameters to permit the query optimizer to behave similarly to the optimizer in an earlier release of Oracle.

OPTIMIZER_SECURE_VIEW_MERGING: Defaults to TRUE on 10g, and may cause performance problems when a user accesses a view created by another user, while the performance problem is not present for the view owner.

DB_FILE_MULTIBLOCK_READ_COUNT: Controls the maximum number of blocks that may be fetched in a single read operation during a full table scan or fast full index scan.  Oracle 10.2.0.x and above is able to auto-set the DB_FILE_MULTIBLOCK_READ_COUNT, which will likely set the parameter to permit multi-block reads of 1MB.
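
With an 8KB block size, a 1MB multi-block read works out to 1,048,576 / 8,192 = 128 blocks, so on 10.2.0.x and above an unset DB_FILE_MULTIBLOCK_READ_COUNT will commonly appear as 128.  The current (possibly auto-tuned) value may be checked from SQL*Plus (a quick sketch):

```sql
SHOW PARAMETER DB_FILE_MULTIBLOCK_READ_COUNT

SELECT
  VALUE
FROM
  V$PARAMETER
WHERE
  NAME = 'db_file_multiblock_read_count';
```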





SQL Basics – Working with ERP Data

24 01 2010

January 24, 2010

This blog article is based on a portion of a presentation that I gave at a regional ERP user’s group meeting.  While some of the information is specific to that particular ERP platform, the concepts should be general enough that the material may be applied to other environments.

Typically, a language called Structured Query Language (SQL) is used to directly communicate with the database.  As with all languages, there are syntax rules that must be followed.  In general, data is stored in a series of tables, which may be thought of as if they were worksheets in an Excel spreadsheet.  The various tables may be joined together to provide greater detail, but great care must be taken to correctly join the tables together.  The correct table joining conditions may be partially determined by examining the primary and foreign key relationships between the tables, and we will talk about that more later in the presentation.

Tips:

Relationships between tables containing related information may be determined by:

  • Primary (parent) and foreign (child) relationships defined in the database (see Data Dict Foreign Keys worksheet).
  • Primary key columns are often named ID, and the foreign key columns are often named table_ID, for example: ACCOUNT.ID = ACCOUNT_BALANCE.ACCOUNT_ID
  • Relationships may be discovered by searching for other tables in the database containing the same column names (see Data Dict Tables worksheet).

SQL Basics:

Indexes on table columns may allow a query to execute faster, but it is important that all of the beginning columns in the index are used (don’t forget the TYPE column when retrieving information from the WORK_ORDER table, or the WORKORDER_TYPE column when accessing the OPERATION table).  While indexes usually help when a small amount of information is needed from a table, other methods (full table scan) are sometimes more appropriate. 

Indexes usually cannot be used for those columns in the WHERE clause if the column appears inside a function name – index will not be used for   TRUNC(LABOR_TICKET.TRANSACTION_DATE) =  – unless a function based index is created for that function and column combination.
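
For example, a function based index supporting the TRUNC predicate above might be created like this (a sketch – the index name is invented):

```sql
CREATE INDEX IND_LT_TRUNC_TRANS_DATE ON
  LABOR_TICKET (TRUNC(TRANSACTION_DATE));
```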

When multiple tables must be accessed, each column retrieved should be prefixed with the table name (or an aliased name for the table) containing the column.  Prefixing the columns improves the readability of the SQL statement and prevents errors that happen when two tables contain columns with the same names.

In a WHERE clause, character type data should appear in single quotes ( ' ), and number type data should not appear in single quotes.  Dates should not rely on implicit data type conversion – don’t use '24-JAN-2010' as there is a chance that the implicit conversion will fail in certain environments.
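
For example, an explicit TO_DATE conversion with a format mask avoids depending on the client’s NLS date settings (a sketch using columns from the LABOR_TICKET table that appear later in this article):

```sql
SELECT
  EMPLOYEE_ID,
  SHIFT_DATE,
  HOURS_WORKED
FROM
  LABOR_TICKET
WHERE
  SHIFT_DATE >= TO_DATE('24-JAN-2010', 'DD-MON-YYYY');
```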

Information retrieved from the database using a SQL statement may be grouped to summarize the data.

Executing SQL:

Assume that we are new to SQL and just start typing a SQL statement, hoping that the database will be able to help us make a correct request – since that kind of works in Microsoft Access.

SELECT DISTINCT
  *
FROM
  WORK_ORDER,
  OPERATION,
  REQUIREMENT;

When we execute this SQL statement, the database server spins and spins (not the formal meaning of a spin), until the SQL statement finally falls over and dies (to the uninitiated, this is not supposed to happen when a query executes).

Mon Jan 07 11:01:32 2009
ORA-1652: unable to extend temp segment by 128 in tablespace TEMPORARY_DATA
Mon Jan 07 11:12:31 2009
ORA-1652: unable to extend temp segment by 128 in tablespace TEMPORARY_DATA
Mon Jan 07 11:15:11 2009
ORA-1652: unable to extend temp segment by 128 in tablespace TEMPORARY_DATA
Mon Jan 07 11:25:28 2009
ORA-1652: unable to extend temp segment by 128 in tablespace TEMPORARY_DATA
Mon Jan 07 11:28:13 2009
ORA-1652: unable to extend temp segment by 128 in tablespace TEMPORARY_DATA

Depending on the database engine and the database administrator, the result might be that the database is down for a long time, or just that the query tool that submitted the SQL statement crashes after forcing the CPUs on the server to spin excessively.  Be careful about who has access to a query tool that accesses the database.

Simple SQL – Retrieve the part ID, description, product code, and quantity on hand for all parts:

The following is a simple SQL statement which will retrieve four columns from the PART table for all parts, essentially in random order.  You may notice that my SQL statement is formatted in a very specific way – the reason for this formatting will become more clear later.  Essentially, standardized formats help improve database performance (by reducing the number of hard parses) – for ad hoc SQL statements (those created for one time use), the performance difference probably will not be noticed, but when placed into various applications that execute the SQL statements repeatedly, the performance difference will be very clear.

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  QTY_ON_HAND
FROM
  PART;

Retrieve the part ID, description, product code, and quantity on hand for all parts with a commodity code of AAAA:

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  QTY_ON_HAND
FROM
  PART
WHERE
  COMMODITY_CODE = 'AAAA';

Retrieve the part ID, description, product code, and quantity on hand for all parts with a commodity code of AAAA with more than 10 on hand:

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  QTY_ON_HAND
FROM
  PART
WHERE
  COMMODITY_CODE = 'AAAA'
  AND QTY_ON_HAND > 10;

Retrieve the part ID, description, product code, and quantity on hand for all parts with a commodity code beginning with  A  with 10 to 100 on hand:

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  QTY_ON_HAND
FROM
  PART
WHERE
  COMMODITY_CODE LIKE 'A%'
  AND QTY_ON_HAND BETWEEN 10 AND 100;

Retrieve the part ID, description, product code, and quantity on hand sorted by product code, then part ID – Fixing the Random Order:

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  QTY_ON_HAND
FROM
  PART
WHERE
  COMMODITY_CODE LIKE 'A%'
  AND QTY_ON_HAND BETWEEN 10 AND 100
ORDER BY
  PRODUCT_CODE,
  ID;

Retrieve the product code, and total quantity on hand by product code, sorted by product code:

SELECT
  PRODUCT_CODE,
  SUM(QTY_ON_HAND) AS TOTAL_QTY
FROM
  PART
WHERE
  COMMODITY_CODE LIKE 'F%'
GROUP BY
  PRODUCT_CODE
ORDER BY
  PRODUCT_CODE;

The above example changed the previous example quite a bit, so that only those parts with a commodity code beginning with F are returned – in the example, I want to determine the total number of parts on hand by product code (labeled TOTAL_QTY) for those parts with a commodity code beginning with F.  In addition to the ORDER BY clause, a GROUP BY clause was also needed.  The columns that must be listed in the group by clause are those columns in the SELECT clause which are not inside a SUM(), AVG(), MIN(), MAX(), or similar function.

Retrieve the product code, and total quantity on hand by product code, return only those with a total quantity on hand more than 100, sorted by product code:

SELECT
  PRODUCT_CODE,
  SUM(QTY_ON_HAND) AS TOTAL_QTY
FROM
  PART
WHERE
  COMMODITY_CODE LIKE 'F%'
GROUP BY
  PRODUCT_CODE
HAVING
  SUM(QTY_ON_HAND) > 100
ORDER BY
  PRODUCT_CODE;

Retrieve the top level part ID produced by all unreleased, firmed, and released work orders, include the work order, lot, part description, and quantity on hand:

Now that we know how to work with data stored in a single table, let’s take a look at an example with two tables.  Each column returned from the tables should be prefixed with the table name – primarily in cases where the same column name appears in both tables, but doing this also makes it easier to troubleshoot problems with the SQL statement at a later time.  The following SQL statement retrieves a list of all parts produced by non-closed and non-canceled work orders that are in the system (status is unreleased, firmed, or released).

SELECT
  WORK_ORDER.BASE_ID,
  WORK_ORDER.LOT_ID,
  WORK_ORDER.SPLIT_ID,
  WORK_ORDER.PART_ID,
  PART.DESCRIPTION,
  PART.QTY_ON_HAND,
  WORK_ORDER.DESIRED_QTY,
  WORK_ORDER.RECEIVED_QTY
FROM
  WORK_ORDER,
  PART
WHERE
  WORK_ORDER.TYPE = 'W'
  AND WORK_ORDER.SUB_ID='0'
  AND WORK_ORDER.PART_ID=PART.ID
  AND WORK_ORDER.DESIRED_QTY > WORK_ORDER.RECEIVED_QTY
  AND WORK_ORDER.STATUS IN ('U', 'F', 'R')
ORDER BY
  WORK_ORDER.PART_ID,
  WORK_ORDER.BASE_ID,
  WORK_ORDER.LOT_ID,
  WORK_ORDER.SPLIT_ID;

The following SQL statement is essentially the same SQL statement as the last, just with table aliases (or short-names) which significantly reduce the amount of typing.

SELECT
  WO.BASE_ID,
  WO.LOT_ID,
  WO.SPLIT_ID,
  WO.PART_ID,
  P.DESCRIPTION,
  P.QTY_ON_HAND,
  WO.DESIRED_QTY,
  WO.RECEIVED_QTY
FROM
  WORK_ORDER WO,
  PART P
WHERE
  WO.TYPE = 'W'
  AND WO.SUB_ID='0'
  AND WO.PART_ID=P.ID
  AND WO.DESIRED_QTY > WO.RECEIVED_QTY
  AND WO.STATUS IN ('U', 'F', 'R')
ORDER BY
  WO.PART_ID,
  WO.BASE_ID,
  WO.LOT_ID,
  WO.SPLIT_ID;

Retrieve the engineering master information for a part

Back to the original example which brought down the database server (or at the least filled the temp tablespace to its maximum size), adding in two references to the PART table, each with a different alias name.  This SQL statement will retrieve the main header card, all operations, and all material requirements for a specific fabricated part.  But, there is a catch.  Operations without material requirements are excluded from the output.  Fixing that problem requires the use of an outer join, which on Oracle is indicated by a (+) following the column name that is permitted to be NULL, and on SQL Server the legacy outer join is indicated by an * on the side of the equality that is NOT permitted to be NULL.  (Note that there are also ANSI style inner and outer joins, which work on both platforms.)

SELECT
  WO.BASE_ID || DECODE(O.WORKORDER_SUB_ID, '0', '/', '-' || O.WORKORDER_SUB_ID  || '/') || WO.LOT_ID AS WORK_ORDER,
  WO.DESIRED_QTY - WO.RECEIVED_QTY AS REMAINING_QTY,
  WO.PART_ID AS WO_PART_ID,
  P.DESCRIPTION AS WO_PART_DESC,
  O.SEQUENCE_NO AS OP,
  O.RESOURCE_ID,
  SR.DESCRIPTION AS RESOURCE_DESC,
  O.SETUP_HRS,
  O.RUN_HRS,
  O.CALC_END_QTY,
  R.PIECE_NO,
  R.PART_ID AS REQ_PART_ID,
  P2.DESCRIPTION AS REQ_PART_DESC,
  R.CALC_QTY
FROM
  WORK_ORDER WO,
  PART P,
  OPERATION O,
  SHOP_RESOURCE SR,
  REQUIREMENT R,
  PART P2
WHERE
  WO.TYPE = 'M'
  AND P.ID = 'ABC123'
  AND P.ID = WO.BASE_ID
  AND P.ENGINEERING_MSTR = WO.LOT_ID
  AND WO.SPLIT_ID = '0'
  AND WO.SUB_ID = '0'
  AND WO.TYPE = O.WORKORDER_TYPE
  AND WO.BASE_ID = O.WORKORDER_BASE_ID
  AND WO.LOT_ID = O.WORKORDER_LOT_ID
  AND WO.SPLIT_ID = O.WORKORDER_SPLIT_ID
  AND O.RESOURCE_ID = SR.ID(+)
  AND O.WORKORDER_TYPE = R.WORKORDER_TYPE(+)
  AND O.WORKORDER_BASE_ID = R.WORKORDER_BASE_ID(+)
  AND O.WORKORDER_LOT_ID = R.WORKORDER_LOT_ID(+)
  AND O.WORKORDER_SPLIT_ID = R.WORKORDER_SPLIT_ID(+)
  AND O.WORKORDER_SUB_ID = R.WORKORDER_SUB_ID(+)
  AND O.SEQUENCE_NO = R.OPERATION_SEQ_NO(+)
  AND R.PART_ID = P2.ID(+)
ORDER BY
  O.WORKORDER_SUB_ID,
  O.SEQUENCE_NO,
  R.PART_ID;
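
For comparison, the Oracle-specific (+) outer join syntax in the above statement could be rewritten with ANSI style joins.  A partial sketch of just the FROM clause – the remaining restrictions on WO and P (TYPE, part ID, engineering master, split, and sub ID) stay in the WHERE clause:

```sql
FROM
  WORK_ORDER WO
  JOIN PART P
    ON P.ID = WO.BASE_ID
   AND P.ENGINEERING_MSTR = WO.LOT_ID
  JOIN OPERATION O
    ON WO.TYPE = O.WORKORDER_TYPE
   AND WO.BASE_ID = O.WORKORDER_BASE_ID
   AND WO.LOT_ID = O.WORKORDER_LOT_ID
   AND WO.SPLIT_ID = O.WORKORDER_SPLIT_ID
  LEFT OUTER JOIN SHOP_RESOURCE SR
    ON O.RESOURCE_ID = SR.ID
  LEFT OUTER JOIN REQUIREMENT R
    ON O.WORKORDER_TYPE = R.WORKORDER_TYPE
   AND O.WORKORDER_BASE_ID = R.WORKORDER_BASE_ID
   AND O.WORKORDER_LOT_ID = R.WORKORDER_LOT_ID
   AND O.WORKORDER_SPLIT_ID = R.WORKORDER_SPLIT_ID
   AND O.WORKORDER_SUB_ID = R.WORKORDER_SUB_ID
   AND O.SEQUENCE_NO = R.OPERATION_SEQ_NO
  LEFT OUTER JOIN PART P2
    ON R.PART_ID = P2.ID
```

With the ANSI syntax, the join conditions move into the ON clauses, which makes the outer joined tables (SHOP_RESOURCE, REQUIREMENT, and the second PART reference) easier to spot at a glance.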

Analyze the UNIT_MATERIAL_COST column in the PART table.  For each part, find the relative cost (high to low) ranking, average cost, smallest cost, highest cost, and the total number in each group, when the parts are grouped individually by product code, commodity code, and also preferred vendor (all parts without a preferred vendor are grouped together):

SELECT
  ID,
  DESCRIPTION,
  PRODUCT_CODE,
  COMMODITY_CODE,
  UNIT_MATERIAL_COST,
  ROW_NUMBER() OVER (PARTITION BY PRODUCT_CODE ORDER BY COMMODITY_CODE,ID) PART_WITHIN_PC,
  COUNT(1) OVER (PARTITION BY PRODUCT_CODE ORDER BY COMMODITY_CODE,ID) PART_WITHIN_PC2,
  RANK() OVER (PARTITION BY PRODUCT_CODE ORDER BY UNIT_MATERIAL_COST DESC NULLS LAST) RANK_PC_COST,
  AVG(UNIT_MATERIAL_COST) OVER (PARTITION BY PRODUCT_CODE) AVG_PC_COST,
  MIN(UNIT_MATERIAL_COST) OVER (PARTITION BY PRODUCT_CODE) MIN_PC_COST,
  MAX(UNIT_MATERIAL_COST) OVER (PARTITION BY PRODUCT_CODE) MAX_PC_COST,
  COUNT(UNIT_MATERIAL_COST) OVER (PARTITION BY PRODUCT_CODE) COUNT_PC,
  RANK() OVER (PARTITION BY COMMODITY_CODE ORDER BY UNIT_MATERIAL_COST DESC NULLS LAST) RANK_CC_COST,
  AVG(UNIT_MATERIAL_COST) OVER (PARTITION BY COMMODITY_CODE) AVG_CC_COST,
  MIN(UNIT_MATERIAL_COST) OVER (PARTITION BY COMMODITY_CODE) MIN_CC_COST,
  MAX(UNIT_MATERIAL_COST) OVER (PARTITION BY COMMODITY_CODE) MAX_CC_COST,
  COUNT(UNIT_MATERIAL_COST) OVER (PARTITION BY COMMODITY_CODE) COUNT_CC,
  RANK() OVER (PARTITION BY NVL(PREF_VENDOR_ID,'IN_HOUSE_FAB') ORDER BY UNIT_MATERIAL_COST
    DESC NULLS LAST) RANK_VENDOR_COST,
  AVG(UNIT_MATERIAL_COST) OVER (PARTITION BY NVL(PREF_VENDOR_ID,'IN_HOUSE_FAB')) AVG_VENDOR_COST,
  MIN(UNIT_MATERIAL_COST) OVER (PARTITION BY NVL(PREF_VENDOR_ID,'IN_HOUSE_FAB')) MIN_VENDOR_COST,
  MAX(UNIT_MATERIAL_COST) OVER (PARTITION BY NVL(PREF_VENDOR_ID,'IN_HOUSE_FAB')) MAX_VENDOR_COST,
  COUNT(UNIT_MATERIAL_COST) OVER (PARTITION BY PREF_VENDOR_ID) COUNT_VENDOR
FROM
  PART
ORDER BY
  ID;

On Oracle, there are also analytical functions which allow information to be grouped together without the need for a GROUP BY clause, and each column returned could potentially be grouped using different criteria.  There are several interesting analytical functions that make otherwise difficult comparisons both easy to accomplish and efficient to execute.  Many of the analytical functions allow data to be summarized by groups without losing the detail contained in each row of the data; for instance, we are able to select the part_ID, description, and unit_material_cost without grouping on those columns.  PARTITION BY may be thought of as behaving like GROUP BY.  The inclusion of ORDER BY within the OVER clause means that only those rows encountered to that point, when sorted in the specified order, will be considered.

Show the sum of the hours worked for each employee by shift date, along with the previous five days and the next five days, and the next Monday after the shift date – looking at previous and next rows in the data, using inline view:

SELECT
  EMPLOYEE_ID,
  SHIFT_DATE,
  NEXT_DAY(SHIFT_DATE,'MONDAY') PAYROLL_PREPARE_DATE,
  LAG(HOURS_WORKED,5,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) PREV5_HOURS,
  LAG(HOURS_WORKED,4,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) PREV4_HOURS,
  LAG(HOURS_WORKED,3,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) PREV3_HOURS,
  LAG(HOURS_WORKED,2,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) PREV2_HOURS,
  LAG(HOURS_WORKED,1,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) PREV_HOURS,
  HOURS_WORKED,
  LEAD(HOURS_WORKED,1,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) NEXT_HOURS,
  LEAD(HOURS_WORKED,2,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) NEXT2_HOURS,
  LEAD(HOURS_WORKED,3,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) NEXT3_HOURS,
  LEAD(HOURS_WORKED,4,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) NEXT4_HOURS,
  LEAD(HOURS_WORKED,5,0) OVER (PARTITION BY EMPLOYEE_ID ORDER BY SHIFT_DATE) NEXT5_HOURS
FROM
  (SELECT
    EMPLOYEE_ID,
    SHIFT_DATE,
    SUM(HOURS_WORKED) HOURS_WORKED
  FROM
    LABOR_TICKET
  WHERE
    SHIFT_DATE>=TRUNC(SYSDATE-14)
  GROUP BY
    EMPLOYEE_ID,
    SHIFT_DATE
  ORDER BY
    EMPLOYEE_ID,
    SHIFT_DATE);

LAG and LEAD are interesting functions which permit looking at previous and next rows, when sorted in the specified order.

SQL coding is not hard to understand – as long as you build out from a simple SQL statement to the SQL statement that returns the desired output.





Query Active Directory, WMI, and Upload to Database

23 01 2010

January 23, 2010

I will say up front that this example is a bit complicated – if you feel sick to your stomach after reading this article it is not my fault.  So, what does this example show:

  • Query Active Directory using ADO to obtain a list of all computers on the domain.
  • Ping each of the computers to verify that the computer may be reached over the network.
  • Send a WMI query to each computer that responded to a ping.  The WMI query targets Win32_ComputerSystem which describes the timezone, domain, computer role, computer manufacturer, computer model, number of CPUs, amount of physical memory, currently logged on user, and more.
  • Create a table in the Oracle database to contain the data returned by the WMI query (using a very generic VARCHAR2(100) definition for each column).
  • Transfer the data from the WMI results to the database table.
  • Announce using voice prompts if a computer does not respond to a ping, and also announce when the WMI query is being sent to the remote computer.

Note that I had a bit of difficulty making the usual method for submitting SQL statements with bind variables work correctly with the WMI data, so I used another approach.  There are two routines in the script: the Main subroutine is started when the VBS file executes, and Main calls the PingTest function as needed.  Save the script as CheckComputers.vbs, then execute the script using either cscript or wscript.  Note that you must be an administrator on each computer, or a domain administrator, for the remote WMI queries to execute correctly.

Main

Sub Main()
    Const CONVERT_TO_LOCAL_TIME = True
    Const wbemFlagReturnImmediately = &H10
    Const wbemFlagForwardOnly = &H20
    Const adCmdText = 1
    Const adVarChar = 200
    Const adchar = 129
    Const adParamInput = 1
    Const adOpenKeyset = 1
    Const adLockOptimistic = 3
    Const ADS_SCOPE_SUBTREE = 2

    Dim strSpeech
    Dim objSpeech

    Dim strComputer
    Dim varStartTime
    Dim objWMIService
    Dim colItems
    Dim objItem
    Dim i
    Dim intResult
    Dim intComputer
    Dim lngPass
    Dim lngPassMax
    Dim intColumns
    Dim intColumn

    Dim objProperty

    Dim dbActiveDirectory
    Dim strSQL
    Dim strSQLInsert
    Dim strSQLTable
    Dim comData
    Dim snpData
    Dim strDomain
    Dim comDataInsert
    Dim dynDataInsert
    Dim dbDatabase
    Dim strUsername
    Dim strPassword
    Dim strDatabase

    On Error Resume Next

    Set dbDatabase = CreateObject("ADODB.Connection")
    Set comDataInsert = CreateObject("ADODB.Command")
    Set dynDataInsert = CreateObject("ADODB.Recordset")

    Set objSpeech = CreateObject("SAPI.SpVoice")

    strUsername = "MyUsername"
    strPassword = "MyPassword"
    strDatabase = "MyDB"

    strDomain = "DC=oracle,DC=com"            'Your domain: Equivalent to oracle.com, change as needed

    Set dbActiveDirectory = CreateObject("ADODB.Connection")
    Set comData = CreateObject("ADODB.Command")
    Set snpData = CreateObject("ADODB.Recordset")

    dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
    dbDatabase.Open
    'Should verify that the connection attempt was successful, but I will leave that for someone else to code

    dbActiveDirectory.Provider = "ADsDSOObject"
    dbActiveDirectory.Open "Active Directory Provider"

    comData.ActiveConnection = dbActiveDirectory

    If Err <> 0 Then
        intResult = MsgBox("An error happened while connecting to Active Directory" & vbCrLf & Err.Description, 16, "Oh NO!")
        Exit Sub
    End If

    With comData
        strSQL = "SELECT" & vbCrLf
        strSQL = strSQL & "  NAME" & vbCrLf
        strSQL = strSQL & "FROM" & vbCrLf
        strSQL = strSQL & "  'LDAP://" & strDomain & "'" & vbCrLf
        strSQL = strSQL & "WHERE" & vbCrLf
        strSQL = strSQL & "  OBJECTCLASS='computer'" & vbCrLf
        strSQL = strSQL & "ORDER BY" & vbCrLf
        strSQL = strSQL & "  NAME"

        .CommandText = strSQL

        .Properties("Page Size") = 1000
        .Properties("Searchscope") = ADS_SCOPE_SUBTREE
    End With

    Set snpData = comData.Execute

    If Err <> 0 Then
        intResult = MsgBox("An error happened while reading the computer list from Active Directory" & vbCrLf & Err.Description, 16, "Oh NO!")
        Exit Sub
    End If

    strSQL = "SELECT * FROM Win32_ComputerSystem"

    strComputer = "."
    Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
    Set colItems = objWMIService.ExecQuery(strSQL, "WQL", wbemFlagReturnImmediately + wbemFlagForwardOnly)

    strSQLTable = "CREATE TABLE COMPUTER_LIST (" & vbCrLf
    strSQLTable = strSQLTable & "  COMPUTER_NAME VARCHAR2(100)," & vbCrLf

    strSQLInsert = "INSERT INTO COMPUTER_LIST VALUES (" & vbCrLf

    intColumns = 1

    With comDataInsert
        .Parameters.Append .CreateParameter("computer_name", adVarChar, adParamInput, 100, " ")
        For Each objItem In colItems
            For Each objProperty In objItem.Properties_
                'We are in the header row
                intColumns = intColumns + 1
                strSQLTable = strSQLTable & "  " & Replace(CStr(objProperty.Name), " ", "_") & " VARCHAR2(100)," & vbCrLf

                'This method seems to be having problems
                'strSQLInsert = strSQLInsert & "  ?," & vbCrLf
                '.Parameters.Append .CreateParameter("value" & FormatNumber(intColumns, 0), adVarChar, adParamInput, 100, " ")
            Next
        Next
        'This method seems to be having problems
        'strSQLInsert = Left(strSQLInsert, Len(strSQLInsert) - 3) & ")"

        '.CommandText = strSQLInsert
        '.CommandType = adCmdText
        '.CommandTimeout = 30
        '.ActiveConnection = dbDatabase
    End With

    strSQLTable = strSQLTable & "  PRIMARY KEY (COMPUTER_NAME))"
    dbDatabase.Execute strSQLTable

    'Alternate method should also use bind variables
    strSQLInsert = "SELECT" & vbCrLf
    strSQLInsert = strSQLInsert & "  *" & vbCrLf
    strSQLInsert = strSQLInsert & "FROM" & vbCrLf
    strSQLInsert = strSQLInsert & "  COMPUTER_LIST" & vbCrLf
    strSQLInsert = strSQLInsert & "WHERE" & vbCrLf
    strSQLInsert = strSQLInsert & "  1=2"
    dynDataInsert.Open strSQLInsert, dbDatabase, adOpenKeyset, adLockOptimistic

    strSQL = "SELECT * FROM Win32_ComputerSystem"

    dbDatabase.BeginTrans
    If snpData.State = 1 Then
        Do While Not (snpData.EOF)
            If PingTest(CStr(snpData.Fields("Name").Value)) = True Then
                Err = 0  'Reset the error indicator

                strComputer = CStr(snpData.Fields("Name").Value)
                strSpeech = "Checking, " & strComputer
                objSpeech.Speak strSpeech

                Err = 0

                Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

                If Err = 0 Then
                    Set colItems = objWMIService.ExecQuery(strSQL, "WQL", wbemFlagReturnImmediately + wbemFlagForwardOnly)

                    intColumn = 1

                    'This method seems to be having problems
                    'comDataInsert("computer_name") = Left(CStr(strComputer), 100)

                    dynDataInsert.AddNew
                    dynDataInsert("computer_name") = Left(CStr(strComputer), 100)

                    For Each objItem In colItems
                        intColumn = 1
                        For Each objProperty In objItem.Properties_
                            intColumn = intColumn + 1

                            If Not (IsNull(objProperty.Value)) Then
                                If VarType(objProperty.Value) <> 8204 Then
                                    'This method seems to be having problems
                                    'comDataInsert("value" & FormatNumber(intColumn, 0)) = Left(objProperty.Value, 100)

                                    dynDataInsert(Replace(CStr(objProperty.Name), " ", "_")) = Left(objProperty.Value, 100)
                                End If
                            End If
                        Next
                    Next

                    'This method seems to be having problems
                    'comDataInsert.Execute

                    'Alternate method should also use bind variables
                    dynDataInsert.Update
                End If
            Else
                strComputer = CStr(snpData.Fields("Name").Value)
                strSpeech = "Could Not Ping, " & strComputer
                objSpeech.Speak strSpeech
            End If
            snpData.MoveNext
        Loop

        snpData.Close

        strSpeech = "Done!"
        objSpeech.Speak strSpeech
    Else
        If Err <> 0 Then
            intResult = MsgBox("An error happened while connecting to Active Directory" & vbCrLf & Err.Description, 16, "Oh NO!")
        End If
    End If
    dbDatabase.CommitTrans

    Set objWMIService = Nothing
    Set colItems = Nothing
    Set objItem = Nothing
    Set objProperty = Nothing
    Set objSpeech = Nothing
    Set dbDatabase = Nothing
    Set dynDataInsert = Nothing
    Set comDataInsert = Nothing
    Set snpData = Nothing
End Sub

Function PingTest(strComputer)
    Dim intPosition
    Dim objShell
    Dim objExec
    Dim strLine
    Dim strCommand

    On Error Resume Next

    PingTest = False
    Set objShell = CreateObject("wscript.shell")
    'command to execute
    strCommand = "PING -i 10 -w 10 -n 1 " & strComputer
    'Create Exec object
    Set objExec = objShell.Exec(strCommand)
    'skip lines that contain information about our DNS server
    Do While objExec.StdOut.AtEndOfStream <> True
        strLine = objExec.StdOut.ReadLine
        intPosition = InStr(UCase(strLine), "RECEIVED =")
        If intPosition > 0 Then
            If InStr(strLine, "TTL expired in transit") = 0 Then
                If Trim(Mid(strLine, intPosition + 10, 2)) = "1" Then
                    PingTest = True
                Else
                    PingTest = False
                End If
            Else
                PingTest = False
            End If
            Exit Do
        End If
    Loop

    Set objShell = Nothing
    Set objExec = Nothing
End Function
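The "Received =" parsing logic in PingTest can be sketched in Python. This is an illustration of the same string tests against a Windows PING transcript (including the check that a "TTL expired in transit" reply from a router is not mistaken for a reply from the target), not a drop-in replacement for the VBScript:

```python
def ping_output_says_alive(ping_output: str) -> bool:
    """Return True when a Windows PING transcript shows exactly 1 reply
    received and the summary line does not report 'TTL expired in transit'."""
    for line in ping_output.splitlines():
        upper = line.upper()
        pos = upper.find("RECEIVED =")
        if pos >= 0:
            if "TTL EXPIRED IN TRANSIT" in upper:
                return False
            # The two characters after "Received =" hold the reply count
            count = line[pos + 10:pos + 12].strip()
            return count == "1"
    return False

sample = "    Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),"
print(ping_output_says_alive(sample))  # True
```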

Yes, it is that easy… to have the computer talk to you.





Simple Query Generates Complex Execution Plan, the Mysterious 4063.88 Second Single Block Read Wait

22 01 2010

January 22, 2010

There was an interesting thread on Oracle’s OTN forums in April 2008:
http://forums.oracle.com/forums/thread.jspa?threadID=642641

In the thread, the original poster wrote:

I have 4 tables. I wrote one query as follows. But it’s taking 45 mins for executing. Can you please write a query for this to improve performance. The tables are as follows:

  1. Case
  2. Item 
  3. Master 
  4. Warehouse

The (table) columns are:

Case: Item, Supply_indicator, Country_Indicator, item_size
Item: Item, location, location_type, ondate, offdate, status, create_date
Master: item, status,  pack_indicator, Item_level(either 1 or 0), Trns_level(either 1 or 0), create_date, Foreind (either Y or N)
Warehouse: Warehose_no, Name, Address

The OP posted the SQL statement and a form of an execution plan:

Comparing the query with the explain plan:

SELECT
  IM.LOCATION,
  IM.ITEM,
  CS.CASE_SIZE,
  IM.ONDATE,
  IM.OFFDATE,
  MAS.STATUS
FROM
  CASE_SIZE CS,
  ITEM IM,
  MASTER MAS,
  WAREHOUSE WH
WHERE
  MAS.PACK_IND = 'N'
  AND MAS.ITEM_LEVEL = MAS.TRNS_LEVEL
  AND MAS.STATUS = 'A'
  AND MAS.FOREIND = 'Y'
  AND MAS.ITEM = IM.ITEM
  AND IM.LOCATION_TYPE = 'S'
  AND MAS.ITEM = CS.ITEM
  AND CS.SUPPLY_INDICATOR = 'Y'
  AND CS.COUNTRY_INDICATOR = 'Y'
  AND IM.LOCATION =WH.WAREHOSE_NO
  AND NVL(WH.CLOSE_DATE,'04-APR-9999')>=TO_DATE(&VERSDATE}, 'YYYYMMDD')

Execution Plan
--------------------------------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=15085 Card=1139512 Bytes=134462416)
   1    0   HASH JOIN (Cost=15085 Card=1139512 Bytes=134462416)
   2    1     HASH JOIN (Cost=580 Card=30772 Bytes=1569372)
   3    2       TABLE ACCESS (FULL) OF 'MASTER' (Cost=63 Card=1306 Bytes=18284)
   4    2       TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'ITEM' (Cost=2 Card=105311 Bytes=3369952)
   5    4         NESTED LOOPS (Cost=86 Card=4296685 Bytes=158977345)
   6    5           TABLE ACCESS (FULL) OF 'WAREHOUSE' (Cost=4 Card=41 Bytes=205)
   7    5           INDEX (RANGE SCAN) OF 'PK_LOCATION' (UNIQUE) (Cost=1 Card=210622)
   8    1     VIEW (Cost=14271 Card=48399 Bytes=3242733)
   9    8       SORT (UNIQUE) (Cost=14271 Card=48399 Bytes=6098274)
  10    9         HASH JOIN (Cost=992 Card=187614 Bytes=23639364)
  11   10           HASH JOIN (Cost=186 Card=7449 Bytes=581022)
  12   11             TABLE ACCESS (FULL) OF 'MASTER' (Cost=63 Card=10451 Bytes=156765)
  13   11             HASH JOIN (Cost=105 Card=12489 Bytes=786807)
  14   13               MERGE JOIN (CARTESIAN) (Cost=40 Card=12489 Bytes=549516)
  15   14                 MERGE JOIN (CARTESIAN) (Cost=6 Card=1 Bytes=13)
  16   15                   TABLE ACCESS (FULL) OF 'SYSTEM' (Cost=3 Card=1 Bytes=3)
  17   15                   BUFFER (SORT) (Cost=3 Card=1 Bytes=10)
  18   17                     TABLE ACCESS (FULL) OF 'SYSTEM' (Cost=3 Card=1 Bytes=10)
  19   14                 BUFFER (SORT) (Cost=37 Card=12489 Bytes=387159)
  20   19                   TABLE ACCESS (FULL) OF 'ITEM_SUPPLIER_COUNTRY' (Cost=34 Card=12489 Bytes=387159)
  21   13               TABLE ACCESS (FULL) OF 'SUPPLIER' (Cost=28 Card=24989 Bytes=474791)
  22   10           VIEW (Cost=536 Card=172449 Bytes=8277552)
  23   22             UNION-ALL
  24   23               INDEX (FAST FULL SCAN) OF 'PK_ITEM_SUPPLIER_COUNTRY' (UNIQUE) (Cost=11 Card=24978 Bytes=324714)
  25   23               TABLE ACCESS (FULL) OF 'ITEM_SUPPLIER_COUNTRY' (Cost=34 Card=24978 Bytes=399648)
  26   23               TABLE ACCESS (FULL) OF 'ITEM_SUPPLIER_COUNTRY' (Cost=34 Card=24978 Bytes=374670)
  27   23               TABLE ACCESS (FULL) OF 'ITEM_SUPPLIER_COUNTRY' (Cost=34 Card=24978 Bytes=499560)
  28   23               VIEW (Cost=141 Card=24179 Bytes=1039697)
  29   28                 SORT (UNIQUE) (Cost=141 Card=24179 Bytes=507759)
  30   29                   INDEX (FAST FULL SCAN) OF 'PK_CASE_UPDATES' (UNIQUE) (Cost=14 Card=24179 Bytes=507759)
  31   23               VIEW (Cost=141 Card=24179 Bytes=1039697)
  32   31                 SORT (UNIQUE) (Cost=141 Card=24179 Bytes=507759)
  33   32                   INDEX (FAST FULL SCAN) OF 'PK_CASE_UPDATES' (UNIQUE) (Cost=14 Card=24179 Bytes=507759)
  34   23               VIEW (Cost=141 Card=24179 Bytes=1039697)
  35   34                 SORT (UNIQUE) (Cost=141 Card=24179 Bytes=507759)
  36   35                   INDEX (FAST FULL SCAN) OF 'PK_CASE_UPDATES' (UNIQUE) (Cost=14 Card=24179 Bytes=507759)

So, how could a query of 4 tables produce the above execution plan, and is there anything wrong with the plan?

The OP also generated a 10046 trace and processed that trace file with TKPROF.  The TKPROF output for this SQL statement is as follows:

(select statement)
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.32       0.29          0        593          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch   819704    874.76    1529.15    3879188   12619996          5    12295541
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   819706    875.08    1529.45    3879188   12620589          5    12295541

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 315  (DEVEL)

 Rows     Row Source Operation
 -------  ---------------------------------------------------
12295541  HASH JOIN  (cr=12619996 r=3879188 w=2408 time=1503903336 us)
  212315   VIEW  (cr=7553336 r=2408 w=2408 time=82297538 us)
(as per my previous post)

Rows     Execution Plan
-------  ---------------------------------------------------
(as per  my previous post)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                  819704        0.00          0.45
  direct path write                               4        0.00          0.00
  direct path read                               51        0.00          0.07
  db file sequential read                     94884        0.41         60.91
  db file scattered read                     222696     2302.70       4675.10
  SQL*Net message from client                819704     3672.87       6299.98
  latch free                                      2        0.00          0.00

How could the query of the 4 tables cause TKPROF to output the above information, and is there anything wrong with the information provided?  To get you started, what would cause a 2,302.70 second wait for a multi-block read?

The OP also posted the following portion of a 10046 trace:

WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
FETCH #3:c=0,e=131,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=4,tim=1180445875641857
WAIT #3: nam='SQL*Net message from client' ela= 704 p1=1650815232 p2=1 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 1 p1=1650815232 p2=1 p3=0
FETCH #3:c=0,e=140,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=4,tim=1180445875642773
WAIT #3: nam='SQL*Net message from client' ela= 668 p1=1650815232 p2=1 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 1 p1=1650815232 p2=1 p3=0
FETCH #3:c=0,e=140,p=0,cr=2,cu=0,mis=0,r=15,dep=0,og=4,tim=1180445875643631
WAIT #3: nam='SQL*Net message from client' ela= 4179109066 p1=1650815232 p2=1 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
FETCH #3:c=10000,e=144,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=4,tim=1180445875643757
WAIT #3: nam='SQL*Net message from client' ela= 660 p1=1650815232 p2=1 p3=0
WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1650815232 p2=1 p3=0
FETCH #3:c=0,e=128,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=4,tim=1180445875644594

Anything wrong in the above?

What if we try real hard to analyze the above trace file snippet, anything wrong?  I will cheat and use my Toy Project for Performance Tuning to generate the following output.

Fetch at 1180445875.641860 (Parse to Fetch 1180445875.641860),CPU Time 0.000000,Elapsed Time 0.000131,Rows Retrievd 15,Blks from Buff 1,Blks from Disk 0
     0.000704   SQL*Net message from client
     0.000001   SQL*Net message to client
Fetch at 1180445875.642770 (Parse to Fetch 1180445875.642770),CPU Time 0.000000,Elapsed Time 0.000140,Rows Retrievd 15,Blks from Buff 1,Blks from Disk 0
     0.000668   SQL*Net message from client
     0.000001   SQL*Net message to client
Fetch at 1180445875.643630 (Parse to Fetch 1180445875.643630),CPU Time 0.000000,Elapsed Time 0.000140,Rows Retrievd 15,Blks from Buff 2,Blks from Disk 0
  4179.109066   SQL*Net message from client
     0.000000   SQL*Net message to client
Fetch at 1180445875.643760 (Parse to Fetch 1180445875.643760),CPU Time 0.010000,Elapsed Time 0.000144,Rows Retrievd 15,Blks from Buff 1,Blks from Disk 0
     0.000660   SQL*Net message from client
     0.000000   SQL*Net message to client
Fetch at 1180445875.644590 (Parse to Fetch 1180445875.644590),CPU Time 0.000000,Elapsed Time 0.000128,Rows Retrievd 15,Blks from Buff 1,Blks from Disk 0

Don’t worry if the above does not yet make much sense.  At 1180445875.641860 seconds there was a fetch call that required 0 CPU seconds and 0.000131 elapsed seconds; 15 rows were retrieved, with one consistent get and 0 physical reads.  The next fetch happened 0.00091 seconds later (5.642770 – 5.641860), with two short duration wait events between the two fetch calls (0.000704 seconds on SQL*Net message from client and 0.000001 seconds on SQL*Net message to client).  Now take a closer look at the waits between the fetch call at 1180445875.643630 seconds and the fetch call at 1180445875.643760 seconds – only 0.00013 seconds later.
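The inter-fetch arithmetic above can be automated. A minimal Python sketch, hard-coding the tim values (in microseconds) from the FETCH lines in the trace snippet, computes the gap between consecutive fetch calls:

```python
# tim values (microseconds) from the five FETCH lines in the trace snippet
fetch_tims = [
    1180445875641857,
    1180445875642773,
    1180445875643631,
    1180445875643757,
    1180445875644594,
]

# Gap between consecutive fetch calls, converted to seconds
gaps = [(b - a) / 1_000_000 for a, b in zip(fetch_tims, fetch_tims[1:])]
print(gaps)  # [0.000916, 0.000858, 0.000126, 0.000837]
```

The third gap is 126 microseconds, yet the SQL*Net message from client wait logged between those two fetch calls claims ela= 4179109066 microseconds – roughly 4,179 seconds – which cannot possibly fit inside that gap.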

The OP also posted the following odd snippet from a 10046 trace:

WAIT #3: nam='db file sequential read' ela= 260 p1=354 p2=128875 p3=1
WAIT #3: nam='db file sequential read' ela= 180 p1=354 p2=128878 p3=1
WAIT #3: nam='db file sequential read' ela= 179 p1=354 p2=128877 p3=1
WAIT #3: nam='db file sequential read' ela= 178 p1=354 p2=128880 p3=1
WAIT #3: nam='db file sequential read' ela= 192 p1=354 p2=128879 p3=1
WAIT #3: nam='db file sequential read' ela= 197 p1=354 p2=128882 p3=1
WAIT #3: nam='db file sequential read' ela= 192 p1=354 p2=128881 p3=1
WAIT #3: nam='db file sequential read' ela= 4063882583 p1=354 p2=128884 p3=1
WAIT #3: nam='db file sequential read' ela= 194 p1=354 p2=128883 p3=1
WAIT #3: nam='db file sequential read' ela= 180 p1=354 p2=128885 p3=1
WAIT #3: nam='db file sequential read' ela= 192 p1=354 p2=128887 p3=1
WAIT #3: nam='db file sequential read' ela= 176 p1=354 p2=128886 p3=1
WAIT #3: nam='db file sequential read' ela= 179 p1=354 p2=128889 p3=1
WAIT #3: nam='db file sequential read' ela= 186 p1=354 p2=128888 p3=1

So, what is odd about the above?  Is there an explanation?
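For the single block read snippet, a quick Python sketch (hard-coding the ela values from the WAIT lines above) converts the microsecond waits to seconds and flags the outlier:

```python
# ela values (microseconds) from the db file sequential read waits above
elas = [260, 180, 179, 178, 192, 197, 192, 4063882583, 194, 180, 192, 176, 179, 186]

# Any single block read longer than one second deserves a closer look
outliers = [ela / 1_000_000 for ela in elas if ela > 1_000_000]
print(outliers)  # [4063.882583]
```

Every surrounding read of an adjacent block in the same file completed in roughly 180 to 260 microseconds, so a reported 4,063.88 second read of one block in the middle of that sequence is a value to question rather than trust.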

The thread contains other interesting details, but it does not look like the OP ever received an explanation for why his query required 45 minutes to execute.  My money is still on a faulty RAID 10 floppy array.





How Many Topics can be Packed into a Short OTN Thread?

21 01 2010

January 21, 2010

This thread from 2008 has 31 replies:
http://forums.oracle.com/forums/thread.jspa?messageID=2502493&tstart=0

The thread starts off with a simple statement, without a lot of technical detail (kind of like this example):

“We are runing in 10g (10.2.0.3), when using full table scan, the perfromance is very slow. Is a bug in ASM or SQL program problem? How can I vertify the problem come from? I have runing health check in oracle but found nothing.”

So, where does the thread head?  Topics?

  • “By definition the Full Table Scan access is the most feared enemy, you should avoid that monster when your tables are huge (many rows &/or long rows).”
  • “Not all full table scans are bad, not all indexes are good”
  • Enabling a 10046 trace file for a session might help.
  • “Usually you don’t want your applications to access a very large part of a large table. This will be very slow and could deteriorate the performance of your application severely.”
  • “We had the same issues with full table scans in 10.2.0.3 where we had gathered system stats”
  • “If I’m not mistaken, Oracle says that if you query 7.5% of the table rows and above you are usually better off with an FTS”  A link was provided by someone else to a site that made a similar claim.
  • “I can very easily give you an example where an index would be the best option to query 99% of data. I can very easily give you an example where a FTS is the best option to query 1% of data.”
  • What is noise in a thread?
  • A full tablescan reads all of the blocks up to the high watermark – but does it always?
  • “The only way to improve the end-to-end performance of a full-table scan is Oracle parallel query (OPQ).” – or is it?
  • “That’s improved the tablescan by a factor of nearly 30 simply by changing the array fetch size”
  • Properly setting the DB_FILE_MULTIBLOCK_READ_COUNT will have an impact on the performance of a full table scan.
  • The value of a 10046 trace, a test case.

In this OTN thread I provided a nice little test case that showed a 10046 trace where a full table scan operation did not read all of the table blocks up to the high watermark for the table.  That test case appears below:

CREATE TABLE T1(
  C1 NUMBER(10),
  C2 VARCHAR2(255),
  C3 VARCHAR2(255),
  C4 VARCHAR2(255));

INSERT INTO
  T1
SELECT
  ROWNUM RN,
  LPAD('A',255,'A'),
  LPAD('B',255,'B'),
  LPAD('C',255,'C') 
FROM
  DUAL
CONNECT BY
  LEVEL<=500000;

COMMIT;

We have a table with 500,000 rows with possibly 10 rows per 8KB block (AVG_ROW_LEN is 772). Now, flush the buffer cache to force physical reads:
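The rows-per-block estimate can be checked with quick arithmetic. A sketch assuming an 8KB block with roughly 10% lost to PCTFREE and block overhead (that percentage is an assumption, not a measured value):

```python
block_size = 8192          # bytes in an 8KB block
usable = block_size * 0.9  # assume ~10% lost to PCTFREE and block overhead
avg_row_len = 772          # AVG_ROW_LEN reported for table T1

rows_per_block = int(usable // avg_row_len)
blocks_needed = 500_000 // rows_per_block  # rough block count below the HWM
print(rows_per_block, blocks_needed)  # 9 55555
```

So the 500,000 rows occupy on the order of 50,000+ blocks, which gives a sense of how many blocks a scan up to the high watermark would have to visit.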

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

Now, let’s force a full table scan (a 10046 trace is enabled at level 8, and a DBMS_XPLAN is generated):

SELECT
  *
FROM
  T1
WHERE
  ROWNUM<=1000;

DBMS_XPLAN (partial output):

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------
|*  1 |  COUNT STOPKEY     |      |      1 |        |   1000 |00:00:00.10 |     123 |    176 |
|   2 |   TABLE ACCESS FULL| T1   |      1 |    500K|   1000 |00:00:00.08 |     123 |    176 |
----------------------------------------------------------------------------------------------

Did the TABLE ACCESS FULL (full table scan) operation in the plan indicate that Oracle read all blocks up to the high water mark (I intentionally excluded the Access/Filter Predicates)? Oracle did NOT read all blocks up to the high water mark, regardless of what the plan shows. The proof is in the 10046 trace file:

=====================
PARSING IN CURSOR #11 len=54 dep=0 uid=63 oct=3 lid=63 tim=1044938463796 hv=4195490999 ad='1c501fdc' sqlid='g386bagx1475r'
SELECT
  *
FROM
  T1
WHERE
  ROWNUM<=1000
END OF STMT
PARSE #11:c=0,e=2282,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=1044938463790
EXEC #11:c=0,e=50,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1044938464457
WAIT #11: nam='SQL*Net message to client' ela= 7 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938464504
WAIT #11: nam='db file sequential read' ela= 40853 file#=1 block#=84361 blocks=1 obj#=72319 tim=1044938505437
WAIT #11: nam='reliable message' ela= 232 channel context=563314104 channel handle=563285032 broadcast message=564250208 obj#=72319 tim=1044938506251
WAIT #11: nam='enq: KO - fast object checkpoint' ela= 163 name|mode=1263468550 2=65558 0=1 obj#=72319 tim=1044938506489
WAIT #11: nam='direct path read' ela= 41814 file number=1 first dba=84362 block cnt=7 obj#=72319 tim=1044938586589
WAIT #11: nam='direct path read' ela= 19323 file number=1 first dba=87305 block cnt=48 obj#=72319 tim=1044938606571
FETCH #11:c=0,e=142335,p=86,cr=15,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938606885
WAIT #11: nam='SQL*Net message from client' ela= 1258 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938608214
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938608305
FETCH #11:c=0,e=690,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938608971
WAIT #11: nam='SQL*Net message from client' ela= 656 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938609666
WAIT #11: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938609725
FETCH #11:c=0,e=687,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938610396
WAIT #11: nam='SQL*Net message from client' ela= 579 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938611012
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938611072
FETCH #11:c=0,e=681,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938611734
WAIT #11: nam='SQL*Net message from client' ela= 742 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938612512
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938612575
WAIT #11: nam='direct path read' ela= 4898 file number=1 first dba=87361 block cnt=30 obj#=72319 tim=1044938618207
FETCH #11:c=0,e=5901,p=42,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938618458
WAIT #11: nam='SQL*Net message from client' ela= 631 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938619135
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938619197
FETCH #11:c=0,e=681,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938619860
WAIT #11: nam='SQL*Net message from client' ela= 581 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938620476
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938620532
FETCH #11:c=0,e=683,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938621199
WAIT #11: nam='SQL*Net message from client' ela= 920 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938622154
WAIT #11: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938622219
WAIT #11: nam='direct path read' ela= 8072 file number=1 first dba=87391 block cnt=42 obj#=72319 tim=1044938630900
FETCH #11:c=0,e=8966,p=48,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938631167
WAIT #11: nam='SQL*Net message from client' ela= 613 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938631822
WAIT #11: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938631880
FETCH #11:c=0,e=686,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938632550
WAIT #11: nam='SQL*Net message from client' ela= 583 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938633168
WAIT #11: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938633271
FETCH #11:c=0,e=675,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=4,tim=1044938633928
WAIT #11: nam='SQL*Net message from client' ela= 602 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938634565
WAIT #11: nam='direct path read' ela= 22151 file number=1 first dba=86793 block cnt=48 obj#=72319 tim=1044938656764
FETCH #11:c=0,e=22249,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1044938656854
STAT #11 id=1 cnt=1000 pid=0 pos=1 obj=0 op='COUNT STOPKEY (cr=123 pr=176 pw=176 time=37561 us)'
STAT #11 id=2 cnt=1000 pid=1 pos=1 obj=72319 op='TABLE ACCESS FULL T1 (cr=123 pr=176 pw=176 time=34756 us cost=8433 size=386000000 card=500000)'
WAIT #11: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938656998
WAIT #11: nam='SQL*Net message from client' ela= 132538 driver id=1413697536 #bytes=1 p3=0 obj#=72319 tim=1044938789567
=====================

From the trace file we see many interesting details, including the absence of the typical db file scattered reads commonly associated with full table scans. From the trace it is also possible to see that 100 rows were read at a time, with a fairly consistent delay between requests for each set of the 100 rows. What else might we see in the trace file that would help us identify the source of a performance problem?
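The claim that only a small fraction of the blocks below the high watermark were read can be verified by totaling the physical reads in the trace. A minimal Python sketch using the read waits from the snippet:

```python
# (wait event, starting dba, block count) for the physical reads in the trace
reads = [
    ("db file sequential read", 84361, 1),
    ("direct path read", 84362, 7),
    ("direct path read", 87305, 48),
    ("direct path read", 87361, 30),
    ("direct path read", 87391, 42),
    ("direct path read", 86793, 48),
]

total_blocks = sum(cnt for _, _, cnt in reads)
print(total_blocks)  # 176 - matches the Reads column in the DBMS_XPLAN output
```

176 blocks is a tiny fraction of the tens of thousands of blocks below the high watermark, confirming that the COUNT STOPKEY operation stopped the scan early.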





Excel – Scrolling Oracle Performance Charts

20 01 2010

January 20, 2010

This example shows how to generate scrolling charts in Excel that report performance data from V$OSSTAT, V$SYS_TIME_MODEL, and V$SYSSTAT.  This example retrieves 11 statistics from the three views, writes those values to a worksheet, and then calculates the delta values from the previous values read from the database – the last 20 delta values for each statistic are included in the charts.  While this example only generates 4 charts from the data, it is easy to extend the example to build additional charts.

With named cell ranges it is not necessary to continually change the chart’s data values range.  For example, you could create 4 named ranges in Excel and set those ranges as the values ranges for each of the charts:

ChartDBTime:     =IF(COUNTA(ScrollingChartData!$A:$A)>20,OFFSET(ScrollingChartData!$A$5,COUNTA(ScrollingChartData!$A:$A)-21,0,20),OFFSET(ScrollingChartData!$A$5,0,0,COUNTA(ScrollingChartData!$A:$A)-1))
ChartDBCPU:      =IF(COUNTA(ScrollingChartData!$B:$B)>20,OFFSET(ScrollingChartData!$B$5,COUNTA(ScrollingChartData!$B:$B)-21,0,20),OFFSET(ScrollingChartData!$B$5,0,0,COUNTA(ScrollingChartData!$B:$B)-1))
ChartSQLElapsed: =IF(COUNTA(ScrollingChartData!$C:$C)>20,OFFSET(ScrollingChartData!$C$5,COUNTA(ScrollingChartData!$C:$C)-21,0,20),OFFSET(ScrollingChartData!$C$5,0,0,COUNTA(ScrollingChartData!$C:$C)-1))
ChartParseTime:  =IF(COUNTA(ScrollingChartData!$D:$D)>20,OFFSET(ScrollingChartData!$D$5,COUNTA(ScrollingChartData!$D:$D)-21,0,20),OFFSET(ScrollingChartData!$D$5,0,0,COUNTA(ScrollingChartData!$D:$D)-1))

However, I will not use that approach in this example.
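The keep-the-last-20-points behavior that the named-range formulas implement with OFFSET and COUNTA can be reduced to a simple sliding window. A Python illustration of the same logic:

```python
from collections import deque

WINDOW = 20  # each chart shows at most the last 20 delta values

chart_points = deque(maxlen=WINDOW)  # oldest points fall off automatically

for sample in range(1, 26):          # simulate collecting 25 statistic deltas
    chart_points.append(sample)

print(list(chart_points))            # only the newest 20 samples remain
```

The Excel formulas accomplish the same thing declaratively: when the column holds more than 20 values, OFFSET anchors the range 21 rows above the last populated cell and takes 20 rows; otherwise it takes everything collected so far.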

First, we need to name the first worksheet as ScrollingChartData and the second worksheet as ScrollingChart, and then create two ActiveX command buttons on the ScrollingChartData worksheet with the names cmdStart and cmdStop:

Next, we need to add a reference to the Microsoft ActiveX Data Objects as demonstrated here.  Also, we need to add a module, and name the module as mdlChartUpdater using the Properties window to assign the name (you can optionally name the two worksheets also).

Now, switch back to the Excel workbook, right-click the ScrollingChartData worksheet and select View Code.  In the Visual Basic editor, add the following code to the code for the worksheet:

Option Explicit

Private Sub cmdStart_Click()
    Dim lngResult As Long
    Dim objChartRange As Range

    On Error Resume Next

    'Clear out any of the old values

    ActiveWorkbook.Sheets("ScrollingChart").ChartObjects.Delete
    ActiveWorkbook.Sheets("ScrollingChartData").Range("4:10000").Clear

    With Sheets("ScrollingChart").ChartObjects.Add(10, 10, 400, 300)
        .Chart.SeriesCollection.NewSeries
        .Chart.Axes(1).CategoryType = 2
        .Chart.SeriesCollection(1).Values = "ScrollingChartData!A5:A5"

        .Chart.HasLegend = False

        .Chart.HasTitle = True
        .Chart.ChartTitle.Text = "DB Time"

        .Chart.Axes(xlCategory, xlPrimary).HasTitle = True
        .Chart.Axes(xlCategory, xlPrimary).AxisTitle.Characters.Text = ""
        .Chart.Axes(xlValue, xlPrimary).HasTitle = True
        .Chart.Axes(xlValue, xlPrimary).AxisTitle.Characters.Text = ""

        .Chart.SeriesCollection(1).HasDataLabels = True
        .Chart.SeriesCollection(1).HasLeaderLines = True

        With .Chart.PlotArea.Border
            .ColorIndex = 16
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.PlotArea.Fill.OneColorGradient Style:=msoGradientHorizontal, Variant:=2, Degree:=0.756847486076142
        .Chart.PlotArea.Fill.ForeColor.SchemeColor = 23
        .Chart.PlotArea.Fill.Visible = True
        With .Chart.PlotArea.Border
            .ColorIndex = 57
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.SeriesCollection(1).Fill.OneColorGradient Style:=msoGradientVertical, Variant:=4, Degree:=0.2
        .Chart.SeriesCollection(1).Fill.Visible = True
        .Chart.SeriesCollection(1).Fill.ForeColor.SchemeColor = 4

        .Chart.Axes(xlValue).MajorGridlines.Border.ColorIndex = 2
        With .Chart.SeriesCollection(1).DataLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.Axes(xlCategory).TickLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.ChartTitle.Font
            .Name = "Arial"
            .FontStyle = "Bold"
            .Size = 16
            .Color = RGB(0, 0, 255)
        End With
    End With

    With Sheets("ScrollingChart").ChartObjects.Add(410, 10, 400, 300)
        .Chart.SeriesCollection.NewSeries
        .Chart.Axes(1).CategoryType = 2
        .Chart.SeriesCollection(1).Values = "ScrollingChartData!B5:B5"
        .Chart.HasLegend = False

        .Chart.HasTitle = True
        .Chart.ChartTitle.Text = "DB CPU"

        .Chart.Axes(xlCategory, xlPrimary).HasTitle = True
        .Chart.Axes(xlCategory, xlPrimary).AxisTitle.Characters.Text = ""
        .Chart.Axes(xlValue, xlPrimary).HasTitle = True
        .Chart.Axes(xlValue, xlPrimary).AxisTitle.Characters.Text = ""

        .Chart.SeriesCollection(1).HasDataLabels = True
        .Chart.SeriesCollection(1).HasLeaderLines = True

        With .Chart.PlotArea.Border
            .ColorIndex = 16
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.PlotArea.Fill.OneColorGradient Style:=msoGradientHorizontal, Variant:=2, Degree:=0.756847486076142
        .Chart.PlotArea.Fill.ForeColor.SchemeColor = 23
        .Chart.PlotArea.Fill.Visible = True
        With .Chart.PlotArea.Border
            .ColorIndex = 57
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.SeriesCollection(1).Fill.OneColorGradient Style:=msoGradientVertical, Variant:=4, Degree:=0.2
        .Chart.SeriesCollection(1).Fill.Visible = True
        .Chart.SeriesCollection(1).Fill.ForeColor.SchemeColor = 3

        .Chart.Axes(xlValue).MajorGridlines.Border.ColorIndex = 2
        With .Chart.SeriesCollection(1).DataLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.Axes(xlCategory).TickLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.ChartTitle.Font
            .Name = "Arial"
            .FontStyle = "Bold"
            .Size = 16
            .Color = RGB(0, 0, 255)
        End With
    End With

    With Sheets("ScrollingChart").ChartObjects.Add(10, 320, 400, 300)
        .Chart.SeriesCollection.NewSeries
        .Chart.Axes(1).CategoryType = 2
        .Chart.SeriesCollection(1).Values = "ScrollingChartData!C5:C5"

        .Chart.HasLegend = False

        .Chart.HasTitle = True
        .Chart.ChartTitle.Text = "SQL Elapsed Time"

        .Chart.Axes(xlCategory, xlPrimary).HasTitle = True
        .Chart.Axes(xlCategory, xlPrimary).AxisTitle.Characters.Text = ""
        .Chart.Axes(xlValue, xlPrimary).HasTitle = True
        .Chart.Axes(xlValue, xlPrimary).AxisTitle.Characters.Text = ""

        .Chart.SeriesCollection(1).HasDataLabels = True
        .Chart.SeriesCollection(1).HasLeaderLines = True

        With .Chart.PlotArea.Border
            .ColorIndex = 16
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.PlotArea.Fill.OneColorGradient Style:=msoGradientHorizontal, Variant:=2, Degree:=0.756847486076142
        .Chart.PlotArea.Fill.ForeColor.SchemeColor = 23
        .Chart.PlotArea.Fill.Visible = True
        With .Chart.PlotArea.Border
            .ColorIndex = 57
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.SeriesCollection(1).Fill.OneColorGradient Style:=msoGradientVertical, Variant:=4, Degree:=0.2
        .Chart.SeriesCollection(1).Fill.Visible = True
        .Chart.SeriesCollection(1).Fill.ForeColor.SchemeColor = 5

        .Chart.Axes(xlValue).MajorGridlines.Border.ColorIndex = 2
        With .Chart.SeriesCollection(1).DataLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.Axes(xlCategory).TickLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.ChartTitle.Font
            .Name = "Arial"
            .FontStyle = "Bold"
            .Size = 16
            .Color = RGB(0, 0, 255)
        End With
    End With

    With Sheets("ScrollingChart").ChartObjects.Add(410, 320, 400, 300)
        .Chart.SeriesCollection.NewSeries
        .Chart.Axes(1).CategoryType = 2
        .Chart.SeriesCollection(1).Values = "ScrollingChartData!D5:D5"
        .Chart.HasLegend = False

        .Chart.HasTitle = True
        .Chart.ChartTitle.Text = "Parse Time"

        .Chart.Axes(xlCategory, xlPrimary).HasTitle = True
        .Chart.Axes(xlCategory, xlPrimary).AxisTitle.Characters.Text = ""
        .Chart.Axes(xlValue, xlPrimary).HasTitle = True
        .Chart.Axes(xlValue, xlPrimary).AxisTitle.Characters.Text = ""

        .Chart.SeriesCollection(1).HasDataLabels = True
        .Chart.SeriesCollection(1).HasLeaderLines = True

        With .Chart.PlotArea.Border
            .ColorIndex = 16
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.PlotArea.Fill.OneColorGradient Style:=msoGradientHorizontal, Variant:=2, Degree:=0.756847486076142
        .Chart.PlotArea.Fill.ForeColor.SchemeColor = 23
        .Chart.PlotArea.Fill.Visible = True
        With .Chart.PlotArea.Border
            .ColorIndex = 57
            .Weight = xlThin
            .LineStyle = xlContinuous
        End With

        .Chart.SeriesCollection(1).Fill.OneColorGradient Style:=msoGradientVertical, Variant:=4, Degree:=0.2
        .Chart.SeriesCollection(1).Fill.Visible = True
        .Chart.SeriesCollection(1).Fill.ForeColor.SchemeColor = 6

        .Chart.Axes(xlValue).MajorGridlines.Border.ColorIndex = 2
        With .Chart.SeriesCollection(1).DataLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.Axes(xlCategory).TickLabels.Font
            .Name = "Arial"
            .FontStyle = "Regular"
            .Size = 8
            .Color = RGB(255, 255, 255)
        End With
        With .Chart.ChartTitle.Font
            .Name = "Arial"
            .FontStyle = "Bold"
            .Size = 16
            .Color = RGB(0, 0, 255)
        End With
    End With

    'Make certain that the initial values are specified
    intStopScrollingChart = False
    lngLastRowScrollingChart = 3

    lngResult = mdlChartUpdater.ConnectDatabase

    If lngResult = True Then
        'If the connection attempt was successful, then start the updater
        UpdateChart
    End If
End Sub

Private Sub cmdStop_Click()
    intStopScrollingChart = True
End Sub

When the Start button on the worksheet is clicked, the above code deletes any charts on the ScrollingChart worksheet, creates 4 new charts, and then executes the ConnectDatabase and UpdateChart functions/procedures in the mdlChartUpdater module that was added in an earlier step.

Next, click the mdlChartUpdater module in the Visual Basic editor to switch to that code window – that is where the magic happens.  In the mdlChartUpdater module, add the following code:

Option Explicit 'Forces all variables to be declared - must precede any declarations in the module

Public intStopScrollingChart As Integer 'Used to indicate if new rows are still being added to the ScrollingChartData sheet
Public lngLastRowScrollingChart As Long 'Used to keep track of the last row added to the ScrollingChartData sheet

Dim dbDatabase As New ADODB.Connection
Dim strDatabase As String
Dim strUserName As String
Dim strPassword As String

Dim intColumns As Integer
Dim strLastColumn As String

Public Function ConnectDatabase() As Integer
    Dim intResult As Integer

    On Error Resume Next

    If dbDatabase.State <> 1 Then
        'Connection to the database is closed
        strDatabase = "MyDB"
        strUserName = "MyUser"
        strPassword = "MyPassword"

        'Connect to the database
        'Oracle connection string
        dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUserName & ";Password=" & strPassword & ";ChunkSize=1000;FetchSize=100;"

        dbDatabase.ConnectionTimeout = 40
        dbDatabase.CursorLocation = adUseClient
        dbDatabase.Open

        If (dbDatabase.State <> 1) Or (Err <> 0) Then
            intResult = MsgBox("Could not connect to the database.  Check your user name and password." & vbCrLf & Error(Err), 16, "Excel Demo")

            ConnectDatabase = False
        Else
            ConnectDatabase = True
        End If
    Else
        ConnectDatabase = True
    End If
End Function

Public Sub UpdateChart()
    Dim sglChange As Single
    Dim strSQL As String

    Dim snpData As ADODB.Recordset

    If intStopScrollingChart = True Then
        Set snpData = Nothing
        Exit Sub
    End If

    On Error Resume Next

    Set snpData = New ADODB.Recordset

    strSQL = "SELECT" & vbCrLf
    strSQL = strSQL & "  STAT_NAME," & vbCrLf
    strSQL = strSQL & "  VALUE" & vbCrLf
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  V$SYS_TIME_MODEL" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    strSQL = strSQL & "  STAT_NAME IN ('DB time','DB CPU','sql execute elapsed time','parse time elapsed')" & vbCrLf
    strSQL = strSQL & "UNION ALL" & vbCrLf
    strSQL = strSQL & "SELECT" & vbCrLf
    strSQL = strSQL & "  STAT_NAME," & vbCrLf
    strSQL = strSQL & "  VALUE" & vbCrLf
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  V$OSSTAT" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    strSQL = strSQL & "  STAT_NAME IN ('AVG_IDLE_TIME','AVG_BUSY_TIME','AVG_USER_TIME','AVG_SYS_TIME')" & vbCrLf
    strSQL = strSQL & "UNION ALL" & vbCrLf
    strSQL = strSQL & "SELECT" & vbCrLf
    strSQL = strSQL & "  NAME STAT_NAME," & vbCrLf
    strSQL = strSQL & "  VALUE" & vbCrLf
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  V$SYSSTAT" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    strSQL = strSQL & "  NAME IN ('consistent gets','table scan rows gotten','user calls')"
    snpData.Open strSQL, dbDatabase

    If snpData.State = 1 Then
        lngLastRowScrollingChart = lngLastRowScrollingChart + 1

        'Recordset opened OK
        Do While Not (snpData.EOF)

            'Put the absolute values since startup starting in column 21, with the delta values starting in column 1
            Select Case snpData("stat_name")
                Case "DB time"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 21).Value = snpData("value") / 1000000
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 1).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 21).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 21).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 1).Value = "DB Time"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 1).Value = 0
                    End If
                Case "DB CPU"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 22).Value = snpData("value") / 1000000
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 2).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 22).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 22).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 2).Value = "DB CPU"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 2).Value = 0
                    End If
                Case "sql execute elapsed time"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 23).Value = snpData("value") / 1000000
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 3).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 23).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 23).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 3).Value = "SQL Exec"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 3).Value = 0
                    End If
                Case "parse time elapsed"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 24).Value = snpData("value") / 1000000
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 4).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 24).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 24).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 4).Value = "Parse Ela"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 4).Value = 0
                    End If
                Case "consistent gets"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 25).Value = snpData("value")
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 5).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 25).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 25).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 5).Value = "Con Gets"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 5).Value = 0
                    End If
                Case "table scan rows gotten"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 26).Value = snpData("value")
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 6).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 26).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 26).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 6).Value = "Tbl Scan Rows"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 6).Value = 0
                    End If
                Case "user calls"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 27).Value = snpData("value")
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 7).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 27).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 27).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 7).Value = "User Calls"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 7).Value = 0
                    End If
                Case "AVG_BUSY_TIME"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 28).Value = snpData("value") / 100
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 8).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 28).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 28).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 8).Value = "Avg Busy"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 8).Value = 0
                    End If
                Case "AVG_IDLE_TIME"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 29).Value = snpData("value") / 100
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 9).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 29).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 29).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 9).Value = "Avg Idle"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 9).Value = 0
                    End If
                Case "AVG_USER_TIME"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 30).Value = snpData("value") / 100
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 10).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 30).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 30).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 10).Value = "Avg User"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 10).Value = 0
                    End If
                Case "AVG_SYS_TIME"
                    Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 31).Value = snpData("value") / 100
                    If lngLastRowScrollingChart > 4 Then
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 11).Value = _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 31).Value - _
                          Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 31).Value
                    Else
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart - 1, 11).Value = "Avg Sys"
                        Sheets("ScrollingChartData").Cells(lngLastRowScrollingChart, 11).Value = 0
                    End If
            End Select

            snpData.MoveNext
        Loop

        snpData.Close
    End If

    If lngLastRowScrollingChart > 4 Then

        'Update the source data locations for each chart - would not need to do this if we used named cell ranges
        Sheets("ScrollingChart").ChartObjects(1).Chart.SeriesCollection(1).Values = "ScrollingChartData!A" & _
          Format(IIf(lngLastRowScrollingChart - 19 > 5, lngLastRowScrollingChart - 19, 5)) & ":A" & Format(lngLastRowScrollingChart)
        Sheets("ScrollingChart").ChartObjects(2).Chart.SeriesCollection(1).Values = "ScrollingChartData!B" & _
          Format(IIf(lngLastRowScrollingChart - 19 > 5, lngLastRowScrollingChart - 19, 5)) & ":B" & Format(lngLastRowScrollingChart)
        Sheets("ScrollingChart").ChartObjects(3).Chart.SeriesCollection(1).Values = "ScrollingChartData!C" & _
          Format(IIf(lngLastRowScrollingChart - 19 > 5, lngLastRowScrollingChart - 19, 5)) & ":C" & Format(lngLastRowScrollingChart)
        Sheets("ScrollingChart").ChartObjects(4).Chart.SeriesCollection(1).Values = "ScrollingChartData!D" & _
          Format(IIf(lngLastRowScrollingChart - 19 > 5, lngLastRowScrollingChart - 19, 5)) & ":D" & Format(lngLastRowScrollingChart)
    End If

    If intStopScrollingChart = False Then
        'Instruct Excel to execute the UpdateChart sub again in 60 seconds
        Application.OnTime DateAdd("s", 60, Now), "UpdateChart"
    End If
    Set snpData = Nothing
End Sub

Back in the ScrollingChartData worksheet, click the Start button.  Every 60 seconds (until the Stop button is clicked) the UpdateChart macro will re-execute itself, collecting the most recent statistics from the database.  After the macro has been running for a couple of minutes the worksheet might look something like this:
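Application.OnTime is what keeps the collection going: each run of UpdateChart schedules the next run 60 seconds out, and the Stop button merely sets a flag that the next run checks.  The same self-rescheduling pattern can be sketched in Python with threading.Timer (shortened interval and hypothetical names for illustration):

```python
import threading
import time

samples = []
stop_flag = threading.Event()

def update_chart(interval=0.05):
    """Collect one sample, then schedule this function to run again,
    analogous to the Application.OnTime call in UpdateChart."""
    if stop_flag.is_set():  # the Stop button just sets this flag
        return
    samples.append(time.monotonic())
    threading.Timer(interval, update_chart, args=(interval,)).start()

update_chart()
time.sleep(0.3)   # let a few collection cycles run
stop_flag.set()   # "click the Stop button"
time.sleep(0.1)   # allow any pending timer to notice the flag
```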

And the ScrollingChart tab might look something like this (zoomed at 75% – feel free to change the chart colors in the cmdStart code):

After 20+ minutes of logging, the ScrollingChartData worksheet might look like this:

And the ScrollingChart worksheet might look like this:

There is certainly a lot that may be done to extend this example, but the above should give you the general idea of what needs to be done.





PGA Memory – The Developer’s Secret Weapon for Stealing All of the Memory in the Server 2

19 01 2010

January 19, 2010

This article is a follow up to the earlier article – just how much PGA memory can a SQL statement with two NOT IN clauses and an ORDER BY clause consume?  As we saw in the previous post, DBMS_XPLAN.DISPLAY_CURSOR may be a bit misleading due to the scale of the Used-Tmp column, and the fact that not all of the memory listed in the Used-Mem column is necessarily used at the same time.

So, let’s try three experiments where we modify the SQL statement in the script to have one of the following:

AND T1.C1 BETWEEN 1 AND 500000
AND T1.C1 BETWEEN 1 AND 1000000
AND T1.C1 BETWEEN 1 AND 1400000

For the first test, the PGAMemoryFill2.sql script will look like this:

DECLARE
CURSOR C_MEMORY_FILL IS
SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
  AND T1.C1 BETWEEN 1 AND 500000
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

TYPE TYPE_MEMORY_FILL IS TABLE OF C_MEMORY_FILL%ROWTYPE
INDEX BY BINARY_INTEGER;

T_MEMORY_FILL  TYPE_MEMORY_FILL;

BEGIN
  OPEN C_MEMORY_FILL;
  LOOP
    FETCH C_MEMORY_FILL BULK COLLECT INTO T_MEMORY_FILL LIMIT 10000000;

    EXIT WHEN T_MEMORY_FILL.COUNT = 0;

    FOR I IN T_MEMORY_FILL.FIRST..T_MEMORY_FILL.LAST LOOP
      NULL;
    END LOOP;

    DBMS_LOCK.SLEEP(20);
  END LOOP;
END;
/

(You two DBAs who are about to stand and clap, sit back down; didn’t you learn anything from the previous article that used bulk collect?)  We will use just two sessions, and make a small adjustment to the query of V$SQL_WORKAREA_ACTIVE so that we will be able to match the memory allocation to a specific step in the execution plan.  Additionally, that view will be queried once approximately every 10 seconds.

Session 1:

SELECT SID FROM V$MYSTAT WHERE ROWNUM<=1;

       SID
----------
       303

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
  AND T1.C1 BETWEEN 1 AND 500000
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

Plan hash value: 3251203018

-------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|  Time    |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   497K|    82M|       |   825K  (1)| 02:45:02 |
|   1 |  SORT ORDER BY       |      |   497K|    82M|    86M|   825K  (1)| 02:45:02 |
|*  2 |   HASH JOIN ANTI NA  |      |   497K|    82M|    73M|   806K  (1)| 02:41:14 |
|*  3 |    HASH JOIN ANTI NA |      |   499K|    68M|    71M|   668K  (1)| 02:13:46 |
|*  4 |     TABLE ACCESS FULL| T1   |   500K|    65M|       |   543K  (1)| 01:48:42 |
|   5 |     TABLE ACCESS FULL| T2   |    10M|    57M|       |   113K  (1)| 00:22:39 |
|   6 |    TABLE ACCESS FULL | T3   |    10M|   295M|       |   113K  (1)| 00:22:39 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   4 - filter("T1"."C1"<=500000 AND "T1"."C1">=1)

SET AUTOTRACE OFF

ALTER SESSION SET STATISTICS_LEVEL=ALL;

@PGAMemoryFill2.sql

Session 2:

SET PAGESIZE 2000
SET LINESIZE 150

COLUMN ID FORMAT 99
COLUMN PASSES FORMAT 999999
COLUMN OPERATION_TYPE FORMAT A12
COLUMN WA_SIZE FORMAT 9999999990
SPOOL SQL_WORKAREA.TXT

SELECT
  SQL_ID,
  OPERATION_ID ID,
  OPERATION_TYPE,
  WORK_AREA_SIZE WA_SIZE,
  ACTUAL_MEM_USED,
  NUMBER_PASSES PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE
ORDER BY
  SQL_ID,
  OPERATION_ID;

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

So far, the hash join at ID 2 is consuming about 4.04MB, and the hash join at ID 3 is consuming about 76.64MB.  Now we repeat the query of V$SQL_WORKAREA_ACTIVE roughly every 10 seconds:
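The MB figures quoted throughout this article are simply the ACTUAL_MEM_USED byte counts from the view output divided by 1,048,576 bytes per MB.  For example, for the two rows above:

```python
MB = 1048576  # bytes per megabyte

# ACTUAL_MEM_USED values from the V$SQL_WORKAREA_ACTIVE output above
print(round(4239360 / MB, 2))   # -> 4.04   (hash join at ID 2)
print(round(80363520 / MB, 2))  # -> 76.64  (hash join at ID 3)
```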

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95463424        80363520       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95771648        97603584       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95771648        97603584       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       98526208         4239360       0
57vx5p5xq42jq   3 HASH-JOIN       95771648        97603584       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       87232512        89805824       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
57vx5p5xq42jq   2 HASH-JOIN       87232512        89805824       0

As we can see from the above, the hash join at ID 2 continued to consume 4.04MB, while the hash join at ID 3 increased to 93.08MB.  When the hash join at ID 3 disappeared, the hash join at ID 2 consumed roughly 85.65MB.  The two hash joins and the sort operation completed in-memory, without spilling to the TEMP tablespace.
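
The MB figures quoted throughout this article are simply the byte values from V$SQL_WORKAREA_ACTIVE divided by 1048576 and rounded to two decimal places.  A minimal helper (in Python, purely illustrative, not from the test scripts) shows the conversion:

```python
# Convert the byte counts reported in V$SQL_WORKAREA_ACTIVE to MB
# (1MB = 1024 * 1024 = 1048576 bytes), rounded to two decimal places.
def bytes_to_mb(b):
    """Convert a byte count to MB, rounded to two decimal places."""
    return round(b / 1048576, 2)

print(bytes_to_mb(4239360))   # ACTUAL_MEM_USED of the hash join at ID 2
```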

Two executions of the following SQL statement show that the total PGA memory consumed by the session jumped up to a high of 207.40MB, dropped down to 133.03MB, and eventually fell to 8.03MB when the script ended:

SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.SID=303
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%';

NAME                          VALUE
----------------------- -----------
session pga memory      139,489,936
session pga memory max  217,477,776

NAME                          VALUE
----------------------- -----------
session pga memory        8,417,936
session pga memory max  217,477,776

Let’s check the DBMS_XPLAN output:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('57vx5p5xq42jq',0,'ALLSTATS LAST'));

Plan hash value: 3251203018

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |    400K|00:02:33.07 |    2833K|   2833K|       |       |          |
|   1 |  SORT ORDER BY       |      |      1 |    497K|    400K|00:02:33.07 |    2833K|   2833K|    68M|  2873K|   61M (0)|
|*  2 |   HASH JOIN ANTI NA  |      |      1 |    497K|    400K|00:02:32.75 |    2833K|   2833K|    74M|  7919K|   85M (0)|
|*  3 |    HASH JOIN ANTI NA |      |      1 |    499K|    450K|00:02:09.73 |    2416K|   2416K|    82M|  7919K|   93M (0)|
|*  4 |     TABLE ACCESS FULL| T1   |      1 |    500K|    500K|00:01:46.14 |    2000K|   1999K|       |       |          |
|   5 |     TABLE ACCESS FULL| T2   |      1 |     10M|     10M|00:00:20.03 |     416K|    416K|       |       |          |
|   6 |    TABLE ACCESS FULL | T3   |      1 |     10M|     10M|00:00:20.03 |     416K|    416K|       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   4 - filter(("T1"."C1"<=500000 AND "T1"."C1">=1))

The DBMS_XPLAN output indicates that all three workarea executions were optimal, with the sort consuming 61MB, the hash join at ID 2 consuming 85MB, and the hash join at ID 3 consuming 93MB – but remember that the memory was not all used at the same time.

Let’s repeat the test with a larger number range to see if we are able to locate the tipping point.

Session 1:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
  AND T1.C1 BETWEEN 1 AND 1000000
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

Plan hash value: 3251203018

-------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|  Time    |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   995K|   165M|       |   851K  (1)| 02:50:16 |
|   1 |  SORT ORDER BY       |      |   995K|   165M|   172M|   851K  (1)| 02:50:16 |
|*  2 |   HASH JOIN ANTI NA  |      |   995K|   165M|   147M|   813K  (1)| 02:42:40 |
|*  3 |    HASH JOIN ANTI NA |      |   999K|   136M|   142M|   672K  (1)| 02:14:28 |
|*  4 |     TABLE ACCESS FULL| T1   |  1000K|   130M|       |   543K  (1)| 01:48:42 |
|   5 |     TABLE ACCESS FULL| T2   |    10M|    57M|       |   113K  (1)| 00:22:39 |
|   6 |    TABLE ACCESS FULL | T3   |    10M|   295M|       |   113K  (1)| 00:22:39 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   4 - filter("T1"."C1"<=1000000 AND "T1"."C1">=1)

SET AUTOTRACE OFF

@PGAMemoryFill2.sql

Session 2:

SELECT
  SQL_ID,
  OPERATION_ID ID,
  OPERATION_TYPE,
  WORK_AREA_SIZE WA_SIZE,
  ACTUAL_MEM_USED,
  NUMBER_PASSES PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE
ORDER BY
  SQL_ID,
  OPERATION_ID;

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0 

We start off with the hash join at ID 2 consuming 8.06MB and the hash join at ID 3 consuming 154.07MB.  Now we continue executing that query roughly every 10 seconds:

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      181816320       161557504       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      188286976       215750656       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      187907072         8454144       0
7wy7nqhbn5v7g   3 HASH-JOIN      188286976       215750656       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      167822336       194778112       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      167822336       194778112       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   2 HASH-JOIN      167822336       194778112       0

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
7wy7nqhbn5v7g   1 SORT (v2)        1245184          486400       1    117440512

As we are able to see from the above, the hash join at ID 2 continued consuming 8.06MB of memory while the hash join at ID 3 grew to 205.76MB.  Once the hash join at ID 3 disappeared, the hash join at ID 2 grew to 185.75MB – both of the hash joins completed using an optimal, in-memory execution.  We saw in the earlier test that the SORT operation at ID 1 required about 24MB less PGA memory than the hash join at ID 2, yet this time the sort operation spilled to disk, using 112MB of space in the TEMP tablespace and just 0.46MB of PGA memory.  (There must be a reason why the hash join completed in memory while the SORT operation that consumed less memory spilled to disk, but it escapes me at the moment – the old rule, before the PGA_AGGREGATE_TARGET parameter was introduced, was that HASH_AREA_SIZE defaulted to twice the value of SORT_AREA_SIZE – I wonder if some of that logic is still present.)

So, what about the PGA memory usage?

SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.SID=303
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%';

NAME                          VALUE
----------------------- -----------
session pga memory      287,994,512
session pga memory max  390,558,352

NAME                          VALUE
----------------------- -----------
session pga memory        8,549,008
session pga memory max  390,558,352

The PGA memory usage hit a high of 372.47MB and dropped down to 8.15MB when the script completed.  Let’s check the DBMS_XPLAN output:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('7wy7nqhbn5v7g',0,'ALLSTATS LAST'));

Plan hash value: 3251203018

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |    800K|00:02:37.04 |    2833K|   2847K|  14286 |       |       |          |         |
|   1 |  SORT ORDER BY       |      |      1 |    995K|    800K|00:02:37.04 |    2833K|   2847K|  14286 |   126M|  3808K|  116M (1)|     112K|
|*  2 |   HASH JOIN ANTI NA  |      |      1 |    995K|    800K|00:02:33.47 |    2833K|   2833K|      0 |   145M|  7919K|  185M (0)|         |
|*  3 |    HASH JOIN ANTI NA |      |      1 |    999K|    900K|00:02:09.85 |    2416K|   2416K|      0 |   161M|  7919K|  205M (0)|         |
|*  4 |     TABLE ACCESS FULL| T1   |      1 |   1000K|   1000K|00:01:45.86 |    2000K|   1999K|      0 |       |       |          |         |
|   5 |     TABLE ACCESS FULL| T2   |      1 |     10M|     10M|00:00:20.00 |     416K|    416K|      0 |       |       |          |         |
|   6 |    TABLE ACCESS FULL | T3   |      1 |     10M|     10M|00:00:10.03 |     416K|    416K|      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   4 - filter(("T1"."C1"<=1000000 AND "T1"."C1">=1))

The above seems to indicate that the SORT operation at ID 1 at one point consumed 116MB of memory (against an estimated optimal memory requirement of 126MB), and must have then spilled to disk, reducing the memory usage to the 0.46MB value that we saw with the earlier query of V$SQL_WORKAREA_ACTIVE.  This output confirms that the SORT operation performed a 1 pass workarea execution, while the two hash joins performed optimal workarea executions.
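
The distinction between these execution modes comes straight from the NUMBER_PASSES column of V$SQL_WORKAREA_ACTIVE.  As a sketch (standard Oracle terminology, not code from the test scripts), the classification works like this:

```python
# Classify a workarea execution from the NUMBER_PASSES column of
# V$SQL_WORKAREA_ACTIVE; DBMS_XPLAN summarizes the same information
# with the (0)/(1) markers in the Used-Mem column.
def workarea_mode(number_passes):
    if number_passes == 0:
        return "optimal"      # completed entirely in PGA memory
    if number_passes == 1:
        return "one-pass"     # spilled to TEMP, one pass over the data
    return "multi-pass"       # spilled to TEMP, multiple passes
```

With this classification, the SORT operation above was a one-pass execution while both hash joins remained optimal.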

Let’s repeat the test a final time with a larger number range to see if we are able to locate the tipping point.

Session 1:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
  AND T1.C1 BETWEEN 1 AND 1400000
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

Plan hash value: 1147745168

------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |  1393K|   231M|       |   872K  (1)|02:54:27 |
|   1 |  SORT ORDER BY            |      |  1393K|   231M|   242M|   872K  (1)|02:54:27 |
|*  2 |   HASH JOIN ANTI NA       |      |  1393K|   231M|   206M|   819K  (1)|02:43:49 |
|*  3 |    HASH JOIN RIGHT ANTI NA|      |  1399K|   190M|   171M|   675K  (1)|02:15:02 |
|   4 |     TABLE ACCESS FULL     | T2   |    10M|    57M|       |   113K  (1)|00:22:39 |
|*  5 |     TABLE ACCESS FULL     | T1   |  1400K|   182M|       |   543K  (1)|01:48:42 |
|   6 |    TABLE ACCESS FULL      | T3   |    10M|   295M|       |   113K  (1)|00:22:39 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   5 - filter("T1"."C1"<=1400000 AND "T1"."C1">=1)

SET AUTOTRACE OFF

Session 2:

SELECT
  SQL_ID,
  OPERATION_ID ID,
  OPERATION_TYPE,
  WORK_AREA_SIZE WA_SIZE,
  ACTUAL_MEM_USED,
  NUMBER_PASSES PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE
ORDER BY
  SQL_ID,
  OPERATION_ID;

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968         8454144       0
a6yfcryfux22j   3 HASH-JOIN       29298688        20733952       0     19922944

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968         8454144       0
a6yfcryfux22j   3 HASH-JOIN       29298688        20733952       0     57671680

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968         8454144       0
a6yfcryfux22j   3 HASH-JOIN       29298688        20733952       0     96468992

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968         8454144       0
a6yfcryfux22j   3 HASH-JOIN      132551680        97767424       1    130023424

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN       21715968        20730880       0    173015040
a6yfcryfux22j   3 HASH-JOIN      145126400       151730176       1    169869312

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN      190238720       105683968       1    189792256

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN      202813440       204740608       1    199229440

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN      202813440       204740608       1    220200960

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   2 HASH-JOIN      202813440       204740608       1    242221056

SQL_ID         ID OPERATION_TY     WA_SIZE ACTUAL_MEM_USED  PASSES TEMPSEG_SIZE
------------- --- ------------ ----------- --------------- ------- ------------
a6yfcryfux22j   1 SORT (v2)       32429056        25973760       1    137363456
a6yfcryfux22j   2 HASH-JOIN       10075136         8312832       1    251658240

SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.SID=303
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%';

NAME                          VALUE
----------------------- -----------
session pga memory      377,975,440
session pga memory max  390,558,352

NAME                          VALUE
----------------------- -----------
session pga memory        8,549,008
session pga memory max  390,558,352

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('a6yfcryfux22j',0,'ALLSTATS LAST'));

Plan hash value: 1147745168

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |      1 |        |   1120K|00:03:03.88 |    2833K|   2902K|  68818 |       |       |          |         |
|   1 |  SORT ORDER BY            |      |      1 |   1393K|   1120K|00:03:03.88 |    2833K|   2902K|  68818 |   177M|  4474K|  116M (1)|     158K|
|*  2 |   HASH JOIN ANTI NA       |      |      1 |   1393K|   1120K|00:02:57.80 |    2833K|   2882K|  48701 |   202M|  7914K|  195M (1)|     240K|
|*  3 |    HASH JOIN RIGHT ANTI NA|      |      1 |   1399K|   1260K|00:02:23.18 |    2416K|   2436K|  19840 |   269M|    14M|  144M (1)|     162K|
|   4 |     TABLE ACCESS FULL     | T2   |      1 |     10M|     10M|00:00:20.03 |     416K|    416K|      0 |       |       |          |         |
|*  5 |     TABLE ACCESS FULL     | T1   |      1 |   1400K|   1400K|00:01:48.32 |    2000K|   1999K|      0 |       |       |          |         |
|   6 |    TABLE ACCESS FULL      | T3   |      1 |     10M|     10M|00:00:20.03 |     416K|    416K|      0 |       |       |          |         |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C2"="C2")
   3 - access("T1"."C1"="C1")
   5 - filter(("T1"."C1"<=1400000 AND "T1"."C1">=1))

All three of the workarea executions became 1 pass executions, but look at the Used-Mem and the Used-Tmp columns.  If you had not seen the previous test cases, you might take a look at the DBMS_XPLAN output and remark how silly Oracle is to consume 116M of PGA memory during a SORT operation and spill just 158KB to the TEMP tablespace, or how silly it is that Oracle would consume 195MB in the hash join at ID 2 and spill just 240KB to the TEMP tablespace.  It should now be obvious that this is not what is happening – so much for relying on the DBMS_XPLAN output with ALLSTATS LAST specified as the format parameter and STATISTICS_LEVEL set to ALL.  Your results could be different with a different Oracle release (the above test results are from 11.1.0.7), a different value for PGA_AGGREGATE_TARGET, or different levels of concurrent activity in the database.
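
Part of the confusion is a units quirk: the Used-Tmp column of DBMS_XPLAN is commonly understood to be reported in KB, so each displayed value is 1024 times larger than a naive reading suggests.  A quick arithmetic check (assuming the KB interpretation) against the second test:

```python
# Assuming DBMS_XPLAN's Used-Tmp column is reported in KB (a commonly
# noted quirk), the "112K" displayed for the SORT ORDER BY in the second
# test matches the 117,440,512-byte TEMPSEG_SIZE observed at the same
# time in V$SQL_WORKAREA_ACTIVE.
displayed = 112                  # Used-Tmp showed "112K"
value_kb = displayed * 1024      # the K suffix scales by 1024
value_bytes = value_kb * 1024    # the column unit itself is KB
print(value_bytes == 117440512)  # matches the observed TEMPSEG_SIZE
```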





PGA Memory – The Developer’s Secret Weapon for Stealing All of the Memory in the Server

18 01 2010

January 18, 2010

(Forward to the Follow-Up Post)

Here is a fun test where you might be able to bring down the server in one of several ways (warning, you might not want to try this with anything less than Oracle Database 11.1.0.6 – if you want to try the test with Oracle 9i or 10g, add NOT NULL constraints to the columns C1 and C2 in each table):

  • Filling up the last bit of available space in the datafiles.
  • Causing the Temp tablespace to madly expand until it reaches its maximum size.
  • Stealing all of the memory on the server (so much for setting the PGA_AGGREGATE_TARGET parameter).
  • Swamping the disk subsystem.

We start out with three innocent looking tables created by the following script:

CREATE TABLE T1 AS
SELECT
  ROWNUM C1,
  RPAD('R'||TO_CHAR(ROWNUM),30,'B') C2,
  RPAD('A',100,'A') C3
FROM
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V2;

CREATE TABLE T2 AS
SELECT
  ROWNUM*10 C1,
  RPAD('R'||TO_CHAR(ROWNUM*10),30,'B') C2,
  RPAD('A',255,'A') C3
FROM
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V2;

CREATE TABLE T3 AS
SELECT
  (ROWNUM*10)+2 C1,
  RPAD('R'||TO_CHAR((ROWNUM*10)+2),30,'B') C2,
  RPAD('A',255,'A') C3
FROM
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM C1
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V2;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3')

We can see how much disk space is in use by the three tables with the following SQL statement:

SELECT
  SEGMENT_NAME SEGMENT,
  SUM(BYTES/1048576) TOTAL_MB
FROM
  DBA_EXTENTS
WHERE
  OWNER=USER
  AND SEGMENT_NAME IN ('T1','T2','T3')
GROUP BY
  SEGMENT_NAME
ORDER BY
  SEGMENT_NAME;

SEGMENT   TOTAL_MB
------- ----------
T1           15684
T2            3269
T3            3266
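
A quick sanity check on the arithmetic (the three MB figures come from the DBA_EXTENTS output above):

```python
# Sum the segment sizes reported by the DBA_EXTENTS query and convert to GB.
total_mb = 15684 + 3269 + 3266     # T1 + T2 + T3, in MB
print(round(total_mb / 1024, 1))   # -> 21.7 (GB)
```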

Looks like about 21.7GB is in use by the three tables.  Next, we need a script that we will name PGAMemoryFill.sql:

DECLARE
CURSOR C_MEMORY_FILL IS
SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

TYPE TYPE_MEMORY_FILL IS TABLE OF C_MEMORY_FILL%ROWTYPE
INDEX BY BINARY_INTEGER;

T_MEMORY_FILL  TYPE_MEMORY_FILL;

BEGIN
  OPEN C_MEMORY_FILL;
  LOOP
    FETCH C_MEMORY_FILL BULK COLLECT INTO  T_MEMORY_FILL  LIMIT 10000000;

    EXIT WHEN T_MEMORY_FILL.COUNT = 0;

    FOR I IN T_MEMORY_FILL.FIRST..T_MEMORY_FILL.LAST LOOP
      NULL;
    END LOOP;

    DBMS_LOCK.SLEEP(20);
  END LOOP;
END;
/

Yes, the script is performing bulk collection (2 DBAs stand up and clap, the rest start shaking their heads side to side).
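
To illustrate why the LIMIT 10000000 clause offers no real protection here (a Python sketch of the fetch pattern, not Oracle code): the query returns far fewer rows than the limit, so the entire result set arrives in a single fetch and must be held in session memory at once.

```python
def fetch_in_chunks(rows, limit):
    """Yield successive slices of at most `limit` rows, mimicking
    FETCH ... BULK COLLECT ... LIMIT in the PL/SQL script above."""
    for i in range(0, len(rows), limit):
        yield rows[i:i + limit]

rows = range(800000)  # stand-in for a result set smaller than the LIMIT
print(len(list(fetch_in_chunks(rows, 10000000))))  # 1 chunk: LIMIT never kicks in
print(len(list(fetch_in_chunks(rows, 10000))))     # 80 chunks: memory is bounded
```

A smaller LIMIT bounds the memory consumed per fetch; a LIMIT larger than the result set is effectively no limit at all.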

Let’s check the PGA_AGGREGATE_TARGET:

SHOW PARAMETER PGA_AGGREGATE_TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
pga_aggregate_target                 big integer 1800M

OK, the PGA_AGGREGATE_TARGET is just less than 1.8GB, and the server has 12GB of memory.
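
As an aside (this is a commonly reported but undocumented and version-dependent relationship, so treat it as an assumption rather than a rule): the hidden _pga_max_size parameter visible in the V$SQL_OPTIMIZER_ENV output later in this article is 368640KB, which works out to exactly 20% of the 1800MB PGA_AGGREGATE_TARGET:

```python
# Assumption: the often-reported "20% of PGA_AGGREGATE_TARGET" sizing of
# the hidden _pga_max_size parameter on this 11.1.0.7 system.
pga_aggregate_target_mb = 1800      # from SHOW PARAMETER above
pga_max_size_kb = 368640            # _pga_max_size, from V$SQL_OPTIMIZER_ENV
print(pga_max_size_kb / 1024)       # -> 360.0 (MB)
print(pga_aggregate_target_mb // 5) # -> 360, i.e. 20% of the target
```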

Now for the test we will need two sessions: session 1 will be the session that executes the above script, and session 2 will execute various queries to see what is happening in the database.

Session 1:

SELECT SID FROM V$MYSTAT WHERE ROWNUM<=1;

  SID
-----
  335

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  T1.C3
FROM
  T1
WHERE
  T1.C1 NOT IN (
    SELECT
      C1
    FROM
      T2)
  AND T1.C2 NOT IN (
    SELECT
      C2
    FROM
      T3)
ORDER BY
  T1.C2 DESC,
  T1.C1 DESC;

Plan hash value: 2719846691

------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|Time      |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |   174 |       |  2351K  (1)| 07:50:19 |
|   1 |  SORT ORDER BY            |      |     1 |   174 |       |  2351K  (1)| 07:50:19 |
|*  2 |   HASH JOIN RIGHT ANTI NA |      |     1 |   174 |   171M|  2351K  (1)| 07:50:19 |
|   3 |    TABLE ACCESS FULL      | T2   |    10M|    57M|       |   113K  (1)| 00:22:39 |
|*  4 |    HASH JOIN RIGHT ANTI NA|      |    99M|    15G|   410M|  1382K  (1)| 04:36:25 |
|   5 |     TABLE ACCESS FULL     | T3   |    10M|   295M|       |   113K  (1)| 00:22:39 |
|   6 |     TABLE ACCESS FULL     | T1   |   100M|    12G|       |   543K  (1)| 01:48:42 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="C1")
   4 - access("T1"."C2"="C2")

SET AUTOTRACE OFF

SELECT 1 FROM DUAL;

From the above, Oracle is planning to perform a NULL aware hash join between tables T1 and T3 (predicted to consume 410MB of space in the TEMP tablespace… is this the true unit of measurement?  Keep reading), and then join that row source to table T2 using another NULL aware hash join (Oracle 10.2.0.4 and lower will not use a NULL aware hash join – you have been warned).  The SQL statement involving tables T1, T2, and T3 is the SQL statement that will be executed in the PGAMemoryFill.sql script.

In Session 2 we will take a look at the optimizer parameters in effect for the last SQL statement executed by Session 1:

SET PAGESIZE 1000
COLUMN CN FORMAT 99
COLUMN NAME FORMAT A37
COLUMN VALUE FORMAT A14
COLUMN DEF FORMAT A3

SELECT
  CHILD_NUMBER CN,
  NAME,
  VALUE,
  ISDEFAULT DEF
FROM
  V$SQL_OPTIMIZER_ENV SOE,
  V$SESSION S
WHERE
  SOE.SQL_ID=S.SQL_ID
  AND SOE.CHILD_NUMBER=S.SQL_CHILD_NUMBER
  AND S.SID=335
ORDER BY
  NAME;

 CN NAME                                  VALUE          DEF
--- ------------------------------------- -------------- ---
  0 _pga_max_size                         368640 KB      NO
  0 active_instance_count                 1              YES
  0 bitmap_merge_area_size                1048576        YES
  0 cell_offload_compaction               ADAPTIVE       YES
  0 cell_offload_plan_display             AUTO           YES
  0 cell_offload_processing               true           YES
  0 cpu_count                             8              YES
  0 cursor_sharing                        exact          YES
  0 db_file_multiblock_read_count         128            YES
  0 hash_area_size                        131072         YES
  0 is_recur_flags                        0              YES
  0 optimizer_capture_sql_plan_baselines  false          YES
  0 optimizer_dynamic_sampling            2              YES
  0 optimizer_features_enable             11.1.0.7       YES
  0 optimizer_index_caching               0              YES
  0 optimizer_index_cost_adj              100            YES
  0 optimizer_mode                        all_rows       YES
  0 optimizer_secure_view_merging         true           YES
  0 optimizer_use_invisible_indexes       false          YES
  0 optimizer_use_pending_statistics      false          YES
  0 optimizer_use_sql_plan_baselines      true           YES
  0 parallel_ddl_mode                     enabled        YES
  0 parallel_degree                       0              YES
  0 parallel_dml_mode                     disabled       YES
  0 parallel_execution_enabled            true           YES
  0 parallel_query_default_dop            0              YES
  0 parallel_query_mode                   enabled        YES
  0 pga_aggregate_target                  1843200 KB     YES
  0 query_rewrite_enabled                 true           YES
  0 query_rewrite_integrity               enforced       YES
  0 result_cache_mode                     MANUAL         YES
  0 skip_unusable_indexes                 true           YES
  0 sort_area_retained_size               0              YES
  0 sort_area_size                        65536          YES
  0 star_transformation_enabled           false          YES
  0 statistics_level                      typical        YES
  0 transaction_isolation_level           read_commited  YES
  0 workarea_size_policy                  auto           YES 

Notice in the above that _pga_max_size was set to 368640KB (360MB – 20% of the PGA_AGGREGATE_TARGET; note that this value does not seem to decrease as hard parses are forced while a lot of PGA memory is in use).  Even though the ISDEFAULT column shows that this is not the default value, the value was set automatically based on the PGA_AGGREGATE_TARGET value.  To further demonstrate that _pga_max_size has not been manually adjusted, here are two screen shots from one of my programs that show all of the initialization parameters in effect – note that Is Default is set to TRUE for this parameter (for a script that outputs all of the hidden parameter values, see http://www.jlcomp.demon.co.uk/params.html):

The 368640 KB value reported for _PGA_MAX_SIZE in the V$SQL_OPTIMIZER_ENV view exactly matches the value for _PGA_MAX_SIZE returned by the query of X$KSPPI and X$KSPPSV.
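A quick arithmetic sketch confirms the relationship (note that the 20%-of-PGA_AGGREGATE_TARGET derivation is the commonly observed behavior of this hidden parameter, not a documented formula):

```python
# _PGA_MAX_SIZE appears to be derived as 20% of PGA_AGGREGATE_TARGET
# (both values are shown in KB in V$SQL_OPTIMIZER_ENV).
pga_aggregate_target_kb = 1_843_200            # 1800 MB
pga_max_size_kb = int(pga_aggregate_target_kb * 0.20)

print(pga_max_size_kb)          # 368640 KB
print(pga_max_size_kb // 1024)  # 360 MB
```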

Before we start causing damage, let’s check the documentation for the V$SQL_WORKAREA_ACTIVE view:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/memory.htm#i48705
http://download-west.oracle.com/docs/cd/B28359_01/server.111/b28320/dynviews_3058.htm

The second of the above links defines the columns in the view.  A couple of those column definitions follow:

  • OPERATION_TYPE: Type of operation using the work area (SORT, HASH JOIN, GROUP BY, BUFFERING, BITMAP MERGE, or BITMAP CREATE)
  • WORK_AREA_SIZE: Maximum size (in bytes) of the work area as it is currently used by the operation
  • ACTUAL_MEM_USED: Amount of PGA memory (in bytes) currently allocated on behalf of this work area. This value should range between 0 and WORK_AREA_SIZE.
  • NUMBER_PASSES: Number of passes corresponding to this work area (0 if running in OPTIMAL mode)
  • TEMPSEG_SIZE: Size (in bytes) of the temporary segment used on behalf of this work area.  This column is NULL if this work area has not (yet) spilled to disk.

While session 1 is busy executing the PGAMemoryFill.sql script, session 2 will periodically query the V$SQL_WORKAREA_ACTIVE view to see what is happening.

In Session 1:

ALTER SESSION SET STATISTICS_LEVEL=ALL;

@PGAMemoryFill.sql

Session 2 then starts repeatedly executing the following SQL statement after a short delay (note that I could have also selected the OPERATION_ID column to make it easy to tie the memory used to a specific operation in the execution plan that was displayed earlier):

SELECT
  SQL_ID,
  OPERATION_TYPE,
  WORK_AREA_SIZE,
  ACTUAL_MEM_USED,
  NUMBER_PASSES,
  TEMPSEG_SIZE
FROM
  V$SQL_WORKAREA_ACTIVE;

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv HASH-JOIN                  29298688        20712448             0    121634816

So, Session 1 is using about 19.75MB of PGA memory for a hash join, and according to the definition of the NUMBER_PASSES column the hash join is currently an optimal execution – yet that seems to conflict with the definition of the TEMPSEG_SIZE column and the non-NULL value appearing in that column.  Session 2 will continue to re-execute the above SQL statement, pausing after each execution:

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv HASH-JOIN                  40738816        33011712             0     45088768
0k5pr4rx072sv HASH-JOIN                 189427712        20740096             1    130023424

Now there are two hash joins active for the SQL statement with a total of 51.26MB of PGA memory in use.  One of the hash joins is still an optimal execution, while the second has become a 1 pass execution.

 
SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  78364672        28554240             1   1293942784
0k5pr4rx072sv HASH-JOIN                 147055616       148588544             1    814743552
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1    470810624

Now both of the hash joins are reporting a 1 pass execution.  A V2 sort operation has joined the output, and it too is executing as a 1 pass operation.  The session is now using just over 274MB of PGA memory based on the output of this view.

 
SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  78304256        43396096             1   1749024768
0k5pr4rx072sv HASH-JOIN                 147055616       148588544             1    968884224
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1    591396864

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  80089088         5712896             1   5366611968
0k5pr4rx072sv HASH-JOIN                 147055616       148588544             1   2097152000
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1   1509949440

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  73031680        67003392             1   8383365120
0k5pr4rx072sv HASH-JOIN                 147055616       148588544             1   3050307584
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1   2283798528

The session has made it up to 310.77MB of PGA memory, and the TEMPSEG_SIZE column values continue to grow.


SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  67550208        37590016             1   9634316288
0k5pr4rx072sv HASH-JOIN                  23760896        13456384             1   3338665984
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1   2607808512

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  67550208        47077376             1   1.0252E+10
0k5pr4rx072sv HASH-JOIN                  23760896        13456384             1   3338665984
0k5pr4rx072sv HASH-JOIN                 129864704       110271488             1   2770337792

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                 188743680       188285952             1   1.1250E+10
0k5pr4rx072sv HASH-JOIN                  18951168        17658880             1   2839543808

One of the hash join operations has completed, so the query must be about done now.

SQL_ID        OPERATION_TYPE       WORK_AREA_SIZE ACTUAL_MEM_USED NUMBER_PASSES TEMPSEG_SIZE
------------- -------------------- -------------- --------------- ------------- ------------
0k5pr4rx072sv SORT (v2)                  90914816        91714560             1   1.1966E+10

The final hash join finished, and the TEMPSEG_SIZE is now 1.1966E+10, which indicates that the temporary segment size in the TEMP tablespace is about 11.14GB.  That is kind of big – remember that number.  When just the sort operation is returned by the above query, session 2 executes this SQL statement:

COLUMN VALUE FORMAT 999,999,999,990

SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.SID=335
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%';

NAME                             VALUE
------------------------ -------------
session pga memory       3,391,500,272
session pga memory max   3,391,500,272

Based on the above, Session 1 is not consuming about 90MB of PGA memory, but instead roughly 3234.39MB of PGA memory (the 2 DBAs still standing and clapping should sit down now).  Let’s hope that the DBA responsible for this database did not consider the 1800MB value for the PGA_AGGREGATE_TARGET parameter as a hard upper limit, and set the other parameters to take full advantage of the 12GB of memory in the server.
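The byte-to-MB conversion behind that 3234.39MB figure is simple arithmetic (a sketch; the session pga memory statistic is reported in bytes):

```python
# "session pga memory" from V$SESSTAT is reported in bytes; convert to MB (2^20 bytes).
session_pga_bytes = 3_391_500_272
session_pga_mb = session_pga_bytes / 2**20

print(round(session_pga_mb, 2))  # 3234.39 - far above the 1800 MB PGA_AGGREGATE_TARGET
```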

Once the script ends, the above SQL statement returns the following values:

NAME                             VALUE
------------------------ -------------
session pga memory          10,039,280
session pga memory max   3,391,500,272 

The session is still consuming 9.57MB of PGA memory just sitting idle – remember this number.

Now just to make sure that 0k5pr4rx072sv, as output by the query of the V$SQL_WORKAREA_ACTIVE view, is the SQL_ID for our SQL statement:

SELECT
  SQL_TEXT
FROM
  V$SQL
WHERE
  SQL_ID='0k5pr4rx072sv';

SQL_TEXT
--------------------------------------------------------------------------------
SELECT T1.C1, T1.C2, T1.C3 FROM T1 WHERE T1.C1 NOT IN ( SELECT C1 FROM T2) AND T
1.C2 NOT IN ( SELECT C2 FROM T3) ORDER BY T1.C2 DESC, T1.C1 DESC

Good, now let’s check the execution plan for the SQL statement:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0k5pr4rx072sv',0,'ALLSTATS LAST'));

SQL_ID  0k5pr4rx072sv, child number 0
-------------------------------------
SELECT T1.C1, T1.C2, T1.C3 FROM T1 WHERE T1.C1 NOT IN ( SELECT C1 FROM
T2) AND T1.C2 NOT IN ( SELECT C2 FROM T3) ORDER BY T1.C2 DESC, T1.C1
DESC

Plan hash value: 2719846691

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |      1 |        |     80M|00:24:55.23 |    2833K|   5024K|   2190K|       |       |          |         |
|   1 |  SORT ORDER BY            |      |      1 |      1 |     80M|00:24:55.23 |    2833K|   5024K|   2190K|    12G|    35M|  179M (1)|      11M|
|*  2 |   HASH JOIN RIGHT ANTI NA |      |      1 |      1 |     80M|00:12:08.70 |    2833K|   3563K|    730K|   269M|    14M|  105M (1)|    2708K|
|   3 |    TABLE ACCESS FULL      | T2   |      1 |     10M|     10M|00:00:20.03 |     416K|    416K|      0 |       |       |          |         |
|*  4 |    HASH JOIN RIGHT ANTI NA|      |      1 |     99M|     90M|00:08:25.72 |    2416K|   2811K|    394K|   521M|    19M|  141M (1)|    3184K|
|   5 |     TABLE ACCESS FULL     | T3   |      1 |     10M|     10M|00:00:20.12 |     416K|    416K|      0 |       |       |          |         |
|   6 |     TABLE ACCESS FULL     | T1   |      1 |    100M|    100M|00:03:20.37 |    2000K|   1999K|      0 |       |       |          |         |
---------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="C1")
   4 - access("T1"."C2"="C2") 

Let’s see: almost 25 minutes to execute the SQL statement, a total of roughly 425MB of memory used during the one-pass workarea executions (although, from our earlier output, not all of that memory was in use at the same time), and the SORT ORDER BY operation used 11M of TEMP tablespace space… but is that 11MB, 11 million KB, or 11,534,336 KB (11 * 2^20 KB)?  Remember that earlier we found “that the temporary segment size in the TEMP tablespace is about 11.14GB”, so that 11M means 11,534,336 KB, or about 11GB.  OK, that was slightly confusing, but we are not done yet.  (Side note: the author of the book “Troubleshooting Oracle Performance” commented on the Used-Tmp column here.)
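The unit gymnastics can be double-checked with a bit of arithmetic (a sketch; the 11M figure is taken from the Used-Tmp column in the plan above):

```python
# DBMS_XPLAN's Used-Tmp "11M" here is in KB scaled by M (2^20),
# i.e. 11 * 2^20 KB, not 11 MB.
used_tmp_kb = 11 * 2**20
print(used_tmp_kb)                        # 11534336 KB

used_tmp_gb = used_tmp_kb * 1024 / 2**30  # KB -> bytes -> GB
print(round(used_tmp_gb, 2))              # about 11.0 GB

# Consistent with the V$SQL_WORKAREA_ACTIVE TEMPSEG_SIZE figure (in bytes):
print(round(1.1966e10 / 2**30, 2))        # about 11.14 GB
```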

Let’s have some fun and burn memory (or at least have fun until something breaks).  Now we will run the PGAMemoryFill.sql script in 4 sessions, with a fifth session monitoring the progress (if you want more fun, modify the WHERE clause on the T1 table so that all workarea executions are optimal, rather than spilling to disk in one-pass or multi-pass operations).  In 4 sessions, execute the script:

@PGAMemoryFill.sql

After a short pause, session 5 (the monitoring session) should periodically submit the following query:

SELECT
  SN.NAME,
  SUM(SS.VALUE) VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME='session pga memory'
GROUP BY
  SN.NAME;

NAME                            VALUE
-------------------- ----------------
session pga memory        144,351,120

Roughly 137.66MB of PGA memory in use – I wonder if we will hit 3,391,500,272 * 4 = 12.63GB of PGA memory in use (4 times the value seen for the single session)?  Well, let’s keep executing the above query with brief pauses between each execution:

NAME                            VALUE
-------------------- ----------------
session pga memory        144,351,120

NAME                            VALUE
-------------------- ----------------
session pga memory        285,902,000

NAME                            VALUE
-------------------- ----------------
session pga memory      1,191,920,144

NAME                            VALUE
-------------------- ----------------
session pga memory      1,296,843,280

NAME                            VALUE
-------------------- ----------------
session pga memory      1,379,306,720

NAME                            VALUE
-------------------- ----------------
session pga memory      1,401,504,272

NAME                            VALUE
-------------------- ----------------
session pga memory      1,465,467,408

NAME                            VALUE
-------------------- ----------------
session pga memory      1,473,207,536

NAME                            VALUE
-------------------- ----------------
session pga memory      1,484,283,120

Let’s check one of the sessions to see how it is doing:

SELECT
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.SID=335
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%';

NAME                              VALUE
---------------------- ----------------
session pga memory          357,904,368
session pga memory max      357,904,368

This one session is using roughly 341.32MB of PGA memory, now back to the other query:

NAME                            VALUE
-------------------- ----------------
session pga memory      1,476,418,800

NAME                            VALUE
-------------------- ----------------
session pga memory      4,517,252,992

NAME                            VALUE
-------------------- ----------------
session pga memory      4,518,556,832

The PGA memory usage seems to have stabilized at 4,309.23MB (4.21GB), so we did not bring down the server by exceeding its 12GB of memory, but this is 2.4 times the value of the PGA_AGGREGATE_TARGET parameter.  Let’s check on the progress of our 4 sessions:

SELECT
  SS.SID,
  SN.NAME,
  SS.VALUE
FROM
  V$STATNAME SN,
  V$SESSTAT SS
WHERE
  SS.VALUE>=300*1024*1024
  AND SS.STATISTIC#=SN.STATISTIC#
  AND SN.NAME LIKE '%pga%'
ORDER BY
  SS.SID,
  SN.NAME;

SID NAME                              VALUE
--- ---------------------- ----------------
297 session pga memory        3,322,442,384
297 session pga memory max    3,384,832,656
304 session pga memory          368,734,864
304 session pga memory max      373,518,992
305 session pga memory          368,734,864
305 session pga memory max      368,734,864
335 session pga memory          357,904,368
335 session pga memory max      357,904,368

The above seems to show that one of the sessions is still using 3,168.53MB of PGA memory, while the other three have each retreated to roughly 352MB of PGA memory.  Let’s check on the sessions…  The script in 3 of the 4 sessions crashed with this error:

ERROR at line 1:
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-06512: at line 33

Quick math time: is 11.14GB * 4 greater than the maximum size of a SMALLFILE datafile in a database with an 8KB block size?  Yes – a SMALLFILE datafile is limited to 4,194,303 blocks, or roughly 32GB with an 8KB block size.  (128 * 8KB = 1MB, which is the extent size in the TEMP tablespace.)  OK, if the script crashed in 3 of the 4 sessions, why is each of those sessions still consuming about 352MB of PGA memory while just sitting there waiting for the next SQL statement?  This would certainly drive someone mad trying to figure out what Jimmy the Developer has done.  So, how do you get the memory back from the sessions so that it can be returned to the operating system?  You must execute this specially crafted SQL statement in each session:

SELECT
  42
FROM
  DUAL
WHERE
  1=2;

OK, it does not need to be that SQL statement, but until another SQL statement is executed, the 352MB acquired by each of the three sessions cannot be used for anything else.  And that, my friends, is the developer’s secret weapon for stealing all of the memory in the server.  Now try to modify the SQL statement in the PGAMemoryFill.sql script so that all three workarea executions are optimal executions to see how high the memory usage can be pushed while executing the SQL statement.
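Returning to the quick math about the TEMP tablespace, here is a rough check (a sketch; it assumes a single SMALLFILE tempfile, while a TEMP tablespace may of course contain several tempfiles):

```python
# A SMALLFILE datafile/tempfile is limited to 4,194,303 blocks.
block_size_bytes = 8 * 1024
max_blocks = 4_194_303
max_tempfile_gb = max_blocks * block_size_bytes / 2**30
print(round(max_tempfile_gb))            # about 32 GB per tempfile

needed_gb = 11.14 * 4                    # four sessions, each needing ~11.14GB of TEMP
print(needed_gb)                         # about 44.56 GB - more than one tempfile can hold

# Extent math from the ORA-01652 message: 128 blocks * 8KB = 1MB extents.
print(128 * block_size_bytes // 2**20)   # 1 MB
```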





Submit Input to an ASP Web Page and Retrieve the Result using VBS

17 01 2010

January 17, 2010

While it is helpful that SQL statements may be submitted directly from VBS scripts, in most cases the username and password for the database user will be exposed in the VBS script.  So, we need another way.  How about having the VBS script pass a value of interest to an ASP web page (note that this is not ASP.Net, this is old style programming without a Net).  First, we need to create the ASP web page (after, of course, enabling ASP in Microsoft’s IIS web server configuration):

<html>
<head>
<title>I am a Hidden Web Page - You will Not See Me</title>
</head>
<body>
   <%
    Dim adVarChar
    Dim adParamInput
    Dim adCmdText
    Dim strSQL
    Dim snpData
    Dim comData
    Dim strPartID

    Dim dbDatabase

    adVarChar = 200
    adParamInput = 1
    adCmdText = 1
    Set dbDatabase = Server.CreateObject("ADODB.Connection")
    Set comData = Server.CreateObject("ADODB.Command")
    Set snpData = Server.CreateObject("ADODB.Recordset")

    On Error Resume Next

    strPartID = cStr(Request("strPartID"))

    dbDatabase.Open "Provider=MSDAORA.1;User ID=MyUser;Password=MyPassword;Data Source=MyDB;Persist Security Info=True"

    With comData
        strSQL = "SELECT /*+ LEADING(IT) INDEX(IT X_INV_TRANS_1) */" & vbCrLf
        strSQL = strSQL & "  IT.PART_ID," & vbCrLf
        strSQL = strSQL & "  TRUNC(SUM(DECODE(IT.CLASS,'I',DECODE(IT.TYPE,'O',IT.QTY,0),0))-SUM(DECODE(IT.CLASS,'I',DECODE(IT.TYPE,'I',IT.QTY,0),0))+.9999) AS NEW_ANNUAL_USAGE" & vbCrLf
        strSQL = strSQL & "FROM" & vbCrLf
        strSQL = strSQL & "  INVENTORY_TRANS IT," & vbCrLf
        strSQL = strSQL & "  PART P" & vbCrLf
        strSQL = strSQL & "WHERE" & vbCrLf
        strSQL = strSQL & "  IT.TRANSACTION_DATE>TRUNC(SYSDATE-365)" & vbCrLf
        strSQL = strSQL & "  AND P.ID=IT.PART_ID" & vbCrLf
        strSQL = strSQL & "  AND P.ID= ?" & vbCrLf
        strSQL = strSQL & "GROUP BY" & vbCrLf
        strSQL = strSQL & "  IT.PART_ID" & vbCrLf
        '
        .Parameters.Append .CreateParameter("part_id", adVarChar, adParamInput, 30, strPartID)
        'Set up the command properties
        .CommandText = strSQL
        .CommandType = adCmdText
        .CommandTimeout = 30

        .ActiveConnection = dbDatabase
    End With
    Set snpData = comData.Execute

    Response.Write "<input type=""text"" name=""txtPartID"" size=30 value=""" & strPartID & """ disabled=true>"
    If Not (snpData Is Nothing) Then
        If Not(snpData.EOF) Then
            Response.Write "<input type=""text"" name=""txtAnnualUsage"" size=30 value=""" & cstr(snpData("new_annual_usage")) & """ disabled=true>"
            Response.Write "<input type=""text"" name=""txtOK"" size=255 value=""RETRIEVED"" disabled=true>"
        Else
            Response.Write "<input type=""text"" name=""txtAnnualUsage"" size=30 value=""0"" disabled=true>"
            Response.Write "<input type=""text"" name=""txtOK"" size=255 value=""NO TRANSACTIONS"" disabled=true>"
        End If
    Else
        Response.Write "<input type=""text"" name=""txtAnnualUsage"" size=30 value=""0"" disabled=true>"
        Response.Write "<input type=""text"" name=""txtOK"" size=255 value=""ERROR"" disabled=true>"
    End If

    snpData.Close
    dbDatabase.Close

    Set snpData = Nothing
    Set comData = Nothing
    Set dbDatabase = Nothing
    %>
</body>
</html>

OK, reading through the ASP web page code: we create an ADO database connection object, an ADO recordset object, and an ADO command object.  Next, we set the strPartID variable to the value passed in from the web request, build a SQL statement with a bind variable set to the value of the strPartID variable, and then execute the SQL statement.  If the SQL statement executed successfully, we build two HTML text boxes, the first with the value of NEW_ANNUAL_USAGE, and the second with a status of either RETRIEVED or NO TRANSACTIONS.  If the SQL statement failed to execute, the two HTML text boxes will contain 0 and ERROR.

Now for the VBS script that will call the ASP web page:

Dim intResult
Dim intFlag
Dim objIE
Dim strHTML
Dim strID
Dim ANNUAL_USAGE_QTY

On Error Resume Next

Set objIE = CreateObject("InternetExplorer.Application")

strID = "ABCDEF123456"
ANNUAL_USAGE_QTY = 100

objIE.Navigate "http://localhost/Update_Annual_Usage_Qty.asp?strPartID=" & strID

objIE.Width=100
objIE.Height=100
objIE.Statusbar=False
objIE.Menubar=False
objIE.Toolbar=False
objIE.Visible = False

Do While objIE.Busy <> False
    WScript.Sleep 200
Loop

'loop until the button is clicked
Do While intFlag = 0
    If Err <> 0 Then
        IntFlag = -1
    End If   
    If objIE is Nothing Then
        'User closed IE
        intFlag = -1
    Else
        If objIE.Document.All.txtOK.Value <> " " Then
            intFlag = 1
        End If
    End If
    WScript.Sleep 200
Loop

If intFlag = 1 Then
    If objIE.Document.Body.All.txtOK.Value = "ERROR" Then
        MsgBox "Error sending the query to the database"
    Else
        If objIE.Document.Body.All.txtOK.Value = "NO TRANSACTIONS" Then
            intResult = MsgBox ("No transactions for this part in the last year, OK to set the annual usage qty to 0?  The old value is " & cStr(ANNUAL_USAGE_QTY), vbQuestion + vbYesNo, "Annual Usage")
            If intResult = vbYes Then
                ANNUAL_USAGE_QTY = 0
            End If
        Else
            'Copy in the values from the web page
            intResult = MsgBox ("The old annual usage quantity value is " & cStr(ANNUAL_USAGE_QTY) & " - the database indicates that the updated quantity should be " & cstr(objIE.Document.Body.All.txtAnnualUsage.Value) & ".  Would you like to update the annual usage quantity?", vbQuestion + vbYesNo,"Annual Usage")
            If intResult = vbYes Then
                 ANNUAL_USAGE_QTY = objIE.Document.Body.All.txtAnnualUsage.Value
            End If
        End If
    End If
    objIE.Quit
End If

Set objIE = Nothing

The VBS script launches the ASP page in a hidden Internet Explorer window, passing in the value of strID on the address line (this is picked up in the ASP script as the strPartID session variable).  The VBS script then waits until the ASP page finishes loading.  Once the ASP page finishes, the VBS script reads the values of the two HTML text boxes and acts appropriately based on the values of those text boxes.

The neat thing about straight ASP code is that it looks a lot like VBS code, which looks a lot like Excel macro code, which looks a lot like classic Visual Basic code, which in turn resembles the classic BASIC code that I started working with in 1981/1982.  I have been sitting in on the technology training advisory committee for one of the local colleges.  The committee helps determine which computer classes will be taught to earn a degree at the college.  The question was asked which languages to teach – I heard C++ and Java being suggested… I wonder if I should have suggested Visual Basic?  VBS-like languages are also used as macro languages in some ERP products and other packages (I believe that AutoCAD uses a similar macro syntax, as does PC-DMIS).





If I Need to Fetch My Rows Faster, Is There Any Way?

17 01 2010

January 17, 2010

Yes, the title of this blog article is the question, the whole question, and nothing but the question from this OTN post:
http://forums.oracle.com/forums/thread.jspa?threadID=1013283&tstart=0

The OP stated in the subject line that his query needed to retrieve 1 lakh rows, which I assumed meant 100,000,000 rows, but a Google search indicates that it is just 100,000 rows.

One of the responders went in for the kill with this response:

The most precise way for fetching rows faster can be attained in number of ways.

  1. The first way is apply indexes and in case indexes got large number of deletions then rebuild it.
  2. The next way is the optimizer you are choosing.

Literaly these parameters are effective then this thing will automatically lead to faster fetching.

I was a bit confused by the above response (I dislike being confused).  So, I asked that responder for clarification of the suggestions for improving the precise way of fetching rows faster (for some reason, the phrase “Battle Against Any Guess” popped into my head).

  1. Are you suggesting that the OP should rebuild indexes to improve how quickly Oracle is able to find rows when there were a lot of deletions in the table? There is a fun series of blog articles here that might help before the OP attempts to rebuild indexes: http://richardfoote.wordpress.com/category/index-rebuild/
  2. Are you suggesting that the OP switch between the RULE based optimizer and the COST based optimizer (or vice-versa)?

I then offered the following to the original poster:

  1. What about changing the array fetch size (number of rows fetched in a single fetch request)?
  2. Why are you selecting so many rows – will a large number of the rows be eliminated in the client-side application. Is it possible to reduce the number of rows returned from the database by aggregating the data, filtering the data, or processing the data on the server?
  3. Are there any columns being returned from the database that are not needed? If so, remove those columns.
  4. Is there a high latency WAN connection, or a slow LAN connection between the server and the client? If so, repeat the test again when connected at gigabit speeds.
  5. Are table columns included in inline views in the SQL statement that are not used (discarded, not returned to the client) outside the inline view? If so, get rid of those columns – there is no sense in carrying those columns through a join, group by, or sort operation if the columns are never used. The same applies to statically defined views accessed by the SQL statement.
  6. Assuming that the cost-based optimizer is in use, have you checked the various optimizer parameters – have you done something silly like setting OPTIMIZER_INDEX_COST_ADJ to 1 and set OPTIMIZER_MODE to FIRST_ROWS?
  7. Have you set other parameters to silly values, like setting DB_FILE_MULTIBLOCK_READ_COUNT to 0, 1, 8, 16, etc?
  8. Have you collected system (CPU) statistics, if available on your Oracle version (what is the Oracle version number, ex: 8.1.7.3, 9.2.0.7, 11.2.0.1, etc.)?
  9. Have you examined an explain plan (or better yet, a DBMS_XPLAN with ‘ALLSTATS LAST’ as the format parameter)?
  10. Have you captured a 10046 trace at level 8, and either manually reviewed the file or passed it through TKPROF (or another utility)?
  11. Have you tried to re-write the SQL statement into an equivalent, but more efficient form?
  12. Have you collected a 10053 trace for a hard parse of the SQL statement?
  13. Have you recently collected table and index statistics for the objects?

What about finding the root cause of the performance problem? Sure, it might be fun to blindly try things to see if they help, but how do you know if what you have tried has helped without measuring?

 This brings me to the next suggestion – before posting a request to any forum or other website, make certain that you have provided something, anything, that will help someone answer your question.  Suggestions for what to include in your post are outlined here:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
http://forums.oracle.com/forums/thread.jspa?messageID=1812597

This blog is not the place to post requests for help, and I likely will not respond to such requests sent by email.  Requests for help should instead be directed to an appropriate forum or to Oracle support (Metalink/MOS); suitable forums include the comp.databases.oracle.server / comp.databases.oracle.misc Usenet groups, the OTN forums, AskTom.Oracle.com, and Oracle-L.





Excel – Session Viewer with Query Capability

16 01 2010

January 16, 2010

This is a simple example that shows how to query an Oracle database using user input, passing in the user specified values with bind variables.  While this example just queries V$SESSION, it is possible to expand this demonstration considerably to allow Excel to act as a command center for viewing SQL statements executed by sessions (with their execution plans), enabling 10046 traces, and more.

To begin, we need to create two ActiveX command buttons in cells A1 through A3.  Name the top command button cmdInitialize, and the bottom button cmdFind.

Next, name the worksheet as DatabaseInfo, then right-click the worksheet DatabaseInfo tab name and select View Code.  Once in the Visual Basic editor, add a reference to the Microsoft ActiveX Data Objects, as demonstrated here.

Now, we need to add the code to make the cmdInitialize button work:

Option Explicit 'Forces all variables to be declared

Dim dbDatabase As New ADODB.Connection
Dim strDatabase As String
Dim strUserName As String
Dim strPassword As String

Dim intColumns As Integer
Dim strLastColumn As String

Private Function ConnectDatabase() As Integer
    Dim intResult As Integer

    On Error Resume Next

    If dbDatabase.State <> 1 Then
        'The connection to the database is currently closed
        strDatabase = "MyDB"
        strUserName = "MyUser"
        strPassword = "MyPassword"

        'Connect to the database
        'Oracle connection string
        dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUserName & ";Password=" & strPassword & ";ChunkSize=1000;FetchSize=100;"

        dbDatabase.ConnectionTimeout = 40
        dbDatabase.CursorLocation = adUseClient
        dbDatabase.Open

        If (dbDatabase.State <> 1) Or (Err <> 0) Then
            intResult = MsgBox("Could not connect to the database.  Check your user name and password." & vbCrLf & Error(Err), 16, "Excel Demo")

            ConnectDatabase = False
        Else
            ConnectDatabase = True
        End If
    Else
        ConnectDatabase = True
    End If
End Function

Private Sub cmdInitialize_Click()
    Dim i As Integer
    Dim intResult As Integer
    Dim strSQL As String
    Dim snpData As ADODB.Recordset

    'Don't allow Excel to display an error message on the screen if an error happens while executing this
    '  procedure, we will handle the problem in-line in the code
    On Error Resume Next

    'Jump to our ConnectDatabase function which returns a value of True if we are connected to the database
    '  or False if the connection attempt failed
    intResult = ConnectDatabase

    'If we could not connect to the database, display a message for the user that something is wrong and stop
    '  the execution of the code in this module
    If intResult = False Then
        Exit Sub
    End If

    'Create the ADO object which will be used to retrieve the data from the database
    Set snpData = New ADODB.Recordset

    strSQL = "SELECT" & vbCrLf
    strSQL = strSQL & "  *" & vbCrLf    'Retrieve all columns from the table without listing the columns
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  V$SESSION" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    'ROWNUM is an Oracle-specific pseudocolumn - each row returned is assigned an increasing sequential
    '  value.  Essentially, I am telling Oracle not to retrieve any rows, as I am just interested in the
    '  column names and the data types in the V$SESSION view.  Using 0=1 in place of ROWNUM<1 will likely
    '  work on other database platforms.
    strSQL = strSQL & "  ROWNUM<1"

    'Pass the SQL statement into our database connection to return the matching rows
    snpData.Open strSQL, dbDatabase

    'Always verify that the SQL statement was able to be executed, and that the database server did not simply
    '  return an error message.  Failing to perform the check could result in a situation where the macro
    '  becomes stuck in an infinite loop.  State = 1 indicates that the SQL statement was executed, and that
    '  the recordset is available for use, but does not necessarily mean that there are any rows in the
    '  recordset
    If snpData.State = 1 Then
        For i = 0 To snpData.Fields.Count - 1
            'Let's try to determine the type of data that may be stored in the database column and output that
            '  to the Excel spreadsheet.  Doing so will help the user, and it will help the cmdFind_Click
            '  procedure determine how the bind variables should be set up
            Select Case snpData.Fields(i).Type
                Case adVarChar
                    'A string of characters
                    ActiveSheet.Cells(1, i + 2).Value = "String"
                Case adChar
                    'A fixed length string of characters, where values are padded with spaces as needed
                    ActiveSheet.Cells(1, i + 2).Value = "Character"
                Case adDate, 135
                    'A column which may contain date and time information
                    ActiveSheet.Cells(1, i + 2).Value = "Date"
                Case adNumeric, adSingle, adInteger, adDouble, 139
                    'A column which may contain integers, floating point numbers, but not imaginary numbers
                    ActiveSheet.Cells(1, i + 2).Value = "Number"
                Case Else
                    'What should we do with these types of columns, are they BLOBs, RAWs?
                    ActiveSheet.Cells(1, i + 2).Value = snpData.Fields(i).Type
            End Select
            'Output the name of the column on the second row in the spreadsheet
            ActiveSheet.Cells(2, i + 2).Value = snpData.Fields(i).Name
            'Blank out the third row in the spreadsheet so that the user may specify how the rows returned
            '  by the SQL statement should be restricted on that row
            ActiveSheet.Cells(3, i + 2).Value = ""
        Next i
        'Record the number of columns in the V$SESSION view for future reference
        intColumns = snpData.Fields.Count

        'Just for fun, output to the Immediate window (View menu - Immediate Window) the Excel column
        '  names for the 26th through the 100th columns in the spreadsheet just to make certain that our
        '  interesting-looking formula below is working correctly
        For i = 26 To 100
            Debug.Print i, Chr(64 + Int((i + 2) / 26)); Chr(64 + ((i + 2) Mod 26 + 1))
        Next i
        'Chr returns the character represented by the ASCII/ANSI value specified.  An uppercase letter A has
        '  an ASCII/ANSI value of 65, so the first column based on the formula would be Chr(64 + 1) = A
        'Mod is a function which returns the remainder after a number is divided by another number
        '  28 Mod 26 would equal 2, as 28/26 = 1 with a remainder of 2, or to be mathematically fancy:
        '  (28 / 26 - Int(28 / 26)) * 26
        '  Thus Mod produces a repeating sequence from 0 to one less than the number following the word Mod
        strLastColumn = Chr(64 + Int((intColumns + 2) / 26)) & Chr(64 + ((intColumns + 2) Mod 26 + 1))

        'Close the ADO recordset to free up memory on the database server since we are done using the data
        snpData.Close

        'Make certain that the full column names of the various columns in the V$SESSION view are visible
        '  in the spreadsheet
        ActiveSheet.Columns("A:" & strLastColumn).AutoFit
    End If

    'Erase any rows in the spreadsheet that may have been left by a previous execution of the cmdFind code
    Worksheets("DatabaseInfo").Range("4:50000").Delete Shift:=xlUp

    'Remove the ADO recordset object that we created earlier from memory - in theory this happens automatically
    '  but it is good practice to explicitly perform the operation
    Set snpData = Nothing
End Sub 
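As a side note, the Chr and Mod arithmetic commented in the macro above is just a base-26 mapping from a column number to Excel's letter naming.  The same mapping can be sketched in Python (illustrative only, not part of the macro; the function name is made up) for columns 1 through 702, in other words A through ZZ:

```python
def excel_column_letters(n):
    """Map a 1-based column number to Excel column letters (valid for 1..702, A..ZZ)."""
    if n <= 26:
        return chr(64 + n)                          # 1 -> A, 26 -> Z
    # For two-letter names the first letter advances once every 26 columns
    return chr(64 + (n - 1) // 26) + chr(65 + (n - 1) % 26)

# A quick check, similar in spirit to the Debug.Print loop in the macro
for n in (1, 26, 27, 28, 52, 53, 702):
    print(n, excel_column_letters(n))
```

Printing a range of values, as the Debug.Print loop in the macro does, is an easy way to confirm that this kind of formula behaves correctly at the boundaries (26, 27, 52, 53, and so on).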

Now, switch back to the Excel window to verify that the cmdInitialize button works correctly.  You should see something like this:

Notice that row 1 in the worksheet shows the data type of the columns in the V$SESSION view (204 is a RAW data type) and row 2 in the worksheet shows the column names.  Row 3 will be used to allow the user to restrict the rows that will be returned.  Next, we need to add the code to the cmdFind button.  Switch back to the Visual Basic editor and add the following code:

Private Sub cmdFind_Click()
    Dim i As Integer
    Dim lngRow As Long
    Dim strSQL As String
    Dim snpData As ADODB.Recordset
    Dim comData As ADODB.Command

    'Create the in-memory ADO objects that will be used to return the data from the V$SESSION view
    Set snpData = New ADODB.Recordset
    Set comData = New ADODB.Command

    With comData
        strSQL = "SELECT" & vbCrLf
        strSQL = strSQL & "  *" & vbCrLf
        strSQL = strSQL & "FROM" & vbCrLf
        strSQL = strSQL & "  V$SESSION" & vbCrLf
        strSQL = strSQL & "WHERE" & vbCrLf
        strSQL = strSQL & "  1=1" & vbCrLf

        'Walk through the columns to determine which have restrictions placed on them by the user
        For i = 1 To intColumns
            If ActiveSheet.Cells(3, i).Value <> "" Then
                'The user placed a restriction on this column
                If (InStr(ActiveSheet.Cells(3, i).Value, "%") > 0) Or (InStr(ActiveSheet.Cells(3, i).Value, "_") > 0) Then
                    'Partial match, the column name is in row 2 of the spreadsheet
                    strSQL = strSQL & "  AND " & ActiveSheet.Cells(2, i).Value & " LIKE ?" & vbCrLf
                    'We need to look in row 1 for the data type of the column and set up an appropriate bind
                    '  variable data type to pass in the restriction requested by the user
                    'Each bind variable must have a unique name, so we generate one as  ValueCol#
                    Select Case ActiveSheet.Cells(1, i).Value
                        Case "String"
                            .Parameters.Append .CreateParameter("value" & Format(i), adVarChar, adParamInput, Len(ActiveSheet.Cells(3, i).Value), ActiveSheet.Cells(3, i).Value)
                        Case "Character"
                            .Parameters.Append .CreateParameter("value" & Format(i), adChar, adParamInput, Len(ActiveSheet.Cells(3, i).Value), ActiveSheet.Cells(3, i).Value)
                        Case "Number"
                            'A partial match on a number is not possible; included just to see what happens
                            .Parameters.Append .CreateParameter("value" & Format(i), adNumeric, adParamInput, 12, ActiveSheet.Cells(3, i).Value)
                        Case "Date"
                            'A partial match on a date is not possible; included just to see what happens
                            .Parameters.Append .CreateParameter("value" & Format(i), adDate, adParamInput, 8, CDate(ActiveSheet.Cells(3, i).Value))
                    End Select
                Else
                    'Full match, the column name is in row 2 of the spreadsheet
                    strSQL = strSQL & "  AND " & ActiveSheet.Cells(2, i).Value & " = ?" & vbCrLf
                    'We need to look in row 1 for the data type of the column and set up an appropriate bind
                    '  variable data type to pass in the restriction requested by the user
                    'Each bind variable must have a unique name, so we generate one as  ValueCol#
                    Select Case ActiveSheet.Cells(1, i).Value
                        Case "String"
                            .Parameters.Append .CreateParameter("value" & Format(i), adVarChar, adParamInput, Len(ActiveSheet.Cells(3, i).Value), ActiveSheet.Cells(3, i).Value)
                        Case "Character"
                            .Parameters.Append .CreateParameter("value" & Format(i), adChar, adParamInput, Len(ActiveSheet.Cells(3, i).Value), ActiveSheet.Cells(3, i).Value)
                        Case "Number"
                            .Parameters.Append .CreateParameter("value" & Format(i), adNumeric, adParamInput, 12, ActiveSheet.Cells(3, i).Value)
                        Case "Date"
                            .Parameters.Append .CreateParameter("value" & Format(i), adDate, adParamInput, 8, CDate(ActiveSheet.Cells(3, i).Value))
                    End Select
                End If
            End If
        Next i
        'We will sort the rows by the session ID (SID)
        strSQL = strSQL & "ORDER BY" & vbCrLf
        strSQL = strSQL & "  SID"

        'Set up the command properties
        .CommandText = strSQL
        .CommandType = adCmdText
        .CommandTimeout = 30

        .ActiveConnection = dbDatabase
    End With

    Set snpData = comData.Execute

    lngRow = 3 'We will start outputting at row 4, so 3 is our "0" line - the starting point

    'The slow way to populate the cells
'    If Not (snpData Is Nothing) Then
'        Do While Not snpData.EOF
'            'Increase the row number so that we do not output all of the information on the same row of the
'            '  spreadsheet
'            lngRow = lngRow + 1
'            'Output the data returned by the SQL statement, one column at a time.  The first column is in the
'            '  0 position, and the last column is one less than the total number of columns returned
'            For i = 0 To snpData.Fields.Count - 1
'                ActiveSheet.Cells(lngRow, i + 2).Value = snpData.Fields(i)
'            Next i
'            snpData.MoveNext
'        Loop
'
'        snpData.Close
'    End If
'
'    'Do we have extra rows left over from the last run?  If so, delete all rows below the last row that we output
'    Worksheets("DatabaseInfo").Range(Format(lngRow + 1) & ":50000").Delete Shift:=xlUp

    'The fast way to place the query results into cells   
    Worksheets("DatabaseInfo").Range(Format(lngRow + 1) & ":50000").Delete Shift:=xlUp
    If Not (snpData Is Nothing) Then

        ActiveSheet.Range("B4").CopyFromRecordset snpData

        ActiveSheet.Range("B4").Select

        snpData.Close
    End If

    'Tell Excel to fix the column widths so that all of the data returned in each column is visible
    'We recorded the value of strLastColumn in the initialize procedure
    ActiveSheet.Columns("B:" & strLastColumn).AutoFit

    'Memory clean up
    Set snpData = Nothing
    Set comData = Nothing
End Sub

Switch back to the Excel worksheet and test the cmdFind button.  You should see something like this:

Next, try to enter a search keyword in row 3 – if a wildcard character ( % or _ ) is used, the query will use a LIKE keyword, rather than an = operator.  After entering the search criteria, click the Find button:
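The wildcard test that the macro performs when it builds each predicate can be summarized outside of VBA as well.  Here is a minimal Python sketch of the same decision (the helper name is hypothetical; as in the macro, a ? placeholder stands in for the ADO bind parameter):

```python
def build_predicate(column_name, user_value):
    """Pick the LIKE keyword when the user's value contains a SQL wildcard (% or _), otherwise =."""
    operator = "LIKE" if ("%" in user_value or "_" in user_value) else "="
    return f"  AND {column_name} {operator} ?"

print(build_predicate("USERNAME", "SYS%"))   # wildcard present, so LIKE is used
print(build_predicate("SID", "133"))         # no wildcard, so = is used
```

Either way, the user-supplied value itself is passed through a bind variable rather than concatenated into the SQL statement, which avoids unnecessary hard parses (and SQL injection problems).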

There is no need to stop at this point.  It is easy to add a UserForm to the Excel workbook, for example something like this from another demonstration: 

For example, to enable a trace for a session, you could create a function like this:

Sub SetTraceInSession(lngSID As Long, lngSerial As Long, lngTrace As Long, lngTraceLevel As Long)
    Dim cmdTrace As New ADODB.Command
    Dim strSQL As String

    On Error Resume Next

    With cmdTrace
        strSQL = "SYS.DBMS_SYSTEM.SET_EV(" & Format(lngSID) & "," & Format(lngSerial) & "," & Format(lngTrace) & "," & Format(lngTraceLevel) & ",'')"
        .CommandText = strSQL
        .CommandType = adCmdStoredProc
        .ActiveConnection = dbDatabase
    End With

    cmdTrace.Execute

    Set cmdTrace = Nothing
End Sub
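The VBA above does nothing more than assemble the DBMS_SYSTEM.SET_EV call as a text string from the four numeric arguments.  A hedged Python sketch of that same string assembly follows (the function name is made up, and the SID/serial# values in the usage line are fictional examples):

```python
def set_ev_command(sid, serial, event, level):
    """Build the DBMS_SYSTEM.SET_EV call text, mirroring the VBA Format(...) concatenation."""
    return f"SYS.DBMS_SYSTEM.SET_EV({sid},{serial},{event},{level},'')"

# Enable a level 8 10046 trace for a hypothetical session with SID 133, serial# 4242
print(set_ev_command(133, 4242, 10046, 8))
```

Passing 0 as the trace level is the conventional way to turn the event back off again for the session.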




Book Library – Finding Motivation

16 01 2010

January 16, 2010

I occasionally see requests for book suggestions in various Internet forums, blogs, and various other websites.  I am typically very careful when preparing to buy a book, since it is my money that will be wasted if the book’s contents are worthless.  I try to read the reviews for books on Amazon.com, blogs, and various other sites to determine whether the book’s contents should have been retired with the release of Oracle Database 8.0, whether or not the book’s authors emphasize volume over technical quality, and whether or not people are willing to rely on a particular author’s advice.

I thought that it might be interesting to look at my book library to see if I would be able to offer any advice.  My book library at home looks like the following picture (I intentionally or unintentionally left out about 6 to 10 books) – click the picture to see a larger version.

 If you closely examine the books, you probably will be able to tell that I need to spend a little more time digging through the books on the right (and the four years of related magazines that have been sitting mostly unread).

Here is my personal book library at work – click the picture to see a larger version.

As I wrote in this OTN thread, a small number of Oracle books marked distinct turning points in my knowledge of Oracle.  Additionally, it is necessary to find motivation to continue learning whatever subject falls at your feet.  In the OTN thread, I stated the following:

I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple years later I learned that Gaja was a member of the Oaktable Network. I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple years later I learned that Tom was a member of the Oaktable Network. I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning – the logic made all of the book easy to understand. A couple years later I learned that Cary was a member of the Oaktable Network. I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book by its title seemed to be too much of a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question – “What is Oracle thinking”. Jonathan is a member of the Oaktable Network, a pattern is starting to develop here. At this point I started looking for anything written in book or blog form by members of the Oaktable Network. 
I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me – probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to help understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, a term which was coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge – I reviewed at least 8 of the Oracle related books on the Amazon.com website.

The above was written before I set up this blog – there are more book reviews on this blog here: https://hoopercharles.wordpress.com/category/book-review/.  In the above pictures you will see all of the books that I referenced in the OTN post, as well as the book that I had the opportunity to co-author with a fairly large number of OakTable Network members (top photo – I have not yet received my printed copy of the book from Amazon, so the picture shows a printed copy of the electronic version from Apress).  There are of course a large number of books in my personal library at work – as you can see, I have the opportunity to dig into much more than Oracle Database.  I have read most of the books cover to cover, and a very small number of the books have been read cover to cover twice.

My post in the OTN thread continues:

Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book …

The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:

  • “That obviously means that the CPU is overwhelmed!”
  • “Wow 4,851 wait events on the CPU, that sure is a lot!”
  • “Wow wait events on the CPU, I didn’t know that was possible?”
  • “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
  • “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”

Another page from the same book shows this command:

alter system set optimizer_index_cost_adj=20 scope = pfile;

Someone reading the book might be left thinking one of:

  • That looks like an easy to implement solution.
  • I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
  • That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
  • Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans? 

I suggest that you try reading an old Oracle book, such as “Practical Oracle 8i”, and see if you are able to pick out anything that is:

  • Obviously wrong, and was never correct.
  • Obviously wrong since Oracle 10.1.0.1 (or some other release version), but was 100% correct at the time the book was written.
  • Obviously correct now, just as it was when the book was originally written.
  • Grossly over applying a fix that worked in a finite set of conditions (possibly due to false correlation) to situations with nearly infinite scope.

Someone posted a comment on this blog asking for a sequenced list of book recommendations for learning Oracle Database.  I suggested that the list of books might be a bit different depending on whether the person had an interest in general DBA work or performance tuning (or development, or …).  The suggestions that I provided to the comment follow:

Quick suggestions:

  • A solid foundation of Oracle specific SQL is needed. I enjoyed reading “Mastering Oracle SQL and SQL*Plus“, and I believe that book provides a solid foundation. That book appears to be in the process of being updated, and might even include page numbers this time (http://www.apress.com/book/view/9781430271970). I am currently reading “Oracle SQL Recipes: A Problem-Solution Approach” (http://www.apress.com/book/view/1430225092), probably about 30 pages into the book now – and I believe that I have already found a small handful of minor errors/issues with the book that would make it difficult to use as a starting point.
  • A solid foundation of understanding Oracle’s behavior is needed. I believe that Tom Kyte’s “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions” book (http://www.apress.com/book/view/9781590595305) is one of the best sources. I understand that Tom Kyte also re-wrote the Oracle 11.2.0.1 “Concepts Guide” (http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/toc.htm), so that might be a decent substitute for his book.
  • If you are planning to do general DBA work, probably the next book should be on the topic of RMAN. The books in the Oracle documentation library are good, and you will find two reviews of other RMAN books on this blog.
  • Next, I would suggest reading a book that provides a solid foundation of the Oracle wait interface. “Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning” seems to be the best source of that information, but it would be nice to see an update of the book that covers more recent releases of Oracle.
  • Next, the “Oracle Performance Tuning Guide” from the Oracle documentation library.
  • Next, I suggest the book “Troubleshooting Oracle Performance” – the book is great for not only introducing people to various approaches for troubleshooting problems, but also provides foundation knowledge that is needed in order to understand why an approach worked.
  • Next, I suggest digging deeper into troubleshooting with 10046 trace files – Cary Millsap’s “Optimizing Oracle Performance” is the best source for this information.
  • Next, I suggest digging deeper into troubleshooting with 10053 trace files – Jonathan Lewis’ “Cost-Based Oracle Fundamentals” is the best source for this information.

+ If queueing theory, introduced in “Optimizing Oracle Performance“, is of interest, take a look at “Forecasting Oracle Performance”

+ If Statspack/AWR report reading, introduced in the “Performance Tuning Guide” is of interest, see the excellent series of articles on Jonathan Lewis’ blog.

+ If you want your jaw to drop, take a look at Tanel Poder’s blog. I also recommend reading all of the blog entries on Jonathan Lewis’ blog and Richard Foote’s blog.

+ I have now read most of the chapters in the “Expert Oracle Practices: Oracle Database Administration from the Oak Table” book.  The book contains theory, tuning philosophy, tuning/troubleshooting logic, test cases, and up to date information that cannot be found in any other book.  It is my opinion that this book belongs in the “Quick suggestions” list above.  Disclaimer: Just estimating here, for every 17 copies of this book that are sold, I think that I will have enough royalty money to buy a soda drink from the vending machine (my recommendation has nothing to do with how thirsty I am right now 🙂 ).  It is my belief that the motivation for all of the authors of this book was simply to help readers improve their skills well beyond the basics.





An Interesting ERP Problem in Oracle 11g that is Not a Problem in 10g R2

15 01 2010

January 15, 2010

Let’s imagine that there is an ERP platform that supports Oracle Database 10.2.0.x, but not Oracle 11.1.0.x (or 11.2.0.1).  Why would the vendor not support Oracle 11g?  Read this article and see if you are able to determine the source of the problem, and a solution.  Yes, there is a solution, but not the one that you are thinking about (the solution that I put together works with 11.1.0.6, 11.1.0.7, and 11.2.0.1).

Here is the story:

I have been testing Oracle 11.1.0.6 and 11.1.0.7 with an ERP package since January 2008 and have encountered an interesting issue where the ERP package throws an “ORA-02005: implicit (-1) length not valid for this bind or define datatype” error when selecting the BLOB column from any table containing a BLOB – this same ERP package executes without problem with Oracle 10.2.0.2/10.2.0.3/10.2.0.4. The table definition is as follows:

PART_ID     NOT NULL VARCHAR2(30)
TYPE        NOT NULL CHAR(1)
BITS                 BLOB
BITS_LENGTH NOT NULL NUMBER(38)

The previous version of the ERP package had the same table defined as follows, and the previous version of the ERP package had no problem with Oracle 11.1.0.6:

PART_ID     NOT NULL VARCHAR2(30)
TYPE        NOT NULL CHAR(1)
BITS                 LONG RAW
BITS_LENGTH NOT NULL NUMBER(38)

One of the SQL statements that tosses the error is this one:

SELECT BITS FROM PART_MFG_BINARY where TYPE = :1 and PART_ID = :2 

A portion of a 10046 trace file from Oracle 10.2.0.2 that includes this SQL statement follows:

=====================
PARSING IN CURSOR #2 len=87 dep=0 uid=30 oct=3 lid=30 tim=749963475 hv=1159951869 ad='53a45ac8'
select mfg_name, mfg_part_id from part where id = :1                                  
END OF STMT
PARSE #2:c=0,e=1427,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=749963466
BINDS #2:
kkscoacd
 Bind#0
  oacdty=96 mxl=32(09) mxlc=00 mal=00 scl=00 pre=00
  oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=380b9b68  bln=32  avl=09  flg=05
  value="18567109M"
EXEC #2:c=0,e=3357,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=749971833
FETCH #2:c=0,e=52,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,tim=749971968
FETCH #2:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=749973655
=====================
PARSING IN CURSOR #3 len=59 dep=0 uid=30 oct=3 lid=30 tim=749983314 hv=2907586799 ad='5457f690'
select part_udf_labels from APPLICATION_GLOBAL            
END OF STMT
PARSE #3:c=0,e=3389,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=749983305
BINDS #3:
EXEC #3:c=0,e=152,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=749986393
FETCH #3:c=0,e=124,p=0,cr=7,cu=0,mis=0,r=1,dep=0,og=1,tim=749988214
STAT #3 id=1 cnt=1 pid=0 pos=1 obj=11925 op='TABLE ACCESS FULL APPLICATION_GLOBAL (cr=7 pr=0 pw=0 time=104 us)'
=====================
PARSING IN CURSOR #3 len=59 dep=0 uid=30 oct=3 lid=30 tim=749992936 hv=2907586799 ad='5457f690'
select part_udf_labels from APPLICATION_GLOBAL            
END OF STMT
PARSE #3:c=0,e=117,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=749992932
BINDS #3:
EXEC #3:c=0,e=83,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=749996097
FETCH #3:c=0,e=116,p=0,cr=7,cu=0,mis=0,r=1,dep=0,og=1,tim=749997800
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=12429 op='TABLE ACCESS BY INDEX ROWID PART (cr=3 pr=0 pw=0 time=48 us)'
STAT #2 id=2 cnt=1 pid=1 pos=1 obj=12436 op='INDEX UNIQUE SCAN SYS_C005496 (cr=2 pr=0 pw=0 time=28 us)'
=====================
PARSING IN CURSOR #2 len=99 dep=0 uid=30 oct=3 lid=30 tim=750003263 hv=1519706035 ad='7e235fc0'
SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
END OF STMT
PARSE #2:c=0,e=1100,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=750003255
BINDS #2:
kkscoacd
 Bind#0
  oacdty=96 mxl=32(01) mxlc=00 mal=00 scl=00 pre=00
  oacflg=01 fl2=1000000 frm=01 csi=178 siz=64 off=0
  kxsbbbfp=380bdcd8  bln=32  avl=01  flg=05
  value="D"
 Bind#1
  oacdty=96 mxl=32(09) mxlc=00 mal=00 scl=00 pre=00
  oacflg=01 fl2=1000000 frm=01 csi=178 siz=0 off=32
  kxsbbbfp=380bdcf8  bln=32  avl=09  flg=01
  value="18567109M"
EXEC #2:c=0,e=2512,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=750022595
FETCH #2:c=0,e=33,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,tim=750024142
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=101246 op='TABLE ACCESS BY INDEX ROWID PART_MFG_BINARY (cr=1 pr=0 pw=0 time=30 us)'
STAT #2 id=2 cnt=0 pid=1 pos=1 obj=101249 op='INDEX UNIQUE SCAN SYS_C0018720 (cr=1 pr=0 pw=0 time=21 us)'
=====================

A portion of a 10046 trace file from Oracle 11.1.0.6 that includes this SQL statement follows:

=====================
PARSING IN CURSOR #3 len=87 dep=0 uid=59 oct=3 lid=59 tim=1023659125907 hv=1159951869 ad='22a109c8' sqlid='7k8rzcj2k6xgx'
select mfg_name, mfg_part_id from part where id = :1                                  
END OF STMT
PARSE #3:c=0,e=432,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1023659125903
BINDS #3:
 Bind#0
  oacdty=96 mxl=32(09) mxlc=00 mal=00 scl=00 pre=00
  oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=0c6e0fd4  bln=32  avl=09  flg=05
  value="18567109M"
EXEC #3:c=0,e=1068,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1023659130848
FETCH #3:c=0,e=37,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,tim=1023659132062
STAT #3 id=1 cnt=1 pid=0 pos=1 obj=67567 op='TABLE ACCESS BY INDEX ROWID PART (cr=3 pr=0 pw=0 time=0 us cost=2 size=14 card=1)'
STAT #3 id=2 cnt=1 pid=1 pos=1 obj=69248 op='INDEX UNIQUE SCAN SYS_C0011926 (cr=2 pr=0 pw=0 time=0 us cost=1 size=0 card=1)'
=====================
PARSING IN CURSOR #6 len=59 dep=0 uid=59 oct=3 lid=59 tim=1023659138710 hv=2907586799 ad='22a44ae8' sqlid='3n102kqqnwh7g'
select part_udf_labels from APPLICATION_GLOBAL            
END OF STMT
PARSE #6:c=0,e=701,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1023659138706
BINDS #6:
EXEC #6:c=0,e=51,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1023659142030
FETCH #6:c=0,e=55,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,tim=1023659143936
STAT #6 id=1 cnt=1 pid=0 pos=1 obj=67410 op='TABLE ACCESS FULL APPLICATION_GLOBAL (cr=3 pr=0 pw=0 time=0 us cost=3 size=146 card=1)'
=====================
PARSING IN CURSOR #6 len=59 dep=0 uid=59 oct=3 lid=59 tim=1023659148354 hv=2907586799 ad='22a44ae8' sqlid='3n102kqqnwh7g'
select part_udf_labels from APPLICATION_GLOBAL            
END OF STMT
PARSE #6:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1023659148351
BINDS #6:
EXEC #6:c=0,e=89,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1023659151927
FETCH #6:c=0,e=46,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,tim=1023659153664
STAT #6 id=1 cnt=1 pid=0 pos=1 obj=67410 op='TABLE ACCESS FULL APPLICATION_GLOBAL (cr=3 pr=0 pw=0 time=0 us cost=3 size=146 card=1)'
=====================
PARSING IN CURSOR #3 len=99 dep=0 uid=59 oct=3 lid=59 tim=1023659158452 hv=1519706035 ad='22a10580' sqlid='gm6bkj9d99rxm'
SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
END OF STMT
PARSE #3:c=0,e=399,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1023659158448
XCTEND rlbk=1, rd_only=1

In the above, notice the rollback (XCTEND rlbk=1, rd_only=1) immediately after the parse of the PART_MFG_BINARY query, at the point where Oracle would otherwise have written the bind variable values to the trace file; the bind variable values were never written.

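For reference, extended SQL traces like the one above are typically captured by activating event 10046 at level 12, which adds bind variable values and wait events to the trace output; event 10053 produces an optimizer trace, but only during a hard parse, which is why the shared pool is flushed first. A minimal sketch (standard syntax, adjust for your environment and privileges):

```sql
-- Force a hard parse of subsequently executed statements so a 10053 trace is produced
ALTER SYSTEM FLUSH SHARED_POOL;

-- Enable extended SQL tracing: level 12 = level 4 (binds) + level 8 (waits)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Enable the cost-based optimizer trace
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Run the application SQL of interest, then disable both traces
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SESSION SET EVENTS '10053 trace name context off';
```

On 11g the resulting trace file location may be found by querying V$DIAG_INFO for the 'Default Trace File' row.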
Flushing the shared pool, enabling both a 10046 trace and a 10053 trace in Oracle 11.1.0.6, and then executing the program again generated the following snippet in the 10053 trace file for the failing query against the PART_MFG_BINARY table:

******************************************
----- Current SQL Statement for this session (sql_id=gm6bkj9d99rxm) -----
SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
*******************************************
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
  *************************************
  PARAMETERS WITH ALTERED VALUES
  ******************************
Compilation Environment Dump
_smm_min_size                       = 128 KB
_smm_max_size                       = 20480 KB
_smm_px_max_size                    = 51200 KB
sqlstat_enabled                     = true
Bug Fix Control Environment

  *************************************
  PARAMETERS WITH DEFAULT VALUES
  ******************************
Compilation Environment Dump
optimizer_mode_hinted               = false
optimizer_features_hinted           = 0.0.0
parallel_execution_enabled          = false
parallel_query_forced_dop           = 0
parallel_dml_forced_dop             = 0
parallel_ddl_forced_degree          = 0
parallel_ddl_forced_instances       = 0
_query_rewrite_fudge                = 90
optimizer_features_enable           = 11.1.0.6
_optimizer_search_limit             = 5
cpu_count                           = 2
active_instance_count               = 1
parallel_threads_per_cpu            = 2
hash_area_size                      = 131072
bitmap_merge_area_size              = 1048576
sort_area_size                      = 65536
sort_area_retained_size             = 0
_sort_elimination_cost_ratio        = 0
_optimizer_block_size               = 8192
_sort_multiblock_read_count         = 2
_hash_multiblock_io_count           = 0
_db_file_optimizer_read_count       = 8
_optimizer_max_permutations         = 2000
pga_aggregate_target                = 167936 KB
_pga_max_size                       = 204800 KB
_query_rewrite_maxdisjunct          = 257
_smm_auto_min_io_size               = 56 KB
_smm_auto_max_io_size               = 248 KB
_cpu_to_io                          = 0
_optimizer_undo_cost_change         = 11.1.0.6
parallel_query_mode                 = enabled
parallel_dml_mode                   = disabled
parallel_ddl_mode                   = enabled
optimizer_mode                      = all_rows
_optimizer_percent_parallel         = 101
_always_anti_join                   = choose
_always_semi_join                   = choose
_optimizer_mode_force               = true
_partition_view_enabled             = true
_always_star_transformation         = false
_query_rewrite_or_error             = false
_hash_join_enabled                  = true
cursor_sharing                      = exact
_b_tree_bitmap_plans                = true
star_transformation_enabled         = false
_optimizer_cost_model               = choose
_new_sort_cost_estimate             = true
_complex_view_merging               = true
_unnest_subquery                    = true
_eliminate_common_subexpr           = true
_pred_move_around                   = true
_convert_set_to_join                = false
_push_join_predicate                = true
_push_join_union_view               = true
_fast_full_scan_enabled             = true
_optim_enhance_nnull_detection      = true
_parallel_broadcast_enabled         = true
_px_broadcast_fudge_factor          = 100
_ordered_nested_loop                = true
_no_or_expansion                    = false
optimizer_index_cost_adj            = 100
optimizer_index_caching             = 0
_system_index_caching               = 0
_disable_datalayer_sampling         = false
query_rewrite_enabled               = true
query_rewrite_integrity             = enforced
_query_cost_rewrite                 = true
_query_rewrite_2                    = true
_query_rewrite_1                    = true
_query_rewrite_expression           = true
_query_rewrite_jgmigrate            = true
_query_rewrite_fpc                  = true
_query_rewrite_drj                  = true
_full_pwise_join_enabled            = true
_partial_pwise_join_enabled         = true
_left_nested_loops_random           = true
_improved_row_length_enabled        = true
_index_join_enabled                 = true
_enable_type_dep_selectivity        = true
_improved_outerjoin_card            = true
_optimizer_adjust_for_nulls         = true
_optimizer_degree                   = 0
_use_column_stats_for_function      = true
_subquery_pruning_enabled           = true
_subquery_pruning_mv_enabled        = false
_or_expand_nvl_predicate            = true
_like_with_bind_as_equality         = false
_table_scan_cost_plus_one           = true
_cost_equality_semi_join            = true
_default_non_equality_sel_check     = true
_new_initial_join_orders            = true
_oneside_colstat_for_equijoins      = true
_optim_peek_user_binds              = true
_minimal_stats_aggregation          = true
_force_temptables_for_gsets         = false
workarea_size_policy                = auto
_smm_auto_cost_enabled              = true
_gs_anti_semi_join_allowed          = true
_optim_new_default_join_sel         = true
optimizer_dynamic_sampling          = 2
_pre_rewrite_push_pred              = true
_optimizer_new_join_card_computation = true
_union_rewrite_for_gs               = yes_gset_mvs
_generalized_pruning_enabled        = true
_optim_adjust_for_part_skews        = true
_force_datefold_trunc               = false
statistics_level                    = typical
_optimizer_system_stats_usage       = true
skip_unusable_indexes               = true
_remove_aggr_subquery               = true
_optimizer_push_down_distinct       = 0
_dml_monitoring_enabled             = true
_optimizer_undo_changes             = false
_predicate_elimination_enabled      = true
_nested_loop_fudge                  = 100
_project_view_columns               = true
_local_communication_costing_enabled = true
_local_communication_ratio          = 50
_query_rewrite_vop_cleanup          = true
_slave_mapping_enabled              = true
_optimizer_cost_based_transformation = linear
_optimizer_mjc_enabled              = true
_right_outer_hash_enable            = true
_spr_push_pred_refspr               = true
_optimizer_cache_stats              = false
_optimizer_cbqt_factor              = 50
_optimizer_squ_bottomup             = true
_fic_area_size                      = 131072
_optimizer_skip_scan_enabled        = true
_optimizer_cost_filter_pred         = false
_optimizer_sortmerge_join_enabled   = true
_optimizer_join_sel_sanity_check    = true
_mmv_query_rewrite_enabled          = true
_bt_mmv_query_rewrite_enabled       = true
_add_stale_mv_to_dependency_list    = true
_distinct_view_unnesting            = false
_optimizer_dim_subq_join_sel        = true
_optimizer_disable_strans_sanity_checks = 0
_optimizer_compute_index_stats      = true
_push_join_union_view2              = true
_optimizer_ignore_hints             = false
_optimizer_random_plan              = 0
_query_rewrite_setopgrw_enable      = true
_optimizer_correct_sq_selectivity   = true
_disable_function_based_index       = false
_optimizer_join_order_control       = 3
_optimizer_cartesian_enabled        = true
_optimizer_starplan_enabled         = true
_extended_pruning_enabled           = true
_optimizer_push_pred_cost_based     = true
_optimizer_null_aware_antijoin      = true
_optimizer_extend_jppd_view_types   = true
_sql_model_unfold_forloops          = run_time
_enable_dml_lock_escalation         = false
_bloom_filter_enabled               = true
_update_bji_ipdml_enabled           = 0
_optimizer_extended_cursor_sharing  = udo
_dm_max_shared_pool_pct             = 1
_optimizer_cost_hjsmj_multimatch    = true
_optimizer_transitivity_retain      = true
_px_pwg_enabled                     = true
optimizer_secure_view_merging       = true
_optimizer_join_elimination_enabled = true
flashback_table_rpi                 = non_fbt
_optimizer_cbqt_no_size_restriction = true
_optimizer_enhanced_filter_push     = true
_optimizer_filter_pred_pullup       = true
_rowsrc_trace_level                 = 0
_simple_view_merging                = true
_optimizer_rownum_pred_based_fkr    = true
_optimizer_better_inlist_costing    = all
_optimizer_self_induced_cache_cost  = false
_optimizer_min_cache_blocks         = 10
_optimizer_or_expansion             = depth
_optimizer_order_by_elimination_enabled = true
_optimizer_outer_to_anti_enabled    = true
_selfjoin_mv_duplicates             = true
_dimension_skip_null                = true
_force_rewrite_enable               = false
_optimizer_star_tran_in_with_clause = true
_optimizer_complex_pred_selectivity = true
_optimizer_connect_by_cost_based    = true
_gby_hash_aggregation_enabled       = true
_globalindex_pnum_filter_enabled    = true
_px_minus_intersect                 = true
_fix_control_key                    = 0
_force_slave_mapping_intra_part_loads = false
_force_tmp_segment_loads            = false
_query_mmvrewrite_maxpreds          = 10
_query_mmvrewrite_maxintervals      = 5
_query_mmvrewrite_maxinlists        = 5
_query_mmvrewrite_maxdmaps          = 10
_query_mmvrewrite_maxcmaps          = 20
_query_mmvrewrite_maxregperm        = 512
_query_mmvrewrite_maxmergedcmaps    = 50
_query_mmvrewrite_maxqryinlistvals  = 500
_disable_parallel_conventional_load = false
_trace_virtual_columns              = false
_replace_virtual_columns            = true
_virtual_column_overload_allowed    = true
_kdt_buffering                      = true
_first_k_rows_dynamic_proration     = true
_optimizer_sortmerge_join_inequality = true
_aw_row_source_enabled              = true
_optimizer_aw_stats_enabled         = true
_bloom_pruning_enabled              = true
result_cache_mode                   = MANUAL
_px_ual_serial_input                = true
_optimizer_skip_scan_guess          = false
_enable_row_shipping                = true
_row_shipping_threshold             = 80
_row_shipping_explain               = false
transaction_isolation_level         = read_commited
_optimizer_distinct_elimination     = true
_optimizer_multi_level_push_pred    = true
_optimizer_group_by_placement       = true
_optimizer_rownum_bind_default      = 10
_enable_query_rewrite_on_remote_objs = true
_optimizer_extended_cursor_sharing_rel = simple
_optimizer_adaptive_cursor_sharing  = true
_direct_path_insert_features        = 0
_optimizer_improve_selectivity      = true
optimizer_use_pending_statistics    = false
_optimizer_enable_density_improvements = true
_optimizer_aw_join_push_enabled     = true
_optimizer_connect_by_combine_sw    = true
_enable_pmo_ctas                    = 0
_optimizer_native_full_outer_join   = force
_bloom_predicate_enabled            = false
_optimizer_enable_extended_stats    = true
_is_lock_table_for_ddl_wait_lock    = 0
_pivot_implementation_method        = choose
optimizer_capture_sql_plan_baselines = false
optimizer_use_sql_plan_baselines    = true
_optimizer_star_trans_min_cost      = 0
_optimizer_star_trans_min_ratio     = 0
_with_subquery                      = OPTIMIZER
_optimizer_fkr_index_cost_bias      = 10
_optimizer_use_subheap              = true
_parallel_policy                    = manual
parallel_degree                     = 0
_parallel_time_threshold            = 10
_parallel_time_unit                 = 10
_optimizer_or_expansion_subheap     = true
_optimizer_free_transformation_heap = true
_optimizer_reuse_cost_annotations   = true
_result_cache_auto_size_threshold   = 100
_result_cache_auto_time_threshold   = 1000
_optimizer_nested_rollup_for_gset   = 100
_nlj_batching_enabled               = 1
parallel_query_default_dop          = 0
is_recur_flags                      = 0
optimizer_use_invisible_indexes     = false
flashback_data_archive_internal_cursor = 0
_optimizer_extended_stats_usage_control = 240
Bug Fix Control Environment
    fix  3834770 = 1      
    fix  3746511 = enabled
    fix  4519016 = enabled
    fix  3118776 = enabled
    fix  4488689 = enabled
    fix  2194204 = disabled
    fix  2660592 = enabled
    fix  2320291 = enabled
    fix  2324795 = enabled
    fix  4308414 = enabled
    fix  3499674 = disabled
    fix  4569940 = enabled
    fix  4631959 = enabled
    fix  4519340 = enabled
    fix  4550003 = enabled
    fix  1403283 = enabled
    fix  4554846 = enabled
    fix  4602374 = enabled
    fix  4584065 = enabled
    fix  4545833 = enabled
    fix  4611850 = enabled
    fix  4663698 = enabled
    fix  4663804 = enabled
    fix  4666174 = enabled
    fix  4567767 = enabled
    fix  4556762 = 15     
    fix  4728348 = enabled
    fix  4708389 = enabled
    fix  4175830 = enabled
    fix  4752814 = enabled
    fix  4583239 = enabled
    fix  4386734 = enabled
    fix  4887636 = enabled
    fix  4483240 = enabled
    fix  4872602 = disabled
    fix  4711525 = enabled
    fix  4545802 = enabled
    fix  4605810 = enabled
    fix  4704779 = enabled
    fix  4900129 = enabled
    fix  4924149 = enabled
    fix  4663702 = enabled
    fix  4878299 = enabled
    fix  4658342 = enabled
    fix  4881533 = enabled
    fix  4676955 = enabled
    fix  4273361 = enabled
    fix  4967068 = enabled
    fix  4969880 = disabled
    fix  5005866 = enabled
    fix  5015557 = enabled
    fix  4705343 = enabled
    fix  4904838 = enabled
    fix  4716096 = enabled
    fix  4483286 = disabled
    fix  4722900 = enabled
    fix  4615392 = enabled
    fix  5096560 = enabled
    fix  5029464 = enabled
    fix  4134994 = enabled
    fix  4904890 = enabled
    fix  5104624 = enabled
    fix  5014836 = enabled
    fix  4768040 = enabled
    fix  4600710 = enabled
    fix  5129233 = enabled
    fix  4595987 = enabled
    fix  4908162 = enabled
    fix  5139520 = enabled
    fix  5084239 = enabled
    fix  5143477 = disabled
    fix  2663857 = enabled
    fix  4717546 = enabled
    fix  5240264 = enabled
    fix  5099909 = enabled
    fix  5240607 = enabled
    fix  5195882 = enabled
    fix  5220356 = enabled
    fix  5263572 = enabled
    fix  5385629 = enabled
    fix  5302124 = enabled
    fix  5391942 = enabled
    fix  5384335 = enabled
    fix  5482831 = enabled
    fix  4158812 = enabled
    fix  5387148 = enabled
    fix  5383891 = enabled
    fix  5466973 = enabled
    fix  5396162 = enabled
    fix  5394888 = enabled
    fix  5395291 = enabled
    fix  5236908 = enabled
    fix  5509293 = enabled
    fix  5449488 = enabled
    fix  5567933 = enabled
    fix  5570494 = enabled
    fix  5288623 = enabled
    fix  5505995 = enabled
    fix  5505157 = enabled
    fix  5112460 = enabled
    fix  5554865 = enabled
    fix  5112260 = enabled
    fix  5112352 = enabled
    fix  5547058 = enabled
    fix  5618040 = enabled
    fix  5585313 = enabled
    fix  5547895 = enabled
    fix  5634346 = enabled
    fix  5620485 = enabled
    fix  5483301 = enabled
    fix  5657044 = enabled
    fix  5694984 = enabled
    fix  5868490 = enabled
    fix  5650477 = enabled
    fix  5611962 = enabled
    fix  4279274 = enabled
    fix  5741121 = enabled
    fix  5714944 = enabled
    fix  5391505 = enabled
    fix  5762598 = enabled
    fix  5578791 = enabled
    fix  5259048 = enabled
    fix  5882954 = enabled
    fix  2492766 = enabled
    fix  5707608 = enabled
    fix  5891471 = enabled
    fix  5884780 = enabled
    fix  5680702 = enabled
    fix  5371452 = enabled
    fix  5838613 = enabled
    fix  5949981 = enabled
    fix  5624216 = enabled
    fix  5741044 = enabled
    fix  5976822 = enabled
    fix  6006457 = enabled
    fix  5872956 = enabled
    fix  5923644 = enabled
    fix  5943234 = enabled
    fix  5844495 = enabled
    fix  4168080 = enabled
    fix  6020579 = enabled
    fix  5842686 = disabled
    fix  5996801 = enabled
    fix  5593639 = enabled
    fix  6133948 = enabled
    fix  6239909 = enabled

  ***************************************
  PARAMETERS IN OPT_PARAM HINT
  ****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
CBQT bypassed for query block SEL$1 (#0): no complex view or sub-queries.
CBQT: Validity checks failed for gm6bkj9d99rxm.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
CBQT bypassed for query block SEL$1 (#0): no complex view or sub-queries.
CBQT: Validity checks failed for gm6bkj9d99rxm.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE:     CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM:     PM bypassed: Outer query contains no views.
PM:     PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"PART_MFG_BINARY"."TYPE"=:B1 AND "PART_MFG_BINARY"."PART_ID"=:B2
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "PART_MFG_BINARY"."TYPE"=:B1 AND "PART_MFG_BINARY"."PART_ID"=:B2

apadrv-start sqlid=17985532525431545779
  :
    call(in-use=456, alloc=16360), compile(in-use=61748, alloc=65000), execution(in-use=67844, alloc=69396)

*******************************************
Peeked values of the binds in SQL statement
*******************************************
----- Bind Info (kkscoacd) -----
 Bind#0
  oacdty=01 mxl=2000(00) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0020 frm=01 csi=178 siz=2000 off=0
  No bind buffers allocated
 Bind#1
  oacdty=01 mxl=2000(00) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0020 frm=01 csi=178 siz=2000 off=0
  No bind buffers allocated

kkoqbc: optimizing query block SEL$1 (#0)

        :
    call(in-use=456, alloc=16360), compile(in-use=62388, alloc=65000), execution(in-use=67844, alloc=69396)

kkoqbc-subheap (create addr=0x0FC5BDB0)
****************
QUERY BLOCK TEXT
****************
SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
  fro(0): flg=0 objn=68499 hint_alias="PART_MFG_BINARY"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
  Using NOWORKLOAD Stats
  CPUSPEED: 1416 millions instructions/sec
  IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
  IOSEEKTIM: 10 milliseconds (default is 10)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: PART_MFG_BINARY  Alias: PART_MFG_BINARY
    #Rows: 0  #Blks:  1  AvgRowLen:  0.00
Index Stats::
  Index: SYS_C0012623  Col#: 1 2
    LVLS: 0  #LB: 0  #DK: 0  LB/K: 0.00  DB/K: 0.00  CLUF: 0.00
  Index: SYS_IL0000068499C00003$$  Col#:    (NOT ANALYZED)
    LVLS: 1  #LB: 25  #DK: 100  LB/K: 1.00  DB/K: 1.00  CLUF: 800.00
***************************************
1-ROW TABLES:  PART_MFG_BINARY[PART_MFG_BINARY]#0
Access path analysis for PART_MFG_BINARY
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for PART_MFG_BINARY[PART_MFG_BINARY]
=====================
PARSING IN CURSOR #7 len=210 dep=1 uid=0 oct=3 lid=0 tim=519852188056 hv=864012087 ad='521c267c' sqlid='96g93hntrzjtr'
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, avgcln from hist_head$ where

obj#=:1 and intcol#=:2
END OF STMT
PARSE #7:c=0,e=36,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=519852188053
BINDS #7:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfdc  bln=22  avl=04  flg=05
  value=68499
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfb8  bln=24  avl=02  flg=05
  value=2
EXEC #7:c=0,e=150,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=519852188332
FETCH #7:c=0,e=35,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=3,tim=519852188392
STAT #7 id=1 cnt=1 pid=0 pos=1 obj=411 op='TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=3 pr=0 pw=0 time=0 us)'
STAT #7 id=2 cnt=1 pid=1 pos=1 obj=413 op='INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=0 us)'
BINDS #4:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfdc  bln=22  avl=04  flg=05
  value=68499
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfb8  bln=24  avl=02  flg=05
  value=1
=====================
PARSING IN CURSOR #4 len=210 dep=1 uid=0 oct=3 lid=0 tim=519852188677 hv=864012087 ad='521c267c' sqlid='96g93hntrzjtr'
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, avgcln from hist_head$ where

obj#=:1 and intcol#=:2
END OF STMT
EXEC #4:c=0,e=129,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=519852188675
FETCH #4:c=0,e=17,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=3,tim=519852188762
  ColGroup (#1, Index) SYS_C0012623
    Col#: 1 2    CorStregth: 0.00
  ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 1.0000
  Table: PART_MFG_BINARY  Alias: PART_MFG_BINARY
    Card: Original: 0.000000  Rounded: 1  Computed: 0.00  Non Adjusted: 0.00
  Access Path: TableScan
    Cost:  2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 7121
      Resp_io: 2.00  Resp_cpu: 7121
  Access Path: index (UniqueScan)
    Index: SYS_C0012623
    resc_io: 0.00  resc_cpu: 1240
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000
    Cost: 0.00  Resp: 0.00  Degree: 1
  ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 1.0000
  ColGroup Usage:: PredCnt: 2  Matches Full: #0  Partial:  Sel: 1.0000
  Access Path: index (AllEqUnique)
    Index: SYS_C0012623
    resc_io: 0.00  resc_cpu: 1240
    ix_sel: 1.000000  ix_sel_with_filters: 1.000000
    Cost: 0.00  Resp: 0.00  Degree: 1
 One row Card: 1.000000
  Best:: AccessPath: IndexUnique
  Index: SYS_C0012623
         Cost: 0.00  Degree: 1  Resp: 0.00  Card: 1.00  Bytes: 0

BINDS #9:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfdc  bln=22  avl=04  flg=05
  value=68499
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=10e3cfb8  bln=24  avl=02  flg=05
  value=3
=====================
PARSING IN CURSOR #9 len=210 dep=1 uid=0 oct=3 lid=0 tim=519852189639 hv=864012087 ad='521c267c' sqlid='96g93hntrzjtr'
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, avgcln from hist_head$ where

obj#=:1 and intcol#=:2
END OF STMT
EXEC #9:c=0,e=137,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=3,tim=519852189635
FETCH #9:c=0,e=21,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=3,tim=519852189729
***************************************

OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  PART_MFG_BINARY[PART_MFG_BINARY]#0
***********************
Best so far:  Table#: 0  cost: 0.0001  card: 1.0000  bytes: 2022
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Or-Expansion validity checks failed on query block SEL$1 (#0) because no need for OR expansion if we only have 1-row tables
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="PART_MFG_BINARY"."PART_ID"=:B1
id=0 frofkks[i] (index start key) predicate="PART_MFG_BINARY"."TYPE"=:B1
id=0 frofkke[i] (index stop key) predicate="PART_MFG_BINARY"."PART_ID"=:B1
id=0 frofkke[i] (index stop key) predicate="PART_MFG_BINARY"."TYPE"=:B1
Final cost for query block SEL$1 (#0) - All Rows Plan:
  Best join order: 1
  Cost: 0.0001  Degree: 1  Card: 1.0000  Bytes: 2022
  Resc: 0.0001  Resc_io: 0.0000  Resc_cpu: 1240
  Resp: 0.0001  Resp_io: 0.0000  Resc_cpu: 1240
kkoqbc-subheap (delete addr=0x0FC5BDB0, in-use=10612, alloc=12148)
kkoqbc-end:
        :
    call(in-use=6600, alloc=32736), compile(in-use=68440, alloc=69056), execution(in-use=72232, alloc=73472)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
          :
    call(in-use=6600, alloc=32736), compile(in-use=69096, alloc=73112), execution(in-use=76308, alloc=77548)

Starting SQL statement dump

user_id=59 user_name=TESTUSER module=PRTMNT.EXE action=
sql_id=gm6bkj9d99rxm plan_hash_value=1612811168 problem_type=3
----- Current SQL Statement for this session (sql_id=gm6bkj9d99rxm) -----
SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
sql_text_length=100
sql=SELECT BITS FROM PART_MFG_BINARY  where TYPE = :1       and PART_ID = :2                          
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
------------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name           | Rows  | Bytes | Cost  | Time      |
------------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |                |       |       |     1 |           |
| 1   |  TABLE ACCESS BY INDEX ROWID | PART_MFG_BINARY|     1 |  2022 |     0 |           |
| 2   |   INDEX UNIQUE SCAN          | SYS_C0012623   |     1 |       |     0 |           |
------------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("PART_ID"=:2 AND "TYPE"=:1)

Content of other_xml column
===========================
  db_version     : 11.1.0.6
  parse_schema   : TESTUSER
  plan_hash      : 1612811168
  plan_hash_2    : 424781238
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.1.0.6')
      DB_VERSION('11.1.0.6')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "PART_MFG_BINARY"@"SEL$1" ("PART_MFG_BINARY"."PART_ID" "PART_MFG_BINARY"."TYPE"))
    END_OUTLINE_DATA
  */

Query Block Registry:
SEL$1 0x4220f70c (PARSER) [FINAL]

:
    call(in-use=8828, alloc=32736), compile(in-use=89856, alloc=187352), execution(in-use=194096, alloc=196424)

End of Optimizer State Dump
====================== END SQL Statement Dump ======================
XCTEND rlbk=1, rd_only=1

A portion of a SQL*Net trace captured when the client ERP program is connected to Oracle 10.2.0.2 follows:

-----------------------------------------------------------------------------
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=276, type=6
nttwr: entry
nttwr: socket 340 had bytes written=276
nttwr: exit
nspsend: packet dump
nspsend: 01 14 00 00 06 00 00 00  |........|
nspsend: 00 00 03 5E 85 09 80 02  |...^....|
nspsend: 00 02 00 00 00 01 63 00  |......c.|
nspsend: 00 00 01 0D 00 00 00 00  |........|
nspsend: 01 00 00 00 00 01 00 00  |........|
nspsend: 00 00 00 00 00 01 02 00  |........|
nspsend: 00 00 00 00 00 00 01 01  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 01 63 53 45 4C 45 43  |..cSELEC|
nspsend: 54 20 42 49 54 53 20 46  |T.BITS.F|
nspsend: 52 4F 4D 20 50 41 52 54  |ROM.PART|
nspsend: 5F 4D 46 47 5F 42 49 4E  |_MFG_BIN|
nspsend: 41 52 59 20 20 77 68 65  |ARY..whe|
nspsend: 72 65 20 54 59 50 45 20  |re.TYPE.|
nspsend: 3D 20 3A 31 20 20 20 20  |=.:1....|
nspsend: 20 20 20 61 6E 64 20 50  |...and.P|
nspsend: 41 52 54 5F 49 44 20 3D  |ART_ID.=|
nspsend: 20 3A 32 20 20 20 20 20  |.:2.....|
nspsend: 20 20 20 20 20 20 20 20  |........|
nspsend: 20 20 20 20 20 20 20 20  |........|
nspsend: 20 20 20 20 20 20 01 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 01 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 01 80 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 01  |........|
nspsend: 80 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00              |....    |
nspsend: 276 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 340 had bytes read=223
nttrd: exit
nsprecv: 223 bytes from transport
nsprecv: tlen=223, plen=223, type=6
nsprecv: packet dump
nsprecv: 00 DF 00 00 06 00 00 00  |........|
nsprecv: 00 00 10 17 34 44 80 BB  |....4D..|
nsprecv: 49 5F 2C 75 8A 72 99 F9  |I_,u.r..|
nsprecv: B3 DF 94 5A 78 6C 0B 18  |...Zxl..|
nsprecv: 08 30 12 00 00 00 00 01  |.0......|
nsprecv: 00 00 00 4D 71 00 00 00  |...Mq...|
nsprecv: A0 0F 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 01 04 04 00 00 00 04  |........|
nsprecv: 42 49 54 53 00 00 00 00  |BITS....|
nsprecv: 00 00 00 00 00 00 07 00  |........|
nsprecv: 00 00 07 78 6C 0B 18 0C  |...xl...|
nsprecv: 17 2C 01 00 00 00 E8 1F  |.,......|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 08 06 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 02 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 04 01 00  |........|
nsprecv: 00 00 83 01 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 02 00  |........|
nsprecv: 11 00 03 00 00 00 00 00  |........|
nsprecv: C3 88 01 00 04 00 00 0D  |........|
nsprecv: EA 0A 00 0E 00 00 00 00  |........|
nsprecv: 00 00 85 00 00 01 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00     |....... |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=218, type=6
nttwr: entry
nttwr: socket 340 had bytes written=218
nttwr: exit
nspsend: packet dump
nspsend: 00 DA 00 00 06 00 00 00  |........|
nspsend: 00 00 03 5E 86 78 80 00  |...^.x..|
nspsend: 00 02 00 00 00 00 00 00  |........|
nspsend: 00 00 01 0D 00 00 00 00  |........|
nspsend: 01 00 00 00 00 01 00 00  |........|
nspsend: 00 14 00 00 00 01 02 00  |........|
nspsend: 00 00 00 00 00 00 01 01  |........|
nspsend: 01 00 00 00 00 00 00 00  |........|
nspsend: 00 01 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 01 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 60 01  |......`.|
nspsend: 00 00 01 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 B2 00 01 00  |........|
nspsend: 00 00 00 60 01 00 00 09  |...`....|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 B2 00 01 00 00 00 00  |........|
nspsend: 71 05 00 00 14 00 00 00  |q.......|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 07 01 44  |.......D|
nspsend: 09 30 31 35 34 37 30 30  |.1856710|
nspsend: 39 4D                    |9M      |
nspsend: 218 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 340 had bytes read=223
nttrd: exit
nsprecv: 223 bytes from transport
nsprecv: tlen=223, plen=223, type=6
nsprecv: packet dump
nsprecv: 00 DF 00 00 06 00 00 00  |........|
nsprecv: 00 00 10 17 34 44 80 BB  |....4D..|
nsprecv: 49 5F 2C 75 8A 72 99 F9  |I_,u.r..|
nsprecv: B3 DF 94 5A 78 6C 0B 18  |...Zxl..|
nsprecv: 08 30 12 00 00 00 00 01  |.0......|
nsprecv: 00 00 00 4D 71 00 00 00  |...Mq...|
nsprecv: A0 0F 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 01 04 04 00 00 00 04  |........|
nsprecv: 42 49 54 53 00 00 00 00  |BITS....|
nsprecv: 00 00 00 00 00 00 07 00  |........|
nsprecv: 00 00 07 78 6C 0B 18 0C  |...xl...|
nsprecv: 17 2C 01 00 00 00 E8 1F  |.,......|
nsprecv: 00 00 02 00 00 00 02 00  |........|
nsprecv: 00 00 08 06 00 B5 D0 3D  |.......=|
nsprecv: 47 00 00 00 00 02 00 00  |G.......|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 04 01 00  |........|
nsprecv: 00 00 84 01 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 02 00  |........|
nsprecv: 00 00 03 00 00 00 00 00  |........|
nsprecv: C3 88 01 00 04 00 00 0D  |........|
nsprecv: EA 0A 00 0E 00 00 00 00  |........|
nsprecv: 00 00 86 00 00 01 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00     |....... |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=21, type=6
nttwr: entry
nttwr: socket 340 had bytes written=21
nttwr: exit
nspsend: packet dump
nspsend: 00 15 00 00 06 00 00 00  |........|
nspsend: 00 00 03 05 87 02 00 00  |........|
nspsend: 00 01 00 00 00           |.....   |
nspsend: 21 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 340 had bytes read=102
nttrd: exit
nsprecv: 102 bytes from transport
nsprecv: tlen=102, plen=102, type=6
nsprecv: packet dump
nsprecv: 00 66 00 00 06 00 00 00  |.f......|
nsprecv: 00 00 04 01 00 00 00 85  |........|
nsprecv: 01 00 00 00 00 7B 05 00  |.....{..|
nsprecv: 00 00 00 02 00 00 00 03  |........|
nsprecv: 00 00 00 00 00 C3 88 01  |........|
nsprecv: 00 04 00 00 0D EA 0A 00  |........|
nsprecv: 0E 00 00 00 00 00 00 87  |........|
nsprecv: 00 00 01 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 19 4F 52 41  |.....ORA|
nsprecv: 2D 30 31 34 30 33 3A 20  |-01403:.|
nsprecv: 6E 6F 20 64 61 74 61 20  |no.data.|
nsprecv: 66 6F 75 6E 64 0A        |found.  |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit 

A portion of a SQL*Net trace captured when the client ERP program (Oracle 10.2.0.1 Client) is connected to Oracle 11.1.0.6 follows:

-----------------------------------------------------------------------------
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=177, type=6
nttwr: entry
nttwr: socket 308 had bytes written=177
nttwr: exit
nspsend: packet dump
nspsend: 00 B1 00 00 06 00 00 00  |........|
nspsend: 00 00 03 4A FE 01 00 00  |...J....|
nspsend: 00 03 00 00 00 78 14 FD  |.....x..|
nspsend: 02 63 00 00 00 00 00 00  |.c......|
nspsend: 00 00 00 00 00 48 D8 12  |.....H..|
nspsend: 00 01 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 63 53 45 4C 45 43 54  |.cSELECT|
nspsend: 20 42 49 54 53 20 46 52  |.BITS.FR|
nspsend: 4F 4D 20 50 41 52 54 5F  |OM.PART_|
nspsend: 4D 46 47 5F 42 49 4E 41  |MFG_BINA|
nspsend: 52 59 20 20 77 68 65 72  |RY..wher|
nspsend: 65 20 54 59 50 45 20 3D  |e.TYPE.=|
nspsend: 20 3A 31 20 20 20 20 20  |.:1.....|
nspsend: 20 20 61 6E 64 20 50 41  |..and.PA|
nspsend: 52 54 5F 49 44 20 3D 20  |RT_ID.=.|
nspsend: 3A 32 20 20 20 20 20 20  |:2......|
nspsend: 20 20 20 20 20 20 20 20  |........|
nspsend: 20 20 20 20 20 20 20 20  |........|
nspsend: 20 20 20 20 20 02 00 00  |........|
nspsend: 00                       |.       |
nspsend: 177 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 308 had bytes read=106
nttrd: exit
nsprecv: 106 bytes from transport
nsprecv: tlen=106, plen=106, type=6
nsprecv: packet dump
nsprecv: 00 6A 00 00 06 00 00 00  |.j......|
nsprecv: 00 00 04 05 00 00 00 FC  |........|
nsprecv: 01 01 01 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 03 00 00 00  |........|
nsprecv: 03 00 00 00 00 00 30 0A  |......0.|
nsprecv: 01 00 05 00 00 00 86 2A  |.......*|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 FE 00 00 01 00  |........|
nsprecv: 00 00 36 01 00 00 00 00  |..6.....|
nsprecv: 00 00 58 BF 0C 0E 00 00  |..X.....|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00                    |..      |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=49, type=6
nttwr: entry
nttwr: socket 308 had bytes written=49
nttwr: exit
nspsend: packet dump
nspsend: 00 31 00 00 06 00 00 00  |.1......|
nspsend: 00 00 03 2B FF 03 00 00  |...+....|
nspsend: 00 01 00 00 00 90 DA C2  |........|
nspsend: 01 60 A2 C2 01 20 00 00  |.`......|
nspsend: 00 92 DA C2 01 94 DA C2  |........|
nspsend: 01 C0 03 00 00 54 DE C2  |.....T..|
nspsend: 01                       |.       |
nspsend: 49 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 308 had bytes read=79
nttrd: exit
nsprecv: 79 bytes from transport
nsprecv: tlen=79, plen=79, type=6
nsprecv: packet dump
nsprecv: 00 4F 00 00 06 00 00 00  |.O......|
nsprecv: 00 00 08 01 00 01 00 01  |........|
nsprecv: 71 00 00 00 A0 0F 00 00  |q.......|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 01 04 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 05 00 05 42 49 54 53 22  |...BITS"|
nsprecv: 09 05 00 00 00 FD 01     |....... |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=33, type=6
nttwr: entry
nttwr: socket 308 had bytes written=33
nttwr: exit
nspsend: packet dump
nspsend: 00 21 00 00 06 00 00 00  |.!......|
nspsend: 00 00 03 15 00 D5 07 00  |........|
nspsend: 00 00 00 00 00 EB 8B DB  |........|
nspsend: 00 C8 00 00 00 48 D8 12  |.....H..|
nspsend: 00                       |.       |
nspsend: 33 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 308 had bytes read=96
nttrd: exit
nsprecv: 96 bytes from transport
nsprecv: tlen=96, plen=96, type=6
nsprecv: packet dump
nsprecv: 00 60 00 00 06 00 00 00  |.`......|
nsprecv: 00 00 08 4B 00 4B 4F 52  |...K.KOR|
nsprecv: 41 2D 30 32 30 30 35 3A  |A-02005:|
nsprecv: 20 69 6D 70 6C 69 63 69  |.implici|
nsprecv: 74 20 28 2D 31 29 20 6C  |t.(-1).l|
nsprecv: 65 6E 67 74 68 20 6E 6F  |ength.no|
nsprecv: 74 20 76 61 6C 69 64 20  |t.valid.|
nsprecv: 66 6F 72 20 74 68 69 73  |for.this|
nsprecv: 20 62 69 6E 64 20 6F 72  |.bind.or|
nsprecv: 20 64 65 66 69 6E 65 20  |.define.|
nsprecv: 64 61 74 61 74 79 70 65  |datatype|
nsprecv: 0A 09 05 00 00 00 FD 01  |........|
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011 

For comparison, a portion of a level 16 SQL*Net trace captured when a custom application (Oracle 10.2.0.1 client) successfully executes the same query against an Oracle 11.1.0.6 server follows:

-----------------------------------------------------------------------------
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=329, type=6
nttwr: entry
nttwr: socket 672 had bytes written=329
nttwr: exit
nspsend: packet dump
nspsend: 01 49 00 00 06 00 00 00  |.I......|
nspsend: 00 00 11 69 0B A8 E8 14  |...i....|
nspsend: 03 01 00 00 00 04 00 00  |........|
nspsend: 00 03 5E 0C 69 80 00 00  |..^.i...|
nspsend: 00 00 00 00 48 E3 14 03  |....H...|
nspsend: 4A 00 00 00 14 AB 14 03  |J.......|
nspsend: 0D 00 00 00 00 00 00 00  |........|
nspsend: 48 AB 14 03 00 00 00 00  |H.......|
nspsend: 64 00 00 00 00 00 00 00  |d.......|
nspsend: DC E3 14 03 02 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 4A AB 14 03 DC E3 14 03  |J.......|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 58 AB 14 03  |....X...|
nspsend: FE 40 53 45 4C 45 43 54  |.@SELECT|
nspsend: 20 42 49 54 53 20 20 46  |.BITS..F|
nspsend: 52 4F 4D 20 50 41 52 54  |ROM.PART|
nspsend: 5F 4D 46 47 5F 42 49 4E  |_MFG_BIN|
nspsend: 41 52 59 20 77 68 65 72  |ARY.wher|
nspsend: 65 20 20 54 59 50 45 20  |e..TYPE.|
nspsend: 3D 20 3A 31 20 20 20 20  |=.:1....|
nspsend: 20 20 20 61 6E 64 20 50  |...and.P|
nspsend: 41 52 0A 54 5F 49 44 20  |AR.T_ID.|
nspsend: 3D 20 3A 32 20 00 01 00  |=.:2....|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 01 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 01 60 00 00 00 02  |...`....|
nspsend: 00 00 00 00 00 00 00 10  |........|
nspsend: 00 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 B2 00 01 00 00  |........|
nspsend: 00 00 00 01 60 00 00 00  |....`...|
nspsend: 12 00 00 00 00 00 00 00  |........|
nspsend: 10 00 00 00 00 00 00 00  |........|
nspsend: 00 00 00 00 B2 00 01 00  |........|
nspsend: 00 00 00 00 07 01 44 09  |......D.|
nspsend: 30 39 35 34 37 30 30 39  |98567109|
nspsend: 4D                       |M       |
nspsend: 329 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 672 had bytes read=257
nttrd: exit
nsprecv: 257 bytes from transport
nsprecv: tlen=257, plen=257, type=6
nsprecv: packet dump
nsprecv: 01 01 00 00 06 00 00 00  |........|
nsprecv: 00 00 10 17 4E A5 71 9E  |....N.q.|
nsprecv: 5F 80 3E 52 46 CC 9F F4  |_.>RF...|
nsprecv: 96 4C 0C FD 78 6C 0B 18  |.L..xl..|
nsprecv: 0C 31 23 00 00 00 00 01  |.1#.....|
nsprecv: 00 00 00 39 01 71 00 00  |...9.q..|
nsprecv: 00 A0 0F 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 01 04 04  |........|
nsprecv: 00 00 00 04 42 49 54 53  |....BITS|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 07 00 00 00 07 78  |.......x|
nsprecv: 6C 0B 18 0C 31 23 00 00  |l...1#..|
nsprecv: 00 00 E8 1F 00 00 02 00  |........|
nsprecv: 00 00 02 00 00 00 08 06  |........|
nsprecv: 00 15 90 51 47 00 00 00  |...QG...|
nsprecv: 00 03 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 04 01 00 00 00 0A 00  |........|
nsprecv: 01 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 03 00 12 00 03  |........|
nsprecv: 00 00 08 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 0C 00 00 01 00 00  |........|
nsprecv: 00 36 01 00 00 00 00 00  |.6......|
nsprecv: 00 10 EE 0D 0E 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00                       |.       |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
nioqsn: entry
nioqsn: exit
nioqrc: entry
nsdo: entry
nsdo: cid=0, opcode=84, *bl=0, *what=1, uflgs=0x20, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
nsdofls: entry
nsdofls: DATA flags: 0x0
nsdofls: sending NSPTDA packet
nspsend: entry
nspsend: plen=21, type=6
nttwr: entry
nttwr: socket 672 had bytes written=21
nttwr: exit
nspsend: packet dump
nspsend: 00 15 00 00 06 00 00 00  |........|
nspsend: 00 00 03 05 0D 03 00 00  |........|
nspsend: 00 64 00 00 00           |.d...   |
nspsend: 21 bytes to transport
nspsend: normal exit
nsdofls: exit (0)
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nsdo: entry
nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: rank=64, nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: nsctx: state=8, flg=0x400d, mvd=0
nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: switching to application buffer
nsrdr: entry
nsrdr: recving a packet
nsprecv: entry
nsprecv: reading from transport...
nttrd: entry
nttrd: socket 672 had bytes read=132
nttrd: exit
nsprecv: 132 bytes from transport
nsprecv: tlen=132, plen=132, type=6
nsprecv: packet dump
nsprecv: 00 84 00 00 06 00 00 00  |........|
nsprecv: 00 00 04 01 00 00 00 0B  |........|
nsprecv: 00 01 00 00 00 00 7B 05  |......{.|
nsprecv: 00 00 00 00 03 00 00 00  |........|
nsprecv: 03 00 20 08 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 0D 00 00 01 00  |........|
nsprecv: 00 00 36 01 00 00 00 00  |..6.....|
nsprecv: 00 00 10 EE 0D 0E 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 00 00 00 00 00 00  |........|
nsprecv: 00 00 19 4F 52 41 2D 30  |...ORA-0|
nsprecv: 31 34 30 33 3A 20 6E 6F  |1403:.no|
nsprecv: 20 64 61 74 61 20 66 6F  |.data.fo|
nsprecv: 75 6E 64 0A              |und.    |
nsprecv: normal exit
nsrdr: got NSPTDA packet
nsrdr: NSPTDA flags: 0x0
nsrdr: normal exit
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: *what=1, *bl=2001
snsbitts_ts: entry
snsbitts_ts: acquired the bit
snsbitts_ts: normal exit
nsdo: nsctxrnk=0
snsbitcl_ts: entry
snsbitcl_ts: normal exit
nsdo: normal exit
nioqrc: exit
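For reference, traces like the ones above are produced by client-side SQL*Net tracing; a minimal sketch of the client's sqlnet.ora entries follows (the directory path is an assumption, adjust for your environment):

```
# Client-side sqlnet.ora -- level 16 (SUPPORT) includes packet dumps
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = C:\TraceFiles   # assumed path; adjust as needed
TRACE_UNIQUE_CLIENT = ON                 # separate trace file per connection
```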

Experiments:

  • Oracle 10.2.0.1 client, Oracle 11.1.0.7 client, Oracle 11.1.0.6 client – Same issue with all when connecting to Oracle 11.1.0.6/7 database.
  • Setting optimizer_features_enable to 10.2.0.1 and 10.1.0.4 – same issue with both when connecting to Oracle 11.1.0.6/7 database.
  • Compared the Oracle parameters including the hidden (underscore) parameters between 10.2.0.2 and 11.1.0.7 and adjusted a couple that seemed as though they would be possible causes:
    — _OPTIMIZER_ADAPTIVE_CURSOR_SHARING=FALSE
    — _OPTIMIZER_COMPLEX_PRED_SELECTIVITY=FALSE
    — _OPTIMIZER_EXTENDED_CURSOR_SHARING=NONE
    — etc.
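For anyone repeating the last experiment, the parameter changes can be tested at the session level; a sketch of the syntax (hidden underscore parameters must be double-quoted, and changing them is only advisable under the direction of Oracle Support):

```sql
ALTER SESSION SET optimizer_features_enable = '10.2.0.1';
ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE;
ALTER SESSION SET "_optimizer_extended_cursor_sharing" = 'NONE';
```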

The end result of all of the experiments was exactly the same – any query from the ERP program that accessed a table with a BLOB column triggered the “ORA-02005: implicit (-1) length not valid for this bind or define datatype” error.

So, what would you do to troubleshoot this problem?  Remember, the old version of the ERP system that used the LONG RAW datatype worked fine with Oracle 11.1.0.6, while the new version that uses the BLOB datatype fails to work on Oracle 11.1.0.6, 11.1.0.7, and 11.2.0.1.
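When picking through packet dumps like the ones above, it helps to reassemble the ASCII payload programmatically rather than reading the right-hand column by eye.  A small sketch (the function name and regular expression are mine, not part of any Oracle trace tooling):

```python
import re


def decode_dump(lines):
    """Reassemble the payload bytes from SQL*Net trace packet-dump lines."""
    data = bytearray()
    for line in lines:
        # Dump lines look like: "nspsend: 01 14 00 00 06 ...  |........|"
        m = re.match(r'\s*nsp(?:send|recv):\s+((?:[0-9A-F]{2}\s+)+)\|', line)
        if m:
            data.extend(int(b, 16) for b in m.group(1).split())
    # Show printable characters; replace everything else with '.'
    return ''.join(chr(b) if 32 <= b < 127 else '.' for b in data)
```

Feeding the nspsend lines of the first dump above through this function recovers the SELECT statement text, with the surrounding binary protocol bytes rendered as dots.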





Working with Oracle’s Time Model Data 2

14 01 2010

January 14, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

So, how is it possible to transform this:

Into something like this:

A fairly long VBS script connects to the Oracle database, performs a lot of calculations on the data returned from the database, and then outputs the formatted result to a web page in Internet Explorer.  The VBS script continues to control the web page once the page is built, automatically refreshing the web page after a specified number of seconds, and responding to button clicks on the web page.  Easy, right?  Because this is done using a VBS script, the client computer must be running Windows, while the server may run Unix, Linux, or Windows (I suggest not running this script directly on the server; instead, run it from another computer).  The Show Detail button acts as a toggle to show or hide the session details (on the yellow lines) that contributed to the system-wide statistic values (by default, a session must contribute at least 10% of the total to be included in the session-level output).  Clicking the Re-Query button causes the script to update the page with the latest statistic delta values prior to the automatic refresh timer expiring.
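The script pulls its raw numbers from the V$ views named in the variable declaration comments further down; the data collection boils down to queries along these lines (a sketch, not the script's exact SQL):

```sql
-- System-wide time model statistics (cumulative since instance start;
-- the script computes deltas between refreshes)
SELECT stat_name, value
  FROM V$SYS_TIME_MODEL;

-- Per-session breakdown, used for the session-level detail rows
SELECT sid, stat_name, value
  FROM V$SESS_TIME_MODEL
 WHERE stat_name IN ('DB time', 'DB CPU');

-- OS-level CPU statistics
SELECT stat_name, value
  FROM V$OSSTAT;
```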

So, where is the code?  Note that there may be bugs in the code – don’t step on them.  Also, the code is mostly written in a very verbose syntax so that it is easy to follow along with the script logic.

Const adCmdText = 1

Dim i
Dim j
Dim k
Dim strSQL
Dim strUsername
Dim strPassword
Dim strDatabase
Dim intCheckIterations              'Number of times to check the instances
Dim intDelayIterations              'Number of seconds to delay between iterations
Dim sglSessionMinimumPercent        'Minimum percent of the total required for the session to be included in the report detail
Dim dteLastLoopStart                'Time of the last loop start
Dim intDataChanged                  'Indicates whether or not the data to be displayed on the web page has changed
Dim intDisplaySessionDetail         'Indicates whether or not to display the session level detail
Dim snpDataWait                     'ADO recordset used to query V$SYSTEM_EVENT
Dim comDataWait                     'ADO command object used to retrieve data from V$SYSTEM_EVENT
Dim snpDataOSStat                   'ADO recordset used to query V$OSSTAT
Dim comDataOSStat                   'ADO command object used to retrieve data from V$OSSTAT
Dim snpDataSysTime                  'ADO recordset used to query V$SYS_TIME_MODEL
Dim comDataSysTime                  'ADO command object used to retrieve from V$SYS_TIME_MODEL
Dim snpDataSessTime                 'ADO recordset used to query V$SESS_TIME_MODEL
Dim comDataSessTime                 'ADO command object used to retrieve from V$SESS_TIME_MODEL
Dim dbDatabase                      'ADO database connection object

Dim strHTML                         'The raw HTML for the web page
Dim objIE                           'The Internet Explorer object
Dim strInd                          'Indent characters for the table
Dim intFlag                         'Loop control variable, allow to jump out of the loop early

Dim intNumCPUs                      'Number of CPUs
Dim dblIdleTime                     'Current value of idle time from V$OSSTAT
Dim dblBusyTime                     'Current value of busy time from V$OSSTAT
Dim dblUserTime                     'Current value of user time from V$OSSTAT
Dim dblSysTime                      'Current value of system/kernel mode time from V$OSSTAT
Dim dblIdleTimeLast                 'Previous value of idle time from V$OSSTAT
Dim dblBusyTimeLast                 'Previous value of busy time from V$OSSTAT
Dim dblUserTimeLast                 'Previous value of user time from V$OSSTAT
Dim dblSysTimeLast                  'Previous value of system/kernel mode time from V$OSSTAT

Dim dblDBCPU                        'Current value of DB CPU from V$SYS_TIME_MODEL
Dim dblDBTime                       'Current value of DB time from V$SYS_TIME_MODEL
Dim dblJavaTime                     'Current value of Java execution elapsed time from V$SYS_TIME_MODEL
Dim dblPLSQLCompile                 'Current value of PL/SQL compilation elapsed time from V$SYS_TIME_MODEL
Dim dblPLSQLExecution               'Current value of PL/SQL execution elapsed time from V$SYS_TIME_MODEL
Dim dblRMANCPU                      'Current value of RMAN cpu time (backup/restore) from V$SYS_TIME_MODEL
Dim dblBackgroundCPU                'Current value of background cpu time from V$SYS_TIME_MODEL
Dim dblBackgroundElapsed            'Current value of background elapsed time from V$SYS_TIME_MODEL
Dim dblConnectMgmt                  'Current value of connection management call elapsed time from V$SYS_TIME_MODEL
Dim dblFailedParseMemory            'Current value of failed parse (out of shared memory) elapsed time from V$SYS_TIME_MODEL
Dim dblFailedParseElapsed           'Current value of failed parse elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseBind                'Current value of hard parse (bind mismatch) elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseSharing             'Current value of hard parse (sharing criteria) elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseElapsed             'Current value of hard parse elapsed time from V$SYS_TIME_MODEL
Dim dblInboundPLSQL                 'Current value of inbound PL/SQL rpc elapsed time from V$SYS_TIME_MODEL
Dim dblParseTimeElapsed             'Current value of parse time elapsed from V$SYS_TIME_MODEL
Dim dblRepeatedBind                 'Current value of repeated bind elapsed time from V$SYS_TIME_MODEL
Dim dblSequenceLoad                 'Current value of sequence load elapsed time from V$SYS_TIME_MODEL
Dim dblSQLExecuteTime               'Current value of sql execute elapsed time from V$SYS_TIME_MODEL

Dim dblDBCPULast                    'Last value of DB CPU from V$SYS_TIME_MODEL
Dim dblDBTimeLast                   'Last value of DB time from V$SYS_TIME_MODEL
Dim dblJavaTimeLast                 'Last value of Java execution elapsed time from V$SYS_TIME_MODEL
Dim dblPLSQLCompileLast             'Last value of PL/SQL compilation elapsed time from V$SYS_TIME_MODEL
Dim dblPLSQLExecutionLast           'Last value of PL/SQL execution elapsed time from V$SYS_TIME_MODEL
Dim dblRMANCPULast                  'Last value of RMAN cpu time (backup/restore) from V$SYS_TIME_MODEL
Dim dblBackgroundCPULast            'Last value of background cpu time from V$SYS_TIME_MODEL
Dim dblBackgroundElapsedLast        'Last value of background elapsed time from V$SYS_TIME_MODEL
Dim dblConnectMgmtLast              'Last value of connection management call elapsed time from V$SYS_TIME_MODEL
Dim dblFailedParseMemoryLast        'Last value of failed parse (out of shared memory) elapsed time from V$SYS_TIME_MODEL
Dim dblFailedParseElapsedLast       'Last value of failed parse elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseBindLast            'Last value of hard parse (bind mismatch) elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseSharingLast         'Last value of hard parse (sharing criteria) elapsed time from V$SYS_TIME_MODEL
Dim dblHardParseElapsedLast         'Last value of hard parse elapsed time from V$SYS_TIME_MODEL
Dim dblInboundPLSQLLast             'Last value of inbound PL/SQL rpc elapsed time from V$SYS_TIME_MODEL
Dim dblParseTimeElapsedLast         'Last value of parse time elapsed from V$SYS_TIME_MODEL
Dim dblRepeatedBindLast             'Last value of repeated bind elapsed time from V$SYS_TIME_MODEL
Dim dblSequenceLoadLast             'Last value of sequence load elapsed time from V$SYS_TIME_MODEL
Dim dblSQLExecuteTimeLast           'Last value of sql execute elapsed time from V$SYS_TIME_MODEL

Dim intSessionCount                 'Number of sessions logged
Dim intSessionCurrent               'Index of the current session
Dim lngSIDLast                      'SID for the previous row from the database
Dim lngSerialLast                   'SERIAL# for the previous row
Dim intSessionExists(999)           'Used to determine if the session is still found in the system
Dim lngSID(999)                     'SID for session
Dim lngSerial(999)                  'SERIAL# for the session
Dim strSessionOther(999)            'USERNAME, MACHINE, PROGRAM
Dim dblDBCPUS(999)                  'Current value of DB CPU from V$SESS_TIME_MODEL
Dim dblDBTimeS(999)                 'Current value of DB time from V$SESS_TIME_MODEL
Dim dblJavaTimeS(999)               'Current value of Java execution elapsed time from V$SESS_TIME_MODEL
Dim dblPLSQLCompileS(999)           'Current value of PL/SQL compilation elapsed time from V$SESS_TIME_MODEL
Dim dblPLSQLExecutionS(999)         'Current value of PL/SQL execution elapsed time from V$SESS_TIME_MODEL
Dim dblRMANCPUS(999)                'Current value of RMAN cpu time (backup/restore) from V$SESS_TIME_MODEL
Dim dblBackgroundCPUS(999)          'Current value of background cpu time from V$SESS_TIME_MODEL
Dim dblBackgroundElapsedS(999)      'Current value of background elapsed time from V$SESS_TIME_MODEL
Dim dblConnectMgmtS(999)            'Current value of connection management call elapsed time from V$SESS_TIME_MODEL
Dim dblFailedParseMemoryS(999)      'Current value of failed parse (out of shared memory) elapsed time from V$SESS_TIME_MODEL
Dim dblFailedParseElapsedS(999)     'Current value of failed parse elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseBindS(999)          'Current value of hard parse (bind mismatch) elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseSharingS(999)       'Current value of hard parse (sharing criteria) elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseElapsedS(999)       'Current value of hard parse elapsed time from V$SESS_TIME_MODEL
Dim dblInboundPLSQLS(999)           'Current value of inbound PL/SQL rpc elapsed time from V$SESS_TIME_MODEL
Dim dblParseTimeElapsedS(999)       'Current value of parse time elapsed from V$SESS_TIME_MODEL
Dim dblRepeatedBindS(999)           'Current value of repeated bind elapsed time from V$SESS_TIME_MODEL
Dim dblSequenceLoadS(999)           'Current value of sequence load elapsed time from V$SESS_TIME_MODEL
Dim dblSQLExecuteTimeS(999)         'Current value of sql execute elapsed time from V$SESS_TIME_MODEL

Dim dblDBCPUSLast(999)              'Last value of DB CPU from V$SESS_TIME_MODEL
Dim dblDBTimeSLast(999)             'Last value of DB time from V$SESS_TIME_MODEL
Dim dblJavaTimeSLast(999)           'Last value of Java execution elapsed time from V$SESS_TIME_MODEL
Dim dblPLSQLCompileSLast(999)       'Last value of PL/SQL compilation elapsed time from V$SESS_TIME_MODEL
Dim dblPLSQLExecutionSLast(999)     'Last value of PL/SQL execution elapsed time from V$SESS_TIME_MODEL
Dim dblRMANCPUSLast(999)            'Last value of RMAN cpu time (backup/restore) from V$SESS_TIME_MODEL
Dim dblBackgroundCPUSLast(999)      'Last value of background cpu time from V$SESS_TIME_MODEL
Dim dblBackgroundElapsedSLast(999)  'Last value of background elapsed time from V$SESS_TIME_MODEL
Dim dblConnectMgmtSLast(999)        'Last value of connection management call elapsed time from V$SESS_TIME_MODEL
Dim dblFailedParseMemorySLast(999)  'Last value of failed parse (out of shared memory) elapsed time from V$SESS_TIME_MODEL
Dim dblFailedParseElapsedSLast(999) 'Last value of failed parse elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseBindSLast(999)      'Last value of hard parse (bind mismatch) elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseSharingSLast(999)   'Last value of hard parse (sharing criteria) elapsed time from V$SESS_TIME_MODEL
Dim dblHardParseElapsedSLast(999)   'Last value of hard parse elapsed time from V$SESS_TIME_MODEL
Dim dblInboundPLSQLSLast(999)       'Last value of inbound PL/SQL rpc elapsed time from V$SESS_TIME_MODEL
Dim dblParseTimeElapsedSLast(999)   'Last value of parse time elapsed from V$SESS_TIME_MODEL
Dim dblRepeatedBindSLast(999)       'Last value of repeated bind elapsed time from V$SESS_TIME_MODEL
Dim dblSequenceLoadSLast(999)       'Last value of sequence load elapsed time from V$SESS_TIME_MODEL
Dim dblSQLExecuteTimeSLast(999)     'Last value of sql execute elapsed time from V$SESS_TIME_MODEL

Dim intWaitCount                    'Number of wait events read from the database
Dim intWaitCurrent                  'Current index of the wait event
Dim strWaitEventName(1300)          'Name of the wait event
Dim dblWaitValue(1300)              'Current wait event total time
Dim dblWaitValueLast(1300)          'Previous wait event total time
Dim dblWaitWaitsValue(1300)         'Current wait event number of waits
Dim dblWaitWaitsValueLast(1300)     'Previous wait event number of waits
Dim dblWaitTOValue(1300)            'Current wait event number of timeouts
Dim dblWaitTOValueLast(1300)        'Previous wait event number of timeouts

Set snpDataWait = CreateObject("ADODB.Recordset")
Set comDataWait = CreateObject("ADODB.Command")
Set snpDataOSStat = CreateObject("ADODB.Recordset")
Set comDataOSStat = CreateObject("ADODB.Command")
Set snpDataSysTime = CreateObject("ADODB.Recordset")
Set comDataSysTime = CreateObject("ADODB.Command")
Set snpDataSessTime = CreateObject("ADODB.Recordset")
Set comDataSessTime = CreateObject("ADODB.Command")

Set dbDatabase = CreateObject("ADODB.Connection")

strUsername = "MyUsername"
strPassword = "MyPassword"
strDatabase = "MyDB"

intCheckIterations = 20
intDelayIterations = 60
sglSessionMinimumPercent = 0.1  'A session must contribute at least 10% of the total for the time period to be included in the detail
strInd = "&nbsp;&nbsp;"

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
dbDatabase.Open

'Should verify that the connection attempt was successful, but I will leave that for someone else to code
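'One possible form of that verification (a minimal, untested sketch; the
'literal 1 below is the value of the ADO adStateOpen constant).  Note that for
'this check to fire instead of an unhandled runtime error, the On Error Resume
'Next statement would need to be moved above the dbDatabase.Open call:
If dbDatabase.State <> 1 Then
  MsgBox "Unable to connect to database " & strDatabase & " - check the username, password, and TNS name."
  WScript.Quit
End If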
On Error Resume Next  'Allow continuing the script if an error happens

With comDataWait
  strSQL = "SELECT" & vbCrLf
  strSQL = strSQL & "  EVENT," & vbCrLf
  strSQL = strSQL & "  TOTAL_WAITS," & vbCrLf
  strSQL = strSQL & "  TOTAL_TIMEOUTS," & vbCrLf
  strSQL = strSQL & "  TIME_WAITED" & vbCrLf
  strSQL = strSQL & "FROM" & vbCrLf
  strSQL = strSQL & "  V$SYSTEM_EVENT" & vbCrLf
  strSQL = strSQL & "WHERE" & vbCrLf
  strSQL = strSQL & "  WAIT_CLASS<>'Idle'" & vbCrLf
  strSQL = strSQL & "ORDER BY" & vbCrLf
  strSQL = strSQL & "  EVENT"

  .CommandText = strSQL
  .CommandType = adCmdText
  .CommandTimeout = 30
  .ActiveConnection = dbDatabase
End With

With comDataOSStat
  strSQL = "SELECT" & vbCrLf
  strSQL = strSQL & "  STAT_NAME," & vbCrLf
  strSQL = strSQL & "  VALUE" & vbCrLf
  strSQL = strSQL & "FROM" & vbCrLf
  strSQL = strSQL & "  V$OSSTAT" & vbCrLf
  strSQL = strSQL & "WHERE" & vbCrLf
  strSQL = strSQL & "  STAT_NAME IN ('NUM_CPUS','IDLE_TIME','BUSY_TIME','USER_TIME','SYS_TIME')"

  .CommandText = strSQL
  .CommandType = adCmdText
  .CommandTimeout = 30
  .ActiveConnection = dbDatabase
End With

With comDataSysTime
  strSQL = "SELECT" & vbCrLf
  strSQL = strSQL & "  VALUE," & vbCrLf
  strSQL = strSQL & "  STAT_NAME" & vbCrLf
  strSQL = strSQL & "FROM" & vbCrLf
  strSQL = strSQL & "  V$SYS_TIME_MODEL"

  .CommandText = strSQL
  .CommandType = adCmdText
  .CommandTimeout = 30
  .ActiveConnection = dbDatabase
End With

With comDataSessTime
  strSQL = "SELECT" & vbCrLf
  strSQL = strSQL & "  S.SID," & vbCrLf
  strSQL = strSQL & "  S.SERIAL#," & vbCrLf
  strSQL = strSQL & "  NVL(S.USERNAME,' ') USERNAME," & vbCrLf
  strSQL = strSQL & "  NVL(S.MACHINE,' ') MACHINE," & vbCrLf
  strSQL = strSQL & "  NVL(S.PROGRAM,' ') PROGRAM," & vbCrLf
  strSQL = strSQL & "  NVL(S.SQL_ID,' ') SQL_ID," & vbCrLf
  strSQL = strSQL & "  NVL(S.SQL_CHILD_NUMBER,0) SQL_CHILD_NUMBER," & vbCrLf
  strSQL = strSQL & "  STM.VALUE," & vbCrLf
  strSQL = strSQL & "  STM.STAT_NAME" & vbCrLf
  strSQL = strSQL & "FROM" & vbCrLf
  strSQL = strSQL & "  V$SESS_TIME_MODEL STM," & vbCrLf
  strSQL = strSQL & "  V$SESSION S" & vbCrLf
  strSQL = strSQL & "WHERE" & vbCrLf
  strSQL = strSQL & "  S.SID=STM.SID" & vbCrLf
  strSQL = strSQL & "ORDER BY" & vbCrLf
  strSQL = strSQL & "  S.USERNAME," & vbCrLf
  strSQL = strSQL & "  S.PROGRAM," & vbCrLf
  strSQL = strSQL & "  S.SID"

  .CommandText = strSQL
  .CommandType = adCmdText
  .CommandTimeout = 30
  .ActiveConnection = dbDatabase
End With

'Fire up Internet Explorer
Set objIE = CreateObject("InternetExplorer.Application")
objIE.Left = 0
objIE.Top = 0
objIE.Width = 950
objIE.Height = 800
objIE.StatusBar = False
objIE.MenuBar = False
objIE.Toolbar = False

objIE.Navigate "about:blank"
objIE.Document.Title = "Charles Hooper's Time Model Data Viewer"
objIE.Visible = True

For i = 1 To intCheckIterations
  Set snpDataOSStat = comDataOSStat.Execute
  If Not (snpDataOSStat Is Nothing) Then
    Do While Not (snpDataOSStat.EOF)
      Select Case CStr(snpDataOSStat("stat_name"))
        Case "NUM_CPUS"
          intNumCPUs = CInt(snpDataOSStat("value"))
        Case "IDLE_TIME"
          dblIdleTimeLast = dblIdleTime
          dblIdleTime = CDbl(snpDataOSStat("value"))
        Case "BUSY_TIME"
          dblBusyTimeLast = dblBusyTime
          dblBusyTime = CDbl(snpDataOSStat("value"))
        Case "USER_TIME"
          dblUserTimeLast = dblUserTime
          dblUserTime = CDbl(snpDataOSStat("value"))
        Case "SYS_TIME"
          dblSysTimeLast = dblSysTime
          dblSysTime = CDbl(snpDataOSStat("value"))
      End Select

      snpDataOSStat.MoveNext
    Loop
  End If

  Set snpDataWait = comDataWait.Execute
  If Not (snpDataWait Is Nothing) Then
    Do While Not (snpDataWait.EOF)
      intWaitCurrent = intWaitCount + 1
      'Find the previous entry for this wait event
      For j = 1 To intWaitCount
        If strWaitEventName(j) = CStr(snpDataWait("event")) Then
          intWaitCurrent = j
          Exit For
        End If
      Next
      If intWaitCurrent = intWaitCount + 1 Then
        'New entry
        intWaitCount = intWaitCount + 1
        strWaitEventName(intWaitCurrent) = CStr(snpDataWait("event"))
      End If
      dblWaitValueLast(intWaitCurrent) = dblWaitValue(intWaitCurrent)
      dblWaitValue(intWaitCurrent) = CDbl(snpDataWait("time_waited"))
      dblWaitWaitsValueLast(intWaitCurrent) = dblWaitWaitsValue(intWaitCurrent)
      dblWaitWaitsValue(intWaitCurrent) = CDbl(snpDataWait("total_waits"))
      dblWaitTOValueLast(intWaitCurrent) = dblWaitTOValue(intWaitCurrent)
      dblWaitTOValue(intWaitCurrent) = CDbl(snpDataWait("total_timeouts"))

      snpDataWait.MoveNext
    Loop
  End If

  Set snpDataSysTime = comDataSysTime.Execute
  If Not (snpDataSysTime Is Nothing) Then
    Do While Not (snpDataSysTime.EOF)
      Select Case CStr(snpDataSysTime("stat_name"))
        Case "DB CPU"
          dblDBCPULast = dblDBCPU
          dblDBCPU = CDbl(snpDataSysTime("value"))
        Case "DB time"
          dblDBTimeLast = dblDBTime
          dblDBTime = CDbl(snpDataSysTime("value"))
        Case "Java execution elapsed time"
          dblJavaTimeLast = dblJavaTime
          dblJavaTime = CDbl(snpDataSysTime("value"))
        Case "PL/SQL compilation elapsed time"
          dblPLSQLCompileLast = dblPLSQLCompile
          dblPLSQLCompile = CDbl(snpDataSysTime("value"))
        Case "PL/SQL execution elapsed time"
          dblPLSQLExecutionLast = dblPLSQLExecution
          dblPLSQLExecution = CDbl(snpDataSysTime("value"))
        Case "RMAN cpu time (backup/restore)"
          dblRMANCPULast = dblRMANCPU
          dblRMANCPU = CDbl(snpDataSysTime("value"))
        Case "background cpu time"
          dblBackgroundCPULast = dblBackgroundCPU
          dblBackgroundCPU = CDbl(snpDataSysTime("value"))
        Case "background elapsed time"
          dblBackgroundElapsedLast = dblBackgroundElapsed
          dblBackgroundElapsed = CDbl(snpDataSysTime("value"))
        Case "connection management call elapsed time"
          dblConnectMgmtLast = dblConnectMgmt
          dblConnectMgmt = CDbl(snpDataSysTime("value"))
        Case "failed parse (out of shared memory) elapsed time"
          dblFailedParseMemoryLast = dblFailedParseMemory
          dblFailedParseMemory = CDbl(snpDataSysTime("value"))
        Case "failed parse elapsed time"
          dblFailedParseElapsedLast = dblFailedParseElapsed
          dblFailedParseElapsed = CDbl(snpDataSysTime("value"))
        Case "hard parse (bind mismatch) elapsed time"
          dblHardParseBindLast = dblHardParseBind
          dblHardParseBind = CDbl(snpDataSysTime("value"))
        Case "hard parse (sharing criteria) elapsed time"
          dblHardParseSharingLast = dblHardParseSharing
          dblHardParseSharing = CDbl(snpDataSysTime("value"))
        Case "hard parse elapsed time"
          dblHardParseElapsedLast = dblHardParseElapsed
          dblHardParseElapsed = CDbl(snpDataSysTime("value"))
        Case "inbound PL/SQL rpc elapsed time"
          dblInboundPLSQLLast = dblInboundPLSQL
          dblInboundPLSQL = CDbl(snpDataSysTime("value"))
        Case "parse time elapsed"
          dblParseTimeElapsedLast = dblParseTimeElapsed
          dblParseTimeElapsed = CDbl(snpDataSysTime("value"))
        Case "repeated bind elapsed time"
          dblRepeatedBindLast = dblRepeatedBind
          dblRepeatedBind = CDbl(snpDataSysTime("value"))
        Case "sequence load elapsed time"
          dblSequenceLoadLast = dblSequenceLoad
          dblSequenceLoad = CDbl(snpDataSysTime("value"))
        Case "sql execute elapsed time"
          dblSQLExecuteTimeLast = dblSQLExecuteTime
          dblSQLExecuteTime = CDbl(snpDataSysTime("value"))
      End Select

      snpDataSysTime.MoveNext
    Loop
  End If

  For j = 1 To intSessionCount
    intSessionExists(j) = False
  Next
  Set snpDataSessTime = comDataSessTime.Execute
  If Not (snpDataSessTime Is Nothing) Then
    Do While Not (snpDataSessTime.EOF)
      'Find the matching session's previous statistics
      If (lngSIDLast <> CLng(snpDataSessTime("sid"))) Or (lngSerialLast <> CLng(snpDataSessTime("serial#"))) Then
        'This is a different session, see if the session was previously captured
        lngSIDLast = CLng(snpDataSessTime("sid"))
        lngSerialLast = CLng(snpDataSessTime("serial#"))

        intSessionCurrent = intSessionCount + 1
        For j = 1 To intSessionCount
          If (lngSID(j) = CLng(snpDataSessTime("sid"))) And (lngSerial(j) = CLng(snpDataSessTime("serial#"))) Then
            intSessionCurrent = j
            Exit For
          End If
        Next
        If intSessionCurrent = intSessionCount + 1 Then
          intSessionCount = intSessionCount + 1
          lngSID(intSessionCurrent) = CLng(snpDataSessTime("sid"))
          lngSerial(intSessionCurrent) = CLng(snpDataSessTime("serial#"))
          strSessionOther(intSessionCurrent) = CStr(snpDataSessTime("machine")) & " ~ " & _
             CStr(snpDataSessTime("username")) & " ~ " & _
             CStr(snpDataSessTime("program")) & " ~ "
          If snpDataSessTime("sql_id") <> " " Then
            strSessionOther(intSessionCurrent) = strSessionOther(intSessionCurrent) & "SQL_ID/Child: " & _
              CStr(snpDataSessTime("sql_id")) & "/" & CStr(snpDataSessTime("sql_child_number"))
          End If
        End If
      End If

      intSessionExists(intSessionCurrent) = True
      Select Case CStr(snpDataSessTime("stat_name"))
        Case "DB CPU"
          dblDBCPUSLast(intSessionCurrent) = dblDBCPUS(intSessionCurrent)
          dblDBCPUS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "DB time"
          dblDBTimeSLast(intSessionCurrent) = dblDBTimeS(intSessionCurrent)
          dblDBTimeS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "Java execution elapsed time"
          dblJavaTimeSLast(intSessionCurrent) = dblJavaTimeS(intSessionCurrent)
          dblJavaTimeS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "PL/SQL compilation elapsed time"
          dblPLSQLCompileSLast(intSessionCurrent) = dblPLSQLCompileS(intSessionCurrent)
          dblPLSQLCompileS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "PL/SQL execution elapsed time"
          dblPLSQLExecutionSLast(intSessionCurrent) = dblPLSQLExecutionS(intSessionCurrent)
          dblPLSQLExecutionS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "RMAN cpu time (backup/restore)"
          dblRMANCPUSLast(intSessionCurrent) = dblRMANCPUS(intSessionCurrent)
          dblRMANCPUS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "background cpu time"
          dblBackgroundCPUSLast(intSessionCurrent) = dblBackgroundCPUS(intSessionCurrent)
          dblBackgroundCPUS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "background elapsed time"
          dblBackgroundElapsedSLast(intSessionCurrent) = dblBackgroundElapsedS(intSessionCurrent)
          dblBackgroundElapsedS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "connection management call elapsed time"
          dblConnectMgmtSLast(intSessionCurrent) = dblConnectMgmtS(intSessionCurrent)
          dblConnectMgmtS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "failed parse (out of shared memory) elapsed time"
          dblFailedParseMemorySLast(intSessionCurrent) = dblFailedParseMemoryS(intSessionCurrent)
          dblFailedParseMemoryS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "failed parse elapsed time"
          dblFailedParseElapsedSLast(intSessionCurrent) = dblFailedParseElapsedS(intSessionCurrent)
          dblFailedParseElapsedS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "hard parse (bind mismatch) elapsed time"
          dblHardParseBindSLast(intSessionCurrent) = dblHardParseBindS(intSessionCurrent)
          dblHardParseBindS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "hard parse (sharing criteria) elapsed time"
          dblHardParseSharingSLast(intSessionCurrent) = dblHardParseSharingS(intSessionCurrent)
          dblHardParseSharingS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "hard parse elapsed time"
          dblHardParseElapsedSLast(intSessionCurrent) = dblHardParseElapsedS(intSessionCurrent)
          dblHardParseElapsedS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "inbound PL/SQL rpc elapsed time"
          dblInboundPLSQLSLast(intSessionCurrent) = dblInboundPLSQLS(intSessionCurrent)
          dblInboundPLSQLS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "parse time elapsed"
          dblParseTimeElapsedSLast(intSessionCurrent) = dblParseTimeElapsedS(intSessionCurrent)
          dblParseTimeElapsedS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "repeated bind elapsed time"
          dblRepeatedBindSLast(intSessionCurrent) = dblRepeatedBindS(intSessionCurrent)
          dblRepeatedBindS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "sequence load elapsed time"
          dblSequenceLoadSLast(intSessionCurrent) = dblSequenceLoadS(intSessionCurrent)
          dblSequenceLoadS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
        Case "sql execute elapsed time"
          dblSQLExecuteTimeSLast(intSessionCurrent) = dblSQLExecuteTimeS(intSessionCurrent)
          dblSQLExecuteTimeS(intSessionCurrent) = CDbl(snpDataSessTime("value"))
      End Select

      snpDataSessTime.MoveNext
    Loop
  End If

  dteLastLoopStart = Now
  intDataChanged = True

  'Uncomment the following line if you would like for the session detail to be collapsed on each refresh
  'intDisplaySessionDetail = False

  Do While DateDiff("s", dteLastLoopStart, Now) < intDelayIterations
    'Remain in this loop until intDelayIterations seconds have elapsed
    intFlag = 0
    If intDataChanged = True Then
      'Update the web page
      strHTML = ""

      strHTML = strHTML & "<form name=""OracleTimeModel"">" & vbCrLf
      'strHTML = strHTML & "<input type=text id=divStatus name=divStatus value="" "" size=50 disabled=true><br />" & vbCrLf
      strHTML = strHTML & "<input type=hidden id=txtOK value="" "">" & vbCrLf
      strHTML = strHTML & "<input type=button value=""Re-Query"" id=cmdQuery onclick=""document.getElementById('txtOK').value='QUERY';"">" & vbCrLf
      strHTML = strHTML & "<input type=button value=""Show Detail"" id=cmdShowDetail onclick=""document.getElementById('txtOK').value='DETAIL';"">" & vbCrLf
      strHTML = strHTML & "<input type=button value=""Close"" id=cmdClose onclick=""document.getElementById('txtOK').value='CLOSE';"">" & vbCrLf
      strHTML = strHTML & "</form>" & vbCrLf

      strHTML = strHTML & "<table border=""1"" width=""500"" style=""font-family: Courier New; font-size: 8pt"">" & vbCrLf
      strHTML = strHTML & "<tr><td bgcolor=""#11AAFF"">CPUs</td><td bgcolor=""#11AAFF"">Busy Time</td><td bgcolor=""#11AAFF"">Idle Time</td>" & _
        "<td bgcolor=""#11AAFF"">User Mode</td><td bgcolor=""#11AAFF"">Kernel Mode</td></tr>" & vbCrLf
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber(intNumCPUs, 0) & "</td>"
      strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblBusyTime - dblBusyTimeLast) / 100, 2) & "</td>"
      strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblIdleTime - dblIdleTimeLast) / 100, 2) & "</td>"
      strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblUserTime - dblUserTimeLast) / 100, 2) & "</td>"
      strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblSysTime - dblSysTimeLast) / 100, 2) & "</td></tr>"
      strHTML = strHTML & "</table><p>" & vbCrLf

      strHTML = strHTML & "<table border=""1"" width=""900"" style=""font-family: Courier New; font-size: 8pt"">" & vbCrLf
      strHTML = strHTML & "<tr><td bgcolor=""#11AAFF"">Value</td><td bgcolor=""#11AAFF"" colspan=""5"">Statistic Name</td></tr>" & vbCrLf
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblBackgroundElapsed - dblBackgroundElapsedLast) / 1000000, 2) & "</td><td colspan=""5"">Background Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblBackgroundElapsed - dblBackgroundElapsedLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblBackgroundElapsedS(j) - dblBackgroundElapsedSLast(j)) / (dblBackgroundElapsed - dblBackgroundElapsedLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblBackgroundElapsedS(j) - dblBackgroundElapsedSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblBackgroundCPU - dblBackgroundCPULast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Background CPU Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblBackgroundCPU - dblBackgroundCPULast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblBackgroundCPUS(j) - dblBackgroundCPUSLast(j)) / (dblBackgroundCPU - dblBackgroundCPULast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblBackgroundCPUS(j) - dblBackgroundCPUSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblRMANCPU - dblRMANCPULast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & "RMAN CPU Time (Backup Restore)</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblRMANCPU - dblRMANCPULast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblRMANCPUS(j) - dblRMANCPUSLast(j)) / (dblRMANCPU - dblRMANCPULast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblRMANCPUS(j) - dblRMANCPUSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblDBTime - dblDBTimeLast) / 1000000, 2) & "</td><td colspan=""5"">DB Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblDBTime - dblDBTimeLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblDBTimeS(j) - dblDBTimeSLast(j)) / (dblDBTime - dblDBTimeLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblDBTimeS(j) - dblDBTimeSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblDBCPU - dblDBCPULast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "DB CPU</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblDBCPU - dblDBCPULast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblDBCPUS(j) - dblDBCPUSLast(j)) / (dblDBCPU - dblDBCPULast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblDBCPUS(j) - dblDBCPUSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblConnectMgmt - dblConnectMgmtLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Connection Management Call Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblConnectMgmt - dblConnectMgmtLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblConnectMgmtS(j) - dblConnectMgmtSLast(j)) / (dblConnectMgmt - dblConnectMgmtLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblConnectMgmtS(j) - dblConnectMgmtSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblSequenceLoad - dblSequenceLoadLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Sequence Load Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblSequenceLoad - dblSequenceLoadLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblSequenceLoadS(j) - dblSequenceLoadSLast(j)) / (dblSequenceLoad - dblSequenceLoadLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblSequenceLoadS(j) - dblSequenceLoadSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblSQLExecuteTime - dblSQLExecuteTimeLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "SQL Execute Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblSQLExecuteTime - dblSQLExecuteTimeLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblSQLExecuteTimeS(j) - dblSQLExecuteTimeSLast(j)) / (dblSQLExecuteTime - dblSQLExecuteTimeLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblSQLExecuteTimeS(j) - dblSQLExecuteTimeSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblParseTimeElapsed - dblParseTimeElapsedLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Parse Time Elapsed</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblParseTimeElapsed - dblParseTimeElapsedLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblParseTimeElapsedS(j) - dblParseTimeElapsedSLast(j)) / (dblParseTimeElapsed - dblParseTimeElapsedLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblParseTimeElapsedS(j) - dblParseTimeElapsedSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblHardParseElapsed - dblHardParseElapsedLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & "Hard Parse Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblHardParseElapsed - dblHardParseElapsedLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblHardParseElapsedS(j) - dblHardParseElapsedSLast(j)) / (dblHardParseElapsed - dblHardParseElapsedLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblHardParseElapsedS(j) - dblHardParseElapsedSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblHardParseSharing - dblHardParseSharingLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & strInd & "Hard Parse (Sharing Criteria) Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblHardParseSharing - dblHardParseSharingLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblHardParseSharingS(j) - dblHardParseSharingSLast(j)) / (dblHardParseSharing - dblHardParseSharingLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblHardParseSharingS(j) - dblHardParseSharingSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblHardParseBind - dblHardParseBindLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & strInd & strInd & "Hard Parse (Bind Mismatch) Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblHardParseBind - dblHardParseBindLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblHardParseBindS(j) - dblHardParseBindSLast(j)) / (dblHardParseBind - dblHardParseBindLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblHardParseBindS(j) - dblHardParseBindSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblFailedParseElapsed - dblFailedParseElapsedLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & "Failed Parse Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblFailedParseElapsed - dblFailedParseElapsedLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblFailedParseElapsedS(j) - dblFailedParseElapsedSLast(j)) / (dblFailedParseElapsed - dblFailedParseElapsedLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblFailedParseElapsedS(j) - dblFailedParseElapsedSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblFailedParseMemory - dblFailedParseMemoryLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & strInd & strInd & "Failed Parse (Out of Shared Memory) Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblFailedParseMemory - dblFailedParseMemoryLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblFailedParseMemoryS(j) - dblFailedParseMemorySLast(j)) / (dblFailedParseMemory - dblFailedParseMemoryLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblFailedParseMemoryS(j) - dblFailedParseMemorySLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblPLSQLExecution - dblPLSQLExecutionLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "PL/SQL Execution Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblPLSQLExecution - dblPLSQLExecutionLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblPLSQLExecutionS(j) - dblPLSQLExecutionSLast(j)) / (dblPLSQLExecution - dblPLSQLExecutionLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblPLSQLExecutionS(j) - dblPLSQLExecutionSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblInboundPLSQL - dblInboundPLSQLLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Inbound PL/SQL RPC Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblInboundPLSQL - dblInboundPLSQLLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblInboundPLSQLS(j) - dblInboundPLSQLSLast(j)) / (dblInboundPLSQL - dblInboundPLSQLLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblInboundPLSQLS(j) - dblInboundPLSQLSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblPLSQLCompile - dblPLSQLCompileLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "PL/SQL Compilation Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblPLSQLCompile - dblPLSQLCompileLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblPLSQLCompileS(j) - dblPLSQLCompileSLast(j)) / (dblPLSQLCompile - dblPLSQLCompileLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblPLSQLCompileS(j) - dblPLSQLCompileSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblJavaTime - dblJavaTimeLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Java Execution Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblJavaTime - dblJavaTimeLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblJavaTimeS(j) - dblJavaTimeSLast(j)) / (dblJavaTime - dblJavaTimeLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblJavaTimeS(j) - dblJavaTimeSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "<tr><td><p align=""right"">" & FormatNumber((dblRepeatedBind - dblRepeatedBindLast) / 1000000, 2) & "</td><td colspan=""5"">" & strInd & "Repeated Bind Elapsed Time</td></tr>" & vbCrLf
      If (intDisplaySessionDetail = True) And ((dblRepeatedBind - dblRepeatedBindLast) <> 0) Then
        For j = 1 To intSessionCount
          If intSessionExists(j) = True Then
            If (dblRepeatedBindS(j) - dblRepeatedBindSLast(j)) / (dblRepeatedBind - dblRepeatedBindLast) >= sglSessionMinimumPercent Then
              strHTML = strHTML & "<tr bgcolor=""#FFFF88""><td>&nbsp;</td><td colspan=""1"">&nbsp;</td><td colspan=""1""><p align=""right"">" & FormatNumber((dblRepeatedBindS(j) - dblRepeatedBindSLast(j)) / 1000000, 2) & "</td><td>SID: " & CStr(lngSID(j)) & "</td><td>Serial #: " & CStr(lngSerial(j)) & "</td><td>" & strSessionOther(j) & "</td></tr>" & vbCrLf
            End If
          End If
        Next
      End If
      strHTML = strHTML & "</table><p>" & vbCrLf
      strHTML = strHTML & "<table border=""1"" width=""500"" style=""font-family: Courier New; font-size: 8pt"">" & vbCrLf
      strHTML = strHTML & "<tr><td bgcolor=""#11AAFF"">Wait Event Name</td><td bgcolor=""#11AAFF"">Wait Time</td><td bgcolor=""#11AAFF"">Waits</td><td bgcolor=""#11AAFF"">Timeouts</td></tr>" & vbCrLf
      For j = 1 To intWaitCount
        If (dblWaitValue(j) - dblWaitValueLast(j)) <> 0 Then
          strHTML = strHTML & "<tr><td>" & strWaitEventName(j) & "</td>"
          strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblWaitValue(j) - dblWaitValueLast(j)) / 100, 2) & "</td>"
          strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblWaitWaitsValue(j) - dblWaitWaitsValueLast(j)), 0) & "</td>"
          strHTML = strHTML & "<td><p align=""right"">" & FormatNumber((dblWaitTOValue(j) - dblWaitTOValueLast(j)), 0) & "</td></tr>"
        End If
      Next
      strHTML = strHTML & "</table>" & vbCrLf

      objIE.Document.Body.InnerHTML = strHTML
      intDataChanged = False
    End If

    'Put the VBS script to sleep for 1/2 second to avoid hammering the CPUs
    Wscript.Sleep 500

    If objIE Is Nothing Then
      'User closed the Window
      intFlag = -1
    Else
      If objIE.Document.All.txtOK.Value <> " " Then
        Select Case objIE.Document.All.txtOK.Value
          Case "QUERY"
            intFlag = 1
            objIE.Document.All.txtOK.Value = " "
          Case "DETAIL"
            intFlag = 2
            If intDisplaySessionDetail = True Then
              intDisplaySessionDetail = False
            Else
              intDisplaySessionDetail = True
            End If
            intDataChanged = True
            objIE.Document.All.txtOK.Value = " "
          Case "CLOSE"
            intFlag = -1
            objIE.Document.All.txtOK.Value = " "
            objIE.Quit
        End Select
      End If
    End If

    If Abs(intFlag) = 1 Then
      Exit Do
    End If
  Loop

  If intFlag = -1 Then
    Exit For
  End If
Next

dbDatabase.Close

Set snpDataWait = Nothing
Set comDataWait = Nothing
Set snpDataOSStat = Nothing
Set comDataOSStat = Nothing
Set snpDataSysTime = Nothing
Set comDataSysTime = Nothing
Set snpDataSessTime = Nothing
Set comDataSessTime = Nothing
Set dbDatabase = Nothing
Set objIE = Nothing

Easy, right? 

There is no need to stop at this level: dig into the session-level wait events, enable a 10046 trace for sessions, set up the script to send an email alert when a session consumes more than 10% of the server's capacity, … and, most importantly, have fun.
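As a starting point for the 10046 trace suggestion, a session identified by the script (the SID and serial number are already captured in the lngSID(j) and lngSerial(j) arrays) can be traced from another session using the DBMS_MONITOR package (available since Oracle 10.1). This is just a sketch; the SID and serial number values below are placeholders that would be replaced with values reported by the script:

```sql
-- Enable a 10046 extended SQL trace for the session with SID 123,
-- serial# 456, capturing wait events but not bind variable values
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE)

-- Later, stop the trace for that session
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456)
```

The resulting trace file is written to the directory indicated by the USER_DUMP_DEST (or, in 11g and later, DIAGNOSTIC_DEST) parameter, ready for review or processing with TKPROF.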