Invalid Hints are Silently Ignored? An Invalid USE_HASH Hint Transforms a Sort-Merge Join into a Nested Loops Join

September 30, 2011

How many times have you heard or read that the Oracle Database query optimizer may freely ignore hints if it so chooses?  How many times have you heard or read that the query optimizer silently ignores hints that are invalid?  I was recently reminded of both of these concepts while reading the book “Oracle Database 11g Performance Tuning Recipes”.

Page 494 of the book states:

“Tip: Hints with incorrect or improper syntax are ignored by the optimizer.”

Page 496 of the book states:

“Note: Hints influence the optimizer, but the optimizer may still choose to ignore any hints specified in the query.”

Hints with incorrect syntax may well be treated as invalid by the optimizer, but the resulting outcome can be much worse (or better) than the hint simply being ignored, as demonstrated in this blog article.  As described in another blog article, hints are directives and must be obeyed unless…

I thought that I would craft an interesting, yet simple test case to see if invalid hints are silently ignored.  Here is the test case script:

SET AUTOTRACE OFF

DROP TABLE T1 PURGE;
DROP TABLE T2 PURGE;

CREATE TABLE T1 AS
SELECT
  ROWNUM RN
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

CREATE TABLE T2 AS
SELECT
  ROWNUM RN
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>NULL,TABNAME=>'T1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>NULL,TABNAME=>'T2')

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'USE_HASH_10053';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  *
FROM
  T1,
  T2
WHERE
  T1.RN>T2.RN;

SELECT /*+ USE_HASH(T1 T2) */
  *
FROM
  T1,
  T2
WHERE
  T1.RN>T2.RN;

SET AUTOTRACE OFF

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

The following shows the output that I received with Oracle Database 11.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1,
  5    T2
  6  WHERE
  7    T1.RN>T2.RN;

Execution Plan
----------------------------------------------------------
Plan hash value: 412793182

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  4950 | 29700 |     9  (34)| 00:00:01 |
|   1 |  MERGE JOIN         |      |  4950 | 29700 |     9  (34)| 00:00:01 |
|   2 |   SORT JOIN         |      |   100 |   300 |     4  (25)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|*  4 |   SORT JOIN         |      |   100 |   300 |     4  (25)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T2   |   100 |   300 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(INTERNAL_FUNCTION("T1"."RN")>INTERNAL_FUNCTION("T2"."RN"))
       filter(INTERNAL_FUNCTION("T1"."RN")>INTERNAL_FUNCTION("T2"."RN"))

SQL> SELECT /*+ USE_HASH(T1 T2) */
  2    *
  3  FROM
  4    T1,
  5    T2
  6  WHERE
  7    T1.RN>T2.RN;

Execution Plan
----------------------------------------------------------
Plan hash value: 1967407726

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  4950 | 29700 |   144   (5)| 00:00:01 |
|   1 |  NESTED LOOPS      |      |  4950 | 29700 |   144   (5)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |    50 |   150 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."RN">"T2"."RN")

So, the USE_HASH hint, which is invalid because a hash join cannot be used when the join condition is an inequality, causes the optimizer to switch from a sort-merge join to a nested loops join.  Does this mean that invalid hints are potentially NOT silently ignored?
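As an aside, the optimizer's 4,950 row estimate for the inequality join happens to equal the exact number of qualifying row pairs for this data set: with RN values 1 through 100 in each table, the pairs satisfying T1.RN > T2.RN are the ways of picking two distinct values out of 100 (the larger value playing the T1 role).  A quick sanity check, with Python used purely as a calculator:

```python
# Count the (t1, t2) pairs with t1.rn > t2.rn for rn values 1..100,
# mirroring the inequality join predicate in the test case above.
pairs = sum(1 for a in range(1, 101) for b in range(1, 101) if a > b)

# Closed form: choose 2 distinct values out of 100; the larger one
# is the T1 row, so the count is 100 * 99 / 2.
closed_form = 100 * 99 // 2

print(pairs, closed_form)  # both are 4950, matching the plan's Rows column
```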





Having Fun with ORA-00600 (qkebCreateConstantOpn1) – What is the Cause of this Error?

September 28, 2011

A friend brought an interesting Oracle Database 11.2.0.2 problem to my attention. He is in the process of upgrading to a new Oracle Database server (and Oracle Database version) and a new version of an ERP package. He has experienced an ORA-00600 error with both patch 6 and patch 9 for 11.2.0.2 on 64-bit Windows after rolling forward the database table definitions for the new version of the ERP package and then testing the ERP functionality (all in a test environment, of course).

I have not seen the most recent table modification scripts first hand, but upgrade scripts for the earlier ERP versions exercised good habits of declaring column constraints and default column values (as well as triggers to make certain that SQL script writers do not get too carried away).

So, what should be done to troubleshoot this problem: call Oracle Support because an ORA-00600 error was returned, or poke around a little to see if we can determine what is happening? Since I am not able to touch the database to investigate the problem freely, I asked for a 10046 trace at level 12 to be generated. Below is a portion of that trace file (the portion in bold was actually executed by the client application, while the rest is at dep=1, in this case internal SQL used for query optimization):

PARSING IN CURSOR #386947304 len=42 dep=1 uid=0 oct=3 lid=0 tim=63747639340 hv=844002283 ad='314cbd908' sqlid='ftj9uawt4wwzb'
select condition from cdef$ where rowid=:1
END OF STMT
PARSE #386947304:c=0,e=16,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63747639340
BINDS #386947304:
 Bind#0
  oacdty=11 mxl=16(16) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=00 csi=00 siz=16 off=0
  kxsbbbfp=1c28c668  bln=16  avl=16  flg=05
  value=0000A680.003D.0001
EXEC #386947304:c=0,e=32,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63747639412
FETCH #386947304:c=0,e=8,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=4,plh=1884964958,tim=63747639432
STAT #386947304 id=1 cnt=1 pid=0 pos=1 obj=31 op='TABLE ACCESS BY USER ROWID CDEF$ (cr=1 pr=0 pw=0 time=4 us cost=1 size=15 card=1)'
CLOSE #386947304:c=0,e=21,dep=1,type=0,tim=63747639466
...
...
...
PARSING IN CURSOR #386943464 len=58 dep=0 uid=36 oct=6 lid=36 tim=63750497441 hv=1532843717 ad='3144ac3d8' sqlid='9m0sbg1dpupq5' 
update PART set mrp_required = :1     where id = :2       
END OF STMT 
PARSE #386943464:c=0,e=121,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=63750497440 
PARSE #386947304:c=0,e=10,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63750497602
BINDS #386947304:
 Bind#0
  oacdty=11 mxl=16(16) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=00 csi=00 siz=16 off=0
  kxsbbbfp=1edffbd8  bln=16  avl=16  flg=05
  value=0000A7B3.0019.0001
EXEC #386947304:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63750497664
FETCH #386947304:c=0,e=9,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=4,plh=1884964958,tim=63750497684
CLOSE #386947304:c=0,e=8,dep=1,type=3,tim=63750497704
=====================
PARSING IN CURSOR #517995552 len=98 dep=1 uid=0 oct=3 lid=0 tim=63750497798 hv=3044436590 ad='2f0e99070' sqlid='dha5nv6urcumf'
select binaryDefVal,           lengthb(binaryDefVal) from ecol$ where tabobj# = :1 and colnum = :2
END OF STMT
PARSE #517995552:c=0,e=19,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3081038021,tim=63750497797
BINDS #517995552:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=1edfe758  bln=22  avl=04  flg=05
  value=18450
 Bind#1
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=1edfe770  bln=22  avl=03  flg=01
  value=141
EXEC #517995552:c=0,e=56,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3081038021,tim=63750497891
FETCH #517995552:c=0,e=14,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=4,plh=3081038021,tim=63750497913
STAT #517995552 id=1 cnt=1 pid=0 pos=1 obj=123 op='TABLE ACCESS BY INDEX ROWID ECOL$ (cr=2 pr=0 pw=0 time=11 us cost=2 size=95 card=1)'
STAT #517995552 id=2 cnt=1 pid=1 pos=1 obj=126 op='INDEX RANGE SCAN ECOL_IX1 (cr=1 pr=0 pw=0 time=8 us cost=1 size=0 card=1)'
CLOSE #517995552:c=0,e=1,dep=1,type=3,tim=63750497955
PARSE #386947304:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63750498060
BINDS #386947304:
 Bind#0
  oacdty=11 mxl=16(16) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=00 csi=00 siz=16 off=0
  kxsbbbfp=1edfe360  bln=16  avl=16  flg=05
  value=0000A7B3.0019.0001
EXEC #386947304:c=0,e=34,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1884964958,tim=63750498120
FETCH #386947304:c=0,e=9,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=4,plh=1884964958,tim=63750498140
CLOSE #386947304:c=0,e=7,dep=1,type=3,tim=63750498159
Symbol file c:\Oracle\product\11.2.0\dbhome_1\BIN\orannzsbb11.SYM does not match binary.
 Symbol TimeStamp=4cea6d1b, Module TimeStamp=4d6f3707 are different
Incident 4193 created, dump file: C:\ORACLE\diag\rdbms\test\test\incident\incdir_4193\test_ora_3796_i4193.trc
ORA-00600: internal error code, arguments: [qkebCreateConstantOpn1], [], [], [], [], [], [], [], [], [], [], []

* BLOCK CHECKING IS NOT DONE-Call Heap not initialized.
* Skip checking block dba=0x2402d02.
* BLOCK CHECKING IS NOT DONE-Call Heap not initialized.
* Skip checking block dba=0x340660f.
WAIT #386943464: nam='SQL*Net break/reset to client' ela= 2 driver id=1413697536 break?=1 p3=0 obj#=17988 tim=63752354884
WAIT #386943464: nam='SQL*Net break/reset to client' ela= 469 driver id=1413697536 break?=0 p3=0 obj#=17988 tim=63752355367
WAIT #386943464: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=17988 tim=63752355385
WAIT #386943464: nam='SQL*Net message from client' ela= 7198 driver id=1413697536 #bytes=1 p3=0 obj#=17988 tim=63752362607
WAIT #0: nam='ADR file lock' ela= 3  =0  =0  =0 obj#=17988 tim=63752362675
WAIT #0: nam='ADR file lock' ela= 67  =0  =0  =0 obj#=17988 tim=63752362763
WAIT #0: nam='ADR block file read' ela= 146  =0  =0  =0 obj#=17988 tim=63752363029
WAIT #0: nam='ADR block file write' ela= 392  =0  =0  =0 obj#=17988 tim=63752363449
XCTEND rlbk=1, rd_only=0, tim=63752364013
WAIT #0: nam='log file sync' ela= 115 buffer#=1552 sync scn=251585418 p3=0 obj#=17988 tim=63752364516
WAIT #0: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=17988 tim=63752364548

Interesting, maybe?  I ran a test in an older version of the ERP package, also using Oracle Database 11.2.0.2.  My test did not generate an ORA-00600 error, but I did notice something a bit interesting: my trace file did not contain a single SQL statement similar to the following, while the trace file that was sent to me did:

select binaryDefVal, lengthb(binaryDefVal) from ecol$ where tabobj# = :1 and colnum = :2

ECOL$?  Anything found by Google?  OK, two articles; I will briefly quote one of them for dramatic effect:

“11G introduced a fast=true feature for adding columns with default values. In 11G when you want to add a not null column with a default value, the operation completes in the blink of an eye (if there are no locks on the table of course). Now it does not have to update all the rows, it just keeps the information as meta-data in the dictionary. So adding a new column becomes just a few dictionary operations.”

I suspect that the ERP upgrade script might have added NOT NULL columns with default values… so let’s take a tour through Metalink (MOS) to see if anything interesting is found:

  • Doc ID 1301504.1: “ORA-00600: internal error code, arguments: [qkebCreateConstantOpn1] ON UPDATES ON TABLES WITH CONSTRAINTS”
  • Bug 12316092: “ORA-600 [QKEBCREATECONSTANTOPN1] AFTER ADDING NOT NULL COLUMN WITH DEFAULT”
  • Bug 12760255: “ORA-00600 INTERNAL ERROR CODE, ARGUMENTS [QKEBCREATECONSTANTOPN1] ON SELECT STAT”
  • Doc ID 8501439.8, Bug 8501439: “OERI[kghfrh:ds] / [kghfrempty:ds] / [17147] ORA-7445 [memcpy] with ANALYZE in a table with added column”
  • Doc ID 1106553.1: “Wrong Result For Added Column After Table Creation in 11g”
  • Bug 11865621: “MERGE ON TABLE WITH COL ADDED WHEN _ADD_COL_OPTIM_ENABLED=TRUE SPINS ON CPU”
  • Bug 10261680: “QUERY RETURNING WRONG RESULT WITH PARALLEL ENABLED”
  • Doc ID 10080167.8, Bug 10080167: “ORA-600 [15599] / dump [kghalf] from TRIGGER with WHEN clause”

OK, so through Metalink I located several potential problems that might be related to the problem that was brought to my attention, some of which might be fixed in Oracle Database 11.2.0.3 or a later release (the first two seem to directly pertain to this issue).  It appears that quite a few of the bugs are triggered by the optimization (the fast=true reference in the above quote) that was introduced in Oracle Database 11.1.

What might be the work-around for this particular problem (mostly rough guesses at this point – pick one):

  • Wait for the release of Oracle Database 11.2.0.3 on the Windows platform… or 12.1.
  • Execute “ALTER SYSTEM SET "_ADD_COL_OPTIM_ENABLED"=FALSE;” before running the scripts that add the additional columns to the tables.
  • After running the ERP upgrade scripts, export the database, drop the database, create a new blank database, and then import the exported file.

Any other options?
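On the diagnostic side, before (or while) engaging Oracle Support, it can help to mechanically pull the recursive (dep=1) statements out of a 10046 trace to see which dictionary objects the optimizer touched just before the error.  A rough sketch in Python (the helper name and sample lines below are illustrative only, not part of any Oracle utility):

```python
def recursive_statements(trace_lines):
    """Return the SQL text of each dep=1 PARSING IN CURSOR entry
    found in an iterable of 10046 trace file lines."""
    stmts = []
    capture = False
    buf = []
    for line in trace_lines:
        if line.startswith('PARSING IN CURSOR') and 'dep=1' in line:
            capture = True      # header for a recursive statement
            buf = []
        elif line.startswith('END OF STMT'):
            if capture:
                stmts.append('\n'.join(buf))
            capture = False
        elif capture:
            buf.append(line.rstrip('\n'))   # SQL text spans until END OF STMT
    return stmts

# Abbreviated sample lines modeled on the trace excerpt above
sample = [
    "PARSING IN CURSOR #386947304 len=42 dep=1 uid=0 oct=3 lid=0",
    "select condition from cdef$ where rowid=:1",
    "END OF STMT",
    "PARSING IN CURSOR #386943464 len=58 dep=0 uid=36 oct=6 lid=36",
    "update PART set mrp_required = :1 where id = :2",
    "END OF STMT",
]
print(recursive_statements(sample))  # only the dep=1 cdef$ statement is listed
```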

If you look at the trace file output above, you might have noticed the following line:

Symbol file c:\Oracle\product\11.2.0\dbhome_1\BIN\orannzsbb11.SYM does not match binary

I wonder if that message has any relevance to the problem? It is slightly interesting to find that Oracle Database 11.2.0.1 with patch 7 installed has a file by that name with a file date of April 8, 2011 (399KB in size), while 11.2.0.2 with patch 6 apparently has an older version with a file date of November 22, 2010 (395KB in size). Probably unrelated to the problem at hand, but still interesting that a more recent release ships an older version of the file. I wonder if that file version mismatch could be related (the following document requires a My Oracle Support account):

Doc ID 1272490.1: “‘Symbol file oraclsra11.SYM does not match binary’ Reported In Trace File + ORA-600 / ORA-7445”





Using TKPROF for Analyzing 10046 Extended SQL Trace Files – What is Wrong with this Quote?

September 21, 2011

I have written a couple of blog articles on the topic of reading 10046 extended SQL trace files.

Once you have collected a 10046 extended SQL trace, you can pass the trace file through TKPROF or any number of other utilities.  For example, if I take a 10046 extended SQL trace file captured at level 8 from a 10.2.0.2 database instance and pass that file through the TKPROF utility supplied with Oracle Database 11.2.0.2, a portion of the TKPROF output might appear as follows:

********************************************************************************
select currency_id, sell_rate, buy_rate,  act_material_cost, act_labor_cost, act_burden_cost, act_service_cost,
tracking_currency,
s_currency, s_thousand, i_currency, i_neg_curr, i_curr_digits,
rnd_acct_id, real_gn_acct_id, real_ls_acct_id, unreal_gn_acct_id, unreal_ls_acct_id
from INV_TRANS_CURR t, currency c
where TRANSACTION_ID = 1158316
and t.currency_id = c.id
order by c.id asc

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          6          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          6          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 49 
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  NESTED LOOPS  (cr=6 pr=0 pw=0 time=125 us)
         2          2          2   TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=69 us)
         2          2          2    INDEX FULL SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=37 us)(object id 12048)
         0          0          0   TABLE ACCESS BY INDEX ROWID INV_TRANS_CURR (cr=4 pr=0 pw=0 time=45 us)
         0          0          0    INDEX UNIQUE SCAN SYS_C005202 (cr=4 pr=0 pw=0 time=36 us)(object id 12327)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       5        0.00          0.00
  SQL*Net message from client                     5        0.00          0.00
  latch: library cache                            1        0.00          0.00
********************************************************************************
select euro_participant, xchg_rate from currency where id = :1             

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 49 
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=37 us)
         1          1          1   INDEX UNIQUE SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=20 us)(object id 12048)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       5        0.00          0.00
  SQL*Net message from client                     5        0.00          0.00
  latch: library cache                            1        0.00          0.00
******************************************************************************** 
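The parenthesized statistics on each row source line above (cr = consistent reads, pr = blocks read from disk, pw = blocks written, time = elapsed microseconds) are easy to pull apart programmatically.  A minimal sketch, assuming the 11.2-style output format shown above (the function name is mine, not part of TKPROF):

```python
import re

def parse_row_source_stats(line):
    """Extract the cr/pr/pw/time statistics from a TKPROF row source
    operation line such as:
    TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=37 us)"""
    # findall with two groups returns (name, value) pairs for each statistic
    stats = dict(re.findall(r'(cr|pr|pw|time)=(\d+)', line))
    return {k: int(v) for k, v in stats.items()}

line = "TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=37 us)"
print(parse_row_source_stats(line))  # {'cr': 2, 'pr': 0, 'pw': 0, 'time': 37}
```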

My Toy Project for Performance Tuning (otherwise known as the Hyper-Extended Oracle Performance Monitor) produces several output files when reading 10046 extended SQL trace files.  For example, if I pass in the same 10046 extended SQL trace file that was supplied to TKPROF, one of the output files contains information similar to the following, providing a detailed running commentary of the activity in the 10046 extended SQL trace file:

Statement Depth 0 (Application Code)
Cursor 62   Ver 2614   Parse at 90.371707  (0.004613)
select euro_participant, xchg_rate from currency where id = :1

Cursor 62   Ver 2614   Parse at 90.371704 (Parse to Parse -0.000003),CPU Time 0.000000,Elapsed Time 0.000034,Rows Affected 0,Blks from Buff 0,Blks from Disk 0,Goal=ALL_ROWS
Cursor 62   Ver 2614            90.371776  0.000001   SQL*Net message to client
Cursor 62   Ver 2614            90.371931  0.000137   SQL*Net message from client
Cursor 62   Ver 2614            90.372036  0.000001   SQL*Net message to client
Cursor 62   Ver 2614            90.372368  0.000297   SQL*Net message from client
Bind Variables (62):BINDS #62:  90.372368
   Bind#0
    oacdty=96 mxl=32(03) mxlc=00 mal=00 scl=00 pre=00
    oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
    kxsbbbfp=6c8e4750  bln=32  avl=03  flg=05
    value="USD"

Cursor 62   Ver 2614 Execute at 90.372578 (Parse to Exec  0.000871),CPU Time 0.000000,Elapsed Time 0.000144,Rows Affected 0,Blks from Buff 0,Blks from Disk 0,
Cursor 62   Ver 2614            90.372604  0.000001   SQL*Net message to client
Cursor 62   Ver 2614            90.372757  0.000130   SQL*Net message from client
Cursor 62   Ver 2614            90.372826  0.000001   SQL*Net message to client
Cursor 62   Ver 2614   Fetch at 90.372860 (Parse to Fetch 0.001153),CPU Time 0.000000,Elapsed Time 0.000068,Rows Retrievd 1,Blks from Buff 2,Blks from Disk 0
Cursor 62   Ver 2614            90.373213  0.000301   SQL*Net message from client
Cursor 0   Ver 0            90.373330  0.000001   SQL*Net message to client
Cursor 0   Ver 0            90.375677  0.002331   SQL*Net message from client
...
Cursor 42   Ver 5 Execute at 90.384344 (Parse to Exec  81.475396),CPU Time 0.000000,Elapsed Time 0.000108,Rows Affected 0,Blks from Buff 0,Blks from Disk 0,
Cursor 42   Ver 5            90.384368  0.000001   SQL*Net message to client
Cursor 42   Ver 5            90.384532  0.000144   SQL*Net message from client
Cursor 42   Ver 5            90.384682  0.000001   SQL*Net message to client
Cursor 42   Ver 5   Fetch at 90.384739 (Parse to Fetch 81.475791),CPU Time 0.000000,Elapsed Time 0.000175,Rows Retrievd 1,Blks from Buff 11,Blks from Disk 0
Cursor 42   Ver 5            90.386534  0.001750   SQL*Net message from client
Cursor 38   Ver 2608            90.386582  0.000001   SQL*Net message to client
Cursor 38   Ver 2608            90.387369  0.000763   SQL*Net message from client
Cursor 38   Ver 2608            90.389990  0.000103   latch: library cache
----------------------------------------------------------------------------------
Statement Depth 0 (Application Code)
Cursor 38   Ver 2609   Parse at 90.391579  (0.010830)

select currency_id, sell_rate, buy_rate,  act_material_cost, act_labor_cost, act_burden_cost, act_service_cost,
tracking_currency,
s_currency, s_thousand, i_currency, i_neg_curr, i_curr_digits,
rnd_acct_id, real_gn_acct_id, real_ls_acct_id, unreal_gn_acct_id, unreal_ls_acct_id
from INV_TRANS_CURR t, currency c
where TRANSACTION_ID = 1158746
and t.currency_id = c.id
order by c.id asc

Cursor 38   Ver 2609   Parse at 90.391571 (Parse to Parse -0.000008),CPU Time 0.015625,Elapsed Time 0.004159,Rows Affected 0,Blks from Buff 0,Blks from Disk 0,**Shared Pool Misses** 1,Goal=ALL_ROWS
Cursor 38   Ver 2609            90.391738  0.000006   SQL*Net message to client
Cursor 38   Ver 2609            90.392021  0.000235   SQL*Net message from client
Cursor 38   Ver 2609            90.392118  0.000001   SQL*Net message to client
Cursor 38   Ver 2609            90.392607  0.000436   SQL*Net message from client 

A second output file from my program provides summarized information, somewhat like TKPROF:

Total for Trace File:
|PARSEs   17914|CPU S   12.546875|CLOCK S   12.487227|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         6|CUR RD BLKs (Mem)         0|SHARED POOL MISs   3281|
|EXECs    33730|CPU S    6.328125|CLOCK S    6.450126|ROWs     1348|PHY RD BLKs         0|CON RD BLKs (Mem)      8574|CUR RD BLKs (Mem)      4631|SHARED POOL MISs     27|
|FETCHs   32222|CPU S    6.546875|CLOCK S    6.436054|ROWs    27368|PHY RD BLKs         1|CON RD BLKs (Mem)    690260|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

Wait Event Summary:
SQL*Net message to client           0.185460  On Client/Network   Min Wait:     0.000001  Avg Wait:     0.000001  Max Wait:     0.000028  Waits: 128632
SQL*Net message from client        72.674370  On Client/Network   Min Wait:     0.000112  Avg Wait:     0.000553  Max Wait:     0.016187  Waits: 131318
log file sync                       0.180988  On DB Server        Min Wait:     0.000005  Avg Wait:     0.000150  Max Wait:     0.010718  Waits:   1209
SQL*Net more data to client         0.001780  On Client/Network   Min Wait:     0.000013  Avg Wait:     0.000018  Max Wait:     0.000026  Waits:     98
db file sequential read             0.003380  On DB Server        Min Wait:     0.003380  Avg Wait:     0.003380  Max Wait:     0.003380  Waits:      1
latch: library cache                0.000256  On DB Server        Min Wait:     0.000067  Avg Wait:     0.000085  Max Wait:     0.000103  Waits:      3
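A wait event summary like the one above is simply an aggregation of the individual WAIT lines in the raw trace file.  Something similar can be approximated in a few lines of Python; this is an independent sketch, not the program that produced the output above, and it assumes the `ela` values are in microseconds (as they are in 10g and later traces):

```python
import re
from collections import defaultdict

def summarize_waits(trace_lines):
    """Aggregate 10046 WAIT lines, e.g.
    WAIT #38: nam='db file sequential read' ela= 3380 p1=4 p2=207930 p3=1
    into per-event wait counts and min/avg/max/total times in seconds."""
    waits = defaultdict(list)
    for line in trace_lines:
        m = re.search(r"nam='([^']+)' ela=\s*(\d+)", line)
        if m:
            waits[m.group(1)].append(int(m.group(2)) / 1000000.0)  # us -> s
    return {
        event: {'waits': len(v), 'total': sum(v),
                'min': min(v), 'max': max(v), 'avg': sum(v) / len(v)}
        for event, v in waits.items()
    }

sample = [
    "WAIT #38: nam='db file sequential read' ela= 3380 p1=4 p2=207930 p3=1",
    "WAIT #38: nam='SQL*Net message to client' ela= 2 p1=1413697536 p2=1 p3=0",
    "WAIT #38: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0",
]
summary = summarize_waits(sample)
print(summary['db file sequential read']['total'])  # 0.00338
```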

Similar SQL Statements in Group: 4253 
First Reference: Cursor 62   Ver 4   Parse at 8.941502
|PARSEs    4253|CPU S    0.109375|CLOCK S    0.149414|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs     4253|CPU S    0.671875|CLOCK S    0.655012|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs    4253|CPU S    0.468750|CLOCK S    0.295336|ROWs     4253|PHY RD BLKs         0|CON RD BLKs (Mem)      8506|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 4.92%  CLOCK S 4.33%
  *    0.000153 seconds of time related to unclassified events
  *    7.060429 seconds of time related to client/network events
----------------------------------------------------------------------------------

...
Similar SQL Statements in Group: 1 
First Reference: Cursor 18   Ver 1   Parse at 8.618896
|PARSEs       1|CPU S    0.000000|CLOCK S    0.001696|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs      486|CPU S    0.078125|CLOCK S    0.065229|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         2|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs     486|CPU S    0.812500|CLOCK S    0.771616|ROWs        0|PHY RD BLKs         1|CON RD BLKs (Mem)    148489|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 3.50%  CLOCK S 3.30%
  *    0.003380 seconds of time related to data file I/O
  *    0.197874 seconds of time related to client/network events

Similar SQL Statements in Group: 4253
|PARSEs    4253|CPU S    0.109375|CLOCK S    0.149414|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs     4253|CPU S    0.671875|CLOCK S    0.655012|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs    4253|CPU S    0.468750|CLOCK S    0.295336|ROWs     4253|PHY RD BLKs         0|CON RD BLKs (Mem)      8506|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

...
Statement Depth 0 (Application Code)
Cursor 62   Ver 4   Parse at 8.941502  (TD Prev 0.011920)  Similar Cnt 1
|PARSEs       1|CPU S    0.000000|CLOCK S    0.001544|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S    0.000000|CLOCK S    0.002476|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|FETCHs       1|CPU S    0.000000|CLOCK S    0.000076|ROWs        1|PHY RD BLKs         0|CON RD BLKs (Mem)         2|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 0.00%  CLOCK S 0.02%

select euro_participant, xchg_rate from currency where id = :1

       (Rows 1)   TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=48 us)
       (Rows 1)    INDEX UNIQUE SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=24 us)

Bind Variables (62):BINDS #62:  8.944672
   Bind#0
    oacdty=96 mxl=32(03) mxlc=00 mal=00 scl=00 pre=00
    oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
    kxsbbbfp=36d87e78  bln=32  avl=03  flg=05
    value="USD"

------------
Statement Depth 0 (Application Code)
Cursor 4   Ver 5   Parse at 8.976105  (TD Prev 0.004661)  Similar Cnt 2
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000109|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000145|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs       1|CPU S    0.000000|CLOCK S    0.000071|ROWs        1|PHY RD BLKs         0|CON RD BLKs (Mem)         2|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 0.00%  CLOCK S 0.00%

select euro_participant, xchg_rate from currency where id = :1

       (Rows 1)   TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=40 us)
       (Rows 1)    INDEX UNIQUE SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=20 us)

Bind Variables (4):BINDS #4:  8.976753
   Bind#0
    oacdty=96 mxl=32(03) mxlc=00 mal=00 scl=00 pre=00
    oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
    kxsbbbfp=6c8e87d8  bln=32  avl=03  flg=05
    value="USD"

------------
Statement Depth 0 (Application Code)
Cursor 37   Ver 6   Parse at 9.020236  (TD Prev 0.004586)  Similar Cnt 3
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000056|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000144|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs       1|CPU S    0.000000|CLOCK S    0.000067|ROWs        1|PHY RD BLKs         0|CON RD BLKs (Mem)         2|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 0.00%  CLOCK S 0.00%

select euro_participant, xchg_rate from currency where id = :1

       (Rows 1)   TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=37 us)
       (Rows 1)    INDEX UNIQUE SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=19 us)

Bind Variables (37):BINDS #37:  9.020884
   Bind#0
    oacdty=96 mxl=32(03) mxlc=00 mal=00 scl=00 pre=00
    oacflg=01 fl2=1000000 frm=01 csi=178 siz=32 off=0
    kxsbbbfp=6c4d08e0  bln=32  avl=03  flg=05
    value="USD"
...

Similar SQL Statements in Group: 1
|PARSEs       1|CPU S    0.015625|CLOCK S    0.003585|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000119|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs       1|CPU S    0.000000|CLOCK S    0.000111|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         6|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

Statement Depth 0 (Application Code)
Cursor 31   Ver 4   Parse at 8.922969  (TD Prev 0.014021)  Similar Cnt 1
|PARSEs       1|CPU S    0.015625|CLOCK S    0.003585|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S    0.000000|CLOCK S    0.000119|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
|FETCHs       1|CPU S    0.000000|CLOCK S    0.000111|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         6|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|
  CPU S 0.06%  CLOCK S 0.02%

select currency_id, sell_rate, buy_rate,  act_material_cost, act_labor_cost, act_burden_cost, act_service_cost,
tracking_currency,
s_currency, s_thousand, i_currency, i_neg_curr, i_curr_digits,
rnd_acct_id, real_gn_acct_id, real_ls_acct_id, unreal_gn_acct_id, unreal_ls_acct_id
from INV_TRANS_CURR t, currency c
where TRANSACTION_ID = 413948
and t.currency_id = c.id
order by c.id asc

       (Rows 0)   NESTED LOOPS  (cr=6 pr=0 pw=0 time=117 us)
       (Rows 2)    TABLE ACCESS BY INDEX ROWID CURRENCY (cr=2 pr=0 pw=0 time=65 us)
       (Rows 2)     INDEX FULL SCAN SYS_C004588 (cr=1 pr=0 pw=0 time=38 us)
       (Rows 0)    TABLE ACCESS BY INDEX ROWID INV_TRANS_CURR (cr=4 pr=0 pw=0 time=37 us)
       (Rows 0)     INDEX UNIQUE SCAN SYS_C005202 (cr=4 pr=0 pw=0 time=28 us) 

It is interesting to see how the contents of 10046 extended SQL trace files have evolved through the various Oracle Database releases, yet at its core the file format has not really changed much.  Below are a couple of examples from different Oracle Database releases.

8.1.7.3 10046 extended SQL trace at level 8:

=====================
PARSING IN CURSOR #38 len=287 dep=0 uid=78 oct=2 lid=78 tim=370357232 hv=3536790953 ad='5ad8bec'
INSERT INTO WORK_ORDER_SCHED (SCHEDULE_ID, WORKORDER_TYPE, WORKORDER_BASE_ID, WORKORDER_SUB_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, PART_ID, START_DATE, FINISH_DATE, BITS_LENGTH, SCHEDULE_TIME, PASS_COUNT, SCHEDULE_RANK) VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
END OF STMT
PARSE #38:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=0,tim=370357232
WAIT #38: nam='db file sequential read' ela= 0 p1=4 p2=207930 p3=1
WAIT #38: nam='db file sequential read' ela= 2 p1=9 p2=300266 p3=1
WAIT #38: nam='db file sequential read' ela= 0 p1=9 p2=350987 p3=1
WAIT #38: nam='db file sequential read' ela= 2 p1=9 p2=300330 p3=1
WAIT #38: nam='db file sequential read' ela= 0 p1=9 p2=300339 p3=1
WAIT #38: nam='db file sequential read' ela= 0 p1=9 p2=300181 p3=1
EXEC #38:c=0,e=4,p=6,cr=1,cu=12,mis=0,r=1,dep=0,og=4,tim=370357236
WAIT #38: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #38: nam='SQL*Net message from client' ela= 0 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 0 p1=1176 p2=0 p3=0
...
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 0 p1=1328 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 0 p1=1413697536 p2=1 p3=0
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370537187
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370537187
EXEC #33:c=0,e=0,p=0,cr=11,cu=8,mis=0,r=1,dep=1,og=4,tim=370537187
WAIT #33: nam='latch free' ela= 0 p1=197748644 p2=106 p3=0
WAIT #33: nam='latch free' ela= 0 p1=197748540 p2=106 p3=0
EXEC #34:c=2,e=2,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370537189
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370537189
EXEC #33:c=3,e=3,p=0,cr=11,cu=8,mis=0,r=1,dep=1,og=4,tim=370537190
EXEC #39:c=3,e=3,p=0,cr=26,cu=20,mis=0,r=1,dep=0,og=4,tim=370537190
WAIT #39: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #39: nam='SQL*Net message from client' ela= 0 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
...
WAIT #0: nam='log file sync' ela= 0 p1=583 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 0 p1=1413697536 p2=1 p3=0
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370580579
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370580579
EXEC #33:c=0,e=0,p=0,cr=10,cu=6,mis=0,r=1,dep=1,og=4,tim=370580579
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370580579
WAIT #34: nam='latch free' ela= 0 p1=197748644 p2=106 p3=0
EXEC #34:c=0,e=0,p=0,cr=3,cu=2,mis=0,r=1,dep=2,og=4,tim=370580579
EXEC #33:c=0,e=0,p=0,cr=10,cu=6,mis=0,r=1,dep=1,og=4,tim=370580579
EXEC #39:c=0,e=0,p=0,cr=24,cu=16,mis=0,r=1,dep=0,og=4,tim=370580579
WAIT #39: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #39: nam='SQL*Net message from client' ela= 0 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
...

9.2.0.4 10046 extended SQL trace at level 12:

PARSING IN CURSOR #40 len=118 dep=0 uid=23 oct=3 lid=23 tim=1160715895381603 hv=3290581551 ad='67afe40c'
select bits_length from CUST_ORDER_BINARY
where cust_order_id = :1              and type = :2                       
END OF STMT
EXEC #40:c=0,e=109,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1160715895381598
WAIT #40: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #40: nam='SQL*Net message from client' ela= 601 p1=1413697536 p2=1 p3=0
WAIT #40: nam='db file sequential read' ela= 20 p1=5 p2=105821 p3=1
WAIT #40: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
FETCH #40:c=0,e=102,p=1,cr=3,cu=0,mis=0,r=1,dep=0,og=4,tim=1160715895382393
WAIT #40: nam='SQL*Net message from client' ela= 2276 p1=1413697536 p2=1 p3=0
BINDS #43:
 bind 0: dty=96 mxl=32(08) mal=00 scl=00 pre=00 oacflg=01 oacfl2=0 size=64 offset=0
   bfp=b75b4a74 bln=32 avl=08 flg=05
   value="41120-26"
 bind 1: dty=96 mxl=32(01) mal=00 scl=00 pre=00 oacflg=01 oacfl2=0 size=0 offset=32
   bfp=b75b4a94 bln=32 avl=01 flg=01
   value="D"
=====================
PARSING IN CURSOR #43 len=105 dep=0 uid=23 oct=3 lid=23 tim=1160715895384818 hv=3322014952 ad='67afc9fc'
select bits from CUST_ORDER_BINARY
where cust_order_id = :1              and type = :2                 
END OF STMT
EXEC #43:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=1160715895384815
WAIT #43: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
WAIT #43: nam='SQL*Net message from client' ela= 578 p1=1413697536 p2=1 p3=0
WAIT #43: nam='SQL*Net message to client' ela= 1 p1=1413697536 p2=1 p3=0
FETCH #43:c=0,e=38,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=4,tim=1160715895385519
WAIT #43: nam='SQL*Net message from client' ela= 7413 p1=1413697536 p2=1 p3=0
WAIT #54: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0
WAIT #54: nam='SQL*Net message from client' ela= 3506 p1=1413697536 p2=1 p3=0
...

10.2.0.2 10046 extended SQL trace at level 8:

EXEC #3:c=0,e=172,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2761350071
WAIT #3: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761350121
WAIT #3: nam='SQL*Net message from client' ela= 151 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761350309
WAIT #3: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761350464
FETCH #3:c=0,e=178,p=0,cr=11,cu=0,mis=0,r=1,dep=0,og=1,tim=2761350529
WAIT #3: nam='SQL*Net message from client' ela= 1703 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761352318
WAIT #38: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761352422
WAIT #38: nam='SQL*Net message from client' ela= 14855 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2761367335
WAIT #38: nam='latch: library cache' ela= 86 address=6702518304 number=214 tries=1 obj#=-1 tim=2761370634
=====================
PARSING IN CURSOR #38 len=708 dep=0 uid=49 oct=3 lid=49 tim=2761372680 hv=3811084033 ad='7a792970'
...
EXEC #42:c=0,e=108,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2792074555
WAIT #42: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792074579
WAIT #42: nam='SQL*Net message from client' ela= 144 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792074743
WAIT #42: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792074893
FETCH #42:c=0,e=175,p=0,cr=11,cu=0,mis=0,r=1,dep=0,og=1,tim=2792074950
WAIT #42: nam='SQL*Net message from client' ela= 1750 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792076745
WAIT #38: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792076793
WAIT #38: nam='SQL*Net message from client' ela= 763 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792077580
WAIT #38: nam='latch: library cache' ela= 103 address=6702518304 number=214 tries=1 obj#=-1 tim=2792080201
=====================
PARSING IN CURSOR #38 len=709 dep=0 uid=49 oct=3 lid=49 tim=2792081790 hv=332841952 ad='6553b430'
...
WAIT #3: nam='SQL*Net message from client' ela= 132 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792095443
WAIT #3: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792096040
FETCH #3:c=15625,e=634,p=0,cr=11,cu=0,mis=0,r=1,dep=0,og=1,tim=2792096110
WAIT #3: nam='SQL*Net message from client' ela= 1645 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792097829
WAIT #62: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792097885
WAIT #62: nam='SQL*Net message from client' ela= 779 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=2792098689
WAIT #62: nam='latch: library cache' ela= 67 address=6702518784 number=214 tries=1 obj#=-1 tim=2792099425
=====================
PARSING IN CURSOR #62 len=709 dep=0 uid=49 oct=3 lid=49 tim=2792102933 hv=3211547247 ad='59a46098'

11.2.0.2 10046 extended SQL trace at level 8:

PARSING IN CURSOR #395158696 len=44 dep=0 uid=62 oct=42 lid=62 tim=279105099708 hv=1347470086 ad='0' sqlid='5pgz5gt851hs6'
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ=5
END OF STMT
PARSE #395158696:c=0,e=32,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=279105099708
EXEC #395158696:c=0,e=33,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=279105099773
WAIT #395158696: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=279105099786
WAIT #395158696: nam='SQL*Net message from client' ela= 29739 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=279105129537
*** SESSION ID:(4.2401) 2011-08-18 11:30:18.997

CLOSE #395158696:c=0,e=6,dep=0,type=0,tim=279105130224
=====================
PARSING IN CURSOR #395158696 len=66 dep=0 uid=62 oct=3 lid=62 tim=279105130999 hv=1053158396 ad='7ffa55c7da0' sqlid='f2q5x9szcbuzw'
SELECT
  ID,
  DESCRIPTION
FROM
  T90
WHERE
  ID BETWEEN 1 AND 400
END OF STMT
PARSE #395158696:c=0,e=745,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=3691735591,tim=279105130998
EXEC #395158696:c=0,e=23,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3691735591,tim=279105131072
WAIT #395158696: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=279105131099
WAIT #395158696: nam='db file sequential read' ela= 145 file#=4 block#=15211 blocks=1 obj#=70282 tim=279105131283
WAIT #395158696: nam='db file sequential read' ela= 141 file#=4 block#=3048038 blocks=1 obj#=70282 tim=279105131458
WAIT #395158696: nam='db file sequential read' ela= 124 file#=4 block#=15212 blocks=1 obj#=70282 tim=279105131606
WAIT #395158696: nam='db file sequential read' ela= 127 file#=4 block#=26658 blocks=1 obj#=69278 tim=279105131769
FETCH #395158696:c=0,e=676,p=4,cr=4,cu=0,mis=0,r=1,dep=0,og=1,plh=3691735591,tim=279105131790
WAIT #395158696: nam='SQL*Net message from client' ela= 68 driver id=1413697536 #bytes=1 p3=0 obj#=69278 tim=279105131879
WAIT #395158696: nam='db file parallel read' ela= 1187 files=1 blocks=39 requests=39 obj#=69278 tim=279105133171
WAIT #395158696: nam='db file sequential read' ela= 140 file#=4 block#=26704 blocks=1 obj#=69278 tim=279105133374
WAIT #395158696: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=69278 tim=279105133394
WAIT #395158696: nam='db file parallel read' ela= 1372 files=1 blocks=39 requests=39 obj#=69278 tim=279105134838
WAIT #395158696: nam='db file sequential read' ela= 128 file#=4 block#=32056 blocks=1 obj#=69278 tim=279105135092
WAIT #395158696: nam='db file parallel read' ela= 652 files=1 blocks=19 requests=19 obj#=69278 tim=279105135796
WAIT #395158696: nam='db file sequential read' ela= 142 file#=4 block#=106517 blocks=1 obj#=69278 tim=279105136038
FETCH #395158696:c=0,e=4182,p=100,cr=101,cu=0,mis=0,r=100,dep=0,og=1,plh=3691735591,tim=279105136084
...
WAIT #395158696: nam='db file parallel read' ela= 539 files=1 blocks=16 requests=16 obj#=69278 tim=279193197874
WAIT #395158696: nam='db file sequential read' ela= 123 file#=4 block#=3692381 blocks=1 obj#=69278 tim=279193198082
FETCH #395158696:c=0,e=2238,p=57,cr=58,cu=0,mis=0,r=57,dep=0,og=1,plh=3691735591,tim=279193198120
STAT #395158696 id=1 cnt=2547158 pid=0 pos=1 obj=69278 op='TABLE ACCESS BY INDEX ROWID T90 (cr=2573591 pr=2070791 pw=0 time=3919404 us cost=202973 size=216021546 card=4000399)'
STAT #395158696 id=2 cnt=2547158 pid=1 pos=1 obj=70282 op='INDEX RANGE SCAN IND_T90 (cr=30646 pr=5238 pw=0 time=891170 us cost=426 size=0 card=4071715)'
WAIT #395158696: nam='SQL*Net message from client' ela= 13470 driver id=1413697536 #bytes=1 p3=0 obj#=69278 tim=279193211655
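Regardless of release, the WAIT lines share enough structure that a short script can aggregate wait time per event name.  Below is a minimal sketch (the regular expression and sample lines are based on the trace excerpts above; the raw ela values are simply summed, so their time unit depends on the Oracle release that produced the trace file):

```python
import re
from collections import defaultdict

# Matches WAIT lines from the various release formats shown above, e.g.:
#   WAIT #38: nam='db file sequential read' ela= 2 p1=9 p2=300266 p3=1
#   WAIT #395158696: nam='db file parallel read' ela= 1187 files=1 ...
WAIT_RE = re.compile(r"WAIT #\S+: nam='([^']+)' ela=\s*(\d+)")

def summarize_waits(trace_lines):
    """Sum the raw ela values per wait event name."""
    totals = defaultdict(int)
    for line in trace_lines:
        m = WAIT_RE.match(line)
        if m:
            totals[m.group(1)] += int(m.group(2))
    return dict(totals)

sample = [
    "WAIT #38: nam='db file sequential read' ela= 2 p1=9 p2=300266 p3=1",
    "WAIT #38: nam='db file sequential read' ela= 0 p1=9 p2=350987 p3=1",
    "WAIT #38: nam='SQL*Net message to client' ela= 0 p1=1413697536 p2=1 p3=0",
]
print(summarize_waits(sample))
```

Feeding an entire trace file through summarize_waits produces a quick wait event profile for the session, similar in spirit to a small part of what TKPROF reports.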

While reading the “Oracle Database 11g Performance Tuning Recipes” book I found a number of interesting bits of information.  For example, at the bottom of page 334 and the top of page 335 is the following interesting quote:

“A locking situation is a good example where you can visually inspect a raw trace file. TKPROF doesn’t provide you details about latches and locks (enqueues). If you suspect that a query was waiting on a lock, digging deep into a raw trace file shows you exactly where and why a query was waiting. In the WAIT line, the elapsed time (ela) shows the amount of time waited (in microseconds).”

What, if anything, is wrong with the above quote from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.





Inline Views – What is Wrong with this Quote?

18 09 2011

September 18, 2011

I do not recall putting together any articles about inline views, so let’s start out with a couple of examples before taking a look at a quote.  We need to create a table for this example, so I will reuse a slightly modified table creation script from another article, and also collect statistics with 100% sampling.

DROP TABLE T1 PURGE;

CREATE TABLE T1(
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 DATE,
  C5 DATE,
  C6 DATE,
  C7 VARCHAR2(20),
  C8 VARCHAR2(20),
  C9 VARCHAR2(20),
  PRIMARY KEY(C1));

INSERT INTO
  T1
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  TRUNC(SYSDATE)+TRUNC(ROWNUM/100),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A')),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'))
FROM
  DUAL
CONNECT BY
  LEVEL<=200000;

COMMIT;

CREATE INDEX IND_T1_C2 ON T1(C2);
CREATE INDEX IND_T1_C7 ON T1(C7);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)

Now, let’s create a simple SQL statement that references the above table 3 times, twice in inline views and once again outside of an inline view:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2
FROM
  T1,
  (SELECT
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000) V1,
  (SELECT
     C1,
     C6,
     C7,
     C8
   FROM
     T1
   WHERE
     C1 BETWEEN 9000 AND 12000) V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

If I execute the above on Oracle Database 11.2.0.2 I see the following autotrace generated execution plan:

Execution Plan
----------------------------------------------------------
Plan hash value: 3832506612

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |  1002 | 10020 |    17   (0)| 00:00:22 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1           |  1002 | 10020 |    17   (0)| 00:00:22 |
|*  2 |   INDEX RANGE SCAN          | SYS_C0039297 |  1002 |       |     3   (0)| 00:00:04 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1">=9000 AND "T1"."C1"<=10000)

Notice that in my query the SELECT list did not reference any of the columns in view V1 or view V2, and the only view column referenced in the WHERE clause was the primary key column C1; as a result, the optimizer eliminated both self-joins, and the table T1 appears only a single time in the execution plan.  Let’s fix the query and try again:

SELECT
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  (SELECT
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000) V1,
  (SELECT
     C1,
     C6,
     C7,
     C8
   FROM
     T1
   WHERE
     C1 BETWEEN 9000 AND 12000) V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 3832506612

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |  1002 | 68136 |    17   (0)| 00:00:22 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1           |  1002 | 68136 |    17   (0)| 00:00:22 |
|*  2 |   INDEX RANGE SCAN          | SYS_C0039297 |  1002 |       |     3   (0)| 00:00:04 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1">=9000 AND "T1"."C1"<=10000)

The above was the desired result.

Let’s create a second table that is identical to the first table:

DROP TABLE T2 PURGE;

CREATE TABLE T2(
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 DATE,
  C5 DATE,
  C6 DATE,
  C7 VARCHAR2(20),
  C8 VARCHAR2(20),
  C9 VARCHAR2(20),
  PRIMARY KEY(C1));

INSERT INTO
  T2
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  TRUNC(SYSDATE)+TRUNC(ROWNUM/100),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A')),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'))
FROM
  DUAL
CONNECT BY
  LEVEL<=200000;

COMMIT;

CREATE INDEX IND_T2_C2 ON T2(C2);
CREATE INDEX IND_T2_C7 ON T2(C7);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)

Let’s modify the last query to access the table T2 in the second inline view and take a look at the autotrace generated execution plan for the query:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  (SELECT
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000) V1,
  (SELECT
     C1,
     C6,
     C7,
     C8
   FROM
     T2
   WHERE
     C1 BETWEEN 9000 AND 12000) V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2810660667

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |  1001 | 73073 |    35   (3)| 00:00:44 |
|*  1 |  HASH JOIN                   |              |  1001 | 73073 |    35   (3)| 00:00:44 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |  1002 | 23046 |    17   (0)| 00:00:22 |
|*  3 |    INDEX RANGE SCAN          | SYS_C0039297 |  1002 |       |     3   (0)| 00:00:04 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T2           |  1002 | 50100 |    17   (0)| 00:00:22 |
|*  5 |    INDEX RANGE SCAN          | SYS_C0039300 |  1002 |       |     3   (0)| 00:00:04 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="C1")
   3 - access("T1"."C1">=9000 AND "T1"."C1"<=10000)
   5 - access("C1">=9000 AND "C1"<=10000)

If you do not like inline views, you can instead take advantage of subquery factoring (WITH blocks):

WITH
  V1 AS (SELECT
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000),
  V2 AS (SELECT
     C1,
     C6,
     C7,
     C8
   FROM
     T2
   WHERE
     C1 BETWEEN 9000 AND 12000)
SELECT
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  V1,
  V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2810660667

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |  1001 | 73073 |    35   (3)| 00:00:44 |
|*  1 |  HASH JOIN                   |              |  1001 | 73073 |    35   (3)| 00:00:44 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |  1002 | 23046 |    17   (0)| 00:00:22 |
|*  3 |    INDEX RANGE SCAN          | SYS_C0039297 |  1002 |       |     3   (0)| 00:00:04 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T2           |  1002 | 50100 |    17   (0)| 00:00:22 |
|*  5 |    INDEX RANGE SCAN          | SYS_C0039300 |  1002 |       |     3   (0)| 00:00:04 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="C1")
   3 - access("T1"."C1">=9000 AND "T1"."C1"<=10000)
   5 - access("C1">=9000 AND "C1"<=10000)

Hints may be used to change the behavior of the query optimizer.  For instance, if we specify the NO_QUERY_TRANSFORMATION hint in the last of the above queries that used inline views, the autotrace generated execution plan appears as follows:

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  (SELECT
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000) V1,
  (SELECT
     C1,
     C6,
     C7,
     C8
   FROM
     T2
   WHERE
     C1 BETWEEN 9000 AND 12000) V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 3681900949

------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |  1002 | 91182 |  1037   (1)| 00:21:33 |
|*  1 |  HASH JOIN                      |              |  1002 | 91182 |  1037   (1)| 00:21:33 |
|   2 |   NESTED LOOPS                  |              |       |       |            |          |
|   3 |    NESTED LOOPS                 |              |  1002 | 45090 |  1019   (0)| 00:21:11 |
|   4 |     VIEW                        |              |  1002 | 35070 |    17   (0)| 00:00:22 |
|   5 |      TABLE ACCESS BY INDEX ROWID| T1           |  1002 | 18036 |    17   (0)| 00:00:22 |
|*  6 |       INDEX RANGE SCAN          | SYS_C0039297 |  1002 |       |     3   (0)| 00:00:04 |
|*  7 |     INDEX UNIQUE SCAN           | SYS_C0039297 |     1 |       |     0   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID  | T1           |     1 |    10 |     1   (0)| 00:00:02 |
|   9 |   VIEW                          |              |  1002 | 46092 |    17   (0)| 00:00:22 |
|  10 |    TABLE ACCESS BY INDEX ROWID  | T2           |  1002 | 50100 |    17   (0)| 00:00:22 |
|* 11 |     INDEX RANGE SCAN            | SYS_C0039300 |  1002 |       |     3   (0)| 00:00:04 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="V2"."C1")
   6 - access("C1">=9000 AND "C1"<=10000)
   7 - access("T1"."C1"="V1"."C1")
  11 - access("C1">=9000 AND "C1"<=10000)

For fun, on the Enterprise Edition of Oracle Database (if I remember correctly, the Standard Edition will behave differently; Edit Sept. 18, 2011: I just tested 11.2.0.2 Standard Edition and 10.2.0.5 Standard Edition, and both versions generated execution plans similar to what is displayed below, so the MATERIALIZE hint apparently does not behave differently as originally described) we can also use the MATERIALIZE hint in the subquery factored (WITH block) version of the query:

WITH
  V1 AS (SELECT /*+ MATERIALIZE */
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000),
  V2 AS (SELECT /*+ MATERIALIZE */
     C1,
     C6,
     C7,
     C8
   FROM
     T2
   WHERE
     C1 BETWEEN 9000 AND 12000)
SELECT
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  V1,
  V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1193025797

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                           |   150 | 13650 |  1491   (1)| 00:31:00 |
|   1 |  TEMP TABLE TRANSFORMATION    |                           |       |       |            |          |
|   2 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6625_D08144 |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1                        | 10000 |   224K|   151   (0)| 00:03:09 |
|*  4 |     INDEX RANGE SCAN          | SYS_C0039297              | 10000 |       |    20   (0)| 00:00:25 |
|   5 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6626_D08144 |       |       |            |          |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2                        |  3002 |   146K|    47   (0)| 00:00:59 |
|*  7 |     INDEX RANGE SCAN          | SYS_C0039300              |  3002 |       |     7   (0)| 00:00:09 |
|*  8 |   HASH JOIN                   |                           |   150 | 13650 |  1293   (1)| 00:26:53 |
|*  9 |    HASH JOIN                  |                           |  3002 |   164K|  1276   (1)| 00:26:31 |
|  10 |     VIEW                      |                           |  3002 |   134K|    12   (0)| 00:00:15 |
|  11 |      TABLE ACCESS FULL        | SYS_TEMP_0FD9D6626_D08144 |  3002 |   146K|    12   (0)| 00:00:15 |
|  12 |     TABLE ACCESS FULL         | T1                        |   200K|  1953K|  1263   (0)| 00:26:16 |
|  13 |    VIEW                       |                           | 10000 |   341K|    17   (0)| 00:00:22 |
|  14 |     TABLE ACCESS FULL         | SYS_TEMP_0FD9D6625_D08144 | 10000 |   224K|    17   (0)| 00:00:22 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("C1">=1 AND "C1"<=10000)
   7 - access("C1">=9000 AND "C1"<=12000)
   8 - access("T1"."C1"="V1"."C1")
   9 - access("T1"."C1"="V2"."C1")

Ouch, an estimated 31 minutes to complete?

SET AUTOTRACE OFF

SELECT
  PNAME,
  PVAL1
FROM
  SYS.AUX_STATS$
WHERE
  PNAME IN ('SREADTIM','MREADTIM');

PNAME         PVAL1
-------- ----------
SREADTIM   1247.132
MREADTIM   3575.567
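The 31 minute estimate is easy to reconcile with those numbers: the Time column in the plan appears to be little more than the estimated cost (expressed in single-block read equivalents) multiplied by SREADTIM.  A quick sanity check of that assumption:

```python
def plan_time_seconds(cost, sreadtim_ms):
    # Estimated execution time = cost (in single-block read equivalents)
    # multiplied by the single-block read time from SYS.AUX_STATS$.
    return cost * sreadtim_ms / 1000.0

# Values from the MATERIALIZE plan and SYS.AUX_STATS$ above
seconds = plan_time_seconds(1491, 1247.132)
print(round(seconds / 60))  # roughly 31 minutes, matching the 00:31:00 Time column
```

The same arithmetic with a SREADTIM of 0.4 ms and a cost of about 1211 yields well under half a second, which the plan output would round up to 00:00:01.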

Must be that darn floppy RAID 10 array again.  Let’s make it faster:

EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM',0.4)
EXEC DBMS_STATS.SET_SYSTEM_STATS('MREADTIM',0.8)

SET AUTOTRACE TRACEONLY EXPLAIN

WITH
  V1 AS (SELECT /*+ MATERIALIZE */
     C1,
     C2,
     C3,
     C4
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 10000),
  V2 AS (SELECT /*+ MATERIALIZE */
     C1,
     C6,
     C7,
     C8
   FROM
     T2
   WHERE
     C1 BETWEEN 9000 AND 12000)
SELECT
  T1.C1,
  T1.C2,
  V1.C3,
  V1.C4,
  V2.C6,
  V2.C7,
  V2.C8
FROM
  T1,
  V1,
  V2
WHERE
  T1.C1=V1.C1
  AND T1.C1=V2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1626848331

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                           |   150 | 13650 |  1211  (10)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION    |                           |       |       |            |          |
|   2 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6627_D08144 |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1                        | 10000 |   224K|   158   (5)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | SYS_C0039297              | 10000 |       |    23  (14)| 00:00:01 |
|   5 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6628_D08144 |       |       |            |          |
|   6 |    TABLE ACCESS BY INDEX ROWID| T2                        |  3002 |   146K|    49   (5)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | SYS_C0039300              |  3002 |       |     8  (13)| 00:00:01 |
|*  8 |   HASH JOIN                   |                           |   150 | 13650 |  1004  (11)| 00:00:01 |
|*  9 |    HASH JOIN                  |                           |  3002 |   164K|   987  (10)| 00:00:01 |
|  10 |     VIEW                      |                           |  3002 |   134K|    10  (10)| 00:00:01 |
|  11 |      TABLE ACCESS FULL        | SYS_TEMP_0FD9D6628_D08144 |  3002 |   146K|    10  (10)| 00:00:01 |
|  12 |     TABLE ACCESS FULL         | T1                        |   200K|  1953K|   950   (8)| 00:00:01 |
|  13 |    VIEW                       |                           | 10000 |   341K|    15  (20)| 00:00:01 |
|  14 |     TABLE ACCESS FULL         | SYS_TEMP_0FD9D6627_D08144 | 10000 |   224K|    15  (20)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("C1">=1 AND "C1"<=10000)
   7 - access("C1">=9000 AND "C1"<=12000)
   8 - access("T1"."C1"="V1"."C1")
   9 - access("T1"."C1"="V2"."C1")

A one second estimated completion time, that is better.  Bugs…

Now that hopefully everyone is on the same page about what an inline view is and how to use one in a query, let’s take a look at two quotes about inline views from the book “Oracle Database 11g Performance Tuning Recipes”:

Page 291 (click the link to see the quote in full context):

“While extremely useful, if misused or overused, inline views can cause database performance issues, especially in terms of the use of the temporary tablespace. Since inline views are created and used only for the duration of a query, their results are held in the program global memory area, and if too large, the temporary tablespace.”

Page 292 (note that this quote is quite a bit longer than I would prefer, but it is as short as possible without losing the context of the quote):

“In this query, there are three inline views created: the SERVICE_INFO view, the PRODUCT_INFO view, and the BILLING_INFO view. Each of these queries will be processed and the results stored in the program global area or the temporary tablespace before finally processing the true end-user query, which starts with the final SELECT statement shown in the query.  In examples such as these, it is generally more efficient at the database level to create tables that hold the data defined by the inline views—in this case, three separate tables. Then, the final query can be extracted from joining the three permanent tables to generate the results.”
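As a counterpoint sketch (assuming a table similar to the T1 test table used elsewhere in these articles – this is an illustration, not an example from the book): the optimizer frequently merges a simple inline view into its parent query, so no intermediate result set is ever written to the program global area or the temporary tablespace – the execution plan would then show a plain table access, with no VIEW or TEMP TABLE TRANSFORMATION operation:

```sql
SELECT /* hypothetical example - inline view V is a candidate for view merging */
  V.RN
FROM
  (SELECT
    RN
  FROM
    T1
  WHERE
    RN <= 50) V
WHERE
  V.RN >= 25;
```

Whether an inline view is merged or materialized depends on the specific query (and hints such as MERGE and NO_MERGE), so statements about how inline view results are stored are worth testing against an actual execution plan rather than accepting at face value.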

What, if anything, is wrong with the above quote from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.





10046 Extended SQL Tracing for a Session – What is Wrong with this Quote?

16 09 2011

September 16, 2011

While reading the “Oracle Database 11g Performance Tuning Recipes” book, I noticed that some of the recipes in chapter 10 seemed to have an odd familiarity, although as of yet I have not quite pinpointed where I had previously seen some of the concepts in chapter 10.  Recipes 10-2, 10-11, and 10-16 seem to share a common ancestry with a blog article, but maybe I am imagining things.  A quote from recipe 10-16, found on page 351 (page 351 is currently unavailable for me in Google Books, but page 352 is available - you might need to scroll down one page if you click this link) of the book is certainly interesting, using a named event that was introduced with Oracle Database 11.1, rather than the event number 10046:

Execute the alter session (or alter system) set events command to trace a session by its operating system process ID which is shown by the SPID column in the V$PROCESS view [the book demonstrates replacing the word pid in the following command with the SPID value from V$PROCESS] . The general format of this command is as follows:

alter session set events 'sql_trace {process:pid}'

For additional clarification, the book provides the following extended example:

alter session set events 'sql_trace {process : pid = <pid>, pname = <pname>, orapid = <orapid>} rest of event specification'

Interesting… a slight distraction: Metalink (MOS) Doc ID 813737.1, “How To Use The New 11g Events++ Syntax For Easier SQL Tracing Of Datapump Operations” (requires a sign-on for My Oracle Support) – I wonder if I had read that article in the past?
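Setting aside the question for a moment, a quick sketch of how the book’s syntax might be exercised (the SID value 123 and SPID value 4567 below are placeholders for illustration, not values from the book):

```sql
-- Hypothetical sketch: retrieve the operating system process ID (SPID) for a
-- session of interest, then enable and later disable SQL tracing of that
-- process using the 11.1 named-event syntax quoted above.
SELECT
  P.SPID
FROM
  V$PROCESS P,
  V$SESSION S
WHERE
  P.ADDR = S.PADDR
  AND S.SID = 123;

ALTER SESSION SET EVENTS 'sql_trace {process:4567}';

ALTER SESSION SET EVENTS 'sql_trace {process:4567} off';
```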

What, if anything, is wrong with the above quote from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.





Brain Teaser: Why is this Query Performing a Full Table Scan

14 09 2011

September 14, 2011

While taking a look at the OTN threads this morning, I found an interesting test case, where the OP was asking why a query used a full table scan and not an index range scan.  The OP would like for the query to use the index without using a hint in the query.  So, why doesn’t the OP’s test case use an index range scan?  Just for fun I will state that my first two guesses were not quite on target.

A slightly modified table creation script of the setup for the OP’s test case:

DROP TABLE T1 PURGE;

CREATE TABLE T1 AS
SELECT
  *
FROM
  DBA_OBJECTS
WHERE
  STATUS='VALID';

UPDATE T1 SET STATUS='INVALID' WHERE ROWNUM=1;
COMMIT;

CREATE INDEX IND_T1_STATUS ON T1(STATUS); 

Let’s take a look at the data distribution in the table:

SELECT
  STATUS,
  CNT,
  ROUND((RATIO_TO_REPORT(CNT) OVER ())*100,6) PERCENT
FROM
  (SELECT
    STATUS,
    COUNT(*) CNT
  FROM
    T1
  GROUP BY
    STATUS);

STATUS         CNT    PERCENT
------- ---------- ----------
INVALID          1    .001513
VALID        66095  99.998487 

99.998% of the table’s rows have a STATUS of ‘VALID’, with just a single row having a STATUS of ‘INVALID’.

Now let’s collect the statistics for the table and index and check the execution plan:

ANALYZE INDEX IND_T1_STATUS COMPUTE STATISTICS;
ANALYZE TABLE T1 COMPUTE STATISTICS;

SET AUTOTRACE TRACEONLY EXPLAIN
SET PAGESIZE 1000
SET LINESIZE 140

SELECT
  *
FROM
  T1
WHERE
  STATUS='INVALID';

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 33048 |  3227K|   265   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T1   | 33048 |  3227K|   265   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("STATUS"='INVALID') 

If you were assigned to help the OP achieve his task with the test case, what would you do?
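One possible line of investigation (a sketch only – not necessarily the answer to the brain teaser) would be to replace the ANALYZE-collected statistics with DBMS_STATS-collected statistics, requesting a histogram on the skewed STATUS column, and then re-check the execution plan:

```sql
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>NULL,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS STATUS SIZE 254',CASCADE=>TRUE)

SELECT
  *
FROM
  T1
WHERE
  STATUS='INVALID';
```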





Book Review: Oracle Database 11g Performance Tuning Recipes

10 09 2011

September 10, 2011

Hammering a Square Peg into a Round Hole: Fine Edges are Lost, Gaps in Detail
http://www.amazon.com/Oracle-Database-Performance-Tuning-Recipes/dp/1430236620 

(Forward to the Next Post in the Series)

(Update September 14, 2011:  It is a bit unfair to this book that my review only covered the first six chapters and portions of chapter 7 – roughly the first half of the book.  The review as originally posted was 22 typewritten pages in length, the second longest of all of my reviews.  I intend to review the second half of this book, so watch for part two of this book’s review.)

I bought the alpha copy of this book from the Apress site in May 2011 when I heard about the book’s pending release, and was able to obtain the final version of the book at the end of August 2011.  I have read a couple of the other “Recipe” books from Apress, so I was excited to see how the recipe format could be applied to the task of Oracle Database performance tuning, specifically in versions 11.1 and above.

The authors of this book all have at least 10-15 years of experience with Oracle Database (and likely several years of IT related experience prior to that), are accomplished writers of several books (at least three of the “Recipes” books were co-authored by this book’s authors), and at least two of the authors teach at the university level.  The technical reviewer of the book is an Oracle Ace with an OCP certificate and also has an Oracle Certified RAC Expert certificate.  Apress, based on my past experience, publishes some of the best Oracle Database material on the market.  All of the critical recipe elements for a very successful, extremely useful book were initially in place for this book.

So, how well did the book’s final contents meet the level of expectations provided by the book’s front cover and the publisher’s description of the book?  Simply put, some of the chapters appeared as though someone tried to hammer a square cut nail into a finely machined and tapped machine-screw hole.  The end result shows that the topic of performance tuning Oracle Databases simply does not fit well into a recipe format.  The number of technical/logical errors per page in some of the chapters makes for a difficult challenge to find any useful/reliable information in those chapters without spending significant time verifying that the material presented by the book is accurate.  Some of the other chapters did provide useful information that was mostly correct, but that information sometimes lacked sufficient depth, likely constrained by the recipe format of the book.  The topic selection for the book is reasonably good, although in execution the book sometimes drifted far away from performance tuning, into a lesson of what to do if a user cannot connect to the database, how to write basic SQL statements, how to perform basic database administration (such as handling snapshot too old errors), and how to issue operating system commands.  Some sections of the book also seemed to be more specific to Oracle Database 9.2 and below than 11.1 and above.  In short, there is good material in the book, but it is difficult to determine what is accurate and what is completely wrong.

This review is very long (20+ type-written pages for the first seven chapters in the book), as that level of detail is required to justify the rating attached to this book review.  As such, this review will exceed the length limit permitted by Amazon, so see my blog for the full review.  I have divided this review into three sections: interesting dictionary views, parameters, functions, hints, and Oracle error messages mentioned in the book; comments, corrections, and problems for the contents of the book; and interesting and/or well-above-average bits of information found in the book.

Data Dictionary Views:

  • DATABASE_PROPERTIES (page 4)
  • DBA_TABLESPACES (page 6)
  • USER_SEGMENTS (page 12)
  • USER_EXTENTS (page 13)
  • V$DATABASE, DBA_TABLESPACES (page 16)
  • V$TRANSACTION (page 18)
  • DBA_AUTO_SEGADV_SUMMARY (page 20)
  • DBA_ADVISOR_FINDINGS, DBA_ADVISOR_OBJECTS (page 25)
  • DBA_INDEXES (page 29)
  • DBA_AUDIT_TRAIL, DBA_AUDIT_OBJECT (page 41)
  • USER_CONSTRAINTS (page 54)
  • USER_INDEXES (page 55)
  • USER_IND_EXPRESSIONS (page 66)
  • V$OBJECT_USAGE (page 75)
  • V$MEMORY_DYNAMIC_COMPONENTS, V$MEMORY_TARGET_ADVICE (page 90)
  • V$SQL_WORKAREA_HISTOGRAM (page 94)
  • V$RESULT_CACHE_STATISTICS (page 98)
  • V$STATISTICS_LEVEL (page 114)
  • DBA_HIST_WR_CONTROL (page 116)
  • DBA_HIST_BASELINE (page 125)
  • DBA_HIST_ACTIVE_SESS_HISTORY, V$ACTIVE_SESSION_HISTORY (page 139)
  • V$SYSMETRIC, V$SESSION, V$SESSION_WAIT, V$SESSION_WAIT_HISTORY, V$SESSION_EVENT (page 148)
  • V$SYSTEM_EVENT, V$SYSTEM_WAIT_CLASS (page 149)
  • V$ACTIVE_SESSION_HISTORY, V$SQLAREA (page 150)
  • V$EVENT_NAME, V$SYSTEM_WAIT_CLASS (page 155)
  • DBA_EXTENTS (page 160)
  • V$LOCK, V$LOCKED_OBJECT (page 164)
  • V$SYS_TIME_MODEL (page 175)
  • V$SESSION_WAIT_CLASS, V$EVENT_HISTOGRAM (page 177)
  • V$UNDOSTAT (page 212)
  • V$TRANSACTION, V$SESSTAT, V$STATNAME (page 214)
  • DBA_TEMP_FILES, V$TEMP_SPACE_HEADER (page 217)
  • V$OPEN_CURSOR, GV$OPEN_CURSOR (page 222)

Parameters

  • UNDO_MANAGEMENT, UNDO_TABLESPACE (page 4)
  • AUDIT_TRAIL (page 40)
  • _USE_NOSEGMENT_INDEXES (page 54)
  • OPTIMIZER_USE_INVISIBLE_INDEXES (page 70)
  • PARALLEL_THREADS_PER_CPU (page 78)
  • DDL_LOCK_TIMEOUT (page 80)
  • SGA_TARGET, PGA_AGGREGATE_TARGET, MEMORY_TARGET, MEMORY_MAX_TARGET (page 83)
  • RESULT_CACHE_MAX_SIZE, RESULT_CACHE_MAX_RESULT, and RESULT_CACHE_REMOTE_EXPIRATION (page 95)
  • RESULT_CACHE_MODE (page 100)
  • CLIENT_RESULT_CACHE_SIZE, CLIENT_RESULT_CACHE_LAG (page 103)
  • DB_FLASH_CACHE_FILE, DB_FLASH_CACHE_SIZE (page 109)
  • LOG_BUFFER (page 111)
  • STATISTICS_LEVEL (page 114)
  • CONTROL_MANAGEMENT_PACK_ACCESS (page 115)
  • COMMIT_WRITE (page 159)
  • LOG_BUFFER (page 160)
  • CURSOR_SPACE_FOR_TIME (page 179)
  • UNDO_TABLESPACE, UNDO_RETENTION (page 211)
  • OPEN_CURSORS (page 222)
  • SESSION_CACHED_CURSORS (page 223)

Functions:

  • DBMS_RESULT_CACHE.STATUS (page 97)
  • DBMS_RESULT_CACHE.MEMORY_REPORT (page 98)

Hints:

  • APPEND (page 15)
  • INDEX (page 70)
  • RESULT_CACHE, NO_RESULT_CACHE (page 100)

Error Numbers:

  • Chapter 1: ORA-01578: ORACLE data block corrupted, ORA-26040: Data block was loaded using the NOLOGGING option (page 16); ORA-02266: unique/primary keys in table referenced by enabled foreign keys (page 18)
  • Chapter 2: ORA-00001: unique constraint violated (page 58), ORA-02270: no matching unique or primary key for this column-list (page 59), ORA-10635: Invalid segment or tablespace type (page 81)
  • Chapter 3: ORA-00838: Specified value of MEMORY_TARGET is too small (page 86)
  • Chapter 4: ORA-20200: The instance was shutdown between snapshots (page 120), ORA-13541: system moving window baseline size (page 124), ORA-13539: invalid input for modify baseline window size (page 124)
  • Chapter 6: ORA-09817: Write to audit file failed (page 187)
  • Chapter 7: ORA-01555: snapshot too old (page 211)

——

Comments, Corrections, and Problems:

  • In recipe 1-2 the following note appears, “As of Oracle Database 11g R2, the EXTENT MANAGEMENT DICTIONARY clause has been deprecated.”  I could not locate confirmation in the Oracle Database documentation that that clause was deprecated in 11g R2, but the documentation does state that dictionary managed tablespaces were deprecated in Oracle Database 9i R2, even though Oracle Database 11g R2 apparently still supports (to an extent) dictionary managed tablespaces (and the deprecated DICTIONARY keyword) as long as the system tablespace is not locally managed. This recipe also states that the authors recommend using ASSM tablespaces, without describing the potential negative performance impact of such tablespaces. (pages 5-7)
  • In recipe 1-4 the authors state “Specify a separate tablespace for the table and indexes” with the justification that doing so “simplifies administration and maintenance” – it would have been helpful for the authors to provide supporting evidence for this claim (pages 9-12).  In recipe 2-2 on page 51, the authors suggest that backup and recovery is easier with separate tablespaces for data and indexes.
  • In recipe 1-5 (page 13), an apparently unintentional word substitution: “An extent will not be allocated until the initial record is inserted into a given extent” – the second “extent” in the sentence should probably be replaced with the  word “segment”.  The recipe should mention that deferred segment creation is an Enterprise Edition only feature, and explain the potential problems that restriction might cause. (pages 12-14)
  • In recipe 1-8 the authors suggest checking the DBA_ADVISOR_EXECUTIONS,  DBA_ADVISOR_FINDINGS, and DBA_ADVISOR_OBJECTS views without providing a warning about the licensing requirements associated with the DBA_ADVISOR views. (pages 19-23)
  • Recipe 1-11 demonstrates using several approaches to reduce the number of rows that span more than one data block. It would have been helpful if the book differentiated between chained rows and migrated rows, and discussed why the ANALYZE TABLE LIST CHAINED ROWS command might still report a count greater than 0 in the CHAINED_ROWS table (table has more than 255 columns and must be broken into multiple row pieces, table row is larger than a table block, etc.). Since it is the absolute file number that appears most commonly in the various data dictionary views, it probably would have been a better approach to use the DBMS_ROWID.ROWID_TO_ABSOLUTE_FNO function rather than the DBMS_ROWID.ROWID_RELATIVE_FNO function when decoding the ROWID values. (pages 28-32)
  • Recipe 2-5 includes a SQL statement that attempts to identify unindexed foreign key columns.  This same SQL statement appears on page 243 of the book “Oracle SQL Recipes: A Problem-Solution Approach”.  While the SQL statement only works for the current user’s schema, and A, B, and C generic aliases are used for table aliases, there are more significant problems with the SQL statement and the recipe as a whole.  The recipe gives the impression that the primary reason for indexes on the foreign key columns is to improve performance – specifically the performance of SELECT statements where the foreign key columns are included in the WHERE clause.  When you consider that the tables with the foreign key columns are likely detail (child) tables, there is a bit of a threat that each of the unique values found in the foreign key columns will be repeated many times through the various rows (scattered among the various rows, thus the CLUSTERING_FACTOR for the index might be close to the number of rows), making it a bit less likely that an index on the foreign key columns will help performance significantly (unless an [bitmap] index join is possible).  The more common performance reason for indexing foreign key columns is to reduce the threat of TM enqueue contention on the child table when rows are updated or deleted in the parent table – this reason was not mentioned in this recipe.  The recipe also gives the impression that all foreign keys should be indexed, while it is primarily those cases where the unique/primary column(s) in the parent table is subject to change that will benefit the most from indexing the foreign key columns.  The problem with the SQL statement found in the recipe appears when concatenated (multi-column) indexes reference the foreign key columns.  The SQL statement found in the book does not consider the foreign key column’s position in the concatenated index. 
While the optimizer could choose to use an index skip scan (consider what might happen if the foreign key column is the sixth column in a concatenated index) for a SELECT statement that does not reference the leading columns of the index in the WHERE clause, the same is not true for TM enqueues.  Thus, the SQL statement misidentifies cases where the foreign key columns should be indexed, but are not.  A DISTINCT clause may be necessary to eliminate duplicate rows returned by the query. (pages 59-60)
  • In Recipe 2-6 on page 62, the final paragraph states the following, “A concatenated index that is used for skip-scanning is more efficient than a full table scan. However, if you’re consistently using only a lagging edge column of a concatenated index, then consider creating a single-column index on the lagging column.”  This statement along with other statements in this chapter, if generalized as implied, suggests that index access paths are always more efficient than full table scans – this is simply not true often enough to make this statement in such black and white wording (pages 60-62).  The ordering of the rows in the table’s blocks compared to the sort order of the index (externalized as the CLUSTERING_FACTOR index statistic), the percentage of the table to be retrieved through the index access path, and the cardinality of the leading index column (in the case of the index skip scan) are all factors that must be considered.  It is also important to point out that while adding an extra column to a concatenated index may help the performance of one SQL statement, doing so may hurt the performance of other SQL statements that had been using that index due to the likely increased CLUSTERING_FACTOR and the increased index size (number of leaf blocks required). (pages 60-62)
  • Recipe 2-11 claims to show a method of adding an index to a database such that adding that index will not affect third party vendor applications.  The recipe provides a SQL statement that sets the OPTIMIZER_USE_INVISIBLE_INDEXES parameter to TRUE at the SYSTEM level (ALTER SYSTEM), and then states, “use a hint to tell the optimizer that the index exists”.  The hint is entirely unnecessary; changing the parameter at the SYSTEM scope is sufficient to allow all sessions to use the invisible index, thus potentially affecting the third party vendor application.  Creating a UNIQUE invisible index will still affect the third party application on inserts, updates, and deletes regardless of the setting of the OPTIMIZER_USE_INVISIBLE_INDEXES parameter.  Regardless of the setting of the parameter, the invisible indexes may still be used by the optimizer for cardinality calculations (thus adding the index could change execution plans, even if the session cannot use the index), and to prevent TM enqueues on foreign key columns. The recipe also states that one of the uses of invisible indexes is as a test to determine whether or not it is safe to later drop the index – however, for the reason just mentioned, this is not a complete/safe test to determine whether or not an index may be safely dropped. (pages 70-71 reference reference2)
  • Recipe 2-12 The recipe does not explain why “bitmap indexes are ideal for low-cardinality columns (few distinct values)” – additional clarification is needed.  Ideally, bitmap indexes on low-cardinality columns would be bit ANDed together with other bitmap indexes, thus making two or more indexes on low-cardinality columns much more selective before touching the table. While bitmap indexes cannot be used on columns defined as unique, those types of indexes can be used even on columns that are not low-cardinality.  Thus the need for additional clarification. (page 72)
  • Recipe 2-15 describes using index monitoring and V$OBJECT_USAGE to determine if an index was used, so that unused indexes may be dropped.  The same risks are present with this approach as with marking an index as invisible, as described in recipe 2-11. (pages 75-76)
  • The introduction to chapter 3 states that the results of SQL queries and PL/SQL functions may be stored in the shared pool when a results cache is used.  It might be a better idea to be much more specific, and state that Oracle Database is using the Result Cache Memory in the shared pool as indicated in the Oracle Database documentation, but maybe this is explained later. (page 83)
  • In recipe 3-1 the authors mention that they would set the MEMORY_MAX_TARGET to 2000M, and then show a SQL statement that sets that parameter to 2G (that is 2048M rather than 2000M).  This recipe seems to be mirroring a page from the Oracle documentation library, except that the MEMORY_TARGET parameter was set to a value after setting SGA_TARGET and PGA_AGGREGATE_TARGET to 0 in the book, while the MEMORY_TARGET was set first in the documentation (note that MEMORY_TARGET will be set to the value of MEMORY_MAX_TARGET if not explicitly set, so changing the order of the commands may or may not be a problem).  This recipe states that “automatic memory management is the recommended approach to managing Oracle’s memory allocation,” but does not describe any of the potential pitfalls or operating system specific problem areas/limitations (reference1 reference2 reference3 reference4 reference5 reference6).  Beyond telling the reader to simply add the current PGA_AGGREGATE_TARGET and SGA_TARGET values to determine a starting point for the MEMORY_MAX_TARGET parameter, the recipe gives little insight into the “best” value for this parameter. (pages 84-87)
  • In recipe 3-2 the authors state, “if you want a segment to be cached (or pinned) in the buffer pool, you must specify the KEEP buffer pool [when creating the segment].”  Something just doesn’t seem to be right (or complete) about this statement – for one, I wonder what the CACHE and NOCACHE clauses might affect? Secondly, this statement might be implying that blocks belonging to an object will only remain in the buffer cache if those objects are assigned to the KEEP buffer cache.  The recipe states that the approximate size of the KEEP buffer pool can be determined by “summing up the size of all candidate objects for this pool” – great, now what happens when Oracle Database creates up to six (and potentially many more) consistent read versions of the objects’ blocks (reference)?  The last two paragraphs of this recipe seem to indicate that it is the segments themselves (apparently all blocks) that are aged out of the buffer cache when space is needed to cache other blocks, rather than the segments’ blocks that are aged out of the cache – additional clarity is needed in these two paragraphs. The second to last paragraph also makes a claim that I am not sure how to interpret: “Oracle figures that if a single segment takes up more than 10% of (nonsequential) physical reads, it’s a large segment, for the purpose of deciding if it needs to use the KEEP or RECYCLE buffer pools.”  The SQL statement provided in recipe 3-2 is also found in the Oracle Database 9.2 Performance Tuning Manual (with a small, but notable correction), but that SQL statement potentially produces misleading information. (pages 87-88)
  • In recipe 3-3, the book states, “The database will continue to automatically allocate memory to the various components of the SGA…”  Technically, while the intention of this statement is headed in the right direction, confusing the technical terms is best avoided.  The wording could have been cleaned up, keeping in mind that the “database” is actually the files that are stored on disk – the database does not allocate memory. (page 89)
  • In recipe 3-4, the book describes the V$MEMORY_DYNAMIC_COMPONENTS view as showing the current allocations of memory when the MEMORY_TARGET parameter is in use.  The book then shows how to query the view; unfortunately, the example in the book shows querying the V$MEMORY_TARGET_ADVICE view, rather than the V$MEMORY_DYNAMIC_COMPONENTS view.  The Oracle Database 11.2 documentation library indicates that the V$MEMORY_DYNAMIC_COMPONENTS view only shows information about dynamic SGA components, but a query of the view also shows an entry for “PGA Target”, indicating that the Oracle documentation is not completely correct in this case.  It probably would have been helpful if the query of V$MEMORY_RESIZE_OPS found in the book included the END_TIME column since the book indicates that this view may report up to 800 memory resizing operations from various time periods.  The book states that the operation type is one of two modes: grow or shrink, while the Oracle Documentation library indicates that the operation type (the OPER_TYPE column) will report one of six values: STATIC, INITIALIZING, DISABLED, GROW, SHRINK, SHRINK_CANCEL. (pages 90-91)
  • In recipe 3-5, the steps listed in the solution appear to be missing step 1.5, “Click the Memory Advisors” link. The book makes the following statement regarding the estimated improvement in DB time: “The higher the value of improvement in DB time, the better off will be the performance.” – that statement seems to suggest that the performance will definitely improve by adjusting the parameter, rather than that the performance is estimated to improve.  The description of the Memory Size Advice graph states that the maximum memory for the instance is indicated by a dotted straight line, while the graph itself seems to indicate that the maximum is indicated by a solid red line.  This recipe recommends checking the ADDM reports, without first indicating the licensing requirements needed to access those reports.  The book states, “Ideally, you’d want the PGA cache ratio somewhere upwards of around 70%,” but it does not state why 70% is ideal rather than 90% or 50% (the Oracle documentation states that “the optimal zone for cache hit percentage is between 75 and 100 percent.”) (pages 91-93)
  • In recipe 3-6, the book states “The memory you allocate to the PGA component is used by various SQL work areas running in the database” – the SQL work areas are memory allocations, not processes, so the SQL work areas cannot “run”.  The book states that a low value, such as 33.37% for the PGA Cache Hit Percentage is a definite indication of the need to increase the PGA_AGGREGATE_TARGET.  No explanation is provided as to why the first query of V$SQL_WORKAREA_HISTOGRAM excluded workarea executions that required less than 64KB (note that this SQL statement is identical to a SQL statement found in the Oracle Database 9.2 Performance Tuning Guide – the second query of V$SQL_WORKAREA_HISTOGRAM is also found in the same Oracle documentation, as is the query of V$SYSSTAT).  The book states, “One pass is slower than none at all, but a multi–pass operation is a sure sign of trouble in your database, especially if it involves large work areas.” – this is stated as an absolute, while in reality there may be times when the only option is a multi-pass workarea execution due to the data size.  The recipe mentions checking ASH, AWR, and ADDM reports without mentioning the licensing requirements. (pages 93-95)
  • Recipes 3-7 through 3-11 are about the result cache, which is an Enterprise Edition only feature – the book should have mentioned that limitation.
  • Recipe 3-7 seems to closely mirror section 7.6.1 Managing the Server Result Cache of the Oracle 11.2 Performance Tuning Guide. (pages 95-97)
  • Recipe 3-8 seems to have paraphrased sections of the Oracle 11.2 Performance Tuning Guide and sections of the PL/SQL Reference. (pages 97-99)
  • Recipe 3-9 states: “When you run this query [with the NO_RESULT_CACHE hint], the server won’t cache the results of this query any longer, because you’ve specified the MANUAL setting for the RESULT_CACHE_MODE initialization parameter.”  This statement is technically incorrect (or just confusing) – the hint does not change the RESULT_CACHE_MODE for the SQL statement to MANUAL, it simply prevents the SQL statement results from being added to the results cache if the RESULT_CACHE_MODE is set to FORCE, or the RESULT_CACHE property (annotation) of the table(s) involved in the query is set to FORCE.  The recipe also states the following about the table-level RESULT_CACHE mode: “The mode value DEFAULT is, of course, the default value, and this merely removes any table annotations you may have set and doesn’t permit caching of results that involve this table” – this statement is misleading, the query results may still be cached depending on the value of the RESULT_CACHE_MODE parameter and whether or not the RESULT_CACHE hint is specified. (pages 99-103 reference reference2 reference3)
  • Recipe 3-10 states: “You can query the V$CLIENT_RESULT_CACHE_STATS view for details such as the number of cached result sets, number of cached result sets invalidated, and the number of cache misses.”  That view does not exist – the intended view name is likely CLIENT_RESULT_CACHE_STATS$. (pages 103-104)
  • In recipe 3-11, much like the Oracle documentation, the book lists four views (V$RESULT_CACHE_STATISTICS, V$RESULT_CACHE_MEMORY, V$RESULT_CACHE_OBJECTS, and V$RESULT_CACHE_DEPENDENCY) that may be used to monitor the result cache.  However, the recipe does not describe the individual views or show examples of querying those views.  The comment about the RESULT_CACHE_REMOTE_EXPIRATION parameter in this recipe seems to be out of place, while a comment about RESULT_CACHE_RELIES_ON might have been a better approach to explain DML against which specific tables causes invalidation of PL/SQL function cache results.  The bullet points under the heading Restrictions on the “PL/SQL Function Cache” are the same as the restrictions found in the Oracle documentation with one critical difference, the book states “An IN parameter of a function has one of these types: BLOB, CLOB, …, and Record” while the Oracle documentation states, “No IN parameter has one of the following types: ◦BLOB, ◦CLOB, …, ◦Record”. (pages 105-108)
  • Recipe 3-12 states that the flash cache feature is for the Enterprise Edition only and that there are only two supported operating systems, so that is a positive for this recipe.  The Oracle-recommended conditions in the book for enabling the flash cache are missing this potentially important point from the Oracle documentation: “The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial.”  While the documentation states “Any multiplier [of the buffer cache size] less than two would not provide any benefit,” the book states that trial and error should be used to pick a multiplier between 1 and 10. (pages 109-110)
  • In recipe 3-13 we learn how to tune the redo log buffer.  The book states, “since there’s no cost whatsoever to increasing the LOG_BUFFER size, feel free to set it to higher than the suggested maximum of 1MB for this parameter.”  Starting with Oracle Database 10.2 the LOG_BUFFER parameter is automatically set to a value that is slightly smaller than the granule size – the most common granule size for Oracle Database 10.2 databases was likely 16MB (the maximum granule size).  The granule size rules changed with the release of Oracle Database 11.1 and again with Oracle Database 11.2.  With recent Oracle Database releases it is quite possible that the LOG_BUFFER could be auto-set to just under 512MB in size, depending on the size of the SGA (reference).  The ratio of the ‘redo entries’ statistic value divided by the ‘redo log space requests’ statistic value, as shown in the recipe’s SQL statement, probably yields little useful information when adjusting the size of the LOG_BUFFER parameter. (pages 110-112)
  • Recipe 4-10 incorrectly states that the “Parse CPU to Parse Elapsd” statistic found in an AWR report is “how much time the CPU is spending parsing SQL statements. The lower this metric is, the better. In the following example, it is about 2%, which is very low. If this metric ever gets to 5%, it may mean investigation is warranted to determine why the CPU is spending this much time simply parsing SQL statements.”  The book’s definition of this statistic is incorrect – the statistic actually indicates delays (wait events) in the parsing of SQL statements, very likely due to contention between sessions (or possibly excessive competition for the server’s CPUs, although such competition probably would not be explicitly captured in an Oracle wait event).  Ideally, this statistic in an AWR report should be close to 100%.  It appears that the book authors attempted to describe the “PARSE TIME CPU” statistic, which is not found in this section of an AWR report, or attempted to describe a derivative of the “Non-Parse CPU” statistic, which does appear in the Instance Efficiency Percentages section of an AWR report. (pages 133-134)
  • Recipe 4-13 contains three SQL statements that attempt to calculate the SUM of the WAIT_TIME column plus the TIME_WAITED column when selecting from V$ACTIVE_SESSION_HISTORY – this approach can lead to misleading information for a couple of reasons: the session may actually be running on the CPU while this view still captures the last wait event as if it were a currently active wait; and the wait time is cumulative, thus the query would show a wait time of 3 seconds after the second second, 6 seconds after the third second, 10 seconds after the fourth second, etc. (pages 142-145 reference)
  • Chapter 5 is a little loose with terminology, as shown by this quote found on page 148: “It’s easy to find out the percentage of time a database has spent waiting for resources instead of actually executing.”  The database does not wait, and it does not execute – it is a set of files on disk.
  • In recipe 5-1, when describing the V$SESSION_EVENT view, the book states, “The data in this view is available only so long as a session is active.” – considering that one of the columns in V$SESSION (another view described in this recipe) is named ACTIVE, I can see where the quoted statement could lead to confusion.  The recipe seems to rely too much on the information provided by the ratios from V$SYSMETRIC, ignoring the possibility that a session simply burning CPU (through spinning on latches, excessive parsing, or performing unnecessary levels of consistent gets due to poorly selected execution plans) could make it appear that the instance is working optimally; the book states: “On the other hand, high values for the Database CPU Time Ratio indicate a well-running database, with few waits or bottlenecks.”  The last paragraph of the recipe states, “Note that you can query the V$WAITSTAT view for the same information as well.” – that view was not previously introduced in the book and in actuality, the V$WAITSTAT view, related to buffer busy waits, does not retrieve information similar to the V$SYSTEM_EVENT view that was described a sentence or two earlier. (pages 147-150)
  • Recipe 5-2 queries V$ACTIVE_SESSION_HISTORY without first providing a warning regarding the licensing requirements to access this view.  SUM(WAIT_TIME + TIME_WAITED) when grouped by SQL_ID and USER_ID could lead to incorrect assumptions regarding the wait time per SQL statement for the last 30 minutes.  There is a risk of double-counting, or counting incidental CPU usage immediately after a wait event completes, as part of the time spent in wait events for a specific SQL statement.  Using COUNT is likely a better approach than using SUM.  I am left wondering why the authors did not introduce the readers to the APPLICATION_WAIT_TIME, CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME, USER_IO_WAIT_TIME, PLSQL_EXEC_TIME, and JAVA_EXEC_TIME columns of V$SQL (or V$SQLAREA), periodically querying those columns and calculating the delta (change) values for specific SQL statements – those columns have existed as long as ASH (since the release of Oracle Database 10.1). (pages 150-151)
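The delta approach suggested above might be sketched as follows – run the query once, wait a while, run it again, and subtract the first set of values from the second (the column list is from V$SQL; the &SQL_ID substitution variable is just an illustrative placeholder, and the time columns are reported in microseconds):

```sql
SELECT /* sample 1 - repeat later and subtract to obtain the delta values */
  SQL_ID,
  CHILD_NUMBER,
  APPLICATION_WAIT_TIME,
  CONCURRENCY_WAIT_TIME,
  CLUSTER_WAIT_TIME,
  USER_IO_WAIT_TIME,
  PLSQL_EXEC_TIME,
  JAVA_EXEC_TIME
FROM
  V$SQL
WHERE
  SQL_ID='&SQL_ID';
```

Capturing the values before and after a time interval, and subtracting, scopes the wait time to that interval and to the specific SQL statement – something a 30 minute SUM of once-a-second ASH samples cannot reliably provide.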
  • In Recipe 5-3, as of Oracle Database 10.1, “enqueue waits” probably should not be listed among the most common wait events since the most frequently occurring enqueue type waits are now broken out into separate wait events:
    “enq: TX – row lock contention”, “enq: TM – contention”, etc.  Additionally, it would be helpful to determine how the seven wait events printed were selected as the most common, and not one of the following: “rdbms ipc message”, “pmon timer”, “SQL*Net message from client”, “RMAN backup & recovery I/O” or “direct path read”.  It might have been helpful for the book to provide a little information about the seven selected wait events. (pages 151-152)
  • Recipe 5-4 states, “Here are some examples of typical waits in some of these classes,” however the book simply describes some of the characteristics of wait events found in four wait event groups, rather than listing and describing the actual wait events.  The book states that only the “log file sync” wait event is in the Commit class, while Oracle Database 11.1 and above also include the “enq: BB – 2PC across RAC instances” wait event in the Commit class.
    The recipe states, “You can ignore the idle waits.” – that statement is mostly true at the instance level, but mostly not true at the session level. This recipe did not provide much more information than what a query of V$EVENT_NAME would have provided. (pages 152-153)
  • In recipe 5-5, the first query should include the STATE column in the WHERE clause to eliminate all of those waits that have completed in less than 0.01 seconds and those waits that have already completed with the session now burning server CPU time.  The recipe probably should have stated that since Oracle Database 10.1 the columns mentioned in the V$SESSION_WAIT view are also found in the V$SESSION view. (pages 153-154)
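As a rough sketch of the suggested STATE filter (using V$SESSION, which since Oracle Database 10.1 includes the V$SESSION_WAIT columns; the additional filter on the WAIT_CLASS column is my own assumption, added to exclude idle waits):

```sql
SELECT
  SID,
  EVENT,
  STATE,
  WAIT_TIME,
  SECONDS_IN_WAIT
FROM
  V$SESSION
WHERE
  STATE='WAITING'
  AND WAIT_CLASS<>'Idle';
```

Sessions with a STATE value other than WAITING are currently on the CPU, so their last wait event would otherwise be misleading.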
  • Recipe 5-6 demonstrates a problem present in several of the recipes where the problem statement should have provided more detail, rather than something as simple as, “You want to examine Oracle wait event classes.”  Grouping and summing the statistics from the V$SYSTEM_WAIT_CLASS view is unnecessary since the view is already grouped by the WAIT_CLASS column.  The third query has an unnecessary join between the V$SYSTEM_EVENT, V$EVENT_NAME, and V$SYSTEM_WAIT_CLASS views – all of the displayed information, as well as the WAIT_CLASS column used in the WHERE clause, is found in V$SYSTEM_EVENT.  The tip regarding the “db file scattered read” wait event being caused by full table scans of large tables, and the “db file sequential read” wait event being caused by indexed reads, is too shallow a description to be accurate, especially with the introduction of serial direct path reads for full table scans starting in Oracle Database 11.1. Page 156 of the book states, “You can see that the enqueue waits caused by the row lock contention are what’s causing the most waits under these two classes. Now you know exactly what’s slowing down the queries in your database!”  Three problems with these two sentences: 1) the top wait event in the Application and Concurrency classes is shown as “enq: UL – contention”, which is a user lock caused by a call to DBMS_LOCK, rather than the wait event that the book appeared to be describing, “enq: TX – row lock contention”. 2) There was only a single wait for the “enq: UL – contention” event and that wait lasted 5 seconds – is this wait event entry showing what is really slowing down the queries in the database, or could the multiple waits for “latch: shared pool” and “library cache load lock” have a greater impact?  Keep in mind that this view is showing what has happened since the last bounce of the database, so the user lock enqueue might have happened shortly after the database was bounced.
3) Only two wait event classes were checked, how do we know that the performance problem is found in the Application or Concurrency wait classes?  The final query in this recipe does not show that SID 68 “is waiting”, rather it shows that SID 68 “had waited” and *might* still be waiting. (pages 154-157)
  • The use of “a”, “b” and “c” as aliases in queries makes it more difficult for the reader to determine which view supplies each column than meaningful aliases would – for example, “se” for the V$SESSION_EVENT view rather than “a”.  Also, not all columns are aliased, at times leaving the reader guessing which table/view contains the column.
  • Recipe 5-7 appears to closely mirror section 10.3.1 in the Oracle Database 11.2 Performance Tuning Guide documentation. V$SESSION’s ROW_WAIT_OBJ# column refers to the OBJECT_ID column in DBA_OBJECTS, not the DATA_OBJECT_ID column in DBA_OBJECTS as indicated by the book.  The recipe in the book fails to query either V$SESSION_WAIT or V$SESSION for column P3, which indicates the block class – whether the buffer busy wait is for a segment header, data block, undo header, or undo block (while not stated in the book, this information can be cross-referenced with the V$WAITSTAT view).  The recipe states about segment header buffer busy waits when ASSM is not in use, “If you aren’t already using it, you must switch from a manual space management to automatic segment space management (ASSM) – under ASSM, the database doesn’t use free lists.” – that advice, specifically “must”, is a bit extreme, especially when the severity of the segment header buffer busy waits is not listed as a consideration in the book.  This sentence from the book does not seem to apply to buffer busy waits for the segment header: “due to several processes attempting to insert into the same data block – each of these processes needs to obtain to a free list before it can insert data into that block.”  The book states, “One of the reasons for a high data number of buffer busy waits is that an inefficient query is reading too many data blocks into the buffer cache, thus potentially keeping in wait other sessions that want to access one or more of those same blocks… By making the hot block always available in memory, you’ll avoid the high buffer busy waits.” – that classification of buffer busy waits falls under the “read by other session” wait event and not the “buffer busy waits” wait event as of Oracle Database 10.1.
(reference reference2 reference3: Metalink (MOS) ID 15476.1 “FAQ about Detecting and Resolving Locking Conflicts” reference4: Metalink (MOS) Doc ID 603661.1 “ORA-01410: Invalid Rowid When Using rowid Generated from V$Session and Dba_objects”).  The Oracle Database 11.2 Performance Tuning Guide documentation also appears to have used the ROW_WAIT_OBJ# column incorrectly. (pages 157-158 reference5)
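A sketch of the missing block class (P3) check and of the OBJECT_ID join described above – note that ROW_WAIT_OBJ# may contain -1 when the session’s most recent wait is not associated with an object, so this is only a rough illustration:

```sql
SELECT
  S.SID,
  S.P3 BLOCK_CLASS,
  O.OWNER,
  O.OBJECT_NAME,
  O.OBJECT_TYPE
FROM
  V$SESSION S,
  DBA_OBJECTS O
WHERE
  S.EVENT='buffer busy waits'
  AND S.ROW_WAIT_OBJ#=O.OBJECT_ID;
```

The returned P3 value (block class) may then be cross-referenced with the CLASS column of V$WAITSTAT to distinguish segment header, data block, undo header, and undo block contention.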
  • Recipe 5-8 describes how to resolve log file sync waits.  The book states, “You must change the commit behavior by batching the commits. Instead of committing after each row, for example, you can specify that the commits occur after every 500 rows.” Commits should only be used to mark the end of a transaction; suggesting otherwise potentially leads to data corruption where one portion of a transaction is committed, while another portion cannot be committed due to activity of other sessions (or conversely 499 transactions should not be committed because 1 of the transactions must be rolled back).  This commit recommendation is later followed up by the statement “The log file sync wait event can also be caused by too large a setting for the LOG_BUFFER initialization parameter. Too large a value for the LOG_BUFFER parameter will lead the LGWR process to write data less frequently to the redo log files.” – this is in sharp contrast to recipe 3-13, which stated that there is no risk in sizing the LOG_BUFFER parameter too large.  The book states that changing the COMMIT_WRITE parameter may have an adverse effect on performance – actually, changing that parameter from the default should help performance, while potentially compromising the durability of the writes to the database in the event of instance failure.  The suggestion to check V$SESSION_WAIT to see if the SEQ# column is being updated should indicate which column (P1, P2, or P3) in that view should be checked.  The recommendation telling readers to modify the hidden parameter _LOG_IO_SIZE (described as the LOG_IO_SIZE parameter, without the leading underscore) without providing any warnings about modifying hidden parameters is a bit disappointing.
The example shows the _LOG_IO_SIZE parameter being set to a value of 1,024,000, which, with an OS block size of 512 bytes, would tell Oracle Database that it should start writing the redo log buffer when it contains roughly 500MB, while the default value is the lesser of 1MB or 1/3 of the LOG_BUFFER value.  The recipe failed to mention that excessive time spent in the log file sync wait event could be caused by a CPU starvation condition, and also failed to mention that the redo log buffer contents are written every 3 seconds regardless of the value of the _LOG_IO_SIZE parameter. (pages 158-160 reference reference2 reference3 reference4)
  • Recipe 5-9 investigates read by other session waits. While the query of DBA_EXTENTS based on the FILE_ID and BLOCK_ID range will likely work, this query is potentially very slow in all but the smallest of Oracle databases.  It is usually possible to query the ROW_WAIT_OBJ# column of V$SESSION to immediately retrieve the associated OBJECT_ID, which can then be used to quickly look up the OWNER, OBJECT_NAME and OBJECT_TYPE column values from DBA_OBJECTS.  The book states that one of the goals of this process is to identify the SQL statements that are accessing those objects – the SQL_ID and SQL_CHILD_NUMBER, and possibly also PREV_SQL_ID and PREV_CHILD_NUMBER, columns of V$SESSION will provide the link to identify the SQL statement – this fact was not stated in the book.  The book states that the blocks being read from disk are “hot blocks” – that likely is not the case, otherwise those blocks would have remained in the buffer cache and would not need to be re-read from disk. The suggestion to create a new tablespace with a smaller block size and then move these objects into that tablespace is probably not a very good idea due to manageability and bug issues related to having multiple block sizes in the same database. It would have been a good idea for the book to suggest examining the execution plan. (pages 160-161)
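The faster lookup described above might be sketched like this – V$SESSION’s ROW_WAIT_OBJ# column joined directly to DBA_OBJECTS.OBJECT_ID, with the SQL_ID and SQL_CHILD_NUMBER columns retrieved to link each wait to a SQL statement (a rough sketch, not the book’s method):

```sql
SELECT
  S.SID,
  S.SQL_ID,
  S.SQL_CHILD_NUMBER,
  O.OWNER,
  O.OBJECT_NAME,
  O.OBJECT_TYPE
FROM
  V$SESSION S,
  DBA_OBJECTS O
WHERE
  S.EVENT='read by other session'
  AND S.ROW_WAIT_OBJ#=O.OBJECT_ID;
```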
  • Recipe 5-10 states, “The direct path read and direct path read temp events usually indicate that the sorts being performed are very large and that the PGA is unable to accommodate those sorts.” That statement is not true regarding the direct path read waits, if direct path read temp wait events are also present in the database.  The statement is mostly true regarding the direct path read temp wait event, but the wait event will also appear with hash joins, and the sorts need not be “large”.  The two queries provided to investigate the direct path read temp waits to see if the waits are for a temp tablespace are likely a wasted effort – it probably would have been a better idea to investigate the V$SQL_WORKAREA_ACTIVE view to understand why the direct path read temp wait events were appearing for a specific session, down to a specific line in an execution plan.  The second of the two SQL statements contains a typo: “between” is spelled as “betgween”.  The book states that the direct path read wait events can be caused by either reading data from the temp tablespace or by full table scans performed by parallel slaves.  The book failed to mention that serial direct path reads, introduced in Oracle Database 11.1, will cause sessions not performing parallel operations to wait on direct path read wait events while performing full table scans. A very confused statement: “A direct path read event can also occur when the I/O sub system is overloaded, most likely due to full table scans caused by setting a high degree of parallelism for tables, causing the database to return buffers slower that what the processing speed of the server process requires.”  Direct path read waits will occur whether or not the I/O sub-system is overloaded; however, it is the _duration_ of those waits that is likely to increase when the I/O sub-system is overloaded.
At times, the book does not seem to differentiate between the number of times a wait event occurs and the total/average duration of that wait event.  Direct path read waits will be more common in Oracle Database 11.1 and above, and will happen during full table scans even when parallel query is not used, unless serial direct path read is disabled by setting event 10949 to a value of 1. (pages 161-162)
  • Recipe 5-11 seems to closely mirror chapter 7 of the Oracle Database Backup and Recovery User’s Guide from the Oracle Database documentation library, specifically the section titled, “Configuring the Environment for Optimal Flashback Database Performance”.  However, the book appears to misinterpret the Oracle documentation in a couple of cases in this recipe.  First, the book implies that when enabling the flashback database feature, the “flashback buf free by RVWR” wait event will become one of the top wait events.  The documentation states that operating system level file caching may add CPU overhead, and thus recommends against using operating system level file caching for the location where the flashback logs will be stored. The book attempts to combine a couple of sentences found in the Oracle documentation, resulting in a sentence such as the following: “Since flashback logs tend to be quite large, your database is going to incur some CPU overhead when writing to these files.”  The book then mangles the sentence about operating system file caching, stating that it “tends to slow down I/O.”  The other bulleted items are just restatements of the bulleted items found in the Oracle documentation. On page 163 the book states “when thie database is writing” – in addition to the misspelling of the word the, a second problem is that the database technically does not write. (pages 162-163)
  • Recipe 5-12 shows a SQL statement to identify blocked sessions and the blocker of those sessions.  The query has a couple of problems: while the query performs a self-join on the V$LOCK table using the columns ID1 and ID2, it should have also joined on the TYPE column; the query specifies BLOCK=1 when attempting to identify the blocking sessions – the BLOCK column can contain a value of 2 in a RAC environment.  The SQL statement that queries V$LOCK for a BLOCK value greater than 0 to determine if there are any blocking locks in the instance can produce false blocking entries in a RAC environment – selecting from V$SESSION where BLOCKING_SESSION IS NOT NULL might have produced a more usable result. The How It Works section for this recipe seems to misinterpret the Oracle documentation that describes this same behavior, leading to confusion for the reader: “Oracle uses two types of locks to prevent destructive behavior: exclusive and share locks… The concomitant table row held by the first session is merely intended to prevent any other sessions from issuing a DDL statement to alter the table’s structure.” (pages 163-165 reference reference2 reference3)
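Two rough sketches of the corrections described above – the first adds the TYPE column to the V$LOCK self-join and allows for the BLOCK value of 2 seen in RAC environments, while the second uses the simpler BLOCKING_SESSION column of V$SESSION (available as of Oracle Database 10.1):

```sql
SELECT
  B.SID BLOCKER_SID,
  W.SID WAITER_SID,
  B.TYPE,
  B.ID1,
  B.ID2,
  W.REQUEST
FROM
  V$LOCK B,
  V$LOCK W
WHERE
  B.BLOCK>0
  AND W.REQUEST>0
  AND B.TYPE=W.TYPE
  AND B.ID1=W.ID1
  AND B.ID2=W.ID2;

SELECT
  SID,
  BLOCKING_SESSION,
  EVENT,
  SECONDS_IN_WAIT
FROM
  V$SESSION
WHERE
  BLOCKING_SESSION IS NOT NULL;
```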
  • Recipe 5-13 shows a functional SQL statement that does identify blocked sessions and the blocking session. Unfortunately, the recipe also includes another SQL statement that fails to retrieve the ID1 and ID2 columns from V$LOCK.  Examining the output of that SQL statement, the recipe makes a claim that because one session holds a mode 6 TX lock, it is the blocker of another session that requests a mode 6 TX lock – that association is only true if the ID1 and ID2 column (and of course TYPE) values are identical. The recipe includes another SQL statement that queries V$SESSION to determine the blocking session and WAIT_CLASS – that is a helpful query, but it would have been more useful to retrieve the EVENT column rather than the WAIT_CLASS column from V$SESSION. (pages 165-167)
  • Recipe 5-14 simply offers an ALTER SYSTEM KILL SESSION command as the quick solution for resolving blocking locks.  It would have been helpful if the book first suggested that the reader spend a couple of minutes to determine what the blocking session is currently doing, before simply terminating that session – even a statement suggesting the checking of the ACTIVE column of V$SESSION a couple of times for the blocking session would have been a step in the right direction.  The book states, “Oracle uses several types of internal ‘system’ locks to maintain the library cache and other instance related components, but those locks are normal and you won’t find anything related to those locks in the V$LOCK view.” – you will not find library cache locks in the V$LOCK view because those are NOT locks, but latches prior to Oracle Database 11.1, and as of 11.1 most of the library cache related latches are now mutexes.  The book states, “The preceding discussion applies to row locks, which are always taken in the exclusive mode.” – it probably would have been helpful for the book to clarify that a session may attempt to acquire a TX type lock in a mode other than exclusive mode (mode 6); for example, when a session enqueues (wait event “enq: TX – row lock contention”) due to a potential primary key violation, it will attempt to acquire a TX lock in share mode (mode 4) – the same requested lock mode will appear when the session is waiting for an interested transaction slot or a shared bitmap index fragment.  (pages 167-168)
  • Recipe 5-15 The How it Works section states, “You can certainly use Oracle Enterprise Manager to quickly identify a locked object, the ROWID of the object involved in the lock, and the SQL statement that’s responsible for the locks.”  In general, it is not possible to determine the exact SQL statement that the blocking session executed which resulted in another session being blocked.  It is possible to determine the SQL_ID for the _currently_ executing SQL statement in the blocking session, as well as the SQL_ID for the previously executed SQL statement, but there is a fair chance that neither of those SQL statements is responsible for a blocking lock. (pages 168-169)
  • Recipe 5-16 states, “The enq: TM – contention event is usually due to missing foreign key constraints on a table that’s part of an Oracle DML operation. Once you fix the problem by adding the foreign key constraint to the relevant table, the enqueue: TM –contention event will go away.” – the authors should have proofread these two sentences a second time – the enqueues are NOT caused by a missing foreign key constraint, it is typically a problem of missing index(es) on a foreign key column(s) in a child table.  It is helpful that recipe 5-16 includes DDL to create two tables to demonstrate problems caused by unindexed foreign keys, but it would have been helpful if the script also inserted a couple of rows into the tables, because in a sense the book author’s statement is either right or wrong depending on how the test case is constructed.  The SQL statement described as finding all of the unindexed foreign key columns in the database (note that this SQL statement is different from the one found in recipe 2-5) has a couple of flaws: 1) the SQL statement only examines indexes and constraints defined in the current user’s schema (not all schemas in the database), 2) changes in the column order between the foreign key constraint columns and the index columns will cause false alarms, 3) duplicate rows are eliminated and the foreign key name is not retrieved, so the output might occasionally show two adjacent rows that likely are not supposed to be displayed together.  The test case and SQL statement to locate unindexed foreign key columns very closely resembles a test case and SQL statement found on the Confio website.  (pages 169-171)
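A hedged sketch of a database-wide version of the unindexed foreign key check, addressing the first and third flaws mentioned above (all schemas examined, constraint name retrieved); note that it requires an index column at the same position as each constraint column, so a valid covering index with a different leading-column order would still be flagged – treat the output as candidates for review, not certainties:

```sql
SELECT
  C.OWNER,
  C.TABLE_NAME,
  C.CONSTRAINT_NAME,
  CC.COLUMN_NAME,
  CC.POSITION
FROM
  DBA_CONSTRAINTS C,
  DBA_CONS_COLUMNS CC
WHERE
  C.CONSTRAINT_TYPE='R'
  AND C.OWNER=CC.OWNER
  AND C.CONSTRAINT_NAME=CC.CONSTRAINT_NAME
  AND NOT EXISTS (
    SELECT
      1
    FROM
      DBA_IND_COLUMNS IC
    WHERE
      IC.TABLE_OWNER=C.OWNER
      AND IC.TABLE_NAME=C.TABLE_NAME
      AND IC.COLUMN_NAME=CC.COLUMN_NAME
      AND IC.COLUMN_POSITION=CC.POSITION)
ORDER BY
  C.OWNER,
  C.TABLE_NAME,
  C.CONSTRAINT_NAME,
  CC.POSITION;
```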
  • Recipe 5-17 appears to have borrowed the SQL statement from a blog operated by someone other than one of the book authors.  The book states, “You can execute the following statement based on ASH, to find out information about all locks held in the database during the previous 5 minutes.”  The query in this recipe actually retrieves data from V$ACTIVE_SESSION_HISTORY for the last 2 minutes, does not retrieve just enqueues, and the data provided in V$ACTIVE_SESSION_HISTORY does not indicate ALL enqueues – just those that the sessions were waiting on the instant that the once-a-second sample was collected.  It is unclear why the SQL statement selecting from V$SQL is returning the APPLICATION_WAIT_TIME column (possibly because enqueues tend to fall into the Application wait class), but not the CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME, USER_IO_WAIT_TIME, PLSQL_EXEC_TIME or JAVA_EXEC_TIME columns – it is probably also important to point out that the values returned by these columns are not specific to one user session, nor to the specific time interval queried in V$ACTIVE_SESSION_HISTORY.  There is a risk (greater in Oracle Database 11.1 and even greater in Oracle Database 11.2) that a SQL statement may have multiple execution plans – the execution of DBMS_XPLAN.DISPLAY_AWR probably should have also specified the SQL_PLAN_HASH_VALUE value from the V$ACTIVE_SESSION_HISTORY view to retrieve the actual execution plan for the SQL statement that was executing in the session of interest. The “typical” query of V$SESSION_WAIT_HISTORY appears to be borrowed from another blog article, without explaining the purpose of the query.  (pages 171-174)
  • In recipe 5-18 the queries of V$ACTIVE_SESSION_HISTORY, where the SUM of WAIT_TIME + TIME_WAITED is determined when grouping on various columns, are significantly flawed when you consider that the view essentially shows samples of V$SESSION for each second, and only for active sessions.  Consider a session that has waited 10 seconds in a single wait event.  The queries will likely show that the session waited 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55 seconds (± a couple of seconds depending on the update frequency) during the first 10 seconds of that wait event.  Assume that the session had already waited 240 seconds and continued to wait for an additional 10 seconds – the queries would show that the session waited 240 + 241 + 242 + 243 + 244 + 245 + 246 + 247 + 248 + 249 + 250 = 2695 seconds in that 10 second time interval.  The same mistake is found on pages 1170-1171 of the book “Expert Oracle Database 11g Administration”.  (pages 174-175)
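The overstatement is easy to demonstrate with the same DUAL/CONNECT BY technique used in the test case script – summing once-per-second samples of a cumulative counter reports 55 seconds for a single 10 second wait, while the true duration is simply the final cumulative value:

```sql
SELECT
  SUM(LEVEL) SUMMED_SAMPLES,   -- what the book's queries effectively report: 55
  MAX(LEVEL) ACTUAL_WAIT_SEC   -- the true single wait duration: 10
FROM
  DUAL
CONNECT BY
  LEVEL<=10;
```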
  • Recipe 5-19, while the query that UNIONs the non-idle wait events found in V$SYSTEM_EVENT with all rows from V$SYS_TIME_MODEL (with the exception of the “background elapsed time” and “background cpu time” statistics) is interesting, its usefulness is limited due to time-scoping issues, failure to eliminate all background statistics from V$SYS_TIME_MODEL (if that was the intention), ignoring the hierarchy of the statistics found in V$SYS_TIME_MODEL, and the fact that the V$SYS_TIME_MODEL view shows both wait related and non-wait related statistics.  While the output from V$SESSION is described, the actual output from the SQL statement does not appear in the book.  The query that joins V$SESSION and V$SQL to retrieve the SQL statement that caused session 81 to wait in a TX enqueue is flawed: the SQL statement potentially returns two distinct SQL statements per session (one for the SQL_ID column and one for the PREV_SQL_ID column); because the SQL statement joins to V$SQL, it should also join on the columns SQL_CHILD_NUMBER and PREV_CHILD_NUMBER to avoid duplicate rows caused by the existence of multiple child cursors for the same SQL statement; it is quite possible that neither of the two distinct SQL statements that are potentially returned is in fact the SQL statement that caused session 81 to be blocked.  The book advocates killing a session as the only initial solution if that session is blocking several other sessions – the book probably should have suggested investigating what the session is doing in the database before blindly terminating the session.  There appears to be a logic error in the book, where the authors suggested using the V$SYSTEM_EVENT view to investigate the specific waits that are identified in a specific wait class of the V$SESSION_WAIT_CLASS view – a better approach would have used the V$SESSION_EVENT view since both views are specific to a single session.
It probably is not too beneficial for the book to identify that the WAIT_CLASS_ID for the Application class of wait events is 4217450380, and the join to the V$EVENT_NAME view is unnecessary – since the introduction of the wait classes with Oracle Database 10.2, every commonly used view that includes the WAIT_CLASS_ID column also contains the WAIT_CLASS column, where the wait classification “Application” would be found.  The quote from the book, “The SELECT FOR UPDATE NOWAIT statement locks the row without waiting” is simply incorrect – if the requested rows cannot be locked, Oracle will return the error “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired” and no rows will be locked or selected. (pages 175-178)
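The session-level approach mentioned above might be sketched as follows, using the SID from the book’s example (the WAIT_CLASS column appears in V$SESSION_EVENT as of Oracle Database 10.1; TIME_WAITED is reported in centiseconds):

```sql
SELECT
  SID,
  EVENT,
  TOTAL_WAITS,
  TIME_WAITED,
  MAX_WAIT
FROM
  V$SESSION_EVENT
WHERE
  SID=81
  AND WAIT_CLASS='Application';
```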
  • Recipe 5-20 includes the same non-time scoped query found in recipe 5-19 that unions the non-idle wait events found in V$SYSTEM_EVENT with all rows from V$SYS_TIME_MODEL (with the exception of the “background elapsed time” and “background cpu time” statistics).  The recipe shows the “Top 5 Timed Events” section from an AWR report, describing that information as the “top 5 wait events” – this label may cause confusion because CPU utilization is not a wait event.  It would be helpful for the book to explain why the percentages add up to 138.9%, when 100% should be the maximum, and why the “wait list latch free” wait (not an actual name of a wait event) appears in the list of wait events.  In the solution for excessive shared pool and library cache latch counts, the book suggests specifying a value for the CURSOR_SPACE_FOR_TIME parameter. The book does not specify a suggested value for this parameter, does not indicate that the parameter is deprecated as of Oracle Database 11.1, does not suggest increasing the value for the SESSION_CACHED_CURSORS parameter, and seems to confuse the word “variable” with the word “literal” while suggesting that it is the database that executes SQL statements: “For example, a database may execute a SQL statement 10,000 times, each time with a different value for a variable.”  The book states the following about setting the CURSOR_SHARING parameter to SIMILAR: “the SIMILAR setting seems a safer way to go about forcing the database to use bind variables instead of literals.” Metalink (MOS) Doc ID 1169017.1 states that the parameter value of SIMILAR is deprecated as of Oracle Database 11.1 and will be removed in the 12.1 release.
The book also states that the authors have not seen any issues with setting the CURSOR_SHARING parameter to FORCE – there are numerous bug reports in Metalink (MOS) that indicate setting that parameter to FORCE may cause wrong results to be returned, ORA-00600 errors, instance (or session) crashes, excessively long parse times, execution performance problems, problems with function-based indexes, and various other problems.  Describing the CURSOR_SHARING parameter as a silver bullet to cure latch contention just might be true – there probably will not be much latch contention when the database instance is down. (pages 178-180, reference)
  • Recipe 5-21 should indicate that an Enterprise Edition license and an additional cost Diagnostics Pack license are required to use the Row Locks link in the Enterprise Manager Database Control. (pages 181-182)
  • Recipe 5-22 should indicate that an Enterprise Edition license and an additional cost Diagnostics Pack license are required to use the Active Sessions graph in Enterprise Manager Database Control. (page 182)
  • Figure 6-1 “Troubleshooting poor performance” shows a flow chart that indicates how a DBA or performance analyst should proceed when a user reports sluggish performance. The idea behind a flow chart for this process is great, but there are a couple of problems with the execution of the flow. If a user is reporting a problem, it probably would be a better idea to stay away from instance-wide analyzing tools such as AWR/Statspack reports that can easily hide performance issues that are specific to a single user session, and instead use a 10046 trace and/or the session-level wait interface. The flow chart lists bulleted items that could be tried, but it is not clear from the flow chart and its description if the reader should perform all of the bulleted items, some of the items, or a single item from the list. For instance, if an identified SQL statement is not tunable, is the reader supposed to buy an Exadata box, hire Oracle consultants, and then find a new job (I suspect that the last item included in this portion of the flow chart is intended as a hidden joke)? (page 186)
  • Table 6-1 on pages 191-192 (in recipe 6-2) shows column descriptions from the vmstat output – the descriptions of those columns printed in the book are nearly identical, word for word, to the descriptions found by typing “man vmstat” at a Linux command line.
  • Recipe 6-6 includes a couple of SQL statements that are intended to show the SQL statements contributing the most to the I/O load, the sessions that are currently waiting for I/O to complete, and the database objects that are involved in the disk activity. Since the first query accesses V$SQL, retrieving the DIRECT_WRITES and DISK_READS columns, there are both time-scoping problems (the SQL statement may have been executed days ago and might not have anything to do with the current iostat output) and the query does not distinguish between multi-block reads and single-block reads (which is worse: 10,000 blocks read in 79 multi-block reads, or 5,000 blocks read one block at a time using single-block reads?). Recent Oracle Database releases have the columns PHYSICAL_READ_REQUESTS, PHYSICAL_READ_BYTES, PHYSICAL_WRITE_REQUESTS, PHYSICAL_WRITE_BYTES, and USER_IO_WAIT_TIME that would also seem to be helpful.  The second and third SQL statements, because they query V$SESSION, are much less likely to exhibit time-scoping issues because those queries indicate the current waits.  One of the problems with these two queries is that they only provide output when the sessions are waiting on “db file sequential read” or “db file scattered read” wait events – with the introduction of Oracle Database 11.1 a couple of years ago, serial direct path reads became much more common when multi-block reads were performed; reads and writes to the temp tablespace are also ignored by the queries.  The third query joins the ROW_WAIT_OBJ# column from V$SESSION to the DATA_OBJECT_ID column of DBA_OBJECTS – it should instead be joined to the OBJECT_ID column. (pages 198-201)
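A minimal sketch of the corrected join follows; the wait event list is illustrative rather than exhaustive, and ROW_WAIT_OBJ# matches DBA_OBJECTS.OBJECT_ID (not DATA_OBJECT_ID, which may drift apart from OBJECT_ID after a table is truncated or moved):

```sql
-- Sessions currently waiting on common I/O related wait events, joined
-- to the object being accessed at the time of the wait
SELECT
  s.sid,
  s.event,
  o.owner,
  o.object_name,
  o.object_type
FROM
  v$session s,
  dba_objects o
WHERE
  s.row_wait_obj# = o.object_id
  AND s.event IN ('db file sequential read', 'db file scattered read',
                  'direct path read', 'direct path read temp');
```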
  • Recipe 6-9 shows how to map a resource intensive operating system process to an Oracle Database session. One of the SQL statements in the recipe retrieves the SQL_ID column from V$SESSION, but not the SQL_CHILD_NUMBER column.  A SQL statement follows in the recipe that calls DBMS_XPLAN.DISPLAY_CURSOR to retrieve the execution plan, however that SQL statement does not specify a value for the SQL child number, thus the call defaults to SQL child number 0.  Therefore, one of three things will happen: 1) no execution plan will be displayed if the plan for child number 0 is no longer in the shared pool; 2) an incorrect execution plan will be displayed if the execution plan for SQL child number 0 does not match the execution plan for the SQL child number that is actually executed by the resource intensive session; 3) the correct execution plan will be returned if the resource intensive session is using child number 0 or the same execution plan as is used by child number 0.  Had the SQL statement querying V$SESSION also retrieved the SQL_CHILD_NUMBER column, that value could have been specified for the second parameter in the DBMS_XPLAN.DISPLAY_CURSOR call, or NULL could have been specified for the second parameter so that all execution plans for the various child cursors of the specified SQL_ID would be retrieved.  It is also important to keep in mind that the SQL_ID retrieved by the query of V$SESSION may not identify the resource intensive SQL statement executed by the session – it is just the SQL statement that was executing at the instant the query of V$SESSION was executed. (pages 204-207)
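A sketch of the safer approach, assuming the session of interest is identified by the bind variable :sid (a hypothetical placeholder):

```sql
-- Retrieve the child number along with the SQL_ID
SELECT sql_id, sql_child_number
FROM   v$session
WHERE  sid = :sid;

-- Pass the retrieved child number as the second parameter, or NULL to
-- display the plans for all child cursors of the SQL_ID
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```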
  • Recipe 6-10 should have detailed some of the risks associated with terminating Oracle sessions with either the “ALTER SYSTEM KILL” command or the operating system “kill -9” command (other than just stating that you should be careful not to kill the wrong process). Killing processes used by sessions can lead to stability problems in the instance, especially if the instance is configured for shared servers, or if the session was in the process of executing a distributed query (see Metalink/MOS Bug ID 8686128 that affects 10.2.0.4 and is supposed to be fixed in the not yet released 12.1, and Bug ID 12961905), or if auditing is enabled (see Metalink/MOS Doc 604511.1), or if resource limits are in use (see Metalink/MOS Doc 561322.1). (pages 207-208)
  • Recipe 7-1 includes a SQL statement (page 210) that is described as showing the actual redo that is generated in a database – the query actually shows the sum of the bytes in the datafiles that belong to the online undo tablespace.  The formula shown for setting the UNDO_RETENTION parameter seems to calculate a value for UNDO_RETENTION that is unlikely to cause the datafiles for the undo tablespace to grow, if the statistics for the last seven (or four) days represent typical activity. (pages 209-213)
  • Recipe 7-2 shows a couple of SQL statements that are intended to show what is consuming the most undo.  It probably would be a good idea to define “consuming” in this context – should it be the case that a normal SELECT statement _consumes_ undo when it builds a read consistent view of data, while INSERT, UPDATE, and DELETE statements _generate_ the undo?  The “query to find out which session is currently using the most undo in an instance” (third query on page 214) actually does not show which session is _currently using_ the most undo, nor does it show which session generated the most undo that is _currently_ in the undo tablespace – instead it shows which session generated the most bytes of undo data _since connecting_. (pages 213-214)
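For comparison, a sketch of a query that shows the undo currently held by active transactions – V$TRANSACTION.USED_UBLK reports the undo blocks in use right now, rather than a running total since connection:

```sql
-- Active transactions ordered by undo blocks currently held; USED_UREC
-- is the corresponding count of undo records
SELECT
  s.sid,
  s.username,
  t.used_ublk,
  t.used_urec
FROM
  v$transaction t,
  v$session s
WHERE
  t.ses_addr = s.saddr
ORDER BY
  t.used_ublk DESC;
```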
  • Recipes 7-3 (resolving snapshot too old errors), 7-6 (resolving unable to extend temp segment), 7-7 (resolving open cursor errors), 7-9 (invoking the Automatic Diagnostic Repository command interpreter), 7-10 (viewing the alert log from ADRCI), 7-11 (viewing incidents with ADRCI), 7-12 (packaging incidents for Oracle support), 7-13 (running a database health check), and 7-14 (creating an Oracle test case for Oracle Support) seem to have drifted too far from the title of the book, into what amounts to recipes for _administering_ your database when errors happen.  As such, this review will only briefly touch on a couple of these recipes.
  • Recipe 7-4 suggests that when the V$TEMP_SPACE_HEADER view indicates that BYTES_FREE is 0 for the temp tablespace, that is a serious problem for the DBA – it is quite possible that while BYTES_FREE might show 0, that does not mean that all of the existing extents are in use and cannot be automatically reused by a session (reference reference2 reference3). (page 217)
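One way to see that previously allocated temp extents may still be reusable is to compare the V$TEMP_SPACE_HEADER output with V$SORT_SEGMENT; a sketch:

```sql
-- FREE_BLOCKS shows extents that are allocated in the sort segment but
-- available for reuse; BYTES_FREE = 0 in V$TEMP_SPACE_HEADER merely
-- means all extents have been allocated at some point
SELECT
  tablespace_name,
  total_blocks,
  used_blocks,
  free_blocks
FROM
  v$sort_segment;
```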
  • Recipe 7-7 suggests using a 10046 trace to determine if a session opens cursors but fails to close those cursors.  This is a good suggestion, but as of Oracle Database 11.1 it is not necessary, as suggested by the book, to check whether cursor numbers are re-used in order to determine if the cursors opened by the session are being closed.  Oracle Database 11.1 and above output an explicit CLOSE # line every time a cursor closes (the type= parameter indicates if the cursor is added to the session cursor cache).  With the changes to 10046 traces made in 11.2.0.2, exceptionally long cursor numbers (for example 90902224, 47112801352808) representing the address of the cursor are now printed in 10046 trace files rather than 1, 2, 3 or 4 digit numbers that explicitly translate to a specific slot number in the open cursor array for a session.  These large cursor numbers could pose a bit of difficulty for a person trying to determine if the cursor numbers changed for a specific SQL statement, and just because the number changes, that does not mean that the previous cursor for this SQL statement was left open.  The book states “You can use this [SESSION_CACHED_CURSORS] parameter to prevent a session from opening an excessive number of cursors, thereby filling the library cache or forcing excessive hard parses” – actually, the SESSION_CACHED_CURSORS parameter affects the performance impact of soft parses, not hard parses, and that parameter is NOT designed to prevent a session from holding open an excessive number of cursors – that is the responsibility of the OPEN_CURSORS parameter that was described earlier in the recipe.  The book states, “Session cursor caches use the shared pool,” and then indicates that if automatic memory management is in use, the shared pool size will be increased if necessary due to a change in the SESSION_CACHED_CURSORS parameter.
This seems to be misleading – the cached cursors are held in the UGA, which is typically found in the PGA memory in dedicated server mode and in the SGA in shared server mode (reference1, reference2; from one of the references: “Every individual holding a cursor open has an entry in x$kgllk – which is in the SGA – and these entries seem to be 172 bytes long in 10g (152 in 9i). So, clearly, if you hold more cursors open, you will be using more memory for these structures.” – this quote might back up the comment found in the book: in the SGA, but not necessarily in the shared pool).  SGA memory regions in recent Oracle release versions are allocated in granules, and in a typical production database a granule is at least 16MB in size (larger still in recent releases), so it is quite possible that the shared pool might not actually increase in size, even with a large value for SESSION_CACHED_CURSORS. (pages 222-224)
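A sketch of a query that shows, per session, the statistics relevant to this discussion – the number of cursors currently open versus the session cursor cache activity:

```sql
-- Open cursor count versus cursor cache usage for connected user
-- sessions; OPEN_CURSORS limits 'opened cursors current', while
-- SESSION_CACHED_CURSORS governs the cache reflected in the other two
SELECT
  s.sid,
  sn.name,
  ss.value
FROM
  v$sesstat ss,
  v$statname sn,
  v$session s
WHERE
  sn.name IN ('opened cursors current', 'session cursor cache count',
              'session cursor cache hits')
  AND sn.statistic# = ss.statistic#
  AND ss.sid = s.sid
  AND s.username IS NOT NULL
ORDER BY
  s.sid,
  sn.name;
```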

——

Interesting And/Or Well-Above-Average Bits of Information

  • In recipe 1-1, good advice to establish default data tablespaces and tempfile tablespaces for new database users: “You don’t want users ever creating objects in the SYSTEM tablespace, as this can adversely affect performance and availability.” (pages 2-4)
  • In recipe 1-3, a helpful quote, “Sometimes folks read about a feature and decide to implement it without first knowing what the performance benefits or maintenance costs will be. You should first be able to test and prove that a feature has solid performance benefits.” (pages 8-9)
  • Recipe 1-10 includes an interesting approach to sending automated emails with Segment Advisor output. (pages 27-28)
  • In recipe 2-8 is a warning that once a function-based index is created on a column, DDL cannot be performed on that column until the function-based index is dropped. (pages 64-66)
  • Recipe 2-12 correctly states that bitmap indexes and bitmap join indexes are only available in the Enterprise Edition of Oracle.
  • Recipe 2-17 describes reclaiming unused index space, using the Segment Advisor to determine which indexes to rebuild. It is important that this recipe provides valid reasons for rebuilding indexes, including corruption, changing storage characteristics, and indexes marked as unusable. (page 80)
  • Recipe 4-1 includes a notes section that correctly indicates that the database must be licensed for the Diagnostics Pack to use AWR, and the 11.1 and above CONTROL_MANAGEMENT_PACK_ACCESS parameter must be set to the correct value. (pages 113-115)
  • Recipe 4-2 demonstrates how to change the frequency of AWR snapshots and the amount of time those snapshots remain in the database. (pages 116-117)
  • Recipe 4-3 provides a warning that it is not possible to generate AWR reports across database bounces because the statistics in the dynamic performance views are lost on each bounce. (pages 117-120)
  • Recipe 4-5 demonstrates how to use the awrsqrpt.sql script to retrieve execution statistics for a SQL statement that was captured in AWR. (pages 121-122)
  • Good point in Recipe 5-3: “However, It’s important to understand that wait events only show the symptoms of underlying problems – thus, you should view a wait event as a window into a particular problem, and not the problem itself.”
    (page 152)
  • Recipe 6-1 shows several useful Linux commands to retrieve disk space usage and the files that are contributing the most to the disk space consumption. The df and find commands are demonstrated to locate large files and directories containing many files; the du command is used to show the largest subdirectories (pages 187-190)
  • Recipe 6-2 shows the vmstat command on Linux, and briefly describes Oracle’s OS Watcher (pages 190-192)
  • Recipe 6-3 demonstrates the Solaris prstat and ps utilities (pages 192-194)
  • Recipe 6-4 demonstrates the top utility (pages 194-197)
  • Recipe 6-5 demonstrates the ps utility (pages 197-198)
  • Recipe 6-6 demonstrates the iostat utility (pages 198-201)
  • Recipe 6-7 demonstrates the netstat utility (pages 201-202)
  • Recipe 6-8 demonstrates the ping, telnet, and tnsping utilities (pages 202-203)
  • Recipe 6-9 demonstrates the SQL*Plus ORADEBUG utility’s ability to retrieve the current SQL statement executing for an operating system process ID and also how to retrieve the trace file name for the session (side note: it would have been helpful if the book mentioned that SYSDBA permissions are required to use ORADEBUG) (pages 204-207)
  • Recipe 6-10 demonstrates the operating system kill utility.
  • Recipe 7-6 – good example showing that an ORA-01652: unable to extend temp segment error message can point to a permanent tablespace, and not just at the temp tablespace.

——

Currently, the review includes chapters 1 through 6, and portions of chapter 7.  If time permits, I will continue adding review entries for the remaining recipes in the book.  The “Oracle Database 11g Performance Tuning Recipes” book is partially visible in Google Books.

——

Blog articles that reference the “Oracle Database 11g Performance Tuning Recipes” book:

Invalid Hints are Silently Ignored? An Invalid USE_HASH Hint Transforms a Sort-Merge Join into a Nested Loops Join

Using TKPROF for Analyzing 10046 Extended SQL Trace Files – What is Wrong with this Quote?

Inline Views – What is Wrong with this Quote?

10046 Extended SQL Tracing for a Session – What is Wrong with this Quote?

Unindexed Foreign Keys – What is Wrong with this Quote?

Parse CPU to Parse Elapsd – What is Wrong with this Quote?

KEEP Pool – What is Wrong with this Quote?

The CURSOR_SHARING Parameter is a Silver Bullet – What is Wrong with this Quote?

Find Objects with the Greatest Wait Time – What is Wrong with this Quote?

Addressing TM Enqueue Contention – What is Wrong with this Quote?

Row Lock is Synonymous with TX Lock, and is a Lock on a Single Row of a Table – Verify the Documentation Challenge

NOLOGGING Option and log file sync Waits – What is Wrong with this Quote?

V$SESSION’s ROW_WAIT_OBJ# Column Shows the DATA_OBJECT_ID – Verify the Documentation Challenge





Unindexed Foreign Keys – What is Wrong with this Quote?

5 09 2011

September 5, 2011

I often look for easier ways to accomplish time consuming tasks, and I suspect that is part of the reason why I buy so many computer books.  In Recipe 2-5 (page 60) of the “Oracle Database 11g Performance Tuning Recipes” book there is a SQL statement to determine which database columns (owned by the user) have foreign key constraints without indexes.  The same SQL statement is also found on page 243 of the book Oracle SQL Recipes: A Problem-Solution Approach.  What caught my attention is the length of the SQL statement – it is considerably shorter than the SQL statement I had been using, so that should certainly reduce the amount of unnecessary typing (and free up a couple of bytes in the library cache).  The SQL statement as it appears in these two books:

select
  a.constraint_name cons_name
 ,a.table_name tab_name
 ,b.column_name cons_column
 ,nvl(c.column_name,'***No Index***') ind_column
from user_constraints a
     join
     user_cons_columns b on a.constraint_name = b.constraint_name
     left outer join
     user_ind_columns c on b.column_name = c.column_name
                       and b.table_name = c.table_name
where constraint_type = 'R'
order by 2,1;

Using A, B, and C for table aliases… that must be what is wrong with the above SQL statement.

What, if anything, is wrong with the above quote from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.

In case you are wondering, the following is the SQL statement that I had been using to locate unindexed foreign key columns (this SQL statement is based on a SQL statement that was found in the Expert Oracle Database Architecture book).

SELECT
  DECODE(B.TABLE_NAME, NULL, '*Check*', 'OK' ) STATUS,
  A.OWNER,
  A.TABLE_NAME,
  A.COLUMNS,
  B.COLUMNS INDEX_COLUMNS
FROM
  (SELECT
    A.OWNER,
    SUBSTR(A.TABLE_NAME,1,30) TABLE_NAME,
    SUBSTR(A.CONSTRAINT_NAME,1,30) CONSTRAINT_NAME,
    MAX(DECODE(POSITION, 1,     SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 2,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 3,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 4,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 5,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 6,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 7,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 8,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION, 9,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,10,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,11,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,12,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,13,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,14,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,15,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) ||
    MAX(DECODE(POSITION,16,', '||SUBSTR(COLUMN_NAME,1,30),NULL)) COLUMNS
  FROM
    DBA_CONS_COLUMNS A,
    DBA_CONSTRAINTS B
  WHERE
    A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
    AND A.OWNER=B.OWNER
    AND B.CONSTRAINT_TYPE = 'R'
  GROUP BY
    A.OWNER,
    SUBSTR(A.TABLE_NAME,1,30),
    SUBSTR(A.CONSTRAINT_NAME,1,30) ) A,
    (SELECT
      TABLE_OWNER,
      SUBSTR(TABLE_NAME,1,30) TABLE_NAME,
      SUBSTR(INDEX_NAME,1,30) INDEX_NAME,
      MAX(DECODE(COLUMN_POSITION, 1,
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 2,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 3,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 4,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 5,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 6,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 7,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 8,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION, 9,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,10,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,11,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,12,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,13,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,14,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,15,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) ||
      MAX(DECODE(COLUMN_POSITION,16,', '||
      SUBSTR(COLUMN_NAME,1,30),NULL)) COLUMNS
    FROM
      DBA_IND_COLUMNS
    GROUP BY
      TABLE_OWNER,
      SUBSTR(TABLE_NAME,1,30),
      SUBSTR(INDEX_NAME,1,30)) B
    WHERE
      A.TABLE_NAME = B.TABLE_NAME (+)
      AND A.OWNER=B.TABLE_OWNER(+)
      AND B.COLUMNS (+) LIKE A.COLUMNS || '%'
ORDER BY
  A.OWNER,
  A.TABLE_NAME;

Are you feeling that it might be an ANSI problem with the book’s SQL statement?  I certainly would prefer to use the shorter version of the SQL statement – I have an aversion to unnecessary typing.

While on the subject of creating indexes on foreign key columns, are those indexes created just to reduce the chance of deadlocks, or is it done primarily for SELECT query performance as described in the book (Google Books view)?  Think about this question a bit.
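While thinking about it, the locking side of the question can be demonstrated with two sessions; a sketch, assuming hypothetical tables PARENT (PARENT_ID primary key) and CHILD (with a PARENT_ID column referencing PARENT, and NO index on CHILD.PARENT_ID):

```sql
-- Session 1: insert a child row, transaction left open
INSERT INTO child VALUES (100, 1);

-- Session 2: delete a DIFFERENT parent row; without an index on the
-- foreign key column, this session must acquire a mode 4 (share) TM
-- lock on the CHILD table, so it blocks behind session 1's uncommitted
-- insert, waiting on an "enq: TM - contention" wait event until
-- session 1 commits or rolls back
DELETE FROM parent WHERE parent_id = 2;
```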





Parse CPU to Parse Elapsd – What is Wrong with this Quote?

1 09 2011

September 1, 2011

I located another interesting statement in the “Oracle Database 11g Performance Tuning Recipes” book, so I thought that it was time for another blog article that takes a look at a statement from the book.  Chapter 4 of the book is significantly better written (much more accurate) than chapter 3.

To set the stage, let’s take a look at the “Instance Efficiency Percentages” section of two AWR reports that I have collected over the years:

Example #1:

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:   27.89    In-memory Sort %:  100.00
            Library Hit   %:   91.32        Soft Parse %:   76.52
         Execute to Parse %:   30.71         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   10.26     % Non-Parse CPU:   92.99 

Example #2:

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
               Buffer Hit %:   99.89    In-memory Sort %:  100.00
              Library Hit %:   97.45        Soft Parse %:   97.79
         Execute to Parse %:   97.30         Latch Hit %:   95.31
Parse CPU to Parse Elapsd %:   79.21       Non-Parse CPU:   99.10  

For obvious reasons, it is not a desirable outcome to see that a large percentage of the CPU time consumed by a database instance is used for parsing.  Ideally, most of the CPU time would be consumed actually executing the SQL statements submitted by user sessions.
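The statistics behind those two metrics may be examined directly; an AWR report computes the ratios from the deltas between two snapshots, while this sketch shows the since-instance-startup values:

```sql
-- System-wide statistics used by the parse related ratios; times are in
-- centiseconds
SELECT
  name,
  value
FROM
  v$sysstat
WHERE
  name IN ('parse time cpu', 'parse time elapsed',
           'CPU used by this session');

-- Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed)
-- % Non-Parse CPU = 100 * (1 - parse time cpu / CPU used by this session)
```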

Now let’s take a look at the quote from Recipe 4-10 on page 133:

“The Parse CPU to Parse Elapsd metric shows how much time the CPU is spending parsing SQL statements. The lower this metric is, the better. In the following example [not quoted], it is about 2%, which is very low. If this metric ever gets to 5%, it may mean investigation is warranted to determine why the CPU is spending this much time simply parsing SQL statements.”

What, if anything, is wrong with the above quote from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.







