Book Review: Troubleshooting Oracle Performance, Second Edition


July 15, 2014 (Modified August 21, 2014, August 22, 2014, November 6, 2014, January 9, 2015)

Extensively Researched with Detailed Analysis Covering a Broad Range of Oracle Database Performance Topics, Providing Insight that is Not Available from Other Sources

I pre-ordered this book in December 2013, having previously read the first edition of this book twice.  Once the printed copy of the second edition arrived, I ordered the PDF companion for the book from Apress.  The second edition targets more recent Oracle Database releases than the first edition did, and the author successfully scrubbed the book of all information that is not relevant to the targeted Oracle versions.  Despite the removal of obsolete content and the new page formatting that places approximately 17% more content per page, the page count for the second edition of the book grew by roughly 130 pages.

Some of the improvements in the second edition, when compared to the first edition of this book:

  • Extended explanation of the different definitions of the term cardinality.  Pg 19
  • The second edition of the book adds a half-page definition of the term cursor.  Pg 21
  • The description of V$SQL_CS_HISTOGRAM was omitted from the first edition of this book, and is now included.  Pgs 37-39
  • The Instrumentation section that was found in chapter 3 of the first edition is now relocated into chapter 2.  Pgs 42-48
  • A new section was added in this edition of the book that is intended to guide the reader in attacking performance problems using different procedures, based on whether or not the problem is reproducible.  Chapters 3 and 4
  • A new roadmap flow chart was added to the second edition, showing how to begin the performance tuning process.  Pg 104
  • Page 204 of the first edition of the book stated that it was not possible to retrieve a Statspack captured execution plan using DBMS_XPLAN – that statement was incorrect.  Page 306 of the second edition contains a corrected statement:  “Statspack stores execution plans in the stats$sql_plan repository table when a level equal to or greater than 6 is used for taking the snapshots. Even though no specific function is provided by the dbms_xplan package to query that repository table, it’s possible to take advantage of the display function to show the execution plans it contains.”
  • The second edition of the book includes a new SQL optimization techniques chapter – the book seems to be making a more dedicated effort to help the reader understand the decision process that determines when to use the various techniques to attack performance issues – explaining the decision tree for performance tuning.  Chapter 11, SQL Optimization Techniques, is a good example of the enhancements made to the second edition.

Over the last several years I have read (and reviewed) a number of Oracle Database performance related books, including the freely available Performance Tuning Guide that is part of the official Oracle Database documentation.  None of the books, including the official Performance Tuning Guide (at least three errors were identified in the first 100 pages of one recent version), is completely free of errors (wrong, omitted, or obsolete information).  However, this book sets the technical content accuracy bar extremely high for books that cover a broad range of Oracle performance related topics.

As was the case for the first edition of this book, there are several factors that separate this book from the other broad-ranging Oracle Database performance books on the market:

  • For every feature that is described to help solve a problem, as many as possible of the benefits are listed, and an equal amount of attention is paid to the potentially wide-ranging problem areas of the various solutions.  Very few potential problems were overlooked in this book.  Some of the other books on the market only describe the potential benefits of implementing a feature, without discussing limitations or unintended side-effects.  One such example is the discussion of the CURSOR_SHARING parameter in two different books.  On page 434 of the “Troubleshooting Oracle Performance” book, the following warning is provided: “Cursor sharing has a reputation for not being very stable. This is because, over the years, plenty of bugs related to it have been found and fixed… my advice is to carefully review Oracle Support note 94036.1…”  This quote is in contrast to the following quotes from pages 191 and 484 of the book “Oracle Database 12c Performance Tuning Recipes”: “Although there are some concerns about the safety of setting the CURSOR_SHARING parameter to FORCE, we haven’t seen any real issues with using this setting.”  “There are really no issues with setting the cursor_sharing parameter to a nondefault value, except minor drawbacks such as the nonsupport for star transformations, for example.”  (Reference) (Reference 2)
  • For nearly every feature described in the book, the book lists the licensing and version requirements (sometimes down to a specific point release) that the reader must meet in order to take advantage of the feature – these requirements are often listed early in the description of the feature (the monitoring/tuning discussion in chapters four and five contains several good examples).  The book commonly describes how to accomplish a task in the current Oracle Database release, as well as in older releases, if the approach differs.  Some of the other books on the market inter-mix features and behaviors of various Oracle Database releases, without clearly distinguishing what will and what will not be available in the reader’s environment.
  • While many strong statements are made about Oracle Database in the book, there is no “hand waving”, and there are very few inaccurate statements.  The book uses a “demonstrate and test in your environment” approach from cover to cover.  The downloadable scripts library is extensive with roughly 280 scripts and trace files, and those scripts often contain more performance information than what is presented in the book.  It is thus recommended to view and experiment with the scripts while reading the book.  The scripts are currently downloadable only from the author’s website.  In contrast, other books seem to take the approach of “trust me, I have performed this task 1,000 times and never had a problem” rather than the “demonstrate and test in your environment” approach as was used in this book.
  • Information in this book is densely packaged, without unnecessarily repeating information, and without giving the impression that sections of the book are a paraphrase of some other set of articles, a paraphrase of chapters in the official Oracle documentation, or a reprint of a page that was originally copyrighted in 1997.  Additionally, the information is well organized into a logical progression of topics, rather than each section of the book appearing as an island of unrelated information.
  • The well-placed graphics throughout the book support the contents of the book, rather than distract from the information that is described.
  • The book makes extensive use of forward and backward references to other sections in the book, as well as suggestions to review specific Oracle support documents and other books.  Some of the other books handle each chapter as an information silo, never (or rarely) mentioning specific content found elsewhere in the book.
  • In the acknowledgments section at the beginning of the previous book edition, the author mentioned that his English writing ability is poor and that “I should really try to improve my English skills someday.”  While the English wording in the first edition of the book was easily understood, I took issue with the author’s repeated use of the phrase “up to” when describing features that exist in one Oracle Database release version or another.  The second edition of the book fixes that one issue that I pointed out, typically replacing the text with “up to and including”, and overall the technical grammar in the second edition of the book is among the best that I have seen in a couple of years.  It appears that the author exercised great care when presenting his information on each page.  In contrast, some of the other Oracle Database book authors seem to be more concerned with slamming something onto the page so that they can move on to something more interesting, in the process producing sentences that can best be described as nonsense.
  • Almost without exception the issues that were identified as wrong, misleading, or incomplete in the first edition of the book were corrected in the second edition.  Unfortunately, the same cannot be said about other books that survived to see a second or third edition.

The second edition of “Troubleshooting Oracle Performance” is of value to Oracle Database administrators, programmers, and Oracle performance tuning specialists.  Chapter one of this book should be required reading for all people intending to be developers, regardless of whether the person intends to build advanced Oracle Database solutions or just simple Microsoft Access solutions.  One of my favorite quotes from the book is found on page three: “Performance is not merely optional, though; it is a key property of an application.”  Ideally, this book should be read after reading the “Expert Oracle Database Architecture” book (or the Concepts Guide found in the Oracle Database documentation library), and before advancing to books such as “Cost-Based Oracle Fundamentals” or “Oracle Core: Essential Internals for DBAs and Developers”.

The full review of this book is quite long, currently covering the first 12 chapters (447 pages) of the book.  Over the next couple of weeks I will finish the remaining book chapters and post an updated review.  The index at the back of most Apress books seems to be limited in value, so I have tried to include a useful index as part of this review.

Foundation Knowledge, and Miscellaneous Tips:

  • The ten most common design problems: no formal logical database design; using generic tables (entity-attribute-value or XML); failing to use constraints; failing to implement physical design (partitioning, bitmap indexes, index organized tables, function-based indexes, etc.); selecting the wrong data type for table columns (such as using a VARCHAR2 column to store dates); incorrect bind variable usage; failure to use RDBMS-specific advanced features; avoiding PL/SQL when extensive data manipulation is required within a single database; excessive commits; and non-persistent database connections.  Pgs 8-11
  • To avoid compulsive tuning disorder, there are three sources for identifying actual performance problems: user-reported unsatisfactory performance; system monitoring that reports timeouts or unusual load; and response time monitoring that indicates performance outside the parameters specified by the service level agreement.  Pg 11
  • Cardinality is the number of rows returned by an operation (estimated number of rows in an execution plan).  Cardinality = selectivity * num_rows.  Pg 19
  • “A cursor is a handle to a private SQL area with an associated shared SQL area.”  Pg 21
  • Life cycle of a cursor is explained with a diagram.  Pgs 21-23
  • Good explanation of why hard parses and even soft parses should be minimized as much as possible.  Pg 26
  • Even though the OPTIMIZER_ENV_HASH_VALUE column value in V$SQL is different for a given SQL statement when the FIRST_ROWS, FIRST_ROWS_1, or FIRST_ROWS_1000 optimizer modes are used, that difference in the OPTIMIZER_ENV_HASH_VALUE column does not prevent a specific child cursor from being shared among sessions with those different optimizer modes.  “This fact leads to the potential problem that even though the execution environment is different, the SQL engine doesn’t distinguish that difference. As a result, a child cursor might be incorrectly shared.”  Pg 27 (Reference)
  • Example of using Oracle’s built-in XML processing to convert the REASON column found in V$SQL_SHARED_CURSOR into three separate regular Oracle columns.  Pgs 27-28
  • Benefits and disadvantages of using bind variables.  Pgs 29-31, 32-39
  • Adaptive cursor sharing (bind-aware cursor sharing) was introduced in Oracle 11.1.  The IS_BIND_SENSITIVE, IS_BIND_AWARE, and IS_SHAREABLE columns of V$SQL indicate if a specific child cursor was affected (created or made obsolete) by adaptive cursor sharing.  Pg 34
  • Bind aware cursors require the query optimizer to perform an estimation of the selectivity of predicates on each execution.   Pg 37
  • Definition of different types of database file reads and writes.  Pg 40
  • Basic definition of Exadata and the goals of Exadata smart scans.  Pg 41
  • The database engine allows dynamically setting the following attributes for a session: client identifier, client information, module name, and action name.  Pg 45
  • Example of setting the client identifier information using PL/SQL, OCI, JDBC, ODP.NET, and PHP.  Pgs 46-48
  • 10046 trace levels 0, 1, 4, 8, 16, 32, and 64 are described.  Pg 55
  • See $ORACLE_HOME/rdbms/mesg/oraus.msg for a list of all debugging event numbers – not available on all operating system platforms.  Pg 56
  • Using DBMS_PROFILER  requires the CREATE privilege on the PL/SQL code.  DBMS_HPROF just requires execute on DBMS_HPROF.  Pg 96
  • Very good description of the performance related columns in V$SESSION.  Pgs 115-116
  • The MMNL background process collects active session history data once a second.  Pg 117
  • Real-time monitoring is available starting in version 11.1, and requires that the CONTROL_MANAGEMENT_PACK_ACCESS parameter be set to diagnostic + tuning.  Pg 127
  • Real-time monitoring is enabled for SQL statements only if the executions require at least 5 seconds, if the SQL statement is executed using parallel processing, or if the MONITOR hint is specified in the SQL statement.  Pg 127
  • The author’s system_activity.sql script file produces output that is similar to the data contained in a Diagnostic Pack chart, without requiring a Diagnostic Pack license.  Pg 143
  • The author’s time_model.sql script samples the V$SYS_TIME_MODEL dynamic performance view and outputs results that show the parent, child, and grandchild relationship between the various statistics.  Pg 144
  • Use an interval of 20 to 30 minutes for the Statspack or AWR sample period to limit the distortion effects of the reported averages (important problems may be hidden if the sample period covers many hours of time).  Pg 152
  • AWR dictionary views have a DBA_HIST or CDB_HIST (12.1 multitenant environment) prefix.  Pg 152
  • While the procedure for using Statspack is no longer described in the documentation, the spdoc.txt file in the $ORACLE_HOME/rdbms/admin directory describes how to install, configure, and manage Statspack.  Pg 156
  • Statspack data can be captured at levels 0, 5, 6, 7, or 10 (see the book for an explanation of what is captured at each level).  Pg 157
  • The book provides an example of automating the collection of Statspack snaps, and automatically purging old Statspack snaps after 35 days.  Pg 159
  • The book describes in detail the various inputs that are provided to the query optimizer, including: system statistics, object statistics, constraints, physical design, stored outlines/SQL profiles/SQL plan baselines, execution environment/initialization parameters/client side environment variables, bind variable values/data types, dynamic sampling, and cardinality feedback.  The Oracle Database version, edition (Standard or Enterprise), and installed patches also potentially affect the plans generated by the query optimizer.  Pgs 170-172
  • Prior to version 11.1 the ANSI full outer join syntax was automatically translated into Oracle syntax utilizing a UNION ALL.  Pg 187
  • Access to the DBMS_STATS package is granted to public, but the GATHER_SYSTEM_STATISTICS role (automatically granted to DBA role) is required to change the system statistics in the data dictionary.  Pg 192
  • Bug 9842771 causes the SREADTIM and MREADTIM statistics to be incorrectly calculated when gathering system statistics on affected Oracle Database releases unless patch 9842771 is installed.  Pg 197
  • The calculated CPU cost to access a specific table column is computed as the column position multiplied by 20.  Pg 203 (Reference)
  • When the mreadtim system statistic is null (has not been computed) or is smaller than the sreadtim system statistic, a formula is used to calculate the mreadtim statistic value when execution plans are generated.  When the sreadtim system statistic is 0 or not computed, a formula is used to derive a sreadtim statistic value when execution plans are generated.  If the MBRC system statistic is not set (or set to 0), the NOWORKLOAD system statistics are used.  See page 204 for the formulas.
  • The maximum number of buckets for histograms increased from 254 to 2048 in Oracle Database 12.1.  Pg 213
  • Script to show tracked column usage that is used by DBMS_STATS.  Note that USER should be replaced with the schema name that contains the specified table.  Pg 242
  • When object statistics are collected using the default NO_INVALIDATE parameter value of DBMS_STATS.AUTO_INVALIDATE, cursors that depend on the object for which statistics were collected will be marked as invalidated after a random time period that is up to five hours (as determined by the value of the _OPTIMIZER_INVALIDATION_PERIOD  parameter; SQL statements using parallel execution will be immediately invalidated).  Pg 244
  • “Unfortunately, not all new features are disabled by this [OPTIMIZER_FEATURES_ENABLE] initialization parameter. For example, if you set it to [an earlier release] in version 11.2, you won’t get exactly the [earlier release’s] query optimizer.”  Pg 277
  • “When the [memory utilization specified by the PGA_AGGREGATE_LIMIT] limit is reached, the database engine terminates calls or even kills sessions. To choose the session to deal with, the database engine doesn’t consider the maximum PGA utilization [for each session]. Instead, the database engine considers the session using the highest amount of untunable memory.”  Pg 296
  • EXPLAIN PLAN defines all bind variables as VARCHAR2, which may lead to unintended/unexpected data type conversion problems in the generated execution plan.  EXPLAIN PLAN also does not take advantage of bind variable peeking, further limiting EXPLAIN PLAN’s ability to accurately generate an execution plan for a previously executed SQL statement.  Unfortunately, there are times when EXPLAIN PLAN shows the correct predicate information, while the typically more reliable DBMS_XPLAN.DISPLAY_CURSOR, V$SQL_PLAN view, and V$SQL_PLAN_STATISTICS_ALL  view show incorrect predicate information for one or more lines in the execution plan.  Pgs 302-303, 336, 339, 346, 348
  • To have the query optimizer generate a 10053 trace whenever a specific SQL statement is hard parsed, execute the following command, replacing 9s5u1k3vshsw4 with the correct SQL_ID value: ALTER SYSTEM SET events 'trace[rdbms.SQL_Optimizer.*][sql:9s5u1k3vshsw4]'  Pg 308
  • Description of the columns found in most execution plans.  Pgs 312-313
  • Description of the undocumented ADVANCED format parameter value for DBMS_XPLAN.  Pg 316
  • Adaptive execution plans, where the query optimizer in Oracle Database 12.1 is able to postpone some execution plan decisions (such as selecting a nested loops join vs. a hash join), requires the Enterprise Edition of Oracle Database.  Pg 349
  • The IS_RESOLVED_ADAPTIVE_PLAN column of V$SQL indicates whether or not an execution plan takes advantage of adaptive execution (use +ADAPTIVE in the format parameter of the DBMS_XPLAN call to see the adaptive portion of the execution plan).  Pg 351
  • Rather than just suggesting to the reader to add an index to avoid an unnecessary full table scan, the book includes the following important note: “For instance, if you add an index like in the previous example, you have to consider that the index will slow down the execution of every INSERT and DELETE statement on the indexed table as well as every UPDATE statement that modifies the indexed columns.”  Pg 361
  • “Simply put, hints are directives added to SQL statements to influence the query optimizer’s decisions. In other words, a hint is something that impels toward an action, rather than merely suggests one.”  Pg 363
  • “However, mixing comments and hints don’t always work. For example, a comment added before a hint invalidates it.”  This warning describes a real threat to intentionally included hints, and the warning was not included in the first edition of the book.  Pg 366
  • The default query block names assigned by the optimizer are: CRI$ (CREATE INDEX statements), DEL$ (DELETE statements), INS$ (INSERT statements), MISC$ (miscellaneous SQL statements such as LOCK TABLE), MRC$ (MERGE statements), SEL$ (SELECT statements), SET$ (set operators such as UNION and MINUS), and UPD$ (UPDATE statements).  Use the QB_NAME hint to specify a different, non-default query block name for use with various hints.  Pg 369
  • “One of the most common mistakes made in the utilization of hints is related to table aliases. The rule is that when a table is referenced in a hint, the alias should be used instead of the table name, whenever the table has an alias.”  Pg 371
  • Cross reference between several initialization parameter values and the equivalent hint syntax.  Pg 373
  • A demonstration of creating a hacked stored outline for a SQL statement (use as a last resort when it is not possible to create a suitable outline using other techniques such as exp/imp or initialization parameter changes).  Pgs 381-387
  • SQL profiles, a feature of the Enterprise Edition with the Tuning Pack and the Diagnostic Pack options, are applied even when the upper/lowercase letters and/or the white space differs, and if the FORCE_MATCH parameter is set to true, a SQL profile may be applied even if the literals (constants) in a SQL statement differ.   While SQL profiles allow text normalization, stored outlines and SQL plan management do not support the same degree of text normalization.  Pgs 390, 394, 402
  • SQL plan management, which requires an Enterprise Edition license, could be considered an enhanced version of stored outlines.  Pg 402
  • “Inappropriate hints occur frequently in practice as the reason for inefficient execution plans. Being able to override them with the technique you’ve seen in this section [SQL plan baseline execution plan replacement (stored outlines are also capable of removing embedded hints using the techniques shown on pages 381-387)] is extremely useful.”  Pg 408
  • “What causes long parse times? Commonly, they are caused by the query optimizer evaluating too many different execution plans. In addition, it can happen because of recursive queries executed on behalf of dynamic sampling.”  Pg 433
  • “The values provided by the parse count (total) and session cursor cache hits statistics are subject to several bugs.”  Details are provided on pages 437-438
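
Several of the notes above reference the cardinality formula from page 19.  The arithmetic can be checked with a minimal sketch; the table and column statistics below are invented for illustration, and the uniform-distribution/no-histogram assumption is mine, not the book's:

```python
# Cardinality estimate as noted above: cardinality = selectivity * num_rows.
# For an equality predicate on a column without a histogram, the query
# optimizer assumes uniformly distributed data, so selectivity = 1/num_distinct.

def equality_selectivity(num_distinct):
    """Selectivity of `col = :b` assuming uniform data and no histogram."""
    return 1 / num_distinct

def estimated_cardinality(selectivity, num_rows):
    """Estimated rows returned by the operation."""
    return round(selectivity * num_rows)

# Hypothetical statistics: a 10,000-row table, 50 distinct column values.
sel = equality_selectivity(50)
card = estimated_cardinality(sel, 10000)
print(sel, card)  # 0.02 200
```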
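
The 10046 trace levels listed on page 55 are conventionally treated as bit flags, with bit 4 enabling bind variable dumping and bit 8 enabling wait event dumping (so the familiar level 12 means binds plus waits).  A decoding sketch follows; the descriptions for bits 16, 32, and 64 are my paraphrases and are release dependent, so verify them against the book's list:

```python
# Event 10046 trace levels combine as bit flags.  The well-established bits
# are 4 (dump bind variable values) and 8 (dump wait events); the meanings
# given for 16, 32, and 64 are paraphrased assumptions for newer releases.
FLAGS = {
    4:  "bind variable values",
    8:  "wait events",
    16: "STAT lines dumped for every execution",
    32: "execution statistics never dumped",
    64: "adaptive dumping of execution statistics",
}

def decode_10046_level(level):
    """Return the trace options enabled by a 10046 trace level."""
    return [desc for bit, desc in FLAGS.items() if level & bit]

print(decode_10046_level(12))  # ['bind variable values', 'wait events']
```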
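
The rolling invalidation behavior described on page 244 can be sketched as follows.  This is a simplification (the database engine actually assigns the random deadline in relation to subsequent parses of the dependent cursor), and the 18,000-second (5-hour) figure is the commonly documented default for _OPTIMIZER_INVALIDATION_PERIOD:

```python
# Sketch of rolling cursor invalidation after statistics collection with
# NO_INVALIDATE => DBMS_STATS.AUTO_INVALIDATE: each dependent cursor gets a
# random invalidation deadline within the invalidation period, spreading the
# resulting hard parses out over time instead of causing a parse storm.
import random

INVALIDATION_PERIOD = 18_000  # seconds; assumed _optimizer_invalidation_period

def invalidation_time(stats_gathered_at, rng=random):
    """Timestamp after which a dependent cursor is treated as invalid."""
    return stats_gathered_at + rng.uniform(0, INVALIDATION_PERIOD)

t0 = 1_000_000.0  # hypothetical epoch time of the statistics collection
times = [invalidation_time(t0) for _ in range(5)]
print(all(t0 <= t <= t0 + INVALIDATION_PERIOD for t in times))  # True
```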

Suggestions, Problems, and Errors:

  • The following scripts are currently missing from the script library:
    — session_info.sql  Pg 45 (in the script library as session_attributes.sql per the book author).
    — ash_top_files.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — ash_top_objects.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — ash_top_plsql.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — search_space.sql  Pg 169  (Edit: Nov 6, 2014, now downloadable here)
    — incremental_stats.sql  Pg 255  (Edit: Aug 22, 2014, now downloadable here)
    — copy_table_stats.sql  Pg 256  (Edit: Aug 22, 2014, now downloadable here)
    — optimizer_index_cost_adj.sql  Pg 288  (Edit: Aug 22, 2014, now downloadable here)
    — display_statspack.sql  Pg 306  (Edit: Nov 6, 2014, now downloadable here)
    — dynamic_in_conditions.sql  Pg 499  (Edit: Nov 6, 2014, now downloadable here)
    — fbi_cs.sql  Pg 506  (Edit: Nov 6, 2014, now downloadable here)
    — reserve_index.sql should be reverse_index.sql  Pg 671 (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 19 states that “selectivity is a value between 0 and 1 representing the fraction of rows filtered by an operation.”  I understand the intention of this statement, and the examples that follow the statement further clarify the author’s statement.  However, the “filtered” word in the statement seems to suggest that selectivity represents the fraction of the rows removed by an operation, rather than the rows that survived the filter at an operation.  This is just a minor wording problem that might cause the reader a little confusion when reading the book.  The author has addressed this issue in his errata list for the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 24, figure 2-3 has two entries for “Store parent cursor in library cache” – the second entry should show “Store child cursor in the library cache”, just as it is shown in figure 2-2 of the first edition of the book.  The author has addressed this issue in his errata list for the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 62 states, “The ALTER SESSION privilege required to execute the previous trigger can’t be granted through a role. Instead, it has to be granted directly to the user executing the trigger.”  I believe that the session executing the AFTER LOGON trigger, by default, would not need the ALTER SESSION privilege if the user creating the AFTER LOGON trigger had the ALTER SESSION privilege, because the trigger is created by default with Definer’s Rights.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 72 states, “disk is the number of blocks read with physical reads. Be careful—this isn’t the number of physical I/O operations. If this value is larger than the number of logical reads (disk > query + current), it means that blocks spilled into the temporary tablespace.”  While the statement is correct, and supported by the test case output, it might be a good idea to also mention that prefetching (index or table) or buffer warm up could be another possible cause of the DISK statistic value exceeding the value of the QUERY statistic value (especially after the database is bounced or the buffer cache is flushed).  The PHYSICAL READS CACHE PREFETCH and PHYSICAL READS PREFETCH WARMUP statistics might be useful for monitoring this type of access.  (Reference)
  • Page 73 states, “In addition to information about the first execution, version and higher also provides the average and maximum number of rows returned over all executions. The number of executions itself is provided by the Number of plan statistics captured value.”  It appears that the word “executions” should have been “execution plans”.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book made the transition from views that require no additional cost licensing to views that require a Diagnostic Pack license on pages 114 and 115, without providing the reader a warning about the licensing requirements (such a warning is typically present in the book).  (Edit: August 21, 2014: I have discussed this bullet item with the book author.  Pages 114 and 115 describe the use of V$METRIC, V$METRICGROUP, and V$METRICNAME.  The author provided this link, which clearly describes which features, views, and scripts require a Diagnostic Pack license to access.  V$METRIC, V$METRICGROUP, and V$METRICNAME are not among the views that are listed, and it is possible to access data from V$METRIC when the CONTROL_MANAGEMENT_PACK_ACCESS parameter is set to NONE.  Some of the Oracle documentation (link) (link) (Oracle Support Doc ID 748642.1) indicates that “V$METRIC displays the most recent statistic values for the complete set of metrics captured by the AWR infrastructure.” and/or that V$METRIC is one of the Automatic Workload Repository views.  The book did not transition to views that require a Diagnostic Pack license on pages 114 and 115.)
  • There are a couple of minor typos in the book that do not affect the accuracy of statements made by the book.  For example, “… prevent it from happenning again” on page 149.  Most of these typos are easily missed when reading the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states on page 215, “This is especially true for multibyte character sets where each character might take up to three bytes.”  Per the Oracle Database globalization documentation, some character sets, such as UTF-8 (AL32UTF8), may require up to four bytes per character.  The author has addressed this issue in his errata list for the book.
  • The book states on page 221, “For this reason, as of version 12.1, top frequency histograms and hybrid histograms replace height-balanced histograms.”  It appears, based on the Oracle documentation, that height-balanced histograms are not replaced if the histograms were created before the upgrade.  Additionally, if the ESTIMATE_PERCENT parameter is specified in the DBMS_STATS call, a height-balanced histogram will be created if the number of distinct values exceeds the number of buckets.  (Reference)  Page 239 makes a clarifying statement, “Also note that some features (top frequency histograms, hybrid histograms, and incremental statistics) only work when dbms_stats.auto_sample_size is specified [for the ESTIMATE_PERCENT parameter].”  “Work” may be a poor word choice; “generated” may be a better choice.  (Reference: Oracle 12c Histogram Test)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states on page 282 about dynamic sampling level 11: “The query optimizer decides when and how to use dynamic sampling.  This level is available as of version 12.1 only.”  Later patch sets of Oracle Database 11.2 also add support for dynamic sampling level 11.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • When describing the output of DBMS_XPLAN, the book states, “Reads: The number of physical reads performed during the execution.”  The book should have clarified that the unit of measure for the Buffers, Reads, and Writes statistics is blocks.  Pg 313  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “Syntactical errors in hints don’t raise errors. If the parser doesn’t manage to parse them, they’re simply considered real comments.”  That statement is correct for all hints except the oddly behaving IGNORE_ROW_ON_DUPKEY_INDEX hint, which will raise an “ORA-38917: IGNORE_ROW_ON_DUPKEY_INDEX hint disallowed for this operation” error, the CHANGE_DUPKEY_ERROR_INDEX hint, which will raise an “ORA-38916: CHANGE_DUPKEY_ERROR_INDEX hint disallowed for this operation” error, and the RETRY_ON_ROW_CHANGE hint, which will raise an “ORA-38918: RETRY_ON_ROW_CHANGE hint disallowed for this operation” error if the hints are specified incorrectly.  Pg 365 (a similar comment is made at the top of page 371).  (Reference)  (Reference 2)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “The aim of using a prepared statement is to share a single cursor for all SQL statements and, consequently, to avoid unnecessary hard parses by turning them into soft parses.”  This statement should be clarified to point out that the aim is to share a single cursor for all _similar_ SQL statements (those that would have differed only by a literal/constant if bind variables were not used).  Pg 428  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “Cursor sharing doesn’t replace literal values contained in static SQL statements executed through PL/SQL. For dynamic SQL statements, the replacement takes place only when literals aren’t mixed with bind variables. This isn’t a bug; it’s a design decision.”  This statement about dynamic SQL statements, at least for recent Oracle Database releases, appears to no longer be true.  The author’s cursor_sharing_mix.sql script does show literal value replacement when bind variables are also used for SQL statements executed outside PL/SQL.  Pg 434  (Reference: Oracle Cursor Sharing Test)  (Edit: August 21, 2014: I have discussed this bullet item with the book author.  The quote from page 434 describes the behavior of cursor sharing only as it relates to SQL statements executed within PL/SQL code.  I misinterpreted “dynamic SQL statements” to include ad-hoc SQL statements that are submitted by client-side applications (including SQL*Plus) and built on-demand as a result of user input.  In technical terms, “dynamic” here implies that the SQL statements are executed within PL/SQL code.  The test script results do match what is printed in the book.  The author’s script demonstrated (specifically for the CURSOR_SHARING parameter, though this likely applies to other functionality as well) that SQL executed directly may be handled differently than SQL executed within PL/SQL code.)
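
The page 428 clarification above — one cursor shared by all *similar* SQL statements — can be illustrated without a database: statements that differ only in a literal produce distinct SQL texts (and therefore distinct parent cursors), while a bind variable yields a single shared text.  The table and predicate below are hypothetical, and the counting is plain Python, not actual shared pool behavior:

```python
# One hundred executions built with literal values: each distinct value
# produces a distinct SQL text, so the shared pool would need one parent
# cursor (and one hard parse) per value.
literal_texts = {f"SELECT * FROM t WHERE id = {i}" for i in range(100)}

# The same workload written with a bind variable: a single SQL text, so one
# shared cursor and (after the first execution) soft parses only.
bind_texts = {"SELECT * FROM t WHERE id = :id" for _ in range(100)}

print(len(literal_texts), len(bind_texts))  # 100 1
```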

Data Dictionary Views/Structures (the index at the back of the book misses most of these entries):

  • AUX_STATS$  (SYS schema)  Pgs 193, 196
  • CDB_ENABLED_TRACES  Pgs 58, 59, 60
  • CDB_SQL_PROFILES  Pgs 393, 399
  • COL$ (SYS schema)  Pg 242
  • COL_USAGE$ (SYS schema)  Pg 242
  • DBA_ENABLED_TRACES  Pgs 58, 59, 60
  • DBA_HIST_BASELINE  Pgs 155, 156
  • DBA_HIST_SQL_PLAN  Pgs 305, 324
  • DBA_OPTSTAT_OPERATIONS  Pgs 200, 201, 269, 270
  • DBA_SCHEDULER_JOBS  Pgs 257-258
  • DBA_SQL_PLAN_BASELINES  Pgs 408, 409
  • DBA_SQL_PROFILES  Pgs 393, 399
  • DBA_USERS  Pgs 235, 242
  • GV$INSTANCE  Pgs 60, 63
  • OBJ$ (SYS schema)  Pg 242
  • OL$ (OUTLN schema)  Pgs 381, 383
  • OL$HINTS (OUTLN schema)  Pgs 381, 383-384
  • OL$NODES (OUTLN schema)  Pg 381
  • OPTSTAT_HIST_CONTROL$ (SYS schema)  Pgs 246, 248
  • PLAN_TABLE  Pgs 203, 215, 216, 219-223, 225, 300
  • REGISTRY$ (SYS schema)  Pg 235
  • REGISTRY$SCHEMAS (SYS schema)  Pg 235
  • SQLOBJ$ (SYS schema)  Pgs 399, 410
  • SQLOBJ$DATA (SYS schema)  Pgs 399, 410
  • SQLPROF$ (SYS schema)  Pg 399
  • SQLPROF$ATTR (SYS schema)  Pg 399
  • STATS$SQL_PLAN  Pg 306
  • USER_IND_STATISTICS  Pgs 209, 231-232, 251
  • USER_INDEXES  Pgs 354, 486, 487, 488, 519, 520, 523
  • USER_OBJECTS  Pg 300
  • USER_OUTLINES  Pgs 377, 378, 379, 380
  • USER_TAB_COL_STATISTICS  Pgs 209, 211-213, 251, 252, 355
  • USER_TAB_COLS  Pg 227
  • USER_TAB_HISTOGRAMS  Pgs 209, 214-215, 218-220, 222, 224-225, 251
  • USER_TAB_STATISTICS  Pgs 209, 210-211, 215, 249, 251, 255, 264
  • USER_TABLES  Pg 240
  • V$ACTIVE_SESSION_HISTORY  Pgs 119-121, 132, 138, 420, 424
  • V$BGPROCESS  Pg 63
  • V$CON_SYS_TIME_MODEL  Pgs 108, 419
  • V$CON_SYSSTAT  Pg 113
  • V$DATABASE  Pg 324
  • V$DIAG_INFO  Pgs 63, 64
  • V$EVENT_NAME  Pgs 109, 163
  • V$FILESTAT  Pg 195
  • V$FIXED_TABLE  Pg 233
  • V$METRIC  Pgs 115, 142
  • V$METRIC_HISTORY  Pgs 115, 131
  • V$METRICNAME  Pg 114
  • V$MYSTAT  Pgs 114, 280
  • V$OSSTAT  Pgs 106, 131, 161
  • V$PARAMETER  Pgs 62, 296
  • V$PGASTAT  Pgs 294-295
  • V$PROCESS  Pgs 64, 65
  • V$PX_PROCESS  Pg 63
  • V$SESS_TIME_MODEL  Pgs 107, 108, 110, 111, 144, 145, 419
  • V$SESSION  Pgs 45, 57, 61, 63, 64, 115-116, 117, 119, 145, 146, 304, 322, 390
  • V$SESSION_EVENT  Pgs 111, 145
  • V$SESSION_WAIT  Pg 116
  • V$SESSTAT  Pgs 113, 114, 145, 430, 437
  • V$SGASTAT  Pg 118
  • V$SQL  Pgs 25, 26, 126, 147, 304, 322, 378, 380, 386, 409
  • V$SQL_CS_HISTOGRAM  Pgs 36, 37
  • V$SQL_CS_SELECTIVITY  Pgs 36, 37
  • V$SQL_HINT  Pgs 367, 376, 403
  • V$SQL_MONITOR  Pg 127
  • V$SQL_PLAN  Pgs 303, 304, 305, 322, 336, 339, 346, 348, 351
  • V$SQL_PLAN_STATISTICS_ALL  Pgs 303, 304, 320, 321, 322, 336, 339, 346, 348
  • V$SQL_SHARED_CURSOR  Pgs 27, 29, 31
  • V$SQL_WORKAREA  Pg 303
  • V$SQLAREA  Pgs 25, 26, 126, 147, 277, 424
  • V$SQLSTATS  Pgs 126, 147, 424
  • V$STATNAME  Pgs 113, 280, 430, 437
  • V$SYS_TIME_MODEL  Pgs 108, 144, 419
  • V$SYSMETRIC  Pg 115
  • V$SYSSTAT  Pg 113
  • V$SYSTEM_EVENT  Pgs 111, 112
  • V$SYSTEM_WAIT_CLASS  Pgs 110, 143
  • X$KSUSE  Pg 257

Initialization Parameters:

  • _SQLMON_MAX_PLAN  Pg 127
  • BITMAP_MERGE_AREA_SIZE  Pgs 292, 297
  • CONTROL_MANAGEMENT_PACK_ACCESS  Pgs 103, 117, 127, 402
  • CPU_COUNT  Pg 131
  • CURSOR_SHARING  Pgs 373, 434-436
  • DB_BLOCK_SIZE  Pgs 278, 281
  • DB_CACHE_SIZE  Pg 281
  • DB_FILE_MULTIBLOCK_READ_COUNT  Pgs 195, 198, 274, 278-281
  • DB_NAME  Pg 63
  • HASH_AREA_SIZE  Pgs 292, 297
  • JOB_QUEUE_PROCESSES  Pgs 243, 261
  • MAX_DUMP_FILE_SIZE  Pgs 61, 62
  • NLS_SORT  Pg 28
  • OPEN_CURSORS  Pgs 432, 433, 437
  • OPTIMIZER_FEATURES_ENABLE  Pgs 202, 213, 277-278, 282, 373
  • OPTIMIZER_INDEX_CACHING  Pgs 191, 274, 289, 373
  • OPTIMIZER_INDEX_COST_ADJ  Pgs 191, 274, 286-289, 373
  • OPTIMIZER_MODE  Pgs 26, 274, 276-277, 373
  • OPTIMIZER_SECURE_VIEW_MERGING  Pgs 176, 289-291, 373
  • PGA_AGGREGATE_LIMIT  Pgs 274, 293
  • PGA_AGGREGATE_TARGET  Pgs 274, 292, 295
  • PROCESSES  Pg 295
  • SESSION_CACHED_CURSORS  Pgs 436, 437, 440
  • SESSIONS  Pg 281
  • SORT_AREA_RETAINED_SIZE  Pgs 292, 296-297
  • SORT_AREA_SIZE  Pgs 292, 296
  • SQLTUNE_CATEGORY  Pgs 395, 402
  • STATISTICS_LEVEL  Pgs 62, 106, 152, 199, 238, 261, 303, 322
  • TIMED_STATISTICS  Pgs 61, 62, 74, 105-106
  • USE_STORED_OUTLINES  Pgs 379, 385
  • WORKAREA_SIZE_POLICY  Pgs 274, 292

SQL Hints:

  • ALL_ROWS  Pgs 277, 373, 399
  • APPEND  Pg 367
  • BIND_AWARE  Pgs 38, 367
  • CACHE  Pg 367
  • CLUSTER  Pg 366
  • COLUMN_STATS  Pg 401
  • CURSOR_SHARING_EXACT  Pgs 366, 373, 434
  • DRIVING_SITE  Pg 367
  • DYNAMIC_SAMPLING  Pgs 283, 366, 367, 373
  • EXPAND_TABLE  Pg 366
  • FACT  Pg 366
  • FIRST_ROWS  Pg 373
  • FIRST_ROWS(n)  Pgs 277, 366, 373
  • FULL  Pgs 365, 366, 367, 368, 369, 371, 383, 406, 407, 408
  • GATHER_PLAN_STATISTICS  Pgs 303, 320, 322, 366, 376, 403
  • HASH  Pg 366
  • INDEX  Pgs 366, 371, 407
  • INDEX_ASC  Pg 366
  • INDEX_DESC  Pg 366
  • INDEX_FFS  Pg 366
  • INDEX_JOIN  Pg 366
  • INDEX_SS  Pg 366
  • INDEX_SS_ASC  Pg 366
  • INDEX_SS_DESC  Pg 366
  • INDEX_STATS  Pg 401
  • INLINE  Pgs 356, 376, 403
  • LEADING  Pg 366
  • MATERIALIZE  Pgs 367, 376, 403
  • MERGE  Pg 366
  • MONITOR  Pgs 127, 367
  • NLJ_BATCHING  Pg 366
  • NO_APPEND  Pg 367
  • NO_BIND_AWARE  Pg 367
  • NO_CACHE  Pg 367
  • NO_CPU_COSTING – disables the CPU cost model, reverting to the I/O cost model.  Pg 191
  • NO_EXPAND  Pg 366
  • NO_FACT  Pg 366
  • NO_INDEX  Pg 366
  • NO_INDEX_FFS  Pg 366
  • NO_INDEX_SS_ASC  Pg 366
  • NO_INDEX_SS_DESC  Pg 366
  • NO_MERGE  Pg 366
  • NO_MONITOR  Pgs 127, 367
  • NO_PARALLEL  Pg 367
  • NO_PQ_SKEW  Pg 367
  • NO_PUSH_PRED  Pg 366
  • NO_PUSH_SUBQ  Pg 366
  • NO_RESULT_CACHE  Pgs 367, 373
  • NO_REWRITE  Pg 366
  • NO_UNNEST  Pg 366
  • NO_USE_CUBE  Pg 366
  • NO_USE_HASH  Pg 366
  • NO_USE_MERGE  Pg 366
  • NO_USE_NL  Pg 366
  • OPT_ESTIMATE  Pg 400
  • OPT_PARAM  Pgs 250, 366, 373
  • OPTIMIZER_FEATURES_ENABLE  Pgs 278, 366, 373, 399
  • ORDERED  Pg 366
  • PARALLEL  Pg 367
  • PQ_FILTER  Pg 367
  • PQ_SKEW  Pg 367
  • PUSH_PRED  Pg 366
  • PUSH_SUBQ  Pg 366
  • PX_JOIN_FILTER  Pg 367
  • QB_NAME  Pgs 367, 368
  • RESULT_CACHE  Pgs 367, 373
  • REWRITE  Pg 366
  • RULE  Pg 398
  • SET_TO_JOIN – used to enable the set (INTERSECT or MINUS) transformation to standard join syntax.  Pg 189
  • TABLE_STATS  Pg 401
  • UNNEST  Pg 366
  • USE_CONCAT  Pg 366
  • USE_CUBE  Pg 366
  • USE_HASH  Pg 366
  • USE_MERGE  Pg 366
  • USE_NL  Pg 366
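Most of the hints indexed above share the comment-based syntax discussed around pages 365-371.  A small illustrative sketch (table t1 is my assumption, not the book’s schema):

```sql
-- A valid hint directs the optimizer; here FULL forces a full table scan
-- of t1 even when an index on id exists.
SELECT /*+ FULL(t1) */ * FROM t1 WHERE id = 10;

-- A misspelled hint is silently treated as an ordinary comment; no error
-- is raised (the IGNORE_ROW_ON_DUPKEY_INDEX family of hints being the
-- exception noted earlier in this review):
SELECT /*+ FULLL(t1) */ * FROM t1 WHERE id = 10;

-- QB_NAME names a query block so that other hints can reference it.
SELECT /*+ QB_NAME(main) FULL(@main t1) */ * FROM t1 WHERE id = 10;
```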


Oracle Events:

  • 10046  Pgs 52-62
  • 10053  Pgs 307-309
  • 10132  Pgs 309-310, 371


Privileges:

  • ADVISOR  Pg 397
  • ALTER SESSION  Pgs 61, 62
  • ANALYZE ANY  Pgs 200, 245, 248, 251, 261, 263
  • ANALYZE ANY DICTIONARY  Pgs 200, 245, 248, 261
  • CREATE ANY  Pg 96
  • CREATE JOB  Pg 244
  • MERGE ANY VIEW  Pg 291


Roles:

  • DBA  Pgs 57, 192, 244, 291, 385, 417
  • HPROF_PROFILE (custom role)  Pg 101
  • SELECT_CATALOG_ROLE  Pgs 322, 324
  • SQL_TRACE (custom role)  Pg 61


Statistics:

  • background elapsed time  Pg 107
  • bytes received via SQL*Net from client  Pg 430
  • bytes sent via SQL*Net to client  Pgs 430-431
  • DB CPU  Pgs 108, 110, 111, 162, 163
  • DB time  Pgs 55, 107, 108, 109, 111, 120, 122, 126, 127, 132, 134, 137, 140, 162, 163, 165, 166, 419, 420, 424
  • hard parse elapsed time  Pg 107
  • parse count (total)  Pgs 437, 438
  • parse time elapsed  Pgs 107, 420
  • session cursor cache count  Pg 437
  • session cursor cache hits  Pgs 437, 438
  • sql execute elapsed time  Pg 165

Wait Events:

  • db file sequential read  Pgs 75, 83, 112, 163
  • latch free  Pg 436
  • library cache: mutex X  Pg 436
  • SQL*Net message from client  Pgs 76, 111, 421-422, 423, 430, 432, 435
  • SQL*Net message to client  Pgs 76, 80, 430

Built-In Functions/Procedures:

  • ANALYZE  Pgs 207, 264
  • CAST  Pgs 284, 321
  • DBMS_OUTPUT.PUT_LINE  Pgs 238, 245
  • DBMS_RANDOM.STRING  Pgs 209, 284
  • DBMS_SPM.CONFIGURE  Pgs 417, 418
  • DBMS_SQLDIAG.DUMP_TRACE (no extra cost license required, dumps 10053 trace from library cache)  Pg 308
  • DBMS_STATS.GATHER_SCHEMA_STATS  Pgs 233, 238, 240, 263
  • DBMS_STATS.GATHER_SYSTEM_STATS  Pgs 192, 194-197, 200, 201, 206
  • DBMS_STATS.GATHER_TABLE_STATS  Pgs 233, 249, 250, 252, 255, 263, 285
  • DBMS_STATS.GET_PARAM  Pgs 245, 246, 247
  • DBMS_STATS.PURGE_STATS  Pgs 199-200, 261
  • DBMS_STATS.REPORT_COL_USAGE  (note: use SET LONG 100000 in SQL*Plus before executing query)  Pgs 228, 242-243
  • DBMS_STATS.SET_PARAM  Pgs 245, 246
  • DBMS_STATS.SET_TABLE_PREFS  Pgs 247, 250, 251, 254, 256
  • DBMS_XPLAN.DISPLAY  Pgs 301, 302, 306, 316-321, 349, 350, 368, 370, 371, 380, 382, 384, 395
  • DBMS_XPLAN.DISPLAY_AWR  Pgs 305-306, 311, 323-325
  • DBMS_XPLAN.DISPLAY_CURSOR  Pgs 249, 304-305, 311, 322-323, 336, 352, 353, 406, 407, 408
  • EXPLAIN PLAN  Pgs 299-302, 320, 336, 339, 346, 348, 349, 368, 370, 371, 380, 382, 384, 395
  • EXTRACT  Pgs 399, 410
  • EXTRACTVALUE  Pgs 399, 410
  • LAG  Pgs 203, 214, 224
  • LATERAL  Pg 180
  • LNNVL  Pgs 175, 346, 347, 502
  • NTILE  Pg 217
  • NULLIF  Pgs 354, 419, 424
  • RATIO_TO_REPORT  Pgs 92, 111, 112, 120, 221, 223
  • SYS_CONTEXT, SYS_CONTEXT(‘USERENV’,’SID’), SYS_CONTEXT(‘USERENV’,’SESSION_USER’)  Pgs 45, 61, 64, 113, 321, 372, 390
  • TABLE  Pgs 155, 249, 301, 302, 304, 305, 317-321, 323, 324, 349, 350, 352, 368, 370, 371, 380, 382, 384, 395, 399, 406-411, 625, 639, 669
  • UNPIVOT  Pg 158
  • XMLSEQUENCE  Pgs 399, 410
  • XMLTABLE  Pgs 201, 270
  • XMLTYPE  Pgs 399, 410
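Two of the entries above are worth a quick sketch.  These are hedged examples of my own, not scripts from the book (the sql_id value is a placeholder):

```sql
-- Display the last executed statement's actual plan with runtime statistics
-- (requires STATISTICS_LEVEL=ALL or the GATHER_PLAN_STATISTICS hint).
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

-- Dump a 10053-style optimizer trace for a cursor already in the library
-- cache, without re-executing a hard parse (no extra cost license required).
BEGIN
  dbms_sqldiag.dump_trace(
    p_sql_id       => 'abcdefghijkl0',   -- placeholder sql_id
    p_child_number => 0,
    p_component    => 'Compiler',
    p_file_id      => 'my_trace');
END;
/
```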


Built-In Packages:

  • DBMS_HPROF profiler  Pgs 89-96
  • DBMS_JOB  Pg 257
  • DBMS_PROFILER profiler  Pgs 89, 96-101
  • DBMS_SQLTUNE  Pgs 389-402
  • DBMS_STATS  Pgs 191-201, 207, 233-271
  • DBMS_XPLAN  Pgs 311-325

Oracle Provided Scripts:

  • ashrpt.sql  Pg 124
  • ashrpti.sql  Pg 124
  • awrddrpt.sql  Pg 161
  • awrrpt.sql  Pg 160
  • awrsqrpt.sql  Pg 306
  • coe_xfr_sql_profile.sql  Pg 401
  • dbmshptab.sql  Pg 90
  • profrep.sql  Pg 100
  • proftab.sql  Pg 97
  • spauto.sql  Pg 159
  • spcreate.sql  Pg 157
  • sppurge.sql  Pg 159
  • spreport.sql  Pg 161
  • sprepsql.sql  Pgs 166, 306
  • utlxplan.sql  Pg 300
  • utlxmv.sql  Pg 587
  • utlxrw.sql  Pg 586


Utilities:

  • /proc/loadavg file on Linux.  Pg 131
  • exp  Pg 381
  • grep  Pg 422
  • PLSHPROF (processes DBMS_HPROF output)  Pgs 89, 94
  • snapper.sql (utility script by Tanel Põder)  Pgs 145-147
  • SQL Developer  Pgs 95-96, 100
  • TKPROF  Pgs 67-79, 421-422, 426
  • TRCSESS  Pg 67
  • TVD$XTAT  Pgs 67, 79-88, 422-423, 426-427

Execution Plans:

Automatic Query Transformations:

    • Count transformation – transforms COUNT(column) to COUNT(*) when a NOT NULL constraint is present on the column.  Pg 174
    • Common sub-expression elimination – removes unnecessary/duplicate predicates from the WHERE clause.  Pgs 174-175
    • OR expansion – transforms SQL statements with OR clauses into multiple SQL blocks with UNION ALL specified between the SQL blocks. Pg 175
    • Simple view merging – merges query blocks using heuristics, used when the query blocks do not contain aggregations, set operators, hierarchical queries, MODEL clauses, or scalar subqueries (SELECT statement in the column list of a query).  Pgs 176-177
    • Complex view merging  – merges query blocks that contain aggregations using a cost-based approach, but does not work with hierarchical queries or queries containing GROUPING SETS, ROLLUP, PIVOT, or MODEL clauses.  Pgs 176-177
    • Select list pruning – removing selected columns from non-top-level query blocks that are not referenced by parent query blocks.  Pgs 177-178
    • Filter push down – pushing column restrictions from the WHERE clause of a parent query block into a child query block.  Pgs 178-179
    • Join predicate push down – push join predicates from a parent query block into an unmergeable child query block.  Pgs 179-180
    • Predicate move around – applying filter restrictions in one child query block (inline view) to another child query block due to join conditions that are specified between the parent and child query blocks.  Pg 180
    • Distinct placement – applies a DISTINCT filter to a child table before joining to a parent table, when the DISTINCT clause is specified for a SQL statement.  Pg 181
    • Distinct elimination – removes a specified DISTINCT filter when all columns of the primary key, a unique key, or the ROWID are selected from a table.  Pg 181
    • Group by placement – applies an aggregation function to a child table before joining that table to a parent table – a COUNT aggregation specified in a SQL statement is transformed into a COUNT aggregation of the child table before the join, and a SUM aggregation after the join.  Pgs 181-182
    • Order-by elimination – removes unnecessary ORDER BY clauses using heuristics from subqueries, inline views, and joined standard views.  Pg 182
    • Subquery unnesting – injects semi-join, anti-join, and scalar subqueries into the FROM clause; helps to eliminate the need to execute a subquery once for every row returned by the containing query block.  Pg 183
    • Subquery coalescing – combines equivalent semi-join and anti-join subqueries into a single query block.  Pgs 183-184
    • Subquery removal using window functions – replaces subqueries containing aggregate functions with analytic functions and standard joins.  Pgs 184-185
    • Join elimination – eliminates the parent table from a view when a declared foreign key is present between the tables, and no columns are selected from the parent table in the query block that selects from the view.  Pg 185
    • Join factorization – avoids repeatedly accessing one of the row sources by rewriting UNION ALL type queries so that predicates are applied to a second table as well as the UNION ALL operation before joining to the other row source.  Pgs 185-186
    • Outer join to inner join – when possible, converts left/right joins to equijoins, possibly due to other predicates that conflict with the left/right join syntax.  Pg 186
    • Full outer join to outer join, UNION ALL, and antijoin.  Pg 187
    • Table expansion – using an index scan on a partitioned table when the index is unusable for some table partitions.
    • Set to join conversion – converts INTERSECT or MINUS type queries to standard join queries, must specify the SET_TO_JOIN hint.  Pg 189
    • Star transformation.  Pg 189, chapter 14
    • Query rewrite with materialized views – accesses data stored in a materialized view, rather than the base tables, even though the materialized view is not referenced in the SQL statement (Enterprise Edition feature).  Pg 189, chapter 15
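Two of the transformations listed above can be sketched concretely.  The table and column names (t1, id, n2) are my assumptions, not the book’s schema:

```sql
-- Count transformation: with a NOT NULL constraint on t1.id, the optimizer
-- rewrites COUNT(id) as COUNT(*).
SELECT COUNT(id) FROM t1;          -- treated as SELECT COUNT(*) FROM t1

-- OR expansion: an OR between predicates on different columns may be
-- rewritten as a UNION ALL of two query blocks, with LNNVL preventing
-- rows that satisfy both branches from being returned twice:
SELECT * FROM t1 WHERE id = 1 OR n2 = 5;
-- is transformed to (roughly):
SELECT * FROM t1 WHERE n2 = 5
UNION ALL
SELECT * FROM t1 WHERE id = 1 AND LNNVL(n2 = 5);
```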

Execution Plan Operations:

    • AND-EQUAL  Pg 331
    • BITMAP AND  Pg 331
    • BITMAP KEY ITERATION  Pgs 333, 334
    • BITMAP MINUS  Pg 331
    • BITMAP OR  Pg 331
    • CONNECT BY PUMP  Pg 339
    • CONNECT BY WITH FILTERING  Pgs 333, 338-341
    • COUNT STOPKEY  Pg 330
    • FILTER  Pgs 330, 333, 335-337
    • HASH JOIN  Pg 331
    • HASH JOIN ANTI  Pg 362
    • HASH JOIN ANTI NA  Pgs 347, 362
    • INDEX RANGE SCAN  Pg 327
    • INTERSECTION  Pg 331
    • MERGE JOIN  Pgs 331, 332, 333
    • MINUS  Pgs 331, 362
    • NESTED LOOPS  Pgs 333, 334-335
    • SELECT  Pg 327
    • SORT ORDER BY  Pg 327
    • SQL MODEL  Pg 331
    • STATISTICS COLLECTOR  Pgs 350, 351, 352
    • UNION-ALL  Pgs 331, 332, 333
    • UPDATE  Pgs 333, 337-338


Latches:

  • CACHE BUFFERS CHAINS  Pgs 666, 668
  • LIBRARY CACHE  Pgs 25, 436
  • SHARED POOL  Pg 25

Oracle Error Messages:

  • Error: cannot fetch last explain plan from PLAN_TABLE.  Pg 320
  • ORA-00600: Internal error code, arguments.  Pg 569
  • ORA-00904: Invalid identifier.  Pg 537
  • ORA-00931: Missing identifier.  Pg 379
  • ORA-00942: Table or view does not exist.  Pg 569
  • ORA-01013: User requested cancel of current operation.  Pg 259
  • ORA-01017: invalid username/password; logon denied.  Pg 69
  • ORA-01039: Insufficient privileges on underlying objects of the view.  Pg 300
  • ORA-01408: Such column list already indexed.  Pg 524
  • ORA-01422: Exact fetch returns more than requested number of rows.  Pg 322
  • ORA-01722: Invalid number error.  Pg 657
  • ORA-02097: Parameter cannot be modified because specified value is invalid.  Pg 295
  • ORA-03113: End-of-file on communication channel.  Pg 296
  • ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT.  Pg 296
  • ORA-07445: Exception encountered: core dump [exception].  Pg 569
  • ORA-12026: Invalid filter column detected.  Pg 596
  • ORA-12827: Insufficient parallel query slaves available error.  Pgs 624, 625
  • ORA-12838: Cannot read/modify an object after modifying it in parallel.  Pgs 640, 644
  • ORA-12839: Cannot modify an object in parallel after modifying it.  Pg 640
  • ORA-12850: Could not allocate slaves on all specified instances.  Pg 614
  • ORA-13541: System moving window baseline size greater than retention.  Pg 156
  • ORA-13717: Tuning Package License is needed for using this feature.  Pg 402
  • ORA-13859: Action cannot be specified without the module specification.  Pg 59
  • ORA-20005: Object statistics are locked.  Pgs 238, 263
  • ORA-23538: Cannot explicitly refresh a NEVER REFRESH materialized view.  Pg 588
  • ORA-26040: Data block was loaded using the NOLOGGING option.  Pg 642
  • ORA-32313: REFRESH FAST of <mview> unsupported after PMOPs.  Pg 594
  • ORA-38029: Object statistics are locked.  Pg 264
  • ORA-48913: Writing into trace file failed, file size limit [] reached.  Pg 62

Blog articles that reference the “Troubleshooting Oracle Performance, Second Edition” book:

FIRST_ROWS_n Optimizer Mode – What is Wrong with this Statement?

Book Review: Oracle Database 11gR2 Performance Tuning Cookbook (Part 2)

23 07 2012

July 23, 2012

Oracle Database Performance Tuning Test Cases without Many “Why”, “When”, and “How Much” Filler Details

(Back to the Previous Post in the Series)

This blog article contains the review for the second half of the “Oracle Database 11gR2 Performance Tuning Cookbook” book.  It has been nearly five months since I posted the first half of this review.  My full review of this book is roughly 26 typewritten pages in length.

This book definitely is not an easy read.  Instead, it is necessary to pick through the material in the book, in search of correct information, and then perform a bit of research to determine where the book’s contents deviated from describing actual Oracle Database behavior.  Unfortunately, the same review process was required when reading a handful of other Oracle Database performance related books, so this book is not unique in this regard.

Five months ago I submitted 21 errata items for the first 80 or so pages of this book.  As of July 23, 2012, the publisher’s website shows the following when attempting to find the errata for this book:

“No errata currently available.

Known errata are listed here. Please let us know if you have found any errata not on this list by completing the errata submission form. Our editors will check them and post them to the list. Thank you.”

The errata list found on the book’s errata page is disappointing!

Test case scripts are used throughout this book, and that is one of the biggest selling points of this book.  One of the problems, however, is that it appears that the primary goal of this book is to demonstrate the author’s 50+ scripts, show screen captures of each script’s execution, describe the action performed by each command in the script, and then describe the result of the script.  The all-important “how”, “when”, “how much”, and “why” components were largely excluded from the book.  There are also many errors, misstatements, and misinterpretations of the scripts’ output in the book – items that should never have survived the technical review cycle for the book.  That said, I have no desire to “bash” this book, or to discourage readers of this blog from buying the book.

This review currently excludes chapter 11 on the topic of “Tuning Contention” and a couple of pages in another chapter.  The format of this review mirrors the format used in several of my recent book reviews.

Data Dictionary Views:

  • V$SYSSTAT (pages 208, 400)
  • V$PGA_TARGET_ADVICE (page 210)
  • V$SORT_USAGE (page 252)
  • USER_TAB_HISTOGRAMS (pages 306, 318)
  • V$SGA, V$SGAINFO (page 376)
  • V$PROCESS (page 377)
  • V$DB_OBJECT_CACHE (pages 389, 394)
  • V$SQLAREA (page 395)
  • V$SHARED_POOL_ADVICE (page 395)
  • V$ROWCACHE (page 395)
  • V$DB_CACHE_ADVICE (page 400)
  • X$BH, DBA_OBJECTS (page 402)
  • ALL_INDEXES (page 419)
  • DBA_HIST_SEG_STAT (page 420)
  • V$INSTANCE_RECOVERY (page 429)


Initialization Parameters:

  • PGA_AGGREGATE_TARGET (pages 211, 215)
  • STATISTICS_LEVEL (page 215)
  • PLSQL_CODE_TYPE (page 276)
  • USE_STORED_OUTLINES (page 311)
  • RESULT_CACHE_MODE (page 344)
  • RESULT_CACHE_MAX_SIZE (page 345)
  • LOCK_SGA (pages 377, 381)
  • CURSOR_SHARING (pages 388, 399)
  • SHARED_POOL_SIZE (pages 388, 396)
  • TIMED_STATISTICS (page 412)


SQL Hints:

  • DYNAMIC_SAMPLING (page 304)
  • RESULT_CACHE (page 343)
  • NO_RESULT_CACHE (page 345)
  • PARALLEL (page 347)
  • APPEND (page 352, 355)

Error Messages:

  • ORA-4031: “unable to allocate X bytes of shared memory” (page 390)
  • RMAN-06207 (page 417)

Comments, Corrections, and Problems:

  • A script in the book attempts to set the SORT_AREA_SIZE to a value of 1000 (less than 1KB), after setting the WORKAREA_SIZE_POLICY parameter to MANUAL.  A side effect of this change is that the HASH_AREA_SIZE defaults to twice the SORT_AREA_SIZE, thus the change set the HASH_AREA_SIZE to 2000 bytes.  The execution plan printed in the book shows a single sort operation, and two hash join operations – thus the author is not just testing what happens when the SORT_AREA_SIZE parameter is set to less than 1/65 of its already small default value.  The script in the book orders the rows by AMOUNT_SOLD DESC, while the script in the script library orders the rows by TIME_ID ASC. (page 208)
  • The book states, “We then set a MANUAL policy, reserving the size of 1000 blocks of memory for sorting, with the following statements… ALTER SESSION SET SORT_AREA_SIZE = 1000; …”  The SORT_AREA_SIZE parameter is NOT specified in blocks, but instead in bytes. (page 212)
  • The book states, “In fact, the DB_BLOCK_SIZE for our database is 8 KB, which reserves 8192 KB of memory for the sort operation, but actually needs 24 MB of memory to complete the sort in-memory.”  If the unit of measure of the SORT_AREA_SIZE parameter were blocks (it is bytes, not blocks), 8 KB * 1000 would be 8000 KB, not 8192 KB.  The 24 MB figure from the autotrace execution plan is only an estimate of the temp space requirements for the query – my test execution showed 4096 KB was actually required. (page 213)
  • The book states, “If we use an Oracle Shared Server Connection, the User Global Area is not inside the PGA, but it’s in the Shared Pool. Hence, an application which often performs sorting should not use Oracle Shared Server Connections.”  The second sentence is not a logical conclusion of the first sentence.  The first sentence is not completely accurate, as the UGA could be in the large pool, rather than the shared pool. (page 215)
  • The script output shows a HASH UNIQUE operation in the execution plan when the author issued a SQL statement with a DISTINCT clause to demonstrate that a sort to disk would be performed when DISTINCT is specified in a SQL statement – HASH UNIQUE is not a sort operation.  The results of this test case script conflict with the second paragraph on page 207. (page 218)
  • The script output shows a HASH GROUP BY operation in the execution plan when the author issued a SQL statement with a GROUP BY clause to demonstrate that a sort to disk would be performed when GROUP BY is specified in a SQL statement – HASH GROUP BY is not a sort operation.  The results of this test case script conflict with the second paragraph on page 207. (page 219)
  • The book states, “Select the first 10 customers, ordered by their age, from youngest to oldest.”  It might be better to state that 10 customers will be selected at random, specifically those customers with rows in the first set of table blocks – this statement was clarified on page 226. (page 224)
  • The book states that changing the where clause to “WHERE ROWNUM < 11” will permit the query using RANK to obtain the same result as earlier.  There are a couple of reasons why this statement is incorrect: 1) the order of the rows is COINCIDENTALLY the same as the value assigned by RANK, but that will not always be true.  2) there are 31 rows that have a RANK of 1, so even if the rows were produced in sequential rank order, there is no guarantee that the same 10 of 31 rows will appear as had happened earlier. (page 227)
  • The example in the book shows that an “INDEX FULL SCAN (MIN/MAX)” operation is used when an index is present on a column to which the MAX or MIN function is applied.  The book also shows that if both the MIN and MAX values of a column are requested in a single SELECT statement, the optimizer switched to a “TABLE ACCESS FULL” operation, but the book did not explain why that happened.  The work-around suggested by the book almost works, except that the solution produces two rows rather than a single row. (pages 234-236)
  • While the author’s statement that “the myth SELECT (1) is faster than SELECT(*) is wrong – they are the same query” is headed in the right direction, the approach is wrong.  It is simply insufficient to look at the execution plan to determine that two SQL statements are the same (and are performing the same work).  Had the author suggested generating a 10053 trace for both SQL statements, he would have had the opportunity to mention that the query optimizer silently transformed the “SELECT COUNT(1)” query into “SELECT COUNT(*)” – so yes, the queries are the same, but the book’s approach to determining that the queries are the same is not correct. (page 237)
  • The author created a temporary tablespace with a 16MB uniform extent size without identifying why that extent size was selected rather than a 1MB default extent size. (page 249)
  • The book states, “If we haven’t defined a default temporary tablespace for a user, the SYSTEM tablespace is used to store SORT SEGMENTS.”  The statement in the book appears to be out of date – the Oracle Database 11.2 documentation states, “Users who are not explicitly assigned a temporary tablespace use the database default temporary tablespace, which for new installations is TEMP.” (page 251)
  • The book’s discussion of defining the INITIAL and NEXT storage parameters for temporary tablespaces as a multiple of the SORT_AREA_SIZE parameter seems to be out of place in an Oracle Database 11g R2 performance tuning book – dictionary managed tablespaces were deprecated with the release of Oracle Database 9.2, and the effect of these parameters is different in locally managed tablespaces. (page 252)
  • The book states, “The PCTINCREASE parameter cannot be specified when we use the AUTOALLOCATE option for the tablespace.”  This statement appears to be incorrect – the Oracle Database documentation states that a specified value for PCTINCREASE will be ignored when creating an AUTOALLOCATE tablespace. (page 252)
  • The book states, “We can have different tablespaces for each user and we can query the dynamic performance view V$SORT_USAGE to retrieve the details about the active disk sorts occurring in the instance.”  This statement is a bit confusing, so it is probably best to break the sentence into two separate logical sentences for analysis.  The first half of the statement seems to suggest that a separate temp tablespace should (or could) be created for each user – I am not sure that this is the author’s intended interpretation; the default temporary tablespace may be set at the user level so that not all users are required to use the same (default) temporary tablespace.  For the second logical sentence, the V$SORT_USAGE performance view was deprecated with the release of Oracle Database 9.2, replaced with V$TEMPSEG_USAGE.  In Oracle Database 11.2, the V$SORT_USAGE view is based on the GV$SORT_USAGE view which selects from X$KTSSO (SELECT VIEW_DEFINITION FROM V$FIXED_VIEW_DEFINITION WHERE VIEW_NAME=’GV$SORT_USAGE’;).  The GV$SORT_USAGE view definition, which is pointed to by both the V$SORT_USAGE and V$TEMPSEG_USAGE synonyms, indicates that the SEGTYPE column may be one of SORT, HASH, DATA, INDEX, LOB_DATA, LOB_INDEX, or UNDEFINED.  The potential values of the SEGTYPE column suggest that the view shows a variety of activity in addition to what the book mentions (the WHERE clause should be adapted to restrict the rows returned from this view, so that the results are consistent with this chapter’s contents).  The V$SORT_USAGE view also shows entries for aborted SQL statements, for instance when Ctrl-C is pressed in SQL*Plus and another SQL statement has not been executed by the session. (page 252)
  • The book states, “Please note that – due to their nature – temporary tablespaces are not affected by backup and restore operations.”  This is a vague statement at best, and the accuracy of the statement depends in part on the type of backup performed (online or offline, user-managed or RMAN hot backup, etc.).  An online RMAN backup will not back up the tempfiles of locally managed temporary tablespaces; however, when those backed up databases are restored and recovered, Oracle Database 10.2 and later will recreate the temporary tablespace tempfiles when the database is opened. (page 252)
  • The book states, “Using the LIMIT clause, we can see a slight decline in performance, so why use it?”  The screen captures in the book actually show the test without the LIMIT clause completing in 0.44 seconds, and the test with the LIMIT clause completing in 0.30 seconds, so in this case the LIMIT clause appears to have improved performance. (page 261)
  • Better clarification is needed in the section describing the use of the PL/SQL NOCOPY clause.  The book states that when a parameter variable is passed by reference, a value change to that variable actually changes the value of a copy of that variable, without affecting the originally passed in variable – this concept is backward.  The book should also make an effort to clarify the statement that the NOCOPY clause is a hint, touching on some of the causes of the NOCOPY clause being intentionally ignored as described in the documentation. (page 266)
  • The description of the CLUSTER hint is almost a verbatim copy of the description from the Oracle Database documentation.  The description in the book is a little vague regarding the usage of that hint, considering that the author lists the hint among the most commonly used hints.  The book did not mention this fact found in the documentation: “This hint applies only to tables in an index cluster.” (reference page 296)
  • The book’s description of the LEADING hint is misleading if there are more than two tables (or other row sources) specified in the SQL statement.  The book states, “Asks the optimizer to join table_name1 and table_name2 in the given order.”  First, a hint is a directive – it does not “ask” of the optimizer, it “demands” of the optimizer (this “Asks” wording is used in several of the hint descriptions).  The LEADING hint does not just specify that two tables should be joined in the listed order.  That hint states that the join order should begin with the first table (or other row source) specified in the hint, joining each successive table (or other row source) listed in the hint in the specified order. (page 297)
  • The book states that the default value for the OPTIMIZER_DYNAMIC_SAMPLING parameter is 2, which is true for Oracle Database 10.1 and higher.  However, the book also describes dynamic sampling as being introduced in Oracle Database 9.2.  Oracle Database 9.2 had a default value of 1 for this parameter.  The book offered no description of what happens when a value, such as 8, is used in a DYNAMIC_SAMPLING hint, other than 0 means that dynamic sampling should not be used, and 10 means “use aggressive sampling”. (page 304)
  • The book states, “Histograms are useful only on indexed columns containing skewed values, because they help the optimizer to choose whether to use the index or not to access values.  Obviously, if the frequency for a specific value is very high, using the index won’t be the best choice.”  “Don’t use histograms in situations where: 1. the column is not used in the WHERE clauses of the queries; 2. the data in the column is uniformly distributed; 3. bind variables are used when comparing against the column.”  The quoted section from the book seems to be closely derived from a page of the Oracle Database 8.1.7 documentation which was modified and finally removed from later releases of the documentation.  Histograms are potentially useful when created on non-indexed columns, with the goal of improving cardinality estimates, which are used to determine join order and join type when multiple tables are referenced in a SQL statement.  Histograms are also potentially useful when the data is not skewed, helping to improve the cardinality estimates of range scans when outlier values are present (such as an out of range value taking the place of NULL values).  Regarding the statement, “Obviously, if the frequency for a specific value is very high, using the index won’t be the best choice,” the frequency of a value in a column is but one of several factors that the optimizer considers, but a high frequency of a value is not necessarily a problem if the use of the index avoids the need to visit the related table.  Regarding the bullet point that describes when histograms should not be used, “The column is not used in the WHERE clauses of the queries,” with the introduction of ANSI join syntax in Oracle Database 9.0.1, a column that only appears in JOIN clauses could potentially benefit from a histogram.
Prior to the introduction of adaptive cursor sharing in Oracle Database 11.1, bind variables when combined with bind variable peeking and histograms could generate execution plans that, while efficient for the first execution, were potentially considerably inefficient when different bind variable values were submitted during later executions of the SQL statement.  Adaptive cursor sharing attempts to address the issues caused by the combination of bind variable peeking and histograms.  In cases where the same value is always submitted for a comparison with a submitted bind variable, a histogram on the comparison column could significantly improve the optimizer’s cardinality estimates such that a more efficient execution plan is developed (consider a case when an index on a STATUS column is present, and that STATUS column contains the value of either PROCESSED or NOT PROCESSED).  The book only mentioned that adaptive cursor sharing would be discussed in a later recipe – there was no comment in this section of the book regarding whether adaptive cursor sharing helped or further hindered the bullet-pointed items in the book. (page 309)
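The STATUS example could be sketched as follows (the ORDERS table name and the 254-bucket request are assumptions, not from the book):

```sql
-- Request a histogram on the two-value STATUS column so the optimizer
-- can tell the popular 'PROCESSED' value from the rare 'NOT PROCESSED'.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'ORDERS',
    method_opt => 'FOR COLUMNS STATUS SIZE 254');
END;
/
```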
  • The discussion of stored outlines missed a couple of key items.  One item missed is that other sessions will not use stored outlines, unless the USE_STORED_OUTLINES parameter is set to a value other than FALSE at either the session (after the session connects) or system level (after each bounce of the database) because the parameter cannot be included in pfiles (init.ora) or spfiles.  In the demonstration script, the author displayed the execution plan for a query, then displayed the execution plan for the same query modified with an index hint, before creating a stored outline for the query; it appears that while creating the script, that the author thought that the unhinted query would use the stored outline for the hinted version of the query – that will not happen without implementing a technique that was not discussed in the book.  The book states, “If we want to avoid changes to the execution plan, we can store them in stored outlines – the plans in stored outlines don’t change…”  A stored outline does not contain an execution plan, it stores a series of hints (visible in DBA_OUTLINE_HINTS), and as stated in the Oracle Database documentation, “The degree to which plan stability controls execution plans is dictated by how much the Oracle Database hint mechanism controls execution plans…” (pages 310-317)
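A minimal sketch of the activation step that the book omitted (assuming outlines in the DEFAULT category):

```sql
-- USE_STORED_OUTLINES cannot be set in a pfile or spfile, so it must be
-- re-issued in each session, or at the system level after every bounce.
ALTER SESSION SET USE_STORED_OUTLINES = TRUE;
-- or, instance-wide until the next bounce (a category name may be
-- specified in place of TRUE):
ALTER SYSTEM SET USE_STORED_OUTLINES = TRUE;
```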
  • The AUTOTRACE generated plans for a SQL statement using bind variables, intended to show adaptive cursor sharing in action, may in fact show a different execution plan than the actual execution plan that will be used because AUTOTRACE handles all bind variables as VARCHAR2, does not perform bind variable peeking, and does not pull an existing execution plan from the library cache. (pages 321-322)
  • The TKPROF output shows two executions, implying that the TKPROF output shows the output for test script lines 16 and 17, not just 16 as stated in the book. (page 325)
  • The TKPROF output shows that the execution plan changed with the different bind variable value, which means that the author must have re-executed the SQL statement again after the first execution with a bind variable value of 1 resulted in an inefficient execution plan.  The book does not mention that adaptive cursor sharing only takes effect after the runtime engine first executes the SQL statement at least once with an inefficient execution plan. (page 326)
  • The book states, “Pre Oracle Database 10g, bind variables and histograms could not be used together.”  This is a confusing sentence – is the book implying that histograms could not be utilized on any columns, even those that are only compared with literals, if the SQL statement uses bind variables?  Bind variable peeking was introduced with Oracle Database 9.0.1, which is sufficient to allow the optimizer to take advantage of histograms on columns referenced in the WHERE clause when those columns are compared with bind variables. (page 326)
  • The book states that in an OLTP type database, “we probably want to disable the Adaptive Cursor Sharing feature to eliminate the related overhead.”  The book then suggests changing the CURSOR_SHARING parameter to a value of EXACT, and the _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL parameter to a value of NONE.  First, the book should not suggest altering a hidden parameter without mentioning that hidden parameters should only be changed after consulting Oracle support.  Second, it is not the CURSOR_SHARING parameter that should be set to a value of EXACT, but the _OPTIMIZER_ADAPTIVE_CURSOR_SHARING parameter that should be set to a value of FALSE (see Metalink (MOS) Doc ID 11657468.8).  Third, the blanket statement that adaptive cursor sharing should be disabled in OLTP databases seems to be an incredibly silly suggestion for any Oracle Database version other than (this version contained a bug that led to an impressive number of child cursors due to repeated executions of a SQL statement with different bind variable values). (page 327)
  • (Skipped pages 328-340)
  • The example demonstrating the client-side result cache was of limited use because the author specified “SET AUTOT TRACE EXP” in the script, meaning that neither the query without the RESULT_CACHE hint, nor the query with the hint actually executed.  For the example to be useful, autotrace should have been disabled, DBMS_XPLAN.DISPLAY_CURSOR should have been used to display the ALLSTATS LAST statistics, and the query with the RESULT_CACHE hint should have been executed twice. (pages 342-344)
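The corrected test might look like this sketch (the SALES table is an assumption; GATHER_PLAN_STATISTICS is added so that ALLSTATS LAST reports actual row-source statistics):

```sql
SET AUTOTRACE OFF
-- Execute twice; the second run should be satisfied from the result cache.
SELECT /*+ RESULT_CACHE GATHER_PLAN_STATISTICS */ COUNT(*) FROM sales;
SELECT /*+ RESULT_CACHE GATHER_PLAN_STATISTICS */ COUNT(*) FROM sales;
-- Show the actual row-source statistics for the last execution:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```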
  • When trying to compare the relative performance of the PARALLEL hint, rather than using SET TERM OFF and SPOOL /DEV/NULL to avoid outputting the results to the screen, and selecting SYSDATE for timing information, it probably would have been better to use SET AUTOTRACE TRACEONLY STATISTICS and SET TIMING ON.  Caching at the operating system, SAN, and disk level also needs to be considered.  For example, simply executing the SQL statement with the PARALLEL hint first might have required that version of the SQL statement to take longer than the unhinted SQL statement (hinted: 12.81 seconds, unhinted: 10.05 seconds; hinted: 10.80 seconds, unhinted: 7.58 seconds; hinted: 9.15 seconds, unhinted: 13.00 seconds). (pages 346-349)
  • The book states, “In OLTP environments parallel executions cause too many locks and consume more resources – limiting the scalability of the application.”  The author should clarify the statement that parallel executions cause too many locks – is the author describing parallel DML, parallel DDL, or parallel query? (page 350)
  • The book states, “Another issue can arise when using direct path insert – remember that conventional load inserts in memory – and only the redo log is written to disk on commit.”  This statement is confusing and misleading.  The contents of the redo log buffer are flushed to disk on commit – a commit in any session.  But the redo log buffer is also flushed every 3 seconds, and when it is at least 1/3 full.  The section of the author’s test script without the APPEND hint generated roughly 37 MB of redo (in a NOARCHIVELOG mode test database) when I tested, and the book shows that the insert required 5.3 seconds during the author’s test run.  DBWR is free to write blocks modified by a session to disk before the session issues a commit.  The demonstration should have mentioned that triggers and foreign key constraints on the table will prevent the APPEND hint from working as expected, and that more redo would be expected to be written in an ARCHIVELOG mode database unless NOLOGGING is set on the table or NOLOGGING is specified in the INSERT statement. (page 355)
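A sketch of the direct path insert caveats described above (table names are invented):

```sql
-- Optional: minimize redo for the load in an ARCHIVELOG mode database.
ALTER TABLE big_table NOLOGGING;
-- Direct-path insert; silently falls back to a conventional insert if the
-- target table has row-level triggers or enabled foreign key constraints.
INSERT /*+ APPEND */ INTO big_table SELECT * FROM source_table;
-- Blocks loaded above the high-water mark cannot be queried until commit.
COMMIT;
```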
  • The book states, “The NOLOGGING clause doesn’t write redo log entries for the operation, which is unrecoverable due to this behavior.”  The wording of this statement makes it a little difficult to understand the point of the statement.  Additionally, including the NOLOGGING clause in the CREATE TABLE AS SELECT statement does NOT prevent the generation of redo log entries – 178KB of redo was written when I reproduced the author’s test case.  That clause will minimize the amount of redo generated, not eliminate the generation of redo. (page 358)
  • The book includes a good test case example showing the effects of row level triggers and excessive indexing.  The author possibly could have made the performance difference point a bit clearer (or more dramatic) by using single row inserts, rather than INSERT INTO … SELECT. (reference pages 359-365)
  • The book states, “We will execute SQL*Plus from the /oracle/home/ directory…” – it appears from looking at the script that the author intended to write “/home/oracle/ directory”.  The bang (!) shortcut for the SQL*Plus host command might not work on some operating systems such as VMS and Windows, so ideally the author would have spelled out the word host (or the first two characters of the word) in the scripts. (pages 366-367)
  • The book states that automatic memory management should not be used with Oracle Multithreaded server.  Considering that the “Multithreaded server” name was replaced with “Shared server” with the release of Oracle Database 9.0.1, and Automatic Memory Management is a feature introduced in Oracle Database, this statement in the book is a bit confusing. (page 381)
  • The select from V$SQL_PLAN using only HASH_VALUE in the WHERE clause is very odd; it would seem that using the SQL_ID and CHILD_NUMBER columns would make more sense.  Considering that the book is targeting Oracle Database 11.2, using DBMS_XPLAN rather than directly querying V$SQL_PLAN probably would have yielded more useful results than simply listing the operation, owner, and object name. (page 384)
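For example, a sketch of the DBMS_XPLAN alternative (the sql_id value is invented):

```sql
-- Display the plan for one specific cursor by sql_id and child number,
-- rather than joining V$SQL_PLAN columns manually:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0fu4zg9s3mnb1', 0, 'TYPICAL'));
```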
  • The sample code showing how to pin a cursor in the library cache using DBMS_SHARED_POOL.KEEP is missing a single quote after the SQL statement’s hash value and the letter C (or any letter other than P, T, R, or Q) in single quotes.  As a result, the error “ORA-01756: quoted string not properly terminated” will be returned when that sample code is executed.  In addition to showing the sample code, the author should have provided guidance regarding why this feature should or should not be used, and when the feature should or should not be used (this is a frequent problem throughout the book – showing the how without showing the “why”, “when”, and “how much” details). (pages 390, 394)
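A corrected call might look like this sketch (the address/hash_value pair is invented; the 'C' flag identifies the object as a cursor):

```sql
-- Locate the cursor in the shared pool, then pin it in the library cache.
SELECT address, hash_value
FROM   v$sqlarea
WHERE  sql_text LIKE 'SELECT /* keep_me */%';

EXEC DBMS_SHARED_POOL.KEEP('000000008E4F6A10,3456789012', 'C')
```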
  • The book states, “The UGA is located in the PGA when using dedicated server environments, and inside the Shared Pool when using shared servers.”  A similar sentence appears on the previous page.  It would be a bit more accurate to state that in a shared server environment, the Large Pool (if configured) contains the session UGA.  With a value set for the SGA_TARGET parameter, the Large Pool should be configured automatically, with its size indicated by the __LARGE_POOL_SIZE hidden parameter. (reference page 397)
  • The query that calculates the MAX of the “session uga memory max” statistic found in V$SESSTAT is limited in usefulness for a couple of reasons: 1) PGA memory used by sessions is allocated as needed and deallocated when no longer needed – summing the maximum value for the various sessions does not indicate the concurrent high watermark of session memory usage, rather it overstates the high watermark; 2) when a session disconnects, the statistics representing the PGA memory used by that session will disappear from V$SESSTAT. (page 399)
  • The formula for calculating the buffer cache hit ratio, while it appeared in the Oracle Database 9.2 documentation, is incorrect.  The author used the “physical reads”, “physical reads direct”, “physical reads direct (lob)” and “session logical reads” statistics from V$SYSSTAT, while the Oracle Database 11.2 documentation uses the statistics “physical reads cache”, “consistent gets from cache”, and “db block gets from cache” for calculating the buffer cache hit ratio.  The usefulness of the derived metric is limited at best. (page 400)
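As a sketch, the 11.2 formula built from those statistics:

```sql
-- Buffer cache hit ratio per the Oracle Database 11.2 documentation;
-- as noted above, the derived value is of limited diagnostic use.
SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads cache'
AND    db.name  = 'db block gets from cache'
AND    con.name = 'consistent gets from cache';
```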
  • The book states, “The main use of the LRU list is to add blocks to the LRU end using a full table scan, while the normal operations add blocks to the MRU end of the list, and hence they are quickly replaced by blocks required for subsequent operations.”  This statement is similar to one made on page 170 of the book.  Recent releases (9.0.1 and above?) of Oracle Database perform mid-point insertions, even with full table scans.  If the CACHE keyword is specified when an object is created or altered, then that object’s blocks will be placed at the MRU end of the list when the object’s blocks are accessed from disk. (reference page 408).
  • The book states, “On this basis, we will establish a strategy to distribute the Oracle files on different disks to help improve performance.”  Perhaps a suggestion to use SAME (stripe and mirror everything), as recommended by Oracle Corporation, would be a better approach. (page 412)
  • The book states that the TIMED_STATISTICS parameter should be set to a value of TRUE in order to diagnose I/O performance issues.  While it is true that this parameter should have a value of TRUE, that parameter should default to a value of TRUE when the value of the STATISTICS_LEVEL parameter is set at its default value of TYPICAL, or adjusted to a value of ALL. (page 412)
  • The book states, “The following steps will show how to destribute Oravle files to increase performance.”  That sentence includes two uncorrected typos. (page 412)
  • The query of V$FILESTAT and the query of V$TEMPSTAT are of limited value because there was no attempt to calculate delta values for the statistics.  The queries do not take into consideration the effects of backup activity and statistics collection, because the delta period is since the opening of the database. (page 413)
  • Rather than just arbitrarily moving the redo logs from one directory belonging to the database TESTDB to another directory apparently on the same hard drive volume, the directory which belongs to the database TESTDB2 (assuming that optimal flexible architecture approaches are in use), the author should explain the factors that influence the decision whether or not to move the redo logs. (page 413)
  • Justification should be provided for changing the archived redo log locations.  Justification should be provided for moving the datafiles for a tablespace – where is the evidence that the current location of the datafiles is causing a performance problem that will be addressed by changing the directory in which the datafiles are located? (page 414)
  • When describing the output of the V$FILESTAT query, the author states, “For each file, we can see the number of physical reads and writes performed and the time spent (in milliseconds) for these operations.”  V$FILESTAT displays the time in centiseconds (1/100 of a second) not in milliseconds (1/1000 of a second).  The output from V$FILESTAT shows that none of the files were heavily accessed, so there is probably little value in using the output to make the decision that one or another datafile should be relocated. (page 415)
  • The book states, “When we heavily access data files and we know which file is executing the query against the statistics (shown in step 2 and step 3), we can gain performance by separating the heavily-accessed data files on different disks.”  This statement is confusing – a file does not execute a query.  Just because the blocks in two datafiles are heavily accessed, that does not mean that there will be contention between the datafiles if those datafiles reside on the same disk volume.  Considering the number of times the author recommended relocating files to “different disks” it might be interesting to understand how many disk volumes the author considers as being essential to efficient operation of the database. (page 417)
  • The book states, “We distribute the data files based on disk controller allocation – distributing them on different mount points managed by the same disk controller brings no performance improvement.”  This statement is limited in accuracy, and because the sentence is stated as an absolute, it is incorrect.  The current and maximum throughput of the disk controller are but two of several factors that might have an impact on performance. (page 418)
  • The book states, “The CKPT, LGWR, and ARCn processes can only read and write control files.”  I think that the author intended something other than what is written in this quote – otherwise someone might wonder which process writes to the online redo logs, and which process archives those redo logs once filled. (page 418)
  • The book states, “LGWR writes online redo logs sequentially, using a RAID 5 on the disks, where online redo logs are stored.”  I think that this sentence requires a lot of reworking before it is understandable.  The book goes on to state, “This can lead to poor performance due to the slower write times that characterize this type of disk array [RAID 5] – using RAID 0+1 is preferable.”  Suggesting the more robust RAID 10, rather than RAID 0+1, would be a good idea in this sentence. (page 419)
  • The author suggested querying the DBA_HIST_SEG_STAT view without mentioning the licensing requirements for accessing that AWR view. (page 420)
  • The book states, “If we have tables and indexes on the same disk, we need to read and write in two different places on the same disk.  By dividing the work between two disks, we can perform an index range scan faster than when the index and table are on the same disk.”  While a statement similar to this one has appeared in a couple of older Oracle Database related books, the statement has been shown to be incorrect for various reasons. (page 422)
  • The book states, “The latter [hardware-based RAID] offers guaranteed performance and no overhead on the CPU.”  Hardware-based RAID does not offer guaranteed performance, although it should have less CPU overhead than software based RAID. (page 423)
  • The book states, “Starting with step 1, in terms of performance the best RAID level for an Oracle database is RAID 0+1, also known as RAID 10.”  RAID 0+1 and RAID 10 are not the same – the two approaches have different fault tolerances.  The same mistake is made on page 425.  (page 423)
  • The book states, “It’s not a good idea to store redo logs and temporary segments on RAID level 1 configurations, because redo logs should be multiplexed on different disks, and there is no need to protect temporary segments from failures.”  Using multiple RAID 1 arrays for the online redo logs is certainly possible – it is not clear if the book suggests that the location where the online redo logs are located should not be protected by one of the redundant RAID levels (RAID 1, RAID 10, etc.).  It could easily be argued that the tempfiles should be located on an array with a redundant RAID level; it is true that the tempfiles can be recreated, but losing the drive hosting the tempfile could be disastrous for the connected sessions. (page 424)
  • The description of RAID 5 should have described how losing one drive in the array affects performance, and how the performance is affected when a second drive in the array is lost. (page 424)
  • The book states, “A checkpoint is used to ensure consistency in the database; during this operation, all data files are synchronized with the data blocks in memory.”  This statement needs a bit of clarification, starting with the meaning of “synchronized with the data blocks in memory.”  There are several types of checkpoints, including an object level checkpoint, which takes place prior to the execution of a parallel query for all tables that are referenced in the query. (page 428)
  • The script demonstrating how checkpoints work has a couple of flaws.  The script changes the value of the LOG_CHECKPOINTS_TO_ALERT parameter with a scope of SPFILE, and does not mention that the database must be bounced after executing this command.  The script issues an “ALTER SYSTEM SWITCH LOGFILE” command, rather than an “ALTER SYSTEM CHECKPOINT” command. (page 428)
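A corrected version of the demonstration might be sketched as:

```sql
-- LOG_CHECKPOINTS_TO_ALERT is dynamic, so SCOPE=BOTH avoids a bounce:
ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE SCOPE=BOTH;
-- Force a full checkpoint rather than merely switching logfiles:
ALTER SYSTEM CHECKPOINT;
```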
  • The book states, “In the following screenshot, you can see that the checkpoint information was written to the alert log.”  The screen capture shows the log switch, but not the checkpoint message. (page 430)
  • The book essentially states that the value of the LOG_CHECKPOINT_INTERVAL multiplied by the redo block size (typically 512 bytes, but may be 1024 or 4096 in later releases) cannot exceed 90% of the smallest redo log file – that statement conflicts with the Oracle Database 10.2 documentation.  Regardless of that fact, both the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT parameters described on the page are deprecated as of Oracle Database 10.1 per Metalink (MOS) Doc ID 472821.1. (page 431)
  • The book states, “The symptoms of I/O issues, related to slow disks on which redo log files are written, are highlighted by a high value for log file sync statistic. Often a high value for the log file parallel write is confirmed by a high value for this parameter.”  The log file sync wait event probably should not be referred to as either a statistic or a parameter, because those labels have very specific meaning in Oracle Database terminology.  The excessive time spent in the log file sync wait event could be caused by several factors other than slow disks where the redo logs are located, including excessive CPU competition and simply excessive COMMITs (or ROLLBACKs) issued by a session. (page 434)

Book Review: Oracle Database 11gR2 Performance Tuning Cookbook (Part 1)

3 03 2012

March 3, 2012

Oracle Database Performance Tuning Test Cases without Many “Why”, “When”, and “How Much” Filler Details

(Forward to the Next Post in the Series)

I ordered the “Oracle Database 11gR2 Performance Tuning Cookbook” book shortly after it became available for purchase.  I was very curious to see how the book compared with the similarly titled “Oracle Database 11g Performance Tuning Recipes” book, as well as some of the other Oracle Database performance books that are on the market.  Packt is a fairly new book publisher, and this book marks the first Packt book in my collection.

The author of this book does not appear to be widely known in the international Oracle Database community, although it does appear that the author is an active reviewer of SQL Server and programming books on an Italian programming focused website.  The author’s LinkedIn page indicates that he obtained OCA and OCP certification in 2002 and 2003, respectively, has a variety of programming experience, and currently is an IT Manager.

One important characteristic of this book that is missing from some of the other Oracle Database performance focused books on the market is the extensive use of test case scripts throughout most of the book that allow the reader to reproduce the performance changes mentioned in the book, in the reader’s Oracle Database environments.  The test case scripts, related screen captures, and author’s explanations of the results are both a blessing and a curse for this book.  It appears that the author used a single Amazon Elastic Compute Cloud hosted database instance with only one set of instance parameters and system statistics for the various test case results and the author’s descriptions of the expected outcome when the inputs in the test case script are provided.  Had the author re-executed the test case scripts in another Oracle Database environment, the author probably would have written the explanations that follow the test case scripts quite differently.  It is not uncommon for 80% of some of the book pages to be consumed by one or two SQL*Plus screen captures; combined with the slightly larger font sizes, double-spacing between paragraphs, and apparent one and a half spacing between lines in code sections, the technical content in the book is a bit more limited than the page count might suggest.

So, how well did the book’s contents meet the level of expectations provided by the book’s front cover and the publisher’s description of the book?  One of the bullet pointed descriptions of the book reads, “Avoid common myths and pitfalls that slow down the database.”  Unfortunately, the book reintroduces several myths and inaccurate conclusions about Oracle Database that have diminished in frequency during the last 10+ years.  Some of the information in the book is of good quality.  However, the significant number of inaccurate, vague, misleading, and/or over-generalized facts in this book suggests that the author of this book may have not received sufficient guidance from Packt and the four technical reviewers of the book.  The book publisher’s site currently lists no errata for the book, even though I personally submitted 21 errata items to the publisher’s errata reporting site.

The author’s native language is obviously not English, so it is probably to be expected that some of the sentences in the book are incomprehensible.  Yet, there are also sentences in the book that use completely different phrasing, close to that of a person who double-majored in English and computer science with a focus on Oracle Database.  The consistent usage of the term “fields” in some sections of the book, with the consistent usage of the term “columns” in other sections of the book is but one example of the style shift that is present in the book.  Some of the sentences found in the book are oddly familiar, and although I was not able to identify the original sources of all of the oddly familiar sentences, I did manage to locate a few.  What constitutes plagiarism in an Oracle Database book, and how much change is required to the original material to avoid the plagiarism label?  Would slightly reformatting a section of text to replace dashes with colons be sufficient to avoid the label?  Would changing the order of some sentences and eliminating other sentences be sufficient to avoid the label?  Would performing simple word substitutions here and there, or shortening sentences be sufficient to avoid the label?  I am not suggesting that there is rampant plagiarism in the book, but one does need to question when that plateau is reached in a book about Oracle Database.

While in some respects this book is more useful to the reader than the “Oracle Database 11g Performance Tuning Recipes” book due to the inclusion of test cases, both books seem to omit the reasoning behind why and when someone might consider performing the 80 or so tasks/recipes mentioned in the books.  Vague, inaccurate, over-generalized, and out of date descriptions of Oracle Database behavior are limiting factors of both books.  This review is quite long, and likely will not appear in full on Amazon – see my blog for the full review.

Data Dictionary Views:

  • DBA_VIEWS (page 20)
  • V$FIXED_TABLE (page 21)
  • V$LIBRARYCACHE (page 52)
  • V$STATNAME, V$MYSTAT (page 53)
  • SYS.SEQ$ (page 65)
  • INDEX_STATS (pages 127, 128)
  • V$SYSSTAT (page 160)
  • V$SESSION (page 205)

Initialization Parameters:

  • CURSOR_SHARING (pages 9, 38)
  • TIMED_STATISTICS (pages 20, 201)
  • STATISTICS_LEVEL (pages 29, 32)
  • DB_16K_CACHE_SIZE (page 84)
  • SQL_TRACE (page 202)

Hints:

  • APPEND (page 72)
  • INDEX (page 121)

Comments, Corrections, and Problems:

  • The book states, “The first rule in writing applications which connect to an Oracle Database is to always use bind variables, which means not to include parameters in SQL statements as literals.”  The statement should be clarified that this is a general recommendation.  There are times when literals should be used rather than bind variables, for instance if there are very popular and unpopular values in a column, it might be wise to prevent the sharing of execution plans when a very popular or very unpopular value is used in the WHERE clause.  A correction/clarification is provided on page 51 (page 8).
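A sketch of that exception (the ORDERS table and STATUS values are invented):

```sql
-- With a histogram on STATUS, literals allow a per-value plan choice:
SELECT * FROM orders WHERE status = 'PROCESSED';      -- popular value
SELECT * FROM orders WHERE status = 'NOT PROCESSED';  -- rare value
-- A bind variable would force both executions to share one plan:
SELECT * FROM orders WHERE status = :status;
```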
  • Steps for creating a database with the Oracle Database Configuration Assistant seem to be out of place in a performance tuning book (pages 17-19)
  • Uses the term “fields” where the term “columns” should be used (page 21).
  • The book demonstrates the use of ANALYZE TABLE … COMPUTE STATISTICS, and DBMS_UTILITY.ANALYZE_SCHEMA to collect object statistics.  The book states that ANALYZE is retained for backward compatibility, but the book provides no warning that using ANALYZE to collect statistics could be problematic since the release of Oracle Database 8.1 (reference page 21).
  • The book uses the word “elaborate” rather than “create” or “generate” (pages 24, 26, 27, 31, 37)
  • The book demonstrates the use of AWR without first mentioning the licensing requirements of that feature (pages 30-31).
  • Word substitution error: “… and we experiment a lack of performance in another period, we can elaborate two reports…” (page 31)
  • The book demonstrates the use of ADDM without first mentioning the licensing requirements of that feature.  The book also states, “ADDM is enabled by default in Oracle Database 11g; it depends on two configuration parameters…”  Unlike with Oracle Database 10.1 and 10.2, ADDM is not enabled by default in the Standard Edition of Oracle Database 11.1 or 11.2, nor can it be legally enabled on the Standard Edition.  While ADDM is enabled by default in the Enterprise Edition 11.1 and 11.2, it cannot be legally used without a Diagnostic Pack license (pages 32-35).
  • The book suggests the system-wide use of the deprecated SIMILAR value for the CURSOR_SHARING parameter as one of two solutions to address a hard parsing problem in a test case script (page 38).
  • The book states, “Now the Soft Parse is 97.84 percent.”  The output shown in the book actually indicates a Soft Parse percent of 99.20.  The instance efficiency numbers in the output are identical to those found on page 40, so this might be an indication of a copy-paste error (page 39).
  • The book states, “If the PreparedStatement is not closed, it can be executed multiple times – changing the value assigned to bind variables – and only a ‘light’ soft-parse will occur, with no syntax and semantic check.”  If the SQL statement is held open – there will NOT be a “light” soft-parse (session cached cursors are not discussed in this section of the book, which would allow a “light” soft-parse if the cursor is NOT held open) (page 52).
  • The elapsed time comparison between the directly executed SELECT statement, and the REFCURSOR that is returned by the SH.SALES_BY_PRODUCT procedure is not valid for a couple of reasons: 1) The script is executed by the internal user rather than a normal user, which can lead to unexpected performance differences; 2) The SELECT statement method displays its rows to the screen, so it is subject to delays caused by formatting the output for the SQL*Plus window (SET AUTOTRACE TRACEONLY STATISTICS may be used to reduce the impact of the formatting delays, but that change had little effect); 3) The REFCURSOR method, because it involves PL/SQL, will be subject to a context switch while the normal SELECT will not be subject to the context switch – the associated delay is operating system dependent and the timing should suggest that something is wrong with the test result; 4) While the normal SELECT statement test actually fetches the rows, the REFCURSOR method does not, as can be seen within an enabled 10046 trace (the normal SELECT will show a FETCH line that is preceded by WAIT lines, while the REFCURSOR method will not show a FETCH line in the trace file) (pages 54-55).
  • The output of the Java version of the SQL*Plus test script found on pages 54-55 conflicts with the author’s intended result.  Directly executing the SQL statement required 1.438 seconds, while using the REFCURSOR in the Java code required 1.722 seconds.  The performance difference may be more significant than shown, because the direct execution of the SQL statement test was performed first, and the timing results include the time to flush the shared pool and the buffer cache (the first call will almost certainly take longer than the second call) (pages 56-58).
  • The book uses a test case script to demonstrate the negative effects of using a “COUNTER” table rather than using a sequence to provide the same counter value.  The test case script uses a trigger on the table to populate the counter column in the table, and the test case script does show that performance improves with the use of the Oracle sequence.  The test case script, however, should have also included a test that completely eliminates the trigger on the table, populating the TRAVELID column by including TRAVEL_SEQ.NEXTVAL directly in the SQL statement that populates the table.  My timing results show that the counter trigger-table method completes in 0.45 seconds, the trigger-sequence method completes in 0.14 seconds, and the select-sequence method completes in 0.03 seconds (reference pages 60-62).
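The missing third test might look something like the following sketch (the TRAVELID column and TRAVEL_SEQ sequence names are from the book’s test case; the table name and second column are hypothetical):

```sql
-- Populate the counter column directly from the sequence, with the
-- trigger on the table dropped entirely (hypothetical TRAVEL table):
INSERT INTO travel (travelid, description)
VALUES (travel_seq.NEXTVAL, 'direct sequence test');
```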
  • Accidental word substitution, “… and if the high watermark is reached, it caches other X numbers in the same manner.” “other” should be “another” (page 65).
  • The author incorrectly read the AUTOTRACE generated execution plan.  The book states “We can see that in the execution plan, there is full table access to the SALES table examining 918K rows and reading 8075 KB.”  An AUTOTRACE generated execution plan shows an estimated execution plan that may differ from the actual execution plan in some situations, such as cases where bind variables are involved.  Additionally, an AUTOTRACE generated execution plan shows the predicted number of rows that will be returned (not examined), and the predicted volume of data that will be returned (not read) based on the existing statistics for the objects (page 67).
  • The book states, “However, from the execution plan, the number of rows processed is 72, and each row is 648 bytes long.”  Once again it is important to stress that the execution plan is a predicted execution plan generated by AUTOTRACE.  The estimated 72 rows returned by the operation in the execution plan does agree with the “72 rows processed” displayed in the actual statistics for the execution, but that will not always be the case for an AUTOTRACE generated execution plan (it happens to be the case because statistics were collected for the materialized view with a 100% sample rate).  The statement that each row is 648 bytes long appears to be the result of misreading the previous execution plan, which estimated that 72 rows consuming 648 bytes total would be returned from operation 0 in the execution plan.  The AUTOTRACE generated execution plan for the materialized view predicts that 72 rows consuming 1872 bytes will be returned from operation 0 in the execution plan, which shows a predicted row length of 1872/72 = 26 bytes per row (pages 67-68).
  • The book states, “In the latter case [after flushing the buffer cache], we have 4047 consistent gets and 240 physical reads…”  There are a couple of issues with this test case, found in the source code library file 2602_02_Materialized Views.sql.  First, the script in the source code library uses “ANALYZE TABLE SH.MV_SALES_BY_PRODUCT COMPUTE STATISTICS” to collect the statistics on the materialized view, while the book shows the use of “EXEC DBMS_STATS.GATHER_TABLE_STATS” to collect the statistics – the collected statistics from the ANALYZE table command could very easily be different from the collected statistics from the DBMS_STATS.GATHER_TABLE_STATS command.  The screen capture shown after flushing the buffer cache and re-executing the select from the materialized view does show 4,047 consistent gets and 240 physical block reads, as stated in the book, but it also shows 20,544 recursive calls where 0 recursive calls were shown prior to flushing the buffer cache – this recursive call count figure indicates that something else happened beyond the author flushing the buffer cache.  My test results with just flushing the buffer cache show 8 consistent gets, 6 physical reads, and 0 recursive calls.  The author also apparently flushed the shared pool, which triggered the recursive calls and the majority of the consistent gets and physical block reads (15,296, 2,978, and 177 respectively).  The author probably should mention that the test case and advice will not work in a Standard Edition database, and should also state that the decision whether or not the materialized view is used is a cost-based optimizer decision (page 68).
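For reference, the DBMS_STATS call shown in the book (rather than the deprecated ANALYZE command found in the script library) would look roughly like this:

```sql
-- Gather statistics on the materialized view with DBMS_STATS, as the
-- book shows, instead of ANALYZE ... COMPUTE STATISTICS:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'MV_SALES_BY_PRODUCT')
```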
  • The book lists “QUERY REWRITE” as a required privilege to create materialized views.  The Oracle Database 11.2 (and 10.1) documentation states that the QUERY REWRITE privilege is deprecated, and thus not needed (reference page 69).
  • The book states, “The same parameters [QUERY_REWRITE_ENABLED,  and QUERY_REWRITE_INTEGRITY] have to be enabled to use another functionality, function-based indexes.”  QUERY_REWRITE_ENABLED must be set to TRUE in Oracle Database 9.2 to use function-based indexes, but that requirement disappeared in Oracle Database 10.1 (page 70).
  • The book states, “We encounter row chaining when the size of the row data is larger than the size of the database block used to store it.”  While this statement is correct, the book omits a secondary cause of chained rows – Oracle Database supports a maximum of 255 columns in a row piece, so tables with more than 255 columns will necessarily have chained rows (page 84).
  • The book casually demonstrates setting up a 16KB block size tablespace in a database that has a default 8KB block size.  The book provides a list of several advantages for including smaller or larger than default block sizes in a single database including, “Faster scans: tables and indexes that require full scans can see faster performance when placed in a large block size.”  This justification is incorrect for several reasons, including the fact that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is automatically scaled up for tablespaces that use a smaller than database default block size, and scaled down for tablespaces that use a larger than database default block size.  All of the justifications found on page 88 appear to be copied verbatim from a commercial website page.  The book does not discuss the bugs and unexpected optimizer cost changes that might result from using multiple block sizes in a single database (reference reference2 pages 84-88).
  • Step 5 contains two typos: using angle brackets (less than and greater than signs) rather than single quotes, and a spurious 3 after the semicolon (page 89).
  • Step 7 and 9 contain typos: using angle brackets (less than and greater than signs) rather than single quotes (page 90). 
  • Steps 4 and 5 contain typos: using angle brackets (less than and greater than signs) rather than single quotes (page 97).
  • Step 14 contains a corrupted SQL statement: “CREATE.5* FROM HR.BIG_ROWS WHERE 1=0;”.  Steps 15, 16, and 19 contain typos: using angle brackets (less than and greater than signs) rather than single quotes.  The author should have mentioned at least one of the possible problems with this approach, which might include triggers on the table, foreign keys that point to the table, and the potential statistics problems caused by the use of the ANALYZE TABLE command (page 92).
  • The book states about the DBMS_SPACE.CREATE_TABLE_COST example, “In this procedure we have set the tablespace to use the average row size and the row count…”  The purpose of this function is to estimate space usage, not to make changes to a tablespace (page 95).
  • Step 1 contains an extraneous “.5” in the command.
  • Pages 96-112 are present in the book, but omitted from this review.
  • Steps 11 and 13 use angle brackets (less than and greater than signs) rather than single quotes (pages 116-117).
  • The book states, “We can also create a function-based descending index.”  This is a strange statement – all descending indexes in Oracle Database are function-based indexes (page 119).
  • The book states, “… this test allows us to dispel a myth. Oracle uses the indexes even if the leading columns are not referenced in the WHERE predicate of the query.  We can see that in such a case, the operation will be an INDEX FAST FULL SCAN.”  In this case, the author is incorrectly attempting to generalize a special case into a general rule.  Firstly, there is no myth to dispel – Oracle’s query optimizer has had the ability to use INDEX SKIP SCAN operations when the leading column of an index is not specified in the WHERE clause, since the release of Oracle Database 9.0.1 a decade ago – but that access path is usually only advisable when there are few distinct values in the leading column of the index.  The author’s test case is a special case because all of the columns selected from the table are present in the index structure (page 119).
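For comparison, an INDEX SKIP SCAN can be encouraged with the INDEX_SS hint when the leading index column has few distinct values (the table, index, and column names below are hypothetical):

```sql
-- Skip scan of a composite index on (gender, birth_year) when the
-- low-cardinality leading column is absent from the WHERE clause:
SELECT /*+ INDEX_SS(c customers_ix) */ cust_id
FROM   customers c
WHERE  birth_year = 1970;
```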
  • The book states, “If we use a regular index to access the data, Oracle is unable to do the sort in a mixed way, in a query like this.”  The author then shows a SQL statement with the first column in the ORDER BY clause sorted in descending order and the second column in the ORDER BY clause sorted in ascending order.  At this point in the book, the author has not yet stated that Oracle Database is able to read index entries in an ascending or descending order through a normal (ascending sorted) b*tree index, so this sentence in the book is confusing – almost to say that Oracle Database is not able to sort one column in ascending sequence and a second column in descending sequence – that concept is obviously false.  It would have been more accurate for the book to state that, “Oracle Database is unable to _avoid_ a sort operation when accessing the rows through a concatenated index if both of the columns in the index are sorted in ascending sequence, the ORDER BY clause of the SQL statement specifies that one and only one column contained in the index should be ordered in descending sequence, and the second column in the concatenated index is included in the WHERE clause.” (page 120)
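The sort operation can be avoided for a mixed-direction ORDER BY by creating the concatenated index with matching sort directions, along these lines (hypothetical table and column names):

```sql
-- Index sorted to match the mixed-direction ORDER BY, so no separate
-- SORT ORDER BY operation is required when this index is used:
CREATE INDEX t1_mixed_ix ON t1 (col1 DESC, col2 ASC);

SELECT col1, col2
FROM   t1
WHERE  col1 = :b1
ORDER BY col1 DESC, col2 ASC;
```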
  • A self-contradicting sentence, “In the first case, we have a full table scan, because we cannot retrieve all of the data from the index, so we have to do a TABLE ACCESS BY ROWID operation for each row, which satisfies the predicate.”  Full table scan probably does not belong in that sentence (page 121).
  • The book states, “In the next screenshot, we can see that Oracle knows (from the table statistics) that only 43 rows satisfy the where condition.”  It is important to stress that the autotrace generated execution plan only shows the estimated number of rows that will be returned by an operation – the author’s query, in fact, retrieves a single row.  The index that the author specified in the index hint was created on the columns CUST_LAST_NAME and CUST_YEAR_OF_BIRTH (in descending order), yet the author’s query only included the CUST_FIRST_NAME column in the WHERE clause – it is ridiculous to force the optimizer to use this index with a hint (page 121).
  • The index’s clustering factor was not mentioned in the discussion of what determines the point at which it is more efficient to access a table through an index access path, rather than a full table scan – only the average row length and the percentage of the rows that need to be retrieved were described as considerations.  It could very well be the case that, with a very poor clustering factor, it is more efficient to retrieve less than 1% of the table’s rows through a full table scan, rather than an index lookup (page 122).
  • The book should define “intra-block fragmentation” which is the benefit that the book lists as resulting from rebuilding indexes (page 123).
  • The two session example of one session rebuilding an index while a second session executes a SELECT and INSERT seems to be pointless.  The second session does not use the index that the first session attempts to rebuild, instead a full table scan is performed on the BIG_CUSTOMERS table, followed by an index unique scan of the CUSTOMERS_PK index.  An index named IX1_BIG_CUSTOMERS was created in the script, yet the script attempts to rebuild a non-existent index named IX1_MYCUSTOMERS.  The test case only shows an example of efficiency gains due to blocks being buffered in the buffer cache.  The book should have mentioned that an online rebuild and parallel rebuild are only possible in the Enterprise Edition of Oracle Database (pages 123-125).
  • Step 10 uses angle brackets (less than and greater than signs) rather than single quotes (page 126).
  • The book states, “We have used the PARALLEL option too, to speed up the rebuild process.”  While specifying PARALLEL during an index rebuild may speed up the rebuild, it is important to note that this results in an index with a parallel degree that should be manually reset to the original value once the rebuild completes (page 127).
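A sketch of the reset that the book should have mentioned (hypothetical index name and degree):

```sql
-- Rebuild in parallel, then return the index to its original degree so
-- that subsequent queries are not silently costed for parallel execution:
ALTER INDEX my_index REBUILD PARALLEL 4;
ALTER INDEX my_index NOPARALLEL;
```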
  • The book states, “However, when we have a table on which there are many INSERTs and DELETEs, we could schedule an index rebuild, because when deleting an index entry, the space is not freed in the index leaf, but just marked as deleted. If we have massive DELETE and INSERT operations, we could have a skewed index structure, which could slow performance due to intra-block fragmentation.”  The book should have defined what is meant by “skewed index structure” – does the book mean, for instance, that one portion of the index could have a BLEVEL of 2 while another portion of the index could have a BLEVEL of 3 – if that is the case, the book’s statement is incorrect.  If the book’s definition of “skewed index structure” is that some leaf blocks of the index will be more densely packed than other leaf blocks in the same index structure, then that should be considered normal behavior for Oracle indexes – an occasional coalesce might be used to combine index entries in logically adjacent leaf blocks, but scheduling index rebuilds is neither required, nor recommended.  Depending on the order of the inserted values in relation to the order of the entries in the index leaf blocks, an index leaf block split operation could evenly divide the existing index entries between two leaf blocks (a 50-50 split, resulting in both index blocks being 50% utilized, if the inserted value is not the highest value that would be inserted into the leaf block), or all of the existing entries will remain in the existing leaf block and the new entry will be placed by itself into a new leaf block (a 90-10 split).  A deleted index entry will remain in the block at least until that transaction is committed, but any post-transaction insert into the block will clear out all deleted index entries in the block.  
Deleting all table rows with index entries at the low end of the index (the values were populated by a sequence, for example, and are deleted in the same sequential order) could leave many blocks in the index structure with nothing but deleted index entries, but that situation should only result in a performance problem if SQL statements attempt to determine the minimum value for the indexed column, or to some extent, fast full index scans and full index scans (reference reference2 page 127).
  • The book states, “If the value for DEL_LF_ROWS/LF_ROWS is greater than 2, or LF_ROWS is lower than LF_BLKS, or HEIGHT is 4 then the index should be rebuilt.”  Some of the advice found on the Internet suggests that if DEL_LF_ROWS is 20% of LF_ROWS, then the index should be rebuilt – did the author of this book intend to write “If the value for DEL_LF_ROWS/LF_ROWS is greater than 0.2”?  Why should the result of DEL_LF_ROWS/LF_ROWS be a consideration of whether or not an index should be rebuilt – is it supposed to measure the amount of wasted/unused space in the index leaf blocks?  The next INSERT/UPDATE DML operation in a given leaf block will clear out the index rows that are flagged as deleted, but then does that imply that the space is not wasted (or is the space wasted)?  What if there are many index blocks that are roughly 50% utilized due to a large number of 50-50 leaf block splits, is that space not wasted (or is the space wasted)?  Since the formula DEL_LF_ROWS/LF_ROWS really does not describe the percent of used space in the index, it is probably best to just ignore the result of that formula.  DEL_LF_ROWS/LF_ROWS can never be greater than 1 because the statistic found in the LF_ROWS column includes the DEL_LF_ROWS statistic.  The second criterion suggests comparing LF_ROWS to LF_BLKS, such that if on average there is less than one index entry per leaf block, the index should be rebuilt – there can never be less than one index entry per leaf block, because the leaf block will be detached from the index structure when all rows are removed from that leaf block.  The final criterion suggests rebuilding the index when the height is exactly 4 – does that mean that an index with a height of 5, 6, 7, etc. does not need to be rebuilt?  What if after rebuilding the index it still has a height of 4 – will it help to rebuild a second time? (page 127)
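For readers wondering where the statistics in the book’s formula come from: DEL_LF_ROWS, LF_ROWS, LF_BLKS, and HEIGHT are exposed through the INDEX_STATS view, which is populated for the current session by a VALIDATE STRUCTURE command (hypothetical index name; note the locking caveat in the comments):

```sql
-- VALIDATE STRUCTURE populates INDEX_STATS for this session only; in
-- this form it blocks DML against the table while it runs (the ONLINE
-- keyword avoids the lock, but then INDEX_STATS is not populated):
ANALYZE INDEX my_index VALIDATE STRUCTURE;

SELECT height, lf_rows, del_lf_rows, lf_blks,
       del_lf_rows / lf_rows AS del_ratio
FROM   index_stats;
```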
  • The book states, “When we rebuild an index, we can add the COMPUTE STATISTICS option to that statement.”  Since the release of Oracle Database 10.1, statistics are automatically collected when indexes are created and/or rebuilt, so the COMPUTE STATISTICS clause is unnecessary (page 127).
  • Steps 6 and 9 use angle brackets (less than and greater than signs) rather than single quotes (pages 128-129).
  • Steps 8 and 15 use angle brackets (less than and greater than signs) rather than single quotes (pages 131-132).
  • The book should mention that bitmap indexes are not available in the Standard Edition of Oracle Database (page 136).
  • Step 3 uses angle brackets (less than and greater than signs) rather than single quotes (page 137).
  • The author created a composite bitmap index with three columns to demonstrate the use of bitmap indexes.  Composite bitmap indexes are rare – one of the strengths of bitmap indexes is the ability to create multiple single column bitmap indexes; as needed, the optimizer will elect to combine two or more of those bitmap indexes (BITMAP AND, BITMAP OR operations) in an attempt to significantly reduce the number of rows visited in the table (page 138).
  • The book states, “This time the execution plan uses the newly created bitmap index, … using the INDEX RANGE SCAN or INDEX FAST FULL SCAN operation, depending on whether we are filtering on the first key column of the index – CUST_GENDER – or not. This result is obtained thanks to the structure of bitmap indexes.”  With the index definition found in the book, the operations that should be present in the execution plan are BITMAP INDEX RANGE SCAN and BITMAP INDEX FAST FULL SCAN, while you might expect to find INDEX RANGE SCAN or INDEX FAST FULL SCAN operations associated with normal b*tree indexes.  However, it is a cost-based decision for the optimizer to use or not use an index, so there is no guarantee that the index will be used as indicated in the book, whether or not the leading column in the index is specified.  Additionally, it is not the structure of bitmap indexes that permits INDEX RANGE SCAN or INDEX FAST FULL SCAN operations, depending on whether we are filtering on the first key column of the index – creating a normal b*tree index in the script rather than a composite bitmap index could (will) actually allow the optimizer to take advantage of INDEX RANGE SCAN or INDEX FAST FULL SCAN operations (page 139).
  • The book states, “Bitmap indexes offer very fast performance when we have a low cardinality field indexed on a table containing many rows.”  This statement could have several different interpretations, but I believe that the author’s intended meaning is “Bitmap indexes offer significantly faster performance than b*tree indexes when columns with few distinct values are indexed in tables containing a significant number of rows.”  This fixed statement still requires additional clarification – if the bitmap index does not help to further reduce the number of table rows that are accessed through the index, the end result may be performance that is roughly the same as that of an equivalent b*tree index.  One way to accomplish the task of further reducing the number of table rows accessed is through the utilization of multiple bitmap indexes with bitmap combine operations to significantly reduce the number of rowids that are used to fetch table rows (page 139).
  • The book states, “When rows are frequently inserted, deleted, and updated, there is a performance bottleneck if we use a bitmap index. When the index is updated, all the bitmap segments are locked.”  This statement requires a bit of clarification.  I do not believe that the author is stating that updating an entry in a bitmap index will lock all of the bitmap indexes in the database (a segment could be a table, table partition, index, etc.).  Instead, I think that the author is intending to state that updating an entry in a bitmap index will lock all of the index entries in that index, effectively preventing any other session from inserting, updating (the column covered by the index), or deleting rows in the table.  For very small bitmap indexes, this statement could very well be true.  However, for larger bitmap indexes, built for tables with many rows, the number of index rows that will be locked during an update is determined by the number of rows covered by the index block(s) that update changed, possibly 20,000 to 50,000 rows per index block. (reference slide 46, reference2 page 2, reference3 comments section; page 139).
  • The book states, “This [bitmap join index] is a bitmap index which represents the join between two tables, and can be used instead of a materialized view in certain conditions.”  The book did not offer any suggestions or describe any conditions that permit a bitmap join index to take the place of a materialized view.  The statement in the book needs additional clarification (reference reference2 page 140).
  • The book states about index organized tables, “If the row size exceeds the size indicated by this parameter [PCTTHRESHOLD], the fields not indicated by the INCLUDING option are stored in the OVERFLOW – if indicated, otherwise the row is not accepted.”  This is a confusing sentence – it is not clear what the author is attempting to state.  The Oracle documentation states, “In addition to specifying PCTTHRESHOLD, you can use the INCLUDING clause to control which nonkey columns are stored with the key columns. The database accommodates all nonkey columns up to and including the column specified in the INCLUDING clause in the index leaf block, provided it does not exceed the specified threshold. All nonkey columns beyond the column specified in the INCLUDING clause are stored in the overflow segment. If the INCLUDING and PCTTHRESHOLD clauses conflict, PCTTHRESHOLD takes precedence.” (reference page 146).
  • The book demonstrates partitioning without mentioning that partitioning is an extra cost option that may only be purchased for the Enterprise Edition (page 146).
  • The book states, “To obtain the best uniform data distribution, it’s better to choose a number of partitions which is a power of 2, having a unique or near partition key.”  The Oracle Database documentation states, “For optimal data distribution, the following requirements should be satisfied: Choose a column or combination of columns that is unique or almost unique. Create multiple partitions and subpartitions for each partition that is a power of two.”  It appears that the author of the book incorrectly stated the first requirement that is mentioned in the documentation (reference page 149).
  • The script is run by the SYS user rather than a normal user, which can lead to unexpected performance differences (page 157).
  • The “ALTER TABLE SH.MY_SALES_2 ENABLE ROW MOVEMENT” and “SHRINK SPACE” commands are only applicable if the MY_SALES_2 table is stored in an ASSM tablespace – the book did not mention that limitation (page 165).
  • The book states, “If we do an FTS [full table scan], database buffers are used to read all the table data, and this situation may lead to flushing the buffer cache data to make room for the FTS data. To avoid this situation and to limit the consequences on the database buffer cache, the database blocks from FTS operations are put on the top of the LRU (Least Recently Used) list.” This statement requires significant adjustment before it is an accurate statement.  Is the author describing the behavior of Oracle Database 8.1 or Oracle Database 11.2 as indicated on the front cover of the book?  What is meant by the “top of the LRU list” – is that the MRU (most recently used) end?  If the author meant that the blocks were placed on the least recently used end of the LRU list, then the author agrees with the Oracle Database 10.2 documentation, but that documentation is incorrect (the behavior changed around the time 9.0.1 was released).  The Oracle Database 11.2 documentation states that blocks read by full table scan are placed at the mid-point of the LRU list (if the CACHE keyword is specified when the table is created or altered, then the table blocks will be placed on the MRU end of the list).  Since the book is specifically about Oracle Database 11.2, it is worth pointing out that since the release of Oracle Database 11.1, the Oracle RDBMS often makes use of serial direct path read when performing full table scans, and that type of access completely avoids reading the table blocks into the buffer cache (the blocks are read into the PGA).  Oracle event 10949 may be used to disable serial direct path read.  What about parallel full table scans of larger tables?  Those too will avoid flooding the buffer cache with blocks that may only be accessed once.  
Smaller tables will certainly have their blocks read into the buffer cache, but the touch count associated with each of the table’s blocks will limit the problems that those blocks will cause in the buffer cache (reference page 170).
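The event mentioned above may be set for a session as follows (a sketch; the effect of this event is release dependent and undocumented, so test before relying on it):

```sql
-- Disable serial direct path reads for the current session, so that
-- full table scans pass through the buffer cache again:
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';
```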
  • The book states, “This is because when we have a larger database block, we can read many rows in a block and even subsequent database blocks – in one operation – by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT (at the instance or session level).”  While the book does indicate that the maximum number of bytes read is operating system dependent, the book provides no hint regarding the Oracle limit for the parameter, or anything else that might cause fewer blocks to be read in a single read request (extent size, pre-existing blocks already in the buffer cache, etc.)  Since this book is about Oracle Database 11.2, it is worth mentioning that as of Oracle Database 10.2 the Oracle RDBMS will automatically derive a value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter based on the SGA size and the value of the SESSIONS parameter – so nothing actually needs to be set to take advantage of multi-block reads.  Stating the obvious, but the DB_FILE_MULTIBLOCK_READ_COUNT parameter has no influence over Oracle reading multiple rows from the same table block (page 171).
  • The book states, “The use of this parameter [ DB_FILE_MULTIBLOCK_READ_COUNT] influences even the optimizer – if it’s less expensive to read all the rows in a table than using an index, the optimizer will use an FTS even if there are usable indexes in place.”  The accuracy of this statement is Oracle Database version dependent (prior to Oracle Database 9.0.1 the statement was true; as of Oracle Database 9.0.1 the statement is false when workload system statistics have been collected) (reference page 171).
  • The recipe titled “Avoiding full table scans” showed how to trigger full table scans, but did not show how to avoid full table scans, nor did it provide any advice about when full table scans should be avoided (pages 164-172).
  • The book states, “The effectiveness of an index depends on the number of rows selected out of the total number of rows in the table… In the real world, an index with a selectivity of less than 10 percent [of the table’s rows] is considered suitable enough [for the index to be used].”  The 10% figure is a very rough figure – the suitability of an index is dependent on many items beyond the percentage of rows that will be selected.  Quoting from the documentation, “The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor.” “Therefore, the optimizer’s decision to use full table scans is influenced by the percentage of blocks accessed, not rows. This is called the index clustering factor. If blocks contain single rows, then rows accessed and blocks accessed are the same.”  The calculated cost determines whether or not an index will be used – the cost for an index range scan is index blevel + ceil(index selectivity * leaf blocks) + ceil(table selectivity * clustering factor).  While the optimizer’s calculated cost for a particular index access path may not accurately represent the effectiveness of the index, the cost is the final deciding factor for the optimizer when determining the effectiveness of the index for a specific SQL statement (reference reference2 reference3 page 176).
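A worked example of the quoted cost formula, with hypothetical statistics (blevel 2, 1,000 leaf blocks, clustering factor 50,000, and 1 percent index and table selectivity):

```sql
-- cost = blevel + ceil(ix_sel * leaf_blocks) + ceil(tab_sel * clustering_factor)
SELECT 2 + CEIL(0.01 * 1000) + CEIL(0.01 * 50000) AS index_range_scan_cost
FROM   dual;
-- 2 + 10 + 500 = 512
```

Note that with these statistics the clustering factor term dominates the calculated cost, which is exactly why the percentage of rows retrieved, by itself, is a poor predictor of index effectiveness.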
  • The book states, “The answer is simple – indexes cannot be used when we compare values with a not equal operator.”  To clarify, the book is describing a situation involving a single table, when the only predicate on the indexed column is the inequality comparison.  Adding an INDEX hint to the author’s sample SQL statement results in an INDEX FULL SCAN operation – the hint allows the SQL statement containing the not equal predicate to use the MY_CUSTOMERS_IXVALID index.  It is not the case that the index cannot be used with an inequality comparison, however, the query optimizer does not automatically consider an access path involving the index due to the general assumption that many rows will be returned when all except one value is selected.  To avoid the INDEX FULL SCAN operation, where the index structure is read one block at a time, the inequality could be converted to a less than predicate and a greater than predicate with a logical OR between the two predicates (reference page 176).
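The rewrite described above might be sketched as follows (the table name is taken from the book’s index name; the column name is hypothetical):

```sql
-- Two range predicates joined with OR allow index range scans in place
-- of the INDEX FULL SCAN forced by a hint on the inequality version:
SELECT cust_id
FROM   my_customers
WHERE  status < 'VALID'
   OR  status > 'VALID';
```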
  • The book states, “NULL values are not stored in indexes, so when we query for records with a NULL value in field X, even if the X column is indexed, the index will not be used.”  The book’s description is incomplete.  NULL values are not stored in single column b*tree indexes.  There are at least four methods to work around this issue and allow indexes to be used to identify rows with a NULL value in the indexed column: 1) Define a composite index with at least one other column that has a NOT NULL constraint – ideally, the column in which the NULL values might appear would be the leading column in the composite index. 2) Define a composite index with a numeric constant (such as 1) as the second column in the composite index. 3) Bitmap indexes always store NULL values – if appropriate (column experiences few updates, deletes, inserts, and an Enterprise Edition database), create a bitmap index for the column. 4) If the number of NULL values in a column will be relatively small, and the original SQL statement may be modified, create a function based index that converts NULL values to 1 and non-NULL values to NULL: DECODE(C3,NULL,1), or (CASE WHEN C3 IS NULL THEN 1 ELSE NULL END), or (NVL2(C3,NULL,1)).  The DECODE syntax is subject to NLS related translations – be certain to check the view USER_IND_EXPRESSIONS for the index to determine the syntax required in the WHERE clause of the SQL statements to take advantage of the function based index (page 177).
  • The book states, “If direct path load would be the fastest method to insert data in tables, without constraints, the optimizer would use it by default.”  This sentence does not make sense.  Is the word “constraints” in this sentence to be read as a table without primary key/foreign key/unique key constraints, or is “constraints” to be read as serialized operations/space usage “limitations” of direct path insert operations – it is these limitations that allow the direct path inserts to function without corrupting tables and indexes (page 185).
  • In step 5, the book states, “Rewrite the query using the NOT IN construct:”.  The example in the book shows a non-correlated (invalid) NOT EXISTS SQL statement rather than the NOT IN construction that is found in the source code library (page 193).
  • The book states, “Even in the previous case we can see the substantial equivalence of the NOT IN and NOT EXISTS operations, related to gets/reads.”  One of the problems with this test case is that the output is not showing the equivalence of the NOT IN and NOT EXISTS forms of the SQL statement – there can be a difference between the two when NULL values enter the picture (that possibility is described in pages 199-200).  The issue with the test case is that the (11.2) Oracle query optimizer has transformed (as seen in a 10053 trace) both the NOT IN and NOT EXISTS queries into the following query that uses a regular join “SELECT S.AMOUNT_SOLD AMOUNT_SOLD FROM SH.CUSTOMERS C,SH.SALES S WHERE S.CUST_ID=C.CUST_ID AND (C.CUST_CREDIT_LIMIT=10000 OR C.CUST_CREDIT_LIMIT=11000 OR C.CUST_CREDIT_LIMIT=15000)” (page 197).
  • When describing the output of the ANSI left outer join version of the NOT IN/NOT EXISTS SQL statement, the book states, “Even if we have the same statistics and almost the same execution plan, the meaning of the last query isn’t as intuitive as in the previous case… and we have seen that there is no performance improvement (or detriment) in doing so.”  The problem here is that the (11.2) Oracle query optimizer has transformed the ANSI left outer join SQL statement into a SQL statement that is very similar to the transformed NOT IN/NOT EXISTS version of the query.  In older Oracle Database versions, where these automatic transformations would not take place, the left outer join syntax is often much faster than the equivalent NOT IN or NOT EXISTS syntax.  The (11.2) optimizer transformed version of the SQL statement follows, “SELECT S.AMOUNT_SOLD AMOUNT_SOLD FROM SH.SALES S,SH.CUSTOMERS C WHERE C.CUST_ID IS NULL AND S.CUST_ID=C.CUST_ID(+) AND (C.CUST_CREDIT_LIMIT(+)=10000 OR C.CUST_CREDIT_LIMIT(+)=11000 OR C.CUST_CREDIT_LIMIT(+)=15000)” (page 198).
  • The book states, “We have to set the destination for our trace files also. When using dedicated servers, the parameter is USER_DUMP_DEST. In the multi-threaded server environment the parameter is BACKGROUND_DUMP_DEST…”  This recipe is oddly reminiscent of pages 452 through 458 of the book “Expert One-On-One Oracle”, and seems to be more applicable to Oracle Database 8.1 than to Oracle Database 11.2.  In Oracle Database 11.1 and 11.2, by default the USER_DUMP_DEST and BACKGROUND_DUMP_DEST parameters both point at the same directory named “trace”.  The TIMED_STATISTICS parameter defaults to TRUE because the STATISTICS_LEVEL parameter defaults to TYPICAL in Oracle Database 10.1 and above, so it is not necessary to modify the TIMED_STATISTICS parameter.  The use of “ALTER SESSION SET SQL_TRACE=TRUE;”, as shown in the recipe, is deprecated as of Oracle Database 10.2 (see Metalink Doc ID 30820.1).  “Multi-threaded server” was renamed to “shared server” with the release of Oracle Database 9.0.1. (pages 201-205)
  • Appendix A and Appendix B include a somewhat random sampling of various Oracle Database performance views and database packages.  The depth of coverage of the various views and packages rarely extends beyond a brief summary of a couple of view columns or procedures in the package.  Think of these appendixes as a condensed and reformatted version of the Oracle Database documentation book contents.
  • The descriptions of all of the columns in the V$SESSION dynamic performance view are shortened verbatim copies or slightly reworded copies of the descriptions found in the Oracle Database 11.2 Reference book in the Oracle Database documentation library – below are comments about three randomly selected entries in the appendixes (page 490).
  • The descriptions of all of the procedures listed for the DBMS_OUTLN package are copied verbatim from the descriptions found in the Oracle Database 11.2 Supplied PL/SQL Packages book in the Oracle Database documentation library (page 505).
  • The descriptions of the REFRESH_PRIVATE_OUTLINES and DROP_EDIT_TABLES procedures of the DBMS_OUTLN_EDIT package are verbatim copies of the descriptions found in the Oracle Database 9.2 Supplied PL/SQL Packages book in the Oracle Database documentation library (page 504).
  • The descriptions of the LOAD_PLANS_FROM_CURSOR_CACHE, LOAD_PLANS_FROM_SQLSET, and DROP_SQL_PLAN_BASELINE procedures of the DBMS_SPM package are verbatim copies of the descriptions found in the Oracle Database 11.2 Supplied PL/SQL Packages book in the Oracle Database documentation library (page 505).
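The index range scan cost formula quoted above (page 176 comments) can be illustrated with a quick calculation; the statistics below (blevel, leaf blocks, clustering factor, and a 1% selectivity) are hypothetical values chosen only for illustration:

```sql
-- Hypothetical statistics: blevel = 2, leaf_blocks = 1000,
-- clustering_factor = 50000, index and table selectivity = 0.01 (1%)
-- cost = blevel + ceil(sel * leaf blocks) + ceil(sel * clustering factor)
SELECT 2 + CEIL(0.01 * 1000) + CEIL(0.01 * 50000) AS index_range_scan_cost
  FROM DUAL;
-- 2 + 10 + 500 = 512
```

Note that the clustering factor component usually dominates the calculation, which is why a poorly clustered index may lose to a full table scan even when a small percentage of the table’s rows will be retrieved.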
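The inequality workaround described above might be sketched as follows; the table and index names follow the book’s example (MY_CUSTOMERS, MY_CUSTOMERS_IXVALID), while the column name VALID is an assumption made for illustration:

```sql
-- The hint forces consideration of the index despite the <> predicate,
-- resulting in an INDEX FULL SCAN:
SELECT /*+ INDEX(c my_customers_ixvalid) */ *
  FROM my_customers c
 WHERE valid <> 'Y';          -- column name VALID is assumed

-- Rewriting the inequality as two range predicates allows the optimizer
-- to consider an index range scan without a hint:
SELECT *
  FROM my_customers c
 WHERE valid < 'Y'
    OR valid > 'Y';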
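The four NULL-handling workarounds listed above might be sketched as follows, using a hypothetical table T1 with a nullable column C3 and a NOT NULL column C1:

```sql
-- 1) Composite index including a NOT NULL column, nullable column leading:
CREATE INDEX t1_null_ix1 ON t1 (c3, c1);

-- 2) Composite index padded with a numeric constant:
CREATE INDEX t1_null_ix2 ON t1 (c3, 1);

-- 3) Bitmap index (Enterprise Edition, low DML activity on the column):
CREATE BITMAP INDEX t1_null_bx ON t1 (c3);

-- 4) Function based index mapping NULL to 1 and non-NULL values to NULL:
CREATE INDEX t1_null_fbi ON t1 (DECODE(c3, NULL, 1));

-- The WHERE clause must match the stored expression (check
-- USER_IND_EXPRESSIONS after creating the index):
SELECT * FROM t1 WHERE DECODE(c3, NULL, 1) = 1;
```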

Useful Information:

  • Seven steps for solving a performance problem – might lead to compulsive tuning disorder if followed to the letter, but still a good general practice (page 12).
  • Book seems to use a lot of forward and backward references.
  • Good (simple) example of SQL injection risks (pages 161-163).
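A minimal sketch of the kind of risk discussed on pages 161-163 (the procedure and table names here are hypothetical, not the book’s example):

```sql
CREATE OR REPLACE PROCEDURE get_customer (p_name IN VARCHAR2) AS
  c SYS_REFCURSOR;
BEGIN
  -- Vulnerable: input concatenated into the SQL text; passing
  -- ''' OR ''1''=''1' as p_name would return every row:
  -- OPEN c FOR 'SELECT * FROM customers WHERE name = ''' || p_name || '''';

  -- Safer: the bind variable keeps the input as data, never as SQL text:
  OPEN c FOR 'SELECT * FROM customers WHERE name = :1' USING p_name;
END;
/
```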

Part 2 of this book review is expected to cover the second half of the book, including the freely downloadable chapter 10.

Blog articles that reference the “Oracle Database 11gR2 Performance Tuning Cookbook” book:

Tables and Their Indexes Should be Located on Different Disks – What is Wrong with this Quote?

Directly Executing a Query is Slower than Returning a REFCURSOR from a PL/SQL Procedure – What is Wrong with this Quote?

Interesting Index “Facts” – What is Wrong with these Quotes?

Full Table Scans and the Buffer Cache in 11.2 – What is Wrong with this Quote?

Repeat After Me: NULL Values are Not Stored in Indexes?

Book Review: Oracle Core Essential Internals for DBAs and Developers

25 12 2011

December 25, 2011 (Modified December 29, 2011)

Digging Deeply into Oracle Internal Processing when the Oracle Wait Interface is Insufficient

I pre-ordered a paperback copy of this book three months ago from Amazon and also purchased a PDF copy of the book from Apress.  It was a long, nearly six year wait since the publication of the author’s “Cost-Based Oracle Fundamentals” book, and I am fairly certain that many of those who read the “Cost-Based Oracle Fundamentals” book were looking forward to reading volume two in the anticipated three part series.

The author of this book is a well known Oracle Ace Director who has written at least three books since the year 2000 and contributed to a handful of other books.  In addition to writing books, the author has also maintained a technical Oracle Database blog since 2006, and contributed to a number of Oracle focused Internet forums (Usenet, Oracle-L, AskTom, Oracle OTN) dating back to at least 1994.  The book’s primary technical reviewer is also a well known Oracle Ace Director and Oracle Certified Master who also maintains a technical Oracle Database blog and “living” Oracle reference site with deeply technical articles.

Did the book’s contents meet the level of expectations provided by the book’s cover and the publisher’s description of the book?  Shortly before the book arrived, I assumed that the book was targeted at people who might have struggled with the “Cost-Based Oracle Fundamentals” book.  The back cover of this book states that the book targets readers with knowledge ranging from beginners to intermediate.  I was surprised to find that chapter one lasted all of four pages, and that page seven introduced the reader to the first of many symbolic block dumps.  It was at this point that the obvious became obvious – this book is intended to take the reader on a journey that is far deeper, more intense, and more densely packed than pretty much any other Oracle Database book published in the last five or ten years.  Reading the book twice might be required for full comprehension of the material, and a third read-through a year or two later might be a welcome reminder of how Oracle Database works under the covers to produce the pretty pictures painted by Enterprise Manager.  In short, before reading this book, be certain to understand the Oracle Database concepts, and have a reasonable understanding of Oracle performance troubleshooting (read either the Performance Tuning Guide from the Oracle documentation library or the book “Troubleshooting Oracle Performance”).

This book fills a key void in the Performance Tuning Guide from the Oracle documentation library: what is the next step when the Oracle wait interface fails to identify the source of a particular performance problem?  There is no recipe for that solution; the solution is to understand what triggers Oracle Database to behave as it does.  This book pieces together the under-the-hood understanding through the detailed inter-mixing of many Oracle statistics, performance views, X$ structures, latches, parameters, wait events, and Oracle error messages.

While there are a handful of problems/errors in the book, the vast majority of those problems are simple word substitutions or keystroke errors (for example, putting in an extra underscore or removing an underscore from an Oracle keyword on one page, while correctly specifying the keyword elsewhere in the book) that are fairly easy to identify and work around.  The author devoted a section of his blog to quickly address potential errors found in the book, and to expand the book’s contents as additional information becomes available.

In short, if you need to drill into Oracle Database performance problems beyond what is provided by the Oracle wait interface, this is the key book that glues together the bits and pieces of information that Oracle Database exposes (and selectively hides).

Comments on the Book’s Contents:

  • The test scripts used in the book show evidence that those scripts were often run on different Oracle Database versions, and the differences found in the output from those versions are often described in the scripts.
  • The book mostly avoids Oracle features that require additional cost licenses; where an extra cost feature such as partitioning is used, the book states that an extra cost license is required.
  • Description of symbolic block dump: starting byte position (follows @ sign), lock byte (lb:).  (page 7)
  • Description of change vector dump: operation code (KDO Op code), update row piece (URP), block address being updated (bdba:), segment header block (hdba:), interested transaction list (itli:), table number (tabn:), row number in the block (slot:), number of columns in the table (ncol:), number of columns updated (nnew:), increase in the row length (size:). (page 8)
  • Description of ACID (pages 11-13)
  • Due to an optimization (private redo and in-memory undo) in 10g, a session only needs to obtain the public redo allocation latch once per transaction, rather than once per change. (page 15)
  • Points out an error in the Oracle Documentation, and in some books about LGWR writing a commit record to the online redo log. (page 28)
  • Plenty of forward and backward references in the book.
  • Undo block dump: transaction ID (xid:), block renewed incarnation number (seq:) (pages 32-33)
  • A single undo block may contain undo records from multiple transactions, but only from a single active transaction. (page 34)
  • Data block dump: interested transaction list index (Itl), transaction ID of a transaction that modified the block in the format of undo segment.undo slot.undo sequence number (Xid), undo record address in the format of absolute block address.block sequence number.record in the block (Uba), bit flag indicating state of the transaction (Flag), rows locked by the transaction (Lck), commit SCN or space available if the transaction committed (Scn/Fsc), cleanout SCN (csc:), last change SCN (scn:), number of times the block has changed at the SCN (seq:), row locked by transaction number (lb:)  (page 37-38)
  • For an index, the initrans parameter only applies to leaf blocks. (page 38)
  • Helpful scripts (snap_9_buffer, snap_9_kcbsw, snap_11_kcbsw, snap_myst, snap_rollstat, snap_stat) in chapter 3’s script library that create packages used for calculating the delta values of various statistics from Oracle’s various performance views.  The scripts often describe previously achieved results from Oracle Database versions ranging from through.  The script libraries for chapters 2, 6, and 7 include packages for monitoring the delta values of statistics from additional performance views.
  • Parallel query execution, serial direct path read scans, and accesses to read-only tablespaces can result in repeated delayed block cleanout related work.  In the cases of parallel query execution and serial direct path read scans, the cleaned out version of the block is not copied from the PGA to the SGA as a “dirty” block. (page 50)
  • The spin and sleep approach to acquiring latches changed in recent Oracle Database releases.  Rather than sleeping progressively longer times after each spin when attempting to acquire a latch, the process simply goes to sleep and waits for the process holding the latch to notify the process at the top of the list that is waiting for the latch. (page 72)
  • Decoding TX lock ID1 and ID2 values into undo segment, slot, and wrap number. (page 77)
  • The book tries to be specific regarding changes made in Oracle database versions, such as the change in when the SESSION_CACHED_CURSORS parameter started controlling the number of PL/SQL cursors that were automatically held open, rather than the OPEN_CURSORS parameter. (page 89)
  • The book states: “There is one particularly interesting difference between latches and mutexes: latches are held by processes, while mutexes are held by sessions…” (page 91)
  • Default block sizes other than powers of 2 (12KB, 5KB, 4.5KB, etc.) are possible, but may be viewed as unsupported by Oracle support.  The book makes a good case for using (only) 8KB block sizes, providing an exception for a 4KB block size as a secondary choice on some Linux platforms. (page 98)
  • The book frequently addresses topics that are incorrectly described in other resources.  For example, referencing the touch count of blocks to determine which block is the source of latch contention. (page 104)
  • Recently discovered potential ACID durability problem described in detail. (page 129)
  • Describes problems related to measuring LGWR performance through examination of LOG FILE SYNC wait event durations. (page 134)
  • One of the threats of newer hard drives with larger sector sizes (4 KB rather than 512 bytes) is that redo wastage will increase. (page 136)
  • The book mentions setting event 10120 to trigger relative file numbers to differ from absolute file numbers when new datafiles are created.  Some information on the Internet incorrectly describes this event number as disabling index fast full scans.  An interesting side-effect of experimenting with this event is that the database can contain multiple datafiles with the same relative file number (in different tablespaces), even when there are few datafiles in the database. (page 143)
  • Oracle checkpoints described: Instance recovery checkpoint, Media recovery checkpoint, Thread checkpoint, Interval checkpoint, Tablespace checkpoint, PQ tablespace checkpoint, Close database checkpoint, Incremental checkpoint, Local database checkpoint, Global database checkpoint, Object reuse checkpoint, Object checkpoint, RBR checkpoint, Multiple object checkpoint (pages 148-149)
  • Serial direct path read in 11.1 and later will perform a PQ tablespace checkpoint before the direct path read begins. (page 149)
  • The process of allowing space at the end of a redo log file for potential redo data in the public and private redo threads is one explanation why archived redo log files become a couple of megabytes smaller than the online redo logs. (page 151)
  • Four different definitions of the term CURSOR, as related to Oracle Database. (page 162)
  • Demonstration of the use of bind variables significantly decreasing the number of dictionary cache accesses. (pages 171-172)
  • The CURSOR_SHARING parameter value of SIMILAR is deprecated as of Oracle Database 11.1 due to the arrival of adaptive cursor sharing. (page 173)
  • Lengthy discussion of the types of problems that might be intensified when moving a database from a single instance to a multi-instance RAC environment. (pages 202-229)
  • Detailed ORADEBUG examples in the appendix. (pages 231-238)
  • Detailed list of Oracle Database terms and their definitions. (pages 245-253)
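The TX lock ID1/ID2 decoding noted above (page 77) follows a well-known bit layout, and might be sketched as follows; ID1 packs the undo segment number into the high-order 16 bits and the transaction table slot into the low-order 16 bits, while ID2 holds the wrap (sequence) number:

```sql
-- Decode the TX enqueue identifiers for active transactions:
SELECT sid,
       TRUNC(id1 / 65536) AS undo_segment,
       MOD(id1, 65536)    AS undo_slot,
       id2                AS undo_wrap
  FROM v$lock
 WHERE type = 'TX';
```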

 Suggestions, Problems, and Errors:

  • A potential area for improvement: explain how the author determined that block 0x008009a found in a redo header dump was in fact referencing datafile 2, block 154.  The following SQL statement: SELECT TO_CHAR(DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(2,154), ‘XXXXXXX’) DBA FROM DUAL; produces a value of 80009A which confirms that the author is correct.  The approach for decoding the block printed in the dump might be covered later in the book. (page 9)  (Edit Dec 29, 2011: the author has provided an extended explanation describing how to decode the datafile number and block number in a comment found on his “OC 2 Undo and Redo” blog page)
  • The book states, “(as can be seen in the dynamic performance view v$latch_holder, which is underpinned by the structure x$ksuprlatch).”  The view name does not contain an underscore character – the correct name is V$LATCHHOLDER – and the X$KSUPRLATCH structure does not seem to exist in (confirmed by a response on the author’s errata page on his blog that the structure name is X$KSUPRLAT). (page 73)
  • The book states, “… if you query the dynamic performance view v$lock, you are querying the structure defined by the parameter enqueues.”  It appears that the author intended to state, “defined by the parameter _ENQUEUE_LOCKS”.  (Confirmed by a response on the author’s errata page.) (page 77)
  • The book states, “…and a column x$ksqlres, which is the address of the resource it’s locking, exposed indirectly through the type, id1, and id2.” The actual column name is KSQLKRES, without the embedded $ character.  (Confirmed by a response on the author’s errata page.) (page 78)
  • Missing word in steps 1 and 2 that describe the sequence of events that lead to the V$LOCK output shown in the book.  (Confirmed by a response on the author’s errata page – the missing word is delete.) (page 79)
  • The book expands the abbreviation ASMM as “automatic system memory management”, while the documentation and most other sources expand this abbreviation as “automatic shared memory management”.  (Confirmed by a response on the author’s errata page.) (page 94)
  • The book states, “If you want to see how memory allocations change with the number of CPUs, you can adjust parameter cpu_count and restart the instance; however, in 11.2 you also need to set parameter _disable_cpu_check to false.”  The _DISABLE_CPU_CHECK parameter defaults to FALSE, so the author probably intended to write TRUE.  (Confirmed by a response on the author’s errata page.) (page 101)
  • The book dropped the “s” following the word “get” in the statistic name “CONSISTENT GETS – EXAMINATION”.  The statistic name is spelled correctly on pages 44 and 51.  (Confirmed by a response on the author’s errata page.) (page 114)
  • The book states, “If the optimizer thought the table was larger than the 2 percent limit, then the buffers used to read the table were immediately dropped to the end of the LRU list as the blocks were read…”  It appears that the author intended to write “runtime engine” (as stated in the previous paragraph) rather than “optimizer”.  (Confirmed by a response on the author’s errata page.) (page 118)
  • The book states, “ takes this a little further with two new statistics: redo synch write time (usec), which is the time in microseconds…”  It appears that the actual statistic name does not contain the word “write”.  (Confirmed by a response on the author’s errata page.) (page 129)
  • The book states: “If we check x$qrst (the X$ structure underlying v$rowcache), we find that it contains an interesting column, as follows”.  It appears that the X$ structure is actually X$KQRST – the DESC command that follows the sentence in the book shows the correct X$ structure name. (page 166)  (Edit Dec 29, 2011: Confirmed by a response on the author’s “OC 7 Parsing and Optimising” errata blog page)
  • The book states: “(If you needed an argument why you should select only the columns you need in a query, rather than using select *, the extra cost of accessing dc_histogram_defs should be enough to convince you.)”  This sentence immediately follows a sentence that described how adding a second predicate in the WHERE clause referencing a second column would double the number of gets from the dictionary cache; thus it seems that the comment about the threat of “select *” causing more visits to dc_histogram_defs is only significant if those columns are also specified in the WHERE clause. (page 171)  (Edit Dec 29, 2011: a response from the author suggested experimenting with the core_dc_activity_01.sql script in the book’s script library;  experimentation with that script indicates that the statement in the book is correct)
  • The book states: “There are probably a number of sites that could benefit from increasing the session_cache_cursor parameter,…”  The parameter name is SESSION_CACHED_CURSORS – that parameter is correctly spelled on the previous page and further down the same page.  (Confirmed by a response on the author’s errata page.) (page 176)
  • The book states, “However, you may recall all those latches relating to the library cache that appeared in 10g—like the library cache pin allocation latch…”.  That particular latch appears to have been removed in Oracle Database; however, it is not clear if this section of the book is only describing behavior prior to Oracle Database 11.1. (page 194)  (Edit Dec 29, 2011: Confirmed by a response on the author’s “OC 7 Parsing and Optimising” errata blog page, with additional detail provided related to the statement in the book)
  • The book states, “In 10.2 they introduced code to allow KGL pins to be cached by sessions (hidden parameter _session_kept_cursor_pins) with similar intent…”.  The _SESSION_KEPT_CURSOR_PINS hidden parameter does not exist in Oracle Database (10.2.0.x not checked). (page 195)  (Edit Dec 29, 2011: Confirmed by a response on the author’s “OC 7 Parsing and Optimising” errata blog page, with additional detail provided related to the statement in the book)
  • The book states, “However, the parameter cursor_spare_for_time is deprecated in 11g…”  The CURSOR_SPACE_FOR_TIME parameter is spelled correctly in the previous sentence. (page 195)  (Edit Dec 29, 2011: Confirmed by an errata entry on the author’s “OC 7 Parsing and Optimising” errata blog page)
  • The book states, “Let’s start with the memory structures—we have v$dlm_ress that is analogous to v$resources — it lists the things that are lockable…”  It appears that V$RESOURCE should not have a trailing S. (page 209)  (Edit Dec 29, 2011: Confirmed by an errata entry on the author’s “OC 8 RAC and Ruin” errata blog page)
  • The glossary indicates that the possible granule sizes are one of 4MB, 8MB, 16MB, and 64MB depending on the Oracle Database version and the size of the SGA.  The statement in the book is more accurate than what is provided by the Oracle Database documentation for 11.2 which states that the granule size is either 4MB or 16MB depending on the size of the SGA.  However, limited testing in Oracle Database indicates that the granule size increases from 64MB to 128MB when the SGA_TARGET parameter is set to 1 byte greater than 32G, and jumps to 256MB when the SGA_TARGET parameter is set to 1 byte greater than 64G.  A granule size of 32MB is possible when the SGA_TARGET was set to a value between 8G + 1 byte and 16G. (page 247)  (Edit Dec 29, 2011: Confirmed by an errata entry and follow up comments on the author’s “OC Glossary” errata blog page)
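Related to the data block address decoding discussed above (page 9), the DBMS_UTILITY package also provides functions for the reverse translation; a quick sketch:

```sql
-- Extract the relative file number and block number from the
-- data block address 0x80009A (8388762 decimal):
SELECT DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(TO_NUMBER('80009A', 'XXXXXX'))  AS file#,
       DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(TO_NUMBER('80009A', 'XXXXXX')) AS block#
  FROM DUAL;
-- file# = 2, block# = 154
```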

 Data Dictionary Views/Structures (the index at the back of the book misses most of these entries):

  • V$SESSTAT (page 15)
  • V$LATCH (pages 15, 69, 116, 138, 175)
  • V$LATCH_CHILDREN (pages 16, 69, 138, 166, 170)
  • X$KTIFP (in-memory undo pool, undo change vectors) (page 16)
  • V$TRANSACTION (pages 16, 29)
  • X$KCRFSTRAND (private pool, redo change vectors) (pages 16, 125, 237)
  • X$KTIFF (pages 19, 152)
  • DBA_ROLLBACK_SEGS (page 28)
  • V$LOCK (pages 29, 77, 78, 226)
  • X$KTUXE (transaction table information) (page 30)
  • V$BH (page 47)
  • X$KSUSE (base structure for V$SESSION) (page 59)
  • X$KSMFSV (pages 60, 236)
  • X$KSQRS (enqueue resources) (pages 60, 77, 78, 79, 88)
  • V$SQL (pages 63, 234)
  • V$LATCH_PARENT (page 69)
  • V$SYSTEM_EVENT (page 70)
  • X$KSUPRLATCH (page 73)
  • V$LATCHHOLDER (pages 73, 133)
  • V$RESOURCES (page 77)
  • X$KSQEQ (generic enqueues) (pages 77, 78)
  • X$KTCXB (transactions) (pages 78, 252)
  • X$KGLLK (library cache lock) (pages 89, 195)
  • V$OPEN_CURSOR (pages 89, 195, 248)
  • X$BH (pages 95, 102, 113, 219, 220)
  • V$BUFFER_POOL (pages 96, 99)
  • V$BUFFER_POOL_STATISTICS (pages 99, 100)
  • X$KCBWDS (pages 100, 102, 138, 144, 179, 180)
  • X$KCBBF (page 113)
  • V$SYSSTAT (page 118)
  • V$SESSION (page 126)
  • V$SESSION_WAIT (pages 126, 226)
  • V$LOG, V$LOG_HISTORY (page 139)
  • X$KCBOQH (kernel cache buffers object queue header), X$KCBOBH (kernel cache buffers object buffer headers) (page 151)
  • OBJ$,TAB$, COL$, IND$, ICOL$, TRIGGER$ (page 162)
  • V$ROWCACHE (dictionary cache) (pages 164, 166, 169, 170, 225, 227)
  • V$ROWCACHE_PARENT (pages 164, 166)
  • X$KQRPD (pages 164, 168, 169)
  • SYS.BOOTSTRAP$ (pages 165, 233)
  • X$KQRSD (pages 168, 169)
  • V$SGASTAT (page 169)
  • X$KGHLU (pages 179, 180, 188, 189, 190)
  • X$KSMSP (page 188)
  • V$LIBRARYCACHE (pages 194, 234)
  • X$KGLPN (library cache pin) (page 195)
  • X$KGLOB (pages 196, 234)
  • V$GES_ENQUEUE (page 208, 209)
  • V$DLM_RESS (pages 208, 209)
  • V$LOCK (pages 209, 233)
  • V$RESOURCE_LIMIT (pages 209, 210)
  • V$SGASTAT (page 209, 210)
  • SEQ$ (pages 223, 224, 225, 226)
  • V$ENQUEUE_STAT (page 225)
  • V$PROCESS, V$BGPROCESS (page 231)
  • SYS.AUD$ (page 232)
  • X$KSMMEM (page 237)

 Parameters (the index at the back of the book misses most of these entries):

  • LOG_PARALLELISM (page 15)
  • UNDO_RETENTION (page 56)
  • CPU_COUNT (pages 72, 75, 101)
  • _ENQUEUE_RESOURCES (page 77)
  • SESSION_CACHED_CURSORS (pages 89, 175)
  • OPEN_CURSORS (pages 89, 176)
  • SHARED_POOL_SIZE (page 94)
  • DB_CACHE_SIZE (pages 94, 97)
  • _DB_HOT_BLOCK_TRACKING (page 104)
  • _DB_PERCENT_HOT_DEFAULT (page 108)
  • _DB_BLOCK_MAX_CR_DBA (pages 108, 115)
  • _DB_BLOCK_HASH_BUCKETS (page 111)
  • LOG_BUFFER (pages 123, 125)
  • TRANSACTIONS (page 136)
  • _DB_BLOCK_MAX_SCAN_PCT (page 144)
  • ARCHIVE_LAG_TARGET (page 151)
  • CURSOR_SHARING (page 172, 194)
  • SHARED_POOL_RESERVED_SIZE (pages 179, 183)
  • _SHARED_POOL_RESERVED_PCT (pages 179, 183)
  • _SHARED_POOL_RESERVED_MIN_ALLOC (pages 183, 191)
  • CURSOR_SPACE_FOR_TIME (page 195)
  • USE_STORED_OUTLINES (page 236)

 Statistics (the index at the back of the book misses most of these entries):

  • CONSISTENT GETS – EXAMINATION (pages 44, 51, 114)
  • REDO SYNCH WRITES (pages 49, 126, 128, 131, 132)
  • LATCH FREE (page 70)
  • BUFFER IS PINNED COUNT (pages 114, 115)
  • SWITCH CURRENT TO NEW BUFFER (pages 115, 116, 139)
  • CR BLOCKS CREATED (page 116)
  • MESSAGES RECEIVED (page 125)
  • REDO SIZE (pages 125, 135, 152)
  • MESSAGES SENT (pages 125, 126)
  • REDO BLOCKS WRITTEN (page 131)
  • REDO ENTRIES (pages 131, 152)
  • REDO WASTAGE (pages 131, 135)
  • PARSE COUNT (TOTAL) (pages 173, 174, 176, 178)
  • PARSE COUNT (HARD) (pages 174, 176, 178, 194)
  • SESSION CURSOR CACHE HITS (pages 175, 176)

 Wait Events (the index at the back of the book misses most of these entries):

  • BUFFER BUSY WAITS (pages 113, 224)
  • LOG BUFFER SPACE (pages 123, 133)
  • LOG FILE SYNC (pages 125, 126, 132, 134)
  • RDBMS IPC MESSAGE (pages 125, 126)
  • LGWR WAIT FOR REDO COPY (pages 133, 134)
  • CF ENQUEUE (page 146)
  • ROW CACHE LOCK (pages 169, 227)
  • CURSOR: PIN S WAIT ON X (page 192)
  • LIBRARY CACHE PIN (pages 192, 196)
  • GC BUFFER BUSY (pages 224, 227)


 Procedures:

  • DBMS_SYSTEM.KSDWRT (write to trace file or alert log) (page 238)
  • DBMS_SYSTEM.SET_EV (page 239)

 Latches:

  • REDO ALLOCATION (pages 14, 126, 132, 134, 136, 152)
  • REDO COPY (page 15)
  • IN MEMORY UNDO LATCH (page 16)
  • CACHE BUFFERS CHAINS (pages 106, 112, 114, 116, 178)
  • REDO WRITING (pages 125, 132)
  • CHECKPOINT QUEUE LATCH (page 139, 141)
  • SHARED POOL (pages 178, 190)

 Oracle Error Messages:

  • ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction (page 13)
  • ORA-08177: can’t serialize access for this transaction (page 35)
  • ORA-01555: snapshot too old (page 56)
  • ORA-22924: snapshot too old (page 57)
  • ORA-00060: deadlock detected (pages 82-84, 232)
  • ORA-01000: maximum open cursors exceeded (page 177)
  • ORA-04031: unable to allocate %n bytes of memory(%s, %s, %s, %s) (pages 183, 190, 191)
  • ORA-03113: end-of-file on communication channel (page 237)

Book Review: Troubleshooting Oracle Performance (Part 2)

7 11 2011

November 7, 2011

Most Thorough, Yet Compact Performance Tuning Book 9i-11g

(Back to the Previous Post in the Series)

I originally reviewed the “Troubleshooting Oracle Performance” book a bit over three years ago, having pre-ordered the book prior to its publication.  The original review is quite limited in depth compared to some of my more recent Oracle Database book reviews.  I recently decided to purchase the companion PDF file from Apress, as well as to re-read the book so that I could provide a much more detailed book review. 

Since the time when I wrote my original review of this book I have formally reviewed at least three other books that are Oracle Database performance specific, reviewed a handful of other books that contain Oracle Database performance sections, and briefly viewed and wrote comments about a couple of other performance related books.  The “Troubleshooting Oracle Performance” book effectively sets the standard by which all other Oracle Database performance books are measured.  The depth of coverage, accuracy of contents, and signal-to-noise ratio are unparalleled in the Oracle Database performance book category.

There are several factors that separate this book from the other Oracle Database performance books on the market:

  • For every feature that is described to help solve a problem, the book lists as many of the benefits as possible, and an equal amount of attention is paid to the potentially wide-ranging problem areas of various solutions.  Very few potential problems were overlooked.  Some of the other books on the market only describe the potential benefits of implementing a feature, without discussing limitations or unintended side-effects.
  • For nearly every feature described in the book, the book lists the licensing and version requirements (sometimes down to a specific point release) that must be met so that the reader is able to take advantage of the feature – these requirements are often listed early in the description of the feature.  The book commonly describes how to accomplish a task in the current Oracle Database release, as well as in older releases, if the approach differs.  Some of the other books on the market inter-mix features and behaviors of various Oracle Database releases, without clearly distinguishing what will and what will not be available in the reader’s environment.
  • While many strong statements are made about Oracle Database in the book, there is no “hand waving”, and there are very few inaccurate statements.  The book uses a “demonstrate and test in your environment” approach from cover to cover.  The downloadable scripts library is extensive, and often contains more performance information than what is presented in the book.  It is thus recommended to view the scripts and experiment with them while reading the book.  The downloadable scripts on the Apress website appear to be corrupt (this corruption appears to affect more than just the scripts for this book).  Updated versions of the scripts are available for download from the author’s website.  In contrast, other books seem to take the approach of “trust me, I have performed this task 1,000 times and never had a problem” rather than the “demonstrate and test in your environment” approach used in this book.
  • Information in this book is densely packaged, without unnecessarily repeating information, and without giving the impression that sections of the book are a paraphrase of some other set of articles.  Additionally, the information is well organized into a logical progression of topics, rather than each section of the book appearing as an island of unrelated information.
  • The well-placed graphics throughout the book support the contents of the book, rather than distract from the information that is described.
  • In the acknowledgments section at the beginning of the book the author mentioned that his English writing ability is poor and that “I should really try to improve my English skills someday.”  In the book the only hint that English is not the author’s primary language is the repeated use of the phrase “up to” when describing features that exist in one Oracle Database release version or another.  The author’s use of “up to” should be interpreted as “through” (including the specified end-point) rather than as “prior to” (before the specified end-point).  It appears that the author exercised great care when presenting his information on each page.  In contrast, some of the other Oracle Database book authors seem to be more concerned with slamming something onto the page so that something else that is more interesting could be introduced, in the process introducing sentences that can best be described as nonsense.

The “Troubleshooting Oracle Performance” book covers Oracle releases from 9i through 11g.  For the most part the information provided in the book applies to Oracle Database 11.1 and above, with exceptions noted for earlier release versions.  The author’s website effectively extends the book’s contents to cover later releases, including 11.2.0.x.  It is recommended that the Oracle sample schemas be installed in a test database so that the reader is able to experiment with all of the sample scripts provided with the book.  The book appears to be mostly directed at DBAs; however, sections of the book are appropriate for developers.

This review is a bit long (roughly 18 typewritten pages), and might not completely appear on Amazon (see my Oracle blog if the review does not appear in full).  As such I will begin the detail portion of the review with the problems/corrections to the book that I have identified (see the author’s website for the complete list of errata), describe some of the foundation knowledge/tips found in the book, and then list various data dictionary views/tables, Oracle Database parameters, SQL hints, built-in functions, execution plan elements, and Oracle error messages that are described in the book (many of these items cannot be located in the index at the back of the book, so a page number reference is provided).

Comments, Corrections, and Problems:

  • The descriptions of both the IS_BIND_AWARE and IS_SHAREABLE columns of V$SQL include the phrase “If set to N, the cursor is obsolete, and it will no longer be used.”  It appears that this phrase was accidentally added to the description of the IS_BIND_AWARE column. (pages 27-28)
  • The book states, “Remember, execution plans are stored only when the cursors are closed, and the wait events are stored only if they occurred while the tracing of wait events was enabled.”  Technically, this behavior changed with the release of Oracle Database 11.1.  The PLAN_STAT parameter of the DBMS_SESSION.SESSION_TRACE_ENABLE procedure, and the PLAN_STAT parameter of the various DBMS_MONITOR procedures, default to a value of FIRST_EXECUTION.  The default behavior in 11.1 and later is to write out the execution plans to the trace file after the first execution (before the cursor is closed), however that parameter may be changed to ALL_EXECUTIONS (plan is written to the trace file after each execution) or NEVER (do not output the execution plan). (page 82)
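The PLAN_STAT behavior described above can be sketched as follows; this is a minimal example, and the session identifiers are placeholders:

```sql
-- Write the execution plan to the trace file after every execution,
-- rather than only after the first execution (Oracle 11.1 and later):
BEGIN
  dbms_monitor.session_trace_enable(
    session_id => 42,               -- placeholder SID
    serial_num => 11,               -- placeholder serial#
    waits      => TRUE,
    binds      => FALSE,
    plan_stat  => 'all_executions'  -- first_execution (default) | all_executions | never
  );
END;
/
```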
  • The book states, “Notice how the number of waits, 941, exactly matches the number of physical writes of the operation HASH GROUP BY provided earlier in the row source operations.”  The statement in the book is correct, but as written it might be slightly confusing.  This statement probably could have been clarified slightly, repeating what was stated earlier about the cumulative nature of the statistics for the parent and child operations.  The reader would then more easily understand that the pw=1649 value associated with the “TABLE ACCESS FULL SALES” operation must be subtracted from the pw=2590 value found on the “HASH GROUP BY” operation to arrive at the number 941 mentioned in the book. (page 86)
  • The book states, “As of Oracle Database 10g, the rule-based optimizer is no longer supported and, therefore, will not be covered here.”  This sentence, as written, could be misinterpreted.  The rule based optimizer still exists in the latest release of Oracle Database, but its use is deprecated, and therefore the use of the rule based optimizer is no longer supported by Oracle Corp., even though it still exists for backward compatibility purposes.  Page 174 of the book also states that the rule based optimizer has been desupported. (page 108)
  • The script that compares the time required to read from table BIG_TABLE using different values for the DB_FILE_MULTIBLOCK_READ_COUNT parameter is subject to at least three potential problems that could throw off the accuracy of the results: 1. Portions of the table may remain cached in the buffer cache between each execution (a warning about this potential issue is provided in the book immediately after the script). 2. The query optimizer may decide to use serial direct path reads (“direct path read” wait event), rather than the commonly expected “db file scattered read” type accesses for the full table scan.  Testing seems to indicate that the number of blocks read in a single serial direct path read is related to the DB_FILE_MULTIBLOCK_READ_COUNT value – the maximum number of blocks seems to be the largest power of two that is less than or equal to the DB_FILE_MULTIBLOCK_READ_COUNT value (this might explain the stair-stepped diagram that is included in the book).  Serial direct path reads were introduced in Oracle Database 11.1 as a potential replacement for Oracle buffer cache buffered reads when parallel query was not implemented; that change in behavior was apparently not documented prior to the publication of this book. 3. The unrelated blocks that are in the buffer cache at the time the test starts might also be important. (page 178)
  • The book states, “Unfortunately, no feature is provided by the package dbms_xplan to query it [the stats$sql_plan repository table].”  There is a way to use DBMS_XPLAN.DISPLAY to display an execution plan that was captured in the PERFSTAT.STATS$SQL_PLAN table, but the syntax is a little awkward. (page 204)
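The awkward syntax mentioned above can be sketched like this; the repository owner, format, and filter predicate values are placeholders:

```sql
SELECT *
FROM table(dbms_xplan.display(
       table_name   => 'perfstat.stats$sql_plan',
       statement_id => NULL,
       format       => 'basic',
       filter_preds => 'plan_hash_value = 12345'));  -- placeholder hash value
```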
  • The book demonstrates that it is possible to add comment text to a hint block without affecting the hint in that hint block.  Ideally, the book would have mentioned that there are risks that hints will be ignored by the optimizer when adding regular comments to hint blocks, especially when the comments are added in front of the hints. (page 254)
  • “8;” is missing from the first EXPLAIN PLAN FOR SELECT statement. (page 260)
  • A test case (access_structures_1000.sql) is provided that uses partitioning without first mentioning that the partitioning option may only be purchased for the Enterprise Edition.  This is one of the very few instances where the licensing requirements for a feature are not mentioned in the book when the feature is introduced.  The licensing requirements are described two pages later. (page 348)
  • Considering the depth of explanation found in the rest of the book, the book should have mentioned that “ALTER TABLE t SHRINK SPACE” is only valid if table t is in an ASSM tablespace. (page 351)
  • The book references the clustering_factor.sql script, but that script does not exist in the chapter 9 script library.  A clustering_factor.sql script does exist in the chapter 4 script library, but the table definition differs from what is shown in chapter 9.  This does not appear to be a significant problem because the essential portions of the script appear in the book. (page 375)
  • The book states, “Note: Full table scans, full partition scans, and fast full index scans executed in parallel use direct reads and, therefore, bypass the buffer cache.”  This was a correct statement at the time the book was printed.  However, Oracle Database 11.2 introduced in-memory parallel execution.  The book author briefly mentions this feature in one of his blog articles. (page 500)
  • The book states that direct path insert does not work with INSERT statements containing a VALUES clause.  Since the time when the book was published, the author created a blog article that demonstrates how to use direct path insert for SQL statements that have a VALUES clause.  According to the blog article, the behavior changed in Oracle Database 11.1 and again in 11.2. (page 513)

The actual errors found in the book are minor in comparison to the amount of information covered by the book.

Foundation Knowledge, and Miscellaneous Tips:

  • A quote from the book, one of the reasons why application performance is important: “The studies showed a one-to-one decrease in user think time and error rates as system transaction rates increased. This was attributed to a user’s loss of attention because of longer wait times.” (page 3)
  • The basic equation that determines the time required for a response from the database server: “response time = service time + wait time” (page 4)
  • Description of service level agreements, and a description of a method to design and test code to meet service level agreements. (pages 5-8)
  • A quote that sets the tone for the rest of the book: “So if you do not want to troubleshoot nonexistent or irrelevant problems (compulsive tuning disorder), it is essential to understand what the problems are from a business perspective—even if more subtle work is required.” (page 11)
  • Describing the selectivity and cardinality statistics: “selectivity is a value between 0 and 1 representing the fraction of rows filtered by an operation.” “cardinality = selectivity * num_rows”. (page 13)
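As a quick worked example of the two formulas above (the numbers are invented): if a table has num_rows = 10,000 and a predicate retains 1 percent of the rows, then selectivity = 0.01 and the optimizer’s row estimate is:

```sql
-- cardinality = selectivity * num_rows
SELECT 0.01 * 10000 AS estimated_cardinality FROM dual;  -- 100
```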
  • The description of each step in the cursor life cycle is explained: open, parse, define output variables, bind input variables, execute, fetch, close. (page 15)
  • Description of each step in parsing: include VPD predicates; check syntax, semantics, access rights; store parent in library cache; logical optimization; physical optimization; store child in the library cache. (page 18)
  • Advantages and disadvantages of bind variables (pages 22-23)
  • Causes of new child cursors when using bind variables: maximum size of the bind variable increases, execution environment changes, adaptive (extended) cursor sharing. (pages 23, 27)
  • Bind variable peeking was introduced in Oracle 9i. (page 25)
  • A caution about using bind variables when histograms are present: “On the other hand, bind variables should be avoided at all costs whenever histograms provide important information to the query optimizer.” (page 30)
  • Various profiling applications mentioned: PerformaSure, JProbe; load-testing framework: the Grinder.
  • Sample TKPROF output, along with the original 10046 extended SQL trace file.
  • Debugging event numbers are listed in the file $ORACLE_HOME/rdbms/mesg/oraus.msg – but that file is not distributed on all platforms. (page 63)
  • Structure of 10046 trace files (pages 73-76)
  • Using TRCSESS to process 10046 trace files. (pages 76-77)
  • Using TKPROF to process 10046 trace files. (pages 78-90)
  • Unlike other books, this book provides a warning about using the EXPLAIN parameter of TKPROF, “In any case, even if all the previous conditions are met, as the execution plans generated by EXPLAIN PLAN do not necessarily match the real ones (the reasons will be explained in Chapter 6), it is not advisable to specify the argument explain.” (page 79)
  • Interesting comment about the SORT parameter for TKPROF – it does not perform a multi-level sort, “When you specify a comma-separated list of values [for the SORT parameter], TKPROF sums the value of the options passed as arguments. This occurs even if they are incompatible with each other.” (page 81)
  • Using TVD$XTAT – a trace file parser developed by the book author. (pages 90-100)
  • Discussion of the I/O cost model, CPU cost model, noworkload statistics, and workload statistics. (page 111)
  • A possible explanation why some developers want table columns to appear in a specified order in a table’s definition: “The estimated CPU cost to access a column depends on its position in the table.” “cpu_cost = column_position*20” (page 117)
  • Calculated cost of an operation is approximately IO_COST + CPU_COST / (CPUSPEED * SREADTIM * 1000). (page 118)
  • When using noworkload statistics, SREADTIME is calculated as IOSEEKTIM + (DB_BLOCK_SIZE / IOTFRSPEED). (page 119)
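The noworkload statistics default to IOSEEKTIM = 10 ms and IOTFRSPEED = 4096 bytes/ms, so the formula above is easy to verify; with an 8 KB block size:

```sql
-- Inspect the statistics the formula uses:
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

-- sreadtim = IOSEEKTIM + (DB_BLOCK_SIZE / IOTFRSPEED)
SELECT 10 + (8192 / 4096) AS sreadtim_ms FROM dual;  -- 12 ms
```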
  • When SREADTIM, MREADTIM, or MBRC are not available in SYS.AUX_STATS$, the optimizer falls back to noworkload statistics. (page 119)
  • An ASSOCIATE STATISTICS SQL statement may be used to associate statistics with columns, functions, packages, types, domain indexes, and index types. (page 120)
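A minimal sketch of the statement described above, attaching default statistics to a hypothetical function f (the selectivity and cost values are invented):

```sql
-- DEFAULT SELECTIVITY is a percentage (0 to 100);
-- DEFAULT COST takes (cpu_cost, io_cost, network_cost)
ASSOCIATE STATISTICS WITH FUNCTIONS f
  DEFAULT SELECTIVITY 1, DEFAULT COST (1000, 4, 0);

-- The association can later be removed with:
DISASSOCIATE STATISTICS FROM FUNCTIONS f;
```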
  • The ENDPOINT_VALUE of the USER_TAB_HISTOGRAMS view only includes the first 6 bytes of character columns – the book did not mention the ENDPOINT_ACTUAL_VALUE column. (pages 126, 129)
  • Extended statistics for column groups work only with equality predicates because of the hashing function that is applied to the column group values. (page 132)
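As of Oracle Database 11.1, a column group such as the one described above can be created through DBMS_STATS; the table and column names below are placeholders:

```sql
-- Create the extension; returns a system-generated name (SYS_STU...)
SELECT dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'T',
         extension => '(C1,C2)') AS extension_name
FROM dual;

-- Regather statistics so that the column-group statistics are populated;
-- remember that the column group helps only with equality predicates
EXEC dbms_stats.gather_table_stats(user, 'T')
```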
  • An index is always balanced because the same number of branch blocks are present between the root block and all of the leaf blocks. (page 133)
  • As of Oracle Database 10.1 the default value of DBMS_STATS’ CASCADE parameter is DBMS_STATS.AUTO_CASCADE, which allows the database engine to decide when to collect index statistics when the table’s statistics are collected. (page 140)
  • Very thorough description of the DBMS_STATS package.
  • Regarding when Oracle determines to automatically collect object statistics, the book states: “By default, a table is considered stale when more than 10 percent of the rows change. This is a good default value. As of Oracle Database 11g, this can be changed if necessary.” (page 163)
  • When describing historical object statistics, the book states: “As of Oracle Database 10g, whenever system statistics or object statistics are gathered through the package dbms_stats, instead of simply overwriting current statistics with the new statistics, the current statistics are saved in other data dictionary tables that keep a history of all changes occurring within a retention period.” (page 164)
  • Interesting flowchart diagram that describes a process of setting initialization parameters and statistics. (page 171)
  • Factors that might cause fewer blocks to be read than is specified by the DB_FILE_MULTIBLOCK_READ_COUNT parameter: “Segment headers are read with single-block reads. Physical reads never span several extents. Blocks already in the buffer cache, except for direct reads, are not reread from the I/O subsystem.” (page 175)
  • Regarding behavior changes related to specific initialization parameters from one Oracle Database release to another, the book states: “When workload system statistics are available, the I/O cost is no longer dependent on the value of the initialization parameter db_file_multiblock_read_count.” (page 177)
  • A close approximation for the calculation of the auto-tuned DB_FILE_MULTIBLOCK_READ_COUNT parameter is provided in the book, rather than simply stating that the auto-tuned parameter could be affected “if the number of sessions is extremely large” as is stated in the Oracle Database documentation. (page 178)
  • Nice test case that demonstrates dynamic sampling. (pages 180-183)
  • For an index access, the io_cost ≈ blevel + (leaf_blocks + clustering_factor) * selectivity * (optimizer_index_cost_adj / 100). (page 184)
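Plugging invented statistics into the formula above shows how the pieces combine: blevel = 2, leaf_blocks = 100, clustering_factor = 1,000, selectivity = 0.01, and optimizer_index_cost_adj at its default of 100:

```sql
-- io_cost ≈ blevel + (leaf_blocks + clustering_factor) * selectivity
--           * (optimizer_index_cost_adj / 100)
SELECT 2 + (100 + 1000) * 0.01 * (100 / 100) AS io_cost FROM dual;  -- 13
```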
  • A danger in modifying the OPTIMIZER_INDEX_COST_ADJ parameter to too low of a value is that the optimizer might calculate the same cost for two different indexes due to rounding in the cost estimates – the optimizer will then select the index that sorts first alphabetically.  The book provides a partial test case that demonstrates the problem. (page 185-186)
  • The purpose of the OPTIMIZER_INDEX_CACHING parameter is often misstated in various books and websites.  This book correctly states: The OPTIMIZER_INDEX_CACHING parameter “does not specify how much of each of the indexes is actually cached by the database engine…  Values greater than 0 decrease the cost of index scans performed for in-list iterators and in the inner loop of nested loop joins. Because of this, it is used to push the utilization of these operations.” (page 186)
  • Formula showing how the OPTIMIZER_INDEX_CACHING parameter is applied to costing calculations (page 186).
  • Defining a non-mergeable view: “When the view contains grouping functions in the SELECT clause, set operators, or a hierarchical query, the query optimizer is not able to use view merging. Such a view is called a nonmergeable view.” (page 188)
  • Test case showing why the OPTIMIZER_SECURE_VIEW_MERGING parameter defaults to TRUE, when it is sensible to set that parameter to FALSE for performance reasons, and the privileges that may be assigned to a user so that the user is not subject to the negative performance effects caused by having a value of TRUE set for this parameter. (pages 187-189)
  • In Oracle Database 9.2, automatic PGA management did not work with a shared server configuration, but it does work with a shared server configuration starting in Oracle Database 10.1. (page 190)
  • Oracle Database 10.1 and lower artificially limit the amount of memory that a session can allocate when automatic PGA management is enabled (for example 100MB for serial operations), and overcoming that limit requires the modification of hidden initialization parameters.  Oracle Database 10.2 and higher remove this artificial limit, allowing PGA allocation to increase as the amount of memory increases.  While not stated in the book, setting the OPTIMIZER_FEATURES_ENABLE parameter value to that of an earlier release causes the PGA allocation to be limited, just as was the case prior to Oracle Database 10.2. (page 190)
  • As of Oracle Database 10.1, the default value for the PGA_AGGREGATE_TARGET is 20% of the SGA size. (page 190)
  • The PLAN_TABLE exists by default starting in Oracle Database 10.1 as a global temporary table.  The utlxplan.sql script only needs to be run in Oracle Database versions prior to 10.1 – this fact was missed by a couple of recently released books that still indicate that the utlxplan.sql script must be executed to create the PLAN_TABLE. (page 197)
  • Detailed explanation why EXPLAIN PLAN FOR with a SQL statement using bind variables might generate an incorrect execution plan: no bind peeking, all bind variables are handled as VARCHAR2. (pages 198-199)
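The divergence described above is easy to reproduce; in this sketch, table t and the bind variable :n are placeholders:

```sql
VARIABLE n NUMBER
EXEC :n := 42

-- EXPLAIN PLAN does not peek at :n and treats it as a VARCHAR2:
EXPLAIN PLAN FOR SELECT * FROM t WHERE id = :n;
SELECT * FROM table(dbms_xplan.display);

-- The plan actually used at runtime, with bind peeking applied:
SELECT * FROM t WHERE id = :n;
SELECT * FROM table(dbms_xplan.display_cursor);
```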
  • Demonstration of a method to simulate DBMS_XPLAN.DISPLAY_CURSOR on Oracle Database 9.2. (page 202)
  • The book states that understanding 10053 trace files is not an easy task and that the trace file is only generated when there is a hard parse.  The book provides three references to other sources to help the reader understand 10053 trace file contents. (page 205)
  • The book describes setting event 10132 to cause Oracle Database to write out the SQL statement, execution plan, and initialization parameters that affects the optimizer on each hard parse. (page 206)
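Setting and clearing the event looks like this (session level shown; the event can also be set at the instance level):

```sql
-- Dump the SQL statement, execution plan, and optimizer parameters
-- to the trace file at every hard parse:
ALTER SESSION SET events '10132 trace name context forever, level 1';

-- Switch the event off again:
ALTER SESSION SET events '10132 trace name context off';
```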
  • In an execution plan, the Used-Tmp and Max-Tmp columns are reported in KB, so the column values must be multiplied by 1024 to make the unit of measure consistent with the other memory-related columns. (page 210)
  • Detailed walk through of the order of operations in execution plans (pages 224, 227, 229, 231, 232, 235, 237-239, 240, 241, 363, 401, 421, 437, 453-454, 501)
  • The book describes when a feature requires an Enterprise Edition license (such as SQL plan baselines), Enterprise Edition with the Tuning Pack (such as SQL profiles), and the Oracle Database release version that first supported the feature.
  • The book correctly states about SQL hints: “Simply put, hints are directives added to SQL statements to influence the query optimizer’s decisions. In other words, it is something that impels toward an action, not merely suggesting one.” (page 252)
  • Default prefixes for query block names: CRI$ – CREATE INDEX statements; DEL$ – DELETE statements; INS$ – INSERT statements; MISC$ – Miscellaneous SQL statements like LOCK TABLE; MRG$ – MERGE statements; SEL$ – SELECT statements; SET$ – Set operators like UNION and MINUS; UPD$ – UPDATE statements (page 258)
  • Table 7-3 contains a cross-reference between certain initialization parameters and equivalent hints that affect a single SQL statement. (pages 262-263)
  • The text of SQL statements is normalized so that it is case-insensitive and white-space insensitive when SQL profiles are created.  This normalization allows the SQL profile to work even if changes in capitalization and white-space result in a different SQL_ID for the SQL statement (and can be set to normalize changes in constants on Oracle Database 11g). (page 271)
  • The book states about SQL Profiles: “Simply put, a SQL profile stores a set of hints representing the adjustments to be performed by the query optimizer.” (page 275)
  • When both a stored outline and SQL profile exist for the same SQL statement, the stored outline will be used rather than the SQL profile. (page 279)
  • Procedure for editing a stored outline. (pages 286-288)
  • Interesting three part test case that demonstrates the execution time difference for a parse intensive SQL statement that is repeatedly parsed with different literal values, with bind variables when cursors are closed, and with bind variables when cursors are held open. (pages 317-324)
  • While other books advocate the use of non-default values for the CURSOR_SHARING parameter, this book provides the following warning, “Cursor sharing has a reputation for not being very stable. This is because, over the years, plenty of bugs related to it have been found and fixed. Therefore, my advice is to carefully test applications when cursor sharing is enabled.”  The book does not mention that the CURSOR_SHARING value of SIMILAR is deprecated as of Oracle Database 11.1 (see Metalink (MOS) Doc ID: 1169017.1), however, this book was likely written long before that parameter value was officially deprecated. (page 325)
  • The book correctly states the default value for the SESSION_CACHED_CURSORS parameter is dependent on the Oracle Database release version, while the Oracle Database documentation for 10.2 incorrectly states the default value, and other books simply pick one of the previously used default values (0, 20, or 50) when discussing the parameter. (page 327)
  • The book correctly states that a change in Oracle Database altered the default number of cached cursors from being dependent on the OPEN_CURSORS parameter to being dependent on the SESSION_CACHED_CURSORS parameter.  The book probably should have stressed that this change in behavior only applies to PL/SQL cursors. (page 331)
  • Code demonstrations are provided in standard SQL, PL/SQL, C, and Java.
  • Non-aggregated access paths that use 5 or fewer logical I/Os per row returned are reasonable; non-aggregated access paths that use between 5 and 15 logical I/Os per row returned are probably reasonable; non-aggregated access paths that use more than 15 to 20 logical I/Os per row returned are probably inefficient.  Read consistency and row prefetching (array fetch size, the ARRAYSIZE system variable in SQL*Plus) can distort these suggested targets. (pages 341, 343, 376-378)
  • For a simple execution plan with a single full table scan, if the array fetch size is set to 1, reading each row will increment the number of consistent gets for the session by 1.  If the array fetch size is set to a value larger than the maximum number of rows in the table’s blocks, the number of consistent gets will be approximately the same as the number of blocks in the table. (page 343)
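In SQL*Plus the effect above can be observed by varying the ARRAYSIZE system variable (t is a placeholder table):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

SET ARRAYSIZE 1
SELECT * FROM t;   -- consistent gets roughly equals the number of rows

SET ARRAYSIZE 100
SELECT * FROM t;   -- consistent gets approaches the number of table blocks
```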
  • In Oracle Database 11.1 and above, it is possible to use a virtual column as a partition key. (page 358)
  • Characteristics of bitmap indexes: cannot be scanned in descending order (SORT ORDER BY operation is required), bitmap indexes always store NULL values (b*tree indexes do not store NULL values when all column values are NULL). (page 371)  The clustering factor of bitmap indexes is always set to the number of keys in the index. (page 375)  Cannot be used to enforce primary/unique key constraints, do not support row-level locking, space management problems due to updates in releases prior to 10.1, supports efficient index combine operations, supports star transformation. (page 378)  A single bitmap index entry might reference thousands of rows – modifying one of those rows may cause concurrency problems for other sessions that need to modify one of the other rows referenced by the same bitmap index entry; bitmap indexes generally work better than b*tree indexes with low cardinality data, but that does not mean that they work efficiently for extremely low cardinality data (unless bitmap combine operations significantly reduce the number of rowids that are used to fetch table rows). (page 379)
  • As of Oracle Database 10.1 it is possible to specify table columns in index hints, which effectively forces the optimizer to select an index that references the specified column, for example: /*+ index_asc(t (c1)) */, where c1 is the column of interest.  (page 385)
  • The book states about index-organized tables: “A row in an index-organized table is not referenced by a physical rowid. Instead, it is referenced by a logical rowid. This kind of rowid is composed of two parts: first, a guess referencing the block that contains the row (key) at the time it is inserted, and second, the value of the primary key.” Use ALTER INDEX i UPDATE BLOCK REFERENCES;  to update the guess references. (page 404)
  • Changing an index definition from unique to non-unique could result in the structure of an execution plan changing, even though the index is still used, apparently to help take advantage of block prefetching. (page 423)
  • Join elimination was introduced in Oracle Database 10.2; it allows the query optimizer to eliminate a join and remove the linked table from the execution plan.  This is possible if no columns are selected from the linked table, and a validated foreign key links the table to the other tables specified in the SQL statement. (page 448)
  • The star transformation is a cost-based transformation, even when a STAR_TRANSFORMATION hint is specified in the SQL statement. (page 456)
  • When materialized views are created, the ORDER BY clause is only respected during the initial materialized view creation because the ORDER BY clause is not included in the definition that is stored in the data dictionary. (page 461)
  • Automatic query rewrite that allows the optimizer to take advantage of materialized views is a cost based decision that can be controlled by the REWRITE and NOREWRITE hints.  The materialized view must be altered to ENABLE QUERY REWRITE, and the QUERY_REWRITE_ENABLED parameter must be set to TRUE. (pages 462-463)
  • The book states: “Full-text-match and partial-text-match query rewrites can be applied very quickly… In contrast, general query rewrite is much more powerful. The downside is that the overhead of applying it is much higher.” (page 466)
  • Extensive coverage of materialized views, and improving the performance of those materialized views.  The book contains a warning (page 481) not to use ANSI join syntax with materialized views. (pages 459-481)
  • The various results caches are described. (pages 481-489)
  • Result cache limitations: “Queries that reference nondeterministic functions, sequences, and temporary tables are not cached.  Queries that violate read consistency are not cached. For example, the result set created by a session with outstanding transactions on the referenced tables cannot be cached.  Queries that reference data dictionary views are not cached.” (page 485)
  • Various details of parallel processing (query, DML, and DDL) are described. (pages 489-513)
  • “Parallel DML statements are disabled by default (be careful, this is the opposite of parallel queries).”  Use a command such as “ALTER SESSION ENABLE PARALLEL DML” or “ALTER SESSION FORCE PARALLEL DML PARALLEL 4” to specify the default degree of DML parallelism. (page 503)
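A minimal sketch of the commands described above (tables t and s are placeholders; the hint sets the degree, it does not force parallelism):

```sql
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ parallel(t 4) */ INTO t
SELECT * FROM s;

-- The modified table cannot be queried in the same transaction
-- before the commit (ORA-12838 is raised otherwise)
COMMIT;
```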
  • The book states: “Parallel DDL statements are enabled by default.” (page 505)
  • Disabling parallel execution at the session level does not disable parallel execution for recursive SQL statements. (page 509)
  • Parallel execution should not be enabled unless there is sufficient CPU/memory/IO bandwidth available and the query requires at least a minute to execute.  “It is important to stress that if these two conditions are not met, the performance could decrease instead of increase.” (page 509)
  • Regarding the various parallel related hints, the book states: “It is very important to understand that the hints parallel and parallel_index do not force the query optimizer to use parallel processing. Instead, they override the degree of parallelism defined at the table or index level.” (page 509)
  • The book describes direct path insert. (pages 513-517)
  • Regarding the reduction of redo generation, the book states: “Even if minimal logging is not used, a database running in noarchivelog mode doesn’t generate redo for direct-path inserts.” (page 516)
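A sketch of a direct path insert (tables t and s are placeholders); combined with NOLOGGING, or a database running in noarchivelog mode, it generates minimal redo:

```sql
ALTER TABLE t NOLOGGING;

-- The append hint requests a direct path insert above the high-water mark
INSERT /*+ append */ INTO t
SELECT * FROM s;

-- As with parallel DML, the table cannot be re-read before the commit
COMMIT;
```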
  • Enabling array fetching (row prefetching) in PL/SQL (pages 518-519), OCI (pages 519-520), JDBC (pages 520-521), ODP.NET (page 521), SQL*Plus (page 521)
  • Enabling array INSERT in PL/SQL (pages 523-524), OCI (page 524), JDBC (page 524), ODP.NET (pages 524-525)
  • The book provides a test case that demonstrates performance differences caused by the relative position of columns in a table’s definition.  The book states: “So whenever a row has more than a few columns, a column near the beginning of the row might be located much faster than a column near the end of the row.” (page 528)
  • Selecting the appropriate datatype for columns. (pages 529-535)
  • Row chaining and row migration: row migration happens when there is not enough free space in a block for a row, so the entire row (leaving behind forwarding information) is moved to another block; row chaining happens when a row contains more than 255 columns, or when a row exceeds the full storage capacity of a block. (pages 535-538)
  • Block contention (pages 539-546)
  • Data compression (pages 546-548)
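
The parallel DML and direct-path insert points above (pages 503, 513-517, and the ORA-12838 entry later in this review) can be sketched as follows.  This is a minimal sketch, not an example from the book; the sales_hist and sales tables are hypothetical:

```sql
-- Parallel DML must be enabled at the session level (unlike parallel
-- queries), and after a direct-path insert the modified object cannot be
-- queried in the same transaction until a commit (ORA-12838).
ALTER SESSION ENABLE PARALLEL DML;
-- or force a default degree: ALTER SESSION FORCE PARALLEL DML PARALLEL 4;

INSERT /*+ APPEND PARALLEL(sales_hist 4) */ INTO sales_hist
SELECT * FROM sales;

-- SELECT COUNT(*) FROM sales_hist;  -- would raise ORA-12838 at this point
COMMIT;                              -- the table becomes readable again
```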

Data Dictionary Views/Tables:

  • V$SQLAREA (page 18)
  • V$SQL (pages 18, 27)
  • V$SQL_SHARED_CURSOR (page 21)
  • V$SQL_BIND_METADATA (page 23)
  • V$SESSION (pages 45, 501, 503)
  • DBA_ENABLED_TRACES (page 67)
  • V$PARAMETER (page 70)
  • V$PROCESS (page 73)
  • SYS.AUX_STATS$ (page 112)
  • V$FILESTAT (page 114)
  • USER_JOBS (pages 115, 474)
  • USER_SCHEDULER_JOBS (page 115)
  • SYS.COL_USAGE$ (page 145)
  • SYS.COL$, SYS.OBJ$, SYS.USER$ (page 146)
  • DBA_TAB_STAT_PREFS (page 150)
  • V$PGASTAT (page 191)
  • PLAN_TABLE (pages 197, 532)
  • DBA_HIST_SQL_PLAN (page 203)
  • STATS$SQL_PLAN (page 204)
  • V$SQL_HINT (page 256)
  • V$SQL_OPTIMIZER_ENV (page 264)
  • DBA_SQL_PROFILES (page 271)
  • SYS.SQLPROF$ATTR (page 275)
  • SYS.SQLPROF$ (page 276)
  • V$SESSTAT, V$STATNAME (page 320)
  • USER_TAB_PARTITIONS (pages 353, 355, 366)
  • USER_TAB_COLUMNS (page 396)
  • INDEX_STATS (page 399)
  • V$SQL_WORKAREA_ACTIVE (page 438)
  • USER_MVIEWS (page 471)
  • MV_CAPABILITIES_TABLE (pages 476, 480)
  • V$PX_PROCESS_SYSSTAT (page 495)
  • V$SYSSTAT (page 500)
  • V$PQ_TQSTAT (page 511)
  • DBA_TABLES (page 538)
  • V$WAITSTAT (page 539)
  • DBA_EXTENTS (page 542)

Initialization Parameters:

  • OPTIMIZER_MODE (pages 20, 173)
  • SQL_TRACE (page 63)
  • DIAGNOSTIC_DEST (page 71)
  • _OPTIMIZER_COST_MODEL (page 111)
  • OPTIMIZER_FEATURES_ENABLE (pages 174, 179)
  • OPTIMIZER_INDEX_COST_ADJ (pages 111, 183)
  • OPTIMIZER_INDEX_CACHING (pages 111, 186)
  • SORT_AREA_SIZE (page 191)
  • SQLTUNE_CATEGORY (page 273)
  • USE_STORED_OUTLINES (page 284)
  • OPEN_CURSORS (page 322)
  • CURSOR_SHARING (page 325)
  • QUERY_REWRITE_ENABLED (pages 391, 463)
  • QUERY_REWRITE_INTEGRITY (pages 391, 469)
  • NLS_COMP, NLS_SORT (page 392)
  • _B_TREE_BITMAP_PLANS (page 402)

SQL Hints:

  • NO_CPU_COSTING (page 111)
  • DYNAMIC_SAMPLING (page 179)
  • FULL (page 254)
  • QB_NAME (page 257)
  • ALL_ROWS, FIRST_ROWS(n), RULE (page 263)
  • RESULT_CACHE (pages 263, 482)
  • NO_RESULT_CACHE (pages 263, 484)
  • OPT_ESTIMATE (page 277)
  • INDEX (pages 369, 419)
  • INDEX_SS (page 397)
  • INDEX_SS_ASC, INDEX_SS_DESC (page 398)
  • INDEX_COMBINE (pages 400, 402)
  • USE_MERGE (page 425)
  • USE_HASH (page 435)
  • INDEX_JOIN (page 439)
  • SWAP_JOIN_INPUTS (page 440)
  • PQ_DISTRIBUTE (page 445)
  • NO_OUTER_JOIN_TO_INNER (page 449)
  • NO_UNNEST (page 450)
  • REWRITE, NO_REWRITE (page 462)
  • APPEND (page 513)

Built-In Functions/Procedures:

  • USERENV (page 45)
  • SYS_CONTEXT (pages 46, 268)
  • DBMS_SYSTEM.SET_EV (note: DBMS_SYSTEM package function by default can only be executed by the SYS user, page 64)
  • NTILE (page 127)
  • SYS_OP_COMBINED_HASH (page 132)
  • DBMS_STATS.SET_PARAM (page 147)
  • DBMS_STATS.GET_PREFS (page 149)
  • DBMS_UTILITY.GET_TIME (page 178)
  • DBMS_XPLAN.DISPLAY_CURSOR (pages 201, 217)
  • DBMS_XPLAN.DISPLAY_AWR (pages 204, 219)
  • DBMS_XPLAN.DISPLAY (page 213)
  • DBMS_OUTLN.DROP_BY_CAT (page 289)
  • DBMS_OUTLN.CLEAR_USED (page 290)
  • VSIZE (page 533)

Packages:

  • DBMS_SQL (pages 17, 332)
  • DBMS_SESSION (page 45)
  • DBMS_PROFILER (page 100)
  • DBMS_STATS (pages 109, 136)
  • DBMS_XPLAN (page 208)
  • DBMS_SQLTUNE (pages 267-279)

Execution Plans:

  • TABLE ACCESS BY INDEX ROWID operation (page 25)
  • Common columns found in an execution plan (pages 209-210)
  • Query Block Name/Object Alias, Predicate Information (page 211)
  • Column Projection, Note (page 212)
  • Format parameters for DBMS_XPLAN (pages 213-214, 218)
  • Permissions to display execution plans (pages 197, 217, 219)
  • Basic rules for parent and child operations in execution plans (pages 221-222)
  • COUNT STOPKEY operation (pages 224-225)
  • FILTER operation (pages 225, 229)
  • UNION-ALL operation (page 226)
  • NESTED LOOPS operation (page 228)
  • UPDATE operation (page 232)
  • CONNECT BY WITH FILTERING operation (page 233)
  • Identifying wrong estimated cardinality problems (page 241)
  • Displaying SQL Plan Baselines (page 297)
  • TABLE ACCESS FULL operation (page 350)
  • PARTITION RANGE SINGLE operation (pages 354, 355)
  • PARTITION RANGE ALL (page 358)
  • INDEX FULL SCAN (page 369)
  • INDEX FAST FULL SCAN (page 370)
  • INLIST ITERATOR (page 374)
  • INDEX UNIQUE SCAN (page 381)
  • NLSSORT (page 394)
  • INDEX SKIP SCAN (page 397)
  • BITMAP AND, BITMAP OR (page 400)
  • BITMAP MINUS (page 401)
  • TABLE ACCESS HASH (page 407)
  • CONCATENATION (page 408)
  • MERGE JOIN, SORT JOIN (page 425)
  • OMem column, Used-Mem column (page 429)
  • 1Mem column, Used-Tmp column (pages 433-434)
  • INDEX$_JOIN$ (page 439)
  • HASH JOIN RIGHT OUTER (page 440)
  • PX SEND QC (RANDOM) (pages 445, 492; page 493 for methods other than QC RANDOM)
  • PX BLOCK ITERATOR (pages 447, 491)
  • HASH JOIN SEMI (page 450)
  • RESULT CACHE (page 483)
  • TQ column (page 491)
  • IN-OUT column (pages 491, 493)
  • PX COORDINATOR (page 494)

Oracle Error Numbers:

  • ORA-13859: action cannot be specified without the module specification (page 67)
  • ORA-00922: missing or invalid option (page 80)
  • ORA-20005: object statistics are locked (pages 142, 156)
  • ORA-01013: user requested cancel of current operation (page 153)
  • ORA-38029: object statistics are locked (page 157)
  • ORA-03113: TNS:end-of-file on communication channel (page 175)
  • ORA-01039: insufficient privileges on underlying objects of the view (page 196)
  • ORA-00931: missing identifier (page 283)
  • ORA-18009: one or more outline system tables do not exist (page 286)
  • ORA-07445, ORA-00600 (page 452)
  • ORA-23538: cannot explicitly refresh a NEVER REFRESH materialized view (page 472)
  • ORA-12026: invalid filter column detected (page 481)
  • ORA-12827: insufficient parallel query slaves available (page 497)
  • ORA-12838: cannot read/modify an object after modifying it in parallel (pages 513, 516)
  • ORA-26040: data block was loaded using the NOLOGGING option (page 514)
  • ORA-01722: invalid number (page 530)

Book Review: Oracle Database 11g Performance Tuning Recipes (Part 2)

6 10 2011

October 6, 2011

Hammering a Square Peg into a Round Hole: Fine Edges are Lost, Gaps in Detail

(Back to the Previous Post in the Series)

In an effort for my review to be fair, I have completed the review for the second half of the “Oracle Database 11g Performance Tuning Recipes” book (omitting the pages for chapters 11 and 12).  The total review is now 35.5 typewritten pages in length.  Since this is the longest review that I have written for a book, I decided to leave the second half of this book review open for reader comments.

The following is from a comment that I attached to the original review on Amazon in response to a comment that was attached to my review:

That said, there is still a chance that the format could have worked for performance tuning if:

  1. The authors focused on Oracle Database 11.1 and 11.2
  2. The authors focused on performance tuning, rather than telling the reader, for instance, how to select all of the rows from a table (that is for the SQL Recipes book to explain)
  3. The authors, two of whom co-authored the book “RMAN Recipes for Oracle Database 11g”, stayed true to the recipe format that was used in that RMAN book. The RMAN book typically had a lengthy introduction to almost every chapter that more or less set the stage for the recipes in that chapter, providing the background knowledge and theory needed to understand the chapters (reference pages 313-318, the introduction for chapter 11 – viewable through Google books).
  4. The authors of this book are no doubt talented, and I think that if the authors had an extra six to eight months of reading, reviewing, and reworking the recipes, the authors probably would have corrected most of the instances of logic errors, weak development problems, and off-topic nature of the recipes that I mentioned in the book review.
  5. The authors spent some time discussing how a recipe could fail (maybe the reader did not buy that extra cost license, maybe the tip is valid only for versions of Oracle Database before 10.2, etc.).

For other people who read my review, I need to clarify that I am NOT recommending against buying this book – the book will just need to be used differently than originally designed. If you buy the book, after reading a recipe, perform a search of the documentation and various blogs, and most importantly test the material, to see if you are able to confirm or refute what is in the book. There is a good chance that such an approach will result in a better understanding of the topic than would have been the case if the recipes contained no errors AND were more fully developed. As had happened in my blog articles that referenced the alpha copy of this book, the topics in the book may be used as conversation starters to venture far further into the details of Oracle Database performance tuning.

The second half of the review follows, in the same format as the review for the first half of the book.

Data Dictionary Views:

  • V$SORT_USAGE (page 218)
  • V$SESSION_LONGOPS (page 310)
  • V$SQLSTATS (page 312)
  • V$SQL_MONITOR (page 313)
  • V$SQL_PLAN_MONITOR (page 316)
  • V$DIAG_INFO (page 328)
  • GV$PX_PROCESS (page 345)
  • DBA_ENABLED_TRACES (page 353)
  • V$BGPROCESS (page 363)
  • DBA_AUTOTASK_CLIENT (page 449)
  • SYS.AUX_STATS$ (page 464)
  • SYS.COL_GROUP_USAGE$ (page 485)
  • V$PQ_TQSTAT (page 532)
  • V$PQ_SYSSTAT (page 549)


  • _PX_TRACE (page 346)
  • SQL_TRACE (page 353)
  • LOG_ARCHIVE_TRACE (page 365)
  • OPTIMIZER_MODE (page 447)

Error Numbers:

  • ORA-01652: unable to extend temp segment (page 219)
  • ORA-03297: file contains used data beyond requested RESIZE value (page 221)
  • ORA-01427: single-row subquery returns more than one row (page 264)
  • ORA-01790: expression must have same datatype as corresponding expression (page 274)
  • ORA-00932: inconsistent datatypes: expected NUMBER got CHAR (page 280)
  • ORA-00060: Deadlock detected (page 362)

SQL Hints:

  • MONITOR, NOMONITOR (page 314)
  • PARALLEL (page 344)
  • FULL (page 493)
  • INDEX (page 494)
  • NO_INDEX (page 495)
  • ORDERED, LEADING (page 497)
  • USE_NL (page 498)
  • USE_HASH, USE_MERGE (page 499)
  • FIRST_ROWS (page 503)
  • ALL_ROWS (page 504)
  • APPEND, APPEND_VALUES (page 505)
  • NOAPPEND (page 506)
  • RESULT_CACHE (page 509)
  • DRIVING_SITE (page 513)
  • REWRITE (page 519)

Comments, Corrections, and Problems:

  • Recipe 7-5 seems to have borrowed the two SQL statements from a consulting website’s page.  The line breaks and column aliases match, however the table aliases were changed.  The first query is found under the heading “Sort Space Usage by Statement”, and the second query is found under the “Sort Space Usage by Session” heading. (pages 218-219)
  • Recipe 7-8 on page 227 states, “While the database can automatically resolve deadlocks between sessions (by killing one of the sessions holding a needed lock), when there’s a latch or pin on…”  When a deadlock happens, the SQL statement that is executed by one of the sessions is automatically rolled back – the session itself is not killed. (pages 224-229)
  • Recipe 8-1 “Retrieving All Rows from a Table” really does not belong in an Oracle performance tuning book, but instead in a beginning SQL book.  The recipe probably should have discussed some of the potential performance problems related to using “SELECT *” – the book simply stated that it was an alternative to listing every column to retrieve. (pages 254-255)
  • Recipe 8-2 “Retrieve a Subset of Rows from a Table” shows how to add a basic WHERE clause to a SQL statement.  Anyone attempting to learn even basic Oracle performance tuning should be well beyond the level of knowledge covered in this recipe, which shares concepts similar to those in recipe 1-1 in the “Oracle SQL Recipes” book. (pages 256-257)
  • Recipe 8-3 “Joining Tables with Corresponding Rows” demonstrates how to perform simple equijoins between tables using Oracle SQL syntax and ANSI syntax.  Once again, the concepts in this recipe are appropriate for a beginner level SQL book – concepts discussed are similar to those found in recipe 3-1 in the “Oracle SQL Recipes” book.  It might have been helpful for the book to discuss some of the potential dangers of natural joins. (pages 258-259)
  • Recipe 8-4 “Joining Tables When Corresponding Rows May Be Missing” demonstrates outer joins using Oracle syntax and ANSI syntax, and in that regard is similar to recipe 3-3 in the “Oracle SQL Recipes” book.  The full outer join using Oracle syntax has a couple of problems, including the removal of duplicate rows when those duplicate rows are necessary (consider a father and son working at the same place and in the same department).  To fix the solution, change the UNION clause to a UNION ALL, and then in the second WHERE clause (found in the second half of the UNION ALL), add “AND E.MANAGER_ID IS NULL”. (pages 259-262)
  • Recipe 8-5 “Constructing Simple Subqueries” describes how to add subqueries to the SELECT, WHERE, and HAVING clauses.  No examples of actual subqueries were provided under the heading of “Multi-Row Subqueries”, just simple variants of IN lists using ANY, SOME, and ALL keywords.  Based on the section heading I would have expected to see a subquery in the WHERE clause that potentially returned more than one row and used the ANY, SOME, or ALL keywords to prevent an ORA-01427 error. (pages 263-267)
  • Recipe 8-6 “Constructing Correlated Subqueries” shows how to place a subquery in the WHERE clause that references columns in the parent query’s row sources (another example that might appear in an introductory SQL book).  The “How It Works” section of this recipe seems to ignore the fact that a correlated subquery may be automatically transformed by Oracle’s query optimizer into a standard join – the quote on page 268 could very easily be demonstrated as incorrect: “In a correlated subquery, the outer query is executed first, as the inner query needs the data from the outer query in order to be able to process the query and retrieve the results.” (pages 267-269)
  • Recipe 8-7 “Comparing Two Tables to Finding Missing Rows” shows how to use the SQL MINUS operator.  Considering that this book describes performance tuning methods, it probably would have been a good idea to show alternate methods for accomplishing the same task that often perform more efficiently than the MINUS operator (such as a left outer join, with a specification that the column on the right side of the join must be NULL). (pages 269-271)
  • Recipe 8-8 “Comparing Two Tables to Finding Matching Rows” shows how to use the SQL INTERSECT operator – basic SQL syntax again with no performance tuning side to the description. (page 271)
  • Recipe 8-9 “Combining Results from Similar SELECT Statements” demonstrates using the UNION and UNION ALL operators – material that should be associated with a beginning SQL book. (pages 271-274)
  • Recipe 8-10 “Searching for a Range of Values” demonstrates how to use the SQL BETWEEN clause – another introduction to SQL recipe with no performance tuning twist (although the examples did not use TRUNC on a date column, a couple of the examples seem to rely on implicit varchar to date datatype conversions). (pages 274-276)
  • Recipe 8-11 “Handling Null Values” shows how to use the NVL, NVL2, and COALESCE functions as well as the IS NULL and IS NOT NULL clauses.  The material in the recipe is similar to recipe 1-13 in the “Oracle SQL Recipes” book.  Considering that this is a book about performance tuning, the book should have indicated why the COALESCE function is often preferable to the NVL function. (pages 277-280)
  • Recipe 8-12 “Searching for Partial Column Values” shows how to use the SQL LIKE keyword.  There is a bit of a performance related side to this recipe, where the recipe shows that applying the TO_CHAR function to an indexed date column prevents the optimizer from using the standard index on that column.  The recipe then alters the NLS_DATE_FORMAT to “yyyy-mm-dd” and demonstrates with the help of an execution plan that using “WHERE HIRE_DATE LIKE ‘1995%’” in the SQL statement will allow the index on the date column to be used.  Unfortunately, the execution plan seems to show an index on the employee’s name being used rather than the index on the HIRE_DATE column.  If the recipe had included the “Predicate Information” section of the execution plan, it would have shown that a function is applied to the HIRE_DATE column, which again prevents the optimizer from using that index; something like the following would appear in the “Predicate Information” section of the execution plan: “filter(INTERNAL_FUNCTION(“HIRE_DATE”) LIKE ‘1995%’)”. (pages 280-283)
  • Recipe 8-13 “Re-using SQL Statements Within the Shared Pool” includes the following quote on page 284, “Essentially, bind variables are called substitution variables, and replace literals within a query.”  Technically, in Oracle Database terminology bind variables are not synonymous with substitution variables. On page 286 the book states about the row source generation stage of parsing, “This is an Oracle program that received the execution plan from the optimization step and generates a query plan.”  It appears that the book is attempting to paraphrase the Oracle Database Concepts Guide which states, “The row source generator is software that receives the optimal execution plan from the optimizer and produces an iterative plan, called the query plan, that is usable by the rest of the database. The iterative plan is a binary program that, when executed by the SQL virtual machine, produces the result set.”  Page 286 shows enabling a 10046 trace by changing the SQL_TRACE parameter at the session level – that parameter is deprecated as of Oracle Database 10.2. (pages 284-288)
  • Recipe 8-14 “Avoiding Accidental Full Table Scans” states as a method of preventing accidental full table scans when indexes are present on a column, “When constructing a SQL statement, a fundamental rule to try to always observe, if possible, is to avoid using functions on the left side of the comparison operator.”  This statement does not make sense: “TO_CHAR(ORDER_DATE,’YYYY’) = ‘2011’” is equivalent to “’2011’ = TO_CHAR(ORDER_DATE,’YYYY’)”.  The authors probably intended to state, “If possible, in the WHERE clause avoid placing indexed columns in function calls, such as TO_CHAR, unless a function-based index is present for that exact function call and column combination.” (pages 288-290)
  • Recipe 8-15 “Creating Efficient Temporary Views” shows how to create inline views, and provides performance warnings about their use.  The authors of the book make several critical mistakes in this recipe, giving the readers the impression that the rows requested by inline views are always materialized into tables, and as such should be avoided whenever possible.  The SQL statement found in the How it Works section uses subquery factoring (WITH blocks) rather than inline views as described in the book.  Page 292 states about the query found in the How it Works section, “In examples such as these, it is generally more efficient at the database level to create tables that hold the data defined by the inline views—in this case, three separate tables. Then, the final query can be extracted from joining the three permanent tables to generate the results.”  The recipe also contains the following cautionary note, “Large inline views can easily consume a large amount of temporary tablespace.”  There is more wrong than right in this recipe.  Oracle Database does not materialize every inline view into a temp table (such as SYS_TEMP_0FD9D6628_D08144) as it processes the SQL statement.  The optimizer is free to materialize an inline view into a temp table, but quite often it is also free to push predicates into the inline view, or to completely rewrite the original SQL statement so that it no longer contains an inline view.  The book’s suggested alternative of storing the intermediate results of inline views in permanent tables, unnecessarily generating redo and undo in the process, is very strange (if we must head in this direction, why not suggest using materialized views or statically defined temporary tables?).  The original SQL statement with the inline views is able to take advantage of not only pushed predicates to further restrict the rows requested by the inline view, but also any indexes and partitioning that may be present in the parent tables.
Additionally, when troubleshooting the performance or accuracy of a SQL statement it is helpful to have the full SQL text that generates the resultset in a single SQL statement. (pages 290-292)
  • Recipe 8-16 “Avoiding the NOT Clause” attempts to describe “performance drawbacks in using the NOT clause within your SQL statements, as they trigger full table scans.”  The sample SQL statement in the Solution section shows a SQL statement selecting from a single table named EMPLOYEES_BIG with the execution plan showing an estimated 697,000 rows to be returned by a full table scan and a calculated cost of 4,480.  The “improved” version of the query selects from two tables, with the execution plan showing a full table scan of the EMPLOYEES table and a nested loops join to the DEPARTMENTS table’s primary key with an estimate of 33 rows to be returned with an estimated cost of 3.  The authors specifically mentioned the drop in the calculated cost from 4480 to 3 (usually not a good measure of the performance improvement of two queries), but failed to recognize that a full table scan is still being performed, and the name of the table as shown by the execution plan changed from EMPLOYEE_BIG to EMPLOYEES – the execution plan included in the book does not match the associated SQL statement displayed in the book.  The use of the NOT keyword does not specifically prevent the use of an index on a column, as was stated in the book.  The NOT EXISTS example SQL statement will not perform as the authors expect because the NOT EXISTS clause is not a correlated NOT EXISTS clause – the query will return all employees with a COMMISSION_PCT greater than 0 regardless of the employee’s DEPARTMENT_ID value. (pages 293-295)
  • Recipe 8-17 “Controlling Transaction Sizes”, while an interesting discussion on the topic of savepoints, the recipe does not have anything to do with performance tuning. (pages 295-297)
  • Recipe 9-4 states on page 308, “Once you have an explain plan to interpret, you can tell which steps are executed first because the innermost or most indented steps are executed first, and are executed from the inside out, in top-down order.”  That statement, while sometimes correct, is easy to disprove – there are easy-to-find examples where that statement is not true.  The Oracle Database 11.2 Performance Tuning Guide states essentially the same thing, “The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is normally executed first” – that statement is also occasionally incorrect, as can be determined by a 10046 extended SQL trace.  The explained order of the execution plan (ID 3, 5, 2, 4, 1, 0) is incorrect – it should be (ID 3, 2, 5, 4, 1, 0).  The recipe also advises looking at the Cost column, so that it is possible to determine the amount of improvement – such an approach is unreliable.  On page 309 the book states, “By looking at our original explain plan, we determined that the EMP table is larger in size” – the execution plan does not indicate the size of a specific table, only the estimated number of rows and the estimated number of bytes that will be returned from the row source. (pages 307-310)
  • Recipe 9-5 states that, “With a simple query against the V$SESSION_LONGOPS view, you can quickly get an idea of how long the query will execute, and when it will finish” – that statement is mostly true for simple SQL statements with a single table being read by a full table scan.  This view shows estimated remaining completion time of individual (currently executing) operations in an execution plan, and not the estimated time remaining before the SQL statement will finish (pages 310-311).
  • In Recipe 9-6, the query of V$SQLSTATS does not necessarily retrieve the top five recently executed SQL statements that performed disk I/O, rather it retrieves the five SQL statements that have accumulated the most disk I/O (possibly through multiple executions) without regard to when the SQL statement executed (the statistics could still be in the V$SQLSTATS even though the SQL statement is no longer in the shared pool). (pages 311-312)
  • Recipe 9-7 should have mentioned that using the V$SQL_MONITOR view requires an Enterprise Edition license, a Diagnostics Pack license, and a Tuning Pack license.
  • Recipe 9-8 should have mentioned that using the V$SQL_PLAN_MONITOR view requires an Enterprise Edition license, a Diagnostics Pack license, and a Tuning Pack license. (pages 316-318)
  • Recipe 9-9 should have mentioned that using the Oracle SQL Performance Analyzer requires a license for the Oracle Real Application Testing option. (pages 321-325)
  • Recipe 10-1 states (on page 328) that if the TIMED_STATISTICS parameter is set to FALSE that SQL tracing [extended 10046 SQL trace] is disabled – the TIMED_STATISTICS parameter does not determine whether or not SQL tracing is enabled.
  • Recipe 10-5 shows how to interpret a raw 10046 extended SQL trace file.  This recipe is surprisingly short at just over one page, including a short sample trace file.  The recipe really did not provide much information other than describing what (generic, not necessarily specific to the sample 10046 extended SQL trace file) processes take place in the Parse, Execute, and Fetch stages, without providing much information beyond stating that the 15.8ms long wait for ‘Disk file operations I/O’ is not a significant amount of time.  This recipe in the book missed a significant chance to describe one of the most useful tools when troubleshooting performance problems when the problems can be associated with a specific session.  See the book “Secrets of the Oracle Database”, the book “Optimizing Oracle Performance”, or the book “Troubleshooting Oracle Performance” for better examples of extracting useful information from a raw 10046 extended SQL trace file. (pages 334-335)
  • Recipe 10-6 states the following, “Note that the TKPROF or other profiling tools show the elapsed times for various phases of query execution, but not the information for locks and latches. If you’re trying to find out if any locks are slowing down a query, look at the raw trace files to see if there are any enqueue waits in the WAIT lines of the raw file.”  I might be misinterpreting these two sentences, but the sentences appear to be incorrect assuming that a level 8 or greater (wait events) extended SQL trace generated by Oracle Database 10.1 (or higher) is passed into TKPROF.  Starting in Oracle Database 10.1 most of the enqueue waits are divided into much more specific wait events that begin with “enq:” (there are 299 wait events in Oracle Database that begin with “enq:”).  The same is true for latches, where the “latch free” wait event is split out into multiple wait events (there are 43 wait events in Oracle Database for latches). Oracle Database 9.2 and earlier did not provide information about the exact type of enqueue or latch wait in TKPROF summaries. (pages 335-336)
  • Recipe 10-8 shows how to invoke TKPROF to analyze a 10046 extended SQL trace file.  The TKPROF call used by the book specifies the EXPLAIN option, and provides the following description “If you specified the explain parameter when issuing the tkprof command, you’ll find an execution table showing the execution plan for each SQL statement.”  The book does not explain that the execution plan displayed in the output file may not be the actual execution plan used for the query, much like the execution plan generated by autotrace and explain plan might not be the actual execution plan (this fact was not mentioned in the book either).  The book made no mention that the Row Source Operation information in the output file shows the actual execution plan used by the SQL statement (note that the client may still show the wrong execution plan when analyzing a trace file from or more recent version of Oracle Database).  At the top of page 340 the book states that the unit of measure of the time statistic is in milliseconds (1/1000 of a second).  Prior to Oracle Database 9.1 the unit of measure was centiseconds (1/100 of a second) and as of Oracle Database 9.1 the unit of measure is microseconds (1/1000000 of a second). The book stated that because the “SQL*Net message to client” wait event is an idle wait event, the time reported as waited can be ignored.  The actual wait event displayed in the book’s TKPROF output is “SQL*Net message from client” and the time reported is 462.81 seconds in 5,461 waits with a max wait time of 112.95 seconds.  Even though the “SQL*Net message from client” wait event is typically classified as an idle wait event, at the session level, and especially in a 10046 extended SQL trace that wait event may reveal critical details.  
For instance, it probably required roughly 112.95 seconds for the 10046 trace to be ended, so if we subtract the 112.95 seconds, we are left with 349.86 seconds in the wait event, which would indicate that the average wait time for this client (and in theory network) side wait is 0.064 seconds, which is longer than the only other wait reported, a 0.05 second single wait for a block read from disk.  The long average duration of this wait event either means that the network link is very slow (you would expect average wait times in the range of 0.00001 seconds to 0.002 seconds), or that the client computer receiving the data was excessively busy on average. In this recipe, the book also again states that the TKPROF output will not show any enqueue related waits – that is also not true. (pages 337-340)
  • Recipe 10-15 shows how to enable and disable a 10046 extended SQL trace for another session using DBMS_MONITOR.SESSION_TRACE_ENABLE and  DBMS_MONITOR.SESSION_TRACE_DISABLE, respectively.  The command provided for disabling the 10046 trace in the other session, “execute dbms_monitor.session_trace_disable();” does not specify the SID and SERIAL# for the other session, thus it defaults to disabling a 10046 trace in the current session.  This problem can be confirmed by checking the SQL_TRACE column of V$SESSION for the session for which the trace was enabled, for example “SELECT SQL_TRACE FROM V$SESSION WHERE SID=4;”.  The book also states, “The trace file will have the suffix mytrace1, the value you set as the trace file identifier,” but that is only true if tracing were enabled for the current session (and not another session as demonstrated) and if the recipe had included a command to set the session level TRACEFILE_IDENTIFIER parameter. (pages 349-350)
  • Recipes 10-2, 10-11, and 10-16 seem to share a common ancestry with a blog article.
  • Recipe 10-16 demonstrates how to enable 10046 extended SQL traces in other sessions using Oracle Database 11.1 and above syntax.  All of the examples in this recipe use ALTER SESSION SET EVENTS commands rather than ALTER SYSTEM SET EVENTS commands, and are thus ineffective (do not perform as described) for enabling a trace in another session. (pages 351-352)
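A sketch of the system-level form that would actually target another session (syntax per my understanding of the 11.1 and later diagnostic event infrastructure; the ospid value is a placeholder):

```sql
-- Scope the trace to another process by its operating system process ID (SPID).
ALTER SYSTEM SET EVENTS 'sql_trace {process: ospid = 1234} wait=true, bind=true';

-- Turn the trace off for the same process.
ALTER SYSTEM SET EVENTS 'sql_trace {process: ospid = 1234} off';
```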
  • Recipe 10-18 shows how to enable a system-wide 10046 trace using DBMS_MONITOR.DATABASE_TRACE_ENABLE – if I recall correctly, tracing might not be disabled when requested for the background processes, which means that those processes will continue writing to their trace files until their max trace file size is reached, the drive containing the trace files fills, or the database is bounced.  This may be an Oracle release version specific problem.  The recipe probably should have mentioned this potential problem. (page 353)
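For reference, a minimal sketch of the enable/disable pair, with a verification step for the problem described above (assuming the SQL_TRACE column of V$SESSION reports ENABLED/DISABLED):

```sql
EXECUTE DBMS_MONITOR.DATABASE_TRACE_ENABLE(WAITS => TRUE, BINDS => TRUE);
-- ... capture the activity of interest ...
EXECUTE DBMS_MONITOR.DATABASE_TRACE_DISABLE;

-- Verify that tracing really stopped for all sessions, including background processes:
SELECT SID, PROGRAM, SQL_TRACE FROM V$SESSION WHERE SQL_TRACE <> 'DISABLED';
```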
  • Recipe 10-20 shows how to enable a system-wide 10046 trace using “ALTER SYSTEM SET EVENTS ‘10046 TRACE NAME CONTEXT FOREVER,LEVEL 12’;”.  The same potential problem is present in this recipe as for recipe 10-18.  (page 356)
  • Recipe 10-21 shows how to enable a 10046 trace using SQL*Plus’ ORADEBUG.  The top of page 357 states, “You can set a trace in a running session using the operating system process ID (PID), with the help of the oradebug utility.”  Abbreviating the operating system process ID as PID is likely to cause unnecessary confusion.  The PID column in V$PROCESS is the Oracle process number, while the SPID column holds the operating system process ID. (pages 356-357)
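A short ORADEBUG sketch that makes the PID/SPID distinction explicit (SID 4 and the process ID values are placeholders):

```sql
-- SPID is the operating system process ID; PID is the Oracle process number.
SELECT p.SPID, p.PID
FROM   V$PROCESS p, V$SESSION s
WHERE  p.ADDR = s.PADDR
AND    s.SID = 4;

ORADEBUG SETOSPID 1234    -- attach by operating system process ID (the SPID column)
-- ORADEBUG SETORAPID 42  -- alternatively, attach by Oracle process number (the PID column)
ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
ORADEBUG TRACEFILE_NAME   -- show the name of the trace file being written
ORADEBUG EVENT 10046 TRACE NAME CONTEXT OFF
```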
  • Recipe 10-22 shows a method for enabling a 10046 extended SQL trace with wait events and bind variables using a DBMS_MONITOR.SESSION_TRACE_ENABLE call in an AFTER LOGON ON DATABASE trigger.  The book states that the user to be traced, user SH in this case, must have the ALTER SESSION privilege before the trigger is created that will enable a 10046 trace for just that user.  Testing will quickly show that the GRANT is unnecessary because of the default setting of definer’s rights associated with the trigger that must be created by a user with SYSDBA permissions – the user to be traced only needs to be granted CONNECT for the trigger to execute and enable a trace for the session. (pages 357-358)
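A sketch of the definer’s rights behavior described above (the trigger name and body are my reconstruction, not the book’s exact code):

```sql
-- Created by a SYSDBA (or similarly privileged) user; the trigger executes with
-- definer's rights, so user SH does not need the ALTER SESSION privilege.
CREATE OR REPLACE TRIGGER trace_sh_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'SH' THEN
    DBMS_MONITOR.SESSION_TRACE_ENABLE(WAITS => TRUE, BINDS => TRUE);
  END IF;
END;
/
```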
  • Recipe 10-23 shows how to enable and read 10053 traces.  This recipe just barely scratches the surface of 10053 trace file analysis.  The book fails to mention that 10053 traces will only be generated when a hard parse is required, and fails to describe most of the keywords found in a typical 10053 trace file – such trace files may be quite confusing when several tables with multiple indexes are analyzed during optimization; 10053 trace file contents are also subject to change from one Oracle Database release version to the next, so the following quote found on page 360 is at best misleading, “Unlike a raw 10046 event trace file, a 10053 event trace file is quite easy (and interesting) to read.”  (pages 358-361)
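For context, enabling a 10053 trace is straightforward; the hard parse requirement the book omits can be sketched as follows (the SALES table is a placeholder):

```sql
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

-- A 10053 trace is written only if this statement requires a hard parse;
-- re-executing an already parsed statement produces no trace output, so
-- change the comment text (or the statement) to force a new hard parse.
SELECT /* force_hard_parse_1 */ COUNT(*) FROM sales;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
```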
  • Recipe 10-24 shows how to enable a specific trace type when an error, such as deadlock, occurs in a session.  While the contents of this recipe are potentially helpful, the depth of discussion is very limited compared to some blog articles found on the Internet, and the content really is not focused on performance, but rather on troubleshooting. (pages 361-362)
  • Recipe 10-25 shows how to enable tracing for Oracle Database background processes – this recipe is very similar to recipe 10-16 except that this recipe uses ALTER SYSTEM rather than ALTER SESSION and specifies a value for PNAME rather than specifying the operating system process number.  The recipe probably should have provided a warning about potential problems that may result from enabling traces for background processes. (pages 362-363)
  • Recipe 10-26 opens with the following problem statement, “You want to trace the Oracle listener for diagnostic purposes.”  While this recipe has nothing specific to do with performance tuning, it would have been helpful to answer the question of “Why?” or “What will I see in the listener trace file?”  Several of the recipes in this book fall into this pattern of not describing the “Why?” or “When?” or “How often?” or “Should I check something else first?” (pages 363-365)
  • Recipe 10-27 shows how to set the LOG_ARCHIVE_TRACE parameter – this is another recipe that is not directly related to performance tuning.  The book states on page 366, “You can specify any of 15 levels of archive log tracing,” however the book only shows seven levels, none of which match level 15 that was specified in the recipe.  The authors seem to have used the Oracle Database Reference from the official Oracle Database documentation library for the source of the seven event levels listed, but it appears that a mistake was made when level 128 was written – that level means “Track FAL (fetch archived log) server related activities”, while level 512 is “Track LGWR redo shipping network activity” which the book incorrectly attributes to level 128. (pages 365-366)
  • (Skipping chapter 11 for now)
  • Page 412 incorrectly states that the Enterprise Edition is required to use stored outlines.
  • (Skipping chapter 12 for now)
  • Recipe 13-2 on page 450 states “Note that in addition to collecting statistics for all schema objects, the auto stats job also gathers dictionary statistics (for the SYS and SYSTEM schemas).”  This statement is incorrect per Metalink (MOS) – the fixed object statistics are not collected automatically.  However, either of the following two commands will accomplish that task:
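Gathering the fixed object statistics manually would typically look like one of the following (a reconstruction – these are not necessarily the exact commands intended above):

```sql
-- SQL*Plus shorthand:
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS

-- Equivalent anonymous PL/SQL block:
BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
```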
  • Recipe 13-3 states “By default, the database automatically collects statistics (cascade=true) for all of a table’s indexes.”  The Oracle Database documentation seems to indicate that the default value is DBMS_STATS.AUTO_CASCADE, which does not necessarily mean that statistics will be collected for all indexes. Page 456 states, “Oracle recommends that you set the AUTOSTATS_TARGET preference to the value ORACLE, to ensure that the database collects fresh dictionary statistics”.  Metalink (MOS) Doc ID 725845.1 states that setting the value to ORACLE prevents the automatic statistics collection process from collecting statistics from any schema that does not belong to an Oracle component (SYS, SYSMAN, WMSYS, EXFSYS, etc.), so that is probably not the recommended value.  Additionally, setting that parameter value does not appear to cause the dictionary statistics to be collected. (pages 451-456)
  • Recipe 13-6, when discussing the various levels of dynamic sampling, seems to be closely paralleling (or simply rephrasing) the Oracle Database documentation.  The recipe states about level 3 dynamic sampling, “Uses dynamic sampling if the query meets the Level 2 criteria and it has one or more expressions in a WHERE clause predicate.” – the documentation states that either of these conditions must be true, not both.  The same problem is present for the description of Level 4 dynamic sampling – the book states “AND” while the documentation states “OR”.  Interestingly, the book and the documentation both state that level 9 dynamic sampling samples 4086 blocks, while 4096 (a power of 2) seems to be a more likely number (and that 4096 number agrees with other articles). (pages 459-460)
  • Recipe 13-9 states the following about the mreadtim system statistic, “mreadtim: The Multiblock Read Time statistic shows the average time (in seconds) for a sequential multiblock read.”  This is the only system statistic for which the unit of measure is provided – the unit of measure is actually milliseconds.  Note that the output of the workload system statistics on page 466 shows a value of 46,605.947 for the MREADTIM statistic and 51,471.538 for the SREADTIM statistic (if those statistics’ unit of measure really is seconds, that is a very slow server – note that these odd statistic values show a bug that is present in certain Oracle Database release versions).  The recipe should have explained why the MREADTIM value in this case is less than the SREADTIM value.  The recipe also should have mentioned whether the system statistics should be collected during a relatively quiet time of system activity or a relatively busy time. (pages 463-466)
  • Recipe 13-10 seems to closely mirror section 13.5.1 Pending Statistics of the Oracle Database Performance Tuning Guide.  Page 468 states, “By default the database immediately starts using all statistics it gathers.” – that statement is not true for Oracle Database 10.1 and above, where the _OPTIMIZER_INVALIDATION_PERIOD parameter is set to 18000 (5 hours), permitting already hard parsed SQL statements to not become immediately invalidated for a period up to 5 hours (thus the new statistics may not have any impact for 5 hours). (pages 466-468)
  • Recipe 13-11 provides conflicting advice about using the OPTIMIZER_INDEX_COST_ADJ parameter to force the optimizer to use an index.  While the recipe states that the parameter should not be adjusted system-wide, and also states, “Ideally, you should collect workload system statistics and leave the optimizer_index_cost_adj parameter alone,” in between those comments the recipe states, “Use the optimizer_index_cost_adj parameter with confidence, especially in an OLTP environment, where you can experiment with low values such as 5 or 10 for the parameter to force the optimizer to use an index.” and also suggests that the parameter could be adjusted for a single SQL statement without providing an example of how such a change could be made.  Potential examples of changing that parameter for a single SQL statement might include adding the hint /*+ OPT_PARAM(‘OPTIMIZER_INDEX_COST_ADJ’,5) */, or temporarily adjusting the parameter at the session level so that a stored outline might be created.  It is quite surprising that the book did not describe some of the problems that might be caused by adjusting the OPTIMIZER_INDEX_COST_ADJ parameter.  It is also surprising that the book did not suggest using an index hint, or even suggest determining whether or not an index access path is the most appropriate access path for the data.  The book states, “By default, the optimizer assumes that the cost of a multiblock read I/O associated with a full table scan and the single block read cost associated with an indexed read are identical.” – that has not been the case since the introduction of system statistics in Oracle Database 9.0.1.  
Also stated in the recipe is the following, “By default, if you set the ALL_ROWS optimizer goal, there’s a built-in preference for full table scans by the optimizer.” – that also is not necessarily true, in the end the decision is completely based on the calculated cost of the various access paths – if the system statistics and object statistics provide an invalid picture of the performance characteristics of the server and the data, then an inappropriate access path may be selected by the query optimizer.  The recipe states that the SREADTIM, MREADTIM, and MBRC system statistics have a bearing on whether or not a full table scan is performed, but the depth of insight does not venture any deeper than that comment. (pages 468-470)
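The two statement-level approaches suggested above could be sketched as follows (the table, column, and bind variable names are placeholders):

```sql
-- 1. Hint the parameter change for a single SQL statement:
SELECT /*+ OPT_PARAM('OPTIMIZER_INDEX_COST_ADJ',5) */ *
FROM   sales
WHERE  cust_id = :b1;

-- 2. Temporarily adjust the parameter at the session level, for example while
--    creating a stored outline for the statement, then restore the default:
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 5;
-- ... create the stored outline here ...
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 100;
```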
  • Recipe 13-12 states that setting the OPTIMIZER_FEATURES_ENABLE parameter to a previous Oracle Database release version will cause the optimizer to behave exactly as it did in the older release version.  This is true to an extent, however, some bug fixes such as the one for the cardinality estimates when ROWNUM is used in the WHERE clause, are not automatically disabled, and when you combine that with changes to statistics collection (such as the system statistics MREADTIM and SREADTIM bug in certain release versions), the optimizer could very well not behave the same as it did in the earlier release. (pages 470-471)
  • Recipe 13-13 uses exactly the same command (down to the schema, table, and column names) as a blog article written by the Oracle Optimizer Group.  (pages 472-473)
  • Recipe 13-14 repeats portions of the CURSOR_SHARING information found in recipe 5-20.  Page 473 states, “There’s no need for hard-parsing the new statement because a parsed version already exists in the shared pool.” – that statement is mostly true prior to the introduction of adaptive cursor sharing in Oracle Database 11.1.  The tip on page 474 states, “Oracle recommends using the FORCE setting for the CURSOR_SHARING parameter, in an OLTP environment.” – it would be helpful if the book indicated where Oracle Corporation made this recommendation.  On page 474, the book states, “There are really no issues with setting the cursor_sharing parameter to a non-default value, except minor drawbacks such as the non-support for star transformation, for example.” – see the review comments for recipe 5-20.  (pages 473-476)
  • Recipe 13-15 states, “Adaptive cursor sharing is enabled by default, and you can’t disable it.”  Technically, this is not a true statement.  Although not recommended, setting the STATISTICS_LEVEL parameter to BASIC will disable adaptive cursor sharing.  Per Metalink (MOS) Doc ID 11657468.8, adaptive cursor sharing may be disabled by changing _OPTIMIZER_ADAPTIVE_CURSOR_SHARING to a value of FALSE and _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL to a value of NONE, but it can also be disabled without the modification of the hidden initialization parameters by changing the OPTIMIZER_FEATURES_ENABLED parameter as was discussed in recipe 13-12.  Page 481 states, “The database creates a new child cursor (child_number=2) for this query and marks the original cursor (child_cursor=0) as not being bind-aware. Eventually the database will remove this cursor from the shared pool.”  Oracle actually set the child cursor 0’s IS_SHARABLE column value to N without changing the previously assigned BIND_AWARE value.  The book could have been more clear when it assigned column aliases in the query by not aliasing the IS_SHARABLE column as BIND_SHARE.  On page 481 the book states, “Since adaptive cursor sharing works only where literal values are replaced with binds, Oracle encourages you to use the FORCE setting for the cursor_sharing parameter… You must set the cursor_sharing parameter to FORCE for adaptive cursor sharing to work, thus letting the optimizer select the optimal execution plan for different values of the bind variable.”  Those statements are incorrect, adaptive cursor sharing will work just as well with the CURSOR_SHARING parameter set to the default value of EXACT as long as bind variables are used in the SQL statements. (pages 476-482)
  • Recipe 13-16 includes SQL statements that are exact copies of SQL statements found in section from the Oracle Database documentation library, with a bit of rewording before and after the SQL statements. (pages 482-483)
  • Recipe 13-17 seems to closely parallel section from the Oracle Database documentation library, except that the column order of the COUNTRY_ID and CUST_STATE_PROVINCE columns was reversed, and the book apparently considers the COUNTRY_ID column as a VARCHAR2 while the documentation considers that column as a number. (pages 483-484)
  • Recipe 13-18 seems to be based on a blog article that was written by the Oracle optimizer group. The book seems to have omitted some SQL*Plus formatting commands that are necessary to correctly view the report that is generated by the code in the book (refer to the blog article for the correct SQL*Plus formatting commands). (pages 484-486)
  • Recipe 13-19 seems to be based on a blog article that was written by the Oracle optimizer group (information from a second blog article also seems to have been used). (pages 486-487)
  • Recipe 13-20 seems to be based on a blog article that was written by the Oracle optimizer group (pages 488-490)
  • The introduction for chapter 14 states that there are more than 60 Oracle Database hints (apparently quoting the Oracle Database Performance Tuning Guide documentation).  A query of V$SQL_HINT in Oracle Database shows 269 hints. (page 491)
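The hint count is easy to check directly (the result varies by release, so recording the version alongside the count is worthwhile):

```sql
SELECT COUNT(*) FROM V$SQL_HINT;  -- hundreds of hints in recent releases
SELECT BANNER FROM V$VERSION;     -- record the release the count came from
```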
  • Recipe 14-1 states, “For example, there is an INDEX hint to enable the use of an index. There is also a NO_INDEX hint, which disables the use of an index.”  These two statements should have been rewritten.  An INDEX hint forces the optimizer to use a specific index, assuming that is a valid access path for the data retrieval.  A NO_INDEX hint prevents the optimizer from using a specific index or set of indexes for data retrieval. (pages 491-493)
  • Recipe 14-2 states, “Note Hints influence the optimizer, but the optimizer may still choose to ignore any hints specified in the query.”  Hints are directives and cannot be ignored by the optimizer.  The only reasons that a hint cannot be used by the optimizer are: the hint is invalid due to the wrong alias used in the hint; the hint is malformed; the hint is incompatible with another hint; the query was transformed by the optimizer into a form that is incompatible with the hint before the optimizer applied the hint; the hint, if followed, would cause the wrong results to be returned; bugs in Oracle Database cause the hint to be lost; the hint specifies the use of a feature that is explicitly disabled by an initialization parameter; or a comment is added before the hint that includes an Oracle keyword. (pages 493-496)
  • Recipe 14-3 shows how to control the join order using the ORDERED and LEADING hints.  The recipe does not discuss some of the potential problem areas that are present with the ORDERED hint that are not present with the LEADING hint.  Additionally, the execution plan that accompanies the LEADING hint example appears to be the wrong execution plan.  While it appears to work without issue, the correct syntax for the LEADING hint does not use commas to separate the row sources that are listed in the hint. (pages 497-498)
  • Recipe 14-4 demonstrates how to use the USE_NL, USE_HASH, and USE_MERGE hints.  While it apparently does not affect the result of the hint, the official syntax does not use commas to separate the row source names in the hints, as shown in the book.  On page 499 the book states, “The smaller of the two tables is used by the optimizer to build a hash table on the join key between the two tables.”  “Smaller” is not defined in this case – does that mean the table itself occupies less space on disk, the table contains fewer rows, fewer rows are expected to be returned from the table, the aggregated estimated size of the columns multiplied by the estimated number of rows is smaller, or something else? (pages 498-501)
  • Recipe 14-5 shows how to use the OPTIMIZER_FEATURES_ENABLE hint by specifying a value of 10.2 in that hint.  The value 10.2 is invalid for that hint, and in testing on Oracle Database is sufficient to prevent the optimizer from seeing the hints that are listed after the OPTIMIZER_FEATURES_ENABLE hint. (pages 501-502)
  • Recipe 14-6 on page 504 shows setting the OPTIMIZER_MODE using an ALTER SYSTEM command to the deprecated value of FIRST_ROWS as part of a test.  A better approach would have been to use an ALTER SESSION command, setting the OPTIMIZER_MODE parameter to FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, or FIRST_ROWS_1000.  Likewise, the recipe shows the use of the FIRST_ROWS hint without a number enclosed in parentheses (the hint syntax per the Oracle Database documentation since the release of Oracle Database 9.2). (page 502-504)
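The suggested alternatives could be sketched as follows (the FIRST_ROWS_10 choice and the query details are illustrative):

```sql
-- Session-scoped test using a non-deprecated optimizer mode:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

-- Or per statement, using the parameterized hint form documented since 9.2:
SELECT /*+ FIRST_ROWS(10) */ *
FROM   sales
WHERE  cust_id = :b1;
```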
  • Recipe 14-7 probably should mention that the APPEND hint has no effect when the table into which the rows are to be inserted has either triggers or foreign key references.  Ideally, the recipe should have provided a warning stating that direct path inserts are unrecoverable. (pages 505-506)
  • Recipe 14-8 states, “Tip: Hints in queries that reference a complex view are ignored.”  This is not completely correct – the book is only partially rewording what is written in the Oracle Database documentation, and selectively removing other information.  The Oracle Database documentation states: “Because nonmergeable views are optimized separately from the top-level query, access path and join hints inside the view are preserved. For the same reason, access path hints on the view in the top-level query are ignored. However, join hints on the view in the top-level query are preserved because, in this case, a nonmergeable view is similar to a table.” (pages 506-509)
  • Recipe 14-9 describes using the Oracle Database 11.1 (and higher) result cache, as did recipes 3-9 and 3-11.  The performance improvement shown on page 511 and 512, where the execution time decreased from 21.80 seconds to 0.08 seconds by changing the NO_RESULT_CACHE hint to a RESULT_CACHE hint, could have been caused by the effects of buffering previously read blocks from disk, and not by the use of the RESULT_CACHE hint.  The book simply stated that the query was executed once with each hint – to be fair, the query should have been executed twice with each hint so that the NO_RESULT_CACHE hinted query could have benefitted from the buffering of previously read blocks from disk, and the RESULT_CACHE hinted query could have benefitted from the caching of the query result in the result cache. (pages 509-513)
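A fairer test harness might look like the following sketch (the SALES table stands in for the book’s query):

```sql
SET TIMING ON

-- Execute each variant twice and compare only the second executions, so the
-- NO_RESULT_CACHE query benefits from the buffer cache and the RESULT_CACHE
-- query benefits from the result cache.
SELECT /*+ NO_RESULT_CACHE */ COUNT(*) FROM sales;  -- warms the buffer cache
SELECT /*+ NO_RESULT_CACHE */ COUNT(*) FROM sales;  -- measure this execution

SELECT /*+ RESULT_CACHE */ COUNT(*) FROM sales;     -- populates the result cache
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM sales;     -- measure this execution
```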
  • Recipe 14-11 shows how to use the GATHER_PLAN_STATISTICS hint.  The A-Rows column in the execution plan seems strange in that only ID 0 shows any rows returned by a line in the execution plan.  It is also strange that the Starts column shows a value of 2 for ID 0 while the execution plan shows 0 starts for all of the remaining lines in the execution plan.  It would have been helpful for the book to explain if the execution statistics were caused by an Oracle Database bug, or some sort of an optimization.  My first thought is that the query results cache was used as that would explain the displayed 0 Starts for all except ID 0 as well as the A-Rows column values, however ID 1 should then show a RESULT CACHE operation.  Page 519 contains a confusing statement, “From this, you can determine if the optimizer is optimally executing the SQL, and you can determine if any optimization is needed.” (pages 517-519)
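For reference, the usual pattern for the GATHER_PLAN_STATISTICS hint, which would have helped diagnose the odd A-Rows and Starts values (the query is a placeholder):

```sql
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM sales;

-- Display the actual row source execution statistics for the last execution:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```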
  • Recipe 15-1 describes enabling parallelism for queries.  While the recipe describes using PARALLEL and PARALLEL_INDEX hints, the book implies that simply specifying the hint will cause the query to run in parallel, if at all possible to execute the query in parallel.  The determination of whether or not parallel execution is used is based on several factors, including the calculated query cost reduction and the availability of parallel servers – the use of the hints merely overrides the default degree of parallelism for the objects, but does not force parallelism.  Page 530 of the book states, “Note As of Oracle 11g Release 2, the NOPARALLEL and NOPARALLEL_INDEX hints have been deprecated.  Instead, use NO_PARALLEL and NO_PARALLEL_INDEX.”  Those hints were actually deprecated as of Oracle Database 10.1.  It would have been helpful for the book to mention that parallel operations require the Enterprise Edition of Oracle.  The book also seems to consider that the CPU is the only potential bottleneck for parallel execution, without a mention of the I/O subsystem. (pages 526-530)
  • Recipe 15-4 lists restrictions for parallel DML – it appears that the book paraphrased a section of the Oracle Database VLDB and Partitioning Guide, resulting in vague descriptions of some of the limitations.  A better paraphrase example is found in the “Expert Oracle Database Architecture” book on page 633. (pages 533-536)
  • Recipe 15-7 describes rebuilding indexes in parallel.  The recipe provides four key reasons for rebuilding an index, including “index that has become fragmented over time.”  The book does not describe what it means for an index to be fragmented and does not describe some of the potential unanticipated side-effects of parallel index rebuilds (such as causing parallel execution when that index is accessed in a query). (pages 539-541)
  • Recipe 15-10 states, “One prerequisite of using automatic DOP is to run the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure.”  An example of using CALIBRATE_IO is not provided. (pages 543-545)
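An example call might look like this sketch (the num_physical_disks and max_latency values are assumptions that must be adjusted for the server being calibrated):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- assumption: disks behind the datafiles
    max_latency        => 10,   -- assumption: acceptable latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                       ' max_mbps=' || l_max_mbps ||
                       ' actual_latency=' || l_actual_latency);
END;
/
```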
  • Recipe 15-12 includes table 15-6 that lists several views that are related to parallel execution.  The description of the V$PQ_SESSTAT view seems to disagree with the documentation for that view. (pages 548-550)
  • Recipe 15-13 states, “If you query the V$SYSTEM_EVENT view, you can get an idea of the parallelism-related waits that have occurred in your database instance.”  The SQL statement that follows that sentence is a query of V$SESSION_EVENT, not V$SYSTEM_EVENT.  The recipe should have mentioned the need to calculate the delta values from that view before deciding whether or not there is a bottleneck in parallel processes.  Three of the four wait events that the book classifies as indicating a potential parallel tuning issue (“PX Deq: Execute Reply”, “PX Deq: Parse Reply”, and “PX Deq: Table Q Normal”) are classified by the official Oracle Database documentation as idle wait events.  It would have been helpful to more completely describe why these idle wait events should not be ignored, keeping in mind that recipe 5-4 in this book states, “You can ignore the idle waits” – possibly the book should have cited a reference. (pages 550-552)

Interesting And/Or Well-Above-Average Bits of Information

  • Specifying the TRACEFILE_IDENTIFIER for a session reduces the time needed to locate that session’s trace file. (page 332)
  • Recipe 15-1 provides warnings to not abuse parallel execution: “Parallel SQL must be used with caution, as it is common to overuse, and can cause an overutilization of system resources, which ultimately results in slower rather than faster performance.”

Book Review: Oracle Database 11g Performance Tuning Recipes

10 09 2011

September 10, 2011

Hammering a Square Peg into a Round Hole: Fine Edges are Lost, Gaps in Detail 

(Forward to the Next Post in the Series)

(Update September 14, 2011:  It is a bit unfair to this book that my review only covered the first six chapters and portions of chapter 7 – roughly the first half of the book.  The review as originally posted was 22 typewritten pages in length, the second longest of all of my reviews.  I intend to review the second half of this book, so watch for part two of this book’s review.)

I bought the alpha copy of this book from the Apress site in May 2011 when I heard about the book’s pending release, and was able to obtain the final version of the book at the end of August 2011.  I have read a couple of the other “Recipe” books from Apress, so I was excited to see how the recipe format could be applied to the task of Oracle Database performance tuning, specifically in versions 11.1 and above.

The authors of this book all have at least 10-15 years of experience with Oracle Database (and likely several years of IT related experience prior to that), are accomplished writers of several books (at least three of the “Recipes” books were co-authored by this book’s authors), and at least two of the authors teach at the university level.  The technical reviewer of the book is an Oracle Ace with an OCP certificate and also has an Oracle Certified RAC Expert certificate.  Apress, based on my past experience, publishes some of the best Oracle Database material on the market.  All of the critical recipe elements for a very successful, extremely useful book were initially in place for this book.

So, how well did the book’s final contents meet the level of expectations provided by the book’s front cover and the publisher’s description of the book?  Simply put, some of the chapters appeared as though someone tried to hammer a square cut nail into a finely machined and tapped machine-screw hole.  The end result shows that the topic of performance tuning Oracle Databases simply does not fit well into a recipe format.  The number of technical/logical errors per page in some of the chapters makes for a difficult challenge to find any useful/reliable information in those chapters without spending significant time verifying that the material presented by the book is accurate.  Some of the other chapters did provide useful information that was mostly correct, but that information sometimes lacked sufficient depth of the information presented, likely constrained by the recipe format of the book.  The topic selection for the book is reasonably good, although in execution the book sometimes drifted far away from performance tuning, into a lesson of what to do if a user cannot connect to the database, how to write basic SQL statements, how to perform basic database administration (such as handling snapshot too old errors), and how to issue operating system commands.  Some sections of the book also seemed to be more specific to Oracle Database 9.2 and below than 11.1 and above.  In short, there is good material in the book, but it is difficult to determine what is accurate and what is completely wrong.

This review is very long (20+ type-written pages for the first seven chapters in the book), as that level of detail is required to justify the rating attached to this book review.  As such, this review will exceed the length limit permitted by Amazon, so see my blog for the full review.  I have divided this review into three sections: interesting dictionary views, parameters, functions, hints, and Oracle error messages mentioned in the book; comments, corrections, and problems for the contents of the book; and interesting and/or well-above-average bits of information found in the book.

Data Dictionary Views:

  • DBA_TABLESPACES (page 6)
  • USER_SEGMENTS (page 12)
  • USER_EXTENTS (page 13)
  • V$TRANSACTION (page 18)
  • DBA_INDEXES (page 29)
  • USER_CONSTRAINTS (page 54)
  • USER_INDEXES (page 55)
  • V$OBJECT_USAGE (page 75)
  • V$STATISTICS_LEVEL (page 114)
  • DBA_HIST_WR_CONTROL (page 116)
  • DBA_HIST_BASELINE (page 125)
  • DBA_EXTENTS (page 160)
  • V$LOCK, V$LOCKED_OBJECT (page 164)
  • V$SYS_TIME_MODEL (page 175)
  • V$UNDOSTAT (page 212)


Parameters:

  • AUDIT_TRAIL (page 40)
  • DDL_LOCK_TIMEOUT (page 80)
  • RESULT_CACHE_MODE (page 100)
  • LOG_BUFFER (page 111)
  • STATISTICS_LEVEL (page 114)
  • COMMIT_WRITE (page 159)
  • LOG_BUFFER (page 160)
  • CURSOR_SPACE_FOR_TIME (page 179)
  • OPEN_CURSORS (page 222)




Hints:

  • APPEND (page 15)
  • INDEX (page 70)

Error Numbers:

  • Chapter 1: ORA-01578: ORACLE data block corrupted, ORA-26040: Data block was loaded using the NOLOGGING option (page 16); ORA-02266: unique/primary keys in table referenced by enabled foreign keys (page 18)
  • Chapter 2: ORA-00001: unique constraint violated (page 58), ORA-02270: no matching unique or primary key for this column-list (page 59), ORA-10635: Invalid segment or tablespace type (page 81)
  • Chapter 3: ORA-00838: Specified value of MEMORY_TARGET is too small (page 86)
  • Chapter 4: ORA-20200: The instance was shutdown between snapshots (page 120), ORA-13541: system moving window baseline size (page 124), ORA-13539: invalid input for modify baseline window size (page 124)
  • Chapter 6: ORA-09817: Write to audit file failed (page 187)
  • Chapter 7: ORA-01555: snapshot too old (page 211)


Comments, Corrections, and Problems:

  • In recipe 1-2 the following note appears, “As of Oracle Database 11g R2, the EXTENT MANAGEMENT DICTIONARY clause has been deprecated.”  I could not locate confirmation in the Oracle Database documentation that that clause was deprecated in 11g R2, but the documentation does state that dictionary managed tablespaces were deprecated in Oracle Database 9i R2, even though Oracle Database 11g R2 apparently still supports (to an extent) dictionary managed tablespaces (and the deprecated DICTIONARY keyword) as long as the system tablespace is not locally managed. This recipe also states that the authors recommend using ASSM tablespaces, without describing the potential negative performance impact of such tablespaces. (pages 5-7)
  • In recipe 1-4 the authors state “Specify a separate tablespace for the table and indexes” with the justification that doing so “simplifies administration and maintenance” – it would have been helpful for the authors to provide supporting evidence for this claim (pages 9-12).  In recipe 2-2 on page 51, the authors suggest that backup and recovery is easier with separate tablespaces for data and indexes.
  • In recipe 1-5 (page 13), an apparently unintentional word substitution: “An extent will not be allocated until the initial record is inserted into a given extent” – the second “extent” in the sentence should probably be replaced with the word “segment”.  The recipe should mention that deferred segment creation is an Enterprise Edition-only feature, and explain the potential problems that restriction might cause. (pages 12-14)
  • In recipe 1-8 the authors suggest checking the DBA_ADVISOR_EXECUTIONS,  DBA_ADVISOR_FINDINGS, and DBA_ADVISOR_OBJECTS views without providing a warning about the licensing requirements associated with the DBA_ADVISOR views. (pages 19-23)
  • Recipe 1-11 demonstrates using several approaches to reduce the number of rows that span more than one data block. It would have been helpful if the book differentiated between chained rows and migrated rows, and discussed why the ANALYZE TABLE  LIST CHAINED ROWS command might still report a count greater than 0 in the CHAINED_ROWS table (table has more than 255 columns and must be broken into multiple row pieces, table row is larger than a table block, etc.). Since it is the absolute file number that appears most commonly in the various data dictionary views, it probably would have been a better approach to use the DBMS_ROWID.ROWID_TO_ABSOLUTE_FNO function rather than the DBMS_ROWID.ROWID_RELATIVE_FNO function when decoding the ROWID values. (pages 28-32)
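A sketch of the suggested decoding, assuming the CHAINED_ROWS table was populated by an ANALYZE … LIST CHAINED ROWS command, and using a hypothetical TESTUSER.T1 table:

```sql
-- Hypothetical owner/table names; decode the head ROWID of each chained or
-- migrated row using the absolute file number, which matches the FILE_ID
-- column found in most data dictionary views.
SELECT
  DBMS_ROWID.ROWID_TO_ABSOLUTE_FNO(HEAD_ROWID, 'TESTUSER', 'T1') FILE#,
  DBMS_ROWID.ROWID_BLOCK_NUMBER(HEAD_ROWID) BLOCK#,
  DBMS_ROWID.ROWID_ROW_NUMBER(HEAD_ROWID) ROW#
FROM
  CHAINED_ROWS
WHERE
  TABLE_NAME = 'T1';
```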
  • Recipe 2-5 includes a SQL statement that attempts to identify unindexed foreign key columns.  This same SQL statement appears on page 243 of the book “Oracle SQL Recipes: A Problem-Solution Approach”.  While the SQL statement only works for the current user’s schema, and A, B, and C generic aliases are used for table aliases, there are more significant problems with the SQL statement and the recipe as a whole.  The recipe gives the impression that the primary reason for indexes on the foreign key columns is to improve performance – specifically the performance of SELECT statements where the foreign key columns are included in the WHERE clause.  When you consider that the tables with the foreign key columns are likely detail (child) tables, there is a bit of a threat that each of the unique values found in the foreign key columns will be repeated many times through the various rows (scattered among the various rows, thus the CLUSTERING_FACTOR for the index might be close to the number of rows), making it a bit less likely that an index on the foreign key columns will help performance significantly (unless a [bitmap] index join is possible).  The more common performance reason for indexing foreign key columns is to reduce the threat of TM enqueue contention on the child table when rows are updated or deleted in the parent table – this reason was not mentioned in this recipe.  The recipe also gives the impression that all foreign keys should be indexed, while it is primarily those cases where the unique/primary column(s) in the parent table is subject to change that will benefit the most from indexing the foreign key columns.  The problem with the SQL statement found in the recipe appears when concatenated (multi-column) indexes reference the foreign key columns.  The SQL statement found in the book does not consider the foreign key column’s position in the concatenated index.
While the optimizer could choose to use an index skip scan (consider what might happen if the foreign key column is the sixth column in a concatenated index) for a SELECT statement that does not reference the leading columns of the index in the WHERE clause, the same is not true for TM enqueues.  Thus, the SQL statement misidentifies cases where the foreign key columns should be indexed, but are not.  A DISTINCT clause may be necessary to eliminate duplicate rows returned by the query. (pages 59-60)
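A minimal sketch of a revised check, limited to the current schema and still only approximate (it does not confirm that a single index covers all of the columns of a multi-column foreign key), which requires each foreign key column to appear within the leading positions of some index on the child table:

```sql
-- Approximate sketch: flag foreign key constraints where a constrained
-- column is not found within the leading N positions of any index on the
-- child table (N = number of columns in the foreign key constraint).
SELECT DISTINCT
  UC.TABLE_NAME,
  UC.CONSTRAINT_NAME
FROM
  USER_CONSTRAINTS UC,
  USER_CONS_COLUMNS UCC
WHERE
  UC.CONSTRAINT_TYPE = 'R'
  AND UC.CONSTRAINT_NAME = UCC.CONSTRAINT_NAME
  AND NOT EXISTS (
    SELECT
      1
    FROM
      USER_IND_COLUMNS UIC
    WHERE
      UIC.TABLE_NAME = UCC.TABLE_NAME
      AND UIC.COLUMN_NAME = UCC.COLUMN_NAME
      AND UIC.COLUMN_POSITION <= (SELECT
                                    COUNT(*)
                                  FROM
                                    USER_CONS_COLUMNS UCC2
                                  WHERE
                                    UCC2.CONSTRAINT_NAME = UC.CONSTRAINT_NAME));
```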
  • In Recipe 2-6 on page 62, the final paragraph states the following, “A concatenated index that is used for skip-scanning is more efficient than a full table scan. However, if you’re consistently using only a lagging edge column of a concatenated index, then consider creating a single-column index on the lagging column.”  This statement along with other statements in this chapter, if generalized as implied, suggests that index access paths are always more efficient than full table scans – this is simply not true often enough to make this statement in such black and white wording (pages 60-62).  The ordering of the rows in the table’s blocks compared to the sort order of the index (externalized as the CLUSTERING_FACTOR index statistic), the percentage of the table to be retrieved through the index access path, and the cardinality of the leading index column (in the case of the index skip scan) are all factors that must be considered.  It is also important to point out that while adding an extra column to a concatenated index may help the performance of one SQL statement, doing so may hurt the performance of other SQL statements that had been using that index due to the likely increased CLUSTERING_FACTOR and the increased index size (number of leaf blocks required). (pages 60-62)
  • Recipe 2-11 claims to show a method of adding an index to a database such that adding that index will not affect third party vendor applications.  The recipe provides a SQL statement that sets the OPTIMIZER_USE_INVISIBLE_INDEXES parameter to TRUE at the SYSTEM level (ALTER SYSTEM), and then states, “use a hint to tell the optimizer that the index exists”.  The hint is entirely unnecessary, changing the parameter at the SYSTEM scope is sufficient to allow all sessions to use the invisible index, thus potentially affecting the third party vendor application.  Creating a UNIQUE invisible index will still affect the third party application on inserts, updates, and deletes regardless of the setting of the OPTIMIZER_USE_INVISIBLE_INDEXES parameter.  Regardless of the setting of the parameter, the invisible indexes may still be used by the optimizer for cardinality calculations (thus adding the index could change execution plans, even if the session cannot use the index), and to prevent TM enqueues on foreign key columns. The recipe also states that one of the uses of invisible indexes is as a test to determine whether or not it is safe to later drop the index – however, for the reason just mentioned, this is not a complete/safe test to determine whether or not an index may be safely dropped. (pages 70-71 reference reference2)
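A sketch of a lower-risk test procedure (hypothetical T1 table and T1_IND1 index), limiting the scope of the parameter change to the testing session rather than changing it for every session:

```sql
-- Create the index as invisible so existing sessions do not use it.
CREATE INDEX T1_IND1 ON T1(COL1) INVISIBLE;

-- Permit invisible index usage only in this session (not ALTER SYSTEM).
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Test the query of interest, then examine the execution plan.
SELECT COL1, COL2 FROM T1 WHERE COL1 = 42;

-- Return the session to the default behavior once testing is complete.
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = FALSE;
```

Keep in mind the caveats described above: a UNIQUE invisible index and the optimizer’s cardinality calculations may still affect other sessions.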
  • Recipe 2-12 does not explain why “bitmap indexes are ideal for low-cardinality columns (few distinct values)” – additional clarification is needed.  Ideally, bitmap indexes on low-cardinality columns would be bit ANDed together with other bitmap indexes, thus making two or more indexes on low-cardinality columns much more selective before touching the table. While bitmap indexes cannot be used on columns defined as unique, those types of indexes can be used even on columns that are not low-cardinality. (page 72)
  • Recipe 2-15 describes using index monitoring and V$OBJECT_USAGE to determine if an index was used, so that unused indexes may be dropped.  The same risks are present with this approach as with marking an index as invisible, as described in recipe 2-11. (pages 75-76)
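For reference, a sketch of the index monitoring approach described in the recipe (hypothetical index name), keeping in mind that a USED value of NO carries the same interpretation risks described above:

```sql
-- Begin recording whether the optimizer uses this index in execution plans.
ALTER INDEX T1_IND1 MONITORING USAGE;

-- ... allow a representative workload (ideally a full business cycle) to run ...

-- USED = NO does not prove the index is safe to drop (cardinality estimates
-- and foreign key TM enqueue protection are not captured by monitoring).
SELECT INDEX_NAME, TABLE_NAME, MONITORING, USED, START_MONITORING
FROM V$OBJECT_USAGE
WHERE INDEX_NAME = 'T1_IND1';

ALTER INDEX T1_IND1 NOMONITORING USAGE;
```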
  • The introduction to chapter 3 states that the results of SQL queries and PL/SQL functions may be stored in the shared pool when a results cache is used.  It might be a better idea to be much more specific, and state that Oracle Database is using the Result Cache Memory in the shared pool as indicated in the Oracle Database documentation, but maybe this is explained later. (page 83)
  • In recipe 3-1 the authors mention that they would set the MEMORY_MAX_TARGET to 2000M, and then show a SQL statement that sets that parameter to 2G (that is 2048M rather than 2000M).  This recipe seems to be mirroring a page from the Oracle documentation library, except that the MEMORY_TARGET parameter was set to a value after setting SGA_TARGET and PGA_AGGREGATE_TARGET to 0 in the book, while the MEMORY_TARGET was set first in the documentation (note that MEMORY_TARGET will be set to the value of MEMORY_MAX_TARGET if not explicitly set, so changing the order of the commands may or may not be a problem).  This recipe states that “automatic memory management is the recommended approach to managing Oracle’s memory allocation,” but does not describe any of the potential pitfalls or operating system specific problem areas/limitations (reference1 reference2 reference3 reference4 reference5 reference6).  Beyond telling the reader to simply add the current PGA_AGGREGATE_TARGET and SGA_TARGET values to determine a starting point for the MEMORY_MAX_TARGET parameter, the recipe gives little insight into the “best” value for this parameter. (pages 84-87)
  • In recipe 3-2 the authors state, “if you want a segment to be cached (or pinned) in the buffer pool, you must specify the KEEP buffer pool [when creating the segment].”  Something just doesn’t seem to be right (or complete) about this statement – for one, I wonder what the CACHE and NOCACHE clauses might affect? Secondly, this statement might be implying that blocks belonging to an object will only remain in the buffer cache if those objects are assigned to the KEEP buffer cache.  The recipe states that the approximate size of the KEEP buffer pool can be determined by “summing up the size of all candidate objects for this pool” – great, now what happens when Oracle Database creates up to six (and potentially many more) consistent read versions of the objects’ blocks (reference)?  The last two paragraphs of this recipe seem to indicate that it is the segments themselves (apparently all blocks) that are aged out of the buffer cache when space is needed to cache other blocks, rather than the segments’ blocks that are aged out of the cache – additional clarity is needed in these two paragraphs. The second to last paragraph also makes a claim that I am not sure how to interpret: “Oracle figures that if a single segment takes up more than 10% of (nonsequential) physical reads, it’s a large segment, for the purpose of deciding if it needs to use the KEEP or RECYCLE buffer pools.”  The SQL statement provided in recipe 3-2 is also found in the Oracle Database 9.2 Performance Tuning Manual (with a small, but notable correction), but that SQL statement potentially produces misleading information. (pages 87-88)
  • In recipe 3-3, the book states, “The database will continue to automatically allocate memory to the various components of the SGA…”  Technically, while the intention of this statement is headed in the right direction, confusing the technical terms is best avoided.  The wording could have been cleaned up, keeping in mind that the “database” is actually the files that are stored on disk – the database does not allocate memory. (page 89)
  • In recipe 3-4, the book describes the V$MEMORY_DYNAMIC_COMPONENTS view as showing the current allocations of memory when the MEMORY_TARGET parameter is in use.  The book then shows how to query the view, unfortunately the example in the book shows querying the V$MEMORY_TARGET_ADVICE view, rather than the V$MEMORY_DYNAMIC_COMPONENTS view.  The Oracle Database 11.2 documentation library indicates that the V$MEMORY_DYNAMIC_COMPONENTS view only shows information about dynamic SGA components, but a query of the view also shows an entry for “PGA Target”, indicating that the Oracle documentation is not completely correct in this case.  It probably would have been helpful if the query of V$MEMORY_RESIZE_OPS found in the book included the END_TIME column since the book indicates that this view may report up to 800 memory resizing operations from various time periods.  The book states that the operation type is one of two modes: grow or shrink, while the Oracle Documentation library indicates that the operation type (the OPER_TYPE column) will report one of six values: STATIC, INITIALIZING, DISABLED, GROW, SHRINK, SHRINK_CANCEL. (pages 90-91)
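A sketch of the queries as they might have appeared, including the END_TIME column so that the resize operations can be placed in time order:

```sql
-- Current sizes of the dynamically managed memory components.
SELECT COMPONENT, CURRENT_SIZE/1048576 CURRENT_MB
FROM V$MEMORY_DYNAMIC_COMPONENTS;

-- Recent resize operations; OPER_TYPE may show values other than just
-- GROW and SHRINK (STATIC, INITIALIZING, DISABLED, SHRINK_CANCEL).
SELECT COMPONENT, OPER_TYPE, INITIAL_SIZE/1048576 INITIAL_MB,
       FINAL_SIZE/1048576 FINAL_MB, END_TIME
FROM V$MEMORY_RESIZE_OPS
ORDER BY END_TIME;
```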
  • In recipe 3-5, the steps listed in the solution appear to be missing step 1.5, “Click the Memory Advisors link.” The book makes the following statement regarding the estimated improvement in DB time “The higher the value of improvement in DB time, the better off will be the performance.” – that statement seems to suggest that the performance will definitely improve by adjusting the parameter, rather than that the performance is estimated to improve.  The description of the Memory Size Advice graph states that the maximum memory for the instance is indicated by a dotted straight line, while the graph itself seems to indicate that the maximum is indicated by a solid red line.  This recipe recommends checking the ADDM reports, without first indicating the licensing requirements needed to access those reports.  The book states, “Ideally, you’d want the PGA cache ratio somewhere upwards of around 70%.”, but it does not state why 70% is ideal rather than 90% or 50% (the Oracle documentation states that “the optimal zone for cache hit percentage is between 75 and 100 percent.”) (pages 91-93)
  • In recipe 3-6, the book states “The memory you allocate to the PGA component is used by various SQL work areas running in the database” – the SQL work areas are memory allocations, not processes, so the SQL work areas cannot “run”.  The book states that a low value, such as 33.37% for the PGA Cache Hit Percentage is a definite indication of the need to increase the PGA_AGGREGATE_TARGET.  No explanation is provided as to why the first query of V$SQL_WORKAREA_HISTOGRAM excluded workarea executions that required less than 64KB (note that this SQL statement is identical to a SQL statement found in the Oracle Database 9.2 Performance Tuning Guide – the second query of V$SQL_WORKAREA_HISTOGRAM is also found in the same Oracle documentation, as is the query of V$SYSSTAT).  The book states, “One pass is slower than none at all, but a multi–pass operation is a sure sign of trouble in your database, especially if it involves large work areas.” – this is stated as an absolute, while in reality there may be times when the only option is a multi-pass workarea execution due to the data size.  The recipe mentions checking ASH, AWR, and ADDM reports without mentioning the licensing requirements. (pages 93-95)
  • Recipes 3-7 through 3-11 are about the result cache, which is an Enterprise Edition-only feature – the book should have mentioned that limitation.
  • Recipe 3-7 seems to closely mirror section 7.6.1 Managing the Server Result Cache of the Oracle 11.2 Performance Tuning Guide. (pages 95-97)
  • Recipe 3-8 seems to have paraphrased sections of the Oracle 11.2 Performance Tuning Guide and sections of the PL/SQL Reference. (pages 97-99)
  • Recipe 3-9 states: “When you run this query [with the NO_RESULT_CACHE hint], the server won’t cache the results of this query any longer, because you’ve specified the MANUAL setting for the RESULT_CACHE_MODE initialization parameter.”  This statement is technically incorrect (or just confusing) – the hint does not change the RESULT_CACHE_MODE for the SQL statement to MANUAL, it simply prevents the SQL statement results from being added to the results cache if the RESULT_CACHE_MODE is set to FORCE, or the RESULT_CACHE property (annotation) of the table(s) involved in the query is set to FORCE.  The recipe also states the following about the table-level RESULT_CACHE mode: “The mode value DEFAULT is, of course, the default value, and this merely removes any table annotations you may have set and doesn’t permit caching of results that involve this table” – this statement is misleading, the query results may still be cached depending on the value of the RESULT_CACHE_MODE parameter and whether or not the RESULT_CACHE hint is specified. (pages 99-103 reference reference2 reference3)
  • Recipe 3-10 states: “You can query the V$CLIENT_RESULT_CACHE_STATS view for details such as the number of cached result sets, number of cached result sets invalidated, and the number of cache misses.”  That view does not exist – the intended view name is likely CLIENT_RESULT_CACHE_STATS$. (pages 103-104)
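A sketch of a query of the correctly named view, which is populated on the server for clients that use the client-side result cache:

```sql
-- CLIENT_RESULT_CACHE_STATS$ is the documented name of this view.
SELECT CACHE_ID, NAME, VALUE
FROM CLIENT_RESULT_CACHE_STATS$
ORDER BY CACHE_ID;
```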
  • In recipe 3-11, much like the Oracle documentation, the book lists four views (V$RESULT_CACHE_STATISTICS, V$RESULT_CACHE_MEMORY, V$RESULT_CACHE_OBJECTS, and V$RESULT_CACHE_DEPENDENCY) that may be used to monitor the result cache.  However, the recipe does not describe the individual views or show examples of querying those views.  The comment about the RESULT_CACHE_REMOTE_EXPIRATION parameter in this recipe seems to be out of place, while a comment about RESULT_CACHE_RELIES_ON might have been a better approach to explain DML against which specific tables causes invalidation of PL/SQL function cache results.  The bullet points under the heading Restrictions on the “PL/SQL Function Cache” are the same as the restrictions found in the Oracle documentation with one critical difference, the book states “An IN parameter of a function has one of these types: BLOB, CLOB, …, and Record” while the Oracle documentation states, “No IN parameter has one of the following types: ◦BLOB, ◦CLOB, …, ◦Record”. (pages 105-108)
  • Recipe 3-12 states that the flash cache feature is for the Enterprise Edition only and that there are only two supported operating systems, so that is a positive for this recipe.  The Oracle recommended conditions in the book for enabling Flash Cache are missing this potentially important point from the Oracle documentation: “The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial.”  While the documentation states “Any multiplier [of the buffer cache size] less than two would not provide any benefit,” the book states that trial and error should be used to pick a multiplier between 1 and 10. (pages 109-110)
  • In recipe 3-13 we learn how to tune the redo log buffer.  The book states, “since there’s no cost whatsoever to increasing the LOG_BUFFER size, feel free to set it to higher than the suggested maximum of 1MB for this parameter.”  Starting with Oracle Database 10.2 the LOG_BUFFER parameter is automatically set to a value that is slightly smaller than the granule size – the most common granule size for Oracle Database 10.2 databases was likely 16MB (the maximum granule size).  The granule size rules changed again with the release of Oracle Database 11.1 and again with the release of Oracle Database 11.2.  With recent Oracle Database releases it is quite possible that the LOG_BUFFER could be auto-set to just under 512MB in size, depending on the size of the SGA (reference). The ratio of the ‘redo entries’ statistic value divided by the ‘redo log space requests’ statistic value, as shown in the SQL statement probably yields little useful information when adjusting the size of the LOG_BUFFER parameter. (pages 110-112)
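A quick sketch for checking the auto-set LOG_BUFFER value against the instance granule size:

```sql
-- The auto-set LOG_BUFFER value is typically slightly smaller than a granule.
SELECT VALUE/1048576 LOG_BUFFER_MB
FROM V$PARAMETER
WHERE NAME = 'log_buffer';

SELECT BYTES/1048576 GRANULE_MB
FROM V$SGAINFO
WHERE NAME = 'Granule Size';
```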
  • Recipe 4-10 incorrectly states that the “Parse CPU to Parse Elapsd” statistic found in an AWR report is “how much time the CPU is spending parsing SQL statements. The lower this metric is, the better. In the following example, it is about 2%, which is very low. If this metric ever gets to 5%, it may mean investigation is warranted to determine why the CPU is spending this much time simply parsing SQL statements.”  The book’s definition of this statistic is incorrect – the statistic actually indicates delays (wait events) in the parsing of SQL statements, very likely due to contention between sessions (or possibly excessive competition for the server’s CPUs, however such competition probably would not be explicitly captured in an Oracle wait event).  Ideally, this statistic in an AWR report should be close to 100%. It appears that the book authors attempted to describe the “PARSE TIME CPU” statistic, which is not found in this section of an AWR report, or attempted to describe a derivative of the “Non-Parse CPU” statistic which does appear in the Instance Efficiency Percentages section of an AWR report. (pages 133-134)
  • Recipe 4-13 contains three SQL statements that attempt to calculate the SUM of the WAIT_TIME column plus the TIME_WAITED column when selecting from V$ACTIVE_SESSION_HISTORY – this approach can lead to misleading information for a couple of reasons, including: the session may actually be running on the CPU and this view will still be capturing the last wait event as if it were a currently active wait; the wait time is cumulative, thus the query would show a wait time of 3 seconds after the second second, 6 seconds after the third second, 10 seconds after the fourth second, etc. (pages 142-145 reference)
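A sketch of a less misleading approach (note that querying this view requires an Enterprise Edition license with the Diagnostics Pack option): count the ASH samples rather than summing the wait time columns, since each sample approximates one second of active time:

```sql
-- Approximate active seconds per SQL statement during the last 30 minutes,
-- split between time on CPU and time in wait events.
SELECT
  SQL_ID,
  SESSION_STATE,
  COUNT(*) APPROX_ACTIVE_SECONDS
FROM
  V$ACTIVE_SESSION_HISTORY
WHERE
  SAMPLE_TIME >= SYSDATE - 30/1440
GROUP BY
  SQL_ID, SESSION_STATE
ORDER BY
  APPROX_ACTIVE_SECONDS DESC;
```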
  • Chapter 5 is a little loose with terminology, as shown by this quote found on page 148: “It’s easy to find out the percentage of time a database has spent waiting for resources instead of actually executing.”  The database does not wait, and it does not execute – it is a set of files on disk.
  • In recipe 5-1, when describing the V$SESSION_EVENT view, the book states, “The data in this view is available only so long as a session is active.” – considering that one of the columns in V$SESSION (another view described in this recipe) is named ACTIVE, I can see where the quoted statement could lead to confusion.  The recipe seems to rely too much on the information provided by the ratios from V$SYSMETRIC, ignoring the possibility that a session simply burning CPU (through spinning on latches, excessive parsing, or performing unnecessary levels of consistent gets due to poorly selected execution plans) could make it appear that the instance is working optimally; the book states: “On the other hand, high values for the Database CPU Time Ratio indicate a well-running database, with few waits or bottlenecks.”  The last paragraph of the recipe states, “Note that you can query the V$WAITSTAT view for the same information as well.” – that view was not previously introduced in the book and in actuality, the V$WAITSTAT view, related to buffer busy waits, does not retrieve information similar to the V$SYSTEM_EVENT view that was described a sentence or two earlier. (pages 147-150)
  • Recipe 5-2 queries V$ACTIVE_SESSION_HISTORY without first providing a warning regarding the licensing requirements to access this view.  SUM(WAIT_TIME + TIME_WAITED) when grouped by SQL_ID and USER_ID could lead to incorrect assumptions regarding the wait time per SQL statement for the last 30 minutes.  There is a risk of double-counting, or counting incidental CPU usage immediately after a wait event completes, as part of the time spent in wait events for a specific SQL statement.  Using COUNT is likely a better approach than using SUM.  I am left wondering why the authors did not introduce the readers to the APPLICATION_WAIT_TIME, CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME, USER_IO_WAIT_TIME, PLSQL_EXEC_TIME, and JAVA_EXEC_TIME columns of V$SQL (or V$SQLAREA), periodically querying those columns and calculating the delta (change) values for specific SQL statements – those columns have existed as long as ASH (since the release of Oracle Database 10.1). (pages 150-151)
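A sketch of that license-free alternative; capture the values twice and subtract to obtain the delta values for the time period of interest (the time columns are reported in microseconds):

```sql
-- Per-SQL wait time classification without the Diagnostics Pack license.
SELECT
  SQL_ID, CHILD_NUMBER,
  APPLICATION_WAIT_TIME, CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME,
  USER_IO_WAIT_TIME, PLSQL_EXEC_TIME, JAVA_EXEC_TIME,
  CPU_TIME, ELAPSED_TIME
FROM
  V$SQL
ORDER BY
  ELAPSED_TIME DESC;
```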
  • In Recipe 5-3, as of Oracle Database 10.1, “enqueue waits” probably should not be listed among the most common wait events since the most frequently occurring enqueue type waits are now broken out into separate wait events: “enq: TX – row lock contention”, “enq: TM – contention”, etc.  Additionally, it would be helpful to determine how the seven wait events printed were selected as the most common, and not one of the following: “rdbms ipc message”, “pmon timer”, “SQL*Net message from client”, “RMAN backup & recovery I/O” or “direct path read”.  It might have been helpful for the book to provide a little information about the seven selected wait events. (pages 151-152)
  • Recipe 5-4 states, “Here are some examples of typical waits in some of these classes,” however the book simply describes some of the characteristics of wait events found in four wait event groups, rather than listing and describing the actual wait events.  The book states that only the “log file sync” wait event is in the Commit class, while Oracle Database 11.1 and above also include the “enq: BB – 2PC across RAC instances” wait event in the Commit class.
    The recipe states, “You can ignore the idle waits.” – that statement is mostly true at the instance level, but mostly not true at the session level. This recipe did not provide much more information than what a query of V$EVENT_NAME would have provided. (pages 152-153)
  • In recipe 5-5, the first query should include the STATE column in the WHERE clause to eliminate all of those waits that have completed in less than 0.01 seconds and those waits that have already completed with the session now burning server CPU time.  The recipe probably should have stated that since Oracle Database 10.1 the columns mentioned in the V$SESSION_WAIT view are also found in the V$SESSION view. (pages 153-154)
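A sketch of the first query with the STATE column restriction (using V$SESSION, which includes the V$SESSION_WAIT columns as of Oracle Database 10.1):

```sql
-- Only sessions currently in a wait event; sessions whose last wait has
-- completed (now on CPU) show a STATE value other than WAITING.
SELECT
  SID, EVENT, STATE, SECONDS_IN_WAIT, P1, P2, P3
FROM
  V$SESSION
WHERE
  STATE = 'WAITING'
  AND WAIT_CLASS <> 'Idle';
```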
  • Recipe 5-6 demonstrates a problem present in several of the recipes where the problem statement should have provided more detail, rather than something as simple as, “You want to examine Oracle wait event classes.”  Grouping and summing the statistics from the V$SYSTEM_WAIT_CLASS view is unnecessary since the view is already grouped by the WAIT_CLASS column.  The third query has an unnecessary join between the V$SYSTEM_EVENT, V$EVENT_NAME, and V$SYSTEM_WAIT_CLASS views – all of the displayed information is found in V$SYSTEM_EVENT as well as the WAIT_CLASS column that is in the WHERE clause.  The tip regarding the “db file scattered read” wait event being caused by full table scans of large tables, and “db file sequential read” wait event being caused by indexed reads is a little too shallow of a description to be accurate, especially with the introduction of serial direct path reads for full table scans starting in Oracle Database 11.1. Page 156 of the book states, “You can see that the enqueue waits caused by the row lock contention are what’s causing the most waits under these two classes. Now you know exactly what’s slowing down the queries in your database!”  Three problems with these two sentences: 1) the top wait event in the Application and Concurrency classes is shown as “enq: UL – contention”, which is a user lock caused by a call to DBMS_LOCK, rather than the wait event that the book appeared to be describing, “enq: TX – row lock contention”. 2) There was only a single wait for the “enq: UL – contention” event and that wait lasted 5 seconds, is this wait event entry showing what is really slowing down the queries in the database, or could the multiple waits for “latch: shared pool” and “library cache load lock” have a greater impact?  Keep in mind that this view is showing what has happened since the last bounce of the database, so the user lock enqueue might have happened shortly after the database was bounced. 
3) Only two wait event classes were checked, how do we know that the performance problem is found in the Application or Concurrency wait classes?  The final query in this recipe does not show that SID 68 “is waiting”, rather it shows that SID 68 “had waited” and *might* still be waiting. (pages 154-157)
  • The use of “a”, “b” and “c” as aliases in queries makes it more difficult for the reader to determine where the columns are located than it would be had the authors used meaningful aliases – “se” for the V$SESSION_EVENT view rather than “a”.  Also, not all columns are aliased, at times leaving the reader guessing which table/view contains the column.
  • Recipe 5-7 appears to closely mirror section 10.3.1 in the Oracle Database 11.2 Performance Tuning Guide documentation. V$SESSION’s ROW_WAIT_OBJ# column refers to the OBJECT_ID column in DBA_OBJECTS, not the DATA_OBJECT_ID column in DBA_OBJECTS as indicated by the book.  The recipe in the book fails to query either V$SESSION_WAIT or V$SESSION for column P3 which indicates the block class that indicates whether the buffer busy wait is for a segment header, data block, undo header, or undo block (while not stated in the book, information can be cross-referenced with the V$WAITSTAT view).  The recipe states about segment header buffer busy waits when ASSM is not in use, “If you aren’t already using it, you must switch from a manual space management to automatic segment space management (ASSM) – under ASSM, the database doesn’t use free lists.” – that advice, specifically “must” is a bit extreme, especially when the severity of the segment header buffer busy waits is not listed as a consideration in the book.  This sentence from the book does not seem to apply to buffer busy waits for the segment header, “due to several processes attempting to insert into the same data block – each of these processes needs to obtain to a free list before it can insert data into that block.”  The book states, “One of the reasons for a high data number of buffer busy waits is that an inefficient query is reading too many data blocks into the buffer cache, thus potentially keeping in wait other sessions that want to access one or more of those same blocks… By making the hot block always available in memory, you’ll avoid the high buffer busy waits.” – that classification of buffer busy waits now falls under the classification of the “read by other session” wait event and not the “buffer busy waits” wait as of Oracle Database 10.1.  
(reference reference2 reference3: Metalink (MOS) ID 15476.1 “FAQ about Detecting and Resolving Locking Conflicts” reference4: Metalink(MOS) Doc ID 603661.1 “ORA-01410: Invalid Rowid When Using rowid Generated from V$Session and Dba_objects”).  The Oracle Database 11.2 Performance Tuning Guide documentation also appears to have used the ROW_WAIT_OBJ# column incorrectly (pages 157-158 reference5)
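A sketch of the missing pieces: the P3 value (block class) and the ROW_WAIT_OBJ# to OBJECT_ID join; the block class number can then be cross-referenced against the V$WAITSTAT view:

```sql
-- For sessions in a buffer busy wait, show the block class and the object;
-- ROW_WAIT_OBJ# joins to OBJECT_ID, not DATA_OBJECT_ID.
SELECT
  S.SID, S.P3 BLOCK_CLASS#, O.OWNER, O.OBJECT_NAME, O.OBJECT_TYPE
FROM
  V$SESSION S,
  DBA_OBJECTS O
WHERE
  S.EVENT = 'buffer busy waits'
  AND S.ROW_WAIT_OBJ# = O.OBJECT_ID;
```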
  • Recipe 5-8 describes how to resolve log file sync waits.  The book states, “You must change the commit behavior by batching the commits. Instead of committing after each row, for example, you can specify that the commits occur after every 500 rows.” Commits should only be used to mark the end of a transaction, suggesting otherwise potentially leads to data corruption where one portion of a transaction is committed, while another portion cannot be committed due to activity of other sessions (or conversely 499 transactions should not be committed because 1 of the transactions must be rolled back).  This commit recommendation is later followed up by the statement “The log file sync wait event can also be caused by too large a setting for the LOG_BUFFER initialization parameter. Too large a value for the LOG_BUFFER parameter will lead the LGWR process to write data less frequently to the redo log files.” – this is in sharp contrast to recipe 3-13 which stated that there is no risk in sizing the LOG_BUFFER parameter too large.  The book states that changing the COMMIT_WRITE parameter may have an adverse effect on performance – actually, changing that parameter from the default should help performance, while potentially compromising the durability of the writes to the database in the event of instance failure.  The suggestion to check V$SESSION_WAIT to see if the SEQ# column is being updated should indicate which column (P1, P2, or P3) in that view should be checked.  The recommendation telling readers to modify the hidden parameter _LOG_IO_SIZE (described as the LOG_IO_SIZE parameter, without the leading underscore) without providing any warnings about modifying hidden parameters is a bit disappointing.
The example shows the _LOG_IO_SIZE parameter being set to a value of 1,024,000, which, with an OS block size of 512 bytes, would tell Oracle Database to start writing the redo log buffer when it contains roughly 512MB of redo, while the default value is the lower of 1MB or 1/3 of the LOG_BUFFER value.  The recipe failed to mention that excessive time spent in the log file sync wait event could be caused by a CPU starvation condition, and also failed to mention that the redo log buffer contents are written every 3 seconds regardless of the value of the _LOG_IO_SIZE parameter. (pages 158-160 reference reference2 reference3 reference4)
  • Recipe 5-9 investigates read by other session waits. While the query of DBA_EXTENTS based on the FILE_ID and BLOCK_ID range will likely work, this query is potentially very slow in all but the smallest of Oracle databases.  It is usually possible to query the ROW_WAIT_OBJ# column of V$SESSION to immediately retrieve the associated OBJECT_ID, which can then be used to quickly look up the OWNER, OBJECT_NAME, and OBJECT_TYPE column values from DBA_OBJECTS.  The book states that one of the goals of this process is to identify the SQL statements that are accessing those objects – the SQL_ID and SQL_CHILD_NUMBER, and possibly also PREV_SQL_ID and PREV_CHILD_NUMBER columns of V$SESSION will provide the link to identify the SQL statement – this fact was not stated in the book.  The book states that the blocks being read from disk are “hot blocks” – that likely is not the case, otherwise those blocks would have remained in the buffer cache and would not need to be re-read from disk. The suggestion to create a new tablespace with a smaller block size and then move these objects into that tablespace is probably not a very good idea due to manageability and bug issues related to having multiple block sizes in the same database. It would have been a good idea for the book to suggest examining the execution plan. (pages 160-161)
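The faster lookup described above can be sketched as follows (a minimal example, not from the book; it assumes sessions currently waiting on “read by other session”):

```sql
-- Resolve the object behind a "read by other session" wait using
-- ROW_WAIT_OBJ#, avoiding the slow DBA_EXTENTS range scan.
SELECT s.sid,
       s.sql_id,
       s.sql_child_number,
       o.owner,
       o.object_name,
       o.object_type
FROM   v$session s,
       dba_objects o
WHERE  s.event = 'read by other session'
AND    s.row_wait_obj# = o.object_id;
```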
  • Recipe 5-10 states, “The direct path read and direct path read temp events usually indicate that the sorts being performed are very large and that the PGA is unable to accommodate those sorts.” That statement is not true regarding the direct path read waits, if direct path read temp wait events are also present in the database.  The statement is mostly true regarding the direct path read temp wait event, but the wait event will also appear with hash joins, and the sorts need not be “large”.  The two queries provided to investigate the direct path read temp waits to see if the waits are for a temp tablespace are likely a wasted effort – it probably would have been a better idea to investigate the V$SQL_WORKAREA_ACTIVE view to understand why the direct path read temp wait events were appearing for a specific session, down to a specific line in an execution plan.  The second of the two SQL statements contains a typo: “between” is spelled as “betgween”.  The book states that the direct path read wait events can be caused by either reading data from the temp tablespace or by full table scans performed by parallel slaves.  The book failed to mention that serial direct path reads, introduced in Oracle Database 11.1, will cause sessions not performing parallel operations to wait on direct path read wait events while performing full table scans. A very confused statement: “A direct path read event can also occur when the I/O sub system is overloaded, most likely due to full table scans caused by setting a high degree of parallelism for tables, causing the database to return buffers slower that what the processing speed of the server process requires.”  Direct path read waits will occur whether or not the I/O sub-system is overloaded; however, it is the _duration_ of those waits that is likely to increase when the I/O sub-system is overloaded.
At times, the book does not seem to differentiate between the number of times a wait event occurs and the total/average duration of that wait event.  Direct path read waits will be more common in Oracle Database 11.1 and above, and will happen during full table scans even when parallel query is not used, unless serial direct path read is disabled by setting event 10949 to a value of 1. (pages 161-162)
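For completeness, serial direct path reads may be disabled as mentioned above by setting event 10949 (a sketch only; the event is undocumented and its behavior is version-specific, so test before relying on it):

```sql
-- Disable serial direct path reads for the current session; full
-- table scans then revert to buffered "db file scattered read" I/O.
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';
```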
  • Recipe 5-11 seems to closely mirror chapter 7 of the Oracle Database Backup and Recovery User’s Guide from the Oracle Database documentation library, specifically the section titled, “Configuring the Environment for Optimal Flashback Database Performance”.  However, the book appears to misinterpret the Oracle documentation in a couple of cases in this recipe.  First, the book implies that when enabling the flashback database feature, the “flashback buf free by RVWR” wait event will become one of the top wait events.  The documentation states that operating system level file caching may add CPU overhead, and thus recommends against using operating system level file caching for the location where the flashback logs will be stored. The book attempts to combine a couple of sentences found in the Oracle documentation, resulting in a sentence such as the following: “Since flashback logs tend to be quite large, your database is going to incur some CPU overhead when writing to these files.”  The book then mangles the sentence about operating system file caching, stating that it “tends to slow down I/O.”  The other bulleted items are just restatements of the bulleted items found in the Oracle documentation. On page 163 the book states “when thie database is writing” – in addition to the misspelling of the word the, a second problem is that the database technically does not write. (pages 162-163)
  • Recipe 5-12 shows a SQL statement to identify blocked sessions and the blocker of those sessions.  The query has a couple of problems: while the query performs a self-join on the V$LOCK table using the columns ID1 and ID2, it should have also joined on the TYPE column; the query specifies BLOCK=1 when attempting to identify the blocking sessions – the BLOCK column can contain a value of 2 in a RAC environment.  The SQL statement that queries V$LOCK for a BLOCK value greater than 0 to determine if there are any blocking locks in the instance can produce false blocking entries in a RAC environment – selecting from V$SESSION where BLOCKING_SESSION IS NOT NULL might have produced a more usable result. The How It Works section for this recipe seems to misinterpret the Oracle documentation that describes this same behavior, leading to confusion for the reader: “Oracle uses two types of locks to prevent destructive behavior: exclusive and share locks… The concomitant table row held by the first session is merely intended to prevent any other sessions from issuing a DDL statement to alter the table’s structure.” (pages 163-165 reference reference2 reference3)
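The simpler V$SESSION approach suggested above might look like this (a sketch; the BLOCKING_INSTANCE column makes the result meaningful in RAC environments as well):

```sql
-- Identify blocked sessions and their blockers directly from
-- V$SESSION, without self-joining V$LOCK on TYPE, ID1, and ID2.
SELECT sid,
       serial#,
       blocking_instance,
       blocking_session,
       event,
       seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```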
  • Recipe 5-13 shows a functional SQL statement that does identify blocked sessions and the blocking session. Unfortunately, the recipe also includes another SQL statement that fails to retrieve the ID1 and ID2 columns from V$LOCK.  Examining the output of that SQL statement, the recipe makes a claim that because one session holds a mode 6 TX lock, it is the blocker of another session that requests a mode 6 TX lock – that association is only true if the ID1 and ID2 column (and of course TYPE) values are identical. The recipe includes another SQL statement that queries V$SESSION to determine the blocking session and WAIT_CLASS – that is a helpful query, but it would have been more useful to retrieve the EVENT column rather than the WAIT_CLASS column from V$SESSION. (pages 165-167)
  • Recipe 5-14 simply offers an ALTER SYSTEM KILL SESSION command as the quick solution for resolving blocking locks.  It would have been helpful if the book first suggested that the reader spend a couple of minutes to determine what the blocking session is currently doing before simply terminating that session – even a statement suggesting checking the STATUS column of V$SESSION a couple of times to see whether the blocking session remains ACTIVE would have been a step in the right direction.  The book states, “Oracle uses several types of internal ‘system’ locks to maintain the library cache and other instance related components, but those locks are normal and you won’t find anything related to those locks in the V$LOCK view.” – you will not find library cache locks in the V$LOCK view because those are NOT locks, but latches prior to Oracle Database 11.1, and as of 11.1 most of the library cache related latches are now mutexes.  The book states, “The preceding discussion applies to row locks, which are always taken in the exclusive mode.” – it probably would have been helpful for the book to clarify that a session may attempt to acquire a TX type lock in a mode other than exclusive mode (mode 6); for example, when a session enqueues (wait event “enq: TX – row lock contention”) due to a potential primary key violation error, it will attempt to acquire a TX lock in share mode (mode 4) – the same requested lock mode will appear when the session is waiting for an interested transaction slot or a shared bitmap index fragment.  (pages 167-168)
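A minimal sketch of the pre-kill sanity check suggested above (:blocking_sid is a placeholder bind variable; run the query two or three times and compare the results):

```sql
-- Sample what the blocking session is doing before terminating it:
-- an ACTIVE status and a changing SQL_ID suggest work in progress.
SELECT sid,
       serial#,
       status,
       sql_id,
       prev_sql_id,
       event,
       last_call_et   -- seconds since the session last became active/inactive
FROM   v$session
WHERE  sid = :blocking_sid;
```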
  • Recipe 5-15 The How it Works section states, “You can certainly use Oracle Enterprise Manager to quickly identify a locked object, the ROWID of the object involved in the lock, and the SQL statement that’s responsible for the locks.”  In general, it is not possible to determine the exact SQL statement that the blocking session executed which resulted in another session being blocked.  It is possible to determine the SQL_ID for the _currently_ executing SQL statement in the blocking session, as well as the SQL_ID for the previously executed SQL statement, but there is a fair chance that neither of those SQL statements is responsible for a blocking lock. (pages 168-169)
  • Recipe 5-16 states, “The enq: TM – contention event is usually due to missing foreign key constraints on a table that’s part of an Oracle DML operation. Once you fix the problem by adding the foreign key constraint to the relevant table, the enqueue: TM –contention event will go away.” – the authors should have proofread these two sentences a second time – the enqueues are NOT caused by a missing foreign key constraint; the problem is typically missing index(es) on the foreign key column(s) in a child table.  It is helpful that recipe 5-16 includes DDL to create two tables to demonstrate problems caused by unindexed foreign keys, but it would have been helpful if the script also inserted a couple of rows into the tables, because in a sense the book authors’ statement is either right or wrong depending on how the test case is constructed.  The SQL statement described as finding all of the unindexed foreign key columns in the database (note that this SQL statement is different from the one found in recipe 2-5) has a couple of flaws: 1) the SQL statement only examines indexes and constraints defined in the current user’s schema (not all schemas in the database), 2) changes in the column order between the foreign key constraint columns and the index columns will cause false alarms, 3) duplicate rows are eliminated and the foreign key name is not retrieved, so the output might occasionally show two adjacent rows that likely are not supposed to be displayed together.  The test case and SQL statement to locate unindexed foreign key columns very closely resemble a test case and SQL statement found on the Confio website.  (pages 169-171)
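A database-wide variant of the unindexed foreign key check, addressing the first and third flaws above, might be sketched like this (a sketch requiring 11.2 or later for LISTAGG; note that this version is still order-sensitive, since a strict column-permutation comparison requires considerably more code):

```sql
-- List foreign keys in all schemas whose ordered column list is not
-- a leading prefix of any index on the child table.
WITH fk AS (
  SELECT c.owner, c.table_name, c.constraint_name,
         LISTAGG(cc.column_name, ',')
           WITHIN GROUP (ORDER BY cc.position) AS cols
  FROM   dba_constraints c
  JOIN   dba_cons_columns cc
         ON  cc.owner = c.owner
         AND cc.constraint_name = c.constraint_name
  WHERE  c.constraint_type = 'R'
  GROUP  BY c.owner, c.table_name, c.constraint_name),
ind AS (
  SELECT table_owner, table_name,
         LISTAGG(column_name, ',')
           WITHIN GROUP (ORDER BY column_position) AS cols
  FROM   dba_ind_columns
  GROUP  BY table_owner, table_name, index_name)
SELECT fk.owner, fk.table_name, fk.constraint_name, fk.cols
FROM   fk
WHERE  NOT EXISTS (
         SELECT 1
         FROM   ind
         WHERE  ind.table_owner = fk.owner
         AND    ind.table_name  = fk.table_name
         AND    ind.cols LIKE fk.cols || '%');
```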
  • Recipe 5-17 appears to have borrowed the SQL statement from a blog operated by someone other than one of the book authors.  The book states, “You can execute the following statement based on ASH, to find out information about all locks held in the database during the previous 5 minutes.”  The query in this recipe actually retrieves data from V$ACTIVE_SESSION_HISTORY for the last 2 minutes, does not retrieve just enqueues, and the data provided in V$ACTIVE_SESSION_HISTORY does not indicate ALL enqueues – just those that the sessions were waiting on at the instant that the once-a-second sample was collected.  It is unclear why the SQL statement selecting from V$SQL is returning the APPLICATION_WAIT_TIME column (possibly because enqueues tend to fall into the APPLICATION wait group), but not the CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME, USER_IO_WAIT_TIME, PLSQL_EXEC_TIME or JAVA_EXEC_TIME columns – it is probably also important to point out that the values returned by these columns are not specific to one user session, nor to the specific time interval queried in V$ACTIVE_SESSION_HISTORY.  There is a risk (greater in Oracle Database 11.1 and even greater in Oracle Database 11.2) that a SQL statement may have multiple execution plans – the execution of DBMS_XPLAN.DISPLAY_AWR probably should have also specified the SQL_PLAN_HASH_VALUE value from the V$ACTIVE_SESSION_HISTORY view to retrieve the actual execution plan for the SQL statement that was executing in the session of interest. The “typical” query of V$SESSION_WAIT_HISTORY appears to be borrowed from another blog article, without explaining the purpose of the query.  (pages 171-174)
  • In recipe 5-18 the queries of V$ACTIVE_SESSION_HISTORY, where the SUM of WAIT_TIME + TIME_WAITED is determined when grouping on various columns, are significantly flawed when you consider that the view essentially shows samples of V$SESSION for each second, and only for active sessions.  Consider a session that has waited 10 seconds in a single wait event.  The queries will likely show that the session waited 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55 seconds (+- a couple of seconds depending on the update frequency) in that particular wait event for the first 10 seconds in that wait event.  Assume that the session had already waited 240 seconds and continued to wait for an additional 10 seconds – the queries would show that the session waited 240 + 241 + 242 + 243 + 244 + 245 + 246 + 247 + 248 + 249 + 250 = 2695 seconds in that 10 second time interval.  The same mistake is found on pages 1170-1171 of the book “Expert Oracle Database 11g Administration”.  (pages 174-175)
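A less misleading way to use the ASH data is to count the roughly once-a-second samples rather than summing the cumulative WAIT_TIME/TIME_WAITED columns, for example (a sketch; sample counts only approximate elapsed time):

```sql
-- Approximate per-event wait time from ASH: each sample represents
-- roughly one second of active session time.
SELECT session_id,
       event,
       COUNT(*) AS approx_seconds
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
AND    session_state = 'WAITING'
GROUP  BY session_id, event
ORDER  BY approx_seconds DESC;
```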
  • Recipe 5-19, while the query that UNIONs the non-idle wait events found in V$SYSTEM_EVENT with all rows from V$SYS_TIME_MODEL (with the exception of the “background elapsed time” and “background cpu time” statistics) is interesting, its usefulness is limited due to time-scoping issues, failure to eliminate all background statistics from V$SYS_TIME_MODEL (if that was the intention), ignoring the hierarchy of the statistics found in V$SYS_TIME_MODEL, and the fact that the V$SYS_TIME_MODEL view shows both wait related and non-wait related statistics.  While the output from V$SESSION is described, the actual output from the SQL statement does not appear in the book.  The query that joins V$SESSION and V$SQL to retrieve the SQL statement that caused session 81 to wait in a TX enqueue is flawed: the SQL statement potentially returns two distinct SQL statements per session (one for the SQL_ID column and one for the PREV_SQL_ID column); because the SQL statement joins to V$SQL, it should also join on the columns SQL_CHILD_NUMBER and PREV_CHILD_NUMBER to avoid duplicate rows caused by the existence of multiple child cursors for the same SQL statement; it is quite possible that neither of the two distinct SQL statements that are potentially returned is in fact the SQL statement that caused session 81 to be blocked.  The book advocates killing a session as the only initial solution if that session is blocking several other sessions – the book probably should have suggested investigating what the session is doing in the database before blindly terminating the session.  There appears to be a logic error in the book, where the authors suggested using the V$SYSTEM_EVENT view to investigate the specific waits that are identified in a specific wait class of the V$SESSION_WAIT_CLASS view – a better approach would have used the V$SESSION_EVENT view since both views are specific to a single session.
It probably is not too beneficial for the book to identify that the WAIT_CLASS_ID for the Application class of wait events is 4217450380, and the join to the V$EVENT_NAME view is unnecessary – since the introduction of the wait classes with Oracle Database 10.2, every commonly used view that includes the WAIT_CLASS_ID column also contains the WAIT_CLASS column, where the wait classification “Application” would be found.  The quote from the book, “The SELECT FOR UPDATE NOWAIT statement locks the row without waiting” is simply incorrect – if the requested rows cannot be locked, Oracle will return the error “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired” and no rows will be locked or selected. (pages 175-178)
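The session-scoped drill-down mentioned above, using V$SESSION_EVENT rather than V$SYSTEM_EVENT, might be sketched as (:sid is a placeholder bind variable):

```sql
-- Wait events for one session, filtered by wait class name rather
-- than by a WAIT_CLASS_ID join to V$EVENT_NAME.
SELECT event,
       total_waits,
       time_waited,   -- centiseconds since session connect
       max_wait
FROM   v$session_event
WHERE  sid = :sid
AND    wait_class = 'Application'
ORDER  BY time_waited DESC;
```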
  • Recipe 5-20 includes the same non-time scoped query found in recipe 5-19 that unions the non-idle wait events found in V$SYSTEM_EVENT with all rows from V$SYS_TIME_MODEL (with the exception of the “background elapsed time” and “background cpu time” statistics).  The recipe shows the “Top 5 Timed Events” section from an AWR report, describing that information as the “top 5 wait events” – this label may cause confusion because CPU utilization is not a wait event.  It would be helpful for the book to explain why the percentages add up to 138.9%, when 100% should be the maximum, and why the “wait list latch free” wait (not an actual name of a wait event) appears in the list of wait events.  In the solution for excessive shared pool and library cache latch counts, the book suggests specifying a value for the CURSOR_SPACE_FOR_TIME parameter. The book does not specify a suggested value for this parameter, does not indicate that the parameter is deprecated as of Oracle Database 11.1, does not suggest increasing the value for the SESSION_CACHED_CURSORS parameter, and seems to confuse the word “variable” with the word “literal” and suggests that it is the database that executes SQL statements: “For example, a database may execute a SQL statement 10,000 times, each time with a different value for a variable.”  The book states the following about setting the CURSOR_SHARING parameter to SIMILAR: “the SIMILAR setting seems a safer way to go about forcing the database to use bind variables instead of literals.” Metalink (MOS) Doc ID 1169017.1 states that the parameter value of SIMILAR is deprecated as of Oracle Database 11.1 and will be removed in the 12.1 release.
The book also states that the authors have not seen any issues with setting the CURSOR_SHARING parameter to FORCE – there are numerous bug reports in Metalink (MOS) that indicate setting that parameter to FORCE may cause wrong results to be returned, ORA-600 errors, instance (or session) crashes, excessively long parse times, execution performance problems, problems with function based indexes, and various other problems.  Describing the CURSOR_SHARING parameter as a silver bullet to cure latch contention just might be true – there probably will not be much latch contention when the database instance is down. (pages 178-180 reference)
  • Recipe 5-21 should indicate that an Enterprise Edition license and an additional cost Diagnostics Pack license are required to use the Row Locks link in the Enterprise Manager Database Control. (pages 181-182)
  • Recipe 5-22 should indicate that an Enterprise Edition license and an additional cost Diagnostics Pack license are required to use the Active Sessions graph in Enterprise Manager Database Control. (page 182)
  • Figure 6-1 “Troubleshooting poor performance” shows a flow chart that indicates how a DBA or performance analyst should proceed when a user reports sluggish performance. The idea behind a flow chart for this process is great, but there are a couple of problems with the execution of the flow. If a user is reporting a problem, it probably would be a better idea to stay away from instance-wide analyzing tools such as AWR/Statspack reports that can easily hide performance issues that are specific to a single user session, and instead use a 10046 trace and/or the session-level wait interface. The flow chart lists bulleted items that could be tried, but it is not clear from the flow chart and its description if the reader should perform all of the bulleted items, some of the items, or a single item from the list. For instance, if an identified SQL statement is not tunable, is the reader supposed to buy an Exadata box, hire Oracle consultants, and then find a new job (I suspect that the last item included in this portion of the flow chart is intended as a hidden joke)? (page 186)
  • Table 6-1 on pages 191-192 (in recipe 6-2) shows column descriptions from the vmstat output – the descriptions of those columns printed in the book are nearly identical word for word with the descriptions found by typing “man vmstat” at a Linux command line.
  • Recipe 6-6 includes a couple of SQL statements that are intended to show the SQL statements contributing the most to the IO load, the sessions that are currently waiting for I/O to complete, and the database objects that are involved in the disk activity. Since the first query accesses V$SQL, retrieving the DIRECT_WRITES and DISK_READS columns, there are both time-scoping problems (the SQL statement may have been executed days ago and might not have anything to do with the current iostat output) and the query does not distinguish between multi-block reads and single-block reads (which is worse: 10,000 blocks read in 79 multi-block reads, or 5,000 blocks read one block at a time using single-block reads?). Recent Oracle Database releases have the columns PHYSICAL_READ_REQUESTS, PHYSICAL_READ_BYTES, PHYSICAL_WRITE_REQUESTS, PHYSICAL_WRITE_BYTES, and USER_IO_WAIT_TIME that would also seem to be helpful.  The second and third SQL statements, because those query V$SESSION, are much less likely to exhibit time-scoping issues because those queries indicate the current waits.  One of the problems with these two queries is that they only provide output when the sessions are waiting on “db file sequential read” or “db file scattered read” wait events – with the introduction of Oracle Database 11.1 a couple of years ago, serial direct path reads became much more common when multi-block reads were performed; reads and writes to the temp tablespace are also ignored by the queries.  The third query joins the ROW_WAIT_OBJ# column from V$SESSION to the DATA_OBJECT_ID column of DBA_OBJECTS – it should instead be joined to the OBJECT_ID column. (pages 198-201)
  • Recipe 6-9 shows how to map a resource intensive operating system process to an Oracle Database session. One of the SQL statements in the recipe retrieves the SQL_ID column from V$SESSION, but not the SQL_CHILD_NUMBER column.  A SQL statement follows in the recipe that calls DBMS_XPLAN.DISPLAY_CURSOR to retrieve the execution plan, however that SQL statement does not specify a value for the SQL child number, thus the call defaults to SQL child number 0.  Therefore, one of three things will happen: 1) no execution plan will be displayed if the plan for child number 0 is no longer in the shared pool; 2) an incorrect execution plan will be displayed if the execution plan for SQL child number 0 does not match the execution plan for the SQL child number that is actually executed by the resource intensive session; 3) the correct execution plan will be returned if the resource intensive session is using child number 0 or the same execution plan as is used by child number 0.  Had the SQL statement querying V$SESSION also retrieved the SQL_CHILD_NUMBER column, that value could have been specified for the second parameter in the DBMS_XPLAN.DISPLAY_CURSOR call, or NULL could have been specified for the second parameter so that all execution plans for the various child cursors of the specified SQL_ID would be retrieved.  It is also important to keep in mind that the SQL_ID retrieved by the query of V$SESSION may not identify the resource intensive SQL statement executed by the session; it is just the SQL statement that was executing at the instant that the query of V$SESSION was executed. (pages 204-207)
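The corrected sequence described above might be sketched as follows (:os_pid, :sql_id, and :child_no are placeholder bind variables):

```sql
-- Map the OS process to the session, retrieving SQL_CHILD_NUMBER
-- along with SQL_ID.
SELECT s.sql_id, s.sql_child_number
FROM   v$session s
WHERE  s.paddr = (SELECT p.addr
                  FROM   v$process p
                  WHERE  p.spid = :os_pid);

-- Pass both values to DBMS_XPLAN.DISPLAY_CURSOR; specifying NULL
-- for the second parameter instead returns the plans for all child
-- cursors of the SQL_ID.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(:sql_id, :child_no));
```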
  • Recipe 6-10 should have detailed some of the risks associated with terminating Oracle sessions with either the “ALTER SYSTEM KILL” command or the operating system “kill -9” command (other than just stating that you should be careful not to kill the wrong process). Killing processes used by sessions can lead to stability problems in the instance, especially if the instance is configured for shared servers, or if the session was in the process of executing a distributed query (see Metalink/MOS Bug ID 8686128 that affects and is supposed to be fixed in the not yet released 12.1, and Bug ID 12961905), or if auditing is enabled (see Metalink/MOS Doc 604511.1), or if resource limits are in use (see Metalink/MOS Doc 561322.1). (pages 207-208)
  • Recipe 7-1 includes a SQL statement (page 210) that is described as showing the actual redo that is generated in a database – the query actually shows the sum of the bytes in the datafiles that belong to the online undo tablespace.  The formula shown for setting the UNDO_RETENTION parameter seems to calculate a value for UNDO_RETENTION that is unlikely to cause the datafiles for the undo tablespace to grow, if the statistics for the last seven (or four) days represent typical activity. (pages 209-213)
  • Recipe 7-2 shows a couple of SQL statements that are intended to show what is consuming the most undo.  It probably would be a good idea to define “consuming” in this context – should it be the case that a normal SELECT statement _consumes_ undo when it builds a read consistent view of data, while INSERT, UPDATE, and DELETE statements _generate_ the undo?  The “query to find out which session is currently using the most undo in an instance” (third query on page 214) actually does not show which session is _currently using_ the most undo, nor does it show which session generated the most undo that is _currently_ in the undo tablespace – instead it shows which session generated the most bytes of undo data _since connecting_. (pages 213-214)
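To see undo held by transactions that are currently active (as opposed to undo generated since connect), V$TRANSACTION can be joined to V$SESSION, sketched as:

```sql
-- Undo blocks and records currently held by each active transaction.
SELECT s.sid,
       s.username,
       t.used_ublk,   -- undo blocks held by the transaction
       t.used_urec,   -- undo records held by the transaction
       t.start_time
FROM   v$transaction t,
       v$session     s
WHERE  t.addr = s.taddr
ORDER  BY t.used_ublk DESC;
```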
  • Recipes 7-3 (resolving snapshot too old errors), 7-6 (resolving unable to extend temp segment), 7-7 (resolving open cursor errors), 7-9 (invoking the Automatic Diagnostic Repository command interpreter), 7-10 (viewing the alert log from ADRCI), 7-11 (viewing incidents with ADRCI), 7-12 (packaging incidents for Oracle support), 7-13 (running a database health check), and 7-14 (creating an Oracle test case for Oracle Support) seem to have drifted too far from the title of the book, into a subject that seems to be recipes for _administering_ your database when errors happen.  As such, this review will only briefly touch on a couple of these recipes.
  • Recipe 7-4 suggests that when the V$TEMP_SPACE_HEADER view indicates that BYTES_FREE is 0 for the temp tablespace, that is a serious problem for the DBA – it is quite possible that while BYTES_FREE might show 0, that does not mean that all of the existing extents are in use and cannot be automatically reused by a session (reference reference2 reference3). (page 217)
  • Recipe 7-7 suggests using a 10046 trace to determine if a session opens cursors but fails to close those cursors.  This is a good suggestion, but as of Oracle Database 11.1 it is not necessary, as suggested by the book, to check whether cursor numbers are re-used in order to determine if the cursors opened by the session are being closed.  Oracle Database 11.1 and above output an explicit CLOSE # line every time a cursor closes (the type= parameter indicates if the cursor is added to the session cursor cache).  With the changes to 10046 traces made in, exceptionally long cursor numbers (for example 90902224, 47112801352808) representing the address of the cursor are now printed in 10046 trace files rather than 1, 2, 3 or 4 digit numbers that explicitly translate to a specific slot number in the open cursor array for a session.  These large cursor numbers could pose a bit of difficulty for a person trying to determine if the cursor numbers changed for a specific SQL statement, and just because the number changes, that does not mean that the previous cursor for this SQL statement was left open.  The book states “You can use this [SESSION_CACHED_CURSORS] parameter to prevent a session from opening an excessive number of cursors, thereby filling the library cache or forcing excessive hard parses” – actually, the SESSION_CACHED_CURSORS parameter has an effect on the performance impact of soft parses, not hard parses, and that parameter is NOT designed to prevent a session from holding open an excessive number of cursors – that is the responsibility of the OPEN_CURSORS parameter that was described earlier in the recipe.  The book states, “Session cursor caches use the shared pool,” and then indicates that if automatic memory management is in use, the shared pool size will be increased if necessary due to a change in the SESSION_CACHED_CURSORS parameter.
This seems to be misleading – the cached cursors are held in the UGA, which is typically found in the PGA memory in dedicated server mode and in the SGA in shared server mode (reference1 reference2 from one of the references: “Every individiual holding a cursor open has an entry in x$kgllk – which is in the SGA – and these entries seem to be 172 bytes long in 10g (152 in 9i). So, clearly, if you hold more cursors open, you will be using more memory for these structures.” – this quote might back up the comment found in the book, in the SGA but not necessarily in the shared pool).  SGA memory regions in recent Oracle release versions are allocated in granules, and in a typical production database a granule is at least 16MB in size (increasing significantly beyond 16MB in recent releases), so it is quite possible that the shared pool might not actually increase in size, even with a large value for SESSION_CACHED_CURSORS. (pages 222-224)


Interesting And/Or Well-Above-Average Bits of Information

  • In recipe 1-1, good advice to establish default data tablespaces and tempfile tablespaces for new database users: “You don’t want users ever creating objects in the SYSTEM tablespace, as this can adversely affect performance and availability.” (pages 2-4)
  • In recipe 1-3, a helpful quote, “Sometimes folks read about a feature and decide to implement it without first knowing what the performance benefits or maintenance costs will be. You should first be able to test and prove that a feature has solid performance benefits.” (pages 8-9)
  • Recipe 1-10 includes an interesting approach to sending automated emails with Segment Advisor output. (pages 27-28)
  • In recipe 2-8 is a warning that once a function-based index is created on a column, DDL cannot be performed on that column until the function-based index is dropped. (pages 64-66)
  • Recipe 2-12 correctly states that bitmap indexes and bitmap join indexes are only available in the Enterprise Edition of Oracle.
  • Recipe 2-17 describes reclaiming unused index space, using the segment advisor to determine which indexes to rebuild. It is important that this recipe provides valid reasons for rebuilding indexes including corruption, changing storage characteristics, and being marked as unusable. (page 80)
  • Recipe 4-1 includes a notes section that correctly indicates that the database must be licensed for the Diagnostics Pack to use AWR, and the 11.1 and above CONTROL_MANAGEMENT_PACK_ACCESS parameter must be set to the correct value. (pages 113-115)
  • Recipe 4-2 demonstrates how to change the frequency of AWR snapshots and the amount of time those snapshots remain in the database. (pages 116-117)
  • Recipe 4-3 provides a warning that it is not possible to generate AWR reports across database bounces because the statistics in the dynamic performance views are lost on each bounce. (pages 117-120)
  • Recipe 4-5 demonstrates how to use the awrsqrpt.sql script to retrieve execution statistics for a SQL statement that was captured in AWR. (pages 121-122)
  • Good point in Recipe 5-3: “However, it’s important to understand that wait events only show the symptoms of underlying problems – thus, you should view a wait event as a window into a particular problem, and not the problem itself.” (page 152)
  • Recipe 6-1 shows several useful Linux commands to retrieve disk space usage and the files that are contributing the most to the disk space consumption. The df and find commands are demonstrated to locate large files and directories containing many files; the du command is used to show the largest subdirectories (pages 187-190)
  • Recipe 6-2 shows the vmstat command on Linux, and briefly describes Oracle’s OS Watcher (pages 190-192)
  • Recipe 6-3 demonstrates the Solaris prstat and ps utilities (pages 192-194)
  • Recipe 6-4 demonstrates the top utility (pages 194-197)
  • Recipe 6-5 demonstrates the ps utility (pages 197-198)
  • Recipe 6-6 demonstrates the iostat utility (pages 198-201)
  • Recipe 6-7 demonstrates the netstat utility (pages 201-202)
  • Recipe 6-8 demonstrates the ping, telnet, and tnsping utilities (pages 202-203)
  • Recipe 6-9 demonstrates the SQL*Plus ORADEBUG utility’s ability to retrieve the current SQL statement executing for an operating system process ID, and also how to retrieve the trace file name for the session (side note: it would have been helpful if the book mentioned that SYSDBA permissions are required to use ORADEBUG) (pages 204-207)
  • Recipe 6-10 demonstrates the operating system kill utility.
  • Recipe 7-6 – good example showing that an ORA-01652: unable to extend temp segment error message can point to a permanent tablespace, not just the temp tablespace.
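The recipes above lean on Linux/Unix utilities. For readers on platforms without df, du, and find, the same hunt for the largest files can be roughly sketched in Python (a portable analogue, not the book's approach; the directory walked is whatever path you pass in):

```python
import os

def largest_files(root, top_n=5):
    """Walk a directory tree and return (size, path) tuples for the
    top_n largest files -- a rough, portable analogue of the
    find/du examples shown in recipe 6-1."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or are unreadable mid-walk
    return sorted(sizes, reverse=True)[:top_n]
```

Sorting the complete result descending and keeping the top few entries roughly mirrors piping du or find output through sort -rn and head, as the recipe demonstrates.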


Currently, the review includes chapters 1 through 6, and portions of chapter 7.  If time permits, I will continue adding review entries for the remaining recipes in the book.  The “Oracle Database 11g Performance Tuning Recipes” book is partially visible in Google Books.


Blog articles that reference the “Oracle Database 11g Performance Tuning Recipes” book:

Invalid Hints are Silently Ignored? An Invalid USE_HASH Hint Transforms a Sort-Merge Join into a Nested Loops Join

Using TKPROF for Analyzing 10046 Extended SQL Trace Files – What is Wrong with this Quote?

Inline Views – What is Wrong with this Quote?

10046 Extended SQL Tracing for a Session – What is Wrong with this Quote?

Unindexed Foreign Keys – What is Wrong with this Quote?

Parse CPU to Parse Elapsd – What is Wrong with this Quote?

KEEP Pool – What is Wrong with this Quote?

The CURSOR_SHARING Parameter is a Silver Bullet – What is Wrong with this Quote?

Find Objects with the Greatest Wait Time – What is Wrong with this Quote?

Addressing TM Enqueue Contention – What is Wrong with this Quote?

Row Lock is Synonymous with TX Lock, and is a Lock on a Single Row of a Table – Verify the Documentation Challenge

NOLOGGING Option and log file sync Waits – What is Wrong with this Quote?

V$SESSION’s ROW_WAIT_OBJ# Column Shows the DATA_OBJECT_ID – Verify the Documentation Challenge

Book Review: Beginning Oracle SQL

23 08 2011

August 23, 2011

The Ideal Guide for Beginning to Intermediate Oracle SQL: Providing the Logical Reasoning Behind the Approach to Understanding and Solving Data Access Challenges

The “Beginning Oracle SQL” book is an updated version of the book “Mastering Oracle SQL and SQL*Plus”, which covered earlier Oracle release versions.  The original version, while extremely well written, suffered from a handful of typesetting flaws that resulted in most pages appearing without page numbers, the initial words of some paragraphs being discarded, the left-most columns of some tables being mangled, and a small number of typos.  The “Beginning Oracle SQL” book corrects the typesetting flaws, eliminates sections of the original book that are obsolete due to Oracle product changes, eliminates SQL*Plus related sections that can be easily found in the Oracle documentation as well as those sections that are deemed too advanced, and adds a couple of new sections.  I purchased two copies of the “Beginning Oracle SQL” book: a Google Books version through the Android market on a Motorola Xoom (unfortunately, the Google Books app crashes when displaying book pages 33 and 34 side-by-side – pages 56 and 57 as displayed in Google Books), and a PDF copy from Apress.

There are several Oracle SQL books on the market, and I have had the opportunity to read a couple of those books.  This book, much like the original, is exceptional in comparison with other SQL language books, and seems to be suitable for use in classroom environments due to the inclusion of the exploratory exercises at the end of each chapter (I did not spend much time reviewing these exercises or the Appendix contents).  The book offers a carefully constructed progression of topics that build on prior material presented in the book.

Concepts Covered by the Book, Book Specific Features, and Quick Observations:

  • Explains why we implement the various types of database constraints.
  • Draws an early distinction between the database and the database management system software, while other books muddy the term “database”.
  • Demonstrates how to get started with SQL*Plus, edit lines of the SQL statement that is currently in the buffer, and save the SQL statement that is currently in the buffer.
  • Builds a foundation of why things are as they are, and how to find where things are located before diving too deeply into SQL syntax.
  • Downloaded source code examples seem to work.
  • Plenty of forward and backward references within the book, and external references to other resources.
  • Components of the SQL statement, column aliases, regular expressions (page 125).
  • Several of the chapters (4, 5, 6, 9, 12) include SQL statements without showing the typical SQL*Plus SQL> prompt and line numbers – the formatting without SQL> is helpful when copying the SQL statements from the PDF file for execution in SQL*Plus (holding the Alt key while selecting the text is a work-around for the other chapters).
  • Changing the default object schema used by a session: alter session set current_schema=scott; (page 189).
  • Recovering a dropped table: flashback table history to before drop; (page 190).
  • Correctly states the facts, while other books state otherwise: “If you drop a table, you implicitly drop certain dependent database objects, such as indexes, triggers, and table privileges granted to other database users. You also invalidate certain other database objects, such as views and packages,” (page 190).
  • Cartesian products (page 198), equijoins (page 198), non-equijoins (page 199), self-joins (page 201), natural joins (page 203), outer joins (page 205 – both Oracle specific and ANSI syntax).
  • Offers advice to pick a SQL formatting convention and adhere to that convention (page 199).
  • Book repeatedly reminds the reader of potential problems caused by NULL values, and includes several examples which show where NULL values may cause unexpected problems for developers.
  • Nugget of advice: Test performance of a query not only with the typically small datasets found in development environments, but also with larger datasets that are more typical of production systems (page 237).
  • Correlated subqueries (page 237).
  • Scalar subqueries – subqueries in the SELECT clause (page 243).
  • Inline views – subqueries in the FROM clause (page 244).
  • Demonstrates throughout the book proper techniques for table alias names, rather than just using generic aliases such as A, B, C, etc.
  • USER_UPDATABLE_COLUMNS view shows which view and table columns can be modified through inserts, updates, and deletes (page 277).
  • Brief mention of Instead of Triggers to permit the data normally returned by a view to be updated (page 278).
  • Materialized views and automatic query rewrite (page 283).
  • Substitution variables in SQL*Plus (page 288).
  • User-defined variables in SQL*Plus (page 290).
  • Implicit user-defined variables in SQL*Plus (page 291).
  • PROMPT, PAUSE, and ACCEPT SQL*Plus commands (page 292).
  • Common SQL*Plus system variables (page 294).
  • Bind variables (page 298).
  • login.sql and glogin.sql scripts for customizing the SQL*Plus environment (page 305).
  • SQL*Plus MARKUP setting to produce HTML formatted output for SQL statements (page 318).
  • Scripting with SQL*Plus using input parameters for scripts (page 322).
  • Passing values between scripted SQL statements with the NEW_VALUE clause in SQL*Plus (page 323).
  • Passing values between scripted SQL statements with the help of bind variables (page 324).
  • SQL*Plus WHENEVER command to react to certain errors during script execution (page 325).
  • VARRAYs in custom data types (page 332).
  • Inserting VARRAY values into table columns (page 333).
  • Nested tables (page 336).
  • Multiset operators with nested tables (page 341).
  • Using CAST and COLLECT to convert an array to a nested table (page 346).
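Several of the subquery flavors in the list above (correlated subqueries, scalar subqueries in the SELECT clause, inline views in the FROM clause) are standard SQL, so they can be tried outside Oracle as well. The sketch below uses SQLite as a sandbox with a made-up employees table, not the book's sample schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (empno INTEGER, deptno INTEGER, sal INTEGER);
INSERT INTO employees VALUES (1, 10, 1000), (2, 10, 3000), (3, 20, 2000);
""")

# Scalar (correlated) subquery: one value computed per row of the outer query.
rows = conn.execute("""
SELECT e.empno,
       (SELECT AVG(sal) FROM employees d
         WHERE d.deptno = e.deptno) AS dept_avg
FROM employees e ORDER BY e.empno
""").fetchall()

# Inline view: a subquery in the FROM clause treated as a derived table.
high = conn.execute("""
SELECT deptno FROM (SELECT deptno, AVG(sal) AS avg_sal
                    FROM employees GROUP BY deptno)
WHERE avg_sal > 1500 ORDER BY deptno
""").fetchall()
```

The same query text (with Oracle's sample schema substituted) runs unchanged in SQL*Plus, which is part of the book's point about these constructs being portable building blocks.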

Much like the “Expert Oracle Database Architecture” books, this book uses Oracle error messages to teach SQL troubleshooting techniques.  Unfortunately, most of these Oracle error messages are not highlighted in the index at the back of the book, so it is difficult to locate a specific error message and its solution.  Oracle Error Messages as Teachable Moments:

  • Chapter 2: ORA-00904: “column”: invalid identifier (page 44, 48); ORA-00933: SQL command not properly ended (page 63)
  • Chapter 3: ORA-04043: object “owner”.”name” does not exist (page 77)
  • Chapter 4: ORA-00920: invalid relational operator (page 97); ORA-00913: too many values (page 106); ORA-01427: single-row subquery returns more than one row (page 108)
  • Chapter 5: ORA-01722: invalid number (page 118)
  • Chapter 6: ORA-00001: unique constraint (owner.constraint) violated (page 148); ORA-02292: integrity constraint (owner.constraint) violated – child record found (page 156); ORA-01555: snapshot too old (page 161)
  • Chapter 7: ORA-02292: integrity constraint (owner.constraint) violated – child record found (page 172); ORA-02449: unique/primary keys in table referenced by foreign keys (page 191); ORA-08002: sequence sequencename.CURRVAL is not yet defined in this session (page 193)
  • Chapter 8: ORA-00918: column ambiguously defined (page 197); ORA-00937: not a single-group group function (page 216, 221); ORA-00934: group function is not allowed here (page 218, 219); ORA-00979: not a GROUP BY expression (page 221)
  • Chapter 9: ORA-01436: CONNECT BY loop in user data (page 249); ORA-00907: missing right parenthesis (page 254); ORA-30483: window functions are not allowed here (page 258); ORA-01466: unable to read data – table definition has changed (page 261)
  • Chapter 10: ORA-00957: duplicate column name (page 269); ORA-01400: cannot insert NULL into (???) (page 275); ORA-01732: data manipulation operation not legal on this view (page 276); ORA-01752: cannot delete from view without exactly one key-preserved table (page 276); ORA-01779: cannot modify a column which maps to a non key-preserved table (page 276); ORA-01402: view WITH CHECK OPTION where-clause violation (page 280)
  • Chapter 11: ORA-00936: missing expression (page 302); ORA-01403: no data found (page 325)
  • Chapter 12: ORA-00904: “owner”.”table”: invalid identifier (page 338)
  • Appendix B: ORA-01839: date not valid for month specified (page 373); ORA-00001: unique constraint (owner.constraint) violated (page 376); ORA-02297: cannot disable constraint (owner.constraint) – dependencies exist (page 400)

The Oracle specific SQL syntax is quite broad, and while this book makes a great effort to cover much of that syntax, it still leaves plenty of advanced topics for the “Pro Oracle SQL” book (the intermediate to advanced SQL book).  The “Beginning Oracle SQL” book covers the following SQL keywords:


  • OR (page 94)
  • AND (page 95)
  • NOT (page 96)
  • BETWEEN (page 98)
  • IN (page 99)
  • LIKE (page 100)
  • CASE (page 101)
  • GROUP BY (page 208)
  • HAVING (page 217)
  • GROUP BY ROLLUP (page 222)
  • GROUP BY CUBE (page 223)
  • PARTITION BY in the FROM clause (page 227)
  • ANY and ALL operators with subqueries (page 234)
  • EXISTS operator (page 238)
  • WITH clause – subquery factoring (page 245)
  • SIBLINGS keyword in the ORDER BY clause (page 251)
  • AS OF – for use with Flashback Query (page 260)
  • VERSIONS BETWEEN operator – for use with Flashback Query (page 262)
  • WITH READ ONLY (page 267)
  • WITH CHECK OPTION (page 278)


  • REGEXP_LIKE (page 128)
  • REGEXP_INSTR (page 129)
  • REGEXP_SUBSTR (page 130)
  • REGEXP_REPLACE (page 130)
  • NEW_TIME (page 132)
  • EXTRACT (page 132)
  • ROUND (page 133)
  • TRUNC (page 133)
  • MONTHS_BETWEEN (page 133)
  • ADD_MONTHS (page 133)
  • NEXT_DAY (page 134)
  • LAST_DAY (page 134)
  • NULLIF, COALESCE, NVL2 (page 135)
  • GREATEST (page 135)
  • LEAST (page 135)
  • NVL (page 136)
  • DECODE (page 136)
  • TO_NUMBER (page 138)
  • TO_CHAR (page 138)
  • TO_DATE (page 138)
  • CAST (page 141)
  • GROUPING function (page 224)
  • GROUPING_ID function (page 225)
  • RANK analytic function (page 253)
  • LAG analytic function (page 256)
  • DENSE_RANK analytic function (page 258)
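Of the functions listed above, NULLIF and COALESCE are standard SQL (COALESCE and a CASE expression are the portable equivalents of Oracle's NVL and DECODE), so their NULL handling can be checked in any SQL sandbox; SQLite is used here purely for convenience:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULLIF(a, b) returns NULL when a = b, otherwise it returns a.
same, diff = conn.execute("SELECT NULLIF(1, 1), NULLIF(1, 2)").fetchone()

# COALESCE returns the first non-NULL argument in its list.
first = conn.execute("SELECT COALESCE(NULL, NULL, 'fallback')").fetchone()[0]
```

COALESCE(expr, default) behaves like Oracle's NVL(expr, default), with the bonus that it accepts more than two arguments and stops evaluating at the first non-NULL value.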

SQL Statement Types:

  • INSERT (page 146)
  • UPDATE (page 151)
  • DELETE (page 154)
  • MERGE (page 157)

Data Definition:

  • CREATE TABLE (pages 72, 163)
  • DROP TABLE (pages 154, 189)
  • ALTER TABLE (page 168)
  • RENAME (just a brief mention on page 169)
  • Constraints (page 170)
  • CREATE SCHEMA (page 176)
  • CREATE INDEX (page 179)
  • ALTER INDEX (page 181)
  • CREATE SEQUENCE (page 185)
  • CREATE SYNONYM (page 186)
  • TRUNCATE TABLE (page 191)
  • COMMENT (page 191)
  • CREATE VIEW (page 266)
  • CREATE TYPE (page 336, 339)

If you own the “Mastering Oracle SQL” book, you will see the following changes when reading this book (I might have missed a few changes):

  • New section “Other Query Languages, Really” – includes a brief overview of some of the query languages used by various database management systems and the different SQL dialects used by Oracle SQL, Microsoft SQL Server, and IBM DB2 (page 9).
  • New Figure 1-1 that shows the relationship between rows and columns for the DEPARTMENTS table (page 11).
  • New Figure 1-2 that shows NULL values in the EMPLOYEES table (page 13).
  • New section that introduces SQL Developer – provides just enough information to help someone who might find SQL*Plus to be too difficult to use (page 58).
  • New tip that indicates that access to the DUAL table starting in 10g is less resource intensive than using a custom table to perform the same function (page 89).
  • An explanation was added of why a NOT IN clause with NULL as one of the items fails to return any rows (page 114).
  • The multitable INSERT command syntax was removed from this edition of the book (it is now described in the “Pro Oracle SQL” book) and is replaced with an extended description that should help those who need to perform an INSERT using a SELECT statement for the row data source.
  • Provides enhanced descriptions for examples, such as the missing explanation of the UPDATE examples, which now appears on pages 153, 154-156.
  • Slightly smaller font size than the original book – more information per page.
  • Removed the discussion of the SQL*Plus AUTOCOMMIT option, frequency of COMMITs, and SAVEPOINTs just prior to section 6.6.
  • Removed the SET TRANSACTION discussion from section 6.6.
  • Section 7.6 shows using SQL Developer to display autotrace statistics rather than using SQL*Plus as in the previous edition (all mentions of iSQL*Plus have been removed from the book).
  • New description added to section 9.1 that is dedicated to demonstrating how NULLs can be problematic in EXISTS, IN, NOT EXISTS, and NOT IN clauses (pages 240-243).
  • Removed the ROWNUM and Top-N SQL section (page 245).
  • Extended example that shows why subquery factoring might be helpful.  Also included is an example that shows the steps one might take to build a factored subquery, and then use that factored subquery in a complex SQL statement (pages 246-247).
  • The subtopic of windowing (ex: ROWS BETWEEN) was removed from the discussion of analytic functions.
  • Added RELEASE and TAB to the table of common SQL*Plus system variables (page 294)
  • Removed the “HTML in iSQL*Plus” section and added the section, “11.6 Building SQL*Plus Scripts for Automation” (page 321).
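The NOT IN pitfall noted in the change list above – a NULL among the list items makes the query return no rows – is standard SQL behavior, reproducible outside Oracle; SQLite serves as the sandbox here, with a made-up single-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (n INTEGER);
INSERT INTO t VALUES (1), (2), (3);
""")

# Without a NULL in the list, NOT IN behaves as expected.
ok = conn.execute("SELECT n FROM t WHERE n NOT IN (2, 3)").fetchall()

# With a NULL in the list, n <> NULL evaluates to UNKNOWN for every row,
# so no row can satisfy the predicate and nothing is returned.
empty = conn.execute("SELECT n FROM t WHERE n NOT IN (2, 3, NULL)").fetchall()
```

The same surprise occurs when the NOT IN list comes from a subquery whose column contains a NULL, which is why the book devotes a new section to NULLs in IN and EXISTS clauses.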

It seems that even fantastic books are not immune to the problems that creep into book pages.  The editing required to bring this book up to date seems to have introduced a couple of problems, while correcting a couple of problems that made the original book difficult to use for reference purposes.  Suggestions for Improvement, Problems, and/or Errors:

  • It appears that the wrong figure was copied (figure 1-3 is a duplicate of figure 1-5) – shows an entity relationship model diagram rather than the three set operators (UNION, INTERSECT, MINUS) – the correct diagram is shown in the original book (page 15).
  • The URL pointing to the script to recreate the example schema used by the book appeared to be incorrect – this issue might be caused by a recent redesign to the Apress website.  The actual location is that of the Apress website followed by /9781430271970 (pages 19, 72, 116).
  • Book separates SELECT SQL statements into a “Retrieval” grouping, while the Oracle SQL documentation considers SELECT SQL statements as part of the “DML” grouping.  This difference might cause a little confusion.
  • Mistakenly left screen captures of SQL*Plus for Windows (SQLPLUSW) in the book (pages 50, 52).
  • Need to change ‘S02’ to ‘SQL’ so that the SQL statement produces the expected output (page 103).
  • The description of the EXP function has an NLS problem – the comma should be changed to a decimal point in the number (page 119).
  • The Summary paragraph for chapter 6 still mentions the SAVEPOINT and SET TRANSACTION commands which were removed from the chapter.
  • Book names the resultset rows of a query as a “result table”, which might be a little confusing.  For example, page 265 states, “The result of a query is always a table, or more precisely, a derived table.” (This might be 100% correct, it just seemed a bit odd to me.)
  • The descriptions of some of the SQL keywords, such as the description for CONNECT_BY_ROOT, could have benefitted from an example SQL statement.  In a couple of rare cases in the book the output of a SQL statement could have been explained a little more completely, rather than relying on the reader to completely understand what was displayed.
  • The word “unacceptable” should be replaced with “acceptable” (page 252).
  • The book states that chapter 6 demonstrated changing a session’s isolation level to READ ONLY so that query results were always as of a specific transaction start time.  It appears this section was removed from chapter 6 (page 259).
  • The book probably should clarify that an Enterprise Edition license is required in order to use Flashback Query – although the book does state that the DBA will be required to perform some configuration in advance (so maybe this is implied).
  • The permissions described on page 182 as required to use DBMS_XPLAN might be a little excessive, “You need the SELECT_CATALOG_ROLE and SELECT ANY DICTIONARY privileges.” (reference reference2)
  • The book states, “see Appendix A of this book or SQL*Plus User’s Guide and Reference for details” – Appendix A is no longer the “Quick Reference to SQL and SQL*Plus”, but instead is “The Seven Case Tables” (page 297).  A similar problem is found on pages 39, 56, 68, 81, and 89.
  • The book states: “SQL*Plus treats &1 and $2 just…” – the $2 should appear as &2 (page 303).
  • The book states: “SQL Developer has more features than SQL*Plus in this area, because it runs in a browser environment itself” – this is an unfortunate copy and replace error when removing all references to iSQL*Plus from the book (page 318).

The issues identified above are minor when compared to the value offered by the book’s contents.  I would recommend this book without reservation, whether the person’s first experience with SQL was less than ten minutes ago, or for someone who has worked with Oracle SQL for a decade or longer.  Clear, concise, and helpful, with minimal confusing details – an accurate description of not only the SQL language, but also of this book.

Book Review: Pro Oracle SQL

17 01 2011

January 17, 2011

Hard-Hitting, Beyond the Basics Advice for Breaking the Black Box Approach to Database Programming, Leveraging the SQL Language on Oracle Database, and Improving Overall Performance of the SQL that You Write

I pre-ordered this book in October 2010 while searching for a way to learn some of the more advanced features of Oracle Database’s SQL; I have been searching for a follow up book to the “Mastering Oracle SQL and SQL*Plus” book that I read a couple of years ago, and I think that I finally found that follow up book.  Written as the Oracle Database 11.2 sequel to the book “Mastering Oracle SQL and SQL*Plus” (and that book’s much updated revision titled “Beginning Oracle SQL”), this book is not written as a simple reference for the Oracle specific SQL dialect.  This is not a book that you will want to use when trying to learn the basics of creating a simple SQL statement with a four table join.  However, if you are able to create that simple SQL statement with a four table join in less than 30 minutes with the help of an entity-relationship diagram (ERD), and without using a DISTINCT clause, this book’s contents will prove to be a valuable resource not only to expand your depth of knowledge of the SQL language, but also to extract the extra value that becomes visible when Oracle Database is viewed as much more than a black box dumping ground for data. 

The authors of this book are all OakTable Network members and have between 15 and 29 years of experience working with Oracle products.  This long term exposure to Oracle Database is quite clearly an advantage when discussing many of the subtopics that are simply missed by other Oracle specific SQL language references.  While there was no discussion of the SQL language from Oracle Database 2.0, the authors are clearly comfortable with the features in Oracle Database 11.2 as well as the features that are available in Oracle Database 8i, 9i, 10g, and 11g R1.

The book’s contents are well organized.  While each chapter identifies the author who wrote the chapter (possibly indicating that the authors did not work together on each individual chapter), the book flows well with plenty of forward and backward references between chapters, as well as including references to other resources (Metalink, books, and blogs).  Chapter one of the book is a bit different from the rest of the chapters in the book, and appears to be written as a transition area for readers to become familiar with SQL*Plus and Oracle Database.  Chapter one will be valuable to readers attempting to adjust from using graphical query tools (TOAD, SQL Developer, Excel, or any number of other graphical tools) to using SQL*Plus.  Additionally, that chapter helps readers who are familiar with other SQL dialects (such as that used by SQL Server) take advantage of Oracle Database’s special characteristics, and introduces readers to multi-table INSERTs and MERGE operations.  Understanding the information presented in execution plans is an emphasis in the remaining chapters of the book; this book’s explanation of execution plan content ranks among the best, if not the best, that I have seen to date.  While there is not a single cartoon drawing in the book, and it does not appear that any sections of this book were borrowed from other books, there are several very well placed diagrams in the book’s chapters.  The book makes a significant effort to control the scope of the material presented.  In most cases, that effort resulted in very easy to understand, yet thorough discussions of complex topics, while building bridges to help the reader transition into Oracle performance specific books, such as the book “Troubleshooting Oracle Performance”.  In a couple of areas, adding an additional half-dozen words might have saved the reader a little confusion, but those are rare occurrences in this book.
Be certain to install Oracle Database’s sample schema, and download the script library for this book from the Apress website (some of the scripts, especially those in chapter 16, are quite useful and may not be printed directly in the book).

Foundation knowledge, and miscellaneous comments while reading the book:

  • Pages 3-8: Provides a quick demonstration of how to connect to the database using SQL*Plus, which is helpful for people who have primarily used other query tools, or other database platforms.  Also demonstrates the basics of executing SQL statements, setting up the SQL*Plus environment, and executing scripts.
  • The book briefly touches on why it is important to use consistent formatting and bind variables in order to reduce the number of hard parses, and why writing SQL to limit logical IO is also important.
  • Page 40: Automatic query transformation often takes place, for instance, converting an IN subquery into a standard join.
  • Chapter 2 tries to break open the black box approach to writing SQL statements.
  • The book makes effective use of analogies.
  • Page 59: The book demonstrates that the array fetch size has an impact on the number of consistent gets performed when executing a query.
  • Pages 64-65: Includes a test case that shows why an index would be used in one case to retrieve 1% of the table rows, while in another case a full table scan was more appropriate to retrieve 1% of the rows.
  • Page 66: Nice summary of what controls the number of blocks that are read in a single read call during a full table scan or a fast full index scan.  There is one potentially confusing sentence, “This could mean that a multiblock read might only read one block at a time.”  The concepts were correctly stated; however, it might be worthwhile to state that “a multiblock read might be truncated to a single block read due to the blocks that are already in the buffer cache.”
  • Pages 74-75: Describes how B*tree indexes grow from a single block in size, and what triggers the index height to increase.
  • Long code sections in the early chapters with embedded comments – probably OK for the intended audience.
  • Describes and lists when the various join methods (NESTED LOOPS, SORT-MERGE, HASH, CARTESIAN) are appropriate.
  • Pages 120-127: Explains how NULL values behave in potentially unexpected ways in IN, NOT IN, UNION, UNION ALL, INTERSECT, MINUS, GROUP BY, ORDER BY, COUNT, SUM, AVG, MIN, and MAX.
  • The book demonstrates through several examples that there is more than one way to build a SQL statement to answer a specific question, but not all methods are equally efficient.
  • Page 153: States that EXPLAIN PLAN only shows the estimated plan, while the actual plan may be different.
  • Page 160: Provides a demonstration that shows EXPLAIN PLAN sometimes shows the wrong execution plan.
  • Page 171: Provides a warning to SET SERVEROUTPUT OFF before displaying the execution plan using DBMS_XPLAN.DISPLAY_CURSOR.
  • Page 175: Describes all of the format parameters for DBMS_XPLAN, including the undocumented ADVANCED parameter.
  • Page 187: Example of collecting statistics on a table and its indexes without creating histograms.
  • Page 191: States that using an account with DBA privileges simplifies the process of using DBMS_XPLAN.DISPLAY_CURSOR, using data from other accounts, and selecting from system views, but advises to do so only in a test environment.
  • The book typically indicates the first Oracle release version that supports a feature that is being described.
  • Page 218: Includes a test case that seems to demonstrate an error in the Oracle Database 11.2 documentation.
  • Pages 223-224: Very helpful brief summary of most of the analytic functions that are discussed in the chapter – this list reduces the frustration in finding the correct analytic function that is needed to produce a desired result.
  • Page 225: Demonstrates how to create a running sum that automatically resets when the value of a column changes.
  • Page 244: Presence of the keywords WINDOW SORT in an execution plan indicates that the SQL statement uses an analytic function.
  • The KEEP keyword is not mentioned in the chapter that describes analytic functions; however, an example is provided on page 173 of the book in the pln.sql script (in the script download for chapters 5 and 15).
  • Page 283: WITH clause is known as subquery factoring, and other database platforms (and the ANSI standard) refer to subquery factoring as common table expression.
  • Page 286: Oracle may process a WITH block as either an inline view or as a temporary table.
  • Book describes several methods for optimizing query performance, and suggests revisiting queries created in older release versions of Oracle Database to take advantage of performance optimizations that are available in more recent release versions.  Also suggests revisiting PL/SQL code to determine if such procedural code may be accomplished with set-based plain SQL code.  Emphasizes testing for performance, rather than just using the first query that seems to produce the expected result.
  • Describes SQL features through version 11.2 (as does the book “Beginning Oracle SQL”).
  • Page 309: Table 10-1 quickly describes the various functions, operators, and pseudo columns that are related to the Oracle CONNECT BY syntax, including SYS_CONNECT_BY_PATH, CONNECT_BY_ROOT, CONNECT_BY_ISCYCLE, CONNECT_BY_ISLEAF, PRIOR, LEVEL, and NOCYCLE.
  • Pages 331-334: Provides various demonstrations of common mistakes with queries containing EXISTS clauses.
  • Pages 336-339: Demonstration that EXISTS and IN queries may be automatically rewritten into equivalent SQL statements; suggests looking in a 10053 trace file to see the actions performed by the optimizer.
  • Page 349: NOT IN and NOT EXISTS may return different results, and are not functionally equivalent due to the different handling of NULL values.
  • Page 355: Prior to 11g, anti-joins could not be performed on NOT IN queries unless the optimizer was sure that NULL values could not be returned.
  • Page 378: Shows how to index NULL values in a B*Tree index by using a constant as the second column in the index.
  • Page 388: Using DBMS_ROWID.ROWID_OBJECT(ROWID) to show how table rows are distributed with hash partitioning.
  • Page 388: Using ORA_HASH( column_name, 31, 0) to predict the partition into which a row will be placed in a hash partitioning scheme with 32 partitions.
  • Page 392: Function based indexes add a virtual column to the table, and those columns are viewable through the DBA_TAB_COLS view.
  • Page 393: Shows how to create a virtual column that always returns a specified calculated value.
  • Page 397: Starting in Oracle Database 11g it is possible to alter an index to make it invisible – this can be used to reduce the risks associated with simply dropping the index.
  • Page 403: Direct path inserts are invoked with the APPEND hint, causing rows to be inserted above the high water mark.
  • Page 408: DBMS_ERRLOG.CREATE_ERROR_LOG procedure creates a logging table that is the destination for rows that fail during an INSERT, UPDATE, or DELETE when the LOG ERRORS INTO clause is included in the INSERT, UPDATE, or DELETE.
  • Pages 418-421: Demonstrates with a test case that when a large percentage of rows in a table need to be updated, creating a new table and using INSERT APPEND may be much faster than a typical INSERT statement if a little down time is acceptable during the creation of the new table.
  • Page 434: Definition of ACID (Atomicity, Consistency, Isolation, Durability)
  • Page 436: ANSI isolation levels explained.
  • Page 482: Demonstrates one of the potential problems associated with using statically defined views in a query, where a query is later extended to return additional information by joining directly to one of the base tables specified in the view.
  • Page 498: Oracle 10g and 11g use rolling invalidation of cursors when statistics are collected on the objects referenced by the cursors.
  • Chapter 15: Instrumenting code, comparing execution plan pre-change and post-change to determine the performance impacts of changes, regression testing of code changes, identifying methods to cause applications to fail.
  • Page 501: Mentions V$SQL_OPTIMIZER_ENV and the underlying structure X$KQLFSQCE.
  • Page 505: Very good explanation of adaptive cursor sharing.
  • Page 510: Identifying SQL statements with large swings in execution time.
  • Page 514: Bind variables are not appropriate in all situations.
  • Page 514: “Hints are actually directives to the optimizer. As long as the hint is valid, the optimizer will obey it.”
  • Page 516: Placing the word COMMENT in front of a set of hints in a SQL statement prevents the optimizer from using the hints that follow.
  • Page 517: Script to extract the OTHER_XML column from V$SQL.
  • Page 527: Example of creating a stored outline from a SQL statement in the library cache.
  • Page 538: SQL Profiles do not lock an execution plan in place – instead they lock a cardinality estimate adjustment in place through the use of embedded hints.  It is also possible to embed specific hints into SQL statements by creating a SQL profile using the undocumented DBMS_SQLTUNE.IMPORT_SQL_PROFILE function.
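
The NOT IN versus NOT EXISTS distinction mentioned for page 349 is easy to demonstrate. The following is a minimal sketch (the table and column names are invented for illustration, not taken from the book):

```sql
-- Hypothetical tables: the child table contains a NULL in the join column.
CREATE TABLE parent (id NUMBER);
CREATE TABLE child  (parent_id NUMBER);

INSERT INTO parent VALUES (1);
INSERT INTO parent VALUES (2);
INSERT INTO child  VALUES (1);
INSERT INTO child  VALUES (NULL);

-- NOT IN: the NULL returned by the subquery makes every comparison UNKNOWN,
-- so this query returns no rows at all.
SELECT id FROM parent
WHERE  id NOT IN (SELECT parent_id FROM child);

-- NOT EXISTS: the correlated comparison simply finds no match for id = 2,
-- so this query returns the row with id = 2.
SELECT id FROM parent p
WHERE  NOT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);
```

The two forms only become interchangeable when the subquery's column is declared NOT NULL (or is filtered with an IS NOT NULL predicate), which is also why, as noted for page 355, the pre-11g optimizer restricted anti-joins for NOT IN.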

Execution Plan Operations Described (after chapter 6 “SQL Execution Plans”):

  • FILTER (page 195)
  • GENERATE CUBE (page 198)
  • WINDOW SORT (page 244)
  • SQL MODEL ACYCLIC (page 272)
  • SQL MODEL CYCLIC  (page 273)
  • SQL MODEL ORDERED (page 274)
  • COUNT STOPKEY (page 385)

 Analytic and Other Advanced SQL Constructs Described:

  • CUBE function (page 197)
  • GROUPING (page 209)
  • GROUPING_ID (page 212)
  • SUM analytic function (page 225)
  • MAX analytic function (page 225)
  • LAG analytic function (page 227)
  • LEAD analytic function (page 229)
  • FIRST_VALUE analytic function (page 231)
  • LAST_VALUE analytic function (page 231)
  • NTH_VALUE analytic function (page 232 – new in Oracle Database 11.2)
  • RANK analytic function (page 234)
  • DENSE_RANK analytic function (page 235)
  • ROW_NUMBER analytic function (page 236)
  • RATIO_TO_REPORT analytic function (page 237)
  • PERCENT_RANK analytic function (page 238)
  • PERCENTILE_CONT analytic function (page 238)
  • PERCENTILE_DISC analytic function (page 240)
  • NTILE analytic function (page 241)
  • STDDEV analytic function (page 242)
  • LISTAGG analytic function (page 243 – new in Oracle Database 11.2)
  • MODEL clause (page 253)
  • PIVOT clause (page 284)
  • SYS_CONNECT_BY_PATH (page 311)
  • CONNECT_BY_ROOT operator (page 313)
  • NOCYCLE parameter (page 316)
  • CONNECT_BY_ISCYCLE pseudo column (page 316)
  • CONNECT_BY_ISLEAF pseudo column (page 319)
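
To show the flavor of several of the analytic functions listed above in a single query, here is a hedged sketch against an HR-style EMPLOYEES table (the table and column names are assumed, not taken from the book):

```sql
-- Each analytic function operates over a window defined by the OVER clause;
-- PARTITION BY restarts the calculation for each department.
SELECT
  department_id,
  last_name,
  salary,
  LAG(salary)  OVER (PARTITION BY department_id ORDER BY salary)      prev_salary,
  RANK()       OVER (PARTITION BY department_id ORDER BY salary DESC) rnk,
  DENSE_RANK() OVER (PARTITION BY department_id ORDER BY salary DESC) dense_rnk,
  ROW_NUMBER() OVER (PARTITION BY department_id ORDER BY salary DESC) rn
FROM employees;
```

Note the difference in tie handling: RANK leaves gaps after tied salaries, DENSE_RANK does not, and ROW_NUMBER arbitrarily breaks ties to produce a distinct number for every row.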

Hints Described:

  • NO_MERGE (page 42)
  • MERGE (page 43)
  • NO_PUSH_PRED (page 49)
  • REWRITE (page 51)
  • FULL (page 86)
  • ORDERED (page 91)
  • USE_NL (page 91)
  • PARALLEL (page 277)
  • INLINE (page 286)
  • MATERIALIZE (page 286)
  • SEMIJOIN (page 339)
  • NO_SEMIJOIN (page 339)
  • ANTIJOIN (page 363)
  • USE_ANTI (page 363)
  • NL_AJ (page 363)
  • HASH_AJ (page 365)
  • INDEX (pages 374, 551)
  • APPEND (page 403)
  • APPEND_VALUES (page 403 – new in Oracle Database 11.2)
  • QB_NAME (page 516)
  • OPT_ESTIMATE (page 538)
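
As a quick illustration of two of the hints listed above, the sketch below combines QB_NAME (which names a query block so other hints and trace output can reference it) with an INDEX hint requesting an index access path. The table, column, and index names are hypothetical:

```sql
-- QB_NAME(main) labels this query block; INDEX(e emp_dept_ix) asks the
-- optimizer to access the employees table via the emp_dept_ix index.
SELECT /*+ QB_NAME(main) INDEX(e emp_dept_ix) */
       e.employee_id, e.last_name
FROM   employees e
WHERE  e.department_id = 50;
```

As the page 514 quote above indicates, a valid hint of this kind is a directive rather than a suggestion, so it is worth verifying with DBMS_XPLAN that the hint was in fact recognized as valid.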

Parameters Described:

  • _UNNEST_SUBQUERY (page 47)
  • _ALWAYS_SEMI_JOIN (page 342)
  • USE_STORED_OUTLINES (page 529)
  • SQLTUNE_CATEGORY (page 539)

Miscellaneous Useful Functions:

  • ORA_HASH function (pages 95, 388 – used to predict the partition into which a row will be placed in a hash partitioning scheme)
  • DBMS_RANDOM.VALUE (page 473)
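
The ORA_HASH partition-prediction technique from pages 95 and 388 can be sketched as follows. For a hash-partitioned table with a power-of-two partition count (here 32), ORA_HASH(expr, 31, 0) returns the zero-based partition number for each value; the table and column names below are hypothetical:

```sql
-- Predict the hash partition (0 through 31) for each partition key value.
SELECT order_id,
       ORA_HASH(order_id, 31, 0) AS predicted_partition
FROM   orders;

-- Cross-check the actual row distribution, as on page 388, by grouping on
-- the data object id embedded in each ROWID (one id per partition segment).
SELECT DBMS_ROWID.ROWID_OBJECT(ROWID) AS data_object_id,
       COUNT(*)                       AS row_count
FROM   orders
GROUP  BY DBMS_ROWID.ROWID_OBJECT(ROWID);
```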

While the “Pro Oracle SQL” book is an overall excellent book, clearly demonstrating that there was a great deal of care put into the book, there are a couple of small problem areas in the book (note that the authors appear to have addressed most of these issues on the errata page for the book on the Apress website):

  • While not significantly affecting the comprehension of the material presented in the book, there are a couple of typos in the book.  For instance, on page 9 the book states “In this book” rather than “In this chapter”; page 31 of the book uses the word “effect” rather than “affect”; and page 221 of the book demonstrates that simple connecting words were sometimes lost, “… would require multiple self-joins to [the] employees table.”  None of these typos change the intended message of the paragraphs.  However, the writing style in some of the chapters is significantly more fluid than in other chapters.
  • Pages 51-52: When the optimizer did not automatically choose to use the materialized view, the author used a REWRITE hint to force the optimizer to use it.  It probably would have been a good idea to mention that the optimizer likely bypassed the materialized view because the calculated cost of that execution plan (1935) exceeded the cost of the plan that accessed the tables directly (485).
  • Page 73: States, “Block accesses made via an index scan are made using single-block reads.”  Technically, index range scans may employ multi-block reads in certain cases (such as index pre-fetching), but stating that at this point in the book might lead to unnecessary confusion.
  • Page 85: A missing word might lead to misunderstanding: “However, in the cases where the number of subindexes needed would be smaller, the operation can be many times more efficient than a full scan as scanning [a] smaller [number of] index blocks can be more efficient than scanning [a] larger [number of] table blocks.”
  • Page 94: Missing words in an explanation regarding how hash joins work might lead to misunderstanding: “Based on table and index statistics, the table that is determined to return the fewest rows will be hashed in its entirety into memory. This hash table includes all the row data for that table…”  Based on testing, it appears that the hash table only includes the selected columns and the columns that are joined to other row sources.  Additionally, the row source that is hashed into the hash table might not be from a table.  Including this additional detail might be too much information for the scope of the book (reference).
  • Page 102: There is a risk that the full outer join Oracle syntax equivalent implementation, when the approach is applied to other data, could yield different results from the ANSI full outer join if each row returned is not unique.  One way to work around that limitation is to replace the UNION with a UNION ALL and add AND E2.ROWID IS NULL to the final WHERE clause.
  • Page 163: States, “The operation [in the execution plan] that is most indented is actually the first operation that will be executed. If there are multiple operations at the same level, the operations are executed in a top-down order.”  The Oracle Database Performance Tuning Guide for 11.2 from the Oracle documentation library states essentially the same fact, and is also incorrect.  This guideline is correct in some cases, but incorrect in other cases as can be confirmed with a 10046 extended SQL trace.  The first operation executed in an execution plan is actually the first operation from the top of the execution plan that has no child operations (reference reference2).
  • Page 379: States, “B-tree indexes are suitable for columns with lower selectivity. If the columns are not selective enough, the index range scan will be slower. Further, less selective columns will retrieve numerous rowids from the leaf blocks leading to excessive single block access to the table.”  Based on the second and third quoted sentences, it appears that the author intended to state that “B-tree indexes are GENERALLY NOT suitable for columns with lower selectivity.”
  • Page 397: States, “There is another use case for the invisible indexes. These indexes are useful to reduce the risk while dropping unused indexes… From Oracle Database version 11g onwards, you can mark the index as invisible, wait for few weeks, and then drop the index if no process is affected with less risk.”  It is important that the book states “less risk” and not that there is no risk – it would have been helpful if the book discussed what risks remain (but again this might exceed the intended scope of the book).  Just because an index is not used for a couple of weeks does not mean that the index will not be used during month-end closing, year-end processing, or some other infrequently occurring activity.  Invisible indexes on foreign key columns may still be used to prevent table locking problems on the child table when the parent’s primary key columns are updated.  Additionally, the statistics from the invisible indexes may be used by the optimizer to more accurately determine cardinality estimates (reference reference2).
  • Page 437: The third paragraph contains an apparent typo regarding the location of the online redo logs: “Undo blocks retain the before condition of the data, while the redo information is stored in the online redo logs in the system global area (SGA).”
  • Page 484: Specifies 10044 tracing when 10046 tracing was likely intended.  The third paragraph states, “I’ve also added code to enable 10044 tracing…”  Event 10044 appears to enable tracing of free list undo operations – I believe that the author intended to write, “I’ve also added code to enable 10046 tracing…”
  • Page 529: The outline_startup_trigger.sql script is not included in the script library for the book.
  • Page 539: The create_tuning_task.sql and accept_sql_profile.sql scripts are not included in the script library for the book.
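
The page 102 workaround described above (UNION ALL plus a ROWID IS NULL predicate to emulate an ANSI full outer join without collapsing duplicate rows) might be sketched as follows; the table and column names are hypothetical:

```sql
-- Branch 1: all employees, with department details where a match exists.
SELECT e1.employee_id, e1.last_name, d.department_name
FROM   employees e1, departments d
WHERE  e1.department_id = d.department_id (+)
UNION ALL
-- Branch 2: only the departments that matched no employee; the ROWID IS NULL
-- predicate on the outer-joined employees table filters out matched rows, so
-- UNION ALL can be used without duplicating them (and without UNION silently
-- collapsing legitimately duplicate rows).
SELECT e2.employee_id, e2.last_name, d.department_name
FROM   employees e2, departments d
WHERE  e2.department_id (+) = d.department_id
AND    e2.ROWID IS NULL;
```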

As indicated by the above, most of the problem areas are related to unintentional word substitutions.  The majority of the other problem areas are cases where the authors had to draw the line of discussion at a selected level of detail in order to limit potential confusion and control the scope of the book.  The remaining problem areas are minor in the overall context of the book, and might elicit a response from the authors along the lines of “I thought that I modified that phrase in draft 20 of the chapter”.  A fantastic Oracle SQL book that ventures well beyond simple syntax diagrams, and the book is an excellent value for the price.

Related blog articles that discuss the contents of this book:

  • Hash Joins – What is Wrong with this Statement?
  • ANSI Full Outer Join, Ready or Not?
  • Analytic Functions – What is Wrong with this Statement?
  • NO_QUERY_TRANSFORMATION Hint is Ignored… Well, Almost Ignored
  • Adding Comments to SQL Statements Improves Performance?

  • Book Review: Oracle Tuning: The Definitive Reference Second Edition

    7 11 2010

    November 7, 2010 (Updated November 9, 2010, November 28, 2010, December 8, 2010, February 28, 2011, March 9, 2011)

    1,100 Page True-False Quiz, “Definitive” is Left as an Exercise for the Student

    (Edit November 28, 2010: Amazon permitted me to submit a review for the book, but only for the first 85 or so pages.  Much more detail has been added to this review since the last update.)

    Based on feedback that this book’s author provided in a comment attached to one of my other (thirteen) Amazon Oracle Database related book reviews, I decided to add a couple of comments at the start of this review.  Last year I contributed to a book that was written by a total of sixteen people, co-authoring two of the chapters in that book.  While that book contained “Oracle” in its title, and the two chapters that I co-authored were Oracle performance related, the book as a whole is not a competitor of the “Oracle Tuning: The Definitive Reference” book.  I am an IT Manager and Oracle DBA, so I am not a competing author or a competing Oracle consultant.  Additionally, I only review books that I have purchased using a portion of my paycheck, so my reviews also take into account whether or not I feel that I have received my money’s worth from a book – does the book deliver on the promises of its front cover and its description.

    The author of the “Oracle Tuning: The Definitive Reference” book is described as having authored more than 30 books, is “one of the world’s top Oracle Database experts with more than 25 years of full-time DBA experience,” operates an Oracle consulting/training company, is the Editor-in-Chief at Rampant TechPress, participates in Oracle related discussion forums (including one operated by his consulting company), and produces articles for a non-Oracle specific blog.

    Why did I buy the “Oracle Tuning: The Definitive Reference” book?  Put simply, I was curious.  I participated in several discussions with Mr. Burleson on Oracle’s OTN forums where his articles and sections of his previous books were discussed.  I brought to his attention a couple of what I felt to be errors and/or omissions from articles and book sections that were linked in the OTN threads.  More knowledgeable contributors to the OTN threads admittedly provided more thorough feedback to Mr. Burleson.  So, I was curious to know whether or not some or all of those corrections and suggestions became a part of the second edition of this book. The second edition of this book is the first book that I purchased that was published by Rampant TechPress, and the first book that I purchased which was written by Mr. Burleson.  I initially ordered the book from Amazon at the end of May 2010 (it was expected to be in stock the first week of June 2010), and when it had not shipped by the end of August, my curiosity subsided due to the delay (and various other reasons) and I cancelled the original order.  Near the end of October 2010 my curiosity peaked again, so I placed a new order for the book (at that time the book was expected to be in stock the first week of November 2010).  While waiting for the book to arrive, I started reading portions of the first and second editions of the book through the Google books library.

    Considering that the word “definitive” appears on this book’s front cover, and the book publisher’s website states the following, I expected to see information that went well beyond showing long listings followed by a sentence or two: “This is not a book for beginners. Targeted at the senior Oracle DBA, this comprehensive book gives you all of the knowledge you need to be successful in tuning even the most complex Oracle database.”  So, how well did the book meet the level of expectation that was set by its front cover and publisher’s description?

    The first and second editions of this book are partially viewable through Google books, and I suggest that anyone considering the purchase of this book should first read a couple of pages of the book at a local bookstore or through the Google books interface.  In comparing the first 30 pages of the first edition with the first 250 pages of the second edition, it is apparent that a much more reasonable font size was selected for the second edition of the book.  Several logic, spelling, and grammar errors that are present in the first edition were corrected or removed from the second edition.

    Now, moving on to the problems that prevented this book from receiving a five out of five star rating.  Internet searches of phrases from the book identified several paragraphs that were directly copied and pasted from other books (most published by Rampant TechPress), other articles, the official Oracle documentation, and in at least one case part of the contents of a script found in the RDBMS/ADMIN directory of the Oracle Database home – the original sources of the material were not mentioned.  The book repeatedly emphasizes the goal of changing/flipping initialization parameters, using a scheduling utility, periodically throughout the week (or even throughout the day) in anticipation of workload changes.  Unfortunately, a number of the scripts that the author provides to help the reader with monitoring the workload, so that the reader knows when to automatically flip the initialization parameters, do not provide the information claimed in the book.  Repeated advertisements of the author’s Ion tool are unnecessary distractions throughout the book.  Most importantly, the number of errors and lack of sufficient detail prevent the book from meeting my expectations of a book that is advertised as comprehensive and targeted at the senior Oracle DBA; for example, showing the output of DESC V$SYS_TIME_MODEL and a simple listing of the statistics names in that view cannot be described as comprehensive, or as targeted at the senior Oracle DBA.

    This review is roughly 24 typewritten pages in length, several times longer than any of my previous Oracle related book reviews.  The significant number of specific errors, omissions, and distractions far exceeds the problems-per-page count of any other Oracle related book that I have read to date, and the items listed below likely miss less easily identified problems that a senior Oracle DBA with 25 years of experience or an Oracle Certified Master/Oracle Ace should be able to identify.  I am a bit disappointed that most of the errors that I previously pointed out to the book author, as well as those errors identified by others, were reprinted in, or even added to, the second edition of this book.  This book truly is not targeted at beginners, but it makes for a spirited true/false quiz or even an essay exam for someone who has a bit more Oracle Database knowledge from having read portions of the official Oracle Database documentation as well as other books.  Those DBAs who enjoy hourly parameter modification, running their databases with multiple block sizes, and performing frequent index rebuilds might find value in this book if those DBAs are able to overlook the sometimes ragged joins of sections of multiple books being copied into this book and the scripts that do not work as designed.

    Specific errors, omissions, and distractions in order for the first 200 or so pages followed by random selections from other parts of the book (the detail behind the rating assigned to the book) (edit November 11, 2010 ~ 111110: Google Books preview for the book):

    • Page 1 contains an advertisement for the author’s Ion product and some sort of a script library that can be purchased.
    • Page 6 states that the author “wrote five of the officially authorized Oracle Press books.”  Books with the “Oracle Press” logo are not officially authorized by Oracle Corporation.  “Oracle Press” is simply a marketing label applied to books published by McGraw Hill.
    • Each chapter opens with a cartoon drawing that is typically one half to two-thirds as tall as the page.
    • Page 8 states “This tuning approach is concerned with delivering data to the end-users quickly, even if it means that the database will consume additional resources.  This approach is associated with the SQL optimizer goal of first_rows_n.”  This statement is a bit sloppy.  The intention of the OPTIMIZER_MODE parameter’s value FIRST_ROWS_n (where n is one of 1, 10, 100, or 1000) is to return the first n rows as quickly as is possible, with the assumption that the remaining rows in the resultset will not be retrieved.  The statement in the book seems to imply that setting the OPTIMIZER_MODE to a value of FIRST_ROWS_n will allow retrieval of ALL of the rows from the database as quickly as possible, with the tradeoff being an increase in resource usage.  Technically, it is not the database that is consuming the resources, but instead the instance’s processes.
    • Page 10, starting to notice several paragraphs that are a single sentence in length, and this formatting continues throughout the book.  SQL code listings also seem to be formatted to consume as much vertical space on a page as is possible.
    • Page 10, confusing sentence/paragraph, “The Oracle 11g SQL tuning advisors are a fully automatic SQL tuning approach and that they use a reactive tool approach, waiting watching for problems to occur.”
    • Page 10, after stating on the previous page that “reactive tuning deals with the present, commonly using v$ views and Active Session History (ASH),” the book states on page 10, “in reactive tuning, the Oracle DBA fights the symptoms, not the disease… Reactive tuning is also problematic because we wait until end-users have been inconvenienced.”  The book author is advocating the use of what he terms “proactive tuning”, which the author describes as attempting to determine the best combination of database parameters and tablespace/table options for each SQL workload.  Confusingly, the author also states that his “proactive tuning” method uses Active Session History, Statspack, and the Automated Workload Repository.  In my opinion, reacting to what is found in ASH, AWR, Statspack reports, most V$ views, and 10046 trace files should be termed as “reactive tuning”, where the DBA responds to an actual identified problem experienced by the end-users.  What the author describes as “proactive tuning” appears to be more along the lines of initialization parameter flipping while end-users are not complaining.
    • Page 11 states “For example, if the AWR shows that the demands on the shared pool become very high between 1:00 pm and 2:00 pm, the DBA might trigger a dynamic decrease of db_cache_size and a corresponding increase of the shared_pool_size parameter during this time period.”  If Oracle’s buffer cache is using the memory specified by the DB_CACHE_SIZE, the size value specified for DB_CACHE_SIZE cannot be decreased without first manually flushing the buffer cache – flushing the buffer cache to allow the shared pool to grow for an hour is a bad idea.  It is important to keep in mind that when the SGA_TARGET parameter is specified, the values for DB_CACHE_SIZE and SHARED_POOL_SIZE specify the minimum values for the parameters.
    • Page 12, when describing examples of poor schema design, suggests that databases using extensive data normalization to minimize data redundancy is a poor design because it forces Oracle to perform unnecessary table joins.
    • Page 12 states, “If poorly designed PL/SQL is encountered, users may be able to tune the database by using array processing such as bulk collect, forall and ref cursors…”  As of Oracle Database 10g, Oracle automatically array fetches 100 rows at a time in cursor FOR loops, which might make it much less likely/necessary that a ‘user’ will re-write the PL/SQL to take advantage of bulk collection’s slightly better performance, assuming that the ‘user’ is not synonymous with ‘end-user’ (reference).
    • Page 15 a confusing sentence, “In these cases, we need to adjust the instance parameters need to change, depending on the time-of-day and day-of-week.”
    • Page 17, the timeline states, “1990s: Oracle is developed and relational databases dominate the IT market.”  This timeline statement, giving consideration to the lack of mention about databases prior to 1990, seems to indicate that Oracle Database version 2 did not exist in 1979.
    • Page 17 states, “RAM speed has not improved since the 1970s while CPU speeds get faster every year.”  This is an inaccurate statement about memory speed not improving.
    • Page 19, figure 1.2, which is described as “The Intel CPU Architecture of the Early 21st Century” seems to be terribly out of place in this section of the book, the acronyms contained within the figure are not explained, and the figure fills half of the page.
    • Page 21, figure 1.3, which is described as, “The Changing Dynamics of Human and Hardware Costs” simply shows the intersection of an ascending line labeled as People Costs and a descending curve labeled as Hardware Costs.  It might have been helpful to provide some sort of numbers on the illustration to indicate a timeline and monetary figures – at what point did hardware costs and DBA costs equalize?
    • Page 22 states, “The number one cause of poor performance is the over-normalization of Oracle tables.”  Interesting.
    • Page 22 states, “All SQL should use bind variables, preferably in the code or via cursor_sharing=force, to make SQL reusable within the library cache.”  In general, most SQL statements should use bind variables, but of course there are occasions where bind variables should not be used.  “All” is too strong a recommendation, and failure to use bind variables does not automatically make an already hard parsed SQL statement unavailable for reuse.
    • Page 22 states, “Failure to set pga_aggregate_target to allow the CBO to choose hash joins can result in very slow SQL performance.”  It is unclear from this section in the book if the PGA_AGGREGATE_TARGET must just be set to a non-zero value, or if there is some other value that it must be set to in order for Oracle’s optimizer to consider using hash joins.  Considering that the optimizer may select to use a hash join when the PGA_AGGREGATE_TARGET parameter is set to a value of zero, and also when that parameter is set to a small value, this statement found in the book is vague.
    • Page 25 states “The first tasks when tuning a database are to identify the external bottleneck conditions, which may include: CPU Bottleneck,… RAM Bottleneck,…  Network Bottleneck, … and Disk Bottleneck.”  While it is important to check these items, I suggest that these probably would not be the first items to check when tuning the database instance’s performance.  Yes, checking whether or not the CPUs are overloaded (or heavily loaded) is important, but maybe it would be better to determine if the overload is causing a measurable negative impact on performance by first checking the delta values of the various wait events, system statistics, and system time model statistics before deciding to check, for instance, whether there are network bottlenecks.
    • Page 25 states, “When the number of run queue waits exceeds the number of CPUs on the server, the server is experiencing a CPU shortage… Note that a 10% CPU level is not always a concern as modern servers are designed to keep the processors as busy as possible.”  The first question is how would someone measure the number of run queue waits.  Is this a measurement over a period of time, or is it an instantaneous measurement of the number of processes waiting to run on the CPUs, or is it an instantaneous measurement of the number of processes currently running on the CPUs plus the number of processes waiting to run on the CPUs?  The second question is whether or not the system can be considered to be CPU bound before the run queue length matches the number of CPUs in the server; for example, previous pages in the book suggested setting the processor affinity or using VMWare to confine the instance (or its processes) to a subset of the CPUs, or to use the Unix nice command to alter the dispatching priority of processes.  The third question is why was 10% selected, and not 50%, 75%, 90%, or 100%?  The first edition of the book did not include the word “wait” in the quoted sentence, and I believe that word causes additional confusion in this edition of the book.
    • Page 25 states that “Large amounts of Oracle*Net traffic contribute to slow SQL performance.”  The book does not describe what is considered “large amounts”, nor what to do about addressing the network bottleneck.
    • Page 25 under the heading of disk bottlenecks the only issue identified is “using RAID5 for high update systems.”  The adjective “high” is never clarified.
    • Page 25 states, while discussing potential solutions for over-allocation of the server’s RAM, that the options for “excessive swapping is to add more RAM, reduce the size of Oracle’s SGAs, or turn on Oracle’s Shared Server.”  Are these the only options, or could more suggestions be recommended?  For example, “size of Oracle’s SGAs” implies that more than one instance is running on the server – would it make sense to reduce the number of instances on the server (suggesting so would, of course, disagree with an earlier suggestion of moving most/all of the instances to a single high-end internally redundant server)?  Are there any reasons not to implement a shared server configuration?  What about tuning the SQL, analyzing excessive PGA utilization, etc.?
    • Page 26, the author appears to misuse the term “network latency”, or at least stretch the meaning of latency as it applies to a network.
    • Page 26 and 27 include half page advertisements for the author’s Ion tool.  Pages 42, 46, 60, 62, 68, 71, 73, and 89 also include an advertisement for the tool.
    • Page 27 states “In a disk-bound database, the majority of the wait time is spent accessing data blocks.”  It is quite possible in a non-disk-bound, well performing database instance that the majority of wait time could very well be spent reading data blocks.  In fact, these events should be among the top 5 wait events, otherwise there probably is a significant performance problem.  It is also quite possible that the system could be disk-bound when the majority of wait time is spent _writing_ to the redo logs, archiving redo logs, and updating data/undo/temp/control files.
    • Page 27, the Top 5 Timed Events that are described as demonstrating a disk constrained database, shows 44 waits for the CPU.  Waits for the CPU cannot be reported in the Top 5 Timed Events – this data in the report is bogus.  The report also shows that the average single block read time is 2.75 seconds, while the average multi-block read time is 0.127 seconds (127ms) – this data in the report is also likely bogus.  The report also shows that 1,363 seconds were lost in 673 waits (average of 2.03 seconds) to the “library cache load lock” wait – if that wait appeared in the top 5 report, even in a report showing the single block read time is 2.75 seconds, I think it would be a good idea to start with the “library cache load lock” wait since that is one of the waits that indicates a severe problem if it appears in the top 5 wait events.
    • Page 28 states “CPU enqueues can be observed when the CPU run queue exceeds the number of CPUs on the database server. This can be seen by… If the system is already optimized, having CPU time as a top wait event is a positive because the addition of faster CPUs or more CPUs will relieve the bottleneck.”  The effects of CPU over-load may be observed long before the CPU run queue exceeds the number of CPUs in the server, whether it be longer average waits for the log file sync wait event, increased waits for latches, increased wait time for single block or multi-block reads, or any number of statistics in the system time model (V$SYS_TIME_MODEL, V$SESS_TIME_MODEL).  Additionally, upgrading and/or adding CPUs is not a cheap option from a licensing standpoint, and is not guaranteed to remove CPU time from the top five “wait” event list (it might even become more prominent as more processes may then simultaneously spin while trying to acquire a latch).
    • Page 28 shows the Top 5 Timed Events portion of an AWR report that includes 4,851 waits for CPU time – a Top 5 report cannot show the number of waits for the CPU – this data in the report is bogus.  Second, the CPU is a limited “wait” resource with an upper utilization limit equal to the number of CPUs in the server multiplied by the elapsed number of seconds, while the other foreground wait events, for the most part, have an upper limit of the average number of sessions (plus parallel related additional processes) connected to the instance multiplied by the elapsed number of seconds.  It is quite possible that a server experiencing excessive CPU usage may not cause the “CPU time” to appear in the top 5 timed events.  Third, we do not know how many CPUs are in the server – would it make a difference if there was only a single CPU compared to 96 CPUs?  Fourth, we have no concept of the elapsed time represented by the report – what if the elapsed time for the report is eight hours (the total reported CPU time is 4,042 seconds)?  Fifth, if the single block read time averages more than a second, shouldn’t the investigation start with the IO system?  Sixth, the most that can be determined from the top 5 timed events is the average utilization, number of waits, and number of timeouts – why are the number of timeouts not shown in the book?  From the top 5 timed events it is difficult, if not impossible, to determine if there wasn’t a brief period of intensive activity, followed by a long period of almost no activity – or if the average activity levels were fairly constant in the time period.
    • Page 28, the Top 5 Wait Events portion of an AWR report (what caused the switch from Top 5 _Timed_ Events as shown in the previous example) is labeled as a mockup – probably a good idea that this is labeled as such because the report shows 94,754 seconds where the server was waiting to send data across the network while single block and multi-block reads totaled only 85.75 seconds.  That is why you do not use a satellite link for the network backbone.
    • Page 29 states “Individual Program Global Areas (PGAs) for each session are a set of running programs that do work for the instance and are referred to as processes.”  To put it simply, PGA is much closer to a concept of “an allocated block of memory”, than it is to “a set of running programs” (reference).
    • Page 30 describes the DB_CACHE_SIZE parameter as “the number of data buffers to allocate for the instance.”  The parameter does not specify the number of data buffers (blocks?), it specifies the amount of memory in bytes to allocate to the DEFAULT buffer cache (or the minimum amount in the event the SGA_TARGET is specified).
    • Page 30, a self-conflicting sentence that seems to make no sense. “The most important parameters for instance tuning are those that are immutable because they cannot be changed without starting and stopping the instance or using alter session commands or SQL hints.”
    • Page 30, the DB_FILE_MULTIBLOCK_READ_COUNT parameter is listed among the parameters that cannot be changed without bouncing the database – that is not correct.  When workload CPU costing is in use, the value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter also has no effect in costing of multi-block reads, in stark contrast to the statement made by the book that it is (implied always) used in optimizer costing calculations.
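Contrary to the book’s claim, the parameter is dynamically modifiable; a minimal sketch (SCOPE=BOTH assumes the instance was started with an SPFILE):

```sql
-- No instance bounce required; the parameter may be changed instance-wide...
ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=BOTH;

-- ...or for just the current session
ALTER SESSION SET db_file_multiblock_read_count = 64;
```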
    • Page 30 states “parallel_automatic_tuning:… since parallel full-table scans are very fast, the CBO will give a higher cost-to-index access and be friendlier to full-table scans.”  This is simply incorrect.  The optimizer does not increase the cost of index access paths when costing parallel access paths that use full table scans – it lowers the cost of the full table scan relative to the parallel degree.  Second, parallel full table scans are not automatically very fast, especially if the IO subsystem is bottlenecked.
    • Page 30, the description of the OPTIMIZER_INDEX_COST_ADJ parameter is very weak, “this parameter controls the relative costs of full-table scans versus index scans.”  It would be much better to state that the parameter specifies the percentage of the index access cost to be retained.  Setting the value to a very low value could cause the optimizer to select not only an index access path rather than a full table scan, but to select the wrong index due to cost rounding problems as the value of the OPTIMIZER_INDEX_COST_ADJ parameter approaches 1.  Ideally, since at least Oracle Database 10.1 (and possibly as early as 9.0.1) this parameter should be left at the default value of 100 due to the Oracle optimizer’s use of system (CPU) statistics by default (the default behavior of system statistics usage changed in 10.1).
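A sketch of the percentage behavior (the cost figure is hypothetical): with the setting below, an index access path that the optimizer costed at 500 is treated as having a cost of 500 × 10 / 100 = 50, which is how very low values lead to the cost rounding problems mentioned above:

```sql
-- Retain only 10% of each computed index access cost (hypothetical value;
-- the default of 100 leaves index access costs unscaled)
ALTER SESSION SET optimizer_index_cost_adj = 10;
```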
    • Page 30, the description of the OPTIMIZER_INDEX_CACHING parameter is vague – the book doesn’t indicate when the value of the parameter is used in cost calculations, and when it is not used.
    • By page 30 the book has recommended the use of SQL Performance Analyzer, SQL Tuning Advisor, AWR, ASH, accessing the various DBA_HIST views, and accessing the various performance views in Enterprise Manager without mentioning the various extra cost licenses required to use those features.  For example, the book should have mentioned that the Diagnostic Pack must be licensed for access to AWR information, and the Diagnostic and Tuning Packs must be licensed for access to the SQL Tuning Advisor, long before any mention of needing to purchase additional licenses on page 31.
    • Page 33, the half-page SQL script printed in the book and in the source code depot is Oracle Database’s DBA_FEATURE_USAGE_STATISTICS view definition found in the catdbfus.sql file located in the RDBMS/ADMIN directory of the Oracle home.  For some reason this script is marked as “Copyright © 2005 by Rampant TechPress” in the script library for the book, while the original version found in the Oracle home is marked “Copyright (c) 2002, 2005, Oracle. All rights reserved.”  No effort is made in the book to interpret the meaning of the columns returned by this view.
    • Page 34 states “Starting in Oracle 10g, statistics collections are automated, but users may still need to selectively add histograms and other specialized optimizer statistics.”  The book does not state what triggers the statistics collection for a specific object, at what time the statistics are collected, that histograms (maybe excessively) are automatically generated, nor the potential problems that might be encountered after the statistics collection.
    • Page 34 indicates that table compression is introduced in Oracle 11g.  That feature was actually introduced in Oracle 9i R2 (reference).
    • Page 35 states “In Oracle 11g we have SQL profiles and the SQL Access Advisor to help identify sub-optimal SQL statements. Once identified, the SQL profile utility will allow changes to execution plans without adding hints.”  This is untrue – a SQL profile is a series of hints that are automatically generated and applied to the SQL statement during cost-based optimization.
    • Page 36 mentions that Oracle 10g will sometimes automatically rewrite a SQL statement into a more efficient form, but does not provide much, if any, detail describing why this is done, how to see that it has happened, or what to do if the rewritten form is not more efficient (Oracle 9i’s cost-based optimizer was also able to rewrite SQL statements).
    • Page 39, frequently uses the term “Oracle” to mean both Oracle Corporation and Oracle Database, even in the same sentence, and that makes it a bit difficult to comprehend the intention of some of the passages in the book.  For example, “Oracle first addressed this problem by developing special advisory utilities in Oracle.”
    • Page 39, the text at the bottom of the page appears to be a copy and paste of text that appeared in a book titled, “High Performance Data Warehousing” which was originally copyrighted in 1997.
    • Pages 40, 90: is the large Trumpet of Doom graphic useful?
    • Page 46, the bottom half of this page is largely a direct copy and paste of pages 8 and 9 of an article titled “Metric Baselines: Detecting Unusual Performance Events Using System-Level Metrics in EM 10GR2” that was written by John Beresniewicz at Oracle Corporation.  The article is marked Copyright 2005 by Oracle Corporation (reference which was hosted by permission of the author).
    • Page 47 states, “When the data buffer hit ratio falls below a predefined threshold, it might be useful to add RAM to the data buffer cache.”  It is not a good idea to tune databases based on the buffer cache hit ratio, a ratio that started falling out of favor by the mid to late 1990s (reference).
    • Page 49 states that when the buffer cache hit ratio drops below 70%, the DBA should set up the system to dynamically adjust the buffer cache.
    • Page 51, the text in the last section of the page is largely a copy and paste of page 36 of the book “Oracle Silver Bullets: Real-World Performance Secrets”.
    • Page 52, it would have been helpful if the book briefly described why each of the statistics that were listed are the most commonly used statistics for Oracle exception reporting.
    • Page 53, the SQL statement lacks an ORDER BY clause, which possibly means that the SQL*Plus BREAK ON command will add extra line breaks in unexpected positions in the output.
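For reference, BREAK ON only suppresses duplicate values reliably when the rows arrive sorted on the break column, so the break column should also appear in an ORDER BY clause; a minimal sketch of the pattern (DBA_DATA_FILES is used here only as a convenient example):

```sql
-- BREAK ON assumes rows arrive grouped by the break column, so the
-- same column must lead the ORDER BY clause
BREAK ON tablespace_name SKIP 1

SELECT tablespace_name, file_name, bytes
  FROM dba_data_files
 ORDER BY tablespace_name, file_name;
```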
    • Page 56, table 2.2 is almost an exact copy and paste, ignoring slight formatting differences, of the Oracle Database Reference documentation from Oracle Corporation for DBA_HIST_TEMPSTATXS (reference).
    • Page 57, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values, with no attempt to calculate the delta values of the statistics.  This is one of the SQL statements that the book suggests for trend identification with the goal of scheduling the automatic changing of initialization parameters.
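Because the statistic values captured in the AWR repository are cumulative since instance startup, a trend query needs snapshot-to-snapshot deltas; a minimal sketch of the missing calculation using the LAG analytic function (the statistic name is just an example):

```sql
-- Convert cumulative AWR statistic values into per-snapshot deltas;
-- a negative delta marks an instance restart and starts a new series
SELECT snap_id,
       value - LAG(value) OVER (PARTITION BY dbid, instance_number, stat_name
                                ORDER BY snap_id) AS delta_value
  FROM dba_hist_sysstat
 WHERE stat_name = 'physical reads'
 ORDER BY snap_id;
```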
    • Page 58, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
    • Page 61, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
    • Page 65 simply shows a DESC for V$SYS_TIME_MODEL and V$SESS_TIME_MODEL, and includes a list of the STAT_NAME values found in those views without attempting to describe the meaning of the STAT_NAME values.
    • Page 67, the author does not explain the parent-child relationships of the statistics from V$SESS_TIME_MODEL, rather the query just sorts the statistics in descending order by time.
    • Page 69, the SQL statement does not take into account that some of the sessions may have been connected for more than a year, while other sessions may have been connected for only a couple of minutes.  Consider how a check of the db time statistic for the two sessions might show similar values for the two sessions – which of the two sessions represents more of a potential impact on system-wide performance?
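One way to account for the differing connection durations is to divide each session’s DB time by the number of seconds the session has been connected; a hedged sketch (V$SESS_TIME_MODEL reports the value in microseconds):

```sql
-- DB time per connected second: a session logged on for a year with the same
-- total DB time as a session logged on for two minutes is far less interesting
SELECT s.sid,
       s.logon_time,
       t.value / 1000000 AS db_time_seconds,
       (t.value / 1000000) /
         GREATEST((SYSDATE - s.logon_time) * 86400, 1) AS db_time_per_second
  FROM v$session         s,
       v$sess_time_model t
 WHERE t.sid = s.sid
   AND t.stat_name = 'DB time'
 ORDER BY db_time_per_second DESC;
```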
    • Page 70 contains the first SQL statement querying AWR data that compares two snapshots; unfortunately, the script header in the book does not match the script header in the script library for the book, making it difficult to locate the script.  Matching scripts listed in the book with those in the script library is difficult for many of the scripts due to the scripts being labeled differently in the book and script library.
    • Page 74, the book states that AWR and Statspack reports show time model statistics, and then proceeds to show operating system statistics from an Oracle Database 10.1 report which shows statistics ending with _TICKS, rather than _TIME which became standard starting with Oracle Database 10.2.  It would have been helpful if this section of the book were updated to show the _TIME statistics, or at least mention that the reader will likely see _TIME statistics rather than _TICKS statistics.
    • Pages 81 through 83, the discussion of Oracle Data Miner and decision support systems seems to be off topic for this book.
    • Page 85 states, “this can be plotted as a 1/x function and the exact optimal point computed as the second derivative of the function 1/x as shown in Figure 4.1.”  Many fancy words in this sentence, but no example is provided that shows how to translate the displayed statistics into the graph, how to calculate the second derivative of the 1/x function, or exactly what x is.  Quite often the data contained in the V$DB_CACHE_ADVICE view does not conform to any kind of curve that is represented by figure 4.1.
    • Page 87, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.  This is one of the SQL statements that the book suggests for trend identification with the goal of scheduling the automatic changing of initialization parameters.
    • Page 91, the cumulative nature of the AWR data seems to imply that the author’s database is bounced late every Sunday night – other than that, the chart’s information is nearly useless because the delta values of the AWR statistics were not calculated.
    • Page 92, table 4.2 appears to be a direct copy and paste of the Oracle documentation library for DBA_HIST_ACTIVE_SESS_HISTORY (reference).
    • Page 93, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.  One might wonder how this SQL statement would behave in a RAC environment.
    • Page 95, table 4.3 appears to be a direct copy and paste of the Oracle documentation library for DBA_HIST_FILESTATXS (reference).
    • Page 96, it is interesting that the script output on page 96 matches the script output on page 57.  The SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
    • Page 100 contains what is essentially a full-page advertisement for the author’s remote DBA consulting company.
    • Page 102, two unnecessary cartoon drawings, one apparently depicting an upset IT manager, and another depicting a person wearing a ski mask while working on a computer.
    • Page 103, the case study suggests that changing the OPTIMIZER_MODE from the default of ALL_ROWS to FIRST_ROWS_100, followed by restarting the database instance corrects performance in OLTP type databases.  “Their DBA did not understand the difference between the optimizer goal of all_rows, which minimizes computing resources and favors full-tale [sic] scans and the first_rows_n optimizer which sacrifices fast throughput in favor of fast access via indexes.”  First problem with this advice: bouncing the database is not necessary when changing the OPTIMIZER_MODE parameter’s value – simply changing the parameter value will automatically force a hard parse of each SQL statement the next time that SQL statement is executed – the client experienced an unnecessary system outage when the database was bounced, as well as losing the performance benefits derived from the blocks already residing in the buffer cache.  Second problem with this advice: it might simply be the case that due to the presence of histograms, some of the execution plans were less than optimal due to the specification of uncommon (or very common) bind variable values on the initial hard parse as a result of the optimizer peeking at the initial bind variable values to generate an optimal execution plan for those values.  Adaptive cursor sharing, introduced in 11.1, reduces the risks associated with the combination of bind variable peeking and histograms on columns (this was not mentioned).  By changing a global setting, such as the OPTIMIZER_MODE parameter, rather than spending a couple of minutes enabling a 10046 trace for a slow session, or examining an AWR or Statspack report to identify resource intensive SQL statements, the author possibly caused other performance problems for any SQL statement that, for example, might exist in a report which typically returns more than 100 rows.  
If a query is expected to return 100 or fewer rows, the execution plans should be identical when the OPTIMIZER_MODE parameter is set to ALL_ROWS and when it is set to FIRST_ROWS_100 (this value tells the optimizer to assume that all rows after the first 100 will be discarded) which probably describes most of the queries that are commonly executed in OLTP databases (with the exception of a few).  Third problem with this advice: the author did not describe why FIRST_ROWS_100 was selected rather than FIRST_ROWS_1, FIRST_ROWS_10, or FIRST_ROWS_1000 (or actually even provide a good explanation why ALL_ROWS was the wrong setting).
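For reference, the parameter change the author needed takes effect without an outage; a minimal sketch (SCOPE=BOTH assumes an SPFILE):

```sql
-- No bounce required: each SQL statement is hard parsed under the new
-- optimizer mode the next time that statement is executed
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS_100 SCOPE=BOTH;
```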
    • Page 104 states, “in this emergency, an Oracle9i shop called complaining about a serious degradation in SQL performance right after implementing a partitioned tablespace.”  I was not aware that a tablespace could be partitioned, and a search of the Oracle documentation library found nothing on this topic.  The author provided a DBMS_STATS.GATHER_SCHEMA_STATS command with several parameters.  Only the DEGREE parameter was explained, stating that it allowed collecting the statistics using 15 parallel processes.  I wonder if the author reset the parallel degree of the indexes belonging to the schema from the resulting value of 15 to the normal value of 1?
    • Page 104, the author references output generated by his plan9i.sql script, and indicates that the output showed unnecessary full table scans of large tables.  We really do not know if this database had been online for a couple of hours, or a couple of years based on the output.  In examining the contents of the plan9i.sql script, there are a couple of problems with the script.  First, it queries V$SQLAREA rather than V$SQL, so it is completely blind to the possibility that multiple child cursors might exist for the same SQL statement, each with different execution plans.  Second, the script reports on all SQL statements that are still in the library cache when the sum of the executions of those SQL statements is ten or greater when grouped by the table’s name – this sum becomes the NBR_FTS column.  Therefore, the NBR_FTS column does not take into account the fact that a full table scan of a table may be performed more than once per execution of the SQL statement, and fails to count the full table scans that happened for SQL statements already aged out of the library cache.  Third, the case study did not discuss how to examine whether or not the full table scans were more efficient than available index access paths, or how it was found that a common WHERE clause in the query contained a bind variable name that was wrapped in single quotes.  Fourth, how did creating a single index fix the five tables in the output, yield a 50-fold performance improvement, and reduce disk I/O by 600%?
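The child cursor blindness described above is easy to demonstrate; a minimal sketch that lists SQL statements having more than one distinct execution plan among their child cursors, something a V$SQLAREA-based script cannot see:

```sql
-- V$SQL exposes individual child cursors, while V$SQLAREA collapses them
SELECT sql_id,
       COUNT(*)                        AS child_cursors,
       COUNT(DISTINCT plan_hash_value) AS distinct_plans
  FROM v$sql
 GROUP BY sql_id
HAVING COUNT(DISTINCT plan_hash_value) > 1
 ORDER BY distinct_plans DESC;
```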
    • Page 105, how did creating a single function based index fix the five tables in the output and result in a five-fold performance improvement?
    • Page 105, the silver bullet tip to change the CURSOR_SHARING parameter to FORCE did not warn of any of the possible side-effects, which may include poor performance when cursors that should not be shared because uneven data distributions may cause the optimizer to generate execution plans that are inappropriate for other literal values, or bugs related to CURSOR_SHARING that either result in wrong data being returned, or instance stability problems.  It would be interesting to see how the 75% performance improvement was achieved by changing this parameter.
    • Page 106, as mentioned previously, the author’s plan9i.sql script shows something other than what is described, so the output presented in the book may or may not be useful.  Additionally, the author did not state whether the database had been online for an hour when the report was generated, or three years.  The author’s buf_blocks.sql script, mentioned in this section of the book, is a bit interesting.  The script builds a normal table T1 by selecting a count(distinct file# || block#) of the blocks in the buffer cache, joined to DBA_OBJECTS, when grouping on the OWNER, OBJECT_NAME, SUBOBJECT_NAME, and OBJECT_TYPE.  The script then joins the T1 table to DBA_SEGMENTS in order to calculate the percentage of the object’s blocks that are currently in the buffer cache for all objects with at least 11 blocks in the buffer cache.  Why does the script’s output, as printed in the book, show six tables with seven or fewer blocks in the buffer cache?  Additionally, if these table blocks really were accessed frequently, wouldn’t Oracle automatically keep the blocks in the buffer cache by promoting the blocks closer to the MRU end?  The author’s buf_keep_pool.sql script is also a bit interesting, as it generates SQL statements to alter all objects that have more than 80% of their blocks in the buffer cache, so that the objects will reside in the KEEP buffer cache (if not already in the KEEP buffer cache).  It seems slightly counter-intuitive to move these objects out of the DEFAULT or RECYCLE buffer cache and into what is likely a smaller KEEP buffer cache.  Additionally, this script could not have been used as described in the book because all of those tables listed in the book had less than 0.45% of their blocks in the buffer cache.
    • Page 107, technically, the 2GB per process limit on 32 bit Windows can be raised to almost 3GB (Metalink recommends lowering the maximum from 3GB) by adding the /3GB parameter to the boot.ini file (reference).
    • Page 107, Figure 5.1 is described as showing the output of the author’s plan9i.sql script.  Figure 5.1 actually shows a “Top 5 Timed Events” report, possibly from Statspack.  The book states that the database was crippled by db file scattered read wait events, yet the “Top 5 Timed Events” printed in the book shows a much more sinister problem.  While 3,246 seconds were spent on the db file scattered read wait event (average wait time 0.010 seconds), 1,363 seconds were spent on the library cache load lock wait event (average wait time 0.29 seconds), and 7,146 seconds were spent on the db file sequential read wait event (average wait time 0.013 seconds).  If I was presented with the same “Top 5 Timed Events”, I probably would have been focusing on the time lost to the library cache load lock wait, rather than trying to create a materialized view.  It is rather amazing, considering the “Top 5 Timed Events”, that the three materialized views reduced disk I/O by more than 2,000 percent and improved database performance 30 fold.
    • Page 108, the case study that suggests implementing bitmap indexes seems to be flawed.  An AskTom thread detailed the flaws better than I am able to do so (reference).
    • Page 108, the adding freelists case study lacks sufficient detail for a person to be able to use that case study to help solve a problem experienced by the reader.  For example, the book does not describe how the author was able to examine V$SQL and discover that “virtually all” of the DML statements were inserts into the CUSTOMER_ORDER table – what in V$SQL did the author examine?  Were there no SELECT statements (those are considered DML also)?  The author stated “the top timed event was buffer busy waits and it was clear that there were enqueues on the segment header blocks for the table and its indexes.”  The author does not describe how the transition from “virtually all” DML were inserts into a single table (I wonder if the header detail and order line detail were both contained in the same CUSTOMER_ORDER table) was made to stating that the buffer busy wait problem was in fact indicating a problem with waits for the segment header blocks for the CUSTOMER_ORDER table and its indexes.  Was it just coincidence because the two seemed to be happening at the same time, and was therefore a lucky guess?  The author’s website stated that this was a database (reference), so how did the author determine that the buffer busy waits were not caused by the same action that is reported as the read by other session wait in 10g?  Did the author check V$WAITSTAT, V$SESSION_WAIT, generate a 10046 trace, or do something else?  Setting the freelists for the CUSTOMER_ORDER table and an index named CUST_PK to a value of 5 immediately saved the client $500 per minute?  Finally, what conditions would have to be present for a buffer busy wait to last 30 seconds, was someone playing with bitmap indexes (assuming that some of the sessions were also waiting in enqueue waits), and would that problem be fixed by altering the freelists?
(Edit: December 9, 2010: this $500 per minute fix is also described on page 697, but in this case the book mentions that the Oracle Database release version is in fact
    • Page 110 states, “with the Internet, there is a wealth of information about Oracle troubleshooting and some of it is good, while most of it is absolute garbage.  Every self-anointed Oracle expert is touting their own methodology for Oracle troubleshooting, and some are absolute zealots in proclaiming their approach as the ‘best’ method.”  I am having a little trouble translating these two sentences.  Is the author stating that the Oracle experts who smear themselves with perfumed oil will refuse to believe any other method is better?  Or is the author stating that some people who self-proclaimed that they are one of the best Oracle experts in the world, have a tendency to game the Internet search engines?  Either way, the two sentences should be rewritten to clarify the author’s intention.
    • Page 111, the quote printed in the book from the documentation appears to have been re-worded in the 11g documentation, however I am not sure that the quote indicates that Oracle Corporation recommends the use of the buffer cache hit ratio for much of anything.
    • Page 112, the book describes how in a well-tuned database the buffer cache hit ratio metric is useful.  Half of page 112 is devoted to showing a buffer pool advisory for an instance with a 52MB buffer cache.
    • Page 115, the script printed in the book for calculating the buffer cache hit ratio accesses SYS.V_SYSSTAT, rather than the more common view synonym V$SYSSTAT.  The printed script also does not match the buffratio.sql script from the book’s script library, where the book specifies the “db block gets”, while the script from the script library specifies “physical reads direct (lob)”; the book specifies “consistent gets”, while the script from the script library specifies “session logical reads”.  Worse yet, neither formula matches the formula printed in the Oracle Performance Tuning Guide, which is part of the official Oracle Database documentation (reference).
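For comparison, a sketch of the calculation as it appears in the Performance Tuning Guide, using the 10g statistic names from V$SYSSTAT:

```sql
-- Documented DEFAULT pool calculation:
-- 1 - (physical reads cache / (consistent gets from cache + db block gets from cache))
SELECT 1 - (p.value / (d.value + c.value)) AS buffer_cache_hit_ratio
  FROM v$sysstat p,
       v$sysstat d,
       v$sysstat c
 WHERE p.name = 'physical reads cache'
   AND d.name = 'db block gets from cache'
   AND c.name = 'consistent gets from cache';
```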
    • Page 117 states, “by default, this [TIMED_STATISTICS] parameter is set to false, which disallows the collection of wait times for each wait event defined in the Oracle engine.”  This statement is incorrect as of the release of Oracle 10.1.  The TIMED_STATISTICS parameter defaults to TRUE when the STATISTICS_LEVEL parameter is set to either the default value of TYPICAL, or the value of ALL (reference).
    • Pages 117-118 list several wait event names that the author states “provides no real value to a DBA who is trying to figure out where a database bottleneck exists.”  The list includes: lock element cleanup, SQL*Net message from client, SQL*Net more data to client, and PL/SQL lock timer.  The first of these wait events does not exist in an Oracle 10.2 database, and might have been deprecated around the time of Oracle 8i.  The SQL*Net message from client wait event, at the session level, will tell the DBA when the bottleneck is not in the database, and that could be extremely valuable information.  The SQL*Net more data to client wait event was highlighted on page 28 of this book as identifying a “Network Constrained Database”. (reference)
    • Pages 118-119, while the script does execute, it probably should be updated to use analytic functions to eliminate the need to join two inline views that both query SYS.V_SYSTEM_EVENT.  Additionally, it would probably be helpful if the script also eliminated most of the idle wait events introduced since Oracle 10.1, for example: jobq slave wait, DIAG idle wait, shared server idle wait, reliable message, etc.  Additionally, the script in the script library will not remove most of the idle wait events that are listed in the script because a blank space appears before the closing single quote for each of those wait events: ‘pmon timer ‘, ‘rdbms ipc message ‘, ‘smon timer ‘, etc.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).
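Rather than maintaining a hand-typed (and, as seen here, typo-prone) list of idle wait event names, the wait class classification introduced in Oracle Database 10.1 can exclude them; a hedged sketch:

```sql
-- Let Oracle's own wait classification filter out the idle events
SELECT se.event,
       se.total_waits,
       se.time_waited
  FROM v$system_event se,
       v$event_name   en
 WHERE en.name = se.event
   AND en.wait_class <> 'Idle'
 ORDER BY se.time_waited DESC;
```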
    • Pages 120-121, the sesswaits.sql script does not appear to be in the script library for the book.  The script in the book has an obvious typo near the beginning: “a.a.total_waits,”.  Like some of the other SQL statements, this SQL statement does not use the more common (those that are in the Oracle documentation) synonym names, but instead lists row source names such as SYS.V$_SESSION_EVENT.  A minor improvement would be to use meaningful aliases for row sources, rather than generic aliases such as A, B, and C.  This script attempts to remove some potentially helpful idle wait events, while not excluding some of the more recently introduced idle wait events.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).
    • Pages 121-122, the csesswaits.sql script does not appear to be in the script library for the book.  This script attempts to remove some potentially helpful idle wait events, while not excluding some of the more recently introduced idle wait events.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).  It is a bit disappointing that the author made little attempt to explain the columns returned from V$SESSION_WAIT (sys.v_$session_wait in the script).
    • Pages 122-123, the objwaits.sql script does not appear to be in the script library for the book.  The book states that the script may be used if enqueue waits are present in V$SESSION_WAIT to determine which object and datafile are “the holdup.”  The objwaits.sql script is not specific to enqueue type waits, and in some cases the ROW_WAIT_OBJ# from V$SESSION may not contain information about the object (showing a value of -1), while V$LOCK might expose the OBJECT_ID in the ID2 column during an enqueue wait.
    • Page 124, sentence uses the word “undo” rather than “undue”, “… to quickly pinpoint resource-intensive SQL code that is causing undo strain on the database.”
    • Pages 124-125 state that the number one Oracle Performance Myth is that, “eighty percent of a database’s overall performance is derived from the code that is written against it.”  An Internet search for this sentence finds essentially the same set of supporting paragraphs in a couple of other books (“High-Performance SQL Server DBA”, “Oracle Performance Troubleshooting with Dictionary Internals SQL & Tuning Scripts”).  Oddly, the publisher’s website’s description for the book “Oracle Performance Troubleshooting with Dictionary Internals SQL & Tuning Scripts Second Edition” uses essentially the same sentence as justification for its chapter 8 contents (reference, reference2).
    • Pages 126-134 appear to be a copy and paste of pages 406-421 of the book “Oracle Tuning Power Tuning Scripts” with a small number of modifications.  It might be interesting to note that the script outputs in this section of the book also use a graphical screen capture from some program, rather than the raw SQL*Plus style output seen in the earlier portions of the book.
    • Page 126 recommends that temporary tablespaces be set to autoextend.  Run-away queries with missing join conditions can cause a lot of problems if temporary tablespaces are in fact set to autoextend with the maximum size set to unlimited (32GB for an 8KB block size database when a single file is used).
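A middle ground is to allow the temp files to autoextend while capping their maximum size; a minimal sketch (the file name and sizes are hypothetical):

```sql
-- Allow growth, but cap the tempfile so a runaway Cartesian join
-- cannot consume the entire volume
ALTER DATABASE TEMPFILE '/u01/oradata/orcl/temp01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```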
    • Page 126-127, the spacesum.sql script does not appear to be in the script library for the book.  How the script operates is not well described, and the top half of the UNION ALL returns a VARCHAR2 in the last column position, while the bottom half of the UNION ALL returns a NUMBER in the last column position.  It is not clear whether or not the query works as intended.
    • Page 128, the datafileae.sql script does not appear to be in the script library for the book.  This query joins SYS.DBA_DATA_FILES to SYS.FILEXT$ to determine which datafiles are configured to extend automatically.  It appears that the join to SYS.FILEXT$ is unnecessary as DBA_DATA_FILES contains the column AUTOEXTENSIBLE.
    • Page 129, the tsfrag.sql script does not appear to be in the script library for the book.  It is possibly interesting that the script calculates the square root of the square root of the COUNT of a column in DBA_FREESPACE when grouped by the tablespace name.
    • Page 130, the dffrag.sql script does not appear to be in the script library for the book.  This query joins SYS.DBA_DATA_FILES to SYS.FILEXT$ to determine which datafiles are configured to extend automatically.  It appears that the join to SYS.FILEXT$ is unnecessary as DBA_DATA_FILES contains the column AUTOEXTENSIBLE.
    • Page 131, the odd formatting where a single sentence becomes a paragraph has essentially changed the meaning of the second paragraph on the page such that it could be misunderstood.  The fragmentation terms honeycomb and bubble were used, but never defined.
    • Page 131, the tsmap.sql script does not appear to be in the script library for the book.  One might be curious whether any of these fragmentation-detecting scripts that examine extents in datafiles – assuming they work as described – are still useful since the introduction of locally managed tablespaces in Oracle Database 8i roughly a decade ago (reference).
    • Page 132 states, “as a result of repeated insert and delete activity, tables can become internally fragmented and contain a lot of wasted space.”  I suspect that this is a confusing sentence; it seems that the book implies that when a row is deleted, the space previously occupied by the row cannot be reused – this is simply not true.  The book continues with, “In the same way, indexes can become fragmented so that their depth reaches unacceptable levels.”  The second quote also seems to imply that when an index entry is deleted, the space cannot be reused.  Additionally, is this the definition of fragmented indexes that the author uses occasionally?
    • Page 132-133, the maxext.sql script does not appear to be in the script library for the book.  This script appears to be very similar to one that appears on page 144 in the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (even the graphical version of the query output matches), and for some reason the SQL statement excludes the SEGMENT_TYPE of CACHE, but not any of the other segment types (reference reference2).
    • Pages 129-144 appear to be copied and pasted, with minor changes, from pages 131-170 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (even the graphical version of the query output matches reference).
    • Page 133, the objdef.sql script does not appear to be in the script library for the book.
    • Page 134, potentially confusing sentence, “Sometimes, however, this is not the case, and the table suddenly finds itself containing chained or migrated rows, which are rows that span more than one data block.”
    • Pages 134-136, the tabreorg.sql script does not appear to be in the script library for the book.  This script uses the RULE hint, which according to the Oracle documentation is no longer supported as of Oracle Database 10.1 (reference).
    • Pages 137-138, the tabreorg.sql script does not appear to be in the script library for the book.  This script uses the RULE hint, which according to the Oracle documentation is no longer supported as of Oracle Database 10.1.
    • Page 138, a sentence seems to imply that a high clustering factor, or having the BLEVEL greater than four may be a reason to either reorganize (rebuild) an index or drop it.  Drop – maybe, rebuild – no.  Changing the clustering factor for an index requires changing the order of the rows in the table, although the clustering factor calculation can be thrown off if the statistics collection sampling percent is too low.
    • Page 140, the sgasize.sql script does not appear to be in the script library for the book.
    • Pages 143-144, the memsnap.sql script does not appear to be in the script library for the book.
    • Page 145, the poolhit.sql script does not appear to be in the script library for the book.  Page 145 also states, “However, when properly computed, the buffer cache hit ratio is an excellent indicator…”  The formula provided for calculating the buffer cache hit ratio appears to be incorrect, according to the formula that appears in the 11g R2 Performance Tuning Guide from the Oracle documentation library.
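For comparison, the 11g R2 Performance Tuning Guide computes the ratio from three V$SYSSTAT statistics. A sketch (the values are cumulative since instance startup, so delta values between two points in time are still required for the ratio to be meaningful):

```sql
-- Buffer cache hit ratio as defined in the 11.2 Performance Tuning Guide:
-- 1 - ('physical reads cache' / ('consistent gets from cache'
--                                + 'db block gets from cache'))
SELECT
  1 - (PRC.VALUE / (CGFC.VALUE + DBGFC.VALUE)) HIT_RATIO
FROM
  V$SYSSTAT PRC,
  V$SYSSTAT CGFC,
  V$SYSSTAT DBGFC
WHERE
  PRC.NAME = 'physical reads cache'
  AND CGFC.NAME = 'consistent gets from cache'
  AND DBGFC.NAME = 'db block gets from cache';
```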
    • Page 145-146, the sesshitrate.sql script does not appear to be in the script library for the book.
    • Page 147, the sqlhitrate.sql script does not appear to be in the script library for the book.  Considering that it is possible for a SQL statement to have more than one execution plan, if this SQL statement is producing useful information it probably would have been a better idea to query V$SQL rather than V$SQLAREA.
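A minimal sketch of the V$SQL alternative, which keeps each child cursor – and therefore each execution plan – separate (assumes Oracle 10.1 or later for the SQL_ID and PLAN_HASH_VALUE columns; the column list is illustrative):

```sql
-- Sketch: per-child-cursor statistics, so statements with multiple
-- execution plans are not blended together as they are in V$SQLAREA.
SELECT
  SQL_ID,
  CHILD_NUMBER,
  PLAN_HASH_VALUE,
  EXECUTIONS,
  BUFFER_GETS,
  DISK_READS
FROM
  V$SQL
WHERE
  EXECUTIONS > 0
ORDER BY
  DISK_READS DESC;
```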
    • Pages 148-156 appear to be copied and pasted, with minor changes, from pages 182-196 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (reference) .
    • Page 148, the explain plan format pre-dates the release of Oracle 10.1, and shows the use of the OPTIMIZER_MODE CHOOSE which became deprecated with the release of Oracle 10.1.  The book calls a listing “runtime statistics”, but does not explain how the statistics were generated (likely the output of SQL*Plus’ AUTOTRACE functionality).
    • Page 149 states, “The amount of logical reads has been cut in half simply by using the new 16K tablespace and accompanying 16K data cache.”  It has been demonstrated to the book author multiple times why this test case is flawed (reference reference2 reference3 reference4).
    • Page 149, the cacheobjcnt.sql script does not appear to be in the script library for the book.
    • Page 150, the buffutl.sql script does not appear to be in the script library for the book.  The query joins SYS.X$BH to SYS.TS$ four times, rather than performing the join a single time with either a DECODE or CASE statement to differentiate the different values for the STATE column.
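A sketch of the single-join form – note that X$BH is undocumented and release-dependent, and the STATE decodes below (0=free, 1=xcur, 2=scur, 3=cr) are an assumption that should be verified for the target release:

```sql
-- Sketch: one pass through X$BH, classifying buffer STATE values with
-- DECODE rather than joining X$BH to TS$ four separate times.
SELECT
  TS.NAME TABLESPACE_NAME,
  SUM(DECODE(BH.STATE, 0, 1, 0)) FREE_BUFFERS,
  SUM(DECODE(BH.STATE, 1, 1, 0)) XCUR_BUFFERS,
  SUM(DECODE(BH.STATE, 2, 1, 0)) SCUR_BUFFERS,
  SUM(DECODE(BH.STATE, 3, 1, 0)) CR_BUFFERS
FROM
  SYS.X$BH BH,
  SYS.TS$ TS
WHERE
  BH.TS# = TS.TS#
GROUP BY
  TS.NAME;
```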
    • Page 152, the libdet.sql script does not appear to be in the script library for the book.
    • Page 153, the libwait.sql script does not appear to be in the script library for the book.
    • Page 153, the libobj.sql script does not appear to be in the script library for the book.
    • Page 155, the dictdet.sql script does not appear to be in the script library for the book.
    • Page 156-157, the text at the bottom portion of page 156 through the middle of page 157 appears to be copied and pasted from page 306 of the book “Oracle Tuning Power Scripts” (reference).
    • Page 157-158, the bottom half of page 157 through the top half of page 158 are supposed to demonstrate the wait events and statistics that will be present when the LOG_BUFFER parameter is undersized.  As of Oracle Database 10.2 the LOG_BUFFER parameter is automatically set to a value that is slightly smaller than the granule size, so leaving the parameter unspecified carries no significant risk starting with 10.2.  Page 158 has been addressed by other contributors on the Oracle OTN forums (reference reference2).
    • Pages 158-195, the bottom of page 158 through page 195 appears to be copied and pasted (with minor modifications) from page 197-275 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (reference).
    • Page 159, the memhog.sql script does not appear to be in the script library for the book.
    • Page 160, the first paragraph on this page probably should have appeared on page 147, shortly after the script mentioned in the paragraph appeared in the book, rather than in a section of the book that describes session memory usage.
    • Page 161, which appears to be describing Oracle Database 8i and earlier, states, “if the memory sort ratio falls below 90%, increasing the parameters devoted to memory sorts, sort_area_size and sort_area_retained_size, may help.”  The first question someone might ask is why 90%, and not 99% or 50%?  It would probably be a better idea to determine which queries are causing the sorts to disk.  Maybe increasing the SORT_AREA_SIZE to 1,000 times its original value still will not allow the memory sort ratio (which the book’s script calculates since the last bounce of the database) to climb to 90% – then what?  The second question someone might ask is what do these parameters do – if the reader sets the parameters to 500M to try to get the memory sort ratio to 90%, what kinds of problems might happen?
    • Page 161 seems to indicate that the PGA_AGGREGATE_TARGET will not be used for shared server connections.  While that was correct for Oracle Database 9i R1 and 9i R2, it is not the case starting with Oracle Database 10g R1 (reference).
    • Page 161 recommends that on a Unix server the PGA_AGGREGATE_TARGET be set to 90% of the memory remaining in the server after the database instance starts.  There are a variety of reasons why this is a bad idea; the fact that the PGA_AGGREGATE_TARGET is not a hard upper limit for PGA memory consumption is just one of those reasons.
    • Page 162, the term “one-pass memory” is mentioned but never defined.  The use of the term seems to imply that a workarea execution is either optimal or one-pass – are multi-pass workarea executions never a possibility?
    • Page 162, the bufobjwaits.sql script does not appear to be in the script library for the book.  Additionally, the script does not appear to be RAC aware as the statistic “gc buffer busy” was ignored and the query did not select from  GV$SEGMENT_STATISTICS.
    • Page 163, the latchdet.sql script does not appear to be in the script library for the book.  It is debatable whether the author’s guideline that a 1% latch miss ratio indicates that further investigation is required is appropriate – a better approach would probably start by seeing whether a specific type of latch is contributing a significant amount of time to the wait events between a known start and end time.
    • Page 164, the currlwaits.sql script does not appear to be in the script library for the book.  This script uses SYS.V_$ view names rather than the officially documented synonym names, and uses alias “a”, “b”, and “c” rather than meaningful aliases for each of the views.  The query also contains an unnecessary subquery that queries SYS.V_$EVENT_NAME.
    • Page 165, while the globiostats.sql script is found in the script library, the statistic values that are requested seem to omit those statistics that were introduced in Oracle Database 10.1, and the script provides statistics values since the last open of the database.  I suspect that nightly backups, statistics collection, or scheduled index rebuilds might distort the statistics retrieved by the script.  Nothing is done with the script in the book – the book immediately leads into another script.
    • Page 166, while the script syswaits.sql is in the script library for the book, the script printed in the book differs from what is found in the script library.  Additionally, the script in the script library includes an extra space before the closing single quote that follows each wait event name that should be excluded from the report.
    • Page 167, the advice about the “latch free” and “enqueue” waits predates the release of Oracle Database 10.1.  The advice about the “db file scattered read” wait event could cause unnecessary concern – that wait event should be one of the top wait events in a healthy database instance.
    • Page 168, the globaccpatt.sql script does not appear to be in the script library for the book.  The book makes no attempt to interpret the statistic values returned by the script, nor the meaning of the statistics.  The book does state that “large-table full-table scans can indicate sub-optimal SQL and/or missing indexes, cases where the SQL needlessly reads all blocks in the table.”  The book does not state which of the statistics indicate a “large-table full-table scan” – is it the “table scans (long tables)” statistic?  It is also quite possible that full table scans of long (large) tables may just indicate that everything is working as it should due to the data requirements, indexes with poor clustering factors, or maybe the database is simply a data warehouse.  For clarity, it is not the SQL that needlessly reads all of the blocks in the table.
    • Page 168 contains an advertisement for the author’s ION program.
    • Page 169 states that “if standard DBA practices have been followed and indexes are placed in their own tablespace, the I/O statistics for that tablespace can be examined to determine if the indexes are actually being used.”  Is it really a standard DBA practice to separate indexes from their tables by placing each into a different tablespace?  How would seeing the I/O statistics for a tablespace (more correctly the datafiles for the tablespace) really indicate if an index were used – an index may be used by Oracle Database’s optimizer without actually being used for query execution.
    • Page 169, the fileio.sql script found in the book differs significantly from the script by the same name found in the script library for the book.  The script found in the book seems to unnecessarily join several objects owned by the SYS user when a much more simple query could be written using the documented synonym names V$FILESTAT, V$TABLESPACE, V$DATAFILE, and V$TEMPFILE.
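A sketch of the simpler form using only documented synonyms (V$TEMPSTAT is added here as an assumption for the tempfile statistics; the column list is illustrative):

```sql
-- Sketch: file-level I/O statistics using documented synonyms only.
SELECT
  T.NAME TABLESPACE_NAME,
  D.NAME FILE_NAME,
  F.PHYRDS,
  F.PHYWRTS
FROM
  V$FILESTAT F,
  V$DATAFILE D,
  V$TABLESPACE T
WHERE
  F.FILE# = D.FILE#
  AND D.TS# = T.TS#
UNION ALL
SELECT
  T.NAME,
  TF.NAME,
  TS.PHYRDS,
  TS.PHYWRTS
FROM
  V$TEMPSTAT TS,
  V$TEMPFILE TF,
  V$TABLESPACE T
WHERE
  TS.FILE# = TF.FILE#
  AND TF.TS# = T.TS#;
```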
    • Page 170, the term “hub tables” is used without defining the term.
    • Page 170-171, the purpose of the toptables.sql script does not seem to be well defined, and might contain one or more errors.  The script returned no rows when tested.
    • Page 172, the largescan.sql script, identified as largescan9i.sql in the script library, fails to recognize that a table referenced in an execution plan and identified as experiencing a full table scan may actually be full table scanned many times when the SQL statement is executed once, even if the table is listed only once in the execution plan.
    • Page 172, a “large table” is defined within the script as a table that is more than 1MB in size.  If we associate this definition with the term “large-table” found on page 168, the reader might be a bit confused.
    • Page 173 states “one rule of thumb is that if any session is currently consuming 50% or more of the total I/O, that session and its SQL need to be investigated further to determine what activity it is engaged in.”  Why 50%?  Would it make a difference if there are 2 or 2,000 sessions connected?  Would it matter how many sessions are actively executing SQL statements?  Over what period would the DBA examine to see if a session meets the 50% criteria?  The provided script, physpctio.sql, bases the time period on the length of time that the session has been connected to the database.
    • Page 174, the totpctio.sql script is nearly identical to the physpctio.sql script from the previous page, and suffers from the same time scoping problem.
    • Pages 175-176, the topiousers.sql script is certainly long, and aside from renaming some of the statistic names from SYS.V_$STATNAME, really does not introduce any new information that was not provided by the earlier scripts.  The script has time scoping problems, and no effort is made to explain the meaning of the statistics returned by the script.
    • Page 177, the bgact.sql script is apparently not found in the book’s script library.  The script is described as “providing a handle on DBWR and LGWR activity,” yet the script retrieves the “dirty buffers inspected” foreground statistic and a “write requests” statistic that does not seem to be a valid statistic name (reference).
    • Page 178 states, “seeing non-zero values for the DBWR summed dirty queue length typically indicates that buffers are being left in the write queue after a write request.”  The “DBWR summed dirty queue length” statistic is apparently not an Oracle Database statistic, so seeing non-zero values might mean that something else is wrong.
    • Page 178 states, “non-zero values for the redo log space wait requests and redo log space wait time statistics could indicate the log buffer setting is too low.”  There is no statistic named “redo log space wait requests” – the actual statistic name is “redo log space requests”.  These statistics are not associated with the redo log buffer size, but instead the size of the redo log files in relation to the amount of redo information that needs to be written to the log files (reference).
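The actual statistic names are easy to confirm from V$SYSSTAT (values are cumulative since instance startup):

```sql
-- The real statistic names are 'redo log space requests' and
-- 'redo log space wait time' - there is no 'redo log space wait requests'.
SELECT
  NAME,
  VALUE
FROM
  V$SYSSTAT
WHERE
  NAME IN ('redo log space requests', 'redo log space wait time');
```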
    • Page 178, the archhist.sql script does not appear to be in the script library for the book.  The script summarizes by day, rather than a smaller time period such as an hour.  This might provide misleading information if on average 4 redo logs are archived per hour, but during busy periods of the day redo logs are archived at a rate of one per minute – the average over 24 hours will hide important information about a problem.
    • Page 179, the rolldet.sql script does not appear to be in the script library for the book.  The script does not permit time scoping, uses the non-documented object names in the SYS schema rather than the public synonyms, and uses generic alias names for the objects.
    • Page 180, a sentence which makes no sense, “an overall contention ratio of 1% or higher is an indicator of too few rollbacks as well.”  First, the sentence is missing at least one word.  Second, the method to calculate the overall contention ratio is not described.  Third, why 1% and not 10% or some other number?
    • Page 181, the totuserspace.sql script does not appear to be in the script library for the book.
    • Page 182, the sortusage.sql script does not appear to be in the script library for the book.
    • Page 182, the sortdet.sql script does not appear to be in the script library for the book.
    • Page 183-185, the topsess.sql script does not appear to be in the script library for the book.  After using three pages for the script, it probably would have been wise to use more than a small paragraph to explain what to do with the output of the script.
    • Page 186 states, “a rule of thumb is that no session should consume more than 25-50% of the overall resources in a particular category.”  Why 25% to 50%?  It would seem that the percentage might make a difference if there was a single session connected or 1,000 sessions.
    • Pages 186-187, the topsessdet.sql script does not appear to be in the script library for the book.  The script’s output is cumulative since the session connected, so it could easily generate invalid results if one session had been connected for 2 months, while a session consuming significant resources has been connected for 30 minutes.  The query does not use the time model statistics, so it is possible for the statistics displayed to be inaccurate for actively executing sessions – for example, if a session has been on the CPU for the last 60 minutes performing logical IO.
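The time-scoping criticism applies to most of the scripts in this part of the book. A sketch of one way to limit a report to a measurement interval, using a hypothetical helper table named STAT_BASELINE to capture a starting snapshot:

```sql
-- Capture a baseline of the session statistics (hypothetical table name).
CREATE GLOBAL TEMPORARY TABLE STAT_BASELINE
  ON COMMIT PRESERVE ROWS AS
SELECT SID, STATISTIC#, VALUE FROM V$SESSTAT;

-- ...after the measurement interval, report only the deltas...
SELECT
  S.SID,
  N.NAME,
  S.VALUE - B.VALUE DELTA
FROM
  V$SESSTAT S,
  STAT_BASELINE B,
  V$STATNAME N
WHERE
  S.SID = B.SID
  AND S.STATISTIC# = B.STATISTIC#
  AND S.STATISTIC# = N.STATISTIC#
  AND S.VALUE - B.VALUE > 0
ORDER BY
  DELTA DESC;
```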
    • Pages 188-189, the userscans.sql script does not appear to be in the script library for the book.  The script is interesting in that it retrieves the “table scans (long tables)” statistic for each session.  According to the Oracle documentation, a table is considered long if the table is not cached and its high water mark is more than 5 blocks, and the statistic is impacted by the CACHE and NOCACHE hints – based on that description, the query is not behaving as described in the book (reference).
    • Page 189, the scatwait.sql script does not appear to be in the script library for the book.  The query is not time scoped, so a session that has been connected for months may unfairly be sorted to the top of the output.
    • Page 189, the book states that the “db file scattered read” wait event “is generally thought to be an indicator of table scan activity.”  It probably would have been wise to state that the “db file scattered read” wait event also appears during index fast full scans, and in some cases index range scans.  As of Oracle Database 11.1 it is quite common for the “direct path read” wait event to appear when full table scans are performed, but the book made no mention of that fact.
    • Page 190, the large_scanusers.sql script does not appear to be in the script library for the book.  The script suffers from the same errors as the scripts earlier in the book when it attempts to determine how many full table scans users have performed based on querying SYS.V_$SQL_PLAN – a table that appears a single time in an execution plan may be full table scanned multiple times during a single execution.
    • Page 190 states that the “table scans (long tables)” statistic indicates full table scans of tables that are five or more blocks in size.  That is almost what the Oracle 10.2 documentation states, but that raises the question regarding the script on page 188 and 189 – is there a problem with performing a couple of full table scans on a table with 10 blocks under its high water mark, and is this as severe as performing a couple of full table scans of tables with 10 million blocks under its high water mark?
    • Page 191, the lockcnt.sql script does not appear to be in the script library for the book.  The script appears to have an incomplete join to the SYS.V_$LOCKED_OBJECT view.
    • Page 192, the curriosql.sql script does not appear to be in the script library for the book.  This SQL statement has time scoping problems, directly comparing a session that has been connected to the database for months with one that has been connected for a couple of minutes.
    • Page 193, the cartsession.sql script does not appear to be in the script library for the book.  The script joins SYS.V_$SQL to SYS.V_$SQL_PLAN using just the HASH_VALUE, and there are certainly risks in doing that.  Not all Cartesian joins are bad – the optimizer may (frequently) use a Cartesian join when the statistics for one of the row sources indicates that a single row, or a very small number of rows, will be joined to the second row source.
    • Pages 193-194, the cartsql.sql script does not appear to be in the script library for the book.
    • Page 195, the list of optimizer modes probably should indicate that the FIRST_ROWS optimizer mode has been deprecated for quite a while and/or not list it as one of the two optimizer modes.  The book states of the first rows optimizer mode, “first rows: Fetches the desired rows with a minimum amount of block touches (favors indexes).”  This, of course, raises the question, what if the desired number of rows is all or nearly all of the rows – is this still a true statement?  Do the non-deprecated FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, and FIRST_ROWS_1000 also favor indexes?  This section of the book does not mention those optimizer modes.  This page also incorrectly states that the ALL_ROWS optimizer mode favors full scans – such a statement is misleading at best.
    • Pages 196-197, the topsql.sql script does not appear to be in the script library for the book.  It appears that the script is designed for Oracle Database 9i (the query retrieves HASH_VALUE rather than SQL_ID), queries SYS.V_$SQLAREA rather than the more common (and documented) V$SQLAREA, and unnecessarily groups all child cursors of the same SQL statement together (it should have queried V$SQL instead), which can be quite misleading, especially in 11.1 with the introduction of adaptive cursor sharing and 11.2 with the introduction of cardinality feedback (two items that are apparently not mentioned in this book).  Additionally, the starting time point of the query is the database open time, rather than a more meaningful examination of the delta statistics for a shorter time period such as an hour or a day – a query that is executed once a month that causes a lot of physical reads might not be a problem, while a query that executed many times in the last hour might be much more of a cause for concern.
    • Page 198 appears to be copied and pasted (with minor modifications) from pages 279-280 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts”
    • Page 198, the book states, “the first query in the result set has accumulated almost five seconds of total execution time and runs for about two and a half seconds each time it is executed.”  While the query might have an *average* execution time of 2.5 seconds, it very well could have required 4.99 seconds for the first execution and 0.01 seconds for the second execution.  Considering that the data that the author is commenting about is drawn from V$SQLAREA, it very well could be the case that the first execution and the second execution have entirely different execution plans.  The effects of block caching could also mean that the time for each execution is not nearly the same as the average execution time.
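The averaging problem is easy to illustrate with two hypothetical executions; the per-execution times below are invented, and only the totals match the figures quoted from the book:

```python
# Two hypothetical executions whose totals match the book's figures:
# almost five seconds accumulated over two executions.
elapsed = [4.99, 0.01]            # invented per-execution times, in seconds
total = sum(elapsed)              # ~5 seconds of accumulated execution time
average = total / len(elapsed)    # ~2.5 seconds "each time it is executed"

# The V$SQLAREA-style average hides the fact that one execution took 499
# times longer than the other.
print(f"total={total:.2f}s  average={average:.2f}s  max={max(elapsed):.2f}s")
```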
    • Page 198 states, “this can be accomplished by noting the hash_value for each SQL statement and then issuing the fullsql.sql script to obtain the full SQL statement.”  This statement causes confusion because the fullsql.sql script printed in the book appears to request the SQL_ID, and not the HASH_VALUE as described, which was returned by the earlier SQL statement.
    • Pages 199-201 appear to be copied and pasted (with minor modifications) from pages 347-350 of the book “Oracle Tuning Power Scripts” (reference).
    • Page 199, the cartcount.sql script does not appear to be in the script library for the book.  Not all Cartesian joins are bad – the optimizer may (frequently) use a Cartesian join when the statistics for one of the row sources indicates that a single row, or a very small number of rows, will be joined to the second row source.
    • Page 199, the same cartsql.sql script that appeared on pages 193-194 is pasted on this page also.
    • Page 200, the tabscan.sql script does not appear to be in the script library for the book.  
    • Page 200 appears to be copied and pasted (with minor modifications) from page 284 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts”
    • Page 204-205 appear to be copied and pasted (with minor modifications) from page 61-64 of the book “Oracle9i High Performance Tuning with Statspack” (reference).
    • Pages 213-217 appear to be copied and pasted (with minor modifications) from pages 158-162 of the book “Oracle Tuning Power Scripts” (reference).
    • Pages 224-225 appear to be copied and pasted (with minor modifications) from pages 147-149 of the book “Oracle Tuning Power Scripts” (reference).
    • Page 226 appears to be copied and pasted (with minor modifications) from page 146 of the book “Oracle Tuning Power Scripts”
    • Page 229 appears to be copied and pasted (with minor modifications) from page 139 of the book “Easy Oracle Automation: Oracle 10g Automatic Storage, Memory and Diagnostic Features” (reference).
    • Pages 229-234 appear to be copied and pasted (with minor modifications) from pages 150-160 of the book “Oracle Tuning Power Scripts” (reference).

    Jumping to mostly random pages:

    • Page 235 appears to be copied and pasted (with minor modifications) from pages 252-253 of the book “Oracle Tuning Power Scripts” (reference).
    • Pages 243-247 appear to be copied and pasted (with minor modifications) from pages 112-115 of the book “Easy Oracle Automation” (reference).
    • Page 249 appears to be copied and pasted (with minor modifications) from pages 47-48 of the book “Oracle 9i High-Performance Tuning with Statspack” (reference).
    • Pages 250, 309, 311, and 318 contain an advertisement for the author’s ION product.
    • Page 280 states, “Oracle does not run in a vacuum, and it’s critical to search outside the box and see what is happening with your CPU, RAM, network, and disk I/O subsystems.”  While this quote might raise a couple of questions, I would like to know why Oracle (assuming Database) was not designed to function correctly inside a vacuum, and why would the server’s CPU be outside the box?
    • Pages 313-315 are nearly identical word for word with Metalink (MOS) Doc ID 884046.1 “Understand each field of AWR” as well as pages 22 through 25 of a document that is marked as copyright 2010 by IBM Corp. (reference).
    • Page 325 states, “The reason that CPU drives to 10% utilization is because the UNIX internal dispatchers will always attempt to keep the CPU’s as busy as possible. This maximizes task throughput, but it can be misleading for a neophyte. Remember, it is not a cause for concern when the user + system CPU values approach 100 percent.”  Why 10%?  Page 325 of the book makes the same errors as are found on page 25 of the book (reference).
    • Page 355 provides no real help with decoding the time model statistics, not even a suggestion to arrange the statistics into a logical hierarchy.  All that is stated is, “in the sample output of the AWR Time Model Statistics Report shown above, it can be seen that the system spends the most processing time on actual SQL execution but not on parsing.  This is very good for a production system.”  This is at least the second case in the book where the time model statistics were introduced, yet no information was provided to indicate what to do with the statistics.
    • Page 355-356, other than telling the reader that the operating system statistics are important, and providing what appears to be output from Oracle Database 10.1, the book does not indicate what to do with the statistics. 
    • Pages 516-519 appear to be copied and pasted (with minor modifications) from pages 212-216 of the book “Oracle 9i high-performance tuning with Statspack” (reference).
    • Page 520 states, “for example, tuning an SQL statement to remove full-table scans makes the query run faster because of the direct reduction in the amount of data blocks that are read from disk.”  This is simply too broad and too inaccurate a statement to be true, as it seems to imply that all full table scans are automatically worse than an index access path.  It can be quite easily demonstrated that even when selecting 0.06% of a table, a full table scan may be significantly faster than an index access path to the same data (reference).
    • Page 521 includes a statement that does not seem to make sense, “Prior to Oracle, Oracle Professionals noticed that by moving the entire database to a larger blocksize, they reduce disk I/O and improve performance of the entire system.”  Were there Oracle DBAs before Oracle Corporation was formed, or is this sentence stating something else?
    • Page 521 states, “there is no question that a large blocksize for indexes is going to reduce I/O, and therefore improve the overall performance of the entire database.”  There have been several well written articles that question whether using a larger block size for indexes will either reduce I/O or improve the overall performance.  For example, what happens in a multi-user database when more index entries are packed into a larger index leaf block and multiple sessions concurrently insert into the table to which the index belongs?  This is essentially a repeat of the same logic error as is found on page 149 of the book (reference).
    • Page 522 states, “Oracle has codified the benefits of different blocksizes, and the Oracle 11g Performance Tuning Guide notes that multiple block sizes are indeed beneficial…”  The author then quoted the portion of the Oracle Database documentation which describes the KEEP and RECYCLE buffer pools.  The KEEP and RECYCLE buffer pools can only use the same block size as is the default block size for the database – the quote from the documentation does not describe the benefits of different block sizes.  I pointed this error out to the author in the Oracle OTN forums in 2008 after a couple other people previously mentioned the same error (reference).
    • Page 522 states, “for example, insert-intensive databases will perform less write I/O (via the DBWR process) with larger block sizes.  This is because more logical inserts can take place within the data buffer before the block becomes full and requires writing it back to disk.”  This is an extremely weak argument.  DBWR batches block writes, it does not write a block to disk as soon as it becomes full.  Second, what kinds of problems may happen when multiple sessions are concurrently performing the inserts into leaf blocks?  Third, what about all of the bugs that are related to having multiple block sizes in a single database such as bug ID 4940513 (undo related), bug ID 5496041 (index corruption), and doc ID 844497.1 (RMAN skips non-default block size tablespaces).
    • Pages 523-528, the author attempts to argue for the use of multiple block sizes in a single database using arguments that were discussed in several forum threads.  Considerable convincing evidence in those threads strongly recommends against the practice suggested by the author.
    • Page 529, Oracle Database 10.2 and above also considers the value of the SESSIONS parameter when automatically setting the DB_FILE_MULTIBLOCK_READ_COUNT parameter value (reference).
    • Page 535, the same flawed comparison of indexes in an 8KB and 16KB block size which is found on page 149 is repeated.
    • Page 537 describes a case where a customer using a 16KB block size database experienced performance problems: updating roughly 850,000 rows in a table required roughly 45 minutes, and that time decreased to just 2.2 minutes after recreating the database with a 4KB block size.  This magic performance increase was discussed on the Oracle OTN forums and several blogs.  Jonathan Lewis created a test case that identified the cause as an ASSM bug that appears in tablespaces created with block sizes larger than 8KB.  A bug was filed with Oracle Corp. related to the problem, and the author of this book as well as the field technician who spotted the error were notified that the cause of the problem had been identified and a patch was being developed.  I am not sure why the book states, “I was able to get the update down to 1.5 minutes” – I distinctly recall that the field technician (also the technical editor of this book) reported fixing the problem, not the book author (reference  reference2  reference3  reference4).
    • Page 726 suggests checking Metalink (MOS) Doc ID 122008.1 for “the officially authorized script to detect indexes that benefit from rebuilding.”  That Metalink article states that the criteria are not valid and that the script has been revised to meet “current standards and functionality.”  That means that the rebuild criteria printed in the book – 20% or more deleted index entries, or a depth of more than 4 levels – are invalid, as members of the Oracle community had previously pointed out to the book author in a couple of OTN discussion threads (reference  reference2  reference3).
    • Page 726 suggests checking Metalink Doc ID 77574.1 for Oracle’s index rebuilding guidelines – that Doc ID does not exist on Metalink (MOS).
    • Page 726 states, “However, when large numbers of adjacent rows are deleted, it is highly unlikely that Oracle will have an opportunity to reuse the deleted leaf rows, and these represent wasted space in the index.”  The first question is why the space cannot be reused.  The second question is whether it matters if the index is a primary key b*tree index, an index on another column, or a different type of index altogether.  The third question is how the adjacent rows are deleted – if this is a heap organized table, how does one make sure that physically adjacent rows in the table blocks are deleted?
    • Page 727 appears to be at least partially copied from page 412 of the book “Oracle 9i High-Performance Tuning” (reference  reference2).
    • Page 727 states, “Hence, an Oracle index may have four levels, but only in those areas of the index tree where the massive inserts have occurred.  Oracle indexes can support many millions of entries in three levels, and any index that has four or more levels would benefit from rebuilding.”  The first sentence is clearly not true, and was communicated to the book’s author by members of the Oracle community in early 2003 (as well as several times since 2003).  The second sentence leads into a bit of a problem area – what happens if, after rebuilding, an index still has four or more levels?  Do you rebuild it again? (reference  reference2)
    • Page 727 states, “Gets per index access: The number of gets per access refers to the amount of logical I/O that is required to fetch a row with the index.”  This attempted description of the BLKS_GETS_PER_ACCESS column of INDEX_STATS seems to be inaccurate, and likely is not a good metric for deciding if an index should be rebuilt – the author suggests that it might be appropriate to rebuild the index when the value of this statistic is more than five (reference).
    • Page 727 suggests using “ANALYZE INDEX VALIDATE STRUCTURE” on all indexes to determine which are candidates for a rebuild.  The book then states, “After analyzing the report above, the DBA may want to consider rebuilding any index where the height is more than three levels…”  The author does not mention that the table to which an index belongs is locked for the entire duration of the ANALYZE INDEX command – this problem was brought to the attention of the book author in 2007 by members of the Oracle community.  Regarding the second quoted sentence, no report actually appears above that sentence in the book, so it is unclear what the reader should be reviewing (reference).
    • Pages 728-732 contain a number of faulty claims about the safety and low cost of index rebuilds, and about how “many” databases see a huge benefit from periodic index rebuilding.  Many of these claims are discussed in detail in Richard Foote’s blog and in various Internet forums.
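
    The locking issue with the page 727 advice can be made concrete with a short sketch; the index name here is hypothetical, and the behavior described in the comments is standard Oracle behavior, not the book's example:

```sql
-- ANALYZE ... VALIDATE STRUCTURE takes a lock that blocks DML against the
-- parent table for the entire duration of the command (IND_T1_ID is a
-- hypothetical index name).
ANALYZE INDEX ind_t1_id VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row, describing only the most recently
-- analyzed index, so the results must be captured before the next ANALYZE.
SELECT height, lf_rows, del_lf_rows, blks_gets_per_access
FROM   index_stats;

-- The ONLINE keyword avoids blocking concurrent DML, but it does not
-- populate INDEX_STATS.
ANALYZE INDEX ind_t1_id VALIDATE STRUCTURE ONLINE;
```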
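
    Regarding the automatically derived DB_FILE_MULTIBLOCK_READ_COUNT mentioned for page 529, the inputs to the derivation may be checked with a query such as the following; the formula in the comment is an approximation that should be verified on the specific release:

```sql
-- When DB_FILE_MULTIBLOCK_READ_COUNT is not explicitly set, Oracle 10.2 and
-- later derive it roughly as the lesser of 1048576/db_block_size and
-- db_cache_size/(sessions * db_block_size).
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('db_file_multiblock_read_count', 'sessions',
                'db_block_size', 'db_cache_size');
```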

    (Section added December 8, 2010): Related blog articles that discuss the contents of this book:

    (Section added February 28, 2011): Attempted Purchases of the “Oracle Tuning: The Definitive Reference Second Edition” Book:

    On February 2, 2011 I added the following comment to my review of this book on

    In my review of the book I stated the following:
    “I initially ordered the book from Amazon at the end of May 2010 (it was expected to be in stock the first week of June 2010), and when it had not shipped by the end of August, my curiosity subsided due to the delay (and various other reasons) and I cancelled the original order. Near the end of October 2010 my curiosity peaked again, so I placed a new order for the book (at that time the book was expected to be in stock the first week of November 2010). While waiting for the book to arrive, I started reading portions of the first and second editions of the book through the Google books library.”

    The order that I placed for the book at the end of October 2010 still has not shipped, more than three months after I placed the order. It is somewhat interesting to see that the publisher’s website offers this book for seven dollars less than Amazon’s current price for the same book, while the publisher apparently cannot supply Amazon with a sufficient number of copies of this book to fill the existing orders. I have no intention of cancelling this order, even if it does not ship for another three to six months. Best of luck to others who have waited six months (three months for the first order before cancelling and three months for the second order) to obtain the book.

    This was the status of the first attempt to acquire the book, as shown on August 26, 2010 – that order was cancelled shortly after making this screen capture:

    This screen capture shows the current status of the second attempt to acquire the book, as shown on February 28, 2011 – I do not intend to cancel this order:

    This is certainly an interesting situation.  Why publish a book, and then not provide the book vendors with a sufficient number of copies to satisfy the apparent demand?  There is probably a good explanation for the delay, but not one that I am able to understand.

    (Section added March 9, 2011): Attempted Purchases of the “Oracle Tuning: The Definitive Reference Second Edition” Book, Part 2:

    Amazon cancelled my second order and sent the following email:

    I noticed a couple of days ago that Rampant was selling the book through  Currently there is only a single seller listed on, and that seller is asking $976.98 for a used copy of the book: 

    Book Review: Expert Oracle Database Architecture: Oracle Database Programming 9i, 10g, and 11g Techniques and Solutions, Second Edition

    16 08 2010

    August 16, 2010

    Foundation Knowledge to Understand Most Any Other Oracle Database Book – Required Reading for All DBAs and Developers

    I pre-ordered this book in May, having read the two previous editions of this book (“Expert One-On-One Oracle” and “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions”) cover to cover.  Now that I have had an opportunity to fully read this edition, I realize that the first edition of this book subconsciously set the standards that I expected of all other Oracle Database books I have subsequently purchased.  The latest edition is now among my five favorite Oracle Database books, sharing the top five ranking with “Cost-Based Oracle Fundamentals”, “Troubleshooting Oracle Performance”, and “Optimizing Oracle Performance”.

    The author of this book is a Vice President at Oracle Corporation, and is well known for his website, articles in Oracle’s magazine, forum/Usenet postings, and presentations.  The author recently revised the Oracle Concepts Guide, which is part of the Oracle Database 11.2 documentation library.  Slightly off topic: It might be notable that a book written by a VP at Oracle Corporation is not published by Oracle Press.  Oracle Press is a marketing label used by McGraw Hill to market its line of Oracle books – the books are not formally written by Oracle Corporation nor are they formally tech edited by Oracle Corporation, although some employees of Oracle Corporation have used McGraw Hill as a publishing company.  At least for me, for several years the “Officially Authorized Oracle Press” logo on books published by McGraw Hill seemed to be an endorsement of the books by Oracle Corporation, but that is not the case, as was clarified by one of the threads on the AskTom site (Reference Reference2).

    The book contents are very well organized, allowing the reader to continually build on previously introduced concepts.  The book provides many forward and backward references – “see the section a1 in chapter n for additional information”.  It is surprising just how much new content is included in this edition of the book – it reads as though it were written specifically for Oracle Database 11g R2, with exceptions in behavior mentioned for older Oracle Database releases.  Even if you have read one of the older editions of the book, I do not recommend just jumping into one of the chapters using either the table of contents or the index at the back of the book.  Start at the beginning of the book and finish at the end.  The author introduces very useful bits of information in the chapters that are not specific to the chapter title and do not appear in the table of contents or the index at the back of the book.  Just a couple of examples of useful information that will be missed (especially if you only read the first half of the book):

    • Page 421: UTL_RAW.CAST_TO_RAW and UTL_RAW.CAST_TO_VARCHAR2 functions with sample code.
    • Page 434: Using DUMP(number, 16) to see the raw bytes used to store values.
    • Page 464: Using DBMS_UTILITY.GET_CPU_TIME to determine the previously used CPU seconds for the session.
    • Page 465: Oracle Database 10g R2’s deterministic function optimization.
    • Page 470: Use of the analytic function RATIO_TO_REPORT
    • Page 472: Creating a UNIQUE function based index to enforce uniqueness of values in a column when another column contains a specific value.
    • Page 477: Oracle Forms by default updates every column including primary key columns, and that behavior could lead to deadlocks.
    • Page 478: Disabling table locks on a table will help detect unindexed foreign keys.
    • Page 487: Using DBMS_SPACE.FREE_BLOCKS to determine the number of index blocks on the index freelist.
    • Page 512: Using the VSIZE function to indicate how much storage space a column value requires.
    • Page 513: Warns of a calculation error (that is not a bug) when adding the floating point numbers 0.3 and 0.1 – the result is not 0.4.
    • Page 646: Using DBMS_STATS.SET_TABLE_STATS to simulate a large table in order  to determine how data volume changes will impact the execution plan generated by the optimizer.
    • Page 656: Using the NTILE analytic function to break a rowset into equally sized groups of rows.
    • Page 687: Starting with Oracle 9i it is possible to insert into one or more tables using complex WHEN conditions.
    • Page 739: Encrypting a column in a table may cause significant changes to the execution plans that specify that column in join or filtering conditions.
    • Page 740: Index range scans are not possible with indexes created for encrypted columns.
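
    The page 472 item deserves a concrete illustration; this is a minimal sketch with hypothetical table and column names, not the book's example:

```sql
-- Allow duplicate PART_NO values overall, but force uniqueness among rows
-- where STATUS = 'ACTIVE'.  The CASE expression returns NULL for all other
-- rows, and entirely NULL keys are not stored in b*tree indexes, so only
-- the 'ACTIVE' rows participate in the uniqueness check.
CREATE UNIQUE INDEX parts_active_uq ON parts
  (CASE WHEN status = 'ACTIVE' THEN part_no END);
```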
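
    The page 646 technique can be sketched as follows; the table name and the statistic values are arbitrary assumptions, not figures from the book:

```sql
-- Pretend the hypothetical T1 table holds 10 million rows in 200,000 blocks,
-- then check whether the optimizer changes its plan at that data volume.
BEGIN
  dbms_stats.set_table_stats(
    ownname => user,
    tabname => 'T1',
    numrows => 10000000,
    numblks => 200000);
END;
/

EXPLAIN PLAN FOR
  SELECT * FROM t1 WHERE id = 100;

SELECT * FROM TABLE(dbms_xplan.display);
```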

    The author uses an odd, yet very helpful, teaching method throughout the book: break something, on the assumption that you are reading the chapter because something is already broken, and then show how to fix the problem and ultimately avoid it in the future.  Unfortunately, the index at the back of the book only contains a small handful of these problem-solution cases – another reason to read the whole book and take good notes.  A list of the errors covered in the book as teachable moments follows:

    • Chapter 1: ORA-00060 (deadlock detected while waiting for resource: page 6), ORA-00942 (table or view does not exist: page 19), ORA-08177 (can’t serialize access for this transaction: pages 37, 38), ORA-02391 (exceeded simultaneous SESSIONS_PER_USER limit: page 44)
    • Chapter 2: ORA-12162 (TNS:net service name is incorrectly specified: page 53), ORA-01078 (failure in processing system parameters: page 55), ORA-16196 (database has been previously opened and closed: page 57)
    • Chapter 3: ORA-00942 (table or view does not exist: pages 70, 76) , ORA-25138 (initialization parameter has been made obsolete: page 71), ORA-01031 (insufficient privileges: pages 75, 110, 112), ORA-02096 (specified initialization parameter is not modifiable with this option: page 78), ORA-12913 (Cannot create dictionary managed tablespace: page 102), ORA-12505 (TNS:listener does not currently know of SID given in connect descriptor: page 111)
    • Chapter 4: ORA-29339 (page 155)
    • Chapter 6: ORA-00054 (pages 210, 224), ORA-00060 (page 211), ORA-00069 (page ?)
    • Chapter 7: ORA-01555 (pages 252, 254, 255), ORA-08177 (pages 252, 261)
    • Chapter 8: ORA-02290 (pages 270, 272, 274, 297), ORA-02292 (page 281), ORA-02291 (page 281), ORA-30036 (pages 288, 289)
    • Chapter 9: Checkpoint not complete (page 321), ORA-01403 (page 324), ORA-01555 (pages 323, 334-344)
    • Chapter 10: ORA-22908 (page 401), ORA-01427 (page 401)
    • Chapter 11: ORA-25954 (page 459), ORA-01450 (pages 465, 466), ORA-01743 (page 472), ORA-00001 (page 476), ORA-00069 (page 478)
    • Chapter 12: ORA-12899 (page 504), ORA-01461 (page 504), ORA-01465 (page 507), ORA-01438 (pages 510-512), ORA-00932 (page 518), ORA-01839 (pages 529, 534), ORA-30079 (page 538), ORA-22285 (page 553)
    • Chapter 13: ORA-00376 (pages 559, 595, 611), ORA-01652 (page 562), ORA-14511 (page 562), ORA-01843 (page 567), ORA-14400 (pages 569, 575), ORA-14767 (page 576), ORA-14652 (page 576), ORA-14659 (page 585), ORA-14402 (page 585), ORA-01408 (page 598), ORA-02429 (page 600), ORA-14038 (page 600)
    • Chapter 14: ORA-12838 (page 633)
    • Chapter 15: SQLLDR-601 (page 660), ORA-22285 (page 679), SQL*Loader-144 (page 688), ORA-29913 (page 691), ORA-06512 (page 692)
    • Chapter 16: ORA-28365 (page 718), ORA-28348 (page 720)

    Helpful Foundation Knowledge Found in the Book:

    • Notes at the front of the book show how to quickly install Statspack, change AUTOTRACE settings, permanently configure the SQL*Plus environment, and the requirements for the Runstats, Mystat, Show_Space, and Big_Table scripts that are used later in the book.
    • Provides a warning not to use bitmap indexes on columns just because the columns have low cardinality, for example a column named PROCESSED that has either a value of Y or N, without first considering concurrency – how will the column be used and updated.
    • Pages 17 through 21 demonstrate SQL injection when bind variables are not used in an example that most people probably would not consider as a threat – the SQL injection was accomplished by simply changing the NLS_DATE_FORMAT (Reference).
    • Page 72 warns against setting undocumented parameters: “Use undocumented parameters only at the request of Oracle support. Their use can be damaging to a database, and their implementation can – and will – change from release to release.”  This is a warning that should be included in every book before the first mention of hidden parameters.
    • The book provides a detailed list and description of the functionality provided by Oracle’s background processes.
    • Demonstrates how to build test cases to verify performance/scalability issues.
    • Pages 262-265 provide a very detailed demonstration of transactions automatically restarting.  This behavior might cause, for example, before update triggers to fire more than once, various problems with PL/SQL variables, autonomous transactions, and various PL/SQL packages (UTL_FILE, UTL_SMTP, UTL_HTTP, etc.).
    • Excellent section on the topic of index myths, one of which had its roots in Oracle Database 5.
    • Excellent six case summary of why an index may not have been used.
    • Shows a lot of examples of practical SQL and PL/SQL in test cases, which is extremely helpful for people trying to learn either SQL or PL/SQL.  The test cases frequently include TKPROF output.
    • The book includes a nice analogy of Oracle Database’s parallel execution option by comparing that feature with writing either a short or long report – coordinating the writing of a small report could take longer than simply writing the report without assistance.
    • Excellent section dedicated to answering frequently asked questions about SQL Loader.
    • The chapter showing the use of external data includes examples with SQLLOADER, external tables, flat files, and Data Pump unload (used to write out the contents of a table to allow that table’s rows to be used in another database).
    • The book almost always lists the first Oracle release version that offered a feature when that feature is introduced in the book.
    • Typically indicates when syntax introduced in the book is operating system dependent.  For example, on page 682 “ls –l” is specified for Unix/Linux and “dir /q /n” for Windows.
    • The downloadable source code library seems to be complete, making it easy for readers to reproduce the tests that are found in the book.
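
    The pages 17 through 21 demonstration is worth sketching, because the vulnerable pattern looks harmless; the procedure and table names here are hypothetical, and only the shape of the attack follows the book:

```sql
-- Vulnerable pattern: a DATE parameter concatenated into dynamic SQL.  The
-- implicit date-to-string conversion uses NLS_DATE_FORMAT, which the calling
-- session controls, e.g.:
--   ALTER SESSION SET nls_date_format = '"'' or order_id is not null --"';
-- The double-quoted literal in the format mask is emitted verbatim,
-- injecting text into the WHERE clause of the generated statement.
CREATE OR REPLACE PROCEDURE list_orders(p_after IN DATE) AS
  c SYS_REFCURSOR;
BEGIN
  OPEN c FOR 'SELECT order_id FROM orders WHERE order_date > ''' ||
             p_after || '''';
  CLOSE c;
END;
/

-- Safe alternative: bind the date instead of concatenating it.
--   OPEN c FOR 'SELECT order_id FROM orders WHERE order_date > :d'
--     USING p_after;
```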

    Not everything about the book is perfect.  The errata page for the book currently lists a problem where the author used the word “be” too many times in a sentence.  What else needs a little fine tuning, is not quite right, or is simply incorrect?  The following are my observations about the book:

    • The book frequently seems to assume that a fully optioned version of the Enterprise Edition of Oracle is used by all of the readers of the book, which may set the readers up for false hope in solving certain problems, or implementing solutions that cannot be used in production environments (the chapters discussing bitmap indexes, partitioning, and parallel execution are exceptions, where the requirements are stated):
      — Advanced Queuing on page 10
      — SQL Profiles on page 11
      — FLASHBACK on page 26
      — Fine-grained access control on page 46
      — Index key compression on page 407
      — Chapter on the topic of Data Encryption does not mention what capabilities are available for the Standard Edition, Enterprise Edition, or the Enterprise Edition with the Oracle Advanced Security Option.
    • Page xxxvi describes running the utlxplan script as the user SYSTEM.  Running this script should not be necessary starting with Oracle Database 9i (Reference).
    • Chapter 1 seems to be a little loose with terminology, specifically with the terms “database” and “Oracle”.  The chapter was probably intentionally written this way to help introduce the reader to the world of Oracle Database.  The first couple of pages of the second chapter provide definitions of “database” and “instance”.
    • Some code sections of the book have lines with unnecessarily bold fonts (page 13, 29, 33, 34, 37, 53, 54, 59, 61, 175).
    • Page 109 typing error: “Normally, on UNIX/Linux, this group will be DBA by default, and OSDBA on Windows. That group is ‘special,’ in that any user in that group can connect to Oracle ‘as SYSDBA’ without specifying a username or password.” On Windows the group name is ora_dba, not OSDBA – the same error is found in the previous edition of the book (Reference).
    • Page 131 behavior change: “A serial (nonparallel query) session will use a small percentage of the PGA_AGGREGATE_TARGET, typically about 5 percent or less.  So, if you’ve set the PGA_AGGREGATE_TARGET to 100MB, you’d expect to use no more than about 5MB per work area (e.g., the sort or hash work area). You may well have multiple work areas in your session for multiple queries, or more than one sort or hash operation in a single query, but each work area will be about 5 percent or less of the PGA_AGGREGATE_TARGET.”  It appears that this behavior changes when the OPTIMIZER_FEATURES_ENABLE parameter is set to or greater, as evidenced by the change in the _SMM_MAX_SIZE parameter as found in a 10053 trace file or a query of X$KSPPSV and X$KSPPI (Reference Reference2).
    • Page 220 join error: The self-join of V$LOCK probably should have also joined on the TYPE column in addition to the ID1 and ID2 column.  Additionally, the BLOCK column could potentially have a value of 2, rather than 1 in a RAC environment (Reference).
    • Page 335 typing error: shows the view name V$UNDOSTATUNDOSTAT, should be V$UNDOSTAT.
    • Page 419 suggested use of a deprecated feature: Shows using the exp command with Oracle Database to create a people.sql file containing a CREATE TABLE command for an object based table.  Page 116 states the following “Export [exp] is officially deprecated with Oracle Database 11g Release 2. It is supplied only for use with legacy database structures.”
    • Some of the autotrace generated execution plans do not conform to the DBMS_XPLAN style execution plans that are automatically generated as of Oracle Database 10g R2, even though it appears that the author was connected to Oracle Database 11g R2.  Examples include the two execution plans on page 480.
    • The SQL*Plus prompts displayed in the book are atypical (not “SQL>”) – while it might be helpful to show which database the SQL*Plus session is connected to, the longer prompt makes it hard to read some of the code samples that span more than a single line.
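
    A corrected version of the page 220 blocker/waiter query might look like the following sketch (not the book's exact SQL):

```sql
-- Join waiters to blockers on lock TYPE as well as ID1/ID2, and accept
-- BLOCK values greater than 0 (a value of 2 is possible in RAC).
SELECT b.sid AS blocker_sid, w.sid AS waiter_sid, w.type, w.id1, w.id2
FROM   v$lock b,
       v$lock w
WHERE  b.block   > 0
AND    w.request > 0
AND    b.type = w.type
AND    b.id1  = w.id1
AND    b.id2  = w.id2;
```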

    The problems found in the book are absolutely minor in comparison to the value of the information provided in the book.  Yes, it is probably harsh to be judged by the standards set in the previous edition of the book.  This edition update was certainly not a five hour task, and probably was not a five month task either.  I suspect that this edition has been in development for most of the last five years, since the publication of the previous edition.  If you have read the previous editions of this book, set those books aside and start reading from the beginning of this edition – you will not be disappointed.

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Review: Oracle Performance Firefighting (Fourth Printing)

    25 06 2010

    June 25, 2010

    Many Helpful Topics, However the Errors and Omissions are Unnecessary Distractions

    This review covers the June 2010 fourth printing of the “Oracle Performance Firefighting” book.  I bought a PDF copy of the book direct from the OraPub site.  The author’s excellent “Forecasting Oracle Performance” book was a factor in my decision to purchase this book, as was my desire to learn something new while trying to maintain previously acquired knowledge.  As stated in this book, the author has roughly 20 years of experience working with Oracle Database, worked with Cary Millsap at Oracle Corporation for a period of time, and conducts a successful consulting and training business.  I recognize the names of three of the six technical reviewers listed in the book as well known experts in the Oracle community.  The author’s website states that there are no known issues/errata starting with the third printing (November 2009) of the book (reference).  All of the components for the book’s success are in place, so how did the book perform?

    There are definitely positive moments in the book, where the book successfully communicates complicated topics using very clear statements.  The book publisher selected a font size and margin size that packs a significant amount of detail into each of the book’s 387 pages.  There are several well-placed helpful diagrams with no space wasting cartoon drawings.  The author introduced his three circle analysis, which involves identifying the intersection of the Oracle Database, the Application, and the Operating System to help pinpoint the source of performance problems.  Several examples were provided where the author used the Linux strace command to troubleshoot Oracle Database performance problems.  The book recommends against modifying the _SPIN_COUNT hidden initialization parameter, which is wise advice.  It clearly states that when there is an operating system bottleneck, there are only four subsystems to investigate: CPU, memory, I/O, and network.  It provides a good demonstration of using DBMS_MONITOR to enable 10046 traces, and a nice description of in-memory undo and queuing theory in easily understood language.  The author’s website offers the OSM toolkit, a collection of scripts that can be freely downloaded – several of the scripts were used throughout the book.  It is obvious that the author invested a significant amount of time into the writing of this book.
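
    The DBMS_MONITOR demonstration the book provides is generally of the following form; the SID and SERIAL# values here are placeholders:

```sql
-- Enable an extended SQL trace (the 10046 level 12 equivalent) for another
-- session, then disable it when finished.
BEGIN
  dbms_monitor.session_trace_enable(
    session_id => 123,    -- placeholder SID
    serial_num => 456,    -- placeholder SERIAL#
    waits      => TRUE,   -- capture wait events (level 8)
    binds      => TRUE);  -- capture bind values (with waits, level 12)
END;
/

BEGIN
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
END;
/
```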

    There are also definitely negative moments in the book, sections filled with typing errors, logic errors, and omissions.  Signs of problems include a paragraph that names four hidden initialization parameters that the author suggests adjusting, without the author indicating by how much the initialization parameters should be adjusted, and without the author explaining why three of the four hidden initialization parameters simply do not exist in Oracle Database.  Some sections of the book are written in a condescending tone, as if no one other than a DBA is intelligent enough to understand the challenges that a DBA or performance tuning consultant faces, even if the concepts are provided in simple terms to others.  Very few reproducible test cases are provided in the book, which makes it a little difficult for readers to practice or duplicate the concepts introduced in the book.

    This book is not a replacement for the “Troubleshooting Oracle Performance” book – the two books share very little in common.  This book seems to spend little time explaining how to understand the root cause of the performance problems through various trace file analyses (10046, 10053, 10032, etc.) or even analysis of execution plans, while the author does spend a lot of effort suggesting changes to initialization parameters.  The book’s author is in a somewhat unique position in that he has either a 3 out of 5 star book, or a 5 out of 5 star book depending on the corrections that make it into the fifth printing of the book.  The PDF version of the book cannot be printed, does not permit copying text, and has the purchaser’s name, address, and phone number at the top of each page.

    Specific errors, omissions, and distractions in order:

    • The book uses a lot of footnotes, some of which do not add value to the discussion of Oracle Database.
    • Page 16 suggests to keep working to improve performance until time and budget are exhausted, or performance is acceptable.  I think that this is a pathway to compulsive tuning disorder.
    • Page 22 suggests that when the CPU run queue is usually longer than the number of CPU cores, processes need to wait for CPU resources.  This suggestion raises the question: isn’t it possible for processes to have to wait for CPU resources before the CPU run queue reaches the length of the number of CPU cores?
    • Chapters one and two seem to spend a lot of time trying to relate Oracle firefighting concepts to non-computer related activities: telephone operator, school report card, doctor checking a young child, clients requiring the author to keep a timecard to track time, etc.  Additionally, the early chapters excessively repeat the terms firefighting and firefight.
    • Page 41: Missing letter “Oracle has never guaranteed it has completely instrumented it code.”
    • Page 42: Missing letter “By tracing an Oracle process through the operating system, we can easily observe how Oracle has instrumented it code.”
    • Page 43: Incorrectly defines the meaning of a db file scattered read wait event when attempting to correlate a Linux readv OS call to an Oracle wait event: “Regardless of the operating system or the actual system call, Oracle gives a multiblock IO call the special name db file scattered read, because these multiple blocks can be scattered over the IO subsystem.”  One of the problems with the quote is that a multiblock read from the operating system could correspond to several different Oracle wait events.  A second problem is that the readv call reads logically adjacent sections of a file (think multiple Oracle blocks) on disk into non-adjacent memory locations (reference).
    • Page 47: While Table 2-1 states that only selected columns of V$SYSTEM_EVENT are displayed, the table probably should have included the TOTAL_TIMEOUTS column, which helps describe the true length of wait events that have, for instance, three second timeouts – this column has existed since Oracle Database 8.1.5.  The table also states that the WAIT_CLASS, WAIT_CLASS_ID, and WAIT_CLASS# columns (the book incorrectly includes an underscore between CLASS and #) were introduced in 11g R1, while the Oracle documentation shows that those columns were introduced in 10g R2 (reference).
    • Page 49: While Table 2-2 states that only selected columns of V$SESSION_WAIT are displayed, the table seems to discount the value of the SECONDS_IN_WAIT column by excluding that column.  The table also states that the WAIT_CLASS, WAIT_CLASS_ID, and WAIT_CLASS# columns (the book incorrectly includes an underscore between CLASS and #) were introduced in 11g R1, while the Oracle documentation shows that those columns were introduced in 10g R2 (reference).  The table should also provide a better description for the STATE column’s value of “WAITED KNOWN TIME: Previous wait completed; the session is currently not waiting.” – the book should explain what this STATE value means: is the session in an idle wait event, or is the session currently burning CPU time?  (The latter is the case.)
    • Page 55: States “Systems based on Oracle Database 10g and later can take advantage of the v$sys_time_model and v$ses_time_model views…” – the session level view is actually named V$SESS_TIME_MODEL.
    • Page 61: The book strongly suggests that a user’s “Submit” or “Save” operation equates to a COMMIT, and that a COMMIT equates to a LOG FILE SYNC wait event, and therefore if the average LOG FILE SYNC wait time is identical to a previous day when performance was acceptable, the performance problem is “most likely not related to the database.”  A lot may be said about where this paragraph fails the reality check – just think of one other operation that may happen when the user selects the “Save” option, or one other cause for the LOG FILE SYNC wait, or how averages may distort the true slowness that is experienced by the user, or how the author is simply measuring the wrong thing when trying to determine if the problem is present in the database (reference).
    • Page 68: Example of condescending tone: “When working on Oracle systems, you will eventually hear some poor whiner say something like, ‘Oracle is a resource hog!  It consumes every bit of resources it can get its despicable hands on!’”  More examples of condescending/confrontational tone with the server admin, storage admin, storage vendor, etc. are found on pages 128, 130, 132, 148, 156, 201, 264, 324.
    • Page 74: Example of an odd footnote that does not add value to the book: “I am in no way implying Oracle sincerely cares how much memory its database system requires.”
    • Page 94: The author indicated that one of his tests that modified the _KKS_USE_MUTEX_PIN parameter showed that the mutex implementation decreased the execution time by 11%, yet the author did not provide a SQL statement to create the test table, nor provide the PL/SQL script used in the test so that readers could reproduce the test results.
    • Page 104: The book states: “Virtual memory management, process scheduling, power management, or essentially any activity not directly related to a user task is classified as system time.  From an Oracle-centric perspective, system time is pure overhead…”  The book fails to mention that activities in Oracle Database itself directly cause kernel mode CPU utilization.  Network I/O, disk I/O (especially with buffered file systems), timer calls (typically gettimeofday calls), memory access requests, memory management, and process creation and termination (think session creation and termination) all directly cause kernel mode CPU utilization.  The paragraph seems to suggest that the DBA has no control over kernel mode CPU consumption (reference reference2).
    • Page 116: States: “While it may seem strange, the run queue reported by the operating system includes processes waiting to be serviced by a CPU as well as processes currently being serviced by a CPU.”  This is an operating system specific observation.  Linux appears to behave this way, Windows does not, and others have indicated that various Unix releases do not behave the same as Linux (reference).
    • Page 136: States: “For example, if you ran a response time-focused report over a duration of 60 seconds, and if the single-CPU core subsystem were heavily overworked, Oracle could report that Oracle processes consumed 65 seconds of CPU time. Yet, we clearly know the CPU subsystem is limited to supplying up to 60 seconds of CPU time and no more.”  This appears to be an operating system specific CPU usage measurement problem.  Windows will exhibit this problem, where time spent in the run queue is accumulated with time spent actually running on the CPU, but I could not reproduce that behavior on Linux (reference).
    • The book seems to use Oracle 10g R1 for various tests, rather than something more recent, such as 10g R2.  Oracle 10g R1 was desupported quite a long time ago, while the patch was only recently released.  In the span of a couple of sentences, the book frequently uses the term “Oracle” to mean both “Oracle Database” and “Oracle Corporation” – this might cause some degree of confusion.
    • Page 175: The book seems to suggest that bind variable values are typically included in TKPROF summaries, except when multiple trace files are combined.  Bind variable values are never included in TKPROF summaries.
    • Page 177: The book seems to recommend changing hidden initialization parameters without warning that those parameters should only be modified after consulting Oracle support.
    • Page 182: The book suggests modifying the _ASH_SAMPLING_FREQUENCY hidden initialization parameter.  Unfortunately, this parameter does not exist in Oracle Database (the only release that I checked).
    • Page 183: The book suggests that changing the _ASH_SIZE hidden initialization parameter will modify the amount of memory used by Active Session History, yet the book mentions that Oracle does not always respect the setting.  This might be a sign that the parameter controls something other than what the author described.  The author also mentioned changing the  _ASH_ENABLE hidden initialization parameter.
    • Page 200: The author suggests that the _DB_BOCK_MAX_CR_DBA hidden initialization parameter controls the number of clones per buffer; however, that parameter does not exist in Oracle Database.  I believe that the author intended to write _DB_BLOCK_MAX_CR_DBA, which attempts to control the maximum number of consistent read copies of a block.
    • Page 203: The author suggests that changing the _DB_BLOCK_HASH_LATCHES hidden initialization parameter will provide some relief to cache buffer chains latch contention.  This is one of the few times that the author cautions, “Before you change this, be aware there may be support issues from both Oracle and your application vendor.”
    • Page 208: The book states that the _SMALL_TABLE_THRESHOLD hidden initialization parameter was deprecated because of the modified LRU (Least Recently Used) algorithm that the book stated was introduced in Oracle Database 8.1.5, and that the parameter assumed new life with the introduction of Oracle Database 11g, where the “parameter is the threshold for a server process to start issuing direct reads.”  Metalink (My Oracle Support) Doc ID 787373.1 indicates that the parameter was never deprecated.  The actual threshold for selecting a direct path read for a serial operation in Oracle Database and above is roughly five times the value specified for _SMALL_TABLE_THRESHOLD.  Various other articles describe the purpose of the _SMALL_TABLE_THRESHOLD parameter prior to Oracle Database (reference reference2 reference3).
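    For readers who want to check the parameter's value themselves, a sketch follows; it assumes the commonly described X$KSPPI/X$KSPPCV fixed table column names and must be run as SYS:

```sql
-- Checking the current value of _small_table_threshold; hidden
-- parameters are exposed through the X$KSPPI / X$KSPPCV fixed tables
SELECT i.ksppinm   AS parameter_name,
       cv.ksppstvl AS current_value
FROM   x$ksppi  i,
       x$ksppcv cv
WHERE  i.indx = cv.indx
AND    i.ksppinm = '_small_table_threshold';
```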
    • Page 213: The book states: “Without physical blocks being read from disk, there will be no LRU chain latch contention, because there will be no need to find a free buffer or insert a buffer header into an LRU chain.”  This statement begs the question, what about consistent read copies of blocks, where Oracle must essentially roll back a block to a specific SCN to provide a consistent read for an in-process query?
    • Page 219: The book states: “Investigate the _db_block_write_batch and the _db_writer_max_writes parameters. Also considering increasing the instance parameter _db_writer_max_scan_pct (the default may be 40, for 40%) or _db_writer_max_scan_cnt…”  One problem with these two sentences is that the _DB_BLOCK_WRITE_BATCH, _DB_WRITER_MAX_SCAN_PCT, and _DB_WRITER_MAX_SCAN_CNT hidden initialization parameters do not exist in Oracle Database (the only release checked) (reference).
    • Page 221: The book states: “Now suppose you have scanned more than _db_writer_max_scan_pct buffer headers. If so, you would be very frustrated.”  Unfortunately, the _DB_WRITER_MAX_SCAN_PCT hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 222: The book states: “Increase _db_writer_max_scan_pct. This will give the database writer more time to flush its write list.”  Unfortunately, the _DB_WRITER_MAX_SCAN_PCT hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 223: The book states, “Strangely, in Oracle Database 11g, Oracle removed the reason code from the v$session_wait and v$session view’s p3 column!”  Column P3 was changed from the reason code to block class starting with Oracle Database 10g R1.
    • Page 226: The author used block dumps to investigate locks, while it seems that it would be easier to query a couple of views.
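    For example, rather than dumping blocks, a sketch along the following lines (using the standard V$SESSION and DBA_OBJECTS views) usually identifies the blocker and the object involved directly:

```sql
-- Identifying blocked sessions and the contended object without
-- resorting to block dumps; BLOCKING_SESSION exists as of 10g
SELECT s.sid,
       s.blocking_session,
       s.event,
       o.owner,
       o.object_name,
       s.row_wait_file#,
       s.row_wait_block#,
       s.row_wait_row#
FROM   v$session   s,
       dba_objects o
WHERE  s.row_wait_obj# = o.object_id
AND    s.blocking_session IS NOT NULL;
```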
    • Page 230: The book states, “… Simply keep adding free lists until the contention subsides.” This begs the question: what side effects may be encountered?
    • Page 231: The book describes how active transactions are assigned to undo segments.  Limited testing indicates that the book is correct unless the objects are created in the SYSTEM tablespace (creating objects in that tablespace is not recommended).
    • Page 253: The book used library_cache dumps to investigate child cursors – it seems that there are easier, less resource intensive ways to obtain this information.
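    For example, the following hedged sketch (the :sql_id bind variable is a placeholder for the statement of interest) lists the child cursors from V$SQL, and V$SQL_SHARED_CURSOR's Y/N columns then indicate why each additional child was built:

```sql
-- Listing the child cursors for one SQL statement without a dump
SELECT sql_id,
       child_number,
       plan_hash_value,
       executions
FROM   v$sql
WHERE  sql_id = :sql_id        -- placeholder: the SQL_ID of interest
ORDER BY child_number;

-- The Y/N columns in this view record the reason each child was created
SELECT *
FROM   v$sql_shared_cursor
WHERE  sql_id = :sql_id;
```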
    • Page 256: The book states: “Oracle keeps track of these reloads in the reloads column from v$library_cache…” – the correct view name is V$LIBRARYCACHE, not V$LIBRARY_CACHE.
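    For reference, the RELOADS column lives in the correctly named view and can be examined with something like:

```sql
-- The view is V$LIBRARYCACHE (one word, no underscore)
SELECT namespace,
       gets,
       pins,
       reloads,
       invalidations
FROM   v$librarycache
ORDER BY reloads DESC;
```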
    • Page 265: The book mentions the use of CURSOR_SHARING = SIMILAR and FORCE, but does not discuss the performance impact of the change other than stating that Mark Gurry stated that CURSOR_SHARING = FORCE can result in incorrect results.  No other discussion of performance problems was provided.
    • Page 284: The book describes the author’s under-powered Linux server used in a test.  The server was configured with 512MB of memory, 256MB shared pool, 4MB for the buffer cache and was running Oracle Database 11g Release 1.  The Oracle documentation (reference) states that 1GB of memory is the minimum for Oracle Database 11g Release 1, and that puts into question the validity of any tests performed with that server.  The details of the test case, which obtained a 21% decrease in CPU usage through the use of in-memory undo over the configuration with in-memory undo disabled, were largely omitted, making it impossible for readers to duplicate the test.
    • Page 286: The book lists a parameter named COMPATIBILITY, however that parameter does not exist.  The actual parameter name is COMPATIBLE.
    • Page 289: The book states: “Increase the number of IMU pools by increasing the _im_pools parameter…”  The _im_pools hidden initialization parameter does not exist in Oracle Database (the only release checked) – the author probably intended to write _IMU_POOLS.
    • Page 294: The test script shows that redo statistics for a session are recorded, the session sleeps for 5 seconds, the session queries a table, and then the redo statistics are recorded again.  The author attempted to explain that redo could be generated by a session merely by querying a table due to delayed block cleanout.  The test case is slightly unclear, making it difficult for readers to reproduce the results: there is no point in the session sleeping for 5 seconds; it would have been helpful if the author had explained what changes were made to the CUSTOMERS table that eventually resulted in the delayed block cleanouts; and because the statistic number for the redo size statistic is prone to change from one Oracle release to the next, it would have made more sense to specify SN.NAME=’redo size’ rather than essentially specifying SN.STATISTIC#=133.
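    Joining to V$STATNAME by statistic name avoids hard coding the release-dependent statistic number, roughly like this:

```sql
-- Capturing the current session's redo generation by statistic name
-- rather than by the STATISTIC# value, which changes between releases
SELECT sn.name,
       ms.value
FROM   v$mystat   ms,
       v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name = 'redo size';
```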
    • Page 297 states: “… There is also the instance parameter _log_small_entry_max_size, which is used to shift allocation latch activity onto one of the redo copy latches…”  The _LOG_SMALL_ENTRY_MAX_SIZE hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 298 states: “The DBA can specifically set the number of redo strands via the hidden parameter _log_parallelism.”  The _LOG_PARALLELISM hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 300 states: “Global temporary tables can be explicitly dropped. When this occurs, the associated indexes and views are also dropped.”  The associated indexes will be dropped assuming that the temporary table is also purged, however affected views will not be dropped and will have a status of INVALID.
    • Page 301 states: “Object growth can be observed by repeatedly by querying v$sort_usage_view.  Global temporary tables create 50% less redo than standard tables.” This quote has a minor grammar error, and the view name is V$SORT_USAGE (should not include _VIEW in the name).  The question remains how the author determined that working with global temporary tables generates half as much redo as regular heap tables.
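    For reference, the correctly named view may be queried roughly as follows (the column list is a typical selection):

```sql
-- Observing temporary segment usage, including global temporary tables;
-- the view is V$SORT_USAGE, without a _VIEW suffix
SELECT username,
       tablespace,
       segtype,
       blocks
FROM   v$sort_usage;
```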
    • Page 302: Another instance where the author correlates an application save with a LOG FILE SYNC wait and concludes that the problem is not in the database instance.
    • Page 305: While it is mentioned on the next page in another section, it probably would have made sense to mention here that the redo log buffer is also flushed every three seconds, and to repeat that the redo log buffer is, by default, also flushed after every COMMIT.
    • Page 314: The footnote states: “When a process is serviced by one CPU and then is switched to another CPU, it undergoes a context switch… Exclusively binding a CPU and process together virtually eliminates context switching, because the CPU will always be there for that process and the process will never need to wait while the operating system switches its processing to another CPU.”  The author seems to have either redefined the meaning of context switch, or the footnote needs additional clarification (reference reference2).
    • Page 315 states: “The redo log buffer is structured in relatively small operating system-size blocks (typically 512 bytes), not Oracle-size blocks, so it is normal for log writer activity to be performed with multiple blocks.”  The Oracle documentation (reference) states that the redo log file block size defaults to the physical sector size of the disk, which is typically 512 bytes.
    • Page 322 states: “The heavy redo-generating SQL will obviously be some type of DML, like inserts, updates, deletes, or perhaps create statements. If you have ASH licensed, as Figure 8-9 demonstrated, you can easily find the SQL related to the log file parallel write event.” – The heavy redo generating SQL statement could also be a SELECT statement due to delayed block cleanout (as mentioned earlier in the book), a CREATE statement would likely be classified as DDL.  Since the LOG FILE PARALLEL WRITE wait event is posted by LGWR and not a user session, there probably will not be a SQL statement that is definitely associated with the LOG FILE PARALLEL WRITE wait event.
    • Page 324: The book makes a claim that is difficult to verify: “Using global temporary tables can cut redo generation by 50%, substantially reducing Oracle’s overall IO requirements. This alone could solve the log file parallel write problem.”
    • Page 326 states: “If the log writer encounters a nonarchived redo log, it will stop writing and post the wait event log file switch (archive incomplete).”  The actual wait event that will be posted is log file switch (archiving needed).
    • Pages 327 – 387 seem to be related to the author’s “Forecasting Oracle Performance” book, with concepts written in an easier to understand, more fluid writing style.  This is not a problem, just worth mentioning.
    • Page 330: The word “difference” should be replaced with “different”.
    • Page 330 states: “For example, the Statspack facility typically collects data in 60-minute intervals and stores the data in tables starting with stats$ (the key table is stats$snap).”  There is no typical collection period for Statspack, while AWR snapshots default to 60 minutes in duration.
    • Page 333 states: “In previous chapters, I have presented how to gather CPU requirements from the v$sesstat, v$sysstat, v$ses_time_mode, and v$sys_time_model views.”   V$SES_TIME_MODEL should be written as V$SESS_TIME_MODEL.
    • Page 343 states: “Also, Oracle CPU consumption includes Oracle processes waiting for CPU and also queuing for CPU. As a result, in a CPU-saturated system, Oracle may report CPU consumption higher than actually occurred.”  The behavior mentioned by the author is operating system dependent.
    • Page 344 states: “In other words, Oracle does not have the ability to split CPU time into service time and queue time. When we gather CPU consumption from either the instance statistics views or the system time model views, what we collect as CPU time and typically classify as CPU service time actually contains both CPU service time and CPU queue time. Oracle has no way of knowing the difference and reports the total, which is, in the truest sense, CPU response time.” This behavior is operating system dependent.
    • Page 346 states: “The arrival rate in Figure 9-9, which is the horizontal axis, is simply the number of logical IOs (v$sysstat: buffer gets plus consistent gets) processed per millisecond.”  There is no statistic in V$SYSSTAT that is named buffer gets, the author probably should have stated db block gets instead. (reference).
    • Page 347 states: “The instance statistic physical reads signifies the number of Oracle blocks that Oracle server processes had to request from the operating system because the blocks did not reside in Oracle’s buffer cache.”  The physical reads statistic includes block reads that will never hit the buffer cache, including blocks read into the PGA.  It appears that the author is describing the physical reads cache statistic instead.
    • Page 349 states: “…when focusing on a particular session or group of sessions, use v$sesstat, v$ses_time_model, and v$session_event.” The correct view name is V$SESS_TIME_MODEL.

    The “Oracle Performance Firefighting” book has a lot of potential for being helpful; however, the errors and omissions found in the fourth printing of the book give the book a decidedly dangerous edge.  If flipping initialization parameter values fits your style, this book will be a good fit.

    Pages 352 through 387 are currently not included in this book review.

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Review: Oracle SQL Recipes: A Problem – Solution Approach

    6 06 2010

    June 6, 2010 (References Last Updated June 7, 2010)

    Answering the Question – What can I do with SQL? A Handful of Errors Tarnish a Great Book

    Last November I pre-ordered this book, hoping that it would pick up where the “Mastering Oracle SQL and SQL*Plus” book ended (at roughly Oracle Database release ).  The expectations were very high, and this book delivers for the most part.  Some parts of the book left me scratching my head, thinking that this SQL statement or paragraph just cannot be correct.  Even when taking those sections into account, this book conveys a significant amount of practical, useful information.

    The book’s format of problem statement, solution, and how/why the solution works is the key to the success of this book.  You will not find page after page of SQL statement syntax.  The range of problem statements covered in this compact 500 page book is impressive, touching on a wide range of topics:

    • Basic SELECT statements
    • Manipulating numbers, dates, and character data
    • Transaction management
    • XML data
    • Partitioning
    • Analytic functions
    • Regular expressions
    • Performance tuning
    • Database troubleshooting
    • Creating databases, tablespaces, tables, indexes, and constraints


    • Thoroughly answers the question of why it is important to learn Oracle’s dialect of SQL.  The book provides 233 problem situations, most with more than one variation of the problem along with the SQL statement for each variation.
    • Includes interesting side stories that are not necessarily related to SQL coding, but help to keep the reader interested in the book contents.  For example, a side story titled “Beer and Statistics Do Mix” describes the invention of T-Tests.
    • Typically states when a feature requires the Enterprise Edition or an additional cost license beyond the Enterprise Edition (Tuning Pack, for example).
    • Most, if not all of the code samples from the book may be downloaded from the Apress website.
    • Describes the LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, SUM, MIN, MAX, NTILE and a couple of the other common analytic functions.  Does not cover all of the analytic functions, for example: RATIO_TO_REPORT, FIRST_VALUE, LAST_VALUE, PERCENTILE_DISC, etc.
    • Recipe 5-8 shows how to create a PL/SQL function that converts from base 10 to binary, octal, and hex.
    • Recipe 11-9 shows an impressive example that creates an HTML chart from a SQL statement.  It might be interesting to see this example extended to use floating objects with absolute positioning rather than HTML tables.
    • Recipes 12-3 and 12-4 include PL/SQL functions that help to determine if a VARCHAR2 value can be safely converted to a NUMBER or DATE.

    Problems (in order, not all code samples tested):

    • In recipe 1-1, retrieving data from a table, the authors mentioned “currently set schema,” a concept which probably would have been much better to introduce in a later recipe.  Especially in the early sections of the book, it seems that the authors struggled with what concepts to introduce early and what concepts required a more established foundation.  As such, people just learning SQL, or even just Oracle’s dialect, will struggle with the early sections of the book.
    • Several recipes (1-4, 2-11, 2-14, 2-15, 2-16, 6-5, 6-14) use implicit date conversions, or explicit varchar to date conversions without specifying the format of the date contained in the varchar.  This is a very poor practice to teach to people who are just beginning the process of learning SQL.  A change in the NLS parameters, or even a change in the connection method from Oracle’s ODBC driver to Oracle’s OLEDB driver is sufficient to cause such SQL statements to fail.  Unfortunately, the TO_DATE function was not introduced until chapter 6.
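    The safer pattern the recipes should have used is an explicit TO_DATE with a format mask; a brief sketch (EMPLOYEES and HIRE_DATE are illustrative names, not from the book):

```sql
-- Fragile: silently depends on the session's NLS_DATE_FORMAT setting
SELECT *
FROM   employees
WHERE  hire_date >= '01-JAN-2009';

-- Robust: the character-to-date conversion format is stated explicitly
SELECT *
FROM   employees
WHERE  hire_date >= TO_DATE('01-JAN-2009', 'DD-MON-YYYY');
```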
    • Page 26 incorrectly stated, “for performance reasons, Oracle will implicitly sort the data to match the grouping desired.”  As of Oracle Database 10.1 [edit: probably should state 10.2 for hash group by, previous releases did not necessarily need to perform a sort], more often than not a hashing algorithm will be used to perform GROUP BY operations, which means that there will be no implicit sort of the data. [Reference/Test Case] [Reference/Test Case]
    • Recipe 3-5 attempted to explain when to use three different syntaxes to delete rows from a table (IN, EXISTS, and SELECT with USING clause).  As may be confirmed with a 10053 trace file, the optimizer will often automatically re-write SQL statements from one form to another, for example from an IN clause to a standard join, or an EXISTS clause to a standard join.  The authors probably should have mentioned the automatic query re-write capability of the cost-based optimizer, rather than stating something like this about an EXISTS clause example “this is not as elegant as the first solution, but might be more efficient.” [Reference/Test Case – Check the Comments Section Also]
    • Recipe 5-11 demonstrates how to use the FOR UPDATE clause of a SELECT statement and demonstrates that the second session will be blocked until the first session commits or issues a rollback.  At this stage of the book the NOWAIT clause is not introduced.  Unfortunately, such a demonstration might be an invitation to “lost” updates.
    • The query output for recipe 6-6 does not match the expected output, which may lead to confusion.  The book shows that the result of the timezone conversion is “13-AUG-09 PM AMERICA/LOS_ANGELES” when it should be “13-AUG-09 PM AMERICA/LOS_ANGELES”
    • Recipe 6-12, which uses the EXTRACT function, will produce incorrect/unexpected results, resulting in the MONTHS calculation increasing by one up to 15 days earlier than should be expected if used to determine the length of time an employee has worked.  That recipe would have been the perfect time to introduce the MONTHS_BETWEEN SQL function (not included in the book). [Reference/Test Case – Check the Comments Section Also]
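    A brief illustration of the function that recipe 6-12 should have introduced (EMPLOYEES and HIRE_DATE are illustrative names):

```sql
-- MONTHS_BETWEEN handles partial months correctly, unlike month
-- arithmetic built from EXTRACT(MONTH FROM ...) values
SELECT MONTHS_BETWEEN(SYSDATE, hire_date) AS months_employed
FROM   employees;
```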
    • Recipe 7-1 states that Oracle will never use a non-function based index on a column when that column appears inside the INSTR function that is present in a WHERE clause.  That statement is incorrect: a hint, covered in recipe 19-12, is sufficient to allow Oracle to perform an index full scan operation on that column when the only predicate in the WHERE clause includes that index’s column wrapped in an INSTR function (tested on Oracle releases through ). [Reference/Test Case – Check the Comments Section Also]
    • Recipe 8-2 states, “With the value one-third stored in each column (BINARY_DOUBLE and NUMBER datatypes), we can use the VSIZE function to show it was much more complicated to store this with decimal precision, taking nearly three times the space.”  There are a couple of problems with this statement.  First, a BINARY_DOUBLE column will always require eight bytes to store any number, while it takes a variable amount of space to store a number in a NUMBER column (for instance, the number 1,000,000 requires fewer bytes to store than the number 333).  Second, the BINARY_DOUBLE column only maintained 16 significant digits to the right of the decimal point, while the NUMBER column maintained 40 significant digits to the right of the decimal point (the bytes required can be decreased by specifying a smaller number of significant digits to the right of the decimal point). [Reference/Test Case – Check the Comments Section Also]
    • Recipe 8-6 states “The final two implicit defaults (of a CREATE SEQUENCE clause) cause Oracle to cache 20 values and not force strict first-come, first served ordering on those who call NEXTVAL.”  This statement is either misleading or incorrect.  The default will force the sequence numbers to be assigned to calling sessions in the sequential order specified by CREATE SEQUENCE – Oracle will not provide the numbers out of sequential order.  There can be some degree of out of sequential order assignment in a RAC environment.
    • Recipe 9-2, which includes a SQL statement that is intended to show the objects which are locked that are blocking other sessions, has a number of problems.  1) It is not RAC aware – it should be selecting from the GV$ views, rather than the V$ views.  2) The SQL statement explicitly looks for the BLOCK column to be equal to 1 – in a RAC environment it could potentially show a value of 2.  3) When performing a self-join on V$LOCK, the query is only joining on the ID1 column of V$LOCK, while the documentation shows a self-join on that view using the ID1, ID2, and TYPE columns.  4) There appears to be a missing join condition between V$LOCK and V$LOCKED_OBJECT that results in objects being identified as contributing to the blocking of the second session, when in fact that may not be the case.  5) The query seems to assume that all locks acquired are full table locks. [Reference/Test Case – Check the Comments Section Also]
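    A corrected sketch that addresses the first three points joins GV$LOCK on all three identifying columns from the documentation and allows for the RAC BLOCK value of 2:

```sql
-- RAC-aware blocker/waiter pairs; the self-join uses TYPE, ID1, and ID2
-- as documented, and BLOCK > 0 also covers the RAC value of 2
SELECT blocker.inst_id,
       blocker.sid AS blocking_sid,
       waiter.sid  AS waiting_sid,
       blocker.type,
       blocker.id1,
       blocker.id2
FROM   gv$lock blocker,
       gv$lock waiter
WHERE  blocker.block  > 0
AND    waiter.request > 0
AND    blocker.type   = waiter.type
AND    blocker.id1    = waiter.id1
AND    blocker.id2    = waiter.id2;
```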
    • Page 436 suggests checking Metalink (My Oracle Support) Doc ID 555284.1 for a script to identify indexes that should be rebuilt.  I suspect that that document looked very different prior to the last modification date of May 13, 2009.  The script in the document does not detect indexes that are candidates for a rebuild – the script simply outputs SQL statements to rebuild every index that is not owned by SYS or SYSTEM (and a small number of other users) into an INDX tablespace with a specified initial, next, and maxextents clause.
    • Recipe 18-13, in the Note section, states that when a primary key constraint is created Oracle will create a unique index with the same name as the constraint.  That statement is true unless a unique or non-unique index already exists for the column or a composite index that has that column as its leading column exists.  Later in the recipe it was stated that the DBA could pre-create the index that will be used to help enforce the primary key constraint, but the Note section might cause a little confusion. [Reference/Test Case – Check the Comments Section Also]
    • Recipe 19-1, the join between V$SQL_MONITOR and V$SQL is incomplete and will result in repeated rows if there are multiple child cursors for a SQL_ID, which seems to happen frequently starting with Oracle Database due to adaptive cursor sharing, and due to cardinality feedback.  Additionally, V$SQL_MONITOR will only be populated if the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter is set to DIAGNOSTIC+TUNING to enable the extra cost licenses (Tuning Pack, for example). [Reference/Test Case]
    • Recipe 19-3 states “the V$SESSION_LONGOPS view gives you a rough idea of how long a currently running query has left to finish.”  Technically, this statement is both correct and incorrect.  The view provides a rough idea, based on the execution plan estimates, of how much time is left for a specific stage (step/line) in the execution plan.  Entries in the V$SESSION_LONGOPS view will appear and disappear as the different stages in the execution plan execute.  Most index type accesses will likely complete in less than six seconds per access, and probably will not appear in this view. [Reference/Test Case – Check the Comments Section Also]
    • Recipe 19-4, the query may produce misleading results as the query is not time-scoped.  The query does not consider that the database may have been up for several years – a query that on average consumes one second of CPU time, but was executed 100 million times since the library cache was last purged will likely be flagged as a problem, while a query that on average consumes 1,000 seconds of CPU time but has only executed 1,000 times in the last 24 hours probably will not appear in the SQL results.
    • Recipe 19-7 states that a PLAN_TABLE must exist, and that if it does not exist the utlxplan.sql script should be run to create a PLAN_TABLE in the user’s schema. As of Oracle Database [Edit: it appears that this change happened prior to] user schemas should no longer have a PLAN_TABLE in their schema – a global temporary table owned by the SYS user is used instead. [Reference/Test Case – Check the Comments Section Also]
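    On recent releases the workflow needs no utlxplan.sql at all, since PLAN_TABLE is a public synonym pointing at a SYS-owned global temporary table; roughly:

```sql
-- Generate and display an estimated execution plan without creating a
-- private PLAN_TABLE in the user's schema
EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM dual;

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY);
```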
    • Chapter 19 demonstrates the use of EXPLAIN PLAN, AUTOTRACE, and the EXPLAIN option of TKPROF, but the chapter does not mention that using those methods for displaying the execution plan could show the wrong execution plan. [Reference/Test Case – Check the Comments Section Also]
    • Recipes 19-10 and 19-12 suggest that a goal of tuning is reducing the calculated COST value that appears in the execution plan, and this was accomplished in the recipes by adding an index or using an index hint.  This is not a good approach to tuning; it is much better to rely on actual response time rather than the calculated cost estimates. [Reference/Test Case – Check the Comments Section Also] [Reference/Test Case – Check the Comments Section Also]
    • Recipe 20-1, the SQL statements are not time-scoped, and even if the database has only been open for a short period of time, the SQL statement could yield incorrect results because the “CPU used by this session” statistic is not updated in real-time, only when a SQL statement completes, or a fetch completes. [Reference/Test Case – Check the Comments Section Also]
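    Setting aside the statistic's delayed updates (the other point of this bullet), a time-scoped sketch captures the statistic at the start and end of an interval and reports only the delta, for example in SQL*Plus:

```sql
-- Sketch of a time-scoped measurement: snapshot the statistic, wait
-- for the measurement interval, snapshot again, report the delta
VARIABLE start_value NUMBER

BEGIN
  SELECT value INTO :start_value
  FROM   v$sysstat
  WHERE  name = 'CPU used by this session';
END;
/

-- ... after the measurement interval has elapsed:
SELECT value - :start_value AS cpu_centiseconds_in_interval
FROM   v$sysstat
WHERE  name = 'CPU used by this session';
```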
    • Page 500 suggests inspecting V$SESSION to determine the type of block contention that causes buffer busy waits.  It would have been nice to see the authors provide a demonstration of this technique because from what I am able to determine, the chances of a success are extremely small using V$SESSION (I assume that the authors might be suggesting to look at either the combination of P1, P2, and P3 or the combination of ROW_WAIT_OBJ#, ROW_WAIT_FILE#, and ROW_WAIT_BLOCK#).
    • Page 500 states that the “SQL*Net message from client” wait event indicates possible network or middle-tier bottlenecks.  Client-side “think time” is probably the most significant contributor to this wait event, and I think that it would be very difficult to demonstrate that this wait event indicates network bottlenecks. [Reference/Test Case] [Reference/Test Case] [Reference/Test Case – Check the Comments Section Also]
    • Recipe 20-8 suggests an approach to identify those indexes that are safe to drop by enabling monitoring of the indexes.  One of the problems with this approach is that the index may still be used by the cost-based optimizer even when monitoring reveals that the index is never used. [Reference/Test Case – Check the Comments Section Also] [Reference/Test Case – Check the Comments Section Also]
    • Very few cross-recipe (forward and backward) references were provided, which makes the sections of the book appear to be disconnected and stand-alone.
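    The V$SESSION technique mentioned above can be sketched as follows.  This is my own illustration, not an example from the book, and the bind variables are placeholders; on a busy system the waited-for block frequently changes before the second query runs, which is part of why I rate the chances of success as small.

```sql
-- While a session is actively waiting on "buffer busy waits",
-- P1 = absolute file number, P2 = block number, P3 = block class.
SELECT sid, event, p1 AS file_id, p2 AS block_id, p3 AS class_id
FROM   v$session
WHERE  event = 'buffer busy waits';

-- Resolve the file/block pair to the owning segment
-- (scanning DBA_EXTENTS can be expensive on large databases):
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :file_id
AND    :block_id BETWEEN block_id AND block_id + blocks - 1;
```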

    The book attempts to cover a lot of ground.  While it probably requires between one and eight hours for the authors to research, write, proofread, test, and format a typical page in an Apress book, the authors probably could have controlled the scope of the book a little better and spent a little more time testing and expanding some of the sections – the sections that discuss analytic functions, for example.  I did not spend a lot of time reading through the chapters “Working with XML Data”, “Partitioning”, or “LOBs”, so I did not comment on those sections of the book.  The majority of the book is exceptional – far better than any book that is strictly a SQL reference, and I would not hesitate to recommend the book to others who are trying to figure out: “What do I do with SQL?”

    I will probably update this review at a later time to supply links to test cases or articles for most of the problems identified in the book.  I might also take a closer look at the “Working with XML Data”, “Partitioning”, and “LOBs” chapters, and provide feedback for those sections of the book.

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Library – Finding Motivation

    16 01 2010

    January 16, 2010

    I occasionally see requests for book suggestions in Internet forums, blogs, and various other websites.  I am typically very careful when preparing to buy a book, since it is my money that will be wasted if the book’s contents are worthless.  I try to read the reviews for books on blogs and various other sites to try to determine whether the book’s contents should have been retired with the release of Oracle Database 8.0, whether the book’s authors emphasize volume over technical quality, and whether people are willing to rely on a particular author’s advice.

    I thought that it might be interesting to look at my book library to see if I would be able to offer any advice.  My book library at home looks like the following picture (I intentionally or unintentionally left out about 6 to 10 books) – click the picture to see a larger version.

     If you closely examine the books, you probably will be able to tell that I need to spend a little more time digging through the books on the right (and the four years of related magazines that have been sitting mostly unread).

    Here is my personal book library at work – click the picture to see a larger version.

    As I wrote in this OTN thread, a small number of Oracle books marked distinct turning points in my knowledge of Oracle.  Additionally, it is necessary to find motivation to continue learning whatever subject falls at your feet.  In the OTN thread, I stated the following:

    I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple years later I learned that Gaja was a member of the Oaktable Network. I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple years later I learned that Tom was a member of the Oaktable Network. I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning – the logic made all of the book easy to understand. A couple years later I learned that Cary was a member of the Oaktable Network. I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book whose title seemed to suggest too much of a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question – “What is Oracle thinking?” Jonathan is a member of the Oaktable Network; a pattern is starting to develop here. At this point I started looking for anything written in book or blog form by members of the Oaktable Network.
I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me – probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to help understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, which was coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge – I reviewed at least 8 of the Oracle related books on the website.

    The above was written before I set up this blog – there are more book reviews on this blog.  In the above pictures you will see all of the books that I referenced in the OTN post, as well as the book that I had the opportunity to co-author with a fairly large number of OakTable Network members (top photo – I have not yet received my printed copy of the book from Amazon, so the picture shows a printed copy of the electronic version from Apress).  There are, of course, a large number of books in my personal library at work – as you can see, I have the opportunity to dig into much more than Oracle Database.  I have read most of the books cover to cover, and a very small number of the books have been read cover to cover twice.

    My post in the OTN thread continues:

    Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book …

    The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:

    • “That obviously means that the CPU is overwhelmed!”
    • “Wow 4,851 wait events on the CPU, that sure is a lot!”
    • “Wow wait events on the CPU, I didn’t know that was possible?”
    • “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
    • “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”

    Another page from the same book shows this command:

    alter system set optimizer_index_cost_adj=20 scope = pfile;

    Someone reading the book might be left thinking one of:

    • That looks like an easy to implement solution.
    • I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
    • That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
    • Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans? 
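    For reference (my note, not a quote from either book): the SCOPE clause of ALTER SYSTEM accepts MEMORY, SPFILE, or BOTH – there is no PFILE option, which is why the book’s command fails with an error.

```sql
-- Valid forms when the instance was started with an spfile:
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = MEMORY;  -- in memory only, lost at restart
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = SPFILE;  -- takes effect at the next restart
ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = BOTH;    -- both; the default with an spfile
```

    Of course, fixing the syntax does nothing to address the more important question of whether changing the parameter is wise at all.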

    I suggest that you try reading an old Oracle book, such as “Practical Oracle 8i”, and see if you are able to pick out anything that is:

    • Obviously wrong, and was never correct.
    • Obviously wrong since Oracle (or some other release version), but was 100% correct at the time the book was written.
    • Obviously correct now, just as it was when the book was originally written.
    • Grossly over applying a fix that worked in a finite set of conditions (possibly due to false correlation) to situations with nearly infinite scope.

    Someone posted a comment on this blog asking for a sequenced list of book recommendations for learning Oracle Database.  I suggested that the list of books might be a bit different depending on whether the person had an interest in general DBA work or performance tuning (or development, or …).  The suggestions that I provided to the comment follow:

    Quick suggestions:

    • A solid foundation of Oracle specific SQL is needed. I enjoyed reading “Mastering Oracle SQL and SQL*Plus”, and I believe that book provides a solid foundation. That book appears to be in the process of being updated, and might even include page numbers this time. I am currently reading “Oracle SQL Recipes: A Problem-Solution Approach”, probably about 30 pages into the book now – and I believe that I have already found a small handful of minor errors/issues with the book that would make it difficult to use as a starting point.
    • A solid foundation of understanding Oracle’s behavior is needed. I believe that Tom Kyte’s “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions” book is one of the best sources. I understand that Tom Kyte also re-wrote the Oracle “Concepts Guide”, so that might be a decent substitute for his book.
    • If you are planning to do general DBA work, probably the next book should be on the topic of RMAN. The books in the Oracle documentation library are good, and you will find two reviews of other RMAN books on this blog.
    • Next, I would suggest reading a book that provides a solid foundation of the Oracle wait interface. “Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning” seems to be the best source of that information, but it would be nice to see an update of the book that covers more recent releases of Oracle.
    • Next, the “Oracle Performance Tuning Guide” from the Oracle documentation library.
    • Next, I suggest the book “Troubleshooting Oracle Performance” – the book is great for not only introducing people to various approaches for troubleshooting problems, but also provides foundation knowledge that is needed in order to understand why an approach worked.
    • Next, I suggest digging deeper into troubleshooting with 10046 trace files – Cary Millsap’s “Optimizing Oracle Performance” is the best source for this information.
    • Next, I suggest digging deeper into troubleshooting with 10053 trace files – Jonathan Lewis’ “Cost-Based Oracle Fundamentals” is the best source for this information.

    + If queueing theory, introduced in “Optimizing Oracle Performance”, is of interest, take a look at “Forecasting Oracle Performance”

    + If Statspack/AWR report reading, introduced in the “Performance Tuning Guide” is of interest, see the excellent series of articles on Jonathan Lewis’ blog.

    + If you want your jaw to drop, take a look at Tanel Poder’s blog. I also recommend reading all of the blog entries on Jonathan Lewis’ blog and Richard Foote’s blog.

    + I have now read most of the chapters in the “Expert Oracle Practices: Oracle Database Administration from the Oak Table” book.  The book contains theory, tuning philosophy, tuning/troubleshooting logic, test cases, and up to date information that cannot be found in any other book.  It is my opinion that this book belongs in the “Quick suggestions” list above.  Disclaimer: Just estimating here, for every 17 copies of this book that are sold, I think that I will have enough royalty money to buy a soda drink from the vending machine (my recommendation has nothing to do with how thirsty I am right now 🙂 ).  It is my belief that the motivation for all of the authors of this book was simply to help readers improve their skills well beyond the basics.

    Book Review: Misc. Oracle Related Books

    30 11 2009

    Oracle Performance Tuning 101 (Oracle Press) by Gaja Krishna Vaidyanatha
    This is the book which introduced me to correct methods for Oracle performance tuning through system and session level wait events. Throwing out the old methods of performance tuning (mostly cache-hit ratios) and focusing on the real problem made significant differences in the production database (a 3.25 hour Scheduler run time dropped to roughly 29 minutes with the same hardware). But the book does have a couple errors. Page 44: “One of the classic myths about CPU utilization is that a system with 0 percent idle is categorized as a system undergoing CPU bottlenecks. The real issue here is how many processes are waiting for the CPU? It is perfectly okay to have a system with 0 percent idle, so long as the average runnable queue for the CPU is less than (2 x number of CPUs).” Queuing Theory, Metalink note 148176.1, Cary Millsap, Jonathan Lewis, Craig Shallahamer, and others have suggested that 100% utilization is not desirable for various reasons. But the book was written in 2001, so there are bound to be a couple mistakes which were found later.

    Optimizing Oracle Performance (O’Reilly) by Cary Millsap
    This is the book which introduced me to focused performance tuning methods through the use of 10046 traces, rather than just flipping a series of knobs and switches in the hope of reducing the system-wide wait events related to the knobs and switches. The book drives to the heart of the problem – what is the important process, is that important process too slow, and if it is too slow, exactly what is causing it to be too slow? Is it a bad execution plan, too many logical IOs, large sorts to disk, contention with other sessions, high server CPU utilization, slow redo log writing, etc.? Page 264: “But be careful: pegging CPU utilization at 100% over long periods often causes OS scheduler thrashing, which can reduce throughput. On interactive-only application systems, CPU utilization that stays to the right of the knee (queuing theory reference) over long periods of time is bad. When CPU utilization exceeds the knee in the response time curve, response time fluctuations become unbearable.”

    Oracle Database Performance Tuning Guide by Oracle Corporation
    The 11g R1/R2 version of this title is much improved over previous versions. Covers wait events, execution plans, tracing, and more.

    Expert One on One Oracle (Wrox, now available from Apress) by Thomas Kyte
    Describes the problems related to database independent applications, differences between Oracle and other database platforms, internal architecture of Oracle, reading 10046 trace files, writing PL/SQL, and more. The most memorable part of the book for me was the Oracle war stories described in the book, and how to overcome the problems.

    Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions (Apress) by Thomas Kyte
    I was initially disappointed with the book as the first chapter or two was nearly identical to those chapters in “Expert One on One Oracle”, but that disappointment faded quickly once the third chapter was reached. While the book lacks many of the Oracle war stories from the earlier title, the rest of the contents were interesting enough that I bought a second copy of the book, and I have read both titles cover to cover. His books are excellent, highly detailed, and include a number of test cases which allow readers to understand concepts while reproducing the output using their own databases. The test cases also offer a method to determine whether something that was true in Oracle 9i R1 or 10g R1 is no longer true in 11g R2. I do not know if it is a fair comparison, but you might be able to consider his books as the Oracle “Concepts Guide” written so that it is easy to understand, with real-world examples and real-world “war stories” (quite a number of these appear in “Expert One on One”). I consider Tom Kyte’s “Expert Oracle Database Architecture” the critical starting point, conveying the foundation knowledge needed to understand the material presented in other books, including “Optimizing Oracle Performance”, “Troubleshooting Oracle Performance”, “Cost-Based Oracle Fundamentals”, “Forecasting Oracle Performance”, and the “Expert Oracle Practices” book.

    Expert Oracle Database 10g Administration (Apress) by Sam R. Alapati
    This book served as the final reference for my conversion from Oracle 8i to Oracle 10g R2. The book is quite literally a dictionary for Oracle – just about any task that needs to be completed is described very well in this book. There are, however, a couple errors in the book. In addition to a couple cases where information was simply omitted, there is this quote from page 202: “Indexes often become too large if there are many deletions, because the space used by the deleted values is not reused automatically by the index.” That statement is simply not true.

    Oracle Insights Tales of the Oak Table (Apress) by several members of the Oak Table
    Oracle war stories, performance tuning, and more from the world’s most respected database tuning experts – what more could anyone want, except a follow-up to the original book?

    Oracle Database 2 Day DBA by Oracle Corporation
    The basics of Oracle database administration, helpful when upgrading from a prior release, or if suddenly pushed into a DBA role.

    Oracle Database Administrator’s Guide 10g by Oracle Corporation
    Essentially, covers the foundation knowledge required to properly administer Oracle 10g R2. Much of the contents of the book may be found in the “Expert Oracle Database 10g Administration” and “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions” books, but sometimes it is nice to read the official documentation.

    Recovery Manager Reference and Backup and Recovery Advanced User’s Guide by Oracle Corporation
    I read the versions of these books for Oracle 8i and was impressed with the depth of coverage in the books. At the time I felt that no book could be purchased which exceeded the depth, accuracy, or ease of locating information contained in these two books. Things change.

    Forecasting Oracle Performance (Apress) by Craig Shallahamer
    This is a great book, with a lot of queuing theory and mathematics. Despite the review by Donald Burleson, there is no coverage of Statspack or AWR in the book. Paraphrased from the book: Once the CPU utilization reaches around 75%, queuing becomes significant, leading to performance problems. The book also describes why a 16 CPU server might not yield 8 or even 4 times the performance of a 2 CPU server. Great book, but a little too much theory.

    Oracle Database 10g Insider Solutions (SAMS) by Arun Kumar
    Of the Oracle books that I have read, this is the first that I read in about a week. Every page seemed to say “Hey, look at this neat feature” without a sufficient amount of supporting details to determine if the solution applies to servers costing less than ¾ of a million dollars with every available Oracle option, if the solutions are helpful or harmful, etc. This is one of the few books I read which I have never considered taking off the shelf to help implement an Oracle feature.

    Oracle SQL High Performance Tuning (Prentice Hall) by Guy Harrison
    I was very impressed with this book when I initially read the book in 2002. Since that time I have learned that some of the book contents are just plain wrong. What is wrong? A couple quotes come to mind: “On Windows NT/Windows 2000, the maximum number of blocks that can be read [in a multiblock read] cannot exceed 128K (for 32 bit versions).” “The size of the Oracle block can have an influence on the efficiency of full-table scans. Larger block sizes can often improve scan performance.” “Improving the hit rate in the buffer cache” section title.

    Applied Mathematics for Database Professionals (Apress) by Lex de Haan and Toon Koppelaars
    If ever I needed a book to remind me how much mathematics I have forgotten (my major was mathematics), this is the book. I started reading this book a year ago, but had to put it down to read more exciting titles once I made it half way through the book. I will finish this book, eventually.

    Special Edition Using SQL by Que
    A fairly thorough book (inline views are not covered), but also a bit old. Not specific to Oracle, but does include Oracle specific content.

    Joe Celko’s SQL for Smarties: Advanced SQL Programming
    Contains advanced set operations performed in SQL.  Not specific to Oracle.

    Oracle Database SQL Language Reference
    Primarily useful to cover the items missed by the above SQL books, such as using Oracle analytical functions.

    Edit December 16, 2009: Fixed a couple of non-functional links.

    Book Review: Expert Oracle Database 11g Administration

    30 11 2009

    Great Reference, Even with a Couple Accuracy Problems, September 5, 2009

    This is a huge book, which could possibly best be described as the condensed encyclopedia of Oracle databases. The book covers everything from administration to performance tuning to using SQL*Plus to installing Oracle on Linux to using RMAN. While I did not read this book cover to cover, I did page through the book looking for interesting topics to read. I did read the author’s “Expert Oracle Database 10g Administration” book cover to cover a couple years ago and was at the time very impressed with that book. There were a couple small errors in the 10g book, repeated a couple times, but I commented to a couple people that the 10g book is by far the best and most thorough Oracle reference book that I had run across. The appendix at the back of the 10g book was very effective in helping me find exactly the right topic, and usually the right syntax, for just about any task. The appendix in the 11g version of the book is just about as good. It appears that the author may have rushed the job of updating the 10g book for 11g R1, as quite a few screen captures still show earlier (10g) Oracle versions, and a couple other sections of the book also seem to be more specific to 10g R1 than 11g R1 (or 11g R2).

    This book contains a lot of great and/or very helpful information, but while paging through the book I found a couple problems. Overlooking the problems, I would still recommend this book as a reference for Oracle 11g R1. The section on performance tuning is not my first choice for performance tuning information.

    Problems found when paging through the book (I know that I probably missed several issues):

    • Page 90 mentions RAID 0+1 but not the more robust RAID 10.
    • Page 92 states “RAID 5 offers many advantages over the other levels of RAID. The traditional complaint about the `write penalty’ should be discounted because of sophisticated advances in write caches and other strategies that make RAID 5 more efficient than in the past.” See the opinions of other DBAs on this point.
    • Page 166 states “if you use an Oracle block size of 64KB (65,536 bytes)…” The maximum supported block size is 32KB, not 64KB, and some platforms support a maximum of a 16KB block size.
    • Page 171 states when suggesting the use of multiple block sizes in a single database “if you have large indexes in your database, you will need a large block size for their tablespaces.” “Oracle provides separate pools for the various block sizes, and this leads to better use of Oracle memory.” For those who have followed the multiple block size discussions over the years, it should be fairly clear that it is not a good idea to use multiple block sizes in a single database. Oracle’s documentation states that multiple block sizes are intended to be used only to support transportable tablespaces.
    • Page 181 states “The database writer process writes dirty blocks to disk under the following conditions… Every 3 seconds.” A check of the Oracle documentation will quickly confirm that this is not the case.
    • Page 182 states “Oracle further recommends that you first ensure that your system is using asynchronous I/O before deploying additional database writer processes beyond the default number – you might not need multiple database writer processes if so.” I think that I misread this several times as saying “do not enable multiple database writers unless you also plan to enable asynchronous I/O,” which would be an incorrect statement.
    • Page 190 states “this is why the buffer cache hit ratio, which measures the percentage of time users accessed the data they needed from the buffer cache (rather than requiring a disk read), is such an important indicator of performance of the Oracle instance.” The author provides a link on page 1161 to an article authored by Cary Millsap which discusses why a higher buffer cache hit ratio may not be ideal. This is definitely a step in the right direction regarding the buffer cache hit ratio, but it might be better to simply ignore the statistic.
    • Page 190 when describing the buffer cache hit ratio states that the formula for calculating the hit ratio is “hit rate = (1 – (physical reads)/(logical reads))*100”. The Oracle Database Performance Tuning Guide 11g Release 1 manual states that the correct formula is “1 – ((‘physical reads cache’)/(‘consistent gets from cache’+’db block gets from cache’))”
    • Page 395 states “11.1.0 is an alternative name for Oracle Database 11g Release 2.” Oracle 11g R2 was just released on September 1, 2009, and its version number begins with 11.2.
    • Page 402 correctly (according to the documentation) states that Oracle Enterprise Linux 4 and 5, as well as Red Hat Enterprise Linux are supported platforms for Oracle Database 11g, and correctly (according to the documentation) does not list Red Hat Enterprise Linux 3. Page 403 lists the required RPM packages for Red Hat Enterprise Linux 3, but ignores the supported Linux platforms.
    • Page 405 shows parameters that need to be set on Linux. This appears to be a direct copy of the parameters in the Oracle documentation, but the author did not include the net.core.wmem_max parameter. Note that Oracle will require different parameters than those specified in this book, but that is not the fault of the author.
    • Page 452 states that the COMPATIBLE parameter may be set to “10.2 so the untested features of the new Oracle version won’t hurt your application.” I think that this is misleading at best.
    • Page 466 states “If you’re supporting data warehouse applications, it makes sense to have a very large DB_BLOCK_SIZE – something between 8KB and 32KB. This will improve database performance when reading huge chunks from disk.” This is not quite a correct statement, especially if the DB_FILE_MULTIBLOCK_READ_COUNT parameter is set correctly, or simply not set in recent releases of Oracle. 8KB is the standard block size, so I am not sure why the author groups it with the other block sizes in the very large block size group.
    • Page 477 states “the default value for the STATISTICS_LEVEL initialization parameter is TYPICAL. You need to use this setting if you want to use several of Oracle’s features, including Automatic Shared Memory Management.” This is an incomplete statement as a setting of ALL will also work.
    • Page 1055 shows the use of both CASCADE=>YES and CASCADE=>’TRUE’ with DBMS_STATS. I believe that TRUE is the correct syntax, but it should not be specified in single quotes.
    • Page 1067 states “subqueries perform better when you use IN rather than EXISTS.” The reality is that Oracle may very well automatically transform queries using IN syntax into queries using EXISTS syntax (or vice-versa), and may even transform both IN syntax queries and EXISTS syntax queries into standard join syntax.
    • Page 1074 stated that “inline stored functions can help improve the performance of your SQL statement.” The author then demonstrated this concept by converting a SQL statement with a three table equijoin accessing apparent primary and foreign keys of the tables into a SQL statement which accessed a single table and called two PL/SQL functions (per row), each of which queried one of the other two tables which were included in the original equijoin SQL statement. This approach does not seem to be a good idea given the number of additional context switches which will result, as well as the additional number of parse calls. In short, do not solve something in PL/SQL when it may be easily solved in SQL alone.
    • Page 1075 states “you should avoid the use of inequality and the greater than or equal to predicates, as they may also bypass indexes.” Something does not look right about that statement.
    • Page 1089 states “the indexes could become unbalanced in a database with a great deal of DML. It is important to rebuild such indexes regularly so queries can run faster.” Interesting suggestion – I believe that standard SELECT statements are classified as DML, as they are definitely not DDL. An index cannot become unbalanced, and indexes rarely need to be rebuilt – see Richard Foote’s blog for more details.
    • Page 1089 states “when you rebuild the indexes, include the COMPUTE STATISTICS statement so you don’t have to gather statistics after the rebuild.” Oracle 10g and above automatically collect statistics when building indexes, and I would assume that the same is true when rebuilding indexes.
    • Page 1108 states that “cpu_time is the total parse and execute time” when describing the columns found in V$SQL. It is actually the CPU time, measured in microseconds, consumed while executing the SQL statement, and probably also includes the CPU time for parsing the SQL statement, but I have not verified this.
    • Page 1154 provides a demonstration of finding sessions consuming a lot of CPU time. The example used the `CPU used by this session’ statistic rather than drilling through V$SYS_TIME_MODEL into V$SESS_TIME_MODEL. The example used the `CPU used by this session’ as a reason for examining parse CPU usage and recursive CPU usage. While it is good to check CPU usage of sessions, there are a couple problems with this approach. First, the `CPU used by this session’ statistic is not updated until the first fetch from a SQL statement is returned to the client. If the client was waiting for the last hour for the first row to be returned, the CPU utilization will show close to 0 seconds difference between the start of the SQL statement and the check of the `CPU used by this session’ statistic in V$SESSTAT – this is not the case for the statistics in V$SESS_TIME_MODEL, which are updated in near real-time. Second, the change (delta) in the statistic for the sessions was not determined – if one session had been connected for 120 days, it probably should have consumed more CPU time than a session connected for 30 minutes, and probably should not be investigated.
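    A sketch of the time model alternative mentioned in the last bullet point above (my own example, not one from the book; the delta calculation – sampling twice and subtracting – is omitted for brevity):

```sql
-- Near real-time CPU consumption per session; V$SESS_TIME_MODEL
-- values are reported in microseconds and are updated while the
-- SQL statement is still executing:
SELECT s.sid, s.username, t.value / 1000000 AS cpu_seconds
FROM   v$sess_time_model t, v$session s
WHERE  t.sid = s.sid
AND    t.stat_name = 'DB CPU'
ORDER BY t.value DESC;
```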

    Even with the above issues, I would still recommend this book as a reference. For specific information on performance tuning, using RMAN, or installing on Linux, it might be a good idea to use a second book for deeper understanding of the process.

    Book Review: Secrets of the Oracle Database

    30 11 2009

    Useful Book Even without a Lot of Real Undocumented Secrets, July 1, 2009

    The “Secrets of the Oracle Database” book, with the subtitle “Advanced administration, tuning, and troubleshooting using undocumented features,” follows the same pattern as most of the other Apress books: very well researched and verified information, with careful references to well regarded books and external resources. Many code samples and demonstrations are provided throughout the book in the format of: tell me about something, and then show me/prove to me that it actually works. The topics in this book reminded me a bit of the topics discussed in the book “Oracle 10g Insider Solutions,” except that the “Secrets of the Oracle Database” book makes a very thorough effort at indicating which features are available in each version of Oracle and which features require additional-cost licenses (such as the use of AWR), and provides good enough examples of the features that each may be utilized with an understanding of why it should be used. The book even does a thorough job of indicating the permissions and/or roles needed to leverage the various features. Specific items that I found to be helpful:

    • In most cases, commands are provided for Linux/Unix and the equivalents for Windows.
    • Good description of SYS.AUX_STATS$ and of the various functions used to view and populate the CPU statistics.
    • Very detailed description of raw 10046 trace files.
    • Detailed listing of the purpose of the various database tables related to Statspack.
    • Good summary of ORADEBUG functionality.
    • Provides a warning not to adjust the hidden (underscore) parameters unless under the supervision of Oracle support.

    With the above in mind, why not give the book a 5 out of 5 rating?

    Several Oracle features/behavior which are described as undocumented are in fact fairly well documented and/or discussed in Metalink notes, on various Oracle related blogs, and various Internet forums:

    • Page 143 describes ALTER USER IDENTIFIED BY VALUES as being undocumented while it is documented in Metalink (279355.1 last modified 27-OCT-2008 and 1051962.101 last modified 16-OCT-2008) and on several websites.
    • Page 136 states that the TRACE=TRUE parameter for the exp/imp and expdp/impdp utilities is undocumented, while it is documented in a couple of Metalink notes (271782.1 last modified 17-JAN-2005 and 286496.1 last modified 21-APR-2009).
    • Page 337 states that DBMS_SYSTEM is undocumented while it is documented in a couple Metalink notes (286496.1 last modified 21-APR-2009 and 103267.1 last modified 20-NOV-2002 and 436036.1 last modified 09-MAR-2009 and DBMS_SUPPORT is described in 62294.1 last modified 25-OCT-2002), several books, and several websites.
    • Pages 271 and 371 state that the 10046 trace file format is undocumented while it is documented in a couple Metalink notes (39817.1 last modified 09-DEC-2008 and 376442.1 last modified 25-JUN-2009), two books referenced by this book (“Optimizing Oracle Performance” and “Troubleshooting Oracle Performance“), and several websites.
    • Page 299 states that “it is undocumented which parameter changes force the optimizer to consider a new plan,” after showing how changing OPTIMIZER_INDEX_COST_ADJ forced a change in the execution plan – but the book never went on to suggest checking V$SES_OPTIMIZER_ENV, V$SQL.OPTIMIZER_ENV_HASH_VALUE/V$SQL.OPTIMIZER_ENV, or a 10053 trace file.
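    To expand on the last point, the checks the review wishes the book had suggested can be sketched as below. View and column names follow the Oracle documentation; the &sql_id and &sid substitution variables are placeholders to be supplied by the reader.

```sql
-- Differing hash values across child cursors of one statement indicate
-- that the optimizer environment changed between parses.
SELECT sql_id, child_number, optimizer_env_hash_value
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- List the non-default optimizer parameters in effect for a session.
SELECT name, value
FROM   v$ses_optimizer_env
WHERE  sid = &sid
AND    isdefault = 'NO';

-- Or dump a 10053 trace to see the full optimizer environment on the
-- next hard parse performed by this session.
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
```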

    There also appear to be a couple errors, or at least exceptions to some of the broad rules discussed in the book:

    • Page 29 states that “V$PARAMETER is built in such a way that it lists documented (non-hidden) parameters only,” with documented parameters being those that do not begin with one or two underscore characters. This is a correct statement until one of the hidden parameters is modified, with a command such as: ALTER SYSTEM SET "_OPTIMIZER_UNDO_COST_CHANGE"='...'; (_OPTIMIZER_UNDO_COST_CHANGE is one of the parameters that are adjusted automatically when OPTIMIZER_FEATURES_ENABLE is set to a different value – the omitted value is a version string). Once the _OPTIMIZER_UNDO_COST_CHANGE parameter (or likely any _ parameter) is modified, it will then be listed along with the documented parameters in V$PARAMETER (tested).
    • Page 45 states “Since SELECT statements don’t benefit from unused indexes… it may be worthwhile to drop unused indexes.” Richard Foote’s blog provides evidence that indexes which appear to be unused may actually provide the cost-based optimizer with statistical information that it would not otherwise have. Also, not every use of an index is recorded as a use of that index.
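    As a side note to the V$PARAMETER bullet above, hidden parameters can be listed directly from the X$ structures when connected as SYS. This is a commonly circulated sketch rather than anything from the book; the X$ column names vary slightly between releases, so treat it as an assumption to verify.

```sql
-- As SYS: list hidden (underscore) parameters whose values are not at
-- their defaults; such parameters do not normally appear in V$PARAMETER.
SELECT i.ksppinm  AS name,
       v.ksppstvl AS value
FROM   x$ksppi  i,
       x$ksppsv v
WHERE  v.indx = i.indx
AND    i.ksppinm LIKE '\_%' ESCAPE '\'
AND    v.ksppstdf = 'FALSE';
```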

    Given the title and subtitle of the book, I expected much more insight into the internals of Oracle databases. For instance, on page 277, when describing the content of a 10046 trace file, the book states that cr is defined simply as “consistent reads,” while cu is defined simply as “current blocks processed.” Page 357 gives a little more detail on the cu statistic: “call parameter cu (current read) corresponds to the statistic db block gets.” This limited description is disappointing, as a book described as providing advanced administration, tuning, and troubleshooting techniques should be able to tell much more about the cr and cu statistics. The same might also be said about the coverage of V$SYS_TIME_MODEL, V$LOCK, V$SQL_SHARED_CURSOR, and several other features discussed in the book.

    Several of the chapters (5, 6, 8, 13, 14, 15, etc.) are very short, ranging from two to five pages in length. It might have been a better idea to combine several of these small chapters and/or provide more knowledge of Oracle’s internal behavior in those chapters. The author seems to favor the word “contrary” as it appears many times throughout the book. The back cover of the book indicates that this book should be read after “Troubleshooting Oracle Performance” and/or “Expert Oracle Database 11g Administration”. Reading either of these two books before reading the book “Secrets of the Oracle Database” might make some of the secrets disclosed in the book seem quite ordinary.

    In summary, this book’s contents will likely seem like “secrets” to those DBAs who have worked with Oracle for only a couple of years, and to those DBAs whose last book read was an Oracle 8.0.5 administrator’s guide. This book contains a great collection of disconnected information about Oracle (no building process from one item to the next), but it sometimes stops short of providing undocumented secrets to those DBAs who have recently read a couple of good books, read a couple of good blogs, and followed along with the discussions on a couple of Oracle related forums. Much of the information may be obtained from other sources, but it is helpful that so much of it is contained in a single book.

    Book Review: Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning

    30 11 2009

    Even Five Years After Publication, this Book Still Belongs on the Bookshelf Next to the Other Worthy Books, June 16, 2009

    For the last couple of years I debated whether or not to buy this book, or for that matter any additional book, on the topic of Oracle’s wait event interface. I felt that, between my copy of “Oracle Performance Tuning 101,” the enhanced wait event documentation in the free Oracle 11g Performance Tuning Guide, and the notes that I have collected from various web sites, there was no need for another book in my collection that solely described wait events. To put it lightly, I was surprised by the depth of detail, the range of topics, the clarity of concepts, and the level of accuracy in this Oracle Press book – something I have come to expect usually only of the books offered by Apress.

    This book, now five years old and apparently covering through the first release of Oracle 10g, makes a very good attempt at covering a large percentage of the most frequently occurring of the 800+ wait events in Oracle 10g (more recent releases have nearly 1,000). Not only does the book provide a typically detailed explanation of what each wait event means, but it also describes the related system/session statistics, related system parameters, and what the background processes might wait on while the foreground process waits on a different wait event (for example, LOG FILE PARALLEL WRITE and LOG FILE SYNC, respectively).

    Utilizing a wide range of system views while identifying performance problems and methods for correcting those performance problems, the book describes views including: v$system_event, v$session_event, v$session_wait, v$session_wait_history, v$system_wait_class, v$session_wait_class, v$sql_workarea, v$sql_workarea_active, v$latch_misses, v$sess_time_model, v$sql_shared_cursor, v$rowcache, x$kcqrs, v$resource, v$lock, v$session, v$segment_statistics, x$kcbfwait, x$kcbwds, x$ksolsfts, and several others.

    Many system parameters, both normal and hidden, are described throughout the book. Better definitions of the system parameters are hard to find from other sources, especially in the case of the hidden parameters. The hidden parameters (those beginning with an _) should not be altered without first consulting Oracle support – I believe that the book only mentioned this fact once, near the end. Having a good description of the meaning of the hidden parameters might help determine whether another DBA working with the same database instance attempted to implement some sort of `silver bullet’ fix by adjusting one of the hidden parameters, and it might also be helpful when trying to troubleshoot a situation where the Oracle instance is behaving in an unexpected way. The book describes the following parameters: hash_area_size, sort_area_size, db_writer_max_writes, db_writer_processes, session_cached_cursors, _db_writer_chunk_writes, _log_io_size, _max_exponential_sleep, _max_sleep_holding_latch, _spin_count, _kghdsidx_count, _kgl_latch_count, _optim_peek_user_binds, _db_block_hash_latches, _db_block_hash_buckets, _wait_for_sync, fast_start_mttr_target, _lm_dynamic_remastering, _db_block_max_cr_dba, _db_aging_freeze_cr, _db_aging_touch_time, _fairness_threshold, _bump_highwater_mark_count, _ash_enable, _ash_disk_write_enable, _ash_size, and several others.

    There are a few sections of the book which may benefit from a re-write to update the information for Oracle 10g R2, 11g R1 or 11g R2 (currently in beta) versions of Oracle. There are also a few sections of the book which could use a bit more supporting information, and a few more sections which contain information which might not be 100% correct based on information made available on the Internet and in various books in the last five years. Regardless of any small problems which might be present in the book, it is an excellent companion to the books “Oracle Performance Tuning 101”, “Optimizing Oracle Performance”, “Cost-Based Oracle Fundamentals”, “Troubleshooting Oracle Performance”, and to some extent “Forecasting Oracle Performance”.

    Possibly interesting comments that I noted as I read the book:

    • The discussion on pages 28 and 29 of the V$SESSION_WAIT view is limited, and does not answer the question of when the times reported should be ignored. For example, it would have been helpful if the book stated something to the effect of: if the STATE column indicates Waiting, Waited Unknown Time, or Waited Short Time, then the WAIT_TIME column is mostly meaningless. If the entry reports a STATE of Waited Known Time, then WAIT_TIME is the number of hundredths of a second that the session waited on the event. If the STATE is Waiting, then the SECONDS_IN_WAIT column is the actual number of seconds that the session has been in the wait; otherwise, that column shows the approximate number of seconds since the last wait started.
    • A SQL statement is provided to decode the P1 value of an ENQUEUE type wait.
    • The SQL*Net message to client description on page 65 seems to be incorrect – the wait actually ends as soon as the data is handed off to the server’s network stack – the event cannot be used to detect network latency as stated: “… or the network latency delays may be causing the message delivery to take longer.”
    • Pages 143-144 contain a very nice summary of the differences between latches and locks.
    • The book frequently uses the adjective “high” when stating the boundary values between OK and problem conditions for wait events, but the book usually does not indicate the contextual meaning of “high” – for example on pages 151 and 157.
    • On page 184, the book states “Unindexed foreign keys are no longer an issue starting in Oracle 9i Database.” This does not appear to be 100% correct: without an index on the foreign key column of the child table, sessions will still block other sessions with TM enqueues when the primary key (or a column defined as unique) on the parent table, to which the foreign key points, is modified (even when it is modified to the same value).
    • Page 190 suggests creating all data segments with a minimum of 2 freelist groups, even in a single-instance database. It would have been helpful if the authors discussed some of the possible side effects related to multiple freelist groups, although a nice table was provided to show some of the performance benefits.
    • It might have been helpful if the book discussed how to correlate the average time spent on the LOG FILE SYNC wait event with the average time spent on the LOG FILE PARALLEL WRITE wait event – what does it mean if the average times are roughly the same, what does it mean if the average times are significantly different. What happens to the average LOG FILE SYNC wait time when the CPUs in the server become very busy?
    • Pages 202-203 state “There are many things that you and your system administrator can do to provide some relief to the log file sync wait by increasing the I/O throughput to the log files… placing your log files on raw device and binding the LUN in RAID 0 or 0+1 instead of RAID 5.” This advice comes a couple dozen pages after the book warned to make certain that the redo logs need to be on very reliable mount points. With that in mind, avoid RAID 0, and use RAID 10 rather than RAID 0+1.
    • The book states that a log buffer larger than 1MB is likely oversized, yet on recent versions of Oracle the log buffer size is controlled by the granule size, meaning that a 14MB log buffer is not out of the ordinary for database instances with large SGA sizes (resulting in a 16MB granule size).
    • The book suggests several times to increase the number of DBWR processes even on systems which support asynchronous writes. For example, on page 210 the book suggests modifying the number of DBWR processes before increasing the buffer cache size when encountering FREE BUFFER WAITS. It would have been helpful if the book explored some of the potential problems related to increasing the number of DBWR processes.
    • Very well written section on delayed block cleanout.
    • The section describing wait events in a RAC environment contains helpful information that is useful in non-RAC environments, so don’t skip this section of the book if RAC is not used in your environment.
    • The indented time model statistics section in the book looks nearly identical to the indented time model in the Oracle documentation, yet that section of the book lacks the descriptions of the time model statistics which are now present in the Oracle documentation.
    • The _ASH_ENABLED parameter mentioned on page 254 is not the Oracle approved method of disabling ASH collection. Oracle has since released an approved method of disabling ASH collection, see Metalink for the DBMS_AWR.DISABLE_AWR function.
    • Aside from the brief hint on page 259, the book fails to mention that the use of AWR/ADDM requires an additional cost license.
    • Appendix B contains a list of Oracle 10g enqueue wait events and a brief description of each wait event. Any description of the more obscure 10g enqueue waits is difficult to locate from other sources, including Oracle documentation, Metalink, and searches of the Internet.
    • Appendix C shows how to create a variety of dumps using both ALTER SYSTEM/SESSION and ORADEBUG. Dumps include data block dumps, file headers, control files, library cache, processstate, systemstate, and more.
    • The book often lists in which Oracle version a specific feature or change in behavior takes place.
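    The V$SESSION_WAIT interpretation rules suggested in the first bullet of the list above can be sketched as a query. This is an illustration, not something from the book; the Idle filter relies on the wait_class column added in Oracle 10g.

```sql
-- WAIT_TIME is only meaningful when STATE = 'WAITED KNOWN TIME'
-- (it is reported in hundredths of a second); when STATE = 'WAITING',
-- SECONDS_IN_WAIT shows how long the current wait has lasted.
SELECT sid,
       event,
       state,
       CASE state
         WHEN 'WAITING'           THEN seconds_in_wait
         WHEN 'WAITED KNOWN TIME' THEN wait_time / 100
         ELSE NULL   -- Waited Unknown Time / Waited Short Time
       END AS wait_seconds
FROM   v$session_wait
WHERE  wait_class <> 'Idle';
```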

    Book Review: Practical Oracle8i: Building Efficient Databases

    30 11 2009

    No Expiration Date for Useful, Accurate Information, March 3, 2009

    I bought this book a couple weeks ago, after reading a couple reviews from different sources stating that while this book is a bit old, much of the information contained within is still useful. But, why spend the money buying a book that is eight years old, when the last database maintained in-house was moved to Oracle 10g R2 three years ago, and Oracle 11g R2 is likely set to be released in just a couple months? As fast as computer technology, and computer software for that matter, changes, is there much value in eight year old information? I was very pleased with the author’s “Cost-Based Oracle Fundamentals” book, so I thought that it was wise to buy a copy of this book before it disappears from the retail market. Was it a waste of money? After all, there is no coverage of ASSM tablespaces, MEMORY_TARGET parameter, SGA_TARGET parameter, PGA_AGGREGATE_TARGET parameter, AWR, Enterprise Manager Database Control, or any other feature introduced after Oracle 8.1.6.

    If one ignores what likely amounted to ten pages of bug descriptions related to problems which must be avoided in Oracle 8.1.5 and 8.1.6, 95% of the book information is still relevant to people administering Oracle 9i through Oracle 11g. If one ignores this quote from page 28, “Of course the buyers may decide that they can get a really good deal by buying 4 35GB disks instead of the 16 9GB disks you requested… and maybe 4 600-MHz CPUs would be cheaper than the 12 220-MHz CPUs you wanted,” the relevance and accuracy of the advice dispensed in the book does not diminish with time, and that cannot be said for many Oracle related books of the same vintage. It might even be an interesting mental exercise to read the book with a critical eye, attempting to identify limitations outlined within the text which no longer apply to more recent releases of Oracle – one would hope that the bug related limitations have long since been exterminated.

    Despite having read a couple books by Thomas Kyte a couple years ago, as well as several others written by various respected authors, I still managed to record eight pages of typewritten notes while reading the Practical Oracle 8i book. Topics of the notes seemed to fall into the categories of “I remember that, kind of”, “*There* is the missing link that connects the facts of the seemingly unrelated events”, “Wow, I never thought of that as a possibility”, and “That fact, or a question related to that fact, was just mentioned on one of the Oracle forums, or on a technical Oracle blog”.

    Paraphrased sections of the book, the case against rebuilding indexes on a scheduled basis, circa 2001 (and probably just as accurate today): “When an index block becomes completely empty, it is placed on the index’s freelist, but is not removed from the structure until the block is reused. The index block, when reused, may be used in a different part of the index structure.” Formulas are floating about on the Internet purporting to indicate when an index should be rebuilt, based on criteria that are only updated after analyzing an index. “Analyzing an index will take out a mode 4 (shared) lock on the table, in the process preventing inserts, updates, and deletes within the index. An online rebuild of an index requires two periods in which the parent table is locked – once at the start of the rebuild and once at the end. Waits caused by the locks will not show in V$LOCK, and V$SESSION_WAIT will show waits on a NULL event. It is not possible to perform an online rebuild of bitmap indexes, reversed indexes, LOB indexes, secondary indexes on index organized tables, or function based indexes. During an online index rebuild, space will be needed for the original index, the rebuilt index, and a logging segment named SYS.JOURNAL_nnnnn. Unlike the rebuild command, coalesce does not demand extra space to maintain two copies of the index, nor does it need to lock the table, but using coalesce does increase the chances of other sessions hitting a snapshot too old error.”
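    The rebuild-versus-coalesce trade-off paraphrased above comes down to two commands (the index name here is hypothetical):

```sql
-- Coalesce merges adjacent sparsely populated leaf blocks in place:
-- no second copy of the index and no table lock is required, though it
-- increases the chance of snapshot too old errors in other sessions.
ALTER INDEX orders_status_idx COALESCE;

-- An online rebuild needs space for both copies of the index plus a
-- SYS.JOURNAL_nnnnn logging segment, and briefly locks the parent
-- table at the start and at the end of the rebuild.
ALTER INDEX orders_status_idx REBUILD ONLINE;
```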

    Paraphrased from the book, the case against and for using the SYS user for an export (CONSISTENT=Y cannot be used when the SYS user is performing an export, per the “Oracle Database Utilities 10g Release 2, Original Export and Import” documentation): “Oracle 8i introduces row level security, which uses a PL/SQL function to apply an additional WHERE clause predicate to a table – row level security does not apply to the SYS user. It is important to use CONSISTENT=Y when exporting partitioned tables. When CONSISTENT=N is specified, the export of each partition in a table is treated as a separate transaction, and may be exported at a different SCN number (incremented when any session commits). When tables are exported which contain nested tables, the two physical segments are exported in separate transactions, potentially resulting in inconsistent data during the import if the export was performed with the default CONSISTENT=N.” So, what to do if the database is using partitioning, row level security, and nested tables? (Read the book to find the answer).

    The book is written in a style which seems to mimic the type of conversation one might expect if an Oracle guru casually pulls up a chair in the library and starts telling the story of the little Oracle database engine that grew up to become the big database engine with occasional sharp teeth. The author’s casual language in the text is not what one would expect from a technical computer book, but the writing style does flow well. When I reached the last page of the final appendix, I thought to myself “is that all there is to the book, what happened next?” Reviewing my notes, I think that I am able to answer that question.

    Struggling to Find a Couple Negatives:
    * A couple scripts are missing obvious characters, such as a closing parenthesis.
    * Features requiring an Enterprise Edition license were not identified as such until Appendix A (an important section of the book which briefly describes various features mentioned in the book, along with associated benefits and negatives).
    * Appendix A failed to mention that analytic functions and function based indexes were not available to users on the Standard Edition of Oracle 8.1.x.

    Book Review: RMAN Recipes for Oracle Database 11g : A Problem-Solution Approach

    30 11 2009

    Useful for Oracle 10g and Above, with Carefully Constructed, Easily Followed Solutions, February 14, 2009

    About six months ago I read the book “Oracle Database 10g RMAN Backup and Recovery”. While I enjoyed reading that book, I found that it was difficult to use when practicing the various backup, recovery, and cloning procedures outlined within it. The reward for flipping through what must have been 50 to 100 pages in order to find everything needed for a practice procedure seemed to be the appearance of unexplained ORA errors, but maybe it was just my inability to follow the directions. In a panic, database-down situation, should it occur, the “Oracle Database 10g RMAN Backup and Recovery” book will remain on the shelf. So, I started looking for a different resource.

    I initially discounted the “RMAN Recipes for Oracle Database 11g” book as irrelevant primarily as I currently use Oracle 10g R2 in production, and did not wish to be confused by a book using 11g specific RMAN syntax without acknowledging that a slightly different syntax is required for Oracle 10g (or 9i, for that matter). In addition to output captures showing that Oracle 11.1.0.x (beta) had been used, other output indicated that Oracle 10.2.0.x was used to demonstrate various commands in the book. The authors made a significant effort to indicate which features are available, and when necessary the alternate syntax in 11g, 10g, and versions prior to 10g. Paraphrasing a couple sections of the book: “Prior to Oracle 10g, it was necessary to set ARCHIVE_LOG_START ([SIC] – LOG_ARCHIVE_START) to TRUE in order to enable automatic archiving of redo logs.” “RMAN does not back up locally managed temporary tablespace tempfiles – starting with Oracle 10g Oracle will recreate the tempfiles when the database is opened.” “Beginning with Oracle database 10g, it is possible to restore a backup from a previous incarnation and recover through a resetlogs command.” “An archived redo log deletion policy is only available starting in Oracle 11g.” “In 11g it is possible to use Active Database Duplication to duplicate a database without any prior backups of the source database.” The extra effort to identify version specific features allows the book to be used successfully with Oracle 11g, 10g, and possibly 9i.

    The title of this book is the second reason why I initially discounted the book as unusable. The title implied that the book’s layout was one of “Perform steps X, Y, and Z in order, and don’t bother wondering why it works – essentially, do as I say or your database will not rise when placed into the oven.” That perception of the book’s contents was far from reality. The basic book structure is: Chapter introduction, very brief problem description, well organized step by step problem solution (typically contained on a single page, with relevant related information needed by the solution within 5 pages of the solution), detailed description of how and why each step should be performed as well as side notes and cautionary advice, second very brief problem description, etc. The first half of the book describes how to backup the database, and the second half describes how to recover the database as well as how to set up for special configurations. Roughly 35 pages near the end of the book describe how to configure three different media management products. Most of the book focuses on the command line interface for RMAN, which in my opinion is the correct approach as extensive coverage of backups performed with the Enterprise Manager Database Control would have been a waste of book pages – the nagging question of what to do when the GUI breaks was avoided in this book. The “RMAN Recipes for Oracle Database 11g” book, while not perfect, is organized such that it is not necessary to dig through 50 to 100 pages in order to implement a solution listed in the table of contents, should the book be needed for future reference purposes.

    As I did with the “Oracle Database 10g RMAN Backup and Recovery” book, I tested a couple of the backup, recovery, and cloning solutions outlined in this book using Oracle Standard Edition on a Windows platform.

    • Repeated backups with and without a recovery catalog, with and without backing up archived redo logs, with and without automatic control file and spfile auto backups, all with a flash recovery area were performed. All worked as described in the book without issue.
    • Restore of the control file without a recovery catalog – worked as described in the book, the fact that ALTER DATABASE OPEN RESETLOGS; must be executed when a control file is restored from backup was repeatedly emphasized throughout the book.
    • Restore and recovery of a deleted datafile – worked as described in the book.
    • FLASHBACK DATABASE did not work as it is not supported on the Standard Edition of Oracle. Block level recovery also would have failed as it is not supported on Standard Edition. FLASHBACK TABLE TO BEFORE DROP did work without problem. See the Oracle documentation “Oracle Database Licensing Information – Oracle Database Editions” for the list of unsupported features.
    • Recovering to a previous incarnation of the database did not work as expected (it is possible that I missed a step). When testing, be careful with issuing RESET DATABASE INCARNATION 1; as this could lead to complications when later testing other restore and recovery scenarios – dealing with “ORA-00600: internal error code, arguments: [kcvsor_current_inc_rdfail]” is not covered, and may be an indication that the database incarnation needs to be reset to a different value after recovering the control file, but before attempting to recover.
    • Dropping a table and recovering to a past time using the UNTIL TIME syntax worked as described in the book.
    • Cloning a database worked on the second try, after modifying the author’s init.ora to include a line for SGA_TARGET, and there was no sense of frustration trying to locate all of the items to be set as was the case with the “Oracle Database 10g RMAN Backup and Recovery” book.
    • Setting up the production database to permit scheduled cloning of the database worked without problem, although it might have been helpful to stress to the readers that the later addition of data files to the production database will likely require re-executing the RUN block with CONFIGURE AUXNAME so that auxnames are specified for the new datafiles.

    General comments about the book, as it was read from cover to cover:

    • The book’s errata page on Apress’ website identifies a number of typos in the book, which may or may not cause problems for people using the book in a high stress situation.
    • The first couple chapters contain many paragraphs with odd sentence structure and strange synonyms, making it difficult in some cases to easily understand the authors’ advice.
    • Page 41 states “Remember to recycle the database after editing the [parameter] file for the changes to take effect.” This is after page 29 states “Stopping and restarting your database in quick succession is known colloquially in the DBA world as bouncing your database.” I have heard of bouncing the database, but I am guessing that recycling the database is what one would do after corrupting the spfile after editing it with a text editor, or what one might do when the database crashes and RMAN cannot recover the database.
    • Page 55 advises against placing the archived redo logs in the flash recovery area, yet page 17, under the heading “Best Practices” states that Oracle recommends using the flash recovery area to store the entire set of backup and recovery related files. “Oracle recommends” is stated a few too many times in the first couple chapters.
    • Paraphrasing: “A backup taken while the database is online or in the mount state (???) is considered to be an inconsistent backup. An inconsistent backup always needs recovery after a restore. A noarchive redo log mode database, when backed up with RMAN, must be mounted.” This raises the question: how does one create a consistent backup of a database in noarchive redo log mode when the database must be mounted? I think that “mount” is a typo in the book.
    • People reading the book cover to cover will find that it frequently repeats information, but that repetition proves to be quite valuable when using the book as a reference during a backup or recovery situation, as in most cases all needed information will be within five pages, or a reference will be made to see Recipe x-y should problem z occur.
    • The book does not indicate which features described in the book require an Enterprise Edition license, nor which features require additional licenses beyond an Enterprise Edition license (this seems to happen in several books by different authors). For example, in Recipe 7-10 Performing Incremental Backups, the authors fail to mention how incremental backups in the Standard Edition differ from incremental backups in the Enterprise Edition.
    • The method of freeing space in the flash recovery area on page 96 appears to be a bit extreme (like launching a grenade at an ant hill), although better approaches are provided later. Page 246 seems to indicate that the method on page 96 might not be as extreme as it first appears.
    • The book shows how to schedule a backup script using either Linux/Unix cron or the Windows task scheduler, with included scripts for both Unix and Windows.
    • The recipe showing how to duplicate (clone) a production database on a single host is very well constructed, showing how to create the backup of the production database, what to enter in the listener.ora, the initialization parameters to specify in the clone database’s init.ora file, starting the clone in NOMOUNT mode using SQL*Plus, and performing the duplication in RMAN. Unfortunately, the authors failed to specify a value for SGA_TARGET in the suggested init.ora file parameters, resulting in “ORA-04031: unable to allocate 100 bytes of shared memory” when RMAN attempted to start cloning the production database. Even with this minor omission, the directions are far easier to follow than directions in other resources describing how to accomplish the same task. It would have been helpful had the directions for cloning the database included a side note stating “On the Windows platform, use ORADIM -NEW -SID TEST1 to create a service for the clone instance.”
    • The book provides a recipe describing how to set up the production database to permit cloning the database on a scheduled basis. A portion of the script closely resembled a similar script in the Oracle 10g R2 documentation “Backup and Recovery Advanced User’s Guide”.
    • The book provides a helpful section describing how to respond to various RMAN error messages.
    • The book demonstrates flashback version queries, flashback table, flashback drop, and flashback database.
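
    To illustrate the consistent/inconsistent backup point above: a consistent backup of a NOARCHIVELOG-mode database is taken after a clean shutdown, with the database then started in the mount state so RMAN can read the control file while no changes are being made to the datafiles. A minimal sketch (the exact commands are my assumption, not the book’s recipe):

    ```
    RMAN> CONNECT TARGET /
    RMAN> SHUTDOWN IMMEDIATE;   # clean shutdown leaves the datafiles consistent
    RMAN> STARTUP MOUNT;        # mounted but not open: no changes occur during the backup
    RMAN> BACKUP DATABASE;      # this backup needs no recovery after a restore
    RMAN> ALTER DATABASE OPEN;
    ```

    So “mounted” does not by itself make the backup inconsistent; what matters is whether the database was shut down cleanly before being mounted.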
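
    The cron scheduling described in the book can be as simple as a single crontab entry invoking the backup shell script; a hypothetical example (the paths and script name are assumptions):

    ```
    # run the RMAN backup script every night at 02:00, appending output to a log
    0 2 * * * /home/oracle/scripts/rman_backup.sh >> /home/oracle/logs/rman_backup.log 2>&1
    ```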
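
    The end of the cloning recipe can be sketched roughly as follows, with an SGA_TARGET line added to the init.ora to avoid the ORA-04031 error mentioned above; the instance name TEST1, file paths, and parameter value are illustrative assumptions, not the book’s exact text:

    ```
    -- added to the clone's initTEST1.ora (value is an assumption; sized to avoid ORA-04031)
    sga_target=600M

    SQL> -- start the clone instance with the prepared parameter file
    SQL> STARTUP NOMOUNT PFILE='/u01/app/oracle/dbs/initTEST1.ora';

    RMAN> CONNECT TARGET sys@PROD AUXILIARY sys@TEST1
    RMAN> DUPLICATE TARGET DATABASE TO TEST1;
    ```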
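
    For the flashback features listed above, a flashback version query is perhaps the least familiar; a hypothetical example against the classic EMP sample table (table, column, and key value are assumptions):

    ```sql
    -- show every committed version of one row over the last hour,
    -- along with the transaction and operation that produced each version
    SELECT versions_xid, versions_operation, empno, sal
    FROM   emp
           VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
                        AND SYSTIMESTAMP
    WHERE  empno = 7839;
    ```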

    Overall, I recommend this book to people using Oracle 10g or 11g in production environments that require backup and recovery to be accomplished correctly on the first attempt.