Book Review: Beginning Oracle SQL

August 23, 2011

The Ideal Guide for Beginning to Intermediate Oracle SQL: Providing the Logical Reasoning Behind the Approach to Understanding and Solving Data Access Challenges

The “Beginning Oracle SQL” book is an updated version of the book “Mastering Oracle SQL and SQL*Plus”, which covered earlier Oracle releases.  The original version, while extremely well written, suffered from a handful of typesetting flaws that resulted in most pages appearing without page numbers, the initial words of some paragraphs being discarded, the left-most columns of some tables being mangled, and a small number of typos.  The “Beginning Oracle SQL” book corrects the typesetting flaws, eliminates sections of the original book that are obsolete due to Oracle product changes, eliminates SQL*Plus related sections that can be easily found in the Oracle documentation as well as those sections that are deemed too advanced, and adds a couple of new sections.  I purchased two copies of the “Beginning Oracle SQL” book: a Google Books version through the Android Market on a Motorola Xoom (unfortunately, the Google Books app crashes when displaying book pages 33 and 34 side-by-side – pages 56 and 57 as displayed in Google Books), and a PDF copy from Apress.

There are several Oracle SQL books on the market, and I have had the opportunity to read a couple of those books.  This book, much like the original, is exceptional in comparison with other SQL language books, and seems to be suitable for use in classroom environments due to the inclusion of the exploratory exercises at the end of each chapter (I did not spend much time reviewing these exercises or the Appendix contents).  The book offers a carefully constructed progression of topics that build on prior material presented in the book.

Concepts Covered by the Book, Book Specific Features, and Quick Observations:

  • Explains why we implement the various types of database constraints.
  • Forms an early distinction between the database and the software database management system, while other books muddy the term “database”.
  • Demonstrates how to get started with SQL*Plus, edit lines of the SQL statement that is currently in the buffer, and save the SQL statement that is currently in the buffer.
  • Builds a foundation of why things are as they are, and how to find where things are located before diving too deeply into SQL syntax.
  • Downloaded source code examples seem to work.
  • Plenty of forward and backward references within the book, and external references to other resources.
  • Components of the SQL statement, column aliases, regular expressions (page 125).
  • Several of the chapters (4, 5, 6, 9, 12) include SQL statements without showing the typical SQL*Plus SQL> prompt and line numbers – the formatting without SQL> is helpful when copying the SQL statements from the PDF file for execution in SQL*Plus (holding the Alt key while selecting the text is a work-around for the other chapters).
  • Changing the default object schema used by a session: alter session set current_schema=scott; (page 189).
  • Recovering a dropped table: flashback table history to before drop; (page 190).
  • Correctly states the facts, while other books state otherwise: “If you drop a table, you implicitly drop certain dependent database objects, such as indexes, triggers, and table privileges granted to other database users. You also invalidate certain other database objects, such as views and packages,” (page 190).
  • Cartesian products (page 198), equijoins (page 198), non-equijoins (page 199), self-joins (page 201), natural joins (page 203), outer joins (page 205 – both Oracle specific and ANSI syntax).
  • Offers advice to pick a SQL formatting convention and adhere to that convention (page 199).
  • Book repeatedly reminds the reader of potential problems caused by NULL values, and includes several examples which show where NULL values may cause unexpected problems for developers.
  • Nugget of advice: Test performance of a query not only with the typically small datasets found in development environments, but also with larger datasets that are more typical of production systems (page 237).
  • Correlated subqueries (page 237).
  • Scalar subqueries – subqueries in the SELECT clause (page 243).
  • Inline views – subqueries in the FROM clause (page 244).
  • Demonstrates proper table alias naming techniques throughout the book, rather than just using generic aliases such as A, B, C, etc.
  • USER_UPDATABLE_COLUMNS view shows which view and table columns can be modified through inserts, updates, and deletes (page 277).
  • Brief mention of Instead of Triggers to permit the data normally returned by a view to be updated (page 278).
  • Materialized views and automatic query rewrite (page 283).
  • Substitution variables in SQL*Plus (page 288).
  • User-defined variables in SQL*Plus (page 290).
  • Implicit user-defined variables in SQL*Plus (page 291).
  • PROMPT, PAUSE, and ACCEPT SQL*Plus commands (page 292).
  • Common SQL*Plus system variables (page 294).
  • Bind variables (page 298).
  • login.sql and glogin.sql scripts for customizing the SQL*Plus environment (page 305).
  • SQL*Plus MARKUP setting to produce HTML formatted output for SQL statements (page 318).
  • Scripting with SQL*Plus using input parameters for scripts (page 322).
  • Passing values between scripted SQL statements with the NEW_VALUE clause in SQL*Plus (page 323).
  • Passing values between scripted SQL statements with the help of bind variables (page 324).
  • SQL*Plus WHENEVER command to react to certain errors during script execution (page 325).
  • VARRAYs in custom data types (page 332).
  • Inserting VARRAY values into table columns (page 333).
  • Nested tables (page 336).
  • Multiset operators with nested tables (page 341).
  • Using CAST and COLLECT to convert an array to a nested table (page 346).
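Two of the session-level commands in the list above are short enough to show in full; a minimal sketch, assuming the SCOTT schema exists and the recycle bin is enabled (the HISTORY table name is hypothetical):

```sql
-- Change the default schema used for object name resolution (page 189):
ALTER SESSION SET current_schema = scott;

-- Accidentally drop a table, then recover it from the recycle bin (page 190):
DROP TABLE history;
FLASHBACK TABLE history TO BEFORE DROP;
```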

Much like the “Expert Oracle Database Architecture” books, this book uses Oracle error messages to teach SQL troubleshooting techniques.  Unfortunately, most of these Oracle error messages are not highlighted in the index at the back of the book, so it is difficult to locate a specific error message and its solution.

Oracle Error Messages as Teachable Moments:

  • Chapter 2: ORA-00904: “column”: invalid identifier (page 44, 48); ORA-00933: SQL command not properly ended (page 63)
  • Chapter 3: ORA-04043: object “owner”.”name” does not exist (page 77)
  • Chapter 4: ORA-00920: invalid relational operator (page 97); ORA-00913: too many values (page 106); ORA-01427: single-row subquery returns more than one row (page 108)
  • Chapter 5: ORA-01722: invalid number (page 118)
  • Chapter 6: ORA-00001: unique constraint (owner.constraint) violated (page 148); ORA-02292: integrity constraint (owner.constraint) violated – child record found (page 156); ORA-01555: snapshot too old (page 161)
  • Chapter 7: ORA-02292: integrity constraint (owner.constraint) violated – child record found (page 172); ORA-02449: unique/primary keys in table referenced by foreign keys (page 191); ORA-08002: sequence sequencename.CURRVAL is not yet defined in this session (page 193)
  • Chapter 8: ORA-00918: column ambiguously defined (page 197); ORA-00937: not a single-group group function (page 216, 221); ORA-00934: group function is not allowed here (page 218, 219); ORA-00979: not a GROUP BY expression (page 221)
  • Chapter 9: ORA-01436: CONNECT BY loop in user data (page 249); ORA-00907: missing right parenthesis (page 254); ORA-30483: window functions are not allowed here (page 258); ORA-01466: unable to read data – table definition has changed (page 261)
  • Chapter 10: ORA-00957: duplicate column name (page 269); ORA-01400: cannot insert NULL into (???) (page 275); ORA-01732: data manipulation operation not legal on this view (page 276); ORA-01752: cannot delete from view without exactly one key-preserved table (page 276); ORA-01779: cannot modify a column which maps to a non key-preserved table (page 276); ORA-01402: view WITH CHECK OPTION where-clause violation (page 280)
  • Chapter 11: ORA-00936: missing expression (page 302); ORA-01403: no data found (page 325)
  • Chapter 12: ORA-00904: “owner”.”table”: invalid identifier (page 338)
  • Appendix B: ORA-01839: date not valid for month specified (page 373); ORA-00001: unique constraint (owner.constraint) violated (page 376); ORA-02297: cannot disable constraint (owner.constraint) – dependencies exist (page 400)
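Several of these errors are easy to reproduce.  For instance, ORA-01427 appears whenever a subquery used with a single-row comparison operator returns more than one row; a hypothetical sketch using the classic SCOTT.EMP table:

```sql
-- Raises ORA-01427 when department 20 contains more than one employee,
-- because the = operator expects the subquery to return a single row:
SELECT ename
FROM   emp
WHERE  sal = (SELECT sal FROM emp WHERE deptno = 20);

-- Rewriting with IN accepts a multi-row subquery result:
SELECT ename
FROM   emp
WHERE  sal IN (SELECT sal FROM emp WHERE deptno = 20);
```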

The Oracle specific SQL syntax is quite broad, and while this book makes a great effort to cover much of the syntax, it still leaves plenty of advanced topics for the “Pro Oracle SQL” book (the intermediate to advanced SQL book).  The “Beginning Oracle SQL” book covers the following SQL keywords:


  • OR (page 94)
  • AND (page 95)
  • NOT (page 96)
  • BETWEEN (page 98)
  • IN (page 99)
  • LIKE (page 100)
  • CASE (page 101)
  • GROUP BY (page 208)
  • HAVING (page 217)
  • GROUP BY ROLLUP (page 222)
  • GROUP BY CUBE (page 223)
  • PARTITION BY in the FROM clause (page 227)
  • ANY and ALL operators with subqueries (page 234)
  • EXISTS operator (page 238)
  • WITH clause – subquery factoring (page 245)
  • SIBLINGS keyword in the ORDER BY clause (page 251)
  • AS OF – for use with Flashback Query (page 260)
  • VERSIONS BETWEEN operator – for use with Flashback Query (page 262)
  • WITH READ ONLY (page 267)
  • WITH CHECK OPTION (page 278)
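As a quick illustration of one keyword from the list, the WITH clause (subquery factoring, page 245) names a derived result once so that it may be referenced later in the statement; a sketch with hypothetical table and column names:

```sql
-- Name the aggregation once, then join to it as if it were a table:
WITH dept_sal AS (
  SELECT deptno, AVG(sal) AS avg_sal
  FROM   emp
  GROUP BY deptno
)
SELECT e.ename, e.sal, d.avg_sal
FROM   emp e
JOIN   dept_sal d ON d.deptno = e.deptno
WHERE  e.sal > d.avg_sal;  -- employees earning above their department average
```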


Functions Covered by the Book:

  • REGEXP_LIKE (page 128)
  • REGEXP_INSTR (page 129)
  • REGEXP_SUBSTR (page 130)
  • REGEXP_REPLACE (page 130)
  • NEW_TIME (page 132)
  • EXTRACT (page 132)
  • ROUND (page 133)
  • TRUNC (page 133)
  • MONTHS_BETWEEN (page 133)
  • ADD_MONTHS (page 133)
  • NEXT_DAY (page 134)
  • LAST_DAY (page 134)
  • NULLIF, COALESCE, NVL2 (page 135)
  • GREATEST (page 135)
  • LEAST (page 135)
  • NVL (page 136)
  • DECODE (page 136)
  • TO_NUMBER (page 138)
  • TO_CHAR (page 138)
  • TO_DATE (page 138)
  • CAST (page 141)
  • GROUPING function (page 224)
  • GROUPING_ID function (page 225)
  • RANK analytic function (page 253)
  • LAG analytic function (page 256)
  • DENSE_RANK analytic function (page 258)

SQL Statement Types:

  • INSERT (page 146)
  • UPDATE (page 151)
  • DELETE (page 154)
  • MERGE (page 157)
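Of the four statement types, MERGE (page 157) is probably the least familiar to beginners, since it combines an UPDATE and an INSERT in a single statement; a minimal sketch with hypothetical table names:

```sql
-- Update existing bonus rows and insert missing ones in one pass:
MERGE INTO bonus b
USING (SELECT empno, sal FROM emp WHERE deptno = 30) src
ON    (b.empno = src.empno)
WHEN MATCHED THEN
  UPDATE SET b.amount = src.sal * 0.10
WHEN NOT MATCHED THEN
  INSERT (empno, amount) VALUES (src.empno, src.sal * 0.10);
```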

Data Definition:

  • CREATE TABLE (pages 72, 163)
  • DROP TABLE (pages 154, 189)
  • ALTER TABLE (page 168)
  • RENAME (just a brief mention on page 169)
  • Constraints (page 170)
  • CREATE SCHEMA (page 176)
  • CREATE INDEX (page 179)
  • ALTER INDEX (page 181)
  • CREATE SEQUENCE (page 185)
  • CREATE SYNONYM (page 186)
  • TRUNCATE TABLE (page 191)
  • COMMENT (page 191)
  • CREATE VIEW (page 266)
  • CREATE TYPE (page 336, 339)
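A compact sketch ties several of these DDL commands together (the object names are hypothetical, and the DEPT table is assumed to exist):

```sql
CREATE TABLE projects
( projno  NUMBER(4)      CONSTRAINT proj_pk PRIMARY KEY
, pname   VARCHAR2(30)   NOT NULL
, deptno  NUMBER(2)      CONSTRAINT proj_dept_fk REFERENCES dept(deptno)
, budget  NUMBER(10,2)   CONSTRAINT proj_budget_chk CHECK (budget >= 0)
);

CREATE INDEX proj_dept_idx ON projects (deptno);
COMMENT ON TABLE projects IS 'Demonstration of the DDL commands above';
```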

If you own the “Mastering Oracle SQL and SQL*Plus” book, you will see the following changes when reading this book (I might have missed a few changes):

  • New section “Other Query Languages, Really” – includes a brief overview of some of the query languages used by various database management systems and the different SQL dialects used by Oracle SQL, Microsoft SQL Server, and IBM DB2 (page 9).
  • New Figure 1-1 that shows the relationship between rows and columns for the DEPARTMENTS table (page 11).
  • New Figure 1-2 that shows NULL values in the EMPLOYEES table (page 13).
  • New section that introduces SQL Developer – provides just enough information to help someone who might find SQL*Plus to be too difficult to use (page 58).
  • New tip that indicates that access to the DUAL table starting in 10g is less resource intensive than using a custom table to perform the same function (page 89).
  • An explanation was added of why a NOT IN clause with NULL as one of the items fails to return any rows (page 114).
  • The multitable INSERT command syntax was removed from this edition of the book (it is now described in the “Pro Oracle SQL” book) and is replaced with an extended description that should help those who need to perform an INSERT using a SELECT statement for the row data source.
  • Provides enhanced descriptions for examples, such as the missing explanation of the UPDATE examples, which now appears on pages 153, 154-156.
  • Slightly smaller font size than the original book – more information per page.
  • Removed the discussion of the SQL*Plus AUTOCOMMIT option, frequency of COMMITs, and SAVEPOINTs just prior to section 6.6.
  • Removed the SET TRANSACTION discussion from section 6.6.
  • Section 7.6 shows using SQL Developer to display autotrace statistics rather than using SQL*Plus as in the previous edition (all mentions of iSQL*Plus have been removed from the book).
  • New description added to section 9.1 that is dedicated to demonstrating how NULLs can be problematic in EXISTS, IN, NOT EXISTS, and NOT IN clauses (pages 240-243).
  • Removed the ROWNUM and Top-N SQL section (page 245).
  • Extended example that shows why subquery factoring might be helpful.  Also included is an example that shows the steps one might take to build a factored subquery and then use that factored subquery in a complex SQL statement (pages 246-247).
  • The subtopic of windowing (ex: ROWS BETWEEN) was removed from the discussion of analytic functions.
  • Added RELEASE and TAB to the table of common SQL*Plus system variables (page 294).
  • Removed the “HTML in iSQL*Plus” section and added the section, “11.6 Building SQL*Plus Scripts for Automation” (page 321).

It seems that even fantastic books invite problems that creep into book pages.  The editing required to bring this book up to date seems to have introduced a couple of problems, while correcting a couple of problems that made the original book difficult to use for reference purposes.

Suggestions for Improvement, Problems, and/or Errors:

  • It appears that the wrong figure was copied (figure 1-3 is a duplicate of figure 1-5) – shows an entity relationship model diagram rather than the three set operators (UNION, INTERSECT, MINUS) – the correct diagram is shown in the original book (page 15).
  • The URL pointing to the script that recreates the example schema used by the book appears to be incorrect – this issue might be caused by a recent redesign of the Apress website.  The actual location is the Apress website address followed by /9781430271970 (pages 19, 72, 116).
  • Book separates SELECT SQL statements into a “Retrieval” grouping, while the Oracle SQL documentation considers SELECT SQL statements as part of the “DML” grouping.  This difference might cause a little confusion.
  • Screen captures of SQL*Plus for Windows (SQLPLUSW) were mistakenly left in the book (pages 50, 52).
  • Need to change ‘S02’ to ‘SQL’ so that the SQL statement produces the expected output (page 103).
  • The description of the EXP function has an NLS problem – the comma should be changed to a decimal point in the number (page 119).
  • The Summary paragraph for chapter 6 still mentions the SAVEPOINT and SET TRANSACTION commands which were removed from the chapter.
  • The book refers to the rows returned by a query as a “result table,” which might be a little confusing.  For example, page 265 states, “The result of a query is always a table, or more precisely, a derived table.” (This might be 100% correct, it just seemed a bit odd to me.)
  • The descriptions of some of the SQL keywords, such as the description for CONNECT_BY_ROOT, could have benefitted from an example SQL statement.  In a couple of rare cases in the book the output of a SQL statement could have been explained a little more completely, rather than relying on the reader to completely understand what was displayed.
  • The word “unacceptable” should be replaced with “acceptable” (page 252).
  • The book states that chapter 6 demonstrated changing a session’s isolation level to READ ONLY so that query results were always as of a specific transaction start time.  It appears this section was removed from chapter 6 (page 259).
  • The book probably should clarify that an Enterprise Edition license is required in order to use Flashback Query – although the book does state that the DBA will be required to perform some configuration in advance (so maybe this is implied).
  • The permissions described on page 182 as required to use DBMS_XPLAN might be a little excessive: “You need the SELECT_CATALOG_ROLE and SELECT ANY DICTIONARY privileges.”
  • The book states, “see Appendix A of this book or SQL*Plus User’s Guide and Reference for details” – Appendix A is no longer the “Quick Reference to SQL and SQL*Plus”, but instead is “The Seven Case Tables” (page 297).  A similar problem is found on pages 39, 56, 68, 81, and 89.
  • The book states: “SQL*Plus treats &1 and $2 just…” – the $2 should appear as &2 (page 303).
  • The book states: “SQL Developer has more features than SQL*Plus in this area, because it runs in a browser environment itself” – this is an unfortunate copy and replace error when removing all references to iSQL*Plus from the book (page 318).

The issues identified above are minor when compared to the value offered by the book’s contents.  I would recommend this book without reservation, whether the person’s first experience with SQL was less than ten minutes ago, or for someone who has worked with Oracle SQL for a decade or longer.  Clear, concise, and helpful, with minimal confusing details – an accurate description of not only the SQL language, but also of this book.

Book Review: Pro Oracle SQL

January 17, 2011

Hard-Hitting, Beyond the Basics Advice for Breaking the Black Box Approach to Database Programming, Leveraging the SQL Language on Oracle Database, and Improving Overall Performance of the SQL that You Write

I pre-ordered this book in October 2010 while searching for a way to learn some of the more advanced features of Oracle Database’s SQL; I have been searching for a follow-up book to the “Mastering Oracle SQL and SQL*Plus” book that I read a couple of years ago, and I think that I finally found that follow-up book.  Written as the Oracle Database 11.2 sequel to the book “Mastering Oracle SQL and SQL*Plus” (and that book’s much updated revision titled “Beginning Oracle SQL”), this book is not written as a simple reference for the Oracle specific SQL dialect.  This is not a book that you will want to use when trying to learn the basics of creating a simple SQL statement with a four table join.  However, if you are able to create that simple SQL statement with a four table join in less than 30 minutes with the help of an entity-relationship diagram (ERD), and without using a DISTINCT clause, this book’s contents will prove to be a valuable resource not only to expand your depth of knowledge of the SQL language, but also to extract the extra value that becomes visible when Oracle Database is viewed as much more than a black box dumping ground for data.

The authors of this book are all OakTable Network members and have between 15 and 29 years of experience working with Oracle products.  This long-term exposure to Oracle Database is quite clearly an advantage when discussing many of the subtopics that are simply missed by other Oracle specific SQL language references.  While there was no discussion of the SQL language from Oracle Database 2.0, the authors are clearly comfortable with the features in Oracle Database 11.2 as well as the features that are available in Oracle Database 8i, 9i, 10g, and 11g R1.

The book’s contents are well organized.  While each chapter identifies the author who wrote the chapter (possibly indicating that the authors did not work together on each individual chapter), the book flows well with plenty of forward and backward references between chapters, as well as including references to other resources (Metalink, books, and blogs).  Chapter one of the book is a bit different from the rest of the chapters in the book, and appears to be written as a transition area for readers to become familiar with SQL*Plus and Oracle Database.  Chapter one will be valuable to readers attempting to adjust from using graphical query tools (TOAD, SQL Developer, Excel, or any number of other graphical tools) to using SQL*Plus.  Additionally, that chapter helps readers who are familiar with other SQL dialects (such as that used by SQL Server) take advantage of Oracle Database’s special characteristics, and introduces readers to multi-table INSERTs and MERGE operations.  Understanding the information presented in execution plans is an emphasis in the remaining chapters of the book; this book’s explanation of execution plan content ranks among the best, if not the best, that I have seen to date.  While there is not a single cartoon drawing in the book, and it does not appear that any sections of this book were borrowed from other books, there are several very well placed diagrams in the book’s chapters.  The book makes a significant effort to control the scope of the material presented.  In most cases, that effort resulted in easy to understand, yet thorough, discussions of complex topics while building bridges to help the reader transition into Oracle performance specific books, such as the book “Troubleshooting Oracle Performance”.  In a couple of areas, adding an additional half-dozen words might have saved the reader a little confusion, but those are rare occurrences in this book.
Be certain to install Oracle Database’s sample schema, and download the script library for this book from the Apress website (some of the scripts, especially those in chapter 16, are quite useful and may not be printed directly in the book).

Foundation knowledge, and miscellaneous comments while reading the book:

  • Pages 3-8: Provides a quick demonstration of how to connect to the database using SQL*Plus, which is helpful for people who have primarily used other query tools, or other database platforms.  Also demonstrates the basics of executing SQL statements, setting up the SQL*Plus environment, and executing scripts.
  • The book briefly touches on why it is important to use consistent formatting and bind variables in order to reduce the number of hard parses, and why writing SQL to limit logical IO is also important.
  • Page 40: Automatic query transformation often takes place, for instance, converting an IN subquery into a standard join.
  • Chapter 2 tries to break open the black box approach to writing SQL statements.
  • The book makes effective use of analogies.
  • Page 59: The book demonstrates that the array fetch size has an impact on the number of consistent gets performed when executing a query.
  • Pages 64-65: Includes a test case that shows why an index would be used in one case to retrieve 1% of the table rows, while in another case a full table scan was more appropriate to retrieve 1% of the rows.
  • Page 66: Nice summary of what controls the number of blocks that are read in a single read call during a full table scan or a fast full index scan.  There is one potentially confusing sentence, “This could mean that a multiblock read might only read one block at a time.”  The concepts were correctly stated, however it might be worthwhile to state that “a multiblock read might be truncated to a single block read due to the blocks that are already in the buffer cache.”
  • Pages 74-75: Describes how B*tree indexes grow from a single block in size, and what triggers the index height to increase.
  • Long code sections in the early chapters with embedded comments – probably OK for the intended audience.
  • Describes and lists when the various join methods (NESTED LOOPS, SORT-MERGE, HASH, CARTESIAN) are appropriate.
  • Pages 120-127: Explains how NULL values behave in potentially unexpected ways in IN, NOT IN, UNION, UNION ALL, INTERSECT, MINUS, GROUP BY, ORDER BY, COUNT, SUM, AVG, MIN, and MAX.
  • The book demonstrates through several examples that there is more than one way to build a SQL statement to answer a specific question, but not all methods are equally efficient.
  • Page 153: States that EXPLAIN PLAN only shows the estimated plan, while the actual plan may be different.
  • Page 160: Provides a demonstration that shows EXPLAIN PLAN sometimes shows the wrong execution plan.
  • Page 171: Provides a warning to SET SERVEROUTPUT OFF before displaying the execution plan using DBMS_XPLAN.DISPLAY_CURSOR.
  • Page 175: Describes all of the format parameters for DBMS_XPLAN, including the undocumented ADVANCED parameter.
  • Page 187: Example of collecting statistics on a table and its indexes without creating histograms.
  • Page 191: States that using an account with DBA privileges simplifies the process of using DBMS_XPLAN.DISPLAY_CURSOR, using data from other accounts, and selecting from system views, but advises to do so only in a test environment.
  • The book typically indicates the first Oracle release version that supports a feature that is being described.
  • Page 218: Includes a test case that seems to demonstrate an error in the Oracle Database 11.2 documentation.
  • Pages 223-224: Very helpful brief summary of most of the analytic functions that are discussed in the chapter – this list reduces the frustration in finding the correct analytic function that is needed to produce a desired result.
  • Page 225: Demonstrates how to create a running sum that automatically resets when the value of a column changes.
  • Page 244: Presence of the keywords WINDOW SORT in an execution plan indicates that the SQL statement uses an analytic function.
  • The KEEP keyword is not mentioned in the chapter that describes analytic functions, however an example is provided on page 173 of the book in the pln.sql script (in the script download for chapters 5 and 15).
  • Page 283: WITH clause is known as subquery factoring; other database platforms (and the ANSI standard) refer to the same construct as a common table expression.
  • Page 286: Oracle may process a WITH block as either an inline view or as a temporary table.
  • Book describes several methods for optimizing query performance, and suggests revisiting queries created in older release versions of Oracle Database to take advantage of performance optimizations that are available in more recent release versions.  Also suggests revisiting PL/SQL code to determine if such procedural code may be accomplished with set-based plain SQL code.  Emphasizes testing for performance, rather than just using the first query that seems to produce the expected result.
  • Describes SQL features through version 11.2 (as does the book “Beginning Oracle SQL”).
  • Page 309: Table 10-1 quickly describes the various functions, operators, and pseudo columns that are related to the Oracle CONNECT BY syntax, including SYS_CONNECT_BY_PATH, CONNECT_BY_ROOT, CONNECT_BY_ISCYCLE, CONNECT_BY_ISLEAF, PRIOR, LEVEL, and NOCYCLE.
  • Pages 331-334: Provides various demonstrations of common mistakes in queries containing EXISTS clauses.
  • Pages 336-339: Demonstrates that EXISTS and IN queries may be automatically rewritten into equivalent SQL statements, and suggests looking in a 10053 trace file to see the actions performed by the optimizer.
  • Page 349: NOT IN and NOT EXISTS may return different results, and are not functionally equivalent due to the different handling of NULL values.
  • Page 355: Prior to 11g, anti-joins could not be performed on NOT IN queries unless the optimizer was sure that NULL values could not be returned.
  • Page 378: Shows how to index NULL values in a B*Tree index by using a constant as the second column in the index.
  • Page 388: Using DBMS_ROWID.ROWID_OBJECT(ROWID) to show how table rows are distributed with hash partitioning.
  • Page 388: Using ORA_HASH( column_name, 31, 0) to predict the partition into which a row will be placed in a hash partitioning scheme with 32 partitions.
  • Page 392: Function based indexes add a virtual column to the table, and those columns are viewable through the DBA_TAB_COLS view.
  • Page 393: Shows how to create a virtual column that always returns a specified calculated value.
  • Page 397: Starting in Oracle Database 11g it is possible to alter an index to make it invisible – this can be used to reduce the risks associated with simply dropping the index.
  • Page 403: Direct path inserts are invoked with the APPEND hint, causing rows to be inserted above the high water mark.
  • Page 408: DBMS_ERRLOG.CREATE_ERROR_LOG procedure creates a logging table that is the destination for rows that fail during an INSERT, UPDATE, or DELETE when the LOG ERRORS INTO clause is included in the INSERT, UPDATE, or DELETE.
  • Pages 418-421: Demonstrates with a test case that when a large percentage of rows in a table need to be updated, creating a new table and using INSERT APPEND may be much faster than a typical INSERT statement if a little down time is acceptable during the creation of the new table.
  • Page 434: Definition of ACID (Atomicity, Consistency, Isolation, Durability).
  • Page 436: ANSI isolation levels explained.
  • Page 482: Demonstrates one of the potential problems associated with using statically defined views in a query, where a query is later extended to return additional information by joining directly to one of the base tables specified in the view.
  • Page 498: Oracle 10g and 11g use rolling invalidation of cursors when statistics are collected on the objects referenced by the cursors.
  • Chapter 15: Instrumenting code, comparing execution plan pre-change and post-change to determine the performance impacts of changes, regression testing of code changes, identifying methods to cause applications to fail.
  • Page 501: Mentions V$SQL_OPTIMIZER_ENV and the underlying structure X$KQLFSQCE.
  • Page 505: Very good explanation of adaptive cursor sharing.
  • Page 510: Identifying SQL statements with large swings in execution time.
  • Page 514: Bind variables are not appropriate in all situations.
  • Page 514: “Hints are actually directives to the optimizer. As long as the hint is valid, the optimizer will obey it.”
  • Page 516: Placing the word COMMENT in front of a set of hints in a SQL statement prevents the optimizer from using the hints that follow.
  • Page 517: Script to extract the OTHER_XML column from V$SQL.
  • Page 527: Example of creating a stored outline from a SQL statement in the library cache.
  • Page 538: SQL Profiles do not lock an execution plan in place – instead they lock a cardinality estimate adjustment in place through the use of embedded hints.  It is also possible to embed specific hints into SQL statements by creating a SQL profile using the undocumented DBMS_SQLTUNE.IMPORT_SQL_PROFILE function.
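The point from page 349 deserves a sketch of its own, because NOT IN and NOT EXISTS diverge as soon as a NULL appears in the subquery result; the classic SCOTT demo tables are used hypothetically here:

```sql
-- NOT EXISTS returns the departments without employees, ignoring NULLs:
SELECT d.deptno
FROM   dept d
WHERE  NOT EXISTS (SELECT NULL FROM emp e WHERE e.deptno = d.deptno);

-- NOT IN returns no rows at all if any EMP.DEPTNO value is NULL,
-- because deptno <> NULL evaluates to UNKNOWN for every row:
SELECT d.deptno
FROM   dept d
WHERE  d.deptno NOT IN (SELECT e.deptno FROM emp e);
```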

Execution Plan Operations Described (after chapter 6 “SQL Execution Plans”):

  • FILTER (page 195)
  • GENERATE CUBE (page 198)
  • WINDOW SORT (page 244)
  • SQL MODEL ACYCLIC (page 272)
  • SQL MODEL CYCLIC (page 273)
  • SQL MODEL ORDERED (page 274)
  • COUNT STOP KEY (page 385)

Analytic and Other Advanced SQL Constructs Described:

  • CUBE function (page 197)
  • GROUPING (page 209)
  • GROUPING_ID (page 212)
  • SUM analytic function (page 225)
  • MAX analytic function (page 225)
  • LAG analytic function (page 227)
  • LEAD analytic function (page 229)
  • FIRST_VALUE analytic function (page 231)
  • LAST_VALUE analytic function (page 231)
  • NTH_VALUE analytic function (page 232 – new in Oracle Database 11.2)
  • RANK analytic function (page 234)
  • DENSE_RANK analytic function (page 235)
  • ROW_NUMBER analytic function (page 236)
  • RATIO_TO_REPORT analytic function (page 237)
  • PERCENT_RANK analytic function (page 238)
  • PERCENTILE_CONT analytic function (page 238)
  • PERCENTILE_DISC analytic function (page 240)
  • NTILE analytic function (page 241)
  • STDDEV analytic function (page 242)
  • LISTAGG analytic function (page 243 – new in Oracle Database 11.2)
  • MODEL clause (page 253)
  • PIVOT clause (page 284)
  • SYS_CONNECT_BY_PATH (page 311)
  • CONNECT_BY_ROOT operator (page 313)
  • NOCYCLE parameter (page 316)
  • CONNECT_BY_ISCYCLE pseudo column (page 316)
  • CONNECT_BY_ISLEAF pseudo column (page 319)
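For readers unfamiliar with the analytic function syntax covered in those sections of the book, a minimal sketch using the classic EMP demonstration table (assumed here purely for illustration):

```sql
-- For each employee, show the previous (next lower) salary in the same
-- department when ordered by salary (LAG), and the employee's salary
-- rank within the department (RANK).
SELECT
  DEPTNO,
  ENAME,
  SAL,
  LAG(SAL) OVER (PARTITION BY DEPTNO ORDER BY SAL)      AS PREV_SAL,
  RANK()   OVER (PARTITION BY DEPTNO ORDER BY SAL DESC) AS SAL_RANK
FROM
  EMP;
```

Unlike aggregate functions used with a GROUP BY clause, the analytic versions return one row per source row, with the calculated values appended as additional columns.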

Hints Described:

  • NO_MERGE (page 42)
  • MERGE (page 43)
  • NO_PUSH_PRED (page 49)
  • REWRITE (page 51)
  • FULL (page 86)
  • ORDERED (page 91)
  • USE_NL (page 91)
  • PARALLEL (page 277)
  • INLINE (page 286)
  • MATERIALIZE (page 286)
  • SEMIJOIN (page 339)
  • NO_SEMIJOIN (page 339)
  • ANTIJOIN (page 363)
  • USE_ANTI (page 363)
  • NL_AJ (page 363)
  • HASH_AJ (page 365)
  • INDEX (pages 374, 551)
  • APPEND (page 403)
  • APPEND_VALUES (page 403 – new in Oracle Database 11.2)
  • QB_NAME (page 516)
  • OPT_ESTIMATE (page 538)
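A quick sketch of hint syntax (the table, column, and index names are hypothetical), showing how the QB_NAME hint labels a query block so that other hints may reference objects in that block:

```sql
-- Label the query block MAIN, then request a full table scan of T1
-- and access of T2 through the hypothetical index IND_T2_C1.
SELECT /*+ QB_NAME(MAIN) FULL(@MAIN T1) INDEX(@MAIN T2 IND_T2_C1) */
  T1.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1 = T2.C1;
```

As the page 514 item above cautions, a hint is a directive only when it is valid – a misspelled hint or an invalid object alias is silently ignored.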

Parameters Described:

  • _UNNEST_SUBQUERY (page 47)
  • _ALWAYS_SEMI_JOIN (page 342)
  • USE_STORED_OUTLINES (page 529)
  • SQLTUNE_CATEGORY (page 539)

Miscellaneous Useful Functions:

  • ORA_HASH function (pages 95, 388 – used to predict the partition into which a row will be placed in a hash partitioning scheme)
  • DBMS_RANDOM.VALUE (page 473)
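A minimal sketch of the ORA_HASH partition-prediction technique (assuming a hypothetical table that is hash partitioned on a numeric column into 8 partitions – the simple formula below holds when the partition count is a power of two):

```sql
-- Predict the partition position (1 based) into which a row with a
-- partition key value of 1234 would be placed, given 8 hash partitions:
--   ORA_HASH(key_value, partition_count - 1) + 1
SELECT
  ORA_HASH(1234, 7) + 1 AS PREDICTED_PARTITION
FROM
  DUAL;
```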

While the “Pro Oracle SQL” book is an overall excellent book, clearly demonstrating that there was a great deal of care put into the book, there are a couple of small problem areas in the book (note that the authors appear to have addressed most of these issues on the errata page for the book on the Apress website):

  • While not significantly affecting the comprehension of the material presented in the book, there are a couple of typos in the book.  For instance, on page 9 the book states “In this book” rather than “In this chapter”; page 31 of the book uses the word “effect” rather than “affect”; and page 221 of the book demonstrates that simple connecting words were sometimes lost, “… would require multiple self-joins to [the] employees table.”  None of these typos change the intended message of the paragraphs.  However, the writing style in some of the chapters is significantly more fluid than in other chapters.
  • Pages 51-52: When the optimizer did not automatically choose to use the materialized view, the author used a REWRITE hint to force the optimizer to use that materialized view.  It probably would have been a good idea to mention that the optimizer likely bypassed the materialized view because the calculated cost for that execution plan (1935) exceeded the cost for the execution plan that directly accessed the tables (485).
  • Page 73: States, “Block accesses made via an index scan are made using single-block reads.”  Technically, index range scans may employ multi-block reads in certain cases (such as index pre-fetching), but stating that at this point in the book might lead to unnecessary confusion.
  • Page 85: A missing word might lead to misunderstanding: “However, in the cases where the number of subindexes needed would be smaller, the operation can be many times more efficient than a full scan as scanning [a] smaller [number of] index blocks can be more efficient than scanning [a] larger [number of] table blocks.”
  • Page 94: Missing words in an explanation regarding how hash joins work might lead to misunderstanding: “Based on table and index statistics, the table that is determined to return the fewest rows will be hashed in its entirety into memory. This hash table includes all the row data for that table…”  Based on testing, it appears that the hash table only includes the selected columns and the columns that are joined to other row sources.  Additionally, the row source that is hashed into the hash table might not be from a table.  Including this additional detail might be too much information for the scope of the book (reference).
  • Page 102: There is a risk that the Oracle-syntax equivalent of an ANSI full outer join, when the approach is applied to other data, could yield results that differ from the ANSI full outer join if the rows returned are not unique.  One way to work around that limitation is to replace the UNION with a UNION ALL and add AND E2.ROWID IS NULL to the final WHERE clause.
  • Page 163: States, “The operation [in the execution plan] that is most indented is actually the first operation that will be executed. If there are multiple operations at the same level, the operations are executed in a top-down order.”  The Oracle Database Performance Tuning Guide for 11.2 from the Oracle documentation library states essentially the same fact, and is also incorrect.  This guideline is correct in some cases, but incorrect in other cases as can be confirmed with a 10046 extended SQL trace.  The first operation executed in an execution plan is actually the first operation from the top of the execution plan that has no child operations (reference reference2).
  • Page 379: States, “B-tree indexes are suitable for columns with lower selectivity. If the columns are not selective enough, the index range scan will be slower. Further, less selective columns will retrieve numerous rowids from the leaf blocks leading to excessive single block access to the table.”  Based on the second and third quoted sentences, it appears that the author intended to state that “B-tree indexes are GENERALLY NOT suitable for columns with lower selectivity.”
  • Page 397: States, “There is another use case for the invisible indexes. These indexes are useful to reduce the risk while dropping unused indexes… From Oracle Database version 11g onwards, you can mark the index as invisible, wait for few weeks, and then drop the index if no process is affected with less risk.”  It is important that the book states “less risk” and not that there is no risk – it would have been helpful if the book discussed what risks remain (but again this might exceed the intended scope of the book).  Just because an index is not used for a couple of weeks does not mean that the index will not be used during month-end closing, year-end processing, or some other infrequently occurring activity.  Invisible indexes on foreign key columns may still be used to prevent table locking problems on the child table when the parent’s primary key columns are updated.  Additionally, the statistics from the invisible indexes may be used by the optimizer to more accurately determine cardinality estimates (reference reference2).
  • Page 437: The third paragraph contains an apparent typo regarding the location of the online redo logs: “Undo blocks retain the before condition of the data, while the redo information is stored in the online redo logs in the system global area (SGA).”
  • Page 484: Specifies 10044 tracing when 10046 tracing was likely intended.  The third paragraph states, “I’ve also added code to enable 10044 tracing…”  Event 10044 appears to enable tracing of free list undo operations – I believe that the author intended to write, “I’ve also added code to enable 10046 tracing…”
  • Page 529: The outline_startup_trigger.sql script is not included in the script library for the book.
  • Page 539: The create_tuning_task.sql and accept_sql_profile.sql scripts are not included in the script library for the book.
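A sketch of the page 102 workaround described above, using the classic EMP and DEPT demonstration tables rather than the book's actual example (the column list is illustrative):

```sql
-- Oracle-syntax emulation of an ANSI full outer join between EMP and
-- DEPT.  UNION ALL avoids incorrectly collapsing legitimate duplicate
-- rows, and the E2.ROWID IS NULL predicate restricts the second branch
-- to the DEPT rows that have no matching EMP row.
SELECT E.EMPNO, E.ENAME, D.DEPTNO, D.DNAME
FROM   EMP E, DEPT D
WHERE  E.DEPTNO = D.DEPTNO (+)
UNION ALL
SELECT E2.EMPNO, E2.ENAME, D.DEPTNO, D.DNAME
FROM   EMP E2, DEPT D
WHERE  E2.DEPTNO (+) = D.DEPTNO
AND    E2.ROWID IS NULL;
```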

As indicated by the above, most of the problem areas are related to unintentional word substitutions.  The majority of the other problem areas are cases where the authors had to draw the line of discussion at a selected level of detail in order to limit potential confusion and control the scope of the book.  The remaining problem areas are minor in the overall context of the book, and might elicit a response from the authors along the lines of “I thought that I modified that phrase in draft 20 of the chapter”.  This is a fantastic Oracle SQL book that ventures well beyond simple syntax diagrams, and it is an excellent value for the price.

Related blog articles that discuss the contents of this book:

  • Hash Joins – What is Wrong with this Statement?
  • ANSI Full Outer Join, Ready or Not?
  • Analytic Functions – What is Wrong with this Statement?
  • NO_QUERY_TRANSFORMATION Hint is Ignored… Well, Almost Ignored
  • Adding Comments to SQL Statements Improves Performance?

  • Book Review: Oracle Tuning: The Definitive Reference Second Edition

    7 11 2010

    November 7, 2010 (Updated November 9, 2010, November 28, 2010, December 8, 2010, February 28, 2011, March 9, 2011)

    1,100 Page True-False Quiz, “Definitive” is Left as an Exercise for the Student

    (Edit November 28, 2010: Amazon permitted me to submit a review for the book, but only for the first 85 or so pages.  Much more detail has been added to this review since the last update.)

    Based on feedback that this book’s author provided in a comment attached to one of my other (thirteen) Amazon Oracle Database related book reviews, I decided to add a couple of comments at the start of this review.  Last year I contributed to a book that was written by a total of sixteen people, co-authoring two of the chapters in that book.  While that book contained “Oracle” in its title, and the two chapters that I co-authored were Oracle performance related, the book as a whole is not a competitor of the “Oracle Tuning: The Definitive Reference” book.  I am an IT Manager and Oracle DBA, so I am not a competing author or a competing Oracle consultant.  Additionally, I only review books that I have purchased using a portion of my paycheck, so my reviews also take into account whether or not I feel that I have received my money’s worth from a book – does the book deliver on the promises of its front cover and its description.

    The author of the “Oracle Tuning: The Definitive Reference” book is described as having authored more than 30 books, is “one of the world’s top Oracle Database experts with more than 25 years of full-time DBA experience,” operates an Oracle consulting/training company, is the Editor-in-Chief at Rampant TechPress, participates in Oracle related discussion forums (including one operated by his consulting company), and produces articles for a non-Oracle specific blog.

    Why did I buy the “Oracle Tuning: The Definitive Reference” book?  Put simply, I was curious.  I participated in several discussions with Mr. Burleson on Oracle’s OTN forums where his articles and sections of his previous books were discussed.  I brought to his attention a couple of what I felt to be errors and/or omissions from articles and book sections that were linked in the OTN threads.  More knowledgeable contributors to the OTN threads admittedly provided more thorough feedback to Mr. Burleson.  So, I was curious to know whether or not some or all of those corrections and suggestions became a part of the second edition of this book.  The second edition of this book is the first book that I purchased that was published by Rampant TechPress, and the first book that I purchased which was written by Mr. Burleson.  I initially ordered the book from Amazon at the end of May 2010 (it was expected to be in stock the first week of June 2010), and when it had not shipped by the end of August, my curiosity subsided due to the delay (and various other reasons) and I cancelled the original order.  Near the end of October 2010 my curiosity was piqued again, so I placed a new order for the book (at that time the book was expected to be in stock the first week of November 2010).  While waiting for the book to arrive, I started reading portions of the first and second editions of the book through the Google books library.

    Considering that the word “definitive” appears on this book’s front cover, and the book publisher’s website states the following, I expected to see information that went well beyond showing long listings followed by a sentence or two: “This is not a book for beginners. Targeted at the senior Oracle DBA, this comprehensive book gives you all of the knowledge you need to be successful in tuning even the most complex Oracle database.”  So, how well did the book meet the level of expectation that was set by its front cover and publisher’s description?

    The first and second editions of this book are partially viewable through Google books, and I suggest that anyone considering the purchase of this book should first read a couple of pages of the book at a local bookstore or through the Google books interface.  In comparing the first 30 pages of the first edition with the first 250 pages of the second edition, it is apparent that a much more reasonable font size was selected for the second edition of the book.  Several logic, spelling, and grammar errors that are present in the first edition were corrected or removed from the second edition.

    Now, moving on to the problems that prevented this book from receiving a five out of five star rating.  Internet searches of phrases from the book identified several paragraphs that were directly copied and pasted from other books (most published by Rampant TechPress), other articles, the official Oracle documentation, and in at least one case part of the contents of a script found in the RDBMS/ADMIN directory of the Oracle Database home – the original sources of the material were not mentioned.  The book repeatedly emphasizes the goal of changing/flipping initialization parameters with a scheduling utility, periodically throughout the week (or even throughout the day), in anticipation of workload changes.  Unfortunately, a number of the scripts that the author provides to help the reader with monitoring the workload, so that the reader knows when to automatically flip the initialization parameters, do not provide the information claimed in the book.  Repeated advertisements of the author’s Ion tool are unnecessary distractions throughout the book.  Most importantly, the number of errors and lack of sufficient detail prevent the book from meeting my expectations of a book that is advertised as comprehensive and targeted at the senior Oracle DBA; for example, showing the output of DESC V$SYS_TIME_MODEL and a simple listing of the statistics names in that view cannot be described as comprehensive, or as targeted at the senior Oracle DBA.

    This review is roughly 24 typewritten pages in length, several times longer than any of my previous Oracle related book reviews.  The significant number of specific errors, omissions, and distractions far exceeds the problems per page count of any other Oracle related book that I have read to date, and the listed items below likely miss less easily identified problems that a senior Oracle DBA with 25 years of experience or an Oracle Certified Master/Oracle Ace should be able to identify.  I am a bit disappointed that most of the errors that I previously pointed out to the book author, as well as those errors identified by others, were reprinted, or even added to the second edition of this book.  This book truly is not targeted at beginners, but it makes for a spirited true/false quiz or even an essay exam for someone who has a bit more Oracle Database knowledge by having read portions of the official Oracle Database documentation as well as other books.  Those DBAs who enjoy hourly parameter modification, running their databases with multiple block sizes, and performing frequent index rebuilds might find value in this book if those DBAs are able to over-look the sometimes ragged joins of sections of multiple books being copied into this book and the scripts that do not work as designed.

    Specific errors, omissions, and distractions in order for the first 200 or so pages followed by random selections from other parts of the book (the detail behind the rating assigned to the book) (edit November 11, 2010 ~ 111110: Google Books preview for the book):

    • Page 1 contains an advertisement for the author’s Ion product and some sort of a script library that can be purchased.
    • Page 6 states that the author “wrote five of the officially authorized Oracle Press books.”  Books with the “Oracle Press” logo are not officially authorized by Oracle Corporation.  “Oracle Press” is simply a marketing label applied to books published by McGraw Hill.
    • Each chapter opens with a cartoon drawing that is typically one half to two-thirds as tall as the page.
    • Page 8 states “This tuning approach is concerned with delivering data to the end-users quickly, even if it means that the database will consume additional resources.  This approach is associated with the SQL optimizer goal of first_rows_n.”  This statement is a bit sloppy.  The intention of the OPTIMIZER_MODE parameter’s value FIRST_ROWS_n (where n is one of 1, 10, 100, or 1000) is to return the first n rows as quickly as is possible, with the assumption that the remaining rows in the resultset will not be retrieved.  The statement in the book seems to imply that setting the OPTIMIZER_MODE to a value of FIRST_ROWS_n will allow retrieval of ALL of the rows from the database as quickly as possible, with the tradeoff being an increase in resource usage.  Technically, it is not the database that is consuming the resources, but instead the instance’s processes.
    • Page 10, starting to notice several paragraphs that are a single sentence in length, and this formatting continues throughout the book.  SQL code listings also seem to be formatted to consume as much vertical space on a page as is possible.
    • Page 10, confusing sentence/paragraph, “The Oracle 11g SQL tuning advisors are a fully automatic SQL tuning approach and that they use a reactive tool approach, waiting watching for problems to occur.”
    • Page 10, after stating on the previous page that “reactive tuning deals with the present, commonly using v$ views and Active Session History (ASH),” the book states on page 10, “in reactive tuning, the Oracle DBA fights the symptoms, not the disease… Reactive tuning is also problematic because we wait until end-users have been inconvenienced.”  The book author is advocating the use of what he terms “proactive tuning”, which the author describes as attempting to determine the best combination of database parameters and tablespace/table options for each SQL workload.  Confusingly, the author also states that his “proactive tuning” method uses Active Session History, Statspack, and the Automated Workload Repository.  In my opinion, reacting to what is found in ASH, AWR, Statspack reports, most V$ views, and 10046 trace files should be termed as “reactive tuning”, where the DBA responds to an actual identified problem experienced by the end-users.  What the author describes as “proactive tuning” appears to be more along the lines of initialization parameter flipping while end-users are not complaining.
    • Page 11 states “For example, if the AWR shows that the demands on the shared pool become very high between 1:00 pm and 2:00 pm, the DBA might trigger a dynamic decrease of db_cache_size and a corresponding increase of the shared_pool_size parameter during this time period.”  If Oracle’s buffer cache is using the memory specified by the DB_CACHE_SIZE, the size value specified for DB_CACHE_SIZE cannot be decreased without first manually flushing the buffer cache – flushing the buffer cache to allow the shared pool to grow for an hour is a bad idea.  It is important to keep in mind that when the SGA_TARGET parameter is specified, the values for DB_CACHE_SIZE and SHARED_POOL_SIZE specify the minimum values for the parameters.
    • Page 12, when describing examples of poor schema design, suggests that databases using extensive data normalization to minimize data redundancy is a poor design because it forces Oracle to perform unnecessary table joins.
    • Page 12 states, “If poorly designed PL/SQL is encountered, users may be able to tune the database by using array processing such as bulk collect, forall and ref cursors…”  As of Oracle Database 10g, Oracle automatically array fetches 100 rows at a time in cursor FOR loops, which might make it much less likely/necessary that a ‘user’ will re-write the PL/SQL to take advantage of bulk collection’s slightly better performance, assuming that the ‘user’ is not synonymous with ‘end-user’ (reference).
    • Page 15 a confusing sentence, “In these cases, we need to adjust the instance parameters need to change, depending on the time-of-day and day-of-week.”
    • Page 17, the timeline states, “1990s: Oracle is developed and relational databases dominate the IT market.”  This timeline statement, giving consideration to the lack of mention about databases prior to 1990, seems to indicate that Oracle Database version 2 did not exist in 1979.
    • Page 17 states, “RAM speed has not improved since the 1970s while CPU speeds get faster every year.”  This is simply inaccurate – memory access latency and especially memory bandwidth have improved substantially since the 1970s, even if they have not kept pace with CPU speed improvements.
    • Page 19, figure 1.2, which is described as “The Intel CPU Architecture of the Early 21st Century” seems to be terribly out of place in this section of the book, the acronyms contained within the figure are not explained, and the figure fills half of the page.
    • Page 21, figure 1.3, which is described as, “The Changing Dynamics of Human and Hardware Costs” simply shows the intersection of an ascending line labeled as People Costs and a descending curve labeled as Hardware Costs.  It might have been helpful to provide some sort of numbers on the illustration to indicate a timeline and monetary figures – at what point did hardware costs and DBA costs equalize?
    • Page 22 states, “The number one cause of poor performance is the over-normalization of Oracle tables.”  Interesting.
    • Page 22 states, “All SQL should use bind variables, preferably in the code or via cursor_sharing=force, to make SQL reusable within the library cache.”  In general, most SQL statements should use bind variables, but of course there are occasions where bind variables should not be used.  “All” is too strong of a recommendation, and failure to use bind variables does not automatically make the already hard parsed SQL statement unavailable for reuse.
    • Page 22 states, “Failure to set pga_aggregate_target to allow the CBO to choose hash joins can result in very slow SQL performance.”  It is unclear from this section in the book if the PGA_AGGREGATE_TARGET must just be set to a non-zero value, or if there is some other value that it must be set to in order for Oracle’s optimizer to consider using hash joins.  Considering that the optimizer may select to use a hash join when the PGA_AGGREGATE_TARGET parameter is set to a value of zero, and also when that parameter is set to a small value, this statement found in the book is vague.
    • Page 25 states “The first tasks when tuning a database are to identify the external bottleneck conditions, which may include: CPU Bottleneck,… RAM Bottleneck,…  Network Bottleneck, … and Disk Bottleneck.”  While it is important to check these items, I suggest that these probably would not be the first items to check when tuning the database instance’s performance.  Yes, checking whether or not the CPUs are overloaded (or heavily loaded) is important, but maybe it would be better to determine if the overload is causing a measurable negative impact on performance by first checking the delta values of the various wait events, system statistics, and system time model statistics before deciding to check, for instance, whether there are network bottlenecks.
    • Page 25 states, “When the number of run queue waits exceeds the number of CPUs on the server, the server is experiencing a CPU shortage… Note that a 10% CPU level is not always a concern as modern servers are designed to keep the processors as busy as possible.”  The first question is how would someone measure the number of run queue waits.  Is this a measurement over a period of time, or is it an instantaneous measurement of the number of processes waiting to run on the CPUs, or is it an instantaneous measurement of the number of processes currently running on the CPUs plus the number of processes waiting to run on the CPUs?  The second question is whether or not the system can be considered to be CPU bound before the run queue length matches the number of CPUs in the server; for example, previous pages in the book suggested setting the processor affinity or using VMWare to confine the instance (or its processes) to a subset of the CPUs, or to use the Unix nice command to alter the dispatching priority of processes.  The third question is why was 10% selected, and not 50%, 75%, 90%, or 100%?  The first edition of the book did not include the word “wait” in the quoted sentence, and I believe that word causes additional confusion in this edition of the book.
    • Page 25 states that “Large amounts of Oracle*Net traffic contribute to slow SQL performance.”  The book does not describe what is considered “large amounts”, nor what to do about addressing the network bottleneck.
    • Page 25 under the heading of disk bottlenecks the only issue identified is “using RAID5 for high update systems.”  The adjective “high” is never clarified.
    • Page 25 states, while discussing potential solutions for over-allocation of the server’s RAM, that the options for “excessive swapping is to add more RAM, reduce the size of Oracle’s SGAs, or turn on Oracle’s Shared Server.”  Are these the only options, or could more suggestions be recommended?  For example, “size of Oracle’s SGAs” implies that more than one instance is running on the server – would it make sense to reduce the number of instances on the server (suggesting so would, of course, disagree with an earlier suggestion of moving most/all of the instances to a single high-end internally redundant server)?  Are there any reasons not to implement a shared server configuration?  What about tuning the SQL, analyzing excessive PGA utilization, etc.?
    • Page 26, the author appears to misuse the term “network latency”, or at least stretch the meaning of latency as it applies to a network.
    • Page 26 and 27 include half page advertisements for the author’s Ion tool.  Pages 42, 46, 60, 62, 68, 71, 73, and 89 also include an advertisement for the tool.
    • Page 27 states “In a disk-bound database, the majority of the wait time is spent accessing data blocks.”  It is quite possible in a non-disk-bound, well performing database instance that the majority of wait time could very well be spent reading data blocks.  In fact, these events should be among the top 5 wait events, otherwise there probably is a significant performance problem.  It is also quite possible that the system could be disk-bound when the majority of wait time is spent _writing_ to the redo logs, archiving redo logs, and updating data/undo/temp/control files.
    • Page 27, the Top 5 Timed Events that are described as demonstrating a disk constrained database, shows 44 waits for the CPU.  Waits for the CPU cannot be reported in the Top 5 Timed Events – this data in the report is bogus.  The report also shows that the average single block read time is 2.75 seconds, while the average multi-block read time is 0.127 seconds (127ms) – this data in the report is also likely bogus.  The report also shows that 1,363 seconds were lost in 673 waits (average of 2.03 seconds) to the “library cache load lock” wait – if that wait appeared in the top 5 report, even in a report showing the single block read time is 2.75 seconds, I think it would be a good idea to start with the “library cache load lock” wait since that is one of the waits that indicates a severe problem if it appears in the top 5 wait events.
    • Page 28 states “CPU enqueues can be observed when the CPU run queue exceeds the number of CPUs on the database server. This can be seen by… If the system is already optimized, having CPU time as a top wait event is a positive because the addition of faster CPUs or more CPUs will relieve the bottleneck.”  The effects of CPU over-load may be observed long before the CPU run queue exceeds the number of CPUs in the server, whether it be longer average waits for the log file sync wait event, increased waits for latches, increased wait time for single block or multi-block reads, or any number of statistics in the system time model (V$SYS_TIME_MODEL, V$SESS_TIME_MODEL).  Additionally, upgrading and/or adding CPUs is not a cheap option from a licensing standpoint, and is not guaranteed to remove CPU time from the top five “wait” event list (it might even become more prominent as more processes may then simultaneously spin while trying to acquire a latch).
    • Page 28 shows the Top 5 Timed Events portion of an AWR report that includes 4,851 waits for CPU time – a Top 5 report cannot show the number of waits for the CPU – this data in the report is bogus.  Second, the CPU is a limited “wait” resource with an upper utilization limit equal to the number of CPUs in the server multiplied by the elapsed number of seconds, while the other foreground wait events, for the most part, have an upper limit of the average number of sessions (plus parallel related additional processes) connected to the instance multiplied by the elapsed number of seconds.  It is quite possible that a server experiencing excessive CPU usage may not cause the “CPU time” to appear in the top 5 timed events.  Third, we do not know how many CPUs are in the server – would it make a difference if there was only a single CPU compared to 96 CPUs?  Fourth, we have no concept of the elapsed time represented by the report – what if the elapsed time for the report is eight hours (the total reported CPU time is 4,042 seconds)?  Fifth, if the single block read time averages more than a second, shouldn’t the investigation start with the IO system?  Sixth, the most that can be determined from the top 5 timed events is the average utilization, number of waits, and number of timeouts – why are the number of timeouts not shown in the book?  From the top 5 timed events it is difficult, if not impossible, to determine if there wasn’t a brief period of intensive activity, followed by a long period of almost no activity – or if the average activity levels were fairly constant in the time period.
    • Page 28, the Top 5 Wait Events portion of an AWR report (it is unclear why the report switched from the Top 5 _Timed_ Events format shown in the previous example) is labeled as a mockup – probably a good idea that this is labeled as such because the report shows 94,754 seconds where the server was waiting to send data across the network while single block and multi-block reads totaled only 85.75 seconds.  That is why you do not use a satellite link for the network backbone.
    • Page 29 states “Individual Program Global Areas (PGAs) for each session are a set of running programs that do work for the instance and are referred to as processes.”  To put it simply, PGA is much closer to a concept of “an allocated block of memory”, than it is to “a set of running programs” (reference).
    • Page 30 describes the DB_CACHE_SIZE parameter as “the number of data buffers to allocate for the instance.”  The parameter does not specify the number of data buffers (blocks?), it specifies the amount of memory in bytes to allocate to the DEFAULT buffer cache (or the minimum amount in the event the SGA_TARGET is specified).
    • Page 30, a self-conflicting sentence that seems to make no sense. “The most important parameters for instance tuning are those that are immutable because they cannot be changed without starting and stopping the instance or using alter session commands or SQL hints.”
    • Page 30, the DB_FILE_MULTIBLOCK_READ_COUNT parameter is listed among the parameters that cannot be changed without bouncing the database – that is not correct.  When workload CPU costing is in use, the value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter also has no effect in costing of multi-block reads, in stark contrast to the statement made by the book that it is (implied always) used in optimizer costing calculations.
    • Page 30 states “parallel_automatic_tuning:… since parallel full-table scans are very fast, the CBO will give a higher cost-to-index access and be friendlier to full-table scans.”  This is simply incorrect.  The optimizer does not increase the cost of index access paths when costing parallel access paths that use full table scans – it lowers the cost of the full table scan relative to the parallel degree.  Second, parallel full table scans are not automatically very fast, especially if the IO subsystem is bottlenecked.
    • Page 30, the description of the OPTIMIZER_INDEX_COST_ADJ parameter is very weak, “this parameter controls the relative costs of full-table scans versus index scans.”  It would be much better to state that the parameter specifies the percentage of the index access cost to be retained.  Setting the value to a very low value could cause the optimizer to select not only an index access path rather than a full table scan, but to select the wrong index due to cost rounding problems as the value of the OPTIMIZER_INDEX_COST_ADJ parameter approaches 1.  Ideally, since at least Oracle Database 10.1 (and possibly as early as 9.0.1) this parameter should be left at the default value of 100 due to the Oracle optimizer’s use of system (CPU) statistics by default (the default behavior of system statistics usage changed in 10.1).
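To illustrate the percentage interpretation, a minimal sketch (the EMPLOYEES table and the cost figures are hypothetical; the comments describe the generally observed behavior):

```sql
-- OPTIMIZER_INDEX_COST_ADJ retains the specified percentage of the
-- calculated index access cost: 100 (the default) leaves the cost
-- unchanged, while 10 retains only 10% of it.
ALTER SESSION SET optimizer_index_cost_adj = 10;

-- An index access path previously costed at 50 would now cost roughly
-- CEIL(50 * 10 / 100) = 5; as the parameter approaches 1, rounding can
-- make two different indexes appear to have the same (very low) cost.
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 50;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```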
    • Page 30, the description of the OPTIMIZER_INDEX_CACHING parameter is vague – the book doesn’t indicate when the value of the parameter is used in cost calculations, and when it is not used.
    • By page 30 the book has recommended the use of SQL Performance Analyzer, SQL Tuning Advisor, AWR, ASH, accessing the various DBA_HIST views, and accessing the various performance views in Enterprise Manager without mentioning the various extra cost licenses required to use those features.  For example, the book should have mentioned that the Diagnostic Pack must be licensed for access to AWR information, and the Diagnostic and Tuning Packs must be licensed for access to the SQL Tuning Advisor, long before any mention of needing to purchase additional licenses on page 31.
    • Page 33, the half-page SQL script printed in the book and in the source code depot is Oracle Database’s DBA_FEATURE_USAGE_STATISTICS view definition found in the  catdbfus.sql file located in the RDBMS/ADMIN directory of the Oracle home.  For some reason this script is marked as “Copyright © 2005 by Rampant TechPress” in the script library for the book, while the original version found in the Oracle home is marked “Copyright (c) 2002, 2005, Oracle. All rights reserved.”  No effort is made in the book to interpret the meaning of the columns returned by this view.
    • Page 34 states “Starting in Oracle 10g, statistics collections are automated, but users may still need to selectively add histograms and other specialized optimizer statistics.”  The book does not state what triggers the statistics collection for a specific object, at what time the statistics are collected, that histograms are automatically (and perhaps excessively) generated, nor the potential problems that might be encountered after the statistics collection.
    • Page 34 indicates that table compression is introduced in Oracle 11g.  That feature was actually introduced in Oracle 9i R2 (reference).
    • Page 35 states “In Oracle 11g we have SQL profiles and the SQL Access Advisor to help identify sub-optimal SQL statements. Once identified, the SQL profile utility will allow changes to execution plans without adding hints.”  This is untrue – a SQL profile is a series of hints that are automatically generated and applied to the SQL statement during cost-based optimization.
    • Page 36 mentions that Oracle 10g will sometimes automatically rewrite a SQL statement into a more efficient form, but does not provide much, if any, detail describing why this is done, how to see that it has happened, or what to do if the rewritten form is not more efficient (Oracle 9i’s cost-based optimizer was also able to rewrite SQL statements).
    • Page 39, frequently uses the term “Oracle” to mean both Oracle Corporation and Oracle Database, even in the same sentence, and that makes it a bit difficult to comprehend the intention of some of the passages in the book.  For example, “Oracle first addressed this problem by developing special advisory utilities in Oracle.”
    • Page 39, the text at the bottom of the page appears to be a copy and paste of text that appeared in a book titled, “High Performance Data Warehousing” which was originally copyrighted in 1997.
    • Pages 40 and 90: is the large Trumpet of Doom graphic useful?
    • Page 46, the bottom half of this page is largely a direct copy and paste of pages 8 and 9 of an article titled “Metric Baselines: Detecting Unusual Performance Events Using System-Level Metrics in EM 10GR2” that was written by John Beresniewicz at Oracle Corporation.  The article is marked Copyright 2005 by Oracle Corporation (reference which was hosted by permission of the author).
    • Page 47 states, “When the data buffer hit ratio falls below a predefined threshold, it might be useful to add RAM to the data buffer cache.”  It is not a good idea to tune databases based on the buffer cache hit ratio, a ratio that started falling out of favor by the mid to late 1990s (reference).
    • Page 49 states that when the buffer cache hit ratio drops below 70%, the DBA should set up the system to dynamically adjust the buffer cache.
    • Page 51, the text in the last section of the page is largely a copy and paste of page 36 of the book “Oracle Silver Bullets: Real-World Performance Secrets”.
    • Page 52, it would have been helpful if the book briefly described why each of the statistics that were listed are the most commonly used statistics for Oracle exception reporting.
    • Page 53, the SQL statement lacks an ORDER BY clause, which possibly means that the SQL*Plus BREAK ON command will add extra line breaks in unexpected positions in the output.
    • Page 56, table 2.2 is almost an exact copy and paste, ignoring slight formatting differences, of the Oracle Database Reference documentation from Oracle Corporation for DBA_HIST_TEMPSTATXS (reference).
    • Page 57, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values, with no attempt to calculate the delta values of the statistics.  This is one of the SQL statements that the book suggests for trend identification with the goal of scheduling the automatic changing of initialization parameters.
    • Page 58, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
    • Page 61, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
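The missing delta calculation that the three bullets above describe can be sketched with an analytic function (assuming a Diagnostic Pack license for the DBA_HIST views; ‘physical reads’ is used here only as an example statistic, and instance restarts, which produce negative deltas, are ignored for brevity):

```sql
-- LAG retrieves the statistic value from the previous snapshot, so the
-- difference between snapshots (the delta) can be computed in one pass.
SELECT
  snap_id,
  instance_number,
  value,
  value - LAG(value) OVER
            (PARTITION BY instance_number
             ORDER BY snap_id) AS delta_value
FROM
  dba_hist_sysstat
WHERE
  stat_name = 'physical reads'
ORDER BY
  instance_number,
  snap_id;
```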
    • Page 65 simply shows a DESC for V$SYS_TIME_MODEL and V$SESS_TIME_MODEL, and includes a list of the STAT_NAME values found in those views without attempting to describe the meaning of the STAT_NAME values.
    • Page 67, the author does not explain the parent-child relationships of the statistics from V$SESS_TIME_MODEL, rather the query just sorts the statistics in descending order by time.
    • Page 69, the SQL statement does not take into account that some of the sessions may have been connected for more than a year, while other sessions may have been connected for only a couple of minutes.  Consider how a check of the db time statistic for the two sessions might show similar values for the two sessions – which of the two sessions represents more of a potential impact on system-wide performance?
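One way to put such sessions on an equal footing is to divide each session’s accumulated DB time by the number of seconds since the session’s logon (a sketch; DB time in V$SESS_TIME_MODEL is reported in microseconds):

```sql
-- DB time per second of connect duration, so a year-old session and a
-- two-minute-old session can be compared meaningfully.
SELECT
  s.sid,
  s.logon_time,
  t.value / 1000000 AS db_time_sec,
  (t.value / 1000000) /
    GREATEST((SYSDATE - s.logon_time) * 86400, 1) AS db_time_per_conn_sec
FROM
  v$session s,
  v$sess_time_model t
WHERE
  t.sid = s.sid
  AND t.stat_name = 'DB time'
ORDER BY
  db_time_per_conn_sec DESC;
```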
    • Page 70 shows the first SQL statement querying AWR data that compares two snapshots; unfortunately, the script header in the book does not match the script header in the script library for the book, making it difficult to locate the script.  Matching scripts listed in the book with those in the script library is difficult for many of the scripts because the scripts are labeled differently in the book and in the script library.
    • Page 74, the book states that AWR and Statspack reports show time model statistics, and then proceeds to show operating system statistics from an Oracle Database 10.1 report which shows statistics ending with _TICKS, rather than _TIME which became standard starting with Oracle Database 10.2.  It would have been helpful if this section of the book were updated to show the _TIME statistics, or at least mention that the reader will likely see _TIME statistics rather than _TICKS statistics.
    • Pages 81 through 83, the discussion of Oracle Data Miner and decision support systems seems to be off topic for this book.
    • Page 85 states, “this can be plotted as a 1/x function and the exact optimal point computed as the second derivative of the function 1/x as shown in Figure 4.1.”  Many fancy words in this sentence, but no example is provided that shows how to translate the displayed statistics into the graph, how to calculate the second derivative of the 1/x function, or exactly what x is.  Quite often the data contained in the V$DB_CACHE_ADVICE view does not conform to any kind of curve that is represented by figure 4.1.
    • Page 87, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.  This is one of the SQL statements that the book suggests for trend identification with the goal of scheduling the automatic changing of initialization parameters.
    • Page 91, the cumulative nature of the AWR data seems to imply that the author’s database is bounced late every Sunday night – other than that, the chart’s information is nearly useless because the delta values of the AWR statistics were not calculated.
    • Page 92, table 4.2 appears to be a direct copy and paste of the Oracle documentation library for DBA_HIST_ACTIVE_SESS_HISTORY (reference).
    • Page 93, the SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.  One might wonder how this SQL statement would behave in a RAC environment.
    • Page 95, table 4.3 appears to be a direct copy and paste of the Oracle documentation library for DBA_HIST_FILESTATXS (reference).
    • Page 96, it is interesting that the script output on page 96 matches the script output on page 57.  The SQL statement does not work as described for a variety of reasons, including the cumulative nature of the statistic values with no attempt to calculate the delta values of the statistics.
    • Page 100 contains what is essentially a full-page advertisement for the author’s remote DBA consulting company.
    • Page 102, two unnecessary cartoon drawings, one apparently depicting an upset IT manager, and another depicting a person wearing a ski mask while working on a computer.
    • Page 103, the case study suggests that changing the OPTIMIZER_MODE from the default of ALL_ROWS to FIRST_ROWS_100, followed by restarting the database instance corrects performance in OLTP type databases.  “Their DBA did not understand the difference between the optimizer goal of all_rows, which minimizes computing resources and favors full-tale [sic] scans and the first_rows_n optimizer which sacrifices fast throughput in favor of fast access via indexes.”  First problem with this advice: bouncing the database is not necessary when changing the OPTIMIZER_MODE parameter’s value – simply changing the parameter value will automatically force a hard parse of each SQL statement the next time that SQL statement is executed – the client experienced an unnecessary system outage when the database was bounced, as well as losing the performance benefits derived from the blocks already residing in the buffer cache.  Second problem with this advice: it might simply be the case that due to the presence of histograms, some of the execution plans were less than optimal due to the specification of uncommon (or very common) bind variable values on the initial hard parse as a result of the optimizer peeking at the initial bind variable values to generate an optimal execution plan for those values.  Adaptive cursor sharing, introduced in 11.1, reduces the risks associated with the combination of bind variable peeking and histograms on columns (this was not mentioned).  By changing a global setting, such as the OPTIMIZER_MODE parameter, rather than spending a couple of minutes enabling a 10046 trace for a slow session, or examining an AWR or Statspack report to identify resource intensive SQL statements, the author possibly caused other performance problems for any SQL statement that, for example, might exist in a report which typically returns more than 100 rows.  
If a query is expected to return 100 or fewer rows, the execution plans should be identical when the OPTIMIZER_MODE parameter is set to ALL_ROWS and when it is set to FIRST_ROWS_100 (this value tells the optimizer to assume that all rows after the first 100 will be discarded) which probably describes most of the queries that are commonly executed in OLTP databases (with the exception of a few).  Third problem with this advice: the author did not describe why FIRST_ROWS_100 was selected rather than FIRST_ROWS_1, FIRST_ROWS_10, or FIRST_ROWS_1000 (or actually even provide a good explanation why ALL_ROWS was the wrong setting).
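For reference, the parameter change requires no outage; either statement below takes effect on the next hard parse of each SQL statement (a sketch, not a recommendation to change the default):

```sql
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS_100;  -- instance-wide
ALTER SESSION SET optimizer_mode = FIRST_ROWS_100; -- current session only
```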
    • Page 104 states, “in this emergency, an Oracle9i shop called complaining about a serious degradation in SQL performance right after implementing a partitioned tablespace.”  I was not aware that a tablespace could be partitioned, and a search of the Oracle documentation library found nothing on this topic.  The author provided a DBMS_STATS.GATHER_SCHEMA_STATS command with several parameters.  Only the DEGREE parameter was explained, stating that it allowed collecting the statistics using 15 parallel processes.  I wonder if the author reset the parallel degree of the indexes belonging to the schema from the resulting value of 15 to the normal value of 1?
    • Page 104, the author references output generated by his plan9i.sql script, and indicates that the output showed unnecessary full table scans of large tables.  We really do not know if this database had been online for a couple of hours, or a couple of years based on the output.  In examining the contents of the plan9i.sql script, there are a couple of problems with the script.  First, it queries V$SQLAREA rather than V$SQL, so it is completely blind to the possibility that multiple child cursors might exist for the same SQL statement, each with different execution plans.  Second, the script reports on all SQL statements that are still in the library cache when the sum of the executions of those SQL statements is ten or greater when grouped by the table’s name – this sum becomes the NBR_FTS column.  Therefore, the NBR_FTS column does not take into account the fact that a full table scan of a table may be performed more than once per execution of the SQL statement, and fails to count the full table scans that happened for SQL statements already aged out of the library cache.  Third, the case study did not discuss how to examine whether or not the full table scans were more efficient than available index access paths, or how it was found that a common WHERE clause in the query contained a bind variable name that was wrapped in single quotes.  Fourth, how did creating a single index fix the five tables in the output, yield a 50-fold performance improvement, and reduce disk I/O by 600%?
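The child cursor blindness mentioned above can be demonstrated with a query like the following (a sketch; &sql_id is a SQL*Plus substitution variable identifying the SQL statement of interest):

```sql
-- V$SQL exposes one row per child cursor, so a single SQL_ID may show
-- several different PLAN_HASH_VALUEs - rows that V$SQLAREA collapses.
SELECT
  sql_id,
  child_number,
  plan_hash_value,
  executions
FROM
  v$sql
WHERE
  sql_id = '&sql_id'
ORDER BY
  child_number;
```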
    • Page 105, how did creating a single function based index fix the five tables in the output and result in a five-fold performance improvement?
    • Page 105, the silver bullet tip to change the CURSOR_SHARING parameter to FORCE did not warn of any of the possible side-effects, which may include poor performance when cursors are shared that should not be, because uneven data distributions may cause the optimizer to generate execution plans that are inappropriate for other literal values, or bugs related to CURSOR_SHARING that either result in wrong data being returned, or instance stability problems.  It would be interesting to see how the 75% performance improvement was achieved by changing this parameter.
    • Page 106, as mentioned previously, the author’s plan9i.sql script shows something other than what is described, so the output presented in the book may or may not be useful.  Additionally, the author did not state whether the database had been online for an hour when the report was generated, or three years.  The author’s buf_blocks.sql script, mentioned in this section of the book, is a bit interesting.  The script builds a normal table T1 by selecting a count(distinct file# || block#) of the blocks in the buffer cache, joined to DBA_OBJECTS, when grouping on the OWNER, OBJECT_NAME, SUBOBJECT_NAME, and OBJECT_TYPE.  The script then joins the T1 table to DBA_SEGMENTS in order to calculate the percentage of the object’s blocks that are currently in the buffer cache for all objects with at least 11 blocks in the buffer cache.  Why, then, does the script’s output that is printed in the book show six tables with seven or fewer blocks in the buffer cache?  Additionally, if these table blocks really were accessed frequently, wouldn’t Oracle automatically keep the blocks in the buffer cache by promoting the blocks closer to the MRU end?  The author’s buf_keep_pool.sql script is also a bit interesting, as it generates SQL statements to alter all objects that have more than 80% of their blocks in the buffer cache, so that the objects will reside in the KEEP buffer cache (if not already in the KEEP buffer cache).  It seems slightly counter-intuitive to move these objects out of the DEFAULT or RECYCLE buffer cache and into what is likely a smaller KEEP buffer cache.  Additionally, this script could not have been used as described in the book because all of the tables listed in the book had less than 0.45% of their blocks in the buffer cache.
    • Page 107, technically, the 2GB per process limit on 32 bit Windows can be raised to almost 3GB (Metalink recommends lowering the maximum from 3GB) by adding the /3GB parameter to the boot.ini file (reference).
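For reference, the boot.ini change looks roughly like the following (a sketch; the ARC path and operating system description vary from server to server, and Metalink’s advice to specify less than the full 3GB should be reviewed first):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```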
    • Page 107, Figure 5.1 is described as showing the output of the author’s plan9i.sql script.  Figure 5.1 actually shows a “Top 5 Timed Events” report, possibly from Statspack.  The book states that the database was crippled by db file scattered read wait events, yet the “Top 5 Timed Events” printed in the book shows a much more sinister problem.  While 3,246 seconds were spent on the db file scattered read wait event (average wait time 0.010 seconds), 1,363 seconds were spent on the library cache load lock wait event (average wait time 0.29 seconds), and 7,146 seconds were spent on the db file sequential read wait event (average wait time 0.013 seconds).  If I were presented with the same “Top 5 Timed Events”, I probably would have focused on the time lost to the library cache load lock wait, rather than trying to create a materialized view.  It is rather amazing, considering the “Top 5 Timed Events”, that the three materialized views reduced disk I/O by more than 2,000 percent and improved database performance 30 fold.
    • Page 108, the case study that suggests implementing bitmap indexes seems to be flawed.  An AskTom thread detailed the flaws better than I am able to do so (reference).
    • Page 108, the adding freelists case study lacks sufficient detail for a person to be able to use that case study to help solve a problem experienced by the reader.  For example, the book does not describe how the author was able to examine V$SQL and discover that “virtually all” of the DML statements were inserts into the CUSTOMER_ORDER table – what in V$SQL did the author examine?  Were there no SELECT statements (those are considered DML also)?  The author stated “the top timed event was buffer busy waits and it was clear that there were enqueues on the segment header blocks for the table and its indexes.”  The author does not describe how the transition from “virtually all” DML were inserts into a single table (I wonder if the header detail and order line detail were both contained in the same CUSTOMER_ORDER table) was made to stating that the buffer busy wait problem was in fact indicating a problem with waits for the segment header blocks for the CUSTOMER_ORDER table and its indexes.  Was it just coincidence because the two seemed to be happening at the same time, and was therefore a lucky guess?  The author’s website stated that this was a database (reference), so how did the author determine that the buffer busy waits were not caused by the same action that is reported as the read by other session wait in 10g?  Did the author check V$WAITSTAT, V$SESSION_WAIT, generate a 10046 trace, or do something else?  Setting the freelists for the CUSTOMER_ORDER table and an index named CUST_PK to a value of 5 immediately saved the client $500 per minute?  Finally, what conditions would have to be present for a buffer busy wait to last 30 seconds, was someone playing with bitmap indexes (assuming that some of the sessions were also waiting in enqueue waits), and would that problem be fixed by altering the freelists?
(Edit: December 9, 2010: this $500 per minute fix is also described on page 697, but in this case the book mentions that the Oracle Database release version is in fact
    • Page 110 states, “with the Internet, there is a wealth of information about Oracle troubleshooting and some of it is good, while most of it is absolute garbage.  Every self-anointed Oracle expert is touting their own methodology for Oracle troubleshooting, and some are absolute zealots in proclaiming their approach as the ‘best’ method.”  I am having a little trouble translating these two sentences.  Is the author stating that the Oracle experts who smear themselves with perfumed oil will refuse to believe any other method is better?  Or is the author stating that some people who self-proclaimed that they are one of the best Oracle experts in the world, have a tendency to game the Internet search engines?  Either way, the two sentences should be rewritten to clarify the author’s intention.
    • Page 111, the quote printed in the book from the documentation appears to have been re-worded in the 11g documentation, however I am not sure that the quote indicates that Oracle Corporation recommends the use of the buffer cache hit ratio for much of anything.
    • Page 112, the book describes how in a well-tuned database the buffer cache hit ratio metric is useful.  Half of page 112 is devoted to showing a buffer pool advisory for an instance with a 52MB buffer cache.
    • Page 115, the script printed in the book for calculating the buffer cache hit ratio accesses SYS.V_SYSSTAT, rather than the more common view synonym V$SYSSTAT.  The printed script also does not match the buffratio.sql script from the book’s script library, where the book specifies the “db block gets”, while the script from the script library specifies “physical reads direct (lob)”; the book specifies “consistent gets”, while the script from the script library specifies “session logical reads”.  Worse yet, neither formula matches the formula printed in the Oracle Performance Tuning Guide, which is part of the official Oracle Database documentation (reference).
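For comparison, the formula in the Performance Tuning Guide can be written as follows (a sketch using the 10.2-and-later statistic names; NULLIF guards against division by zero on an idle instance):

```sql
-- 1 - (physical reads cache /
--      (consistent gets from cache + db block gets from cache))
SELECT
  1 - (pr.value / NULLIF(cg.value + bg.value, 0)) AS buffer_cache_hit_ratio
FROM
  v$sysstat pr,
  v$sysstat cg,
  v$sysstat bg
WHERE
  pr.name = 'physical reads cache'
  AND cg.name = 'consistent gets from cache'
  AND bg.name = 'db block gets from cache';
```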
    • Page 117 states, “by default, this [TIMED_STATISTICS] parameter is set to false, which disallows the collection of wait times for each wait event defined in the Oracle engine.”  This statement is incorrect as of the release of Oracle 10.1.  The TIMED_STATISTICS parameter defaults to TRUE when the STATISTICS_LEVEL parameter is set to either the default value of TYPICAL, or the value of ALL (reference).
    • Pages 117-118 list several wait event names that the author states “provides no real value to a DBA who is trying to figure out where a database bottleneck exists.”  The list includes: lock element cleanup, SQL*Net message from client, SQL*Net more data to client, and PL/SQL lock timer.  The first of these wait events does not exist in an Oracle 10.2 database, and might have been deprecated around the time of Oracle 8i. The SQL*Net message from client wait event, at the session level, will tell the DBA when the bottleneck is not in the database, and that could be extremely valuable information.  The SQL*Net more data to client wait event was highlighted on page 28 of this book as identifying a “Network Constrained Database”. (reference)
    • Pages 118-119, while the script does execute, it probably should be updated to use analytic functions to eliminate the need to join two inline views that both query SYS.V_SYSTEM_EVENT.  Additionally, it would probably be helpful if the script also eliminated most of the idle wait events introduced since Oracle 10.1, for example: jobq slave wait, DIAG idle wait, shared server idle wait, reliable message, etc.  Additionally, the script in the script library will not remove most of the idle wait events that are listed in the script because a blank space appears before the closing single quote for each of those wait events: ‘pmon timer ‘, ‘rdbms ipc message ‘, ‘smon timer ‘, etc.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).
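If the intent of the two inline views is to express each wait event’s time as a percentage of the total, the analytic rewrite might look roughly like this (a sketch; the excluded idle event list is abbreviated, and the event names are spelled without trailing spaces):

```sql
-- RATIO_TO_REPORT computes each event's share of the total time in a
-- single pass over V$SYSTEM_EVENT, with no self-join required.
SELECT
  event,
  total_waits,
  time_waited,
  ROUND(RATIO_TO_REPORT(time_waited) OVER () * 100, 2) AS pct_of_total
FROM
  v$system_event
WHERE
  event NOT IN ('pmon timer', 'rdbms ipc message', 'smon timer',
                'jobq slave wait', 'DIAG idle wait',
                'SQL*Net message from client')
ORDER BY
  time_waited DESC;
```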
    • Pages 120-121, the sesswaits.sql script does not appear to be in the script library for the book.  The script in the book has an obvious typo near the beginning: “a.a.total_waits,”.  Like some of the other SQL statements, this SQL statement does not use the more common (those that are in the Oracle documentation) synonym names, but instead lists row source names such as SYS.V_$SESSION_EVENT.  A minor improvement would be to use meaningful aliases for row sources, rather than generic aliases such as A, B, and C.  This script attempts to remove some potentially helpful idle wait events, while not excluding some of the more recently introduced idle wait events.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).
    • Pages 121-122, the csesswaits.sql script does not appear to be in the script library for the book.  This script attempts to remove some potentially helpful idle wait events, while not excluding some of the more recently introduced idle wait events.  The book did almost nothing with the script, other than to display the results in some sort of graphical query tool (most of the previous query results used SQL*Plus to show the results).  It is a bit disappointing that the author made little attempt to explain the columns returned from V$SESSION_WAIT (sys.v_$session_wait in the script).
    • Pages 122-123, the objwaits.sql script does not appear to be in the script library for the book.  The book states that the script may be used if enqueue waits are present in V$SESSION_WAIT to determine which object and datafile are “the holdup.”  The objwaits.sql script is not specific to enqueue type waits, and in some cases the ROW_WAIT_OBJ# from V$SESSION may not contain information about the object (showing a value of -1), while V$LOCK might expose the OBJECT_ID in the ID2 column during an enqueue wait.
    • Page 124, sentence uses the word “undo” rather than “undue”, “… to quickly pinpoint resource-intensive SQL code that is causing undo strain on the database.”
    • Pages 124-125 state that the number one Oracle Performance Myth is that, “eighty percent of a database’s overall performance is derived from the code that is written against it.”  An Internet search for this sentence finds essentially the same set of supporting paragraphs in a couple of other books (“High-Performance SQL Server DBA”, “Oracle Performance Troubleshooting with Dictionary Internals SQL & Tuning Scripts”).  Oddly, the publisher’s website’s description for the book “Oracle Performance Troubleshooting with Dictionary Internals SQL & Tuning Scripts Second Edition” uses essentially the same sentence as justification for its chapter 8 contents (reference reference2).
    • Pages 126-134 appear to be a copy and paste of pages 406-421 of the book “Oracle Tuning Power Scripts” with a small number of modifications.  It might be interesting to note that the script outputs in this section of the book also use a graphical screen capture from some program, rather than the raw SQL*Plus style output seen in the earlier portions of the book.
    • Page 126 recommends that temporary tablespaces be set to autoextend.  Run-away queries with missing join conditions can cause a lot of problems if temporary tablespaces are in fact set to autoextend with the maximum size set to unlimited (32GB for an 8KB block size database when a single file is used).
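If autoextend is used at all for the temporary tablespace, an explicit cap (rather than MAXSIZE UNLIMITED) limits the damage a runaway query can do (a sketch; the tempfile path and sizes are hypothetical):

```sql
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/temp01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```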
    • Page 126-127, the spacesum.sql script does not appear to be in the script library for the book.  How the script operates is not well described, and the top half of the UNION ALL returns a VARCHAR2 in the last column position, while the bottom half of the UNION ALL returns a NUMBER in the last column position.  It is not clear whether or not the query works as intended.
    • Page 128, the datafileae.sql script does not appear to be in the script library for the book.  This query joins SYS.DBA_DATA_FILES to SYS.FILEXT$ to determine which datafiles are configured to extend automatically.  It appears that the join to SYS.FILEXT$ is unnecessary as DBA_DATA_FILES contains the column AUTOEXTENSIBLE.
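Without the join to SYS.FILEXT$, the same information can be obtained from DBA_DATA_FILES alone (a sketch):

```sql
-- DBA_DATA_FILES exposes the autoextend attributes directly.
SELECT
  file_name,
  tablespace_name,
  autoextensible,
  increment_by,
  maxbytes
FROM
  dba_data_files
WHERE
  autoextensible = 'YES'
ORDER BY
  tablespace_name,
  file_name;
```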
    • Page 129, the tsfrag.sql script does not appear to be in the script library for the book.  It is possibly interesting that the script calculates the square root of the square root of the COUNT of a column in DBA_FREE_SPACE when grouped by the tablespace name.
    • Page 130, the dffrag.sql script does not appear to be in the script library for the book.  This query joins SYS.DBA_DATA_FILES to SYS.FILEXT$ to determine which datafiles are configured to extend automatically.  It appears that the join to SYS.FILEXT$ is unnecessary as DBA_DATA_FILES contains the column AUTOEXTENSIBLE.
    • Page 131, the odd formatting, where a single sentence becomes a paragraph, has essentially changed the meaning of the second paragraph on the page such that it could be misunderstood.  The fragmentation terms honeycomb and bubble were used, but never defined.
    • Page 131, the tsmap.sql script does not appear to be in the script library for the book.  One might be curious to determine if any of these scripts for detecting fragmentation that examine extents in datafiles, if they work as described, are useful since the introduction of locally managed tablespaces in Oracle Database 8i roughly a decade ago (reference).
    • Page 132 states, “as a result of repeated insert and delete activity, tables can become internally fragmented and contain a lot of wasted space.”  I suspect that this is a confusing sentence; it seems that the book implies that when a row is deleted, the space previously occupied by the row cannot be reused – this is simply not true.  The book continues with, “In the same way, indexes can become fragmented so that their depth reaches unacceptable levels.”  The second quote also seems to imply that when an index entry is deleted, the space cannot be reused.  Additionally, is this the definition of fragmented indexes that the author uses occasionally?
    • Pages 132-133, the maxext.sql script does not appear to be in the script library for the book.  This script appears to be very similar to one that appears on page 144 in the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (even the graphical version of the query output matches), and for some reason the SQL statement excludes the SEGMENT_TYPE of CACHE, but not any of the other segment types (reference, reference2).
    • Pages 129-144 appear to be copied and pasted, with minor changes, from pages 131-170 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (even the graphical version of the query output matches; reference).
    • Page 133, the objdef.sql script does not appear to be in the script library for the book.
    • Page 134, potentially confusing sentence, “Sometimes, however, this is not the case, and the table suddenly finds itself containing chained or migrated rows, which are rows that span more than one data block.”
    • Pages 134-136, the tabreorg.sql script does not appear to be in the script library for the book.  This script uses the RULE hint, which according to the Oracle documentation is no longer supported as of Oracle Database 10.1 (reference).
    • Pages 137-138, the tabreorg.sql script does not appear to be in the script library for the book.  This script uses the RULE hint, which according to the Oracle documentation is no longer supported as of Oracle Database 10.1.
    • Page 138, a sentence seems to imply that a high clustering factor, or having the BLEVEL greater than four may be a reason to either reorganize (rebuild) an index or drop it.  Drop – maybe, rebuild – no.  Changing the clustering factor for an index requires changing the order of the rows in the table, although the clustering factor calculation can be thrown off if the statistics collection sampling percent is too low.
    • Page 140, the sgasize.sql script does not appear to be in the script library for the book.
    • Pages 143-144, the memsnap.sql script does not appear to be in the script library for the book.
    • Page 145, the poolhit.sql script does not appear to be in the script library for the book.  Page 145 also states, “However, when properly computed, the buffer cache hit ratio is an excellent indicator…”  The formula provided for calculating the buffer cache hit ratio appears to be incorrect, according to the formula that appears in the 11g R2 Performance Tuning Guide from the Oracle documentation library.
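For comparison, the documentation derives the ratio from the “physical reads cache”, “consistent gets from cache”, and “db block gets from cache” statistics in V$SYSSTAT.  A minimal sketch of that calculation, using invented statistic values in place of a live query:

```python
# Buffer cache hit ratio per the 11g R2 Performance Tuning Guide formula.
# The three arguments correspond to V$SYSSTAT statistic values; the
# numbers passed below are hypothetical, not from a real instance.
def buffer_cache_hit_ratio(physical_reads_cache,
                           consistent_gets_from_cache,
                           db_block_gets_from_cache):
    logical_reads = consistent_gets_from_cache + db_block_gets_from_cache
    return 1 - (physical_reads_cache / logical_reads)

ratio = buffer_cache_hit_ratio(physical_reads_cache=50_000,
                               consistent_gets_from_cache=900_000,
                               db_block_gets_from_cache=100_000)
print(round(ratio, 3))  # 0.95
```

Even when computed this way, the ratio is a system-wide cumulative figure, which limits how much it can say about any specific performance problem.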
    • Pages 145-146, the sesshitrate.sql script does not appear to be in the script library for the book.
    • Page 147, the sqlhitrate.sql script does not appear to be in the script library for the book.  Considering that it is possible for a SQL statement to have more than one execution plan, if this SQL statement is producing useful information it probably would have been a better idea to query V$SQL rather than V$SQLAREA.
    • Pages 148-156 appear to be copied and pasted, with minor changes, from pages 182-196 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (reference).
    • Page 148, the explain plan format pre-dates the release of Oracle 10.1, and shows the use of the OPTIMIZER_MODE value CHOOSE, which was deprecated with the release of Oracle 10.1.  The book calls a listing “runtime statistics”, but does not explain how the statistics were generated (likely the output of SQL*Plus’ AUTOTRACE functionality).
    • Page 149 states, “The amount of logical reads has been cut in half simply by using the new 16K tablespace and accompanying 16K data cache.”  It has been demonstrated to the book author multiple times why this test case is flawed (reference, reference2, reference3, reference4).
    • Page 149, the cacheobjcnt.sql script does not appear to be in the script library for the book.
    • Page 150, the buffutl.sql script does not appear to be in the script library for the book.  The query joins SYS.X$BH to SYS.TS$ four times, rather than performing the join a single time with either a DECODE or CASE statement to differentiate the different values for the STATE column.
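To illustrate the alternative, conditional aggregation classifies each row in a single pass rather than re-reading the source once per STATE value.  A hypothetical Python sketch of the idea (the buffer data and state names are invented stand-ins for X$BH rows):

```python
from collections import Counter

# Invented (tablespace, state) pairs standing in for X$BH rows.
buffers = [("USERS", 1), ("USERS", 3), ("SYSTEM", 1), ("USERS", 1),
           ("SYSTEM", 0), ("USERS", 3)]

# The DECODE/CASE equivalent: translate the STATE code and count per
# (tablespace, state) combination in one pass over the source.
state_names = {0: "FREE", 1: "CUR", 2: "CR", 3: "READING"}
counts = Counter((ts, state_names.get(state, "OTHER"))
                 for ts, state in buffers)

print(counts[("USERS", "CUR")])    # 2
print(counts[("SYSTEM", "FREE")])  # 1
```

In SQL terms, a single scan with a CASE expression inside an aggregate replaces the four separate joins of SYS.X$BH to SYS.TS$.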
    • Page 152, the libdet.sql script does not appear to be in the script library for the book.
    • Page 153, the libwait.sql script does not appear to be in the script library for the book.
    • Page 153, the libobj.sql script does not appear to be in the script library for the book.
    • Page 155, the dictdet.sql script does not appear to be in the script library for the book.
    • Pages 156-157, the text at the bottom portion of page 156 through the middle of page 157 appears to be copied and pasted from page 306 of the book “Oracle Tuning Power Scripts” (reference).
    • Pages 157-158, the bottom half of page 157 through the top half of page 158 are supposed to demonstrate the wait events and statistics that will be present when the LOG_BUFFER parameter is undersized.  As of Oracle Database 10.2 the LOG_BUFFER parameter is automatically set to a value that is slightly smaller than the granule size, so leaving that parameter unspecified is not a significant risk starting with 10.2.  Page 158 has been addressed by other contributors on the Oracle OTN forums (reference, reference2).
    • Pages 158-195, the bottom of page 158 through page 195 appears to be copied and pasted (with minor modifications) from pages 197-275 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts” (reference).
    • Page 159, the memhog.sql script does not appear to be in the script library for the book.
    • Page 160, the first paragraph on this page probably should have appeared on page 147, shortly after the script mentioned in the paragraph appeared in the book, rather than in a section of the book that describes session memory usage.
    • Page 161, which appears to be describing Oracle Database 8i and earlier, states, “if the memory sort ratio falls below 90%, increasing the parameters devoted to memory sorts, sort_area_size and sort_area_retained_size, may help.”  The first question someone might ask is why 90%, and not 99% or 50%?  It would probably be a better idea to determine which queries are causing the sorts to disk.  Maybe increasing the SORT_AREA_SIZE to 1,000 times its original value still will not allow the memory sort ratio (which the book’s script calculates since the last bounce of the database) to reach 90% – then what?  The second question someone might ask is what do these parameters do – if the reader sets the parameters to 500M to try to get the memory sort ratio to 90%, what kinds of problems might happen?
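A sketch of the ratio apparently being computed, with invented values standing in for the “sorts (memory)” and “sorts (disk)” statistics accumulated since instance startup:

```python
# "Memory sort ratio" as the book's script apparently computes it, from
# the cumulative "sorts (memory)" and "sorts (disk)" statistics in
# V$SYSSTAT; the values below are invented.
def memory_sort_ratio(sorts_memory, sorts_disk):
    return sorts_memory / (sorts_memory + sorts_disk)

# Since-startup totals respond slowly: even if every sort after a
# parameter change completes in memory, the old disk sorts keep the
# cumulative ratio below the 90% target for a long time.
print(memory_sort_ratio(80_000, 20_000))  # 0.8
```

This is one reason a since-startup ratio is a poor feedback mechanism for tuning the sort parameters.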
    • Page 161 seems to indicate that the PGA_AGGREGATE_TARGET will not be used for shared server connections.  While that was correct for Oracle Database 9i R1 and 9i R2, it is not the case starting with Oracle Database 10g R1 (reference).
    • Page 161 recommends that on a Unix server the PGA_AGGREGATE_TARGET be set to 90% of the memory remaining in the server after the database instance starts.  There are a variety of reasons why this is a bad idea; the fact that the PGA_AGGREGATE_TARGET is not a hard upper limit for PGA memory consumption is just one of those reasons.
    • Page 162, the term “one-pass memory” is mentioned but never defined.  The use of the term seems to imply that a workarea execution is either optimal or one-pass – are multi-pass workarea executions never a possibility?
    • Page 162, the bufobjwaits.sql script does not appear to be in the script library for the book.  Additionally, the script does not appear to be RAC aware, as the statistic “gc buffer busy” was ignored and the query did not select from GV$SEGMENT_STATISTICS.
    • Page 163, the latchdet.sql script does not appear to be in the script library for the book.  It is debatable whether the author’s specification that a 1% latch miss ratio indicates that further investigation is required is valid – a better approach would probably start by seeing if a specific type of latching is contributing a significant amount of time to the wait events between a known start and end time.
    • Page 164, the currlwaits.sql script does not appear to be in the script library for the book.  This script uses SYS.V_$ view names rather than the officially documented synonym names, and uses the aliases “a”, “b”, and “c” rather than meaningful aliases for each of the views.  The query also contains an unnecessary subquery that queries SYS.V_$EVENT_NAME.
    • Page 165, while the globiostats.sql script is found in the script library, the statistic values that are requested seem to omit those statistics that are introduced in Oracle Database 10.1, and provide statistics values since the last open of the database.  I suspect that nightly backups, statistics collection, or scheduled index rebuilds might distort the statistics retrieved by the script.  Nothing is done with the script in the book – the book immediately leads into another script.
    • Page 166, while the script syswaits.sql is in the script library for the book, the script printed in the book differs from what is found in the script library.  Additionally, the script in the script library includes an extra space before the closing single quote that follows each wait event name that should be excluded from the report.
    • Page 167, the advice about the “latch free” and “enqueue” waits predates the release of Oracle Database 10.1.  The advice about the “db file scattered read” wait event could cause unnecessary concern – that wait event should be one of the top wait events in a healthy database instance.
    • Page 168, the globaccpatt.sql script does not appear to be in the script library for the book.  The book makes no attempt to interpret the statistic values returned by the script, nor the meaning of the statistics.  The book does state that “large-table full-table scans can indicate sub-optimal SQL and/or missing indexes, cases where the SQL needlessly reads all blocks in the table.”  The book does not state which of the statistics indicate a “large-table full-table scan” – is it the “table scans (long tables)” statistic?  It is also quite possible that full table scans of long (large) tables may just indicate that everything is working as it should due to the data requirements, indexes with poor clustering factors, or maybe the database is simply a data warehouse.  For clarity, it is not the SQL that needlessly reads all of the blocks in the table.
    • Page 168 contains an advertisement for the author’s ION program.
    • Page 169 states that “if standard DBA practices have been followed and indexes are placed in their own tablespace, the I/O statistics for that tablespace can be examined to determine if the indexes are actually being used.”  Is it really a standard DBA practice to separate indexes from their tables by placing each into a different tablespace?  How would seeing the I/O statistics for a tablespace (more correctly the datafiles for the tablespace) really indicate if an index were used – an index may be used by Oracle Database’s optimizer without actually being used for query execution.
    • Page 169, the fileio.sql script found in the book differs significantly from the script by the same name found in the script library for the book.  The script found in the book seems to unnecessarily join several objects owned by the SYS user when a much simpler query could be written using the documented synonym names V$FILESTAT, V$TABLESPACE, V$DATAFILE, and V$TEMPFILE.
    • Page 170, the term “hub tables” is used without defining the term.
    • Pages 170-171, the purpose of the toptables.sql script does not seem to be well defined, and the script might contain one or more errors.  The script returned no rows when tested.
    • Page 172, the largescan.sql script, identified as largescan9i.sql in the script library, fails to recognize that a table referenced in an execution plan and identified as experiencing a full table scan may actually be full table scanned many times when the SQL statement is executed once, even if the table is listed only once in the execution plan.
    • Page 172, a “large table” is defined within the script as a table that is more than 1MB in size.  If we associate this definition with the term “large-table” found on page 168, the reader might be a bit confused.
    • Page 173 states “one rule of thumb is that if any session is currently consuming 50% or more of the total I/O, that session and its SQL need to be investigated further to determine what activity it is engaged in.”  Why 50%?  Would it make a difference if there are 2 or 2,000 sessions connected?  Would it matter how many sessions are actively executing SQL statements?  What period would the DBA examine to see if a session meets the 50% criteria?  The provided script, physpctio.sql, bases the time period on the length of time that the session has been connected to the database.
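A small sketch (with invented counters) of why time scoping matters here: percentages computed from deltas over a known interval can rank sessions very differently from percentages computed from since-logon totals:

```python
# Two invented snapshots of a per-session physical I/O counter, taken an
# interval apart.  Ranking by deltas tells a very different story than
# ranking by the cumulative since-logon values the script relies on.
snapshot_t0 = {"two_month_old_session": 1_000_000, "new_session": 0}
snapshot_t1 = {"two_month_old_session": 1_000_500, "new_session": 49_500}

deltas = {sid: snapshot_t1[sid] - snapshot_t0[sid] for sid in snapshot_t0}
interval_total = sum(deltas.values())
interval_pct = {sid: 100 * d / interval_total for sid, d in deltas.items()}

# The new session did 99% of the I/O during the interval...
print(round(interval_pct["new_session"]))  # 99
# ...yet the since-logon totals attribute 95% to the old session.
cumulative_total = sum(snapshot_t1.values())
print(round(100 * snapshot_t1["two_month_old_session"] / cumulative_total))  # 95
```

A delta-based report over a fixed interval is what the rule of thumb would actually need in order to be meaningful.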
    • Page 174, the totpctio.sql script is nearly identical to the physpctio.sql script from the previous page, and suffers from the same time scoping problem.
    • Pages 175-176, the topiousers.sql script is certainly long, and aside from renaming some of the statistic names from SYS.V_$STATNAME, really does not introduce any new information that was not provided by the earlier scripts.  The script has time scoping problems, and no effort is made to explain the meaning of the statistics returned by the script.
    • Page 177, the bgact.sql script is apparently not found in the book’s script library.  The script is described as “providing a handle on DBWR and LGWR activity,” yet the script retrieves the “dirty buffers inspected” foreground statistic and a “write requests” statistic that does not seem to be a valid statistic name (reference).
    • Page 178 states, “seeing non-zero values for the DBWR summed dirty queue length typically indicates that buffers are being left in the write queue after a write request.”  The “DBWR summed dirty queue length” statistic is apparently not an Oracle Database statistic, so seeing non-zero values might mean that something else is wrong.
    • Page 178 states, “non-zero values for the redo log space wait requests and redo log space wait time statistics could indicate the log buffer setting is too low.”  There is no statistic named “redo log space wait requests” – the actual statistic name is “redo log space requests”.  These statistics are not associated with the redo log buffer size, but instead the size of the redo log files in relation to the amount of redo information that needs to be written to the log files (reference).
    • Page 178, the archhist.sql script does not appear to be in the script library for the book.  The script summarizes by day, rather than by a smaller time period such as an hour.  This might provide misleading information: if on average 4 redo logs are archived per hour, but during busy periods of the day redo logs are archived at a rate of one per minute, the average over 24 hours will hide important information about a problem.
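A quick sketch with invented numbers showing how the daily average hides the burst:

```python
# Invented hourly counts of archived redo logs: 23 quiet hours plus one
# busy hour that archives one log per minute.
hourly_archives = [4] * 23 + [60]

daily_average = sum(hourly_archives) / 24
print(round(daily_average, 1))  # 6.3 - summarized by day, it looks harmless
print(max(hourly_archives))     # 60 - the hourly view exposes the burst
```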
    • Page 179, the rolldet.sql script does not appear to be in the script library for the book.  The script does not permit time scoping, uses the non-documented object names in the SYS schema rather than the public synonyms, and uses generic alias names for the objects.
    • Page 180, a sentence which makes no sense, “an overall contention ratio of 1% or higher is an indicator of too few rollbacks as well.”  First, the sentence is missing at least one word.  Second, the method to calculate the overall contention ratio is not described.  Third, why 1% and not 10% or some other number?
    • Page 181, the totuserspace.sql script does not appear to be in the script library for the book.
    • Page 182, the sortusage.sql script does not appear to be in the script library for the book.
    • Page 182, the sortdet.sql script does not appear to be in the script library for the book.
    • Pages 183-185, the topsess.sql script does not appear to be in the script library for the book.  After using three pages for the script, it probably would have been wise to use more than a small paragraph to explain what to do with the output of the script.
    • Page 186 states, “a rule of thumb is that no session should consume more than 25-50% of the overall resources in a particular category.”  Why 25% to 50%?  It would seem that the percentage might make a difference if there was a single session connected or 1,000 sessions.
    • Pages 186-187, the topsessdet.sql script does not appear to be in the script library for the book.  The script’s output is cumulative since the session connected, so it could easily generate invalid results if one session had been connected for 2 months, while a session consuming significant resources has been connected for 30 minutes.  The query does not use the time model statistics, so it is possible for the statistics displayed to be inaccurate for actively executing sessions – for example if a session has been on the CPU for the last 60 minutes performing logical IO.
    • Pages 188-189, the userscans.sql script does not appear to be in the script library for the book.  The script is interesting in that it retrieves the “table scans (long tables)” statistic for each session.  According to the Oracle documentation, a table is considered long if the table is not cached and its high water mark is more than 5 blocks, and the statistic is impacted by the CACHE and NOCACHE hints – based on that description, the query is not behaving as described in the book (reference).
    • Page 189, the scatwait.sql script does not appear to be in the script library for the book.  The query is not time scoped, so a session that has been connected for months may unfairly be sorted to the top of the output.
    • Page 189, the book states that the “db file scattered read” wait event “is generally thought to be an indicator of table scan activity.”  It probably would have been wise to state that the “db file scattered read” wait event also appears during index fast full scans, and in some cases index range scans.  As of Oracle Database 11.1 it is quite common for the “direct path read” wait event to appear when full table scans are performed, but the book made no mention of that fact.
    • Page 190, the large_scanusers.sql script does not appear to be in the script library for the book.  The script suffers from the same errors as the scripts earlier in the book when it attempts to determine how many full table scans users have performed based on querying SYS.V_$SQL_PLAN – a table that appears a single time in an execution plan may be full table scanned multiple times during a single execution.
    • Page 190 states that the “table scans (long tables)” statistic indicates full table scans of tables that are five or more blocks in size.  That is almost what the Oracle 10.2 documentation states, but that raises a question regarding the script on pages 188 and 189 – is there a problem with performing a couple of full table scans on a table with 10 blocks under its high water mark, and is that as severe as performing a couple of full table scans of tables with 10 million blocks under their high water marks?
    • Page 191, the lockcnt.sql script does not appear to be in the script library for the book.  The script appears to have an incomplete join to the SYS.V_$LOCKED_OBJECT view.
    • Page 192, the curriosql.sql script does not appear to be in the script library for the book.  This SQL statement has time scoping problems, directly comparing a session that has been connected to the database for months with one that has been connected for a couple of minutes.
    • Page 193, the cartsession.sql script does not appear to be in the script library for the book.  The script joins SYS.V_$SQL to SYS.V_$SQL_PLAN using just the HASH_VALUE, and there are certainly risks in doing that.  Not all Cartesian joins are bad – the optimizer may (frequently) use a Cartesian join when the statistics for one of the row sources indicates that a single row, or a very small number of rows, will be joined to the second row source.
    • Pages 193-194, the cartsql.sql script does not appear to be in the script library for the book.
    • Page 195, the list of optimizer modes probably should indicate that the FIRST_ROWS optimizer mode has been deprecated for quite a while and/or not list it as one of the two optimizer modes.  The book states of the first rows optimizer mode, “first rows: Fetches the desired rows with a minimum amount of block touches (favors indexes).”  This, of course, raises the question, what if the desired number of rows is all or nearly all of the rows – is this still a true statement?  Do the non-deprecated FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, and FIRST_ROWS_1000 also favor indexes?  This section of the book does not mention those optimizer modes.  This page also incorrectly states that the ALL_ROWS optimizer mode favors full scans – such a statement is misleading at best.
    • Pages 196-197, the topsql.sql script does not appear to be in the script library for the book.  It appears that the script is designed for Oracle Database 9i (the query retrieves HASH_VALUE rather than SQL_ID), queries SYS.V_$SQLAREA rather than the more common (and documented) V$SQLAREA, and unnecessarily groups all child cursors of the same SQL statement together (it should have queried V$SQL instead), which can be quite misleading, especially in 11.1 with the introduction of adaptive cursor sharing and 11.2 with the introduction of cardinality feedback (two items that are apparently not mentioned in this book).  Additionally, the starting time point of the query is the database open time, rather than a more meaningful examination of the delta statistics for a shorter time period such as an hour or a day – a query that is executed once a month that causes a lot of physical reads might not be a problem, while a query that executed many times in the last hour might be much more of a cause for concern.
    • Page 198 appears to be copied and pasted (with minor modifications) from pages 279-280 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts”.
    • Page 198, the book states, “the first query in the result set has accumulated almost five seconds of total execution time and runs for about two and a half seconds each time it is executed.”  While the query might have an *average* execution time of 2.5 seconds, it very well could have required 4.99 seconds for the first execution and 0.01 seconds for the second execution.  Considering that the data the author is commenting on is drawn from V$SQLAREA, it very well could be the case that the first execution and the second execution have entirely different execution plans.  The effects of block caching could also mean that the time for each execution is not nearly the same as the average execution time.
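The averaging pitfall in miniature (invented timings):

```python
# Invented per-execution timings: the V$SQLAREA-style average
# (ELAPSED_TIME / EXECUTIONS) says little about how long any individual
# execution actually took.
elapsed_per_execution = [4.99, 0.01]  # seconds, two executions
average = sum(elapsed_per_execution) / len(elapsed_per_execution)
print(average)  # 2.5 - yet neither execution took anywhere near 2.5 seconds
```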
    • Page 198 states, “this can be accomplished by noting the hash_value for each SQL statement and then issuing the fullsql.sql script to obtain the full SQL statement.”  This statement causes confusion because the fullsql.sql script printed in the book appears to request the SQL_ID, and not the HASH_VALUE as described, which was returned by the earlier SQL statement.
    • Pages 199-201 appear to be copied and pasted (with minor modifications) from pages 347-350 of the book “Oracle Tuning Power Scripts” (reference).
    • Page 199, the cartcount.sql script does not appear to be in the script library for the book.  Not all Cartesian joins are bad – the optimizer may (frequently) use a Cartesian join when the statistics for one of the row sources indicates that a single row, or a very small number of rows, will be joined to the second row source.
    • Page 199, the same cartsql.sql script that appeared on pages 193-194 is pasted on this page also.
    • Page 200, the tabscan.sql script does not appear to be in the script library for the book.  
    • Page 200 appears to be copied and pasted (with minor modifications) from page 284 of the book “Oracle Performance Troubleshooting: With Dictionary Internals SQL & Tuning Scripts”.
    • Pages 204-205 appear to be copied and pasted (with minor modifications) from pages 61-64 of the book “Oracle9i High Performance Tuning with Statspack” (reference).
    • Pages 213-217 appear to be copied and pasted (with minor modifications) from pages 158-162 of the book “Oracle Tuning Power Scripts” (reference).
    • Pages 224-225 appear to be copied and pasted (with minor modifications) from pages 147-149 of the book “Oracle Tuning Power Scripts” (reference).
    • Page 226 appears to be copied and pasted (with minor modifications) from page 146 of the book “Oracle Tuning Power Scripts”.
    • Page 229 appears to be copied and pasted (with minor modifications) from page 139 of the book “Easy Oracle Automation: Oracle 10g Automatic Storage, Memory and Diagnostic Features” (reference).
    • Pages 229-234 appear to be copied and pasted (with minor modifications) from pages 150-160 of the book “Oracle Tuning Power Scripts” (reference).

    Jumping to mostly random pages:

    • Page 235 appears to be copied and pasted (with minor modifications) from pages 252-253 of the book “Oracle Tuning Power Scripts” (reference).
    • Pages 243-247 appear to be copied and pasted (with minor modifications) from pages 112-115 of the book “Easy Oracle Automation” (reference).
    • Page 249 appears to be copied and pasted (with minor modifications) from pages 47-48 of the book “Oracle 9i High-Performance Tuning with Statspack” (reference).
    • Pages 250, 309, 311, and 318 contain an advertisement for the author’s ION product.
    • Page 280 states, “Oracle does not run in a vacuum, and it’s critical to search outside the box and see what is happening with your CPU, RAM, network, and disk I/O subsystems.”  While this quote might raise a couple of questions, I would like to know why Oracle (assuming Database) was not designed to function correctly inside a vacuum, and why would the server’s CPU be outside the box?
    • Pages 313-315 are nearly identical word for word with Metalink (MOS) Doc ID 884046.1 “Understand each field of AWR” as well as pages 22 through 25 of a document that is marked as copyright 2010 by IBM Corp. (reference).
    • Page 325 states, “The reason that CPU drives to 10% utilization is because the UNIX internal dispatchers will always attempt to keep the CPU’s as busy as possible. This maximizes task throughput, but it can be misleading for a neophyte. Remember, it is not a cause for concern when the user + system CPU values approach 100 percent.”  Why 10%?  Page 325 of the book makes the same errors as are found on page 25 of the book (reference).
    • Page 355 provides no real help with decoding the time model statistics, not even a suggestion to arrange the statistics into a logical hierarchy.  All that is stated is, “in the sample output of the AWR Time Model Statistics Report shown above, it can be seen that the system spends the most processing time on actual SQL execution but not on parsing.  This is very good for a production system.”  This is at least the second case in the book where the time model statistics were introduced and no information was provided to indicate what to do with the statistics.
    • Pages 355-356, other than telling the reader that the operating system statistics are important, and providing what appears to be output from Oracle Database 10.1, the book does not indicate what to do with the statistics.
    • Pages 516-519 appear to be copied and pasted (with minor modifications) from pages 212-216 of the book “Oracle 9i High-Performance Tuning with Statspack” (reference).
    • Page 520 states, “for example, tuning an SQL statement to remove full-table scans makes the query run faster because of the direct reduction in the amount of data blocks that are read from disk.”  This is simply too broad and inaccurate a statement to be true, as it seems to imply that all full table scans are automatically worse than an index access path.  It can be quite easily demonstrated that even when selecting 0.06% of a table, a full table scan may be significantly faster than an index access path to the same data (reference).
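A back-of-the-envelope sketch (all numbers invented) of why the full scan can win even at very low selectivity: the scan benefits from large multiblock reads, while a poorly clustered index may require close to one table block visit per row returned:

```python
# Invented figures: a 100,000 block table, a predicate returning ~0.06%
# of 10,000,000 rows, and 1MB multiblock reads of 8KB blocks.
table_blocks = 100_000
rows_selected = 6_000            # ~0.06% of 10,000,000 rows
multiblock_read_count = 128      # 1MB reads / 8KB block size

full_scan_reads = table_blocks / multiblock_read_count  # ~781 read requests
index_table_visits = rows_selected  # poor clustering: ~1 block visit per row

print(full_scan_reads < index_table_visits)  # True
```

The clustering factor of the index, not just the percentage of rows selected, decides which access path reads fewer blocks.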
    • Page 521 includes a statement that does not seem to make sense, “Prior to Oracle, Oracle Professionals noticed that by moving the entire database to a larger blocksize, they reduce disk I/O and improve performance of the entire system.”  Were there Oracle DBAs before Oracle Corporation was formed, or is this sentence stating something else?
    • Page 521 states, “there is no question that a large blocksize for indexes is going to reduce I/O, and therefore improve the overall performance of the entire database.”  There have been several well written articles that question whether using a larger block size for indexes will either reduce I/O or improve overall performance.  For example, what happens in a multi-user database when more index entries are packed into a larger index leaf block and multiple sessions concurrently insert into the table to which the index belongs?  This is essentially a repeat of the same logic error as is found on page 149 of the book (reference).
    • Page 522 states, “Oracle has codified the benefits of different blocksizes, and the Oracle 11g Performance Tuning Guide notes that multiple block sizes are indeed beneficial…”  The author then quoted the portion of the Oracle Database documentation which describes the KEEP and RECYCLE buffer pools.  The KEEP and RECYCLE buffer pools can only use the same block size as is the default block size for the database – the quote from the documentation does not describe the benefits of different block sizes.  I pointed this error out to the author in the Oracle OTN forums in 2008 after a couple other people previously mentioned the same error (reference).
    • Page 522 states, “for example, insert-intensive databases will perform less write I/O (via the DBWR process) with larger block sizes.  This is because more logical inserts can take place within the data buffer before the block becomes full and requires writing it back to disk.”  This is an extremely weak argument.  DBWR batches block writes, it does not write a block to disk as soon as it becomes full.  Second, what kinds of problems may happen when multiple sessions are concurrently performing the inserts into leaf blocks?  Third, what about all of the bugs that are related to having multiple block sizes in a single database such as bug ID 4940513 (undo related), bug ID 5496041 (index corruption), and doc ID 844497.1 (RMAN skips non-default block size tablespaces).
    • Pages 523-528, the author attempts to argue for the use of multiple block sizes in a single database using arguments that were discussed in several forum threads.  A lot of convincing evidence was provided in those threads that strongly recommended against the practice that is suggested by the author.
    • Page 529, Oracle Database 10.2 and above also considers the value of the SESSIONS parameter when automatically setting the DB_FILE_MULTIBLOCK_READ_COUNT parameter value (reference).
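A rough sketch of the automatic calculation as I understand it from the documentation (hedged – verify against the Performance Tuning Guide): the automatic value targets the 1MB maximum I/O size, capped by the buffer cache size divided among the SESSIONS setting:

```python
# Assumed form of the automatic DB_FILE_MULTIBLOCK_READ_COUNT default
# (an approximation of the documented behavior, not an official formula):
# target a 1MB I/O size, capped by cache size shared across sessions.
def auto_mbrc(db_block_size, db_cache_size, sessions):
    return min(1_048_576 // db_block_size,
               db_cache_size // (sessions * db_block_size))

# 1GB cache, 8KB blocks, 500 sessions: the 1MB I/O limit governs.
print(auto_mbrc(db_block_size=8192, db_cache_size=2**30, sessions=500))   # 128
# Same cache with 5,000 sessions: the per-session cap lowers the value.
print(auto_mbrc(db_block_size=8192, db_cache_size=2**30, sessions=5000))  # 26
```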
    • Page 535, the same flawed comparison of indexes in an 8KB and 16KB block size which is found on page 149 is repeated.
    • Page 537 describes a case where a customer was using a 16KB block size database that experienced performance problems where updating roughly 850,000 rows in a table required roughly 45 minutes, and that time decreased to just 2.2 minutes by recreating the database with a 4KB block size.  This magic performance increase was discussed on the Oracle OTN forums and several blogs.  Jonathan Lewis created a test case that identified the cause as an ASSM bug that appears in tablespaces created with block sizes larger than 8KB.  A bug was filed with Oracle Corp. related to the problem, and the author of this book as well as the field technician who spotted the error were notified that the cause of the problem had been identified and a patch was being developed.  I am not sure why the book states, “I was able to get the update down to 1.5 minutes” – I distinctly recall that the field technician (also the technical editor of this book) reported fixing the problem, and not the book author (reference, reference2, reference3, reference4).
    • Page 726 states to check Metalink (MOS) Doc ID 122008.1 for “the officially authorized script to detect indexes that benefit from rebuilding.”  That Metalink article states that the criteria is not valid and the script has been revised to meet “current standards and functionality.”  That means that the suggested criteria for rebuilding that is printed in the book, regarding 20% or more deleted index entries or a depth of more than 4 levels, is invalid, as had previously been pointed out to the book author in a couple of OTN discussion threads (reference, reference2, reference3).
    • Page 726 suggests to check Metalink Doc ID 77574.1 for Oracle’s index rebuilding guidelines – that Doc ID does not exist on Metalink (MOS).
    • Page 726 states, “However, when large numbers of adjacent rows are deleted, it is highly unlikely that Oracle will have an opportunity to reuse the deleted leaf rows, and these represent wasted space in the index.”  The first question is why the space cannot be reused.  The second question is whether it matters if the index is a primary key b*tree index, an index on another column, or a different type of index altogether.  The third question is how the adjacent rows are deleted – if this is a heap organized table, how does one make sure that physically adjacent rows in the table blocks are deleted?
    • Page 727 appears to be at least partially copied from page 412 of the book “Oracle 9i High-Performance Tuning” (reference  reference2).
    • Page 727 states, “Hence, an Oracle index may have four levels, but only in those areas of the index tree where the massive inserts have occurred.  Oracle indexes can support many millions of entries in three levels, and any index that has four or more levels would benefit from rebuilding.”  The first sentence is clearly not true, and was communicated to the book’s author by members of the Oracle community in early 2003 (as well as several times since 2003).  The second sentence leads into a bit of a problem area – what happens if after rebuilding an index it is still four or more levels – do you rebuild it again? (reference  reference2)
    • Page 727 states, “Gets per index access: The number of gets per access refers to the amount of logical I/O that is required to fetch a row with the index.”  This attempted description of the BLKS_GETS_PER_ACCESS column of INDEX_STATS seems to be inaccurate, and likely is not a good metric for deciding if an index should be rebuilt – the author suggests that it might be appropriate to rebuild the index when the value of this statistic is more than five (reference).
    • Page 727 suggests using “ANALYZE INDEX VALIDATE STRUCTURE” for all indexes to determine which are candidates for a rebuild.  The book then states, “After analyzing the report above, the DBA may want to consider rebuilding any index where the height is more than three levels…”  The author does not mention that the tables to which the indexes belong will be locked for the entire duration of the ANALYZE INDEX command – this problem was brought to the attention of the book author in 2007 by members of the Oracle community.  For the second of the two quoted sentences, the book does not include a report above the sentence quoted from the book, so it is unclear what the reader should be reviewing (reference).
    • Pages 728-732 contain many faulty claims about the safety and low cost of index rebuilds, and about how “many” databases see a huge benefit from periodic index rebuilding.  Many of these claims are discussed in detail in Richard Foote’s blog and in various Internet forums.
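    The mechanics criticized in the items above are easy to examine directly.  The following is a minimal sketch (the schema and index names are hypothetical, and the statements require a live Oracle database, so they are shown untested) of why the book’s ANALYZE-based approach carries a hidden cost:

```sql
-- Hypothetical index name.  CAUTION: ANALYZE ... VALIDATE STRUCTURE locks
-- the underlying table against DML for the full duration of the command,
-- which is why running it against every index is a poor routine practice.
ANALYZE INDEX my_schema.my_index VALIDATE STRUCTURE;

-- INDEX_STATS holds exactly one row: the most recently analyzed index.
-- HEIGHT here is BLEVEL + 1, so "four levels" corresponds to HEIGHT = 4.
SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows * 100 / GREATEST(lf_rows, 1), 2) AS pct_deleted
FROM   index_stats;
```

    Even when PCT_DELETED exceeds 20% or HEIGHT exceeds 3, that alone does not establish that a rebuild will help – which is the central point of the criticisms above.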

    (Section added December 8, 2010): Related blog articles that discuss the contents of this book:

    (Section added February 28, 2011): Attempted Purchases of the “Oracle Tuning: The Definitive Reference Second Edition” Book:

    On February 2, 2011 I added the following comment to my review of this book on

    In my review of the book I stated the following:
    “I initially ordered the book from Amazon at the end of May 2010 (it was expected to be in stock the first week of June 2010), and when it had not shipped by the end of August, my curiosity subsided due to the delay (and various other reasons) and I cancelled the original order. Near the end of October 2010 my curiosity peaked again, so I placed a new order for the book (at that time the book was expected to be in stock the first week of November 2010). While waiting for the book to arrive, I started reading portions of the first and second editions of the book through the Google books library.”

    The order that I placed for the book at the end of October 2010 still has not shipped, more than three months after I placed the order. It is somewhat interesting to see that the publisher’s website offers this book for seven dollars less than Amazon’s current price for the same book, while the publisher apparently cannot supply Amazon with a sufficient number of copies of this book to fill the existing orders. I have no intention of cancelling this order, even if it does not ship for another three to six months. Best of luck to others who have waited six months (three months for the first order before cancelling and three months for the second order) to obtain the book.

    This was the status of the first attempt to acquire the book, as shown on August 26, 2010 – that order was cancelled shortly after making this screen capture:

    This screen capture shows the current status of the second attempt to acquire the book, as shown on February 28, 2011 – I do not intend to cancel this order:

    This is certainly an interesting situation.  Why publish a book, and then not provide the book vendors with a sufficient number of copies to satisfy the apparent demand?  There is probably a good explanation for the delay, but not one that I am able to understand.

    (Section added March 9, 2011): Attempted Purchases of the “Oracle Tuning: The Definitive Reference Second Edition” Book, Part 2:

    Amazon cancelled my second order and sent the following email:

    I noticed a couple of days ago that Rampant was selling the book through  Currently there is only a single seller listed on, and that seller is asking $976.98 for a used copy of the book: 

    Book Review: Expert Oracle Database Architecture: Oracle Database Programming 9i, 10g, and 11g Techniques and Solutions, Second Edition

    16 08 2010

    August 16, 2010

    Foundation Knowledge to Understand Most Any Other Oracle Database Book – Required Reading for All DBAs and Developers

    I pre-ordered this book in May, having read the two previous editions of this book (“Expert One-On-One Oracle” and “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions”) cover to cover.  Now that I have had an opportunity to read this edition in full, I realize that the first edition of this book subconsciously set the standard that I expected of all other Oracle Database books I have subsequently purchased.  The latest edition is now among my five favorite Oracle Database books, sharing the top five ranking with “Cost-Based Oracle Fundamentals”, “Troubleshooting Oracle Performance”, and “Optimizing Oracle Performance”.

    The author of this book is a Vice President at Oracle Corporation, and is well known for his website, articles in Oracle’s magazine, forum/Usenet postings, and presentations.  The author recently revised the Oracle Concepts Guide, which is part of the Oracle Database 11.2 documentation library.  Slightly off topic: It might be notable that a book written by a VP at Oracle Corporation is not published by Oracle Press.  Oracle Press is a marketing label used by McGraw Hill to market its line of Oracle books – the books are not formally written by Oracle Corporation nor are they formally tech edited by Oracle Corporation, although some employees of Oracle Corporation have used McGraw Hill as a publishing company.  At least for me, for several years the “Officially Authorized Oracle Press” logo on books published by McGraw Hill seemed to be an endorsement of the books by Oracle Corporation, but that is not the case, as was clarified by one of the threads on the AskTom site (Reference Reference2).

    The book contents are very well organized, allowing the reader to continually build on previously introduced concepts.  The book provides many forward and backward references – “see section a1 in chapter n for additional information”.  It is surprising just how much new content is included in this edition of the book – this book reads as though it were written specifically for Oracle Database 11g R2, with exceptions in behavior mentioned for older Oracle Database releases.  Even if you have read one of the older editions of the book, I do not recommend simply jumping into a chapter using either the table of contents or the index at the back of the book.  Start at the beginning of the book and finish at the end of the book.  The author introduces very useful bits of information in the chapters that are not specific to the chapter title, and that do not appear in the table of contents or the index at the back of the book.  Just a couple of examples of useful information that will be missed (especially if you only read the first half of the book):

    • Page 421: UTL_RAW.CAST_TO_RAW and UTL_RAW.CAST_TO_VARCHAR2 functions with sample code.
    • Page 434: Using DUMP(number, 16) to see the raw bytes used to store values.
    • Page 464: Using DBMS_UTILITY.GET_CPU_TIME to determine the previously used CPU seconds for the session.
    • Page 465: Oracle Database 10g R2’s deterministic function optimization.
    • Page 470: Use of the analytic function RATIO_TO_REPORT
    • Page 472: Creating a UNIQUE function based index to enforce uniqueness of values in a column when another column contains a specific value.
    • Page 477: Oracle Forms by default updates every column including primary key columns, and that behavior could lead to deadlocks.
    • Page 478: Disabling table locks on a table will help detect unindexed foreign keys.
    • Page 487: Using DBMS_SPACE.FREE_BLOCKS to determine the number of index blocks on the index freelist.
    • Page 512: Using the VSIZE function to indicate how much storage space a column value requires.
    • Page 513: Warns of a calculation error (that is not a bug) when adding the floating point numbers 0.3 and 0.1 – the result is not 0.4.
    • Page 646: Using DBMS_STATS.SET_TABLE_STATS to simulate a large table in order  to determine how data volume changes will impact the execution plan generated by the optimizer.
    • Page 656: Using the NTILE analytic function to break a rowset into equally sized groups of rows.
    • Page 687: Starting with Oracle 9i it is possible to insert into one or more tables using complex WHEN conditions.
    • Page 739: Encrypting a column in a table may cause significant changes to the execution plans that specify that column in join or filtering conditions.
    • Page 740: Index range scans are not possible with indexes created for encrypted columns.
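    The page 513 floating-point note is easy to demonstrate.  A minimal sketch (requires an Oracle database, so it is shown untested): BINARY_FLOAT and BINARY_DOUBLE are IEEE 754 binary types, so 0.3 and 0.1 have no exact binary representation, while the NUMBER type uses decimal arithmetic.

```sql
-- IEEE 754 binary arithmetic: the sum is not exactly 0.4, because
-- neither 0.3 nor 0.1 is exactly representable in binary.
SELECT TO_BINARY_FLOAT('0.3') + TO_BINARY_FLOAT('0.1') AS float_sum
FROM   dual;

-- NUMBER uses decimal arithmetic, so the same sum is exactly 0.4.
SELECT 0.3 + 0.1 AS number_sum
FROM   dual;
```

    As the book notes, this is rounding behavior inherent to binary floating point, not a bug.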

    The author uses an odd, yet very helpful, teaching method throughout the book: break something (on the assumption that you are reading the chapter because something is already broken), then show how to fix the problem and ultimately how to avoid it in the future.  Unfortunately, the index at the back of the book contains only a small handful of these problem-solution cases – another reason to read the whole book and take good notes.  A list of the errors covered in the book as teachable moments follows:

    • Chapter 1: ORA-00060 (deadlock detected while waiting for resource: page 6), ORA-00942 (table or view does not exist: page 19), ORA-08177 (can’t serialize access for this transaction: pages 37, 38), ORA-02391 (exceeded simultaneous SESSIONS_PER_USER limit: page 44)
    • Chapter 2: ORA-12162 (TNS:net service name is incorrectly specified: page 53), ORA-01078 (failure in processing system parameters: page 55), ORA-16196 (database has been previously opened and closed: page 57)
    • Chapter 3: ORA-00942 (table or view does not exist: pages 70, 76) , ORA-25138 (initialization parameter has been made obsolete: page 71), ORA-01031 (insufficient privileges: pages 75, 110, 112), ORA-02096 (specified initialization parameter is not modifiable with this option: page 78), ORA-12913 (Cannot create dictionary managed tablespace: page 102), ORA-12505 (TNS:listener does not currently know of SID given in connect descriptor: page 111)
    • Chapter 4: ORA-29339 (page 155)
    • Chapter 6: ORA-00054 (pages 210, 224), ORA-00060 (page 211), ORA-00069 (page ?)
    • Chapter 7: ORA-01555 (pages 252, 254, 255), ORA-08177 (pages 252, 261)
    • Chapter 8: ORA-02290 (pages 270, 272, 274, 297), ORA-02292 (page 281), ORA-02291 (page 281), ORA-30036 (pages 288, 289)
    • Chapter 9: Checkpoint not complete (page 321), ORA-01403 (page 324), ORA-01555 (pages 323, 334-344)
    • Chapter 10: ORA-22908 (page 401), ORA-01427 (page 401)
    • Chapter 11: ORA-25954 (page 459), ORA-01450 (pages 465, 466), ORA-01743 (page 472), ORA-00001 (page 476), ORA-00069 (page 478)
    • Chapter 12: ORA-12899 (page 504), ORA-01461 (page 504), ORA-01465 (page 507), ORA-01438 (pages 510-512), ORA-00932 (page 518), ORA-01839 (pages 529, 534), ORA-30079 (page 538), ORA- 22285 (page 553)
    • Chapter 13: ORA-00376 (pages 559, 595, 611), ORA-01652 (page 562), ORA-14511 (page 562), ORA-01843 (page 567), ORA-14400 (pages 569, 575), ORA-14767 (page 576), ORA-14652 (page 576), ORA-14659 (page 585), ORA-14402 (page 585), ORA-01408 (page 598), ORA-02429 (page 600), ORA-14038 (page 600)
    • Chapter 14: ORA-12838 (page 633)
    • Chapter 15: SQLLDR-601 (page 660), ORA-22285 (page 679), SQL*Loader-144 (page 688), ORA-29913 (page 691), ORA-06512 (page 692)
    • Chapter 16: ORA-28365 (page 718), ORA-28348 (page 720)

    Helpful Foundation Knowledge Found in the Book:

    • Notes at the front of the book show how to quickly install Statspack, change AUTOTRACE settings, permanently configure the SQL*Plus environment, and the requirements for the Runstats, Mystat, Show_Space, and Big_Table scripts that are used later in the book.
    • Provides a warning not to use bitmap indexes on columns just because the columns have low cardinality, for example a column named PROCESSED that has either a value of Y or N, without first considering concurrency – how will the column be used and updated.
    • Pages 17 through 21 demonstrate SQL injection when bind variables are not used in an example that most people probably would not consider as a threat – the SQL injection was accomplished by simply changing the NLS_DATE_FORMAT (Reference).
    • Page 72 warns against setting undocumented parameters: “Use undocumented parameters only at the request of Oracle support. Their use can be damaging to a database, and their implementation can – and will – change from release to release.”  This is a warning that should be included in every book before the first mention of hidden parameters.
    • The book provides a detailed list and description of the functionality provided by Oracle’s background processes.
    • Demonstrates how to build test cases to verify performance/scalability issues.
    • Pages 262-265 provide a very detailed demonstration of transactions automatically restarting.  This behavior might cause, for example, before update triggers to fire more than once, various problems with PL/SQL variables, autonomous transactions, and various PL/SQL packages (UTL_FILE, UTL_SMTP, UTL_HTTP, etc.).
    • Excellent section on the topic of index myths, one of which had its roots in Oracle Database 5.
    • Excellent six case summary of why an index may not have been used.
    • Shows a lot of examples of practical SQL and PL/SQL in test cases, which is extremely helpful for people trying to learn either SQL or PL/SQL.  The test cases frequently include TKPROF output.
    • The book includes a nice analogy of Oracle Database’s parallel execution option by comparing that feature with writing either a short or long report – coordinating the writing of a small report could take longer than simply writing the report without assistance.
    • Excellent section dedicated to answering frequently asked questions about SQL Loader.
    • The chapter showing the use of external data includes examples with SQLLOADER, external tables, flat files, and Data Pump unload (used to write out the contents of a table to allow that table’s rows to be used in another database).
    • The book almost always lists the first Oracle release version that offered a feature when that feature is introduced in the book.
    • Typically indicates when syntax introduced in the book is operating system dependent.  For example, on page 682 “ls –l” is specified for Unix/Linux and “dir /q /n” for Windows.
    • The downloadable source code library seems to be complete, making it easy for readers to reproduce the tests that are found in the book.
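    A hedged sketch of the idea behind the pages 17 through 21 demonstration (this is not the book’s code – the table name is invented, and the statements require an Oracle database, so they are shown untested): concatenating a date into dynamic SQL relies on an implicit conversion that an end user can subvert.

```sql
-- Hypothetical unsafe code: concatenating a DATE into dynamic SQL relies
-- on an implicit TO_CHAR conversion that honors the session's
-- NLS_DATE_FORMAT, which any end user is allowed to change.
DECLARE
  l_sql VARCHAR2(200);
BEGIN
  l_sql := 'select * from t where created > ''' || SYSDATE || '''';
  DBMS_OUTPUT.PUT_LINE(l_sql);  -- inspect the statement actually built
END;
/

-- The "date format" below is a double-quoted literal containing quote
-- characters, turning the concatenated date into injected SQL text:
ALTER SESSION SET nls_date_format = '"'' or ''1''=''1 --"';

-- Safe alternative: bind the date instead of concatenating it.
-- EXECUTE IMMEDIATE 'select ... where created > :d' USING SYSDATE;
```

    The fix, as the book stresses, is simply to use bind variables, which are immune to NLS settings.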

    Not everything about the book is perfect.  The errata page for the book currently lists a problem where the author used the word “be” too many times in a sentence.  What else needs a little fine tuning, is not quite right, or is simply incorrect?  The following are my observations about the book:

    • The book frequently seems to assume that a fully optioned version of the Enterprise Edition of Oracle is used by all of the readers of the book, which may set the readers up for false hope in solving certain problems, or implementing solutions that cannot be used in production environments (the chapters discussing bitmap indexes, partitioning, and parallel execution are exceptions, where the requirements are stated):
      — Advanced Queuing on page 10
      — SQL Profiles on page 11
      — FLASHBACK on page 26
      — Fine-grained access control on page 46
      — Index key compression on page 407
      — Chapter on the topic of Data Encryption does not mention what capabilities are available for the Standard Edition, Enterprise Edition, or the Enterprise Edition with the Oracle Advanced Security Option.
    • Page xxxvi describes running the utlxplan script as the user SYSTEM.  Running this script should not be necessary starting with Oracle Database 9i (Reference).
    • Chapter 1 seems to be a little loose with terminology, specifically with the terms “database” and “Oracle”.  The chapter was probably intentionally written that way to help introduce the reader to the world of Oracle Database.  The first couple of pages of the second chapter provide definitions of “database” and “instance”.
    • Some code sections of the book have lines with unnecessarily bold fonts (page 13, 29, 33, 34, 37, 53, 54, 59, 61, 175).
    • Page 109 typing error: “Normally, on UNIX/Linux, this group will be DBA by default, and OSDBA on Windows. That group is ‘special,’ in that any user in that group can connect to Oracle ‘as SYSDBA’ without specifying a username or password.” On Windows the group name is ora_dba, not OSDBA – the same error is found in the previous edition of the book (Reference).
    • Page 131 behavior change: “A serial (nonparallel query) session will use a small percentage of the PGA_AGGREGATE_TARGET, typically about 5 percent or less.  So, if you’ve set the PGA_AGGREGATE_TARGET to 100MB, you’d expect to use no more than about 5MB per work area (e.g., the sort or hash work area). You may well have multiple work areas in your session for multiple queries, or more than one sort or hash operation in a single query, but each work area will be about 5 percent or less of the PGA_AGGREGATE_TARGET.”  It appears that this behavior changes when the OPTIMIZER_FEATURES_ENABLE parameter is set to or greater, as evidenced by the change in the _SMM_MAX_SIZE parameter as found in a 10053 trace file or a query of X$KSPPSV and X$KSPPI (Reference Reference2).
    • Page 220 join error: The self-join of V$LOCK probably should have also joined on the TYPE column in addition to the ID1 and ID2 columns.  Additionally, the BLOCK column could potentially have a value of 2, rather than 1, in a RAC environment (Reference).
    • Page 335 typing error: shows the view name V$UNDOSTATUNDOSTAT, should be V$UNDOSTAT.
    • Page 419 suggested use of a deprecated feature: Shows using the exp command with Oracle Database to create a people.sql file containing a CREATE TABLE command for an object based table.  Page 116 states the following “Export [exp] is officially deprecated with Oracle Database 11g Release 2. It is supplied only for use with legacy database structures.”
    • Some of the autotrace generated execution plans do not conform to the DBMS_XPLAN style execution plans that are automatically generated as of Oracle Database 10g R2, even though it appears that the author was connected to Oracle Database 11g R2.  Examples include the two execution plans on page 480.
    • The SQL*Plus prompts are atypical (not “SQL>”) as displayed in the book –  while it might be helpful to show which database the SQL*Plus session is connected to, it makes it hard to read some of the code samples that span more than a single line.
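    For the page 220 observation, a corrected blocker/waiter query might look like the following sketch (requires SELECT privileges on V$LOCK; shown untested):

```sql
-- Self-join of V$LOCK matching waiters to blockers.  Joining on TYPE in
-- addition to ID1 and ID2 avoids false matches between unrelated lock
-- types that happen to share ID1/ID2 values.
SELECT blocker.sid AS blocking_sid,
       waiter.sid  AS waiting_sid,
       waiter.type,
       waiter.id1,
       waiter.id2
FROM   v$lock blocker,
       v$lock waiter
WHERE  blocker.block IN (1, 2)     -- BLOCK may be 2 in a RAC environment
AND    waiter.request > 0
AND    blocker.type = waiter.type  -- the join condition the book omits
AND    blocker.id1  = waiter.id1
AND    blocker.id2  = waiter.id2;
```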

    The problems found in the book are absolutely minor in comparison to the value of the information provided in the book.  Yes, it is probably harsh to be judged by the standards set in the previous edition of the book.  This edition update was certainly not a five hour task, and probably was not a five month task either.  I suspect that this edition has been in development for most of the last five years, since the publication of the previous edition.  If you have read the previous editions of this book, set those books aside and start reading from the beginning of this edition – you will not be disappointed.

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Review: Oracle Performance Firefighting (Fourth Printing)

    25 06 2010

    June 25, 2010

    Many Helpful Topics, However the Errors and Omissions are Unnecessary Distractions

    This review covers the June 2010 fourth printing of the “Oracle Performance Firefighting” book.  I bought a PDF copy of the book direct from the OraPub site.  The author’s excellent “Forecasting Oracle Performance” book was a factor in my decision to purchase this book, as was my desire to learn something new while trying to maintain previously acquired knowledge.  As stated in this book, the author has roughly 20 years of experience working with Oracle Database, worked with Cary Millsap at Oracle Corporation for a period of time, and conducts a successful consulting and training business.  I recognize the names of three of the six technical reviewers listed in the book as well known experts in the Oracle community.  The author’s website states that there are no known issues/errata starting with the third printing (November 2009) of the book (reference).  All of the components for the book’s success are in place, so how did the book perform?

    There are definitely positive moments in the book, where the book successfully communicates complicated topics using very clear statements.  The book publisher selected a font size and margin size that packs a significant amount of detail into each of the book’s 387 pages.  There are several well-placed helpful diagrams with no space wasting cartoon drawings.  The author introduced his three circle analysis, which involves identifying the intersection of the Oracle Database, the Application, and the Operating System to help pinpoint the source of performance problems.  Several examples were provided where the author used the Linux strace command to troubleshoot Oracle Database performance problems.  The book recommends against modifying the _SPIN_COUNT hidden initialization parameter, which is wise advice.  It clearly states that when there is an operating system bottleneck, there are only four subsystems to investigate: CPU, memory, IO, and network.  It provides a good demonstration of using DBMS_MONITOR to enable 10046 traces, and a nice description of in-memory undo and queuing theory in easily understood language.  The author’s website offers the OSM toolkit, a collection of scripts that can be freely downloaded – several of the scripts were used throughout the book.  It is obvious that the author invested a significant amount of time into the writing of this book.

    There are also definitely negative moments in the book: sections filled with typing errors, logic errors, and omissions.  One sign of trouble is a paragraph that lists four hidden initialization parameters the author suggests adjusting, without indicating by how much the parameters should be adjusted, and without explaining why three of the four hidden initialization parameters simply do not exist in Oracle Database.  Some sections of the book are written in a condescending tone, as if no one other than a DBA is intelligent enough to understand the challenges that a DBA or performance tuning consultant faces, even when the concepts could be explained in simple terms to others.  Very few reproducible test cases are provided in the book, which makes it difficult for readers to practice or duplicate the concepts introduced in the book.

    This book is not a replacement for the “Troubleshooting Oracle Performance” book – the two books share very little in common.  This book seems to spend little time explaining how to understand the root cause of the performance problems through various trace file analyses (10046, 10053, 10032, etc.) or even analysis of execution plans, while the author does spend a lot of effort suggesting changes to initialization parameters.  The book’s author is in a somewhat unique position in that he has either a 3 out of 5 star book, or a 5 out of 5 star book depending on the corrections that make it into the fifth printing of the book.  The PDF version of the book cannot be printed, does not permit copying text, and has the purchaser’s name, address, and phone number at the top of each page.

    Specific errors, omissions, and distractions in order:

    • The book uses a lot of footnotes, some of which do not add value to the discussion of Oracle Database.
    • Page 16 suggests continuing to work on improving performance until time and budget are exhausted, or performance is acceptable.  I think that this is a pathway to compulsive tuning disorder.
    • Page 22 suggests that when the CPU run queue is usually longer than the number of CPU cores, processes need to wait for CPU resources.  This suggestion raises the question: isn’t it possible for processes to wait for CPU resources before the CPU run queue reaches the length of the number of CPU cores?
    • Chapters one and two seem to spend a lot of time trying to relate Oracle firefighting concepts to non-computer related activities: telephone operator, school report card, doctor checking a young child, clients requiring the author to keep a timecard to track time, etc.  Additionally, the early chapters excessively repeat the terms firefighting and firefight.
    • Page 41: Missing letter “Oracle has never guaranteed it has completely instrumented it code.”
    • Page 42: Missing letter “By tracing an Oracle process through the operating system, we can easily observe how Oracle has instrumented it code.”
    • Page 43: Incorrectly defines the meaning of a db file scattered read wait event when attempting to correlate a Linux readv OS call to an Oracle wait event: “Regardless of the operating system or the actual system call, Oracle gives a multiblock IO call the special name db file scattered read, because these multiple blocks can be scattered over the IO subsystem.”  One of the problems with the quote is that a multiblock read from the operating system could correspond to several different Oracle wait events.  A second problem is that the readv call reads logically adjacent sections of a file (think multiple Oracle blocks) on disk into non-adjacent memory locations (reference).
    • Page 47: While Table 2-1 states that only selected columns of V$SYSTEM_EVENT are displayed, the table probably should have included the TOTAL_TIMEOUTS column, which helps describe the true length of wait events that have, for instance, three-second timeouts – this column has existed since Oracle Database 8.1.5.  The table also states that the WAIT_CLASS, WAIT_CLASS_ID, and WAIT_CLASS# columns (the book incorrectly includes an underscore between CLASS and #) were introduced in 11g R1, while the Oracle documentation shows that those columns were introduced in 10g R2 (reference).
    • Page 49: While Table 2-2 states that only selected columns of V$SESSION_WAIT are displayed, the table seems to discount the value of the SECONDS_IN_WAIT column by excluding that column.  The table also states that the WAIT_CLASS, WAIT_CLASS_ID, and WAIT_CLASS# columns (the book incorrectly includes an underscore between CLASS and #) were introduced in 11g R1, while the Oracle documentation shows that those columns were introduced in 10g R2 (reference).  The table should also provide a better description for the STATE column’s value of “WAITED KNOWN TIME: Previous wait completed; the session is currently not waiting.” – the book should explain what this STATE value means: is the session in an idle wait event, or is the session currently burning CPU time (the latter is what it means)?
    • Page 55: States “Systems based on Oracle Database 10g and later can take advantage of the v$sys_time_model and v$ses_time_model views…” – the session level view is actually named V$SESS_TIME_MODEL.
    • Page 61: The book strongly suggests that a user’s “Submit” or “Save” operation equates to a COMMIT, and that a COMMIT equates to a LOG FILE SYNC wait event, and therefore if the average LOG FILE SYNC wait time is identical to a previous day when performance was acceptable, the performance problem is “most likely not related to the database.”  A lot may be said about where this paragraph fails the reality check – just think of one other operation that may happen when the user selects the “Save” option, or one other cause for the LOG FILE SYNC wait, or how averages may distort the true slowness that is experienced by the user, or how the author is simply measuring the wrong thing when trying to determine if the problem is present in the database (reference).
    • Page 68: Example of condescending tone: “When working on Oracle systems, you will eventually hear some poor whiner say something like, ‘Oracle is a resource hog!  It consumes every bit of resources it can get its despicable hands on!’”  More examples of condescending/confrontational tone with the server admin, storage admin, storage vendor, etc. are found on pages 128, 130, 132, 148, 156, 201, 264, 324.
    • Page 74: Example of an odd footnote that does not add value to the book: “I am in no way implying Oracle sincerely cares how much memory its database system requires.”
    • Page 94: The author indicated that one of his tests that modified the _KKS_USE_MUTEX_PIN parameter showed that the mutex implementation decreased the execution time by 11%, yet the author did not provide a SQL statement to create the test table, nor provide the PL/SQL script used in the test so that readers could reproduce the test results.
    • Page 104: The book states: “Virtual memory management, process scheduling, power management, or essentially any activity not directly related to a user task is classified as system time.  From an Oracle-centric perspective, system time is pure overhead…”  The book fails to mention that activities in Oracle Database itself directly cause kernel mode CPU utilization.  Network I/O, disk I/O (especially with buffered file systems), timer calls (typically gettimeofday calls), memory access requests, memory management, and process creation and termination (think session creation and termination) all directly cause kernel mode CPU utilization.  The paragraph seems to suggest that the DBA has no control over kernel mode CPU consumption (reference reference2).
    • Page 116: States: “While it may seem strange, the run queue reported by the operating system includes processes waiting to be serviced by a CPU as well as processes currently being serviced by a CPU.”  This is an operating system specific observation.  Linux appears to behave this way, Windows does not, and others have indicated that various Unix releases do not behave the same as Linux (reference).
    • Page 136: States: “For example, if you ran a response time-focused report over a duration of 60 seconds, and if the single-CPU core subsystem were heavily overworked, Oracle could report that Oracle processes consumed 65 seconds of CPU time. Yet, we clearly know the CPU subsystem is limited to supplying up to 60 seconds of CPU time and no more.”  This appears to be an operating system specific CPU usage measurement problem.  Windows will exhibit this problem, where time spent in the run queue is accumulated with time spent actually running on the CPU, but I could not reproduce that behavior on Linux (reference).
    • The book seems to use Oracle 10g R1 frequently for tests, rather than something more recent, such as 10g R2.  Oracle 10g R1 was desupported quite a long time ago, while the patch was only recently released.  In the span of a couple of sentences, the book frequently uses the term “Oracle” to mean both “Oracle Database” and “Oracle Corporation” – this might cause some degree of confusion.
    • Page 175: The book seems to suggest that bind variable values are typically included in TKPROF summaries, except when multiple trace files are combined.  Bind variable values are never included in TKPROF summaries.
    • Page 177: The book seems to recommend changing hidden initialization parameters without warning that those parameters should only be modified after consulting Oracle support.
    • Page 182: The book suggests modifying the _ASH_SAMPLING_FREQUENCY hidden initialization parameter.  Unfortunately, this parameter does not exist in Oracle Database (the only release that I checked).
    • Page 183: The book suggests that changing the _ASH_SIZE hidden initialization parameter will modify the amount of memory used by Active Session History, yet the book mentions that Oracle does not always respect the setting.  This might be a sign that the parameter controls something other than what the author described.  The author also mentioned changing the  _ASH_ENABLE hidden initialization parameter.
    • Page 200: The author suggests that the _DB_BOCK_MAX_CR_DBA hidden initialization parameter controls the number of clones per buffer, however that parameter does not exist in Oracle Database.  I believe that the author intended to write _DB_BLOCK_MAX_CR_DBA which attempts to control the maximum number of consistent read copies of a block.
    • Page 203: The author suggests that changing the _DB_BLOCK_HASH_LATCHES hidden initialization parameter will provide some relief to cache buffer chains latch contention.  This is one of the few times that the author cautions, “Before you change this, be aware there may be support issues from both Oracle and your application vendor.”
    • Page 208: The book states that the _SMALL_TABLE_THRESHOLD hidden initialization parameter was deprecated because of the modified LRU (Least Recently Used) algorithm that the book stated was introduced in Oracle Database 8.1.5, and the hidden initialization parameter assumed new life with the introduction of Oracle Database 11g where the “parameter is the threshold for a server process to start issuing direct reads.”  Metalink (My Oracle Support) Doc ID 787373.1 indicates that the parameter was never deprecated.  The actual threshold for selecting a direct path read for a serial operation in Oracle Database and above is roughly five times the value specified for _SMALL_TABLE_THRESHOLD.  Various other articles describe the purpose of the _SMALL_TABLE_THRESHOLD parameter prior to Oracle Database (reference reference2 reference3).
    • Page 213: The book states: “Without physical blocks being read from disk, there will be no LRU chain latch contention, because there will be no need to find a free buffer or insert a buffer header into an LRU chain.”  This statement begs the question, what about consistent read copies of blocks, where Oracle must essentially roll back a block to a specific SCN to provide a consistent read for an in-process query?
    • Page 219: The book states: “Investigate the _db_block_write_batch and the _db_writer_max_writes parameters. Also considering increasing the instance parameter _db_writer_max_scan_pct (the default may be 40, for 40%) or _db_writer_max_scan_cnt…”  One problem with these two sentences is that the _DB_BLOCK_WRITE_BATCH, _DB_WRITER_MAX_SCAN_PCT, and _DB_WRITER_MAX_SCAN_CNT hidden initialization parameters do not exist in Oracle Database (the only release checked) (reference).
    • Page 221: The book states: “Now suppose you have scanned more than _db_writer_max_scan_pct buffer headers. If so, you would be very frustrated.”  Unfortunately, the _DB_WRITER_MAX_SCAN_PCT hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 222: The book states: “Increase _db_writer_max_scan_pct. This will give the database writer more time to flush its write list.”  Unfortunately, the _DB_WRITER_MAX_SCAN_PCT hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 223: The book states, “Strangely, in Oracle Database 11g, Oracle removed the reason code from the v$session_wait and v$session view’s p3 column!”  Column P3 was changed from the reason code to block class starting with Oracle Database 10g R1.
    • Page 226: The author used block dumps to investigate locks, while it seems that it would be easier to query a couple of views.
    • Page 230: The book states, “… Simply keep adding free lists until the contention subsides.” This begs the question of what side-effects may be encountered?
    • Page 231: The book describes how active transactions are assigned to undo segments.  Limited testing indicates that the book is correct unless the objects are created in the SYSTEM tablespace (creating objects in that tablespace is not recommended).
    • Page 253: The book used library_cache dumps to investigate child cursors – it seems that there are easier, less resource intensive ways to obtain this information.
    • Page 256: The book states: “Oracle keeps track of these reloads in the reloads column from v$library_cache…” – the correct view name is V$LIBRARYCACHE, not V$LIBRARY_CACHE.
    • Page 265: The book mentions the use of CURSOR_SHARING = SIMILAR and FORCE, but does not discuss the performance impact of the change other than noting that Mark Gurry stated that CURSOR_SHARING = FORCE can result in incorrect results.  No other discussion of performance problems was provided.
    • Page 284: The book describes the author’s under-powered Linux server used in a test.  The server was configured with 512MB of memory, 256MB shared pool, 4MB for the buffer cache and was running Oracle Database 11g Release 1.  The Oracle documentation (reference) states that 1GB of memory is the minimum for Oracle Database 11g Release 1, and that puts into question the validity of any tests performed with that server.  The details of the test case, which obtained a 21% decrease in CPU usage through the use of in-memory undo over the configuration with in-memory undo disabled, were largely omitted, making it impossible for readers to duplicate the test.
    • Page 286: The book lists a parameter named COMPATIBILITY, however that parameter does not exist.  The actual parameter name is COMPATIBLE.
    • Page 289: The book states: “Increase the number of IMU pools by increasing the _im_pools parameter…”  The _im_pools hidden initialization parameter does not exist in Oracle Database (the only release checked) – the author probably intended to write _IMU_POOLS.
    • Page 294: The test script shows that redo statistics for a session are recorded, the session sleeps for 5 seconds, the session queries a table, and then the redo statistics are recorded again.  The author attempted to explain that redo could be generated by a session merely by querying a table due to delayed block cleanout.  The test case is slightly unclear, making it difficult for readers to reproduce the results – there is no point in the session sleeping for 5 seconds; it would have been helpful if the author had explained what changes were made to the CUSTOMERS table that eventually resulted in the delayed block cleanouts; and because the statistic number for the redo size statistic is prone to change from one Oracle release to the next, it would have made more sense to specify SN.NAME=’redo size’ rather than essentially specifying SN.STATISTIC#=133.
    • Page 297 states: “… There is also the instance parameter _log_small_entry_max_size, which is used to shift allocation latch activity onto one of the redo copy latches…”  The _LOG_SMALL_ENTRY_MAX_SIZE hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 298 states: “The DBA can specifically set the number of redo strands via the hidden parameter _log_parallelism.”  The _LOG_PARALLELISM hidden initialization parameter does not exist in Oracle Database (the only release checked).
    • Page 300 states: “Global temporary tables can be explicitly dropped. When this occurs, the associated indexes and views are also dropped.”  The associated indexes will be dropped assuming that the temporary table is also purged, however affected views will not be dropped and will have a status of INVALID.
    • Page 301 states: “Object growth can be observed by repeatedly by querying v$sort_usage_view.  Global temporary tables create 50% less redo than standard tables.” This quote has a minor grammar error, and the view name is V$SORT_USAGE (should not include _VIEW in the name).  The question remains how the author determined that working with global temporary tables generates half as much redo as regular heap tables.
    • Page 302: Another instance where the author correlates an application save with a LOG FILE SYNC wait and concludes that the problem is not in the database instance.
    • Page 305: While it is mentioned on the next page in another section, it probably would have made sense to mention here that the redo log buffer is also flushed every three seconds, and to repeat that the redo log buffer is, by default, also flushed after every COMMIT.
    • Page 314: The footnote states: “When a process is serviced by one CPU and then is switched to another CPU, it undergoes a context switch… Exclusively binding a CPU and process together virtually eliminates context switching, because the CPU will always be there for that process and the process will never need to wait while the operating system switches its processing to another CPU.”  The author seems to have either redefined the meaning of context switch, or the footnote needs additional clarification (reference reference2).
    • Page 315 states: “The redo log buffer is structured in relatively small operating system-size blocks (typically 512 bytes), not Oracle-size blocks, so it is normal for log writer activity to be performed with multiple blocks.”  The Oracle documentation (reference) states that the redo log file block size defaults to the physical sector size of the disk, which is typically 512 bytes.
    • Page 322 states: “The heavy redo-generating SQL will obviously be some type of DML, like inserts, updates, deletes, or perhaps create statements. If you have ASH licensed, as Figure 8-9 demonstrated, you can easily find the SQL related to the log file parallel write event.” – The heavy redo-generating SQL statement could also be a SELECT statement due to delayed block cleanout (as mentioned earlier in the book), and a CREATE statement would likely be classified as DDL rather than DML.  Since the LOG FILE PARALLEL WRITE wait event is posted by LGWR and not by a user session, there probably will not be a SQL statement that is definitively associated with the LOG FILE PARALLEL WRITE wait event.
    • Page 324: The book makes a claim that is difficult to verify: “Using global temporary tables can cut redo generation by 50%, substantially reducing Oracle’s overall IO requirements. This alone could solve the log file parallel write problem.”
    • Page 326 states: “If the log writer encounters a nonarchived redo log, it will stop writing and post the wait event log file switch (archive incomplete).”  The actual wait event that will be posted is log file switch (archiving needed).
    • Pages 327 – 387 seem to be related to the author’s “Forecasting Oracle Performance” book, with concepts written in an easier to understand, more fluid writing style.  This is not a problem, just worth mentioning.
    • Page 330: The word “difference” should be replaced with “different”.
    • Page 330 states: “For example, the Statspack facility typically collects data in 60-minute intervals and stores the data in tables starting with stats$ (the key table is stats$snap).”  There is no typical collection period for Statspack, while AWR snapshots default to 60 minutes in duration.
    • Page 333 states: “In previous chapters, I have presented how to gather CPU requirements from the v$sesstat, v$sysstat, v$ses_time_mode, and v$sys_time_model views.”   V$SES_TIME_MODEL should be written as V$SESS_TIME_MODEL.
    • Page 343 states: “Also, Oracle CPU consumption includes Oracle processes waiting for CPU and also queuing for CPU. As a result, in a CPU-saturated system, Oracle may report CPU consumption higher than actually occurred.”  The behavior mentioned by the author is operating system dependent.
    • Page 344 states: “In other words, Oracle does not have the ability to split CPU time into service time and queue time. When we gather CPU consumption from either the instance statistics views or the system time model views, what we collect as CPU time and typically classify as CPU service time actually contains both CPU service time and CPU queue time. Oracle has no way of knowing the difference and reports the total, which is, in the truest sense, CPU response time.” This behavior is operating system dependent.
    • Page 346 states: “The arrival rate in Figure 9-9, which is the horizontal axis, is simply the number of logical IOs (v$sysstat: buffer gets plus consistent gets) processed per millisecond.”  There is no statistic in V$SYSSTAT that is named buffer gets, the author probably should have stated db block gets instead. (reference).
    • Page 347 states: “The instance statistic physical reads signifies the number of Oracle blocks that Oracle server processes had to request from the operating system because the blocks did not reside in Oracle’s buffer cache.”  The physical reads statistic includes block reads that will never hit the buffer cache, including blocks read into the PGA.  It appears that the author is describing the physical reads cache statistic instead.
    • Page 349 states: “…when focusing on a particular session or group of sessions, use v$sesstat, v$ses_time_model, and v$session_event.” The correct view name is V$SESS_TIME_MODEL.
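    The page 104 item above deserves a quick illustration.  The following Python sketch (my own, not from the book, and not Oracle-specific – the workload sizes are arbitrary) uses os.times() to show the user/system CPU split that the book describes; note that the file I/O portion of the work, like Oracle’s own network and disk I/O, is serviced by the kernel:

```python
import os
import tempfile

# os.times() reports this process' CPU consumption split into user-mode
# and system (kernel) components -- the same split that sar/vmstat and
# Oracle's OS statistics expose.
before = os.times()

# User-mode work: pure computation, essentially no system calls.
total = sum(i * i for i in range(200_000))

# Kernel-mode work: the write()/read() system calls behind file I/O are
# serviced by the kernel, so they accrue system time rather than user time.
with tempfile.TemporaryFile() as f:
    for _ in range(2_000):
        f.write(b"x" * 4096)
    f.seek(0)
    while f.read(65536):
        pass

after = os.times()
user_delta = after.user - before.user
system_delta = after.system - before.system
print(f"user CPU: {user_delta:.3f}s  system CPU: {system_delta:.3f}s")
```

    The point is simply that a database server doing “only database work” still drives kernel-mode CPU consumption through its system calls.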

    The “Oracle Performance Firefighting” book has a lot of potential for being helpful; however, the errors and omissions found in the fourth printing of the book give the book a decidedly dangerous edge.  If flipping initialization parameter values fits your style, this book will be a good fit.
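    As a follow-up to the page 343 and 344 items: the service time/queue time split that some operating systems fold into reported CPU time can be sketched with the classic M/M/1 queuing approximation (my formula choice for illustration, not a formula taken from the book):

```python
def mm1_response_time(service_time_ms, utilization):
    """Classic M/M/1 approximation: response = service / (1 - utilization).
    The gap between response time and service time is CPU queue time, which
    on some operating systems is folded into reported CPU consumption."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

service = 10.0  # ms of pure CPU service per unit of work (made-up value)
for u in (0.50, 0.80, 0.95):
    r = mm1_response_time(service, u)
    print(f"utilization {u:.0%}: response {r:.1f} ms "
          f"(service {service:.1f} ms + queue {r - service:.1f} ms)")
```

    At 95% utilization the queue time dwarfs the service time, which is why a CPU-saturated system can report more “CPU time” than the CPUs could actually supply.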

    Pages 352 through 387 are currently not included in this book review.
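    Regarding the page 208 item, the serial direct path read decision described in the cited references can be sketched roughly as follows (the five-times multiplier is an observed approximation from those references, not a documented constant, and the threshold value below is made up):

```python
def serial_direct_read(table_blocks, small_table_threshold, multiplier=5):
    """Rough sketch of the observed 11g behavior: a serial full scan tends
    to switch to direct path reads once the segment exceeds roughly five
    times _SMALL_TABLE_THRESHOLD.  The multiplier is an approximation."""
    return table_blocks > multiplier * small_table_threshold

# _SMALL_TABLE_THRESHOLD often defaults to roughly 2% of the buffer cache;
# 2000 blocks is simply an illustrative value.
threshold = 2000
for blocks in (5_000, 9_999, 10_001, 50_000):
    path = ("direct path read" if serial_direct_read(blocks, threshold)
            else "buffer cache read")
    print(f"{blocks:>6} blocks -> {path}")
```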

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Review: Oracle SQL Recipes: A Problem – Solution Approach

    6 06 2010

    June 6, 2010 (References Last Updated June 7, 2010)

    Answering the Question – What can I do with SQL? A Handful of Errors Tarnish a Great Book

    Last November I pre-ordered this book, hoping that it would pick up where the “Mastering Oracle SQL and SQL*Plus” book ended (at roughly Oracle Database release ).  The expectations were very high, and this book delivers for the most part.  Some parts of the book left me scratching my head, thinking that this SQL statement or paragraph just cannot be correct.  Even when taking those sections into account, this book conveys a significant amount of practical, useful information.

    The book’s format of problem statement, solution, and how/why the solution works is the key to the success of this book.  You will not find page after page of SQL statement syntax.  The range of problem statements covered in this compact 500-page book is impressive, touching on a wide range of topics:

    • Basic SELECT statements
    • Manipulating numbers, dates, and character data
    • Transaction management
    • XML data
    • Partitioning
    • Analytic functions
    • Regular expressions
    • Performance tuning
    • Database troubleshooting
    • Creating databases, tablespaces, tables, indexes, and constraints


    • Thoroughly answers the question of why it is important to learn Oracle’s dialect of SQL.  The book provides 233 problem situations, most with more than one variation of the problem along with the SQL statement for each variation.
    • Includes interesting side stories that are not necessarily related to SQL coding, but help to keep the reader interested in the book contents.  For example, a side story titled “Beer and Statistics Do Mix” describes the invention of T-Tests.
    • Typically states when a feature requires the Enterprise Edition or an additional cost license beyond the Enterprise Edition (Tuning Pack, for example).
    • Most, if not all, of the code samples from the book may be downloaded from the Apress website.
    • Describes the LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, SUM, MIN, MAX, NTILE and a couple of the other common analytic functions.  Does not cover all of the analytic functions, for example: RATIO_TO_REPORT, FIRST_VALUE, LAST_VALUE, PERCENTILE_DISC, etc.
    • Recipe 5-8 shows how to create a PL/SQL function that converts from base 10 to binary, octal, and hex.
    • Recipe 11-9 shows an impressive example that creates an HTML chart from a SQL statement.  It might be interesting to see this example extended to use floating objects with absolute positioning rather than HTML tables.
    • Recipes 12-3 and 12-4 include PL/SQL functions that help to determine if a VARCHAR2 value can be safely converted to a NUMBER or DATE.
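    The ideas behind recipes 5-8, 12-3, and 12-4 translate easily to other languages.  Here is a quick Python analog (the function names and the default date mask are mine, not the book’s PL/SQL):

```python
from datetime import datetime

def to_base(n, base):
    """Recipe 5-8 analog: convert a non-negative base-10 integer to
    base 2, 8, or 16."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n:
        n, remainder = divmod(n, base)
        out.append(digits[remainder])
    return "".join(reversed(out))

def is_number(s):
    """Recipe 12-3 analog: can the string be safely treated as a number?"""
    try:
        float(s)
        return True
    except ValueError:
        return False

def is_date(s, fmt="%d-%b-%Y"):
    """Recipe 12-4 analog: can the string be parsed with the given mask?"""
    try:
        datetime.strptime(s, fmt)
        return True
    except ValueError:
        return False

print(to_base(255, 16), to_base(255, 8), to_base(255, 2))
print(is_number("12.5"), is_number("12,5"))
print(is_date("23-Aug-2011"), is_date("2011-08-23"))
```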

    Problems (in order, not all code samples tested):

    • In recipe 1-1, retrieving data from a table, the authors mentioned “currently set schema,” a concept which probably would have been much better to introduce in a later recipe.  Especially in the early sections of the book, it seems that the authors struggled with what concepts to introduce early and what concepts required a more established foundation.  As such, people just learning SQL, or even just Oracle’s dialect, will struggle with the early sections of the book.
    • Several recipes (1-4, 2-11, 2-14, 2-15, 2-16, 6-5, 6-14) use implicit date conversions, or explicit varchar to date conversions without specifying the format of the date contained in the varchar.  This is a very poor practice to teach to people who are just beginning the process of learning SQL.  A change in the NLS parameters, or even a change in the connection method from Oracle’s ODBC driver to Oracle’s OLEDB driver is sufficient to cause such SQL statements to fail.  Unfortunately, the TO_DATE function was not introduced until chapter 6.
    • Page 26 incorrectly stated, “for performance reasons, Oracle will implicitly sort the data to match the grouping desired.”  As of Oracle Database 10.1 [edit: probably should state 10.2 for hash group by, previous releases did not necessarily need to perform a sort], more often than not a hashing algorithm will be used to perform GROUP BY operations, which means that there will be no implicit sort of the data. [Reference/Test Case] [Reference/Test Case]
    • Recipe 3-5 attempted to explain when to use three different syntaxes to delete rows from a table (IN, EXISTS, and SELECT with USING clause).  As may be confirmed with a 10053 trace file, the optimizer will often automatically re-write SQL statements from one form to another, for example from an IN clause to a standard join, or an EXISTS clause to a standard join.  The authors probably should have mentioned the automatic query re-write capability of the cost-based optimizer, rather than stating something like this about an EXISTS clause example “this is not as elegant as the first solution, but might be more efficient.” [Reference/Test Case – Check the Comments Section Also]
    • Recipe 5-11 demonstrates how to use the FOR UPDATE clause of a SELECT statement and demonstrates that the second session will be blocked until the first session commits or issues a rollback.  At this stage of the book the NOWAIT clause is not introduced.  Unfortunately, such a demonstration might be an invitation to “lost” updates.
    • The query output for recipe 6-6 does not match the expected output, which may lead to confusion.  The book shows that the result of the timezone conversion is “13-AUG-09 PM AMERICA/LOS_ANGELES” when it should be “13-AUG-09 PM AMERICA/LOS_ANGELES”
    • Recipe 6-12, which uses the EXTRACT function, will produce incorrect/unexpected results, resulting in the MONTHS calculation increasing by one up to 15 days earlier than should be expected if used to determine the length of time an employee has worked.  That recipe would have been the perfect time to introduce the MONTHS_BETWEEN SQL function (not included in the book). [Reference/Test Case – Check the Comments Section Also]
    • Recipe 7-1 states that Oracle will never use a non-function based index on a column when that column appears inside the INSTR function that is present in a WHERE clause.  That statement is incorrect, a hint, covered in recipe 19-12, is sufficient to allow Oracle to perform an index full scan operation on that column when the only predicate in the WHERE clause includes that index’s column wrapped in an INSTR function (tested on Oracle releases through ).  [Reference/Test Case – Check the Comments Section Also]
    • Recipe 8-2 states, “With the value one-third stored in each column (BINARY_DOUBLE and NUMBER datatypes), we can use the VSIZE function to show it was much more complicated to store this with decimal precision, taking nearly three times the space.”  There are a couple of problems with this statement.  First, a BINARY_DOUBLE column will always require eight bytes to store any number, while it takes a variable amount of space to store a number in a NUMBER column (for instance, the number 1,000,000 requires fewer bytes to store than the number 333).  Second, the BINARY_DOUBLE column only maintained 16 significant digits to the right of the decimal point, while the NUMBER column maintained 40 significant digits to the right of the decimal point (the bytes required can be decreased by specifying a smaller number of significant digits to the right of the decimal point). [Reference/Test Case – Check the Comments Section Also]
    • Recipe 8-6 states “The final two implicit defaults (of a CREATE SEQUENCE clause) cause Oracle to cache 20 values and not force strict first-come, first served ordering on those who call NEXTVAL.”  This statement is either misleading or incorrect.  The default will force the sequence numbers to be assigned to calling sessions in the sequential order specified by CREATE SEQUENCE – Oracle will not provide the numbers out of sequential order.  There can be some degree of out of sequential order assignment in a RAC environment.
    • Recipe 9-2, which includes a SQL statement that is intended to show the objects which are locked that are blocking other sessions, has a number of problems.  1) It is not RAC aware – it should be selecting from the GV$ views, rather than the V$ views.  2) The SQL statement explicitly looks for the BLOCK column to be equal to 1 – in a RAC environment it could potentially show a value of 2.  3) When performing a self-join on V$LOCK, the query is only joining on the ID1 column of V$LOCK, while the documentation shows a self-join on that view using the ID1, ID2, and TYPE columns.  4) There appears to be a missing join condition between V$LOCK and V$LOCKED_OBJECT that results in objects being identified as contributing to the blocking of the second session, when in fact that may not be the case.  5) The query seems to assume that all locks acquired are full table locks. [Reference/Test Case – Check the Comments Section Also]
    • Page 436 suggests checking Metalink (My Oracle Support) Doc ID 555284.1 for a script to identify indexes that should be rebuilt.  I suspect that that document looked very different prior to the last modification date of May 13, 2009.  The script in the document does not detect indexes that are candidates for a rebuild – the script simply outputs SQL statements to rebuild every index that is not owned by SYS or SYSTEM (and a small number of other users) into an INDX tablespace with a specified initial, next, and maxextents clause.
    • Recipe 18-13, in the Note section, states that when a primary key constraint is created Oracle will create a unique index with the same name as the constraint.  That statement is true unless a unique or non-unique index already exists for the column or a composite index that has that column as its leading column exists.  Later in the recipe it was stated that the DBA could pre-create the index that will be used to help enforce the primary key constraint, but the Note section might cause a little confusion. [Reference/Test Case – Check the Comments Section Also]
    • Recipe 19-1, the join between V$SQL_MONITOR and V$SQL is incomplete and will result in repeated rows if there are multiple child cursors for a SQL_ID, which seems to happen frequently starting with Oracle Database due to adaptive cursor sharing, and due to cardinality feedback.  Additionally, V$SQL_MONITOR will only be populated if the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter is set to DIAGNOSTIC+TUNING to enable the extra cost licenses (Tuning Pack, for example). [Reference/Test Case]
    • Recipe 19-3 states “the V$SESSION_LONGOPS view gives you a rough idea of how long a currently running query has left to finish.”  Technically, this statement is both correct and incorrect.  The view provides a rough idea, based on the execution plan estimates, of how much time is left for a specific stage (step/line) in the execution plan.  Entries in the V$SESSION_LONGOPS view will appear and disappear as the different stages in the execution plan execute.  Most index type accesses will likely complete in less than six seconds per access, and probably will not appear in this view. [Reference/Test Case – Check the Comments Section Also]
    • Recipe 19-4, the query may produce misleading results as the query is not time-scoped.  The query does not consider that the database may have been up for several years – a query that on average consumes one second of CPU time, but was executed 100 million times since the library cache was last purged will likely be flagged as a problem, while a query that on average consumes 1,000 seconds of CPU time but has only executed 1,000 times in the last 24 hours probably will not appear in the SQL results.
    • Recipe 19-7 states that a PLAN_TABLE must exist, and that if it does not exist the utlxplan.sql script should be run to create a PLAN_TABLE in the user’s schema. As of Oracle Database [Edit: it appears that this change happened prior to] user schemas should no longer have a PLAN_TABLE in their schema – a global temporary table owned by the SYS user is used instead. [Reference/Test Case – Check the Comments Section Also]
    • Chapter 19 demonstrates the use of EXPLAIN PLAN, AUTOTRACE, and the EXPLAIN option of TKPROF, but the chapter does not mention that using those methods for displaying the execution plan could show the wrong execution plan. [Reference/Test Case – Check the Comments Section Also]
    • Recipes 19-10 and 19-12 suggest that a goal of tuning is reducing the calculated COST value that appears in the execution plan, and this was accomplished in the recipes by adding an index or using an index hint.  This is not a good approach to tuning; it is much better to rely on actual response time rather than the calculated cost estimates. [Reference/Test Case – Check the Comments Section Also] [Reference/Test Case – Check the Comments Section Also]
    • Recipe 20-1, the SQL statements are not time-scoped, and even if the database has only been open for a short period of time, the SQL statement could yield incorrect results because the “CPU used by this session” statistic is not updated in real-time, only when a SQL statement completes, or a fetch completes. [Reference/Test Case – Check the Comments Section Also]
    • Page 500 suggests inspecting V$SESSION to determine the type of block contention that causes buffer busy waits.  It would have been nice to see the authors provide a demonstration of this technique because from what I am able to determine, the chances of success are extremely small using V$SESSION (I assume that the authors might be suggesting to look at either the combination of P1, P2, and P3 or the combination of ROW_WAIT_OBJ#, ROW_WAIT_FILE#, and ROW_WAIT_BLOCK#).
    • Page 500 states that the “SQL*Net message from client” wait event indicates possible network or middle-tier bottlenecks.  Client-side “think time” is probably the most significant contributor to this wait event, and I think that it would be very difficult to demonstrate that this wait event indicates network bottlenecks. [Reference/Test Case] [Reference/Test Case] [Reference/Test Case – Check the Comments Section Also]
    • Recipe 20-8 suggests an approach to identify those indexes that are safe to drop by enabling monitoring of the indexes.  One of the problems with this approach is that the index may still be used by the cost-based optimizer even when monitoring reveals that the index is never used. [Reference/Test Case – Check the Comments Section Also] [Reference/Test Case – Check the Comments Section Also]
    • Very few cross-recipe (forward and backward) references were provided, which makes the sections of the book appear to be disconnected and stand-alone.
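    To make the recipe 6-12 item concrete, here is a small Python sketch (the hire dates are made up) contrasting EXTRACT-style month arithmetic with a simplified version of Oracle’s documented MONTHS_BETWEEN behavior, where the fractional portion is based on a 31-day month:

```python
from datetime import date

def naive_months(d1, d2):
    """EXTRACT-style arithmetic: year and month components only, ignoring
    the day of the month -- the flaw noted for recipe 6-12."""
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)

def months_between(d2, d1):
    """Simplified sketch of Oracle's MONTHS_BETWEEN: whole months plus the
    day-of-month difference over a 31-day month (time portion ignored)."""
    return ((d2.year - d1.year) * 12 + (d2.month - d1.month)
            + (d2.day - d1.day) / 31.0)

hired = date(2009, 1, 20)
as_of = date(2009, 2, 5)
print(naive_months(hired, as_of))              # claims a full month already
print(round(months_between(as_of, hired), 2))  # closer to the actual elapsed time
```

    The naive calculation reports one full month of service after just 16 days, which is exactly the kind of early increment described in the recipe 6-12 item.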

    The book attempts to cover a lot of ground.  While it probably requires between one and eight hours for the authors to research, write, proof-read, test, and format a typical page in an Apress book, the authors probably could have controlled the scope of the book a little better and spent a little more time to test and expand some of the sections of the book – the sections that discuss analytic functions, for example.  I did not spend a lot of time reading through the chapters “Working with XML Data”, “Partitioning”, or “LOBS”, so I did not comment on those sections of the book.  The majority of the book is exceptional – far better than any book that is strictly a SQL reference type book, and I would not hesitate to recommend the book to others who are trying to figure out: “What do I do with SQL?”
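    As an example of the time-scoping problem mentioned for recipes 19-4 and 20-1: a delta between two snapshots of a cumulative statistic is meaningful, while a since-startup total usually is not.  A quick sketch with hypothetical statistic values:

```python
# Hypothetical snapshots of a cumulative statistic (for example "CPU used
# by this session", reported in centiseconds).  The instance has been up
# for years, so the absolute total says little about the current workload.
snapshot_1 = {"t": 0.0,   "cpu_centisec": 9_000_000}
snapshot_2 = {"t": 300.0, "cpu_centisec": 9_000_450}   # five minutes later

def cpu_seconds_per_second(s1, s2):
    """Time-scoped rate: (value2 - value1) / (t2 - t1), converting
    centiseconds of CPU time to seconds."""
    return ((s2["cpu_centisec"] - s1["cpu_centisec"]) / 100.0) / (s2["t"] - s1["t"])

rate = cpu_seconds_per_second(snapshot_1, snapshot_2)
print(f"{rate:.3f} CPU seconds consumed per elapsed second")
```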

    I will probably update this review at a later time to supply links to test cases or articles for most of the problems that were identified with the book.  I might also take a closer look at the “Working with XML Data”, “Partitioning”, or “LOBS” chapters, and provide feedback for those sections of the book.
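    The page 26 GROUP BY item can also be demonstrated outside the database: any hash-based grouping returns its keys in hash/insertion order rather than sorted order, which is why an explicit ORDER BY is required when ordered output matters.  A Python sketch with made-up EMP-style data:

```python
from collections import defaultdict

# A hash-based GROUP BY walks the input once and buckets rows by key, so
# the result order follows the insertion/hash pattern, not the key order --
# analogous to the hash-based GROUP BY implementation in the database.
rows = [("CLERK", 800), ("PRESIDENT", 5000), ("ANALYST", 3000),
        ("CLERK", 1100), ("ANALYST", 3000)]

totals = defaultdict(int)
for job, sal in rows:           # single pass, hashing on the grouping key
    totals[job] += sal

grouped_order = list(totals)    # insertion order, not alphabetical order
print(grouped_order)
print(sorted(grouped_order))    # an explicit sort is needed for ordered output
```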

    (Section added December 13, 2010): Related blog articles that discuss the contents of this book:

    Book Library – Finding Motivation

    16 01 2010

    January 16, 2010

    I occasionally see requests for book suggestions in various Internet forums, blogs, and various other websites.  I am typically very careful when preparing to buy a book, since it is my money that will be wasted if the book’s contents are worthless.  I try to read the reviews for books on, blogs, and various other sites to try to determine if the book’s contents should have been retired with the release of Oracle Database 8.0, whether or not the book’s authors emphasize volume over technical quality, and whether or not people are willing to rely on a particular author’s advice.

    I thought that it might be interesting to look at my book library to see if I would be able to offer any advice.  My book library at home looks like the following picture (I intentionally or unintentionally left out about 6 to 10 books) – click the picture to see a larger version.

    If you closely examine the books, you will probably be able to tell that I need to spend a little more time digging through the books on the right (and the four years of related magazines that have been sitting mostly unread).

    Here is my personal book library at work – click the picture to see a larger version.

    As I wrote in this OTN thread, a small number of Oracle books marked distinct turning points in my knowledge of Oracle.  Additionally, it is necessary to find motivation to continue learning whatever subject falls at your feet.  In the OTN thread, I stated the following:

    I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple years later I learned that Gaja was a member of the Oaktable Network. I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple years later I learned that Tom was a member of the Oaktable Network. I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning – the logic made all of the book easy to understand. A couple years later I learned that Cary was a member of the Oaktable Network. I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book that by its title seemed to be too much of a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question – “What is Oracle thinking?” Jonathan is a member of the Oaktable Network; a pattern is starting to develop here. At this point I started looking for anything written in book or blog form by members of the Oaktable Network. 
I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me – probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to help understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, a term which was coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge – I reviewed at least 8 of the Oracle related books on the website

    The above was written before I set up this blog – there are more book reviews on this blog here:  In the above pictures you will see all of the books that I referenced in the OTN post, as well as the book that I had the opportunity to co-author with a fairly large number of OakTable Network members (top photo – I have not yet received my printed copy of the book from Amazon, so the picture shows a printed copy of the electronic version from Apress).  There are of course a large number of books in my personal library at work – as you can see, I have the opportunity to dig into much more than Oracle Database.  I have read most of the books cover to cover, and a very small number of the books have been read cover to cover twice.

    My post in the OTN thread continues:

    Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book …

    The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:

    • “That obviously means that the CPU is overwhelmed!”
    • “Wow 4,851 wait events on the CPU, that sure is a lot!”
    • “Wow wait events on the CPU, I didn’t know that was possible?”
    • “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
    • “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”
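    The suspicion in the last two bullet points is simple arithmetic. Only the 4,851 waits and 4,042 seconds are figures quoted from the book; the rest of this quick check is just an illustration of the reasoning:

    ```sql
    -- Compute the average time per "wait" implied by the book's figures:
    -- 4,042 seconds spread across 4,851 waits.
    SELECT ROUND(4042 / 4851 * 1000) AS avg_wait_ms
      FROM dual;
    -- Roughly 833 ms per "wait" - suspiciously long, and Oracle does not
    -- normally report a wait *count* for CPU time in the first place.
    ```

    Even if the average looked reasonable, without the number of CPUs in the server and the time range covered by the report the percentage tells us very little – which is the point of the bullets above.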

    Another page from the same book shows this command:

    alter system set optimizer_index_cost_adj=20 scope = pfile;

    Someone reading the book might be left thinking one of:

    • That looks like an easy to implement solution.
    • I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
    • That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
    • Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans? 
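    For reference, the SCOPE clause of ALTER SYSTEM accepts only MEMORY, SPFILE, or BOTH – there is no PFILE option, because the running instance cannot write back to a text parameter file – which is why the command as printed fails with ORA-00922. A syntactically corrected sketch, using the book’s value of 20 purely as an illustration and not as a recommendation, would read:

    ```sql
    -- SCOPE = BOTH changes the running instance and the spfile;
    -- SCOPE = MEMORY or SCOPE = SPFILE changes just one of the two.
    -- "SCOPE = PFILE" is not valid syntax and raises ORA-00922.
    ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE = BOTH;
    ```

    Of course, fixing the syntax does nothing to answer the more important bullet point above: why 20, and at what cost to other execution plans?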

    I suggest that you try reading an old Oracle book, such as “Practical Oracle 8i”, and see if you are able to pick out anything that is:

    • Obviously wrong, and was never correct.
    • Obviously wrong as of some later Oracle release version, but 100% correct at the time the book was written.
    • Obviously correct now, just as it was when the book was originally written.
    • Grossly over applying a fix that worked in a finite set of conditions (possibly due to false correlation) to situations with nearly infinite scope.

    Someone posted a comment on this blog asking for a sequenced list of book recommendations for learning Oracle Database.  I suggested that the list of books might be a bit different depending on whether the person had an interest in general DBA work or performance tuning (or development, or …).  The suggestions that I provided to the comment follow:

    Quick suggestions:

    • A solid foundation of Oracle specific SQL is needed. I enjoyed reading “Mastering Oracle SQL and SQL*Plus”, and I believe that book provides a solid foundation. That book appears to be in the process of being updated, and might even include page numbers this time. I am currently reading “Oracle SQL Recipes: A Problem-Solution Approach” – probably about 30 pages into the book now – and I believe that I have already found a small handful of minor errors/issues with the book that would make it difficult to use as a starting point.
    • A solid foundation of understanding Oracle’s behavior is needed. I believe that Tom Kyte’s “Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions” book is one of the best sources. I understand that Tom Kyte also re-wrote the Oracle “Concepts Guide”, so that might be a decent substitute for his book.
    • If you are planning to do general DBA work, probably the next book should be on the topic of RMAN. The books in the Oracle documentation library are good, and you will find two reviews of other RMAN books on this blog.
    • Next, I would suggest reading a book that provides a solid foundation of the Oracle wait interface. “Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning” seems to be the best source of that information, but it would be nice to see an update of the book that covers more recent releases of Oracle.
    • Next, the “Oracle Performance Tuning Guide” from the Oracle documentation library.
    • Next, I suggest the book “Troubleshooting Oracle Performance” – the book not only introduces people to various approaches for troubleshooting problems, but also provides the foundation knowledge that is needed in order to understand why an approach worked.
    • Next, I suggest digging deeper into troubleshooting with 10046 trace files – Cary Millsap’s “Optimizing Oracle Performance” is the best source for this information.
    • Next, I suggest digging deeper into troubleshooting with 10053 trace files – Jonathan Lewis’ “Cost-Based Oracle Fundamentals” is the best source for this information.

    + If queueing theory, introduced in “Optimizing Oracle Performance”, is of interest, take a look at “Forecasting Oracle Performance”.

    + If Statspack/AWR report reading, introduced in the “Performance Tuning Guide”, is of interest, see the excellent series of articles on Jonathan Lewis’ blog.

    + If you want your jaw to drop, take a look at Tanel Poder’s blog. I also recommend reading all of the blog entries on Jonathan Lewis’ blog and Richard Foote’s blog.

    + I have now read most of the chapters in the “Expert Oracle Practices: Oracle Database Administration from the Oak Table” book.  The book contains theory, tuning philosophy, tuning/troubleshooting logic, test cases, and up-to-date information that cannot be found in any other book.  It is my opinion that this book belongs in the “Quick suggestions” list above.  Disclaimer: Just estimating here, for every 17 copies of this book that are sold, I think that I will have enough royalty money to buy a soda from the vending machine (my recommendation has nothing to do with how thirsty I am right now 🙂 ).  It is my belief that the motivation for all of the authors of this book was simply to help readers improve their skills well beyond the basics.