Book Review: Oracle Database 11gR2 Performance Tuning Cookbook (Part 2)

July 23, 2012

Oracle Database Performance Tuning Test Cases without Many “Why”, “When”, and “How Much” Filler Details

(Back to the Previous Post in the Series)

This blog article contains the review for the second half of the “Oracle Database 11gR2 Performance Tuning Cookbook” book.  It has been nearly five months since I posted the first half of this review.  My full review of this book is roughly 26 typewritten pages in length.

This book definitely is not an easy read.  Instead, it is necessary to pick through the material in the book, in search of correct information, and then perform a bit of research to determine where the book’s contents deviated from describing actual Oracle Database behavior.  Unfortunately, the same review process was required when reading a handful of other Oracle Database performance related books, so this book is not unique in this regard.

Five months ago I submitted 21 errata items for the first 80 or so pages of this book.  As of July 23, 2012, the publisher’s website shows the following when attempting to find the errata for this book:

“No errata currently available.

Known errata are listed here. Please let us know if you have found any errata not on this list by completing the errata submission form. Our editors will check them and post them to the list. Thank you.”

The errata list found on the book’s errata page is disappointing!

Test case scripts are used throughout this book, and that is one of the biggest selling points of this book.  One of the problems, however, is that it appears that the primary goal of this book is to demonstrate the author’s 50+ scripts, show screen captures of each script’s execution, describe the action performed by each command in the script, and then describe the result of the script.  The all-important “how”, “when”, “how much”, and “why” components were largely excluded from the book.  There are also many errors, misstatements, and misinterpretations of the scripts’ output in the book – items that should have never survived the technical review cycle for the book.  I have no desire to “bash” this book, or to discourage readers of this blog from buying the book.

This review currently excludes chapter 11 on the topic of “Tuning Contention” and a couple of pages in another chapter.  The format of this review mirrors the format used in several of my recent book reviews.

Data Dictionary Views:

  • V$SYSSTAT (pages 208, 400)
  • V$PGA_TARGET_ADVICE (page 210)
  • V$SORT_USAGE (page 252)
  • USER_TAB_HISTOGRAMS (pages 306, 318)
  • V$SGA, V$SGAINFO (page 376)
  • V$PROCESS (page 377)
  • V$DB_OBJECT_CACHE (pages 389, 394)
  • V$SQLAREA (page 395)
  • V$SHARED_POOL_ADVICE (page 395)
  • V$ROWCACHE (page 395)
  • V$DB_CACHE_ADVICE (page 400)
  • X$BH, DBA_OBJECTS (page 402)
  • ALL_INDEXES (page 419)
  • DBA_HIST_SEG_STAT (page 420)
  • V$INSTANCE_RECOVERY (page 429)

Parameters:
  • PGA_AGGREGATE_TARGET (pages 211, 215)
  • STATISTICS_LEVEL (page 215)
  • PLSQL_CODE_TYPE (page 276)
  • USE_STORED_OUTLINES (page 311)
  • RESULT_CACHE_MODE (page 344)
  • RESULT_CACHE_MAX_SIZE (page 345)
  • LOCK_SGA (pages 377, 381)
  • CURSOR_SHARING (pages 388, 399)
  • SHARED_POOL_SIZE (pages 388, 396)
  • TIMED_STATISTICS (page 412)

Hints:
  • DYNAMIC_SAMPLING (page 304)
  • RESULT_CACHE (page 343)
  • NO_RESULT_CACHE (page 345)
  • PARALLEL (page 347)
  • APPEND (page 352, 355)

Error Messages:

  • ORA-4031: “unable to allocate X bytes of shared memory” (page 390)
  • RMAN-06207 (page 417)

Comments, Corrections, and Problems:

  • A script in the book attempts to set the SORT_AREA_SIZE to a value of 1000 (less than 1KB), after setting the WORKAREA_SIZE_POLICY parameter to MANUAL.  A side effect of this change is that the HASH_AREA_SIZE defaults to twice the SORT_AREA_SIZE, thus the change set the HASH_AREA_SIZE to 2000 bytes.  The execution plan printed in the book shows a single sort operation, and two hash join operations – thus the author is not just testing what happens when the SORT_AREA_SIZE parameter is set to less than 1/65 of its already small default value.  The script in the book orders the rows by AMOUNT_SOLD DESC, while the script in the script library orders the rows by TIME_ID ASC. (page 208)
  • The book states, “We then set a MANUAL policy, reserving the size of 1000 blocks of memory for sorting, with the following statements… ALTER SESSION SET SORT_AREA_SIZE = 1000; …”  The SORT_AREA_SIZE parameter is NOT specified in blocks, but instead in bytes. (page 212)
  • The book states, “In fact, the DB_BLOCK_SIZE for our database is 8 KB, which reserves 8192 KB of memory for the sort operation, but actually needs 24 MB of memory to complete the sort in-memory.”  If the unit of measure of the SORT_AREA_SIZE parameter is blocks (it is bytes, not blocks), 8 KB * 1000 is 8000 KB, not 8192 KB.  The 24 MB figure from the autotrace execution plan is only an estimate of the temp space requirements for the query – my test execution showed 4096 KB was actually required. (page 213)
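Since the unit of measure is the recurring point of confusion here, below is a minimal sketch of the manual work area settings the book’s script appears to have intended (values are illustrative):

```sql
-- SORT_AREA_SIZE is specified in bytes, not blocks
ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
ALTER SESSION SET SORT_AREA_SIZE = 1000;   -- 1000 bytes, as in the book's script
-- side effect: HASH_AREA_SIZE defaults to 2 * SORT_AREA_SIZE (2000 bytes)

-- had "1000 blocks" of an 8 KB block size actually been intended,
-- the equivalent setting would instead be 8,192,000 bytes (8000 KB):
ALTER SESSION SET SORT_AREA_SIZE = 8192000;
```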
  • The book states, “If we use an Oracle Shared Server Connection, the User Global Area is not inside the PGA, but it’s in the Shared Pool. Hence, an application which often performs sorting should not use Oracle Shared Server Connections.”  The second sentence is not a logical conclusion of the first sentence.  The first sentence is not completely accurate, as the UGA could be in the large pool, rather than the shared pool. (page 215)
  • The script output shows a HASH UNIQUE operation in the execution plan when the author issued a SQL statement with a DISTINCT clause to demonstrate that a sort to disk would be performed when DISTINCT is specified in a SQL statement – HASH UNIQUE is not a sort operation.  The results of this test case script conflict with the second paragraph on page 207. (page 218)
  • The script output shows a HASH GROUP BY operation in the execution plan when the author issued a SQL statement with a GROUP BY clause to demonstrate that a sort to disk would be performed when GROUP BY is specified in a SQL statement – HASH GROUP BY is not a sort operation.  The results of this test case script conflict with the second paragraph on page 207. (page 219)
  • The book states, “Select the first 10 customers, ordered by their age, from youngest to oldest.”  It might be better to state that 10 customers will be selected at random, specifically those customers with rows in the first set of table blocks – this statement was clarified on page 226. (page 224)
  • The book states that changing the where clause to “WHERE ROWNUM < 11” will permit the query using RANK to obtain the same result as earlier.  There are a couple of reasons why this statement is incorrect: 1) the order of the rows is COINCIDENTALLY the same as the value assigned by RANK, but that will not always be true.  2) there are 31 rows that have a RANK of 1, so even if the rows were produced in sequential rank order, there is no guarantee that the same 10 of 31 rows will appear as had happened earlier. (page 227)
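To make the Top-N intent deterministic, the analytic value itself should be filtered in an inline view, rather than relying on ROWNUM and a coincidental row order.  A sketch, assuming the SH sample schema used by the book’s scripts:

```sql
-- filter on the RANK value in an inline view, so the predicate is
-- applied after the analytic function is computed
SELECT cust_id, cust_year_of_birth
FROM   (SELECT cust_id, cust_year_of_birth,
               RANK() OVER (ORDER BY cust_year_of_birth DESC) AS rnk
        FROM   sh.customers)
WHERE  rnk <= 10;

-- note: with 31 rows tied at rank 1, this returns all of the tied rows;
-- ROW_NUMBER() with a deterministic tie-breaker (for example, CUST_ID
-- added to the ORDER BY clause) returns exactly 10 repeatable rows
```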
  • The example in the book shows that an “INDEX FULL SCAN (MIN/MAX)” operation is used when an index is present on a column to which the MAX or MIN function is applied.  The book also shows that if both the MIN and MAX values of a column are requested in a single SELECT statement, the optimizer switched to a “TABLE ACCESS FULL” operation, but the book did not explain why that happened.  The work-around suggested by the book almost works, except that the solution produces two rows rather than a single row. (pages 234-236)
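One work-around that returns the MIN and MAX in a single row, while still permitting an “INDEX FULL SCAN (MIN/MAX)” operation for each aggregate, is a pair of scalar subqueries (sketched here against the SH sample schema; the actual execution plan should be checked to confirm that the MIN/MAX optimization is used):

```sql
-- each scalar subquery contains a single MIN or MAX, so each is
-- eligible for the INDEX FULL SCAN (MIN/MAX) optimization, and the
-- outer query against DUAL returns exactly one row
SELECT (SELECT MIN(cust_year_of_birth) FROM sh.customers) AS min_birth,
       (SELECT MAX(cust_year_of_birth) FROM sh.customers) AS max_birth
FROM   dual;
```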
  • While the author’s statement that “the myth SELECT (1) is faster than SELECT(*) is wrong – they are the same query” is headed in the right direction, the approach is wrong.  It is simply insufficient to look at the execution plan to determine that two SQL statements are the same (and are performing the same work).  Had the author suggested generating a 10053 trace for both SQL statements, he would have had the opportunity to mention that the query optimizer silently transformed the “SELECT COUNT(1)” query into “SELECT COUNT(*)” – so yes, the queries are the same, but the book’s approach to determining that the queries are the same is not correct. (page 237)
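A 10053 trace may be generated as sketched below; the “Final query after transformations” section of the resulting trace file is where the COUNT(1) to COUNT(*) transformation would be visible:

```sql
-- enable a 10053 (cost-based optimizer) trace, then force a hard parse
-- of the statement of interest
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

SELECT COUNT(1) FROM sh.sales;

ALTER SESSION SET EVENTS '10053 trace name context off';
-- the trace file is written to the directory indicated by the
-- USER_DUMP_DEST / DIAGNOSTIC_DEST parameters
```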
  • The author created a temporary tablespace with a 16MB uniform extent size without identifying why that extent size was selected rather than a 1MB default extent size. (page 249)
  • The book states, “If we haven’t defined a default temporary tablespace for a user, the SYSTEM tablespace is used to store SORT SEGMENTS.”  The statement in the book appears to be out of date – the Oracle Database 11.2 documentation states, “Users who are not explicitly assigned a temporary tablespace use the database default temporary tablespace, which for new installations is TEMP.” (page 251)
  • The book’s discussion of defining the INITIAL and NEXT storage parameters for temporary tablespaces as a multiple of the SORT_AREA_SIZE parameter seems to be out of place in an Oracle Database 11g R2 performance tuning book – dictionary managed tablespaces were deprecated with the release of Oracle Database 9.2, and the effect of these parameters is different in locally managed tablespaces. (page 252)
  • The book states, “The PCTINCREASE parameter cannot be specified when we use the AUTOALLOCATE option for the tablespace.”  This statement appears to be incorrect – the Oracle Database documentation states that a specified value for PCTINCREASE will be ignored when creating an AUTOALLOCATE tablespace. (page 252)
  • The book states, “We can have different tablespaces for each user and we can query the dynamic performance view V$SORT_USAGE to retrieve the details about the active disk sorts occurring in the instance.”  This statement is a bit confusing, so it is probably best to break the sentence into two separate logical sentences for analysis.  The first half of the statement seems to suggest that a separate temp tablespace should (or could) be created for each user – I am not sure that this is the author’s intended interpretation; the default temporary tablespace may be set at the user level so that not all users are required to use the same (default) temporary tablespace.  For the second logical sentence, the V$SORT_USAGE performance view was deprecated with the release of Oracle Database 9.2, replaced with V$TEMPSEG_USAGE.  In Oracle Database 11.2, the V$SORT_USAGE view is based on the GV$SORT_USAGE view which selects from X$KTSSO (SELECT VIEW_DEFINITION FROM V$FIXED_VIEW_DEFINITION WHERE VIEW_NAME=’GV$SORT_USAGE’;).  The GV$SORT_USAGE view definition, which is pointed to by both the V$SORT_USAGE and V$TEMPSEG_USAGE synonyms, indicates that the SEGTYPE column may be one of SORT, HASH, DATA, INDEX, LOB_DATA, LOB_INDEX, or UNDEFINED.  The potential values of the SEGTYPE column suggest that the view potentially shows a variety of activity in addition to what the book mentions (the WHERE clause should be adapted to restrict the rows returned from this view, so that the results are consistent with this chapter’s contents).  The V$SORT_USAGE view also shows entries for aborted SQL statements, for instance when Ctrl-C is pressed in SQL*Plus and another SQL statement has not been executed by the session. (page 252)
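A sketch of a query that restricts the view’s output to the sort and hash work areas discussed in the chapter:

```sql
-- restrict to SORT and HASH segment types so that the output matches
-- the chapter's topic; the view also reports temporary DATA, INDEX,
-- LOB_DATA, and LOB_INDEX segments (such as those belonging to
-- global temporary tables)
SELECT username, session_num, segtype, tablespace, blocks
FROM   v$tempseg_usage
WHERE  segtype IN ('SORT', 'HASH');
```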
  • The book states, “Please note that – due to their nature – temporary tablespaces are not affected by backup and restore operations.”  This is a vague statement at best, and the accuracy of the statement depends in part on the type of backup performed (online or offline, user-managed or RMAN hot backup, etc.).  An online RMAN backup will not back up the tempfiles of locally managed temporary tablespaces; however, when those backed up databases are restored and recovered, Oracle Database 10.2 and later will recreate the temporary tablespace tempfiles when the database is opened. (page 252)
  • The book states, “Using the LIMIT clause, we can see a slight decline in performance, so why use it?”  The screen captures in the book actually show the test without the LIMIT clause completing in 0.44 seconds, and the test with the LIMIT clause completing in 0.30 seconds, so in this case the LIMIT clause appears to have improved performance. (page 261)
  • Better clarification is needed in the section describing the use of the PL/SQL NOCOPY clause.  The book states that when a parameter variable is passed by reference, a value change to that variable actually changes the value of a copy of that variable, without affecting the originally passed in variable – this concept is backward.  The book should also make an effort to clarify the statement that the NOCOPY clause is a hint, touching on some of the causes of the NOCOPY clause being intentionally ignored as described in the documentation. (page 266)
  • The description of the CLUSTER hint is almost a verbatim copy of the description from the Oracle Database documentation.  The description in the book is a little vague regarding the usage of that hint, considering that the author lists the hint among the most commonly used hints.  The book did not mention this fact found in the documentation: “This hint applies only to tables in an index cluster.” (reference page 296)
  • The book’s description of the LEADING hint is misleading if there are more than two tables (or other row sources) specified in the SQL statement.  The book states, “Asks the optimizer to join table_name1 and table_name2 in the given order.”  First, a hint is a directive – it does not “ask” of the optimizer, it “demands” of the optimizer (this “Asks” wording is used in several of the hint descriptions).  The LEADING hint does not just specify that two tables should be joined in the listed order.  That hint states that the join order should begin with the first table (or other row source) specified in the hint, joining each successive table (or other row source) listed in the hint in the specified order. (page 297)
  • The book states that the default value for the OPTIMIZER_DYNAMIC_SAMPLING parameter is 2, which is true for Oracle Database 10.1 and higher.  However, the book also describes dynamic sampling as being introduced in Oracle Database 9.2.  Oracle Database 9.2 had a default value of 1 for this parameter.  The book offered no description of what happens when a value, such as 8, is used in a DYNAMIC_SAMPLING hint, other than 0 means that dynamic sampling should not be used, and 10 means “use aggressive sampling”. (page 304)
  • The book states, “Histograms are useful only on indexed columns containing skewed values, because they help the optimizer to choose whether to use the index or not to access values.  Obviously, if the frequency for a specific value is very high, using the index won’t be the best choice.”  “Don’t use histograms in situations where: 1. the column is not used in the WHERE clauses of the queries; 2. the data in the column is uniformly distributed; 3. bind variables are used when comparing against the column.”  The quoted section from the book seems to be closely derived from a page of the Oracle Database 8.1.7 documentation which was modified and finally removed from later releases of the documentation.  Histograms are potentially useful when created on non-indexed columns, with the goal of improving cardinality estimates, which are used to determine join order and join type when multiple tables are referenced in a SQL statement.  Histograms are also potentially useful when the data is not skewed, helping to improve the cardinality estimates of range scans when outlier values are present (such as an out of range value taking the place of NULL values).  Regarding the statement, “Obviously, if the frequency for a specific value is very high, using the index won’t be the best choice,” the frequency of a value in a column is but one of several factors that the optimizer considers, and a high frequency of a value is not necessarily a problem if the use of the index avoids the need to visit the related table.  Regarding the bullet points that describe when histograms should not be used, “The column is not used in the WHERE clauses of the queries”: with the introduction of ANSI join syntax in Oracle Database 9.0.1, a column that only appears in JOIN clauses could potentially benefit from a histogram.  Prior to the introduction of adaptive cursor sharing in Oracle Database 11.1, bind variables when combined with bind variable peeking and histograms could generate execution plans that, while efficient for the first execution, were potentially considerably inefficient when different bind variable values were submitted during later executions of the SQL statement.  Adaptive cursor sharing attempts to address the issues caused by the combination of bind variable peeking and histograms.  In cases where the same value is always submitted for a comparison with a submitted bind variable, a histogram on the comparison column could significantly improve the optimizer’s cardinality estimates such that a more efficient execution plan is developed (consider a case when an index on a STATUS column is present, and that STATUS column contains the value of either PROCESSED or NOT PROCESSED).  The book only mentioned that adaptive cursor sharing would be discussed in a later recipe – there was no comment in this section of the book regarding whether adaptive cursor sharing helped or further hindered the bullet-pointed items in the book. (page 309)
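For completeness, a histogram on a specific (possibly non-indexed) column may be requested as sketched below; the table and column names are hypothetical:

```sql
-- request a histogram with up to 254 buckets on a single column;
-- ORDERS and STATUS are hypothetical names used for illustration
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'ORDERS',
    method_opt => 'FOR COLUMNS STATUS SIZE 254');
END;
/
```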
  • The discussion of stored outlines missed a couple of key items.  One item missed is that other sessions will not use stored outlines, unless the USE_STORED_OUTLINES parameter is set to a value other than FALSE at either the session (after the session connects) or system level (after each bounce of the database) because the parameter cannot be included in pfiles (init.ora) or spfiles.  In the demonstration script, the author displayed the execution plan for a query, then displayed the execution plan for the same query modified with an index hint, before creating a stored outline for the query; it appears that while creating the script, that the author thought that the unhinted query would use the stored outline for the hinted version of the query – that will not happen without implementing a technique that was not discussed in the book.  The book states, “If we want to avoid changes to the execution plan, we can store them in stored outlines – the plans in stored outlines don’t change…”  A stored outline does not contain an execution plan, it stores a series of hints (visible in DBA_OUTLINE_HINTS), and as stated in the Oracle Database documentation, “The degree to which plan stability controls execution plans is dictated by how much the Oracle Database hint mechanism controls execution plans…” (pages 310-317)
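A sketch of the two missing pieces described above (the outline name is hypothetical):

```sql
-- stored outlines are ignored by a session unless USE_STORED_OUTLINES
-- is set to TRUE (or a category name) after connecting; the parameter
-- cannot be placed in a pfile or spfile, so at the system level it
-- must be re-set after each instance restart
ALTER SESSION SET USE_STORED_OUTLINES = TRUE;

-- a stored outline stores hints, not an execution plan:
SELECT hint
FROM   dba_outline_hints
WHERE  name = 'MY_OUTLINE';
```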
  • The AUTOTRACE generated plans for a SQL statement using bind variables, intended to show adaptive cursor sharing in action, may in fact show a different execution plan than the actual execution plan that will be used because AUTOTRACE handles all bind variables as VARCHAR2, does not perform bind variable peeking, and does not pull an existing execution plan from the library cache. (pages 321-322)
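A sketch of an approach that avoids these AUTOTRACE limitations by pulling the actual execution plan from the library cache (the table and column names are hypothetical):

```sql
-- declare and peek a NUMBER bind variable, execute the statement,
-- then display the plan actually used for the last execution
VARIABLE n NUMBER
EXEC :n := 1

SELECT /* acs_test */ COUNT(*) FROM t1 WHERE flag = :n;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```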
  • The TKPROF output shows two executions, implying that the TKPROF output shows the output for test script lines 16 and 17, not just 16 as stated in the book. (page 325)
  • The TKPROF output shows that the execution plan changed with the different bind variable value, which means that the author must have re-executed the SQL statement again after the first execution with a bind variable value of 1 resulted in an inefficient execution plan.  The book does not mention that adaptive cursor sharing only takes effect after the runtime engine first executes the SQL statement at least once with an inefficient execution plan. (page 326)
  • The book states, “Pre Oracle Database 10g, bind variables and histograms could not be used together.”  This is a confusing sentence – is the book implying that histograms could not be utilized on any columns, even those that are only compared with literals, if the SQL statement uses bind variables?  Bind variable peeking was introduced with Oracle Database 9.0.1, which is sufficient to allow the optimizer to take advantage of histograms on columns referenced in the WHERE clause when those columns are compared with bind variables. (page 326)
  • The book states that in an OLTP type database, “we probably want to disable the Adaptive Cursor Sharing feature to eliminate the related overhead.”  The book then suggests changing the CURSOR_SHARING parameter to a value of EXACT, and the _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL parameter to a value of NONE.  First, the book should not suggest altering a hidden parameter without mentioning that hidden parameters should only be changed after consulting Oracle support.  Second, it is not the CURSOR_SHARING parameter that should be set to a value of EXACT, but the _OPTIMIZER_ADAPTIVE_CURSOR_SHARING parameter that should be set to a value of FALSE (see Metalink (MOS) Doc ID 11657468.8).  Third, the blanket statement that adaptive cursor sharing should be disabled in OLTP databases seems to be an incredibly silly suggestion for any Oracle Database version other than the release affected by a bug that led to an impressive number of child cursors due to repeated executions of a SQL statement with different bind variable values. (page 327)
  • (Skipped pages 328-340)
  • The example demonstrating the client-side result cache was of limited use because the author specified “SET AUTOT TRACE EXP” in the script, meaning that neither the query without the RESULT_CACHE hint, nor the query with the hint actually executed.  For the example to be useful, autotrace should have been disabled, DBMS_XPLAN.DISPLAY_CURSOR should have been used to display the ALLSTATS LAST statistics, and the query with the RESULT_CACHE hint should have been executed twice. (pages 342-344)
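A sketch of the more useful test described above:

```sql
-- disable autotrace so the queries actually execute, and enable
-- row source statistics collection for ALLSTATS LAST
SET AUTOTRACE OFF
ALTER SESSION SET STATISTICS_LEVEL = ALL;

-- execute twice: the second execution should be satisfied by the
-- RESULT CACHE operation with few or no buffer gets
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM sh.sales;
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM sh.sales;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```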
  • When trying to compare the relative performance of the PARALLEL hint, rather than using SET TERM OFF and SPOOL /DEV/NULL to avoid outputting the results to the screen, and selecting SYSDATE for timing information, it probably would have been better to use SET AUTOTRACE TRACEONLY STATISTICS and SET TIMING ON.  Caching at the operating system, SAN, and disk level also needs to be considered.  For example, simply executing the SQL statement with the PARALLEL hint first might have required that version of the SQL statement to take longer than the unhinted SQL statement (hinted: 12.81 seconds, unhinted: 10.05 seconds; hinted: 10.80 seconds, unhinted: 7.58 seconds; hinted: 9.15 seconds, unhinted: 13.00 seconds). (pages 346-349)
  • The book states, “In OLTP environments parallel executions cause too many locks and consume more resources – limiting the scalability of the application.”  The author should clarify the statement that parallel executions cause too many locks – is the author describing parallel DML, parallel DDL, or parallel query? (page 350)
  • The book states, “Another issue can arise when using direct path insert – remember that conventional load inserts in memory – and only the redo log is written to disk on commit.”  This statement is confusing and misleading.  The contents of the redo log buffer are flushed to disk on commit – a commit in any session.  But the redo log buffer is also flushed every 3 seconds, and when it is at least 1/3 full.  The section of the author’s test script without the APPEND hint generated roughly 37 MB of redo (in a NOARCHIVELOG mode test database) when I tested, and the book shows that the insert required 5.3 seconds during the author’s test run.  DBWR is free to write blocks modified by a session to disk before the session issues a commit.  The demonstration should have mentioned that triggers and foreign key constraints on the table will prevent the APPEND hint from working as expected, and that more redo would be expected to be written in an ARCHIVELOG mode database unless NOLOGGING is set on the table or NOLOGGING is specified in the INSERT statement. (page 355)
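The redo figures mentioned above can be measured from the session’s own statistics; a sketch (the target table name is hypothetical):

```sql
-- record the session's 'redo size' statistic before the insert
SELECT ms.value
FROM   v$mystat ms, v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name = 'redo size';

-- direct path insert; note that triggers or foreign key constraints
-- on the target table cause the APPEND hint to be silently ignored
INSERT /*+ APPEND */ INTO big_table SELECT * FROM sh.sales;
COMMIT;

-- re-run the V$MYSTAT query; the delta is the redo generated
```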
  • The book states, “The NOLOGGING clause doesn’t write redo log entries for the operation, which is unrecoverable due to this behavior.”  The wording of this statement makes it a little difficult to understand the point of the statement.  Additionally, including the NOLOGGING clause in the CREATE TABLE AS SELECT statement does NOT prevent the generation of redo log entries – 178KB of redo was written when I reproduced the author’s test case.  That clause will minimize the amount of redo generated, not eliminate the generation of redo. (page 358)
  • The book includes a good test case example showing the effects of row level triggers and excessive indexing.  The author possibly could have made the performance difference point a bit clearer (or more dramatic) by using single row inserts, rather than INSERT INTO … SELECT. (reference pages 359-365)
  • The book states, “We will execute SQL*Plus from the /oracle/home/ directory…” – it appears from looking at the script that the author intended to write “/home/oracle/ directory”.  The bang (!) shortcut for the SQL*Plus host command might not work on some operating systems such as VMS and Windows, so ideally the author would have spelled out the word host (or the first two characters of the word) in the scripts. (pages 366-367)
  • The book states that automatic memory management should not be used with Oracle Multithreaded server.  Considering that the “Multithreaded server” name was replaced with “Shared server” with the release of Oracle Database 9.0.1, several releases before Automatic Memory Management was introduced, this statement in the book is a bit confusing. (page 381)
  • The select from V$SQL_PLAN using only HASH_VALUE in the WHERE clause is very odd; it would seem that using the SQL_ID and CHILD_NUMBER columns would make more sense.  Considering that the book is targeting Oracle Database 11.2, using DBMS_XPLAN rather than directly querying V$SQL_PLAN probably would have yielded more useful results than simply listing the operation, owner, and object name. (page 384)
  • The sample code showing how to pin a cursor in the library cache using DBMS_SHARED_POOL.KEEP is missing a single quote after the SQL statement’s hash value and the letter C (or any letter other than P, T, R, or Q) in single quotes.  As a result, the error “ORA-01756: quoted string not properly terminated” will be returned when that sample code is executed.  In addition to showing the sample code, the author should have provided guidance regarding why this feature should or should not be used, and when the feature should or should not be used (this is a frequent problem throughout the book – showing the how without showing the “why”, “when”, and “how much” details). (pages 390, 394)
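A corrected sketch of the call (the address and hash value are illustrative placeholders; for cursors, the name parameter is the concatenation of the ADDRESS and HASH_VALUE columns from V$SQLAREA, and the flag ‘C’ identifies a cursor):

```sql
-- pin a cursor in the library cache; requires execute privilege on
-- DBMS_SHARED_POOL (created by the dbmspool.sql script)
EXEC DBMS_SHARED_POOL.KEEP('000000007C9FD5A8,1514306888', 'C');
```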
  • The book states, “The UGA is located in the PGA when using dedicated server environments, and inside the Shared Pool when using shared servers.”  A similar sentence appears on the previous page.  It would be a bit more accurate to state that in a shared server environment, the Large Pool (if configured) contains the session UGA.  With a value set for the SGA_TARGET parameter, the Large Pool should be configured automatically, with its size indicated by the __LARGE_POOL_SIZE hidden parameter. (reference page 397)
  • The query that calculates the MAX of the “session uga memory max” statistic found in V$SESSTAT is limited in usefulness for a couple of reasons: 1) PGA memory used by sessions is allocated as needed and deallocated when no longer needed – summing the maximum value for the various sessions does not indicate the concurrent high watermark of session memory usage, rather it overstates the high watermark; 2) when a session disconnects, the statistics representing the PGA memory used by that session will disappear from V$SESSTAT. (page 399)
  • The formula for calculating the buffer cache hit ratio, while it appeared in the Oracle Database 9.2 documentation, is incorrect.  The author used the “physical reads”, “physical reads direct”, “physical reads direct (lob)” and “session logical reads” statistics from V$SYSSTAT, while the Oracle Database 11.2 documentation uses the statistics “physical reads cache”, “consistent gets from cache”, and “db block gets from cache” for calculating the buffer cache hit ratio.  The usefulness of the derived metric is limited at best. (page 400)
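The 11.2 documentation’s version of the calculation is sketched below; as noted above, the derived ratio has limited diagnostic value on its own:

```sql
-- buffer cache hit ratio per the 11.2 documentation's statistics;
-- values are cumulative since instance startup, so delta values over
-- a measured interval would be more meaningful
SELECT 1 - (phy.value / (cg.value + dbg.value)) AS buffer_cache_hit_ratio
FROM  (SELECT value FROM v$sysstat
       WHERE name = 'physical reads cache') phy,
      (SELECT value FROM v$sysstat
       WHERE name = 'consistent gets from cache') cg,
      (SELECT value FROM v$sysstat
       WHERE name = 'db block gets from cache') dbg;
```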
  • The book states, “The main use of the LRU list is to add blocks to the LRU end using a full table scan, while the normal operations add blocks to the MRU end of the list, and hence they are quickly replaced by blocks required for subsequent operations.”  This statement is similar to one made on page 170 of the book.  Recent releases (9.0.1 and above?) of Oracle Database perform mid-point insertions, even with full table scans.  If the CACHE keyword is specified when an object is created or altered, then that object’s blocks will be placed at the MRU end of the list when the object’s blocks are accessed from disk. (reference page 408).
  • The book states, “On this basis, we will establish a strategy to distribute the Oracle files on different disks to help improve performance.”  Perhaps a suggestion to use SAME (stripe and mirror everything), as recommended by Oracle Corporation, would be a better approach. (page 412)
  • The book states that the TIMED_STATISTICS parameter should be set to a value of TRUE in order to diagnose I/O performance issues.  While it is true that this parameter should have a value of TRUE, that parameter should default to a value of TRUE when the value of the STATISTICS_LEVEL parameter is set at its default value of TYPICAL, or adjusted to a value of ALL. (page 412)
  • The book states, “The following steps will show how to destribute Oravle files to increase performance.”  That sentence includes two uncorrected typos. (page 412)
  • The query of V$FILESTAT and the query of V$TEMPSTAT are of limited value because there was no attempt to calculate delta values for the statistics.  The queries do not take into consideration the effects of backup activity and statistics collection, because the delta period is since the opening of the database. (page 413)
  • Rather than just arbitrarily moving the redo logs from one directory belonging to the database TESTDB to another directory, apparently on the same hard drive volume, which belongs to the database TESTDB2 (assuming that optimal flexible architecture approaches are in use), the author should explain the factors that influence the decision whether or not to move the redo logs. (page 413)
  • Justification should be provided for changing the archived redo log locations.  Justification should be provided for moving the datafiles for a tablespace – where is the evidence that the current location of the datafiles is causing a performance problem that will be addressed by changing the directory in which the datafiles are located? (page 414)
  • When describing the output of the V$FILESTAT query, the author states, “For each file, we can see the number of physical reads and writes performed and the time spent (in milliseconds) for these operations.”  V$FILESTAT displays the time in centiseconds (1/100 of a second) not in milliseconds (1/1000 of a second).  The output from V$FILESTAT shows that none of the files were heavily accessed, so there is probably little value in using the output to make the decision that one or another datafile should be relocated. (page 415)
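A sketch of the query with the centisecond values converted to the milliseconds the book intended to report:

```sql
-- READTIM and WRITETIM are reported in centiseconds (1/100 second);
-- multiply by 10 to express the values in milliseconds
SELECT f.name,
       s.phyrds, s.readtim * 10 AS read_ms,
       s.phywrts, s.writetim * 10 AS write_ms
FROM   v$filestat s, v$datafile f
WHERE  s.file# = f.file#;
```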
  • The book states, “When we heavily access data files and we know which file is executing the query against the statistics (shown in step 2 and step 3), we can gain performance by separating the heavily-accessed data files on different disks.”  This statement is confusing – a file does not execute a query.  Just because the blocks in two datafiles are heavily accessed, that does not mean that there will be contention between the datafiles if those datafiles reside on the same disk volume.  Considering the number of times the author recommended relocating files to “different disks” it might be interesting to understand how many disk volumes the author considers as being essential to efficient operation of the database. (page 417)
  • The book states, “We distribute the data files based on disk controller allocation – distributing them on different mount points managed by the same disk controller brings no performance improvement.”  This statement is limited in accuracy, and because the sentence is stated as an absolute, it is incorrect.  The current and maximum throughput of the disk controller are but two of several factors that might have an impact on performance. (page 418)
  • The book states, “The CKPT, LGWR, and ARCn processes can only read and write control files.”  I think that the author intended something other than what is written in this quote – otherwise someone might wonder which process writes to the online redo logs, and which process archives those redo logs once filled. (page 418)
  • The book states, “LGWR writes online redo logs sequentially, using a RAID 5 on the disks, where online redo logs are stored.”  I think that this sentence requires a lot of reworking before it is understandable.  The book goes on to state, “This can lead to poor performance due to the slower write times that characterize this type of disk array [RAID 5] – using RAID 0+1 is preferable.”  Suggesting the more robust RAID 10, rather than RAID 0+1, would be a good idea in this sentence. (page 419)
  • The author suggested querying the DBA_HIST_SEG_STAT view without mentioning the licensing requirements for accessing that AWR view. (page 420)
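Before querying the DBA_HIST_* views, the diagnostic pack position can at least be checked from inside the database. A sketch (note that the parameter controls feature enablement, not the license itself):

```sql
-- CONTROL_MANAGEMENT_PACK_ACCESS should report DIAGNOSTIC+TUNING (or
-- at least DIAGNOSTIC) before AWR views such as DBA_HIST_SEG_STAT
-- are queried.
SELECT value
FROM   v$parameter
WHERE  name = 'control_management_pack_access';
```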
  • The book states, “If we have tables and indexes on the same disk, we need to read and write in two different places on the same disk.  By dividing the work between two disks, we can perform an index range scan faster than when the index and table are on the same disk.”  While a statement similar to this one has appeared in a couple of older Oracle Database related books, the statement has been shown to be incorrect for various reasons. (page 422)
  • The book states, “The latter [hardware-based RAID] offers guaranteed performance and no overhead on the CPU.”  Hardware-based RAID does not offer guaranteed performance, although it should have less CPU overhead than software based RAID. (page 423)
  • The book states, “Starting with step 1, in terms of performance the best RAID level for an Oracle database is RAID 0+1, also known as RAID 10.”  RAID 0+1 and RAID 10 are not the same – the two approaches have different fault tolerances.  The same mistake is made on page 425.  (page 423)
  • The book states, “It’s not a good idea to store redo logs and temporary segments on RAID level 1 configurations, because redo logs should be multiplexed on different disks, and there is no need to protect temporary segments from failures.”  Using multiple RAID 1 arrays for the online redo logs is certainly possible – it is not clear if the book suggests that the location where the online redo logs are located should not be protected by one of the redundant RAID levels (RAID 1, RAID 10, etc.).  It could easily be argued that the tempfiles should be located on an array with a redundant RAID level; it is true that the tempfiles can be recreated, but losing the drive hosting the tempfile could be disastrous for the connected sessions. (page 424)
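Multiplexing the online redo logs across multiple RAID 1 arrays is straightforward. A sketch, with hypothetical mount points (/u02 and /u03 each backed by a separate RAID 1 array, so every group has a member on two independent arrays):

```sql
-- The /u03 paths below are hypothetical examples; the existing
-- members are assumed to reside under /u02 on a different array.
ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/TESTDB/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/TESTDB/redo02b.log' TO GROUP 2;
```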
  • The description of RAID 5 should have described how losing one drive in the array affects performance, and how the performance is affected when a second drive in the array is lost. (page 424)
  • The book states, “A checkpoint is used to ensure consistency in the database; during this operation, all data files are synchronized with the data blocks in memory.”  This statement needs a bit of clarification, beginning with the meaning of “synchronized with the data blocks in memory.”  There are several types of checkpoints, including an object level checkpoint, which takes place prior to the execution of a parallel query for all tables that are referenced in the query. (page 428)
  • The script demonstrating how checkpoints work has a couple of flaws.  The script changes the value of the LOG_CHECKPOINTS_TO_ALERT parameter with a scope of SPFILE, and does not mention that the database must be bounced after executing this command.  The script issues an “ALTER SYSTEM SWITCH LOGFILE” command, rather than an “ALTER SYSTEM CHECKPOINT” command. (page 428)
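LOG_CHECKPOINTS_TO_ALERT is dynamically modifiable, so the demonstration works without bouncing the database when the parameter is changed with SCOPE=BOTH and a checkpoint is requested explicitly. A corrected sketch:

```sql
-- The parameter is system-modifiable, so SCOPE=BOTH takes effect
-- immediately -- no instance restart is required.
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;

-- Request a checkpoint directly, rather than relying on the side
-- effects of a log switch.
ALTER SYSTEM CHECKPOINT;
```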
  • The book states, “In the following screenshot, you can see that the checkpoint information was written to the alert log.”  The screen capture shows the log switch, but not the checkpoint message. (page 430)
  • The book essentially states that the value of the LOG_CHECKPOINT_INTERVAL multiplied by the redo block size (typically 512 bytes, but may be 1024 or 4096 in later releases) cannot exceed 90% of the smallest redo log file – that statement conflicts with the Oracle Database 10.2 documentation.  Regardless of that fact, both the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT parameters described on the page are deprecated as of Oracle Database 10.1 per Metalink (MOS) Doc ID 472821.1. (page 431)
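The redo block size used in that calculation can be confirmed rather than assumed; Oracle Database 11.2 exposes it directly in V$LOG (a sketch, still subject to the deprecation noted above):

```sql
-- V$LOG.BLOCKSIZE (introduced in 11.2) reports the redo log block
-- size; typical values are 512, 1024, or 4096 bytes.
SELECT group#, blocksize, bytes/1024/1024 AS size_mb
FROM   v$log;
```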
  • The book states, “The symptoms of I/O issues, related to slow disks on which redo log files are written, are highlighted by a high value for log file sync statistic. Often a high value for the log file parallel write is confirmed by a high value for this parameter.”  The log file sync wait event probably should not be referred to as either a statistic or a parameter, because those labels have very specific meaning in Oracle Database terminology.  The excessive time spent in the log file sync wait event could be caused by several factors other than slow disks where the redo logs are located, including excessive CPU competition and simply excessive COMMITs (or ROLLBACKs) issued by a session. (page 434)
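One way to separate the “slow redo disks” explanation from the other possible causes is to compare the two wait events side by side. A sketch using V$SYSTEM_EVENT, with averages computed in milliseconds from the microsecond-based column:

```sql
-- If 'log file sync' averages are high while 'log file parallel
-- write' averages are low, the time is being lost somewhere other
-- than the redo disks (CPU competition, commit frequency, etc.).
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000 / NULLIF(total_waits, 0), 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```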


