Simple SQL with and without Inline Views 2

October 26, 2011

(Back to the Previous Post in the Series)

In the previous article of this series we examined a couple of moderately simple data retrieval requirements, and used inline views and/or analytic functions to satisfy them.  I recently saw another opportunity to help a person on the ERP mailing list with a SQL statement.  The original poster (OP) provided a screen capture that contained information similar to the following:

CUST_ORDER_ID   PART_ID         TYPE CLASS WAREHOUSE_ID LOCATION_ID  QTY_ON_HAND TRANSACTI
--------------- --------------- ---- ----- ------------ ------------ ----------- ---------
ZQHZG           2J-2908         O    I     FIB          AB7A10                30 21-JUN-10
ZQHZZ           2J-2909         O    I     FIB          AB7A10                 1 21-JUN-10
ZIHKR           2J0836          O    I     HIBB         A77Y3               1092 30-NOV-07
ZQKZH           2J0836          O    I     HIBB         A77Y3               1092 10-JUN-08
ZQIHZ           2J0836          O    I     HIBB         A77Y3               1092 23-NOV-09
ZKOHI           2J4120          O    I     PIPABTONA    AB7A10                 2 16-NOV-95
ZKOHQ           2J4129          O    I     PIPABTONA    AB7A10                 2 16-NOV-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 25-SEP-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 25-SEP-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 26-OCT-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 26-OCT-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 12-DEC-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 12-DEC-95
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 27-FEB-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 04-APR-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 04-APR-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 31-MAY-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 31-MAY-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 06-SEP-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 30-SEP-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 30-SEP-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 30-SEP-96
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 12-JAN-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 12-FEB-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 04-MAR-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 05-MAY-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 05-MAY-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 18-AUG-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 18-AUG-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 29-SEP-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785          O    I     HIBB         AB7A10                 2 02-DEC-98
ZIRZQ           3J8547          O    I     HIBB         A77Y2                252 18-OCT-07
ZZUQG           4G0007          O    I     HIBB         AB7A10                 2 08-NOV-99
ZZUQG           4G0007          O    I     HIBB         AB7A10                 2 30-NOV-99 
...

Initially, the OP requested that the most recent TRANSACTION_DATE (the last column in the above output) for the PART_ID value be shown on each row of the output, rather than the TRANSACTION_DATE that is directly associated with a specific row.  However, there are two problems:

  • The OP did not mention what tables were used as the data source for the included screen capture.
  • The OP did not mention whether a recent release of Oracle Database or some other brand of database was in use.

For times like these, it helps a lot to be familiar with the data model used in the database.  The INVENTORY_TRANS table in this particular ERP system has columns that match several of the columns that were included in the OP’s screen capture: CUST_ORDER_ID, PART_ID, TYPE, CLASS, WAREHOUSE_ID, LOCATION_ID, and TRANSACTION_DATE.  The question remains: where did the QTY_ON_HAND column come from?  That column is not in the INVENTORY_TRANS table.  We could find out, at least on Oracle Database, with a SQL statement similar to the following:

SELECT
  DTC.TABLE_NAME,
  DT.TABLE_NAME
FROM
  DBA_TAB_COLUMNS DTC,
  DBA_TABLES DT
WHERE
  DTC.COLUMN_NAME='QTY_ON_HAND'
  AND DTC.OWNER='SCHEMA_OWNER_HERE'
  AND DTC.OWNER=DT.OWNER(+)
  AND DTC.TABLE_NAME=DT.TABLE_NAME(+)
ORDER BY
  DTC.TABLE_NAME;

TABLE_NAME                     TABLE_NAME
------------------------------ ----------
CR_EC_LINE_PART
CR_OM_CO_LINE_PART
CR_PART_PO_LINE
CR_WO_PART
PART                           PART
PART_USER_DEFINED_VIEW 

Note that in the above, I had to join the DBA_TAB_COLUMNS dictionary view with the DBA_TABLES dictionary view (I could have used DBA_OBJECTS instead) to determine that 5 of the 6 rows returned from the DBA_TAB_COLUMNS view are in fact not tables, but views.  Thus, the QTY_ON_HAND column in the OP’s screen capture must have originated from the PART table (unless, of course, the column in the screen capture is actually an aliased calculation).
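
Had I instead joined to the DBA_OBJECTS view, the OBJECT_TYPE column would directly label each object as a TABLE or VIEW – an untested sketch of that variation:

SELECT
  DTC.TABLE_NAME,
  DO.OBJECT_TYPE
FROM
  DBA_TAB_COLUMNS DTC,
  DBA_OBJECTS DO
WHERE
  DTC.COLUMN_NAME='QTY_ON_HAND'
  AND DTC.OWNER='SCHEMA_OWNER_HERE'
  AND DTC.OWNER=DO.OWNER
  AND DTC.TABLE_NAME=DO.OBJECT_NAME
  AND DO.OBJECT_TYPE IN ('TABLE','VIEW')
ORDER BY
  DTC.TABLE_NAME;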

Now that the data sources are known for the query, we can take a best guess approach at reproducing the output that the OP provided in the screen capture:

SELECT
  IT.CUST_ORDER_ID,
  IT.PART_ID,
  IT.TYPE,
  IT.CLASS,
  IT.WAREHOUSE_ID,
  IT.LOCATION_ID,
  P.QTY_ON_HAND,
  IT.TRANSACTION_DATE
FROM
  INVENTORY_TRANS IT,
  PART P
WHERE
  IT.CUST_ORDER_ID IS NOT NULL
  AND IT.TYPE='O'
  AND IT.CLASS='I'
  AND IT.PART_ID=P.ID
  AND P.QTY_ON_HAND>0
ORDER BY
  IT.PART_ID,
  IT.TRANSACTION_DATE; 

Next, there are a couple of ways to have the most recent transaction date appear on each row of the output (partially dependent on the database version).  First, let’s build a query that returns the most recent transaction date where the TYPE is 'O', the CLASS is 'I', and the CUST_ORDER_ID column does not contain a NULL:

SELECT
  PART_ID,
  MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
FROM
  INVENTORY_TRANS
WHERE
  TYPE='O'
  AND CLASS='I'
  AND CUST_ORDER_ID IS NOT NULL
GROUP BY
  PART_ID
ORDER BY
  PART_ID; 

PART_ID         LAST_TRAN
--------------- ---------
0000009         29-DEC-00
005035008005    29-MAY-98
005035008006    29-MAY-98
00576649-0080   20-OCT-11
00576649-0088   08-JUN-11
00576649-8005   14-JAN-11
007580398       17-JAN-03
0099229940      01-NOV-95
0099229990      01-NOV-95
0108556         28-APR-00
01550867KM      15-NOV-05
01552316KM      02-OCT-00
01552346KM      30-APR-03
01552369KM      30-APR-01
01558943M       07-JAN-10
01561230KM      30-OCT-01
01563001M       08-MAR-10
01563882M       01-APR-10
01566790KM      08-SEP-08
01569945M       01-APR-10
01583508TO      11-FEB-05
01780151TO      19-JUL-07
...

Next, we will just plug the above inline view into the FROM clause of our original SQL statement and add an additional predicate into the WHERE clause to tell the query optimizer how to associate the inline view with the other tables in the SQL statement:

SELECT
  IT.CUST_ORDER_ID,
  IT.PART_ID,
  IT.QTY,
  IT.WAREHOUSE_ID,
  IT.LOCATION_ID,
  P.QTY_ON_HAND,
  MIT.LAST_TRANSACTION_DATE
FROM
  INVENTORY_TRANS IT,
  PART P,
  (SELECT
     PART_ID,
     MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
   FROM
     INVENTORY_TRANS
   WHERE
     TYPE='O'
     AND CLASS='I'
     AND CUST_ORDER_ID IS NOT NULL
   GROUP BY
     PART_ID) MIT
WHERE
  IT.CUST_ORDER_ID IS NOT NULL
  AND IT.TYPE='O'
  AND IT.CLASS='I'
  AND IT.PART_ID=P.ID
  AND P.QTY_ON_HAND>0
  AND IT.PART_ID=MIT.PART_ID
ORDER BY
  IT.PART_ID,
  IT.CUST_ORDER_ID,
  IT.TRANSACTION_DATE; 

CUST_ORDER_ID   PART_ID             QTY WAREHOUSE_ID LOCATION_ID  QTY_ON_HAND LAST_TRAN
--------------- ------------ ---------- ------------ ------------ ----------- ---------
ZQHZG           2J-2908               1 FIB          AB7A10                30 21-JUN-10
ZQHZZ           2J-2909               1 FIB          AB7A10                 1 21-JUN-10
ZIHKR           2J0836               48 HIBB         A77Y3               1092 23-NOV-09
ZQIHZ           2J0836               72 HIBB         A77Y3               1092 23-NOV-09
ZQKZH           2J0836              111 HIBB         A77Y3               1092 23-NOV-09
ZKOHI           2J4120                2 PIPABTONA    AB7A10                 2 16-NOV-95
ZKOHQ           2J4129                2 PIPABTONA    AB7A10                 2 16-NOV-95
ZKGQG           2J9785                9 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                1 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                7 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                2 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                6 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               10 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                8 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               10 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               12 HIBB         AB7A10                 2 02-DEC-98
...

Since I am using Oracle Database 11.2.0.2, I can take advantage of analytic functions to further simplify the SQL statement:

SELECT
  IT.CUST_ORDER_ID,
  IT.PART_ID,
  IT.QTY,
  IT.WAREHOUSE_ID,
  IT.LOCATION_ID,
  P.QTY_ON_HAND,
  MAX(IT.TRANSACTION_DATE) OVER (PARTITION BY IT.PART_ID) AS LAST_TRANSACTION_DATE
FROM
  INVENTORY_TRANS IT,
  PART P
WHERE
  IT.CUST_ORDER_ID IS NOT NULL
  AND IT.TYPE='O'
  AND IT.CLASS='I'
  AND IT.PART_ID=P.ID
  AND P.QTY_ON_HAND>0
ORDER BY
  IT.PART_ID,
  IT.CUST_ORDER_ID,
  IT.TRANSACTION_DATE; 

CUST_ORDER_ID   PART_ID             QTY WAREHOUSE_ID LOCATION_ID  QTY_ON_HAND LAST_TRAN
--------------- ------------ ---------- ------------ ------------ ----------- ---------
ZQHZG           2J-2908               1 FIB          AB7A10                30 21-JUN-10
ZQHZZ           2J-2909               1 FIB          AB7A10                 1 21-JUN-10
ZIHKR           2J0836               48 HIBB         A77Y3               1092 23-NOV-09
ZQKZH           2J0836              111 HIBB         A77Y3               1092 23-NOV-09
ZQIHZ           2J0836               72 HIBB         A77Y3               1092 23-NOV-09
ZKOHI           2J4120                2 PIPABTONA    AB7A10                 2 16-NOV-95
ZKOHQ           2J4129                2 PIPABTONA    AB7A10                 2 16-NOV-95
ZKGQG           2J9785                9 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                1 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                7 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                2 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                6 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               10 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                8 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               10 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                4 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               12 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785                2 HIBB         AB7A10                 2 02-DEC-98
ZKGQG           2J9785               11 HIBB         AB7A10                 2 02-DEC-98
...

-----

Let’s assume that the OP’s intended result changes – the OP just wants to know which PART_IDs have not had any transactions in the last 89 days.  So, the OP is interested in those PART_IDs with a last transaction that was either 90 or more days ago, or there has never been a transaction for the PART_ID.  Oh, scrap… start over.

First, we will identify the list of PART_IDs that have had transactions in the last 89 days (CLASS values of ‘A’ are adjustments, not true inventory transactions, so rows with those CLASS values will be excluded).  Once we have that list, we can simply subtract the list from the entire list of PART_IDs:

SELECT DISTINCT
  PART_ID
FROM
  INVENTORY_TRANS
WHERE
  TRANSACTION_DATE > TRUNC(SYSDATE-90)
  AND PART_ID IS NOT NULL
  AND CLASS<>'A'; 

PART_ID
---------
2J-2908
9F4810
867-8649
90772357L
90772341L
90772297L
8475047
8145432
90772326L
9340730
90772357
90772347
782-9484
...

The OP is interested in seeing the actual warehouse location containing the PART_IDs, so we need to use the PART_LOCATION table, not the PART table as we did before.  We then need to create a left outer join between the above list and the PART_LOCATION table – essentially, we will find cases where there is a row in the PART_LOCATION table without a corresponding part ID in the above list.  ANSI syntax will be used here for the outer join, because it is still unknown whether the OP is running an Oracle Database backend or a SQL Server backend:

SELECT
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID,
  IT.PART_ID,
  PL.QTY
FROM
  PART_LOCATION PL
LEFT OUTER JOIN
  (SELECT DISTINCT
     PART_ID
   FROM
     INVENTORY_TRANS
   WHERE
     TRANSACTION_DATE > TRUNC(SYSDATE-90)
     AND PART_ID IS NOT NULL
     AND CLASS<>'A') IT
ON
  PL.PART_ID=IT.PART_ID
WHERE
  IT.PART_ID IS NULL
  AND PL.QTY<>0
ORDER BY
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID; 

PART_ID                WAREHOUSE_ID LOCATION_ID  PART_ID             QTY
---------------------- ------------ ------------ ------------ ----------
#2                     HIBB         AB7A10                             4
#46954                 HIBB         TOOL CRAB                         15
#4HWG LHRK GROUND WIRK HIBB         TOOL CRAB                        250
#4THHN                 HIBB         AB7A10                          2460
#5                     HIBB         AB7A10                             3
#6                     HIBB         TOOL CRAB                          4
#649                   HIBB         TOOL CRAB                          2
#655                   HIBB         TOOL CRAB                          4
#709                   HIBB         TOOL CRAB                          2
#889                   HIBB         TOOL CRAB                          1
#89 KJTKNSION          HIBB         TOOL CRAB                          3
#90                    HIBB         AB7A10                             4
#92 KJTKNSION          HIBB         TOOL CRAB                          2
*G-WM370H-MT           HIBB         AB7A10                             2
.235-H522              BURN         AB7A10                            .5
.235RD-9K0045          BURN         AB7A10                            .7
.260RDJ.963-HSTMH57    BURN         AB7A10                        2.5625
.35-H596-845           PIPABTONA    AB7A10                         4.039
...

In the above, notice the seemingly odd restriction in the WHERE clause: “IT.PART_ID IS NULL” – when resolving the left outer join, a row must be found on the left side of the join, but no corresponding row may be found on the right side.  This is the “minus” syntax that I selected to use here (the SQL MINUS operator could not be used because the WAREHOUSE_ID, LOCATION_ID, and QTY column values returned by the query of the INVENTORY_TRANS table would have to exactly match the values found in the PART_LOCATION table… and that is more difficult to accomplish).
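
The same anti-join may be expressed with NOT EXISTS – the following is just a sketch of that equivalent formulation, which both Oracle Database and SQL Server accept (although the TRUNC(SYSDATE-90) expression would need a SQL Server equivalent, such as a DATEADD expression), and the optimizer will often transform either form into the same execution plan:

SELECT
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID,
  PL.QTY
FROM
  PART_LOCATION PL
WHERE
  PL.QTY<>0
  AND NOT EXISTS
    (SELECT
       1
     FROM
       INVENTORY_TRANS IT
     WHERE
       IT.PART_ID=PL.PART_ID
       AND IT.TRANSACTION_DATE > TRUNC(SYSDATE-90)
       AND IT.CLASS<>'A')
ORDER BY
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID;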

If I were certain that the OP was running an Oracle Database backend, I would have suggested the following syntax, rather than the ANSI join syntax shown earlier:

SELECT
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID,
  IT.PART_ID,
  PL.QTY
FROM
  PART_LOCATION PL,
  (SELECT DISTINCT
     PART_ID
   FROM
     INVENTORY_TRANS
   WHERE
     TRANSACTION_DATE > TRUNC(SYSDATE-90)
     AND PART_ID IS NOT NULL
     AND CLASS<>'A') IT
WHERE
  PL.PART_ID=IT.PART_ID(+)
  AND IT.PART_ID IS NULL
  AND PL.QTY<>0
ORDER BY
  PL.PART_ID,
  PL.WAREHOUSE_ID,
  PL.LOCATION_ID; 

Why the NULL values in the fourth column (the second PART_ID column) of the output?  That NULL simply means that the query is working exactly as intended.  The “IT.PART_ID IS NULL” restriction keeps only those part locations without a matching PART_ID in the inline view, so the fourth column is necessarily NULL on every row that survives the filter.  Without that restriction, the fourth column would also show the PART_IDs that have had transactions in the last 89 days – but such rows should not be included in the output if the output is to be used as intended.





Simple SQL with and without Inline Views

October 21, 2011

(Forward to the Next Post in the Series)

Sometimes it is interesting to take a step back from some of the more difficult to construct SQL statements, and just put together something simple.  The following request recently arrived through an ERP mailing list:

“I want to find the last ship date for all the open part numbers in Customer Order Entry. The last ship date is not necessarily from the Open order in Customer Order Entry…”

The original poster (OP) provided more information, and even described what had already been tried as a solution.  It is not exceptionally clear what the OP wanted, but because I have several years of experience working with this particular ERP system, I had a feeling that the OP might be headed in the wrong direction.  This ERP system uses well normalized tables, which makes it a bit more difficult to locate the most recent ship date for a part number when that date is recorded in the rows belonging to one customer order while the rows belonging to a different customer order are being reported.  Unless, of course, we remember that there is also a transaction table that records every inventory movement of part numbers through the system – we just need to know what those rows in the transaction table look like.  In this case, the rows have the following characteristics:

  • The CUST_ORDER_ID column value must not be NULL
  • The TYPE column must have a value of ‘O’
  • The CLASS column must have a value of ‘I’

With that knowledge, we could then find the most recent date that any part number shipped with a SQL statement similar to the following:

SELECT
  PART_ID,
  MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
FROM
  INVENTORY_TRANS
WHERE
  CUST_ORDER_ID IS NOT NULL
  AND TYPE='O'
  AND CLASS='I'
GROUP BY
  PART_ID;

In the above, the AS keyword is optional on an Oracle Database platform, but it is not optional on some other database platforms.

With an Oracle Database backend, inline views may be used to permit the above SQL statement to retrieve additional information for a parent SQL statement.  If we wrap the above SQL statement in ( ) and place it in the FROM clause of the SQL statement, Oracle will treat the data returned by the above SQL statement quite similarly to how it would handle a regular table (the Oracle query optimizer might decide to rewrite the combined SQL statement into an equivalent form that no longer contains an inline view).  There is always a chance that a part number that is on order may have never shipped to a customer, so we will need to outer join (in this case using a left outer join) to the newly created inline view:

SELECT
  COL.CUST_ORDER_ID,
  COL.PART_ID,
  LS.LAST_TRANSACTION_DATE
FROM
  CUSTOMER_ORDER CO,
  CUST_ORDER_LINE COL,
  (SELECT
    PART_ID,
    MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
  FROM
    INVENTORY_TRANS
  WHERE
    CUST_ORDER_ID IS NOT NULL
    AND TYPE='O'
    AND CLASS='I'
  GROUP BY
    PART_ID) LS
WHERE
  CO.ID=COL.CUST_ORDER_ID
  AND CO.STATUS IN ('U','F','R')
  AND COL.LINE_STATUS='A'
  AND COL.ORDER_QTY>COL.TOTAL_SHIPPED_QTY
  AND COL.PART_ID=LS.PART_ID(+);

In the above, I joined the CUSTOMER_ORDER and CUST_ORDER_LINE tables to the inline view that I aliased as LS.  A third table, CUST_LINE_DEL, which optionally contains the delivery schedule for some of the rows in the CUST_ORDER_LINE table, could also have been outer joined to the CUST_ORDER_LINE table.

Running SQL Server, or just feeling ANSI?  If so, the above may be rewritten as follows:

SELECT
  COL.CUST_ORDER_ID,
  COL.PART_ID,
  LS.LAST_TRANSACTION_DATE
FROM
  CUSTOMER_ORDER CO
JOIN
  CUST_ORDER_LINE COL
ON
  CO.ID=COL.CUST_ORDER_ID
LEFT OUTER JOIN
  (SELECT
    PART_ID,
    MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
  FROM
    INVENTORY_TRANS
  WHERE
    CUST_ORDER_ID IS NOT NULL
    AND TYPE='O'
    AND CLASS='I'
  GROUP BY
    PART_ID) LS
ON
  COL.PART_ID=LS.PART_ID
WHERE
  CO.STATUS IN ('U','F','R')
  AND COL.LINE_STATUS='A'
  AND COL.ORDER_QTY>COL.TOTAL_SHIPPED_QTY;

Unfortunately, the OP is actually using a SQLBase database backend that does not support inline views.  I remember the feeling before I discovered that Oracle Database supported inline views… in that case I would do something like the following:

  1. Create a statically defined view.
  2. Join to that statically defined view just as if the view were a table.

CREATE VIEW CUST_ORDER_PART_LS AS
SELECT
  PART_ID,
  MAX(TRANSACTION_DATE) AS LAST_TRANSACTION_DATE
FROM
  INVENTORY_TRANS
WHERE
  CUST_ORDER_ID IS NOT NULL
  AND TYPE='O'
  AND CLASS='I'
GROUP BY
  PART_ID;

SELECT
  COL.CUST_ORDER_ID,
  COL.PART_ID,
  LS.LAST_TRANSACTION_DATE
FROM
  CUSTOMER_ORDER CO,
  CUST_ORDER_LINE COL,
  CUST_ORDER_PART_LS LS
WHERE
  CO.ID=COL.CUST_ORDER_ID
  AND CO.STATUS IN ('U','F','R')
  AND COL.LINE_STATUS='A'
  AND COL.ORDER_QTY>COL.TOTAL_SHIPPED_QTY
  AND COL.PART_ID=LS.PART_ID(+);

I guess that it is a good exercise once in a while to practice simple SQL.

A second example from the ERP mailing list showed a SQL Server solution for a particular problem.  The particular problem that the SQL Server solution set out to solve is as follows:

“Someone posed an interesting question to me – How can you do a mass insert of Operations? Well most of us know how to manage that without too much trouble but this one came with a couple of caveats – The Master’s first operation (lowest numbered) has to have a particular ResourceID and then a different but specific ResourceID must be on a subsequent Operation. It is after the second operation where the insert must be placed.

Sounds like fun – So I figured that it could be done in a single SQL statement – well after a 4-way join of the Operation table to itself I got it to work how I expected…”

Interesting – I think that there are times in the past when I have used multiple self-joins to a table in order to solve similar SQL problems.  However, there is an easier way using analytic functions.  The following SQL statement attempts to show, for each operation within a work order sub ID: the operation’s position, the sequence number that follows it, the second sequence number that follows it, and the resource ID of the operation that follows:

SELECT
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SPLIT_ID,
  WORKORDER_SUB_ID,
  SEQUENCE_NO,
  RESOURCE_ID,
  ROW_NUMBER() OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                     ORDER BY RESOURCE_ID) AS RN,
  LEAD(RESOURCE_ID,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                            ORDER BY RESOURCE_ID) AS NEXT_RESOURCE_ID,
  LEAD(SEQUENCE_NO,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                            ORDER BY RESOURCE_ID) AS NEXT_SEQ_NO,
  LEAD(SEQUENCE_NO,2) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                            ORDER BY RESOURCE_ID) AS NEXT_NEXT_SEQ_NO
FROM
  OPERATION
WHERE
  WORKORDER_TYPE='M'; 

Next, we only care about those rows where our resource ID of interest (69 in my example) is specified in the first operation on the work order sub ID and the second operation on that work order sub ID specifies the other resource ID that is of interest (FW in my example).  If we are to avoid primary key violations, we should select a new sequence number that is halfway between the next operation and the next-next operation – for example, with a NEXT_SEQ_NO of 777 and a NEXT_NEXT_SEQ_NO of 888 the new sequence number is 777 + ROUND((888-777)/2) = 833:

SELECT
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SPLIT_ID,
  WORKORDER_SUB_ID,
  SEQUENCE_NO,
  NEXT_SEQ_NO,
  NEXT_NEXT_SEQ_NO,
  NEXT_SEQ_NO + ROUND((COALESCE(NEXT_NEXT_SEQ_NO,NEXT_SEQ_NO+10) - NEXT_SEQ_NO)/2) NEW_SEQUENCE_NO,
  'NEW RESOURCE' RESOURCE_ID
FROM
  (SELECT
    WORKORDER_BASE_ID,
    WORKORDER_LOT_ID,
    WORKORDER_SPLIT_ID,
    WORKORDER_SUB_ID,
    SEQUENCE_NO,
    RESOURCE_ID,
    ROW_NUMBER() OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                       ORDER BY RESOURCE_ID) AS RN,
    LEAD(RESOURCE_ID,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_RESOURCE_ID,
    LEAD(SEQUENCE_NO,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_SEQ_NO,
    LEAD(SEQUENCE_NO,2) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_NEXT_SEQ_NO
  FROM
    OPERATION
  WHERE
    WORKORDER_TYPE='M')
WHERE
  RN=1
  AND RESOURCE_ID='69'
  AND NEXT_RESOURCE_ID='FW';

WORKORDER_ WOR WOR WOR SEQUENCE_NO NEXT_SEQ_NO NEXT_NEXT_SEQ_NO NEW_SEQUENCE_NO RESOURCE_ID
---------- --- --- --- ----------- ----------- ---------------- --------------- ------------
231610     0   0   0            10         777              888             833 NEW RESOURCE
237680     0   0   0            10         777              888             833 NEW RESOURCE
32018917X  0   0   0            10         777              888             833 NEW RESOURCE
3201B8920  0   0   0            10         777              888             833 NEW RESOURCE
3201C8765  0   0   0            10         777              888             833 NEW RESOURCE
3201G8639  0   0   0            10         777              888             833 NEW RESOURCE
3201G9003  0   0   0            10         777              888             833 NEW RESOURCE
3201J8772  0   0   0            10         777              888             833 NEW RESOURCE
3201J8850  0   0   0            10         777              888             833 NEW RESOURCE
3201K8669  0   0   0            10         777              888             833 NEW RESOURCE
3201M8281  0   0   0            10         777              888             833 NEW RESOURCE
3201M8489  0   0   0            10         777              888             833 NEW RESOURCE
3201N8516  0   0   0            10         777              888             833 NEW RESOURCE
3201N8776  0   0   0            10         777              888             833 NEW RESOURCE
3201Q8545  0   0   0            10         777              888             833 NEW RESOURCE
3201T8964  0   0   0            10         777              888             833 NEW RESOURCE
3201T8964X 0   0   0            10          20               30              25 NEW RESOURCE
3201V8524  0   0   0            10         777              888             833 NEW RESOURCE
3201V8966  0   0   0            10         777              888             833 NEW RESOURCE
3201V8966X 0   0   0            10          20               30              25 NEW RESOURCE
3201W8967  0   0   0            10         777              888             833 NEW RESOURCE
3201W8967X 0   0   0            10          20               30              25 NEW RESOURCE
3201Y8423  0   0   0            10         777              888             833 NEW RESOURCE
3201Z8996  0   0   0            10         777              888             833 NEW RESOURCE

24 rows selected. 

With the above, we are well on our way to performing a mass insert into this table.  If I recall correctly, the above must first be inserted into another table (a temp table, preferably) and then the rows may be inserted into the OPERATION table (the table that is the row source for the analytic functions used in the SQL statement).  A sketch of that two-step approach follows.
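
In the following sketch, OP_INSERT_STAGE is a hypothetical scratch table name, and the real OPERATION table certainly requires more columns than the handful shown here:

CREATE TABLE OP_INSERT_STAGE AS
SELECT
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SPLIT_ID,
  WORKORDER_SUB_ID,
  NEXT_SEQ_NO + ROUND((COALESCE(NEXT_NEXT_SEQ_NO,NEXT_SEQ_NO+10) - NEXT_SEQ_NO)/2) SEQUENCE_NO,
  'NEW RESOURCE' RESOURCE_ID
FROM
  (SELECT
    WORKORDER_BASE_ID,
    WORKORDER_LOT_ID,
    WORKORDER_SPLIT_ID,
    WORKORDER_SUB_ID,
    SEQUENCE_NO,
    RESOURCE_ID,
    ROW_NUMBER() OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                       ORDER BY RESOURCE_ID) AS RN,
    LEAD(RESOURCE_ID,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_RESOURCE_ID,
    LEAD(SEQUENCE_NO,1) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_SEQ_NO,
    LEAD(SEQUENCE_NO,2) OVER (PARTITION BY WORKORDER_BASE_ID, WORKORDER_LOT_ID, WORKORDER_SPLIT_ID, WORKORDER_SUB_ID
                              ORDER BY RESOURCE_ID) AS NEXT_NEXT_SEQ_NO
  FROM
    OPERATION
  WHERE
    WORKORDER_TYPE='M')
WHERE
  RN=1
  AND RESOURCE_ID='69'
  AND NEXT_RESOURCE_ID='FW';

INSERT INTO OPERATION (
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SPLIT_ID,
  WORKORDER_SUB_ID,
  SEQUENCE_NO,
  RESOURCE_ID)
SELECT
  'M',
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SPLIT_ID,
  WORKORDER_SUB_ID,
  SEQUENCE_NO,
  RESOURCE_ID
FROM
  OP_INSERT_STAGE;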





What Could Go Wrong? Testing the DB_FILE_MULTIBLOCK_READ_COUNT Parameter

October 14, 2011 (Modified October 15, 2011)

When reading various Oracle Database books, I often perform mental exercises related to the book’s contents, asking “what could go wrong?”  That is probably why some of my book reviews have extended to record-setting lengths… who knew that Amazon imposed a limit on book review lengths…

I recently started re-reading the “Troubleshooting Oracle Performance” book after buying a PDF copy of the book.  As previously mentioned, the ultimate reason for re-reading the book is so that I may compose a detailed book review, just as I have done for most of the Oracle Database related books that I have read in the past couple of years.  So far, I have had few opportunities to ask “what could go wrong” – until of course I made it to page 178 during the second read-through of the book.  That page includes a PL/SQL block that tests the performance effects of modifying the DB_FILE_MULTIBLOCK_READ_COUNT parameter from a value of 1 to 32, increasing the value by 1 for each pass through the loop in the PL/SQL block.  What could go wrong?

Let’s build a sample table with 11,000,000 rows:

DROP TABLE T1 PURGE;

CREATE TABLE T1(
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 DATE,
  C5 DATE,
  C6 DATE,
  C7 VARCHAR2(20),
  C8 VARCHAR2(20),
  C9 VARCHAR2(20));

INSERT INTO
  T1
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  TRUNC(SYSDATE)+TRUNC(ROWNUM/100),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A')),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'))
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T1
SELECT
  T1.*
FROM
  T1,
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10);

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',ESTIMATE_PERCENT=>NULL)

The above script might take a minute or two to complete.  Let’s take a look at the extent sizes for table T1:

SELECT
  SEGMENT_NAME SEGMENT,
  COUNT(*) EXTENTS,
  BYTES/1024 EXT_SIZE_KB,
  (COUNT(*) * BYTES)/1048576 TOTAL_MB
FROM
  DBA_EXTENTS
WHERE
  OWNER=USER
  AND SEGMENT_NAME IN ('T1')
GROUP BY
  SEGMENT_NAME,
  BYTES
ORDER BY
  SEGMENT_NAME,
  BYTES;

SEGMENT    EXTENTS EXT_SIZE_KB   TOTAL_MB
------- ---------- ----------- ----------
T1              16          64          1
T1              63        1024         63
T1             120        8192        960
T1               2       65536        128

Now let’s try a slightly modified version of the script that appears in the book:

SET SERVEROUTPUT ON
SET TIMING ON

DECLARE
  L_COUNT PLS_INTEGER;
  L_TIME PLS_INTEGER;
  L_STARTING_TIME PLS_INTEGER;
  L_ENDING_TIME PLS_INTEGER;
BEGIN
  DBMS_OUTPUT.PUT_LINE('DBFMBRC SECONDS');
  FOR L_DBFMBRC IN 1..128
  LOOP
    EXECUTE IMMEDIATE 'ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT='||L_DBFMBRC;
    L_STARTING_TIME := DBMS_UTILITY.GET_TIME();
    SELECT /*+ FULL(T) */ COUNT(*) INTO L_COUNT FROM T1 T;
    L_ENDING_TIME := DBMS_UTILITY.GET_TIME();
    L_TIME := ROUND((L_ENDING_TIME-L_STARTING_TIME)/100);
    DBMS_OUTPUT.PUT_LINE(L_DBFMBRC||' '||L_TIME);
  END LOOP;
END;
/

The output that I received when tested on Oracle Database 11.2.0.2 follows:

DBFMBRC SECONDS
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
10 0
11 0
12 0
13 0
14 0
15 0
16 0
17 0
18 0
19 0
20 0
21 0
22 0
23 0
24 0
25 0
26 0
27 0
28 0
29 0
30 0
31 0
32 0
33 0
34 0
35 0
36 0
37 0
38 0
39 0
40 0
41 0
42 0
43 0
44 0
45 0
46 0
47 0
48 0
49 0
50 0
51 0
52 0
53 0
54 0
55 0
56 0
57 0
58 0
59 0
60 0
61 0
62 0
63 0
64 0
65 0
66 0
67 0
68 0
69 0
70 0
71 0
72 0
73 0
74 0
75 0
76 0
77 0
78 0
79 0
80 0
81 0
82 0
83 0
84 0
85 0
86 0
87 0
88 0
89 0
90 0
91 0
92 0
93 0
94 0
95 0
96 0
97 0
98 0
99 0
100 0
101 0
102 0
103 0
104 0
105 0
106 0
107 0
108 0
109 0
110 0
111 0
112 0
113 0
114 0
115 0
116 0
117 0
118 0
119 0
120 0
121 0
122 0
123 0
124 0
125 0
126 0
127 0
128 0

PL/SQL procedure successfully completed.

What Could Go Wrong?  (I had two or three answers for this question as I was reading the script, but I have a couple more after a bit of experimentation.  Think about what might go wrong without looking in the book.)

Added October 15, 2011:

The “Troubleshooting Oracle Performance” book, to my knowledge, was the first performance tuning book to address Oracle Database 11.1, and this book was released shortly after the release of Oracle Database 11.1.0.6.  While it was not widely known at the time (I do not think that it was documented), Oracle Database 11.1.0.6 might employ serial direct path reads (directly into the PGA) rather than db file scattered reads (into the buffer cache) when performing a full table scan.  Based on an experiment, the DB_FILE_MULTIBLOCK_READ_COUNT parameter influences the maximum number of blocks read in a single direct path read request; the maximum number of blocks read by direct path read appears to be the greatest power of two that is equal to or less than the value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter.  The paragraphs that follow this test case in the book suggest that a parallel full table scan should not be utilized – that implies that the book author is aware that direct path reads might distort the outcome of the test case.  This was the first thought that I had when viewing this test case when re-reading the book.

The effect of table blocks that are already buffered in the buffer cache is also a potential area where this test case might see distorted results (the paragraphs that follow this test case in the book address this issue).  The test table is roughly 1,152MB in size (minus the blocks that are above the high water mark, of course), so how likely is it that some of the table blocks might be cached in the buffer cache either before the first execution of the SELECT from the table T1, or between executions of the SELECT from T1?  Does the answer depend on what was performed prior to the test case – how full is the buffer cache with unrelated blocks?  What is possibly interesting is that in my test case, all of the blocks belonging to the entire table were buffered in the buffer cache prior to the start of the PL/SQL block in the test case.  Without the timing information displayed by SET TIMING ON, it would be difficult to determine whether the DBMS_UTILITY.GET_TIME function simply is not supported on the operating system platform used for the test, or whether, as Timur stated in the comments section, the full table scans were completing so quickly that the timing of the individual runs was being rounded to 0 seconds.  I was careful not to include the output of the SQL*Plus timing during the initial posting of the blog article.  I obtained the following times from repeated PL/SQL executions:

Elapsed: 00:00:39.37

Elapsed: 00:00:37.05

Elapsed: 00:00:37.14

With a 10046 trace enabled, the execution time climbed sharply:

...
125 1
126 1
127 1
128 1

PL/SQL procedure successfully completed.

Elapsed: 00:01:40.16
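
Incidentally, the whole-second rounding in the book’s PL/SQL block can hide sub-second differences; a quick sketch (not from the book) that reports the raw centisecond values returned by DBMS_UTILITY.GET_TIME:

SET SERVEROUTPUT ON

DECLARE
  L_COUNT PLS_INTEGER;
  L_START PLS_INTEGER;
BEGIN
  L_START := DBMS_UTILITY.GET_TIME();
  SELECT /*+ FULL(T) */ COUNT(*) INTO L_COUNT FROM T1 T;
  DBMS_OUTPUT.PUT_LINE('CENTISECONDS: '||(DBMS_UTILITY.GET_TIME() - L_START));
END;
/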

The potential buffering of some or all of the table blocks was the second thought that I had while reading the test case during the re-read of the book.  The paragraph in the book that followed the test case addressed the potential problems of the table blocks being cached in the buffer cache, the operating system cache, a SAN cache, or a similar cache.  As previously mentioned, Oracle Database 11.1.0.6 and later are able to perform serial direct path reads, where previous Oracle Database versions would have used db file scattered reads, so the Oracle Database version is potentially important.  The book previously mentioned on page 175 three items that might cause fewer blocks to be read than what is specified by the DB_FILE_MULTIBLOCK_READ_COUNT parameter – those items might impact the results of the test, if the test table is not representative of the other objects in the database that might be read by a db file scattered read.
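
One way to check how much of table T1 is buffered before a test pass (a sketch – V$BH exposes one row for each cached copy of a block):

SELECT
  COUNT(*) CACHED_BLOCKS
FROM
  V$BH B,
  DBA_OBJECTS O
WHERE
  O.OWNER=USER
  AND O.OBJECT_NAME='T1'
  AND B.OBJD=O.DATA_OBJECT_ID
  AND B.STATUS<>'free';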

While reading the test case, another thought raced through my mind: why is the author experimenting with the DB_FILE_MULTIBLOCK_READ_COUNT parameter at all – just leave it unset and make certain that system statistics are collected so that the MBRC system statistic is used (I believe that this is related to Mohamed’s second point in the comments section).  Since the release of Oracle Database 10.2.0.1 it is possible to leave that parameter unset (and a bug fix in 10.2.0.4 allowed the parameter to be set to 0, which has the same effect as leaving it unset).  The book author covered the possibility of leaving the DB_FILE_MULTIBLOCK_READ_COUNT parameter unset in the paragraphs that followed the test case, and since this book attempts to help with performance problems in Oracle 9.0.1 and later, it probably is a good idea that the test case was included to benefit those who had not yet transitioned all of their databases to Oracle Database 10.2 or later.
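
For reference, a minimal sketch of that alternative (ALTER SYSTEM RESET requires an spfile, and the change takes effect at the next instance restart; DBMS_STATS.GATHER_SYSTEM_STATS in START/STOP mode gathers the workload statistics, including MBRC):

ALTER SYSTEM RESET DB_FILE_MULTIBLOCK_READ_COUNT SCOPE=SPFILE;

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START')

-- run a representative workload for a period of time, then:

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP')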

Result caching, mentioned by Steve Catmull in the comments section, is not something that I had considered while reading the test case.  If I remember correctly, when the RESULT_CACHE_MODE parameter is set to FORCE, the result cache could distort the results of the test case.
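
A quick sketch of a sanity check for that possibility – confirm the parameter value, and set it back to its default at the session level before running the test:

SHOW PARAMETER RESULT_CACHE_MODE

ALTER SESSION SET RESULT_CACHE_MODE=MANUAL;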

I thought that I would include a couple more test cases on 11.2.0.2.  With an unset DB_FILE_MULTIBLOCK_READ_COUNT and the following script:

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

A variety of “direct path read” wait events appear in the resulting trace file.  Two typical wait events found in the trace file follow:

WAIT #362523544: nam='direct path read' ela= 2280 file number=4 first dba=3951616 block cnt=128 obj#=71407 tim=366050651386
WAIT #362523544: nam='direct path read' ela= 2323 file number=4 first dba=3951744 block cnt=128 obj#=71407 tim=366050654201

The above shows the types of wait events that I expected to have when I read the PL/SQL block.

Let’s try again with a specified value for DB_FILE_MULTIBLOCK_READ_COUNT:

ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT=20;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

A variety of “direct path read” wait events appear in the resulting trace file.  Two typical wait events found in the trace file follow:

WAIT #362523544: nam='direct path read' ela= 272 file number=4 first dba=4061184 block cnt=16 obj#=71407 tim=366357357469
WAIT #362523544: nam='direct path read' ela= 372 file number=4 first dba=4061216 block cnt=16 obj#=71407 tim=366357357889

In the above, note that the direct path read wait did not entirely adhere to the modified value of DB_FILE_MULTIBLOCK_READ_COUNT – the actual number of blocks was rounded down to the greatest power of two less than or equal to the specified value (20 rounds down to 16).
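
That rounding may be verified with a quick arithmetic sketch – the greatest power of two that is less than or equal to 20 can be computed directly:

SELECT POWER(2,TRUNC(LOG(2,20))) BLOCK_CNT FROM DUAL;

 BLOCK_CNT
----------
        16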

Let’s try again with another value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter:

ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT=100;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

A variety of “direct path read” wait events appear in the resulting trace file.  Two typical wait events found in the trace file follow:

WAIT #362523544: nam='direct path read' ela= 1206 file number=4 first dba=4050624 block cnt=64 obj#=71407 tim=366563051144
WAIT #362523544: nam='direct path read' ela= 1144 file number=4 first dba=4050688 block cnt=64 obj#=71407 tim=366563052537

In the above, note that the direct path read wait again did not entirely adhere to the modified value of DB_FILE_MULTIBLOCK_READ_COUNT – the actual number of blocks was rounded down to the greatest power of two less than or equal to the specified value (100 rounds down to 64).

Let’s try again with another value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter:

ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT=120;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

A variety of “direct path read” wait events appear in the resulting trace file.  Two typical wait events found in the trace file follow:

WAIT #362523544: nam='direct path read' ela= 1183 file number=4 first dba=4047232 block cnt=64 obj#=71407 tim=366628708554
WAIT #362523544: nam='direct path read' ela= 1280 file number=4 first dba=4047296 block cnt=64 obj#=71407 tim=366628710113

Just for verification:

ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT=128;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

Wait events:

WAIT #362523544: nam='direct path read' ela= 2505 file number=4 first dba=4050560 block cnt=128 obj#=71407 tim=366681412987
WAIT #362523544: nam='direct path read' ela= 2259 file number=4 first dba=4050688 block cnt=128 obj#=71407 tim=366681415722

The above again shows that the number of blocks read by direct path read are rounded down to the nearest power of two.

OK, let’s try the initial test again:

DISCONNECT ALL;
CONNECT ...

Instead of using a PL/SQL block, let’s use regular SQL statements:

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',ESTIMATE_PERCENT=>100)

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE

SELECT /*+ FULL(T1) */ COUNT(*) FROM T1;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE

The results, “direct path read” wait events for both of the selects from table T1:

...
WAIT #357271008: nam='direct path read' ela= 2774 file number=4 first dba=3930496 block cnt=128 obj#=71407 tim=371288846837
WAIT #357271008: nam='direct path read' ela= 2353 file number=4 first dba=3930624 block cnt=128 obj#=71407 tim=371288849736
...
WAIT #357239640: nam='direct path read' ela= 2749 file number=4 first dba=3935360 block cnt=128 obj#=71407 tim=371292356287
WAIT #357239640: nam='direct path read' ela= 2580 file number=4 first dba=3935488 block cnt=128 obj#=71407 tim=371292359340
...

Other testing showed that if the buffer cache was flushed after creating and populating the table, but before collecting statistics, Oracle Database 11.2.0.2 performed serial direct path reads to read the table blocks, rather than reading those blocks into the buffer cache using db file scattered reads.





Book Review: Oracle Database 11g Performance Tuning Recipes (Part 2)

October 6, 2011

Hammering a Square Peg into a Round Hole: Fine Edges are Lost, Gaps in Detail
http://www.amazon.com/Oracle-Database-Performance-Tuning-Recipes/dp/1430236620

(Back to the Previous Post in the Series)

In an effort for my review to be fair, I have completed the review for the second half of the “Oracle Database 11g Performance Tuning Recipes” book (omitting the pages for chapters 11 and 12).  The total review is now 35.5 typewritten pages in length.  Since this is the longest review that I have written for a book, I decided to leave the second half of this book review open for reader comments.

The following is from a comment that I attached to the original review on Amazon in response to a comment that was attached to my review:

That said, there is still a chance that the format could have worked for performance tuning if:

  1. The authors focused on Oracle Database 11.1 and 11.2
  2. The authors focused on performance tuning, rather than telling the reader, for instance, how to select all of the rows from a table (that is for the SQL Recipes book to explain)
  3. The authors, two of whom co-authored the book “RMAN Recipes for Oracle Database 11g”, stayed true to the recipe format that was used in that RMAN book. The RMAN book typically had a lengthy introduction to almost every chapter that more or less set the stage for the recipes in that chapter, providing the background knowledge and theory needed to understand the chapters (reference pages 313-318, the introduction for chapter 11 – viewable through Google books).
  4. The authors of this book are no doubt talented, and I think that if the authors had an extra six to eight months of reading, reviewing, and reworking the recipes, the authors probably would have corrected most of the instances of logic errors, weak development problems, and off-topic nature of the recipes that I mentioned in the book review.
  5. The authors spent some time discussing how a recipe could fail (maybe the reader did not buy that extra cost license, maybe the tip is valid only for versions of Oracle Database before 10.2, etc.).

For other people who read my review, I need to clarify that I am NOT recommending against buying this book – the book will just need to be used differently than originally designed. If you buy the book, after reading a recipe, perform a search of the documentation and various blogs, and most importantly test the material, to see if you are able to confirm or refute what is in the book. There is a good chance that such an approach will result in a better understanding of the topic than would have been the case if the recipes contained no errors AND were more fully developed. As had happened in my blog articles that referenced the alpha copy of this book, the topics in the book may be used as conversation starters to venture far further into the details of Oracle Database performance tuning.

The second half of the review follows, in the same format as the review for the first half of the book.

Data Dictionary Views:

  • V$SORT_USAGE (page 218)
  • V$HM_FINDING, V$HM_RECOMMENDATION (page 240)
  • V$SESSION_LONGOPS (page 310)
  • V$SQLSTATS (page 312)
  • V$SQL_MONITOR (page 313)
  • V$SQL_PLAN_MONITOR (page 316)
  • DBA_HIST_SQLSTAT, DBA_HIST_SQLTEXT (page 319)
  • V$DIAG_INFO (page 328)
  • GV$PX_PROCESS (page 345)
  • DBA_ENABLED_TRACES (page 353)
  • V$BGPROCESS (page 363)
  • DBA_AUTOTASK_CLIENT (page 449)
  • DBA_OPTSTAT_OPERATIONS, DBA_TAB_MODIFICATIONS (page 450)
  • SYS.AUX_STATS$ (page 464)
  • SYS.COL_GROUP_USAGE$ (page 485)
  • V$RESULT_CACHE_OBJECTS (page 510)
  • V$PQ_TQSTAT (page 532)
  • V$PQ_SYSSTAT (page 549)

Parameters:

  • MAX_DUMP_FILE_SIZE, TIMED_STATISTICS (page 328)
  • STATISTICS_LEVEL, DIAGNOSTIC_DEST, USER_DUMP_DEST (page 329)
  • TRACEFILE_IDENTIFIER (page 332)
  • _PX_TRACE (page 346)
  • SQL_TRACE (page 353)
  • LOG_ARCHIVE_TRACE (page 365)
  • OPTIMIZER_MODE (page 447)
  • OPTIMIZER_DYNAMIC_SAMPLING (page 459)
  • DB_FILE_MULTIBLOCK_READ_COUNT (page 466)
  • OPTIMIZER_USE_PENDING_STATISTICS (page 467)
  • OPTIMIZER_INDEX_COST_ADJ (page 468)
  • OPTIMIZER_FEATURES_ENABLE (page 470)
  • PARALLEL_DEGREE_POLICY (page 531)
  • PARALLEL_MIN_TIME_THRESHOLD, PARALLEL_MAX_SERVERS (page 544)
  • PARALLEL_MIN_SERVERS, PARALLEL_SERVERS_TARGET (page 545)

Error Numbers:

  • ORA-01652: unable to extend temp segment (page 219)
  • ORA-03297: file contains used data beyond requested RESIZE value (page 221)
  • ORA-01427: single-row subquery returns more than one row (page 264)
  • ORA-01790: expression must have same datatype as corresponding expression (page 274)
  • ORA-00932: inconsistent datatypes: expected NUMBER got CHAR (page 280)
  • ORA-00060: Deadlock detected (page 362)

Hints:

  • MONITOR, NOMONITOR (page 314)
  • PARALLEL (page 344)
  • OPTIMIZER_FEATURES_ENABLE (page 471)
  • FULL (page 493)
  • INDEX (page 494)
  • NO_INDEX (page 495)
  • ORDERED, LEADING (page 497)
  • USE_NL (page 498)
  • USE_HASH, USE_MERGE (page 499)
  • OPTIMIZER_FEATURES_ENABLE (page 502)
  • FIRST_ROWS (page 503)
  • ALL_ROWS (page 504)
  • APPEND, APPEND_VALUES (page 505)
  • NOAPPEND (page 506)
  • RESULT_CACHE (page 509)
  • DRIVING_SITE (page 513)
  • GATHER_PLAN_STATISTICS (page 518)
  • REWRITE (page 519)
  • STAR_TRANSFORMATION (page 521)
  • PARALLEL, PARALLEL_INDEX (page 526)

Comments, Corrections, and Problems:

  • Recipe 7-5 seems to have borrowed the two SQL statements from a consulting website’s page.  The line breaks and column aliases match, however the table aliases were changed.  The first query is found under the heading “Sort Space Usage by Statement”, and the second query is found under the “Sort Space Usage by Session” heading. (pages 218-219)
  • Recipe 7-8 on page 227 states, “While the database can automatically resolve deadlocks between sessions (by killing one of the sessions holding a needed lock), when there’s a latch or pin on…”  When a deadlock happens, the SQL statement that is executed by one of the sessions is automatically rolled back – the session itself is not killed. (pages 224-229)
  • Recipe 8-1 “Retrieving All Rows from a Table” really does not belong in an Oracle performance tuning book, but instead in a beginning SQL book.  The recipe probably should have discussed some of the potential performance problems related to using “SELECT *” – the book simply stated that it was an alternative to listing every column to retrieve. (pages 254-255)
  • Recipe 8-2 “Retrieve a Subset of Rows from a Table” shows how to add a basic WHERE clause to a SQL statement.  Anyone attempting to learn even basic Oracle performance tuning should be well beyond the level of knowledge covered in this recipe, which shares concepts similar to those in recipe 1-1 in the “Oracle SQL Recipes” book. (pages 256-257)
  • Recipe 8-3 “Joining Tables with Corresponding Rows” demonstrates how to perform simple equijoins between tables using Oracle SQL syntax and ANSI syntax.  Once again, the concepts in this recipe are appropriate for a beginner level SQL book – concepts discussed are similar to those found in recipe 3-1 in the “Oracle SQL Recipes” book.  It might have been helpful for the book to discuss some of the potential dangers of natural joins. (pages 258-259)
  • Recipe 8-4 “Joining Tables When Corresponding Rows May Be Missing” demonstrates outer joins using Oracle syntax and ANSI syntax, and in that regard is similar to recipe 3-3 in the “Oracle SQL Recipes” book.  The full outer join using Oracle syntax has a couple of problems, including the removal of duplicate rows when those duplicate rows are necessary (consider a father and son working at the same place and in the same department).  To fix the solution, change the UNION clause to a UNION ALL, and then in the second WHERE clause (found in the second half of the UNION ALL), add “AND E.MANAGER_ID IS NULL”. (pages 259-262)
  • Recipe 8-5 “Constructing Simple Subqueries” describes how to add subqueries to the SELECT, WHERE, and HAVING clauses.  No examples of actual subqueries were provided under the heading of “Multi-Row Subqueries”, just simple variants of IN lists using ANY, SOME, and ALL keywords.  Based on the section heading I would have expected to see a subquery in the WHERE clause that potentially returned more than one row and used the ANY, SOME, or ALL keywords to prevent an ORA-01427 error. (pages 263-267)
  • Recipe 8-6 “Constructing Correlated Subqueries” shows how to place a subquery in the WHERE clause that references columns in the parent query’s row sources – another example that might appear in an introductory SQL book.  The “How It Works” section of this recipe seems to ignore the fact that a correlated query may be automatically transformed by Oracle’s query optimizer into a standard join – the quote on page 268 could very easily be demonstrated as incorrect: “In a correlated subquery, the outer query is executed first, as the inner query needs the data from the outer query in order to be able to process the query and retrieve the results.” (pages 267-269)
  • Recipe 8-7 “Comparing Two Tables to Finding Missing Rows” shows how to use the SQL MINUS operator.  Considering that this book describes performance tuning methods, it probably would have been a good idea to show alternate methods for accomplishing the same task that often perform more efficiently than the MINUS operator (such as a left outer join, with a specification that the column on the right side of the join must be NULL). (pages 269-271)
  • Recipe 8-8 “Comparing Two Tables to Finding Matching Rows” shows how to use the SQL INTERSECT operator – basic SQL syntax again with no performance tuning side to the description. (page 271)
  • Recipe 8-9 “Combining Results from Similar SELECT Statements” demonstrates using the UNION and UNION ALL operators – material that should be associated with a beginning SQL book (pages 271-274)
  • Recipe 8-10 “Searching for a Range of Values” demonstrates how to use the SQL BETWEEN clause – another introduction to SQL recipe with no performance tuning twist (although the examples did not use TRUNC on a date column, a couple of the examples seem to rely on implicit varchar to date datatype conversions). (pages 274-276)
  • Recipe 8-11 “Handling Null Values” shows how to use the NVL, NVL2, and COALESCE functions as well as the IS NULL and IS NOT NULL clauses.  The material in the recipe is similar to recipe 1-13 in the “Oracle SQL Recipes” book.  Considering that this is a book about performance tuning, the book should have indicated why the COALESCE function is often preferable to the NVL function. (pages 277-280)
  • Recipe 8-12 “Searching for Partial Column Values” shows how to use the SQL LIKE keyword.  There is a bit of a performance related side to this recipe, where the recipe shows applying the TO_CHAR function to an indexed date column prevents the optimizer from using the standard index on that column.  The recipe then shows a demonstration of altering the NLS_DATE_FORMAT to “’yyyy-mm-dd’” and then demonstrates with the help of an execution plan that using “WHERE HIRE_DATE LIKE ‘1995%’” in the SQL statement will allow the index on the date column to be used.  Unfortunately, the execution plan seems to show an index on the employee’s name being used rather than the index on the HIRE_DATE column.  If the recipe had included the “Predicate Information” section of the execution plan it would have shown that a function is applied to the HIRE_DATE column which will prevent the optimizer again from using that index; something like the following would appear in the “Predicate Information” section of the execution plan: “filter(INTERNAL_FUNCTION(“HIRE_DATE”) LIKE ‘1995%’)”  (pages 280-283)
  • Recipe 8-13 “Re-using SQL Statements Within the Shared Pool” includes the following quote on page 284, “Essentially, bind variables are called substitution variables, and replace literals within a query.”  Technically, in Oracle Database terminology bind variables are not synonymous with substitution variables. On page 286 the book states about the row source generation stage of parsing, “This is an Oracle program that received the execution plan from the optimization step and generates a query plan.”  It appears that the book is attempting to paraphrase the Oracle Database Concepts Guide which states, “The row source generator is software that receives the optimal execution plan from the optimizer and produces an iterative plan, called the query plan, that is usable by the rest of the database. The iterative plan is a binary program that, when executed by the SQL virtual machine, produces the result set.”  Page 286 shows enabling a 10046 trace by changing the SQL_TRACE parameter at the session level – that parameter is deprecated as of Oracle Database 10.2. (pages 284-288)
  • Recipe 8-14 “Avoiding Accidental Full Table Scans” states as a method of preventing accidental full table scans when indexes are present on a column, “When constructing a SQL statement, a fundamental rule to try to always observe, if possible, is to avoid using functions on the left side of the comparison operator.”  This statement does not make sense: “TO_CHAR(ORDER_DATE,’YYYY’) = ‘2011’” is equivalent to “’2011’ = TO_CHAR(ORDER_DATE,’YYYY’)”.  The authors probably intended to state, “If possible, in the WHERE clause avoid placing indexed columns in function calls, such as TO_CHAR, unless a function-based index is present for that exact function call and column combination.” (pages 288-290)
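    A minimal sketch of the probably intended advice (assuming a hypothetical ORDERS table with a standard index on the ORDER_DATE column):

    SELECT * FROM ORDERS WHERE TO_CHAR(ORDER_DATE,'YYYY') = '2011';     -- the standard index on ORDER_DATE cannot be used

    SELECT * FROM ORDERS
    WHERE ORDER_DATE >= TO_DATE('01-JAN-2011','DD-MON-YYYY')
      AND ORDER_DATE <  TO_DATE('01-JAN-2012','DD-MON-YYYY');           -- the standard index on ORDER_DATE may be used

    CREATE INDEX IND_ORDER_YEAR ON ORDERS(TO_CHAR(ORDER_DATE,'YYYY'));  -- or create a function-based index matching the exact function call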
  • Recipe 8-15 “Creating Efficient Temporary Views” shows how to create inline views, and provides performance warnings about their use.  The authors of the book make several critical mistakes in this recipe, giving the readers the impression that the rows requested by inline views are always materialized into tables, and as such should be avoided whenever possible.  The SQL statement found in the How it Works section uses subquery factoring (WITH blocks) rather than inline views as described in the book.  Page 292 states about the query found in the How it Works section, “In examples such as these, it is generally more efficient at the database level to create tables that hold the data defined by the inline views—in this case, three separate tables. Then, the final query can be extracted from joining the three permanent tables to generate the results.”  The recipe also contains the following cautionary note, “Large inline views can easily consume a large amount of temporary tablespace.”  There is more wrong than right in this recipe.  Oracle Database does not materialize every inline view into a temp table (such as SYS_TEMP_0FD9D6628_D08144) as it processes the SQL statement.  The optimizer is free to materialize an inline view into a temp table, but quite often it is also free to push predicates into the inline view, and completely rewrite the original SQL statement so that it no longer contains an inline view.  The book’s suggested alternative of storing the intermediate results of inline views in permanent tables, unnecessarily generating redo and undo in the process, is very strange (if we must head in this direction, why not suggest using materialized views or statically defined temporary tables).  The original SQL statement with the inline views is able to take advantage of not only pushed predicates to further restrict the rows requested by the inline view, but also any indexes and partitioning that may be present in the parent tables.  Additionally, when troubleshooting the performance or accuracy of a SQL statement it is helpful to have the full SQL text that generates the resultset in a single SQL statement. (pages 290-292)
  • Recipe 8-16 “Avoiding the NOT Clause” attempts to describe “performance drawbacks in using the NOT clause within your SQL statements, as they trigger full table scans.”  The sample SQL statement in the Solution section shows a SQL statement selecting from a single table named EMPLOYEES_BIG with the execution plan showing an estimated 697,000 rows to be returned by a full table scan and a calculated cost of 4,480.  The “improved” version of the query selects from two tables, with the execution plan showing a full table scan of the EMPLOYEES table and a nested loops join to the DEPARTMENTS table’s primary key with an estimate of 33 rows to be returned with an estimated cost of 3.  The authors specifically mentioned the drop in the calculated cost from 4,480 to 3 (usually not a good measure of the relative performance of two queries), but failed to recognize that a full table scan is still being performed, and that the name of the table as shown by the execution plan changed from EMPLOYEES_BIG to EMPLOYEES – the execution plan included in the book does not match the associated SQL statement displayed in the book.  The use of the NOT keyword does not specifically prevent the use of an index on a column, as was stated in the book.  The NOT EXISTS example SQL statement will not perform as the authors expect because the NOT EXISTS clause is not a correlated NOT EXISTS clause – the query will return all employees with a COMMISSION_PCT greater than 0 regardless of the employee’s DEPARTMENT_ID value. (pages 293-295)
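    A minimal sketch of the difference (assuming the HR sample schema):

    SELECT LAST_NAME FROM EMPLOYEES E
    WHERE E.COMMISSION_PCT > 0
      AND NOT EXISTS (SELECT 1 FROM DEPARTMENTS D WHERE D.DEPARTMENT_ID = 80);               -- non-correlated: the subquery result is the same for every outer row

    SELECT LAST_NAME FROM EMPLOYEES E
    WHERE E.COMMISSION_PCT > 0
      AND NOT EXISTS (SELECT 1 FROM DEPARTMENTS D WHERE D.DEPARTMENT_ID = E.DEPARTMENT_ID);  -- correlated: the subquery is evaluated against each outer row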
  • Recipe 8-17 “Controlling Transaction Sizes” offers an interesting discussion on the topic of savepoints, but the recipe does not have anything to do with performance tuning. (pages 295-297)
  • Recipe 9-4 states on page 308, “Once you have an explain plan to interpret, you can tell which steps are executed first because the innermost or most indented steps are executed first, and are executed from the inside out, in top-down order.”  That statement, while sometimes correct, is easy to disprove – counter-examples are not hard to find.  The Oracle Database 11.2 Performance Tuning Guide states essentially the same thing, “The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is normally executed first” – that statement is also occasionally incorrect, as can be determined by a 10046 extended SQL trace.  The explained order of the execution plan (ID 3, 5, 2, 4, 1, 0) is incorrect – it should be (ID 3, 2, 5, 4, 1, 0).  The recipe also advises looking at the Cost column to determine the amount of improvement – such an approach is unreliable.  On page 309 the book states, “By looking at our original explain plan, we determined that the EMP table is larger in size” – the execution plan does not indicate the size of a specific table, only the estimated number of rows and the estimated number of bytes that will be returned from the row source. (pages 307-310)
  • Recipe 9-5 states that, “With a simple query against the V$SESSION_LONGOPS view, you can quickly get an idea of how long the query will execute, and when it will finish” – that statement is mostly true for simple SQL statements with a single table being read by a full table scan.  This view shows estimated remaining completion time of individual (currently executing) operations in an execution plan, and not the estimated time remaining before the SQL statement will finish (pages 310-311).
  • In Recipe 9-6, the query of V$SQLSTATS does not necessarily retrieve the top five recently executed SQL statements that performed disk I/O, rather it retrieves the five SQL statements that have accumulated the most disk I/O (possibly through multiple executions) without regard to when the SQL statement executed (the statistics could still be in V$SQLSTATS even though the SQL statement is no longer in the shared pool). (pages 311-312)
  • Recipe 9-7 should have mentioned that using the V$SQL_MONITOR view requires an Enterprise Edition license, a Diagnostics Pack license, and a Tuning Pack license.
  • Recipe 9-8 should have mentioned that using the V$SQL_PLAN_MONITOR view requires an Enterprise Edition license, a Diagnostics Pack license, and a Tuning Pack license. (pages 316-318)
  • Recipe 9-9 should have mentioned that using the Oracle SQL Performance Analyzer requires a license for the Oracle Real Application Testing option. (pages 321-325)
  • Recipe 10-1 states (on page 328) that if the TIMED_STATISTICS parameter is set to FALSE that SQL tracing [extended 10046 SQL trace] is disabled – the TIMED_STATISTICS parameter does not determine whether or not SQL tracing is enabled.
  • Recipe 10-5 shows how to interpret a raw 10046 extended SQL trace file.  This recipe is surprisingly short at just over one page, including a short sample trace file.  The recipe did not provide much information beyond describing the (generic, not necessarily specific to the sample 10046 extended SQL trace file) processes that take place in the Parse, Execute, and Fetch stages, and stating that the 15.8ms long wait for ‘Disk file operations I/O’ is not a significant amount of time.  This recipe missed a significant chance to describe one of the most useful tools for troubleshooting performance problems that can be associated with a specific session.  See the book “Secrets of the Oracle Database”, the book “Optimizing Oracle Performance”, or the book “Troubleshooting Oracle Performance” for better examples of extracting useful information from a raw 10046 extended SQL trace file. (pages 334-335)
  • Recipe 10-6 states the following, “Note that the TKPROF or other profiling tools show the elapsed times for various phases of query execution, but not the information for locks and latches. If you’re trying to find out if any locks are slowing down a query, look at the raw trace files to see if there are any enqueue waits in the WAIT lines of the raw file.”  I might be misinterpreting these two sentences, but the sentences appear to be incorrect assuming that a level 8 or greater (wait events) extended SQL trace generated by Oracle Database 10.1 (or higher) is passed into TKPROF.  Starting in Oracle Database 10.1 most of the enqueue waits are divided into much more specific wait events that begin with “enq:” (there are 299 wait events in Oracle Database 11.2.0.2 that begin with “enq:”).  The same is true for latches, where the “latch free” wait event is split out into multiple wait events (there are 43 wait events in Oracle Database 11.2.0.2 for latches). Oracle Database 9.2 and earlier did not provide information about the exact type of enqueue or latch wait in TKPROF summaries. (pages 335-336)
  • Recipe 10-8 shows how to invoke TKPROF to analyze a 10046 extended SQL trace file.  The TKPROF call used by the book specifies the EXPLAIN option, and provides the following description “If you specified the explain parameter when issuing the tkprof command, you’ll find an execution table showing the execution plan for each SQL statement.”  The book does not explain that the execution plan displayed in the output file may not be the actual execution plan used for the query, much like the execution plan generated by autotrace and explain plan might not be the actual execution plan (this fact was not mentioned in the book either).  The book made no mention that the Row Source Operation information in the output file shows the actual execution plan used by the SQL statement (note that the 11.1.0.6 client may still show the wrong execution plan when analyzing a trace file from 11.1.0.6 or a more recent version of Oracle Database).  At the top of page 340 the book states that the unit of measure of the time statistic is milliseconds (1/1000 of a second).  Prior to Oracle Database 9.0.1 the unit of measure was centiseconds (1/100 of a second), and as of Oracle Database 9.0.1 the unit of measure is microseconds (1/1000000 of a second).  The book stated that because the “SQL*Net message to client” wait event is an idle wait event, the time reported as waited can be ignored.  The actual wait event displayed in the book’s TKPROF output is “SQL*Net message from client”, and the time reported is 462.81 seconds in 5,461 waits with a max wait time of 112.95 seconds.  Even though the “SQL*Net message from client” wait event is typically classified as an idle wait event, at the session level, and especially in a 10046 extended SQL trace, that wait event may reveal critical details.  For instance, it probably required roughly 112.95 seconds for the 10046 trace to be ended; if we subtract those 112.95 seconds, we are left with 349.86 seconds in the wait event, indicating that the average wait time for this client (and in theory network) side wait is 0.064 seconds – longer than the only other wait reported, a single 0.05 second wait for a block read from disk.  The long average duration of this wait event either means that the network link is very slow (you would expect average wait times in the range of 0.00001 seconds to 0.002 seconds), or that the client computer receiving the data was excessively busy on average.  In this recipe, the book also again states that the TKPROF output will not show any enqueue related waits – that is also not true. (pages 337-340)
  • Recipe 10-15 shows how to enable and disable a 10046 extended SQL trace for another session using DBMS_MONITOR.SESSION_TRACE_ENABLE and  DBMS_MONITOR.SESSION_TRACE_DISABLE, respectively.  The command provided for disabling the 10046 trace in the other session, “execute dbms_monitor.session_trace_disable();” does not specify the SID and SERIAL# for the other session, thus it defaults to disabling a 10046 trace in the current session.  This problem can be confirmed by checking the SQL_TRACE column of V$SESSION for the session for which the trace was enabled, for example “SELECT SQL_TRACE FROM V$SESSION WHERE SID=4;”.  The book also states, “The trace file will have the suffix mytrace1, the value you set as the trace file identifier,” but that is only true if tracing were enabled for the current session (and not another session as demonstrated) and if the recipe had included a command to set the session level TRACEFILE_IDENTIFIER parameter. (pages 349-350)
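    A minimal sketch of correctly targeting the other session (the SID 4 and SERIAL# 1234 values are assumptions – obtain the actual values from V$SESSION):

    SELECT SID, SERIAL# FROM V$SESSION WHERE SID = 4;   -- confirm the SERIAL# of the session to be traced
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(SESSION_ID=>4, SERIAL_NUM=>1234, WAITS=>TRUE, BINDS=>TRUE)
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(SESSION_ID=>4, SERIAL_NUM=>1234)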
  • Recipes 10-2, 10-11, and 10-16 seem to share a common ancestry with a blog article.
  • Recipe 10-16 demonstrates how to enable 10046 extended SQL traces in other sessions using Oracle Database 11.1 and above syntax.  All of the examples in this recipe use ALTER SESSION SET EVENTS commands rather than ALTER SYSTEM SET EVENTS commands, and are thus ineffective (do not perform as described) for enabling a trace in another session. (pages 351-352)
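    A minimal sketch of the 11.1 and above diagnostic event syntax that I believe does target another session’s process (the OS process ID of 1234 is an assumption – obtain the actual value from the SPID column of V$PROCESS):

    ALTER SYSTEM SET EVENTS 'SQL_TRACE {PROCESS: OSPID = 1234} LEVEL=12';
    ALTER SYSTEM SET EVENTS 'SQL_TRACE {PROCESS: OSPID = 1234} OFF';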
  • Recipe 10-18 shows how to enable a system-wide 10046 trace using DBMS_MONITOR.DATABASE_TRACE_ENABLE – if I recall correctly, tracing might not be disabled when requested for the background processes, which means that those processes will continue writing to their trace files until their max trace file size is reached, the drive containing the trace files fills, or the database is bounced.  This may be an Oracle release version specific problem.  The recipe probably should have mentioned this potential problem. (page 353)
  • Recipe 10-20 shows how to enable a system-wide 10046 trace using “ALTER SYSTEM SET EVENTS ‘10046 TRACE NAME CONTEXT FOREVER,LEVEL 12’;”.  The same potential problem is present in this recipe as for recipe 10-18.  (page 356)
  • Recipe 10-21 shows how to enable a 10046 trace using SQL*Plus’ ORADEBUG.  The top of page 357 states, “You can set a trace in a running session using the operating system process ID (PID), with the help of the oradebug utility.”  Abbreviating the operating system process ID as PID is likely to cause unnecessary confusion.  The PID column in V$PROCESS is the Oracle process number, while the SPID column holds the operating system process ID. (pages 356-357)
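    A minimal sketch showing the distinction between the two columns (the SID 4, SPID 1234, and PID 20 values are assumptions – obtain the actual values from the first query):

    SELECT S.SID, P.PID ORACLE_PID, P.SPID OS_PID
    FROM V$SESSION S, V$PROCESS P
    WHERE P.ADDR = S.PADDR
      AND S.SID = 4;

    ORADEBUG SETOSPID 1234    -- attaches using the operating system process ID (V$PROCESS.SPID)
    ORADEBUG SETORAPID 20     -- attaches using the Oracle process number (V$PROCESS.PID)
    ORADEBUG EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12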
  • Recipe 10-22 shows a method for enabling a 10046 extended SQL trace with wait events and bind variables using a DBMS_MONITOR.SESSION_TRACE_ENABLE call in an AFTER LOGON ON DATABASE trigger.  The book states that the user to be traced, user SH in this case, must have the ALTER SESSION privilege before the trigger is created that will enable a 10046 trace for just that user.  Testing will quickly show that the GRANT is unnecessary because of the default setting of definer’s rights associated with the trigger that must be created by a user with SYSDBA permissions – the user to be traced only needs to be granted CONNECT for the trigger to execute and enable a trace for the session. (pages 357-358)
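    A minimal sketch of such a trigger (created while connected as SYS, so the trigger executes with the creator’s rights; the trigger name is arbitrary):

    CREATE OR REPLACE TRIGGER ENABLE_SH_TRACE
    AFTER LOGON ON DATABASE
    BEGIN
      IF USER = 'SH' THEN
        DBMS_MONITOR.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>TRUE);
      END IF;
    END;
    /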
  • Recipe 10-23 shows how to enable and read 10053 traces.  This recipe just barely scratches the surface of 10053 trace file analysis.  The book fails to mention that 10053 traces will only be generated when a hard parse is required, and fails to describe most of the keywords found in a typical 10053 trace file – such trace files may be quite confusing when several tables with multiple indexes are analyzed during optimization; 10053 trace file contents are also subject to change from one Oracle Database release version to the next, so the following quote found on page 360 is at best misleading, “Unlike a raw 10046 event trace file, a 10053 event trace file is quite easy (and interesting) to read.”  (pages 358-361)
  • Recipe 10-24 shows how to enable a specific trace type when an error, such as deadlock, occurs in a session.  While the contents of this recipe are potentially helpful, the depth of discussion is very limited compared to some blog articles found on the Internet, and the content really is not focused on performance, but rather on troubleshooting. (pages 361-362)
  • Recipe 10-25 shows how to enable tracing for Oracle Database background processes – this recipe is very similar to recipe 10-16 except that this recipe uses ALTER SYSTEM rather than ALTER SESSION and specifies a value for PNAME rather than specifying the operating system process number.  The recipe probably should have provided a warning about potential problems that may result from enabling traces for background processes. (pages 362-363)
  • Recipe 10-26 opens with the following problem statement, “You want to trace the Oracle listener for diagnostic purposes.”  While this recipe has nothing specific to do with performance tuning, it would have been helpful to answer the question of “Why?” or “What will I see in the listener trace file?”  Several of the recipes in this book fall into this pattern of not describing the “Why?” or “When?” or “How often?” or “Should I check something else first?” (pages 363-365)
  • Recipe 10-27 shows how to set the LOG_ARCHIVE_TRACE parameter – this is another recipe that is not directly related to performance tuning.  The book states on page 366, “You can specify any of 15 levels of archive log tracing,” however the book only shows seven levels, none of which match level 15 that was specified in the recipe.  The authors seem to have used the Oracle Database Reference from the official Oracle Database documentation library for the source of the seven event levels listed, but it appears that a mistake was made when level 128 was written – that level means “Track FAL (fetch archived log) server related activities”, while level 512 is “Track LGWR redo shipping network activity” which the book incorrectly attributes to level 128. (pages 365-366)
  • (Skipping chapter 11 for now)
  • Page 412 incorrectly states that the Enterprise Edition is required to use stored outlines.
  • (Skipping chapter 12 for now)
  • Recipe 13-2 on page 450 states “Note that in addition to collecting statistics for all schema objects, the auto stats job also gathers dictionary statistics (for the SYS and SYSTEM schemas).”  This statement is incorrect per Metalink (MOS) – the fixed object statistics are not collected automatically.  However, either of the following two commands will accomplish that task:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',GATHER_FIXED=>TRUE);
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(NULL);  (pages 448-450)
  • Recipe 13-3 states “By default, the database automatically collects statistics (cascade=true) for all of a table’s indexes.”  The Oracle Database documentation seems to indicate that the default value is DBMS_STATS.AUTO_CASCADE, which does not necessarily mean that statistics will be collected for all indexes. Page 456 states, “Oracle recommends that you set the AUTOSTATS_TARGET preference to the value ORACLE, to ensure that the database collects fresh dictionary statistics”.  Metalink (MOS) Doc ID 725845.1 states that setting the value to ORACLE prevents the automatic statistics collection process from collecting statistics from any schema that does not belong to an Oracle component (SYS, SYSMAN, WMSYS, EXFSYS, etc.), so that is probably not the recommended value.  Additionally, setting that parameter value does not appear to cause the dictionary statistics to be collected. (pages 451-456)
  • Recipe 13-6, when discussing the various levels of dynamic sampling, seems to be closely paralleling (or simply rephrasing) the Oracle Database documentation.  The recipe states about level 3 dynamic sampling, “Uses dynamic sampling if the query meets the Level 2 criteria and it has one or more expressions in a WHERE clause predicate.” – the documentation states that either of these conditions must be true, not both.  The same problem is present for the description of Level 4 dynamic sampling – the book states “AND” while the documentation states “OR”.  Interestingly, the book and the documentation both state that level 9 dynamic sampling samples 4086 blocks, while 4096 (a power of 2) seems to be a more likely number (and that 4096 number agrees with other articles). (pages 459-460)
  • Recipe 13-9 states the following about the mreadtim system statistic, “mreadtim: The Multiblock Read Time statistic shows the average time (in seconds) for a sequential multiblock read.”  This is the only system statistic for which the unit of measure is provided – the unit of measure is actually milliseconds.  Note that the output of the workload system statistics on page 466 shows a value of 46,605.947 for the MREADTIM value (if that statistic’s unit of measure really is seconds, that is a very slow server – note that these odd statistic values show a bug that is present in 11.2.0.1 and 11.2.0.2) and 51,471.538 for the SREADTIM statistic.  The recipe should have explained why the MREADTIM value in this case is less than the SREADTIM value.  The recipe also should have mentioned whether the system statistics should be collected during a relatively quiet time of system activity or a relatively busy time. (pages 463-466)
  • Recipe 13-10 seems to closely mirror section 13.5.1 Pending Statistics of the Oracle Database Performance Tuning Guide.  Page 468 states, “By default the database immediately starts using all statistics it gathers.” – that statement is not true for Oracle Database 10.1 and above, where the _OPTIMIZER_INVALIDATION_PERIOD parameter is set to 18000 (5 hours), permitting already hard parsed SQL statements to escape immediate invalidation for up to 5 hours (thus the new statistics may not have any impact for up to 5 hours). (pages 466-468)
  • Recipe 13-11 provides conflicting advice about using the OPTIMIZER_INDEX_COST_ADJ parameter to force the optimizer to use an index.  While the recipe states that the parameter should not be adjusted system-wide, and also states, “Ideally, you should collect workload system statistics and leave the optimizer_index_cost_adj parameter alone,” in between those comments the recipe states, “Use the optimizer_index_cost_adj parameter with confidence, especially in an OLTP environment, where you can experiment with low values such as 5 or 10 for the parameter to force the optimizer to use an index.” and also suggests that the parameter could be adjusted for a single SQL statement without providing an example of how such a change could be made.  Potential examples of changing that parameter for a single SQL statement might include adding the hint /*+ OPT_PARAM('OPTIMIZER_INDEX_COST_ADJ',5) */, or temporarily adjusting the parameter at the session level so that a stored outline might be created.  It is quite surprising that the book did not describe some of the problems that might be caused by adjusting the OPTIMIZER_INDEX_COST_ADJ parameter.  It is also surprising that the book did not suggest using an index hint, or even suggest determining whether or not an index access path is the most appropriate access path for the data.  The book states, “By default, the optimizer assumes that the cost of a multiblock read I/O associated with a full table scan and the single block read cost associated with an indexed read are identical.” – that has not been the case since the introduction of system statistics in Oracle Database 9.0.1.  Also stated in the recipe is the following, “By default, if you set the ALL_ROWS optimizer goal, there’s a built-in preference for full table scans by the optimizer.” – that also is not necessarily true; in the end the decision is completely based on the calculated cost of the various access paths, and if the system statistics and object statistics provide an invalid picture of the performance characteristics of the server and the data, then an inappropriate access path may be selected by the query optimizer.  The recipe states that the SREADTIM, MREADTIM, and MBRC system statistics have a bearing on whether or not a full table scan is performed, but the depth of insight does not venture any deeper than that comment. (pages 468-470)
  • Recipe 13-12 states that setting the OPTIMIZER_FEATURES_ENABLE parameter to a previous Oracle Database release version will cause the optimizer to behave exactly as it did in the older release version.  This is true to an extent; however, some bug fixes, such as the one for the cardinality estimates when ROWNUM is used in the WHERE clause, are not automatically disabled, and when you combine that with changes to statistics collection (such as the system statistics MREADTIM and SREADTIM bug in 11.2.0.1 and 11.2.0.2), the optimizer could very well not behave the same as it did in the earlier release. (pages 470-471)
  • Recipe 13-13 uses exactly the same command (down to the schema, table, and column names) as a blog article written by the Oracle Optimizer Group.  (pages 472-473)
  • Recipe 13-14 repeats portions of the CURSOR_SHARING information found in recipe 5-20.  Page 473 states, “There’s no need for hard-parsing the new statement because a parsed version already exists in the shared pool.” – that statement is mostly true prior to the introduction of adaptive cursor sharing in Oracle Database 11.1.  The tip on page 474 states, “Oracle recommends using the FORCE setting for the CURSOR_SHARING parameter, in an OLTP environment.” – it would be helpful if the book indicated where Oracle Corporation made this recommendation.  On page 474, the book states, “There are really no issues with setting the cursor_sharing parameter to a non-default value, except minor drawbacks such as the non-support for star transformation, for example.” – see the review comments for recipe 5-20.  (pages 473-476)
  • Recipe 13-15 states, “Adaptive cursor sharing is enabled by default, and you can’t disable it.”  Technically, this is not a true statement.  Although not recommended, setting the STATISTICS_LEVEL parameter to BASIC will disable adaptive cursor sharing.  Per Metalink (MOS) Doc ID 11657468.8, adaptive cursor sharing may be disabled by changing _OPTIMIZER_ADAPTIVE_CURSOR_SHARING to a value of FALSE and _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL to a value of NONE, but it can also be disabled without the modification of the hidden initialization parameters by changing the OPTIMIZER_FEATURES_ENABLE parameter as was discussed in recipe 13-12.  Page 481 states, “The database creates a new child cursor (child_number=2) for this query and marks the original cursor (child_cursor=0) as not being bind-aware. Eventually the database will remove this cursor from the shared pool.”  Oracle actually set child cursor 0’s IS_SHARABLE column value to N without changing the previously assigned BIND_AWARE value.  The book could have been clearer with the column aliases in its query – the IS_SHARABLE column should not have been aliased as BIND_SHARE.  On page 481 the book states, “Since adaptive cursor sharing works only where literal values are replaced with binds, Oracle encourages you to use the FORCE setting for the cursor_sharing parameter… You must set the cursor_sharing parameter to FORCE for adaptive cursor sharing to work, thus letting the optimizer select the optimal execution plan for different values of the bind variable.”  Those statements are incorrect; adaptive cursor sharing works just as well with the CURSOR_SHARING parameter set to the default value of EXACT, as long as bind variables are used in the SQL statements. (pages 476-482)
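    A minimal sketch of observing adaptive cursor sharing with the default CURSOR_SHARING=EXACT (assuming the HR sample schema; the SQL_ID value is a placeholder to be replaced with the actual value):

    VARIABLE N NUMBER
    EXEC :N := 50
    SELECT COUNT(*) FROM EMPLOYEES WHERE DEPARTMENT_ID = :N;

    SELECT CHILD_NUMBER, EXECUTIONS, IS_BIND_SENSITIVE, IS_BIND_AWARE, IS_SHAREABLE
    FROM V$SQL
    WHERE SQL_ID = '<sql_id of the above query>';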
  • Recipe 13-16 includes SQL statements that are exact copies of SQL statements found in section 13.3.1.7 from the Oracle Database documentation library, with a bit of rewording before and after the SQL statements. (pages 482-483)
  • Recipe 13-17 seems to closely parallel section 13.3.1.6 from the Oracle Database documentation library, except that the column order of the COUNTRY_ID and CUST_STATE_PROVINCE were reversed, and the book apparently considers the COUNTRY_ID column as a VARCHAR2 while the documentation considers that column as a number. (pages 483-484)
  • Recipe 13-18 seems to be based on a blog article that was written by the Oracle optimizer group. The book seems to have omitted some SQL*Plus formatting commands that are necessary to correctly view the report that is generated by the code in the book (refer to the blog article for the correct SQL*Plus formatting commands). (pages 484-486)
  • Recipe 13-19 seems to be based on a blog article that was written by the Oracle optimizer group (information from a second blog article also seems to have been used). (pages 486-487)
  • Recipe 13-20 seems to be based on a blog article that was written by the Oracle optimizer group (pages 488-490)
  • The introduction for chapter 14 states that there are more than 60 Oracle Database hints (apparently quoting the Oracle Database Performance Tuning Guide documentation).  A query of V$SQL_HINT in Oracle Database 11.2.0.2 shows 269 hints. (page 491)
  • Recipe 14-1 states, “For example, there is an INDEX hint to enable the use of an index. There is also a NO_INDEX hint, which disables the use of an index.”  These two statements should have been rewritten.  An INDEX hint forces the optimizer to use a specific index, assuming that is a valid access path for the data retrieval.  A NO_INDEX hint prevents the optimizer from using a specific index or set of indexes for data retrieval. (pages 491-493)
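    A minimal sketch of the corrected descriptions (assuming the HR sample schema, which includes the EMP_NAME_IX index on the EMPLOYEES table):

    SELECT /*+ INDEX(E EMP_NAME_IX) */ LAST_NAME FROM EMPLOYEES E WHERE LAST_NAME LIKE 'S%';     -- directs the optimizer to a specific index, if that index is a valid access path
    SELECT /*+ NO_INDEX(E EMP_NAME_IX) */ LAST_NAME FROM EMPLOYEES E WHERE LAST_NAME LIKE 'S%';  -- prevents the optimizer from using a specific index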
  • Recipe 14-2 states, “Note Hints influence the optimizer, but the optimizer may still choose to ignore any hints specified in the query.”  Hints are directives and cannot be ignored by the optimizer.  The only reasons that a hint cannot be used by the optimizer are: the hint is invalid due to the wrong alias used in the hint; the hint is malformed; the hint is incompatible with another hint; the query was transformed by the optimizer into a form that is incompatible with the hint before the optimizer applied the hint; the hint, if followed, would cause wrong results to be returned; bugs in Oracle Database cause the hint to be lost; the hint specifies the use of a feature that is explicitly disabled by an initialization parameter; or a comment is added before the hint that includes an Oracle keyword. (pages 493-496)
  • Recipe 14-3 shows how to control the join order using the ORDERED and LEADING hints.  The recipe does not discuss some of the potential problem areas that are present with the ORDERED hint that are not present with the LEADING hint.  Additionally, the execution plan that accompanies the LEADING hint example appears to be the wrong execution plan.  While it appears to work without issue, the correct syntax for the LEADING hint does not use commas to separate the row sources that are listed in the hint. (pages 497-498)
  • Recipe 14-4 demonstrates how to use the USE_NL, USE_HASH, and USE_MERGE hints.  While it apparently does not affect the result of the hint, the official syntax does not use commas to separate the row source names in the hints, as shown in the book.  On page 499 the book states, “The smaller of the two tables is used by the optimizer to build a hash table on the join key between the two tables.”  “Smaller” is not defined in this case – does that mean the table itself occupies less space on disk, the table contains fewer rows, fewer rows are expected to be returned from the table, the aggregated estimated size of the columns multiplied by the estimated number of rows is smaller, or something else? (pages 498-501)
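    A minimal sketch of the official comma-free syntax for the join order and join method hints (assuming the HR sample schema):

    SELECT /*+ LEADING(D E) USE_HASH(E) */ D.DEPARTMENT_NAME, E.LAST_NAME
    FROM DEPARTMENTS D, EMPLOYEES E
    WHERE D.DEPARTMENT_ID = E.DEPARTMENT_ID;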
  • Recipe 14-5 shows how to use the OPTIMIZER_FEATURES_ENABLE hint by specifying a value of 10.2 in that hint.  The value 10.2 is invalid for that hint, and in testing on Oracle Database 11.2.0.2 is sufficient to prevent the optimizer from seeing the hints that are listed after the OPTIMIZER_FEATURES_ENABLE hint. (pages 501-502)
  • Recipe 14-6 on page 504 shows setting the OPTIMIZER_MODE using an ALTER SYSTEM command to the deprecated value of FIRST_ROWS as part of a test.  A better approach would have been to use an ALTER SESSION command, setting the OPTIMIZER_MODE parameter to FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, or FIRST_ROWS_1000.  Likewise, the recipe shows the use of the FIRST_ROWS hint without the number enclosed in parentheses that the hint syntax in the Oracle Database documentation has required since the release of Oracle Database 9.2. (pages 502-504)
  • Recipe 14-7 probably should mention that the APPEND hint has no effect when the table into which the rows are to be inserted has either triggers or foreign key references.  Ideally, the recipe should have provided a warning stating that direct path inserts are unrecoverable. (pages 505-506)
  • Recipe 14-8 states, “Tip: Hints in queries that reference a complex view are ignored.”  This is not completely correct – the book is only partially rewording what is written in the Oracle Database documentation, and selectively removing other information.  The Oracle Database documentation states: “Because nonmergeable views are optimized separately from the top-level query, access path and join hints inside the view are preserved. For the same reason, access path hints on the view in the top-level query are ignored. However, join hints on the view in the top-level query are preserved because, in this case, a nonmergeable view is similar to a table.” (pages 506-509)
  • Recipe 14-9 describes using the Oracle Database 11.1 (and higher) result cache, as did recipes 3-9 and 3-11.  The performance improvement shown on page 511 and 512, where the execution time decreased from 21.80 seconds to 0.08 seconds by changing the NO_RESULT_CACHE hint to a RESULT_CACHE hint, could have been caused by the effects of buffering previously read blocks from disk, and not by the use of the RESULT_CACHE hint.  The book simply stated that the query was executed once with each hint – to be fair, the query should have been executed twice with each hint so that the NO_RESULT_CACHE hinted query could have benefitted from the buffering of previously read blocks from disk, and the RESULT_CACHE hinted query could have benefitted from the caching of the query result in the result cache. (pages 509-513)
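    A minimal sketch of a fairer test (assuming the SH sample schema) – execute each variant twice and compare the second executions:

    SET TIMING ON
    SELECT /*+ NO_RESULT_CACHE */ COUNT(*) FROM SALES;   -- run twice; the second execution benefits from blocks already in the buffer cache
    SELECT /*+ RESULT_CACHE */ COUNT(*) FROM SALES;      -- run twice; the second execution benefits from the cached query result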
  • Recipe 14-11 shows how to use the GATHER_PLAN_STATISTICS hint.  The A-Rows column in the execution plan seems strange in that only ID 0 shows any rows returned by a line in the execution plan.  It is also strange that the Starts column shows a value of 2 for ID 0 while the execution plan shows 0 starts for all of the remaining lines in the execution plan.  It would have been helpful for the book to explain if the execution statistics were caused by an Oracle Database bug, or some sort of an optimization.  My first thought is that the query results cache was used as that would explain the displayed 0 Starts for all except ID 0 as well as the A-Rows column values, however ID 1 should then show a RESULT CACHE operation.  Page 519 contains a confusing statement, “From this, you can determine if the optimizer is optimally executing the SQL, and you can determine if any optimization is needed.” (pages 517-519)
  • Recipe 15-1 describes enabling parallelism for queries.  While the recipe describes using PARALLEL and PARALLEL_INDEX hints, the book implies that simply specifying the hint will cause the query to run in parallel whenever parallel execution is possible.  The determination of whether or not parallel execution is used is based on several factors, including the calculated query cost reduction and the availability of parallel servers – the use of the hints merely overrides the default degree of parallelism for the objects, but does not force parallelism.  Page 530 of the book states, “Note As of Oracle 11g Release 2, the NOPARALLEL and NOPARALLEL_INDEX hints have been deprecated.  Instead, use NO_PARALLEL and NO_PARALLEL_INDEX.”  Those hints were actually deprecated as of Oracle Database 10.1.  It would have been helpful for the book to mention that parallel operations require the Enterprise Edition of Oracle.  The book also seems to consider that the CPU is the only potential bottleneck for parallel execution, without a mention of the I/O subsystem. (pages 526-530)
  • Recipe 15-4 lists restrictions for parallel DML – it appears that the book paraphrased a section of the Oracle Database VLDB and Partitioning Guide, resulting in vague descriptions of some of the limitations.  A better paraphrase example is found in the “Expert Oracle Database Architecture” book on page 633. (pages 533-536)
  • Recipe 15-7 describes rebuilding indexes in parallel.  The recipe provides four key reasons for rebuilding an index, including “index that has become fragmented over time.”  The book does not describe what it means for an index to be fragmented and does not describe some of the potential unanticipated side-effects of parallel index rebuilds (such as causing parallel execution when that index is accessed in a query). (pages 539-541)
  • Recipe 15-10 states, “One prerequisite of using automatic DOP is to run the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure.”  An example of using CALIBRATE_IO is not provided. (pages 543-545)
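    A minimal sketch of such an example (the NUM_PHYSICAL_DISKS and MAX_LATENCY values are assumptions that must be adjusted to match the storage hardware):

    SET SERVEROUTPUT ON
    DECLARE
      MAX_IOPS    NUMBER;
      MAX_MBPS    NUMBER;
      ACT_LATENCY NUMBER;
    BEGIN
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
        NUM_PHYSICAL_DISKS => 2,
        MAX_LATENCY        => 20,
        MAX_IOPS           => MAX_IOPS,
        MAX_MBPS           => MAX_MBPS,
        ACTUAL_LATENCY     => ACT_LATENCY);
      DBMS_OUTPUT.PUT_LINE('MAX_IOPS=' || MAX_IOPS || ' MAX_MBPS=' || MAX_MBPS || ' ACTUAL_LATENCY=' || ACT_LATENCY);
    END;
    /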
  • Recipe 15-12 includes table 15-6 that lists several views that are related to parallel execution.  The description of the V$PQ_SESSTAT view seems to disagree with the documentation for that view. (pages 548-550)
  • Recipe 15-13 states, “If you query the V$SYSTEM_EVENT view, you can get an idea of the parallelism-related waits that have occurred in your database instance.”  The SQL statement that follows that sentence is a query of V$SESSION_EVENT, not V$SYSTEM_EVENT.  The recipe should have mentioned the need to calculate the delta values from that view before deciding whether or not there is a bottleneck in parallel processes.  Three of the four wait events that the book classifies as indicating a potential parallel tuning issue (“PX Deq: Execute Reply”, “PX Deq: Parse Reply”, and “PX Deq: Table Q Normal”) are classified by the official Oracle Database documentation as idle wait events.  It would have been helpful to more completely describe why these idle wait events should not be ignored, keeping in mind that recipe 5-4 in this book states, “You can ignore the idle waits” – possibly the book should have cited a reference. (pages 550-552)

Interesting And/Or Well-Above-Average Bits of Information

  • Specifying the TRACEFILE_IDENTIFIER for a session to reduce the time needed to locate the trace file for a session. (page 332)
  • Recipe 15-1 provides warnings to not abuse parallel execution: “Parallel SQL must be used with caution, as it is common to overuse, and can cause an overutilization of system resources, which ultimately results in slower rather than faster performance.”




10200 Trace Shows Consistent Reads, but Not All Consistent Reads

4 10 2011

October 4, 2011

An interesting, but poorly worded, problem appeared in an OTN thread recently where the OP claimed that a 10200 trace was not showing a consistent get, even though a concurrent 10046 trace’s EXEC and STAT lines indicated one consistent get.  The provided test case and sample 10200/10046 trace left several details unspecified, such as the table and index definitions, the tablespace definition, the Oracle Database release version, and exactly what the OP was trying to understand: does a 10200 trace never work, or is it just not producing output in this specific case?

Let’s try a couple of experiments to reproduce the output provided by the OP.  First, let’s make certain that we know what the OP wanted to see.  A 10200 trace is supposed to write out additional lines in a trace file every time a consistent block read is performed – we have previously experimented a bit with 10200 trace file entries.  Let’s connect to the database using two SQL*Plus sessions: session 1 will connect as a standard database user, and session 2 will connect as the SYS user.  We will create a locally managed, non-ASSM, tablespace with uniform 1MB extent sizes to help improve the consistency of the results.  A sample CREATE TABLESPACE command follows (created in session 2):

CREATE SMALLFILE TABLESPACE "LOCAL_UNIFORM1M" DATAFILE 'C:\Oracle\OraData\OR1122D\locun1MOR1122.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M
  MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT MANUAL; 

In session 1, we will create a table in the recently created tablespace, populate the table with 100,001 rows, and then execute a couple of throw-away SQL statements that will be used later (this is done to eliminate the hard parse during the later re-execution of the SQL statements):

DROP TABLE T4 PURGE;

CREATE TABLE T4(
  C1 NUMBER,
  C2 VARCHAR2(10),
  C3 VARCHAR2(10))
  TABLESPACE LOCAL_UNIFORM1M;

INSERT INTO
  T4
SELECT
  ROWNUM+10,
  TO_CHAR(ROWNUM),
  TO_CHAR(ROWNUM+10)
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>NULL,TABNAME=>'T4',CASCADE=>TRUE)

INSERT INTO T4 VALUES(1,NULL,NULL);
COMMIT;

SET AUTOTRACE TRACEONLY STATISTICS

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'T4_IGNORE_ME';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

SELECT
  *
FROM
  T4;

SELECT
  SYSDATE
FROM
  DUAL;

INSERT INTO T4 VALUES(2,NULL,NULL);
ROLLBACK;

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT OFF';

In session 2 (the SYS session), we will execute a SQL statement that examines X$BH where the STATE column is not equal to 0, after flushing the buffer cache.  This query will indicate the blocks that remain in the buffer cache as well as the STATE of the block in the buffer cache.  From http://www.ixora.com.au/q+a/0103/22151356.htm, the meaning of the STATE column (we could have just queried V$BH instead to see the meaning of that column):

0, FREE, no valid block image
1, XCUR, a current mode block, exclusive to this instance
2, SCUR, a current mode block, shared with other instances
3, CR,   a consistent read (stale) block image
4, READ, buffer is reserved for a block being read from disk
5, MREC, a block in media recovery mode
6, IREC, a block in instance (crash) recovery mode

We will need to re-execute the SQL statement a couple of times (waiting a few seconds between executions) until the query results stabilize:

SELECT
  TS#,
  FILE#,
  DBARFIL,
  DBABLK,
  STATE
FROM
  X$BH
WHERE
  STATE<>0
ORDER BY
  TS#,
  FILE#,
  DBABLK;

/

/ 

Once stabilized, the query output appeared as follows for my test case:

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1 

The above shows two blocks from the SYSTEM tablespace, with a STATE of 1 (current version of the block).

Quickly switching back to session 1, we execute the SQL statement that we are interested in examining, followed by a second SQL statement that is simply intended to make certain that the STAT lines are written to the trace file (we will need to quickly switch back to session 2 after executing this script):

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'T4_SELECT';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

SELECT
  *
FROM
  T4;

SELECT
  SYSDATE
FROM
  DUAL;

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT OFF'; 

In session 2 (the SYS session), re-execute the query of X$BH to see which blocks were added to the buffer cache.

Session 2 produced the following output (slightly modified due to length, with each added block found in the buffer cache numbered):

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1
   5      5          5        128      1  *   1
   5      5          5        129      1  *   2
   5      5          5        130      1  *   3
   5      5          5        131      1  *   4
   5      5          5        132      1  *   5
   5      5          5        133      1  *   6
   5      5          5        134      1  *   7
   5      5          5        135      1  *   8
   5      5          5        136      1  *   9
   5      5          5        137      1  *  10
...
   5      5          5        423      1  * 296
   5      5          5        424      1  * 297
   5      5          5        425      1  * 298
   5      5          5        426      1  * 299
   5      5          5        427      1  * 300
   5      5          5        428      1  * 301

303 rows selected. 

The query executed in session 1 added 301 blocks to the buffer cache (XCUR – the current versions of the blocks).  Let’s take a look at the statistics displayed in session 1 for the execution of the select from T4:

SQL> SELECT
  2    *
  3  FROM
  4    T4;

100001 rows selected.

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
       6967  consistent gets
        301  physical reads
          0  redo size
    2353883  bytes sent via SQL*Net to client
      73685  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100001  rows processed 

The above shows 301 blocks read from disk (physical reads), with 6,967 blocks read by consistent read (consistent gets), and 0 current mode reads (db block gets).

Taking a look at a portion of the generated 10200/10046 trace file (some of the rows are numbered to match the comments added to the query of X$BH):

PARSE #391723720:c=0,e=17,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2560505625,tim=90982737037
EXEC #391723720:c=0,e=15,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2560505625,tim=90982737090
WAIT #391723720: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=13759 tim=90982737111
WAIT #391723720: nam='Disk file operations I/O' ela= 133 FileOperation=2 fileno=5 filetype=2 obj#=13759 tim=90982737271
WAIT #391723720: nam='db file sequential read' ela= 163 file#=5 block#=128 blocks=1 obj#=71123 tim=90982737471               * 1
ktrget2(): started for block  <0x0005 : 0x01400081> objd: 0x000115d3 env [0x000000001759AF24]: (scn: 0x0000.0048429e  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.00000000  ma-scn: 0x0000.00484274  flg: 0x00000661) 
WAIT #391723720: nam='db file scattered read' ela= 2876 file#=5 block#=129 blocks=127 obj#=71123 tim=90982740602             * 2 through 128
ktrgcm(): completed for block  <0x0005 : 0x01400081> objd: 0x000115d3 
ktrget2(): completed for  block <0x0005 : 0x01400081> objd: 0x000115d3 
FETCH #391723720:c=0,e=3865,p=128,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=2560505625,tim=90982740990
WAIT #391723720: nam='SQL*Net message from client' ela= 93 driver id=1413697536 #bytes=1 p3=0 obj#=71123 tim=90982741103
ktrget2(): started for block  <0x0005 : 0x01400081> objd: 0x000115d3 env [0x000000001759AF24]: (scn: 0x0000.0048429e  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.00000000  ma-scn: 0x0000.00484274  flg: 0x00000660) 
ktrexf(): returning 9 on:  0000000012658C48  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0 
ktrgcm(): completed for block  <0x0005 : 0x01400081> objd: 0x000115d3 ktrget2(): completed for  block <0x0005 : 0x01400081> objd: 0x000115d3
WAIT #391723720: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=71123 tim=90982741172
FETCH #391723720:c=0,e=55,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=1,plh=2560505625,tim=90982741183
...
WAIT #391723720: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=71123 tim=90983507550
ktrget2(): started for block  <0x0005 : 0x014001ac> objd: 0x000115d3 env [0x000000001759AF24]: (scn: 0x0000.0048429e  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.00000000  ma-scn: 0x0000.00484274  flg: 0x00000660) 
ktrexf(): returning 9 on:  0000000012658C48  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0 
ktrgcm(): completed for block  <0x0005 : 0x014001ac> objd: 0x000115d3 ktrget2(): completed for  block <0x0005 : 0x014001ac> objd: 0x000115d3
FETCH #391723720:c=0,e=88,p=0,cr=2,cu=0,mis=0,r=10,dep=0,og=1,plh=2560505625,tim=90983507599
STAT #391723720 id=1 cnt=100001 pid=0 pos=1 obj=71123 op='TABLE ACCESS FULL T4 (cr=6967 pr=301 pw=0 time=303617 us cost=84 size=1700000 card=100000)'
WAIT #391723720: nam='SQL*Net message from client' ela= 530 driver id=1413697536 #bytes=1 p3=0 obj#=71123 tim=90983508174

In the above, we see various entries that begin with ktrget2(), ktrgcm(), and ktrexf().  Those entries do not normally appear in a 10046 trace file, so we know that Oracle Database will write 10200 trace information for consistent gets to a trace file when that event is enabled.  The information conveyed by the additional lines found in the trace file is partially explained here.

Let’s try a second test that reproduces what the OP was attempting to investigate.  In session 2 (the SYS session), let’s flush the buffer cache, and query X$BH until the query results stabilize:

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT
  TS#,
  FILE#,
  DBARFIL,
  DBABLK,
  STATE
FROM
  X$BH
WHERE
  STATE<>0
ORDER BY
  TS#,
  FILE#,
  DBABLK;

/

/ 

Once stabilized, the query output for my test appeared as follows (the same blocks as were seen at the start of the previous test above):

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1 

Quickly switching to session 1, execute the following (then quickly jump to session 2 again):

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'T4_INSERT';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

INSERT INTO T4 VALUES(2,NULL,NULL);

SELECT
  SYSDATE
FROM
  DUAL;

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT OFF'; 

We will later switch back to session 1 to issue a ROLLBACK.  Switch to session 2 (the SYS session) and re-execute the query of X$BH to determine which blocks are in the buffer cache, and the state of those blocks.

The output that I received in session 2 follows (with a comment that counts the blocks that were added to the buffer cache):

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1
   2      3          3        224      1   * 1
   2      3          3        230      1   * 2
   5      5          5        128      1   * 3
   5      5          5        427      1   * 4
   5      5          5        428      1   * 5 

7 rows selected.

In the above, notice that 5 blocks were added to the buffer cache, all with a STATE of XCUR (the current version of the blocks).

The statistics that were output in session 1 follow:

SQL> INSERT INTO T4 VALUES(2,NULL,NULL);

1 row created.

Statistics
---------------------------------------------------
          0  recursive calls
          3  db block gets
          1  consistent gets
         35  physical reads
          0  redo size
        557  bytes sent via SQL*Net to client
        501  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed 

Notice in the above that the statistics show 35 blocks read from disk (physical reads), with 1 block read by consistent read (consistent gets), and 3 current mode reads (db block gets).  The OP mentioned finding similar output.  Let’s take a look at a portion of the 10200/10046 trace file (with comments that tie back to the output from X$BH):

PARSING IN CURSOR #392100208 len=34 dep=0 uid=62 oct=2 lid=62 tim=91041503275 hv=544833741 ad='7ffb77546b8' sqlid='96mny1hh7m06d'
INSERT INTO T4 VALUES(2,NULL,NULL)
END OF STMT
PARSE #392100208:c=0,e=23,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=91041503275
WAIT #392100208: nam='db file sequential read' ela= 175 file#=5 block#=128 blocks=1 obj#=71123 tim=91041503526     * 3
WAIT #392100208: nam='db file scattered read' ela= 971 file#=5 block#=427 blocks=32 obj#=71123 tim=91041504601     * 4 and 5
WAIT #392100208: nam='db file sequential read' ela= 144 file#=3 block#=224 blocks=1 obj#=0 tim=91041504795         * 1
WAIT #392100208: nam='db file sequential read' ela= 142 file#=3 block#=230 blocks=1 obj#=0 tim=91041504992         * 2
EXEC #392100208:c=0,e=1749,p=35,cr=1,cu=3,mis=0,r=1,dep=0,og=1,plh=0,tim=91041505052                               * One Consistent Read, 3 Current Mode Reads
STAT #392100208 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=1 pr=35 pw=0 time=1729 us)'          * One Consistent Read
...
CLOSE #392100208:c=0,e=3,dep=0,type=1,tim=91041508828 

Note in the above that there is no 10200 trace information that indicates which block was read by consistent read – the problem posed by the OP.  What happened?  Which block was read by consistent read?

Before we forget, roll back the INSERT in session 1:

ROLLBACK; 

A search found this Oracle-L post that led to another blog article that suggested making the following change and bouncing the database (note: do not change hidden Oracle parameters in a production environment without the consent of Oracle Support):

ALTER SYSTEM SET "_TRACE_PIN_TIME"=1 SCOPE=SPFILE;

Go through the same process as outlined above of connecting the sessions, creating the table, and making certain that hard parses do not confuse the output.

In session 2 (the SYS session), flush the buffer cache, determine which blocks are in the buffer cache (execute a couple of times until the output stabilizes):

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT
  TS#,
  FILE#,
  DBARFIL,
  DBABLK,
  STATE
FROM
  X$BH
WHERE
  STATE<>0
ORDER BY
  TS#,
  FILE#,
  DBABLK;

/

/

Just as shown earlier in this article, I again received the following output:

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1 

Quickly switch to session 1 and execute the following (prepare to switch back to session 2):

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'T4_NO_INDEX2';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

INSERT INTO T4 VALUES(2,NULL,NULL);

In session 2 (the SYS session), re-execute the query of X$BH to check the contents of the buffer cache.

The output that I received follows (note that the numbers to the right of the lines correspond to the order in which the physical block reads were unpinned, as it appeared in the trace file that follows):

 TS#  FILE#    DBARFIL     DBABLK  STATE
---- ------ ---------- ---------- ------
   0      1          1       2016      1
   0      1          1       2017      1
   2      3          3        192      1  *  3
   2      3          3      16029      1  *  1
   5      5          5        128      1  *  X
   5      5          5        427      1  *  2
   5      5          5        428      1  *  2 

A portion of the 10200/10046 trace file follows (with the unpinning order numbered):

PARSING IN CURSOR #367514528 len=34 dep=0 uid=62 oct=2 lid=62 tim=21584856863 hv=544833741 ad='7ffb77d8140' sqlid='96mny1hh7m06d'
INSERT INTO T4 VALUES(2,NULL,NULL)
END OF STMT
PARSE #367514528:c=0,e=489,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=21584856863
WAIT #367514528: nam='db file sequential read' ela= 156 file#=5 block#=128 blocks=1 obj#=71114 tim=21584857108  X ****
pin ktswh28: ktsgsp dba 0x1400080:4 time 110020667                                                              X Pin
WAIT #367514528: nam='db file scattered read' ela= 935 file#=5 block#=427 blocks=32 obj#=71114 tim=21584858155  2 ****
pin ktswh72: ktsbget dba 0x14001ab:1 time 110021704                                                             2 Pin
WAIT #367514528: nam='db file sequential read' ela= 132 file#=3 block#=192 blocks=1 obj#=0 tim=21584858344      3 ****
pin ktuwh59: ktugus:ktubnd dba 0xc000c0:25 time 110021892                                                       3 Pin
WAIT #367514528: nam='db file sequential read' ela= 130 file#=3 block#=16029 blocks=1 obj#=0 tim=21584858538    1 ****
pin ktuwh09: ktugfb dba 0xc03e9d:26 time 110022085                                                              1 Pin
pin release        18 ktuwh09: ktugfb dba 0xc03e9d:26                                                           1 Unpin
pin release       428 ktswh72: ktsbget dba 0x14001ab:1                                                          2 Unpin
pin release       246 ktuwh59: ktugus:ktubnd dba 0xc000c0:25                                                    3 Unpin
EXEC #367514528:c=0,e=1731,p=35,cr=1,cu=3,mis=0,r=1,dep=0,og=1,plh=0,tim=21584858629
STAT #367514528 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=1 pr=35 pw=0 time=1703 us)'
WAIT #367514528: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=0 tim=21584858681 

In the above, it appears that block 128 was read from file #5, was pinned, and then was still pinned when the EXEC and STAT lines were written to the trace file.  I do not claim to fully understand what appears above – maybe someone can confirm that in fact block 128 was still pinned.  Let’s determine the header block for table T4:

SELECT
  HEADER_FILE,
  HEADER_BLOCK
FROM
  DBA_SEGMENTS
WHERE
  SEGMENT_NAME='T4';

HEADER_FILE HEADER_BLOCK
----------- ------------
          5          128 

Block 128 in file #5 is the segment header block for the T4 test table. I suspect that it is the segment header block that is accounting for the one consistent read in the EXEC and STAT lines. My best guess is that the session is re-reading that block while it is still pinned, and is thus considering that read as a consistent get, even though the session already has the block pinned. That re-read after being pinned could also explain why a 10200 trace would not show that consistent read.

Opinions – or a better explanation?