Sequence Driven Primary Keys – Which is Better: Call NextVal in the Insert Statement or in a Row Level Trigger?


March 25, 2011 (Updated March 26, 2011)

Occasionally I see interesting questions asked about Oracle Database, and it is nice when the person asking the question provides a test case – or at the very least the DDL and DML statements needed to recreate an environment that matches the question.  The question asked in this instance is whether calling an Oracle sequence's NEXTVAL property directly in an INSERT statement is faster/more efficient than relying on a row level trigger to set the primary key value from the sequence's NEXTVAL property.  Just as important, a secondary question of "why" was asked.

A slightly modified version of the test script follows:

CREATE TABLE ST ( C1 NUMBER PRIMARY KEY, C2 VARCHAR2(20) );
CREATE SEQUENCE S1 START WITH 1 CACHE 1000;

CREATE OR REPLACE TRIGGER TR1
    BEFORE INSERT
    ON ST
    REFERENCING NEW AS NEW
    FOR EACH ROW
    BEGIN
    SELECT S1.NEXTVAL INTO :NEW.C1 FROM DUAL;
    END;
/   

SET TIMING ON

BEGIN
INSERT INTO ST(C2)
SELECT
     RPAD('A',11,'A')
  FROM
    DUAL
  CONNECT BY
    LEVEL <= 100000;
END;
/   

COMMIT;   

CREATE TABLE NVT ( C1 NUMBER PRIMARY KEY, C2 VARCHAR2(20));
CREATE SEQUENCE S2;

SET TIMING ON

BEGIN
INSERT INTO NVT(C1,C2)
SELECT S2.NEXTVAL,
     RPAD('A',11,'A')
  FROM
    DUAL
  CONNECT BY
    LEVEL <= 100000;
END;
/   

COMMIT;

DROP TABLE ST PURGE;
DROP TABLE NVT PURGE;
DROP SEQUENCE S1;
DROP SEQUENCE S2; 

The above appears to be a good starting point to determine if there is a difference in performance, and by roughly how much the elapsed time will be affected.  So, what were the timing results of the above script?

Row Level Trigger:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO ST(C2)
  3  SELECT
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 100000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.24  

Directly Calling NEXTVAL in the INSERT Statement:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO NVT(C1,C2)
  3  SELECT S2.NEXTVAL,
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 100000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.82  

Not too bad: 5.17 times faster in this particular test database (only 3.17 times faster in another database instance: ~7.79 seconds compared to 2.46 seconds) when directly accessing the sequence's NEXTVAL property in the INSERT statement.  Is this a fair test case?  Take another look at the two sequence definitions.  Notice that the first sequence, which is used by the trigger, has a CACHE size of 1,000, while the second sequence has the default cache size of 20.  Does that really matter?  Let's test to find out.  The creation statement for the S2 sequence will be changed to the following:

CREATE SEQUENCE S2 START WITH 1 CACHE 1000; 

Row Level Trigger:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO ST(C2)
  3  SELECT
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 100000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.32 

Directly Calling NEXTVAL in the INSERT Statement:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO NVT(C1,C2)
  3  SELECT S2.NEXTVAL,
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 100000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.40 

That helped to level the playing field a bit; now the direct method is 10.8 times faster than the row level trigger (only 7.29 times faster in another database instance: ~7.73 seconds compared to 1.06 seconds).

There might still be a couple of questions about the test case – is it a fair test?  What if we increase the number of rows inserted by a factor of 10 (append another 0 to the number specified in the CONNECT BY LEVEL clause) and use the original script with the (originally specified) different CACHE values for the sequences?  Let's take a look at the results:

Row Level Trigger:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO ST(C2)
  3  SELECT
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 1000000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:42.63 

Directly Calling NEXTVAL in the INSERT Statement:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO NVT(C1,C2)
  3  SELECT S2.NEXTVAL,
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 1000000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:08.08 

This change to the test case seemed to scale nearly linearly: 5.28 times faster in this particular test database (only 3.29 times faster in another database instance: ~81.23 seconds compared to 24.70 seconds).  Let's fix the sequence definition for the second sequence and try the test again:
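To clarify, "fixing" the sequence definition here means re-running the full script with the 1,000,000 row CONNECT BY LEVEL limit, but with the second sequence created with the larger cache shown earlier:

CREATE SEQUENCE S2 START WITH 1 CACHE 1000;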

Row Level Trigger:

SQL> BEGIN
  2  INSERT INTO ST(C2)
  3  SELECT
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 1000000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:42.24 

Directly Calling NEXTVAL in the INSERT Statement:

SQL> SET TIMING ON
SQL>
SQL> BEGIN
  2  INSERT INTO NVT(C1,C2)
  3  SELECT S2.NEXTVAL,
  4       RPAD('A',11,'A')
  5    FROM
  6      DUAL
  7    CONNECT BY
  8      LEVEL <= 1000000;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.93 

Once again, having a suitable CACHE value for the sequence helped.  Now the direct method of specifying the primary key value is 10.75 times faster than the row level trigger (only 7.86 times faster in another database instance: ~80.64 seconds compared to 10.26 seconds).

Are there still minor issues with the test case?  Possibly – how often are rows normally inserted into production database tables using the CONNECT BY LEVEL syntax?  Maybe expanding the test case to use an INSERT INTO … SELECT type syntax would help further differentiate the two approaches.  Maybe expanding the test case to use 100,000 single row INSERT statements would be a reasonable reflection of a real-world test?  Maybe expanding the test case to include other operating system platforms would show a more significant difference in performance (the above test cases were performed on 64 bit Windows – context switching is typically less severe on Windows)?  Maybe expanding the test case to include other Oracle Database release versions would show different performance results (the above test cases were performed on Oracle Database 11.2.0.2 and 10.2.0.2)?  What about monitoring the differences in the session's CPU (or other resource) consumption?  What about monitoring the differences when multiple sessions concurrently perform inserts?
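For the CPU (and other resource) question, one simple starting point is to capture a few session-level statistics immediately before and after each test and compare the values – the addendum below uses this same approach with a longer statistic list.  For example:

SELECT
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','DB time','redo size');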

I do not have the complete answer as to why the method using the sequence's NEXTVAL property in the INSERT statement is 10 times faster.  Context switching between the SQL and PL/SQL engines caused by the row level trigger is certainly part of the reason for the longer execution time.  What else could impact the execution time?  Think about some of the extra bits of low-level program instruction code that Oracle Database must execute to set up the PL/SQL environment so that the values that are to be inserted are copied into the NEW object.  Then the execution of the trigger code needs to open a cursor (with an initial hard parse, and later executions likely satisfied from the session's cached cursors) to return the value of S1.NEXTVAL into the NEW object, add the cursor to the session's cached cursors, and finally transfer the NEW object values into rows in the table blocks.  Think about what extra low-level program instruction code we simply avoid by just telling Oracle Database to fetch the S2.NEXTVAL value from the server's memory.
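As a side note, starting with Oracle Database 11.1 the SELECT … FROM DUAL in the trigger body may be replaced with a direct PL/SQL assignment, which eliminates the recursive query (although not the context switch into the PL/SQL engine itself).  A sketch of that variant of the trigger:

CREATE OR REPLACE TRIGGER TR1
    BEFORE INSERT
    ON ST
    REFERENCING NEW AS NEW
    FOR EACH ROW
    BEGIN
    :NEW.C1 := S1.NEXTVAL;  -- direct assignment, no SELECT ... FROM DUAL (11.1 and later)
    END;
/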

Anyone else have a better explanation for the performance difference between the two methods of populating the primary key values?  Are there any good reasons why someone should use the row level trigger method to populate the primary key values?
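One commonly cited reason for keeping the trigger is that it guarantees the primary key is populated no matter which application performs the insert.  If that protection is desired, a conditional trigger such as the following sketch (note the WHEN clause) only executes its body when no value is supplied, so statements that reference the sequence's NEXTVAL directly still avoid the trigger body:

CREATE OR REPLACE TRIGGER TR1
    BEFORE INSERT
    ON ST
    REFERENCING NEW AS NEW
    FOR EACH ROW
    WHEN (NEW.C1 IS NULL)
    BEGIN
    SELECT S1.NEXTVAL INTO :NEW.C1 FROM DUAL;
    END;
/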

———

Added March 26, 2011 in response to Narendra’s test case that is attached to this article.

I thought that it might be interesting to look at how changing from the CONNECT BY LEVEL syntax to single row inserts would affect the elapsed time, CPU utilization, redo generation, and number of current mode block gets.  As you will see, there is not a straightforward answer – the effects are different for different Oracle release versions.  Here is the script that will be used:

SET ECHO ON

CREATE GLOBAL TEMPORARY TABLE TEMP_STAT ON COMMIT PRESERVE ROWS AS
SELECT
  0 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

CREATE TABLE T1 ( C1 NUMBER PRIMARY KEY, C2 VARCHAR2(20) );
CREATE SEQUENCE T1_SEQUENCE START WITH 1 CACHE 1000;

CREATE OR REPLACE TRIGGER T1_TRIGGER
    BEFORE INSERT
    ON T1
    REFERENCING NEW AS NEW
    FOR EACH ROW
    BEGIN
    SELECT T1_SEQUENCE.NEXTVAL INTO :NEW.C1 FROM DUAL;
    END;
/  

CREATE TABLE T2 ( C1 NUMBER PRIMARY KEY, C2 VARCHAR2(20));
CREATE SEQUENCE T2_SEQUENCE START WITH 1 CACHE 1000;

SET TIMING ON

INSERT INTO TEMP_STAT
SELECT
  1 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

DECLARE
  V VARCHAR2(11);
BEGIN
  V := RPAD('A',11,'A');
  FOR I IN 1 .. 1000000 LOOP
    INSERT INTO T1(C2) VALUES (V);
  END LOOP;
END;
/  

INSERT INTO TEMP_STAT
SELECT
  2 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

TRUNCATE TABLE T1;

INSERT INTO TEMP_STAT
SELECT
  3 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

BEGIN
INSERT INTO T1(C2)
  SELECT
     RPAD('A',11,'A')
  FROM
    DUAL
  CONNECT BY
    LEVEL <= 1000000;
END;
/

INSERT INTO TEMP_STAT
SELECT
  4 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

COMMIT;  

SET TIMING ON

INSERT INTO TEMP_STAT
SELECT
  5 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

DECLARE
  V VARCHAR2(11);
BEGIN
  V := RPAD('A',11,'A');
  FOR I IN 1 .. 1000000 LOOP
    INSERT INTO T2(C1,C2) VALUES (T2_SEQUENCE.NEXTVAL,V);
  END LOOP;
END;
/  

INSERT INTO TEMP_STAT
SELECT
  6 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

TRUNCATE TABLE T2;

INSERT INTO TEMP_STAT
SELECT
  7 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

BEGIN
INSERT INTO T2(C1,C2)
SELECT T2_SEQUENCE.NEXTVAL,
     RPAD('A',11,'A')
  FROM
    DUAL
  CONNECT BY
    LEVEL <= 1000000;
END;
/

INSERT INTO TEMP_STAT
SELECT
  8 CNT,
  SN.NAME,
  MS.VALUE
FROM
  V$STATNAME SN,
  V$MYSTAT MS
WHERE
  SN.STATISTIC#=MS.STATISTIC#
  AND SN.NAME IN ('CPU used by this session','consistent gets','db block gets','parse count (hard)','parse count (total)','recursive calls','redo size','DB time');

COMMIT;

DROP TABLE T1 PURGE;
DROP TABLE T2 PURGE;
DROP SEQUENCE T1_SEQUENCE;
DROP SEQUENCE T2_SEQUENCE;

COLUMN NAME FORMAT A28
COLUMN DELTA FORMAT 999,999,990
SET PAGESIZE 1000

SELECT
  TS2.CNT,
  TS2.NAME,
  TS2.VALUE-TS1.VALUE DELTA
FROM
  TEMP_STAT TS1,
  TEMP_STAT TS2
WHERE
  TS2.CNT=TS1.CNT+1
  AND TS2.NAME=TS1.NAME
ORDER BY
  TS2.CNT,
  TS2.NAME;

TRUNCATE TABLE TEMP_STAT; 

As you can see, the script periodically captures several statistics into a global temporary table, and at the end it outputs the delta values of those statistics.  From the delta values, we will be interested in the statistics from CNT 2 (SINGLE ROW WITH TRIGGER), CNT 4 (CONNECT BY WITH TRIGGER), CNT 6 (SINGLE ROW WITHOUT TRIGGER), and CNT 8 (CONNECT BY WITHOUT TRIGGER).

I executed the script on Oracle Database 10.2.0.5 (64 bit Windows), 11.1.0.7 (64 bit Windows), 11.2.0.1 (64 bit Windows), 11.1.0.6 (64 bit Linux), and 11.2.0.1 (64 bit Linux) using systems with identical hardware for both operating system platforms.  I also executed the script on 11.2.0.2 (64 bit Windows – different server #2) and 10.2.0.2 (64 bit Windows – different server #3).  The results were interesting – take a close look at the results for 11.1.0.6 running on 64 bit Linux:





Nested Loops Join – the Smaller Table is the Driving Table, the Larger Table is the Driving Table


March 21, 2011

I occasionally see discussions about Oracle Database behavior that make me wonder: is it true, can I generate a test case that validates the statement, and, just as important, can I generate a test case that refutes the statement?  An interesting question was posted to the OTN forums regarding apparently conflicting advice that the original poster had received about which table Oracle Database's optimizer would select as the driving table in a nested loops join; if there is a large table and a small table to be joined by nested loops, which will be the driving (outer) table?  Jonathan Lewis provided a good deal of insight in the thread, indicating that the optimizer's decision is not as clear cut as suggested by the OP's sources.

Not as clear cut as "the optimizer always selects the smaller table as the driving (outer) table"?  Maybe the documentation is able to provide a little more insight into the topic:

“The optimizer uses nested loop joins when joining small number of rows, with a good driving condition between the two tables. You drive from the outer loop to the inner loop, so the order of tables in the execution plan is important.

The outer loop is the driving row source. It produces a set of rows for driving the join condition. The row source can be a table accessed using an index scan or a full table scan. Also, the rows can be produced from any other operation. For example, the output from a nested loop join can be used as a row source for another nested loop join.

The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. If the access path for the inner loop is not dependent on the outer loop, then you can end up with a Cartesian product; for every iteration of the outer loop, the inner loop produces the same set of rows. Therefore, you should use other join methods when two independent row sources are joined together.”

I guess that helps a little – the driving table (or row source) is the outer table, and the inner table (or row source) is checked for every row returned by the driving table (after the specified filter predicates are applied to the driving table).  However, the documentation does not explicitly state that the outer table must be the smaller table.  Let’s check a couple of books:

The books were mostly in agreement, except for the last two.  Since there appears to be some disagreement about whether the optimizer selects the smaller table or the larger table as the driving table, we need a test case (as Sean repeatedly requested in the OTN thread).
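Before building the test case, the nested loops processing described in the documentation might be restated as a rough PL/SQL sketch – conceptual only, this is not how the database internally executes the join – using the T3 (inner) and T4 (driving) tables that are created below:

SET SERVEROUTPUT ON

DECLARE
  V_MATCHES NUMBER := 0;
BEGIN
  -- outer (driving) row source: every row that survives the filter predicates on T4
  FOR OUTER_ROW IN (SELECT C1, C2 FROM T4) LOOP
    -- inner row source: probed once per driving row, ideally through an index on the join column
    FOR INNER_ROW IN (SELECT C1, C2 FROM T3 WHERE T3.C1 = OUTER_ROW.C1) LOOP
      V_MATCHES := V_MATCHES + 1;  -- each combined row would be returned to the parent operation here
    END LOOP;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Joined rows: ' || V_MATCHES);
END;
/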

The following noworkload system statistics will be used:

SELECT
  PNAME,
  PVAL1
FROM
  SYS.AUX_STATS$
ORDER BY
  PNAME;

PNAME                PVAL1
--------------- ----------
CPUSPEED
CPUSPEEDNW      2116.57559
DSTART
DSTOP
FLAGS                    1
IOSEEKTIM               10
IOTFRSPEED            4096
MAXTHR
MBRC
MREADTIM
SLAVETHR
SREADTIM
STATUS 

First the tables.  We will start simple, with one table (T3) having 100 rows and another table (T4) having 10 rows:

CREATE TABLE T3 (
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 VARCHAR2(20),
  PADDING VARCHAR2(200));

CREATE TABLE T4 (
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 VARCHAR2(20),
  PADDING VARCHAR2(200));

INSERT INTO
  T3
SELECT
  ROWNUM C1,
  1000000-ROWNUM C2,
  MOD(ROWNUM-1,1000) C3,
  TO_CHAR(SYSDATE+MOD(ROWNUM-1,10000),'DAY') C4,
  LPAD(' ',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

INSERT INTO
  T4
SELECT
  ROWNUM C1,
  1000000-ROWNUM C2,
  MOD(ROWNUM-1,1000) C3,
  TO_CHAR(SYSDATE+MOD(ROWNUM-1,10000),'DAY') C4,
  LPAD(' ',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

COMMIT;

CREATE INDEX IND_T3_C1 ON T3(C1);
CREATE INDEX IND_T3_C2 ON T3(C2);
CREATE INDEX IND_T3_C3 ON T3(C3);

CREATE INDEX IND_T4_C1 ON T4(C1);
CREATE INDEX IND_T4_C2 ON T4(C2);
CREATE INDEX IND_T4_C3 ON T4(C3);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T4',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL) 

We are able to easily produce an example where the “smallest” table is selected as the driving table (note that I had to add a hint to specify a nested loop join in several of these examples):

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT /*+ USE_NL(T3 T4) */
  T3.C1,
  T3.C2,
  T3.C3,
  T3.C4,
  T4.C1,
  T4.C2,
  T4.C3,
  T4.C4
FROM
  T3,
  T4
WHERE
  T3.C1=T4.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 567778651

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |    10 |   420 |    13   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |           |       |       |            |          |
|   2 |   NESTED LOOPS               |           |    10 |   420 |    13   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | T4        |    10 |   210 |     3   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | IND_T3_C1 |     1 |       |     0   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T3        |     1 |    21 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T3"."C1"="T4"."C1") 

If we stop at that point, we could declare quite simply that the optimizer selects the smaller table as the driving table.  But wait a minute – take a look at this example, where the optimizer selected the larger table as the driving table:

SELECT /*+ USE_NL(T3 T4) */
  T3.C1,
  T3.C2,
  T3.C3,
  T3.C4,
  T4.C1,
  T4.C2,
  T4.C3,
  T4.C4
FROM
  T3,
  T4
WHERE
  T3.C1=T4.C1
  AND T3.C2=1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4214127300

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |     1 |    42 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                 |           |       |       |            |          |
|   2 |   NESTED LOOPS                |           |     1 |    42 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T3        |     1 |    21 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | IND_T3_C2 |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | IND_T4_C1 |     1 |       |     0   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T4        |     1 |    21 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T3"."C2"=1)
   5 - access("T3"."C1"="T4"."C1") 

The above execution plans were generated on 11.2.0.2, which sometimes differs a bit from older Oracle Database release versions when nested loops joins are used (note the two NESTED LOOPS operations); however, we are able to hint the optimizer to generate the older style nested loops join:

SELECT /*+ USE_NL(T3 T4) OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */
  T3.C1,
  T3.C2,
  T3.C3,
  T3.C4,
  T4.C1,
  T4.C2,
  T4.C3,
  T4.C4
FROM
  T3,
  T4
WHERE
  T3.C1=T4.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2465588182

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |    10 |   420 |    13   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T3        |     1 |    21 |     1   (0)| 00:00:01 |
|   2 |   NESTED LOOPS              |           |    10 |   420 |    13   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL        | T4        |    10 |   210 |     3   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN         | IND_T3_C1 |     1 |       |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T3"."C1"="T4"."C1") 

 

SELECT /*+ USE_NL(T3 T4) OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */
  T3.C1,
  T3.C2,
  T3.C3,
  T3.C4,
  T4.C1,
  T4.C2,
  T4.C3,
  T4.C4
FROM
  T3,
  T4
WHERE
  T3.C1=T4.C1
  AND T3.C2=1;

Execution Plan
----------------------------------------------------------
Plan hash value: 3446668716

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |     1 |    42 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | T4        |     1 |    21 |     1   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |           |     1 |    42 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T3        |     1 |    21 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | IND_T3_C2 |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | IND_T4_C1 |     1 |       |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T3"."C2"=1)
   5 - access("T3"."C1"="T4"."C1") 

We found one case where the larger table was selected as the driving table, so the books and articles that state absolutely that the smallest table will be the driving table are not completely correct.  Maybe the larger table is only selected as the driving table when both tables are small?  Let's test that theory by creating a couple more tables:

SET AUTOTRACE OFF

CREATE TABLE T1 (
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 VARCHAR2(20),
  PADDING VARCHAR2(200));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 VARCHAR2(20),
  PADDING VARCHAR2(200));

INSERT INTO
  T1
SELECT
  ROWNUM C1,
  1000000-ROWNUM C2,
  MOD(ROWNUM-1,1000) C3,
  TO_CHAR(SYSDATE+MOD(ROWNUM-1,10000),'DAY') C4,
  LPAD(' ',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

INSERT INTO
  T2
SELECT
  ROWNUM C1,
  1000000-ROWNUM C2,
  MOD(ROWNUM-1,1000) C3,
  TO_CHAR(SYSDATE+MOD(ROWNUM-1,10000),'DAY') C4,
  LPAD(' ',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

COMMIT;

CREATE INDEX IND_T1_C1 ON T1(C1);
CREATE INDEX IND_T1_C2 ON T1(C2);
CREATE INDEX IND_T1_C3 ON T1(C3);

CREATE INDEX IND_T2_C1 ON T2(C1);
CREATE INDEX IND_T2_C2 ON T2(C2);
CREATE INDEX IND_T2_C3 ON T2(C3);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL) 

The above script created table T1 with 1,000,000 rows and table T2 with 100,000 rows.  We will now use queries that are similar to those that were used with the 100 and 10 row tables.

The smaller table (T2) as the driving table:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT /*+ USE_NL(T1 T2) */
  T1.C1,
  T1.C2,
  T1.C3,
  T1.C4,
  T2.C1,
  T2.C2,
  T2.C3,
  T2.C4
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2610346857

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |   100K|  4687K|   300K  (1)| 01:00:12 |
|   1 |  NESTED LOOPS                |           |       |       |            |          |
|   2 |   NESTED LOOPS               |           |   100K|  4687K|   300K  (1)| 01:00:12 |
|   3 |    TABLE ACCESS FULL         | T2        |   100K|  2343K|   889   (1)| 00:00:11 |
|*  4 |    INDEX RANGE SCAN          | IND_T1_C1 |     1 |       |     2   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T1        |     1 |    24 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."C1"="T2"."C1") 

The larger table as the driving table:

SELECT /*+ USE_NL(T1 T2) */
  T1.C1,
  T1.C2,
  T1.C3,
  T1.C4,
  T2.C1,
  T2.C2,
  T2.C3,
  T2.C4
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2 BETWEEN 1 AND 10000;

Execution Plan
----------------------------------------------------------
Plan hash value: 2331401024

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           | 10001 |   468K| 11353   (1)| 00:02:17 |
|   1 |  NESTED LOOPS                 |           |       |       |            |          |
|   2 |   NESTED LOOPS                |           | 10001 |   468K| 11353   (1)| 00:02:17 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1        | 10001 |   234K|   348   (0)| 00:00:05 |
|*  4 |     INDEX RANGE SCAN          | IND_T1_C2 | 10001 |       |    25   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | IND_T2_C1 |     1 |       |     1   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T2        |     1 |    24 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."C2">=1 AND "T1"."C2"<=10000)
   5 - access("T1"."C1"="T2"."C1") 

So, what is happening?  Is it simply the case that it is the expected number of rows that will be returned from each table that determines which table will be the driving table?  Let’s test:

SET AUTOTRACE OFF

SELECT
  COUNT(*)
FROM
  T1
WHERE
  T1.C1 BETWEEN 890000 AND 1000000;

  COUNT(*)
----------
    110001

SELECT
  COUNT(*)
FROM
  T2
WHERE
 T2.C2 BETWEEN 900000 AND 1000000;

  COUNT(*)
----------
    100000 

The above shows that if we specify T1.C1 BETWEEN 890000 AND 1000000 in the WHERE clause, 110,001 rows from the larger table match the criteria.  If we specify T2.C2 BETWEEN 900000 AND 1000000 in the WHERE clause, 100,000 rows from the smaller table match the criteria.  If we execute the following query, which table will be the driving table: the 10 times larger T1 table, from which we are retrieving 110,001 rows, or the smaller T2 table, from which we are retrieving 100,000 rows?

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT /*+ USE_NL(T1 T2) */
  T1.C1,
  T1.C2,
  T1.C3,
  T1.C4,
  T2.C1,
  T2.C2,
  T2.C3,
  T2.C4
FROM
  T1,
  T2
WHERE
  T1.C3=T2.C3
  AND T1.C1 BETWEEN 890000 AND 1000000
  AND T2.C2 BETWEEN 900000 AND 1000000; 

This is the result that I received, which seems to demonstrate that it is not just the size of the tables, nor is it the number of expected rows to be returned from the tables, that determines which table will be the driving table:

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |    11M|   503M|    11M  (1)| 37:03:27 |
|   1 |  NESTED LOOPS                 |           |       |       |            |          |
|   2 |   NESTED LOOPS                |           |    11M|   503M|    11M  (1)| 37:03:27 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1        |   110K|  2578K|  3799   (1)| 00:00:46 |
|*  4 |     INDEX RANGE SCAN          | IND_T1_C1 |   110K|       |   248   (1)| 00:00:03 |
|*  5 |    INDEX RANGE SCAN           | IND_T2_C3 |   100 |       |     1   (0)| 00:00:01 |
|*  6 |   TABLE ACCESS BY INDEX ROWID | T2        |   100 |  2400 |   101   (0)| 00:00:02 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."C1">=890000 AND "T1"."C1"<=1000000)
   5 - access("T1"."C3"="T2"."C3")
   6 - filter("T2"."C2">=900000 AND "T2"."C2"<=1000000) 

110,001 rows from T1 is still somewhat close in number to the 100,000 rows from T2, so let’s try an experiment selecting 992,701 rows from T1:

SELECT /*+ USE_NL(T1 T2) */
  T1.C1,
  T1.C2,
  T1.C3,
  T1.C4,
  T2.C1,
  T2.C2,
  T2.C3,
  T2.C4
FROM
  T1,
  T2
WHERE
  T1.C3=T2.C3
  AND T1.C1 BETWEEN 7300 AND 1000000
  AND T2.C2 BETWEEN 900000 AND 1000000;

Execution Plan
----------------------------------------------------------
Plan hash value: 3718770616

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |    99M|  4544M|   100M  (1)|334:20:23 |
|   1 |  NESTED LOOPS                |           |       |       |            |          |
|   2 |   NESTED LOOPS               |           |    99M|  4544M|   100M  (1)|334:20:23 |
|*  3 |    TABLE ACCESS FULL         | T1        |   992K|    22M|  8835   (1)| 00:01:47 |
|*  4 |    INDEX RANGE SCAN          | IND_T2_C3 |   100 |       |     1   (0)| 00:00:01 |
|*  5 |   TABLE ACCESS BY INDEX ROWID| T2        |   100 |  2400 |   101   (0)| 00:00:02 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."C1">=7300 AND "T1"."C1"<=1000000)
   4 - access("T1"."C3"="T2"."C3")
   5 - filter("T2"."C2">=900000 AND "T2"."C2"<=1000000) 

As shown above, table T1 is still the driving table in the nested loops join.  Let’s test retrieving 993,001 rows from T1:

SELECT /*+ USE_NL(T1 T2) */
  T1.C1,
  T1.C2,
  T1.C3,
  T1.C4,
  T2.C1,
  T2.C2,
  T2.C3,
  T2.C4
FROM
  T1,
  T2
WHERE
  T1.C3=T2.C3
  AND T1.C1 BETWEEN 7000 AND 1000000
  AND T2.C2 BETWEEN 900000 AND 1000000;

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |    99M|  4545M|   100M  (1)|334:26:13 |
|   1 |  NESTED LOOPS                |           |       |       |            |          |
|   2 |   NESTED LOOPS               |           |    99M|  4545M|   100M  (1)|334:26:13 |
|*  3 |    TABLE ACCESS FULL         | T2        |   100K|  2343K|   889   (1)| 00:00:11 |
|*  4 |    INDEX RANGE SCAN          | IND_T1_C3 |  1000 |       |     3   (0)| 00:00:01 |
|*  5 |   TABLE ACCESS BY INDEX ROWID| T1        |   993 | 23832 |  1003   (0)| 00:00:13 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."C2">=900000 AND "T2"."C2"<=1000000)
   4 - access("T1"."C3"="T2"."C3")
   5 - filter("T1"."C1">=7000 AND "T1"."C1"<=1000000) 

As shown above, table T2 is now the driving table for the nested loops join.  So, there must be other factors beyond table (or, better worded, row source) size and the number of rows that will be retrieved from the tables.  You might be wondering if the CLUSTERING_FACTOR of the indexes also plays a role in determining which table is the driving table:

SET AUTOTRACE OFF
 
SELECT
  TABLE_NAME,
  INDEX_NAME,
  CLUSTERING_FACTOR,
  NUM_ROWS
FROM
  USER_INDEXES
WHERE
  TABLE_NAME IN ('T1','T2')
ORDER BY
  TABLE_NAME,
  INDEX_NAME;

TABLE_NAME INDEX_NAME CLUSTERING_FACTOR   NUM_ROWS
---------- ---------- ----------------- ----------
T1         IND_T1_C1              32259    1000000
T1         IND_T1_C2              32259    1000000
T1         IND_T1_C3            1000000    1000000
T2         IND_T2_C1               3226     100000
T2         IND_T2_C2               3226     100000
T2         IND_T2_C3             100000     100000 

I suggested (without checking) in the OTN thread that the CLUSTERING_FACTOR of the index on column C2 would be higher than the CLUSTERING_FACTOR of the index on column C1 because of the reverse (descending) order in which the C2 column values were inserted into the tables.  Surprisingly (at least to me), the gathered statistics show the CLUSTERING_FACTOR of the C1 and C2 indexes to be the same value for each table, and the CLUSTERING_FACTOR of the C3 indexes to equal the number of rows in each table.  Maybe one of the readers of this blog article can explain what happened with the CLUSTERING_FACTOR.
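For anyone who wants to investigate, comparing the CLUSTERING_FACTOR values above with the number of blocks in each table might be a useful starting point – a CLUSTERING_FACTOR close to the number of table blocks generally indicates that the table rows are well ordered relative to the index, while a value close to the number of rows indicates that adjacent index entries point to scattered table blocks:

SELECT
  TABLE_NAME,
  BLOCKS,
  NUM_ROWS
FROM
  USER_TABLES
WHERE
  TABLE_NAME IN ('T1','T2')
ORDER BY
  TABLE_NAME;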

So, the answer to the OP's question is not as simple as "the Smaller Table is the Driving Table" or "the Larger Table is the Driving Table" – there are other factors involved.  I think that it might be time for a third read through of the book "Cost-Based Oracle Fundamentals".  In the meantime, anyone care to share more insight (yes, we could look inside a 10053 trace file, but there must be at least one other tip that can be provided without referencing such a trace file)?
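For readers who do want to examine the optimizer's arithmetic, a 10053 trace may be enabled for the session and the query of interest re-parsed (the calculations are only written to the trace during a hard parse) – a minimal sketch:

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'NL_DRIVING_TABLE';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

-- hard parse and execute the query of interest here

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';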





What is the Difference Between the FIRST_ROWS Hint and ROWNUM in the WHERE Clause?


March 10, 2011

A couple of days ago I saw an interesting question that asked what the difference is between using the FIRST_ROWS(100) hint in a SQL statement and using ROWNUM<=100 in the WHERE clause, with the original SQL statement moved to an inline view.  For example:

SELECT /*+ FIRST_ROWS(100) */
  C1,
  C2
FROM
  T1
WHERE
  C1 BETWEEN 1 AND 1000
ORDER BY
  C1; 

 

SELECT
  *
FROM
  (SELECT
     C1,
     C2
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 1000
   ORDER BY
     C1)
WHERE
  ROWNUM<=100; 

My first thought was that the difference is actually quite simple to explain, and then I started to wonder: how could someone logically come to the conclusion that the two approaches would yield the same results?  Do developers look at explain plan output?  Could that explain how someone might look at these two approaches and question whether or not they are equivalent?  Let's see:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT /*+ FIRST_ROWS(100) */
  C1,
  C2
FROM
  T1
WHERE
  C1 BETWEEN 1 AND 1000
ORDER BY
  C1;

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |   102 |  1020 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1           |   102 |  1020 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | SYS_C0020704 |  1000 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1">=1 AND "C1"<=1000) 

 

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  *
FROM
  (SELECT
     C1,
     C2
   FROM
     T1
   WHERE
     C1 BETWEEN 1 AND 1000
   ORDER BY
     C1)
WHERE
  ROWNUM<=100;

----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |              |   100 |  2600 |     5   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |              |       |       |            |          |
|   2 |   VIEW                        |              |   102 |  2652 |     5   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1           |   102 |  1020 |     5   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | SYS_C0020704 |  1000 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=100)
   4 - access("C1">=1 AND "C1"<=1000) 

So, the FIRST_ROWS hinted plan shows that 102 rows will be returned, while the plan with ROWNUM in the WHERE clause shows that 100 of 102 rows will be returned.  This is the first clue that the two approaches might not be equivalent.  Other people have written extensively about what the FIRST_ROWS hint (and OPTIMIZER_MODE) means and also how ROWNUM in the WHERE clause works (and a potentially significant bug that is present prior to Oracle Database 11.2.0.1).  Is it possible to simplify the explanation of the difference, something like this?

The short answer is that the FIRST_ROWS hint tells the query optimizer: I really do not care to know if more than 1, 10, 100, or 1000 rows could be returned by the query, just plan the query execution as if my application will only retrieve 1, 10, 100, or 1000 rows – my application might still retrieve all of the rows, but just plan on the specified number being read. 

The ROWNUM predicate in the WHERE clause tells the query optimizer: I really do not care if more than 1, 10, 100, or 1000 rows could be returned by the original query, plan the query execution as if my application will only retrieve 1, 10, 100, or 1000 rows and make certain that my application cannot retrieve more than the specified number of rows.

Simple, maybe just a little too simple?
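As a rough illustration of the FIRST_ROWS side of that description, the row limit (if any) lives in the application code rather than in the SQL statement.  In the following sketch, which uses the T1 table created in the test script below, the cursor could return up to 1,000 rows, but the application chooses to stop fetching after 100:

DECLARE
  CURSOR C IS
    SELECT /*+ FIRST_ROWS(100) */
      C1,
      C2
    FROM
      T1
    WHERE
      C1 BETWEEN 1 AND 1000
    ORDER BY
      C1;
  V_C1 T1.C1%TYPE;
  V_C2 T1.C2%TYPE;
BEGIN
  OPEN C;
  FOR I IN 1 .. 100 LOOP
    FETCH C INTO V_C1, V_C2;
    EXIT WHEN C%NOTFOUND;
    NULL;  -- the application would process the row here
  END LOOP;
  CLOSE C;
END;
/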

Let’s put together a little test script to demonstrate.  First, we will create a table with a primary key index, insert 500,000 rows, and then collect statistics with a 100% sample size:

CREATE TABLE T1(
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 DATE,
  C5 DATE,
  C6 DATE,
  C7 VARCHAR2(20),
  C8 VARCHAR2(20),
  C9 VARCHAR2(20),
  PRIMARY KEY(C1));

INSERT INTO
  T1
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  TRUNC(SYSDATE)+TRUNC(ROWNUM/100),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A')),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'))
FROM
  DUAL
CONNECT BY
  LEVEL<=500000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL) 

Now the test script:

SPOOL TestFirstRowsAndRownum.txt
ALTER SYSTEM FLUSH BUFFER_CACHE;
SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN
SET ARRAYSIZE 1000
SET TIMING ON
SET LINESIZE 140
SET PAGESIZE 1000
SET TRIMSPOOL ON

SELECT /*+ INDEX(T1) */
  C1,
  C2,
  C3,
  C4,
  C5,
  C6
FROM
  T1
WHERE
  C1 BETWEEN 10000 AND 100000;

ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT /*+ INDEX(T1) FIRST_ROWS(100) */
  C1,
  C2,
  C3,
  C4,
  C5,
  C6
FROM
  T1
WHERE
  C1 BETWEEN 10000 AND 100000;

ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT /*+ INDEX(T1) */
  C1,
  C2,
  C3,
  C4,
  C5,
  C6
FROM
  T1
WHERE
  C1 BETWEEN 10000 AND 100000
  AND ROWNUM<=100;

ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT /*+  */
  *
FROM
  (SELECT
    C1,
    C2,
    MAX(C3) C3
  FROM
    T1
  GROUP BY
    C1,
    C2
  ORDER BY
    C1,
    C2)
WHERE
  C1 BETWEEN 10000 AND 100000;

ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT /*+  FIRST_ROWS(100) */
  *
FROM
  (SELECT
    C1,
    C2,
    MAX(C3) C3
  FROM
    T1
  GROUP BY
    C1,
    C2
  ORDER BY
    C1,
    C2)
WHERE
  C1 BETWEEN 10000 AND 100000;

ALTER SYSTEM FLUSH BUFFER_CACHE;

SELECT /*+  */
  *
FROM
  (SELECT
    C1,
    C2,
    MAX(C3) C3
  FROM
    T1
  GROUP BY
    C1,
    C2
  ORDER BY
    C1,
    C2)
WHERE
  C1 BETWEEN 10000 AND 100000
  AND ROWNUM<=100;

SPOOL OFF 

The output of the first query, without the hint and without the ROWNUM restriction:

SQL> SELECT /*+ INDEX(T1) */
  2    C1,
  3    C2,
  4    C3,
  5    C4,
  6    C5,
  7    C6
  8  FROM
  9    T1
 10  WHERE
 11    C1 BETWEEN 10000 AND 100000;

90001 rows selected.

Elapsed: 00:00:00.43

Execution Plan
----------------------------------------------------------
Plan hash value: 2459953871

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              | 90002 |  3252K|  1359   (1)| 00:00:07 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1           | 90002 |  3252K|  1359   (1)| 00:00:07 |
|*  2 |   INDEX RANGE SCAN          | SYS_C0020704 | 90002 |       |   174   (2)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1">=10000 AND "C1"<=100000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1534  consistent gets
       1354  physical reads
          0  redo size
    1994612  bytes sent via SQL*Net to client
       1324  bytes received via SQL*Net from client
         92  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      90001  rows processed 

Note in the plain (unmodified) query that 90,001 rows were returned to SQL*Plus, while the optimizer predicted that 90,002 rows would be returned.  It required 0.43 seconds for the rows to be returned to the SQL*Plus session.

Now the second query of the test case, hinted with FIRST_ROWS:

SQL> SELECT /*+ INDEX(T1) FIRST_ROWS(100) */
  2    C1,
  3    C2,
  4    C3,
  5    C4,
  6    C5,
  7    C6
  8  FROM
  9    T1
 10  WHERE
 11    C1 BETWEEN 10000 AND 100000;

90001 rows selected.

Elapsed: 00:00:00.43

Execution Plan
----------------------------------------------------------
Plan hash value: 2459953871

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |   103 |  3811 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1           |   103 |  3811 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | SYS_C0020704 | 90002 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1">=10000 AND "C1"<=100000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1534  consistent gets
       1354  physical reads
          0  redo size
    1994612  bytes sent via SQL*Net to client
       1324  bytes received via SQL*Net from client
         92  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      90001  rows processed 

Note in the FIRST_ROWS(100) hinted query that 90,001 rows were returned to SQL*Plus, while the optimizer predicted that 103 rows would be returned due to the hint.  It required 0.43 seconds for the rows to be returned to the SQL*Plus session.

Now the third query of the test case:

SQL> SELECT /*+ INDEX(T1) */
  2    C1,
  3    C2,
  4    C3,
  5    C4,
  6    C5,
  7    C6
  8  FROM
  9    T1
 10  WHERE
 11    C1 BETWEEN 10000 AND 100000
 12    AND ROWNUM<=100;

100 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 3250543521

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |   100 |  3700 |     5   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY               |              |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |   103 |  3811 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | SYS_C0020704 | 90002 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=100)
   3 - access("C1">=10000 AND "C1"<=100000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          6  physical reads
          0  redo size
       2787  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        100  rows processed 

Note in the query with the ROWNUM predicate in the WHERE clause that 100 rows were returned to SQL*Plus, while the optimizer predicted that 100 rows (103 rows before the COUNT STOPKEY operation) would be returned.  It required 0.01 seconds for the rows to be returned to the SQL*Plus session.

Now for the second half of the output (created on Oracle Database 10.2.0.2 – your results may be a bit different).  The first query:

SQL> SELECT /*+  */
  2    *
  3  FROM
  4    (SELECT
  5      C1,
  6      C2,
  7      MAX(C3) C3
  8    FROM
  9      T1
 10    GROUP BY
 11      C1,
 12      C2
 13    ORDER BY
 14      C1,
 15      C2)
 16  WHERE
 17    C1 BETWEEN 10000 AND 100000;

90001 rows selected.

Elapsed: 00:00:00.56

Execution Plan
----------------------------------------------------------
Plan hash value: 915463192

------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 90002 |  3427K|       |   904   (5)| 00:00:05 |
|   1 |  VIEW               |      | 90002 |  3427K|       |   904   (5)| 00:00:05 |
|   2 |   SORT GROUP BY     |      | 90002 |  2197K|  7080K|   904   (5)| 00:00:05 |
|*  3 |    TABLE ACCESS FULL| T1   | 90002 |  2197K|       |   352   (8)| 00:00:02 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("C1"<=100000 AND "C1">=10000)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       6563  consistent gets
       6555  physical reads
          0  redo size
    1307436  bytes sent via SQL*Net to client
       1324  bytes received via SQL*Net from client
         92  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      90001  rows processed 

Nothing too out of the ordinary in the above: the WHERE clause predicate was pushed into the inline view, although I thought that the inline view (indicated by the VIEW line in the explain plan output) might disappear.

The second query with the FIRST_ROWS hint:

SQL> SELECT /*+  FIRST_ROWS(100) */
  2    *
  3  FROM
  4    (SELECT
  5      C1,
  6      C2,
  7      MAX(C3) C3
  8    FROM
  9      T1
 10    GROUP BY
 11      C1,
 12      C2
 13    ORDER BY
 14      C1,
 15      C2)
 16  WHERE
 17    C1 BETWEEN 10000 AND 100000;

90001 rows selected.

Elapsed: 00:00:00.50

Execution Plan
----------------------------------------------------------
Plan hash value: 915463192

------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 90002 |  3427K|       |   904   (5)| 00:00:05 |
|   1 |  VIEW               |      | 90002 |  3427K|       |   904   (5)| 00:00:05 |
|   2 |   SORT GROUP BY     |      | 90002 |  2197K|  7080K|   904   (5)| 00:00:05 |
|*  3 |    TABLE ACCESS FULL| T1   | 90002 |  2197K|       |   352   (8)| 00:00:02 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("C1"<=100000 AND "C1">=10000)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       6563  consistent gets
       6555  physical reads
          0  redo size
    1307436  bytes sent via SQL*Net to client
       1324  bytes received via SQL*Net from client
         92  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      90001  rows processed 

The WHERE clause predicate was pushed into the inline view again, but notice that the optimizer did not predict that 100 or 103 rows would be returned by the query.  Do you see the same results on other Oracle Database release versions?  It is slightly interesting to see that the available index access path was not selected… almost as if the FIRST_ROWS hint had no effect… of course you know the reason why.  :-)  (Edit: March 10, 2011: The same result was achieved when moving the hint to the inline view, and the same result was achieved when specifying the hint as FIRST_ROWS(1).  I guess that it is time to review the 10053 trace to see what happened.)

Now the query with the ROWNUM predicate:

SQL> SELECT /*+  */
  2    *
  3  FROM
  4    (SELECT
  5      C1,
  6      C2,
  7      MAX(C3) C3
  8    FROM
  9      T1
 10    GROUP BY
 11      C1,
 12      C2
 13    ORDER BY
 14      C1,
 15      C2)
 16  WHERE
 17    C1 BETWEEN 10000 AND 100000
 18    AND ROWNUM<=100;

100 rows selected.

Elapsed: 00:00:00.31

Execution Plan
----------------------------------------------------------
Plan hash value: 2221909342

----------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |   100 |  3900 |       |   748   (5)| 00:00:04 |
|*  1 |  COUNT STOPKEY          |      |       |       |       |            |          |
|   2 |   VIEW                  |      | 90002 |  3427K|       |   748   (5)| 00:00:04 |
|*  3 |    SORT GROUP BY STOPKEY|      | 90002 |  1318K|  4968K|   748   (5)| 00:00:04 |
|*  4 |     TABLE ACCESS FULL   | T1   | 90002 |  1318K|       |   352   (8)| 00:00:02 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=100)
   3 - filter(ROWNUM<=100)
   4 - filter("C1"<=100000 AND "C1">=10000)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       6563  consistent gets
       6555  physical reads
          0  redo size
       2275  bytes sent via SQL*Net to client
        334  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        100  rows processed 

The standard WHERE clause predicate used in all of the queries was pushed into the inline view again, and the ROWNUM predicate was also pushed into the inline view.  Does this demonstrate the potential bug – what if instead of a full table scan, the primary key index access path were used to read the data from table T1?
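One way to explore that question is to force the index access path inside the inline view and compare the rows that are returned with those returned by the full table scan version – a sketch of such a test (the hint placement shown is just one possibility):

SELECT
  *
FROM
  (SELECT /*+ INDEX(T1) */
    C1,
    C2,
    MAX(C3) C3
  FROM
    T1
  GROUP BY
    C1,
    C2
  ORDER BY
    C1,
    C2)
WHERE
  C1 BETWEEN 10000 AND 100000
  AND ROWNUM<=100;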





SQL Precedence UNION and MINUS, Converting to Full Outer Join Syntax


February 26, 2011

In a recent OTN thread in the SQL forum I found a SQL statement (see the thread for the original SQL statement) that the original poster (OP) wanted to transform into full outer join syntax, but was unsure how to start the transformation.  To get started with helping the OP, you first need to determine what the intended result of the SQL statement is.  Second, you need to determine if the query really is producing the intended result.  Third, you need to determine why the OP wants to change a functioning query into a different syntax – is the OP facing a performance problem, or is the OP expected to provide an answer as part of a test question for a job interview or classroom project? ;-)  Fourth, you need to determine the Oracle Database release version that is available to the OP – can the OP use the FULL OUTER JOIN syntax, or is it necessary to use the Oracle specific join syntax?

The OP’s query essentially had the following format:

SELECT
  C1
FROM
  T1@SOMEDBLINK
WHERE
  C2='26-FEB-2011'
MINUS
SELECT
  C1
FROM
  T1
WHERE
  C2='26-FEB-2011'

UNION

SELECT
  C1
FROM
  T1
WHERE
  C2='26-FEB-2011'
MINUS
SELECT
  C1
FROM
  T1@SOMEDBLINK
WHERE
  C2='26-FEB-2011'; 

Before testing, it appeared that the OP wanted all of the rows in each of the two tables if and only if the rows do not exist in both tables.  So, this is a little like a UNION ALL minus the intersection of the two row sources (assuming that there are no duplicate values in either table).  Seems simple enough, until you test it.  Let’s build a simple model to explain what is happening.  We will create two tables with 10 rows each.  The first table has C1 values that increase by 2, while the second has C1 values that increase by 3:

CREATE TABLE T1 AS
SELECT
  ROWNUM*2 C1,
  TRUNC(SYSDATE)+ROWNUM*2 C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

CREATE TABLE T2 AS
SELECT
  ROWNUM*3 C1,
  TRUNC(SYSDATE)+ROWNUM*3 C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10; 

Now let’s look at the simulated two halves of the OP’s query:

SELECT
  C1,
  C2
FROM
  T1
MINUS
SELECT
  C1,
  C2
FROM
  T2;

C1 C2
-- ---------
 2 28-FEB-11
 4 02-MAR-11
 8 06-MAR-11
10 08-MAR-11
14 12-MAR-11
16 14-MAR-11
20 18-MAR-11

SELECT
  C1,
  C2
FROM
  T2
MINUS
SELECT
  C1,
  C2
FROM
  T1;

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

As can be seen by the above, each half returned 7 rows – there are 7 rows in each table that are not in the other table.  Common sense would state that if we UNION these two results (assuming no duplicate values in each table), we would see 14 rows:

SELECT
  C1,
  C2
FROM
  T1
MINUS
SELECT
  C1,
  C2
FROM
  T2
UNION
SELECT
  C1,
  C2
FROM
  T2
MINUS
SELECT
  C1,
  C2
FROM
  T1;

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Only 7 rows?  The result makes sense once you remember that UNION and MINUS have equal precedence and are evaluated from left to right, so the statement was processed as ((T1 MINUS T2) UNION T2) MINUS T1 – the trailing MINUS T1 discards every row that also appears in T1, leaving only the 7 rows from T2 that are not in T1.  Let’s try again with the help of inline views to control the order in which the MINUS and UNION operators are processed:

SELECT
  *
FROM
  (SELECT
    C1,
    C2
  FROM
    T1
  MINUS
  SELECT
    C1,
    C2
  FROM
    T2)
UNION
SELECT
  *
FROM
  (SELECT
    C1,
    C2
  FROM
    T2
  MINUS
  SELECT
    C1,
    C2
  FROM
    T1);

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Note that the above returned the 14 rows that common sense implied the previous query would return.  We can return the same result set using just two outer joins:

SELECT
  T1.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
  AND T2.C1 IS NULL
UNION
SELECT
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T2.C1=T1.C1(+)
  AND T1.C1 IS NULL;

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Or we can produce the same output with a FULL OUTER JOIN and a MINUS operator:

SELECT
  NVL(T1.C1,T2.C1) C1,
  NVL2(T1.C1,T1.C2,T2.C2) C2
FROM
  T1 FULL OUTER JOIN T2
    ON T1.C1=T2.C1
MINUS
SELECT
  T1.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

We are also able to produce the output with a FULL OUTER JOIN with a WHERE clause:

SELECT
  NVL(T1.C1,T2.C1) C1,
  NVL2(T1.C1,T1.C2,T2.C2) C2
FROM
  T1 FULL OUTER JOIN T2
    ON T1.C1=T2.C1
WHERE
  (T1.C1 IS NULL
    OR T2.C1 IS NULL);

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11
 8 06-MAR-11
20 18-MAR-11
 2 28-FEB-11
10 08-MAR-11
 4 02-MAR-11
14 12-MAR-11
16 14-MAR-11 

We still have not determined everything that is outlined at the start of this article.  If the OP is looking for an exact answer so that he may complete a test question for a job interview or classroom project, directly fixing his posted SQL statement may prove unhelpful in the long run.  Considering that we do not know the OP’s Oracle Database version, it is very difficult to say “use this for your full outer join requirement”.  In a recent article I linked to the SQL and PL/SQL FAQ on the OTN forum – it probably would have been helpful had the OP read that FAQ in full before posting a question to the forum, so that the OP would have known that the Oracle Database release version is occasionally a very important detail when seeking help.

The fact that the OP’s query apparently relies on implicit data type conversions (for example, filecreatedt='18-feb-2011', which compares what is presumably a DATE column with a character literal) is a little troubling, especially if this question is found on some sort of test.  An explicit conversion, such as the sketch below, avoids that dependence on the session NLS settings.
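
A minimal sketch of the explicit conversion, assuming (as in the demonstration tables above) that the filtered column is a DATE column:

SELECT
  C1
FROM
  T1
WHERE
  C2=TO_DATE('26-FEB-2011','DD-MON-YYYY');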





Strange Timestamp Behavior

17 02 2011

February 17, 2011

I have not experimented much with the TIMESTAMP datatype in Oracle – the DATE datatype is usually sufficient for my needs.  I thought that I would try to build a couple of test scripts to experiment with TIMESTAMPs, but I hit a bit of a snag in my testing.  The first script that I built follows:

DROP TABLE T1;

CREATE TABLE T1 (
  C1 TIMESTAMP(9));

CREATE OR REPLACE TRIGGER T1_UPDATE BEFORE UPDATE ON T1 FOR EACH ROW
BEGIN
  :NEW.C1 := :NEW.C1 + 1;
END T1_UPDATE;
/

ALTER SESSION SET NLS_TIMESTAMP_FORMAT="DD-MON-YY HH24:MI:SS";

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=SYSDATE;

SELECT
  *
FROM
  T1; 

The above script creates a table with a single column having a TIMESTAMP(9) column, creates a trigger on the table that always adds one day to the TIMESTAMP column value every time a row is updated, inserts a row into the table, selects from the table, updates the row that was just inserted, and then selects from that table again.  Would you believe that the results for Oracle Database 10.2.0.2 and 11.1.0.7 are different for this test script?

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
16-FEB-11 10:45:03

SQL>
SQL> UPDATE T1 SET C1=SYSDATE;

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
 (NULL)

Output from 11.1.0.7:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
16-FEB-11 10:47:05

SQL>
SQL> UPDATE T1 SET C1=SYSDATE;

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
17-FEB-11 10:47:05 

Metalink (MOS) Doc ID 5352587.8, Bug 5352587 – “Triggers not working correctly with timestamp datatype” seems to show a problem similar to the above, as does Metalink (MOS) Bug 5860236 - “Timestamp in trigger given NULL value on update without use of TO_TIMESTAMP”.

No problem, we can work around that problem with a slight modification to our script:

DELETE FROM T1;

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

SELECT
  *
FROM
  T1; 

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
16-FEB-11 10:57:02

SQL>
SQL> UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
17-FEB-11 12:00:00 

The date value is what was expected, but we lost the time component – the problem is obvious: TO_TIMESTAMP expects a character value, so SYSDATE was first implicitly converted to a character string using the session’s NLS_DATE_FORMAT, which by default does not include a time portion.  Let’s try again with another slight modification to the script:

ALTER SESSION SET NLS_DATE_FORMAT="DD-MON-YY HH24:MI:SS";

DELETE FROM T1;

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

SELECT
  *
FROM
  T1; 

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
16-FEB-11 11:00:26

SQL>
SQL> UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
17-FEB-11 11:00:26 

The above shows exactly what we should expect.
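
An alternative that side-steps the NLS_DATE_FORMAT setting entirely is to CAST the DATE value to a TIMESTAMP, which preserves the time component without any character conversion (whether this also avoids the 10.2.0.2 trigger bug shown at the start of this article is something that would need to be tested):

UPDATE T1 SET C1=CAST(SYSDATE AS TIMESTAMP);

SELECT
  *
FROM
  T1;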

—-

There are quite a few different ways to return a TIMESTAMP from the database that represents the current date and time.  For example:

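A few of the common options (not an exhaustive list – note that the first two return TIMESTAMP WITH TIME ZONE values, while the last two return plain TIMESTAMP values):

SELECT SYSTIMESTAMP FROM DUAL;               -- database server operating system time, with time zone
SELECT CURRENT_TIMESTAMP FROM DUAL;          -- session time zone, with time zone
SELECT LOCALTIMESTAMP FROM DUAL;             -- session time zone, no time zone component
SELECT CAST(SYSDATE AS TIMESTAMP) FROM DUAL; -- whole seconds only, since DATE has no fractional seconds
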
A recent thread in an Oracle forum was closed prematurely without providing the original poster (OP) a usable solution, other than to file a bug report.  Lately, I sometimes wonder what the intended purpose of that forum is, given the number of threads that are closed without providing much of any help to the OP beyond a half dozen partially related links (it is important to recognize that there are a couple of people who post very well thought out, specific solutions on that forum for the problems faced by the OPs).  The test case script that the OP provided is quite interesting, and I have posted my results below when the script was executed on several different Oracle Database release versions.

Output from 10.2.0.2:

O timestamp obtido eh: [16-FEB-11 01.46.42.616000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM, :new.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM, :new.TS_ATU_DTR=16-FEB-11 01.46.42.616000000 PM

Output from 10.2.0.5:

O timestamp obtido eh: [16-FEB-11 06.18.56.532000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 06.18.56.345000000 PM, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 06.18.56.345000000 PM, :new.TS_ATU_DTR=16-FEB-11 06.18.56.532000000 PM 

Output from 11.1.0.7:

O timestamp obtido eh: [16-FEB-11 01.46.41.333000 PM]

DATA      TEXTO
--------- ------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=, :new.TS_ATU_DTR=16-FEB-11 01.46.41.333000000 PM 

Output from 11.2.0.1:

O timestamp obtido eh: [16-FEB-11 06.06.03.348000 PM]

DATA      TEXTO
--------- ------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=, :new.TS_ATU_DTR=
16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=, :new.TS_ATU_DTR=16-FEB-11 06.06.03.348000000 PM 

Output from 11.2.0.2:

O timestamp obtido eh: [16-FEB-11 06.02.33.978000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 06.02.33.827000000 PM, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 06.02.33.827000000 PM, :new.TS_ATU_DTR=16-FEB-11 06.02.33.978000000 PM  

Notice in the above that the results from Oracle Database 10.2.0.2 show that the trigger code was able to retrieve the existing value of the TS_ATU_DTR (TIMESTAMP datatype) column and write that value to the logging table.  When the test was repeated on Oracle Database 11.1.0.7, we find that the existing (old) value of the TS_ATU_DTR column could not be read, and the replacement (new) value of that column also could not be read unless its value was explicitly set in the trigger code.  So, the result of the OP’s test script finds that 11.1.0.7 contains a bug related to TIMESTAMP datatypes that does not exist in 10.2.0.2, while the script at the start of this blog article demonstrates a bug related to TIMESTAMP datatypes that exists in 10.2.0.2 but does not exist in 11.1.0.7.  11.2.0.1 returned results that are very similar to those of 11.1.0.7.  The results for 10.2.0.5 and 11.2.0.2 matched each other exactly; however, the new.TS_ATU_DTR value was not initially set correctly on either version.

Are we able to help the OP with the issue that he faces, or should we just say “Not my problem, go bug someone else”?

Let’s create another reproducible test case script to see if we are able to isolate the cause of the problem:

SET LINESIZE 140
SET PAGESIZE 1000

DROP TABLE T1;
DROP TABLE T1_LOG;

CREATE TABLE T1 (
  ID NUMBER NOT NULL,
  COL2 NUMBER NOT NULL,
  COL_TS TIMESTAMP(9) WITH TIME ZONE NOT NULL);

CREATE TABLE T1_LOG (
  CHANGE_DATE DATE NOT NULL,
  CHANGE_TEXT VARCHAR2(130) NOT NULL);

CREATE OR REPLACE TRIGGER TRI_T1 BEFORE UPDATE ON T1
REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
DECLARE
  OLD_COL_TS TIMESTAMP(9);
  NEW_COL_TS TIMESTAMP(9);
BEGIN
  OLD_COL_TS := :OLDDATA.COL_TS;
  NEW_COL_TS := :NEWDATA.COL_TS;

  INSERT INTO T1_LOG VALUES ( SYSDATE, 'Before:OLDDATA.COL_TS=' || OLD_COL_TS || ', :NEWDATA.COL_TS='||NEW_COL_TS);

  :NEWDATA.COL_TS := CURRENT_TIMESTAMP;
  NEW_COL_TS := :NEWDATA.COL_TS;

  INSERT INTO T1_LOG VALUES ( SYSDATE, 'After:OLDDATA.COL_TS=' || OLD_COL_TS || ', :NEWDATA.COL_TS='||NEW_COL_TS);
END;
/

INSERT INTO
  T1
VALUES (
  1,
  1,
  CURRENT_TIMESTAMP);

UPDATE
  T1
SET
  COL2=COL2+1;

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC;

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1;
END;
/

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC;

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1
  RETURNING
    COL_TS INTO OLD_COL_TS;
END;
/

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC; 

The script performs a lot of actions – don’t worry too much about it.  After creating the table and the logging table, we insert a row, and then modify that row three different ways: with a plain UPDATE statement, with an UPDATE inside an anonymous PL/SQL block, and with an UPDATE inside an anonymous PL/SQL block that uses a RETURNING clause.

Output from 10.2.0.2:

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM 

Output from 10.2.0.5 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM, :NEWDATA.COL_TS=
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.12.910000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM 

Output from 11.1.0.7:

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.048000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=, :NEWDATA.COL_TS=
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.048000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=, :NEWDATA.COL_TS=16-FEB-11 03.52.20.064000000 PM 

Output from 11.2.0.1 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.13.035000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=, :NEWDATA.COL_TS=
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.13.035000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 After:OLDDATA.COL_TS=, :NEWDATA.COL_TS=16-FEB-11 06.09.14.089000000 PM

Output from 11.2.0.2 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:20
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:20, :NEWDATA.COL_TS=
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:20, :NEWDATA.COL_TS=16-FEB-11 17:50:21
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:20
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19 

Notice in the above that 10.2.0.2 produced three nice sets of data, with TIMESTAMP values shown on all rows.  The final set of output from 11.1.0.7 shows NULL initial values for the TIMESTAMP columns.  The NULL problem only happens in the trigger code when the RETURNING clause is used in the UPDATE statement.  Neat, I think that we found a bug, and we are able to work around that bug simply by reworking the code to not use the RETURNING clause.  The same workaround will work on 10.2.0.5, 11.2.0.1, and 11.2.0.2.
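
A minimal sketch of how the third update might be reworked so that the RETURNING clause is not needed (this assumes, as in the test case, that only the single row with ID=1 is affected – with multiple rows a SELECT ... INTO would raise TOO_MANY_ROWS):

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1;

  SELECT
    COL_TS
  INTO
    OLD_COL_TS
  FROM
    T1
  WHERE
    ID=1;
END;
/

As with the RETURNING clause, the separate SELECT sees the post-update value of the COL_TS column (the value assigned by the trigger), so the variable name OLD_COL_TS is a little misleading in both versions.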

—-

Other TIMESTAMP resources from Metalink (MOS):

  • Doc ID 340512.1 – “Timestamps & time zones – Frequently Asked Questions”
  • Doc ID 5649579.8, Bug 5649579 – “CAST ‘as date’ of TIMESTAMP rounds in SQL, truncates in PLSQL”
  • Doc ID 780809.1 – “TO_TIMESTAMP is Returning Different Results on Oracle 9i Compare to Oracle 10g”




Calculating Overlapping Hours for Labor Transactions

15 02 2011

February 15, 2011

There is an interesting problem in the ERP platform that I support.  When production floor workers begin working on a particular job, the employee “clocks into” the job in the computer system.  When production floor workers stop working on a particular job, the employee “clocks out of” the job in the computer system.  If the employee is simultaneously operating multiple production floor machines (might be feeding parts into robots or automatic CNC machines), we need to capture the full machine burden rate for all of the machines, but we need to fairly spread the employee’s labor burden rate among the different simultaneously in-process jobs.  Assume that the employee is operating 4 production floor machines, and in a simplified example, the employee runs machine 1 between periods 1 and 4, runs machine 2 between periods 4 and 8, machine 3 between periods 6 and 10, and machine 4 between periods 8 and 13 – as depicted below:

How much of the employee’s labor burden should go to the jobs’ transactions on machines 1, 2, 3, and 4?  25% (0.25)?  No, because that would only be correct if all of the machines were in use for the entire time duration between periods 1 and 13. 

Machine 1/transaction 1:
Quick Description: 3 time periods where the employee’s labor burden should be fully applied, and 1 time period where half of the employee’s labor burden should be applied due to the overlap with machine 2/transaction 2.   Therefore, the employee’s labor burden time should be 3 + 1/2 = 3.5 time periods.  We can calculate that as a percentage of the total duration as 3.5/4 = 0.875 (87.5%), and save that percentage in a table column, MULTIPLIER_2 in the case of the ERP system.

Machine 2/transaction 2:
Quick Description: 1 time period where the employee’s labor burden should be half applied, 1 time period where the employee’s labor burden should be fully applied, 2 time periods where the employee’s labor burden should be half applied, and 1 time period where the employee’s labor burden should be one third applied.  Therefore, the employee’s labor burden time should be 1/2 + 1 + 2/2 + 1/3 = 2.833333.  Calculated as a percentage of the total duration we have 2.833333/5 = 0.566667 (56.6%).

Machine 3/transaction 3:
Quick Description: 2 time periods where the employee’s labor burden should be half applied, 1 time period where the employee’s labor burden should be one third applied, and 2 time periods where the employee’s labor burden should be half applied.  Therefore, the employee’s labor burden time should be 2/2 + 1/3 + 2/2 = 2.333333.  Calculated as a percentage of the total duration we have 2.333333/5 = 0.466667 (46.6%).

Machine 4/transaction 4:
Quick Description: 1 time period where the employee’s labor burden should be one third applied, 2 time periods where the employee’s labor burden should be half applied, and 3 time periods where the employee’s labor burden should be fully applied.  Therefore, the employee’s labor burden time should be 1/3 + 2/2 + 3 = 4.333333.  Calculated as a percentage of the total duration we have 4.333333/6 = 0.7222222 (72.2%).

The above is obviously an over-simplification of the calculations, but at least the idea of what needs to happen should be clear when performing the calculations using actual transaction start and end times.  Let’s take a look at a more complicated example.  Assume that there are a total of 3 simultaneous in-process machine transactions that are started and ended as follows:

Clock In      8:44:01
Start Job 1   8:47:58
Start Job 2   8:57:58
Start Job 3   9:07:58
End Job 2     9:27:58
End Job 1     9:40:00
End Job 3     9:50:00 

We can lay out a picture in a spreadsheet that shows how the labor burden hours (identified as Hours_Worked) and the calculated percentage of total burden (identified as Multiplier_2) could be computed as the transactions are started and ended:

Of course the above is of limited use without the ability to see the formulas behind the calculations, so I will instruct the spreadsheet program to show the formulas for the cells:

Enough fun with a spreadsheet for now, time to switch to SQL*Plus and Oracle Database to see if we are able to perform the calculations using just a SQL statement.  Let’s create a table for demonstration purposes:

CREATE TABLE T1 (
  TRANSACTION_ID NUMBER,
  EMPLOYEE_ID VARCHAR2(15),
  SHIFT_DATE DATE,
  CLOCK_IN DATE,
  CLOCK_OUT DATE,
  HOURS_WORKED NUMBER(12,2),
  HOURS_PREVIOUS NUMBER(12,2),
  HOURS_OVERALL NUMBER(12,2),
  MULTIPLIER_2 NUMBER(4,3),
  PRIMARY KEY (TRANSACTION_ID)); 

Now let’s insert 140 rows into the table, set a random time for the CLOCK_IN column that is within the first 12 hours of the day, set the CLOCK_OUT column value up to four hours after the CLOCK_IN time, and then store the calculated number of hours between the CLOCK_IN and CLOCK_OUT times:

INSERT INTO T1 (
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE)
SELECT
  ROWNUM TRANSACTION_ID,
  DECODE(TRUNC((ROWNUM-1)/10),
         0, 'ABE',
         1, 'BOB',
         2, 'CARL',
         3, 'DOUG',
         4, 'ED',
         5, 'FRANK',
         6, 'GREG') EMPLOYEE_ID,
  TRUNC(SYSDATE) SHIFT_DATE
FROM
  DUAL
CONNECT BY
  LEVEL<=70;

INSERT INTO T1 (
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE)
SELECT
  ROWNUM+100 TRANSACTION_ID,
  DECODE(TRUNC((ROWNUM-1)/10),
         0, 'ABE',
         1, 'BOB',
         2, 'CARL',
         3, 'DOUG',
         4, 'ED',
         5, 'FRANK',
         6, 'GREG') EMPLOYEE_ID,
  TRUNC(SYSDATE+1) SHIFT_DATE
FROM
  DUAL
CONNECT BY
  LEVEL<=70;

UPDATE
  T1
SET
  CLOCK_IN=SHIFT_DATE+TRUNC(DBMS_RANDOM.VALUE(0,43200))/86400;

UPDATE
  T1
SET
  CLOCK_OUT=CLOCK_IN+TRUNC(DBMS_RANDOM.VALUE(0,14400))/86400;

UPDATE
  T1
SET
  HOURS_WORKED=(CLOCK_OUT-CLOCK_IN)*24,
  HOURS_OVERALL=(CLOCK_OUT-CLOCK_IN)*24;

COMMIT; 

Unfortunately, calculating the MULTIPLIER_2 value for each transaction will be a bit challenging, because we must consider all of the other transactions that were in-process during each of the transactions.  Let’s see what we have so far for just one of the employees:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  TO_CHAR(CLOCK_IN,'HH24:MI:SS') CLOCK_IN,
  TO_CHAR(CLOCK_OUT,'HH24:MI:SS') CLOCK_OUT,
  HOURS_WORKED
FROM
  T1
WHERE
  EMPLOYEE_ID='GREG'
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK_IN CLOCK_OU HOURS_WORKED
-------------- ----------- --------- -------- -------- ------------
            64 GREG        13-FEB-11 00:57:31 04:56:33         3.98
            63 GREG        13-FEB-11 02:12:04 04:45:36         2.56
            61 GREG        13-FEB-11 03:05:34 06:53:24         3.80
            66 GREG        13-FEB-11 03:08:21 03:15:04         0.11
            70 GREG        13-FEB-11 03:58:45 04:08:28         0.16
            62 GREG        13-FEB-11 05:24:03 07:09:41         1.76
            69 GREG        13-FEB-11 06:04:48 09:22:00         3.29
            67 GREG        13-FEB-11 07:41:20 09:07:06         1.43
            65 GREG        13-FEB-11 09:17:35 10:24:15         1.11
            68 GREG        13-FEB-11 10:27:03 14:03:30         3.61
           161 GREG        14-FEB-11 01:15:40 02:24:01         1.14
           169 GREG        14-FEB-11 01:16:01 03:45:38         2.49
           166 GREG        14-FEB-11 01:53:02 03:54:09         2.02
           163 GREG        14-FEB-11 03:47:15 06:55:34         3.14
           170 GREG        14-FEB-11 05:00:19 08:16:00         3.26
           165 GREG        14-FEB-11 06:23:45 07:56:40         1.55
           162 GREG        14-FEB-11 06:26:06 09:54:15         3.47
           168 GREG        14-FEB-11 06:33:39 08:10:56         1.62
           167 GREG        14-FEB-11 08:25:00 12:20:55         3.93
           164 GREG        14-FEB-11 11:18:45 13:25:08         2.11 

Assume that the above is the starting point – the machine transactions (known as labor tickets in the ERP package) exist in the database, even though I have not yet calculated the MULTIPLIER_2 values.  Let’s determine the MULTIPLIER_2 values – first we need a way to determine which labor tickets overlap with each of the above labor tickets, and by how much they overlap.  We will use a self-join of the test table T1 (which simulates the LABOR_TICKET table in the ERP package):

SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  TO_CHAR(L2.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L2.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  TO_CHAR(DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN),'HH24:MI') CLOCK_IN,
  TO_CHAR(DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT),'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG'
ORDER BY
  L1.SHIFT_DATE,
  L1.CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK CLOCK CLOCK CLOCK CLOCK OVERLAP HOURS_WORKED
-------------- ----------- --------- ----- ----- ----- ----- ----- ----- ------- ------------
            64 GREG        13-FEB-11 00:57 04:56 02:12 04:45 02:12 04:45    2.56         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:08 03:15 03:08 03:15    0.11         3.98
            64 GREG        13-FEB-11 00:57 04:56 00:57 04:56 00:57 04:56    3.98         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:05 06:53 03:05 04:56    1.85         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:58 04:08 03:58 04:08    0.16         3.98
            63 GREG        13-FEB-11 02:12 04:45 03:05 06:53 03:05 04:45    1.67         2.56
            63 GREG        13-FEB-11 02:12 04:45 00:57 04:56 02:12 04:45    2.56         2.56
            63 GREG        13-FEB-11 02:12 04:45 03:58 04:08 03:58 04:08    0.16         2.56
            63 GREG        13-FEB-11 02:12 04:45 03:08 03:15 03:08 03:15    0.11         2.56
            63 GREG        13-FEB-11 02:12 04:45 02:12 04:45 02:12 04:45    2.56         2.56
            61 GREG        13-FEB-11 03:05 06:53 03:58 04:08 03:58 04:08    0.16         3.80
            61 GREG        13-FEB-11 03:05 06:53 03:08 03:15 03:08 03:15    0.11         3.80
            61 GREG        13-FEB-11 03:05 06:53 06:04 09:22 06:04 06:53    0.81         3.80
            61 GREG        13-FEB-11 03:05 06:53 03:05 06:53 03:05 06:53    3.80         3.80
            61 GREG        13-FEB-11 03:05 06:53 02:12 04:45 03:05 04:45    1.67         3.80
            61 GREG        13-FEB-11 03:05 06:53 00:57 04:56 03:05 04:56    1.85         3.80
            61 GREG        13-FEB-11 03:05 06:53 05:24 07:09 05:24 06:53    1.49         3.80
            66 GREG        13-FEB-11 03:08 03:15 03:05 06:53 03:08 03:15    0.11         0.11
... 

In the above, the first “CLOCK” column shows the CLOCK_IN for our transaction, and the second “CLOCK” column shows the CLOCK_OUT for our transaction.  The next two “CLOCK” columns show the CLOCK_IN and CLOCK_OUT for an overlapping transaction.  If the CLOCK_IN for our transaction is later than the CLOCK_IN for the overlapping transaction, then the CLOCK_IN for our transaction is listed in the second to last “CLOCK” column; otherwise, the CLOCK_IN for the overlapping transaction is displayed in the second to last “CLOCK” column.  The reverse is true for the last “CLOCK” column, where the earlier of the two transactions’ CLOCK_OUT values is displayed.  The OVERLAP column shows the number of hours between the last two “CLOCK” columns – the number of hours the two transactions have in common.
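
The DECODE(SIGN(...)) expressions are simply a verbose way of taking the later of the two CLOCK_IN values and the earlier of the two CLOCK_OUT values.  For readers who find the DECODE syntax hard to follow, here is a sketch of the same OVERLAP calculation written with GREATEST and LEAST, which should return the same values as the query above:

SELECT
  L1.TRANSACTION_ID,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((LEAST(L1.CLOCK_OUT,L2.CLOCK_OUT) -
    GREATEST(L1.CLOCK_IN,L2.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
  AND L1.EMPLOYEE_ID='GREG'
ORDER BY
  L1.SHIFT_DATE,
  L1.CLOCK_IN;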

Now for the last step: to calculate the MULTIPLIER_2 value (labor burden hours as a percentage of the machine burden hours), find the total OVERLAP for each transaction and divide the HOURS_WORKED value by that total:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2
-------------- ----------- --------- ----- ----- ------------ ------------
            64 GREG        13-FEB-11 00:57 04:56         3.98        0.460
            63 GREG        13-FEB-11 02:12 04:45         2.56        0.363
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.384
            66 GREG        13-FEB-11 03:08 03:15         0.11        0.250
            70 GREG        13-FEB-11 03:58 04:08         0.16        0.250
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.406
            69 GREG        13-FEB-11 06:04 09:22         3.29        0.493
            67 GREG        13-FEB-11 07:41 09:07         1.43        0.500
            65 GREG        13-FEB-11 09:17 10:24         1.11        0.941
            68 GREG        13-FEB-11 10:27 14:03         3.61        1.000
           161 GREG        14-FEB-11 01:15 02:24         1.14        0.409
           169 GREG        14-FEB-11 01:16 03:45         2.49        0.453
           166 GREG        14-FEB-11 01:53 03:54         2.02        0.445
           163 GREG        14-FEB-11 03:47 06:55         3.14        0.478
           170 GREG        14-FEB-11 05:00 08:16         3.26        0.320
           165 GREG        14-FEB-11 06:23 07:56         1.55        0.238
           162 GREG        14-FEB-11 06:26 09:54         3.47        0.333
           168 GREG        14-FEB-11 06:33 08:10         1.62        0.245
           167 GREG        14-FEB-11 08:25 12:20         3.93        0.608
           164 GREG        14-FEB-11 11:18 13:25         2.11        0.670 

Now let’s try setting the MULTIPLIER_2 values for GREG’s transactions to verify that our SQL statement works:

UPDATE
  T1 L3
SET
  MULTIPLIER_2=(
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.SHIFT_DATE,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
    WHERE
      L3.TRANSACTION_ID=L1.TRANSACTION_ID
      AND L3.SHIFT_DATE=L1.SHIFT_DATE
    GROUP BY
      HOURS_WORKED)
WHERE
  L3.EMPLOYEE_ID='GREG';

20 rows updated. 

20 rows updated, that is a good start.  Let’s take a look at the rows:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  TO_CHAR(CLOCK_IN,'HH24:MI:SS') CLOCK_IN,
  TO_CHAR(CLOCK_OUT,'HH24:MI:SS') CLOCK_OUT,
  HOURS_WORKED,
  MULTIPLIER_2
FROM
  T1
WHERE
  EMPLOYEE_ID='GREG'
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK_IN CLOCK_OU HOURS_WORKED MULTIPLIER_2
-------------- ----------- --------- -------- -------- ------------ ------------
            64 GREG        13-FEB-11 00:57:31 04:56:33         3.98        0.460
            63 GREG        13-FEB-11 02:12:04 04:45:36         2.56        0.363
            61 GREG        13-FEB-11 03:05:34 06:53:24         3.80        0.384
            66 GREG        13-FEB-11 03:08:21 03:15:04         0.11        0.250
            70 GREG        13-FEB-11 03:58:45 04:08:28         0.16        0.250
            62 GREG        13-FEB-11 05:24:03 07:09:41         1.76        0.406
            69 GREG        13-FEB-11 06:04:48 09:22:00         3.29        0.493
            67 GREG        13-FEB-11 07:41:20 09:07:06         1.43        0.500
            65 GREG        13-FEB-11 09:17:35 10:24:15         1.11        0.941
            68 GREG        13-FEB-11 10:27:03 14:03:30         3.61        1.000
           161 GREG        14-FEB-11 01:15:40 02:24:01         1.14        0.409
           169 GREG        14-FEB-11 01:16:01 03:45:38         2.49        0.453
           166 GREG        14-FEB-11 01:53:02 03:54:09         2.02        0.445
           163 GREG        14-FEB-11 03:47:15 06:55:34         3.14        0.478
           170 GREG        14-FEB-11 05:00:19 08:16:00         3.26        0.320
           165 GREG        14-FEB-11 06:23:45 07:56:40         1.55        0.238
           162 GREG        14-FEB-11 06:26:06 09:54:15         3.47        0.333
           168 GREG        14-FEB-11 06:33:39 08:10:56         1.62        0.245
           167 GREG        14-FEB-11 08:25:00 12:20:55         3.93        0.608
           164 GREG        14-FEB-11 11:18:45 13:25:08         2.11        0.670 

The above is the expected result, let’s generalize the SQL statement to update the rows for the other EMPLOYEE_ID values:

UPDATE
  T1 L3
SET
  MULTIPLIER_2=(
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.SHIFT_DATE,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)) L1
    WHERE
      L3.TRANSACTION_ID=L1.TRANSACTION_ID
      AND L3.SHIFT_DATE=L1.SHIFT_DATE
    GROUP BY
      HOURS_WORKED);

140 rows updated.  

You could of course verify that the rows were updated correctly, but I will skip that step for now.

COMMIT; 

Now let’s go back and modify one of GREG’s transactions, which in my test data happens to be the first transaction for GREG when those transactions are sorted by the CLOCK_IN time:

UPDATE
  T1
SET
  CLOCK_OUT=CLOCK_OUT+1/24,
  HOURS_WORKED=HOURS_WORKED+1
WHERE
  TRANSACTION_ID=64; 

How has that one small change affected the other transactions?

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2,
  OLD_MULT_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  L1.MULTIPLIER_2 OLD_MULT_2,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  OLD_MULT_2
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2 OLD_MULT_2
-------------- ----------- --------- ----- ----- ------------ ------------ ----------
            64 GREG        13-FEB-11 00:57 05:56         4.98        0.355      0.460
            63 GREG        13-FEB-11 02:12 04:45         2.56        0.363      0.363
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.349      0.384
            66 GREG        13-FEB-11 03:08 03:15         0.11        0.250      0.250
            70 GREG        13-FEB-11 03:58 04:08         0.16        0.250      0.250
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.361      0.406
            69 GREG        13-FEB-11 06:04 09:22         3.29        0.493      0.493
            67 GREG        13-FEB-11 07:41 09:07         1.43        0.500      0.500
            65 GREG        13-FEB-11 09:17 10:24         1.11        0.941      0.941
            68 GREG        13-FEB-11 10:27 14:03         3.61        1.000      1.000
           161 GREG        14-FEB-11 01:15 02:24         1.14        0.409      0.409
           169 GREG        14-FEB-11 01:16 03:45         2.49        0.453      0.453
           166 GREG        14-FEB-11 01:53 03:54         2.02        0.445      0.445
           163 GREG        14-FEB-11 03:47 06:55         3.14        0.478      0.478
           170 GREG        14-FEB-11 05:00 08:16         3.26        0.320      0.320
           165 GREG        14-FEB-11 06:23 07:56         1.55        0.238      0.238
           162 GREG        14-FEB-11 06:26 09:54         3.47        0.333      0.333
           168 GREG        14-FEB-11 06:33 08:10         1.62        0.245      0.245
           167 GREG        14-FEB-11 08:25 12:20         3.93        0.608      0.608
           164 GREG        14-FEB-11 11:18 13:25         2.11        0.670      0.670 

The above shows that we now need to readjust the MULTIPLIER_2 value for three transactions because we set the CLOCK_OUT time of the first transaction to be an hour later.  Let’s just retrieve the rows that need to be adjusted:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2,
  OLD_MULT_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  L1.MULTIPLIER_2 OLD_MULT_2,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  OLD_MULT_2
HAVING
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2
ORDER BY
  SHIFT_DATE,
  CLOCK_IN; 

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2 OLD_MULT_2
-------------- ----------- --------- ----- ----- ------------ ------------ ----------
            64 GREG        13-FEB-11 00:57 05:56         4.98        0.445      0.460
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.349      0.384
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.361      0.406

Finally, let’s update just those rows where the MULTIPLIER_2 value should be adjusted using a modified version of the above SQL statement (there might be an easier way to perform this update that also does not resort to procedural type code):

UPDATE
  T1 L3
SET
  MULTIPLIER_2 = (
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.MULTIPLIER_2 OLD_MULT_2,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
     WHERE
       L3.TRANSACTION_ID=L1.TRANSACTION_ID
     GROUP BY
       TRANSACTION_ID,
       EMPLOYEE_ID,
       SHIFT_DATE,
       HOURS_WORKED,
       OLD_MULT_2
     HAVING
       ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2)
WHERE
  L3.TRANSACTION_ID IN (
    SELECT
      TRANSACTION_ID
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.MULTIPLIER_2 OLD_MULT_2,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
     WHERE
       L3.TRANSACTION_ID=L1.TRANSACTION_ID
     GROUP BY
       TRANSACTION_ID,
       EMPLOYEE_ID,
       SHIFT_DATE,
       HOURS_WORKED,
       OLD_MULT_2
     HAVING
       ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2); 

3 rows updated.

In this ERP package there is a second way that the time for concurrent machine transactions (labor tickets) may be split, and this method is typically used when a single shop resource, such as a paint booth, is used to work on multiple jobs concurrently.  In such a case, with four concurrent operations, overstating the machine burden rate by a factor of 4 is an unwise decision.  To handle this situation, the ERP package’s modules will take the calculated MULTIPLIER_2 value, multiply that by the HOURS_WORKED (the machine burden hours), set that value as the new HOURS_WORKED value, and then set the MULTIPLIER_2 to a value of 1.0.
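
I do not have the ERP package’s own tables available here, but against the demonstration table that second split method might look something like the following sketch (HOURS_WORKED is scaled down by the calculated multiplier, and the multiplier is then reset to 1 – all right-hand side references use the pre-update column values):

UPDATE
  T1
SET
  HOURS_WORKED=ROUND(HOURS_WORKED*MULTIPLIER_2,2),
  MULTIPLIER_2=1
WHERE
  EMPLOYEE_ID='GREG';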

Not so confusing… and probably not too useful to most of the readers of this blog.  However, the above process of attacking a problem might be helpful for other types of problems.





SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?

6 02 2011

February 6, 2011 (Updated February 25, 2011)

I occasionally read various forums on the Internet including a couple of Usenet groups (comp.databases.oracle.server, comp.databases.oracle.misc), OTN forums (Database – General, SQL and PL/SQL), Oracle-L, AskTom, and a handful of other forums.  I don’t participate on all of the forums, but I try to help when possible on a couple of those forums.

I saw a SQL tuning request on one of the forums that caught my eye.  The original poster (OP) identified a SQL statement with the help of an AWR report, and he needed to tune that SQL statement.  The identified SQL statement was consuming the most elapsed time (19,307 seconds total, 1.90 seconds per execution, 48.30% of DB time), and the same SQL statement was also identified as consuming the second highest amount of CPU time (1,063 seconds).  A summarization of the advice provided by responders in the thread includes:

  • Check out these links (one of which is for a book that I reviewed a couple of months ago).
  • Start with hints, including those that change the OPTIMIZER_MODE, those that adjust the dynamic sampling, those that change the join type, and those that change which indexes are used.
  • Analyze the data dictionary.

I think that I would first start by trying to determine the intended result of the query.  What is that query supposed to show, and why are we executing it 10,176 times in the AWR reporting period?  To begin that process, I might try to reformat the SQL statement into my standardized format, something like this (changing upper/lower case, spacing, and aliases):

SELECT /*NORULE */
  'DATAPOINT EXTENTS_LEFT '  || ' ' ||
      NVL(MIN(DSE.MAXEXTS - DSE.EXTENTS), 111) || CHR(10) ||
      'DATAPOINT EXTENTS_LEFT_PCT'  || ' ' ||
      ROUND(NVL(MIN(ROUND(DSE.MAXEXTS - DSE.EXTENTS) * 100 / DSE.MAXEXTS), 100),0) BPB
FROM
  (SELECT
     DS.HEADER_FILE FILE#,
     DS.HEADER_BLOCK BLOCK#,
     DS.EXTENTS,
     DS.MAX_EXTENTS MAXEXTS,
     ST.TS#,
     SU.USER#
   FROM
     DBA_SEGMENTS DS,
     SYS.TS$ ST,
     SYS.USER$ SU
   WHERE
     ST.NAME=DS.TABLESPACE_NAME
     AND SU.NAME=DS.OWNER
     AND SEGMENT_TYPE NOT IN ('SPACE HEADER','CACHE')) DSE,
  (SELECT
     NAME,
     TS#,
     ONLINE$
   FROM
     SYS.TS$) TS,
  (SELECT
     TABLESPACE_NAME,
     CONTENTS
   FROM
     DBA_TABLESPACES) DT,
  (SELECT
     TS#,
     FILE#,
     BLOCK#
   FROM
     P$OBJ_EXCLUSION
   WHERE
     TS# IS NOT NULL
     AND FILE# IS NOT NULL
     AND BLOCK# IS NOT NULL) OEB,
  (SELECT
     TS#
   FROM
     P$OBJ_EXCLUSION
   WHERE
     OBJECT_TYPE = 'TABLE'
     AND FILE# IS NULL
     AND BLOCK# IS NULL
     AND USER# IS NULL) OETS,
  (SELECT
     USER#
   FROM
     P$OBJ_EXCLUSION
   WHERE
      USER# IS NOT NULL) OEU
WHERE
  DSE.MAXEXTS > 0 -- CACHE SEGMENT HAS MAXEXTS = 0
  AND TS.TS# > 0
  AND DSE.TS# = TS.TS#
  AND TS.ONLINE$ = 1
  AND DSE.TS# = OEB.TS#(+)
  AND OEB.TS# IS NULL
  AND DSE.FILE# = OEB.FILE#(+)
  AND OEB.FILE# IS NULL
  AND DSE.BLOCK# = OEB.BLOCK#(+)
  AND OEB.BLOCK# IS NULL
  AND DSE.TS# = OETS.TS#(+)
  AND OETS.TS# IS NULL
  AND DSE.USER# = OEU.USER#(+)
  AND OEU.USER# IS NULL
  AND TS.NAME = DT.TABLESPACE_NAME
  AND DT.CONTENTS = 'PERMANENT';

While the query begins with an interesting comment, we can ignore that for now.  What is the next step?

  • Why is the query accessing both SYS.TS$ and DBA_TABLESPACES?
  • Why are some of the restrictions that are placed on the tablespaces (such as those on TS.TS#, TS.ONLINE$, and DT.CONTENTS) not included in the WHERE clauses of the inline views?
  • Why does the query specify MIN(DSE.MAXEXTS – DSE.EXTENTS) with no GROUP BY clause?
  • Why does the query perform an implicit NUMBER to VARCHAR conversion in the data returned to the client?
  • The inline views that I aliased with names beginning with OE are apparently designed to provide exclusion lists for the objects in DBA_SEGMENTS, excluding by TS#, FILE# (HEADER_FILE), BLOCK# (HEADER_BLOCK), and USER#.  Why is the query not retrieving a distinct list of each type of item to be excluded, possibly from a materialized view?

What steps would you take to help the OP?  Would you just tell the OP that in a “Top 5” type of report there will always be something in the top two spots?

—–

Edit February 25, 2011:

Donald K. Burleson, apparently realizing what it means to file a false DMCA claim under penalty of perjury, did not file a lawsuit to back up his DMCA claim to copyright ownership of the modified SQL statement that I posted on February 6, 2011 in this article.  His false DMCA claim held this article hostage for a total of 17 days, during which time the article was not available for readers of this blog (for the record, I completely understand and agree with WordPress’ handling of this matter, where their processes require taking DMCA challenged articles offline for two weeks to allow the true copyright holder sufficient time to file a lawsuit).  False DMCA claims from Donald K. Burleson against my blog articles will not be tolerated, and this article will serve as evidence of past abuse, if necessary.





Watching Consistent Gets – 10200 Trace File Parser

24 01 2011

January 24, 2011

It happened again, another blog article that forced me to stop, think, and … hey, why did Oracle Database 11.2.0.2 do something different than Oracle Database 10.2.0.5?  What is different, even when the OPTIMIZER_FEATURES_ENABLE parameter is set to 10.2.0.4 (or 10.2.0.5)?  The number of consistent gets for a SQL statement is significantly different – we saw a similar difference between release versions before, but for a different reason.  We need the help of Oracle Database trace event 10200 to determine why there is a difference.  Once we have the trace file, we need an easy way to process the trace file.
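
For reference, a minimal sketch of how the 10200 trace event might be enabled and disabled for the session being examined (the TRACEFILE_IDENTIFIER setting simply makes the resulting trace file easier to locate):

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'watch_consistent';
ALTER SESSION SET EVENTS '10200 trace name context forever, level 1';

-- execute the SQL statement of interest here

ALTER SESSION SET EVENTS '10200 trace name context off';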

Excel macro that will work with a trace file produced by an Oracle Database running on Windows (the macro also works in Microsoft Visual Basic 6.0 and earlier; for an Oracle Database running on Unix/Linux, open the trace file with WordPad first and then re-save it – re-saving with WordPad converts the Unix line endings into the carriage return/line feed pairs that the Line Input statement expects):
(Replace c:\or10s_ora_4256_watch_consistent.trc with the actual name of the generated trace file – the script as written seems to work with 10.2.0.x and 11.2.0.x)

Private Sub Read_10200_Trace1()
    Dim intFileNum As Integer             '10200 trace file
    Dim intFileNum2 As Integer            'Output file
    Dim strInput As String                'Line read from the 10200 trace file
    Dim strOutput As String               'Line to be written to the output file
    Dim strBlock(2000) As String          'Block read from the trace file
    Dim strBlockCounter(2000) As Integer  'Number of times read
    Dim intBlocks As Integer              'Total number of blocks
    Dim i As Integer                      'Loop counter
    Dim intFound As Integer               'Indicates whether or not the block was found

    intFileNum = FreeFile
    Open "c:\or10s_ora_4256_watch_consistent.trc" For Input As #intFileNum

    intFileNum2 = FreeFile
    Open "c:\watch_consistent.txt" For Output As #intFileNum2

    Do While Not EOF(intFileNum)
        Line Input #intFileNum, strInput
        If InStr(strInput, "started for block") > 0 Then
            strOutput = Trim(Right(strInput, Len(strInput) - InStr(InStr(strInput, "started for block"), strInput, ":")))
            'Find the number of times the block was accessed
            intFound = 0
            For i = 1 To intBlocks
                If strOutput = strBlock(i) Then
                    intFound = i
                    strBlockCounter(i) = strBlockCounter(i) + 1
                    Exit For
                End If
            Next i
            'If the block was not found, record it
            If intFound = 0 Then
                intBlocks = intBlocks + 1
                intFound = intBlocks
                strBlockCounter(intBlocks) = 1
                strBlock(intBlocks) = strOutput
            End If
            Print #intFileNum2, strOutput; vbTab; strBlockCounter(intFound)
        End If
    Loop
    Print #intFileNum2, ""
    For i = 1 To intBlocks
        Print #intFileNum2, strBlock(i); vbTab; strBlockCounter(i)
    Next i
    Close #intFileNum
    Close #intFileNum2
End Sub 

Excel Macro equivalent that will work with a trace file produced by Oracle Database running on Windows/Unix/Linux (also works in Microsoft Visual Basic 6.0 and earlier):
(Replace c:\or10s_ora_4256_watch_consistent.trc with the actual name of the generated trace file)

Private Sub Read_10200_Trace2()
    Dim strInput As String                'Line read from the 10200 trace file
    Dim strOutput As String               'Line to be written to the output file
    Dim strBlock(2000) As String          'Block read from the trace file
    Dim strBlockCounter(2000) As Integer  'Number of times read
    Dim intBlocks As Integer              'Total number of blocks
    Dim i As Integer                      'Loop counter
    Dim intFound As Integer               'Indicates whether or not the block was found

    Dim objFSO As Object                  'FileSystemObjects
    Dim objFile1 As Object                'The FileSystemObjects handle to the raw 10200 trace file
    Dim objFile2 As Object                'The FileSystemObjects handle to the output file

    Const ForReading = 1
    Const ForWriting = 2

    Set objFSO = CreateObject("Scripting.FileSystemObject")
    Set objFile1 = objFSO.OpenTextFile("c:\or10s_ora_4256_watch_consistent.trc", ForReading)
    Set objFile2 = objFSO.CreateTextFile("c:\watch_consistent.txt", True)

    Do While Not (objFile1.AtEndOfStream)
        strInput = objFile1.ReadLine
        If InStr(strInput, "started for block") > 0 Then
            strOutput = Trim(Right(strInput, Len(strInput) - InStr(InStr(strInput, "started for block"), strInput, ":")))
            'Find the number of times the block was accessed
            intFound = 0
            For i = 1 To intBlocks
                If strOutput = strBlock(i) Then
                    intFound = i
                    strBlockCounter(i) = strBlockCounter(i) + 1
                    Exit For
                End If
            Next i
            'If the block was not found, record it
            If intFound = 0 Then
                intBlocks = intBlocks + 1
                intFound = intBlocks
                strBlockCounter(intBlocks) = 1
                strBlock(intBlocks) = strOutput
            End If
            objFile2.Write strOutput & vbTab & strBlockCounter(intFound) & vbCrLf
        End If
    Loop
    objFile2.Write "" & vbCrLf
    For i = 1 To intBlocks
        objFile2.Write strBlock(i) & vbTab & strBlockCounter(i) & vbCrLf
    Next i
    objFile1.Close
    objFile2.Close
End Sub 

VBScript equivalent that will work with a trace file produced by Oracle Database running on Windows/Unix/Linux (also works in Excel and Microsoft Visual Basic 6.0 and earlier):
(Replace c:\or10s_ora_4256_watch_consistent.trc with the actual name of the generated trace file)

Dim strInput                'Line read from the 10200 trace file
Dim strOutput               'Line to be written to the output file
Dim strBlock(2000)          'Block read from the trace file
Dim strBlockCounter(2000)   'Number of times read
Dim intBlocks               'Total number of blocks
Dim i                       'Loop counter
Dim intFound                'Indicates whether or not the block was found

Dim objFSO                  'FileSystemObjects
Dim objFile1                'The FileSystemObjects handle to the raw 10200 trace file
Dim objFile2                'The FileSystemObjects handle to the output file

Const ForReading = 1

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile1 = objFSO.OpenTextFile("c:\or10s_ora_4256_watch_consistent.trc", ForReading)
Set objFile2 = objFSO.CreateTextFile("c:\watch_consistent.txt", True)

Do While Not (objFile1.AtEndOfStream)
    strInput = objFile1.ReadLine
    If InStr(strInput, "started for block") > 0 Then
        strOutput = Trim(Right(strInput, Len(strInput) - InStr(InStr(strInput, "started for block"), strInput, ":")))
        'Find the number of times the block was accessed
        intFound = 0
        For i = 1 To intBlocks
            If strOutput = strBlock(i) Then
                intFound = i
                strBlockCounter(i) = strBlockCounter(i) + 1
                Exit For
            End If
        Next
        'If the block was not found, record it
        If intFound = 0 Then
            intBlocks = intBlocks + 1
            intFound = intBlocks
            strBlockCounter(intBlocks) = 1
            strBlock(intBlocks) = strOutput
        End If
        objFile2.Write strOutput & vbTab & strBlockCounter(intFound) & vbCrLf
    End If
Loop
objFile2.Write "" & vbCrLf
For i = 1 To intBlocks
    objFile2.Write strBlock(i) & vbTab & strBlockCounter(i) & vbCrLf
Next
objFile1.Close
objFile2.Close 

—-

OK, now that we have the solution, we need an appropriate problem that must be solved with our solution.  The script below creates two test tables, each with a unique index on the ID column:

CREATE TABLE T1 AS
SELECT
  ROWNUM ID,
  TRUNC(DBMS_RANDOM.VALUE(1,300000)) N1,
  LPAD(ROWNUM,10,'0') SMALL_VC,
  RPAD('X',100) PADDING
FROM
  DUAL
CONNECT BY
  LEVEL <= 300000;

CREATE TABLE T2 AS
SELECT
  ROWNUM ID,
  TRUNC(DBMS_RANDOM.VALUE(1,300000)) N1,
  LPAD(ROWNUM,10,'0') SMALL_VC,
  RPAD('X',100) PADDING
FROM
  DUAL
CONNECT BY
  LEVEL <= 300000;

CREATE UNIQUE INDEX PAR_I1 ON T1(ID);
CREATE UNIQUE INDEX CHI_I1 ON T2(ID);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE) 

Now for the test SQL statement (hinted to help force a specific execution plan):

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.2.0.4';

SELECT /*+ GATHER_PLAN_STATISTICS LEADING(T1) USE_NL(T2) INDEX(T1) INDEX(T2) */
  T1.ID,
  T1.N1,
  T2.ID,
  T2.N1
FROM
  T1,
  T2
WHERE
  T1.ID=T2.ID
  AND T1.ID BETWEEN 1 AND 200
  AND T2.N1 = 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST -ROWS -PREDICATE')); 

The execution plan that is output looks like this:

SQL_ID  1afa5ym56cagh, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS LEADING(T1) USE_NL(T2) INDEX(T1) INDEX(T2) */
  T1.ID,   T1.N1,   T2.ID,   T2.N1 FROM   T1,   T2 WHERE   T1.ID=T2.ID   AND
T1.ID BETWEEN 1 AND 200   AND T2.N1 = 0

Plan hash value: 3072046012

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |      0 |00:00:00.03 |     408 |
|   1 |  NESTED LOOPS                |        |      1 |      0 |00:00:00.03 |     408 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    200 |00:00:00.01 |       6 |
|   3 |    INDEX RANGE SCAN          | PAR_I1 |      1 |    200 |00:00:00.01 |       2 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T2     |    200 |      0 |00:00:00.02 |     402 |
|   5 |    INDEX UNIQUE SCAN         | CHI_I1 |    200 |    200 |00:00:00.01 |     202 |
---------------------------------------------------------------------------------------- 

In the above, there were 2 consistent gets for the PAR_I1 index, 4 consistent gets for the T1 table, 202 consistent gets for the CHI_I1 index, and 200 consistent gets for the T2 table.  While it might not be obvious from the above, the BLEVEL for both indexes is 1 (HEIGHT = 2 – see the quiz article linked to at the start of this article for an explanation).  When I first saw that quiz, I mentally assumed that there would be about 400 consistent gets for the CHI_I1 index – for every Start of the INDEX UNIQUE SCAN operation, I expected the index root block and the index leaf block to each count as a consistent get, but the output above shows that this did not happen.  Let’s trace the consistent gets to see why there were only 202 consistent gets and not roughly 400:

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_CONSISTENT';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ GATHER_PLAN_STATISTICS LEADING(T1) USE_NL(T2) INDEX(T1) INDEX(T2) */
  T1.ID,
  T1.N1,
  T2.ID,
  T2.N1
FROM
  T1,
  T2
WHERE
  T1.ID=T2.ID
  AND T1.ID BETWEEN 1 AND 200
  AND T2.N1 = 0; 

If we then process the resulting 10200 trace file through one of the above trace file parsers, we might see output like what is listed below (the RDBA in hex is listed first, followed by the number of times that block had been accessed by a consistent get to that point in the trace file):

0206e214 1
0206e215 1
01c0000c 1
01c72e14 1
01c72e15 1
01c003ec 1
01c72e14 2
01c72e15 2
01c003ec 2
01c72e15 3
01c003ec 3
01c72e15 4
01c003ec 4
...
01c72e15 56
01c003ec 56
01c72e15 57
01c003ec 57
01c0000d 1
01c72e15 58
01c003ed 1
01c72e15 59
01c003ed 2
01c72e15 60
01c003ed 3
01c72e15 61
...
01c72e15 113
01c003ed 56
01c72e15 114
01c003ed 57
01c0000e 1
01c72e15 115
01c003ee 1
01c72e15 116
01c003ee 2
01c72e15 117
...
01c72e15 170
01c003ee 56
01c72e15 171
01c003ee 57
01c0000f 1
01c72e15 172
01c003ef 1
01c72e15 173
01c003ef 2
01c72e15 174
01c003ef 3
...
01c72e15 199
01c003ef 28
01c72e15 200
01c003ef 29

0206e214 1
0206e215 1
01c0000c 1
01c72e14 2
01c72e15 200
01c003ec 57
01c0000d 1
01c003ed 57
01c0000e 1
01c003ee 57
01c0000f 1
01c003ef 29

At the bottom of the output is a summary that shows (in order) RDBA 0206e214 was accessed a total of 1 time, RDBA 0206e215 was accessed 1 time, RDBA 01c0000c was accessed 1 time, RDBA 01c72e14 was accessed 2 times, RDBA 01c72e15 was accessed 200 times, etc.  Nice, but what do those RDBA numbers represent?  We will get to that later.

Inside the raw 10200 trace file we might see something like this (I am able to identify some items that appear in the raw trace file, but I do not yet fully understand the file):

...
*** SESSION ID:(146.18) 2011-01-23 14:36:18.700
Consistent read started for block 9 : 0206e214
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 9sch: scn: 0x0000.00000000)
CR exa ret 2 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 206e214
Consistent read started for block 9 : 0206e215
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 9 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 206e215
Consistent read finished for block 9 : 206e215
Consistent read started for block 9 : 01c0000c
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 9 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c0000c
Consistent read finished for block 9 : 1c0000c
Consistent read started for block 9 : 01c72e14
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 2 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c72e14
Consistent read started for block 9 : 01c72e15
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 2 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c72e15
Consistent read started for block 9 : 01c003ec
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 2 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c003ec
Consistent read started for block 9 : 01c72e14
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 9 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c72e14
Consistent read finished for block 9 : 1c72e14
Consistent read started for block 9 : 01c72e15
  env: (scn: 0x0000.00135bfd  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00000000 10sch: scn: 0x0000.00135bfd)
CR exa ret 2 on:  0000000004B99F38  scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  scn: 0xffff.ffffffff  sfl: 0
Consistent read finished for block 9 : 1c72e15
... 

Now that we see the RDBA numbers again, I suppose that it is time to try to determine the objects that are referenced by those RDBA numbers.  We can try dumping the index structures to see which blocks are read, but first we need to find the OBJECT_IDs for the two indexes:

SELECT
  OBJECT_NAME,
  OBJECT_ID
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME IN ('PAR_I1','CHI_I1');

OBJECT_NAME  OBJECT_ID
----------- ----------
CHI_I1           48143
PAR_I1           48142 

With the OBJECT_IDs we are able to write the index structures to a trace file:

ALTER SESSION SET TRACEFILE_IDENTIFIER='TREE_DUMP_CHI_I1';
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME TREEDUMP LEVEL 48143';

ALTER SESSION SET TRACEFILE_IDENTIFIER='TREE_DUMP_PAR_I1';
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME TREEDUMP LEVEL 48142'; 

Partial output from the TREE_DUMP_PAR_I1 trace file might look like the section that follows (items appearing in bold were identified in the trace file summary):

----- begin tree dump
branch: 0x206e214 34005524 (0: nrow: 625, level: 1)
   leaf: 0x206e215 34005525 (-1: nrow: 520 rrow: 520)
   leaf: 0x206e216 34005526 (0: nrow: 513 rrow: 513)
   leaf: 0x206e217 34005527 (1: nrow: 513 rrow: 513)
   leaf: 0x206e218 34005528 (2: nrow: 513 rrow: 513) 
...
   leaf: 0x206ee0e 34008590 (623: nrow: 435 rrow: 435)
----- end tree dump

In the above, 0x206e214 is a branch block (actually the root block) and 0x206e215 is the first leaf block.

Partial output from the TREE_DUMP_CHI_I1 trace file might look like the section that follows (items appearing in bold were identified in the trace file summary):

----- begin tree dump
branch: 0x1c72e14 29830676 (0: nrow: 625, level: 1)
   leaf: 0x1c72e15 29830677 (-1: nrow: 520 rrow: 520)
   leaf: 0x1c72e16 29830678 (0: nrow: 513 rrow: 513)
   leaf: 0x1c72e17 29830679 (1: nrow: 513 rrow: 513)
   leaf: 0x1c72e18 29830680 (2: nrow: 513 rrow: 513)
...
   leaf: 0x1c7308e 29831310 (623: nrow: 435 rrow: 435)
----- end tree dump 

In the above, 0x1c72e14 is a branch block (actually the root block) and 0x1c72e15 is the first leaf block.

If we take another look at the summary, we are now able to update the summary with the index block information:

0206e214 1     /* PAR_I1 Root block of index on T1 */
0206e215 1     /* PAR_I1 Leaf block of index on T1 */
01c0000c 1
01c72e14 2     /* CHI_I1 Root block of index on T2 */
01c72e15 200   /* CHI_I1 Leaf block of index on T2 */
01c003ec 57
01c0000d 1
01c003ed 57
01c0000e 1
01c003ee 57
01c0000f 1
01c003ef 29 

Let’s try to find the source of the rest of the blocks that were found in the summary (I guess that this could be the hard way to get the job done):

SELECT
  SEGMENT_NAME,
  HEADER_FILE,
  HEADER_BLOCK,
  BLOCKS,
  HEADER_BLOCK+BLOCKS-1 MAX_BLOCKS
FROM
  DBA_SEGMENTS
WHERE
  SEGMENT_NAME IN ('T1','T2');

SEGMENT_NAME HEADER_FILE HEADER_BLOCK     BLOCKS MAX_BLOCKS
------------ ----------- ------------ ---------- ----------
T1                     7           11       5504       5514
T2                     7         1003       5504       6506 

Taking the above HEADER_FILE, HEADER_BLOCK, and MAX_BLOCKS numbers and dumping the block contents to a trace file (this will work in this test case script because all of the extents for the table blocks are probably close to each other – looking back, it probably would have been a better idea to use DBA_EXTENTS rather than DBA_SEGMENTS and just dump the first extent for each object):

ALTER SESSION SET TRACEFILE_IDENTIFIER='TABLE_DUMP_T1';
ALTER SYSTEM DUMP DATAFILE 7 BLOCK MIN 11 BLOCK MAX 5514;

ALTER SESSION SET TRACEFILE_IDENTIFIER='TABLE_DUMP_T2';
ALTER SYSTEM DUMP DATAFILE 7 BLOCK MIN 1003 BLOCK MAX 6506; 

Partial output from the TABLE_DUMP_T1 trace file might look like the following (items appearing in bold were identified in the trace file summary):

   Second Level Bitmap block DBAs
   --------------------------------------------------------
   DBA 1:   0x01c0000a

buffer tsn: 9 rdba: 0x01c0000c (7/12)
scn: 0x0000.00135a07 seq: 0x02 flg: 0x04 tail: 0x5a070602
frmt: 0x02 chkval: 0x541c type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c0000d (7/13)
scn: 0x0000.00135a07 seq: 0x02 flg: 0x04 tail: 0x5a070602
frmt: 0x02 chkval: 0x42da type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c0000e (7/14)
scn: 0x0000.00135a07 seq: 0x02 flg: 0x04 tail: 0x5a070602
frmt: 0x02 chkval: 0x840f type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c0000f (7/15)
scn: 0x0000.00135a07 seq: 0x02 flg: 0x04 tail: 0x5a070602
frmt: 0x02 chkval: 0x74ce type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
... 

Partial output from the TABLE_DUMP_T2 trace file might look like the following (items appearing in bold were identified in the trace file summary):

   Second Level Bitmap block DBAs
   --------------------------------------------------------
   DBA 1:   0x01c003ea

buffer tsn: 9 rdba: 0x01c003ec (7/1004)
scn: 0x0000.00135ac4 seq: 0x02 flg: 0x04 tail: 0x5ac40602
frmt: 0x02 chkval: 0x50a8 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c003ed (7/1005)
scn: 0x0000.00135ac4 seq: 0x02 flg: 0x04 tail: 0x5ac40602
frmt: 0x02 chkval: 0x2ef2 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c003ee (7/1006)
scn: 0x0000.00135ac4 seq: 0x02 flg: 0x04 tail: 0x5ac40602
frmt: 0x02 chkval: 0xbc00 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c003ef (7/1007)
scn: 0x0000.00135ac4 seq: 0x02 flg: 0x04 tail: 0x5ac40602
frmt: 0x02 chkval: 0x6c98 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
...
buffer tsn: 9 rdba: 0x01c003f0 (7/1008)
scn: 0x0000.00135ac4 seq: 0x02 flg: 0x04 tail: 0x5ac40602
frmt: 0x02 chkval: 0xf228 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000080F4400 to 0x00000000080F6400
... 

Taking another look at the summary, now updated with the table blocks:

0206e214 1     /* PAR_I1 Root block of index on T1 */
0206e215 1     /* PAR_I1 Leaf block of index on T1 */
01c0000c 1     /* T1     Table block */
01c72e14 2     /* CHI_I1 Root block of index on T2 */
01c72e15 200   /* CHI_I1 Leaf block of index on T2 */
01c003ec 57    /* T2     Table block */
01c0000d 1     /* T1     Table block */
01c003ed 57    /* T2     Table block */
01c0000e 1     /* T1     Table block */
01c003ee 57    /* T2     Table block */
01c0000f 1     /* T1     Table block */
01c003ef 29    /* T2     Table block */

Those datafile dumps can be quite time consuming.  Is there anything else we can try?

We could try to find the RDBA for the ten blocks (note that there is a risk here if the first extent is only eight blocks in length) in the first extent of each segment using the DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS function (items appearing in bold were identified in the trace file summary):

SELECT
  SEGMENT_NAME,
  DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(HEADER_FILE,HEADER_BLOCK+RN) RDBA,
  TO_CHAR(DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(HEADER_FILE,HEADER_BLOCK+RN),'XXXXXXXXXX') HEX_RDBA
FROM
  DBA_SEGMENTS,
  (SELECT /*+ MATERIALIZE */
    ROWNUM-1 RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10)
WHERE
  SEGMENT_NAME IN ('T1','T2')
ORDER BY
  SEGMENT_NAME,
  RN;

SEGMENT_NAME       RDBA HEX_RDBA
------------ ---------- -----------
T1             29360139     1C0000B
T1             29360140     1C0000C
T1             29360141     1C0000D
T1             29360142     1C0000E
T1             29360143     1C0000F
T1             29360144     1C00010
T1             29360145     1C00011
T1             29360146     1C00012
T1             29360147     1C00013
T1             29360148     1C00014
T2             29361131     1C003EB
T2             29361132     1C003EC
T2             29361133     1C003ED
T2             29361134     1C003EE
T2             29361135     1C003EF
T2             29361136     1C003F0
T2             29361137     1C003F1
T2             29361138     1C003F2
T2             29361139     1C003F3
T2             29361140     1C003F4 

Or, we could try working from the opposite direction.  With the knowledge that the lower 22 bits of the RDBA are the block number and the upper ten bits of the RDBA are the relative file number, we can manually calculate the relative file number and the block number from the RDBA and then look up the object name associated with the file and block.  First, we need the decimal equivalent of (binary) 1111111111111111111111:

(binary) 1111111111111111111111 = (decimal) 4194303 

So, if we BITAND the RDBA with 4194303 we obtain the block number, and if we divide the RDBA by 4194304 (and truncate) we obtain the relative file number.  Below, this calculation is performed for two of the RDBA numbers that were listed in the trace file summary:

SELECT
  TO_CHAR(L.RDBA,'XXXXXXXX') HEX_RDBA,
  L.RDBA,
  TRUNC(L.RDBA/4194304) DATA_FILE,
  BITAND(L.RDBA,4194303) DATA_BLOCK
FROM
  (SELECT
    TO_NUMBER('01c0000c', 'XXXXXXXX') RDBA
  FROM
    DUAL
  UNION ALL
  SELECT
    TO_NUMBER('01c003ec', 'XXXXXXXX') RDBA
  FROM
    DUAL) L;

HEX_RDBA        RDBA  DATA_FILE DATA_BLOCK
--------- ---------- ---------- ----------
  1C0000C   29360140          7         12
  1C003EC   29361132          7       1004 

Remembering the number 4194303 might be challenging, so we can just use the DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE and DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK functions instead.  Looking up the associated object names can be slow, unless we are able to limit the object names to a list of specific objects (ideally, we would also specify the DE.OWNER column in the WHERE clause):

SELECT /*+ LEADING(L) */
  TO_CHAR(L.RDBA, 'XXXXXXXX') RDBA_HEX,
  L.RDBA,
  DE.SEGMENT_NAME,
  DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(L.RDBA) DATA_FILE,
  DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(L.RDBA) DATA_BLOCK
FROM
  (SELECT
    TO_NUMBER('01c0000c', 'XXXXXXXX') RDBA
  FROM
    DUAL
  UNION ALL
  SELECT
    TO_NUMBER('01c003ec', 'XXXXXXXX') RDBA
  FROM
    DUAL) L,
  DBA_EXTENTS DE
WHERE
  DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(L.RDBA)=DE.FILE_ID
  AND DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(L.RDBA) BETWEEN DE.BLOCK_ID AND (DE.BLOCK_ID + DE.BLOCKS - 1)
  AND DE.SEGMENT_NAME IN ('T1','T2')
ORDER BY
  DE.SEGMENT_NAME;

RDBA_HEX        RDBA SEGMENT_NAME  DATA_FILE DATA_BLOCK
--------- ---------- ------------ ---------- ----------
  1C0000C   29360140 T1                    7         12
  1C003EC   29361132 T2                    7       1004  
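The same bit arithmetic can also be performed outside the database, which may be handy while post-processing a trace file.  Below is a minimal sketch in Python; the function name and the two sample RDBA values are merely illustrations taken from the trace file summary above:

# A minimal sketch, assuming Python 3; decode_rdba is just an illustrative name
def decode_rdba(rdba_hex):
    # Low 22 bits = block number, high 10 bits = relative file number,
    # exactly the split described above (0x3FFFFF = 4194303)
    rdba = int(rdba_hex, 16)
    return rdba >> 22, rdba & 0x3FFFFF

print(decode_rdba("01c0000c"))   # expected output: (7, 12)
print(decode_rdba("01c003ec"))   # expected output: (7, 1004)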

————————————————-

Now let’s take a look at Oracle Database 11.2.0.2 – what has changed?  If we execute the test SQL statement, we see the following execution plan for the query:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST -ROWS -PREDICATE'));

SQL_ID  1afa5ym56cagh, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS LEADING(T1) USE_NL(T2) INDEX(T1)
INDEX(T2) */   T1.ID,   T1.N1,   T2.ID,   T2.N1 FROM   T1,   T2 WHERE
T1.ID=T2.ID   AND T1.ID BETWEEN 1 AND 200   AND T2.N1 = 0

Plan hash value: 3072046012

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Starts | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |      1 |      0 |00:00:00.01 |     215 |
|   1 |  NESTED LOOPS                |        |      1 |      0 |00:00:00.01 |     215 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |      1 |    200 |00:00:00.01 |       6 |
|   3 |    INDEX RANGE SCAN          | PAR_I1 |      1 |    200 |00:00:00.01 |       2 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T2     |    200 |      0 |00:00:00.01 |     209 |
|   5 |    INDEX UNIQUE SCAN         | CHI_I1 |    200 |    200 |00:00:00.01 |       9 |
---------------------------------------------------------------------------------------- 

Notice in the above that the CHI_I1 index, which required 202 consistent gets in Oracle Database 10.2.0.5, oddly required only 9 consistent gets in Oracle Database 11.2.0.2.  But that is not the only change.  If we process the 10200 trace file through one of the trace file parsers, we might see something like this:

0x0200439b> objd: 0x00011711 1
0x0200439c> objd: 0x00011711 1
0x02000083> objd: 0x00011705 1
0x01c04d9b> objd: 0x00011710 1
0x01c04d9c> objd: 0x00011710 1
0x01c0389b> objd: 0x00011706 1
0x01c04d9b> objd: 0x00011710 2
0x01c04d9c> objd: 0x00011710 2
0x01c0389b> objd: 0x00011706 2
0x01c04d9c> objd: 0x00011710 3
0x01c0389b> objd: 0x00011706 3
0x01c04d9c> objd: 0x00011710 4
0x01c0389b> objd: 0x00011706 4
0x01c04d9c> objd: 0x00011710 5
0x01c0389b> objd: 0x00011706 5
0x01c04d9c> objd: 0x00011710 6
0x01c0389b> objd: 0x00011706 6
0x01c04d9c> objd: 0x00011710 7
0x01c0389b> objd: 0x00011706 7
0x01c0389b> objd: 0x00011706 8
0x01c0389b> objd: 0x00011706 9
0x01c0389b> objd: 0x00011706 10
0x01c0389b> objd: 0x00011706 11
0x01c0389b> objd: 0x00011706 12  
...
0x01c0389b> objd: 0x00011706 54
0x01c0389b> objd: 0x00011706 55
0x01c0389b> objd: 0x00011706 56
0x01c0389b> objd: 0x00011706 57
0x02000084> objd: 0x00011705 1
0x01c0389c> objd: 0x00011706 1
0x01c0389c> objd: 0x00011706 2
0x01c0389c> objd: 0x00011706 3
0x01c0389c> objd: 0x00011706 4
...
0x01c0389c> objd: 0x00011706 55
0x01c0389c> objd: 0x00011706 56
0x01c0389c> objd: 0x00011706 57
0x02000085> objd: 0x00011705 1
0x01c0389d> objd: 0x00011706 1
0x01c0389d> objd: 0x00011706 2
0x01c0389d> objd: 0x00011706 3
0x01c0389d> objd: 0x00011706 4
0x01c0389d> objd: 0x00011706 5
...
0x01c0389d> objd: 0x00011706 55
0x01c0389d> objd: 0x00011706 56
0x01c0389d> objd: 0x00011706 57
0x02000086> objd: 0x00011705 1
0x01c0389e> objd: 0x00011706 1
0x01c0389e> objd: 0x00011706 2
0x01c0389e> objd: 0x00011706 3
...
0x01c0389e> objd: 0x00011706 27
0x01c0389e> objd: 0x00011706 28
0x01c0389e> objd: 0x00011706 29

0x0200439b> objd: 0x00011711 1
0x0200439c> objd: 0x00011711 1
0x02000083> objd: 0x00011705 1
0x01c04d9b> objd: 0x00011710 2
0x01c04d9c> objd: 0x00011710 7
0x01c0389b> objd: 0x00011706 57
0x02000084> objd: 0x00011705 1
0x01c0389c> objd: 0x00011706 57
0x02000085> objd: 0x00011705 1
0x01c0389d> objd: 0x00011706 57
0x02000086> objd: 0x00011705 1
0x01c0389e> objd: 0x00011706 29

Interesting – it appears that Oracle Database 11.2.0.2 writes the DATA_OBJECT_ID that is related to the block directly into the trace file, so that we no longer need to execute several SQL statements to determine the object names related to the blocks.

Inside the raw 10200 trace file from 11.2.0.2 we might see something like the following:

ktrgtc2(): started for block <0x0009 : 0x0200439b> objd: 0x00011711
  env [0x000000001870BF5C]: (scn: 0x0000.0014c547  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.00000000  ma-scn: 0x0000.0014c527  flg: 0x00000661)
ktrexc(): returning 2 on:  0000000013C3D598  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0
ktrgtc2(): completed for block <0x0009 : 0x0200439b> objd: 0x00011711
ktrget2(): started for block  <0x0009 : 0x0200439c> objd: 0x00011711
env [0x000000001870BF5C]: (scn: 0x0000.0014c547  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.0014c547  ma-scn: 0x0000.0014c527  flg: 0x00000662)
ktrexf(): returning 9 on:  0000000013C3D598  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0
ktrgcm(): completed for block  <0x0009 : 0x0200439c> objd: 0x00011711
ktrget2(): completed for  block <0x0009 : 0x0200439c> objd: 0x00011711
ktrget2(): started for block  <0x0009 : 0x02000083> objd: 0x00011705
env [0x000000001870BF5C]: (scn: 0x0000.0014c547  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.0014c547  ma-scn: 0x0000.0014c527  flg: 0x00000662)
ktrexf(): returning 9 on:  0000000013C3D598  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0
ktrgcm(): completed for block  <0x0009 : 0x02000083> objd: 0x00011705
ktrget2(): completed for  block <0x0009 : 0x02000083> objd: 0x00011705
ktrgtc2(): started for block <0x0009 : 0x01c04d9b> objd: 0x00011710
  env [0x000000001870BF5C]: (scn: 0x0000.0014c547  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.0014c547  ma-scn: 0x0000.0014c527  flg: 0x00000662)
ktrexc(): returning 2 on:  0000000013C3D598  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0
ktrgtc2(): completed for block <0x0009 : 0x01c04d9b> objd: 0x00011710
ktrgtc2(): started for block <0x0009 : 0x01c04d9c> objd: 0x00011710
  env [0x000000001870BF5C]: (scn: 0x0000.0014c547  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid: 0x0000.000.00000000  st-scn: 0x0000.00000000  hi-scn: 0x0000.0014c547  ma-scn: 0x0000.0014c527  flg: 0x00000662)
ktrexc(): returning 2 on:  0000000013C3D598  cr-scn: 0xffff.ffffffff  xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  cl-scn: 0xffff.ffffffff  sfl: 0
ktrgtc2(): completed for block <0x0009 : 0x01c04d9c> objd: 0x00011710  

The 10200 trace file in 11.2.0.2 provided the DATA_OBJECT_ID for the consistent reads, while the 10.2.0.5 trace file did not.  We can use this information to determine which objects were accessed, and in which order by pulling in the unique OBJD values from the summary:

SELECT
  OBJECT_NAME,
  DATA_OBJECT_ID,
  TO_CHAR(DATA_OBJECT_ID, 'XXXXX') HEX_DATA_OBJECT_ID
FROM
  DBA_OBJECTS
WHERE
  DATA_OBJECT_ID IN(
    TO_NUMBER('11711', 'XXXXX'),
    TO_NUMBER('11705', 'XXXXX'),
    TO_NUMBER('11710', 'XXXXX'),
    TO_NUMBER('11706', 'XXXXX'));

OBJECT_NAME DATA_OBJECT_ID HEX_DA
----------- -------------- ------
T1                   71429  11705
T2                   71430  11706
CHI_I1               71440  11710
PAR_I1               71441  11711 

Taking another look at the summary, now updated with the table and index blocks:

0x0200439b> objd: 0x00011711 1   /* PAR_I1 Root block of index on T1 */
0x0200439c> objd: 0x00011711 1   /* PAR_I1 Leaf block of index on T1 */
0x02000083> objd: 0x00011705 1   /* T1     Table block */
0x01c04d9b> objd: 0x00011710 2   /* CHI_I1 Root block of index on T2 */
0x01c04d9c> objd: 0x00011710 7   /* CHI_I1 Leaf block of index on T2 */
0x01c0389b> objd: 0x00011706 57  /* T2     Table block */
0x02000084> objd: 0x00011705 1   /* T1     Table block */
0x01c0389c> objd: 0x00011706 57  /* T2     Table block */
0x02000085> objd: 0x00011705 1   /* T1     Table block */
0x01c0389d> objd: 0x00011706 57  /* T2     Table block */
0x02000086> objd: 0x00011705 1   /* T1     Table block */
0x01c0389e> objd: 0x00011706 29  /* T2     Table block */

So, from the above, 2 of the consistent gets for the CHI_I1 index were for the root block of the index, and the remaining 7 were for the first leaf block.

Simple?

——————-

Anyone want to try creating a 10200 trace file parser in a different programming language and posting the source code here?
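
As one possible starting point, here is a minimal sketch in Python that mirrors the logic of the VBS script above (find the text after the colon that follows “started for block”, count the accesses per block, write a running line for each consistent get, and finish with a summary).  The input and output file names are assumptions; replace them with the actual trace file name and the desired output location.

# parse_10200.py - a minimal Python sketch of the same 10200 trace file parser
from collections import OrderedDict

TRACE_FILE = r"c:\or10s_ora_4256_watch_consistent.trc"   # assumed trace file name
OUTPUT_FILE = r"c:\watch_consistent.txt"                 # assumed output file name

block_counts = OrderedDict()   # block identifier -> number of consistent gets so far

with open(TRACE_FILE, "r") as trace, open(OUTPUT_FILE, "w") as output:
    for line in trace:
        marker = line.find("started for block")
        if marker == -1:
            continue
        # Keep everything after the ":" that follows "started for block",
        # mirroring the InStr/Right logic of the VBA/VBS versions above
        colon = line.find(":", marker)
        if colon == -1:
            continue
        block = line[colon + 1:].strip()
        block_counts[block] = block_counts.get(block, 0) + 1
        output.write(block + "\t" + str(block_counts[block]) + "\n")

    # Summary section: each distinct block with its total number of accesses
    output.write("\n")
    for block, count in block_counts.items():
        output.write(block + "\t" + str(count) + "\n")

The sketch should handle both the 10.2.0.x and the 11.2.0.x trace file formats for the same reason the VBS script does: everything after the colon that follows the “started for block” text is treated as the block identifier.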





Query is Returning ORA-06502: Character String Buffer Too Small, Any Help for the OP?

21 01 2011

January 21, 2011

I found an interesting SQL statement on the OTN forums today.  When executing the SQL statement Oracle Database returns the following error:

ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 12
ORA-00920: invalid relational operator
00604. 00000 - "error occurred at recursive SQL level %s"
*Cause: An error occurred while processing a recursive SQL statement
(a statement applying to internal dictionary tables).
*Action: If the situation described in the next error on the stack
can be corrected, do so; otherwise contact Oracle Support.
Error at Line: 30 Column: 4 

How would you help the original poster with this particular SQL statement (see the OTN forum thread for the SQL statement and my reformatted version of the SQL statement)?

My thoughts about the SQL statement (as posted in that thread):

Possibilities:

  • Combining ANSI outer joins with Oracle specific outer joins in the same SQL statement
  • Combining ANSI joins with scalar subqueries (SELECTs listed in column positions)
  • Grouping on a.question_type, which could be up to 12,000 bytes long
  • Including SELECT statements in GROUP BY clauses




Adding Comments to SQL Statements Improves Performance?

15 01 2011

January 15, 2011

While reading the “Pro Oracle SQL” book I learned something interesting.  Commenting your work can improve database performance.  You certainly are aware that thoroughly documenting what you do could prevent hours of headaches that might appear later when trying to investigate problems or add additional features to an existing procedure (I think that was part of the message in Cary Millsap’s recent blog article). 

But how can commenting what you do actually improve database performance?  To demonstrate, let’s create a simple test case using two tables and a single insert statement to populate the two tables:

CREATE TABLE T1 (
  C1 NUMBER,
  C2 NUMBER,
  C3 VARCHAR2(200),
  PRIMARY KEY(C1));

CREATE TABLE T2 (
  C1 NUMBER,
  C2 NUMBER,
  C3 VARCHAR2(200),
  PRIMARY KEY(C1));

INSERT ALL
  WHEN 1=1 THEN
    INTO T1
  WHEN MOD(ROWNUM,2)=1 THEN
    INTO T2
SELECT
  ROWNUM C1,
  ROWNUM C2,
  LPAD('A',200,'A') C3
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE) 

The tables created by the above script both have primary key indexes, with table T1 having 1,000,000 rows and table T2 having 500,000 rows.  Now let’s create a simple SQL statement that joins the two tables and output the execution plan for the SQL statement:

SET LINESIZE 140
SET TRIMSPOOL ON
SET PAGESIZE 1000
SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   500K|  9765K|       | 13974   (1)| 00:02:48 |
|*  1 |  HASH JOIN         |      |   500K|  9765K|    10M| 13974   (1)| 00:02:48 |
|   2 |   TABLE ACCESS FULL| T2   |   500K|  4882K|       |  4126   (1)| 00:00:50 |
|   3 |   TABLE ACCESS FULL| T1   |  1000K|  9765K|       |  8276   (1)| 00:01:40 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1") 

The execution plan indicates that a hash join will be used to join the tables, with table T2 listed as the first row source below the words “HASH JOIN”.  The query is expected to require roughly 2 minutes and 48 seconds to execute.  Now let’s try the query again with a hint:

SELECT /*+ LEADING(T1) */
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   500K|  9765K|       | 13973   (1)| 00:02:48 |
|*  1 |  HASH JOIN         |      |   500K|  9765K|    20M| 13973   (1)| 00:02:48 |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|  9765K|       |  8276   (1)| 00:01:40 |
|   3 |   TABLE ACCESS FULL| T2   |   500K|  4882K|       |  4126   (1)| 00:00:50 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1") 

The execution plan again indicates that a hash join will be used to join the tables, this time with table T1 listed as the first row source below the words “HASH JOIN”.  The query is still expected to require roughly 2 minutes and 48 seconds to execute.  Let’s try the query again with a second hint: 

SELECT /*+ LEADING(T1) USE_NL(T2) */
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 685492288

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |   500K|  9765K|  1508K  (1)| 05:01:46 |
|   1 |  NESTED LOOPS                |              |       |       |            |          |
|   2 |   NESTED LOOPS               |              |   500K|  9765K|  1508K  (1)| 05:01:46 |
|   3 |    TABLE ACCESS FULL         | T1           |  1000K|  9765K|  8276   (1)| 00:01:40 |
|*  4 |    INDEX UNIQUE SCAN         | SYS_C0010199 |     1 |       |     1   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T2           |     1 |    10 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."C1"="T2"."C1") 

The execution plan this time indicates that two nested loops joins will be used to join the tables, with table T1 listed as the first row source below the words “NESTED LOOPS”.  The query is now expected to require roughly 301 minutes and 46 seconds to execute.  Next, let’s document what we did to alter performance so that the next person to investigate the performance of this SQL statement will know why it performs as it does:

SELECT /*+ comment: I added these hints on Jan 15, 2011 to fix a performance problem LEADING(T1) USE_NL(T2) */
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   500K|  9765K|       | 13974   (1)| 00:02:48 |
|*  1 |  HASH JOIN         |      |   500K|  9765K|    10M| 13974   (1)| 00:02:48 |
|   2 |   TABLE ACCESS FULL| T2   |   500K|  4882K|       |  4126   (1)| 00:00:50 |
|   3 |   TABLE ACCESS FULL| T1   |  1000K|  9765K|       |  8276   (1)| 00:01:40 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1") 

As you can see from the AUTOTRACE generated execution plan, simply commenting our changes is sufficient to convince the optimizer that the SQL statement will execute roughly 100 times faster than the version of the SQL statement without the comment that documents our work.  The reason for this estimated performance improvement is explained on page 516 of the book.  :-)





Free ANSI SQL to Oracle Specific SQL Translator and SQL Tutor

7 01 2011

January 7, 2011

In a recent OTN thread a developer described their difficulty in working with ANSI SQL in Oracle Database 8i (quick show of hands, how many people know why?).  The OP eventually asked:

“I don’t suppose there are any normal-query-sytax-to-freakishly-old-syntax converters out there?”

And John Spencer jokingly replied back:

“Personally, I’m looking for a freakishly-new-syntax-to-normal-query-sytax converter.”

I thought that this might be an interesting challenge.  A developer is struggling to adjust to Oracle Database, having come from a Microsoft SQL Server development background, and he is looking for a translator that will convert his ANSI SQL to SQL that will work with Oracle Database 8i.  My reply indicated that Oracle provides a free ANSI SQL to Oracle specific SQL translator and tutoring tool in the free Oracle XE.  A free translator and tutor?  Sure, below is an example using Oracle Database 11.2.0.1.  First, let’s create three tables and collect the statistics for those tables:

CREATE TABLE T1 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

CREATE TABLE T2 AS
SELECT
  ROWNUM C0,
  CEIL(ROWNUM/20) C1,
  LPAD('A',100,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100*20;

CREATE TABLE T3 AS
SELECT
  ROWNUM C0,
  TRUNC((MOD(ROWNUM-1,80)+1)*1.2) C1,
  LPAD('A',100,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=1000;

ALTER TABLE T1 MODIFY (C1 NOT NULL);
ALTER TABLE T2 MODIFY (C1 NOT NULL);
ALTER TABLE T3 MODIFY (C1 NOT NULL);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3') 

We now have three tables, each with a NOT NULL constraint on the C1 column.  Let’s assume that the following two ANSI SQL statements are crafted to access the three tables:

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 IN (
    SELECT
      T3.C1
    FROM
      T3);

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 NOT IN (
    SELECT
      T3.C1*2
    FROM
      T3); 

Note that in the above, one of the SQL statements contains an IN clause, and the other a NOT IN clause.  That IN clause could also be written as an EXISTS clause (because of the NOT NULL constraint on column C1), or as an inline view.  Oracle Database 8i will NOT automatically transform an IN clause into an EXISTS clause, even when it is much more efficient to execute the SQL statement with the EXISTS clause (more recent release versions of Oracle Database might perform the transformation).  So let’s try an experiment (if you are running Oracle Database 8i, you will have to manually convert the above ANSI joins into Oracle specific syntax to try this):

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM'; 
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SET AUTOTRACE TRACEONLY STATISTICS

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 IN (
    SELECT
      T3.C1
    FROM
      T3);

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM2';

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 NOT IN (
    SELECT
      T3.C1*2
    FROM
      T3); 

Now take a look inside the 10053 trace files.  You might find equivalent, transformed versions of the SQL statements like these (reformatted to add extra whitespace) in the WATCH_TRANSFORM trace file (note that in the thread Yasu mentioned that he received a different final transformation when connected as the SYS user than he did when connected as a normal user):

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "from$_subquery$_005"."C1_1" "QCSJ_C000000000300001",
  "from$_subquery$_005"."PADDING_0" "QCSJ_C000000000300003"
FROM
  "TESTUSER"."T1" "T1",
  LATERAL(
    (SELECT
       "T2"."PADDING" "PADDING_0",
       "T2"."C1" "C1_1"
     FROM
       "TESTUSER"."T2" "T2"
     WHERE
       "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005"

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"=ANY (
    SELECT
      "T3"."C1" "C1"
    FROM
      "TESTUSER"."T3" "T3")
  AND "T1"."C1"="T2"."C1"(+)

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T3" "T3",
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"="T2"."C1"(+)
  AND "T1"."C1"="T3"."C1" 

If you are looking at the above, trying to learn alternate SQL syntax that may be more efficient than what you have used in the past, you should note that the last of the above SQL statements is technically not logically equivalent to the submitted SQL statement due to the potential for duplicate values in column C1 in table T3 (also note that the first SQL statement is missing the IN clause).  However, the optimizer in 11.2.0.1 understands how to handle this join as a semi-join, and will stop the join to the second table when the first matching row is found.

In the WATCH_TRANSFORM2 trace file, we see the NOT IN version of the SQL statement:

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "from$_subquery$_005"."C1_1" "QCSJ_C000000000300001",
  "from$_subquery$_005"."PADDING_0" "QCSJ_C000000000300003"
FROM
  "TESTUSER"."T1" "T1",
  LATERAL(
    (SELECT
       "T2"."PADDING" "PADDING_0",
       "T2"."C1" "C1_1"
     FROM
       "TESTUSER"."T2" "T2"
     WHERE
       "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005"

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"<>ALL (
    SELECT
      "T3"."C1"*2 "T3.C1*2"
    FROM "TESTUSER"."T3" "T3")
  AND "T1"."C1"="T2"."C1"(+)

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T3" "T3",
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"="T3"."C1"*2
  AND "T1"."C1"="T2"."C1"(+) 

If you are looking at the above, trying to learn alternate SQL syntax that may be more efficient than what you have used in the past, you should note that the last of the above SQL statements is technically not logically equivalent to the submitted SQL statement due to the potential for duplicate values in column C1 in table T3.  However, the optimizer in 11.2.0.1 understands how to handle this join as an anti-join, and will stop the join to the second table when the first matching row is found.

Let’s try again with a change to the OPTIMIZER_FEATURES_ENABLE parameter:

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='8.1.7';
ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM3';

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 IN (
    SELECT
      T3.C1
    FROM
      T3);

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM4';

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 NOT IN (
    SELECT
      T3.C1*2
    FROM
      T3); 

Now you might see something like this in the WATCH_TRANSFORM3 trace file (the IN syntax):

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "T2"."C1" "C1",
  "T2"."PADDING" "PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"="T2"."C1"(+)

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"=ANY (
    SELECT
      "T3"."C1" "C1"
    FROM
      "TESTUSER"."T3" "T3")
      AND "T1"."C1"="T2"."C1"(+)

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  (SELECT DISTINCT
     "T3"."C1" "C1"
   FROM
     "TESTUSER"."T3" "T3") "VW_NSO_1",
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"="VW_NSO_1"."C1"
  AND "T1"."C1"="T2"."C1"(+) 

Notice in the above that the final version of the SQL statement shows the IN clause transformed into an inline view.

In the WATCH_TRANSFORM4 trace file you might see something like this (the NOT IN syntax):

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "T2"."C1" "C1",
  "T2"."PADDING" "PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"="T2"."C1"(+)

SELECT
  "T1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T1" "T1",
  "TESTUSER"."T2" "T2"
WHERE
  "T1"."C1"<>ALL (
    SELECT
      "T3"."C1"*2 "T3.C1*2"
    FROM
      "TESTUSER"."T3" "T3")
  AND "T1"."C1"="T2"."C1"(+)

SELECT
  "SYS_ALIAS_1"."PADDING" "T1_PADDING",
  "T2"."PADDING" "T2_PADDING"
FROM
  "TESTUSER"."T1" "SYS_ALIAS_1",
  "TESTUSER"."T2" "T2"
WHERE
  NOT EXISTS (
    SELECT
      0
    FROM
      "TESTUSER"."T3" "T3"
    WHERE
      "T3"."C1"*2="SYS_ALIAS_1"."C1")
  AND "SYS_ALIAS_1"."C1"="T2"."C1"(+) 

Notice in the above, the odd last transformation.  Oracle Database 8i is not supposed to be able to transform a SQL statement using a NOT IN clause into a SQL statement using a NOT EXISTS clause, even though it might be more efficient.

Just for fun, let’s see what happens when we add a NO_QUERY_TRANSFORMATION hint after the SELECT in the SQL statements while leaving the modified OPTIMIZER_FEATURES_ENABLE parameter set:

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM5';

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 IN (
    SELECT
      T3.C1
    FROM
      T3);

ALTER SESSION SET TRACEFILE_IDENTIFIER='WATCH_TRANSFORM6';

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1 LEFT JOIN T2
    ON T1.C1=T2.C1
WHERE
  T1.C1 NOT IN (
    SELECT
      T3.C1*2
    FROM
      T3); 

The WATCH_TRANSFORM5 trace file (IN syntax) might contain SQL statements that look something like this:

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "from$_subquery$_005"."C1_1" "C1",
  "from$_subquery$_005"."PADDING_0" "PADDING"
FROM
  "TESTUSER"."T1" "T1",
   LATERAL(
     (SELECT
        "T2"."PADDING" "PADDING_0",
        "T2"."C1" "C1_1"
      FROM
        "TESTUSER"."T2" "T2"
      WHERE
        "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005"

SELECT
  "SYS_ALIAS_1"."QCSJ_C000000000300002_1" "T1_PADDING",
  "SYS_ALIAS_1"."PADDING_3" "T2_PADDING"
FROM
  (SELECT
     "T1"."C1" "QCSJ_C000000000300000_0",
     "T1"."PADDING" "QCSJ_C000000000300002_1",
     "from$_subquery$_005"."C1_1" "C1",
     "from$_subquery$_005"."PADDING_0" "PADDING_3"
   FROM
     "TESTUSER"."T1" "T1",
     LATERAL(
       (SELECT
          "T2"."PADDING" "PADDING_0",
          "T2"."C1" "C1_1"
        FROM
          "TESTUSER"."T2" "T2"
        WHERE
          "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005") "SYS_ALIAS_1"
   WHERE
     EXISTS (
       SELECT
         0
       FROM
         "TESTUSER"."T3" "T3"
       WHERE
         "T3"."C1"="SYS_ALIAS_1"."QCSJ_C000000000300000_0") 

The last of the above SQL statements might be manually cleaned up to look like this:

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
  AND EXISTS (
    SELECT
      0
    FROM
      T3
    WHERE
      T3.C1=T1.C1); 

Notice in the above that the optimizer has again transformed the IN syntax into EXISTS syntax, even though Oracle Database 8i did not support that automatic transformation (and we specifically requested no query transformations through the use of the hint).

The WATCH_TRANSFORM6 trace file (NOT IN syntax) might contain SQL statements that look something like this:

SELECT
  "T1"."C1" "QCSJ_C000000000300000",
  "T1"."PADDING" "QCSJ_C000000000300002",
  "from$_subquery$_005"."C1_1" "C1",
  "from$_subquery$_005"."PADDING_0" "PADDING"
FROM
  "TESTUSER"."T1" "T1",
  LATERAL(
    (SELECT
       "T2"."PADDING" "PADDING_0",
       "T2"."C1" "C1_1"
     FROM
       "TESTUSER"."T2" "T2"
     WHERE
       "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005"

SELECT
  "SYS_ALIAS_1"."QCSJ_C000000000300002_1" "T1_PADDING",
  "SYS_ALIAS_1"."PADDING_3" "T2_PADDING"
FROM
  (SELECT
     "T1"."C1" "QCSJ_C000000000300000_0",
     "T1"."PADDING" "QCSJ_C000000000300002_1",
     "from$_subquery$_005"."C1_1" "C1",
     "from$_subquery$_005"."PADDING_0" "PADDING_3"
   FROM
     "TESTUSER"."T1" "T1",
     LATERAL(
       (SELECT
          "T2"."PADDING" "PADDING_0",
          "T2"."C1" "C1_1"
        FROM
          "TESTUSER"."T2" "T2"
        WHERE
          "T1"."C1"="T2"."C1"))(+) "from$_subquery$_005") "SYS_ALIAS_1"
WHERE
  NOT EXISTS (
    SELECT
      0
    FROM
      "TESTUSER"."T3" "T3"
    WHERE
      "T3"."C1"*2="SYS_ALIAS_1"."QCSJ_C000000000300000_0") 

Another NOT IN to NOT EXISTS transformation.
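Following the same manual clean-up approach that was used for the IN example, the last of the above SQL statements might be rewritten to look something like this (my clean-up, not output from the trace file):

SELECT
  T1.PADDING T1_PADDING,
  T2.PADDING T2_PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
  AND NOT EXISTS (
    SELECT
      0
    FROM
      T3
    WHERE
      T3.C1*2=T1.C1);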

—-

In all of the above cases, the first transformed SQL statement printed in the 10053 trace file excluded the IN and the NOT IN clauses, but did include the ANSI to Oracle specific join syntax conversion.

It is nice to know that Oracle Corp. throws in something free with Oracle Database: a free ANSI SQL to Oracle specific SQL translator, and a free SQL tutor that teaches how to rewrite SQL statements into alternate SQL syntax that just might be more efficient than the original SQL statement (you might be able to use the more efficient version with older Oracle Database releases, as was the case for the NOT IN to NOT EXISTS transformation).





NO_QUERY_TRANSFORMATION Hint is Ignored… Well, Almost Ignored

3 01 2011

January 3, 2011

A couple of days ago I used a NO_QUERY_TRANSFORMATION hint to permit a query (provided by Martin Berger) to execute.  On the surface the query appeared to be quite sane, yet without hints it would usually fail to execute.  A test case version of the query worked on 11.2.0.1 running on 64 bit Windows on Dec 31, 2010, but suffered from the first known year 2011 bug ( ;-) ), and failed to execute in the same database on Jan 1, 2011.

A test case version of that sane looking query looks like this: 

select
  owner,
  view_name
from
  (select
    v.owner,
    v.view_name,
    o.object_type
  from
    dba_views v,
    dba_objects o
  where
    v.owner = o.owner
    AND v.view_name = o.object_name
    AND o.object_type='VIEW'
    AND o.status='VALID'
    AND v.owner ='SYS'  ) vv
where
  dbms_metadata.get_ddl(VV.OBJECT_TYPE, vv.view_name, vv.owner) like '%TEST%'; 

If you think that the DBMS_METADATA.GET_DDL call in the WHERE clause is at fault, you might try this query instead:

select
  owner,
  view_name
from
  (SELECT
    dbms_metadata.get_ddl('VIEW', vv.view_name, vv.owner) DDL,
    OWNER,
    VIEW_NAME
  FROM
    (select
      v.owner,
      v.view_name,
      o.object_type
    from
      dba_views v,
      dba_objects o
    where
      v.owner = o.owner
      AND v.view_name = o.object_name
      AND o.object_type='VIEW'
      AND o.status='VALID'
      AND v.owner ='SYS'  ) vv
  )
WHERE
  DDL like '%TEST%'; 
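Just to be clear about where the hint was placed, here is a sketch of the hinted form of the first query (only the hint is added):

select /*+ NO_QUERY_TRANSFORMATION */
  owner,
  view_name
from
  (select
    v.owner,
    v.view_name,
    o.object_type
  from
    dba_views v,
    dba_objects o
  where
    v.owner = o.owner
    AND v.view_name = o.object_name
    AND o.object_type='VIEW'
    AND o.status='VALID'
    AND v.owner ='SYS'  ) vv
where
  dbms_metadata.get_ddl(VV.OBJECT_TYPE, vv.view_name, vv.owner) like '%TEST%';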

So, the NO_QUERY_TRANSFORMATION hint was not ignored here: adding that hint immediately after the SELECT keyword allowed both of the above queries to execute.  Just to make certain that there is no magic involved with this hint, let’s take a look at the description from the Oracle Database 11.2 documentation:

“The NO_QUERY_TRANSFORMATION hint instructs the optimizer to skip all query transformations, including but not limited to OR-expansion, view merging, subquery unnesting, star transformation, and materialized view rewrite. For example:

SELECT /*+ NO_QUERY_TRANSFORMATION */ employee_id, last_name
  FROM (SELECT * FROM employees e) v
  WHERE v.last_name = 'Smith';         "

I think that the above explanation is easy to understand – essentially: leave my query alone!

So, what led to this blog article?  I saw a statement in the “Pro Oracle SQL” book on page 329 that I wanted to verify since I had never used the ANY keyword in a SQL statement:

 “There isn’t much to say about the =ANY version [of the SQL statement] since it is merely an alternate way of writing IN.”

I set up a simple test case script with two tables to see if I could demonstrate to myself that a query with the ANY keyword is handled exactly the same as a query using the IN syntax:

CREATE TABLE T1 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

CREATE TABLE T2 AS
SELECT
  ROWNUM C0,
  TRUNC((MOD(ROWNUM-1,80)+1)*1.2) C1,
  LPAD('A',100,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=1000;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2') 

Let’s assume that table T1 is a list of items in a hardware store, and table T2 is a list of items that were purchased in the last month.  We just need a quick way to determine the list of items that were purchased in the last month (we might also need to select another column, such as the product description from table T1, thus the need to access both tables T1 and T2).  So, we put together the following two SQL statements to see if the ANY syntax really is equivalent to the IN syntax:

SELECT
  T1.C1
FROM
  T1
WHERE
  T1.C1 IN (
    SELECT
      T2.C1
    FROM
      T2);

SELECT
  T1.C1
FROM
  T1
WHERE
  T1.C1 = ANY (
    SELECT
      T2.C1
    FROM
      T2); 

First, let’s check the execution plans:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  T1.C1
FROM
  T1
WHERE
  T1.C1 IN (
    SELECT
      T2.C1
    FROM
      T2);

Execution Plan
----------------------------------------------------------
Plan hash value: 1713220790

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    80 |   480 |    11  (10)| 00:00:01 |
|*  1 |  HASH JOIN SEMI    |      |    80 |   480 |    11  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T2   |  1000 |  3000 |     7   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1")

-

SELECT
  T1.C1
FROM
  T1
WHERE
  T1.C1 = ANY (
    SELECT
      T2.C1
    FROM
      T2);

Execution Plan
----------------------------------------------------------
Plan hash value: 1713220790

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    80 |   480 |    11  (10)| 00:00:01 |
|*  1 |  HASH JOIN SEMI    |      |    80 |   480 |    11  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T2   |  1000 |  3000 |     7   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1") 

The execution plans appear to be the same.  But wait, through the magic of automatic query transformations, Oracle’s optimizer might have rewritten both SQL statements so that the actual SQL executed for both queries is identical.  I recall experiencing performance problems in Oracle Database 8.1.7.x with certain IN type queries, while the equivalent EXISTS queries were fast.  Let’s use the magic NO_QUERY_TRANSFORMATION hint that allowed the SQL statement at the start of this blog article to execute, to prevent the optimizer from rewriting the SQL statements.  That hint should help indicate whether or not the SQL statement using the ANY syntax really is equivalent to the query using the IN syntax.  Note that this test was performed on Oracle Database 11.2.0.1, so you might receive different results on other Oracle releases.  The hinted test queries and their execution plans:

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1
FROM
  T1
WHERE
  T1.C1 IN (
    SELECT
      T2.C1
    FROM
      T2);

Execution Plan
----------------------------------------------------------
Plan hash value: 895956251

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     3 |   103   (0)| 00:00:02 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     2 |     6 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT 0 FROM "T2" "T2" WHERE "T2"."C1"=:B1))
   3 - filter("T2"."C1"=:B1)

-

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1
FROM
  T1
WHERE
  T1.C1 = ANY (
    SELECT
      T2.C1
    FROM
      T2);

Execution Plan
----------------------------------------------------------
Plan hash value: 895956251

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     3 |   103   (0)| 00:00:02 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |   100 |   300 |     3   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     2 |     6 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT 0 FROM "T2" "T2" WHERE "T2"."C1"=:B1))
   3 - filter("T2"."C1"=:B1) 

If you look closely, you will see that the execution plans changed from their unhinted versions, and both of the hinted execution plans are identical.  But wait!  Take a close look at the Predicate Information section of the execution plan, specifically the line for plan ID 1.  Where did that EXISTS entry come from – shouldn’t the hint have told Oracle’s optimizer not to transform the query?  It appears that both queries were transformed into EXISTS queries.  Let’s take a look at the 10053 trace file for a slightly modified version of the queries (so that we will force a hard parse):

ALTER SESSION SET TRACEFILE_IDENTIFIER='NO_QUERY_TRANSFORM'; 
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1,
  1
FROM
  T1
WHERE
  T1.C1 IN (
    SELECT
      T2.C1
    FROM
      T2);

SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1,
  1
FROM
  T1
WHERE
  T1.C1 = ANY (
    SELECT
      T2.C1
    FROM
      T2);

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF'; 
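As a side note (this query is not part of the original test script), on Oracle Database 11g the name and location of the session’s trace file, including the TRACEFILE_IDENTIFIER value, may be found with a query similar to this:

SELECT
  VALUE
FROM
  V$DIAG_INFO
WHERE
  NAME='Default Trace File';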

Taking a look in the trace file:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "SYS_ALIAS_1"."C1" "C1",1 "1" FROM "TESTUSER"."T1" "SYS_ALIAS_1" WHERE  EXISTS (SELECT 0 FROM "TESTUSER"."T2" "T2" WHERE "T2"."C1"="SYS_ALIAS_1"."C1")
...
sql=SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1,
  1
FROM
  T1
WHERE
  T1.C1 IN (
    SELECT
      T2.C1
    FROM
      T2)
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id  | Operation           | Name    | Rows  | Bytes | Cost  | Time      |
--------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT    |         |       |       |   103 |           |
| 1   |  FILTER             |         |       |       |       |           |
| 2   |   TABLE ACCESS FULL | T1      |   100 |   300 |     3 |  00:00:01 |
| 3   |   TABLE ACCESS FULL | T2      |     2 |     6 |     2 |  00:00:01 |
--------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter( IS NOT NULL)
3 - filter("T2"."C1"=:B1)

----------------------------------------------------------------------------------

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "SYS_ALIAS_1"."C1" "C1",1 "1" FROM "TESTUSER"."T1" "SYS_ALIAS_1" WHERE  EXISTS (SELECT 0 FROM "TESTUSER"."T2" "T2" WHERE "T2"."C1"="SYS_ALIAS_1"."C1")
...
sql=SELECT /*+ NO_QUERY_TRANSFORMATION */
  T1.C1,
  1
FROM
  T1
WHERE
  T1.C1 = ANY (
    SELECT
      T2.C1
    FROM
      T2)
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id  | Operation           | Name    | Rows  | Bytes | Cost  | Time      |
--------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT    |         |       |       |   103 |           |
| 1   |  FILTER             |         |       |       |       |           |
| 2   |   TABLE ACCESS FULL | T1      |   100 |   300 |     3 |  00:00:01 |
| 3   |   TABLE ACCESS FULL | T2      |     2 |     6 |     2 |  00:00:01 |
--------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter( IS NOT NULL)
3 - filter("T2"."C1"=:B1) 

From the above, we are able to see that despite the NO_QUERY_TRANSFORMATION hint, both queries were rewritten identically to use an EXISTS clause.  Maybe the NO_QUERY_TRANSFORMATION hint was ignored?  From the 10053 trace file:

End of Optimizer State Dump
Dumping Hints
=============
  atom_hint=(@=000007FFA41C2E00 err=0 resol=0 used=1 token=986 org=1 lvl=1 txt=NO_QUERY_TRANSFORMATION ())
====================== END SQL Statement Dump ====================== 

The above shows that the hint did not result in an error, and was used.  Well, maybe we should just say that the NO_QUERY_TRANSFORMATION hint was almost ignored.
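As a closing aside, based on the unparsed query shown in the trace file, the EXISTS form that the optimizer settled on could have been written by hand something like this:

SELECT
  T1.C1,
  1
FROM
  T1
WHERE
  EXISTS (
    SELECT
      0
    FROM
      T2
    WHERE
      T2.C1=T1.C1);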





Analytic Functions – What is Wrong with this Statement?

1 01 2011

January 1, 2011

I was a bit excited to see the chapter discussing analytic functions in the book “Pro Oracle SQL”, which has rather extensive coverage of most of Oracle Database’s analytic functions (something that I have not seen in other SQL books).  That chapter is very well assembled, with easier-to-understand descriptions of the various functions than those found in the Oracle documentation.

However, there is one particular statement in the chapter that made me stop and think for a moment, and then ask “What, if anything, is wrong with this quote” from page 227 of the book:

“The default windowing clause [of analytic functions that support the windowing clause] is rows between unbounded preceding and current row. If you do not specify a window, you’ll get the default window. It is a good approach to specify this clause explicitly to avoid ambiguities.”

Before you answer, check the Oracle Database 11.2 documentation:

 “If you omit the windowing_clause entirely, then the default is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.”

At first glance, the Oracle Database documentation appears to be stating essentially the same thing as the book.

What, if anything, is wrong with this quote (or the quote from the documentation)?
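If you would like to experiment before answering, one approach is to compare the default window with explicitly specified ROWS and RANGE windows over data that contains duplicate values in the ORDER BY column.  A minimal sketch (the inline data set and column name are made up for illustration):

SELECT
  C1,
  SUM(C1) OVER (ORDER BY C1) DEFAULT_WINDOW,
  SUM(C1) OVER (ORDER BY C1
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) ROWS_WINDOW,
  SUM(C1) OVER (ORDER BY C1
    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) RANGE_WINDOW
FROM
  (SELECT
     MOD(ROWNUM,3) C1
   FROM
     DUAL
   CONNECT BY
     LEVEL<=9);

Comparing the three computed columns should make the answer easier to see.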





ANSI Full Outer Join, Ready or Not?

30 12 2010

December 30, 2010 (Modified January 1, 2011)

When I read pages 101-103 of the book “Pro Oracle SQL” a couple of days ago, I was reminded of a couple of things.  This section of the book describes full outer joins, showing the ANSI syntax and the Oracle syntax to perform a full outer join.  If you read my earlier blog article on the topic of ANSI SQL, you probably saw that there are a couple of bug reports listed in Metalink (MOS) that describe problems with ANSI joins, including a couple that are specific to Oracle Database versions below 11.2.0.2.  As far as I can tell (at least with Oracle Database 11.2.0.1), ANSI full outer joins are the only ANSI formatted join syntax that is not converted into Oracle specific join syntax.  The second thing that it reminded me of is a search for the phrase “pig outer join” that seems to hit my blog at least once a week.  There is one article on my blog that matches that description, and that article includes the following example of a full outer join using Oracle syntax:

SELECT
  T1.ANIMAL,
  T2.ANIMAL
FROM
  TABLE_1 T1,
  TABLE_2 T2
WHERE
  T1.ANIMAL(+)=T2.ANIMAL
UNION
SELECT
  T1.ANIMAL,
  T2.ANIMAL
FROM
  TABLE_1 T1,
  TABLE_2 T2
WHERE
  T1.ANIMAL=T2.ANIMAL(+);

T1.ANIMAL         T2.ANIMAL
COW               COW
PIG               PIG
(null)            DOG
ZEBRA             (null)
SHARK             (null)
ROOSTER           (null)
LION              (null) 

The above example works well as long as multiple rows do not have the same value in the ANIMAL column, because the UNION operator removes duplicate rows and would therefore collapse a genuinely duplicated matched row into a single row in the result.  The example on page 102 of the book looks similar to the one above, just without all of the common farm animals.  :-)
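To see why duplicate values are a problem, consider the following self-contained sketch (the D1 and D2 inline data sets are made up for illustration), in which the animal PIG appears twice in the first data set.  The UNION version collapses the duplicated PIG/PIG match into a single row, while the ANSI full outer join preserves both rows:

WITH
  D1 AS
   (SELECT 'PIG' ANIMAL FROM DUAL UNION ALL
    SELECT 'PIG' FROM DUAL UNION ALL
    SELECT 'ZEBRA' FROM DUAL),
  D2 AS
   (SELECT 'PIG' ANIMAL FROM DUAL UNION ALL
    SELECT 'DOG' FROM DUAL)
SELECT
  D1.ANIMAL D1_ANIMAL,
  D2.ANIMAL D2_ANIMAL
FROM
  D1,
  D2
WHERE
  D1.ANIMAL(+)=D2.ANIMAL
UNION
SELECT
  D1.ANIMAL,
  D2.ANIMAL
FROM
  D1,
  D2
WHERE
  D1.ANIMAL=D2.ANIMAL(+);

WITH
  D1 AS
   (SELECT 'PIG' ANIMAL FROM DUAL UNION ALL
    SELECT 'PIG' FROM DUAL UNION ALL
    SELECT 'ZEBRA' FROM DUAL),
  D2 AS
   (SELECT 'PIG' ANIMAL FROM DUAL UNION ALL
    SELECT 'DOG' FROM DUAL)
SELECT
  D1.ANIMAL D1_ANIMAL,
  D2.ANIMAL D2_ANIMAL
FROM
  D1 FULL OUTER JOIN D2
    ON D1.ANIMAL=D2.ANIMAL;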

But there was one other item that caught my eye.  I could not reproduce the execution plan found in the book using the ANSI full outer join syntax on Oracle Database 11.2.0.1.  Interesting… was the wrong execution plan copied into the book, or is there another explanation?  Let’s put together a test script to help see what is happening:

ALTER SYSTEM SET PGA_AGGREGATE_TARGET=2000M SCOPE=MEMORY;

CREATE TABLE
  T1
AS
SELECT
  ROWNUM C1,
  LPAD(TO_CHAR(ROWNUM),10,'0') C2,
  LPAD('A',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

CREATE TABLE
  T2
AS
SELECT
  ROWNUM C1,
  LPAD(TO_CHAR(ROWNUM),10,'0') C2,
  LPAD('A',400,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

COMMIT;

CREATE INDEX IND_T1_C1 ON T1(C1);
CREATE INDEX IND_T2_C1 ON T2(C1);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL) 

Wait 60 seconds, just to make certain that the 2,000M PGA_AGGREGATE_TARGET has had a chance to take effect, then continue with the script.  We will write the full outer join SQL statement four different ways, with the third method using the UNION clause as it was used in the book:

SET AUTOTRACE TRACEONLY STATISTICS
SET ARRAYSIZE 1000
ALTER SESSION SET STATISTICS_LEVEL='ALL';

SELECT /* FIND_ME 1 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.C1 IS NULL;

SELECT /* FIND_ME 2 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.ROWID IS NULL;

SELECT /* FIND_ME 3 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1;

SELECT /* FIND_ME 4 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1;

SET AUTOTRACE OFF
SET LINESIZE 170
SET PAGESIZE 2000
SET TRIMSPOOL ON
SPOOL TEST_OUTER_PLANS.TXT

SELECT /*+ LEADING(S) */
  T.PLAN_TABLE_OUTPUT
FROM
  (SELECT
    SQL_ID,
    CHILD_NUMBER
  FROM
    V$SQL
  WHERE
    SQL_TEXT LIKE 'SELECT /* FIND_ME%') S,
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(S.SQL_ID,S.CHILD_NUMBER,'ALLSTATS LAST +COST +BYTES')) T;

SPOOL OFF 

When the above script is executed, the execution plans for the first half of the test script will be written to a file named TEST_OUTER_PLANS.TXT.  Now on to the second half of the script, where duplicate rows are introduced into table T1, some C1 values are set to NULL, and other C1 values are modified:

INSERT INTO
  T1
SELECT
  ROWNUM C1,
  LPAD(TO_CHAR(ROWNUM),10,'0') C2,
  LPAD('A',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=50000;

UPDATE
  T1
SET
  C1=NULL
WHERE
  C1 BETWEEN 90000 AND 100000;

UPDATE
  T1
SET
  C1=C1+200000
WHERE
  C1 BETWEEN 1 AND 1000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL,NO_INVALIDATE=>FALSE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL,NO_INVALIDATE=>FALSE)

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH SHARED_POOL;

SET AUTOTRACE TRACEONLY STATISTICS

SELECT /* FIND_ME 1 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.C1 IS NULL;

SELECT /* FIND_ME 2 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.ROWID IS NULL;

SELECT /* FIND_ME 3 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION
SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1;

SELECT /* FIND_ME 4 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1;

SET AUTOTRACE OFF
SET LINESIZE 170
SET PAGESIZE 2000
SET TRIMSPOOL ON
SPOOL TEST_OUTER_PLANS2.TXT

SELECT /*+ LEADING(S) */
  T.PLAN_TABLE_OUTPUT
FROM
  (SELECT
    SQL_ID,
    CHILD_NUMBER
  FROM
    V$SQL
  WHERE
    SQL_TEXT LIKE 'SELECT /* FIND_ME%') S,
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(S.SQL_ID,S.CHILD_NUMBER,'ALLSTATS LAST +COST +BYTES')) T;

SPOOL OFF 

The execution plans for the second half of the script are written to a file named TEST_OUTER_PLANS2.TXT.  So, what does the output of the plans look like on Oracle Database 10.2.0.5 Standard Edition?  The execution plans for the first half of the script:

SQL_ID  cfjjpmdc0ghsx, child number 0
-------------------------------------
SELECT /* FIND_ME 1 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2
FROM   T1,   T2 WHERE   T1.C1(+)=T2.C1   AND T1.C1 IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |   998 (100)|    100K|00:00:00.13 |   12483 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    100K|00:00:00.13 |   12483 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|   273   (2)|    100K|00:00:00.04 |    3460 |  3748K|  1935K| 5608K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.01 |    3231 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |      0 |00:00:00.10 |    9023 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |      1 |    21 |   725   (1)|    100K|00:00:00.04 |    9023 |  3748K|  1935K| 5609K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1"."C1" IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  1qtvpf3mnb3nx, child number 0
-------------------------------------
SELECT /* FIND_ME 2 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2
FROM   T1,   T2 WHERE   T1.C1(+)=T2.C1   AND T1.ROWID IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |   998 (100)|    100K|00:00:00.13 |   12483 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    100K|00:00:00.13 |   12483 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|   273   (2)|    100K|00:00:00.04 |    3460 |  3748K|  1935K| 5608K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.01 |    3231 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |      0 |00:00:00.10 |    9023 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |    100K|  3222K|   725   (1)|    100K|00:00:00.04 |    9023 |  3748K|  1935K| 5609K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    100K|  2734K|   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1".ROWID IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  4v4r3dau0barp, child number 0
-------------------------------------
SELECT /* FIND_ME 3 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2
WHERE   T1.C1(+)=T2.C1

Plan hash value: 2719845061

-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |       |  2078 (100)|    100K|00:00:00.31 |   12383 |       |       |          |
|   1 |  SORT UNIQUE            |           |      1 |    200K|  4101K|  5913K|  2078  (62)|    100K|00:00:00.31 |   12383 |  5510K|  1269K| 4897K (0)|
|   2 |   UNION-ALL             |           |      1 |        |       |       |            |    200K|00:00:00.04 |   12383 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|       |   273   (2)|    100K|00:00:00.04 |    3360 |  3748K|  1935K| 5609K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|       |    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|       |   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
|*  6 |    HASH JOIN OUTER      |           |      1 |    100K|  2050K|       |   725   (1)|    100K|00:00:00.04 |    9023 |  3748K|  1935K| 5609K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|       |   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|       |   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  gs96s7s3ff9sp, child number 0
-------------------------------------
SELECT /* FIND_ME 4 */   T1.C1,   T2.C1,   T1.C2 FROM   T1 FULL OUTER JOIN T2      ON T1.C1=T2.C1

Plan hash value: 2431666783

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |   766 (100)|    100K|00:00:00.11 |    9581 |       |       |          |
|   1 |  VIEW                   |           |      1 |    100K|  3222K|   766   (2)|    100K|00:00:00.11 |    9581 |       |       |          |
|   2 |   UNION-ALL             |           |      1 |        |       |            |    100K|00:00:00.11 |    9581 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|   273   (2)|    100K|00:00:00.04 |    3460 |  3748K|  1935K| 5608K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.01 |    3231 |       |       |          |
|*  6 |    HASH JOIN ANTI       |           |      1 |      1 |    10 |   493   (2)|      0 |00:00:00.08 |    6121 |  3748K|  1935K| 5546K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     INDEX FAST FULL SCAN| IND_T1_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1") 

Notice in the above that the ANSI full outer join syntax SQL statement completed faster than the other methods, almost 3 times faster than the method using UNION, but only slightly faster than the method using UNION ALL.  Take a close look at the Predicate Information section of the execution plan for the second SQL statement – notice that there is no filter predicate on plan line 8 (keep an eye on the plan for the second SQL statement later).  The execution plan for the ANSI full outer join is very similar to the one shown in the book, which presumably was generated with tables T1 and T2 that did not have indexes.

The execution plans for the second half of the script on Oracle Database 10.2.0.5 Standard Edition:

SQL_ID  cfjjpmdc0ghsx, child number 0
-------------------------------------
SELECT /* FIND_ME 1 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2
FROM   T1,   T2 WHERE   T1.C1(+)=T2.C1   AND T1.C1 IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |  1261 (100)|    161K|00:00:00.20 |   15757 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    161K|00:00:00.20 |   15757 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|   404   (2)|    150K|00:00:00.04 |    5122 |  3748K|  1935K| 5610K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.01 |    4893 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |  11001 |00:00:00.12 |   10635 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |   9334 |   191K|   856   (1)|    149K|00:00:00.04 |   10635 |  3748K|  1935K| 5611K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.01 |    4743 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1"."C1" IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  1qtvpf3mnb3nx, child number 0
-------------------------------------
SELECT /* FIND_ME 2 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2
FROM   T1,   T2 WHERE   T1.C1(+)=T2.C1   AND T1.ROWID IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |  1261 (100)|    161K|00:00:00.20 |   15757 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    161K|00:00:00.20 |   15757 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|   404   (2)|    150K|00:00:00.04 |    5122 |  3748K|  1935K| 5610K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.01 |    4893 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |  11001 |00:00:00.12 |   10635 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |    139K|  4511K|   857   (1)|    149K|00:00:00.04 |   10635 |  3748K|  1935K| 5611K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|*  8 |     TABLE ACCESS FULL  | T1        |      1 |    139K|  3828K|   383   (1)|    139K|00:00:00.01 |    4743 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1".ROWID IS NULL)
   6 - access("T1"."C1"="T2"."C1")
   8 - filter("T1"."C1" IS NOT NULL)

-

SQL_ID  4v4r3dau0barp, child number 0
-------------------------------------
SELECT /* FIND_ME 3 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1(+) UNION SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2
WHERE   T1.C1(+)=T2.C1

Plan hash value: 2719845061

-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |       |  2823 (100)|    111K|00:00:00.49 |   15607 |       |       |          |
|   1 |  SORT UNIQUE            |           |      1 |    289K|  5947K|  8544K|  2823  (58)|    111K|00:00:00.49 |   15607 |  6998K|  1589K| 6220K (0)|
|   2 |   UNION-ALL             |           |      1 |        |       |       |            |    299K|00:00:00.04 |   15607 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|       |   404   (2)|    150K|00:00:00.04 |    4972 |  3748K|  1935K| 5611K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|       |    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|       |   383   (1)|    150K|00:00:00.01 |    4743 |       |       |          |
|*  6 |    HASH JOIN OUTER      |           |      1 |    139K|  2871K|       |   857   (1)|    149K|00:00:00.04 |   10635 |  3748K|  1935K| 5580K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|       |   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|*  8 |     TABLE ACCESS FULL   | T1        |      1 |    139K|  2187K|       |   383   (1)|    139K|00:00:00.01 |    4743 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1")
   8 - filter("T1"."C1" IS NOT NULL)

-

SQL_ID  gs96s7s3ff9sp, child number 0
-------------------------------------
SELECT /* FIND_ME 4 */   T1.C1,   T2.C1,   T1.C2 FROM   T1 FULL OUTER JOIN T2      ON T1.C1=T2.C1

Plan hash value: 2431666783

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |   905 (100)|    161K|00:00:00.20 |   11402 |       |       |          |
|   1 |  VIEW                   |           |      1 |    160K|  5156K|   905   (2)|    161K|00:00:00.20 |   11402 |       |       |          |
|   2 |   UNION-ALL             |           |      1 |        |       |            |    161K|00:00:00.20 |   11402 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|   404   (2)|    150K|00:00:00.04 |    5122 |  3748K|  1935K| 5610K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     229 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.01 |    4893 |       |       |          |
|*  6 |    HASH JOIN ANTI       |           |      1 |  10001 |    97K|   501   (2)|  11001 |00:00:00.09 |    6280 |  3748K|  1935K| 5547K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|*  8 |     INDEX FAST FULL SCAN| IND_T1_C1 |      1 |    139K|   683K|    27   (4)|    139K|00:00:00.01 |     388 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1")
   8 - filter("T1"."C1" IS NOT NULL) 

So, why did I see a different execution plan on Oracle Database 11.2.0.1?  Let’s take a look at a 10053 trace file from 10.2.0.5:

SET AUTOTRACE TRACEONLY STATISTICS
ALTER SESSION SET TRACEFILE_IDENTIFIER='ANSI_TRANSFORM_FULL';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /* FIND_ME 5 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF'; 

Inside the trace file I found this (slightly reformatted to improve readability):

******* UNPARSED QUERY IS *******
SELECT
  "from$_subquery$_003"."C1" "C1",
  "from$_subquery$_003"."C1" "C1",
  "from$_subquery$_003"."C2" "C2"
FROM
  ((SELECT
      "T1"."C1" "QCSJ_C000000000300000",
      "T1"."C2" "QCSJ_C000000000300002",
      "T2"."C1" "C1",
      "T2"."C2" "C2"
    FROM
      "TESTUSER"."T1" "T1",
      "TESTUSER"."T2" "T2"
    WHERE
      "T1"."C1"="T2"."C1"(+))
  UNION ALL
   (SELECT
      NULL,
      NULL,
      "T2"."C1" "C1",
      "T2"."C2" "C2"
    FROM
      "TESTUSER"."T1" "T1",
      "TESTUSER"."T2" "T2"
    WHERE
      "T1"."C1"="T2"."C1"
      AND "T1"."C1" IS NOT NULL)) "from$_subquery$_003"

kkoqbc-subheap (delete addr=000000001602C530, in-use=11064, alloc=21776)
kkoqbc-end
          : call(in-use=83648, alloc=131128), compile(in-use=123584, alloc=166568)
apadrv-end: call(in-use=83648, alloc=131128), compile(in-use=125480, alloc=166568)

sql_id=7jhmvz6nutvpv.
Current SQL statement for this session:
SELECT /* FIND_ME 5 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1

============
Plan Table
============
--------------------------------------------+-----------------------------------+
| Id  | Operation                | Name     | Rows  | Bytes | Cost  | Time      |
--------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT         |          |       |       |   905 |           |
| 1   |  VIEW                    |          |  156K | 5156K |   905 |  00:00:08 |
| 2   |   UNION-ALL              |          |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER |          |  146K | 3076K |   404 |  00:00:04 |
| 4   |     INDEX FAST FULL SCAN | IND_T2_C1|   98K |  488K |    20 |  00:00:01 |
| 5   |     TABLE ACCESS FULL    | T1       |  146K | 2344K |   383 |  00:00:04 |
| 6   |    HASH JOIN ANTI        |          |   10K |   98K |   501 |  00:00:05 |
| 7   |     TABLE ACCESS FULL    | T2       |   98K |  488K |   472 |  00:00:04 |
| 8   |     INDEX FAST FULL SCAN | IND_T1_C1|  137K |  684K |    27 |  00:00:01 |
--------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - access("T1"."C1"="T2"."C1")
6 - access("T1"."C1"="T2"."C1")
8 - filter("T1"."C1" IS NOT NULL) 

I guess that query transformation explains the execution plan – it looks like Oracle Database 10.2.0.5 will transform an ANSI full outer join into Oracle specific syntax.  Let’s repeat the test on Oracle Database 11.2.0.1 to see what happens:

SQL_ID  cfjjpmdc0ghsx, child number 0
-------------------------------------
SELECT /* FIND_ME 1 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,
T2 WHERE   T1.C1(+)=T2.C1   AND T1.C1 IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |   998 (100)|    100K|00:00:00.21 |   12484 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    100K|00:00:00.21 |   12484 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|   273   (2)|    100K|00:00:00.09 |    3461 |  3749K|  1936K| 5799K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.02 |    3231 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |      0 |00:00:00.09 |    9023 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |      1 |    21 |   725   (1)|    100K|00:00:00.09 |    9023 |  3749K|  1936K| 5800K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1"."C1" IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  1qtvpf3mnb3nx, child number 0
-------------------------------------
SELECT /* FIND_ME 2 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,
T2 WHERE   T1.C1(+)=T2.C1   AND T1.ROWID IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |   998 (100)|    100K|00:00:00.21 |   12484 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    100K|00:00:00.21 |   12484 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|   273   (2)|    100K|00:00:00.09 |    3461 |  3749K|  1936K| 5799K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.02 |    3231 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |      0 |00:00:00.09 |    9023 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |    100K|  3222K|   725   (1)|    100K|00:00:00.09 |    9023 |  3749K|  1936K| 5800K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    100K|  2734K|   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1".ROWID IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  4v4r3dau0barp, child number 0
-------------------------------------
SELECT /* FIND_ME 3 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2
WHERE   T1.C1(+)=T2.C1

Plan hash value: 2719845061

-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |       |  2078 (100)|    100K|00:00:00.30 |   12384 |       |       |          |
|   1 |  SORT UNIQUE            |           |      1 |    200K|  4101K|  5913K|  2078  (62)|    100K|00:00:00.30 |   12384 |  6360K|  1340K| 5653K (0)|
|   2 |   UNION-ALL             |           |      1 |        |       |       |            |    200K|00:00:00.21 |   12384 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    100K|  2050K|       |   273   (2)|    100K|00:00:00.08 |    3361 |  3749K|  1936K| 5800K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|       |    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|       |   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
|*  6 |    HASH JOIN OUTER      |           |      1 |    100K|  2050K|       |   725   (1)|    100K|00:00:00.09 |    9023 |  3749K|  1936K| 5743K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|       |   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL   | T1        |      1 |    100K|  1562K|       |   252   (1)|    100K|00:00:00.01 |    3131 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  gs96s7s3ff9sp, child number 0
-------------------------------------
SELECT /* FIND_ME 4 */   T1.C1,   T2.C1,   T1.C2 FROM   T1 FULL OUTER
JOIN T2      ON T1.C1=T2.C1

Plan hash value: 3807180574

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |      1 |        |       |   725 (100)|    100K|00:00:00.11 |    9123 |       |       |          |
|   1 |  VIEW                 | VW_FOJ_0 |      1 |    100K|  3222K|   725   (1)|    100K|00:00:00.11 |    9123 |       |       |          |
|*  2 |   HASH JOIN FULL OUTER|          |      1 |    100K|  2050K|   725   (1)|    100K|00:00:00.10 |    9123 |  3749K|  1936K| 5799K (0)|
|   3 |    TABLE ACCESS FULL  | T2       |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   4 |    TABLE ACCESS FULL  | T1       |      1 |    100K|  1562K|   252   (1)|    100K|00:00:00.02 |    3231 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1") 

Notice the difference in the ANSI full outer join execution plan.  Now the second half of the output:

SQL_ID  cfjjpmdc0ghsx, child number 0
-------------------------------------
SELECT /* FIND_ME 1 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,
T2 WHERE   T1.C1(+)=T2.C1   AND T1.C1 IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |  1261 (100)|    161K|00:00:00.28 |   15732 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    161K|00:00:00.28 |   15732 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|   404   (2)|    150K|00:00:00.12 |    5110 |  3749K|  1936K| 5671K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.03 |    4880 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |  11001 |00:00:00.12 |   10622 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |   9334 |   191K|   856   (1)|    149K|00:00:00.11 |   10622 |  3749K|  1936K| 5672K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   8 |     TABLE ACCESS FULL  | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.02 |    4730 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1"."C1" IS NULL)
   6 - access("T1"."C1"="T2"."C1")

-

SQL_ID  1qtvpf3mnb3nx, child number 0
-------------------------------------
SELECT /* FIND_ME 2 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION ALL SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,
T2 WHERE   T1.C1(+)=T2.C1   AND T1.ROWID IS NULL

Plan hash value: 2463665226

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |      1 |        |       |  1261 (100)|    161K|00:00:00.27 |   15732 |       |       |          |
|   1 |  UNION-ALL             |           |      1 |        |       |            |    161K|00:00:00.27 |   15732 |       |       |          |
|*  2 |   HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|   404   (2)|    150K|00:00:00.12 |    5110 |  3749K|  1936K| 5671K (0)|
|   3 |    INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   4 |    TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.02 |    4880 |       |       |          |
|*  5 |   FILTER               |           |      1 |        |       |            |  11001 |00:00:00.12 |   10622 |       |       |          |
|*  6 |    HASH JOIN OUTER     |           |      1 |    139K|  4511K|   857   (1)|    149K|00:00:00.11 |   10622 |  3749K|  1936K| 5735K (0)|
|   7 |     TABLE ACCESS FULL  | T2        |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|*  8 |     TABLE ACCESS FULL  | T1        |      1 |    139K|  3828K|   383   (1)|    139K|00:00:00.02 |    4730 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1")
   5 - filter("T1".ROWID IS NULL)
   6 - access("T1"."C1"="T2"."C1")
   8 - filter("T1"."C1" IS NOT NULL)

-

SQL_ID  4v4r3dau0barp, child number 0
-------------------------------------
SELECT /* FIND_ME 3 */   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE
 T1.C1=T2.C1(+) UNION SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2
WHERE   T1.C1(+)=T2.C1

Plan hash value: 2719845061

-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name      | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |           |      1 |        |       |       |  2823 (100)|    111K|00:00:00.47 |   15582 |       |       |          |
|   1 |  SORT UNIQUE            |           |      1 |    289K|  5947K|  8544K|  2823  (58)|    111K|00:00:00.47 |   15582 |  7990K|  1681K| 7102K (0)|
|   2 |   UNION-ALL             |           |      1 |        |       |       |            |    299K|00:00:00.28 |   15582 |       |       |          |
|*  3 |    HASH JOIN RIGHT OUTER|           |      1 |    150K|  3076K|       |   404   (2)|    150K|00:00:00.11 |    4960 |  3749K|  1936K| 5671K (0)|
|   4 |     INDEX FAST FULL SCAN| IND_T2_C1 |      1 |    100K|   488K|       |    20   (5)|    100K|00:00:00.01 |     230 |       |       |          |
|   5 |     TABLE ACCESS FULL   | T1        |      1 |    150K|  2343K|       |   383   (1)|    150K|00:00:00.02 |    4730 |       |       |          |
|*  6 |    HASH JOIN OUTER      |           |      1 |    139K|  2871K|       |   857   (1)|    149K|00:00:00.11 |   10622 |  3749K|  1936K| 5640K (0)|
|   7 |     TABLE ACCESS FULL   | T2        |      1 |    100K|   488K|       |   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|*  8 |     TABLE ACCESS FULL   | T1        |      1 |    139K|  2187K|       |   383   (1)|    139K|00:00:00.02 |    4730 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."C1"="T2"."C1")
   6 - access("T1"."C1"="T2"."C1")
   8 - filter("T1"."C1" IS NOT NULL)

-

SQL_ID  gs96s7s3ff9sp, child number 0
-------------------------------------
SELECT /* FIND_ME 4 */   T1.C1,   T2.C1,   T1.C2 FROM   T1 FULL OUTER
JOIN T2      ON T1.C1=T2.C1

Plan hash value: 3807180574

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |      1 |        |       |   856 (100)|    161K|00:00:00.14 |   10772 |       |       |          |
|   1 |  VIEW                 | VW_FOJ_0 |      1 |    150K|  4833K|   856   (1)|    161K|00:00:00.14 |   10772 |       |       |          |
|*  2 |   HASH JOIN FULL OUTER|          |      1 |    150K|  3076K|   856   (1)|    161K|00:00:00.13 |   10772 |  3749K|  1936K| 5671K (0)|
|   3 |    TABLE ACCESS FULL  | T2       |      1 |    100K|   488K|   472   (1)|    100K|00:00:00.01 |    5892 |       |       |          |
|   4 |    TABLE ACCESS FULL  | T1       |      1 |    150K|  2343K|   383   (1)|    150K|00:00:00.02 |    4880 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."C1"="T2"."C1") 

Again, notice that the execution plan for the ANSI full outer join is different.  Let’s take a look at a 10053 trace for the SQL statement:

SET AUTOTRACE TRACEONLY STATISTICS
ALTER SESSION SET TRACEFILE_IDENTIFIER='ANSI_TRANSFORM_FULL';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /* FIND_ME 5 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF'; 

Let’s take a look in the 10053 trace file:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT
  "VW_FOJ_0"."QCSJ_C000000000300000_0" "C1",
  "VW_FOJ_0"."QCSJ_C000000000300001_2" "C1",
  "VW_FOJ_0"."QCSJ_C000000000300002_1" "C2"
FROM
  (SELECT
     "T1"."C1" "QCSJ_C000000000300000_0",
     "T1"."C2" "QCSJ_C000000000300002_1",
     "T2"."C1" "QCSJ_C000000000300001_2"
   FROM
     "TESTUSER"."T2" "T2" FULL OUTER JOIN "TESTUSER"."T1" "T1"
         ON "T1"."C1"="T2"."C1") "VW_FOJ_0"
...
Starting SQL statement dump

user_id=194 user_name=TESTUSER module=SQL*Plus action=
sql_id=7jhmvz6nutvpv plan_hash_value=-487786722 problem_type=3
----- Current SQL Statement for this session (sql_id=7jhmvz6nutvpv) -----
SELECT /* FIND_ME 5 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1
sql_text_length=98
sql=SELECT /* FIND_ME 5 */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-----------------------------------------+-----------------------------------+
| Id  | Operation              | Name    | Rows  | Bytes | Cost  | Time      |
-----------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT       |         |       |       |   856 |           |
| 1   |  VIEW                  | VW_FOJ_0|  146K | 4834K |   856 |  00:00:07 |
| 2   |   HASH JOIN FULL OUTER |         |  146K | 3076K |   856 |  00:00:07 |
| 3   |    TABLE ACCESS FULL   | T2      |   98K |  488K |   472 |  00:00:04 |
| 4   |    TABLE ACCESS FULL   | T1      |  146K | 2344K |   383 |  00:00:04 |
-----------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("T1"."C1"="T2"."C1") 

I guess that explains why the execution plan looks a bit different from the version produced by 10.2.0.5.
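For comparison, it may be interesting to see what plan shape appears when the native full outer join implementation is not used.  A minimal sketch follows (assuming the NO_NATIVE_FULL_OUTER_JOIN hint is available in the release being tested; with that hint the optimizer should fall back to the older UNION ALL style transformation):

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT /*+ NO_NATIVE_FULL_OUTER_JOIN */
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1 FULL OUTER JOIN T2
     ON T1.C1=T2.C1;

SET AUTOTRACE OFF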

Anyone care to try this test case on 9i R1, 9i R2, 10g R1, or 11.2.0.2?  I am curious to know if we will trip over one of those ANSI bugs that are building a fort in my front lawn (bang head on desk until this makes sense… ants, see! :-) ).

——————–

Edit: December 30, 2010: Speaking of bugs, a quick look through Metalink turns up the following:

  • Doc ID 9395765.8, Bug 9395765 “ORA-907 / wrong plan from query with ANSI FULL OUTER join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 8432019.8, Bug 8432019 “Wrong Results With Full Outer Join, Database Link and Merged Inline View”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 8538448.8, Bug 8538448 “OERI [qcsFixQbcnlo2] from FULL OUTER JOIN with UNION ALL”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 8947490.8, Bug 8947490 “OERI[15160] using SQL tuning advisor for full outer joined UNION”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9337832.8, Bug 9337832 “Wrong results from ANSI outer joins”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9882805.8, Bug 9882805 “Wrong results with NATIVE FULL OUTER JOIN and NVL”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 7144891.8, Bug 7144891 “OERI [qctopn1] from nested FULL OUTER JOIN”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 6471020.8, Bug 6471020 “ORA-1790 or ORA-904 from FULL JOIN”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 8782089.8, Bug 8782089 “OERI [15160] on ANSI join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9050716.8, Bug 9050716 “Dumps on kkqstcrf with ANSI joins and Join Elimination”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9149005.8, Bug 9149005 “ANSI left join query with domain index fails with OERI [qxopq_get_rowid:1]”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9256994.8, Bug 9256994 “ORA-1792 using ANSI joins”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9360157.8, Bug 9360157 “Wrong results using ORA_ROWSCN pseudocolumn with ANSI join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9371323.8, Bug 9371323 “OERI [rwoirw: check ret val] on INSERT ALL with ANSI join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9387924.8, Bug 9387924 “OERI [qxopq_get_rowid:1] from ANSI join and operator”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9713012.8, Bug 9713012 “Wrong results from push predicate on ANSI SQL”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9189070.8, Bug 9189070 “Dump (kkoiqb) parsing ANSI SQL with disjunctive predicate in “ON” clause”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 4459936.8, Bug 4459936 “FGA does not work correctly with ANSI outer joins”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 4871892.8, Bug 4871892 “Wrong results from “left outer join” with TABLE() expressions”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 5183871.8, Bug 5183871 “ORA-904 using ORA_ROWSCN in ANSI SQL”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7429090.8, Bug 7429090 “Poor plan generated for a OUTER join query involving a LATERAL view”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7442232.8, Bug 7442232 “WRONG RESULTS (missing rows) WITH NATIVE FULL OUTER JOIN”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7001990.8, Bug 7001990 “Wrong result (null column) on ansi outer join query and index access”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 8547175.8, Bug 8547175 “FGA does not work correctly with ANSI joins”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7420504.8, Bug 7420504 “Wrong Results with ANSI queries and fix for bug 7345484”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7375360.8, Bug 7375360 “Repeat execution of uncorrelated subquery in ANSI join”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7345484.8, Bug 7345484 “Left outer join with filters are slow”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 7318276.8, Bug 7318276 “False ORA-918 with fix for bug 5368296 installed”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 5368296.8, Bug 5368296 “ANSI join SQL may not report ORA-918 for ambiguous column”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 5358967.8, Bug 5358967 “ORA-2019 on CREATE VIEW after CURRENT_SCHEMA is changed”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 5031632.8, Bug 5031632 “Wrong results from NATURAL JOIN”, affects Oracle Database release versions below 10.2.0.5
  • Doc ID 8971835.8, Bug 8971835 “Assorted Dumps / ORA-932 / Wrong results when accessing over 1000 columns”, affects Oracle Database release versions below 10.2.0.5

I will suggest that the hidden parameter _OPTIMIZER_NATIVE_FULL_OUTER_JOIN likely controls whether or not an ANSI full outer join is transformed into Oracle specific syntax.  The script in this blog article shows that the default value of this parameter is FORCE on Oracle Database 11.2.0.1.
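For anyone who wants to check the value of that hidden parameter in their own database, a minimal sketch follows.  It must be run as SYS because it reads the X$ structures directly, and the parameter name used below is simply the one suggested above (hidden parameters can change or disappear between releases):

SELECT
  N.KSPPINM NAME,
  V.KSPPSTVL VALUE,
  V.KSPPSTDF ISDEFAULT
FROM
  X$KSPPI N,
  X$KSPPCV V
WHERE
  N.INDX=V.INDX
  AND N.KSPPINM='_optimizer_native_full_outer_join';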

—–

Edit: Jan 1, 2011:

One more ANSI bug that affects 11.2.0.1 and below - thanks to Coskan for reporting the bug (and for beta testing the ANSI functionality in the production release of 11.2.0.1 :-) ):

  • Doc ID 9287401.8, Bug 9287401 “Full outer join does not push down predicates”, affects Oracle Database release versions below 11.2.0.2




Hash Joins – What is Wrong with this Statement?

29 12 2010

December 29, 2010

I started reading the book “Pro Oracle SQL” a couple of days ago, and I am having trouble putting it down.  Other than a couple of minor word substitutions and some intended, but unstated, qualifying words, the first 100 pages of the book are fantastic (I suspect that many of these cases are intentional, since the book is not advertised as a SQL performance tuning guide – too much detail might cause confusion).  The description of how SQL processing works and how to understand execution plans is on par with (or maybe even slightly better than) the treatment in the “Troubleshooting Oracle Performance” book (that reminds me, I still need to finish the second read-through of that book).

As I have done with the last four books that I reviewed, I selected an interesting couple of sentences from this book.  What, if anything, is wrong with the following quote from page 94:

“Based on the table and index statistics, the table that is determined to return the fewest rows will be hashed in its entirety into memory. This hash table includes all of the row data for that table and is loaded into hash buckets based on a randomizing function that converts the join key to a hash value. As long as there is enough memory available, this hash table will reside in memory. However, if there is not enough memory available, the hash table may be written to temp disk space.”

Not as easy an assignment as with some of the previously extracted quotes?

How about a hint?  Create two simple tables, and collect the full statistics (do not estimate the statistics) on those tables:

CREATE TABLE
  T1
AS
SELECT
  ROWNUM C1,
  LPAD(TO_CHAR(ROWNUM),10,'0') C2,
  LPAD('A',200,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

CREATE TABLE
  T2
AS
SELECT
  ROWNUM C1,
  LPAD(TO_CHAR(ROWNUM),10,'0') C2,
  LPAD('A',400,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',ESTIMATE_PERCENT=>NULL)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',ESTIMATE_PERCENT=>NULL) 
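Before building the test script, it might help to picture the mechanics that the quote describes.  The following toy PL/SQL sketch mimics the build and probe phases of a hash join against the two tables just created; it is an illustration only (it ignores the hash function, bucket collisions, partitioning, and the possibility of spilling to temp, it arbitrarily builds on T1, it stores every column of the build row just as the quote describes, and it will allocate a noticeable amount of PGA memory for the 100,000 build rows):

SET SERVEROUTPUT ON

DECLARE
  TYPE T_BUILD_ROW IS RECORD (
    C1      NUMBER,
    C2      VARCHAR2(10),
    PADDING VARCHAR2(200));
  TYPE T_HASH_TABLE IS TABLE OF T_BUILD_ROW INDEX BY PLS_INTEGER;
  HASH_TABLE T_HASH_TABLE;
  BUILD_ROW  T_BUILD_ROW;
  MATCHES    PLS_INTEGER := 0;
BEGIN
  /* Build phase: read the build row source once and place each row
     into the in-memory structure, keyed by the join column */
  FOR R IN (SELECT C1, C2, PADDING FROM T1) LOOP
    BUILD_ROW.C1 := R.C1;
    BUILD_ROW.C2 := R.C2;
    BUILD_ROW.PADDING := R.PADDING;
    HASH_TABLE(R.C1) := BUILD_ROW;
  END LOOP;

  /* Probe phase: read the second row source and look up each join key */
  FOR R IN (SELECT C1 FROM T2) LOOP
    IF HASH_TABLE.EXISTS(R.C1) THEN
      MATCHES := MATCHES + 1;
    END IF;
  END LOOP;

  DBMS_OUTPUT.PUT_LINE('MATCHES: ' || MATCHES);
END;
/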

Now we should probably create a test script that uses the tables:

SET AUTOTRACE TRACEONLY EXPLAIN

SELECT
  *
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING,
  T2.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2; 

How did you do?  Did you find anything wrong with the quote from the book, or is it completely correct?

-

-

-

I hear someone shouting, "But the above script did not even execute a SQL statement!"  OK, try this:

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH SHARED_POOL;
SET AUTOTRACE TRACEONLY STATISTICS
ALTER SESSION SET STATISTICS_LEVEL='ALL';
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=2000M SCOPE=MEMORY;
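As a quick sanity check, it might be worth confirming that the new PGA_AGGREGATE_TARGET value was accepted.  A minimal sketch follows; run it from a second session (or before the SET AUTOTRACE command above), because AUTOTRACE TRACEONLY STATISTICS suppresses the query output.  Note that seeing the new value only confirms that the parameter change was accepted, not that the derived work area limits have already been recalculated (which is the reason for the 60 second wait suggested in the next edit note):

SELECT
  NAME,
  VALUE
FROM
  V$PARAMETER
WHERE
  NAME='pga_aggregate_target';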

Edit Dec 29, 2010: Wait 60 seconds before continuing, just to make certain that the changed PGA_AGGREGATE_TARGET has a chance to take effect (to avoid this trap).  Continuing:

SELECT
  *
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING,
  T2.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

SET AUTOTRACE OFF
SET LINESIZE 150
SET PAGESIZE 2000
SET TRIMSPOOL ON
SPOOL TEST_HASH_PLANS.TXT

SELECT /*+ LEADING(S) */
  T.PLAN_TABLE_OUTPUT
FROM
  (SELECT
    SQL_ID,
    CHILD_NUMBER
  FROM
    V$SQL
  WHERE
    SQL_TEXT LIKE 'SELECT%T1.C1=T2.C1%') S,
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(S.SQL_ID,S.CHILD_NUMBER,'ALLSTATS LAST')) T;

SPOOL OFF 
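As a side note, the Used-Mem column in the DBMS_XPLAN output is not the only place to see how much memory each hash join consumed.  A minimal sketch that cross-checks the work area memory through V$SQL_WORKAREA follows, reusing the same SQL_TEXT filter as the spool script above (the column selection is just a suggestion):

SELECT
  W.SQL_ID,
  W.OPERATION_TYPE,
  W.LAST_MEMORY_USED,
  W.ESTIMATED_OPTIMAL_SIZE
FROM
  V$SQL_WORKAREA W
WHERE
  W.SQL_ID IN (
    SELECT
      SQL_ID
    FROM
      V$SQL
    WHERE
      SQL_TEXT LIKE 'SELECT%T1.C1=T2.C1%');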

What, if anything, is wrong with the quote from the book?

-

-

-

-

-

Show you the output that I received?  OK:

SELECT
  *
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|    60M|       |  1493   (1)| 00:00:12 |
|*  1 |  HASH JOIN         |      |   100K|    60M|    21M|  1493   (1)| 00:00:12 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|    20M|       |   248   (1)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T2   |   100K|    39M|       |   464   (1)| 00:00:04 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1")

-

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|   976K|   713   (1)| 00:00:06 |
|*  1 |  HASH JOIN         |      |   100K|   976K|   713   (1)| 00:00:06 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|   488K|   248   (1)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T2   |   100K|   488K|   464   (1)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1")

-

SELECT
  T1.C1,
  T2.C1
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|  3125K|       |   781   (1)| 00:00:07 |
|*  1 |  HASH JOIN         |      |   100K|  3125K|  2736K|   781   (1)| 00:00:07 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|  1562K|       |   248   (1)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T2   |   100K|  1562K|       |   464   (1)| 00:00:04 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

SELECT
  T1.C1,
  T2.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|  3125K|       |   781   (1)| 00:00:07 |
|*  1 |  HASH JOIN         |      |   100K|  3125K|  2736K|   781   (1)| 00:00:07 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|  1562K|       |   248   (1)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T2   |   100K|  1562K|       |   464   (1)| 00:00:04 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|    22M|       |  1018   (1)| 00:00:09 |
|*  1 |  HASH JOIN         |      |   100K|    22M|  2736K|  1018   (1)| 00:00:09 |
|   2 |   TABLE ACCESS FULL| T2   |   100K|  1562K|       |   464   (1)| 00:00:04 |
|   3 |   TABLE ACCESS FULL| T1   |   100K|    20M|       |   248   (1)| 00:00:02 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

SELECT
  T1.C1,
  T2.C1,
  T1.C2,
  T1.PADDING,
  T2.PADDING
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1
  AND T1.C2=T2.C2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   100K|    60M|       |  1493   (1)| 00:00:12 |
|*  1 |  HASH JOIN         |      |   100K|    60M|    21M|  1493   (1)| 00:00:12 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|    20M|       |   248   (1)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T2   |   100K|    39M|       |   464   (1)| 00:00:04 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2") 

The output from the second script:

SQL_ID  c2nzwfydj9nft, child number 0
-------------------------------------
SELECT   * FROM   T1,   T2 WHERE   T1.C1=T2.C1

Plan hash value: 1838229974

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.24 |   15298 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.24 |   15298 |    25M|  3830K|   28M (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |    3131 |       |       |          |
|   3 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.07 |   12167 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1")

-

SQL_ID  1r4up8dp9fc00, child number 0
-------------------------------------
SELECT   T1.C1,   T2.C1 FROM   T1,   T2 WHERE   T1.C1=T2.C1

Plan hash value: 1838229974

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.16 |   15298 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.16 |   15298 |  3749K|  1936K| 5635K (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |    3131 |       |       |          |
|   3 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.05 |   12167 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1")

-

SQL_ID  0rnjh74c5m1dc, child number 0
-------------------------------------
SELECT   T1.C1,   T2.C1 FROM   T1,   T2 WHERE   T1.C1=T2.C1   AND
T1.C2=T2.C2

Plan hash value: 1838229974

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.25 |   15298 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.25 |   15298 |  4670K|  1438K| 6600K (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |    3131 |       |       |          |
|   3 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.09 |   12167 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------
SQL_ID  c691ypk4k096j, child number 0
-------------------------------------
SELECT   T1.C1,   T2.C1,   T1.C2 FROM   T1,   T2 WHERE   T1.C1=T2.C1
AND T1.C2=T2.C2

Plan hash value: 1838229974

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.23 |   15298 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.23 |   15298 |  4670K|  1438K| 6600K (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |    3131 |       |       |          |
|   3 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.08 |   12167 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

SQL_ID  dpq7d224rxc6t, child number 0
-------------------------------------
SELECT   T1.C1,   T2.C1,   T1.C2,   T1.PADDING FROM   T1,   T2 WHERE
T1.C1=T2.C1   AND T1.C2=T2.C2

Plan hash value: 2959412835

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.17 |   15482 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.17 |   15482 |  4670K|  1438K| 6600K (0)|
|   2 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |    5892 |       |       |          |
|   3 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.05 |    9590 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2")

-

SQL_ID  0k4q3ahjzu9nd, child number 0
-------------------------------------
SELECT   T1.C1,   T2.C1,   T1.C2,   T1.PADDING,   T2.PADDING FROM   T1,
  T2 WHERE   T1.C1=T2.C1   AND T1.C2=T2.C2

Plan hash value: 1838229974

----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    100K|00:00:00.24 |   15298 |       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    100K|    100K|00:00:00.24 |   15298 |    25M|  3830K|   28M (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |    3131 |       |       |          |
|   3 |   TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.06 |   12167 |       |       |          |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."C1"="T2"."C1" AND "T1"."C2"="T2"."C2") 

I think that I forgot the question… was it something about the value of test cases?  :-)





Feeling ANSI About Oracle Join Syntax?

26 12 2010

December 26, 2010

(Forward to the Next Post in the Series)

Yesterday I started reading another book on the topic of Oracle SQL (for the moment I will keep the title of the book a mystery).  I am not much of a fan of ANSI style syntax – that syntax style is easy to read when there are just two tables involved, but I tend to become lost when several tables are listed with left outer joins to some tables and equi-joins to other tables.  Take for instance the following example which I encountered in an ERP mailing list:

SELECT CUSTOMER_ORDER.ID, SHIPPER.SHIPPED_DATE, CUST_ORDER_LINE.LINE_NO,
PART.ID, PART.DESCRIPTION, SHIPPER_LINE.SHIPPED_QTY
FROM CUSTOMER_ORDER
LEFT JOIN SHIPPER ON CUSTOMER_ORDER.ID = SHIPPER.CUST_ORDER_ID
INNER JOIN SHIPPER_LINE ON SHIPPER.PACKLIST_ID =
SHIPPER_LINE.PACKLIST_ID AND SHIPPER_LINE.SHIPPED_QTY > 0
LEFT JOIN CUST_ORDER_LINE ON CUSTOMER_ORDER.ID =
CUST_ORDER_LINE.CUST_ORDER_ID INNER JOIN PART ON PART.ID =
CUST_ORDER_LINE.PART_ID AND PART.DESCRIPTION LIKE 'PUMP%'

The above query did not work quite as the person wanted, so he asked for some assistance:

“The only thing with this is that it doesn’t seem to show me Customer
order lines where no shipments have occurred. I.E all returned records
have a Shipped_date and there should be some that don’t have
Shipped_date.”

After looking at the problem for a couple of minutes, I proposed the following SQL statement:

SELECT
  CO.ID,
  S.SHIPPED_DATE,
  COL.LINE_NO,
  P.ID,
  P.DESCRIPTION,
  SL.SHIPPED_QTY
FROM
  CUSTOMER_ORDER CO,
  CUST_ORDER_LINE COL,
  PART P,
  SHIPPER S,
  SHIPPER_LINE SL
WHERE
  CO.ID=COL.CUST_ORDER_ID
  AND COL.PART_ID=P.ID
  AND COL.CUST_ORDER_ID=SL.CUST_ORDER_ID(+)
  AND COL.LINE_NO=SL.CUST_ORDER_LINE_NO(+)
  AND SL.PACKLIST_ID=S.PACKLIST_ID(+)
  AND NVL(SL.SHIPPED_QTY,1) > 0
  AND P.DESCRIPTION LIKE 'PUMP%';

The above is nice and simple Oracle specific join syntax that seemed to meet the OP’s requirements.  But wait, the OP’s company is considering a switch from Oracle Database to SQL Server!  Bang head on the table until this makes sense (little floating dots, kind of looks like ants see).  Here, try this:

SELECT
  CO.ID,
  S.SHIPPED_DATE,
  COL.LINE_NO,
  P.ID,
  P.DESCRIPTION,
  SL.SHIPPED_QTY
FROM
  CUSTOMER_ORDER CO
INNER JOIN
  CUST_ORDER_LINE COL
ON
  CO.ID=COL.CUST_ORDER_ID
INNER JOIN
  PART P
ON
  COL.PART_ID=P.ID
LEFT OUTER JOIN
  SHIPPER_LINE SL
ON
  COL.CUST_ORDER_ID=SL.CUST_ORDER_ID
  AND COL.LINE_NO=SL.CUST_ORDER_LINE_NO
LEFT OUTER JOIN
  SHIPPER S
ON
  SL.PACKLIST_ID=S.PACKLIST_ID
WHERE
  COALESCE(SL.SHIPPED_QTY,1) > 0
  AND P.DESCRIPTION LIKE 'PUMP%';

So, which of the above two SQL statements is easier to read?

Still not convinced?  Let’s look at the FROM clause of another simple SQL statement.

Oracle specific join syntax (edit: Dec 26, 2010 2.5 hours after the initial post: the fifth line in the WHERE clause should show AND T3.VENDOR_ID(+) = ‘SAM’ due to the outer join between tables T1 and T3 – with that fix is the query section still equivalent to the ANSI version?  Something also needs to be done with the last line in the WHERE clause due to the outer join between tables T3 and T4):

FROM
  T1,
  T2,
  T3,
  T4
WHERE
  T1.PRODUCT_CODE='10'
  AND T1.PART_ID=T2.PART_ID(+)
  AND T2.DRAWING_REV_NO='20'
  AND T1.PART_ID=T3.PART_ID(+)
  AND T3.VENDOR_ID='SAM'
  AND T3.PART_ID=T4.PART_ID(+)
  AND T4.DESCRIPTION LIKE 'PAINT%';

ANSI join syntax:

FROM
  T1
LEFT OUTER JOIN
  T2
ON
  T1.PART_ID=T2.PART_ID
LEFT OUTER JOIN
  T3
ON
  T1.PART_ID=T3.PART_ID
LEFT OUTER JOIN
  T4
ON
  T3.PART_ID=T4.PART_ID
WHERE
  T1.PRODUCT_CODE='10'
  AND T2.DRAWING_REV_NO='20'
  AND T2.LOT_ID='0'
  AND T2.SPLIT_ID='0'
  AND T3.VENDOR_ID='SAM'
  AND T4.DESCRIPTION LIKE 'PAINT%';

When written with the Oracle specific join syntax, I can quickly see the list of tables that are accessed, and I can see how the results from each table will be restricted as the rows are retrieved.  Oracle Database will actually try to convert the ANSI style join syntax back into Oracle join syntax (this can be seen by enabling a 10053 trace), and there are several cases where bugs will cause the wrong results to be returned to the client.  In the above portion of a SQL statement, Oracle’s optimizer will not necessarily join table T1 to table T2 and then join the result to T3 and then join that result to T4.  The optimizer could join the tables in just about any order, possibly even joining two tables that have no join condition between them (in the process creating a Cartesian product).

The OP mentioned reading a book titled “SQL in 10 Minutes”, or something like that, a book that I read back in 2000 when I was working with Oracle Database 8.0.5 – that book only describes the ANSI style join syntax.  The 8.0.5 version did not support the ANSI join syntax, and neither did 8.1.7 (8i), so I used the Oracle specific syntax until jumping to Oracle Database 10.2.0.2 in 2006.  There are some cases where the Oracle specific syntax does not translate directly into the ANSI style syntax: the two syntaxes appear to be requesting the same information, but actually return different information (and that result is not considered a bug).  A couple of years ago I participated in a Usenet thread that included one such query.  This blog article by a member of the Oracle Optimizer group shows an example of that type of query (I believe it is examples C and D).
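A simpler illustration of how an apparently equivalent rewrite can return different rows (this is not the exact case from the Usenet thread or the Optimizer group article, just a sketch using two hypothetical tables TA and TB, each with a numeric join column C1 and a character column C2): with the Oracle syntax, the (+) marker on the filter predicate decides whether the restriction is applied as part of the outer join or after it, while with the ANSI syntax the same decision is made by placing the predicate in the ON clause or in the WHERE clause.

-- Oracle syntax: the (+) on the filter keeps the outer join intact
SELECT
  TA.C1,
  TB.C2
FROM
  TA,
  TB
WHERE
  TA.C1=TB.C1(+)
  AND TB.C2(+)='X';

-- ANSI equivalent: the filter belongs in the ON clause
SELECT
  TA.C1,
  TB.C2
FROM
  TA
LEFT OUTER JOIN
  TB
ON
  TA.C1=TB.C1
  AND TB.C2='X';

-- Moving the filter to the WHERE clause discards the rows preserved
-- by the outer join (the NULL values fail the test), so this version
-- returns different rows than the two statements above
SELECT
  TA.C1,
  TB.C2
FROM
  TA
LEFT OUTER JOIN
  TB
ON
  TA.C1=TB.C1
WHERE
  TB.C2='X';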

Considering that the Oracle query optimizer transforms the ANSI join syntax into Oracle specific syntax, it seems that there is always a risk of triggering an Oracle bug during the transformation (these bugs are less common in recent Oracle releases).  Bugs?  Just a very small sampling from Metalink (MOS) followed by a couple of mentions from other blogs:

  • Doc ID 9050716.8, Bug 9050716 “Dumps on kkqstcrf with ANSI joins and Join Elimination”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 5188321.8, Bug 5188321 “Wrong results (no rows) OR ORA-1445 from ANSI outer join”, affects Oracle Database release versions below 11.1.0.6
  • Doc ID 5368296.8, Bug 5368296 “ANSI join SQL may not report ORA-918 for ambiguous column”, affects Oracle Database release versions below 11.1.0.6
  • Doc ID 7420504.8, Bug 7420504 “Wrong Results with ANSI queries and fix for bug 7345484”, affects versions 10.2.0.3 through 11.1.0.7
  • Doc ID 9189070.8, Bug 9189070 “Dump (kkoiqb) parsing ANSI SQL with disjunctive predicate in ‘ON’ clause”, affects versions below 11.2.0.2
  • Doc ID 5864217.8, Bug 5864217 “OERI [kkoljt1] from ANSI query with join elimination”, affects Oracle Database release versions below 11.1.0.6
  • Doc ID 6050882.8, Bug 6050882 “Wrong results from ANSI outer join with fix for bug 4967068 or 5864217”, affects Oracle Database release versions below 11.1.0.6
  • Doc ID 7001990.8, Bug 7001990 “Wrong result (null column) on ansi outer join query and index access”, affects Oracle Database release versions below 11.1.0.7
  • Doc ID 9371323.8, Bug 9371323 “OERI [rwoirw: check ret val] on INSERT ALL with ANSI join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 5702433.8, Bug 5702433 “ORA-918 from ANSI joine with a CONTAINS clause”, affects Oracle Database release versions below 11.2.0.1
  • Doc ID 6033480.8, Bug 6033480 “High parse time of query when using ANSI style joins”, affects Oracle Database release versions below 11.1.0.6
  • Doc ID 7710538.8, Bug 7710538 “ORA-904 from UNION ALL with ANSI join”, affects Oracle Database release versions below 11.2.0.1
  • Doc ID 8547175.8, Bug 8547175 “FGA does not work correctly with ANSI joins”, affects Oracle Database release versions below 11.2.0.1
  • Doc ID 8782089.8, Bug 8782089 “OERI [15160] on ANSI join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9395765.8, Bug 9395765 “ORA-907 / wrong plan from query with ANSI FULL OUTER join”, affects Oracle Database release versions below 11.2.0.2
  • Doc ID 9713012.8, Bug 9713012 “Wrong results from push predicate on ANSI SQL”, affects Oracle Database release versions below 11.2.0.2
  • ANSI – argh
  • ANSI SQL
  • When ANSI SQL Join Syntax Does Not Work in Oracle
  • ANSI Outer Joins And Lateral Views (relates to the above referenced article by the Oracle Optimizer group)

I recently saw an interesting message thread on the Oracle-L listserve that asked whether or not it is a bad idea to mix ANSI style joins with Oracle specific style joins.  Randolf and I discussed the same topic while working on the “Expert Oracle Practices” book – if I recall correctly, we determined that while mixing the styles might work, it is a bad idea, even though the same execution plans were created for both of the simple test queries.  But why were the execution plans the same?  We need a simple test case script for help:

First, let’s create three very simple tables, making certain to declare the first column in each table as NOT NULL (this is important for one section of the test script):

CREATE TABLE T1(
  C1 NUMBER NOT NULL,
  C2 DATE);

CREATE TABLE T2(
  C1 NUMBER NOT NULL,
  C2 DATE);

CREATE TABLE T3(
  C1 NUMBER NOT NULL,
  C2 DATE);

INSERT INTO
  T1
SELECT
  ROWNUM*3 C1,
  TRUNC(SYSDATE)+(ROWNUM*3) C2
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

INSERT INTO
  T2
SELECT
  (ROWNUM-30)*4 C1,
  TRUNC(SYSDATE)+(ROWNUM*4) C2
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

INSERT INTO
  T3
SELECT
  (ROWNUM-60)*4 C1,
  TRUNC(SYSDATE)+(ROWNUM*4) C2
FROM
  DUAL
CONNECT BY
  LEVEL<=200;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3')

For the test script we will enable a 10053 trace, execute a SQL statement using Oracle specific join syntax, and then the equivalent SQL statement using ANSI join syntax:

ALTER SESSION SET TRACEFILE_IDENTIFIER='ANSI_TRANSFORM';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
INNER JOIN
  T2
ON
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+);

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
LEFT OUTER JOIN
  T2
ON
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
RIGHT OUTER JOIN
  T2
ON
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.C1 IS NULL;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
FULL OUTER JOIN
  T2
ON
  T1.C1=T2.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2,
  T3.C2
FROM
  T1,
  T2,
  T3
WHERE
  T1.C1=T2.C1(+)
  AND T2.C1=T3.C1;

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2,
  T3.C2
FROM
  T1
LEFT OUTER JOIN
  T2
ON
  T1.C1=T2.C1,
  T3
WHERE
  T2.C1=T3.C1;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

The second to the last example shows a full outer join written with Oracle specific syntax as a UNION ALL of two outer joins (note that the column C1 must be declared as NOT NULL for this to work), followed by the equivalent ANSI FULL OUTER JOIN syntax.  The last example shows what happens when ANSI join syntax is mixed with Oracle join syntax.  The following shows portions of the contents of the 10053 trace file when the script was executed on Oracle Database 11.2.0.1:

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2" WHERE "T1"."C1"="T2"."C1" 
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
INNER JOIN
  T2
ON
  T1.C1=T2.C1;

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2" WHERE "T1"."C1"="T2"."C1"

The final query after transformation is identical for this set of SQL statements.

—-

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+);

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2" WHERE "T1"."C1"="T2"."C1"(+) 
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
LEFT OUTER JOIN
  T2
ON
  T1.C1=T2.C1;

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2" WHERE "T1"."C1"="T2"."C1"(+)

The final query after transformation is identical for this set of SQL statements.

—-

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2" WHERE "T1"."C1"(+)="T2"."C1" 
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
RIGHT OUTER JOIN
  T2
ON
  T1.C1=T2.C1

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T2" "T2","TESTUSER"."T1" "T1" WHERE "T1"."C1"(+)="T2"."C1"

The final query after transformation is identical for this set of SQL statements.

—-

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
UNION ALL
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T1.C1(+)=T2.C1
  AND T1.C1 IS NULL

Final query after transformations:******* UNPARSED QUERY IS *******
 (SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2"
 WHERE "T1"."C1"="T2"."C1"(+))
UNION ALL  (SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2"
 WHERE "T1"."C1"(+)="T2"."C1" AND "T1"."C1" IS NULL) 
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2
FROM
  T1
FULL OUTER JOIN
  T2
ON
  T1.C1=T2.C1

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "VW_FOJ_0"."QCSJ_C000000000300000_0" "C1","VW_FOJ_0"."QCSJ_C000000000300002_1" "C2","VW_FOJ_0"."QCSJ_C000000000300001_2" "C1",
"VW_FOJ_0"."QCSJ_C000000000300003_3" "C2" FROM
 (SELECT "T1"."C1" "QCSJ_C000000000300000_0","T1"."C2" "QCSJ_C000000000300002_1","T2"."C1" "QCSJ_C000000000300001_2",
 "T2"."C2" "QCSJ_C000000000300003_3" FROM "TESTUSER"."T2" "T2" FULL OUTER JOIN "TESTUSER"."T1" "T1" ON "T1"."C1"="T2"."C1") "VW_FOJ_0"

The final query after transformation is definitely not the same for the Oracle specific syntax and the ANSI syntax, even though the rows retrieved were identical.

—-

SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2,
  T3.C2
FROM
  T1,
  T2,
  T3
WHERE
  T1.C1=T2.C1(+)
  AND T2.C1=T3.C1;

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3"
WHERE "T1"."C1"="T2"."C1" AND "T2"."C1"="T3"."C1" 
SELECT
  T1.C1,
  T1.C2,
  T2.C1,
  T2.C2,
  T3.C2
FROM
  T1
LEFT OUTER JOIN
  T2
ON
  T1.C1=T2.C1,
  T3
WHERE
  T2.C1=T3.C1;

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3"
WHERE "T2"."C1"="T3"."C1" AND "T1"."C1"="T2"."C1"

The final query after transformation is identical for this set of SQL statements, but notice that the optimizer apparently “lost” the “outer” portion of the join between tables T1 and T2.  I do not think that is supposed to happen (or at least it was not intended – probably should have specified AND T2.C1=T3.C1(+) )!  This is the relevant portion of the 10053 trace file, in case anyone wants to investigate:

Join Elimination (JE)   
*************************
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3" WHERE "T1"."C1"="T2"."C1"(+) AND "T2"."C1"="T3"."C1"
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3" WHERE "T1"."C1"="T2"."C1"(+) AND "T2"."C1"="T3"."C1"
Query block SEL$1 (#0) unchanged
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Considering outer-join elimination on query block SEL$1 (#0)
OJE: considering predicate"T1"."C1"="T2"."C1"(+)

rejected
OJE: considering predicate"T2"."C1"="T3"."C1"

OJE:      Converting outer join of T2 and T1 to inner-join.
considered
Registered qb: SEL$6E71C6F6 0x1fbfcba0 (OUTER-JOIN REMOVED FROM QUERY BLOCK SEL$1; SEL$1)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$6E71C6F6 nbfros=3 flg=0
    fro(0): flg=0 objn=82957 hint_alias="T1"@"SEL$1"
    fro(1): flg=0 objn=82958 hint_alias="T2"@"SEL$1"
    fro(2): flg=0 objn=83037 hint_alias="T3"@"SEL$1"

OJE:   outer-join eliminated
JE:   Considering Join Elimination on query block SEL$6E71C6F6 (#0)
*************************
Join Elimination (JE)   
*************************
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3" WHERE "T1"."C1"="T2"."C1" AND "T2"."C1"="T3"."C1"
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."C1" "C1","T1"."C2" "C2","T2"."C1" "C1","T2"."C2" "C2","T3"."C2" "C2" FROM "TESTUSER"."T1" "T1","TESTUSER"."T2" "T2","TESTUSER"."T3" "T3" WHERE "T1"."C1"="T2"."C1" AND "T2"."C1"="T3"."C1"

Considering the above transformation, I now feel a little ansy about Oracle join syntax.





Explain Plan Shows a Cartesian Merge Join, How Would You Help?

24 12 2010

December 24, 2010

Imagine that a developer approached you with the following SQL statement, and explained that he (or she) noticed that the execution plan showed a Cartesian merge join.  Even adding an ORDERED hint did not affect the execution plan, and the execution performance was the same with the ORDERED hint.

SELECT X.TIME_PERIOD EXPOSURE_PERIOD,Y.TIME_PERIOD EVALUATION_PERIOD,BUSINESS_UNIT,
TO_NUMBER(LOB_VALUE) BUSINESS_UNIT_LOB_ID_FIN,0 CALC_VALUE

FROM ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY X,ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY Y,ANALYSIS_BUSINESS_UNITS, ANALYSIS_LOBS

WHERE X.TIME_PERIOD BETWEEN TO_NUMBER('200001') AND TO_NUMBER('200212')
AND Y.TIME_PERIOD BETWEEN TO_NUMBER('200001') AND TO_NUMBER('200212')
AND BUSINESS_UNIT = '33011';

Cartesian joins are not automatically a sign of problems.  Previous blog articles have demonstrated that a Cartesian join may be more efficient than other methods in certain situations (see Improving Performance by Using a Cartesian Join and Improving Performance by Using a Cartesian Join 2); the execution plan may simply display a Cartesian join rather than a nested loops join when one of the row sources is expected to return a single row; or the optimizer may introduce a Cartesian join as a transformation because of transitive closure (see this article).

How would you help the developer with the above SQL statement?  Would you first attempt to reformat the SQL statement, or would you do something else as the first step (note that this blog article is not specifically interested in just helping the developer with this one SQL statement, but instead is seeking advice to help the developer know what to do the next time)?  Think about the SQL statement for a moment…
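To make thinking about it a little easier, here is the same statement reformatted into the layout used elsewhere on this blog; nothing has been changed other than the whitespace and line breaks (the table and column names are copied exactly from the statement above):

SELECT
  X.TIME_PERIOD EXPOSURE_PERIOD,
  Y.TIME_PERIOD EVALUATION_PERIOD,
  BUSINESS_UNIT,
  TO_NUMBER(LOB_VALUE) BUSINESS_UNIT_LOB_ID_FIN,
  0 CALC_VALUE
FROM
  ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY X,
  ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY Y,
  ANALYSIS_BUSINESS_UNITS,
  ANALYSIS_LOBS
WHERE
  X.TIME_PERIOD BETWEEN TO_NUMBER('200001') AND TO_NUMBER('200212')
  AND Y.TIME_PERIOD BETWEEN TO_NUMBER('200001') AND TO_NUMBER('200212')
  AND BUSINESS_UNIT = '33011';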

Almost a year ago I put together another blog article (SQL Basics – Working with ERP Data) that was based on a small portion of a presentation that I gave to an ERP user’s group.  I almost created a new blog article that basically stated the same tips shortly after I saw the above SQL statement in this OTN thread (please do not visit the thread until you have had a chance to think about the SQL statement, and how you would help the developer).





SELECT Statement is Fast, INSERT INTO Using the SELECT Statement is Brutally Slow 3

20 12 2010

December 20, 2010

(Back to the Previous Post in the Series)

The previous article in this series included a test case that demonstrated significantly different performance for a SELECT statement and an INSERT INTO statement that used the same SELECT statement as the data source.  This blog article includes the timing results and the execution plans that were generated from the script in the previous blog article.  At the end of this blog article is an unexpected twist – see if you are able to determine what caused the behavior.

———–

SELECT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.23 seconds when selecting 10,478 rows:

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1425 |           |
| 1   |  SORT GROUP BY               |            |   486 |   46K |  1425 |  00:00:12 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1423 |  00:00:12 |
| 4   |     VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 5   |      SORT GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   19K | 1126K |  1055 |  00:00:09 |
| 10  |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 11  |       SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       8303  consistent gets                                                   
          0  physical reads                                                    
      62292  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          3  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed  

INSERT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in… 30 minutes, 18.01 seconds when inserting 10,478 rows:

-------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT               |         |       |       | 4964K |           |
| 1   |  LOAD TABLE CONVENTIONAL       |         |       |       |       |           |
| 2   |   SORT GROUP BY                |         |   842 |   60K | 4964K |  11:18:45 |
| 3   |    FILTER                      |         |       |       |       |           |
| 4   |     HASH JOIN OUTER            |         |   842 |   60K | 4964K |  11:18:45 |
| 5   |      VIEW                      |         |   842 |   28K | 4964K |  11:18:42 |
| 6   |       FILTER                   |         |       |       |       |           |
| 7   |        SORT GROUP BY           |         |   842 |   53K | 4964K |  11:18:42 |
| 8   |         FILTER                 |         |       |       |       |           |
| 9   |          HASH JOIN             |         |  852M |   54G |  7031 |  00:00:57 |
| 10  |           INDEX FAST FULL SCAN | T1_PK   |  106K | 2755K |    60 |  00:00:01 |
| 11  |           TABLE ACCESS FULL    | T1_LINES| 5628K |  214M |  3388 |  00:00:28 |
| 12  |      VIEW                      |         |   700 |   27K |   368 |  00:00:03 |
| 13  |       SORT GROUP BY            |         |   700 |   40K |   368 |  00:00:03 |
| 14  |        HASH JOIN               |         |  146K | 8452K |   362 |  00:00:03 |
| 15  |         TABLE ACCESS FULL      | T2      |   700 |   18K |     3 |  00:00:01 |
| 16  |         TABLE ACCESS FULL      | T2_LINES|  146K | 4663K |   358 |  00:00:03 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - filter("EL"."RELID"=MAX("RELID"))
8 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
9 - access("E"."CUSTPO"="EL"."CUSTPO")
11 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0))
14 - access("CO"."ID"="COL"."CUST_ORDER_ID")
15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
16 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)

Statistics
----------------------------------------------------------                     
       1174  recursive calls                                                   
        454  db block gets                                                     
      84217  consistent gets                                                   
     214160  physical reads                                                    
     401488  redo size                                                         
        569  bytes sent via SQL*Net to client                                  
       1520  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          6  sorts (memory)                                                    
          2  sorts (disk)                                                      
      10478  rows processed                                                     

SELECT_10.2.0.2 (OPTIMIZER_FEATURES_ENABLE='10.2.0.2', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.20 seconds when selecting 10,478 rows:

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1425 |           |
| 1   |  SORT GROUP BY               |            |   486 |   46K |  1425 |  00:00:12 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1423 |  00:00:12 |
| 4   |     VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 5   |      HASH GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   19K | 1126K |  1055 |  00:00:09 |
| 10  |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 11  |       HASH GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7457  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          1  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

INSERT_10.2.0.2 (OPTIMIZER_FEATURES_ENABLE='10.2.0.2', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.23 seconds when inserting 10,478 rows:

---------------------------------------------------+-----------------------------------+
| Id  | Operation                     | Name       | Rows  | Bytes | Cost  | Time      |
---------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT              |            |       |       |  1425 |           |
| 1   |  LOAD TABLE CONVENTIONAL      |            |       |       |       |           |
| 2   |   SORT GROUP BY               |            |   486 |   46K |  1425 |  00:00:12 |
| 3   |    FILTER                     |            |       |       |       |           |
| 4   |     HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1423 |  00:00:12 |
| 5   |      VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 6   |       SORT GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 7   |        HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 8   |         TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 9   |         TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 10  |      NESTED LOOPS             |            |   19K | 1126K |  1055 |  00:00:09 |
| 11  |       VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 12  |        SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 13  |         FILTER                |            |       |       |       |           |
| 14  |          INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 15  |       INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
---------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - access("CO"."ID"="COL"."CUST_ORDER_ID")
8 - filter("CUSTOMER_PO_REF" IS NOT NULL)
9 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
13 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
15 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
15 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
        186  recursive calls                                                   
        341  db block gets                                                     
       7561  consistent gets                                                   
          0  physical reads                                                    
     397832  redo size                                                         
        569  bytes sent via SQL*Net to client                                  
       1520  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          4  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

SELECT_11.1.0.7 (OPTIMIZER_FEATURES_ENABLE='11.1.0.7', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.23 seconds when selecting 10,478 rows:

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1392 |           |
| 1   |  SORT GROUP BY               |            |   829 |   79K |  1392 |  00:00:12 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1390 |  00:00:12 |
| 4   |     VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 5   |      HASH GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   19K | 1126K |  1022 |  00:00:09 |
| 10  |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 11  |       HASH GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7457  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          1  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

INSERT_11.1.0.7 (OPTIMIZER_FEATURES_ENABLE='11.1.0.7', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.25 seconds when inserting 10,478 rows:

---------------------------------------------------+-----------------------------------+
| Id  | Operation                     | Name       | Rows  | Bytes | Cost  | Time      |
---------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT              |            |       |       |  1392 |           |
| 1   |  LOAD TABLE CONVENTIONAL      |            |       |       |       |           |
| 2   |   SORT GROUP BY               |            |   829 |   79K |  1392 |  00:00:12 |
| 3   |    FILTER                     |            |       |       |       |           |
| 4   |     HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1390 |  00:00:12 |
| 5   |      VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 6   |       SORT GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 7   |        HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 8   |         TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 9   |         TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 10  |      NESTED LOOPS             |            |   19K | 1126K |  1022 |  00:00:09 |
| 11  |       VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 12  |        SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 13  |         FILTER                |            |       |       |       |           |
| 14  |          INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 15  |       INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
---------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - access("CO"."ID"="COL"."CUST_ORDER_ID")
8 - filter("CUSTOMER_PO_REF" IS NOT NULL)
9 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
13 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
15 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
15 - filter("EL"."FCSTQTY">0) 

Statistics
----------------------------------------------------------                     
        186  recursive calls                                                   
        341  db block gets                                                     
       7564  consistent gets                                                   
          0  physical reads                                                    
     397816  redo size                                                         
        569  bytes sent via SQL*Net to client                                  
       1520  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          4  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

SELECT_11.2.0.1 (OPTIMIZER_FEATURES_ENABLE='11.2.0.1', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.21 seconds when selecting 10,478 rows:

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1392 |           |
| 1   |  SORT GROUP BY               |            |   829 |   79K |  1392 |  00:00:12 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1390 |  00:00:12 |
| 4   |     VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 5   |      HASH GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   19K | 1126K |  1022 |  00:00:09 |
| 10  |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 11  |       HASH GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7457  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          1  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

INSERT_11.2.0.1 (OPTIMIZER_FEATURES_ENABLE='11.2.0.1', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.26 seconds when inserting 10,478 rows:

---------------------------------------------------+-----------------------------------+
| Id  | Operation                     | Name       | Rows  | Bytes | Cost  | Time      |
---------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT              |            |       |       |  1392 |           |
| 1   |  LOAD TABLE CONVENTIONAL      |            |       |       |       |           |
| 2   |   SORT GROUP BY               |            |   829 |   79K |  1392 |  00:00:12 |
| 3   |    FILTER                     |            |       |       |       |           |
| 4   |     HASH JOIN RIGHT OUTER     |            |   19K | 1882K |  1390 |  00:00:12 |
| 5   |      VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 6   |       SORT GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 7   |        HASH JOIN              |            |  146K | 8452K |   362 |  00:00:03 |
| 8   |         TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 9   |         TABLE ACCESS FULL     | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 10  |      NESTED LOOPS             |            |   19K | 1126K |  1022 |  00:00:09 |
| 11  |       VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 12  |        SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 13  |         FILTER                |            |       |       |       |           |
| 14  |          INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 15  |       INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
---------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - access("CO"."ID"="COL"."CUST_ORDER_ID")
8 - filter("CUSTOMER_PO_REF" IS NOT NULL)
9 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
13 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
15 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
15 - filter("EL"."FCSTQTY">0) 

Statistics
----------------------------------------------------------                     
        186  recursive calls                                                   
        341  db block gets                                                     
       7563  consistent gets                                                   
          0  physical reads                                                    
     397876  redo size                                                         
        569  bytes sent via SQL*Net to client                                  
       1520  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          4  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed
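
The headings above show the OPTIMIZER_FEATURES_ENABLE, OPTIMIZER_INDEX_CACHING, and OPTIMIZER_INDEX_COST_ADJ values that were in effect for each run, but not the exact commands used to change them.  Assuming the parameters were adjusted at the session level between runs (all three parameters are session-modifiable), the setup for the first group of tests might have looked something like this sketch:

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.1.0.4';
ALTER SESSION SET OPTIMIZER_INDEX_CACHING=0;
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ=100;

-- execute the test SELECT and INSERT statements, then repeat with
-- OPTIMIZER_FEATURES_ENABLE set to '10.2.0.2', '11.1.0.7', and '11.2.0.1'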

——————–

SELECT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=100, OPTIMIZER_INDEX_COST_ADJ=80), completed in 0.21 seconds when selecting 10,478 rows (the execution plan is affected by the changes to OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |   436 |           |
| 1   |  SORT GROUP BY               |            |   486 |   46K |   436 |  00:00:04 |
| 2   |   NESTED LOOPS               |            |   994 |   94K |   435 |  00:00:04 |
| 3   |    FILTER                    |            |       |       |       |           |
| 4   |     HASH JOIN OUTER          |            |   700 |   41K |   433 |  00:00:04 |
| 5   |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 6   |       SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 7   |        FILTER                |            |       |       |       |           |
| 8   |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 9   |      VIEW                    |            |   700 |   27K |   368 |  00:00:03 |
| 10  |       SORT GROUP BY          |            |   700 |   40K |   368 |  00:00:03 |
| 11  |        HASH JOIN             |            |  146K | 8452K |   362 |  00:00:03 |
| 12  |         TABLE ACCESS FULL    | T2         |   700 |   18K |     3 |  00:00:01 |
| 13  |         TABLE ACCESS FULL    | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 14  |    INDEX RANGE SCAN          | T1_LINES_PK|     1 |    37 |     1 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter(("EL"."FCSTQTY">0 AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))) 

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7457  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          3  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed

INSERT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=100, OPTIMIZER_INDEX_COST_ADJ=80), completed in… 15 minutes, 11.32 seconds when inserting 10,478 rows (the execution plan is unaffected by the changes to OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ, although the execution completed roughly twice as fast):

-------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT               |         |       |       | 4964K |           |
| 1   |  LOAD TABLE CONVENTIONAL       |         |       |       |       |           |
| 2   |   SORT GROUP BY                |         |   842 |   60K | 4964K |  11:18:45 |
| 3   |    FILTER                      |         |       |       |       |           |
| 4   |     HASH JOIN OUTER            |         |   842 |   60K | 4964K |  11:18:45 |
| 5   |      VIEW                      |         |   842 |   28K | 4964K |  11:18:42 |
| 6   |       FILTER                   |         |       |       |       |           |
| 7   |        SORT GROUP BY           |         |   842 |   53K | 4964K |  11:18:42 |
| 8   |         FILTER                 |         |       |       |       |           |
| 9   |          HASH JOIN             |         |  852M |   54G |  7031 |  00:00:57 |
| 10  |           INDEX FAST FULL SCAN | T1_PK   |  106K | 2755K |    60 |  00:00:01 |
| 11  |           TABLE ACCESS FULL    | T1_LINES| 5628K |  214M |  3388 |  00:00:28 |
| 12  |      VIEW                      |         |   700 |   27K |   368 |  00:00:03 |
| 13  |       SORT GROUP BY            |         |   700 |   40K |   368 |  00:00:03 |
| 14  |        HASH JOIN               |         |  146K | 8452K |   362 |  00:00:03 |
| 15  |         TABLE ACCESS FULL      | T2      |   700 |   18K |     3 |  00:00:01 |
| 16  |         TABLE ACCESS FULL      | T2_LINES|  146K | 4663K |   358 |  00:00:03 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - filter("EL"."RELID"=MAX("RELID"))
8 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
9 - access("E"."CUSTPO"="EL"."CUSTPO")
11 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0))
14 - access("CO"."ID"="COL"."CUST_ORDER_ID")
15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
16 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)

Statistics
----------------------------------------------------------                     
        618  recursive calls                                                   
        388  db block gets                                                     
      42148  consistent gets                                                   
     107080  physical reads                                                    
     397876  redo size                                                         
        569  bytes sent via SQL*Net to client                                  
       1523  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          3  sorts (memory)                                                    
          1  sorts (disk)                                                      
      10478  rows processed

SELECT_10.2.0.2 (OPTIMIZER_FEATURES_ENABLE='10.2.0.2', OPTIMIZER_INDEX_CACHING=100, OPTIMIZER_INDEX_COST_ADJ=80), completed in 0.20 seconds when selecting 10,478 rows (the execution plans are identical for the SELECT and the INSERT INTO with the OPTIMIZER_FEATURES_ENABLE values 10.2.0.2, 11.1.0.7, and 11.2.0.1):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |   436 |           |
| 1   |  SORT GROUP BY               |            |   486 |   46K |   436 |  00:00:04 |
| 2   |   NESTED LOOPS               |            |   994 |   94K |   435 |  00:00:04 |
| 3   |    FILTER                    |            |       |       |       |           |
| 4   |     HASH JOIN OUTER          |            |   700 |   41K |   433 |  00:00:04 |
| 5   |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 6   |       HASH GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 7   |        FILTER                |            |       |       |       |           |
| 8   |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 9   |      VIEW                    |            |   700 |   27K |   368 |  00:00:03 |
| 10  |       HASH GROUP BY          |            |   700 |   40K |   368 |  00:00:03 |
| 11  |        HASH JOIN             |            |  146K | 8452K |   362 |  00:00:03 |
| 12  |         TABLE ACCESS FULL    | T2         |   700 |   18K |     3 |  00:00:01 |
| 13  |         TABLE ACCESS FULL    | T2_LINES   |  146K | 4663K |   358 |  00:00:03 |
| 14  |    INDEX RANGE SCAN          | T1_LINES_PK|     1 |    37 |     1 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter(("EL"."FCSTQTY">0 AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)))

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7457  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     141157  bytes sent via SQL*Net to client                                  
        470  bytes received via SQL*Net from client                            
         12  SQL*Net roundtrips to/from client                                 
          1  sorts (memory)                                                    
          0  sorts (disk)                                                      
      10478  rows processed
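
If the goal is only to see how a different parameter combination affects a single statement, an alternative to changing the session-level parameters is to embed the values in the statement itself with the OPTIMIZER_FEATURES_ENABLE and OPT_PARAM hints.  This is just a sketch; the query below simply reuses the T1 columns that appear in the test SQL statement to show the hint syntax:

SELECT /*+ OPTIMIZER_FEATURES_ENABLE('10.2.0.2')
           OPT_PARAM('optimizer_index_caching' 100)
           OPT_PARAM('optimizer_index_cost_adj' 80) */
  E.CUSTPO,
  MAX(E.RELID) RELID
FROM
  T1 E
GROUP BY
  E.CUSTPO;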

————————-

The odd twist… assume that you used either Oracle Database 11.2.0.1 or 11.2.0.2 for the test.  Twelve hours after your test completed, you executed the following SQL statements to pull the execution plans and the non-default optimizer parameters for the test SQL statements from the server’s memory:

SET AUTOTRACE OFF
SET LINESIZE 150
SET PAGESIZE 1000
SET TRIMSPOOL ON
SPOOL MYSQL_PLANS.TXT

SELECT /*+ LEADING(S) USE_NL(S T) */
  T.PLAN_TABLE_OUTPUT
FROM
  (SELECT
    SQL_ID,
    CHILD_NUMBER,
    COUNT(*) OVER (PARTITION BY SQL_ID) C
  FROM
    V$SQL
  WHERE
    SQL_TEXT LIKE '%T2%'
    AND SQL_TEXT LIKE '%T1%'
    AND SQL_TEXT NOT LIKE '%MGMT_NOTIFY%'
  ORDER BY
    SQL_ID,
    CHILD_NUMBER) S,
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(S.SQL_ID,S.CHILD_NUMBER,'TYPICAL')) T
WHERE
  S.C>=1;

SPOOL OFF

SET LINESIZE 150
SET PAGESIZE 1000
SET TRIMSPOOL ON
SPOOL MYSQL_OPTIMIZER_ENV.TXT

SELECT /*+ LEADING(S) USE_NL(S T) */
  SOE.SQL_ID,
  SOE.CHILD_NUMBER,
  SOE.NAME,
  SOE.VALUE
FROM
  (SELECT
    SQL_ID,
    CHILD_NUMBER,
    COUNT(*) OVER (PARTITION BY SQL_ID) C
  FROM
    V$SQL
  WHERE
    SQL_TEXT LIKE '%T2%'
    AND SQL_TEXT LIKE '%T1%'
    AND SQL_TEXT NOT LIKE '%MGMT_NOTIFY%'
  ORDER BY
    SQL_ID,
    CHILD_NUMBER) S,
  V$SQL_OPTIMIZER_ENV SOE
WHERE
  S.SQL_ID=SOE.SQL_ID
  AND S.CHILD_NUMBER=SOE.CHILD_NUMBER
  AND SOE.ISDEFAULT='NO'
ORDER BY
  SOE.SQL_ID,
  SOE.CHILD_NUMBER,
  SOE.NAME;

SPOOL OFF 
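
If the spooled output shows more than one child cursor for the same SQL_ID (as it will below), one additional place to look is V$SQL_SHARED_CURSOR, which exposes a Y/N flag for each of the reasons that could have prevented an existing child cursor from being shared.  A minimal sketch, using one of the SQL_ID values that appears in the output that follows:

SELECT
  *
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='4kkx042dnfuvf';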

Assume that the output included something like the following:

SQL_ID  4kkx042dnfuvf, child number 0
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+ _80 */       E.CUSTPO,       MAX(RELID) RELID     FROM
 T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY

Plan hash value: 1947245329

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |   436 (100)|          |
|   1 |  SORT GROUP BY              |             |   829 | 80413 |   436   (7)| 00:00:04 |
|   2 |   NESTED LOOPS              |             |   994 | 96418 |   435   (6)| 00:00:04 |
|*  3 |    FILTER                   |             |       |       |            |          |
|*  4 |     HASH JOIN OUTER         |             |   700 | 42000 |   433   (6)| 00:00:04 |
|   5 |      VIEW                   |             |   700 | 14700 |    65  (10)| 00:00:01 |
|   6 |       HASH GROUP BY         |             |   700 | 18200 |    65  (10)| 00:00:01 |
|*  7 |        FILTER               |             |       |       |            |          |
|   8 |         INDEX FAST FULL SCAN| T1_PK       |   108K|  2754K|    60   (2)| 00:00:01 |
|   9 |      VIEW                   |             |   700 | 27300 |   368   (6)| 00:00:03 |
|  10 |       HASH GROUP BY         |             |   700 | 40600 |   368   (6)| 00:00:03 |
|* 11 |        HASH JOIN            |             |   149K|  8456K|   362   (4)| 00:00:03 |
|* 12 |         TABLE ACCESS FULL   | T2          |   700 | 18200 |     3   (0)| 00:00:01 |
|* 13 |         TABLE ACCESS FULL   | T2_LINES    |   149K|  4665K|   358   (4)| 00:00:03 |
|* 14 |    INDEX RANGE SCAN         | T1_LINES_PK |     1 |    37 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
   4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND
              "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
              "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCSTQTY">0 AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DA
              TE",SYSDATE@!-365))) 

 

SQL_ID  4kkx042dnfuvf, child number 1
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+ _80 */       E.CUSTPO,       MAX(RELID) RELID     FROM
 T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY

Plan hash value: 1232865634

---------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |              |       |       |       |  5087K(100)|          |
|   1 |  SORT GROUP BY                   |              |   842 | 61466 |       |  5087K  (2)| 11:18:20 |
|*  2 |   FILTER                         |              |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER               |              |   842 | 61466 |       |  5087K  (2)| 11:18:20 |
|   4 |     VIEW                         |              |   842 | 28628 |       |  5083K  (2)| 11:17:48 |
|*  5 |      FILTER                      |              |       |       |       |            |          |
|   6 |       SORT GROUP BY              |              |   842 | 54730 |    77G|  5083K  (2)| 11:17:48 |
|*  7 |        FILTER                    |              |       |       |       |            |          |
|   8 |         NESTED LOOPS             |              |   893M|    54G|       |  7225 (100)| 00:00:58 |
|   9 |          INDEX FAST FULL SCAN    | T1_PK        |   108K|  2754K|       |    59   (2)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN        | T1_LINES_PK  |  8233 |   313K|       |     2  (50)| 00:00:01 |
|  11 |     VIEW                         |              |   700 | 27300 |       |  3939   (1)| 00:00:32 |
|  12 |      SORT GROUP BY               |              |   700 | 40600 |       |  3939   (1)| 00:00:32 |
|* 13 |       TABLE ACCESS BY INDEX ROWID| T2_LINES     |   213 |  6816 |       |     6   (0)| 00:00:01 |
|  14 |        NESTED LOOPS              |              |   149K|  8456K|       |  3933   (1)| 00:00:32 |
|* 15 |         TABLE ACCESS FULL        | T2           |   700 | 18200 |       |     3  (34)| 00:00:01 |
|* 16 |         INDEX RANGE SCAN         | SYS_C0027538 |   321 |       |       |     2  (50)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  10 - access("E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND
              "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  16 - access("CO"."ID"="COL"."CUST_ORDER_ID") 

 

SQL_ID  4kkx042dnfuvf, child number 2
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+ _80 */       E.CUSTPO,       MAX(RELID) RELID     FROM
 T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY

Plan hash value: 4293267978

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |       |       |       |  5083K(100)|          |
|   1 |  SORT GROUP BY               |             |   842 | 61466 |       |  5083K  (2)| 11:17:51 |
|*  2 |   FILTER                     |             |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER           |             |   842 | 61466 |       |  5083K  (2)| 11:17:51 |
|   4 |     VIEW                     |             |   842 | 28628 |       |  5083K  (2)| 11:17:48 |
|*  5 |      FILTER                  |             |       |       |       |            |          |
|   6 |       SORT GROUP BY          |             |   842 | 54730 |    77G|  5083K  (2)| 11:17:48 |
|*  7 |        FILTER                |             |       |       |       |            |          |
|   8 |         NESTED LOOPS         |             |   893M|    54G|       |  7226 (100)| 00:00:58 |
|   9 |          INDEX FAST FULL SCAN| T1_PK       |   108K|  2754K|       |    60   (2)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN    | T1_LINES_PK |  8233 |   313K|       |     2  (50)| 00:00:01 |
|  11 |     VIEW                     |             |   700 | 27300 |       |   369   (6)| 00:00:03 |
|  12 |      SORT GROUP BY           |             |   700 | 40600 |       |   369   (6)| 00:00:03 |
|* 13 |       HASH JOIN              |             |   149K|  8456K|       |   362   (4)| 00:00:03 |
|* 14 |        TABLE ACCESS FULL     | T2          |   700 | 18200 |       |     4  (25)| 00:00:01 |
|* 15 |        TABLE ACCESS FULL     | T2_LINES    |   149K|  4665K|       |   358   (4)| 00:00:03 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  10 - access("E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND
              "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  14 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  15 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

 

SQL_ID  4kkx042dnfuvf, child number 3
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+ _80 */       E.CUSTPO,       MAX(RELID) RELID     FROM
 T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY

Plan hash value: 3038192993

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |   436 (100)|          |
|   1 |  SORT GROUP BY              |             |   486 | 47142 |   436   (7)| 00:00:04 |
|   2 |   NESTED LOOPS              |             |   994 | 96418 |   435   (6)| 00:00:04 |
|*  3 |    FILTER                   |             |       |       |            |          |
|*  4 |     HASH JOIN OUTER         |             |   700 | 42000 |   433   (6)| 00:00:04 |
|   5 |      VIEW                   |             |   700 | 14700 |    65  (10)| 00:00:01 |
|   6 |       SORT GROUP BY         |             |   700 | 18200 |    65  (10)| 00:00:01 |
|*  7 |        FILTER               |             |       |       |            |          |
|   8 |         INDEX FAST FULL SCAN| T1_PK       |   108K|  2754K|    60   (2)| 00:00:01 |
|   9 |      VIEW                   |             |   700 | 27300 |   368   (6)| 00:00:03 |
|  10 |       SORT GROUP BY         |             |   700 | 40600 |   368   (6)| 00:00:03 |
|* 11 |        HASH JOIN            |             |   149K|  8456K|   362   (4)| 00:00:03 |
|* 12 |         TABLE ACCESS FULL   | T2          |   700 | 18200 |     3   (0)| 00:00:01 |
|* 13 |         TABLE ACCESS FULL   | T2_LINES    |   149K|  4665K|   358   (4)| 00:00:03 |
|* 14 |    INDEX RANGE SCAN         | T1_LINES_PK |     1 |    37 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
   4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND
              "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
              "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCSTQTY">0 AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DA
              TE",SYSDATE@!-365))) 

 

SQL_ID  bqsrv9mngjdfu, child number 0
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 1522673873

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |  1392 (100)|          |
|   1 |  SORT GROUP BY              |             |   829 | 80413 |  1392   (3)| 00:00:12 |
|*  2 |   FILTER                    |             |       |       |            |          |
|*  3 |    HASH JOIN RIGHT OUTER    |             | 19874 |  1882K|  1390   (2)| 00:00:12 |
|   4 |     VIEW                    |             |   700 | 27300 |   368   (6)| 00:00:03 |
|   5 |      HASH GROUP BY          |             |   700 | 40600 |   368   (6)| 00:00:03 |
|*  6 |       HASH JOIN             |             |   149K|  8456K|   362   (4)| 00:00:03 |
|*  7 |        TABLE ACCESS FULL    | T2          |   700 | 18200 |     3   (0)| 00:00:01 |
|*  8 |        TABLE ACCESS FULL    | T2_LINES    |   149K|  4665K|   358   (4)| 00:00:03 |
|   9 |     NESTED LOOPS            |             | 19874 |  1125K|  1022   (1)| 00:00:09 |
|  10 |      VIEW                   |             |   700 | 14700 |    65  (10)| 00:00:01 |
|  11 |       HASH GROUP BY         |             |   700 | 18200 |    65  (10)| 00:00:01 |
|* 12 |        FILTER               |             |       |       |            |          |
|  13 |         INDEX FAST FULL SCAN| T1_PK       |   108K|  2754K|    60   (2)| 00:00:01 |
|* 14 |      INDEX RANGE SCAN       | T1_LINES_PK |    28 |  1036 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)
              AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
   7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
   8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND
              "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
              "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter("EL"."FCSTQTY">0) 

 

SQL_ID  bqsrv9mngjdfu, child number 1
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 1232865634

---------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |              |       |       |       |    16M(100)|          |
|   1 |  SORT GROUP BY                   |              |   842 | 61466 |       |    16M  (1)| 35:51:03 |
|*  2 |   FILTER                         |              |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER               |              |   842 | 61466 |       |    16M  (1)| 35:51:03 |
|   4 |     VIEW                         |              |   842 | 28628 |       |    16M  (1)| 35:50:07 |
|*  5 |      FILTER                      |              |       |       |       |            |          |
|   6 |       SORT GROUP BY              |              |   842 | 54730 |    77G|    16M  (1)| 35:50:07 |
|*  7 |        FILTER                    |              |       |       |       |            |          |
|   8 |         NESTED LOOPS             |              |   893M|    54G|       |    11M  (1)| 24:33:17 |
|   9 |          INDEX FAST FULL SCAN    | T1_PK        |   108K|  2754K|       |    59   (2)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN        | T1_LINES_PK  |  8233 |   313K|       |   104   (1)| 00:00:01 |
|  11 |     VIEW                         |              |   700 | 27300 |       |  7023   (1)| 00:00:57 |
|  12 |      SORT GROUP BY               |              |   700 | 40600 |       |  7023   (1)| 00:00:57 |
|* 13 |       TABLE ACCESS BY INDEX ROWID| T2_LINES     |   213 |  6816 |       |    11  (10)| 00:00:01 |
|  14 |        NESTED LOOPS              |              |   149K|  8456K|       |  7016   (1)| 00:00:57 |
|* 15 |         TABLE ACCESS FULL        | T2           |   700 | 18200 |       |     3  (34)| 00:00:01 |
|* 16 |         INDEX RANGE SCAN         | SYS_C0027538 |   321 |       |       |     4  (25)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  10 - access("E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND
              "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  16 - access("CO"."ID"="COL"."CUST_ORDER_ID") 

 

SQL_ID  bqsrv9mngjdfu, child number 2
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 3881822961

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |       |       |       |  5083K(100)|          |
|   1 |  SORT GROUP BY               |          |   842 | 61466 |       |  5083K  (2)| 11:17:50 |
|*  2 |   FILTER                     |          |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER           |          |   842 | 61466 |       |  5083K  (2)| 11:17:50 |
|   4 |     VIEW                     |          |   842 | 28628 |       |  5083K  (2)| 11:17:47 |
|*  5 |      FILTER                  |          |       |       |       |            |          |
|   6 |       SORT GROUP BY          |          |   842 | 54730 |    77G|  5083K  (2)| 11:17:47 |
|*  7 |        FILTER                |          |       |       |       |            |          |
|*  8 |         HASH JOIN            |          |   893M|    54G|  4032K|  7033   (9)| 00:00:57 |
|   9 |          INDEX FAST FULL SCAN| T1_PK    |   108K|  2754K|       |    60   (2)| 00:00:01 |
|* 10 |          TABLE ACCESS FULL   | T1_LINES |  5763K|   214M|       |  3389  (16)| 00:00:28 |
|  11 |     VIEW                     |          |   700 | 27300 |       |   369   (6)| 00:00:03 |
|  12 |      SORT GROUP BY           |          |   700 | 40600 |       |   369   (6)| 00:00:03 |
|* 13 |       HASH JOIN              |          |   149K|  8456K|       |   362   (4)| 00:00:03 |
|* 14 |        TABLE ACCESS FULL     | T2       |   700 | 18200 |       |     4  (25)| 00:00:01 |
|* 15 |        TABLE ACCESS FULL     | T2_LINES |   149K|  4665K|       |   358   (4)| 00:00:03 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
   8 - access("E"."CUSTPO"="EL"."CUSTPO")
  10 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  14 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  15 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

 

SQL_ID  bqsrv9mngjdfu, child number 3
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 4176222027

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |  1425 (100)|          |
|   1 |  SORT GROUP BY              |             |   486 | 47142 |  1425   (2)| 00:00:12 |
|*  2 |   FILTER                    |             |       |       |            |          |
|*  3 |    HASH JOIN RIGHT OUTER    |             | 19874 |  1882K|  1423   (2)| 00:00:12 |
|   4 |     VIEW                    |             |   700 | 27300 |   368   (6)| 00:00:03 |
|   5 |      SORT GROUP BY          |             |   700 | 40600 |   368   (6)| 00:00:03 |
|*  6 |       HASH JOIN             |             |   149K|  8456K|   362   (4)| 00:00:03 |
|*  7 |        TABLE ACCESS FULL    | T2          |   700 | 18200 |     3   (0)| 00:00:01 |
|*  8 |        TABLE ACCESS FULL    | T2_LINES    |   149K|  4665K|   358   (4)| 00:00:03 |
|   9 |     NESTED LOOPS            |             | 19874 |  1125K|  1055   (1)| 00:00:09 |
|  10 |      VIEW                   |             |   700 | 14700 |    65  (10)| 00:00:01 |
|  11 |       SORT GROUP BY         |             |   700 | 18200 |    65  (10)| 00:00:01 |
|* 12 |        FILTER               |             |       |       |            |          |
|  13 |         INDEX FAST FULL SCAN| T1_PK       |   108K|  2754K|    60   (2)| 00:00:01 |
|* 14 |      INDEX RANGE SCAN       | T1_LINES_PK |    28 |  1036 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)
              AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
   7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
   8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND
              "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
              "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter("EL"."FCSTQTY">0) 

 

SQL_ID  bqsrv9mngjdfu, child number 4
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 1232865634

---------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |              |       |       |       |    16M(100)|          |
|   1 |  SORT GROUP BY                   |              |   842 | 61466 |       |    16M  (1)| 35:51:03 |
|*  2 |   FILTER                         |              |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER               |              |   842 | 61466 |       |    16M  (1)| 35:51:03 |
|   4 |     VIEW                         |              |   842 | 28628 |       |    16M  (1)| 35:50:07 |
|*  5 |      FILTER                      |              |       |       |       |            |          |
|   6 |       SORT GROUP BY              |              |   842 | 54730 |    77G|    16M  (1)| 35:50:07 |
|*  7 |        FILTER                    |              |       |       |       |            |          |
|   8 |         NESTED LOOPS             |              |   893M|    54G|       |    11M  (1)| 24:33:17 |
|   9 |          INDEX FAST FULL SCAN    | T1_PK        |   108K|  2754K|       |    59   (2)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN        | T1_LINES_PK  |  8233 |   313K|       |   104   (1)| 00:00:01 |
|  11 |     VIEW                         |              |   700 | 27300 |       |  7023   (1)| 00:00:57 |
|  12 |      SORT GROUP BY               |              |   700 | 40600 |       |  7023   (1)| 00:00:57 |
|* 13 |       TABLE ACCESS BY INDEX ROWID| T2_LINES     |   213 |  6816 |       |    11  (10)| 00:00:01 |
|  14 |        NESTED LOOPS              |              |   149K|  8456K|       |  7016   (1)| 00:00:57 |
|* 15 |         TABLE ACCESS FULL        | T2           |   700 | 18200 |       |     3  (34)| 00:00:01 |
|* 16 |         INDEX RANGE SCAN         | SYS_C0027538 |   321 |       |       |     4  (25)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
  10 - access("E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND
              "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
  15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  16 - access("CO"."ID"="COL"."CUST_ORDER_ID") 

 

SQL_ID  bqsrv9mngjdfu, child number 5
-------------------------------------
/* SQL Analyze(7,1) */ SELECT /*+ LEADING(E) */
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,   E.FCDUEDATE
REQUIRED_DATE,   SUM(E.FCSTQTY) QTY,   1 PLAN_LEVEL FROM   (SELECT /*+
LEADING(E) */     E.CUSTPO,     EL.FCSTQTY,     EL.FCDUEDATE   FROM
(SELECT /*+  */       E.CUSTPO,       MAX(RELID) RELID     FROM
T1 E     GROUP BY       E.CUSTPO) E,     T1_LINES EL   WHERE
E.CUSTPO=EL.CUSTPO     AND E.RELID=EL.RELID     AND EL.FCDUEDATE
BETWEEN SYSDATE-365 AND SYSDATE+1200     AND EL.FCSTQTY>0) E,   (SELECT
/*+  */     CO.CUSTOMER_PO_REF,     MAX(COL.DESIRED_SHIP_DATE)
DESIRED_SHIP_DATE,     MAX(COL.PART_ID) PART_ID   FROM     T2 CO,
T2_LINES COL   WHERE     CO.ID=COL.CUST_ORDER_ID     AND
CUSTOMER_PO_REF IS NOT NULL     AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
 GROUP BY     CO.CUSTOMER_PO_REF) CO WHERE
E.CUSTPO=CO.CUSTOMER_PO_REF(+)   AND
E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365) GROUP BY
COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),   E.FCDUEDATE ORDER BY   CO

Plan hash value: 3881822961

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |       |       |       |  5083K(100)|          |
|   1 |  SORT GROUP BY               |          |   842 | 61466 |       |  5083K  (2)| 11:17:50 |
|*  2 |   FILTER                     |          |       |       |       |            |          |
|*  3 |    HASH JOIN OUTER           |          |   842 | 61466 |       |  5083K  (2)| 11:17:50 |
|   4 |     VIEW                     |          |   842 | 28628 |       |  5083K  (2)| 11:17:47 |
|*  5 |      FILTER                  |          |       |       |       |            |          |
|   6 |       SORT GROUP BY          |          |   842 | 54730 |    77G|  5083K  (2)| 11:17:47 |
|*  7 |        FILTER                |          |       |       |       |            |          |
|*  8 |         HASH JOIN            |          |   893M|    54G|  4032K|  7033   (9)| 00:00:57 |
|   9 |          INDEX FAST FULL SCAN| T1_PK    |   108K|  2754K|       |    60   (2)| 00:00:01 |
|* 10 |          TABLE ACCESS FULL   | T1_LINES |  5763K|   214M|       |  3389  (16)| 00:00:28 |
|  11 |     VIEW                     |          |   700 | 27300 |       |   369   (6)| 00:00:03 |
|  12 |      SORT GROUP BY           |          |   700 | 40600 |       |   369   (6)| 00:00:03 |
|* 13 |       HASH JOIN              |          |   149K|  8456K|       |   362   (4)| 00:00:03 |
|* 14 |        TABLE ACCESS FULL     | T2       |   700 | 18200 |       |     4  (25)| 00:00:01 |
|* 15 |        TABLE ACCESS FULL     | T2_LINES |   149K|  4665K|       |   358   (4)| 00:00:03 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
   8 - access("E"."CUSTPO"="EL"."CUSTPO")
  10 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND
              "EL"."FCSTQTY">0))
  13 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  14 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  15 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

Assume that the second script produced the following output (several of the entries shown are directly related to the SQL statements that were executed by the script):

SQL_ID        CHILD_NUMBER NAME                                     VALUE
------------- ------------ ---------------------------------------- -------------------------
33n2n2z5a330f            0 _pga_max_size                            409600 KB
33n2n2z5a330f            0 optimizer_features_enable                10.1.0.4
-
33n2n2z5a330f            1 _pga_max_size                            409600 KB
33n2n2z5a330f            1 optimizer_features_enable                10.2.0.2
-
33n2n2z5a330f            2 _pga_max_size                            409600 KB
33n2n2z5a330f            2 optimizer_features_enable                11.1.0.7
-
33n2n2z5a330f            3 _pga_max_size                            409600 KB
------------------------------------------------------------------------------
4kkx042dnfuvf            0 _pga_max_size                            409600 KB
4kkx042dnfuvf            0 is_recur_flags                           72
4kkx042dnfuvf            0 optimizer_features_hinted                11.2.0.1
4kkx042dnfuvf            0 optimizer_index_caching                  100
4kkx042dnfuvf            0 optimizer_index_cost_adj                 80
4kkx042dnfuvf            0 optimizer_mode_hinted                    true
-
4kkx042dnfuvf            1 _pga_max_size                            409600 KB
4kkx042dnfuvf            1 is_recur_flags                           72
4kkx042dnfuvf            1 optimizer_dynamic_sampling               0
4kkx042dnfuvf            1 optimizer_features_hinted                9.0.0
4kkx042dnfuvf            1 optimizer_index_caching                  100
4kkx042dnfuvf            1 optimizer_index_cost_adj                 80
4kkx042dnfuvf            1 optimizer_mode_hinted                    true
4kkx042dnfuvf            1 query_rewrite_enabled                    false
4kkx042dnfuvf            1 skip_unusable_indexes                    false
-
4kkx042dnfuvf            2 _pga_max_size                            409600 KB
4kkx042dnfuvf            2 is_recur_flags                           72
4kkx042dnfuvf            2 optimizer_dynamic_sampling               1
4kkx042dnfuvf            2 optimizer_features_hinted                9.2.0.8
4kkx042dnfuvf            2 optimizer_index_caching                  100
4kkx042dnfuvf            2 optimizer_index_cost_adj                 80
4kkx042dnfuvf            2 optimizer_mode_hinted                    true
4kkx042dnfuvf            2 query_rewrite_enabled                    false
4kkx042dnfuvf            2 skip_unusable_indexes                    false
-
4kkx042dnfuvf            3 _pga_max_size                            409600 KB
4kkx042dnfuvf            3 is_recur_flags                           72
4kkx042dnfuvf            3 optimizer_features_hinted                10.1.0.5
4kkx042dnfuvf            3 optimizer_index_caching                  100
4kkx042dnfuvf            3 optimizer_index_cost_adj                 80
4kkx042dnfuvf            3 optimizer_mode_hinted                    true
------------------------------------------------------------------------------
bqsrv9mngjdfu            0 _pga_max_size                            409600 KB
bqsrv9mngjdfu            0 is_recur_flags                           72
bqsrv9mngjdfu            0 optimizer_features_hinted                11.2.0.1
bqsrv9mngjdfu            0 optimizer_mode_hinted                    true
-
bqsrv9mngjdfu            1 _pga_max_size                            409600 KB
bqsrv9mngjdfu            1 is_recur_flags                           72
bqsrv9mngjdfu            1 optimizer_dynamic_sampling               0
bqsrv9mngjdfu            1 optimizer_features_hinted                9.0.0
bqsrv9mngjdfu            1 optimizer_mode_hinted                    true
bqsrv9mngjdfu            1 query_rewrite_enabled                    false
bqsrv9mngjdfu            1 skip_unusable_indexes                    false
-
bqsrv9mngjdfu            2 _pga_max_size                            409600 KB
bqsrv9mngjdfu            2 is_recur_flags                           72
bqsrv9mngjdfu            2 optimizer_dynamic_sampling               1
bqsrv9mngjdfu            2 optimizer_features_hinted                9.2.0.8
bqsrv9mngjdfu            2 optimizer_mode_hinted                    true
bqsrv9mngjdfu            2 query_rewrite_enabled                    false
bqsrv9mngjdfu            2 skip_unusable_indexes                    false
-
bqsrv9mngjdfu            3 _pga_max_size                            409600 KB
bqsrv9mngjdfu            3 is_recur_flags                           72
bqsrv9mngjdfu            3 optimizer_features_hinted                10.1.0.5
bqsrv9mngjdfu            3 optimizer_mode_hinted                    true
-
bqsrv9mngjdfu            4 _pga_max_size                            409600 KB
bqsrv9mngjdfu            4 is_recur_flags                           72
bqsrv9mngjdfu            4 optimizer_dynamic_sampling               0
bqsrv9mngjdfu            4 optimizer_features_hinted                9.0.0
bqsrv9mngjdfu            4 optimizer_mode_hinted                    true
bqsrv9mngjdfu            4 query_rewrite_enabled                    false
bqsrv9mngjdfu            4 skip_unusable_indexes                    false
-
bqsrv9mngjdfu            5 _pga_max_size                            409600 KB
bqsrv9mngjdfu            5 is_recur_flags                           72
bqsrv9mngjdfu            5 optimizer_dynamic_sampling               1
bqsrv9mngjdfu            5 optimizer_features_hinted                9.2.0.8
bqsrv9mngjdfu            5 optimizer_mode_hinted                    true
bqsrv9mngjdfu            5 query_rewrite_enabled                    false
bqsrv9mngjdfu            5 skip_unusable_indexes                    false
------------------------------------------------------------------------------
gfb3g0axbj17z            0 _pga_max_size                            409600 KB
gfb3g0axbj17z            0 optimizer_features_enable                10.1.0.4
gfb3g0axbj17z            0 optimizer_index_caching                  100
gfb3g0axbj17z            0 optimizer_index_cost_adj                 80
-
gfb3g0axbj17z            1 _pga_max_size                            409600 KB
gfb3g0axbj17z            1 optimizer_features_enable                10.2.0.2
gfb3g0axbj17z            1 optimizer_index_caching                  100
gfb3g0axbj17z            1 optimizer_index_cost_adj                 80
-
gfb3g0axbj17z            2 _pga_max_size                            409600 KB
gfb3g0axbj17z            2 optimizer_features_enable                11.1.0.7
gfb3g0axbj17z            2 optimizer_index_caching                  100
gfb3g0axbj17z            2 optimizer_index_cost_adj                 80
-
gfb3g0axbj17z            3 _pga_max_size                            409600 KB
gfb3g0axbj17z            3 optimizer_index_caching                  100
gfb3g0axbj17z            3 optimizer_index_cost_adj                 80  
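
The script that produced the above output is not shown here, but output of roughly this form can be pulled from V$SQL_OPTIMIZER_ENV.  The following is a minimal sketch, assuming the four SQL_ID values listed above and restricting the output to the non-default optimizer environment parameters for each child cursor (the ISDEFAULT filter and the column list are assumptions, not necessarily what the actual script used):

SELECT
  SQL_ID,
  CHILD_NUMBER,
  NAME,
  VALUE
FROM
  V$SQL_OPTIMIZER_ENV
WHERE
  SQL_ID IN ('33n2n2z5a330f','4kkx042dnfuvf','bqsrv9mngjdfu','gfb3g0axbj17z')
  AND ISDEFAULT='NO'
ORDER BY
  SQL_ID,
  CHILD_NUMBER,
  NAME;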

—————————————————————————————————————————–
—————————————————————————————————————————–

Just to put a slightly different spin on the test results, I repeated the test case on Oracle Database 10.2.0.5 Standard Edition (and also on Enterprise Edition) with an 8,000M SGA_TARGET and a 1,800M PGA_AGGREGATE_TARGET.  The results?

SELECT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.34 seconds when selecting 10,478 rows (same execution plan for OPTIMIZER_FEATURES_ENABLE='10.2.0.2'):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1368 |           |
| 1   |  SORT GROUP BY               |            |   489 |   46K |  1368 |  00:00:11 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   20K | 1906K |  1367 |  00:00:11 |
| 4   |     VIEW                     |            |   700 |   27K |   375 |  00:00:03 |
| 5   |      SORT GROUP BY           |            |   700 |   40K |   375 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8454K |   368 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4664K |   364 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   20K | 1140K |   992 |  00:00:08 |
| 10  |      VIEW                    |            |   671 |   14K |    65 |  00:00:01 |
| 11  |       SORT GROUP BY          |            |   671 |   17K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  107K | 2773K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    30 |  1110 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
       1434  recursive calls                                                   
          0  db block gets                                                     
       7818  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     104015  bytes sent via SQL*Net to client                                  
        549  bytes received via SQL*Net from client                            
          9  SQL*Net roundtrips to/from client                                 
         36  sorts (memory)                                                    
          0  sorts (disk)                                                      
       7604  rows processed

INSERT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in… 0.34 seconds when inserting 10,478 rows (same execution plan for OPTIMIZER_FEATURES_ENABLE='10.2.0.2'):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT             |            |       |       |  1368 |           |
| 1   |  SORT GROUP BY               |            |   489 |   46K |  1368 |  00:00:11 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   20K | 1906K |  1367 |  00:00:11 |
| 4   |     VIEW                     |            |   700 |   27K |   375 |  00:00:03 |
| 5   |      SORT GROUP BY           |            |   700 |   40K |   375 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8454K |   368 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4664K |   364 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   20K | 1140K |   992 |  00:00:08 |
| 10  |      VIEW                    |            |   671 |   14K |    65 |  00:00:01 |
| 11  |       SORT GROUP BY          |            |   671 |   17K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  107K | 2773K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    30 |  1110 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0)

Statistics
----------------------------------------------------------                     
       1350  recursive calls                                                   
        468  db block gets                                                     
       7981  consistent gets                                                   
          0  physical reads                                                    
     293716  redo size                                                         
        817  bytes sent via SQL*Net to client                                  
       1709  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
         21  sorts (memory)                                                    
          0  sorts (disk)                                                      
       7604  rows processed

Notice in the above that the execution plan is the same for both the SELECT and INSERT INTO SQL statements, unlike what happened in Oracle Database 11.2.0.1 with the same OPTIMIZER_FEATURES_ENABLE setting.
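
One quick way to verify that the SELECT and the INSERT INTO versions of the SQL statement ended up with the same execution plan is to compare the PLAN_HASH_VALUE of the cursors in V$SQL.  The following is a hypothetical verification step that was not part of the test; the SQL_TEXT filters are simply assumed to be a convenient way to locate the two statements:

SELECT
  SQL_ID,
  CHILD_NUMBER,
  PLAN_HASH_VALUE,
  EXECUTIONS
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'SELECT /*+ LEADING(E) */%'
  OR SQL_TEXT LIKE 'INSERT INTO T1_DEST%'
ORDER BY
  SQL_ID,
  CHILD_NUMBER;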

————

SELECT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=100, OPTIMIZER_INDEX_COST_ADJ=80), completed in 0.34 seconds when selecting 10,478 rows (same execution plan for OPTIMIZER_FEATURES_ENABLE='10.2.0.2'):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |   442 |           |
| 1   |  SORT GROUP BY               |            |   489 |   46K |   442 |  00:00:04 |
| 2   |   NESTED LOOPS               |            |  1006 |   95K |   441 |  00:00:04 |
| 3   |    FILTER                    |            |       |       |       |           |
| 4   |     HASH JOIN OUTER          |            |   671 |   39K |   440 |  00:00:04 |
| 5   |      VIEW                    |            |   671 |   14K |    65 |  00:00:01 |
| 6   |       SORT GROUP BY          |            |   671 |   17K |    65 |  00:00:01 |
| 7   |        FILTER                |            |       |       |       |           |
| 8   |         INDEX FAST FULL SCAN | T1_PK      |  107K | 2773K |    60 |  00:00:01 |
| 9   |      VIEW                    |            |   700 |   27K |   375 |  00:00:03 |
| 10  |       SORT GROUP BY          |            |   700 |   40K |   375 |  00:00:03 |
| 11  |        HASH JOIN             |            |  146K | 8454K |   368 |  00:00:03 |
| 12  |         TABLE ACCESS FULL    | T2         |   700 |   18K |     3 |  00:00:01 |
| 13  |         TABLE ACCESS FULL    | T2_LINES   |  146K | 4664K |   364 |  00:00:03 |
| 14  |    INDEX RANGE SCAN          | T1_LINES_PK|     1 |    37 |     1 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0))

Statistics
----------------------------------------------------------                     
          1  recursive calls                                                   
          0  db block gets                                                     
       7508  consistent gets                                                   
          0  physical reads                                                    
          0  redo size                                                         
     104015  bytes sent via SQL*Net to client                                  
        549  bytes received via SQL*Net from client                            
          9  SQL*Net roundtrips to/from client                                 
          3  sorts (memory)                                                    
          0  sorts (disk)                                                      
       7604  rows processed

INSERT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=100, OPTIMIZER_INDEX_COST_ADJ=80), completed in… 0.35 seconds when inserting 10,478 rows (same execution plan for OPTIMIZER_FEATURES_ENABLE='10.2.0.2'):

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT             |            |       |       |   442 |           |
| 1   |  SORT GROUP BY               |            |   489 |   46K |   442 |  00:00:04 |
| 2   |   NESTED LOOPS               |            |  1006 |   95K |   441 |  00:00:04 |
| 3   |    FILTER                    |            |       |       |       |           |
| 4   |     HASH JOIN OUTER          |            |   671 |   39K |   440 |  00:00:04 |
| 5   |      VIEW                    |            |   671 |   14K |    65 |  00:00:01 |
| 6   |       SORT GROUP BY          |            |   671 |   17K |    65 |  00:00:01 |
| 7   |        FILTER                |            |       |       |       |           |
| 8   |         INDEX FAST FULL SCAN | T1_PK      |  107K | 2773K |    60 |  00:00:01 |
| 9   |      VIEW                    |            |   700 |   27K |   375 |  00:00:03 |
| 10  |       SORT GROUP BY          |            |   700 |   40K |   375 |  00:00:03 |
| 11  |        HASH JOIN             |            |  146K | 8454K |   368 |  00:00:03 |
| 12  |         TABLE ACCESS FULL    | T2         |   700 |   18K |     3 |  00:00:01 |
| 13  |         TABLE ACCESS FULL    | T2_LINES   |  146K | 4664K |   364 |  00:00:03 |
| 14  |    INDEX RANGE SCAN          | T1_LINES_PK|     1 |    37 |     1 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter(COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
11 - access("CO"."ID"="COL"."CUST_ORDER_ID")
12 - filter("CUSTOMER_PO_REF" IS NOT NULL)
13 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0))

Statistics
----------------------------------------------------------                     
        183  recursive calls                                                   
        495  db block gets                                                     
       7702  consistent gets                                                   
          2  physical reads                                                    
     294848  redo size                                                         
        830  bytes sent via SQL*Net to client                                  
       1712  bytes received via SQL*Net from client                            
          3  SQL*Net roundtrips to/from client                                 
          4  sorts (memory)                                                    
          0  sorts (disk)                                                      
       7604  rows processed

Again, notice that the execution plan remained the same for the SELECT and INSERT INTO SQL statements.  Is this a bug in 11.2.0.1 that does not exist in 10.2.0.5, or was a bug introduced in 10.2.0.5?  Let’s take a look at a portion of the 10053 trace file from Oracle Database 11.2.0.1, comparing the output for the two settings of OPTIMIZER_FEATURES_ENABLE where a change in the execution plan was identified in 11.2.0.1.  With the setting at 10.2.0.2 we see that the complex view merge was prevented due to a CBQT (cost-based query transformation) directive, while that did not happen with the setting at 10.1.0.4:


11.2.0.1 

OPTIMIZER_FEATURES_ENABLE='10.1.0.4'

OPTIMIZER_FEATURES_ENABLE='10.2.0.2'

 

JE:   Considering Join Elimination on query block SEL$4 (#0)

JE:   Considering Join Elimination on query block SEL$4 (#0)

 

*************************

*************************

 

Join Elimination (JE)   

Join Elimination (JE)   

 

*************************

*************************

 

SQL:******* UNPARSED QUERY IS *******

SQL:******* UNPARSED QUERY IS *******

 

SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF"

SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF"

 

SQL:******* UNPARSED QUERY IS *******

SQL:******* UNPARSED QUERY IS *******

 

SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF"

SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF"

 

Query block SEL$4 (#0) unchanged

Query block SEL$4 (#0) unchanged

Different

CVM:   Checking validity of merging in query block SEL$2 (#0)

CVM:     CVM bypassed: view on right side of Outer Join + MuLTiple TABle.

Different

CVM: Considering view merge in query block SEL$2 (#0)

 

Different

CVM:   Checking validity of merging in query block SEL$3 (#0)

 

Different

CVM: Considering view merge in query block SEL$3 (#0)

 

 

CNT:   Considering count(col) to count(*) on query block SEL$3 (#0)

CNT:   Considering count(col) to count(*) on query block SEL$3 (#0)

 

*************************

*************************

 

Count(col) to Count(*) (CNT)

Count(col) to Count(*) (CNT)

 

*************************

*************************

 

CNT:     COUNT() to COUNT(*) not done.

CNT:     COUNT() to COUNT(*) not done.

Different

 

CVM: CBQT Marking query block SEL$3 (#0) as valid for CVM.

Different

CVM:   Merging complex view SEL$3 (#0) into SEL$2 (#0).

CVM:   Not Merging SEL$3 (#0) into SEL$2 (#0) due to CBQT directive.

Different

qbcp:******* UNPARSED QUERY IS *******

 

Different

SELECT /*+ LEADING ("E") */ "E"."CUSTPO" "CUSTPO","EL"."FCSTQTY" "FCSTQTY","EL"."FCDUEDATE" "FCDUEDATE" FROM  (SELECT "E"."CUSTPO" "CUSTPO",MAX("E"."RELID") "RELID" FROM "TESTUSER"."T1" "E" GROUP BY "E"."CUSTPO") "E","TESTUSER"."T1_LINES" "EL" WHERE "E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0

 

Different

vqbcp:******* UNPARSED QUERY IS *******

 

Different

SELECT "E"."CUSTPO" "CUSTPO",MAX("E"."RELID") "RELID" FROM "TESTUSER"."T1" "E" GROUP BY "E"."CUSTPO"

 

Different

CVM: result SEL$2 (#0)

 

Different

******* UNPARSED QUERY IS *******

 

Different

SELECT /*+ LEADING ("E") */ "E"."CUSTPO" "CUSTPO","EL"."FCSTQTY" "FCSTQTY","EL"."FCDUEDATE" "FCDUEDATE" FROM "TESTUSER"."T1" "E","TESTUSER"."T1_LINES" "EL" WHERE "E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0 GROUP BY "E"."CUSTPO","EL".ROWID,"EL"."RELID","EL"."FCDUEDATE","EL"."FCSTQTY" HAVING MAX("E"."RELID")="EL"."RELID"

 

 

JE:   Considering Join Elimination on query block SEL$2 (#0)

JE:   Considering Join Elimination on query block SEL$2 (#0)

 

*************************

*************************

 

Join Elimination (JE)   

Join Elimination (JE)   

 

*************************

*************************

 

SQL:******* UNPARSED QUERY IS *******

SQL:******* UNPARSED QUERY IS *******

Different

SELECT /*+ LEADING ("E") */ "E"."CUSTPO" "CUSTPO","EL"."FCSTQTY" "FCSTQTY","EL"."FCDUEDATE" "FCDUEDATE" FROM "TESTUSER"."T1" "E","TESTUSER"."T1_LINES" "EL" WHERE "E"."CUSTPO"="EL"."CUSTPO" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0 GROUP BY "E"."CUSTPO","EL".ROWID,"EL"."RELID","EL"."FCDUEDATE","EL"."FCSTQTY" HAVING MAX("E"."RELID")="EL"."RELID"

SELECT /*+ LEADING ("E") */ "E"."CUSTPO" "CUSTPO","EL"."FCSTQTY" "FCSTQTY","EL"."FCDUEDATE" "FCDUEDATE" FROM  (SELECT "E"."CUSTPO" "CUSTPO",MAX("E"."RELID") "RELID" FROM "TESTUSER"."T1" "E" GROUP BY "E"."CUSTPO") "E","TESTUSER"."T1_LINES" "EL" WHERE "E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0

 ——————-

Let’s take a look at a portion of the 10053 trace file on Oracle Database 10.2.0.5, comparing the output for the same two settings of OPTIMIZER_FEATURES_ENABLE that produced a change in the execution plan in 11.2.0.1.  With the setting at 10.2.0.2 we see that the complex view merge was prevented due to a CBQT (cost-based query transformation) directive, and that also happened with the setting at 10.1.0.4:

10.2.0.5

OPTIMIZER_FEATURES_ENABLE='10.1.0.4'

OPTIMIZER_FEATURES_ENABLE='10.2.0.2'

 

**************************

**************************

 

Predicate Move-Around (PM)

Predicate Move-Around (PM)

 

**************************

**************************

 

PM: Considering predicate move-around in INS$1 (#0).

PM: Considering predicate move-around in INS$1 (#0).

 

PM:   Checking validity of predicate move-around in INS$1 (#0).

PM:   Checking validity of predicate move-around in INS$1 (#0).

 

CBQT: Validity checks passed for 2bxr3x54dk7nq.

CBQT: Validity checks passed for 2bxr3x54dk7nq.

 

voptcojrj: logp:0000000012312FF8

voptcojrj: logp:0000000012312FF8

 

voptcojrj:"E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+)

voptcojrj:"E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+)

 

rejected

rejected

 

voptcojrj:"E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)

voptcojrj:"E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)

 

rejected

rejected

Different

 

Query block (0000000012314D80) before join elimination:

Different

 

SQL:******* UNPARSED QUERY IS *******

Different

 

SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF"

Different

 

Query block (0000000012314D80) unchanged

 

CVM: CBQT Marking query block SEL$3 (#0)as valid for CVM.

CVM: CBQT Marking query block SEL$3 (#0)as valid for CVM.

 

CVM:  Not merging SEL$3 (#0) into SEL$2 (#0) due to CBQT directive.

CVM:  Not merging SEL$3 (#0) into SEL$2 (#0) due to CBQT directive.

Different

CVM:   Merging SPJ view SEL$2 (#0) into SEL$1 (#0)

Query block (000000001231DE28) before join elimination:

Different

 

SQL:******* UNPARSED QUERY IS *******

Different

 

SELECT /*+ LEADING ("E") */ "E"."CUSTPO" "CUSTPO","EL"."FCSTQTY" "FCSTQTY","EL"."FCDUEDATE" "FCDUEDATE" FROM  (SELECT "E"."CUSTPO" "CUSTPO",MAX("E"."RELID") "RELID" FROM "TESTUSER"."T1" "E" GROUP BY "E"."CUSTPO") "E","TESTUSER"."T1_LINES" "EL" WHERE "E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0

Different

 

Query block (000000001231DE28) unchanged

Different

 

CVM:   Merging SPJ view SEL$2 (#0) into SEL$1 (#0)

 

voptcojrj: logp:0000000012312FF8

voptcojrj: logp:0000000012312FF8

 

voptcojrj:"E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+)

voptcojrj:"E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+)

 

rejected

rejected

 

voptcojrj:"EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)

voptcojrj:"EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)

 

rejected

rejected

 

voptcojrj:"E"."CUSTPO"="EL"."CUSTPO"

voptcojrj:"E"."CUSTPO"="EL"."CUSTPO"

 

rejected

rejected

 

voptcojrj:"E"."RELID"="EL"."RELID"

voptcojrj:"E"."RELID"="EL"."RELID"

 

rejected

rejected

 

voptcojrj:"EL"."FCDUEDATE">=SYSDATE@!-365

voptcojrj:"EL"."FCDUEDATE">=SYSDATE@!-365

 

rejected

rejected

 

voptcojrj:"EL"."FCDUEDATE"<=SYSDATE@!+1200

voptcojrj:"EL"."FCDUEDATE"<=SYSDATE@!+1200

 

rejected

rejected

 

voptcojrj:"EL"."FCSTQTY">0

voptcojrj:"EL"."FCSTQTY">0

 

rejected

rejected

Different

 

Query block (000000001231ED68) before join elimination:

Different

 

SQL:******* UNPARSED QUERY IS *******

Different

 

SELECT /*+ LEADING ("E") LEADING ("E") */ COALESCE("CO"."PART_ID",'NOT-IN-SYSTEM') "PART_ID","EL"."FCDUEDATE" "REQUIRED_DATE",SUM("EL"."FCSTQTY") "QTY",1 "PLAN_LEVEL" FROM  (SELECT "E"."CUSTPO" "CUSTPO",MAX("E"."RELID") "RELID" FROM "TESTUSER"."T1" "E" GROUP BY "E"."CUSTPO") "E","TESTUSER"."T1_LINES" "EL", (SELECT "CO"."CUSTOMER_PO_REF" "CUSTOMER_PO_REF",MAX("COL"."DESIRED_SHIP_DATE") "DESIRED_SHIP_DATE",MAX("COL"."PART_ID") "PART_ID" FROM "TESTUSER"."T2" "CO","TESTUSER"."T2_LINES" "COL" WHERE "CO"."ID"="COL"."CUST_ORDER_ID" AND "CO"."CUSTOMER_PO_REF" IS NOT NULL AND "COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365 GROUP BY "CO"."CUSTOMER_PO_REF") "CO" WHERE "E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+) AND "EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND "E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0 GROUP BY COALESCE("CO"."PART_ID",'NOT-IN-SYSTEM'),"EL"."FCDUEDATE" ORDER BY COALESCE("CO"."PART_ID",'NOT-IN-SYSTEM'),"EL"."FCDUEDATE"

Different

 

Query block (000000001231ED68) unchanged

 

Registered qb: SEL$F5BB74E1 0x1231ed68 (VIEW MERGE SEL$1; SEL$2)

Registered qb: SEL$F5BB74E1 0x1231ed68 (VIEW MERGE SEL$1; SEL$2)

 

  signature (): qb_name=SEL$F5BB74E1 nbfros=3 flg=0

  signature (): qb_name=SEL$F5BB74E1 nbfros=3 flg=0

 

    fro(0): flg=1 objn=0 hint_alias="CO"@"SEL$1"

    fro(0): flg=1 objn=0 hint_alias="CO"@"SEL$1"

 

    fro(1): flg=1 objn=0 hint_alias="E"@"SEL$2"

    fro(1): flg=1 objn=0 hint_alias="E"@"SEL$2"

 

    fro(2): flg=0 objn=47930 hint_alias="EL"@"SEL$2"

    fro(2): flg=0 objn=47930 hint_alias="EL"@"SEL$2"

 

Now what?  Is it considered a bug if Oracle Database 11.2.0.1 behaves similarly to 10.2.0.2, but not to 10.2.0.5, when the OPTIMIZER_FEATURES_ENABLE parameter is set to 10.1.0.4?





SELECT Statement is Fast, INSERT INTO Using the SELECT Statement is Brutally Slow 2

18 12 2010

December 18, 2010 (Modified December 19, 2010)

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

The previous blog article in this series resulted in several interesting comments with very useful advice.  But was the performance issue just an isolated case, one that only happens in one database, with one Oracle Database release version?  We need a test case to determine how frequently this problem may occur.  First, we will create two tables: table T1 will emulate table EDI830 from the previous blog article, and table T1_LINES will emulate the EDI830_LINES table.  A PADDING column will be used to make the average row length roughly equivalent to that of the original tables:

CREATE TABLE T1(
  CUSTPO VARCHAR2(12) NOT NULL,
  RELID VARCHAR2(12) NOT NULL,
  PADDING VARCHAR2(40),
  CONSTRAINT T1_PK PRIMARY KEY (CUSTPO,RELID));

CREATE TABLE T1_LINES(
  CUSTPO VARCHAR2(12) NOT NULL,
  RELID VARCHAR2(12) NOT NULL,
  FCSTQTY NUMBER NOT NULL,
  FC VARCHAR2(1),
  FCDUEDATE DATE NOT NULL,
  CONSTRAINT T1_LINES_PK PRIMARY KEY (CUSTPO,RELID,FCDUEDATE,FCSTQTY)); 

Next, the table T1 is populated in a repeatable way that as best as possible simulates how the original table was actually populated in production:

INSERT INTO T1
SELECT
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||TO_CHAR(ROWNUM),12,'A') CUSTPO,
  LPAD('1',12,'0') RELID,
  LPAD('A',40,'A') PADDING
FROM
  DUAL
CONNECT BY
  LEVEL<=700;

INSERT INTO
  T1
SELECT
  CUSTPO,
  LPAD(TO_CHAR(V2.RN+1),12,'0') RELID,
  LPAD('A',40,'A') PADDING
FROM
  (SELECT /*+ NO_MERGE */
    CUSTPO,
    PADDING,
    MOD(ROWNUM,10)*30+19 RN
  FROM
    (SELECT /*+ NO_MERGE */
      CUSTPO,
      PADDING
    FROM
      T1
    ORDER BY
      CUSTPO)) V1,
  (SELECT /*+ NO_MERGE */
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=319) V2
WHERE
  V2.RN<=V1.RN; 

If we select from table T1, we are able to see how the row counts are distributed in an unequal, yet repeatable way, with 700 distinct customer purchase order numbers: 

SELECT
  CUSTPO,
  MIN(RELID),
  MAX(RELID)
FROM
  T1
GROUP BY
  CUSTPO
ORDER BY
  CUSTPO;

CUSTPO       MIN(RELID)   MAX(RELID)
------------ ------------ ------------
A105AAAAAAAA 000000000001 000000000050
A131AAAAAAAA 000000000001 000000000080
A157AAAAAAAA 000000000001 000000000110
A183AAAAAAAA 000000000001 000000000140
A1AAAAAAAAAA 000000000001 000000000170
A209AAAAAAAA 000000000001 000000000200
A235AAAAAAAA 000000000001 000000000230
A261AAAAAAAA 000000000001 000000000260
...

Z624AAAAAAAA 000000000001 000000000230
Z650AAAAAAAA 000000000001 000000000260
Z676AAAAAAAA 000000000001 000000000290
Z78AAAAAAAAA 000000000001 000000000020

700 rows selected. 
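
The distribution can also be summarized a bit more compactly.  The following query is not part of the original test script; it simply counts how many customer POs ended up with each number of release rows in table T1:

SELECT
  RELEASES,
  COUNT(*) CUSTPO_COUNT
FROM
  (SELECT
    CUSTPO,
    COUNT(*) RELEASES
  FROM
    T1
  GROUP BY
    CUSTPO)
GROUP BY
  RELEASES
ORDER BY
  RELEASES;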

Just as a quick experiment, let’s take a look at the output of the SQL statement that will be used to populate the line detail table (T1_LINES) for the customer POs:

SELECT
  CUSTPO,
  RELID,
  10 FCSTQTY,
  'A' FC,
  TO_DATE('01-JAN-2011','DD-MON-YYYY') + (V2.RN*3 - V1.REL*2) FCDUEDATE
FROM
  (SELECT /*+ NO_MERGE */
    CUSTPO,
    RELID,
    ROW_NUMBER() OVER (PARTITION BY RELID ORDER BY CUSTPO) RN,
    ROW_NUMBER() OVER (PARTITION BY CUSTPO ORDER BY RELID DESC) REL
  FROM
    T1
  WHERE
    CUSTPO IN ('Z78AAAAAAAAA','Y155AAAAAAAA')) V1,
  (SELECT /*+ NO_MERGE */
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=100) V2
WHERE
  V2.RN < (MOD(V1.RN,100)+10)
ORDER BY
  CUSTPO,
  RELID,
  5;

CUSTPO       RELID           FCSTQTY F FCDUEDATE
------------ ------------ ---------- - ---------
Y155AAAAAAAA 000000000001         10 A 26-SEP-10
Y155AAAAAAAA 000000000001         10 A 29-SEP-10
Y155AAAAAAAA 000000000001         10 A 02-OCT-10
Y155AAAAAAAA 000000000001         10 A 05-OCT-10
Y155AAAAAAAA 000000000001         10 A 08-OCT-10
Y155AAAAAAAA 000000000001         10 A 11-OCT-10
Y155AAAAAAAA 000000000001         10 A 14-OCT-10
Y155AAAAAAAA 000000000001         10 A 17-OCT-10
Y155AAAAAAAA 000000000001         10 A 20-OCT-10
Y155AAAAAAAA 000000000001         10 A 23-OCT-10
Y155AAAAAAAA 000000000002         10 A 28-SEP-10
Y155AAAAAAAA 000000000002         10 A 01-OCT-10
...

Z78AAAAAAAAA 000000000020         10 A 17-JAN-11
Z78AAAAAAAAA 000000000020         10 A 20-JAN-11
Z78AAAAAAAAA 000000000020         10 A 23-JAN-11
Z78AAAAAAAAA 000000000020         10 A 26-JAN-11
Z78AAAAAAAAA 000000000020         10 A 29-JAN-11
Z78AAAAAAAAA 000000000020         10 A 01-FEB-11

720 rows selected.

In the above, you can see how the dates will automatically shift from one release of a customer PO to the next, as well as the order in which the rows will be inserted.  The actual dates and number of rows for these two customer POs will be a bit different when the rows are inserted into the table, so treat the above as just an illustration.

Next, let’s insert 6,079,050 rows into the line detail table in a repeatable way:

INSERT INTO
  T1_LINES
SELECT
  CUSTPO,
  RELID,
  10 FCSTQTY,
  'A' FC,
  TO_DATE('01-JAN-2011','DD-MON-YYYY') + (V2.RN*3 - V1.REL*2) FCDUEDATE
FROM
  (SELECT /*+ NO_MERGE */
    CUSTPO,
    RELID,
    ROW_NUMBER() OVER (PARTITION BY RELID ORDER BY CUSTPO) RN,
    ROW_NUMBER() OVER (PARTITION BY CUSTPO ORDER BY RELID DESC) REL
  FROM
    T1) V1,
  (SELECT /*+ NO_MERGE */
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=100) V2
WHERE
  V2.RN < (MOD(V1.RN,100)+10)
ORDER BY
  RELID,
  CUSTPO,
  5; 

COMMIT;
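
As a quick sanity check (not part of the original script), the number of rows just inserted into T1_LINES may be confirmed with a simple count, which should report the 6,079,050 rows mentioned above:

SELECT
  COUNT(*)
FROM
  T1_LINES;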

Now let’s create table T2 (simulates table CUSTOMER_ORDER) and table T2_LINES (simulates table CUST_ORDER_LINE).  Table T2_LINES will be populated with 224,700 rows:

CREATE TABLE T2(
  ID VARCHAR2(15),
  CUSTOMER_ID VARCHAR2(15),
  CUSTOMER_PO_REF VARCHAR2(40),
  PADDING VARCHAR2(150),
  PRIMARY KEY(ID));

CREATE TABLE T2_LINES(
  CUST_ORDER_ID VARCHAR2(15),
  LINE_NO NUMBER,
  PART_ID VARCHAR2(30),
  DESIRED_SHIP_DATE DATE,
  PADDING VARCHAR2(100),
  PRIMARY KEY(CUST_ORDER_ID,LINE_NO)); 

INSERT INTO
  T2
SELECT DISTINCT
  CUSTPO ID,
  'AAAAAAAAAA' CUSTOMER_ID,
  CUSTPO CUSTOMER_PO_REF,
  RPAD('A',150,'A') PADDING
FROM
  T1
ORDER BY
  DBMS_RANDOM.VALUE;

INSERT INTO
  T2_LINES
SELECT
  CUST_ORDER_ID,
  ROW_NUMBER() OVER (PARTITION BY CUST_ORDER_ID ORDER BY DESIRED_SHIP_DATE) LINE_NO,
  RPAD(TO_CHAR((MOD(ROWNUM,3000)+1) * 33)||'PART',10,'A') PART_ID,
  DESIRED_SHIP_DATE,
  RPAD('A',100,'A') PADDING
FROM 
  (SELECT DISTINCT
    CUSTPO CUST_ORDER_ID,
    FCDUEDATE DESIRED_SHIP_DATE   
  FROM
    T1_LINES
  WHERE
    FCDUEDATE<= TO_DATE('15-JAN-2011','DD-MON-YYYY')) V1;

COMMIT;

Finally, we will gather statistics for the tables and the primary key indexes:

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1_LINES',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2_LINES',CASCADE=>TRUE) 
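
If desired, the gathered statistics can be reviewed to confirm the row counts and the average row lengths that the PADDING columns were intended to produce (a hypothetical verification query, not part of the original test):

SELECT
  TABLE_NAME,
  NUM_ROWS,
  AVG_ROW_LEN
FROM
  USER_TABLES
WHERE
  TABLE_NAME IN ('T1','T1_LINES','T2','T2_LINES')
ORDER BY
  TABLE_NAME;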

We are now able to start transforming the original query’s sections; these sections will be listed one at a time, as they were in the earlier article:

    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E 

 The inline view that uses the above inline view: 

  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN TO_DATE('01-JAN-2011','DD-MON-YYYY')-365 AND TO_DATE('01-JAN-2011','DD-MON-YYYY')+1200
    AND EL.FCSTQTY>0) E 

The inline view for the customer orders:

  (SELECT
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO 

Finally, the full SQL statement with all of the inline views joined:

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE; 

Next, we need a simple table to hold the contents of the INSERT INTO… SELECT statement:

CREATE TABLE T1_DEST (
  PART_ID VARCHAR2(30),
  REQUIRED_DATE DATE,
  QTY NUMBER,
  PLAN_LEVEL NUMBER); 

Now the script that will actually perform the test with different values for OPTIMIZER_FEATURES_ENABLE, OPTIMIZER_INDEX_COST_ADJ, and OPTIMIZER_INDEX_CACHING (Edit Dec 19, 2010: the original version of the script set the TRACEFILE_IDENTIFIER value identically for the SELECT and the INSERT INTO versions of the SQL statement when the OPTIMIZER_FEATURES_ENABLE parameter was greater than 10.1.0.4; that error caused the SELECT and INSERT INTO execution plans to appear in the same trace file rather than in separate trace files, and the script below has been corrected):


SET TIMING ON
SET ARRAYSIZE 1000
SET AUTOTRACE TRACEONLY STATISTICS

SPOOL Capture_Results.txt

ALTER SESSION SET OPTIMIZER_INDEX_CACHING=0;
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ=100;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.1.0.4';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_10.1.0.4';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_10.1.0.4';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.2.0.2';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_10.2.0.2';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_10.2.0.2';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='11.1.0.7';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_11.1.0.7';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_11.1.0.7';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='11.2.0.1';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_11.2.0.1';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_11.2.0.1';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+  */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

SPOOL Capture_Results80.txt

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_INDEX_CACHING=100;
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ=80;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.1.0.4';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_10.1.0.4_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_10.1.0.4_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='10.2.0.2';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_10.2.0.2_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_10.2.0.2_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='11.1.0.7';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_11.1.0.7_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_11.1.0.7_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

TRUNCATE TABLE T1_DEST;

ALTER SESSION SET OPTIMIZER_FEATURES_ENABLE='11.2.0.1';
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SELECT_11.2.0.1_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'INSERT_11.2.0.1_80';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

INSERT INTO T1_DEST
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT /*+ _80 */
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      T1 E
    GROUP BY
      E.CUSTPO) E,
    T1_LINES EL
  WHERE
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT /*+  */
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    T2 CO,
    T2_LINES COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;

ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

SPOOL OFF
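
The 10053 trace files created by the script are written to the database's trace directory, with the TRACEFILE_IDENTIFIER values set in the script appearing in the trace file names.  If you are uncertain where the trace files are written, the following sketch (assuming Oracle Database 11.1 or later) locates the directory:

SELECT
  VALUE
FROM
  V$DIAG_INFO
WHERE
  NAME = 'Diag Trace';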

For this blog article, I decided to use the Enterprise Edition of Oracle Database 11.2.0.1, with the following manually set system (CPU) statistics (as printed in the 10053 trace files):

SYSTEM STATISTICS INFORMATION
-----------------------------
  Using WORKLOAD Stats
  CPUSPEED: 2664 millions instructions/sec
  SREADTIM: 8.000000 milliseconds
  MREADTIM: 10.000000 millisecons
  MBRC: 16 blocks
  MAXTHR: 19181568 bytes/sec
  SLAVETHR: -1 bytes/sec 
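
For reference, a rough sketch of how similar workload system statistics could be set manually with DBMS_STATS.SET_SYSTEM_STATS follows (this is not part of the original test scripts, and the values captured on your own test server will almost certainly differ):

EXEC DBMS_STATS.SET_SYSTEM_STATS(PNAME=>'CPUSPEED',PVALUE=>2664)
EXEC DBMS_STATS.SET_SYSTEM_STATS(PNAME=>'SREADTIM',PVALUE=>8)
EXEC DBMS_STATS.SET_SYSTEM_STATS(PNAME=>'MREADTIM',PVALUE=>10)
EXEC DBMS_STATS.SET_SYSTEM_STATS(PNAME=>'MBRC',PVALUE=>16)
EXEC DBMS_STATS.SET_SYSTEM_STATS(PNAME=>'MAXTHR',PVALUE=>19181568)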

The tablespace used for the test case uses ASSM with auto-allocated extent sizes, and the following instance-wide parameters were specified:

SGA_TARGET =          12000M
PGA_AGGREGATE_TARGET = 2000M
OPTIMIZER_MODE =    ALL_ROWS
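
If you would like to confirm that your test instance is using comparable settings before running the script, a quick check from SQL*Plus (a sketch, not part of the original scripts) might look like this:

SHOW PARAMETER SGA_TARGET
SHOW PARAMETER PGA_AGGREGATE_TARGET
SHOW PARAMETER OPTIMIZER_MODE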

Just for fun, for now I will show you only the first two execution plans from the 10053 trace files:

SELECT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in 0.23 seconds when selecting 10,478 rows:

--------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name       | Rows  | Bytes | Cost  | Time      |
--------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |            |       |       |  1425 |           |
| 1   |  SORT GROUP BY               |            |   487 |   46K |  1425 |  00:00:12 |
| 2   |   FILTER                     |            |       |       |       |           |
| 3   |    HASH JOIN RIGHT OUTER     |            |   19K | 1884K |  1423 |  00:00:12 |
| 4   |     VIEW                     |            |   700 |   27K |   368 |  00:00:03 |
| 5   |      SORT GROUP BY           |            |   700 |   40K |   368 |  00:00:03 |
| 6   |       HASH JOIN              |            |  146K | 8475K |   362 |  00:00:03 |
| 7   |        TABLE ACCESS FULL     | T2         |   700 |   18K |     3 |  00:00:01 |
| 8   |        TABLE ACCESS FULL     | T2_LINES   |  146K | 4676K |   358 |  00:00:03 |
| 9   |     NESTED LOOPS             |            |   19K | 1126K |  1055 |  00:00:09 |
| 10  |      VIEW                    |            |   700 |   14K |    65 |  00:00:01 |
| 11  |       SORT GROUP BY          |            |   700 |   18K |    65 |  00:00:01 |
| 12  |        FILTER                |            |       |       |       |           |
| 13  |         INDEX FAST FULL SCAN | T1_PK      |  106K | 2755K |    60 |  00:00:01 |
| 14  |      INDEX RANGE SCAN        | T1_LINES_PK|    28 |  1036 |     3 |  00:00:01 |
--------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter(("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - access("CO"."ID"="COL"."CUST_ORDER_ID")
7 - filter("CUSTOMER_PO_REF" IS NOT NULL)
8 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365)
12 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
14 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
14 - filter("EL"."FCSTQTY">0) 

INSERT_10.1.0.4 (OPTIMIZER_FEATURES_ENABLE='10.1.0.4', OPTIMIZER_INDEX_CACHING=0, OPTIMIZER_INDEX_COST_ADJ=100), completed in… 30 minutes, 18.01 seconds when inserting 10,478 rows:

-------------------------------------------------+-----------------------------------+
| Id  | Operation                      | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT               |         |       |       | 4967K |           |
| 1   |  LOAD TABLE CONVENTIONAL       |         |       |       |       |           |
| 2   |   SORT GROUP BY                |         |   842 |   60K | 4967K |  11:18:11 |
| 3   |    FILTER                      |         |       |       |       |           |
| 4   |     HASH JOIN OUTER            |         |   842 |   60K | 4967K |  11:18:11 |
| 5   |      VIEW                      |         |   842 |   28K | 4967K |  11:18:08 |
| 6   |       FILTER                   |         |       |       |       |           |
| 7   |        SORT GROUP BY           |         |   842 |   53K | 4967K |  11:18:08 |
| 8   |         FILTER                 |         |       |       |       |           |
| 9   |          HASH JOIN             |         |  852M |   54G |  7035 |  00:00:57 |
| 10  |           INDEX FAST FULL SCAN | T1_PK   |  106K | 2755K |    60 |  00:00:01 |
| 11  |           TABLE ACCESS FULL    | T1_LINES| 5631K |  214M |  3388 |  00:00:28 |
| 12  |      VIEW                      |         |   700 |   27K |   368 |  00:00:03 |
| 13  |       SORT GROUP BY            |         |   700 |   40K |   368 |  00:00:03 |
| 14  |        HASH JOIN               |         |  146K | 8475K |   362 |  00:00:03 |
| 15  |         TABLE ACCESS FULL      | T2      |   700 |   18K |     3 |  00:00:01 |
| 16  |         TABLE ACCESS FULL      | T2_LINES|  146K | 4676K |   358 |  00:00:03 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
3 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
4 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
6 - filter("EL"."RELID"=MAX("RELID"))
8 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
9 - access("E"."CUSTPO"="EL"."CUSTPO")
11 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200 AND "EL"."FCSTQTY">0))
14 - access("CO"."ID"="COL"."CUST_ORDER_ID")
15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
16 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

——————-

Just one more example of the value of test cases.  By the way, make certain that the air conditioning is working well in the room where your test server is located – your test server could be swamped for a couple of hours while the table creation scripts run and the actual test script is executed.  So, what is ahead in part three of this blog article series?

(P.S. Did anyone’s test result for the first INSERT INTO statement complete in less than 30 minutes?)





SELECT Statement is Fast, INSERT INTO Using the SELECT Statement is Brutally Slow 1

15 12 2010

December 15, 2010 

(Forward to the Next Post in the Series)

I am impressed with the responses that were received from the most recent set of blog articles.  I recently encountered an interesting problem with an Oracle database that had the OPTIMIZER_FEATURES_ENABLE parameter set to 10.1.0.4 and the OPTIMIZER_MODE set to ALL_ROWS.  A somewhat complex SQL statement was written like the following, using LEADING hints to help control the join order (to work around a potential problem related to join order when inline views are used):

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      EDI830 E
    GROUP BY
      E.CUSTPO) E,
    EDI830_LINES EL
  WHERE 
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E,
  (SELECT
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    CUSTOMER_ORDER CO,
    CUST_ORDER_LINE COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE; 

The above query executes very quickly (a second or less), and the execution plan for the above query looks like this:

Execution Plan
----------------------------------------------------------
Plan hash value: 3426513411

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |   205 | 17425 |   157  (21)| 00:00:01 |
|   1 |  SORT GROUP BY              |                 |   205 | 17425 |   157  (21)| 00:00:01 |
|*  2 |   FILTER                    |                 |       |       |            |          |
|*  3 |    HASH JOIN OUTER          |                 |   384 | 32640 |   156  (20)| 00:00:01 |
|   4 |     NESTED LOOPS            |                 |   194 |  9506 |    47  (35)| 00:00:01 |
|   5 |      VIEW                   |                 |  1993 | 35874 |    46  (35)| 00:00:01 |
|   6 |       SORT GROUP BY         |                 |  1993 | 39860 |    46  (35)| 00:00:01 |
|*  7 |        FILTER               |                 |       |       |            |          |
|   8 |         INDEX FAST FULL SCAN| EDI830_PK       |   107K|  2092K|    33  (10)| 00:00:01 |
|*  9 |      INDEX RANGE SCAN       | EDI830_LINES_PK |     1 |    31 |     1   (0)| 00:00:01 |
|  10 |     VIEW                    |                 |  6254 |   219K|   108  (13)| 00:00:01 |
|  11 |      SORT GROUP BY          |                 |  6254 |   250K|   108  (13)| 00:00:01 |
|* 12 |       HASH JOIN             |                 |  8213 |   328K|   106  (12)| 00:00:01 |
|* 13 |        TABLE ACCESS FULL    | CUSTOMER_ORDER  |  7554 |   125K|    14   (8)| 00:00:01 |
|* 14 |        TABLE ACCESS FULL    | CUST_ORDER_LINE |  8582 |   201K|    92  (12)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("EL"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365) AND
              COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365)<SYSDATE@!+1200)
   3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF"(+))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
   9 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND
              "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
              "EL"."FCDUEDATE"<=SYSDATE@!+1200)
       filter("EL"."FCSTQTY">0)
  12 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  13 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  14 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

Now, assume that we have a very simple table that currently contains no rows (the table was TRUNCATEd) and no indexes, and we would like to insert the rows returned by the above SQL statement into the table:

INSERT INTO
  T1
SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
...
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE; 
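
The original article does not show the definition of the destination table; a hypothetical definition that is consistent with the column list of the SELECT statement (the actual column names and data types may differ) might look like this:

CREATE TABLE T1 (
  PART_ID VARCHAR2(30),
  REQUIRED_DATE DATE,
  QTY NUMBER,
  PLAN_LEVEL NUMBER);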

The INSERT INTO SQL statement required 20 to 30 minutes to complete.  Using SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL')); the following execution plan is displayed:

--------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT             |                 |       |       |       |   279K(100)|          |
|   1 |  SORT GROUP BY               |                 |   205 | 13735 |       |   279K  (9)| 00:22:11 |
|*  2 |   FILTER                     |                 |       |       |       |            |          |
|   3 |    MERGE JOIN OUTER          |                 | 81363 |  5323K|       |   279K  (9)| 00:22:11 |
|   4 |     VIEW                     |                 | 70658 |  2139K|       |   279K  (9)| 00:22:10 |
|*  5 |      FILTER                  |                 |       |       |       |            |          |
|   6 |       SORT GROUP BY          |                 | 70658 |  3519K|  7169M|   279K  (9)| 00:22:10 |
|*  7 |        FILTER                |                 |       |       |       |            |          |
|*  8 |         HASH JOIN            |                 |   109M|  5343M|  3352K|   1154 (30)| 00:00:06 |
|   9 |          INDEX FAST FULL SCAN| EDI830_PK       |   107K|  2092K|       |     33 (10)| 00:00:01 |
|* 10 |          TABLE ACCESS FULL   | EDI830_LINES    |   242K|  7334K|       |    941 (35)| 00:00:05 |
|* 11 |     SORT JOIN                |                 |  6254 |   219K|       |    110 (15)| 00:00:01 |
|  12 |      VIEW                    |                 |  6254 |   219K|       |    108 (13)| 00:00:01 |
|  13 |       SORT GROUP BY          |                 |  6254 |   250K|       |    108 (13)| 00:00:01 |
|* 14 |        HASH JOIN             |                 |  8213 |   328K|       |    106 (12)| 00:00:01 |
|* 15 |         TABLE ACCESS FULL    | CUSTOMER_ORDER  |  7554 |   125K|       |     14  (8)| 00:00:01 |
|* 16 |         TABLE ACCESS FULL    | CUST_ORDER_LINE |  8582 |   201K|       |     92 (12)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
   5 - filter("EL"."RELID"=MAX("RELID"))
   7 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
   8 - access("E"."CUSTPO"="EL"."CUSTPO")
  10 - filter(("EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND
       "EL"."FCDUEDATE"<=SYSDATE@!+1200))
  11 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
       filter("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
  14 - access("CO"."ID"="COL"."CUST_ORDER_ID")
  15 - filter("CUSTOMER_PO_REF" IS NOT NULL)
  16 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

What happened, how would you investigate the problem, and how would you correct the problem?  Think about that for a couple of minutes before scrolling down.

Assume that I was able to obtain an execution plan for the INSERT INTO SQL statement that looked like this:

-------------------------------------------------------+-----------------------------------+
| Id  | Operation                     | Name           | Rows  | Bytes | Cost  | Time      |
-------------------------------------------------------+-----------------------------------+
| 0   | INSERT STATEMENT              |                |       |       |   157 |           |
| 1   |  SORT GROUP BY                |                |   194 |   13K |   157 |  00:00:01 |
| 2   |   FILTER                      |                |       |       |       |           |
| 3   |    HASH JOIN OUTER            |                |   223 |   15K |   156 |  00:00:01 |
| 4   |     VIEW                      |                |   194 |  6014 |    47 |  00:00:01 |
| 5   |      NESTED LOOPS             |                |   194 |  9118 |    47 |  00:00:01 |
| 6   |       VIEW                    |                |  1993 |   31K |    46 |  00:00:01 |
| 7   |        SORT GROUP BY          |                |  1993 |   39K |    46 |  00:00:01 |
| 8   |         FILTER                |                |       |       |       |           |
| 9   |          INDEX FAST FULL SCAN | EDI830_PK      |  105K | 2092K |    33 |  00:00:01 |
| 10  |       INDEX RANGE SCAN        | EDI830_LINES_PK|     1 |    31 |     1 |  00:00:01 |
| 11  |     VIEW                      |                |  6254 |  220K |   108 |  00:00:01 |
| 12  |      SORT GROUP BY            |                |  6254 |  250K |   108 |  00:00:01 |
| 13  |       HASH JOIN               |                |  8213 |  329K |   106 |  00:00:01 |
| 14  |        TABLE ACCESS FULL      | CUSTOMER_ORDER |  7554 |  125K |    14 |  00:00:01 |
| 15  |        TABLE ACCESS FULL      | CUST_ORDER_LINE|  8582 |  201K |    92 |  00:00:01 |
-------------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter("E"."FCDUEDATE">COALESCE("CO"."DESIRED_SHIP_DATE",SYSDATE@!-365))
3 - access("E"."CUSTPO"="CO"."CUSTOMER_PO_REF")
8 - filter(SYSDATE@!-365<=SYSDATE@!+1200)
10 - access("E"."CUSTPO"="EL"."CUSTPO" AND "E"."RELID"="EL"."RELID" AND "EL"."FCDUEDATE">=SYSDATE@!-365 AND "EL"."FCSTQTY">0 AND "EL"."FCDUEDATE"<=SYSDATE@!+1200)
10 - filter("EL"."FCSTQTY">0)
13 - access("CO"."ID"="COL"."CUST_ORDER_ID")
14 - filter("CUSTOMER_PO_REF" IS NOT NULL)
15 - filter("COL"."DESIRED_SHIP_DATE">=SYSDATE@!-365) 

Using the above execution plan, the INSERT INTO SQL statement completed in about a second (or less).  What did I do to fix the problem?

———

Here is what the sections of the query are intended to accomplish:

On a daily basis the header rows for customer order forecasts are inserted into the EDI830 table, and each order may be revised multiple times (possibly once a day or once a week), each time bumping the RELID value up by 1 or 2 (RELID is a zero-padded number stored in a VARCHAR2 column).  This part of the query retrieves the most recent release number for each of the customer POs so that we are able to return the line item detail for the most recent customer order forecast release:

    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      EDI830 E
    GROUP BY
      E.CUSTPO) E 
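
The zero padding of RELID matters here because MAX performs a character comparison on the VARCHAR2 column; without the padding, release '9' would incorrectly be treated as more recent than release '10'.  A quick illustration (not from the original article):

SELECT MAX(RELID) FROM
  (SELECT '09' RELID FROM DUAL UNION ALL
   SELECT '10' RELID FROM DUAL);   -- returns '10' (zero padded)

SELECT MAX(RELID) FROM
  (SELECT '9' RELID FROM DUAL UNION ALL
   SELECT '10' RELID FROM DUAL);   -- returns '9' (character comparison without padding)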

Join the above inline view to the line item detail, with restrictions so that we do not look at releases from more than a year ago.  The LEADING hint is used to make certain that the optimizer first returns the rows from the EDI830 table before accessing the EDI830_LINES table (there is a chance that the optimizer will access the EDI830_LINES table first before returning the rows from the inline view):

  (SELECT /*+ LEADING(E) */
    E.CUSTPO,
    EL.FCSTQTY,
    EL.FCDUEDATE
  FROM
    (SELECT
      E.CUSTPO,
      MAX(RELID) RELID
    FROM
      EDI830 E
    GROUP BY
      E.CUSTPO) E,
    EDI830_LINES EL
  WHERE 
    E.CUSTPO=EL.CUSTPO
    AND E.RELID=EL.RELID
    AND EL.FCDUEDATE BETWEEN SYSDATE-365 AND SYSDATE+1200
    AND EL.FCSTQTY>0) E 

The goal of the following subquery is to return the ship date of the last order line that made it into the order entry system for each of the customer POs.  This subquery will allow us to determine which of the rows in the most recent EDI release are beyond the last firmed customer order line in the system (typically, only the firmed EDI release lines are brought into the CUST_ORDER_LINE table):

  (SELECT
    CO.CUSTOMER_PO_REF,
    MAX(COL.DESIRED_SHIP_DATE) DESIRED_SHIP_DATE,
    MAX(COL.PART_ID) PART_ID
  FROM
    CUSTOMER_ORDER CO,
    CUST_ORDER_LINE COL
  WHERE
    CO.ID=COL.CUST_ORDER_ID
    AND CUSTOMER_PO_REF IS NOT NULL
    AND COL.DESIRED_SHIP_DATE>=SYSDATE-365
  GROUP BY
    CO.CUSTOMER_PO_REF) CO

The outer-most portion of the SQL statement joins the two inline views together, using a LEADING hint to make certain that the EDI release rows are retrieved before accessing the CUSTOMER_ORDER and CUST_ORDER_LINE tables (even with the outer join, it is possible that those tables might be accessed first):

SELECT /*+ LEADING(E) */
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM') PART_ID,
  E.FCDUEDATE REQUIRED_DATE,
  SUM(E.FCSTQTY) QTY,
  1 PLAN_LEVEL
FROM
...
  GROUP BY
    CO.CUSTOMER_PO_REF) CO
WHERE
  E.CUSTPO=CO.CUSTOMER_PO_REF(+)
  AND E.FCDUEDATE>COALESCE(CO.DESIRED_SHIP_DATE,SYSDATE-365)
GROUP BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE
ORDER BY
  COALESCE(CO.PART_ID,'NOT-IN-SYSTEM'),
  E.FCDUEDATE;  

What happened?  What process would you use to investigate the performance problem?  How would you resolve this performance problem?

If you tell me to buy a bigger server, I will simply tell you to read a different book.  :-)







