True or False – Wait Events

March 31, 2010

A continuation of the series of true or false quizzes for the week – maybe this one is too easy.  Today’s quiz is on the topic of wait events.   Assuming that the questions apply to a release of Oracle that has been available for up to five years, state why you believe that the answer to the question is true, or why you believe that the answer to the question is false.  Any answers that attempt to utilize logical fallacies will be scored as incorrect.

1. Assume that a database instance is being prepared for 1,500 to 2,000 sessions.  Given an unlimited budget, it is possible to configure the database instance to completely eliminate wait events.

2. Consider two identical Oracle databases with a single session connected to each database instance.  Each of the two sessions submits exactly the same SQL statement, with the first session experiencing no wait events while executing the query, and the second session experiencing nearly continuous I/O wait events.  The lack of wait events implies that the query execution for the first session is optimal, and should be the performance target for the second session.

3. Idle wait events are insignificant in performance tuning exercises, and should be ignored.

4. For every one minute of elapsed time, each CPU in the server is capable of accumulating 60 seconds of CPU wait time.

Did I mention that I dislike true/false type questions?  But then these are not simple true/false questions.





True or False – Improving Performance of SQL Statements

March 30, 2010

A continuation of the concept of yesterday’s true or false quiz – this one is a little easier, with new reading material.  Today’s quiz is on the topic of improving performance of SQL statements.  The reading material:
Article 1
Article 2
Article 3

Article 4
Article 5
Article 6
Article 7
Article 8 (actually a book)
Article 9 (actually a book)
Article 10 (actually a book)
Article 11

Please read the above articles and see if you are able to answer the following true or false questions, assuming that the questions apply to a release of Oracle that has been available for up to five years.  State why you believe that the answer to the question is true, or why you believe that the answer to the question is false.  Any answers that attempt to utilize logical fallacies will be scored as incorrect.  (Most of these questions can be answered with the help of the first couple of links.)

1. Queries containing subqueries should be rewritten as a logical step in improving query performance.

2. Complex queries should be decomposed into multiple queries using global temp tables and/or WITH clauses as a step toward improving query performance.

3. Significant performance improvements, possibly 20 fold, are possible by replacing some or all of a SQL statement with PL/SQL.  If true, provide an example showing a significant performance improvement with such a replacement.  If not true, provide an example that shows that performance did not improve significantly.

4. There are cases where performance improvements are possible by modifying a SQL statement containing a three table equijoin, which accesses primary and foreign key columns of the tables, into a SQL statement which accesses a single table with two PL/SQL functions (each containing a SELECT) in column positions.





True or False – Oracle Sorting

March 29, 2010 (Updated April 6, 2010 with a test table)

I recently encountered a discussion thread (dbaforums.org/oracle/index.php?s=eacd9ff86b358b4a14ecd3fd7653a9fd&showtopic=19407) that pointed to a news article about the internals of Oracle sorting.  The news article (dba-oracle.com/t_oracle_sorting.htm) has a date of October 15, 2007, so it is probably reasonable to assume that the article describes the behavior of Oracle Database 10g R2, and possibly Oracle Database 11g R1.

Please read the news article and see if you are able to answer the following true or false questions.  State why you believe that the answer to the question is true, or why you believe that the answer to the question is false.  Any answers that attempt to utilize logical fallacies will be scored as incorrect.

1. Sequencing of database output first started in the 1960s.

2. One of the very important components of Oracle tuning is Oracle sorting, yet that process is often overlooked.

3. SSD is a synonym for a super-fast RAM disk.

4. Oracle Database always performs an automatic sorting operation when a GROUP BY clause is used in a SQL statement, when an index is created, and when an ORDER BY clause is used in a SQL statement.

5. The cheapest method is always used by Oracle Database when ordering a resultset.

6. A hinted execution plan involving a single table, with a /*+ index */ hint, will always retrieve the rows in the sorted order of the index.

7. If a SQL statement requires a single sort operation that completes in memory, that SQL statement will not use any space in the TEMP tablespace when the rows are retrieved – with the assumption that a hash join did not spill to disk.

8. The CPU_COST parameter causes Oracle Database to favor the pre-sorted ordering of an index over a discrete sorting operation.

9. The value of the SORT_AREA_SIZE parameter or the PGA_AGGREGATE_TARGET parameter if used, influences Oracle Database’s decision to prefer the pre-sorted ordering of an index over a discrete sorting operation.

10. The clustering factor of an index influences Oracle Database’s decision to prefer the pre-sorted ordering of an index over a discrete sorting operation.

11. The default database block size in use by the database influences Oracle Database’s decision to prefer the pre-sorted ordering of an index over a discrete sorting operation.

12. A sort operation will only spill to disk when RAM is exhausted.

13. “At the time a session is established with Oracle, a private sort area is allocated in memory for use by the session for sorting, based on the value of the sort_area_size initialization parameter.”  Supporting evidence:
http://books.google.com/books?id=gsFC1D1LmvQC&pg=PA306&lpg=PA306#v=onepage&q=&f=false
http://www.articles.freemegazone.com/oracle-sorting.php
oracle-training.cc/oracle_tips_sort_operations.htm

14. For sort intensive tasks it is not possible to adjust the amount of memory allocated to those tasks by adjusting the SORT_AREA_SIZE parameter at the session level.

15. The entire database can be slowed down due to a disk sort in the TEMP tablespace because sorts to disk are I/O intensive.

16. A good general rule is that the SORT_AREA_SIZE parameter should be adjusted to eliminate sorts to disk caused by GROUP BY operations.

17. Buffer pool blocks are allocated to hold or manage the blocks that are in the TEMP tablespace.

18. An optimal workarea execution, completed entirely in memory, is always preferred over a one-pass or multi-pass workarea execution.

19. Free buffer waits can be caused by excessive sorts to disk, which cause data blocks needed by other sessions to be paged out of the buffer.

20. One percent is an acceptable ratio of disk sorts to the total number of sorts.

21. When the PGA_AGGREGATE_TARGET parameter is specified, the total work area size cannot exceed 200MB.

22. No task may use more than 10MB for sorting.

23. A DBA should modify two hidden (underscore) parameters to permit up to 50MB of memory to be used for an in-memory sort operation for a SQL statement.

Have you ever read an article in an attempt to find the answer to a very specific question, only to find that by the time the end of the article is reached, you now have a whole new set of questions?  Try to answer the above questions using something beyond true or false – tell me why in detail it is true or why it is false.  Are there any other questions that could be asked about the article?

——-

Test table for question #10, added April 6, 2010:

CREATE TABLE T1 (
  C1 NUMBER,
  C2 NUMBER,
  C3 VARCHAR2(100),
  PRIMARY KEY (C1));

INSERT INTO T1
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,100),0,NULL,ROWNUM),
  RPAD('A',100,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

COMMIT;

ALTER TABLE T1 MODIFY (C2 NOT NULL);

CREATE INDEX IND_T1_C2 ON T1(C2);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
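Question #10 concerns the clustering factor, which Oracle roughly derives by walking the index entries in key order and counting how often the table block changes from one entry to the next.  As a rough illustration only (the fixed rows-per-block assignment below is an assumption, not Oracle's actual storage layout), the pattern in the T1/IND_T1_C2 test data can be simulated outside the database:

```python
# Rough simulation of a clustering factor calculation: walk the
# index entries in key order and count block transitions.  The
# block assignment is a simplification (fixed rows per block).
ROWS = 10000
ROWS_PER_BLOCK = 68  # assumed: ~100-byte rows in an 8KB block

# C2 is NULL for every 100th row, so those rows have no index entry
entries = [(c2, rownum) for rownum, c2 in
           ((r, None if r % 100 == 0 else r) for r in range(1, ROWS + 1))
           if c2 is not None]
entries.sort()  # index order = key order

clustering_factor = 0
last_block = None
for _c2, rownum in entries:
    block = (rownum - 1) // ROWS_PER_BLOCK
    if block != last_block:
        clustering_factor += 1
        last_block = block

print(len(entries), clustering_factor)  # 9900 148
```

Because C2 follows the ROWNUM insertion order, the simulated clustering factor equals the number of table blocks (148), the best case; a randomly ordered C2 would drive it toward the number of index entries instead.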




Duplicates when Querying V$TEMPSEG_USAGE?

March 22, 2010

If you search through the Oracle documentation, you will find the following in the Oracle Database 9.2 documentation:

V$SESSION is basically an information view used for finding the SID or SADDR of a user. However, it has some columns that change dynamically and are useful for examining a user. For example:

SQL_HASH_VALUE, SQL_ADDRESS: These identify the SQL statement currently being executed by the session. If NULL or 0, then the session is not executing any SQL statement. PREV_HASH_VALUE and PREV_ADDRESS identify the previous statement being executed by the session.

The Oracle Database 10.2 documentation includes the following:

Use the following query to determine, at a point in time, which sessions are waiting for I/O:

SELECT SQL_ADDRESS, SQL_HASH_VALUE
  FROM V$SESSION
 WHERE EVENT LIKE 'db file%read';

Again in the Oracle Database 10.2 documentation:

SQL_ADDRESS: Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed
SQL_HASH_VALUE: Used with SQL_ADDRESS to identify the SQL statement that is currently being executed

The Oracle Database 11.2 documentation repeats what is found in the 10.2 documentation:

SQL_ADDRESS: Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed
SQL_HASH_VALUE: Used with SQL_ADDRESS to identify the SQL statement that is currently being executed

Why doesn’t the documentation for 10.2 and 11.2 suggest using the SQL_ID and SQL_CHILD_NUMBER columns to find the SQL statement that is currently being executed?  For that matter, why isn’t there a SQL_CHILD_ADDRESS column in V$SESSION if the documentation suggests using SQL_ADDRESS to find the SQL statement currently being executed?

Recently, an erratum was filed for page 188 of the Expert Oracle Practices book by an observant reader named Andreas, who mentioned that the SQL statement found on that page could produce duplicate rows, and offered an improved version of the SQL statement.  The SQL statement from the book follows:

SQL> SELECT /*+ ORDERED */
  2    TU.USERNAME, S.SID, S.SERIAL#, S.SQL_ID, S.SQL_ADDRESS, TU.SEGTYPE, TU.EXTENTS,
  3    TU.BLOCKS, SQL.SQL_TEXT
  4  FROM
  5    V$TEMPSEG_USAGE TU, V$SESSION S, V$SQL SQL
  6  WHERE
  7    TU.SESSION_ADDR=S.SADDR
  8    AND TU.SESSION_NUM=S.SERIAL#
  9    AND S.SQL_ID=SQL.SQL_ID
 10    AND S.SQL_ADDRESS=SQL.ADDRESS;

The sample output included in the book from that SQL statement is this:

USERNAME   SID  SERIAL# SQL_ID        SQL_ADDRESS      SEGTYPE EXTENTS     BLOCKS
-------- ----- -------- ------------- ---------------- ------- ------- ----------
TESTUSER   165     2171 7pqxpw71fkjvj 000000027DB50320 SORT        370      47360

SQL_TEXT
-------------------------------------------------------------------
SELECT * FROM T1,T2 WHERE T1.ID<=100 AND T2.ID<=1000 ORDER BY T1.ID

We also included the execution plan for the SQL statement, but did not show how the execution plan was obtained since that would be discussed later in the chapter:

------------------------------------------------------------------------------------
| Id |Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|  0|SELECT STATEMENT             |        |       |       |   119G(100)|          |
|  1| MERGE JOIN CARTESIAN        |        |  6552G|   643T|   119G  (3)|999:59:59 |
|  2|  TABLE ACCESS BY INDEX ROWID| T1     |  1005K|    50M|  1006K  (1)| 00:07:10 |
|* 3|   INDEX RANGE SCAN          | IND_T1 |  1005K|       |  2293   (9)| 00:00:01 |
|  4|  BUFFER SORT                |        |  6516K|   341M|   119G  (3)|999:59:59 |
|* 5|   TABLE ACCESS FULL         | T2     |  6516K|   341M|   118K  (3)| 00:00:51 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."ID"<=100)
   5 - filter("T2"."ID"<=1000)

Note
-----
   - dynamic sampling used for this statement

What is wrong with the above SQL statement (no, not the one that generated the Cartesian join – the other one that queried V$TEMPSEG_USAGE)?  We did not write the two chapters in linear order, starting at the beginning of chapter 8 and working to the end of chapter 9.  Instead, we outlined each section that would appear in the chapter, and wrote the contents of those sections in spurts.  At the time that the SQL statement which queried V$TEMPSEG_USAGE was written, we had already written the AdaptiveCursorBindTest.sql sample script that is mentioned later in chapter 8.  When the above SQL statement was written, I recall being internally conflicted – why?

  • Should the SQL statement instead use the HASH_VALUE, rather than SQL_ID, as suggested by the Oracle 10.2 and 11.2 documentation, which would then allow the query to also work on Oracle 9.2?  Would it be sufficient to just add a note stating that if you are running Oracle 9.2 or earlier, substitute the SQL_HASH_VALUE for SQL_ID?
  • Should the SQL statement join to V$SQLAREA to eliminate the problems with multiple child cursors for the same SQL_ID?  But how would we then retrieve the correct execution plan from the server’s memory in the event that the multiple child cursors have different execution plans?
  • Should we also use the SQL_CHILD_NUMBER column to join to V$SQL?  But then what about the users still running Oracle 9.2 or earlier?
  • Should there be a SQL_CHILD_ADDRESS column in V$SESSION since the SQL_ADDRESS column will probably be the same for all child cursors for a given SQL_ID?
  • How do we deal with the output of the AdaptiveCursorBindTest.sql sample script, which showed Oracle 11.1.0.6 producing multiple child cursors with the same CHILD_NUMBER for a given SQL_ID?  That would mean that even if the query included the SQL_CHILD_NUMBER column, there is still a potential for duplicate rows being returned by the query.

The initial plan was to circle back to this section of the chapter and try to describe some of the potential issues with the SQL statement that queried V$TEMPSEG_USAGE, but that would not be an easy task without the reader first reading the rest of that chapter.  Alas, just days after the final versions of the chapters were due to the publisher, and the first drop of ink was spilled on the pages of the printed book, we made another pass through the chapter.  When I executed this SQL statement I found that it was listing the same SQL_ID more than once.  But why?  A query of V$SQL_SHARED_CURSOR showed that ROLL_INVALID_MISMATCH was set to Y when a second (or third) child cursor was created.  What does that mean?  Execute a query, collect statistics on the tables or indexes used by the query, wait about 5 hours, re-execute the query, and then check V$SQL_SHARED_CURSOR.  The delay is mentioned here and here, along with the hidden _optimizer_invalidation_period parameter, which controls how long after statistics collection the previously hard parsed cursors that referenced those objects remain usable.

In short, we never did circle back to that section of the chapter before the book was printed, so I am happy that Andreas filed the errata report, which allowed us to clarify the situation.  We did join to V$SQL so that with a little more work it would be possible to determine the correct execution plan to pull from memory.  In retrospect, we probably should have just written the SQL statement like the following and provided a quick comment about Oracle Database 9.2.0.8 and earlier, and the potential duplicates in Oracle Database 11.1.0.6 that were demonstrated in the AdaptiveCursorBindTest.sql sample script:

SELECT /*+ ORDERED */
  TU.USERNAME, S.SID, S.SERIAL#, S.SQL_ID, S.SQL_CHILD_NUMBER,
  TU.SEGTYPE, TU.EXTENTS, TU.BLOCKS, SQL.SQL_TEXT
FROM
  V$TEMPSEG_USAGE TU, V$SESSION S, V$SQL SQL
WHERE
  TU.SESSION_ADDR=S.SADDR
  AND TU.SESSION_NUM=S.SERIAL#
  AND S.SQL_ID=SQL.SQL_ID
  AND S.SQL_CHILD_NUMBER=SQL.CHILD_NUMBER
  AND S.SQL_ADDRESS=SQL.ADDRESS;
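The duplicate-row risk can be seen without a database: when a SQL_ID has two child cursors in V$SQL, a join on SQL_ID and SQL_ADDRESS alone (SQL_ADDRESS is typically identical for all children of a SQL_ID) matches both.  A hypothetical in-memory sketch, reusing the SQL_ID from the sample output above:

```python
# Hypothetical rows: one session, but two child cursors sharing the
# same SQL_ID (e.g. after a ROLL_INVALID_MISMATCH re-parse).
session = {"sid": 165, "sql_id": "7pqxpw71fkjvj", "sql_child_number": 1}
v_sql = [
    {"sql_id": "7pqxpw71fkjvj", "child_number": 0, "sql_text": "SELECT ..."},
    {"sql_id": "7pqxpw71fkjvj", "child_number": 1, "sql_text": "SELECT ..."},
]

# Join on SQL_ID only -> both child cursors match: duplicate rows
loose = [c for c in v_sql if c["sql_id"] == session["sql_id"]]

# Also join on SQL_CHILD_NUMBER -> exactly one row
strict = [c for c in loose if c["child_number"] == session["sql_child_number"]]

print(len(loose), len(strict))  # 2 1
```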

Without looking at the V$SQL_SHARED_CURSOR view, what are the different reasons for multiple child cursors to be created?  To get you started:

  • Collecting object statistics
  • Changing the data type of bind variables
  • Changing the data length of bind variables
  • Changing the optimizer mode (ALL_ROWS, CHOOSE, RULE, FIRST_ROWS, etc.)
  • Adaptive cursor sharing (starting with Oracle Database 11.1.0.6)
  • Cardinality feedback (starting with Oracle Database 11.2.0.1)

I give my thanks to Andreas for inspiring this blog article.

Related blog articles:
SORT_AREA_SIZE Affecting Sorts to Disk with PGA_AGGREGATE_TARGET Set?
SORT_AREA_SIZE Affecting Sorts to Disk with PGA_AGGREGATE_TARGET Set 2?
SQL PLAN_HASH_VALUE Changes for the Same SQL Statement
V$SESSION_LONGOPS – Where’s My SQL Statement





SQL – Overlapping Transactions, How Many Hours have We Worked?

March 19, 2010

A couple of years ago I received an email from an ERP mailing list that requested assistance with the task of determining how many hours an employee was in a building.  The ERP software allows employees to record in the computer system that, for instance, job A was in-process between 8:00 AM and 10:00 AM and job B was in-process from 10:00 AM to noon; in that case it is a simple task of subtracting each starting time from the corresponding ending time and summing the results.  But what happens if the employee is responsible for monitoring automated processes, where the employee may monitor several simultaneous in-process jobs?  For example, job A was in-process between 8:00 AM and 10:00 AM, job B from 8:30 AM to 9:25 AM, job C from 9:00 AM to 11:00 AM, and job D from 10:30 AM to noon.  With concurrent in-process jobs it becomes much more difficult to show that the employee worked for 4 hours, just as in the first example.

How could we solve this problem?  Paint a picture – there are 1440 minutes in a day, so we could just lay down each of the transactions on top of each of those minutes to see how many of the minutes are occupied.  If we start with the following, we obtain the number of minutes past midnight of the shift date for the clock in and clock out, rounded to the nearest minute:

SELECT
  EMPLOYEE_ID,
  SHIFT_DATE,
  ROUND((CLOCK_IN-SHIFT_DATE)*1440) CLOCK_IN,
  ROUND((CLOCK_OUT-SHIFT_DATE)*1440) CLOCK_OUT
FROM
  LABOR_TICKET
WHERE
  SHIFT_DATE=TRUNC(SYSDATE-1);
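The (CLOCK_IN-SHIFT_DATE)*1440 arithmetic works because Oracle expresses the difference between two DATE values as a fraction of a day; multiplying by 1440 yields minutes.  The same conversion can be sketched in Python, with an assumed sample clock-in time:

```python
from datetime import datetime

# Assumed sample values: shift date at midnight, clock in at 8:00:20 AM
shift_date = datetime(2010, 3, 18)
clock_in = datetime(2010, 3, 18, 8, 0, 20)

# Equivalent of ROUND((CLOCK_IN - SHIFT_DATE) * 1440)
minutes_past_midnight = round((clock_in - shift_date).total_seconds() / 60)

print(minutes_past_midnight)  # 480
```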

If we had a way of stepping through all of the minutes in the day, and potentially two days in the event that the shift date crosses midnight, we could count the number of minutes in which there was an active labor ticket for each employee.  To do this, we need a counter:

SELECT
  ROWNUM COUNTER
FROM
  DUAL
CONNECT BY
  LEVEL<=2880;

Now, if we combine the two SQL statements together, placing each in an inline view, we can count the number of distinct minutes for which there was a labor ticket:

SELECT
  LT.EMPLOYEE_ID,
  LT.SHIFT_DATE,
  MIN(CLOCK_IN) CLOCK_IN,
  MAX(CLOCK_OUT) CLOCK_OUT,
  COUNT(DISTINCT C.COUNTER)/60 HOURS
FROM
  (SELECT
    EMPLOYEE_ID,
    SHIFT_DATE,
    ROUND((ROUND(CLOCK_IN,'MI')-SHIFT_DATE)*1440) CLOCK_IN,
    ROUND((ROUND(CLOCK_OUT,'MI')-SHIFT_DATE)*1440) CLOCK_OUT
  FROM
    LABOR_TICKET
  WHERE
    SHIFT_DATE=TRUNC(SYSDATE-1)) LT,
  (SELECT
    ROWNUM COUNTER
  FROM
    DUAL
  CONNECT BY
    LEVEL<=2880) C
WHERE
  C.COUNTER>LT.CLOCK_IN
  AND C.COUNTER<=LT.CLOCK_OUT
GROUP BY
  LT.EMPLOYEE_ID,
  LT.SHIFT_DATE;
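The join above effectively paints each labor ticket onto a shared strip of minutes and then counts the distinct minutes covered, so overlapping tickets are counted only once.  A minimal sketch of that logic, using the four example jobs from the introduction (times expressed as minutes past 8:00 AM):

```python
# Jobs from the example: (clock_in, clock_out) in minutes past 8:00 AM
tickets = [(0, 120),    # job A: 8:00 AM - 10:00 AM
           (30, 85),    # job B: 8:30 AM -  9:25 AM
           (60, 180),   # job C: 9:00 AM - 11:00 AM
           (150, 240)]  # job D: 10:30 AM - noon

# Equivalent of COUNT(DISTINCT C.COUNTER) with the predicate
# C.COUNTER > CLOCK_IN AND C.COUNTER <= CLOCK_OUT
occupied = set()
for clock_in, clock_out in tickets:
    occupied.update(range(clock_in + 1, clock_out + 1))

hours = len(occupied) / 60
print(hours)  # 4.0
```

The overlapping jobs B and C add no new minutes beyond what A, C, and D already cover, so the total is 4 hours, matching the simple back-to-back case.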

We have a couple of potential problems with the above:

  1. It could bury/peg the CPU in the database server due to the CONNECT BY.
  2. The CLOCK_IN and CLOCK_OUT times are not in a friendly format.

First, we create a counter table:

CREATE TABLE
  MY_COUNTER
AS
SELECT
  ROWNUM COUNTER
FROM
  DUAL
CONNECT BY
  LEVEL<=2880;

COMMIT;

Now, plugging in the counter table and cleaning up the CLOCK_IN and CLOCK_OUT times:

SELECT
  LT.EMPLOYEE_ID,
  LT.SHIFT_DATE,
  MIN(CLOCK_IN)/1440+SHIFT_DATE CLOCK_IN,
  MAX(CLOCK_OUT)/1440+SHIFT_DATE CLOCK_OUT,
  ROUND(COUNT(DISTINCT C.COUNTER)/60,2) HOURS
FROM
  (SELECT
    EMPLOYEE_ID,
    SHIFT_DATE,
    ROUND((ROUND(CLOCK_IN,'MI')-SHIFT_DATE)*1440) CLOCK_IN,
    ROUND((ROUND(CLOCK_OUT,'MI')-SHIFT_DATE)*1440) CLOCK_OUT
  FROM
    LABOR_TICKET
  WHERE
    SHIFT_DATE=TRUNC(SYSDATE-1)) LT,
  (SELECT
    COUNTER
  FROM
    MY_COUNTER) C
WHERE
  C.COUNTER>LT.CLOCK_IN
  AND C.COUNTER<=LT.CLOCK_OUT
GROUP BY
  LT.EMPLOYEE_ID,
  LT.SHIFT_DATE;

This solution was just as easy as painting by number.  It is just a matter of breaking a complicated task into solvable problems.





Improving Performance by Using a Cartesian Join

March 18, 2010

This example is based on a demonstration that I gave during a presentation last year.  I did not go into great detail about how the code worked, but I demonstrated that a carefully constructed Cartesian join is helpful and efficient for solutions to certain types of problems.  Assume that you have a table named APPLICATION_LIST that lists all of the modules belonging to an application, another table named USER_LIST that lists each Oracle username that has access to the application, and a third table named USER_PROGRAM_PERMISSION that lists each username that is denied access to one of the application modules.  The table construction may seem a little odd, but this is based on an actual example found in a commercial product.  The goal is to produce a cross-tab style report that shows all users’ permissions to all of the application modules, and have that cross-tab report appear in Excel.  The table definitions for our test tables look like this:

CREATE TABLE APPLICATION_LIST(
  PROGRAM_ID VARCHAR2(30),
  MENU_STRING VARCHAR2(30),
  PRIMARY KEY (PROGRAM_ID));

CREATE TABLE USER_LIST(
  NAME VARCHAR2(30),
  TYPE NUMBER,
  PRIMARY KEY(NAME));

CREATE TABLE USER_PROGRAM_PERMISSION(
  USER_ID VARCHAR2(30),
  PROGRAM_ID VARCHAR2(30),
  PERMISSION CHAR(1),
  PROGRAM_COMPONENT VARCHAR2(20),
  PRIMARY KEY(USER_ID,PROGRAM_ID));

We will populate the test tables with the following script:

INSERT INTO
  APPLICATION_LIST
SELECT
  DBMS_RANDOM.STRING('Z',10) PROGRAM_ID,
  DBMS_RANDOM.STRING('A',20) MENU_STRING
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

INSERT INTO
  USER_LIST
SELECT
  'USER'||TO_CHAR(ROWNUM) USER_ID,
  1 TYPE
FROM
  DUAL
CONNECT BY
  LEVEL<=300;

INSERT INTO
  USER_PROGRAM_PERMISSION
SELECT
  USER_ID,
  PROGRAM_COMPONENT,
  PERMISSION,
  'PROGRAM'
FROM
  (SELECT
    UL.NAME USER_ID,
    AL.PROGRAM_ID PROGRAM_COMPONENT,
    'N' PERMISSION
  FROM
    USER_LIST UL,
    APPLICATION_LIST AL
  ORDER BY
    DBMS_RANDOM.VALUE)
WHERE
  ROWNUM<=27000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'APPLICATION_LIST',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'USER_LIST',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'USER_PROGRAM_PERMISSION',CASCADE=>TRUE)
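The INSERT into USER_PROGRAM_PERMISSION takes the full 300 × 100 Cartesian product of users and modules, randomizes its order, and keeps the first 27,000 rows, so on average each user is denied 90 of the 100 modules.  The same shaping of the test data, sketched in Python with made-up name patterns:

```python
import random

users = [f"USER{n}" for n in range(1, 301)]         # 300 users
programs = [f"PROG{n:03d}" for n in range(1, 101)]  # 100 modules

# Cartesian product, shuffled, limited to 27,000 of the 30,000 pairs
pairs = [(u, p) for u in users for p in programs]
random.shuffle(pairs)       # ORDER BY DBMS_RANDOM.VALUE
denied = pairs[:27000]      # WHERE ROWNUM <= 27000

print(len(pairs), len(denied), len(denied) / len(users))  # 30000 27000 90.0
```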

The first VBS script will not use a Cartesian Merge join – instead it will retrieve a list of all users and all application modules, and then probe the USER_PROGRAM_PERMISSION table once for each USER_ID. (IntentionalCartesian1-NoCartesian.VBS – save as IntentionalCartesian1-NoCartesian.VBS)

Const adVarChar = 200
Const adCmdText = 1
Const adCmdStoredProc = 4
Const adParamInput = 1

Dim i
Dim j
Dim strSQL
Dim strLastColumn
Dim strProgramName
Dim strUsername
Dim strPassword
Dim strDatabase
Dim strPermission
Dim snpData
Dim comData
Dim dbDatabase
Dim objExcel

Set dbDatabase = CreateObject("ADODB.Connection")
Set snpData = CreateObject("ADODB.Recordset")
Set comData = CreateObject("ADODB.Command")

On Error Resume Next

strUsername = "MyUsername"
strPassword = "MyPassword"
strDatabase = "MyDB"

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
dbDatabase.Open
'Should verify that the connection attempt was successful, but I will leave that for someone else to code

Set snpData = CreateObject("adodb.recordset")

'Create an Excel connection
Set objExcel = CreateObject("Excel.Application")

With objExcel
    .Workbooks.Add
    .ActiveWorkbook.Sheets.Add
    .ActiveSheet.Name = "Application Permissions"

    'Remove the three default worksheets
    For i = 1 To .ActiveWorkbook.Sheets.Count
        If .ActiveWorkbook.Sheets(i).Name = "Sheet1" Then
            .Sheets("Sheet1").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet2" Then
            .Sheets("Sheet2").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet3" Then
            .Sheets("Sheet3").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
    Next

    .Visible = True
End With

strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID," & vbCrLf
strSQL = strSQL & "  MENU_STRING" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  APPLICATION_LIST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID NOT IN ('.SEPARATOR')" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  MENU_STRING"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
    strProgramName = snpData.GetRows(30000)
    snpData.Close
End If

'Set the number of elements in the strPermission array to match the number of application module names
ReDim strPermission(UBound(strProgramName, 2))

'Copy the module names into Excel
For j = 0 To UBound(strPermission)
    strPermission(j) = strProgramName(1, j) ' & " - " & strProgramName(0, j)
Next
With objExcel
    .Application.ScreenUpdating = False
    .ActiveSheet.Range(.ActiveSheet.Cells(1, 2), .ActiveSheet.Cells(1, 1 + UBound(strProgramName, 2))) = strPermission
End With

'Retrieve the list of users
strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  NAME" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  USER_LIST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  TYPE=1" & vbCrLf
strSQL = strSQL & "  AND NAME NOT LIKE '%#'" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  NAME"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
    strUsername = snpData.GetRows(30000)
    snpData.Close
End If

'Set the SQL statement to use to retrieve permissions, ? is a bind variable placeholder
strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID," & vbCrLf
strSQL = strSQL & "  PERMISSION" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  USER_PROGRAM_PERMISSION" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  PROGRAM_COMPONENT='PROGRAM'" & vbCrLf
strSQL = strSQL & "  AND USER_ID= ?" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID"

With comData
    'Set up the command properties
    .CommandText = strSQL
    .CommandType = adCmdText
    .CommandTimeout = 30
    .ActiveConnection = dbDatabase

    .Parameters.Append .CreateParameter("user_id", adVarChar, adParamInput, 30, "")
End With

'Loop through each user
For i = 0 To UBound(strUsername, 2)
    'Reset the permissions for the next user
    For j = 0 To UBound(strPermission)
        strPermission(j) = "Y"
    Next

    comData("user_id") = strUsername(0, i)
    Set snpData = comData.Execute

    If snpData.State = 1 Then
        Do While Not (snpData.EOF)
            For j = 0 To UBound(strProgramName, 2)
                If strProgramName(0, j) = snpData("program_id") Then
                    strPermission(j) = snpData("permission")
                    Exit For
                End If
            Next
            snpData.MoveNext
        Loop
        snpData.Close
    End If

    With objExcel
        .ActiveSheet.Cells(i + 2, 1) = strUsername(0, i)
        .ActiveSheet.Range(.ActiveSheet.Cells(i + 2, 2), .ActiveSheet.Cells(i + 2, 1 + UBound(strProgramName, 2))) = strPermission
    End With
Next

'Convert the number of columns into letter notation
strLastColumn = Chr(64 + Int((UBound(strProgramName, 2)) / 26)) & Chr(64 + ((UBound(strProgramName, 2)) Mod 26 + 1))

'Final cleanup
With objExcel
    .ActiveSheet.Range(.ActiveSheet.Cells(1, 2), .ActiveSheet.Cells(1, 1 + UBound(strProgramName, 2))).Orientation = 90
    .ActiveSheet.Columns("A:" & strLastColumn).AutoFit
    .Application.ScreenUpdating = True
    .ActiveSheet.Range("B2").Select
    .ActiveWindow.FreezePanes = True
End With

dbDatabase.Close

Set snpData = Nothing
Set comData = Nothing
Set objExcel = Nothing
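One detail in the script above deserves a note: the strLastColumn expression converts the column count into Excel letter notation, and as written it assumes the last column falls in the AA-ZZ range (columns 27 through 702), which holds for this 100-module example.  Excel columns are a bijective base-26 numbering (there is no zero digit), so a general version of the conversion, sketched here in Python as a hypothetical helper, needs a small loop:

```python
def excel_column(n):
    """Convert a 1-based column number to Excel letters (bijective base 26)."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)  # shift by 1: 'A' plays the role of digit 1
        letters = chr(65 + rem) + letters
    return letters

print(excel_column(1), excel_column(27), excel_column(101), excel_column(703))
# A AA CW AAA
```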

If you ignore the fact that the above script redefines the meaning of the strUsername variable, the script works.  The problem with the script is that it repeatedly sends queries to the database, and should probably be optimized to remove those repeated queries (the number of round trips to the database server could have been much worse).  Let’s take a look at version 2 of the script (IntentionalCartesian2-NoCartesian.VBS – save as IntentionalCartesian2-NoCartesian.VBS):

Dim i
Dim j
Dim strSQL
Dim strLastColumn
Dim strProgramName
Dim strEmployeename
Dim strUsername
Dim strPassword
Dim strDatabase
Dim strPermission
Dim strAllPermission
Dim strLastUser
Dim intLastPermission
Dim k
Dim snpData
Dim dbDatabase
Dim objExcel

Set dbDatabase = CreateObject("ADODB.Connection")
Set snpData = CreateObject("ADODB.Recordset")

On Error Resume Next

strUsername = "MyUsername"
strPassword = "MyPassword"
strDatabase = "MyDB"

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
dbDatabase.Open
'Should verify that the connection attempt was successful, but I will leave that for someone else to code

Set snpData = CreateObject("adodb.recordset")

'Create an Excel connection
Set objExcel = CreateObject("Excel.Application")

With objExcel
    .Workbooks.Add
    .ActiveWorkbook.Sheets.Add
    .ActiveSheet.Name = "Visual Permissions"

    'Remove the three default worksheets
    For i = 1 To .ActiveWorkbook.Sheets.Count
        If .ActiveWorkbook.Sheets(i).Name = "Sheet1" Then
            .Sheets("Sheet1").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet2" Then
            .Sheets("Sheet2").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet3" Then
            .Sheets("Sheet3").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
    Next

    .Visible = True
End With

strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID," & vbCrLf
strSQL = strSQL & "  MENU_STRING" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  APPLICATION_LIST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  PROGRAM_ID NOT IN ('.SEPARATOR')" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  MENU_STRING"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
    strProgramName = snpData.GetRows(30000)
    snpData.Close
End If

'Set the number of elements in the strPermission array to match the number of application module names
ReDim strPermission(UBound(strProgramName, 2))

'Copy the module names into Excel
For j = 0 To UBound(strPermission)
    strPermission(j) = strProgramName(1, j) ' & " - " & strProgramName(0, j)
Next
With objExcel
    .Application.ScreenUpdating = False
    .ActiveSheet.Range(.ActiveSheet.Cells(1, 2), .ActiveSheet.Cells(1, 1 + UBound(strProgramName, 2))) = strPermission
End With

'Retrieve the list of users
strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  NAME" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  USER_LIST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  TYPE=1" & vbCrLf
strSQL = strSQL & "  AND NAME NOT LIKE '%#'" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  NAME"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
    strEmployeename = snpData.GetRows(30000)
    snpData.Close
End If

'Set the SQL statement to use to retrieve permissions, ? is a bind variable placeholder
strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  USER_ID," & vbCrLf
strSQL = strSQL & "  PROGRAM_ID," & vbCrLf
strSQL = strSQL & "  PERMISSION" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  USER_PROGRAM_PERMISSION" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  PROGRAM_COMPONENT='PROGRAM'" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  USER_ID," & vbCrLf
strSQL = strSQL & "  PROGRAM_ID"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
    strAllPermission = snpData.GetRows(600000)
    snpData.Close
End If

strLastUser = ""
intLastPermission = 0

'Loop through all users
For i = 0 To UBound(strEmployeename, 2)
    'Reset the permissions for the next user
    For j = 0 To UBound(strPermission)
        strPermission(j) = "Y"
    Next

    For j = intLastPermission To UBound(strAllPermission, 2)
        If strAllPermission(0, j) = strEmployeename(0, i) Then
            'Examine the permissions for this user
            For k = 0 To UBound(strProgramName, 2)
                If strProgramName(0, k) = strAllPermission(1, j) Then
                    strPermission(k) = strAllPermission(2, j)
                    Exit For
                End If
            Next

        End If

        'Record the loop position so that we do not start at 0 for the next user
        intLastPermission = j

        If strAllPermission(0, j) > strEmployeename(0, i) Then
            'We have passed the last permission for this user, exit the For loop
            Exit For
        End If
    Next

    With objExcel
        .ActiveSheet.Cells(i + 2, 1) = strEmployeename(0, i)
        .ActiveSheet.Range(.ActiveSheet.Cells(i + 2, 2), .ActiveSheet.Cells(i + 2, 1 + UBound(strProgramName, 2))) = strPermission
    End With
Next

'Convert the number of columns into letter notation (one letter column names need special handling)
If UBound(strProgramName, 2) >= 26 Then
    strLastColumn = Chr(64 + Int((UBound(strProgramName, 2)) / 26)) & Chr(64 + ((UBound(strProgramName, 2)) Mod 26 + 1))
Else
    strLastColumn = Chr(64 + ((UBound(strProgramName, 2)) Mod 26 + 1))
End If

'Final cleanup
With objExcel
    .ActiveSheet.Range(.ActiveSheet.Cells(1, 2), .ActiveSheet.Cells(1, 1 + UBound(strProgramName, 2))).Orientation = 90
    .ActiveSheet.Columns("A:" & strLastColumn).AutoFit
    .Application.ScreenUpdating = True
    .ActiveSheet.Range("B2").Select
    .ActiveWindow.FreezePanes = True
End With

dbDatabase.Close

Set snpData = Nothing
Set dbDatabase = Nothing
Set objExcel = Nothing
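As an aside, the two-character Chr() conversion used near the end of the script only covers columns A through ZZ. Excel column letters are a bijective base 26 numbering (there is no zero digit), so a general conversion needs to subtract 1 before each division. A sketch of the general case, written in Python purely for illustration since the script above is VBScript:

```python
def column_letter(n):
    """Convert a 1-based column number to an Excel-style letter (1 -> A, 27 -> AA).

    Excel columns are 'bijective base 26': there is no zero digit, so we
    subtract 1 before each divmod instead of doing a plain base conversion.
    """
    letters = ""
    while n > 0:
        n, remainder = divmod(n - 1, 26)
        letters = chr(65 + remainder) + letters
    return letters

print(column_letter(1), column_letter(26), column_letter(27), column_letter(703))
# A Z AA AAA
```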

While the second version of the script is better than the first, we are still sending three SQL statements to the server.  We can improve that by collapsing them into a single statement with a Cartesian join.  Let’s take a look at version 3 of the script (save as IntentionalCartesian3-Cartesian.VBS):

Dim i
Dim intUserCount
Dim intPermissionCount
Dim strLastColumn
Dim strLastUser
Dim strSQL
Dim strPermission(500)
Dim strModuleName(500)

Dim strUsername
Dim strPassword
Dim strDatabase
Dim snpData
Dim dbDatabase
Dim objExcel

Set dbDatabase = CreateObject("ADODB.Connection")
Set snpData = CreateObject("ADODB.Recordset")

On Error Resume Next

strUsername = "MyUsername"
strPassword = "MyPassword"
strDatabase = "MyDB"

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
dbDatabase.Open
'Should verify that the connection attempt was successful, but I will leave that for someone else to code


'Create an Excel connection
Set objExcel = CreateObject("Excel.Application")

With objExcel
    .Workbooks.Add
    .ActiveWorkbook.Sheets.Add
    .ActiveSheet.Name = "Application Permissions"

    'Remove the three default worksheets
    For i = 1 To .ActiveWorkbook.Sheets.Count
        If .ActiveWorkbook.Sheets(i).Name = "Sheet1" Then
            .Sheets("Sheet1").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet2" Then
            .Sheets("Sheet2").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
        If .ActiveWorkbook.Sheets(i).Name = "Sheet3" Then
            .Sheets("Sheet3").Select
            .ActiveWindow.SelectedSheets.Delete
        End If
    Next

    .Visible = True
End With

strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  AUP.NAME USERNAME," & vbCrLf
strSQL = strSQL & "  AUP.MENU_STRING MODULE," & vbCrLf
strSQL = strSQL & "  NVL(UGA.PERMISSION,AUP.DEFAULT_PERMISSION) PERMISSION" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  (SELECT" & vbCrLf
strSQL = strSQL & "    UL.NAME NAME," & vbCrLf
strSQL = strSQL & "    A.PROGRAM_ID," & vbCrLf
strSQL = strSQL & "    A.MENU_STRING," & vbCrLf
strSQL = strSQL & "    'Y' DEFAULT_PERMISSION" & vbCrLf
strSQL = strSQL & "  FROM" & vbCrLf
strSQL = strSQL & "    APPLICATION_LIST A," & vbCrLf
strSQL = strSQL & "    USER_LIST UL" & vbCrLf
strSQL = strSQL & "  WHERE" & vbCrLf
strSQL = strSQL & "    A.PROGRAM_ID NOT IN ('.SEPARATOR')" & vbCrLf
strSQL = strSQL & "    AND UL.NAME NOT LIKE '%#') AUP," & vbCrLf
strSQL = strSQL & "  (SELECT" & vbCrLf
strSQL = strSQL & "    USER_ID," & vbCrLf
strSQL = strSQL & "    PROGRAM_ID," & vbCrLf
strSQL = strSQL & "    PERMISSION" & vbCrLf
strSQL = strSQL & "  FROM" & vbCrLf
strSQL = strSQL & "    USER_PROGRAM_PERMISSION" & vbCrLf
strSQL = strSQL & "  WHERE" & vbCrLf
strSQL = strSQL & "    PROGRAM_COMPONENT='PROGRAM') UGA" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  AUP.NAME=UGA.USER_ID(+)" & vbCrLf
strSQL = strSQL & "  AND AUP.PROGRAM_ID=UGA.PROGRAM_ID(+)" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  AUP.NAME," & vbCrLf
strSQL = strSQL & "  AUP.MENU_STRING"

snpData.Open strSQL, dbDatabase

If snpData.State = 1 Then
  With objExcel
    .Application.ScreenUpdating = False
    strLastUser = ""
    intUserCount = 0
    Do While Not snpData.EOF
        If strLastUser <> snpData("username") Then
            If strLastUser <> "" Then
                'Write out the permissions for the previous user
                .ActiveSheet.Range(.ActiveSheet.Cells(intUserCount + 1, 1), .ActiveSheet.Cells(intUserCount + 1, 1 + intPermissionCount)) = strPermission
            End If
            If intUserCount = 1 Then
                'Write out the module names
                .ActiveSheet.Range(.ActiveSheet.Cells(1, 1), .ActiveSheet.Cells(1, 1 + intPermissionCount)) = strModuleName
            End If

            strPermission(0) = snpData("username")
            intPermissionCount = 0
            intUserCount = intUserCount + 1
        End If
        intPermissionCount = intPermissionCount + 1
        strPermission(intPermissionCount) = snpData("permission")
        strLastUser = snpData("username")

        If intUserCount = 1 Then
            'Record the module names
            strModuleName(intPermissionCount) = snpData("module")
        End If

        snpData.MoveNext
    Loop
    If strLastUser <> "" Then
        'Write out the permissions for the last user
        .ActiveSheet.Range(.ActiveSheet.Cells(intUserCount + 1, 1), .ActiveSheet.Cells(intUserCount + 1, 1 + intPermissionCount)) = strPermission
    End If

    'Convert the number of columns into letter notation (one letter column names need special handling)
    If intPermissionCount >= 26 Then
        strLastColumn = Chr(64 + Int((intPermissionCount) / 26)) & Chr(64 + ((intPermissionCount) Mod 26 + 1))
    Else
        strLastColumn = Chr(64 + ((intPermissionCount) Mod 26 + 1))
    End If
    .ActiveSheet.Range(.ActiveSheet.Cells(1, 2), .ActiveSheet.Cells(1, 1 + intPermissionCount)).Orientation = 90
    .ActiveSheet.Columns("A:" & strLastColumn).AutoFit
    .ActiveWindow.FreezePanes = False
    .ActiveSheet.Range("B2").Select
    .ActiveWindow.FreezePanes = True

    .Application.ScreenUpdating = True
  End With
End If

snpData.Close
dbDatabase.Close
Set snpData = Nothing
Set dbDatabase = Nothing
Set objExcel = Nothing
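The Do While loop above is a classic control-break: because the result set is ordered by user name, a change in the name signals that the previous user’s row of permissions is complete and can be written to the worksheet with a single Range assignment.  The same pattern, sketched in Python with hypothetical sample rows (the real data comes from the Oracle query):

```python
def pivot_permissions(rows):
    """Control-break pivot of ordered (username, module, permission) rows.

    Mirrors the VBScript loop: rows must already be sorted by username;
    a change in the username flushes the previous user's permission row.
    """
    output = []            # one (username, [permissions]) tuple per user
    last_user = None
    permissions = []
    for username, module, permission in rows:
        if username != last_user:
            if last_user is not None:
                output.append((last_user, permissions))
            last_user = username
            permissions = []
        permissions.append(permission)
    if last_user is not None:  # flush the final user, as after the Loop
        output.append((last_user, permissions))
    return output

rows = [
    ("ALICE", "ORDER ENTRY", "Y"), ("ALICE", "PAYROLL", "N"),
    ("BOB",   "ORDER ENTRY", "N"), ("BOB",   "PAYROLL", "Y"),
]
print(pivot_permissions(rows))
# [('ALICE', ['Y', 'N']), ('BOB', ['N', 'Y'])]
```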

Notice in the above that the client-side code is much smaller, and we have collapsed the three SQL statements into a single SQL statement with the help of a Cartesian join between the APPLICATION_LIST and USER_LIST tables.  The end result looks like this:





What is the Impact on the CPU Statistic in a 10046 Trace File in a CPU Constrained Server?

17 03 2010

March 17, 2010

Sometimes I have wondered about the statistics that appear in 10046 trace files – how much are the statistics distorted by other activities happening in the server, or even just the act of enabling a 10046 trace for a session.  Will the CPU statistics in the trace file increase as running processes remain in the CPU run queue longer and longer, or are the CPU statistics immune to longer execution times caused by a process’ “run” time slice expiring?  Does the CPU architecture matter?  Does the operating system matter?  Is it possible that a query could actually complete faster when the competition for CPU resources increases?  Are there other variables that need to be considered?

We will set up the same test table that was used in yesterday’s blog article:

CREATE TABLE T5 (
  C1 VARCHAR2(10),
  C2 VARCHAR2(100),
  PRIMARY KEY (C1));

INSERT INTO T5 NOLOGGING
SELECT
  'A'||TO_CHAR(ROWNUM,'0000000'),
  RPAD('A',100,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T5',CASCADE=>TRUE)

From a client computer the following script is executed against Oracle 11.1.0.6 running on 64 bit Linux:  

SET AUTOTRACE TRACEONLY STATISTICS
SET ARRAYSIZE 100
SET TIMING ON
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'NOLOAD';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

SELECT /*+ USE_NL(T51 T52 T53 T54 T55 T56 T57 T58) */
  T51.C1,
  T51.C2,
  T52.C1,
  T52.C2,
  T53.C1,
  T53.C2,
  T54.C1,
  T54.C2,
  T55.C1,
  T55.C2,
  T56.C1,
  T56.C2,
  T57.C1,
  T57.C2,
  T58.C1,
  T58.C2
FROM
  T5 T51,
  T5 T52,
  T5 T53,
  T5 T54,
  T5 T55,
  T5 T56,
  T5 T57,
  T5 T58
WHERE
  T51.C1=T52.C1
  AND T51.C1=T53.C1
  AND T51.C1=T54.C1
  AND T51.C1=T55.C1
  AND T51.C1=T56.C1
  AND T51.C1=T57.C1
  AND T51.C1=T58.C1
  AND T51.C1 BETWEEN 'A 0000000' AND 'A 1000000';

SELECT SYSDATE FROM DUAL;

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
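TKPROF will summarize the resulting trace file for us, but the per-call lines in a raw 10046 trace are simple enough to aggregate directly.  A rough sketch, assuming the 10g/11g extended trace line format (where the c and e values are reported in microseconds); the sample lines are fabricated for illustration:

```python
import re

# PARSE/EXEC/FETCH lines in a 10g/11g extended trace look roughly like:
#   FETCH #3:c=3999,e=4443,p=0,cr=869,cu=0,mis=0,r=100,dep=0,og=1,tim=...
LINE = re.compile(r"^(PARSE|EXEC|FETCH) #\d+:(.*)")

def summarize(trace_lines):
    """Sum CPU microseconds, elapsed microseconds, and call count per call type."""
    totals = {}
    for line in trace_lines:
        match = LINE.match(line)
        if not match:
            continue
        call = match.group(1)
        stats = dict(pair.split("=") for pair in match.group(2).split(","))
        cpu, elapsed, count = totals.get(call, (0, 0, 0))
        totals[call] = (cpu + int(stats["c"]), elapsed + int(stats["e"]), count + 1)
    return totals

sample = [
    "FETCH #3:c=3999,e=4443,p=0,cr=869,cu=0,mis=0,r=100,dep=0,og=1,tim=1",
    "FETCH #3:c=4000,e=4100,p=0,cr=870,cu=0,mis=0,r=100,dep=0,og=1,tim=2",
]
print(summarize(sample))  # {'FETCH': (7999, 8543, 2)}
```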

The SQL*Plus session displayed the following statistics, which were fairly consistent across multiple executions:

1000000 rows selected.

Elapsed: 00:01:59.85

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
    8696435  consistent gets
          0  physical reads
          0  redo size
   86861789  bytes sent via SQL*Net to client
     110349  bytes received via SQL*Net from client
      10001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

TKPROF output for the trace file:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch    10001     35.53      35.83          0    8696435          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10005     35.53      35.83          0    8696435          0     1000000

Misses in library cache during parse: 0
Parsing user id: 80 

Rows     Row Source Operation
-------  ---------------------------------------------------
1000000  NESTED LOOPS  (cr=8696435 pr=0 pw=0 time=294720 us)
1000000   NESTED LOOPS  (cr=7696435 pr=0 pw=0 time=272839 us cost=8008211 size=888000000 card=1000000)
1000000    NESTED LOOPS  (cr=7457834 pr=0 pw=0 time=250538 us cost=7007667 size=777000000 card=1000000)
1000000     NESTED LOOPS  (cr=6219233 pr=0 pw=0 time=210922 us cost=6007124 size=666000000 card=1000000)
1000000      NESTED LOOPS  (cr=4980632 pr=0 pw=0 time=170866 us cost=5006580 size=555000000 card=1000000)
1000000       NESTED LOOPS  (cr=3742031 pr=0 pw=0 time=130944 us cost=4006037 size=444000000 card=1000000)
1000000        NESTED LOOPS  (cr=2503430 pr=0 pw=0 time=90395 us cost=3005494 size=333000000 card=1000000)
1000000         NESTED LOOPS  (cr=1264829 pr=0 pw=0 time=49809 us cost=2004950 size=222000000 card=1000000)
1000000          TABLE ACCESS FULL T5 (cr=26228 pr=0 pw=0 time=4775 us cost=4407 size=111000000 card=1000000)
1000000          TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=2 size=111 card=1)
1000000           INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000         TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000          INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000        TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000         INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000       TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000        INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000      TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000       INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000     TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000      INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000    INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000   TABLE ACCESS BY INDEX ROWID T5 (cr=1000000 pr=0 pw=0 time=0 us cost=1 size=111 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10001        0.00          0.01
  SQL*Net message from client                 10001        0.01         83.34
  SQL*Net more data to client                 10000        0.00          0.19

The above shows that the 1,000,000 rows from the query were returned to the client in just under 2 minutes.  8,696,435 consistent gets were performed, consuming 35.53 CPU seconds and 35.83 elapsed seconds, with 83.34 seconds spent waiting for the next fetch request from the client computer.
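As a quick sanity check, the components reported by TKPROF should roughly add up to the elapsed time SQL*Plus reported (a small gap is normal, since the time between database calls is not fully instrumented):

```python
# Figures from the TKPROF output above (seconds)
db_elapsed = 35.83          # elapsed time inside the database calls
net_from_client = 83.34     # SQL*Net message from client
net_to_client = 0.01        # SQL*Net message to client
more_data = 0.19            # SQL*Net more data to client

accounted = db_elapsed + net_from_client + net_to_client + more_data
sqlplus_elapsed = 119.85    # Elapsed: 00:01:59.85

print(f"accounted for: {accounted:.2f}s of {sqlplus_elapsed:.2f}s "
      f"(gap: {sqlplus_elapsed - accounted:.2f}s)")
# accounted for: 119.37s of 119.85s (gap: 0.48s)
```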

For fun, we will start up 40 instances of the CPU load generating bash script found on page 197 of the Expert Oracle Practices book.  That will easily push the 8 processors close to 100% utilization, with fairly long CPU run queues.  Repeating the previous test, we receive the following output:

1000000 rows selected.

Elapsed: 00:01:57.01

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
    8696435  consistent gets
          0  physical reads
          0  redo size
   86861789  bytes sent via SQL*Net to client
     110349  bytes received via SQL*Net from client
      10001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

TKPROF Output:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch    10001     32.64      32.64          0    8696435          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10005     32.64      32.64          0    8696435          0     1000000

Misses in library cache during parse: 0
Parsing user id: 80 

Rows     Row Source Operation
-------  ---------------------------------------------------
1000000  NESTED LOOPS  (cr=8696435 pr=0 pw=0 time=258421 us)
1000000   NESTED LOOPS  (cr=7696435 pr=0 pw=0 time=239249 us cost=8008211 size=888000000 card=1000000)
1000000    NESTED LOOPS  (cr=7457834 pr=0 pw=0 time=219902 us cost=7007667 size=777000000 card=1000000)
1000000     NESTED LOOPS  (cr=6219233 pr=0 pw=0 time=185868 us cost=6007124 size=666000000 card=1000000)
1000000      NESTED LOOPS  (cr=4980632 pr=0 pw=0 time=151577 us cost=5006580 size=555000000 card=1000000)
1000000       NESTED LOOPS  (cr=3742031 pr=0 pw=0 time=117680 us cost=4006037 size=444000000 card=1000000)
1000000        NESTED LOOPS  (cr=2503430 pr=0 pw=0 time=83916 us cost=3005494 size=333000000 card=1000000)
1000000         NESTED LOOPS  (cr=1264829 pr=0 pw=0 time=49146 us cost=2004950 size=222000000 card=1000000)
1000000          TABLE ACCESS FULL T5 (cr=26228 pr=0 pw=0 time=5678 us cost=4407 size=111000000 card=1000000)
1000000          TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=2 size=111 card=1)
1000000           INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000         TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000          INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000        TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000         INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000       TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000        INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000      TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000       INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000     TABLE ACCESS BY INDEX ROWID T5 (cr=1238601 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000      INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000    INDEX UNIQUE SCAN SYS_C0015042 (cr=238601 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 74553)
1000000   TABLE ACCESS BY INDEX ROWID T5 (cr=1000000 pr=0 pw=0 time=0 us cost=1 size=111 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10001        0.00          0.01
  SQL*Net message from client                 10001        0.01         83.33
  SQL*Net more data to client                 10000        0.00          0.27

From the above we see that the 1,000,000 rows were returned to the client almost 3 seconds faster than before with essentially no change in the SQL*Net message from client wait – the statistics show that the CPU time was reduced by almost 3 seconds.  Great – the next time my SQL statement is burning CPU, I’ll just run 40 other CPU consuming processes to make my SQL statement run faster.  :-)  (Nehalem magic?)
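The book’s bash script is not reproduced here; as a rough stand-in, something along these lines (a Python sketch, not the book’s script) launches N busy-loop workers to saturate the CPUs:

```python
import multiprocessing
import time

def burn(seconds):
    """Spin on the CPU for roughly `seconds` of wall-clock time."""
    deadline = time.perf_counter() + seconds
    counter = 0
    while time.perf_counter() < deadline:
        counter += 1
    return counter

def generate_load(workers=40, seconds=60):
    """Start `workers` CPU-burning processes and wait for them to finish."""
    processes = [multiprocessing.Process(target=burn, args=(seconds,))
                 for _ in range(workers)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

if __name__ == "__main__":
    generate_load(workers=2, seconds=0.5)  # scaled down for a quick demonstration
```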

I am sure a couple of people are fans of running Oracle on Windows, so let’s try the test on Oracle 11.1.0.7 running on 64 bit Windows with no other CPU load:

1000000 rows selected.

Elapsed: 00:01:38.40

Statistics
---------------------------------------------------
          1  recursive calls
          0  db block gets
    8695518  consistent gets
          0  physical reads
          0  redo size
   86861789  bytes sent via SQL*Net to client
     110349  bytes received via SQL*Net from client
      10001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

TKPROF Output:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.07       0.09          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    10001     12.23      12.93          0    8695518          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10003     12.30      13.02          0    8695518          0     1000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 56 

Rows     Row Source Operation
-------  ---------------------------------------------------
1000000  NESTED LOOPS  (cr=8695518 pr=0 pw=0 time=187370 us)
1000000   NESTED LOOPS  (cr=7695518 pr=0 pw=0 time=156136 us cost=8008160 size=888000000 card=1000000)
1000000    NESTED LOOPS  (cr=7457048 pr=0 pw=0 time=156136 us cost=7007623 size=777000000 card=1000000)
1000000     NESTED LOOPS  (cr=6218578 pr=0 pw=0 time=156136 us cost=6007087 size=666000000 card=1000000)
1000000      NESTED LOOPS  (cr=4980108 pr=0 pw=0 time=31242 us cost=5006551 size=555000000 card=1000000)
1000000       NESTED LOOPS  (cr=3741638 pr=0 pw=0 time=31242 us cost=4006015 size=444000000 card=1000000)
1000000        NESTED LOOPS  (cr=2503168 pr=0 pw=0 time=0 us cost=3005479 size=333000000 card=1000000)
1000000         NESTED LOOPS  (cr=1264698 pr=0 pw=0 time=0 us cost=2004943 size=222000000 card=1000000)
1000000          TABLE ACCESS FULL T5 (cr=26228 pr=0 pw=0 time=0 us cost=4407 size=111000000 card=1000000)
1000000          TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=2 size=111 card=1)
1000000           INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000         TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000          INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000        TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000         INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000       TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000        INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000      TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000       INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000     TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000      INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000    INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000   TABLE ACCESS BY INDEX ROWID T5 (cr=1000000 pr=0 pw=0 time=0 us cost=1 size=111 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10001        0.00          0.00
  SQL*Net message from client                 10001        0.01         84.86
  SQL*Net more data to client                 10000        0.00          0.42

Nice – the query completed 21.5 seconds faster than the no-load test with Oracle 11.1.0.6 on Linux running on the same hardware, even with a hard parse.  The CPU time is also roughly a third of that recorded in the Linux test.

Now let’s load up the server with 40 CPU consuming processes using the VBS script found on page 197 of the Expert Oracle Practices book.  The test results follow:

1000000 rows selected.

Elapsed: 00:01:47.39

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
    8695518  consistent gets
          0  physical reads
          0  redo size
   86861789  bytes sent via SQL*Net to client
     110349  bytes received via SQL*Net from client
      10001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

TKPROF Output:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    10001     22.24      22.49          0    8695518          0     1000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10003     22.24      22.49          0    8695518          0     1000000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 56 

Rows     Row Source Operation
-------  ---------------------------------------------------
1000000  NESTED LOOPS  (cr=8695518 pr=0 pw=0 time=280798 us)
1000000   NESTED LOOPS  (cr=7695518 pr=0 pw=0 time=249598 us cost=8008160 size=888000000 card=1000000)
1000000    NESTED LOOPS  (cr=7457048 pr=0 pw=0 time=218398 us cost=7007623 size=777000000 card=1000000)
1000000     NESTED LOOPS  (cr=6218578 pr=0 pw=0 time=218398 us cost=6007087 size=666000000 card=1000000)
1000000      NESTED LOOPS  (cr=4980108 pr=0 pw=0 time=187198 us cost=5006551 size=555000000 card=1000000)
1000000       NESTED LOOPS  (cr=3741638 pr=0 pw=0 time=155998 us cost=4006015 size=444000000 card=1000000)
1000000        NESTED LOOPS  (cr=2503168 pr=0 pw=0 time=124799 us cost=3005479 size=333000000 card=1000000)
1000000         NESTED LOOPS  (cr=1264698 pr=0 pw=0 time=31200 us cost=2004943 size=222000000 card=1000000)
1000000          TABLE ACCESS FULL T5 (cr=26228 pr=0 pw=0 time=0 us cost=4407 size=111000000 card=1000000)
1000000          TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=2 size=111 card=1)
1000000           INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000         TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000          INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000        TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000         INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000       TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000        INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000      TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000       INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000     TABLE ACCESS BY INDEX ROWID T5 (cr=1238470 pr=0 pw=0 time=0 us cost=1 size=111 card=1)
1000000      INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000    INDEX UNIQUE SCAN SYS_C0016378 (cr=238470 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 70312)
1000000   TABLE ACCESS BY INDEX ROWID T5 (cr=1000000 pr=0 pw=0 time=0 us cost=1 size=111 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10001        0.00          0.00
  SQL*Net message from client                 10001        0.01         83.55
  SQL*Net more data to client                 10000        0.00          0.59

The Windows test required roughly 10 extra seconds of CPU time when 40 processes were competing for the CPUs – the CPU time for the fetch calls increased from 12.23 seconds to 22.24 seconds over the test run with no competing processes.  Interestingly, the SQL*Net message from client wait decreased by more than a second during the test run with the increased CPU usage; I suppose that this might be expected, considering that the next request from the client might have arrived while the Oracle thread was waiting in the CPU run queue.  Even with the additional CPU time, the CPU time consumed was still 10.4 seconds lower than that experienced with the Linux and Oracle 11.1.0.6 combination.  So, is the drop in CPU utilization an improvement in Oracle 11.1.0.7 that yielded better CPU times (you probably saw what happened when the same SQL statement was executed during yesterday’s blog article), is 64 bit Windows just that much more efficient than 64 bit Linux, or do you think that it was the 917 consistent get decrease that allowed Windows to complete the SQL statement faster than Linux?
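The underlying question – does reported CPU time include time spent waiting in the run queue? – can be probed outside Oracle, too.  Operating systems expose a per-process CPU clock separately from the wall clock, and the two diverge under load (a Python sketch of the idea; Oracle’s own accounting may of course differ):

```python
import time

def timed_busy_work(iterations=2_000_000):
    """Measure a busy loop with both the wall clock and the process CPU clock.

    On an idle machine the two readings are close; when the process is
    pushed off the CPU by competing work, wall time keeps growing while
    the process CPU clock (ideally) does not.
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    total = 0
    for i in range(iterations):
        total += i
    return time.process_time() - cpu_start, time.perf_counter() - wall_start

cpu_seconds, wall_seconds = timed_busy_work()
print(f"cpu={cpu_seconds:.3f}s wall={wall_seconds:.3f}s")
```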

The above test seems to indicate that Oracle on Linux does not include the time the process spent waiting in the CPU run queue when there is a lot of competition for CPU time – although the fact that the rows were returned to the client faster when the server was under a load probably means that more testing is required.  The above also seems to indicate that Oracle on Windows does inflate the CPU time statistic when competition for CPU time increases, but of course more testing should be performed.  So, why the recent increased interest in the CPU statistics in a trace file?  This OTN thread piqued my interest.





Consistently Inconsistent Consistent Gets

16 03 2010

March 16, 2010

You might be aware that the number of consistent gets performed during the execution of a SQL statement will vary as the execution plan for the SQL statement changes.  You might be aware that the number of consistent gets for a SQL statement will vary depending on the block size used by the database.  You might be aware that the number of consistent gets will vary as the clustering factor of the indexes used by the SQL statement changes (assuming that the table rows are actually accessed).  You might be aware that moving a table and rebuilding its indexes could cause the number of consistent gets for a SQL statement to change.  But were you aware that there are other reasons for the number of consistent gets to change?  Let’s set up a simple test table in the database that has 1,000,000 rows and an index on the primary key column:

CREATE TABLE T5 (
  C1 VARCHAR2(10),
  C2 VARCHAR2(100),
  PRIMARY KEY (C1));

INSERT INTO T5 NOLOGGING
SELECT
  'A'||TO_CHAR(ROWNUM,'0000000'),
  RPAD('A',100,'A')
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T5',CASCADE=>TRUE)

For this test we will use Oracle Database release 11.1.0.6 on 64 bit Linux and craft a silly SQL statement that uses the test table.  The silly SQL statement’s only goal is to drive up the number of consistent gets, allowing us to peg the server’s CPU.  This is the silly SQL statement, which essentially joins table T5 to itself multiple times:

SELECT /*+ USE_NL(T51 T52 T53 T54 T55 T56 T57 T58) */
  T51.C1,
  T51.C2,
  T52.C1,
  T52.C2,
  T53.C1,
  T53.C2,
  T54.C1,
  T54.C2,
  T55.C1,
  T55.C2,
  T56.C1,
  T56.C2,
  T57.C1,
  T57.C2,
  T58.C1,
  T58.C2
FROM
  T5 T51,
  T5 T52,
  T5 T53,
  T5 T54,
  T5 T55,
  T5 T56,
  T5 T57,
  T5 T58
WHERE
  T51.C1=T52.C1
  AND T51.C1=T53.C1
  AND T51.C1=T54.C1
  AND T51.C1=T55.C1
  AND T51.C1=T56.C1
  AND T51.C1=T57.C1
  AND T51.C1=T58.C1
  AND T51.C1 BETWEEN 'A 0000000' AND 'A 1000000';

In the above SQL statement I have used a hint to force nested loop joins between the various aliases of table T5, so that the index is used rather than letting the optimizer choose full table scans and hash joins.  Let’s see how the array fetch size affects the number of consistent gets for the above SQL statement.  Articles on other blogs have demonstrated the effect of changing the array fetch size, but stay with me – there is a twist.  The script:

SPOOL consistent_gets.txt

SET AUTOTRACE TRACEONLY STATISTICS
SET TIMING ON

SET ARRAYSIZE 1

SELECT /*+ USE_NL(T51 T52 T53 T54 T55 T56 T57 T58) */
  T51.C1,
  T51.C2,
  T52.C1,
  T52.C2,
  T53.C1,
  T53.C2,
  T54.C1,
  T54.C2,
  T55.C1,
  T55.C2,
  T56.C1,
  T56.C2,
  T57.C1,
  T57.C2,
  T58.C1,
  T58.C2
FROM
  T5 T51,
  T5 T52,
  T5 T53,
  T5 T54,
  T5 T55,
  T5 T56,
  T5 T57,
  T5 T58
WHERE
  T51.C1=T52.C1
  AND T51.C1=T53.C1
  AND T51.C1=T54.C1
  AND T51.C1=T55.C1
  AND T51.C1=T56.C1
  AND T51.C1=T57.C1
  AND T51.C1=T58.C1
  AND T51.C1 BETWEEN 'A 0000000' AND 'A 1000000';

SET ARRAYSIZE 1

/

SET ARRAYSIZE 10

/

SET ARRAYSIZE 50

/

SET ARRAYSIZE 100

/

SET ARRAYSIZE 200

/

SET ARRAYSIZE 500

/

SET ARRAYSIZE 1000

/

SET ARRAYSIZE 2000

/

SET ARRAYSIZE 5000

/

SPOOL OFF

We will throw out the first execution with the array fetch size set at 1 (the client computer is using the 11.1.0.7 client) so that the recursive calls caused by the initial hard parse do not throw off the timing.  The summarized output of the above script follows:

           Fetch Array:  1            10           50          100          200          500          1000         2000         5000
Elapsed             00:05:36.83  00:02:30.86  00:01:47.15  00:01:38.21  00:01:29.87  00:01:24.80  00:01:23.08  00:01:22.25  00:01:22.49
recursive calls               0            0            0            0            0            0            0            0            0
db block gets                 0            0            0            0            0            0            0            0            0
consistent gets      12,554,387    9,453,633    8,822,789    8,696,435    8,625,467    8,582,840    8,568,603    8,561,488    8,559,626
physical reads                0            0            0            0            0            0            0            0            0
redo size                     0            0            0            0            0            0            0            0            0
bytes sent via SQL  129,001,789   94,601,789   87,721,789   86,861,789   86,431,789   86,173,789   86,087,789   86,044,789   86,018,989
bytes received via    5,500,349    1,100,349      220,349      110,349       55,349       22,349       11,349        5,849        2,549
SQL*Net roundtrips      500,001      100,001       20,001       10,001        5,001        2,001        1,001          501          201
sorts (memory)                0            0            0            0            0            0            0            0            0
sorts (disk)                  0            0            0            0            0            0            0            0            0
rows processed        1,000,000    1,000,000    1,000,000    1,000,000    1,000,000    1,000,000    1,000,000    1,000,000    1,000,000

As you can see from the above, when the fetch array size was set to 1, SQL*Plus actually operated as if the fetch array size were set to 2.  As the fetch array size increased, the execution time for the query decreased until the fetch array size of 5,000 was attempted.  Accompanying the decrease in execution time is a decrease in the number of consistent gets, the number of bytes sent across the network, and the number of round trips.  Nice – so where is the twist?  Is it just that the elapsed time increased slightly when the fetch array size was increased to 5,000?
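
The SQL*Net roundtrip counts in the output above follow a simple model: one round trip per array fetch, plus one final fetch to detect the end of the result set, with SQL*Plus silently treating an ARRAYSIZE of 1 as if it were 2.  Here is a quick sketch of that model – a rough check built only from the numbers shown above, not anything Oracle documents in this exact form:

```python
import math

ROWS = 1_000_000  # rows returned by the test query

def predicted_roundtrips(arraysize):
    # SQL*Plus silently treats an ARRAYSIZE of 1 as 2
    effective = max(arraysize, 2)
    # one round trip per array fetch, plus one final fetch that
    # detects the end of the result set
    return math.ceil(ROWS / effective) + 1

# Observed "SQL*Net roundtrips to/from client" values from the output above
observed = {1: 500_001, 10: 100_001, 50: 20_001, 100: 10_001, 200: 5_001,
            500: 2_001, 1000: 1_001, 2000: 501, 5000: 201}

for arraysize, actual in observed.items():
    assert predicted_roundtrips(arraysize) == actual, arraysize
print("model matches all nine observed roundtrip counts")
```

The model reproduces all nine observed roundtrip counts, which is why the bytes received and consistent gets statistics fall as the array size grows.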

Maybe we should repeat the test on Oracle Database 11.2.0.1, which also uses the default 8KB block size and runs on 64-bit Linux.  These are the summarized results, excluding the statistics that did not change significantly from the test run on Oracle 11.1.0.6:

           Fetch Array:  1            10           50          100          200          500          1000         2000         5000
Elapsed             00:05:39.97  00:02:16.71  00:01:36.36  00:01:27.57  00:01:24.48  00:01:22.77  00:01:22.15  00:01:21.96  00:01:22.24
consistent gets         516,378      116,378       36,378       26,378       21,378       18,378       17,378       16,878       16,578

When the fetch array size was specified as 1, the number of consistent gets dropped from 12,554,387 to 516,378 when run on Oracle 11.2.0.1, yet the execution time is almost identical to that achieved on 11.1.0.6.  When the fetch array size was specified as 5,000, the number of consistent gets dropped from 8,559,626 to 16,578, yet the execution time was almost identical to that achieved on 11.1.0.6.  Quite a substantial decrease in the number of consistent gets from one release to another.

Anyone want to take an educated guess as to what caused the decrease in the number of consistent gets?

Take a guess before scrolling down.

Here is the DBMS_XPLAN output for Oracle 11.1.0.6 for the SQL statement:

SQL_ID  7v7q3k01y4r0z, child number 0
-------------------------------------
SELECT /*+ USE_NL(T51 T52 T53 T54 T55 T56 T57 T58) */    T51.C1,   
T51.C2,    T52.C1,    T52.C2,    T53.C1,    T53.C2,    T54.C1,   
T54.C2,    T55.C1,    T55.C2,    T56.C1,    T56.C2,    T57.C1,   
T57.C2,    T58.C1,    T58.C2  FROM    T5 T51,    T5 T52,    T5 T53,   
T5 T54,    T5 T55,    T5 T56,    T5 T57,    T5 T58  WHERE   
T51.C1=T52.C1    AND T51.C1=T53.C1    AND T51.C1=T54.C1    AND
T51.C1=T55.C1    AND T51.C1=T56.C1    AND T51.C1=T57.C1    AND
T51.C1=T58.C1    AND T51.C1 BETWEEN 'A 0000000' AND 'A 1000000'

Plan hash value: 2422939766

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                       |              |      1 |        |   1000K|00:00:37.09 |    8696K|
|   2 |   NESTED LOOPS                      |              |      1 |   1000K|   1000K|00:00:34.08 |    7696K|
|   3 |    NESTED LOOPS                     |              |      1 |   1000K|   1000K|00:00:31.07 |    7457K|
|   4 |     NESTED LOOPS                    |              |      1 |   1000K|   1000K|00:00:26.06 |    6218K|
|   5 |      NESTED LOOPS                   |              |      1 |   1000K|   1000K|00:00:21.05 |    4980K|
|   6 |       NESTED LOOPS                  |              |      1 |   1000K|   1000K|00:00:16.04 |    3741K|
|   7 |        NESTED LOOPS                 |              |      1 |   1000K|   1000K|00:00:11.03 |    2503K|
|   8 |         NESTED LOOPS                |              |      1 |   1000K|   1000K|00:00:06.02 |    1264K|
|*  9 |          TABLE ACCESS FULL          | T5           |      1 |   1000K|   1000K|00:00:00.01 |   25906 |
|  10 |          TABLE ACCESS BY INDEX ROWID| T5           |   1000K|      1 |   1000K|00:00:04.28 |    1238K|
|* 11 |           INDEX UNIQUE SCAN         | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.96 |     238K|
|  12 |         TABLE ACCESS BY INDEX ROWID | T5           |   1000K|      1 |   1000K|00:00:03.80 |    1238K|
|* 13 |          INDEX UNIQUE SCAN          | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.63 |     238K|
|  14 |        TABLE ACCESS BY INDEX ROWID  | T5           |   1000K|      1 |   1000K|00:00:03.79 |    1238K|
|* 15 |         INDEX UNIQUE SCAN           | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.62 |     238K|
|  16 |       TABLE ACCESS BY INDEX ROWID   | T5           |   1000K|      1 |   1000K|00:00:03.78 |    1238K|
|* 17 |        INDEX UNIQUE SCAN            | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.60 |     238K|
|  18 |      TABLE ACCESS BY INDEX ROWID    | T5           |   1000K|      1 |   1000K|00:00:03.79 |    1238K|
|* 19 |       INDEX UNIQUE SCAN             | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.60 |     238K|
|  20 |     TABLE ACCESS BY INDEX ROWID     | T5           |   1000K|      1 |   1000K|00:00:03.79 |    1238K|
|* 21 |      INDEX UNIQUE SCAN              | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.62 |     238K|
|* 22 |    INDEX UNIQUE SCAN                | SYS_C0015042 |   1000K|      1 |   1000K|00:00:01.61 |     238K|
|  23 |   TABLE ACCESS BY INDEX ROWID       | T5           |   1000K|      1 |   1000K|00:00:01.54 |    1000K|
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - filter(("T51"."C1">='A 0000000' AND "T51"."C1"<='A 1000000'))
  11 - access("T51"."C1"="T52"."C1")
       filter(("T52"."C1">='A 0000000' AND "T52"."C1"<='A 1000000'))
  13 - access("T51"."C1"="T53"."C1")
       filter(("T53"."C1">='A 0000000' AND "T53"."C1"<='A 1000000'))
  15 - access("T51"."C1"="T54"."C1")
       filter(("T54"."C1">='A 0000000' AND "T54"."C1"<='A 1000000'))
  17 - access("T51"."C1"="T55"."C1")
       filter(("T55"."C1">='A 0000000' AND "T55"."C1"<='A 1000000'))
  19 - access("T51"."C1"="T56"."C1")
       filter(("T56"."C1">='A 0000000' AND "T56"."C1"<='A 1000000'))
  21 - access("T51"."C1"="T57"."C1")
       filter(("T57"."C1">='A 0000000' AND "T57"."C1"<='A 1000000'))
  22 - access("T51"."C1"="T58"."C1")
       filter(("T58"."C1">='A 0000000' AND "T58"."C1"<='A 1000000'))

Here is the DBMS_XPLAN output for Oracle 11.2.0.1 for the SQL statement:

SQL_ID  abk386qc1xwkg, child number 0
-------------------------------------
SELECT /*+ USE_NL(T51 T52 T53 T54 T55 T56 T57 T58) */    T51.C1,   
T51.C2,    T52.C1,    T52.C2,    T53.C1,    T53.C2,    T54.C1,   
T54.C2,    T55.C1,    T55.C2,    T56.C1,    T56.C2,    T57.C1,   
T57.C2,    T58.C1,    T58.C2  FROM    T5 T51,    T5 T52,    T5 T53,   
T5 T54,    T5 T55,    T5 T56,    T5 T57,    T5 T58  WHERE   
T51.C1=T52.C1    AND T51.C1=T53.C1    AND T51.C1=T54.C1    AND
T51.C1=T55.C1    AND T51.C1=T56.C1    AND T51.C1=T57.C1    AND
T51.C1=T58.C1    AND T51.C1 BETWEEN 'A 0000000' AND 'A 1000000'

Plan hash value: 2002323537

------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |   1000K|00:00:00.50 |   26055 |
|*  1 |  TABLE ACCESS FULL| T5   |      1 |   1000K|   1000K|00:00:00.50 |   26055 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("T51"."C1">='A 0000000' AND "T51"."C1"<='A 1000000'))

Oracle ignored my hint – darn bugs.  Is that the twist?  :-)





Physical Reads are Very Fast, Why is this SQL Statement Slow

15 03 2010

March 15, 2010

How would you troubleshoot the following situation – what do you know, or believe to be true with at least 95% confidence, about this situation?  You encounter a SQL statement that is taking much longer to execute than expected.  You enable an extended SQL trace at level 8 and pass the resulting trace file through TKPROF.  In the TKPROF output you find the SQL statement that is of interest:

SELECT          
      COUNT ( * )
  FROM            I_JOURNAL m
               INNER JOIN
                  LIU_TYPES lt
               ON (m.LIU_TYPE = lt.LIU_TYPE)
            INNER JOIN
               LAWFUL_I li
            ON (m.LIID = li.LIID)
         LEFT OUTER JOIN
            PHONE_BOOK pb
         ON (    m.NORM_CIN = pb.NORM_CIN
             AND pb.DELETION_DATE IS NULL
             AND pb.OPERATOR_ID = :"SYS_B_00")
 WHERE   LIU_PRIORITY >= :"SYS_B_01"
         AND (li.ID IN
                    (:"SYS_B_02",
                     :"SYS_B_03",
                     :"SYS_B_04",
                     :"SYS_B_05",
                     :"SYS_B_06",
                     :"SYS_B_07",
                     :"SYS_B_08",
                     :"SYS_B_09",
[...]
                     :"SYS_B_59",
                     :"SYS_B_60",
                     :"SYS_B_61",
                     :"SYS_B_62"))
         AND (li.END_VALID_DATE IS NULL
              OR m.DISPLAY_DATE <= li.END_VALID_DATE)
         AND li.OPERATOR_ID = :"SYS_B_63"

In the above, roughly 50 lines of the SQL statement containing bind variables for the IN list were removed to save space.  The summary information for the SQL statement’s execution follows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.05       0.04          0          0          0           0
Fetch        2    105.00     102.69     283093     652774          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4    105.05     102.73     283093     652774          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 50 

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=652774 pr=283093 pw=0 time=102690302 us)
 888030   HASH JOIN RIGHT OUTER (cr=652774 pr=283093 pw=0 time=99929786 us)
  28488    INDEX RANGE SCAN OBJ#(45035) (cr=130 pr=0 pw=0 time=142564 us)(object id 45035)         PBK_GET_NORM_CIN_UK
 888030    HASH JOIN  (cr=652644 pr=283093 pw=0 time=85254971 us)
     15     TABLE ACCESS FULL OBJ#(44893) (cr=7 pr=0 pw=0 time=320 us)                             LIU_TYPES
 888117     TABLE ACCESS BY INDEX ROWID OBJ#(47625) (cr=652637 pr=283093 pw=0 time=63945559 us)    I_JOURNAL
 888179      NESTED LOOPS  (cr=5389 pr=4986 pw=0 time=23801052 us)
     61       INLIST ITERATOR  (cr=213 pr=1 pw=0 time=8299 us)
     61        TABLE ACCESS BY INDEX ROWID OBJ#(44860) (cr=213 pr=1 pw=0 time=7235 us)             LAWFUL_I
     61         INDEX RANGE SCAN OBJ#(45023) (cr=122 pr=0 pw=0 time=2454 us)(object id 45023)      LIN_ID_UK
 888117       INDEX RANGE SCAN OBJ#(52001) (cr=5176 pr=4985 pw=0 time=9904545 us)(object id 52001) IJL_LIN_FK_IX

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  db file sequential read                    283093        0.02          8.51
  SQL*Net message from client                     2        0.00          0.00

While reviewing the above you notice that there were 652,774 consistent gets and 283,093 physical block reads.  The wait events associated with the SQL statement indicate that there were 283,093 waits on the db file sequential read wait event, which indicates that all of the physical block reads were performed one block at a time (the Times Waited statistic for this event matches the disk statistic in the Fetch summary line).  If a total of 8.51 seconds (from the Total Waited statistic for the db file sequential read wait event) were needed to complete those 283,093 single block reads, then on average each single block read required approximately 0.000030 seconds – roughly 133 times faster than one revolution of a fast 15,000 RPM hard drive.

If we apply the average single block read time, there were 4,985 single block reads of object ID 52001 (the IJL_LIN_FK_IX index), which suggests that the total time for those physical reads to complete was 4,985 * 0.000030 seconds = 0.15 seconds.  There were 5,176 logical reads for that operation, which is just slightly more than the number of physical blocks that were read from disk for that operation.  The time= statistic for that line in the plan indicates that the operation required 9.90 seconds.  Keeping in mind the other wait events, what happened during the 9.75 seconds of that operation that did not involve the physical block reads?  Notice that the TKPROF statistics summary indicates that the query consumed 105.05 seconds of CPU time with an elapsed time of 102.73 seconds.
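
The arithmetic in the two paragraphs above can be double-checked with a few lines of Python; all of the input numbers are taken directly from the TKPROF output:

```python
# Numbers from the TKPROF output above
total_read_wait = 8.51   # total seconds waited on db file sequential read
read_count = 283_093     # number of single block read waits

avg_read = total_read_wait / read_count
print(f"average single block read time: {avg_read:.6f} s")  # 0.000030 s

# One revolution of a 15,000 RPM drive takes 60/15000 = 0.004 seconds
revolution = 60 / 15_000
print(f"times faster than one revolution: {revolution / avg_read:.0f}")  # 133

# Estimated physical read time for the IJL_LIN_FK_IX index range scan
ijl_reads = 4_985
ijl_read_time = ijl_reads * avg_read
print(f"index range scan read time: {ijl_read_time:.2f} s")  # 0.15 s

# Time reported for that operation that physical reads cannot explain
op_time = 9.90
print(f"unexplained time: {op_time - ijl_read_time:.2f} s")  # 9.75 s
```
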

The DBMS_XPLAN output for the SQL statement looks like this:

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                       |       |       | 28440 (100)|          |
|   1 |  SORT AGGREGATE                   |                       |     1 |   130 |            |          |
|*  2 |   HASH JOIN RIGHT OUTER           |                       | 29838 |  3788K| 28440   (1)| 00:05:42 |
|*  3 |    INDEX RANGE SCAN               | PBK_GET_NORM_CIN_UK   | 25437 |   571K|   130   (0)| 00:00:02 |
|*  4 |    HASH JOIN                      |                       | 29838 |  3117K| 28399   (1)| 00:05:41 |
|*  5 |     TABLE ACCESS FULL             | LIU_TYPES             |    16 |   160 |     3   (0)| 00:00:01 |
|*  6 |     TABLE ACCESS BY INDEX ROWID   | I_JOURNAL             |  1172 | 58600 |  1091   (0)| 00:00:14 |
|   7 |      NESTED LOOPS                 |                       | 30659 |  2904K| 28396   (1)| 00:05:41 |
|   8 |       INLIST ITERATOR             |                       |       |       |            |          |
|*  9 |        TABLE ACCESS BY INDEX ROWID| LAWFUL_I              |    26 |  1222 |    10   (0)| 00:00:01 |
|* 10 |         INDEX RANGE SCAN          | LIN_ID_UK             |    61 |       |     2   (0)| 00:00:01 |
|* 11 |       INDEX RANGE SCAN            | IJL_LIN_FK_IX         |  5318 |       |    30   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("M"."NORM_CIN"="PB"."NORM_CIN")
   3 - access("PB"."OPERATOR_ID"=:SYS_B_00 AND "PB"."DELETION_DATE" IS NULL)
       filter("PB"."DELETION_DATE" IS NULL)
   4 - access("M"."LIU_TYPE"="LT"."LIU_TYPE")
   5 - filter("LT"."LIU_PRIORITY">=:SYS_B_01)
   6 - filter(("LI"."END_VALID_DATE" IS NULL OR "M"."DISPLAY_DATE"<="LI"."END_VALID_DATE"))
   9 - filter("LI"."OPERATOR_ID"=:SYS_B_63)
  10 - access(("LI"."ID"=:SYS_B_02 OR "LI"."ID"=:SYS_B_03 OR "LI"."ID"=:SYS_B_04 OR
              "LI"."ID"=:SYS_B_05 OR "LI"."ID"=:SYS_B_06 OR "LI"."ID"=:SYS_B_07 OR "LI"."ID"=:SYS_B_08 OR
              "LI"."ID"=:SYS_B_09 OR "LI"."ID"=:SYS_B_10 OR "LI"."ID"=:SYS_B_11 OR "LI"."ID"=:SYS_B_12 OR
              "LI"."ID"=:SYS_B_13 OR "LI"."ID"=:SYS_B_14 OR "LI"."ID"=:SYS_B_15 OR "LI"."ID"=:SYS_B_16 OR
              "LI"."ID"=:SYS_B_17 OR "LI"."ID"=:SYS_B_18 OR "LI"."ID"=:SYS_B_19 OR "LI"."ID"=:SYS_B_20 OR
              "LI"."ID"=:SYS_B_21 OR "LI"."ID"=:SYS_B_22 OR "LI"."ID"=:SYS_B_23 OR "LI"."ID"=:SYS_B_24 OR
              "LI"."ID"=:SYS_B_25 OR "LI"."ID"=:SYS_B_26 OR "LI"."ID"=:SYS_B_27 OR "LI"."ID"=:SYS_B_28 OR
              "LI"."ID"=:SYS_B_29 OR "LI"."ID"=:SYS_B_30 OR "LI"."ID"=:SYS_B_31 OR "LI"."ID"=:SYS_B_32 OR
              "LI"."ID"=:SYS_B_33 OR "LI"."ID"=:SYS_B_34 OR "LI"."ID"=:SYS_B_35 OR "LI"."ID"=:SYS_B_36 OR
              "LI"."ID"=:SYS_B_37 OR "LI"."ID"=:SYS_B_38 OR "LI"."ID"=:SYS_B_39 OR "LI"."ID"=:SYS_B_40 OR
              "LI"."ID"=:SYS_B_41 OR "LI"."ID"=:SYS_B_42 OR "LI"."ID"=:SYS_B_43 OR "LI"."ID"=:SYS_B_44 OR
              "LI"."ID"=:SYS_B_45 OR "LI"."ID"=:SYS_B_46 OR "LI"."ID"=:SYS_B_47 OR "LI"."ID"=:SYS_B_48 OR
              "LI"."ID"=:SYS_B_49 OR "LI"."ID"=:SYS_B_50 OR "LI"."ID"=:SYS_B_51 OR "LI"."ID"=:SYS_B_52 OR
              "LI"."ID"=:SYS_B_53 OR "LI"."ID"=:SYS_B_54 OR "LI"."ID"=:SYS_B_55 OR "LI"."ID"=:SYS_B_56 OR
              "LI"."ID"=:SYS_B_57 OR "LI"."ID"=:SYS_B_58 OR "LI"."ID"=:SYS_B_59 OR "LI"."ID"=:SYS_B_60 OR
              "LI"."ID"=:SYS_B_61 OR "LI"."ID"=:SYS_B_62))
  11 - access("M"."LIID"="LI"."LIID")

Setting the STATISTICS_LEVEL parameter at the session level to ALL, followed by executing the SQL statement permitted the following execution plan to be retrieved using DBMS_XPLAN:

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                  | E-Rows | A-Rows | Buffers | Reads  | Writes | A-Time     |
-----------------------------------------------------------------------------------------------------------------------------
|   1 | SORT AGGREGATE                   |                       |      1 |      1 |     657K|    245K|      0 |00:01:40.79 |
|*  2 |  HASH JOIN RIGHT OUTER           |                       |  30607 |    894K|     657K|    245K|      0 |00:01:37.94 |
|*  3 |   INDEX RANGE SCAN               | PBK_GET_NORM_CIN_UK   |  25437 |  28490 |     130 |     19 |      0 |00:00:00.14 |
|*  4 |   HASH JOIN                      |                       |  30607 |    894K|     657K|    245K|      0 |00:01:23.16 |
|*  5 |    TABLE ACCESS FULL             | LIU_TYPES             |     16 |     15 |       7 |      0 |      0 |00:00:00.01 |
|*  6 |    TABLE ACCESS BY INDEX ROWID   | I_JOURNAL             |   1221 |    894K|     657K|    245K|      0 |00:01:01.70 |
|   7 |     NESTED LOOPS                 |                       |  31449 |    894K|    5428 |   5099 |      0 |00:00:24.02 |
|   8 |      INLIST ITERATOR             |                       |        |     61 |     206 |      1 |      0 |00:00:00.01 |
|*  9 |       TABLE ACCESS BY INDEX ROWID| LAWFUL_I              |     26 |     61 |     206 |      1 |      0 |00:00:00.01 |
|* 10 |        INDEX RANGE SCAN          | LIN_ID_UK             |     61 |     61 |     122 |      0 |      0 |00:00:00.01 |
|* 11 |      INDEX RANGE SCAN            | IJL_LIN_FK_IX         |   5318 |    894K|    5222 |   5098 |      0 |00:00:09.99 |
-----------------------------------------------------------------------------------------------------------------------------

Assume that the database version is 10.1.0.x, and that the tables contain the following number of rows:
I_JOURNAL: 5,000,000 rows
LAWFUL_I: 1,000 rows
PHONE_BOOK: 78,000 rows

Here are some of the questions that you might try answering when indicating what you know about the above performance problem:

  • What was the server doing in the 9.75 seconds of the INDEX RANGE SCAN operation on the last line of the execution plan that probably could not be attributed to physical I/O?
  • How is it possible that 105.05 seconds of CPU time were required for a SQL statement that had a total elapsed time of 102.73 seconds?  What was the CPU doing for 105.05 seconds?
  • How can you explain the 0.000030 second single block read time?
  • Why are all of the bind variables named similar to SYS_B_nn?
  • Are the number of elements in the IN list contributing to the performance problem?
  • Are there any hints that you might try to force a different execution plan, and if so, is there a reason to change the execution plan?
  • Why was there a miss in the library cache during both the parse and execute calls?
  • What caused the incorrect cardinality estimates, and how would you correct the cardinality estimates?
  • Does the ANSI style join syntax have an impact on the execution time for the query?
  • Would you recommend any changes to initialization parameters, and why?
  • What needs to be done to allow the query to complete in 5 seconds?
  • How would the server’s behavior change if significant competition for the server’s RAM caused excessive use of the operating system’s page file?
  • Are the server’s CPUs a constraining resource for this SQL statement – if so, why, if not, why not?

This SQL statement and TKPROF output were part of an interesting question in the OTN forums – I attempted to share my thoughts about this SQL statement in that forum thread.





From a VBS Script to a 10046 Trace and Back into a VBS Script

12 03 2010

March 12, 2010

I thought that we would try something a bit more difficult today.  In previous articles I showed how to generate and read 10046 trace files using various methods, and I also showed a couple of different VBS scripts that could interact with an Oracle database.  With some effort we could even read through a 10046 trace file to pull out bits of information, much like TKPROF does, but it probably does not make much sense to reproduce what TKPROF already accomplishes.  Instead, I will create a VBS script that generates a 10046 trace file at level 4 while executing a couple of SQL statements.  A second VBS script will then read the raw 10046 trace file and convert it back into a VBS script, complete with bind variables.  The code for the second VBS script is based on some of the code in my Toy project for performance tuning – something that I originally created just to see if it could be done.

First, we need a table to use as the data source for the first VBS script – this is the test table used in this blog article:

CREATE TABLE EMPLOYEE_RECORD_TEST AS
SELECT
  DECODE(TRUNC(DBMS_RANDOM.VALUE(0,5)),
          0,'MIKE',
          1,'ROB',
          2,'SAM',
          3,'JOE',
          4,'ERIC') EMPLOYEE_ID,
  TRUNC(SYSDATE)-ROUND(DBMS_RANDOM.VALUE(0,1000)) SHIFT_DATE,
  DECODE(TRUNC(DBMS_RANDOM.VALUE(0,10)),
          0,'VAC',
          1,'HOL',
          2,'BEREAVE',
          3,'JURY',
          4,'ABS',
          5,'EXCUSE',
          6,'MIL',
          'OTHER') INDIRECT_ID
FROM
  DUAL
CONNECT BY
  LEVEL<=1000;

With the test table built, we execute the following simple VBS script (using either CSCRIPT or WSCRIPT on a Windows client):

Const adCmdText = 1
Const adNumeric = 131
Const adDate = 7
Const adDBDate = 133
Const adDBTimeStamp = 135
Const adDBTime = 134
Const adVarChar = 200
Const adParamInput = 1

Dim strSQL
Dim strUsername
Dim strPassword
Dim strDatabase

Dim dbDatabase
Dim snpDataEmployees
Dim comDataEmployees
Dim snpDataAttend
Dim comDataAttend
Dim snpDataEmpRecord
Dim comDataEmpRecord

Set dbDatabase = CreateObject("ADODB.Connection")
Set snpDataEmployees = CreateObject("ADODB.Recordset")
Set comDataEmployees = CreateObject("ADODB.Command")
Set snpDataAttend = CreateObject("ADODB.Recordset")
Set comDataAttend = CreateObject("ADODB.Command")
Set snpDataEmpRecord = CreateObject("ADODB.Recordset")
Set comDataEmpRecord = CreateObject("ADODB.Command")

strUsername = "MyUsername"
strPassword = "MyPassword"
strDatabase = "MyDB"

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
dbDatabase.Open
'Should verify that the connection attempt was successful, but I will leave that for someone else to code

dbDatabase.Execute "ALTER SESSION SET TRACEFILE_IDENTIFIER = 'VBS2TRACE2VBS'"
dbDatabase.Execute "ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 4'"

strSQL = "INSERT INTO EMPLOYEE_RECORD_TEST(" & VBCrLf
strSQL = strSQL & "  EMPLOYEE_ID," & VBCrLf
strSQL = strSQL & "  SHIFT_DATE," & VBCrLf
strSQL = strSQL & "  INDIRECT_ID)" & VBCrLf
strSQL = strSQL & "VALUES(" & VBCrLf
strSQL = strSQL & "  ?," & VBCrLf
strSQL = strSQL & "  ?," & VBCrLf
strSQL = strSQL & "  ?)"

With comDataEmpRecord
    'Set up the command properties
    .CommandText = strSQL
    .CommandType = adCmdText
    .CommandTimeout = 30
    .ActiveConnection = dbDatabase

    .Parameters.Append .CreateParameter("employee_id", adVarChar, adParamInput, 15, "TEST")
    .Parameters.Append .CreateParameter("shift_date", adDate, adParamInput, 8, Date)
    .Parameters.Append .CreateParameter("indirect_id", adVarChar, adParamInput, 15, "HOL")
End With

'Rollback Test
dbDatabase.BeginTrans

comDataEmpRecord.Execute

dbDatabase.RollbackTrans

strSQL = "SELECT DISTINCT" & vbCrLf
strSQL = strSQL & "  EMPLOYEE_ID" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  EMPLOYEE_RECORD_TEST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  SHIFT_DATE>= ?" & vbCrLf
strSQL = strSQL & "  AND INDIRECT_ID= ?" & vbCrLf
strSQL = strSQL & "ORDER BY" & vbCrLf
strSQL = strSQL & "  EMPLOYEE_ID"

With comDataEmployees
    'Set up the command properties
    .CommandText = strSQL
    .CommandType = adCmdText
    .CommandTimeout = 30
    .ActiveConnection = dbDatabase

    .Parameters.Append .CreateParameter("shift_date", adDate, adParamInput, 8, DateAdd("d", -90, Date))
    .Parameters.Append .CreateParameter("indirect_id", adVarChar, adParamInput, 15, "VAC")
End With

strSQL = "SELECT" & vbCrLf
strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'2',1,0)) MON_COUNT," & vbCrLf
strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'3',1,0)) TUE_COUNT," & vbCrLf
strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'4',1,0)) WED_COUNT," & vbCrLf
strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'5',1,0)) THU_COUNT," & vbCrLf
strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'6',1,0)) FRI_COUNT" & vbCrLf
strSQL = strSQL & "FROM" & vbCrLf
strSQL = strSQL & "  EMPLOYEE_RECORD_TEST" & vbCrLf
strSQL = strSQL & "WHERE" & vbCrLf
strSQL = strSQL & "  EMPLOYEE_ID= ?" & vbCrLf
strSQL = strSQL & "  AND INDIRECT_ID= ?"

With comDataAttend
    'Set up the command properties
    .CommandText = strSQL
    .CommandType = adCmdText
    .CommandTimeout = 30
    .ActiveConnection = dbDatabase

    .Parameters.Append .CreateParameter("employee_id", adVarChar, adParamInput, 15, "HOOPER")
    .Parameters.Append .CreateParameter("indirect_id", adVarChar, adParamInput, 15, "EXCUSE")
End With

Set snpDataEmployees = comDataEmployees.Execute

If Not (snpDataEmployees Is Nothing) Then
    Do While Not snpDataEmployees.EOF
        comDataAttend("employee_id") = snpDataEmployees("employee_id")
        comDataAttend("indirect_id") = "EXCUSE"
        Set snpDataAttend = comDataAttend.Execute
        If Not snpDataAttend.EOF Then
            'Do Something with the data
        End If
        snpDataAttend.Close

        comDataAttend("indirect_id") = "ABS"
        Set snpDataAttend = comDataAttend.Execute
        If Not snpDataAttend.EOF Then
            'Do Something with the data
        End If
        snpDataAttend.Close

        snpDataEmployees.MoveNext
    Loop

    snpDataEmployees.Close
End If

dbDatabase.Close
Set snpDataEmployees = Nothing
Set snpDataAttend = Nothing
Set comDataEmployees = Nothing
Set comDataAttend = Nothing
Set snpDataEmpRecord = Nothing
Set comDataEmpRecord = Nothing
Set dbDatabase = Nothing

(TestScript.vbs – save as TestScript.vbs)

In the above, replace MyDB with a valid database name from the tnsnames.ora file, MyUsername with a valid username, and MyPassword with the password for that user.  The script begins by starting a transaction (the default behavior is to implicitly commit each statement); a row is inserted into the test table, and then a ROLLBACK is performed.  The script next submits a SQL statement that retrieves a list of up to 5 distinct employees from the test table.  For each of those (up to) 5 employees a second SQL statement is executed with two different bind variable sets to determine the number of times on each week day that the employee has been out of work on an excused (EXCUSE) or unexcused (ABS) absence.  If I were writing a real program to accomplish this task I would combine the three SELECT statements into a single SELECT statement, but I want to demonstrate how the second VBS script handles multiple SQL statements that are open at the same time.

Running the above script generated a trace file when executed against Oracle Database 11.2.0.1: or112_ora_5482_VBS2TRACE2VBS.trc  (save as C:\or112_ora_5482_VBS2TRACE2VBS.trc – Windows users can view the file with Wordpad and convert the file into a plain text file that can be opened with Notepad).  The goal is to take the trace file and transform it back into a VBS script, ignoring SQL statements that appear in the trace file at a depth greater than 0.

The output of the VBS script that reads the 10046 trace file and generates a VBS file should look something like this:

'Source File:C:\or112_ora_5482_VBS2TRACE2VBS.trc

HyperactiveTrace

Sub HyperactiveTrace()
    Const adCmdText = 1
    Const adCmdStoredProc = 4
    Const adParamInput = 1
    Const adVarNumeric = 139
    Const adBigInt = 20
    Const adDecimal = 14
    Const adDouble = 5
    Const adInteger = 3
    Const adLongVarBinary = 205
    Const adNumeric = 131
    Const adSingle = 4
    Const adSmallInt = 2
    Const adTinyInt = 16
    Const adUnsignedBigInt = 21
    Const adUnsignedInt = 19
    Const adUnsignedSmallInt = 18
    Const adUnsignedTinyInt = 17
    Const adDate = 7
    Const adDBDate = 133
    Const adDBTimeStamp = 135
    Const adDBTime = 134
    Const adVarChar = 200
    Const adChar = 129
    Const adUseClient = 3

    Dim i
    Dim strSQL
    Dim strUsername
    Dim strPassword
    Dim strDatabase

    Dim dbDatabase
    Set dbDatabase = CreateObject("ADODB.Connection")
    Dim snpData1
    Dim comData1
    Set snpData1 = CreateObject("ADODB.Recordset")
    Set comData1 = CreateObject("ADODB.Command")
    Dim snpData2
    Dim comData2
    Set snpData2 = CreateObject("ADODB.Recordset")
    Set comData2 = CreateObject("ADODB.Command")
    Dim snpData3
    Dim comData3
    Set snpData3 = CreateObject("ADODB.Recordset")
    Set comData3 = CreateObject("ADODB.Command")
    Dim snpData4
    Dim comData4
    Set snpData4 = CreateObject("ADODB.Recordset")
    Set comData4 = CreateObject("ADODB.Command")
    Dim snpData5
    Dim comData5
    Set snpData5 = CreateObject("ADODB.Recordset")
    Set comData5 = CreateObject("ADODB.Command")
    Dim snpData6
    Dim comData6
    Set snpData6 = CreateObject("ADODB.Recordset")
    Set comData6 = CreateObject("ADODB.Command")
    Dim snpData7
    Dim comData7
    Set snpData7 = CreateObject("ADODB.Recordset")
    Set comData7 = CreateObject("ADODB.Command")
    Dim snpData8
    Dim comData8
    Set snpData8 = CreateObject("ADODB.Recordset")
    Set comData8 = CreateObject("ADODB.Command")
    Dim snpData9
    Dim comData9
    Set snpData9 = CreateObject("ADODB.Recordset")
    Set comData9 = CreateObject("ADODB.Command")
    Dim snpData10
    Dim comData10
    Set snpData10 = CreateObject("ADODB.Recordset")
    Set comData10 = CreateObject("ADODB.Command")
    Dim snpData11
    Dim comData11
    Set snpData11 = CreateObject("ADODB.Recordset")
    Set comData11 = CreateObject("ADODB.Command")
    Dim snpData12
    Dim comData12
    Set snpData12 = CreateObject("ADODB.Recordset")
    Set comData12 = CreateObject("ADODB.Command")
    Dim snpData13
    Dim comData13
    Set snpData13 = CreateObject("ADODB.Recordset")
    Set comData13 = CreateObject("ADODB.Command")
    Dim snpData14
    Dim comData14
    Set snpData14 = CreateObject("ADODB.Recordset")
    Set comData14 = CreateObject("ADODB.Command")
    Dim snpData15
    Dim comData15
    Set snpData15 = CreateObject("ADODB.Recordset")
    Set comData15 = CreateObject("ADODB.Command")
    Dim snpData16
    Dim comData16
    Set snpData16 = CreateObject("ADODB.Recordset")
    Set comData16 = CreateObject("ADODB.Command")
    Dim snpData17
    Dim comData17
    Set snpData17 = CreateObject("ADODB.Recordset")
    Set comData17 = CreateObject("ADODB.Command")
    Dim snpData18
    Dim comData18
    Set snpData18 = CreateObject("ADODB.Recordset")
    Set comData18 = CreateObject("ADODB.Command")
    Dim snpData19
    Dim comData19
    Set snpData19 = CreateObject("ADODB.Recordset")
    Set comData19 = CreateObject("ADODB.Command")
    Dim snpData20
    Dim comData20
    Set snpData20 = CreateObject("ADODB.Recordset")
    Set comData20 = CreateObject("ADODB.Command")

    On Error Resume Next

    strUsername = "MyUsername"
    strPassword = "MyPassword"
    strDatabase = "MyDB"
    dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUsername & ";Password=" & strPassword & ";"
    dbDatabase.Open
    'Should verify that the connection attempt was successful, but I will leave that for someone else to code

    'dbDatabase.BeginTrans

    'Transaction Committed and NO Records were Affected, Need to determine transaction start
    dbDatabase.CommitTrans

    'dbDatabase.BeginTrans

    Set comData3 = CreateObject("ADODB.Command")

    strSQL = "INSERT INTO EMPLOYEE_RECORD_TEST(" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_ID," & vbCrLf
    strSQL = strSQL & "  SHIFT_DATE," & vbCrLf
    strSQL = strSQL & "  INDIRECT_ID)" & vbCrLf
    strSQL = strSQL & "VALUES(" & vbCrLf
    strSQL = strSQL & "  ?," & vbCrLf
    strSQL = strSQL & "  ?," & vbCrLf
    strSQL = strSQL & "  ?)"

    With comData3
        'Set up the command properties
        .CommandText = strSQL
        .CommandType = adCmdText
        .CommandTimeout = 30
        .ActiveConnection = dbDatabase
        'Bind variables will be defined below, if there are no bind variable, uncomment the next line and add the looping construct
    End With
    'comData3.Execute

    With comData3
        .Parameters.Append .CreateParameter("B1", adChar, adParamInput, 4, "TEST")
        .Parameters.Append .CreateParameter("B2", adDate, adParamInput, 7, "3/11/2010 0:0:0")
        .Parameters.Append .CreateParameter("B3", adChar, adParamInput, 3, "HOL")
    End With

    comData3("B1") = "TEST"
    comData3("B2") = cDate("3/11/2010 0:0:0")
    comData3("B3") = "HOL"

    comData3.Execute

    'Transaction Rolled Back and Records Should have been Affected, Need to determine transaction start
    dbDatabase.RollbackTrans

    'dbDatabase.BeginTrans

    'Cursor 2 Closing
    If snpData2.State = 1 Then
        snpData2.Close
    End If
    Set comData2 = Nothing

    Set comData2 = CreateObject("ADODB.Command")

    strSQL = "SELECT DISTINCT" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_ID" & vbCrLf
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_RECORD_TEST" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    strSQL = strSQL & "  SHIFT_DATE>= ?" & vbCrLf
    strSQL = strSQL & "  AND INDIRECT_ID= ?" & vbCrLf
    strSQL = strSQL & "ORDER BY" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_ID"

    With comData2
        'Set up the command properties
        .CommandText = strSQL
        .CommandType = adCmdText
        .CommandTimeout = 30
        .ActiveConnection = dbDatabase
        'Bind variables will be defined below, if there are no bind variable, uncomment the next line and add the looping construct
    End With
    'Set snpData2 = comData2.Execute

    'Cursor 4 Closing
    If snpData4.State = 1 Then
        snpData4.Close
    End If
    Set comData4 = Nothing

    'Cursor 4 Closing
    If snpData4.State = 1 Then
        snpData4.Close
    End If
    Set comData4 = Nothing

    'Cursor 4 Closing
    If snpData4.State = 1 Then
        snpData4.Close
    End If
    Set comData4 = Nothing

    With comData2
        .Parameters.Append .CreateParameter("B1", adDate, adParamInput, 7, "12/11/2009 0:0:0")
        .Parameters.Append .CreateParameter("B2", adChar, adParamInput, 3, "VAC")
    End With

    comData2("B1") = cDate("12/11/2009 0:0:0")
    comData2("B2") = "VAC"

    Set snpData2 = comData2.Execute

    If Not (snpData2 Is Nothing) Then
        Do While Not snpData2.EOF

            snpData2.MoveNext
        Loop
    End If

    'Cursor 4 Closing
    If snpData4.State = 1 Then
        snpData4.Close
    End If
    Set comData4 = Nothing

    Set comData4 = CreateObject("ADODB.Command")

    strSQL = "SELECT" & vbCrLf
    strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'2',1,0)) MON_COUNT," & vbCrLf
    strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'3',1,0)) TUE_COUNT," & vbCrLf
    strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'4',1,0)) WED_COUNT," & vbCrLf
    strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'5',1,0)) THU_COUNT," & vbCrLf
    strSQL = strSQL & "  SUM(DECODE(TO_CHAR(SHIFT_DATE,'D'),'6',1,0)) FRI_COUNT" & vbCrLf
    strSQL = strSQL & "FROM" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_RECORD_TEST" & vbCrLf
    strSQL = strSQL & "WHERE" & vbCrLf
    strSQL = strSQL & "  EMPLOYEE_ID= ?" & vbCrLf
    strSQL = strSQL & "  AND INDIRECT_ID= ?"

    With comData4
        'Set up the command properties
        .CommandText = strSQL
        .CommandType = adCmdText
        .CommandTimeout = 30
        .ActiveConnection = dbDatabase
        'Bind variables will be defined below, if there are no bind variable, uncomment the next line and add the looping construct
    End With
    'Set snpData4 = comData4.Execute

    With comData4
        .Parameters.Append .CreateParameter("B1", adChar, adParamInput, 4, "ERIC")
        .Parameters.Append .CreateParameter("B2", adChar, adParamInput, 6, "EXCUSE")
    End With

    comData4("B1") = "ERIC"
    comData4("B2") = "EXCUSE"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    'Cursor 5 Closing
    If snpData5.State = 1 Then
        snpData5.Close
    End If
    Set comData5 = Nothing

    comData4("B1") = "ERIC"
    comData4("B2") = "ABS"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    'Cursor 5 Closing
    If snpData5.State = 1 Then
        snpData5.Close
    End If
    Set comData5 = Nothing

    comData4("B1") = "JOE"
    comData4("B2") = "EXCUSE"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    comData4("B1") = "JOE"
    comData4("B2") = "ABS"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    comData4("B1") = "MIKE"
    comData4("B2") = "EXCUSE"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    comData4("B1") = "MIKE"
    comData4("B2") = "ABS"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    comData4("B1") = "SAM"
    comData4("B2") = "EXCUSE"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    comData4("B1") = "SAM"
    comData4("B2") = "ABS"

    Set snpData4 = comData4.Execute

    If Not (snpData4 Is Nothing) Then
        Do While Not snpData4.EOF

            snpData4.MoveNext
        Loop
    End If

    'Transaction Committed and NO Records were Affected, Need to determine transaction start
    dbDatabase.CommitTrans

    'dbDatabase.BeginTrans

    'Cursor 5 Closing
    If snpData5.State = 1 Then
        snpData5.Close
    End If
    Set comData5 = Nothing

    '*************************************************************
    'Maximum Recordset Number Used is 5 - Adjust the Code at the Start Accordingly
    '*************************************************************

    If snpData1.State = 1 Then
        snpData1.Close
    End If
    Set snpData1 = Nothing

    If snpData2.State = 1 Then
        snpData2.Close
    End If
    Set snpData2 = Nothing

    If snpData3.State = 1 Then
        snpData3.Close
    End If
    Set snpData3 = Nothing

    If snpData4.State = 1 Then
        snpData4.Close
    End If
    Set snpData4 = Nothing

    If snpData5.State = 1 Then
        snpData5.Close
    End If
    Set snpData5 = Nothing

    Set comData1 = Nothing
    Set comData2 = Nothing
    Set comData3 = Nothing
    Set comData4 = Nothing
    Set comData5 = Nothing

    dbDatabase.Close
    Set dbDatabase = Nothing
End Sub

(TraceToVBSOutput.vbs – save as TraceToVBSOutput.vbs)

If you compare the original TestScript.vbs with the above output, you will see that the two scripts are similar, but with several distinct differences:

  • It is not necessarily easy to determine when a transaction starts, but it is possible to determine when a transaction ends.  The script that reads the trace file inserts a commented-out ‘dbDatabase.BeginTrans line where it believes that a transaction should start – remove the leading ‘ if that is the correct starting point for the transaction.
  • Looping structures with nested SQL statements (the retrieval of the employee list from the EMPLOYEE_RECORD_TEST table and the probing of matching rows for each of those employees) cannot be reproduced automatically – you will have to recognize when one SQL statement is feeding the bind variable values of a second SQL statement.
  • The VBS script assumes that up to 20 cursors will be open at any one time, but will automatically handle many more than 20 simultaneously open cursors.  The resulting VBS file should be cleaned up to remove the unneeded comData and snpData objects.
  • There are spurious snpDatan.Close statements – see the suggestions for improvements.
  • SQL statements submitted without bind variables will not have code written to execute those statements in the generated VBS file – see the suggestions for improvements.
  • Bind variables that are submitted as VARCHAR (adVarChar) are written to the trace file as if the bind variables were declared as CHAR (adChar) – while this does not appear to cause a problem, it might be an unexpected change when comparing the test script with the automatically generated script.

Suggestions for improvement:

  • Recognize the EXEC line in the 10046 trace and use that to actually indicate that a SQL statement should execute in the generated script, rather than executing the SQL statement in response to the submission of bind variables.
  • Allow submitting the source trace file name and the destination (generated) VBS filename on the command line.
  • Allow submitting the username, password, and database name on the command line or in a web-based user interface.
  • Correct the script so that it does not attempt to close recordsets when those recordsets were never opened at dep=0 – this is caused by the script seeing a recursive SQL statement that is preparing to open with that cursor number.

The VBS script that converts 10046 trace files to VBS script files may be downloaded here: TraceToVBS.vbs (save as TraceToVBS.vbs).  There may be bugs in the script, but it should be close enough to provide some degree of educational benefit.

Related Blog Articles:
10046 Extended SQL Trace Interpretation
Automated DBMS_XPLAN, Trace, and Send to Excel
Database Inspector Gadget
Simple VBS Script to Retrieve Data from Oracle
Toy Project for Performance Tuning 2





Select Statement Causing an ORA-00001?

11 03 2010

March 11, 2010

Sometimes I receive seemingly interesting emails showing Oracle errors – leaving me to ponder… certainly, that can’t cause an error, can it?  Here is one that I received a year ago (paraphrased):

The commercially developed application that we are using displayed an error message identifying a SELECT statement as the source of an ORA-00001 error.  What is the source of the Oracle constraint error?  The error message displayed by the application is as follows:

select account_period
from PROJECT_SUMMARY
where project_id = :m_saProjSumProjectID[nProjSumIndex]
and id =:m_saProjSumSavedID[nProjSumIndex]
into :nACCOUNT_PERIOD

ORA-00001: unique constraint (TESTUSER.SYS_C006354) violated

This transaction has resulted in violating an Oracle defined constraint.
Constraints are enforced by the database manager.  This transaction has
been rolled back.

My first thought at the sight of this error was that the commercially developed application was actually displaying one of the SQL statements that was executed after the SQL statement that triggered the primary key violation.  It is easy for a runtime error to slide through unnoticed for a period of time before the program detects that an error happened – maybe that is just a sign of sloppy programming (I hope not, because this has happened in some of my custom-developed programs too).

How would we start to troubleshoot this error message?  The “SYS_C” portion of the constraint name indicates that this is most likely a system-generated constraint name, probably created to enforce a uniqueness requirement for a primary key column.  Exporting the data from the database using Datapump export (or the legacy exp utility) and importing the data into a new database could cause the number following “SYS_C” to change, so it is likely that constraint SYS_C006354 in my database (which is used by the same application) is very different from that of the person who posed the question to me.

Let’s see if we are able to find the answer by working the problem in a circular fashion.  For example, let’s find the name of the index that is used to enforce the primary key constraint on one of the application’s tables:

SELECT
  INDEX_NAME
FROM
  DBA_INDEXES
WHERE
  INDEX_NAME LIKE 'SYS%'
  AND TABLE_NAME='INVENTORY_TRANS';

INDEX_NAME
-----------
SYS_C005168

Now that we know that the index is named SYS_C005168, we could do something like this:

SELECT
  DC.OWNER,
  DC.CONSTRAINT_NAME,
  DC.CONSTRAINT_TYPE,
  DC.TABLE_NAME,
  DC.STATUS,
  SUBSTR(DCC.COLUMN_NAME,1,30) COLUMN_NAME
FROM
  DBA_CONSTRAINTS DC,
  DBA_CONS_COLUMNS DCC
WHERE
  DC.CONSTRAINT_NAME='SYS_C005168'
  AND DC.OWNER='TESTUSER'
  AND DC.OWNER=DCC.OWNER
  AND DC.CONSTRAINT_NAME=DCC.CONSTRAINT_NAME
ORDER BY
  DCC.POSITION;

OWNER   CONSTRAINT_NAME  CONSTRAINT_TYPE  TABLE_NAME       STATUS   COLUMN_NAME
------  ---------------  ---------------  ---------------  -------  --------------
SYSADM  SYS_C005168      P                INVENTORY_TRANS  ENABLED  TRANSACTION_ID

The above output shows that the primary key constraint SYS_C005168 enforces the uniqueness of the primary key (TRANSACTION_ID) column in the table INVENTORY_TRANS.  We just demonstrated that we now know what we already mostly knew.

In the case of the person who sent the email to me, the table name was not known.  So, we could take the last of the above SQL statements and substitute SYS_C006354 in place of SYS_C005168 to find the table name and primary key column that was violated.  If the SQL statement failed to return usable information the next step might be to enable a 10046 trace at level 4 for one of the affected sessions, and try to reproduce the problem.  A 10046 trace will list the sequence of events that led up to the error message appearing in the client application.
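A level 4 trace (which captures bind variable values) might be enabled with commands similar to the following sketch – the SID and SERIAL# values are placeholders, and DBMS_SYSTEM is owned by SYS and is not granted to other users by default:

```sql
-- For the current session:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 4';

-- For another session (SID 159, SERIAL# 909 are placeholder values):
EXEC SYS.DBMS_SYSTEM.SET_EV(159, 909, 10046, 4, '')

-- To disable the trace in the current session:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
```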





Why Doesn’t This Trigger Work – No Developer Tools Allowed in the Database

10 03 2010

March 10, 2010

In one of last week’s blog articles I asked why a SQL statement that had appeared in a book, was copied to various websites, and was offered as an example of good SQL simply did not work.  A couple of readers of this blog took up the challenge to offer suggestions.  I think that I found another example of blindly reproducing others’ code without verifying that the code works as expected.  What did I find this time?

http://www.orafaq.com/scripts/security/blocktools.txt

rem -----------------------------------------------------------------------
rem Filename:   NoTOAD.sql
rem Purpose:    Block developers from using TOAD and other tools on
rem             production databases.
rem Date:       19-Jan-2004
rem Author:     Frank Naude
rem -----------------------------------------------------------------------

CONNECT / AS SYSDBA;

CREATE OR REPLACE TRIGGER block_tools_from_prod
  AFTER LOGON ON DATABASE
DECLARE
  v_prog sys.v_$session.program%TYPE;
BEGIN
  SELECT program INTO v_prog
    FROM sys.v_$session
  WHERE  audsid = USERENV('SESSIONID')
    AND  audsid != 0  -- Don't Check SYS Connections
    AND  rownum = 1;  -- Parallel processes will have the same AUDSID's

  IF UPPER(v_prog) LIKE '%TOAD%' OR UPPER(v_prog) LIKE '%T.O.A.D%' OR -- Toad
     UPPER(v_prog) LIKE '%SQLNAV%' OR -- SQL Navigator
     UPPER(v_prog) LIKE '%PLSQLDEV%' OR -- PLSQL Developer
     UPPER(v_prog) LIKE '%BUSOBJ%' OR   -- Business Objects
     UPPER(v_prog) LIKE '%EXCEL%'       -- MS-Excel plug-in
  THEN
     RAISE_APPLICATION_ERROR(-20000, 'Development tools are not allowed on PROD DB!');
  END IF;
END;
/
SHOW ERRORS

If you search the Internet you will find this piece of code, minus the header block, on various websites as recently as 2010, including in this OTN thread.  The idea of blocking development tools in a production database could be a valid goal, so I am able to understand the interest in the trigger.  What is wrong with the trigger?  Is it just a case that SQL*Plus could also be considered a developer tool?  What about the Crystal Reports development environment?  What if the developer created his own development tool to access the database?  What if… someone changed the filename of TOAD?  What if… the trigger doesn’t actually block Excel without any tricks such as changing Excel’s filename?

The contents of this blog article were tested with Oracle Database 11.1.0.7 with an Oracle 11.1.0.7 client, Excel 2007, and Windows Vista as the client operating system.  First, I created the trigger when connected as SYS, and then queried V$SESSION to find that only my SQL*Plus connection was listed.

Then in Excel 2007 I selected Get External Data – From Other Sources:

When prompted, I connected to the database as a normal user:

I was then able to freely execute queries against the database using the Microsoft Query Tool, which would then allow me to very easily return the rows to an Excel spreadsheet.

OK, maybe the above is cheating… a little.

But what is the point of blocking Excel with the logon trigger?  Is the trigger supposed to stop someone from firing up an Excel macro that uses ADO to connect to the database and retrieve whatever information the developer would like to see?  Kind of like this example:
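A minimal sketch of such a macro follows – the connection details are placeholders, error handling is omitted, and the table name is simply borrowed from the earlier test script:

```vb
Sub RetrieveData()
    Dim dbDatabase As Object
    Dim snpData As Object

    'Late-bound ADO connection, so no reference to the ADO library is needed
    Set dbDatabase = CreateObject("ADODB.Connection")
    dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=MyDB;" & _
        "User ID=MyUsername;Password=MyPassword;"
    dbDatabase.Open

    Set snpData = dbDatabase.Execute( _
        "SELECT EMPLOYEE_ID, SHIFT_DATE, INDIRECT_ID FROM EMPLOYEE_RECORD_TEST")
    If Not snpData.EOF Then
        'Dump the retrieved rows into the active worksheet starting at cell A1
        ActiveSheet.Range("A1").CopyFromRecordset snpData
    End If

    snpData.Close
    dbDatabase.Close
End Sub
```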

The logon trigger that appears on several websites did not block my Excel connection – if you take a look at the first screen capture you will see that the logon trigger was created without error, and then I was able to connect with the Microsoft Query Tool and my Excel macro using a normal user’s account.  Someone want to try explaining what went wrong?  Must be that we need to file a Metalink bug report, after all, I saw this example on the news so it must be true.  :-)

Maybe Oracle Database 11.2.0.1 will respond more favorably (Oracle 11.1.0.7 client, Excel 2007, and Windows Vista as the client operating system):

While this code might have worked fine in 2004, it would probably make sense to test this trigger code before copying it to your website.  I think that this is worth repeating:

Lesson 1: if you plan to publish something, whether in book form or on the Internet, make certain that what you publish actually works (or at least looks like you put some effort into it).

Lesson 2: if you plan to copy someone else’s work and post it on your website/blog make certain that what you copy and pass off as your own actually works.

Lesson 3: don’t trust everything that you read on the Internet or in a book without first verifying that the information is correct, even if you find the information on your favorite website.





V$SESSION_LONGOPS – Where’s My SQL Statement

9 03 2010

March 9, 2010

A question recently appeared on the OTN forums that stated the following:

I have a query joining 4 tables and it returns 48 million. I have used /+parallel(tablename,4) */ hint with the select clause. I want to the 48 million returned by the query to another table t2.

I do get the select result in 20 minutes. when I am performing insert, I checked gv$session_longops. I usually check like select * from gv$session_longops where time_remaining>0.

It didnt return anything. I removed the where clause and rather gave the sid.(i found the sql_id from gv$session)

select * from gv$session_longops where sql_id='XXXXXXX';

I got 3 rows but time_remaining column is 0. Could someone explain why this is so.

OS: Linux
DB:11.1.0.6.0 – 11g
Rac- 2 node RAC

So what is happening – why does GV$SESSION_LONGOPS show 3 rows for the SQL statement, all with the TIME_REMAINING column set to 0?  This is a good question, but I do not know if there is a straightforward answer.  Possibly one of the first questions to ask is: what is the SQL statement, and what does its execution plan look like?

It probably makes sense to check the documentation for the GV$SESSION_LONGOPS view (or more accurately, the documentation for the single instance V$SESSION_LONGOPS view):

“V$SESSION_LONGOPS displays the status of various operations that run for longer than 6 seconds (in absolute time). These operations currently include many backup and recovery functions, statistics gathering, and query execution, and more operations are added for every Oracle release.

To monitor query execution progress, you must be using the cost-based optimizer and you must:

  • Set the TIMED_STATISTICS or SQL_TRACE parameters to true
  • Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package”

From the above, having the TIMED_STATISTICS parameter set to FALSE or failing to gather the statistics for the objects using DBMS_STATS are two possible reasons for the TIME_REMAINING column being set to 0.  Any other ideas?  Where is that SQL statement and execution plan when you need it?  Why ask for the execution plan?  From the documentation link you will notice that the view definition includes the SQL_PLAN_LINE_ID column, which refers to a specific line in the execution plan for the query.  From this bit of information, we just need to define what is considered an operation (and does it have to be an uninterrupted operation) – without parallel query we might be able to perform a couple of tests.  The definition of an operation and the 6 second rule might be very important.
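Before running any tests it might make sense to verify those first two requirements; for example (the table names are those used in this test case):

```sql
-- Verify that timed statistics are enabled (TRUE is the default when
-- STATISTICS_LEVEL is set to TYPICAL or ALL):
SHOW PARAMETER TIMED_STATISTICS

-- Gather statistics for the two test tables:
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER, TABNAME=>'T3')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER, TABNAME=>'T4')
```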

We will use tables T3 and T4 from yesterday’s blog article since the original poster did not provide a SQL statement to reproduce the problem.  Consider the following query:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY')
  AND T3.C2=T4.C2;
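Capturing the actual execution statistics for a statement like this one typically involves commands similar to the following:

```sql
ALTER SESSION SET STATISTICS_LEVEL='ALL';

-- (execute the SQL statement of interest here)

-- Retrieve the plan, with actual row counts and timing, for the
-- last statement executed by this session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

The ALLSTATS LAST format parameter produces the Starts, A-Rows, and A-Time columns that appear in the plan output.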

If we set the STATISTICS_LEVEL parameter to ALL at the session level, run the SQL statement, and then generate the DBMS_XPLAN for the last execution, we may see an execution plan that looks something like this:

SQL_ID  2d4f5x92axqgn, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE   
TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY')    AND T3.C2=T4.C2

Plan hash value: 1396201636

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    874K|00:03:31.20 |    3771K|   3763K|       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    849K|    874K|00:03:31.20 |    3771K|   3763K|    33M|  5591K|   50M (0)|
|*  2 |   TABLE ACCESS FULL| T3   |      1 |    802K|    795K|00:02:32.69 |    2743K|   2743K|       |       |          |
|   3 |   TABLE ACCESS FULL| T4   |      1 |     25M|     25M|00:00:35.93 |    1028K|   1020K|       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   2 - filter((TRUNC(INTERNAL_FUNCTION("C2"))>=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TRUNC(INTERNAL_FUNCTION("C2"))<=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

From the above, we are able to see that Oracle is reporting that the full table scan of table T3 required roughly 152.69 seconds and the full table scan of table T4 required roughly 35.93 seconds.  Assume that in another session we executed the following SQL statement roughly every 6 seconds while the above SQL statement was in the process of being executed:

SELECT
  SQL_ID,
  SQL_PLAN_HASH_VALUE HASH_VALUE,
  SQL_PLAN_LINE_ID LINE_ID,
  OPNAME,
  TARGET,
  TARGET_DESC,
  TIME_REMAINING,
  ELAPSED_SECONDS,
  SID,
  SERIAL#,
  USERNAME
FROM
  V$SESSION_LONGOPS
WHERE
  TIME_REMAINING>0;

The output of the above SQL statement might look something like the following:

SQL_ID        HASH_VALUE    LINE_ID OPNAME          TARGET          TARGET_DESC  TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL#  USERNAME
------------- ---------- ---------- --------------- --------------- ------------ -------------- --------------- ---------- ----------  --------
2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                             104              53        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              92              62        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              82              73        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              74              81        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              59              95        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              52             101        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              41             112        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              33             120        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              23             131        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                              14             140        159        909  TESTUSER

2d4f5x92axqgn 1396201636          2 Table Scan      TESTUSER.T3                               5             148        159        909  TESTUSER

no rows selected

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                              57              10        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                              42              18        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                              27              31        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                              18              41        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              53        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              57        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              63        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              70        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              75        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              81        159        909  TESTUSER

2d4f5x92axqgn 1396201636          3 Table Scan      TESTUSER.T4                               6              86        159        909  TESTUSER

no rows selected

From the above output it appears that the full table scan of table T3 required between 148 and 154 seconds.  At the point where the full table scan of T3 completed, and for the first several seconds (probably about 6) of the full table scan of T4, the monitoring query executed in the second session returned no rows – after roughly 6 seconds of reading table T4, the query of V$SESSION_LONGOPS started reporting that table T4 was being read.  From the above it appears that the read of table T4 required between 86 and 94 seconds, yet that does not agree with the DBMS_XPLAN output – it appears that in the V$SESSION_LONGOPS output the hash join operation’s time is being reported as part of the elapsed time for reading table T4.

So, we have found a third possible reason for the rows in V$SESSION_LONGOPS appearing as 0 for the OP’s SQL statement – the line in the plan had not yet consumed 6 seconds of time, or the operation had already completed.
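For reference, a monitoring query shaped roughly like the following would produce the listings shown above (a sketch, not necessarily the exact script used here; the column list mirrors the output headings, and all of the referenced columns exist in V$SESSION_LONGOPS in Oracle Database 11.1 and later):

```sql
-- Sketch of a V$SESSION_LONGOPS monitoring query; rows appear only after
-- an operation has been estimated to run for roughly 6 seconds or longer.
SELECT
  SL.SQL_ID,
  SL.SQL_PLAN_HASH_VALUE HASH_VALUE,
  SL.SQL_PLAN_LINE_ID LINE_ID,
  SL.OPNAME,
  SL.TARGET,
  SL.TARGET_DESC,
  SL.TIME_REMAINING,
  SL.ELAPSED_SECONDS,
  SL.SID,
  SL.SERIAL#,
  SL.USERNAME
FROM
  V$SESSION_LONGOPS SL
WHERE
  SL.TIME_REMAINING > 0;
```

Note that the TIME_REMAINING > 0 predicate is what causes completed operations to drop out of the output between executions of the monitoring query.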

Let’s look at one of the other SQL statements from yesterday’s blog article:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  T3.C2 BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY') + (1-1/24/60/60)
  AND T3.C2=T4.C2;

This statement is equivalent to the previous SQL statement, except that it is able to use the index that exists on table T3.  The execution plan follows:

SQL_ID  539d93k50ruz3, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE    T3.C2
BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY') + (1-1/24/60/60)    AND T3.C2=T4.C2

Plan hash value: 1243183227

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |      1 |        |    874K|00:00:05.19 |   85051 |       |       |          |
|   1 |  MERGE JOIN                   |           |      1 |    795K|    874K|00:00:05.19 |   85051 |       |       |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | T4        |      1 |    795K|    795K|00:00:01.69 |   51097 |       |       |          |
|*  3 |    INDEX RANGE SCAN           | IND_T4_C2 |      1 |    795K|    795K|00:00:00.42 |   10841 |       |       |          |
|*  4 |   SORT JOIN                   |           |    795K|    795K|    874K|00:00:02.11 |   33954 |    30M|  1977K|   26M (0)|
|   5 |    TABLE ACCESS BY INDEX ROWID| T3        |      1 |    795K|    795K|00:00:00.58 |   33954 |       |       |          |
|*  6 |     INDEX RANGE SCAN          | IND_T3_C2 |      1 |    795K|    795K|00:00:00.18 |    2114 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T4"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T4"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
   4 - access("T3"."C2"="T4"."C2")
       filter("T3"."C2"="T4"."C2")
   6 - access("T3"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T3"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))

The above SQL statement completed too quickly to report any time in the V$SESSION_LONGOPS view, so let’s set the '08-MAR-2009' date to '08-MAR-2007' and re-run the SQL statement.  For quite a bit of time the query of V$SESSION_LONGOPS only returned the following:

no rows selected

But the SQL statement was executing – so why did it not appear in V$SESSION_LONGOPS?  Where is that link for reporting a bug on Metalink?  :-)  Or maybe, just maybe, Oracle is not performing any single operation in the execution plan continuously for more than 6 seconds (reading the index at ID 3, followed by the corresponding table rows at ID 2, and then back to the index at ID 3 again).

Eventually, the following appeared after what must have been a minute or two:

SQL_ID        HASH_VALUE    LINE_ID OPNAME          TARGET          TARGET_DESC  TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- --------------- --------------- ------------ -------------- --------------- ---------- ---------- --------
f3scvt222bu0t 1243183227          4 Sort Output                                            1194              12        159        909 TESTUSER

f3scvt222bu0t 1243183227          4 Sort Output                                            2115              22        159        909 TESTUSER

f3scvt222bu0t 1243183227          4 Sort Output                                            2115              22        159        909 TESTUSER
...
(in the above the last row continued to repeat with ELAPSED_SECONDS displayed as 22 for a minute or two)

Ready, aim,… hey – why is the TARGET column blank?  The LINE_ID column (the SQL_PLAN_LINE_ID column in V$SESSION_LONGOPS) reported that Oracle was executing ID 4 of the execution plan.  I can’t explain why the last line remained in the V$SESSION_LONGOPS output for a minute or two (maybe someone else knows why).

So, we have found a fourth possible reason for the rows in V$SESSION_LONGOPS appearing as 0 for the OP’s SQL statement – it could be that Oracle was alternating between an index range scan and the retrieval of the matching rows from the corresponding table.

Let’s try executing the first query with a parallel hint to see what happens to the V$SESSION_LONGOPS output:

SELECT /*+ PARALLEL (T3, 4) PARALLEL (T4, 4) */
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY')
  AND T3.C2=T4.C2;

The DBMS_XPLAN output for the query follows:

SQL_ID  dywt9v0xuvgyv, child number 0
-------------------------------------
SELECT /*+ PARALLEL (T3, 4) PARALLEL (T4, 4) */    T3.C1,    T3.C2,   
T4.C3  FROM    T3,    T4  WHERE    TRUNC(T3.C2) BETWEEN
TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY')    AND T3.C2=T4.C2

Plan hash value: 1800244878

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |      1 |        |    874K|00:03:49.60 |      14 |       |       |          |
|   1 |  PX COORDINATOR         |          |      1 |        |    874K|00:03:49.60 |      14 |       |       |          |
|   2 |   PX SEND QC (RANDOM)   | :TQ10001 |      0 |    849K|      0 |00:00:00.01 |       0 |       |       |          |
|*  3 |    HASH JOIN            |          |      0 |    849K|      0 |00:00:00.01 |       0 |   126M|    10M|   49M (0)|
|   4 |     PX RECEIVE          |          |      0 |    802K|      0 |00:00:00.01 |       0 |       |       |          |
|   5 |      PX SEND BROADCAST  | :TQ10000 |      0 |    802K|      0 |00:00:00.01 |       0 |       |       |          |
|   6 |       PX BLOCK ITERATOR |          |      0 |    802K|      0 |00:00:00.01 |       0 |       |       |          |
|*  7 |        TABLE ACCESS FULL| T3       |      0 |    802K|      0 |00:00:00.01 |       0 |       |       |          |
|   8 |     PX BLOCK ITERATOR   |          |      0 |     25M|      0 |00:00:00.01 |       0 |       |       |          |
|*  9 |      TABLE ACCESS FULL  | T4       |      0 |     25M|      0 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T3"."C2"="T4"."C2")
   7 - access(:Z>=:Z AND :Z<=:Z)
       filter((TRUNC(INTERNAL_FUNCTION("C2"))>=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TRUNC(INTERNAL_FUNCTION("C2"))<=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
   9 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T4"."C2"))

A bloom filter, just in time for Spring (it must be true, it is in the Predicate Information section of the actual execution plan).

After a delay of roughly 6 seconds, V$SESSION_LONGOPS was queried about every 6 seconds.  The output of the V$SESSION_LONGOPS query follows:

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              8               7        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              4               7         80        544 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                            114               7          7        251 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                             14               7        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              2              16        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              4               9        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              1              13         80        544 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9          7        251 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                             13               9         80        544 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              2              12        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7              10          7        251 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              6               7        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               7        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              4              10         80        544 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              2              13        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              4               9        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              6              10          7        251 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                             10               8         80        544 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              2              12        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              1              16         80        544 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              3              10          7        251 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              6              10        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              6               8        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              8               8         80        544 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              3              12        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              4              10        553        243 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              5              10         80        544 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              5               8          7        251 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              7               9        159        982 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                              2              57         80        544 TESTUSER
dywt9v0xuvgyv 1800244878          7 Rowid Range Scan TESTUSER.T3                             20              66        159        982 TESTUSER

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

no rows selected

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                              9               8         81       1204 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                              4               7        395        631 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                             69              13        160         32 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                             12              16         81       1204 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                              5              15        395        631 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                             71              13        238         31 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                             11              25         81       1204 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                              4              24        395        631 TESTUSER
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                             58              22        238         31 TESTUSER

SQL> /

SQL_ID        HASH_VALUE    LINE_ID OPNAME           TARGET          TARGET_DESC TIME_REMAINING ELAPSED_SECONDS        SID    SERIAL# USERNAME
------------- ---------- ---------- ---------------- --------------- ----------- -------------- --------------- ---------- ---------- --------
dywt9v0xuvgyv 1800244878          9 Rowid Range Scan TESTUSER.T4                              7              33         81       1204 TESTUSER

SQL> /

no rows selected

As you can see from the above, the parallel query processes (each with a different SID) appear and disappear from the V$SESSION_LONGOPS output as the query executes.  The output also shows a roughly 40 to 50 second period during the execution of the query in the first session where nothing at all appeared in the V$SESSION_LONGOPS output.

So, we have found a fifth possible reason for the rows in V$SESSION_LONGOPS appearing as 0 for the OP’s SQL statement – the OP may have just queried V$SESSION_LONGOPS at the wrong time.

Any other possible reasons why the OP would see 0 for the TIME_REMAINING column when querying V$SESSION_LONGOPS during the execution of a SQL statement that is expected to require 20 minutes to complete?





Impact of the TRUNC Function on an Indexed Date Column

8 03 2010

March 8, 2010

A recent email on an ERP user’s group mailing list reminded me of a small problem in that ERP program’s modules related to the DATE columns in several of its tables.  In DATE columns that should contain only a date component, one of the ERP program’s modules will occasionally insert rows with both a date and a time component, for example '08-MAR-2010 13:01:13' rather than just '08-MAR-2010 00:00:00'.  This bug, or feature, leads to unexpected performance issues when normal B*Tree indexes are present on the date column.  To work around the time component in the DATE columns, the ERP program’s modules frequently use a technique like the following to compare only the date component of a DATE column:

SELECT
  *
FROM
  T3
WHERE
  TRUNC(DATE_COLUMN) = :d1;

In the above D1 is a bind variable.  On occasion, the ERP program will instead pass in the date value as a constant/literal rather than as a bind variable.  What is wrong with the above syntax?  Would the above syntax be considered a bug if the DATE_COLUMN column had a normal B*Tree index?  Is there a better way to retrieve the required rows?  Incidentally, I started re-reading the book “Troubleshooting Oracle Performance” and I encountered a couple of interesting sentences on page 7 that seem to address this issue:

“For all intents and purposes, an application experiencing poor performance is no worse [should probably state no better] than an application failing to fulfill its functional requirements. In both situations, the application is useless.”

Let’s try a couple of experiments to see why the above SQL statement requires improvement.

First, we will create table T2, which will serve as a nearly sequentially ordered row source with a small amount of randomization introduced into the data by the DBMS_RANDOM.VALUE function.  This row source will be used to help duplicate the essentially random arrival rate of transactions into our T3 test table:

CREATE TABLE T2 AS
SELECT
  DBMS_RANDOM.VALUE(0,0.55555)+ROWNUM/10000 DAYS
FROM
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V2;

The date column in our T3 table is derived from SYSDATE, so ideally the rows should be in order by the DAYS column in table T2.  In a production environment, on rare occasions someone will slip in a row that is not in sequential order through an edit of the DATE column for a row, so we should be able to simulate that slight randomness by creating another table from table T2 before generating table T3 (the rows will fill the table blocks with an occasional row that is out of date sequence):

CREATE TABLE T2_ORDERED NOLOGGING AS
SELECT
  DAYS
FROM
  T2
ORDER BY
  DAYS;

For our simulation, we have a final problem that needs to be addressed.  The volume of data entered on a Saturday in the production database is less than that for a Monday through Friday, and the volume of data entered on a Sunday is less than that entered on a Saturday.  To add just a little more randomness, we will insert the rows into table T3 based on the following criteria:

  • 90% chance of a row from T2_ORDERED being included if the date falls on a Monday through Friday
  • 20% chance of a row from T2_ORDERED being included if the date falls on a Saturday
  • 10% chance of a row from T2_ORDERED being included if the date falls on a Sunday

The SQL statement to build table T3 follows:

CREATE TABLE T3 NOLOGGING AS
SELECT
  DAYS+TO_DATE('01-JAN-1990','DD-MON-YYYY') C1,
  DAYS+TO_DATE('01-JAN-1990','DD-MON-YYYY') C2,
  LPAD('A',255,'A') C3
FROM
  T2_ORDERED
WHERE
  DECODE(TO_CHAR(DAYS+TO_DATE('01-JAN-1990','DD-MON-YYYY'),'D'),'1',0.9,'7',0.8,0.1)<DBMS_RANDOM.VALUE(0,1);

CREATE INDEX IND_T3_C2 ON T3(C2);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)

ALTER TABLE T3 MODIFY C1 NOT NULL;
ALTER TABLE T3 MODIFY C2 NOT NULL;

Let’s check the data distribution:

SELECT
  COUNT(*) NUM_ROWS,
  SUM(DECODE(TO_CHAR(C1,'D'),'6',1,0)) FRIDAYS,
  SUM(DECODE(TO_CHAR(C1,'D'),'7',1,0)) SATURDAYS,
  SUM(DECODE(TO_CHAR(C1,'D'),'1',1,0)) SUNDAYS
FROM
  T3;

  NUM_ROWS    FRIDAYS  SATURDAYS    SUNDAYS
---------- ---------- ---------- ----------
68,579,287 12,858,100  2,855,164  1,428,569

From the above we are able to determine that roughly 18.7% of the rows have a date that is on a Friday, roughly 4.2% of the rows have a date that is on a Saturday, and 2.1% of the rows are on a Sunday.
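Those percentages line up with what the insertion weights predict: each of the five Monday through Friday days carries a weight of 0.9, Saturday carries 0.2, and Sunday carries 0.1, for a total weekly weight of (5 × 0.9) + 0.2 + 0.1 = 4.8.  A quick sanity check against DUAL (a throwaway sketch):

```sql
SELECT
  ROUND(100 * 0.9 / 4.8, 1) WEEKDAY_PCT,   -- expected share for any single weekday
  ROUND(100 * 0.2 / 4.8, 1) SATURDAY_PCT,  -- expected share for Saturday
  ROUND(100 * 0.1 / 4.8, 1) SUNDAY_PCT     -- expected share for Sunday
FROM
  DUAL;
```

This returns 18.8, 4.2, and 2.1 – in close agreement with the observed 18.7%, 4.2%, and 2.1%.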

This test will be performed on Oracle Database 11.2.0.1 with the __DB_CACHE_SIZE hidden parameter floating at roughly 7,381,975,040 (6.875GB).  I will use my toy project for performance tuning to submit the SQL statements and retrieve the DBMS_XPLAN output, but the same could be accomplished with just SQL*Plus (most tests can also be performed using my Automated DBMS_XPLAN tool).

Let’s start simple: a SQL statement to retrieve the rows with today’s date (March 8, 2010), using literals against the indexed column.  I will execute each SQL statement twice to take advantage of any previously cached blocks in the buffer cache and to eliminate the time consumed by the hard parse:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  TRUNC(C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY');

SQL_ID  3us49wsdzdun3, child number 1
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    TRUNC(C2) =
TO_DATE('08-MAR-2010','DD-MON-YYYY')

Plan hash value: 4161002650

---------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |   9114 |00:02:31.93 |    2743K|   2743K|
|*  1 |  TABLE ACCESS FULL| T3   |      1 |   9114 |   9114 |00:02:31.93 |    2743K|   2743K|
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TRUNC(INTERNAL_FUNCTION("C2"))=TO_DATE(' 2010-03-08 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - cardinality feedback used for this statement

Roughly 2 minutes and 32 seconds.  Notice the Note at the bottom of the DBMS_XPLAN output: cardinality feedback (apparently not documented) is a new feature in Oracle Database 11.2.0.1 (see here for a related blog article).  The first execution required 2 minutes and 33 seconds, with a predicted cardinality of 685,000 rows (1% of the rows in the table).  The second execution generated a second child cursor with a corrected cardinality estimate based on the actual number of rows returned during the first execution.  2 minutes and 32 seconds is not bad, unless this is an OLTP application and an end user is waiting for the application to return the rows.
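If you want to confirm that the second child cursor was in fact created because of cardinality feedback, V$SQL_SHARED_CURSOR may be checked – in the 11.2 release it exposes a USE_FEEDBACK_STATS column (the SQL_ID literal below is taken from the DBMS_XPLAN output above):

```sql
-- A Y in USE_FEEDBACK_STATS indicates the child cursor was not shared
-- because cardinality feedback requested a re-optimization.
SELECT
  SQL_ID,
  CHILD_NUMBER,
  USE_FEEDBACK_STATS
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID = '3us49wsdzdun3';
```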

Let’s try again with a modified, equivalent SQL statement, again executing the SQL statement twice:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  C2 >= TO_DATE('08-MAR-2010','DD-MON-YYYY')
  AND C2 < TO_DATE('08-MAR-2010','DD-MON-YYYY')+1;

SQL_ID  c7jfpa0rpt95a, child number 0
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    C2 >=
TO_DATE('08-MAR-2010','DD-MON-YYYY')    AND C2 <
TO_DATE('08-MAR-2010','DD-MON-YYYY')+1

Plan hash value: 4176467757

---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |   9114 |00:00:00.02 |     575 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T3        |      1 |   6859 |   9114 |00:00:00.02 |     575 |
|*  2 |   INDEX RANGE SCAN          | IND_T3_C2 |      1 |   6859 |   9114 |00:00:00.01 |     118 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C2">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "C2"<TO_DATE(' 2010-03-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

0.02 seconds compared to 2 minutes and 32 seconds.  You will notice that the estimated number of rows, while not exact, is reasonably close even without a cardinality feedback adjustment.  Also notice that the optimizer adjusted the date calculation that was in the WHERE clause of the SQL statement.

Let’s try again with a second modified, equivalent SQL statement, again executing the SQL statement twice:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  C2 BETWEEN TO_DATE('08-MAR-2010','DD-MON-YYYY')
    AND TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60);

SQL_ID  7xthpspukrbtv, child number 0
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    C2 BETWEEN
TO_DATE('08-MAR-2010','DD-MON-YYYY')      AND
TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60)

Plan hash value: 4176467757

---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |   9114 |00:00:00.02 |     575 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T3        |      1 |   6860 |   9114 |00:00:00.02 |     575 |
|*  2 |   INDEX RANGE SCAN          | IND_T3_C2 |      1 |   6860 |   9114 |00:00:00.01 |     118 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C2">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "C2"<=TO_DATE(' 2010-03-08 23:59:59', 'syyyy-mm-dd hh24:mi:ss'))

0.02 seconds again, and the estimated number of rows is roughly the same as we achieved with the previous SQL statement.  By checking the Predicate Information section of the DBMS_XPLAN output we see that the optimizer has transformed the BETWEEN syntax into roughly the same syntax as was used in the previous SQL statement.

Let’s try again with bind variables (the bind variable names are automatically changed by ADO into generic names, and that is why the bind variable appears in the execution plan as :1 rather than :d1):

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  TRUNC(C2) = :d1;

SQL_ID  cub25jm7y8zck, child number 0
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    TRUNC(C2) = :1

Plan hash value: 4161002650

---------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |   9114 |00:02:33.37 |    2743K|   2743K|
|*  1 |  TABLE ACCESS FULL| T3   |      1 |    685K|   9114 |00:02:33.37 |    2743K|   2743K|
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TRUNC(INTERNAL_FUNCTION("C2"))=:1)

Notice that our friendly note about Cardinality Feedback did not appear this time, and that the cardinality estimate was not corrected when the SQL statement was executed for the second time, even though bind variable peeking did happen.  The incorrect cardinality estimate would not have changed the execution plan for this SQL statement, but could impact the execution plan if table T3 were joined to another table.
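The bind values in this article were supplied by the ADO application.  If you would like to reproduce a similar test directly in SQL*Plus, keep in mind that SQL*Plus does not support DATE bind variables directly, so a VARCHAR2 bind wrapped in TO_DATE is a reasonable stand-in - a sketch, not exactly equivalent to a true DATE bind passed by an application:

```sql
-- SQL*Plus substitute for the application's DATE bind variable;
-- the VARCHAR2 bind is converted to a DATE inside the SQL statement
VARIABLE d1 VARCHAR2(11)
EXEC :d1 := '08-MAR-2010'

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  TRUNC(C2) = TO_DATE(:d1,'DD-MON-YYYY');
```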

Let’s try the other equivalent SQL statement with bind variables:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  C2 >= :d1
  AND C2 < :d2 +1;

SQL_ID  9j2a54zbzb9cz, child number 0
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    C2 >= :1    AND C2 <
:2 +1

Plan hash value: 3025660695

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |      1 |        |   9114 |00:00:00.02 |     575 |
|*  1 |  FILTER                      |           |      1 |        |   9114 |00:00:00.02 |     575 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T3        |      1 |   6859 |   9114 |00:00:00.02 |     575 |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2 |      1 |   6859 |   9114 |00:00:00.01 |     118 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2+1)
   3 - access("C2">=:1 AND "C2"<:2+1)

The optimizer estimated that 6,859 rows would be returned, just as it did when we used literals in the SQL statement, because of bind variable peeking.  In case you are wondering, the same estimated row cardinality was returned when the D2 bind variable was set to '09-MAR-2010' in the application and the +1 was removed from the SQL statement.
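To confirm which bind values were actually peeked during optimization, DBMS_XPLAN.DISPLAY_CURSOR accepts a +PEEKED_BINDS format modifier - a sketch of the call:

```sql
-- Display the last execution plan for this session along with the
-- bind values peeked when the execution plan was generated
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST +PEEKED_BINDS'));
```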

Quite the problem we caused by pretending not to understand the impact of applying a function to an indexed column in the WHERE clause.  We could create a function based index to work around the problem of the application programmers not knowing how to specify a specific date without using the TRUNC function on a DATE column:

CREATE INDEX IND_T3_C2_FBI ON T3(TRUNC(C2));

ALTER SYSTEM FLUSH SHARED_POOL;

But is creating a function based index the best approach, or have we just created another problem rather than attacking the root cause of the original problem?  We now have two indexes on the same column that need to be updated every time a row is inserted or deleted in table T3, and also maintained every time that column is updated (even when updated with the same value).  Let's experiment with the function based index.
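Before experimenting, it may help to see what the function based index actually indexes.  Creating the index added a hidden virtual column to the table, which explains the SYS_NC00004$ name that appears in the Predicate Information sections that follow.  A sketch of one way to see the indexed expression:

```sql
-- Show the expression behind each function based index on table T3;
-- the index is built on a hidden virtual column rather than on C2 itself
SELECT
  INDEX_NAME,
  COLUMN_EXPRESSION
FROM
  USER_IND_EXPRESSIONS
WHERE
  TABLE_NAME = 'T3';
```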

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  TRUNC(C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY');

SQL_ID  3us49wsdzdun3, child number 1
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    TRUNC(C2) =
TO_DATE('08-MAR-2010','DD-MON-YYYY')

Plan hash value: 3662266936

-------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |      1 |        |   9114 |00:00:00.01 |     576 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T3            |      1 |   9114 |   9114 |00:00:00.01 |     576 |
|*  2 |   INDEX RANGE SCAN          | IND_T3_C2_FBI |      1 |   9114 |   9114 |00:00:00.01 |     119 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."SYS_NC00004$"=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - cardinality feedback used for this statement

Cardinality feedback again helps out the cardinality estimate on the second execution, but look at the Predicate Information section of the execution plan.  The predicates now reference the hidden column SYS_NC00004$ rather than column C2, which makes it more difficult to walk through a complicated execution plan and see how the predicates in the WHERE clause are applied.  Not so bad, right?  What happens if column C2 is joined to a column in another table, or even specified as being equal to column C1 in this table?  Let's take a look:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  C2 >= TO_DATE('08-MAR-2010','DD-MON-YYYY')
  AND C2 < TO_DATE('08-MAR-2010','DD-MON-YYYY')+1
  AND C2=C1;

SQL_ID  27rqhg1mpmzt9, child number 1
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    C2 >=
TO_DATE('08-MAR-2010','DD-MON-YYYY')    AND C2 <
TO_DATE('08-MAR-2010','DD-MON-YYYY')+1    AND C2=C1

Plan hash value: 4176467757

---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |   9114 |00:00:00.01 |     575 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T3        |      1 |   9114 |   9114 |00:00:00.01 |     575 |
|*  2 |   INDEX RANGE SCAN          | IND_T3_C2 |      1 |   9114 |   9114 |00:00:00.01 |     118 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("C2"="C1" AND "C1">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "C1"<TO_DATE(' 2010-03-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
   2 - access("C2">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "C2"<TO_DATE(' 2010-03-09 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - cardinality feedback used for this statement

On the first execution the E-Rows column for operation ID 1 showed a cardinality estimate of 1 row; on the second execution the cardinality estimate was corrected to 9,114.  Notice that transitive closure took place - the filter operation on operation ID 1 shows the same restrictions for column C1 as were applied to column C2 in the WHERE clause.

Let's try again with the SQL statement using the TRUNC function - this SQL statement will use the function based index:

SELECT
  C1,
  C2,
  C3
FROM
  T3
WHERE
  TRUNC(C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY')
  AND C2=C1;

SQL_ID  ftu92j3z99ppr, child number 1
-------------------------------------
SELECT    C1,    C2,    C3  FROM    T3  WHERE    TRUNC(C2) =
TO_DATE('08-MAR-2010','DD-MON-YYYY')    AND C2=C1

Plan hash value: 3662266936

-------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |      1 |        |   9114 |00:00:00.01 |     576 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T3            |      1 |   9114 |   9114 |00:00:00.01 |     576 |
|*  2 |   INDEX RANGE SCAN          | IND_T3_C2_FBI |      1 |   9114 |   9114 |00:00:00.01 |     119 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("C2"="C1")
   2 - access("T3"."SYS_NC00004$"=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - cardinality feedback used for this statement

The cardinality estimate is again correct because of cardinality feedback, but notice what is missing from the Predicate Information section of the execution plan (transitive closure did not happen).

So, does the use of TRUNC(DATE_COLUMN) without the presence of a function based index qualify as an application bug?  What if the function based index is present - is it still a bug?

Something possibly interesting, but unrelated.  I executed the following commands:

ALTER INDEX IND_T3_C2_FBI UNUSABLE;

(perform a little more testing)

ALTER INDEX IND_T3_C2_FBI REBUILD;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)

I received the following after several minutes:

BEGIN DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE);
END;

*
ERROR at line 1:
ORA-00600: internal error code, arguments: [15851], [3], [2], [1], [1], [], [],
[], [], [], [], []
ORA-06512: at "SYS.DBMS_STATS", line 20337
ORA-06512: at "SYS.DBMS_STATS", line 20360
ORA-06512: at line 1

The same error appeared without the CASCADE option, but calls to collect statistics on the table's indexes, as well as on other tables, completed successfully.  I may look at this problem again later.

Continuing, we will create another table:

CREATE TABLE T4 NOLOGGING AS
SELECT
  *
FROM
  T3
WHERE
  C2 BETWEEN TO_DATE('01-JAN-2010','DD-MON-YYYY')
    AND TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60);

CREATE INDEX IND_T4_C2 ON T4(C2);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T4',CASCADE=>TRUE)

Before we start, let's take a look at the disk space used by the objects and the automatically allocated extent sizes:

SELECT
  SEGMENT_NAME SEGMENT,
  (SUM(BYTES))/1048576 TOTAL_MB
FROM
  DBA_EXTENTS
WHERE
  OWNER=USER
  AND SEGMENT_NAME IN ('IND_T3_C2','IND_T3_C2_FBI','T3','T4','IND_T4_C2')
GROUP BY
  SEGMENT_NAME
ORDER BY
  SEGMENT_NAME;

SEGMENT           TOTAL_MB
--------------- ----------
IND_T3_C2             1469
IND_T3_C2_FBI         1472
IND_T4_C2               10
T3                   21480
T4                     144  

SELECT
  SEGMENT_NAME SEGMENT,
  COUNT(*) EXTENTS,
  BYTES/1024 EXT_SIZE_KB,
  (COUNT(*) * BYTES)/1048576 TOTAL_MB
FROM
  DBA_EXTENTS
WHERE
  OWNER=USER
  AND SEGMENT_NAME IN ('IND_T3_C2','IND_T3_C2_FBI','T3','T4','IND_T4_C2')
GROUP BY
  SEGMENT_NAME,
  BYTES
ORDER BY
  SEGMENT_NAME,
  BYTES;

SEGMENT            EXTENTS EXT_SIZE_KB   TOTAL_MB
--------------- ---------- ----------- ----------
IND_T3_C2               16          64          1
IND_T3_C2               63        1024         63
IND_T3_C2              120        8192        960
IND_T3_C2                1       27648         27
IND_T3_C2                1       34816         34
IND_T3_C2                6       65536        384
IND_T3_C2_FBI           16          64          1
IND_T3_C2_FBI           63        1024         63
IND_T3_C2_FBI          120        8192        960
IND_T3_C2_FBI            7       65536        448
IND_T4_C2               16          64          1
IND_T4_C2                9        1024          9
T3                      16          64          1
T3                      63        1024         63
T3                     120        8192        960
T3                       1       19456         19
T3                       1       43008         42
T3                       1       44032         43
T3                     318       65536      20352
T4                      16          64          1
T4                      63        1024         63
T4                      10        8192         80

Table T3 is using about 21GB of space, while table T4 is using about 144MB.  We occasionally received an extent size that is not a power of 2 - a bit unexpected.  Let's try a couple of SQL statements that access the two tables:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  TRUNC(T3.C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY')
  AND T3.C2=T4.C2;

SQL_ID  f2v7cf7w2bwqq, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE   
TRUNC(T3.C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY')     AND T3.C2=T4.C2

Plan hash value: 1631978485

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |      1 |        |  10044 |00:00:00.38 |   18622 |     25 |       |       |          |
|*  1 |  HASH JOIN                   |               |      1 |   7095 |  10044 |00:00:00.38 |   18622 |     25 |  1223K|  1223K| 1593K (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T3            |      1 |   6857 |   9114 |00:00:00.04 |     394 |     25 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2_FBI |      1 |   6857 |   9114 |00:00:00.03 |      28 |     25 |       |       |          |
|   4 |   TABLE ACCESS FULL          | T4            |      1 |    452K|    452K|00:00:00.12 |   18228 |      0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   3 - access("T3"."SYS_NC00004$"=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The above used a full table scan on table T4, and you will notice that a filter predicate is not applied to table T4 to reduce the number of rows entering the hash join.  Transitive closure did not take place.  Let's try again with the SQL statement with the other syntax that does not use the TRUNC function, nor the function based index:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  T3.C2 BETWEEN TO_DATE('08-MAR-2010','DD-MON-YYYY')
    AND TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60)
  AND T3.C2=T4.C2;

SQL_ID  5swqbjak147vk, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE    T3.C2
BETWEEN TO_DATE('08-MAR-2010','DD-MON-YYYY')       AND
TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60)    AND T3.C2=T4.C2

Plan hash value: 3991319422

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |      1 |        |  10044 |00:00:00.06 |     983 |      1 |       |       |          |
|*  1 |  HASH JOIN                   |           |      1 |   6761 |  10044 |00:00:00.06 |     983 |      1 |  1223K|  1223K| 1618K (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T3        |      1 |   6860 |   9114 |00:00:00.01 |     393 |      0 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2 |      1 |   6860 |   9114 |00:00:00.01 |      27 |      0 |       |       |          |
|   4 |   TABLE ACCESS BY INDEX ROWID| T4        |      1 |   6762 |   9114 |00:00:00.03 |     590 |      1 |       |       |          |
|*  5 |    INDEX RANGE SCAN          | IND_T4_C2 |      1 |   6762 |   9114 |00:00:00.01 |     127 |      1 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   3 - access("T3"."C2">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T3"."C2"<=TO_DATE(' 2010-03-08
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
   5 - access("T4"."C2">=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T4"."C2"<=TO_DATE(' 2010-03-08
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))

Notice this time that transitive closure happened, allowing the optimizer to take advantage of the IND_T4_C2 index on table T4.

You are probably thinking that we must also need a function based index on the C2 column of table T4 to allow transitive closure to happen.  Let's try:

CREATE INDEX IND_T4_C2_FBI ON T4(TRUNC(C2));

ALTER SYSTEM FLUSH SHARED_POOL;

Now our SQL statement again:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  TRUNC(T3.C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY')
  AND T3.C2=T4.C2;

SQL_ID  f2v7cf7w2bwqq, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE   
TRUNC(T3.C2) = TO_DATE('08-MAR-2010','DD-MON-YYYY')     AND T3.C2=T4.C2

Plan hash value: 1631978485

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |      1 |        |  10044 |00:00:00.33 |   18622 |       |       |          |
|*  1 |  HASH JOIN                   |               |      1 |   7095 |  10044 |00:00:00.33 |   18622 |  1223K|  1223K| 1584K (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T3            |      1 |   6857 |   9114 |00:00:00.01 |     394 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2_FBI |      1 |   6857 |   9114 |00:00:00.01 |      28 |       |       |          |
|   4 |   TABLE ACCESS FULL          | T4            |      1 |    452K|    452K|00:00:00.11 |   18228 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   3 - access("T3"."SYS_NC00004$"=TO_DATE(' 2010-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

As expected, the function based index on column C2 of table T4 was not used because transitive closure did not happen.  Do we still want to do it the wrong way?  The execution time could have been much longer than 0.33 seconds, of course, if table T4 were much larger and a large number of physical reads were required.  Let's try again using a larger table T4:

DROP TABLE T4 PURGE;

CREATE TABLE T4 NOLOGGING AS
SELECT
  *
FROM
  T3
WHERE
  C2 BETWEEN TO_DATE('01-JAN-2000','DD-MON-YYYY')
    AND TO_DATE('08-MAR-2010','DD-MON-YYYY') + (1-1/24/60/60);

CREATE INDEX IND_T4_C2 ON T4(C2);

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T4',CASCADE=>TRUE)

Table T4 now requires about 7.9GB of disk space.  Now a range scan that accesses tables T3 and T4 (each SQL statement is executed twice, with the last execution plan reported):

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY')
  AND T3.C2=T4.C2;

SQL_ID  2d4f5x92axqgn, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE   
TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY')    AND T3.C2=T4.C2

Plan hash value: 1631978485

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |      1 |        |    874K|00:00:33.24 |    1062K|    302K|       |       |          |
|*  1 |  HASH JOIN                   |               |      1 |    849K|    874K|00:00:33.24 |    1062K|    302K|    33M|  5591K|   50M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T3            |      1 |    802K|    795K|00:00:00.56 |   33957 |      0 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2_FBI |      1 |    802K|    795K|00:00:00.20 |    2115 |      0 |       |       |          |
|   4 |   TABLE ACCESS FULL          | T4            |      1 |     25M|     25M|00:00:17.13 |    1028K|    302K|       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   3 - access("T3"."SYS_NC00004$">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T3"."SYS_NC00004$"<=TO_DATE('
              2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Notice the full table scan of table T4.  Now the other SQL statement:

SELECT
  T3.C1,
  T3.C2,
  T4.C3
FROM
  T3,
  T4
WHERE
  T3.C2 BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY') + (1-1/24/60/60)
  AND T3.C2=T4.C2;

SQL_ID  539d93k50ruz3, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE    T3.C2
BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY') + (1-1/24/60/60)    AND T3.C2=T4.C2

Plan hash value: 1243183227

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |      1 |        |    874K|00:00:05.80 |   85051 |    574 |       |       |          |
|   1 |  MERGE JOIN                   |           |      1 |    795K|    874K|00:00:05.80 |   85051 |    574 |       |       |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | T4        |      1 |    795K|    795K|00:00:02.43 |   51097 |    574 |       |       |          |
|*  3 |    INDEX RANGE SCAN           | IND_T4_C2 |      1 |    795K|    795K|00:00:00.41 |   10841 |      0 |       |       |          |
|*  4 |   SORT JOIN                   |           |    795K|    795K|    874K|00:00:02.00 |   33954 |      0 |    30M|  1977K|   26M (0)|
|   5 |    TABLE ACCESS BY INDEX ROWID| T3        |      1 |    795K|    795K|00:00:00.50 |   33954 |      0 |       |       |          |
|*  6 |     INDEX RANGE SCAN          | IND_T3_C2 |      1 |    795K|    795K|00:00:00.17 |    2114 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T4"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T4"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
   4 - access("T3"."C2"="T4"."C2")
       filter("T3"."C2"="T4"."C2")
   6 - access("T3"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T3"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))

Notice that the above used the index on table T4, but performed a sort-merge join between the two tables.  We are able to force a hash join, as was used with the other SQL statement, by applying a hint:

SQL_ID  b9q6tf6p6x2m0, child number 0
-------------------------------------
SELECT /*+ USE_HASH (T3 T4) */    T3.C1,    T3.C2,    T4.C3  FROM   
T3,    T4  WHERE    T3.C2 BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY') 
    AND TO_DATE('01-JUL-2009','DD-MON-YYYY') + (1-1/24/60/60)    AND
T3.C2=T4.C2

Plan hash value: 3991319422

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |      1 |        |    874K|00:00:03.60 |   85051 |       |       |          |
|*  1 |  HASH JOIN                   |           |      1 |    795K|    874K|00:00:03.60 |   85051 |    33M|  5591K|   50M (0)|
|   2 |   TABLE ACCESS BY INDEX ROWID| T3        |      1 |    795K|    795K|00:00:00.54 |   33954 |       |       |          |
|*  3 |    INDEX RANGE SCAN          | IND_T3_C2 |      1 |    795K|    795K|00:00:00.19 |    2114 |       |       |          |
|   4 |   TABLE ACCESS BY INDEX ROWID| T4        |      1 |    795K|    795K|00:00:01.44 |   51097 |       |       |          |
|*  5 |    INDEX RANGE SCAN          | IND_T4_C2 |      1 |    795K|    795K|00:00:00.40 |   10841 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   3 - access("T3"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T3"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
   5 - access("T4"."C2">=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T4"."C2"<=TO_DATE(' 2009-07-01
              23:59:59', 'syyyy-mm-dd hh24:mi:ss'))

9.2 times faster (just 5.7 times faster without the hint) by not using the TRUNC function and function-based index combination.  Are we able to just agree to do it the right way, or should I continue?  Without the function based index we receive an execution plan like this:

SQL_ID  2d4f5x92axqgn, child number 0
-------------------------------------
SELECT    T3.C1,    T3.C2,    T4.C3  FROM    T3,    T4  WHERE   
TRUNC(T3.C2) BETWEEN TO_DATE('08-MAR-2009','DD-MON-YYYY')      AND
TO_DATE('01-JUL-2009','DD-MON-YYYY')    AND T3.C2=T4.C2

Plan hash value: 1396201636

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    874K|00:03:05.88 |    3771K|   3013K|       |       |          |
|*  1 |  HASH JOIN         |      |      1 |    849K|    874K|00:03:05.88 |    3771K|   3013K|    33M|  5591K|   52M (0)|
|*  2 |   TABLE ACCESS FULL| T3   |      1 |    802K|    795K|00:02:34.36 |    2743K|   2743K|       |       |          |
|   3 |   TABLE ACCESS FULL| T4   |      1 |     25M|     25M|00:00:15.72 |    1028K|    270K|       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T3"."C2"="T4"."C2")
   2 - filter((TRUNC(INTERNAL_FUNCTION("C2"))>=TO_DATE(' 2009-03-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TRUNC(INTERNAL_FUNCTION("C2"))<=TO_DATE(' 2009-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

A full table scan of a 21GB table and 7.9GB table, with 795,000 rows from the large table and 25,000,000 rows from the small table entering the hash join - probably not too good for performance.  Fix the performance bug in the application and let the user get back to counting the pencils in the pencil jar 9.2 times faster (or 51.6 times faster if there is no function based index).

While you might not frequently join two tables on a DATE column as I have done in this demonstration, how common is it to store numeric data in a VARCHAR2 column, and then need to be able to compare those values with numbers stored in NUMBER type columns, with literals, or numeric bind variables?
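As a sketch of that VARCHAR2/NUMBER analogue (the table and index names here are hypothetical, not part of the demonstration above): comparing the character column to a NUMBER forces an implicit TO_NUMBER() on the column, which disables a plain index on that column just as the TRUNC function disabled the index on the DATE column.

```sql
CREATE TABLE T_NUM_CHAR AS
SELECT
  TO_CHAR(ROWNUM) C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

CREATE INDEX IND_T_NUM_CHAR ON T_NUM_CHAR(C1);

-- The NUMBER literal triggers an implicit conversion: the predicate
-- becomes TO_NUMBER("C1")=123, so IND_T_NUM_CHAR cannot be used:
SELECT * FROM T_NUM_CHAR WHERE C1 = 123;

-- Comparing character to character leaves the index usable:
SELECT * FROM T_NUM_CHAR WHERE C1 = '123';
```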





Where Are My Views?

5 03 2010

March 5, 2010

A year ago I received an email that was sent to an ERP user’s mailing list, stating that statically defined views were disappearing from one of the email author’s databases.  He stated that this problem had been an issue since Oracle 9i (probably 9.2.0.x) and continued to be a problem with Oracle 10g (likely 10.2.0.3 or 10.2.0.4).  The views were simply disappearing.  Was it caused by a bug in Oracle?  Was it caused by a bug or a naming conflict in the ERP system?  Was it caused by a renegade developer?  Was it a permission problem?  Was it a dropped synonym?  Or was someone breaking in at night, who found that the actual table was too big to fit in the get-away car and managed to escape with only a picture of the table?

Since we do not know what is happening to the views, maybe the best course of action is to set a trap.  First, we will create a logging table to capture details about any objects that are dropped:

CREATE TABLE OBJECT_DROP_LOG (
  DICT_OBJ_TYPE VARCHAR2(40),
  DICT_OBJ_OWNER VARCHAR2(40),
  DICT_OBJ_NAME VARCHAR2(40),   
  DROP_USERNAME VARCHAR2(30) DEFAULT USER,
  DROP_DATE DATE DEFAULT SYSDATE);

GRANT INSERT ON OBJECT_DROP_LOG TO PUBLIC;
GRANT ALL ON OBJECT_DROP_LOG TO SYS;

As rows are inserted into the table, the DROP_USERNAME column will be automatically populated with the username of the person performing the object drop, and the DROP_DATE column will be automatically populated with the date and time of the drop.  Now we need a trigger to capture the other three columns in the table; the trigger will be created by the SYS user:

CREATE OR REPLACE TRIGGER LOG_DROP AFTER DROP ON DATABASE
BEGIN
  IF ORA_SYSEVENT='DROP' THEN
    INSERT INTO TESTUSER.OBJECT_DROP_LOG(
      DICT_OBJ_TYPE,
      DICT_OBJ_OWNER,
      DICT_OBJ_NAME)
    VALUES(
      ORA_DICT_OBJ_TYPE,
      ORA_DICT_OBJ_OWNER,
      ORA_DICT_OBJ_NAME);
  END IF;
END;
/

Now let’s log back in as our test user and create a couple of objects that will be dropped later:

CREATE TABLE T10 AS
SELECT
  ROWNUM C1
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

Table created.

CREATE UNIQUE INDEX IND_T10 ON T10(C1);

Index created.

CREATE VIEW MY_STUFF AS
SELECT
  *
FROM
  T10;

View created.

Now let’s suppose that the event happens, something does this:

DROP VIEW MY_STUFF;

View dropped.

DROP TABLE T10;

Table dropped.

Now that the damage is done, will the logging table contain 0, 1, 2, or 3 entries?  Let’s log in as SYS and see what was captured:

COLUMN DICT_OBJ_TYPE FORMAT A13
COLUMN DICT_OBJ_OWNER FORMAT A14
COLUMN DICT_OBJ_NAME FORMAT A13
COLUMN DROP_USERNAME FORMAT A13
COLUMN DROP_DATE FORMAT A19

SELECT
  DICT_OBJ_TYPE,
  DICT_OBJ_OWNER,
  DICT_OBJ_NAME,
  DROP_USERNAME,
  TO_CHAR(DROP_DATE,'MM/DD/YYYY HH24:MI:SS') DROP_DATE
FROM
  OBJECT_DROP_LOG
ORDER BY
  DROP_DATE;

DICT_OBJ_TYPE DICT_OBJ_OWNER DICT_OBJ_NAME DROP_USERNAME DROP_DATE
------------- -------------- ------------- ------------- -------------------
VIEW          TESTUSER       MY_STUFF      TESTUSER      03/04/2010 23:17:33
TABLE         TESTUSER       T10           TESTUSER      03/04/2010 23:18:08

We see the drop of the view and the drop of the table, does the index still exist?

SELECT
  COUNT(*)
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME='IND_T10';

  COUNT(*)
----------
         0

Looks like the table’s index is gone without a trace.  Let’s recover the table:

FLASHBACK TABLE TESTUSER.T10 TO BEFORE DROP;

Flashback complete.

SELECT
  COUNT(*)
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME='T10';

  COUNT(*)
----------
         1

SELECT
  COUNT(*)
FROM
  DBA_OBJECTS
WHERE
  OBJECT_NAME='IND_T10';

  COUNT(*)
----------
         0

The table is back, but where is my index?

SELECT
  INDEX_NAME
FROM
  DBA_INDEXES
WHERE
  TABLE_NAME='T10'
  AND OWNER='TESTUSER';

INDEX_NAME
------------------------------
BIN$9mtb8W99SfiPr1EXVFGkgw==$0

That is a rather ugly name for an index – the index was automatically renamed with a recycle bin (BIN$) name when the associated table was dropped.  Connecting again as the TESTUSER:

ALTER INDEX "BIN$9mtb8W99SfiPr1EXVFGkgw==$0" RENAME TO IND_T10;

Index altered.

SELECT
  COUNT(*)
FROM
  USER_OBJECTS
WHERE
  OBJECT_NAME='IND_T10';

  COUNT(*)
----------
         1

The table and index are now back, so:

SELECT
  COUNT(*)
FROM
  USER_OBJECTS
WHERE
  OBJECT_NAME='MY_STUFF';

  COUNT(*)
----------
         0

Where is my view?  If I had executed the following before the view was dropped (dropped views, unlike tables, do not land in the recycle bin), I could easily recreate the view:

SET PAGESIZE 0
SET LONG 90000
SPOOL MYSTUFF.SQL

SELECT
  DBMS_METADATA.GET_DDL('VIEW','MY_STUFF','TESTUSER')
FROM
  DUAL;

SPOOL OFF

(Caution: If you choose to implement this trigger, test it very carefully before pushing it into a production environment.  You could of course extend this example to capture much more information.)
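For example, one possible extension (untested here, so treat it as a sketch rather than a drop-in solution) uses the ORA_SQL_TXT event attribute function to also capture the text of the statement that performed the drop; OBJECT_DROP_SQL is a hypothetical companion table:

```sql
CREATE TABLE OBJECT_DROP_SQL (
  DICT_OBJ_NAME VARCHAR2(40),
  DROP_USERNAME VARCHAR2(30) DEFAULT USER,
  DROP_DATE DATE DEFAULT SYSDATE,
  SQL_TEXT VARCHAR2(4000));

CREATE OR REPLACE TRIGGER LOG_DROP_SQL AFTER DROP ON DATABASE
DECLARE
  V_PIECES ORA_NAME_LIST_T;
  V_COUNT  BINARY_INTEGER;
  V_SQL    VARCHAR2(4000);
BEGIN
  -- ORA_SQL_TXT returns the triggering statement in 64-byte pieces
  V_COUNT := ORA_SQL_TXT(V_PIECES);
  FOR I IN 1 .. NVL(V_COUNT,0) LOOP
    V_SQL := SUBSTR(V_SQL || V_PIECES(I), 1, 4000);
  END LOOP;

  INSERT INTO TESTUSER.OBJECT_DROP_SQL(
    DICT_OBJ_NAME,
    SQL_TEXT)
  VALUES(
    ORA_DICT_OBJ_NAME,
    V_SQL);
END;
/
```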





Why Doesn’t this Trigger Code Work?

4 03 2010

March 4, 2010

A couple of days ago I received an interesting email from an ERP mailing list.  That email contained an Oracle trigger that looked like it would be helpful, but the author of the trigger was struggling a bit with the PL/SQL code and asked for some assistance with the trigger code.  The trigger code contained in the email looked like this:

CREATE OR REPLACE TRIGGER "TESTUSER"."TEST3" AFTER
UPDATE OF "STATUS" ON "TESTUSER"."OPERATION" REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW DECLARE
  msg1 varchar2(20000) := 'Despatch as been closed for ';
  msg2 varchar2(20000) := ' And there are operations not closed';
  recipients varchar2(20000);
  CountPlease number;
  Customer varchar2(20000);
  Base varchar2(20000) := :newdata.workorder_base_id;
BEGIN
IF UPDATING THEN
  if :newdata.resource_id = 'DESPATCH' and :Olddata.status <> :newdata.status and :newdata.status = 'C' then

  select name into Customer from customer where id in (select customer_id from customer_order where id = Base);

---jaggie bit here
    select count(distinct(workorder_lot_id||sequence_no||resource_id)) into CountPlease
    from operation where status = 'R' and workorder_type = 'W' and workorder_base_id = Base
    and not resource_id in ('DEVELOPMENT','START','DESPATCH','SERVICE');
---jaggie bit ends.
---Not yet inserted If CountPlease greater than 0 then...

    UTL_MAIL.SEND (
    sender => 'me@myplace.com.au',
    recipients => 'me@myplace.com.au',
    subject => msg1|| Base,
    message => msg1|| Customer|| ' , ' || Base || msg2||CountPlease);

  end if;
END IF;
END;
/

Paraphrasing the description of the problem:

I am having a problem with a trigger that I am trying to add to the OPERATION table.  The purpose of the trigger is to send an email if someone generates a labor ticket that causes the STATUS column of a row in the OPERATION table to change from ‘R’ (released) to ‘C’ (closed) when the RESOURCE_ID column contains the word ‘DESPATCH’, and there are other associated rows (excluding the rows with the RESOURCE_ID set to ‘DEVELOPMENT’, ‘START’, ‘DESPATCH’, or ‘SERVICE’) in that table associated with the modified row where the STATUS column is set to ‘R’.

The trigger works up to the point of the SQL statement containing the COUNT function, which is ultimately used to determine if the email should be sent.  The trigger compiles without error; however, when a labor ticket is created (a row is inserted into the LABOR_TICKET table) that causes the STATUS of the corresponding row in the OPERATION table to change from ‘R’ to ‘C’ (caused by a trigger on the LABOR_TICKET table that then causes another trigger to fire on the OPERATION table), I receive the dreaded error in the ERP program of “invalid handle”.  Yet my SQL seems sound when executed in SQL*Plus.

My other select statements work, and my email generates, provided I leave out the SELECT COUNT… SQL statement that is used to see if any other related rows in the OPERATION table have a status of ‘R’.

PL/SQL is not my native language (I think that it is still English), so I started by cleaning up the trigger code into a more readable form:

CREATE OR REPLACE TRIGGER "TESTUSER"."TEST3" AFTER UPDATE OF "STATUS" ON "TESTUSER"."OPERATION"
REFERENCING OLD AS OLDDATA NEW AS NEWDATA
FOR EACH ROW DECLARE
  MSG1 VARCHAR2(500) := 'Despatch as been closed for ';
  MSG2 VARCHAR2(500) := ' And there are operations not closed ';
  RECIPIENTS VARCHAR2(500);
  COUNTPLEASE NUMBER;
  CUSTOMERNAME VARCHAR2(50);
  BASEID VARCHAR2(30) := :NEWDATA.WORKORDER_BASE_ID;

BEGIN
IF UPDATING THEN
  IF :NEWDATA.RESOURCE_ID = 'DESPATCH' AND :OLDDATA.STATUS <> :NEWDATA.STATUS AND :NEWDATA.STATUS = 'C' THEN
    SELECT
      C.NAME INTO CUSTOMERNAME
    FROM
      CUSTOMER C,
      CUSTOMER_ORDER CO
    WHERE
      C.ID = CO.CUSTOMER_ID
      AND CO.ID = BASEID;

---jaggie bit here
    SELECT
      COUNT(SEQUENCE_NO||RESOURCE_ID) INTO COUNTPLEASE
    FROM
      OPERATION
    WHERE
      STATUS = 'R'
      AND WORKORDER_TYPE = 'W'
      AND WORKORDER_BASE_ID = :NEWDATA.WORKORDER_BASE_ID
      AND WORKORDER_LOT_ID = :NEWDATA.WORKORDER_LOT_ID
      AND WORKORDER_SPLIT_ID='0'
      AND RESOURCE_ID NOT IN ('DEVELOPMENT','START','DESPATCH','SERVICE');
---jaggie bit ends.

    IF COUNTPLEASE > 0 THEN
      UTL_MAIL.SEND (
        SENDER => 'me@myplace.com.au',
        RECIPIENTS => 'me@myplace.com.au',
        SUBJECT => MSG1|| BASEID,
        MESSAGE => MSG1|| CUSTOMERNAME|| ' , ' || BASEID || MSG2 ||COUNTPLEASE);
      NULL;
    END IF;
  END IF;
END IF;
END;
/

After the clean up I managed to compile the trigger and then performed a test with the ERP program.  The “dreaded error message” looked something like this:

ORA-04091: table TESTUSER.OPERATION is mutating, trigger/function may not see it
ORA-06512: at "TESTUSER.TEST3", line 24
ORA-04088: error during execution of trigger 'TESTUSER.TEST3'
ORA-06512: at "TESTUSER.AIUD_LABTICK_STMT", line 137
ORA-06512: at "TESTUSER.TAIUD

Something is wrong – the table to which the trigger is attached is mutating, which apparently means that it is changing into something other than a table (a chair, maybe?).  :-)

What needs to be done to finish fixing this trigger?  If you are struggling to find the answer, I might suggest taking a look at page 287 of the book “Beginning PL/SQL From Novice to Professional” (I am not saying that the answer is on that page, but you might want to look at it just for fun).  The documentation also suggests another approach for dealing with mutating tables, but I do not know if it will help in this case.  So, what needs to be done – removing NULL; was not the solution.
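One possible direction (deliberately left as an untested sketch, since the original post leaves the fix as an exercise): if the database is running Oracle 11.1 or later, a compound trigger can sidestep the mutating table error – the row-level timing point merely records the keys of interest, and the AFTER STATEMENT timing point, which fires after OPERATION has stopped mutating, performs the COUNT and sends the email.

```sql
CREATE OR REPLACE TRIGGER TESTUSER.TEST3
FOR UPDATE OF STATUS ON TESTUSER.OPERATION
COMPOUND TRIGGER
  TYPE KEY_TAB IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
  BASES KEY_TAB;
  LOTS  KEY_TAB;

  AFTER EACH ROW IS
  BEGIN
    IF :NEW.RESOURCE_ID = 'DESPATCH'
        AND :OLD.STATUS <> :NEW.STATUS
        AND :NEW.STATUS = 'C' THEN
      BASES(BASES.COUNT + 1) := :NEW.WORKORDER_BASE_ID;
      LOTS(LOTS.COUNT + 1)   := :NEW.WORKORDER_LOT_ID;
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
    COUNTPLEASE NUMBER;
  BEGIN
    FOR I IN 1 .. BASES.COUNT LOOP
      -- OPERATION is no longer mutating at this timing point
      SELECT
        COUNT(*) INTO COUNTPLEASE
      FROM
        OPERATION
      WHERE
        STATUS = 'R'
        AND WORKORDER_TYPE = 'W'
        AND WORKORDER_BASE_ID = BASES(I)
        AND WORKORDER_LOT_ID = LOTS(I)
        AND WORKORDER_SPLIT_ID = '0'
        AND RESOURCE_ID NOT IN ('DEVELOPMENT','START','DESPATCH','SERVICE');

      IF COUNTPLEASE > 0 THEN
        UTL_MAIL.SEND(
          SENDER => 'me@myplace.com.au',
          RECIPIENTS => 'me@myplace.com.au',
          SUBJECT => 'Despatch closed for ' || BASES(I),
          MESSAGE => 'Operations not closed: ' || COUNTPLEASE);
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END TEST3;
/
```

On releases before 11.1 the same idea requires a package variable populated by a row-level trigger and read by a separate AFTER statement-level trigger.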





Query Performance Problem, or is Something Else Happening?

3 03 2010

March 3, 2010

Let’s say that you encounter a query that is taking an unexpectedly long time to run, possibly 30 minutes.  Maybe the query looks something like this:

SELECT   mds.messagepartdata, ms.status, mi.ID, mi.messageguid, mi.channel,
         ROWNUM AS messagecount
    FROM pfmq_messagedata md,
         pfmq_messagedatastorage mds,
         pfmq_messageinfo mi,
         pfmq_messagestatus ms
   WHERE (    mi.queuename = 'CL312911032'
          AND mi.ID = ms.ID
          AND mi.ID = md.ID
          AND mi.ID = mds.ID
          AND md.ID = mds.ID
          AND md.messageparttype = mds.messageparttype
          AND md.messageparttype = 1
          AND (ms.statusrevisionnumber = (SELECT MAX (statusrevisionnumber)
                                            FROM pfmq_messagestatus ms2
                                           WHERE ms2.ID = ms.ID)
              )
         )
     AND ((ms.status = 64) AND (mi.direction = 1) AND mi.messagetype = 0)
ORDER BY mi.sequenceordinalnumber, mi.senttime

In the above, the table PFMQ_MESSAGEDATASTORAGE contains a long raw column – the MESSAGEPARTDATA column.  You enable a 10046 trace, but only at level 1 – so there are no wait events written to the trace file.  The resulting trace file is then processed using TKPROF, and you obtain the following output:

call     count       cpu    elapsed       disk      query    current  rows
------- ------  -------- ---------- ---------- ---------- ---------- -----
Parse        1      0.00       0.00          0          0          0     0
Execute      1      0.00       0.00          0          0          0     0
Fetch     4321     14.56     580.31     231750     746064          0 64806
------- ------  -------- ---------- ---------- ---------- ---------- -----
total     4323     14.56     580.31     231750     746064          0 64806

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 29

Rows     Row Source Operation
-------  ---------------------------------------------------
  64806  SORT ORDER BY (cr=681258 r=184767 w=0 time=403515790 us)
  64806   COUNT  (cr=681258 r=184767 w=0 time=1729762996 us)
  64806    NESTED LOOPS  (cr=681258 r=184767 w=0 time=1729717540 us)
  64806     NESTED LOOPS  (cr=486600 r=92648 w=0 time=901417748 us)
  64806      NESTED LOOPS  (cr=356748 r=46572 w=0 time=268980743 us)
  64820       TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGEINFO (cr=31514 r=23422 w=0 time=44437657 us)
 120784        INDEX RANGE SCAN AK1_PFMQ_SEQUENCENUMBER (cr=3117 r=3062 w=0 time=10896605 us)(object id 6511)
  64806       TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGESTATUS (cr=325234 r=23150 w=0 time=224278563 us)
  64820        INDEX RANGE SCAN XPKPF_MESSAGESTATUS (cr=260414 r=15792 w=0 time=208616639 us)(object id 6515)
  64820         SORT AGGREGATE (cr=129644 r=116 w=0 time=1973822 us)
  64820          FIRST ROW  (cr=129644 r=116 w=0 time=1810738 us)
  64820           INDEX RANGE SCAN (MIN/MAX) XPKPF_MESSAGESTATUS (cr=129644 r=116 w=0 time=1756030 us)(object id 6515)
  64806      INDEX UNIQUE SCAN XPKPF_MESSAGEDATA (cr=129852 r=46076 w=0 time=632244506 us)(object id 6505)
  64806     TABLE ACCESS BY INDEX ROWID PFMQ_MESSAGEDATASTORAGE (cr=194658 r=92119 w=0 time=828055493 us)
  64806      INDEX UNIQUE SCAN XPKPF_MESSAGEDATASTORAGE (cr=129852 r=46036 w=0 time=613528422 us)(object id 6507)

How would you troubleshoot this performance problem?  What are the problems?  What looks good about the above output?  What about the above makes you wonder if some detail is missing?  You will find the above SQL statement in this comp.databases.oracle.server Usenet thread.

What can be said about the above output?

  • The client application is retrieving roughly 15 rows in each fetch request: 64,806/4,321 = 14.998 rows per fetch.  Maybe setting the fetch array size to a larger value would help?
  • The fetch calls required 14.56 seconds of the server’s CPU time, while the elapsed time for the fetch calls was 580.31 seconds.  565.75 seconds were probably spent doing something other than actively burning CPU cycles – such as waiting for the completion of disk reads.
  • Most of the indexes accessed were high precision indexes, meaning that few of the rowids returned by the index were eliminated at the table level, with the exception of the AK1_PFMQ_SEQUENCENUMBER index, where 46% of the rows identified by the index were discarded.  Those rows were discarded very early in the plan, so the performance impact was likely minimal.
  • There was heavy use of nested loop joins – that might be OK, but might not be as efficient as a hash join if a large percentage of the tables were being accessed or if the clustering factor were high.
  • There were 231,750 blocks read from disk, and considering the heavy use of indexes and nested loop joins, those blocks were likely read one at a time from disk.  If that was the case, the average time to read a block from disk was 0.0024412 seconds (2.4412ms), which would be a very fast access time for single-block physical reads from disk.
  • Considering the WHERE clause, the execution plan starts with the two tables with the greatest number of predicates – so that is probably a smart starting point.
  • The elapsed time reported on the last line of the execution plan is greater than the elapsed time reported by TKPROF for the entire execution – an odd inconsistency: the last line shows 613.5 seconds in its time= entry, yet TKPROF reported an elapsed time of only 580.31 seconds.  The second line of the execution plan shows 1,729.7 seconds (28.8 minutes) in its time= entry, which again is inconsistent with TKPROF’s elapsed time statistic.
  • 580.31 seconds is roughly 9.7 minutes – what happened to the other 20.3 minutes in the 30 minute total execution time that was reported by the original poster?

What might be the next couple of steps for troubleshooting this performance problem?

  • Generate the 10046 trace at level 8, rather than level 1.  That will cause the wait events to be written to the trace file.  If the trace file contains a large number of long duration waits on SQL*Net type wait events, check the network with a packet sniffer (Wireshark, for example) and check the client-side activity to make certain that it is not the client application that is the source of the slow performance.  If you see large unexplained gaps in the tim= values in the trace file without a corresponding long wait event in between, investigate whether the server’s CPUs are over-burdened.
  • Check the statistics on the tables and indexes to make certain that those statistics are reasonably accurate and up to date.
  • Review the current optimizer related initialization parameters to make certain that silly parameters are not specified.
  • Most of all, determine where the missing 20.3 minutes have gone.
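For reference, the level 8 trace suggested in the first bullet point might be enabled like this (the SID and SERIAL# values are placeholders to be replaced with values from V$SESSION):

```sql
-- In the problem session itself:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... execute the slow query ...
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Or, for another session, using DBMS_MONITOR (10g and later):
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(SESSION_ID=>123, SERIAL_NUM=>456, WAITS=>TRUE, BINDS=>FALSE)
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(SESSION_ID=>123, SERIAL_NUM=>456)
```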




Why Doesn’t this SQL Work?

2 03 2010

March 2, 2010

I read a lot of computer books – a fair number of those are on the topic of Oracle, and a portion of those are specific to writing SQL that executes on Oracle Database.  I also spend time browsing the Internet looking for interesting articles.  I found an interesting SQL statement in a couple of places on the Internet, so I thought that I would share the SQL statement:

SELECT
  BOOK_KEY
FROM
  BOOK
WHERE
  NOT EXISTS (SELECT BOOK_KEY FROM SALES);

The SQL statement can be found here:
http://books.google.com/books?id=xJ0fLjQFUFcC&pg=PA105#v=onepage&q=&f=false

And here (as well as a half-dozen other places on the Internet):
http://deepthinking99.wordpress.com/2009/11/20/rewriting-sql-for-faster-performance/

Deep thinking… something is wrong with that SQL statement.  Maybe we need a test script to see the problem?

CREATE TABLE T5 AS
SELECT
  ROWNUM BOOK_KEY
FROM
  DUAL
CONNECT BY
  LEVEL<=20;

CREATE TABLE T6 AS
SELECT
  ROWNUM*2 BOOK_KEY
FROM
  DUAL
CONNECT BY
  LEVEL<=20;

Let’s pretend that table T5 is the table BOOK and table T6 is the table SALES.  The SQL statement would look like this using our test tables:

SELECT
  BOOK_KEY
FROM
  T5
WHERE
  NOT EXISTS (SELECT BOOK_KEY FROM T6);

Both of the above links go on to suggest that a transformed and faster version of the above SQL statement would look like this:

SELECT
  B.BOOK_KEY
FROM
  T5 B,
  T6 S
WHERE
  B.BOOK_KEY=S.BOOK_KEY(+)
  AND S.BOOK_KEY IS NULL;

I suggest that both of the above links (and the 6+ other links found through a Google search) are clearly wrong – the first SQL statement is obviously faster.  Don’t believe me?  Put 1,000,000 rows in each table and time how long it takes to transfer the results back to the client computer.  How confident am I?  Take a look:

SELECT
  BOOK_KEY
FROM
  T5
WHERE
  NOT EXISTS (SELECT BOOK_KEY FROM T6);

no rows selected
--
SELECT
  B.BOOK_KEY
FROM
  T5 B,
  T6 S
WHERE
  B.BOOK_KEY=S.BOOK_KEY(+)
  AND S.BOOK_KEY IS NULL;

  BOOK_KEY
----------
         5
         3
        15
        19
        17
         7
         9
        13
         1
        11

So, if each table contained 1,000,000 rows, which SQL statement would return the result set to the client the fastest?

Lesson 1: if you plan to publish something, whether in book form or on the Internet, make certain that what you publish actually works (or at least looks like you put some effort into it).

Lesson 2: if you plan to copy someone else’s work and post it on your website/blog make certain that what you copy and pass off as your own actually works.

Lesson 3: don’t trust everything that you read on the Internet or in a book without first verifying that the information is correct, even if you find the information on your favorite website.
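For comparison, a correlated form of the subquery – presumably what the various authors intended – actually restricts the result to books without sales, and with the test tables above it returns the same 10 odd-numbered BOOK_KEY values as the outer join version:

```sql
SELECT
  BOOK_KEY
FROM
  T5
WHERE
  NOT EXISTS (
    SELECT
      1
    FROM
      T6
    WHERE
      T6.BOOK_KEY=T5.BOOK_KEY);
```

Without the correlation, the original subquery either always or never finds a row, so the NOT EXISTS clause returns either no rows or every row of the BOOK table.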

Makes you wonder if someone would suggest replacing a pure SQL solution with a combined SQL and PL/SQL solution for the purpose of improving performance.  No, that would be silly.  Pardon me while I go re-sequence the 64 bits to keep them from chattering as they pass through the router… maybe I should try to oil the bits or use a bigger router.  On second thought, I’ll just use a hammer (putting down the 28oz framing hammer to grab the small hammer – those bits break too easily).





DBMS_XPLAN Format Parameters

1 03 2010

March 1, 2010 (updated March 5, 2010)

In the last couple of years I have seen several very good references for the DBMS_XPLAN parameters, but it seems that those references are typically hard to locate when needed.  The documentation, while good, is a little confusing because few example outputs are included.  From the documentation:

“format: Controls the level of details for the plan. It accepts four values: 

  • BASIC: Displays the minimum information in the plan—the operation ID, the operation name and its option.
  • TYPICAL: This is the default. Displays the most relevant information in the plan (operation id, name and option, #rows, #bytes and optimizer cost). Pruning, parallel and predicate information are only displayed when applicable. Excludes only PROJECTION, ALIAS and REMOTE SQL information (see below).
  • SERIAL: Like TYPICAL except that the parallel information is not displayed, even if the plan executes in parallel.
  • ALL: Maximum user level. Includes information displayed with the TYPICAL level with additional information (PROJECTION, ALIAS and information about REMOTE SQL if the operation is distributed).

Format keywords must be separated by either a comma or a space: 

  • ROWS – if relevant, shows the number of rows estimated by the optimizer
  • BYTES – if relevant, shows the number of bytes estimated by the optimizer
  • COST – if relevant, shows optimizer cost information
  • PARTITION – if relevant, shows partition pruning information
  • PARALLEL – if relevant, shows PX information (distribution method and table queue information)
  • PREDICATE – if relevant, shows the predicate section
  • PROJECTION – if relevant, shows the projection section
  • ALIAS – if relevant, shows the “Query Block Name / Object Alias” section
  • REMOTE – if relevant, shows the information for distributed query (for example, remote from serial distribution and remote SQL)
  • NOTE – if relevant, shows the note section of the explain plan
  • IOSTATS – assuming that basic plan statistics are collected when SQL statements are executed (either by using the gather_plan_statistics hint or by setting the parameter statistics_level to ALL), this format shows IO statistics for ALL (or only for the LAST as shown below) executions of the cursor.
  • MEMSTATS – Assuming that PGA memory management is enabled (that is, pga_aggregate_target parameter is set to a non 0 value), this format allows to display memory management statistics (for example, execution mode of the operator, how much memory was used, number of bytes spilled to disk, and so on). These statistics only apply to memory intensive operations like hash-joins, sort or some bitmap operators.
  • ALLSTATS – A shortcut for ‘IOSTATS MEMSTATS’
  • LAST – By default, plan statistics are shown for all executions of the cursor. The keyword LAST can be specified to see only the statistics for the last execution.

The following two formats are deprecated but supported for backward compatibility: 

  • RUNSTATS_TOT – Same as IOSTATS, that is, displays IO statistics for all executions of the specified cursor.
  • RUNSTATS_LAST – Same as IOSTATS LAST, that is, displays the runtime statistics for the last execution of the cursor

Format keywords can be prefixed by the sign ‘-‘ to exclude the specified information. For example, ‘-PROJECTION’ excludes projection information.” 

This blog article will attempt to demonstrate, using Oracle Database 11.2.0.1, as many of the FORMAT parameter values for DBMS_XPLAN.DISPLAY_CURSOR as possible.  We will use four test tables for the demonstration.  The test table definitions follow (warning – creating table T1 could require an hour or longer): 

CREATE TABLE T1 (
  ID NUMBER,
  DESCRIPTION VARCHAR2(80));

INSERT INTO T1
SELECT
  CEIL(ABS(SIN(ROWNUM/9.9999)*10000)),
  'This is the long description for this number '|| TO_CHAR(CEIL(ABS(SIN(ROWNUM/9.9999)*10000)))
FROM
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000),
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000);

CREATE INDEX IND_T1 ON T1(ID);

CREATE TABLE T2 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

CREATE TABLE T3 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

CREATE TABLE T4 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

CREATE INDEX IND_T4 ON T4(C1);

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T4',CASCADE=>TRUE)

I will start by setting the STATISTICS_LEVEL parameter to ALL at the session level.  In general, this parameter should be set to TYPICAL (edit March 5, 2010: a /*+ GATHER_PLAN_STATISTICS */ hint may be used immediately after the SELECT keyword in the SQL statement to provide almost the same level of detail as would be available when setting the STATISTICS_LEVEL parameter to ALL, without as significant of a negative performance impact – see the Related Blog Articles links below for examples that use the hint).  I will also disable the output of rows to the SQL*Plus window: 

ALTER SESSION SET STATISTICS_LEVEL='ALL';
SET AUTOTRACE TRACEONLY STATISTICS
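As a quick illustration of the hint alternative mentioned above (a sketch using table T2 created earlier), the collected statistics can then be displayed by passing NULL for the SQL_ID and child number, which targets the session’s last executed statement (SERVEROUTPUT must be off in SQL*Plus for this to work):

```sql
SELECT /*+ GATHER_PLAN_STATISTICS */
  COUNT(*)
FROM
  T2;

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));
```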

The following SQL statement is executed twice in session 1: 

SELECT /*+ PARALLEL(T1 8 ) */
  *
FROM
  T1;

Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
     815350  consistent gets
     813217  physical reads
          0  redo size
 5509985356  bytes sent via SQL*Net to client
    1100512  bytes received via SQL*Net from client
     100001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
  100000000  rows processed

In session 2 we determine the SQL_ID and CHILD_NUMBER of the SQL statement that is executing in session 1: 

SELECT
  SQL_ID,
  CHILD_NUMBER
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'SELECT /*+ PARALLEL(T1 8 ) */%';

SQL_ID        CHILD_NUMBER
------------- ------------
6kd5fkqdjb8fu            0

BASIC Format Parameter Value:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'BASIC'));

EXPLAINED SQL STATEMENT:
------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1

Plan hash value: 2494645258

-----------------------------------------
| Id  | Operation            | Name     |
-----------------------------------------
|   0 | SELECT STATEMENT     |          |
|   1 |  PX COORDINATOR      |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |
|   3 |    PX BLOCK ITERATOR |          |
|   4 |     TABLE ACCESS FULL| T1       |
-----------------------------------------

SERIAL Format Parameter Value:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'SERIAL'));

SQL_ID  6kd5fkqdjb8fu, child number 0
-------------------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1

Plan hash value: 2494645258

---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       | 30907 (100)|          |
|   1 |  PX COORDINATOR      |          |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   100M|  5149M| 30907   (1)| 00:06:11 |
|   3 |    PX BLOCK ITERATOR |          |   100M|  5149M| 30907   (1)| 00:06:11 |
|*  4 |     TABLE ACCESS FULL| T1       |   100M|  5149M| 30907   (1)| 00:06:11 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(:Z>=:Z AND :Z<=:Z)

TYPICAL Format Parameter Value:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'TYPICAL'));

SQL_ID  6kd5fkqdjb8fu, child number 0
-------------------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1

Plan hash value: 2494645258

--------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)|  Time    |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       | 30907 (100)|          |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | PCWC |            |
|*  4 |     TABLE ACCESS FULL| T1       |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 4 - access(:Z>=:Z AND :Z<=:Z)

ALL Format Parameter Value

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'ALL'));

SQL_ID  6kd5fkqdjb8fu, child number 0
-------------------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1

Plan hash value: 2494645258

--------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)|   Time   |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |       |       | 30907 (100)|          |        |      |            |
|   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | PCWC |            |
|*  4 |     TABLE ACCESS FULL| T1       |   100M|  5149M| 30907   (1)| 00:06:11 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T1@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access(:Z>=:Z AND :Z<=:Z)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "T1"."ID"[NUMBER,22], "T1"."DESCRIPTION"[VARCHAR2,80]
   2 - (#keys=0) "T1"."ID"[NUMBER,22], "T1"."DESCRIPTION"[VARCHAR2,80]
   3 - "T1"."ID"[NUMBER,22], "T1"."DESCRIPTION"[VARCHAR2,80]
   4 - "T1"."ID"[NUMBER,22], "T1"."DESCRIPTION"[VARCHAR2,80]

ALLSTATS Format Parameter Value:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'ALLSTATS'));

SQL_ID  6kd5fkqdjb8fu, child number 0
-------------------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1
Plan hash value: 2494645258
----------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |      2 |        |    200M|00:00:47.47 |      52 |     12 |
|   1 |  PX COORDINATOR      |          |      2 |        |    200M|00:00:47.47 |      52 |     12 |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |      0 |    100M|      0 |00:00:00.01 |       0 |      0 |
|   3 |    PX BLOCK ITERATOR |          |     16 |    100M|    200M|00:01:56.71 |    1630K|   1626K|
|*  4 |     TABLE ACCESS FULL| T1       |    248 |    100M|    200M|00:00:53.14 |    1528K|   1524K|
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access(:Z>=:Z AND :Z<=:Z)

ALLSTATS LAST Format Parameter Value:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'ALLSTATS LAST'));

SQL_ID  6kd5fkqdjb8fu, child number 0
-------------------------------------
SELECT /*+ PARALLEL(T1 8 ) */   * FROM   T1

Plan hash value: 2494645258

-------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |      1 |        |    100M|00:00:23.61 |      25 |
|   1 |  PX COORDINATOR      |          |      1 |        |    100M|00:00:23.61 |      25 |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |      0 |    100M|      0 |00:00:00.01 |       0 |
|   3 |    PX BLOCK ITERATOR |          |      0 |    100M|      0 |00:00:00.01 |       0 |
|*  4 |     TABLE ACCESS FULL| T1       |      0 |    100M|      0 |00:00:00.01 |       0 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 4 - access(:Z>=:Z AND :Z<=:Z)

Next SQL Statement Executed in Session 1:

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=100

SELECT
  *
FROM
  T2,
  T4
WHERE
  T2.C1 BETWEEN :N1 AND :N2
  AND T2.C1=T4.C1;

Statistics
----------------------------------------------------
   340  recursive calls
     0  db block gets
   294  consistent gets
   171  physical reads
     0  redo size
  1994  bytes sent via SQL*Net to client
   360  bytes received via SQL*Net from client
     2  SQL*Net roundtrips to/from client
     6  sorts (memory)
     0  sorts (disk)
   100  rows processed

In session 2 we determine the SQL_ID and CHILD_NUMBER of the SQL statement that is executing in session 1: 

SELECT
  SQL_ID,
  CHILD_NUMBER
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'SELECT%T2.C1 BETWEEN :N1 AND :N2%'
  AND SQL_TEXT NOT LIKE '%V$SQL%';

SQL_ID        CHILD_NUMBER
------------- ------------
75chksrfa5fbt            0

Starting Point, Viewing the Last Execution Statistics:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',0,'ALLSTATS LAST'));

SQL_ID  75chksrfa5fbt, child number 0
-------------------------------------
SELECT   * FROM   T2,   T4 WHERE   T2.C1 BETWEEN :N1 AND :N2   AND
T2.C1=T4.C1

Plan hash value: 3771400634

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |        |    100 |00:00:00.03 |     171 |     29 |
|*  1 |  FILTER                       |        |      1 |        |    100 |00:00:00.03 |     171 |     29 |
|   2 |   NESTED LOOPS                |        |      1 |        |    100 |00:00:00.03 |     171 |     29 |
|   3 |    NESTED LOOPS               |        |      1 |      2 |    100 |00:00:00.02 |     168 |     21 |
|*  4 |     TABLE ACCESS FULL         | T2     |      1 |      2 |    100 |00:00:00.01 |     159 |     13 |
|*  5 |     INDEX RANGE SCAN          | IND_T4 |    100 |      1 |    100 |00:00:00.01 |       9 |      8 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T4     |    100 |      1 |    100 |00:00:00.01 |       3 |      8 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   4 - filter(("T2"."C1">=:N1 AND "T2"."C1"<=:N2))
   5 - access("T2"."C1"="T4"."C1")
       filter(("T4"."C1"<=:N2 AND "T4"."C1">=:N1))

Note
-----
   - dynamic sampling used for this statement (level=2)

Enabling Extra Output:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',0,'ALLSTATS LAST +PEEKED_BINDS +PROJECTION +ALIAS +PREDICATE +COST +BYTES'));

SQL_ID  75chksrfa5fbt, child number 0
-------------------------------------
SELECT   * FROM   T2,   T4 WHERE   T2.C1 BETWEEN :N1 AND :N2   AND  T2.C1=T4.C1

Plan hash value: 3771400634

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | E-Rows |E-Bytes| Cost(%CPU) | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |        |       |    51 (100)|    100 |00:00:00.03 |     171 |     29 |
|*  1 |  FILTER                       |        |      1 |        |       |            |    100 |00:00:00.03 |     171 |     29 |
|   2 |   NESTED LOOPS                |        |      1 |        |       |            |    100 |00:00:00.03 |     171 |     29 |
|   3 |    NESTED LOOPS               |        |      1 |      2 |   340 |    51   (0)|    100 |00:00:00.02 |     168 |     21 |
|*  4 |     TABLE ACCESS FULL         | T2     |      1 |      2 |   130 |    47   (0)|    100 |00:00:00.01 |     159 |     13 |
|*  5 |     INDEX RANGE SCAN          | IND_T4 |    100 |      1 |       |     1   (0)|    100 |00:00:00.01 |       9 |      8 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T4     |    100 |      1 |   105 |     2   (0)|    100 |00:00:00.01 |       3 |      8 |
--------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   4 - SEL$1 / T2@SEL$1
   5 - SEL$1 / T4@SEL$1
   6 - SEL$1 / T4@SEL$1

Peeked Binds (identified by position):
--------------------------------------
   1 - (NUMBER): 1
   2 - (NUMBER): 100

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   4 - filter(("T2"."C1">=:N1 AND "T2"."C1"<=:N2))
   5 - access("T2"."C1"="T4"."C1")
       filter(("T4"."C1"<=:N2 AND "T4"."C1">=:N1))

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "T2"."C1"[NUMBER,22], "T2"."C2"[VARCHAR2,100], "T4"."C1"[NUMBER,22], "T4"."C2"[VARCHAR2,100]
   2 - "T2"."C1"[NUMBER,22], "T2"."C2"[VARCHAR2,100], "T4"."C1"[NUMBER,22], "T4"."C2"[VARCHAR2,100]
   3 - "T2"."C1"[NUMBER,22], "T2"."C2"[VARCHAR2,100], "T4".ROWID[ROWID,10], "T4"."C1"[NUMBER,22]
   4 - "T2"."C1"[NUMBER,22], "T2"."C2"[VARCHAR2,100]
   5 - "T4".ROWID[ROWID,10], "T4"."C1"[NUMBER,22]
   6 - "T4"."C2"[VARCHAR2,100]

Note
-----
   - dynamic sampling used for this statement (level=2)

Removing Output Sections:

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',0,'ALLSTATS LAST -NOTE -ROWS -PREDICATE'));

SQL_ID  75chksrfa5fbt, child number 0
-------------------------------------
SELECT   * FROM   T2,   T4 WHERE   T2.C1 BETWEEN :N1 AND :N2   AND
T2.C1=T4.C1

Plan hash value: 3771400634

--------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |    100 |00:00:00.03 |     171 |     29 |
|   1 |  FILTER                       |        |      1 |    100 |00:00:00.03 |     171 |     29 |
|   2 |   NESTED LOOPS                |        |      1 |    100 |00:00:00.03 |     171 |     29 |
|   3 |    NESTED LOOPS               |        |      1 |    100 |00:00:00.02 |     168 |     21 |
|   4 |     TABLE ACCESS FULL         | T2     |      1 |    100 |00:00:00.01 |     159 |     13 |
|   5 |     INDEX RANGE SCAN          | IND_T4 |    100 |    100 |00:00:00.01 |       9 |      8 |
|   6 |    TABLE ACCESS BY INDEX ROWID| T4     |    100 |    100 |00:00:00.01 |       3 |      8 |
--------------------------------------------------------------------------------------------------

A More Complicated Example

The previous examples were too simple, so let’s look at something that is a bit more interesting.  We will introduce partitioning, parallel execution, and remote databases.  First, let’s create a larger version of table T3 with 1,000,000 rows rather than 10,000 rows: 

DROP TABLE T3 PURGE;

CREATE TABLE T3
  PARTITION BY RANGE(C1)
   (PARTITION P1 VALUES LESS THAN (5),
    PARTITION P2 VALUES LESS THAN (10),
    PARTITION P3 VALUES LESS THAN (20),
    PARTITION P4 VALUES LESS THAN (40),
    PARTITION P5 VALUES LESS THAN (80),
    PARTITION P6 VALUES LESS THAN (160),
    PARTITION P7 VALUES LESS THAN (320),
    PARTITION P8 VALUES LESS THAN (640),
    PARTITION P9 VALUES LESS THAN (1280),
    PARTITION P10 VALUES LESS THAN (2560),
    PARTITION P11 VALUES LESS THAN (5120),
    PARTITION P12 VALUES LESS THAN (10240),
    PARTITION P20 VALUES LESS THAN (MAXVALUE))
AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)

Now let’s connect to the database in another session as the SYS user and create a database link to an Oracle 11.1.0.6 database (global names are not used in the database, otherwise we would need a specific name for the database link, as mentioned here), and then flush the buffer cache: 

CREATE PUBLIC DATABASE LINK TEST_LINK CONNECT TO TESTUSER IDENTIFIED BY MY_PASS_HERE USING 'o11106';

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

In the remote 11.1.0.6 database, the TESTUSER creates the following tables, and then the SYS user flushes the buffer cache: 

CREATE TABLE T3 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

CREATE TABLE T4 AS
SELECT
  ROWNUM C1,
  LPAD('A',100,'A') C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

CREATE INDEX IND_T4 ON T4(C1);

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T3',CASCADE=>TRUE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T4',CASCADE=>TRUE)

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

Back in the 11.2.0.1 database as our test user, we create a SQL statement that accesses the local tables T1, T2, T3, and T4, as well as the remote tables T3 and T4: 

VARIABLE N1 NUMBER
VARIABLE N2 NUMBER
EXEC :N1:=1
EXEC :N2:=200
ALTER SESSION SET STATISTICS_LEVEL='ALL';
SET AUTOTRACE TRACEONLY STATISTICS

SELECT /*+ PARALLEL(8) */
  T2.C1 T2_C1,
  SUBSTR(T2.C2,1,10) T2_C2,
  T3.C1 T3_C1,
  SUBSTR(T3.C2,1,10) T3_C2,
  T4.C1 T4_C1,
  SUBSTR(T4.C2,1,10) T4_C2,
  TL_T3.C1 TL_T3_C1,
  SUBSTR(TL_T3.C2,1,10) TL_T3_C2,
  TL_T4.C1 TL_T4_C1,
  SUBSTR(TL_T4.C2,1,10) TL_T4_C2,
  V_T1.C
FROM
  T2,
  T3,
  T4,
  T3@TEST_LINK TL_T3,
  T4@TEST_LINK TL_T4,
  (SELECT
    ID,
    COUNT(*) C
  FROM
    T1
  GROUP BY
    ID) V_T1
WHERE
  T2.C1 BETWEEN :N1 AND :N2
  AND T2.C1=T3.C1
  AND T2.C1=T4.C1
  AND T2.C1=TL_T3.C1
  AND T2.C1=TL_T4.C1
  AND T2.C1=V_T1.ID(+)
ORDER BY
  T2.C1;

Statistics
----------------------------------------------------------
       2855  recursive calls
         12  db block gets
       3979  consistent gets
       2893  physical reads
       1324  redo size
       9145  bytes sent via SQL*Net to client
        667  bytes received via SQL*Net from client
         15  SQL*Net roundtrips to/from client
         95  sorts (memory)
          0  sorts (disk)
        200  rows processed

Now let’s check the execution plan: 

SET AUTOTRACE OFF

SELECT
  SQL_ID,
  CHILD_NUMBER
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE '%T3@TEST_LINK TL_T3,%'
  AND SQL_TEXT NOT LIKE '%V$SQL%';

SQL_ID        CHILD_NUMBER
------------- ------------
dkmcbpadz15w1            0
dkmcbpadz15w1            1

Interesting, two child cursors.  Let’s see the execution plans: 

SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('dkmcbpadz15w1',NULL,'ALLSTATS LAST'));

SQL_ID  dkmcbpadz15w1, child number 0
-------------------------------------
SELECT /*+ PARALLEL(8) */   T2.C1 T2_C1,   SUBSTR(T2.C2,1,10) T2_C2,
T3.C1 T3_C1,   SUBSTR(T3.C2,1,10) T3_C2,   T4.C1 T4_C1,
SUBSTR(T4.C2,1,10) T4_C2,   TL_T3.C1 TL_T3_C1,   SUBSTR(TL_T3.C2,1,10)
TL_T3_C2,   TL_T4.C1 TL_T4_C1,   SUBSTR(TL_T4.C2,1,10) TL_T4_C2,
V_T1.C FROM   T2,   T3,   T4,   T3@TEST_LINK TL_T3,   T4@TEST_LINK
TL_T4,   (SELECT     ID,     COUNT(*) C   FROM     T1   GROUP BY
ID) V_T1 WHERE   T2.C1 BETWEEN :N1 AND :N2   AND T2.C1=T3.C1   AND
T2.C1=T4.C1   AND T2.C1=TL_T3.C1   AND T2.C1=TL_T4.C1   AND
T2.C1=V_T1.ID(+) ORDER BY   T2.C1

Plan hash value: 453902047

---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |          |      1 |        |    200 |00:00:02.49 |    2609 |   2655 |       |       |          |
|*  1 |  PX COORDINATOR                                |          |      1 |        |    200 |00:00:02.49 |    2609 |   2655 |       |       |          |
|   2 |   PX SEND QC (ORDER)                           | :TQ10008 |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    SORT ORDER BY                               |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|   4 |     PX RECEIVE                                 |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND RANGE                             | :TQ10007 |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  6 |       FILTER                                   |          |      0 |        |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|*  7 |        HASH JOIN OUTER                         |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |   697K|   697K|          |
|   8 |         PX RECEIVE                             |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   9 |          PX SEND HASH                          | :TQ10006 |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 10 |           HASH JOIN                            |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |   705K|   705K|          |
|  11 |            PART JOIN FILTER CREATE             | :BF0000  |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  12 |             PX RECEIVE                         |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  13 |              PX SEND BROADCAST                 | :TQ10005 |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 14 |               HASH JOIN                        |          |      0 |     17 |      0 |00:00:00.01 |       0 |      0 |   713K|   713K|          |
|* 15 |                HASH JOIN                       |          |      0 |     21 |      0 |00:00:00.01 |       0 |      0 |   727K|   727K|          |
|* 16 |                 HASH JOIN                      |          |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |   757K|   757K|          |
|  17 |                  BUFFER SORT                   |          |      0 |        |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|  18 |                   PX RECEIVE                   |          |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  19 |                    PX SEND HASH                | :TQ10000 |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  20 |                     TABLE ACCESS BY INDEX ROWID| T4       |      1 |     25 |    200 |00:00:00.02 |       6 |     16 |       |       |          |
|* 21 |                      INDEX RANGE SCAN          | IND_T4   |      1 |     45 |    200 |00:00:00.01 |       2 |      8 |       |       |          |
|  22 |                  PX RECEIVE                    |          |      0 |     30 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  23 |                   PX SEND HASH                 | :TQ10004 |      0 |     30 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  24 |                    PX BLOCK ITERATOR           |          |      0 |     30 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 25 |                     TABLE ACCESS FULL          | T2       |      0 |     30 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  26 |                 BUFFER SORT                    |          |      0 |        |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|  27 |                  PX RECEIVE                    |          |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  28 |                   PX SEND HASH                 | :TQ10001 |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  29 |                    REMOTE                      | T3       |      1 |     25 |    200 |00:00:00.09 |       0 |      0 |       |       |          |
|  30 |                BUFFER SORT                     |          |      0 |        |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|  31 |                 PX RECEIVE                     |          |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  32 |                  PX SEND HASH                  | :TQ10002 |      0 |     25 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  33 |                   REMOTE                       | T4       |      1 |     25 |    200 |00:00:00.01 |       0 |      0 |       |       |          |
|  34 |            PX BLOCK ITERATOR                   |          |      0 |   2500 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 35 |             TABLE ACCESS FULL                  | T3       |      0 |   2500 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  36 |         BUFFER SORT                            |          |      0 |        |      0 |00:00:00.01 |       0 |      0 | 73728 | 73728 |          |
|  37 |          PX RECEIVE                            |          |      0 |  10000 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  38 |           PX SEND HASH                         | :TQ10003 |      0 |  10000 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  39 |            VIEW                                |          |      1 |  10000 |    200 |00:00:01.03 |    2577 |   2632 |       |       |          |
|  40 |             HASH GROUP BY                      |          |      1 |  10000 |    200 |00:00:01.03 |    2577 |   2632 |  1115K|  1115K| 2593K (0)|
|* 41 |              FILTER                            |          |      1 |        |   1273K|00:00:00.78 |    2577 |   2632 |       |       |          |
|* 42 |               INDEX RANGE SCAN                 | IND_T1   |      1 |    250K|   1273K|00:00:00.43 |    2577 |   2632 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   6 - filter(:N1<=:N2)
   7 - access("T2"."C1"="V_T1"."ID")
  10 - access("T2"."C1"="T3"."C1")
  14 - access("T2"."C1"="TL_T4"."C1")
  15 - access("T2"."C1"="TL_T3"."C1")
  16 - access("T2"."C1"="T4"."C1")
  21 - access("T4"."C1">=:N1 AND "T4"."C1"<=:N2)
  25 - access(:Z>=:Z AND :Z<=:Z)
       filter(("T2"."C1">=:N1 AND "T2"."C1"<=:N2))
  35 - access(:Z>=:Z AND :Z<=:Z)
       filter(("T3"."C1">=:N1 AND "T3"."C1"<=:N2))
  41 - filter(:N1<=:N2)
  42 - access("ID">=:N1 AND "ID"<=:N2)

Note
-----
   - dynamic sampling used for this statement (level=5)
   - Degree of Parallelism is 8 because of hint

 

SQL_ID  dkmcbpadz15w1, child number 1
-------------------------------------
SELECT /*+ PARALLEL(8) */   T2.C1 T2_C1,   SUBSTR(T2.C2,1,10) T2_C2,
T3.C1 T3_C1,   SUBSTR(T3.C2,1,10) T3_C2,   T4.C1 T4_C1,
SUBSTR(T4.C2,1,10) T4_C2,   TL_T3.C1 TL_T3_C1,   SUBSTR(TL_T3.C2,1,10)
TL_T3_C2,   TL_T4.C1 TL_T4_C1,   SUBSTR(TL_T4.C2,1,10) TL_T4_C2,
V_T1.C FROM   T2,   T3,   T4,   T3@TEST_LINK TL_T3,   T4@TEST_LINK
TL_T4,   (SELECT     ID,     COUNT(*) C   FROM     T1   GROUP BY
ID) V_T1 WHERE   T2.C1 BETWEEN :N1 AND :N2   AND T2.C1=T3.C1   AND
T2.C1=T4.C1   AND T2.C1=TL_T3.C1   AND T2.C1=TL_T4.C1   AND
T2.C1=V_T1.ID(+) ORDER BY   T2.C1

Plan hash value: 453902047

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name     | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |          |      0 |        |      0 |00:00:00.01 |       |       |          |
|*  1 |  PX COORDINATOR                                |          |      0 |        |      0 |00:00:00.01 |       |       |          |
|   2 |   PX SEND QC (ORDER)                           | :TQ10008 |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|   3 |    SORT ORDER BY                               |          |      0 |     17 |      0 |00:00:00.01 | 36864 | 36864 | 4096  (0)|
|   4 |     PX RECEIVE                                 |          |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|   5 |      PX SEND RANGE                             | :TQ10007 |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|*  6 |       FILTER                                   |          |      1 |        |     25 |00:00:01.03 |       |       |          |
|*  7 |        HASH JOIN OUTER                         |          |      1 |     17 |     25 |00:00:01.03 |   693K|   693K| 1286K (0)|
|   8 |         PX RECEIVE                             |          |      1 |     17 |     25 |00:00:00.01 |       |       |          |
|   9 |          PX SEND HASH                          | :TQ10006 |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|* 10 |           HASH JOIN                            |          |      0 |     17 |      0 |00:00:00.01 |  1278K|   930K| 1260K (0)|
|  11 |            PART JOIN FILTER CREATE             | :BF0000  |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|  12 |             PX RECEIVE                         |          |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|  13 |              PX SEND BROADCAST                 | :TQ10005 |      0 |     17 |      0 |00:00:00.01 |       |       |          |
|* 14 |               HASH JOIN                        |          |      1 |     17 |     25 |00:00:00.14 |   705K|   705K| 1139K (0)|
|* 15 |                HASH JOIN                       |          |      1 |     21 |     25 |00:00:00.12 |   720K|   720K| 1129K (0)|
|* 16 |                 HASH JOIN                      |          |      1 |     25 |     25 |00:00:00.03 |   763K|   763K| 1128K (0)|
|  17 |                  BUFFER SORT                   |          |      1 |        |     25 |00:00:00.02 |  4096 |  4096 | 4096  (0)|
|  18 |                   PX RECEIVE                   |          |      1 |     25 |     25 |00:00:00.02 |       |       |          |
|  19 |                    PX SEND HASH                | :TQ10000 |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|  20 |                     TABLE ACCESS BY INDEX ROWID| T4       |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|* 21 |                      INDEX RANGE SCAN          | IND_T4   |      0 |     45 |      0 |00:00:00.01 |       |       |          |
|  22 |                  PX RECEIVE                    |          |      1 |     30 |     25 |00:00:00.01 |       |       |          |
|  23 |                   PX SEND HASH                 | :TQ10004 |      0 |     30 |      0 |00:00:00.01 |       |       |          |
|  24 |                    PX BLOCK ITERATOR           |          |      0 |     30 |      0 |00:00:00.01 |       |       |          |
|* 25 |                     TABLE ACCESS FULL          | T2       |      0 |     30 |      0 |00:00:00.01 |       |       |          |
|  26 |                 BUFFER SORT                    |          |      1 |        |     25 |00:00:00.09 | 36864 | 36864 | 4096  (0)|
|  27 |                  PX RECEIVE                    |          |      1 |     25 |     25 |00:00:00.09 |       |       |          |
|  28 |                   PX SEND HASH                 | :TQ10001 |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|  29 |                    REMOTE                      | T3       |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|  30 |                BUFFER SORT                     |          |      1 |        |     25 |00:00:00.01 | 36864 | 36864 | 4096  (0)|
|  31 |                 PX RECEIVE                     |          |      1 |     25 |     25 |00:00:00.01 |       |       |          |
|  32 |                  PX SEND HASH                  | :TQ10002 |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|  33 |                   REMOTE                       | T4       |      0 |     25 |      0 |00:00:00.01 |       |       |          |
|  34 |            PX BLOCK ITERATOR                   |          |      0 |   2500 |      0 |00:00:00.01 |       |       |          |
|* 35 |             TABLE ACCESS FULL                  | T3       |      0 |   2500 |      0 |00:00:00.01 |       |       |          |
|  36 |         BUFFER SORT                            |          |      1 |        |     25 |00:00:01.03 | 18432 | 18432 | 2048  (0)|
|  37 |          PX RECEIVE                            |          |      1 |  10000 |     25 |00:00:01.03 |       |       |          |
|  38 |           PX SEND HASH                         | :TQ10003 |      0 |  10000 |      0 |00:00:00.01 |       |       |          |
|  39 |            VIEW                                |          |      0 |  10000 |      0 |00:00:00.01 |       |       |          |
|  40 |             HASH GROUP BY                      |          |      0 |  10000 |      0 |00:00:00.01 |    10M|  3024K|          |
|* 41 |              FILTER                            |          |      0 |        |      0 |00:00:00.01 |       |       |          |
|* 42 |               INDEX RANGE SCAN                 | IND_T1   |      0 |    250K|      0 |00:00:00.01 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:N1<=:N2)
   6 - filter(:N1<=:N2)
   7 - access("T2"."C1"="V_T1"."ID")
  10 - access("T2"."C1"="T3"."C1")
  14 - access("T2"."C1"="TL_T4"."C1")
  15 - access("T2"."C1"="TL_T3"."C1")
  16 - access("T2"."C1"="T4"."C1")
  21 - access("T4"."C1">=:N1 AND "T4"."C1"<=:N2)
  25 - access(:Z>=:Z AND :Z<=:Z)
       filter(("T2"."C1">=:N1 AND "T2"."C1"<=:N2))
  35 - access(:Z>=:Z AND :Z<=:Z)
       filter(("T3"."C1">=:N1 AND "T3"."C1"<=:N2))
  41 - filter(:N1<=:N2)
  42 - access("ID">=:N1 AND "ID"<=:N2)

Note
-----
   - dynamic sampling used for this statement (level=5)
   - automatic DOP: Computed Degree of Parallelism is 8

There are a couple of interesting items that you might notice; maybe someone can explain why: 

  • The execution plan for the first child cursor shows on line ID 25 that 0 rows (A-Rows) were retrieved from table T2, yet line ID 0 of the same plan shows that 200 rows were returned to the client.
  • SHOW PARAMETER OPTIMIZER_DYNAMIC_SAMPLING shows that dynamic sampling is set to 2, yet the Note sections of the execution plans show that dynamic sampling at level 5 was performed (statistics were not collected for table T2).
  • The Note section of the first child cursor shows that the degree of parallelism is 8 because of a hint, while the Note section of the second child cursor shows that the degree of parallelism was automatically computed as 8.
  • What was the purpose of the second child cursor?  No rows were returned, yet some lines in that plan show that 25 rows were retrieved.
  • Did I miss something?
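One way to start investigating why Oracle built a second child cursor is to query V$SQL_SHARED_CURSOR, which exposes a series of Y/N "reason" columns for each child. The query below is only a sketch: the exact set of reason columns varies by Oracle release, so the three mismatch columns selected here (present in 11.2) may need to be adjusted, or simply replace the column list with * to see every reason column.

```sql
SELECT
  SQL_ID,
  CHILD_NUMBER,
  BIND_MISMATCH,
  OPTIMIZER_MISMATCH,
  PQ_SLAVE_MISMATCH
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='dkmcbpadz15w1';
```

A Y in any of the reason columns for child number 1 indicates why the existing child cursor 0 could not be reused.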

Adding and Removing Items from the DBMS_XPLAN Output

[Screenshot: an annotated DBMS_XPLAN output.  The format parameters specified in the DISPLAY_CURSOR call are highlighted in a blue box, with yellow boxes indicating where those items appear in the execution plan, and how to remove other items that appear by default when the ALLSTATS LAST format parameter is provided.]
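The + and - format modifiers demonstrated earlier in this article can be combined freely in a single format string. For example, starting from the ALLSTATS LAST output, we could add the peeked bind values while suppressing the Note section:

```sql
SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',0,'ALLSTATS LAST +PEEKED_BINDS -NOTE'));
```

The resulting output contains the Peeked Binds section in addition to the runtime statistics columns, without the dynamic sampling note that would otherwise appear by default.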

Related Blog Articles:
Automated DBMS_XPLAN, Trace, and Send to Excel
Execution Plans – What is the Plan, and Where Do I Find It?
Execution Plans – What is the Plan, and Where Do I Find It 2
Execution Plans – What is the Plan, and Where Do I Find It 3
Parallel Reasoning







