Building Oracle Database Test Case Scripts – Is there an Easy Method?

September 27, 2010

It is far too easy to invest many hours into building a test case to demonstrate or explore a simple concept, or to model a particular problem that is being encountered.  I recently had the opportunity to attend a session by Jonathan Lewis titled “The Beginner’s Guide to Becoming an Expert”, which emphasized the use of test scripts to investigate various types of unexpected behavior.  Using examples from Jonathan’s session, that unexpected behavior might include noticing a reduction in the redo generated when an update statement is driven through an index access path.  Another example from the session was the significant amount of redo generated when a bitmap index is updated, compared to the amount generated when a b*tree index on the same column is updated, and then finding that the redo generated for the bitmap index drops significantly when the Oracle Database is upgraded from 9.2 to 11.1.  Creating a test script that models a particular situation allows that problem to be investigated in isolation, and then repeatedly analyzed as one variable is changed at a time – the change might include modifying an initialization parameter, submission of different initial bind variables, execution of the script against a different Oracle Database release version, or any number of other small scale changes.

One of the key concepts in the creation of test cases is to compare the intended end result of the test case with the starting point, and then identify logical transformation stages between the two end points.  At each transformation stage confirm that the preceding step logically leads to the next step, and then determine if there is any chance that a different result might be obtained given the same input.  Work backward from the last transformation stage to the first, and ask yourself if there is any other possible input that might have caused the outcome.  It takes practice, a fair amount of research, and sometimes trial and error to discover the multiple potential cause and effect relationships.

I recently used this technique to build a test case for an OTN thread to discover why an index was not used to retrieve three rows from an 8 million row table.  The “given” information was that the index was used if an always-false comparison of two literal VARCHAR values was removed from an OR condition in the WHERE clause.  The first step in building a test case, assuming that the production environment where the problem was found is not available for use, involves building a model of the source data.  If you are familiar with the real data that causes the unexpected behavior, build a model that simulates the real data; otherwise, assume that the rows to be returned are scattered among the other rows in the table.  In my test case, to simplify the calculations a bit, the number of rows to be inserted into the test table was increased to 10 million.  Before building the data model, determine what might cause a full table scan to be performed rather than an index access path:
* a high clustering factor for the index
* poor cardinality estimates (might be caused by out of date object statistics)
* incorrectly gathered system statistics
* poorly specified optimizer parameters
* a sub-optimal index definition (might require an index skip scan, or return a large number of rows that are later eliminated at the table level)
* optimizer bugs
* maybe the full table scan really is faster than an index access path
 
Now that we have identified some of the causes for a full table scan, our test case may be used to experiment: “what if”, and “how would I know if that was the cause?”  To determine if the clustering factor is the problem, I might first need to know how the optimizer uses the clustering factor, and whether I am able to artificially adjust it to see if a change in the value makes a difference in performance.  To determine if the cardinality estimates are a problem, I might use DBMS_XPLAN.DISPLAY_CURSOR to retrieve the estimated cardinality and the actual number of rows returned by every operation in the execution plan.  I might remove the comparison of the two literal VARCHAR values to see how that change affects the cardinality estimates, or see if changing one of the literal values causes a change – one of those changes just might also allow an index access path to be used.  I might try to use a hint to force the use of the index (hints are, after all, directives and not suggestions), and if that does not work I might analyze why the hint was invalid.  If the hint worked, after verifying that a performance improvement was achieved, I would likely consult a 10053 trace file to determine why the index access path was not automatically used – was the cost too high, or was the index access path cost not even calculated?

Once the cause of the full table scan is determined, additional what-if questions should be considered.  If the index hint worked, am I able to permanently add that hint to the SQL statement – are there any cases where adding that hint might lead to other performance problems later?  If the cardinality estimates are incorrect, is there anything that I am able to do to fix the cardinality estimates?  If the optimizer parameters are poorly specified, am I able to adjust those parameters – and if I do, what might be the impact on the other SQL statements that depend on the current initialization parameter values for optimal performance and costing calculations?  Are we able to modify the application that submitted the SQL statement to remove the unnecessary literal comparison?

Once the desired outcome is reached, work backward through the steps and determine where this approach might go wrong.  Is it reasonable to conclude that the previous step always results in the next step?  In short, test cases are not terribly difficult to construct once you learn what might cause a transformation step to lead to the wrong next step.

Oracle Database behavior changes from one release version to another, and sometimes from one patch level to the next.  Save the test case scripts once they are generated, because the behavior demonstrated by a test script might only be true for a small number of Oracle Database releases.  My test case script that generates a deadlock on Oracle Database 11.1 and 11.2, yet only an expected enqueue wait on Oracle Database 10.2, is just one example of the usefulness of test cases to demonstrate Oracle behavior.

Do we need a test case to determine why the OTN forums, if working at all, are so slow this morning?  I believe that it required more than two minutes just to display the forum home – that was roughly an hour after I saw the message stating that the OTN forums were down (someone must have tripped over the power cord again).

Any other ideas for easily creating useful test cases?


8 responses

28 09 2010
Narendra

Charles,

Nice write-up. I must admit it is, kind of, becoming more difficult with each release of Oracle (or the CBO, in particular) to write reproducible test cases.

p.s. BTW, if you don’t mind and have interest and time, can you have a look at this thread?
It is a long thread and I am not sure if the OP has managed to resolve the issue, but I would be curious to know how you would approach the problem.

28 09 2010
Charles Hooper

That is an interesting thread. Just a couple of comments:
* It does not appear that the execution plan actually changed when the user changed the OPTIMIZER_MODE to ALL_ROWS. The server just appeared to be more busy when the query was executed with the OPTIMIZER_MODE set to ALL_ROWS because all of the wait events increased in time – there were probably fewer of the needed blocks still in the buffer cache, and that likely explains the jump in the number of waits on the “db file sequential read” wait event.
* The execution plan that was displayed by the EXPLAIN PLAN FOR command does not exactly match the runtime plan that was output by TKPROF. If you copy both versions of the plan into a spreadsheet package (I used Microsoft Excel) you will see that lines 41 through 44 of the EXPLAIN PLAN FOR version of the plan do not appear in the runtime version of the plan. Between lines 34 and 35 of the EXPLAIN PLAN FOR version of the plan there is an index unique scan of the FND_TERRITORIES_TL_U1 index that is used to retrieve rows from the FND_TERRITORIES_TL table.
* The cardinality estimates appear to be inaccurate, and this might have to do with the ROWNUM=1 specification in the scalar subqueries – there is a bug that affects all release versions prior to 11.2.0.1 when ROWNUM is specified in the WHERE clause which incorrectly pushes the cardinality estimate too low for much of the rest of the plan (I have a test case somewhere in my blog that demonstrates this problem, but I am not sure if it applies in this case and may in fact be forcing a FIRST_ROWS optimizer mode). The cardinality estimates problem could also be caused by the use of ANALYZE rather than DBMS_STATS – Jonathan Lewis has several articles that describe the differences (http://jonathanlewis.wordpress.com/2009/04/30/analyze-this/ http://jonathanlewis.wordpress.com/2010/03/31/analyze-this-2/ http://jonathanlewis.wordpress.com/2010/04/01/analyze-this-2-2/).
* The excessive use of scalar subqueries is probably causing an excessive use of nested loop joins.
* The view POC_ASN_PICKUP_LOCATIONS_V might not be too terribly efficient. Inside the view the FND_LOOKUP_VALUES_U2 index returns 3778 rows, while only 1 row makes it through the restrictions on the FND_LOOKUP_VALUES table ("PV"."SEGMENT1"="B"."DESCRIPTION" AND "B"."DESCRIPTION" IS NOT NULL AND "B"."ATTRIBUTE9" IS NOT NULL AND "B"."ENABLED_FLAG"='Y' AND "B"."TAG"=:B1 AND TO_NUMBER("B"."ATTRIBUTE11")=:B2)
* The number of consistent gets is fairly high at 2,259,039 and that might explain the 19.89 seconds of CPU time.
* There are several calls to USERENV(‘LANG’) – I wonder if those calls result in a context switch.
* If there was a lot of competition for CPU time on the server, it might be the case that the process for this session was having difficulty getting back onto the server’s CPUs (edit: note that it is not the session’s process that must get back onto the CPU to send the next two packets) every time a fetch call was received from the client and every time two network packets were sent to the client. The max wait time of 7.48 seconds on the “SQL*Net more data to client” wait event in the one run and the 232.41 seconds on the same wait event during the second run (when OPTIMIZER_MODE was set to ALL_ROWS) makes me wonder if the server’s CPU is a bottleneck or if there is a severe network problem.
* If the SDU size is set to 8,000, that implies the client was fetching roughly (228+11289)*8000 = 92,136,000 bytes (87.87MB) of data, which would put the average row length at 4,064 bytes – enough to fill 3 network packets. If every two packets received by the client must be ACKed, that probably means there would be around 30,712 (92,136,000 / 1500 / 2) ACKs, and if the ping time between the client and server was 20ms, it just might take 614 seconds to transfer the 92,136,000 bytes to the client – that is close to the 421.60 elapsed seconds reported for the SQL statement when the OPTIMIZER_MODE was set to CHOOSE. So, it might be a good idea to look at the average ping time between the server and client.
* The client computer may be choking on the data that it is receiving – it may be overloaded.
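To make the arithmetic in the SDU point above easy to re-check, here is a minimal Python sketch; the 8,000 byte SDU, 1,500 byte packet payload, ACK-every-two-packets behavior, and 20ms ping time are all assumptions carried over from the bullet points, not measured values:

```python
# Back-of-the-envelope estimate of the time needed to move the result set
# to the client, assuming one ACK round trip per two 1,500-byte packets.
SDU_BYTES = 8000            # assumed SDU size
SDU_COUNT = 228 + 11289     # SDU-sized sends implied by the trace figures
PACKET_BYTES = 1500         # typical Ethernet payload
PING_SECONDS = 0.020        # assumed 20ms round-trip time

total_bytes = SDU_COUNT * SDU_BYTES        # 92,136,000 bytes (~87.87MB)
acks = total_bytes / PACKET_BYTES / 2      # ~30,712 ACKs
transfer_seconds = acks * PING_SECONDS     # ~614 seconds

print(total_bytes, round(acks), round(transfer_seconds))  # 92136000 30712 614
```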

A Wireshark trace on both the client and server might help, as would monitoring the server’s CPU usage.

28 09 2010
Narendra

Charles,

Thanks a lot for your in-depth analysis accompanied with all calculations.
I must admit I would have to re-read the point about SDU setting many times to understand it ;)
Apart from that, you have managed to point out some amazing points (which I had completely missed).
At present, I am struggling to understand your following statements. Can you help?
The excessive use of scalar subqueries is probably causing an excessive use of nested loop joins
Can use of scalar subqueries really affect the way other tables in the query are joined ?
If there was a lot of competition for CPU time on the server, it might be the case that the process for this session was having difficulty getting back onto the server’s CPUs (edit: note that it is not the session’s process that must get back onto the CPU to send the next two packets) every time a fetch call was received from the client and every time two network packets were sent to the client. The max wait time of 7.48 seconds on the “SQL*Net more data to client” wait event in the one run and the 232.41 seconds on the “SQL*Net more data to client” wait event during the second run (when OPTIMIZER_MODE was set to ALL_ROWS) makes me wonder if the server’s CPU is a bottleneck or if there is a severe network problem
Is it really possible that a bottleneck on server’s CPU results in large waits on “SQL*Net more data to client”? Again, I am struggling to build a test case for this.

BTW, the OP had managed to eliminate a few scalar subqueries (as mentioned in subsequent follow-ups), which brought down the disk reads and consistent gets to some extent. Also, in the OP’s version of Oracle, the default SDU size is 2KB, although the OP has not mentioned explicitly whether he was using a default or non-default SDU size.
Besides the unusually large “SQL*Net more data to client” waits, I am struggling to understand whether the NESTED LOOP join used by the CBO is the most appropriate join method and whether the wait event (or the cause for it) is somehow affecting the optimizer’s decisions.

p.s. if the OP does respond, I would point him to your reply as it is definitely much more exhaustive and convincing than what I tried to provide on that thread.

28 09 2010
Charles Hooper

I saw that the OP claimed to have removed some of the excessive scalar subqueries and transformed those into inline views, but I did not see those inline views listed in the FROM clause. So, I did not spend a lot of time investigating the changes. I see that the Predicate Information section of the explain plan no longer shows ROWNUM, for example “20 – filter(ROWNUM=1)”. The scalar subqueries could limit the number of available execution plans, and that is why I made the comment about nested loop joins.

Regarding the test case for CPU competition causing excessive waits for the “SQL*Net more data to client” wait event, you may see different results on different operating system platforms, and even with different network cards. If you want to try testing, build several tables with a couple of columns, such that when the tables are joined and you select from those tables the average row length returned to the client is roughly 4,000 bytes. From a client computer, connect to the database using SQL*Plus, set the array fetch size to 100, disable output to the screen (SET AUTOTRACE TRACEONLY STATISTICS), set timing on, enable a 10046 trace at level 8, then query the tables. Now, apply a load to the database server such that there are 8 processes fighting for every CPU core (or core thread) – you will find examples of simple CPU load generators in the “Expert Oracle Practices” book: http://books.google.com/books?id=xzjw4hoeDegC&pg=PA197 Change the 10046 trace file name, and again from the client computer issue the same query as before. Some servers (in some cases depending on the network card’s offloading capability) will need CPU cycles for generating checksum values for network packets, and also verifying that all packets in the conversation were successfully received. The additional CPU competition should cause an effect on the network transfer speed.
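As a sketch of the kind of simple CPU load generator described above (an illustrative stand-in, not the code from the “Expert Oracle Practices” book), one busy-loop approach in Python might look like this; the procs_per_core=8 default mirrors the 8-processes-per-core suggestion:

```python
import multiprocessing
import os
import time

def burn(stop_at):
    # Busy-loop until the deadline so this process stays runnable on a CPU.
    counter = 0
    while time.time() < stop_at:
        counter += 1
    return counter

def generate_load(seconds, procs_per_core=8):
    # Spawn procs_per_core CPU-bound workers per CPU core (or core thread)
    # and wait for all of them to finish burning CPU time.
    worker_count = (os.cpu_count() or 1) * procs_per_core
    stop_at = time.time() + seconds
    workers = [multiprocessing.Process(target=burn, args=(stop_at,))
               for _ in range(worker_count)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return worker_count

if __name__ == "__main__":
    # A short demonstration run; a real test would run for several minutes
    # while the traced query executes on the client.
    print(generate_load(1, procs_per_core=2))
```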

How do the elapsed times displayed in SQL*Plus compare? If you process the 10046 trace files using TKPROF, how do the statistics change? Do you see a large increase in the “SQL*Net more data to client” wait event?

You are right regarding the default SDU size being 2,000 – I was in a bit of a rush when I wrote the above comment and did not have a chance to re-read the quote for typing or logic errors (in this case, stating 8,000 was a logic error not a typing error).

We have to keep in mind that according to the wait events and the CPU usage, for the first TKPROF summary, we see the following:
13.0195 minutes waiting on the client/network
0.3315 minutes on the CPU
0.4963 minutes on server-side wait events

Even if we were able to completely eliminate the server CPU component and the server-side wait events, we still would have to account for the 13 minutes of client/network time. So, if we spend too much time trying to optimize the SQL statement, it might not make much of a difference in the overall execution time for the SQL statement.
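A quick Python check of that breakdown (using the minute figures quoted above) shows how dominant the client/network component is:

```python
# Where the elapsed time went, per the first TKPROF summary quoted above.
client_network_min = 13.0195   # waits on the client/network
cpu_min = 0.3315               # server CPU time
server_wait_min = 0.4963       # server-side wait events

total_min = client_network_min + cpu_min + server_wait_min
client_pct = client_network_min / total_min * 100

print(round(total_min, 4), round(client_pct, 1))  # ~13.85 minutes, ~94% client/network
```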

28 09 2010
Narendra

Charles,

Many thanks for the pointers on building the test case. I will see what I can do as I doubt I will be able to produce this using only my laptop.
One last question which is nagging me a lot about this thread.
1) Is the time spent on “SQL*Net more data to client” accounted for in the figure displayed under “elapsed time” for the query ?
2) Is the time (and statistics) for scalar subqueries accounted for in the “CPU Time” and “Elapsed Time” figures for the query?

p.s. Ok. those are two questions, not one. Hope you don’t mind ;)

28 09 2010
Charles Hooper

I will claim to not know the answer (actually I cannot remember at the moment), so let’s set up a test case to find out. The test table is created with the following commands:

CREATE TABLE T1 (
  C1 NUMBER,
  C2 VARCHAR2(245),
  C3 VARCHAR2(245),
  C4 VARCHAR2(245),
  C5 VARCHAR2(245),
  PRIMARY KEY (C1));
 
INSERT INTO
  T1
SELECT
  ROWNUM,
  RPAD(TO_CHAR(ROWNUM),245,'A'),
  RPAD(TO_CHAR(ROWNUM),245,'B'),
  RPAD(TO_CHAR(ROWNUM),245,'C'),
  RPAD(TO_CHAR(ROWNUM),245,'D')
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;
 
COMMIT;
 
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)

The above created a table such that if we select a single row it will be slightly less than 1,000 bytes in length (note that I had to make certain that the column values changed from one row to the next so that SQL*Net compression would not alter the test results) which will allow us to try a couple of experiments:
* Using a small array fetch size of 2 we should be able to pack 2 of these rows into a single SDU unit, so we should not have any “SQL*Net more data to client” wait events.
* Using a larger array fetch size of 5,000 we should see plenty of “SQL*Net more data to client” wait events.
* Performing a four-way self-join should cause each row returned to be roughly 4,000 bytes.
* Adding a significant CPU load to the server for roughly 10 minutes to see the impact on the “SQL*Net more data to client” wait events as well as the other wait events – the 10 minute CPU load ends before the SQL statement completes.
* Forcing the server-side of the network connection to 10Mb/s (probably half-duplex because the switch side was not also forced to 10Mb/s full duplex), which should increase the duration of the “SQL*Net more data to client” wait events.
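The array-fetch-size expectations in the list above can be sanity checked with a small Python sketch; the roughly 1,000 byte row length follows from the four VARCHAR2(245) columns, while the 8KB SDU size is an assumption about this particular server's configuration:

```python
ROW_BYTES = 4 * 245 + 20   # four 245-byte columns plus a rough per-row overhead (assumed)
SDU_BYTES = 8192           # assumed session data unit size for this server

def overflows_one_sdu(array_fetch_size, row_bytes=ROW_BYTES, sdu_bytes=SDU_BYTES):
    # "SQL*Net more data to client" waits appear when a single fetch
    # cannot fit into one SDU buffer and more buffers must be sent.
    return array_fetch_size * row_bytes > sdu_bytes

print(overflows_one_sdu(2))     # False: 2 rows (~2,000 bytes) fit in one SDU
print(overflows_one_sdu(5000))  # True: 5,000 rows vastly overflow a single SDU
```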

The test script (as written requires Oracle Database 11.1.0.6 or higher) – note that the results with the array fetch size set to 15 will be discarded:

SET ARRAYSIZE 15
 
SET TIMING ON 
SET AUTOTRACE TRACEONLY STATISTICS

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046TraceTest0';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>FALSE, PLAN_STAT=>'ALL_EXECUTIONS')
 
SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1;
 
SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1;
 
SET ARRAYSIZE 2
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046TraceTest1';

SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1;
 
SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1;

SELECT
  SYSDATE
FROM
  DUAL;

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046TraceTest2';
SET ARRAYSIZE 5000
 
SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1;
 
SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1;
 
SELECT
  SYSDATE
FROM
  DUAL;
 
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

Now the test script when the significant CPU load is applied:

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046TraceTest3';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>FALSE, PLAN_STAT=>'ALL_EXECUTIONS')
SET ARRAYSIZE 5000
 
SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1;
 
SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1;
 
SELECT
  SYSDATE
FROM
  DUAL;
 
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

The test script when the server’s network card was forced to 10Mb/s full duplex:


SET TIMING ON 
SET AUTOTRACE TRACEONLY STATISTICS

ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046TraceTest4';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(WAITS=>TRUE, BINDS=>FALSE, PLAN_STAT=>'ALL_EXECUTIONS')
SET ARRAYSIZE 5000
 
SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1;
 
SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1;
 
SELECT
  SYSDATE
FROM
  DUAL;
 
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

The results… will follow.

28 09 2010
Charles Hooper

'10046TraceTest1'

SET ARRAYSIZE 2
SQL> SELECT
  2    C2,
  3    C3,
  4    C4,
  5    C5
  6  FROM
  7    T1;

100000 rows selected.

Elapsed: 00:00:18.96

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
      57323  consistent gets
       7159  physical reads
          0  redo size
  105000466  bytes sent via SQL*Net to client
     550370  bytes received via SQL*Net from client
      50001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed
 
SQL> SELECT
  2    T11.C2,
  3    T11.C3,
  4    T11.C4,
  5    T11.C5,
  6    T12.C2,
  7    T12.C3,
  8    T12.C4,
  9    T12.C5,
 10    T13.C2,
 11    T13.C3,
 12    T13.C4,
 13    T13.C5,
 14    T14.C2,
 15    T14.C3,
 16    T14.C4,
 17    T14.C5
 18  FROM
 19    T1 T11,
 20    T1 T12,
 21    T1 T13,
 22    T1 T14
 23  WHERE
 24    T11.C1=T12.C1
 25    AND T11.C1=T13.C1
 26    AND T11.C1=T14.C1;

100000 rows selected.

Elapsed: 00:01:51.59

Statistics
---------------------------------------------------
        725  recursive calls
          0  db block gets
      69999  consistent gets
     117065  physical reads
          0  redo size
  400201092  bytes sent via SQL*Net to client
     550370  bytes received via SQL*Net from client
      50001  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

'10046TraceTest2'

SET ARRAYSIZE 5000
SQL> SELECT
  2    C2,
  3    C3,
  4    C4,
  5    C5
  6  FROM
  7    T1;

100000 rows selected.

Elapsed: 00:00:05.84

Statistics
---------------------------------------------------
           0  recursive calls
           0  db block gets
       14483  consistent gets
        7034  physical reads
           0  redo size
    98523058  bytes sent via SQL*Net to client
         590  bytes received via SQL*Net from client
          21  SQL*Net roundtrips to/from client
           0  sorts (memory)
           0  sorts (disk)
      100000  rows processed

SQL> SELECT
  2    T11.C2,
  3    T11.C3,
  4    T11.C4,
  5    T11.C5,
  6    T12.C2,
  7    T12.C3,
  8    T12.C4,
  9    T12.C5,
 10    T13.C2,
 11    T13.C3,
 12    T13.C4,
 13    T13.C5,
 14    T14.C2,
 15    T14.C3,
 16    T14.C4,
 17    T14.C5
 18  FROM
 19    T1 T11,
 20    T1 T12,
 21    T1 T13,
 22    T1 T14
 23  WHERE
 24    T11.C1=T12.C1
 25    AND T11.C1=T13.C1
 26    AND T11.C1=T14.C1;

100000 rows selected.

Elapsed: 00:01:43.60

Statistics
---------------------------------------------------
        725  recursive calls
          0  db block gets
      57870  consistent gets
     117099  physical reads
          0  redo size
  393728682  bytes sent via SQL*Net to client
        590  bytes received via SQL*Net from client
         21  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

'10046TraceTest3'

SET ARRAYSIZE 5000
SQL> SELECT
  2    C2,
  3    C3,
  4    C4,
  5    C5
  6  FROM
  7    T1;

100000 rows selected.

Elapsed: 00:00:10.65

Statistics
---------------------------------------------------
          0  recursive calls
          0  db block gets
      14483  consistent gets
       7074  physical reads
          0  redo size
   98523058  bytes sent via SQL*Net to client
        590  bytes received via SQL*Net from client
         21  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed
 
SQL> SELECT
  2    T11.C2,
  3    T11.C3,
  4    T11.C4,
  5    T11.C5,
  6    T12.C2,
  7    T12.C3,
  8    T12.C4,
  9    T12.C5,
 10    T13.C2,
 11    T13.C3,
 12    T13.C4,
 13    T13.C5,
 14    T14.C2,
 15    T14.C3,
 16    T14.C4,
 17    T14.C5
 18  FROM
 19    T1 T11,
 20    T1 T12,
 21    T1 T13,
 22    T1 T14
 23  WHERE
 24    T11.C1=T12.C1
 25    AND T11.C1=T13.C1
 26    AND T11.C1=T14.C1;

100000 rows selected.

Elapsed: 00:09:18.73

Statistics
---------------------------------------------------
        725  recursive calls
          0  db block gets
      57870  consistent gets
     117190  physical reads
          0  redo size
  393728682  bytes sent via SQL*Net to client
        590  bytes received via SQL*Net from client
         21  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

'10046TraceTest4'
(still executing)

The TKPROF summaries:
'10046TraceTest1' (array fetch size set to 2)

SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    50001      1.70       3.81       7159      57323          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    50003      1.70       3.81       7159      57323          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  TABLE ACCESS FULL T1 (cr=57323 pr=7159 pw=0 time=31246 us cost=1817 size=98400000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   50001        0.00          0.05
  db file scattered read                        362        0.04          2.69
  SQL*Net message from client                 50001        0.02         10.63
  SQL*Net more data to client                 49999        0.00          0.53
  db file sequential read                        14        0.00          0.02
********************************************************************************

SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch    50001     10.34      94.31     117065      69999          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    50004     10.34      94.31     117065      69999          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  HASH JOIN  (cr=69999 pr=117065 pw=88908 time=93735 us cost=25134 size=395200000 card=100000)
 100000   TABLE ACCESS FULL T1 (cr=14466 pr=7026 pw=0 time=0 us cost=1817 size=98800000 card=100000)
 100000   HASH JOIN  (cr=55533 pr=71444 pw=50747 time=156238 us cost=15395 size=296400000 card=100000)
 100000    TABLE ACCESS FULL T1 (cr=14466 pr=7040 pw=0 time=31242 us cost=1817 size=98800000 card=100000)
 100000    HASH JOIN  (cr=41067 pr=34520 pw=21142 time=187504 us cost=7622 size=197600000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=14466 pr=7040 pw=0 time=0 us cost=1817 size=98800000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=26601 pr=7051 pw=0 time=31255 us cost=1817 size=98800000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   50002        0.00          0.04
  SQL*Net message from client                 50002        0.01         15.96
  db file sequential read                      6639        0.12         10.69
  direct path write temp                       2868        0.27          6.32
  db file scattered read                        787        0.19         16.87
  direct path read temp                        2868        0.21         49.69
  SQL*Net more data to client                150001        0.00          1.58

'10046TraceTest2' (array fetch size set to 5,000)

SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       21      2.09       5.06       7034      14483          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       23      2.09       5.06       7034      14483          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  TABLE ACCESS FULL T1 (cr=14483 pr=7034 pw=0 time=62489 us cost=1817 size=98400000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      21        0.00          0.00
  db file sequential read                       166        0.03          0.16
  SQL*Net message from client                    21        0.04          0.68
  SQL*Net more data to client                 48350        0.01          1.34
  db file scattered read                        910        0.04          2.31
********************************************************************************

SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch       21      9.73     101.03     117099      57870          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       24      9.73     101.03     117099      57870          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  HASH JOIN  (cr=57870 pr=117099 pw=88908 time=62476 us cost=25134 size=395200000 card=100000)
 100000   TABLE ACCESS FULL T1 (cr=14466 pr=7049 pw=0 time=0 us cost=1817 size=98800000 card=100000)
 100000   HASH JOIN  (cr=43404 pr=71455 pw=50747 time=124978 us cost=15395 size=296400000 card=100000)
 100000    TABLE ACCESS FULL T1 (cr=14466 pr=7047 pw=0 time=31246 us cost=1817 size=98800000 card=100000)
 100000    HASH JOIN  (cr=28938 pr=34524 pw=21142 time=218748 us cost=7622 size=197600000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=14466 pr=7048 pw=0 time=0 us cost=1817 size=98800000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=14472 pr=7047 pw=0 time=0 us cost=1817 size=98800000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      22        0.00          0.00
  SQL*Net message from client                    22        0.18          2.59
  db file scattered read                        964        0.53         24.14
  db file sequential read                         7        0.07          0.13
  direct path write temp                       2868        0.22          9.46
  direct path read temp                        2868        0.56         54.31
  SQL*Net more data to client                193273        0.01          5.66

'10046TraceTest3' (array fetch size set to 5,000 with significant server CPU load)

SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       21      2.21       9.89       7074      14483          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       23      2.21       9.89       7074      14483          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  TABLE ACCESS FULL T1 (cr=14483 pr=7074 pw=0 time=62505 us cost=1817 size=98400000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      21        0.00          0.00
  db file scattered read                        243        0.30          5.29
  SQL*Net message from client                    21        0.06          0.70
  SQL*Net more data to client                 48350        0.23          2.58
  db file sequential read                         2        0.00          0.00
********************************************************************************

SELECT
  T11.C2,
  T11.C3,
  T11.C4,
  T11.C5,
  T12.C2,
  T12.C3,
  T12.C4,
  T12.C5,
  T13.C2,
  T13.C3,
  T13.C4,
  T13.C5,
  T14.C2,
  T14.C3,
  T14.C4,
  T14.C5
FROM
  T1 T11,
  T1 T12,
  T1 T13,
  T1 T14
WHERE
  T11.C1=T12.C1
  AND T11.C1=T13.C1
  AND T11.C1=T14.C1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.03          0          0          0           0
Fetch       21      7.56     555.90     117190      57870          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       23      7.57     555.93     117190      57870          0      100000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
 100000  HASH JOIN  (cr=57870 pr=117190 pw=88908 time=124966 us cost=25134 size=395200000 card=100000)
 100000   TABLE ACCESS FULL T1 (cr=14466 pr=7078 pw=0 time=250001 us cost=1817 size=98800000 card=100000)
 100000   HASH JOIN  (cr=43404 pr=71517 pw=50747 time=187452 us cost=15395 size=296400000 card=100000)
 100000    TABLE ACCESS FULL T1 (cr=14466 pr=7066 pw=0 time=281264 us cost=1817 size=98800000 card=100000)
 100000    HASH JOIN  (cr=28938 pr=34567 pw=21142 time=468776 us cost=7622 size=197600000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=14466 pr=7085 pw=0 time=0 us cost=1817 size=98800000 card=100000)
 100000     TABLE ACCESS FULL T1 (cr=14472 pr=7053 pw=0 time=31249 us cost=1817 size=98800000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      21        0.00          0.00
  db file scattered read                       1034        0.39        138.96
  direct path write temp                       2868        0.34        209.45
  db file sequential read                        18        0.28          1.94
  direct path read temp                        2868        0.37        177.05
  SQL*Net more data to client                193273        0.04          7.19
  SQL*Net message from client                    21        0.20          2.74

'10046TraceTest4' (array fetch size set to 5,000 with significant server CPU load) – the query still had not completed, so I aborted it.

SELECT
  C2,
  C3,
  C4,
  C5
FROM
  T1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       10      0.75    2055.62        638       5970          0       41665
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.75    2055.62        638       5970          0       41665

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
  41666  TABLE ACCESS FULL T1 (cr=5970 pr=638 pw=0 time=0 us cost=1817 size=98400000 card=100000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      10        0.00          0.00
  SQL*Net message from client                     9        1.96          5.75
  SQL*Net more data to client                 20151        2.51       2054.52
  db file scattered read                         21        0.02          0.24
  SQL*Net break/reset to client                  10        0.00          0.00

The above tkprof summary from the aborted execution should answer your question about whether the “SQL*Net more data to client” wait event is factored into the elapsed time: of the 2,055.62 seconds of fetch elapsed time, 2,054.52 seconds are attributed to that single wait event.
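As a quick check (a sketch added for illustration; the figures are copied from the aborted-query tkprof summary above), nearly all of the fetch elapsed time is accounted for by the CPU time plus the recorded in-call waits, with the “SQL*Net more data to client” event dominating:

```python
# Figures copied from the aborted-query tkprof summary above (seconds).
fetch_elapsed = 2055.62         # Fetch row, elapsed column
fetch_cpu = 0.75                # Fetch row, cpu column
more_data_to_client = 2054.52   # 'SQL*Net more data to client' total waited
scattered_read = 0.24           # 'db file scattered read' total waited

# 'SQL*Net message from client' time falls between fetch calls, so it is
# not part of the call elapsed time and is excluded from the sum.
accounted = fetch_cpu + more_data_to_client + scattered_read
print(f"{accounted:.2f}s of {fetch_elapsed:.2f}s elapsed accounted for")
print(f"'more data to client' share: {more_data_to_client / fetch_elapsed:.1%}")
```

If the “SQL*Net more data to client” waits were not included in the elapsed time, the accounted-for total would exceed the elapsed figure by roughly 2,000 seconds; instead the two agree to within a fraction of a second.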

This particular server has a network card that supports CPU offloading. From the raw '10046TraceTest3' trace file (array fetch size set to 5,000 with significant server CPU load), near the start:

PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3617692013,tim=3575876632258
EXEC #2:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3617692013,tim=3575876632258
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=101278 tim=3575876641354
WAIT #2: nam='db file scattered read' ela= 15862 file#=5 block#=3103731 blocks=5 obj#=101278 tim=3575876657387
FETCH #2:c=0,e=23515,p=5,cr=4,cu=0,mis=0,r=1,dep=0,og=1,plh=3617692013,tim=3575876655773
WAIT #2: nam='SQL*Net message from client' ela= 3099 driver id=1413697536 #bytes=1 p3=0 obj#=101278 tim=3575876660642
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=101278 tim=3575876660702
WAIT #2: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2259 p3=0 obj#=101278 tim=3575876660752
WAIT #2: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876660799
WAIT #2: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876660882
WAIT #2: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876660948
WAIT #2: nam='SQL*Net more data to client' ela= 27 driver id=1413697536 #bytes=2224 p3=0 obj#=101278 tim=3575876661007
WAIT #2: nam='SQL*Net more data to client' ela= 27 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661067
WAIT #2: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661132
WAIT #2: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661191
WAIT #2: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=2224 p3=0 obj#=101278 tim=3575876661236
WAIT #2: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661321
WAIT #2: nam='SQL*Net more data to client' ela= 20 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661386
WAIT #2: nam='SQL*Net more data to client' ela= 26 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661445
WAIT #2: nam='SQL*Net more data to client' ela= 27 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661505
WAIT #2: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=2224 p3=0 obj#=101278 tim=3575876661663
WAIT #2: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661709
WAIT #2: nam='SQL*Net more data to client' ela= 26 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876661783
WAIT #2: nam='db file scattered read' ela= 11300 file#=5 block#=3270808 blocks=8 obj#=101278 tim=3575876673181

Near the end (including the final fetch call):

WAIT #3: nam='direct path read temp' ela= 8 file number=202 first dba=22016 block cnt=31 obj#=101278 tim=3576445836527
WAIT #3: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=1969 p3=0 obj#=101278 tim=3576445836615
WAIT #3: nam='SQL*Net more data to client' ela= 35 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445836670
WAIT #3: nam='SQL*Net more data to client' ela= 11 driver id=1413697536 #bytes=1969 p3=0 obj#=101278 tim=3576445836706
WAIT #3: nam='SQL*Net more data to client' ela= 11 driver id=1413697536 #bytes=2214 p3=0 obj#=101278 tim=3576445836737
WAIT #3: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=1969 p3=0 obj#=101278 tim=3576445836773
WAIT #3: nam='SQL*Net more data to client' ela= 28 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445836821
WAIT #3: nam='SQL*Net more data to client' ela= 15 driver id=1413697536 #bytes=2215 p3=0 obj#=101278 tim=3576445836861
WAIT #3: nam='SQL*Net more data to client' ela= 9 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445836891
WAIT #3: nam='SQL*Net more data to client' ela= 29 driver id=1413697536 #bytes=1969 p3=0 obj#=101278 tim=3576445836943
WAIT #3: nam='SQL*Net more data to client' ela= 13 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445836977
WAIT #3: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=2215 p3=0 obj#=101278 tim=3576445837020
WAIT #3: nam='SQL*Net more data to client' ela= 30 driver id=1413697536 #bytes=1969 p3=0 obj#=101278 tim=3576445837073
WAIT #3: nam='SQL*Net more data to client' ela= 12 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445837106
WAIT #3: nam='SQL*Net more data to client' ela= 10 driver id=1413697536 #bytes=2215 p3=0 obj#=101278 tim=3576445837139
WAIT #3: nam='SQL*Net more data to client' ela= 29 driver id=1413697536 #bytes=1968 p3=0 obj#=101278 tim=3576445837188
FETCH #3:c=312500,e=968742,p=2232,cr=0,cu=0,mis=0,r=4999,dep=0,og=1,plh=3623541397,tim=3576445807185

In this case it does not appear that the duration of the “SQL*Net more data to client” wait events was affected by the significant CPU load at the start of the query execution. But what about the difference in the tim= values – is there a greater amount of “missing” time when the server was very busy? I should mention that the server was running on Windows, which probably explains the increased CPU seconds displayed in the tkprof summary when the server’s CPUs were heavily loaded (on Windows, time spent in the run queue accumulates as if the process were still on the CPU).
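One way to chase the “missing” time question is to parse the raw trace file directly. The following is a minimal sketch (not from the original article) that assumes the 11g-style WAIT line format shown in the excerpts above: summing the ela= values and comparing that against the span of the tim= values gives a rough figure for time not explained by the recorded waits (CPU time plus any genuinely missing time):

```python
import re

# Matches the WAIT lines in the raw trace excerpts above, e.g.:
# WAIT #2: nam='SQL*Net more data to client' ela= 17 ... tim=3575876660752
wait_re = re.compile(r"nam='([^']+)' ela= (\d+).*tim=(\d+)")

def wait_time_vs_span(trace_lines):
    """Return (total_wait_us, tim_span_us) for the WAIT lines in a trace.

    tim_span - total_wait approximates the time not explained by the
    recorded waits over the interval covered by these lines."""
    total_ela = 0
    tims = []
    for line in trace_lines:
        if not line.startswith("WAIT"):
            continue
        m = wait_re.search(line)
        if m:
            total_ela += int(m.group(2))   # ela= value, microseconds
            tims.append(int(m.group(3)))   # tim= value, microseconds
    span = (max(tims) - min(tims)) if tims else 0
    return total_ela, span

# Two WAIT lines copied from the trace excerpt above.
sample = [
    "WAIT #2: nam='SQL*Net more data to client' ela= 17 driver id=1413697536 #bytes=2259 p3=0 obj#=101278 tim=3575876660752",
    "WAIT #2: nam='SQL*Net more data to client' ela= 14 driver id=1413697536 #bytes=1978 p3=0 obj#=101278 tim=3575876660799",
]
ela, span = wait_time_vs_span(sample)
print(ela, span)
```

Running the function over the full trace file (one call per cursor, between each pair of adjacent FETCH lines) would show whether the tim= gaps grow disproportionately to the recorded ela= values when the server is busy. Note that trace line formats vary between Oracle releases, so the regular expression may need adjusting.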

28 09 2010
Narendra

Charles,

Awesome!!!
Thanks a zillion for taking time to answer my question, along with a complete test case.
