April 8, 2010
I recently encountered a discussion thread (dbaforums.org/oracle/index.php?showtopic=19435) that asked about how to determine whether or not a SQL statement executed in a program uses indexes or full table scans – the database in question is an unspecified release version of Oracle 10g. One of the responders in that thread pointed to an undated article (praetoriate.com/teas_prae_util11.htm) about the AUTOTRACE functionality of SQL*Plus, while other responders suggested tracing the program’s execution and then using TKPROF.
Please read the article, keeping in mind that the question concerns some release version of Oracle 10g, and see if you are able to answer the following true or false questions. State why you believe that the question is true, or why you believe that the question is false. Any answers that attempt to utilize logical fallacies will be scored as incorrect.
1. SQL*Plus’ AUTOTRACE feature performs statement tracing.
2. Using SQL*Plus’ AUTOTRACE feature requires a PLAN_TABLE in the schema of the user using AUTOTRACE.
3. AUTOTRACE retrieves the actual execution plan, along with the associated statistics for the execution.
4. When a large number of rows will be returned by SQL*Plus, the AUTOTRACE TRACEONLY feature should be used.
5. For SQL performance issues, AUTOTRACE is the first tool of choice for investigation.
6. Performance issues that are present when SQL is executed in an application will also be present when the SQL statement is executed in SQL*Plus with AUTOTRACE enabled.
——————————————————————————-
May 25, 2010
The following screen capture is completely unrelated to this blog article, but is relevant to the comments about the blog redesign. This is how the redesigned blog appears on a 16:9 netbook, with 1024 horizontal pixels of resolution:
I had to scroll the window slightly to the right to see the entire content portion of the blog, while the “floating” navigation section is hidden from view. Internet Explorer 8.0 offers a zoom feature near the bottom right of the IE window – that feature (or a similar one in other browsers) might be a solution if your monitor does not offer 1024 pixels of horizontal resolution.
————————
Original Layout (4:3 monitor):
Modified Layout (4:3 monitor):
1) FALSE. AUTOTRACE displays the explain plan (NOT the actual executed plan) and some execution statistics.
2) FALSE. As of 10g a default public PLAN_TABLE exists.
3) FALSE. See (1).
4) TRUE. It prevents flooding the output with rows.
5) FALSE. Better: SQL trace and DBMS_XPLAN.DISPLAY_CURSOR.
6) TRUE, but the “environment” should be the same.
Roberto,
Thanks for participating. Your responses make me wonder if this set of questions were too easy.
With a little bit of “thinking outside the box” reasoning someone might still be able to answer questions #4 and #6 as false with sufficient justification. To help people head in that direction – for #6 would it matter if literals or bind variables were used, what about adding an extra whitespace character?
“this set of questions were too easy” … not for Don 😉
The statement tested should be the same: same literals, same bind variables. Also, explain plan (which AUTOTRACE uses) allows bind variables but ignores bind variable peeking.
An extra whitespace character? Where? I don’t think the performance changes.
Take this opportunity to ask you a question.
How do you simulate (SQL*Plus and AUTOTRACE) a query with bind variables of DATE type?
SQL> variable d date
Usage: VAR[IABLE] [ [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
NVARCHAR2 (n) | CLOB | NCLOB | REFCURSOR |
BINARY_FLOAT | BINARY_DOUBLE ] ]
Roberto,
I am impressed.
Extra whitespace, or changes in the position of whitespace, will cause a hard parse, which might produce a different execution plan than that used by the application, even if the “environment” (I am 99.9% sure that you and I mean the same thing by environment) is the same. Consider the nightly stale statistics collecting process that is enabled by default starting in Oracle 10g R1 – there is typically a delay of about 5 hours before the new statistics will force hard parses when previously hard parsed SQL statements are re-executed. If there is no whitespace change, you could see the same execution plan as was seen in the application. If there is a whitespace change a hard parse will be performed and the new statistics will take immediate effect for that SQL statement. OK, that example works for SQL statements containing bind variables and literals. When you consider that bind variable peeking is enabled by default (if I remember correctly, starting in Oracle 9i R1), any change in the whitespace will force a hard parse, causing bind variable peeking to potentially change the execution plan.
Regarding a bind variable of DATE datatype, I do not have an answer. A Google search found this message thread from the Oracle-L list:
http://www.freelists.org/post/oracle-l/Date-bind-variable-being-peeked-but-not-used-for-cardinality-estimate,2
“Other point, you mention using a bind variable with SQL*Plus. That means that you have used a VARCHAR2, and converted it to a date in your query, since SQL*Plus has no DATE variable (even with 11gR2, I have just checked). In the program, it’s quite likely that the date value is bound as a date, it can make a difference too.”
Let’s set up a test case (note: immediately after the word LEVEL there should be a “less than” sign, but I could not include that character in a comment; 😀 is a :d without a space in between – blogging software is interesting):
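The original code section was lost when this page was captured; below is a hedged sketch of what the missing test-table script may have looked like. The table name T1 is mentioned later in the thread, but the column names, date range, and row-generation technique are assumptions, so treat this as illustrative rather than the original script:

```sql
-- Hypothetical reconstruction: a 100,000 row test table with unique DATE
-- values and a primary key index on the date column (the later discussion
-- mentions an index range scan on the primary key with date binds).
CREATE TABLE T1 (
  C1 DATE,
  C2 NUMBER,
  C3 VARCHAR2(100),
  PRIMARY KEY (C1));

INSERT INTO T1
SELECT
  TO_DATE('01-JAN-2010', 'DD-MON-YYYY') + LEVEL / 100,
  LEVEL,
  RPAD('A', 100, 'A')
FROM
  DUAL
CONNECT BY
  LEVEL <= 100000;   -- the "less than" sign mentioned in the note above

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T1', CASCADE=>TRUE)
```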
The above created a test table with 100,000 rows and an index on the primary key column.
Continuing:
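The next code section is also missing; judging from the description that follows, it may have looked something like this sketch (the bind names :d1 and :d2 appear later in the thread, while the date values and TRACEFILE_IDENTIFIER are assumptions):

```sql
-- Hypothetical reconstruction: two VARCHAR2 bind variables holding date
-- strings, a 10132 event trace (dumps the execution plan on a hard parse),
-- and the AUTOTRACE explain-only feature.
VARIABLE d1 VARCHAR2(20)
VARIABLE d2 VARCHAR2(20)

EXEC :d1 := '01-JAN-2010'
EXEC :d2 := '25-DEC-2010'

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'hard_parse_test';
ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT FOREVER, LEVEL 1';

SET AUTOTRACE TRACEONLY EXPLAIN
```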
The above commands created two bind variables of type VARCHAR2, set dates in the bind variables, enabled a 10132 trace to generate a trace file on a hard parse, and turned on the AUTOTRACE explain feature.
Continuing:
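The test query itself did not survive the extraction either; a sketch under the same assumptions (the column names are hypothetical, and the binds are the two VARCHAR2 variables that the surrounding text describes):

```sql
-- Hypothetical test query: the VARCHAR2 binds :d1 and :d2 are implicitly
-- converted to DATE when compared with the DATE column in the test table.
SELECT
  SUM(C2)
FROM
  T1
WHERE
  C1 BETWEEN :d1 AND :d2;
```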
The displayed execution plan, possibly easy to predict:
The above shows, at least for AUTOTRACE, that Oracle did not need to perform a datatype conversion based on the predicate information for line 3 of the plan, but of course the predicate information for line 1 would not be the same as would be found when the SQL statement is executed by the application. The VARCHAR2 bind variables also did not prevent the optimizer from selecting an index range scan using the primary key index.
What is found in the 10132 trace file?
Notice the different execution plan (a full table scan, rather than an index range scan as suggested by AUTOTRACE) – Oracle was able to peek at the bind variables to generate a different execution plan, even though those bind variables were VARCHAR2, rather than DATE bind variables. If you look closely, you will see that there are two different SQL_ID values in the 10132 trace file.
If we wanted to, we could also check the bind variable types used during the hard parse like this:
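The two statements referenced here were lost in extraction; they were presumably queries against the bind metadata and bind capture views, something like this sketch (the SQL_ID placeholder and join are assumptions):

```sql
-- Hypothetical check of the bind datatypes recorded at hard parse time
-- (in V$SQL_BIND_METADATA, DATATYPE 1 = VARCHAR2 and 12 = DATE).
SELECT
  SBM.POSITION,
  SBM.DATATYPE,
  SBM.BIND_NAME
FROM
  V$SQL S,
  V$SQL_BIND_METADATA SBM
WHERE
  S.SQL_ID = '&sql_id'              -- the SQL_ID from the 10132 trace file
  AND S.CHILD_ADDRESS = SBM.ADDRESS;

-- Hypothetical check of the captured bind values and datatype names.
SELECT
  SBC.POSITION,
  SBC.DATATYPE_STRING,
  SBC.VALUE_STRING
FROM
  V$SQL_BIND_CAPTURE SBC
WHERE
  SBC.SQL_ID = '&sql_id';
```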
In this case, the output of both of those SQL statements was:
But that likely would not have been the case if the bind variables were declared as something other than VARCHAR2.
*****
Based on Roberto’s answers to this True Or False quiz, I suspect that he is familiar with most, if not all of the above. It is simply stated for anyone else who happens to stumble across this blog article.
Of course, any change in the query text potentially changes the execution plan… but, for me, the topic is making performance tests with AUTOTRACE, not analyzing plan instability problems, perhaps due to bind variable peeking.
However, suppose you want to test the query executed by an application (drag & drop from V$SQL.SQL_FULLTEXT), and the query contains bind variables of DATE type. You want to use AUTOTRACE. AUTOTRACE is not very reliable, but it is easy for quick & dirty tests.
How would you do it?
I use a program that I custom developed that allows me to submit SQL statements using bind variables, as shown in the first picture of this blog article (it will translate bind variables from a 10046 trace, or they may be manually specified):
https://hoopercharles.wordpress.com/2010/02/11/automated-dbms_xplan-trace-and-send-to-excel/
The “Automated DBMS_XPLAN, Trace, and Send to Excel” blog article includes a VBS script (run from a Windows client PC) that mostly reproduces the feature that is in my custom developed program, minus the handling of bind variables. The script is ready for someone else to modify in order to support bind variables, and there are samples of scripts and Excel macros that use bind variables in other blog articles on my site.
You are very smart and kind, but I would not use graphical tools. I’m looking for something that doesn’t need a specific application runtime/environment.
I often use SQL trace on a PL/SQL anonymous block like this (to avoid output and variable declarations):
declare
b1 date;
b2 varchar2(10);
…
begin
b1 := …
b2 := …
for x in ( — begin query
select …
) — end query
loop
null;
end loop;
end;
/
Unfortunately this approach does not generate a pure SQL statement in V$SQL or an execution plan in V$SQL_PLAN
… and SQL*Plus variables have the above-mentioned limits. 😦
I made an error: it is not true that “this approach does not generate a pure SQL statement in v$sql and execution plan in v$sql_plan” – I forgot that SQL statements in a PL/SQL block become upper case (and comments vanish), so I didn’t find my SQL statement in V$SQL… what a shame!
However, what do you think about this quick method to test queries with bind variables, from SQL*Plus prompt?
Hi Charles,
Nothing quite related to the blog: the column names you used in your test cases show up as smileys 🙂
Cheers
Andy,
The smileys with the big grin 😀 are actually bind variables ( : D ) – I decided to leave them in the comment section just to see if anyone noticed.
Roberto,
I like your idea, but I think that there might still be risks with obtaining a different execution plan due to the changes in the SQL statement and changes in bind variable names – but your solution does address the issue of the wrong bind variable data type.
Using the same sample table as before:
We can run a test that should result in a full table scan due to wide range of values. If we execute the following script:
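The script referenced here was another casualty of the extraction; the trace-file names hard_parse_test and hard_parse_test2 appear in the following sentences, so the script may have resembled this sketch (the query shape, bind values, and table/column names are assumptions):

```sql
-- Hypothetical reconstruction of the two-part test: the same query run once
-- inside a PL/SQL block (DATE variables become :B1/:B2 binds) and once
-- directly in SQL*Plus (VARCHAR2 binds), each with its own trace identifier.
ALTER SESSION SET EVENTS '10132 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'hard_parse_test';

DECLARE
  d1 DATE := TO_DATE('01-JAN-2010', 'DD-MON-YYYY');
  d2 DATE := TO_DATE('25-DEC-2010', 'DD-MON-YYYY');
BEGIN
  FOR x IN (SELECT SUM(C2) S FROM T1 WHERE C1 BETWEEN d1 AND d2) LOOP
    NULL;
  END LOOP;
END;
/

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'hard_parse_test2';

VARIABLE d1 VARCHAR2(20)
VARIABLE d2 VARCHAR2(20)
EXEC :d1 := '01-JAN-2010'
EXEC :d2 := '25-DEC-2010'

SELECT SUM(C2) S FROM T1 WHERE C1 BETWEEN :d1 AND :d2;
```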
The hard_parse_test trace file includes the following:
The hard_parse_test2 trace file includes the following:
Both methods produced an execution plan using a full table scan, and both methods will produce an execution plan using an index range scan if d2 is set to ’25-APR-2010′. But, take a close look at the trace file’s SQL statement for the PL/SQL execution – anything really unusual about it, other than table t1 appearing as T1? So, the PL/SQL method will force a hard parse due to the change in the SQL statement, and the direct SQL*Plus method will force a hard parse due to the change in the bind variable data type (from DATE as executed in the application to VARCHAR2 as executed in SQL*Plus).
Using TO_DATE to overcome the limits of DATE type bind variables in SQL*Plus, however, is dangerous (!) because implicit conversions are sensitive to the parameter NLS_DATE_FORMAT:
Roberto,
Excellent point regarding the demonstration that I provided, which relies on implicit conversions. This:
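The snippet referenced here did not survive the extraction; it was presumably a bind assignment along these lines (the date value and format mask are assumptions):

```sql
-- TO_DATE produces a DATE value; assigning it to the VARCHAR2 bind :d1
-- implicitly converts it back to a string using the session's
-- NLS_DATE_FORMAT.
EXEC :d1 := TO_DATE('01-JAN-2010', 'DD-MON-YYYY')
```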
Explicitly converts a string of characters of a known date format into a DATE data type, which is then implicitly converted into a VARCHAR2, hopefully in the default NLS format, so that it may be stored in the d1 variable. When the SQL statement is executed, the value in the VARCHAR2 variable is implicitly converted back into a DATE, hopefully using the default NLS format. I think that the solution works in this case because the two implicit conversions (to a VARCHAR2 and back to a DATE) both use the default NLS format. As long as the default NLS format is not changed in between the two implicit conversions, I believe that the demonstration works. Slightly extending your demonstration:
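The extended demonstration itself is missing from the captured page; based on the description that follows (and the ‘YY-MON-DD’ setting mentioned further down the thread), it may have looked roughly like this sketch:

```sql
-- Hypothetical reconstruction: change the NLS_DATE_FORMAT, re-assign the
-- binds to the same date values, and re-execute the query to confirm the
-- same COUNT is returned.
ALTER SESSION SET NLS_DATE_FORMAT = 'YY-MON-DD';

EXEC :d1 := TO_DATE('01-JAN-2010', 'DD-MON-YYYY')
EXEC :d2 := TO_DATE('25-DEC-2010', 'DD-MON-YYYY')

SELECT COUNT(*) FROM T1 WHERE C1 BETWEEN :d1 AND :d2;
```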
As the above shows, if the values of the d1 and d2 bind variables are re-assigned (to the same date value) after the NLS format change, the same COUNT is returned. When I saw the 10132 trace, I started wondering about the impact of the implicit data type conversions, and the possible data retrieval errors that the implicit conversion might cause – maybe there are still problems?
For the record, I believe that implicit data type conversions should be avoided where possible. That is why I used this:
Rather than this:
But I probably should have done this instead:
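The three snippets referenced above were all lost in extraction; they may have looked something like this sketch (the date values and format masks are assumptions):

```sql
-- What was used: explicit string-to-DATE conversion, with an implicit
-- DATE-to-VARCHAR2 conversion on assignment to the bind.
EXEC :d1 := TO_DATE('01-JAN-2010', 'DD-MON-YYYY')

-- Rather than: a bare string, leaving the string-to-DATE conversion in the
-- WHERE clause fully implicit.
EXEC :d1 := '01-JAN-2010'

-- What probably should have been used: explicit conversions in both
-- directions, so nothing depends on NLS_DATE_FORMAT at assignment time.
EXEC :d1 := TO_CHAR(TO_DATE('01-JAN-2010', 'DD-MON-YYYY'), 'DD-MON-YYYY')
```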
Is it just me, or does this output appear to be strange 🙂 :
I didn’t understand your smile… the last setting for nls_date_format was ‘YY-MON-DD’, hence that output is right.
In Italy we say: “boh!”.
However you’re right, of course!… two implicit conversions… It works.
>>> “maybe there are still problems?”
I think it is not possible to share the child cursors:
The top_level_rpi_cursor mismatch is correlated with the implicit conversion, but I don’t know exactly what it means.
The smile – it is sometimes hard to tell when I am straight-faced serious, and when I intend for something to be interpreted with a sense of humor. In this case, the smile was in the comment to indicate that I knew why the date was showing with the year and day of month switched. The smile might force someone to stop and think about what is displayed – because the displayed date does not conform to the typical display of Oracle’s dates.
You are correct that the two methods would each have their own child cursor for the SQL statement – and neither of the methods would likely share the same child cursor used by the application, as mentioned in my earlier comment.
It is interesting that only the TOP_LEVEL_RPI_CURSOR column of V$SQL_SHARED_CURSOR is set to Y (verified on Oracle Database 10.2.0.4 and 11.1.0.7) when your test is executed. I expected the BIND_MISMATCH to be set to Y because this shows that the bind variable data types differ:
There is only a single document in Metalink that describes the TOP_LEVEL_RPI_CURSOR column, and that description does not seem to apply to this case. The documentation’s description of the column does not provide any detail for that column either: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2124.htm
Maybe someone else reading this blog article knows why the TOP_LEVEL_RPI_CURSOR column is set to Y and not the BIND_MISMATCH column. Is it maybe that the parsing environment is different?
If we do this:
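The code section that followed is missing; given that the next sentence reports a BIND_MISMATCH with TOP_LEVEL_RPI_CURSOR still at N, it presumably re-executed the identical SQL text directly in SQL*Plus with a changed bind definition. One way that can be provoked, sketched under that assumption (the table, binds, and sizes are hypothetical):

```sql
-- Re-declare the binds with a much larger maximum length (crossing one of
-- the internal bind-length buckets) and re-execute the identical SQL text,
-- which should create a new child cursor with BIND_MISMATCH = 'Y'.
VARIABLE d1 VARCHAR2(2000)
VARIABLE d2 VARCHAR2(2000)
EXEC :d1 := '01-JAN-2010'
EXEC :d2 := '25-DEC-2010'

SELECT SUM(C2) S FROM T1 WHERE C1 BETWEEN :d1 AND :d2;

-- Examine the sharing criteria for the statement's child cursors.
SELECT
  CHILD_NUMBER,
  BIND_MISMATCH,
  TOP_LEVEL_RPI_CURSOR
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID = '&sql_id';
```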
Now we see the bind mismatch, as expected, and the TOP_LEVEL_RPI_CURSOR column is still set to N. Interesting.
Maybe it means Remote Procedure Invocation, i.e. TOP_LEVEL_RPI_CURSOR changes when cursors are opened in different environments (SQL vs. PL/SQL).
PS. nonshared.sql is Tanel Poder’s script.
Oops. Something odd happened with sourcecode tag. Here is output without it:
SQL> var x varchar2(10)
SQL> exec :x := 'asdf'
PL/SQL procedure successfully completed.
SQL> select * from t1 where dummy = :x;
no rows selected
SQL> select sql_id from v$sql where sql_text = 'select * from t1 where dummy = :x';
SQL_ID
————-
b8vhmyrkpu8ah
SQL> begin
  2    execute immediate 'select * from t1 where dummy = :x' using 'asdf';
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> @nonshared b8vhmyrkpu8ah %
SQL_ID : b8vhmyrkpu8ah
ADDRESS : 39355400
CHILD_ADDRESS : 393552BC
CHILD_NUMBER : 0
—————–
SQL_ID : b8vhmyrkpu8ah
ADDRESS : 39355400
CHILD_ADDRESS : 39350228
CHILD_NUMBER : 1
TOP_LEVEL_RPI_CURSOR : Y
—————–
PL/SQL procedure successfully completed.
Maybe TOP_LEVEL_RPI_CURSOR is due to the SQL recursive level/depth.
Timur,
Thanks for the help. I searched Google using the keywords Remote Procedure Invocation and TOP_LEVEL_RPI_CURSOR. It eventually occurred to me that “Remote” should actually be “Recursive” – Recursive Procedure Invocation. I now suspect that this might be an indication that if a 10046 trace were enabled, a different dep= value would be printed on a “PARSING IN CURSOR” line. That makes me wonder – what if the same SQL statement were executed at dep=0, dep=1, dep=2 (a trigger causing another trigger to execute), etc. – would we have a child cursor in the library cache for each “dep” at which the SQL statement is executed?
The search also led me to this AskTom thread:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1205168148688
A slightly modified version of one of the test scripts in that AskTom thread:
And then, reviewing the library cache contents, Tom targeted the TOP_LEVEL_RPI_CURSOR column of V$SQL_SHARED_CURSOR:
Something new – at least new to me.
Charles, in your two “hard parse test” traces the “predicate information” contains the TO_DATE function (due to the implicit conversion), but the PHV is the same in both cases. I now find that the PHV only depends on the “row source operation”.
Thanks for your time
Based on what I have seen, I believe that your statement is correct. If the constants (literals) change in a SQL statement, as long as the plan operations remain the same, the PLAN_HASH should remain the same. Completely removing the WHERE clause could also cause the same PLAN_HASH to be calculated.
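The trace excerpts below did not survive the page extraction, so as a hedged alternative, the same point can be checked from V$SQL without a trace file; in this sketch the table and column names are assumptions:

```sql
-- Two statements that differ only in a literal: the plan operations are
-- identical, so V$SQL should report the same PLAN_HASH_VALUE for both.
SELECT SUM(C2) FROM T1 WHERE C1 >= TO_DATE('01-JAN-2010', 'DD-MON-YYYY');
SELECT SUM(C2) FROM T1 WHERE C1 >= TO_DATE('02-JAN-2010', 'DD-MON-YYYY');

SELECT
  SQL_ID,
  PLAN_HASH_VALUE,
  SQL_TEXT
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'SELECT SUM(C2) FROM T1 WHERE C1 >=%';
```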
From a 10132 trace file:
—-
Removing the WHERE clause:
—-
Removing a column (literal in this case):
—-
Removing the hint, which caused the plan to change:
—-
Similar to one of the previous SQL statements, but this time a filter operation appeared in the plan, so a different PLAN_HASH was generated:
dang amazing stuff man.
Charles,
Great thread, but the WordPress template you use makes it really hard to follow. Is there a chance that you could move to a flexible-width template (like the one I use, or the one Jonathan uses – there are around 8 such templates on WordPress) instead of the fixed-width template that you use now?
Your stuff is really great, but the template makes it very hard to follow, especially when there is a lot of code in it.
Please take this as regular loyal reader feedback.
Coskan,
Thank you for the suggestion. The last time that I looked at the various themes was in December 2009. The theme that I originally selected simply truncated the right side of code sections. I explored the idea of using a flexible width theme, but found that those themes also caused the right side of some code sections to be truncated. So, I settled on a theme that permits horizontally scrollable code sections. When a code section is taller than the web browser window it is difficult to see the entire code section because, of course, the horizontal scrollbar at the bottom of the section is off the screen.
Now that I have a couple of decent articles on the blog (I think that I am up to 5 decent articles now), it might be worth spending $15 USD a year for access to WordPress’ Custom CSS feature so that I can hopefully make the code sections wider and still scrollable.
Now I need to dig out the book on CSS to fix what I can. The modified theme is not quite flexible-width, but at least I can see most of the code sections now, and I still have the scrolling windows for those code sections that are too wide.
Looks awesome Charles. Thank you very much for this change. It really made reading way easier than it was, I hope others will like it as well.
For my 4:3 monitor it is worse than before 😦
Roberto and Coskan,
Thank you for the feedback. I am not 100% happy with the new layout yet.
I admit that the code sections in the original layout caused me problems in the past. It was especially difficult to copy the text from a code section if the last line did not quite fit into the small code window – it seems that 50% of the time Internet Explorer would also copy the entire contents of the web page from that point down. I also did not like that nearly half of an execution plan was hidden in the code sections. The content portion of the blog (and the code sections) is now almost twice as wide as before. This change not only fixed the code sections, but also allows me to add larger versions of pictures that are presented on the blog.
After seeing Roberto’s comment this morning I attempted to view the web page using a Toshiba netbook that has a 16:9 screen with 1024 horizontal pixels of resolution. The content portion of the page would just barely fit on the screen if I scrolled the screen slightly to the right. The latest release of Internet Explorer offers a zoom percent, which I noticed automatically adjusts on some websites that are viewed on the netbook. The zoom feature may be a work-around. It would be nice if I did not need to scroll slightly to the right to view a page.
I modified this blog article to include 3 screen captures:
1. Netbook showing the “modified” version of this blog.
2. 4:3 20 inch monitor at 1600 x 1200 resolution (not using the full width of the screen) showing the original version of the blog.
3. 4:3 20 inch monitor at 1600 x 1200 resolution (not using the full width of the screen) showing the modified version of the blog.
Do you give a fixed width or a percentage? I am not sure how WordPress works, but I believe a percentage can sort the issue (if I recall my HTML knowledge correctly). As feedback: with a 1280-pixel-wide resolution the scrollbar appears, but only for a 10 pixel difference, which does not cause any problem for the blog posts. By the way, I think you can also change the size of the header table, which is still using the same size.
I did not think about the possibility of specifying percentage widths. I am only able to make adjustments through changes to the CSS specification – I do not have direct access to the HTML coding of the pages. That said, CSS offers a lot of flexibility once the names of the styles defined in the HTML coding are determined.
I just made another change – hopefully it will make the blog a bit more viewable on the netbook, and hopefully address the issue that Roberto mentioned with his 4:3 screen.
4:3 is now OK.