## OT: Do Search Terms Describe the Visitor?

January 12, 2012

I thought that I would start this slightly off topic blog article with a bit of humor.  Seven months ago I wrote a blog article that refuses to move from the first position in the most visited articles on this blog.  In the process of trying to understand why a mathematics focused article is the most popular article on an Oracle Database specific blog, I regularly review the search keywords that bring people to my blog.  Yesterday alone, the search keywords that apparently pointed to the popular blog article included (not a complete list):

• finding distance between two points lat long (2)
• distance spherical points coordinates oracle (2)
• find distance between two longitude latitude points (1)
• calculate distance between points in a feature (1)
• calculate speed longitude latitude (1)
• distance between two lattitude points (1)
• how to get distance between two latitudes in meter (1)
• access table that converts latitude and longitude (1)
• calculate time and distance between two longitudes (1)
• distance between two points latitude and longitude transact sql (1)
• converting to oracle degree minutes (1)
• distance between latitude (1)
• select sdo_geom.sdo_distance (1)
• oracle sdo_geom.dos_distance (1)
• calculate distance longitude latitude (1)
• method of measuring distance using plain (1)
• oracle spatial calculate distance (1)
• how to calculate distance between two latitude and longitude (1)
• to calculate distance between two points (1)
• oracle spatial sort lat lon coordinates (1)
• longitude and latitude distance between 2 points (1)
• solve this:find distance between (35 degrees 21′) and (29 degrees 57′) and given that the radius is 3960 (1)
• distance between two points wordpress (1)
• how to get distance between two longitudes (1)
• calculate distance between two lat long points (1)
• how to find distance between points of longitude (1)
• sql function calculate distance gps kilometers (1)
• calculate distance between lattitute longitude (1)
• excel calculate distance from lat long (1)
• calculate distance from lat long sql (1)

I think that the above keywords point to one and only one conclusion: the majority of the visitors to this blog are lost, know only their latitude and longitude coordinates (and the coordinates of where they are supposed to be), and are able to VPN in to their Oracle Database at work to determine how long it will take to become unlost.  🙂
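For the benefit of those lost visitors, the calculation behind nearly all of the above search keywords is the great-circle distance between two latitude/longitude points.  Below is a minimal Python sketch of the haversine formula (Oracle Spatial users would instead reach for SDO_GEOM.SDO_DISTANCE); the 6371 km mean Earth radius is the commonly quoted approximation, so expect small differences from tools that use a different Earth model:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two latitude/longitude
    points, using the haversine formula and a 6371 km mean Earth radius."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Roughly the Detroit to Chicago distance (west longitudes are negative)
print(round(haversine_km(42.33, -83.05, 41.88, -87.63)), "km")
```

Swap 3959 in place of 6371.0 to obtain the answer in miles, which appears to be what at least a few of the searchers were after.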

While on the topic of interesting search keywords, I thought that I would mention a favorite search keyword that appeared in the WordPress statistics.  My all time favorite interesting search keyword that was used to access this blog appeared shortly after I reviewed a red covered Oracle related book – the search keywords were “Charles Hooper is getting sued”.  Needless to say, it is interesting that the search keywords found an article on my blog.

Moving on to the point of this blog article, I have also noticed a curious trend in some of the search keywords.  WordPress makes it a bit difficult to go back in time to tabulate somewhat popular search keywords when there are a lot of commonly used keywords that access this site.  However, with a bit of effort, I attempted to locate search keywords that share a common pattern.  Those search keywords that I located (with the minimum number of hits indicated) follow:

• oracle core essential internals for dbas and developers pdf (8)
• oracle core: essential internals for dbas and developers pdf (4)
• oracle tuning the definitive reference second edition pdf (99)
• oracle tuning: the definitive reference pdf (3)
• oracle tuning the definitive reference pdf (1)
• oracle performance firefighting pdf (18)
• apress.oracle.core.essential.internals.for.dbas.and.developers.nov.2011.rar (1)

Undoubtedly, in the above cases the searcher likely found my review of a particular book.  The above list has me wondering about a couple of items.  I was a full time college student for five years to obtain my degree.  I remember the pain of signing a check for what seemed to be (U.S.) \$500 every semester – it did not seem to matter much if I was buying mathematics books (a \$204 Calculus book from Amazon oddly seems appropriate here), computer books, or psychology books – the total was always about the same.  Was that money for books a waste?  Maybe I could have just found the book out on the Internet somewhere… they had that neat Gopher tool way back then, but not the WYSIWYG interface that is common now.

I am left wondering what the people searching for the above keywords might have been thinking.  I assume that some of the searchers have spent some time in college, and might even be Oracle developers or DBAs.  So, do you think that search terms describe visitors?  If so, how do you think the above search keywords might describe the searchers?

## A Year Older, Any Wiser? 2

November 30, 2011

Today marks the second anniversary of this blog.  With just 372 articles posted in the last two years, the rate at which new articles are posted has decreased rather dramatically from the 1+ article per day average that was established shortly after the launch of the blog.  To celebrate the second anniversary, I thought that I would post a couple of statistics and bits of information about this blog:

• On October 21, 2011 there were 1,020 views of the articles on this blog, the most in any one day.  Yet on that day one of the simplest blog articles was posted.
• On a typical day search engines generate roughly half of the page views on this blog (this percentage increased in the last couple of months).  I periodically wonder if those people searching for Charles Hooper (currently the most common search keyword) are in search of something else (that link is safe to view at work, although it does recall a certain article by Richard Foote that was posted March 31, 2008).
• No article on this blog has ever resulted in an unintentional distributed denial of service attack (I am not explicitly pointing the finger at this article: http://jonathanlewis.wordpress.com/2011/11/28/oracle-core/ ).  😉
• The most viewed article since this blog went online (currently with roughly 7,530 views) was written roughly 23 months ago and describes a deadlock that can be triggered in Oracle Database 11.1.0.6 and greater, where the same sample code does not trigger a deadlock in Oracle Database 10.2.0.1 through 10.2.0.5 (earlier release versions not tested).
• The second most viewed article since this blog went online (at roughly 5,500 views) was written just 5.5 months ago and has more of a mathematics focus than an Oracle Database slant – more so than any other article on this blog.
• The third most viewed article since this blog went online (at roughly 4,200 views) describes a case where I made an error in testing Oracle Database 10.2.0.1 in 2006, and accidentally confused a partially related cause with an effect.  In the article I then tried to determine the coincidental cause and effect.  Interestingly, there is an odd association with the contents of that blog article with the false DMCA claim that was filed against one of my articles earlier this year.
• One Oracle Database book review article on this blog extended to roughly 18 typewritten (single spaced) pages in length, and I read that book twice from cover to cover.
• One Oracle Database book review article on this blog extended to roughly 24 typewritten (single spaced) pages in length, and that review covered ONLY the first 230 or so pages of the 1100+ page book plus a couple of other pages later in the book.
• One Oracle Database book review article on this blog extended to a record shattering 35.5 typewritten (single spaced) pages in length that excluded two of the book’s chapters, and due to the length of the review had to be divided into two separate blog articles.
• The blog category name Quiz – Whose Answer is it Anyway? is derived from the name of the TV show “Whose Line is it Anyway?” – a TV show where the actors improvise the show content in reaction to the behavior of other actors.  The articles in this category (58 articles by the latest count) demonstrate cases where unwitting actors (uh, authors) improvise their story of how Oracle Database works and how it does not work.
• Some of the blog articles have titles with double or triple meanings.  I sometimes accidentally embed humor in some of the articles – sometimes I don’t recognize the humor for days.
• It is occasionally difficult to develop unique blog article content that is not already better described in five blogs authored by other members of the Oracle community.  I monitor a couple of different discussion forums (OTN, Usenet, Oracle-L, and a forum operated for commercial purposes) to find interesting problems that might appeal to a wider audience.  I am still trying to determine if there is anything somewhat interesting about this article – how hard must the developer work to hide information from the query optimizer?
• Some of the articles are posted in an incomplete form, knowing that helpful readers will often fill in the missing details… or the missing link.  If I could remember everything that I have forgotten about Oracle Database, I would have to forget something else – so thanks for the gentle reminders that I receive on occasion.
• I probably should have created a couple more blog article categories: Documentation Errors; Test Cases; Ouch, that’s Not Supposed to Happen; and Where Did You get that Idea?.
• I pay a yearly fee to WordPress that allows my articles to stretch across your widescreen monitor, rather than being confined to the narrow width of the title banner at the top of this page.  I also pay a yearly fee to WordPress so that unwanted advertisements do not clutter the page for visitors not logged into a WordPress account.

Looking forward, I can only predict that there is a better than 50% chance that there will be a part three to this blog article series in 12 months.  I managed to purchase four books from Apress during their Cyber Monday sale (in between the distributed denial of service attacks) a couple of days ago, including a book that I recently received in paperback form from Amazon.  I think that this might be a hint that there will be more Oracle Database book reviews posted to this blog in the future.

## On the Topic of Copyright

February 11, 2011 (Updated February 25, 2011)

On August 26, 2010, just before cancelling my first three month old order for the “Oracle Tuning: The Definitive Reference Second Edition” book, I wrote the following in a blog article (side note: the second order for the book is now more than three months old, and I have no intention of cancelling that order):

In June 2010 an article appeared on another blog that identified three blog articles appearing on a blog operated by an Oracle Certified Master (OCM) that were copied nearly verbatim from three other blogs, thus saving the OCM a considerable effort in not only typing the verbiage appearing in the articles, but also the effort of actually building useful content.  Why spend four hours developing a blog article (this is probably the average time I spend), when a simple copy, paste, find-and-replace can be accomplished in a couple of minutes?  Those copies of articles clearly violated commonly understood copyright restrictions, and a quick search showed that the copyright violations extended far beyond the initial three articles which were mentioned (yes, I have screen captures and PDF copies of all that I identified).

So, why did I write the above paragraph?  For some reason I was recently browsing through some of the information about DMCA:

“Any material that was posted without the copyright owner’s authorization must be removed or blocked promptly once the service provider has been notified that it has been removed, blocked, or ordered to be removed or blocked, at the originating site.”

I realized that a random reader of this blog could potentially send a note to WordPress stating that they are, in fact (under penalty of law), the owner of the material I just spent 2 hours, 4 hours, 8 hours, 16 hours – maybe even months – writing, and demand that the article be taken offline because I clearly stated that I quoted a couple of words from one of their articles while agreeing or disagreeing with the other article.  Is quoting another source, while clearly stating that the other source is being quoted, in an effort to critique the quoted section an example of fair use?

I greatly dislike seeing people copying other people’s work and passing it off as their own work.  It takes considerable effort to put together many of my blog articles; it requires a good deal of past experience troubleshooting other problems; and once my blog articles are published, those blog articles are not complete without the valuable input provided by reader comments.  It is not uncommon for me to post a blog article with the sole intention of helping one or two people, with the knowledge that reader comments on the blog articles often take the articles to a completely different level, expanding the articles to help a much wider audience.  The article that is currently the second highest ranked article on this blog in the last 90 days is just one example of this approach in action.

The Digital Millennium Copyright Act (DMCA), among other things, helps protect original, copyrighted content that is posted to the Internet to prevent that material from being republished for profit, or claimed as original, copyrighted content authored by the person who copied the previously copyrighted material.  The doctrine of fair use outlines four factors that determine whether or not a particular use of copyrighted material is considered fair use:

• “The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
• The nature of the copyrighted work
• The amount and substantiality of the portion used in relation to the copyrighted work as a whole
• The effect of the use upon the potential market for, or value of, the copyrighted work”

If you see your original, copyrighted content on an unauthorized site, file a DMCA take down notice.  If you do so, be aware that you will be swearing under penalty of perjury that you are, in fact, the owner of the copyrighted material.  Just for the entertainment value, I decided to see what the law states in Michigan about the penalty of perjury:

“Perjury committed in courts—Any person who, being lawfully required to depose the truth in any proceeding in a court of justice, shall commit perjury shall be guilty of a felony, punishable, if such perjury was committed on the trial of an indictment for a capital crime, by imprisonment in the state prison for life, or any term of years, and if committed in any other case, by imprisonment in the state prison for not more than 15 years.”

Translation, “You better know what the heck you are claiming when you file a DMCA take down notice and you sign, under penalty of perjury, that you are the copyright holder.”

Why all of this discussion of copyright and DMCA?  Some of you may have noticed that my recent article titled “SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?” that was posted on February 6, 2011 has disappeared.  Why has it disappeared?  A DMCA take down notice was filed, and that notice in part stated the following:

——————–BEGIN NOTICE

Pursuant to the DMCA, please see this takedown notice.

I have confirmed that the site says “powered by WordPress.com”, and is hosted by WordPress.

Contains this material originally published on our forum:

(My modified version of the SQL statement posted by a user of an online forum who was seeking help with a performance problem)

This was copied from our forum here:

http://dbaforums.org/oracle/index.php?s=ec2b9e8d676465331d7fe6cd6e07fa0c&showtopic=20523

Where it was originally published in this format:

(SQL statement as posted by a user of an online forum who was seeking help with a performance problem)

Donald K. Burleson
CTO, Burleson Consulting

Kittrell, NC, USA 27544

I hereby swear that this content is copied from the BC DBA forum and that this page infringes on my copyright and it is not authorized.

Under penalty of perjury, I swear that all of the information contained in your Infringement Notice is accurate, and that I am the copyright owner of this material.

Signed: Donald K. Burleson, copyright owner

——————–END NOTICE

You are required to remove the specific content outlined in this notice and identified as the infringing material. If you would like to formally challenge the notice, please let us know so that we can provide you with further instructions.

Interesting… so Donald K. Burleson holds the copyright to anything posted to the forums on dbaforums.org?  So, did I remove the SQL statement that was posted by a user of the forum, or did I file a DMCA challenge stating that Donald K. Burleson is not the owner of that SQL statement?  Think about that for a moment, would I back down when trying to help someone solve a problem (falling under the doctrine of fair use), especially after readers of my blog article provided valuable information to further help the person experiencing the performance problem?  Would I back down when an action by Donald K. Burleson would impede the freely provided furthering of the Oracle community’s Oracle Database knowledge?  Think about that for a moment…

Finished thinking already?  I filed a DMCA challenge with the following text:

The quoted section mentioned in the DMCA take down notice was in fact copied from Donald K. Burleson’s forum, with minor changes to improve readability.  The copied text, however, does not infringe on Donald K. Burleson’s copyright.  The text that I copied was posted by one of the users of his online forum, and that text originated in the computer program titled “Patrol for Oracle” written by BMC Software (as indicated in the comments attached to the blog article).  The copied text is essentially a SQL statement that is submitted to a database server by the “Patrol for Oracle” program, and that SQL statement was exhibiting poor performance.  If anything, the rightful owner of the copyright for the copied text is BMC Software, and credit for the SQL statement currently appears in the comments section of the blog article.  A user posting a SQL statement (the alleged copied text) to a forum does not transfer the copyright for that SQL statement to the owner of the forum, especially if the user posting the SQL statement was not the original author of the SQL statement.

This is not the first time that Donald K. Burleson has falsely claimed ownership of copyright.

The attached “DMCA Challenge 2011-02-08.pdf” file is my signed notice that I did not infringe on a copyright owned by Donald K. Burleson.  The attached “DMCA Challenge 2011-02-08 Attachment.pdf” file shows my blog article content where the alleged copied text appears, and is followed on pages 11, 12, and 13 by the content where the alleged original text appears.

Please keep in mind that this blog is one that is dedicated to technical Oracle Database related discussions, and personal attacks do not belong here.  Attacking technical discussions containing faulty information, of course, is expected.  See the About page for more information.

——–

Edit February 25, 2011:

Donald K. Burleson, apparently realizing what it means to file a false DMCA claim under penalty of perjury, did not file a lawsuit to back up his DMCA claim to copyright ownership of the modified SQL statement that I posted in my February 6, 2011 article titled SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?  His false DMCA claim held hostage one of my articles for a total of 17 days, during which time the article was not available for readers of this blog (for the record, I completely understand and agree with WordPress’ handling of this matter, where their processes require taking DMCA challenged articles offline for two weeks to allow the true copyright holder sufficient time to file a lawsuit).  False DMCA claims from Donald K. Burleson against my blog articles will not be tolerated, and this article will serve as evidence of past abuse, if necessary.

## Interviewed for the November 2010 NoCOUG Journal

November 4, 2010

Almost two months ago I was interviewed for an article that was to appear in an upcoming NoCOUG Journal newsletter.  It was a great opportunity to participate in the interview, and Chen (Gwen) Shapira provided to me a great set of thought-provoking questions.  The interview went a bit long, so some of the questions and answers had to be trimmed from the printed NoCOUG Journal newsletter article (Iggy Fernandez, Chen, and the other responsible parties did a fantastic job of picking the best answers for inclusion).  The picture used in the article… let’s just say that of the 50 or so pictures that I reviewed for submission, it was the only one that effectively hid my gray hair, in which I did not show a distressed (or puzzled) expression or appear as though I had just spent eight hours writing a critical book review (is that a hint that another book review is in the process of being put together?).

I have decided to post a couple of the questions and answers that had to be trimmed due to space constraints (with the exception of the war story from my experience, which drifted a bit too far off topic, and involved some rather gruesome gutting of computer internals).

———————————

You use Excel as an automation and visualization tool. You definitely have more Excel expertise than most Oracle DBAs? How did you get into the Excel business? Do you have some tricks to share with our readers?

Among other things, I was a computer/technology instructor prior to working for K&M Machine-Fabricating, Inc., and one of the topics that I taught was using Microsoft Excel for data analysis and data visualization.  Excel has a full-featured macro language that is quite similar to Microsoft Visual Basic 6.0 (and also the 2.0 version of Visual Basic), and is somewhat similar to the much simplified VBScript that is integrated into some ERP packages (Infor’s Visual Manufacturing for example), some engineering applications (AutoCAD, at least for a period of time, supported VBScript), classic ASP web pages, and even the Windows operating system.  It is the macro automation, either from within Excel or another program (whether a Microsoft Office component, custom developed program, or even an operating system script), that provides the most useful features in Excel for a DBA.

In June 2009 I delivered a presentation to a regional ERP user’s group that among other things showed how to leverage Microsoft Excel as a query and reporting tool for ERP data.  The Excel portion of the presentation required roughly 90 minutes and showed how to tap into the ERP data for reporting and visualization, as well as querying flat files, Windows event logs, and the WMI database that is built into all Windows operating systems since Windows 2000 was released.  It took about three months to prepare for that presentation, and the preparation involved reading three books about Microsoft Excel programming:  “Professional Excel Development”, “Excel Power Programming with VBA”, and “Pro Excel 2007 VBA”.  The first two books are great references but also are easily readable from start to end.  The third book really does not introduce anything that could not be discovered by a couple of hours of experimentation and Excel help documentation reading (the help file in Excel 2007 is much better than earlier versions and was improved again in Excel 2010).

As for tricks, turn on the macro recorder in Excel, make a couple of changes or set up a Microsoft Query that accesses an Oracle Database, and then examine the resulting macro.  This method of generating macros is a great starting point, as long as you keep in mind that the generated code is not necessarily the most efficient approach to producing the intended results, and there is a small chance that the generated code will not work as expected when it is later executed.  If you see an unusual command, click in the middle of the command and press the F1 key for additional information about the command.

Tell us about your favorite network tools. When do you recommend bringing out network tracing? What about other operating system knowledge, do you feel that DBAs would benefit from looking at more low-level tracing and measuring?

For the rare times when I need to perform network analysis, I typically reach for ping, tnsping, or tracert (traceroute) first.  If I am working on a computer running Windows I might also run either nbtstat or netstat with various parameters.  One of the little known key network tools is actually the event log found in a managed network switch, which might provide a clue about misconfigured switch or server settings, failing hardware or failing network links, faulty wiring, and a number of other problems.  For automated network monitoring it is hard to find a better free tool than Nagios.  For a DBA, network monitoring with a packet capture tool such as Wireshark (available on Windows, Unix/Linux with an X Window environment, and Mac) or tcpdump probably should only be attempted when a 10046 trace at level 8 or 12 finds that only a small percentage of processing time is confirmed to be server-side database related activity, essentially when most of the elapsed time is in SQL*Net type wait events.  By time sequencing a Wireshark capture with a 10046 trace file, it might be possible to determine why, for example, every time the client submits a specific SQL statement and a FETCH call is found in the 10046 trace file there is a two second delay before the next entry in the 10046 trace file – essentially answering if there was a problem/slowness in the network communication, or if the client was busy using the retrieved data.  If the Wireshark capture shows that a problem was not found in the 2 second problematic interval, then a switch to client-side tracing with Wireshark and/or another tool such as Microsoft’s Process Monitor, Microsoft’s Spy++, SQL*Net tracing, or other client-side tracing utility might help.
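To put a rough number behind the "mostly SQL*Net type wait events" trigger mentioned above, the following Python sketch tallies the share of wait time attributed to SQL*Net events from the WAIT lines of a 10046 trace.  The sample lines and the regular expression are simplified assumptions (the exact WAIT line contents vary by Oracle release and wait event, and ela values switched from centiseconds to microseconds in release 9i), so treat this as a starting point rather than a finished trace file parser:

```python
import re

# WAIT lines in a 10046 trace look roughly like:
#   WAIT #3: nam='SQL*Net message from client' ela= 2000000 ... tim=...
WAIT_RE = re.compile(r"WAIT #\d+: nam='([^']+)' ela=\s*(\d+)")

def sqlnet_wait_share(trace_lines):
    """Return (total_wait_us, sqlnet_wait_us) summed from WAIT lines."""
    total = sqlnet = 0
    for line in trace_lines:
        m = WAIT_RE.search(line)
        if not m:
            continue
        name, ela = m.group(1), int(m.group(2))
        total += ela
        if name.startswith('SQL*Net'):
            sqlnet += ela
    return total, sqlnet

sample = [
    "WAIT #3: nam='db file sequential read' ela= 512 file#=4 block#=100 tim=100",
    "WAIT #3: nam='SQL*Net message to client' ela= 4 driver id=1413697536 tim=200",
    "WAIT #3: nam='SQL*Net message from client' ela= 2000000 driver id=1413697536 tim=300",
]
total, sqlnet = sqlnet_wait_share(sample)
print(f"{sqlnet / total:.1%} of wait time in SQL*Net events")
```

If the SQL*Net share dominates, client-side or network tracing is the logical next step; if not, the time is being spent server-side and a packet capture is unlikely to help.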

———————————

I hope that you enjoy reading the interview questions and answers; there are several other great articles in the November 2010 NoCOUG Journal.

## Technical Review of MOTS

17 09 2010

September 18, 2010 (Updated September 18, 2010)

(Disclaimer: To satisfy the one person that might wonder if my account of this event should be “considered credible” I must admit up front that I had some involvement in the planning of the event.)

I will make a prediction that an event like this is unlikely to happen again any time in the next couple of years.  The two to one ratio of attendees to OakTable members made it quite easy for me to talk to knowledgeable people from around the region, and around the world – both OakTable and non-OakTable members were quick to share their knowledge, experiences, and knowledge acquisition techniques.  I even bumped into a couple of people who admitted to reading this blog.  The closest that I can come to a list of complaints about the event follows:

• The seat bottoms in the chairs in the conference rooms were a bit hard.  It seems that I had the constant urge to get up and fix something, yet there was nothing to fix, and I would have risked missing something that was quite useful if I did look away for a moment.
• I identified three errors in the presentation slides.  I was not able to pick out the errors until the presenter said something like, “Oh, by the way there is a typo in the second spelling of this parameter on the slide,” or “that should be REMOTE_LISTENER, not LOCAL_LISTENER,” or “that’s not supposed to happen.”  It is refreshing to see that people at this level are able to point out mistakes, and then spin that mistake into a teachable moment.
• The food and beverages.  Yes, there was an abundant supply of high quality food, and a wide variety of beverages (some only available after the sessions).  But where was the Diet Cherry Coke?  Pepsi products are OK unless you have to think clearly about performance problems (this is my excuse for being slow in spotting errors).
• While there were only three simultaneous presentations, without fail there were two sessions that I wanted/needed to see at the same time, and a third session that probably would have broadened my level of understanding into entirely new directions.
• Walking up and down the stairs rather than using the elevator – the hotel is not nearly tall enough to walk off all of the performance problems that could result from the third bullet point (I made sure to stop running the stairs to avoid compulsive tuning disorder).

As you can probably tell from the above list, it was hard to find things not to like about the event.  Carol Dacko did a fantastic job keeping everyone on the planning committee for the MOTS event on task, pushing toward an educational experience for all attendees.  All of the members of the MOTS committee contributed a substantial amount of time and effort (and travel miles) over the last seven to nine months.

Before I describe what I liked about the event, I will first list the presentations that I managed to attend (I highly recommend these to anyone who has the opportunity to see these again in the future):

• Moans the Magnificent by Mogens Nørgaard and Cary Millsap.  An Oracle comedy routine similar to the Carnac series by Johnny Carson.  A great way to begin the event, although I think that one or two of the jokes might not have made sense to most of the audience members until a couple of hours later.
• How to Write Efficient SQL by Jonathan Lewis.  Using cutting the grass as an analogy, Jonathan explained why it was important to be familiar with the data to be queried, to ask if a better execution plan exists, and how to help the optimizer find the better execution plan.
• Battle Against Any Guess by Alex Gorbachev.  Why know when you can guess?  Not even educated guesses are good enough?  Alex founded the BAAG party and wrote a chapter in the Expert Oracle Practices book on the same topic.  Rumor is that there is a BAAD party around the corner if the NoSQL ACID-free movement gains momentum, roll-aids will be used rather than undo.  But once again this is just a rumor.
• Back to Basics – Choosing the Starting Point of Performance Tuning and Troubleshooting Wisely by Tanel Poder.  This is one of those sessions that I had hoped someone would put together.  It seems every time I see a person on the OTN forums propose the use of AWR (or Statspack) to identify a problem found in a specific session, I want to reach out and say to that person, “a system with a loose screw is rarely fixed by putting the screw into a nail gun – use the right tool for the job, and avoid what does not work.”  Tanel’s presentation covered that issue and many others.  His slides had a great deal of detail, much more than could be discussed in the 50 minute time period.  His slides may be downloaded from here.
• CBO Fundamentals: Understanding the Different Modes of System Statistics by Randolf Geist.  Highlighting the evolution of the cost based optimizer from using the Oracle 8i OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING parameters to reduce index costs, onto the use of system (CPU) statistics which effectively increase the cost of full table scans and consider the expected CPU utilization as a component of access path cost.
• Thinking Clearly About Performance by Cary Millsap (part I and II).  It is hard to describe this two hour presentation, other than to say that it is one of the best presentations offered at the symposium.  The whitepaper may be downloaded from here.
• SQL Baselines, Adaptive Cursor Sharing and Automatic Cardinality Feedback by Jože Senegačnik.  The presentation described interesting features that affect Oracle’s optimizer’s ability to repeatedly use the same execution plan in 10g, 11g R1, and 11g R2.
• Everything a DBA Needs to Know about TCP/IP Networks by Chen Shapira.  Chen stated up front that the presentation could not possibly contain everything that a DBA needs to know.  Instead, she described four types of network-related problems that she encountered and the tools that are available to help with analysis.  Her presentation ties into my seven part blog series on network monitoring.
• Interpreting Execution Plans by Christian Antognini.  Christian effectively covers the information that should have appeared in the Oracle documentation about how to read execution plans.  One hint from the presentation: the most indented line in the execution plan is not necessarily the starting point of the execution plan.  A very detailed (better than any other resource) description of reading execution plans is found in Christian’s Troubleshooting Oracle Performance book.
• The Beginner’s Guide to Becoming an Expert by Jonathan Lewis.  This was a presentation with a single slide, but the message was not in the slides, but instead in the scripts.  Creating models of problems, recognizing (and investigating) unexpected results, and comparing the results between Oracle releases to understand how Oracle Database behaves when certain conditions are present.  While I do not have 2,000 scripts, I would like to hope that the test cases found on this blog are constructed to serve the same purpose – to help myself (and others) understand why “A”, “B”, and “D” happen, while “C” only happens when “E” is introduced.  It was definitely a bitmap smashingly good session.  Does the method work?  My personal experience says yes.
• Understanding LGWR, Log File Sync Waits, and Commit Performance by Tanel Poder.  You might be curious about what happens when your CPUs are pushed toward 100% utilization.  You might be curious why placing your redo logs on SSD drives may not help.  You might be curious why SQL_IDs cannot be associated with the log file parallel write wait event, as was stated to be possible on page 322 of one of the books that I reviewed.  If you missed this presentation, you are probably still curious.
• Why Isn’t Oracle Using My Index by Jonathan Lewis.  Is the clustering factor a factor (see the sample chapter from the “Cost-Based Oracle Fundamentals” book)?  Is ASSM a factor?  Are incorrectly set parameters the problem?  Are the recorded performance statistics about the server a factor?  Are missing statistics for virtual columns the problem?  Is a function applied to the indexed column (I don’t think that this was mentioned, but time ran out before the presentation finished)?  Interesting examples throughout.

What did I like about MOTS?

• Making a run to Kinkos (local photocopy center) at 12:45 AM to run off copies of the agenda for the next morning.
• Setting up a 13 drive RAID array using USB keys plugged into the laptop – the presentations had to be transferred onto the fancy real wood USB keys.  The MOTSCopy.doc file (manually rebuild the text file from the contents below) contains the scripts that I used for the copy process, which wrote to the USB keys in parallel when the CopyAll script was executed (see below).
• Having a chance to meet for the first time a large number of OakTable members.  It is rather remarkable that the OakTable members traveled around the world on their own dime to offer this training in Michigan.  I think that says something about the commitment of the OakTable members to learn Oracle Database details thoroughly, and then openly share that knowledge – just for the sake of helping others.
• There were no sloppy, poorly assembled, or untested presentations.
• The speed with which OakTable members diagnose and resolve problems, for example posting videos and pictures (and more) from the event.
• And… the list goes on, and on, and on.

Any thoughts from other attendees?

Hopefully, more of the pictures from the event and more of the presentations will make their way to the Internet soon.

——————————–

WordPress is currently refusing to deliver the Word documents and the double-extension attachments.  The five script files that I used to transfer the presentations to 13 USB keys simultaneously follow (only the CopyAll.bat file needs to be executed):

```
CopyAll.bat File Contents:
START copy1.bat
START copy2.bat
START copy3.bat
START copy4.bat

Copy1.bat File Contents:
xcopy C:\MOTS\*.* e:
xcopy C:\MOTS\*.* i:
xcopy C:\MOTS\*.* m:
xcopy C:\MOTS\*.* q:

Copy2.bat File Contents:
xcopy C:\MOTS\*.* f:
xcopy C:\MOTS\*.* j:
xcopy C:\MOTS\*.* n:

Copy3.bat File Contents:
xcopy C:\MOTS\*.* g:
xcopy C:\MOTS\*.* k:
xcopy C:\MOTS\*.* o:

Copy4.bat File Contents:
xcopy C:\MOTS\*.* h:
xcopy C:\MOTS\*.* l:
xcopy C:\MOTS\*.* p:
```

## Book Reviews and DMCA – No Longer Curious, Order Cancelled

26 08 2010

August 26, 2010

In June 2010 an article appeared on another blog that identified three blog articles appearing on a blog operated by an Oracle Certified Master (OCM) that were copied nearly verbatim from three other blogs, thus saving the OCM considerable effort, not only in typing the verbiage appearing in the articles, but also in actually building useful content.  Why spend four hours developing a blog article (this is probably the average time I spend), when a simple copy, paste, find-and-replace can be accomplished in a couple of minutes?  Those copies of articles clearly violated commonly understood copyright restrictions, and a quick search showed that the copyright violations extended far beyond the initial three articles which were mentioned (yes, I have screen captures and PDF copies of all that I identified).

So, why did I write the above paragraph?  For some reason I was recently browsing through some of the information about DMCA:

“Any material that was posted without the copyright owner’s authorization must be removed or blocked promptly once the service provider has been notified that it has been removed, blocked, or ordered to be removed or blocked, at the originating site.”

I realized that a random reader of this blog could potentially send a note to WordPress stating that they are, in fact, under penalty of law, the owner of the material I just spent 2 hours, 4 hours, 8 hours, 16 hours – maybe even months – writing, and demand that the article be taken offline because I clearly stated that I quoted a couple of words from one of their articles while agreeing with or disagreeing with the other article.  Is quoting another source, while clearly stating that the other source is being quoted, in an effort to critique the quoted section, an example of fair use?

“Section 107 contains a list of the various purposes for which the reproduction of a particular work may be considered fair, such as criticism, comment, news reporting, teaching, scholarship, and research. Section 107 also sets out four factors to be considered in determining whether or not a particular use is fair:

• The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
• The nature of the copyrighted work
• The amount and substantiality of the portion used in relation to the copyrighted work as a whole
• The effect of the use upon the potential market for, or value of, the copyrighted work”

Interesting… as some of you may have noticed, the Oracle Database book reviews that I wrote in the last two years attempt to be very thorough, precise, and specific – many times down to a page number and a quoted sentence or two.  In my view such quotes clearly fall under the umbrella of fair use (specifically criticism, comment, teaching, and research).

On June 10, 2010 I posted a blog article where I mentioned having three Oracle Database books on order that I intended to review just as I had with many of the other Oracle Database books that I purchased in the past.  Two of the books that I ordered, “Oracle Performance Firefighting (fourth printing)” and “Expert Oracle Database Architecture: Oracle Database Programming 9i, 10g, and 11g Techniques and Solutions, Second Edition” arrived and were reviewed, while the book that was supposed to ship on June 1, 2010 currently shows the following status on Amazon.com:

Let’s take a look at that order:

So, in summary, Amazon reports that the book has yet to be released, and if it ships in September it will arrive three months later than originally expected.  On August 17, 2010 I received two unsolicited emails from someone at that book publishing company asking me to submit any errata that I find in the second edition of the book so that the problems may be addressed by an OCM who is also an Oracle Ace, after I write up the book review – the person also mentioned that, by the way, the book keeps selling out.  Nice, except that I now have a slight problem.  My reviews typically take between one and two hours to write, and when agreeing with or disagreeing with a section of a book I try very hard not to lose the precise message of that section of the book – that often means that it is necessary to directly quote a sentence or two of the book that is being reviewed.  In theory, simply quoting a section of a copyrighted work might be grounds for a DMCA takedown of the content, thus rendering an hour or two of writing lost for a period of time.  Would the author of this book that is now three months delayed try a DMCA takedown if the review is not completely favorable?  I don’t know the answer.

In late May or early June I wrote the following about why I ordered the book that has yet to ship:

I was curious, so I spent good money on the book.  For most of the Oracle books that I have read in the last two years I have posted detailed reviews of the books on Amazon.  The reviews list what I thought was good about the book, what I thought was great about the book, errors/errata found in the book’s chapters, as well as misc. points about the book as it is read.  Some of my reviews identified a dozen or more errors in a book, which is a significant problem with some of the Oracle books on the market.  The errors must make it difficult for beginners to learn about Oracle Database.

How long will the review of the “Oracle Tuning: The Definitive Reference Second Edition” book stretch?  I am not sure, but it appears that I could write about a 4 page review on the first 27 pages of the first edition of this book – I sincerely hope that some/most of those issues were addressed in the second edition.  I want to make certain that I obtain a good value from the $44 that I spent on the book.  I do not think that it is too much to ask for accuracy from a published book, especially when the book costs more than $40.  Some of the pages in the chapters that I co-authored in the “Expert Oracle Practices” book required more than 8 hours of research, typing, verification, testing, proofing, re-testing, and formatting per page.  I believe that Randolf spent roughly the same amount of time on the pages that he wrote for the chapters.  I suspect that many of the very good books that are on the market for Oracle also required about the same amount of time per page.  Certainly, it can’t be too much to ask that other book authors put effort into testing and verification of what they publish.  The author’s website states about the “Oracle Tuning: The Definitive Reference Second Edition” book: [edit: feel free to read the book’s description on the publisher’s website].  So, I will be looking for the portions of the book’s contents that are comprehensive and targeted at the senior Oracle DBA.

After waiting three months, I am no longer curious about the book – so I just cancelled the order.  Was I serious that I could write a four page review on the first 27 pages of the first edition of the book?  I heard someone shout “show me your cards.”  In late May and early June I read the first 27 pages of the first edition using the Google Books site, and I made notes as I read.  These are my notes, quoting a minimal amount of the book to fall under the umbrella of “criticism, comment” of fair use:

• Page 9 (now page 11) states “For example, if the AWR shows that the demands on the shared pool become very high between 1:00 pm and 2:00 pm, the DBA might trigger a dynamic decrease of db_cache_size and a corresponding increase of the shared_pool_size parameter during this time period.”  If Oracle’s buffer cache is using the memory specified by the DB_CACHE_SIZE, the size value specified for DB_CACHE_SIZE cannot be decreased without first manually flushing the buffer cache – flushing the buffer cache to allow the shared pool to grow for an hour is a bad idea.  It is important to keep in mind that when the SGA_TARGET parameter is specified, the values for DB_CACHE_SIZE and SHARED_POOL_SIZE specify the minimum values for the parameters.
• Page 10 is an advertisement for the WISE package – free copy is apparently available with the book.
• Page 11, when describing examples of poor schema design, suggests that databases using extensive data normalization to minimize data redundancy is a poor design because it forces Oracle to perform unnecessary table joins.
• Page 11 (now page 12) states, “If poorly designed PL/SQL is encountered, users might be able to tune the database by using array processing such as bulk collect, forall and ref cursors…”  As of Oracle Database 10g, Oracle automatically array fetches 100 rows at a time in cursor FOR loops, making it much less likely/necessary that a ‘user’ will re-write the PL/SQL to take advantage of bulk collect’s slightly better performance.  http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1203923200346667188
• By page 13, the book has recommended the use of AWR, various AWR tables, and the SQL Tuning Advisor without mentioning that the Diagnostic Pack must be licensed for access to AWR information, and the Diagnostic and Tuning Pack must be licensed for access to the SQL Tuning Advisor.
• Page 14 (now page 25) states “The first tasks when tuning a database when tuning a database are to identify the external bottleneck conditions, which may include: CPU Bottleneck,… RAM Bottleneck,…  Network Bottleneck, … and Disk Bottleneck.”  While it is important to check these items, I suggest that these probably would not be the first items to check when tuning the database instance’s performance.  Yes, checking whether or not the CPUs are overloaded (or heavily loaded) is important, but maybe it would be better to determine if the overload is causing a measurable negative impact on performance by first checking the delta values of the various wait events, system statistics, and system time model statistics before deciding to check, for instance, whether there are network bottlenecks.
• Page 14 (now page 25) states “Whenever the run queue exceeds the number of CPUs on the Oracle server in the absence of high idle times, the system is said to be CPU-bound.”  The first question with this statement is what qualifies as a “high idle time”?  Secondly, a server may become CPU bound long before the CPU run queue exceeds the number of CPUs in the server, and long before 90% CPU utilization is reached.  The author mentions that CPU consumption can be reduced by tuning SQL and reducing library cache contention – that is true, but much more could be said about the topic.  For example, would it make a difference if the database is used for OLTP rather than data warehouse activities?  The remedies listed on page 15 are just to add additional CPUs to the server or to disable a “high CPU consumer” such as parallel query.  No warning is provided to indicate that adding CPUs to the server could be expensive from an Oracle licensing perspective and a system uptime perspective.
• Page 15 (now page 25) states that “Large amounts of Oracle*Net traffic contribute to slow SQL performance.”  The book does not describe what is considered “large amounts”, nor what to do about addressing the network bottleneck.
• Page 15 (now page 25) under the heading of disk bottlenecks the only issue identified is “using RAID5 for high update systems.”  The adjective “high” is never clarified.
• Page 15 (now page 25) states, while discussing potential solutions for over-allocation of the server’s RAM, that the options for “excessive wrapping id (sic) to add more RAM, reduce the size of Oracle’s SGAs, or turn-on Oracle’s multi-threaded server.”  Are these the only options, or could more suggestions be recommended?  For example, “size of Oracle’s SGAs” implies that more than one instance is running on the server – would it make sense to reduce the number of instances on the server?  Are there any reasons not to implement a shared server configuration (what the author refers to as “multi-threaded server”)?  What about tuning the SQL?
• Page 15 (now page 26), the author appears to misuse the term “network latency”, or at least stretch the meaning of latency as it applies to a network.
• Page 16 (now page 27) states “In a disk-bound database, the majority of the wait time is spent accessing data blocks.”  It is quite possible in a non-disk-bound, well performing database instance that the majority of wait time could very well be spent reading data blocks.  In fact, these events should be among the top 5 wait events, otherwise there probably is a significant performance problem.  It is also quite possible that the system could be disk-bound when the majority of wait time is spent _writing_ to the redo logs, archiving redo logs, and updating data/undo/temp files.
• Page 16 (now page 27), the Top 5 Timed Events, demonstrating a disk constrained database, shows 44 waits for the CPU.  Waits for the CPU cannot be reported in the Top 5 Timed Events – this data in the report is bogus.  The report also shows that the average single block read time is 2.75 seconds, while the average multi-block read time is 0.127 seconds (127ms) – this data in the report is also likely bogus.  The report also shows that 1,363 seconds were lost in 673 waits (average of 2.03 seconds) to the “library cache load lock” wait – if that wait appeared in the top 5 report, even in a report showing the single block read time is 2.75 seconds, I think it would be a good idea to start with the “library cache load lock” wait since that is one of the waits that indicates a severe problem if it appears in the top 5 wait events.
• Page 17 (now page 28) states “CPU enqueues can be observed when the CPU run queue exceeds the number of CPUs on the database server, and this can be seen by…”  The effects of CPU enqueues may be observed long before the CPU run queue exceeds the number of CPUs in the server, whether it be longer average waits for the log file sync wait event, increased waits for latches, increased wait time for single block or multi-block reads, or any number of statistics in the system time model (V$SYS_TIME_MODEL, V$SESS_TIME_MODEL).
• Page 17 (now page 28) states “High CPU usage will be reported as a top 5 timed even (sic) in any AWR report as shown below”  First, the Top 5 Timed Events report shows that there were 4,851 waits for CPU time – a Top 5 report cannot show the number of waits for the CPU – this data in the report is bogus.  Second, the CPU is a limited “wait” resource with an upper utilization limit equal to the number of CPUs in the server multiplied by the elapsed number of seconds, while the other foreground wait events, for the most part, have an upper limit of the average number of sessions (plus parallel related additional processes) connected to the instance multiplied by the elapsed number of seconds.  It is quite possible that a server experiencing excessive CPU usage may not cause the “CPU time” to appear in the top 5 timed events.  Third, we do not know how many CPUs are in the server – would it make a difference if there was only a single CPU compared to 96 CPUs?  Fourth, we have no concept of the elapsed time represented by the report – what if the elapsed time for the report is eight hours?  Fifth, if the single block read time averages more than a second, shouldn’t the investigation start with the IO system?  Sixth, the most that can be determined from the top 5 timed events is the average utilization, number of waits, and number of timeouts – why are the number of timeouts not shown?  From the top 5 timed events it is difficult, if not impossible, to determine whether there was a brief period of intensive activity followed by a long period of almost no activity, or whether the activity level was fairly constant in the time period.
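The capacity argument in the second point can be made concrete with a couple of lines of arithmetic – the CPU counts, session count, and eight-hour interval below are illustrative values of my choosing, not figures from the book’s report:

```python
# Upper bounds in a reporting interval: accumulated CPU time cannot exceed
# cpus * elapsed seconds, while most foreground wait events are bounded by
# roughly (average number of sessions) * elapsed seconds.

def cpu_time_ceiling(cpus, elapsed_s):
    return cpus * elapsed_s

def foreground_wait_ceiling(avg_sessions, elapsed_s):
    return avg_sessions * elapsed_s

elapsed = 8 * 3600                           # what if the report spans 8 hours?
print(cpu_time_ceiling(1, elapsed))          # 28,800 s possible on 1 CPU
print(cpu_time_ceiling(96, elapsed))         # 2,764,800 s possible on 96 CPUs
print(foreground_wait_ceiling(50, elapsed))  # 1,440,000 s for ~50 sessions
```

The same number of “CPU time” seconds in a Top 5 report therefore means very different things depending on the CPU count and the report duration – which is exactly why those two unknowns matter.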
• Page 18 (now page 28), the Top 5 Wait Events report (why the switch from Top 5 Timed Events?) is labeled as a mockup – probably a good idea that this is labeled, because the report shows 94,754 seconds where the server was waiting to send data across the network while single block and multi-block reads totaled only 85.75 seconds.  That is why you do not use a satellite link for the network backbone.
• Page 19, several grammar errors to this point in the book “It not elegant, or even correct, but it’s a common choice by managers who need quick, low-risk performance improvements…  Individual Program Global Areas (PGA’s for each session) are a set of running programs that do work for the instance and are referred to as processes.”
• Page 20 (now page 30), describes db_cache_size as “the number of data buffers to allocate for the instance.”  The parameter does not specify the number of data buffers (blocks?), it specifies the amount of memory to allocate to the buffer cache (or the minimum amount in the event the SGA_TARGET is specified).
• Page 21, the DB_FILE_MULTIBLOCK_READ_COUNT parameter is listed among the parameters that cannot be changed without bouncing the instance – that is not correct.
• Page 21 (now page 30), states “parallel_automatic_tuning:… since parallel full-table scans are very fast, the CBO will give a higher cost to index access and be friendlier to full-table scans.”  This is simply incorrect.  The optimizer does not increase the cost of index access when costing parallel access paths that use full table scans – it lowers the cost of the full table scan relative to the parallel degree.  Second, parallel full table scans are not automatically very fast, especially if the IO subsystem is bottlenecked.
• Page 21 (now page 30), the description of the OPTIMIZER_INDEX_COST_ADJ parameter is very weak, “this parameter controls the relative costs of full-table scans versus index scans.”  It would be much better to state that the parameter specifies the percentage of the index access cost to be retained.  Setting the value to a very low value could cause the optimizer to select not only an index access path rather than a full table scan, but to select the wrong index due to cost rounding problems as the value of the OPTIMIZER_INDEX_COST_ADJ parameter approaches 1.  Ideally, since at least Oracle Database 10.1 this parameter should be left at the default value of 100 due to the Oracle optimizer’s use of system statistics by default.
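The rounding problem can be sketched in a few lines: the parameter retains a percentage of the index access cost, and the rounding to a whole number assumed here (my assumption, mirroring commonly reported behavior, not a documented formula) shows how distinct index costs collapse together as the parameter approaches 1:

```python
def adjusted_index_cost(index_cost, optimizer_index_cost_adj=100):
    # Retain the given percentage of the index access cost; the integer
    # rounding (assumed) is what causes ties between different indexes.
    return max(1, round(index_cost * optimizer_index_cost_adj / 100))

# Two indexes with clearly different costs become indistinguishable when
# the parameter is set to 1, so the optimizer may pick the wrong index:
print(adjusted_index_cost(150, 1), adjusted_index_cost(190, 1))  # both 2
print(adjusted_index_cost(150), adjusted_index_cost(190))        # 150 190
```

At the default value of 100 the two costs stay distinct; at 1 the 150 and 190 cost indexes tie, and the optimizer’s tie-breaking rules, not the real costs, decide which index is used.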
• Page 21 (now page 30), the description of the OPTIMIZER_INDEX_CACHING parameter is vague – the book doesn’t indicate when the value of the parameter is used in cost calculations, and when it is not used.
• Page 22 (now page 34) states “In Oracle 10g, statistics collections are automated, but users may still need to selectively add histograms and other specialized optimizer statistics.”  The book does not state what triggers the statistics collection for a specific object, at what time the statistics are collected, that histograms (maybe excessively) are automatically generated, nor the potential problems that might be encountered after the statistics collection.
• Page 23 (now page 35) states “SQL will run fastest when the first table joins deliver the smallest result set.”  This is stated as an absolute, which absolutely will not be correct all of the time.
• Page 23 states “the new Oracle10g SQL profile utility will allow changes to execution plans without adding hints.”  This is untrue – a SQL profile is a series of hints that are automatically generated.
• Page 25 shows operating with OPTIMIZER_MODE set to CHOOSE and deleting a table’s statistics to immediately return the system to acceptable performance.  No warning that this caused the optimizer to switch to the RULE based optimizer, and no demonstration of how statistics should be collected.
• Page 25 “there was a serious degradation in SQL performance immediately after the implementation of partitioned _tablespaces_ in a 16-CPU Solaris 64-bit Oracle 9i database.”
• Page 26 shows a grammar error “Note that is the full-table scans are parallelized with 15 parallel query processes to speed up the statistics collection”
• Page 26 – amazing performance improvements “tripled the performance” “400 percent performance improvement” – no real suggestion of how the silver bullet corrected a problem to cause such a significant increase in performance, so that readers can experience similar performance improvements in their database instances.
• Page 26 – assumes a data warehouse was set up without a PGA_AGGREGATE_TARGET parameter value and with an unspecified value for SORT_AREA_SIZE, resulting in a SORT_AREA_SIZE at the default value, and only a 50% overall improvement was experienced by setting the parameter to 1MB.  The book did not state that, when SORT_AREA_SIZE is not specified and PGA_AGGREGATE_TARGET is not used, SORT_AREA_SIZE defaults to 64KB.  The book also did not describe how the 1MB value was determined to be the best value, nor what impact changing the parameter’s value has on memory availability in the server.
• Page 27 – code section shows single quotes around what is apparently a bind variable defined as VARCHAR2.  The single quotes should not appear around that bind variable.  Author states that a concatenated index on the CUSTOMER_STATUS and CUSTOMER_AGE columns resulted in a 50 fold performance improvement and 6 fold reduction in disk IO.  The book does not state how or why, so I wonder how that could happen when the CUSTOMER_STATUS column likely has few unique values and an inequality is used on the CUSTOMER_AGE column.  Was a bitmap or B*tree index used, was the CUSTOMER_STATUS or CUSTOMER_AGE specified as the leading column in the index?
• Page 27 “While it was not clear why the CBO was not choosing the index, the query with the index hint ran almost 20x faster.  After acting fast and running a script against v$bh and user_indexes, the DBA discovered that approximately 65 percent of the indexes were currently inside the buffer cache.”  There are a couple of problems with this statement.  The statement is not clear whether the DBA found that at least one block from roughly 65% of the indexes found in USER_INDEXES was in the buffer cache, or if the DBA found that roughly 65% of all blocks belonging to the indexes found in USER_INDEXES were in the buffer cache.  The second problem is why did the DBA check USER_INDEXES and not DBA_INDEXES – if the DBA checked USER_INDEXES, he would only see the indexes that belong to the user account that the DBA used to connect to the database (unless he used ALTER SESSION SET CURRENT_SCHEMA to change to another schema).  The third problem is that there may be multiple, possibly many, consistent read versions of an individual block in the buffer cache – how did the DBA factor out the multiple copies of blocks when he performed his calculations?  The fourth problem is that the DBA did not consult a 10053 trace for a hard parse of the SQL statement to understand why the expected index was not selected.
• Page 28 “Based on similar systems, the next step was to lower optimizer_index_cost_adj to a value of 20 in the hopes of forcing the CBO to lower the relative costs of index access.”  At the same time the OPTIMIZER_INDEX_CACHING parameter was set to 65.  What are the problems with this approach?  According to the Oracle documentation, the “OPTIMIZER_INDEX_CACHING parameter lets you adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.  The cost of executing an index using an IN-list iterator or of executing a nested loops join when an index is used to access the inner table depends on the caching of that index in the buffer cache. The amount of caching depends on factors that the optimizer cannot predict, such as the load on the system and the block access patterns of different users.  You can modify the optimizer’s assumptions about index caching for nested loops joins and IN-list iterators by setting this parameter to a value between 0 and 100 to indicate the percentage of the index blocks the optimizer should assume are in the cache. Setting this parameter to a higher value makes nested loops joins and IN-list iterators look less expensive to the optimizer.”  The parameter change is global, and does not apply to all index type accesses, it primarily makes nested loop joins appear to be more favorable than hash joins.  What criteria determined that 20 was the best value for OPTIMIZER_INDEX_COST_ADJ, and why didn’t the DBA correct or even check the system statistics, rather than experimenting with a parameter that should rarely be changed from its default value since the introduction of system statistics in Oracle 9i.
• Page 28 shows “alter system set optimizer_index_cost_adj=20 scope = pfile;”  Issuing that command will result in an “ORA-00922: missing or invalid option” error.  The pfile cannot be altered using an ALTER SYSTEM command, except of course if the spfile is modified, and then the spfile is written over the top of the existing pfile (init.ora file).
• Page 28 indicates that the Statspack report shown was from Oracle 9i, yet the report indicates that the wait time is in centiseconds, while an Oracle 9i Statspack report indicates time in seconds.  It is a bit alarming to see 4,796.54 seconds of wait on enqueue (46.71% of total implies total wait time of 10,268.76 seconds), while the 10,579,442 multi-block reads required 1,972.05 seconds (report shows that this is 29.20% of the total, but a calculator shows that it is 19.20%).  Something is wrong with this Top 5 Wait Events report – it is showing that the average multi-block read completed in 0.0001864 seconds, while the average single block read time required 0.002714 seconds.
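The arithmetic complaints about that Top 5 Wait Events report are easy to reproduce – the figures below are the ones quoted from the report:

```python
# Cross-checking the quoted Top 5 Wait Events numbers.
enqueue_s, enqueue_pct = 4796.54, 46.71
total_s = enqueue_s / (enqueue_pct / 100)  # implied total wait time, ~10,268.8 s

mb_reads, mb_read_s = 10_579_442, 1972.05
mb_pct = mb_read_s / total_s * 100         # ~19.2%, not the 29.20% printed
avg_mb_read_s = mb_read_s / mb_reads       # ~0.000186 s per multi-block read

print(round(total_s, 1), round(mb_pct, 2), round(avg_mb_read_s, 7))
```

A 0.0002 second average multi-block read alongside a 0.0027 second average single-block read is exactly the kind of internal inconsistency that a calculator catches in seconds.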
• Page 29 – the silver bullet tip to change the CURSOR_SHARING parameter to FORCE did not warn of any of the possible side-effects, which may include poor performance when cursors that should not be shared because of uneven data distributions generate execution plans that are inappropriate for other literal values, or bugs related to CURSOR_SHARING that either result in wrong data being returned, or instance stability problems.  It would be interesting to see how the 75% performance improvement was achieved by changing this parameter.

I guess that if the above mentioned problems are still present in the second edition of the book, it might have taken me significantly longer than the typical one to two hours to write the book review on the remaining 1000+ pages.  Best of luck for anyone ordering the second edition of the book.

## Strange Search Terms – Are the Answers Out There?

22 08 2010

August 22, 2010

I posted the first blog articles on this blog roughly nine months ago.  In the last nine months I have published roughly 230 blog articles, which is roughly 210 more than anticipated.  What happened to just posting Oracle Database test cases to demonstrate Oracle Database behavior?  I started seeing somewhat unusual search terms appearing in WordPress’ statistics for the site, so I gradually added additional content.  The additional content that I wrote did not necessarily answer the questions found in searches, but the searches provided me with motivation, thinking that DBAs and developers really wanted to know THE answer quickly.  Is it possible to be both quick and thorough?  I guess that my answer to that question is no, otherwise I would have stopped after completing the 20th blog article.

So, what are some of the most interesting and/or strange search terms?  Here are a few – these are actual search terms (in a completely non-random order):

```pig outer join
nnnnnnnnnnnnnnnnnnnnn
buffer hit% awr
low buffer hit ratio
buffer cache hit ratio 11g
cache hit percentage oracle 11g            (53 hits)
optimize non standard block cache sizes    (79 hits)
using a large database block size for or
funny examples of root cause analysis
more locks with oracle 11g
oracle index hint not working
site:hoopercharles.wordpress.com burles
sql injection ora-00907
ora books architectural books
sql true false quiz
michigan oaktable symposium               (65 hits)```

Searching to find knowledge… a much better approach than knowing the answer, while not knowing the question that is being answered.

Entering, stage right… Moans the Magnificent (Carnac the Magnificent‘s distant cousin).  Moans the Magnificent might not know the questions to the first two search answers, but I am sure that he would have no trouble with the remainder of the list.  Possible examples:

Possible follow-up question provided by Moans the Magnificent: Name an event that not even the oracles of the world could have predicted to have more of an impact on the climate than Al Gore inventing the Internet.

Possible follow-up question provided by Moans the Magnificent: What is the relationship between code monkeys, fertility clinics, and the big book of child names?

Moans the Magnificent will be the opening session at the Michigan OakTable Symposium in Ann Arbor, MI on Thursday September 16, 2010 and Friday September 17, 2010.  Registration for the event is still open, however hotel rooms are in short supply.  Moans the Magnificent will be joined by roughly twenty other OakTable Network members from around the world, providing a one-of-a-kind two day training session (this will not be an annual event) with virtually one-on-one contact with some of the world’s most respected Oracle DBAs, consultants, and programmers, and there might even be a chance to mingle with a couple of the developers who wrote some of the Oracle Database features that we all use on a daily basis.

You could continue to search for the answer… or you could directly talk with the people who know the answer.

## Looking for an Oracle training session in the rust belt of the United States?

18 02 2010

February 18, 2010 (Updated April 5, 2010)

The Michigan members of the OakTable Network are organizing a two day Oracle training event on the Thursday and Friday (September 16th and 17th, 2010) just before Oracle OpenWorld. OakTable Network members from around the world will be providing sessions at the training event in Ann Arbor, Michigan (not far from the University of Michigan) at the Four Points by Sheraton hotel. Presented sessions will appeal to both DBAs and developers. Seating at the event will be limited to 300 people.

Confirmed Speaker List (See the Official Site for the Latest List):

Alex Gorbachev
Alex Gorbachev is a respected figure in the Oracle world, and a sought-after leader and speaker at Oracle conferences around the globe. He has been recognized as an Oracle ACE Director for his contributions to the community and unmatched set of skills. He is the founder of the Battle Against Any Guess movement promoting scientific troubleshooting techniques. He is currently the Chief Technology Officer at The Pythian Group. Alex has worked for The Pythian Group in several roles. He began by leading a team of database experts in Ottawa. He then moved to Australia to build the company’s presence in the East Asia Pacific region. Now he is back in Ottawa as The Pythian Group’s Chief Technology Officer. In all his work, Alex continues to work towards bridging the gap between business and technology. The search for the perfect fit between technology, engineering talents, and business process is what keeps him up at night. Alex co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Andy Zitelli
Andrew Zitelli has thirty years of experience as a software developer, data architect and performance analyst. He has served as a consultant to the aerospace, semiconductor, steel, and pharmaceutical industries. Andrew is proficient working with a broad range of operating systems and languages and has 25 years experience working with ten different relational database products, including 17 years working with Oracle. He holds MS and BA degrees in Computer Science. During the past 30 years, Andrew has taught college-level courses and made presentations at technical conferences and other venues on a wide range of topics. He made presentations at the Hotsos Symposiums in 2007, 2008 and 2009.

Cary Millsap
Cary Millsap is the founder and president of Method R Corporation. He is widely known in the Oracle community as a speaker, educator, consultant and writer. He is the author (with Jeff Holt) of Optimizing Oracle Performance (O’Reilly 2003). Cary is also co-author of Oracle Insights: Tales of the Oak Table. Cary is one of the former founding partners of Hotsos and worked at Oracle for ten years. In the early 1990s, Cary created Oracle’s OFA Standard, which became the default configuration standard for Oracle software installations worldwide. Cary is also a woodworker who builds and repairs furniture in his own shop.

Charles Hooper
Charles Hooper is the IT Manager and an Oracle database administrator at K&M Machine-Fabricating, Inc., where he has been employed since 2000. His role in the company extends well beyond Oracle database administration responsibilities, providing opportunities for database performance tuning, network administration, programming, hardware/software troubleshooting, mentoring fellow IT staff, and end-user training for the Infor Visual Enterprise ERP system as well as other custom developed applications. Charles is well known among the user community of Infor Visual Enterprise due to his years of participation in various user forums answering technical questions, providing the only Oracle presentation at Infor’s 2008 user’s conference, and providing presentations to regional user groups. Prior to joining K&M, he was a computer/technology instructor and Novell Netware administrator. Charles co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Dan Fink
Daniel Fink is a senior Oracle database engineer and consultant, specializing in Oracle optimization, troubleshooting, internals, and data recovery. He started as a DBA on Oracle 7.0.16 running Parallel Server on OpenVMS and has experience on major Unix platforms and releases up to 10g. He maintains a library of his research at www.optimaldba.com.

Doug Burns
Doug Burns is an independent contractor who has 20 years of experience working with Oracle in a range of industries and applications. As well as presenting at a number of conferences, he has developed, taught, and edited performance-related courses for both Oracle UK and Learning Tree International.

Jeff Needham
Jeff Needham currently works for Red Hat Software and is responsible for Oracle RDBMS, and GFS/RAC certifications. This includes tending the relationship between RHEL Engineering, Oracle Linux Engineering and Server Technologies. Having founded Scale Abilities with James Morle in 1989, Jeff pioneered high performance NFS for Oracle at Yahoo! using Netapp and AMD technologies. Jeff worked on performance and scalability for Oracle 7 and Oracle 8 during his tenure in the Redwood Shores kernel group. When not at work, he attempts fatherhood, collects Opteron servers and tries to maintain a vintage electronic music studio.

Jeremiah Wilton
Jeremiah Wilton has worked with Oracle technology since 1994. His main claim to fame is having been Amazon.com’s first database administrator, back in the pre-IPO days. For seven years, he helped Amazon.com survive exponential scaling, and a wide variety of nearly-catastrophic technology failures. Jeremiah owned and ran ORA-600 Consulting for a number of years, until it was acquired by Blue Gecko, Inc., a global provider of remote administration for Oracle, MySQL, and E-Business Suite. Jeremiah also teaches the Oracle certificate program for the University of Washington. Jeremiah is an Oracle Certified Master, and a frequent presenter at industry conferences and user groups. His publications and whitepapers can be found at www.bluegecko.net. Jeremiah co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Jonathan Lewis
Jonathan Lewis is a well-known figure in the Oracle world with more than 21 years’ experience of using the database (and several years on other products). He has published two books, contributed to three others, runs a couple of websites, and contributes fairly regularly to newsgroups, forums, User Group magazines, and speaking events around the world. Jonathan has been self-employed for most of his time in the IT industry. He specialises in short-term assignments, typically of a design, review, or trouble-shooting nature.

Joze Senegacnik
Jože Senegačnik has more than 20 years of experience in working with Oracle products. He began in 1988 with Oracle Database version 4 while working for the City of Ljubljana, where he had charge over the city’s municipal and geographic information systems. From 1993 to 2003, he worked in developing GIS systems for the Surveying and Mapping Authority of the Republic of Slovenia, and in the development of applications for other governmental institutions, all based on the Oracle database. More recently, he has specialized in performance optimization, having developed his own toolset for monitoring performance and analyzing trace files. Jože is an internationally recognized speaker. He is a regular speaker at user-group conferences, especially those put on by the Slovenian Oracle Users Group (SIOUG), the Independent Oracle Users Group (IOUG), and the United Kingdom Oracle Users Group (UKOUG). He also speaks routinely at the Hotsos Symposium and Oracle Open World. In addition to sharing his knowledge through conference talks, Jože conducts technical seminars organized either by Oracle University or himself. He was awarded Oracle ACE membership for his long record of positive contributions to the Oracle community. Jože co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Mogens Nørgaard
Mogens Nørgaard is technical director at Miracle A/S (http://www.miracleas.dk), a database knowledge center and consulting/training company based in Denmark, and is the cofounder and “father figure” of the OakTable Network. He is a renowned speaker at Oracle conferences all over the world and organizes some highly respected events through Miracle A/S, including the annual MasterClass (2001: Cary Millsap, 2002: Jonathan Lewis, 2003: Steve Adams, 2004: Tom Kyte) and the Miracle Database Forum, which is a 3-day conference for database people. He is also the cofounder of the Danish Oracle User Group (OUGKD) and was voted “Educator of the Year” in Oracle Magazine’s Editor’s Choice Awards, 2003.

Randolf Geist
Randolf Geist has been working with Oracle software for 15 years. Since 2000 he has operated as a freelance database consultant focusing primarily on performance related issues, and in particular helping people to understand and unleash the power of the Oracle cost based optimizer (CBO). He is writing on his blog about CBO-related issues and is also regularly contributing to the official OTN forums. Randolf is a member of the Oracle ACE program and is an Oracle Certified Professional DBA for Oracle Versions 8i, 9i, and 10g. He also maintains SQLTools++, an open-source Oracle GUI for Windows. Randolf co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Riyaj Shamsudeen
Riyaj Shamsudeen has 17+ years of experience in Oracle and 16+ years as an Oracle DBA/Oracle Applications DBA. He is the principal DBA for OraInternals, a consulting company resolving many advanced performance, recovery, RAC, and EBS11i issues. Riyaj specializes in RAC and performance issues and has authored many articles, mostly about performance, database internals, and optimizer internals. He is a frequent presenter at major conferences such as the Hotsos Symposium, UKOUG, and RMOUG. Riyaj co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Robyn Sands
Robyn Anderson Sands is a software engineer for Cisco Systems. In a previous incarnation, she worked in Industrial Engineering, Manufacturing Development, and Quality Engineering at Lockheed Martin, supporting the P-7, F-22, and C-130J programs. Robyn has been working with databases and Oracle software since around 1996. She began her work with Oracle by developing tools for scheduling, composite fabrication capacity modeling, and engineering work flow, and progressing to the implementation and administration of data warehouse, PeopleSoft, and SAP systems. Current projects include “architecting” and managing the development of embedded database systems for Cisco customers, and searching for new ways to design and develop database systems with consistent performance and minimal maintenance requirements. She has been a speaker at UKOUG, Miracle conferences, Oracle Open World, and the Hotsos Symposium.  Robyn co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Tanel Poder
Tanel Põder is an experienced consultant with deep expertise in Oracle database internals, advanced performance tuning and end-to-end troubleshooting. He specializes in solving complex problems spanning multiple infrastructure layers such as UNIX, Oracle, application servers and storage. He is one of the first Oracle Certified Masters in the world, passing the OCM DBA exam in 2002; and is a frequent speaker at major Oracle conferences worldwide.

Tim Gorman
Tim Gorman began his IT career in 1984 as a C programmer on UNIX and VMS systems, working on medical and financial systems as an application developer, systems programmer, and systems administrator. He joined Oracle Corporation in 1990 as a consultant, then became an independent consultant in 1998, and has worked for SageLogix since 2000. Gorman is the co-author of Essential Oracle8i Data Warehousing and Oracle8 Data Warehousing. He specializes in performance tuning applications, databases, and systems, as well as data warehouse design and implementation, backup and recovery, architecture and infrastructure, and database administration. Gorman still considers himself a pretty good coder, although the market for C programs has dried up somewhat lately. Tim co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

———-

Awaiting Confirmation (Subject to Change):

Christian Antognini
Since 1995, Christian Antognini has focused on understanding how the Oracle database engine works. His main interests include logical and physical database design, the integration of databases with Java applications, the query optimizer and basically everything else related to application performance management and optimization. He is currently working as a principal consultant and trainer at Trivadis AG (http://www.trivadis.com) in Zürich, Switzerland. If Christian is not helping one of his customers get the most out of Oracle, he is somewhere lecturing on application performance management or new Oracle database features for developers. In addition to classes and seminars organized by Trivadis, he regularly presents at conferences and user group meetings. He is a proud member of the Trivadis Performance Team and of the OakTable Network. Christian is the author of the book Troubleshooting Oracle Performance (Apress, 2008).

Eric Grancher
Eric Grancher has been working at CERN since 1996 in the Information Technology Division. He has worked on different aspects of database and application server products: database design consulting, cluster database administration and usage with commodity hardware, application server consulting, and database and application server monitoring. He is currently responsible for a team that focuses on database oriented application deployment. He holds an engineering diploma from the French telecommunications engineering school “ENST – Telecom Paris” and a Magistère (Master) in Parallel Computing from the École Normale Supérieure de Lyon.

James Morle
With 20 years’ experience in professional computing, James Morle has been personally responsible for the architecture and implementation of some of the world’s largest and most complex business systems, including a 3-node Oracle Parallel Server configuration that services 3000 online users. James is a well-respected member of the Oracle community and is the author of the critically acclaimed book “Scaling Oracle8i: Building Highly Scalable OLTP System Architectures”. He is the cofounder of Scale Abilities (http://www.scaleabilities.com), a specialist consulting and training company focusing on aspects of system engineering related to building very large and complex computer systems. James also co-authored the book “Oracle Insights: Tales of the Oak Table”.

Marco Gralike
Marco Gralike, working for AMIS Services BV as a Principal Oracle Database Consultant in the Netherlands, has experience as a DBA since 1994 (Oracle 6). Marco is also skilled in other fields, such as operating system administration and application servers, mainly in finding working, well-performing solutions. Marco has specialized in Oracle XMLDB since 2003, while keeping a focus on his old love, database administration and performance. He is an Oracle XMLDB enthusiast and a dedicated contributor to the Oracle user group community, helping people with the steep XMLDB learning curve. To this purpose, Marco also devoted his personal blog site to XMLDB and other Oracle issues. Marco has presented several times in Holland, at UKOUG (2007, 2008), and at Oracle Open World (2008, 2009). He was awarded the Oracle ACE title for his work in the Oracle XMLDB area (2007).

———-

The MOTS Executive Committee:

Carol Dacko – OakTable member
Mark Bobak – OakTable member
Mark Powell – OakTable member
Charles Hooper – OakTable member
Ric Van Dyke – Hotsos Friend
Myke Ratcliff – Esteemed Friend

The Michigan OakTable Symposium (MOTS) is not intended as a money making event. As such, the cost for attending the event has been set as low as possible to essentially “break-even”. The primary mission of the symposium is to provide attendees with logical, structured processes that lead to reproducible success, rather than treating the Oracle Database as a magical, unpredictable, black-box that yields unexpected behavior.

Early registration at a rate of \$450 per person will end April 30, 2010. Registration between May 1 and July 31 is \$600 per person, and increases to \$750 per person after July 31. These prices do not include hotel room costs (roughly \$100 per night), but do include a high quality buffet lunch. You may register for the event at the site:

Preliminary Schedule for Thursday, 16 September 2010 and Friday, 17 September 2010:

Michigan OakTable Schedule

Abstracts for each of the sessions are located on the official Michigan OakTable site.

8 01 2010

January 8, 2010 (Updated February 5)

… at MOS… uh… in light of the Flashing incident with the other MOS, maybe it is better to stick with the abbreviation MOTS – the Michigan OakTable Symposium.

For the first time ever, the OakTable Network is organizing a two day training event on the Thursday and Friday (September 16th and 17th, 2010) just before Oracle OpenWorld, in Ann Arbor, Michigan (not far from the University of Michigan). The details are still being finalized, so for right now here are the basics:

• Seating will be limited to a maximum of 300 people.
• OakTable Network members from around the world will attend and provide the training sessions.
• Sessions will be provided that will appeal to DBAs and developers.
• MOTS is not intended as a money making event. As such, the cost for attending the event has been set as low as possible to essentially “break-even”.

Probable Speakers Include (subject to change):

Early registration at a rate of \$450 per person will end April 30, 2010. Registration between May 1 and July 31 is \$600 per person, and increases to \$750 per person after July 31. These prices do not include hotel room costs (roughly \$100 per night), but do include a high quality buffet lunch.

———-

February 5 Update:

I heard that a couple more people will likely be added to the list of presenters, and it looks like I might have the opportunity to present with Randolf.  Biographies for the presenters will hopefully be available within a couple of weeks, along with the site for registration.

—————————————————-

Latest information: Michigan OakTable Symposium (Advert)
Official website for the event: http://michigan.oaktable.net

## Failure to Collect Fixed Object Statistics Leads to ORA-01013 or ORA-07445

9 12 2009

December 8, 2009 (note added December 9, 2009)

An interesting post from almost two years ago in the comp.databases.oracle.server Usenet group:

```select distinct name from all_source;

.......
SYS_YOID0000052452\$
......
3196 rows selected.```

Ok no problem.

```select distinct owner from v\$access
SQL> /
select distinct owner from v\$access
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation```

I had to terminate it as it became non-terminating.

“select owner from v\$access” returns a mere 193 rows, and it cannot sort it?

I suspect there exists an infinite loop somewhere.

This was attempted on 10gR2 and 11gR1, and both had the same problem.

How would someone start investigating this problem?  Find something that appears out of the ordinary, and start probing.  This is the approach that I used:

I was able to reproduce this problem on Oracle 10.2.0.2 with the Oracle October 2006 CPU on 64 bit Windows 2003.

From the udump trace file:

```ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION]
Current SQL statement for this session:
select distinct owner from v\$access
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
__intel_new_memcpy+           0000000000000000     000000000 000000000
0118AF5A0 610                                                7FF970C7598
000007FF95D155F0     CALL???  __intel_new_memcpy+  0000007FF 013DF42E8
001749686
610                  000000000
...```

From a 10046 trace at level 8:

```*** 2008-01-23 06:51:57.901
WAIT #3: nam='latch: library cache' ela= 8566 address=1398554576 number=214 tries=1 obj#=-1 tim=5204643696
WAIT #3: nam='latch: library cache' ela= 859 address=1398554576 number=214 tries=1 obj#=-1 tim=5205047765
WAIT #3: nam='latch: library cache' ela= 2958 address=1398554576 number=214 tries=1 obj#=-1 tim=5205183748
WAIT #3: nam='latch: library cache' ela= 551 address=1398554576 number=214 tries=1 obj#=-1 tim=5205267198
*** 2008-01-23 06:54:39.117
WAIT #3: nam='latch: library cache' ela= 813 address=1398554576 number=214 tries=1 obj#=-1 tim=5365848854
*** 2008-01-23 06:55:06.288
WAIT #3: nam='latch: library cache' ela= 30 address=1398554576 number=214 tries=1 obj#=-1 tim=5393019180
*** 2008-01-23 06:55:30.006
WAIT #3: nam='latch: library cache' ela= 68 address=1398554576 number=214 tries=1 obj#=-1 tim=5416746379
*** 2008-01-23 06:55:50.584
WAIT #3: nam='latch: library cache' ela= 33 address=1398554576 number=214 tries=1 obj#=-1 tim=5437323921
*** 2008-01-23 06:57:09.536
WAIT #3: nam='latch: library cache' ela= 111 address=1398554576 number=214 tries=1 obj#=-1 tim=5516279642
*** 2008-01-23 06:57:20.895
WAIT #3: nam='latch: library cache' ela= 77 address=1398554576 number=214 tries=1 obj#=-1 tim=5527627340
*** 2008-01-23 06:57:36.082
WAIT #3: nam='latch: library cache' ela= 246 address=1398554576 number=214 tries=1 obj#=-1 tim=5542815685
*** 2008-01-23 06:57:57.957
WAIT #3: nam='latch: library cache' ela= 123 address=1398554576 number=214 tries=1 obj#=-1 tim=5564704225
*** 2008-01-23 06:58:14.644
WAIT #3: nam='latch: library cache' ela= 63 address=1398554576 number=214 tries=1 obj#=-1 tim=5581385020
*** 2008-01-23 06:58:26.269
WAIT #3: nam='latch: library cache' ela= 62 address=1398554576 number=214 tries=1 obj#=-1 tim=5593004724
*** 2008-01-23 06:58:48.346
WAIT #3: nam='latch: library cache' ela= 42 address=1398554576 number=214 tries=1 obj#=-1 tim=5615094241
WAIT #3: nam='latch: library cache' ela= 8 address=1398554576 number=214 tries=1 obj#=-1 tim=5615168161
...```

For an average 60 second interval, the session had the following latch statistics:

```Latch         Child# Level   Gets  Misses  Sleeps  Sleeps1
LIBRARY CACHE    1       5   529418     25      0       25
LIBRARY CACHE    2       5   539720     36      0       36
LIBRARY CACHE    3       5   519189     15      0       15
LIBRARY CACHE    4       5   516501     55      0       55
LIBRARY CACHE    5       5   524907   1744      4     1740```
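A quick arithmetic sketch, using only the figures from the table above, highlights how concentrated the latch contention is:

```python
# Library cache child latch statistics from the 60 second sample above:
# (child#, gets, misses, sleeps)
samples = [
    (1, 529_418, 25, 0),
    (2, 539_720, 36, 0),
    (3, 519_189, 15, 0),
    (4, 516_501, 55, 0),
    (5, 524_907, 1_744, 4),
]
for child, gets, misses, sleeps in samples:
    # miss rate = misses / gets for each child latch
    print(f"child {child}: miss rate {misses / gets:.4%}, sleeps {sleeps}")
```

Child latch #5 misses on roughly 0.33% of its gets, about 30 times the rate of any other child, which is consistent with the 10046 trace above, where every ‘latch: library cache’ wait reports the same latch address.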

On the waits, P1 is 13985, P2 is 214, P3 is 0, Buf is 99, SQL Hash is 3250939240.

```SELECT
SQL_TEXT
FROM
V\$SQL
WHERE
HASH_VALUE=3250939240

SQL_TEXT
-----------------------------------
select distinct owner from v\$access```

```SELECT
ID,
SUBSTR(OPERATION,1,12) OPERATION,
SUBSTR(OPTIONS,1,12) OPTIONS,
SUBSTR(OBJECT_OWNER||'.'||OBJECT_NAME,1,20) OBJECT,
SUBSTR(OBJECT_TYPE,1,13) OBJECT_TYPE,
PARENT_ID,
DEPTH,
POSITION,
CPU_COST
FROM
V\$SQL_PLAN_STATISTICS_ALL
WHERE
HASH_VALUE=3250939240
AND CHILD_NUMBER=1
ORDER BY
ID;

ID OPERATION    OPTIONS      OBJECT               OBJECT_TYPE PARENT_ID      DEPTH   POSITION   CPU_COST
-- ------------ ------------ -------------------- ------------- ---------- ---------- ---------- ----------
1 HASH UNIQUE       .                                                  0          1          1    2142850
2 NESTED LOOPS              .                                          1          2          1    1115000
3 NESTED LOOPS              .                                          2          3          1    1080000
4 MERGE JOIN CARTESIAN    .                                            3          4          1     730000
5 FIXED TABLE  FULL         SYS.X\$KSUSE          TABLE (FIXED)         4          5          1     380000
6 BUFFER SORT         .                                                4          5          2     350000
7 FIXED TABLE  FULL         SYS.X\$KGLDP          TABLE (FIXED)         6          6          1     350000
8 FIXED TABLE  FIXED INDEX  SYS.X\$KGLLK (ind:1)  TABLE (FIXED)         3          4          2       3500
9 FIXED TABLE  FIXED INDEX  SYS.X\$KGLOB (ind:1)  TABLE (FIXED)         2          3          2       3500```

```SELECT
ID,
ACCESS_PREDICATES,
FILTER_PREDICATES
FROM
V\$SQL_PLAN_STATISTICS_ALL
WHERE
HASH_VALUE=3250939240
AND CHILD_NUMBER=1
ORDER BY
ID;

ID ACCESS_PREDICATES FILTER_PREDICATES
-- ----------------- -----------------
1
2
3
4
5                   "S"."INST_ID"=USERENV('INSTANCE')
6
7```

Then check a different server:

The query eventually completed on the 32 bit version of Oracle 10.2.0.2 with the Oracle October 2006 CPU.

```PARSE 1|CPU S     0.000000|CLOCK S    0.006227|ROWs 0
EXEC  1|CPU S     0.000000|CLOCK S    0.000201|ROWs 0
FETCH 2|CPU S 13112.828125|CLOCK S  926.981803|ROWs 6

Row Source Execution Plan:
(Rows 6)   HASH UNIQUE (cr=0 pr=0 pw=0 time=568347223 us)
(Rows 3463)    NESTED LOOPS  (cr=0 pr=0 pw=0 time=3464796755 us)
(Rows 3463)     NESTED LOOPS  (cr=0 pr=0 pw=0 time=3464592419 us)
(Rows 1613768)      MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=21019488 us)
(Rows 236)       FIXED TABLE FULL X\$KSUSE (cr=0 pr=0 pw=0 time=2376 us)
(Rows 1613768)       BUFFER SORT (cr=0 pr=0 pw=0 time=12951356 us)
(Rows 6838)        FIXED TABLE FULL X\$KGLDP (cr=0 pr=0 pw=0 time=41073 us)
(Rows 3463)      FIXED TABLE FIXED INDEX X\$KGLLK (ind:1) (cr=0 pr=0 pw=0 time=13094082350 us)
(Rows 3463)     FIXED TABLE FIXED INDEX X\$KGLOB (ind:1) (cr=0 pr=0 pw=0 time=166548 us)```

Note the merge Cartesian join between the 236 rows from X\$KSUSE and the 6,838 rows from X\$KGLDP, which produced the 1,613,768 rows passed to the nested loops joins.
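The row counts follow directly from the join method: a MERGE JOIN CARTESIAN emits one row for every pairing of rows from its two inputs. A quick check against the figures in the row source plan above:

```python
# A MERGE JOIN CARTESIAN emits (rows of input 1) * (rows of input 2) rows.
ksuse_rows = 236    # X$KSUSE rows surviving the INST_ID filter
kgldp_rows = 6_838  # X$KGLDP rows buffered by the BUFFER SORT
cartesian_rows = ksuse_rows * kgldp_rows
print(cartesian_rows)  # 1613768 - matches the row source plan above
```

Each of those 1.6 million rows then probes the X\$KGLLK fixed index, which is where the roughly 13,000 CPU seconds shown in the FETCH line were spent.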

The wait events:
0.03 seconds on latch: library cache

Then keep probing:

I may have found something that will help the OP – it hit me when I saw very slow performance with the same SQL statement on 32 bit Oracle 10.2.0.3 and 11.1.0.6, after looking at the DBMS_XPLAN output.

The DBMS_XPLAN on 10.2.0.3:

```------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE               |                 |      1 |    105 |      5 |00:02:51.06 |       |       |          |
|   2 |   NESTED LOOPS             |                 |      1 |    105 |   1131 |00:02:51.04 |       |       |          |
|   3 |    NESTED LOOPS            |                 |      1 |     10 |   1131 |00:02:50.39 |       |       |          |
|   4 |     MERGE JOIN CARTESIAN   |                 |      1 |    100 |    180K|00:00:01.27 |       |       |          |
|*  5 |      FIXED TABLE FULL      | X\$KSUSE         |      1 |      1 |    236 |00:00:00.01 |       |       |          |
|   6 |      BUFFER SORT           |                 |    236 |    100 |    180K|00:00:00.55 | 36864 | 36864 |32768  (0)|
|   7 |       FIXED TABLE FULL     | X\$KGLDP         |      1 |    100 |    763 |00:00:00.01 |       |       |          |
|*  8 |     FIXED TABLE FIXED INDEX| X\$KGLLK (ind:1) |    180K|      1 |   1131 |00:02:48.31 |       |       |          |
|*  9 |    FIXED TABLE FIXED INDEX | X\$KGLOB (ind:1) |   1131 |     10 |   1131 |00:00:00.64 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("S"."INST_ID"=USERENV('INSTANCE'))```

Note the MERGE JOIN CARTESIAN, and how the estimated rows compare with the actual rows.

The DBMS_XPLAN on 11.1.0.6:

```select distinct owner from v\$access

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE                |                 |      1 |      1 |      6 |00:00:40.28 |   951K|   951K|  860K (0)|
|   2 |   NESTED LOOPS              |                 |      1 |      1 |   2342 |00:00:40.27 |       |       |          |
|   3 |    MERGE JOIN CARTESIAN     |                 |      1 |      1 |   2842K|00:00:11.37 |       |       |          |
|   4 |     NESTED LOOPS            |                 |      1 |      1 |  16721 |00:00:00.38 |       |       |          |
|   5 |      FIXED TABLE FULL       | X\$KGLDP         |      1 |    100 |  16721 |00:00:00.05 |       |       |          |
|*  6 |      FIXED TABLE FIXED INDEX| X\$KGLOB (ind:1) |  16721 |      1 |  16721 |00:00:00.21 |       |       |          |
|   7 |     BUFFER SORT             |                 |  16721 |      1 |   2842K|00:00:02.91 |  4096 |  4096 | 4096  (0)|
|*  8 |      FIXED TABLE FULL       | X\$KSUSE         |      1 |      1 |    170 |00:00:00.01 |       |       |          |
|*  9 |    FIXED TABLE FIXED INDEX  | X\$KGLLK (ind:1) |   2842K|      1 |   2342 |00:00:15.49 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
8 - filter("S"."INST_ID"=USERENV('INSTANCE'))```

The above executed more quickly, and the plan is slightly different, but the MERGE JOIN CARTESIAN is still present, as is the difference between the estimated and actual number of rows.

The fixed object stats must be wrong (I recall having a problem with that a couple of years ago when performing the following)…

```SQL> CONNECT / AS SYSDBA
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS('ALL');
BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS('ALL'); END;

*
ERROR at line 1:
ORA-20000: Insufficient privileges to analyze an object in Fixed Ob
ORA-06512: at "SYS.DBMS_STATS", line 17951
ORA-06512: at "SYS.DBMS_STATS", line 18404
ORA-06512: at "SYS.DBMS_STATS", line 18951
ORA-06512: at line 1```

The same error occurs on Oracle 10.2.0.2, 10.2.0.3, and 11.1.0.6 when connected as the internal user, as SYS AS SYSDBA, and as SYSTEM.
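One possible explanation for the error, based on the documented parameter list of DBMS_STATS.GATHER_FIXED_OBJECTS_STATS (an assumption on my part, not something verified in these tests): the procedure's first parameter is STATTAB, the name of a user statistics table, rather than a scope keyword, so passing 'ALL' may cause the procedure to treat ALL as the name of a user statistics table.  The documented way to gather statistics on all of the fixed objects is to omit the parameter entirely (or pass NULL):

```SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS```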

There must be another way:

```SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',GATHER_FIXED=>TRUE)

PL/SQL PROCEDURE SUCCESSFULLY COMPLETED.```

The new DBMS_XPLANs:
10.2.0.3:

```------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE              |                 |      1 |      7 |      4 |00:00:00.09 |       |       |          |
|   2 |   NESTED LOOPS            |                 |      1 |   1822 |   1003 |00:00:00.08 |       |       |          |
|*  3 |    HASH JOIN              |                 |      1 |   1822 |   1003 |00:00:00.05 |   898K|   898K| 1099K (0)|
|*  4 |     HASH JOIN             |                 |      1 |   1822 |   1897 |00:00:00.03 |  1010K|  1010K|  639K (0)|
|*  5 |      FIXED TABLE FULL     | X\$KSUSE         |      1 |    236 |    236 |00:00:00.01 |       |       |          |
|   6 |      FIXED TABLE FULL     | X\$KGLLK         |      1 |   1822 |   1897 |00:00:00.01 |       |       |          |
|   7 |     FIXED TABLE FULL      | X\$KGLDP         |      1 |   2892 |    649 |00:00:00.01 |       |       |          |
|*  8 |    FIXED TABLE FIXED INDEX| X\$KGLOB (ind:1) |   1003 |      1 |   1003 |00:00:00.01 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("S"."INST_ID"=USERENV('INSTANCE'))```

The MERGE JOIN CARTESIAN is gone and the execution time dropped from 2 minutes, 51 seconds to 0.09 seconds.

11.1.0.6:

```-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE               |                 |      1 |     19 |      1 |00:00:00.04 |  1037K|  1037K|  368K (0)|
|   2 |   NESTED LOOPS             |                 |      1 |   1139 |    134 |00:00:00.04 |       |       |          |
|   3 |    NESTED LOOPS            |                 |      1 |   1139 |    134 |00:00:00.03 |       |       |          |
|*  4 |     HASH JOIN              |                 |      1 |   1139 |   1144 |00:00:00.02 |  1010K|  1010K| 1205K (0)|
|*  5 |      FIXED TABLE FULL      | X\$KSUSE         |      1 |    170 |    170 |00:00:00.01 |       |       |          |
|   6 |      FIXED TABLE FULL      | X\$KGLLK         |      1 |   1139 |   1144 |00:00:00.01 |       |       |          |
|*  7 |     FIXED TABLE FIXED INDEX| X\$KGLDP (ind:1) |   1144 |      1 |    134 |00:00:00.01 |       |       |          |
|*  8 |    FIXED TABLE FIXED INDEX | X\$KGLOB (ind:1) |    134 |      1 |    134 |00:00:00.01 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("S"."INST_ID"=USERENV('INSTANCE'))```

The MERGE JOIN CARTESIAN is gone and the execution time dropped from 40.28 seconds to 0.04 seconds.

The OP may be able to run the same DBMS_STATS.GATHER_SCHEMA_STATS procedure to work around the problem.

How can I determine whether I collected statistics on the fixed tables?  A search on Metalink found an article from 2004:
(Edit: Note that the above link is gone in a Flash.  If anyone is able to find the message on the Oracle support site, the site formerly known as Metalink, please let me know the address.)

In the article, Jonathan Lewis mentioned that tab_stats\$ could be checked.
Test database on 64 bit Windows:

```SELECT
  *
FROM
  SYS.TAB_STATS\$```

The above returned about 582 rows after running:

`EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',GATHER_FIXED=>TRUE);`

I then checked a production database, and found no rows returned. Using the suggestion from the Metalink article:

`EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(NULL);`

The stats collection completed much more quickly, as it did not analyze the full SYS schema, and 582 rows were then returned by the query against SYS.TAB_STATS\$.  It looks like this simple query may be used to determine whether fixed object statistics need to be collected.
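As a further cross-check (an assumption based on the documented DBA_TAB_STATISTICS view, which was not part of the original tests), the LAST_ANALYZED dates for the fixed tables may be examined directly; rows with a NULL LAST_ANALYZED date suggest that fixed object statistics have not yet been collected:

```SELECT
  TABLE_NAME,
  LAST_ANALYZED
FROM
  DBA_TAB_STATISTICS
WHERE
  OBJECT_TYPE='FIXED TABLE'
ORDER BY
  TABLE_NAME;```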