If I Need to Fetch My Rows Faster, Is There Any Way?

17 01 2010

Yes, the title of this blog article is the question, the whole question, and nothing but the question from this OTN post:
http://forums.oracle.com/forums/thread.jspa?threadID=1013283&tstart=0

The OP stated in the subject line that his query needed to retrieve 1 lakh rows, which I assumed meant 100,000,000 rows, but a Google search indicates that it is just 100,000 rows.

One of the responders went in for the kill with this response:

The most precise way for fetching rows faster can be attained in number of ways.

  1. The first way is apply indexes and in case indexes got large number of deletions then rebuild it.
  2. The next way is the optimizer you are choosing.

Literaly these parameters are effective then this thing will automatically lead to faster fetching.

I was a bit confused by the above response (I dislike being confused).  So, I asked that responder for clarification of the suggestions for improving the precise way of fetching rows faster (for some reason, the phrase “Battle Against Any Guess” popped into my head).

  1. Are you suggesting that the OP should rebuild indexes to improve how quickly Oracle is able to find rows when there were a lot of deletions in the table? There is a fun series of blog articles here that might help before the OP attempts to rebuild indexes: http://richardfoote.wordpress.com/category/index-rebuild/
  2. Are you suggesting that the OP switch between the RULE based optimizer and the COST based optimizer (or vice-versa)?

I then offered the following to the original poster:

  1. What about changing the array fetch size (the number of rows retrieved in a single fetch request)?
  2. Why are you selecting so many rows – will a large number of the rows be eliminated in the client-side application? Is it possible to reduce the number of rows returned from the database by aggregating, filtering, or processing the data on the server?
  3. Are there any columns being returned from the database that are not needed? If so, remove those columns.
  4. Is there a high-latency WAN connection, or a slow LAN connection, between the server and the client? If so, repeat the test when connected at gigabit speeds.
  5. Are table columns included in inline views in the SQL statement that are not used (discarded, not returned to the client) outside the inline view? If so, get rid of those columns – there is no sense in carrying those columns through a join, group by, or sort operation if the columns are never used. The same applies to statically defined views accessed by the SQL statement.
  6. Assuming that the cost-based optimizer is in use, have you checked the various optimizer parameters – have you done something silly like setting OPTIMIZER_INDEX_COST_ADJ to 1 and setting OPTIMIZER_MODE to FIRST_ROWS?
  7. Have you set other parameters to silly values, like setting DB_FILE_MULTIBLOCK_READ_COUNT to 0, 1, 8, 16, etc.?
  8. Have you collected system (CPU) statistics, if available on your Oracle version (what is the Oracle version number, ex: 8.1.7.3, 9.2.0.7, 11.2.0.1, etc.)? (Sketched after this list.)
  9. Have you examined an explain plan (or better yet, a DBMS_XPLAN output with ‘ALLSTATS LAST’ as the format parameter)? (See the sketch after this list.)
  10. Have you captured a 10046 trace at level 8, and either manually reviewed the trace file or passed it through TKPROF (or another utility)? (Sketched after this list.)
  11. Have you tried to re-write the SQL statement into an equivalent, but more efficient, form?
  12. Have you collected a 10053 trace for a hard parse of the SQL statement? (Sketched after this list.)
  13. Have you recently collected table and index statistics for the objects? (Sketched after this list.)
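
For items 8, 9, 10, 12, and 13 above, here is a minimal SQL*Plus sketch of what those suggestions look like in practice – the table name T1 is a hypothetical placeholder, and the DBMS_XPLAN.DISPLAY_CURSOR call requires Oracle 10.1 or later:

SET SERVEROUTPUT OFF    -- so DISPLAY_CURSOR targets the last executed SQL statement

-- Item 9: execute the SQL statement with runtime statistics enabled, then
-- display the actual execution plan and statistics:
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM T1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

-- Item 10: write a 10046 extended SQL trace (level 8 includes wait events)
-- for the current session, execute the slow SQL statement, then stop tracing:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SELECT COUNT(*) FROM T1;
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

-- Item 12: write a 10053 optimizer trace for the next hard parse in this session:
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';

-- Items 8 and 13: gather system (CPU) statistics (9i and later), and refresh
-- the table and index statistics:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD')
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER, TABNAME=>'T1', CASCADE=>TRUE)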

What about finding the root cause of the performance problem? Sure, it might be fun to blindly try things to see if they help, but how do you know if what you have tried has helped without measuring?

This brings me to the next suggestion – before posting a request to any forum or other website, make certain that you have provided something, anything, that will help someone answer your question. Suggestions for what to include in your post are outlined here:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
http://forums.oracle.com/forums/thread.jspa?messageID=1812597

This blog is not the place to post requests for help, and I likely will not respond to requests for help by email.  Requests for help should be directed to an appropriate forum or Oracle support (Metalink/MOS); those forums include the comp.databases.oracle.server / comp.databases.oracle.misc Usenet groups, the OTN forums, AskTom.Oracle.com, and Oracle-L.


2 responses

18 01 2010
Anand

Hi Sir,

Nice to read all the suggestions. Can you throw some light on “What about changing the array fetch size (number of rows fetched in a single fetch request)?”

How can we do it?

Regards,
Anand

18 01 2010
Charles Hooper

In some cases, it depends on the development environment used for the application. In other cases, the application may offer a configuration file to control the array fetch size.

In SQL*Plus:

SET ARRAYSIZE 100

In PL/SQL:
Use bulk collection (I believe that PL/SQL will do this automatically in recent releases).
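
A minimal sketch of an explicit bulk fetch, assuming a table named T1 (the table name and the LIMIT value of 100 are placeholders):

DECLARE
  TYPE T_ROW_TABLE IS TABLE OF T1%ROWTYPE;
  L_ROWS T_ROW_TABLE;

  CURSOR C_DATA IS
    SELECT * FROM T1;
BEGIN
  OPEN C_DATA;
  LOOP
    -- Retrieve up to 100 rows per fetch call, rather than one row at a time
    FETCH C_DATA BULK COLLECT INTO L_ROWS LIMIT 100;

    FOR I IN 1 .. L_ROWS.COUNT LOOP
      NULL;  -- Process each row here
    END LOOP;

    -- %NOTFOUND is TRUE when the last fetch returned fewer than 100 rows,
    -- so the exit test belongs after the rows are processed
    EXIT WHEN C_DATA%NOTFOUND;
  END LOOP;

  CLOSE C_DATA;
END;
/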

When using ADO to connect to the database:
Add FetchSize to the connection string:

dbDatabase.ConnectionString = "Provider=OraOLEDB.Oracle;Data Source=" & strDatabase & ";User ID=" & strUserName & ";Password=" & strPassword & ";ChunkSize=1000;FetchSize=100;"

Or with ADO it may be specified for a single SQL statement after a recordset for the SQL statement has been opened:

snpData.CacheSize = 100

Java and the Oracle Call Interface also have settings to control the array fetch size, but I do not have examples – maybe someone else reading this article will share.

Some applications developed with the Centura / Gupta / Unify / (whatever they call themselves today) SQLWindows allow the array fetch size to be controlled by making a change in the SQL.INI file used by the application:

[oragtwy]
fetchrow=100
