April 10, 2010

There were a couple of recent threads on the OTN forums about the behavior of the DB_FILE_MULTIBLOCK_READ_COUNT parameter’s auto-tuning capability, which was introduced with Oracle Database 10g Release 2. The threads included this discussion about a problem where the DB_FILE_MULTIBLOCK_READ_COUNT parameter refused to remain at a value of 0, and this discussion that described a problem where Oracle Database was auto-tuning the parameter to a value of 36.

I found the second discussion to be more interesting than the first, but in the first discussion I provided the following quotes from the Oracle documentation:

“As of Oracle Database 10g release 2, the default value of this parameter is a value that corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it will be set to a value that is equal to the maximum I/O size that can be performed efficiently divided by the standard block size. Note that if the number of sessions is extremely large the multiblock read count value is decreased to avoid the buffer cache getting flooded with too many table scan buffers.”

“DB_FILE_MULTIBLOCK_READ_COUNT: The maximum I/O size for full table scans is computed by multiplying this parameter with DB_BLOCK_SIZE. (the upper value is subject to operating system limits). If this value is not set explicitly (or is set to 0), the default value corresponds to the maximum I/O size that can be efficiently performed and is platform-dependent.”
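Taken together, the two quoted passages describe an upper bound for the auto-tuned default: the maximum efficient I/O size divided by the block size. A minimal sketch of that calculation follows; the 1MB maximum I/O size is the documented value "for most platforms" and is an assumption for any specific system:

```python
def default_mbrc(max_io_bytes=1048576, db_block_size=8192):
    """Documented default upper bound: the maximum efficient I/O
    size (1MB on most platforms) divided by the block size."""
    return max_io_bytes // db_block_size

print(default_mbrc())                      # 8KB blocks -> 128
print(default_mbrc(db_block_size=16384))   # 16KB blocks -> 64
```

With the common 8KB block size this yields the value 128 that appears repeatedly in the test results below.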

While the documentation did not explicitly state so, I suspected that the size of the buffer cache might also contribute to the auto-tuned value of the DB_FILE_MULTIBLOCK_READ_COUNT parameter.  In the same discussion thread Jonathan Lewis indicated that the DB_CACHE_SIZE parameter was part of the calculation.  That is more or less the same answer, so my suspicion might have some merit.  I think that we need a test case to see if the size of the buffer cache actually has an impact on the auto-tuned parameter.

I built a new database in 64-bit Oracle Database 11g R2 with an 8KB block size, with all of the SGA parameters unspecified (the memory target parameters were also left unspecified), except for SGA_TARGET and SGA_MAX_SIZE.  The PGA_AGGREGATE_TARGET was set to 1800MB.  For the test case I altered the values of SGA_TARGET and SESSIONS (the latter value apparently rounds up to the nearest multiple of 8).  The results of the test follow (the value of Database Buffers also decreased for the 8000M tests as the number of sessions increased, but the changing values were not recorded):
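The apparent rounding of the SESSIONS value up to the nearest multiple of 8, visible in the tables below (6448, 3056, 12608, and so on), can be sketched as:

```python
import math

def round_sessions(requested, multiple=8):
    """Round a requested SESSIONS value up to the nearest
    multiple of 8, as apparently observed in these tests."""
    return math.ceil(requested / multiple) * multiple

print(round_sessions(6444))   # -> 6448
print(round_sessions(6400))   # -> 6400 (already a multiple of 8)
```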

SGA_TARGET = 8000M:

SESSIONS   DB_FILE_MULTI   Database Buffers
--------   -------------   ----------------
    6000             128
    6400             128      6,979,321,856
    6448             127      6,979,321,856
    6504             126
    7000             117
    8000             102
   16000              51
   32000              25
   64000              12


SGA_TARGET = 4000M:

SESSIONS   DB_FILE_MULTI   Database Buffers
--------   -------------   ----------------
    3000             128      3,338,665,984
    3056             128      3,338,665,984
    3080             127      3,338,665,984
    3104             126      3,338,665,984
    3128             125      3,338,665,984
    3152             125      3,338,665,984
    6304              62      3,338,665,984
    6400              61      3,338,665,984
   12608              31      3,137,339,392
   25216              15      2,432,696,320
   50432               7      1,056,964,608
   64000               6        251,658,240


SGA_TARGET = 2000M:

SESSIONS   DB_FILE_MULTI   Database Buffers
--------   -------------   ----------------
    1400             128      1,560,281,088
    1504             123      1,560,281,088
    2000              92      1,560,281,088
    4000              46      1,560,281,088
    8000              23      1,358,954,496
   16000              11        905,969,664

In general, as the number of sessions doubles, the value of the auto-tuned DB_FILE_MULTIBLOCK_READ_COUNT drops to roughly half of its previous value.  With the 8000MB SGA_TARGET, the DB_FILE_MULTIBLOCK_READ_COUNT hit 12 at 64,000 sessions.  With the 4000MB SGA_TARGET, the DB_FILE_MULTIBLOCK_READ_COUNT hit 6 at 64,000 sessions.  With the 2000MB SGA_TARGET, the DB_FILE_MULTIBLOCK_READ_COUNT hit 11 at 16,000 sessions, roughly the same value that was achieved when both the SGA_TARGET and the SESSIONS parameter were 4 times larger.
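The halving pattern is consistent with a simple hypothesis, which these tests do not confirm: the auto-tuned value is roughly the buffer cache size divided by (SESSIONS × block size), capped at the platform maximum I/O size expressed in blocks. A rough sketch follows; the cap constant and the exact rounding behavior are assumptions:

```python
def estimate_mbrc(buffer_cache_bytes, sessions, db_block_size=8192,
                  max_io_bytes=1048576):
    """Hypothesized approximation: the buffer cache divided among
    all sessions, expressed in blocks, capped at the maximum
    efficient I/O size.  The rounding behavior is a guess."""
    cap = max_io_bytes // db_block_size
    per_session_blocks = buffer_cache_bytes // (sessions * db_block_size)
    return min(cap, per_session_blocks)

# Compare against a measured row above (Database Buffers of
# 3,338,665,984 bytes with 6,400 sessions reported a value of 61):
print(estimate_mbrc(3338665984, 6400))   # -> 63, close to the observed 61
```

The estimate lands within a few blocks of several of the measured rows, and it reproduces the roughly-halving behavior as the session count doubles, but it is only an approximation of whatever calculation Oracle Database actually performs.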

Pop quiz time:

1. What is the mathematical formula that determines the value of DB_FILE_MULTIBLOCK_READ_COUNT?  There is a chance that the above results might be different for a different release of Oracle Database.

2. Is the SGA_TARGET parameter, the DB_CACHE_SIZE parameter, the __DB_CACHE_SIZE hidden parameter, or the sum of the various buffer pool parameters the determining factor of the DB_FILE_MULTIBLOCK_READ_COUNT value (in addition to the value of the SESSIONS parameter)?  More tests are probably needed.

3. Assuming that the original poster in the OTN thread was using an 8KB block size, how was an auto-tuned value of 36 achieved for the DB_FILE_MULTIBLOCK_READ_COUNT parameter?



4 responses

12 04 2010
Christian Antognini

Hi Charles

I shared my opinion about 1 at the bottom of page 178 in TOP…


12 04 2010
Charles Hooper

Hi Chris,

Thanks for stopping by my blog and adding value to one of the blog articles. The explanation in your book is very good – I must admit that I have not had a chance to read up to that page during my second read-through of your book, otherwise your formula probably would have been mentioned in this article.

Your book’s formula seems to produce a number that is close (typically +/- 5) to the reported DB_FILE_MULTIBLOCK_READ_COUNT on Oracle 11g R2 when using the reported “Database Buffers” value that displays when the database instance starts. I did not check the __DB_CACHE_SIZE hidden parameter, so the actual memory for the buffer cache may have been slightly different due to the use of the SGA_TARGET parameter.

13 03 2015
Franck Pachot (@FranckPachot)

Hi Charles,
It seems that the default value does not depend on __DB_CACHE_SIZE when only SGA_TARGET is set. But it’s not exactly the result of the buffer cache size divided by sessions. Do you have more information about the default value in ASMM?

14 03 2015
Charles Hooper

Hi Franck,
Sorry for the delay in approving your message. That is an interesting piece of information. I have not had a lot of time recently to do some of the extended research and analysis with the more recent versions of Oracle Database, so some formulas might have changed in the last five years. I currently do not have any additional information on this topic.

You might take a look at Christian Antognini’s second edition book to see if he provides any insight into the behavior when using ASMM. If you do not have a printed copy of his book you should be able to find it here:

My review of his second edition book indicates that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is described on pages 195, 198, 274, and 278-281. The approximation formula that he provides is on page 281.

You might also do a search through the OakTable website to see if one of the other members (including Christian Antognini’s website) offers some additional researched information.

It has been a while since I looked into the topic, but I suspect that the DB_FILE_MULTIBLOCK_READ_COUNT parameter is not adjusted each time AMM or ASMM make adjustments to the SGA memory distribution – you might need to examine the values right after bouncing the database to verify the formula.
