DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT – What is Wrong with this Quote?


December 2, 2010

(Forward to the Next Post in the Series)

It has been a couple of months since the last blog article that asked “What is Wrong with this Quote”, so I thought that I would try to add a couple more blog articles to this series.  I recently reviewed the book “Oracle Tuning the Definitive Reference Second Edition”, but my review stopped when it extended to 24 pages (12 point Times New Roman font, 1-inch margins).  Why 24 pages?  It was probably just coincidence.

Page 531 of the book states the following:

“When multiple blocksizes are implemented, the db_block_size should be set based on the size of the tablespace where the large object full scans will be occurring.  The db_file_multiblock_read_count parameter is only applicable for tables/indexes that are full scanned.”

“With the implementation of multiple blocksizes, Oracle MetaLink notes that the db_file_multiblock_read_count should always be set to a value that sums to the largest supported blocksize of 32k.”

Examples are provided in the book that show that the value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter multiplied by the value for the DB_BLOCK_SIZE parameter should always equal 32KB.  Thus, with a database block size of 16KB, the DB_FILE_MULTIBLOCK_READ_COUNT parameter value should be set to 2.
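
For reference, a quick way to see how these two parameters are currently set on your own system, and the maximum multiblock I/O size that they imply, is a query along the following lines (just a minimal sketch, assuming SELECT access on V$PARAMETER):

-- Minimal sketch: current settings plus the implied maximum multiblock I/O
-- size in bytes (the book's advice implies this product should equal 32768).
SELECT
  MAX(DECODE(name, 'db_block_size', value))                 AS db_block_size,
  MAX(DECODE(name, 'db_file_multiblock_read_count', value)) AS mbrc,
  MAX(DECODE(name, 'db_block_size', TO_NUMBER(value)))
    * MAX(DECODE(name, 'db_file_multiblock_read_count', TO_NUMBER(value))) AS implied_io_bytes
FROM
  v$parameter
WHERE
  name IN ('db_block_size', 'db_file_multiblock_read_count');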

What, if anything, is wrong with the above quotes from the book?

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true.  It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past).  If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.

A Google search found similar information here:



12 responses

2 12 2010
Radoslav Golian

I would never ever buy or read a book from Mr. “E”. I’ve read many discussions with “E” and I can’t believe such a person can exist.
He has no self-reflection, he doesn’t learn from his mistakes and misunderstandings (only a stupid person doesn’t), and so on...
How did you manage to read such a crappy book?
Anyway, you did a great service to potential buyers with that review on Amazon; I hope nobody else will buy it.

(Edited December 2, 2010 CH: caught by the spam filter)

2 12 2010
Charles Hooper

Hi Radoslav,

Sorry for having to edit your comment – the Akismet spam filter prevented it from being posted. I would like to keep the material on this blog as focused on technical content as possible, specifically analyzing what is written rather than analyzing the person who wrote the material.

I would like to clarify that my review on the Amazon site is not intended to suggest that people should not buy the book. As the publisher’s website states that the book is written for senior Oracle DBAs, I suggest that senior DBAs, Oracle Certified Masters, and other people who are able to research the book’s contents *should* buy the book and post reviews that highlight specific content and indicate whether or not that content is correctly stated. Such feedback will be a benefit to the Oracle community, and could help to improve Oracle Database books that are published in the future. I think that we need to keep a positive outlook in order to see things improve.

3 12 2010
Radoslav Golian

I should choose my words more carefully...
Yes, I agree that senior DBAs, OCMs, and other Oracle experts who are able to research the book’s contents (and love quizzes) could provide feedback that would be a great benefit to the community. There is nothing better than learning from others’ errors and failures. In this case the community would learn a lot :).

I usually buy a book because I know it’s useful and almost everything in it is correct. Of course, everybody makes mistakes, even the best experts do (e.g. http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2680538100346782134 ). But they make them much less often...
The problem is that pages owned by Mr. “E” are usually at the top of Google search results, and junior members of the Oracle community usually do not realize that they should be especially wary when visiting them, because they can easily adopt myths, half-truths, and “silver bullets” from an “expert”. It seems that Mr. “E”’s book has the same “quality” as Mr. “E”’s sites.

4 12 2010
Charles Hooper

Radoslav,

If someone were to compare the referenced book page and the articles listed directly below “A Google search found similar information here:”, it seems that the contents of the articles are often direct copies of pages from the books, so it appears that you are correct. Take for example the information in the book that describes 10046 trace files, which is located near page 430 in the book. If someone were to search the Internet for phrases from that section of the book, one of the pages that will be found is this one, dba-oracle.com/t_10046_tracing_events.htm, which is listed as being written by Robert Freeman. Another web page that will be found is this one, dba-oracle.com/t_grid_rac_events_in_Oracle.htm, which is listed as being written by Mike Ault.

I agree with your comment regarding problems with Internet searches. The author of this book addresses that problem on page 110. Fortunately, pages like this blog article are already indexed on Google. For example, if someone right now were to search Google for:
db_block_size db_file_multiblock_read_count

This blog article series, specifically part 3 which now has a comment by Jonathan Lewis, is listed as the second item in the search results. The people who leave comments on this blog are helping to improve the Google search results.

3 12 2010
Charles Hooper

Nearly 18 hours later and no one wants to pick out what is wrong in the quote?

I will get you started. What is the difference between “sum” and “product”? Now, keeping in mind that the author of this book labeled me as a “spelling Nazi” (reference http://forums.oracle.com/forums/thread.jspa?threadID=976659 – yes, that comment was edited, the original version that hit my email box contained words that I suspect violated the OTN terms of service), I feel that I have made my contribution to what is wrong with this quote. :-)

3 12 2010
Centinul

“Oracle MetaLink notes that the db_file_multiblock_read_count should always be set to a value that sums to the largest supported blocksize of 32k.”

I don’t think this quote makes a lot of sense to begin with. The units of db_file_multiblock_read_count are blocks, not bytes. If a person read this and implemented it without thinking, they could set the parameter to a very unrealistic value. Imagine a db_file_multiblock_read_count of 32768! That implies a maximum I/O size of 256MB for an 8KB block size database.

Another interpretation, which agrees with the examples you stated are in the book, is that the db_file_multiblock_read_count should be set so that the maximum I/O size works out to 32K. I think this would significantly shortchange an I/O subsystem in most cases. My guess is that what he should have said is:

“When using multiple block sizes the db_file_multiblock_read_count should be set such that the result of db_block_size*db_file_multiblock_read_count is a multiple of the block size of your tablespace with the largest block size which does not exceed the maximum I/O size of your system.”

Additionally, when NOWORKLOAD statistics are used, setting an incorrect value for db_file_multiblock_read_count could make Oracle choose plans that are not best suited for the data set.
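
A quick way to check which system statistics the optimizer currently has available is a query like this (a rough sketch, assuming SELECT access on SYS.AUX_STATS$):

-- With NOWORKLOAD statistics only CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED have
-- values; MBRC, SREADTIM, and MREADTIM stay NULL until workload system
-- statistics have been gathered.
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN'
ORDER  BY pname;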

Hopefully this is what you are looking for :)

3 12 2010
Charles Hooper

Centinul,

I believe in the book review I supplied a link to a test performed by Greg Rahn. Greg demonstrated how the value of the DB_FILE_MULTIBLOCK_READ_COUNT must be specified in terms of the database default block size. Oracle will automatically scale up or down for the tablespaces with non-default block sizes. Thus, if the DB_BLOCK_SIZE is set to an 8KB block size and DB_FILE_MULTIBLOCK_READ_COUNT is set to 32, Oracle will automatically scale the DB_FILE_MULTIBLOCK_READ_COUNT down to 16 blocks when it encounters a tablespace with a 16KB block size, and scale it up to 64 when it encounters a tablespace with a 4KB block size. Considering the quoted text, what would be the point of multi-block reads if the DB_BLOCK_SIZE is set to a 32KB block size? The clarity of your paragraph is much better than the original, but I think that more still needs to be added.
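
For anyone who wants to reproduce that scaling behavior on a test system, a rough outline (the file, tablespace, and table names below are made up, and db_16k_cache_size must already be set before a 16KB tablespace can be created) might look like this:

-- Default block size is assumed to be 8KB.
ALTER SYSTEM SET db_file_multiblock_read_count = 32;

CREATE TABLESPACE test_16k DATAFILE '/u01/oradata/test_16k.dbf' SIZE 100M
  BLOCKSIZE 16K;

-- (Create and populate a test table T16K in the TEST_16K tablespace, then...)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
SELECT /*+ FULL(t) */ COUNT(*) FROM t16k t;
ALTER SESSION SET EVENTS '10046 trace name context off';

-- In the raw 10046 trace file the 'db file scattered read' waits against the
-- 16KB tablespace should show blocks=16 (16 * 16KB = 256KB), the same I/O
-- size as 32 * 8KB, i.e. Oracle scales the block count, not the I/O size.

This is only an outline, of course – the actual block counts will also depend on extent boundaries, blocks already in the cache, and operating system I/O limits.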

Does the DB_FILE_MULTIBLOCK_READ_COUNT apply during an index full scan (NO)? Does it apply during an index fast full scan? Does it apply to index prefetching? Does it apply to anything else? :-)

3 12 2010
Centinul

Thanks for the information. I appreciate it.

3 12 2010
Mathew Butler

When optimizing for a FTS I used to (up to 10.1) always set MBRC to a value such that the product of MBRC and the block size (I’ve never used multiple block sizes in a DB) was equal to the maximum multi-block I/O size supported on the platform I was working on. The supported multi-block I/O size was determined through testing.

I’d only do this with consideration to the rest of the system and how this might affect the costing of all query plans.

I don’t believe that this approach holds any more since 10.2 – I think the optimizer dynamically sets the MBRC value based on how many active sessions are connected. The advice I’ve heard is not to explicitly set MBRC at all (and I haven’t come across any situation yet where I have needed to explicitly set MBRC, although I’m reasonably sure such situations may exist). I don’t change MBRC from the default these days – working on 11gR1.
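
(For what it’s worth, a quick way to check whether the parameter has been explicitly set or left at the automatic default on a given instance – just a small sketch, assuming access to V$PARAMETER:)

-- ISDEFAULT = 'TRUE' indicates the parameter has been left at its default value.
SELECT name, value, isdefault, ismodified
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';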

I’m interested to find out what your view is in terms of explicitly setting (or not setting) MBRC.

But to your quote:

I guess the quote is trying to say something similar to my first paragraph above.

M.

3 12 2010
Charles Hooper

Mathew,

Nice additional information. I think that with the introduction of system (CPU) statistics we need to be very careful when abbreviating the DB_FILE_MULTIBLOCK_READ_COUNT parameter. The system (CPU) statistics include a statistic named MBRC, which is used for execution plan costing purposes (when workload system statistics are in place). Setting that system statistic to too high a value can cause problems (reference: http://hoopercharles.wordpress.com/2009/12/27/high-value-for-mbrc-causes-high-bchr-high-cpu-usage-and-slow-performance/). I have seen a couple of people abbreviate the DB_FILE_MULTIBLOCK_READ_COUNT parameter as DFMBRC (or maybe DBFMBRC), or something similar, just to avoid confusion.
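
For anyone trying to keep the two settings straight, a quick comparison query might help (a rough sketch, assuming SELECT access on V$PARAMETER and SYS.AUX_STATS$):

-- The parameter controls the size of the multiblock read requests that are
-- actually issued; the MBRC system statistic (when workload statistics have
-- been gathered) is what the optimizer uses when costing full scans.
SELECT 'parameter' AS setting_source, value AS setting_value
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count'
UNION ALL
SELECT 'system statistic (MBRC)', TO_CHAR(pval1)
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN'
AND    pname = 'MBRC';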

Any other problems with the quote?

3 12 2010
Jonathan Lewis

“the db_block_size should be set based on the size of the tablespace”

Why should the size of the tablespace have a direct impact on the size of the block?

“where the large object full scans will be occurring”

Does this mean any large objects that I may scan should all be moved into a single tablespace?

“The db_file_multiblock_read_count parameter is only applicable for tables/indexes that are full scanned.”

Not a lot of people know this (and I wouldn’t expect the author of the “definitive reference” to know it) – but the db_file_multiblock_read_count also has an impact on the costs derived for bitmap indexes.

“the db_file_multiblock_read_count should always be set to a value that sums to …”

A single value can’t sum to anything.

“the largest supported blocksize of 32k”

Is this also true on a platform that doesn’t support 32KB block sizes?

4 12 2010
Charles Hooper

It is great to see the variety of responses to this blog article. The interpretation of the book contents, from the perspective of someone just learning the basics of performance tuning, was covered very well by Jonathan’s comment. If someone were to read exactly what is stated in that section of the book, without having much prior knowledge of Oracle terminology, that person would be left very confused for the reasons pointed out by Jonathan. For the record, I did not know that the value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter affected the costing of bitmap indexes.
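
For anyone who wants to verify Jonathan’s bitmap index point on a test system, a rough sketch might look like the following (the table name, column, and predicate are made up, and a bitmap index on T1.COL1 is assumed to already exist):

ALTER SESSION SET db_file_multiblock_read_count = 8;
EXPLAIN PLAN SET STATEMENT_ID = 'MBRC_8' FOR
  SELECT * FROM t1 WHERE col1 = 1;

ALTER SESSION SET db_file_multiblock_read_count = 128;
EXPLAIN PLAN SET STATEMENT_ID = 'MBRC_128' FOR
  SELECT * FROM t1 WHERE col1 = 1;

-- Compare the calculated costs produced with the two parameter values.
SELECT statement_id, id, operation, options, cost
FROM   plan_table
ORDER  BY statement_id, id;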

Centinul and Mathew covered the other side of the problem – if the author wanted to save that section of the book from the trash bin, what would need to be added or changed to improve its clarity and accuracy?

Radoslav touched on some of the “touchy” points that affect the quality of the Oracle Database material found on the Internet through search engines. The points that he makes are issues that I too have experienced. The OakTable website (http://www.oaktable.net/main) has a custom Google search engine, much like the one on Mr. “E”‘s site, that re-weights the results so that less debatable material floats to the top of the search results.
