Faulty Quotes 7 – Deadlock Kills Sessions?

January 4, 2012

(Back to the Previous Post in the Series)

I thought that I would begin this blog article with a bit of a test case that demonstrates a very simple deadlock between two sessions.

In Session 1 (create the test table with a primary key and insert a single row):

CREATE TABLE T1 (
  C1 NUMBER PRIMARY KEY,
  C2 VARCHAR2(10));

INSERT INTO T1 VALUES (1,'1');

In Session 2 (insert a row that does not conflict with session 1 and then insert a row with the same primary key value as the uncommitted row in Session 1):

INSERT INTO T1 VALUES (2,'2');

INSERT INTO T1 VALUES (1,'3');

(Session 2 hangs while waiting for Session 1 to COMMIT or ROLLBACK)

In Session 1 (insert a row with the same primary key value as the uncommitted row in Session 2):

INSERT INTO T1 VALUES (2,'2');

(Session 1 hangs while waiting for Session 2 to COMMIT or ROLLBACK)

In Session 2 (an ORA-00060 is returned, deadlock between the two sessions ends):

INSERT INTO T1 VALUES (1,'3')
            *
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

(Session 1 still hangs while waiting for Session 2 to COMMIT or ROLLBACK the first row that was inserted)

In Session 2 (confirming that the session was not killed/terminated and that the first row inserted still exists – indicates that the current TRANSACTION was not rolled back, just the statement that was involved in the deadlock):

SELECT
  *
FROM
  T1;

        C1 C2
---------- ----------
         2 2
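To finish the test case (a sketch of the follow-up steps in the same two sessions – the row values assume the inserts shown above): Session 2's deadlock error rolled back only the second INSERT statement, so Session 2 must still explicitly end its transaction before Session 1 is released.

```sql
-- In Session 2: the transaction is still open after the ORA-00060.  A
-- ROLLBACK undoes Session 2's first insert of (2,'2'); a COMMIT would
-- instead keep that row, causing Session 1's pending insert of (2,'2') to
-- fail with an ORA-00001 unique constraint violation.
ROLLBACK;

-- Session 1's INSERT INTO T1 VALUES (2,'2') now completes, and Session 1
-- sees both of its own uncommitted rows:
SELECT * FROM T1;

--         C1 C2
-- ---------- ----------
--          1 1
--          2 2
```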

While recently reading an OTN thread I was reminded of a bit of information that several sources simply misstate about Oracle deadlocks.  The incorrect information is repeated, seemingly quite frequently, in various books and forum threads.  As luck would have it, several of the books that I have read about Oracle Database also repeat this incorrect information.  For instance, a quote from page 352 of the book “Expert Oracle Database 11g Administration” (a book that I reviewed and gave 4 out of 5 stars, maybe I need to re-evaluate that rating?):

“This is a catch-22 situation, because the stalemate can’t be broken by either session unilaterally. In such circumstances, Oracle steps in, terminates one of the sessions, and rolls back its statement. Oracle quickly recognizes that two sessions are deadlocked and terminates the session that issued the most recent request for a lock. This will release the object locks that the other session is waiting for.”

A second book, “Secrets of the Oracle Database” (another book that I reviewed and gave 4 out of 5 stars, actually it was probably worth about 4.4 stars), states the following on page 57:

“Hence, the ORACLE DBMS detects circular chains pertaining to interdependent locks, signals the error ‘ORA-00060: deadlock detected while waiting for resource’, and rolls back one of the sessions involved in the would-be deadlock.”

A third book, “Oracle SQL Recipes” (another book that I reviewed and gave 4 out of 5 stars), states the following on page 217:

“The immediate solution to the problem requires no user intervention: Oracle detects the deadlock and automatically rolls back one of the transactions, releasing one of the sets of row locks so that the other session’s DML will complete.”

A fourth book, “Oracle Database 11g Performance Tuning Recipes” (another book that I reviewed and gave 2 out of 5 stars), states on page 227:

“While the database can automatically resolve deadlocks between sessions (by killing one of the sessions holding a needed lock)…”

A Google book search found that the above four books are not alone in making statements about deadlocks that do not match the test case results at the start of this blog article.  One of the books found by Google is one that I have not read yet (and probably never will), “Oracle PL/SQL for Dummies”, which states the following on page 298:

“This is a deadlock. Oracle resolves it by raising a deadlock exception (ORA-00060: deadlock detected while waiting for resource) in user 1’s session. This terminates the procedure and allows user 2’s procedure to successfully complete.”

A second book found by the Google search is a bit old, “Oracle 9i DBA 101”, which on page 194 states:

“A deadlock occurs when two or more users wait for data that is locked by each other. Oracle will kill the first server process that detects the deadlock, so you should always look for and fix code that can cause deadlocks.”

There are certainly many other statements in other books regarding deadlocks, both correct and incorrect.  A quick search through the OTN forums found several threads in addition to the one at the start of this blog article that also drifted a bit from an accurate picture of what happens when a deadlock is broken in Oracle Database.

https://forums.oracle.com/forums/thread.jspa?threadID=2196282

“That is the deadlock. Oracle will then immediately choose arbitrarlly a victim session (542 or 548) and kill its process letting the remaining session continuing its work.”

https://forums.oracle.com/forums/thread.jspa?threadID=2152646

“There should be a simple wait, not a deadlock which would killed one of the session.”

https://forums.oracle.com/forums/thread.jspa?threadID=2261367

“If two sessions are actually deadlocked, Oracle should always kill one of the blocked sessions, raising an ORA-00060 error. It sounds like you’re saying this is not the expected behavior.”

https://forums.oracle.com/forums/thread.jspa?threadID=2227420

“The two session are inserting records in the same table with a unique index on it. With the same database version on another server and the same treatment, the deadlocks are correctly handled (one session is killed with ora-00060) and the process can continue.”

https://forums.oracle.com/forums/thread.jspa?threadID=1072665

“In oracle, when session1 and session2 make a deadlock, the oracle will kill one session , but I want know how to let oracle kill session1 and session2 both, is there way to do that?”

https://forums.oracle.com/forums/thread.jspa?threadID=2169563

“NO. Oracle already broke the deadlock by terminating one of these sessions. By the time trace file was closed, the deadlock no longer existed.”

https://forums.oracle.com/forums/thread.jspa?threadID=953308

“Oracle itself resolve the deadlock by killing the session.”

While reading the book “Oracle Core Essential Internals for DBAs and Developers” I almost flagged a sentence in a discussion of deadlocks as potentially containing an error.  That statement is found on page 82 near the end of the third to the last paragraph on the page.  After reading that paragraph about 10 times I decided that the statement is probably specific to TM locks, and probably is correct (the thought of TX locks maintained at the individual block level that contains the table rows kept running through my head the first nine times I read the paragraph).  This section of the book is interesting, as it not only avoids the mistakes mentioned in the above books, but it also highlights an inaccurate statement or two found in the Oracle Database documentation.





True or False – Optimizer Mode and Tuning Goals

May 13, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Continuing the true or false blog article series with more questions that seem to have conflicting answers.  Today’s quiz is on the topic of the optimizer mode and tuning goals, inspired by a recent thread that I found in a discussion forum.  A Google search found a couple of related articles.

When answering the questions, answer True or False, followed by a justification for the answer that you provided.

The Articles:

The Quiz:

1.  As of Oracle 10g R1, there are three methods for joining tables or other row sources: star transformation join, nested loops join, and hash join.

2.  SQL execution speed will be fastest when the first join between two row sources produces the smallest possible result set from the specified row sources.

3.  Oracle Database 10g’s SQL Profiles and SQL Access Advisor help to change execution plans for queries without adding hints.

4. Oracle Database 10g offers three optimizer modes, controlled by the OPTIMIZER_MODE initialization parameter: RULE, COST-BASED, and CHOOSE.

5.  An effective approach for query tuning involves enabling the RULE based optimizer either through a RULE hint, or by changing the session-level optimizer goal with the following SQL statement:

    ALTER SESSION SET OPTIMIZER GOAL = RULE;

6.  The default optimizer mode in Oracle 10g R1 is CHOOSE, which means that the RULE based optimizer will be used if none of the objects referenced in the SQL statement have statistics.

7.  A SELECT statement requiring 10 seconds and performing 10,000 DB_BLOCK_GETS is automatically less efficient than a transformed version of the SELECT statement that performs 2,000 DB_BLOCK_GETS.

8.  The FIRST_ROWS optimizer mode is appropriate for OLTP type systems because it allows the small result sets that are expected by the end users to be returned as quickly as possible.

9.  The RULE based optimizer does not support any features introduced in Oracle since the mid 1990s.

10. Specifying a value for the OPTIMIZER_INDEX_CACHING parameter is recommended, because the value is critical for costing nested loops joins.  Its value should be set to indicate the percentage of an index that is likely to remain in Oracle’s buffer cache.

11. When the PARALLEL_AUTOMATIC_TUNING parameter is set to ON, the cost-based optimizer will artificially increase the calculated cost of index access paths, which increases the frequency of full table scans that may be executed in parallel.

12. With the RULE based optimizer, the driving table is the last table listed in the FROM clause of the SQL statement.  When the Cost-Based optimizer is used, the driving table is the first table listed in the FROM clause of the SQL statement.

13. The Cost-Based optimizer often performs unnecessary full table scans when three or more tables are listed in the FROM clause of a SQL statement.

14. Setting the OPTIMIZER_MODE parameter at the session level to FIRST_ROWS_10 instructs the query optimizer that the SQL statement which will be submitted next will return 10 rows.

I fear that this blog series might be falsely inflating the Google ranking of the Oracle documentation library due to the number of times the documentation library has been referenced – sorry Google users.  :-)





True or False – Hash Joins

May 12, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Continuing the true or false blog article series with more questions that seem to have conflicting answers.  Today’s quiz is on the topic of hash joins, inspired by a recent thread that I found in a discussion forum.

When answering the questions, answer True or False, followed by a justification for the answer that you provided.

The Articles:

The Quiz:

1.  A hash join is ideal when joining a small table to a large table, and in such cases a hash join is typically faster than a nested loops join or a sort-merge join.

2.  When presented with very large tables in a production database, the Oracle query optimizer will attempt to avoid using hash joins unless the PGA_AGGREGATE_TARGET or HASH_AREA_SIZE (if PGA_AGGREGATE_TARGET is not used) is increased to a large value.

3.  When a PGA_AGGREGATE_TARGET is in use, one session cannot use all of the memory specified by PGA_AGGREGATE_TARGET nor all of the available memory in the server, whichever is the lower value.

4.  A hash join may use only 200MB of memory, or 5% of the PGA_AGGREGATE_TARGET, whichever is the lower value.

5.  The query optimizer will not select to use a hash join unless the hash join is able to complete in memory, as controlled by the PGA_AGGREGATE_TARGET and/or HASH_AREA_SIZE parameter values.

6.  A hash join is only faster than a nested loops join when the memory available for the hash join is at least 1.6 times the sum of bytes in the rows of the table to be hashed.

7.  A hash join’s performance is affected when the hash join overflows the memory established by the HASH_AREA_SIZE parameter, and the hash join spills to the TEMP tablespace.

8.  The degree of parallelism for a table (or individual SQL statement) must be adjusted when a full table scan is used to retrieve the rows during a hash join.  Hash joins typically require the parallel degree to be set to a value greater than 1 in order to improve performance over other join methods.

9.  The _PGA_MAX_SIZE parameter should be adjusted to improve performance when joining large tables with a hash join.

10. The Oracle query optimizer will commonly ignore USE_HASH hints.

11. The OPTIMIZER_INDEX_COST_ADJ parameter and OPTIMIZER_MAX_PERMUTATIONS parameters control the optimizer’s decision to use hash joins.

12. When the PGA_AGGREGATE_TARGET parameter is not used, the HASH_AREA_SIZE parameter should be set to 50% of the square root of the bytes in the smaller of the two row sources being joined, with a minimum value of 1MB.

13. The row source listed below the words HASH JOIN in an execution plan is the source for the hash table – the optimizer typically tries to select the smaller estimated row source of the two row sources as the source for the hash table. The second row source listed below the words HASH JOIN in an execution plan is fully scanned, probing the generated hash table in search of a match.

When attempting to answer the true or false questions, think about under what conditions the answer to the question will be true and under what conditions the answer to the question must be false.
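As an aside, the build/probe mechanics described in question 13 can be sketched in a few lines of Python (an illustration of the general hash join algorithm only – not a representation of Oracle's internal implementation; the sample tables are made up):

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Join two row sources: hash the (smaller) build input on its join
    column, then fully scan the probe input once, looking up each row in
    the hash table in search of a match."""
    # Build phase: hash table keyed on the join column of the first source.
    hash_table = {}
    for row in build_rows:
        hash_table.setdefault(row[build_key], []).append(row)
    # Probe phase: single pass over the second row source.
    result = []
    for row in probe_rows:
        for match in hash_table.get(row[probe_key], []):
            result.append({**match, **row})
    return result

# Mirroring the optimizer's preference, the smaller row source is
# supplied as the build input:
departments = [{"deptno": 10, "dname": "ACCOUNTING"},
               {"deptno": 20, "dname": "RESEARCH"}]
employees = [{"empno": 7369, "deptno": 20},
             {"empno": 7499, "deptno": 30},
             {"empno": 7782, "deptno": 10}]

joined = hash_join(departments, employees, "deptno", "deptno")
```

Note that each probe row is examined exactly once, which is why the second row source listed below HASH JOIN in an execution plan is fully scanned rather than repeatedly accessed, as it would be in a nested loops join.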





True or False – NVL

May 11, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Continuing the true or false blog article series with more questions that seem to have conflicting answers, if the answers to those questions depend on the source of the information found through Internet searches.  Today’s quiz is on the topic of the NVL function, inspired by a recent thread that I found in a discussion forum.

When answering the questions, answer True or False, followed by a justification for the answer that you provided.

The Articles:

The Quiz:

1.  The most common use of the NVL function is to prevent division by zero errors in mathematical equations.

2.  The SQL statement:

SELECT NVL(CURRENT_STATUS, "Not disclosed") FROM T1;

   replaces NULL values found in the CURRENT_STATUS column of table T1 with the phrase: Not disclosed

3.  The NVL function replaces a NULL column value with any other value.

4.  The NVL function is the most efficient method for converting NULL values to a non-NULL value.

To help you with the questions, here is a test table:

CREATE TABLE T1(
  C1 NUMBER,
  C2 NUMBER,
  C3 NUMBER,
  C4 DATE,
  C5 DATE,
  C6 DATE,
  C7 VARCHAR2(20),
  C8 VARCHAR2(20),
  C9 VARCHAR2(20),
  PRIMARY KEY(C1));

INSERT /*+ APPEND */ INTO
  T1
SELECT
  ROWNUM,
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  DECODE(MOD(ROWNUM,5),0,NULL,ROWNUM),
  TRUNC(SYSDATE)+TRUNC(ROWNUM/100),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  DECODE(MOD(ROWNUM,5),0,NULL,TRUNC(SYSDATE)+TRUNC(ROWNUM/100)),
  RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A')),
  DECODE(MOD(ROWNUM,5),0,NULL,
   RPAD(CHR(MOD(ROWNUM-1,26)+65)||
    CHR(MOD(ROWNUM,26)+65)||
    CHR(MOD(ROWNUM+1,26)+65)||
    CHR(MOD(ROWNUM+2,26)+65),20,'A'))
FROM
  DUAL
CONNECT BY
  LEVEL<=1000000;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)

The above creates a table with 1,000,000 rows, where one of every 5 rows contains NULL values in columns C2, C3, C5, C6, C8, and C9.

For some of the quiz questions, the following script might be helpful:

SET ARRAYSIZE 1000
SET AUTOTRACE TRACEONLY STATISTICS
SET TIMING ON

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'NVL_TEST';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

SELECT /*+ GATHER_PLAN_STATISTICS */
  COALESCE(C3,C2,C1) NUM,
  COALESCE(C6,C5,C4) DAT,
  COALESCE(C9,C8,C7) VCAR
FROM
  T1;

SELECT /*+ GATHER_PLAN_STATISTICS */
  NVL(C3,NVL(C2,C1)) NUM,
  NVL(C6,NVL(C5,C4)) DAT,
  NVL(C9,NVL(C8,C7)) VCAR
FROM
  T1;

SELECT /*+ GATHER_PLAN_STATISTICS */
  NVL(C3,C1) NUM,
  NVL(C6,C4) DAT,
  NVL(C9,C7) VCAR
FROM
  T1;

SELECT /*+ GATHER_PLAN_STATISTICS */
  DECODE(C3,NULL,DECODE(C2,NULL,C1,C2),C3) NUM,
  DECODE(C6,NULL,DECODE(C5,NULL,C4,C5),C6) DAT,
  DECODE(C9,NULL,DECODE(C8,NULL,C7,C8),C9) VCAR
FROM
  T1;

SELECT /*+ GATHER_PLAN_STATISTICS */
  NVL(C3,NVL(C2,C4)) COL1,
  NVL(C6,NVL(C5,C7)) COL2
FROM
  T1;

SELECT
  SYSDATE
FROM
  DUAL;

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

It is probably best to execute the above script several times to limit the effects of physical block reads.  You may see different performance results depending on whether the 10046 trace is enabled, whether physical block reads are performed on every execution, and whether the test is executed directly on the database server.  The GATHER_PLAN_STATISTICS hint is used to permit DBMS_XPLAN.DISPLAY_CURSOR to retrieve the execution plan with the ALLSTATS LAST format parameter.
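For readers who want to reason about what the script's variations should return before running them, here are plain Python analogues of the two functions (hypothetical helper names – these model only the documented argument handling: NVL takes exactly two arguments, while COALESCE returns the first non-NULL of any number of arguments):

```python
def nvl(value, substitute):
    # Analogue of Oracle's two-argument NVL: return the substitute
    # only when the first argument is NULL (None in Python).
    return substitute if value is None else value

def coalesce(*values):
    # Analogue of COALESCE: return the first non-NULL argument,
    # or NULL when every argument is NULL.
    return next((v for v in values if v is not None), None)

# A row shaped like the test table's C1/C2/C3 columns, where C2 and C3
# are NULL (as in every fifth row of the generated data):
c1, c2, c3 = 5, None, None

# The script's first two queries compute the same result for such a row:
assert coalesce(c3, c2, c1) == nvl(c3, nvl(c2, c1)) == 5
```

The third query in the script, NVL(C3,C1), skips C2 entirely, which is why it can return a different result than the COALESCE and nested NVL forms when only C3 is NULL.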





True or False – Undo

May 9, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Continuing the true or false blog article series with more questions that seem to have conflicting answers, if the answers to those questions depend on the source of the information found through Internet searches.  Today’s quiz is on the topic of Oracle Database’s undo functionality, and in some cases you may need to consult several of the suggested articles in order to correctly answer each of the questions.

When answering the questions, answer True or False, followed by a justification for the answer that you provided.  Yes, there are a lot of questions, but part of the entertainment in these types of exercises is reading an article and identifying what is logically right or wrong.  Of course, as with the previous quizzes, any answers that attempt to utilize logical fallacies will be scored as incorrect.

The Articles:

The Quiz:

1.  Rollback segment shrinks cause the next session to use the rollback segment to wait while the rollback segment is reduced in size to its optimal size.

2.  An OPTIMAL size for rollback segments should never be set because when the rollback segments are correctly sized, they should never increase in size.

3.  The ideal rollback segment size is partially determined by the values of the DB_WRITER_PROCESSES, LOG_BUFFER, and LOG_CHECKPOINT_INTERVAL initialization parameters, as well as the COMMIT frequency.

4.  Large, long running batch jobs should be manually set to use a single, large rollback segment to reduce the risk of ORA-01555 (snapshot too old) errors.

5.  Adding a large undo segment when manual undo management is in use will effectively reduce the chances of sessions receiving ORA-01555 (snapshot too old) errors.

6.  When automatic undo management is used, ORA-01555 (snapshot too old) errors are triggered by Oracle Database creating many small undo segments in an undo tablespace that is too small.

7.  In a database with a single session connected, it is not possible to trigger an ORA-01555 (snapshot too old) error when the session is not modifying the tables that are being queried.

8.  When manual undo management is in use, only UPDATE transactions are able to automatically cause an undo segment to grow; SELECT statements do not cause the undo segments to grow, thus increasing the risk of ORA-01555 (snapshot too old) errors when the undo segments are small.

9.  When manual undo management is in use, it is a common practice for DBAs to assign a transaction to a specific rollback segment using the SET TRANSACTION command, but this is unnecessary with automatic undo management.

10. The automatically created undo segments in the SYSTEM tablespace will only be used when modifying objects residing in the SYSTEM tablespace.

11. The UNDO_TABLESPACE parameter must be set to an undo tablespace’s name when automatic undo management is used.

12. The V$WAITSTAT view includes a block class that indicates the number of waits on the system undo header blocks.

13. The SESSIONS parameter should be set to a value no larger than the expected maximum number of user sessions since the SESSIONS parameter affects the number of undo segments created when automatic undo management is used.

14. When automatic undo management is used, as the number of sessions connected to the database instance increases, the instance dynamically adds additional (offline) undo segments in response to the sessions merely connecting.  The TRANSACTIONS_PER_ROLLBACK_SEGMENT parameter determines the number of additional sessions that must be connected before an additional undo segment is added.

15. SELECT type SQL statements do not generate redo nor undo.

16. In most databases, DML or DDL is performed in only 10% of all transactions in the database.

17. After a session changes data and issues a COMMIT, all sessions with access permissions to view the data are able to immediately see those changes.

18. Read consistency in Oracle means that data included in the query results will always be as of the moment the query was submitted.

19. Undo segments may be used to support flashback type operations.

20. The redo logs contain undo records, which allows Oracle Database to optimize the process of recovering from a shutdown abort, and also means that a future release of Oracle Database will no longer need dedicated undo tablespaces.

Note that a new blog article category has been added to quickly locate the quiz articles.  When attempting to answer the true or false questions, think about under what conditions the answer to the question will be true and under what conditions the answer to the question must be false.





True or False – Data Dump Import and Indexes

May 6, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

While reading one of the OTN threads that asked whether Data Pump import will automatically rebuild indexes, I noticed that someone posted an interesting response that appeared to be quoted from another source.  I searched the Internet and found the text that was copied into the OTN post, directly below a heading that read “Maximizing Oracle Import (impdp) Performance” – it was then that I decided it was time for another true or false quiz.

The reading material:

  1. http://www.dba-oracle.com/oracle_tips_load_speed.htm
  2. http://download.oracle.com/docs/cd/E14072_01/server.112/e10701/dp_import.htm
  3. http://download.oracle.com/docs/cd/E14072_01/server.112/e10701/dp_export.htm

Read through the above material and attempt to answer the following questions – as in the past, state why the answer is true or false.  Since the first article was last revised in March 2006, I probably should have referenced the documentation for Oracle Database 10g R2, but the Oracle Database 11g R2 documentation should be sufficient.

1.  The options used when data is exported with Data Pump Export (expdp) have no control over how the data may be imported with Data Pump Import (impdp).

2.  Data Pump Export supports enabling and disabling direct path for exports.

3.  Set the ANALYZE parameter of impdp to N to prevent statistics from being gathered on imported tables and indexes.

4.  Using solid state disks (SSD) will speed up import rates when the import job is otherwise fully tuned.

5.  Setting the COMMIT parameter of impdp to N suppresses the committing of the imported data until the load process ends.

6.  Setting the INDEXES parameter of impdp to N prevents indexes from being created until the import process completes.

7.  Setting the INDEXES parameter of impdp to N eliminates the index maintenance overhead when data is imported into a pre-existing table having pre-existing indexes.

8.  When the creation of indexes is postponed through the use of the INDEXES parameter of impdp, the resulting text file should be edited to set the parallel degree of each index to one less than the value of the CPU_COUNT initialization parameter’s value.

9.  Using the EXCLUDE=STATISTICS parameter value of impdp prevents the automatic gathering of object statistics by impdp.  Setting this parameter value will greatly improve impdp performance.

10. When the _disable_logging initialization parameter is set to TRUE, either a SHUTDOWN ABORT or instance crash will disable/corrupt the database.  This corruption may be prevented by creating a backup of the database before modifying the value of that hidden initialization parameter.

Edit – May 6, 2010: The original title of this blog article was intended to be “True or False – Data Pump Import and Indexes” – a bit of unintentional humor was caused by the typo in the article title.  There are several definitions of the word “Dump”, some of which are more fitting than others.






True or False – Direct Path Reads

April 21, 2010

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Time for another true or false pop quiz.  Based on the number of views the previous true or false quizzes have received, this is a popular series.  However, the questions might be a little too hard to answer correctly – just what is correct is not always easy to determine.  Today’s quiz is on the topic of direct path reads, sparked by one of the test cases that we set up for the Expert Oracle Practices book.

Articles:

  1. http://www.dba-oracle.com/t_direct_path_read_waits.htm (dated June 5, 2009)
  2. http://oracledoug.com/serendipity/index.php?/archives/774-Direct-Path-Reads.html (dated May 22, 2006)
  3. http://oracledoug.com/serendipity/index.php?/archives/1321-11g-and-direct-path-reads.html (dated September 24, 2007)
  4. http://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/ (dated July 21, 2009)
  5. http://antognini.ch/2009/07/impact-of-direct-reads-on-delayed-block-cleanouts/ (dated July 23, 2009)
  6. http://books.google.com/books?id=14OmJzfCfXMC&pg=PA121#v=onepage&q&f=false (dated 2004)

Answering the true or false questions probably requires examining the publication date of the article, if a specific Oracle release is not indicated in the article.  I have had a little bit of difficulty finding the exact (or even approximate) release dates of Oracle’s Database products in the past, so I performed a couple of searches and posted the results below.  The release dates listed below are from the following articles:
http://www.oracle.com/support/library/brochure/lifetime-support-technology.pdf
Metalink Doc ID 742060.1
http://www.orafaq.com/wiki/Oracle_8
http://www.orafaq.com/wiki/Oracle_8i

8         June 1997
8.1.5     February 1999
8.1.7     September 2000
9.0.1     June 2001
9.2.0.1   July 2002
10.1.0.1  January 2004
10.1.0.5  January 2006
10.2.0.1  July 2005
10.2.0.4  February 2008
11.1.0.6  August 2007
11.1.0.7  September 2008
11.2.0.1  September 2009

When answering the quiz, please provide justification for your answers – answer the question “why is the question true or false”.

The quiz:

1.  Direct path read waits will only appear when parallel query is used, for example, a full table scan executed in parallel. 

2.  Parallel query is enabled at the session and instance level by setting a table’s parallel degree to 32.

3.  Setting a parallel degree for a table or index forces the cost-based optimizer to calculate the cost of a full scan of the table (or index) with a lower value than the cost of the corresponding index’s range scan access path.

4.  Physical block reads performed during parallel query operations are always brought directly into the program global area (PGA), completely avoiding the buffer cache. *

5.  The hidden parameter _SERIAL_DIRECT_READ should be set to FALSE to prevent problems caused by excessive direct path reads.

6.  The hidden parameter _SMALL_TABLE_THRESHOLD affects Oracle’s decision to perform direct path reads.







