To N or Not to N, is SYS_OP_C2C the Question; Oracle NVARCHAR Slow?

February 8, 2015

I was recently contacted about an odd performance issue with a software product named TDM, where certain SQL statements submitted to an Oracle 9.2.x database required roughly 30 seconds to execute when a nearly instantaneous response was expected.  The person who reported the issue provided a sample SQL statement similar to the following:

SELECT
  DATA
FROM 
  MVIS_DATA 
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;

In the SQL statement, notice the N character that appears immediately before '112'.  The person indicated that the SQL statement executed quickly if that N character were removed.  At this time the developer of the application is unwilling to release a bug fix to remove the N character from this (and likely other) SQL statements.

I did not initially have the table's datatype descriptions (retrieved with DESC MVIS_DATA), so I made a couple of guesses about the datatypes.  What if the TOOLID column were defined as a number, and were also the primary key column for the table (meaning that there must be an index on that column)?  It might be the case that the developer of the application decided that, in all SQL statements submitted with literal values (rather than bind variables), numbers would be enclosed in single quotes.  I created a testing table for a mock up in Oracle Database 11.2.0.2:

CREATE TABLE MVIS_DATA_NUM (
  TOOLID NUMBER,
  DATATYPE NUMBER,
  COMMISSIONID NUMBER,
  OPERATIONID NUMBER,
  COMMISSIONLISTPOS NUMBER,
  DATA VARCHAR2(100),
  PRIMARY KEY (TOOLID));
 
INSERT INTO
  MVIS_DATA_NUM
SELECT
  ROWNUM TOOLID,
  MOD(ROWNUM,2) DATATYPE,
  NULL COMMISSIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) OPERATIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) COMMISSIONLISTPOS,
  LPAD('A',100,'A') DATA
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;
 
COMMIT;
 
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'MVIS_DATA_NUM',CASCADE=>TRUE,NO_INVALIDATE=>FALSE)

With the testing table created with 100,000 rows, and statistics gathered for the table and primary key index, I then tried executing a query and retrieving the execution plan for that query, to determine whether the Predicate Information section of the execution plan provided any clues.  I executed the following; the first SQL statement retrieved one row, and the second retrieved the execution plan for the first:

SET LINESIZE 140
SET PAGESIZE 1000
 
SELECT
  DATA
FROM 
  MVIS_DATA_NUM
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

The execution plan output is as follows:

SQL_ID  gzzrppktqkbmu, child number 0
-------------------------------------
SELECT   DATA FROM   MVIS_DATA_NUM WHERE   TOOLID = N'112'   AND
DATATYPE = 0   AND COMMISSIONID IS NULL   AND OPERATIONID IS NULL   AND
COMMISSIONLISTPOS IS NULL
 
Plan hash value: 1080991
 
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| MVIS_DATA_NUM |     1 |   113 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C0050817  |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("OPERATIONID" IS NULL AND "COMMISSIONLISTPOS" IS NULL AND
              "DATATYPE"=0 AND "COMMISSIONID" IS NULL))
   2 - access("TOOLID"=112)

Nothing too unusual in the above execution plan: the N'112' portion of the SQL statement was simply changed by the query optimizer to the number 112, which permitted the use of the table's primary key index.  Obviously, Oracle Database 11.2.0.2 contains a few optimizations that are not available in Oracle Database 9.2.x, so maybe the outcome would be different in Oracle Database 9.2.x.  However, when a number value is compared to a character (for example VARCHAR2) value, Oracle Database implicitly converts the character value to a number value when performing the comparison, so the outcome should be the same on Oracle Database 9.2.x.
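As a quick illustration of that implicit conversion rule, consider the same predicate written with a plain (non-N) character literal against the NUMBER column – a sketch against the test table created above:

SELECT
  DATA
FROM
  MVIS_DATA_NUM
WHERE
  TOOLID = '112';

The optimizer treats the predicate as if it had been written TOOLID = TO_NUMBER('112'), so the primary key index on TOOLID remains usable – the conversion lands on the literal, not on the indexed column.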

What if that TOOLID column were instead defined as a character datatype?  Below is another test table, with that column defined as VARCHAR2:

CREATE TABLE MVIS_DATA (
  TOOLID VARCHAR2(15),
  DATATYPE NUMBER,
  COMMISSIONID NUMBER,
  OPERATIONID NUMBER,
  COMMISSIONLISTPOS NUMBER,
  DATA VARCHAR2(100),
  PRIMARY KEY (TOOLID));
 
INSERT INTO
  MVIS_DATA
SELECT
  TO_CHAR(ROWNUM) TOOLID,
  MOD(ROWNUM,2) DATATYPE,
  NULL COMMISSIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) OPERATIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) COMMISSIONLISTPOS,
  LPAD('A',100,'A') DATA
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;
 
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'MVIS_DATA',CASCADE=>TRUE,NO_INVALIDATE=>FALSE)

With the new test table created, let's try the SQL statement again.  A 10053 trace will be enabled, in case you are interested in examining any potential automatic transformations of the SQL statement:

SET LINESIZE 140
SET PAGESIZE 1000
 
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'SQL_10053V';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
 
SELECT
  DATA
FROM 
  MVIS_DATA 
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';

The first of the above SQL statements output one row.  Here is the execution plan that was output:

SQL_ID  5pkwzs079jwu2, child number 0
-------------------------------------
SELECT   DATA FROM   MVIS_DATA WHERE   TOOLID = N'112'   AND DATATYPE =
0   AND COMMISSIONID IS NULL   AND OPERATIONID IS NULL   AND
COMMISSIONLISTPOS IS NULL
 
Plan hash value: 353063534
 
-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |   227 (100)|          |
|*  1 |  TABLE ACCESS FULL| MVIS_DATA |   122 | 13908 |   227   (3)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("OPERATIONID" IS NULL AND "COMMISSIONLISTPOS" IS NULL
              AND "DATATYPE"=0 AND SYS_OP_C2C("TOOLID")=U'112' AND "COMMISSIONID" IS
              NULL))

Notice the SYS_OP_C2C function in the Predicate Information section of the execution plan; that is a clue that a performance problem might be lurking.  Also notice that the INDEX UNIQUE SCAN operation was replaced with a TABLE ACCESS FULL operation – another clue that a performance problem may be lurking.  This section of the execution plan also indicates that the N'112' portion of the SQL statement was changed to U'112'.  Consulting the 10053 trace file indicates that the query optimizer rewrote the submitted SQL statement as the following:

SELECT
  "MVIS_DATA"."DATA" "DATA"
FROM
  "TESTUSER"."MVIS_DATA" "MVIS_DATA"
WHERE
  SYS_OP_C2C("MVIS_DATA"."TOOLID")=U'112'
  AND "MVIS_DATA"."DATATYPE"=0
  AND "MVIS_DATA"."COMMISSIONID" IS NULL
  AND "MVIS_DATA"."OPERATIONID" IS NULL
  AND "MVIS_DATA"."COMMISSIONLISTPOS" IS NULL

SYS_OP_C2C is an internal character set conversion function.

What happens to the execution plan if the N character is removed from the SQL statement?

SELECT
  DATA
FROM 
  MVIS_DATA 
WHERE 
  TOOLID = '112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

Below is the execution plan that was output:

SQL_ID  d70jxj3ypy60g, child number 0
-------------------------------------
SELECT   DATA FROM   MVIS_DATA WHERE   TOOLID = '112'   AND DATATYPE =
0   AND COMMISSIONID IS NULL   AND OPERATIONID IS NULL   AND
COMMISSIONLISTPOS IS NULL
 
Plan hash value: 1051843381
 
--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| MVIS_DATA    |     1 |   114 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C0050814 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("OPERATIONID" IS NULL AND "COMMISSIONLISTPOS" IS NULL AND
              "DATATYPE"=0 AND "COMMISSIONID" IS NULL))
   2 - access("TOOLID"='112')

Notice that the SYS_OP_C2C function does not appear in the Predicate Information section of the execution plan this time, and that the primary key index is used rather than a full table scan.  Unfortunately, the DBMS_XPLAN.DISPLAY_CURSOR function does not exist in Oracle Database 9.2.0.x; otherwise, the reason for the performance problem might have been much more readily apparent to the person who reported the issue to me.
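On 9.2.0.x, a rough substitute is EXPLAIN PLAN combined with DBMS_XPLAN.DISPLAY (the DISPLAY function did ship with 9.2).  A sketch of that approach follows; keep in mind that EXPLAIN PLAN shows a predicted plan, which does not always match the actual runtime plan:

EXPLAIN PLAN FOR
SELECT
  DATA
FROM 
  MVIS_DATA 
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);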

So, what is the purpose of that N character in the SQL statement?  I recall seeing SQL statements with a similar literal prefix in the past, such as the following, which converts a character string to a date:

SELECT DATE'2015-02-08' FROM DUAL;
 
DATE'2015
---------
08-FEB-15

After a fair amount of digging through the Oracle documentation, I located the following note about that N character:

“The TO_NCHAR function converts the data at run time, while the N function converts the data at compilation time.”

Interesting.  That quote suggests that the author of the SQL statement may have been trying to convert '112' to a NVARCHAR2 (or NCHAR).  Time for another test: the below script creates a table with the TOOLID column defined as NVARCHAR2, populates the table with 100,000 rows, and then collects statistics on the table and its primary key index:

CREATE TABLE MVIS_DATA_N (
  TOOLID NVARCHAR2(15),
  DATATYPE NUMBER,
  COMMISSIONID NUMBER,
  OPERATIONID NUMBER,
  COMMISSIONLISTPOS NUMBER,
  DATA VARCHAR2(100),
  PRIMARY KEY (TOOLID));
 
INSERT INTO
  MVIS_DATA_N
SELECT
  TO_CHAR(ROWNUM) TOOLID,
  MOD(ROWNUM,2) DATATYPE,
  NULL COMMISSIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) OPERATIONID,
  DECODE(MOD(ROWNUM,2),0,NULL,MOD(ROWNUM,2)) COMMISSIONLISTPOS,
  LPAD('A',100,'A') DATA
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;
 
COMMIT;
 
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'MVIS_DATA_N',CASCADE=>TRUE,NO_INVALIDATE=>FALSE)

Executing the query against this table also returns one row:

SELECT
  DATA
FROM 
  MVIS_DATA_N 
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

The execution plan follows:

SQL_ID  1yuzz9rqkvnpv, child number 0
-------------------------------------
SELECT   DATA FROM   MVIS_DATA_N WHERE   TOOLID = N'112'   AND DATATYPE
= 0   AND COMMISSIONID IS NULL   AND OPERATIONID IS NULL   AND
COMMISSIONLISTPOS IS NULL
 
Plan hash value: 1044325464
 
--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| MVIS_DATA_N  |     1 |   119 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C0050815 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("OPERATIONID" IS NULL AND "COMMISSIONLISTPOS" IS NULL AND
              "DATATYPE"=0 AND "COMMISSIONID" IS NULL))
   2 - access("TOOLID"=U'112')

Notice in the above Predicate Information section that the SYS_OP_C2C function does not appear, and that the N'112' portion of the SQL statement was still changed to U'112'.  The execution plan also shows that the primary key index was used, while a full table scan was required when the TOOLID column was defined as a VARCHAR2.

The person who reported the issue to me later provided the output of DESC MVIS_DATA, which indicated that the TOOLID column was in fact defined as a VARCHAR2 column.  If this person were running a more recent version of Oracle Database, he might be able to create a function based index that uses the SYS_OP_C2C function on the TOOLID column.  Such an index might look something like this:

CREATE INDEX IND_TOOLID_FIX ON MVIS_DATA (SYS_OP_C2C("TOOLID"));

Gathering statistics on the table and its indexes, executing the original SQL statement, and outputting the execution plan:

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'MVIS_DATA',CASCADE=>TRUE,NO_INVALIDATE=>FALSE)
 
SELECT
  DATA
FROM 
  MVIS_DATA 
WHERE 
  TOOLID = N'112' 
  AND DATATYPE = 0 
  AND COMMISSIONID IS NULL 
  AND OPERATIONID IS NULL 
  AND COMMISSIONLISTPOS IS NULL;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

The first of the above queries output the expected one row, while the second query output the following execution plan:

SQL_ID  5pkwzs079jwu2, child number 1
-------------------------------------
SELECT   DATA FROM   MVIS_DATA WHERE   TOOLID = N'112'   AND DATATYPE =
0   AND COMMISSIONID IS NULL   AND OPERATIONID IS NULL   AND
COMMISSIONLISTPOS IS NULL
 
Plan hash value: 1497912695
 
----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| MVIS_DATA      |     1 |   125 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_TOOLID_FIX |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("OPERATIONID" IS NULL AND "COMMISSIONLISTPOS" IS NULL AND "DATATYPE"=0
              AND "COMMISSIONID" IS NULL))
   2 - access("MVIS_DATA"."SYS_NC00007$"=U'112')

In the Predicate Information section of the execution plan, notice the absence of the SYS_OP_C2C function on the TOOLID column – those values are pre-computed in the virtual column (SYS_NC00007$) created for the function based index.  An index range scan is reported in the execution plan rather than an index unique scan (the function based index was not declared as unique when it was created), so the SQL statement should execute much faster than the roughly 30 seconds required without the function based index.
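A hedged way to see the hidden virtual column that supports the function based index is to query USER_TAB_COLS (the SYS_NC00007$ name is system generated, so it will differ from one database to the next):

SELECT
  COLUMN_NAME,
  DATA_TYPE,
  HIDDEN_COLUMN
FROM
  USER_TAB_COLS
WHERE
  TABLE_NAME = 'MVIS_DATA';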

So, what are the options that were mentioned above?

  • Have the application programmer fix the SQL statements.
  • Upgrade to a version of Oracle Database (if that version is supported by the application) that supports the SYS_OP_C2C function, and create a function based index using that function.
  • If the TOOLID column only contains numbers, that column should be defined as NUMBER rather than VARCHAR2 – consider redefining the column with that datatype.
  • If that N character is always used when this column is referenced, the column probably should be defined as NVARCHAR2 rather than VARCHAR2 – consider redefining the column with that datatype.
  • Consider that the application is working as designed, and that it is nice to receive 30 second breaks every now and then.
  • Take a DUMP and share it with the application developer.  On second thought, such an approach may have limited success in helping to solve this problem.

Here is a little experiment with the DUMP function, which reveals Oracle’s internal representation of data – refer to the Internal datatypes and datatype codes in the Oracle documentation for help in decoding the Typ= values.

SELECT DUMP(112) A FROM DUAL;
 
A
---------------------
Typ=2 Len=3: 194,2,13
 
/* ------------------- */
SELECT DUMP(TO_CHAR(112)) A FROM DUAL;
 
A
---------------------
Typ=1 Len=3: 49,49,50
 
/* ------------------- */
SELECT DUMP('112') A FROM DUAL;
 
A
----------------------
Typ=96 Len=3: 49,49,50
 
/* ------------------- */
SELECT DUMP(N'112') A FROM DUAL;
 
A
----------------------------
Typ=96 Len=6: 0,49,0,49,0,50
 
/* ------------------- */
SELECT DUMP(SYS_OP_C2C('112'))  A FROM DUAL;
 
A
----------------------------
Typ=96 Len=6: 0,49,0,49,0,50
 
/* ------------------- */
SELECT DUMP(TO_NCHAR('112')) A FROM DUAL;
 
A
---------------------------
Typ=1 Len=6: 0,49,0,49,0,50

It is possibly interesting to note that the internal representation for N'112' is CHAR (or NCHAR – Typ=96), while the internal representation for TO_NCHAR('112') (and TO_NCHAR(112)) is VARCHAR2 (or NVARCHAR2 – Typ=1).
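The TO_NCHAR(112) case mentioned above may be confirmed with one more DUMP call – the expected result, assuming an AL16UTF16 national character set as in the earlier examples, is Typ=1 Len=6: 0,49,0,49,0,50:

SELECT DUMP(TO_NCHAR(112)) A FROM DUAL;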

This blog’s statistics indicate that the search engine search term Oracle NVARCHAR slow resulted in two page views of this blog yesterday.  I can’t help but wonder if the person who performed that search might have been helped by some of the above analysis.





Install Nagios on a Synology DiskStation DS415+ (Plex Support Also Added)

December 24, 2014 (Modified December 29, 2014 – Added Plex Support Section, January 11, 2015 – Added Nagios Web Status and Fixed Status Logging, April 5, 2015 – Added Plex/Nagios/Wordpress Stress Test, May 25, 2015 – current Plex download supports CPU in DS415+)

(Back to the Previous Post in the Series)

This article describes how to compile and run Nagios 4.0.8 (with the 2.0.3 plugins) on a Synology DiskStation DS415+ (64 bit), which utilizes an Intel Atom Avoton processor (cat /proc/cpuinfo indicates that the unit is using a 2.40GHz Atom C2538) and runs the DSM 5.1-5021 (the latest version as of December 24, 2014) operating system.  Nagios is a very useful network monitoring (and, with plugins, even Oracle Database monitoring) utility that uses email or other communication means to report device or service outages.  Not all Synology DiskStation NAS devices use Intel based CPUs – some of the less expensive DiskStations use ARM type processors (see this link to determine the type of CPU installed in a specific DiskStation).  It may be possible to produce a working version of Nagios on NAS devices that do not have Intel 32 bit or 64 bit processors (such as the DS212+), but I have not yet fully tested the procedure with many different NAS units.  Note that even though the DS415+ and DS1813+ both use Intel CPUs running a 64 bit DSM version, the installation steps are a bit different (I tried various unsuccessful approaches over the last two months, and finally found a working combination).

Warning: A lot of what follows is based on experimentation, with the end goal of having Nagios running on a Synology DiskStation with the ability to ping devices on the network or the Internet, sending an email to an administrator when a device stops responding to ping requests, and a second email when the device resumes responding to ping requests.  This functionality represents a small fraction of Nagios’ capabilities through the use of plugins (see my article that describes setting up the Oracle client on a DiskStation to permit Oracle Database monitoring).  File paths vary from one Linux distribution to the next, which adds a bit of challenge to making certain that the files are placed in the required directories.  Copying a file to the wrong directory may temporarily disable the DiskStation and require the reinstallation of the Synology DSM operating system.  The directions below are not final, and quite likely do not represent the most efficient approaches to accomplish the end goal (note that the steps are more efficient than those for the DS1813+ and DS412+) – but the directions will hopefully be “close enough to correct” to allow the average reader of this blog to ping and send email alerts from a DiskStation.

I have relied on the free Nagios network monitoring solution since 2002 to provide an early warning of problems associated with network attached equipment including servers, production floor computers, switches, printers, wireless access points, IP cameras, website accessibility, Internet connection stability, etc.  While I rely on Nagios’ alerting system, I am not an expert at configuring the Nagios network monitoring system; the Nagios configuration documentation may be downloaded here.

First, make certain that the Telnet Service (or SSH Service if that is preferred) is enabled on the DiskStation.  In the DiskStation’s Control Panel, click Terminal & SNMP.

nagiosds415_enable_telnet1

Then put a checkmark next to Enable Telnet service, and click the Apply button.

nagiosds415_enable_telnet2

Verify that the computer that you intend to use has a Telnet client.  For Windows 7, access the Programs link in the Control Panel, and then click the Turn Windows features on or off link.  Make certain that there is a checkmark next to Telnet Client, then click the OK button.

nagiosds415_enable_telnet_computer

Open a command line (in Windows: Start – Run – type  cmd  and press the Enter key).  On the command line, type telnet followed by either the name of the DiskStation or the IP address of the DiskStation, then press the Enter key.  When prompted for a username, type root and press the Enter key.  Type the admin user’s password (the one used to access the DSM interface in a web browser) and press the Enter key.  For example, if the DiskStation has an IP address of 192.168.1.55, the screen would appear similar to the following just before pressing the Enter key:

nagiosds415_telnet_to_diskstation

The command line on the DiskStation is very similar to the command line on a Unix or Linux computer, and is somewhat similar to a Windows command line or MS-DOS command line (use / rather than \, use ls rather than dir, use vi rather than edit):

nagiosds415_telnet_to_diskstation_command_test
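A few illustrative equivalents (generic examples, not necessarily the exact commands shown in the capture above):

cd /volume1        # rather than: cd \volume1
ls -l              # rather than: dir
vi test.txt        # rather than: edit test.txt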

We first need to add ipkg support to the DiskStation; detailed directions may be viewed at this link.  The exact directions may be different for other DiskStation models, but the following directions work for the DS415+, DS1813+ and DS412+ (note that all files downloaded from the Internet will be placed on volume1 in a new downloads directory – copy and paste the lines into the Telnet session one line at a time; simply right-click inside the telnet window to paste if you are using Windows 7/8’s telnet client):

cd /volume1
mkdir downloads
cd downloads
wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh
chmod +x syno-i686-bootstrap_1.2-7_i686.xsh
sh syno-i686-bootstrap_1.2-7_i686.xsh

The vi editor is used on the DiskStation to modify files; vi is a bit challenging to use at first sight, so you may need help with a couple of basic commands (see this quick reference for other commands).  The commands in vi are case sensitive (i is not the same as I).  When a file is opened, press the i key on the keyboard to allow making changes to the file (such as typing or deleting text).  When finished making changes to the file, press the Esc key.  Once the Esc key is pressed, type ZZ to save the changed file and quit, or :q! to quit without saving the changes.

Next, we must modify the file that establishes the environment for the root user, when that user connects to the DiskStation.  This change is needed as part of the ipkg installation.  Edit the .profile file used by the root user:

vi /root/.profile

Press the i key to switch to insert mode, place a # character in front of the two lines that contain the word PATH, and save the file (press the Esc key then type ZZ):

nagiosds415_comment_out_path

Reboot the DiskStation using the Synology interface by clicking the Options button (the head shaped icon) near the top-right, and then click Restart and confirm that the DiskStation should be restarted:

nagiosds415_restart_diskstation

Once the Synology reboots, reconnect a telnet session to the Synology.  Update ipkg support and install some ipkg items that will be used either directly by Nagios or during the installation of Nagios.  When installing the optware-devel package, an error may appear stating that there is an incompatibility between wget and wget-ssl – simply ignore that error.  An error will also appear when installing Apache – that problem will be fixed in the step that follows.

ipkg update
ipkg list
ipkg update wget-ssl
ipkg install optware-devel
ipkg install gcc
ipkg install libtool
ipkg install MySQL
ipkg install openssl
ipkg install openssl-dev
ipkg install sendmail
ipkg install inetutils
ipkg install apache

The ipkg install apache command will likely result in an error related to ext_filter_module.  Edit /opt/etc/apache2/httpd.conf using vi, and put a # character in front of the line LoadModule ext_filter_module libexec/mod_ext_filter.so:

nagiosds415_comment_out_ext_filter_module

Save the file, and re-execute the ipkg install apache command.

ipkg install apache

Download the compilers for the Avoton CPU, which is installed in the DS415+ (the second compiler downloaded below will be used for the Nagios install – the first compiler typically resulted in errors, ex: ../lib/libnagios.a: could not read symbols: Archive has no index; run ranlib to add one, when compiling the Nagios program):

cd /volume1/downloads
wget http://sourceforge.net/projects/dsgpl/files/DSM%205.1%20Tool%20Chains/Intel%20x86%20Linux%203.2.40%20%28Avoton%29/gcc473_glibc217_x86_64_avoton-GPL.tgz
tar zxpf gcc473_glibc217_x86_64_avoton-GPL.tgz -C /usr/local/
wget http://sourceforge.net/projects/dsgpl/files/DSM%205.1%20Tool%20Chains/Intel%20x86%20Linux%203.2.40%20%28Avoton%29/gcc463_glibc213_i686_avoton-GPL.tgz 
tar zxpf gcc463_glibc213_i686_avoton-GPL.tgz -C /usr/local/

The above commands will create two directory structures, named x86_64-pc-linux-gnu and i686-pc-linux-gnu, in the /usr/local directory.  Edit the .profile file to remove the # character that was earlier added in front of the lines containing the word PATH, save the file, then reboot the Synology (see the earlier provided screen captures for the lines that need to be modified, and the process for rebooting the Synology):

vi /root/.profile
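After the # characters are removed, the two lines should look something like the following (a hypothetical example – the exact paths vary between DSM versions, so restore whatever was commented out rather than copying these values):

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin
export PATH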

Reconnect to the DiskStation using Telnet after the Synology reboots, and compile libltdl – note that the PATH modification below seems to be necessary to avoid an ldconfig: command not found error or a message about Perl not being found:

PATH=$PATH:/opt/bin/
cd /opt/share/libtool/libltdl/
env CC=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-gcc \
LD=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ld \
RANLIB=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ranlib \
CFLAGS="-I/usr/local/i686-pc-linux-gnu/include" \
LDFLAGS="-L/usr/local/i686-pc-linux-gnu/lib" \
./configure --host=i686-pc-linux-gnu --target=i686-pc-linux-gnu --build=i686-pc-linux-gnu --prefix=/opt
 
make all
make install

Note that the following message may appear later when we attempt to verify that Nagios compiled correctly:

/opt/bin/nagios: error while loading shared libraries: libltdl.so.3: cannot open shared object file: No such file or directory

To avoid that error, execute the following commands to copy a couple of files to the correct locations (if the files do not already exist in those locations):

cp /opt/lib/libltdl.so.3 /opt/local/lib/libltdl.so.3
cp /opt/lib/libltdl.so.3 /usr/lib/libltdl.so.3
cp /opt/lib/libltdl.so /usr/lib/libltdl.so

Using the DiskStation’s Control Panel, click Group, and create a nagios group – the group does not require any special DiskStation permissions.

nagiosds415_create_nagios_group

Using the DiskStation’s Control Panel, click User and create a nagios user and add that user to the nagios group. The nagios user does not require any specific DiskStation permissions.

Next, switch back to the telnet session, download the Nagios source code, and compile the source code (note that if a mistake is made and the nagios-4.0.8 directory must be removed, change to the /volume1/downloads directory, use this command to remove the directory, then re-execute the tar command: rm -rf nagios-4.0.8 ):

cd /volume1/downloads
wget http://sourceforge.net/projects/nagios/files/nagios-4.x/nagios-4.0.8/nagios-4.0.8.tar.gz
tar xzf nagios-4.0.8.tar.gz
cd nagios-4.0.8
 
env CC=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-gcc \
LD=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ld \
RANLIB=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ranlib \
CFLAGS="-I/usr/local/i686-pc-linux-gnu/include" \
LDFLAGS="-L/usr/local/i686-pc-linux-gnu/lib" \
./configure --host=i686-pc-linux-gnu --target=i686-pc-linux-gnu --build=i686-pc-linux-gnu --prefix=/opt --with-command-group=nagios
 
make all
make install
make install-init
make install-config
make install-commandmode

Try running the Nagios verify command by executing the command on the following line (a Segmentation fault (core dumped) error message is bad news, which may indicate that the wrong compiler was used):

/opt/bin/nagios -v /opt/etc/nagios.cfg

If the compile process worked correctly, the telnet session should appear similar to below when the above verify command is executed:

nagiosds415_verify_ok

If the above command was successful, copy the Nagios startup script to the correct location so that Nagios will automatically start when the DiskStation is rebooted:

cp /usr/local/etc/rc.d/nagios /opt/etc/init.d/S81nagios

Verify that the ownership of the nagios directory is set correctly:

chown nagios:nagios /opt/bin/nagios -R

In addition to the main /opt/etc/nagios.cfg Nagios file, there are several other configuration files that are potentially used by Nagios (defined in the nagios.cfg file) – we will take a look at those files later:

/opt/etc/objects/commands.cfg
/opt/etc/objects/contacts.cfg
/opt/etc/objects/timeperiods.cfg
/opt/etc/objects/templates.cfg
/opt/etc/objects/localhost.cfg
/opt/etc/objects/windows.cfg
/opt/etc/objects/server.cfg
/opt/etc/objects/switch.cfg
/opt/etc/objects/printer.cfg

We need to make a couple of adjustments in the  /opt/etc/objects/commands.cfg file.

vi /opt/etc/objects/commands.cfg

Change the ‘notify-host-by-email’ command definition section as follows:

define command{
    command_name notify-host-by-email
    command_line /usr/bin/printf "%b" "Subject: $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$\n\n***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /opt/sbin/sendmail -vt $CONTACTEMAIL$
    }

Change the ‘notify-service-by-email’ command definition section as follows:

define command{
    command_name notify-service-by-email
    command_line /usr/bin/printf "%b" "Subject: $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$\n\n***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /opt/sbin/sendmail -vt $CONTACTEMAIL$
    }

Change the ‘check_ping’ command definition section as follows (feel free to read the documentation for check_ping and specify different values):

define command{
        command_name    check_ping
        command_line    $USER1$/check_ping -H $HOSTADDRESS$ -w 3000,25% -c 5000,90% -p 3 
        }

Save the file and exit vi.

(This part still needs some fine tuning to make the web interface work with Nagios.)  Edit the Nagios Makefile and change the line beginning with HTTPD_CONF to show HTTPD_CONF=/opt/etc/apache2/conf.d and then save the file.

cd /volume1/downloads/nagios-4.0.8
vi Makefile

nagiosds415_change_httpd_conf

Execute the following command:

make install-webconf

Create a nagiosadmin user for the web administration, and specify a password when prompted (edit January 11, 2015: the file location specified by the command below is incorrect if you plan to use the Nagios monitoring web pages – if so, execute the command as shown below, and we will later recreate the file in the correct location):

htpasswd -c /usr/local/etc/htpasswd.users nagiosadmin

Next, we need to start setting up the plugins for Nagios.  First the net-snmp source code is downloaded and extracted:

cd /volume1/downloads
wget http://sourceforge.net/projects/net-snmp/files/net-snmp/5.7.2/net-snmp-5.7.2.tar.gz
tar xzf net-snmp-5.7.2.tar.gz
cd net-snmp-5.7.2

Execute the following to compile the net-snmp source (note that this command uses the second compiler that was downloaded):

env CC=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-gcc \
LD=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ld \
RANLIB=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ranlib \
CFLAGS="-I/usr/local/i686-pc-linux-gnu/include" \
LDFLAGS="-L/usr/local/i686-pc-linux-gnu/lib" \
./configure --host=i686-pc-linux-gnu --target=i686-pc-linux-gnu --build=i686-pc-linux-gnu --prefix=/opt

Several prompts will appear on the screen when the command is executed.  I entered the following for the prompts:

Default version of SNMP to use (3): 3
System Contact Information: (Enter)
System Location (Unknown): (Enter)
Location to write logfile (/var/log/snmpd.log): /opt/var/snmpd.log
Location to write persistent information (/var/net-snmp): (Enter)

Two additional commands to execute:

make -i
make install -i

Now we need to download the source code for the Nagios plugins (check_apt, check_breeze, check_by_ssh, check_clamd, check_cluster, check_dhcp, check_disk, check_disk_smb, check_dns, check_dummy, check_file_age, check_flexlm, check_ftp, check_http, check_icmp, check_ide_smart, check_ifoperstatus, check_ifstatus, check_imap, check_ircd, check_jabber, check_ldap, check_ldaps, check_load, check_log, check_mailq, check_mrtg, check_mrtgtraf, check_mysql, check_mysql_query, check_nagios, check_nntp, check_nntps, check_nt, check_ntp, check_ntp_peer, check_ntp_time, check_nwstat, check_oracle, check_overcr, check_ping, check_pop, check_procs, check_real, check_rpc, check_sensors, check_simap, check_smtp, check_snmp, check_spop, check_ssh, check_ssmtp, check_swap, check_tcp, check_time, check_udp, check_ups, check_users, check_wave) that allow Nagios to perform various monitoring tasks:

cd /volume1/downloads
wget https://www.nagios-plugins.org/download/nagios-plugins-2.0.3.tar.gz
tar xzf nagios-plugins-2.0.3.tar.gz
cd nagios-plugins-2.0.3
PATH=$PATH:/opt/bin/

Now compile the Nagios plugins:

env CC=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-gcc \
LD=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ld \
RANLIB=/usr/local/i686-pc-linux-gnu/bin/i686-pc-linux-gnu-ranlib \
CFLAGS="-I/usr/local/i686-pc-linux-gnu/include" \
LDFLAGS="-L/usr/local/i686-pc-linux-gnu/lib" \
./configure --with-openssl=/usr/syno/bin/openssl --with-nagios-user=nagios --with-nagios-group=nagios --with-ping-command="/opt/bin/ping -c %d %s" --psdir=/bin --with-ps-varlist="&procpid,&procppid,&procvsz,&procrss,procprog,&pos" --with-ps-cols=6 --with-ps-format="%d %d %d %d %s %n" --with-ps-command="/bin/ps -w" --host=i686-pc-linux-gnu --target=i686-pc-linux-gnu --build=i686-pc-linux-gnu --prefix=/opt
 
make -i
make install -i

At this point, the Nagios network monitoring utility will likely experience an error similar to the following when attempting to send an alert email:

output=collect: Cannot write ./dfr6BFFPC7027203 (bfcommit, uid=1026, gid=25): Permission denied

Execute the following commands, which should fix the above problem:

chmod g+w /opt/var/spool/clientmqueue
chmod 444 /opt/etc/mail/*.cf
chmod 7555 /opt/sbin/sendmail

We will need to use su to test the execution of various commands as the nagios user. Without the following fix (described here), you might see the following error message when attempting to execute a command as the nagios user:

su: warning: cannot change directory to /var/services/homes/nagios: No such file or directory
su: /sbin/nologin: No such file or directory

Enter the following commands:

mkdir /var/services/homes
mkdir /var/services/homes/nagios
chown nagios:nagios /var/services/homes/nagios -R
vi /etc/passwd

Locate the line in the passwd file for the Nagios user.  Near the end of the line, /sbin/nologin should appear.  Replace that text with /bin/ash then save and exit vi.
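As a hypothetical example (the uid and gid values will differ on your system), the nagios line in /etc/passwd would change from something like the first line below to the second:

nagios:x:1026:100::/var/services/homes/nagios:/sbin/nologin
nagios:x:1026:100::/var/services/homes/nagios:/bin/ash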

Verify that the Nagios user is able to execute the check_ping plugin.  Replace MyDeviceHere with either an IP address or a network device name (such as localhost) that is on your network:

su - nagios -c "/opt/libexec/check_ping -H MyDeviceHere -w 5000,80% -c 5000,80% -p 5"

If there were no problems, then something similar to the following should appear (a Segmentation fault (core dumped) error message is bad news, which may indicate that the wrong compiler was used):

nagiosds415_check_ping_test

If the ping command (called by check_ping) is not able to resolve a network device name, and the fully qualified DNS name was not specified (MyDeviceHere.MyDomainHere.com), edit the /etc/resolv.conf file:

vi /etc/resolv.conf

On a new line in the file, add the following line (replacing MyDomainHere.com with your DNS domain name for the network):

search MyDomainHere.com

Verify that sendmail works for the Nagios user.  At the prompt that appears, type a short message, press the Enter key, type a period, then press the Enter key again (replace MyEmailAddressHere@MyDomainHere.com with your email address):

su - nagios -c "/opt/sbin/sendmail -vt MyEmailAddressHere@MyDomainHere.com"
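A hypothetical transcript of that test follows (the typed input appears after the command; the period on a line by itself ends the message, and the -v switch causes sendmail to echo the delivery conversation, which is omitted here):

su - nagios -c "/opt/sbin/sendmail -vt MyEmailAddressHere@MyDomainHere.com"
This is a test message from the DiskStation.
.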

If you see the following message, then there are two problems:

/opt/etc/mail/sendmail.cf: line 73: fileclass: cannot open '/opt/etc/mail/local-host-names': Group writable directory
WARNING: local host name (DS415) is not qualified; see cf/README: WHO AM I?

To fix the second problem (the warning), edit the /etc/hosts file using vi. Locate the line with the Synology’s IP address and the Synology’s name (for example DS415).  Between those two entries add the Synology’s name followed by a period and the domain name.  For example, the line may look like the following once edited.  Save the file and exit vi:

192.168.1.55   DS415.MyDomainHere.com DS415

To fix the “cannot open… Group writable directory” error, try the following (note that it is possible that only the first line is necessary, so try the sendmail test again after executing just the first line below):

chmod 555 /opt/etc
chmod 755 /opt/etc/mail
chmod 444 /opt/etc/mail/local-host-names

Then repeat the email test by executing:

su - nagios -c "/opt/sbin/sendmail -vt MyEmailAddressHere@MyDomainHere.com"

—-

As a general tip, it is important to always verify the Nagios configuration before starting (or restarting after a configuration change) Nagios.  To verify the Nagios configuration type the following:

/opt/bin/nagios -v /opt/etc/nagios.cfg

Once the monitoring target devices are defined, it is possible to start Nagios as a background task (daemon) by executing the following:

/opt/bin/nagios -d /opt/etc/nagios.cfg

To stop Nagios that is executing as a background task, use the ps command to locate the Nagios process, find the lowest process ID (PID) for the line that includes nagios.cfg (17346 in this case), then kill the process that is associated with that PID:

ps | grep 'nagios'

nagiosds415_stop_nagios_daemon
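For example, with the PID of 17346 identified above, the daemon would be stopped with:

kill 17346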

At this point, Nagios will hopefully run as a background task, and it should be able to ping and send email alerts.  However, if you were following the above directions, we have not yet instructed Nagios which devices to monitor, and to whom the alert emails should be sent.  The next step is to define the email contacts by modifying the /opt/etc/objects/contacts.cfg file (see the documentation for assistance):

vi /opt/etc/objects/contacts.cfg

After setting up the contacts, we should probably tell Nagios which devices to monitor.  If there are a lot of devices on your network to be monitored, you might find that using Microsoft Excel rather than vi to create the object definitions makes the task more manageable.  See the previous article for the steps to use Microsoft Excel.

If you decided to use some of the non-standard Nagios group names (as I did), those non-standard group names must be defined in the /opt/etc/objects/templates.cfg file:

vi /opt/etc/objects/templates.cfg

A portion of the additional entries that I made in this file include the following:

define host{
       name                    ap              ; The name of this host template
       use                     generic-host    ; Inherit default values from the generic-host template
       check_period            24x7            ; By default, access points are monitored round the clock
       check_interval          5               ; Actively check the access point every 5 minutes
       retry_interval          1               ; Schedule host check retries at 1 minute intervals
       max_check_attempts      10              ; Check each access point 10 times (max)
       check_command           check_ping      ; Default command to check if access points are "alive"
       notification_period     24x7            ; Send notification out at any time - day or night
       notification_interval   30              ; Resend notifications every 30 minutes
       notification_options    d,r             ; Only send notifications for specific host states
       contact_groups          admins          ; Notifications get sent to the admins by default
       hostgroups              ap              ; Host groups that access points should be a member of
       register                0               ; DONT REGISTER THIS - ITS JUST A TEMPLATE
       }

define host{
       name                    camera          ; The name of this host template
       use                     generic-host    ; Inherit default values from the generic-host template
       check_period            24x7            ; By default, cameras are monitored round the clock
       check_interval          60              ; Actively check the device every 60 minutes
       retry_interval          1               ; Schedule host check retries at 1 minute intervals
       max_check_attempts      10              ; Check each device 10 times (max)
       check_command           check_ping      ; Default command to check if devices are "alive"
       notification_period     24x7            ; Send notification out at any time - day or night
       notification_interval   240             ; Resend notifications every 240 minutes
       notification_options    d,r             ; Only send notifications for specific host states
       contact_groups          admins          ; Notifications get sent to the admins by default
       hostgroups              camera          ; Host groups that cameras should be a member of
       register                0               ; DONT REGISTER THIS - ITS JUST A TEMPLATE
       }
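Note that the ap and camera host groups named in the hostgroups directives above must themselves be defined somewhere in the configuration files.  A minimal sketch of such a definition (the alias text here is just an example):

define hostgroup{
       hostgroup_name  ap
       alias           Wireless Access Points
       }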

Nagios will not know that it should read the additional configuration files until it is told to do so by modifying the /opt/etc/nagios.cfg file.

vi /opt/etc/nagios.cfg

If you have selected to use any of the custom files that were created based on my previous article, instruct Nagios to read the associated file by adding entries to the nagios.cfg file:

# Charles Hooper's object types
cfg_file=/opt/etc/objects/ap.cfg
cfg_file=/opt/etc/objects/camera.cfg
cfg_file=/opt/etc/objects/computer.cfg
cfg_file=/opt/etc/objects/external.cfg
cfg_file=/opt/etc/objects/other.cfg
cfg_file=/opt/etc/objects/printer.cfg
cfg_file=/opt/etc/objects/server.cfg
cfg_file=/opt/etc/objects/switch.cfg

A large number of changes were likely made to the Nagios configuration files, so it is important to verify that there are no errors in the configuration:

/opt/bin/nagios -v /opt/etc/nagios.cfg

If no errors are found in the configuration, terminate (kill) nagios and then restart Nagios as described above.

—-

(Added January 11, 2015)

While trying to find a way to allow the Nagios status web pages to work on the DS415+, I discovered that Nagios was not running in daemon mode once I had made entries into the various configuration files to specify the devices to be monitored.  Taking a look at my previous article, I quickly found the solution for the daemon mode problem.  In a telnet session enter the following:

mkdir /opt/var/nagios
mkdir /opt/var/nagios/archives
mkdir /opt/var/nagios/spool
mkdir /opt/var/nagios/spool/checkresults
mkdir /opt/var/nagios/rw/
chown nagios:nagios /opt/var/nagios -R
chmod g+rwx /opt/var/nagios/rw
chmod g+s /opt/var/nagios/rw
vi /opt/etc/nagios.cfg

In the nagios.cfg file, I made the following changes:

log_file=/opt/var/nagios/nagios.log
object_cache_file=/opt/var/nagios/objects.cache
precached_object_file=/opt/var/nagios/objects.precache
status_file=/opt/var/nagios/status.dat
command_file=/opt/var/nagios/rw/nagios.cmd
lock_file=/opt/var/nagios/nagios.lock
temp_file=/opt/var/nagios/nagios.tmp
log_archive_path=/opt/var/nagios/archives
check_result_path=/opt/var/nagios/spool/checkresults
state_retention_file=/opt/var/nagios/retention.dat
debug_file=/opt/var/nagios/nagios.debug

Save and exit vi.  If the Nagios daemon is running in the background, find the process and kill it (replace 24532 with the process ID of the first line that contains /opt/bin/nagios -d /opt/etc/nagios.cfg):

ps | grep 'nagios'
kill 24532

Verify the Nagios configuration, and if there were no problems, then start Nagios in daemon mode:

/opt/bin/nagios -v /opt/etc/nagios.cfg
/opt/bin/nagios -d /opt/etc/nagios.cfg

Check the Nagios log file for problems, displaying just the last 100 lines:

tail -n 100 /opt/var/nagios/nagios.log

I never bothered to determine how to make the Nagios monitoring status web pages work with the Synology DS412+, DS1813+, and DS212+.  I thought that I would see what steps would be required to make that functionality work on a Synology DS415+ (note that the following steps may work exactly the same on the Synology DS412+, DS1813+, and DS212+). The Synology DSM operating system utilizes port 5000 for web traffic.  If you add the WordPress package to the Synology, that package uses the standard port 80 for web traffic.  If you followed the above steps for installing Nagios, you installed a second copy of the Apache web server on the Synology that uses port 8000 for web traffic.  If your Synology has an IP address of 192.168.1.60, then you would use the following website address to access the second web server on the Synology: http://192.168.1.60:8000/

The Nagios monitoring status web pages, once set up, will appear as a directory of that second web server, as shown below:

nagiosds415_web_status

The value of the Nagios monitoring status web pages becomes apparent fairly quickly by reviewing the above screen capture.  The above Tactical Overview shows that there are 18 monitored devices that are up, one monitored device that is down, and another device that is in the process of being checked.  The links at the left provide additional information about the monitored devices.  Let’s configure the second copy of Apache on the Synology to support the Nagios monitoring status web pages.

The second copy of Apache uses the configuration file /opt/etc/apache2/httpd.conf.  Edit that file:

vi /opt/etc/apache2/httpd.conf

Add the following directly below the # Supplemental configuration heading, which is near the end of the file:

# Added per Charles Hooper's Nagios installation guide for Synology DS415+
Include etc/apache2/conf.d/nagios.conf
Include etc/apache2/conf.d/php.conf

Save the file and exit vi.  Next, we need to create the htpasswd.users file in the correct location.  In the above steps, we executed the following command:

/opt/sbin/htpasswd -c /usr/local/etc/htpasswd.users nagiosadmin

That htpasswd.users file is expected to be in the /opt/etc directory.  Execute the following command to create the file in the correct directory (you will be prompted for a password for the nagiosadmin user):

/opt/sbin/htpasswd -c /opt/etc/htpasswd.users nagiosadmin

The Nagios monitoring status web pages require PHP support to be added to the second Apache installation.  Execute the following commands to install PHP:

/opt/bin/ipkg update
/opt/bin/ipkg list
/opt/bin/ipkg install php
/opt/bin/ipkg install php-apache

Next, we need to modify the /opt/etc/apache2/conf.d/nagios.conf file that was created during the Nagios installation.

vi /opt/etc/apache2/conf.d/nagios.conf

Below the heading <Directory "/opt/share"> add the following line:

   DirectoryIndex index.php

Save the file and exit vi.  Next, we need to adjust the php.ini file on the Synology:

vi /opt/etc/php.ini

In the file (near line 108), locate the following line:

output_buffering = Off

Change that line to show:

output_buffering = 8192

Locate the following line (near line 248) in the file:

memory_limit = 8M

Change that line to show:

memory_limit = 128M

There may be a second php.ini file located in the /etc/php directory; make the same change to that file, then save the file and exit vi:

vi /etc/php/php.ini

Perform a graceful restart of the second copy of Apache:

/opt/sbin/apachectl -k graceful

Try to access the Nagios status monitoring pages in a web browser (replace 192.168.1.60 with the IP address of your Synology)  http://192.168.1.60:8000/nagios/

You should be prompted to enter a username and password.  Enter nagiosadmin for the username, and enter the password for that user that you created when executing the /opt/sbin/htpasswd command earlier.  If the web page does not display, take a look at the last 100 lines of the Apache error log to see if any clues are provided:

tail -n 100 /opt/var/apache2/log/error_log

Some of the interesting error messages that I experienced included the following:

[Sun Jan 04 14:07:55 2015] [error] [client 192.168.1.218] (2)No such file or directory: Could not open password file: /opt/etc/htpasswd.users
 
[Sun Jan 04 14:23:04 2015] [error] [client 192.168.1.218] Directory index forbidden by Options directive: /opt/share/
 
[Sun Jan 04 15:07:09 2015] [error] [client 192.168.1.218] File does not exist: /opt/share/<, referer: http://192.168.1.60:8000/nagios/
 
[Sun Jan 04 17:53:06 2015] [notice] child pid 15126 exit signal Segmentation fault (11)
 
[Sun Jan 11 09:41:57 2015] [error] [client 192.168.1.213] script not found or unable to stat: /opt/sbin/statusmap.cgi, referer: http://192.168.1.60:8000/nagios/side.php

The last of the above errors still remains; the file /opt/sbin/statusmap.cgi does not exist.  For now, I will let someone else find a solution for that problem.

If you find that a monitored device is down, using the web interface it is possible to disable future checking of the device, as well as alter a number of other notification options.  Unfortunately, clicking any of the options under the Host Commands heading will result in a permission error.

nagiosds415_change_notifications

To fix the permission error:

Using the Synology Control Panel interface, create a new group named www – this group requires no specific Synology permissions.  Next, using the Synology Control Panel interface, create a user named apache – make this user a member of the nagios, users, and www groups.  This user requires no specific Synology permissions.  Then change the username and group under which Apache executes:

vi /opt/etc/apache2/httpd.conf

In the httpd.conf file, locate the following two lines:

User nobody
Group #-1

Change those lines to appear as follows:

User apache
Group www

Save the file and exit vi.  Gracefully restart the second copy of Apache:

/opt/sbin/apachectl -k graceful

Refresh the Nagios monitoring status web page – the options under the Host Commands heading should no longer return an error.

—-


Adding Plex Support (Added December 29, 2014)

Note: Some of these steps may use utilities that were installed during the Nagios install.  Plex does not yet support the CPU type in the DS415+ out of the box, so it is necessary to modify a configuration file during the install.  (Note May 25, 2015: The Plex download version 0.9.12.1.1079 natively supports the DS415+’s CPU, so after downloading the file it is possible to skip ahead to the step that adjusts the Trust Level setting in the Synology Package Center.)

Connect to the DS415+ using a telnet session and the root username and password (see the directions above if the steps are not clear).  Change to the downloads directory that was created above, and use the wget command to download what is, as of today, the current version of Plex:

cd /volume1/downloads/
wget https://downloads.plex.tv/plex-media-server/0.9.11.7.803-87d0708/PlexMediaServer-0.9.11.7.803-87d0708-x86.spk

Create a directory, extract the downloaded PlexMediaServer-0.9.11.7.803-87d0708-x86.spk file to the just created directory, then switch to that directory:

mkdir PlexMediaServer-0.9.11.7.803-87d0708-x86
tar -xvf PlexMediaServer-0.9.11.7.803-87d0708-x86.spk -C /volume1/downloads/PlexMediaServer-0.9.11.7.803-87d0708-x86
cd PlexMediaServer-0.9.11.7.803-87d0708-x86

Next, we need to edit the INFO file to add support for the CPU that is installed in the DS415+:

vi INFO

If you are not familiar with using vi to edit files, you will need to switch vi to edit mode by typing i on the keyboard (note that is a lowercase i) before attempting to make any changes to the file.  When you are finished making changes, press the Esc key on the keyboard, then type ZZ to save and exit (note that is uppercase ZZ).

In the file, change the line:

arch="x86 cedarview bromolow evansport"

to:

arch="x86 cedarview bromolow evansport avoton"

When finished with the edit, the file should appear as below.  Save the file and exit vi.

plexds415_modify_arch

Next, we need to create a new spk file that includes the modified INFO file, and copy that file to a shared folder (mysharedfolder in this example) that was previously created on the DS415+ and that is accessible from a computer with access to the DS415+’s DSM interface.

tar -cvf /volume1/downloads/PlexMediaServerMod-0.9.11.7.803-87d0708-x86.spk *
cd ..
cp PlexMediaServerMod-0.9.11.7.803-87d0708-x86.spk /volume1/mysharedfolder/PlexMediaServerMod-0.9.11.7.803-87d0708-x86.spk

Now on the computer that has access to the DS415+’s DSM interface, click Package Center, then click Settings.  Under the Trust Level heading, change the setting from Synology Inc. to “Any publisher”, then click OK.

plexds415_modify_package_settings

Next, install Plex using the Manual Install feature.  Click the Manual Install button, then click the Browse button.  Switch to the shared folder where the spk file was copied (mysharedfolder in the above example), and select to open the PlexMediaServerMod-0.9.11.7.803-87d0708-x86.spk file.  Click the Next button to complete the install of Plex.

plexds415_manual_install

I have only just started experimenting with Plex, so I do not have a full grasp of its capabilities yet.  There are several “channels” that can be added to watch certain types of recorded video.  After experimenting with a couple of the easily accessible channels, I stumbled across this page, which described how to add several additional “unsupported” channels.  The following steps seem to work to install the additional channels from a telnet session connected to the DS415+.  Download and extract the unsupported channels:

cd /volume1/downloads/
wget https://dl.dropboxusercontent.com/u/15227710/UnSupportedAppstore.bundle.zip
mkdir plexunsupported
unzip UnSupportedAppstore.bundle.zip -d ./plexunsupported/

Change to the directory where the Plex plugins are stored, create a directory for the new plugin that was just downloaded and extracted, copy the downloaded plugin files to the directory that was just created, then change the ownership of the downloaded and extracted files to the plex user:

cd /volume1/Plex/Library/Application\ Support/Plex\ Media\ Server/Plug-ins/
mkdir UnSupportedAppstore.bundle
cp -r /volume1/downloads/plexunsupported/UnSupportedAppstore.bundle/* ./UnSupportedAppstore.bundle
chown -R plex:users ./UnSupportedAppstore.bundle

As far as I could determine, the unsupported channels will only appear when the Plex package is stopped and then started again.  In the Package Center’s Installed list click Plex Media Server.  Click the down arrow to the right of Action, then click Stop.  Wait 30 seconds, then click the down arrow again and select Run.

[Image: plexds415_restart_plex]
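
It should also be possible to stop and start the package from the telnet session rather than the Package Center, using DSM’s synopkg utility – a sketch; the package name shown below is an assumption based on how the package is listed in the Package Center:

# Stop the Plex package, wait 30 seconds, then start it again
/usr/syno/bin/synopkg stop "Plex Media Server"
sleep 30
/usr/syno/bin/synopkg start "Plex Media Server"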

If the DS415+ is named ds415, then it should be possible to access Plex on the Synology using this link: http://ds415:32400/web/index.html (replace ds415 with the unit’s IP address if the unit’s network name is not known).  Clicking the Channels link at the left should show the Unsupported AppStore (the black icon below with the white border).

[Image: plexds415_after_restart_unsupported1]

Clicking the Unsupported AppStore icon should then display a list of categories at the left.  The New category contains quite a few channels, as shown below.  To add the channel, click the channel, then click the Install link.

[Image: plexds415_after_restart_unsupported2]

I am sure that there is much more to Plex than just offering an alternative to a Roku player or the streaming video options of some Blu-Ray players and network capable TVs (which are also able to retrieve the channels from Plex), but I have not had much time to experiment yet.  I understand that Plex is very good at transcoding video streams for different media consumption devices, such as tablets, but I am not sure that I have a use for that functionality yet.

—-


Plex/Nagios/WordPress Stress Test (Added April 5, 2015)

A question was asked on the Synology forums whether or not I had any evidence that replacing the pre-installed 2GB of memory with an 8GB Crucial memory card made a measurable difference in the Synology DS415+’s performance.  That is a very difficult question to answer – the answer will depend largely on how the DS415+ is used.  If the DiskStation is only used to receive 10+ GB files from a single client, the additional memory probably will make no difference in the performance of the DS415+.

When the DS415+ was first released there were several discussion threads on the Synology forums indicating that the Synology DS415+ was not powerful enough to handle 1080p video for clients wishing to play back that video on various streaming devices, while the less powerful DS415Play could support 1080p playback for clients because it had additional coprocessors to support video transcoding.  So, I thought that I would try a quick test with the DS415+.  The DS415+ is running Nagios to monitor several devices, including 15+ high definition security cameras, as well as acting as the FTP target for those security cameras.  The DS415+ is also running WordPress, although WordPress has not seen much action lately on the unit.  Plex is occasionally used on the DS415+, but had not been used in the last 10 to 12 hours.  The DS415+ with 8 GB of memory installed showed the following memory statistics in its Resource Monitor: Reserved: 217.6 MB, Used: 741.9 MB, Buffer: 151.7 MB, Cached: 6.8 GB, Free: 166.5 MB.  So, most of the additional 6 GB of memory was used for file caching, which helps speed up the apparent write speed of files, and the read speed of frequently accessed files.
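
Since the DiskStation runs Linux at its core, roughly the same memory statistics should be retrievable from a telnet session – a minimal sketch (the Resource Monitor figures are derived from these kernel counters):

# Show the total, free, buffer, and file cache memory figures
grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo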

So, I opened the WordPress website on the DS415+ and viewed a couple of pages.  I then went to the Plex website hosted on the DS415+ using a Windows 8.1 tablet and requested the playback of a 1080p movie – the tablet supports 1080p video.  The DS415+ still had a bit of CPU capacity left, and the video was playing back smoothly on the tablet.  Just for fun I then used a Sony Blu-Ray player to request a different 1080p movie from the DS415+ while the first movie was still streaming to the tablet.  Below is a screen capture of a portion of the DSM interface on the DS415+ while both video streams were playing back (with occasional buffering):

[Image: plexds415_plex_playback_2_clients]

I still cannot say for certain if the additional memory helped in this particular stress test.  However, for the current cost of the 8GB memory card ($55 USD), the additional memory could prove to be helpful depending on how the DS415+ is used – all that it would take in this case is for a WordPress cache optimizer to consume 400 MB of additional memory to push the DS415+ over 2 GB of total memory consumption.





Review of Synology DS415+, How to Add Memory to the DS415+, Web Pages Not Databases

19 12 2014

December 19, 2014

As frequent readers of this blog likely know, I have not posted very many articles recently.  It seems that lately I struggle to find time to post interesting items to the blog, which is mostly focused on Oracle Database notes, so this is a three in one off-topic post.  I have been busy with a lot of items that are completely unrelated to Oracle Database.  One of those busy (non) work items is setting up a Synology DiskStation DS415+ NAS with four 6TB hard drives.

Part 1: Reviewing the Synology DS415+

Below is my review of that NAS, as posted on Amazon:


http://www.amazon.com/Synology-America-Station-Attached-DS415/dp/B00IKTSSIO/

I have previously purchased and implemented Synology DiskStation DS1813+, DS412+, DS214+, DS212+, DS213j, and DS112j units, so Synology network attached storage (NAS) devices are not entirely new to me (I also have experience with administering various Linux and Windows servers). Most of the Synology NAS units are configured primarily as FTP destinations, although the units also provide one or more Windows shares to network computers using either Active Directory integration or Synology DiskStation internal user accounts, as well as offering network time protocol (NTP) services (to security cameras, Active Directory, and/or a PBX system), and Nagios network monitoring.

For the most part, the Synology NAS units have been very reliable. That said, I have experienced occasional problems with most of the NAS units that provide FTP services to security cameras. Eventually, all of the permitted client connections become “in use” due to the Synology sometimes remembering FTP connections long after the security cameras have forgotten about those connections. This connection “remembering” issue causes a situation where client computers attempting to connect for Windows file sharing are denied access to the server, but the problem also affects the web-based access to the Synology DSM operating system. There have been issues with the DiskStation DS412+ locking up roughly 90% of the time that a reboot is attempted through the web-based DSM, resulting in a blue flashing light on the front console that could only be fixed by pulling the electrical power cord (note that it is usually possible to kill phantom connections from the DSM interface, if that interface will display, so that a reboot is typically not required to recover from the “remembered” connections). None of the other DiskStations have experienced lockups during an attempted reboot (or any other lockups that I am able to recall).

The DS415+ was bought to take the place of a DS212+, whose CPU simply cannot keep pace with 15+ high definition security cameras feeding the NAS with motion triggered video clips via FTP. I had considered purchasing the new version of the DS1813+ (possibly called a DS1815+), but that model has not been released yet, probably would have the same Intel CPU model as the DS415+ (the DS1812+, DS1813+, and DS412+ all have essentially the same CPU model), and likely would have had a higher electric wattage consumption compared to the DS415+ if I filled all drive bays. So, I selected the DS415+ as a device that had some known compromises, but also some power-efficiency benefits that are not present in the DS1813+ and DS412+.

The DS415+ ships with 2GB of memory in a regular memory slot, rather than being soldered to the system board as is the case for the DS412+, opening the possibility for future memory expansion. With two gigabit network ports, two USB 3 ports (one USB 2), and one eSATA port, the Synology DiskStation DS415+ offers decent storage expansion options, although those options are more limited than what is offered by the DS1813+. The DS415+ internally supports up to four hard drives in one of several software RAID levels (SHR, RAID 1, RAID 5, RAID 6, and RAID 10). Drive installs are potentially performed without using a screwdriver, although screws are provided to hold the drives in place if the screw-less arrangement seems too flimsy. Unlike the DS1813+, the drive carriages are held in place by a thumb-release locking clip, rather than a flimsy lock and key mechanism. The DiskStation DS415+ more than triples in weight with four typical hard drives installed – the light weight construction seems to be typical of the various Synology NAS units (at least those that support eight or fewer drives).

The DS415+ ships without an installed operating system, so the first task after powering on the DS415+ with the hard drives installed involves installing the latest DSM operating system. The process for installing the operating system is fairly simple, unless there is another DiskStation NAS on the same LAN (the directions provided in the printed quick start guide caused the DSM web page for another already set up Synology NAS to appear, rather than the operating system installation page for the DS415+ – the old Synology setup program that used to ship on CD with the NAS units probably would have helped in this situation). Once the NAS has nearly automatically downloaded the latest version of the operating system, the operating system installation should complete in a couple of minutes without a lot of issues.

The Synology DSM operating system offers a fantastic graphical user interface which implements HTML5 and CSS, displaying the interface in a web browser. Unfortunately, Synology tends to rearrange the location of various settings with each DSM version (and change the shape/color of icons), which makes it a little confusing when managing different Synology NAS units. Much like Windows Explorer, the File Station utility that is built into the DSM operating system supports context sensitive drag and drop, as well as right mouse button popup menus. The File Station utility that is included in the latest DSM version supports displaying more than 300 files in a paged view – that 300 file limit was an irritation when attempting to copy, move, or delete several thousand security camera videos on a daily basis through the GUI using older DSM versions. Like the other DiskStation models, the DS415+ supports telnet sessions, which allow access to the Linux command line and the configuration of scheduled script execution through the modification of the /etc/crontab file (side note: I have had issues with only the DS112j automatically resetting the contents of the /etc/crontab file when the DiskStation was power cycled – I believe that problem was caused by the use of spaces rather than tabs as field delimiters in the file).
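
For illustration, a minimal /etc/crontab entry in the format that the DiskStations expect appears below – the fields must be separated by tab characters rather than spaces, Synology’s cron requires the user field, and the cleanup script name is hypothetical:

#minute hour    mday    month   wday    who     command
0       2       *       *       *       root    /volume1/downloads/cleanup.sh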

A plain vanilla install of the DSM 5.0-4528 (as of today at update 1) offers support for network shares (Windows, MAC, and NFS), iSCSI, Active Directory Integration, FTP (standard FTP, anonymous FTP, FTPS, SFTP, TFTP), website hosting, WebDAV, SNMP, network time protocol (NTP), remote command line with telnet or SSH, integrated firewall, VPN client, USB printer sharing, and a handful of other capabilities. The DSM operating system’s native functionality is easily expanded through the download of free software packages from the Package Center. The packages extend the DS415+’s capabilities to include antivirus, Asterisk IP phone server, Internet radio rebroadcasting to networked computers, DNS server functionality, iTunes Server, VPN server, RADIUS server, email server, CRM and ERP packages, WordPress, IP camera monitoring (now includes a license for two IP cameras, additional licenses are roughly $50 per camera), and a variety of other features. Additionally, ipkg support permits the installation of more than 900 additional applications, including C++ compilers – which in theory suggests that the source for the Nagios network monitoring utility can be downloaded and compiled on the DS415+ (I was able to compile Nagios on a DS1813+, DS412+, and DS212+, and am close to having Nagios working on the DS415+).
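
As a rough sketch of how the ipkg expansion works once the ipkg bootstrap has been installed (the package names below are assumptions – the available package list varies by CPU architecture):

# Refresh the ipkg package list, then install a compiler and make
ipkg update
ipkg install gcc
ipkg install make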

I installed four new Western Digital Red 6TB drives, configured in a software RAID 10 array (DSM offered to automatically configure the drives in a SHR array during the initial setup, but did not offer a RAID 10 configuration at that time, so configuring the drives for RAID 10, to reduce recovery time in the event of a drive failure, requires a couple of additional mouse clicks). Peak single network link data transfer speeds so far have been impressive, at close to the maximum possible transfer rate for a gigabit network (achieving roughly 112-115MB/s ~ 919Mb/s), which is virtually identical to the speed seen with the DS1813+ that was using four 3TB Western Digital Red drives, and significantly faster than the DS212+ which has a much slower non-Intel CPU and two Western Digital Green 2TB drives. Pushing approximately 41.6GB of large files to the DS415+ from a client computer consumed between 9% and 11% of the DS415+’s CPU (for comparison, this test consumed 20% of the DS1813+ CPU capacity).

I did not test the DiskStation’s IEEE 802.3ad dynamic link aggregation – there was no apparent benefit when I tested the feature with the DS1813+, an HP 4208vl switch, and two client computers. The gigabit switch to which the DS415+ is attached does not support IEEE 802.3ad dynamic link aggregation, so it would have been a very bad idea to connect both of the supplied network cables to the switch.

Power Consumption of the DS415+ (based on the output of a Kill-A-Watt meter):
* 1.1 watts when powered off
* 16 watts with no drives installed and unit is sitting idle
* 44 watts with four Western Digital Red 6TB drives while the unit is receiving files at a rate of 112-115MB/s (for comparison, this test required 46 watts with the DS1813+ when outfitted with four Western Digital Red 3TB drives)
* 39 watts with four Western Digital Red 6TB drives installed while the unit is sitting idle for a couple of minutes (identical to the value measured for the DS1813+)
* 14.5 watts with four Western Digital Red 6TB drives hibernating

Even though the throughput and CPU of the DS415+ with software based RAID are no match for the performance and capacity of a high end Windows or Linux server, the Synology NAS units consume far less electrical power, are competitively priced (even though these units are expensive once four 6TB drives are added), should yield a lower total cost of ownership (TCO), and are likely easier to configure and maintain for their intended purpose than either a Windows or Linux server. Like the DS1813+, the DS415+ supports up to 512 concurrent remote connections from other devices (a computer with five mapped drives pointing to the DS415+ consumes five of those 512 concurrent connections). The 512 connection count may not be the hard upper limit on the Synology NAS units – I have encountered some problems with the DS112j blocking connection attempts long before its 64 concurrent connection limit is reached – I do not yet know if this issue affects any of the other Synology device models. The lack of an available redundant power supply is a shortcoming of the DS1813+ and other less expensive Synology NAS units, but the power supply for the DS415+ (and the DS412+) is external, so it should be easier to obtain and install replacement power supplies for the DS415+ should the need arise (although the power supply may not have a standardized connection, which would otherwise permit a replacement power supply to be purchased from a third party supplier).

Synology offers a group of customer support forums. However, those forums are apparently not actively monitored by Synology support staff. So far, other than questions about whether Plex on the DS415+ is able to transcode 1080p video, there have been no significant negative comments about the DS415+ on the Synology forums.

The Synology DiskStation DS212+ has served its role surprisingly well for the last two and a half years, even when equipped with slow Western Digital Green drives in a software RAID 1 array. While that NAS was able to support 15+ cameras that potentially simultaneously send video clips via FTP, concurrently allowing a Windows client to connect to the share for the purpose of reviewing the video clips was often just a bit too much of a load for the less powerful DS212+. I am expecting few problems from the DS415+ when serving in a similar role along with supporting a couple of optional packages such as the Media Server, Audio Station, Nagios (currently receiving a Segmentation fault (core dumped) error message when executing the check_ping test command found in my “Install Nagios on a Synology DiskStation DS1813+ or DS412+” blog article), and possibly Plex. Most of the optional Synology packages appear to be decent. However, the Synology Surveillance Station, while possibly useful, still seems to be an overly fragile, overly expensive, experimental package that tends to tax the wireless and wired network much more than the FTP solution that I use with my cameras (your experience with that package may be different than mine).


Part 2: Voiding the Warranty on the Synology DS415+ (Upgrading the Memory to 8GB)

The DS415+ ships with 2GB of DDR3 1600 MT/s (PC3-12800) CL11 SODIMM memory pre-installed, and from what I am able to determine, Synology does not and will not offer memory upgrades for the DS415+.  The memory is installed in a laptop style memory socket, so… I installed a Crucial 8GB memory card into the DS415+.  The greatest difficulty in the memory upgrade, other than the concern for destroying a $630 device, was breaking into the DS415+ case without destroying the plastic clips that hold the two halves of the case together.  I posted the upgrade process to one of the Synology forum threads, but I thought that I would also post the process in this blog article so that it is easier to find the steps for the process later (the pictures do not fully display in the Synology forum thread).

If you have never disassembled a desktop or laptop computer, consider just being happy with the installed 2GB of memory to avoid damaging the Synology. If you have never removed a memory card from a laptop, consider just being happy with the installed 2GB of memory to avoid damaging the Synology. Upgrading the memory will likely void the warranty – there was a label on the installed memory card indicating that the warranty was void if the label was removed from the memory – the label is still attached to my old memory card (so, maybe my warranty is still in effect ;-) ).

Step 1, unplug the Synology and attach labels to each of the hard drives. Write the numbers 1 through 4 on the labels to indicate the ordered position of the drives in the NAS. Release the latch, and remove the drives. Make certain that you ground yourself by touching a large metal object before attempting to open the NAS. When working on the NAS, do not touch any of the contacts inside the NAS or on the memory card (click for a larger view of the picture):

[Photo: hard drives labeled and removed from the NAS]

There are three screws on the back of the Synology that will need to be removed using a #1 Phillips screwdriver. Wait to remove the screw that is pointed to by the screwdriver in this picture until you have successfully separated the two halves of the NAS case:

[Photo: rear case screws, with the screwdriver pointing to the screw that should be removed last]

There are plastic clips permanently attached to the smaller half of the NAS case. Use a flat blade screwdriver to gently pry up on the larger section of the case at the top-back of the case near the seam between the two sections of the case. A popping sound should be heard when the latch releases. When the first latch releases, move the screwdriver to the position of the next latch and gently pry up to again slightly raise the larger section of the case at the seam until the next latch releases. Continue working to release the remaining latches along the seam. Once all of the latches on the top are released, it should be possible to pivot the larger portion of the case so that the bottom latches release. Separate the two halves, being careful not to damage the retaining clips that normally hold the hard drives in place:

[Photo: separating the two halves of the NAS case]

There are four screws on each side of the drive cage – four of the screws are long, and four are short. Remove the screws using the #1 Phillips screwdriver. Make note of where the longer screws were installed. Remove the third screw from the back of the NAS if it was not already removed:

[Photo: drive cage screws]

Gently lift the drive cage straight up and then set it aside. Note that there are two slots pointed to by the arrows in the picture below – when it is time to re-assemble the NAS, the gold colored ends of the two circuit boards must be reinserted into those slots. There are two fan connectors circled in the picture – if the fan connectors are carefully removed from the sockets, it is possible to move the drive cage out of the way. The circuit board holding the memory card is below the metal plate – that metal plate should lift straight out of the enclosure, although there may still be some wires that attach it to the enclosure:

[Photo: circuit board slots (arrows) and fan connectors (circled)]

There is a metal clip at each end of the pre-installed memory card. Gently push the left clip to the left, and the right clip to the right until the memory card releases. When removing the memory card, make note of the location of the cut out section of the slot, so that the replacement memory card may be installed in the same orientation:

[Photo: releasing the metal clips at the ends of the memory card]

Slide the replacement memory card into the slot, and gently tilt it down until the two clips lock the memory card into place:

[Photo: replacement memory card installed]

Reverse the order of the steps to reassemble the NAS. Reinsert the drives in the correct order by referring to the labels. Plug in the NAS – the blue light on the front of the unit may flash for a minute or two.

If all goes well, the Resource Monitor in the Synology interface should show 8GB of memory installed:

[Image: dsm51-8gb]

After a day or so, the NAS may show in Resource Monitor that it is using 6.9GB (or 7.0GB) of memory for the file cache, as shown below.

[Image: DSM51-8GB-2]

Why install additional memory in the DS415+?  The 2GB of memory should be sufficient for most tasks that are typically assigned to a NAS.  I was mostly just curious after seeing a couple of questions on Amazon about memory upgrades, as well as on the Synology forums, without a clear description of the upgrade process, and only a passing mention of the memory specifications in a review of the DS415+.  There were a handful of discussion threads on the Synology forums where people were trying various memory modules in their DS1815+ units, and mostly failing to locate compatible memory modules (the Crucial memory module that I used was reported to not work in the DS1815+’s easily accessible memory slot).  So, I bought the memory, tried to figure out how to break into the DS415+ case, and took pictures as I put the unit back together (I thought that there was little point in me taking pictures while disassembling the NAS, especially if I destroyed the NAS during the upgrade attempt, but doing so while disassembling the unit is probably a good idea).

How does the DS415+ utilize the additional memory?  Mostly for the file cache (the NAS runs Linux at its core) – a day after the upgrade I checked the Resource Monitor and found that the Cached statistic increased from 670.5MB to roughly 7GB.  If there is an interest in running various background packages on the NAS (for instance, Nagios, Plex, WordPress, etc.), then the additional memory could have a significant positive impact on performance.  My installation directions for installing Nagios on a Synology DS412+, DS1813+, and DS212+ almost work with the DS415+.  I receive a Segmentation Fault, Core Dumped error message when trying to run Nagios or the Nagios check_ping plugin – I seem to recall seeing similar error messages when trying to find a compiler that would work on the DS412+ and DS1813+, so maybe there is still hope for Nagios on the DS415+ if I try downloading a different compiler (the ToolChain library for the DS415+ was released roughly a week ago, so there may be a solution – I was able to compile Nagios once and it executed without a Segmentation Fault error, but I could not reproduce the result a second time).
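
For reference, the check_ping test command mentioned above is similar to the following – a sketch that assumes the default Nagios compile location and uses sample warning/critical thresholds rather than the exact values from the earlier article:

# Ping the local host; warn at 100ms/20% packet loss, critical at 500ms/60%
/usr/local/nagios/libexec/check_ping -H 127.0.0.1 -w 100.0,20% -c 500.0,60%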

Part 3: Web Pages Not Databases

While I have had this blog on wordpress.com for a bit over five years, I had never tried using the standalone version of WordPress.  An opportunity developed recently to use the standalone version of WordPress.  The website for the company where I work has been in the process of being redesigned since roughly June by an outside web development company.  That web development company was making very slow progress on the website, selected to use the standalone version of WordPress as the development environment, and somehow was apparently given the task of designing the website so that it looked great on an Apple iPad, Apple iPhone, and even a now extinct Motorola Xoom tablet – any compatibility with Windows desktop computers using Internet Explorer seemed to be purely accidental, but the website apparently appeared fine on the developer’s Mac.  (Hint for developers: Test what you create using equipment that is similar to your target audience’s equipment.)

I became involved in the new website development a couple of weeks ago, trying to refocus the web development company on what the target viewers of the website will likely be using to view and interact with the new website – chances are that a too-large-to-ignore percentage of those target viewers are still running Windows Vista or Windows XP, and will be accessing the site using some version of Internet Explorer other than the latest version (some websites still are not compatible with Internet Explorer 11, so the potential target viewer may still be forced to run Internet Explorer 8 or 9 – Internet Explorer 8 is the last version supported on Windows XP, and Internet Explorer 10 is the last version supported on Windows Vista).  Ability to Print?  No, people have no need to print the website’s contents (especially not using a PDF virtual printer from Adobe or BlackIce, where all of the text from the new website was replaced by odd symbols) and have it appear on the printed page anything like what appears on-screen.  Viewing the website in a non-maximized window – who would be so silly to do such a thing?  Hamburgers are not on the menu – they are the menu, or it seems that is the official name for the three parallel white lines that sometimes appear on screen and sometimes in the printed copy.  Developers are full of fun surprises some days.

A week ago (now two weeks ago) the web development company was told to stop development on the website for a variety of reasons.  A lost six months of development, or an opportunity to beat one’s head on the table and hammer out a solution for the issues that still existed with the website?  I installed the WordPress package on a Synology DS213j NAS and on the Synology DS415+ NAS and had a go at fixing the issues with the website without affecting what the web development company had done to date.  I picked up the development process reasonably quickly (the five years of blogging on WordPress helped), but found that I was repeatedly flipping back and forth between WordPress’ Visual editor and the Text editor while trying to fix the issues and add additional text information to the pages.  Additionally, the path to files (and web pages) on the Synology must also include /wordpress (when working in a Telnet session, the actual path is /volume1/web/wordpress).  My HTML and cascading style sheet (CSS) skills were very rusty, so tasks that are incredibly easy in Microsoft Excel and Microsoft Word, such as manipulating tables, took quite a bit of Google search skill, for instance determining how to right-align or center certain columns in a table without altering the HTML TD attributes of each table cell in the column when composing the table using the WordPress Text editor.  The WordPress pages appeared acceptable on the Synology NAS units, so the same changes were applied to the web development company’s best efforts – hopefully this is not true, but I think that I made about as much progress on the website in three days’ time as did the web development company in the last three months.  Since then I have been fixing other minor issues, such as the search box disappearing when the website is viewed using a Windows 8.1 tablet that lacks a keyboard and mouse, and improving the website appearance.  I learned, or relearned, a couple of skills along the way, so this process definitely was not a wasted effort.

The VP of Manufacturing at the company where I work has become somewhat of an expert recently at creating video and composing written content for the website, so the newly redesigned website is a good platform for his many years of manufacturing experience.  If you feel so inclined, take a look at the new company website, and leave a message here to let me know what you think about the website.  The website development company didn’t like my green highlight when the mouse pointer passed over links – I guess that explains one reason why I am not a graphics artist.

For future reference, this is the CSS code that I constructed to format some of the tables that appear on the website.  In the HTML code, I assigned the table to have a class of km_equipment_table, and then added the following to the style sheet’s CSS:

Handle general formatting of the table:

.km_equipment_table { width:100%; border:1px solid; padding:8px;  }
.km_equipment_table td { padding:8px; border:1px solid; }
.km_equipment_table th {background: #0072BC;}

Alternate between two colors for all but the header row of the table – note that this code is ignored by Internet Explorer 8.0:

.km_equipment_table tr:nth-child(even) { /*(even) or (2n+0)*/
 background: #F1F1F1; border:1px solid;
}
.km_equipment_table tr:nth-child(odd) { /*(odd) or (2n+1)*/
 background: #FFFFFF; border:1px solid;
}

Set the column alignment of all tables that were assigned the class of km_equipment_table – the first column is number 1 (not 0) – note that this code is ignored by Internet Explorer 8.0:

.km_equipment_table td:nth-child(1) {
    text-align: left;
}
.km_equipment_table td:nth-child(2) {
    text-align: center;
}
.km_equipment_table td:nth-child(3) {
    text-align: right;
}
.km_equipment_table td:nth-child(4) {
    text-align: right;
}
.km_equipment_table td:nth-child(5) {
    text-align: right;
}
.km_equipment_table td:nth-child(6) {
    text-align: right;
}
.km_equipment_table td:nth-child(7) {
    text-align: right;
}

To keep the printed copy of the page appearing correct, I had to specify @media screen for several of the style sheets.  As such, a special style sheet, print.css, was previously set up to handle formatting when printing.  Among other adjustments in that print.css style sheet, I added the following so that the column alignment worked correctly in the printed copy of the web pages (note that this code did not work on Internet Explorer 8.0):

.km_equipment_table {
    border:solid #000 !important;
    border-width:1px 0 0 1px !important;
}
.km_equipment_table th, .km_equipment_table td {
    border:solid #000 !important;
    border-width:0 1px 1px 0 !important;
}
.km_equipment_table td:nth-child(1) {
    text-align: left;
}
.km_equipment_table td:nth-child(2) {
    text-align: center;
}
.km_equipment_table td:nth-child(3) {
    text-align: right;
}
.km_equipment_table td:nth-child(4) {
    text-align: right;
}
.km_equipment_table td:nth-child(5) {
    text-align: right;
}
.km_equipment_table td:nth-child(6) {
    text-align: right;
}
.km_equipment_table td:nth-child(7) {
    text-align: right;
}

In the WordPress Text editor for the page containing the table, I constructed the HTML code for the table to begin as follows – the style and border specifications probably could have been handled in the style sheet, but the printed output was not ideal without these changes:

<table class="km_equipment_table" style="border-collapse: collapse;" border="1" width="100%">

To handle cases where the filenames on the old web server had to be redirected to the correct page on the new web server, adjustments had to be made to the .htaccess file – .htaccess files found in parent directories will apply to child directories also.  I spent a couple of days trying unsuccessfully to make the page redirections work, and then stumbled on a solution, again using the WordPress package on a Synology NAS as a test bed.  The old website had a number of web page addresses that contained ? characters, such as this one:

/page.php?menu_id=10

To send requests for that page to the WordPress permalinks naming convention that uses the article titles as the web page address, I added the following to the .htaccess file (R=301 indicates that this is a permanent redirect, while L indicates that this is the last rule that should be processed):

RewriteCond %{QUERY_STRING} menu_id=10
RewriteRule (.*) /industries/? [R=301,L]

To redirect a web page found on the old server that was named large_machining.htm to the appropriate page in WordPress, I added the following to the .htaccess file:

RewriteRule ^large_machining.htm/?$ /large-machining/ [R=301,L]

Those rewrite rules must be wrapped in an <IfModule mod_rewrite.c> block in the .htaccess file, so a portion of the file may appear as follows – note that if there were a page on the old server with a menu_id=100, that entry must appear before the entry for menu_id=10, and the entry for menu_id=10 (and menu_id=11) must appear before the entry for menu_id=1 – otherwise a person attempting to visit the website from a saved bookmark (favorite) may be sent to the wrong web page:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
     
    # industries
    RewriteCond %{QUERY_STRING} menu_id=10
    RewriteRule (.*) /industries/? [R=301,L]
     
    # careers
    RewriteCond %{QUERY_STRING} menu_id=12
    RewriteRule (.*) /careers/? [R=301,L]
     
    # ... many more
    RewriteCond %{QUERY_STRING} menu_id=1
    RewriteRule (.*) /about/? [R=301,L]
    
    # Misc. redirects
    RewriteRule ^ABOUT.HTM/?$ /about/ [R=301,L]
    
    RewriteRule ^large_machining.htm/?$ /large-machining/ [R=301,L]
</IfModule>
    
# BEGIN WordPress
<IfModule mod_rewrite.c>
    
    # ... Standard WordPress entries here
</IfModule>
# END WordPress

Part 4: Idle Thoughts (Yes, this was supposed to be a three part article)

So, I was becoming impatient while waiting for the web development company to finally push the new version of the website over to the publicly accessible website (I expected this process to happen a week ago, last Friday, but it did not go live until midnight today).  This past Monday I was beginning to think that this task was too much for the web development company.  So, I set out to learn a bit about Amazon’s AWS hosting options.  It turns out that Amazon offers a t2.micro hosting solution that is free for a year.  At the time I was almost completely unfamiliar with Amazon’s hosting offerings, so I signed up for a free account.  After puttering around for a bit, I managed to create an instance on AWS, figured out that I needed to assign an elastic IP address to the instance, determined how to connect to the instance using Putty, installed LAMP in the t2.micro instance, set up an FTP server in the instance, installed WordPress in the instance, imported the new website’s contents, changed the domain’s global DNS settings, and had a fully functional website in probably five or six hours – not too bad for having so little experience with those items.  The website in the t2.micro instance crashed three times on me this past Tuesday during final testing due to consuming the full 1GB of memory that is allocated to those types of instances, but at least I think that I was able to demonstrate that the web development company was either seriously dragging this project out longer than necessary, or there is a problem with their method of assigning priority to projects.  The new website went live this morning with a couple of minor issues: 1) No one onsite at my company could view the website because the development company insisted on removing the www. portion of the website address (at companies that use an Active Directory internal domain, that causes the DNS name resolution to point to the web server on one of the internal domain controllers – none of those domain controllers has web server software installed); 2) The .htaccess file that I spent a couple of days creating was overwritten by a file written by the developer – that file either redirected most of the traffic to the wrong page (see my notes above about why the order of entries in that file is important), or sent the person to a 404 error page (page not found) for all historic web page filenames; 3) College was spelled as Collage.  Oh, well.  Close counts in website development, just as it does with horseshoes and hand grenades.
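
For anyone curious, the LAMP portion of that t2.micro setup amounted to just a few commands once connected with Putty – a sketch, assuming the Amazon Linux AMI (package and service names differ on Ubuntu and other AMIs):

# Install Apache, PHP, and MySQL, then start the services and enable them at boot
sudo yum install -y httpd php php-mysql mysql-server
sudo service httpd start
sudo service mysqld start
sudo chkconfig httpd on
sudo chkconfig mysqld on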





On the Topic of Technology… 7

26 09 2014

September 26, 2014

(Back to the Previous Post in the Series)

As I was walking from the back of the facility where I work to my office just recently, I started repeatedly saying to myself as I approached the office door, “that huge computer was not here when I started calling my office a second home”.  I guess that I have worked at the same place for so long that I became blind to the interesting technology gadgets (if you can call multi-million dollar computers simply just gadgets) that surround me on a daily basis.

A couple of years ago BlackBerry released a small 7″ tablet, and Motorola released a 10″ Xoom tablet (I think that Motorola sold its Mobility division to Google, who then sold that division to a farmer to use as fertilizer).  At the time the Xoom and BlackBerry tablets were released, my boss was really excited about the Apple iPads, but he did not care to spend $500 or more of his own money for a toy to use at home.  He had a Windows computer at home, but he seemed to always view that computer as excessively slow (mostly when viewing websites), even though he spent close to $3,000 on the computer six years earlier.  I am not much of an Apple fan, so I decided to have a little fun with my boss’ situation.

On the day that the Xoom tablet became available on Amazon, I placed an order for the tablet.  When it arrived, I brought it into work and showed the boss how quickly it could pull up web pages, along with its support of Adobe Flash playback (the iPad never supported Adobe Flash).  Yet, he continued to go on about the iPad, even showing me newspaper articles written by tech gurus that boasted about the fantastic features of the iPad.  A year earlier I had bought a small Windows netbook with a 10″ display, trying to convince him that such a netbook was even better than an iPad, so obviously that prior attempt failed.

When the BlackBerry tablet was released, I made a special trip to Best Buy just to grab the tablet.  I set the tablet up to work with the BlackBerry phone that I had at the time.  Oh neat, I am able to look at the company emails that I receive on the phone using the tablet – certainly, that will convince the boss that something is better than the iPad.  I showed my boss, who was also using a BlackBerry phone at the time, the neat BlackBerry tablet that could not only quickly pull up web pages (along with showing Adobe Flash contents), but could also show company emails and use the phone as a mobile hotspot for viewing web pages.  He spent a couple of minutes looking over the BlackBerry tablet before handing it back to me.  I found a couple more newspaper articles about the iPad on my desk in the weeks that followed.

On a Sunday afternoon, I decided to do some video testing with the two tablets, in a final attempt to convince the boss that something other than an iPad is ideal for his use at home.  I took the two tablets to my second home (that’s the place where my office, and all of those huge computers are located), and decided to do a head to head video test with the two tablets.  I planned to show the best looking video from the two tablets to the boss, and finally win him over.  I held the two tablets side-by-side as I walked down the aisles of the huge computers.  As I walked, I wondered what that 40,000 pound part was doing in the big pit that was dug for one of the computers that was expected to arrive in another month or two.  No matter, I continued with my video testing, holding the tablets at head level as I walked.  I received some strange looks from the other employees as I walked about – I simply reassured the other employees that I was just trying to impress the boss.  I took the tablets home and processed the video from the tablets to eliminate meaningless portions of the video.  It seems that both tablets produced 720p video at either 29 or 30 frames per second that was virtually identical in video quality, but the BlackBerry video would play back directly in the Windows Media Player, while the Xoom video required conversion to a compatible format.  I showed the boss the resulting video: not only could the BlackBerry tablet quickly pull up web pages (along with showing Adobe Flash contents), show company emails, and use the phone as a mobile hotspot for viewing web pages, it could also record 720p video that easily plays back on a Windows computer at home.  The boss thought for a minute or two, and then said, “did you have a chance to read Walt Mossberg’s latest Wall Street Journal article, there is a new iPad out now.”

Ah, fond memories.

I recently found the video clips that I recorded using the tablets back in 2011, and after reviewing the videos, I still can’t see much difference between the videos captured by either tablet.  The video looks nice when playing back, but pausing either video to take a screen capture results in a blurry single-frame mess 90% of the time.  The video showed the big pit that was dug for the large computer – yep, that pit now contains a multi-million dollar computer, and the wall that had been next to the pit was removed during a later expansion project.

In the nearly five years since I created the first article on this blog, I really have not said much about the company where I work.  I have posted a lot of Oracle Database book reviews on Amazon, as well as several reviews of security cameras.  Some readers on Amazon were convinced that I worked for a couple of book publishing companies, writing fake book reviews to promote the publishers’ books; people who actually read the book reviews should know better than that – the reviews are brutally honest.  Some other customers on Amazon thought that I was working for a security camera company and/or living in California; no, not the case.  As a result, I put together an article that shows some of the interesting technology and multi-million dollar computers that are located just feet from my office at work.  In the article, I included some still frames from the video that I captured in the walk through with the tablets in 2011.

Below are three pictures from the article that I recently posted.  I am still trying to come up with good captions for the last two pictures; captions such as “taking a break” and “breaking in a new truck” come to mind.

[Image: Cincinnati CL-707 Laser Burner Slicing Through 1″ x 10′ x 20′ Plate Steel]
[Image: In the Deep End]
[Image: Need a Bigger Truck]





Book Review: Troubleshooting Oracle Performance, Second Edition

15 07 2014

July 15, 2014 (Modified August 21, 2014, August 22, 2014, November 6, 2014, January 9, 2015)

Extensively Researched with Detailed Analysis Covering a Broad Range of Oracle Database Performance Topics, Providing Insight that is Not Available from Other Sources
http://www.amazon.com/Troubleshooting-Oracle-Performance-Christian-Antognini/dp/143025758X/

I pre-ordered this book in December 2013, having previously read the first edition of this book twice.  Once the printed copy of the second edition book arrived, I ordered the PDF companion for the book from Apress.  While the first edition of the book covered Oracle Database 9.2.0.1 through 11.1.0.6, the second edition targets versions 10.2.0.1 through 12.1.0.1+; the author successfully scrubbed the book of all information that is not relevant to the targeted Oracle versions.  Despite the removal of obsolete content and the new page formatting that places approximately 17% more content per page, the page count for the second edition of the book grew by roughly 130 pages.

Some of the improvements in the second edition, when compared to the first edition of this book:

  • Extended explanation of the different definitions of the term cardinality.  Pg 19
  • Second edition of the book added a half page definition of the term cursor.  Pg 21
  • The description of V$SQL_CS_HISTOGRAM was omitted from the first edition of this book, and is now included.  Pgs 37-39
  • The Instrumentation section that was found in chapter 3 of the first edition is now relocated into chapter 2.  Pgs 42-48
  • A new section was added in this edition of the book that is intended to guide the reader in attacking performance problems using different procedures, based on whether or not the problem is reproducible.  Chapters 3 and 4
  • A new roadmap flow chart was added to the second edition, showing how to begin the performance tuning process.  Pg 104
  • Page 204 of the first edition of the book stated that it was not possible to retrieve a Statspack captured execution plan using DBMS_XPLAN – that statement was incorrect.  Page 306 of the second edition contains a corrected statement:  “Statspack stores execution plans in the stats$sql_plan repository table when a level equal to or greater than 6 is used for taking the snapshots. Even though no specific function is provided by the dbms_xplan package to query that repository table, it’s possible to take advantage of the display function to show the execution plans it contains.”
  • The second edition of the book includes a new SQL optimization techniques chapter – the book seems to be making a more dedicated effort to help the reader understand the decision process that determines when to use the various techniques to attack performance issues – explaining the decision tree for performance tuning.  Chapter 11, SQL Optimization Techniques, is a good example of the enhancements made to the second edition.

Over the last several years I have read (and reviewed) a number of Oracle Database performance related books, including the freely available Performance Tuning Guide that is part of the official Oracle Database documentation.  None of the books, including the official Performance Tuning Guide (at least three errors identified in the first 100 pages of the 12.1.0.1 version), is completely free of errors (wrong, omitted, or obsolete information).  However, this book sets the technical content accuracy bar extremely high for books that cover a broad-range of Oracle performance related topics.

As was the case for the first edition of this book, there are several factors that separate this book from the other broad-ranging Oracle Database performance books on the market:

  • For every feature that is described to help solve a problem, as many as possible of the benefits are listed, and an equal amount of attention is paid to the potentially wide-ranging problem areas of the various solutions.  Very few potential problems were overlooked in this book.  Some of the other books on the market only describe the potential benefits of implementing a feature, without discussing limitations or unintended side-effects.  One such example is the discussion of the CURSOR_SHARING parameter in two different books.  On page 434 of the “Troubleshooting Oracle Performance” book, the following warning is provided: “Cursor sharing has a reputation for not being very stable. This is because, over the years, plenty of bugs related to it have been found and fixed… my advice is to carefully review Oracle Support note 94036.1…”  This quote is in contrast to the following quotes from pages 191 and 484 of the book “Oracle Database 12c Performance Tuning Recipes”: “Although there are some concerns about the safety of setting the CURSOR_SHARING parameter to FORCE, we haven’t seen any real issues with using this setting.”  “There are really no issues with setting the cursor_sharing parameter to a nondefault value, except minor drawbacks such as the nonsupport for star transformations, for example.”  (Reference) (Reference 2)
  • For nearly every feature described in the book, the book lists the licensing and version requirements (sometimes to a specific point release such as 10.2.0.3, 10.2.0.4, 11.2.0.4) that are required so that the reader is able to take advantage of the feature – these requirements are often listed early in the description of the feature (the monitoring/tuning discussion in chapters four and five contain several good examples).  The book commonly describes how to accomplish a task in the current Oracle Database release, as well as older releases, if the approach differs.  Some of the other books on the market inter-mix features and behaviors in various Oracle Database releases, without clearly distinguishing what will and what will not be available in the reader’s environment.
  • While many strong statements are made about Oracle Database in the book, there is no “hand waving”, and there are very few inaccurate statements.  The book uses a “demonstrate and test in your environment” approach from cover to cover.  The downloadable scripts library is extensive with roughly 280 scripts and trace files, and those scripts often contain more performance information than what is presented in the book.  It is thus recommended to view the scripts and experiment with those scripts while the book is read.  The scripts are currently downloadable only from the author’s website.  In contrast, other books seem to take the approach of “trust me, I have performed this task 1,000 times and never had a problem” rather than the “demonstrate and test in your environment” approach that was used in this book.
  • Information in this book is densely packaged, without unnecessarily repeating information, and without giving the impression that sections of the book are a paraphrase of some other set of articles, a paraphrase of chapters in the official Oracle documentation, or a reprint of a page that was originally copyright in 1997.  Additionally, the information is well organized into a logical progression of topics, rather than each section of the book appearing as an island of unrelated information.
  • The well-placed graphics throughout the book support the contents of the book, rather than distract from the information that is described.
  • The book makes extensive use of forward and backward references to other sections in the book, as well as suggestions to review specific Oracle support documents and other books.  Some of the other books handle each chapter as an information silo, never (or rarely) mentioning specific content found elsewhere in the book.
  • In the acknowledgments section at the beginning of the previous book edition the author mentioned that his English writing ability is poor and that “I should really try to improve my English skills someday.”  While the English wording in the first edition of the book was easily understood, I took issue with the author’s repeated use of the phrase “up to” when describing features that exist in one Oracle Database release version or another.  The second edition of the book fixes that one issue that I pointed out, typically replacing the text with “up to and including”, and overall the technical grammar in the second edition of the book is among the best that I have seen in a couple of years.  It appears that the author exercised great care when presenting his information on each page.  In contrast, some of the other Oracle Database book authors seem to be more concerned with slamming something onto the page so that something else that is more interesting could be introduced, in the process introducing sentences that can best be described as nonsense.
  • Almost without exception the issues that were identified as wrong, misleading, or incomplete in the first edition of the book were corrected in the second edition.  Unfortunately, the same cannot be said about other books that survived to see a second or third edition.

The second edition of “Troubleshooting Oracle Performance” is of value to Oracle Database administrators, programmers, and Oracle performance tuning specialists.  Chapter one of this book should be required reading for all people intending to be developers, regardless of whether the person intends to build advanced Oracle Database solutions or just simple Microsoft Access solutions.  One of my favorite quotes from the book is found on page three: “Performance is not merely optional, though; it is a key property of an application.”  Ideally, this book should be read after reading the “Expert Oracle Database Architecture” book (or the Concepts Guide found in the Oracle Database documentation library), and before advancing to books such as “Cost-Based Oracle Fundamentals” or “Oracle Core: Essential Internals for DBAs and Developers”.

The full review of this book is quite long, currently covering the first 12 chapters (447 pages) of the book.  Over the next couple of weeks I will finish the remaining book chapters and post an updated review.  The index at the back of most Apress books seems to be limited in value, so I have tried to include a useful index as part of this review.

Foundation Knowledge, and Miscellaneous Tips:

  • The ten most common design problems: no formal logical database design; using generic tables (entity-attribute-value or XML); failing to use constraints; failing to implement physical design (partitioning, bitmap indexes, index organized tables, function-based indexes, etc.); selecting the wrong data type for table columns (using a VARCHAR2 column to store dates); incorrect bind variable usage; failure to use RDBMS specific advanced features; avoiding PL/SQL when extensive data manipulation is required within a single database; excessive commits; non-persistent database connections.  Pgs 8-11
  • To avoid compulsive tuning disorder, there are three sources for identifying actual performance problems: user reported unsatisfactory performance, system monitoring reports time outs or unusual load, response time monitoring indicates performance that is outside of the parameters specified by the service level agreement.  Pg 11
  • Cardinality is the number of rows returned by an operation (estimated number of rows in an execution plan).  Cardinality = selectivity * num_rows  Pg 19
  • “A cursor is a handle to a private SQL area with an associated shared SQL area.”  Pg 21
  • Life cycle of a cursor is explained with a diagram.  Pgs 21-23
  • Good explanation of why hard parses and even soft parses should be minimized as much as possible.  Pg 26
  • Even though the OPTIMIZER_ENV_HASH_VALUE column value in V$SQL is different for a given SQL statement when the FIRST_ROWS, FIRST_ROWS_1, or FIRST_ROWS_1000 optimizer modes are used, that difference in the OPTIMIZER_ENV_HASH_VALUE column does not prevent a specific child cursor from being shared among sessions with those different optimizer modes.  “This fact leads to the potential problem that even though the execution environment is different, the SQL engine doesn’t distinguish that difference. As a result, a child cursor might be incorrectly shared.”  Pg 27 (Reference)
  • Example of using Oracle’s built-in XML processing to convert the REASON column found in V$SQL_SHARED_CURSOR into three separate regular Oracle columns.  Pgs 27-28
  • Benefits and disadvantages of using bind variables.  Pgs 29-31, 32-39
  • Adaptive cursor sharing (bind-aware cursor sharing) was introduced in Oracle 11.1.  The IS_BIND_SENSITIVE, IS_BIND_AWARE, and IS_SHAREABLE columns of V$SQL indicate if a specific child cursor was affected (created or made obsolete) by adaptive cursor sharing.  Pg 34
  • Bind aware cursors require the query optimizer to perform an estimation of the selectivity of predicates on each execution.   Pg 37
  • Definition of different types of database file reads and writes.  Pg 40
  • Basic definition of Exadata and the goals of Exadata smart scans.  Pg 41
  • The database engine allows dynamically setting the following attributes for a session: client identifier, client information, module name, and action name.  Pg 45
  • Example of setting the client identifier information using PL/SQL, OCI, JDBC, ODP.NET, and PHP (a minimal PL/SQL sketch appears after this list).  Pgs 46-48
  • 10046 trace levels 0, 1, 4, 8, 16, 32, and 64 are described (a sketch of enabling and disabling this event appears after this list).  Pg 55
  • See $ORACLE_HOME/rdbms/mesg/oraus.msg for a list of all debugging event numbers – not available on all operating system platforms.  Pg 56
  • Using DBMS_PROFILER  requires the CREATE privilege on the PL/SQL code.  DBMS_HPROF just requires execute on DBMS_HPROF.  Pg 96
  • Very good description of the performance related columns in V$SESSION.  Pgs 115-116
  • The MMNL background process collects active session history data once per second.  Pg 117
  • Real-time monitoring is available starting in version 11.1, and requires the CONTROL_MANAGEMENT_PACK_ACCESS parameter to be set to diagnostic + tuning.  Pg 127
  • Real-time monitoring is enabled for SQL statements only if the executions require at least 5 seconds, if the SQL statement is executed using parallel processing, or if the MONITOR hint is specified in the SQL statement.  Pg 127
  • The author’s system_activity.sql script file produces output that is similar to the data contained in a Diagnostic Pack chart, without requiring a Diagnostic Pack license.  Pg 143
  • The author’s time_model.sql script samples the V$SYS_TIME_MODEL dynamic performance view and outputs results that show the parent, child, and grandchild relationship between the various statistics.  Pg 144
  • Use an interval of 20 to 30 minutes for the Statspack or AWR sample period to limit the distortion effects of the reported averages (important problems may be hidden if the sample period covers many hours of time).  Pg 152
  • AWR dictionary views have a DBA_HIST or CDB_HIST (12.1 multitenant environment) prefix.  Pg 152
  • While the procedure for using Statspack is no longer described in the documentation, the spdoc.txt file in the $ORACLE_HOME/rdbms/admin directory describes how to install, configure, and manage Statspack.  Pg 156
  • Statspack data can be captured at levels 0, 5, 6, 7, or 10 (see the book for an explanation of what is captured at each level).  Pg 157
  • The book provides an example of automating the collection of Statspack snaps, and automatically purging old Statspack snaps after 35 days.  Pg 159
  • The book describes in detail the various inputs that are provided to the query optimizer, including: system statistics, object statistics, constraints, physical design, stored outlines/SQL profiles/SQL plan baselines, execution environment/initialization parameters/client side environment variables, bind variable values/data types, dynamic sampling, and cardinality feedback.  The Oracle Database version, edition (Standard or Enterprise), and installed patches also potentially affect the plans generated by the query optimizer.  Pgs 170-172
  • Prior to version 11.1 the ANSI full outer join syntax was automatically translated into Oracle syntax utilizing a UNION ALL.  Pg 187
  • Access to the DBMS_STATS package is granted to public, but the GATHER_SYSTEM_STATISTICS role (automatically granted to DBA role) is required to change the system statistics in the data dictionary.  Pg 192
  • Bug 9842771 causes the SREADTIM and MREADTIM statistics to be incorrectly calculated when gathering system statistics on Oracle Database 11.2.0.1 and 11.2.0.2 unless patch 9842771 is installed.  Pg 197
  • The calculated CPU cost to access a specific table column is computed as the column position multiplied by 20.  Pg 203 (Reference)
  • When the mreadtim system statistic is null (has not been computed) or is smaller than the sreadtim system statistic, a formula is used to calculate the mreadtim statistic value when execution plans are generated.  When the sreadtim system statistic is 0 or not computed, a formula is used to derive a sreadtim statistic value when execution plans are generated.  If the MBRC system statistic is not set (or set to 0), the NOWORKLOAD system statistics are used.  See page 204 for the formulas.
  • The maximum number of buckets for histograms increased from 254 to 2048 in Oracle Database 12.1.  pg 213
  • Script to show tracked column usage that is used by DBMS_STATS.  Note that USER should be replaced with the schema name that contains the specified table.  Pg 242
  • When object statistics are collected using the default NO_INVALIDATE parameter value of DBMS_STATS.AUTO_INVALIDATE, cursors that depend on the object for which statistics were collected will be marked as invalidated after a random time period that is up to five hours (as determined by the value of the _OPTIMIZER_INVALIDATION_PERIOD  parameter; SQL statements using parallel execution will be immediately invalidated).  Pg 244
  • “Unfortunately, not all new features are disabled by this [OPTIMIZER_FEATURES_ENABLE] initialization parameter. For example, if you set it to 10.2.0.4 in version 11.2, you won’t get exactly the 10.2.0.4 query optimizer.”  Pg 277
  • “When the [memory utilization specified by the PGA_AGGREGATE_LIMIT] limit is reached, the database engine terminates calls or even kills sessions. To choose the session to deal with, the database engine doesn’t consider the maximum PGA utilization [for each session]. Instead, the database engine considers the session using the highest amount of untunable memory.”  Pg 296
  • EXPLAIN PLAN defines all bind variables as VARCHAR2, which may lead to unintended/unexpected data type conversion problems in the generated execution plan.  EXPLAIN PLAN also does not take advantage of bind variable peeking, further limiting EXPLAIN PLAN’s ability to accurately generate an execution plan for a previously executed SQL statement.  Unfortunately, there are times when EXPLAIN PLAN shows the correct predicate information, while the typically more reliable DBMS_XPLAN.DISPLAY_CURSOR, V$SQL_PLAN view, and V$SQL_PLAN_STATISTICS_ALL  view show incorrect predicate information for one or more lines in the execution plan.  Pgs 302-303, 336, 339, 346, 348
  • To have the query optimizer generate a 10053 trace whenever a specific SQL statement is hard parsed, execute the following command, replacing 9s5u1k3vshsw4 with the correct SQL_ID value: ALTER SYSTEM SET events 'trace[rdbms.SQL_Optimizer.*][sql:9s5u1k3vshsw4]'  Pg 308
  • Description of the columns found in most execution plans.  Pgs 312-313
  • Description of the undocumented ADVANCED format parameter value for DBMS_XPLAN.  Pg 316
  • Adaptive execution plans, where the query optimizer in Oracle Database 12.1 is able to postpone some execution plan decisions (such as selecting a nested loops join vs. a hash join), require the Enterprise Edition of Oracle Database.  Pg 349
  • The IS_RESOLVED_ADAPTIVE_PLAN column of V$SQL indicates whether or not an execution plan takes advantage of adaptive execution (use +ADAPTIVE in the format parameter of the DBMS_XPLAN call to see the adaptive portion of the execution plan).  Pg 351
  • Rather than just suggesting to the reader to add an index to avoid an unnecessary full table scan, the book includes the following important note: “For instance, if you add an index like in the previous example, you have to consider that the index will slow down the execution of every INSERT and DELETE statement on the indexed table as well as every UPDATE statement that modifies the indexed columns.”  Pg 361
  • “Simply put, hints are directives added to SQL statements to influence the query optimizer’s decisions. In other words, a hint is something that impels toward an action, rather than merely suggests one.”  Pg 363
  • “However, mixing comments and hints don’t always work. For example, a comment added before a hint invalidates it.”  This warning describes a real threat to intentionally included hints, and this warning was not included in the first edition of the book.  Pg 366
  • The default query block names assigned by the optimizer are: CRI$ for CREATE INDEX statements, DEL$ for DELETE statements, INS$ for INSERT statements, MISC$ for miscellaneous SQL statements such as LOCK TABLE, MRC$ for MERGE statements, SEL$ for SELECT statements, SET$ for set operators such as UNION and MINUS, and UPD$ for UPDATE statements.  Use the QB_NAME hint to specify a different, non-default query block name for use with various hints.  Pg 369
  • “One of the most common mistakes made in the utilization of hints is related to table aliases. The rule is that when a table is referenced in a hint, the alias should be used instead of the table name, whenever the table has an alias.”  Pg 371
  • Cross reference between several initialization parameter values and the equivalent hint syntax.  Pg 373
  • A demonstration of creating a hacked stored outline for a SQL statement (use as a last resort when it is not possible to create a suitable outline using other techniques such as exp/imp or initialization parameter changes).  Pgs 381-387
  • SQL profiles, a feature of the Enterprise Edition with the Tuning Pack and the Diagnostic Pack options, are applied even when the upper/lowercase letters and/or the white space differs, and if the FORCE_MATCH parameter is set to true, a SQL profile may be applied even if the literals (constants) in a SQL statement differ.   While SQL profiles allow text normalization, stored outlines and SQL plan management do not support the same degree of text normalization.  Pgs 390, 394, 402
  • SQL plan management, which requires an Enterprise Edition license, could be considered an enhanced version of stored outlines.  Pg 402
  • “Inappropriate hints occur frequently in practice as the reason for inefficient execution plans. Being able to override them with the technique you’ve seen in this section [SQL plan baseline execution plan replacement (stored outlines are also capable of removing embedded hints using the techniques shown on pages 381-387)] is extremely useful.”  Pg 408
  • “What causes long parse times? Commonly, they are caused by the query optimizer evaluating too many different execution plans. In addition, it can happen because of recursive queries executed on behalf of dynamic sampling.”  Pg 433
  • “The values provided by the parse count (total) and session cursor cache hits statistics are subject to several bugs.”  Details are provided on pages 437-438
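
The cardinality formula from page 19 can be verified with a quick test (referenced in the cardinality bullet point above).  Below is a minimal sketch; the table name T_CARD and the column values are arbitrary choices for this demonstration, and the predicted row count assumes that no histogram exists on column C2.  With 10,000 rows and 25 distinct values in column C2, selectivity * num_rows = (1/25) * 10,000 = 400:

CREATE TABLE T_CARD AS
SELECT
  ROWNUM C1,
  MOD(ROWNUM,25) C2  -- 25 distinct values, 400 rows for each value
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T_CARD')

SELECT C1 FROM T_CARD WHERE C2=10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

The Rows column of the resulting execution plan should show 400 for the TABLE ACCESS FULL operation, matching the selectivity * num_rows calculation.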
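The session attributes covered on pages 45-48 (see the bullet points above) can also be set directly with a couple of PL/SQL calls.  A minimal sketch follows; the attribute values used here are arbitrary examples:

EXEC DBMS_SESSION.SET_IDENTIFIER('HELPDESK_USER_42')
EXEC DBMS_APPLICATION_INFO.SET_CLIENT_INFO('TDM FRONT END')
EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'ORDER ENTRY',ACTION_NAME=>'INSERT ORDER')

-- Confirm the attribute values for the current session:
SELECT
  CLIENT_IDENTIFIER,
  CLIENT_INFO,
  MODULE,
  ACTION
FROM
  V$SESSION
WHERE
  SID=SYS_CONTEXT('USERENV','SID');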
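The 10046 event from the trace levels bullet point above is commonly set at level 12 (level 8, wait events, plus level 4, bind variable values).  A minimal sketch for tracing the current session follows; DBMS_MONITOR.SESSION_TRACE_ENABLE (page 57) is the supported alternative to setting the event directly:

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

SELECT COUNT(*) FROM DUAL;  -- any SQL statement executed here is captured in the trace file

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';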

Suggestions, Problems, and Errors:

  • The following scripts are currently missing from the script library:
    — session_info.sql  Pg 45 (in the script library as session_attributes.sql per the book author).
    — ash_top_files.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — ash_top_objects.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — ash_top_plsql.sql  Pg 136  (Edit: Jan 9, 2015, now downloadable here)
    — search_space.sql  Pg 169  (Edit: Nov 6, 2014, now downloadable here)
    — incremental_stats.sql  Pg 255  (Edit: Aug 22, 2014, now downloadable here)
    — copy_table_stats.sql  Pg 256  (Edit: Aug 22, 2014, now downloadable here)
    — optimizer_index_cost_adj.sql  Pg 288  (Edit: Aug 22, 2014, now downloadable here)
    — display_statspack.sql  Pg 306  (Edit: Nov 6, 2014, now downloadable here)
    — dynamic_in_conditions.sql  Pg 499  (Edit: Nov 6, 2014, now downloadable here)
    — fbi_cs.sql  Pg 506  (Edit: Nov 6, 2014, now downloadable here)
    — reserve_index.sql should be reverse_index.sql  Pg 671 (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 19 states that “selectivity is a value between 0 and 1 representing the fraction of rows filtered by an operation.”  I understand the intention of this statement, and the examples that follow the statement further clarify the author’s statement.  However, the “filtered” word in the statement seems to suggest that selectivity represents the fraction of the rows removed by an operation, rather than the rows that survived the filter at an operation.  This is just a minor wording problem that might cause the reader a little confusion when reading the book.  The author has addressed this issue in his errata list for the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 24, figure 2-3 has two entries for “Store parent cursor in library cache” – the second entry should show “Store child cursor in the library cache”, just as it is shown in figure 2-2 of the first edition of the book.  The author has addressed this issue in his errata list for the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 62 states, “The ALTER SESSION privilege required to execute the previous trigger can’t be granted through a role. Instead, it has to be granted directly to the user executing the trigger.”  I believe that the session executing the AFTER LOGON trigger, by default, would not need the ALTER SESSION privilege if the user creating the AFTER LOGON trigger had the ALTER SESSION privilege because the trigger is created by default with Definer’s Rights.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • Page 72 states, “disk is the number of blocks read with physical reads. Be careful—this isn’t the number of physical I/O operations. If this value is larger than the number of logical reads (disk > query + current), it means that blocks spilled into the temporary tablespace.”  While the statement is correct, and supported by the test case output, it might be a good idea to also mention that prefetching (index or table) or buffer warm up could be another possible cause of the DISK statistic value exceeding the value of the QUERY statistic value (especially after the database is bounced or the buffer cache is flushed).  The PHYSICAL READS CACHE PREFETCH and PHYSICAL READS PREFETCH WARMUP statistics might be useful for monitoring this type of access.  (Reference)
  • Page 73 states, “In addition to information about the first execution, version 11.2.0.2 and higher also provides the average and maximum number of rows returned over all executions. The number of executions itself is provided by the Number of plan statistics captured value.”  It appears that the word “executions” should have been “execution plans”.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book made the transition from views that require no additional cost licensing to views that require a Diagnostic Pack license on pages 114 and 115, without providing the reader a warning about the licensing requirements (such a warning is typically present in the book).  (Edit: August 21, 2014: I have discussed this bullet item with the book author.  Pages 114 and 115 describe the use of V$METRIC, V$METRICGROUP, and V$METRICNAME.  The author provided this link, which clearly describes which features, views, and scripts require a Diagnostic Pack license to access.  V$METRIC, V$METRICGROUP, and V$METRICNAME are not among the views that are listed, and it is possible to access data from V$METRIC when the CONTROL_MANAGEMENT_PACK_ACCESS parameter is set to NONE.  Some of the Oracle documentation (link) (link) (Oracle Support Doc ID 748642.1) indicates that “V$METRIC displays the most recent statistic values for the complete set of metrics captured by the AWR infrastructure.” and/or that V$METRIC is one of the Automatic Workload Repository views.  The book did not transition to views that require a Diagnostic Pack license on pages 114 and 115.)
  • There are a couple of minor typos in the book that do not affect the accuracy of statements made by the book.  For example, “… prevent it from happenning again” on page 149.  Most of these typos are easily missed when reading the book.  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states on page 215, “This is especially true for multibyte character sets where each character might take up to three bytes.”  Per the Oracle Database globalization documentation, some character sets, such as UTF-8 (AL32UTF8), may require up to four bytes per character.  The author has addressed this issue in his errata list for the book.
  • The book states on page 221, “For this reason, as of version 12.1, top frequency histograms and hybrid histograms replace height-balanced histograms.”  It appears based on the Oracle documentation that height-balanced histograms are not replaced if the histograms are created before the upgrade.  Additionally, if the ESTIMATE_PERCENT parameter is specified in the DBMS_STATS call, a height-balanced histogram will be created if the number of distinct values exceeds the number of buckets.  (Reference).  Page 239 makes a clarifying statement, “Also note that some features (top frequency histograms, hybrid histograms, and incremental statistics) only work when dbms_stats.auto_sample_size is specified [for the ESTIMATE_PERCENT parameter].”  “Work” may be a poor wording choice; “generated” may be a better choice of wording.  (Reference: Oracle 12c Histogram Test)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states on page 282 about dynamic sampling level 11: “The query optimizer decides when and how to use dynamic sampling.  This level is available as of version 12.1 only.”  Oracle Database 11.2.0.4 also adds support for dynamic sampling level 11.  (Reference)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • When describing the output of DBMS_XPLAN, the book states, “Reads: The number of physical reads performed during the execution.”  The book should have clarified that the unit of measure for the Buffers, Reads, and Writes statistics is blocks.  Pg 313  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “Syntactical errors in hints don’t raise errors. If the parser doesn’t manage to parse them, they’re simply considered real comments.”  That statement is correct for all hints except the oddly behaving IGNORE_ROW_ON_DUPKEY_INDEX hint, which will raise an “ORA-38917: IGNORE_ROW_ON_DUPKEY_INDEX hint disallowed for this operation” error; the CHANGE_DUPKEY_ERROR_INDEX hint, which will raise an “ORA-38916: CHANGE_DUPKEY_ERROR_INDEX hint disallowed for this operation” error; and the RETRY_ON_ROW_CHANGE hint, which will raise an “ORA-38918: RETRY_ON_ROW_CHANGE hint disallowed for this operation” error if the hints are specified incorrectly.  Pg 365 (a similar comment is made at the top of page 371).  (Reference) (Reference 2)  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “The aim of using a prepared statement is to share a single cursor for all SQL statements and, consequently, to avoid unnecessary hard parses by turning them into soft parses.”  This statement should be clarified to point out that the aim is to share a single cursor for all _similar_ SQL statements (those that would have differed only by a literal/constant if bind variables were not used).  Pg 428  (Edit: Aug 22, 2014, author now lists this item in the book’s errata page)
  • The book states, “Cursor sharing doesn’t replace literal values contained in static SQL statements executed through PL/SQL. For dynamic SQL statements, the replacement takes place only when literals aren’t mixed with bind variables. This isn’t a bug; it’s a design decision.”  This statement about dynamic SQL statements, at least for Oracle Database 11.2.0.2 and 12.1.0.1 (and possibly 10.2.0.2), is no longer true.  The author’s cursor_sharing_mix.sql script does show literal value replacement when bind variables are also used for SQL statements executed outside PL/SQL (a minimal sketch of this literal-replacement behavior appears after this list).  Pg 434  (Reference: Oracle Cursor Sharing Test).  (Edit: August 21, 2014: I have discussed this bullet item with the book author.  The quote from page 434 is describing the behavior of cursor sharing only as it relates to SQL statements executed within PL/SQL code.  I misinterpreted “dynamic SQL statements” to include ad-hoc SQL statements that are submitted by client-side applications (including SQL*Plus) that are built on-demand as a result of user input.  In technical terms, the term dynamic SQL implicitly implies that the SQL statements are executed within PL/SQL code.  The test script results do match what is printed in the book.  The author’s script demonstrated (specifically about the CURSOR_SHARING parameter, but the demonstration likely applies to other functionality as well) that SQL executed directly may be handled differently when executed within PL/SQL code.)
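
Below is a minimal sketch of the literal-replacement behavior mentioned in the last bullet point above, assuming a simple test table T1 exists with numeric columns C1 and C2, and that the session may be safely altered.  With CURSOR_SHARING set to FORCE, the literal 2 should appear in the SQL_TEXT column of V$SQL as a system-generated bind variable (such as :"SYS_B_0"), while the user-specified bind variable :N1 is left unchanged:

ALTER SESSION SET CURSOR_SHARING=FORCE;

VARIABLE N1 NUMBER
EXEC :N1 := 1

SELECT C1 FROM T1 WHERE C1=:N1 AND C2=2;

-- The literal 2 should now appear as a system-generated bind in SQL_TEXT:
SELECT
  SQL_ID,
  SQL_TEXT
FROM
  V$SQL
WHERE
  SQL_TEXT LIKE 'SELECT C1 FROM T1 WHERE C1=:N1%';

ALTER SESSION SET CURSOR_SHARING=EXACT;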

Data Dictionary Views/Structures (the index at the back of the book misses most of these entries):

  • ALL_TAB_MODIFICATIONS  Pg 237
  • AUX_STATS$  (SYS schema)  Pgs 193, 196
  • CDB_ENABLED_TRACES  Pgs 58, 59, 60
  • CDB_HIST_SQL_PLAN  Pg 305
  • CDB_OPTSTAT_OPERATIONS  Pg 269
  • CDB_SQL_PLAN_BASELINES  Pg 409
  • CDB_SQL_PROFILES  Pgs 393, 399
  • CDB_TAB_MODIFICATIONS  Pg 237
  • COL$ (SYS schema)  Pg 242
  • COL_USAGE$ (SYS schema)  Pg 242
  • DBA_ADVISOR_EXECUTIONS  Pg 413
  • DBA_ADVISOR_PARAMETERS  Pg 413
  • DBA_AUTOTASK_TASK  Pg 259
  • DBA_AUTOTASK_WINDOW_CLIENTS  Pg 260
  • DBA_ENABLED_TRACES  Pgs 58, 59, 60
  • DBA_HIST_ACTIVE_SESS_HISTORY  Pg 420
  • DBA_HIST_BASELINE  Pgs 155, 156
  • DBA_HIST_COLORED_SQL  Pg 153
  • DBA_HIST_SNAPSHOT  Pg 154
  • DBA_HIST_SQL_PLAN  Pgs 305, 324
  • DBA_HIST_SQLTEXT  Pg 324
  • DBA_HIST_WR_CONTROL  Pg 153
  • DBA_OPTSTAT_OPERATION_TASKS  Pg 200
  • DBA_OPTSTAT_OPERATIONS  Pgs 200, 201, 269, 270
  • DBA_SCHEDULER_JOBS  Pgs 257-258
  • DBA_SCHEDULER_PROGRAMS  Pgs 258, 259
  • DBA_SCHEDULER_WINDOWS  Pgs 258, 260
  • DBA_SCHEDULER_WINGROUP_MEMBERS  Pg 258
  • DBA_SQL_MANAGEMENT_CONFIG  Pgs 417, 418
  • DBA_SQL_PLAN_BASELINES  Pgs 408, 409
  • DBA_SQL_PLAN_DIR_OBJECTS  Pg 230
  • DBA_SQL_PLAN_DIRECTIVES  Pg 230
  • DBA_SQL_PROFILES  Pgs 393, 399
  • DBA_TAB_MODIFICATIONS  Pg 237
  • DBA_TAB_STAT_PREFS  Pg 248
  • DBA_TAB_STATS_HISTORY  Pg 261
  • DBA_USERS  Pgs 235, 242
  • GV$INSTANCE  Pgs 60, 63
  • OBJ$ (SYS schema)  Pg 242
  • OL$ (OUTLN schema)  Pgs 381, 383
  • OL$HINTS (OUTLN schema)  Pgs 381, 383-384
  • OL$NODES (OUTLN schema)  Pg 381
  • OPTSTAT_HIST_CONTROL$ (SYS schema)  Pgs 246, 248
  • PLAN_TABLE  Pgs 203, 215, 216, 219-223, 225, 300
  • REGISTRY$ (SYS schema)  Pg 235
  • REGISTRY$SCHEMAS (SYS schema)  Pg 235
  • SQLOBJ$ (SYS schema)  Pgs 399, 410
  • SQLOBJ$DATA (SYS schema)  Pgs 399, 410
  • SQLPROF$ (SYS schema)  Pg 399
  • SQLPROF$ATTR (SYS schema)  Pg 399
  • STATS$LEVEL_DESCRIPTION  Pg 157
  • STATS$SQL_PLAN  Pg 306
  • STATS$STATSPACK_PARAMETER  Pgs 157, 158
  • USER_COL_PENDING_STATS  Pg 251
  • USER_IND_PENDING_STATS  Pg 251
  • USER_IND_STATISTICS  Pgs 209, 231-232, 251
  • USER_INDEXES  Pgs 354, 486, 487, 488, 519, 520, 523
  • USER_OBJECTS  Pg 300
  • USER_OUTLINE_HINTS  Pg 377
  • USER_OUTLINES  Pgs 377, 378, 379, 380
  • USER_PART_COL_STATISTICS  Pg 209
  • USER_PART_HISTOGRAMS  Pg 209
  • USER_SCHEDULER_JOBS  Pg 197
  • USER_STAT_EXTENSIONS  Pg 226
  • USER_SUBPART_COL_STATISTICS  Pgs 209, 253
  • USER_SUBPART_HISTOGRAMS  Pg 209
  • USER_TAB_COL_STATISTICS  Pgs 209, 211-213, 251, 252, 355
  • USER_TAB_COLS  Pg 227
  • USER_TAB_HISTGRM_PENDING_STATS  Pg 251
  • USER_TAB_HISTOGRAMS  Pgs 209, 214-215, 218-220, 222, 224-225, 251
  • USER_TAB_MODIFICATIONS  Pg 237
  • USER_TAB_PENDING_STATS  Pg 251
  • USER_TAB_STATISTICS  Pgs 209, 210-211, 215, 249, 251, 255, 264
  • USER_TAB_STATS_HISTORY  Pg 261
  • USER_TABLES  Pg 240
  • V$ACTIVE_SESSION_HISTORY  Pgs 119-121, 132, 138, 420, 424
  • V$BGPROCESS  Pg 63
  • V$CON_SYS_TIME_MODEL  Pgs 108, 419
  • V$CON_SYSSTAT  Pg 113
  • V$CON_SYSTEM_EVENT  Pg 111
  • V$CON_SYSTEM_WAIT_CLASS  Pg 110
  • V$DATABASE  Pg 324
  • V$DIAG_INFO  Pgs 63, 64
  • V$DISPATCHER  Pg 63
  • V$EVENT_HISTOGRAM  Pg 112
  • V$EVENT_NAME  Pg 109, 163
  • V$FILESTAT  Pg 195
  • V$FIXED_TABLE  Pg 233
  • V$METRIC  Pg 115, 142
  • V$METRIC_HISTORY  Pgs 115, 131
  • V$METRICGROUP  Pg 114
  • V$METRICNAME  Pg 114
  • V$MYSTAT  Pgs 114, 280
  • V$OSSTAT  Pgs 106, 131, 161
  • V$PARAMETER  Pgs 62, 296
  • V$PARAMETER_VALID_VALUES  Pg 278
  • V$PGASTAT  Pg 294-295
  • V$PROCESS  Pgs 64, 65
  • V$PX_PROCESS  Pg 63
  • V$SES_OPTIMIZER_ENV  Pg 374
  • V$SESS_TIME_MODEL  Pgs 107, 108, 110, 111, 144, 145, 419
  • V$SESSION  Pgs  45, 57, 61, 63, 64, 115-116, 117, 119, 145, 146, 304, 322, 390
  • V$SESSION_BLOCKERS  Pg 116
  • V$SESSION_EVENT  Pgs 111, 145
  • V$SESSION_WAIT  Pg 116
  • V$SESSION_WAIT_CLASS  Pg 110
  • V$SESSTAT  Pgs 113, 114, 145, 430, 437
  • V$SGASTAT  Pg 118
  • V$SHARED_SERVER  Pg 63
  • V$SQL  Pgs 25, 26, 126, 147, 304, 322, 378, 380, 386, 409
  • V$SQL_BIND_METADATA  Pg 31
  • V$SQL_CS_HISTOGRAM  Pgs 36, 37
  • V$SQL_CS_SELECTIVITY  Pgs 36, 37
  • V$SQL_CS_STATISTICS  Pg 36
  • V$SQL_HINT  Pgs 367, 376, 403
  • V$SQL_MONITOR  Pg 127
  • V$SQL_OPTIMIZER_ENV  Pg 374
  • V$SQL_PLAN  Pgs 303, 304, 305, 322, 336, 339, 346, 348, 351
  • V$SQL_PLAN_STATISTICS  Pg 303
  • V$SQL_PLAN_STATISTICS_ALL  Pgs 303, 304, 320, 321, 322, 336, 339, 346, 348
  • V$SQL_SHARED_CURSOR  Pgs 27, 29, 31
  • V$SQL_WORKAREA  Pg 303
  • V$SQLAREA  Pgs 25, 26, 126, 147, 277, 424
  • V$SQLSTATS  Pgs 126, 147, 424
  • V$STATNAME  Pgs 113, 280, 430, 437
  • V$SYS_OPTIMIZER_ENV  Pg 374
  • V$SYS_TIME_MODEL  Pgs 108, 144, 419
  • V$SYSAUX_OCCUPANTS  Pg 153
  • V$SYSMETRIC  Pg 115
  • V$SYSMETRIC_HISTORY  Pg 115
  • V$SYSSTAT  Pg 113
  • V$SYSTEM_EVENT  Pgs 111, 112
  • V$SYSTEM_WAIT_CLASS  Pgs 110, 143
  • WRI$_OPTSTAT_AUX_HISTORY  Pg 199
  • X$KSUSE  Pg 257

Initialization Parameters:

  • _CONVERT_SET_TO_JOIN  Pg 189
  • _OPTIMIZER_IGNORE_HINTS  Pg 372
  • _SQLMON_MAX_PLAN  Pg 127
  • _SQLMON_MAX_PLANLINES  Pg 127
  • _TRACE_FILES_PUBLIC  Pg 64
  • BACKGROUND_DUMP_DEST  Pgs 63, 259
  • BITMAP_MERGE_AREA_SIZE  Pgs 292, 297
  • CONTROL_MANAGEMENT_PACK_ACCESS  Pgs 103, 117, 127, 402
  • CPU_COUNT  Pg 131
  • CREATE_STORED_OUTLINES  Pg 377
  • CURRENT_SCHEMA  Pg 320
  • CURSOR_SHARING  Pgs 373, 434-436
  • DB_BLOCK_SIZE  Pgs 278, 281
  • DB_CACHE_SIZE  Pg 281
  • DB_FILE_MULTIBLOCK_READ_COUNT  Pgs 195, 198, 274, 278-281
  • DB_NAME  Pg 63
  • DIAGNOSTIC_DEST  Pg 63
  • HASH_AREA_SIZE  Pgs 292, 297
  • INSTANCE_NAME  Pg 63
  • JOB_QUEUE_PROCESSES  Pg 243, 261
  • MAX_DUMP_FILE_SIZE  Pgs 61, 62
  • MEMORY_MAX_TARGET  Pg 292
  • MEMORY_TARGET  Pg 292
  • NLS_SORT  Pg 28
  • OPEN_CURSORS  Pgs 432, 433, 437
  • OPTIMIZER_ADAPTIVE_FEATURES  Pgs 230, 351
  • OPTIMIZER_ADAPTIVE_REPORTING_ONLY  Pg 351
  • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES  Pgs 405, 407, 417
  • OPTIMIZER_DYNAMIC_SAMPLING  Pgs 281-286, 373
  • OPTIMIZER_FEATURES_ENABLE  Pgs 202, 213, 277-278, 282, 373
  • OPTIMIZER_INDEX_CACHING  Pgs 191, 274, 289, 373
  • OPTIMIZER_INDEX_COST_ADJ  Pgs 191, 274, 286-289, 373
  • OPTIMIZER_MODE  Pgs 26, 274, 276-277, 373
  • OPTIMIZER_SECURE_VIEW_MERGING  Pgs 176, 289-291, 373
  • OPTIMIZER_USE_PENDING_STATISTICS  Pgs 250, 373
  • OPTIMIZER_USE_SQL_PLAN_BASELINES  Pg 415
  • PGA_AGGREGATE_LIMIT  Pgs 274, 293
  • PGA_AGGREGATE_TARGET  Pgs 274, 292, 295
  • PROCESSES  Pg 295
  • RESULT_CACHE_MODE  Pg 373
  • SERVICE_NAMES  Pg 59
  • SESSION_CACHED_CURSORS  Pgs 436, 437, 440
  • SESSIONS  Pg 281
  • SORT_AREA_RETAINED_SIZE  Pgs 292, 296-297
  • SORT_AREA_SIZE  Pgs 292, 296
  • SQLTUNE_CATEGORY  Pgs 395, 402
  • STAR_TRANSFORMATION_ENABLED  Pg 373
  • STATISTICS_LEVEL  Pgs 62, 106, 152, 199, 238, 261, 303, 322
  • TIMED_STATISTICS  Pgs 61, 62, 74, 105-106
  • TRACEFILE_IDENTIFIER  Pg 65
  • USE_PRIVATE_OUTLINES  Pg 384
  • USE_STORED_OUTLINES  Pgs 379, 385
  • USER_DUMP_DEST  Pg 63
  • WORKAREA_SIZE_POLICY  Pgs 274, 292

SQL Hints:

  • ALL_ROWS  Pgs 277, 373, 399
  • APPEND  Pg 367
  • APPEND_VALUES  Pg 367
  • BIND_AWARE  Pgs 38, 367
  • CACHE  Pg 367
  • CHANGE_DUPKEY_ERROR_INDEX  Pg 367
  • CLUSTER  Pg 366
  • COLUMN_STATS  Pg 401
  • CURSOR_SHARING_EXACT  Pgs 366, 373, 434
  • DRIVING_SITE  Pg 367
  • DYNAMIC_SAMPLING  Pgs 283, 366, 367, 373
  • DYNAMIC_SAMPLING_EST_CDN  Pg 283
  • ELIMINATE_JOIN  Pg 366
  • EXPAND_TABLE  Pg 366
  • FACT  Pg 366
  • FIRST_ROWS  Pg 373
  • FIRST_ROWS(n)  Pgs 277, 366, 373
  • FULL  Pgs 365, 366, 367, 368, 369, 371, 383, 406, 407, 408
  • GATHER_OPTIMIZER_STATISTICS  Pg 367
  • GATHER_PLAN_STATISTICS  Pgs 303, 320, 322, 366, 376, 403
  • HASH  Pg 366
  • IGNORE_OPTIM_EMBEDDED_HINTS  Pg 399
  • IGNORE_ROW_ON_DUPKEY_INDEX  Pg 367
  • INDEX  Pgs 366, 371, 407
  • INDEX_ASC  Pg 366
  • INDEX_COMBINE  Pg 366
  • INDEX_DESC  Pg 366
  • INDEX_FFS  Pg 366
  • INDEX_JOIN  Pg 366
  • INDEX_SS  Pg 366
  • INDEX_SS_ASC  Pg 366
  • INDEX_SS_DESC  Pg 366
  • INDEX_STATS  Pg 401
  • INLINE  Pgs 356, 376, 403
  • LEADING  Pg 366
  • MATERIALIZE  Pgs 367, 376, 403
  • MERGE  Pg 366
  • MODEL_MIN_ANALYSIS  Pg 367
  • MONITOR  Pgs 127, 367
  • NATIVE_FULL_OUTER_JOIN  Pg 366
  • NLJ_BATCHING  Pg 366
  • NO_APPEND  Pg 367
  • NO_BIND_AWARE  Pg 367
  • NO_CACHE  Pg 367
  • NO_CPU_COSTING – disables the CPU cost model, reverting to the I/O cost model.  Pg 191
  • NO_ELIMINATE_JOIN  Pg 366
  • NO_EXPAND  Pg 366
  • NO_EXPAND_TABLE  Pg 366
  • NO_FACT  Pg 366
  • NO_GATHER_OPTIMIZER_STATISTICS  Pg 367
  • NO_INDEX  Pg 366
  • NO_INDEX_FFS  Pg 366
  • NO_INDEX_SS_ASC  Pg 366
  • NO_INDEX_SS_DESC  Pg 366
  • NO_MERGE  Pg 366
  • NO_MONITOR  Pgs 127, 367
  • NO_NATIVE_FULL_OUTER_JOIN  Pg 366
  • NO_NLJ_BATCHING  Pg 366
  • NO_OUTER_JOIN_TO_INNER  Pg 366
  • NO_PARALLEL  Pg 367
  • NO_PARALLEL_INDEX  Pg 367
  • NO_PQ_CONCURRENT_UNION  Pg 367
  • NO_PQ_SKEW  Pg 367
  • NO_PUSH_PRED  Pg 366
  • NO_PUSH_SUBQ  Pg 366
  • NO_PX_JOIN_FILTER  Pg 367
  • NO_QUERY_TRANSFORMATION  Pg 366
  • NO_RESULT_CACHE  Pgs 367, 373
  • NO_REWRITE  Pg 366
  • NO_STAR_TRANSFORMATION  Pg 366
  • NO_STATEMENT_QUEUING  Pg 367
  • NO_SWAP_JOIN_INPUTS  Pg 366
  • NO_UNNEST  Pg 366
  • NO_USE_CUBE  Pg 366
  • NO_USE_HASH  Pg 366
  • NO_USE_MERGE  Pg 366
  • NO_USE_NL  Pg 366
  • NO_XML_QUERY_REWRITE  Pg 366
  • NO_XMLINDEX_REWRITE  Pg 366
  • OPT_ESTIMATE  Pg 400
  • OPT_PARAM  Pgs 250, 366, 373
  • OPTIMIZER_FEATURES_ENABLE  Pgs 278, 366, 373, 399
  • ORDERED  Pg 366
  • OUTER_JOIN_TO_INNER  Pg 366
  • PARALLEL  Pg 367
  • PARALLEL_INDEX  Pg 367
  • PQ_CONCURRENT_UNION  Pg 367
  • PQ_DISTRIBUTE  Pg 367
  • PQ_FILTER  Pg 367
  • PQ_SKEW  Pg 367
  • PUSH_PRED  Pg 366
  • PUSH_SUBQ  Pg 366
  • PX_JOIN_FILTER  Pg 367
  • QB_NAME  Pgs 367, 368
  • RESULT_CACHE  Pgs 367, 373
  • RETRY_ON_ROW_CHANGE  Pg 367
  • REWRITE  Pg 366
  • RULE  Pg 398
  • SET_TO_JOIN – used to enable the set (INTERSECT or MINUS) transformation to standard join syntax.  Pg 189
  • STAR_TRANSFORMATION  Pg 366
  • STATEMENT_QUEUING  Pg 367
  • SWAP_JOIN_INPUTS  Pg 366
  • TABLE_STATS  Pg 401
  • UNNEST  Pg 366
  • USE_CONCAT  Pg 366
  • USE_CUBE  Pg 366
  • USE_HASH  Pg 366
  • USE_MERGE  Pg 366
  • USE_MERGE_CARTESIAN  Pg 366
  • USE_NL  Pg 366
  • USE_NL_WITH_INDEX  Pg 366

Events:

  • 10046  Pgs 52-62
  • 10053  Pgs 307-309
  • 10132  Pgs 309-310, 371

Privileges:

  • ADMINISTER SQL MANAGEMENT  Pg 397
  • ADMINISTER SQL MANAGEMENT OBJECT  Pg 417
  • ADVISOR  Pg 397
  • ALTER ANY OUTLINE  Pg 385
  • ALTER ANY SQL PROFILE  Pg 397
  • ALTER SESSION  Pgs 61, 62
  • ANALYZE ANY  Pgs 200, 245, 248, 251, 261, 263
  • ANALYZE ANY DICTIONARY  Pgs 200, 245, 248, 261
  • CREATE ANY  Pg 96
  • CREATE ANY OUTLINE  Pg 385
  • CREATE ANY SQL PROFILE  Pg 397
  • CREATE JOB  Pg 244
  • CREATE TABLE  Pg 69
  • DROP ANY OUTLINE  Pg 385
  • DROP ANY SQL PROFILE  Pg 397
  • EXECUTE ANY PROCEDURE  Pg 385
  • MANAGE ANY QUEUE  Pg 244
  • MANAGE SCHEDULER  Pg 244
  • MERGE ANY VIEW  Pg 291
  • SELECT ANY DICTIONARY  Pgs 235, 246

Roles:

  • DBA  Pgs 57, 192, 244, 291, 385, 417
  • EXECUTE_CATALOG_ROLE  Pg 385
  • GATHER_SYSTEM_STATISTICS  Pg 192
  • HPROF_PROFILE (custom role)  Pg 101
  • SELECT_CATALOG_ROLE  Pgs 322, 324
  • SQL_TRACE (custom role)  Pg 61

Statistics:

  • background elapsed time  Pg 107
  • bytes received via SQL*Net from client  Pg 430
  • bytes sent via SQL*Net to client  Pgs 430-431
  • DB CPU  Pgs 108, 110, 111, 162, 163
  • DB time  Pgs 55, 107, 108, 109, 111, 120, 122, 126, 127, 132, 134, 137, 140, 162, 163, 165, 166, 419, 420, 424
  • hard parse elapsed time  Pg 107
  • parse count (total)  Pgs 437, 438
  • parse time elapsed  Pgs 107, 420
  • session cursor cache count  Pg 437
  • session cursor cache hits  Pgs 437, 438
  • sql execute elapsed time  Pg 165

Wait Events:

  • DB FILE SCATTERED READ  Pgs 75, 83
  • DB FILE SEQUENTIAL READ  Pgs 75, 83, 112, 163
  • DIRECT PATH READ TEMP  Pgs 75, 82
  • DIRECT PATH WRITE TEMP  Pgs 75, 82
  • latch free  Pg 436
  • library cache: mutex X  Pg 436
  • SQL*NET MESSAGE FROM CLIENT  Pgs 76, 111, 421-422, 423, 430, 432, 435
  • SQL*NET MESSAGE TO CLIENT  Pgs 76, 80, 430

Built-In Functions/Procedures:

  • ANALYZE  Pgs 207, 264
  • CAST  Pgs 284, 321
  • DBMS_APPLICATION_INFO.SET_ACTION  Pg 46
  • DBMS_APPLICATION_INFO.SET_CLIENT_INFO  Pg 46
  • DBMS_APPLICATION_INFO.SET_MODULE  Pgs 46, 66
  • DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER  Pg 413
  • DBMS_AUTO_TASK_ADMIN.DISABLE  Pg 260
  • DBMS_AUTO_TASK_ADMIN.ENABLE  Pg 260
  • DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE  Pg 58
  • DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE  Pg 58
  • DBMS_MONITOR.DATABASE_TRACE_DISABLE  Pg 60
  • DBMS_MONITOR.DATABASE_TRACE_ENABLE  Pg 60
  • DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE  Pg 59
  • DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE  Pg 59
  • DBMS_MONITOR.SESSION_TRACE_DISABLE  Pg 58
  • DBMS_MONITOR.SESSION_TRACE_ENABLE  Pg 57
  • DBMS_OUTLN.CLEAR_USED  Pg 380
  • DBMS_OUTLN.CREATE_OUTLINE  Pg 378
  • DBMS_OUTLN.DROP_BY_CAT  Pg 385
  • DBMS_OUTLN.UPDATE_BY_CAT  Pg 379
  • DBMS_OUTLN_EDIT.REFRESH_PRIVATE_OUTLINE  Pg 384
  • DBMS_OUTPUT.PUT  Pg 238
  • DBMS_OUTPUT.PUT_LINE  Pgs 238, 245
  • DBMS_RANDOM.NORMAL  Pg 209
  • DBMS_RANDOM.STRING  Pgs 209, 284
  • DBMS_RANDOM.VALUE  Pg 284
  • DBMS_SCHEDULER.CREATE_JOB  Pg 159
  • DBMS_SCHEDULER.DISABLE  Pg 259
  • DBMS_SCHEDULER.ENABLE  Pg 259
  • DBMS_SESSION.IS_ROLE_ENABLED  Pgs 61, 101
  • DBMS_SESSION.RESET_PACKAGE  Pg 295
  • DBMS_SESSION.SESSION_TRACE_DISABLE  Pg 61
  • DBMS_SESSION.SESSION_TRACE_ENABLE  Pg 61
  • DBMS_SESSION.SET_IDENTIFIER  Pg 46
  • DBMS_SPM.ALTER_SQL_PLAN_BASELINE  Pgs 414-415
  • DBMS_SPM.CONFIGURE  Pgs 417, 418
  • DBMS_SPM.CREATE_STGTAB_BASELINE  Pgs 415, 416
  • DBMS_SPM.DROP_SQL_PLAN_BASELINE  Pgs 408, 416
  • DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE  Pgs 411-412
  • DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE  Pgs 406, 408
  • DBMS_SPM.LOAD_PLANS_FROM_SQLSET  Pg 409
  • DBMS_SPM.PACK_STGTAB_BASELINE  Pgs 415, 416
  • DBMS_SPM.REPORT_AUTO_EVOLVE_TASK  Pg 413-414
  • DBMS_SPM.UNPACK_STGTAB_BASELINE  Pgs 415, 416
  • DBMS_SQLDIAG.DUMP_TRACE (no extra cost license required, dumps 10053 trace from library cache)  Pg 308
  • DBMS_SQLTUNE.ACCEPT_SQL_PROFILE  Pgs 392-393
  • DBMS_SQLTUNE.ALTER_SQL_PROFILE  Pgs 393-394
  • DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF  Pgs 396, 397
  • DBMS_SQLTUNE.CREATE_TUNING_TASK  Pg 390
  • DBMS_SQLTUNE.DROP_SQL_PROFILE  Pg 397
  • DBMS_SQLTUNE.DROP_TUNING_TASK  Pg 392
  • DBMS_SQLTUNE.EXECUTE_TUNING_TASK  Pg 390
  • DBMS_SQLTUNE.IMPORT_SQL_PROFILE  Pg 401
  • DBMS_SQLTUNE.PACK_STGTAB_SQLPROF  Pgs 396, 397
  • DBMS_SQLTUNE.REMAP_STGTAB_SQLPROF  Pgs 396, 397
  • DBMS_SQLTUNE.REPORT_SQL_DETAIL  Pg 141
  • DBMS_SQLTUNE.REPORT_SQL_MONITOR  Pgs 127-130
  • DBMS_SQLTUNE.REPORT_SQL_MONITOR_LIST  Pg 127
  • DBMS_SQLTUNE.REPORT_TUNING_TASK  Pg 390
  • DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF  Pgs 396, 397
  • DBMS_STATS.GATHER_FIXED_OBJECTS_STATS  Pg 233
  • DBMS_STATS.UPGRADE_STAT_TABLE  Pg 245
  • DBMS_STATS.ALTER_STATS_HISTORY_RETENTION  Pgs 199-200, 261
  • DBMS_STATS.CONVERT_RAW_VALUE  Pg 212
  • DBMS_STATS.CONVERT_RAW_VALUE_NVARCHAR  Pg 212
  • DBMS_STATS.CONVERT_RAW_VALUE_ROWID  Pg 212
  • DBMS_STATS.COPY_TABLE_STATS  Pg 256-257
  • DBMS_STATS.CREATE_EXTENDED_STATS  Pgs 226, 229
  • DBMS_STATS.CREATE_STAT_TABLE  Pgs 200, 244
  • DBMS_STATS.DELETE_COLUMN_STATS  Pg 267, 268
  • DBMS_STATS.DELETE_DATABASE_PREFS  Pg 248
  • DBMS_STATS.DELETE_DATABASE_STATS  Pg 267
  • DBMS_STATS.DELETE_DICTIONARY_STATS  Pg 267
  • DBMS_STATS.DELETE_FIXED_OBJECTS_STATS  Pg 267
  • DBMS_STATS.DELETE_INDEX_STATS  Pg 267
  • DBMS_STATS.DELETE_PENDING_STATS  Pg 251
  • DBMS_STATS.DELETE_SCHEMA_PREFS  Pg 248
  • DBMS_STATS.DELETE_SCHEMA_STATS  Pg 267
  • DBMS_STATS.DELETE_SYSTEM_STATS  Pgs 192, 198, 201
  • DBMS_STATS.DELETE_TABLE_PREFS  Pg 248
  • DBMS_STATS.DELETE_TABLE_STATS  Pgs 252, 267
  • DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY  Pg 266
  • DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING  Pg 267
  • DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB  Pg 266
  • DBMS_STATS.DROP_EXTENDED_STATS  Pg 227
  • DBMS_STATS.DROP_STAT_TABLE  Pgs 200, 245
  • DBMS_STATS.EXPORT_SYSTEM_STATS  Pg 192
  • DBMS_STATS.GATHER_DATABASE_STATS  Pgs 233, 239
  • DBMS_STATS.GATHER_DICTIONARY_STATS  Pg 233
  • DBMS_STATS.GATHER_INDEX_STATS  Pg 233
  • DBMS_STATS.GATHER_SCHEMA_STATS  Pgs 233, 238, 240, 263
  • DBMS_STATS.GATHER_SYSTEM_STATS  Pgs 192, 194-197, 200, 201, 206
  • DBMS_STATS.GATHER_TABLE_STATS  Pgs 233, 249, 250, 252, 255, 263, 285
  • DBMS_STATS.GET_PARAM  Pgs 245, 246, 247
  • DBMS_STATS.GET_PREFS  Pg 247
  • DBMS_STATS.GET_STATS_HISTORY_RETENTION  Pgs 199, 261
  • DBMS_STATS.GET_SYSTEM_STATS  Pg 192
  • DBMS_STATS.IMPORT_SYSTEM_STATS  Pg 192
  • DBMS_STATS.LOCK_SCHEMA_STATS  Pg 262
  • DBMS_STATS.LOCK_TABLE_STATS  Pg 262
  • DBMS_STATS.OBJECTTAB  Pg 239
  • DBMS_STATS.PUBLISH_PENDING_STATS  Pgs 250-251
  • DBMS_STATS.PURGE_STATS  Pgs 199-200, 261
  • DBMS_STATS.REPORT_COL_USAGE  (note: use SET LONG 100000 in SQL*Plus before executing query)  Pgs 228, 242-243
  • DBMS_STATS.REPORT_SINGLE_STATS_OPERATION  Pg 201
  • DBMS_STATS.RESET_COL_USAGE  Pg 243
  • DBMS_STATS.RESET_GLOBAL_PREF_DEFAULTS  Pg 248
  • DBMS_STATS.RESET_PARAM_DEFAULTS  Pg 246
  • DBMS_STATS.RESTORE_DATABASE_STATS  Pg 262
  • DBMS_STATS.RESTORE_DICTIONARY_STATS  Pg 262
  • DBMS_STATS.RESTORE_FIXED_OBJECT_STATS  Pg 262
  • DBMS_STATS.RESTORE_SCHEMA_STATS  Pg 262
  • DBMS_STATS.RESTORE_SYSTEM_STATS  Pgs 192, 199-200
  • DBMS_STATS.RESTORE_TABLE_STATS  Pg 262
  • DBMS_STATS.SEED_COL_USAGE  Pg 228
  • DBMS_STATS.SET_DATABASE_PREFS  Pg 246, 247
  • DBMS_STATS.SET_GLOBAL_PREFS  Pg 246
  • DBMS_STATS.SET_PARAM  Pgs 245, 246
  • DBMS_STATS.SET_SCHEMA_PREFS  Pg 247
  • DBMS_STATS.SET_SYSTEM_STATS  Pg 192, 198
  • DBMS_STATS.SET_TABLE_PREFS  Pgs 247, 250, 251, 254, 256
  • DBMS_STATS.UNLOCK_SCHEMA_STATS  Pg 262
  • DBMS_STATS.UNLOCK_TABLE_STATS  Pg 262
  • DBMS_UTILITY.GET_TIME  Pg 280
  • DBMS_WORKLOAD_REPOSITORY.ADD_COLORED_SQL  Pg 153
  • DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE  Pgs 154-155
  • DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT  Pg 154
  • DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE  Pg 156
  • DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE  Pg 156
  • DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS  Pg 153
  • DBMS_WORKLOAD_REPOSITORY.REMOVE_COLORED_SQL  Pg 153
  • DBMS_WORKLOAD_REPOSITORY.RENAME_BASELINE  Pg 155
  • DBMS_WORKLOAD_REPOSITORY.SELECT_BASELINE_METRIC  Pg 155
  • DBMS_XPLAN.DISPLAY  Pgs 301, 302, 306, 316-321, 349, 350, 368, 370, 371, 380, 382, 384, 395
  • DBMS_XPLAN.DISPLAY_AWR  Pgs 305-306, 311, 323-325
  • DBMS_XPLAN.DISPLAY_CURSOR  Pgs 249, 304-305, 311, 322-323, 336, 352, 353, 406, 407, 408
  • DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE  Pgs 409, 410, 411
  • EXPLAIN PLAN  Pgs 299-302, 320, 336, 339, 346, 348, 349, 368, 370, 371, 380, 382, 384, 395
  • EXTRACT  Pgs 399, 410
  • EXTRACTVALUE  Pgs 399, 410
  • LAG  Pgs 203, 214, 224
  • LATERAL  Pg 180
  • LNNVL  Pgs 175, 346, 347, 502
  • NTILE  Pg 217
  • NULLIF  Pgs 354, 419, 424
  • PERFSTAT.STATSPACK.CLEAR_BASELINE  Pg 160
  • PERFSTAT.STATSPACK.MAKE_BASELINE  Pg 160
  • PERFSTAT.STATSPACK.SNAP  Pg 158
  • PGA_PKG.ALLOCATE  Pg 295
  • RATIO_TO_REPORT  Pgs 92, 111, 112, 120, 221, 223
  • STATSPACK.MODIFY_STATSPACK_PARAMETER  Pg 158
  • STATSPACK.PURGE  Pg 159
  • SYS_CONTEXT, SYS_CONTEXT('USERENV','SID'), SYS_CONTEXT('USERENV','SESSION_USER')  Pgs 45, 61, 64, 113, 321, 372, 390
  • SYS_OP_COMBINED_HASH  Pg 227
  • TABLE  Pgs 155, 249, 301, 302, 304, 305, 317-321, 323, 324, 349, 350, 352, 368, 370, 371, 380, 382, 384, 395, 399, 406-411, 625, 639, 669
  • UNPIVOT  Pg 158
  • UTL_RAW.CAST_TO_BINARY_DOUBLE  Pg 212
  • UTL_RAW.CAST_TO_BINARY_FLOAT  Pg 212
  • UTL_RAW.CAST_TO_BINARY_INTEGER  Pg 212
  • UTL_RAW.CAST_TO_NUMBER  Pg 212
  • UTL_RAW.CAST_TO_NVARCHAR2  Pg 212
  • UTL_RAW.CAST_TO_RAW  Pg 212
  • UTL_RAW.CAST_TO_VARCHAR2  Pg 212
  • XMLSEQUENCE  Pgs 399, 410
  • XMLTABLE  Pgs 201, 270
  • XMLTYPE  Pgs 399, 410

Packages:

  • DBMS_HPROF profiler  Pgs 89-96
  • DBMS_JOB  Pg 257
  • DBMS_PROFILER profiler  Pgs 89, 96-101
  • DBMS_SCHEDULER  Pg 257
  • DBMS_SQL_MONITOR  Pg 131
  • DBMS_SQLTUNE  Pgs 389-402
  • DBMS_STATS  Pgs 191-201, 207, 233-271
  • DBMS_XPLAN  Pgs 311-325

Oracle Provided Scripts:

  • ashrpt.sql  Pg 124
  • ashrpti.sql  Pg 124
  • awrddrpt.sql  Pg 161
  • awrrpt.sql  Pg 160
  • awrsqrpt.sql  Pg 306
  • coe_xfr_sql_profile.sql  Pg 401
  • dbmshptab.sql  Pg 90
  • profrep.sql  Pg 100
  • proftab.sql  Pg 97
  • spauto.sql  Pg 159
  • spcreate.sql  Pg 157
  • sppurge.sql  Pg 159
  • spreport.sql  Pg 161
  • sprepsql.sql  Pgs 166, 306
  • utlxplan.sql  Pg 300
  • utlxmv.sql  Pg 587
  • utlxrw.sql  Pg 586

Utilities:

  • /proc/loadavg file on Linux.  Pg 131
  • exp  Pg 381
  • grep  Pg 422
  • PLSHPROF (processes DBMS_HPROF output)  Pgs 89, 94
  • snapper.sql (utility script by Tanel Põder)  Pgs 145-147
  • SQL Developer  Pgs 95-96, 100
  • TKPROF  Pgs 67-79, 421-422, 426
  • TRCSESS  Pg 67
  • TVD$XTAT  Pgs 67, 79-88, 422-423, 426-427

Execution Plans:

Automatic Query Transformations:

    • Count transformation – transforms COUNT(column) to COUNT(*) when a NOT NULL constraint is present on the column.  Pg 174
    • Common sub-expression elimination – removes unnecessary/duplicate predicates from the WHERE clause.  Pgs 174-175
    • OR expansion – transforms SQL statements with OR clauses into multiple SQL blocks with UNION ALL specified between the SQL blocks (a sketch of this transformation appears after this list). Pg 175
    • Simple view merging – merges query blocks using heuristics, used when the query blocks do not contain aggregations, set operators, hierarchical queries, MODEL clauses, or scalar subqueries (SELECT statement in the column list of a query).  Pgs 176-177
    • Complex view merging  – merges query blocks that contain aggregations using a cost-based approach, but does not work with hierarchical queries or queries containing GROUPING SETS, ROLLUP, PIVOT, or MODEL clauses.  Pgs 176-177
    • Select list pruning – removing selected columns from non-top-level query blocks that are not referenced by parent query blocks.  Pgs 177-178
    • Filter push down – pushing column restrictions from the WHERE clause of a parent query block into a child query block.  Pgs 178-179
    • Join predicate push down – push join predicates from a parent query block into an unmergeable child query block.  Pgs 179-180
    • Predicate move around – applying filter restrictions in one child query block (inline view) to another child query block due to join conditions that are specified between the parent and child query blocks.  Pg 180
    • Distinct placement – applies a DISTINCT filter to a child table before joining to a parent table, when the DISTINCT clause is specified for a SQL statement.  Pg 181
    • Distinct elimination – removes a specified DISTINCT filter when all columns of the primary key, a unique key, or the ROWID are selected from a table.  Pg 181
    • Group by placement – applies an aggregation function to a child table before joining that table to a parent table – a COUNT aggregation specified in a SQL statement is transformed into a COUNT aggregation of the child table before the join, and a SUM aggregation after the join.  Pgs 181-182
    • Order-by elimination – removes unnecessary ORDER BY clauses using heuristics from subqueries, inline views, and joined standard views.  Pg 182
    • Subquery unnesting – injects semi-join, anti-join, and scalar subqueries into the FROM clause; helps to eliminate the need to execute a subquery once for every row returned by the containing query block.  Pg 183
    • Subquery coalescing – combines equivalent semi-join and anti-join subqueries into a single query block.  Pgs 183-184
    • Subquery removal using window functions – replaces subqueries containing aggregate functions with analytic functions and standard joins.  Pgs 184-185
    • Join elimination – eliminates the parent table from a view when a declared foreign key is present between the tables, and no columns are selected from the parent table in the query block that selects from the view.  Pg 185
    • Join factorization – avoids repeatedly accessing one of the row sources by rewriting UNION ALL type queries so that predicates are applied to a second table as well as the UNION ALL operation before joining to the other row source.  Pgs 185-186
    • Outer join to inner join – when possible, converts left/right outer joins to inner joins, typically because other predicates in the SQL statement conflict with the outer join syntax (for example, a WHERE clause predicate on the NULL-supplied table that rejects NULL values).  Pg 186
    • Full outer join to outer join, UNION ALL, and antijoin.  Pg 187
    • Table expansion – using an index scan on a partitioned table when the index is unusable for some table partitions.
    • Set to join conversion – converts INTERSECT or MINUS type queries to standard join queries, must specify the SET_TO_JOIN hint.  Pg 189
    • Star transformation.  Pg 189, chapter 14
    • Query rewrite with materialized views – accesses data stored in a materialized view, rather than the base tables, even though the materialized view is not referenced in the SQL statement (Enterprise Edition feature).  Pg 189, chapter 15
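
As a sketch of the OR expansion transformation listed above (the table name T and its columns are arbitrary examples), a statement written as:

SELECT * FROM T WHERE C1=1 OR C2=2;

may be transformed by the query optimizer into the logical equivalent of:

SELECT * FROM T WHERE C1=1
UNION ALL
SELECT * FROM T WHERE C2=2 AND LNNVL(C1=1);

The LNNVL function in the second branch prevents rows that already satisfied the first branch from being returned a second time.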

Execution Plan Operations:

    • AND-EQUAL  Pg 331
    • BITMAP AND  Pg 331
    • BITMAP KEY ITERATION  Pgs 333, 334
    • BITMAP MINUS  Pg 331
    • BITMAP OR  Pg 331
    • CONCATENATION  Pg 331
    • CONNECT BY PUMP  Pg 339
    • CONNECT BY WITH FILTERING  Pgs 333, 338-341
    • CONNECT BY WITHOUT FILTERING  Pg 331
    • COUNT STOPKEY  Pg 330
    • FILTER  Pgs 330, 333, 335-337
    • HASH JOIN  Pg 331
    • HASH JOIN ANTI  Pg 362
    • HASH JOIN ANTI NA  Pgs 347, 362
    • INDEX RANGE SCAN  Pg 327
    • INLIST ITERATOR  Pg 331
    • INTERSECTION  Pg 331
    • MERGE JOIN  Pgs 331, 332, 333
    • MINUS  Pgs 331, 362
    • MULTI-TABLE INSERT  Pg 331
    • NESTED LOOPS  Pgs 333, 334-335
    • PARTITION RANGE ITERATOR  Pg 331
    • RECURSIVE WITH PUMP  Pg 342
    • SELECT  Pg 327
    • SORT ORDER BY  Pg 327
    • SORT ORDER BY STOPKEY  Pg 330
    • SQL MODEL  Pg 331
    • STATISTICS COLLECTOR  Pgs 350, 351, 352
    • TABLE ACCESS BY INDEX ROWID  Pg 327
    • TEMP TABLE TRANSFORMATION  Pg 331
    • UNION-ALL  Pgs 331, 332, 333
    • UNION ALL (RECURSIVE WITH) BREADTH FIRST, DEPTH FIRST  Pgs 333, 341
    • UPDATE  Pgs 333, 337-338

Latches:

  • CACHE BUFFERS CHAINS  Pgs 666, 668
  • LIBRARY CACHE  Pgs 25, 436
  • SHARED POOL  Pg 25

Oracle Error Messages:

  • Error: cannot fetch last explain plan from PLAN_TABLE.  Pg 320
  • ORA-00600: Internal error code, arguments.  Pg 569
  • ORA-00904: Invalid identifier.  Pg 537
  • ORA-00931: Missing identifier.  Pg 379
  • ORA-00942: Table or view does not exist.  Pg 569
  • ORA-01013: User requested cancel of current operation.  Pg 259
  • ORA-01017: invalid username/password; logon denied.  Pg 69
  • ORA-01039: Insufficient privileges on underlying objects of the view.  Pg 300
  • ORA-01408: Such column list already indexed.  Pg 524
  • ORA-01422: Exact fetch returns more than requested number of rows.  Pg 322
  • ORA-01722: Invalid number error.  Pg 657
  • ORA-02097: Parameter cannot be modified because specified value is invalid.  Pg 295
  • ORA-03113: End-of-file on communication channel.  Pg 296
  • ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT.  Pg 296
  • ORA-07445: Exception encountered: core dump [exception].  Pg 569
  • ORA-12026: Invalid filter column detected.  Pg 596
  • ORA-12827: Insufficient parallel query slaves available error.  Pgs 624, 625
  • ORA-12838: Cannot read/modify an object after modifying it in parallel.  Pgs 640, 644
  • ORA-12839: Cannot modify an object in parallel after modifying it.  Pg 640
  • ORA-12850: Could not allocate slaves on all specified instances.  Pg 614
  • ORA-13541: System moving window baseline size greater than retention.  Pg 156
  • ORA-13717: Tuning Package License is needed for using this feature.  Pg 402
  • ORA-13859: Action cannot be specified without the module specification.  Pg 59
  • ORA-20005: Object statistics are locked.  Pgs 238, 263
  • ORA-23538: Cannot explicitly refresh a NEVER REFRESH materialized view.  Pg 588
  • ORA-26040: Data block was loaded using the NOLOGGING option.  Pg 642
  • ORA-32313: REFRESH FAST of <mview> unsupported after PMOPs.  Pg 594
  • ORA-38029: Object statistics are locked.  Pg 264
  • ORA-48913: Writing into trace file failed, file size limit [] reached.  Pg 62

Blog articles that reference the “Troubleshooting Oracle Performance, Second Edition” book:

FIRST_ROWS_n Optimizer Mode – What is Wrong with this Statement?

FIRST_ROWS_n Optimizer Mode – What is Wrong with this Statement?

8 06 2014

June 8, 2014

It has been nearly two years since I last wrote a review of an Oracle Database related book, although in that time I have written reviews of two Microsoft Exchange Server 2013 books and a handful of security cameras.  My copy of the second edition of the “Troubleshooting Oracle Performance” book arrived yesterday, so I have been spending some time reading the new edition and comparing it with a PDF version of the first edition.  My initial impressions of the second edition, based on the first 30 pages, are almost entirely positive, although I did find one glitch in the second edition so far.

At the top of page 27 is the following caution, which did not appear in the first edition of the book:

“As the previous example shows, the optimizer_mode column doesn’t show the right value for child number 1. In fact, the column shows FIRST_ROWS instead of FIRST_ROWS_1. The same behavior can be observed with FIRST_ROWS_10, FIRST_ROWS_100, and FIRST_ROWS_1000 as well.  This fact leads to the potential problem that even though the execution environment is different, the SQL engine doesn’t distinguish that difference. As a result, a child cursor might be incorrectly shared.”

So, what is wrong with the above statement?  The above statement, if true, would seem to indicate a bug in Oracle Database, one that has potentially serious performance side-effects; the opening paragraph of the “Troubleshooting Oracle Performance” book states, “Performance is not merely optional, though; it is a key property of an application,” so, by extension, a performance problem should perhaps be addressed in the same fashion as a bug.  I wrote the following into my notes about the above quoted caution section from the book:

This statement might be incorrect; the OPTIMIZER_ENV_HASH_VALUE column in V$SQL should show a different value if the OPTIMIZER_MODE changes from FIRST_ROWS_1 to FIRST_ROWS_1000 and the same SQL statement is executed a second time.

So, I set up a test case to determine whether the above caution quote from the book is correct, or if what I wrote into my notes about the book is accurate.  Actually, both of the above seemingly mutually exclusive statements are correct based on the results of my test case script.  So, how is that possible?  The key is whether or not the SQL statement’s cursor is flushed out of the library cache, or if the child cursor is somehow marked as unshareable (possibly due to statistics collection) between the two executions of the same SQL statement with different FIRST_ROWS_n OPTIMIZER_MODE parameter values.  The  OPTIMIZER_ENV_HASH_VALUE column in V$SQL will be different (at least in Oracle Database 11.2.0.1) if the OPTIMIZER_MODE changes from FIRST_ROWS_1 to FIRST_ROWS_1000 and a hard parse is required, but the change of the OPTIMIZER_MODE from the first value to the second is NOT sufficient to force that hard parse.

The statement found in the “Troubleshooting Oracle Performance” book is correct, so what is wrong with the statement?  If performance problems are considered bugs, then this particular issue seems to point to a bug in Oracle Database, where an opportunity to re-optimize a SQL statement is missed.  Possibly equally important to recognize is that the OPTIMIZER_ENV_HASH_VALUE found in V$SQL is NOT used to determine whether a SQL statement must be re-optimized (at least in Oracle Database 11.2.0.1), even though the changed OPTIMIZER_MODE results in a different OPTIMIZER_ENV_HASH_VALUE once a hard parse happens.

So, how is this seemingly minor hard parsing issue a potential problem?  Consider a case where Oracle’s query optimizer should predict that the cardinality of an operation will be 990 rows.  With the OPTIMIZER_MODE set to FIRST_ROWS_1000, the query optimizer will optimize the SQL statement just as if the OPTIMIZER_MODE were set to ALL_ROWS because the predicted cardinality is less than 1000, so the optimizer might correctly choose to perform a full table scan.  With the OPTIMIZER_MODE set to FIRST_ROWS_1, the predicted cardinality for the same operation might be just 2 (or 1), thus leading to a possibly inefficient index access path if one exists.

If the quote on page 27 does not contain the glitch that I found in the first 30 pages of the book, where is the glitch?  I actually found two glitches in the first 27 pages of the book, but I will not mention those glitches at this time.

It might be interesting to see if later releases of Oracle Database actually do force a hard parse for a SQL statement when the OPTIMIZER_MODE changes from FIRST_ROWS_1 to FIRST_ROWS_1000 (or if two sessions have different FIRST_ROWS_n OPTIMIZER_MODE settings, and each executes the same SQL statement).  Here is the test script that I constructed:

DROP TABLE T1 PURGE;
 
SET LINESIZE 140
SET PAGESIZE 1000
SET TRIMSPOOL ON
 
SELECT
  VERSION
FROM
  V$INSTANCE;
 
CREATE TABLE T1 AS
SELECT
  ROWNUM C1,
  MOD(ROWNUM,500) C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10000;
 
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1')
 
ALTER SYSTEM FLUSH SHARED_POOL;
 
ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1;
 
SELECT C1 FROM T1 WHERE C2=2;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
SELECT
  SQL_ID,
  CHILD_NUMBER,
  OPTIMIZER_MODE,
  OPTIMIZER_ENV_HASH_VALUE,
  PLAN_HASH_VALUE
FROM
  V$SQL
WHERE
  SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1000;
 
SELECT C1 FROM T1 WHERE C2=2;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
SELECT
  SQL_ID,
  CHILD_NUMBER,
  OPTIMIZER_MODE,
  OPTIMIZER_ENV_HASH_VALUE,
  PLAN_HASH_VALUE
FROM
  V$SQL
WHERE
  SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SELECT
  CHILD_NUMBER,
  NAME,
  VALUE
FROM
  V$SQL_OPTIMIZER_ENV
WHERE
  SQL_ID='bqx2tj39jw1f5'
  AND NAME='optimizer_mode'
ORDER BY
  NAME;
 
ALTER SYSTEM FLUSH SHARED_POOL;
 
ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1000;
 
SELECT C1 FROM T1 WHERE C2=2;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
SELECT
  SQL_ID,
  CHILD_NUMBER,
  OPTIMIZER_MODE,
  OPTIMIZER_ENV_HASH_VALUE,
  PLAN_HASH_VALUE
FROM
  V$SQL
WHERE
  SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1;
 
SELECT C1 FROM T1 WHERE C2=2;
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
SELECT
  SQL_ID,
  CHILD_NUMBER,
  OPTIMIZER_MODE,
  OPTIMIZER_ENV_HASH_VALUE,
  PLAN_HASH_VALUE
FROM
  V$SQL
WHERE
  SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SELECT
  CHILD_NUMBER,
  NAME,
  VALUE
FROM
  V$SQL_OPTIMIZER_ENV
WHERE
  SQL_ID='bqx2tj39jw1f5'
  AND NAME='optimizer_mode'
ORDER BY
  NAME;

The script is broken into two halves, with an ALTER SYSTEM FLUSH SHARED_POOL command separating the halves.  The execution plan is displayed after each execution of the test SQL statement to show the optimizer’s predicted cardinality for the TABLE ACCESS FULL operation, as well as the calculated cost and estimated number of bytes returned from that operation.  The query optimizer’s calculated cost for an operation could cause the execution plan to change, although such a change could not happen in this test case script.

Below is the output that I received on Oracle Database 11.2.0.1 for the first half of the script (I have removed excessive blank lines and the output of the test SQL statement).  Notice that the TABLE ACCESS FULL operation shows a predicted cardinality of just 2 rows, rather than the 20 rows (10,000 rows / 500 distinct values in column C2) that ALL_ROWS costing would predict, and that the OPTIMIZER_ENV_HASH_VALUE is displayed as 1002285490 when starting with the FIRST_ROWS_1 OPTIMIZER_MODE.  Also notice that after the switch to FIRST_ROWS_1000, the same child cursor (CHILD_NUMBER 0) is reused and V$SQL_OPTIMIZER_ENV still reports first_rows_1 – no hard parse took place:

SQL> ALTER SYSTEM FLUSH SHARED_POOL;
 
System altered.
 
SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1;
 
Session altered.
 
SQL> SELECT C1 FROM T1 WHERE C2=2;
 
20 rows selected.
 
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  bqx2tj39jw1f5, child number 0
-------------------------------------
SELECT C1 FROM T1 WHERE C2=2
 
Plan hash value: 3617692013
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("C2"=2)
 
SQL> SELECT
  2    SQL_ID,
  3    CHILD_NUMBER,
  4    OPTIMIZER_MODE,
  5    OPTIMIZER_ENV_HASH_VALUE,
  6    PLAN_HASH_VALUE
  7  FROM
  8    V$SQL
  9  WHERE
 10    SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SQL_ID        CHILD_NUMBER OPTIMIZER_ OPTIMIZER_ENV_HASH_VALUE PLAN_HASH_VALUE
------------- ------------ ---------- ------------------------ ---------------
bqx2tj39jw1f5            0 FIRST_ROWS               1002285490      3617692013
 
SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1000;
 
Session altered.
 
SQL> SELECT C1 FROM T1 WHERE C2=2;
 
20 rows selected.
 
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  bqx2tj39jw1f5, child number 0
-------------------------------------
SELECT C1 FROM T1 WHERE C2=2
 
Plan hash value: 3617692013
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("C2"=2)
 
SQL> SELECT
  2    SQL_ID,
  3    CHILD_NUMBER,
  4    OPTIMIZER_MODE,
  5    OPTIMIZER_ENV_HASH_VALUE,
  6    PLAN_HASH_VALUE
  7  FROM
  8    V$SQL
  9  WHERE
 10    SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SQL_ID        CHILD_NUMBER OPTIMIZER_ OPTIMIZER_ENV_HASH_VALUE PLAN_HASH_VALUE
------------- ------------ ---------- ------------------------ ---------------
bqx2tj39jw1f5            0 FIRST_ROWS               1002285490      3617692013
 
SQL>
SQL> SELECT
  2    CHILD_NUMBER,
  3    NAME,
  4    VALUE
  5  FROM
  6    V$SQL_OPTIMIZER_ENV
  7  WHERE
  8    SQL_ID='bqx2tj39jw1f5'
  9    AND NAME='optimizer_mode'
 10  ORDER BY
 11    NAME;
 
CHILD_NUMBER NAME                                     VALUE
------------ ---------------------------------------- -------------------------
           0 optimizer_mode                           first_rows_1
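
Incidentally, had a second child cursor appeared here, one way to investigate the reason would be to check the mismatch columns in V$SQL_SHARED_CURSOR – a sketch (the OPTIMIZER_MODE_MISMATCH column should show Y for a child cursor created because of a conflicting OPTIMIZER_MODE):

SELECT
  SQL_ID,
  CHILD_NUMBER,
  OPTIMIZER_MODE_MISMATCH
FROM
  V$SQL_SHARED_CURSOR
WHERE
  SQL_ID='bqx2tj39jw1f5';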

Below is the output that I received on Oracle Database 11.2.0.1 for the second half of the script (again with excessive blank lines and the output of the test SQL statement removed).  Notice that the predicted cardinality is now the expected 20 rows, and that the OPTIMIZER_ENV_HASH_VALUE is displayed as 4271299772 (rather than the 1002285490 seen above) when starting with the FIRST_ROWS_1000 OPTIMIZER_MODE.  Once again, changing the OPTIMIZER_MODE – this time from FIRST_ROWS_1000 to FIRST_ROWS_1 – did not trigger a hard parse; the same child cursor is reused and V$SQL_OPTIMIZER_ENV still reports first_rows_1000:

SQL> ALTER SYSTEM FLUSH SHARED_POOL;
 
System altered.
 
SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1000;
 
Session altered.
 
SQL> SELECT C1 FROM T1 WHERE C2=2;
 
20 rows selected.
 
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  bqx2tj39jw1f5, child number 0
-------------------------------------
SELECT C1 FROM T1 WHERE C2=2
 
Plan hash value: 3617692013
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |    20 |   160 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("C2"=2)
 
SQL> SELECT
  2    SQL_ID,
  3    CHILD_NUMBER,
  4    OPTIMIZER_MODE,
  5    OPTIMIZER_ENV_HASH_VALUE,
  6    PLAN_HASH_VALUE
  7  FROM
  8    V$SQL
  9  WHERE
 10    SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SQL_ID        CHILD_NUMBER OPTIMIZER_ OPTIMIZER_ENV_HASH_VALUE PLAN_HASH_VALUE
------------- ------------ ---------- ------------------------ ---------------
bqx2tj39jw1f5            0 FIRST_ROWS               4271299772      3617692013
 
SQL> ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1;
 
Session altered.
 
SQL> SELECT C1 FROM T1 WHERE C2=2;
 
20 rows selected.
 
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  bqx2tj39jw1f5, child number 0
-------------------------------------
SELECT C1 FROM T1 WHERE C2=2
 
Plan hash value: 3617692013
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |    20 |   160 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("C2"=2)
 
SQL> SELECT
  2    SQL_ID,
  3    CHILD_NUMBER,
  4    OPTIMIZER_MODE,
  5    OPTIMIZER_ENV_HASH_VALUE,
  6    PLAN_HASH_VALUE
  7  FROM
  8    V$SQL
  9  WHERE
 10    SQL_TEXT='SELECT C1 FROM T1 WHERE C2=2';
 
SQL_ID        CHILD_NUMBER OPTIMIZER_ OPTIMIZER_ENV_HASH_VALUE PLAN_HASH_VALUE
------------- ------------ ---------- ------------------------ ---------------
bqx2tj39jw1f5            0 FIRST_ROWS               4271299772      3617692013
 
SQL> SELECT
  2    CHILD_NUMBER,
  3    NAME,
  4    VALUE
  5  FROM
  6    V$SQL_OPTIMIZER_ENV
  7  WHERE
  8    SQL_ID='bqx2tj39jw1f5'
  9    AND NAME='optimizer_mode'
 10  ORDER BY
 11    NAME;
 
CHILD_NUMBER NAME                                     VALUE
------------ ---------------------------------------- -------------------------
           0 optimizer_mode                           first_rows_1000
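
To see how the scaled-down FIRST_ROWS_1 cardinality estimate could actually change the access path, as described at the start of this section, the test script might be extended with an index on column C2.  A sketch (the index name is my own invention, and the resulting plans should be verified rather than assumed):

CREATE INDEX IND_T1_C2 ON T1(C2);

ALTER SYSTEM FLUSH SHARED_POOL;

ALTER SESSION SET OPTIMIZER_MODE=FIRST_ROWS_1;

SELECT C1 FROM T1 WHERE C2=2;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

ALTER SESSION SET OPTIMIZER_MODE=ALL_ROWS;

SELECT C1 FROM T1 WHERE C2=2;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

With FIRST_ROWS_1 in effect, I would expect the 2 row estimate to make an INDEX RANGE SCAN look attractive for quickly returning the first row, while ALL_ROWS would likely retain the full table scan for the complete 20 row resultset.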

As repeatedly demonstrated in the “Troubleshooting Oracle Performance” book, testing theories is important.  In this case, I learned something new when a statement in the book did not agree with what I recalled as Oracle Database behavior.  I anticipate that the learning (or re-learning) process will continue as I quietly question the statements found in the book.  There is enough new material in the second edition to make it a compelling purchase even for people who already own the first edition (I own the print copy and the companion PDF of the first edition).





On the Topic of Technology… 6

16 03 2014

March 16, 2014

(Back to the Previous Post in the Series)  (Forward to the Next Post in the Series)

It has been a while since my last post on this blog – I guess that the simple answer is that I was busy with a lot of non-Oracle Database related items, and was suffering from a bit of writer’s block (nothing that a block dump can’t fix?).  I am expecting to soon receive the annual bill from WordPress for keeping this blog free of advertisements, as well as a bill for allowing the customized blog theme.

So, given the number of months since my last blog post, I took the time to update the list of the top five most viewed articles for the past quarter.  The number one article shows how to install the Nagios network monitoring software on a Synology NAS (actually three different Synology NAS units), which means that a low cost NAS unit could be used to verify not only that a server used with Oracle Database responds to a ping request, but also that an Oracle database on that server is reachable and healthy enough to return a resultset for a simple SQL statement.  The number two article shows how to do a little mathematics with the help of Oracle Database, approximating the distance between two longitude and latitude coordinates (see the sketch after this paragraph).  The number three article shows how to use a programming language that was last updated in the late 1990s with the latest Microsoft operating system and what was then the latest version of Oracle Database.
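
As a rough sketch of the kind of distance calculation described in the number two article (this example uses the spherical law of cosines, which may differ from the formula that article settled on; the coordinates are made up, 3959 approximates the Earth’s radius in miles, and dividing by 57.29577951 converts degrees to radians):

SELECT
  3959 * ACOS(
      SIN(42.733/57.29577951) * SIN(42.279/57.29577951)
    + COS(42.733/57.29577951) * COS(42.279/57.29577951)
      * COS((-84.555/57.29577951) - (-83.741/57.29577951))
  ) DISTANCE_MILES
FROM
  DUAL;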

The advancement of technology means that it is important for IT professionals to stay on top of the advancements in their technology niche without completely cutting ties with the technology of the past, as illustrated by the current number three article on this blog.  For me, that means buying and reading various books cover to cover, reading articles, and experimenting with technology.  It helps that I am an IT manager in addition to being an Oracle DBA, so my technology niche is rather broad.  In December 2013 I placed an order for the updated version of “Troubleshooting Oracle Performance”, in part because I enjoyed the first edition so much that I read it twice, and in part because I have not had sufficient time to experiment with Oracle Database 12c – it appears that the second edition might ship next month.  Someone recently left a comment about another book that I reviewed here and on Amazon – I tried ordering that book twice without success, and now there is apparently a new version of the book on Amazon that includes coverage of Oracle Database 12c, and the book is in stock!  Someone will have to spend the $56, write a review, and let me know if the author fixed the items that I and readers of this blog so patiently and clearly mentioned in 2010.  Anyone interested in the challenge?

As I mentioned, the scope of my job responsibilities extends far beyond Oracle Database.  I recently migrated the company’s email system from Microsoft Exchange 2007 to Microsoft Exchange 2013 SP1.  Anyone who remembers the fun of typing cryptic code on a command line would enjoy this experience.  Simply moving the public folders from the old server to the new server was an excellent example of command line fun, reminding me of the time I spent years ago trying to compile X.509 certificate support into a Linux kernel.  One book that I read and reviewed covered public folders in extensive detail, yet the commands found in the book failed at step 1 with an error message.  The other book that I read and reviewed more or less skimmed the topic of public folders, so it was of no help for the task at hand.  No problem – I would just go to the source, Microsoft, for the solution.  A recent article on Microsoft’s site clearly listed all of the steps required to move the public folders from Exchange Server 2007 to Exchange Server 2013… all except for one very important step.  So, I was running command after command on the servers, trying to move the public folders from one server to the other with only a partial idea of what those commands were doing.  Everything was going great, until I executed the last command listed here:

Get-PublicFolder -Recurse | Export-CliXML C:\PFMigration\Legacy_PFStructure.xml
Get-PublicFolderStatistics | Export-CliXML C:\PFMigration\Legacy_PFStatistics.xml
Get-PublicFolder -Recurse | Get-PublicFolderClientPermission | Select-Object Identity,User -ExpandProperty AccessRights | Export-CliXML C:\PFMigration\Legacy_PFPerms.xml
Get-PublicFolderDatabase | ForEach {Get-PublicFolderStatistics -Server $_.Server | Where {$_.Name -like "*\*"}}
Set-PublicFolder -Identity <public folder identity> -Name <new public folder name>
Get-OrganizationConfig | Format-List PublicFoldersLockedforMigration, PublicFolderMigrationComplete
Set-OrganizationConfig -PublicFoldersLockedforMigration:$false -PublicFolderMigrationComplete:$false
Get-PublicFolderMigrationRequest | Remove-PublicFolderMigrationRequest -Confirm:$false
Get-Mailbox -PublicFolder 
Get-PublicFolder
Get-Mailbox -PublicFolder | Where{$_.IsRootPublicFolderMailbox -eq $false} | Remove-Mailbox -PublicFolder -Force -Confirm:$false
Get-Mailbox -PublicFolder | Remove-Mailbox -PublicFolder -Force -Confirm:$false
.\Export-PublicFolderStatistics.ps1 <Folder to size map path> <FQDN of source server>
...

Spot the error?  Why is this server telling me that I need to provide a comma-separated list of parameters when I execute the Export-PublicFolderStatistics.ps1 script?  So, I submit the script again with commas separating the parameters – no, the same error is returned.  Must be a problem where I need to specify the parameters in double quotes also – no, the same error is returned.  What the four-letter word?  That is right, the return of trying to compile X.509 certificate support into the Linux kernel roughly a decade ago, only now on Microsoft’s premium messaging platform.

So, what is the missing step?  Exchange Server 2007 ships with Microsoft PowerShell 1.0, while this command requires Microsoft PowerShell 2.0 to execute – yet that requirement was never mentioned.  Oh yeah, we forgot a step, get over it – you have another set of 10 cryptic commands to enter, only to be greeted with a failure message during the public folder migration stating that the migration failed because some folder name that once existed on Microsoft Exchange 5.5 contains a character that is now considered invalid in a folder name.  These problems never happen with an upgrade in the Oracle Database world, do they?  The advancement of technology, or back to the command line?

I have also spent a bit of time experimenting with IP security cameras.  I put one in my vehicle and went for a drive.  Ah, 1969 – someone obviously has not finished compiling the time-saving feature into the camera’s firmware? (Click the picture for a larger view.)

NC-239WF-LicensePlateTest-Mod

Let’s try a different stop light – these two cars are either turning the wrong direction (obviously an indication of a bug in the camera’s firmware), or are running a red light. (Click the picture for a larger view.)

NC-239WF-RedLightTest

The camera did not pick up much interesting in the vehicle, so I set it up just in time to catch a game of what appears to be football… or maybe it was a game of sock-her? (Click the picture for a larger view.)

NC-239WF-DeerFightTest

Technology is fun, except when it hits you in the nose.
