Install the Oracle Client on a Synology DiskStation DS1813+ or DS412+ for Nagios to Monitor Databases

July 26, 2013

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

In the previous article of this series I provided steps to install and run the Nagios network monitoring utility on either a Synology DiskStation DS1813+ or a DS412+ for the purpose of pinging network devices to verify that the devices respond to ping requests.  That article may have seemed a bit out of place on a blog that is primarily intended to include notes about using Oracle Database.  In the previous article I hinted that there is a check_oracle plugin for Nagios, suggesting that a Synology DiskStation DS1813+ or DS412+ could be used to monitor Oracle databases.

This blog article does NOT use the check_oracle plugin.  There is a small handful of official Nagios plugins for Oracle Database, but this article does not use any of those plugins either.  Instead, I took a chance at crafting a custom Perl script to monitor Oracle databases, the first time I have ever used Perl.  I thought back to some of the earlier blog articles where I introduced a couple of Oracle Database monitoring scripts written in VBScript, including Working with Oracle’s Time Model Data 3 and Oracle Statistics Chart Viewer, and I started seeing the possibilities of using Nagios to alert DBAs about potential issues.  Yet, having never used Perl, at the time I did not even have a clue how to concatenate strings in the language (it turns out that there are more than two techniques; a short example follows).
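
For reference, string concatenation in Perl can be written several ways; the following is a minimal sketch (not taken from the monitoring script itself):

my $prefix = "ORA-";
my $code   = "01034";
my $msg1 = $prefix . $code;                    # the . concatenation operator
my $msg2 = "$prefix$code";                     # interpolation inside a double-quoted string
my $msg3 = join("", $prefix, $code);           # join with an empty separator
my $msg4 = sprintf("%s%s", $prefix, $code);    # sprintf formatting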

Verify that Perl is installed on the DiskStation by accessing the Package Center in the DiskStation’s DSM web interface.  If there is an Install button under the Perl heading, that indicates that Perl is not yet installed – click the Install button and wait until that button changes to Installed.

InstallOracleClient-1

Let’s start by installing the Oracle Client on the DiskStation.  I will use the Oracle Instant Client version 11.2.0.3 (the 12.1.0.1 Instant Client throws a different error message than did the 11.2.0.3 and 11.1.0.7 clients, so I reverted back to the 11.2.0.3 version).  To download the Oracle Instant Client for Linux, visit this link.  You will need an OTN account to download the client files – I downloaded the files using a desktop computer, and then transferred the files to a custom created Config share that I set up on the DiskStation.  This article requires the Basic Instant Client package and the SQL*Plus add-on package for the Instant Client.

Connect to the DiskStation using Telnet as the root user (see the previous article for Telnet connection directions).  For consistency, the .zip files for the Oracle Instant Client will be copied to the downloads directory that was created in the previous article.  We will create an oracle directory in the downloads directory:

mkdir /volume1/downloads/oracle/

Next, the downloaded files for the Oracle Instant Client are copied from the config share that I created on the DiskStation (to allow transporting files from a desktop computer) to the /volume1/downloads/oracle directory that was just created, and then the Oracle Instant Client files are unzipped, which automatically creates the instantclient_11_2 directory:

cp /volume1/config/instantclient-basic-linux-11.2.0.3.0.zip /volume1/downloads/oracle/
cp /volume1/config/instantclient-sqlplus-linux-11.2.0.3.0.zip /volume1/downloads/oracle/
cd /volume1/downloads/oracle/
unzip -o instantclient-basic-linux-11.2.0.3.0.zip
unzip -o instantclient-sqlplus-linux-11.2.0.3.0.zip
cd instantclient_11_2

The Oracle Instant Client 11.2.0.3 (and 11.1.0.7) requires a file named libaio.so.1 that is not installed by default on the DiskStation.  After searching the Internet for a while, I found a copy of that file in the /lib directory on a Red Hat Enterprise Linux 3 server.  Using a desktop computer, I copied that file to the DiskStation’s config share, along with a copy of the tnsnames.ora file that includes the database connection information for the databases that must be monitored.  The libaio.so.1 file will be placed in the /volume1/downloads/oracle/instantclient_11_2 directory, and the tnsnames.ora file will be placed in the /etc directory (one of the directories that is automatically searched).

cp /volume1/config/libaio.so.1 .
cp /volume1/config/tnsnames.ora /etc
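
If you need a starting point for the tnsnames.ora file, an entry generally looks something like the following (DBName, the host name, the port, and the service name are placeholders to adjust for your environment):

DBName =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.MyDomainHere.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DBName)
    )
  )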

Let’s start up SQL*Plus:

./sqlplus /nolog
./sqlplus: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory

An error…

ls
BASIC_README    adrci   glogin.sql   libclntsh.so.11.1  libocci.so.11.1  libocijdbc11.so  libsqlplusic.so  ojdbc6.jar  sqltest.sql  xstreams.jar
SQLPLUS_README  genezi  libaio.so.1  libnnz11.so        libociei.so      libsqlplus.so    ojdbc5.jar       sqlplus     uidrvci

The file mentioned in the error message exists in the correct directory.  The error is caused by the LD_LIBRARY_PATH environment variable not being defined, so SQL*Plus cannot locate its shared libraries.  Defining that variable and trying again:

LD_LIBRARY_PATH="/volume1/downloads/oracle/instantclient_11_2"
export LD_LIBRARY_PATH
./sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 26 13:22:16 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL>

Try to connect to one of the databases that are defined in the tnsnames.ora file, and then try executing a simple SQL statement before exiting SQL*Plus (change testuser, password, and DBName as appropriate for your environment):

CONNECT testuser/password@DBName
Connected.
SELECT SYSDATE FROM DUAL;

SYSDATE
---------
26-JUL-13

EXIT

Now that we have verified that the Instant Client version of SQL*Plus works from the DiskStation, we will create a simple SQL script to verify that the Instant Client version of SQL*Plus is able to accept script names from a command line.  The script will be created in the same directory where the Oracle Instant Client is located (see the basic directions for using vi, as found in the previous article):

vi /volume1/downloads/oracle/instantclient_11_2/sqltest.sql

The first line in the script instructs SQL*Plus to abort the execution of the script when an error is encountered; if the connection attempt fails, there is no point in attempting to execute any SQL statements that follow.  The second line connects to the database (change testuser, password, and DBName as appropriate for your environment).  The third line executes a simple SQL statement, selecting from a table (rather than the virtual table DUAL) – change the SQL statement to a valid statement for your database.  The last statement exits SQL*Plus once the SQL statement finishes executing:

WHENEVER SQLERROR EXIT SQL.SQLCODE
CONNECT testuser/password@DBName
SELECT ID, DESCRIPTION FROM T1 WHERE ID LIKE '8X%';
EXIT

Try using SQL*Plus to execute the script:

./sqlplus /nolog @/volume1/downloads/oracle/instantclient_11_2/sqltest.sql

The results from the SQL statement should display in the Telnet window, and then the normal DiskStation prompt should appear.  If the database instance was not running at the time of the execution, you would see something like this rather than the results of the SQL statement:

ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Process ID: 0
Session ID: 0 Serial number: 0

Nagios apparently expects its plugins to return one of four exit codes, as explained in this article, to determine whether or not the checked object is behaving as expected:

Exit Code Status
        0 OK
        1 WARNING
        2 CRITICAL
        3 UNKNOWN

The exit function in Perl permits returning a number, so that is one problem solved.  I want the Perl plugin to accept a command line parameter from Nagios so that I am able to check more than one database using the same Perl script – the Nagios command line parameter will specify the script to execute.  This article explains how to receive a command line parameter from Nagios, so that is a second problem that is solved.

Let’s put together a simple (OK, not simple for me) script that executes a SQL*Plus script that is specified by Nagios (the sqltest.sql file that was just created), and then returns 0 if no ORA- type error message is returned when the Oracle Instant Client’s SQL*Plus executes a script.  If an ORA- type error message is returned, then the script’s exit code is set to 2 and the ORA- errors are returned to Nagios.  Nagios apparently only accepts one line of output from the script, reading what is written by Perl’s print command.  As we saw earlier, the script may return multiple nested ORA- type errors if the database instance is down, so we need these multiple error messages to appear on one line of output from the Perl script.

vi /opt/libexec/check_oracle.pl

This is the script that I crafted after spending, quite literally, hours (well, days) searching the Internet for Perl command syntax that would execute on the DiskStation:

#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long qw(:config no_ignore_case);
my $output = "";
my ($script);

my $result = GetOptions(
 "s|script=s"         => \$script,
  );

$ENV{"LD_LIBRARY_PATH"} = "/volume1/downloads/oracle/instantclient_11_2/";

my $ExitValue = 0;
print "Test Script: " . $script . ":  ";
open my $in, "/volume1/downloads/oracle/instantclient_11_2/sqlplus /nolog @/volume1/downloads/oracle/instantclient_11_2/$script |";

while (my $line = <$in>)
  {
  if ($line =~ /^(ORA-\d{5})/)
    {
    #$line =~ s/\v//g;
    chomp($line);
    $output = $output . $line . " ; ";
    $ExitValue = 2;
    }
  }

close($in);
print $output;
exit($ExitValue);

The script defines the LD_LIBRARY_PATH environment variable, so the Nagios user should (hopefully) have no problems executing the Perl script.  There are apparently a half-dozen different ways to remove the end of line characters from the SQL*Plus output; chomp($line); worked for my test, while $line =~ s/\v//g; did not work quite as expected (a couple of the alternatives are sketched below).
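
A couple of the alternatives that I ran across are sketched below (only chomp was verified on the DiskStation; the regular expressions are shown for reference):

chomp($line);              # remove the trailing input record separator (what the script above uses)
$line =~ s/\r?\n$//;       # strip a trailing carriage return/line feed pair
$line =~ s/[\r\n]+//g;     # remove all carriage return and line feed characters in the string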

Save the script and exit vi.  Next, we need to make the Perl script executable:

chmod +x /opt/libexec/check_oracle.pl

In the earlier article we had to modify a couple of the Nagios configuration files – this time we will need to modify some of the same files, starting with the commands.cfg file.

vi /opt/etc/objects/commands.cfg

Locate the ‘check_ping’ command definition that we modified in the earlier article.  Below the } character for that command definition, add the following, which will tell Nagios that our Perl script exists, and that the script command should be recognized by Nagios as check_oracle_ch:

# check_oracle_ch command definition by Charles Hooper
define command{
        command_name    check_oracle_ch
        command_line    $USER1$/check_oracle.pl -s $ARG1$
        }

Save the commands.cfg file and exit vi.  Let’s create another Nagios configuration file that will be used to list the Oracle Databases (and their scripts) that will be checked:

vi /opt/etc/objects/oracle.cfg

On a new line in the oracle.cfg file, add the following:

define service{
        use                     generic-service ; Inherit values from a template
        host_name               server123       ; This is a server name that is defined in the server.cfg file
        service_description     CHECK_DB_TEST   ; A unique name given to the server, database, and script combination
        check_command           check_oracle_ch!sqltest.sql    ; The command definition that was added to the commands.cfg file followed by ! and the name of the SQL script to execute
        normal_check_interval   5               ; Execute the script every 5 minutes under normal conditions
        retry_check_interval    1               ; Re-execute the script after receiving a return code of 2, every minute until its final/hard state is determined
        }

Additionally, we need to instruct Nagios that there is now an additional configuration file to read when starting.  Save the oracle.cfg file and exit vi.  Execute the following command:

vi /opt/etc/nagios.cfg

Below the cfg_file=/opt/etc/objects/switch.cfg line in the file (added in the previous article), add the following line:

cfg_file=/opt/etc/objects/oracle.cfg

Save the nagios.cfg file and exit vi.  Since we have modified the Nagios configuration, we must restart Nagios.  Find the first Nagios process ID, then kill that process (as was described in the previous article):

ps

kill 2683
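
If you would rather not scan the full ps output by eye, something like the following should help locate the process ID (the ps output format on the DiskStation may vary slightly, so treat this as a sketch); the number in the first column of the matching line is the process ID to pass to kill:

ps | grep '/opt/bin/nagios -d' | grep -v grep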

Verify that Nagios does not return an error message when reading the configuration files:

/opt/bin/nagios -v /opt/etc/nagios.cfg

If no errors are reported, then start Nagios:

/opt/bin/nagios -d /opt/etc/nagios.cfg

Nagios should be writing log entries to a file named nagios.log (as defined in the nagios.cfg file).  The tail command permits viewing the last few (100 in this case) lines of a specified file:

tail -n 100 /opt/var/nagios.log

If Perl is not installed, Nagios may send an email with the following in the Additional Info section:

(Return code of 127 is out of bounds - plugin may be missing)

If the database instance is down, Nagios should send an email, possibly with the following in the Additional Info section (note that the script name is listed first – a different script will be used to test each database):

Test Script: sqltest.sql:  ORA-01034: ORACLE not available : ORA-27101: shared memory realm does not exist :

When the database instance is running, and the SQL statement(s) in the script executes successfully for the first time, Nagios should send an email with just the test script listed in the Additional Info section:

Test Script: sqltest.sql:

Now that we have confirmed that Nagios and the Perl script work OK together, create additional SQL scripts in the /volume1/downloads/oracle/instantclient_11_2/ directory to connect to the other databases to be monitored (strongly consider changing the permissions for these files, since the files include usernames and passwords for your databases; see the example after the service definition below).  For each of those SQL scripts, create an additional service entry in the oracle.cfg file.  For example, if a script named sqltest42.sql is created to test a database on server42, the service entry might be created as shown below:

define service{
        use                     generic-service ; Inherit values from a template
        host_name               server42        ; This is a server name that is defined in the server.cfg file
        service_description     CHECK_DB_TEST42 ; A unique name given to the server, database, and script combination
        check_command           check_oracle_ch!sqltest42.sql  ; The command definition that was added to the commands.cfg file followed by ! and the name of the SQL script to execute
        normal_check_interval   5               ; Execute the script every 5 minutes under normal conditions
        retry_check_interval    1               ; Re-execute the script after receiving a return code of 2, every minute until its final/hard state is determined
        }
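
To restrict access to the SQL scripts that contain database credentials, something along these lines should work (a sketch; it assumes the checks run as the nagios user that was created in the previous article):

chown nagios:nagios /volume1/downloads/oracle/instantclient_11_2/sqltest*.sql
chmod 600 /volume1/downloads/oracle/instantclient_11_2/sqltest*.sql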

Don’t forget to verify the Nagios configuration before restarting Nagios.

Anyone up to the challenge of rewriting my Working with Oracle’s Time Model Data 3 script in Perl so that the script will work on the DiskStation?





Install Nagios on a Synology DiskStation DS1813+ or DS412+

July 19, 2013 (Modified July 27, 2013, July 28, 2013, November 19, 2013, August 12, 2015)

(Forward to the Next Post in the Series)

—-

Update July 27, 2013:

  • I now have Nagios running on an ARM based Synology DiskStation DS212+.  Most of the steps are the same as outlined below; however, there are a few additional errors that must be addressed (see the additional DS212+ steps below).
  • All of the ./configure commands should have included --prefix=/opt (rather than --prefix=/usr/local or completely omitting that parameter).  That change eliminates the need to copy the Nagios plugins to the correct location.  Possibly related, the -i parameter was unnecessary for the snmp and Nagios plugins make and make install commands when the ./configure command included the --prefix=/opt option.
  • The wget http://sourceforge.net/projects/dsgpl/files/DSM%204.1%20Tool%20Chains/Intel%20x86%20Linux%203.2.11… download step for the gcc compiler is apparently unnecessary, at least on the Synology DiskStation DS212+ (see below).

—-

This article describes how to compile and run Nagios on a Synology DiskStation DS1813+ (64 bit) or Synology DiskStation DS412+ (32 bit, the 32 bit steps should also apply to the DS1812+) NAS, both of which utilize Intel Atom processors (cat /proc/cpuinfo indicates that the DS412+ is using a 2.13GHz Atom D2700, while the DS1813+ is using a 2.13GHz Atom D2701), and utilize the DSM 4.2 operating system.  Not all Synology DiskStation NAS devices use Intel based CPUs – some of the less expensive DiskStations use ARM type processors (see this link to determine the type of CPU installed in a specific DiskStation).  It may be possible to produce a working version of Nagios on NAS devices that do not have Intel 32 bit or 64 bit processors, but I have not yet fully tested the procedure.

Warning: A lot of what follows is based on experimentation, with the end goal of having Nagios running on a Synology DiskStation, able to ping devices on the network or the Internet, send an email to an administrator when a device stops responding to ping requests, and send a second email when the device resumes responding.  This functionality represents only a small fraction of Nagios’ capabilities through the use of plugins.  File paths vary from one Linux distribution to the next, which adds a bit of a challenge to making certain that the files are placed in the required directories.  Copying a file to the wrong directory may temporarily disable the DiskStation and require the reinstallation of the Synology DSM operating system.  The directions below are not final, and quite likely do not represent the most efficient approach to accomplish the end goal – but the directions will hopefully be “close enough to correct” to allow the average reader of this blog to ping and send email alerts from a DiskStation.

I have relied on the free Nagios network monitoring solution since 2002 to provide an early warning of problems associated with network attached equipment including servers, production floor computers, switches, printers, wireless access points, IP cameras, Internet connection stability, etc.  While I rely on Nagios’ alerting system, I am not an expert at configuring the Nagios network monitoring system; the Nagios configuration documentation may be downloaded here.

First, make certain that the Telnet Service (or SSH Service if that is preferred) is enabled on the DiskStation.  In the DiskStation’s Control Panel, click Terminal.

InstallNagiosDiskStation1

Place a checkmark next to Enable Telnet service (if the item is not already checked), and then click the Apply button.

InstallNagiosDiskStation2

Verify that the computer that you intend to use has a Telnet client.  For Windows 7, access the Programs link in the Control Panel, and then click the Turn Windows features on or off link.  Make certain that there is a checkmark next to Telnet Client, then click the OK button.

InstallNagiosDiskStation3

Open a command line (in Windows, Start – Run – type  cmd  and press the Enter key).  On the command line, type telnet followed by either the name of the DiskStation or the IP address of the DiskStation, then press the Enter key.  When prompted for a username, type root and press the Enter key.  Type the admin user’s password (that is used to access the DSM interface in a web browser) and press the Enter key.

InstallNagiosDiskStation4

The command line on the DiskStation is very similar to the command line on a Unix or Linux computer, and is somewhat similar to a Windows command line or MS-DOS command line (use / rather than \, use ls rather than dir, use vi rather than edit):

InstallNagiosDiskStation5

We first need to add ipkg support to the DiskStation; detailed directions may be viewed at this link.  The exact directions may be different for other DiskStation models, but the following directions work for both the DS1813+ and DS412+ (note that all files downloaded from the Internet will be placed on volume1 in the downloads directory – copy and paste the lines to the Telnet session, one line at a time):

cd /volume1
mkdir downloads
cd downloads
wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh
chmod +x syno-i686-bootstrap_1.2-7_i686.xsh
sh syno-i686-bootstrap_1.2-7_i686.xsh

The vi editor is used on the DiskStation to modify files; that vi editor is a bit challenging to use at first sight, so you may need help with a couple of basic commands (see this quick reference for other commands).  The commands in vi are case sensitive (i is not the same as I).  When a file is opened, press the i key on the keyboard to allow making changes to the file (such as typing commands, or deleting commands).  When finished making changes to the file press the Esc key.  Once the Esc key is pressed, type ZZ to save the changed file and quit, or :q! to quit without saving the changes.

Next, we must modify the file that establishes the environment for the root user, when that user connects to the DiskStation.  This change is needed as part of the ipkg installation.  Edit the .profile file used by the root user:

vi /root/.profile

Add a # character in front of the two lines that contain the word PATH, then save the file (see the brief directions above to switch between command and insert mode in vi):

InstallNagiosDiskStation6
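
The edited lines should end up looking something like the following (the exact PATH value on your DiskStation will likely differ; the important point is simply that both lines mentioning PATH are commented out):

#PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin
#export PATH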

Next, reboot the DiskStation by clicking the Restart button in the Synology DSM interface (note: it should be possible to type reboot in the Telnet interface, however the DiskStation locked up the one time I attempted to execute that command).

InstallNagiosDiskStation7

Once the DiskStation reboots, reconnect to the DiskStation using Telnet, connecting as the root user, just as was done earlier.

The ipkg command should now work on the command line.  First, request that an updated list of available packages be downloaded, and then display that list of packages:

ipkg update
ipkg list

Next, download a couple of packages that will be used by the Nagios network monitoring tool.  Note that using ipkg to install packages is a lot easier than compiling source code, so have fun with the ipkg utility.  When installing the optware-devel package, an error may appear stating that there is an incompatibility between wget and wget-ssl – just ignore that error for now.

ipkg install wget-ssl
ipkg install optware-devel
ipkg install gcc
ipkg install libtool
ipkg install mysql

Next, we need to compile a file and copy a couple of files:

cd /opt/share/libtool/libltdl/
./configure --prefix=/opt
make all
make install

cp /usr/syno/apache/modules/mod_ext_filter.so /opt/libexec/mod_ext_filter.so
cp /usr/syno/apache/modules/*.* /opt/libexec/

Now, install the Apache package:

ipkg install apache

If an error message is displayed on screen about mod_ext_filter.so, then modify the /opt/etc/apache2/httpd.conf file and add a # in front of the line LoadModule ext_filter_module libexec/mod_ext_filter.so and save the file.  Re-execute the ipkg install apache command (note that the up arrow on the keyboard may be pressed to quickly retype one of the previously executed commands).
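
After the edit, the relevant line in /opt/etc/apache2/httpd.conf should look like this:

#LoadModule ext_filter_module libexec/mod_ext_filter.so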

InstallNagiosDiskStation8

Using the DiskStation’s Control Panel, create a nagios group and a nagcmd group (the nagcmd group probably will not be used for anything specific).  These groups do not require any special DiskStation permissions.

InstallNagiosDiskStation9

Using the DiskStation’s Control Panel, create a nagios user and add that user to the nagios and nagcmd groups.  The nagios user does not require any specific DiskStation permissions.

Next, switch back to the Telnet session, download the Nagios source code, and compile the source code:

DiskStation DS212+ Notes:

The following ./configure call was used on the DS212+:

./configure --prefix=/opt --with-command-group=nagios --disable-nanosleep --enable-nanosleep=no

The ./configure aborted with the following error message:

checking for pthread_create in -lpthread... no
checking for pthread_mutex_init in -lpthread... no
checking for pthread_create in -lpthreads... no
checking for pthread_create in -llthread... no
checking if we need -pthread for threads... no
checking for library containing nanosleep... no
Error: nanosleep() needed for timing operations.

The test that threw the error is located roughly 63% of the way through the configure file (on roughly line 5635).  If the exit 1 line in the configure file is commented out, then the configure step will complete.  However, the make all command will then fail with the following error messages:

/volume1/downloads/nagios/base/nebmods.c:363: undefined reference to `dlclose'
nebmods.o: In function `neb_load_module':
/volume1/downloads/nagios/base/nebmods.c:218: undefined reference to `dlopen'
/volume1/downloads/nagios/base/nebmods.c:249: undefined reference to `dlsym'
/volume1/downloads/nagios/base/nebmods.c:266: undefined reference to `dlsym'
/volume1/downloads/nagios/base/nebmods.c:299: undefined reference to `dlsym'
/volume1/downloads/nagios/base/nebmods.c:225: undefined reference to `dlerror'
/opt/lib/gcc/arm-none-linux-gnueabi/4.2.3/../../../../arm-none-linux-gnueabi/lib/libpthread.so: undefined reference to `__default_sa_restorer_v2@GLIBC_PRIVATE'
/opt/lib/gcc/arm-none-linux-gnueabi/4.2.3/../../../../arm-none-linux-gnueabi/lib/libpthread.so: undefined reference to `__default_rt_sa_restorer_v2@GLIBC_PRIVAT
E'
/opt/lib/gcc/arm-none-linux-gnueabi/4.2.3/../../../../arm-none-linux-gnueabi/lib/libpthread.so: undefined reference to `__default_rt_sa_restorer_v1@GLIBC_PRIVAT
E'
/opt/lib/gcc/arm-none-linux-gnueabi/4.2.3/../../../../arm-none-linux-gnueabi/lib/libpthread.so: undefined reference to `__default_sa_restorer_v1@GLIBC_PRIVATE'
collect2: ld returned 1 exit status
make[1]: *** [nagios] Error 1
make[1]: Leaving directory `/volume1/downloads/nagios/base'
make: *** [all] Error 2

After a bit of searching on the Internet, I found a page that suggested making the following changes (note that I unsuccessfully tried a couple of other steps that may have also partially corrected the issue):

mkdir /opt/arm-none-linux-gnueabi/lib_disabled
mv /opt/arm-none-linux-gnueabi/lib/libpthread* /opt/arm-none-linux-gnueabi/lib_disabled

cp /lib/libpthread.so.0 /opt/arm-none-linux-gnueabi/lib/
cd /opt/arm-none-linux-gnueabi/lib/
ln -s libpthread.so.0 libpthread.so
ln -s libpthread.so.0 libpthread-2.5.so

After making the above changes, I was able to run the configure and make all commands without receiving an error.

cd /volume1/downloads
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.5.0.tar.gz
tar xzf nagios-3.5.0.tar.gz
cd nagios
./configure --prefix=/opt --with-command-group=nagios
make all
make install
make install-init
make install-config
make install-commandmode

We apparently need to copy a couple of files to different locations at this point:

cp /opt/lib/libltdl.so.3 /opt/local/lib/libltdl.so.3
cp /opt/lib/libltdl.so.3 /usr/lib/libltdl.so.3
cp /opt/lib/libltdl.so /usr/lib/

Undo the changes that were earlier made to the /root/.profile file, where # characters were added in front of any line that contained the word PATH.  Remove those # characters and save the file:

vi /root/.profile

(This part still needs some fine tuning to make the web interface work with Nagios.)  Edit the Nagios Makefile and change the line beginning with HTTPD_CONF to read HTTPD_CONF=/opt/etc/apache2/conf.d, and then save the file.

cd /volume1/downloads/nagios
vi Makefile

InstallNagiosDiskStation10

Execute the following command:

make install-webconf

Create a nagiosadmin user for the web administration, specify a password when prompted:

htpasswd -c /usr/local/etc/htpasswd.users nagiosadmin

Update November 19, 2013:

GabrielM reported in a comment below that it may be necessary to specify the full path to the htpasswd program:

/usr/syno/apache/bin/htpasswd -c /usr/local/etc/htpasswd.users nagiosadmin

Install a couple of additional ipkg packages that will be used by Nagios (the last package adds a ping utility that may be used by Nagios – the security permissions on the DiskStation prevent non-root users from using the built-in ping utility):

ipkg install openssl
ipkg install openssl-dev
ipkg install sendmail
ipkg install inetutils

A step that may or may not be required is to download a functioning C++ compiler (some of the commands below point to files provided with that compiler).  It appears that there should already be a compiler on the DiskStation at this point (in /opt/bin), so successfully completing this download might not be necessary.

DiskStation DS212+ Notes:

These wget and tar steps were completely skipped on the DS212+

For the DiskStation DS1813+ 64 bit:

cd /volume1/downloads
wget http://sourceforge.net/projects/dsgpl/files/DSM%204.1%20Tool%20Chains/Intel%20x86%20Linux%203.2.11%20%28Cedarview%29/gcc420_glibc236_x64_cedarview-GPL.tgz
tar zxpf gcc420_glibc236_x64_cedarview-GPL.tgz -C /usr/local/

For the DiskStation DS412+ 32 bit:

cd /volume1/downloads
wget http://sourceforge.net/projects/dsgpl/files/DSM%204.2%20Tool%20Chains/Intel%20x86%20Linux%203.2.11%20%28Bromolow%29/gcc421_glibc236_x86_bromolow-GPL.tgz
tar zxpf gcc421_glibc236_x86_bromolow-GPL.tgz -C /usr/local/

Now the net-snmp source code is downloaded and extracted:

DiskStation DS212+ Notes:

The ./configure call used on the DS212+ (it might also work on the other DiskStation models):

./configure --prefix=/opt

The make call threw several errors, including:

/bin/sh: arm-none-linux-gnueabi-ld: not found
make[2]: *** [../blib/arch/auto/NetSNMP/default_store/default_store.so] Error 127

Before running the make command on the DS212+, execute the following command:

ln -s /opt/bin/ld /opt/bin/arm-none-linux-gnueabi-ld

The -i parameter may be omitted when running the make and make install commands.

cd /volume1/downloads
wget http://sourceforge.net/projects/net-snmp/files/net-snmp/5.7.2/net-snmp-5.7.2.tar.gz
tar xzf net-snmp-5.7.2.tar.gz
cd net-snmp-5.7.2

For the DiskStation DS1813+ 64 bit, execute the following to compile the net-snmp source (note that this command uses the compiler that was downloaded):

env CC=/usr/local/x86_64-linux-gnu/bin/x86_64-linux-gnu-gcc \
LD=/usr/local/x86_64-linux-gnu/bin/x86_64-linux-gnu-ld \
RANLIB=/usr/local/x86_64-linux-gnu/bin/x86_64-linux-gnu-ranlib \
CFLAGS="-I/usr/local/x86_64-linux-gnu/include" \
LDFLAGS="-L/usr/local/x86_64-linux-gnu/lib" \
./configure --host=x86_64-linux-gnu --target=x86_64-linux-gnu --build=x86_64-pc-linux --prefix=/usr/local

For the DiskStation DS412+ 32 bit, execute the following to compile the net-snmp source (note: I could not use any of the different compilers that I tried downloading due to the compilers crashing with one of two error messages, so this command uses the compiler in /opt/bin):

env CC=/opt/bin/i686-linux-gnu-gcc \
LD=/usr/local/i686-linux-gnu/bin/i686-linux-gnu-ld \
RANLIB=/usr/local/i686-linux-gnu/bin/i686-linux-gnu-ranlib \
CFLAGS="-I/usr/local/i686-linux-gnu/include" \
LDFLAGS="-L/usr/local/i686-linux-gnu/lib" \
./configure --host=i686-linux-gnu --target=i686-linux-gnu --build=i686-linux-gnu --prefix=/usr/local

Several prompts will appear on the screen when either of the two commands is executed.  I entered the following for the prompts:

Default version of SNMP to use (3): 3
System Contact Information: (Enter)
System Location (Unknown): (Enter)
Location to write logfile (/var/log/snmpd.log): /opt/var/snmpd.log
Location to write persistent information (/var/net-snmp): (Enter)

Two additional commands to execute:

make -i
make install -i

Now we need to download the source code for the Nagios plugins (check_apt, check_breeze, check_by_ssh, check_clamd, check_cluster, check_dhcp, check_disk, check_disk_smb, check_dns, check_dummy, check_file_age, check_flexlm, check_ftp, check_http, check_icmp, check_ide_smart, check_ifoperstatup, check_ifstatus, check_imap, check_ircd, check_jabber, check_ldap, check_ldaps, check_load, check_log, check_mailq, check_mrtg, check_mrtgtraf, check_mysql, check_mysql_query, check_nagios, check_nntp, check_nntps, check_nt, check_ntp, check_ntp_peer, check_ntp_time, check_nwstat, check_oracle, check_overcr, check_ping, check_pop, check_procs, check_real, check_rpc, check_sensors, check_simap, check_smtp, check_snmp, check_spop, check_ssh, check_ssmtp, check_swap, check_tcp, check_time, check_udp, check_ups, check_users, check_wave) that allow Nagios to perform various monitoring tasks:

cd /volume1/downloads
wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.16.tar.gz
tar xzf nagios-plugins-1.4.16.tar.gz
cd nagios-plugins-1.4.16/

Update November 19, 2013:

GabrielM reported in a comment below that the occasionally changing “current version” of the Nagios plugins makes it difficult to download the plugins from the source shown above.  If you open the http://prdownloads.sourceforge.net/sourceforge/nagiosplug/ web page in a web browser, the web browser will be redirected to http://sourceforge.net/projects/nagiosplug/files/ which contains the following statement:

“The Nagios Plugins are no longer distributed via SourceForge. For downloads and other information, please visit: https://www.nagios-plugins.org/
Source: README.md, updated 2013-10-01″

If you follow that link and then click the Download heading at the top of the page, there should be a link on the page that allows access to the current version of the Nagios plugins.  That link is currently: https://www.nagios-plugins.org/download/nagios-plugins-1.5.tar.gz

The command that GabrielM provided should work:

wget https://www.nagios-plugins.org/download/nagios-plugins-1.5.tar.gz

DiskStation DS212+ Notes:

The following configure, make, and make install commands were used:

./configure --prefix=/opt --with-openssl=/usr/syno/bin/openssl --with-nagios-user=nagios --with-nagios-group=nagios --with-ping-command="/opt/bin/ping -c %d %s" --psdir=/bin --with-ps-varlist="&procpid,&procppid,&procvsz,&procrss,procprog,&pos" --with-ps-cols=6 --with-ps-format="%d %d %d %d %s %n" --with-ps-command="/bin/ps -w"
make
make install

For the DiskStation DS1813+ 64 bit:

./configure --with-openssl=/usr/syno/bin/openssl --with-nagios-user=nagios --with-nagios-group=nagios --with-ping-command="/opt/bin/ping -c %d %s" --psdir=/bin --with-ps-varlist="&procpid,&procppid,&procvsz,&procrss,procprog,&pos" --with-ps-cols=6 --with-ps-format="%d %d %d %d %s %n" --with-ps-command="/bin/ps -w" --host=x86_64-linux-gnu --target=x86_64-linux-gnu --build=x86_64-pc-linux
make -i 
make install -i

For the DiskStation DS412+ 32 bit:

./configure --with-openssl=/usr/syno/bin/openssl --with-nagios-user=nagios --with-nagios-group=nagios --with-ping-command="/opt/bin/ping -c %d %s" --psdir=/bin --with-ps-varlist="&procpid,&procppid,&procvsz,&procrss,procprog,&pos" --with-ps-cols=6 --with-ps-format="%d %d %d %d %s %n" --with-ps-command="/bin/ps -w" --host=i686-linux-gnu --target=i686-linux-gnu --build=i686-linux-gnu --prefix=/usr/local
make -i 
make install -i

Copy the Nagios plugins to the location expected by Nagios:

DiskStation DS212+ Notes:

The plugins were installed in the correct location on the DS212+

cp /usr/local/nagios/libexec/*.* /opt/libexec
cp /usr/local/nagios/libexec/* /opt/libexec
cp /usr/local/libexec/check_* /opt/libexec

Update November 19, 2013:

GabrielM reported in a comment below that the third command above may fail.  Depending on the compile options used, either the first two commands or the third command will fail, because the plugins end up in only one of the two source locations.  The first two commands are intended to accomplish the same task as the third command, so it is expected that one or the other will fail; all three commands should not fail.  I should have explained this potential area of concern better.

Copy the Nagios startup script to the correct location so that Nagios will automatically start when the DiskStation is rebooted:

cp /usr/local/etc/rc.d/nagios /opt/etc/init.d/S81nagios

Verify that the ownership of the nagios directory is set correctly:

DiskStation DS212+ Notes:

The file is actually in the /opt/bin directory, so use this command instead:

chown nagios:nagios /opt/bin/nagios/nagios -R
chown nagios:nagios /usr/local/nagios -R

In addition to the main /opt/etc/nagios.cfg Nagios file, there are several other configuration files that are potentially used by Nagios (defined in the nagios.cfg file):

/opt/etc/objects/commands.cfg
/opt/etc/objects/contacts.cfg
/opt/etc/objects/timeperiods.cfg
/opt/etc/objects/templates.cfg
/opt/etc/objects/localhost.cfg
/opt/etc/objects/windows.cfg
/opt/etc/objects/server.cfg
/opt/etc/objects/switch.cfg
/opt/etc/objects/printer.cfg

We need to make a couple of adjustments in the  /opt/etc/objects/commands.cfg file.

vi /opt/etc/objects/commands.cfg

Change the ‘notify-host-by-email’ command definition section as follows:

define command{
    command_name notify-host-by-email
    command_line /usr/bin/printf "%b" "Subject: $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$\n\n***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /opt/sbin/sendmail -vt $CONTACTEMAIL$
    }

Change the ‘notify-service-by-email’ command definition section as follows:

define command{
    command_name notify-service-by-email
    command_line /usr/bin/printf "%b" "Subject: $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$\n\n***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /opt/sbin/sendmail -vt $CONTACTEMAIL$
    }

Change the ‘check_ping’ command definition section as follows (feel free to read the documentation for check_ping and specify different values):

define command{
        command_name    check_ping
        command_line    $USER1$/check_ping -H $HOSTADDRESS$ -w 3000,25% -c 5000,90% -p 3 
        }

Save the file and exit vi.

At this point, the Nagios network monitoring utility will likely experience an error similar to the following when attempting to send an alert email:

output=collect: Cannot write ./dfr6BFFPC7027203 (bfcommit, uid=1026, gid=25): Permission denied

Execute the following commands, which should fix the above problem:

chmod g+w /opt/var/spool/clientmqueue
chmod 444 /opt/etc/mail/*.cf
chmod 7555 /opt/sbin/sendmail

We will need to use su to test the execution of various commands as the nagios user.  Without this fix (described here), you might see the following error message:

su: warning: cannot change directory to /var/services/homes/nagios: No such file or directory su: /sbin/nologin: No such file or directory

Enter the following commands:

mkdir /var/services/homes
mkdir /var/services/homes/nagios
chown nagios:nagios /var/services/homes/nagios -R
vi /etc/passwd

Locate the line in the passwd file for the Nagios user.  Near the end of the line, /sbin/nologin should appear.  Replace that text with /bin/ash then save and exit vi.
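
The edited entry should end up looking something like the second line below (the UID, GID, and home directory values shown are only placeholders; leave whatever values already appear in your passwd file):

nagios:x:1026:65536::/var/services/homes/nagios:/sbin/nologin
nagios:x:1026:65536::/var/services/homes/nagios:/bin/ash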

Verify that the Nagios user is able to execute the check_ping plugin.  Replace MyDeviceHere with either an IP address or a network device name that is on your network:

su - nagios -c "/opt/libexec/check_ping -H MyDeviceHere -w 5000,80% -c 5000,80% -p 5"

If the ping command (called by check_ping) is not able to resolve a network device name, and the fully qualified DNS name was not specified (MyDeviceHere.MyDomainHere.com), edit the /etc/resolv.conf file:

vi /etc/resolv.conf

On a new line in the file, add the following line (replacing MyDomainHere.com with your dns domain name for the network):

search MyDomainHere.com

Verify that sendmail works for the Nagios user.  At the prompt that appears, type a short message, press the Enter key, type a period, and then press the Enter key again (replace MyEmailAddressHere@MyDomainHere.com with your email address):

su - nagios -c "/opt/sbin/sendmail -vt MyEmailAddressHere@MyDomainHere.com"

—-

It is important to always verify the Nagios configuration before starting (or restarting after a configuration change) Nagios.  To verify the configuration type the following:

/opt/bin/nagios -v /opt/etc/nagios.cfg

To start up Nagios as a background task (daemon), execute the following:

/opt/bin/nagios -d /opt/etc/nagios.cfg

To stop Nagios when it is executing as a background task, first list the running processes by typing the following (or, better yet, ps | grep ‘nagios’):

ps

InstallNagiosDiskStation11

Then search though the list of processes for the first line that shows /opt/bin/nagios -d /opt/etc/nagios.cfg.  The number at the left of that line, 31152 in this case, is used to stop Nagios.  To stop Nagios, type the following (replace 31152 with the number shown on your screen):

kill 31152

Side note: I tried installing quite a few different C++ compilers that supposedly work with the Synology DSM (see here).  As such, I had to find a way to remove a directory, that directory’s subdirectories, and files.  The following command will completely remove the /usr/local/i686-linux-gnu directory, should the need arise:

rm -rf /usr/local/i686-linux-gnu

At this point, Nagios will hopefully run as a background task, and it should be able to ping and send email alerts.  However, if you have been following the above directions, Nagios has not yet been told which devices to monitor or to whom the alert emails should be sent.  The next step is to define the email contacts by modifying the /opt/etc/objects/contacts.cfg file (see the documentation for assistance):

vi /opt/etc/objects/contacts.cfg
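
A minimal contact definition typically looks something like the following (the default contacts.cfg shipped with Nagios already contains a similar nagiosadmin entry that can simply be edited; the email address shown is a placeholder):

define contact{
        contact_name    nagiosadmin             ; Short name of the contact
        use             generic-contact         ; Inherit default values from the template in templates.cfg
        alias           Nagios Admin            ; Full name of the contact
        email           MyEmailAddressHere@MyDomainHere.com ; Address that receives the alert emails
        }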

After setting up the contacts, we should probably tell Nagios which devices to monitor.  If there are a lot of devices on your network to be monitored, you might find that using Microsoft Excel rather than vi to create the object definitions makes the task more manageable.  Set up a simple worksheet with four columns.  Column A will be used to specify the short host_name for the object to be monitored.  Column B will be used to specify the alias (long description for the object).  Column C will be used to either specify the IP address for the device or the network name for the device.  Column D will be used to identify the group to which the object belongs and the file name to which the definition is saved (the Excel macro supports the following groups: ap, camera, computer, external, other, printer, server, switch).

InstallNagiosDiskStation13

The Excel macro is set up to read a tab delimited file, rather than reading the object descriptions directly from the Excel worksheet.  Highlight all of the rows in the worksheet except for the top header row, and press Ctrl+C (or Edit – Copy) to copy the definitions to the Windows clipboard in tab delimited format.  Start Notepad (Start – Run – Notepad), and then press Ctrl+V (or Edit – Paste) to paste the tab delimited object descriptions into Notepad.  The Excel macro code expects the text file to be saved as nagioshosts.txt.

The Excel macro code follows (I imagine that not many computers still have a second floppy drive installed, so change the B:\Hardware Documentation\Synology\ path as appropriate for your environment):

Private Sub cmdProcessText_Click()
    Dim intFileNumRead As Integer
    Dim intFileNumAP As Integer
    Dim intFileNumCamera As Integer
    Dim intFileNumComputer As Integer
    Dim intFileNumExternal As Integer
    Dim intFileNumOther As Integer
    Dim intFileNumPrinter As Integer
    Dim intFileNumServer As Integer
    Dim intFileNumSwitch As Integer
    Dim intFileNumWrite As Integer

    Dim strLine As String
    Dim strItem() As String

    intFileNumRead = FreeFile
    Open "B:\Hardware Documentation\Synology\nagioshosts.txt" For Input As #intFileNumRead

    intFileNumAP = FreeFile
    Open "B:\Hardware Documentation\Synology\ap.cfg" For Output As intFileNumAP
    Print #intFileNumAP, "###############################################################################"; Chr(10);
    Print #intFileNumAP, "# ap.cfg - lists the wireless access points to be monitored"; Chr(10);
    Print #intFileNumAP, "#"; Chr(10);
    Print #intFileNumAP, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumAP, "###############################################################################"; Chr(10);
    Print #intFileNumAP, "#"; Chr(10); Chr(10);
    Print #intFileNumAP, "###############################################################################"; Chr(10);
    Print #intFileNumAP, "#"; Chr(10);
    Print #intFileNumAP, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumAP, "#"; Chr(10);
    Print #intFileNumAP, "###############################################################################"; Chr(10);
    Print #intFileNumAP, "#"; Chr(10); Chr(10);
    Print #intFileNumAP, "define hostgroup{"; Chr(10);
    Print #intFileNumAP, "        hostgroup_name  ap                      ; The name of the hostgroup"; Chr(10);
    Print #intFileNumAP, "        alias           Local Access Points       ; Long name of the group"; Chr(10);
    Print #intFileNumAP, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumCamera = FreeFile
    Open "B:\Hardware Documentation\Synology\camera.cfg" For Output As intFileNumCamera
    Print #intFileNumCamera, "###############################################################################"; Chr(10);
    Print #intFileNumCamera, "# camera.cfg - lists the IP cameras to be monitored"; Chr(10);
    Print #intFileNumCamera, "#"; Chr(10);
    Print #intFileNumCamera, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumCamera, "###############################################################################"; Chr(10);
    Print #intFileNumCamera, "#"; Chr(10); Chr(10);
    Print #intFileNumCamera, "###############################################################################"; Chr(10);
    Print #intFileNumCamera, "#"; Chr(10);
    Print #intFileNumCamera, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumCamera, "#"; Chr(10);
    Print #intFileNumCamera, "###############################################################################"; Chr(10);
    Print #intFileNumCamera, "#"; Chr(10); Chr(10);
    Print #intFileNumCamera, "define hostgroup{"; Chr(10);
    Print #intFileNumCamera, "        hostgroup_name  camera                  ; The name of the hostgroup"; Chr(10);
    Print #intFileNumCamera, "        alias           Local IP Cameras          ; Long name of the group"; Chr(10);
    Print #intFileNumCamera, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumComputer = FreeFile
    Open "B:\Hardware Documentation\Synology\computer.cfg" For Output As intFileNumComputer
    Print #intFileNumComputer, "###############################################################################"; Chr(10);
    Print #intFileNumComputer, "# computer.cfg - lists the shop floor computers to be monitored"; Chr(10);
    Print #intFileNumComputer, "#"; Chr(10);
    Print #intFileNumComputer, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumComputer, "###############################################################################"; Chr(10);
    Print #intFileNumComputer, "#"; Chr(10); Chr(10);
    Print #intFileNumComputer, "###############################################################################"; Chr(10);
    Print #intFileNumComputer, "#"; Chr(10);
    Print #intFileNumComputer, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumComputer, "#"; Chr(10);
    Print #intFileNumComputer, "###############################################################################"; Chr(10);
    Print #intFileNumComputer, "#"; Chr(10); Chr(10);
    Print #intFileNumComputer, "define hostgroup{"; Chr(10);
    Print #intFileNumComputer, "        hostgroup_name  computer               ; The name of the hostgroup"; Chr(10);
    Print #intFileNumComputer, "        alias           Domain Computers          ; Long name of the group"; Chr(10);
    Print #intFileNumComputer, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumExternal = FreeFile
    Open "B:\Hardware Documentation\Synology\external.cfg" For Output As intFileNumExternal
    Print #intFileNumExternal, "###############################################################################"; Chr(10);
    Print #intFileNumExternal, "# external.cfg - lists the devices external to the LAN network to be monitored"; Chr(10);
    Print #intFileNumExternal, "#"; Chr(10);
    Print #intFileNumExternal, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumExternal, "###############################################################################"; Chr(10);
    Print #intFileNumExternal, "#"; Chr(10); Chr(10);
    Print #intFileNumExternal, "###############################################################################"; Chr(10);
    Print #intFileNumExternal, "#"; Chr(10);
    Print #intFileNumExternal, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumExternal, "#"; Chr(10);
    Print #intFileNumExternal, "###############################################################################"; Chr(10);
    Print #intFileNumExternal, "#"; Chr(10); Chr(10);
    Print #intFileNumExternal, "define hostgroup{"; Chr(10);
    Print #intFileNumExternal, "        hostgroup_name  external               ; The name of the hostgroup"; Chr(10);
    Print #intFileNumExternal, "        alias           Monitored devices External to the Network ; Long name of the group"; Chr(10);
    Print #intFileNumExternal, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumOther = FreeFile
    Open "B:\Hardware Documentation\Synology\other.cfg" For Output As intFileNumOther
    Print #intFileNumOther, "###############################################################################"; Chr(10);
    Print #intFileNumOther, "# other.cfg - lists the miscellaneous devices to be monitored"; Chr(10);
    Print #intFileNumOther, "#"; Chr(10);
    Print #intFileNumOther, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumOther, "###############################################################################"; Chr(10);
    Print #intFileNumOther, "#"; Chr(10); Chr(10);
    Print #intFileNumOther, "###############################################################################"; Chr(10);
    Print #intFileNumOther, "#"; Chr(10);
    Print #intFileNumOther, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumOther, "#"; Chr(10);
    Print #intFileNumOther, "###############################################################################"; Chr(10);
    Print #intFileNumOther, "#"; Chr(10); Chr(10);
    Print #intFileNumOther, "define hostgroup{"; Chr(10);
    Print #intFileNumOther, "        hostgroup_name  other                 ; The name of the hostgroup"; Chr(10);
    Print #intFileNumOther, "        alias           Miscellaneous Devices ; Long name of the group"; Chr(10);
    Print #intFileNumOther, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumPrinter = FreeFile
    Open "B:\Hardware Documentation\Synology\printer.cfg" For Output As intFileNumPrinter
    Print #intFileNumPrinter, "###############################################################################"; Chr(10);
    Print #intFileNumPrinter, "# printer.cfg - lists the printer devices to be monitored"; Chr(10);
    Print #intFileNumPrinter, "#"; Chr(10);
    Print #intFileNumPrinter, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumPrinter, "###############################################################################"; Chr(10);
    Print #intFileNumPrinter, "#"; Chr(10); Chr(10);
    Print #intFileNumPrinter, "###############################################################################"; Chr(10);
    Print #intFileNumPrinter, "#"; Chr(10);
    Print #intFileNumPrinter, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumPrinter, "#"; Chr(10);
    Print #intFileNumPrinter, "###############################################################################"; Chr(10);
    Print #intFileNumPrinter, "#"; Chr(10); Chr(10);
    Print #intFileNumPrinter, "define hostgroup{"; Chr(10);
    Print #intFileNumPrinter, "        hostgroup_name  printer               ; The name of the hostgroup"; Chr(10);
    Print #intFileNumPrinter, "        alias           Printers and Copiers  ; Long name of the group"; Chr(10);
    Print #intFileNumPrinter, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumServer = FreeFile
    Open "B:\Hardware Documentation\Synology\server.cfg" For Output As intFileNumServer
    Print #intFileNumServer, "###############################################################################"; Chr(10);
    Print #intFileNumServer, "# server.cfg - lists the servers to be monitored"; Chr(10);
    Print #intFileNumServer, "#"; Chr(10);
    Print #intFileNumServer, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumServer, "###############################################################################"; Chr(10);
    Print #intFileNumServer, "#"; Chr(10); Chr(10);
    Print #intFileNumServer, "###############################################################################"; Chr(10);
    Print #intFileNumServer, "#"; Chr(10);
    Print #intFileNumServer, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumServer, "#"; Chr(10);
    Print #intFileNumServer, "###############################################################################"; Chr(10);
    Print #intFileNumServer, "#"; Chr(10); Chr(10);
    Print #intFileNumServer, "define hostgroup{"; Chr(10);
    Print #intFileNumServer, "        hostgroup_name  server               ; The name of the hostgroup"; Chr(10);
    Print #intFileNumServer, "        alias           Server and Similar Devices ; Long name of the group"; Chr(10);
    Print #intFileNumServer, "        }"; Chr(10); Chr(10); Chr(10);

    intFileNumSwitch = FreeFile
    Open "B:\Hardware Documentation\Synology\switch.cfg" For Output As intFileNumSwitch
    Print #intFileNumSwitch, "###############################################################################"; Chr(10);
    Print #intFileNumSwitch, "# switch.cfg - lists the network equipment type devices to be monitored"; Chr(10);
    Print #intFileNumSwitch, "#"; Chr(10);
    Print #intFileNumSwitch, "# Last Modified: "; Now; Chr(10);
    Print #intFileNumSwitch, "###############################################################################"; Chr(10);
    Print #intFileNumSwitch, "#"; Chr(10); Chr(10);
    Print #intFileNumSwitch, "###############################################################################"; Chr(10);
    Print #intFileNumSwitch, "#"; Chr(10);
    Print #intFileNumSwitch, "# HOST GROUP DEFINITIONS"; Chr(10);
    Print #intFileNumSwitch, "#"; Chr(10);
    Print #intFileNumSwitch, "###############################################################################"; Chr(10);
    Print #intFileNumSwitch, "#"; Chr(10); Chr(10);
    Print #intFileNumSwitch, "define hostgroup{"; Chr(10);
    Print #intFileNumSwitch, "        hostgroup_name  switch               ; The name of the hostgroup"; Chr(10);
    Print #intFileNumSwitch, "        alias           Switche and Similar Devices ; Long name of the group"; Chr(10);
    Print #intFileNumSwitch, "        }"; Chr(10); Chr(10); Chr(10);

    Do While Not (EOF(intFileNumRead))
        Line Input #intFileNumRead, strLine
        strItem = Split(strLine, vbTab)
        'strItem(0) = host_name
        'strItem(1) = alias
        'strItem(2) = address
        'strItem(3) = hostgroups
        Select Case strItem(3)
            Case "ap"
                intFileNumWrite = intFileNumAP
            Case "camera"
                intFileNumWrite = intFileNumCamera
            Case "computer"
                intFileNumWrite = intFileNumComputer
            Case "external"
                intFileNumWrite = intFileNumExternal
            Case "other"
                intFileNumWrite = intFileNumOther
            Case "printer"
                intFileNumWrite = intFileNumPrinter
            Case "server"
                intFileNumWrite = intFileNumServer
            Case "switch"
                intFileNumWrite = intFileNumSwitch
        End Select

        Print #intFileNumWrite, "define host{"; Chr(10);
        Select Case strItem(3)
            Case "ap"
                Print #intFileNumWrite, "        use             ap              ; Inherit default values from a template"; Chr(10);
            Case "camera"
                Print #intFileNumWrite, "        use             camera          ; Inherit default values from a template"; Chr(10);
            Case "computer"
                Print #intFileNumWrite, "        use             computer        ; Inherit default values from a template"; Chr(10);
            Case "external"
                Print #intFileNumWrite, "        use             external        ; Inherit default values from a template"; Chr(10);
            Case "other"
                Print #intFileNumWrite, "        use             other           ; Inherit default values from a template"; Chr(10);
            Case "printer"
                Print #intFileNumWrite, "        use             printer         ; Inherit default values from a template"; Chr(10);
            Case "server"
                Print #intFileNumWrite, "        use             server          ; Inherit default values from a template"; Chr(10);
            Case "switch"
                Print #intFileNumWrite, "        use             switch          ; Inherit default values from a template"; Chr(10);
        End Select
        Print #intFileNumWrite, "        host_name       "; strItem(0); "         ; The name we're giving to this device"; Chr(10);
        Print #intFileNumWrite, "        alias           "; strItem(1); "         ; A longer name associated with the device"; Chr(10);
        Print #intFileNumWrite, "        address         "; strItem(2); "         ; IP address of the device"; Chr(10);
        Print #intFileNumWrite, "        hostgroups      "; strItem(3); "         ; Host groups this device is associated with"; Chr(10);
        Print #intFileNumWrite, "        }"; Chr(10); Chr(10);

        Print #intFileNumWrite, "define service{"; Chr(10);
        Print #intFileNumWrite, "        use                     generic-service ; Inherit values from a template"; Chr(10);
        Print #intFileNumWrite, "        host_name               "; strItem(0); "        ; The name of the host the service is associated with"; Chr(10);
        Print #intFileNumWrite, "        service_description     PING            ; The service description"; Chr(10);
        Print #intFileNumWrite, "        check_command           check_ping!3000,25%!5000,90%    ; The command used to monitor the service"; Chr(10);
        Print #intFileNumWrite, "        normal_check_interval   5               ; Check the service every 5 minutes under normal conditions"; Chr(10);
        Print #intFileNumWrite, "        retry_check_interval    1               ; Re-check the service every minute until its final/hard state is determined"; Chr(10);
        Print #intFileNumWrite, "        }"; Chr(10); Chr(10);
    Loop

    Close #intFileNumRead
    Close #intFileNumAP
    Close #intFileNumCamera
    Close #intFileNumComputer
    Close #intFileNumExternal
    Close #intFileNumOther
    Close #intFileNumPrinter
    Close #intFileNumServer
    Close #intFileNumSwitch
End Sub

The files that are created use the Unix/Linux standard line feed (LF) end of line character, rather than the Windows standard carriage return/line feed (CR/LF) combination.  As such, opening the generated files using Notepad is not advised.  Copy the generated files back to the /opt/etc/objects/ path on the DiskStation (copy the files to a Shared Folder on the DiskStation, then use the cp command to copy the files from the share location to /opt/etc/objects/ – the Shared Folders are typically created as subdirectories in the /volume1/ directory).
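
For example, assuming the generated .cfg files were first copied to the config share used earlier in this series (adjust the share name and path to match your environment), the copy commands might look similar to the following:

cp /volume1/config/ap.cfg /opt/etc/objects/
cp /volume1/config/camera.cfg /opt/etc/objects/
cp /volume1/config/computer.cfg /opt/etc/objects/
cp /volume1/config/external.cfg /opt/etc/objects/
cp /volume1/config/other.cfg /opt/etc/objects/
cp /volume1/config/printer.cfg /opt/etc/objects/
cp /volume1/config/server.cfg /opt/etc/objects/
cp /volume1/config/switch.cfg /opt/etc/objects/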

If you decided to use some of the non-standard Nagios group names (as I did), those non-standard group names must be defined in the /opt/etc/objects/templates.cfg file:

vi /opt/etc/objects/templates.cfg

A portion of the additional entries that I made in this file include the following:

define host{
       name                    ap      ; The name of this host template
       use                     generic-host    ; Inherit default values from the generic-host template
       check_period            24x7            ; By default, access points are monitored round the clock
       check_interval          5               ; Actively check the access point every 5 minutes
       retry_interval          1               ; Schedule host check retries at 1 minute intervals
       max_check_attempts      10              ; Check each access point 10 times (max)
       check_command           check_ping      ; Default command to check if access points are "alive"
       notification_period     24x7            ; Send notification out at any time - day or night
       notification_interval   30              ; Resend notifications every 30 minutes
       notification_options    d,r             ; Only send notifications for specific host states
       contact_groups          admins          ; Notifications get sent to the admins by default
       hostgroups              ap ; Host groups that access points should be a member of
       register                0               ; DONT REGISTER THIS - ITS JUST A TEMPLATE
       }

define host{
       name                    camera  ; The name of this host template
       use                     generic-host    ; Inherit default values from the generic-host template
       check_period            24x7            ; By default, cameras are monitored round the clock
       check_interval          60              ; Actively check the device every 60 minutes
       retry_interval          1               ; Schedule host check retries at 1 minute intervals
       max_check_attempts      10              ; Check each device 10 times (max)
       check_command           check_ping      ; Default command to check if devices are "alive"
       notification_period     24x7            ; Send notification out at any time - day or night
       notification_interval   240             ; Resend notifications every 240 minutes
       notification_options    d,r             ; Only send notifications for specific host states
       contact_groups          admins          ; Notifications get sent to the admins by default
       hostgroups              camera ; Host groups that cameras should be a member of
       register                0               ; DONT REGISTER THIS - ITS JUST A TEMPLATE
       }

Nagios will not know that it should read the additional configuration files until it is told to do so by modifying the /opt/etc/nagios.cfg file.

vi /opt/etc/nagios.cfg

Add the following lines to the nagios.cfg file:

# Charles Hooper's object types
cfg_file=/opt/etc/objects/ap.cfg
cfg_file=/opt/etc/objects/camera.cfg
cfg_file=/opt/etc/objects/computer.cfg
cfg_file=/opt/etc/objects/external.cfg
cfg_file=/opt/etc/objects/other.cfg
cfg_file=/opt/etc/objects/printer.cfg
cfg_file=/opt/etc/objects/server.cfg
cfg_file=/opt/etc/objects/switch.cfg

We have made a large number of changes to the configuration files, so it is important to verify that there are no errors in the configuration:

/opt/bin/nagios -v /opt/etc/nagios.cfg

If no errors are found in the configuration, terminate (kill) nagios and then restart as described above.
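
For reference, the kill and restart might look similar to the following (the 3692 shown here is just an example process ID – use the PID reported by the ps command on your DiskStation):

ps | grep 'nagios'
kill 3692
/opt/bin/nagios -d /opt/etc/nagios.cfg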

—-

Update July 28, 2013:

When attempting to start Nagios in daemon mode (/opt/bin/nagios -d /opt/etc/nagios.cfg) I encountered a couple of problems related to permissions for the Nagios user.  The nagios process was not listed when I used the ps command.  I then tried executing the following commands:

touch /opt/var/nagios.log
chown nagios:nagios /opt/var/nagios.log

Nagios was then able to start in daemon mode, but wrote messages similar to the following in the /opt/var/nagios.log file:

[1375058364] Warning: Could not open object cache file ‘/opt/var/objects.cache’ for writing!
[1375058364] Failed to obtain lock on file /opt/var/nagios.lock: Permission denied
[1375058364] Bailing out due to errors encountered while attempting to daemonize… (PID=11451)
[1375058656] Nagios 3.5.0 starting… (PID=12936)
[1375058656] Local time is Sun Jul 28 20:44:16 EDT 2013
[1375058656] LOG VERSION: 2.0
[1375058656] Warning: Could not open object cache file ‘/opt/var/objects.cache’ for writing!
[1375058656] Failed to obtain lock on file /opt/var/nagios.lock: Permission denied
[1375058656] Bailing out due to errors encountered while attempting to daemonize… (PID=12936)
[1375060107] Error: Unable to create temp file for writing status data: Permission denied
[1375060117] Error: Unable to create temp file for writing status data: Permission denied
[1375060127] Error: Unable to create temp file for writing status data: Permission denied
[1375060137] Error: Unable to create temp file for writing status data: Permission denied
[1375060147] Error: Unable to create temp file for writing status data: Permission denied
[1375060157] Error: Unable to create temp file for writing status data: Permission denied

I tried to set the permissions for a couple of other files, only to find another long list of Permission denied messages:

touch /opt/var/objects.cache
touch /opt/var/nagios.lock
touch /opt/var/nagios.tmp
chown nagios:nagios /opt/var/objects.cache
chown nagios:nagios /opt/var/nagios.lock
chown nagios:nagios /opt/var/nagios.tmp

I then recalled that I had seen similar messages on the DiskStation DS412+.  I then tried a different approach, creating a nagios directory in the /opt/var directory, creating a couple of subdirectories in that directory, and then assigning nagios as the owner of that directory structure:

mkdir /opt/var/nagios
mkdir /opt/var/nagios/archives
mkdir /opt/var/nagios/spool
mkdir /opt/var/nagios/spool/checkresults
chown nagios:nagios /opt/var/nagios -R
vi /opt/etc/nagios.cfg

In the nagios.cfg file, I made the following changes:

log_file=/opt/var/nagios/nagios.log
status_file=/opt/var/nagios/status.dat
lock_file=/opt/var/nagios/nagios.lock
temp_file=/opt/var/nagios/nagios.tmp
log_archive_path=/opt/var/nagios/archives
check_result_path=/opt/var/nagios/spool/checkresults
state_retention_file=/opt/var/nagios/retention.dat
debug_file=/opt/var/nagios/nagios.debug

After saving the file and exiting vi, I restarted Nagios in daemon mode.  Reading the last 100 lines of the Nagios log file is now accomplished with this command:

tail -n 100 /opt/var/nagios/nagios.log

—-

There are a lot of seemingly interesting Nagios plugins, including check_oracle (I believe that this plugin requires the Oracle client to be installed – good luck with that install).  On one of the DiskStations the check_snmp plugin did not compile, while on the other DiskStation the check_http plugin did not compile.

It might be interesting to see what solutions readers are able to develop from the above starting point.  The above information is the result of many hours of experimentation as well as a couple minutes reading through sections of the Nagios documentation (it reads like the Oracle Database documentation, so it should be an easy read once I am in the right mood) and hopelessly scanning the ‘net for information about obscure error messages.  Have fun, and try not to put the DiskStation out of service due to a mistaken file copy.

Update November 19, 2013:

Installing an updated version of the Synology DSM operating system may temporarily disable Nagios.  Make backups of all Nagios configuration files (copying the files with the cp command to a directory in /volume1 is generally safe) before installing different versions of the Synology DSM operating system.
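
A minimal backup sketch, assuming the config Shared Folder created earlier in this series is used as the destination (the nagios_backup directory name is just an example):

mkdir /volume1/config/nagios_backup
cp /opt/etc/nagios.cfg /volume1/config/nagios_backup/
cp -R /opt/etc/objects /volume1/config/nagios_backup/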

The DSM 4.3 operating system installation apparently removed the /var/services/homes directory.  That directory removal makes it impossible for the Nagios user to log in to run various commands.  I assume that the removal of the homes directory is intentional, so here is a workaround for that problem:

mkdir /var/services/home
mkdir /var/services/home/nagios
chown nagios:nagios /var/services/home/nagios -R
vi /etc/passwd

In the /etc/passwd file, change all /homes/ entries to /home/, then save and exit vi.
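
If you prefer, the same edit may be performed inside vi with a single substitution command (a sketch – the entries could also be changed manually, line by line):

:%s/\/homes\//\/home\//g
:wq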

The installation of the different DSM version (including versions before 4.3) will likely also replace/remove the libltdl.* files located in /opt/local/lib and /usr/lib, so we need to copy those files back into the correct directories:

cp /opt/lib/libltdl.so.3 /opt/local/lib/libltdl.so.3
cp /opt/lib/libltdl.so.3 /usr/lib/libltdl.so.3
cp /opt/lib/libltdl.so /usr/lib/

Once the above items are copied, try executing the check_ping command as the nagios user (replace MyDeviceHere with either an IP address or the name of a device on your network).

su - nagios -c "/opt/libexec/check_ping -H MyDeviceHere -w 5000,80% -c 5000,80% -p 5"

If the DiskStation reports that the check_ping command was not found, then copy that file back to the /opt/libexec/ directory.  If the above command was successful, try verifying the Nagios configuration:

/opt/bin/nagios -v /opt/etc/nagios.cfg

If the verification was successful, start Nagios as a daemon:

/opt/bin/nagios -d /opt/etc/nagios.cfg

Execute the ps command and verify that the above command is listed in the running processes:

ps

Finally, verify that Nagios is still set to start automatically as a daemon:

ls /opt/etc/init.d/S81nagios

If a file is listed when the above command is executed, then Nagios should now be fully repaired.

Update August 12, 2015:

You may at some point need to verify that an HTTP web server is online.  If you execute the following command (replace www.mydomain.com with an actual web server for a domain to be monitored):

/opt/libexec/check_http -H www.mydomain.com

You may see one of the following errors:

/opt/libexec/check_http: error while loading shared libraries: libssl.so.0.9.8: cannot open shared object file: No such file or directory
/opt/libexec/check_http: error while loading shared libraries: libcrypto.so.0.9.8: cannot open shared object file: No such file or directory

The problem is likely caused by two missing symbolic links.  The following commands worked on both the DS1813+ and the DS412+:

ln -s /usr/lib/libssl.so.1.0.0 /usr/lib/libssl.so.0.9.8
ln -s /lib/libcrypto.so.1.0.0 /usr/lib/libcrypto.so.0.9.8

If instead you see this error message, then there is likely a problem with the compiled Nagios plugins (the DS1813+ was missing this plugin, so I copied it from the DS412+):

-ash: /opt/libexec/check_http: not found

To use check_http to monitor a website status I had to create a new host template because Nagios continued to use the check_ping command to check the website status, even though I had tried to override the default template value in the service entry for the website (this might be a bug, or I might have configured something wrong).  Add the following lines to the /opt/etc/objects/templates.cfg file below the external host definition section:

define host{
        name                    external-http   ; The name of this host template
        use                     generic-host    ; Inherit default values from the generic-host template
        check_period            24x7            ; By default, web servers are monitored round the clock
        check_interval          5               ; Actively check the server every 5 minutes
        retry_interval          1               ; Schedule host check retries at 1 minute intervals
        max_check_attempts      1               ; Check each server 1 time (max)
        check_command           check_http      ; Default command to check if servers are "alive"
        notification_period     24x7            ; Send notification out at any time - day or night
        notification_interval   30              ; Resend notifications every 30 minutes
        notification_options    d,r             ; Only send notifications for specific host states
        contact_groups          admins          ; Notifications get sent to the admins by default
        hostgroups              external        ; Host groups that web servers should be a member of
        register                0               ; DONT REGISTER THIS - ITS JUST A TEMPLATE
        }

Next, define the websites to be monitored in the /opt/etc/objects/external.cfg file.  For example, to monitor the http://www.oracle.com website, add the following lines:

define host{
        use             external-http        ; Inherit default values from a template
        host_name       Oracle_website       ; The name we're giving to this device
        alias           Oracle Website       ; A longer name associated with the device
        address         www.oracle.com       ; Host address of the device
        hostgroups      external             ; Host groups this device is associated with
        }
 
define service{
        use                     generic-service ; Inherit values from a template
        host_name               Oracle_website  ; The name of the host the service is associated with
        service_description     HTTP            ; The service description
        check_command           check_http      ; The command used to monitor the service
        normal_check_interval   3               ; Check the service every 3 minutes under normal conditions
        retry_check_interval    1               ; Re-check the service every minute until its final/hard state is determined
        }

It appears that Nagios might be expecting an IP address rather than a host name (such as www.oracle.com), so you may also need to modify the /opt/etc/objects/commands.cfg file, changing the -I to -H in the check_http command definition, as shown below:

# 'check_http' command definition
define command{
        command_name    check_http
        command_line    $USER1$/check_http -H $HOSTADDRESS$ $ARG1$
        }

When making changes to the Nagios configuration, always use the verify command to make certain that syntax errors are not present in the configuration files.

/opt/bin/nagios -v /opt/etc/nagios.cfg

To restart Nagios, use the ps command to locate the first process entry with “/opt/bin/nagios -d /opt/etc/nagios.cfg” in its COMMAND column:

ps | grep 'nagios'

If that line contains the number 3692 in the first column, execute the following:

kill 3692

Then restart Nagios in the background (daemon mode)

/opt/bin/nagios -d /opt/etc/nagios.cfg

I never bothered to determine how to make the Nagios monitoring status web pages work with the Synology DS412+, DS1813+, and DS212+.  I thought that I would see what steps would be required to make that functionality work on a Synology DS415+ (note that the following steps work nearly exactly the same on the Synology DS412+ and DS1813+).  The Synology DSM operating system utilizes port 5000 for web traffic.  If you add the WordPress package to the Synology, that package uses the standard port 80 for web traffic.  If you followed the above steps for installing Nagios, you installed a second copy of the Apache web server on the Synology that uses port 8000 for web traffic.  If your Synology has an IP address of 192.168.1.60, then you would use the following website address to access the second web server on the Synology: http://192.168.1.60:8000/nagios/

The Nagios monitoring status web pages, once set up, will appear as a directory of that second web server, as shown below:

nagiosds415_web_status

The value of the Nagios monitoring status web pages becomes apparent fairly quickly by reviewing the above screen capture.  The above Tactical Overview shows that there are 18 monitored devices that are up, one monitored device that is down, and another device that is in the process of being checked.  The links at the left provide additional information about the monitored devices.  Let’s configure the second copy of Apache on the Synology to support the Nagios monitoring status web pages.

The second copy of Apache uses the configuration file /opt/etc/apache2/httpd.conf.  Edit that file:

vi /opt/etc/apache2/httpd.conf

Add the following directly below the # Supplemental configuration heading, which is near the end of the file:

# Added per Charles Hooper's Nagios installation guide for Synology DS415+
Include etc/apache2/conf.d/nagios.conf
Include etc/apache2/conf.d/php.conf

Save the file and exit vi.  Next, we need to create the htpasswd.users file in the correct location.  In the above steps, we executed the following command:

/opt/sbin/htpasswd -c /usr/local/etc/htpasswd.users nagiosadmin

That htpasswd.users file is expected to be in the /opt/etc directory.  Execute the following command to create the file in the correct directory (you will be prompted for a password for the nagiosadmin user):

/opt/sbin/htpasswd -c /opt/etc/htpasswd.users nagiosadmin

The Nagios monitoring status web pages require PHP support to be added to the second Apache installation.  Execute the following commands to install PHP:

/opt/bin/ipkg update
/opt/bin/ipkg list
/opt/bin/ipkg install php
/opt/bin/ipkg install php-apache

Next, we need to modify the /opt/etc/apache2/conf.d/nagios.conf file that was created during the Nagios installation.

vi /opt/etc/apache2/conf.d/nagios.conf

Below the heading <Directory "/opt/share">, add the following line:

   DirectoryIndex index.php

Save the file and exit vi.  Next, we need to adjust the php.ini file on the Synology:

vi /opt/etc/php.ini

In the file (near line 108), locate the following line:

output_buffering = Off

Change that line to show:

output_buffering = 8192

Locate the following line (near line 248) in the file:

memory_limit = 8M

Change that line to show:

memory_limit = 128M

Save the file and exit vi.  There may be a second php.ini file located in the /etc/php directory; make the same changes to that file, then save the file and exit vi:

vi /etc/php/php.ini

Perform a graceful restart of the second copy of Apache:

/opt/sbin/apachectl -k graceful

Note:

The above command executed without a problem on the DS1813+; however, the DS412+ displayed one of the following two error messages:

httpd: Syntax error on line 102 of /opt/etc/apache2/httpd.conf: Cannot load /opt/libexec/mod_dav.so into server: /opt/libexec/mod_dav.so: undefined symbol: syno_recv_usr2
httpd: Syntax error on line 108 of /opt/etc/apache2/httpd.conf: Cannot load /opt/libexec/mod_dav_fs.so into server: /opt/libexec/mod_dav_fs.so: undefined symbol: HomeIsExist

When each error appeared, I modified the /opt/etc/apache2/httpd.conf file to put a # symbol in front of the offending line.  Doing so permitted the apachectl command to execute without error, but the errors may be a sign that the Nagios status pages will not work correctly.

Try to access the Nagios status monitoring pages in a web browser (replace 192.168.1.60 with the IP address of your Synology): http://192.168.1.60:8000/nagios/

You should be prompted to enter a username and password.  Enter nagiosadmin for the username, and enter the password for that user that you created when executing the /opt/sbin/htpasswd command earlier.  If the web page does not display, take a look at the last 100 lines of the Apache error log to see if any clues are provided:

tail -n 100 /opt/var/apache2/log/error_log

Note:

The DS1813+ showed the following in the error_log – if you see this error, the error may go away after completing the steps that follow:

File does not exist: /opt/share/www/nagios

Some of the interesting error messages that I experienced include the following:

[Sun Jan 04 14:07:55 2015] [error] [client 192.168.1.218] (2)No such file or directory: Could not open password file: /opt/etc/htpasswd.users
 
[Sun Jan 04 14:23:04 2015] [error] [client 192.168.1.218] Directory index forbidden by Options directive: /opt/share/
 
[Sun Jan 04 15:07:09 2015] [error] [client 192.168.1.218] File does not exist: /opt/share/<, referer: http://192.168.1.60:8000/nagios/
 
[Sun Jan 04 17:53:06 2015] [notice] child pid 15126 exit signal Segmentation fault (11)
 
[Sun Jan 11 09:41:57 2015] [error] [client 192.168.1.213] script not found or unable to stat: /opt/sbin/statusmap.cgi, referer: http://192.168.1.60:8000/nagios/side.php

The last of the above errors still remains: the file /opt/sbin/statusmap.cgi does not exist.  For now, I will let someone else find a solution for that problem.

Note:

If you saw a login prompt, but the login was unsuccessful, check the Apache error log:

tail -n 100 /opt/var/apache2/log/error_log

On the DS1813+, I saw the following error:

No such file or directory: Could not open password file: /opt/etc/htpasswd.users

The above error indicates that the htpasswd.users file was expected to be found in a different location.  To work around this issue, I simply created the file in the location specified in the error log:

/opt/sbin/htpasswd -c /opt/etc/htpasswd.users nagiosadmin

If you find that a monitored device is down, it is possible, using the web interface, to disable future checking of the device, as well as to alter a number of other notification options.  Unfortunately, clicking any of the options under the Host Commands heading will result in a permission error.

nagiosds415_change_notifications

To fix the permission error:

Using the Synology Control Panel interface, create a new group named www – this group requires no specific Synology permissions.  Next, using the Synology Control Panel interface, create a user named apache – make this user a member of the nagios, users, and www groups.  This user requires no specific Synology permissions.  Then change the username and group under which Apache executes:

vi /opt/etc/apache2/httpd.conf

In the httpd.conf file, locate the following two lines:

User nobody
Group #-1

Change those lines to appear as follows:

User apache
Group www

Save the file and exit vi.  Gracefully restart the second copy of Apache:

/opt/sbin/apachectl -k graceful

Refresh the Nagios monitoring status web page – the options under the Host Commands heading should no longer return an error.





On the Topic of Technology… 5

12 07 2013

July 12, 2013

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

As many readers of this blog are probably aware, Oracle Corporation has released Oracle Database 12.1.0.1 so far for the Linux, Solaris, and Windows platforms.  Oracle Database 12.1.0.1 may be downloaded from Oracle’s OTN website.  This article is not about Oracle Database 12.1.0.1, at least not specifically about that version.

In the previous article in this blog series last year, I mentioned experimenting with a Synology DiskStation DS212+, as well as a couple of IP based 640×480 resolution security cameras.  Since that time I have had the opportunity to purchase a couple of additional NAS devices including a Synology Diskstation DS112J, Synology Diskstation DS412+, and Synology Diskstation DS1813+.  The DS212+ and DS112J NAS devices have ARM type processors, while the DS412+ (32 bit?) and DS1813+ (64 bit) have Intel Atom D2700 series processors (the processor series for other Synology processors may be determined by visiting this link).  The processor type in the NAS partially determines the native capabilities of the NAS, as well as what else may be done with the NAS.  Setting up the Synology NAS devices to support FTP server functionality is fairly easy to accomplish, regardless of the processor type.  That FTP server functionality helps to support the upload functionality of the IP based security cameras.

As an experiment shortly after buying the Synology DiskStation DS212+, I attempted to install the network monitoring tool Nagios, in part to allow keeping track of which IP cameras were offline.  I hit a bit of a brick wall trying to find a precompiled package to permit the Nagios server functionality to run on the Synology DiskStations, which at the core run a version of Linux.  The closest thing that I could find was a plugin for Nagios to permit Nagios running on another machine to monitor a Synology NAS.  I first worked with Red Hat Linux in 1999, implemented dual inline manually-coded iptables firewalls based on a stripped down Red Hat Linux in early 2002, compiled/built a Linux based X.509 certificate supporting VPN server before the Linux kernel supported X.509 certificates (I tried compiling a patched version of the Red Hat kernel, patched with X.509 support, but eventually gave up and compiled the Working Overloaded Kernel), and even tried running Red Hat Enterprise Linux with Samba and Winbind as a member of the company’s file server team.  I first worked with Nagios in 2002, when one of my brothers introduced me to the Linux based network monitoring tool (previously called NetSaint).  Needless to say, I have experience working with Linux and manually compiling software on that platform, but that experience is apparently quite rusty.  The attempt to compile the Nagios source code on the Synology DiskStation DS212+ came to an abrupt halt when I received a message during the compile process essentially stating that the ARM type CPU (Marvell Kirkwood mv6282) did not support fine timer resolutions.

A couple of months later, I tried compiling the Nagios source on the Synology DiskStation DS412+, which features an Intel CPU architecture.  I encountered a couple of unexpected snags in the compile process, and had to put the project on hold for several months.  The paths to the various files on the Linux operating system running on the DiskStation differs a bit from the paths used by the Red Hat variants of Linux – that lack of standardization across the various Linux distributions has frustrated me from time to time over the years.

I recently purchased and reviewed a Synology DiskStation DS1813+.  In the review, I stated the following before testing the theory:

“Additionally, ipkg support permits the installation of roughly 900 additional applications, including C++ compilers – which in theory suggests that the source for the Nagios network monitoring utility can be downloaded and compiled on the DS1813+.”

I am curious to know whether or not anyone is able to get the Nagios server software to run on a Synology DiskStation DS412+ or DS1813+.

I suppose that I should not have proposed that the Nagios network monitoring utility might work on the DiskStation without actually confirming that the utility will work.  I am now able to confirm that the Nagios network monitoring utility will execute on the Synology DiskStation DS1813+, although the check_http plugin failed to compile.  The installation is anything but straight-forward – no how-tos that are close to being useful, and no Setup.exe to double-click.  The following screen capture also does not help (non-root users are not permitted to use the ping command on the DiskStations):

TopicOfTechnology5-1

At this time, I cannot provide a detailed instruction list for running the Nagios network monitoring utility on a Synology DiskStation.  However, as a starting point it is necessary to add ipkg support to the DiskStation.  The following ipkg items might be necessary: optware-devel, gcc, libtool, mysql, apache, openssl, openssl-dev, sendmail, inetutils.  With a bit of experimentation (and luck), you might see something similar to this when typing the ps command in a telnet session (I actually typed the command a second time so that the column headings would be visible – there certainly are a lot of background processes on the DiskStation):

TopicOfTechnology5-2
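
For reference, the ipkg items mentioned above could be installed with commands similar to the following (a sketch – package names and availability may vary with the DSM version and CPU architecture):

/opt/bin/ipkg update
/opt/bin/ipkg install optware-devel
/opt/bin/ipkg install gcc
/opt/bin/ipkg install libtool
/opt/bin/ipkg install mysql
/opt/bin/ipkg install apache
/opt/bin/ipkg install openssl
/opt/bin/ipkg install openssl-dev
/opt/bin/ipkg install sendmail
/opt/bin/ipkg install inetutils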

As I found out, just because Nagios is in the process list, that does not mean that it is able to do much of anything useful.  A work-around for the non-root ping issue is needed (I might have hinted part of the solution when listing the various ipkgs), as well as a work-around for the non-root sendmail problem that I did not mention.

When Nagios is working properly, unplugging a monitored device should result in an email message being sent (of course, if you unplug your computer, you probably will not receive an email message stating that the computer is down :-) ):

TopicOfTechnology5-3

There appear to be several Nagios plugins to monitor Oracle databases, although I have not had a chance yet to determine if any of those plugins will compile and work on a Synology DiskStation.  In theory it should wor… wait, I am not headed down that path yet!

In addition to a Synology DiskStation DS212+, the previous article in this series also showed a couple of smart 640×480 resolution IP cameras.  At the time of the previous article, I did not fully comprehend the usefulness of smart IP cameras.  Roughly 30 IP based cameras later, I now have a better understanding of their usefulness and limitations.  Last year I wrote reviews for three 640×480 model cameras here (it appears that Amazon now has this review attached to a different camera), here (it appears that Amazon now has this review attached to a different camera), and here (OK, there is a fourth camera included in this review due to a model changeover).  I was also burned badly (at a loss of $1343) when I bought two 1080P cameras last year that could not meet (or even approach) the manufacturer’s claims for the product.  All of those reviews include video samples produced by the cameras.

This year I bought and reviewed a couple of smart 720P resolution IP cameras, as well as a couple of different (from last year’s 1080P) smart 1080P resolution IP cameras.  As before, the reviews include sample video clips recorded by the cameras (the 720P and 1080P video was uploaded at the native resolution, but it appears that Amazon uses a pretty aggressive compression algorithm, which leads to some lost video quality).  The new 720P and 1080P cameras are not perfect, but the manufacturer appears to be taking steps to address the weaknesses that I outlined in the reviews.  I was sent another updated firmware for the 1080P cameras, as well as an updated PDF that includes the instructions that were missing from the included printed manual.  The support person for the camera company also stated that their website is currently under development, and will probably be online in the next 30 days.  My review mentioned the lack of success at using the recommended P2PCam264 app on a Motorola Xoom tablet for viewing the live video feed from the smart 720P and 1080P cameras.  The support person suggested using the AnyScene app on the Motorola Xoom tablet for viewing the live feed – that app seems to work.  The AnyScene app, while seemingly lacking the sound feed from the cameras, might even work a little too well.  I brought the Xoom tablet to a different network, only to find that the app is somehow able to still pull a live video feed from any of the configured cameras on the other network without poking any holes in the firewall on either network, and universal plug and play (uPNP) is disabled (below is a low resolution cell phone captured picture).  I am now left wondering what level of security risk this plug and play technology might pose.

TopicOfTechnology5-4

Sample PNG Generated from 720P Camera’s Video (Click to Display Larger Version):

topicoftechnology5-8

Sample PNG Generated from 1080P Camera’s Video (Same Scene as the Above Example – Click to Display Larger Version):

topicoftechnology5-9

Sample JPG 720P Image from an Edited Video (the 1080P video suffers from fewer out of focus problems and is the same resolution – just with a roughly 50% wider and taller viewing area):

TopicOfTechnology5-6





Everything or Nothing in SQL

23 05 2013

May 23, 2013

The following question recently came through an ERP mailing list (significantly rephrased):

I would like to use the Microsoft Query tool in Microsoft Excel to extract records from the ERP database.  I would like the list of parent records to be retrieved into Excel when all of the specified child records belong to the parent.

That sounds as though it would be an easy request.  The OP in the ERP mailing list later followed up with the following:

“If I may ask for another variation to pick your brain….

How would I rewrite this to show ONLY the work order(s) that have all of the 3 Parts, but instead of showing all of them, show only the ones that don’t include a specific other part?

For example, Include all the work order retrieved from your query, but then subtract those work orders that don’t contain a part or parts in another separate list.

I.E. All order containing parts (‘147555′,’227643′,’155328’) but not containing ANY of these parts (‘12345’, ‘456789’)”

In the above follow up, the term “work order(s)” refers to the “parent records” that were previously mentioned, and the term “parts” refers to the “child records” that were previously mentioned.  Below is a graphical mock up, with the light-green colored rectangles representing the “parts”, and the white colored rectangle at the bottom left representing the “work order”.  The other white colored rectangles represent subassemblies that belong to the “work order”, and the cyan colored rectangles represent the operations that utilize the “parts”.  Through normalization, the white colored rectangles represent rows in one database table (WORK_ORDER – the “parent” table), the cyan colored rectangles represent rows in another table (OPERATION – the “child” table), and the light-green colored rectangles represent rows in a third table (REQUIREMENT – the “grandchild” table).

EverythingOrNothing

The first request still seems to be easy to solve, even though there are three tables that are potentially involved.  Let’s simplify the problem further and just focus on the “grandchild” table.  Let’s create a small version of that table and populate the table with some sample rows:

CREATE TABLE T1 (
  WORKORDER_TYPE VARCHAR2(1),
  WORKORDER_BASE_ID VARCHAR2(15),
  WORKORDER_LOT_ID VARCHAR2(3),
  WORKORDER_SUB_ID VARCHAR2(3),
  PIECE_NO NUMBER,
  PART_ID VARCHAR2(15),
  QTY NUMBER(12,4));

INSERT INTO
  T1
SELECT
  'W' WORKORDER_TYPE,
  TRIM(TO_CHAR(TRUNC(ROWNUM / 100 + 1)*1000,'00000')) || 'W' WORKORDER_BASE_ID,
  TO_CHAR(MOD(TRUNC(ROWNUM / 25 + 1), 999) + 1) WORKORDER_LOT_ID,
  TO_CHAR(TRUNC(MOD(ROWNUM - 1, 25) / 5) + 1) WORKORDER_SUB_ID,
  MOD(ROWNUM - 1, 25) + 1 PIECE_NO,
  CHR(65 + MOD(ROWNUM - 1, 26)) || CHR(65 + MOD(ROWNUM - 1, 15)) || MOD(ROWNUM - 1, 10)  PART_ID,
  TRUNC(DBMS_RANDOM.VALUE*100) + 1 QTY
FROM
  DUAL
CONNECT BY
  LEVEL<=1000;

In the above, the combination of the WORKORDER_BASE_ID column and the WORKORDER_LOT_ID column represent the “work order” that was mentioned by the OP.  The WORKORDER_SUB_ID column identifies the subassembly number (answering the question of to which of the white colored rectangles does the “part” belong).  The PIECE_NO column is used to uniquely identify the rows that belong to a specific subassembly number (that is the final column in the REQUIREMENT table’s primary key).  The PART_ID column is the specific “part” that is attached to a specific “work order”.  The QTY column indicates the number of the “part” that is required by the “work order” – with the sample data, this is the only column that will differ from one execution of the table creation script to the next execution.

The sample data is a bit too consistent, however, we will ignore any issues that consistency may present for now.  Let’s execute a simple SQL statement that indicates how many from the following “parts” list appear in the “work orders”: MI8, NJ9, KG6 (note the use of DISTINCT – a single PART_ID could appear on more than one subassembly in a single WORKORDER_BASE_ID, WORKORDER_LOT_ID combination, and we do not want to repeatedly count those “parts”):

SELECT
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID,
  T.WORKORDER_SUB_ID,
  T.PART_ID,
  COUNT(DISTINCT T.PART_ID) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C
FROM
  T1 T
WHERE
  T.PART_ID IN ('MI8','NJ9','KG6');

W WORKORDER_BASE_ WOR WOR PART_ID                  C
- --------------- --- --- --------------- ----------
W 01000W          3   3   KG6                      3
W 01000W          3   3   MI8                      3
W 01000W          3   3   NJ9                      3
W 05000W          19  1   KG6                      3
W 05000W          19  1   MI8                      3
W 05000W          19  1   NJ9                      3
W 09000W          34  4   KG6                      3
W 09000W          34  4   MI8                      3
W 09000W          34  4   NJ9                      3

Only 9 rows returned, with three different WORKORDER_BASE_ID, WORKORDER_LOT_ID combinations.  For a moment, let’s ignore that the three “parts” appear in all three “work orders”.  If we wanted a list of those “work orders” where _all_ three of the “parts” are present, and not just _one or more_ of the three “parts”, we could simply slide the above SQL statement into an inline view and specify that our C generated column must be equal to (or greater than – the use of greater than will be explained later) 3:

SELECT
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PART_ID
FROM
  (SELECT
    T.WORKORDER_TYPE,
    T.WORKORDER_BASE_ID,
    T.WORKORDER_LOT_ID,
    T.WORKORDER_SUB_ID,
    T.PART_ID,
    COUNT(DISTINCT T.PART_ID) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C
  FROM
    T1 T
  WHERE
    T.PART_ID IN ('MI8','NJ9','KG6'))
WHERE
  C>=3
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID;

W WORKORDER_BASE_ WOR WOR PART_ID
- --------------- --- --- -------
W 01000W          3   3   KG6
W 01000W          3   3   MI8
W 01000W          3   3   NJ9
W 05000W          19  1   KG6
W 05000W          19  1   MI8
W 05000W          19  1   NJ9
W 09000W          34  4   KG6
W 09000W          34  4   MI8
W 09000W          34  4   NJ9

While trying to determine the OP’s business case for the requests, I wondered if the OP was attempting to replace a specific “part” with another “part” within the ERP package, and simply required a report to understand the progress of that manual “part” replacement procedure in the ERP package.  Let’s move forward with this assumption by replacing a row containing a specific PART_ID value with a new PART_ID value, but only for a single “work order”:

DELETE FROM
  T1
WHERE
  WORKORDER_TYPE='W'
  AND WORKORDER_BASE_ID='09000W'
  AND WORKORDER_LOT_ID='34'
  AND PART_ID='NJ9';

1 row deleted.

INSERT INTO T1 VALUES (
  'W',
  '09000W',
  '34',
  '4',
  10,
  'REPLACEMENT-NJ9',
  10);

1 row created.

Let’s check the result of the SQL statement that we were in the process of developing:

SELECT
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PART_ID
FROM
  (SELECT
    T.WORKORDER_TYPE,
    T.WORKORDER_BASE_ID,
    T.WORKORDER_LOT_ID,
    T.WORKORDER_SUB_ID,
    T.PART_ID,
    COUNT(DISTINCT T.PART_ID) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C
  FROM
    T1 T
  WHERE
    T.PART_ID IN ('MI8','NJ9','KG6'))
WHERE
  C>=3
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID;

W WORKORDER_BASE_ WOR WOR PART_ID
- --------------- --- --- -------
W 01000W          3   3   KG6
W 01000W          3   3   MI8
W 01000W          3   3   NJ9
W 05000W          19  1   KG6
W 05000W          19  1   MI8
W 05000W          19  1   NJ9

Now, there are only six rows returned from two “work orders” – the third “work order” is no longer included in the query results.

With the use of the C >= n syntax, we are able to find “work orders” with at least n number of the specified “parts”.  So, if we were interested in finding cases where at least two of the three listed “parts” are attached to a “work order”, we just need to change the 3 to 2:

SELECT
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PART_ID
FROM
  (SELECT
    T.WORKORDER_TYPE,
    T.WORKORDER_BASE_ID,
    T.WORKORDER_LOT_ID,
    T.WORKORDER_SUB_ID,
    T.PART_ID,
    COUNT(DISTINCT T.PART_ID) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C
  FROM
    T1 T
  WHERE
    T.PART_ID IN ('MI8','NJ9','KG6'))
WHERE
  C>=2
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID;

W WORKORDER_BASE_ WOR WOR PART_ID
- --------------- --- --- -------
W 01000W          3   3   KG6
W 01000W          3   3   MI8
W 01000W          3   3   NJ9
W 05000W          19  1   KG6
W 05000W          19  1   NJ9
W 05000W          19  1   MI8
W 09000W          34  4   KG6
W 09000W          34  4   MI8

The third “work order” is again included in the query resultset.

Other than putting the results into Microsoft Excel, the sample query satisfies the first request of the OP.  If we were not interested in returning the WORKORDER_SUB_ID and PART_ID columns, we could have simply used the following SQL statement:

SELECT
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID
FROM
  T1 T
WHERE
  T.PART_ID IN ('MI8','NJ9','KG6')
GROUP BY
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID
HAVING
  COUNT(*)>=3;

W WORKORDER_BASE_ WOR
- --------------- ---
W 05000W          19
W 01000W          3

Now to work on the second request, only retrieving the “work orders” when all of a specified list of “parts” are attached to the “work order” and none of another list of “parts” are attached to the “work order”.  As a starting point, we will re-use the inline view from the previous solution.  The IN list will be modified to include the “parts” that must be attached to the “work order”, as well as the “parts” that must not be attached to the “work order”.  The COUNT analytic function must be modified to include either a DECODE or CASE structure to divide the “parts” into the “must” and “must not” lists so that a count of each may be obtained:

SELECT
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID,
  T.WORKORDER_SUB_ID,
  T.PART_ID,
  COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('MI8','NJ9','KG6') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C,
  COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('REPLACEMENT-NJ9') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS NC
FROM
  T1 T
WHERE
  T.PART_ID IN ('MI8','KG6',   'REPLACEMENT-NJ9');

W WORKORDER_BASE_ WOR WOR PART_ID                  C         NC
- --------------- --- --- --------------- ---------- ----------
W 01000W          3   3   KG6                      2          0
W 01000W          3   3   MI8                      2          0
W 05000W          19  1   KG6                      2          0
W 05000W          19  1   MI8                      2          0
W 09000W          34  4   KG6                      2          1
W 09000W          34  4   MI8                      2          1
W 09000W          34  4   REPLACEMENT-NJ9          2          1

To satisfy the OP’s second request, we are interested in those rows where the C column value is (at least) 2 and the NC column value is 0.  Just as before, we will slide the above SQL statement into an inline view and add the specifications for the C and NC generated columns:

SELECT
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PART_ID
FROM
  (SELECT
    T.WORKORDER_TYPE,
    T.WORKORDER_BASE_ID,
    T.WORKORDER_LOT_ID,
    T.WORKORDER_SUB_ID,
    T.PART_ID,
    COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('MI8','NJ9','KG6') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C,
    COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('REPLACEMENT-NJ9') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS NC
  FROM
    T1 T
  WHERE
    T.PART_ID IN ('MI8','KG6',   'REPLACEMENT-NJ9'))
WHERE
  C>=2
  AND NC=0
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID;

W WORKORDER_BASE_ WOR WOR PART_ID
- --------------- --- --- -------
W 01000W          3   3   KG6
W 01000W          3   3   MI8
W 05000W          19  1   KG6
W 05000W          19  1   MI8

The above shows those “work orders” where the NJ9 “part” had not yet been replaced with the REPLACEMENT-NJ9 “part”.  If desired, we are also able to easily modify the OP’s second request to see the “work orders” where the “part” was already replaced just by changing NC=0 to NC=1:

SELECT
  WORKORDER_TYPE,
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PART_ID
FROM
  (SELECT
    T.WORKORDER_TYPE,
    T.WORKORDER_BASE_ID,
    T.WORKORDER_LOT_ID,
    T.WORKORDER_SUB_ID,
    T.PART_ID,
    COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('MI8','NJ9','KG6') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C,
    COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('REPLACEMENT-NJ9') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS NC
  FROM
    T1 T
  WHERE
    T.PART_ID IN ('MI8','KG6',   'REPLACEMENT-NJ9'))
WHERE
  C>=2
  AND NC=1
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID;

W WORKORDER_BASE_ WOR WOR PART_ID
- --------------- --- --- ---------------
W 09000W          34  4   KG6
W 09000W          34  4   REPLACEMENT-NJ9
W 09000W          34  4   MI8

What other related information are we able to return?  What about all “parts” (the full bill of material) for each of the work orders that match the OP’s second request.  That modification is simple, just slide the majority of the above SQL statement into a subquery (found in the first WHERE clause):

SELECT
  *
FROM
  T1
WHERE
  (WORKORDER_TYPE,
   WORKORDER_BASE_ID,
   WORKORDER_LOT_ID) IN
    (SELECT
      WORKORDER_TYPE,
      WORKORDER_BASE_ID,
      WORKORDER_LOT_ID
    FROM
      (SELECT
        T.WORKORDER_TYPE,
        T.WORKORDER_BASE_ID,
        T.WORKORDER_LOT_ID,
        T.WORKORDER_SUB_ID,
        T.PART_ID,
        COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('MI8','NJ9','KG6') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C,
        COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('REPLACEMENT-NJ9') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS NC
      FROM
        T1 T
      WHERE
        T.PART_ID IN ('MI8','KG6',   'REPLACEMENT-NJ9'))
    WHERE
      C>=2
      AND NC=0)
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PIECE_NO;

W WORKORDER_BASE_ WOR WOR   PIECE_NO PART_ID                QTY
- --------------- --- --- ---------- --------------- ----------
W 01000W          3   1            1 ZK5                     48
W 01000W          3   1            2 AL6                     60
W 01000W          3   1            3 BM7                     42
W 01000W          3   1            4 CN8                     26
W 01000W          3   1            5 DO9                     49
W 01000W          3   2            6 EA0                     66
W 01000W          3   2            7 FB1                     39
W 01000W          3   2            8 GC2                     57
W 01000W          3   2            9 HD3                     42
W 01000W          3   2           10 IE4                     52
W 01000W          3   3           11 JF5                     16
W 01000W          3   3           12 KG6                     61
W 01000W          3   3           13 LH7                     39
W 01000W          3   3           14 MI8                     75
W 01000W          3   3           15 NJ9                     94
W 01000W          3   4           16 OK0                     67
W 01000W          3   4           17 PL1                     12
W 01000W          3   4           18 QM2                     41
W 01000W          3   4           19 RN3                     97
W 01000W          3   4           20 SO4                     15
W 01000W          3   5           21 TA5                     54
W 01000W          3   5           22 UB6                     59
W 01000W          3   5           23 VC7                     23
W 01000W          3   5           24 WD8                      7
W 01000W          3   5           25 YJ4                     22
W 05000W          19  1            1 JF5                     31
W 05000W          19  1            2 KG6                     42
W 05000W          19  1            3 LH7                     37
W 05000W          19  1            4 MI8                     85
W 05000W          19  1            5 NJ9                    100
W 05000W          19  2            6 OK0                     43
W 05000W          19  2            7 PL1                     58
W 05000W          19  2            8 QM2                     76
W 05000W          19  2            9 RN3                     66
W 05000W          19  2           10 SO4                     75
W 05000W          19  3           11 TA5                     50
W 05000W          19  3           12 UB6                     55
W 05000W          19  3           13 VC7                     18
W 05000W          19  3           14 WD8                     33
W 05000W          19  3           15 XE9                      8
W 05000W          19  4           16 YF0                     86
W 05000W          19  4           17 ZG1                      7
W 05000W          19  4           18 AH2                     25
W 05000W          19  4           19 BI3                     38
W 05000W          19  4           20 CJ4                     34
W 05000W          19  5           21 DK5                     88
W 05000W          19  5           22 EL6                     91
W 05000W          19  5           23 FM7                     52
W 05000W          19  5           24 GN8                     71
W 05000W          19  5           25 IE4                     55

Just as before, we are able to see all “parts” for the “work orders” that already have the replacement part, just by changing the NC=0 to NC=1:

SELECT
  *
FROM
  T1
WHERE
  (WORKORDER_TYPE,
   WORKORDER_BASE_ID,
   WORKORDER_LOT_ID) IN
    (SELECT
      WORKORDER_TYPE,
      WORKORDER_BASE_ID,
      WORKORDER_LOT_ID
    FROM
      (SELECT
        T.WORKORDER_TYPE,
        T.WORKORDER_BASE_ID,
        T.WORKORDER_LOT_ID,
        T.WORKORDER_SUB_ID,
        T.PART_ID,
        COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('MI8','NJ9','KG6') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS C,
        COUNT(DISTINCT (CASE WHEN T.PART_ID IN ('REPLACEMENT-NJ9') THEN T.PART_ID ELSE NULL END)) OVER (PARTITION BY T.WORKORDER_TYPE, T.WORKORDER_BASE_ID, T.WORKORDER_LOT_ID) AS NC
      FROM
        T1 T
      WHERE
        T.PART_ID IN ('MI8','KG6',   'REPLACEMENT-NJ9'))
    WHERE
      C>=2
      AND NC=1)
ORDER BY
  WORKORDER_BASE_ID,
  WORKORDER_LOT_ID,
  WORKORDER_SUB_ID,
  PIECE_NO;

W WORKORDER_BASE_ WOR WOR   PIECE_NO PART_ID                QTY
- --------------- --- --- ---------- --------------- ----------
W 09000W          34  1            1 UF0                     80
W 09000W          34  1            2 VG1                     75
W 09000W          34  1            3 WH2                     75
W 09000W          34  1            4 XI3                     77
W 09000W          34  1            5 YJ4                     46
W 09000W          34  2            6 ZK5                     57
W 09000W          34  2            7 AL6                     64
W 09000W          34  2            8 BM7                     14
W 09000W          34  2            9 CN8                     30
W 09000W          34  2           10 DO9                     62
W 09000W          34  3           11 EA0                     45
W 09000W          34  3           12 FB1                     68
W 09000W          34  3           13 GC2                     25
W 09000W          34  3           14 HD3                     73
W 09000W          34  3           15 IE4                     32
W 09000W          34  4           10 REPLACEMENT-NJ9         10
W 09000W          34  4           16 JF5                     49
W 09000W          34  4           17 KG6                     23
W 09000W          34  4           18 LH7                     60
W 09000W          34  4           19 MI8                     75
W 09000W          34  5           21 OK0                     91
W 09000W          34  5           22 PL1                     66
W 09000W          34  5           23 QM2                     45
W 09000W          34  5           24 RN3                     49
W 09000W          34  5           25 TE9                      6

Using the sample table and data provided, are there any other solutions to the OP’s requests?
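
As a starting point for comparison, here is a sketch of one alternate approach (using the same sample data – not necessarily a better solution): conditional aggregation in a HAVING clause locates the “work orders” that contain both the MI8 and KG6 “parts” but not the REPLACEMENT-NJ9 “part”, without the analytic functions:

SELECT
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID
FROM
  T1 T
WHERE
  T.PART_ID IN ('MI8','KG6','REPLACEMENT-NJ9')
GROUP BY
  T.WORKORDER_TYPE,
  T.WORKORDER_BASE_ID,
  T.WORKORDER_LOT_ID
HAVING
  COUNT(DISTINCT CASE WHEN T.PART_ID IN ('MI8','KG6') THEN T.PART_ID END)=2
  AND COUNT(DISTINCT CASE WHEN T.PART_ID='REPLACEMENT-NJ9' THEN T.PART_ID END)=0;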





Grouping Data Sets by Week Number of the Month

1 05 2013

May 1, 2013

I saw a decent SQL brain teaser this morning in the comp.databases.oracle.server Usenet group.  The OP in the message thread is attempting to summarize data in one of his tables, with the summarizations broken down by month and then the week within that month.  Increasing the challenge, the OP required that the dates defining the week start and end dates fall within the month’s data set:

“Today is 4/30/2013.  The we_end [last day of the week containing April 30, 2013] value is 5/4/2013.  What I am really looking for is the weeks within the calendar month.  So, the first week for April 2013 should show 4/1 – 4/6.  And the last week should show 4/28 – 4/30″

How would you solve that particular problem?  Is there an Oracle built-in function that provides the grouping requested by the OP?

There are a couple different methods to solve the problem posed by the OP.  One method uses a virtual lookup table to determine the start date and end date of each week in the month, using the restriction that the start date of each week or the end date of each week might need to be adjusted to fall within the same month.  Interestingly, with the restrictions specified by the OP, there are 62 weeks this year.

As with all SQL statements, there must be a sensible starting point.  First, let’s determine the start and end dates of each month in 2013.  The month start date is easy to determine, while outputting the month end date is a little more challenging – that date is one day less than the start of the next month:

SELECT 
  ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)) MONTH_START, 
  ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),LEVEL)-1 MONTH_END 
FROM 
  DUAL 
CONNECT BY 
  LEVEL<=12; 

MONTH_STA MONTH_END 
--------- --------- 
01-JAN-13 31-JAN-13 
01-FEB-13 28-FEB-13 
01-MAR-13 31-MAR-13 
01-APR-13 30-APR-13 
01-MAY-13 31-MAY-13 
01-JUN-13 30-JUN-13 
01-JUL-13 31-JUL-13 
01-AUG-13 31-AUG-13 
01-SEP-13 30-SEP-13 
01-OCT-13 31-OCT-13 
01-NOV-13 30-NOV-13 
01-DEC-13 31-DEC-13

There are a couple of approaches that could be taken for the next step, finding the start of the first, second, third, fourth, fifth, and potentially sixth week in each of those months.  I am interested in determining the start of the second week – we are able to accomplish that task by using the NEXT_DAY function to find the next Sunday after the first of the month.  The end of the first week will also be calculated, even though it is not used in later calculations.  Note that this solution is subject to problems depending on the NLS settings on the client (and whether or not Sunday is considered the first day of the week):

SELECT 
  ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)) MONTH_START, 
  ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),LEVEL)-1 MONTH_END, 
  NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1))-1,'SATURDAY') END_FIRST_WEEK,
  NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)),'SUNDAY') START_SECOND_WEEK
 FROM 
  DUAL 
CONNECT BY 
  LEVEL<=12; 

MONTH_STA MONTH_END END_FIRST START_SEC 
--------- --------- --------- --------- 
01-JAN-13 31-JAN-13 05-JAN-13 06-JAN-13 
01-FEB-13 28-FEB-13 02-FEB-13 03-FEB-13 
01-MAR-13 31-MAR-13 02-MAR-13 03-MAR-13 
01-APR-13 30-APR-13 06-APR-13 07-APR-13 
01-MAY-13 31-MAY-13 04-MAY-13 05-MAY-13 
01-JUN-13 30-JUN-13 01-JUN-13 02-JUN-13 
01-JUL-13 31-JUL-13 06-JUL-13 07-JUL-13 
01-AUG-13 31-AUG-13 03-AUG-13 04-AUG-13 
01-SEP-13 30-SEP-13 07-SEP-13 08-SEP-13 
01-OCT-13 31-OCT-13 05-OCT-13 06-OCT-13 
01-NOV-13 30-NOV-13 02-NOV-13 03-NOV-13 
01-DEC-13 31-DEC-13 07-DEC-13 08-DEC-13

The above may appear to be a collection of related, but seemingly useless, dates.  However, that is not the case.  Next, we will need a method to count through as many as six weeks per month.  This should work:

SELECT 
  LEVEL WEEK_NUMBER 
FROM 
  DUAL 
CONNECT BY 
  LEVEL<=6; 

WEEK_NUMBER 
----------- 
          1 
          2 
          3 
          4 
          5 
          6

Next, a Cartesian join will be created between the two row sources to permit stepping through each month, and each week within that month (we will refine the output later… note that this unrefined output provides a clue for a second method of solving the problem posed by the OP):

SELECT 
  MONTHS.MONTH_START, 
  WN.WEEK_NUMBER, 
  MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7 START_WEEK, 
  MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7 - 1 END_WEEK 
FROM 
  (SELECT 
    ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)) MONTH_START, 
    ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),LEVEL)-1 MONTH_END, 
    NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1))-1,'SATURDAY') END_FIRST_WEEK,
    NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)),'SUNDAY') START_SECOND_WEEK
   FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=12) MONTHS, 
  (SELECT 
    LEVEL WEEK_NUMBER 
  FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=6) WN 
WHERE 
  (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7)<=MONTHS.MONTH_END; 

MONTH_STA WEEK_NUMBER START_WEE END_WEEK 
--------- ----------- --------- --------- 
01-JAN-13           1 30-DEC-12 05-JAN-13 
01-JAN-13           2 06-JAN-13 12-JAN-13 
01-JAN-13           3 13-JAN-13 19-JAN-13 
01-JAN-13           4 20-JAN-13 26-JAN-13 
01-JAN-13           5 27-JAN-13 02-FEB-13 
01-FEB-13           1 27-JAN-13 02-FEB-13 
01-FEB-13           2 03-FEB-13 09-FEB-13 
01-FEB-13           3 10-FEB-13 16-FEB-13 
01-FEB-13           4 17-FEB-13 23-FEB-13 
01-FEB-13           5 24-FEB-13 02-MAR-13 
01-MAR-13           1 24-FEB-13 02-MAR-13 
01-MAR-13           2 03-MAR-13 09-MAR-13 
01-MAR-13           3 10-MAR-13 16-MAR-13 
01-MAR-13           4 17-MAR-13 23-MAR-13 
01-MAR-13           5 24-MAR-13 30-MAR-13 
01-MAR-13           6 31-MAR-13 06-APR-13 
01-APR-13           1 31-MAR-13 06-APR-13 
01-APR-13           2 07-APR-13 13-APR-13 
01-APR-13           3 14-APR-13 20-APR-13 
01-APR-13           4 21-APR-13 27-APR-13 
01-APR-13           5 28-APR-13 04-MAY-13 
01-MAY-13           1 28-APR-13 04-MAY-13 
01-MAY-13           2 05-MAY-13 11-MAY-13 
01-MAY-13           3 12-MAY-13 18-MAY-13 
01-MAY-13           4 19-MAY-13 25-MAY-13 
01-MAY-13           5 26-MAY-13 01-JUN-13 
01-JUN-13           1 26-MAY-13 01-JUN-13 
...

The above output has a slight problem – the Sunday in the first week of the month may not be in the month of interest; additionally, the Saturday in the last week of the month may not be in the month of interest.  We may overcome those problems with the help of the DECODE function (the CASE expression may be used instead, if so desired):

SELECT 
  MONTHS.MONTH_START, 
  WN.WEEK_NUMBER, 
  DECODE(WN.WEEK_NUMBER, 1, MONTHS.MONTH_START, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7) START_WEEK,
  DECODE(SIGN(MONTHS.MONTH_END - (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7)), -1, MONTHS.MONTH_END, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7 - 1) END_WEEK
FROM 
  (SELECT 
    ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)) MONTH_START, 
    ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),LEVEL)-1 MONTH_END, 
    NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1))-1,'SATURDAY') END_FIRST_WEEK,
     NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)),'SUNDAY') START_SECOND_WEEK
   FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=12) MONTHS, 
  (SELECT 
    LEVEL WEEK_NUMBER 
  FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=6) WN 
WHERE 
  (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7)<=MONTHS.MONTH_END; 

MONTH_STA WEEK_NUMBER START_WEE END_WEEK 
--------- ----------- --------- --------- 
01-JAN-13           1 01-JAN-13 05-JAN-13 
01-JAN-13           2 06-JAN-13 12-JAN-13 
01-JAN-13           3 13-JAN-13 19-JAN-13 
01-JAN-13           4 20-JAN-13 26-JAN-13 
01-JAN-13           5 27-JAN-13 31-JAN-13 
01-FEB-13           1 01-FEB-13 02-FEB-13 
01-FEB-13           2 03-FEB-13 09-FEB-13 
01-FEB-13           3 10-FEB-13 16-FEB-13 
01-FEB-13           4 17-FEB-13 23-FEB-13 
01-FEB-13           5 24-FEB-13 28-FEB-13 
01-MAR-13           1 01-MAR-13 02-MAR-13 
01-MAR-13           2 03-MAR-13 09-MAR-13 
01-MAR-13           3 10-MAR-13 16-MAR-13 
01-MAR-13           4 17-MAR-13 23-MAR-13 
01-MAR-13           5 24-MAR-13 30-MAR-13 
01-MAR-13           6 31-MAR-13 31-MAR-13 
01-APR-13           1 01-APR-13 06-APR-13 
01-APR-13           2 07-APR-13 13-APR-13 
01-APR-13           3 14-APR-13 20-APR-13 
01-APR-13           4 21-APR-13 27-APR-13 
01-APR-13           5 28-APR-13 30-APR-13 
01-MAY-13           1 01-MAY-13 04-MAY-13 
01-MAY-13           2 05-MAY-13 11-MAY-13 
01-MAY-13           3 12-MAY-13 18-MAY-13 
01-MAY-13           4 19-MAY-13 25-MAY-13 
01-MAY-13           5 26-MAY-13 31-MAY-13 
... 
01-DEC-13           5 29-DEC-13 31-DEC-13 

62 rows selected.

Yes, this solution added 10 extra weeks to the year.  :-)

It is easy to extend this solution to cover more than just the year 2013.  For example, the following modification sets the starting period to January 2000, and runs for 20 years:

SELECT 
  MONTHS.MONTH_START, 
  WN.WEEK_NUMBER, 
  DECODE(WN.WEEK_NUMBER, 1, MONTHS.MONTH_START, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7) START_WEEK,
  DECODE(SIGN(MONTHS.MONTH_END - (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7)), -1, MONTHS.MONTH_END, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7 - 1) END_WEEK
FROM 
  (SELECT 
    ADD_MONTHS(TO_DATE('01-01-2000','MM-DD-YYYY'),(LEVEL-1)) MONTH_START, 
    ADD_MONTHS(TO_DATE('01-01-2000','MM-DD-YYYY'),LEVEL)-1 MONTH_END, 
    NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2000','MM-DD-YYYY'),(LEVEL-1))-1,'SATURDAY') END_FIRST_WEEK,
     NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2000','MM-DD-YYYY'),(LEVEL-1)),'SUNDAY') START_SECOND_WEEK
   FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=(12*20)) MONTHS, 
  (SELECT 
    LEVEL WEEK_NUMBER 
  FROM 
    DUAL 
  CONNECT BY 
    LEVEL<=6) WN 
WHERE 
  (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7)<=MONTHS.MONTH_END;

With the virtual lookup table complete, the next step would be to slide the above SQL statement into an inline view, and then join it to the data set that will be analyzed.  Assuming that the data set has a column named DATE_SENT, the WHERE clause may appear similar to the following:

WHERE
  DATE_SENT BETWEEN START_WEEK AND END_WEEK
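
As a rough sketch of how that join might appear (a hypothetical table named ORDERS with a DATE_SENT column is assumed here – it is not part of the sample data – and the 2013 virtual lookup table from above is reused unchanged, simply counting the rows that fall into each week of each month):

SELECT
  W.MONTH_START,
  W.WEEK_NUMBER,
  COUNT(*) ROWS_IN_WEEK
FROM
  (SELECT
    MONTHS.MONTH_START,
    WN.WEEK_NUMBER,
    DECODE(WN.WEEK_NUMBER, 1, MONTHS.MONTH_START, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7) START_WEEK,
    DECODE(SIGN(MONTHS.MONTH_END - (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7)), -1, MONTHS.MONTH_END, MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-1)*7 - 1) END_WEEK
  FROM
    (SELECT
      ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)) MONTH_START,
      ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),LEVEL)-1 MONTH_END,
      NEXT_DAY(ADD_MONTHS(TO_DATE('01-01-2013','MM-DD-YYYY'),(LEVEL-1)),'SUNDAY') START_SECOND_WEEK
    FROM
      DUAL
    CONNECT BY
      LEVEL<=12) MONTHS,
    (SELECT
      LEVEL WEEK_NUMBER
    FROM
      DUAL
    CONNECT BY
      LEVEL<=6) WN
  WHERE
    (MONTHS.START_SECOND_WEEK + (WN.WEEK_NUMBER-2)*7)<=MONTHS.MONTH_END) W,
  ORDERS O
WHERE
  O.DATE_SENT BETWEEN W.START_WEEK AND W.END_WEEK
GROUP BY
  W.MONTH_START,
  W.WEEK_NUMBER
ORDER BY
  W.MONTH_START,
  W.WEEK_NUMBER;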

The above is one solution; are there other solutions?  What about a solution that uses a little simple mathematics?  Such a solution could be considerably more efficient.  First, we will create a simple table for experimentation with 1000 rows containing somewhat random dates on or after January 1, 2013:

CREATE TABLE T1 AS
SELECT
  TRUNC(TO_DATE('01-01-2013','MM-DD-YYYY')+DBMS_RANDOM.VALUE*366) DATE_SENT
FROM
  DUAL
CONNECT BY
  LEVEL<=1000;

Next, we will generate a SQL statement that produces output that is somewhat similar to the unrefined output found in the first solution (note that the SQL statement contains a DISTINCT clause to eliminate duplicate rows – there could be several rows in the row source with the same random date value):

SELECT DISTINCT
  DATE_SENT,
  TRUNC(DATE_SENT,'MM') MONTH_START,
  ADD_MONTHS(TRUNC(DATE_SENT,'MM'),1)-1 MONTH_END,
  NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY') MONTH_WEEK_START
FROM
  T1
ORDER BY
  DATE_SENT;

DATE_SENT MONTH_STA MONTH_END MONTH_WEE
--------- --------- --------- ---------
01-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
02-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
03-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
04-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
05-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
06-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
07-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
08-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
09-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
10-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
11-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
12-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
13-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
14-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
15-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
16-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
17-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
18-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
19-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
20-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
21-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
22-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
23-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
24-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
25-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
26-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
27-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
28-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
29-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
30-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
31-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12
01-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13
02-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13
03-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13
...
29-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13
30-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13
31-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13
01-JAN-14 01-JAN-14 31-JAN-14 29-DEC-13

In the above, the MONTH_WEEK_START column shows the Sunday of the week that includes the DATE_SENT column value.  At this point, it makes no difference if the MONTH_WEEK_START column value is in a different month than the DATE_SENT column value.

Next, we will determine the week within the month in which the DATE_SENT column value appears:

SELECT DISTINCT
  DATE_SENT,
  TRUNC(DATE_SENT,'MM') MONTH_START,
  ADD_MONTHS(TRUNC(DATE_SENT,'MM'),1)-1 MONTH_END,
  NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY') MONTH_WEEK_START,
  TRUNC((DATE_SENT-(NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY')))/7)+1 WEEK_NUMBER
FROM
  T1
ORDER BY
  DATE_SENT;

DATE_SENT MONTH_STA MONTH_END MONTH_WEE WEEK_NUMBER
--------- --------- --------- --------- -----------
01-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           1
02-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           1
03-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           1
04-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           1
05-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           1
06-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
07-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
08-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
09-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
10-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
11-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
12-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           2
13-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
14-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
15-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
16-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
17-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
18-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
19-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           3
20-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
21-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
22-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
23-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
24-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
25-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
26-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           4
27-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           5
28-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           5
29-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           5
30-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           5
31-JAN-13 01-JAN-13 31-JAN-13 30-DEC-12           5
01-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13           1
02-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13           1
03-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13           2
04-FEB-13 01-FEB-13 28-FEB-13 27-JAN-13           2
...
24-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           4
26-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           4
27-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           4
28-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           4
29-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           5
30-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           5
31-DEC-13 01-DEC-13 31-DEC-13 01-DEC-13           5
01-JAN-14 01-JAN-14 31-JAN-14 29-DEC-13           1

Heading toward the request made by the OP in the Usenet thread, the following SQL statement will determine a simple COUNT of the number of rows that have a DATE_SENT date in each week of each month in the T1 table’s data set (note that if there are no rows in a given week, that week will be skipped in the output):

SELECT
  TRUNC(DATE_SENT,'MM') MONTH_START,
  TRUNC((DATE_SENT-(NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY')))/7)+1 WEEK_NUMBER,
  COUNT(*) DAYS_IN_SET
FROM
  T1
GROUP BY
  TRUNC(DATE_SENT,'MM'),
  TRUNC((DATE_SENT-(NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY')))/7)+1
ORDER BY
  1,
  2;

MONTH_STA WEEK_NUMBER DAYS_IN_SET
--------- ----------- -----------
01-JAN-13           1          13
01-JAN-13           2          15
01-JAN-13           3          23
01-JAN-13           4          21
01-JAN-13           5          13
01-FEB-13           1           5
01-FEB-13           2          21
01-FEB-13           3          15
01-FEB-13           4          14
01-FEB-13           5          11
01-MAR-13           1           4
01-MAR-13           2          18
01-MAR-13           3          19
01-MAR-13           4          20
01-MAR-13           5          20
01-MAR-13           6           4
01-APR-13           1          14
01-APR-13           2          26
01-APR-13           3          15
01-APR-13           4          23
01-APR-13           5           9
01-MAY-13           1          12
01-MAY-13           2          15
01-MAY-13           3          22
01-MAY-13           4          23
01-MAY-13           5          17
01-JUN-13           1           1
01-JUN-13           2          14
01-JUN-13           3          21
01-JUN-13           4          18
01-JUN-13           5          21
01-JUN-13           6           4
01-JUL-13           1          15
01-JUL-13           2          21
01-JUL-13           3          13
01-JUL-13           4          24
01-JUL-13           5          14
01-AUG-13           1           6
01-AUG-13           2          10
01-AUG-13           3          21
01-AUG-13           4          22
01-AUG-13           5          28
01-SEP-13           1          16
01-SEP-13           2          16
01-SEP-13           3          22
01-SEP-13           4          23
01-SEP-13           5           8
01-OCT-13           1          15
01-OCT-13           2          15
01-OCT-13           3          17
01-OCT-13           4          18
01-OCT-13           5          21
01-NOV-13           1           6
01-NOV-13           2          20
01-NOV-13           3          20
01-NOV-13           4          17
01-NOV-13           5          18
01-DEC-13           1          22
01-DEC-13           2          15
01-DEC-13           3          19
01-DEC-13           4          20
01-DEC-13           5           6
01-JAN-14           1           1

It might be just my opinion, but the mathematical solution appears to be easier to understand than the method that uses the virtual lookup table.  We are still able to derive the start date and end date of each week, if needed, to produce the OP’s solution.
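
As a brief sketch of that claim (using the same T1 table, and subject to the same caveat about Sunday being treated as the first day of the week), the GREATEST, LEAST, and LAST_DAY functions may be used to clamp the mathematically derived week boundaries to the month:

SELECT DISTINCT
  TRUNC(DATE_SENT,'MM') MONTH_START,
  TRUNC((DATE_SENT-(NEXT_DAY(TRUNC(DATE_SENT,'MM')-7,'SUNDAY')))/7)+1 WEEK_NUMBER,
  GREATEST(TRUNC(DATE_SENT,'MM'), NEXT_DAY(DATE_SENT-7,'SUNDAY')) START_WEEK,        -- Sunday of the week, clamped to the first day of the month
  LEAST(LAST_DAY(TRUNC(DATE_SENT,'MM')), NEXT_DAY(DATE_SENT-7,'SUNDAY')+6) END_WEEK  -- Saturday of the week, clamped to the last day of the month
FROM
  T1
ORDER BY
  1,
  2;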

Might there be other solutions to the OP’s SQL brain teaser?





Analysis Challenges

25 04 2013

April 25, 2013

Roughly 12 years ago I was attempting to analyze customer order changes that were received through electronic document interchange (EDI), specifically X12 830 documents that show order forecasted demand for specific part numbers.  At the time, the EDI data was partially transformed and inserted into an Oracle 8.0.5 database, while that data is now inserted into an Oracle 11.2 database.  I recently took another glance at this particular problem to see if there is a better method for performing the analysis.

Complications in the analysis:

  • The EDI documents do not include the customer’s purchase order line numbers, just a list of outstanding due dates and quantities.  That complication means that there is no easy way to see if an outstanding quantity has changed, or the date for that outstanding quantity has changed.
  • It is possible to ship to the customer a partial order quantity, with the remaining quantity on the line handled as a back-order.  If the customer does not alter this particular forecast line, only the quantity outstanding will be provided, along with the due date – the original quantity is not provided.
  • It is possible to ship out an order line received in an 830 document up to five days before the customer’s due date.
  • Shipped parts may require hours or days to reach the customer’s site, thus delaying when those shipped parts are acknowledged in the list of outstanding due dates and quantities received through EDI.
  • EDI 830 documents are generally received the next weekday after the customer batch generates those documents on a nightly basis.  Thus, there is an opportunity for the received forecast to be out of sync with what is known to have arrived at the customer site, and what is acknowledged in the EDI data.
  • The customer may generate multiple EDI 830 documents for a particular part number/purchase order combination on any given day, and only the EDI 830 document received with the highest (sorted in ascending order) release number (which may not be a number) on a specific date is the valid release for the specific date.
  • EDI 830 documents may be generated for a specific part number by the customer nearly every weekday, once a month, or any frequency in between.

I recall trying to find a solution to this particular problem 12 years ago.  I thought that I could possibly pull all of the relevant data from the Oracle database into a custom application, and through some magical formula output what appeared to be a list of what the customer changed.  However, what about those part quantities that are in transit to the customer?  What about those part quantities that were received by the customer, but had not yet made their way through the customer’s batch processing system?  What about those cases where the customer sends two or more EDI 830 releases for a specific part number, and those EDI releases are received on the same date?

At the time, I was frustrated.  The quality of the output of my analysis would be dependent on the accuracy of humans (yes, and humans wrote the program that performed the analysis too).  I simply had to find the first matching due date and order quantity from the previous release with the due date and order quantity from the current release, and then rely on an end user to correct the 5% to 10% of the cases where the matching algorithm was wrong due to one or more of the “complications” outlined above.

While recently re-examining the problem, I noticed a data element in the EDI 830 document that is described as “ACCUM RECEIPTS AS OF THIS DATE”, and as luck would have it, that particular data element is one of those elements that was selected more than 12 years ago to be transformed into the Oracle Database tables (in the sample data included with this blog article, this column is named PREVIOUS_ACCUM_QTY).  If only I had recognized the meaning of the values of this column, and had access to Oracle analytical SQL functions 12 years ago, this particular analysis problem might not have had so many variables that could potentially throw off the quality of the analysis.

This particular blog article has a fairly large set of sample data – however, that sample data is a fairly small subset of the data that must be analyzed on a daily basis.  The sample data may be downloaded here: analysischallenges.sql sample code (save the file as “analysischallenges.sql” – without the .doc double extension).  If the file is saved in the same directory (folder) as SQL*Plus was started from, you may then execute the script in SQL*Plus to build the tables and populate the sample data by using the following command:

@analysischallenges.sql

The two sample table definitions are created with the following commands (also included in the script):

CREATE TABLE T1 (
  CUSTOMER_PO VARCHAR2(15),
  RELEASE_ID VARCHAR2(15),
  PART_ID VARCHAR2(30),
  RELEASE_DATE DATE,
  PREVIOUS_ACCUM_QTY NUMBER,
  CREATE_DATE DATE,
  PRIMARY KEY (CUSTOMER_PO,RELEASE_ID));

CREATE TABLE T1_LINE (
  CUSTOMER_PO VARCHAR2(12),
  RELEASE_ID VARCHAR2(12),
  QTY NUMBER,
  DUE_DATE DATE,
  PRIMARY KEY (CUSTOMER_PO,RELEASE_ID,DUE_DATE));

T1 Table Columns:

CUSTOMER_PO is the unique identifier for the customer purchase order received through the EDI 830 documents.  RELEASE_ID defines the specific serialized change number for the customer’s PO – the RELEASE_ID is unique for a specific PO.  PART_ID is the part number for which the customer is providing a forecast.  RELEASE_DATE is the date that the forecast was batch generated in the customer’s computer system.  PREVIOUS_ACCUM_QTY is the count of the number of this part number that the customer has already received for this specific customer purchase order.  CREATE_DATE is the date that the forecast was received from the customer and transformed/inserted into the Oracle Database tables.

T1_LINE Table Columns:

CUSTOMER_PO and RELEASE_ID are undeclared foreign key columns that point back to table T1.  QTY is the quantity of the part number that should (or is forecasted to) ship on the specified DUE_DATE.

—–

If you are curious how I created the script to populate the tables, I executed the following in SQL*Plus, and then performed a little cleanup of the resulting spool files:

SET LINESIZE 200
SET TRIMSPOOL ON

SPOOL analysischallenges_t1.txt

SELECT
  'INSERT INTO T1 VALUES ('||CHR(39)||CUSTOMER_PO||CHR(39)||','
    ||CHR(39)||RELEASE_ID||CHR(39)||','
    ||CHR(39)||PART_ID||CHR(39)||','
    ||'TO_DATE('||CHR(39)||TO_CHAR(RELEASE_DATE,'DD-MON-YYYY')||CHR(39)||','||CHR(39)||'DD-MON-YYYY'||CHR(39)||')'||','
    ||PREVIOUS_ACCUM_QTY||','
    ||'TO_DATE('||CHR(39)||TO_CHAR(CREATE_DATE,'DD-MON-YYYY')||CHR(39)||','||CHR(39)||'DD-MON-YYYY'||CHR(39)||')'||');' D
FROM
  T1
ORDER BY
  CUSTOMER_PO,
  RELEASE_ID;

SPOOL OFF

SPOOL analysischallenges_t1_line.txt

SELECT
  'INSERT INTO T1_LINE VALUES ('||CHR(39)||CUSTOMER_PO||CHR(39)||','
    ||CHR(39)||RELEASE_ID||CHR(39)||','
    ||QTY||','
    ||'TO_DATE('||CHR(39)||TO_CHAR(DUE_DATE,'DD-MON-YYYY')||CHR(39)||','||CHR(39)||'DD-MON-YYYY'||CHR(39)||')'||');' D
FROM
  T1_LINE
ORDER BY
  CUSTOMER_PO,
  RELEASE_ID,
  DUE_DATE;

SPOOL OFF

—–

We must find a starting point for performing the analysis of what changed from one day’s forecast to a prior day’s forecast.  There is an issue with the EDI 830 releases that were received (identified by the CREATE_DATE column) on April 24, 2013 – two releases for the same customer purchase orders were received, and there were two purchase orders that demand the same part number.  Our analysis is only concerned with the highest numbered (sorted alphabetically) release number document for each customer purchase order that was received on any particular day, so we need to find a way to eliminate the other older releases for the same customer purchase order that was received on the same day.  The ROW_NUMBER analytic function is perfect for this task – we just need the output to show the same number (1) for the rows that we want to keep, and have some other number appear on the other rows.  Something similar to the following may work (note that the TRUNC function, or other approach that uses a date/time range, probably should be used when working with the CREATE_DATE column – realistically, there could be a time component included with those date values):

COLUMN PART_ID FORMAT A10
SET LINESIZE 120
SET PAGESIZE 1000

SELECT
  T.PART_ID,
  T.CUSTOMER_PO,
  ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
  T.RELEASE_ID,
  T.RELEASE_DATE,
  T.CREATE_DATE,
  T.PREVIOUS_ACCUM_QTY
FROM
  T1 T
WHERE 
  T.CREATE_DATE=TO_DATE('24-APR-2013','DD-MON-YYYY')
ORDER BY
  T.PART_ID,
  T.CUSTOMER_PO,
  T.RELEASE_ID;

PART_ID    CUSTOMER_PO     REL_ON_DATE RELEASE_ID      RELEASE_D CREATE_DA PREVIOUS_ACCUM_QTY
---------- --------------- ----------- --------------- --------- --------- ------------------
DS1812+    1231234                   2 20130402-001    23-APR-13 24-APR-13                 24
DS1812+    1231234                   1 20130403-001    24-APR-13 24-APR-13                 24
DS1812+    1233290                   2 20130402-001    23-APR-13 24-APR-13                  4
DS1812+    1233290                   1 20130403-001    24-APR-13 24-APR-13                  4
DS212+     1133290                   2 20130402-001    23-APR-13 24-APR-13                  2
DS212+     1133290                   1 20130403-001    24-APR-13 24-APR-13                  2
RS812      1231280                   2 20130402-001    23-APR-13 24-APR-13                 60
RS812      1231280                   1 20130403-001    24-APR-13 24-APR-13                 60

To remove all of the rows that have something other than 1 in the REL_ON_DATE column, we are able to slide the above SQL statement into an inline view, and then specify that the REL_ON_DATE column value must be equal to 1.  Typically, there are supposed to be few/no changes made by the customer within eight weeks of the customer forecast’s release date, but that is not always the case.  While eliminating the unnecessary releases, it probably makes sense to also calculate the end date of this “firm” period by adding 55 days to the date on which the forecast was created:

SELECT
  T.PART_ID,
  T.CUSTOMER_PO,
  T.RELEASE_ID,
  T.RELEASE_DATE+55 POSSIBLE_FIRM_DATE,
  T.PREVIOUS_ACCUM_QTY
FROM
  (SELECT
    T.PART_ID,
    T.CUSTOMER_PO,
    ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
    T.RELEASE_ID,
    T.RELEASE_DATE,
    T.CREATE_DATE,
    T.PREVIOUS_ACCUM_QTY
  FROM
    T1 T
  WHERE 
    T.CREATE_DATE=TO_DATE('24-APR-2013','DD-MON-YYYY')) T
WHERE
  T.REL_ON_DATE=1
ORDER BY
  T.PART_ID,
  T.CUSTOMER_PO,
  T.RELEASE_ID;

PART_ID    CUSTOMER_PO     RELEASE_ID      POSSIBLE_ PREVIOUS_ACCUM_QTY
---------- --------------- --------------- --------- ------------------
DS1812+    1231234         20130403-001    18-JUN-13                 24
DS1812+    1233290         20130403-001    18-JUN-13                  4
DS212+     1133290         20130403-001    18-JUN-13                  2
RS812      1231280         20130403-001    18-JUN-13                 60

Next, we need to retrieve the forecasted delivery schedule (in the T1_LINE table) that is associated with the above rows.  If we take the value in the PREVIOUS_ACCUM_QTY column, and find a way to generate a running SUM of the QTY column from the T1_LINE table, in theory we should be able to determine if the total quantity ordered through a specific date has changed.  The analytic version of the SUM function offers the capability of producing a running SUM when an ORDER BY clause is included in the function call.  To reduce the data set size, I will only consider those forecasts with a scheduled delivery date that is within 101 days of the customer’s forecast date.  With those changes in place, the SQL statement is modified as follows:

SELECT
  T.PART_ID,
  T.CUSTOMER_PO,
  T.RELEASE_ID,
  T.RELEASE_DATE+55 POSSIBLE_FIRM_DATE,
  T.PREVIOUS_ACCUM_QTY,
  TL.DUE_DATE,
  TL.QTY,
  T.PREVIOUS_ACCUM_QTY + SUM(TL.QTY) OVER (PARTITION BY T.CUSTOMER_PO,T.RELEASE_ID ORDER BY TL.DUE_DATE) REL_RUNNING_SUM
FROM
  (SELECT
    T.PART_ID,
    T.CUSTOMER_PO,
    ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
    T.RELEASE_ID,
    T.RELEASE_DATE,
    T.CREATE_DATE,
    T.PREVIOUS_ACCUM_QTY
  FROM
    T1 T
  WHERE 
    T.CREATE_DATE=TO_DATE('24-APR-2013','DD-MON-YYYY')) T,
  T1_LINE TL
WHERE
  T.REL_ON_DATE=1
  AND T.CUSTOMER_PO=TL.CUSTOMER_PO
  AND T.RELEASE_ID=TL.RELEASE_ID
  AND (T.RELEASE_DATE+100)>=TL.DUE_DATE
ORDER BY
  T.PART_ID,
  T.CUSTOMER_PO,
  T.RELEASE_ID,
  TL.DUE_DATE;

PART_ID    CUSTOMER_PO     RELEASE_ID      POSSIBLE_ PREVIOUS_ACCUM_QTY DUE_DATE         QTY REL_RUNNING_SUM
---------- --------------- --------------- --------- ------------------ --------- ---------- ---------------
DS1812+    1231234         20130403-001    18-JUN-13                 24 24-APR-13          0              24
DS1812+    1233290         20130403-001    18-JUN-13                  4 24-APR-13          0               4
DS212+     1133290         20130403-001    18-JUN-13                  2 24-APR-13          0               2
RS812      1231280         20130403-001    18-JUN-13                 60 27-JUN-13          1              61
RS812      1231280         20130403-001    18-JUN-13                 60 03-JUL-13          1              62
RS812      1231280         20130403-001    18-JUN-13                 60 24-JUL-13          2              64

As may be seen above, only customer purchase order 1231280 has a forecast, calling for a quantity of 1 on June 27, quantity of 1 on July 3, and a quantity of 2 on July 24.  The previously received quantity for customer purchase order 1231280 is 60, so the additional forecast quantities will bring the total order quantities up to 61, 62, and 64 through the three dates included in the forecast.

-The Top Half of the Final SQL Statement is Above-

So, we now have a starting point – the data that was received on April 24, 2013.  How does that forecast data compare with previously received forecasts?  We need to start building the second half of the SQL statement, which looks at the historical data.  To limit the dataset size, I will restrict the data to only those EDI documents that were received during the 56 days (the eight week “firm” period) prior to the April 24, 2013 date that was specified earlier in this article:

SELECT
  T.CUSTOMER_PO,
  ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
  TRUNC(T.CREATE_DATE) CREATE_DATE,
  T.RELEASE_ID,
  T.PREVIOUS_ACCUM_QTY
FROM
  T1 T
WHERE 
  T.CREATE_DATE >= TO_DATE('24-APR-2013','DD-MON-YYYY') - 56
  AND T.CREATE_DATE < TO_DATE('24-APR-2013','DD-MON-YYYY')
ORDER BY
  T.CUSTOMER_PO,
  T.RELEASE_ID;

CUSTOMER_PO     REL_ON_DATE CREATE_DA RELEASE_ID      PREVIOUS_ACCUM_QTY
--------------- ----------- --------- --------------- ------------------
1131111                   1 27-FEB-13 0371                           575
1131111                   1 01-MAR-13 0381                           577
1131111                   1 06-MAR-13 0431                           578
1131111                   1 07-MAR-13 0441                           581
1131111                   1 08-MAR-13 0451                           581
1131111                   1 11-MAR-13 0461                           581
...
1131111                   1 08-APR-13 0741                           593
1131111                   1 11-APR-13 0791                           593
1131111                   1 23-APR-13 0911                           595
1131245                   1 27-FEB-13 0371                          2299
1131245                   1 01-MAR-13 0381                          2300
1131245                   1 05-MAR-13 0421                          2303
...
1131745                   1 19-APR-13 0871                           435
1131745                   1 23-APR-13 0911                           435
1133290                   1 27-FEB-13 20130205-001                     2
1133290                   1 28-FEB-13 20130206-001                     2
1133290                   1 01-MAR-13 20130207-001                     2
...
1133290                   1 19-APR-13 20130328-001                     2
1133290                   1 23-APR-13 20130401-001                     2
1231234                   1 27-FEB-13 20130205-001                    24
1231234                   1 28-FEB-13 20130206-001                    24
...
1231234                   1 23-APR-13 20130401-001                    24
1231280                   1 27-FEB-13 20130205-001                    57
1231280                   1 28-FEB-13 20130206-001                    57
1231280                   1 01-MAR-13 20130207-001                    57
1231280                   1 04-MAR-13 20130208-001                    58
1231280                   1 05-MAR-13 20130211-001                    58
...
1231280                   1 04-APR-13 20130313-001                    60
1231280                   2 05-APR-13 20130314-002                    60
...
1231280                   1 19-APR-13 20130328-001                    60
1231280                   1 23-APR-13 20130401-001                    60
1233290                   1 27-FEB-13 20130205-001                     4
1233290                   1 28-FEB-13 20130206-001                     4
...

As we found earlier, there could be more than one release for a customer purchase order received on a single date, and we are only concerned with the most recent release received on any particular day.  We have a second issue, in that the dates may have shifted from one release to the next, so we have to artificially generate rows with the potentially missing dates, with an order quantity of 0 on those dates.

But, how to generate a list of dates?  We could use a CONNECT BY LEVEL trick:

SELECT
  TO_DATE('24-APR-2013','DD-MON-YYYY')+(LEVEL-1) DUE_DATE
FROM
  DUAL
CONNECT BY
  LEVEL<=100;

DUE_DATE
---------
24-APR-13
25-APR-13
26-APR-13
27-APR-13
28-APR-13
...
15-JUN-13
16-JUN-13
17-JUN-13
18-JUN-13
...
30-JUL-13
31-JUL-13
01-AUG-13

Now, by creating a Cartesian join between the dataset with the dates and the dataset with the EDI release header records, we will make certain that a row exists for every date that is potentially of interest (up to 100 days after April 24, 2013) when we later join the result with the T1_LINE table (the child table that lists the forecast delivery dates).

SELECT
  T2.CUSTOMER_PO,
  T2.RELEASE_ID,
  T2.CREATE_DATE,
  T2.PREVIOUS_ACCUM_QTY,
  D.DUE_DATE
FROM
  (SELECT
    T.CUSTOMER_PO,
    ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
    TRUNC(T.CREATE_DATE) CREATE_DATE,
    T.RELEASE_ID,
    T.PREVIOUS_ACCUM_QTY
  FROM
    T1 T
  WHERE 
    T.CREATE_DATE >= TO_DATE('24-APR-2013','DD-MON-YYYY') - 56
    AND T.CREATE_DATE < TO_DATE('24-APR-2013','DD-MON-YYYY')
  ORDER BY
    T.CUSTOMER_PO,
    T.RELEASE_ID) T2,
  (SELECT
    TO_DATE('24-APR-2013','DD-MON-YYYY')+(LEVEL-1) DUE_DATE
  FROM
    DUAL
  CONNECT BY
    LEVEL<=100) D
WHERE
  T2.REL_ON_DATE=1
ORDER BY
  T2.CUSTOMER_PO,
  T2.RELEASE_ID,
  D.DUE_DATE;

...
CUSTOMER_PO     RELEASE_ID      CREATE_DA PREVIOUS_ACCUM_QTY DUE_DATE
--------------- --------------- --------- ------------------ ---------
1233290         20130318-001    09-APR-13                  4 25-MAY-13
1233290         20130318-001    09-APR-13                  4 26-MAY-13
1233290         20130318-001    09-APR-13                  4 27-MAY-13
1233290         20130318-001    09-APR-13                  4 28-MAY-13
...
1233290         20130322-001    15-APR-13                  4 22-JUN-13
1233290         20130322-001    15-APR-13                  4 23-JUN-13
1233290         20130322-001    15-APR-13                  4 24-JUN-13
1233290         20130322-001    15-APR-13                  4 25-JUN-13
...
1233290         20130401-001    23-APR-13                  4 29-JUL-13
1233290         20130401-001    23-APR-13                  4 30-JUL-13
1233290         20130401-001    23-APR-13                  4 31-JUL-13
1233290         20130401-001    23-APR-13                  4 01-AUG-13

Next, we will outer join to the T1_LINE table to bring in the actual forecasted delivery schedule from the previous dates (note that I also removed an unnecessary ORDER BY clause that was still embedded in the SQL statement – the Oracle query optimizer probably would have realized that the ORDER BY clause was meaningless to the query result, and automatically removed it – but I removed the clause just in case my assumption is incorrect):

SELECT
  T3.CUSTOMER_PO,
  T3.CREATE_DATE,
  T3.DUE_DATE,
  T3.PREVIOUS_ACCUM_QTY + SUM(NVL(TL.QTY,0)) OVER (PARTITION BY T3.CUSTOMER_PO,T3.RELEASE_ID ORDER BY T3.DUE_DATE) REL_RUNNING_SUM
FROM
  (SELECT
    T2.CUSTOMER_PO,
    T2.RELEASE_ID,
    T2.CREATE_DATE,
    T2.PREVIOUS_ACCUM_QTY,
    D.DUE_DATE
  FROM
    (SELECT
      T.CUSTOMER_PO,
      ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
      TRUNC(T.CREATE_DATE) CREATE_DATE,
      T.RELEASE_ID,
      T.PREVIOUS_ACCUM_QTY
    FROM
      T1 T
    WHERE 
      T.CREATE_DATE >= TO_DATE('24-APR-2013','DD-MON-YYYY') - 56
      AND T.CREATE_DATE < TO_DATE('24-APR-2013','DD-MON-YYYY')) T2,
    (SELECT
      TO_DATE('24-APR-2013','DD-MON-YYYY')+(LEVEL-1) DUE_DATE
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100) D
  WHERE
    T2.REL_ON_DATE=1) T3,
  T1_LINE TL
WHERE
  T3.CUSTOMER_PO=TL.CUSTOMER_PO(+)
  AND T3.RELEASE_ID=TL.RELEASE_ID(+)
  AND T3.DUE_DATE=TL.DUE_DATE(+)
ORDER BY
  T3.CUSTOMER_PO,
  T3.RELEASE_ID,
  T3.DUE_DATE;

CUSTOMER_PO     CREATE_DA DUE_DATE  REL_RUNNING_SUM
--------------- --------- --------- ---------------
1131111         27-FEB-13 24-APR-13             577
1131111         27-FEB-13 25-APR-13             578
1131111         27-FEB-13 26-APR-13             579
1131111         27-FEB-13 27-APR-13             579
1131111         27-FEB-13 28-APR-13             579
1131111         27-FEB-13 29-APR-13             580
1131111         27-FEB-13 30-APR-13             581
...
1131111         23-APR-13 30-JUL-13             636
1131111         23-APR-13 31-JUL-13             638
1131111         23-APR-13 01-AUG-13             639
...
1231280         11-MAR-13 08-MAY-13              58
1231280         11-MAR-13 09-MAY-13              58
1231280         11-MAR-13 10-MAY-13              58
1231280         11-MAR-13 11-MAY-13              58
...
1231280         23-APR-13 21-JUL-13              62
1231280         23-APR-13 22-JUL-13              62
1231280         23-APR-13 23-JUL-13              62
1231280         23-APR-13 24-JUL-13              64
1231280         23-APR-13 25-JUL-13              64
1231280         23-APR-13 26-JUL-13              64
...

We are getting close.  Now we just need to join the SQL created earlier in this article with the above SQL statement so that the current customer purchase order forecasts may be compared with the earlier received customer purchase order forecasts.  The LISTAGG analytic function, introduced with Oracle Database 11.1, makes it easy to condense multiple rows from the resultset into a much smaller number of rows, if the resultset is grouped on at least one column.  So, the final version of the analysis challenge appears as follows:

SELECT
  C.PART_ID,
  C.CUSTOMER_PO,
  C.POSSIBLE_FIRM_DATE,
  C.DUE_DATE,
  C.QTY,
  C.REL_RUNNING_SUM,
  LISTAGG(T4.REL_RUNNING_SUM||' ('||TO_CHAR(T4.CREATE_DATE,'MM/DD/YY')||')',', ') WITHIN GROUP (ORDER BY T4.CREATE_DATE DESC) EXCEPTIONS
FROM
  (SELECT
    T.PART_ID,
    T.CUSTOMER_PO,
    T.RELEASE_ID,
    T.RELEASE_DATE+55 POSSIBLE_FIRM_DATE,
    T.PREVIOUS_ACCUM_QTY,
    TL.DUE_DATE,
    TL.QTY,
    T.PREVIOUS_ACCUM_QTY + SUM(TL.QTY) OVER (PARTITION BY T.CUSTOMER_PO,T.RELEASE_ID ORDER BY TL.DUE_DATE) REL_RUNNING_SUM
  FROM
    (SELECT
      T.PART_ID,
      T.CUSTOMER_PO,
      ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
      T.RELEASE_ID,
      T.RELEASE_DATE,
      T.CREATE_DATE,
      T.PREVIOUS_ACCUM_QTY
    FROM
      T1 T
    WHERE 
      T.CREATE_DATE=TO_DATE('24-APR-2013','DD-MON-YYYY')) T,
    T1_LINE TL
  WHERE
    T.REL_ON_DATE=1
    AND T.CUSTOMER_PO=TL.CUSTOMER_PO
    AND T.RELEASE_ID=TL.RELEASE_ID
    AND (T.RELEASE_DATE+100)>=TL.DUE_DATE) C,
  (SELECT
    T3.CUSTOMER_PO,
    T3.CREATE_DATE,
    T3.DUE_DATE,
    T3.PREVIOUS_ACCUM_QTY + SUM(NVL(TL.QTY,0)) OVER (PARTITION BY T3.CUSTOMER_PO,T3.RELEASE_ID ORDER BY T3.DUE_DATE) REL_RUNNING_SUM
  FROM
    (SELECT
      T2.CUSTOMER_PO,
      T2.RELEASE_ID,
      T2.CREATE_DATE,
      T2.PREVIOUS_ACCUM_QTY,
      D.DUE_DATE
    FROM
      (SELECT
        T.CUSTOMER_PO,
        ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
        TRUNC(T.CREATE_DATE) CREATE_DATE,
        T.RELEASE_ID,
        T.PREVIOUS_ACCUM_QTY
      FROM
        T1 T
      WHERE 
        T.CREATE_DATE >= TO_DATE('24-APR-2013','DD-MON-YYYY') - 56
        AND T.CREATE_DATE < TO_DATE('24-APR-2013','DD-MON-YYYY')) T2,
      (SELECT
        TO_DATE('24-APR-2013','DD-MON-YYYY')+(LEVEL-1) DUE_DATE
      FROM
        DUAL
      CONNECT BY
        LEVEL<=100) D
    WHERE
      T2.REL_ON_DATE=1) T3,
    T1_LINE TL
  WHERE
    T3.CUSTOMER_PO=TL.CUSTOMER_PO(+)
    AND T3.RELEASE_ID=TL.RELEASE_ID(+)
    AND T3.DUE_DATE=TL.DUE_DATE(+)) T4
WHERE
  C.CUSTOMER_PO=T4.CUSTOMER_PO
  AND C.DUE_DATE=T4.DUE_DATE
  AND C.REL_RUNNING_SUM<>T4.REL_RUNNING_SUM
GROUP BY
  C.PART_ID,
  C.CUSTOMER_PO,
  C.POSSIBLE_FIRM_DATE,
  C.DUE_DATE,
  C.QTY,
  C.REL_RUNNING_SUM
ORDER BY
  C.PART_ID,
  C.CUSTOMER_PO,
  C.DUE_DATE;

PART_ID    CUSTOMER_PO     POSSIBLE_ DUE_DATE         QTY REL_RUNNING_SUM
---------- --------------- --------- --------- ---------- ---------------
EXCEPTIONS
-----------------------------------------------------------------------------------------------------------------------
RS812      1231280         18-JUN-13 27-JUN-13          1              61
60 (03/22/13), 60 (03/21/13), 60 (03/19/13), 60 (03/18/13), 60 (03/15/13), 60 (03/14/13), 59 (03/13/13), 59 (03/12/13),
59 (03/11/13), 59 (03/08/13), 60 (03/07/13), 60 (03/06/13), 60 (03/05/13), 60 (03/04/13), 59 (03/01/13), 59 (02/28/13),
59 (02/27/13)

RS812      1231280         18-JUN-13 03-JUL-13          1              62
61 (03/22/13), 61 (03/21/13), 61 (03/19/13), 61 (03/18/13), 61 (03/15/13), 61 (03/14/13), 60 (03/13/13), 60 (03/12/13),
60 (03/11/13), 60 (03/08/13), 61 (03/07/13), 61 (03/06/13), 61 (03/05/13), 61 (03/04/13), 60 (03/01/13), 60 (02/28/13),
60 (02/27/13)

RS812      1231280         18-JUN-13 24-JUL-13          2              64
63 (03/26/13), 63 (03/25/13), 62 (03/22/13), 62 (03/21/13), 62 (03/19/13), 62 (03/18/13), 62 (03/15/13), 62 (03/14/13),
61 (03/13/13), 61 (03/12/13), 61 (03/11/13), 61 (03/08/13), 62 (03/07/13), 62 (03/06/13), 62 (03/05/13), 62 (03/04/13),
61 (03/01/13), 61 (02/28/13), 61 (02/27/13)

It is now easy to see that the requested total quantity due through June 27, 2013 increased by 2 since the first forecast which was received on February 27, 2013.  The requested total quantity increased by 1 in the EDI release that was received on March 4, 2013, and by 1 again on the next EDI release that was received after March 22, 2013.  The second and third lines of the output show that there was a little more fluctuation in the customer’s forecast, increasing in quantity, decreasing in quantity, and then increasing in quantity again.

An unfortunate problem may be present – what if the customer completely eliminates the forecast orders on their purchase order, but still sends in a header line for the forecast?  A future enhancement may be required, where the CONNECT BY LEVEL trick is used in the first half of the SQL statement (the current releases), just as it is used in the second half of the SQL statement (the historical releases), so that when the top half of the final SQL statement is joined to the bottom half, rows returned by the bottom half of the SQL statement are not unnecessarily removed from the output.
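
A rough sketch of that possible enhancement follows (this is only an untested outline, not the final implementation – it mirrors the structure of the historical half by Cartesian joining the current release headers to the generated date list and then outer joining to T1_LINE, so that a row with a quantity of 0 exists for every date of interest):

SELECT
  T3.PART_ID,
  T3.CUSTOMER_PO,
  T3.RELEASE_ID,
  T3.RELEASE_DATE+55 POSSIBLE_FIRM_DATE,
  T3.DUE_DATE,
  NVL(TL.QTY,0) QTY,
  T3.PREVIOUS_ACCUM_QTY + SUM(NVL(TL.QTY,0)) OVER (PARTITION BY T3.CUSTOMER_PO,T3.RELEASE_ID ORDER BY T3.DUE_DATE) REL_RUNNING_SUM
FROM
  (SELECT
    T2.PART_ID,
    T2.CUSTOMER_PO,
    T2.RELEASE_ID,
    T2.RELEASE_DATE,
    T2.PREVIOUS_ACCUM_QTY,
    D.DUE_DATE
  FROM
    (SELECT
      T.PART_ID,
      T.CUSTOMER_PO,
      ROW_NUMBER() OVER (PARTITION BY T.CUSTOMER_PO,TRUNC(T.CREATE_DATE) ORDER BY T.RELEASE_ID DESC) REL_ON_DATE,
      T.RELEASE_ID,
      T.RELEASE_DATE,
      T.PREVIOUS_ACCUM_QTY
    FROM
      T1 T
    WHERE
      T.CREATE_DATE=TO_DATE('24-APR-2013','DD-MON-YYYY')) T2,
    (SELECT
      TO_DATE('24-APR-2013','DD-MON-YYYY')+(LEVEL-1) DUE_DATE
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100) D
  WHERE
    T2.REL_ON_DATE=1) T3,
  T1_LINE TL
WHERE
  T3.CUSTOMER_PO=TL.CUSTOMER_PO(+)
  AND T3.RELEASE_ID=TL.RELEASE_ID(+)
  AND T3.DUE_DATE=TL.DUE_DATE(+)
ORDER BY
  T3.PART_ID,
  T3.CUSTOMER_PO,
  T3.DUE_DATE;

If a revised version along those lines were substituted for the “C” inline view in the final SQL statement, the join to the historical half would no longer drop a date simply because the current release no longer lists a delivery on that date.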

The output is a little ugly.  One approach to clean up the output involves spooling the output to an HTML file as explained here and in the documentation here.  Another approach to clean up the output involves sending the data into an Excel spreadsheet using either a macro or the Microsoft Query Wizard, accessed from within Excel (see the articles in the Excel category of this blog for an explanation of these approaches).
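
For example, a minimal SQL*Plus sketch of the HTML spool approach might look like the following (the spool file name is arbitrary):

SET MARKUP HTML ON SPOOL ON
SPOOL forecastchanges.html
REM  execute the final SQL statement shown above at this point
SPOOL OFF
SET MARKUP HTML OFF SPOOL OFF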

—–

The final solution for this analysis challenge is so simple when built up from smaller SQL statements as shown above; I wonder why I did not consider this approach 12 years ago?  :-)





Unexpected Timer Resolution, Unexpected Parked CPUs, Unexpected Power Consumption

19 04 2013

April 19, 2013 (Modified May 11, 2013, June 5, 2014)

This blog article is not purely Oracle Database specific, yet it may have some relevance to companies that run Oracle Database on the Windows Server platform (for those DBAs lucky/unlucky enough to run Oracle Database on the Windows Server platform, you may find this article interesting).

I am in the process of setting up a couple of new Windows servers to perform various non-Oracle Database tasks.  I noticed that one of the servers had an odd issue – the server would occasionally become very slow at responding to mouse movements and keyboard input, for instance taking 30 seconds to move the mouse pointer a short distance across the screen.  These servers are running Windows Server 2012, which shares the same kernel and includes much the same features as Windows 8 – with the exception that the server operating system opens to the desktop rather than Windows 8’s new start screen.

Two years ago I wrote a brain teaser article that asked how it was possible that a 10046 extended SQL trace could output c=15600,e=510 on a line of the trace file when executing a SQL statement without using parallel query – essentially asking how it was possible to consume 0.015600 seconds of CPU time in 0.000510 seconds of elapsed time when the SQL statement was restricted to running on no more than one CPU.  In the comments section of the article I mentioned the ClockRes utility, but did not provide a link for the download of the program.  So, I thought that I would run the ClockRes utility on one of the new servers, make a change to the server, and then run the ClockRes utility again:

UnexpectedClockResOutput

As can be seen above, on the first execution of ClockRes the Current timer interval was 1.001 ms, while on the second execution of the ClockRes program the Current timer interval was 15.626 ms.  There is an odd similarity between that 15.626ms time (which oddly exceeds the reported Maximum timer interval of 15.625ms) and the c=15600 reported in the Oracle 10046 extended SQL trace file.  So, what change did I make to the server between the first execution of ClockRes utility and the second execution?  For now I will just say that I stopped one of the background services on the server (more later).

I recall performing an experiment a couple of years ago with Oracle Database.  I downloaded a utility that offered to change the Windows default timer resolution from 15.625ms to 1.0ms.  That utility did in fact change the Windows timer resolution, resulting in Oracle Database outputting c= values in increments of 1000, rather than in increments of 15600.  If I am remembering correctly, a second outcome of the experiment was a decrease in performance of the test Oracle database on the computer due to the higher resolution of the Windows timer.

Could the change in the resolution of the Windows timer from the Windows default of 15.625ms to 1.001ms be responsible for the occasionally sluggish performance of the server?  One article that I found (and unfortunately did not save the link to) claimed that adjusting the Windows timer from the default of 15.625ms to a lower value, 1ms for example, could cause a significant negative impact in multitasking system performance (roughly 30% decrease, if I recall correctly).  I located an article on Microsoft’s website that offered some level of clarification, below is a short quote from the article:

“Applications can call timeBeginPeriod to increase the timer resolution. The maximum resolution of 1 ms is used to support graphical animations, audio playback, or video playback. This not only increases the timer resolution for the application to 1 ms, but also affects the global system timer resolution, because Windows uses at least the highest resolution (that is, the lowest interval) that any application requests. Therefore, if only one application requests a timer resolution of 1 ms, the system timer sets the interval (also called the “system timer tick”) to at least 1 ms. For more information, see “timeBeginPeriod Function” on the MSDN® website.

Modern processors and chipsets, particularly in portable platforms, use the idle time between system timer intervals to reduce system power consumption. Various processor and chipset components are placed into low-power idle states between timer intervals. However, these low-power idle states are often ineffective at lowering system power consumption when the system timer interval is less than the default.

If the system timer interval is decreased to less than the default, including when an application calls timeBeginPeriod with a resolution of 1 ms, the low-power idle states are ineffective at reducing system power consumption and system battery life suffers.”

The above mentioned Microsoft article also suggested running the following command from the Windows command line:

powercfg /energy

I had actually executed the above command before running the ClockRes program for the first time, and again after running the ClockRes program for the second time.  A very small portion of the powercfg generated HTML file follows, generated prior to the first execution of ClockRes:

Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 10009
Maximum Timer Period (100ns units) 156250

Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 536
Requesting Process Path \Device\HarddiskVolume4\PROGRA~2\APC\POWERC~1\agent\pbeagent.exe

This is the same section of the generated HTML file, generated after the second execution of ClockRes:

Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 156261

That is potentially interesting.  The output of powercfg stated that PROGRA~2\APC\POWERC~1\agent\pbeagent.exe requested a timer resolution of 1.000 ms, which then changed the Windows server’s system-wide timer resolution to 1.0009 ms.  Interesting?  PROGRA~2\APC\POWERC~1\agent\pbeagent.exe resolves to the “APC PBE Agent” service in Windows, which is a component of the American Power Conversion (APC) PowerChute Business Edition software.  That software interfaces with an attached UPS to provide a gentle shutdown of the server in the event of an extended power outage.  The “APC PBE Agent” service happens to be the service that I shut down between the first and second execution of the ClockRes utility.

Interesting?  Does that suggest that installing the APC PowerChute Business Edition software on a server potentially has a significant impact on the performance of that server due to the program’s insistence on changing the Windows system-wide timer resolution to 1ms?  A quick observation indicates that the change made by the APC software to the Windows system-wide timer resolution does NOT apparently affect the reporting of the c=15600 entries in an Oracle Database 10046 extended SQL trace when the APC software is installed on the server.  The question remains whether or not this APC software could significantly decrease the performance of the Oracle Database software (potentially by 30%, as suggested in the one unnamed article).

——

The Windows Server that is experiencing occasionally jittery mouse and keyboard input is reasonably high-end for a Windows server: Intel Xeon E5-2690 8 core CPU at 2.9GHz (with hyperthreading enabled, giving the appearance of 16 CPUs in Windows), 64GB of memory, RAID controller with 1GB of battery backed cache, 16 internal 10,000 RPM hard drives, two gigabit network adapters in a teamed configuration, etc.  It should require a substantial load on the server to cause the jittery mouse and keyboard input behavior.

The power option plan in Windows was set to High Performance, while the default plan in Windows Server is Balanced.  Various articles on Microsoft’s website state that the Balanced plan allows the server/operating system to use CPU speed throttling (reducing the CPU speed from the stated speed rating, 2.9GHz in the case of this server), and core parking (essentially putting one or more CPU cores to sleep) in order to reduce energy consumption.  Some articles on Microsoft’s site indicate that, at least with Windows Server 2008, CPU parking may increase IO latencies – that, of course, would be bad if Oracle Database were installed on the server.  Other articles on Microsoft’s site indicate that there are bugs, at least with Windows Server 2008, related to core parking which cause the parked cores not to wake up when the CPU load increases.  I wonder if this particular bug is playing a part in the performance issue faced in this very recent Usenet thread that describes poor performance of Oracle Database running in Hyper-V on Windows?

Here is a screen capture of the Power Options window and Task Manager on the Windows Server 2012 machine that is experiencing occasionally jittery mouse and keyboard input (screen capture taken when the server was mostly idle):

UnexpectedPowerOptionsTaskManager

Notice the inconsistency?  The server’s CPU is throttled down from 2.9GHz to just 1.16GHz while the power option plan is set to High Performance.  The Microsoft-published “Performance Tuning Guidelines for Windows Server 2012” document states the following on pages 16-17 (I highlighted some of the words in red):

Balanced (recommended): Default setting. Targets good energy efficiency with minimal performance impact.  Matches capacity to demand. Energy-saving features balance power and performance.

High Performance: Increases performance at the cost of high energy consumption. Power and thermal limitations, operating expenses, and reliability considerations apply.  Processors are always locked at the highest performance state (including “turbo” frequencies). All cores are unparked.

Power Saver: Limits performance to save energy and reduce operating cost.  Caps processor frequency at a percentage of maximum (if supported), and enables other energy-saving features.

Well, that is interesting, and it is inconsistent with the above screen capture.  Incidentally, when the server was experiencing the worst of the occasionally jittery mouse and keyboard input, the CPU utilization was hovering around 6%, the CPU speed was still coasting at 1.16GHz to 1.18GHz, the network throughput hovered between 600Mbps and 1100Mbps, and the server’s internal hard drives barely noticed the traffic passing to/from the disks through the network interface (the disk throughput stayed below the roughly 75MB/s and 137MB/s, respectively, that those network figures represent).  6% CPU utilization causes the mouse and keyboard input to become jittery?  With hyperthreading enabled, there are essentially 16 CPU seconds available for each second of elapsed time.  A quick check: 1/16 = 0.0625, so one (hyperthreaded) CPU at 100% utilization would be reported as a system-wide utilization of 6.25%.  Interesting, but is that statistic relevant?

I happened to have the Windows Resource Monitor open during one of the jittery episodes.  The Resource Monitor showed, shockingly, that 14 (possibly 15) of the hyperthreaded “CPUs” were parked!  That result also conflicts with the Microsoft document mentioned above, which states that “all cores are unparked” when the High Performance power plan is selected.  So, at 6% CPU utilization the server was CPU constrained.  Changing the setting in the server’s BIOS that controls whether or not cores may be parked, so that the cores could no longer be parked, fixed the Windows Server 2012 issue in which moving the mouse pointer a short distance across the screen was accompanied by a 30 second delay.
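For situations where a BIOS change is not practical, Windows Server 2012 also exposes the minimum percentage of unparked cores as a power plan setting.  The commands below are only a sketch; they assume that the CPMINCORES alias is recognized by the installed powercfg build (if it is not, the underlying processor power management GUIDs would have to be used instead):

REM Require 100 percent of the cores to remain unparked under the current plan
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100
REM Re-apply the current plan so that the change takes effect
powercfg /setactive SCHEME_CURRENT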

The server still exhibits a bit of jittery mouse and keyboard behavior when its teamed network cards are heavily used for file transfers to the server, but at least the CPU activity is no longer confined to a single hyperthreaded “CPU”:

UnexpectedResourceMonitor

Considering that this server was ordered from the manufacturer as “performance optimized”, I am a bit surprised at the power consumption of the server.  The server was ordered with dual (redundant) 1100 watt power supplies.  With the CPU’s 135 watt maximum TDP (per Intel: “Thermal Design Power (TDP) represents the near maximum power a product can draw for a thermally significant period while running commercially available software.”), 16 hard drives, and 64GB of memory, I fully expected the server to consume between 700 and 900 watts of electrical power.

Here is the server’s power consumption when the server is lightly loaded with roughly 68 running processes (note that the server is connected to a 120 volt power outlet):

UnexpectedPowerConsumptionLittleLoad

Here is the server’s power consumption when the server is moderately loaded with between 600Mbps and 1100Mbps of network traffic (the mouse pointer was slightly jittery at this point):

UnexpectedPowerConsumptionNetworkLoad

So, the server consumes 1.2 amps (126 watts) when lightly loaded and 1.4 amps (154 watts) when moderately loaded – the wattage figures are a bit lower than a simple 120 volts × amps calculation would suggest, presumably because the power supplies’ power factor is slightly less than 1.  Keeping in mind that many of the popular incandescent light bulbs required 100 watts of power (note that some governments have now restricted the manufacturing of high wattage incandescent light bulbs), this server is consuming just a little more electrical power than a light bulb that might have been hung overhead just a decade or two ago.

One of the common arguments for server virtualization is energy savings – the above screen captures suggest that energy savings may not be a significant cost-saving factor for virtualization with modern server hardware.  One might question how much energy is really being saved when the network interface is maxed out by a single virtualized server, just 6% CPU utilization results in a jittery mouse pointer, and eight to ten virtualized servers are stacked on the physical hardware (all competing for the scarce CPU and network resources).

Added May 11, 2013:

Dell BIOS setting to enable or disable CPU parking in Windows Server 2012:

UnexpectedPowerConsumptionBIOSProcIdle

With the BIOS option set to enabled, disk activity caused by network traffic results in occasionally jittery mouse movements on the server.  Based on a bit of research, installing the Hyper-V role on either Windows Server 2012 or Windows 8 may disable CPU throttling and/or disable CPU parking.
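To confirm how core parking is currently configured (for example, after changing the BIOS option or after adding the Hyper-V role), the current plan’s processor power management settings can be listed from an elevated command prompt.  This sketch assumes the same aliases mentioned earlier:

powercfg /query SCHEME_CURRENT SUB_PROCESSOR
REM If the core parking entries are not listed, they may be hidden by default;
REM on recent Windows builds powercfg /qh also reports hidden settings.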

Added June 5, 2014:

I finally had sufficient time to fully analyze this problem, where a 2.9GHz CPU in a Dell PowerEdge T620 server crawled along at a leisurely pace of about 1.16GHz, actually throttling back performance further as demand for the server’s resources increased.  A second Dell PowerEdge T620 server with a 2.6GHz CPU that was purchased at the same time also coasted along at roughly 1.16GHz, but that server did not seem to throttle back performance further as demand for its resources increased.

As a review, the screen capture shown below at the left shows the Windows Server 2012 Power Options settings and the Performance tab of the Task Manager.  The screen capture below at the right shows the Windows Server 2012 Power Options settings and the Performance tab of the Task Manager after fixing this particular problem – note that the 2.9GHz CPU is now essentially overclocked at 3.28GHz (it has operated at roughly that speed since the fix).

UnexpectedT620PowerOptionsTaskManager UnexpectedT620PowerOptionsTaskManager2

The 2.9GHz PowerEdge T620 and the 2.6GHz PowerEdge T620 are both Active Directory domain controllers and internal DNS servers (along with supporting other tasks), so the occasionally slow (or extremely slow) performance of the servers negatively impacted the performance of other servers as well as client workstations.

There was a BIOS firmware update released in the third quarter of 2013, which was supposed to address some CPU throttling issues – that BIOS update did not seem to help the problem that I experienced.

When I tried troubleshooting the server a year ago, I thought that the low power consumption of the big server with the 2.9 GHz E5-2690 8 core CPU was a significant clue: that CPU alone is rated to consume up to 135 watts, yet the server liked to hover between 120 watts and 140 watts regardless of the server’s workload.  The Dell T620 (and other recent Dell servers) has some pretty sophisticated power management capabilities.  Dell sells a utility that is able to alter the electrical power profile of a server, and I thought that Dell might have imposed a 140 watt limit on the server for some reason, but I could not find where that limit was specified.  The 2.9 GHz E5-2690 8 core CPU apparently has some additional electrical power limiting capabilities of its own.  A year ago I even tried downloading a demo of Dell’s power management utility – that did not help resolve the issue (I think that the installation might have caused some other issues that I had to fix).  On Tuesday of last week I read the following articles:
http://www.dell.com/learn/us/en/19/financial-services-markets-solutions-processor-acceleration-technology
http://www.dell.com/us/business/p/dell-openmanage-power-center/pd
http://www.intel.com/content/www/us/en/data-center/data-center-management/how-to-configure-node-manager-video.html
ftp://ftp.dell.com/Manuals/Common/poweredge-r720_Concept%20Guide_en-us.pdf
http://en.community.dell.com/techcenter/power-cooling/w/wiki/3536.openmanage-power-center-faq.aspx

I rebooted the server, pressed F2, and dug around in the settings a bit.  I found that the System Profile Setting was set to “Performance per Watt” (I believe that this was how the server was configured when it left the Dell factory).  I changed that setting to “Performance”, saved the changes, and rebooted the server again.  The server is now consuming 200+ watts, and the CPU is freely exceeding its rated speed.  The pictures below show the configuration changes made in the System BIOS settings to remove the electrical power cap, allowing the server to behave as it should have from the factory:

UnexpectedT620SystemProfile1 UnexpectedT620SystemProfile2

UnexpectedT620SystemProfile3 UnexpectedT620SystemProfile4
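On servers with an iDRAC, the same System Profile setting can usually be read and changed remotely with racadm instead of rebooting into the BIOS setup.  The commands below are only a sketch, and they assume that the iDRAC firmware on the T620 exposes the BIOS.SysProfileSettings attribute group with a PerfOptimized value – the exact attribute names should be verified against the racadm documentation for the installed firmware:

racadm get BIOS.SysProfileSettings.SysProfile
racadm set BIOS.SysProfileSettings.SysProfile PerfOptimized
REM A BIOS configuration job and a server reboot are still required before
REM the new System Profile takes effect.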

I suppose that if a Dell PowerEdge T620 (or similar recent-model Dell server) seems to be running a bit slower than expected, you might take a quick peek at the System Profile Setting in the System BIOS to make certain that the System Profile is set to Performance – note that this particular problem is NOT Windows specific; a Dell PowerEdge T620 running Linux should be affected in the same way.  As shipped from the factory, two Dell PowerEdge T620 servers purchased this year were NOT affected by the problems mentioned in this blog article.







