Oracle Database Time Model Viewer in Excel 1


February 28, 2011

(Forward to the Next Post in the Series)

Previously, I had written a couple of blog articles that showed how to build a reasonably usable Oracle Database Time Model Viewer using nothing more than a text file (containing a VBS script) and a dynamically generated web page that is displayed on a Windows client computer using Internet Explorer (see the three part series).  It might be interesting to see what is possible in a more sophisticated programming environment, such as Microsoft Excel.

170.64 seconds of CPU time consumed in 60 seconds, meaning that on average the CPUs were 34.97% busy; 20.18 seconds of CPU time were consumed in kernel mode, meaning that 88.17% of the CPU time consumed was consumed by Oracle Database or other foreground processes.  Interesting, but I could probably obtain roughly the same information from an operating system utility.  Possibly the more important question is “What is happening in my database instance?”  Before the time model statistics were introduced in Oracle Database 10.1, we could see by examining repeated samplings of V$SYSSTAT that 78.05 CPU seconds were consumed by the sessions (falling into the CPU used by session statistic), that 2.43 CPU seconds were needed for parsing activity (falling into the parse time cpu statistic), and that 2.90 CPU seconds were needed for recursive operations (falling into the recursive cpu usage statistic) while performing operations such as trigger execution and space management calls – we are able to drill down into the session level detail by comparing delta values of V$SESSTAT.  Starting with Oracle Database 9.2 we could also monitor the CPU usage of SQL statements by repeatedly checking V$SQL (Oracle Database 10.2 and above permit accessing V$SQLSTATS as a less resource intensive method to access much of the same information using the SQL_ID and PLAN_HASH_VALUE).  We could also check the various views (V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, V$LOCK, etc.) that compose the wait event interface, but that information will not indicate the amount of CPU used, nor why the CPU was used by a session.
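For reference, a rough sketch of the queries that expose the statistics mentioned above follows – the statistic names are as they appear in V$SYSSTAT and V$OSSTAT, the values are cumulative since instance (or operating system) startup, so it is the delta between two samplings that matters, and the CPU related values are reported in centiseconds:

SELECT
  NAME,
  VALUE
FROM
  V$SYSSTAT
WHERE
  NAME IN ('CPU used by session','parse time cpu','recursive cpu usage');

SELECT
  STAT_NAME,
  VALUE
FROM
  V$OSSTAT
WHERE
  STAT_NAME IN ('BUSY_TIME','USER_TIME','SYS_TIME');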

Let’s see if it is possible to build an extended version of the Oracle Database Time Model Viewer in Excel, hopefully something like this – the screen capture includes a lot of cross-referenced information that extends well beyond the Oracle Database time model views (V$SYS_TIME_MODEL, V$SESS_TIME_MODEL):

If you look closely at the above picture, we also see that this Oracle Database instance accounted for 80.63 CPU seconds (Background CPU Time plus DB CPU), with SID 1230 consuming the greatest percentage of CPU time at 19.13 seconds.  Looking at the bottom of the screen capture, we see wait events from V$SYSTEM_EVENT and V$SESSION_EVENT that are grouped into categories (Administrative, Application, Commit, Concurrency, Configuration, Idle, Network, Other, Scheduler, System I/O, and User I/O), a feature which first appeared in Oracle Database 10.2.   The time model statistics show that session 1230 spent 24.98 seconds executing SQL statements; a portion of that time was spent running on the CPU (19.13 seconds), and a portion of the time was spent sitting in wait events (6.47 seconds in the direct path read wait event).  Interesting possibilities here.
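As a rough sketch (not necessarily the exact query behind the viewer), the sessions consuming the most CPU time could be identified by sampling V$SESS_TIME_MODEL and comparing the delta of the values between two samples – the values are reported in microseconds:

SELECT
  SID,
  VALUE/1000000 DB_CPU_SECONDS
FROM
  V$SESS_TIME_MODEL
WHERE
  STAT_NAME='DB CPU'
ORDER BY
  VALUE DESC;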

Let’s look at another example.  793.46 seconds of CPU time consumed in (roughly) 60 seconds, meaning that on average the CPUs were just under 100.00% busy; 11.60 seconds of CPU time were consumed in kernel mode, meaning that 98.54% of the CPU time consumed was consumed by Oracle Database or other foreground processes.  Fantastic, we are using 100% of the server’s CPUs (oh wait, that is not a good idea).  Unlike the earlier screen capture, where DB Time (125.86 seconds) was roughly equal to DB CPU (80.61 seconds) plus the value for Total Non-Idle Wait Time (43.74 seconds), in this screen capture we find that DB Time (259.40 seconds) seems to have a bit of lost time, because the DB CPU (131.66 seconds) plus Total Non-Idle Wait Time (34.72 seconds) is about 93 seconds short of the DB Time statistic – on a positive note, if we were only looking at the wait events, it would appear that the sessions waited 9.02 seconds less in this time interval than in the time interval of the previous screen capture – we would have missed the lost time detail had we not been looking at more than just the wait events 🙂 .  If we look at the delta values from V$OSSTAT (the statistics at the top of the screen captures), and compare those statistics to the DB CPU plus Background CPU Time time model statistics, we get a sense of what the problem might entail.  The V$OSSTAT statistics indicate that the processes running on the server (and the kernel mode CPU usage that was provoked by those processes) consumed 793.46 seconds of CPU time, while we are only able to account for 131.65 seconds + 0.08 seconds, or roughly 16.6% of the total CPU usage, within the database instance.  A relevant question at this point is what process or processes consumed the other 83.4% of the CPU time?
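A quick way to check whether the instance-wide numbers add up is sketched below (a sketch only – the figures quoted above are deltas over roughly 60 seconds, so two samplings of these views would need to be compared, and the WAIT_CLASS column requires Oracle Database 10.2 or later).  The second query also returns the wait time grouped by wait class, similar to the lower portion of the screen captures:

SELECT
  STAT_NAME,
  VALUE/1000000 SECONDS
FROM
  V$SYS_TIME_MODEL
WHERE
  STAT_NAME IN ('DB time','DB CPU','background cpu time');

SELECT
  WAIT_CLASS,
  SUM(TIME_WAITED_MICRO)/1000000 SECONDS
FROM
  V$SYSTEM_EVENT
GROUP BY
  WAIT_CLASS
ORDER BY
  WAIT_CLASS;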

Moving on to the third screen capture, we see that things have started calming down a little, with the average CPU utilization for the 60 second time period at 89.24% busy – of course this probably means that there were periods of 100% utilization, and periods of 70% utilization (I cheated, I watched the CPU utilization in roughly 5 second intervals), but note that DB Time minus DB CPU minus Total Non-Idle Wait Time indicates that 36.86 seconds are apparently missing… or lost in the CPU run queue, or in an uninstrumented code path.  In the following screen capture we are able to see that the database instance consumed 203.77 seconds plus 0.03 seconds of the 439.56 seconds of total server CPU time consumed in the interval (roughly 46.4% of the total server CPU consumed could be attributed to the database instance – what else is consuming the server’s CPU time, possibly another database instance?).

By the time of the fourth screen capture things are starting to settle down, with the server’s CPUs just 24.58% busy.  Unfortunately, the database instance being monitored only accounted for 13.95% of that CPU usage, so some other process is still competing for the server’s CPU time.  Lots of pretty colors and other information in the screen captures, but we will save the explanation for later.

Let’s start up Excel and begin building the project – you will probably need the 32 bit version of Excel for this exercise, even on a 64 bit computer.  Right-click one of the worksheet tabs and click View Code.

From the Tools menu, select References.  Find Microsoft ActiveX Data Objects 2.8 (or 6.0) Library, and place a check next to that item.  This is the feature that will allow our macros to communicate with the Oracle Database.  Click OK.

Next, we need to create a window for our application, and in Excel that window is called a UserForm.  Right-click Microsoft Excel Objects, and then select Insert – UserForm from the menu.

From the View menu, select Properties Window.  Change the (Name) property of the UserForm to frmTimeModel and feel free to change the other form properties as you see fit.

Now the potentially challenging part.  We need a Microsoft Windows built-in 32 bit element (control) called TreeView.  This 32 bit control is found in the file MSCOMCTL.OCX, and on a 32 bit operating system the file should be found in the C:\Windows\System32 folder.  On a 64 bit operating system the file should be found in the C:\Windows\SysWOW64 folder.  This file might be installed by various installer programs, but it can also be downloaded from Microsoft (this link might also work, but the file is much older).  If you had to download the file, put it in the correct folder location and then “register” the file with Windows, using a command like this (on a 64 bit client computer):

REGSVR32 c:\windows\SysWOW64\MSCOMCTL.OCX 

(Possible bad news, the TreeView control might not work without Visual Basic 5.0 or 6.0 installed, see this article – we might need to simulate the TreeView using an Excel worksheet.)

Once we verify that the TreeView control is on the computer and registered, switch back to the Visual Basic editor, and from the Tools menu, select Additional Controls…  Locate one of the Microsoft TreeView Controls in the list and place a checkmark next to it.  Then click the OK button.

Find the TreeView control in the Toolbox tools list, and click it.  Draw a rectangle covering most of the UserForm.  In the Properties window, change the (Name) property of the TreeView control to tvTimeModel and then double-click the word (Custom) in the Properties window.  In the Property Pages window, change the Line Style to 1 – tvwRootLines and then click the OK button.  Back in the Properties window, change the font to Courier New with an 8 point font size (to do this, double-click the word Font in the properties list).

Now, let’s add a little code to the project.  Earlier, when we added a UserForm to the project, we right-clicked Microsoft Excel Object.  In the same area of the screen is an item titled ThisWorkbook – double click the word ThisWorkbook.  Add the following code to the code window that appeared:

Private Sub Workbook_Open()
    'Code to initialize sheets should be placed here
    lngTimerTriggerSeconds = 60
    lngTimerEventCounter = lngTimerTriggerSeconds
    frmTimeModel.Show
End Sub 

The above code sets the values of a couple of variables that can be accessed throughout the Excel project and then tells Microsoft Excel to display the UserForm that we just created, every time this Excel spreadsheet workbook is opened.

Now let’s create a timer subroutine that will automatically run once a second.  Right-click Microsoft Excel Objects, and then select Insert – Module from the menu.  Add the following code to the new module:

Option Explicit
Public lngTimerEventCounter As Long
Public lngTimerTriggerSeconds As Long
Public intKillFlag As Integer
Public Sub TimerEvent()
    Dim i As Integer
    lngTimerEventCounter = lngTimerEventCounter + 1

    If lngTimerTriggerSeconds <= lngTimerEventCounter Then
        frmTimeModel.tvTimeModel.Nodes.Clear
        frmTimeModel.tvTimeModel.Nodes.Add , , "BackgroundElapsedTime", "This is a test, the time is now " & Now
        frmTimeModel.tvTimeModel.Nodes.Add "BackgroundElapsedTime", tvwChild, "BackgroundCPU", "The background is still in the background."

        For i = 1 To frmTimeModel.tvTimeModel.Nodes.Count
            'Force all of the nodes to appear expanded
            frmTimeModel.tvTimeModel.Nodes(i).Expanded = True
        Next i
        frmTimeModel.tvTimeModel.Nodes(1).Selected = True

        lngTimerEventCounter = 0
        intKillFlag = intKillFlag + 1
    End If
    'Instruct Excel to execute the TimerEvent sub again in 1 second
    If intKillFlag < 10 Then
        Application.OnTime DateAdd("s", 1, Now), "TimerEvent"
    End If
End Sub 

The above creates a sort of recursive routine, where every time the TimerEvent subroutine executes, it instructs Excel to execute the subroutine again after waiting one second.  Every 60 times the TimerEvent subroutine executes, it will update the TreeView control on the UserForm.  But there is a catch – somehow we need to execute the TimerEvent the first time so that it will automatically execute once a second.  Below where you previously found Microsoft Excel Objects you will see frmTimeModel.  Right-click frmTimeModel and select View Code.  Add the following code:

Private Sub UserForm_Initialize()
    TimerEvent
End Sub 

Now save the Excel spreadsheet workbook, exit Excel, and then open the spreadsheet workbook that we just created.  If everything works as it should, you should see something like this (note that you may need to adjust the Macro Security Level to allow the execution of all macros):

We will continue from here in the next segment of the series.

Edit February 28, 2011:
The current Excel project file (it might be necessary to press the F5 key on the keyboard for the macro to run correctly after the workbook opens): TimeModelViewerExcelArticle1





SQL Precedence UNION and MINUS, Converting to Full Outer Join Syntax


February 26, 2011

In a recent OTN thread in the SQL forum I found a SQL statement (see the thread for the original SQL statement) that the original poster (OP) wanted to transform into full outer join syntax, but was unsure how to start the transformation.  To get started with helping the OP, you first need to determine what is the intended result of the SQL statement.  Second, you need to determine if the query really is producing the intended result.  Third, you need to determine why the OP wants to change a functioning query into a different syntax – is the OP facing a performance problem, or is the OP expected to provide an answer as part of a test question for a job interview or classroom project ;-).  Fourth, you need to determine the Oracle Database release version that is available to the OP – can the OP use the FULL OUTER JOIN syntax, or is it necessary to use the Oracle specific join syntax?

The OP’s query essentially had the following format:

SELECT
  C1
FROM
  T1@SOMEDBLINK
WHERE
  C2='26-FEB-2011'
MINUS
SELECT
  C1
FROM
  T1
WHERE
  C2='26-FEB-2011'

UNION

SELECT
  C1
FROM
  T1
WHERE
  C2='26-FEB-2011'
MINUS
SELECT
  C1
FROM
  T1@SOMEDBLINK
WHERE
  C2='26-FEB-2011'; 

Before testing, it appeared that the OP wanted all of the rows from each of the two tables, if and only if the rows do not exist in both tables.  So, this is a little like a UNION ALL minus the intersection of the two row sources (assuming that there are no duplicate values in either table).  Seems simple enough, until you test it.  Let’s build a simple model to explain what is happening.  We will create two tables with 10 rows each.  The first table has C1 values that increase by 2, while the second has C1 values that increase by 3:

CREATE TABLE T1 AS
SELECT
  ROWNUM*2 C1,
  TRUNC(SYSDATE)+ROWNUM*2 C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10;

CREATE TABLE T2 AS
SELECT
  ROWNUM*3 C1,
  TRUNC(SYSDATE)+ROWNUM*3 C2
FROM
  DUAL
CONNECT BY
  LEVEL<=10; 

Now let’s look at the simulated two halves of the OP’s query:

SELECT
  C1,
  C2
FROM
  T1
MINUS
SELECT
  C1,
  C2
FROM
  T2;

C1 C2
-- ---------
 2 28-FEB-11
 4 02-MAR-11
 8 06-MAR-11
10 08-MAR-11
14 12-MAR-11
16 14-MAR-11
20 18-MAR-11

SELECT
  C1,
  C2
FROM
  T2
MINUS
SELECT
  C1,
  C2
FROM
  T1;

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

As can be seen by the above, each half returned 7 rows – there are 7 rows in each table that are not in the other table.  Common sense would state that if we UNION these two results (assuming no duplicate values in each table), we would see 14 rows:

SELECT
  C1,
  C2
FROM
  T1
MINUS
SELECT
  C1,
  C2
FROM
  T2
UNION
SELECT
  C1,
  C2
FROM
  T2
MINUS
SELECT
  C1,
  C2
FROM
  T1;

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Only 7 rows?  The UNION and MINUS set operators have equal precedence, so Oracle evaluates them from left to right – the query was effectively processed as ((T1 MINUS T2) UNION T2) MINUS T1, which explains the 7 row result.  Let’s try again with the help of inline views to control the order in which the MINUS and UNION operators are processed:

SELECT
  *
FROM
  (SELECT
    C1,
    C2
  FROM
    T1
  MINUS
  SELECT
    C1,
    C2
  FROM
    T2)
UNION
SELECT
  *
FROM
  (SELECT
    C1,
    C2
  FROM
    T2
  MINUS
  SELECT
    C1,
    C2
  FROM
    T1);

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Note that the above returned 14 rows, as common sense seemed to imply the previous query would return.  We can return the same result set using just two outer joins:

SELECT
  T1.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1(+)
  AND T2.C1 IS NULL
UNION
SELECT
  T2.C1,
  T2.C2
FROM
  T1,
  T2
WHERE
  T2.C1=T1.C1(+)
  AND T1.C1 IS NULL;

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

Or we can produce the same output with a FULL OUTER JOIN and a MINUS operator:

SELECT
  NVL(T1.C1,T2.C1) C1,
  NVL2(T1.C1,T1.C2,T2.C2) C2
FROM
  T1 FULL OUTER JOIN T2
    ON T1.C1=T2.C1
MINUS
SELECT
  T1.C1,
  T1.C2
FROM
  T1,
  T2
WHERE
  T1.C1=T2.C1;

C1 C2
-- ---------
 2 28-FEB-11
 3 01-MAR-11
 4 02-MAR-11
 8 06-MAR-11
 9 07-MAR-11
10 08-MAR-11
14 12-MAR-11
15 13-MAR-11
16 14-MAR-11
20 18-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11 

We are also able to produce the output with a FULL OUTER JOIN with a WHERE clause:

SELECT
  NVL(T1.C1,T2.C1) C1,
  NVL2(T1.C1,T1.C2,T2.C2) C2
FROM
  T1 FULL OUTER JOIN T2
    ON T1.C1=T2.C1
WHERE
  (T1.C1 IS NULL
    OR T2.C1 IS NULL);

C1 C2
-- ---------
 3 01-MAR-11
 9 07-MAR-11
15 13-MAR-11
21 19-MAR-11
24 22-MAR-11
27 25-MAR-11
30 28-MAR-11
 8 06-MAR-11
20 18-MAR-11
 2 28-FEB-11
10 08-MAR-11
 4 02-MAR-11
14 12-MAR-11
16 14-MAR-11 

We still have not determined everything that is outlined at the start of this article.  If the OP is looking for an exact answer so that he may complete a test question for a job interview or classroom project, directly fixing his posted SQL statement may prove unhelpful in the long run.  Considering that we do not know the OP’s Oracle Database version, it is very difficult to say “use this for your full outer join requirement”.  In a recent article I linked to the SQL and PL/SQL FAQ on the OTN forum – it probably would have been helpful had the OP read that FAQ in full before posting a question to the forum, so that the OP would have known that the Oracle Database release version is occasionally a very important detail when seeking help. 

The fact that the OP’s query is apparently relying on implicit data type conversions (filecreatedt=’18-feb-2011′) is a little troubling, especially if this question is found on some sort of test.
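For comparison, the same sort of predicate written with an explicit conversion (shown here against the test table created above, since the OP’s actual table definition is not known) does not depend on the session’s NLS date settings:

SELECT
  COUNT(*)
FROM
  T1
WHERE
  C2=TO_DATE('26-FEB-2011','DD-MON-YYYY');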





How Do You Find Information in the Oracle Documentation Library?


February 22, 2011

A recent thread in the OTN forums mentioned Metalink (MOS) Doc ID 1203353.1 – “How to find Oracle Database Documentation on OTN Web Site”, which lists four easy steps for finding information in the Oracle documentation library, with the starting point being this link.  I think that it is great having the list of steps in such an easily read form.  There is a very useful thread in another OTN forum, and that thread is titled “SQL and PL/SQL FAQ” (I really like this thread, and hope that a similar thread makes its way into the Database – General forum also).  Among other helpful topics mentioned in the thread is how to access the Oracle Database Documentation using another website (this is the site that I have seen specified most often as the starting point for accessing the Oracle documentation). 

As I mentioned in the first of the above OTN threads, I use a slightly different method to access the Oracle documentation that seems to be a bit more efficient for my needs – Google searches. For example, if I am looking for the definition of V$SESSION_EVENT in a specific Oracle Database release version, I will use the following search keywords:

For Oracle Database 8i (8.1.5 through 8.1.7): site:download.oracle.com 815 V$SESSION_EVENT
For Oracle Database 9.0.1: site:download.oracle.com 901 V$SESSION_EVENT
For Oracle Database 9.2.0: site:download.oracle.com 920 V$SESSION_EVENT
For Oracle Database 10.1.0: site:download.oracle.com 101 V$SESSION_EVENT
For Oracle Database 10.2.0: site:download.oracle.com 102 V$SESSION_EVENT
For Oracle Database 11.1.0: site:download.oracle.com 111 V$SESSION_EVENT
For Oracle Database 11.2.0: site:download.oracle.com 112 V$SESSION_EVENT

Once in a while, it might be necessary to add the term database to the search keywords to eliminate search results that are not Oracle Database specific.

The custom search engine on the Miracle Finland site also seems to do a good job of taking me directly to the Oracle documentation library.





Strange Timestamp Behavior


February 17, 2011

I have not experimented much with the TIMESTAMP datatype in Oracle – the DATE datatype is usually sufficient for my needs.  I thought that I would try to build a couple of test scripts to experiment with TIMESTAMPs, but I hit a bit of a snag in my testing.  The first script that I built follows:

DROP TABLE T1;

CREATE TABLE T1 (
  C1 TIMESTAMP(9));

CREATE OR REPLACE TRIGGER T1_UPDATE BEFORE UPDATE ON T1 FOR EACH ROW
BEGIN
  :NEW.C1 := :NEW.C1 + 1;
END T1_UPDATE;
/

ALTER SESSION SET NLS_TIMESTAMP_FORMAT="DD-MON-YY HH24:MI:SS";

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=SYSDATE;

SELECT
  *
FROM
  T1; 

The above script creates a table with a single column having a TIMESTAMP(9) column, creates a trigger on the table that always adds one day to the TIMESTAMP column value every time a row is updated, inserts a row into the table, selects from the table, updates the row that was just inserted, and then selects from that table again.  Would you believe that the results for Oracle Database 10.2.0.2 and 11.1.0.7 are different for this test script?

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
16-FEB-11 10:45:03

SQL>
SQL> UPDATE T1 SET C1=SYSDATE;

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
 (NULL)

Output from 11.1.0.7:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
16-FEB-11 10:47:05

SQL>
SQL> UPDATE T1 SET C1=SYSDATE;

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
------------------------------
17-FEB-11 10:47:05 

Metalink (MOS) Doc ID 5352587.8, Bug 5352587 – “Triggers not working correctly with timestamp datatype” seems to show a problem similar to the above, as does Metalink (MOS) Bug 5860236 – “Timestamp in trigger given NULL value on update without use of TO_TIMESTAMP”.

No problem, we can work around that problem with a slight modification to our script:

DELETE FROM T1;

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

SELECT
  *
FROM
  T1; 

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
16-FEB-11 10:57:02

SQL>
SQL> UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
17-FEB-11 12:00:00 

The date value is what was expected, but we lost the time component – the problem is obvious: TO_TIMESTAMP expects a character value, so SYSDATE is first implicitly converted to a string using the session’s NLS_DATE_FORMAT (typically DD-MON-RR, which carries no time portion), and that string is then converted to a TIMESTAMP with a time of midnight.
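The implicit conversion can be made visible by writing it out explicitly (a quick sketch – the exact output depends on the session’s NLS settings):

SELECT
  TO_CHAR(SYSDATE) IMPLICIT_STRING,
  TO_TIMESTAMP(TO_CHAR(SYSDATE)) RESULT
FROM
  DUAL;

Let’s try again with another slight modification to the script: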

ALTER SESSION SET NLS_DATE_FORMAT="DD-MON-YY HH24:MI:SS";

DELETE FROM T1;

INSERT INTO T1 VALUES(SYSDATE);

SELECT
  *
FROM
  T1;

UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

SELECT
  *
FROM
  T1; 

Output from 10.2.0.2:

SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
16-FEB-11 11:00:26

SQL>
SQL> UPDATE T1 SET C1=TO_TIMESTAMP(SYSDATE);

1 row updated.

SQL>
SQL> SELECT
  2    *
  3  FROM
  4    T1;

C1
--------------------------------------------
17-FEB-11 11:00:26 

The above shows exactly what we should expect.

—-

There are quite a few different ways to return a TIMESTAMP from the database that represents the current date and time.  For example:
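A sketch of a few of the possibilities (SYSTIMESTAMP and CURRENT_TIMESTAMP return TIMESTAMP WITH TIME ZONE values, while LOCALTIMESTAMP and a CAST of SYSDATE return plain TIMESTAMP values):

SELECT
  SYSTIMESTAMP,
  CURRENT_TIMESTAMP,
  LOCALTIMESTAMP,
  CAST(SYSDATE AS TIMESTAMP) SYSDATE_TS
FROM
  DUAL;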

A recent thread in an Oracle forum was closed prematurely without providing the original poster (OP) a usable solution other than to file a bug report – lately, I sometimes wonder what the intended purpose of that forum is, due to the number of threads that are closed without providing much of any help to the OP (other than a half dozen partially related links – it is important to recognize that there are a couple of people who post very well thought out, specific solutions on that forum for the problems faced by the OPs).  The test case script that the OP provided is quite interesting, and I have posted my results below from executing the script on several different Oracle Database release versions.

Output from 10.2.0.2:

O timestamp obtido eh: [16-FEB-11 01.46.42.616000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM, :new.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 01.46.42.523000000 PM, :new.TS_ATU_DTR=16-FEB-11 01.46.42.616000000 PM

Output from 10.2.0.5:

O timestamp obtido eh: [16-FEB-11 06.18.56.532000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 06.18.56.345000000 PM, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 06.18.56.345000000 PM, :new.TS_ATU_DTR=16-FEB-11 06.18.56.532000000 PM 

Output from 11.1.0.7:

O timestamp obtido eh: [16-FEB-11 01.46.41.333000 PM]

DATA      TEXTO
--------- ------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=, :new.TS_ATU_DTR=16-FEB-11 01.46.41.333000000 PM 

Output from 11.2.0.1:

O timestamp obtido eh: [16-FEB-11 06.06.03.348000 PM]

DATA      TEXTO
--------- ------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=, :new.TS_ATU_DTR=
16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=, :new.TS_ATU_DTR=16-FEB-11 06.06.03.348000000 PM 

Output from 11.2.0.2:

O timestamp obtido eh: [16-FEB-11 06.02.33.978000 PM]

DATA      TEXTO
--------- -------------------------------------------------------------------------------------------------
16-FEB-11 1:old.FLAG=123, :new.FLAG=321
16-FEB-11 1:old.TAG=teste trigger, :new.TAG=teste trigger
16-FEB-11 1:old.TS_ATU_DTR=16-FEB-11 06.02.33.827000000 PM, :new.TS_ATU_DTR=

16-FEB-11 2:old.FLAG=123, :new.FLAG=321
16-FEB-11 2:old.TAG=teste trigger, :new.TAG=10
16-FEB-11 2:old.TS_ATU_DTR=16-FEB-11 06.02.33.827000000 PM, :new.TS_ATU_DTR=16-FEB-11 06.02.33.978000000 PM  

Notice in the above that the results from Oracle Database 10.2.0.2 show that the trigger code was able to retrieve the existing value of the TS_ATU_DTR (TIMESTAMP datatype) column and write that value to the logging table.  When the test was repeated on Oracle Database 11.1.0.7, we find that the existing (old) value of the TS_ATU_DTR column could not be read, and the replacement (new) value of that column also could not be read unless its value was explicitly set in the trigger code.  So, the result of the OP’s test script finds that 11.1.0.7 contains a bug related to TIMESTAMP datatypes that does not exist in 10.2.0.2, while the script at the start of this blog article demonstrates a bug related to TIMESTAMP datatypes that exists in 10.2.0.2, but does not exist in 11.1.0.7.  11.2.0.1 returned results that are very similar to those of 11.1.0.7.  The results for 10.2.0.5 and 11.2.0.2 matched exactly; however, the new.TS_ATU_DTR value was not initially set correctly.

Are we able to help the OP with the issue that he faces, or should we just say “Not my problem, go bug someone else”?

Let’s create another reproducible test case script to see if we are able to isolate the cause of the problem:

SET LINESIZE 140
SET PAGESIZE 1000

DROP TABLE T1;
DROP TABLE T1_LOG;

CREATE TABLE T1 (
  ID NUMBER NOT NULL,
  COL2 NUMBER NOT NULL,
  COL_TS TIMESTAMP(9) WITH TIME ZONE NOT NULL);

CREATE TABLE T1_LOG (
  CHANGE_DATE DATE NOT NULL,
  CHANGE_TEXT VARCHAR2(130) NOT NULL);

CREATE OR REPLACE TRIGGER TRI_T1 BEFORE UPDATE ON T1
REFERENCING OLD AS OLDDATA NEW AS NEWDATA FOR EACH ROW
DECLARE
  OLD_COL_TS TIMESTAMP(9);
  NEW_COL_TS TIMESTAMP(9);
BEGIN
  OLD_COL_TS := :OLDDATA.COL_TS;
  NEW_COL_TS := :NEWDATA.COL_TS;

  INSERT INTO T1_LOG VALUES ( SYSDATE, 'Before:OLDDATA.COL_TS=' || OLD_COL_TS || ', :NEWDATA.COL_TS='||NEW_COL_TS);

  :NEWDATA.COL_TS := CURRENT_TIMESTAMP;
  NEW_COL_TS := :NEWDATA.COL_TS;

  INSERT INTO T1_LOG VALUES ( SYSDATE, 'After:OLDDATA.COL_TS=' || OLD_COL_TS || ', :NEWDATA.COL_TS='||NEW_COL_TS);
END;
/

INSERT INTO
  T1
VALUES (
  1,
  1,
  CURRENT_TIMESTAMP);

UPDATE
  T1
SET
  COL2=COL2+1;

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC;

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1;
END;
/

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC;

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1
  RETURNING
    COL_TS INTO OLD_COL_TS;
END;
/

SELECT
  *
FROM
  T1_LOG
ORDER BY
  CHANGE_TEXT DESC; 

The script performs a lot of actions – don’t worry too much about it.  After creating the table and the logging table, we insert a row, and then modify that row three different ways.

Output from 10.2.0.2:

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.179000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.21.148000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.21.163000000 PM 

Output from 10.2.0.5 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM, :NEWDATA.COL_TS=
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.12.910000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.11.865000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.20.10.819000000 PM 

Output from 11.1.0.7:

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.048000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=, :NEWDATA.COL_TS=
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.048000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM, :NEWDATA.COL_TS=16-FEB-11 03.52.20.032000000 PM
16-FEB-11 After:OLDDATA.COL_TS=, :NEWDATA.COL_TS=16-FEB-11 03.52.20.064000000 PM 

Output from 11.2.0.1 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.13.035000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM

CHANGE_DA CHANGE_TEXT
--------- ------------------------------------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM
16-FEB-11 Before:OLDDATA.COL_TS=, :NEWDATA.COL_TS=
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.13.035000000 PM
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 06.09.11.971000000 PM, :NEWDATA.COL_TS=16-FEB-11 06.09.11.986000000 PM
16-FEB-11 After:OLDDATA.COL_TS=, :NEWDATA.COL_TS=16-FEB-11 06.09.14.089000000 PM

Output from 11.2.0.2 (with EXEC DBMS_LOCK.SLEEP(1) added between updates):

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:20
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19

CHANGE_DA CHANGE_TEXT
--------- ----------------------------------------------------------------------------
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:20, :NEWDATA.COL_TS=
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 Before:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:20, :NEWDATA.COL_TS=16-FEB-11 17:50:21
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:20
16-FEB-11 After:OLDDATA.COL_TS=16-FEB-11 17:50:19, :NEWDATA.COL_TS=16-FEB-11 17:50:19 

Notice in the above that 10.2.0.2 produced three nice sets of data, with TIMESTAMP values shown on all rows.  The final (third) set of output from 11.1.0.7 shows NULL initial values for the TIMESTAMP columns.  The NULL problem only happens in the trigger code when the RETURNING clause is used in the UPDATE statement.  Neat, I think that we found a bug, and we are able to work around that bug simply by reworking the code to not use the RETURNING clause.  The same solution will work on 10.2.0.5, 11.2.0.1, and 11.2.0.2.
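One possible rework is sketched below, using the test table from the script above – the value of the COL_TS column is simply selected after the update rather than being captured with the RETURNING clause (this sketch assumes, as in the test case, that only the single row with ID=1 is affected):

DECLARE
  OLD_COL_TS TIMESTAMP(9);
BEGIN
  UPDATE
    T1
  SET
    COL2=COL2+1;

  SELECT
    COL_TS
  INTO
    OLD_COL_TS
  FROM
    T1
  WHERE
    ID=1;
END;
/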

—-

Other TIMESTAMP resources from Metalink (MOS):

  • Doc ID 340512.1 – “Timestamps & time zones – Frequently Asked Questions”
  • Doc ID 5649579.8, Bug 5649579 – “CAST ‘as date’ of TIMESTAMP rounds in SQL, truncates in PLSQL”
  • Doc ID 780809.1 – “TO_TIMESTAMP is Returning Different Results on Oracle 9i Compare to Oracle 10g”




Calculating Overlapping Hours for Labor Transactions


February 15, 2011

There is an interesting problem in the ERP platform that I support.  When production floor workers begin working on a particular job, the employee “clocks into” the job in the computer system.  When production floor workers stop working on a particular job, the employee “clocks out of” the job in the computer system.  If the employee is simultaneously operating multiple production floor machines (might be feeding parts into robots or automatic CNC machines) we need to capture the full machine burden rate for all of the machines, but we need to fairly spread the employee’s labor burden rate among the different simultaneously in-process jobs.  Assume that the employee is operating 4 production floor machines, and in a simplified example, the employee runs machine 1 between periods 1 and 4, runs machine 2 between periods 4 and 8, machine 3 between periods 6 and 10, and machine 4 between periods 8 and 13 – as depicted below:

How much of the employee’s labor burden should go to the jobs’ transactions on machines 1, 2, 3, and 4?  25% (0.25)?  No, because that would only be correct if all of the machines were in use for the entire time duration between periods 1 and 13. 

Machine 1/transaction 1:
Quick Description: 3 time periods where the employee’s labor burden should be fully applied, and 1 time period where half of the employee’s labor burden should be applied due to the overlap with machine 2/transaction 2.   Therefore, the employee’s labor burden time should be 3 + 1/2 = 3.5 time periods.  We can calculate that as a percentage of the total duration as 3.5/4 = 0.875 (87.5%), and save that percentage in a table column, MULTIPLIER_2 in the case of the ERP system.

Machine 2/transaction 2:
Quick Description: 1 time period where the employee’s labor burden should be half applied, 1 time period where the employee’s labor burden should be fully applied, 2 time periods where the employee’s labor burden should be half applied, and 1 time period where the employee’s labor burden should be one third applied.  Therefore, the employee’s labor burden time should be 1/2 + 1 + 2/2 + 1/3 = 2.833333.  Calculated as a percentage of the total duration we have 2.833333/5 = 0.566667 (56.6%).

Machine 3/transaction 3:
Quick Description: 2 time periods where the employee’s labor burden should be half applied, 1 time period where the employee’s labor burden should be one third applied, and 2 time periods where the employee’s labor burden should be half applied.  Therefore, the employee’s labor burden time should be 2/2 + 1/3 + 2/2 = 2.333333.  Calculated as a percentage of the total duration we have 2.333333/5 = 0.466667 (46.6%).

Machine 4/transaction 4:
Quick Description: 1 time period where the employee’s labor burden should be one third applied, 2 time periods where the employee’s labor burden should be half applied, and 3 time periods where the employee’s labor burden should be fully applied.  Therefore, the employee’s labor burden time should be 1/3 + 2/2 + 3 = 4.333333.  Calculated as a percentage of the total duration we have 4.333333/6 = 0.7222222 (72.2%).

The above is obviously an over-simplification of the calculations, but at least the idea of what needs to happen should be clear when performing the calculations using actual transaction start and end times.  Let’s take a look at a more complicated example.  Assume that there are a total of 3 simultaneous in-process machine transactions that are started and ended as follows:

Clock In      8:44:01
Start Job 1   8:47:58
Start Job 2   8:57:58
Start Job 3   9:07:58
End Job 2     9:27:58
End Job 1     9:40:00
End Job 3     9:50:00 

We can layout a picture in a spreadsheet that shows how the labor burden hours (identified as Hours_Worked) and the calculated percentage of total burden (identified as Multiplier_2) could be computed as the transactions are started and ended:

Of course the above is of limited use without the ability to see the formulas behind the calculations, so I will instruct the spreadsheet program to show the formulas for the cells:

Enough fun with a spreadsheet for now, time to switch to SQL*Plus and Oracle Database to see if we are able to perform the calculations using just a SQL statement.  Let’s create a table for demonstration purposes:

CREATE TABLE T1 (
  TRANSACTION_ID NUMBER,
  EMPLOYEE_ID VARCHAR2(15),
  SHIFT_DATE DATE,
  CLOCK_IN DATE,
  CLOCK_OUT DATE,
  HOURS_WORKED NUMBER(12,2),
  HOURS_PREVIOUS NUMBER(12,2),
  HOURS_OVERALL NUMBER(12,2),
  MULTIPLIER_2 NUMBER(4,3),
  PRIMARY KEY (TRANSACTION_ID)); 

Now let’s insert 140 rows into the table, set a random time for the CLOCK_IN column that is within the first 12 hours of the day, set the CLOCK_OUT column value up to four hours after the CLOCK_IN time, and then store the calculated number of hours between the CLOCK_IN and CLOCK_OUT times:

INSERT INTO T1 (
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE)
SELECT
  ROWNUM TRANSACTION_ID,
  DECODE(TRUNC((ROWNUM-1)/10),
         0, 'ABE',
         1, 'BOB',
         2, 'CARL',
         3, 'DOUG',
         4, 'ED',
         5, 'FRANK',
         6, 'GREG') EMPLOYEE_ID,
  TRUNC(SYSDATE) SHIFT_DATE
FROM
  DUAL
CONNECT BY
  LEVEL<=70;

INSERT INTO T1 (
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE)
SELECT
  ROWNUM+100 TRANSACTION_ID,
  DECODE(TRUNC((ROWNUM-1)/10),
         0, 'ABE',
         1, 'BOB',
         2, 'CARL',
         3, 'DOUG',
         4, 'ED',
         5, 'FRANK',
         6, 'GREG') EMPLOYEE_ID,
  TRUNC(SYSDATE+1) SHIFT_DATE
FROM
  DUAL
CONNECT BY
  LEVEL<=70;

UPDATE
  T1
SET
  CLOCK_IN=SHIFT_DATE+TRUNC(DBMS_RANDOM.VALUE(0,43200))/86400;

UPDATE
  T1
SET
  CLOCK_OUT=CLOCK_IN+TRUNC(DBMS_RANDOM.VALUE(0,14400))/86400;

UPDATE
  T1
SET
  HOURS_WORKED=(CLOCK_OUT-CLOCK_IN)*24,
  HOURS_OVERALL=(CLOCK_OUT-CLOCK_IN)*24;

COMMIT; 

Unfortunately, calculating the MULTIPLIER_2 value for each transaction will be a bit challenging, because we must consider all of the other transactions that were in-process during each of the transactions.  Let’s see what we have so far for just one of the employees:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  TO_CHAR(CLOCK_IN,'HH24:MI:SS') CLOCK_IN,
  TO_CHAR(CLOCK_OUT,'HH24:MI:SS') CLOCK_OUT,
  HOURS_WORKED
FROM
  T1
WHERE
  EMPLOYEE_ID='GREG'
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK_IN CLOCK_OU HOURS_WORKED
-------------- ----------- --------- -------- -------- ------------
            64 GREG        13-FEB-11 00:57:31 04:56:33         3.98
            63 GREG        13-FEB-11 02:12:04 04:45:36         2.56
            61 GREG        13-FEB-11 03:05:34 06:53:24         3.80
            66 GREG        13-FEB-11 03:08:21 03:15:04         0.11
            70 GREG        13-FEB-11 03:58:45 04:08:28         0.16
            62 GREG        13-FEB-11 05:24:03 07:09:41         1.76
            69 GREG        13-FEB-11 06:04:48 09:22:00         3.29
            67 GREG        13-FEB-11 07:41:20 09:07:06         1.43
            65 GREG        13-FEB-11 09:17:35 10:24:15         1.11
            68 GREG        13-FEB-11 10:27:03 14:03:30         3.61
           161 GREG        14-FEB-11 01:15:40 02:24:01         1.14
           169 GREG        14-FEB-11 01:16:01 03:45:38         2.49
           166 GREG        14-FEB-11 01:53:02 03:54:09         2.02
           163 GREG        14-FEB-11 03:47:15 06:55:34         3.14
           170 GREG        14-FEB-11 05:00:19 08:16:00         3.26
           165 GREG        14-FEB-11 06:23:45 07:56:40         1.55
           162 GREG        14-FEB-11 06:26:06 09:54:15         3.47
           168 GREG        14-FEB-11 06:33:39 08:10:56         1.62
           167 GREG        14-FEB-11 08:25:00 12:20:55         3.93
           164 GREG        14-FEB-11 11:18:45 13:25:08         2.11 

Assume that the above is the starting point – the machine transactions (known as labor tickets in the ERP package) exist in the database, even though I have not yet calculated the MULTIPLIER_2 VALUES.  Let’s determine the MULTIPLIER_2 values – first we need a way to determine which labor tickets overlap with each of the above labor tickets, and by how much they overlap.  We will use a self-join of the test table T1 (which simulates the LABOR_TICKET table in the ERP package):

SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  TO_CHAR(L2.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L2.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  TO_CHAR(DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN),'HH24:MI') CLOCK_IN,
  TO_CHAR(DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT),'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG'
ORDER BY
  L1.SHIFT_DATE,
  L1.CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK CLOCK CLOCK CLOCK CLOCK OVERLAP HOURS_WORKED
-------------- ----------- --------- ----- ----- ----- ----- ----- ----- ------- ------------
            64 GREG        13-FEB-11 00:57 04:56 02:12 04:45 02:12 04:45    2.56         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:08 03:15 03:08 03:15    0.11         3.98
            64 GREG        13-FEB-11 00:57 04:56 00:57 04:56 00:57 04:56    3.98         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:05 06:53 03:05 04:56    1.85         3.98
            64 GREG        13-FEB-11 00:57 04:56 03:58 04:08 03:58 04:08    0.16         3.98
            63 GREG        13-FEB-11 02:12 04:45 03:05 06:53 03:05 04:45    1.67         2.56
            63 GREG        13-FEB-11 02:12 04:45 00:57 04:56 02:12 04:45    2.56         2.56
            63 GREG        13-FEB-11 02:12 04:45 03:58 04:08 03:58 04:08    0.16         2.56
            63 GREG        13-FEB-11 02:12 04:45 03:08 03:15 03:08 03:15    0.11         2.56
            63 GREG        13-FEB-11 02:12 04:45 02:12 04:45 02:12 04:45    2.56         2.56
            61 GREG        13-FEB-11 03:05 06:53 03:58 04:08 03:58 04:08    0.16         3.80
            61 GREG        13-FEB-11 03:05 06:53 03:08 03:15 03:08 03:15    0.11         3.80
            61 GREG        13-FEB-11 03:05 06:53 06:04 09:22 06:04 06:53    0.81         3.80
            61 GREG        13-FEB-11 03:05 06:53 03:05 06:53 03:05 06:53    3.80         3.80
            61 GREG        13-FEB-11 03:05 06:53 02:12 04:45 03:05 04:45    1.67         3.80
            61 GREG        13-FEB-11 03:05 06:53 00:57 04:56 03:05 04:56    1.85         3.80
            61 GREG        13-FEB-11 03:05 06:53 05:24 07:09 05:24 06:53    1.49         3.80
            66 GREG        13-FEB-11 03:08 03:15 03:05 06:53 03:08 03:15    0.11         0.11
... 

In the above, the first “CLOCK” column shows the CLOCK_IN for our transaction, and the second “CLOCK” column shows the CLOCK_OUT for our transaction.  The next two “CLOCK” columns show the CLOCK_IN and CLOCK_OUT for an overlapping transaction.  If the CLOCK_IN for the transaction is later than the CLOCK_IN for the overlapping transaction, then the CLOCK_IN for our transaction is listed in the second to last “CLOCK” column, otherwise the CLOCK_IN for the overlapping transaction is displayed in the second to last “CLOCK” column.  The reverse is true for the last “CLOCK” column, where the earlier of the CLOCK_OUT values for the two transactions is displayed.  The OVERLAP column shows the number of hours of difference between the last two “CLOCK” columns – that shows the number of hours the two transactions have in common.
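As a side note, the DECODE(SIGN(...)) expressions in the above SQL statement simply pick the later of the two CLOCK_IN values and the earlier of the two CLOCK_OUT values – the GREATEST and LEAST functions express the same overlap calculation a little more compactly (a sketch that should return the same OVERLAP values as the query above):

SELECT
  L1.TRANSACTION_ID,
  ROUND((LEAST(L1.CLOCK_OUT,L2.CLOCK_OUT) -
    GREATEST(L1.CLOCK_IN,L2.CLOCK_IN)) * 24,2) OVERLAP
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
  AND L1.EMPLOYEE_ID='GREG'
ORDER BY
  L1.SHIFT_DATE,
  L1.CLOCK_IN;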

Now for the last step: to calculate the MULTIPLIER_2 value (labor burden hours as a percentage of the machine burden hours), find the total OVERLAP for each transaction, and divide the HOURS_WORKED value by that total:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2
-------------- ----------- --------- ----- ----- ------------ ------------
            64 GREG        13-FEB-11 00:57 04:56         3.98        0.460
            63 GREG        13-FEB-11 02:12 04:45         2.56        0.363
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.384
            66 GREG        13-FEB-11 03:08 03:15         0.11        0.250
            70 GREG        13-FEB-11 03:58 04:08         0.16        0.250
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.406
            69 GREG        13-FEB-11 06:04 09:22         3.29        0.493
            67 GREG        13-FEB-11 07:41 09:07         1.43        0.500
            65 GREG        13-FEB-11 09:17 10:24         1.11        0.941
            68 GREG        13-FEB-11 10:27 14:03         3.61        1.000
           161 GREG        14-FEB-11 01:15 02:24         1.14        0.409
           169 GREG        14-FEB-11 01:16 03:45         2.49        0.453
           166 GREG        14-FEB-11 01:53 03:54         2.02        0.445
           163 GREG        14-FEB-11 03:47 06:55         3.14        0.478
           170 GREG        14-FEB-11 05:00 08:16         3.26        0.320
           165 GREG        14-FEB-11 06:23 07:56         1.55        0.238
           162 GREG        14-FEB-11 06:26 09:54         3.47        0.333
           168 GREG        14-FEB-11 06:33 08:10         1.62        0.245
           167 GREG        14-FEB-11 08:25 12:20         3.93        0.608
           164 GREG        14-FEB-11 11:18 13:25         2.11        0.670 

Now let’s try setting the MULTIPLIER_2 values for GREG’s transactions to verify that our SQL statement works:

UPDATE
  T1 L3
SET
  MULTIPLIER_2=(
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.SHIFT_DATE,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
    WHERE
      L3.TRANSACTION_ID=L1.TRANSACTION_ID
      AND L3.SHIFT_DATE=L1.SHIFT_DATE
    GROUP BY
      HOURS_WORKED)
WHERE
  L3.EMPLOYEE_ID='GREG';

20 rows updated. 

20 rows updated, that is a good start.  Let’s take a look at the rows:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  TO_CHAR(CLOCK_IN,'HH24:MI:SS') CLOCK_IN,
  TO_CHAR(CLOCK_OUT,'HH24:MI:SS') CLOCK_OUT,
  HOURS_WORKED,
  MULTIPLIER_2
FROM
  T1
WHERE
  EMPLOYEE_ID='GREG'
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK_IN CLOCK_OU HOURS_WORKED MULTIPLIER_2
-------------- ----------- --------- -------- -------- ------------ ------------
            64 GREG        13-FEB-11 00:57:31 04:56:33         3.98        0.460
            63 GREG        13-FEB-11 02:12:04 04:45:36         2.56        0.363
            61 GREG        13-FEB-11 03:05:34 06:53:24         3.80        0.384
            66 GREG        13-FEB-11 03:08:21 03:15:04         0.11        0.250
            70 GREG        13-FEB-11 03:58:45 04:08:28         0.16        0.250
            62 GREG        13-FEB-11 05:24:03 07:09:41         1.76        0.406
            69 GREG        13-FEB-11 06:04:48 09:22:00         3.29        0.493
            67 GREG        13-FEB-11 07:41:20 09:07:06         1.43        0.500
            65 GREG        13-FEB-11 09:17:35 10:24:15         1.11        0.941
            68 GREG        13-FEB-11 10:27:03 14:03:30         3.61        1.000
           161 GREG        14-FEB-11 01:15:40 02:24:01         1.14        0.409
           169 GREG        14-FEB-11 01:16:01 03:45:38         2.49        0.453
           166 GREG        14-FEB-11 01:53:02 03:54:09         2.02        0.445
           163 GREG        14-FEB-11 03:47:15 06:55:34         3.14        0.478
           170 GREG        14-FEB-11 05:00:19 08:16:00         3.26        0.320
           165 GREG        14-FEB-11 06:23:45 07:56:40         1.55        0.238
           162 GREG        14-FEB-11 06:26:06 09:54:15         3.47        0.333
           168 GREG        14-FEB-11 06:33:39 08:10:56         1.62        0.245
           167 GREG        14-FEB-11 08:25:00 12:20:55         3.93        0.608
           164 GREG        14-FEB-11 11:18:45 13:25:08         2.11        0.670 

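As a quick hand check of the first row (transaction 64 on 13-FEB-11), assuming that I am reading the SQL statement correctly (each transaction overlaps itself for its full duration, and each OVERLAP value is rounded to two decimal places before the values are summed):

SUM(OVERLAP) = 3.98 (self) + 2.56 (with 63) + 1.85 (with 61) + 0.11 (with 66) + 0.16 (with 70) = 8.66
MULTIPLIER_2 = ROUND(3.98 / 8.66, 3) = 0.460

That hand-calculated value agrees with the 0.460 stored by the UPDATE statement.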
The above is the expected result.  Let’s generalize the SQL statement to update the rows for the other EMPLOYEE_ID values:

UPDATE
  T1 L3
SET
  MULTIPLIER_2=(
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.SHIFT_DATE,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)) L1
    WHERE
      L3.TRANSACTION_ID=L1.TRANSACTION_ID
      AND L3.SHIFT_DATE=L1.SHIFT_DATE
    GROUP BY
      HOURS_WORKED);

140 rows updated.  

You could of course verify that the rows were updated correctly, but I will skip that step for now.

COMMIT; 

Now let’s go back and modify one of GREG’s transactions, which in my test data happens to be the first transaction for GREG when those transactions are sorted by the CLOCK_IN time:

UPDATE
  T1
SET
  CLOCK_OUT=CLOCK_OUT+1/24,
  HOURS_WORKED=HOURS_WORKED+1
WHERE
  TRANSACTION_ID=64; 

How has that one small change affected the other transactions?

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2,
  OLD_MULT_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  L1.MULTIPLIER_2 OLD_MULT_2,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  OLD_MULT_2
ORDER BY
  SHIFT_DATE,
  CLOCK_IN;

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2 OLD_MULT_2
-------------- ----------- --------- ----- ----- ------------ ------------ ----------
            64 GREG        13-FEB-11 00:57 05:56         4.98        0.445      0.460
            63 GREG        13-FEB-11 02:12 04:45         2.56        0.363      0.363
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.349      0.384
            66 GREG        13-FEB-11 03:08 03:15         0.11        0.250      0.250
            70 GREG        13-FEB-11 03:58 04:08         0.16        0.250      0.250
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.361      0.406
            69 GREG        13-FEB-11 06:04 09:22         3.29        0.493      0.493
            67 GREG        13-FEB-11 07:41 09:07         1.43        0.500      0.500
            65 GREG        13-FEB-11 09:17 10:24         1.11        0.941      0.941
            68 GREG        13-FEB-11 10:27 14:03         3.61        1.000      1.000
           161 GREG        14-FEB-11 01:15 02:24         1.14        0.409      0.409
           169 GREG        14-FEB-11 01:16 03:45         2.49        0.453      0.453
           166 GREG        14-FEB-11 01:53 03:54         2.02        0.445      0.445
           163 GREG        14-FEB-11 03:47 06:55         3.14        0.478      0.478
           170 GREG        14-FEB-11 05:00 08:16         3.26        0.320      0.320
           165 GREG        14-FEB-11 06:23 07:56         1.55        0.238      0.238
           162 GREG        14-FEB-11 06:26 09:54         3.47        0.333      0.333
           168 GREG        14-FEB-11 06:33 08:10         1.62        0.245      0.245
           167 GREG        14-FEB-11 08:25 12:20         3.93        0.608      0.608
           164 GREG        14-FEB-11 11:18 13:25         2.11        0.670      0.670 

The above shows that we now need to readjust the MULTIPLIER_2 value for three transactions because we set the CLOCK_OUT time of the first transaction to be an hour later.  Let’s just retrieve the rows that need to be adjusted:

SELECT
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2,
  OLD_MULT_2
FROM
(SELECT
  L1.TRANSACTION_ID,
  L1.EMPLOYEE_ID,
  L1.SHIFT_DATE,
  L1.MULTIPLIER_2 OLD_MULT_2,
  TO_CHAR(L1.CLOCK_IN,'HH24:MI') CLOCK_IN,
  TO_CHAR(L1.CLOCK_OUT,'HH24:MI') CLOCK_OUT,
  ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
    DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
  L1.HOURS_WORKED
FROM
  T1 L1,
  T1 L2
WHERE
  L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
  AND L1.SHIFT_DATE=L2.SHIFT_DATE
  AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
       OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
   AND L1.EMPLOYEE_ID='GREG')
GROUP BY
  TRANSACTION_ID,
  EMPLOYEE_ID,
  SHIFT_DATE,
  CLOCK_IN,
  CLOCK_OUT,
  HOURS_WORKED,
  OLD_MULT_2
HAVING
  ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2
ORDER BY
  SHIFT_DATE,
  CLOCK_IN; 

TRANSACTION_ID EMPLOYEE_ID SHIFT_DAT CLOCK CLOCK HOURS_WORKED MULTIPLIER_2 OLD_MULT_2
-------------- ----------- --------- ----- ----- ------------ ------------ ----------
            64 GREG        13-FEB-11 00:57 05:56         4.98        0.445      0.460
            61 GREG        13-FEB-11 03:05 06:53         3.80        0.349      0.384
            62 GREG        13-FEB-11 05:24 07:09         1.76        0.361      0.406

Finally, let’s update just those rows where the MULTIPLIER_2 value should be adjusted, using a modified version of the above SQL statement (there might be an easier way to perform this update that also does not resort to procedural-type code; one possibility is sketched after the update below):

UPDATE
  T1 L3
SET
  MULTIPLIER_2 = (
    SELECT
      ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) MULTIPLIER_2
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.MULTIPLIER_2 OLD_MULT_2,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
     WHERE
       L3.TRANSACTION_ID=L1.TRANSACTION_ID
     GROUP BY
       TRANSACTION_ID,
       EMPLOYEE_ID,
       SHIFT_DATE,
       HOURS_WORKED,
       OLD_MULT_2
     HAVING
       ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2)
WHERE
  L3.TRANSACTION_ID IN (
    SELECT
      TRANSACTION_ID
    FROM
      (SELECT
         L1.TRANSACTION_ID,
         L1.MULTIPLIER_2 OLD_MULT_2,
         ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
           DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
         L1.HOURS_WORKED
       FROM
         T1 L1,
         T1 L2
       WHERE
         L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
         AND L1.SHIFT_DATE=L2.SHIFT_DATE
         AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
              OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
         AND L1.EMPLOYEE_ID='GREG') L1
     WHERE
       L3.TRANSACTION_ID=L1.TRANSACTION_ID
     GROUP BY
       TRANSACTION_ID,
       EMPLOYEE_ID,
       SHIFT_DATE,
       HOURS_WORKED,
       OLD_MULT_2
     HAVING
       ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2); 

3 rows updated.
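There might also be a way to avoid listing the overlap calculation twice in the above UPDATE statement.  The following MERGE statement is a sketch of one possible alternative (built from the same inline view and HAVING clause as above, and shown here untested; whether it is actually easier or faster would need to be verified):

MERGE INTO
  T1 T
USING
  (SELECT
     TRANSACTION_ID,
     SHIFT_DATE,
     ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3) NEW_MULT_2
   FROM
     (SELECT
        L1.TRANSACTION_ID,
        L1.SHIFT_DATE,
        L1.MULTIPLIER_2 OLD_MULT_2,
        ROUND((DECODE(SIGN(L1.CLOCK_OUT-L2.CLOCK_OUT),-1,L1.CLOCK_OUT,L2.CLOCK_OUT) -
          DECODE(SIGN(L1.CLOCK_IN-L2.CLOCK_IN),-1,L2.CLOCK_IN,L1.CLOCK_IN)) * 24,2) OVERLAP,
        L1.HOURS_WORKED
      FROM
        T1 L1,
        T1 L2
      WHERE
        L1.EMPLOYEE_ID=L2.EMPLOYEE_ID
        AND L1.SHIFT_DATE=L2.SHIFT_DATE
        AND (L1.CLOCK_IN BETWEEN L2.CLOCK_IN AND L2.CLOCK_OUT-1/86400
             OR L2.CLOCK_IN BETWEEN L1.CLOCK_IN AND L1.CLOCK_OUT-1/86400)
        AND L1.EMPLOYEE_ID='GREG')
   GROUP BY
     TRANSACTION_ID,
     SHIFT_DATE,
     HOURS_WORKED,
     OLD_MULT_2
   HAVING
     ROUND(HOURS_WORKED/DECODE(SUM(OVERLAP),0,1,SUM(OVERLAP)),3)<>OLD_MULT_2) S -- only rows whose stored value no longer matches
ON
  (T.TRANSACTION_ID=S.TRANSACTION_ID
   AND T.SHIFT_DATE=S.SHIFT_DATE)
WHEN MATCHED THEN UPDATE SET
  T.MULTIPLIER_2=S.NEW_MULT_2;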

In this ERP package there is a second way that the time for concurrent machine transactions (labor tickets) may be split; this method is typically used when a single shop resource, such as a paint booth, is used to work on multiple jobs concurrently.  In such a case, with four concurrent operations, overstating the machine burden rate by a factor of 4 is an unwise decision.  To handle this situation, the ERP package’s modules take the calculated MULTIPLIER_2 value, multiply it by HOURS_WORKED (the machine burden hours), store that result as the new HOURS_WORKED value, and then set MULTIPLIER_2 to a value of 1.0, roughly as sketched below.
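If that adjustment were expressed directly against the T1 test table, it might look something like the following (just an illustrative sketch; the actual ERP package performs this calculation inside its own modules, not through a simple SQL statement like this):

UPDATE
  T1
SET
  HOURS_WORKED=ROUND(HOURS_WORKED*MULTIPLIER_2,2), -- machine burden hours scaled down by the split factor
  MULTIPLIER_2=1
WHERE
  MULTIPLIER_2<>1; -- rows already at 1.000 need no adjustment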

Not so confusing… and probably not too useful to most of the readers of this blog.  However, the above process of attacking a problem might be helpful for other types of problems.





On the Topic of Copyright

11 02 2011

February 11, 2011 (Updated February 25, 2011)

On August 26, 2010, just before cancelling my first three-month-old order for the “Oracle Tuning: The Definitive Reference Second Edition” book, I wrote the following in a blog article (side note: the second order for the book is now more than three months old, and I have no intention of cancelling that order):

In June 2010 an article appeared on another blog that identified three blog articles appearing on a blog operated by an Oracle Certified Master (OCM) that were copied nearly verbatim from three other blogs, thus saving the OCM considerable effort, not only in typing the verbiage appearing in the articles, but also in actually building useful content.  Why spend four hours developing a blog article (this is probably the average time I spend), when a simple copy, paste, find-and-replace can be accomplished in a couple of minutes?  Those copies of articles clearly violated commonly understood copyright restrictions, and a quick search showed that the copyright violations extended far beyond the initial three articles which were mentioned (yes, I have screen captures and PDF copies of all that I identified).

So, why did I write the above paragraph?  For some reason I was recently browsing through some of the information about the DMCA:

“Any material that was posted without the copyright owner’s authorization must be removed or blocked promptly once the service provider has been notified that it has been removed, blocked, or ordered to be removed or blocked, at the originating site.”

I realized that a random reader of this blog could potentially send a note to WordPress stating that they are, in fact and under penalty of law, the owner of the material I just spent 2 hours, 4 hours, 8 hours, 16 hours, maybe even months writing, and demand that the article be taken offline simply because I clearly stated that I quoted a couple of words from one of their articles while agreeing or disagreeing with that article.  Is quoting another source, while clearly stating that the other source is being quoted, in an effort to critique the quoted section, an example of fair use?

I greatly dislike seeing people copying other people’s work and passing it off as their own work.  It takes considerable effort to put together many of my blog articles; it requires a good deal of past experience troubleshooting other problems; and once my blog articles are published, those blog articles are not complete without the valuable input provided by reader comments.  It is not uncommon for me to post a blog article with the sole intention of helping one or two people, with the knowledge that reader comments on the blog articles often take the articles to a completely different level, expanding the articles to help a much wider audience.  The article that is currently the second highest ranked article on this blog in the last 90 days is just one example of this approach in action. 

The Digital Millennium Copyright Act (DMCA), among other things, helps protect original, copyrighted content that is posted to the Internet, preventing that material from being republished for profit or claimed as the original, copyrighted work of the person who copied the previously copyrighted material.  The doctrine of fair use outlines four factors that determine whether or not a particular use of copyrighted material is considered fair use:

  • “The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
  • The nature of the copyrighted work
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole
  • The effect of the use upon the potential market for, or value of, the copyrighted work”

If you see your original, copyrighted content on an unauthorized site, file a DMCA take down notice.  If you do so, be aware that you will be swearing under penalty of perjury that you are, in fact, the owner of the copyrighted material.  Just for the entertainment value, I decided to see what Michigan law states about the penalty for perjury:

“Perjury committed in courts—Any person who, being lawfully required to depose the truth in any proceeding in a court of justice, shall commit perjury shall be guilty of a felony, punishable, if such perjury was committed on the trial of an indictment for a capital crime, by imprisonment in the state prison for life, or any term of years, and if committed in any other case, by imprisonment in the state prison for not more than 15 years.”

Translation: “You had better know what the heck you are claiming when you file a DMCA take down notice and you sign, under penalty of perjury, that you are the copyright holder.”

Why all of this discussion of copyright and DMCA?  Some of you may have noticed that my recent article titled “SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?” that was posted on February 6, 2011 has disappeared.  Why has it disappeared?  A DMCA take down notice was filed, and that notice in part stated the following:

——————–BEGIN NOTICE

Pursuant to the DMCA, please see this takedown notice.

I have confirmed that the site says “powered by WordPress.com”, and is hosted by WordPress.

This blog page https://hoopercharles.wordpress.com/2011/02/06/sql-performance-problem-awr-reports-query-is-1-in-elapsed-time-2-in-cpu-time-how-would-you-help/

Contains this material originally published on our forum:

(My modified version of the SQL statement posted by a user of an online forum who was seeking help with a performance problem)

This was copied from our forum here:

http://dbaforums.org/oracle/index.php?s=ec2b9e8d676465331d7fe6cd6e07fa0c&showtopic=20523

Where it was originally published in this format:

(SQL statement as posted by a user of an online forum who was seeking help with a performance problem)

Donald K. Burleson
CTO, Burleson Consulting

Kittrell, NC, USA 27544
(email address withheld)@remote-dba.net

I hereby swear that this content is copied from the BC DBA forum and that this page infringes on my copyright and it is not authorized.

Under penalty of perjury, I swear that all of the information contained in your Infringement Notice is accurate, and that I am the copyright owner of this material.

Signed: Donald K. Burleson, copyright owner

——————–END NOTICE

You are required to remove the specific content outlined in this notice and identified as the infringing material. If you would like to formally challenge the notice, please let us know so that we can provide you with further instructions.

Interesting… so Donald K. Burleson holds the copyright to anything posted to the forums on dbaforums.org?  So, did I remove the SQL statement that was posted by a user of the forum, or did I file a DMCA challenge stating that Donald K. Burleson is not the owner of that SQL statement?  Think about that for a moment: would I back down when trying to help someone solve a problem (falling under the doctrine of fair use), especially after readers of my blog article provided valuable information to further help the person experiencing the performance problem?  Would I back down when an action by Donald K. Burleson would impede the freely provided furthering of the Oracle community’s Oracle Database knowledge?  Think about that for a moment…

Finished thinking already?  I filed a DMCA challenge with the following text:

The quoted section mentioned in the DMCA take down notice was in fact copied from Donald K. Burleson’s forum, with minor changes to improve readability.  The copied text, however, does not infringe on Donald K. Burleson’s copyright.  The text that I copied was posted by one of the users of his online forum, and that text originated in the computer program titled “Patrol for Oracle” written by BMC Software (as indicated in the comments attached to the blog article).  The copied text is essentially a SQL statement that is submitted to a database server by the “Patrol for Oracle” program, and that SQL statement was exhibiting poor performance.  If anything, the rightful owner of the copyright for the copied text is BMC Software, and credit for the SQL statement currently appears in the comments section of the blog article.  A user posting a SQL statement (the alleged copied text) to a forum does not transfer the copyright for that SQL statement to the owner of the forum, especially if the user posting the SQL statement was not the original author of the SQL statement.

This is not the first time that Donald K. Burleson has falsely claimed ownership of copyright.

The attached “DMCA Challenge 2011-02-08.pdf” file is my signed notice that I did not infringe on a copyright owned by Donald K. Burleson.  The attached “DMCA Challenge 2011-02-08 Attachment.pdf” file shows my blog article content where the alleged copied text appears, and is followed on pages 11, 12, and 13 by the content where the alleged original text appears.

Please keep in mind that this blog is one that is dedicated to technical Oracle Database related discussions, and personal attacks do not belong here.  Attacking technical discussions containing faulty information, of course, is expected.  See the About page for more information.

——–

Edit February 25, 2011:

Donald K. Burleson, apparently realizing what it means to file a false DMCA claim under penalty of perjury, did not file a lawsuit to back up his DMCA claim to copyright ownership of the modified SQL statement that I posted in my February 6, 2011 article titled SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?  His false DMCA claim held hostage one of my articles for a total of 17 days, during which time the article was not available for readers of this blog (for the record, I completely understand and agree with WordPress’ handling of this matter, where their processes require taking DMCA challenged articles offline for two weeks to allow the true copyright holder sufficient time to file a lawsuit).  False DMCA claims from Donald K. Burleson against my blog articles will not be tolerated, and this article will serve as evidence of past abuse, if necessary.





SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?

6 02 2011

February 6, 2011 (Updated February 25, 2011)

I occasionally read various forums on the Internet including a couple of Usenet groups (comp.databases.oracle.server, comp.databases.oracle.misc), OTN forums (Database – General, SQL and PL/SQL), Oracle-L, AskTom, and a handful of other forums.  I don’t participate on all of the forums, but I try to help when possible on a couple of those forums.

I saw a SQL tuning request on one of the forums that caught my eye.  The original poster (OP) identified a SQL statement with the help of an AWR report, and he needed to tune that SQL statement.  The identified SQL statement was consuming the most elapsed time (19,307 seconds total, 1.90 seconds per execution, 48.30% of DB time), and the same SQL statement was also identified as consuming the second highest amount of CPU time (1,063 seconds).  A summarization of the advice provided by responders in the thread includes:

  • Check out these links (one of which is for a book that I reviewed a couple of months ago).
  • Start with hints, including those that change the OPTIMIZER_MODE, those that adjust the dynamic sampling, those that change the join type, and those that change which indexes are used.
  • Analyze the data dictionary.

I think that I would first start by trying to determine the intended result of the query.  What is that query supposed to show, and why are we executing it 10,176 times in the AWR reporting period?  To begin that process, I might try to reformat the SQL statement into my standardized format, something like this (changing upper/lower case, spacing, and aliases):

SELECT /*NORULE */
  'DATAPOINT EXTENTS_LEFT '  || ' ' ||
      NVL(MIN(DSE.MAXEXTS - DSE.EXTENTS), 111) || CHR(10) ||
      'DATAPOINT EXTENTS_LEFT_PCT'  || ' ' ||
      ROUND(NVL(MIN(ROUND(DSE.MAXEXTS - DSE.EXTENTS) * 100 / DSE.MAXEXTS), 100),0) BPB
FROM
  (SELECT
     DS.HEADER_FILE FILE#,
     DS.HEADER_BLOCK BLOCK#,
     DS.EXTENTS,
     DS.MAX_EXTENTS MAXEXTS,
     ST.TS#,
     SU.USER#
   FROM
     DBA_SEGMENTS DS,
     SYS.TS$ ST,
     SYS.USER$ SU
   WHERE
     ST.NAME=DS.TABLESPACE_NAME
     AND SU.NAME=DS.OWNER
     AND SEGMENT_TYPE NOT IN ('SPACE HEADER','CACHE')) DSE,
  (SELECT
     NAME,
     TS#,
     ONLINE$
   FROM
     SYS.TS$) TS,
  (SELECT
     TABLESPACE_NAME,
     CONTENTS
   FROM
     DBA_TABLESPACES) DT,
  (SELECT
     TS#,
     FILE#,
     BLOCK#
   FROM
     P$OBJ_EXCLUSION
   WHERE
     TS# IS NOT NULL
     AND FILE# IS NOT NULL
     AND BLOCK# IS NOT NULL) OEB,
  (SELECT
     TS#
   FROM
     P$OBJ_EXCLUSION
   WHERE
     OBJECT_TYPE = 'TABLE'
     AND FILE# IS NULL
     AND BLOCK# IS NULL
     AND USER# IS NULL) OETS,
  (SELECT
     USER#
   FROM
     P$OBJ_EXCLUSION
   WHERE
      USER# IS NOT NULL) OEU
WHERE
  DSE.MAXEXTS > 0 -- CACHE SEGMENT HAS MAXEXTS = 0
  AND TS.TS# > 0
  AND DSE.TS# = TS.TS#
  AND TS.ONLINE$ = 1
  AND DSE.TS# = OEB.TS#(+)
  AND OEB.TS# IS NULL
  AND DSE.FILE# = OEB.FILE#(+)
  AND OEB.FILE# IS NULL
  AND DSE.BLOCK# = OEB.BLOCK#(+)
  AND OEB.BLOCK# IS NULL
  AND DSE.TS# = OETS.TS#(+)
  AND OETS.TS# IS NULL
  AND DSE.USER# = OEU.USER#(+)
  AND OEU.USER# IS NULL
  AND TS.NAME = DT.TABLESPACE_NAME
  AND DT.CONTENTS = 'PERMANENT';

While the query begins with an interesting comment (/*NORULE */), we can ignore that for now.  What is the next step?  A few questions come to mind:

  • Why is the query accessing both SYS.TS$ and DBA_TABLESPACES?
  • Why are some of the restrictions that are placed on the tablespaces (such as TS.TS#, TS.ONLINE$, and DT.CONTENTS) not included in the WHERE clauses of the inline views (a sketch of this change appears after the list)?
  • Why does the query specify MIN(DSE.MAXEXTS – DSE.EXTENTS) with no GROUP BY clause?
  • Why does the query perform an implicit NUMBER to VARCHAR conversion in the data returned to the client?
  • The inline views that I aliased with names beginning with OE are apparently designed to provide exclusion lists for the objects in DBA_SEGMENTS, excluding by TS#, BLOCK# (HEADER_BLOCK), and USER#.  Why is the query not retrieving a distinct list of each type of item to be excluded, possibly from a materialized view?
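As an example of the second bullet point, the TS and DT inline views might be rewritten as follows (just a sketch that relocates the existing TS.TS#, TS.ONLINE$, and DT.CONTENTS restrictions; the optimizer may well push these predicates into the inline views on its own, so testing would be required to determine whether this change affects the execution plan):

  (SELECT
     NAME,
     TS#
   FROM
     SYS.TS$
   WHERE
     TS# > 0          -- previously restricted in the outer WHERE clause
     AND ONLINE$ = 1) TS,
  (SELECT
     TABLESPACE_NAME
   FROM
     DBA_TABLESPACES
   WHERE
     CONTENTS = 'PERMANENT') DT,

The corresponding TS.TS#, TS.ONLINE$, and DT.CONTENTS predicates would then be removed from the outer WHERE clause.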

What steps would you take to help the OP?  Would you just tell the OP that in a “Top 5” type of report there will always be something in the top two spots?

—–

Edit February 25, 2011:

Donald K. Burleson, apparently realizing what it means to file a false DMCA claim under penalty of perjury, did not file a lawsuit to back up his DMCA claim to copyright ownership of the modified SQL statement that I posted on February 6, 2011 in this article.  His false DMCA claim held hostage this article for a total of 17 days, during which time the article was not available for readers of this blog (for the record, I completely understand and agree with WordPress’ handing of this matter, where their processes require taking DMCA challenged articles offline for two weeks to allow the true copyright holder sufficient time to file a lawsuit).  False DMCA claims from Donald K. Burleson against my blog articles will not be tolerated, and this article will serve as evidence of past abuse, if necessary.