Brain Teaser: 10046 Extended SQL Trace Shows a FETCH Call with c=306350000, e=299174653, p=0, cr=22298 – How is that Possible?


April 26, 2012

Last week I posted a popular article that questioned the effectiveness of showing a job candidate a raw 10046 trace file captured at level 12 and asking the candidate to evaluate whether or not the SQL statement in the trace is efficient.  Many of the commenters suggested that the approach is potentially very effective if the interviewer is knowledgeable about the subject matter and recognizes that the question deserves more than a simple yes or no answer.

A year ago I posted a related article, a brain teaser in fact, that showed 0.015600 seconds of CPU consumption in 0.000510 seconds of elapsed time without using parallel execution.  The comments attached to last year’s article correctly identified the source of the seemingly impossible statistics.

A recent OTN thread (don’t click the link until you have thought about the problem for a while) shows an interesting section of a 10046 trace file, a portion of which appears below:

PARSE #2:c=0,e=4764,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1182070262719
BINDS #2:
EXEC #2:c=0,e=255,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1182070263378
WAIT #2: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182070263514
FETCH #2:c=306350000,e=299174653,p=0,cr=22298,cu=0,mis=0,r=1,dep=0,og=1,tim=1182369438322
WAIT #2: nam='SQL*Net message from client' ela= 1454 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369440799
WAIT #2: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369441012
FETCH #2:c=0,e=293,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,tim=1182369441265
WAIT #2: nam='SQL*Net message from client' ela= 7437 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369448858
WAIT #2: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369449007
FETCH #2:c=0,e=283,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=1,tim=1182369449257
WAIT #2: nam='SQL*Net message from client' ela= 6701 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369456082
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=258 tim=1182369456222 

The interesting portion of the above trace file is the first FETCH line, where a single row is returned to the client.  The c=306350000 entry indicates that 306.35 seconds of server CPU time were consumed during the fetch call.  The e=299174653 entry indicates that the elapsed time (as if captured by a stopwatch) of the single row fetch was 299.174653 seconds.  The p=0 entry indicates that 0 blocks were physically read from disk when preparing to retrieve the one row.  The cr=22298 entry indicates that 22,298 consistent gets were performed when preparing to retrieve the first row.  Typically, performing a large number of consistent gets results in the consumption of a considerable amount of CPU time, yet 22,298 consistent gets could not explain the consumption of more than 5 minutes of CPU time, even if the OP were running Oracle Database on an Intel 8088 CPU.
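As a side note, the statistics on a trace line like the one above are easy to extract and sanity-check programmatically.  The following Python sketch (my own illustration, not part of the OTN thread) parses the first FETCH line, treating the c and e values as microseconds (the unit used by Oracle Database release 9.0.1 and later):

```python
def parse_trace_line(line):
    """Split a 10046 trace call line (PARSE/EXEC/FETCH) into its statistics."""
    call, stats_text = line.split(":", 1)
    stats = dict(item.split("=") for item in stats_text.split(","))
    return call.split()[0], {key: int(value) for key, value in stats.items()}

fetch = "FETCH #2:c=306350000,e=299174653,p=0,cr=22298,cu=0,mis=0,r=1,dep=0,og=1,tim=1182369438322"
call, s = parse_trace_line(fetch)

cpu_seconds = s["c"] / 1_000_000      # CPU time, microseconds -> seconds
elapsed_seconds = s["e"] / 1_000_000  # elapsed time, microseconds -> seconds
print(call, cpu_seconds, elapsed_seconds, round(cpu_seconds - elapsed_seconds, 6))
```

The printed difference of roughly 7.18 CPU seconds beyond the elapsed time is exactly the anomaly raised in question 3 below.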

The brain teaser posed (directly or indirectly) by the OP:

  1. Why did the first fetch of a single row require roughly 299 seconds of elapsed time, when the second fetch of 15 rows apparently required 0.000293 seconds?
  2. Why did the first fetch of a single row require roughly 306 seconds of CPU time, when the second fetch of 15 rows apparently required no CPU time?
  3. How is it possible that the CPU consumption exceeds the elapsed time by 7.175347 seconds?
  4. How is it possible that 306.35 seconds of CPU were burned while performing only 22,298 consistent gets?
  5. The trace file indicates that 306.35 seconds of CPU time were burned during the first fetch call; does the number 306.35 represent 100% of the CPU consumption during the fetch call, or is the actual CPU consumption higher or lower than that number?
  6. What release version of Oracle Database is the OP running?
  7. What operating system is the OP using for Oracle Database?
  8. What other questions might you ask of a job candidate if the 10046 trace file from the OTN thread were handed to a job candidate during an interview?

Think about the questions for a couple of minutes before visiting the OTN thread.  Several of the above questions are answered in that thread.





Temporary Tablespace Storage Parameters – What is Wrong with this Quote


April 22, 2012 (Modified April 24, 2012)

I had intended to finish assembling the second half of the “Oracle Database 11gR2 Performance Tuning Cookbook” book review; however, my free time that may be dedicated to book reviews has been a bit limited lately (I have a review of another book started, but left untouched for the last three months).  On a side note, I am a little shocked that none of the 21 errata reports that I filed for the first 88 pages of the book made it onto the publisher’s website, but maybe that is not too uncommon (the same has happened for at least one Apress book).

I encountered an interesting quote on page 252 of the “Oracle Database 11gR2 Performance Tuning Cookbook” that I thought I would share with readers:

“The PCTINCREASE parameter cannot be specified when we use the AUTOALLOCATE option for the tablespace. We can have different tablespaces for each database user and we can query the dynamic performance view V$SORT_USAGE to retrieve the details about the active disk sorts occurring in the instance.

We can obtain better performance by striping the temporary tablespace using multiple disks. Please note that – due to their nature – temporary tablespaces are not affected by backup and restore operations.”

If you own the book, start on page 251 and read through the end of page 252 – there are a couple of additional interesting sentences in that section of the book.

What, if anything, is wrong with the above quote from the book? For those readers that need some assistance with this task, the Oracle Database 11.2 documentation might be helpful.

The point of blog articles like this one is not to insult authors who have spent thousands of hours carefully constructing an accurate and helpful book, but instead to suggest that readers investigate when something stated does not exactly match what one believes to be true. It could be that the author “took a long walk down a short pier”, or that the author is revealing accurate information which simply cannot be found through other resources (and may in the process be directly contradicting information sources you have used in the past). If you do not investigate in such cases, you may lose an important opportunity to learn something that could prove to be extremely valuable.

(Added April 24, 2012, my book review notes for the four quoted sentences, plus one of the preceding sentences)

 

  • The book’s discussion of defining the INITIAL and NEXT storage parameters for temporary tablespaces as a multiple of the SORT_AREA_SIZE parameter seems to be out of place in an Oracle Database 11g R2 performance tuning book – dictionary managed tablespaces were deprecated with the release of Oracle Database 9.2, and the effect of these parameters is different in locally managed tablespaces (page 252).
  • The book states, “The PCTINCREASE parameter cannot be specified when we use the AUTOALLOCATE option for the tablespace.”  This statement appears to be incorrect – the Oracle Database documentation states that a specified value for PCTINCREASE will be ignored when creating an AUTOALLOCATE tablespace (page 252).
  • The book states, “We can have different tablespaces for each user and we can query the dynamic performance view V$SORT_USAGE to retrieve the details about the active disk sorts occurring in the instance.”  This statement is a bit confusing, so it is probably best to break the sentence into two separate logical sentences for analysis.  The first half of the statement seems to suggest that a separate temporary tablespace should (or could) be created for each user – I am not sure that this is the author’s intended interpretation; the default temporary tablespace may be set at the user level so that not all users are required to use the same (default) temporary tablespace.  For the second logical sentence, the V$SORT_USAGE performance view was deprecated with the release of Oracle Database 9.2, replaced with V$TEMPSEG_USAGE.  In Oracle Database 11.2 (the version mentioned on the book’s front cover), the V$SORT_USAGE view is based on the GV$SORT_USAGE view, which selects from X$KTSSO (confirm by issuing SELECT VIEW_DEFINITION FROM V$FIXED_VIEW_DEFINITION WHERE VIEW_NAME='GV$SORT_USAGE';).  The GV$SORT_USAGE view definition, which is pointed to by both the V$SORT_USAGE and V$TEMPSEG_USAGE synonyms, indicates that the SEGTYPE column values may be one of SORT, HASH, DATA, INDEX, LOB_DATA, LOB_INDEX, or UNDEFINED.  The potential values of the SEGTYPE column suggest that the view potentially shows a variety of activity in addition to what the book mentions (the WHERE clause should be adapted to restrict the rows returned from this view, so that the results are consistent with this chapter’s contents).  The V$SORT_USAGE view also shows entries for aborted SQL statements, for instance when Ctrl-C is pressed in SQL*Plus and another SQL statement has not yet been executed by the session (page 252).
  • The book states, “Please note that – due to their nature – temporary tablespaces are not affected by backup and restore operations.”  This is a vague statement at best, and the accuracy of the statement depends in part on the type of backup performed (online or offline, user-managed or RMAN hot backup, etc.).  An online RMAN backup will not back up the tempfiles of locally managed temporary tablespaces; however, when those backed up databases are restored and recovered, Oracle Database 10.2 and later will recreate the temporary tablespace tempfiles when the database is opened (page 252).




Extents of an Oracle Database DBA’s Knowledge Base


April 18, 2012

I saw an interesting thread on the OTN forums this morning that forced me to stop and think about several items.  The thread contains a question posed during a recent job interview – one of the best questions that I have seen (dare I say, better than one I might have crafted myself).  Consider this: you are given a SQL statement, and a raw 10046 trace captured at level 12.  The question is very simple: Is the SQL statement efficient or not?

What is the point of the interview question?  Barely scratching the surface:

  • It tests how you will approach unfamiliar problems if you encounter the problem once hired – it may not be this exact question, but this may be similar to a real problem that was faced by the company’s DBAs. Do you panic and say that it cannot be solved, do you say that the job belongs to someone else, do you say that there is no point in looking at the raw trace file, or do you dive into the problem and think about what could happen if… (yes, there is something here, but I do not want to spoil the fun for other candidates that may face this exact problem).
  • It determines in part what types of material you have read to further enhance your knowledge of Oracle Database.
  • It determines whether or not you recognize the potential problems that may be associated with specific Oracle Database releases (unpatched and feeling ANSI?)
  • It determines whether or not you recognize differences in predicated cardinality and the actual number of rows returned, and how that may affect the performance outcome. Is it better to filter early or filter late, and does that apply to this situation?
  • Is there a DISTINCT possibility that the ROWNUM pseudocolumn may allow an arbitrary set of rows to be returned, possibly dependent on the value of the OPTIMIZER_FEATURES_ENABLE parameter (see *** item below)?
  • It determines whether or not you follow logic or magic when troubleshooting problems.

Reflecting on the question a bit, did the person in the interview have access to Google, TKPROF, the Oracle Database documentation, books with red covers, books with yellow covers, books with white covers, books with blue covers, etc.?

*** A year ago in an OTN thread I made the following comment that applies to at least one of the above bullet points and also to the answer to the question posed by the interviewer:

“Other than the “HASH GROUP BY” operation on the second line of the printed execution plan for 11.2.0.2, and the “SORT GROUP BY” operation on the second line of the printed execution plan for 10.1.0.3, the execution plans are identical. Keep in mind that 11.2.0.2, due to adaptive cursor sharing, is capable of changing the execution plan for future executions (there was a single parse call in this case, so adaptive cursor sharing likely did not take place). Also, keep in mind that 11.2.0.2 by default in a 10046 trace will output the row source operation execution plan after the first execution, rather than when the cursor was closed – this explains the difference in the ROWS column in the execution plan. If we look closely at the summary information, 11.2.0.2 performed 113,319 consistent gets in 673 executions, while 10.1.0.3 performed 175,168 consistent gets in 644 executions. Each execution in 11.2.0.2 is in theory more efficient than each execution in 10.1.0.3, yet the average time per execution is much longer.”

In your opinion, what do you think of the interview question?

(Please do not provide an exact answer to the interviewer’s question for at least 14 days; there is no sense in disrupting the interview process.)





Reproducing a Canned Report using a Single SQL Statement


April 11, 2012

I recently received an interesting request for assistance from an ERP email mailing list.  The author of the email wanted to reproduce a canned report found in the ERP package so that the information could be published on a Microsoft Sharepoint system.  The author of the email is using SQL Server for the database backend, but we will ignore that bit of complexity for the moment.

The canned report that ships with the ERP system looks similar to the following:

Basically, the end user enters a start and an end date for the report, the ERP system checks the shop resource availability calendar for the days between the start and end dates, and then calculates the Capacity value from that calendar (there is a default shop resource calendar, and an override calendar for some of the shop resources).  The Act Hrs column in the report is simply the sum of hours calculated from the employee labor transactions that are recorded in real-time on the production floor – that column value is easy to calculate, but is subject to minor rounding errors and date calculation errors when an employee labor transaction starts before midnight and ends after midnight on either the start date or end date specified by the end user running the report.  The Est Hrs column in the report is a little more difficult to calculate, requiring a bit of calculation that determines what item the employee actually claimed to have produced, and in what quantity – what if the employee claimed to have produced 6 of 10 widgets in the specified time period; might he have completed 99% of the seventh widget but not yet reported that widget complete?  The EFF column is simply the Est Hrs column value divided by the Act Hrs column value, with the result multiplied by 100.  The Act Util column is also easy to calculate: the Act Hrs column value divided by the Capacity column value, with the result multiplied by 100.  The Est Util column is simply the Est Hrs column value divided by the Capacity column value, with the result multiplied by 100.
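The derived columns are simple ratios; here is a quick Python sketch of the arithmetic (the numbers are hypothetical, just to illustrate the calculations):

```python
def report_metrics(est_hrs, act_hrs, capacity):
    """Derive the EFF, Act Util, and Est Util report columns."""
    eff = est_hrs / act_hrs * 100        # efficiency: estimated vs. actual hours
    act_util = act_hrs / capacity * 100  # actual utilization of the capacity
    est_util = est_hrs / capacity * 100  # estimated utilization of the capacity
    return round(eff, 2), round(act_util, 2), round(est_util, 2)

# Hypothetical shop resource: 120 estimated hours, 150 actual hours, 200 hours of capacity
print(report_metrics(120, 150, 200))  # -> (80.0, 75.0, 60.0)
```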

So, where do we start in trying to reproduce this particular report?  How about enabling a 10046 trace for the session that creates the report?  Analyzing the resulting trace file might provide some insight into how the report is built.  Below is a summary of the SQL statements found in the trace file:

Query 1: (this query retrieves the estimated hours, shop resource description, and various other information that does not appear on the report – this appears to be the query that provokes the ERP system to issue the queries that follow)

SELECT
  L.RESOURCE_ID,
  L.WORKORDER_TYPE,
  L.WORKORDER_BASE_ID, 
  L.WORKORDER_LOT_ID,
  L.WORKORDER_SPLIT_ID,
  L.WORKORDER_SUB_ID, 
  L.OPERATION_SEQ_NO,
  L.TRANSACTION_DATE,
  L.HOURS_WORKED,
  L.GOOD_QTY, 
  L.TYPE,
  L.EMPLOYEE_ID, 
  OP.SETUP_HRS,
  OP.RUN,
  OP.RUN_TYPE,
  OP.LOAD_SIZE_QTY,
  OP.CALC_END_QTY, 
  R.DESCRIPTION, 
  RTRIM(E.LAST_NAME),
  RTRIM(E.FIRST_NAME),
  E.MIDDLE_INITIAL, 
  W.PART_ID,
  P.DESCRIPTION,
  L.TRANSACTION_ID 
FROM
  LABOR_TICKET L,
  OPERATION OP,
  SHOP_RESOURCE R,
  EMPLOYEE E,
  WORK_ORDER W,
  PART P 
WHERE
  L.RESOURCE_ID = R.ID 
  AND L.EMPLOYEE_ID = E.ID 
  AND L.WORKORDER_TYPE = OP.WORKORDER_TYPE 
  AND L.WORKORDER_BASE_ID = OP.WORKORDER_BASE_ID 
  AND L.WORKORDER_LOT_ID = OP.WORKORDER_LOT_ID 
  AND L.WORKORDER_SPLIT_ID = OP.WORKORDER_SPLIT_ID 
  AND L.WORKORDER_SUB_ID = OP.WORKORDER_SUB_ID 
  AND L.OPERATION_SEQ_NO = OP.SEQUENCE_NO  
  AND W.TYPE = OP.WORKORDER_TYPE 
  AND W.BASE_ID = OP.WORKORDER_BASE_ID 
  AND W.LOT_ID = OP.WORKORDER_LOT_ID 
  AND W.SPLIT_ID = OP.WORKORDER_SPLIT_ID 
  AND W.SUB_ID = OP.WORKORDER_SUB_ID  
  AND W.PART_ID = P.ID (+)  
  AND TRUNC(L.TRANSACTION_DATE) BETWEEN :1  AND :2
ORDER BY
  L.RESOURCE_ID,
  L.WORKORDER_TYPE,
  L.WORKORDER_BASE_ID, 
  L.WORKORDER_LOT_ID,
  L.WORKORDER_SPLIT_ID,
  L.WORKORDER_SUB_ID,
  L.OPERATION_SEQ_NO;

Query 2: (this query returns the quantity of a specific resource that is available for each of three shifts… for instance, the number of employees with a specific skill)

SELECT
  SHIFT_1_CAPACITY,
  SHIFT_2_CAPACITY,
  SHIFT_3_CAPACITY 
FROM
  SHOP_RESOURCE 
WHERE
  ID = :1;

Query 3: (this query returns the number of hours of availability for a specific resource, for each day of the week; the default resource calendar has a NULL value for the RESOURCE_ID column, so the specified sort order will return the default resource calendar rows before the over-ride calendar for a specific shop resource)

SELECT 
  DAY_OF_WEEK,
  START_OF_DAY,
  SHIFT_1,
  SHIFT_2,
  SHIFT_3 
FROM
  CALENDAR_WEEK 
WHERE
  (SCHEDULE_ID IS NULL OR SCHEDULE_ID = 'STANDARD') 
  AND (RESOURCE_ID IS NULL OR RESOURCE_ID = :1 ) 
ORDER BY
  SCHEDULE_ID,
  RESOURCE_ID DESC,
  DAY_OF_WEEK;

Query 4: (this query returns the exceptions, planned days of unavailability, for the default resource calendar and the shop resource specific over-ride calendar.  I just noticed a logic error in this SQL statement – what about those scheduled exceptions that start before the user specified start date and/or those scheduled exceptions that end after the user specified end date, where some of those dates fall into the user specified date range?)

SELECT  
  START_DATE,
  END_DATE,
  START_OF_DAY,
  SHIFT_1,
  SHIFT_2,
  SHIFT_3, 
  SHIFT_1_CAPACITY,
  SHIFT_2_CAPACITY,
  SHIFT_3_CAPACITY 
FROM
  CALENDAR_CHANGE 
WHERE
  (SCHEDULE_ID IS NULL OR SCHEDULE_ID = 'STANDARD') 
  AND (RESOURCE_ID IS NULL OR RESOURCE_ID = :1 ) 
  AND START_DATE >= :2 AND END_DATE <= :3
ORDER BY
  SCHEDULE_ID,
  RESOURCE_ID,
  START_DATE;

Query 5: (this query returns the number of operation setup hours for a specific operation, time spent preparing to produce parts, reported on the shop floor – it would seem that Query 1 could be modified to return this information)

SELECT
  SUM(HOURS_WORKED) 
FROM
  LABOR_TICKET 
WHERE
  WORKORDER_TYPE = :1     
  AND WORKORDER_BASE_ID = :2
  AND WORKORDER_LOT_ID = :3
  AND WORKORDER_SPLIT_ID = :4
  AND WORKORDER_SUB_ID = :5
  AND OPERATION_SEQ_NO = :6
  AND TYPE = 'S';

Now that we have a general idea of what steps the ERP system is performing to reproduce the report, where do we start?  We start by telling the OP (the person with the SQL Server backend) that with an Oracle Database backend, we could produce this report using a single SQL statement (without any stored procedures) – that approach would certainly eliminate a lot of back and forth communication between the client computer and the database server, which is an important consideration if a high-latency network connects the two devices.  For the sake of simplicity, I will set the above query 4 aside for now.  Since there is an index on the LABOR_TICKET.TRANSACTION_DATE column, the ERP system loses the benefit of that index by applying the TRUNC function to that column’s values – so we will need to fix that problem (for instance, by comparing the unmodified column values with an inclusive date range).

First, we need to know how many hours of capacity are available for each resource.  If a specific over-ride schedule (shop calendar) is not available for a shop resource, then the standard schedule is used.  To retrieve the schedules for the shop resources that do not have a defined unique schedule, we can create a Cartesian join between two tables with a NOT EXISTS clause to exclude those shop resources with an over-ride schedule:

SELECT
  SR.ID AS RESOURCE_ID, 
  CW.DAY_OF_WEEK,
  CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
  CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
  CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
  'DEFAULT' CALENDAR_TYPE
FROM
  CALENDAR_WEEK CW,
  SHOP_RESOURCE SR
WHERE
  (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
  AND CW.RESOURCE_ID IS NULL
  AND NOT EXISTS (
    SELECT
      C.RESOURCE_ID
    FROM
      CALENDAR_WEEK C
    WHERE
      SR.ID=C.RESOURCE_ID);

We also need to retrieve the shop resources with defined over-ride schedules:

SELECT
  SR.ID AS RESOURCE_ID, 
  CW.DAY_OF_WEEK,
  CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
  CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
  CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
  'RESOURCE' CALENDAR_TYPE
FROM
  CALENDAR_WEEK CW,
  SHOP_RESOURCE SR
WHERE
  (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
  AND CW.RESOURCE_ID=SR.ID;

Finally, we need to UNION ALL the above two queries:

SELECT
  SR.ID AS RESOURCE_ID, 
  CW.DAY_OF_WEEK,
  CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
  CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
  CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
  'DEFAULT' CALENDAR_TYPE
FROM
  CALENDAR_WEEK CW,
  SHOP_RESOURCE SR
WHERE
  (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
  AND CW.RESOURCE_ID IS NULL
  AND NOT EXISTS (
    SELECT
      C.RESOURCE_ID
    FROM
      CALENDAR_WEEK C
    WHERE
      SR.ID=C.RESOURCE_ID)
UNION ALL
SELECT
  SR.ID AS RESOURCE_ID, 
  CW.DAY_OF_WEEK,
  CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
  CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
  CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
  'RESOURCE' CALENDAR_TYPE
FROM
  CALENDAR_WEEK CW,
  SHOP_RESOURCE SR
WHERE
  (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
  AND CW.RESOURCE_ID=SR.ID
ORDER BY
  RESOURCE_ID,
  DAY_OF_WEEK;

The ERP system uses the following code numbers to represent each day of the week in the output produced by the above SQL statement:

0: Saturday
1: Sunday
2: Monday
3: Tuesday
4: Wednesday
5: Thursday
6: Friday

On Oracle Database, we are able to use the TO_CHAR function to almost reproduce the above translation table, although NLS settings that are in effect could produce results that require further adjustment.  The following formula returns 1 for a Sunday and 6 for a Friday, but 7 for a Saturday:

TO_CHAR(TRANSACTION_DATE, 'D')

We are able to convert the 7 value for Saturdays into a 0 by using the MOD function to return the remainder after dividing by 7:

MOD(TO_CHAR(TRANSACTION_DATE, 'D'), 7)
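The same translation can be sketched in Python as a cross-check (Python’s weekday() numbering starts at 0 for Monday, so the shift differs from the Oracle formula, but the resulting codes match the ERP convention):

```python
from datetime import date

def erp_day_of_week(d):
    """Map a date to the ERP day code: 0=Saturday, 1=Sunday, ..., 6=Friday."""
    return (d.weekday() + 2) % 7  # weekday(): 0=Monday .. 6=Sunday

print(erp_day_of_week(date(2012, 3, 31)))  # a Saturday -> 0
print(erp_day_of_week(date(2012, 4, 1)))   # a Sunday   -> 1
print(erp_day_of_week(date(2012, 3, 26)))  # a Monday   -> 2
```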

Assume that we want to generate a report for the dates between March 26, 2012 and April 14, 2012.  We need a way to determine the number of Mondays, Tuesdays, Wednesdays, Thursdays, Fridays, Saturdays, and Sundays between the two dates (including the end-point dates).  There are a couple of approaches to this problem:

  1. Use the CONNECT BY LEVEL syntax to generate a row for each date between the start and end dates.
  2. Use a pre-created, statically defined table that simply lists all of the dates between an arbitrary start and end date.
  3. Use a stored procedure to calculate the number of each day of the week between the start and end dates.
  4. Mathematically calculate within the SQL statement the number of each day of the week.

I will use the first of the above approaches… I suppose this approach would be a bit of a challenge with a SQL Server backend.

SELECT
  TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1) MY_DATE,
  TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D') AS D,
  MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) DAY_OF_WEEK
FROM
  DUAL
CONNECT BY
  LEVEL <= (TO_DATE('04/14/2012','MM/DD/YYYY') - TO_DATE('03/26/2012','MM/DD/YYYY'))+1;

MY_DATE   D DAY_OF_WEEK
--------- - -----------
26-MAR-12 2           2
27-MAR-12 3           3
28-MAR-12 4           4
29-MAR-12 5           5
30-MAR-12 6           6
31-MAR-12 7           0
01-APR-12 1           1
02-APR-12 2           2
03-APR-12 3           3
04-APR-12 4           4
05-APR-12 5           5
06-APR-12 6           6
07-APR-12 7           0
08-APR-12 1           1
09-APR-12 2           2
10-APR-12 3           3
11-APR-12 4           4
12-APR-12 5           5
13-APR-12 6           6
14-APR-12 7           0

If we were to use the above in a production environment, we certainly would convert the literal (constant) date values to bind variables.  The DAY_OF_WEEK column values (assuming that the NLS settings result in Sunday being the first day of the week) are the important values in the above output.  If we also implemented query 4 used by the ERP system, then the MY_DATE column values would also need to be considered.  How many Mondays, Tuesdays, Wednesdays, etc. are between the two dates?

SELECT
  MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) DAY_OF_WEEK,
  COUNT(*) NUM_DAYS
FROM
  DUAL
CONNECT BY
  LEVEL <= (TO_DATE('04/14/2012','MM/DD/YYYY') - TO_DATE('03/26/2012','MM/DD/YYYY'))+1
GROUP BY
  MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7);

DAY_OF_WEEK   NUM_DAYS
----------- ----------
          0          3
          1          2
          2          3
          3          3
          4          3
          5          3
          6          3

As shown above, just 2 Sundays, but 3 of every other day of the week.
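The CONNECT BY LEVEL result is easy to cross-check outside the database; here is a Python sketch that tallies the same ERP day codes (0=Saturday through 6=Friday) over the same inclusive date range:

```python
from collections import Counter
from datetime import date, timedelta

def day_of_week_counts(start, end):
    """Count occurrences of each ERP day code (0=Sat .. 6=Fri), inclusive of both dates."""
    num_days = (end - start).days + 1
    return Counter(((start + timedelta(n)).weekday() + 2) % 7 for n in range(num_days))

counts = day_of_week_counts(date(2012, 3, 26), date(2012, 4, 14))
print(sorted(counts.items()))  # -> [(0, 3), (1, 2), (2, 3), (3, 3), (4, 3), (5, 3), (6, 3)]
```

The tally agrees with the query output above: 2 Sundays (code 1) and 3 of every other day of the week.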

We now have two SQL statements (the one that indicates the available capacity per day of week, and the other that indicates the number of each day of the week in the user specified time interval) that need to be joined together:

SELECT
  C.RESOURCE_ID,
  C.DAY_OF_WEEK,
  (C.HOURS_1 + C.HOURS_2 + C.HOURS_3) * D.NUM_DAYS
FROM
  (SELECT
    SR.ID AS RESOURCE_ID, 
    CW.DAY_OF_WEEK,
    CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
    CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
    CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
    'DEFAULT' CALENDAR_TYPE
  FROM
    CALENDAR_WEEK CW,
    SHOP_RESOURCE SR
  WHERE
    (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
    AND CW.RESOURCE_ID IS NULL
    AND NOT EXISTS (
      SELECT
        C.RESOURCE_ID
      FROM
        CALENDAR_WEEK C
      WHERE
        SR.ID=C.RESOURCE_ID)
  UNION ALL
  SELECT
    SR.ID AS RESOURCE_ID, 
    CW.DAY_OF_WEEK,
    CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
    CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
    CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
    'RESOURCE' CALENDAR_TYPE
  FROM
    CALENDAR_WEEK CW,
    SHOP_RESOURCE SR
  WHERE
    (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
    AND CW.RESOURCE_ID=SR.ID) C,
  (SELECT
    MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) DAY_OF_WEEK,
    COUNT(*) NUM_DAYS
  FROM
    DUAL
  CONNECT BY
    LEVEL <= (TO_DATE('04/14/2012','MM/DD/YYYY') - TO_DATE('03/26/2012','MM/DD/YYYY'))+1
  GROUP BY
    MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) ) D
WHERE
  C.DAY_OF_WEEK=D.DAY_OF_WEEK;

The above SQL statement may seem somewhat large and complicated if you simply jumped into the middle of this article.

We still need to GROUP the result by the RESOURCE_ID column and add in the DESCRIPTION that is associated with each RESOURCE_ID:

SELECT
  C.RESOURCE_ID,
  C.DESCRIPTION,
  SUM((C.HOURS_1 + C.HOURS_2 + C.HOURS_3) * D.NUM_DAYS) CAPACITY
FROM
  (SELECT
    SR.ID AS RESOURCE_ID, 
    SR.DESCRIPTION,
    CW.DAY_OF_WEEK,
    CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
    CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
    CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
    'DEFAULT' CALENDAR_TYPE
  FROM
    CALENDAR_WEEK CW,
    SHOP_RESOURCE SR
  WHERE
    (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
    AND CW.RESOURCE_ID IS NULL
    AND NOT EXISTS (
      SELECT
        C.RESOURCE_ID
      FROM
        CALENDAR_WEEK C
      WHERE
        SR.ID=C.RESOURCE_ID)
  UNION ALL
  SELECT
    SR.ID AS RESOURCE_ID,
    SR.DESCRIPTION,
    CW.DAY_OF_WEEK,
    CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
    CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
    CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
    'RESOURCE' CALENDAR_TYPE
  FROM
    CALENDAR_WEEK CW,
    SHOP_RESOURCE SR
  WHERE
    (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
    AND CW.RESOURCE_ID=SR.ID) C,
  (SELECT
    MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) DAY_OF_WEEK,
    COUNT(*) NUM_DAYS
  FROM
    DUAL
  CONNECT BY
    LEVEL <= (TO_DATE('04/14/2012','MM/DD/YYYY') - TO_DATE('03/26/2012','MM/DD/YYYY'))+1
  GROUP BY
    MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) ) D
WHERE
  C.DAY_OF_WEEK=D.DAY_OF_WEEK
GROUP BY
  C.RESOURCE_ID,
  C.DESCRIPTION
ORDER BY
  RESOURCE_ID;

The above SQL statement results in the following output – we now have the first column in the ERP system’s canned report:

At this point, my claim of being able to reproduce this canned report in a single SQL statement might seem suspect – all of the above work yielded just a single column in the report.  It’s easy: just build the report in separate units, and then join the separate units.

As I mentioned earlier, calculating the estimated average hours and average actual hours could be a little difficult.  We might start with something like this:

SELECT
  LT.RESOURCE_ID,
  O.RUN_HRS AS EST_HOURS,
  O.CALC_END_QTY AS OPERATION_QTY,
  ROUND(O.RUN_HRS/O.CALC_END_QTY,2) AS AVG_EST_HOURS,
  SUM(LT.HOURS_WORKED) AS ACT_HOURS,
  SUM(LT.GOOD_QTY) AS COMPLETED_QTY,
  ROUND(DECODE(SUM(LT.GOOD_QTY),0,SUM(LT.HOURS_WORKED),SUM(LT.HOURS_WORKED)/SUM(LT.GOOD_QTY)),2) AVG_ACT_HOURS
FROM
  LABOR_TICKET LT,
  OPERATION O
WHERE
  LT.WORKORDER_TYPE='W'
  AND LT.TYPE='R'
  AND LT.WORKORDER_TYPE=O.WORKORDER_TYPE
  AND LT.WORKORDER_BASE_ID=O.WORKORDER_BASE_ID
  AND LT.WORKORDER_LOT_ID=O.WORKORDER_LOT_ID
  AND LT.WORKORDER_SPLIT_ID=O.WORKORDER_SPLIT_ID
  AND LT.WORKORDER_SUB_ID=O.WORKORDER_SUB_ID
  AND LT.OPERATION_SEQ_NO=O.SEQUENCE_NO
  AND LT.SHIFT_DATE BETWEEN TO_DATE('03/26/2012','MM/DD/YYYY') AND TO_DATE('04/14/2012','MM/DD/YYYY')
GROUP BY
  LT.RESOURCE_ID,
  O.RUN_HRS,
  O.CALC_END_QTY
ORDER BY
  LT.RESOURCE_ID;

The output of the above SQL statement looks similar to the following:

Nice, if we were just interested in summarizing by the individual operations.  However, the above will not work as a starting point for the next two columns in the report because we need to summarize by the RESOURCE_ID column – if we group on that column, we will throw off the accuracy of the efficiency calculations in the report.  So, we need to take a step back and try again.

First, we will summarize the data from the LABOR_TICKET table by itself – notice that I am not using the TRUNC function around the date column, and I am using the SHIFT_DATE column rather than the TRANSACTION_DATE column (the SHIFT_DATE column in this table is better able to work around the issue of labor transactions that cross midnight, and the time portion of its date values is already truncated to midnight):

SELECT
  LT.WORKORDER_TYPE,
  LT.WORKORDER_BASE_ID,
  LT.WORKORDER_LOT_ID,
  LT.WORKORDER_SPLIT_ID,
  LT.WORKORDER_SUB_ID,
  LT.OPERATION_SEQ_NO,
  LT.RESOURCE_ID,
  SUM(LT.HOURS_WORKED) AS ACT_HOURS,
  SUM(LT.GOOD_QTY) AS COMPLETED_QTY
FROM
  LABOR_TICKET LT
WHERE
  LT.SHIFT_DATE BETWEEN TO_DATE('03/26/2012','MM/DD/YYYY') AND TO_DATE('04/14/2012','MM/DD/YYYY')
  AND LT.TYPE='R'
  AND LT.WORKORDER_TYPE='W'
GROUP BY
  LT.WORKORDER_TYPE,
  LT.WORKORDER_BASE_ID,
  LT.WORKORDER_LOT_ID,
  LT.WORKORDER_SPLIT_ID,
  LT.WORKORDER_SUB_ID,
  LT.OPERATION_SEQ_NO,
  LT.RESOURCE_ID
HAVING
  SUM(LT.HOURS_WORKED)>0;
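Pre-summarizing before the join matters because each operation row would otherwise be repeated once per matching labor ticket, inflating any per-operation figures that are summed afterward.  Here is a minimal SQLite sketch of that fan-out effect, using a hypothetical two-table miniature of the schema (one operation, two labor tickets against it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniature schema: one operation row, two labor tickets against it
cur.execute("CREATE TABLE operation (op_id INTEGER, run_hrs REAL)")
cur.execute("CREATE TABLE labor_ticket (op_id INTEGER, hours_worked REAL)")
cur.execute("INSERT INTO operation VALUES (1, 10.0)")
cur.executemany("INSERT INTO labor_ticket VALUES (?, ?)", [(1, 4.0), (1, 5.0)])

# Joining first and aggregating afterward repeats run_hrs once per ticket
naive = cur.execute(
    "SELECT SUM(o.run_hrs) FROM operation o "
    "JOIN labor_ticket lt ON lt.op_id = o.op_id"
).fetchone()[0]          # 20.0 -- run_hrs counted twice

# Pre-summarizing the tickets in an inline view keeps run_hrs intact
fixed = cur.execute(
    """SELECT SUM(o.run_hrs)
       FROM operation o
       JOIN (SELECT op_id, SUM(hours_worked) AS act_hours
             FROM labor_ticket GROUP BY op_id) lt
         ON lt.op_id = o.op_id"""
).fetchone()[0]          # 10.0 -- one row per operation survives the join
```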

Now that we have pre-summarized the results from the LABOR_TICKET table, we can slide the above SQL statement into an inline view and join that inline view with the OPERATION table:

SELECT
  LT.RESOURCE_ID,
  SUM(ROUND(O.RUN_HRS/O.CALC_END_QTY*LT.COMPLETED_QTY,2)) AS ENG_HOURS,
  SUM(LT.ACT_HOURS) AS ACT_HOURS,
  ROUND(SUM(ROUND(O.RUN_HRS/O.CALC_END_QTY*LT.COMPLETED_QTY,2))/SUM(LT.ACT_HOURS)*100,2) AS EFF
FROM
  (SELECT
    LT.WORKORDER_TYPE,
    LT.WORKORDER_BASE_ID,
    LT.WORKORDER_LOT_ID,
    LT.WORKORDER_SPLIT_ID,
    LT.WORKORDER_SUB_ID,
    LT.OPERATION_SEQ_NO,
    LT.RESOURCE_ID,
    SUM(LT.HOURS_WORKED) AS ACT_HOURS,
    SUM(LT.GOOD_QTY) AS COMPLETED_QTY
  FROM
    LABOR_TICKET LT
  WHERE
    LT.SHIFT_DATE BETWEEN TO_DATE('03/26/2012','MM/DD/YYYY') AND TO_DATE('04/14/2012','MM/DD/YYYY')
    AND LT.TYPE='R'
    AND LT.WORKORDER_TYPE='W'
  GROUP BY
    LT.WORKORDER_TYPE,
    LT.WORKORDER_BASE_ID,
    LT.WORKORDER_LOT_ID,
    LT.WORKORDER_SPLIT_ID,
    LT.WORKORDER_SUB_ID,
    LT.OPERATION_SEQ_NO,
    LT.RESOURCE_ID
  HAVING
    SUM(LT.HOURS_WORKED)>0) LT,
  OPERATION O
WHERE
  LT.WORKORDER_TYPE=O.WORKORDER_TYPE
  AND LT.WORKORDER_BASE_ID=O.WORKORDER_BASE_ID
  AND LT.WORKORDER_LOT_ID=O.WORKORDER_LOT_ID
  AND LT.WORKORDER_SPLIT_ID=O.WORKORDER_SPLIT_ID
  AND LT.WORKORDER_SUB_ID=O.WORKORDER_SUB_ID
  AND LT.OPERATION_SEQ_NO=O.SEQUENCE_NO
GROUP BY
  LT.RESOURCE_ID
ORDER BY
  LT.RESOURCE_ID;

The result of the above SQL statement is much more useful than the original attempt.  Note that the above SQL statement makes certain assumptions about the estimated hours (ENG_HOURS) that the ERP system does not make, so the resulting numbers could differ slightly from those in the canned report.
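To make the ENG_HOURS and EFF calculations concrete, here is the arithmetic for one hypothetical resource (the numbers are invented for illustration, not taken from the report):

```python
run_hrs, calc_end_qty = 10.0, 50.0     # operation estimate: 10 hours to run 50 pieces
completed_qty, act_hours = 20.0, 4.5   # pre-summarized labor ticket totals

# Earned (estimated) hours for the pieces actually completed
eng_hours = round(run_hrs / calc_end_qty * completed_qty, 2)  # 0.2 hrs/piece * 20
# Efficiency: earned hours as a percentage of actual hours worked
eff = round(eng_hours / act_hours * 100, 2)
```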

We now have three more columns of our report.

The final step is to put the complete SQL statement together, joining the previously created portion of the SQL statement that determined the available capacity with the just-created SQL statement that determined the actual and estimated utilization.  The resulting SQL statement appears as follows:

SELECT
  C.RESOURCE_ID,
  C.DESCRIPTION,
  C.CAPACITY,
  E.ACT_HOURS,
  E.ENG_HOURS AS EST_HOURS,
  E.EFF,
  DECODE(C.CAPACITY,0,0,ROUND(E.ACT_HOURS/C.CAPACITY*100,2)) AS ACT_UTIL,
  DECODE(C.CAPACITY,0,0,ROUND(E.ENG_HOURS/C.CAPACITY*100,2)) AS EST_UTIL
FROM
  (SELECT
    C.RESOURCE_ID,
    C.DESCRIPTION,
    SUM((C.HOURS_1 + C.HOURS_2 + C.HOURS_3) * D.NUM_DAYS) CAPACITY
  FROM
    (SELECT
      SR.ID AS RESOURCE_ID, 
      SR.DESCRIPTION,
      CW.DAY_OF_WEEK,
      CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
      CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
      CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
      'DEFAULT' CALENDAR_TYPE
    FROM
      CALENDAR_WEEK CW,
      SHOP_RESOURCE SR
    WHERE
      (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
      AND CW.RESOURCE_ID IS NULL
      AND NOT EXISTS (
        SELECT
          C.RESOURCE_ID
        FROM
          CALENDAR_WEEK C
        WHERE
          SR.ID=C.RESOURCE_ID)
    UNION ALL
    SELECT
      SR.ID AS RESOURCE_ID,
      SR.DESCRIPTION,
      CW.DAY_OF_WEEK,
      CW.SHIFT_1*SR.SHIFT_1_CAPACITY AS HOURS_1,
      CW.SHIFT_2*SR.SHIFT_2_CAPACITY AS HOURS_2,
      CW.SHIFT_3*SR.SHIFT_3_CAPACITY AS HOURS_3,
      'RESOURCE' CALENDAR_TYPE
    FROM
      CALENDAR_WEEK CW,
      SHOP_RESOURCE SR
    WHERE
      (CW.SCHEDULE_ID IS NULL OR CW.SCHEDULE_ID = 'STANDARD')
      AND CW.RESOURCE_ID=SR.ID) C,
    (SELECT
      MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) DAY_OF_WEEK,
      COUNT(*) NUM_DAYS
    FROM
      DUAL
    CONNECT BY
      LEVEL <= (TO_DATE('04/14/2012','MM/DD/YYYY') - TO_DATE('03/26/2012','MM/DD/YYYY'))+1
    GROUP BY
      MOD(TO_CHAR(TO_DATE('03/26/2012','MM/DD/YYYY')+(LEVEL-1), 'D'),7) ) D
  WHERE
    C.DAY_OF_WEEK=D.DAY_OF_WEEK
  GROUP BY
    C.RESOURCE_ID,
    C.DESCRIPTION) C,
  (SELECT
    LT.RESOURCE_ID,
    SUM(ROUND(O.RUN_HRS/O.CALC_END_QTY*LT.COMPLETED_QTY,2)) AS ENG_HOURS,
    SUM(LT.ACT_HOURS) AS ACT_HOURS,
    ROUND(SUM(ROUND(O.RUN_HRS/O.CALC_END_QTY*LT.COMPLETED_QTY,2))/SUM(LT.ACT_HOURS)*100,2) AS EFF
  FROM
    (SELECT
      LT.WORKORDER_TYPE,
      LT.WORKORDER_BASE_ID,
      LT.WORKORDER_LOT_ID,
      LT.WORKORDER_SPLIT_ID,
      LT.WORKORDER_SUB_ID,
      LT.OPERATION_SEQ_NO,
      LT.RESOURCE_ID,
      SUM(LT.HOURS_WORKED) AS ACT_HOURS,
      SUM(LT.GOOD_QTY) AS COMPLETED_QTY
    FROM
      LABOR_TICKET LT
    WHERE
      LT.SHIFT_DATE BETWEEN TO_DATE('03/26/2012','MM/DD/YYYY') AND TO_DATE('04/14/2012','MM/DD/YYYY')
      AND LT.TYPE='R'
      AND LT.WORKORDER_TYPE='W'
    GROUP BY
      LT.WORKORDER_TYPE,
      LT.WORKORDER_BASE_ID,
      LT.WORKORDER_LOT_ID,
      LT.WORKORDER_SPLIT_ID,
      LT.WORKORDER_SUB_ID,
      LT.OPERATION_SEQ_NO,
      LT.RESOURCE_ID
    HAVING
      SUM(LT.HOURS_WORKED)>0) LT,
    OPERATION O
  WHERE
    LT.WORKORDER_TYPE=O.WORKORDER_TYPE
    AND LT.WORKORDER_BASE_ID=O.WORKORDER_BASE_ID
    AND LT.WORKORDER_LOT_ID=O.WORKORDER_LOT_ID
    AND LT.WORKORDER_SPLIT_ID=O.WORKORDER_SPLIT_ID
    AND LT.WORKORDER_SUB_ID=O.WORKORDER_SUB_ID
    AND LT.OPERATION_SEQ_NO=O.SEQUENCE_NO
  GROUP BY
    LT.RESOURCE_ID) E
WHERE
  E.RESOURCE_ID=C.RESOURCE_ID
ORDER BY
  C.RESOURCE_ID;
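The D inline view uses CONNECT BY LEVEL against DUAL to generate one row per day in the reporting window, then counts how many times each day of the week appears.  A quick Python cross-check of that count for the 03/26/2012 – 04/14/2012 window:

```python
from datetime import date, timedelta
from collections import Counter

start, end = date(2012, 3, 26), date(2012, 4, 14)
num_days = Counter()
d = start
while d <= end:
    num_days[d.weekday()] += 1   # 0 = Monday ... 6 = Sunday
    d += timedelta(days=1)

# 20 days in the window: three of every weekday except Sunday, which appears twice
```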

The resulting output appears as follows:

Well, that was easy… :-) 

The solution reminds me of the phrase “How to Eat an Elephant”.  Elephant poaching is illegal in most parts of the world, so I propose changing this phrase to “How to Walk an Elephant”.  One step at a time, and make certain not to stand directly in front or directly behind.  ;-)





Failed Logon Attempts

4 04 2012

April 4, 2012

A fair number of articles on this blog are Oracle performance related in one respect or another.  I started thinking about how to measure how much more efficient something is when compared to something that is never performed.  Consider a situation where you have a database user account that is assigned the DEFAULT Oracle Database profile, and that profile is configured as follows:

ALTER PROFILE DEFAULT LIMIT
  PASSWORD_LIFE_TIME UNLIMITED
  FAILED_LOGIN_ATTEMPTS 5
  PASSWORD_LOCK_TIME 1
  PASSWORD_VERIFY_FUNCTION NULL; 

With the above configuration, passwords do not expire after a specified number of days, there are no password complexity requirements, and after five failed logon attempts, the account is automatically locked for one day.  The above configuration is not ideal from a security standpoint, but that is not the point of this blog article.
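With PASSWORD_LOCK_TIME set to 1 (the unit is days), an account locked by the fifth failed attempt unlocks automatically one day later unless a DBA intervenes sooner.  A trivial sketch of that arithmetic (the lock timestamp is hypothetical):

```python
from datetime import datetime, timedelta

password_lock_time = 1  # days, from the PASSWORD_LOCK_TIME setting above
locked_at = datetime(2012, 4, 2, 18, 37)  # hypothetical fifth failed logon attempt
auto_unlock_at = locked_at + timedelta(days=password_lock_time)
```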

Assume that the one database user account is shared by multiple people (or multiple utility programs with an embedded username and password).  Suddenly, you find that your efficient utility program becomes inefficient… to the point that the “utility” portion of the program never has an opportunity to execute.  Well, that was unexpected: the database user account is locked out.  How did that happen?

The following SQL statement is probably quite simplistic for many of the readers of this blog, and probably should have been included in my Neat Tricks article that I published a couple of years ago, but I thought that I would include the SQL statement here to save myself a couple of minutes of typing the next time I need to figure out who-done-it:

SELECT
  TO_CHAR(TIMESTAMP,'MM/DD HH24:MI') TIMESTAMP,
  SUBSTR(OS_USERNAME,1,20) OS_USERNAME,
  SUBSTR(USERNAME,1,20) USERNAME,
  SUBSTR(TERMINAL,1,20) TERMINAL,
  ACTION_NAME,
  RETURNCODE
FROM
  SYS.DBA_AUDIT_SESSION
WHERE
  USERNAME LIKE 'MYUSER%'
  AND TIMESTAMP BETWEEN SYSDATE-1 AND SYSDATE
ORDER BY
  TIMESTAMP DESC;

TIMESTAMP   OS_USERNAME          USERNAME TERMINAL        ACTION_NAME          RETURNCODE
----------- -------------------- -------- --------------- -------------------- ----------
04/03 11:33 USER1                MYUSER   CUSER1          LOGON                         0
04/03 11:33 USER1                MYUSER   CUSER1          LOGOFF                        0
04/03 11:33 USER1                MYUSER   CUSER1          LOGOFF                        0
04/03 11:33 USER1                MYUSER   CUSER1          LOGON                         0
04/03 10:54 USER2                MYUSER   CUSER2          LOGOFF                        0
04/03 10:53 USER2                MYUSER   CUSER2          LOGON                         0
04/03 10:53 USER2                MYUSER   CUSER2          LOGON                         0
04/03 10:53 USER2                MYUSER   CUSER2          LOGON                         0
04/03 09:58 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 09:56 USER4                MYUSER   CUSER4          LOGON                         0
04/03 09:56 USER4                MYUSER   CUSER4          LOGOFF                        0
04/03 09:51 USER5                MYUSER   CUSER5          LOGON                         0
04/03 09:51 USER5                MYUSER   CUSER5          LOGON                         0
04/03 09:51 USER3                MYUSER   CUSER3          LOGON                         0
04/03 09:51 USER5                MYUSER   CUSER5          LOGOFF                        0
04/03 09:51 USER5                MYUSER   CUSER5          LOGOFF                        0
04/03 09:44 USER4                MYUSER   CUSER4          LOGOFF                        0
04/03 09:29 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 09:10 USER4                MYUSER   CUSER4          LOGON                         0
04/03 09:09 USER3                MYUSER   CUSER3          LOGON                         0
04/03 09:06 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 09:06 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 08:47 USER3                MYUSER   CUSER3          LOGON                         0
04/03 07:50 USER4                MYUSER   CUSER4          LOGON                         0
04/03 07:50 USER4                MYUSER   CUSER4          LOGOFF                        0
04/03 07:50 USER4                MYUSER   CUSER4          LOGOFF BY CLEANUP             0
04/03 07:50 USER4                MYUSER   CUSER4          LOGOFF BY CLEANUP             0
04/03 07:50 USER4                MYUSER   CUSER4          LOGOFF BY CLEANUP             0
04/03 07:50 USER4                MYUSER   CUSER4          LOGON                         0
04/03 07:50 USER4                MYUSER   CUSER4          LOGON                         0
04/03 07:50 USER4                MYUSER   CUSER4          LOGON                         0
04/03 07:12 USER3                MYUSER   CUSER3          LOGON                         0
04/03 07:12 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/03 07:11 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/03 07:11 IUSR_WEBSERV         MYUSER   SERVER          LOGON                         0
04/03 06:35 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/03 06:35 IUSR_WEBSERV         MYUSER   SERVER          LOGOFF                        0
04/03 06:35 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/03 06:34 IUSR_WEBSERV         MYUSER   SERVER          LOGON                         0
04/03 06:34 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/03 06:34 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/03 06:29 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 06:29 USER3                MYUSER   CUSER3          LOGON                         0
04/03 06:28 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/03 06:28 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/03 06:27 IUSR_WEBSERV         MYUSER   SERVER          LOGOFF                        0
04/03 06:27 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/03 06:26 IUSR_WEBSERV         MYUSER   SERVER          LOGON                         0
04/03 06:26 USER3                MYUSER   CUSER3          LOGOFF                        0
04/03 06:26 USER3                MYUSER   CUSER3          LOGON                         0
04/03 06:26 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/03 06:23 USER3                MYUSER   CUSER3          LOGON                     28000
04/03 06:23 USER3                MYUSER   CUSER3          LOGON                     28000
04/03 06:22 NETWORK SERVICE      MYUSER   SERVER          LOGON                     28000
04/03 06:22 NETWORK SERVICE      MYUSER   SERVER          LOGON                     28000
04/03 02:30 USER5                MYUSER   SERVER2         LOGON                     28000
04/02 19:53 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 19:53 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 19:11 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 19:11 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 18:37 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 18:37 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 18:37 USER1                MYUSER   CUSER1-LT       LOGON                      1017
04/02 18:37 USER1                MYUSER   CUSER1-LT       LOGON                     28000
04/02 18:32 USER1                MYUSER   CUSER1-LT       LOGON                      1017
04/02 18:32 USER1                MYUSER   CUSER1-LT       LOGON                      1017
04/02 18:31 USER1                MYUSER   CUSER1-LT       LOGON                      1017
04/02 18:31 USER1                MYUSER   CUSER1-LT       LOGON                      1017
04/02 17:08 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 17:03 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 17:03 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:55 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 16:55 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:54 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:52 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 16:45 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/02 16:45 NETWORK SERVICE      MYUSER   SERVER          LOGOFF                        0
04/02 16:45 IUSR_WEBSERV         MYUSER   SERVER          LOGOFF                        0
04/02 16:44 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/02 16:44 NETWORK SERVICE      MYUSER   SERVER          LOGON                         0
04/02 16:44 IUSR_WEBSERV         MYUSER   SERVER          LOGON                         0
04/02 16:43 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 16:42 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:42 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:32 USER6                MYUSER   CUSER6          LOGON                         0
04/02 16:32 USER6                MYUSER   CUSER6          LOGON                         0
04/02 16:32 USER6                MYUSER   CUSER6          LOGOFF                        0
04/02 16:32 USER6                MYUSER   CUSER6          LOGOFF                        0
04/02 16:21 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:20 USER3                MYUSER   CUSER3          LOGON                         0
04/02 16:20 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 16:19 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 16:15 USER3                MYUSER   CUSER3          LOGON                         0
04/02 15:40 USER3                MYUSER   CUSER3          LOGON                         0
04/02 15:40 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 15:31 USER3                MYUSER   CUSER3          LOGOFF                        0
04/02 15:06 USER1                MYUSER   USER1           LOGON                         0
04/02 15:06 USER1                MYUSER   USER1           LOGOFF                        0
04/02 15:06 USER1                MYUSER   USER1           LOGON                         0 

A couple of the rows in the above output are slightly out of sequence, but the order of the output is close enough for my needs.  The RETURNCODE column is the number associated with the ORA- error code that was returned to the client computer:

  • RETURNCODE=0 indicates success
  • RETURNCODE=1017 indicates bad password
  • RETURNCODE=28000 indicates account is locked out
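When scanning a longer audit trail, translating the RETURNCODE values inline saves a lookup.  A small hypothetical Python helper (the three codes above are the ones this article relies on; any other code is rendered generically):

```python
# Hypothetical helper for decoding DBA_AUDIT_SESSION.RETURNCODE values
KNOWN_CODES = {
    0: "success",
    1017: "ORA-01017: invalid username/password",
    28000: "ORA-28000: the account is locked",
}

def describe(returncode):
    # Fall back to a generic zero-padded ORA- code for anything unrecognized
    return KNOWN_CODES.get(returncode, "ORA-%05d" % returncode)
```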

Reviewing the above output, operating system user USER3 logged on successfully at 5:03 PM and logged off at 5:08 PM.  Operating system user USER1 attempted to log on at 6:31 PM from a computer named CUSER1-LT, but failed due to an invalid password.  By 6:37 PM, that same operating system user and computer combination had entered an incorrect password five times, which resulted in an ORA-28000 account lockout error being returned starting with the sixth logon attempt.

At 2:30 AM, USER5 on a computer named SERVER2 attempted to connect using the same database user account, but was also greeted with an ORA-28000.  The same fate awaited operating system user NETWORK SERVICE (this was a web-based logon attempt) and USER3 at 6:22 AM and 6:23 AM, respectively.  The problem was obviously corrected by 6:26 AM, quite likely through the application of the following SQL statement:

ALTER USER MYUSER ACCOUNT UNLOCK;

If you tried executing the above query, you might have noticed that the query execution required a bit of time.  Enterprise Manager also, apparently, executes a somewhat similar SQL statement every 30 minutes.  You can learn more about the potential performance impact of Enterprise Manager’s use of SYS.DBA_AUDIT_SESSION here.

I again started thinking about how to measure how much more efficient something is when compared to something that is never performed… I suppose that it is important to first determine whether or not the task at hand is important, before deciding whether not doing something is infinitely more efficient, or infinitely less efficient, than completing the task would be.







