December 13, 2009
I previously mentioned that I created a “Toy” project for performance tuning, but at the time provided only a couple of screen captures and very little descriptive text:
The program is now perhaps well beyond the “Toy” stage. My program is not available for purchase, but I thought that I would share a few of its features as an incentive for you to create something usable for yourself. Below are several more screen captures of the program, with descriptions.
Configure Data Change Logging:
The Data Change Logging feature is used to build SQL*Plus compatible scripts that create logging tables, triggers to populate the logging tables, and optionally a database schema and tablespace for storing the logged data.
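As a rough illustration of what a generated script of this kind might look like, here is a minimal sketch that builds a logging table and trigger for a single table. The DATA_LOG schema name, the build_logging_script function, and the column handling are my own assumptions, not the tool's actual output; a fuller script would, for example, log :OLD column values on DELETE and also create the schema and tablespace.

```python
def build_logging_script(table, columns, log_schema="DATA_LOG"):
    """Return SQL*Plus-compatible DDL for a logging table and trigger.

    Simplified sketch: on DELETE the :NEW values are NULL, so a real
    script would log :OLD values for deletes.
    """
    ddl_cols = ",\n  ".join(f"{c} VARCHAR2(4000)" for c in columns)
    log_cols = ", ".join(columns)
    new_vals = ", ".join(f":NEW.{c}" for c in columns)
    return f"""CREATE TABLE {log_schema}.{table}_LOG (
  CHANGE_TYPE VARCHAR2(1),
  CHANGE_DATE DATE,
  {ddl_cols});

CREATE OR REPLACE TRIGGER {table}_LOG_AIUD
AFTER INSERT OR UPDATE OR DELETE ON {table}
FOR EACH ROW
BEGIN
  INSERT INTO {log_schema}.{table}_LOG
    (CHANGE_TYPE, CHANGE_DATE, {log_cols})
  VALUES
    (CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END,
     SYSDATE, {new_vals});
END;
/"""

script = build_logging_script("PARTS", ["PART_ID", "DESCRIPTION"])
print(script)
```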
The Search feature performs a keyword search on statistics, wait events, initialization parameters, hints, and other topic categories. Such a search may reveal relationships between wait events, initialization parameters, and workarounds/solutions for various problems.
The SQL Monitor feature examines the shared pool for potentially high-load SQL statements. Clicking a row in the grid displays the SQL statement, while double-clicking a row displays the execution plan for all child cursors, the reason each child cursor was created, and the defined bind variable types. Placing a check in Log File writes the grid contents to a file on the next refresh, and if SQL Statements is also checked, the SQL statements are written to the log file as well. CPU Time and Elapsed Time are only available on Oracle 9i R2 and above.
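The selection logic behind this kind of shared pool monitor can be approximated with simple thresholds applied to V$SQL-style statistics. The threshold values, field names, and sample rows below are invented for illustration, not taken from the program:

```python
# Invented V$SQL-style sample rows; column names mirror V$SQL, values are made up.
SAMPLE_ROWS = [
    {"sql_text": "SELECT ORDER_ID FROM ORDERS WHERE STATUS = 'OPEN'",
     "executions": 12000, "buffer_gets": 9500000, "disk_reads": 80000},
    {"sql_text": "SELECT SYSDATE FROM DUAL",
     "executions": 5, "buffer_gets": 40, "disk_reads": 0},
]

def high_load(rows, min_gets_per_exec=100, min_total_gets=1000000):
    """Flag statements whose logical I/O suggests a potentially high load."""
    flagged = []
    for r in rows:
        execs = max(r["executions"], 1)   # guard against zero executions
        if (r["buffer_gets"] >= min_total_gets
                or r["buffer_gets"] / execs >= min_gets_per_exec):
            flagged.append(r["sql_text"])
    return flagged

print(high_load(SAMPLE_ROWS))  # only the ORDERS statement is flagged
```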
10046 Trace File Parser:
The Drag & Drop Extended Trace feature performs extensive analysis of 10046 trace files captured at levels 4, 8, or 12. A minimum of four time-coordinated analysis files are generated for each trace file analyzed. Several options determine what type of data is retrieved from the trace file.
The wait events output is sent to Microsoft Excel. The Trace Time column indicates the time offset from the start of the trace file at which the wait event appeared. The Wait Time column indicates the duration of the wait event in 1/1000ths of a second. The Wait Event column indicates the type of wait – a description of the wait event appears at the bottom of the wait events output. The Raw Details column displays the wait event line from the trace file without interpretation. The Oracle Cursor Num column displays an identifier that can be used to relate the wait event back to a specific SQL statement in the other analysis files.
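A minimal sketch of how a WAIT line from a 10046 trace file can be parsed (in Oracle 9i and later the ela= value is reported in microseconds); the sample line is invented, and the p1/p2/p3 values that the Raw Details view would preserve are ignored here:

```python
import re

# Typical 9i/10g WAIT line: WAIT #3: nam='...' ela= 15230 p1=4 p2=1234 p3=1
WAIT_RE = re.compile(r"WAIT #(\d+): nam='([^']+)' ela= ?(\d+)")

def parse_wait(line):
    """Extract cursor number, event name, and duration from a WAIT line."""
    m = WAIT_RE.search(line)
    if m is None:
        return None
    cursor, name, ela = m.groups()
    return {"cursor": int(cursor),          # relates back to a SQL statement
            "event": name,
            "wait_ms": int(ela) / 1000.0}   # microseconds -> 1/1000ths of a second

sample = "WAIT #3: nam='db file sequential read' ela= 15230 p1=4 p2=1234 p3=1"
print(parse_wait(sample))  # {'cursor': 3, 'event': 'db file sequential read', 'wait_ms': 15.23}
```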
The SQL Parse Order analysis file lists each SQL statement in the order in which the application requested a parse call. The parse, execute, and fetch statistics, bind variable values, and row source execution plan for each SQL statement are listed together, along with a summary per parsed SQL statement:
Cursor number is retrieved directly from the 10046 trace file, and may not be directly dependent on application coding.
Ver number indicates the number of cursors that were parsed at the Cursor number through this point in the trace file.
Parse at indicates the time offset in seconds from the start of the trace file until the parse call was issued.
TD Prev indicates the time difference between the current SQL statement's parse call and the previous parse call.
EXECs indicates the total number of execution calls from the application for the current SQL statement.
FETCHs indicates the total number of fetch requests to retrieve rows from the database for the current SQL statement.
CPU S indicates the number of seconds of database server CPU time required for the SQL statement.
CLOCK S indicates the elapsed time for the SQL statement – the elapsed time will frequently differ from the CPU time unless the server's CPU is the only bottleneck in the database.
ROWS indicates the number of rows retrieved or affected by the SQL statement.
PHY RD BLKs indicates the number of blocks that had to be read from disk to satisfy the SQL statement.
CON RD BLKs (Mem) indicates the number of blocks read from memory in consistent read mode – a logical read, which is only roughly 100 times faster than a physical read due to the overhead related to consistent reads.
CUR RD BLKs (Mem) indicates the number of blocks read in current mode, which is considerably less expensive than a consistent read.
SHARED POOL MISS indicates the number of times a hard parse was required – a hard parse during an execute or fetch call is expensive and may be partially caused by setting CURSOR_SHARING to SIMILAR.
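Many of these per-statement statistics can be derived by accumulating the PARSE, EXEC, and FETCH lines in the trace file. The sketch below shows the general idea, assuming a 9i+ trace where c and e are reported in microseconds; the sample lines are invented, and recursive depth, row source operations, and bind sections are deliberately ignored:

```python
import re
from collections import defaultdict

CALL_RE = re.compile(r"^(PARSE|EXEC|FETCH) #(\d+):(.*)$")

def accumulate(lines):
    """Accumulate per-cursor statistics from PARSE/EXEC/FETCH trace lines."""
    stats = defaultdict(lambda: defaultdict(int))
    for line in lines:
        m = CALL_RE.match(line)
        if not m:
            continue
        call, cursor, rest = m.groups()
        # fields look like: c=10000,e=12000,p=0,cr=2,cu=0,mis=1,r=0,...
        fields = dict(kv.split("=") for kv in rest.strip().split(","))
        s = stats[int(cursor)]
        s[call + "s"] += 1                      # PARSEs / EXECs / FETCHs
        s["cpu_us"] += int(fields["c"])         # CPU time (microseconds)
        s["elapsed_us"] += int(fields["e"])     # elapsed time (microseconds)
        s["phy_rd_blks"] += int(fields["p"])    # physical block reads
        s["con_rd_blks"] += int(fields["cr"])   # consistent-mode block reads
        s["cur_rd_blks"] += int(fields["cu"])   # current-mode block reads
        s["pool_misses"] += int(fields["mis"])  # library cache misses (hard parses)
        s["rows"] += int(fields["r"])           # rows processed
    return stats

lines = [
    "PARSE #3: c=10000,e=12000,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=1,tim=100",
    "EXEC #3: c=0,e=500,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=200",
    "FETCH #3: c=2000,e=30000,p=5,cr=120,cu=0,mis=0,r=10,dep=0,og=1,tim=300",
]
totals = accumulate(lines)[3]
print(totals["FETCHs"], totals["rows"], totals["cpu_us"])  # 1 10 12000
```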
The SQL Execution order analysis file outputs the SQL statements and bind variable values in the order of execution as the trace file is read. Minimal diagnostic data is provided in this type of analysis file, although it may help to determine the looping structure of SQL statements that are executed, where the output of one SQL statement is fed in as the input for another SQL statement.
The Grouping Similar SQL Statements analysis file attempts to group together SQL statements that are parsed multiple times, rather than being parsed once and executed many times. This type of analysis file accumulates the statistics for the similar SQL statements, indicating the percentage of the total execution time each group represents. While a single execution of a SQL statement may require only 0.01 seconds, if that SQL statement is executed 30,000 times, it should probably be analyzed to determine whether its execution plan is as efficient as possible. This analysis file allows one to drill down to the root cause – the problematic SQL statement that represents the greatest percentage of the total run time. Wait events are summarized at the beginning of the file, as well as with each group of similar SQL statements.
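One way to treat statements as "similar" is to normalize away literal values before grouping. This is an assumption about the matching approach, not necessarily what the program actually does:

```python
import re
from collections import Counter

def normalize(sql):
    """Replace literal values so statements differing only in literals match."""
    sql = re.sub(r"'[^']*'", ":literal", sql)   # string literals
    sql = re.sub(r"\b\d+\b", ":literal", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().upper()

statements = [
    "SELECT * FROM PARTS WHERE PART_ID = 100",
    "SELECT * FROM PARTS WHERE PART_ID = 250",
    "select * from parts where part_id = 999",
]
groups = Counter(normalize(s) for s in statements)
print(groups.most_common())  # a single group, seen 3 times
```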
Configure Session Tracing:
Session tracing is an important component of database tuning. Enabling and disabling various types of traces is possible from within the Hyper-Extended Oracle Performance Monitor. Any session tracing activated by the program is automatically disabled when the program is closed.
10046 Trace: Performance specific to each SQL statement executed.
10053 Trace: Cost Based Optimizer decisions during hard parses.
10032 and 10033 Trace: Sort-related trace files.
10104 Trace: Hash join-related trace files.
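The commands issued to enable and disable such event traces are typically of the form shown below. This sketch only builds the command strings for the current session (tracing a different session requires another mechanism, such as DBMS_SYSTEM or, in 10g, DBMS_MONITOR), and the helper names are my own:

```python
def set_event(event, level):
    """Build the command enabling an event trace for the current session."""
    return (f"ALTER SESSION SET EVENTS "
            f"'{event} trace name context forever, level {level}'")

def clear_event(event):
    """Build the command disabling the event trace."""
    return f"ALTER SESSION SET EVENTS '{event} trace name context off'"

# For event 10046, level 12 = level 8 (wait events) + level 4 (bind variables)
print(set_event(10046, 12))
print(clear_event(10046))
```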
Advanced Initialization Parameters:
The Advanced Init Parameter feature retrieves all system-level and session-level parameters, both normal and hidden, in the Oracle database instance. As a general rule, never modify the value of a hidden parameter (those that begin with _). A brief description of each parameter is provided, and in many cases a longer description is presented at the bottom of the window. Session-specific parameters are displayed after the system parameters, with the session's SID displayed in the Type column of the table. All parameters are simultaneously output to a file in the C:\Oraclelog folder, with a name such as “All Initialization Parameters 200710021321.txt”.
Most of the hidden parameters are visible only to the SYS user. As such, the Hyper-Extended Oracle Performance Monitor will prompt for the SYS password; if the SYS password is not provided, only the non-hidden parameters are presented. Oracle has several hundred initialization parameters, many of which are hidden and should not be modified without the guidance of Oracle Support. Following the list of system-wide parameters are the query optimization parameters that are specific to individual sessions.
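The two mechanical pieces described above are simple to sketch: separating hidden parameters by their leading underscore, and building a timestamped log file name in the same "All Initialization Parameters YYYYMMDDHHMM.txt" style. The parameter list is a tiny invented sample; real instances expose several hundred names:

```python
import datetime

# Invented sample of parameter names; real instances have several hundred.
params = ["cursor_sharing", "_small_table_threshold",
          "db_block_size", "_optimizer_cost_model"]

hidden = sorted(p for p in params if p.startswith("_"))   # do not modify these
normal = sorted(p for p in params if not p.startswith("_"))

# Same naming style as the tool's output file, e.g. ...200710021321.txt
stamp = datetime.datetime.now().strftime("%Y%m%d%H%M")
log_name = f"All Initialization Parameters {stamp}.txt"

print(hidden)
print(log_name)
```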
Smart Logging Configuration:
To decrease the database server’s CPU impact due to logging, logging intervals may be specified to occur less frequently than once every 60 seconds. Additional options are available to allow the logging capture to start in response to triggering events, such as increased CPU activity, blocking locks, etc.
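The triggered-capture decision might look something like the following sketch; the function name, thresholds, and the particular triggering conditions are assumptions for illustration, not the program's actual rules:

```python
def should_capture(seconds_since_last, interval_s, cpu_pct, blocked_sessions,
                   cpu_trigger_pct=80):
    """Decide whether to capture a log snapshot now."""
    if seconds_since_last >= interval_s:
        return True          # the regular logging interval has elapsed
    if cpu_pct >= cpu_trigger_pct:
        return True          # triggering event: increased CPU activity
    if blocked_sessions > 0:
        return True          # triggering event: blocking lock detected
    return False

print(should_capture(30, 300, 95, 0))   # CPU spike forces an early capture
```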
The program also accepts various command line arguments to control logging and exit the program after logging for a specified number of minutes.
Logging Summary While Capturing a Smart Log:
When logging is enabled, a quick overview screen is presented that shows a portion of what was logged during the previous time intervals. Session Wait Reasons and Session Locks appear in the tables in real time as they occur.
System-Wide Overview of Log Data:
The top wait events for the log interval are displayed at the upper left, a graphical display of the history for the selected statistic is displayed at the top center (blue indicates the current period), and a graphical display of the top 30 sessions contributing to the statistic is displayed at the top right. The bottom left provides a description of the statistic and possible additional tips for addressing problems. The middle section provides a listing of various performance statistics captured during the logging interval – click one of those statistics to set it as the current statistic. When a significant event occurs, such as a spike in CPU usage, click the Investigate button to display additional statistics.
Session-Level Drill-Down into Logged Data:
Left-clicking a session’s bar in the Review Log Stats window causes the session specific performance statistics to be displayed for the time interval.
Investigating the Log Interval:
The Investigate Log Interval window provides additional logging detail for the current log interval, but it is possible to view the data for a range of log intervals.
The top left grid shows system-wide wait events encountered during the logging interval range. Immediately below are the session-level wait events, and below those are the session-level wait event reasons identified during the logging – these statistics may be used to identify data file/block level contention between sessions, hard parsing that causes excessive wait events, and similar problems. The bottom left grid shows details of the sessions that were blocked during the specified time intervals.
The top right grid shows data file activity during the specified time intervals. Below the file activity is the rollback/undo activity, and below that are the data segment changes, which show the expansion and contraction of indexes, tables, and other objects in the database during the logging interval. Below the data segment changes are the data block wait statistics, which relate to buffer busy wait events. The bottom right grid shows latch contention that occurred during the logging interval – significant latch contention can cause performance issues.
Double-clicking a row in most of these grids reveals more detailed (or summary) information.
Double-clicking a couple of the grids produces the following:
I wonder what that Interpret Statistics button does?
There are of course more screen captures, but those may wait for some other time.