On the Topic of Technology… 6


March 16, 2014

(Back to the Previous Post in the Series)

It has been a while since my last post on this blog – I guess that the simple answer is that I was busy with a lot of non-Oracle Database related items, and was suffering from a bit of writer’s block (nothing that a block dump can’t fix?).  I am expecting to soon receive the annual bill from WordPress for keeping this blog free of advertisements, as well as a bill for allowing the customized blog theme.

So, given the number of months since my last blog post, I took the time to update the list of the top five most viewed articles for the past quarter.  The number one article shows how to install the Nagios network monitoring software on a Synology NAS (actually three different Synology NAS units), which means that a low cost NAS unit could be used not only to verify that a server used with Oracle Database responds to a ping request, but also to confirm that an Oracle database is reachable and healthy enough to provide a resultset for a simple SQL statement.  The number two article shows how to do a little mathematics with the help of Oracle Database, approximating the distance between two longitude and latitude coordinates.  The number three article shows how to use a programming language that was last updated in the late 1990s with the latest Microsoft operating system and what was the latest version of Oracle Database.
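
As a minimal sketch of the kind of health check described above (this is not the plugin from the article; the account name, password, and TNS alias below are placeholders), the idea is simply that the check passes only if the database returns a resultset:

# Hypothetical Nagios-style check: prints ORACLE OK only if the database answers a trivial query
echo "SELECT 'HEALTHY' FROM dual;" | sqlplus -S -L nagios_user/nagios_pass@ORCL | grep -q HEALTHY \
  && echo "ORACLE OK" || echo "ORACLE CRITICAL"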

The advancement of technology certainly means that it is important for IT professionals to try to stay on top of the advancements in their technology niche, without completely cutting ties with technology of the past, as illustrated by the current number three article on this blog.  For me, that means buying and then reading cover to cover various books, reading articles, and experimenting with technology.  It helps that I am an IT manager in addition to being an Oracle DBA, so my technology niche is rather broad.  In December 2013 I placed an order for the updated version of “Troubleshooting Oracle Performance”, in part because I enjoyed the first edition of that book so much that I read it twice, and also because I have not had sufficient time to experiment with Oracle Database 12c – it appears that the second edition might ship next month.  Someone recently left a comment on another book that I reviewed here and on Amazon – I tried ordering that book twice without success, and now there is apparently a new version of the book on Amazon that includes coverage of Oracle Database 12c, and the book is in stock!  Someone will have to spend the $56, write a review, and let me know if the author fixed the items that I and readers of this blog so patiently and clearly mentioned in 2010.  Anyone interested in the challenge?

As I mentioned, the scope of my job responsibilities extends far beyond that of Oracle Database.  I just recently migrated the company’s email system from Microsoft Exchange 2007 to Microsoft Exchange 2013 SP1.  Anyone who remembers the fun of typing cryptic code on a command line would enjoy this experience.  Simply moving the public folders from the old server to the new server was an excellent example of command line fun, reminding me of the fun that I had years ago trying to compile X.509 certificate support into a Linux kernel.  One book that I read and reviewed covered the topic of public folders in extensive detail, yet the commands found in the book failed at step 1 with an error message.  The other book that I read and reviewed more or less skimmed the topic of public folders, so it was of no help for the task at hand.  No problem, I will just go to the source, Microsoft, for the solution.  A recent article on Microsoft’s site clearly listed all of the steps required to move the public folders from Exchange Server 2007 to Exchange Server 2013… all except for one very important step.  So, I am running command after command on the servers trying to move the public folders from one server to the next, having only a partial idea of what these commands are doing.  Everything is going great, until I execute the last command listed here:

Get-PublicFolder -Recurse | Export-CliXML C:\PFMigration\Legacy_PFStructure.xml
Get-PublicFolderStatistics | Export-CliXML C:\PFMigration\Legacy_PFStatistics.xml
Get-PublicFolder -Recurse | Get-PublicFolderClientPermission | Select-Object Identity,User -ExpandProperty AccessRights | Export-CliXML C:\PFMigration\Legacy_PFPerms.xml
Get-PublicFolderDatabase | ForEach {Get-PublicFolderStatistics -Server $_.Server | Where {$_.Name -like "*\*"}}
Set-PublicFolder -Identity <public folder identity> -Name <new public folder name>
Get-OrganizationConfig | Format-List PublicFoldersLockedforMigration, PublicFolderMigrationComplete
Set-OrganizationConfig -PublicFoldersLockedforMigration:$false -PublicFolderMigrationComplete:$false
Get-PublicFolderMigrationRequest | Remove-PublicFolderMigrationRequest -Confirm:$false
Get-Mailbox -PublicFolder 
Get-PublicFolder
Get-Mailbox -PublicFolder | Where{$_.IsRootPublicFolderMailbox -eq $false} | Remove-Mailbox -PublicFolder -Force -Confirm:$false
Get-Mailbox -PublicFolder | Remove-Mailbox -PublicFolder -Force -Confirm:$false
.\Export-PublicFolderStatistics.ps1 <Folder to size map path> <FQDN of source server>
...

Spot the error?  Why is this server telling me that I need to provide a comma separated list of parameters when I execute the Export-PublicFolderStatistics.ps1 script?  So, I submit the script again with commas separating the parameters – no, the same error is returned.  Must be a problem where I need to specify the parameters in double quotes also – no, the same error is returned.  What the four letter word?  That is right, the return of trying to compile X.509 certificate support into the Linux kernel roughly a decade ago, only now on Microsoft’s premium messaging platform.

So, what is the missing step?  Exchange Server 2007 ships with Microsoft PowerShell 1.0 – this command requires Microsoft PowerShell 2.0 to execute, yet that requirement was never mentioned.  Oh yeah, we forgot a step, get over it – you have another set of 10 cryptic commands to enter – only to be greeted with a failure message during the public folder migration, stating that the migration failed because some folder name that once existed on Microsoft Exchange 5.5 contains a character that is now considered an invalid character in its name.  These problems never happen with an upgrade in the Oracle Database world, do they?  Advancement of technology, or Back to the Command Line.
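
A quick aside: the PowerShell version on a server is easy to check, and the $PSVersionTable automatic variable did not exist until PowerShell 2.0 – so an empty result from the first line below is itself a sign that the server is still running version 1.0:

# Displays the PowerShell engine version; $PSVersionTable is not defined in PowerShell 1.0
$PSVersionTable.PSVersion
Get-Host | Select-Object Version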

I have also spent a bit of time experimenting with IP security cameras.  I put one in my vehicle and went for a drive.  Ah, 1969, someone obviously has not finished compiling the timekeeping feature into the camera’s firmware? (Click the picture for a larger view.)

NC-239WF-LicensePlateTest-Mod


Let’s try a different stop light – these two cars are either turning the wrong direction (obviously an indication of a bug in the camera’s firmware), or are running a red light. (Click the picture for a larger view.)

NC-239WF-RedLightTest

The camera did not pick up much of interest in the vehicle, so I set it up just in time to catch a game of what appears to be football… or maybe it was a game of sock-her? (Click the picture for a larger view.)

NC-239WF-DeerFightTest

Technology is fun, except when it hits you in the nose.





On the Topic of Technology… 5


July 7, 2013

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

As many readers of this blog are probably aware, Oracle Corporation has released Oracle Database 12.1.0.1 so far for the Linux, Solaris, and Windows platforms.  Oracle Database 12.1.0.1 may be downloaded from Oracle’s OTN website.  This article is not about Oracle Database 12.1.0.1, at least not specifically about that version.

In the previous article in this blog series last year, I mentioned experimenting with a Synology DiskStation DS212+, as well as a couple of IP based 640×480 resolution security cameras.  Since that time I have had the opportunity to purchase a few additional NAS devices: a Synology DiskStation DS112J, Synology DiskStation DS412+, and Synology DiskStation DS1813+.  The DS212+ and DS112J NAS devices have ARM type processors, while the DS412+ (32 bit?) and DS1813+ (64 bit) have Intel Atom D2700 series processors (the processor series for other Synology models may be determined by visiting this link).  The processor type in the NAS partially determines the native capabilities of the NAS, as well as what else may be done with the NAS.  Setting up the Synology NAS devices to support FTP server functionality is fairly easy to accomplish, regardless of the processor type.  That FTP server functionality helps to support the upload functionality of the IP based security cameras.

As an experiment shortly after buying the Synology DiskStation DS212+, I attempted to install the network monitoring tool Nagios, in part to allow keeping track of which IP cameras were offline.  I hit a bit of a brick wall trying to find a precompiled package to permit the Nagios server functionality to run on the Synology DiskStations, which at the core run a version of Linux.  The closest thing that I could find was a plugin for Nagios to permit Nagios running on another machine to monitor a Synology NAS.  I first worked with Red Hat Linux in 1999, implemented dual inline manually-coded iptables firewalls based on a stripped down Red Hat Linux in early 2002, compiled/built a Linux based X.509 certificate supporting VPN server before the Linux kernel supported X.509 certificates (I tried compiling a patched version of the Red Hat kernel, patched with X.509 support, but eventually gave up and compiled the Working Overloaded Kernel), and even tried running Red Hat Enterprise Linux with Samba and Winbind as a member of the company’s file server team.  I first worked with Nagios in 2002, when one of my brothers introduced me to the Linux based network monitoring tool (previously called NetSaint).  Needless to say, I have experience working with Linux and manually compiling software on that platform, but that experience is apparently quite rusty.  The attempt to compile the Nagios source code on the Synology DiskStation DS212+ came to an abrupt halt when I received a message during the compile process essentially stating that the ARM type CPU (Marvell Kirkwood mv6282) did not support fine timer resolutions.

A couple of months later, I tried compiling the Nagios source on the Synology DiskStation DS412+, which features an Intel CPU architecture.  I encountered a couple of unexpected snags in the compile process, and had to put the project on hold for several months.  The paths to the various files on the Linux operating system running on the DiskStation differ a bit from the paths used by the Red Hat variants of Linux – that lack of standardization across the various Linux distributions has frustrated me from time to time over the years.
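
For reference, the classic from-source Nagios build sequence looks roughly like the following – a sketch only, since the installation prefix and configure options are assumptions, and on a DiskStation the ipkg-supplied toolchain must already be in place:

# Rough outline of a typical Nagios source build (the prefix is an assumption)
./configure --prefix=/opt/nagios
make all
make install          # installs the main program and CGIs
make install-config   # installs the sample configuration files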

I recently purchased and reviewed a Synology DiskStation DS1813+.  In the review, I stated the following before testing the theory:

“Additionally, ipkg support permits the installation of roughly 900 additional applications, including C++ compilers – which in theory suggests that the source for the Nagios network monitoring utility can be downloaded and compiled on the DS1813+.”

I am curious to know whether or not anyone is able to get the Nagios server software to run on a Synology DiskStation DS412+ or DS1813+.

I suppose that I should not have proposed that the Nagios network monitoring utility might work on the DiskStation without actually confirming that the utility will work.  I am now able to confirm that the Nagios network monitoring utility will execute on the Synology DiskStation DS1813+, although the check_http plugin failed to compile.  The installation is anything but straightforward – no how-tos that are close to being useful, and no Setup.exe to double-click.  The following screen capture also does not help (non-root users are not permitted to use the ping command on the DiskStations):

TopicOfTechnology5-1

At this time, I cannot provide a detailed instruction list for running the Nagios network monitoring utility on a Synology DiskStation.  However, as a starting point it is necessary to add ipkg support to the DiskStation.  The following ipkg items might be necessary: optware-devel, gcc, libtool, mysql, apache, openssl, openssl-dev, sendmail, inetutils.  With a bit of experimentation (and luck), you might see something similar to this when typing the ps command in a telnet session (I actually typed the command a second time so that the column headings would be visible – there certainly are a lot of background processes on the DiskStation):

TopicOfTechnology5-2

As I found out, just because Nagios is in the process list, that does not mean that it is able to do much of anything useful.  A work-around for the non-root ping issue is needed (I might have hinted at part of the solution when listing the various ipkgs – see the sketch below), as well as a work-around for the non-root sendmail problem that I did not mention.
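
The following is a minimal sketch of the sort of commands involved – the package list comes from the paragraph above, but the installed path of the ipkg-supplied ping is an assumption that may differ between DiskStation models:

# Install the packages mentioned above using the ipkg package manager
ipkg install optware-devel gcc libtool mysql apache openssl openssl-dev sendmail inetutils
# One possible work-around for the non-root ping restriction: set the setuid bit on
# the ipkg-supplied ping binary (the /opt/bin path is an assumption)
chmod u+s /opt/bin/ping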

When Nagios is working properly, unplugging a monitored device should result in an email message being sent (of course, if you unplug your computer, you probably will not receive an email message stating that the computer is down :-) ):

TopicOfTechnology5-3

There appear to be several Nagios plugins to monitor Oracle databases, although I have not had a chance yet to determine if any of those plugins will compile and work on a Synology DiskStation.  In theory it should wor… wait, I am not headed down that path yet!

In addition to a Synology DiskStation DS212+, the previous article in this series also showed a couple of smart 640×480 resolution IP cameras.  At the time of the previous article, I did not fully comprehend the usefulness of smart IP cameras.  Roughly 30 IP based cameras later, I now have a better understanding of their usefulness and limitations.  Last year I wrote reviews for three 640×480 model cameras here (it appears that Amazon now has this review attached to a different camera), here (it appears that Amazon now has this review attached to a different camera), and here (OK, there is a fourth camera included in this review due to a model change over).  I was also burned badly (at a loss of $1343) when I bought two 1080P cameras last year that could not meet (or even approach) the manufacturer’s claims for the product.  All of those reviews include video samples produced by the cameras.

This year I bought and reviewed a couple of smart 720P resolution IP cameras, as well as a couple of different (from last year’s 1080P) smart 1080P resolution IP cameras.  As before, the reviews include sample video clips recorded by the cameras (the 720P and 1080P video was uploaded at the native resolution, but it appears that Amazon uses a pretty aggressive compression algorithm, which leads to some lost video quality).  The new 720P and 1080P cameras are not perfect, but the manufacturer appears to be taking steps to address the weaknesses that I outlined in the reviews.  I was sent another updated firmware for the 1080P cameras, as well as an updated PDF that includes the instructions that were missing from the included printed manual.  The support person for the camera company also stated that their website is currently under development, and will probably be online in the next 30 days.  My review mentioned the lack of success at using the recommended P2PCam264 app on a Motorola Xoom tablet for viewing the live video feed from the smart 720P and 1080P cameras.  The support person suggested using the AnyScene app on the Motorola Xoom tablet for viewing the live feed – that app seems to work.  The AnyScene app, while seemingly lacking the sound feed from the cameras, might even work a little too well.  I brought the Xoom tablet to a different network, only to find that the app is somehow able to still pull a live video feed from any of the configured cameras on the other network – without poking any holes in the firewall on either network, and with universal plug and play (UPnP) disabled (below is a low resolution cell phone captured picture).  I am now left wondering what level of security risk this plug and play technology might pose.

TopicOfTechnology5-4

Sample PNG Generated from 720P Camera’s Video (Click to Display Larger Version):

topicoftechnology5-8

Sample PNG Generated from 1080P Camera’s Video (Same Scene as the Above Example – Click to Display Larger Version):

topicoftechnology5-9

Sample JPG 720P Image from an Edited Video (the 1080P video suffers from fewer out of focus problems and is the same resolution – just with a roughly 50% wider and taller viewing area):

TopicOfTechnology5-6

TriVision User Manual for PC – Aug 2012
TriVision User Manual for PC and Mac – Dec 2013





Unexpected Timer Resolution, Unexpected Parked CPUs, Unexpected Power Consumption


April 19, 2013 (Modified May 11, 2013, June 5, 2014)

This blog article is not purely Oracle Database specific, yet it may have some relevance to companies that run Oracle Database on the Windows Server platform (those DBAs lucky/unlucky enough to run Oracle Database on Windows Server may find this article interesting).

I am in the process of setting up a couple of new Windows servers to perform various non-Oracle Database tasks.  I noticed that one of the servers had an odd issue – the server would occasionally become very slow at responding to mouse movements and keyboard input, for instance taking 30 seconds to move the mouse pointer a short distance across the screen.  These servers are running Windows Server 2012, which shares the same kernel and includes much the same features as Windows 8 – with the exception that the server operating system opens to the desktop rather than Windows 8’s new start screen.

Two years ago I wrote a brain teaser article that asked how it was possible that a 10046 extended SQL trace could output c=15600,e=510 on a line of the trace file when executing a SQL statement without using parallel query – essentially asking how it was possible to consume 0.015600 seconds of CPU time in 0.000510 seconds of elapsed time when the SQL statement was restricted to running on no more than one CPU.  In the comments section of the article I mentioned the ClockRes utility, but did not provide a link for the download of the program.  So, I thought that I would run the ClockRes utility on one of the new servers, make a change to the server, and then run the ClockRes utility again:

UnexpectedClockResOutput

As can be seen above, on the first execution of ClockRes the Current timer interval was 1.001 ms, while on the second execution of the ClockRes program the Current timer interval was 15.626 ms.  There is an odd similarity between that 15.626ms time (which oddly exceeds the reported Maximum timer interval of 15.625ms) and the c=15600 reported in the Oracle 10046 extended SQL trace file.  So, what change did I make to the server between the first execution of ClockRes utility and the second execution?  For now I will just say that I stopped one of the background services on the server (more later).

I recall performing an experiment a couple of years ago with Oracle Database.  I downloaded a utility that offered to change the Windows default timer resolution from 15.625ms to 1.0ms.  That utility did in fact change the Windows timer resolution, resulting in Oracle Database outputting c= values in increments of 1000, rather than in increments of 15600.  If I am remembering correctly, a second outcome of the experiment was a decrease in performance of the test Oracle database on the computer due to the higher resolution of the Windows timer.

Could the change in the resolution of the Windows timer from the Windows default of 15.625ms to 1.001ms be responsible for the occasionally sluggish performance of the server?  One article that I found (and unfortunately did not save the link to) claimed that adjusting the Windows timer from the default of 15.625ms to a lower value, 1ms for example, could cause a significant negative impact in multitasking system performance (roughly 30% decrease, if I recall correctly).  I located an article on Microsoft’s website that offered some level of clarification, below is a short quote from the article:

“Applications can call timeBeginPeriod to increase the timer resolution. The maximum resolution of 1 ms is used to support graphical animations, audio playback, or video playback. This not only increases the timer resolution for the application to 1 ms, but also affects the global system timer resolution, because Windows uses at least the highest resolution (that is, the lowest interval) that any application requests. Therefore, if only one application requests a timer resolution of 1 ms, the system timer sets the interval (also called the “system timer tick”) to at least 1 ms. For more information, see “timeBeginPeriod Function” on the MSDN® website.

Modern processors and chipsets, particularly in portable platforms, use the idle time between system timer intervals to reduce system power consumption. Various processor and chipset components are placed into low-power idle states between timer intervals. However, these low-power idle states are often ineffective at lowering system power consumption when the system timer interval is less than the default.

If the system timer interval is decreased to less than the default, including when an application calls timeBeginPeriod with a resolution of 1 ms, the low-power idle states are ineffective at reducing system power consumption and system battery life suffers.”
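
To see the behavior that the quote describes first-hand, the winmm.dll timeBeginPeriod function may be called directly from PowerShell through P/Invoke – a quick experiment sketch, not something to leave running on a production server:

# Expose the winmm.dll multimedia timer functions mentioned in the quote above
$timer = Add-Type -Namespace Win32 -Name MMTimer -PassThru -MemberDefinition @'
[DllImport("winmm.dll")] public static extern uint timeBeginPeriod(uint uPeriod);
[DllImport("winmm.dll")] public static extern uint timeEndPeriod(uint uPeriod);
'@
$timer::timeBeginPeriod(1)   # request a 1 ms system timer resolution
# ... check the ClockRes or powercfg /energy output at this point ...
$timer::timeEndPeriod(1)     # release the 1 ms request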

The above mentioned Microsoft article also suggested running the following command from the Windows command line:

powercfg /energy

I had actually executed the above command before running the ClockRes program for the first time, and again after running the ClockRes program for the second time.  A very small portion of the powercfg generated HTML file follows, generated prior to the first execution of ClockRes:

Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 10009
Maximum Timer Period (100ns units) 156250

Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 536
Requesting Process Path \Device\HarddiskVolume4\PROGRA~2\APC\POWERC~1\agent\pbeagent.exe

This is the same section of the generated HTML file, generated after the second execution of ClockRes:

Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 156261

That is potentially interesting.  The output of powercfg stated that PROGRA~2\APC\POWERC~1\agent\pbeagent.exe requested a timer of 1.000 ms, which then changed the Windows server system-wide timer to 1.0009ms.  Interesting?  PROGRA~2\APC\POWERC~1\agent\pbeagent.exe resolves to the “APC PBE Agent” service in Windows, which is a component of the American Power Conversion (APC) PowerChute Business Edition software.  That software interfaces with an attached UPS to provide a gentle shutdown of the server in the event of an extended power outage.  The “APC PBE Agent” service happens to be the service that I shut down between the first and second execution of the ClockRes utility.

Interesting?  Does that suggest that installing the APC PowerChute Business Edition software on a server potentially has a significant impact on the performance of that server due to the program’s insistence on changing the Windows system-wide timer resolution to 1ms?  A quick observation indicates that the change made by the APC software to the Windows system-wide timer resolution does NOT apparently affect the reporting of the c=15600 entries in an Oracle Database 10046 extended SQL trace when the APC software is installed on the server.  The question remains whether or not this APC software could significantly decrease the performance of the Oracle Database software (potentially by 30%, as suggested in the one unnamed article).
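
The experiment is easy to repeat – a sketch, assuming the Sysinternals Clockres.exe is in the current directory and the APC service is installed under the display name shown in the powercfg output above:

# Timer resolution with the APC agent running
.\Clockres.exe
# Stop the service identified by powercfg, then measure again
Stop-Service -DisplayName "APC PBE Agent"
.\Clockres.exe
# Restart the service so the UPS shutdown protection is not left disabled
Start-Service -DisplayName "APC PBE Agent"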

——

The Windows Server that is experiencing occasionally jittery mouse and keyboard input is reasonably high-end for a Windows server: Intel Xeon E5-2690 8 core CPU at 2.9GHz (with hyperthreading enabled, giving the appearance of 16 CPUs in Windows), 64GB of memory, RAID controller with 1GB of battery backed cache, 16 internal 10,000 RPM hard drives, two gigabit network adapters in a teamed configuration, etc.  It should require a substantial load on the server to cause the jittery mouse and keyboard input behavior.

The power option plan in Windows was set to High Performance, while the default plan in Windows Server is Balanced.  Various articles on Microsoft’s website state that the Balanced plan allows the server/operating system to use CPU speed throttling (reducing the CPU speed from the stated speed rating, 2.9GHz in the case of this server), and core parking (essentially putting one or more CPU cores to sleep) in order to reduce energy consumption.  Some articles on Microsoft’s site indicate that, at least with Windows Server 2008, CPU parking may increase IO latencies – that, of course, would be bad if Oracle Database were installed on the server.  Other articles on Microsoft’s site indicate that there are bugs, at least with Windows Server 2008, related to core parking which cause the parked cores not to wake up when the CPU load increases.  I wonder if this particular bug is playing a part in the performance issue faced in this very recent Usenet thread that describes poor performance of Oracle Database running in Hyper-V on Windows?

Here is a screen capture of the Power Options window and Task Manager on the Windows Server 2012 machine that is experiencing occasionally jittery mouse and keyboard input (screen capture taken when the server was mostly idle):

UnexpectedPowerOptionsTaskManager

Notice the inconsistency?  The server’s CPU is throttled down from 2.9GHz to just 1.16GHz while the power option plan is set to High Performance.  The Microsoft-published “Performance Tuning Guidelines for Windows Server 2012” document states the following on pages 16-17 (I highlighted some of the words in red):

“Balanced (recommended): Default setting. Targets good energy efficiency with minimal performance impact.  Matches capacity to demand. Energy-saving features balance power and performance.

High Performance: Increases performance at the cost of high energy consumption. Power and thermal limitations, operating expenses, and reliability considerations apply.  Processors are always locked at the highest performance state (including “turbo” frequencies). All cores are unparked.

Power Saver: Limits performance to save energy and reduce operating cost.  Caps processor frequency at a percentage of maximum (if supported), and enables other energy-saving features.”

Well, that is interesting, and is inconsistent with the above screen capture.  Incidentally, when the server was experiencing the worst of the occasionally jittery mouse and keyboard input, the CPU utilization was hovering around 6% and the CPU speed was still coasting at 1.16GHz to 1.18GHz, the network performance hovered between 600Mbps and 1100Mbps, and the server’s internal hard drives barely noticed the traffic passing to/from the disks through the network interface (lower than 75MB/s and 137MB/s, respectively).  6% CPU utilization causes the mouse and keyboard input to become jittery?  With hyperthreading enabled, there are essentially 16 CPU seconds available per second of elapsed time.  A quick check: 1/16 = 0.0625, so 1 (hyperthreaded) CPU at 100% utilization would be reported as a system-wide utilization of 6.25%.  Interesting, but is that statistic relevant?
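
A quick way to compare the rated and current CPU clock speeds without a screen capture is the standard Win32_Processor WMI class (both values are reported in MHz):

# Compare the rated (maximum) and current clock speeds
Get-WmiObject Win32_Processor | Select-Object Name, MaxClockSpeed, CurrentClockSpeed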

I happened to have the Windows Resource Monitor open during one of the jittery episodes.  The Resource Monitor showed, shockingly, that 14 (possibly 15) of the hyperthreaded “CPUs” were parked!  That result is also in conflict with the Microsoft document mentioned above regarding “all cores are unparked” when the High Performance power plan is selected.  So, at 6% CPU utilization the server was CPU constrained.  Modifying the setting in the server’s BIOS that controls whether or not cores may be parked, so that the cores could not be parked, fixed the issue in Windows Server 2012 that resulted in the 30 second delay when moving the mouse pointer a short distance across the screen (a software-side alternative is sketched below).
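
For completeness, core parking can reportedly also be restrained from within Windows rather than the BIOS – a sketch, assuming the commonly documented CPMINCORES alias for the “Processor performance core parking min cores” setting (test before relying on it):

# Force the minimum percentage of unparked cores to 100% on the active power plan
powercfg -setacvalueindex scheme_current sub_processor CPMINCORES 100
powercfg -setactive scheme_current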

The server still exhibits a bit of jittery behavior with mouse and keyboard input when the server’s teamed network cards are heavily used for file transfers to the server, but at least the CPU activity is no longer confined to a single hyperthreaded “CPU”:

UnexpectedResourceMonitor

Considering that this server was ordered from the manufacturer as “performance optimized”, I am a bit surprised at the power consumption of the server.  The server was ordered with dual (redundant) 1100 watt power supplies.  With the CPU’s 135 watt maximum TDP (per Intel: “Thermal Design Power (TDP) represents the near maximum power a product can draw for a thermally significant period while running commercially available software.”), 16 hard drives, and 64GB of memory, I fully expected the server to consume between 700 and 900 watts of electrical power.

Here is the server’s power consumption when the server is lightly loaded with roughly 68 running processes (note that the server is connected to a 120 volt power outlet):

UnexpectedPowerConsumptionLittleLoad

Here is the server’s power consumption when the server is moderately loaded with between 600Mbps and 1100Mbps of network traffic (the mouse pointer was slightly jittery at this point):

UnexpectedPowerConsumptionNetworkLoad

So, the server consumes 1.2 amps (126 watts) when lightly loaded and 1.4 amps (154 watts) when moderately loaded (the wattage figures are lower than a straight volts × amps calculation would suggest, presumably because the measured power factor is less than 1).  Keeping in mind that many of the popular incandescent light bulbs require 100 watts of power (note that some governments have now restricted the manufacturing of high wattage incandescent light bulbs), this server is consuming just a little more electrical power than a light bulb that might have been hung overhead just a decade or two ago.

One of the common arguments for server virtualization is energy savings – the above screen captures may suggest that energy savings may not be a significant cost-savings factor for virtualization with modern server hardware.  One might question how much energy is really being saved when the network interface is maxed out by a single virtualized server, just 6% CPU utilization results in a jittering mouse pointer, and there are eight to ten virtualized servers stacked on the physical hardware (all competing for the scarce CPU and network resources).

Added May 11, 2013:

Dell BIOS setting to enable or disable CPU parking in Windows Server 2012:

UnexpectedPowerConsumptionBIOSProcIdle

With the BIOS option set to enabled, disk activity caused by network traffic results in occasionally jittery mouse movements on the server.  Based on a bit of research, installing the Hyper-V role on either Windows Server 2012 or Windows 8 may disable CPU throttling and/or disable CPU parking.

Added June 5, 2014:

I finally had sufficient time to fully analyze this problem, where a 2.9GHz CPU in a Dell PowerEdge T620 server crawled along at a leisurely pace of about 1.16GHz, actually throttling back performance further as demand for the server’s resources increased.  A second Dell PowerEdge T620 server with a 2.6GHz CPU that was purchased at the same time also coasted along at roughly 1.16GHz, but that server did not seem to throttle back performance further as demand for the server’s resources increased.

As a review, the screen capture shown below at the left shows the Windows Server 2012 Power Options settings and the Performance tab of the Task Manager.  The screen capture below at the right shows the Windows Server 2012 Power Options settings and the Performance tab of the Task Manager after fixing this particular problem – note that the 2.9GHz CPU is now essentially overclocked at 3.28GHz (it has operated at roughly that speed since the fix).

UnexpectedT620PowerOptionsTaskManager UnexpectedT620PowerOptionsTaskManager2

The 2.9GHz PowerEdge T620 and the 2.6GHz PowerEdge T620 are both Active Directory domain controllers and internal DNS servers (along with supporting other tasks), so the occasionally slow (or extremely slow) performance of the servers negatively impacted the performance of other servers as well as client workstations.

There was a BIOS firmware update released in the third quarter of 2013, which was supposed to address some CPU throttling issues – that BIOS update did not seem to help the problem that I experienced.

I thought that the low power consumption of the big server with the 2.9 GHz E5-2690 8 core CPU was a significant clue when I tried troubleshooting the server a year ago, since that CPU is rated to consume up to 135 watts and the server liked to hover between 120 watts and 140 watts regardless of the server’s workload.  The Dell T620 (and other recent Dell servers) has some pretty sophisticated power management capabilities.  Dell sells a utility that is able to alter the electrical power profile of a server, and I thought that Dell might have imposed a 140 watt limit on the server for some reason, but I could not find where that limit was specified.  The 2.9 GHz E5-2690 8 core CPU apparently has some additional electrical power limiting capabilities.  A year ago I even tried downloading a demo of Dell’s power management utility – that did not help resolve the issue (I think that the installation might have caused some other issues that I had to fix).  Last Tuesday I read the following articles:
http://www.dell.com/learn/us/en/19/financial-services-markets-solutions-processor-acceleration-technology
http://www.dell.com/us/business/p/dell-openmanage-power-center/pd
http://www.intel.com/content/www/us/en/data-center/data-center-management/how-to-configure-node-manager-video.html
ftp://ftp.dell.com/Manuals/Common/poweredge-r720_Concept%20Guide_en-us.pdf
http://en.community.dell.com/techcenter/power-cooling/w/wiki/3536.openmanage-power-center-faq.aspx

I rebooted the server, pressed F2, and dug around in the settings a bit.  I found that the System Profile Setting was set to “Performance per Watt” (I believe that this was how it was set when it left the Dell factory).  I changed that setting to “Performance”, saved the changes, and rebooted the server again.  The server is now consuming 200+ watts, and the CPU is freely exceeding its rated speed.  The pictures below show the configuration changes made in the System BIOS settings to remove the electric power cap, thus allowing the server to behave as it should have from the factory:

UnexpectedT620SystemProfile1 UnexpectedT620SystemProfile2

UnexpectedT620SystemProfile3 UnexpectedT620SystemProfile4

I suppose that if a Dell PowerEdge T620 (or similar recent model Dell server) seems to be running a bit slower than expected (note that the particular problem mentioned above is NOT Windows specific – a Dell PowerEdge T620 running Linux should be affected in the same way), you might take a quick peek at the System Profile Setting in the System BIOS to make certain that the System Profile is set to Performance.  As shipped from the factory, two Dell PowerEdge T620 servers purchased this year were NOT affected by the problems mentioned in this blog article.





On the Topic of Technology… 4


May 7, 2012

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Today’s blog article has an unusual tie in with Oracle.

The last couple of weeks I have been experimenting with video technology.  Computer related video capabilities have certainly changed over the years.  In 1996 I purchased a Sony Handycam and a Video Snappy.  The Sony video camera was capable of recording video with about 480 lines of resolution (NTSC standard) with a 10x optical zoom and recording capability in just 0.6 LUX of lighting.  The Video Snappy plugs into a computer’s parallel (old style printer) port, connects to the Sony video camera by an RCA style video cable, and converts the live video feed from the video camera to still photos.  Combined with a 120MHz Pentium processor, it seemed to be a state-of-the-art setup at the time (of course ignoring the capabilities of the Commodore Amiga/Newtek Video Toaster).  A picture of the Sony/Snappy configuration is shown below.

Of course the video capabilities of current tablets, cell phones, and digital cameras far exceed what was available in the 1990s – video recording with those devices was described in the previous blog article in this series (see the link at the top of this article).

A product that I recently found is the Synology DiskStation DS212+, which has some remarkable features considering that its primary objective is to provide an external hard drive array with RAID 1.  This particular unit ships without hard drives, so I purchased two Western Digital 2TB Green hard drives.  The external hard drive enclosure includes an SD media card reader, a single USB 2 port, two USB 3 ports, and an eSATA port to allow connecting additional external hard drives, printers, and wireless cards.  While not much larger than the hard drives installed in the unit, it certainly offers much more than access to those drives.  The DS212+ offers FTP services (including secure FTP), an iSCSI interface, DHCP services, media sharing services, WordPress, MySQL, PHP, a remarkable operating system that fully renders console screens in a web browser without the use of Adobe Flash (uses HTML 5 and CSS 3), and more.

The Synology DiskStation DS212 line’s disk throughput is limited by a combination of CPU performance and gigabit network maximum transfer speed, with the DS212+ offering the fastest rated transfer speed of roughly 108 MB/s read (very close to the maximum speed of gigabit Ethernet) and 66 MB/s write.  The Synology DS212+ is pictured below, measuring roughly the same height as five books on the topic of Oracle Database.

So, what does the first picture have in common with the second?  More about that commonality later.

Below is a screen capture of the Synology DS212+ operating system GUI (graphical user interface) rendered in Internet Explorer 9, roughly 21 minutes after powering on the unit for the first time and installing the latest version of the operating system.  As seen below, the 2TB drives were 84% formatted roughly six and a half minutes after I connected for the first time (a verify process lasting several hours started immediately after the format, but the hard drives were accessible during this verify process).

The operating system renders resizable, movable, drag and drop capable, right-clickable windows within the web page.  Several of the free optional packages for the DS212+; a resource meter showing CPU, memory, and network utilization; current network connections; and recent log entries are shown in the picture below.

So, what function does the DS212+ serve other than consuming electricity?  That is still a question that I am trying to answer, but I have only had access to the system for a couple of days.  I installed several of the free optional packages (after downloading the latest version from the company’s website), and experimented a bit.  The screen capture below shows the DS212+ playing an Internet radio stream (the channel was essentially selected at random), while simultaneously playing back a 640 pixel by 480 pixel video.

Incidentally, the above video was captured in a completely dark room using infrared lights that are built into the video camera.

As I mentioned at the beginning of this article, over the last couple of weeks I have spent a bit of time working with video technology.  Pictured below are two TriVision NC-107WF video cameras and a SanDisk 32GB micro SD memory card that works with the cameras.  I have also worked with a couple of TriVision NC-107W video cameras, which lack an infrared cut filter, resulting in poor color rendering.

So, what has 16 years of technology progress provided, compared to the Sony Handycam shown at the start of this article?  The camera shown below records video at 640 pixels by 480 pixels, much like the Sony Handycam, so that feature has not improved much.  The TriVision camera digitally records nearly a month’s worth of video to a memory card that is about the size of a thumbnail, while the Sony Handycam digitally records between a half hour and two hours of video to a tape that is about the size of an average person’s fist.  The TriVision camera records black and white video in complete darkness due to its built in infrared lights, while the Sony Handycam records excellent completely black videos in the same lighting conditions.

Surprisingly, there are no reviews of the TriVision line of cameras on Amazon.  The cameras appear to be a clone of the (Amazon) highly rated Sharx Security brand of security cameras.  Unlike some of the other security cameras on the market, this camera ships with a well written user manual (with only a small number of typos).  Offering motion detection, support of up to 32 GB of storage, automatic upload of video and still photos to an FTP server, live streaming through desktop web browsers and mobile devices, and a handful of other capabilities, it is hard to believe just how much technology is stuffed into such a small package.  The wireless range when paired with a Cisco 1250 series access point is impressive, but not terribly impressive when paired with a consumer grade Linksys/Cisco wireless router with integrated antennas.  Poor wireless performance is not necessarily a problem, since the camera stores recorded video to the memory card until the specified FTP server is accessible.  The cameras ship with Multi-live software that permits simultaneous viewing and recording of up to 36 cameras directly from the video streams, which is helpful if an FTP server is not configured.

Reliability of the TriVision NC-107WF/NC-107W cameras is still an unknown.  I have experienced occasional glitches accessing the built-in web server, making it impossible to adjust the camera settings (power cycling the camera seems to correct this issue), however those glitches apparently do not affect video recording or uploading of the captured video to FTP servers.

I have also spent a bit of time working with TriVision’s NC-306W outdoor wireless video cameras, which are shown in the picture below.  The NC-306W camera appears to be a clone of the (Amazon) highly rated Sharx video camera.  The web-based configuration of the NC-306W is nearly identical to that of the NC-107WF.  A 32GB memory card with automatic FTP uploading is supported, as is two-way audio (the NC-107WF supports one-way audio).

Since there are no reviews of the TriVision NC-306W, it is difficult to determine the long-term reliability of this camera.  During installation, one of the mounting nuts snapped due to over-torquing, but that nut is only needed for overhead mounting as seen in the picture below (the mounting nut is attached directly between the sun shield at the top of the camera and the white colored dial at the end of the mounting rod).  As with the TriVision NC-107WF/NC-107W cameras, the built-in web server has occasionally stopped responding, but that problem has not affected video capture or FTP upload.

Below is a screen capture of a video stream from a TriVision NC-107WF camera.  The original video quality was slightly better than pictured below (conversion of the screen capture to JPG format caused some detail loss).  The same scene captured by a TriVision NC-107W camera would have a pink, purple, or red cast due to the presence of infrared light (the NC-107WF and NC-306W are able to selectively filter out the infrared light).

I had hoped to upload a couple of videos captured by the cameras; however, WordPress apparently does not support directly uploaded video formats.  I plan to update this blog article as I better understand all of the features that the Synology DiskStation DS212+ offers, and to provide reliability updates of the DS212+ and the TriVision cameras.





OT: Do Search Terms Describe the Visitor?


January 12, 2012

I thought that I would start this slightly off topic blog article with a bit of humor.  Seven months ago I wrote a blog article that refuses to move from the first position in the most visited articles on this blog.  In the process of trying to understand why a mathematics focused article is the most popular article on an Oracle Database specific blog, I regularly review the search keywords that bring people to my blog.  Yesterday alone, the search keywords that apparently pointed to the popular blog article included (not a complete list):

  • finding distance between two points lat long (2)
  • distance spherical points coordinates oracle (2)
  • find distance between two longitude latitude points (1)
  • calculate distance between points in a feature (1)
  • calculate speed longitude latitude (1)
  • distance between two lattitude points (1)
  • how to get distance between two latitudes in meter (1)
  • access table that converts latitude and longitude (1)
  • calculate time and distance between two longitudes (1)
  • distance between two points latitude and longitude transact sql (1)
  • converting to oracle degree minutes (1)
  • distance between latitude (1)
  • select sdo_geom.sdo_distance (1)
  • oracle sdo_geom.dos_distance (1)
  • calculate distance longitude latitude (1)
  • method of measuring distance using plain (1)
  • oracle spatial calculate distance (1)
  • how to calculate distance between two latitude and longitude (1)
  • to calculate distance between two points (1)
  • oracle spatial sort lat lon coordinates (1)
  • longitude and latitude distance between 2 points (1)
  • solve this:find distance between (35 degrees 21′) and (29 degrees 57′) and given that the radius is 3960 (1)
  • distance between two points wordpress (1)
  • how to get distance between two longitudes (1)
  • calculate distance between two lat long points (1)
  • how to find distance between points of longitude (1)
  • sql function calculate distance gps kilometers (1)
  • calculate distance between lattitute longitude (1)
  • excel calculate distance from lat long (1)
  • calculate distance from lat long sql (1) 

I think that the above keywords point to one and only one conclusion: the majority of the visitors to this blog are lost, know only their latitude and longitude coordinates (and the coordinates of where they are supposed to be), and are able to VPN in to their Oracle Database at work to determine how long it will take to become unlost.  :-)

While on the topic of interesting search keywords, I thought that I would mention a favorite search keyword that appeared in the WordPress statistics.  My all time favorite interesting search keyword that was used to access this blog appeared shortly after I reviewed a red covered Oracle related book – the search keywords were “Charles Hooper is getting sued”.  Needless to say, it is interesting that the search keywords found an article on my blog.

Moving on to the point of this blog article, I have also noticed a curious trend in some of the search keywords.  WordPress makes it a bit difficult to go back in time to tabulate somewhat popular search keywords when there are a lot of commonly used keywords that access this site.  However, with a bit of effort, I attempted to locate search keywords that share a common pattern.  Those search keywords that I located (with the minimum number of hits indicated) follow:

  • oracle core essential internals for dbas and developers pdf (8)
  • oracle core essential internals for dbas and developers download (6)
  • oracle core: essential internals for dbas and developers pdf (4)
  • oracle core: essential internals for dbas and developers download (1)
  • oracle tuning the definitive reference second edition pdf (99)
  • oracle tuning: the definitive reference pdf (3)
  • oracle tuning the definitive reference pdf (1)
  • oracle tuning the definitive reference 2nd edition pdf download (2)
  • oracle performance firefighting pdf (18)
  • expert oracle practices download (1)
  • apress.oracle.core.essential.internals.for.dbas.and.developers.nov.2011.rar (1)

Undoubtedly, in the above cases the searcher likely found my review of a particular book.  The above list has me wondering about a couple of items.  I was a full time college student for five years to obtain my degree.  I remember the pain of signing a check for what seemed to be (U.S.) $500 every semester – it did not seem to matter much if I was buying mathematics books (a $204 Calculus book from Amazon oddly seems appropriate here), computer books, or psychology books – the total was always about the same.  Was that money for books a waste?  Maybe I could have just found the book out on the Internet somewhere… they had that neat Gopher tool way back then, but not the WYSIWYG interface that is common now.

I am left wondering what the people searching for the above keywords might have been thinking.  I assume that some of the searchers have spent some time in college, and might even be Oracle developers or DBAs.  So, do you think that search terms describe visitors?  If so, how do you think the above search keywords might describe the searchers?





Internal Server Error – Contact Your System Administrator


December 6, 2011

The Oracle OTN forums changed a bit a year or two ago, and in the process I stopped receiving email notifications when new entries were added to discussion threads.  The absence of email notifications was not a significant loss, although at times it is a bit interesting to see how a post in some of the threads changed a week or two after the initial post.  It appears that the OTN staff have corrected the email notification problems, and the speed of the forums seems to have improved significantly since the dramatic performance drop that was caused by an upgrade to the OTN forums a couple of years ago.

An interesting new problem has arrived.  The unbreakable (but free) forums tell me to contact the system administrator after I attempt to log in – the web server claims that the problem is caused by either an internal error or a misconfiguration.  I tried calling the system administrator here, but his phone is busy every time I try to place the call, and remarkably I always get interrupted when I try calling from a different phone.  ;-)  This is the error that I see immediately after logging in:

What is really irritating is that I received three emails today from OTN telling me that the OP has updated an OTN thread that I responded to, but sadly I cannot reach that thread.  After logging into OTN, I can’t even tell the browser to display forums.oracle.com – this is what I see: 

I can make it into Metalink (My Oracle Support) without an issue – I didn’t even need to log in (no password requested):

So, what happens if I click Sign Out in My Oracle Support?  Let’s try and then head back to forums.oracle.com (this seems to work sometimes):

So, the forums work, just as long as you do not care to contribute.  :-)  If we skip the login step, there are a couple of threads in the Community Feedback and Discussion forum about the problems (thread1, thread2).

Let’s log in again… (something comes to mind about the definition of insanity and doing something over and over again):

Out of curiosity, let’s see where forums.oracle.com is pointing.  The traceroute is successful (ignore the long ping times – that was caused by other traffic on the Internet connection).

I noted that Google’s 8.8.8.8 DNS server is currently resolving forums.oracle.com also to e4606.b.akamaiedge.net, which Google’s DNS server indicates is at IP address 184.25.198.174 (using Google’s DNS server does not change the error message that is displayed in the browser):
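
For anyone who wants to repeat the comparison, the lookups are simple to run from a PowerShell prompt (output omitted here):

nslookup forums.oracle.com           # resolve using the default DNS server
nslookup forums.oracle.com 8.8.8.8   # resolve using Google's public DNS server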

Without using Google’s DNS server, forums.oracle.com resolves to 23.1.18.174, as indicated by the trace route output.  I was curious what Wireshark might show when attempting to display forums.oracle.com (while logged in), so I fired it up and then told the browser to refresh the screen twice:

TCP ACKed lost segments…  Would the same problem happen when using Google’s DNS server, which points at IP address 184.25.198.174?  Let’s check:

Not exactly the same, but similar.  I have not spent a lot of time trying to dig through the Wireshark captures, but it is a bit odd that there are still TCP retransmissions (I might need to take a closer look at the Internet connection).

I guess that maybe this blog article drifted a bit.  Anyone else fail to connect 100% of the time, or never have a problem connecting?  I can only imagine the staff changes that probably would take place if one of our internal systems offered less than 95% uptime, much less the 99.99999% uptime that seems to be our internal expectation.  It is important to keep in mind that the OTN forums (even the copyrighted error message in the second screen capture) is a free service offered by Oracle Corporation – the problems will be resolved.





A Year Older, Any Wiser? 2


November 30, 2011

(Back to the Previous Post in the Series)

Today marks the second anniversary of this blog.  With just 372 articles posted in the last two years, the rate at which new articles are posted has decreased rather dramatically from the 1+ article per day average that was established shortly after the launch of the blog.  To celebrate the second anniversary, I thought that I would post a couple of statistics and bits of information about this blog:

  • On October 21, 2011 there were 1,020 views of the articles on this blog, the most in any one day.  Yet on that day one of the most simple blog articles was posted.
  • On a typical day search engines generate roughly half of the page views on this blog (this percentage increased in the last couple of months).  I periodically wonder if those people searching for Charles Hooper (currently the most common search keyword) are in search of something else (that link is safe to view at work, although it does recall a certain article by Richard Foote that was posted March 31, 2008).
  • No article on this blog has ever resulted in an unintentional distributed denial of service attack (I am not explicitly pointing the finger at this article: http://jonathanlewis.wordpress.com/2011/11/28/oracle-core/ ).  ;-)
  • The most viewed article since this blog went online (currently with roughly 7,530 views) was written roughly 23 months ago and describes a deadlock that can be triggered in Oracle Database 11.1.0.6 and greater, where the same sample code does not trigger a deadlock in Oracle Database 10.2.0.1 through 10.2.0.5 (earlier release versions not tested).
  • The second most viewed article since this blog went online (at roughly 5,500 views) was written just 5.5 months ago and has more of a mathematics focus than an Oracle Database slant – more so than any other article on this blog.
  • The third most viewed article since this blog went online (at roughly 4,200 views) describes a case where I made an error in testing Oracle Database 10.2.0.1 in 2006, and accidentally confused a partially related cause with an effect.  In the article I then tried to determine the coincidental cause and effect.  Interestingly, there is an odd association with the contents of that blog article with the false DMCA claim that was filed against one of my articles earlier this year.
  • One Oracle Database book review article on this blog extended to roughly 18 typewritten (single spaced) pages in length, and I read that book twice from cover to cover.
  • One Oracle Database book review article on this blog extended to roughly 24 typewritten (single spaced) pages in length, and that review covered ONLY the first 230 or so pages of the 1100+ page book plus a couple of other pages later in the book.
  • One Oracle Database book review article on this blog extended to a record shattering 35.5 typewritten (single spaced) pages in length that excluded two of the book’s chapters, and due to the length of the review had to be divided into two separate blog articles.
  • The blog category name Quiz – Whose Answer is it Anyway? is derived from the name of the TV show “Whose Line is it Anyway?” – a TV show where the actors improvise the show content in reaction to the behavior of other actors.  The articles in this category (58 articles by the latest count) demonstrate cases where unwitting actors (uh, authors) improvise their story of how Oracle Database works and how it does not work.
  • Some of the blog articles have titles with double or triple meanings.  I sometimes accidentally embed humor in some of the articles – sometimes I don’t recognize the humor for days.
  • It is occasionally difficult to develop unique blog article content that is not already better described in five blogs authored by other members of the Oracle community.  I monitor a couple of different discussion forums (OTN, Usenet, Oracle-L, and a forum operated for commercial purposes) to find interesting problems that might appeal to a wider audience.  I am still trying to determine if there is anything somewhat interesting about this article – how hard must the developer work to hide information from the query optimizer? 
  • Some of the articles are posted in an incomplete form, knowing that helpful readers will often fill in the missing details… or the missing link.  If I could remember everything that I have forgotten about Oracle Database, I would have to forget something else – so thanks for the gentle reminders that I receive on occasion.
  • I probably should have created a couple more blog article categories: Documentation Errors; Test Cases; Ouch, that’s Not Supposed to Happen; and Where Did You get that Idea?.
  • I pay a yearly fee to WordPress that allows my articles to stretch across your widescreen monitor, rather than being confined to the narrow width of the title banner at the top of this page.  I also pay a yearly fee to WordPress so that unwanted advertisements do not clutter the page for visitors not logged into a WordPress account.

Looking forward, I can only predict that there is a better than 50% chance that there will be a part three to this blog article series in 12 months.  I managed to purchase four books from Apress during their Cyber Monday sale (in between the distributed denial of service attacks) a couple of days ago, including a book that I recently received in paperback form from Amazon.  I think that this might be a hint that there will be more Oracle Database book reviews posted to this blog in the future.





On the Topic of Technology… 3

24 05 2011

May 24, 2011 (Updated May 28, 2011)

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

In the previous article in this series I described some of the features of the Motorola Xoom tablet, and some of the reasons why I purchased that tablet rather than the Apple iPad 2.  When I wrote the previous article I could not identify a lot of productive uses for the Xoom – most of the uses seemed to be specific to media consumption (something that is quite expensive when you must pay per megabyte of usage on a wireless plan).

After a bit of thought, I added a BlueTooth keyboard to the Xoom and fired up Oracle Database 11.2.0.2:

The method by which the above works is actually quite simple.  More on that later.  The Honeycomb 3.1 operating system upgrade that was pushed out on May 14, 2011 seems to have corrected the lagging keypresses (and missing keypresses) problem with the on-screen keyboard.

It is quite easy to fill an entire Xoom screen with applications, some of which are free, and others that cost just a couple of dollars.  Fortunately, there are four more screens waiting to be filled with more applications and widgets:

A quick search of the Android MarketPlace for the word Oracle currently finds 94 applications and 246 books, the number of which is slightly surprising, yet might be pointing at a positive that the Xoom can be useful for productive tasks too:

To test the built-in book reader application, I decided I needed to add an Oracle Database related book.  The results are impressive, with line drawings, color graphics, and text just like the original book.  Slick special effects are also a part of the book viewer, most noticeable when changing from one page to another.  The book viewer did crash when viewing one set of pages in this book, however I am not sure if that is still a problem after the rollout of the Honeycomb 3.1 operating system:

Google’s speech to text engine in the Xoom is very impressive, even if it did drop the word “the” in the phrase that I recited (this happened multiple times).  There is a free Google Translate application for the Xoom that accepts speech, converts the speech to text, and then translates the text to another language.  In theory it will also recite the translated text in the foreign language, but that did not seem to work properly when tested:

After the Honeycomb 3.1 upgrade, the Xoom now supports movie rental through the Marketplace application, the tasklist (third button from the left at the bottom of the screen) shows more than the last 5 or so recently started applications, and a couple of other enhancements made their way into the operating system.  Google Goggles and Google Voice Actions are interesting applications for the Xoom, and just might lead to far too many people mindlessly talking to their digital gadgets.  The PocketCloud application was used to VNC into a laptop computer for the first of the pictures in this article – without the external keyboard it was nearly impossible to type into the SQL*Plus window on the Xoom, because the on-screen keyboard did not appear automatically (the on-screen keyboard can be manually displayed with the PocketCloud application).

Prior to the release of the Honeycomb 3.1 operating system for the Xoom I was becoming upset with the number of crashes (force close in Android terminology) and the on-screen keyboard issues, and I was also growing a bit bored trying to find useful applications for the Xoom that did not pose a risk of exceeding my monthly Internet data cap.  So, I wandered into an electronics store.  Nothing interesting here (I later learned that the manufacturer would recall 1,000 of these units from this particular chain electronics store), so I drove to another store located about a block away.  I wandered out with one of these:

The above is a picture of a BlackBerry PlayBook.  The unit has 16GB of memory (larger memory capacities were available), a smaller screen than the Xoom, a good quality video and still picture camera, and an impressive task-switcher that shows off the multitasking capabilities of the unit (shown above).  The PlayBook also shipped with a built-in word processor and spreadsheet application, while I had to pay extra for those features on the Xoom.  The on-screen keyboard is fantastic compared to that found on the Xoom.

In this article I did not include a picture of the search, but searching for the term Oracle in the PlayBook’s App World returned a single result… and that result had nothing to do with Oracle Databases.

So, why did I buy a BlackBerry PlayBook?  There is not even a stand-alone email application for the PlayBook, unlike the built-in Gmail application on the Xoom.  Well, if you have a BlackBerry phone that is connected to a BlackBerry Enterprise Server, you can look at and respond to your corporate email:

The above uses the BlackBerry Bridge application on the PlayBook to talk to the BlackBerry phone.  The catch here is that if the group that is responsible for administering the IT policy for the BlackBerry phones has disabled installation of third party applications in the IT Policy for the phone, it is not possible to install the BlackBerry application on the phone that works with the BlackBerry Bridge application on the PlayBook.

The BlackBerry Bridge also permits browsing websites using the phone’s data plan, without incurring an additional monthly cost.  The catch is that content rich websites simply do not display (the limit seems to be around 100KB).  The same websites will display correctly when using the PlayBook’s regular web browser, even if websites insist on sending the mobile version of their web pages.  The same data limit also seems to be present when viewing large email attachments (those that are larger than 100KB do not seem to display).

The iSpeech Translator is a free application found in the BlackBerry App World.  This free application features a speech to text converter, as well as translation to other languages both in text form and as speech.  The speech to text converter is very slow compared to the Xoom, and I think that I might observe a funny response if I actually used the translated text shown below (this is the same phrase that was supplied to the Xoom):

Unlike the Xoom, where it appears to be impossible to perform a screen capture without “rooting” the Xoom or using a cumbersome process that involves installing the Xoom developer kit, it is very easy to perform a screen capture on the PlayBook by holding the volume up and volume down buttons.  This screen capture capability is disabled when the BlackBerry Bridge is active, so shut off your BlackBerry phone if you have set up the BlackBerry Bridge:

The music player in the PlayBook works well.  Using the Xoom I bought several CDs’ worth of DRM-free MP3s from Amazon’s Android portal.  Through a connected USB cable I was able to easily transfer the MP3s to a laptop using Windows Explorer, and then transfer the MP3s to the PlayBook, also using Windows Explorer:

Incidentally, the USB and HDMI cables that work with the Xoom also work without problem with the PlayBook.

Unlike the Xoom, with its many applications, the applications for the PlayBook are currently very limited in number (and in some cases depth).  The PocketCloud application, with both VNC and Remote Desktop capability, does not exist for the PlayBook.  However, there is an interesting application that will talk to a VNC server on a Windows computer, and that application is capable of remotely controlling the Windows Media Center application on the Windows computer:

Simply add Internet TV to Windows Media Center, and you can use a BlackBerry PlayBook to change the channel to something more interesting (I might be implying something here).

One final interesting detail.  The Xoom is able to broadcast its MP3s to car receivers using BlueTooth – in this case the car is equipped with Microsoft Sync.  It is a slick feature, and will work even if the Xoom is turned off.  I have not tested to see if the BlackBerry PlayBook supports a similar feature.  A fun way to add a little Xoom to your vehicle without excessive visits to the fuel station:

—————————————————————————————————————-

(Section Added May 28, 2011)

Electronic Book Testing on the Xoom and PlayBook:

Acrobat Reader on the Xoom and PlayBook when showing the PDF eBook version of “Troubleshooting Oracle Performance” both accurately reproduced the printed version of the book.  However, on both tablets, when the pages were viewed in portrait orientation, the screen capture picture at the bottom of page 288 was hard to read due to antialiasing problems where the application simply removed lines of resolution.  The picture was readable in landscape orientation, however in that orientation only about 40% of the page was viewable without scrolling.  Only one page is visible on the screen at a time, although the pages can be stacked vertically end to end to produce a smoothly scrolling chain of pages (similar to the appearance on a desktop PC).

ezPDF Reader on the Xoom when showing the PDF eBook version of “Troubleshooting Oracle Performance” accurately reproduced the printed version of the book.  There were no antialiasing problems with the screen capture picture at the bottom of page 288 – the contents of that picture were easily readable in portrait orientation.  Only one page is visible on the screen at a time.

QuickOffice Pro HD on the Xoom when showing the PDF eBook version of “Troubleshooting Oracle Performance” accurately reproduced the printed version of the book.  There were no antialiasing problems with the screen capture picture at the bottom of page 288 – the contents of that picture were easily readable in portrait orientation.  Only one page is visible on the screen at a time, with pages stacked vertically end to end to produce a smoothly scrolling chain of pages (similar to the appearance on a desktop PC).

The ePUB format of the “Troubleshooting Oracle Performance” book has a file size roughly half as large as the PDF version of the same book.  I tested the Aldiko, FBReader, and CoolReader ePUB readers on the Xoom.  While all three of the ePUB viewers worked (Aldiko force closed a couple of times, usually when rotating the screen 90 degrees) the format reminded me of early HTML web pages with plain text between pictures.  None of the three ePUB viewers retained the original book page numbers, although it was still possible to jump from the index to section headings in the book.  All three viewers had difficulty with the monospaced Courier text areas, most noticeable in execution plans.  The Aldiko program came close when attempting to align the monospaced text correctly, although some of the execution plans still had formatting/alignment issues (alignment spaces were sometimes missing, and wide execution plans often line-wrapped).  The other two viewers used fonts that were not monospaced in those sections of the book, so the formatting/alignment issues often made the code sections impossible to read.

Acrobat Reader on the Xoom seems to lower the quality of images that are embedded in the text.  This was an obvious problem when looking at pages 112 and 114 of the “Expert Oracle Practices” book.

ezPDF Reader on the Xoom does well with the PDF version of the “Expert Oracle Practices” book.  The picture on page 112 is nearly as clear as it is in Acrobat reader on the PC when Acrobat Reader is zoomed to 134% – the picture in both cases is inferior to the printed copy of the picture as found in the book.  The picture on page 114 is very blurry on the PC and in ezPDF, but it is much more clear than it was in Acrobat Reader on the Xoom.

QuickOffice Pro HD failed to render all pictures that were more complicated than simple line drawings in the “Expert Oracle Practices” book.  The pictures on the front cover, pages 112, 114, 115, and several other pages were replaced with simple rectangles.  The line drawing on pages 30, 81, 82 appeared as expected.

Acrobat Reader on the PlayBook could not open the PDF version of the “Expert Oracle Practices” book because the PDF document is password protected as purchased from Apress.

Acrobat Reader, ezPDF Reader, and QuickOffice Pro HD on the Xoom all do well with the PDF version of the book “Expert One-On-One Oracle”.   The start of chapter picture on page 196, and the line drawings on pages 199 and 203 showed as clearly as is found in the printed book.  The code sections that had a light gray background in the book had a nearly invisible gray background in all three PDF viewing applications on the Xoom.  Acrobat Reader on the PlayBook also did well with this book, although the code sections had a much more visible light gray background that duplicated the effect found in the printed book (Acrobat Reader on the PC also produced the more visible light gray background).

Acrobat Reader displayed jagged edges on the circles found on page 18 of the “Oracle Performance Firefighting” book.  Zooming in on the circles greatly reduces the jagged edges, almost as if that forced the application to re-render the line drawing in a higher resolution.  The line drawings on pages 28, 30, 32 also exhibit jagged edges.

ezPDF Reader, and QuickOffice Pro HD on the Xoom exhibited no problems with the “Oracle Performance Firefighting” book – the line drawings were very smooth.

The “Oracle Performance Firefighting” PDF is password protected so it would not open in Acrobat Reader on the PlayBook.

The Google Books reader on the Xoom is very good, and seems to accurately reproduce printed books.  The application crashes when viewing the book “Beginning Oracle SQL” while attempting to display pages 56 and 57 side-by-side – of course the application tries to reopen these two pages when the book is selected, so an endless loop of crashes is experienced every time the book is opened until the tablet is rotated 90 degrees.





Adding Features to PowerPoint Based Oracle Presentation Files

28 04 2011

April 28, 2011

This blog article is not specific to Oracle Database, but I thought that I would share the concepts anyway.  If you are giving presentations and using PowerPoint, consider including detailed notes sections in the presentation.  What benefit do those notes sections serve?  Well, if you share the presentation files with the viewers of the presentation, those note sections act as a reminder of what was stated during your presentation.  In fact, you could go so far as to type everything that you intend to say during the presentation into the presentation notes section for the slides.

Let’s take a look at a couple of interesting features that can be implemented in a PowerPoint presentation when detailed notes sections are provided.

Read to Me:

In 1998 (or maybe it was 1999) I experimented with the Microsoft Speech API, which at that time was still in beta form for the initial release.  More recent releases of the Microsoft Speech API are obviously much more sophisticated, but at the time it was possible to easily switch between several “voices” with different pitch and speed settings.  It is very easy to incorporate speech capabilities into a PowerPoint presentation, because the Speech API is installed by default on computers running Microsoft Office (I believe that the Speech API is also included in Windows operating systems starting with Windows Vista).  A very simple, generic PowerPoint macro may be used to read back the notes section of the currently displayed slide:

Sub SpeakNotes()
    'SAPI flag constants, available if asynchronous speech is ever desired
    Const SVSFlagsAsync = 1
    Const SVSFPurgeBeforeSpeak = 2
    Dim strSpeech As String
    Dim objSpeech As Object
    Dim lngCurrentSlide As Long

    On Error Resume Next

    'Late binding, so no reference to the Microsoft Speech Object Library is required
    Set objSpeech = CreateObject("SAPI.SpVoice")

    'Determine which slide is currently displayed in the running slide show
    lngCurrentSlide = SlideShowWindows(1).View.CurrentShowPosition

    If Application.Version <= "11.0" Then
        'PowerPoint 2003 and earlier
        strSpeech = ActivePresentation.Slides(lngCurrentSlide).NotesPage.Shapes.Placeholders(2).TextFrame.TextRange.Text
        'Change the pitch
        'strSpeech = "<pitch middle='25'>" & ActivePresentation.Slides(lngCurrentSlide).NotesPage.Shapes.Placeholders(2).TextFrame.TextRange.Text
    Else
        'PowerPoint 2007 and later
        strSpeech = ActivePresentation.Slides(lngCurrentSlide).NotesPage.Shapes.Placeholders(2).TextFrame.TextRange.Text
    End If

    'Read the notes text aloud (synchronous - control returns when the speech completes)
    objSpeech.Speak strSpeech

    Set objSpeech = Nothing
End Sub

In the above you will notice that the macro code checks the version of PowerPoint so that it can potentially run a different set of control commands for the speech API (I do not recall the exact reason why I included this years ago, but I believe it is because the default voice in Microsoft Office 2003 is a male voice, while the default voice in Microsoft Office 2007 is a female voice).  Now all that needs to be done is to create a picture or object of some sort on a slide and associate an action with the object that executes the above macro.  I have used a couple of different objects over the years, typically designed to clearly communicate what will happen when the object is clicked, for example:
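The object and its action setting may also be created entirely in code.  The following is a minimal sketch (the shape type, position, and caption are arbitrary choices of mine) that places a button on the first slide and wires it to the SpeakNotes macro:

Sub AddSpeakNotesButton()
    Dim shp As Shape

    'Add a rounded rectangle near the lower right corner of the first slide
    Set shp = ActivePresentation.Slides(1).Shapes.AddShape(msoShapeRoundedRectangle, 560, 480, 140, 40)
    shp.TextFrame.TextRange.Text = "Read to Me"

    'Run the SpeakNotes macro when the object is clicked during the slide show
    With shp.ActionSettings(ppMouseClick)
        .Action = ppActionRunMacro
        .Run = "SpeakNotes"
    End With
End Sub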

Write to Me:

Another interesting feature that may be implemented is exporting the slides to JPG pictures, and then building a Microsoft Word Document from the exported JPG pictures and the slide notes – this is helpful for both the presenter and the people learning from the presentation.  In the past I had to manually create these types of handouts, so I thought “why not automate the process?”

We will start with the code to generate the JPG pictures from the presentation slides:

 
Sub WriteSlidestoJPG()
    On Error Resume Next

    'Create a folder for the slides if one does not already exist
    If Len(Dir("C:\Presentation Slides", vbDirectory)) = 0 Then
        MkDir "C:\Presentation Slides"
    End If

    'Remove any slides from a previous execution
    Kill "C:\Presentation Slides\*.*"

    'Save the slides as sequentially numbered JPG pictures (Slide1.JPG, Slide2.JPG, ...)
    ActivePresentation.Export "C:\Presentation Slides", "JPG", 640, 480  '640 pixels by 480 pixels
End Sub

Next, we will add a second macro that builds the Microsoft Word document:

Sub SendPowerPointSlidestoWord()
    'Requires a project reference to the Microsoft Word object library (Tools - References)
    Dim i As Integer
    Dim objWord As Word.Application

    On Error Resume Next

    'If Word cannot be started, Err will be non-zero and the macro silently exits
    Set objWord = New Word.Application

    If Err = 0 Then
        WriteSlidestoJPG

        With objWord
            .Documents.Add
            .Visible = True
            With .ActiveDocument.Styles(wdStyleNormal).Font
                If .NameFarEast = .NameAscii Then
                    .NameAscii = ""
                End If
                .NameFarEast = ""
            End With
            With .ActiveDocument.PageSetup
                .TopMargin = InchesToPoints(0.5)
                .BottomMargin = InchesToPoints(0.5)
                .LeftMargin = InchesToPoints(0.75)
                .RightMargin = InchesToPoints(0.25)
                .HeaderDistance = InchesToPoints(0.25)
                .FooterDistance = InchesToPoints(0.25)
            End With

            If .ActiveWindow.View.SplitSpecial <> wdPaneNone Then
                .ActiveWindow.Panes(2).Close
            End If
            .ActiveWindow.ActivePane.View.Type = wdPrintView
            .ActiveWindow.ActivePane.View.SeekView = wdSeekCurrentPageHeader
            .Selection.Style = .ActiveDocument.Styles("Heading 1")
            .Selection.TypeText Text:=Left(ActivePresentation.Name, InStrRev(ActivePresentation.Name, ".") - 1)
            .Selection.TypeText Text:="   by " & ActivePresentation.BuiltInDocumentProperties.Item("author").Value
            .ActiveWindow.ActivePane.View.SeekView = wdSeekCurrentPageFooter
            .Selection.ParagraphFormat.TabStops(InchesToPoints(6)).Position = InchesToPoints(7.5)
            .Selection.TypeText Text:=vbTab & vbTab & "Page "
            .Selection.Fields.Add Range:=.Selection.Range, Type:=wdFieldPage
            .Selection.TypeText Text:=" of "
            .Selection.Fields.Add Range:=.Selection.Range, Type:=wdFieldNumPages
            .ActiveWindow.ActivePane.View.SeekView = wdSeekMainDocument

            .Selection.MoveLeft Unit:=wdCharacter, Count:=2

            .ActiveDocument.Tables.Add Range:=.Selection.Range, NumRows:=ActivePresentation.Slides.Count, NumColumns _
                :=2, DefaultTableBehavior:=wdWord9TableBehavior, AutoFitBehavior:= _
                wdAutoFitFixed
            With .Selection.Tables(1)
                .Columns.PreferredWidth = InchesToPoints(7.5)
            End With
            With .Selection.Tables(1)
                .TopPadding = InchesToPoints(0)
                .BottomPadding = InchesToPoints(0)
                .LeftPadding = InchesToPoints(0.08)
                .RightPadding = InchesToPoints(0.08)
                .Spacing = 0
                .AllowPageBreaks = True
                .AllowAutoFit = False
            End With
            .Selection.Tables(1).Columns(1).PreferredWidthType = wdPreferredWidthPoints
            .Selection.Tables(1).Columns(1).PreferredWidth = InchesToPoints(3)
            .Selection.Move Unit:=wdColumn, Count:=1
            .Selection.SelectColumn
            .Selection.Columns.PreferredWidthType = wdPreferredWidthPoints
            .Selection.Columns.PreferredWidth = InchesToPoints(4.5)

            .Selection.MoveLeft Unit:=wdCharacter, Count:=2

            For i = 1 To ActivePresentation.Slides.Count
                .Selection.InlineShapes.AddPicture FileName:="C:\Presentation Slides\Slide" & Format(i) & ".JPG", LinkToFile:=False, SaveWithDocument:=True
                .Selection.MoveLeft Unit:=wdCharacter, Count:=1, Extend:=wdExtend
                .Selection.InlineShapes(1).LockAspectRatio = msoTrue
                .Selection.InlineShapes(1).Width = 203.05
                .Selection.InlineShapes(1).Height = 152.65
                .Selection.MoveRight Unit:=wdCharacter, Count:=2
                With .Selection.Font
                    .Name = "Times New Roman"
                    .Size = 8
                    .Bold = False
                End With
                .Selection.TypeText Text:=ActivePresentation.Slides(i).NotesPage.Shapes.Placeholders(2).TextFrame.TextRange.Text
                .Selection.MoveDown Unit:=wdLine, Count:=1
                .Selection.MoveLeft Unit:=wdCharacter, Count:=1
            Next i
        End With
    End If
    Set objWord = Nothing
End Sub 
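To produce the handout, display the macro dialog (Alt+F8 in PowerPoint) and run the SendPowerPointSlidestoWord macro – it calls WriteSlidestoJPG automatically, and the finished Word document is left open on the screen so that it may be reviewed and saved.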

—————

Anyone else have additional ideas for adding features to PowerPoint based Oracle Presentation Files?





On the Topic of Technology… 2

11 04 2011

April 11, 2011

(Back to the Previous Post in the Series) (Forward to the Next Post in the Series)

Roughly a year ago I wrote a blog article that described the process that I went through, wandering around a big chain electronics store in pursuit of an iPad.  I first wandered past the netbooks, and if that had not happened, I probably would have walked out of the store with an iPad rather than a Toshiba netbook.  The netbook still works reasonably well for those times when I need an ultra-portable full featured computer – however, I more than doubled the value of the netbook by installing 2GB of memory, upgrading Windows 7 Starter to Windows 7 Professional, and installing Microsoft Office 2010.

Well, nearly a year later with the launch of the iPad 2 underway, I again went for a walk through another big chain electronics store in search of an interesting electronics gadget.  I found the iPad 2 display and experimented with one of the two units that was on display.  My first reaction was a remark about the snazzy screen effects when opening applications – an impressive feature.  The iPad 2 reminded me a lot of the iPad that I experimented with a year earlier – it is a media consumption device.  Since I still have a Verizon wireless plan for Internet access, with steep overage charges, an Internet media consumption device would still have limited usefulness for me.  The iPad 2 also still offers no Flash support, which would limit its usefulness for accessing Flash-based sites (such as the Oracle support site).

I wandered around the electronics store a bit more and found another tablet tucked away in the corner with a couple of laptops – a Motorola Xoom with 3G wireless.  My first reaction to the less than snazzy screen effects (compared to the iPad 2) was simply – is that all there is?  I experimented with the tablet for a couple of minutes.  Neat satellite view of the store on the little tablet, and wow is it fast updating the screen.  Moderately neat book application.  Sweeping my finger across the screen… that’s kind of neat – I can’t do that on the netbook (the netbook and Xoom have similar screen dimensions and aspect ratio).  Viewing web pages seems to work, even if the websites deliver the mobile versions of the web pages.  Interesting, or as useless as the iPad 2 for what I would do with it?  I left the store empty handed.

I did a bit more research on the Motorola Xoom tablet.  It seems that not only does the Xoom have a 5 megapixel camera on its backside that records 720p video, but it also has a high resolution forward facing camera.  Oh, it supports Flash also, and is apparently the particular tablet model that Google used during development of the new Android 3 (Honeycomb) operating system.  The information that I had found on the Internet suggested that the Xoom was less of a media consumption device than an iPad 2, and more of a content producer device.  Interesting, but can it run Oracle Database?

Considering that flyers and articles about various tablets (iPad, Xoom, BlackBerry, Dell, etc.) were stacking up on my desk at work, I thought that I would try an experiment:

Zooming in on the center of the above picture, we see that the screen on the Xoom is displaying the Flash-based My Oracle Support site.  As of yet I have not determined how to display the on-screen keyboard on demand, so I can’t yet type into the text entry fields:

Flash works, and I am able to see the non-mobile, non-limited versions of websites.  So, what is the killer application for the Xoom?  Well, it is hard to beat a talking cat for $1.00:

So, the question again, can it run Oracle?  Well, I think that this picture answers that question (click the picture to zoom in on the guy who is waving from the boat):

If it were not for the crashing applications, including the Android Apps/Books Market and SpeedView (uses the Xoom’s built-in GPS to display various vehicle speed statistics), the device would be nearly perfect as a productivity tool that also is able to consume media (the TuneIn Radio application is quite handy, and with an HDMI cable plugged into a 7.1 receiver, it sounds very good).  The QuickOffice application, at $15, is certainly less expensive than the Office 2010 that is installed on the netbook; the PowerPoint document editor in QuickOffice seems to have difficulty with complexly formatted PowerPoint files, but it works OK as a viewer (as long as there is no animation).  The PDF viewer that is part of QuickOffice works, but it is not great.  No problem, just a couple more dollars for the ezPDF reader and I am able to view various PDF books and the Oracle documentation library.  Attaching a $12 HDMI cable will even allow the screen to be quickly projected onto the TV, while attaching the included USB cable allows organizing the various files loaded onto the Xoom into folders and easily renaming those files.

So, is it worth $600?  If you need a high definition video camera, a 5 megapixel camera, a GPS, easy/fast speech recognition, inexpensive applications, and the ability to view just about any web page in a light-weight, mobile package, then the Xoom just might be worth $600.  If you have no use for those features, then the Xoom will make an excellent paperweight to keep the tablet flyers and news articles from blowing off the desktop.





On the Topic of Copyright

11 02 2011

February 11, 2011 (Updated February 25, 2011)

On August 26, 2010, just before cancelling my first three-month-old order for the “Oracle Tuning: The Definitive Reference Second Edition” book, I wrote the following in a blog article (side note: the second order for the book is now more than three months old, and I have no intention of cancelling that order):

In June 2010 an article appeared on another blog that identified three blog articles appearing on a blog operated by an Oracle Certified Master (OCM) that were copied nearly verbatim from three other blogs, thus saving the OCM a considerable effort in not only typing the verbiage appearing in the article, but also effort in actually building useful content.  Why spend four hours developing a blog article (this is probably the average time I spend), when a simple copy, paste, find-and-replace can be accomplished in a couple of minutes?  Those copies of articles clearly violated commonly understood copyright restrictions, and a quick search showed that the copyright violations extended far beyond the initial three articles which were mentioned (yes, I have screen captures and PDF copies of all that I identified).

So, why did I write the above paragraph?  For some reason I was recently browsing through some of the information about DMCA:

“Any material that was posted without the copyright owner’s authorization must be removed or blocked promptly once the service provider has been notified that it has been removed, blocked, or ordered to be removed or blocked, at the originating site.”

I realized that a random reader of this blog could potentially send a note to WordPress stating, under penalty of law, that they are in fact the owner of the material I just spent 2 hours, 4 hours, 8 hours, 16 hours – maybe even months – writing, and demand that the article be taken offline simply because I clearly stated that I quoted a couple of words from one of their articles while agreeing or disagreeing with that article.  Is quoting another source, while clearly stating that the other source is being quoted, in an effort to critique the quoted section an example of fair use?

I greatly dislike seeing people copying other people’s work and passing it off as their own work.  It takes considerable effort to put together many of my blog articles; it requires a good deal of past experience troubleshooting other problems; and once my blog articles are published, those blog articles are not complete without the valuable input provided by reader comments.  It is not uncommon for me to post a blog article with the sole intention of helping one or two people, with the knowledge that reader comments on the blog articles often take the articles to a completely different level, expanding the articles to help a much wider audience.  The article that is currently the second highest ranked article on this blog in the last 90 days is just one example of this approach in action. 

The Digital Millennium Copyright Act (DMCA), among other things, helps protect original, copyrighted content that is posted to the Internet to prevent that material from being republished for profit, or claimed as original, copyrighted content authored by the person who copied the previously copyrighted material.  The doctrine of fair use outlines four factors that determine whether or not a particular use of copyrighted material is considered fair use:

  • “The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
  • The nature of the copyrighted work
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole
  • The effect of the use upon the potential market for, or value of, the copyrighted work”

If you see your original, copyrighted content on an unauthorized site, file a DMCA take down notice.  If you do so, be aware that you will be swearing under penalty of perjury that you are, in fact, the owner of the copyrighted material.  Just for the entertainment value, I decided to see what the law states in Michigan about the penalty of perjury:

“Perjury committed in courts—Any person who, being lawfully required to depose the truth in any proceeding in a court of justice, shall commit perjury shall be guilty of a felony, punishable, if such perjury was committed on the trial of an indictment for a capital crime, by imprisonment in the state prison for life, or any term of years, and if committed in any other case, by imprisonment in the state prison for not more than 15 years.”

Translation, “You better know what the heck you are claiming when you file a DMCA take down notice and you sign, under penalty of perjury, that you are the copyright holder.”

Why all of this discussion of copyright and DMCA?  Some of you may have noticed that my recent article titled “SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?”, posted on February 6, 2011, has disappeared.  Why has it disappeared?  A DMCA take down notice was filed, and that notice in part stated the following:

——————–BEGIN NOTICE

Pursuant to the DMCA, please see this takedown notice.

I have confirmed that the site says “powered by WordPress.com”, and is hosted by WordPress.

This blog page http://hoopercharles.wordpress.com/2011/02/06/sql-performance-problem-awr-reports-query-is-1-in-elapsed-time-2-in-cpu-time-how-would-you-help/

Contains this material originally published on our forum:

(My modified version of the SQL statement posted by a user of an online forum who was seeking help with a performance problem)

This was copied from our forum here:

http://dbaforums.org/oracle/index.php?s=ec2b9e8d676465331d7fe6cd6e07fa0c&showtopic=20523

Where it was originally published in this format:

(SQL statement as posted by a user of an online forum who was seeking help with a performance problem)

Donald K. Burleson
CTO, Burleson Consulting

Kittrell, NC, USA 27544
(email address withheld)@remote-dba.net

I hereby swear that this content is copied from the BC DBA forum and that this page infringes on my copyright and it is not authorized.

Under penalty of perjury, I swear that all of the information contained in your Infringement Notice is accurate, and that I am the copyright owner of this material.

Signed: Donald K. Burleson, copyright owner

——————–END NOTICE

You are required to remove the specific content outlined in this notice and identified as the infringing material. If you would like to formally challenge the notice, please let us know so that we can provide you with further instructions.

Interesting… so Donald K. Burleson holds the copyright of anything posted to the forums on dbaforums.org?  So, did I remove the SQL statement that was posted by a user of the forum, or did I file a DMCA challenge stating that Donald K. Burleson is not the owner of that SQL statement?  Think about that for a moment: would I back down when trying to help someone solve a problem (falling under the doctrine of fair use), especially after readers of my blog article provided valuable information to further help the person experiencing the performance problem?  Would I back down when an action by Donald K. Burleson would impede the freely provided furthering of the Oracle community’s Oracle Database knowledge?  Think about that for a moment…

Finished thinking already?  I filed a DMCA challenge with the following text:

The quoted section mentioned in the DMCA take down notice was in fact copied from Donald K. Burleson’s forum, with minor changes to improve readability.  The copied text, however, does not infringe on Donald K. Burleson’s copyright.  The text that I copied was posted by one of the users of his online forum, and that text originated in the computer program titled “Patrol for Oracle” written by BMC Software (as indicated in the comments attached to the blog article).  The copied text is essentially a SQL statement that is submitted to a database server by the “Patrol for Oracle” program, and that SQL statement was exhibiting poor performance.  If anything, the rightful owner of the copyright for the copied text is BMC Software, and credit for the SQL statement currently appears in the comments section of the blog article.  A user posting a SQL statement (the alleged copied text) to a forum does not transfer the copyright for that SQL statement to the owner of the forum, especially if the user posting the SQL statement was not the original author of the SQL statement.

This is not the first time that Donald K. Burleson has falsely claimed ownership of copyright.

The attached “DMCA Challenge 2011-02-08.pdf” file is my signed notice that I did not infringe on a copyright owned by Donald K. Burleson.  The attached “DMCA Challenge 2011-02-08 Attachment.pdf” file shows my blog article content where the alleged copied text appears, and is followed on pages 11, 12, and 13 by the content where the alleged original text appears.

Please keep in mind that this blog is one that is dedicated to technical Oracle Database related discussions, and personal attacks do not belong here.  Attacking technical discussions containing faulty information, of course, is expected.  See the About page for more information.

——–

Edit February 25, 2011:

Donald K. Burleson, apparently realizing what it means to file a false DMCA claim under penalty of perjury, did not file a lawsuit to back up his DMCA claim to copyright ownership of the modified SQL statement that I posted in my February 6, 2011 article titled SQL Performance Problem, AWR Reports Query is #1 in Elapsed Time, #2 in CPU Time – How Would You Help?  His false DMCA claim held hostage one of my articles for a total of 17 days, during which time the article was not available for readers of this blog (for the record, I completely understand and agree with WordPress’ handling of this matter, where their processes require taking DMCA challenged articles offline for two weeks to allow the true copyright holder sufficient time to file a lawsuit).  False DMCA claims from Donald K. Burleson against my blog articles will not be tolerated, and this article will serve as evidence of past abuse, if necessary.





A Year Older, Any Wiser?

30 11 2010

November 30, 2010 (Updated December 1, 2010)

(Forward to the Next Post in the Series)

It feels a bit strange, but this blog went live a year ago today.  What was the motivation for creating the blog?  At the time I was contemplating creating a publicly accessible Oracle Database related blog, I already had a restricted-access Oracle blog, and if I was lucky I had maybe four regular visitors (I might have five regular visitors now, or two that hit the refresh button 2.5 times a day).  Before setting up this blog I made the following comment in the above linked OTN thread:

“Yes, a lot of time is spent putting together the test cases. Even more time would be required to put together blog articles that better describe what appears in the test. I do not know if I am able to justify the time involvement, but I will consider starting a blog. However, there are already a large number of very good Oracle blogs. Maybe if it were used just to preserve some of the test cases?”

I had commented something like this, in a previous OTN thread when someone suggested that I set up a blog:

There are already a large number of fantastic Oracle related blogs on the Internet, and every subtopic seemed to have been previously covered in great detail.  I could not imagine developing unique material that anyone would care to read.

I guess that I proved myself wrong, yet again.  I can think of five good articles in the last year that I published, although I might be imagining things.  So, for the five regular readers (or the 2 that hit the refresh button 2.5 times daily), what are the five good articles?

Wow, I spent (maybe wasted, had I not learned a lot in the process) a lot of time in the last year creating material for this blog.

——————-

Edit: December 1, 2010:

Some of the 400 to 800 non-regular (thus, those who want to know why something is as it is) visitors to this blog who visit daily may be wondering what the symbol means at the top right of my blog, and where I obtained the symbol.  I will leave the meaning of the symbol to your imagination.  So, where did I obtain the symbol?  It was created using a painting/image editing program that I created in the late 1990s.  Take a look:

The splash screen, created entirely in the program:

The starting point, not much unusual here:

Let’s use some of those mathematics abilities that the teachers said were so critical to know in life: let’s build a brush using a cosine wave:

Better yet, let’s use a sine wave for the brush and change the colors a bit:

Finger painting with the new brush – looking good, except that I keep scratching the canvas, leaving behind black marks:

Let’s play with some more mathematics to transform the picture:

We will use a spiral wave which progressively twists the image the further out from the center you go – converting each point to polar coordinates, then using sine and cosine to translate back into X, Y coordinates:
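For the curious, the heart of that transformation might look something like the following sketch (reconstructed from memory in VBA-style Basic, so the routine name, the pixel array representation, and the Atn2 helper are illustrative assumptions rather than the program’s actual code):

'Illustrative sketch only - the pixel arrays and the helper function are assumptions
Sub SpiralTwist(picSource() As Long, picDest() As Long, lngWidth As Long, lngHeight As Long, dblTwist As Double)
    Dim X As Long, Y As Long
    Dim dblDX As Double, dblDY As Double
    Dim dblRadius As Double, dblAngle As Double
    Dim lngSrcX As Long, lngSrcY As Long

    For Y = 0 To lngHeight - 1
        For X = 0 To lngWidth - 1
            'Offset of this pixel from the center of the image
            dblDX = X - lngWidth / 2
            dblDY = Y - lngHeight / 2

            'Translate into polar coordinates (radius and angle)
            dblRadius = Sqr(dblDX * dblDX + dblDY * dblDY)
            dblAngle = Atn2(dblDY, dblDX)

            'The twist grows with the radius, so the image progressively winds up
            dblAngle = dblAngle + dblTwist * dblRadius

            'Sine and cosine translate back into X, Y coordinates
            lngSrcX = CLng(lngWidth / 2 + dblRadius * Cos(dblAngle))
            lngSrcY = CLng(lngHeight / 2 + dblRadius * Sin(dblAngle))

            'Copy the source pixel, ignoring points that rotate off the canvas
            If lngSrcX >= 0 And lngSrcX <= lngWidth - 1 And lngSrcY >= 0 And lngSrcY <= lngHeight - 1 Then
                picDest(X, Y) = picSource(lngSrcX, lngSrcY)
            End If
        Next X
    Next Y
End Sub

'Two-argument arctangent, since the built-in Atn function cannot distinguish quadrants
Function Atn2(dblY As Double, dblX As Double) As Double
    Const PI As Double = 3.14159265358979

    If dblX > 0 Then
        Atn2 = Atn(dblY / dblX)
    ElseIf dblX < 0 Then
        Atn2 = Atn(dblY / dblX) + IIf(dblY >= 0, PI, -PI)
    Else
        Atn2 = IIf(dblY >= 0, PI / 2, -PI / 2)
    End If
End Function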

Let’s crank up the twist a bit more and see what it looks like – that certainly looks better than the previous pictures (I fixed the scratches in the canvas first):

But the earlier picture is still too plain, let’s add some lighting effects using, you guessed it, more mathematics:

That lower right corner looks a little boring, let’s roll it up using more mathematics:

Let’s do the same for the top left corner – that’s what mathematics are good for, rolling up the rug:

The last four pictures – pretty neat what can be done with all the mathematics we have worked so hard to forget over the years, too bad I never did anything useful with the painting/image editing program other than to use it as a teaching tool:
 
 





Interviewed for the November 2010 NoCOUG Journal

4 11 2010

November 4, 2010

Almost two months ago I was interviewed for an article that was to appear in an upcoming NoCOUG Journal newsletter.  It was a great opportunity to participate in the interview, and Chen (Gwen) Shapira provided me with a great set of thought provoking questions.  The interview went a bit long, so some of the questions and answers had to be trimmed from the printed NoCOUG Journal newsletter article (Iggy Fernandez, Chen, and the other responsible parties did a fantastic job of picking the best answers for inclusion).  The picture used in the article… let’s just say that of the 50 or so pictures that I reviewed for submission, it was the only one that effectively hid my gray hair, and in which I did not show a distressed (or puzzled) expression or appear to have just spent eight hours writing a critical book review (is that a hint that another book review is in the process of being put together?).

I have decided to post a couple of the questions and answers that had to be trimmed due to space constraints (with the exception of the war story from my experience, which drifted a bit too far off topic, and involved some rather gruesome gutting of computer internals).

———————————

You use Excel as an automation and visualization tool. You definitely have more Excel expertise than most Oracle DBAs? How did you get into the Excel business? Do you have some tricks to share with our readers?

Among other things, I was a computer/technology instructor prior to working for K&M Machine-Fabricating, Inc., and one of the topics that I taught was using Microsoft Excel for data analysis and data visualization.  Excel has a full-featured macro language that is quite similar to Microsoft Visual Basic 6.0 (and also the 2.0 version of Visual Basic), and is somewhat similar to the much simplified VBScript that is integrated into some ERP packages (Infor’s Visual Manufacturing, for example), some engineering applications (AutoCAD, at least for a period of time, supported VBScript), classic ASP web pages, and even the Windows operating system.  It is the macro automation, either from within Excel or another program (whether a Microsoft Office component, a custom developed program, or even an operating system script), that provides the most useful features in Excel for a DBA.

In June 2009 I delivered a presentation to a regional ERP user’s group that among other things showed how to leverage Microsoft Excel as a query and reporting tool for ERP data.  The Excel portion of the presentation required roughly 90 minutes and showed how to tap into the ERP data for reporting and visualization, as well as querying flat files, Windows event logs, and the WMI database that is built into all Windows operating systems since Windows 2000 was released.  It took about three months to prepare for that presentation, and the preparation involved reading three books about Microsoft Excel programming:  “Professional Excel Development”, “Excel Power Programming with VBA”, and “Pro Excel 2007 VBA”.  The first two books are great references but also are easily readable from start to end.  The third book really does not introduce anything that could not be discovered by a couple of hours of experimentation and Excel help documentation reading (the help file in Excel 2007 is much better than in earlier versions, and was improved again in Excel 2010).

As for tricks, turn on the macro recorder in Excel, make a couple of changes or set up a Microsoft Query that accesses an Oracle Database, and then examine the resulting macro.  This method of generating macros is a great starting point, as long as you keep in mind that the generated code is not necessarily the most efficient approach to producing the intended results, and there is a small chance that the generated code will not work as expected when it is later executed.  If you see an unusual command, click in the middle of the command and press the F1 key for additional information about the command.
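To illustrate where the recorder’s output can lead, the following is a minimal sketch of a macro that pulls the results of a query from an Oracle database into the active worksheet (the provider, data source name, credentials, and SQL statement are placeholders to be replaced with values appropriate for your environment):

Sub QueryOracleDatabase()
    Dim i As Integer
    Dim dbConnection As Object  'Late binding, so no ADO project reference is required
    Dim snpData As Object

    On Error Resume Next

    Set dbConnection = CreateObject("ADODB.Connection")

    'Placeholder connection string - adjust the data source, user, and password
    dbConnection.Open "Provider=OraOLEDB.Oracle;Data Source=MYDB;User ID=MyUser;Password=MyPassword;"

    If dbConnection.State = 1 Then  '1 = adStateOpen
        Set snpData = dbConnection.Execute("SELECT SYSDATE, USER FROM DUAL")

        If Not snpData.EOF Then
            'Write the column headings, then dump the resultset starting in cell A2
            For i = 0 To snpData.Fields.Count - 1
                ActiveSheet.Cells(1, i + 1).Value = snpData.Fields(i).Name
            Next i
            ActiveSheet.Range("A2").CopyFromRecordset snpData
        End If

        snpData.Close
        dbConnection.Close
    End If

    Set snpData = Nothing
    Set dbConnection = Nothing
End Sub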

Tell us about your favorite network tools. When do you recommend bringing out network tracing? What about other operating system knowledge, do you feel that DBAs would benefit from looking at more low-level tracing and measuring?

For the rare times when I need to perform network analysis, I typically reach for ping, tnsping, or tracert (traceroute) first.  If I am working on a computer running Windows I might also run either nbtstat or netstat with various parameters.  One of the little known key network tools is actually the event log found in a managed network switch, which might provide a clue about misconfigured switch or server settings, failing hardware or failing network links, faulty wiring, and a number of other problems.  For automated network monitoring it is hard to find a better free tool than Nagios.  For a DBA, network monitoring with a packet capture tool such as Wireshark (available on Windows, Unix/Linux with an X Window environment, and Mac) or tcpdump probably should only be attempted when a 10046 trace at level 8 or 12 finds that only a small percentage of processing time is confirmed to be server-side database related activity, essentially when most of the elapsed time is in SQL*Net type wait events.  By time-sequencing a Wireshark capture with a 10046 trace file, it might be possible to determine why, for example, every time the client submits a specific SQL statement and a FETCH call is found in the 10046 trace file there is a two-second delay before the next entry in the 10046 trace file – essentially answering if there was a problem/slowness in the network communication, or if the client was busy using the retrieved data.  If the Wireshark capture shows that a problem was not found in the 2 second problematic interval, then a switch to client-side tracing with Wireshark and/or another tool such as Microsoft’s Process Monitor, Microsoft’s Spy++, SQL*Net tracing, or other client-side tracing utility might help.
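To make the 10046 trace portion of that answer a bit more concrete, a session-level trace might be switched on from the client like this (a minimal sketch, again with a placeholder ADO connection string; the ALTER SESSION commands are standard Oracle syntax):

Sub Enable10046Trace()
    Dim dbConnection As Object

    On Error Resume Next

    Set dbConnection = CreateObject("ADODB.Connection")
    dbConnection.Open "Provider=OraOLEDB.Oracle;Data Source=MYDB;User ID=MyUser;Password=MyPassword;"

    'Tag the trace file name so that it is easy to locate on the database server
    dbConnection.Execute "ALTER SESSION SET TRACEFILE_IDENTIFIER = 'NET_TEST'"

    'Level 8 writes wait events to the trace file; level 12 adds bind variable values
    dbConnection.Execute "ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8'"

    '...execute the SQL statements of interest here, while Wireshark captures the packets...

    dbConnection.Execute "ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF'"
    dbConnection.Close
    Set dbConnection = Nothing
End Sub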

———————————

I hope that you enjoy reading the interview questions and answers; there are several other great articles in the November 2010 NoCOUG Journal.





Technical Review of MOTS

17 09 2010

September 17, 2010 (Updated September 18, 2010)

(Disclaimer: To satisfy the one person that might wonder if my account of this event should be “considered credible” I must admit up front that I had some involvement in the planning of the event.)

I will make a prediction that an event like this is unlikely to happen again any time in the next couple of years.  The two to one ratio of attendees to OakTable members made it quite easy for me to talk to knowledgeable people from around the region, and around the world – both OakTable and non-OakTable members were quick to share their knowledge, experiences, and knowledge acquisition techniques.  I even bumped into a couple of people who admitted to reading this blog.

Let’s first start with what I did not like about the event:

  • The seat bottoms in the chairs in the conference rooms were a bit hard.  It seems that I had the constant urge to get up and fix something, yet there was nothing to fix, and I would have risked missing something that was quite useful if I did look away for a moment.
  • I identified three errors in the presentation slides.  I was not able to pick out the errors until the presenter said something like, “Oh, by the way there is a typo in the second spelling of this parameter on the slide,” or “that should be REMOTE_LISTENER, not LOCAL_LISTENER,” or “that’s not supposed to happen.”  It is refreshing to see that people at this level are able to point out mistakes, and then spin that mistake into a teachable moment.
  • The food and beverages.  Yes, there was an abundant supply of high quality food, and a wide variety of beverages (some only available after the sessions).  But where was the Diet Cherry Coke?  Pepsi products are OK unless you have to think clearly about performance problems (this is my excuse for being slow in spotting errors).
  • While there were only three simultaneous presentations, without fail there were two sessions that I wanted/needed to see at the same time, and a third session that probably would have broadened my level of understanding into entirely new directions.
  • Walking up and down the stairs rather than using the elevator – the hotel is not nearly tall enough to walk off all of the performance problems that could result from the third bullet point (I made sure to stop running the stairs to avoid compulsive tuning disorder).

As you can probably tell from the above list, it was hard to find things not to like about the event.  Carol Dacko did a fantastic job keeping everyone on the planning committee for the MOTS event on task, pushing toward an educational experience for all attendees.  All of the members of the MOTS committee contributed a substantial amount of time and effort (and travel miles) over the last seven to nine months. 

Before I describe what I liked about the event, I will first list the presentations that I managed to attend (I highly recommend these to anyone who has the opportunity to see these again in the future):

  • Moans the Magnificent by Mogens Nørgaard and Cary Millsap.  An Oracle comedy routine similar to the Carnac series by Johnny Carson.  A great way to begin the event, although I think that one or two of the jokes might not have made sense to most of the audience members until a couple of hours later.
  • How to Write Efficient SQL by Jonathan Lewis.  Using cutting the grass as an analogy, Jonathan explained why it was important to be familiar with the data to be queried, to ask if a better execution plan exists, and how to help the optimizer find the better execution plan.
  • Battle Against Any Guess by Alex Gorbachev.  Why know when you can guess?  Not even educated guesses are good enough?  Alex founded the BAAG party and wrote a chapter in the Expert Oracle Practices book on the same topic.  Rumor is that there is a BAAD party around the corner if the NoSQL ACID-free movement gains momentum; roll-aids will be used rather than undo.  But once again, this is just a rumor.
  • Back to Basics – Choosing the Starting Point of Performance Tuning and Troubleshooting Wisely by Tanel Poder.  This is one of those sessions that I had hoped someone would put together.  It seems every time I see a person on the OTN forums propose the use of AWR (or Statspack) to identify a problem found in a specific session, I want to reach out and say to that person, “a system with a loose screw is rarely fixed by putting the screw into a nail gun – use the right tool for the job, and avoid what does not work.”  Tanel’s presentation covered that issue and many others.  His slides had a great deal of detail, much more than could be discussed in the 50-minute time period.  His slides may be downloaded from here.
  • CBO Fundamentals: Understanding the Different Modes of System Statistics by Randolf Geist.  Highlighting the evolution of the cost based optimizer from using the Oracle 8i OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING parameters to reduce index costs, onto the use of system (CPU) statistics which effectively increase the cost of full table scans and consider the expected CPU utilization as a component of access path cost.
  • Thinking Clearly About Performance by Cary Millsap (part I and II).  It is hard to describe this two hour presentation, other than to say that it is one of the best presentations offered at the symposium.  The whitepaper may be downloaded from here.
  • SQL Baselines, Adaptive Cursor Sharing and Automatic Cardinality Feedback by Jože Senegačnik.  The presentation described interesting features that affect Oracle’s optimizer’s ability to repeatedly use the same execution plan in 10g, 11g R1, and 11g R2.
  • Everything a DBA Needs to Know about TCP/IP Networks by Chen Shapira.  Chen stated up front that the presentation could not possibly contain everything that a DBA needs to know.  Instead, she described four types of network-related problems that she encountered and the tools that are available to help with analysis.  Her presentation ties into my seven part blog series on network monitoring.
  • Interpreting Execution Plans by Christian Antognini.  Christian effectively covered the information that should have appeared in the Oracle documentation about how to read execution plans.  One hint from the presentation: the most indented line in the execution plan is not necessarily the starting point of the execution plan (see the short example following this list).  A very detailed (better than any other resource) description of reading execution plans is found in Christian’s Troubleshooting Oracle Performance book.
  • The Beginner’s Guide to Becoming an Expert by Jonathan Lewis.  This was a presentation with a single slide – the message was not in the slide, but in the scripts: creating models of problems, recognizing (and investigating) unexpected results, and comparing the results between Oracle releases to understand how Oracle Database behaves when certain conditions are present.  While I do not have 2,000 scripts, I would like to hope that the test cases found on this blog are constructed to serve the same purpose – to help myself (and others) understand why “A”, “B”, and “D” happen, while “C” only happens when “E” is introduced.  It was definitely a bitmap smashingly good session.  Does the method work?  My personal experience says yes.
  • Understanding LGWR, Log File Sync Waits, and Commit Performance by Tanel Poder.  You might be curious about what happens when your CPUs are pushed toward 100% utilization.  You might be curious why placing your redo logs on SSD drives may not help.  You might be curious why SQL_IDs cannot be associated with the log file parallel write wait event, as was stated to be possible on page 322 of one of the books that I reviewed.  If you missed this presentation, you are probably still curious.
  • Why Isn’t Oracle Using My Index by Jonathan Lewis.  Is the clustering factor a factor (see the sample chapter from the “Cost-Based Oracle Fundamentals” book)?  Is ASSM a factor?  Are incorrectly set parameters the problem?  Are the recorded performance statistics about the server a factor?  Are missing statistics for virtual columns the problem?  Is a function applied to the indexed column (I don’t think that this was mentioned – the time ran out before the presentation finished)?  Interesting examples throughout.
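
Two of the sessions above (Randolf’s and Christian’s) lend themselves to a quick experiment at home.  Below is a minimal sketch using the EMP and DEPT tables from the demo SCOTT schema (substitute your own tables as needed) to generate and display an execution plan:

EXPLAIN PLAN FOR
  SELECT e.ename, d.dname
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno
    AND  e.sal > 1000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- A child operation generally executes before its parent operation, but the
-- most indented line in the plan is not necessarily where execution starts.
-- With system statistics collected, the cost reported for a plan line is
-- roughly:
--   (single block reads * SREADTIM + multiblock reads * MREADTIM
--     + CPU cycles / CPUSPEED) / SREADTIM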

What did I like about MOTS?

  • Making a run to Kinkos (local photocopy center) at 12:45 AM to run off copies of the agenda for the next morning.
  • Setting up a 13 drive RAID array using USB keys plugged into the laptop – the presentations had to be transferred onto the fancy real wood USB keys.  The MOTSCopy.doc file (manually rebuild the batch files from its contents) contains the scripts that I used for the copy process, which wrote to the USB keys in parallel when the CopyAll script was executed (see below).
  • Having a chance to meet for the first time a large number of OakTable members.  It is rather remarkable that the OakTable members traveled around the world on their own dime to offer this training in Michigan.  I think that says something about the commitment of the OakTable members to learn Oracle Database details thoroughly, and then openly share that knowledge – just for the sake of helping others.
  • There were no sloppy, poorly assembled, or untested presentations.
  • The speed with which OakTable members diagnose and resolve problems – for example, posting videos and pictures (and more) from the event.
  • And… the list goes on, and on, and on. 

Any thoughts from other attendees? 

Hopefully, more of the pictures from the event and more of the presentations will make their way to the Internet soon.

——————————–

WordPress is currently refusing to deliver the Word documents and the double-extension attachments.  The five script files that I used to transfer the presentations to 13 USB keys simultaneously follow (only the CopyAll.bat file needs to be executed):

CopyAll.bat File Contents:
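REM Launch the four copy scripts in parallel command windows;
REM each window writes to three or four of the 13 USB keys in turn.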
START copy1.bat
START copy2.bat
START copy3.bat
START copy4.bat

Copy1.bat File Contents:
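REM Copy the presentation files from C:\MOTS to each assigned USB key in turn.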
xcopy C:\MOTS\*.* e:
xcopy C:\MOTS\*.* i:
xcopy C:\MOTS\*.* m:
xcopy C:\MOTS\*.* q:

Copy2.bat File Contents:
xcopy C:\MOTS\*.* f:
xcopy C:\MOTS\*.* j:
xcopy C:\MOTS\*.* n:

Copy3.bat File Contents:
xcopy C:\MOTS\*.* g:
xcopy C:\MOTS\*.* k:
xcopy C:\MOTS\*.* o:

Copy4.bat File Contents:
xcopy C:\MOTS\*.* h:
xcopy C:\MOTS\*.* l:
xcopy C:\MOTS\*.* p: 
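
(A note for anyone reproducing this setup: as written, xcopy copies only the files in the root of C:\MOTS.  If the presentations had been organized into subdirectories, switches such as /E /I /Y – copy subdirectories, assume the destination is a directory, and suppress overwrite prompts – would likely be needed, for example: xcopy C:\MOTS\*.* e: /E /I /Y)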




Book Reviews and DMCA – No Longer Curious, Order Cancelled

26 08 2010

August 26, 2010

In June 2010 an article appeared on another blog that identified three blog articles appearing on a blog operated by an Oracle Certified Master (OCM) that were copied nearly verbatim from three other blogs, thus saving the OCM considerable effort in not only typing the verbiage appearing in the articles, but also in actually building useful content.  Why spend four hours developing a blog article (this is probably the average time I spend), when a simple copy, paste, find-and-replace can be accomplished in a couple of minutes?  Those copied articles clearly violated commonly understood copyright restrictions, and a quick search showed that the copyright violations extended far beyond the initial three articles that were mentioned (yes, I have screen captures and PDF copies of all that I identified).

So, why did I write the above paragraph?  For some reason I was recently browsing through some of the information about DMCA:

“Any material that was posted without the copyright owner’s authorization must be removed or blocked promptly once the service provider has been notified that it has been removed, blocked, or ordered to be removed or blocked, at the originating site.”

I realized that a random reader of this blog could potentially send a note to WordPress stating that they are, in fact (under penalty of law), the owner of the material I just spent 2 hours, 4 hours, 8 hours, 16 hours – maybe even months – writing, and demand that the article be taken offline because I clearly stated that I quoted a couple of words from one of their articles while agreeing or disagreeing with that article.  Is quoting another source, while clearly stating that the other source is being quoted, in an effort to critique the quoted section, an example of fair use?

“Section 107 contains a list of the various purposes for which the reproduction of a particular work may be considered fair, such as criticism, comment, news reporting, teaching, scholarship, and research. Section 107 also sets out four factors to be considered in determining whether or not a particular use is fair:

  • The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
  • The nature of the copyrighted work
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole
  • The effect of the use upon the potential market for, or value of, the copyrighted work”

Interesting… as some of you may have noticed, the Oracle Database book reviews that I wrote in the last two years attempt to be very thorough, precise, and specific – many times down to a page number and a quoted sentence or two.  In my view such quotes clearly fall under the umbrella of fair use (specifically criticism, comment, teaching, and research).

On June 10, 2010 I posted a blog article where I mentioned having three Oracle Database books on order that I intended to review just as I had with many of the other Oracle Database books that I purchased in the past.  Two of the books that I ordered, “Oracle Performance Firefighting (fourth printing)” and “Expert Oracle Database Architecture: Oracle Database Programming 9i, 10g, and 11g Techniques and Solutions, Second Edition” arrived and were reviewed, while the book that was supposed to ship on June 1, 2010 currently shows the following status on Amazon.com:

Let’s take a look at that order:

So, in summary, Amazon reports that the book has yet to be released, and if it ships in September it will arrive three months later than originally expected.  On August 17, 2010 I received two unsolicited emails from someone at that book publishing company asking me to submit any errata that I find in the second edition of the book so that the problems may be addressed by an OCM who is also an Oracle Ace, after I write up the book review – the person also mentioned that, by the way, the book keeps selling out.  Nice, except that I now have a slight problem.  My reviews typically take between one and two hours to write, and when agreeing with or disagreeing with a section of a book I try very hard not to lose the precise message of that section of the book – that often means that it is necessary to directly quote a sentence or two of the book that is being reviewed.  In theory, simply quoting a section of a copyrighted work might be grounds for a DMCA takedown of the content, thus rendering an hour or two of writing lost for a period of time.  Would the author of this book that is now three months delayed try a DMCA takedown if the review is not completely favorable?  I don’t know the answer.

In late May or early June I wrote the following about why I ordered the book that has yet to ship:

I was curious, so I spent good money on the book.  For most of the Oracle books that I have read in the last two years I have posted detailed reviews of the books on Amazon.  The reviews list what I thought was good about the book, what I thought was great about the book, errors/errata found in the book’s chapters, as well as misc. points about the book as it is read.  Some of my reviews identified a dozen or more errors in a book, which is a significant problem with some of the Oracle books on the market.  The errors must make it difficult for beginners to learn about Oracle Database.

How long will the review of the “Oracle Tuning: The Definitive Reference Second Edition” book stretch?  I am not sure, but it appears that I could write about a 4 page review on the first 27 pages of the first edition of this book – I sincerely hope that some/most of those issues were addressed in the second edition.  I want to make certain that I obtain a good value from the $44 that I spent on the book.  I do not think that it is too much to ask for accuracy from a published book, especially when the book costs more than $40.  Some of the pages in the chapters that I co-authored in “Expert Oracle Practices” required more than 8 hours of research, typing, verification, testing, proofing, re-testing, and formatting per page.  I believe that Randolf spent roughly the same amount of time on the pages that he wrote for the chapters.  I suspect that many of the very good books that are on the market for Oracle also required about the same amount of time per page.  Certainly, it cannot be too much to ask that other book authors put effort into testing and verifying what they publish?  The author’s website states about the “Oracle Tuning: The Definitive Reference Second Edition” book: [edit: feel free to read the book's description on the publisher's website].  So, I will be looking for the portions of the book’s contents that are comprehensive and targeted at the senior Oracle DBA.

After waiting three months, I am no longer curious about the book – so I just cancelled the order.  Was I serious that I could write a four page review on the first 27 pages of the first edition of the book?  I heard someone shout “show me your cards.”  In late May and early June I read the first 27 pages of the first edition using the Google books site, and I made notes as I read.  These are my notes, quoting a minimal amount of the book to fall under the umbrella of “criticism, comment” of fair use:

  • Page 9 (now page 11) states “For example, if the AWR shows that the demands on the shared pool become very high between 1:00 pm and 2:00 pm, the DBA might trigger a dynamic decrease of db_cache_size and a corresponding increase of the shared_pool_size parameter during this time period.”  If Oracle’s buffer cache is using the memory specified by DB_CACHE_SIZE, the size value specified for DB_CACHE_SIZE cannot be decreased without first manually flushing the buffer cache – and flushing the buffer cache to allow the shared pool to grow for an hour is a bad idea (see the example following this list).  It is important to keep in mind that when the SGA_TARGET parameter is specified, the values for DB_CACHE_SIZE and SHARED_POOL_SIZE specify the minimum values for those parameters.
  • Page 10 is an advertisement for the WISE package – free copy is apparently available with the book.
  • Page 11, when describing examples of poor schema design, suggests that databases using extensive data normalization to minimize data redundancy is a poor design because it forces Oracle to perform unnecessary table joins.
  • Page 11 (now page 12) states, “If poorly designed PL/SQL is encountered, users might be able to tune the database by using array processing such as bulk collect, forall and ref cursors…”  As of Oracle Database 10g, Oracle automatically array fetches 100 rows at a time in cursor FOR loops, making it much less likely/necessary that a ‘user’ will re-write the PL/SQL to take advantage of bulk collect’s slightly better performance.  http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1203923200346667188
  • By page 13 the book has recommended the use of AWR, various AWR tables, and the SQL Tuning Advisor without mentioning that the Diagnostics Pack must be licensed for access to AWR information, and that the Diagnostics and Tuning Packs must be licensed for access to the SQL Tuning Advisor.
  • Page 14 (now page 25) states “The first tasks when tuning a database when tuning a database are to identify the external bottleneck conditions, which may include: CPU Bottleneck,… RAM Bottleneck,…  Network Bottleneck, … and Disk Bottleneck.”  While it is important to check these items, I suggest that these probably would not be the first items to check when tuning the database instance’s performance.  Yes, checking whether or not the CPUs are overloaded (or heavily loaded) is important, but maybe it would be better to determine if the overload is causing a measurable negative impact on performance by first checking the delta values of the various wait events, system statistics, and system time model statistics before deciding to check, for instance, whether there are network bottlenecks.
  • Page 14 (now page 25) states “Whenever the run queue exceeds the number of CPUs on the Oracle server in the absence of high idle times, the system is said to be CPU-bound.”  The first question with this statement is what qualifies as a “high idle time”?  Secondly, a server may become CPU bound long before the CPU run queue exceeds the number of CPUs in the server, and long before 90% CPU utilization is reached.  The author mentions that CPU consumption can be reduced by tuning SQL and reducing library cache contention – that is true, but much more could be said about the topic.  For example, would it make a difference if the database is used for OLTP rather than data warehouse activities?  The remedies listed on page 15 are just to add additional CPUs to the server or to disable a “high CPU consumer” such as parallel query.  No warning is provided that adding CPUs to the server could be expensive from an Oracle licensing perspective and a system uptime perspective.
  • Page 15 (now page 25) states that “Large amounts of Oracle*Net traffic contribute to slow SQL performance.”  The book does not describe what is considered “large amounts”, nor what to do about addressing the network bottleneck.
  • Page 15 (now page 25) under the heading of disk bottlenecks the only issue identified is “using RAID5 for high update systems.”  The adjective “high” is never clarified.
  • Page 15 (now page 25) states, while discussing potential solutions for over-allocation of the server’s RAM, that the options for “excessive wrapping id to add more RAM, reduce the size of Oracle’s SGAs, or turn-on Oracle’s multi-threaded server.”  Are these the only options, or could more suggestions be recommended?  For example, “size of Oracle’s SGAs” implies that more than one instance is running on the server – would it make sense to reduce the number of instances on the server?  Are there any reasons not to implement a shared server configuration (what the author refers to as “multi-threaded server”)?  What about tuning the SQL?
  • Page 15 (now page 26), the author appears to misuse the term “network latency”, or at least stretch the meaning of latency as it applies to a network.
  • Page 16 includes an advertisement for WISE.
  • Page 16 (now page 27) states “In a disk-bound database, the majority of the wait time is spent accessing data blocks.”  In a non-disk-bound, well performing database instance the majority of wait time could very well be spent reading data blocks – in fact, those events should be among the top 5 wait events, otherwise there probably is a significant performance problem.  It is also quite possible that the system could be disk-bound when the majority of wait time is spent _writing_ to the redo logs, archiving redo logs, and updating data/undo/temp files.
  • Page 16 (now page 27), the Top 5 Timed Events, demonstrating a disk constrained database, shows 44 waits for the CPU.  Waits for the CPU cannot be reported in the Top 5 Timed Events – this data in the report is bogus.  The report also shows that the average single block read time is 2.75 seconds, while the average multi-block read time is 0.127 seconds (127ms) – this data in the report is also likely bogus.  The report also shows that 1,363 seconds were lost in 673 waits (average of 2.03 seconds) to the “library cache load lock” wait – if that wait appeared in the top 5 report, even in a report showing the single block read time is 2.75 seconds, I think it would be a good idea to start with the “library cache load lock” wait since that is one of the waits that indicates a severe problem if it appears in the top 5 wait events.
  • Page 17 (now page 28) states “CPU enqueues can be observed when the CPU run queue exceeds the number of CPUs on the database server, and this can be seen by…”  The effects of CPU enqueues may be observed long before the CPU run queue exceeds the number of CPUs in the server, whether it be longer average waits for the log file sync wait event, increased waits for latches, increased wait time for single block or multi-block reads, or any number of statistics in the system time model (V$SYS_TIME_MODEL, V$SESS_TIME_MODEL).
  • Page 17 (now page 28) states “High CPU usage will be reported as a top 5 timed even (sic) in any AWR report as shown below”  First, the Top 5 Timed Events report shows that there were 4,851 waits for CPU time – a Top 5 report cannot show the number of waits for the CPU – this data in the report is bogus.  Second, the CPU is a limited “wait” resource with an upper utilization limit equal to the number of CPUs in the server multiplied by the elapsed number of seconds, while the other foreground wait events, for the most part, have an upper limit of the average number of sessions (plus parallel related additional processes) connected to the instance multiplied by the elapsed number of seconds.  It is quite possible that a server experiencing excessive CPU usage may not cause the “CPU time” to appear in the top 5 timed events.  Third, we do not know how many CPUs are in the server – would it make a difference if there was only a single CPU compared to 96 CPUs?  Fourth, we have no concept of the elapsed time represented by the report – what if the elapsed time for the report is eight hours?  Fifth, if the single block read time averages more than a second, shouldn’t the investigation start with the IO system?  Sixth, the most that can be determined from the top 5 timed events is the average utilization, number of waits, and number of timeouts – why are the number of timeouts not shown?  From the top 5 timed events it is difficult, if not impossible, to determine whether there was a brief period of intensive activity followed by a long period of almost no activity, or whether the average activity levels were fairly constant in the time period.
  • Page 18 (now page 28), the Top 5 Wait Events (why the switch from Top 5 Timed Events?) is labeled as a mockup – probably a good idea that this is labeled, because the report shows 94,754 seconds where the server was waiting to send data across the network while single block and multi-block reads totaled only 85.75 seconds.  That is why you do not use a satellite link for the network backbone.
  • Page 19, several grammar errors to this point in the book “It not elegant, or even correct, but it’s a common choice by managers who need quick, low-risk performance improvements…  Individual Program Global Areas (PGA’s for each session) are a set of running programs that do work for the instance and are referred to as processes.”
  • Page 20 (now page 30), describes db_cache_size as “the number of data buffers to allocate for the instance.”  The parameter does not specify the number of data buffers (blocks?), it specifies the amount of memory to allocate to the buffer cache (or the minimum amount in the event the SGA_TARGET is specified).
  • Page 21, the DB_FILE_MULTIBLOCK_READ_COUNT parameter is listed among the parameters that cannot be changed without bouncing the instance – that is not correct.
  • Page 21 (now page 30), states “parallel_automatic_tuning:… since parallel full-table scans are very fast, the CBO will give a higher cost to index access and be friendlier to full-table scans.”  This is simply incorrect.  The optimizer does not increase the cost of index access when costing parallel access paths that use full table scans – it lowers the cost of the full table scan relative to the parallel degree.  Second, parallel full table scans are not automatically very fast, especially if the IO subsystem is bottlenecked.
  • Page 21 (now page 30), the description of the OPTIMIZER_INDEX_COST_ADJ parameter is very weak, “this parameter controls the relative costs of full-table scans versus index scans.”  It would be much better to state that the parameter specifies the percentage of the index access cost to be retained (see the example following this list).  Setting the parameter to a very low value could cause the optimizer to select not only an index access path rather than a full table scan, but also the wrong index, due to cost rounding problems as the value of the OPTIMIZER_INDEX_COST_ADJ parameter approaches 1.  Ideally, since at least Oracle Database 10.1 this parameter should be left at the default value of 100 due to the Oracle optimizer’s use of system statistics by default.
  • Page 21 (now page 30), the description of the OPTIMIZER_INDEX_CACHING parameter is vague – the book doesn’t indicate when the value of the parameter is used in cost calculations, and when it is not used.
  • Page 22 (now page 34) states “In Oracle 10g, statistics collections are automated, but users may still need to selectively add histograms and other specialized optimizer statistics.”  The book does not state what triggers the statistics collection for a specific object, at what time the statistics are collected, that histograms (maybe excessively) are automatically generated, nor the potential problems that might be encountered after the statistics collection.
  • Page 23 (now page 35) states “SQL will run fastest when the first table joins deliver the smallest result set.”  This is stated as an absolute, which absolutely will not be correct all of the time.
  • Page 23 states “the new Oracle10g SQL profile utility will allow changes to execution plans without adding hints.”  This is untrue – a SQL profile is a series of hints that are automatically generated.
  • Page 25 shows operating with OPTIMIZER_MODE set to CHOOSE and deleting a table’s statistics to immediately return the system to acceptable performance.  No warning that this caused the optimizer to switch to the RULE based optimizer, and no demonstration of how statistics should be collected.
  • Page 25 “there was a serious degradation in SQL performance immediately after the implementation of partitioned _tablespaces_ in a 16-CPU Solaris 64-bit Oracle 9i database.”  Tables and indexes may be partitioned – tablespaces are not.
  • Page 26 shows a grammar error “Note that is the full-table scans are parallelized with 15 parallel query processes to speed up the statistics collection”
  • Page 26 – amazing performance improvements “tripled the performance” “400 percent performance improvement” – no real suggestion of how the silver bullet corrected a problem to cause such a significant increase in performance, so that readers can experience similar performance improvements in their database instances.
  • Page 26 – assumes a data warehouse was set up without a PGA_AGGREGATE_TARGET parameter value and with an unspecified value for SORT_AREA_SIZE, so that SORT_AREA_SIZE remained at its default value, and only a 50% overall improvement was experienced by setting the parameter to 1MB.  The book did not state that, when SORT_AREA_SIZE is not specified and PGA_AGGREGATE_TARGET is not used, SORT_AREA_SIZE defaults to 64KB.  The book also did not describe how the 1MB value was determined to be the best value, nor what impact changing the parameter’s value has on memory availability in the server.
  • Page 27 – a code section shows single quotes around what is apparently a bind variable defined as VARCHAR2.  The single quotes should not appear around that bind variable.  The author states that a concatenated index on the CUSTOMER_STATUS and CUSTOMER_AGE columns resulted in a 50 fold performance improvement and a 6 fold reduction in disk IO.  The book does not state how or why, so I wonder how that could happen when the CUSTOMER_STATUS column likely has few unique values and an inequality is used on the CUSTOMER_AGE column.  Was a bitmap or B*tree index used?  Was CUSTOMER_STATUS or CUSTOMER_AGE specified as the leading column in the index?
  • Page 27 “While it was not clear why the CBO was not choosing the index, the query with the index hint ran almost 20x faster.  After acting fast and running a script against v$bh and user_indexes, the DBA discovered that approximately 65 percent of the indexes were currently inside the buffer cache.”  There are a couple of problems with this statement.  The statement is not clear whether the DBA found that at least one block from roughly 65% of the indexes found in USER_INDEXES was in the buffer cache, or if the DBA found that roughly 65% of all blocks belonging to the indexes found in USER_INDEXES were in the buffer cache.  The second problem is why did the DBA check USER_INDEXES and not DBA_INDEXES – if the DBA checked USER_INDEXES, he would only see the indexes that belong to the user account that the DBA used to connect to the database (unless he used ALTER SESSION SET CURRENT_SCHEMA to change to another schema).  The third problem is that there may be multiple, possibly many consistent read versions of an individual block in the buffer cache – how did the DBA factor out the multiple copies of blocks when he performed his calculations?  The fourth problem is that the DBA did not consult a 10053 trace for a hard parse of the SQL statement to understand why the expected index was not selected.
  • Page 28 “Based on similar systems, the next step was to lower optimizer_index_cost_adj to a value of 20 in the hopes of forcing the CBO to lower the relative costs of index access.”  At the same time the OPTIMIZER_INDEX_CACHING parameter was set to 65.  What are the problems with this approach?  According to the Oracle documentation, the “OPTIMIZER_INDEX_CACHING parameter lets you adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.  The cost of executing an index using an IN-list iterator or of executing a nested loops join when an index is used to access the inner table depends on the caching of that index in the buffer cache. The amount of caching depends on factors that the optimizer cannot predict, such as the load on the system and the block access patterns of different users.  You can modify the optimizer’s assumptions about index caching for nested loops joins and IN-list iterators by setting this parameter to a value between 0 and 100 to indicate the percentage of the index blocks the optimizer should assume are in the cache. Setting this parameter to a higher value makes nested loops joins and IN-list iterators look less expensive to the optimizer.”  The parameter change is global, and it does not apply to all index type accesses – it primarily makes nested loop joins appear more favorable than hash joins.  What criteria determined that 20 was the best value for OPTIMIZER_INDEX_COST_ADJ, and why didn’t the DBA correct or even check the system statistics, rather than experimenting with a parameter that should rarely be changed from its default value since the introduction of system statistics in Oracle 9i?
  • Page 28 shows “alter system set optimizer_index_cost_adj=20 scope = pfile;”  Issuing that command will result in an “ORA-00922: missing or invalid option” error – the valid SCOPE values are MEMORY, SPFILE, and BOTH.  The pfile cannot be altered using an ALTER SYSTEM command, except of course if the spfile is modified, and then the spfile is written over the top of the existing pfile (init.ora file).  (See the example following this list.)
  • Page 28 indicates that the Statspack report shown was from Oracle 9i, yet the report indicates that the wait time is in centiseconds, while an Oracle 9i Statspack report indicates time in seconds.  It is a bit alarming to see 4,796.54 seconds of wait on enqueue (46.71% of total implies total wait time of 10,268.76 seconds), while the 10,579,442 multi-block reads required 1,972.05 seconds (report shows that this is 29.20% of the total, but a calculator shows that it is 19.20%).  Something is wrong with this Top 5 Wait Events report – it is showing that the average multi-block read completed in 0.0001864 seconds, while the average single block read time required 0.002714 seconds.
  • Page 29 – the silver bullet tip to change the CURSOR_SHARING parameter to FORCE did not warn of any of the possible side-effects, which may include poor performance when cursors that should not be shared because of uneven data distributions generate execution plans that are inappropriate for other literal values, or bugs related to CURSOR_SHARING that either result in wrong data being returned, or instance stability problems.  It would be interesting to see how the 75% performance improvement was achieved by changing this parameter.
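
For anyone wanting to experiment with a couple of the points mentioned above (the buffer cache flush, the OPTIMIZER_INDEX_COST_ADJ math, and the ALTER SYSTEM scope error), a minimal SQL*Plus sketch follows – the parameter values are for illustration only, not recommendations:

-- Valid SCOPE values for ALTER SYSTEM are MEMORY, SPFILE, and BOTH;
-- SCOPE=PFILE (as printed in the book) triggers the ORA-00922 error:
ALTER SYSTEM SET OPTIMIZER_INDEX_COST_ADJ = 20 SCOPE = SPFILE;

-- OPTIMIZER_INDEX_COST_ADJ retains the specified percentage of the calculated
-- index access cost: an index access path costed at 500 would be costed at
-- 500 * 20 / 100 = 100 with the above setting.  Returning to the default:
ALTER SYSTEM SET OPTIMIZER_INDEX_COST_ADJ = 100 SCOPE = BOTH;

-- DB_CACHE_SIZE cannot be decreased below the memory currently in use without
-- first flushing the buffer cache (a bad idea on a busy production system):
ALTER SYSTEM FLUSH BUFFER_CACHE;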

I guess that if the above mentioned problems are still present in the second edition of the book, it might take me significantly longer than the typical one to two hours to write the book review, considering the remaining 1000+ pages.  Best of luck to anyone ordering the second edition of the book.





Strange Search Terms – Are the Answers Out There?

22 08 2010

August 22, 2010

I posted the first blog articles on this blog roughly nine months ago.  In the last nine months I have published roughly 230 blog articles, which is roughly 210 more than anticipated.  What happened to just posting Oracle Database test cases to demonstrate Oracle Database behavior?  I started seeing somewhat unusual search terms appearing in WordPress’ statistics for the site, so I gradually added additional content.  The additional content that I wrote did not necessarily answer the questions found in searches, but the searches provided me with motivation, thinking that DBAs and developers really wanted to know THE answer quickly.  Is it possible to be both quick and thorough?  I guess that my answer to that question is no, otherwise I would have stopped after completing the 20th blog article.

So, what are some of the most interesting and/or strange search terms?  Here are a few – these are actual search terms (in a completely non-random order):

pig outer join
nnnnnnnnnnnnnnnnnnnnn
buffer hit% awr
low buffer hit ratio
buffer cache hit ratio 11g
cache hit percentage oracle 11g            (53 hits)
optimize non standard block cache sizes    (79 hits)
using a large database block size for or
funny examples of root cause analysis
oracle inconsistent reads
ipad metalink oracle
more locks with oracle 11g
oracle index hint not working
site:hoopercharles.wordpress.com burles
sql injection ora-00907
ora books architectural books
sql true false quiz
michigan oaktable symposium               (65 hits)

Searching to find knowledge… a much better approach than knowing the answer, while not knowing the question that is being answered.

Entering, stage right… Moans the Magnificent (Carnac the Magnificent‘s distant cousin).  Moans the Magnificent might not know the questions to the first two search answers, but I am sure that he would have no trouble with the remainder of the list.  Possible examples:

Answer: Michigan OakTable Symposium
Possible follow-up question provided by Moans the Magnificent: Name an event that not even the oracles of the world could have predicted to have more of an impact on the climate than Al Gore inventing the Internet.

Answer: SQL Injection
Possible follow-up question provided by Moans the Magnificent: What is the relationship between code monkeys, fertility clinics, and the big book of child names?

Moans the Magnificent will be the opening session at the Michigan OakTable Symposium in Ann Arbor, MI on Thursday September 16, 2010 and Friday September 17, 2010.  Registration for the event is still open, however hotel rooms are in short supply.  Moans the Magnificent will be joined by roughly twenty other OakTable Network members from around the world, providing a one-of-a-kind two day training session (this will not be an annual event) with virtually one-on-one contact with some of the world’s most respected Oracle DBAs, consultants, and programmers – there might even be a chance to mingle with a couple of the developers who wrote some of the Oracle Database features that we all use on a daily basis.

You could continue to search for the answer… or you could directly talk with the people who know the answer.





On the Topic of Technology…

4 05 2010

May 4, 2010

(Forward to the Next Post in the Series)

Today’s post seems slightly off topic from previous posts, but maybe there is a hidden message.

I have in my basement an ancient Commodore Amiga 1000 computer, which first hit the market in 1985.  At the time of its initial release the computer offered an earth-shattering feature set: 7.16MHz Motorola 68000 CPU, 4096 colors, stereo sound, speech synthesis, and preemptive multitasking.  It’s true that the computer is rather tame by comparison with current computers, 25 years after the initial introduction of the Amiga 1000.  Before Commodore’s eventual demise, I was very much a fan of their product line.

So, why bring up ancient history about a 25 year old computer from a long ago defunct computer company?  This past Saturday I stopped into one of the electronic stores on my way home from an OakTable planning meeting, with the intention of buying one of the Apple iPad devices.  From a personal finance position, I planned to consider the purchase as an educational expenditure, not much different from the justification that I use to spend a portion of my income on various books and magazines.  Let’s see… $500 USD for a unit with 16GB or $700 USD for a unit with 64GB – I figured that I would probably need to buy the $700 unit if I planned to do anything useful with the device since it cannot be upgraded at a later time.  An expensive learning exercise, no doubt, but I already have a dozen or so newspaper, magazine, and Internet articles sitting on my desk at work about the iPad, most of which have been brought to me by others in the company who were curious about the iPad.

Back to the story – as I was walking toward the iPads in the electronics store I walked past 8 netbooks that were running Windows 7 Starter Edition.  The most expensive netbook offered a 1.66GHz Intel Atom processor (with hyperthreading support), 1GB of memory, a 250GB hard drive, a 10.2 inch LCD, an 11 hour battery, and a built-in web camera, and weighed 2.5 pounds – for $400 USD (I later determined that the same netbook is available online for $360).  I found it interesting that the netbook was able to cold boot in about 35 seconds, and resume from sleep mode in about 2 seconds.  After a couple of minutes experimenting with the netbooks I wandered over to the Apple display.  On the way I started wondering what I would do with the iPad, since I own a decent 3 year old laptop with a high-resolution 17 inch LCD.  I found one of the unoccupied iPads at the Apple display counter and started experimenting with it.

Wow, neat visual effects, jumping from the main screen into the mapping application.  OK, now how do I switch back to the main screen and pull up a web browser?  The iPad does not offer multitasking, even though that 25 year old Amiga 1000 computer sitting in my basement does?  True, but I knew that before entering the store.  After hopelessly pressing on the screen to find the hidden “close” button for the mapping application, I found a button on the frame that surrounded the screen – surprisingly, that button closed the mapping application.  Interesting, apparently no button on the main screen to open a web browser (the feature was probably disabled on the display unit), but there is a button for YouTube.  Wow, neat visual effects when pointing at YouTube.  What should I search for?  How about “Oracle”.  I brought up the onscreen keyboard, set the unit down on the display counter so that I could effectively type on the screen, and found that this motion caused the screen to rotate 90 degrees.  Neat visual effect… it was about this time that I noticed that the iPad screen was also roughly 10 inches measured diagonally, much like those netbooks that I passed on the way over to the Apple display.

The iPad played the selected YouTube video without problem, and did I mention the neat visual effects?  I started thinking again about how I would use the iPad.  I could use it to watch videos, as long as the videos were not Flash based.  My broadband connection has either a 5GB or 6GB monthly maximum transfer limit, as it is provided by a cell phone company; of course, I could easily hit that limit if the iPad were only used to watch videos.  All I have to do is plug the USB modem into the USB port on the iPad… of course I mean plug the USB modem into the CradlePoint wireless router – there are no USB ports on the iPad.  I could use it for viewing the Oracle documentation, assuming that Adobe’s Acrobat (PDF) format is not banned from the iPad.  While I don’t think that it will ever be possible to run a copy of Oracle Database on an iPad, I could use it to look up information on Metalink (My Oracle Support)… on second thought, who made the decision to build the new Metalink site using Flash – didn’t that person consult Steve Jobs?  I started to wonder again why I was planning to buy one of these iPads – for my purposes the only things that it really had going for it were the neat visual effects, the mapping application, and the semi-portability (I have to drag along the CradlePoint router and external antenna).

I wandered back over to the netbooks.  Let’s see, these miniature sized computers offer four times as much storage space as the most expensive iPad, check; 66% faster CPU than the most expensive iPad, check; keyboard does not rotate 90 degrees when placed on the table, check; same screen size as the iPad, check; same weight as the iPad, check; multiple (3) USB ports, check; no fingerprints on the screen/case, check; 11 hour battery life, check; built-in web cam (probably coming to the iPads in December), check; able to visit the Flash enabled My Oracle Support site, check; able to run most of the software that I own, check; most expensive unit is $100 cheaper than the least expensive iPad, check; able to watch most video formats, with the notable exception of MPEG2 (as recorded by a Tivo), check; able to multitask just like the 25 year old Amiga, check; able to run Oracle Database – certainly that can’t be possible on a netbook, right?

So, I walked out of the store with a bag in hand, mission accomplished.  That certainly is a strange looking iPad – how did Apple manage to sell a million of them in a month?  Forward progress still means moving forward, right?

Using 40% of the 1GB of memory, just sitting idle.

I thought about installing Oracle on the little device, but as you can see, the books were just too heavy.

It is far too small, too limited of a device to support Oracle Database 11g R2.

Definitely, the hard copies are far too heavy.

This blog article is not intended to imply that the technology of the iPad is behind that of the 25 year old Commodore Amiga due to the iPad’s lack of multitasking.  This blog article is not intended to insult the owners of the 1 million iPads that were sold in the last month, even those that ended up in a blender or were smashed on the concrete as demonstrations of the rugged design of the iPads.  This blog article is not intended to point out how limited an iPad might be to someone wanting to perform serious work on the iPad – if you consider navigating the Flash enabled My Oracle Support site serious work.  No, this blog article is intended to point out that it is hard to install Oracle Database on a device without a physical keyboard.  See, the blog article was on topic after all.  :-)

I think that humor somehow found its way into this blog article.  I wonder if this article about the iPad will end up on my desk with the other assorted iPad articles?





Michigan OakTable Symposium (Advert)

18 02 2010

February 18, 2010 (Updated April 5, 2010)

Looking for an Oracle training session in the rust belt of the United States?

The Michigan members of the OakTable Network are organizing a two day Oracle training event on the Thursday and Friday (September 16th and 17th, 2010) just before Oracle OpenWorld. OakTable Network members from around the world will be providing sessions at the training event in Ann Arbor, Michigan (not far from the University of Michigan) at the Four Points by Sheraton hotel. Presented sessions will appeal to both DBAs and developers. Seating at the event will be limited to 300 people.

Confirmed Speaker List (See the Official Site for the Latest List):

Alex Gorbachev
Alex Gorbachev is a respected figure in the Oracle world, and a sought-after leader and speaker at Oracle conferences around the globe. He has been recognized as an Oracle ACE Director for his contributions to the community and unmatched set of skills. He is the founder of the Battle Against Any Guess movement promoting scientific troubleshooting techniques. He is currently the Chief Technology Officer at The Pythian Group. Alex has worked for The Pythian Group in several roles. He began by leading a team of database experts in Ottawa. He then moved to Australia to build the company’s presence in the East Asia Pacific region. Now he is back in Ottawa as The Pythian Group’s Chief Technology Officer. In all his work, Alex continues to work towards bridging the gap between business and technology. The search for the perfect fit between technology, engineering talents, and business process is what keeps him up at night. Alex co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Andy Zitelli
Andrew Zitelli has thirty years of experience as a software developer, data architect and performance analyst. He has served as a consultant to the aerospace, semiconductor, steel, and pharmaceutical industries. Andrew is proficient working with a broad range of operating systems and languages and has 25 years experience working with ten different relational database products, including 17 years working with Oracle. He holds MS and BA degrees in Computer Science. During the past 30 years, Andrew has taught college-level courses and made presentations at technical conferences and other venues on a wide range of topics. He made presentations at the Hotsos Symposiums in 2007, 2008 and 2009.

Cary Millsap
Cary Millsap is the founder and president of Method R Corporation. He is widely known in the Oracle community as a speaker, educator, consultant and writer. He is the author (with Jeff Holt) of Optimizing Oracle Performance (O’Reilly 2003). Cary is also co-author of Oracle Insights: Tales of the Oak Table. Cary is one of the former founding partners of Hotsos and worked at Oracle for ten years. In the early 1990s, Cary created Oracle’s OFA Standard, which became the default configuration standard for Oracle software installations worldwide. Cary is also a woodworker who builds and repairs furniture in his own shop.

Charles Hooper
Charles Hooper is the IT Manager and an Oracle database administrator at K&M Machine-Fabricating, Inc., where he has been employed since 2000. His role in the company extends well beyond Oracle database administration responsibilities, providing opportunities for database performance tuning, network administration, programming, hardware/software troubleshooting, mentoring fellow IT staff, and end-user training for the Infor Visual Enterprise ERP system as well as other custom developed applications. Charles is well known among the user community of Infor Visual Enterprise due to his years of participation in various user forums answering technical questions, providing the only Oracle presentation at Infor’s 2008 user’s conference, and providing presentations to regional user groups. Prior to joining K&M, he was a computer/technology instructor and Novell Netware administrator. Charles co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Dan Fink
Daniel Fink is a senior Oracle database engineer and consultant, specializing in Oracle optimization, troubleshooting, internals, and data recovery. He started as a DBA on Oracle 7.0.16 running Parallel Server on OpenVMS and has experience on major Unix platforms and releases up to 10g. He maintains a library of his research at www.optimaldba.com.

Doug Burns
Doug Burns is an independent contractor who has 20 years of experience working with Oracle in a range of industries and applications. As well as presenting at a number of conferences, he has developed, taught, and edited performance-related courses for both Oracle UK and Learning Tree International.

Jeff Needham
Jeff Needham currently works for Red Hat Software and is responsible for Oracle RDBMS, and GFS/RAC certifications. This includes tending the relationship between RHEL Engineering, Oracle Linux Engineering and Server Technologies. Having founded Scale Abilities with James Morle in 1989, Jeff pioneered high performance NFS for Oracle at Yahoo! using Netapp and AMD technologies. Jeff worked on performance and scalability for Oracle 7 and Oracle 8 during his tenure in the Redwood Shores kernel group. When not at work, he attempts fatherhood, collects Opteron servers and tries to maintain a vintage electronic music studio.

Jeremiah Wilton
Jeremiah Wilton has worked with Oracle technology since 1994. His main claim to fame is having been Amazon.com’s first database administrator, back in the pre-IPO days. For seven years, he helped Amazon.com survive exponential scaling, and a wide variety of nearly-catastrophic technology failures. Jeremiah owned and ran ORA-600 Consulting for a number of years, until it was acquired by Blue Gecko, Inc., a global provider of remote administration for Oracle, MySQL, and E-Business Suite. Jeremiah also teaches the Oracle certificate program for the University of Washington. Jeremiah is an Oracle Certified Master, and a frequent presenter at industry conferences and user groups. His publications and whitepapers can be found at www.bluegecko.net. Jeremiah co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Jonathan Lewis
Jonathan Lewis is a well-known figure in the Oracle world with more than 21 years experience of using the database (and several years on other products). He has published two books, contributed to three others, runs a couple of websites and contributes fairly regularly to newsgroups, forums, User Group magazines, and speaking events around the world. Jonathan has been self-employed for most of his time in the IT industry. He specialises in short-term assignments, typically of a design, review, or trouble-shooting  nature.

Joze Senegacnik
Jože Senegačnik has more than 20 years of experience in working with Oracle products. He began in 1988 with Oracle Database version 4 while working for the City of Ljubljana, where he had charge over the city’s municipal and geographic information systems. From 1993 to 2003, he worked in developing GIS systems for the Surveying and Mapping Authority of the Republic of Slovenia, and in the development of applications for other governmental institutions, all based on the Oracle database. More recently, he has specialized in performance optimization, having developed his own toolset for monitoring performance and analyzing trace files. Jože is an internationally recognized speaker. He is a regular speaker at user-group conferences, especially those put on by the Slovenian Oracle Users Group (SIOUG), the Independent Oracle Users Group (IOUG), and the United Kingdom Oracle Users Group (UKOUG). He also speaks routinely at the Hotsos Symposium and Oracle Open World. In addition to sharing his knowledge through conference talks, Jože conducts technical seminars organized either by Oracle University or himself. He was awarded Oracle ACE membership for his long record of positive contributions to the Oracle community. Jože co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Mogens Nørgaard
Mogens Nørgaard is technical director at Miracle A/S (http://www.miracleas.dk), a database knowledge center and consulting/training company based in Denmark, and is the cofounder and “father figure” of the OakTable Network. He is a renowned speaker at Oracle conferences all over the world and organizes some highly respected events through Miracle A/S, including the annual MasterClass (2001: Cary Millsap, 2002: Jonathan Lewis, 2003: Steve Adams, 2004: Tom Kyte) and the Miracle Database Forum, which is a 3-day conference for database people. He is also the cofounder of the Danish Oracle User Group (OUGKD) and was voted “Educator of the Year” in Oracle Magazine’s Editor’s Choice Awards, 2003.

Randolf Geist
Randolf Geist has been working with Oracle software for 15 years. Since 2000 he has operated as a freelance database consultant focusing primarily on performance related issues, and in particular helping people to understand and unleash the power of the Oracle cost based optimizer (CBO). He is writing on his blog about CBO-related issues and is also regularly contributing to the official OTN forums. Randolf is a member of the Oracle ACE program and is an Oracle Certified Professional DBA for Oracle Versions 8i, 9i, and 10g. He also maintains SQLTools++, an open-source Oracle GUI for Windows. Randolf co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Riyaj Shamsudeen
Riyaj Shamsudeen has 17+ years of experience in Oracle and 16+ years as an Oracle DBA/Oracle Applications DBA. He is the principal DBA for OraInternals, a consulting company resolving many advanced performance, recovery, RAC, and EBS11i issues. Riyaj specializes in RAC and performance issues and has authored many articles, mostly about performance, database internals, optimizer internals, etc. He is a frequent presenter at major conferences such as the Hotsos Symposium, UKOUG, and RMOUG. Riyaj co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Robyn Sands
Robyn Anderson Sands is a software engineer for Cisco Systems. In a previous incarnation, she worked in Industrial Engineering, Manufacturing Development, and Quality Engineering at Lockheed Martin, supporting the P-7, F-22, and C-130J programs. Robyn has been working with databases and Oracle software since around 1996. She began her work with Oracle by developing tools for scheduling, composite fabrication capacity modeling, and engineering work flow, and progressing to the implementation and administration of data warehouse, PeopleSoft, and SAP systems. Current projects include “architecting” and managing the development of embedded database systems for Cisco customers, and searching for new ways to design and develop database systems with consistent performance and minimal maintenance requirements. She has been a speaker at UKOUG, Miracle conferences, Oracle Open World, and the Hotsos Symposium.  Robyn co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

Tanel Poder
Tanel Põder is an experienced consultant with deep expertise in Oracle database internals, advanced performance tuning and end-to-end troubleshooting. He specializes in solving complex problems spanning multiple infrastructure layers such as UNIX, Oracle, application servers and storage. He is one of the first Oracle Certified Masters in the world, passing the OCM DBA exam in 2002; and is a frequent speaker at major Oracle conferences worldwide.

Tim Gorman 
Tim Gorman began his IT career in 1984 as a C programmer on UNIX and VMS systems, working on medical and financial systems as an application developer, systems programmer, and systems administrator. He joined Oracle Corporation in 1990 as a consultant, then became an independent consultant in 1998, and has worked for SageLogix since 2000. Gorman is the co-author of Essential Oracle8i Data Warehousing and Oracle8 Data Warehousing. He specializes in performance tuning applications, databases, and systems, as well as data warehouse design and implementation, backup and recovery, architecture and infrastructure, and database administration. Gorman still considers himself a pretty good coder, although the market for C programs has dried up somewhat lately. Tim co-authored the book “Expert Oracle Practices: Oracle Database Administration from the Oak Table”.

———-

Awaiting Confirmation (Subject to Change):

Christian Antognini
Since 1995, Christian Antognini has focused on understanding how the Oracle database engine works. His main interests include logical and physical database design, the integration of databases with Java applications, the query optimizer and basically everything else related to application performance management and optimization. He is currently working as a principal consultant and trainer at Trivadis AG (http://www.trivadis.com) in Zürich, Switzerland. If Christian is not helping one of his customers get the most out of Oracle, he is somewhere lecturing on application performance management or new Oracle database features for developers. In addition to classes and seminars organized by Trivadis, he regularly presents at conferences and user group meetings. He is a proud member of the Trivadis Performance Team and of the OakTable Network. Christian is the author of the book Troubleshooting Oracle Performance (Apress, 2008).

Eric Grancher
Eric Grancher has been working at CERN since 1996 in the Information Technology Division. He has been working on different aspects of databases and application server products: database design consulting, cluster database administration and usage with commodity hardware, application server consulting, database and application server monitoring. He is currently responsible for a team that focuses on database oriented application deployment. He holds an engineering diploma from the French telecommunications engineering school “ENST – Telecom Paris” and a Magistère (Master) of Parallel Computing from “École Normale Supérieure de Lyon”.

James Morle
With 20 years’ experience in professional computing, James Morle has been personally responsible for the architecture and implementation of some of the world’s largest and most complex business systems, including a 3-node Oracle Parallel Server configuration that services 3000 online users. James is a well-respected member of the Oracle community and is the author of the critically acclaimed book Scaling Oracle8i. He is the cofounder of Scale Abilities (http://www.scaleabilities.com), a specialist consulting and training company focusing on aspects of system engineering related to building very large and complex computer systems. James authored the book “Scaling Oracle8i: Building Highly Scalable OLTP System Architectures” and co-authored the book “Oracle Insights: Tales of the Oak Table”.

Marco Gralike
Marco Gralike, working for AMIS Services BV as a Principal Oracle Database Consultant in the Netherlands, has experience as a DBA since 1994 (Oracle 6). Marco is also skilled in other fields, such as operating system administration and application servers, mainly in finding working, well-performing solutions. He has specialized in Oracle XMLDB since 2003, while keeping in touch with his old love: database administration and performance. He is an Oracle XMLDB enthusiast and a dedicated contributor to the Oracle User Group community, helping people with their steep XMLDB learning curve. To this end, Marco has also devoted his personal blog site to XMLDB and other Oracle issues. Marco has presented several times in Holland, at UKOUG (2007, 2008), and at Oracle OpenWorld (2008, 2009). He was awarded the Oracle ACE title for his work in the Oracle XMLDB area (2007).

———-

The MOTS Executive Committee:

Carol Dacko – OakTable member
Mark Bobak – OakTable member
Mark Powell – OakTable member
Charles Hooper – OakTable member
Ric Van Dyke – Hotsos Friend
Myke Ratcliff – Esteemed Friend

The Michigan OakTable Symposium (MOTS) is not intended as a money-making event. As such, the cost of attending the event has been set as low as possible, essentially to “break even”. The primary mission of the symposium is to provide attendees with logical, structured processes that lead to reproducible success, rather than treating Oracle Database as a magical, unpredictable black box that yields unexpected behavior.

Early registration at a rate of $450 per person will end April 30, 2010. Registration between May 1 and July 31 is $600 per person, and increases to $750 per person after July 31. These prices do not include hotel room costs (roughly $100 per night), but do include a high-quality buffet lunch. You may register for the event at the official MOTS site:

Watch for future updates, agenda, registration details, and more information on the official MOTS site: http://michigan.oaktable.net.

Preliminary Schedule for Thursday, 16 September 2010 and Friday, 17 September 2010: 

(Schedule images for Thursday and Friday: “Michigan OakTable Schedule”)

Abstracts for each of the sessions are located on the official Michigan OakTable site.





The OakTable Network Invades Michigan, USA (Advert)

8 01 2010

January 8, 2010 (Updated February 5)

… at MOS… uh… in light of the Flashing incident with the other MOS, maybe it is better to stick with the abbreviation MOTS – the Michigan OakTable Symposium. 

For the first time ever, the OakTable Network is organizing a two day training event on the Thursday and Friday (September 16th and 17th, 2010) just before Oracle OpenWorld, in Ann Arbor, Michigan (not far from the University of Michigan). The details are still being finalized, so for right now here are the basics:

  • Seating will be limited to a maximum of 300 people.
  • OakTable Network members from around the world will attend and provide the training sessions.
  • Sessions will be provided that will appeal to DBAs and developers.
  • MOTS is not intended as a money-making event. As such, the cost of attending the event has been set as low as possible, essentially to “break even”.

Probable Speakers Include (subject to change):

Early registration at a rate of $450 per person will end April 30, 2010. Registration between May 1 and July 31 is $600 per person, and increases to $750 per person after July 31. These prices do not include hotel room costs (roughly $100 per night), but do include a high-quality buffet lunch.

More details will follow later.

———-

February 5 Update:

I heard that a couple more people will likely be added to the list of presenters, and it looks like I might have the opportunity to present with Randolf. Biographies for the presenters will hopefully be available within a couple of weeks, along with the site for registration.

—————————————————-

Latest information: Michigan OakTable Symposium (Advert)
Official website for the event: http://michigan.oaktable.net





Failure to Collect Fixed Object Statistics Leads to ORA-01013 or ORA-07445

9 12 2009

December 8, 2009 (note added December 9, 2009)

An interesting post from almost two years ago in the comp.databases.oracle.server Usenet group:
http://groups.google.com/group/comp.databases.oracle.server/browse_thread/thread/e0599d3e043fc199

select distinct name from all_source;

.......
SYS_YOID0000052452$
......
3196 rows selected.

Ok no problem.

select distinct owner from v$access
SQL> /
select distinct owner from v$access
                           *
ERROR at line 1:
ORA-01013: user requested cancel of current operation

I had to terminate it, as it had become non-terminating.

“select owner from v$access” returns a mere 193 rows, and it cannot sort them?

I suspect there exists an infinite loop somewhere.

This was attempted on 10gR2 and 11gR1, and both had the same problem.

How would someone start investigating this problem?  Find something that appears out of the ordinary, and start probing.  This is the approach that I used:


I was able to reproduce this problem on Oracle 10.2.0.2 with the Oracle October 2006 CPU on 64 bit Windows 2003.

From the udump trace file:

ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [__intel_new_memcpy+610] [PC:0x3236532] [ADDR:0x0] [UNABLE_TO_READ] []
Current SQL statement for this session:
select distinct owner from v$access
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
__intel_new_memcpy+610        0000000000000000     000000000 000000000 0118AF5A0 7FF970C7598
000007FF95D155F0     CALL???  __intel_new_memcpy+610  0000007FF 013DF42E8 001749686 000000000
...

From a 10046 trace at level 8:

*** 2008-01-23 06:51:57.901
WAIT #3: nam='latch: library cache' ela= 8566 address=1398554576 number=214 tries=1 obj#=-1 tim=5204643696
WAIT #3: nam='latch: library cache' ela= 859 address=1398554576 number=214 tries=1 obj#=-1 tim=5205047765
WAIT #3: nam='latch: library cache' ela= 2958 address=1398554576 number=214 tries=1 obj#=-1 tim=5205183748
WAIT #3: nam='latch: library cache' ela= 551 address=1398554576 number=214 tries=1 obj#=-1 tim=5205267198
*** 2008-01-23 06:54:39.117
WAIT #3: nam='latch: library cache' ela= 813 address=1398554576 number=214 tries=1 obj#=-1 tim=5365848854
*** 2008-01-23 06:55:06.288
WAIT #3: nam='latch: library cache' ela= 30 address=1398554576 number=214 tries=1 obj#=-1 tim=5393019180
*** 2008-01-23 06:55:30.006
WAIT #3: nam='latch: library cache' ela= 68 address=1398554576 number=214 tries=1 obj#=-1 tim=5416746379
*** 2008-01-23 06:55:50.584
WAIT #3: nam='latch: library cache' ela= 33 address=1398554576 number=214 tries=1 obj#=-1 tim=5437323921
*** 2008-01-23 06:57:09.536
WAIT #3: nam='latch: library cache' ela= 111 address=1398554576 number=214 tries=1 obj#=-1 tim=5516279642
*** 2008-01-23 06:57:20.895
WAIT #3: nam='latch: library cache' ela= 77 address=1398554576 number=214 tries=1 obj#=-1 tim=5527627340
*** 2008-01-23 06:57:36.082
WAIT #3: nam='latch: library cache' ela= 246 address=1398554576 number=214 tries=1 obj#=-1 tim=5542815685
*** 2008-01-23 06:57:57.957
WAIT #3: nam='latch: library cache' ela= 123 address=1398554576 number=214 tries=1 obj#=-1 tim=5564704225
*** 2008-01-23 06:58:14.644
WAIT #3: nam='latch: library cache' ela= 63 address=1398554576 number=214 tries=1 obj#=-1 tim=5581385020
*** 2008-01-23 06:58:26.269
WAIT #3: nam='latch: library cache' ela= 62 address=1398554576 number=214 tries=1 obj#=-1 tim=5593004724
*** 2008-01-23 06:58:48.346
WAIT #3: nam='latch: library cache' ela= 42 address=1398554576 number=214 tries=1 obj#=-1 tim=5615094241
WAIT #3: nam='latch: library cache' ela= 8 address=1398554576 number=214 tries=1 obj#=-1 tim=5615168161
...
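
For reference, this type of trace output can be produced by enabling event 10046 at level 8 for the session executing the query (level 8 adds the wait event lines seen above); a minimal sketch follows, where the TRACEFILE_IDENTIFIER value is an arbitrary marker chosen here only to make the trace file easier to locate in the udump directory:

-- Tag the trace file name, then enable extended SQL tracing with wait events
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'V_ACCESS_TEST';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

SELECT DISTINCT OWNER FROM V$ACCESS;

-- Disable the trace; the file is written to the USER_DUMP_DEST directory
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';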

For an average 60 second interval, the session had the following latch statistics:

Latch         Child# Level   Gets  Misses  Sleeps  Sleeps1
LIBRARY CACHE    1       5   529418     25      0       25
LIBRARY CACHE    2       5   539720     36      0       36
LIBRARY CACHE    3       5   519189     15      0       15
LIBRARY CACHE    4       5   516501     55      0       55
LIBRARY CACHE    5       5   524907   1744      4     1740
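
The per-child numbers above may be captured by querying V$LATCH_CHILDREN at the start and the end of the interval and computing the deltas; a minimal sketch of the underlying query, assuming the last two columns above map to the SLEEPS and SLEEP1 columns of the view:

SELECT
  CHILD#,
  LEVEL#,
  GETS,
  MISSES,
  SLEEPS,
  SLEEP1
FROM
  V$LATCH_CHILDREN
WHERE
  NAME = 'library cache'
ORDER BY
  CHILD#;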

On the waits, P1 is 13985, P2 is 214, P3 is 0, Buf is 99, SQL Hash is 3250939240.

SELECT
  SQL_TEXT
FROM
  V$SQL
WHERE
  HASH_VALUE=3250939240;

SQL_TEXT
-----------------------------------
select distinct owner from v$access

 

SELECT
  ID,
  SUBSTR(OPERATION,1,12) OPERATION,
  SUBSTR(OPTIONS,1,12) OPTIONS,
  SUBSTR(OBJECT_OWNER||'.'||OBJECT_NAME,1,20) OBJECT,
  SUBSTR(OBJECT_TYPE,1,13) OBJECT_TYPE,
  PARENT_ID,
  DEPTH,
  POSITION,
  CPU_COST
FROM
  V$SQL_PLAN_STATISTICS_ALL
WHERE
  HASH_VALUE=3250939240
  AND CHILD_NUMBER=1
ORDER BY
  ID;

ID OPERATION    OPTIONS      OBJECT               OBJECT_TYPE    PARENT_ID      DEPTH   POSITION   CPU_COST
-- ------------ ------------ -------------------- ------------- ---------- ---------- ---------- ----------
 1 HASH UNIQUE                                                           0          1          1    2142850
 2 NESTED LOOPS                                                          1          2          1    1115000
 3 NESTED LOOPS                                                          2          3          1    1080000
 4 MERGE JOIN   CARTESIAN                                                3          4          1     730000
 5 FIXED TABLE  FULL         SYS.X$KSUSE          TABLE (FIXED)          4          5          1     380000
 6 BUFFER SORT                                                           4          5          2     350000
 7 FIXED TABLE  FULL         SYS.X$KGLDP          TABLE (FIXED)          6          6          1     350000
 8 FIXED TABLE  FIXED INDEX  SYS.X$KGLLK (ind:1)  TABLE (FIXED)          3          4          2       3500
 9 FIXED TABLE  FIXED INDEX  SYS.X$KGLOB (ind:1)  TABLE (FIXED)          2          3          2       3500

 

SELECT
  ID,
  ACCESS_PREDICATES,
  FILTER_PREDICATES
FROM
  V$SQL_PLAN_STATISTICS_ALL
WHERE
  HASH_VALUE=3250939240
  AND CHILD_NUMBER=1
ORDER BY
  ID;

ID ACCESS_PREDICATES FILTER_PREDICATES
-- ----------------- -----------------
 1
 2
 3
 4
 5                   "S"."INST_ID"=USERENV('INSTANCE')
 6
 7
 8                   ("L"."KGLLKUSE"="S"."ADDR" AND "L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH")
 9                   ("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL")


Then check a different server:

The query eventually completed on the 32 bit version of Oracle 10.2.0.2 with the Oracle October 2006 CPU.

PARSE 1|CPU S     0.000000|CLOCK S    0.006227|ROWs 0
EXEC  1|CPU S     0.000000|CLOCK S    0.000201|ROWs 0
FETCH 2|CPU S 13112.828125|CLOCK S  926.981803|ROWs 6

Row Source Execution Plan:
       (Rows 6)   HASH UNIQUE (cr=0 pr=0 pw=0 time=568347223 us)
    (Rows 3463)    NESTED LOOPS  (cr=0 pr=0 pw=0 time=3464796755 us)
    (Rows 3463)     NESTED LOOPS  (cr=0 pr=0 pw=0 time=3464592419 us)
 (Rows 1613768)      MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=21019488 us)
     (Rows 236)       FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=2376 us)
 (Rows 1613768)       BUFFER SORT (cr=0 pr=0 pw=0 time=12951356 us)
    (Rows 6838)        FIXED TABLE FULL X$KGLDP (cr=0 pr=0 pw=0 time=41073 us)
    (Rows 3463)      FIXED TABLE FIXED INDEX X$KGLLK (ind:1) (cr=0 pr=0 pw=0 time=13094082350 us)
    (Rows 3463)     FIXED TABLE FIXED INDEX X$KGLOB (ind:1) (cr=0 pr=0 pw=0 time=166548 us)

Note the merge Cartesian join between the 236 rows in X$KSUSE and the 6,838 rows from X$KGLDP, which produced the 1,613,768 (236 × 6,838) rows that were then probed into X$KGLLK.

The wait events:
0.03 seconds on latch: library cache

Then keep probing:

I may have found something that should help the OP – it hit me when I saw very slow performance with the same SQL statement on 32 bit Oracle 10.2.0.3 and 11.1.0.6, after looking at the DBMS_XPLAN output.

The DBMS_XPLAN on 10.2.0.3:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE               |                 |      1 |    105 |      5 |00:02:51.06 |       |       |          |
|   2 |   NESTED LOOPS             |                 |      1 |    105 |   1131 |00:02:51.04 |       |       |          |
|   3 |    NESTED LOOPS            |                 |      1 |     10 |   1131 |00:02:50.39 |       |       |          |
|   4 |     MERGE JOIN CARTESIAN   |                 |      1 |    100 |    180K|00:00:01.27 |       |       |          |
|*  5 |      FIXED TABLE FULL      | X$KSUSE         |      1 |      1 |    236 |00:00:00.01 |       |       |          |
|   6 |      BUFFER SORT           |                 |    236 |    100 |    180K|00:00:00.55 | 36864 | 36864 |32768  (0)|
|   7 |       FIXED TABLE FULL     | X$KGLDP         |      1 |    100 |    763 |00:00:00.01 |       |       |          |
|*  8 |     FIXED TABLE FIXED INDEX| X$KGLLK (ind:1) |    180K|      1 |   1131 |00:02:48.31 |       |       |          |
|*  9 |    FIXED TABLE FIXED INDEX | X$KGLOB (ind:1) |   1131 |     10 |   1131 |00:00:00.64 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   8 - filter(("L"."KGLLKUSE"="S"."ADDR" AND "L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH"))
   9 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))

 
Note the MERGE JOIN CARTESIAN, and how the estimated row counts compare with the actual row counts.

The DBMS_XPLAN on 11.1.0.6:

select distinct owner from v$access 

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE                |                 |      1 |      1 |      6 |00:00:40.28 |   951K|   951K|  860K (0)|
|   2 |   NESTED LOOPS              |                 |      1 |      1 |   2342 |00:00:40.27 |       |       |          |
|   3 |    MERGE JOIN CARTESIAN     |                 |      1 |      1 |   2842K|00:00:11.37 |       |       |          |
|   4 |     NESTED LOOPS            |                 |      1 |      1 |  16721 |00:00:00.38 |       |       |          |
|   5 |      FIXED TABLE FULL       | X$KGLDP         |      1 |    100 |  16721 |00:00:00.05 |       |       |          |
|*  6 |      FIXED TABLE FIXED INDEX| X$KGLOB (ind:1) |  16721 |      1 |  16721 |00:00:00.21 |       |       |          |
|   7 |     BUFFER SORT             |                 |  16721 |      1 |   2842K|00:00:02.91 |  4096 |  4096 | 4096  (0)|
|*  8 |      FIXED TABLE FULL       | X$KSUSE         |      1 |      1 |    170 |00:00:00.01 |       |       |          |
|*  9 |    FIXED TABLE FIXED INDEX  | X$KGLLK (ind:1) |   2842K|      1 |   2342 |00:00:15.49 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))
   8 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   9 - filter(("L"."KGLLKUSE"="S"."ADDR" AND "L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH"))

The above executed more quickly, and the plan is slightly different, but the MERGE JOIN CARTESIAN is still present, as is the difference between the estimated and actual number of rows.
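
(For anyone wanting to reproduce this type of output: plans showing the Starts, E-Rows, A-Rows, and A-Time columns can be retrieved with DBMS_XPLAN.DISPLAY_CURSOR after executing the statement with STATISTICS_LEVEL set to ALL, or with a GATHER_PLAN_STATISTICS hint; a minimal sketch, assuming the statement was the most recent one executed in the session:)

ALTER SESSION SET STATISTICS_LEVEL = ALL;

SELECT DISTINCT OWNER FROM V$ACCESS;

-- NULL, NULL targets the last statement executed in this session
SELECT
  *
FROM
  TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));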

The fixed object stats must be wrong (I recall having a problem with that a couple of years ago when performing the following)…

SQL> CONNECT / AS SYSDBA
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS('ALL');
BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS('ALL'); END;

*
ERROR at line 1:
ORA-20000: Insufficient privileges to analyze an object in Fixed Ob
ORA-06512: at "SYS.DBMS_STATS", line 17951
ORA-06512: at "SYS.DBMS_STATS", line 18404
ORA-06512: at "SYS.DBMS_STATS", line 18951
ORA-06512: at line 1

The same error occurs on Oracle 10.2.0.2, 10.2.0.3, and 11.1.0.6 as the internal user, SYS AS SYSDBA, and SYSTEM.
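
One possible explanation, offered as an assumption based on the documented parameter list rather than anything verified in these tests: the first parameter of GATHER_FIXED_OBJECTS_STATS is STATTAB, the name of an optional user statistics table, so the ‘ALL’ argument would be treated as a statistics table name rather than as a scope. The documented parameterless form of the call would be:

-- The first parameter (STATTAB) names an optional user statistics table,
-- so it is normally omitted (or passed as NULL)
SQL> CONNECT / AS SYSDBA
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;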

There must be another way:

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',GATHER_FIXED=>TRUE)

PL/SQL PROCEDURE SUCCESSFULLY COMPLETED.

The new DBMS_XPLANs:
10.2.0.3:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE              |                 |      1 |      7 |      4 |00:00:00.09 |       |       |          |
|   2 |   NESTED LOOPS            |                 |      1 |   1822 |   1003 |00:00:00.08 |       |       |          |
|*  3 |    HASH JOIN              |                 |      1 |   1822 |   1003 |00:00:00.05 |   898K|   898K| 1099K (0)|
|*  4 |     HASH JOIN             |                 |      1 |   1822 |   1897 |00:00:00.03 |  1010K|  1010K|  639K (0)|
|*  5 |      FIXED TABLE FULL     | X$KSUSE         |      1 |    236 |    236 |00:00:00.01 |       |       |          |
|   6 |      FIXED TABLE FULL     | X$KGLLK         |      1 |   1822 |   1897 |00:00:00.01 |       |       |          |
|   7 |     FIXED TABLE FULL      | X$KGLDP         |      1 |   2892 |    649 |00:00:00.01 |       |       |          |
|*  8 |    FIXED TABLE FIXED INDEX| X$KGLOB (ind:1) |   1003 |      1 |   1003 |00:00:00.01 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH")
   4 - access("L"."KGLLKUSE"="S"."ADDR")
   5 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   8 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))

The MERGE JOIN CARTESIAN is gone and the execution time dropped from 2 minutes, 51 seconds to 0.09 seconds.

11.1.0.6:

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name            | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE               |                 |      1 |     19 |      1 |00:00:00.04 |  1037K|  1037K|  368K (0)|
|   2 |   NESTED LOOPS             |                 |      1 |   1139 |    134 |00:00:00.04 |       |       |          |
|   3 |    NESTED LOOPS            |                 |      1 |   1139 |    134 |00:00:00.03 |       |       |          |
|*  4 |     HASH JOIN              |                 |      1 |   1139 |   1144 |00:00:00.02 |  1010K|  1010K| 1205K (0)|
|*  5 |      FIXED TABLE FULL      | X$KSUSE         |      1 |    170 |    170 |00:00:00.01 |       |       |          |
|   6 |      FIXED TABLE FULL      | X$KGLLK         |      1 |   1139 |   1144 |00:00:00.01 |       |       |          |
|*  7 |     FIXED TABLE FIXED INDEX| X$KGLDP (ind:1) |   1144 |      1 |    134 |00:00:00.01 |       |       |          |
|*  8 |    FIXED TABLE FIXED INDEX | X$KGLOB (ind:1) |    134 |      1 |    134 |00:00:00.01 |       |       |          |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("L"."KGLLKUSE"="S"."ADDR")
   5 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   7 - filter(("L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH"))
   8 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))

The MERGE JOIN CARTESIAN is gone and the execution time dropped from 40.28 seconds to 0.04 seconds.

The OP may be able to run the same DBMS_STATS.GATHER_SCHEMA_STATS procedure to work around the problem.

How can you determine whether statistics have been collected on the fixed tables?  A search on Metalink found this article from 2004:
https://metalink.oracle.com/metalink/plsql/f?p=200:27:5000154048035945504::::p27_id,p27_show_header,p27_show_help:525959.996,1,1
(Edit: Note that the above link is gone in a Flash.  If anyone is able to find the message on the Oracle support site, the site formerly known as Metalink, please let me know the address.)

In the article, Jonathan Lewis mentioned that tab_stats$ could be checked.
Test database on 64 bit Windows:

SELECT
  *
FROM
  SYS.TAB_STATS$;

The above returned about 582 rows after running:

EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',GATHER_FIXED=>TRUE);

I then checked a production database, and found no rows returned. Using the suggestion from the Metalink article:

EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(NULL);

The stats collection completed much more quickly, as it did not analyze the full SYS schema, and there were 582 rows returned by the query on SYS.TAB_STATS$.  It looks like this simple query may be used to determine if fixed object statistics need to be collected.
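
As an alternative to querying SYS.TAB_STATS$ directly, the same check may be performed through a documented view; a minimal sketch using DBA_TAB_STATISTICS (available in 10g and later), where a count of zero suggests that fixed object statistics have never been gathered:

SELECT
  COUNT(*) ANALYZED_FIXED_TABLES
FROM
  DBA_TAB_STATISTICS
WHERE
  OBJECT_TYPE = 'FIXED TABLE'
  AND LAST_ANALYZED IS NOT NULL;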







