Monday, February 26, 2007


Is it just me, or does Oracle's effort towards the DST change seem to be a convoluted mess of notes, readmes, more notes, and superseding items? Download this, this, this, this, and this, and possibly this, but only if you have this and this. Version 4 of a patch? So we go ahead and patch, take the necessary downtime, and then version 5 comes out; we go ahead and patch, and version 6 comes out; we go ahead and... well, you get the point. Easier to shut everything down over the time change, manually set the time on the server, and continue from there. Luckily we have only one system that uses timezones, and the user group took this opportunity to punt it and purchase a newer system that does not.

Sigh... Rant over.

On a good note, the conversion is going along nicely. The consultants found some other consultants to help with the workload. All of the hardware and new network lines are in place and tested, and we have a target date of April 7th for the final move. The second test of the database move will happen the second week of March, as that is when the consultants say they will have the first front end ready for user testing.

Wednesday, February 14, 2007

We Love RMAN

Say "We love RMAN, RMAN is great"

Say it again. And again, and one more time for good luck. RMAN coupled with a good backup strategy saved my... "our" butts this morning. Get an email from my monitoring stuff that a production database is down; about 2 minutes later get one from OEM saying it can't connect. Was sitting at a coffee shop having breakfast, so I go across the street to the office and take a look: yes, that production database is down, crashed hard. All 3 incoming lines on my phone are flashing away. Connect to the server, take a look, hmmm... that's funny, we are missing the entire /PROD mount point. Completely gone. Try my limited knowledge to see what's up, give up after about 1 minute and call an SA. After a quick conversation convincing them the mount point is gone, they go and look. This is a DAS box with good mirroring and RAID, so I am not concerned at all. On /PROD are the tablespace datafiles and a control file copy.
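That surviving control file copy on another mount point is just standard Oracle control file multiplexing at work. A sketch of what the init parameter might look like (the paths here are hypothetical, not our actual layout):

control_files = ('/PROD/ctl/control01.ctl', '/u02/oradata/control02.ctl')

Losing all of /PROD takes out the first copy, but the second copy elsewhere stays intact and can simply be copied back before recovery starts.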

15 minutes later, all 3 SAs come back: all 12 disks on the mount point are gone, dead, no lights, no comforting whirring noise. Nothing. No explanation why, but they will plug the enclosure into another server to see what they can see. We wait another 20 minutes and they can't get any of the disks to spin up; they even plugged a disk in on its own, nothing, finished like last night's dinner.

They have more disks and rapidly rebuild the enclosure, plug it back into the server, and start up the whole shebang. They get the mount point /PROD created and accessible after 45 minutes or so, then wipe their hands of the matter by telling the managers that all they can do is done; it is in my hands. By this time the managers (vultures) have been circling; now they have landed and are fighting amongst themselves over who will have the privilege of the first juicy eyeball to be plucked from the assumed-to-be-near-death DBA.

Copy a control file from another mount point on that server to /PROD.

$ rman target /

>restore database;

{wait about 15 minutes as files come off of our tape array online storage, go and get a coffee, mingle and socialize while the vultures (managers) eye me the entire time, asking why I am not at my desk}

>recover database;

{wait about 9 minutes, finish coffee and chat with the folks just coming into work}

>alter database open;

> exit

A few quick sanity checks.

Tell everybody it is back and start a backup just to have it.
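That post-recovery backup is a one-liner at the same RMAN prompt; a sketch, assuming the database is in archivelog mode as a production system should be:

$ rman target /
>backup database plus archivelog;

The "plus archivelog" part sweeps the current archived logs into the same backup, so everything up to the post-crash state is protected on fresh media.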

Smile brightly and continue on with my day.