Sunday, March 19, 2006
Application upgrade
Further to my post about project management and notification, I get myself invited to a planning meeting for the application upgrade.
They have done vast amounts of testing: they have 4 databases, they have run the application upgrade in all of them, and everything works smoothly. It takes about 8 hours for the patching scripts to run, then another 12-13 hours to load the old data that was extracted into the newly patched application. I simply ask one question: "Do you think you have enough room in production for all that data?".
I know I am in for problems when I get a bunch of blank stares and open mouths. "There should be" and "yeah, probably" are the common answers... then comes the answer that almost cost me my job, because I could barely restrain myself from telling this senior fellow exactly what I thought of him: "Well, that's YOUR job, isn't it?".
I dang near lost it on the fellow. So I simply snapped back something to the effect of: you should have told me sometime in the last two months, not less than a week from go-live; you're not my wife and I can't read your mind.
So, after the meeting I go and take a look at the 2 dev databases I built for them. I know how big they were when I built them, and I know how big they are now. Final numbers: 23 gig larger.
23 gig for an application patch, 23 gig. I am stunned.
23 gig. I can't get over it. I check the other database, yeah 23 gig.
23 gig.
More than a 50% increase over what is already there. I talk with the support people, and once again I get the blank stares and open-mouthed answers I hate.
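For anyone wondering how I came up with the numbers, the size comparison is nothing clever; it is just a quick total out of dba_segments, along the lines of this minimal sketch (run as a DBA user):

#!/bin/sh
# Minimal sketch: total used space in the database, straight out of dba_segments.
sqlplus -s "/ as sysdba" <<EOF
select round(sum(bytes)/1024/1024/1024, 1) as used_gb
from   dba_segments;
EOF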
Fortunately, there is lots of room in production for the data. I make sure the UNDO is set to grow and there is enough room for the archive logs. I am thinking of approaching them with some hints and suggestions on speeding up the data load, as I have caught a snippet or two of their insert scripts: plain insert scripts into tables with the triggers and indexes still active. If they are happy sitting in the office for 24 hours, so be it.
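As for making sure the UNDO can grow and the archive logs have somewhere to go, the checks amount to something like this (a rough sketch; the UNDO tablespace name and the archive destination path are assumptions, adjust them for your own setup):

#!/bin/sh
# Rough sketch: confirm the UNDO datafiles can autoextend and see where
# the archive logs are going. Tablespace name and path are assumptions.
sqlplus -s "/ as sysdba" <<EOF
set linesize 120 pagesize 100
column file_name format a60
select file_name, autoextensible, maxbytes/1024/1024 as max_mb
from   dba_data_files
where  tablespace_name = 'UNDOTBS1';
show parameter log_archive_dest_1
EOF

# free space on the (assumed) archive log destination
df -k /u02/oradata/arch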
I recommend scheduling a full RMAN backup while the users are still in, then a full export after the users are out, and to top it all off a cold backup of the database, since it has to go down for the patching anyway.
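The RMAN piece is nothing fancy; something along these lines covers it (a sketch only; the disk destination is an assumption, point it wherever you have the space):

#!/bin/sh
# Sketch of the hot backup taken while the users are still in.
# The backup destination is an assumption.
rman target / <<EOF
run {
  allocate channel c1 device type disk format '/u03/backup/%U';
  backup database;
  backup archivelog all;
}
EOF

The cold backup at the end is nothing more than an OS copy of the datafiles, control files, and online redo logs once the database has been cleanly shut down for the patching.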
I have an export shell script that splits the database up into chunks based on dba_segments, aiming for 1 gig files: it builds export parameter files listing enough tables to keep each dump file under 1 gig. For the objects over one gig, it creates parameter files that use the export FILESIZE functionality to split the dump into 1 gig files.
Then it creates the pipe files for compressing the dumps on the fly, and groups the exports into 4 shell scripts so I can run them as 4 separate processes on the server. It took me about a week to write the script, and it proved its usefulness the first time I used it. I will post it here if enough people ask.
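The general shape of the idea, for one group of tables, looks something like this stripped-down sketch (not the script itself; the schema name, credentials, paths, and file names are all placeholders):

#!/bin/sh
# Stripped-down sketch: size the tables out of dba_segments, then export a
# group of them through a named pipe so gzip compresses the dump on the fly.
# Schema name, credentials, paths, and file names are placeholders.

# 1. How big is each table? This decides which parameter file it lands in.
sqlplus -s "/ as sysdba" <<EOF
set pagesize 0 feedback off
select segment_name, round(sum(bytes)/1024/1024) as mb
from   dba_segments
where  owner = 'APPOWNER'
and    segment_type = 'TABLE'
group  by segment_name
order  by 2 desc;
EOF

# 2. Export one group of small tables through a pipe, compressing as we go.
PIPE=/tmp/exp_group1.pipe
rm -f $PIPE
mkfifo $PIPE
gzip < $PIPE > /u03/exports/group1.dmp.gz &
exp userid=system/change_me parfile=group1.par file=$PIPE log=group1.log
wait
rm -f $PIPE

# group1.par lists the tables that together stay under a gig, e.g.:
#   tables=(APPOWNER.SMALL_TAB1, APPOWNER.SMALL_TAB2)
# For a single table over a gig, skip the pipe and let exp split the dump:
#   tables=(APPOWNER.BIG_TAB)
#   filesize=1GB
#   file=(big_tab_01.dmp, big_tab_02.dmp, big_tab_03.dmp)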
1 comment:
I decided it would be best to teach the support people a few tricks. I gave them some examples of speeding up their inserts and explained why it is best to drop and then rebuild the indexes after all of their data is in and edited.
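The gist of what I showed them was along these lines (a simplified sketch; the table, index, and trigger names are made up, and the load is shown as an insert from a staging table for brevity):

#!/bin/sh
# Simplified sketch of the load pattern. Table, column, and index names
# are made up for illustration; credentials are a placeholder.
sqlplus -s appowner/change_me <<EOF
-- quiet the table first: no triggers firing for every row,
-- no index maintenance on every insert
alter table big_app_table disable all triggers;
drop index big_app_table_ix1;

-- direct-path insert from the staging table holding the extracted data
insert /*+ append */ into big_app_table
select * from big_app_table_stage;
commit;

-- put everything back once the data is in and edited
create index big_app_table_ix1 on big_app_table (some_column);
alter table big_app_table enable all triggers;
EOF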
Data load went from 12-13 hours to a little over 8 hours.
I am their hero and they swear they will involve the DBA group in any changes they do from now on.
It was a good day.