Monday, August 27, 2007

Documentation

The "you have to document code" theory has cropped up again, I have resisted writing about this in the past, but I am in the correct mood today. I thoroughly respect the writers that are all for heavy documentation of code, people like Eddie Awad, he is great but I do disagree with him and others.

I am not picking on Eddie, he just has the latest post on the subject that I have come across. Eddie has had a few posts about comments and documentation, the most recent being Self Documenting Code is Not Enough, where he references an article by Jef Raskin that I have read in the past, titled Comments Are More Important Than Code. I do agree with some of the writing, but where I heartily disagree is with the basic premise that comments are more important than code. My view on the matter is pretty darn simple. If you need comments to follow the code, get another job, because you don't deserve to be a programmer. If you can't spend 5 minutes and follow the flow of the code, even hundreds of lines of it, your local fast food outlet is always hiring.

Now, where comments do belong is in describing the business logic behind the code, not the code itself. The code itself is self documenting because, well, it is written!! Yes, you always use proper naming conventions, because everybody has their own naming conventions that are better than everybody else's. Using conventions is not documentation, it is job preservation.

The documentation belongs as normal human-readable sections at the beginning of the code segment, and in a Word document that everybody has access to but only 1 person (and a backup person) can update. I hate to say it, but flow charts and other visual items are the best way to get a point across. All the staff that look at the Visio can go "ooh, ahh, look at the pretty pictures... ahhh, look at the pretty colors.".

My top 5 reasons why database software projects fail.



  1. You have a scrum master.
  2. You follow agile programming, extreme programming or any other cluster f*ck methodology.
  3. The comment "we want to be database independent" is heard.
  4. The comment "We don't need source control." is heard.
  5. The number of minutes spent in weekly meetings by a developer is larger than the number of minutes spent in the bathroom by the same developer.

One of the reasons a developer gets let go in our organization is saying something to the effect of

"I was writing the documentation, I didn't have time to finish the program that is why I am X days late"

In their past companies, they found that saying "I was doing documentation instead of code" was a perfectly acceptable excuse for being late. We translate that comment into "Please fire me because I am lazy and don't respect deadlines."



We have been doing a bunch of hiring lately - as normal - we just can't keep up with the Oracle development positions, so I might be just a little hard on the candidates. I have recently streamlined my end of the hiring process greatly. The first 3 questions in the interview:

  1. Who is Thomas (Tom) Kyte?

  2. Where can you get all of the oracle documentation online?

  3. Have you read the application developer fundamentals guide for oracle <latest version here>?

Without a quick, correct answer to at least 2 of the three questions, I basically say "Thanks, but no thanks" and hang up.



Tuesday, August 21, 2007

Oracle 11g

Well. Since everybody is talking about Oracle 11g, and I have been really quiet lately... I spent today installing Oracle 11g, then I upgraded a test database to 11g with zero problems.

I ran some of our testing scripts, and performance and results matched what had been documented earlier in the month. So, I bit the bullet and upgraded a production database to 11g. Now, when I say "production" I mean a 3 user database coming in at just under 2 gig in size including the system tablespace, so awfully small. It is simply a way for users to search for files that were created before 2000; it started life as 5 spreadsheets compiled to a common format and loaded. It is a recent addition to our systems, and I have all of the base data, so it can be loaded again if need be.

So, we are successfully running an oracle 11g database in production.


Woot Woot.



Monday, August 13, 2007

SPAM

Due to the LARGE amount of spam this blog is getting, I am going to switch comments to registered bloggers only. Sorry all, but I have had enough of deleting the SPAM posts. Death to all spammers.

Friday, August 03, 2007

Don and Jonathan at it again

Once again they are at it.

http://forums.oracle.com/forums/thread.jspa?threadID=527201&start=0&tstart=0


All I have to say on the matter is this: Don Burleson and his employees' comments, scripts, "how to's" and expert advice have screwed up more than one thing for me, mostly due to me trusting them without actually paying attention to what was going on. Nothing from Jonathan Lewis has ever failed me.
Don Burleson has an interesting outlook on life - check out his personal blog. I won't link to it, but just google "don burleson blog personal" and it is the first hit.

Rather enlightening to see his view on life.


Wednesday, July 04, 2007

BAAG

I joined the BAAG party awhile back - Battle Against Any Guess.

Go and give it a read, especially you folks that send emails with a subject of PLZ HELP or URGENT PLZ or something similar.


Monday, June 18, 2007

User tracking

This is so obvious - at least to us here - that it really kind of shook us up that we were not doing it. I was talking to my friend Jeremy, and he mentioned his company had rolled out an internally built APEX app for tracking users and their accounts across all of the applications in the company. It took a while of him explaining before it dawned on me how good of an idea this is.

They (as do we) have a database that is used for tracking employee phone numbers. With the multitude of payroll systems picked up through acquisitions, having a single repository of all the active employees in the company that can be referenced even if those employees do not have network accounts is very important. We use our central database primarily as a place for employees to store their phone numbers and other contact information, which is then accessed via a few JSP pages. Jeremy and crew went a few steps farther. They actually replicate the employee information into this central database from all their various payroll systems, so the information on where each employee is working, name changes, terminations, new hires etc. is automatically looked after, and there is no out-of-date information when somebody transfers or quits. They then (and this is the brilliant part) pull in all of the user information from every single application in the company into this central database, giving them user lists from every application. Those user lists are then compared to their service desk software (incident tracking stuff), and all of that information is compared against the employee information from the payroll systems.

With all of the data in a central spot, they can now issue a simple query on a person, and every application tied to that person is displayed, along with whether each account is active or not. They also flag new hires and terminations and automatically generate tickets in their service desk software to have a help desk person investigate if an account is needed, and to verify that the request for a new account is actually for a live person according to payroll. Termination notices no longer rely on a manager at some location filling in the proper paperwork and forwarding it to the IT department. Now the IT department is aware of the termination and what applications the user had access to, and can pursue the site to make sure the paperwork gets filled out.
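The payoff query once everything is in one spot is almost trivial. Here is a rough sketch of the idea - all of the table and column names are my own invention, not Jeremy's actual schema:

-- application accounts that do not map to an active employee
SELECT a.app_name, a.username
  FROM app_accounts a
  LEFT JOIN payroll_employees e ON e.emp_id = a.emp_id
 WHERE e.emp_id IS NULL
    OR e.status <> 'ACTIVE';

Every row that comes back is either a duplicate/orphan account or a person who has left the company - which is exactly the list that gets turned into help desk tickets.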

In the first few weeks they were able to reduce their user count by almost 10% due to duplicate accounts, people that were no longer with the company, etc. That may not seem like much, but he said it was $20K in Novell licensing alone. Some reports have also been put together to give them detailed user counts by application and server for licensing and usage purposes.

Very simple idea, but it yields so much information. Jeremy did it all in APEX and Oracle 10gR2 on Linux, on an old piece of hardware. I will see if I can get him to put together something technical on this; I can almost smell a business opportunity here.


Good show!

Tuesday, June 12, 2007

SOA

We have been working on implementing Oracle Fusion Middleware for our SOA solution - ESB and BPEL and all the other bells and whistles, BAM and BPA. I have to say, even for an old dog like me, this is an amazing concept. We had IBM and Oracle come and visit numerous times flogging their software, and we decided on Oracle simply because their products seemed to be more tightly integrated and basically looked better. The kicker was that everything is done via JDeveloper instead of 3 or 4 different products like IBM's SOA stuff (WebSphere).

I have built a handful of adapters for some interfaces I know very well. I was able to retrieve data without a problem, and I then took a crack at creating ESB and BPEL services to "listen" for changes and move the data around. All I can say is wow, the ease with which data can be mapped and moved around is astounding. The largest time saving is that an adapter only has to be made once per system; then a simple BPEL assign does the mapping and presto, data acquired, transformed to a common format, and then transformed to what the destination system requires. Then the ESB process sits and waits for more data to come in. Wonderful stuff, no wonder everybody is talking about SOA, even though it is in hushed voices and darkened hallways. The learning curve is extreme and my knowledge of XML, XSD and XSLT has increased a hundred fold, but it is all worth it.

After this I am off to update my CV ;)

We have a few training classes scheduled to try and get the rest of the department some exposure to the products. BPA - Business Process Analyzer, I believe, is what the acronym stands for; there are more than a few to try and remember. BPA is a product that I have just started to look at. We bought a lot of licenses for it and, as the marketing says, it allows business "super users" to actually map the work flow in an almost Visio-like environment and pass that to a developer, and with a lot less work than I expected, the developer can turn that into a BPEL flow and deploy the application. The best part is that the changes to the flow the developer needed (if any) are automatically passed back to the user that created the original document.

Really cool stuff.




Thursday, May 10, 2007

Well, I made a list

I read Eddie Awad's blog all the time. Once again, I was quite surprised when I went scrolling through the listing of the Top Oracle Blogs Ranked by Technorati Authority and, lo and behold, I am on the list.

#124 out of 130, but on the list nonetheless.



Tuesday, May 08, 2007

Date columns

I have a good friend from a previous job who is an Oracle DBA; he is kind of shy and doesn't like a big presence on the net. I have blatantly stolen work from him in the past, including some excerpts from this particular document (always with his knowledge). But the poor bugger had the lack of foresight to send me this complete PDF document. So, I quickly decided I had better post it, just to show him blogging isn't all evil and invitations to flaming.

That said, here is Jeremy's quick write-up on Oracle date columns that he did for his internal support staff. Absolutely nothing earth shattering or ground breaking, but I can't see any errors in it, and it is good to give to people who can't speak "DBA". Yes, I have used this elsewhere in the past as well. But this is the complete PDF.


Monday, May 07, 2007

More APEX

APEX is great, APEX is a life saver... OK, job saver?


As I mentioned before, I like APEX, but not having the time to develop in it and learn it well, I get a little lost in the product. The APEX forums are a good place to pick up information, like most forums, if you use them properly. I read Dimitri Gielis' blog on a regular basis; imagine my surprise when he posted about answering one of my questions.

Thanks again Dimitri!



Friday, May 04, 2007

Fire Fighting

With the conversion of '07, staffing issues, hardware failures, fires, and the day to day grind, my job satisfaction over the last year has, well, been at an all-time low. I can never seem to get anything done properly; everything is patched together and done in haste to move on to the next fire. One of the fine ladies that works with me actually bought everybody in the group a little fireman's hat and a tiny little working fire hydrant and hose squirt gun, because that seems to be how we are viewed by the company as of late: simply running around putting out fires. A necessary, expensive evil entity who never has time to talk, and is only around when there is an emergency, or a fresh pot of coffee.

Well, today was a little different. 4 of us IT people had taken a group trek to the lunch room for a refill, and we bumped into a few of our sales staff. Once we all realized we were on hallowed neutral ground and no marking of territory was needed, we started to chit chat. The weather around here has been record breaking, and that is what we were talking about, all relaxing nicely. Then one of the sales guys pipes up, completely out of the blue, and says "So, any of you ever look at our order processing and shipping system?". Well, the system is a bolt-on to our Oracle E-Business Suite and is relatively new. Development was outsourced, and then the application support was laid in the lap of our haggard sales support team. None of us IT people were part of the sales support team at that time, so the answer was a resounding "No". I monitor the database, but it is quiet and never complains about anything.

The sales guy goes on to talk about how, over the last 2 months, it takes longer and longer to process an order, to the point that the sales guys queue up their orders and let the system process them overnight, because if they process an order manually, it locks up the application for as long as the order takes to process. The sales guy, let's call him Henry, goes on to say that when the system was first in place, an order would process in about 1-2 seconds; an order he processed this morning, while a customer was on the other end of the phone, took 7 minutes. I know his timing is accurate because the screen actually tells the sales person how long the processing took.

Henry continues on, saying all of the sales folks have raised the issue with their manager, who was supposed to pass it along to the IT group, but apparently never has. I decide to return to my cube and take a look. I do the normal thing right off the bat of starting a snapshot every 10 minutes, and dig up some of the baseline snapshots so I have something to compare to. I call Henry and ask him to process an order so I can trace his session. I find his session and start a trace, he processes an order, and his session goes inactive after only a couple of seconds. I have Henry on the other end of the phone, and he says the process is still running. I can plainly see that, according to the DB, the process is over.
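For the curious, finding and tracing a session like this is only a couple of commands. A minimal sketch, assuming a 10g database and DBMS_MONITOR - the username HENRY and the sid/serial# values are made up for the example:

-- find the session
SELECT sid, serial#, status, last_call_et
  FROM v$session
 WHERE username = 'HENRY';

-- turn tracing on, have the user do the work, then turn it off
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);

The trace file lands in user_dump_dest, ready for tkprof. In this case, as you are about to see, the database was telling the truth - the time was not being spent there.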

The app is a Delphi fat client, so I can rule out any connection pooling or anything like that. I go up to Henry's desk, and the process is still going. I wait until it finishes, it takes almost 7 minutes, and he reviews the PDF of the order that pops up, then clicks the forward button and lets the work flow take over and push the order off to the next people in line. Those people happen to be in the cube right next to Henry, so I check with them. They say yes, they see they have an alert that the order has come in, but the lady does not want to open the order because at this time of day it can take almost 10 minutes for the form to open. First thing in the morning, she says, she can process a hundred forms in just a few minutes, and I should come back tomorrow. I push the issue and she opens the form, and yes, it takes - get this - 7 minutes.

Speculating to myself on what the problem might be, I go down to the sales support group and talk to their team lead. They know about the problem and it has been a ticket in the help desk for awhile, but they are short staffed by 3 and do not have time to deal with it any time soon. I ask if I can, and of course am promptly handed the keys to the system and told to "have fun, don't change anything". I take a look and I find the PL/SQL that generates the PDF; it is simply an external call to a java program via the client that reads the DB and generates the PDF. I can run it in test no problem, it generates a PDF in a couple of seconds. I ask Henry to try it in test, he does, and it takes just a few seconds.

Well, this is turning into a really long story, but I needed to lay the groundwork, sorry about that. Continuing the story, I decide to check out the DB server. Lots of free space, minimal CPU usage; our reporting software shows that the machine only averages about 70% CPU usage over the course of a few days. Well, that isn't it. I then cd into the temporary holding location for the PDF files and do an ls. Then I wait, and wait, and wait, and wait. I open another session and check the server. No CPU usage, no nothing. The server being a Linux box, I run top and take a look, nothing out of the ordinary; I run iostat, nothing out of the ordinary. Very confused, I check the other session and it has completed. I run "time ls" and wait. I am sure you can guess it, time comes back at just under 7 minutes. I check the scroll back on my terminal; it goes back to the max of the buffer, 2000 lines. I do an ls | wc -l, wait the 7 minutes, and get the outstanding number of 532,932. Wow, I didn't think that was possible, that is a lot of files. Then I realize that with the file name sizes, the ls output is 4 columns wide. That is 2,131,728 files in that one directory, assuming they all have the same file name length. I go back up one directory, do an ls -lrt, and the directory is only 2 bytes in size. That doesn't seem right; with that many files in there, the directory header should be massive.

I call the SA group and have them check. They come back, surprisingly, about 10 minutes later saying "Yeah, the directory is f*cked". It is a mount point on the NAS, and they can't even see it from the console. The NAS is mounted to the DB server and as a share that the sales people can access. A few phone calls later, we confirm that the PDF's are temporary and not needed, we down the DB with the users' consent, and the SA's unmount the drive and remount a new empty one immediately. Once we bring it back, we call Henry, he tries it, and an order processes in under 2 seconds. We look at the directory and there is a single PDF in there. We know we haven't written 2+ million orders in 2 months (we wish!!), so the SA's work on getting some files out of the old directory so we can take a look. While waiting on the SA's, I monitor the directory; there are now 3 files, but Henry is still the only user in the system, we haven't released it to the public yet. I call Henry; he processed 1 other order, not 2 more - he swears by it. I ask Henry to process another, he does: now 7 files. And another one: now 15 files. I tell Henry to stop, and I call the support team and tell them about the problem.

The sales support group immediately drop everything and dig into the issue. It takes the team lead about 10 minutes to find the problem. The Delphi app, just in case, backs up all files submitted by that user before it processes the current order. It assigns a sequence number and copies the files, effectively doubling the number of files on disk for that sales person before the new order is generated. The user's PC does all the work copying the files. The code to remove the backup files is instead a simple comment, saying something to the effect of "delete was taking too long, investigate using a cron task to delete, the directory can fill up fast", made by one of the sales team support people who left awhile back. A quick change, and bingo, the code will delete orders older than 24 hours and not make any backups. Since the app is run from a network drive, the sales group compiles a new version and pushes it out. After letting a few more users in, we verify the problem is fixed and go home for the day.

This made me feel good. Nothing to do with problems in an oracle database, no meetings, no hours of planning and discussion. Just suit up, get to work, and put the fire out...


sigh.




Wednesday, May 02, 2007

Migration from Access

A few months ago, we hired a consultant to work with a group of users and start migrating the numerous Access database applications and Excel spreadsheet "applications" to APEX. At the time it was APEX 2.0. We created them a database on a little-used server and gave the consultant about 100 gig of storage and a list of items the consultant must do and must not do, written into the contract.

We heard very little about the project; the manager in charge pretty much left them on their own. Today we had a meeting which turned out to be the "wrap-up" meeting for the project. The consultant and the users, in under 4 months, converted 16 MS Access databases and 44 Excel spreadsheets to APEX applications. LDAP authentication and a centrally managed user list, as all the apps are in a single APEX workspace. The project is magnificent! The users are happy, all the rules have been adhered to, and the consultant came in under the time limit, getting the bonus money that was part of the contract. Everybody is happy all around. I looked into the DB: proper referential integrity, very little bad SQL, and the schema is organized well, with non-shared tables prefixed with a TLA that they came up with for each particular MS Access or Excel application. Applications talk to each other when necessary, and the users are reporting much better productivity and no missing transactions or data.

Every application is snappy and efficient. The consultant and the project user lead also signed off on retiring the 155 MS Access licenses, which will be a nice thing to stop paying for when we renew.

All in all, a good day!

APEX is amazing... I still struggle with some basics on it... but I am a DBA, not a developer. We already have another meeting with the consultant to put them onto another APEX project we have waiting in the wings.



Monday, April 23, 2007

The big leap

In the last 8 months we have cycled through 4 MS SQL Server DBA's at one of our partner/remote sites - 11 servers, 11 databases. We are responsible for the IT services at that site, and hiring SQL Server DBA's has become a 4 letter word. The original long-term fellow had health problems and had to go on permanent medical leave at 40, I wish him luck. The first fellow we hired was a complete and utter useless twat whom we fired after 2 months. The next fellow was OK, if a little too Microsoft branded; he had pretty much every MS certification known to the universe and made sure you knew it. He only lasted about 3 weeks, when it was discovered he had lied on his application and had a criminal background. We don't care if you do, just don't lie about it, because the position requires being bonded. We also changed the company we use to check things like that, because they were a little slow. This last lady, well, let's just say she fell somewhere in between utterly useless and waste of skin, and we let her go 2 weeks ago when 2 of the servers went down hard and it was discovered she hadn't been doing backups since not long after she started. Her supervisor at the site was given a good dragging over the coals as well.

I was asked if I would like to learn to be a SQL Server DBA right around the time of the first replacement. I said "No thank you". When we let Mr. MS go, I was asked again. I said "I do not have time". When the 3rd guy was let go, I was asked again and I said "Starting to look that way, isn't it". And now that the last (and only) lady is gone, I was begged to learn SQL Server, so I said "OK, but only in a backup capacity" - of course, if I believe that, I am sure the tooth fairy and Santa are going to come visiting bearing gifts of great winged pigs. I know enough about SQL Server to get around: start, stop, run some scripts. Nothing to do with performance or optimization or anything like that.

So, this past weekend, I started to read the documentation for SQL Server, and immediately began to miss Oracle documentation. MS docs suck. Nothing else to say, they suck. I read what passes as the concepts guide, and started to wonder why MS charges for SQL Server; it should be given away for free because it isn't worth any money to anybody at all.

I have put myself completely at the mercy of a company we use for training purposes, and they are coming up with a complete MS certified training course for SQL Server, with all of the prerequisites needed, over the next year. I shudder to think of it. The admin at the site is also making me a VM of the test server so I can play with it on my laptop.




Monday, April 16, 2007

Conversion of 07 is done!

It required a marathon 36 hours at work for 3 of us, but it is complete. The users have been using the new production system for almost 24 hours now. The new system is orders of magnitude faster than the old one; the users are ecstatic.

The only issue we had was about 2 hours into the data transfer, when some chucklehead at the remote site accidentally stepped on the power bar for 2 of the 4 new internet routers, but the error checking took care of it. Once the modems were powered back on, the process realized it and started the transfers again.


Monday, March 26, 2007

Application Express 3.0

We have been eagerly waiting for the release of APEX 3.0. Last week, APEX 3.0 was made available for download. We use APEX extensively for internal application development and are porting many internally developed Oracle Forms applications to APEX. The users love APEX, the IT people love APEX, and I as a DBA love APEX. We were looking forward to the PDF printing most of all; the ability to build PDF reports in APEX was going to be a huge feather in the cap for Oracle, and there was serious talk about having a vendor port a major application from a Delphi fat client to APEX. The vendor was on board, everything seemed to line up.

Then we installed APEX 3.0. The upgrade and installation on our test environment went perfectly, no problems at all. We started to read the documentation about the PDF printing, and we realized we need to install XML/BI Publisher to take advantage of the robust PDF printing that was all the rage. Well, that burnt us. Yes, we can install the FOP stuff and do really basic reporting, but we had hoped the full PDF generation was part of APEX 3.0 itself.

We have had XML/BI Publisher on the radar for awhile, knowing we will have to go to it with our Oracle E-Business Suite upgrade later this year. I guess I had better upgrade our evaluation copy to the newest version, see what has changed, and try and figure out some business cases for spending the C$45,872 per CPU for XML/BI Publisher.


We still love APEX 3.0, wonderful product; if you're not using it, you should figure out a reason to.



Thursday, March 22, 2007

Oracle sues SAP

Interesting


http://www.theregister.co.uk/2007/03/22/oracle_sues_sap/


I had seen some rumours floating around that Oracle was going to try and take over parts of SAP. I guess the rumours were wrong, but some legal work was brewing. Makes you wonder who started the rumours to begin with.



Monday, March 12, 2007

Ramblings

I am bored. I am sitting at my desk staring at a very slowly moving tail -f on an RMAN log, copying a production database to test, 52 gig of data; the fun never ends here. It is almost 8pm and I have been here since 7am this morning, so let the ramblings begin.

I am an avid follower of Jonathan Lewis and his articles on the Oracle optimizer. I have his book "Cost-Based Oracle Fundamentals", and I have actually managed to read the entire thing from cover to cover. I can guarantee that most of it went straight in and straight out, leaving little behind. I hope at least that enough of it remains to have a positive effect at a later date.

I had an actual occasion to use Tom Kyte's and Jonathan Lewis's blog entries on ordering a query, showing a developer he can't rely on the order of rows in a table, because there is no order in a normal heap table. Didn't take much; I simply forwarded him the links and let him try to find a way to prove them wrong. I haven't heard back from him.
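If you want to demonstrate it for yourself, it only takes a minute. A quick sketch - the table and data are made up, and the exact order you get back is the whole point: it is not guaranteed:

CREATE TABLE HEAPDEMO (ID NUMBER, PAD VARCHAR2(100));

INSERT INTO HEAPDEMO
SELECT ROWNUM, RPAD('x',100,'x') FROM DUAL CONNECT BY LEVEL <= 1000;

-- free up space in the middle of the table
DELETE FROM HEAPDEMO WHERE MOD(ID, 3) = 0;

-- new rows are free to land in the holes left behind
INSERT INTO HEAPDEMO
SELECT ROWNUM + 1000, RPAD('y',100,'y') FROM DUAL CONNECT BY LEVEL <= 300;

COMMIT;

SELECT ID FROM HEAPDEMO;              -- whatever order the full scan finds them in
SELECT ID FROM HEAPDEMO ORDER BY ID;  -- the only order you can rely on

DROP TABLE HEAPDEMO;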

We have 2 ISV's, let's call them Bob and Doug, that have really been causing me grief lately. These are small shops that unfortunately have developed two systems that have become integral to our production. It is amazing: these two companies are located within a few kilometers of each other and have no idea of each other's existence, but they cause me the same troubles. The troubles are always the same - no apparent in-house testing of patches or upgrades; they appear to be under the impression that is what our IT staff is for, testing the ISV's code. Bob is database happy; they keep asking for more and more databases on our side. In our environment (they VPN in) they have a production, a test, a dev, and a QA instance for each province we operate in - a total of 12 instances, each around 50 gig in size. Now, that doesn't seem like much, but we do absolutely no development internally - none at all. These instances are here basically because they don't have the server space available to have what they think they need, so they burden us with the responsibility of keeping the databases backed up and in good working order. The copy I mentioned at the opening is being done on their behalf. We pay them support, yet we maintain their support environment.

Doug, on the other hand, is amazingly skilled at stalling problem fixes long enough that the users simply forget and develop workarounds. When Doug does release patches or upgrades, something always, consistently goes wrong - never during our testing, of course. They give us a list of what they changed, we test that, and we do a general test of everything else. This particular example is year end stuff. In October they released an update that worked pretty well; they only had to release the update to us for testing 4 times, which is a new minimum record, the maximum being 21 times. We tested, users signed off and away we went. January comes along, users do their month end and everything works great. February rolls in, we are doing an internal audit between Doug's system and our financial system, and the auditors notice a rather minor $60K variance. Tracking it back, it turns out that in the October update the developers at Doug's company slightly modified a view that is only used at month/year end, "for performance purposes", and never told us about it. Their solution to the performance problems was removing from the query a rather important table which tracked and accounted for manual changes users made to the data.

The user that sent the data should have read the reports and caught it early, so Doug made sure the fault landed solely on the user's shoulders. So, the users had to make two correcting entries in the GL. Luckily the discrepancy was small and we didn't have to change our year end results. We are still waiting on Doug to give us a document on any changes necessary to their application for DST.

We are hiring an Oracle applications support person and 2 IBM Lotus Notes (shudder) developers. I haven't quite figured out where management plans to seat them; our cube farm is pretty packed together, with each of us only getting about a 9 foot square of space. Maybe they think we don't need that easy access to the fire escape and they can cram one in there. What they plan to do with the other two, I do not know. Possibly stack us up: lay some flooring across the tops of the cubes and put the cubes two high. We do have a very tall ceiling.

As for my "shudder" about IBM Notes, I have no bad feelings toward the developers that use Notes, I just hate IBM Lotus Notes, it is simply one of the worst programs ever created. We have the newest version of it (7x) and it still sucks. The only time the IBM team that develops Lotus Notes stops building something that sucks is when they start to build vacuum cleaners.

Oh boy, we need to work on the I/O on this test system. I swear there are gerbils in that server running back and forth with floppies in their mouths, transferring the data between disks. 52% done.

I managed to get management sign-off on upgrading an Oracle 7 database to Oracle 10gR2. Apparently the company that supports the application uses 10gR2 internally, even though the majority of their customers are still on Oracle 7. The application is 100% web based, using some web language I can't remember, so the upgrade is apparently really easy and they are going to supply us with the necessary scripts. That will leave us with only one Oracle 7 database in production.

uggg... I have to type slower, the RMAN log hasn't moved in minutes.

I have been keeping up with what was happening at Hotsos this year by reading Doug Burns' blog. His housemate of the month is amusing, and his technical knowledge and writing style are well above average. I also came across Don Burleson's personal blog; I follow the forum he hosts. Well, let's just say that starts a whole new chapter on that fellow for me. I know he needs to plug his and his fellow Rampant Press authors' books, but come on :).

I see my RSS reader is showing me that David Aldridge has posted again, finally after a very long time.

The Conversion of '07 continues later this week; this is the final test before we have to do it in production. I will write a note or two on how it goes.

Well, enough rambling for now.








Sunday, March 11, 2007

DST over

Well, the time has passed and no problems related to DST have cropped up. We all breathed a sigh of relief. Some frantic last minute patches were put in on our large JSP application - when I say last minute, I mean just after midnight this morning, a few hours before the time switch.

The only major casualty, which is out of our hands, is that our cell phone provider seems to have had some issues. All of the cell phones switched just fine, but none of the BlackBerries did. Oh well, a manual switch of the time and good to go.

DST is over...


for now.



Thursday, March 08, 2007

OEM 10gR3

For those of you that remember, about a year ago we got OEM 10gR2 installed and running, and I was hopeful. Well, about the only thing OEM got used for was downtime reporting by a manager. The occasional email from the system when something came down, but not much more. No new agents installed on new servers, nothing kept up to date; basically OEM 10gR2 was a dismal failure.


Well, we have decided to upgrade to OEM 10gR3, take another kick at the can, and see if we can get OEM configured the way it should be and used the way it should be. Our big push in this came from our Oracle technical sales representative. He came by (for free) and learned our environment over 2 days, and then he presented some very compelling reasons for using OEM during a 7 hour presentation. Every single one of the reasons was expensive, but he got the managers convinced.

We will be upgrading (or reinstalling) to R3 by the end of March. Our sales rep gave us a 60 day free trial of all of the OEM packs on all of our servers to get me hooked, as well as 5 full days of the technical sales rep here helping out with configuration and the proper way to use it. For an estimated $900K bill (before discount) to have the OEM packs on every database, the non-Oracle database servers monitored as well, and our SQL Server databases plugged in too, Oracle is willing to spend some time.

We are also looking at purchasing Oracle Fusion Middleware as our SOA solution, so later this year Oracle is going to make some money on us.

I am now off to a presentation where I am going to try my darndest to convince management that we really and truly need to upgrade our Oracle 7 and Oracle 8 production databases.



DST... Ready?

Well,

We appear to be ready for the big bad early DST. Yesterday we rolled every single one of our test servers forward and waited for the OS to do the switch. No databases came down (yeah!); unfortunately, some of our vendor supplied patches for our large JSP based application seemed to have failed badly, as the application simply refused to allow data to be entered - "PO create date can not be past PO update date" or something like that.

Now, for the databases where test and production are on the same server - well, that is going to be a "fingers crossed" type of fix. There will still be a large number of IT people in on the morning of the 11th for the old 'just in case'.



Saturday, March 03, 2007

Insert into multiple tables from a single query

A friend who does not blog wrote this up for his co-workers; it is straightforward but useful. Enjoy.

A few days ago someone asked if it was possible in an Oracle DB to insert into multiple different tables from a single query. I said "Yes, of course", they asked "So, how?", and I of course said "RTFM". Well, here it is, a little easier to read than in "The Fine Manual".

This works all the way back to Oracle 9i (where the multi-table INSERT was introduced), so feel free to test it out. But, since it does drop objects, please do it in a test location. Personally I recommend everybody download and install Oracle XE ( http://www.oracle.com/technology/software/products/database/xe/htdocs/102xewinsoft.html ) on your PC; it gives you a nice safe place to work, play and learn, and as an added bonus Oracle XE comes with Application Express (APEX) already installed and ready to go. Now, having said that, I don't support PC's, so who knows what it will change in the configuration on your PC. Do so at your own risk.


First, simply create some test tables and a sequence for later use in this example.

SQL> CREATE TABLE BASETABLE (BASEID NUMBER PRIMARY KEY,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST1 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.01

SQL> CREATE TABLE DEST2 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL> CREATE TABLE DEST3 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.06

SQL> CREATE TABLE DEST4 (DESTID NUMBER PRIMARY KEY,BASEID NUMBER UNIQUE,BASEDATA VARCHAR2(30));

Table created.

Elapsed: 00:00:00.03

SQL>

SQL>

SQL> CREATE SEQUENCE DESTID_SEQ;

Sequence created.

Elapsed: 00:00:00.00

SQL>

Insert some data into the base table for use later

SQL> INSERT INTO BASETABLE SELECT ROWNUM*-1,DBMS_RANDOM.STRING('A',30) FROM DUAL CONNECT BY LEVEL <=500;

500 rows created.

Elapsed: 00:00:00.09

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00

Now the actual insert. You can see the WHEN and ELSE clauses of the INSERT statement; you can have as many of those as you want, each inserting a different combination of columns in its VALUES section. In this case, I am using a sequence to satisfy the primary key of the DESTx tables, followed by the two column names from the SELECT clause at the end.

SQL>

SQL> INSERT ALL

2 WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

3 WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

4 WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

5 ELSE INTO DEST4 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

6 SELECT BASEID,BASEDATA FROM BASETABLE ORDER BY BASEID DESC;

500 rows created.

Elapsed: 00:00:00.01

SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.00


Now to show what happened. From the following query you can see that the BASEID of -1 was inserted and the DESTID was the very first record in the insert as shown by the sequence value of 1.

The following data was inserted based on the

WHEN BASEID=-1 THEN INTO DEST1 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.

SQL> SELECT * FROM DEST1;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

1 -1 uzvIPoJevGslWNzcsEULVsOIHrWtkA

Elapsed: 00:00:00.00



From the following query you can see that the BASEID of -10 was inserted, and it was the 10th row in the select query's result. This was inserted based on the

WHEN BASEID=-10 THEN INTO DEST2 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.

SQL> SELECT * FROM DEST2;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

10 -10 AzRwrjLpzvxtacxBOitYhGDGDuKmaU

Elapsed: 00:00:00.01


From the following query you can see that the BASEIDs of -100, -200, -300 and -400 were inserted. These were inserted based on the

WHEN BASEID IN (-100,-200,-300,-400) THEN INTO DEST3 VALUES (DESTID_SEQ.NEXTVAL, BASEID,BASEDATA)

line in the insert statement.


SQL> SELECT * FROM DEST3;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

100 -100 uJixIEqFTeZEBDOCPYkJgyipInuTdt

200 -200 ikmTNgdjGTjkINEGbxEFifWAetPBMt

300 -300 gKcFyianMOtGzdJzVlkjqaLPiwBkic

400 -400 prucyUxTqhPhUTzarsJRyFQYlOUlWz

Elapsed: 00:00:00.01

From the following query you can see that the remainder of the records in BASETABLE were inserted into the DEST4 table. If you look, you can see that BASEIDs -1, -10, -100 and -200 are missing. You will have to trust me that -300 and -400 are missing from the result set as well, but I didn't want this running too long.


SQL> SELECT * FROM DEST4 ORDER BY DESTID;

DESTID BASEID BASEDATA

---------- ---------- ------------------------------

2 -2 fPNMkRbJAEoeaWejzrAigZjKqZVzUl

3 -3 NDmRQNKmPhAnzfuWhLQDnWIcRVpjLF

4 -4 DoNnVEskItQAfANavQVHdJWdOeZbAc

5 -5 SNacUWsrPCPyLwDBxEtndSsiiSTmPW

6 -6 gLxiVlWXsdcLPhDgLThISCutKBfuOj

7 -7 sZCNlljiTveZPIUgyEBPalpJPrMdck

8 -8 UOwvqNxyPXcpsxRmjsxLQGfEsHQOqO

9 -9 WDwQqUnMHjDautMrYYBMCcjIoNWMKg

11 -11 BOfKwqtFZWQuLVEHFhMRHrfBGyeTfQ

<SNIP>

99 -99 VjmavGgzdQroTHutlhcOQjiqlTiLHW

101 -101 cjuHxrklWRaQmRJZyVShliswLRCgBm

<SNIP>

199 -199 xvaXYHPkexmFOkXCDBOODqjEatyMwY

201 -201 fXwQaaSTWAEDrYDqnRHVxLqcQEkbCZ

<SNIP>

500 -500 eLqsjEKEzWTmQUTsEtHFcRVEkEiQZz

494 rows selected.

Elapsed: 00:00:01.06

Now simply the cleanup.

SQL> DROP SEQUENCE DESTID_SEQ;

Sequence dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE BASETABLE;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST1;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST2;

Table dropped.

Elapsed: 00:00:00.04

SQL> DROP TABLE DEST3;

Table dropped.

Elapsed: 00:00:00.03

SQL> DROP TABLE DEST4;

Table dropped.

Elapsed: 00:00:00.01

SQL>

SQL> SPOOL OFF


New Look

I decided to finally allow Google to move my blog to the new, now no longer beta, Blogger.

It looks good. New "spot", I decided on a different look.

If you care, let me know if you have any issues with it.

Thanks.

Friday, March 02, 2007

More on DST

When did Oracle start being run by a bunch of morons?

This DST patch is simply a joke. There must be a punch line in here someplace, because there should be no way that a company with the size and past capability of Oracle can screw something this simple up so many times and on so many levels.


One of Oracle's marketing slogans


Oracle Is The Information Company


I guess only if your information is not time stamped.


What a crock... SQL Server simply looks just a little better now, but I am sure after the next 9 days of being awake at work patching databases again, I won't remember I said that.


Monday, February 26, 2007

DST

Is it just me, or does Oracle's effort towards DST seem to be a convoluted mess of notes, readmes, more notes and superseding items? Download this, this, this, this, and this, and possibly this, but only if you have this and this. Version 4 of a patch? So, we go ahead and patch, take the necessary downtime, and then version 5 comes out; we go ahead and patch, and version 6 comes out; we go ahead and... well, you get the point. It is easier to shut everything down over the time change, manually set the time on the server, and continue from there. Luckily we have only one system that uses time zones, and the user group took this opportunity to punt it and purchase a newer system that does not.


Sigh... Rant over.


On a good note, the conversion is going along nicely. The consultants found some other consultants to help with the work load. All of the hardware and new network lines are in place and tested, and we have a target date of April the 7th for the final move. The second test of the database move will happen the 2nd week of March, as that is when the consultants say they will have the first front end ready for user testing.



Wednesday, February 14, 2007

We Love RMAN

Say "We love RMAN, RMAN is great"


Say it again. And again, and one more time for good luck. RMAN, coupled with a good backup strategy, saved my... "our" butts this morning. I get an email from my monitoring stuff that a production database is down; about 2 minutes later I get one from OEM saying it can't connect. I was sitting at a coffee shop having breakfast, so I go across the street to the office and take a look. Yes, that production database is down, crashed hard. All 3 incoming lines on my phone are flashing away. I connect to the server and take a look... hmmm, that's funny, we are missing the entire /PROD mount point. Completely gone. I try my limited knowledge to see what's up, give up after about 1 minute, and call an SA. After a quick conversation convincing them the mount point is gone, they go and look. This is a DAS box with good mirroring and RAID, so I am not concerned at all. On /PROD are the tablespace files and a control file copy.


15 minutes later, all 3 SA's come back: all 12 disks on the mount point are gone, dead, no lights, no comforting whirring noise. Nothing. No explanation why, but they will plug the enclosure into another server to see what they can see. We wait another 20 minutes and they can't get any of the disks to spin up; they even plugged a disk in on its own, nothing, finished like last night's dinner.

They have more disks and rapidly rebuild the enclosure, plug it back into the server, and start up the whole shebang. They get the mount point /PROD created and accessible after 45 minutes or so, and then wipe their hands of the matter by telling the managers all they can do is done, it is in my hands. By this time managers (vultures) have been circling; now they have landed and are fighting amongst themselves over who will have the privilege of the first juicy eyeball to be plucked from the assumed-to-be-near-death DBA.

Copy a control file from another mount point on that server to /PROD, then:

$ rman target /

>restore database;

{wait about 15 minutes as files come off of our tape array's online storage; go and get a coffee, mingle and socialize while the vultures (managers) eye me the entire time, asking why I am not at my desk}

>recover database;

{wait about 9 minutes, finish coffee and chat with the folks just coming into work}

>alter database open;

> exit

A few quick sanity checks.


Tell everybody it is back and start a backup just to have it.

Smile brightly and continue on with my day.





Friday, January 26, 2007

Conversion of '07 - First stage finished

The first export & load process finished: 304 gig of data transferred and loaded in 188 hours. Only 15 tables had problems, and those were all easily corrected and scripted around for the next test. We had some network problems too, which caused us to rewrite the process script on the Linux server to pull the file size that was created on the old server and verify it against what actually appeared on disk. We are going to take a different approach to our transfers for the next test: we are going to have 5 "pipes" to transfer data over, as the network guys promise to have 4 VPN's connected by the end of next week. It will simply be 5 different IP addresses to transfer to, so we can manually split the transfer up over the dedicated pipes and the shared pipe. The network guys estimate we should be able to cut our network transfer times by about 45%. The first shipment of new drives for the SAN is on its way; the SA says it will be installed and functional by the 2nd week of February.

I have assigned the PL/SQL verification to 2 application support people who had the unfortunate bad luck to look bored at a meeting, and I have delegated the rewriting of the .sh that creates the scripts to precreate the tables before the import to a developer here. I put an SA in charge of co-ordinating the hardware and OS installations so I don't have to. I have come to realize that being the project leader is a good thing: I don't have to do it, I can get someone else to do it and just verify they did it right.


More good news on the hardware front. Our broker found us hardware sooner than expected, and HP came up with a supported OS structure that will run what is necessary for the contractors to move/port the application to a different apps server. Timing is going to be an issue though; our "soft" deadline is quickly approaching, and already the MS Project estimated finish date is 12 days behind our deadline. The consultants rewriting the front end have hit a major roadblock, pushing their time estimates out over a week. As I said though, it is a "soft" deadline, as the old system will still be functional, but the contractors will become extremely expensive to continue on.

Things are still going well though, and I am confident that the Oracle end of the transfer will be smooth.

Monday, January 22, 2007

Conversion of '07 - Still Processing

Well, the FTP and load process is still going at 130 hours, but so far very few errors, and the few that are happening are easily bypassed or fixed. I am very pleasantly surprised. Our time estimates have greatly increased due to the higher than expected bandwidth use by the users during business hours. If everything continues on track, the last file should be sent via FTP this coming Wednesday morning.

I used a combination of DBMS_METADATA and CTAS, extracted the create DDL, and created the entire database sans data on my laptop. I have gotten all of the DDL for the PL/SQL run and the majority of the objects created. A boatload of errors and uncompiled PL/SQL, but at least it is all there. The project from the Oracle end is ticking along nicely.
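For anyone wanting to do the same, the extraction side is nothing exotic. A minimal sketch, run against the new 10gR2 copy - the schema name WAREHOUSE is a stand-in for the real one:

SET LONG 2000000 PAGESIZE 0 LINESIZE 200

-- table DDL, one statement per table
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, 'WAREHOUSE')
  FROM dba_tables
 WHERE owner = 'WAREHOUSE';

-- same idea for the PL/SQL and triggers
SELECT DBMS_METADATA.GET_DDL(object_type, object_name, 'WAREHOUSE')
  FROM dba_objects
 WHERE owner = 'WAREHOUSE'
   AND object_type IN ('PROCEDURE','FUNCTION','TRIGGER');

Spool that to files and you have your create scripts; the CTAS part is nothing more than CREATE TABLE ... AS SELECT for the handful of cases where that was easier.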

From the hardware end, well, that is another story. The hardware to replace the VMS system... oh man, people that say "disk is cheap" need to have their heads examined! To increase the capacity on our SAN to hold this new DB - and this includes the drives themselves, a new drive enclosure, cabling, more room on the DAS (more drives), room on the tape array to hold near-line backups, another 25 tapes for the tape array for the offline/offsite copies, and various odds and ends to make all that happen - the initial estimate is $425,000. We almost fell off of our chairs at that. And that was just the storage for the DB. The application server estimate from HP via our broker is in: $375,000, and we can't have it for 45 days.


Sunday, January 21, 2007

The conversion of '07

5 days straight on this project, averaging 15 hours a day for 6 "in house" IT staff and 5 consultants. I took today off as it is Sunday, but I had to come into the office to do the "real" work that has been missed. This coming week is not going to show much internal work on this, as other projects are taking priority. I used sqlplus and extracted all of the DDL for the programs and triggers: 109.302 megabytes of text DDL files (uncompressed). That was a shocker; even with almost 6,000 PL/SQL objects, that is a huge amount.

I was made the official "Project Lead". That means that today, after spending 6-8 hours finding out why backups were failing last week and delegating the user requests that flowed in, I get to transpose the project task list and milestones into a Gantt chart for our status meeting on Friday. I have to devise a project time line that will have this system running on a Linux back-ended database, with the front end running on an ancient HP-UX server that was slated for the trash bin, no later than the middle of April 2007. One of our SA's called HP to see about getting some warranty and parts coverage on the ancient server and the possibility of adding a CPU. He was literally laughed at; HP is looking into it and will get back to him with some options. We hope they have some newer hardware that can run the older OS, and we will simply purchase the hardware.

On the purchase note, we were given an almost blank check for this project on Friday - simply "keep it under 6 million". That dollar figure does not count internal IT time ("free") or the consultants, whose time gets divided up and charged to the sites. The sites had come back with what appear to be valid numbers showing that for every day they do not have this system, the chance of a processing line failure increases by 1.2%; even the smallest plant going down will cost the company $125,000 a day, with the largest plant showing a net loss of $1.1 million a day. There are 17 sites that will be exposed. That woke the management up. The customer interaction and shipping data retrieved from the system is another story and can not be quantified easily into a dollar figure; the note was something to the effect of

How much is it worth when a customer who buys a million a month worth of product asks "When will my order be completed?" and the company's answer is "We don't know, our system is down"?


I called our Oracle sales rep on Thursday to talk about the licensing implications of this move. I loved the answer I got back on Friday morning: "Won't cost you any more Oracle licensing as long as the old system is turned off within 60 days of the new system coming online, you're covered - need any consulting?".


As I sat here writing this, 4 other IT staff showed up without being asked, to catch up on missed work and work on this project. I like the people I work with.

Thursday, January 18, 2007

Some good progress

Well, we made some really good progress today with the DW conversion. We carved off a 400 gig section of our SAN even before the new disks have arrived, and created a 10gR2 DB there in noarchivelog mode. I built a DCL script that runs on the VMS cluster and exports one table at a time, from a list we compiled, to a distinct file name, then FTP's the export file to the Linux server with the new LUN mounted. The Linux server then imports those tables into the new DB and sends a notification if there are any failures. We have already created all of the tables in the DB and spread them out over multiple tablespaces; this is more for our ease of management than anything performance based. The process has been running for about 7 hours now and has sent 1,079 tables with very few failures, but the big tables are still to come. With the throughput we are getting, our estimates are about 120-130 hours. The import process imports 4 files at a time, as any more than that saturates the I/O on the Linux box. This is of course simply a test to see what will fail and to give us an idea of how long we will have to book downtime for. The largest slowdown is the network link between the 2 sites; we utilize the network link 100%. The network folks are going to investigate some point to point VPN's, as they are confident they can get 3 or 4 dedicated ADSL lines into the remote site, and our internet pipe is much larger (10x) than the dedicated network link.

I think our downtime will be limited though. The data import process can be stopped, and only a select few users have rights assigned by the application to make data changes. We will remove those rights via the application, then stop the data load process and start the transfer when we do this for real. That means the users will be able to query the old system while the data is being transferred. The consultants have been very successful in "fooling" the application into connecting to the test database on the Linux server; it took some work on their side though. They knew they were able to compile the application on an older HP-UX PA-RISC machine running V10 of the OS, and luckily we had one. On that server we were able to install an Oracle 9 client, which their application recognizes and which will still connect to the 10g database. They have about 3 weeks of work to finish the application move to the HP-UX server though, and that does not include any testing or recoding the inevitable failures. We have a consulting company coming up next week to work on converting the VMS DCL script portion of the system to unix shell scripts. That is going to be an interesting conversion to work through.

Tomorrow we tackle the triggers and PL/SQL code and see what fails there; the sketch below is roughly how we plan to find the breakage. 15 hours at the office today is enough for me.
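The first pass will be something like this: run Oracle's standard recompile script, then pull the survivors' errors out of the dictionary. The connect assumes OS authentication on the database server; treat it as a placeholder.

#!/bin/bash
# Recompile everything, then list what is still broken.
# "/ as sysdba" assumes OS authentication; substitute your own connect.

sqlplus -s "/ as sysdba" <<'EOF'
-- utlrp.sql is the standard recompile script shipped with the database
@?/rdbms/admin/utlrp.sql

set pages 100 lines 132

-- whatever is still invalid after the recompile needs hand fixing
select owner, object_type, object_name
from   dba_objects
where  status = 'INVALID'
order  by owner, object_type, object_name;

-- the actual compilation errors, for the work list
select owner, name, type, line, text
from   dba_errors
order  by owner, name, type, sequence;
EOF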



Tuesday, January 16, 2007

OCP certification and knowledge

I have mentioned before that Howard Rogers is a great man, and this post proves it. Absolutely no argument from me on this.

I too have written in many places about the "dead from the neck up" job candidates I have been through over the years. OCP certification is a joke and a waste of money for companies; too many self-proclaimed "OCP Certified" experts can't answer the simplest of database questions.

I will put the list of questions we ask up in a post very soon.




New project, time for a new job?

A few years ago, the company I work for purchased another company of almost the same size, effectively doubling us.

During this acquisition, we in the IT group were told about some of the systems that were in place, but luckily, other than the core applications, we didn't have to do anything about the older systems at all. Consultants managed those systems, and the servers were housed far from the IT group. An "Out of Site, Out of Mind" deal.

Well, yesterday, that changed for one particular system. Our manager got a call that the 3-person consulting company that looks after the "data warehouse database" for 17 of our sites has decided not to renew the contract, which expires in 90 days, effectively dropping the management of the database into the IT group's lap. This has happened before and we took it in stride - until the DBAs got access to take a look at the DB yesterday afternoon. Oh boy... I checked to make sure the headhunter firm I use had an up-to-date CV, and I removed my restriction on not wanting to work in the United States.

Oracle database version 7, with an astounding 3,319 tables in a single schema and only one primary key constraint, created on one table. No database referential integrity at all. 891 procedures, 4,319 triggers, 771 functions (no packages). The OS is OpenVMS running on some really old Digital Equipment hardware. The front end uses proprietary VMS DCL scripting coupled with a Fortran VT100 terminal user interface. Total storage across the 8 VAX/VMS machines in the cluster: 636 gigabytes, with 432 gigabytes of that being the database (according to dba_segments). Apparently backups are cold backups that start on Friday night at 10pm and finish early Monday morning if there were no problems, and it is noted that the database can sometimes go 4-6 weeks without a backup during the heavy work season, because the data is needed 24 hours a day, 7 days a week and the database cannot be shut down. At least it is running in archivelog mode. Sites send data to the database via file transfer every morning, and the data is loaded somewhat manually by some clerks at one site.
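For what it is worth, those counts and sizes fall out of a quick pass over the dictionary views. A sketch of the kind of script we ran is below; the connect string and schema owner are placeholders, not the real names.

#!/bin/bash
# Quick inventory of a schema: space used and object counts.
# Connect string and APPOWNER are placeholders for the real values.

sqlplus -s "system/manager@olddw" <<'EOF'
set pages 100 lines 132

-- total space actually allocated to segments, in gigabytes
select round(sum(bytes)/1024/1024/1024, 1) as used_gb
from   dba_segments;

-- object counts by type for the application schema
select object_type, count(*)
from   dba_objects
where  owner = 'APPOWNER'
group  by object_type
order  by object_type;

-- how many tables have a primary key at all
select count(distinct table_name) as tables_with_pk
from   dba_constraints
where  owner = 'APPOWNER'
and    constraint_type = 'P';
EOF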

Management asked us how long it would take to convert the data to an oracle 10g database (or 11g) on Linux so a development team can be put together to build a new front end, because apparently the sites cannot live without the data that goes into that database. Once the data is converted to 10g on Linux, we can look at redesigning the entire system, but downtime has to be minimal. I was handed the "documentation" that the consulting firm has produced to date, and was promised that the bulk of the remaining time the consultants have on contract will be dedicated to writing better documentation. I would hope so, because the documentation I tried to read last night looks like it was written by an 11 year old in a hurry to complete a school assignment. We also got a 3 inch binder stuffed full of about the last year's worth of user requests, user-filed problems and "bug" tracking that the consultants have - nothing electronic at all, all done by hand. I have until this coming Monday to come up with a realistic plan for converting the data to oracle 10g.

This project scares me. It is far past the "take it as a challenge" or "show off your skills" lines that management is throwing around; I see this becoming a career-ruining, stress-related-heart-attack-inducing project. Anybody out there looking for an experienced, level-headed Oracle DBA with years of both DBA and development skills? :)


I think this project will be the seed of hundreds of blog posts.






Friday, January 05, 2007

Blogger and formatting

I forgot just how hard it is to get something to format the way I want on blogger. I think I will have to do some research on another blogspace.


Old Questions refreshed again

Saw that somebody bumped one of the many "is select count(1) faster than select count(*)" threads on AskTom.

I believe Tom, but I just had to go and look AGAIN to see if anything changed in 10gR2:

SQL> select count(1) from dbtesting.t; 

COUNT(1) 
---------- 
20413 

Elapsed: 00:00:00.00 

Execution Plan 
---------------------------------------------------------- 
Plan hash value: 2966233522 

------------------------------------------------------------------- 
| Id | Operation | Name | Rows | Cost (%CPU)| Time | 
------------------------------------------------------------------- 
| 0 | SELECT STATEMENT | | 1 | 59 (2)| 00:00:01 | 
| 1 | SORT AGGREGATE | | 1 | | | 
| 2 | TABLE ACCESS FULL| T | 20413 | 59 (2)| 00:00:01 | 
------------------------------------------------------------------- 


Statistics 
---------------------------------------------------------- 
0 recursive calls 
0 db block gets 
315 consistent gets 
0 physical reads 
0 redo size 
413 bytes sent via SQL*Net to client 
381 bytes received via SQL*Net from client 
2 SQL*Net roundtrips to/from client 
0 sorts (memory) 
0 sorts (disk) 
1 rows processed 

SQL> select count(*) from dbtesting.t; 

COUNT(*) 
---------- 
20413 

Elapsed: 00:00:00.00 

Execution Plan 
---------------------------------------------------------- 
Plan hash value: 2966233522 

------------------------------------------------------------------- 
| Id | Operation | Name | Rows | Cost (%CPU)| Time | 
------------------------------------------------------------------- 
| 0 | SELECT STATEMENT | | 1 | 59 (2)| 00:00:01 | 
| 1 | SORT AGGREGATE | | 1 | | | 
| 2 | TABLE ACCESS FULL| T | 20413 | 59 (2)| 00:00:01 | 
------------------------------------------------------------------- 


Statistics 
---------------------------------------------------------- 
0 recursive calls 
0 db block gets 
315 consistent gets 
0 physical reads 
0 redo size 
413 bytes sent via SQL*Net to client 
381 bytes received via SQL*Net from client 
2 SQL*Net roundtrips to/from client 
0 sorts (memory) 
0 sorts (disk) 
1 rows processed 

SQL> select count(66666) from dbtesting.t; 

COUNT(66666) 
------------ 
20413 

Elapsed: 00:00:00.00 

Execution Plan 
---------------------------------------------------------- 
Plan hash value: 2966233522 

------------------------------------------------------------------- 
| Id | Operation | Name | Rows | Cost (%CPU)| Time | 
------------------------------------------------------------------- 
| 0 | SELECT STATEMENT | | 1 | 59 (2)| 00:00:01 | 
| 1 | SORT AGGREGATE | | 1 | | | 
| 2 | TABLE ACCESS FULL| T | 20413 | 59 (2)| 00:00:01 | 
------------------------------------------------------------------- 


Statistics 
---------------------------------------------------------- 
1 recursive calls 
0 db block gets 
315 consistent gets 
0 physical reads 
0 redo size 
417 bytes sent via SQL*Net to client 
381 bytes received via SQL*Net from client 
2 SQL*Net roundtrips to/from client 
0 sorts (memory) 
0 sorts (disk) 
1 rows processed 

SQL> select count(dbms_random.value(1,1000)) from dbtesting.t; 

COUNT(DBMS_RANDOM.VALUE(1,1000)) 
-------------------------------- 
20413 

Elapsed: 00:00:00.17 

Execution Plan 
---------------------------------------------------------- 
Plan hash value: 2966233522 

------------------------------------------------------------------- 
| Id | Operation | Name | Rows | Cost (%CPU)| Time | 
------------------------------------------------------------------- 
| 0 | SELECT STATEMENT | | 1 | 59 (2)| 00:00:01 | 
| 1 | SORT AGGREGATE | | 1 | | | 
| 2 | TABLE ACCESS FULL| T | 20413 | 59 (2)| 00:00:01 | 
------------------------------------------------------------------- 


Statistics 
---------------------------------------------------------- 
1 recursive calls 
0 db block gets 
315 consistent gets 
0 physical reads 
0 redo size 
437 bytes sent via SQL*Net to client 
381 bytes received via SQL*Net from client 
2 SQL*Net roundtrips to/from client 
0 sorts (memory) 
0 sorts (disk) 
1 rows processed 

SQL> spool off 





There, now if anybody is wondering, the argument is solved. They are the same when selecting from a table without a predicate; I even threw in the dbms_random call to show that any expression inside count() gives the same plan.



Tuesday, January 02, 2007

Dizwell is Back

Ahhh... there we go.


Howard relented and opened his site again. Howard is a great guy!

http://www.dizwell.com