Saturday, December 30, 2006

ORA-07445

It is interesting to actually listen to users occasionally, downright extraordinary even. Take this for example.

I was surprised to see this error come through our monitoring script on a brand-new Oracle 10gR2 database whose sole purpose is to test the application for the upgrade from 9iR2:

ORA-07445: exception encountered: core dump [kddlkr()+748] [SIGBUS] [unknown code] [0x000000010] [] []

Within a minute, the 9i database on the same server threw the same thing:

ORA-07445: exception encountered: core dump [kddlkr()+748] [SIGBUS] [unknown code] [0x000000010] [] []

Remarkable: two different homes, two different databases, two different application installs, all with the same error.

A search on Metalink comes back with the dreaded:

A Description for this ORA-7445 error is not available.
Your request has been recorded and will be used for publishing prioritization.
Doing an Advanced Search on ORA-7445 'kddlkr' which may help to provide additional information on this error...

After about an hour of fruitless searching on Metalink and trace file diving, I decided to call around to the users and see if anybody had received an error message. I found a very nice lady who had recently received an error in the application, and she explained to me in great detail exactly what she was doing to test the new version of the application. I was taken aback by her attention to detail, and since she was just 2 floors up, I decided to visit her. She had almost 30 pages of checklists and notes, written over the years and across the many different versions of this application, of things she has to test every time a new version is released. She explained that this one module consistently gives an error when the user first opens the screen, and has done so for 3 years and 4 versions of the application. She had diligently written down the exact application error code in her notes for each of her testing sessions on the new versions, all with the exact date and time noted.

I was properly impressed and told her so. I had her go into the module and show me the error, and sure enough, the error box popped up and exactly matched her handwritten notes. It felt like the world had stopped spinning, and I wondered why I had never heard from this magnificent lady before; surely she was next in line to be a "Power User" member of the application team. I asked her if the vendor or our local application support people had given her a workaround or a valid excuse for this error. Her answer set the world spinning again after its brief stoppage - her words: "I have never told anybody about it, because it has always had an error since I started. I just tested to see if it had started to work with the new version. We don't even know what the module does."

So close, and yet so far. I will task the local application support people to investigate this error and open a ticket with the vendor.



Friday, December 29, 2006

Dizwell is gone

I take a few months off from reading and Howard Rogers leaves in that time. Talk about horrible timing on my part.


There is some discussion of the closing of the site here


http://oraclesponge.wordpress.com/2006/12/27/so-farewell-then-dizwell/


I wish Howard well, and hope to bump into him on-line.

Cheers Howard!


What happened to Dizwell?

I grabbed a cup of tea and settled in for a good long read of Dizwell to catch up on what I missed... and it's gone.


http://www.dizwell.com/


Comes back to


Welcome to dizwell.com !

This is a place-holder for the dizwell.com home page.



He can't be gone... can he?

Back (again)

Well, after a lengthy period of time, I am back, and planning on blogging again. Normal things got in the way - like work, divestitures, purchases and mergers, and family life which caused blogging to be dropped right off of the to-do list.

I haven't even been reading or participating in any forums or other blogs until yesterday. So I am back and ready to roll.

So if there is still anybody out there reading this, thanks for sticking around and I promise some stuff soon.


Wednesday, November 01, 2006

Primary And Unique Keys


Unique and Primary Constraints

There was some confusion about whether a primary key constraint allows multiple null values, and whether a unique constraint actually enforces the uniqueness of null values. So, here are the answers to those questions.

Primary Keys

Let us start with primary key constraints. I will build a few objects to help us out.

SQL> CREATE SEQUENCE SOMETEST;

Sequence created.

SQL>

SQL> CREATE TABLE TEST1
  2  (DATACOLUMN NUMBER NOT NULL);

Table created.

SQL> CREATE TABLE TEST2
  2  (DATACOLUMN NUMBER PRIMARY KEY);

Table created.

SQL> CREATE TABLE TEST3
  2  (DATACOLUMN NUMBER PRIMARY KEY NOT NULL);

Table created.

SQL> CREATE TABLE TEST4
  2  (DATACOLUMN NUMBER);

Table created.

To standardize the testing data, I will create a table and populate it so that we can use it as a base for the upcoming inserts. I will put in 5 rows of data at creation and two rows of null.

SQL> CREATE TABLE SOMEDATA AS SELECT SOMETEST.NEXTVAL DATACOLUMN FROM DUAL CONNECT BY LEVEL <= 5;

Table created.

SQL> INSERT INTO SOMEDATA (DATACOLUMN) VALUES (NULL);

1 row created.

SQL> INSERT INTO SOMEDATA (DATACOLUMN) VALUES (NULL);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT DATACOLUMN,DECODE(DATACOLUMN,NULL,'Y','N') ISNULL FROM SOMEDATA;

DATACOLUMN ISNULL
---------- ----------
         1 N
         2 N
         3 N
         4 N
         5 N
           Y
           Y

7 rows selected.

You can see there are 7 rows of data, all unique except for two rows of null values.

First we will do a simple insert into from a full query of the SOMEDATA table.

SQL> INSERT INTO TEST1 SELECT DATACOLUMN FROM SOMEDATA;
INSERT INTO TEST1 SELECT DATACOLUMN FROM SOMEDATA
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST1"."DATACOLUMN")

The above results were pretty much expected; the column is marked as NOT NULL, meaning of course that no null values at all are allowed.

Now to try the same insert into the TEST2 table with the primary key.

SQL> INSERT INTO TEST2 SELECT DATACOLUMN FROM SOMEDATA;
INSERT INTO TEST2 SELECT DATACOLUMN FROM SOMEDATA
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST2"."DATACOLUMN")

So, a primary key column is automatically NOT NULL, which means when we created table TEST3 we simply did not need to add the NOT NULL parameter to the column. For completeness, here is the same insert carried out on the TEST3 table.

SQL> INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA;
INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST3"."DATACOLUMN")

No surprises there. The insert failed.

Now for TEST4. The table was created with no constraints at all, so if we do the insert, all 7 of the rows will go into the table.

SQL> INSERT INTO TEST4 SELECT DATACOLUMN FROM SOMEDATA;

7 rows created.

SQL> COMMIT;

Commit complete.


If we now try to add a primary key constraint to the table:

SQL> ALTER TABLE TEST4 ADD CONSTRAINT TEST4PK
  2  PRIMARY KEY (
  3  DATACOLUMN
  4  )
  5  /
ERROR at line 3:
ORA-01449: column contains NULL values; cannot alter to NOT NULL

The database automatically tries to put a NOT NULL check constraint on the column, so adding the primary key fails.

If we try to add the primary key with the NOVALIDATE option, the table is altered and the primary key is added.

  1  ALTER TABLE TEST4 ADD CONSTRAINT TEST4PK
  2  PRIMARY KEY (
  3  DATACOLUMN
  4* ) NOVALIDATE
SQL> /

Table altered.

After the primary key is on with NOVALIDATE, we are still unable to insert a null value into TEST4.

SQL> INSERT INTO TEST4 (DATACOLUMN) VALUES (NULL);
INSERT INTO TEST4 (DATACOLUMN) VALUES (NULL)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST4"."DATACOLUMN")

Even though TEST4 already contains null values:

  1* SELECT DATACOLUMN,DECODE(DATACOLUMN,NULL,'Y','N') ISNULL FROM TEST4
SQL> /

DATACOLUMN I
---------- -
         1 N
         2 N
         3 N
         4 N
         5 N
           Y
           Y

7 rows selected.

That is a situation you should be aware of: even though the primary key is in place, there is bad data in the table that could throw a large wrench into a well-running application.


Unique Keys

Now for unique keys. We will use the same base data table as for the primary keys and create new tables for testing unique keys.

SQL> CREATE TABLE TEST1
  2  (DATACOLUMN NUMBER);

Table created.

SQL> CREATE TABLE TEST2
  2  (DATACOLUMN NUMBER UNIQUE);

Table created.

SQL> CREATE TABLE TEST3
  2  (DATACOLUMN NUMBER UNIQUE NOT NULL);

Table created.

Now for the inserts. First start with TEST1, which has no constraints at all:

SQL> INSERT INTO TEST1 SELECT DATACOLUMN FROM SOMEDATA;

7 rows created.

The TEST2 table has a unique constraint so most people will expect the insert to fail.

SQL> INSERT INTO TEST2 SELECT DATACOLUMN FROM SOMEDATA;

7 rows created.

But no, all 7 rows go in. Unique constraints do not count nulls when checking for uniqueness.
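To make the distinction clear, here is a quick sketch (not from the session above; the literal values are just for illustration) of what the unique constraint on TEST2 will and will not reject:

```sql
-- TEST2 already holds 1 through 5 plus two nulls
INSERT INTO TEST2 (DATACOLUMN) VALUES (NULL); -- succeeds: nulls are not compared for uniqueness
INSERT INTO TEST2 (DATACOLUMN) VALUES (6);    -- succeeds: a new non-null value
INSERT INTO TEST2 (DATACOLUMN) VALUES (1);    -- fails: ORA-00001 unique constraint violated
ROLLBACK;
```

Multi-column unique constraints behave the same way: rows where every constrained column is null are ignored by the check.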

Now, TEST3 has a unique key and the column has been set as NOT NULL:

SQL> INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA;
INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST3"."DATACOLUMN")

SQL> COMMIT;

Commit complete.

A quick couple of queries to verify the data:

SQL> SELECT DATACOLUMN,DECODE(DATACOLUMN,NULL,'Y','N') ISNULL FROM TEST1;

DATACOLUMN ISNULL
---------- ----------
         1 N
         2 N
         3 N
         4 N
         5 N
           Y
           Y

7 rows selected.

SQL> SELECT DATACOLUMN,DECODE(DATACOLUMN,NULL,'Y','N') ISNULL FROM TEST2;

DATACOLUMN ISNULL
---------- ----------
         1 N
         2 N
         3 N
         4 N
         5 N
           Y
           Y

7 rows selected.

SQL> SELECT DATACOLUMN,DECODE(DATACOLUMN,NULL,'Y','N') ISNULL FROM TEST3;

no rows selected

Constraint Objects

The objects that are created when a constraint is created are worth mentioning as well. You know what this means: time for more test objects. I can feel your joy at this prospect from here.

SQL> CREATE TABLE TEST1
  2  (DATACOLUMN NUMBER PRIMARY KEY);

Table created.

SQL> desc test1
 Name                                                    Null?    Type
 ------------------------------------------------------- -------- ---------------
 DATACOLUMN                                              NOT NULL NUMBER

This is the simple table you have seen before, with the primary key defined at creation. The system automatically makes up a name for the constraint and applies it to the table. If you try this on your own, your constraint and index names will be different. You can create the constraints with whatever name you choose; check the documentation on how to do that.
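For example, a minimal sketch of naming the constraint yourself at creation time (the table and constraint names here are made up for illustration):

```sql
CREATE TABLE TEST5
(DATACOLUMN NUMBER CONSTRAINT TEST5_PK PRIMARY KEY);
```

The out-of-line ALTER TABLE ... ADD CONSTRAINT form used later in this post names the constraint the same way.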

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST1';

CONSTRAINT_NAME                C INDEX_NAME
------------------------------ - ------------------------------
SYS_C006436                    P SYS_C006436

You can see that, to efficiently check for violations of the constraint, the database also automatically creates an index on the table. The INDEX_NAME column clearly shows this. A quick query against the USER_INDEXES view will confirm it.

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST1';

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
SYS_C006436                    NORMAL                      UNIQUE

You will also notice that DATACOLUMN is marked as NOT NULL and there is a constraint on the table saying so. Observe:

SQL> CREATE TABLE TEST2
  2  (DATACOLUMN NUMBER PRIMARY KEY NOT NULL);

Table created.

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST2';

CONSTRAINT_NAME                C INDEX_NAME
------------------------------ - ------------------------------
SYS_C006437                    C
SYS_C006438                    P SYS_C006438

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST2';

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
SYS_C006438                    NORMAL                      UNIQUE

SQL> desc test2
 Name                                                    Null?    Type
 ------------------------------------------------------- -------- ---------------
 DATACOLUMN                                              NOT NULL NUMBER

The database created a CHECK constraint, called SYS_C006437 in this case, on the table. One reason this is done is to allow you to drop the primary key constraint and still keep the NOT NULL constraint. A quick run of the DBMS_METADATA.GET_DDL function will show us what the database did.

SQL> SELECT DBMS_METADATA.get_ddl('CONSTRAINT','SYS_C006437') FROM DUAL;

DBMS_METADATA.GET_DDL('CONSTRAINT','SYS_C006437')
--------------------------------------------------------------------------------
ALTER TABLE "DBTESTING"."TEST2" MODIFY ("DATACOLUMN" NOT NULL ENABLE)

Something to note: if there is already an index on the column and you add a constraint, the database will "hijack" that index instead of creating a new one. You can explicitly create an index if you so desire.

SQL> CREATE TABLE TEST3
  2  (DATACOLUMN NUMBER);

Table created.

SQL> CREATE INDEX MYINDEX ON TEST3(DATACOLUMN);

Index created.

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST3';

no rows selected

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST3';

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
MYINDEX                        NORMAL                      NONUNIQUE

SQL> DESC TEST3;
 Name                                                    Null?    Type
 ------------------------------------------------------- -------- ---------------
 DATACOLUMN                                                       NUMBER

No constraints and only the one index. Now add a primary key constraint to the table.

SQL> ALTER TABLE TEST3 ADD CONSTRAINT TEST3_PK
  2  PRIMARY KEY (
  3  DATACOLUMN
  4  )
  5  /

Table altered.

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST3';

CONSTRAINT_NAME                C INDEX_NAME
------------------------------ - ------------------------------
TEST3_PK                       P MYINDEX

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST3';

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
MYINDEX                        NORMAL                      NONUNIQUE

SQL> DESC TEST3;
 Name                                                    Null?    Type
 ------------------------------------------------------- -------- --------
 DATACOLUMN                                              NOT NULL NUMBER

You can see the index is marked as NONUNIQUE even though there is an obvious primary key on the table. The index name for the primary key is MYINDEX, the index we created on the table. We can now attempt to insert some bad data into the table.

SQL> INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA;
INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("DBTESTING"."TEST3"."DATACOLUMN")

So the NOT NULL is of course enforced.

SQL> INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA WHERE DATACOLUMN IS NOT NULL;

5 rows created.

SQL> INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA WHERE DATACOLUMN IS NOT NULL;
INSERT INTO TEST3 SELECT DATACOLUMN FROM SOMEDATA WHERE DATACOLUMN IS NOT NULL
*
ERROR at line 1:
ORA-00001: unique constraint (DBTESTING.TEST3_PK) violated

SQL> ROLLBACK;

Rollback complete.

The uniqueness of the primary key is enforced, even though, as you can plainly see, there is no unique index on the table. When the constraint is dropped, the index is not dropped as well. But be careful: there is syntax in the DROP CONSTRAINT command that allows the index to be dropped at the same time.

SQL> ALTER TABLE dbtesting.test3
  2  DROP CONSTRAINT test3_pk
  3  /

Table altered.

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST3';

no rows selected

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST3';

INDEX_NAME                     INDEX_TYPE                  UNIQUENES
------------------------------ --------------------------- ---------
MYINDEX                        NORMAL                      NONUNIQUE

The constraint is gone, the original index is still there, and all is right in the world. Now, some very thorough person, or more importantly a very thorough GUI with that handy right-click drop ability, comes through, and the results are very disturbing.

SQL> ALTER TABLE TEST3 ADD CONSTRAINT TEST3_PK
  2  PRIMARY KEY (
  3  DATACOLUMN
  4  )
  5  /

Table altered.

SQL> ALTER TABLE dbtesting.test3
  2  DROP CONSTRAINT test3_pk DROP INDEX
  3  /

Table altered.

SQL> SELECT CONSTRAINT_NAME,CONSTRAINT_TYPE,INDEX_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME='TEST3';

no rows selected

SQL> SELECT INDEX_NAME,INDEX_TYPE,UNIQUENESS FROM USER_INDEXES WHERE TABLE_NAME='TEST3';

no rows selected

Conclusion

Every table should have a primary key; it is plain good RDBMS design. Be aware that the database does give you the ability to bypass constraints, but still make sure you use constraints in your system. Database referential integrity is the best: it is the fastest, and no outside code can be as efficient as the built-in database code. Use the database; we are paying for it, and why reinvent the wheel when it rolls along so nicely.

Wow!

That is all I can say and not have content blocked.

The company decided to sell off a small portion to interested buyers. Seems easy enough: everything is stored in organisational units, we can shave off that portion, provide it in an easy-to-load format for the purchasers, and we are ready to go; only 5 systems need to have data transferred. The team of managers and "in the know" people put together to analyse what was needed said "1 week to develop a plan, another week to extract the data, 1 week of quality control" - 3 weeks of work and then all done. We were steaming along steadily well into week two with no major problems when we started to get calls to our help desk: "System is hung", "I can't get logged in".

The help desk guys ran through their normal list of things to check - well, they tried to. Our knowledge base was down, and that triggered pretty much every help desk person to pick up the phone and call somebody. Most of the non help desk IT people were in a meeting about the sale and didn't notice anything. It was very funny though, almost like those movies where all of the stars are in a room and all of their cell phones start going off at once to say they had better do something before the world as they know it comes to an end. I was up at the front showing some interesting data I had found in the financials system that was not organised properly but needed to be extracted when my BlackBerry started ringing. The IT manager's BlackBerry was vibrating away on the table, and the two SAs' cell phones were beeping.

After only a few seconds, we all started to file out of the meeting room in a straight line for the server room. I was about 10 feet from the server room at the back of the line when, wait for it, the fire alarm went off; the main building fire alarm was whooping and ringing. The hallway to the server room has one of those red fire alarm bells in it. I don't think there was a dry pair of pants in that hallway when that bell went off; those things are freaking loud, and the hallway has a door at one end from the lobby and the server room door at the other end, with nothing but concrete, linoleum and, at this particular point in time, 8 highly trained IT folks to absorb the sound. Despite the fire alarm going off, our manager had been opening the door to the server room, and we continued in with the screaming, ringing racket of the fire bell going off about 8 feet behind us. As we all came into the server room, we found out where the fire that triggered the alarm was - in our server room. There was a thick layer of smoke along the roof of the room, the back of our tape array rack was billowing smoke, and the smell was almost enough to knock one out. Well, being the highly trained IT professionals we are, we were stepping on one another trying to get back out of the server room in a mess of arms and legs as the 8 of us fit through a standard sized door pretty much at once.

After a very undignified exit into the lobby, scaring the people there pretty much out of their wits, we composed ourselves a bit, realized we were still alive, and were moving quickly out of the building when one of the SAs spoke up and said "What about the fire suppression system in the room?". My answer, in a moment of pure brilliance at stating the obvious, was "It isn't working". We loitered around outside waiting for the billowing smoke to consume the entire building; after what seemed like an hour the fire department showed up. It was really only 7 minutes from the time the alarm went off to the time the FD pulled up. The IT manager and the building manager spoke with the fireman in charge and explained where the fire was.

We IT folks stood outside with the rest of the building population and waited and wondered. The two SAs were bickering back and forth about whose day it was to send the offsite tapes offsite, and wondering if our DR site was in good enough shape to run the company while this place was rebuilt. After another 10 minutes or so the firemen came out and said it was mostly just smoke, they had put the fire out, and in a few minutes we could go in and inspect what was up. The building manager would only let one SA and the IT manager into the room, for insurance reasons. They didn't touch anything until they had taken about 3,000 pictures and the insurance company had said over the phone that we could do what was needed to get our business running again.

The fire inspector figured out what the problem was; it was pretty obvious once you could see it. A power bar that comes built into the rack had ignited into a slow smoldering burn, causing all 8 power cords plugged into it to start to smolder and put off smoke too. It apparently didn't get hot enough to trigger the fire suppression system in the room. It did get plenty hot enough to melt all of the plastic off of all of the power cords, damage the rack and a fibre network hub thing too, and generate vast quantities of smoke. All told, under $6,000 in damage - not including the room's new paint job, contractors to clean the room, and the IT department's time to inspect all of the equipment, since running it in a smoky environment is apparently bad for it.

The downtime was the remaining portion of the day it happened and the entire following day, but we were up and running at full capacity by 6am on day 3. One of the most junior help desk people we have, a great guy, summed up the entire thing in two words - "Mother F***er!" - when he was told what was going on. I will let you fill in the bleeped-out section.

I promise to post some documentation on primary and unique keys I have been working on later this week. I also have a document on the pitfalls I have run into using CURSOR_SHARING of SIMILAR or FORCE, but that is a week or two away.



Sunday, October 15, 2006

Well even longer

Still no posts technical or otherwise from me.

Sale is still ongoing and work is mounting. We had a person quit and another go on long term medical leave cutting our team from 5 to 3. That hurt the project and we are now scrambling for consultants who know our apps.

Good news is I got a new laptop, one of those new Dell Core Duo laptops, the 820 series, with 4 gig of RAM and the 15.4" widescreen. It is a bit of a brick to pack around, but the 3 hours of battery life, and the fact that I can now put copies of all the databases we are extracting data from on my laptop and work on the scripts and testing locally, is wonderful.


Thursday, September 28, 2006

Long Time

It has been awhile, sorry to all of you.

We have just sold a small portion of the company, and to our surprise (since we haven't done it before), carving off a section of the company is many times more work than adding to the company.


Friday, September 08, 2006

Which SCI FI character

I did the survey and came up with Agent Smith from the Matrix series.
Tom Kyte came up with James T. Kirk.



Which Fantasy/SciFi Character Are You?

The oracle sponge

If you haven't already, you must read David Aldridge's story of his most recent holiday. You have to read all the parts; this link is to part 1.

http://oraclesponge.wordpress.com/2006/09/05/three-days-two-hospitals-part-i/


Wonderfully well written, entertaining, and enlightening. Being an ex-motorcycle enthusiast, I know exactly what he is talking about.



Monitor Alert Log

A UNIX shell script to monitor the alert log and email any errors found. It is run as the Oracle OS owner. Make sure you change the "emailaddresshere" entries to the email address you want, and put check_alert.awk someplace. I have chosen $HOME for this example; in real life I put it on a mounted directory on the NAS.


if test $# -lt 1
then
  echo "You must pass a SID"
  exit 1
fi
#
# ensure environment variables set
#
#set your environment here
export ORACLE_SID=$1
export ORACLE_HOME=/home/oracle/orahome
export MACHINE=`hostname`
export PATH=$ORACLE_HOME/bin:$PATH

# check if the database is running, if not exit

ckdb ${ORACLE_SID} -s
if [ "$?" -ne 0 ]
then
  echo " $ORACLE_SID is not running!!!"
 echo "${ORACLE_SID} is not running!" | mailx -m -s "Oracle sid ${ORACLE_SID} is not running!" "emailaddresshere"
  exit 1
fi;

#Search the alert log, and email all of the errors
#move the alert_log to a backup copy
#cat the existing alert_log onto the backup copy

#oracle 8 or higher DB's only.
sqlplus '/ as sysdba' << EOF > /tmp/${ORACLE_SID}_monitor_temp.txt
column xxxx format a10
column value format a80
set lines 132
SELECT 'xxxx' ,value FROM  v\$parameter WHERE  name = 'background_dump_dest'
/
exit
EOF


cat /tmp/${ORACLE_SID}_monitor_temp.txt | awk '$1 ~ /xxxx/ {print $2}' > /tmp/${ORACLE_SID}_monitor_location.txt
read ALERT_DIR < /tmp/${ORACLE_SID}_monitor_location.txt
ORIG_ALERT_LOG=${ALERT_DIR}/alert_${ORACLE_SID}.log
NEW_ALERT_LOG=${ORIG_ALERT_LOG}.monitored
TEMP_ALERT_LOG=${ORIG_ALERT_LOG}.temp
cat ${ORIG_ALERT_LOG} | awk -f $HOME/check_alert.awk > /tmp/${ORACLE_SID}_check_monitor_log.log
rm /tmp/${ORACLE_SID}_monitor_temp.txt 2>/dev/null
if [ -s /tmp/${ORACLE_SID}_check_monitor_log.log ]
   then 
     echo "Found errors in sid ${ORACLE_SID}, mailed errors"
     echo "The following errors were found in the alert log for ${ORACLE_SID}" > /tmp/${ORACLE_SID}_check_monitor_log.mail
     echo "Alert log was copied into ${NEW_ALERT_LOG}" >> /tmp/${ORACLE_SID}_check_monitor_log.mail
     date >> /tmp/${ORACLE_SID}_check_monitor_log.mail
     echo "--------------------------------------------------------------" >> /tmp/${ORACLE_SID}_check_monitor_log.mail
     echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
     echo " " >> /tmp/${ORACLE_SID}_check_monitor_log.mail
     cat /tmp/${ORACLE_SID}_check_monitor_log.log >> /tmp/${ORACLE_SID}_check_monitor_log.mail

 cat /tmp/${ORACLE_SID}_check_monitor_log.mail | mailx -m -s "on ${MACHINE}, MONITOR of Alert Log for ${ORACLE_SID} found errors" "emailaddresshere"

     mv ${ORIG_ALERT_LOG} ${TEMP_ALERT_LOG}
     cat ${TEMP_ALERT_LOG} >> ${NEW_ALERT_LOG}
     touch ${ORIG_ALERT_LOG}
     rm /tmp/${ORACLE_SID}_monitor_temp.txt 2> /dev/null
     rm /tmp/${ORACLE_SID}_check_monitor_log.log 
     rm /tmp/${ORACLE_SID}_check_monitor_log.mail 
exit
fi;

rm /tmp/${ORACLE_SID}_check_monitor_log.log 2> /dev/null
rm /tmp/${ORACLE_SID}_monitor_location.txt 2> /dev/null


The referenced awk script (check_alert.awk) follows. You can modify it as needed to add or remove things you wish to look for. The ERROR_AUDIT entry is a custom string that a trigger on database errors writes in our environment.



$0 ~ /Errors in file/ {print $0}
$0 ~ /PMON: terminating instance due to error 600/ {print $0}
$0 ~ /Started recovery/{print $0}
$0 ~ /Archival required/{print $0}
$0 ~ /Instance terminated/ {print $0}
$0 ~ /Checkpoint not complete/ {print $0}
$1 ~ /ORA-/ { print $0; flag=1 }
$0 !~ /ORA-/ {if (flag==1){print $0; flag=0;print " "} }
$0 ~ /ERROR_AUDIT/ {print $0}
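For the curious, the ERROR_AUDIT lines come from our database-level error trigger, which is site-specific. A minimal sketch of the idea might look like the following; the trigger name is made up for illustration, and DBMS_SYSTEM.KSDWRT is an undocumented call, so treat this as a sketch only and test carefully before relying on it:

```sql
CREATE OR REPLACE TRIGGER error_audit_trg
AFTER SERVERERROR ON DATABASE
BEGIN
  -- write a tagged line into the alert log so the awk filter picks it up
  SYS.DBMS_SYSTEM.KSDWRT(2, 'ERROR_AUDIT: ORA-' || TO_CHAR(ora_server_error(1)));
END;
/
```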
  




I simply put this script into cron to run every 5 minutes, passing the SID of the DB I want to monitor.
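The crontab entry looks something like this (the script path and SID are examples only; also note that the */5 shorthand works on Linux cron, but some older UNIX crons need the minutes listed out):

```
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/oracle/scripts/check_alert.sh ORCL >/dev/null 2>&1
```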

Monday, September 04, 2006

SQL*plus in windows

As you probably know from reading my blog, I don't like Windows. I know how to use Windows; I use Windows, I have had the misfortune of administering databases on Windows, and most of our applications are made to run on Windows. Windows runs the world, from the everyday user on up.


I have Windows on my laptop, but I keep as little as possible running on it, and pretty much my first task in the morning is to start up my SUSE VM client and work from there. This is of course resource intensive, and with only a D600 (with the possibility of flaming batteries), resources are scarce. There are times on my 1.3GHz laptop with 1 gig of RAM that I am trying to run 3 Oracle databases: 1 EE for Windows, 1 EE for SUSE in a VM client, and 1 SE for SUSE in a VM client. When I do that, let's just say my laptop has difficulties keeping up with the simplest tasks, like keeping the clock in the taskbar up to date.

I was reading one of the threads flaming XE at Doug Burns' blog - An Oracle XE user speaks - and checked out the link from William Robertson. William Robertson is now my hero. I knew the capabilities existed; I had set up things like this for users. But I had never thought of doing something this simple for myself. I am talking about his wonderfully in-depth but amazingly simple-to-follow article on setting up SQL*Plus on Windows. I immediately went ahead and configured my Windows sqlplus as he explained.

Man... it is sometimes amazing how much the simple little things can make your life so much easier.

Oh, and for those of you waiting for an update: the fellows did their duty and dressed as described in earlier posts. I have requested pictures and have been assured I will receive them shortly. I will of course post them as soon as I can. I was told it was a banner day at the office; everybody that could be at the office was at the office.

Friday, August 25, 2006

Contest over

Regarding I am in for it now, the contest is over. Well, I won. The fine young gentlemen have chosen August 31st as the day they will live up to their end of the bargain. The final myth was explained to them with the following:


SQL> SET TRIMSPOOL ON
SQL> SET LINES 120
SQL> DROP TABLE TABLE1;
DROP TABLE TABLE1
           *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> CREATE TABLE TABLE1 AS SELECT OBJECT_NAME,LAST_DDL_TIME,FLOOR(DBMS_RANDOM.VALUE(1,2)) DATA FROM ALL_OBJECTS;

Table created.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX TABLE1IDX1 ON TABLE1 (DATA);

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SET AUTOTRACE TRACEONLY ;
SQL> SET TIMING ON;
SQL> SELECT /*+INDEX (TABLE1 TABLE1IDX1) */ * FROM TABLE1 WHERE DATA=1;

50574 rows selected.

Elapsed: 00:00:00.52

Execution Plan
----------------------------------------------------------
Plan hash value: 809506743

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            | 50574 |  1679K|   384   (2)| 00:00:05 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1     | 50574 |  1679K|   384   (2)| 00:00:05 |
|*  2 |   INDEX RANGE SCAN          | TABLE1IDX1 | 50574 |       |   101   (2)| 00:00:02 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DATA"=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       7106  consistent gets
          0  physical reads
          0  redo size
    2463263  bytes sent via SQL*Net to client
      37462  bytes received via SQL*Net from client
       3373  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      50574  rows processed

SQL> SELECT * FROM TABLE1 WHERE DATA=1;

50574 rows selected.

Elapsed: 00:00:00.43

Execution Plan
----------------------------------------------------------
Plan hash value: 963482612

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 50574 |  1679K|    68   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TABLE1 | 50574 |  1679K|    68   (3)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DATA"=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3638  consistent gets
          0  physical reads
          0  redo size
    1548409  bytes sent via SQL*Net to client
      37462  bytes received via SQL*Net from client
       3373  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      50574  rows processed

SQL> 
SQL> SPOOL OFF



They capitulated with almost no fight; they had run similar tests and scripts, gone through the online information, and come up with the same answers. I do feel for the fellows... well, just a little :)
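One footnote on the numbers above: a good chunk of the consistent gets in both runs comes from SQL*Plus fetching in small batches. With roughly 50,000 rows returned and the default ARRAYSIZE of 15, that is about 3,400 round trips (which matches the 3373 SQL*Net roundtrips in the statistics), and each fetch revisits blocks. Anyone reproducing the test may want to rerun it with a larger fetch size before comparing gets — a sketch:

```sql
-- Sketch only: a larger arraysize lowers consistent gets for
-- both access paths, so the comparison is fairer.
SET ARRAYSIZE 500
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM TABLE1 WHERE DATA=1;
SELECT /*+ INDEX (TABLE1 TABLE1IDX1) */ * FROM TABLE1 WHERE DATA=1;
```

The relative ranking of the two plans should not change, but the absolute gets will drop for both.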


Saturday, August 19, 2006

Myth #3

I know I am going about working on the myths my colleagues came up with in a strange order. I was working on myth #3 and was once again stumped.

No matter what I did, it appeared that using a VARCHAR2 date field or a function-based index was preferable to a normal B*tree index on a normal heap table.

For my testing I had come up with the following test:



SQL> CREATE TABLE TABLE1 (PK NUMBER PRIMARY KEY NOT NULL
,DATA1 VARCHAR2(30) ,DATA2 VARCHAR2(30) 
,DATA3 DATE,DATA4 NUMBER
,FAKEDATE VARCHAR2(20));

Table created.

SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> INSERT INTO TABLE1 SELECT TABLE1_PK_SEQ.NEXTVAL
  2    ,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  3    ,SYSDATE - DBMS_RANDOM.VALUE(1,365) ,TRUNC(DBMS_RANDOM.VALUE(1,10000)),NULL
  4    FROM DUAL CONNECT BY LEVEL <=  50000;

50000 rows created.

SQL> COMMIT;

Commit complete.

SQL> UPDATE TABLE1 SET FAKEDATE=TO_CHAR(DATA3,'DD-MON-YYYY');

50000 rows updated.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX DATEIDX ON TABLE1 (DATA3);

Index created.

SQL> CREATE INDEX FAKEDATEIDX ON TABLE1 (FAKEDATE);

Index created.

SQL> CREATE INDEX FBIIDX ON TABLE1 (TO_CHAR(DATA3,'DD-MON-YYYY'));

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.





The results of the queries were unsettling, to say the least. I have edited the following to shorten the post a little.
I will put a link to the full output files at the end of the post.


SQL> SELECT * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |   139 | 12649 |   141   (3)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TABLE1 |   139 | 12649 |   141   (3)| 00:00:02 |
----------------------------------------------------------------------------


SQL> SELECT /*+ INDEX(TABLE1 DATEIDX) */ * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   139 | 12649 |   142   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1  |   139 | 12649 |   142   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | DATEIDX |   139 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------


SQL> SELECT * FROM TABLE1 WHERE FAKEDATE='24-JUN-2006';
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   137 | 12467 |   123   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1      |   137 | 12467 |   123   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FAKEDATEIDX |   137 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------


SQL> SELECT * FROM TABLE1 WHERE TO_CHAR(DATA3,'DD-MON-YYYY')='24-JUN-2006';
--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |   137 | 12467 |   123   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1 |   137 | 12467 |   123   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FBIIDX |   137 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------




I scratched my head, thought about this, and researched on the web a little. I even considered submitting a question to Ask Tom. Then I thought: data sizes, that has to be it. So I made the table bigger, 3,000,000 rows to be exact, but I came up with the same access paths. Well, I thought, they won this one... but it doesn't make sense. Why isn't Oracle considering an index? I checked for bugs and couldn't find any. Then I went back to data sizes. I decided to create a "wider" table and try again. Now we are talking.
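Before the wide-table version, here is a check I could have made on the skinny table to see why the full scan kept winning (a sketch using the standard dictionary views; on a narrow table the whole segment is only a few hundred blocks, so one multiblock full scan can be cheaper than ~139 scattered single-block index visits):

```sql
-- How big is the table, really?
SELECT table_name, num_rows, blocks
  FROM user_tables
 WHERE table_name = 'TABLE1';

-- How well does each index line up with the table?
-- A clustering_factor near num_rows means every index hit
-- is likely a different table block.
SELECT index_name, leaf_blocks, clustering_factor
  FROM user_indexes
 WHERE table_name = 'TABLE1';
```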


SQL> CREATE TABLE TABLE1 (PK NUMBER PRIMARY KEY NOT NULL
  2  ,DATA1 VARCHAR2(30)
  3  ,DATA2 VARCHAR2(30)
  4  ,DATA3 DATE
  5  ,DATA4 NUMBER
  6  ,FAKEDATE VARCHAR2(20)
  7  ,DATA5 VARCHAR2(30)
  8  ,DATA6 VARCHAR2(30)
  9  ,DATA7 VARCHAR2(30)
 10  ,DATA8 VARCHAR2(30)
 11  );

Table created.

SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> INSERT INTO TABLE1 SELECT TABLE1_PK_SEQ.NEXTVAL,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  2    ,SYSDATE - DBMS_RANDOM.VALUE(1,365) ,TRUNC(DBMS_RANDOM.VALUE(1,10000)),NULL,DBMS_RANDOM.STRING('A',30)
  3    ,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  4  FROM DUAL CONNECT BY LEVEL <=  50000;

50000 rows created.

SQL> COMMIT;

Commit complete.

SQL> UPDATE TABLE1 SET FAKEDATE=TO_CHAR(DATA3,'DD-MON-YYYY');

50000 rows updated.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX DATEIDX ON TABLE1 (DATA3);

Index created.

SQL> CREATE INDEX FAKEDATEIDX ON TABLE1 (FAKEDATE);

Index created.

SQL> CREATE INDEX FBIIDX ON TABLE1 (TO_CHAR(DATA3,'DD-MON-YYYY'));

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.

The access paths now look more like what I was thinking they would be:



SQL> SELECT * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   139 | 29885 |   142   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1  |   139 | 29885 |   142   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | DATEIDX |   139 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

SQL> SELECT * FROM TABLE1 WHERE FAKEDATE='24-JUN-2006';

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   137 | 29455 |   132   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1      |   137 | 29455 |   132   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FAKEDATEIDX |   137 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

SQL> SELECT * FROM TABLE1 WHERE TO_CHAR(DATA3,'DD-MON-YYYY')='24-JUN-2006';

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |   137 | 29455 |   132   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1 |   137 | 29455 |   132   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FBIIDX |   137 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------



Apparently the fake VARCHAR2 column and the FBI are more efficient at picking out individual days. I am going to expand this test to cover entire months, small periods of time, and a full year, to see whether a single normal B*tree index on a DATE is the best solution when you are not targeting such a specific predicate.

So now a question to you readers: did I do anything wrong? Can you point me in the direction of some documents on indexing dates that I must have missed, or did I sleep through that day in class?
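One variation I still intend to test: rewriting the single-day query as a half-open range against the real DATE column, or indexing the truncated date, either of which targets a day without the fake column (a sketch only, not yet run against the tables above; TRUNCDATEIDX is a hypothetical name):

```sql
-- Half-open range: picks out one day using the plain DATE index.
SELECT * FROM TABLE1
 WHERE DATA3 >= TO_DATE('24-JUN-2006','DD-MON-YYYY')
   AND DATA3 <  TO_DATE('24-JUN-2006','DD-MON-YYYY') + 1;

-- Or index the truncated date so an equality predicate can use it,
-- keeping the column a real DATE instead of a VARCHAR2.
CREATE INDEX TRUNCDATEIDX ON TABLE1 (TRUNC(DATA3));
SELECT * FROM TABLE1
 WHERE TRUNC(DATA3) = TO_DATE('24-JUN-2006','DD-MON-YYYY');
```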

The links to the "skinny table" and the "wide table" complete tests and logfiles.

Sunday, August 13, 2006

Proving them wrong 1 of 3

From my earlier post "I am in for it now", I started working on myth #2, as it seemed to be the easiest to debunk.

I ran into a snag, though. I had put together my test for them, easily disproving their claim that if you index every column in a table, the optimizer will join multiple indexes together and follow that access path to the data. Then I was in for a bit of a surprise. I had heard of the optimizer building bitmap indexes on the fly, but I had never delved any deeper into it and had truthfully forgotten all about it until today. I can see that Jonathan Lewis's book "Cost-Based Oracle Fundamentals" is going to be cracked open again and given a very thorough going-over.

The whole story: I put together the following test SQL:


SPOOL RUN1.LOG
SET ECHO ON
SET LINES 120
COLUMN NAME FORMAT A30 TRUNCATE
COLUMN VALUE FORMAT A30 TRUNCATE
DROP TABLE A;
CREATE TABLE A
(
PK NUMBER PRIMARY KEY NOT NULL
,DATA1 VARCHAR2(20)
,DATA2 VARCHAR2(20)
,DATA3 VARCHAR2(20)
,DATA4 DATE
);
DROP SEQUENCE PK_SEQ;
CREATE SEQUENCE PK_SEQ CACHE 500; 
CREATE INDEX IDX1 ON A (DATA1);
CREATE INDEX IDX2 ON A (DATA2);
CREATE INDEX IDX3 ON A (DATA3);
CREATE INDEX IDX4 ON A (DATA4);
INSERT INTO A
(PK,DATA1,DATA2,DATA3,DATA4)
SELECT 
PK_SEQ.NEXTVAL PK
,OBJECT_TYPE DATA1
,STATUS DATA2
,DBMS_RANDOM.STRING('A',2) DATA3
,SYSDATE-DBMS_RANDOM.VALUE(1,50) DATA4 
FROM ALL_OBJECTS;
COMMIT;
SELECT NAME,VALUE FROM V$PARAMETER WHERE UPPER(NAME)='OPTIMIZER_MODE';
SELECT * FROM V$VERSION;
SET AUTOTRACE ON
SET AUTOTRACE TRACEONLY
SET TIMING ON
SELECT * FROM A WHERE PK=12;
SELECT * FROM A WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM A WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(USER,'A',CASCADE=>TRUE);
END;
/
SELECT * FROM A WHERE PK=12;
SELECT * FROM A WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM A WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';


The explain plans via autotrace from the 3 queries after statistics were gathered:


| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |                             
-------------------------------------------------------------------------------------------                             
|   0 | SELECT STATEMENT            |             |     1 |    30 |     2   (0)| 00:00:01 |                             
|   1 |  TABLE ACCESS BY INDEX ROWID| A           |     1 |    30 |     2   (0)| 00:00:01 |                             
|*  2 |   INDEX UNIQUE SCAN         | SYS_C007595 |     1 |       |     1   (0)| 00:00:01 |                             

------------------------------------------------------------------------------------                                    
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                    
------------------------------------------------------------------------------------                                    
|   0 | SELECT STATEMENT            |      |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  1 |  TABLE ACCESS BY INDEX ROWID| A    |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  2 |   INDEX RANGE SCAN          | IDX2 |     1 |       |     1   (0)| 00:00:01 |                                    
------------------------------------------------------------------------------------                                    

------------------------------------------------------------------------------------                                    
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                    
------------------------------------------------------------------------------------                                    
|   0 | SELECT STATEMENT            |      |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  1 |  TABLE ACCESS BY INDEX ROWID| A    |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  2 |   INDEX RANGE SCAN          | IDX2 |     1 |       |     1   (0)| 00:00:01 |                                    
------------------------------------------------------------------------------------                                    


Those explain plans were exactly what I was looking for.

The snag was encountered when I remembered that I harp on all of the developers to bind their SQL. In this case, even though binding was not necessary, the two developers would inflate any little oversight on my part to earth-shattering levels.

So I set up variables, bound my SQL, and reran my test. The results surprised me: I saw multiple indexes being used on a single table, bitmap conversions of all things.

I rebuilt my test, creating a new table "B" as a copy of the previous "A". Here is the complete SQL:


SPOOL RUNb.LOG
SET ECHO ON
SET LINES 120
COLUMN NAME FORMAT A30 TRUNCATE
COLUMN VALUE FORMAT A30 TRUNCATE
SELECT NAME,VALUE FROM V$PARAMETER WHERE UPPER(NAME)='OPTIMIZER_MODE';
SELECT * FROM V$VERSION;
SET AUTOTRACE TRACEONLY EXPLAIN
SET TIMING ON
VARIABLE BPK NUMBER;
VARIABLE BD1 VARCHAR2(20);
VARIABLE BD2 VARCHAR2(20);
VARIABLE BD3 VARCHAR2(20);
DROP TABLE B;
CREATE TABLE B AS SELECT * FROM A;
ALTER TABLE B ADD CHECK ("PK" IS NOT NULL);
ALTER TABLE B ADD PRIMARY KEY (pk) USING INDEX;
CREATE INDEX BIDX1 ON B (DATA1);
CREATE INDEX BIDX2 ON B (DATA2);
CREATE INDEX BIDX3 ON B (DATA3);
CREATE INDEX BIDX4 ON B (DATA4);
SELECT * FROM B WHERE PK=12;
SELECT * FROM B WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM B WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';
EXEC :BPK := 99;
EXEC :BD1 := 'BFSS';
EXEC :BD2 := 'JGJD';
EXEC :BD3 := 'JJGJG';
SELECT * FROM B WHERE PK=:BPK;
SELECT * FROM B WHERE PK=:BPK AND DATA2=:BD2;
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SELECT * FROM B WHERE PK=:BPK AND DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(USER,'B',CASCADE=>TRUE);
END;
/
SELECT * FROM B WHERE PK=:BPK;
SELECT * FROM B WHERE PK=:BPK AND DATA2=:BD2;
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SELECT * FROM B WHERE PK=:BPK AND DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SPOOL OFF
EXIT


The output for the following query really did take me by surprise; this was the query after gathering statistics on "B":


SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;

Here is the plan:


Execution Plan
----------------------------------------------------------                                                              
Plan hash value: 849905550                                                                                              
                                                                                                                        
------------------------------------------------------------------------------------------                              
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                              
------------------------------------------------------------------------------------------                              
|   0 | SELECT STATEMENT                 |       |     1 |    30 |     7   (0)| 00:00:01 |                              
|*  1 |  TABLE ACCESS BY INDEX ROWID     | B     |     1 |    30 |     7   (0)| 00:00:01 |                              
|   2 |   BITMAP CONVERSION TO ROWIDS    |       |       |       |            |          |                              
|   3 |    BITMAP AND                    |       |       |       |            |          |                              
|   4 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  5 |      INDEX RANGE SCAN            | BIDX3 |    19 |       |     1   (0)| 00:00:01 |                              
|   6 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  7 |      INDEX RANGE SCAN            | BIDX1 |    19 |       |     6   (0)| 00:00:01 |                              
------------------------------------------------------------------------------------------                              
                                                                                                                        
Predicate Information (identified by operation id):                                                                     
---------------------------------------------------                                                                     
                                                                                                                        
   1 - filter("DATA2"=:BD2)                                                                                             
   5 - access("DATA3"=:BD3)                                                                                             
   7 - access("DATA1"=:BD1)                                                                                             

Multiple indexes (BIDX3 and BIDX1) on the same table in one query, surprising to say the least.
This particular part of the challenge may end up being a tie.

So I did some more testing:



SQL> column PLAN_TABLE_OUTPUT format a120 truncate
SQL> explain plan for SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;

Explained.

SQL> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT                                                                                                       
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 849905550                                                                                              
                                                                                                                        
------------------------------------------------------------------------------------------                              
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                              
------------------------------------------------------------------------------------------                              
|   0 | SELECT STATEMENT                 |       |     1 |    30 |     7   (0)| 00:00:01 |                              
|*  1 |  TABLE ACCESS BY INDEX ROWID     | B     |     1 |    30 |     7   (0)| 00:00:01 |                              
|   2 |   BITMAP CONVERSION TO ROWIDS    |       |       |       |            |          |                              
|   3 |    BITMAP AND                    |       |       |       |            |          |                              
|   4 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  5 |      INDEX RANGE SCAN            | BIDX3 |    19 |       |     1   (0)| 00:00:01 |                              
|   6 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  7 |      INDEX RANGE SCAN            | BIDX1 |    19 |       |     6   (0)| 00:00:01 |                              
------------------------------------------------------------------------------------------                              
                                                                                                                        
Predicate Information (identified by operation id):                                                                     
---------------------------------------------------                                                                     
                                                                                                                        
   1 - filter("DATA2"=:BD2)                                                                                             
   5 - access("DATA3"=:BD3)                                                                                             
   7 - access("DATA1"=:BD1)                                                                                             

21 rows selected.

SQL> 
PLAN_TABLE_OUTPUT                                                                                                       
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2496799660                                                                                             
                                                                                                                        
-------------------------------------------------------------------------------------                                   
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                   
-------------------------------------------------------------------------------------                                   
|   0 | SELECT STATEMENT            |       |     1 |    30 |     2   (0)| 00:00:01 |                                   
|*  1 |  TABLE ACCESS BY INDEX ROWID| B     |     1 |    30 |     2   (0)| 00:00:01 |                                   
|*  2 |   INDEX RANGE SCAN          | BIDX2 |     1 |       |     1   (0)| 00:00:01 |                                   
-------------------------------------------------------------------------------------                                   
                                                                                                                        
Predicate Information (identified by operation id):                                                                     

PLAN_TABLE_OUTPUT                                                                                                       
----------------------------------------------------------------------------------------

   1 - filter("DATA1"='DFDF' AND "DATA3"='KDWD')                                                                        
   2 - access("DATA2"='SDFVS')                                                                                          

15 rows selected.

SQL> 
SQL> 
SQL> spool off


Oracle only does the bitmap conversion when bind variables are used. A case of bad "bind variable peeking" here, I think.
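If that plan ever turns out to be a problem in practice, the textbook fix (not part of the bet, just a note to myself; BIDX123 is a hypothetical name) is a composite index covering the whole predicate, so the optimizer has no reason to AND the single-column indexes together:

```sql
-- Sketch: one composite index serving the three-column predicate,
-- replacing the bitmap AND of BIDX1 and BIDX3 with a single range scan.
CREATE INDEX BIDX123 ON B (DATA1, DATA2, DATA3);
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
```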


Friday, August 11, 2006

Multiple Oracle clients

Interesting few phone calls this morning, the back story:

We upgraded a database from 9.2.0.5 to 10gr1.

The core group of users tested it thoroughly with no problems, everything was signed off on, the upgrade went off without a hitch, and everything worked great. For less than one business day.

The system has an export facility for sending data we collect to the local government body. The GUI creates an XML file that the users then upload to the government via a web site. Some users started complaining of ORA-00600 errors when they tried to create their XML file; other users had no problems at all. This had been tested and signed off on, and the application support people could not reproduce the problem: they could successfully create the files for the users. I started going through Metalink trying to find anything related to the upgrade that we might have missed. I found nothing, but I stumbled across a similar ORA-00600 error, 16608, involving an Oracle 8 client connecting to an Oracle 10g database. I didn't think it was applicable, but I looked into it anyway. Every single person with the problem had the Oracle 8.1.7 client installed, because they use an older application that requires it, and the 10g application GUI defaults to that home. After working with the vendor, we could not force the GUI to use the Oracle 9i home on the PCs. We installed the instant client, to no avail. The application works great with the 9i client, so we had decided to slowly install the 10g client as it was needed.

I read through forum entries and the like on using multiple Oracle clients on the same PC, and none of the suggestions worked. If I removed the Oracle 8 settings from the path, the 10g application worked but the 8i application did not. If I moved the Oracle 8 settings to the end of the path, the result was the same: the 10g application worked, but the 8i application did not.


I did some head scratching, and came up with the following for starting the 10g application.


REM Point this process (and only this process) at the 9i client
set ORACLE_HOME=c:\oracle\ora92
set TNS_ADMIN=c:\oracle\ora92\network\admin
set path=c:\oracle\ora92
call "c:\program files\10gapp\10gapp.exe"

That works very well.

If anybody out there has a better way, please comment, as this is now in use on about 100 PCs throughout the company.


Thursday, August 10, 2006

I am in for it now

I have been having "heated discussions" with a few (male, read on) OCPs from a subsidiary company, arguing over some Oracle B*tree index myths and other things they "know" about Oracle. These fellows are 10g OCPs, but all straight out of university and very gung-ho: ADF over Oracle, .NET over Oracle (shudder), and no need for any business logic in the database. Very smart and knowledgeable fellows who are an asset to the company. I do the performance monitoring on their databases and just keep slamming them on their database interaction work, and they are a little sick and tired of ticket after ticket coming their way for "Bad SQL" or "Inefficient SQL" and, the one that started all this, "Unused Indexes", so they thought they would speak up. All very friendly, of course.

We came to an agreement that I would put together a nice document blowing away their index theories. I will work on the rest of their theories later. When I have successfully debunked their myths, they will wear skirts, bobby socks, a blouse and makeup to work for a period of no less than 4 straight hours, covering a minimum of two of the following: morning arrival, morning coffee break, lunch, afternoon coffee break, or 5pm departure. If I fail, I will have to wear the same outfit. We have manager approval and backing for this, and odds are it will end up in the company newsletter. Not entirely fair, as they have 9-12 people at their location and I have 200+ people in this office, but I think I have the legs to pull it off :)

I personally believe these to be myths: one, because Thomas Kyte says so, and two, because I did some testing and checked it out in the past.


The following are the 3 index myths we decided on, or rather statements I told them were wrong and they argued the point. They are going to put together tests to prove me wrong. In the event of a tie or unbreakable test cases, a simple coin toss will decide our fates. Best of 3 wins.


Myth #1

You must use an index; full table scans are always less efficient and slower.

Myth #2

If you index each individual column, the optimizer will use multiple indexes in a query if the predicate has any combination of the columns, so build an index on every column.

Myth #3

You must build a calculated column to properly index a date field, so the best way to store a date is in a VARCHAR2 field, to improve DML against the table.

Over the next few days, I will be putting together the tests and posting the results. Since we never spoke about the ability to look for outside corroboration or help, I thought I would do the tests and then run them by my blog to see if any readers can find fault in them. The due date for the tests is the end of the month; management didn't want us using too much company time for this childishness. I know I have #2 and #3 in the bag... #1 is going to be hard to prove. I think in one of Tom's books he does something similar; I will be reading those chapters over again tonight.


Parameters are Oracle 10gR2 on windows (shudder) with the following somewhat abbreviated init.ora (some parameters removed):

orcl.__db_cache_size=192937984
orcl.__java_pool_size=12582912
orcl.__large_pool_size=4194304
orcl.__shared_pool_size=75497472
orcl.__streams_pool_size=0
*.compatible='10.2.0.1.0'
*.db_block_size=8192
*.optimizer_mode=ALL_ROWS
*.db_file_multiblock_read_count=16
*.db_recovery_file_dest_size=2147483648
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=96468992
*.processes=150
*.sga_target=289406976
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'



I am probably going to regret this in some way. But WTF, you only live once. I think my wife is still laughing; she offered to shave my legs for me. Evil, evil lady :)

Stay tuned, I am going to spend some time on building these tests.

Wednesday, August 09, 2006

Statspack

This entry is mostly so I have an easy-to-reference link for answering the "how do I monitor my DB" questions that always crop up.

Using and installing

statspack for oracle9i release 2

statspack for oracle10g. Much of this is now done by AWR, but the statspack stuff is still there and included; AWR is an extra-cost option, I believe.

I can't find any online documentation for installing statspack in 8i.

But in 8.1.6+ if you read $ORACLE_HOME/rdbms/admin/spdoc.txt it tells you pretty much everything you need to know about statspack.

The spdoc.txt file exists in all versions at the same location in the Oracle home.

If you don't use statspack - learn it and use it.
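For reference, the basic install-and-snapshot cycle looks roughly like this (run the install connected AS SYSDBA; the PERFSTAT password, snapshot timing and snapshot ids are up to you):

```sql
-- Install statspack: creates the PERFSTAT schema and its objects.
@?/rdbms/admin/spcreate.sql

-- Take a snapshot as PERFSTAT at the start of the interval you
-- care about (or schedule it with dbms_job).
CONNECT perfstat
EXEC STATSPACK.SNAP;

-- ...let the workload run, then snap again...
EXEC STATSPACK.SNAP;

-- Generate a report; the script prompts for the begin and end
-- snapshot ids.
@?/rdbms/admin/spreport.sql
```

spdoc.txt covers the details, including snapshot levels and purging old snapshots.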

Over at the Pythian group, Shervin Sheidaei has started a thread on a New STATSPACK Methodology. It looks promising, and he says he has more in store for us.


Sunday, August 06, 2006

Coworker

One of our developers is moving on. He is pursuing a dream: he got a job with a company in the UK. He and his wife had been wishing to take the family to the UK for a visit, but this job presented itself and he grabbed at it. So off he and his entire family go.

The entire company will miss him. He may not have been the team leader, but he was the center of this particular application's development and support team, and many times more useful than the team leader. Already the impact of his two weeks' notice is being felt throughout the company. He is a university-trained developer who actually learned what he was taught, specialized in one application, and has become one of the best in his field of expertise.

I wish him and his family good luck and safe travels.

As you may have noticed, my job in France didn't go through; it was just too expensive for the company to move me from northern Canada to southern France. They went with somebody from the UK.

I have decided to start actively looking for work. I think I will look in places like the UK, Australia, New Zealand and Western Europe. If I am going to move, I will leave Canada, and since I can't stand the United States, I have a limited set of places to find work where it isn't under active air raids or a crumbling government.




Saturday, August 05, 2006

Retrieving disk space

There are always questions from folks asking how to reclaim disk space from database datafiles that have grown too large, usually by accident.

This can be a time-consuming task, usually involving export and import, and it can lead to large headaches. For the really large jobs, that is probably still the best way to go. For the smaller jobs I use the following method.


I put it together into a hopefully complete test. Sorry for the use of a Windoze DB as a test bed, but the VM on my laptop took a nose dive the other day and I haven't been able to recover my SuSE guest from the bowels of Windows XP.


SQL> SET TIMING ON
SQL> COLUMN MEG FORMAT 999,999,999
SQL> COLUMN SEGMENT_NAME FORMAT A25 TRUNCATE
SQL> COLUMN FILE_NAME FORMAT A30 TRUNCATE
SQL> COLUMN TABLESPACE_NAME FORMAT A25 TRUNCATE
SQL> SET LINES 130
SQL> set echo on
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> DROP TABLESPACE SIZETESTING1 INCLUDING CONTENTS;

Tablespace dropped.

Elapsed: 00:00:02.15
SQL> CREATE TABLESPACE SIZETESTING1 DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  2  SIZE 100M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 200M SEGMENT SPACE MANAGEMENT AUTO AUTOALLOCATE;

Tablespace created.

Elapsed: 00:00:04.34
SQL> CREATE TABLE THEBIGONE (THEDATA VARCHAR2(3000)) TABLESPACE SIZETESTING1;

Table created.

Elapsed: 00:00:00.03
SQL> CREATE TABLE THESMALLONE (THEDATA VARCHAR2(3000)) TABLESPACE SIZETESTING1;

Table created.

Elapsed: 00:00:00.00

I just want to put some data into the big table, then insert some data into the small table.


SQL> 
SQL> 
SQL> DECLARE
  2  L_THERECORD VARCHAR2(3000);
  3  
  4  BEGIN
  5  
  6  L_THERECORD := DBMS_RANDOM.STRING('U',2999);
  7  DBMS_OUTPUT.PUT_LINE(L_THERECORD);
  8  FOR Y IN 1..3 LOOP
  9  
 10  FOR X IN 1..10000 LOOP
 11  INSERT INTO THEBIGONE (THEDATA) VALUES (L_THERECORD);
 12  END LOOP;
 13  COMMIT;
 14  END LOOP;
 15  COMMIT;
 16  END;
 17  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:07.63
SQL> 
SQL> SELECT COUNT(1) FROM THEBIGONE;

  COUNT(1)                                                                                                                        
----------                                                                                                                        
     30000                                                                                                                        

Elapsed: 00:00:03.62
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          0                                                                                            
THEBIGONE                          120                                                                                            

Elapsed: 00:00:00.14
SQL> 
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.39
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          2                                                                                            
THEBIGONE                          120                                                                                            

Elapsed: 00:00:00.03
SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                               MEG                                                                                       
------------------------------ ------------                                                                                       
C:\ORACLE\PRODUCT\10.2.0\ORADA          130                                                                                       

Elapsed: 00:00:00.05

Now, let's fill the big table up until it blows the storage parameters.


SQL> 
SQL> DECLARE
  2  L_THERECORD VARCHAR2(3000);
  3  
  4  BEGIN
  5  
  6  L_THERECORD := DBMS_RANDOM.STRING('U',2999);
  7  DBMS_OUTPUT.PUT_LINE(L_THERECORD);
  8  FOR Y IN 1..1000 LOOP
  9  
 10  FOR X IN 1..1000 LOOP
 11  INSERT INTO THEBIGONE (THEDATA) VALUES (L_THERECORD);
 12  END LOOP;
 13  COMMIT;
 14  END LOOP;
 15  COMMIT;
 16  END;
 17  /
DECLARE
*
ERROR at line 1:
ORA-01653: unable to extend table SYSTEM.THEBIGONE by 1024 in tablespace SIZETESTING1 
ORA-06512: at line 11 


Elapsed: 00:00:08.76
SQL> 
SQL> SELECT COUNT(1) FROM THEBIGONE;

  COUNT(1)                                                                                                                        
----------                                                                                                                        
     48000                                                                                                                        

Elapsed: 00:00:04.35
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          2                                                                                            
THEBIGONE                          192                                                                                            

Elapsed: 00:00:00.26
SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                               MEG                                                                                       
------------------------------ ------------                                                                                       
C:\ORACLE\PRODUCT\10.2.0\ORADA          200                                                                                       

Elapsed: 00:00:00.03

Let's empty out the big table and add some more rows to the small table.




SQL> TRUNCATE TABLE THEBIGONE;

Table truncated.

Elapsed: 00:00:03.16
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.13
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.01
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.11
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.02
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.19
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.02
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          7                                                                                            
THEBIGONE                            0                                                                                            

Elapsed: 00:00:00.03
SQL> 
SQL> SELECT * FROM( SELECT file_name,     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) smallest,    ceil( blocks*C.VALUE/1024/1024) currsize,
  2       ceil( blocks*C.VALUE/1024/1024) -     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) savings
  3       ,a.autoextensible  ,a.maxbytes from dba_data_files a,      ( select file_id, max(block_id+blocks-1) hwm
  4         from dba_extents     group by file_id ) b    ,(select value from v$parameter where name = 'db_block_size') C
  5  where a.file_id = b.file_id(+) AND a.file_name='C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  6  ) WHERE 1=1 ;

FILE_NAME                        SMALLEST   CURRSIZE    SAVINGS AUT   MAXBYTES                                                    
------------------------------ ---------- ---------- ---------- --- ----------                                                    
C:\ORACLE\PRODUCT\10.2.0\ORADA        123        200         77 YES  209715200                                                    

Elapsed: 00:00:16.80

The smallest we can shrink the datafile to is 123M, because the small table's extents sit up near the end of the file. So let's set up a holding tablespace to move the small table to while we clean up.



SQL> 
SQL> 
SQL> DROP TABLESPACE HOLDING INCLUDING CONTENTS;

Tablespace dropped.

Elapsed: 00:00:00.88
SQL> CREATE TABLESPACE HOLDING DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\HOLDING.DBF'
  2  SIZE 10M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 200M SEGMENT SPACE MANAGEMENT AUTO AUTOALLOCATE;

Tablespace created.

Elapsed: 00:00:01.10

And lets move it.




SQL> 
SQL> ALTER TABLE THESMALLONE MOVE TABLESPACE HOLDING;

Table altered.

Elapsed: 00:00:00.67
SQL> 
SQL> SELECT * FROM( SELECT file_name,     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) smallest,    ceil( blocks*C.VALUE/1024/1024) currsize,
  2       ceil( blocks*C.VALUE/1024/1024) -     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) savings
  3       ,a.autoextensible  ,a.maxbytes from dba_data_files a,      ( select file_id, max(block_id+blocks-1) hwm
  4         from dba_extents     group by file_id ) b    ,(select value from v$parameter where name = 'db_block_size') C
  5  where a.file_id = b.file_id(+) AND a.file_name='C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  6  ) WHERE 1=1 ;

FILE_NAME                        SMALLEST   CURRSIZE    SAVINGS AUT   MAXBYTES                                                    
------------------------------ ---------- ---------- ---------- --- ----------                                                    
C:\ORACLE\PRODUCT\10.2.0\ORADA          1        200        199 YES  209715200                                                    

Elapsed: 00:00:16.68

There we go, we can shrink it down to 1M. So let's do it.




SQL> ALTER database datafile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF' resize 1m;

Database altered.

Elapsed: 00:00:00.49

Move the table back.



SQL> ALTER TABLE THESMALLONE MOVE TABLESPACE SIZETESTING1;

Table altered.

Elapsed: 00:00:01.29
SQL> SELECT TABLESPACE_NAME,SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY TABLESPACE_NAME,SEGMENT_NAME;

TABLESPACE_NAME           SEGMENT_NAME                       MEG                                                                  
------------------------- ------------------------- ------------                                                                  
SIZETESTING1              THEBIGONE                            0                                                                  
SIZETESTING1              THESMALLONE                          7                                                                  

Elapsed: 00:00:00.03

SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                             MEG
------------------------------ ----------
C:\ORACLE\PRODUCT\10.2.0\ORADA         11

Elapsed: 00:00:00.03

SQL> SPOOL OFF;


Let's take a look at what it looks like on disk.


C:\oracle\product\10.2.0\oradata\LAPTOP>DIR SIZE*
 Volume in drive C has no label.
 
 Directory of C:\oracle\product\10.2.0\oradata\LAPTOP

08/05/2006  02:46 PM        11,542,528 SIZETESTING1.DBF
               1 File(s)     11,542,528 bytes
               0 Dir(s)  85,625,917,440 bytes free

This works well and can be expanded to cover more tables similar to THESMALLONE. Be careful moving tables online, check the documentation first, and don't take my word for it. Test, test, test.
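As a recap, the whole method boils down to a few statements. Generic names and paths here, and an untested sketch; adapt everything to your own database:

```sql
-- 1. Find how far down each datafile can be shrunk: the high-water
--    mark is the last block of the highest-placed extent in the file.
SELECT a.file_name,
       CEIL(NVL(b.hwm, 1) * c.value / 1024 / 1024) smallest_mb,
       CEIL(a.blocks * c.value / 1024 / 1024)      current_mb
FROM   dba_data_files a,
       (SELECT file_id, MAX(block_id + blocks - 1) hwm
        FROM   dba_extents
        GROUP  BY file_id) b,
       (SELECT value FROM v$parameter WHERE name = 'db_block_size') c
WHERE  a.file_id = b.file_id(+);

-- 2. Move the surviving tables to a holding tablespace so their
--    extents no longer pin the high-water mark. Note that indexes on
--    a moved table go UNUSABLE and must be rebuilt afterward.
ALTER TABLE small_table MOVE TABLESPACE holding;

-- 3. Shrink the datafile down toward the high-water mark.
ALTER DATABASE DATAFILE '/path/to/datafile.dbf' RESIZE 1M;

-- 4. Move the tables back and rebuild any indexes.
ALTER TABLE small_table MOVE TABLESPACE original_ts;
```

The index-rebuild step doesn't show in my transcript above because THESMALLONE had no indexes; on a real table you will want it.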


Writing an external audit file

A while back I answered a question from somebody, I believe on Dizwell, on how to write a custom trigger that would write a record to an external file with every insert. I stumbled across the script here and thought I would post it.
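The example assumes roughly the following setup. I have reconstructed the names from the code below, and the column sizes and the directory path (C:\TEMP) are assumptions, so treat this as a sketch:

```sql
-- Reconstructed setup for the audit example; sizes and path assumed.
CREATE TABLE audtest (
    pk       NUMBER PRIMARY KEY,
    somedata VARCHAR2(50),
    jobnum   NUMBER          -- filled in by the trigger below
);

CREATE SEQUENCE audtest_seq;

-- UTL_FILE needs a directory object pointing at the target folder
-- (or utl_file_dir in older releases).
CREATE OR REPLACE DIRECTORY audtest_dir AS 'C:\TEMP';
```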




CREATE OR REPLACE TRIGGER audtest_bi
 BEFORE INSERT
 ON audtest
REFERENCING NEW AS NEW OLD AS OLD
 FOR EACH ROW
DECLARE
    jobout NUMBER;
BEGIN
    -- Queue the audit write as a job. A job submitted inside a
    -- transaction only becomes runnable when that transaction
    -- commits, so a rolled-back insert never reaches the audit file.
    dbms_job.submit( jobout, 'BEGIN AUDWRITE(' || :NEW.pk || '); END;', SYSDATE );
    :NEW.jobnum := jobout;
END;
/

CREATE OR REPLACE PROCEDURE audwrite( i_pk NUMBER ) AS
    l_output UTL_FILE.FILE_TYPE;
BEGIN
    -- Append the committed row to the audit file via the
    -- AUDTEST_DIR directory object.
    l_output := UTL_FILE.FOPEN( 'AUDTEST_DIR', 'AUDTEST_AUDIT.CSV', 'A' );

    FOR line IN ( SELECT * FROM audtest WHERE pk = i_pk ) LOOP
        UTL_FILE.PUT( l_output, line.pk || ',' || line.somedata );
        UTL_FILE.NEW_LINE( l_output );
    END LOOP;

    UTL_FILE.FCLOSE( l_output );
END;
/


INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));


--SHOULD BE 5 JOBS
SELECT * FROM USER_JOBS;

--SHOULD BE 5 ROWS WITH JOBNUM FILLED IN
SELECT * FROM AUDTEST;


--WON'T BE A FILE IN C:\TEMP
COMMIT;
--NOW THERE WILL BE A FILE IN C:\TEMP




Some work and a lot of testing will need to be done if this is to go into production, but it might be of use to somebody.

Wednesday, August 02, 2006

Dizwell: almost back in the saddle again

Howard has started on the long trek to getting his amazing website back.

The new URL (for now?) is http://www.dizwell.com/prod/

I wish him luck