Monday, March 27, 2006

Scripts

This is a repost of a post I made earlier in March; I simply could not convince Blogspot to properly display the scripts with the fonts etc. that I wanted. The first script (current_users.sql) gives you the SID and SERIAL# for every session. So, now you have a session that has been active for too long, or is using a lot of CPU? Then trace the session remotely.

SQL> exec sys.dbms_system.set_ev(:SID,:SERIAL, 10046,12,'');

I have another script I use that you simply pass a SPID to and it figures everything out for you; I will post it later. You will then have a trace file sitting on the server in the UDUMP location. You can then use tkprof to create an output file you can read. Using the parameter sort=prsela,fchela,exeela sorts the statements from longest running to shortest running, so hopefully the offender is at or near the top. To get the trace file name (dedicated server) you can run the following:
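If you are done early, the same call switches the trace back off; a minimal sketch (level 0 disables the 10046 event):

```sql
-- turn the 10046 trace off again for the same session (level 0 = off)
exec sys.dbms_system.set_ev(:SID,:SERIAL, 10046,0,'');
```

Then run tkprof against the resulting trace file, e.g. tkprof &lt;tracefile&gt; &lt;outputfile&gt; sort=prsela,fchela,exeela.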


SELECT PAR.VALUE||'/'|| 'ora_'||P.SPID||'_'||lower(I.INSTANCE_NAME)||'.trc'
FROM V$SESSION S, V$PROCESS P, V$INSTANCE I,
V$PARAMETER PAR
WHERE
1=1
AND S.PADDR = P.ADDR
AND P.SPID IN (&THESPID)
AND PAR.NAME = 'user_dump_dest'
;

You will have to edit the output to take into consideration the case sensitivity of your parameters. Don't want to trace, or the session is "locked"? Then I also have another script in this for you.
current_sql.sql
Simply call this script and pass the SPID to it, or more than one SPID with them comma separated (hence the IN). I chose SPID because in a UNIX environment they are easy to find and they relate the session to the OS. I haven't attempted this script in the Windows world.


DEFINE THESPID = &1
SET VERIFY OFF
SET LINES 132
SET PAGES 40
COLUMN SIDSER FORMAT A12
COLUMN SQL_TEXT FORMAT A40
COLUMN STATUS FORMAT A10
COLUMN SPID FORMAT A10 TRUNCATE
COLUMN USERNAME FORMAT A10 TRUNCATE
SELECT P.SPID ,S.SID||','||S.SERIAL# SIDSER
,T.SQL_TEXT ,S.STATUS ,S.USERNAME
FROM V$SESSION S, V$SQLAREA T
, V$PROCESS P, V$SESSION_LONGOPS L
WHERE
1=1
AND S.PADDR = P.ADDR (+)
AND S.SQL_HASH_VALUE = T.HASH_VALUE (+)
AND P.SPID IN (&THESPID)
AND L.SQL_HASH_VALUE (+) = S.SQL_HASH_VALUE
GROUP BY
P.SPID,S.SID,S.SERIAL#,T.SQL_TEXT
,S.STATUS,S.USERNAME
/

COLUMN OPNAME FORMAT A10 TRUNC
COLUMN TARGET FORMAT A15 TRUNC
COLUMN MESSAGE FORMAT A40 WRAP
COLUMN SEC_REM FORMAT A10
COLUMN MINS_REM FORMAT A10
COLUMN WAIT NEWLINE FORMAT A60
SELECT DISTINCT L.OPNAME,L.TARGET,L.MESSAGE
,TO_CHAR(TIME_REMAINING) SEC_REM
,TO_CHAR(ROUND((TIME_REMAINING/60),2)) MINS_REM
FROM V$SESSION S, V$SQLAREA T, V$PROCESS P
, V$SESSION_LONGOPS L
WHERE
1=1
AND S.PADDR = P.ADDR (+)
AND S.SQL_HASH_VALUE = T.HASH_VALUE (+)
AND P.SPID IN (&THESPID)
AND L.SQL_HASH_VALUE (+) = S.SQL_HASH_VALUE
;


You get the SQL currently executing, as well as anything in v$session_longops. Very helpful in tracking down problems.

Sunday, March 26, 2006

Newsgroups

Here are some interesting posts on
comp.databases.oracle.server

Follow-up (new post).

This person is complaining to the newsgroup that the information provided by the helpful folks in the group was not appropriate and actually caused the company the person is employed with to pay a fine. I would assume because the person wasn't able to get the instant answer they were looking for, so it is the community's fault.

This person really flabbergasts me. I would love to have a long conversation with this person going over the basics of problem solving and the importance of testing and benchmarking thoroughly. I have a feeling it would be a waste of time and effort on my part, something akin to talking to a lamp post.

News groups are great, forums are great. Excellent way to get information. But always check that information out first.

After reading a few other posts by the same author, I would recommend a few hundred hours of professional Oracle training for the individual.

Wednesday, March 22, 2006

Statspack

Quick SQL to submit a statspack snapshot every 10 minutes for 12 hours... or so.
set lines 132
show parameter job
DECLARE
JOBOUT NUMBER;
START_TIME DATE;
BEGIN
-- one snapshot right away
DBMS_JOB.SUBMIT(JOBOUT,'statspack.snap;',SYSDATE,NULL);
-- then one every 10 minutes for the next 12 hours
FOR I IN 1 .. 72 LOOP
START_TIME := SYSDATE + (I*10)/(24*60);
DBMS_JOB.SUBMIT(JOBOUT,'statspack.snap;',START_TIME,NULL);
END LOOP;
COMMIT;
END;
/
SELECT
JOB
,to_char(next_date,'DD-MON-YYYY HH24:MI:SS') next_date
from
user_jobs
order by
next_date
/
exit

Tuesday, March 21, 2006

Oh no... no no no no no

I was "ordered" today to do an Application Express presentation and demonstration tomorrow during the lunch break of a meeting. TOMORROW... yeah... lucky me. What a nightmare, less than 24 hours notice. First presentation in... ummm... 9 years... yeah, that's nine years. Only 15 people, but my boss (may his soul rest in peace, he is the one asking for the presentation), IT managers from 4 other sites and some senior developers. All knowledgeable, and all with relatively powerful influence in the company. I have been busy making data, building a demonstration and stealing PowerPoint slides from Oracle Corporation. I am to show its benefits and why we should be using it. I have to show why it is better than Lotus Notes "databases" and "applications".

Monday, March 20, 2006

Script to find all the files

I have built this over the years.. stolen pieces, added pieces, removed pieces. I am sure the baseline came from some public source. If you can identify it let me know. This is a script I run every 24 hours on every one of my databases and store the results.


SELECT
A.*
FROM
(
SELECT
DECODE(D.TABLESPACE_NAME,NULL,DECODE(FILES.TYPE,'CONTROL','CONTROL_FILE','LOG','REDO_LOG_FILE'
,'ARCHIVE_LOG_DEST','ARCHIVE_LOG_DEST','DUMP_DEST','DUMP_DEST'
,'TEMP','TRUE_TEMP'),D.TABLESPACE_NAME) "TABLESPACE_NAME"
,SUBSTR(FILES.FILE_NAME,0,INSTR(SUBSTR(FILES.FILE_NAME
,INSTR(FILES.FILE_NAME,'/')+1,LENGTH(FILES.FILE_NAME)),'/') )"MOUNT_POINT"
,FILES.FILE_NAME "FILE_NAME_PATH"
,TO_CHAR(FILES.BYTES / 1024/1024,'999,999,999.99') "MEG"
,COUNT "BUFFER_WAITS"
, TIME "SECONDS_WAITED"
,D.AUTOEXTENSIBLE
,D.MAXBYTES/1024/1024 "MAX_MEG"
,TO_CHAR(FILE_FREE.FREE_BYTES / 1024/1024,'999,999,999.99') "FILE_FREE_MEG"
,HIGH.HIGHWATERM
FROM
(
SELECT FILE_NAME, BYTES "BYTES", 'DATA' "TYPE", FILE_ID FROM DBA_DATA_FILES
UNION ALL SELECT MEMBER "FILE_NAME", BYTES "BYTES", 'LOG' "TYPE", 0 "FILE_ID"
  FROM V$LOGFILE F, V$LOG L WHERE F.GROUP# = L.GROUP#
UNION ALL SELECT NAME "FILE_NAME", SUM(RECORD_SIZE) "BYTES", 'CONTROL' "TYPE", 0 FILE_ID
  FROM V$CONTROLFILE, V$CONTROLFILE_RECORD_SECTION GROUP BY NAME
UNION ALL SELECT REPLACE(VALUE,'location=','') "FILE_NAME", 0 "BYTES", 'ARCHIVE_LOG_DEST' "TYPE", 0 FILE_ID
  FROM V$PARAMETER VP WHERE VP.NAME LIKE 'log_archive_dest%' AND VP.VALUE IS NOT NULL AND VP.ISDEFAULT = 'FALSE'
UNION ALL SELECT VALUE "FILE_NAME", 0 "BYTES", 'DUMP_DEST' "TYPE", 0 FILE_ID
  FROM V$PARAMETER VP WHERE UPPER(NAME) LIKE '%DUMP%DEST%'
UNION ALL SELECT NAME "FILE_NAME", BYTES, 'TEMP' "TYPE", 0 FILE_ID FROM V$TEMPFILE
) FILES
,V$INSTANCE V
,DBA_DATA_FILES D
,(SELECT * FROM v$datafile df,x$kcbfwait fw WHERE 1=1 AND FW.INDX+1 = DF.FILE#) WAITS
,(SELECT FILE_ID,SUM(BYTES) "FREE_BYTES" FROM DBA_FREE_SPACE GROUP BY FILE_ID) FILE_FREE
,(SELECT A.TABLESPACE_NAME ,A.FILE_NAME ,A.FILE_ID,
ROUND(((B.MAXIMUM+C.BLOCKS-1)*D.DB_BLOCK_SIZE)/1024/1024) HIGHWATERM
FROM DBA_DATA_FILES A ,(SELECT FILE_ID,MAX(BLOCK_ID) MAXIMUM FROM DBA_EXTENTS GROUP BY FILE_ID) B
,DBA_EXTENTS C ,(SELECT VALUE DB_BLOCK_SIZE FROM V$PARAMETER WHERE UPPER(NAME)='DB_BLOCK_SIZE') D
WHERE 1=1 AND A.FILE_ID = B.FILE_ID AND C.FILE_ID = B.FILE_ID AND C.BLOCK_ID = B.MAXIMUM ORDER BY
A.TABLESPACE_NAME
,A.FILE_NAME
) HIGH
WHERE
1=1
AND D.FILE_NAME (+)=FILES.FILE_NAME
AND D.FILE_ID= WAITS.FILE# (+)
AND FILE_FREE.FILE_ID(+) = FILES.FILE_ID
AND HIGH.FILE_ID (+) = FILES.FILE_ID
ORDER BY SUBSTR(FILES.FILE_NAME,0,INSTR(SUBSTR(FILES.FILE_NAME,INSTR(FILES.FILE_NAME,'/')+1,LENGTH(FILES.FILE_NAME)),'/'))
,REPLACE(REPLACE(FILES.FILE_NAME,SUBSTR(FILES.FILE_NAME,0,INSTR(SUBSTR(FILES.FILE_NAME,INSTR(FILES.FILE_NAME,'/')+1,LENGTH(FILES.FILE_NAME)),'/'))||'/',NULL),'oracle/'||V.INSTANCE_NAME||'/',NULL)
) a
WHERE
1=1
ORDER BY 1,2,3,4
;
It certainly isn't the fastest or the prettiest script out there, but the information it provides is beneficial. You will need to modify the mount point parsing to match your environment. I couldn't find the one I built for Windows, which takes that into account.
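To see what that mount point parsing actually returns, here is a quick worked example against DUAL; the SUBSTR/INSTR combination grabs everything up to (but not including) the second '/':

```sql
-- worked example of the mount point expression for a typical path
SELECT SUBSTR(F,0,INSTR(SUBSTR(F,INSTR(F,'/')+1,LENGTH(F)),'/')) MOUNT_POINT
FROM (SELECT '/u01/oradata/PROD/system01.dbf' F FROM DUAL);
-- returns: /u01
```

If your files live under deeper mount points (or drive letters on Windows), that expression is the piece to adjust.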

Sunday, March 19, 2006

Application upgrade

Further to my post about project management and notification, I get myself invited to a planning meeting for the application upgrade. They have done vast amounts of testing; they have 4 databases, they have tested the application upgrade in all of them, and everything works smoothly. It takes about 8 hours for the patching scripts to run, then another 12-13 hours for the data load of the old data extracted into the newly patched application. I simply ask one question: "Do you think you have enough room in production for all that data?". I know I am in for problems when I get a bunch of blank stares and open mouths. "There should be" and "yeah, probably" are the common answers... then comes the answer that almost cost me my job, as I almost couldn't restrain myself from telling this senior fellow exactly what I thought of him: "Well, that's YOUR job isn't it". I dang near lost it on the fellow. So I simply snapped back something to the effect of: you should have told me sometime in the last two months, not less than a week from go live; you're not my wife and I can't read your mind.

So, after the meeting I go and take a look at the 2 dev databases I built for them. I know how big they were when I built them, and I know how big they are now. Final numbers: 23 gig larger. 23 gig for an application patch. 23 gig. I am stunned. I can't get over it. I check the other database, yeah, 23 gig. More than a 50% increase over the size that was there. I talk with the support people; they once again give the blank stares and open mouth answers I hate. There is lots of room in production for the data. I make sure the UNDO is set to grow and there is enough room for the archive logs. I am thinking of approaching them with some hints and suggestions on speeding up the data load, as I have caught a snippet or two of their insert scripts. Pure insert scripts into tables with triggers and indexes active. If they are happy with sitting in the office for 24 hours, so be it.
I recommend scheduling a full RMAN backup while the users are in, then, after the users are out, a full export. Then to top it all off, a cold backup of the database, as it has to go down for the patching anyway. I have an export shell script that splits the database up into chunks based on dba_segments, aiming for 1 gig files, actually building export parameter files listing the tables to keep the files under 1 gig. For the objects over one gig, I create parameter files that use the export functionality to split the output into 1 gig files. Then it creates the pipe files for compressing on the fly, and groups the exports into 4 shell scripts so I can run the exports as 4 separate processes on the server. It took me about 1 week to make the script, and it proved its usefulness the first time I used it. I will post it here if enough people ask.
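The heart of the splitting logic is nothing more than sizing from dba_segments; a simplified sketch of the driving query (the real script also walks the result, packing tables into parameter files until each group nears 1 gig):

```sql
-- table sizes in meg, biggest first, used to pack tables
-- into roughly 1 gig export parameter files
SELECT OWNER, SEGMENT_NAME, ROUND(BYTES/1024/1024) MEG
FROM DBA_SEGMENTS
WHERE SEGMENT_TYPE = 'TABLE'
ORDER BY BYTES DESC;
```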

Experience and Exuberance

I have every DBA's dream, a new developer, fresh out of school (overseas). Trained and experienced in Java, JSP, .NET, PL/SQL and SQL. He started about 2 months ago and was first learning one of our large apps to be an application support person before being turned over to me for training in everything oracle. Great guy, full of ideas. Nothing but good comments about him.

I am under instruction to teach him how, in my mind, to properly code and develop Oracle based applications in our environment. Suddenly, this dream turned into a nightmare. Being a lean shop, we normally outsource our big applications or get them "off the shelf". We have a mess of different versions of the database, different contractors supporting little bits and pieces of some applications, and not others. Home grown integration programs, ETL utilities that date back to DOS 5.0 days converting data from a mainframe system that was to be replaced in 1998 (commissioned 1990) and is still ticking today. The only standard we have right now is that we have no standards, but we are working towards one. For someone that has been around for ages, it all makes sense - until you try to explain it to someone new. Oh boy.

I spent about 3 hours with the fellow, going over basics. Most of which he knew, but he was polite and let me ramble on to him anyway. When I say basics - I mean basics. Don't use weak passwords, name your accounts descriptively, use packages instead of procedures/functions, primary keys, foreign keys, do the work in the database- not the front end, bind variables, bind variables, benchmark and test and document before moving from dev/test to production. I am here to help.

I started to focus in on a single system, one that interfaces with our Oracle Financials system and always has a list of changes wanted/needed to the interface. It is, based on storage and user base, the 3rd largest application in our company. The owner of the software vendor actually used to work for our company, he quit and started selling us software. He and his people write good stuff, very focused, no fluff.

The particular change I started to describe to him is one I thought would be a good introduction to the thousands of lines of interface code; we interface accounts payable invoices, fixed asset information, general ledger and even accounts receivable into and from our financials system. The users are constantly choosing the wrong financials period when they post their data. They must choose a period, because they can post into many periods, so we can't figure it out for them.

When they pick a wrong period - specifically a period that is closed - it involves IT support to go in, clean up the source system and the financials system and let the users send it into the proper period again. Only about 30 minutes work, but one month end we had 9 users do it wrong at different times of the day. We need something to look across into the financials system and see what period is opened and closed. Easy enough.

We go into the financials system, and work out a view on the tables that show what period is opened per module. He comprehends that all with great speed.
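The view itself boiled down to something along these lines. Take this as a sketch only: the table and column names here (GL_PERIOD_STATUSES, CLOSING_STATUS) are from memory and vary with the Financials release, so check your own data dictionary first.

```sql
-- sketch: which GL periods are currently open, per module (application)
CREATE OR REPLACE VIEW OPEN_FIN_PERIODS AS
SELECT PERIOD_NAME, APPLICATION_ID, SET_OF_BOOKS_ID
FROM GL_PERIOD_STATUSES
WHERE CLOSING_STATUS = 'O'  -- 'O' = open
;
```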

We then go into the source system and start to work out the best place to put the check. We decide it had better go at the very beginning of the generation phase, so as not to waste too much of the users' time. So I show him where it should be, he understands, and I let him get to work. About 2 - 3 hours worth of coding work to get it all completed on the backend and on the front end.

He sends me an email early the next morning; he documented everything. He had decided to put the code into a trigger on the table that holds the batch information, instead of in the PL/SQL that does the generation, because the PL/SQL is controlled by the application vendor, everything links to that table for the header/details, and a single record is created no matter how much data is being sent. Seems to be a reasonable place to put it. I go over his documentation (not the code) and it seems to make sense. I have him clear it with one of the "power users" and then give the go ahead to put it into test and let some users at it.

A couple of days go by, so I ask him why I haven't heard from him.
He says he can't figure it out: he runs his code in an anonymous block and it returns in about 2-3 seconds, yet when he puts it into the trigger on the batch table and creates a batch, he has waited hours for the batch to return. Interesting. OK, well, I ask him to create a test schema and take a copy of the tables so we can work on it in there. We spend an hour or so and I can't see any problems. I watch him put it into the test database; looks good. We generate a batch and well... minutes turn to tens of minutes. I look at the code... looks great. Copy it out, works great.

WTF? I look into it, and I just can't figure it out. So far we have spent about 5 days on these 16 lines of code. Everything works. I take a step back. Think about things. Stop thinking like a developer; go to DBA mode.

Go to DBA_OBJECTS and search for objects changed in the last 5 days. Find the before insert for each row trigger on the batch table and, what's this, changes to the before insert for each row trigger on the invoice detail table. Take a look at the trigger. There is an exact duplicate of his perfectly working code in it. So, basically, for every record put into the invoice detail table, it queries our financials system to make sure the period attached to the invoice is valid and open. Take a look: the test batch he was trying to generate would have over one hundred thousand detail lines.

"Whoops there it is".

Quick fix and a recompile and his code now works wonderfully well. No perceptible difference in speed to the users and they now can't generate to a closed period.
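For the record, the DBA mode query that cracked it is nothing fancy, just a look at recent DDL:

```sql
-- anything changed in the last 5 days
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, LAST_DDL_TIME
FROM DBA_OBJECTS
WHERE LAST_DDL_TIME > SYSDATE - 5
ORDER BY LAST_DDL_TIME;
```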

My experience should have told me to keep better track of his exuberance.

He is still a great guy, and now really focused on what he has open in what window.

Thursday, March 16, 2006

Notification and project management

There always seems to be a lack of that. Can't get over it. I was told yesterday of a huge 3 day outage in 10 days. I have apparently been scheduled to do a bunch of late night/early morning work over the weekend. I am out of town that weekend (already planned) and that seemed to throw a huge wrench into their plans. DBA work critical to the application patching... no DBA. They have known for 2 months. First word I got of it was an email from a contractor asking me if I could pick her up at the airport since I will be at the office in the morning on Saturday. I had even booked the Monday off, and the manager who is supposedly managing this project missed that.

Monday, March 13, 2006

ORA-04031 errors

Started getting ORA-04031 errors today in one of our 24x7 databases. That is a good way to ruin your day, but this was a 9i database - that isn't supposed to happen. I run scripts from our central database that monitor possible shared pool fragmentation (among other things) every ½ hour, chart it all and, more importantly, send an email when things are looking like they could be a problem. I was amazed: the utility never reported a problem via email. I started looking at the issue, had 15 phone calls in 5 minutes, so I flushed the shared pool to at least get people into the system. I checked the shared pool right after I flushed it; in only a few seconds it was already up to 70% usage (450 meg shared pool).
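For what it is worth, the two statements I was leaning on while firefighting are standard stuff:

```sql
-- how much shared pool is actually free right now
SELECT ROUND(BYTES/1024/1024,1) FREE_MEG
FROM V$SGASTAT
WHERE POOL = 'shared pool' AND NAME = 'free memory';

-- the hammer: a stopgap only, it does not fix the bind variable problem
ALTER SYSTEM FLUSH SHARED_POOL;
```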

No way, that just isn't right. The application designers had never heard of the term "bind variable" before coming to our shop; it is one of these "database independent", middle tier independent, we-just-run-Java-Server-Pages applications. A horror show and a constant source of "it's too slow" complaints from all 500+ users. After a few minutes of investigation, I found a session that had been running for 105 minutes and was still going. No way, my utility would have told me that; it looks for things like that. I am getting phone calls again that people are getting 04031 errors when opening screens. Take a look, 100% shared pool, so I flush it again.


I can now deal with the session, turns out it is pulling massive amounts of data over a DB link from our financials systems (a couple million rows) with thousands and thousands of queries. I hate integration done by developers. I kill both sessions from both databases and flush the shared pool again. 10 minutes later, 55% usage, the balance is restored. No more complaints. Turns out the guy that started the report, went HOME.


Now, onto my utility. It can't be broken; I wrote it, and I normally write really good stuff. It is accessed via an APEX front end using LDAP authentication… thing of beauty. I pop straight to the logging screen: yup, there are the connections to the problem database. Drill into the results: yup, there is a possible shared pool fragmentation alert and a long running query alert. Go to the outgoing email logs: yup, 3 emails sent 12 minutes before all of the phone calls.

Check my inbox and my blackberry: no messages. So, I think email server problems. Go check the email server; I can ping it and send an email from my desktop. Check the SMTP servers; they are all working, I can connect and do a POP command. Now what… go to the machine the utility DB runs on, manually send an email, wait, wait, wait… hmmm, not getting the email. Try from another machine. Not getting the email. Try from gmail, no email.

I get one of our tech guys to check the mail server. Find out that the Lotus Notes mail server has decided to stop forwarding email; actually, it turns out all 3 Notes servers decided that forwarding mail is a bad thing, no reason why, just "Failure to process message". Over 3000 messages lost, gone, poof, but luckily a reboot fixes it (must be a Microsoft server).

Those emails appear to be gone forever though; I certainly hope the CEO wasn't expecting anything important. I then proceed to calmly explain to my manager that he is going to have dozens of phone calls, and other managers coming to him for the next week, complaining that their site lost money and people can't do their work because the "database crashed", and how could we let this happen, and why do they pay so much of their monthly department earnings to the IT department?

The managers of course haven't the foggiest idea what a "database" is, or care when it is explained to them that it was the fact some goober accountant decided he really and truly needed all of the detail data in our financials system over the last 7 years in his Crystal Report, so he actually caused all of the problems. My poor manager is going to get raked over the coals, he is going to get all of this, because LOTUS NOTES SUCKS.

Friday, March 10, 2006

Tracing

Another SQL script:

set echo off
set heading off
column trace new_val tracefilename
set autotrace off
select c.value||'/'||d.instance_name||'_ora_'||a.spid||'.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d where a.addr = b.paddr
and b.audsid = userenv('sessionid') and c.name = 'user_dump_dest'
/
set heading on
set echo on
alter session set events '10046 trace name context forever, level 12'
/
set timing on
set autotrace on
set echo off
prompt set autotrace traceonly
prompt if you do not want rows returned


I have the file name set as an environment variable in UNIX:
export traceme=/sql/traceme

I just run

SQL> @$traceme

This is a modification of the getrace.sql from Tom Kyte.
Boy... I am beginning to wonder if I am a Kyte stalker??



Oracle financials

I also have to administer a mid sized Oracle Financials (E-Business Suite, or whatever they call it today) database. It is still stuck on 8.1.7.4 and is a thorn in my side. What a convoluted mess of technology, and since it is a tool from Oracle, they of course use very little of the database's capabilities or technologies. We are planning an upgrade to 10g this year, so after all that work I hope they have improved their application. But the users love it; it is their baby. The amount of paper that database generates via the reports is freaking amazing.

Thursday, March 09, 2006

First Script for you

We have a fair turnover of Oracle related people in the company; people come and people go... I stay. I am the senior DBA, and really, the other 2 "DBAs" are great people. But they are only DBAs when I am on holidays or sick; otherwise they are application support people... and then my blackberry gets a LOT of use. We just hired a "mini-me" to become a DBA (ex-developer), and he is great - green when it comes to DBA work - but great, and he LOVES to read manuals. I am looking forward to when he is trained and an asset.

90% of our applications are oracle based, our IT manager believes heavily in the centralized mainframe type concept. I agree with it too. The vast majority of our equipment and databases reside in the one city and we simply have WAN connections to our apps, either client/server, Citrix, or web.

This means unfortunately, we have a few servers, but lots of databases. Our "big iron" is HP-UX servers with a smattering of Linux and the biggest waste of time around, Windows.

With all of the people coming and going, all "experts", and in talking with other DBAs that I interact with - mostly contractors on certain projects, or the resident DBA for a vendor - I find it astonishing how many of them know of the V$SESSION view, but really don't know anything about it: what the things in it mean, or what good it is to them in conjunction with other tables.

This script works when properly called by a .sh or a .bat file, and is a very simple listing of who is in the database, what they are, and some information to track them down.
The information it shows is great, and boom, you know who/what/when/where, and you can find out why with other scripts (to come later).

col logon_time format a18
col osuser format a8
col type format a4 truncated
col machine format a14
col sid_ser format a10
col pid format 999999
col spid format a7
col program format a30 truncated
col schemaname format a8
col status format a11
set linesize 180
set pages 5000
spool /tmp/session8i.log
column host_name format a8
column archiver format a8
column log_switch_wait format a15
column instance_name format a8
column version format a12
column shutdown_pending format a16
column username format a10
column LOGON_TIME format a16
column osuser format a10 truncated
column machine format a11 truncated
column status format a10 truncated
column program format a8 truncated
column unixpid format a8 truncated

set pagesize 55
set lines 132
select
instance_name, host_name, version,
to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') STARTUP_TIME
,(select
SUBSTR(TRUNC(SYSDATE-logon_time)||' Days :'
||TRUNC(((SYSDATE-logon_time)-TRUNC(SYSDATE-logon_time))*24)||' hours:'
||TRUNC((((SYSDATE-logon_time)-TRUNC(SYSDATE-logon_time))*24
-TRUNC(((SYSDATE-logon_time)-TRUNC(SYSDATE-logon_time))*24))*60)||' mins'
,0,25
)
"UPTIME"
FROM sys.v_$session a
where sid=1 /* this is PMON */
) "UPTIME"
, status
, database_status
,archiver
,logins
from v$instance;

set head off
set pagesize 5000

prompt Type OSUser Oracle User Machine Login SID,SERIAL UnixPID Program Status
prompt ---- -------- ---------- -------------- --------------- ---------- ------- ------- ------

select s.type
, upper(nvl(s.osuser,'JOB')) osuser
, s.username
--, s.machine
,trim(substr(s.machine,INSTR(s.machine,'\')+1,LENGTH(s.machine))) machine
, trim(TO_CHAR(s.logon_time,'DD-MON HH24:MI:SS')) "LOGON_TIME"
, TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_ser
, trim(substr(nvl(replace(server,' '),'D'),0,1)||' '||p.spid) unixpid
, upper(trim(s.program)) program
, s.status
from v$process p, v$session s
where p.addr=s.paddr
order by decode(s.type,'BACKGROUND',0,'USER',1), decode(s.osuser,null,'XXX',s.status) desc, server
, decode(s.osuser,'oracle',0,null,-10,2), logon_time
;

prompt to kill a session, as sys execute this command:
prompt alter system kill session 'SID,SERIAL'
rem prompt to turn trace on a session, as sys execute this command:
rem prompt exec sys.dbms_system.set_sql_trace_in_session(SID,SERIAL,true);
prompt ---------------------------------

I hope that is readable.
I use this many times a day.

First post, so I have to have a rant

Well, greetings everyone. Welcome to YAODBA. I have been spending some time on the newsgroups via Google, answering some questions and posting some opinions. It always amazes me how completely ignorant and completely untrained some people are, and the tasks they seem to be trying to carry out are so far over their heads it is stunning. My opinion is that the internet is working hard at making ignorant people, who feel it is easier to ask someone than to go find out the answer for themselves. Asking questions is great, but at least recognize the fact there IS a manual for the Oracle product you are asking about, and with a 5 minute read you can probably zero in on the chapter regarding the question. Do that and you will know more about the fix to the problem, the problem itself, and odds are something else as well, simply from the intellectual osmosis that might happen. Asking questions is a good thing, and I encourage it, but please... try for yourself first. Well, end of the rant.

I, like most knowledgeable Oracle involved people, spend a lot of time nursing at the teat of information from the great Thomas Kyte. From his books, the Oracle Magazine articles, asktom.oracle.com and his relatively new blog, he provides the Oracle community with much needed insight and information, with the proper amount of sarcasm and ranting needed to set some people straight.

I have in the past been the type of IT guy that helps set the stereotype in stone: I stayed quiet, I stayed at the back of the room, I only communicated via email. Phone calls flustered me. I have been working on getting out of that shell, as I realized it was only something I did at work; in my personal life I am involved in the community, I spend time at my kids' school. I "participate" in life. I get to work, and I am the typical cube dweller. Co-workers would only see me if they were brave enough to come ask for help directly, or as I came in or left the building.

I realized that sitting and whining about people's lack of knowledge, instead of trying to enlighten them, did nothing but raise my blood pressure and make me sound like a horrible little person. So I broke out of my shell about 3 months ago, and life is getting better. Co-workers freely come and talk to me, invite me out for lunch and refer to me in the first person... instead of "The DBA". I have SO much to share... I don't know where to start. But it won't matter; with all of the Oracle blogs out there, I doubt this one will get any notice.