Friday, August 25, 2006

Contest over

Regarding "I am in for it now": the contest is over. Well, I won. The fine young gentlemen have chosen August 31st as the day they will live up to their end of the bargain. The final myth was explained to them with the following:


SQL> SET TRIMSPOOL ON
SQL> SET LINES 120
SQL> DROP TABLE TABLE1;
DROP TABLE TABLE1
           *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> CREATE TABLE TABLE1 AS SELECT OBJECT_NAME,LAST_DDL_TIME,FLOOR(DBMS_RANDOM.VALUE(1,2)) DATA FROM ALL_OBJECTS;

Table created.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX TABLE1IDX1 ON TABLE1 (DATA);

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SET AUTOTRACE TRACEONLY ;
SQL> SET TIMING ON;
SQL> SELECT /*+INDEX (TABLE1 TABLE1IDX1) */ * FROM TABLE1 WHERE DATA=1;

50574 rows selected.

Elapsed: 00:00:00.52

Execution Plan
----------------------------------------------------------
Plan hash value: 809506743

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            | 50574 |  1679K|   384   (2)| 00:00:05 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1     | 50574 |  1679K|   384   (2)| 00:00:05 |
|*  2 |   INDEX RANGE SCAN          | TABLE1IDX1 | 50574 |       |   101   (2)| 00:00:02 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DATA"=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       7106  consistent gets
          0  physical reads
          0  redo size
    2463263  bytes sent via SQL*Net to client
      37462  bytes received via SQL*Net from client
       3373  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      50574  rows processed

SQL> SELECT * FROM TABLE1 WHERE DATA=1;

50574 rows selected.

Elapsed: 00:00:00.43

Execution Plan
----------------------------------------------------------
Plan hash value: 963482612

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 50574 |  1679K|    68   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TABLE1 | 50574 |  1679K|    68   (3)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DATA"=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3638  consistent gets
          0  physical reads
          0  redo size
    1548409  bytes sent via SQL*Net to client
      37462  bytes received via SQL*Net from client
       3373  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      50574  rows processed

SQL> 
SQL> SPOOL OFF



They capitulated with almost no fight; they had run similar tests and scripts, gone through the online information, and come up with the same answers. I do feel for the fellows... well, just a little :)
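A side note for anyone who wants to dig into why the full scan wins here: FLOOR(DBMS_RANDOM.VALUE(1,2)) returns 1 for every row, so DATA=1 matches the entire table, and the cost of the index path is further driven up by the clustering factor, which for a randomly generated column approaches the row count rather than the block count. A sketch of how one might check this (using the table and index names from the test above):

```sql
-- Compare the table's block count against the index's clustering
-- factor. When the clustering factor is close to NUM_ROWS, each row
-- fetched via the index can mean a separate table block visit,
-- which is why the full scan needs fewer consistent gets.
SELECT T.BLOCKS, T.NUM_ROWS, I.CLUSTERING_FACTOR, I.LEAF_BLOCKS
  FROM USER_TABLES T, USER_INDEXES I
 WHERE T.TABLE_NAME = 'TABLE1'
   AND I.INDEX_NAME = 'TABLE1IDX1';
```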


Saturday, August 19, 2006

Myth #3

I know I am going about working on the myths my colleagues came up with in a strange order. I was working on myth #3 and was once again stumped.

No matter what I did, it appeared that using a VARCHAR date field or a function-based index was preferable to a normal B*tree index on a normal heap table.

For my testing, I came up with the following:



SQL> CREATE TABLE TABLE1 (PK NUMBER PRIMARY KEY NOT NULL
,DATA1 VARCHAR2(30) ,DATA2 VARCHAR2(30) 
,DATA3 DATE,DATA4 NUMBER
,FAKEDATE VARCHAR2(20));

Table created.

SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> INSERT INTO TABLE1 SELECT TABLE1_PK_SEQ.NEXTVAL
  2    ,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  3    ,SYSDATE - DBMS_RANDOM.VALUE(1,365) ,TRUNC(DBMS_RANDOM.VALUE(1,10000)),NULL
  4    FROM DUAL CONNECT BY LEVEL <=  50000;

50000 rows created.

SQL> COMMIT;

Commit complete.

SQL> UPDATE TABLE1 SET FAKEDATE=TO_CHAR(DATA3,'DD-MON-YYYY');

50000 rows updated.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX DATEIDX ON TABLE1 (DATA3);

Index created.

SQL> CREATE INDEX FAKEDATEIDX ON TABLE1 (FAKEDATE);

Index created.

SQL> CREATE INDEX FBIIDX ON TABLE1 (TO_CHAR(DATA3,'DD-MON-YYYY'));

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.





The results of the queries were unsettling, to say the least. I have edited the following to shorten the post a little;
I will put a link to the full output files at the end of the post.


SQL> SELECT * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |   139 | 12649 |   141   (3)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TABLE1 |   139 | 12649 |   141   (3)| 00:00:02 |
----------------------------------------------------------------------------


SQL> SELECT /*+ INDEX(TABLE1 DATEIDX) */ * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   139 | 12649 |   142   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1  |   139 | 12649 |   142   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | DATEIDX |   139 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------


SQL> SELECT * FROM TABLE1 WHERE FAKEDATE='24-JUN-2006';
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   137 | 12467 |   123   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1      |   137 | 12467 |   123   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FAKEDATEIDX |   137 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------


SQL> SELECT * FROM TABLE1 WHERE TO_CHAR(DATA3,'DD-MON-YYYY')='24-JUN-2006';
--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |   137 | 12467 |   123   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1 |   137 | 12467 |   123   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FBIIDX |   137 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------




I scratched my head, thought about this, and researched on the web a little. I even considered submitting a question to asktom. Then I thought: data sizes, that has to be it. So I made the table bigger, 3,000,000 rows to be exact, but I came up with the same access paths. Well, I thought, they won this one... but it doesn't make sense. Why isn't Oracle considering an index? I checked for bugs and couldn't find any. Then I went back to data sizes. I decided to create a "wider" table and try it again. Now we are talking.


SQL> CREATE TABLE TABLE1 (PK NUMBER PRIMARY KEY NOT NULL
  2  ,DATA1 VARCHAR2(30)
  3  ,DATA2 VARCHAR2(30)
  4  ,DATA3 DATE
  5  ,DATA4 NUMBER
  6  ,FAKEDATE VARCHAR2(20)
  7  ,DATA5 VARCHAR2(30)
  8  ,DATA6 VARCHAR2(30)
  9  ,DATA7 VARCHAR2(30)
 10  ,DATA8 VARCHAR2(30)
 11  );

Table created.

SQL> DROP SEQUENCE TABLE1_PK_SEQ;

Sequence dropped.

SQL> CREATE SEQUENCE TABLE1_PK_SEQ CACHE 500;

Sequence created.

SQL> INSERT INTO TABLE1 SELECT TABLE1_PK_SEQ.NEXTVAL,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  2    ,SYSDATE - DBMS_RANDOM.VALUE(1,365) ,TRUNC(DBMS_RANDOM.VALUE(1,10000)),NULL,DBMS_RANDOM.STRING('A',30)
  3    ,DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30),DBMS_RANDOM.STRING('A',30)
  4  FROM DUAL CONNECT BY LEVEL <=  50000;

50000 rows created.

SQL> COMMIT;

Commit complete.

SQL> UPDATE TABLE1 SET FAKEDATE=TO_CHAR(DATA3,'DD-MON-YYYY');

50000 rows updated.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX DATEIDX ON TABLE1 (DATA3);

Index created.

SQL> CREATE INDEX FAKEDATEIDX ON TABLE1 (FAKEDATE);

Index created.

SQL> CREATE INDEX FBIIDX ON TABLE1 (TO_CHAR(DATA3,'DD-MON-YYYY'));

Index created.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(USER,'TABLE1',CASCADE =>TRUE);
  3  END;
  4  /

PL/SQL procedure successfully completed.

The access paths now look more like what I was thinking they would be:



SQL> SELECT * FROM TABLE1 WHERE DATA3 >= TO_DATE('24-JUN-2006 00:00:00','DD-MON-YYYY HH24:MI:SS')
  2    AND DATA3 <= TO_DATE('24-JUN-2006 23:59:59','DD-MON-YYYY HH24:MI:SS');

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   139 | 29885 |   142   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1  |   139 | 29885 |   142   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | DATEIDX |   139 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

SQL> SELECT * FROM TABLE1 WHERE FAKEDATE='24-JUN-2006';

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   137 | 29455 |   132   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1      |   137 | 29455 |   132   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FAKEDATEIDX |   137 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

SQL> SELECT * FROM TABLE1 WHERE TO_CHAR(DATA3,'DD-MON-YYYY')='24-JUN-2006';

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |   137 | 29455 |   132   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TABLE1 |   137 | 29455 |   132   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | FBIIDX |   137 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------



Apparently the fake VARCHAR2 column or the FBI is more efficient for picking out individual days. I am going to expand this test to cover entire months, small windows of time, and a full year, to see if a single normal B*tree index on a date is the best solution when you are not targeting such a specific predicate.
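A hedged way to see what changed between the skinny and wide versions (assuming statistics were gathered on each as above): the date predicate selects roughly the same 137-139 rows either way, but the wide table's rows are much longer, so a full scan has to read far more blocks for the same result, and the index path becomes the cheaper option. Something like this would show it:

```sql
-- After gathering stats on each version of TABLE1, compare the
-- average row length and block count. The wider the rows, the more
-- blocks a full scan must read to return the same ~137 target rows.
SELECT TABLE_NAME, NUM_ROWS, AVG_ROW_LEN, BLOCKS
  FROM USER_TABLES
 WHERE TABLE_NAME = 'TABLE1';
```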

So now a question to you readers: did I do anything wrong? Can you point me in the direction of some documents on indexing dates that I must have missed, or slept through that day in class?

The links to the complete tests and logfiles for the "skinny table" and the "wide table".

Sunday, August 13, 2006

Proving them wrong 1 of 3

From my earlier post "I am in for it now", I started working on myth #2 as it seemed to be the easiest to debunk.

I ran into a snag though. I had put together my test for them, easily disproving their claim that if you index every column in a table the optimizer will join multiple indexes together and follow that access path to the data. Then I was in for a bit of a surprise. I had heard of the optimizer building bitmap indexes on the fly, but I had never delved any deeper into it and had truthfully forgotten all about it until today. I can see that Jonathan Lewis's book "Cost-Based Oracle Fundamentals" is going to be cracked open again and given a very thorough going-over.

The whole story: I put together the following test SQL:


SPOOL RUN1.LOG
SET ECHO ON
SET LINES 120
COLUMN NAME FORMAT A30 TRUNCATE
COLUMN VALUE FORMAT A30 TRUNCATE
DROP TABLE A;
CREATE TABLE A
(
PK NUMBER PRIMARY KEY NOT NULL
,DATA1 VARCHAR2(20)
,DATA2 VARCHAR2(20)
,DATA3 VARCHAR2(20)
,DATA4 DATE
);
DROP SEQUENCE PK_SEQ;
CREATE SEQUENCE PK_SEQ CACHE 500; 
CREATE INDEX IDX1 ON A (DATA1);
CREATE INDEX IDX2 ON A (DATA2);
CREATE INDEX IDX3 ON A (DATA3);
CREATE INDEX IDX4 ON A (DATA4);
INSERT INTO A
(PK,DATA1,DATA2,DATA3,DATA4)
SELECT 
PK_SEQ.NEXTVAL PK
,OBJECT_TYPE DATA1
,STATUS DATA2
,DBMS_RANDOM.STRING('A',2) DATA3
,SYSDATE-DBMS_RANDOM.VALUE(1,50) DATA4 
FROM ALL_OBJECTS;
COMMIT;
SELECT NAME,VALUE FROM V$PARAMETER WHERE UPPER(NAME)='OPTIMIZER_MODE';
SELECT * FROM V$VERSION;
SET AUTOTRACE ON
SET AUTOTRACE TRACEONLY
SET TIMING ON
SELECT * FROM A WHERE PK=12;
SELECT * FROM A WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM A WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(USER,'A',CASCADE=>TRUE);
END;
/
SELECT * FROM A WHERE PK=12;
SELECT * FROM A WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM A WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';


The explain plans via autotrace from the 3 queries after statistics were gathered:


| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |                             
-------------------------------------------------------------------------------------------                             
|   0 | SELECT STATEMENT            |             |     1 |    30 |     2   (0)| 00:00:01 |                             
|   1 |  TABLE ACCESS BY INDEX ROWID| A           |     1 |    30 |     2   (0)| 00:00:01 |                             
|*  2 |   INDEX UNIQUE SCAN         | SYS_C007595 |     1 |       |     1   (0)| 00:00:01 |                             

------------------------------------------------------------------------------------                                    
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                    
------------------------------------------------------------------------------------                                    
|   0 | SELECT STATEMENT            |      |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  1 |  TABLE ACCESS BY INDEX ROWID| A    |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  2 |   INDEX RANGE SCAN          | IDX2 |     1 |       |     1   (0)| 00:00:01 |                                    
------------------------------------------------------------------------------------                                    

------------------------------------------------------------------------------------                                    
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                    
------------------------------------------------------------------------------------                                    
|   0 | SELECT STATEMENT            |      |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  1 |  TABLE ACCESS BY INDEX ROWID| A    |     1 |    30 |     2   (0)| 00:00:01 |                                    
|*  2 |   INDEX RANGE SCAN          | IDX2 |     1 |       |     1   (0)| 00:00:01 |                                    
------------------------------------------------------------------------------------                                    


Those explain plans were exactly what I was looking for.

The snag was encountered when I remembered that I harp on all of the developers to bind their SQL. In this case, even though binding was not necessary, the two developers would inflate any little oversight on my part to earth-shattering levels.

So I set up variables, bound my SQL, and reran my test. I had surprising results: multiple indexes being used on a single table, and bitmap indexes of all things.

I rebuilt my test, creating a new table "B" as a copy of the previous "A". Here is the complete SQL:


SPOOL RUNb.LOG
SET ECHO ON
SET LINES 120
COLUMN NAME FORMAT A30 TRUNCATE
COLUMN VALUE FORMAT A30 TRUNCATE
SELECT NAME,VALUE FROM V$PARAMETER WHERE UPPER(NAME)='OPTIMIZER_MODE';
SELECT * FROM V$VERSION;
SET AUTOTRACE TRACEONLY EXPLAIN
SET TIMING ON
VARIABLE BPK NUMBER;
VARIABLE BD1 VARCHAR2(20);
VARIABLE BD2 VARCHAR2(20);
VARIABLE BD3 VARCHAR2(20);
DROP TABLE B;
CREATE TABLE B AS SELECT * FROM A;
ALTER TABLE B ADD CHECK ("PK" IS NOT NULL);
ALTER TABLE B ADD PRIMARY KEY (pk) USING INDEX;
CREATE INDEX BIDX1 ON B (DATA1);
CREATE INDEX BIDX2 ON B (DATA2);
CREATE INDEX BIDX3 ON B (DATA3);
CREATE INDEX BIDX4 ON B (DATA4);
SELECT * FROM B WHERE PK=12;
SELECT * FROM B WHERE PK=53 AND DATA2='BDSA';
SELECT * FROM B WHERE DATA1='SDD' AND DATA2='DKDK' AND DATA3='433';
EXEC :BPK := 99;
EXEC :BD1 := 'BFSS';
EXEC :BD2 := 'JGJD';
EXEC :BD3 := 'JJGJG';
SELECT * FROM B WHERE PK=:BPK;
SELECT * FROM B WHERE PK=:BPK AND DATA2=:BD2;
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SELECT * FROM B WHERE PK=:BPK AND DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(USER,'B',CASCADE=>TRUE);
END;
/
SELECT * FROM B WHERE PK=:BPK;
SELECT * FROM B WHERE PK=:BPK AND DATA2=:BD2;
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SELECT * FROM B WHERE PK=:BPK AND DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SPOOL OFF
EXIT


The output for the following query really did take me by surprise. This was the query after gathering statistics on "B":


SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;

Here is the plan:


Execution Plan
----------------------------------------------------------                                                              
Plan hash value: 849905550                                                                                              
                                                                                                                        
------------------------------------------------------------------------------------------                              
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                              
------------------------------------------------------------------------------------------                              
|   0 | SELECT STATEMENT                 |       |     1 |    30 |     7   (0)| 00:00:01 |                              
|*  1 |  TABLE ACCESS BY INDEX ROWID     | B     |     1 |    30 |     7   (0)| 00:00:01 |                              
|   2 |   BITMAP CONVERSION TO ROWIDS    |       |       |       |            |          |                              
|   3 |    BITMAP AND                    |       |       |       |            |          |                              
|   4 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  5 |      INDEX RANGE SCAN            | BIDX3 |    19 |       |     1   (0)| 00:00:01 |                              
|   6 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  7 |      INDEX RANGE SCAN            | BIDX1 |    19 |       |     6   (0)| 00:00:01 |                              
------------------------------------------------------------------------------------------                              
                                                                                                                        
Predicate Information (identified by operation id):                                                                     
---------------------------------------------------                                                                     
                                                                                                                        
   1 - filter("DATA2"=:BD2)                                                                                             
   5 - access("DATA3"=:BD3)                                                                                             
   7 - access("DATA1"=:BD1)                                                                                             

Multiple indexes (BIDX3 and BIDX1) on the same table in one query, surprising to say the least.
This particular part of the challenge may end up being a tie.

So I did some more testing:



SQL> column PLAN_TABLE_OUTPUT format a120 truncate
SQL> explain plan for SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;

Explained.

SQL> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT                                                                                                       
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 849905550                                                                                              
                                                                                                                        
------------------------------------------------------------------------------------------                              
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                              
------------------------------------------------------------------------------------------                              
|   0 | SELECT STATEMENT                 |       |     1 |    30 |     7   (0)| 00:00:01 |                              
|*  1 |  TABLE ACCESS BY INDEX ROWID     | B     |     1 |    30 |     7   (0)| 00:00:01 |                              
|   2 |   BITMAP CONVERSION TO ROWIDS    |       |       |       |            |          |                              
|   3 |    BITMAP AND                    |       |       |       |            |          |                              
|   4 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  5 |      INDEX RANGE SCAN            | BIDX3 |    19 |       |     1   (0)| 00:00:01 |                              
|   6 |     BITMAP CONVERSION FROM ROWIDS|       |       |       |            |          |                              
|*  7 |      INDEX RANGE SCAN            | BIDX1 |    19 |       |     6   (0)| 00:00:01 |                              
------------------------------------------------------------------------------------------                              
                                                                                                                        
Predicate Information (identified by operation id):                                                                     
---------------------------------------------------                                                                     
                                                                                                                        
   1 - filter("DATA2"=:BD2)                                                                                             
   5 - access("DATA3"=:BD3)                                                                                             
   7 - access("DATA1"=:BD1)                                                                                             

21 rows selected.

SQL> 
PLAN_TABLE_OUTPUT                                                                                                       
------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2496799660                                                                                             
                                                                                                                        
-------------------------------------------------------------------------------------                                   
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |                                   
-------------------------------------------------------------------------------------                                   
|   0 | SELECT STATEMENT            |       |     1 |    30 |     2   (0)| 00:00:01 |                                   
|*  1 |  TABLE ACCESS BY INDEX ROWID| B     |     1 |    30 |     2   (0)| 00:00:01 |                                   
|*  2 |   INDEX RANGE SCAN          | BIDX2 |     1 |       |     1   (0)| 00:00:01 |                                   
-------------------------------------------------------------------------------------                                   
                                                                                                                        
Predicate Information (identified by operation id):                                                                     

PLAN_TABLE_OUTPUT                                                                                                       
----------------------------------------------------------------------------------------

   1 - filter("DATA1"='DFDF' AND "DATA3"='KDWD')                                                                        
   2 - access("DATA2"='SDFVS')                                                                                          

15 rows selected.

SQL> 
SQL> 
SQL> spool off


The bitmap conversion only shows up when bind variables are used. A case of bad "bind variable peeking" here, I think: EXPLAIN PLAN never peeks at bind values, so the optimizer falls back on default selectivities and decides to combine the B*tree indexes via a bitmap conversion.
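One way to check whether the plan actually used at runtime matches what EXPLAIN PLAN estimated (a sketch; in 10g, DBMS_XPLAN.DISPLAY_CURSOR can pull the plan of the last executed statement from the shared pool) would be:

```sql
-- Execute the query for real, then ask for the plan of the cursor
-- that actually ran, rather than the EXPLAIN PLAN estimate.
-- NULL, NULL means "the last statement executed in this session".
SELECT * FROM B WHERE DATA1=:BD1 AND DATA2=:BD2 AND DATA3=:BD3;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```

If the runtime plan differs from the explained one, peeked bind values are the likely reason.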


Friday, August 11, 2006

Multiple Oracle clients

An interesting few phone calls this morning; the back story:

We upgraded a database from 9.2.0.5 to 10gr1.

The core group of users tested it thoroughly with no problems, everything was signed off on, the upgrade went without a hitch, and everything worked great. For less than one business day.

The system has a facility for exporting the data we collect to the local government body. The GUI creates an XML file that the users then upload to the government via a web site. Some users started complaining of getting ORA-00600 errors when they tried to create their XML file; other users were not having any problems at all. This had been tested and signed off on, and the application support people had tried and could not find a problem; they could successfully create the files for the users.

I started going through Metalink trying to find anything related to the upgrade that we might have missed. I could find nothing, but I stumbled across a similar ORA-00600 error, 16608, with an Oracle 8 client trying to connect to an Oracle 10g database. I didn't think that was applicable, but I looked into it anyway. Every single person getting the problem had the Oracle 8.1.7 client installed, because they use an older application that requires it, and the 10g application GUI defaults to this home. After working with the vendor, we could not force the GUI to use the Oracle 9i home on the PCs. We installed the instant client, to no avail. The application works great with the 9i client, so we had decided to slowly install the 10g client as it was needed.

I read through forum entries and the like on using multiple Oracle clients on the same PC, and none of the suggestions worked. If I removed the Oracle 8 settings from the PATH, the 10g application worked, but the 8i application did not. If I moved the Oracle 8 settings to the end of the PATH, the 10g application worked, but the 8i application still did not.


I did some head scratching, and came up with the following for starting the 10g application.


:: point this session at the 9.2 client home only
set ORACLE_HOME=c:\oracle\ora92
set TNS_ADMIN=c:\oracle\ora92\network\admin
:: replace the PATH outright so the 8.1.7 client DLLs are never found first
set path=c:\oracle\ora92
call "c:\program files\10gapp\10gapp.exe"

That works very well.

If anybody out there has a better way, please comment, as this is now in use on about 100 PCs throughout the company.


Thursday, August 10, 2006

I am in for it now

I have been having "heated discussions" with a few (male, read on) OCPs from a subsidiary company, arguing over some Oracle B*tree index myths and other things they "know" about Oracle. These fellows are 10g OCPs, but all straight out of university and very gung ho: ADF over Oracle, .NET over Oracle (shudder), and no need for any business logic in the database. Very smart and knowledgeable fellows who are an asset to the company. I just keep slamming them on their database interaction work, as I do the performance monitoring on their databases, and they are a little sick and tired of ticket after ticket after ticket coming their way for "Bad SQL" or "Inefficient SQL" and the one that started this, "Unused Indexes", so they thought they would speak up. All very friendly, of course.

We came to an agreement that I would put together a nice document blowing away their index theories; I will work on the rest of their theories later. When I have successfully debunked their myths, they will wear skirts, bobby socks, a blouse, and makeup to work for a period of no less than 4 straight hours, covering a minimum of two of the following: morning arrival, morning coffee break, lunch, afternoon coffee break, or 5pm departure. If I fail, I will have to wear the same outfit. We have manager approval and backing for this, and odds are it will end up in the company newsletter. Not entirely fair, as they have 9-12 people at their location and I have 200+ people in this office, but I think I have the legs to pull it off :).

I personally believe these to be myths: for one, because Thomas Kyte says so, and for two, I did some testing and checked them out in the past.


The following are the 3 index myths we decided on; well, statements I told them were wrong and they argued the point. They are going to put together tests to prove me wrong. In the event of a tie or unbreakable test cases, a simple toss of a coin will decide our fates. Best out of 3 wins.


Myth #1

You must use an index; full table scans are always less efficient and slower.

Myth #2

If you index each individual column, the optimizer will use multiple indexes in a query if the predicate has any combination of the columns, so build an index on every column.

Myth #3

You must build a calculated column to properly index a date field, so the best way to store a date is in a VARCHAR field, to improve DML against the table.
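To preview why I think #3 falls over, here is a minimal sketch; the table and column names are made up for illustration. A plain DATE column indexes directly, and a function-based index covers the wrapped-column case:

```sql
-- Hypothetical table: a plain DATE column needs no calculated column.
CREATE TABLE DATETEST (ID NUMBER, CREATED DATE);

CREATE INDEX DATETEST_IDX1 ON DATETEST (CREATED);

-- A range predicate on the DATE column can use the index as-is:
SELECT ID FROM DATETEST
 WHERE CREATED >= TO_DATE('2006-08-01','YYYY-MM-DD')
   AND CREATED <  TO_DATE('2006-09-01','YYYY-MM-DD');

-- And if you must wrap the column in a function, a function-based
-- index handles that case too:
CREATE INDEX DATETEST_IDX2 ON DATETEST (TRUNC(CREATED));

SELECT ID FROM DATETEST
 WHERE TRUNC(CREATED) = TO_DATE('2006-08-15','YYYY-MM-DD');
```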

Over the next few days, I will be putting together the tests and posting the results. Since we never spoke about the ability to look for outside corroboration or help, I thought I would do the tests and then run them by my blog to see if any readers can find fault in them. The due date for the tests is the end of the month; management didn't want us using too much company time for this childishness. I know I have #2 and #3 in the bag. #1 is going to be harder to prove; I think in one of Tom's books he does something similar, so I will be reading those chapters over again tonight.
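For #1, the kind of test I have in mind looks roughly like this (a sketch only; the names are invented, and the numbers will depend on the data): force the index with a hint, then let the optimizer pick, and compare plans and timings when the predicate matches a large fraction of the rows.

```sql
-- Build a table where DATA is 1 or 2, roughly half and half.
CREATE TABLE FTSTEST AS
  SELECT OBJECT_NAME, FLOOR(DBMS_RANDOM.VALUE(1,3)) DATA
    FROM ALL_OBJECTS;

CREATE INDEX FTSTESTIDX1 ON FTSTEST (DATA);

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'FTSTEST',CASCADE=>TRUE);

SET AUTOTRACE TRACEONLY
SET TIMING ON

-- Forced index access on a low-selectivity predicate:
SELECT /*+ INDEX (FTSTEST FTSTESTIDX1) */ * FROM FTSTEST WHERE DATA=1;

-- The optimizer's own choice, for comparison:
SELECT * FROM FTSTEST WHERE DATA=1;
```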


Parameters: Oracle 10gR2 on Windows (shudder) with the following somewhat abbreviated init.ora (some parameters removed):

orcl.__db_cache_size=192937984
orcl.__java_pool_size=12582912
orcl.__large_pool_size=4194304
orcl.__shared_pool_size=75497472
orcl.__streams_pool_size=0
*.compatible='10.2.0.1.0'
*.db_block_size=8192
*.optimizer_mode=ALL_ROWS
*.db_file_multiblock_read_count=16
*.db_recovery_file_dest_size=2147483648
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=96468992
*.processes=150
*.sga_target=289406976
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'



I am probably going to regret this in some way, but WTF, you only live once. I think my wife is still laughing; she offered to shave my legs for me. Evil, evil lady :)

Stay tuned, I am going to spend some time on building these tests.

Wednesday, August 09, 2006

Statspack

This entry is mostly so I have an easy-to-reference link for answering the "how do I monitor my DB" questions that always crop up.

Using and installing

statspack for oracle9i release 2

statspack for oracle10g: now called AWR, but the statspack stuff is still there and is included. AWR is an extra-cost option, I believe.

I can't find any online documentation for installing statspack in 8i.

But in 8.1.6+, if you read $ORACLE_HOME/rdbms/admin/spdoc.txt, it tells you pretty much everything you need to know about statspack.

The spdoc.txt file exists in all versions at the same location in the Oracle home.

If you don't use statspack - learn it and use it.
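For reference, the basic workflow is roughly as follows; check spdoc.txt for your release's specifics, as prompts and options vary by version:

```sql
-- Install statspack (run as SYSDBA; prompts for the PERFSTAT
-- password and tablespaces):
@?/rdbms/admin/spcreate.sql

-- Take a snapshot (connected as PERFSTAT):
EXEC STATSPACK.SNAP;

-- Or schedule hourly snapshots via dbms_job:
@?/rdbms/admin/spauto.sql

-- Generate a report between two snapshot IDs:
@?/rdbms/admin/spreport.sql
```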

Over at the Pythian group, Shervin Sheidaei has started a thread on a New STATSPACK Methodology. It looks promising, and he says he has more in store for us.


Sunday, August 06, 2006

Coworker

One of our developers is moving on. He is pursuing a dream: he got a job with a company in the UK. He and his wife had been wishing to take the family to the UK for a visit, but this job presented itself and he grabbed at it. So off he and his entire family go.

The entire company will miss him. He may not have been the team leader, but he was the center of this particular application's development and support team, and many times more useful than the team leader. Already the impact of his two weeks' notice is being felt throughout the company. He is a university-trained developer who actually learned what he was taught, specialized in one application, and has become one of the best in his field of expertise.

I wish him and his family good luck and safe travels.

As you may have noticed, my job in France didn't go through; it was just too expensive for the company to move me from northern Canada to southern France. They went with somebody from the UK.

I have decided to start actively looking for work. I think I will look in places like the UK, Australia, New Zealand and Western Europe. If I am going to move, I will leave Canada, and I can't stand the United States, so I have limited locations where I can find work that isn't under active air raids or a crumbling government.




Saturday, August 05, 2006

Retrieving disk space

There are always questions from folks asking how to reclaim disk space from database datafiles that have grown too large, usually by accident.

This can be a time-consuming task, usually involving export and import, and it can lead to large headaches. For the really big jobs, that is probably still the best way to go. For the smaller jobs, I use the following method.


I have put it together into a hopefully complete test. Sorry for the use of a Windoze DB as a test bed, but the VM on my laptop took a nosedive the other day and I haven't been able to recover my SuSE guest from the bowels of Windows XP.


SQL> SET TIMING ON
SQL> COLUMN MEG FORMAT 999,999,999
SQL> COLUMN SEGMENT_NAME FORMAT A25 TRUNCATE
SQL> COLUMN FILE_NAME FORMAT A30 TRUNCATE
SQL> COLUMN TABLESPACE_NAME FORMAT A25 TRUNCATE
SQL> SET LINES 130
SQL> set echo on
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> DROP TABLESPACE SIZETESTING1 INCLUDING CONTENTS;

Tablespace dropped.

Elapsed: 00:00:02.15
SQL> CREATE TABLESPACE SIZETESTING1 DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  2  SIZE 100M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 200M SEGMENT SPACE MANAGEMENT AUTO AUTOALLOCATE;

Tablespace created.

Elapsed: 00:00:04.34
SQL> CREATE TABLE THEBIGONE (THEDATA VARCHAR2(3000)) TABLESPACE SIZETESTING1;

Table created.

Elapsed: 00:00:00.03
SQL> CREATE TABLE THESMALLONE (THEDATA VARCHAR2(3000)) TABLESPACE SIZETESTING1;

Table created.

Elapsed: 00:00:00.00

I just want to put some data into the big table, then insert some data into the small table.


SQL> 
SQL> 
SQL> DECLARE
  2  L_THERECORD VARCHAR2(3000);
  3  
  4  BEGIN
  5  
  6  L_THERECORD := DBMS_RANDOM.STRING('U',2999);
  7  DBMS_OUTPUT.PUT_LINE(L_THERECORD);
  8  FOR Y IN 1..3 LOOP
  9  
 10  FOR X IN 1..10000 LOOP
 11  INSERT INTO THEBIGONE (THEDATA) VALUES (L_THERECORD);
 12  END LOOP;
 13  COMMIT;
 14  END LOOP;
 15  COMMIT;
 16  END;
 17  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:07.63
SQL> 
SQL> SELECT COUNT(1) FROM THEBIGONE;

  COUNT(1)                                                                                                                        
----------                                                                                                                        
     30000                                                                                                                        

Elapsed: 00:00:03.62
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          0                                                                                            
THEBIGONE                          120                                                                                            

Elapsed: 00:00:00.14
SQL> 
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.39
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          2                                                                                            
THEBIGONE                          120                                                                                            

Elapsed: 00:00:00.03
SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                               MEG                                                                                       
------------------------------ ------------                                                                                       
C:\ORACLE\PRODUCT\10.2.0\ORADA          130                                                                                       

Elapsed: 00:00:00.05

Now, let's fill the big table up so it blows the storage parameters.


SQL> 
SQL> DECLARE
  2  L_THERECORD VARCHAR2(3000);
  3  
  4  BEGIN
  5  
  6  L_THERECORD := DBMS_RANDOM.STRING('U',2999);
  7  DBMS_OUTPUT.PUT_LINE(L_THERECORD);
  8  FOR Y IN 1..1000 LOOP
  9  
 10  FOR X IN 1..1000 LOOP
 11  INSERT INTO THEBIGONE (THEDATA) VALUES (L_THERECORD);
 12  END LOOP;
 13  COMMIT;
 14  END LOOP;
 15  COMMIT;
 16  END;
 17  /
DECLARE
*
ERROR at line 1:
ORA-01653: unable to extend table SYSTEM.THEBIGONE by 1024 in tablespace SIZETESTING1 
ORA-06512: at line 11 


Elapsed: 00:00:08.76
SQL> 
SQL> SELECT COUNT(1) FROM THEBIGONE;

  COUNT(1)                                                                                                                        
----------                                                                                                                        
     48000                                                                                                                        

Elapsed: 00:00:04.35
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          2                                                                                            
THEBIGONE                          192                                                                                            

Elapsed: 00:00:00.26
SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                               MEG                                                                                       
------------------------------ ------------                                                                                       
C:\ORACLE\PRODUCT\10.2.0\ORADA          200                                                                                       

Elapsed: 00:00:00.03

Let's get rid of the big table's data and add some more rows to the small table.




SQL> TRUNCATE TABLE THEBIGONE;

Table truncated.

Elapsed: 00:00:03.16
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.13
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.01
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.11
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.02
SQL> INSERT INTO THESMALLONE SELECT OBJECT_NAME FROM ALL_OBJECTS;

49787 rows created.

Elapsed: 00:00:03.19
SQL> COMMIT;

Commit complete.

Elapsed: 00:00:00.02
SQL> SELECT SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY SEGMENT_NAME;

SEGMENT_NAME                       MEG                                                                                            
------------------------- ------------                                                                                            
THESMALLONE                          7                                                                                            
THEBIGONE                            0                                                                                            

Elapsed: 00:00:00.03
SQL> 
SQL> SELECT * FROM( SELECT file_name,     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) smallest,    ceil( blocks*C.VALUE/1024/1024) currsize,
  2       ceil( blocks*C.VALUE/1024/1024) -     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) savings
  3       ,a.autoextensible  ,a.maxbytes from dba_data_files a,      ( select file_id, max(block_id+blocks-1) hwm
  4         from dba_extents     group by file_id ) b    ,(select value from v$parameter where name = 'db_block_size') C
  5  where a.file_id = b.file_id(+) AND a.file_name='C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  6  ) WHERE 1=1 ;

FILE_NAME                        SMALLEST   CURRSIZE    SAVINGS AUT   MAXBYTES                                                    
------------------------------ ---------- ---------- ---------- --- ----------                                                    
C:\ORACLE\PRODUCT\10.2.0\ORADA        123        200         77 YES  209715200                                                    

Elapsed: 00:00:16.80

The smallest we can shrink the datafile to is 123M, because the small table still has segments in it. So let's set up a tablespace to move the small table to while we clean up.



SQL> 
SQL> 
SQL> DROP TABLESPACE HOLDING INCLUDING CONTENTS;

Tablespace dropped.

Elapsed: 00:00:00.88
SQL> CREATE TABLESPACE HOLDING DATAFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\HOLDING.DBF'
  2  SIZE 10M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 200M SEGMENT SPACE MANAGEMENT AUTO AUTOALLOCATE;

Tablespace created.

Elapsed: 00:00:01.10

And let's move it.




SQL> 
SQL> ALTER TABLE THESMALLONE MOVE TABLESPACE HOLDING;

Table altered.

Elapsed: 00:00:00.67
SQL> 
SQL> SELECT * FROM( SELECT file_name,     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) smallest,    ceil( blocks*C.VALUE/1024/1024) currsize,
  2       ceil( blocks*C.VALUE/1024/1024) -     ceil( (nvl(hwm,1)*C.VALUE)/1024/1024 ) savings
  3       ,a.autoextensible  ,a.maxbytes from dba_data_files a,      ( select file_id, max(block_id+blocks-1) hwm
  4         from dba_extents     group by file_id ) b    ,(select value from v$parameter where name = 'db_block_size') C
  5  where a.file_id = b.file_id(+) AND a.file_name='C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF'
  6  ) WHERE 1=1 ;

FILE_NAME                        SMALLEST   CURRSIZE    SAVINGS AUT   MAXBYTES                                                    
------------------------------ ---------- ---------- ---------- --- ----------                                                    
C:\ORACLE\PRODUCT\10.2.0\ORADA          1        200        199 YES  209715200                                                    

Elapsed: 00:00:16.68

There we go: we can shrink it down to 1M. So let's do it.




SQL> ALTER database datafile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\LAPTOP\SIZETESTING1.DBF' resize 1m;

Database altered.

Elapsed: 00:00:00.49

Move the table back.



SQL> ALTER TABLE THESMALLONE MOVE TABLESPACE SIZETESTING1;

Table altered.

Elapsed: 00:00:01.29
SQL> SELECT TABLESPACE_NAME,SEGMENT_NAME,SUM(BYTES/1024/1024) MEG FROM USER_SEGMENTS WHERE SEGMENT_NAME IN ('THEBIGONE','THESMALLONE') GROUP BY TABLESPACE_NAME,SEGMENT_NAME;

TABLESPACE_NAME           SEGMENT_NAME                       MEG                                                                  
------------------------- ------------------------- ------------                                                                  
SIZETESTING1              THEBIGONE                            0                                                                  
SIZETESTING1              THESMALLONE                          7                                                                  

Elapsed: 00:00:00.03

SQL> SELECT FILE_NAME,BYTES/1024/1024 MEG FROM DBA_DATA_FILES WHERE TABLESPACE_NAME='SIZETESTING1';

FILE_NAME                             MEG
------------------------------ ----------
C:\ORACLE\PRODUCT\10.2.0\ORADA         11

Elapsed: 00:00:00.03

SQL> SPOOL OFF;


Let's take a look at what it looks like on disk.


C:\oracle\product\10.2.0\oradata\LAPTOP>DIR SIZE*
 Volume in drive C has no label.
 
 Directory of C:\oracle\product\10.2.0\oradata\LAPTOP

08/05/2006  02:46 PM        11,542,528 SIZETESTING1.DBF
               1 File(s)     11,542,528 bytes
               0 Dir(s)  85,625,917,440 bytes free

This works well and can be expanded to cover more tables similar to THESMALLONE. Be careful moving tables online; check the documentation first, and don't take my word for it. Test, test, test.
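As a sketch of scaling this up, something like the following can generate the MOVE statements for every table you own in the tablespace being shrunk (tablespace names taken from the demo above; review the output before running it):

```sql
-- Generate MOVE statements for all of your tables in the tablespace:
SELECT 'ALTER TABLE '||TABLE_NAME||' MOVE TABLESPACE HOLDING;'
  FROM USER_TABLES
 WHERE TABLESPACE_NAME = 'SIZETESTING1';

-- MOVE leaves indexes on the moved tables UNUSABLE,
-- so generate the rebuilds as a follow-up:
SELECT 'ALTER INDEX '||INDEX_NAME||' REBUILD;'
  FROM USER_INDEXES
 WHERE STATUS = 'UNUSABLE';
```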


Writing an external audit file

A while back I answered a question from somebody, I believe on Dizwell, on how to write a custom trigger that would write a record to an external file with every insert. I stumbled across the script here and thought I would post it.
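The script assumes objects roughly like the following (not part of the original script; the names and the C:\TEMP path are inferred from the code and comments below):

```sql
-- Table and sequence the trigger and inserts rely on:
CREATE TABLE AUDTEST (
  PK       NUMBER PRIMARY KEY,
  SOMEDATA VARCHAR2(50),
  JOBNUM   NUMBER);

CREATE SEQUENCE AUDTEST_SEQ;

-- Directory object for UTL_FILE, pointing at C:\TEMP:
CREATE OR REPLACE DIRECTORY AUDTEST_DIR AS 'C:\TEMP';
```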




CREATE OR REPLACE TRIGGER audtest_Bi
 BEFORE
  INSERT
 ON audtest
REFERENCING NEW AS NEW OLD AS OLD
 FOR EACH ROW
DECLARE

JOBOUT NUMBER;

BEGIN

-- Queue the audit write as a job; it runs only after this
-- transaction commits, so a rollback leaves no audit record.
dbms_job.submit( JOBOUT, 'BEGIN AUDWRITE('||:NEW.PK||'); END;',SYSDATE );
:NEW.JOBNUM := JOBOUT;

END;
/

CREATE OR REPLACE PROCEDURE AUDWRITE(I_PK NUMBER) AS

L_OUTPUT        UTL_FILE.FILE_TYPE;

BEGIN
    -- Append the new row to a CSV file in the AUDTEST_DIR directory object
    L_OUTPUT := UTL_FILE.FOPEN( 'AUDTEST_DIR', 'AUDTEST_AUDIT.CSV', 'A' );

FOR LINE IN (SELECT * FROM AUDTEST WHERE PK=I_PK) LOOP
    UTL_FILE.PUT( L_OUTPUT, LINE.PK||','||LINE.SOMEDATA);
    UTL_FILE.NEW_LINE( L_OUTPUT );
    END LOOP;

    UTL_FILE.FCLOSE( L_OUTPUT );

END;
/


INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));
INSERT INTO AUDTEST (PK,SOMEDATA) VALUES (AUDTEST_SEQ.NEXTVAL,DBMS_RANDOM.string('A',ROUND(DBMS_RANDOM.VALUE(1,45))));


--SHOULD BE 5 JOBS
SELECT * FROM USER_JOBS;

--SHOULD BE 5 ROWS WITH JOBNUM FILLED IN
SELECT * FROM AUDTEST;


--WON'T BE A FILE IN C:\TEMP
COMMIT;
--NOW THERE WILL BE A FILE IN C:\TEMP




Some work and a lot of testing would need to be done before this goes into production, but it might be of some use to somebody.

Wednesday, August 02, 2006

Dizwell Almost back in the saddle again

Howard has started on the long trek to getting his amazing website back.

The new URL (for now?) is http://www.dizwell.com/prod/

I wish him luck.

Tuesday, August 01, 2006

Poor Dizwell

He has given up on Joomla and shut his site down for a spell while he finds something else:



Dizwell Informatics

This site has closed. Joomla has driven me insane, and the flood of emails from people saying, 'Do you know your links are broken' is making me even insaner. Time to clear the decks and start afresh. I have no idea how long it will take me to spec out a replacement CMS and convert everything across. Sorry to everyone for the inconvenience in the meantime.


Good luck and I look forward to having your site back.