On table reorg and index rebuild

Before you start reading: do not rebuild all your indexes and reorganize all your tables every Sunday morning. One day you may find one of your tables missing or one of your indexes invalid.

Ok, let's take a case where table reorg and index rebuild are beneficial.

One of your tables was never cleaned up; it grew to 100,000,000 rows over the last 5 years and you only need the last 2 weeks.

One of your tasks will be to create a job that cleans up your table on a weekly basis, deleting rows older than 14 days. This is beyond the scope of this post.
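
For illustration only, here is a minimal sketch of such a job with DBMS_SCHEDULER; the table name BIG_T, its CREATED date column and the job name are hypothetical:

begin
  dbms_scheduler.create_job (
    job_name        => 'PURGE_OLD_ROWS',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin delete from big_t where created < sysdate - 14; commit; end;',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=6',
    enabled         => true);
end;
/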

Now that you have deleted more than 99% of your rows, you want to reorganize your table and rebuild the indexes, to reclaim disk space and improve performance.

Here is the demo


SQL> DROP TABLE t1;

Table dropped.

SQL> 
SQL> CREATE TABLE t1
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  );

Table created.

SQL> 
SQL> CREATE INDEX i1
  2    ON t1 (r);

Index created.

SQL> 
SQL> INSERT INTO t1
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> DROP TABLE t2;

Table dropped.

SQL> 
SQL> CREATE TABLE t2
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  )
  7  PARTITION BY HASH (r)
  8    (PARTITION T2_P1);

Table created.

SQL> 
SQL> CREATE INDEX i2
  2    ON t2 (r)
  3    LOCAL (PARTITION i2_p1);

Index created.

SQL> 
SQL> INSERT INTO t2
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> DROP TABLE t3;

Table dropped.

SQL> 
SQL> CREATE TABLE t3
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  )
  7  PARTITION BY RANGE (r)
  8    SUBPARTITION BY HASH (r)
  9       SUBPARTITION TEMPLATE (SUBPARTITION s1 )
 10    (PARTITION T3_P1 VALUES LESS THAN (maxvalue));

Table created.

SQL> 
SQL> CREATE INDEX i3
  2    ON t3 (r)
  3    LOCAL (PARTITION i3_p1
  4        (SUBPARTITION i3_p1_s1));

Index created.

SQL> 
SQL> INSERT INTO t3
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                             16,777,216             31
I2         INDEX PARTITION    I2_P1          16,777,216             31
I3         INDEX SUBPARTITION I3_P1_S1       16,777,216             31
T1         TABLE                            134,217,728             87
T2         TABLE PARTITION    T2_P1         134,217,728             16
T3         TABLE SUBPARTITION T3_P1_S1      134,217,728             16

I created three tables: T1, T2 which is partitioned, and T3 which is subpartitioned. There is a slight difference in the number of extents between the partitioned and non-partitioned tables, but this is ASSM, so it is fine.


SQL> DELETE FROM t1
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL> DELETE FROM t2
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL> DELETE FROM t3
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                             16,777,216             31
I2         INDEX PARTITION    I2_P1          16,777,216             31
I3         INDEX SUBPARTITION I3_P1_S1       16,777,216             31
T1         TABLE                            134,217,728             87
T2         TABLE PARTITION    T2_P1         134,217,728             16
T3         TABLE SUBPARTITION T3_P1_S1      134,217,728             16

I deleted all rows but one from each table; however, the size of the tables and the number of extents did not change.


SQL> ALTER TABLE t1 MOVE;

Table altered.

SQL> 
SQL> ALTER INDEX I1 REBUILD;

Index altered.

SQL> 
SQL> ALTER TABLE t2 MOVE PARTITION T2_P1;

Table altered.

SQL> 
SQL> ALTER INDEX I2 REBUILD PARTITION I2_P1;

Index altered.

SQL> 
SQL> ALTER TABLE t3 MOVE SUBPARTITION T3_P1_S1;

Table altered.

SQL> 
SQL> ALTER INDEX I3 REBUILD SUBPARTITION I3_P1_S1;

Index altered.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                                 65,536              1
I2         INDEX PARTITION    I2_P1              65,536              1
I3         INDEX SUBPARTITION I3_P1_S1           65,536              1
T1         TABLE                                 65,536              1
T2         TABLE PARTITION    T2_P1           8,388,608              1
T3         TABLE SUBPARTITION T3_P1_S1        8,388,608              1

Now I have reorganized my tables and rebuilt my indexes.

The size dropped to 64K or 8M and the fragmentation disappeared, as the number of extents dropped to 1.

Note that you cannot rebuild a partitioned index as a whole (ORA-14086) nor move a partitioned table as a whole (ORA-14511); you need to loop through each partition or subpartition, as sketched below.
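
If you have many partitions, a small PL/SQL loop saves typing. An untested sketch, reusing T2 and I2 from the demo above (moving a partition marks the matching local index partitions UNUSABLE, hence the second loop):

begin
  for p in (select partition_name from user_tab_partitions
             where table_name = 'T2')
  loop
    execute immediate 'alter table t2 move partition ' || p.partition_name;
  end loop;
  for p in (select partition_name from user_ind_partitions
             where index_name = 'I2' and status = 'UNUSABLE')
  loop
    execute immediate 'alter index i2 rebuild partition ' || p.partition_name;
  end loop;
end;
/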

EXECUTE IMMEDIATE ‘SELECT’ does not execute anything

I am not sure whether some tuning guy at Oracle decided to ignore any SELECT statement passed to EXECUTE IMMEDIATE, to save time by doing nothing.

exec execute immediate 'select 1/0 from dual connect by level<9999999999999'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

But it is really annoying… and not documented as far as I know.

Imagine I want to increase all my sequences by 1000


SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

   NEXTVAL
----------
         1

SQL> begin
  2    for f in (select sequence_name n from user_sequences)
  3    loop
  4      execute immediate
  5        'select '||f.n||'.nextval from dual connect by level<=1000';
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

   CURRVAL
----------
         1

Hmm, it does not work. Does SELECT work at all? Yes when it is a SELECT INTO :-)


SQL> drop sequence s;

Sequence dropped.

SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

   NEXTVAL
----------
         1

SQL> declare
  2    type t is table of number index by pls_integer;
  3    c t;
  4  begin
  5    for f in (select sequence_name n from user_sequences)
  6    loop
  7      execute immediate
  8        'select '||f.n||'.nextval from dual connect by level<=1000'
  9        bulk collect into c;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

   CURRVAL
----------
      1001

I wonder in which version this optimization/bug was introduced…

RMAN duplicate does change your DB_NAME !

I had a very serious issue last Friday, with errors as weird as ORA-00322: log name of thread num is not current copy. After a clone from Prod to Test, the prod crashed. Both databases are located on the same server (I am not a virtualization fanatic), and cloning from prod to test is something most of my dba readers have done.

What changed in 11g?

Incredibly, in 11g, RMAN issues the following statement before the restore:

sql clone "alter system set  db_name = ''PROD'' ...
restore clone primary controlfile...

This is probably related to the capability of cloning a database without connecting to the target database.

At the end of the clone, RMAN sets the db_name back to TEST and recreates the TEST controlfile:

sql statement: alter system set  db_name = ''TEST'' ...
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TEST" ...
...
LOGFILE
GROUP 1 ('/.../TEST/redo1.dbf')...

So what's wrong with this? How come a clone from prod to test could screw up the prod db???

Simple, the RMAN job did not complete:

1) set new name, restore prod controlfile to test
2) restore issue, for instance ORA-19870: error while restoring backup piece archive1234
3) RMAN-03002: failure of Duplicate Db command

At this point, the restore itself was complete, so we restored the missing archivelog, recovered and opened resetlogs.
What happened then???
Remember, at this point you still have the prod controlfile (and the prod db_name), so by doing an ALTER DATABASE OPEN RESETLOGS, the production redo logs get overwritten without notice!

This is a fairly important change that could really hurt if you are cloning two databases on the same server.

In case you are trying to save a failed database clone, make sure you check db_name and also v$logfile before doing an ALTER DATABASE OPEN RESETLOGS!!!
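
A minimal pre-flight check before opening resetlogs on a half-cloned instance; if the database name or any redo log member still points to the source database, stop right there:

select name from v$database;
show parameter db_name
select member from v$logfile;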

How to solve ORA-4068

I was amazed by this one-liner on Stack Overflow.

First, let me introduce you to my old foe, ORA-04068:
Session 1:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  X NUMBER;Y NUMBER;END;
  3  /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  X NUMBER;Z NUMBER;END;
  3  /

Package created.

Session 1:

SQL> exec P.X := 2
BEGIN P.X := 2; END;

*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "SCOTT.P" has been invalidated
ORA-04065: not executed, altered or dropped package "SCOTT.P"
ORA-06508: PL/SQL: could not find program unit being called: "SCOTT.P"
ORA-06512: at line 1

Changing the package in session 2 invalidated the package state in session 1.

And the PRAGMA that saves the world : PRAGMA SERIALLY_REUSABLE

Session 1:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  PRAGMA SERIALLY_REUSABLE;X NUMBER;Y NUMBER;END;
  3  /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  PRAGMA SERIALLY_REUSABLE;X NUMBER;Z NUMBER;END;
  3  /

Package created.

Session 1:

SQL> exec P.X := 2

PL/SQL procedure successfully completed.

Oh yes!

make count(*) faster

I just installed Oracle Enterprise Linux on my new notebook.

I wanted to check how far I could improve the performance of a count(*).


SQL> drop table big_emp;

table big_emp dropped.
258ms elapsed

SQL> create table big_emp as 
  with l as(select 1 from dual connect by level<=3000) 
  select rownum empno,ename,job,mgr,hiredate,sal,comm,deptno from emp,l,l

table big_emp created.
330,390ms elapsed

SQL> alter table big_emp add primary key(empno)

table big_emp altered.
481,503ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
2,701ms elapsed

SQL> alter system flush shared_pool
system flush altered.
137ms elapsed

SQL> select count(*) from big_emp
COUNT(*)               
---------------------- 
126000000              

9,769ms elapsed

SQL> select count(*) from big_emp
COUNT(*)               
---------------------- 
126000000              

8,157ms elapsed

SQL> alter table big_emp drop primary key

table big_emp altered.
905ms elapsed

SQL> alter table big_emp add primary key(empno) 
  using index (
    create index big_i on big_emp(empno) 
    global partition by hash(empno) 
    partitions 16 parallel 16)

table big_emp altered.
974,300ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
601ms elapsed

SQL> alter system flush shared_pool

system flush altered.
140ms elapsed

SQL> select count(*) from big_emp

COUNT(*)               
---------------------- 
126000000              

5,201ms elapsed

SQL> select count(*) from big_emp

COUNT(*)               
---------------------- 
126000000              

2,958ms elapsed

As this is on a notebook, I suppose the benefit of partitioning is not as large as you would get on a server with lots of fast disks and lots of CPUs, but I am pretty happy with the results.

It is still counting 126 Million rows in less than 3 seconds :-)
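
If you want to verify where the time goes, one possible check (assuming DBMS_XPLAN is available) is to look at the plan; the count should be satisfied by the partitioned index rather than by a full scan of the table:

explain plan for select count(*) from big_emp;
select * from table(dbms_xplan.display);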

Thanks for the very kind sponsor of the notebook !

This version of TOAD provides READONLY access!

This new Toad 11 beta feature made me so happy!

Not only can you specify a color for your connection (e.g. red=prod, green=devl), a color that is much more visible than in previous versions of TOAD, but you can now also set your connection to be read-only.

For any reason (bored, ill, under pressure, tired), you may screw up your database one of these days.

I feel so good that I can now set up my prod connection to be read-only. It is REALLY cool.

Setting the connection read-only is a TOAD feature. As it is in beta, it may not be 100% bullet-proof. Mostly, it allows you to run only SELECTs in the editor, even if you are logged in as SYSTEM or SYS.

Anything comparable in SQL Developer?

SQL developer for dba

Historically and semantically, SQL Developer is a developer tool. There is very little comparison with TOAD for the dba.

But… Oracle is working on this! In the latest release, 3.0 EA1, you will be able to see Tablespaces, Redo logs and controlfile, finally.

Still no session browser, but I am delighted that the database administrators are not being forgotten!

Thanks Surachart for spreading the news!

How to check if I have a pending transaction?

Pretty straightforward, check if dbms_transaction.step_id is null!

SQL> select dbms_transaction.step_id from dual;

        STEP_ID
---------------

SQL> insert into t values (1);

1 row created.

SQL> select dbms_transaction.step_id from dual;

        STEP_ID
---------------
114352430549782

SQL> commit;

Commit complete.

SQL> select dbms_transaction.step_id from dual;

        STEP_ID
---------------

SQL> insert into t values (2);

1 row created.

SQL> select dbms_transaction.step_id from dual;

        STEP_ID
---------------
124248035235852

SQL> rollback;

Rollback complete.

SQL> select dbms_transaction.step_id from dual;

        STEP_ID
---------------
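
If you prefer a yes/no answer from PL/SQL, here is a minimal wrapper around the same check (the function name is hypothetical):

create or replace function has_pending_transaction return varchar2 is
begin
  if dbms_transaction.step_id is null then
    return 'NO';    -- no transaction currently open in this session
  else
    return 'YES';   -- uncommitted work exists
  end if;
end;
/

Then select has_pending_transaction from dual; returns YES or NO for the current session.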

where is the TRIGGER ANY TABLE privilege?

You have your table data in one schema and your procedures in another one. But can you have triggers and tables in different schemas?


SYS@lsc01> create user u1 identified by u1;

User created.

SYS@lsc01> create user u2 identified by u2;

User created.

SYS@lsc01> grant create table, unlimited tablespace to u1;

Grant succeeded.

SYS@lsc01> grant create session, create trigger to u2;

Grant succeeded.

SYS@lsc01> create table u1.t(x number);

Table created.

SYS@lsc01> grant select on u1.t to u2;

Grant succeeded.

SYS@lsc01> connect u2/u2
Connected.
U2@lsc01> create trigger u2.tr after insert on u1.t for each row
  2  begin
  3  null;
  4  end;
  5  /
create trigger u2.tr after insert on u1.t for each row
                                        *
ERROR at line 1:
ORA-01031: insufficient privileges

What’s the missing privilege? To create a trigger on another schema, you need the CREATE ANY TRIGGER privilege.

 
U2@lsc01> connect / as sysdba
Connected.
SYS@lsc01> grant CREATE ANY TRIGGER to u2;

Grant succeeded.

SYS@lsc01> 
SYS@lsc01> connect u2/u2
Connected.
U2@lsc01> create trigger u2.tr after insert on u1.t for each row
  2  begin
  3  null;
  4  end;
  5  /

Trigger created.

.plz dump file

What are those .plz dump files in my user dump directory ?


-rw-r--r--   1 oracle   dba        15168 Oct  6 14:34 _anon__3ca8c5e38__AB.plz
-rw-r-----   1 oracle   dba        15883 Oct  6 14:45 db01_ora_10061.trc
-rw-r--r--   1 oracle   dba        15168 Oct  6 14:45 _anon__3c929b088__AB.plz
-rw-r-----   1 oracle   dba        15895 Oct  6 14:47 db01_ora_10666.trc
-rw-r--r--   1 oracle   dba        15168 Oct  6 14:47 _anon__3c8651198__AB.plz

let’s check one, briefly :


*** ASSERT at file pdw4.c, line 2080; Type 0xffffffff7d79fb40 has no MAP method.
Source Location = _anon__3d2474b28__AB[1, 7]

== Dump of OPT Context Object ffffffff7c519ec8. ==
  Tue Oct  6 16:31:11 2009
  Event 10944 = 0
  plsp          = ffffffff7fff67a8
  lu            = 3c9c18210
  Diana root    = 0x20014 = 131092
  Diana proc    = 0x20012 = 131090
  Graph         = ffffffff7d774d70
  Dump file     = /app/oracle/admin/DB01/udump/_anon__3d2474b28__AB.plz
  CG            = 0
...

It is a kind of dump file, apparently. I could not find details on Metalink. I generated the one above on 10.2.0.4 SPARC with the following code:

create or replace type t1 as object (x number)
/
create or replace type t2 as table of t1
/
exec if t1(1)member of t2()then null;end if

BEGIN if t1(1)member of t2()then null;end if; END;
Error at line 10
ORA-06550: line 1, column 7:
PLS-00801: internal error [*** ASSERT at file pdw4.c, line 2080; Type 0xffffffff7d7ba280 has no MAP method.; _anon__3c929b088__AB[1, 7]]

11.1.0.7 is the latest patchset for 11gR1

You will get CPUs and even PSUs (Patch Set Updates are proactive cumulative patches comprised of recommended bug fixes that are released on a regular and predictable schedule).

But no more patchsets. This is the first time I see a release with only one patchset.

Well, if you use Linux, you can go to 11gR2. Otherwise you may be better off sticking to 10.2.0.4 for a few more months.

Ref: 742060.1
11.1.0.7 is the last patch set for Release 11.1

CONNECT no longer has CREATE VIEW, what’s next?

In my current project we are going to move from 9i to 10g really soon. One of the typical issues is getting the right privileges, because the schema owners typically had only CONNECT and RESOURCE, and CONNECT no longer includes CREATE VIEW in 10gR2.

I was just reading the 11gR2 readme today :
7.2 UNLIMITED TABLESPACE Privilege Changes
The UNLIMITED TABLESPACE system privilege will be removed from the RESOURCE role in a future Oracle Database release (reference Bug 7614645).

So I should probably ask for tablespace quotas before we go to 12g 😉
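
In the meantime, the workaround is explicit grants and quotas. An example for illustration, where APP_OWNER and the USERS tablespace are hypothetical names:

grant create view to app_owner;
alter user app_owner quota unlimited on users;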

How to tune WHERE NAME LIKE ‘%ABC%’

More than once customers wanted me to tune queries where the LIKE clause made the query very slow…

The easy answer is : you cannot. If you want to search for some characters in the middle of the string, Oracle will not use an index.

Is it a correct answer? Maybe not.

Let's imagine my application allows you to search for a string in the middle of the name but, to avoid scanning too much data, enforces the search string to be at least 3 characters long. For example ABC.

In this case, instead of doing a full table scan to retrieve only a few rows, we can use a hint to tell Oracle to use an index on the name:

CREATE TABLE lsc_t AS
SELECT
  ROWNUM ID,
  SUBSTR(DBMS_RANDOM.STRING ('n', DBMS_RANDOM.VALUE (2, 8))
    || ' '
    || DBMS_RANDOM.STRING ('n', DBMS_RANDOM.VALUE (2, 8))
    ,1,17) NAME,
  TRUNC(SYSDATE-ABS(10000*DBMS_RANDOM.NORMAL)) birthdate,
  LPAD('X',4000,'X') address
FROM DUAL
CONNECT BY LEVEL <= 1e5;

ALTER TABLE lsc_t ADD PRIMARY KEY(ID);

CREATE INDEX lsc_i ON lsc_t(NAME);

EXEC dbms_stats.gather_table_stats(user,'LSC_T',cascade=>true)

let’s measure the time for a full table scan

SQL> set timi on
SQL> SELECT ID, NAME, birthdate
  FROM lsc_t WHERE NAME LIKE '%ABC%';

       ID NAME              BIRTHDATE
--------- ----------------- ---------
    60249 ABCBIFAB KRKBCRN  11-MAR-90
    16714 AF YABCG          09-OCT-95
    55571 BABCIQ GESGLW     27-MAR-50
    77561 BP GABC           24-APR-90
    80027 DALSABC TZLOAWDV  05-NOV-01
    49817 EABCTFIY XWB      10-FEB-88
    23283 EMMOGGBF DABCB    20-DEC-87
    39530 FMABCKB AB        18-SEP-87
    68605 FTPGOHE ABCC      28-SEP-28
    74615 KIFDWABC CSSUQ    08-AUG-82
    31772 KNOABCT BO        08-SEP-77
    68730 KRYIEN LMABC      10-APR-07
    43317 LUFJKZJT AUABCZR  19-DEC-88
    76851 MZABC TEIFG       14-SEP-92
    54589 NXE YABCDX        03-MAY-88
     6940 OIWABCZ DLFFXY    29-MAR-88
    59070 ONIB ADGABCI      29-JUL-07
    27264 PGHOABC ZY        05-OCT-90
    38157 QABC OPZHE        13-JUN-87
    17511 QPDKD CIABCJ      08-AUG-69
    25507 RX OWULOABC       24-FEB-92
    62159 SEABC DAILK       25-JUN-02
     3845 SK CCABCG         22-JAN-80
    50059 SPABC BVHRHW      18-MAR-86
    54700 UABCPC WUHAJS     28-OCT-71
    70207 UKY OIDUABC       23-APR-88
    39484 WABC TJLYHVJZ     14-MAR-78
    14561 WDRWABC XZKDH     29-MAR-86
    61501 YBYU RYABCGI      28-JUN-78
    30578 YEWENGX ABCHARA   12-SEP-67
    35397 YHBEABC HFKO      25-AUG-85
    26450 YOABCVG HJT       23-DEC-98
    87224 ZKNLNY YAABC      13-NOV-61

33 rows selected.

Elapsed: 00:00:02.56

about 3 seconds to retrieve 33 rows out of 100,000

let’s try with an index

SQL> SELECT /*+INDEX(LSC_T,LSC_I)*/  ID, NAME, birthdate
  FROM lsc_t WHERE NAME LIKE '%ABC%';

       ID NAME              BIRTHDATE
--------- ----------------- ---------
    60249 ABCBIFAB KRKBCRN  11-MAR-90
    16714 AF YABCG          09-OCT-95
    55571 BABCIQ GESGLW     27-MAR-50
    77561 BP GABC           24-APR-90
    80027 DALSABC TZLOAWDV  05-NOV-01
    49817 EABCTFIY XWB      10-FEB-88
    23283 EMMOGGBF DABCB    20-DEC-87
    39530 FMABCKB AB        18-SEP-87
    68605 FTPGOHE ABCC      28-SEP-28
    74615 KIFDWABC CSSUQ    08-AUG-82
    31772 KNOABCT BO        08-SEP-77
    68730 KRYIEN LMABC      10-APR-07
    43317 LUFJKZJT AUABCZR  19-DEC-88
    76851 MZABC TEIFG       14-SEP-92
    54589 NXE YABCDX        03-MAY-88
     6940 OIWABCZ DLFFXY    29-MAR-88
    59070 ONIB ADGABCI      29-JUL-07
    27264 PGHOABC ZY        05-OCT-90
    38157 QABC OPZHE        13-JUN-87
    17511 QPDKD CIABCJ      08-AUG-69
    25507 RX OWULOABC       24-FEB-92
    62159 SEABC DAILK       25-JUN-02
     3845 SK CCABCG         22-JAN-80
    50059 SPABC BVHRHW      18-MAR-86
    54700 UABCPC WUHAJS     28-OCT-71
    70207 UKY OIDUABC       23-APR-88
    39484 WABC TJLYHVJZ     14-MAR-78
    14561 WDRWABC XZKDH     29-MAR-86
    61501 YBYU RYABCGI      28-JUN-78
    30578 YEWENGX ABCHARA   12-SEP-67
    35397 YHBEABC HFKO      25-AUG-85
    26450 YOABCVG HJT       23-DEC-98
    87224 ZKNLNY YAABC      13-NOV-61

33 rows selected.

Elapsed: 00:00:00.06

Much better :)
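
To see why, a possible follow-up (assuming DBMS_XPLAN is available) is to compare the plans; with the hint you should see the narrow LSC_I index being scanned instead of the wide LSC_T table:

explain plan for
  select /*+INDEX(LSC_T,LSC_I)*/ id, name, birthdate
    from lsc_t where name like '%ABC%';
select * from table(dbms_xplan.display);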


on recycle bin

More than one user may have wondered who created those BIN$ objects when they first connected to a 10g database.


create table lsc_t(x number) 
partition by range(x) 
(partition LESS_THAN_ONE values less than (1));

drop table lsc_t;

select object_name, subobject_name, created 
from user_objects 
where object_name like 'BIN$%';
OBJECT_NAME                    SUBOBJECT_NAME  CREATED  
------------------------------ --------------- ---------
BIN$bh2VJ6FqFJ3gRAAUT+rFpg==$0 LESS_THAN_ONE   07-JUL-09

Ok, it is quite easy to get rid of it: either at DROP time with DROP TABLE LSC_T PURGE, or later with PURGE RECYCLEBIN. Most object types actually disappear from USER_OBJECTS when dropped; only the recycled ones stay behind with a BIN$ name. The recyclebin view is called, simply, RECYCLEBIN.


purge recyclebin;

select object_name, subobject_name, created 
from user_objects 
where object_name like 'BIN$%';

no rows selected.

select * from recyclebin;

no rows selected.

So far so good…

Let’s see what’s happening with my primary keys


purge recyclebin;
create table lsc_t(x number constraint lsc_t_pk primary key);
drop table lsc_t;
select object_name, original_name, type from recyclebin;

OBJECT_NAME                    ORIGINAL_NAME TYPE 
------------------------------ ------------- -----
BIN$bh23ggtBHALgRAAUT+rFpg==$0 LSC_T         TABLE
BIN$bh23ggtAHALgRAAUT+rFpg==$0 LSC_T_PK      INDEX

The primary key index is now in the recycle bin as well.

let’s recover the recycle bin version :


flashback table lsc_t to before drop;
select index_name from user_indexes where table_name='LSC_T';

INDEX_NAME                    
------------------------------
BIN$bh3GgNi1HR3gRAAUT+rFpg==$0

select object_name, original_name, type from recyclebin;

no rows selected.

👿

So you cannot simply exclude BIN$ objects from your dba maintenance scripts; you will need to deal with those, as they may be recovered indexes!
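
One way of dealing with them is simply to rename the recovered index back to a readable name, using the BIN$ name returned by the query above:

alter index "BIN$bh3GgNi1HR3gRAAUT+rFpg==$0" rename to lsc_t_pk;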

How to reuse connection in shell

It is Friday, I wanted to give my readers some stuff for the week-end 😉

Imagine that piece of code :

countlines() {
  c=$(sqlplus -s /nolog <<EOF | grep -v Connected
connect scott/tiger
set feed off head off
select count(*) from $1;
EOF
)
  echo there are $c lines in $1
}

countlines EMP
countlines DEPT

I can run this

time ./script1
there are 14 lines in EMP
there are 4 lines in DEPT

real    0m0.46s
user    0m0.06s
sys     0m0.09s

Sounds like a regular shell script. How could we optimize the countlines function? Well, we could create the connection only once and use a coprocess pipe (with |&, that is, pipe-ampersand).

sqlplus -s /nolog |&

print -p "connect scott/tiger"

read -p line
if [ $line != Connected. ]
then
  exit 1
fi

print -p "set feed off head off"

countlines() {
  print -p "select count(*) from $1;"
  read -p c
  echo "there is $c lines in $1"
}

countlines EMP
countlines DEPT

print -p disconnect

A two-way pipe is opened with sqlplus. There is only one connect and one disconnect.

Let’s check the performance :

$ time ./script2
there is 14 lines in EMP
there is 4 lines in DEPT

real    0m0.23s
user    0m0.00s
sys     0m0.01s

About twice as fast! Note the “Connected” output may not exist in recent versions of sqlplus in silent mode. If you have a script that generates hundreds of connections, or which creates a connection every 5 seconds or so, think about it :)

Enjoy your week-end

return code before grep

In my previous post hide-password-from-ps-output-sql-loader I mentioned a way to pass the password to the loader through a parameter file. As correctly suggested by Brian Tkatch, the password could also be passed on standard input:

sqlldr control=x.ctl silent=header,feedback <<EOF
scott/tiger
EOF
Username:

The Username: prompt is displayed :(   :(

How do we get rid of this ?

sqlldr control=x.ctl silent=header,feedback <<EOF | grep -v "^Username:"
scott/tiger
EOF

There is no output. But what's the return code?

echo $?
1

The return code is 1 :(

This is not the return code from sqlldr, but the return code from grep!

Ok, here is the trick; it is a bit cryptic if you are not familiar with file descriptors. The exit status of sqlldr is written to file descriptor 3, carried around the grep, then read back and used as the exit status of the whole pipeline.

( ( (sqlldr control=x <<EOF;echo $? >&3) |grep -v "^Username:" >&4 ) 3>&1 |(read x;exit $x) )4>&1
scott/tiger
EOF
echo $?
0

The return code is 0 :)

Hide password from ps output : sql loader

By reporting the process status with ps, any Unix user will see the command line arguments

ps -ef 
     UID   PID  PPID  C    STIME TTY      TIME CMD
lsc      13837 13825  0   May 11 pts/17   0:01 -ksh
oracle    4698  6294  0 12:00:40 ?        0:00 sqlplus -s system/manager
appluser  4229  4062  0 12:00:03 ?        0:00 sqlldr scott/tiger
applrun0 28445 28281  0 11:54:03 ?        0:00 imp king/gold full=y
...

What you see here above is definitely a security issue. For sqlplus, the trick is to use sqlplus /nolog and then pass connect system/manager as input or in a script.

For sqlldr (and exp/imp etc…), the trick is to use a parameter file.

To make it as safe as possible, the file must have a unique name, be readable only by its owner, and be removed after usage.

Ex:

umask 0077
TMPFILE=$(mktemp)
echo "userid=scott/tiger" >$TMPFILE
sqlldr parfile=$TMPFILE control=x.ctl silent=header,feedback
rm $TMPFILE

mktemp is a Unix utility that creates temp files with unique names.

ora-984 and sqlerrm

What’s wrong with this code ?


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production

SQL> drop table lsc_t ;

Table dropped.

SQL> create table lsc_t (x varchar2(255), y date);

Table created.

SQL> begin
  2    null;
  3  exception when others then
  4    insert into lsc_t (x,y)
  5          values (sqlerrm, sysdate);
  6  end;
  7  /

PL/SQL procedure successfully completed.

I was in the process of migrating a database to 10g and this piece of code became invalid :(

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table lsc_t ;

Table dropped.

SQL> create table lsc_t (x varchar2(255), y date);

Table created.

SQL> begin
  2    null;
  3  exception when others then
  4    insert into lsc_t (x,y)
  5          values (sqlerrm, sysdate);
  6  end;
  7  /
        values (sqlerrm, sysdate);
                *
ERROR at line 5:
ORA-06550: line 5, column 17:
PL/SQL: ORA-00984: column not allowed here
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored

SQL> 

Ok, the code is wrong. It had been running for half a decade in production, but the exception was never reached.

Note it is a bug; putting sqlerrm after sysdate would have generated the exception even in 9i:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production

SQL> begin
  2    null;
  3  exception when others then
  4    insert into lsc_t (y,x)
  5          values (sysdate,sqlerrm);
  6  end;
  7  /
        values (sysdate,sqlerrm);
                        *
ERROR at line 5:
ORA-06550: line 5, column 25:
PL/SQL: ORA-00984: column not allowed here
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored
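
The usual workaround, sketched here against the same LSC_T table, is to copy SQLERRM into a local variable before using it in the INSERT:

declare
  msg varchar2(255);
begin
  null;
exception
  when others then
    msg := sqlerrm;   -- assign SQLERRM to a variable first
    insert into lsc_t (x, y) values (msg, sysdate);
end;
/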

Just having fun with 10g migration 😉

grant access to trace files

As a developer, you sometimes need to contact your dba to get a user trace. As a dba, sending trace files to developers is not much fun.

But how do you get access to your traces without any dba ?

One way could be to set _trace_files_public=true, but this is bad. It is undocumented, it is unsupported, there is no control over who can access the files, and it probably will not help if you have no access to the database server.

I prefer to provide a function that reads from the trace directory.

Ok, here we go :


CREATE DIRECTORY user_dump_dest AS
  '/app/oracle/admin/DB01/udump';

CREATE OR REPLACE FUNCTION get_tracefile (file_name VARCHAR2)
   RETURN VARCHAR2
IS
   dest_loc   CLOB;
   src_loc    BFILE;
   ret        VARCHAR2 (4000);
BEGIN
   src_loc := BFILENAME ('USER_DUMP_DEST', file_name);
   DBMS_LOB.OPEN (src_loc, DBMS_LOB.lob_readonly);
   DBMS_LOB.createtemporary (dest_loc, TRUE);
   DBMS_LOB.loadfromfile (dest_loc, src_loc, 4000);
   ret := DBMS_LOB.SUBSTR (dest_loc, 4000);
   DBMS_LOB.CLOSE (src_loc);
   RETURN ret;
END;
/

Just a small function that returns the first 4000 characters of the trace file. I could then grant execute on that function to the developers, as shown below.
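
A usage example for illustration; DEV1 is a hypothetical developer account, and the developer still needs to know the trace file name (for example from the spid reported by the dba or in an error message):

grant execute on get_tracefile to dev1;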

it works quite well

SELECT get_tracefile ('db01_ora_6224.trc')
  FROM DUAL;
GET_TRACEFILE('DB01_ORA_6224.TRC')                                      
----------------------------------------------------------------------
/app/oracle/admin/DB01/udump/db01_ora_6224.trc
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
ORACLE_HOME = /app/oracle/product/dbms/DB01
System name:  SunOS
Node name:  dbsrv01
Release:  5.8
Version:  Generic_117000-05
Machine:  sun4u
Instance name: DB01
Redo thread mounted by this instance: 1
Oracle process number: 21
Unix process pid: 6224, image: oracle@dbsrv01 (TNS V1-V3)

*** SESSION ID:(273.54591) 2009-04-27 12:13:57.292
*** 2009-04-27 12:13:57.292
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [17069], [0x3EED05050], 
[], [], [], [], [], []
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
Current SQL statement for this session:
alter PACKAGE "LSC_PKG" compile body
----- PL/SQL Call Stack -----
...

I should mention that granting access to trace files is a security issue, as a hacker could dump some security modules. But it will be fine for your trusted developers.

package version control

Oracle does not offer any kind of version control like CVS or Subversion inside the database. CVS or Subversion can be used on the filesystem, and the code can then be deployed with sqlplus.

To quickly compare packages in Test and Production I used :


select 
  env,name,type,line,text 
from (
  select 
    env,name,type,line,text,
    count(distinct text) over(partition by name,type,line) c
  from (
    select 
      'TEST' env,name,type,line,text 
    from 
      user_source 
    union all
    select 
      'PROD' env,name,type,line,text 
    from 
      user_source@prod
  )
)
where c>1 ;

where “PROD” is a database link to the production database. My security friends will scream to hear I have a database link to production in my test schema, but you are free to do it the other way round :mrgreen:

I also explained the TOAD method in "how to compare schema" a few months ago.

commit suicide;

Tanel Poder directed me to Miladin Modrakovic's blog; I will definitely add it to my preferred feeds.

You have to try this :


SQL> alter session set events 'immediate crash';
alter session set events 'immediate crash'
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

A very decent method for killing yourself 😈

track DDL changes

Why is my package being invalidated? This is the question I asked myself a few times these days. In order to find out what is happening in my schema, I created this simple DDL trigger, which tracks all DDL changes in my schema.


CREATE TABLE AUDIT_DDL (
  d date,
  OSUSER varchar2(255),
  CURRENT_USER varchar2(255),
  HOST varchar2(255),
  TERMINAL varchar2(255),
  owner varchar2(30),
  type varchar2(30),
  name varchar2(30),
  sysevent varchar2(30),
  sql_txt varchar2(4000));
   
create or replace trigger audit_ddl_trg after ddl on schema
declare
  sql_text ora_name_list_t;
  stmt VARCHAR2(4000) := '';
  n number;
begin
  if (ora_sysevent='TRUNCATE')
  then
    null;
  else
    n:=ora_sql_txt(sql_text);
    for i in 1..n
    loop
      stmt:=substr(stmt||sql_text(i),1,4000);
    end loop; 
    insert into audit_ddl(d, osuser,current_user,host,terminal,owner,type,name,sysevent,sql_txt)
    values(
      sysdate,
      sys_context('USERENV','OS_USER') ,
      sys_context('USERENV','CURRENT_USER') ,
      sys_context('USERENV','HOST') , 
      sys_context('USERENV','TERMINAL') ,
      ora_dict_obj_owner,
      ora_dict_obj_type,
      ora_dict_obj_name,
      ora_sysevent,
      stmt
    );
  end if;
end;
/
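
An example of querying the audit trail afterwards; MY_PACKAGE is a hypothetical object name:

select d, osuser, sysevent, type, name, sql_txt
  from audit_ddl
 where name = 'MY_PACKAGE'
 order by d desc;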

Thanks Yas for your comment, I added the SQL text

high cost

What’s wrong with this query ?


select
(t6.column_value).getstringval() t6
from
table(xmlsequence(extract(xmltype('<x/>'),'/x'))) t1,
table(xmlsequence(t1.column_value))t2,
table(xmlsequence(t2.column_value))t3,
table(xmlsequence(t3.column_value))t4,
table(xmlsequence(t4.column_value))t5,
table(xmlsequence(t5.column_value))t6;
T6
------------------------
<x/>

Elapsed: 00:00:00.01

Well, let’s check the plan :


--------------------------------------------------------------------
| Id  | Operation          | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT   |    18E|    15E|    18E  (0)|999:59:59 |
|   1 |  NESTED LOOPS      |    18E|    15E|    18E  (0)|999:59:59 |
|   2 |   NESTED LOOPS     |    18E|    15E|    99P  (3)|999:59:59 |
|   3 |    NESTED LOOPS    |  4451T|    31P|    12T  (3)|999:59:59 |
|   4 |     NESTED LOOPS   |   544G|  3045G|  1490M  (3)|999:59:59 |
|   5 |      NESTED LOOPS  |    66M|   254M|   182K  (3)| 00:36:31 |
|   6 |       COLLECTION I |       |       |            |          |
|   7 |       COLLECTION I |       |       |            |          |
|   8 |      COLLECTION IT |       |       |            |          |
|   9 |     COLLECTION ITE |       |       |            |          |
|  10 |    COLLECTION ITER |       |       |            |          |
|  11 |   COLLECTION ITERA |       |       |            |          |
--------------------------------------------------------------------

The plan estimates 18E rows (that is, 1.8E19) and 15 exabytes, the cost is 1.8E19 and the time is about one month :mrgreen:

Stored outlines

Note:
Performance Tuning Guide
Stored outlines will be desupported in a future release in favor of SQL plan management. In Oracle Database 11g Release 1 (11.1), stored outlines continue to function as in past releases. However, Oracle strongly recommends that you use SQL plan management for new applications. SQL plan management creates SQL plan baselines, which offer superior SQL performance and stability compared with stored outlines.


This said, let's take a small example. If you have a query which runs fast most of the time and sometimes runs very slowly due to an unexpected plan change, you may want to consider enforcing plan stability with a stored outline.

To fake this example, let's try to enforce a full table scan for select empno from emp where ename='SCOTT'.

SQL> set autot on exp
SQL> select empno from emp where ename='SCOTT';
     EMPNO
----------
      7788

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 By

For the purpose of this exercise, I consider this to be the right plan, and I want to force Oracle to use this plan for this query.


SQL> create or replace outline o for category emp_scott on 
  select empno from emp where ename='SCOTT';

Outline created.

SQL> create unique index i on emp(ename) 
  tablespace my_slow_tape_drive;

Index created.

SQL> set timi on
SQL> set autot on exp
SQL> select empno from emp where ename='SCOTT';
     EMPNO
----------
      7788

Elapsed: 01:45:59.95

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2
   2    1     INDEX (UNIQUE SCAN) OF 'I' (UNIQUE) (Cost=1 

Oracle uses an index scan, but the index is located on a tape (which is not possible in 11gR1 and older 😉 ) and it takes ages to complete the query. Let's try to use the good plan that was in place when we created the stored outline:

 
SQL> alter session set use_stored_outlines=emp_scott;

Session altered.

Elapsed: 00:00:00.00
SQL> select empno from emp where ename='SCOTT';
     EMPNO
----------
      7788

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 By

SQL> SELECT LAST_LOAD_TIME,OUTLINE_CATEGORY,sql_text 
  from v$sql 
  where sql_text like 'select%SCOTT%';
LAST_LOAD_TIME      OUTLINE_C SQL_TEXT
------------------- --------- ----------------------------
2008-10-16/09:42:30           select empno from emp where 
2008-10-16/09:46:50 EMP_SCOTT select empno from emp where 

The plan from the stored outline is now used.
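
One caveat: USE_STORED_OUTLINES is not an initialization parameter, so it cannot be kept in the spfile. A common workaround, sketched here (the trigger name is hypothetical, and the owner needs the ALTER SYSTEM privilege), is to set it from a database startup trigger:

create or replace trigger set_outline_category
  after startup on database
begin
  execute immediate 'alter system set use_stored_outlines=emp_scott';
end;
/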

where is my database link listed?


$ sqlplus scott/tiger@DEVL
SQL> select * from all_db_links;
no rows selected
SQL> select * from dual@PROD;
D
-
X

Hey, why does this work???

Ok, after some research I found out that this seems to be an implicit loopback database link. The fact that the DEVL database has its global name set to PROD just makes it even more confusing 😉


SQL> select * from global_name;

GLOBAL_NAME
--------------------------------
PROD.LAURENTSCHNEIDER.COM
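
A possible cleanup, if this loopback confusion is not wanted, is to rename the global name of the DEVL database (keeping in mind that anything relying on the implicit loopback link will stop working):

alter database rename global_name to devl.laurentschneider.com;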

11g release 1 patchset 1

I just noticed on Sven's blog that 11.1.0.7 is available. I have recently upgraded my connection at home, so it took a bit less than half an hour to download this 1.5G patchset.


$ wget -O p6890831_111070_Linux-x86-64.zip http://oracle-updates.oracle…
--10:17:40-- http://oracle-updates.oracle.com/ARUConnect/p6890831_111070_Linux-x86-64..
Resolving oracle-updates.oracle.com... 87.248.199.23, 87.248.199.24
Connecting to oracle-updates.oracle.com|87.248.199.23|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1,613,366,248 (1.5G) [application/zip]

100%[==================================>] 1,613,366,248 891.73K/s

10:47:31 (879.89 KB/s) - `p6890831_111070_Linux-x86-64.zip' saved [1613366248/1613366248]

$ unzip p6890831_111070_Linux-x86-64.zip
$ cd Disk1
$ ./runInstaller
Starting Oracle Universal Installer…
Preparing to launch Oracle Universal Installer from
Oracle Universal Installer, Version 11.1.0.7.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved.

$ su -
root's password:
# /u00/app/oracle/product/11.1/db_1/root.sh
Running Oracle 11g root.sh script…
# exit
$ sqlplus / as sysdba
SQL*Plus: Release 11.1.0.7.0 - Production on Sat Sep 20 11:10:35 2008

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup quiet migrate
ORACLE instance started.
Database mounted.
Database opened.
SQL> set time on
11:42:21 SQL> sho parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 200M
11:42:25 SQL> alter system set sga_target=300m scope=spfile;

System altered.

11:42:28 SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
11:42:43 SQL> startup quiet migrate
ORACLE instance started.
Database mounted.
Database opened.
11:43:01 SQL> @?/rdbms/admin/catupgrd

11:54:03 SQL> Rem END catupgrd.sql
11:54:03 SQL> startup
ORACLE instance started.

Total System Global Area 313159680 bytes
Fixed Size 2159272 bytes
Variable Size 226495832 bytes
Database Buffers 79691776 bytes
Redo Buffers 4812800 bytes
Database mounted.
Database opened.
11:56:28 SQL> select comp_name,status,version from dba_registry;

COMP_NAME STATUS VERSION
----------------------------------- -------- ----------
Oracle Ultra Search VALID 11.1.0.7.0
Oracle XML Database VALID 11.1.0.7.0
Oracle Text VALID 11.1.0.7.0
Oracle Expression Filter VALID 11.1.0.7.0
Oracle Rules Manager VALID 11.1.0.7.0
Oracle Workspace Manager VALID 11.1.0.7.0
Oracle Database Catalog Views VALID 11.1.0.7.0
Oracle Database Packages and Types VALID 11.1.0.7.0
JServer JAVA Virtual Machine VALID 11.1.0.7.0
Oracle XDK VALID 11.1.0.7.0
Oracle Database Java Packages VALID 11.1.0.7.0

My first try with a 200MB sga_target did not succeed, but with a 300MB sga_target it worked fine.

About case sensitivity

SQL is a case insensitive language. That means, you can write any SQL statement in uppercase or lowercase.

SQL> SELECT DUMMY FROM DUAL;
D
-
X

SQL> select dummy from dual;
D
-
X

Some formatters like Toad and some courseware like Skillsoft E-Learning advise using UPPERCASE for commands and lowercase for tables and columns.


/* Formatted on 2008/09/19 10:00 
(Formatter Plus v4.8.8) */
SELECT dummy
  FROM DUAL;

Well, Toad decided to put DUAL in uppercase. Anyway, this is not an authority; define your own guideline and stick to it. If you use TOAD a lot, it is maybe OK to use lowercase for non-keywords.

The column dummy is actually the uppercase "DUMMY" column of "DUAL". Putting double quotes around a column or table name makes it case sensitive, so "DUMMY" is not "dummy".

SQL> select "dummy" from "dual";
select "dummy" from "dual"
                    *
Error at line 1
ORA-00942: table or view does not exist

SQL> SELECT "DUMMY" FROM "DUAL";
D
-
X

OK, something very bad about the Toad formatter (at least the version I am using) is that it considers unquoted passwords to be case insensitive, which is wrong. In 11g, the password is case sensitive even when not in double quotes. Therefore, be sure to use the right case when creating scripts, and either do not format your CREATE USER statements with Toad or use double quotes for disambiguation!


SQL> ALTER USER scott IDENTIFIED BY BIG;
User altered.
SQL> connect scott/BIG
Connected.
SQL> /* Formatted on 2008/09/19 10:17 (Formatter Plus v4.8.8) */
SQL> ALTER USER scott IDENTIFIED BY big;
SQL> connect scott/BIG
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.

Difference between rollbac and rollback

What is the difference between rollbac and rollback?


SQL> create table t as select 1 x from dual;

Table created.

SQL> update t set x=2;

1 row updated.

SQL> savepoint a;

Savepoint created.

SQL> update t set x=3;

1 row updated.

SQL> rollbac to savepoint a;
Rollback complete.
SQL> select * from t;
         X
----------
         1

WTF! rollbac does not seem to work correctly 😉

Actually, ROLL, ROLLB, ROLLBA and ROLLBAC are not valid SQL statements. However, SQL*Plus recognizes them as SQL*Plus commands, but this behavior is not documented.


SQL> del *
SQL> roll
Rollback complete.
SQL> list
SP2-0223: No lines in SQL buffer.

The statement does not end with a semicolon and is not stored in the SQL*Plus buffer.

So it is not a SQL command.

return size of to_char

The width of a column is known before execution.

Well, in most cases…


SELECT TO_CHAR(SYSDATE,'DAY','NLS_DATE_LANGUAGE=FRENCH') 
  FROM DUAL;
TO_CHAR(
--------
SAMEDI

The length is at most 8 characters (VENDREDI). Therefore the width of the column is 8.


SELECT TO_CHAR(SYSDATE,'YEAR') FROM DUAL;
TO_CHAR(SYSDATE,'YEAR')
------------------------------------------
TWO THOUSAND EIGHT

Oracle thinks the length is at most 42 characters. This is a bad guess, the year 7777 is the longest to spell and is only 27 characters. So the width is 42.

SELECT TO_CHAR(SYSDATE,'JSP') FROM DUAL;
TO_CHAR(SYSDATE,'JSP')
---------------------------------------------------------
---------------------
TWO MILLION FOUR HUNDRED FIFTY-FOUR THOUSAND SIX HUNDRED 
EIGHTY-EIGHT

again, this is a bad guess, the maximum length of a spelled Julian day is 77 (May 9, 4388) not 78.

let’s try with spelling the nanoseconds :


select to_char(current_timestamp,'FF9SP') from dual;
TO_CHAR(CURRENT_TIMESTAMP,'FF9SP')
-------------------------------------------------------
-----------------------
SEVENTY-FOUR MILLION

here 78 is a really bad guess… the nanoseconds could be 100 characters long!

What happened then :


select to_char(timestamp '0001-01-01 00:00:00.777777777',
  'FFSP') from dual;
select to_char(timestamp '0001-01-01 00:00:00.777777777',
               *
ERROR at line 1:
ORA-01877: string is too long for internal buffer

The query fails. Does it mean the maximum length returned by TO_CHAR is 78? Not at all!

It simply means Oracle sets the width column to something too small to contain the result.

By setting the column to something longer, the query should work…


select cast(to_char(timestamp '0001-01-01 00:00:00.
  777777777','FFSP') as varchar2(100)) from dual;
CAST(TO_CHAR(TIMESTAMP'0001-01-0100:00:00.777777777','FFSP')
------------------------------------------------------------
ASVARCHAR2(100))
----------------------------------------
SEVEN HUNDRED SEVENTY-SEVEN MILLION SEVEN HUNDRED SEVENTY-
SEVEN THOUSAND SEVEN HUNDRED SEVENTY-SEVEN

Ok, let’s do something perverse :mrgreen:

Let's set CURSOR_SHARING to FORCE!

Have a look :


SQL> alter session set cursor_sharing='FORCE';

Session altered.

SQL> select to_char(to_date(1000000,'J'),'JSP') from dual;
TO_CHAR(TO_DATE(1000000,'J'),'JSP')
----------------------------------------------------------
-----------------
ONE MILLION

The width of the column is evaluated to 75. However, 75 is not the maximum length!


SQL> select to_char(to_date(1777777,'J'),'JSP') from dual;
select to_char(to_date(1777777,'J'),'JSP') from dual
       *
ERROR at line 1:
ORA-01801: date format is too long for internal buffer

Not only does CURSOR_SHARING=FORCE influence the output of the query, it also makes the SQL fail.

Let’s revert to EXACT for confirmation.

SQL> alter session set cursor_sharing='EXACT';

Session altered.

SQL> select to_char(to_date(1000000,'J'),'JSP') from dual;
TO_CHAR(TO_
-----------
ONE MILLION

SQL> select to_char(to_date(1777777,'J'),'JSP') from dual;
TO_CHAR(TO_DATE(1777777,'J'),'JSP')
--------------------------------------------------------------
--------------
ONE MILLION SEVEN HUNDRED SEVENTY-SEVEN THOUSAND SEVEN HUNDRED 
SEVENTY-SEVEN