Category Archives: sql

one more stragg


select
  to_char(
    sum(
      power(100,rownum-1)*
      deptno
    ),
    'FM99G99G99G99G99', 
    'NLS_NUMERIC_CHARACTERS=,;'
  ) deptlist 
from dept

DEPTLIST       
---------------
40;30;20;10

I also wrote about distinct listagg. The same trick applies to SUM(DISTINCT).


select 
  to_char(
    sum(power(1e3,d-1)*deptno),
    'FM999G999G999', 
    'NLS_NUMERIC_CHARACTERS=,;'
  ) deptsum, 
  to_char(
    sum(distinct power(1e2,d-1)*deptno),
    'FM99G99G99', 
    'NLS_NUMERIC_CHARACTERS=,;'
  ) deptsumdist, 
  to_char(
    sum(power(1e1,d-1)),
    'FM9G9G9', 
    'NLS_NUMERIC_CHARACTERS=,;'
  ) deptcount, 
  to_char(
    sum(power(1e4,c-1)*comm),
    'FM9999G9999G9999G9999G9999', 
    'NLS_NUMERIC_CHARACTERS=,;'
  ) commlist 
from (
  select comm, deptno, 
    dense_rank() over (order by deptno) d, 
    dense_rank() over (order by comm) c 
  from emp);   

DEPTSUM      DSUMDIST COUNT COMMLIST
------------ -------- ----- -------------------
180;100;030  30;20;10 6;5;3 1400;0500;0300;0000

rowid of the last insert

If you are looking for the result of your last insert, check RETURNING INTO.

Be it an identity column, the rowid or any other expression, you can get it back.


SQL> var r varchar2(24)
SQL> var x number
SQL> var d varchar2(30)
SQL> insert into t values (default) 
  returning rowid,x,sysdate into :r,:x,:d;

1 row created.

SQL> print

R
--------------------
AAAaFTAAIAAAAILAAD

         X
----------
         6

D
--------------------------------
2014-11-07_13:33:03

It is documented in the SQL Reference. No need for PL/SQL here. It is very useful if you use sequences too.


SQL> insert into t(x) values (s.nextval) returning x into :x;

1 row created.

SQL> print x

         X
----------
         2
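
RETURNING also works for multi-row DML, but then you need PL/SQL and BULK COLLECT. A minimal sketch, assuming the same table T with its numeric column X:


declare
  type rowid_tab is table of rowid;
  r rowid_tab;
begin
  -- a no-op update, just to get all rowids back in one round trip
  update t set x = x returning rowid bulk collect into r;
  dbms_output.put_line(r.count || ' rowids returned');
end;
/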

select pdf from sqlplus

SQL*Plus 10gR2 and later allows you to select a BLOB. If you use Linux, you can convert the hex output back to binary with xxd


sqlplus -s scott/tiger <<EOF |xxd -p -r >doc.pdf
set pages 0 lin 17000 long 1000000000 longc 16384
select document from emp where ename=user;
EOF

Obviously, it could also be a sound, a video or an image !

distinct listagg

One limitation of LISTAGG: you cannot use DISTINCT. Okay, there are plenty of distinct-capable LISTAGG workarounds, but getting the real LISTAGG to do it is a bit of a headache

With one listagg


SELECT 
  DEPTNO,
  LISTAGG (JOB, ',') 
    WITHIN GROUP (ORDER BY JOB) JOBS
FROM (
  SELECT DISTINCT DEPTNO, JOB  FROM EMP)
GROUP BY DEPTNO;

    DEPTNO JOBS                          
---------- ------------------------------
        10 CLERK,MANAGER,PRESIDENT       
        20 ANALYST,CLERK,MANAGER         
        30 CLERK,MANAGER,SALESMAN        

OK, that was not that hard, but it gets more difficult with two LISTAGGs


SELECT 
  LISTAGG (job, ',') 
    WITHIN GROUP (ORDER BY job) jobs,
  LISTAGG (deptno, ',') 
    WITHIN GROUP (ORDER BY deptno) deptnos
FROM (
  SELECT 
    DECODE(
      ROW_NUMBER () OVER (
        PARTITION BY deptno 
        ORDER BY 1),
      1, deptno) deptno,
    DECODE (
      ROW_NUMBER () OVER (
        PARTITION BY job 
        ORDER BY 1),
      1, job) job
  FROM emp
);
DEPTNOS  JOBS                                      
-------- ----------------------------------------
10,20,30 ANALYST,CLERK,MANAGER,PRESIDENT,SALESMAN

Too bad the DISTINCT keyword was not implemented

disable commit in procedure

There is an obscure syntax that prevents a procedure from issuing a commit


alter session DISABLE COMMIT IN PROCEDURE;

According to the doc, it prevents procedures from committing your data

Test case


SQL> alter session DISABLE COMMIT IN PROCEDURE
Session altered.
SQL> create table t(x number)
Table created.
SQL> create or replace procedure p is 
begin
  commit;
end;
Procedure created.
SQL> insert into t values (1)
1 row created.
SQL> exec p
BEGIN p; END;
Error at line 17
ORA-00034: cannot COMMIT in current PL/SQL session
ORA-06512: at "SCOTT.P", line 3
ORA-06512: at line 1

But some sys procedures may bypass this restriction


SQL> exec dbms_stats.gather_table_stats(user, 'T')
 PL/SQL procedure successfully completed.
SQL> rollback
Rollback complete.
SQL> select * from t

         X
----------
         1

The row was silently committed.

Index suggestion from the access advisor

Test case :


create table t(x varchar2(8) primary key, 
  y varchar2(30));
insert into t(x,y) select 
  to_char(rownum,'FM00000000'), 
  object_name from all_objects where rownum<1e4;
commit;
exec dbms_stats.gather_table_stats(user,'T')

One user wants to filter on x but does not do the casting properly


SQL> select * from t where x=00000001;

X        Y                             
-------- ------------------------------
00000001 CON$

He received the expected data.

Let’s check his plan

 
SQL> explain plan for 
  select * from t where x=00000001;
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------
Plan hash value: 2153619298
----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|*  1 |  TABLE ACCESS FULL| T    |
----------------------------------

Predicate Information 
  (identified by operation id):
-----------------------------------
   1 - filter(TO_NUMBER("X")=00000001)

Obviously, he is not using the primary key index. He should use a single-quoted literal


select * from t where x='00000001'

Okay, let’s tune ;)


SQL> VAR start_time VARCHAR2(32)
SQL> VAR end_time VARCHAR2(32)
SQL> exec select to_char(sysdate, 
  'MM-DD-YYYY HH24:MI:SS') into :start_time 
  from dual
SQL> select * from t where x=00000001;

X        Y                             
-------- ------------------------------
00000001 CON$
SQL> exec select to_char(sysdate, 
  'MM-DD-YYYY HH24:MI:SS') into :end_time
   from dual
SQL> VAR task_id NUMBER
SQL> VAR task_name VARCHAR2(32)
SQL> EXEC :task_name := 'ADV01'
SQL> EXEC DBMS_ADVISOR.CREATE_TASK (
  DBMS_ADVISOR.SQLACCESS_ADVISOR, 
  :task_id, :task_name)
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'EXECUTION_TYPE', 'INDEX_ONLY')
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'VALID_TABLE_LIST', 'SCOTT.T')
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'START_TIME', :start_time)
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'END_TIME', :end_time)
SQL> exec DBMS_SQLTUNE.CREATE_SQLSET ('STS01')
SQL> declare
  c DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open c for select value(t) from table(
    DBMS_SQLTUNE.SELECT_CURSOR_CACHE) t;
  DBMS_SQLTUNE.LOAD_SQLSET('STS01', c);
end;
SQL> exec DBMS_ADVISOR.ADD_STS_REF
  (:task_name, null, 'STS01')
SQL> EXEC DBMS_ADVISOR.EXECUTE_TASK (:task_name)
SQL> select
  dbms_advisor.get_task_script(:TASK_NAME)
  from dual;

DBMS_ADVISOR.GET_TASK_SCRIPT(:TASK_NAME)
----------------------------------------------
Rem  SQL Access Advisor: Version 11.2.0.4.0 - 
Rem
Rem  Username:        SCOTT
Rem  Task:            TASK_54589
Rem  Execution date:
Rem

CREATE INDEX "SCOTT"."T_IDX$$_D53D0000"
    ON "SCOTT"."T"
    (TO_NUMBER("X"))
    COMPUTE STATISTICS;

I have retrieved the index suggestion from the SQL Cache for the table T.

Let’s blindly implement it…


SQL> CREATE INDEX "SCOTT"."T_IDX$$_D5150000"
    ON "SCOTT"."T"
    (TO_NUMBER("X"))
    COMPUTE STATISTICS;
SQL> explain plan for 
  select * from t where x=00000001
Explain complete.
SQL> select * from table(dbms_xplan.display)

PLAN_TABLE_OUTPUT
---------------------------------------------
Plan hash value: 4112678587

-----------------------------------------------
| Id  | Operation                   | Name    |
-----------------------------------------------
|   0 | SELECT STATEMENT            |         |
|   1 |  TABLE ACCESS BY INDEX ROWID| T       |
|*  2 |   INDEX RANGE SCAN | T_IDX$$_D5150000 |
-----------------------------------------------

Predicate Information 
  (identified by operation id): 
-----------------------------------------------
   2 - access(TO_NUMBER("X")=00000001)

Much better. But …


SQL> insert into t(x) values('UNKNOWN');
insert into t(x) values('UNKNOWN')
Error at line 1
ORA-01722: invalid number

Adding a function-based index on TO_NUMBER(x) to the table also implies that no non-numeric value is allowed in that column anymore. This is an application change. Be aware…

Get the secondmax, again

Just bouncing on 2008/07/secondmax.

Another way of getting the second max would be with an ordered collection. While collection methods like (n), FIRST, LAST and COUNT are not available in SQL, I used PL/SQL (within SQL, thanks to the WITH FUNCTION clause)


WITH FUNCTION f (c sys.odcinumberlist, n NUMBER) RETURN number
IS BEGIN RETURN c (n); END;
SELECT 
  f(
    CAST(
      COLLECT(
        CAST(
          sal AS NUMBER
        ) ORDER BY sal DESC
      ) 
      AS SYS.odcinumberlist
    ), 
    2
  ) as secondmax
FROM emp; 

 SECONDMAX
----------
      3000

Another 12c syntax would be


SELECT sal secondmax
FROM emp
ORDER BY sal DESC
OFFSET 1 ROW
FETCH FIRST 1 ROW ONLY;

 SECONDMAX
----------
      3000
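
For completeness (not in the original post), the classic pre-12c analytic variant would be something like this, which returns the second highest distinct salary:


SELECT DISTINCT sal secondmax
FROM (
  SELECT sal, DENSE_RANK() OVER (ORDER BY sal DESC) rnk
  FROM emp)
WHERE rnk = 2;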

strings larger than 4000 in 12c

Back in Oracle 7, the maximum length for VARCHAR2 was 2000. In 11gR2, it is still 4000 for VARCHAR2 columns and string literals (and 2000 for CHAR). Any attempt to use something larger will produce the infamous ORA-00910, ORA-01704 or ORA-01489 errors.


SQL> create table t(x varchar2(5000));
create table t(x varchar2(5000))
                           *
ERROR at line 1:
ORA-00910: specified length too long for its datatype

SQL> select 'x-
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-
... 100 times 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-
' from dual;

select 'x-
       *
ERROR at line 1:
ORA-01704: string literal too long

SQL> select 'x'||
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'||
... 100 times 
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'||
'x' from dual;

select 'x'||
       *
ERROR at line 1:
ORA-01489: result of string concatenation is too long

In 12c it is possible to extend VARCHAR2 to 32K.

This is not the default; it is controlled by the MAX_STRING_SIZE parameter.


SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade quiet
ORACLE instance started.
Database mounted.
Database opened.
SQL> alter system set max_string_size=extended;
System altered.
SQL> @?/rdbms/admin/utl32k
SQL> shu immediate
Database closed.
Database dismounted.
SQL> startup quiet
ORACLE instance started.
Database mounted.
Database opened.

Now we can create, insert and select longer strings.


SQL> create table t(x varchar2(6000));
Table created.
SQL> desc t
 Name                  Null?    Type
 --------------------- -------- ----------------
 X                              VARCHAR2(6000)
SQL> insert into t values ('-
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-
... 100 times 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
1 row created.
SQL> insert into t values (
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'||
... 100 times 
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
1 row created.

Oracle 8i came with a bunch of issues (you could not index a VARCHAR2(4000) column) because the datatype was really increased to 4000. In 12c, however, the extended column is silently stored as a CLOB.


SQL> select TABLE_NAME,COLUMN_NAME,SEGMENT_NAME from user_lobs where table_name='T';

TABLE_NAME COLUMN_NAM SEGMENT_NAME
---------- ---------- ------------------------------
T          X          SYS_LOB0000022083C00001$$

It also works for NVARCHAR2 and RAW (which was never increased to 4000). But not for CHAR.

This is a smart move to provide larger text fields to developers. On the other hand, why limit it to 32k if it is a clob? VARCHAR(MAX) in SQL Server does not have such a limitation.


SQL> create table t(x varchar2(32768));
create table t(x varchar2(32768))
                           *
ERROR at line 1:
ORA-00910: specified length too long for its datatype

SQL> select 'x'||
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'||
... 999 times 
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'||
'x' from dual;
select
*
ERROR at line 1:
ORA-01489: result of string concatenation is too long

Monitoring details on your explain plan

We know how to generate an explain plan: SET AUTOT ON, the good old ambulance in Toad (which is no longer an ambulance these days), the explain plan button in SQL Developer, or simply EXPLAIN PLAN


create table t1 partition by hash(object_id) partitions 16 as select * from dba_objects;
exec dbms_stats.gather_table_stats(user, 'T1')
explain plan for 
  select /*+ PARALLEL(x, 4) */ * from t1 x, t1 y 
  where x.object_name = y.object_name and x.owner != y.owner

EXPLAIN PLAN writes into the PLAN_TABLE, and the result can be displayed with


SQL> select * from table(dbms_xplan.display)

PLAN_TABLE_OUTPUT
--------------------------------
Plan hash value: 2344570521

---------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes |
---------------------------------------------------------------
|   0 | SELECT STATEMENT           |          | 18287 |  3500K|
|   1 |  PX COORDINATOR            |          |       |       |
|   2 |   PX SEND QC (RANDOM)      | :TQ10002 | 18287 |  3500K|
|*  3 |    HASH JOIN               |          | 18287 |  3500K|
|   4 |     PX RECEIVE             |          | 19219 |  1839K|
|   5 |      PX SEND HYBRID HASH   | :TQ10001 | 19219 |  1839K|
|   6 |       STATISTICS COLLECTOR |          |       |       |
|   7 |        PX BLOCK ITERATOR   |          | 19219 |  1839K|
|   8 |         TABLE ACCESS FULL  | T1       | 19219 |  1839K|
|   9 |     BUFFER SORT            |          |       |       |
|  10 |      PX RECEIVE            |          | 19219 |  1839K|
|  11 |       PX SEND HYBRID HASH  | :TQ10000 | 19219 |  1839K|
|  12 |        PARTITION HASH ALL  |          | 19219 |  1839K|
|  13 |         TABLE ACCESS FULL  | T1       | 19219 |  1839K|
---------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("X"."OBJECT_NAME"="Y"."OBJECT_NAME")
       filter("X"."OWNER"<>"Y"."OWNER")

But if you want to display progress on a currently running query, use DBMS_SQLTUNE (or Oracle Enterprise Manager SQL Monitoring):


SQL> set lin 150 longc 150 long 1000000;
SQL> select DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_plan_hash_value
  =>2344570521) from dual;

DBMS_SQLTUNE.REPORT_SQL_MONITOR(SQL_PLAN_HASH_VALUE=>2344570521) 
----------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+ PARALLEL(x, 4) */ * from t1 x, t1 y where 
x.object_name = y.object_name and x.owner != y.owner

Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  1
 Session             :  SCOTT (25:10369)
 SQL ID              :  0dpj0fxm2gf81
 SQL Execution ID    :  16777216
 Execution Started   :  08/12/2013 14:48:26
 First Refresh Time  :  08/12/2013 14:48:26
 Last Refresh Time   :  08/12/2013 14:48:59
 Duration            :  34s
 Module/Action       :  SQL*Plus/-
 Service             :  SYS$USERS
 Program             :  sqlplus.exe
 Fetch Calls         :  19

DBMS_SQLTUNE.REPORT_SQL_MONITOR(SQL_PLAN_HASH_VALUE=>2344570521)
----------------------------------------------------------------

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    0.25 |    0.13 |     0.12 |    19 |    752 |
=================================================

Parallel Execution Details (DOP=4 , Servers Allocated=8)
========================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   |
|                |       |         | Time(s) | Time(s) |
========================================================
| PX Coordinator | QC    |         |    0.08 |    0.03 |
| p000           | Set 1 |       1 |    0.03 |    0.02 |
| p001           | Set 1 |       2 |    0.03 |    0.02 |
| p002           | Set 1 |       3 |         |         |
| p003           | Set 1 |       4 |    0.03 |    0.02 |
| p004           | Set 2 |       1 |    0.02 |    0.01 |
| p005           | Set 2 |       2 |    0.03 |    0.02 |
| p006           | Set 2 |       3 |    0.03 |    0.00 |
| p007           | Set 2 |       4 |    0.02 |    0.00 |
========================================================

SQL Plan Monitoring Details (Plan Hash Value=2344570521)
==================================================================
| Id   |          Operation          |  Rows   | Cost |   Rows   |
|      |                             | (Estim) |      | (Actual) |
==================================================================
| -> 0 | SELECT STATEMENT            |         |      |      271 |
| -> 1 |   PX COORDINATOR            |         |      |      271 |
|    2 |    PX SEND QC (RANDOM)      |   18287 |  202 |      615 |
|    3 |     HASH JOIN               |   18287 |  202 |      538 |
|    4 |      PX RECEIVE             |   19219 |   44 |    14481 |
|    5 |       PX SEND HYBRID HASH   |   19219 |   44 |    19219 |
|    6 |        STATISTICS COLLECTOR |         |      |    19219 |
|    7 |         PX BLOCK ITERATOR   |   19219 |   44 |    19219 |
|    8 |          TABLE ACCESS FULL  |   19219 |   44 |    19219 |
|    9 |      BUFFER SORT            |         |      |     1316 |
|   10 |       PX RECEIVE            |   19219 |  158 |    14481 |
|   11 |        PX SEND HYBRID HASH  |   19219 |  158 |    19219 |
|   12 |         PARTITION HASH ALL  |   19219 |  158 |    19219 |
|   13 |          TABLE ACCESS FULL  |   19219 |  158 |    19219 |
==================================================================

The small -> sign shows you where the execution currently is, and the report displays actual (not estimated) figures.

If I run it again :


==================================================================
| Id   |          Operation          |  Rows   | Cost |   Rows   |
|      |                             | (Estim) |      | (Actual) |
==================================================================
| -> 0 | SELECT STATEMENT            |         |      |     6451 |

For the same query, we see some progress (6451 rows now).

Check that you have licensed the appropriate tuning pack before using DBMS_SQLTUNE
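
A quick way to see what the instance is configured to allow (just a sketch; real-time SQL monitoring expects the Tuning Pack, i.e. DIAGNOSTIC+TUNING):


show parameter control_management_pack_access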

One more obscure syntax

Just as 10g introduced MODEL, which is mostly used to impress your colleagues but seldom seen in production, 12c has a new syntax, MATCH_RECOGNIZE

I gave it a first try to recognize trends in EMP salaries over hire date.


SELECT ename, hiredate, sal, trend
FROM emp 
MATCH_RECOGNIZE (
  ORDER BY hiredate
  MEASURES CLASSIFIER () AS TREND
  ALL ROWS PER MATCH
  PATTERN (FIRST * Better * Worst * Same *)
  DEFINE FIRST AS ROWNUM = 1,
  Better AS Better.sal > PREV (sal),
  Same AS Same.sal = PREV (sal),
  Worst AS Worst.sal < PREV (sal));


ENAME      HIREDATE          SAL TREND                         
---------- ---------- ---------- ------------------------------
SMITH      1980-12-17        800 FIRST                         
ALLEN      1981-02-20       1600 BETTER                        
WARD       1981-02-22       1250 WORST                         
JONES      1981-04-02       2975 BETTER                        
BLAKE      1981-05-01       2850 WORST                         
CLARK      1981-06-09       2450 WORST                         
TURNER     1981-09-08       1500 WORST                         
MARTIN     1981-09-28       1250 WORST                         
KING       1981-11-17       5000 BETTER                        
JAMES      1981-12-03        950 WORST                         
FORD       1981-12-03       3000 BETTER                        
MILLER     1982-01-23       1300 WORST                         
SCOTT      1987-04-19       3000 BETTER                        
ADAMS      1987-05-23       1100 WORST                         

Best practice : use double quotes, even in DBMS_STATS

Whenever you create a table, it is better to use double quotes to avoid invalid identifier errors.


SQL> CREATE TABLE /XXX(x number);
CREATE TABLE /XXX(x number)
             *
ERROR at line 1:
ORA-00903: invalid table name

SQL> CREATE TABLE "/XXX"(x number);

Table created.

Even in DBMS_STATS you should use double quotes


SQL> exec dbms_stats.gather_table_stats(user,'/XXX')
BEGIN dbms_stats.gather_table_stats(user,'/XXX'); END;

*
ERROR at line 1:
ORA-20001: /XXX is an invalid identifier
ORA-06512: at "SYS.DBMS_STATS", line 23829
ORA-06512: at "SYS.DBMS_STATS", line 23880
ORA-06512: at line 1

SQL> exec dbms_stats.gather_table_stats(user,'"/XXX"')

PL/SQL procedure successfully completed.

It is also good practice not to use table names like “/XXX”, “FROM” or “ROWID”. But if you use dynamic SQL, make sure your code does not break on such identifiers.

Oracle pretty seldom introduces new reserved words, as doing so breaks code, so if you do


CREATE TABLE MYTABLE(x number);

you can be pretty sure that neither MYTABLE nor X will be reserved in 12c or 13c…
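
If in doubt, you can ask the database itself. A sketch querying V$RESERVED_WORDS (the RESERVED column exists in 11g and later):


select keyword, reserved
from v$reserved_words
where keyword in ('MYTABLE', 'X', 'FROM', 'ROWID');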

Delete one billion rows

To delete a large number of rows, for instance all rows with a date up to the end of 2010, you can issue this simple statement.


SQL> DELETE FROM T WHERE C<DATE '2011-01-01';
1'000'000'000 rows deleted 
Elapsed: 23:45:22.01
SQL> commit;

This is perfectly fine. The table remains online, other users are not much affected (maybe they will not even notice the lower IO performance).

It will generate quite a lot of undo, and you will need enough space in your archivelog destination, a large undo tablespace and a large undo retention setting (to prevent ORA-01555 snapshot too old).
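
Before launching such a monster delete, it does not hurt to check what you currently have. A quick sketch (standard parameters and views; the figures are obviously site-specific):


show parameter undo_retention
show parameter undo_tablespace

select tablespace_name, round(sum(bytes)/1024/1024/1024) gb
from dba_data_files
where tablespace_name =
  (select value from v$parameter where name = 'undo_tablespace')
group by tablespace_name;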

If your table is around 100GB, you do it over the weekend, and you have 500GB of undo and 250GB of free space in your archive destination, you will be fine. Well. Maybe.

There are workarounds where you create a new table and then rename it, etc., but this is beyond the scope of this post and you will need to validate your index / foreign key / online strategy with the application guys.

Another way to decrease the runtime and undo requirement per statement (but increase the overall elapsed time) is to divide the delete into chunks, for instance deleting 100’000’000 rows each night for 10 days.


SQL> DELETE FROM T WHERE C<DATE '2011-01-01' AND ROWNUM<=100000000;
100'000'000 rows deleted 
Elapsed: 04:11:15.31
SQL> commit;

Or if you want to delete in much smaller chunks to accommodate your tiny undo tablespace, you could try


BEGIN
  LOOP
    DELETE FROM T WHERE C<DATE '2011-01-01' AND ROWNUM <= 1000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
   END LOOP;
END;
/

This will run longer than a single transaction, but it is quite useful if your undo tablespace is too small. Also, if you abort it (CTRL-C or kill session), you will not lose all progress (but you do lose atomicity) and your KILL SESSION will not last forever. With a single transaction, your session may be marked as killed for hours or days…

When v$session_longops is not long enough

With large table scans, the estimated total work is sometimes far from reality


SQL> select message from v$session_longops where target='SCOTT.EMP';

MESSAGE
------------------------------------------------------------
Table Scan:  SCOTT.EMP: 7377612 out of 629683 Blocks done

The total work is the Oracle estimation :


SQL> select blocks from dba_tables where table_name='EMP';

    BLOCKS
----------
    629683

This may differ quite a lot from the segment size, for instance if the table is not very often analyzed :


SQL> select blocks, sysdate, last_analyzed from dba_tables where table_name='EMP';

    BLOCKS SYSDATE             LAST_ANALYZED
---------- ------------------- -------------------
    629683 2013-04-21_09:21:47 2007-10-13_21:40:58

SQL> select blocks from dba_segments where segment_name='EMP';

    BLOCKS
----------
   7749888

I have customized my very long ops query to deal with very long waits.


col target for a20
set lin 150 pages 40000 termout off
alter session set nls_currency='%';
col PCT_DONE for 990.00L jus r
col time_remaining for 999999

select
  lo.target,lo.sofar,seg.blocks,
  lo.ELAPSED_SECONDS*seg.blocks/lo.sofar-lo.ELAPSED_SECONDS TIME_REMAINING,
  100*lo.sofar/seg.blocks PCT_DONE
from
  dba_segments seg,
  v$session_longops lo
where
  lo.units='Blocks'
  and lo.totalwork>0 and (lo.time_remaining>0 or lo.time_remaining is null)
  and regexp_substr(lo.target,'[^.]+') = seg.owner
  and regexp_substr(lo.target,'[^.]+$') = seg.segment_name
/

BUILD DEFERRED takes ages

When building a materialized view, you may want to postpone the loading to a later phase; for instance, you install a new release and the refresh happens every night.

BUILD DEFERRED allows you to create an empty materialized view and refresh it later. But the creation may still take ages.


SQL> create materialized view mv1 build deferred as select count(*) c from emp,emp,emp,emp,emp,emp,emp;

Materialized view created.

Elapsed: 00:00:17.28
SQL> select * from mv1;

no rows selected

No data collected, but it still took a long time (17 seconds here, much worse in real life)

A workaround is to use ON PREBUILT TABLE on an empty table; just add a few WHERE 1=0 to your subqueries


SQL> create table mv1 as select * from (select count(*) c from emp,emp,emp,emp,emp,emp,emp where 1=0) where 1=0;

Table created.

Elapsed: 00:00:00.04
SQL> create materialized view mv1 on prebuilt table as select count(*) c from emp,emp,emp,emp,emp,emp,emp;

Materialized view created.

Elapsed: 00:00:00.15
SQL> select * from mv1;

no rows selected

Elapsed: 00:00:00.00

Much faster !

How big was my database last month

If you back up your database at least weekly, you can check the datafile sizes from your backup history.

Without catalog :


select to_char(d,'"W"IW-IYYY') compl, sum(b)/1024/1024/1024*8192 GB 
from 
(
  select max(DATAFILE_BLOCKS) b, trunc(completion_time,'IW') d 
  from v$backup_datafile 
  group by FILE# ,trunc(completion_time,'IW')
) 
group by d 
order by d;


COMPL        GB
-------- ------
W30-2012   3.73
W31-2012   4.84
W32-2012   5.00
W33-2012   5.05
W34-2012   5.35
W35-2012   5.80
W36-2012   6.12
W37-2012   6.39
W38-2012    .93
W39-2012   7.02
W40-2012   7.56
W41-2012   7.72
W42-2012   7.88
W43-2012   8.08
W44-2012   8.83
W45-2012   9.03
W46-2012   9.45
W47-2012   9.61
W48-2012  10.11
W49-2012  10.29
W50-2012  10.38

The history depth mostly depends on CONTROL_FILE_RECORD_KEEP_TIME. If you do not use an RMAN catalog, set it to a high value like 93 (3 months) or 366 (1 year)
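
Setting it is a one-liner (a sketch; the value is in days and the parameter can be changed online):


alter system set control_file_record_keep_time=93;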

With the rman catalog, use the RC_ view


select DB_NAME,to_char(d,'"W"IW-IYYY') compl, sum(b)/1024/1024/1024*8192 GB 
from 
(
  select DB_NAME,max(DATAFILE_BLOCKS) b, trunc(completion_time,'IW') d 
  from rc_backup_datafile 
  group by DB_NAME,FILE# ,trunc(completion_time,'IW')
) 
group by DB_NAME,d 
order by DB_NAME,d;


DB_NAME  COMPL        GB
-------- -------- ------
DB01     W30-2012   3.73
DB01     W31-2012   4.83
DB01     W32-2012   5.00
DB01     W33-2012   5.05
DB01     W34-2012   5.34
DB01     W35-2012   5.79
DB01     W36-2012   6.11
DB01     W37-2012   6.39
DB01     W38-2012    .93
DB01     W39-2012   7.01
DB01     W40-2012   7.56
DB01     W41-2012   7.71
DB01     W42-2012   7.87
DB01     W43-2012   8.08
DB01     W44-2012   8.82
DB01     W45-2012   9.02
DB01     W46-2012   9.44
DB01     W47-2012   9.60
DB01     W48-2012  10.10
DB01     W49-2012  10.28
DB01     W50-2012  10.37

If you need to check which table grows the most, check How big was my table yesterday. But remember: RMAN backup information is free to use, while AWR and the WRI$ tables require the Diagnostic Pack and Enterprise Edition

accent insensitive regexp

Ever wanted to search a column for an accent-insensitive expression like “bébé”?

Maybe you tried to list all possible accents. But POSIX has an equivalence class for you: the list éèëê can be referred to as [=e=]


SELECT * FROM TAB WHERE REGEXP_LIKE(C,'b[[=e=]]b[[=e=]]')

Not only is [=e=] easier to read and to type, it is also more portable if you copy your scripts from DOS to UNIX and use different character sets
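
A minimal self-contained demo, assuming a throwaway table TAB with a VARCHAR2 column C:


create table tab(c varchar2(30));
insert into tab values ('bébé');
insert into tab values ('bebe');
insert into tab values ('baby');
-- matches the first two rows, but not 'baby'
select * from tab where regexp_like(c,'b[[=e=]]b[[=e=]]');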

How big was my table yesterday

Oracle saves a lot of information on your behalf. Whenever you generate an AWR report, you access some historic tables (included in the Diagnostic Pack).

Those tables could also be accessed manually.


SELECT savtime,owner,object_name,rowcnt,blkcnt
FROM sys.WRI$_OPTSTAT_TAB_HISTORY w,
  dba_objects o
WHERE 
   o.owner='SCOTT'
   AND o.object_name='EMP'
   and o.object_id = W.OBJ#
ORDER BY o.owner, o.object_name, w.savtime;


SAVTIME           OWNER    OBJECT_NAME     ROWCNT     BLKCNT
----------------- -------- ----------- ---------- ----------
2012-11-06 06:49  SCOTT    EMP           13215425     120077
2012-11-13 07:28  SCOTT    EMP           12678535     120077
2012-11-20 03:15  SCOTT    EMP           12860640     120077
2012-11-27 03:19  SCOTT    EMP           13045850     120077
2012-12-04 05:41  SCOTT    EMP           13326460     120077

To increase the retention, use DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings
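
For instance, to keep 90 days of snapshots (a sketch; the retention is given in minutes):


exec dbms_workload_repository.modify_snapshot_settings(retention => 90*24*60)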

Create database link and logon trigger

Today I could not understand why I was getting ORA-1031 with create database link.

After analysis, I found a logon trigger that changed the current schema. As Tom always says, triggers are evil…

DEMO:


SQL> create user u1 identified by xxx;

User created.

SQL> grant create session, create database link to u2 identified by xxx;

Grant succeeded.

SQL> create trigger evil after logon on database  begin 
  2  execute immediate 'alter session set current_schema=u1';end;
  3  /

Trigger created.

SQL>
SQL> conn u2/xxx
Connected.
SQL> create database link l;
create database link l
                     *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> show user
USER is "U2"
SQL> select sys_context('USERENV','CURRENT_SCHEMA') from dual;

SYS_CONTEXT('USERENV','CURRENT_SCHEMA')
-----------------------------------------
U1

SQL> alter session set current_schema=u2;

Session altered.

SQL> create database link l;

Database link created.

Drop database link in another schema

Today I wrote this script :

drop_database_link.sql


accept owner char prompt "Enter database link owner : "
accept db_link char prompt "Enter link name : "

begin
  dbms_scheduler.create_job(
    job_name=>'&owner..drop_database_link',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN execute immediate ''drop database link &db_link'';END;'
  );
  dbms_scheduler.run_job('&owner..drop_database_link',false);
  dbms_lock.sleep(2);
  dbms_scheduler.drop_job('&owner..drop_database_link');
end;
/

I am using the scheduler to run a job as another user. The database link owner does not need any privilege, neither CREATE SESSION nor CREATE JOB. The account could even be locked and expired.
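
If the link does not disappear, check the job outcome before dropping the job. A quick sketch using the standard scheduler views:


select job_name, status, error#, additional_info
from dba_scheduler_job_run_details
where owner = upper('&owner') and job_name = 'DROP_DATABASE_LINK';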

Difference between Paris and Zurich

When I was a child, I used to go skiing in the Alps and occasionally cross the border. I remember that late in the season (Easter skiing) the restaurants were already empty in France when we had lunch, because our neighbouring countries had introduced summer time before us.

That is a long way back, namely the summers of 1976 to 1980. In 1975 and before, neither country had daylight saving time. In 1981 and later, both did.

OK, I just had an issue with a wrong date in a customer application. Somehow our database is set with POSIX-style offsets, let’s say +01:00 and +02:00, derived from the CET/CEST UNIX time zone (TZ=CET-1CEST,M3.5.0,M10.5.0)

Due to some obscure multiple conversions, dates for summer 1976-80 are wrong, so we sent birthday cards too early to our customers…


SQL> select to_char(cast(
  timestamp '1979-08-01 00:00:00 CET'
    as timestamp with local time zone),
      'YYYY-MM-DD') from dual;

TO_CHAR(CAST(TIMESTAMP'1979-08-0100:00:
---------------------------------------
1979-08-01

but if I set my session time zone to Europe/Zurich, which is currently equivalent to CET, I get discrepancies


SQL> alter session set time_zone='Europe/Zurich' ;

Session altered.

SQL> select to_char(cast(
  timestamp '1979-08-01 00:00:00 CET' 
    as timestamp with local time zone),
      'YYYY-MM-DD') from dual;

TO_CHAR(CAST(TIMESTAMP'1979-08-0100:00
--------------------------------------
1979-07-31

A good reason to specify the time zone name correctly in your create database statement !
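
To see what the database and your session currently use (a quick check, not in the original post):


select dbtimezone, sessiontimezone from dual;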

Dynamic number of columns

I used to believe you could not have a dynamic number of columns. Today Tom referenced Anton on AskTom.

It led me back in time to 2006, on the OTN forums:
https://forums.oracle.com/forums/message.jspa?messageID=1297717#1297717

It is difficult to write an article on this without copy-pasting most of Anton’s code, so just read it at the link above.

Then you will see the magic :


SQL> select * from table( NColPipe.show( 'test', 3 ) );

test1      test2           test3
---------- ---------- ----------
row: 1     row: 1              1
row: 2     row: 2              2
row: 3     row: 3              3

SQL> desc NColPipe
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 L_PARM                                             VARCHAR2(10)
 ROWS_REQUESTED                                     NUMBER
 RET_TYPE                                           ANYTYPE
 ROWS_RETURNED                                      NUMBER

METHOD
------
 STATIC FUNCTION ODCITABLEDESCRIBE RETURNS NUMBER
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 RTYPE                          ANYTYPE                 OUT
 P_PARM                         VARCHAR2                IN
 P_ROWS_REQ                     NUMBER                  IN     DEFAULT

METHOD
------
 STATIC FUNCTION ODCITABLEPREPARE RETURNS NUMBER
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SCTX                           NCOLPIPE                OUT
 TI                             ODCITABFUNCINFO         IN
 P_PARM                         VARCHAR2                IN
 P_ROWS_REQ                     NUMBER                  IN     DEFAULT

METHOD
------
 STATIC FUNCTION ODCITABLESTART RETURNS NUMBER
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SCTX                           NCOLPIPE                IN/OUT
 P_PARM                         VARCHAR2                IN
 P_ROWS_REQ                     NUMBER                  IN     DEFAULT

METHOD
------
 MEMBER FUNCTION ODCITABLEFETCH RETURNS NUMBER
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NROWS                          NUMBER                  IN
 OUTSET                         ANYDATASET              OUT

METHOD
------
 MEMBER FUNCTION ODCITABLECLOSE RETURNS NUMBER

The function returns ANYDATASET and implements ODCITABLEDESCRIBE. This is all clean, documented code.

Read more:
Data Cartridge Developer’s Guide – Using Pipelined and Parallel Table Functions – Describe Method
Sometimes it is not possible to define the structure of the return type from the table function statically … You can implement a ODCITableDescribe() routine

grant select on sys tables

I prefer to use a powerful named user with the DBA role rather than SYS. It conforms better to the security policies in place regarding accountability of administrator operations.

Very occasionally, my user gets ORA-01031 insufficient privileges even though I have the DBA role.

Among others, this happens with “PURGE DBA_RECYCLEBIN” and DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE.

Update, 2012-07-24:
For PURGE DBA_RECYCLEBIN, you should probably purge the tables individually:
exec for f in(select*from dba_recyclebin where owner!='SYS' and type='TABLE')loop execute immediate 'purge table "'||f.owner||'"."'||f.object_name||'"';end loop;

For DBMS_STREAMS_AUTH, what I am actually missing is the GRANT OPTION on some documented DBA views and DBMS packages. So I could safely grant the grant option to my user for all SYS objects that have been granted to DBA, PUBLIC and any other role.

Kind of


create table scott.t as 
  select distinct owner,table_name,privilege 
  from dba_tab_privs t 
  where privilege not in ('USE','DEQUEUE') and owner='SYS' ;
begin
  for f in(select * from scott.t) loop 
    execute immediate 
      'grant '||f.privilege||' on "'||f.owner||'"."'
        ||f.table_name||'" to scott with grant option'; 
  end loop;
end;
/

It is better not to select from DBA_TAB_PRIVS directly, as issuing EXECUTE IMMEDIATE while the cursor is open may have unexpected side effects.

This may help you to increase your security by reducing your connections as sys.

American = fast

I ran into the incredible behaviour of the same query running 10x faster in SQL*Plus depending on which of two PCs it ran from. After analysis, I realised the super-fast PC was American… at least in NLS_LANG

Slow client: PC setup in German, NLS_LANG is set to GERMAN_SWITZERLAND.WE8MSWIN1252


C:\>set NLS_LANG=GERMAN_SWITZERLAND.WE8MSWIN1252

C:\>sqlplus scott/tiger@db01

SQL*Plus: Release 11.2.0.3.0 Production on Fr Jul 6 10:30:25 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Verbunden mit:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set autot on exp
SQL> select job,count(*) FROM emp group BY job order by job;

JOB         COUNT(*)
--------- ----------
ANALYST            2
CLERK              4
MANAGER            3
PRESIDENT          1
SALESMAN           4

Ausführungsplan
----------------------------------------------------------
Plan hash value: 2389703825

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     5 |    40 |     5  (40)| 00:00:01 |
|   1 |  SORT ORDER BY      |      |     5 |    40 |     5  (40)| 00:00:01 |
|   2 |   HASH GROUP BY     |      |     5 |    40 |     5  (40)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP  |    14 |   112 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Now let’s try with the PC setup with american_america


C:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

C:\>sqlplus scott/tiger@db01

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 6 10:31:57 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set autot on exp
SQL> select job,count(*) FROM emp group BY job order by job;

JOB         COUNT(*)
--------- ----------
ANALYST            2
CLERK              4
MANAGER            3
PRESIDENT          1
SALESMAN           4

Execution Plan
----------------------------------------------------------
Plan hash value: 637087546

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     5 |    40 |     4  (25)| 00:00:01 |
|   1 |  SORT GROUP BY     |      |     5 |    40 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |   112 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

The “cost” and the “plan” changed. Why? Maybe an optimizer bug, but it is related to NLS_SORT, which is GERMAN for NLS_LANG=GERMAN_SWITZERLAND and BINARY for NLS_LANG=AMERICAN_AMERICA.

Binary!
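
If you cannot touch the client NLS_LANG, an alternative (a sketch, not from the original post) is to force the linguistic settings at session level, for instance from a logon trigger:


alter session set nls_sort=binary;
alter session set nls_comp=binary;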

On deferred segment creation and truncate

One year ago, I wrote about a side effect of deferred segment creation and drop tablespace :
on deferred segment creation

Today I discovered one more side effect:

In the good old days (I read once that you are old as soon as you start talking about the good old days) and according to the doc :
You cannot truncate the parent table of an enabled foreign key constraint. You must disable the constraint before truncating the table


SQL> alter session set deferred_segment_creation=false;

Session altered.

SQL> create table t1(x number primary key);

Table created.

SQL> create table t2(x number references t1);

Table created.

SQL> truncate table t1;
truncate table t1
               *
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

This however does not apply if you have deferred segment creation and empty tables


SQL> alter session set deferred_segment_creation=true;

Session altered.

SQL> create table t1(x number primary key);

Table created.

SQL> create table t2(x number references t1);

Table created.

SQL> truncate table t1;

Table truncated.

Table truncated. Ok, what’s the point in truncating an empty table …

My first .NET gui in Powershell

I managed to interface Oracle and a GUI via powershell.

First, load the Oracle and the .NET assemblies


[void] [Reflection.Assembly]::LoadFile("C:\oracle\product\11.2.0\client_1\ODP.NET\bin\2.x\Oracle.DataAccess.dll")
[void] [Reflection.Assembly]::LoadWithPartialName("Drawing")
[void] [Reflection.Assembly]::LoadWithPartialName("Windows.Forms")

Now, let’s retrieve EMP into a PowerShell array. I hope one of my readers will advise me on a better way :)


$connection=New-Object Oracle.DataAccess.Client.OracleConnection("Data Source=DB01; User Id=scott; password=tiger")
$connection.open()
$command=new-object Oracle.DataAccess.Client.OracleCommand("select ename from emp",$connection)
$reader = $command.ExecuteReader()
$a = @()
while ($reader.Read()) {
  $a = $a + $reader.GetString(0)
}
$connection.close()

Last, let’s create a simple window (a Form) with a list (a ListBox) where you can select an item with a double-click.

Initialize the list with the array from EMP



$form = New-Object Windows.Forms.Form
$form.Text = "Select employee !"
$form.Size = New-Object Drawing.Size(640,480)
$form.StartPosition = "CenterScreen"
$listbox = New-Object Windows.Forms.ListBox
$listbox.Location = New-Object Drawing.Point(10,10)
$listbox.Size = New-Object Drawing.Size(620,460)
$listbox.Items.AddRange($a)
$listbox.Add_DoubleClick({$form.Close();})
$form.Controls.Add($listbox)
$form.Topmost = $True
$form.Add_Shown({$form.Activate()})
[void] $form.ShowDialog()

Show the result (or use it in your powershell scripts)


PS> $listbox.SelectedItems[0]
SCOTT

Pretty cool! No compiler needed, directly run from the powershell prompt

How to get rid of corrupted blocks without a backup

First, you identify the corrupted blocks in the alert log or with DBVERIFY


$ dbv BLOCKSIZE=8192 file=sysaux01.dbf
DBV-00201: Block, DBA 12629823, marked corrupt for invalid redo application
...
DBVERIFY - Verification complete

Total Pages Examined         : 131072
Total Pages Processed (Data) : 69691
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 28669
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 15755
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 16957
Total Pages Marked Corrupt   : 9
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 3220271881 (11.3220271881)

From the DBA (data block address) number, identify the block and file


def dba=12585405
col block_id new_v block_id
col file_id new_v file_id
select dbms_utility.data_block_address_block(&dba) block_id, 
dbms_utility.data_block_address_file(&dba) file_id from dual;

  BLOCK_ID    FILE_ID
---------- ----------
      2493          3

From the block_id/file_id, identify the segment


col owner new_v table_owner 
col segment_name new_v segment_name
select owner,segment_name,segment_type from dba_extents where file_id=&file_id and &BLOCK_ID between block_id and block_id + blocks - 1;
OWNER
------------------------------
SEGMENT_NAME
-----------------------------------
SEGMENT_TYPE
------------------
SYS
SYS_IL0000008786C00008$$
LOBINDEX

If it is a lob, identify the column and data_type


select tablespace_name,owner, table_name, column_name, data_type from dba_lobs join
dba_tab_columns using (owner, table_name, column_name) where segment_name =
'&segment_name' and owner='&table_owner';
TABLESPACE_NAME                OWNER
------------------------------ ------------------------------
TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
DATA_TYPE
--------------------------------------------------------------------------------
SYSAUX                         SYS
WRI$_DBU_FEATURE_USAGE         FEATURE_INFO
CLOB

If you are lucky, you will find a useless segment that you can just drop. Or maybe you will be able to move all segments to another tablespace and drop the tablespace containing the corrupt blocks.

If you are pretty unlucky, like me today, you will find SYS segments in SYSTEM or SYSAUX.

Either you export all user data and import it into a new database (but this means downtime), or you start moving the segments to another tablespace, or dropping and recreating them.
Check Tablespace maintenance tasks

Once dropped or moved or emptied, you may still see the corrupted blocks. Do not forget to purge the recyclebin, either with PURGE DBA_RECYCLEBIN or PURGE TABLESPACE tbs1 USER usr1

Even then, the corrupted blocks may no longer belong to any segment but still appear in DBVERIFY. One workaround is to fill the tablespace (check it does not autoextend) with a dummy table

create table t(x number, y varchar2(4000) default lpad('x',4000,'x')) tablespace tbs1;

exec while true loop insert into t(x) select rownum r from dual connect by level<10000;commit;end loop

exec while true loop insert into t(x) select rownum r from dual connect by level<100;commit;end loop

exec while true loop insert into t(x,y) select rownum r,'x' from dual;commit;end loop

exec while true loop insert into t(x,y) values (null,null);commit;end loop

drop table t;

Run dbv again and again until you are completely rid of the errors. If you drop and recreate SYS objects, or even if you simply move them out of the SYSTEM tablespace, dictionary corruption and ORA-00600 errors are possible. But well, you had corruption anyway…

_optimizer_random_plan parameter

I was trying to find a workaround for a bug in 11.2.0.2

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
         2          3

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER|      |     1 |     4 |     4   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

As DUMMY is not 'Y', B should not be 2, yet the query returns it.

Ok, I tried :


alter session set "_optimizer_random_plan"=1;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

Execution Plan
----------------------------------------------------------
Plan hash value: 837538736

-------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1146 |  5730 |    27G|
|   1 |  MERGE JOIN OUTER    |      |   603K|  2946K|    27G|
|*  2 |   TABLE ACCESS FULL  | DUAL |   392K|   767K|   136K|
|   3 |   VIEW               |      |     2 |     6 | 69180 |
|*  4 |    FILTER            |      |       |       |       |
|*  5 |     TABLE ACCESS FULL| DUAL |   123K|   240K| 69180 |
-------------------------------------------------------------

Cool, I got correct results! The fact that the cost jumped from 4 to 27 billion is just a minor annoyance, I suppose :twisted:

I also tried


alter session set "_optimizer_random_plan"=0; -- default

alter session  set "_complex_view_merging"=false;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     5 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER  |      |     1 |     5 |     2   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|   3 |   VIEW               |      |     1 |     3 |            |          |
|*  4 |    FILTER            |      |       |       |            |          |
|*  5 |     TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------

The cost is now 2 instead of 4, and the results are correct

The first thing I did was open an SR; now I am impatiently waiting for guidance from Oracle Support…

Duplicate table over database link

The old-style approach would be CREATE TABLE T AS SELECT * FROM T@L, where T is the table you want to duplicate and L the database link pointing to the remote database

If you want to keep more info about the structure of T, like the constraints and indexes, you may use Data Pump. Here it is…


SQL> var n number
SQL> exec :n := dbms_datapump.open('IMPORT','TABLE','L')

PL/SQL procedure successfully completed.

SQL> print n
         N
----------
        28

SQL> exec dbms_datapump.metadata_filter(:n,'SCHEMA_LIST','''SCOTT''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.metadata_filter(:n,'NAME_LIST','''T''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.start_job(:n)

PL/SQL procedure successfully completed.
SQL> desc t
 Name              Null?    Type
 ----------------- -------- ------------
 X                 NOT NULL NUMBER

This is utterly simple. If you mess up the link name, global name, syntax, and so on, you may end up with orphan jobs in DBA_DATAPUMP_JOBS. There is a Metalink note on dropping the underlying tables (Note 336914.1), but first log off, get a coffee, and they may vanish after a few minutes.
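
To see whether you have such leftovers, a quick look at the Data Pump view (a sketch):


select owner_name, job_name, operation, state, attached_sessions
from dba_datapump_jobs;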

On using ROWID

I have been challenged to assert the safety of rowid in a sql statement.

Against all my beliefs, it is not safe to assume ROWID is consistent over one sql statement. If the ROWID changes, and you use ROWID in your query, you may get inconsistent results.

Obviously I would not write such a post without a test case ;-)


create table t(x, y) partition by hash(x) partitions 32 enable row movement
as select rownum, rownum from dual connect by level<30;

select sum(y) from t;
SUM(Y)
------
   435

Sum[1..29]=435

Let’s write the query with a slow function using rowid


create or replace function f(r rowid) return number is 
  n number; 
begin 
  select y into n from t where rowid=r; 
  sys.dbms_lock.sleep(1); 
  return n; 
end;
/
select sum(f(rowid)) from t;
SUM(F(ROWID))
-------------
          435

Elapsed: 00:00:29.12

The query took 29.1 seconds for 29 rows and returned the same result.

Let’s update the partition key during the select


select sum(f(rowid)) from t;
... hurry up to a new session ...


update t set x=x+1;
commit;

Back in your session, you will get something inconsistent


...
SUM(F(ROWID))
-------------
            5

Elapsed: 00:00:02.04

Not only was the query faster than the expected 29 seconds, but the result is also inconsistent.

Probably not a bug, rowid is just not constant within the same transaction.

List of table and column privileges, including those via roles

I could not find this quickly enough on Google, so I wrote it myself.

The list of table privileges, with a connect by subquery.

 COL roles FOR a60
COL table_name FOR a30
col privilege for a9
set lin 200 trims on pages 0 emb on hea on newp none

  SELECT *
    FROM (    SELECT CONNECT_BY_ROOT grantee grantee,
                     privilege,
                     REPLACE (
                        REGEXP_REPLACE (SYS_CONNECT_BY_PATH (granteE, '/'),
                                        '^/[^/]*'),
                        '/',
                        ' --> ')
                        ROLES,
                     owner,
                     table_name,
                     column_name
                FROM (SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             NULL column_name
                        FROM DBA_TAB_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             column_name
                        FROM DBA_COL_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT GRANTED_ROLE,
                             GRANTEE,
                             NULL,
                             NULL,
                             NULL
                        FROM DBA_ROLE_PRIVS
                       WHERE GRANTEE NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')) T
          START WITH grantee IN (SELECT username FROM dba_users)
          CONNECT BY PRIOR PRIVILEGE = GRANTEE)
   WHERE table_name IS NOT NULL AND grantee != OWNER
ORDER BY grantee,
         owner,
         table_name,
         column_name,
         privilege;

sample output


GRANTEE PRIVILEGE ROLES           OWNER  TABLE_NAME COLUMN_NAME
------- --------- --------------- ------ ---------- -----------
U       UPDATE     --> R          SCOTT  DEPT       DNAME      
U       SELECT                    SCOTT  EMP                   
U2      UPDATE     --> R2 --> R   SCOTT  DEPT       DNAME