Categories
dba sql

How to get rid of corrupted blocks without a backup

First, identify the corrupted blocks in the alert log or with DBVERIFY:


$ dbv BLOCKSIZE=8192 file=sysaux01.dbf
DBV-00201: Block, DBA 12629823, marked corrupt for invalid redo application
...
DBVERIFY - Verification complete

Total Pages Examined : 131072
Total Pages Processed (Data) : 69691
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 28669
Total Pages Failing (Index): 0
Total Pages Processed (Other): 15755
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 16957
Total Pages Marked Corrupt : 9
Total Pages Influx : 0
Total Pages Encrypted : 0
Highest block SCN : 3220271881 (11.3220271881)

From the DBA (data block address) number, identify the block and file:

def dba=12585405
col block_id new_v block_id
col file_id new_v file_id
select dbms_utility.data_block_address_block(&dba) block_id,
dbms_utility.data_block_address_file(&dba) file_id from dual;

BLOCK_ID FILE_ID
---------- ----------
2493 3

From the block_id/file_id, identify the segment

col owner new_v table_owner
col segment_name new_v segment_name
select owner,segment_name,segment_type from dba_extents where file_id=&file_id and &BLOCK_ID between block_id and block_id + blocks - 1;
OWNER
------------------------------
SEGMENT_NAME
-----------------------------------
SEGMENT_TYPE
------------------
SYS
SYS_IL0000008786C00008$$
LOBINDEX

If it is a LOB, identify the column and data type:

select tablespace_name,owner, table_name, column_name, data_type from dba_lobs join
dba_tab_columns using (owner, table_name, column_name) where segment_name =
'&segment_name' and owner='&table_owner';
TABLESPACE_NAME OWNER
------------------------------ ------------------------------
TABLE_NAME COLUMN_NAME
------------------------------ ------------------------------
DATA_TYPE
--------------------------------------------------------------------------------
SYSAUX SYS
WRI$_DBU_FEATURE_USAGE FEATURE_INFO
CLOB

If you are lucky, you will find a useless segment that you can simply drop. Or maybe you will be able to move all segments to another tablespace and drop the tablespace containing the corrupt blocks.

If you are pretty unlucky like me today, you will find SYS segments in SYSTEM or SYSAUX.

Either you export all user data and import it into a new database (but this means downtime), or you start moving the segments to another tablespace, or dropping and recreating them.
Check Tablespace maintenance tasks

Once the segments are dropped, moved or emptied, you may still see the corrupted blocks. Do not forget to purge the recycle bin, either with PURGE DBA_RECYCLEBIN or PURGE TABLESPACE tbs1 USER usr1.
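
Before purging, you can check what the recycle bin still holds in the affected tablespace; a quick sketch, assuming the tablespace is named TBS1:

select owner, object_name, original_name, ts_name
from dba_recyclebin
where ts_name = 'TBS1';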

Even then, the corruption may no longer belong to any segment but still appear in DBVERIFY. One workaround is to fill the tablespace (check that it does not autoextend) with a dummy table:

create table t(x number, y varchar2(4000) default lpad('x',4000,'x')) tablespace tbs1;

exec while true loop insert into t(x) select rownum r from dual connect by level<10000;commit;end loop
exec while true loop insert into t(x) select rownum r from dual connect by level<100;commit;end loop
exec while true loop insert into t(x,y) select rownum r,'x' from dual;commit;end loop
exec while true loop insert into t(x,y) values (null,null);commit;end loop

drop table t;
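
These exec one-liners simply run until the tablespace is full and the insert fails with ORA-01653. Written as a plain PL/SQL block, a sketch of the first loop (the others only vary the row size, to fill smaller and smaller free chunks):

begin
  loop
    insert into t(x) select rownum r from dual connect by level<10000;
    commit;
  end loop;
exception
  when others then
    null; -- expected: ORA-01653 unable to extend table
end;
/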

Run dbv again and again until you get completely rid of the errors. If you drop and recreate SYS objects, or even if you simply move them out of the SYSTEM tablespace, dictionary corruption and ORA-600 errors are possible. But well, you had corruption anyway …

Categories
fun sql

How much is + ‘x’ ?

Sounds like an April fool’s joke, but I wonder what + ‘x’ is supposed to do 🙂

SQL> select + 'x' from dual;

+
-
x

It does not convert to number, as 0 + ‘1’ would do.
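
Compare with the implicit conversion when a real numeric operand is involved:

SQL> select 0 + '1' from dual;

     0+'1'
----------
         1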

Categories
11gR2 sql support

_optimizer_random_plan parameter

I was trying to find a workaround for a bug in 11.2.0.2

SELECT * FROM
(SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'),
(SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%')
WHERE C = B(+);

B C
---------- ----------
2 3

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 4 (0)| 00:00:01 |
| 1 | NESTED LOOPS OUTER| | 1 | 4 | 4 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

As DUMMY is not 'Y', B could not possibly be 2. The result is wrong.

Ok, I tried :

alter session set "_optimizer_random_plan"=1;

SELECT * FROM
(SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'),
(SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%')
WHERE C = B(+);

B C
---------- ----------
3

Execution Plan
----------------------------------------------------------
Plan hash value: 837538736

-------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1146 | 5730 | 27G|
| 1 | MERGE JOIN OUTER | | 603K| 2946K| 27G|
|* 2 | TABLE ACCESS FULL | DUAL | 392K| 767K| 136K|
| 3 | VIEW | | 2 | 6 | 69180 |
|* 4 | FILTER | | | | |
|* 5 | TABLE ACCESS FULL| DUAL | 123K| 240K| 69180 |
-------------------------------------------------------------

Cool, I got correct results! The fact that the cost jumped from 4 to 27 billion is just a minor annoyance I suppose 😈

I also tried

alter session set "_optimizer_random_plan"=0; -- default

alter session set "_complex_view_merging"=false;

SELECT * FROM
(SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'),
(SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%')
WHERE C = B(+);

B C
---------- ----------
3

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 5 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS OUTER | | 1 | 5 | 2 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL | DUAL | 1 | 2 | 2 (0)| 00:00:01 |
| 3 | VIEW | | 1 | 3 | | |
|* 4 | FILTER | | | | | |
|* 5 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------

The cost is now 2 instead of 4, and the results are correct.

The first thing I did was to open an SR; now I am impatiently waiting for Oracle Support guidance…

Categories
dba plsql sql

Duplicate table over database link

The old-style approach would be CREATE TABLE T AS SELECT * FROM T@L, where T is the table you want to duplicate and L the database link pointing to the remote database

If you want to keep more info about the structure of t, the constraints, the indexes, you may use Datapump. Here it is…


SQL> var n number
SQL> exec :n := dbms_datapump.open('IMPORT','TABLE','L')

PL/SQL procedure successfully completed.

SQL> print n
N
----------
28

SQL> exec dbms_datapump.metadata_filter(:n,'SCHEMA_LIST','''SCOTT''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.metadata_filter(:n,'NAME_LIST','''T''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.start_job(:n)

PL/SQL procedure successfully completed.
SQL> desc t
Name Null? Type
----------------- -------- ------------
X NOT NULL NUMBER

This is utterly simple. If you mess up the link name, global name, syntax, and so on, you may end up with orphan jobs in DBA_SCHEDULER_JOBS. There is a Metalink note on dropping the underlying tables (Note 336914.1), but first log off, get a coffee, and they may vanish by themselves after a few minutes.
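
For reference, the same import can be scripted as one anonymous block; a minimal sketch, using DBMS_DATAPUMP.WAIT_FOR_JOB to block until the job finishes:

declare
  h number;
  state varchar2(30);
begin
  h := dbms_datapump.open('IMPORT', 'TABLE', 'L'); -- network import over db link L
  dbms_datapump.metadata_filter(h, 'SCHEMA_LIST', '''SCOTT''');
  dbms_datapump.metadata_filter(h, 'NAME_LIST', '''T''');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, state); -- returns once the job is COMPLETED or STOPPED
  dbms_output.put_line('job state: ' || state);
end;
/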

Categories
sql

On using ROWID

I have been challenged to assert the safety of rowid in a sql statement.

Against all my beliefs, it is not safe to assume ROWID is consistent over one sql statement. If the ROWID changes, and you use ROWID in your query, you may get inconsistent results.

Obviously I would not write such a post without a test case 😉


create table t(x, y) partition by hash(x) partitions 32 enable row movement
as select rownum, rownum from dual connect by level<30;

select sum(y) from t;

SUM(Y)
------
   435

Sum[1..29]=435

Let's write the query with a slow function using rowid

create or replace function f(r rowid) return number is
n number;
begin
select y into n from t where rowid=r;
sys.dbms_lock.sleep(1);
return n;
end;
/
select sum(f(rowid)) from t;
SUM(F(ROWID))
-------------
435

Elapsed: 00:00:29.12

The query took 29.1 seconds for 29 rows and returned the same result.

Let's update the partition key during the select

select sum(f(rowid)) from t;
... hurry up to a new session ...


update t set x=x+1;
commit;

Back in your session, you will get something inconsistent:

...
SUM(F(ROWID))
-------------
5

Elapsed: 00:00:02.04

Not only was the query faster than the expected 29 seconds, but the result is also inconsistent.

Probably not a bug: the ROWID is just not constant within the same transaction.
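
You can even watch the ROWID change directly; a quick sketch on the same table (after the partition-key update, the row most likely hashes to another partition):

select rowid from t where y = 1;
update t set x = x + 1 where y = 1;
commit;
select rowid from t where y = 1; -- usually a different rowid for the same row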

Categories
dba security sql Uncategorized

List of table and column privileges, including those via roles

I could not find this quickly enough on Google so I wrote it myself.

The list of table privileges, with a connect by subquery.

COL roles FOR a60
COL table_name FOR a30
col privilege for a9
set lin 200 trims on pages 0 emb on hea on newp none

SELECT *
FROM ( SELECT CONNECT_BY_ROOT grantee grantee,
privilege,
REPLACE (
REGEXP_REPLACE (SYS_CONNECT_BY_PATH (granteE, '/'),
'^/[^/]*'),
'/',
' --> ')
ROLES,
owner,
table_name,
column_name
FROM (SELECT PRIVILEGE,
GRANTEE,
OWNER,
TABLE_NAME,
NULL column_name
FROM DBA_TAB_PRIVS
WHERE owner NOT IN
('SYS',
'SYSTEM',
'WMSYS',
'SYSMAN',
'MDSYS',
'ORDSYS',
'XDB',
'WKSYS',
'EXFSYS',
'OLAPSYS',
'DBSNMP',
'DMSYS',
'CTXSYS',
'WK_TEST',
'ORDPLUGINS',
'OUTLN',
'ORACLE_OCM',
'APPQOSSYS')
UNION
SELECT PRIVILEGE,
GRANTEE,
OWNER,
TABLE_NAME,
column_name
FROM DBA_COL_PRIVS
WHERE owner NOT IN
('SYS',
'SYSTEM',
'WMSYS',
'SYSMAN',
'MDSYS',
'ORDSYS',
'XDB',
'WKSYS',
'EXFSYS',
'OLAPSYS',
'DBSNMP',
'DMSYS',
'CTXSYS',
'WK_TEST',
'ORDPLUGINS',
'OUTLN',
'ORACLE_OCM',
'APPQOSSYS')
UNION
SELECT GRANTED_ROLE,
GRANTEE,
NULL,
NULL,
NULL
FROM DBA_ROLE_PRIVS
WHERE GRANTEE NOT IN
('SYS',
'SYSTEM',
'WMSYS',
'SYSMAN',
'MDSYS',
'ORDSYS',
'XDB',
'WKSYS',
'EXFSYS',
'OLAPSYS',
'DBSNMP',
'DMSYS',
'CTXSYS',
'WK_TEST',
'ORDPLUGINS',
'OUTLN',
'ORACLE_OCM',
'APPQOSSYS')) T
START WITH grantee IN (SELECT username FROM dba_users)
CONNECT BY PRIOR PRIVILEGE = GRANTEE)
WHERE table_name IS NOT NULL AND grantee != OWNER
ORDER BY grantee,
owner,
table_name,
column_name,
privilege;

sample output

GRANTEE PRIVILEGE ROLES OWNER TABLE_NAME COLUMN_NAME
------- --------- --------------- ------ ---------- -----------
U UPDATE --> R SCOTT DEPT DNAME
U SELECT SCOTT EMP
U2 UPDATE --> R2 --> R SCOTT DEPT DNAME

Categories
11gR2 dba sql

Reduce the number of commits

“Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms”
Op. Cit. Oracle Database Performance Tuning Guide 11g Release 2 (11.2)

Ok, let’s do this 🙂

  • Finding: Waits on event “log file sync” while performing COMMIT and ROLLBACK operations were consuming significant database time.
  • Action: Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.

If your application is committing too often (maybe Enterprise Manager told you so), you may want to commit less often. Or maybe just do some magic to impress your customer.

As seen in Metalink note 857576.1: if you can afford data loss, if you cannot change your application, and if you are the kind of dba who cares more about making a badly written application perform than about data integrity, just have a quick look …

Ok, ins.sql is 30’000 inserts and commits:

insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;

Let’s check the time on my old-fashioned PC…

SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:40:55.881948 +02:00
SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:41:19.115447 +02:00

23.3 seconds

Now take the risk of losing some commits (but yes, it is documented, no hidden parameter) to boost your performance


SQL> alter session set commit_wait=nowait commit_logging=immediate;
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:37.284027 +02:00

SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:54.084547 +02:00

Hey hey hey, 16.8 seconds only 🙂

PS: it used to be called commit_write in 10g
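
In 10g, a sketch of the equivalent session setting:

alter session set commit_write='IMMEDIATE,NOWAIT';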

Categories
11gR2 dba sql

nothing in user_segments

I wrote on deferred segment creation recently.

Today I was looking for specific storage attributes that I used to find in user_segments. They are no longer here. Where are they then?

test case :
create table t(x clob) lob(x) store as securefile x (retention max storage(maxsize 8192000000));

Where do I find the RETENTION MAX MAX_SIZE of my securefile? Once the segment is created, it is easy to find in user_segments:

SQL> insert into t values('x');
1 row created.

SQL> select max_size from user_segments where segment_name='X';
MAX_SIZE
----------
1000000

MAX_SIZE is expressed in blocks: 1000000 blocks of 8192 bytes is my specified 8192000000 bytes.

Let’s go back:
SQL> truncate table t drop all storage;

Table truncated.

SQL> select max_size from user_segments where segment_name='X';

no rows selected

It is not there.

You must dig into the SYS tables to find out. Specifically, there is a new table for the deferred segments:

SQL> select maxsiz_stg from sys.deferred_stg$ where obj# in (select obj# from sys.obj$ where name='X');
MAXSIZ_STG
----------
1000000

Of course you should not base your business logic on internal tables that may change in a future release. A more appropriate workaround would be to disable deferred segment creation 🙂
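
For completeness, a sketch of how to switch the feature off; this only affects objects created afterwards:

alter system set deferred_segment_creation=false scope=both;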

Categories
dba java sql

How to unload blob from the database?

There is more than one post on how to unload a BLOB from the database, mostly in PL/SQL with utl_file.put_raw (see note 330146.1) and in Java with FileOutputStream (see note 247546.1)

Unfortunately both are terribly slow, due to the 32k limitation of put_raw in utl_file and due to a low “optimum buffer size” retrieved by myBlob.getBufferSize(). I increased the Java stream buffer to 20M, or to the size of the lob, whichever is smaller.

The code is mostly copy-pasted from metalink. But I changed the size from getbuffersize() to length().

Using Java for a large blob (read: large binary large object) is about 3x faster than PL/SQL in this test.


SQL> CREATE USER USER1 IDENTIFIED BY SeCrEt;

User created.

Elapsed: 00:00:00.04
SQL>
SQL> GRANT CREATE SESSION , CREATE PROCEDURE TO USER1;

Grant succeeded.

Elapsed: 00:00:00.01
SQL>
SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler"
2 AS import java.lang.*;
3 import java.sql.*;
4 import oracle.sql.*;
5 import java.io.*;
6 public class BlobHandler {
7 public static void ExportBlob(String myFile, BLOB myBlob) throws Exception {
8 File binaryFile = new File(myFile);
9 FileOutputStream outStream = new FileOutputStream(binaryFile);
10 InputStream inStream = myBlob.getBinaryStream();
11 int size;
12 if (myBlob.length()> 20000000) { // tune this to whatever appropriate value
13 size = 20000000;
14 } else {
15 size = (int)myBlob.length();
16 }
17 byte[] buffer = new byte[size];
18 int length = -1;
19 while ((length = inStream.read(buffer)) != -1)
20 {
21 outStream.write(buffer, 0, length);
22 outStream.flush();
23 }
24 inStream.close();
25 outStream.close();
26 }
27 }
28 /

Java created.

Elapsed: 00:00:00.16
SQL>
SQL> ALTER JAVA SOURCE "BlobHandler" COMPILE;

Java altered.

Elapsed: 00:00:00.37
SQL>
SQL> sho error
No errors.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobJava (p_file IN VARCHAR2,
2 p_blob IN BLOB)
3 AS
4 LANGUAGE JAVA
5 NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
6 /

Procedure created.

Elapsed: 00:00:00.10
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> EXEC Dbms_Java.Grant_Permission( 'USER1', 'SYS:java.io.FilePermission','/tmp/javatest', 'write' )

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.52
SQL>

A comment here: if you do care about your database server (and you probably won’t GRANT DBA TO PUBLIC), never grant full unlimited access to your filesystem, as shockingly suggested by Metalink with Dbms_Java.Grant_Permission( ‘SCOTT’, ‘java.io.FilePermission’, ‘*’, ‘read ,write, execute, delete’);

let’s move on


SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobPlsql (p_dir IN VARCHAR2,
2 p_file IN VARCHAR2,
3 p_blob IN BLOB)
4 AS
5 blob_length INTEGER;
6 out_file UTL_FILE.FILE_TYPE;
7 v_buffer RAW (32767);
8 chunk_size BINARY_INTEGER := 32767;
9 blob_position INTEGER := 1;
10 BEGIN
11 blob_length := DBMS_LOB.GETLENGTH (p_blob);
12 out_file :=
13 UTL_FILE.FOPEN (p_dir,
14 p_file,
15 'wb',
16 chunk_size);
17 WHILE blob_position <= blob_length
18 LOOP
19 IF blob_position + chunk_size - 1 > blob_length
20 THEN
21 chunk_size := blob_length - blob_position + 1;
22 END IF;
23
24 DBMS_LOB.READ (p_blob,
25 chunk_size,
26 blob_position,
27 v_buffer);
28 UTL_FILE.PUT_RAW (out_file, v_buffer, TRUE);
29 blob_position := blob_position + chunk_size;
30 END LOOP;
31 UTL_FILE.FCLOSE (out_file);
32 END;
33 /

Procedure created.

Elapsed: 00:00:00.08
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

Directory created.

Elapsed: 00:00:00.03
SQL>
SQL> grant write on directory tmp to user1;

Grant succeeded.

Elapsed: 00:00:00.02
SQL>

Here again, I give write access to one directory, I do not grant dba to public…

SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> VAR c BLOB
SQL>
SQL> exec :c := UTL_RAW.cast_to_raw ('X'); FOR i IN 1 .. 15 LOOP DBMS_LOB.append (:C, :C); END LOOP

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
32

Elapsed: 00:00:00.06

I have created a 32K BLOB variable, and I am doubling its size each time to see how it scales

SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
64

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
128

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
256

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
512

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
1

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
2

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
4

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
8

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL>

For small files, we do not see much of a difference between java and plsql. This is because the 32k buffer of plsql is acceptable for small files.


SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
16

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.20
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
32

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.46
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
64

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.45
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.90
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.60
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
128

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.96
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.77
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.08
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
256

Elapsed: 00:00:00.30
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.24
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.65
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:34.70
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
512

Elapsed: 00:00:00.70
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.92
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.81
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:01:32.96
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
1024

Elapsed: 00:00:01.65
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.90
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.13

SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:03:33.14

SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
2048

Elapsed: 00:00:00.00

SQL> EXEC ExportBlobJava('/u99/backup/sw/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:26.10
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:01:16.90

The larger the file, the more beneficial it is to have a large write buffer (it could well be higher than the 20M used in this example). If Java is not installed in the database, you can also retrieve the blob from the database client-side and output the stream to a local file.
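
Collecting the timings from the transcript above, the roughly 3x gap only shows up once the file gets large:

  MB     Java (s)   PL/SQL (s)
  64         1.45         0.90
 128         1.96         4.77
 256         3.24         9.65
 512         6.92        20.81
1024        14.90        44.13
2048        26.10        76.90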

Categories
11g dba sql

The reasons why I always avoid shutdown abort

It is a common practice to always shutdown abort the database before restarting and shutting it down immediate. This is because SHUTDOWN IMMEDIATE sometimes takes ages, for instance due to a huge transaction that needs to be rolled back.

I do not like it. At all.

First, there is a chance that you won’t be able to start the database anymore. I have not heard of or met anyone who has had this issue since Oracle7, but I still believe it.

Second, shutdown abort is very useful if something goes seriously wrong. But if something goes wrong, you may want to find out what it is.

Third, you may hit more bugs than if you close the database normally. And you may get less help from Support if the issue is due to an abusive shutdown abort. YMMV

Ok, a small demo to preach to the converted.
Disclaimer: this demo is not innocent, do not try this on your database.

SQL> create flashback archive fa tablespace ts retention 1 day;

Flashback archive created.

SQL> create table t(x number primary key);

Table created.

SQL> alter table t flashback archive fa;

Table altered.

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from DBA_SEGMENTS where tablespace_name='TS';

no rows selected

I have created a flashback archive table, and the committed transaction is not written down to the flashback tablespace yet.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 417546240 bytes
Fixed Size 2227072 bytes
Variable Size 234882176 bytes
Database Buffers 171966464 bytes
Redo Buffers 8470528 bytes
Database mounted.
Database opened.


SQL> sho parameter undo_tablespace
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace string UNDO1
SQL> create undo tablespace undo2 datafile '/u02/oradata/@/undo2_01.dbf' size 10m reuse;

Tablespace created.

SQL> alter system set undo_tablespace=undo2;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 417546240 bytes
Fixed Size 2227072 bytes
Variable Size 234882176 bytes
Database Buffers 171966464 bytes
Redo Buffers 8470528 bytes
Database mounted.
Database opened.

I have switched undo tablespace. So far so good. But remember the flashback archive did not write to the flashback tablespace before shutdown abort.


SQL> drop tablespace undo1 including contents and datafiles;
drop tablespace undo1 including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU1_3544069484$' found, terminate dropping tablespace

You see… I cannot drop my old undo tablespace.

Q.E.D.

Categories
11gR2 dba sql

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time at release time and lots of space if you have a -legacy- application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t1(x number) segment creation deferred;

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment. This new behavior will bring its share of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without the INCLUDING CONTENTS clause, the drop tablespace will succeed and the table will remain
SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
*
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
*
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query

SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

Then you can recreate it and, if wished, drop it with contents:

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
*
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the content of the tablespaces, you probably should check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA_ views too.

I met this feature while using transportable tablespaces: a transportable tablespace export will not carry objects that belong to the tablespace but have no segment.

There is a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is to not use the feature by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only new objects.

I am always very cautious about those major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible. Still, dba scripts that used to work in a previous release may break here… This is obviously the price to pay for new features, right?

Categories
sql

On star transformation

How to configure a schema to support a star transformation query?

This is one of the topics I am currently preparing for the OCM DBA 11g upgrade exam for 9i OCMs.

Let’s create a star schema with a fact table T1 and two dimension tables T2 and T3 :

> create table t2(y number constraint t2_pk primary key, yy varchar2(50))
table t2 created.
> create table t3(z number constraint t3_pk primary key, zz varchar2(50))
table t3 created.
> create table t1(x number constraint t1_pk primary key,
y number constraint t1_t2_fk references t2,
z number constraint t1_t3_fk references t3)
table t1 created.

According to the Oracle Database Data Warehousing Guide, chapter 20 Schema Modeling Techniques
A prerequisite of the star transformation is that there be a single-column bitmap index on every join column of the fact table. These join columns include all foreign key columns.


> create bitmap index bi1 on t1(y)
bitmap index bi1 created.
> create bitmap index bi2 on t1(z)
bitmap index bi2 created.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Obviously star_transformation_enabled must be set to true. It was known to be buggy in the first releases; I hope most bugs have been fixed in the meantime.

> alter session set star_transformation_enabled=true
session set altered

Let’s try the transformation with a query similar to the one found in the doc :

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
where yy='one' and zz in ('two','three')
COUNT(*)
----------------------
0

Plan hash value: 3024982001

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 106 | 1 (100)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 106 | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 106 | 1 (100)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 66 | 1 (100)| 00:00:01 |
| 5 | VIEW | index$_join$_001 | 1 | 26 | 1 (100)| 00:00:01 |
|* 6 | HASH JOIN | | | | | |
| 7 | BITMAP CONVERSION TO ROWIDS| | 1 | 26 | 0 (0)| 00:00:01 |
| 8 | BITMAP INDEX FULL SCAN | BI1 | | | | |
| 9 | BITMAP CONVERSION TO ROWIDS| | 1 | 26 | 0 (0)| 00:00:01 |
| 10 | BITMAP INDEX FULL SCAN | BI2 | | | | |
|* 11 | TABLE ACCESS BY INDEX ROWID | T2 | 1 | 40 | 0 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | T2_PK | 1 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | T3_PK | 1 | | 0 (0)| 00:00:01 |
|* 14 | TABLE ACCESS BY INDEX ROWID | T3 | 1 | 40 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(ROWID=ROWID)
11 - filter("T2"."YY"='one')
12 - access("T1"."Y"="T2"."Y")
13 - access("T1"."Z"="T3"."Z")
14 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')

> set autotrace off
Autotrace Disabled

Something is missing. The star transformation is not used. What could it be???

Data !

We need to load trucks of data in the T1 table and a few rows in the T2 and T3 tables

> insert into t2 select rownum,
to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp')
from dual connect by level<11
10 rows inserted.
> insert into t3 select rownum,
to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp')
from dual connect by level<11
10 rows inserted.
> insert into t1 select rownum, mod(rownum,7)+1, mod(rownum,8)+1
from dual connect by level<100001
100,000 rows inserted.
> commit
committed.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Let’s try the same query again

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
where yy='one' and zz in ('two','three')
COUNT(*)
----------------------
3570

Plan hash value: 3170767457

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 12 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 6 | | |
| 2 | BITMAP CONVERSION COUNT | | 2857 | 17142 | 6 (0)| 00:00:01 |
| 3 | BITMAP AND | | | | | |
| 4 | BITMAP MERGE | | | | | |
| 5 | BITMAP KEY ITERATION | | | | | |
|* 6 | TABLE ACCESS FULL | T2 | 1 | 8 | 3 (0)| 00:00:01 |
|* 7 | BITMAP INDEX RANGE SCAN| BI1 | | | | |
| 8 | BITMAP MERGE | | | | | |
| 9 | BITMAP KEY ITERATION | | | | | |
|* 10 | TABLE ACCESS FULL | T3 | 2 | 16 | 3 (0)| 00:00:01 |
|* 11 | BITMAP INDEX RANGE SCAN| BI2 | | | | |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - filter("T2"."YY"='one')
7 - access("T1"."Y"="T2"."Y")
10 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')
11 - access("T1"."Z"="T3"."Z")

Note
-----
- star transformation used for this statement

> set autotrace off
Autotrace Disabled

That’s it ! I have 100K rows in my fact table and 10 rows in my dimension tables, that does the trick ! Star transformation needs a lot of rows to work 🙂
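
If the statistics are in place but the transformation still does not kick in, you can nudge the optimizer with the STAR_TRANSFORMATION hint; a sketch:

select /*+ star_transformation */ count(*)
from t1 natural join t2 natural join t3
where yy='one' and zz in ('two','three');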

Categories
dba sql sqlplus

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)
SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)
SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
X
----------
1

Before / after a successful DDL statement :
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
X
----------
1

Before / after an unsuccessful DDL statement, sometimes :
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
*
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
X
----------
1

But not always :
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
*
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, no DDL was executed, but in the case before that, the DDL was executed and failed.

If you want to commit, use COMMIT 🙂

Categories
sql sqlplus

CSV part 4, fast !!

I got some comments that my other csv solutions were slow to export gigabytes of data.

One more try.

Thanks to the feedback, I provide a new version.

This could generate very large files in just a few minutes (instead of hours).

I use bulk collect and utl_file to boost performance

CREATE TYPE collist IS TABLE OF VARCHAR2 (4000)
/

CREATE OR REPLACE PROCEDURE bulk_csv (directory_name VARCHAR2,
file_name VARCHAR2,
query VARCHAR2)
AUTHID CURRENT_USER
IS
-- $Id$
fh UTL_FILE.file_type;
stmt VARCHAR2 (32767) := NULL;
header VARCHAR2 (32767) := NULL;
curid NUMBER;
desctab DBMS_SQL.DESC_TAB;
colcnt NUMBER;
namevar VARCHAR2 (32767);

TYPE cola IS TABLE OF collist
INDEX BY BINARY_INTEGER;

res cola;
rcur SYS_REFCURSOR;
current_line VARCHAR2 (32767);
next_line VARCHAR2 (32767);

BEGIN
curid := DBMS_SQL.open_cursor;
DBMS_SQL.parse (curid, query, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS (curid, colcnt, desctab);

FOR i IN 1 .. colcnt
LOOP
DBMS_SQL.DEFINE_COLUMN (curid,
i,
namevar,
32767);
END LOOP;

IF DBMS_SQL.execute (curid) = 0
THEN
FOR i IN 1 .. colcnt
LOOP
IF (i > 1)
THEN
header := header || ';';
stmt := stmt || ',';
END IF;

header := header || desctab (i).col_name;
stmt :=
stmt
|| CASE
WHEN desctab (i).col_type IN
(DBMS_SQL.Varchar2_Type,
DBMS_SQL.Char_Type)
THEN
'"'||desctab (i).col_name || '"'
WHEN desctab (i).col_type IN
(DBMS_SQL.Number_Type,
DBMS_SQL.Date_Type,
DBMS_SQL.Binary_Float_Type,
DBMS_SQL.Binary_Double_Type,
DBMS_SQL.Timestamp_Type,
DBMS_SQL.Timestamp_With_TZ_Type,
DBMS_SQL.Interval_Year_to_Month_Type,
DBMS_SQL.Interval_Day_To_Second_Type,
DBMS_SQL.Timestamp_With_Local_TZ_type)
THEN
'to_char("' || desctab (i).col_name || '")'
WHEN desctab (i).col_type = DBMS_SQL.Raw_Type
THEN
'rawtohex("' || desctab (i).col_name || '")'
WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
THEN
'''unsupport datatype : ROWID'''
WHEN desctab (i).col_type = DBMS_SQL.Long_Type
THEN
'''unsupport datatype : LONG'''
WHEN desctab (i).col_type = DBMS_SQL.Long_Raw_Type
THEN
'''unsupport datatype : LONG RAW'''
WHEN desctab (i).col_type = DBMS_SQL.User_Defined_Type
THEN
'''unsupport datatype : User Defined Type'''
WHEN desctab (i).col_type = DBMS_SQL.MLSLabel_Type
THEN
'''unsupport datatype : MLSLABEL'''
WHEN desctab (i).col_type = DBMS_SQL.Ref_Type
THEN
'''unsupport datatype : REF'''
WHEN desctab (i).col_type = DBMS_SQL.Clob_Type
THEN
'''unsupport datatype : CLOB'''
WHEN desctab (i).col_type = DBMS_SQL.Blob_Type
THEN
'''unsupport datatype : BLOB'''
WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
THEN
'''unsupport datatype : ROWID'''
WHEN desctab (i).col_type = DBMS_SQL.Bfile_Type
THEN
'''unsupport datatype : BFILE'''
WHEN desctab (i).col_type = DBMS_SQL.Urowid_Type
THEN
'''unsupport datatype : UROWID'''
ELSE
'''unsupport datatype : '||desctab (i).col_type||''''
END;
END LOOP;

stmt := 'select collist(' || stmt || ') from (' || query || ')';

fh :=
UTL_FILE.fopen (directory_name,
file_name,
'W',
32767);

begin
OPEN rcur FOR stmt;
exception
when others then
dbms_output.put_line(stmt);
raise;
end;
LOOP
FETCH rcur
BULK COLLECT INTO res
LIMIT 10000;

current_line := header;
next_line := NULL;

FOR f IN 1 .. res.COUNT
LOOP
FOR g IN 1 .. res (f).COUNT
LOOP
IF (g > 1)
THEN
next_line := next_line || ';';
END IF;

IF ( NVL(LENGTH (current_line),0)
+ NVL(LENGTH (next_line),0)
+ NVL(LENGTH (res (f) (g)),0)
+ 5 > 32767)
THEN
UTL_FILE.put_line (fh, current_line);
current_line := NULL;
END IF;

IF (NVL(LENGTH (next_line),0) + NVL(LENGTH (res (f) (g)),0) + 5 > 32767)
THEN
UTL_FILE.put_line (fh, next_line);
next_line := NULL;
END IF;

next_line := next_line || res (f) (g);
END LOOP;

current_line :=
CASE
WHEN current_line IS NOT NULL
THEN
current_line || CHR (10)
END
|| next_line;
next_line := NULL;
END LOOP;

UTL_FILE.put_line (fh, current_line);
EXIT WHEN rcur%NOTFOUND;
END LOOP;

CLOSE rcur;

UTL_FILE.fclose (fh);
END IF;

DBMS_SQL.CLOSE_CURSOR (curid);
END;
/

CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

EXEC bulk_csv('TMP','emp.csv','SELECT * FROM EMP ORDER BY ENAME')


EMPNO;ENAME;JOB;MGR;HIREDATE;SAL;COMM;DEPTNO
7876;ADAMS;CLERK;7788;1987-05-23 00:00:00;1100;;20
7499;ALLEN;SALESMAN;7698;1981-02-20 00:00:00;1600;30;30
7698;BLAKE;MANAGER;7839;1981-05-01 00:00:00;2850;;30
7782;CLARK;MANAGER;7839;1981-06-09 00:00:00;2450;;10
7902;FORD;ANALYST;7566;1981-12-03 00:00:00;3000;;20
7900;JAMES;CLERK;7698;1981-12-03 00:00:00;950;;30
7566;JONES;MANAGER;7839;1981-04-02 00:00:00;2975;;20
7839;KING;PRESIDENT;;1981-11-17 00:00:00;5000;;10
7654;MARTIN;SALESMAN;7698;1981-09-28 00:00:00;1250;140;30
7934;MILLER;CLERK;7782;1982-01-23 00:00:00;1300;;10
7788;SCOTT;ANALYST;7566;1987-04-19 00:00:00;3000;;20
7369;SMITH;CLERK;7902;1980-12-17 00:00:00;800;;20
7844;TURNER;SALESMAN;7698;1981-09-08 00:00:00;1500;0;30
7521;WARD;SALESMAN;7698;1981-02-22 00:00:00;1250;50;30

Categories
security sql

on materialized view constraints

Oracle is pretty strong at enforcing constraints.

Table for this blog post:
create table t(x number primary key, y number);

For instance if you alter table t add check (y<1000); then Y will not be bigger than 1000, right?

SQL> insert into t values (1,2000);
insert into t values (1,2000)
Error at line 1
ORA-02290: check constraint (SCOTT.SYS_C0029609) violated

I believe this code to be unbreakable. If you have only SELECT and INSERT privilege on the table, you cannot bypass the constraint.

Let's imagine some complex constraint: CHECK (sum(y) < 1000)

SQL> alter table t add check (sum(y) < 1000);
alter table t add check (sum(y) < 1000)
Error at line 1
ORA-00934: group function is not allowed here

Ok, clear enough I suppose, we cannot handle this complex constraint with a CHECK condition.

We could have some before trigger that fires an exception
CREATE TRIGGER tr
BEFORE INSERT OR UPDATE
ON T
FOR EACH ROW
WHEN (NEW.Y > 0)
DECLARE
s NUMBER;
BEGIN
SELECT SUM (y) INTO s FROM t;

IF (s + :new.y >= 1000)
THEN
raise_application_error (-20001, 'SUM(Y) would exceed 1000');
END IF;
END;
/

Now the trigger will compute the sum and raise an exception whenever the limit would be exceeded.
SQL> insert into t values (2, 600);

1 row created.

SQL> insert into t values (3, 600);
insert into t values (3, 600)
*
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "SCOTT.TR", line 8
ORA-04088: error during execution of trigger 'SCOTT.TR'

SQL> drop trigger tr;

Trigger dropped.

SQL> truncate table t;

Table truncated.

But I am not good with triggers; triggers are as bad as their developers and have dark sides like mutating-table errors and the like.

As Tom Kyte mentioned in the comment, the code above is not effective if more than one user updates the table at the same time.

Another popular approach is to create a fast-refreshable-on-commit mview with a constraint.

Let's see how this works.


create materialized view log on t with rowid, primary key (y) including new values;

create materialized view mv
refresh fast
on commit
as select sum(y) sum from t;

alter table mv add check (sum < 1000);

The constraint is on the mview, so once you commit (and only at commit time), Oracle will try to refresh the mview.

SQL> insert into t values (4, 600);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into t values (5, 600);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.SYS_C0029631) violated

SQL> select * from t;

X Y
---------- ----------
4 600

So far so good. The mechanism rolls back the transaction in case of an ORA-12008. A bit similar to a DEFERRABLE constraint.

But how safe is this after all? Oracle does not enforce anything on the table, it just fails on refresh...

Anything that does not fulfill the materialized view fast refresh requisites will also break the data integrity.

SQL> delete from t;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20;

19 rows created.

SQL> commit;

Commit complete.

SQL> select sum(y) from t;

SUM(Y)
----------
19000

SQL> select staleness from user_mviews;

STALENESS
-------------------
UNUSABLE

Your data integrity is gone. By "breaking" the mview, with only SELECT, INSERT and ALTER SESSION privilege, you can now insert any data.

This is documented as
FAST Clause

For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the eligibility of a materialized view for fast refresh.

Other operations like TRUNCATE may also prevent you from inserting fresh data

SQL> alter materialized view mv compile;

Materialized view altered.

SQL> exec dbms_mview.refresh('MV','COMPLETE');

PL/SQL procedure successfully completed.

SQL> select * from mv;

SUM
----------

SQL> insert into t values(1,1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mv;

SUM
----------
1

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1,1);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-32321: REFRESH FAST of "SCOTT"."MV" unsupported after detail table
TRUNCATE

Categories
dba security sql

track ddl change (part 2)

I wrote about tracking ddl changes with a trigger there : track ddl changes

Another option is to use auditing.

A new and cool alternative is to use ENABLE_DDL_LOGGING (11gR2). This will track all DDLs in the alert log.

ALTER SYSTEM SET enable_ddl_logging=TRUE

Then later you issue
create table t(x number)

and you see in the alertLSC01.log
Tue Apr 05 14:43:32 2011
create table t(x number)

Wait, that’s not really verbose !?

Remember the alert log is just there for backward compatibility, it is time you start looking in the xml file 🙂



<msg time='2011-04-05T14:43:32...' module='TOAD ...'>
 <txt>create table t(x number)
 </txt>
</msg>


There is not really much more there than the module, which indeed reveals someone is using TOAD to access my database!
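
If you wonder where that xml file (log.xml) lives, a quick sketch:

select value from v$diag_info where name = 'Diag Alert';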

Categories
11gR2 sql

How does random=random evaluate?

I had fun answering a question about random on the technical forums.

What is in your opinion the boolean value of DBMS_RANDOM.VALUE=DBMS_RANDOM.VALUE?

Or, how many rows would
select * from dual where dbms_random.value=dbms_random.value;
return?

It is wrong to assume the function will be evaluated twice.

The short answer would be : do not rely on random plsql functions in SQL…

here is a test case in 11.2.0.2 and 10.2.0.3


SQL> select version from v$instance;
VERSION
-----------------
10.2.0.3.0

SQL> select * from dual where dbms_random.value=dbms_random.value;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("DBMS_RANDOM"."VALUE"()="DBMS_RANDOM"."VALUE"())

In 10g, the function is executed twice per row, and the chance to have two different values is more than 99.9999…%.


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.2.0

SQL> select * from dual where dbms_random.value=dbms_random.value
D
-
X

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("DBMS_RANDOM"."VALUE"() IS NOT NULL)

Here the optimizer evaluates the function only once per row, and since the result is never null, the predicate always evaluates to true.

Is this a bug or a feature?

In my opinion it is a confusing tuning enhancement that may break some programs.

In this thread, I mentioned that PRIOR dbms_random.value IS NOT NULL is an unsafe construct.
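
For reference, a sketch of where this construct typically appears, as a row multiplier on a regular table t(x):

select x from t
connect by level <= 3
and prior x = x
and prior dbms_random.value is not null; -- defeats cycle detection, as each prior value differs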

Categories
dba sql

Which index can you rebuild?

I recently wrote on table reorg and rebuild index

Rule number one : you cannot rebuild a partitioned index in whole. You need to rebuild each individual (sub-)partition

Rule number two : to rebuild an iot, move the table instead of trying to rebuild the underlying index

Rule number three : a LOB index is not really an index. Do not rebuild this

Rule number four : a NOSEGMENT index is not a supported type of index, but it may appear in your user_objects list. It is used internally by OEM and other tuning tools to do a what-if calculation on the explain plan. It is not listed in USER_INDEXES. Do not rebuild this

Test case :

SQL> CREATE CLUSTER c(x NUMBER);

Cluster created.

SQL> CREATE INDEX a01
2 ON CLUSTER c;

Index created.

SQL> CREATE TABLE t
2 (
3 p NUMBER PRIMARY KEY,
4 a01 NUMBER,
5 a02 NUMBER,
6 a03 NUMBER,
7 a04 NUMBER,
8 a05 NUMBER,
9 a06 NUMBER,
10 a07 VARCHAR2 (40),
11 a08 CLOB
12 );

Table created.

SQL> CREATE INDEX a02
2 ON t (a01);

Index created.

SQL> CREATE INDEX a03
2 ON t (a02)
3 REVERSE;

Index created.

SQL> CREATE INDEX a04
2 ON t (SQRT (a01));

Index created.

SQL> CREATE INDEX a05
2 ON t (COS (a01))
3 REVERSE;

Index created.

SQL> CREATE BITMAP INDEX a06
2 ON t (a03);

Index created.

SQL> CREATE BITMAP INDEX a07
2 ON t (SIGN (a04));

Index created.

SQL> CREATE INDEX a08
2 ON t (a07)
3 INDEXTYPE IS ctxsys.context;

Index created.

SQL> CREATE INDEX a09
2 ON t (a05)
3 GLOBAL PARTITION BY HASH (a05)
4 (PARTITION p);

Index created.

SQL> CREATE TABLE i (x NUMBER CONSTRAINT A10 PRIMARY KEY)
2 ORGANIZATION INDEX;

Table created.

SQL> CREATE INDEX A11 on T(A06) NOSEGMENT;

Index created.

SQL> SELECT index_name,
2 index_type,
3 partitioned,
4 generated
5 FROM user_indexes
6 ORDER BY 1;

INDEX_NAME INDEX_TYPE PAR G
------------------------------ --------------------------- --- -
A01 CLUSTER NO N
A02 NORMAL NO N
A03 NORMAL/REV NO N
A04 FUNCTION-BASED NORMAL NO N
A05 FUNCTION-BASED NORMAL/REV NO N
A06 BITMAP NO N
A07 FUNCTION-BASED BITMAP NO N
A08 DOMAIN NO N
A09 NORMAL YES N
A10 IOT - TOP NO N
DR$A08$X NORMAL NO N
SYS_C009276 NORMAL NO Y
SYS_IL0000028076C00009$$ LOB NO Y
SYS_IL0000028087C00006$$ LOB NO Y
SYS_IL0000028092C00002$$ LOB NO Y
SYS_IOT_TOP_28090 IOT - TOP NO Y
SYS_IOT_TOP_28095 IOT - TOP NO Y

SQL> ALTER INDEX a01 REBUILD;

Index altered.

SQL> ALTER INDEX a02 REBUILD;

Index altered.

SQL> ALTER INDEX a03 REBUILD;

Index altered.

SQL> ALTER INDEX a04 REBUILD;

Index altered.

SQL> ALTER INDEX a05 REBUILD;

Index altered.

SQL> ALTER INDEX a06 REBUILD;

Index altered.

SQL> ALTER INDEX a07 REBUILD;

Index altered.

SQL> ALTER INDEX a08 REBUILD;

Index altered.

SQL> ALTER INDEX a09 REBUILD;
ALTER INDEX a09 REBUILD
*
ERROR at line 1:
ORA-14086: a partitioned index may not be rebuilt as a whole

SQL> ALTER INDEX a09 REBUILD PARTITION P;

Index altered.

SQL> ALTER INDEX a10 REBUILD;
ALTER INDEX a10 REBUILD
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt

SQL> ALTER TABLE i MOVE;

Table altered.

SQL> ALTER INDEX A11 REBUILD;
ALTER INDEX A11 REBUILD
*
ERROR at line 1:
ORA-08114: can not alter a fake index

SQL> ALTER INDEX SYS_IL0000028076C00009$$ REBUILD;
ALTER INDEX SYS_IL0000028076C00009$$ REBUILD
*
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype LOB

A function-based domain index should be rebuildable too; I have not tested this for you.
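
Following rule number one, a sketch to generate the per-partition rebuild statements for unusable partitioned indexes:

select 'alter index ' || index_name || ' rebuild partition ' || partition_name || ';'
from user_ind_partitions
where status = 'UNUSABLE';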

Categories
11gR2 dba sql

Return NULL if the column does not exist

It is a very common challenge for a dba to create scripts that work on every version.

How do you return NULL if a column does not exist?

Imagine I have a view that returns the table_name, column_name and retention_type of my LOBS.


SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,
column_name,retention_type from user_lobs;

View created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1 C YES
T2 C NO

Let’s imagine I try to run this on an antique version of Oracle

SQL> select version from v$instance;
VERSION
-----------------
11.2.0.1.0

SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,column_name,retention_type from user_lobs;

Warning: View created with compilation errors.

SQL> select * from v where table_name in ('T1','T2');
select * from v where table_name in ('T1','T2')
*
ERROR at line 1:
ORA-04063: view "SCOTT.V" has errors

Obviously the RETENTION_TYPE column does not exist in that version.

Let's default it to NULL!

SQL> create or replace function retention_type return varchar2 is
begin return null; end;
/

Function created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1 C
T2 C

When the view is next used, it recompiles automatically, and the identifier RETENTION_TYPE now resolves to the function instead of the missing column. Very simple workaround, isn't it?

Categories
dba sql

On table reorg and index rebuild

Before you start reading: do not rebuild all your indexes and reorganize all your tables every Sunday morning. One day you may find one of your tables missing or one of your indexes invalid.

Ok, let’s take a case where table reorg and index rebuild is good.

One of your tables was never cleaned up: it grew to 100,000,000 rows over the last 5 years, and you need only the last 2 weeks.

One of your tasks will be to create a job that cleans up your table on a weekly basis, deleting rows older than 14 days. The details are beyond the scope of this post, but a quick sketch follows.
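
Just to fix ideas, such a job might look like this minimal sketch (DBMS_SCHEDULER; the job name and the CREATED column are hypothetical):

begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_OLD_ROWS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin delete from t1 where created < sysdate - 14; commit; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN',
    enabled         => TRUE);
end;
/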

Now you have deleted more than 99% of your rows and you want to reorganize your table and rebuild the index, to gain disk space and performance.

Here is the demo


SQL> DROP TABLE t1;

Table dropped.

SQL>
SQL> CREATE TABLE t1
2 (
3 r NUMBER,
4 txt VARCHAR2 (4000),
5 y NUMBER
6 );

Table created.

SQL>
SQL> CREATE INDEX i1
2 ON t1 (r);

Index created.

SQL>
SQL> INSERT INTO t1
2 WITH t
3 AS ( SELECT *
4 FROM DUAL
5 CONNECT BY LEVEL < 1001)
6 SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
7 FROM t, t;

1000000 rows created.

SQL>
SQL> DROP TABLE t2;

Table dropped.

SQL>
SQL> CREATE TABLE t2
2 (
3 r NUMBER,
4 txt VARCHAR2 (4000),
5 y NUMBER
6 )
7 PARTITION BY HASH (r)
8 (PARTITION T2_P1);

Table created.

SQL>
SQL> CREATE INDEX i2
2 ON t2 (r)
3 LOCAL (PARTITION i2_p1);

Index created.

SQL>
SQL> INSERT INTO t2
2 WITH t
3 AS ( SELECT *
4 FROM DUAL
5 CONNECT BY LEVEL < 1001)
6 SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
7 FROM t, t;

1000000 rows created.

SQL>
SQL> DROP TABLE t3;

Table dropped.

SQL>
SQL> CREATE TABLE t3
2 (
3 r NUMBER,
4 txt VARCHAR2 (4000),
5 y NUMBER
6 )
7 PARTITION BY RANGE (r)
8 SUBPARTITION BY HASH (r)
9 SUBPARTITION TEMPLATE (SUBPARTITION s1 )
10 (PARTITION T3_P1 VALUES LESS THAN (maxvalue));

Table created.

SQL>
SQL> CREATE INDEX i3
2 ON t3 (r)
3 LOCAL (PARTITION i3_p1
4 (SUBPARTITION i3_p1_s1));

Index created.

SQL>
SQL> INSERT INTO t3
2 WITH t
3 AS ( SELECT *
4 FROM DUAL
5 CONNECT BY LEVEL < 1001)
6 SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
7 FROM t, t;

1000000 rows created.

SQL>
SQL> COMMIT;

Commit complete.

SQL>
SQL> SELECT segment_name,
2 segment_type,
3 partition_name,
4 sum(bytes),
5 count(*)
6 FROM user_extents
7 WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
8 group by
9 segment_name,
10 segment_type,
11 partition_name
12 ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE PARTITION_ SUM(BYTES) COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1 INDEX 16,777,216 31
I2 INDEX PARTITION I2_P1 16,777,216 31
I3 INDEX SUBPARTITION I3_P1_S1 16,777,216 31
T1 TABLE 134,217,728 87
T2 TABLE PARTITION T2_P1 134,217,728 16
T3 TABLE SUBPARTITION T3_P1_S1 134,217,728 16

I created 3 tables: T1, T2 which is partitioned, and T3 which is subpartitioned. There is a slight difference in the number of extents between the partitioned and non-partitioned tables, but this is ASSM, so it is fine.


SQL> DELETE FROM t1
2 WHERE r > 1;

999999 rows deleted.

SQL>
SQL> COMMIT;

Commit complete.

SQL>
SQL> DELETE FROM t2
2 WHERE r > 1;

999999 rows deleted.

SQL>
SQL> COMMIT;

Commit complete.

SQL>
SQL> DELETE FROM t3
2 WHERE r > 1;

999999 rows deleted.

SQL>
SQL> COMMIT;

Commit complete.

SQL>
SQL> SELECT segment_name,
2 segment_type,
3 partition_name,
4 sum(bytes),
5 count(*)
6 FROM user_extents
7 WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
8 group by
9 segment_name,
10 segment_type,
11 partition_name
12 ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE PARTITION_ SUM(BYTES) COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1 INDEX 16,777,216 31
I2 INDEX PARTITION I2_P1 16,777,216 31
I3 INDEX SUBPARTITION I3_P1_S1 16,777,216 31
T1 TABLE 134,217,728 87
T2 TABLE PARTITION T2_P1 134,217,728 16
T3 TABLE SUBPARTITION T3_P1_S1 134,217,728 16

I deleted the complete table but one row; however, the size of the tables and the number of extents did not change.


SQL> ALTER TABLE t1 MOVE;

Table altered.

SQL>
SQL> ALTER INDEX I1 REBUILD;

Index altered.

SQL>
SQL> ALTER TABLE t2 MOVE PARTITION T2_P1;

Table altered.

SQL>
SQL> ALTER INDEX I2 REBUILD PARTITION I2_P1;

Index altered.

SQL>
SQL> ALTER TABLE t3 MOVE SUBPARTITION T3_P1_S1;

Table altered.

SQL>
SQL> ALTER INDEX I3 REBUILD SUBPARTITION I3_P1_S1;

Index altered.

SQL>
SQL> SELECT segment_name,
2 segment_type,
3 partition_name,
4 sum(bytes),
5 count(*)
6 FROM user_extents
7 WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
8 group by
9 segment_name,
10 segment_type,
11 partition_name
12 ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE PARTITION_ SUM(BYTES) COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1 INDEX 65,536 1
I2 INDEX PARTITION I2_P1 65,536 1
I3 INDEX SUBPARTITION I3_P1_S1 65,536 1
T1 TABLE 65,536 1
T2 TABLE PARTITION T2_P1 8,388,608 1
T3 TABLE SUBPARTITION T3_P1_S1 8,388,608 1

Now I have reorganized my tables and rebuilt my indexes.

The size dropped to 64K or 8M and the fragmentation disappeared, as the number of extents dropped to 1.

Note you cannot rebuild a whole partitioned index (ORA-14086) nor move a whole partitioned table (ORA-14511). You need to loop through each partition or subpartition, as sketched below.
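
A minimal sketch of such a loop over the demo objects, driven by the dictionary views:

begin
  -- move each table partition, then rebuild the matching index partitions
  for p in (select partition_name from user_tab_partitions
            where table_name = 'T2') loop
    execute immediate 'alter table t2 move partition ' || p.partition_name;
  end loop;
  for p in (select partition_name from user_ind_partitions
            where index_name = 'I2') loop
    execute immediate 'alter index i2 rebuild partition ' || p.partition_name;
  end loop;
end;
/

Moving a partition invalidates the matching local index partition, hence the rebuild afterwards. With ASSM you could also consider ALTER TABLE ... SHRINK SPACE (after ENABLE ROW MOVEMENT), which reclaims space without invalidating the indexes.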

Categories
dba plsql sql tuning

EXECUTE IMMEDIATE ‘SELECT’ does not execute anything

I am not sure whether some tuning guy at Oracle decided to ignore any SELECT statement run with EXECUTE IMMEDIATE, to save time by doing nothing.

exec execute immediate 'select 1/0 from dual connect by level<9999999999999'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

But it is really annoying... and not documented as far as I know.

Imagine I want to increase all my sequences by 1000

SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

NEXTVAL
----------
1

SQL> begin
2 for f in (select sequence_name n from user_sequences)
3 loop
4 execute immediate
5 'select '||f.n||'.nextval from dual connect by level<=1000';
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

CURRVAL
----------
1

Hmm, it does not work. Does SELECT work at all? Yes, when it is a SELECT INTO 🙂

SQL> drop sequence s;

Sequence dropped.

SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

NEXTVAL
----------
1

SQL> declare
2 type t is table of number index by pls_integer;
3 c t;
4 begin
5 for f in (select sequence_name n from user_sequences)
6 loop
7 execute immediate
8 'select '||f.n||'.nextval from dual connect by level<=1000'
9 bulk collect into c;
10 end loop;
11 end;
12 /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

CURRVAL
----------
1001

I wonder in which version this optimization/bug was introduced...
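
As an aside, if the goal is only to advance a sequence by 1000, a workaround that avoids fetching 1000 values is to play with the increment (a sketch, assuming the sequence currently has INCREMENT BY 1):

alter sequence s increment by 1000;
select s.nextval from dual; -- jumps by 1000
alter sequence s increment by 1;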

Categories
dba sql

How to solve ORA-4068

I was amazed by this one-liner on Stack Overflow.

First, let me introduce you my old foe, ORA-04068 :
Session 1:
SQL> CREATE OR REPLACE PACKAGE P AS
2 X NUMBER;Y NUMBER;END;
3 /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:
SQL> CREATE OR REPLACE PACKAGE P AS
2 X NUMBER;Z NUMBER;END;
3 /

Package created.

Session 1:
SQL> exec P.X := 2
BEGIN P.X := 2; END;

*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "SCOTT.P" has been invalidated
ORA-04065: not executed, altered or dropped package "SCOTT.P"
ORA-06508: PL/SQL: could not find program unit being called: "SCOTT.P"
ORA-06512: at line 1

Changing the package in session 2 invalidated the package state in session 1.

And the PRAGMA that saves the world: PRAGMA SERIALLY_REUSABLE

Session 1:
SQL> CREATE OR REPLACE PACKAGE P AS
2 PRAGMA SERIALLY_REUSABLE;X NUMBER;Y NUMBER;END;
3 /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:
SQL> CREATE OR REPLACE PACKAGE P AS
2 PRAGMA SERIALLY_REUSABLE;X NUMBER;Z NUMBER;END;
3 /

Package created.

Session 1:
SQL> exec P.X := 2

PL/SQL procedure successfully completed.

Oh yes!
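
One caveat, though: with SERIALLY_REUSABLE the package state is reset after every call to the server, so you cannot keep values across calls. A sketch of what to expect (not a captured run):

exec P.X := 1
-- in the next top-level call the state is gone: P.X is back to NULL
exec dbms_output.put_line(nvl(to_char(P.X), 'NULL'))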

Categories
dba sql tuning

make count(*) faster

I just installed Oracle Enterprise Linux on my new notebook.

I wanted to check how far I could improve the performance of a count(*).


SQL> drop table big_emp;

table big_emp dropped.
258ms elapsed

SQL> create table big_emp as
with l as (select 1 from dual connect by level<=3000)
select rownum empno,ename,job,mgr,hiredate,sal,comm,deptno
from emp,l,l

table big_emp created.
330,390ms elapsed

SQL> alter table big_emp add primary key(empno)

table big_emp altered.
481,503ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
2,701ms elapsed

SQL> alter system flush shared_pool
system flush altered.
137ms elapsed

SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000

9,769ms elapsed

SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000

8,157ms elapsed

SQL> alter table big_emp drop primary key

table big_emp altered.
905ms elapsed

SQL> alter table big_emp add primary key(empno)
using index (
create index big_i on big_emp(empno)
global partition by hash(empno)
partitions 16 parallel 16)

table big_emp altered.
974,300ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
601ms elapsed

SQL> alter system flush shared_pool

system flush altered.
140ms elapsed

SQL> select count(*) from big_emp

COUNT(*)
----------------------
126000000

5,201ms elapsed

SQL> select count(*) from big_emp

COUNT(*)
----------------------
126000000

2,958ms elapsed

As it is on a notebook, I suppose the benefit of partitioning is not as good as you could get on your server with lots of fast disks and lots of CPUs, but I am pretty happy with the results.

It is still counting 126 Million rows in less than 3 seconds 🙂
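
If you want to see where the time goes, check the plan; I would expect a (parallel) index fast full scan on the partitioned primary key index. A sketch, not part of the run above:

explain plan for select count(*) from big_emp;
select * from table(dbms_xplan.display);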

Thanks to the very kind sponsor of the notebook!

Categories
dba security sql

On FAILED_LOGIN_ATTEMPTS

I wrote about the new defaults in 10gR2 more than 5 years ago
FAILED_LOGIN_ATTEMPTS default to 10 in 10gR2

This new DEFAULT increases security by preventing users from trying millions of different passwords. However, the locking of application users noticeably decreases database availability 🙁

My recommendation:
– create a separate profile, with a FAILED_LOGIN_ATTEMPTS limit of 1000, for the application users with high availability requirements
– add 2 characters to the passwords of those accounts


SQL> create user u identified by a4sec2pw;
User created.
SQL> grant create session to u;
Grant succeeded.

“U” is a critical user in your application where account locking would mean downtime!

Let’s try to make the schema 10 times more secure and 100 times more available: a limit of 1000 instead of 10 allows 100 times more failed logins before locking, while two extra password characters multiply the search space by roughly 36² ≈ 1300, so a brute-force attack still gets about 13 times harder overall.

SQL> create profile failed1000 limit failed_login_attempts 1000;
Profile created.
SQL> alter user u identified by a4sec2pwx1 profile failed1000;
User altered.

OK?

Categories
fun sql

[FUN] How to tune select count(*) from t?

A user mentioned that one call is faster in test than in prod, even though the table has the same size. How could we tune production to make it as quick?


user007@PROD> select count(*) from t;
COUNT(*)
----------
19832501

Elapsed: 00:03:05.00

Let’s try in test :

user007@TEST> select count(*) from t
select count(*) from t
*
ERROR at line 1:
ORA-00942: table or view does not exist

Elapsed: 00:00:00.16

Wow, 00:00:00.16! This is a much quicker response time. Probably the ORA-942 was caught by the application and the user did not realize it…

Next time someone asks you to tune a count(*), just revoke access to the table to optimize the response time 😈

Categories
java security sql

jdbc ssl

I already wrote about jdbc hello world and listener with tcps.

Let’s combine both technologies!
TCPS.java
import java.util.Properties;
import java.security.Security;
import java.sql.*;
import javax.net.ssl.*;

public class TCPS {
  public static void main(String argv[]) throws SQLException {
    // TCPS address instead of TCP; host, port and SID as configured in the listener
    String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(Host=dbsrv001)(Port=12345))(CONNECT_DATA=(SID=DB01)))";
    Properties props = new Properties();
    props.setProperty("user", "scott");
    props.setProperty("password", "tiger");
    // the auto-login wallet holding the trusted server certificate
    props.setProperty("javax.net.ssl.trustStore", "cwallet.sso");
    props.setProperty("javax.net.ssl.trustStoreType", "SSO");
    // the Oracle PKI provider understands the SSO wallet format
    Security.addProvider(new oracle.security.pki.OraclePKIProvider());
    DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
    Connection conn = DriverManager.getConnection(url, props);
    ResultSet res = conn
        .prepareCall("select 'Hello SSL World' txt from dual")
        .executeQuery();
    res.next();
    System.out.println(res.getString("TXT"));
  }
}

I have an auto-login wallet (cwallet.sso) with the trusted certificate from the server.

There are a few jars to add to the classpath:

$ CLASSPATH=$ORACLE_HOME/jdbc/lib/ojdbc5.jar
$ CLASSPATH=$CLASSPATH:$ORACLE_HOME/jlib/oraclepki.jar
$ CLASSPATH=$CLASSPATH:$ORACLE_HOME/jlib/osdt_cert.jar
$ CLASSPATH=$CLASSPATH:$ORACLE_HOME/jlib/osdt_core.jar
$ CLASSPATH=$CLASSPATH:.
$ export CLASSPATH
$ javac TCPS.java
$ java TCPS
Hello SSL World

Greatly inspired by Jean de Lavarene’s white paper: SSL With Oracle JDBC Thin Driver

Categories
dba sql sqlplus

How to check if I have a pending transaction?

Pretty straightforward, check if dbms_transaction.step_id is null!

SQL> select dbms_transaction.step_id from dual;

STEP_ID
---------------

SQL> insert into t values (1);

1 row created.

SQL> select dbms_transaction.step_id from dual;

STEP_ID
---------------
114352430549782

SQL> commit;

Commit complete.

SQL> select dbms_transaction.step_id from dual;

STEP_ID
---------------

SQL> insert into t values (2);

1 row created.

SQL> select dbms_transaction.step_id from dual;

STEP_ID
---------------
124248035235852

SQL> rollback;

Rollback complete.

SQL> select dbms_transaction.step_id from dual;

STEP_ID
---------------
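
Another indicator, same idea: dbms_transaction.local_transaction_id returns NULL when the session has no active transaction.

select dbms_transaction.local_transaction_id from dual;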

Categories
sql sqlplus

Do you know the ORA- nonerrors?

In one of my scripts, I check the output at the end for any ORA- error; if there is one, I exit with an error.

So far so good.

Also, when I run a report from the shell, I set the sqlplus settings I like and expect the output to start with no ORA- error.

But watch this !

$ echo "set feed off hea off newp none
conn scott/tiger
select * from dual;"|sqlplus /nolog
ERROR:
ORA-28002: the password will expire within 10 days

X

My password will expire in 10 days. And this screwed up my day 🙁

Categories
plsql sql

Hardcoding SYSDATE

I see TRUNC(SYSDATE) in the code very often. Is this good or bad?

Over my past two years as a developer, I can tell you it is pretty bad. What does TRUNC(SYSDATE) mean? It is today’s date.

But when does today start and end? At the time of writing, it is already tomorrow in Thailand.

We could argue about using CURRENT_DATE instead of SYSDATE, but it does not solve everything. Maybe you want your day to end at 10:30pm and initialize for the next day after some post processing. Maybe you have components running in different timezones. Maybe you want to roll the date back and forward in your test systems…

Also, the bad news: if your report hardcodes SYSDATE, something like SELECT SUM(AMT) FROM T WHERE EXPIRY_DATE>TRUNC(SYSDATE), you will not be able to rerun it tomorrow if it fails today.

No, I think you had better store the date in a table and update it daily.

So you will have SELECT SUM(T.AMT) FROM T, TRADINGDAY WHERE T.EXPIRY_DATE>TRADINGDAY.TRADINGDAY and a daily job running UPDATE TRADINGDAY SET TRADINGDAY=TRADINGDAY+1;. A minimal sketch of this setup follows.
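
Using the names from the queries above:

create table tradingday (tradingday date);
insert into tradingday values (trunc(sysdate));
commit;

-- the daily job, run after your end-of-day post processing
update tradingday set tradingday = tradingday + 1;
commit;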

You may want to consider the performance overhead of reading from a table instead of using the very performant SYSDATE function…

Categories
11gR2 sql

connect by and recursive with (part 2)

According to the doc
The subquery_factoring_clause now supports recursive subquery
factoring (recursive WITH), which lets you query hierarchical data.
This feature is more powerful than CONNECT BY in that it
provides depth-first search and breadth-first search, and supports
multiple recursive branches. A new search_clause and cycle_clause
let you specify an ordering for the rows and mark cycles in the
recursion

As written here and there, recursive WITH has more capabilities. It is also ANSI and implemented in DB2 and MS SQL Server; CONNECT BY is an Oracle oddity. But does recursive WITH perform as well as CONNECT BY?

  1. Create a big emp

    create table big_emp as
    with t(empno,mgr) as
    (select 1, null from dual
    union all
    select empno+1,trunc(dbms_random.value(1+empno/10,empno))
    from t
    where empno<100000)
    select * from t

  2. Execution plan

    CONNECT BY

    select empno,mgr
    from big_emp
    connect by mgr = prior empno
    start with mgr is null;

    E M
    - -
    1 -
    2 1
    4 2
    6 2
    8 6

    Operation Object Rows Time Cost Bytes
    ------------------------- ------- ------ ---- ---- -------
    SELECT STATEMENT 3 3 185 78
    CONNECT BY WITH FILTERING
    TABLE ACCESS FULL BIG_EMP 1 1 61 10
    HASH JOIN 2 2 122 46
    CONNECT BY PUMP
    TABLE ACCESS FULL BIG_EMP 100000 1 61 1000000

    recursive WITH

    with e(empno,mgr) as (
    select empno, mgr
    from big_emp
    where mgr is null
    union all
    select f.empno,f.mgr
    from big_emp f, e
    where e.empno=f.mgr)
    select empno,mgr
    from e;

    E M
    - -
    1 -
    2 1
    3 1
    4 2
    5 3
    ...

    Operation Object Rows Time Cost Bytes
    ------------------------- ------- ------ ---- ---- -------
    SELECT STATEMENT 3 3 183 78
    VIEW 3 3 183 78
    UNION ALL (RECURSIVE WITH) BREADTH FIRST
    TABLE ACCESS FULL BIG_EMP 1 1 61 10
    HASH JOIN 2 2 122 46
    RECURSIVE WITH PUMP
    TABLE ACCESS FULL BIG_EMP 100000 1 61 1000000

    In this particular simple case, it seems CONNECT BY has a 1% higher cost.

  3. Execution time

    CONNECT BY

    select sum(mgr)
    from
    (
    select empno,mgr
    from big_emp
    connect by mgr = prior empno
    start with mgr is null
    )

    SUM(MGR)
    ------------
    2745293877

    1 rows returned in 0.73 seconds

    recursive with

    with e(empno,mgr) as (
    select empno, mgr
    from big_emp
    where mgr is null
    union all
    select f.empno,f.mgr
    from big_emp f, e
    where e.empno=f.mgr)
    select sum(mgr)
    from e;

    SUM(MGR)
    ------------
    2745293877

    1 rows returned in 1.24 seconds

Honestly, I am not surprised that CONNECT BY is faster: it has been in the Oracle database forever and was massively tuned in 8.1.7.4.

And then one day you find, ten years have got behind you.

Categories
11gR2 sql

CONNECT BY and Recursive CTE

11gR2 introduced a new mechanism to build up hierarchies.

I remembered a thread on developpez.net that revealed the dubious implementation of NOCYCLE in 10g.

For CONNECT_BY_ISLEAF, I read about the technique on amis.nl.

Ok, here is my graph, defined by the edges listed in the queries below.

The 10g query

with o as
(
SELECT 'A' obj, 'B' link from dual union all
SELECT 'A', 'C' from dual union all
SELECT 'C', 'D' from dual union all
SELECT 'D', 'C' from dual union all
SELECT 'D', 'E' from dual union all
SELECT 'E', 'E' from dual)
select connect_by_root obj root,level,obj,link,
sys_connect_by_path(obj||
'->'
||link,','),
connect_by_iscycle,
connect_by_isleaf
from o
connect by nocycle obj=prior link
start with obj='A';

ROOT LEVEL O L PATH CYCLE LEAF
---- ----- - - -------------------- ----- -----
A 1 A B ,A->B 0 1
A 1 A C ,A->C 0 0
A 2 C D ,A->C,C->D 1 0
A 3 D E ,A->C,C->D,D->E 1 1

Obviously in 10g CONNECT BY NOCYCLE does not work that well with this kind of graph: D->C and E->E are missing, and C->D and D->E are marked as cycling…

Let’s try the 11gR2 equivalent.

with o(obj,link) as
(
SELECT 'A', 'B' from dual union all
SELECT 'A', 'C' from dual union all
SELECT 'C', 'D' from dual union all
SELECT 'D', 'C' from dual union all
SELECT 'D', 'E' from dual union all
SELECT 'E', 'E' from dual),
t(root,lev,obj,link,path) as (
select obj,1,obj,link,cast(obj||'->'||link
as varchar2(4000))
from o
where obj='A' -- START WITH
union all
select
t.root,t.lev+1,o.obj,o.link,
t.path||', '||o.obj||
'->'
||o.link
from t, o
where t.link=o.obj
)
search depth first by obj set ord
cycle obj set cycle to 1 default 0
select root,lev,obj,link,path,cycle,
case
when (lev - lead(lev) over (order by ord)) < 0
then 0
else 1
end is_leaf
from t;

ROOT LEV OBJ LINK PATH CYCLE IS_LEAF
---- ---- ---- ---- --------------------------- ----- -------
A 1 A B A->B 0 1
A 1 A C A->C 0 0
A 2 C D A->C, C->D 0 0
A 3 D C A->C, C->D, D->C 0 0
A 4 C D A->C, C->D, D->C, C->D 1 1
A 3 D E A->C, C->D, D->E 0 0
A 4 E E A->C, C->D, D->E, E->E 0 0
A 5 E E A->C, C->D, D->E, E->E, E->E 1 1

It looks good 🙂 The LEAD over the depth-first ordering (the ord column) emulates CONNECT_BY_ISLEAF: a row is a leaf when the next row in depth-first order is not one level deeper.

If you exclude the rows with cycle=1, you get the six rows for the graph.