how to run UTL_TCP, UTL_SMTP and the like in 11g

After we upgraded a db to 11g, someone complained about an ORA-24248: XML DB extensible security not installed

I thought it would be easy to revert to the 10g mechanism. Probably wrong, after reading Marco:
The default behavior for access control to network utility packages has been changed to disallow network operations to all nonprivileged users. This default behavior is different from, and is incompatible with, previous versions of Oracle Database.

I do not want to install XDB to send mail. Sounds like an overkill…

Ok, as a hard-core dba I created a wrapper in the SYS schema, something you probably should not do!

ex:
10g

SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
UTL_INADDR.GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

after upgrade
11g

SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual
*
ERROR at line 1:
ORA-24248: XML DB extensible security not installed
ORA-06512: at "SYS.UTL_INADDR", line 19
ORA-06512: at "SYS.UTL_INADDR", line 40
ORA-06512: at line 1

My workaround to “disable” Fine-Grained Access to External Network Services

SQL> conn / as sysdba
SQL> create or replace function my_utl_inaddr_GET_HOST_ADDRESS(HOST VARCHAR2) return VARCHAR2 is begin return utl_inaddr.GET_HOST_ADDRESS(HOST); end;
2 /

Function created.

SQL> grant execute on my_utl_inaddr_GET_HOST_ADDRESS to scott;

Grant succeeded.
SQL> conn scott/tiger
Connected.
SQL> select sys.my_utl_inaddr_GET_HOST_ADDRESS('localhost') from dual;
SYS.MY_UTL_INADDR_GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

If you want to use the recommended way of granting access to utl_tcp and the like, check note 453756.1
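
For reference, the supported way goes through an access control list with DBMS_NETWORK_ACL_ADMIN. A minimal sketch, to be run as SYSDBA (the ACL file name and the principal are placeholders, and it does require XDB, which is precisely what I wanted to avoid):

BEGIN
   DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
      acl         => 'resolve.xml',
      description => 'allow hostname resolution',
      principal   => 'SCOTT',
      is_grant    => TRUE,
      privilege   => 'resolve');
   DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
      acl  => 'resolve.xml',
      host => 'localhost');
   COMMIT;
END;
/

After that, scott can use utl_inaddr against localhost without the SYS wrapper; the privilege 'connect' would be needed for utl_tcp, utl_smtp and utl_http.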

Do not upgrade 11.2.0.1 to 11.2.0.1

If you run @?/rdbms/admin/catupgrd from an 11.2.0.1 Oracle Home against a database that is already at 11.2.0.1, you may later realize some objects are missing (probably related to deferred segment creation).

SQL> delete from t1
2 where id in (
3 select ca.id from ca, p
4 where p.no_form like '%02.98'
5 and p.id = ca.prod_id
6 );
delete from t1
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],
[56480], [], [], [], [], [], [], [], [], [], []

The workaround on metalink is amazing :

  • Do not run catupgrd in 11.2.0.1 against a database that is already at 11.2.0.1

Well, the only solution is to restore your database ! How painful 🙁

How to change the connection string of the Oracle Enterprise Manager Grid Control 11g repository

If you moved your repository to a new host and want to change the connection string, there is no need to drop it, no need to mess up the properties or xml files, simply read the doc

http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/ha_agent.htm#autoId13
emctl config oms -store_repos_details (-repos_host -repos_port -repos_sid | -repos_conndesc ) -repos_user [-repos_pwd ] [-no_check_db]
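
For instance, a hypothetical invocation for a repository moved to host newrepos01 (host, port, service name and user below are placeholders):

emctl config oms -store_repos_details -repos_conndesc "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=newrepos01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=EMREP)))" -repos_user sysman

If you omit -repos_pwd you will be prompted for it; restart the OMS afterwards.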

Yes it works!

List of table and column privileges, including those via roles

I could not find this quickly enough on Google so I wrote it myself.

The list of table privileges, with a CONNECT BY subquery.

COL roles FOR a60
COL table_name FOR a30
col privilege for a9
set lin 200 trims on pages 0 emb on hea on newp none

SELECT *
  FROM (SELECT CONNECT_BY_ROOT grantee grantee,
               privilege,
               REPLACE (
                  REGEXP_REPLACE (SYS_CONNECT_BY_PATH (grantee, '/'),
                                  '^/[^/]*'),
                  '/',
                  ' --> ')
                  roles,
               owner,
               table_name,
               column_name
          FROM (SELECT privilege, grantee, owner, table_name,
                       NULL column_name
                  FROM dba_tab_privs
                 WHERE owner NOT IN
                          ('SYS', 'SYSTEM', 'WMSYS', 'SYSMAN', 'MDSYS',
                           'ORDSYS', 'XDB', 'WKSYS', 'EXFSYS', 'OLAPSYS',
                           'DBSNMP', 'DMSYS', 'CTXSYS', 'WK_TEST',
                           'ORDPLUGINS', 'OUTLN', 'ORACLE_OCM', 'APPQOSSYS')
                UNION
                SELECT privilege, grantee, owner, table_name, column_name
                  FROM dba_col_privs
                 WHERE owner NOT IN
                          ('SYS', 'SYSTEM', 'WMSYS', 'SYSMAN', 'MDSYS',
                           'ORDSYS', 'XDB', 'WKSYS', 'EXFSYS', 'OLAPSYS',
                           'DBSNMP', 'DMSYS', 'CTXSYS', 'WK_TEST',
                           'ORDPLUGINS', 'OUTLN', 'ORACLE_OCM', 'APPQOSSYS')
                UNION
                SELECT granted_role, grantee, NULL, NULL, NULL
                  FROM dba_role_privs
                 WHERE grantee NOT IN
                          ('SYS', 'SYSTEM', 'WMSYS', 'SYSMAN', 'MDSYS',
                           'ORDSYS', 'XDB', 'WKSYS', 'EXFSYS', 'OLAPSYS',
                           'DBSNMP', 'DMSYS', 'CTXSYS', 'WK_TEST',
                           'ORDPLUGINS', 'OUTLN', 'ORACLE_OCM', 'APPQOSSYS')) t
         START WITH grantee IN (SELECT username FROM dba_users)
         CONNECT BY PRIOR privilege = grantee)
 WHERE table_name IS NOT NULL AND grantee != owner
ORDER BY grantee,
         owner,
         table_name,
         column_name,
         privilege;

sample output

GRANTEE PRIVILEGE ROLES           OWNER  TABLE_NAME COLUMN_NAME
------- --------- --------------- ------ ---------- -----------
U       UPDATE    --> R           SCOTT  DEPT       DNAME
U       SELECT                    SCOTT  EMP
U2      UPDATE    --> R2 --> R    SCOTT  DEPT       DNAME

Reduce the number of commits

“Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms”
Op. Cit. Oracle Database Performance Tuning Guide 11g Release 2 (11.2)

Ok, let’s do this 🙂

  • Finding: Waits on event “log file sync” while performing COMMIT and ROLLBACK operations were consuming significant database time.
  • Action: Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.

If your application is committing too often (maybe Enterprise Manager told you so), you may want to commit less often. Or maybe just do some magic to impress your customer.

As seen in Metalink note 857576.1: if you can afford data loss, if you cannot change your application, and if you are the kind of dba who cares more about making a badly written application perform than about data integrity, just have a quick look…

Ok, ins.sql is 30’000 insert and commits,

insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;

Let’s check the time on my old-fashioned pc…

SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:40:55.881948 +02:00
SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:41:19.115447 +02:00

23.3 seconds

Now take the risk to lose some commits (but yes it is documented, no hidden parameter) to boost your performance


SQL> alter session set commit_wait=nowait commit_logging=immediate;
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:37.284027 +02:00

SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:54.084547 +02:00

Hey hey hey, 16.8 seconds only 🙂

PS: it used to be called commit_write in 10g
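
In 10gR2 the equivalent session setting would be (a sketch; check the COMMIT_WRITE documentation of your release):

SQL> alter session set commit_write='IMMEDIATE,NOWAIT';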

vi large files

Once upon a time a colleague asked me if there is a better editor than vi installed on my db server. Well, I was not really in the mood to argue the benefits of ed (less memory usage, no useless error messages). But one advantage of ed was (I believed) the ability to read large files


$ vi alert_DB01.log
Out of memory saving lines for undo - try using ed
:ed
"alert_DB01.log" 612637 lines, 20458891 characters
:612636
Thu Aug 25 16:08:20 2011
LNS: Standby redo logfile selected for thread 1 sequence 6898 for destination LOG_ARCHIVE_DEST_2
:q

Of course most readers are using some clicky fancy tools with colors and/or unlimited undo.

Well, for those who use vi on AIX, check this

$ vi -y 9999999 alert_DB01.log
Tue Oct 26 13:59:12 2010
Starting ORACLE instance (normal)
sskgpgetexecname failed to get name
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
"alert_DB01.log" 612662 lines, 20459538 characters

Oh yes !!!

Number of primes below 1000

Do not expect a SQL answer here. My daughter (9) came home from school with this question, and I suspect the expected homework was to google for an answer. I am pretty shocked by such a practice; teachers encouraging Google to do your homework are clearly not from my generation…

Ok, back to the rules, pen and paper. We write down all numbers from 1 to 1000, we start graying out the 1, then the 4, 6, 8, 10, 12, … Then the 9, 15, 21, 27, … Then the 25, 35, 55, … Then 7, 49, 77, … And so on.
2016-08-04 : no reader realised 7 is prime?

After having grayed out 1 one, 499 twos, 166 threes, 66 fives, 37 sevens, 20 elevens, 16 thirteens, 10 seventeens, 8 nineteens, 6 twentythrees, 2 twentynines and 1 thirtyone, we counted the remaining numbers and got to the solution: 832 numbers grayed out, 168 primes left. We used the calculator to gray out numbers like 23*43. But for sure no google there.

I had a similar feeling when my boy's teacher asked him to look for a description of some exotic butterfly on Wikipedia. Is this all that the new generation of kids can do? Search on Google?

Back to Google: I once asked a young candidate in an interview, “How would you speed up a cursor for loop that increases the salary of the employees by 10%?”. Answer: “I would search on google”.
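
For the record, the answer I was fishing for was to get rid of the row-by-row loop altogether and use a single set-based statement, something like:

SQL> update emp set sal=sal*1.1;

One statement, no loop, no PL/SQL-to-SQL context switches.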

Not worth mentioning all my traumatic experiences with IT specialists who advise to turn on some tuning magic they found on google to solve their own performance issue with poorly written code.

Using google is good. But do your homework kids 🙂

nothing in user_segments

I wrote on deferred segment creation recently.

Today I was looking for specific storage attributes that I used to find in user_segments. They are no longer there. Where are they then?

test case :
create table t(x clob) lob (x) store as securefile x (retention max storage(maxsize 8192000000));

Where do I find the RETENTION MAX MAXSIZE of my securefile? Once the segment is created, it is easy to find in user_segments:

SQL> insert into t values('x');
1 row created.

SQL> select max_size from user_segments where segment_name='X';
MAX_SIZE
----------
1000000

1000000 blocks of 8K are my specified 8192000000 bytes.

let’s go back
SQL> truncate table t drop all storage;

Table truncated.

SQL> select max_size from user_segments where segment_name='X';

no rows selected

It is not there.

You must dig into the SYS tables to find out. Specifically, there is a new table for the deferred segments:

SQL> select maxsiz_stg from sys.deferred_stg$ where obj# in (select obj# from sys.obj$ where name='X');
MAXSIZ_STG
----------
1000000

Of course you should not base your business logic on internal tables that may change in a future release. A more appropriate workaround would be to disable deferred segment creation 🙂
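
For instance (this affects only the objects created afterwards):

SQL> alter system set deferred_segment_creation=false scope=both;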

How to unload blob from the database?

There is more than one post on how to unload BLOBs from the database, mostly in PL/SQL with utl_file.put_raw (see note 330146.1) and in Java with FileOutputStream (see note 247546.1).

Unfortunately both are terribly slow, due to the 32k limitation of put_raw in utl_file and due to the low “optimum buffer size” returned by myBlob.getBufferSize(). I increased the Java stream buffer to 20M, or to the size of the LOB, whichever is smaller.

The code is mostly copy-pasted from Metalink, but I changed the buffer size from getBufferSize() to length().

Using Java for a large BLOB (read: a large binary large object) is about 3x faster than PL/SQL in this test.


SQL> CREATE USER USER1 IDENTIFIED BY SeCrEt;

User created.

Elapsed: 00:00:00.04
SQL>
SQL> GRANT CREATE SESSION , CREATE PROCEDURE TO USER1;

Grant succeeded.

Elapsed: 00:00:00.01
SQL>
SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler"
2 AS import java.lang.*;
3 import java.sql.*;
4 import oracle.sql.*;
5 import java.io.*;
6 public class BlobHandler {
7 public static void ExportBlob(String myFile, BLOB myBlob) throws Exception {
8 File binaryFile = new File(myFile);
9 FileOutputStream outStream = new FileOutputStream(binaryFile);
10 InputStream inStream = myBlob.getBinaryStream();
11 int size;
12 if (myBlob.length()> 20000000) { // tune this to whatever appropriate value
13 size = 20000000;
14 } else {
15 size = (int)myBlob.length();
16 }
17 byte[] buffer = new byte[size];
18 int length = -1;
19 while ((length = inStream.read(buffer)) != -1)
20 {
21 outStream.write(buffer, 0, length);
22 outStream.flush();
23 }
24 inStream.close();
25 outStream.close();
26 }
27 }
28 /

Java created.

Elapsed: 00:00:00.16
SQL>
SQL> ALTER JAVA SOURCE "BlobHandler" COMPILE;

Java altered.

Elapsed: 00:00:00.37
SQL>
SQL> sho error
No errors.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobJava (p_file IN VARCHAR2,
2 p_blob IN BLOB)
3 AS
4 LANGUAGE JAVA
5 NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
6 /

Procedure created.

Elapsed: 00:00:00.10
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> EXEC Dbms_Java.Grant_Permission( 'USER1', 'SYS:java.io.FilePermission','/tmp/javatest', 'write' )

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.52
SQL>

A comment here: if you do care about your database server, and you probably won't GRANT DBA TO PUBLIC, never grant full unlimited access to your filesystem, as shockingly suggested by Metalink with Dbms_Java.Grant_Permission( 'SCOTT', 'java.io.FilePermission', '*', 'read ,write, execute, delete');

let’s move on


SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobPlsql (p_dir IN VARCHAR2,
2 p_file IN VARCHAR2,
3 p_blob IN BLOB)
4 AS
5 blob_length INTEGER;
6 out_file UTL_FILE.FILE_TYPE;
7 v_buffer RAW (32767);
8 chunk_size BINARY_INTEGER := 32767;
9 blob_position INTEGER := 1;
10 BEGIN
11 blob_length := DBMS_LOB.GETLENGTH (p_blob);
12 out_file :=
13 UTL_FILE.FOPEN (p_dir,
14 p_file,
15 'wb',
16 chunk_size);
17 WHILE blob_position <= blob_length
18 LOOP
19 IF blob_position + chunk_size - 1 > blob_length
20 THEN
21 chunk_size := blob_length - blob_position + 1;
22 END IF;
23
24 DBMS_LOB.READ (p_blob,
25 chunk_size,
26 blob_position,
27 v_buffer);
28 UTL_FILE.PUT_RAW (out_file, v_buffer, TRUE);
29 blob_position := blob_position + chunk_size;
30 END LOOP;
31 UTL_FILE.FCLOSE (out_file);
32 END;
33 /

Procedure created.

Elapsed: 00:00:00.08
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

Directory created.

Elapsed: 00:00:00.03
SQL>
SQL> grant write on directory tmp to user1;

Grant succeeded.

Elapsed: 00:00:00.02
SQL>

Here again, I give write access to one directory, I do not grant dba to public…

SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> VAR c BLOB
SQL>
SQL> exec :c := UTL_RAW.cast_to_raw ('X'); FOR i IN 1 .. 15 LOOP DBMS_LOB.append (:C, :C); END LOOP

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
32

Elapsed: 00:00:00.06

I have created a 32K BLOB variable, and I am doubling its size each time to see how it scales

SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
64

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
128

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
256

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
KB
----------
512

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
1

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
2

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
4

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
8

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL>

For small files, we do not see much of a difference between java and plsql. This is because the 32k buffer of plsql is acceptable for small files.


SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
16

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.20
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
32

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.46
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
64

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.45
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.90
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.60
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
128

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.96
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.77
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.08
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
256

Elapsed: 00:00:00.30
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.24
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.65
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:34.70
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
512

Elapsed: 00:00:00.70
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.92
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.81
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:01:32.96
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
1024

Elapsed: 00:00:01.65
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.90
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.13

SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:03:33.14

SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
MB
----------
2048

Elapsed: 00:00:00.00

SQL> EXEC ExportBlobJava('/u99/backup/sw/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:26.10
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:01:16.90

The larger the file, the more beneficial it is to have a large write buffer (it could well be larger than the 20M of this example). If Java is not installed in the database, you can also retrieve the BLOB from the database client-side and write the stream to a local file.

The reasons why I always avoid shutdown abort

It is a common practice to always shutdown abort the database before restarting it and shutting it down immediate. This is because SHUTDOWN IMMEDIATE sometimes takes ages, for instance due to a huge transaction to be rolled back.

I do not like it. At all.

First, the chance exists that you won't be able to start the database anymore. I have not heard of or met anyone who had this issue since Oracle 7, but I still believe it.

Second, shutdown abort is very useful if something goes seriously wrong. But if something goes wrong, you may want to find out what it is.

Third, you may hit more bugs than if you close normally. And you may get less help from support if this is due to an abusive shutdown abort. YMMV

Ok, a small demo to preach to the converted.
Disclaimer: this demo is not innocent, do not try this on your database!

SQL> create flashback archive fa tablespace ts retention 1 day;

Flashback archive created.

SQL> create table t(x number primary key);

Table created.

SQL> alter table t flashback archive fa;

Table altered.

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from DBA_SEGMENTS where tablespace_name='TS';

no rows selected

I have created a flashback archive table, and the committed transaction is not written down to the flashback tablespace yet.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 417546240 bytes
Fixed Size 2227072 bytes
Variable Size 234882176 bytes
Database Buffers 171966464 bytes
Redo Buffers 8470528 bytes
Database mounted.
Database opened.


SQL> sho parameter undo_tablespace
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace string UNDO1
SQL> create undo tablespace undo2 datafile '/u02/oradata/@/undo2_01.dbf' size 10m reuse;

Tablespace created.

SQL> alter system set undo_tablespace=undo2;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 417546240 bytes
Fixed Size 2227072 bytes
Variable Size 234882176 bytes
Database Buffers 171966464 bytes
Redo Buffers 8470528 bytes
Database mounted.
Database opened.

I have switched undo tablespace. So far so good. But remember the flashback archive did not write to the flashback tablespace before shutdown abort.


SQL> drop tablespace undo1 including contents and datafiles;
drop tablespace undo1 including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU1_3544069484$' found, terminate dropping tablespace

You see… I cannot drop my old undo tablespace.

Q.E.D.

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time during releases and lots of space if you have a -legacy- application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t1(x number) segment creation deferred;

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment. This behavior will bring its bunch of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without the INCLUDING CONTENTS clause, the drop tablespace will succeed and the table will remain
SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
*
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
*
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query

SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

then you can recreate it and -if wished- drop it with contents

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
*
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the content of the tablespaces, you probably should check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA views too.

I met this feature while using transportable tablespaces. The transport will also carry the objects with no segment that belong to the tablespace.

There is a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is to not use the feature by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only new objects.
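
From 11.2.0.2 on, you could also force the creation of the missing segments for an existing table. A sketch, assuming a table T in the SCOTT schema:

SQL> exec dbms_space_admin.materialize_deferred_segments('SCOTT','T')

This creates the segments as if the table had been created with SEGMENT CREATION IMMEDIATE.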

I am always very cautious about those major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible, but the dba scripts that used to work in previous releases may break here… This is obviously the price to pay to get new features, right?

Datapump : table like ‘FOO%’ or like ‘BAR%’

Today I tried to put two LIKE conditions in an INCLUDE clause of Data Pump.

I have the following tables
SQL> select table_name from user_tables order by 1;

TABLE_NAME
------------------------------
AAA
BAR1 ***
BAR2 ***
BLA
FOO ***
FOO1 ***
GOZ

and I want the tables like 'BAR%' and the tables like 'FOO%' (marked *** above)

First try :
$ expdp scott/tiger include=table:"like'FOO%'or like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:13 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00936: missing expression

This does not work, because it would translate into WHERE {tablename} like 'FOO%' or like 'BAR%'

Ok, second try, let’s put multiple TABLE clause

$ expdp scott/tiger include=table:"like'FOO%'",table:"like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:15 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** include=table:"like'FOO%'",table:"like'BAR%'" Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 11:47:21

No data was found, because it translated into WHERE {tablename} like 'FOO%' AND {tablename} like 'BAR%', which returns no row.

Ok, multiple INCLUDE conditions are joined by AND, so let's do the math: (BAR% OR FOO%) = (>='BAR' AND <'FOP' AND NOT BETWEEN 'BAS' AND 'FONZZZZZZZZZZZZZ'), where FONZZZZZZZZZZZZZ is immediately smaller than FOO.

$ expdp scott/tiger include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:21 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."BAR1"  5.007 KB  1 rows
. . exported "SCOTT"."BAR2"  5.007 KB  1 rows
. . exported "SCOTT"."FOO"   5.007 KB  1 rows
. . exported "SCOTT"."FOO1"  5.007 KB  1 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DB01/dmp/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:47:31

It is that simple 😉

On star transformation

How to configure a schema to support a star transformation query?

This is one of the topics I am currently preparing for the OCM DBA 11g upgrade exam for 9i OCMs.

Let’s create a star schema with a fact table T1 and two dimension tables T2 and T3 :

> create table t2(y number constraint t2_pk primary key, yy varchar2(50))
table t2 created.
> create table t3(z number constraint t3_pk primary key, zz varchar2(50))
table t3 created.
> create table t1(x number constraint t1_pk primary key,
y number constraint t1_t2_fk references t2,
z number constraint t1_t3_fk references t3)
table t1 created.

According to the Oracle Database Data Warehousing Guide, chapter 20 Schema Modeling Techniques
A prerequisite of the star transformation is that there be a single-column bitmap index on every join column of the fact table. These join columns include all foreign key columns.


> create bitmap index bi1 on t1(y)
bitmap index bi1 created.
> create bitmap index bi2 on t1(z)
bitmap index bi2 created.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Obviously star_transformation_enabled must be set to true. It was known as buggy in the first releases; I hope most bugs have been fixed in the meantime.

> alter session set star_transformation_enabled=true
session set altered

Let’s try the transformation with a query similar to the one found in the doc :

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
where yy='one' and zz in ('two','three')
COUNT(*)
----------------------
0

Plan hash value: 3024982001

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 106 | 1 (100)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 106 | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 106 | 1 (100)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 66 | 1 (100)| 00:00:01 |
| 5 | VIEW | index$_join$_001 | 1 | 26 | 1 (100)| 00:00:01 |
|* 6 | HASH JOIN | | | | | |
| 7 | BITMAP CONVERSION TO ROWIDS| | 1 | 26 | 0 (0)| 00:00:01 |
| 8 | BITMAP INDEX FULL SCAN | BI1 | | | | |
| 9 | BITMAP CONVERSION TO ROWIDS| | 1 | 26 | 0 (0)| 00:00:01 |
| 10 | BITMAP INDEX FULL SCAN | BI2 | | | | |
|* 11 | TABLE ACCESS BY INDEX ROWID | T2 | 1 | 40 | 0 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | T2_PK | 1 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | T3_PK | 1 | | 0 (0)| 00:00:01 |
|* 14 | TABLE ACCESS BY INDEX ROWID | T3 | 1 | 40 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(ROWID=ROWID)
11 - filter("T2"."YY"='one')
12 - access("T1"."Y"="T2"."Y")
13 - access("T1"."Z"="T3"."Z")
14 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')

> set autotrace off
Autotrace Disabled

Something is missing. The star transformation is not used. What could it be???

Data !

We need to load trucks of data in the T1 table and a few rows in the T2 and T3 tables

> insert into t2 select rownum,
  to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp')
  from dual connect by level<11
10 rows inserted.
> insert into t3 select rownum,
  to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp')
  from dual connect by level<11
10 rows inserted.
> insert into t1 select rownum, mod(rownum,7)+1, mod(rownum,8)+1
  from dual connect by level<100001
100,000 rows inserted.
> commit
commited.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Let’s try the same query again

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
where yy='one' and zz in ('two','three')
COUNT(*)
----------------------
3570

Plan hash value: 3170767457

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 12 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 6 | | |
| 2 | BITMAP CONVERSION COUNT | | 2857 | 17142 | 6 (0)| 00:00:01 |
| 3 | BITMAP AND | | | | | |
| 4 | BITMAP MERGE | | | | | |
| 5 | BITMAP KEY ITERATION | | | | | |
|* 6 | TABLE ACCESS FULL | T2 | 1 | 8 | 3 (0)| 00:00:01 |
|* 7 | BITMAP INDEX RANGE SCAN| BI1 | | | | |
| 8 | BITMAP MERGE | | | | | |
| 9 | BITMAP KEY ITERATION | | | | | |
|* 10 | TABLE ACCESS FULL | T3 | 2 | 16 | 3 (0)| 00:00:01 |
|* 11 | BITMAP INDEX RANGE SCAN| BI2 | | | | |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - filter("T2"."YY"='one')
7 - access("T1"."Y"="T2"."Y")
10 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')
11 - access("T1"."Z"="T3"."Z")

Note
-----
- star transformation used for this statement

> set autotrace off
Autotrace Disabled

That’s it ! I have 100K rows in my fact table and 10 rows in my dimension tables, that does the trick ! Star transformation needs a lot of rows to work 🙂

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)
SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)
SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
X
----------
1

before / after a successful DDL statement
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
X
----------
1

Before / after an unsuccessful DDL statement, sometimes :
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
*
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
X
----------
1

But not always :
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
*
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, no DDL was executed (the statement failed too early to trigger the implicit commit), but in the case before that, the DDL was executed and failed.

If you want to commit, use COMMIT 🙂

sqlplus -prelim

If you cannot log in to the database, for instance due to ORA-00020 maximum number of processes exceeded, then chances are you can still use the -prelim option.

Documented in note 121779.1 for sqlplus version 10.1 and later :
In some cases, no connections are allowed on the instance (in some ORA-20 situations for example).
As of 10.1.x, there is a new option with SQL*Plus to allow access to an instance to
generate traces.
sqlplus -prelim / as sysdba

Only a sysdba connection is possible.
sqlplus -prelim system/manager

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 4 10:38:36 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

ERROR:
ORA-24300: bad value for mode

And very little access is granted
SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

You can then shutdown abort and restart your database, instead of rebooting your server, where other instances may be running.
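
A minimal sketch of that last-resort sequence:

$ sqlplus -prelim / as sysdba
SQL> shutdown abort
SQL> exit
$ sqlplus / as sysdba
SQL> startup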

This is the ultimate chance before a reboot. Before doing this, consider disconnecting / killing some user sessions to get a regular sqlplus / as sysdba.

CSV part 4, fast !!

I got some comments that my other CSV solutions were too slow to export gigabytes of data.

One more try.

Thanks to the feedback, I provided a new version.

This could generate very large files in just a few minutes (instead of hours).

I use bulk collect and utl_file to boost performance

CREATE TYPE collist IS TABLE OF VARCHAR2 (4000)
/

CREATE OR REPLACE PROCEDURE bulk_csv (directory_name VARCHAR2,
file_name VARCHAR2,
query VARCHAR2)
AUTHID CURRENT_USER
IS
-- $Id$
fh UTL_FILE.file_type;
stmt VARCHAR2 (32767) := NULL;
header VARCHAR2 (32767) := NULL;
curid NUMBER;
desctab DBMS_SQL.DESC_TAB;
colcnt NUMBER;
namevar VARCHAR2 (32767);

TYPE cola IS TABLE OF collist
INDEX BY BINARY_INTEGER;

res cola;
rcur SYS_REFCURSOR;
current_line VARCHAR2 (32767);
next_line VARCHAR2 (32767);

BEGIN
curid := DBMS_SQL.open_cursor;
DBMS_SQL.parse (curid, query, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS (curid, colcnt, desctab);

FOR i IN 1 .. colcnt
LOOP
DBMS_SQL.DEFINE_COLUMN (curid,
i,
namevar,
32767);
END LOOP;

IF DBMS_SQL.execute (curid) = 0
THEN
FOR i IN 1 .. colcnt
LOOP
IF (i > 1)
THEN
header := header || ';';
stmt := stmt || ',';
END IF;

header := header || desctab (i).col_name;
stmt :=
stmt
|| CASE
WHEN desctab (i).col_type IN
(DBMS_SQL.Varchar2_Type,
DBMS_SQL.Char_Type)
THEN
'"'||desctab (i).col_name || '"'
WHEN desctab (i).col_type IN
(DBMS_SQL.Number_Type,
DBMS_SQL.Date_Type,
DBMS_SQL.Binary_Float_Type,
DBMS_SQL.Binary_Double_Type,
DBMS_SQL.Timestamp_Type,
DBMS_SQL.Timestamp_With_TZ_Type,
DBMS_SQL.Interval_Year_to_Month_Type,
DBMS_SQL.Interval_Day_To_Second_Type,
DBMS_SQL.Timestamp_With_Local_TZ_type)
THEN
'to_char("' || desctab (i).col_name || '")'
WHEN desctab (i).col_type = DBMS_SQL.Raw_Type
THEN
'rawtohex("' || desctab (i).col_name || '")'
WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
THEN
'''unsupport datatype : ROWID'''
WHEN desctab (i).col_type = DBMS_SQL.Long_Type
THEN
'''unsupport datatype : LONG'''
WHEN desctab (i).col_type = DBMS_SQL.Long_Raw_Type
THEN
'''unsupport datatype : LONG RAW'''
WHEN desctab (i).col_type = DBMS_SQL.User_Defined_Type
THEN
'''unsupport datatype : User Defined Type'''
WHEN desctab (i).col_type = DBMS_SQL.MLSLabel_Type
THEN
'''unsupport datatype : MLSLABEL'''
WHEN desctab (i).col_type = DBMS_SQL.Ref_Type
THEN
'''unsupport datatype : REF'''
WHEN desctab (i).col_type = DBMS_SQL.Clob_Type
THEN
'''unsupport datatype : CLOB'''
WHEN desctab (i).col_type = DBMS_SQL.Blob_Type
THEN
'''unsupport datatype : BLOB'''
WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
THEN
'''unsupport datatype : ROWID'''
WHEN desctab (i).col_type = DBMS_SQL.Bfile_Type
THEN
'''unsupport datatype : BFILE'''
WHEN desctab (i).col_type = DBMS_SQL.Urowid_Type
THEN
'''unsupport datatype : UROWID'''
ELSE
'''unsupport datatype : '||desctab (i).col_type||''''
END;
END LOOP;

stmt := 'select collist(' || stmt || ') from (' || query || ')';

fh :=
UTL_FILE.fopen (directory_name,
file_name,
'W',
32767);

begin
OPEN rcur FOR stmt;
exception
when others then
dbms_output.put_line(stmt);
raise;
end;
LOOP
FETCH rcur
BULK COLLECT INTO res
LIMIT 10000;

current_line := header;
next_line := NULL;

FOR f IN 1 .. res.COUNT
LOOP
FOR g IN 1 .. res (f).COUNT
LOOP
IF (g > 1)
THEN
next_line := next_line || ';';
END IF;

IF ( NVL(LENGTH (current_line),0)
+ NVL(LENGTH (next_line),0)
+ NVL(LENGTH (res (f) (g)),0)
+ 5 > 32767)
THEN
UTL_FILE.put_line (fh, current_line);
current_line := NULL;
END IF;

IF (NVL(LENGTH (next_line),0) + NVL(LENGTH (res (f) (g)),0) + 5 > 32767)
THEN
UTL_FILE.put_line (fh, next_line);
next_line := NULL;
END IF;

next_line := next_line || res (f) (g);
END LOOP;

current_line :=
CASE
WHEN current_line IS NOT NULL
THEN
current_line || CHR (10)
END
|| next_line;
next_line := NULL;
END LOOP;

UTL_FILE.put_line (fh, current_line);
EXIT WHEN rcur%NOTFOUND;
END LOOP;

CLOSE rcur;

UTL_FILE.fclose (fh);
END IF;

DBMS_SQL.CLOSE_CURSOR (curid);
END;
/

CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

EXEC bulk_csv('TMP','emp.csv','SELECT * FROM EMP ORDER BY ENAME')


EMPNO;ENAME;JOB;MGR;HIREDATE;SAL;COMM;DEPTNO
7876;ADAMS;CLERK;7788;1987-05-23 00:00:00;1100;;20
7499;ALLEN;SALESMAN;7698;1981-02-20 00:00:00;1600;30;30
7698;BLAKE;MANAGER;7839;1981-05-01 00:00:00;2850;;30
7782;CLARK;MANAGER;7839;1981-06-09 00:00:00;2450;;10
7902;FORD;ANALYST;7566;1981-12-03 00:00:00;3000;;20
7900;JAMES;CLERK;7698;1981-12-03 00:00:00;950;;30
7566;JONES;MANAGER;7839;1981-04-02 00:00:00;2975;;20
7839;KING;PRESIDENT;;1981-11-17 00:00:00;5000;;10
7654;MARTIN;SALESMAN;7698;1981-09-28 00:00:00;1250;140;30
7934;MILLER;CLERK;7782;1982-01-23 00:00:00;1300;;10
7788;SCOTT;ANALYST;7566;1987-04-19 00:00:00;3000;;20
7369;SMITH;CLERK;7902;1980-12-17 00:00:00;800;;20
7844;TURNER;SALESMAN;7698;1981-09-08 00:00:00;1500;0;30
7521;WARD;SALESMAN;7698;1981-02-22 00:00:00;1250;50;30

on materialized view constraints

Oracle is pretty strong at enforcing constraints.

Table for this blog post:
create table t(x number primary key, y number);

For instance if you alter table t add check (y<1000); then Y will not be bigger than 1000, right?

SQL> insert into t values (1,2000);
insert into t values (1,2000)
Error at line 1
ORA-02290: check constraint (SCOTT.SYS_C0029609) violated

I believe this code to be unbreakable. If you have only SELECT and INSERT privilege on the table, you cannot bypass the constraint.

Let's imagine some complex constraint: CHECK (sum(y) < 1000)

SQL> alter table t add check (sum(y) < 1000);
alter table t add check (sum(y) < 1000)
Error at line 1
ORA-00934: group function is not allowed here

Ok, clear enough I suppose, we cannot handle this complex constraint with a CHECK condition.

We could have some before trigger that fires an exception
CREATE TRIGGER tr
BEFORE INSERT OR UPDATE
ON T
FOR EACH ROW
WHEN (NEW.Y > 0)
DECLARE
s NUMBER;
BEGIN
SELECT SUM (y) INTO s FROM t;

IF (s + :new.y >= 1000)
THEN
raise_application_error (-20001, 'SUM(Y) would exceed 1000');
END IF;
END;
/

Now the trigger will compute the sum and return an exception whenever it fails.
SQL> insert into t values (2, 600);

1 row created.

SQL> insert into t values (3, 600);
insert into t values (3, 600)
*
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "SCOTT.TR", line 8
ORA-04088: error during execution of trigger 'SCOTT.TR'

SQL> drop trigger tr;

Trigger dropped.

SQL> truncate table t;

Table truncated.

But I am not good with triggers, and triggers are as bad as their developers; they have dark sides like mutating tables and the like.

As Tom Kyte mentioned in the comments, the code above is not effective if more than one user updates the table at the same time.

Another popular approach is to create a fast-refreshable-on-commit mview with a constraint.

Let's see how this works.


create materialized view log on t with rowid, primary key (y) including new values;

create materialized view mv
refresh fast
on commit
as select sum(y) sum from t;

alter table mv add check (sum < 1000);

The constraint is on the mview, so once you commit (and only at commit time), Oracle will try to refresh the mview.

SQL> insert into t values (4, 600);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into t values (5, 600);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.SYS_C0029631) violated

SQL> select * from t;

X Y
---------- ----------
4 600

So far so good. The mechanism rolls back the transaction in case of an ORA-12008. A bit similar to a DEFERRABLE constraint.

But how safe is this after all? Oracle does not enforce anything on the table, it just fails on refresh...

Anything that does not fulfill the materialized view fast refresh requisites will also break the data integrity.

SQL> delete from t;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100
  2  from dual connect by level<20;

19 rows created.

SQL> commit;

Commit complete.

SQL> select sum(y) from t;

SUM(Y)
----------
19000

SQL> select staleness from user_mviews;

STALENESS
-------------------
UNUSABLE

Your data integrity is gone. By "breaking" the mview, with only SELECT, INSERT and ALTER SESSION privilege, you can now insert any data.

This is documented under the FAST Clause:

For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the eligibility of a materialized view for fast refresh.
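
To check upfront whether a materialized view is eligible for fast refresh, and which condition breaks it, DBMS_MVIEW.EXPLAIN_MVIEW may help. A sketch (the MV_CAPABILITIES_TABLE must exist first; it is created by ?/rdbms/admin/utlxmv.sql):

SQL> @?/rdbms/admin/utlxmv
SQL> exec dbms_mview.explain_mview('MV')
SQL> select capability_name, possible, msgtxt
  2  from mv_capabilities_table
  3  where capability_name like 'REFRESH_FAST%';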

Other operations like TRUNCATE may also prevent you from inserting fresh data

SQL> alter materialized view mv compile;

Materialized view altered.

SQL> exec dbms_mview.refresh('MV','COMPLETE');

PL/SQL procedure successfully completed.

SQL> select * from mv;

SUM
----------

SQL> insert into t values(1,1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mv;

SUM
----------
1

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1,1);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-32321: REFRESH FAST of "SCOTT"."MV" unsupported after detail table
TRUNCATE

On using Toad against a database

I got this question once again today in a previous post.

What’s wrong by using Toad against a database?

The worst case scenario:
– some non-technical staff is clicking around in your production database with read-write access 🙁

The best-case scenario :
– nobody has access to your database 🙂

Here is a short list on how you could protect your data :
– Give the right privilege to the right person. DBA role to the DBA, CREATE TABLE/CREATE INDEX to the developer, INSERT/UPDATE/DELETE to the application
– Restrict access to your database server. Use some firewall. Allow only the dba workstation and the application server to the Production environment

What if the end-user PC needs access to the Production database with a powerful user? This often happens in the real world. A fat client is installed on the PC, the password is somehow hardcoded, and the privileges granted to the hardcoded user are utterly generous…

It is not a bad practice in this case to block access to the database server from Toad/SQL*Plus and the like. This will very inefficiently prevent some garage hacker from corrupting your database, but it will prevent your sales / marketing colleagues from deleting data, locking tables and degrading performance. This could be done by some logon trigger or, my preference, by administrative measures like information, auditing and sanctions.
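
A minimal sketch of such a logon trigger, rejecting sessions whose program name looks like Toad. Note the program name is set by the client and is trivial to spoof, so this is deterrence, not security; also, users with the ADMINISTER DATABASE TRIGGER privilege (your DBAs) are not blocked by a failing logon trigger:

CREATE OR REPLACE TRIGGER block_toad
   AFTER LOGON ON DATABASE
DECLARE
   v_program v$session.program%TYPE;
BEGIN
   SELECT program
     INTO v_program
     FROM v$session
    WHERE sid = SYS_CONTEXT ('USERENV', 'SID');

   IF UPPER (v_program) LIKE '%TOAD%'
   THEN
      raise_application_error (-20000, 'Toad is not welcome here');
   END IF;
END;
/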

Troubleshoot ORA-10878

You will probably not hit this bug unless you perform some media recovery in 11.2.0.1/AIX.

Ok. In case you hit ORA-10878: parallel recovery slave died unexpectedly during a DUPLICATE or a RESTORE command, you can disable parallel media recovery with _log_parallelism_max=1.

The usual warning applies: do not use a hidden parameter without the guidance of Oracle Support. Open an SR if you hit this bug. Check for a patch on your platform. Read notes 9728806.8 and 315631.1.

Note: for a RECOVER, the RECOVER NOPARALLEL option should be safer. Unfortunately there is no such thing as DUPLICATE NOPARALLEL.

Update: this could also happen with a standby. If you have stopped your standby site for a while and after restart you get ORA-10878 and ORA-00448 and possibly core dumps or internal errors, then stop Data Guard (set dg_broker_start to false) and start the recovery manually with the noparallel option, until all logs are applied. Once this is done, you can restart Data Guard, which will then in normal operation apply only one log at a time.
Dug out from Helios’s Blog
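
An untested sketch of that manual sequence (the RECOVER syntax varies by release; check it before trying this on a real standby):

SQL> alter system set dg_broker_start=false;
SQL> recover automatic standby database noparallel;
SQL> alter system set dg_broker_start=true;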

scp tuning

I tweeted yesterday:

laurentsch
copying 1TB over ssh sucks. How do you fastcopy in Unix without installing Software and without root privilege?

I got plenty of expert answers. I have not gone as far as recompiling ssh and I did not try plain ftp.

Ok, let’s first try to transfer 10 files of 100M from srv001 to srv002 with scp:

time scp 100M* srv002:
100M1 100% 95MB 4.5MB/s 00:21
100M10 100% 95MB 6.4MB/s 00:15
100M2 100% 95MB 6.0MB/s 00:16
100M3 100% 95MB 4.2MB/s 00:23
100M4 100% 95MB 3.4MB/s 00:28
100M5 100% 95MB 4.2MB/s 00:23
100M6 100% 95MB 6.4MB/s 00:15
100M7 100% 95MB 6.8MB/s 00:14
100M8 100% 95MB 6.8MB/s 00:14
100M9 100% 95MB 6.4MB/s 00:15

real 3m4.50s
user 0m27.07s
sys 0m21.56s

more than 3 minutes for 1G.

I got hints about the buffer size, about SFTP, about the cipher algorithm, and about parallelizing. I did not install new software and I have a pretty old openssh client (3.8). Thanks to all my contributors tmuth, Ik_zelf, TanelPoder, fritshoogland, jcnars, aejes, surachart, and the ones who will answer after the writing of this blog post…

Ok, let’s try a faster algorithm, with sftp (instead of scp), a bigger buffer and in parallel
$ cat batch.ksh
echo "progress\nput 100M1" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M2" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M3" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M4" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M5" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M6" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M7" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M8" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M9" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M10" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
wait
$ time batch.ksh
real 0m19.07s
user 0m12.08s
sys 0m5.86s

This is a 1000% speed enhancement 🙂

What is the current setting of NLS_LANG in sqlplus?

I just learnt a neat trick from Oracle Support.

How do you see the current value of NLS_LANG in SQLPLUS ?

HOST is not the right answer.

E.g.:
Unix:

SQL> host echo $NLS_LANG
AMERICAN_SWITZERLAND

Windows:

SQL> HOST ECHO %NLS_LANG%
%NLS_LANG%

The correct setting is revealed by @.[$NLS_LANG] on Unix and @.[%NLS_LANG%] on Windows.
E.g.:
Unix:

SQL> @.[$NLS_LANG]
SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8ISO8859P1]"

Windows:

SQL> @.[%NLS_LANG%]
SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8ISO8859P1]"

It could well be that both return the same answer, but not necessarily, as shown above.

The unix discrepancy is related to the subshell created by HOST. The subshell may read some .profile and overwrite the value of NLS_LANG

In Windows, the NLS_LANG setting may be set by sqlplus according to some registry entries

Send html report per email from sqlplus

Your business partner wants to receive some daily mail with an sql query output in it. It does not need to be ultra-fancy, but some colors and titles would not hurt.

Here is the report in SQL:

select dname, sum(sal) from emp join dept using (deptno) group by rollup(dname);

Ok, let’s do the report within sqlplus.

rep.sql

set echo off numf 999G999G999G999 lin 32000 trims on pages 50000 head on feed off markup html off
alter session set nls_numeric_characters='.''' nls_date_format='Day DD. Month, YYYY';
spool /tmp/rep.html
prompt To: [email protected]
prompt From: [email protected]
prompt Subject: Daily department report
prompt Content-type: text/html
prompt MIME-Version: 1.0
set markup html on entmap off table 'BORDER="2" BGCOLOR="pink"'
prompt Good morning,
prompt Here is the department report per &_DATE
prompt Kind Regards,
prompt Your IT Operations

prompt
prompt List of departments with the total salaries of their employees

select dname "Department", sum(sal) "Salary" from emp join dept using (deptno) group by rollup(dname);
spool off
host /usr/sbin/sendmail -t

Then simply call it from sqlplus (you may want to configure the sendmail part)

SQL> @rep

check your mail :

To: [email protected]
From: [email protected]
Subject: Daily department report
Good morning,

Here is the department report per Friday 15. April , 2011

Kind Regards,

Your IT Operations

List of departments with the total salaries of their employees

Department Salary
ACCOUNTING 8'750
RESEARCH 10'875
SALES 9'400
  29'025

It is much easier to maintain than APEX, but the capabilities are not as rich...
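To make it a true daily mail, schedule the script with cron; a minimal sketch, where /home/lsc, the .profile and the scott/tiger credentials are placeholders to adapt:

00 06 * * * . /home/lsc/.profile; cd /home/lsc; echo @rep | sqlplus -s scott/tiger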

track ddl change (part 2)

I wrote about tracking ddl changes with a trigger there : track ddl changes

Another option is to use auditing.

A new and cool alternative is to use enable_ddl_logging (11gR2). This will track all DDLs in the alert log.

ALTER SYSTEM SET enable_ddl_logging=TRUE;

Then later you issue
create table t(x number)

and you see in the alertLSC01.log
Tue Apr 05 14:43:32 2011
create table t(x number)

Wait, that’s not really verbose !?

Remember, the alert log is just there for backward compatibility; it is time you start looking at the xml file 🙂

<msg time='2011-04-05T14:43:32...' ... module='TOAD ...'>
 <txt>create table t(x number)
 </txt>
</msg>
There is not really much more in there except the module attribute, which indeed reveals someone is using TOAD to access my database!
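By the way, there is no need to open log.xml in an editor; adrci can tail it for you. A sketch, using the diag/rdbms/lsc01/LSC01 home:

$ adrci
adrci> set homepath diag/rdbms/lsc01/LSC01
adrci> show alert -tail 5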

my first ADR package

You got an internal error and want to create a zip of all relevant files.

First, let’s generate an internal error. I found on oradeblog a quick way to generate an ORA-600 or an ORA-700 (which is a harmless ORA-600 in 11g, read 737878.1):

SQL> oradebug unit_test dbke_test dde_flow_kge_soft foo bar baz
Statement processed.

Now start the command-line interface and set the ADR home:
$ adrci
adrci> show home
ADR Homes:
diag/tnslsnr/precision/listener
diag/tnslsnr/localhost/listener
diag/rdbms/lsc02/LSC02
diag/rdbms/lsc03/LSC03
diag/rdbms/lsc01/LSC01
adrci> set homepath diag/rdbms/lsc02/LSC02
adrci> show home
ADR Homes:
diag/rdbms/lsc02/LSC02

Check the incidents :
adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/lsc02/LSC02:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------- ---------------------------------
53065 ORA 700 [foo] 2011-03-14 18:20:24 +01:00
1 rows fetched

Create the package metadata :

adrci> IPS CREATE PACKAGE INCIDENT 53065
Created package 1 based on incident id 53065, correlation level typical
adrci> ips SHOW PACKAGE 1
DETAILS FOR PACKAGE 1:
PACKAGE_ID 1
PACKAGE_NAME ORA700foo_20110314182607
PACKAGE_DESCRIPTION
DRIVING_PROBLEM 1
DRIVING_PROBLEM_KEY ORA 700 [foo]
DRIVING_INCIDENT 53065
DRIVING_INCIDENT_TIME 2011-03-14 18:20:24.304000 +01:00
STATUS New (0)
CORRELATION_LEVEL Typical (2)
PROBLEMS 1 main problems, 0 correlated problems
INCIDENTS 1 main incidents, 0 correlated incidents
INCLUDED_FILES 4
SEQUENCES Last 0, last full 0, last base 0
UNPACKED FALSE
CREATE_TIME 2011-03-14 18:26:07.566961 +01:00
UPDATE_TIME 2011-03-14 18:26:07.620324 +01:00
BEGIN_TIME N/A
END_TIME N/A
FLAGS 0

The metadata files (in $ORACLE_BASE/rdbms/db_name/sid/*.ams) are in an Oracle binary format

Create the package zip file :
adrci> IPS GENERATE PACKAGE 1 in /home/lsc/foo
Generated package 1 in file /home/lsc/foo/ORA700foo_20110314182607_COM_1.zip,
mode complete

This zip file contains all the traces and alerts that you may need to diagnose and resolve the issue.
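As a side note, the create and generate steps can, if I am not mistaken, be combined in a single ips pack command (check HELP IPS in adrci):

adrci> ips pack incident 53065 in /home/lsc/foo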

adrci> ips show files package 1
FILE_ID 1
FILE_LOCATION /incident/incdir_53065
FILE_NAME LSC02_ora_14163_i53065.trm
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2
FILE_LOCATION /incident/incdir_53065
FILE_NAME LSC02_ora_14163_i53065.trc
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 3
FILE_LOCATION /trace
FILE_NAME LSC02_ora_14163.trc
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 4
FILE_LOCATION /trace
FILE_NAME LSC02_ora_14163.trm
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 5
FILE_LOCATION /alert
FILE_NAME log.xml
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 6
FILE_LOCATION /trace
FILE_NAME alert_LSC02.log
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 7
FILE_LOCATION /trace
FILE_NAME LSC02_diag_5247.trc
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 8
FILE_LOCATION /trace
FILE_NAME LSC02_diag_5247.trm
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 12
FILE_LOCATION /trace
FILE_NAME LSC02_mmon_5265.trc
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 13
FILE_LOCATION /trace
FILE_NAME LSC02_mmon_5265.trm
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2007
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_CONFIGURATION.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2008
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_PACKAGE.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2009
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_PACKAGE_INCIDENT.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2010
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_PACKAGE_FILE.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2011
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_PACKAGE_HISTORY.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2012
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_FILE_METADATA.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2013
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME IPS_FILE_COPY_LOG.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2014
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_ACTION_DEF.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2015
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_ACTION_PARAMETER_DEF.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2016
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_ACTION.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2017
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_ACTION_PARAMETER.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2018
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_INCIDENT_TYPE.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2019
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME DDE_USER_INCIDENT_ACTION_MAP.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2020
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME INCIDENT.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2021
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME INCCKEY.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2022
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME INCIDENT_FILE.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2023
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME PROBLEM.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2024
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME HM_RUN.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2025
FILE_LOCATION /incpkg/pkg_1/seq_1/export
FILE_NAME EM_USER_ACTIVITY.dmp
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2026
FILE_LOCATION /incpkg/pkg_1/seq_1
FILE_NAME config.xml
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2027
FILE_LOCATION /incpkg/pkg_1/seq_1/crs
FILE_NAME crsdiag.log
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2028
FILE_LOCATION /incpkg/pkg_1/seq_1
FILE_NAME metadata.xml
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2029
FILE_LOCATION /incpkg/pkg_1/seq_1
FILE_NAME manifest_1_1.xml
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2030
FILE_LOCATION /incpkg/pkg_1/seq_1
FILE_NAME manifest_1_1.html
LAST_SEQUENCE 1
EXCLUDE Included

FILE_ID 2031
FILE_LOCATION /incpkg/pkg_1/seq_1
FILE_NAME manifest_1_1.txt
LAST_SEQUENCE 1
EXCLUDE Included

There is even an HTML file, the manifest:

Manifest for package 1

Manifest details

Package ID 1
Creation time 2011-03-14 18:26:07.566961 +01:00
Archive time 2011-03-14 18:37:14.499389 +01:00
Sequence 1
Package mode Complete
Package status Generating
Package flags Flags: (No flags set)

Contents summary

Main problems 1
Correlated problems 0
Main incidents 1
Correlated incidents 0

ADR details

Product rdbms
Target lsc02
Instance LSC02
ADR base /u01/app/oracle
ADR home /u01/app/oracle/diag/rdbms/lsc02/LSC02

Main problems

Problem ID Problem key Incidents included Incidents total
1 ORA 700 [foo] 1 1

Correlated problems

Problem ID Problem key Incidents included Incidents total

Main incidents

Incident ID Problem ID Error Message Incident time
53065 1 ORA-700 [foo] [bar] [baz] 2011-03-14 18:20:24.304000 +01:00

Correlated incidents

Incident ID Problem ID Error Message Incident time

Files

File name Location Size File time
LSC02_ora_14163_i53065.trm <ADR_HOME>/incident/incdir_53065 54828 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163_i53065.trc <ADR_HOME>/incident/incdir_53065 2433968 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trc <ADR_HOME>/trace 1308 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trm <ADR_HOME>/trace 210 2011-03-14 18:20:24.000000 +01:00
log.xml <ADR_HOME>/alert 885849 2011-03-14 18:20:27.000000 +01:00
alert_LSC02.log <ADR_HOME>/trace 164969 2011-03-14 18:20:27.000000 +01:00
LSC02_diag_5247.trc <ADR_HOME>/trace 1287 2011-03-14 18:20:26.000000 +01:00
LSC02_diag_5247.trm <ADR_HOME>/trace 77 2011-03-14 18:20:26.000000 +01:00
LSC02_mmon_5265.trc <ADR_HOME>/trace 8703 2011-03-14 18:33:43.000000 +01:00
LSC02_mmon_5265.trm <ADR_HOME>/trace 838 2011-03-14 18:33:43.000000 +01:00
IPS_CONFIGURATION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2818 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 476 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 193 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 1126 2011-03-14 18:37:14.000000 +01:00
IPS_PACKAGE_HISTORY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 280 2011-03-14 18:37:13.000000 +01:00
IPS_FILE_METADATA.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2888 2011-03-14 18:37:14.000000 +01:00
IPS_FILE_COPY_LOG.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 214 2011-03-14 18:37:14.000000 +01:00
DDE_USER_ACTION_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 908 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 708 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 204 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 198 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_TYPE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 353 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_ACTION_MAP.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 166 2011-03-14 18:37:13.000000 +01:00
INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 700 2011-03-14 18:37:13.000000 +01:00
INCCKEY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 303 2011-03-14 18:37:13.000000 +01:00
INCIDENT_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 268 2011-03-14 18:37:13.000000 +01:00
PROBLEM.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 395 2011-03-14 18:37:13.000000 +01:00
HM_RUN.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 342 2011-03-14 18:37:14.000000 +01:00
EM_USER_ACTIVITY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 207 2011-03-14 18:37:14.000000 +01:00
config.xml <ADR_HOME>/incpkg/pkg_1/seq_1 56180 2011-03-14 18:37:14.000000 +01:00
crsdiag.log <ADR_HOME>/incpkg/pkg_1/seq_1/crs 184 2011-03-14 18:37:14.000000 +01:00
metadata.xml <ADR_HOME>/incpkg/pkg_1/seq_1 556 2011-03-14 18:37:14.000000 +01:00

But has Oracle Support ever asked you for an ADR package? Or do they still ask for RDA?

I used to select, zip and send trace files manually; I may consider ADR packages at my next ORA-600!

How does random=random evaluate?

I had fun answering a question about random on the technical forums.

What is in your opinion the boolean value of DBMS_RANDOM.VALUE=DBMS_RANDOM.VALUE?

Or, how many rows would
select * from dual where dbms_random.value=dbms_random.value;
return?

It is wrong to assume the function will be evaluated twice.

The short answer would be: do not rely on random PL/SQL functions in SQL…

Here is a test case in 10.2.0.3 and 11.2.0.2.


SQL> select version from v$instance;
VERSION
-----------------
10.2.0.3.0

SQL> select * from dual where dbms_random.value=dbms_random.value;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("DBMS_RANDOM"."VALUE"()="DBMS_RANDOM"."VALUE"())

In 10g, the function is executed twice per row, and the chance to have two different values is more than 99.9999…%.


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.2.0

SQL> select * from dual where dbms_random.value=dbms_random.value;

D
-
X

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("DBMS_RANDOM"."VALUE"() IS NOT NULL)

Here the optimizer evaluates the function only once per row, and since the result is never null, the predicate always evaluates to true.

Is this a bug or a feature?

In my opinion it is a confusing tuning enhancement that may break some programs.

In this thread, I mentioned that where dbms_random.value is not null is an unsafe construct.
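If you genuinely need to compare two independently evaluated random values, do it in PL/SQL, where the evaluation order is explicit; a minimal sketch:

set serveroutput on
declare
  x number := dbms_random.value;  -- first evaluation
  y number := dbms_random.value;  -- second, independent evaluation
begin
  dbms_output.put_line(case when x = y then 'equal' else 'different' end);
end;
/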

List events in session, process or system

There is a new command in 11g to display the current events, which is oradebug eventdump.

For instance :
SQL> alter session set events '10046 trace name context forever,level 12:942 trace name ERRORSTACK level 3';

SQL> oradebug setmypid
Statement processed.
SQL> oradebug eventdump session
sql_trace level=12
942 trace name ERRORSTACK level 3
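The eventdump scope can also be process or system, matching the title of this post:

SQL> oradebug eventdump process
SQL> oradebug eventdump system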

Read metalink note 436036.1

In 10g and before, the command was oradebug dump events 1 and the list was dumped to a trace file; 11g outputs directly to the console.

Note there is no backward compatibility with unsupported tools like oradebug.
In 11g you will get an ORA-76 with dump events
SQL> oradebug setmypid
Statement processed.
SQL> oradebug dump events 1
ORA-00076: dump EVENTS not found
$ oerr ora 76
00076, 00000, "dump %s not found"
// *Cause: An attempt was made to invoke a dump that does not exist.
// *Action: Type DUMPLIST to see the list of available dumps.

Which index can you rebuild?

I recently wrote on table reorg and rebuild index

Rule number one: you cannot rebuild a partitioned index as a whole; you need to rebuild each individual (sub-)partition (see the sketch after the rules).

Rule number two: to rebuild an IOT, move the table instead of trying to rebuild the underlying index.

Rule number three: a LOB index is not really an index. Do not rebuild it.

Rule number four: a NOSEGMENT index is not a supported type of index, but it may appear in your USER_OBJECTS list. It is used internally by OEM and other tuning tools to do what-if calculations on the explain plan. It is not listed in USER_INDEXES. Do not rebuild it.
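For rule number one, here is a sketch of a statement generator against USER_IND_PARTITIONS (use USER_IND_SUBPARTITIONS for subpartitions):

select 'alter index ' || index_name || ' rebuild partition ' || partition_name || ';'
from user_ind_partitions
where status = 'UNUSABLE';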

Test case :

SQL> CREATE CLUSTER c(x NUMBER);

Cluster created.

SQL> CREATE INDEX a01
2 ON CLUSTER c;

Index created.

SQL> CREATE TABLE t
2 (
3 p NUMBER PRIMARY KEY,
4 a01 NUMBER,
5 a02 NUMBER,
6 a03 NUMBER,
7 a04 NUMBER,
8 a05 NUMBER,
9 a06 NUMBER,
10 a07 VARCHAR2 (40),
11 a08 CLOB
12 );

Table created.

SQL> CREATE INDEX a02
2 ON t (a01);

Index created.

SQL> CREATE INDEX a03
2 ON t (a02)
3 REVERSE;

Index created.

SQL> CREATE INDEX a04
2 ON t (SQRT (a01));

Index created.

SQL> CREATE INDEX a05
2 ON t (COS (a01))
3 REVERSE;

Index created.

SQL> CREATE BITMAP INDEX a06
2 ON t (a03);

Index created.

SQL> CREATE BITMAP INDEX a07
2 ON t (SIGN (a04));

Index created.

SQL> CREATE INDEX a08
2 ON t (a07)
3 INDEXTYPE IS ctxsys.context;

Index created.

SQL> CREATE INDEX a09
2 ON t (a05)
3 GLOBAL PARTITION BY HASH (a05)
4 (PARTITION p);

Index created.

SQL> CREATE TABLE i (x NUMBER CONSTRAINT A10 PRIMARY KEY)
2 ORGANIZATION INDEX;

Table created.

SQL> CREATE INDEX A11 on T(A06) NOSEGMENT;

Index created.

SQL> SELECT index_name,
2 index_type,
3 partitioned,
4 generated
5 FROM user_indexes
6 ORDER BY 1;

INDEX_NAME INDEX_TYPE PAR G
------------------------------ --------------------------- --- -
A01 CLUSTER NO N
A02 NORMAL NO N
A03 NORMAL/REV NO N
A04 FUNCTION-BASED NORMAL NO N
A05 FUNCTION-BASED NORMAL/REV NO N
A06 BITMAP NO N
A07 FUNCTION-BASED BITMAP NO N
A08 DOMAIN NO N
A09 NORMAL YES N
A10 IOT - TOP NO N
DR$A08$X NORMAL NO N
SYS_C009276 NORMAL NO Y
SYS_IL0000028076C00009$$ LOB NO Y
SYS_IL0000028087C00006$$ LOB NO Y
SYS_IL0000028092C00002$$ LOB NO Y
SYS_IOT_TOP_28090 IOT - TOP NO Y
SYS_IOT_TOP_28095 IOT - TOP NO Y

SQL> ALTER INDEX a01 REBUILD;

Index altered.

SQL> ALTER INDEX a02 REBUILD;

Index altered.

SQL> ALTER INDEX a03 REBUILD;

Index altered.

SQL> ALTER INDEX a04 REBUILD;

Index altered.

SQL> ALTER INDEX a05 REBUILD;

Index altered.

SQL> ALTER INDEX a06 REBUILD;

Index altered.

SQL> ALTER INDEX a07 REBUILD;

Index altered.

SQL> ALTER INDEX a08 REBUILD;

Index altered.

SQL> ALTER INDEX a09 REBUILD;
ALTER INDEX a09 REBUILD
*
ERROR at line 1:
ORA-14086: a partitioned index may not be rebuilt as a whole

SQL> ALTER INDEX a09 REBUILD PARTITION P;

Index altered.

SQL> ALTER INDEX a10 REBUILD;
ALTER INDEX a10 REBUILD
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt

SQL> ALTER TABLE i MOVE;

Table altered.

SQL> ALTER INDEX A11 REBUILD;
ALTER INDEX A11 REBUILD
*
ERROR at line 1:
ORA-08114: can not alter a fake index

SQL> ALTER INDEX SYS_IL0000028076C00009$$ REBUILD;
ALTER INDEX SYS_IL0000028076C00009$$ REBUILD
*
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype LOB

A function-based domain index should be rebuildable too; I have not tested this for you.

EZCONNECT and HOSTNAME resolution methods

EZCONNECT is the easy connect naming method, available since 10g, for whenever you want to connect to a database without tnsnames and without LDAP.
$ grep -iw directory_path $TNS_ADMIN/sqlnet.ora
names.directory_path=EZCONNECT
$ sqlplus scott/tiger@//srv01:1521/db01

connect to server srv01 on port 1521 for service db01
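Note that EZCONNECT can coexist with other naming methods; a typical sqlnet.ora trying tnsnames first would be:

names.directory_path=(TNSNAMES,EZCONNECT)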

HOSTNAME was the old-fashioned way to connect to a database, where hostname = sid and port = 1521. In this regard, EZCONNECT is just an extension of the HOSTNAME method.

Typical HOSTNAME usage, which is the same as EZCONNECT with the default port 1521:
sqlplus scott/tiger@db01
connect to server db01 on port 1521 for service db01

There is a behavior change between 10g and 11g: in 10g, the service name defaulted to the DNS alias used to connect; in 11g, the default is null.
$ nslookup db01
Server: ns001.example.com
Address: 198.0.0.30

Name: srv01.example.com
Address: 198.0.0.60
Aliases: db01.example.com

$ nslookup db02
Server: ns001.example.com
Address: 198.0.0.30

Name: srv01.example.com
Address: 198.0.0.60
Aliases: db02.example.com

Both DB01 and DB02 DNS aliases point to the same server.

Let’s try with 10g
$ sqlplus -L scott/[email protected]

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:46:53 2011

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB01.EXAMPLE.COM
SQL> quit
$ sqlplus -L scott/[email protected]

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:47:33 2011

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB02.EXAMPLE.COM

Let’s try with 11g sqlplus
$ sqlplus -L scott/[email protected]

SQL*Plus: Release 11.2.0.2.0 Production on Mon Feb 7 15:50:27 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

SP2-0751: Unable to connect to Oracle. Exiting SQL*Plus

It no longer works. Period. This is documented as Problem 556996.1 in Metalink.

A 10g tnsping will reveal
$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production on 07-FEB-2011 15:52:34

Copyright (c) 1997, 2006, Oracle. All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (80 msec)

In 10g, the service_name is the DNS alias used for the connection.

By contrast, the 11g tnsping service name is null:
$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:56:55

Copyright (c) 1997, 2010, Oracle. All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)

The tnsping works, but the service_name is empty.

How to fix this?

1) you specify the service name in easy connect (yes, this is easy!)
$ tnsping db01.example.com:1521/db01.example.com

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:59:10

Copyright (c) 1997, 2010, Oracle. All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)
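And the same with sqlplus:

$ sqlplus -L scott/[email protected]:1521/db01.example.com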

2) you use 10g, or the 10g behavior in 11g with patch 9271246 (available only on a limited number of platforms, OS and database versions),

3) you specify a default service for your listener
$ vi listener.ora
DEFAULT_SERVICE_LISTENER=DB01
$ lsnrctl reload
$ sqlplus -L scott/tiger@db01

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

This is a bit confusing: if you are servicing more than one database per listener, all DNS aliases will default to the same database. So I would not recommend a default service name if there is more than one service per listener.