tnsping and instant client

Often, when you install an instant client, you will not have tnsping handy. You could copy it from a full client, but that is cumbersome when all you want is to ping your instance.

I just created a function in my .profile:


whence tnsping >/dev/null 2>&1 || 
  tnsping() { 
    sqlplus -L -s x/x@$1 </dev/null | 
      grep ORA- | 
        (grep -v ORA-01017 || echo OK)
  }

The trick is that ORA-01017 (invalid username/password) is raised only when the listener and the instance actually respond, so this specific error is treated as success. I tested it:


$ tnsping db999
ORA-12154: TNS:could not resolve the connect identifier specified
$ tnsping db01
OK
$ tnsping db02
ORA-12541: TNS:no listener

Create database link and logon trigger

Today I could not understand why I was getting ORA-1031 with create database link.

After analysis, I found a logon trigger that changed the current schema. As Tom always says, triggers are evil…

DEMO:


SQL> create user u1 identified by xxx;

User created.

SQL> grant create session, create database link to u2 identified by xxx;

Grant succeeded.

SQL> create trigger evil after logon on database  begin 
  2  execute immediate 'alter session set current_schema=u1';end;
  3  /

Trigger created.

SQL>
SQL> conn u2/xxx
Connected.
SQL> create database link l;
create database link l
                     *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> show user
USER is "U2"
SQL> select sys_context('USERENV','CURRENT_SCHEMA') from dual;

SYS_CONTEXT('USERENV','CURRENT_SCHEMA')
-----------------------------------------
U1

SQL> alter session set current_schema=u2;

Session altered.

SQL> create database link l;

Database link created.

Drop database link in another schema

Today I wrote this script :

drop_database_link.sql


accept owner char prompt "Enter database link owner : "
accept db_link char prompt "Enter link name : "

begin
  dbms_scheduler.create_job(
    job_name=>'&owner..drop_database_link',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN execute immediate ''drop database link &db_link'';END;'
  );
  dbms_scheduler.run_job('&owner..drop_database_link',false);
  dbms_lock.sleep(2);
  dbms_scheduler.drop_job('&owner..drop_database_link');
end;
/

I am using the scheduler to run a job as another user. The database link owner does not need to have any privilege, neither CREATE SESSION nor CREATE JOB. The account could even be locked and expired.
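
A quick check afterwards that the link is gone (assuming you can query DBA_DB_LINKS):


select owner, db_link from dba_db_links where owner = upper('&owner');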

ORA-01722: invalid number and sql loader

Your manager asked you to load an Excel file into a table. Ok, you look at the header, create a basic table with meaningful datatypes, open Excel, save as csv, and load your data. But then you get ORA-1722. What happened?

Ok, let’s do it


create table t(x number not null, y number, z number not null);


LOAD DATA
INFILE *
INTO TABLE T
TRUNCATE
FIELDS TERMINATED BY ';'
(X, Y, Z)
BEGINDATA
1;1;1
2; ;2
3;3;3


$ sqlldr scott/tiger control=foo.ctl
$ vi foo.log
...
Record 2: Rejected - Error on table T, column Y.
ORA-01722: invalid number

Here it is pretty eye-popping, but you probably have 10 years of market data to load, with hundreds of columns, and most of the columns are empty and/or obsolete.

The thing is, Excel put a space in your “number” column, and a space is not a valid number!


SQL> select to_number(' ') from dual;
select to_number(' ') from dual
                 *
ERROR at line 1:
ORA-01722: invalid number

A workaround is to specify NULLIF column=BLANKS for each nullable numeric column:


LOAD DATA
INFILE *
INTO TABLE T
TRUNCATE
FIELDS TERMINATED BY ';'
(X, Y NULLIF Y=BLANKS, Z)
BEGINDATA
1;1;1
2; ;2
3;3;3

$ sqlldr scott/tiger control=foo.ctl
$ vi foo.log
...
Table T:
  3 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.

I hope I will remember this next time I am sqlloading from Excel into Oracle !

old-hash, SHA-1, SHA-2/512

Until pretty recently, only the dubious unsalted proprietary algorithm was available to store Oracle passwords. A bunch of tools were at the time able to crack any 6-8 character password in no time, and the rainbow approach was to precalculate all possible passwords for a specific user.

Those times are not really far away; only starting with Oracle 11g could you have salted/case-sensitive passwords. Salted means that Scott may have many different password keys for tiger.


 SQL> select spare4 from user$ where name='SCOTT';
SPARE4
----------------------------------------------------------------
S:96A5FF65BFF84D7AAC6F8F00879881E8506FE57F555E5BA2927B606DC4F1

SQL> alter user scott identified by tiger;

User altered.

SQL> select spare4 from user$ where name='SCOTT';
SPARE4
----------------------------------------------------------------
S:AE23FB94A462C44A75040CE3BA731E3EF08C4A270F5940491045CBCEF63C

Some users may have only the 10g version (password not changed after migrating to 11g), some may have the 11g version or both, and -who knows- some may already have SHA-2/512 passwords. SHA-2 has many advantages: the chance that 2 passwords produce exactly the same string (a collision) is much lower than with SHA-1, and it performs about twice as fast on 64-bit servers.


SQL> select username, password_versions from dba_users where username like 'U_';
USERNAME                       PASSWORD
------------------------------ --------
U1                             10G
U2                             11G
U3                             10G 11G
U4                             12C

You probably never saw the 12C value unless you are on the 12c beta. But it is actually documented in the 11gR2 documentation:

12C if a new SHA-2 based SHA-512 hash exists
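
To spot users still relying on the old hash only, a quick check (assuming 11g, where DBA_USERS exposes PASSWORD_VERSIONS):


select username from dba_users
where password_versions like '%10G%'
and password_versions not like '%11G%';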

shutdown timeout

I do not like shutdown abort (see this post). I always use shutdown immediate and it always works… well, almost always.

Today I discovered a 9iR2 new feature : shutdown timeout !
Shutdown Timeout
If all events blocking the shutdown do not occur within one hour, the shutdown operation aborts with the following message: ORA-01013: user requested cancel of current operation.

Ok, if and only if I am getting this ORA-1013, I shutdown abort, startup, shutdown immediate.

It is very unusual that a shutdown immediate does not terminate in 1 hour, and hard to reproduce. For this test case, I am doing a shutdown normal

1) make sure you have at least one other session open
2) shutdown normal
3) wait about 60 minutes (defined in _shutdown_completion_timeout_mins, not a supported parameter to change)


SQL> shutdown normal
ORA-01013: user requested cancel of current operation
SQL> 

Now we received an ORA-1013 (but I did not press CTRL-C). The instance is now half-stopped: most sessions and background processes like MMON, CJQ and SMCO are already dead, and it is probably a good idea to restart it properly, maybe with startup force and shutdown immediate.

SQL> startup force
ORACLE instance started.

Total System Global Area 1069252608 bytes
Fixed Size                  2166160 bytes
Variable Size             658510448 bytes
Database Buffers          402653184 bytes
Redo Buffers                5922816 bytes
Database mounted.
Database opened.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

This is all documented :
After ORA-01013 occurs, you must consider the instance to be in an unpredictable state…If subsequent SHUTDOWN commands continue to fail, you must submit a SHUTDOWN ABORT

I am extremely happy to have discovered this, because it will still allow a shutdown abort after one hour of shutdown immediate, which is an extremely rare case, but it is an enhancement for my backup scripts.

OCM 11g upgrade

Last Friday I went to Munich for THE upgrade exam. I have a few recommendations based on my preparation.

DISCLAIMER: no recommendation is based on the exam

Check you have a similar environment. You will get an 11gR1 database and 10g Enterprise Manager. Personally I did my preparation on 11gR2 and 11g/12c EM; I did not find it useful to prepare on some oldish version. You should have some Linux / Unix around. It does not really matter if you use AIX, HPUX, Solaris or Linux.

I bought myself a keyboard with German Layout (the one with ß, Ä and Ö) to gain speed

Read each topic carefully and get proficiency:
– Database:
I know the meaning of the initialisation parameters. If needed, I can quickly check one in the Reference
I am familiar with the sql syntax and have read the SQL Reference more than once

– Network configuration :
I know how to configure sqlnet.ora, tnsnames.ora and listener.ora without looking at the documentation.
I have carefully tested all possible parameters in Net Service Guide

– ADR :
I know how to set up the ADR location in the database and network.
I know how to use the command line utility, documented in utilities -> adrci

– RMAN :
I know how to configure RMAN with the configure statement. I am efficient in doing backups and restores
I have read the RMAN Reference carefully.

– Dataguard :
I am confident with the dgmgrl utility and with oem to configure and monitor dataguard.
I have read the Dataguard Broker Guide

– Materialized view
I have read and tested the examples in the Datawarehouse Guide -> Materialized views

– Secure Lob
I have played with lobs as basicfile and securefiles to check the differences and new features. I have read the secure files guide

– Streams
Streams is a very complex product. Last year I attended a five-day course in Paris, only to realise it is utterly tricky to maintain and debug in case of errors.
I know where to look at if the capture, propagation or apply fail. Mostly in alert log, but also in the DBA views. I know how to restart the different processes. I am confident with both the OEM and PL/SQL Packages DBMS_STREAMS_*.
I have read the Streams Concept and Streams Replication Administration

– Resource manager
I know both the GUI and command line. I have read Admin guide -> Resource Manager

– Advisors and baselines
A very valuable resource to read is the 2 Days Perf Guide. It may sound strange to prepare an OCM exam with a 2-Days guide, but it is an awesome read.
To get deeper and learn the API usage, read Performance Tuning -> Optimizing SQL Statements

– Replay a capture workload
I learnt both the PL/SQL and EM usage in the Real Application Testing User Guide

Almost all the other topics are self-explanatory and almost trivial once you know the syntax by heart…

I spent over one year preparing this exam, and if you browse my recent posts, you will find detailed examples that somehow relate to miscellaneous findings:
Check mount option in linux
Transport tablespace over db links
Fast start failover
On star transformation
my first ADR package

For those of my readers who are on this path, good luck!

PS: no, I do not know the result yet…

Difference between Paris and Zurich

When I was a child, I used to go skiing in the Alps, and occasionally cross the border. I remember that late in the season (Easter skiing) restaurants were already empty in France when we had lunch, because our neighbouring countries introduced summertime before we did.

It is a long way back, namely the summers of 1976 to ’80. In 1975 and before, neither of us had daylight saving time. In 1981 and later, we both had it.

Ok, I just had an issue with a wrong date in a customer application. Somehow our database is set with POSIX format, let’s say +01:00 and +02:00, derived from CET/CEST unix timezone (TZ=CET-1CEST,M3.5.0,M10.5.0)

Due to some obscure multiple conversions, dates for summer 1976-80 are wrong, so we sent birthday cards too early to our customers…


SQL> select to_char(cast(
  timestamp '1979-08-01 00:00:00 CET'
    as timestamp with local time zone),
      'YYYY-MM-DD') from dual;

TO_CHAR(CAST(TIMESTAMP'1979-08-0100:00:
---------------------------------------
1979-08-01

but if I set my session time zone to Europe/Zurich, which is currently equivalent to CET, I get discrepancies


SQL> alter session set time_zone='Europe/Zurich' ;

Session altered.

SQL> select to_char(cast(
  timestamp '1979-08-01 00:00:00 CET' 
    as timestamp with local time zone),
      'YYYY-MM-DD') from dual;

TO_CHAR(CAST(TIMESTAMP'1979-08-0100:00
--------------------------------------
1979-07-31

A good reason to specify the time zone name correctly in your create database statement !
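
For instance (a sketch; note that the time zone of an existing database can only be changed if no TIMESTAMP WITH LOCAL TIME ZONE column contains data):


create database db01 ... set time_zone = 'Europe/Zurich';
-- or, on an existing database:
alter database set time_zone = 'Europe/Zurich';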

grant select on sys tables

I prefer to use a powerful named user with dba rather than sys. It conforms better to the security policies in place regarding accounting of administrator operations.

Very occasionally, my user gets ORA-1031 insufficient privileges even though I have the dba role.

Among others, this happens with “PURGE DBA_RECYCLEBIN” and DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE.

update, 2012-07-24
For purge dba_recyclebin, you should probably purge the tables individually:
exec for f in(select*from dba_recyclebin where owner!='SYS' and type='TABLE')loop execute immediate 'purge table "'||f.owner||'"."'||f.object_name||'"';end loop;

For DBMS_STREAMS_AUTH, what I am actually missing is the GRANT OPTION on some documented dba views and dbms packages. So I could safely grant the grant option to my user for all sys objects that have been granted to DBA, PUBLIC and any other roles.

Kind of


create table scott.t as 
  select distinct owner,table_name,privilege 
  from dba_tab_privs t 
  where privilege not in ('USE','DEQUEUE') and owner='SYS' ;
begin
  for f in(select * from scott.t) loop 
    execute immediate 
      'grant '||f.privilege||' on "'||f.owner||'"."'
        ||f.table_name||'" to scott with grant option'; 
  end loop;
end;
/

It is better not to select from dba_tab_privs directly, as executing DDL while the cursor is still open may have unexpected side effects.

This may help you to increase your security by reducing your connections as sys.

What does # mean in sqlplus?

The script used to be


shutdown abort

it has been replaced by


#shutdown abort
shutdown immediate

Let’s try !


SQL> #shutdown abort
ORACLE instance shut down.
SQL> shutdown immediate
ORA-01012: not logged on

sqlplus just silently ignored the # symbol and executed the first statement.

Thanks to Maxim’s comment, here is a new case to explain the sqlprefix #


SQL> sho sqlpre
sqlprefix "#" (hex 23)
SQL> select
  2  #prompt hello world
hello world
  2  * from dual;

D
-
X

While within an sqlplus block (or outside of one), the prefix tells sqlplus to immediately run a sqlplus statement.

The correct syntaxes to put comments are documented in Placing Comments in Scripts:


SQL> remark shutdown abort
SQL> rem shutdown abort
SQL> -- shu abort
SQL> /* shutdown abort */

How to get rid of corrupted blocks without a backup

First, you identify the corrupted blocks in the alert log or with dbverify:


$ dbv BLOCKSIZE=8192 file=sysaux01.dbf
DBV-00201: Block, DBA 12629823, marked corrupt for invalid redo application
...
DBVERIFY - Verification complete

Total Pages Examined         : 131072
Total Pages Processed (Data) : 69691
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 28669
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 15755
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 16957
Total Pages Marked Corrupt   : 9
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 3220271881 (11.3220271881)

From the DBA (data block address) number, identify the block and the file:


def dba=12585405
col block_id new_v block_id
col file_id new_v file_id
select dbms_utility.data_block_address_block(&dba) block_id, 
dbms_utility.data_block_address_file(&dba) file_id from dual;

  BLOCK_ID    FILE_ID
---------- ----------
      2493          3

From the block_id/file_id, identify the segment


col owner new_v table_owner 
col segment_name new_v segment_name
select owner,segment_name,segment_type from dba_extents where file_id=&file_id and &BLOCK_ID between block_id and block_id + blocks - 1;
OWNER
------------------------------
SEGMENT_NAME
-----------------------------------
SEGMENT_TYPE
------------------
SYS
SYS_IL0000008786C00008$$
LOBINDEX

If it is a lob, identify the column and data_type


select tablespace_name,owner, table_name, column_name, data_type from dba_lobs join
dba_tab_columns using (owner, table_name, column_name) where segment_name =
'&segment_name' and owner='&table_owner';
TABLESPACE_NAME                OWNER
------------------------------ ------------------------------
TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
DATA_TYPE
--------------------------------------------------------------------------------
SYSAUX                         SYS
WRI$_DBU_FEATURE_USAGE         FEATURE_INFO
CLOB

If you are lucky, you will find a useless segment that you can just drop. Or maybe you will be able to move all segments to another tablespace and drop the tablespace with the corrupt blocks.

If you are pretty unlucky like me today, you will find sys segments in system or sysaux.

Either you export all user data and import them again into a new database (but this means downtime), or you start moving the segments to another tablespace, or dropping and recreating them.
Check Tablespace maintenance tasks

Once dropped or moved or emptied, you may still see the corrupted blocks. Do not forget to purge the recyclebin, either with PURGE DBA_RECYCLEBIN or PURGE TABLESPACE tbs1 USER usr1

Even then, the corruption may no longer belong to any segment but still appear in dbverify. One workaround is to fill the tablespace (check it does not autoextend) with a dummy table; the inserts below loop until the tablespace is full and they fail with ORA-01653, which is expected here.

create table t(x number, y varchar2(4000) default lpad('x',4000,'x')) tablespace tbs1;

exec while true loop insert into t(x) select rownum r from dual connect by level<10000;commit;end loop

exec while true loop insert into t(x) select rownum r from dual connect by level<100;commit;end loop

exec while true loop insert into t(x,y) select rownum r,'x' from dual;commit;end loop

exec while true loop insert into t(x,y) values (null,null);commit;end loop

drop table t;

Run dbv again and again until you get completely rid of the errors. If you drop and recreate sys objects, or even if you simply move them out of the sys tablespaces, dictionary corruption and ORA-600 are possible. But well, you had corruption anyway…

Transport tablespace over db links

You do not want to export the metadata from the source database, but rather use a database link to get it.

As a prerequisite, you have made a self-contained set of tablespaces, put them in read-only mode and copied the datafiles.
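
To verify the set is really self-contained before copying, you can run something like this on the source as a dba (a minimal check, assuming the TEST tablespace created below):


exec dbms_tts.transport_set_check('TEST', true)

select * from transport_set_violations;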


SQL>  create tablespace test datafile '/u02/oradata/db01/test01.dbf' size 10m;

Tablespace created.

SQL> create table scott.x(x number) tablespace test;

Table created.

SQL> insert into scott.x values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> alter tablespace test read only;

Tablespace altered.


$ scp srv01:/u02/oradata/db01/test01.dbf /u02/oradata/db02

create a database link on the target database DB02


SQL> create database link l using 'DB01';

Database link created.

Then import the tablespace via db link


impdp scott/tiger network_link=l transport_tablespaces=TEST transport_datafiles=/u02/oradata/db02/test01.dbf logfile=DATA_PUMP_LOG:impdp.log

This avoids the “expdp” step, but it does not avoid copying the datafiles.

Fast start failover

There are loads of things to do to prepare for a fast-start failover.

First you must have a working pair of primary / standby databases.
Then you must have both databases in flashback mode.
Then verify your dataguard configuration with OEM.
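
You can also check the configuration from the command line with dgmgrl:


$ dgmgrl -silent sys/*** "show configuration"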

Ok, once you are this far, you will need to review your dataguard properties.

lsc01 is my primary, lsc05 my standby:


edit database lsc01 set LogXptMode='SYNC';
edit database lsc01 set FastStartFailoverTarget= 'lsc05'
edit database lsc05 set LogXptMode='SYNC';
edit database lsc05 set FastStartFailoverTarget= 'lsc01'
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;

then you need to start the observer in the background :


nohup dgmgrl -silent sys/*** "start observer" &

note: it does not work if you connect with /. You will get DGM-16979 if you use / or if you use different passwords in standby and primary.

so far so good, let’s enable fast_start failover in dgmgrl


ENABLE FAST_START FAILOVER

Before you switch, check the listener.ora is correctly configured :


SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=LSC01_DGMGRL.example.com)
      (SID_NAME=LSC01)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
    )
    (SID_DESC=
      (GLOBAL_DBNAME=LSC05_DGMGRL.example.com)
      (SID_NAME=LSC05)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = precision.example.com)(PORT = 1521))
  )

The GLOBAL_DBNAME is mandatory to enable a painless switchover. Otherwise the standby startup will fail with ORA-12514.

UPDATE: alternatively in 11gR2 you can set the dataguard property StaticConnectIdentifier to use SID instead of service name :


edit database lsc05 set property StaticConnectIdentifier=
  '(DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=precision.example.com)(PORT=1521))
    (CONNECT_DATA=(SID=LSC05)))';
edit database lsc01 set property StaticConnectIdentifier=
  '(DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=precision.example.com)(PORT=1521))
    (CONNECT_DATA=(SID=LSC01)))';

in this case you will not need global_dbname in listener.ora. See note 308943.1

Ok, let’s see how fast I can switch


$ time dgmgrl -silent sys/*** "switchover to lsc05"
Performing switchover NOW, please wait...
New primary database "lsc05" is opening...
Operation requires shutdown of instance "LSC01" on database "lsc01"
Shutting down instance "LSC01"...
ORACLE instance shut down.
Operation requires startup of instance "LSC01" on database "lsc01"
Starting instance "LSC01"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "lsc05"

real    1m10.685s
user    0m0.052s
sys     0m0.109s

About one minute. Most of the time was spent restarting the original primary as a standby; the new primary was already available for queries after about 20 seconds.

Failover to standby with a delay (until time)

Disaster scenario :

1) you have a primary database
2) you have a standby database
3) you want to be able to failover to the standby database until a given time (ex: within the last 24 hours)

First, you create your primary and standby databases.

If you have dataguard broker, you then set the DELAYMINS to 1440 (=1 day) for the standby database.


DGMGRL> edit database sdb01 set property DelayMins=1440;

If you do not use dg broker, then set the delay in your log_archive_dest_2 parameter

SQL> ALTER SYSTEM SET log_archive_dest_2='service=sdb01 LGWR ASYNC NOAFFIRM delay=1440';

Ok, verify your configuration, with OEM, with show configuration or simply with alter system archive log current.

Wait one day 😉

After one day, you will have a lag between the last retrieved logfile and the last applied redo log entry.


SQL> select applied, max(NEXT_TIME) from v$archived_log group by applied;
APPLIED   MAX(NEXT_TIME)
--------- -------------------
NO        2011-11-17_14:06:53
YES       2011-11-17_13:51:46

Obviously I did not wait one day for this test, but I already see a lag of 15 minutes.

Now for some obscure reason, your beloved colleague messed up the primary database and you must recover the database until 14:00.

Note that you cannot use the dataguard broker or OEM to do this.

Ok. Shutdown the production.


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Now cancel the standby process (but do not issue a recover managed standby database finish).


SQL>  alter database recover managed standby database cancel;

Database altered.

Now we need to recover the standby until time. Check the syntax twice before you type enter!

On standby


SQL> recover automatic standby database until time '2011-11-17 14:00:00';
Media recovery complete.

On standby, start the database as a primary database


SQL> alter database activate standby database;

Database altered.
SQL> alter database open;

Database altered.

Before writing this post I did some research on how to do it with dataguard, but dataguard does not seem to offer point-in-time failover. Either you do an immediate failover (and you will lose up to 1440 minutes of data) or you do a complete failover (and you will apply all logs, including the ones after 14:00).

delete unused shared memory segments from an Oracle instance

Once upon a time, a dba issued some kill -9 to clean up dying database processes. Or the database instance crashed. This can leave some shared memory segments behind. Note 68281.1 describes how to remove them on a server with multiple databases.

First, list the IPC resources:


$ ipcs
IPC status from /dev/mem as of Mon Nov 14 11:28:58 CET 2011
T        ID     KEY        MODE       OWNER    GROUP
Message Queues:
q         0 0x4107001c -Rrw-rw----     root   printq

Shared Memory:
m         0 0x7800006f --rw-rw-rw- itmuser1 itmusers
m         1 0x78000070 --rw-rw-rw- itmuser1 itmusers
m   5242882 0x41d2ba80 --rw-r-----   oracle      dba
m  99614723 0xb0d4d164 --rw-rw----   oracle      dba
m  12582917 0xb84cbc28 --rw-rw----   oracle      dba
m  79691782 0x1058873f --rw-------   oracle      dba
m 638582792 0x78000382 --rw-rw-rw-     root   system
m 218103817 0x780003b7 --rw-rw-rw-     root   system
Semaphores:
s         1 0x6202c477 --ra-r--r--     root   system
s   6291461 0x0102c2d8 --ra-------     root   system
s         6 0xa100004b --ra-ra-ra-     root   system

Get a list of the running databases


$ ps -ef | grep pmon | grep -v grep
  oracle  483334       1   1   Aug 16      -  6:46 ora_pmon_db03
  oracle 1253476       1   0   Oct 31      -  2:00 ora_pmon_db01
  oracle 2298042       1   0   Sep 05      - 11:07 ora_pmon_db02

Then, for each database, get the ipc information
$ export ORACLE_SID=db01
$ sqlplus / as sysdba
SQL> oradebug setmypid
Statement processed.
SQL> oradebug ipc
Information written to trace file.
SQL> oradebug tracefile_name
/u01/app/oracle/admin/db01/udump/db01_ora_2625574.trc
$ awk '/Shared Memory:/{getline;getline;print}' /u01/app/oracle/admin/db01/udump/db01_ora_2625574.trc
5242882         0x41d2ba80
$ export ORACLE_SID=db02
$ sqlplus / as sysdba
SQL> oradebug setmypid
Statement processed.
SQL> oradebug ipc
Information written to trace file.
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/db02a/db02/trace/db02_ora_2441408.trc
$ awk '/Shared Memory:/{getline;getline;print}' /u01/app/oracle/diag/rdbms/db02a/db02/trace/db02_ora_2441408.trc
99614723        0xb0d4d164
$ export ORACLE_SID=db03
$ sqlplus / as sysdba
SQL> oradebug setmypid
Statement processed.
SQL> oradebug ipc
Information written to trace file.
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/db03b/db03/trace/db03_ora_2617416.trc
$ awk '/Shared Memory:/{getline;getline;print}' /u01/app/oracle/diag/rdbms/db03b/db03/trace/db03_ora_2617416.trc
12582917        0xb84cbc28

Compare this with the first list and, if you are absolutely sure of what you are doing, remove the oracle segments that are not assigned to any database instance with ipcrm. If possible, first try to figure out where they come from and do a shutdown abort of the not-correctly-stopped database.

Ok, with ipcrm


$ ipcrm -m 79691782

I removed the segment that apparently did not relate to any running instance.

This could help you if you are really forced to remove some shared memory segments and you cannot afford shutting down other databases.

Duplicate table over database link

The old-style approach would be CREATE TABLE T AS SELECT * FROM T@L, where T is the table you want to duplicate and L the database link pointing to the remote database.

If you want to keep more info about the structure of t, the constraints, the indexes, you may use Datapump. Here it is…


SQL> var n number
SQL> exec :n := dbms_datapump.open('IMPORT','TABLE','L')

PL/SQL procedure successfully completed.

SQL> print n
         N
----------
        28

SQL> exec dbms_datapump.metadata_filter(:n,'SCHEMA_LIST','''SCOTT''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.metadata_filter(:n,'NAME_LIST','''T''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.start_job(:n)

PL/SQL procedure successfully completed.
SQL> desc t
 Name              Null?    Type
 ----------------- -------- ------------
 X                 NOT NULL NUMBER

This is utterly simple. If you mess up the link name, global name, syntax, and so on, you may end up with orphan jobs in DBA_DATAPUMP_JOBS. There is some metalink note on dropping the underlying tables (Note 336914.1), but first log off, get a coffee, and they may vanish after a few minutes.
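
To spot such leftovers (a quick check, assuming DBA privileges):


select owner_name, job_name, state from dba_datapump_jobs;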

Generate network graph from command line

I recently wrote about gnuplot; today I tried another command line utility to generate graphs: graphviz, version 2.24.0 on AIX5L.

Pretty straightforward syntax :

(
  echo "digraph Emp {"
    sqlplus -s -L scott/tiger << EOF
      set pages 0 lin 120 hea off feed off
      select 
        ename ||'->'|| 
        (select ename from emp where empno=e.mgr) || ';' 
      from emp e where mgr is not null;
EOF
  echo "}"
)| neato -Tpng | uuencode Emp.png | mailx laurentschneider@example.com

(or neato -Tpng -o Emp.png to save locally, or -Tps|lp, etc…)

On the number of installed components

I recently posted about fine-grained network security. More precisely, I posted about the new requirement to have XDB in order to send a mail or do an nslookup on 11g.

What options should you install in your database?


SQL> select COMP_NAME,VERSION from DBA_REGISTRY;

COMP_NAME                                VERSION
---------------------------------------- ----------
Oracle Database Catalog Views            11.2.0.2.0
Oracle Database Packages and Types       11.2.0.2.0

What else do you need? If you have java, you will need a java pool. If you have xdb, you will need an XDB schema. The more options you install, the more bugs you will get, the bigger the dictionary will be, and the more memory you will need.

But in my experience the worst part of having java, xdb, olap and family installed in your database is that every upgrade will take you hours instead of minutes! That is for me a sufficient argument to stick to catalog and catproc (the two base components listed above).

how to run UTL_TCP, UTL_SMTP and the like in 11g

After we upgraded a db to 11g, someone complained about an ORA-24248: XML DB extensible security not installed

I thought it should be easy to revert to the 10g mechanism. Probably wrong, after reading Marco:
The default behavior for access control to network utility packages has been changed to disallow network operations to all nonprivileged users. This default behavior is different from, and is incompatible with, previous versions of Oracle Database.

I do not want to install XDB just to send mail. Sounds like overkill…

Ok, as a hard-core dba I created a wrapper in the sys schema, something you probably should not do!

ex:
10g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
UTL_INADDR.GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

after upgrade
11g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual
       *
ERROR at line 1:
ORA-24248: XML DB extensible security not installed
ORA-06512: at "SYS.UTL_INADDR", line 19
ORA-06512: at "SYS.UTL_INADDR", line 40
ORA-06512: at line 1

My workaround to “disable” Fine-Grained Access to External Network Services


SQL> conn / as sysdba
SQL> create or replace function my_utl_inaddr_GET_HOST_ADDRESS(HOST VARCHAR2) return VARCHAR2 is begin return utl_inaddr.GET_HOST_ADDRESS(HOST); end;
  2  /

Function created.

SQL> grant execute on my_utl_inaddr_GET_HOST_ADDRESS to scott;

Grant succeeded.
SQL> conn scott/tiger
Connected.
SQL> select sys.my_utl_inaddr_GET_HOST_ADDRESS('localhost') from dual;
SYS.MY_UTL_INADDR_GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

If you want to use the recommended way of granting access to utl_tcp and the like, check note 453756.1.
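
For reference, the ACL route on 11g looks roughly like this (a minimal sketch as sysdba; the ACL file name resolve.xml is arbitrary, and the 'resolve' privilege only covers utl_inaddr-style name resolution):


begin
  dbms_network_acl_admin.create_acl(
    acl         => 'resolve.xml',
    description => 'resolve privilege for SCOTT',
    principal   => 'SCOTT',
    is_grant    => true,
    privilege   => 'resolve');
  dbms_network_acl_admin.assign_acl(
    acl  => 'resolve.xml',
    host => 'localhost');
  commit;
end;
/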

Do not upgrade 11.2.0.1 to 11.2.0.1

If you run @?/rdbms/admin/catupgrd from an 11.2.0.1 Oracle Home against a database that is already at 11.2.0.1, you may later realize some objects are missing (probably related to deferred segment creation).


SQL>   delete from t1
  2    where id in (
  3      select ca.id from ca, p
  4      where p.no_form like '%02.98'
  5      and p.id = ca.prod_id
  6    );
  delete from t1
              *
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],
[56480], [], [], [], [], [], [], [], [], [], []

The workaround on metalink is amazing :

  • Do not run catupgrd in 11.2.0.1 against a database that is already at 11.2.0.1

Well, the only solution is to restore your database ! How painful 🙁

How to change the connection string of the Oracle Enterprise Manager Grid Control 11g repository

If you moved your repository to a new host and want to change the connection string, no need to drop it, no need to mess around with the properties or xml files, simply read the doc:

http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/ha_agent.htm#autoId13
emctl config oms -store_repos_details (-repos_host <host> -repos_port <port> -repos_sid <sid> | -repos_conndesc <connect descriptor>) -repos_user <username> [-repos_pwd <pwd>] [-no_check_db]

Yes it works!

List of table and column privileges, including those via roles

I could not find this quickly enough on Google, so I wrote it myself.

The list of table privileges, with a connect by subquery.

 COL roles FOR a60
COL table_name FOR a30
col privilege for a9
set lin 200 trims on pages 0 emb on hea on newp none

  SELECT *
    FROM (    SELECT CONNECT_BY_ROOT grantee grantee,
                     privilege,
                     REPLACE (
                        REGEXP_REPLACE (SYS_CONNECT_BY_PATH (granteE, '/'),
                                        '^/[^/]*'),
                        '/',
                        ' --> ')
                        ROLES,
                     owner,
                     table_name,
                     column_name
                FROM (SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             NULL column_name
                        FROM DBA_TAB_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             column_name
                        FROM DBA_COL_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT GRANTED_ROLE,
                             GRANTEE,
                             NULL,
                             NULL,
                             NULL
                        FROM DBA_ROLE_PRIVS
                       WHERE GRANTEE NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')) T
          START WITH grantee IN (SELECT username FROM dba_users)
          CONNECT BY PRIOR PRIVILEGE = GRANTEE)
   WHERE table_name IS NOT NULL AND grantee != OWNER
ORDER BY grantee,
         owner,
         table_name,
         column_name,
         privilege;

sample output


GRANTEE PRIVILEGE ROLES           OWNER  TABLE_NAME COLUMN_NAME
------- --------- --------------- ------ ---------- -----------
U       UPDATE     --> R          SCOTT  DEPT       DNAME      
U       SELECT                    SCOTT  EMP                   
U2      UPDATE     --> R2 --> R   SCOTT  DEPT       DNAME      

Reduce the number of commits

“Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms”
Op. Cit. Oracle Database Performance Tuning Guide 11g Release 2 (11.2)

Ok, let’s do this 🙂

  • Finding Waits on event “log file sync” while performing COMMIT and ROLLBACK operations were consuming significant database time.
  • Action Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.

If your application is committing too often -maybe Enterprise Manager told you so- you may want to commit less often. Or maybe just do some magic to impress your customer.

As seen in Metalink note 857576.1, if you can afford data loss, if you cannot change your application, and if you are the kind of dba who cares more about making a badly written application perform than about data integrity, just have a quick look…

Ok, ins.sql is 30’000 inserts and commits:


insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;

Let’s check the time on my old-fashioned pc…


SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:40:55.881948 +02:00
SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:41:19.115447 +02:00

23.3 seconds

Now take the risk of losing some commits (but yes, it is documented, no hidden parameter) to boost your performance


SQL> alter session set commit_wait=nowait commit_logging=immediate;
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:37.284027 +02:00

SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:54.084547 +02:00

Hey hey hey, 16.8 seconds only 🙂

PS: it used to be called commit_write in 10g
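
In 10gR2 the equivalent would look like this (a sketch; the two attributes go into a single comma-separated value):


alter session set commit_write='IMMEDIATE,NOWAIT';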

nothing in user_segments

I wrote about deferred segment creation recently.

Today I was looking for specific storage attributes that I used to find in user_segments. They are no longer there. Where are they then?

test case :
create table t(x clob) lob(x) store as securefile x (retention max storage(maxsize 8192000000));

Where do I find the retention max MAX_SIZE of my securefile? Once the segment is created, it is easy to find it in user_segments:

SQL> insert into t values('x');
1 row created.

 SQL> select max_size from user_segments where segment_name='X';
  MAX_SIZE
----------
   1000000

1000000 blocks of 8192 bytes is my specified 8192000000 bytes.

let’s go back

SQL> truncate table t drop all storage;

Table truncated.

SQL> select max_size from user_segments where segment_name='X';

no rows selected

It is not there.

You must dig into the sys tables to find out. Specifically, there is a new table for the deferred segments:

SQL> select maxsiz_stg from sys.deferred_stg$ where obj# in (select obj# from sys.obj$ where name='X');
MAXSIZ_STG
----------
   1000000

Of course you should not base your business logic on internal tables that may change in a next release. A more appropriate workaround would be to disable deferred segment creation 🙂

How to unload blob from the database?

There is more than one post on how to unload a blob from the database, mostly in plsql with utl_file.put_raw (see note 330146.1) and in java with FileOutputStream (see note 247546.1).

Unfortunately both are terribly slow, due to the 32k limitation of put_raw in utl_file and to the low “optimum buffer size” retrieved by myBlob.getBufferSize(). So I increased the java stream buffer to 20M, or to the size of the lob, whichever is smaller.

The code is mostly copy-pasted from metalink. But I changed the size from getbuffersize() to length().

Using java for large blob (read large binary large object) is about 3x faster than plsql in this test.


SQL> CREATE USER USER1 IDENTIFIED BY SeCrEt;

User created.

Elapsed: 00:00:00.04
SQL>
SQL> GRANT CREATE SESSION , CREATE PROCEDURE TO USER1;

Grant succeeded.

Elapsed: 00:00:00.01
SQL>
SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler"
  2     AS import java.lang.*;
  3  import java.sql.*;
  4  import oracle.sql.*;
  5  import java.io.*;
  6  public class BlobHandler {
  7    public static void ExportBlob(String myFile, BLOB myBlob) throws Exception {
  8      File binaryFile = new File(myFile);
  9      FileOutputStream outStream = new FileOutputStream(binaryFile);
 10      InputStream inStream = myBlob.getBinaryStream();
 11      int size;
 12      if (myBlob.length()> 20000000) {  // tune this to whatever appropriate value
 13        size = 20000000;
 14      } else {
 15        size = (int)myBlob.length();
 16      }
 17      byte[] buffer = new byte[size];
 18      int length = -1;
 19      while ((length = inStream.read(buffer)) != -1)
 20      {
 21        outStream.write(buffer, 0, length);
 22        outStream.flush();
 23      }
 24      inStream.close();
 25      outStream.close();
 26    }
 27  }
 28  /

Java created.

Elapsed: 00:00:00.16
SQL>
SQL> ALTER JAVA SOURCE "BlobHandler" COMPILE;

Java altered.

Elapsed: 00:00:00.37
SQL>
SQL> sho error
No errors.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobJava (p_file   IN VARCHAR2,
  2                                              p_blob   IN BLOB)
  3  AS
  4     LANGUAGE JAVA
  5     NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
  6  /

Procedure created.

Elapsed: 00:00:00.10
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> EXEC Dbms_Java.Grant_Permission( 'USER1', 'SYS:java.io.FilePermission','/tmp/javatest', 'write' )

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.52
SQL> 

A comment here. If you do care about your database server -and you probably won’t GRANT DBA TO PUBLIC- never grant full unlimited access to your filesystem, as shockingly suggested by metalink with Dbms_Java.Grant_Permission( ‘SCOTT’, ‘java.io.FilePermission’, ‘*’, ‘read ,write, execute, delete’);

let’s move on


SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobPlsql (p_dir    IN VARCHAR2,
  2                                               p_file   IN VARCHAR2,
  3                                               p_blob   IN BLOB)
  4  AS
  5     blob_length     INTEGER;
  6     out_file        UTL_FILE.FILE_TYPE;
  7     v_buffer        RAW (32767);
  8     chunk_size      BINARY_INTEGER := 32767;
  9     blob_position   INTEGER := 1;
 10  BEGIN
 11     blob_length := DBMS_LOB.GETLENGTH (p_blob);
 12     out_file :=
 13        UTL_FILE.FOPEN (p_dir,
 14                        p_file,
 15                        'wb',
 16                        chunk_size);
 17     WHILE blob_position <= blob_length
 18     LOOP
 19        IF blob_position + chunk_size - 1 > blob_length
 20        THEN
 21           chunk_size := blob_length - blob_position + 1;
 22        END IF;
 23
 24        DBMS_LOB.READ (p_blob,
 25                       chunk_size,
 26                       blob_position,
 27                       v_buffer);
 28        UTL_FILE.PUT_RAW (out_file, v_buffer, TRUE);
 29        blob_position := blob_position + chunk_size;
 30     END LOOP;
 31     UTL_FILE.FCLOSE (out_file);
 32  END;
 33  /

Procedure created.

Elapsed: 00:00:00.08
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

Directory created.

Elapsed: 00:00:00.03
SQL>
SQL> grant write on directory tmp to user1;

Grant succeeded.

Elapsed: 00:00:00.02
SQL>

Here again, I give write access to one directory, I do not grant dba to public…

SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> VAR c BLOB
SQL>
SQL> exec :c := UTL_RAW.cast_to_raw ('X'); FOR i IN 1 .. 15 LOOP  DBMS_LOB.append (:C, :C); END LOOP

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
        32

Elapsed: 00:00:00.06

I have created a 32K BLOB variable, and I am doubling its size each time to see how it scales

SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
        64

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       128

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       256

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       512

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         1

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         2

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         4

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         8

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL>

For small files, we do not see much of a difference between java and plsql. This is because the 32k buffer of plsql is acceptable for small files.


SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        16

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.20
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        32

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.46
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        64

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.45
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.90
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.60
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       128

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.96
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.77
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.08
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       256

Elapsed: 00:00:00.30
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.24
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.65
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:34.70
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       512

Elapsed: 00:00:00.70
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.92
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.81
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:01:32.96
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
      1024

Elapsed: 00:00:01.65
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.90
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.13

SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:03:33.14

SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
      2048

Elapsed: 00:00:00.00

SQL> EXEC ExportBlobJava('/u99/backup/sw/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:26.10
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:01:16.90

The larger the file, the more beneficial it is to have a large write buffer (which could well be higher than the 20M in this example). If java is not installed in the database, you can also retrieve the blob from the database in a client application and write the stream to a local file.

The reasons why I always avoid to shutdown abort

It is a common practice to always shutdown abort the database before restarting it and shutting it down immediate. This is because sometimes SHUTDOWN IMMEDIATE takes ages, for instance due to a huge transaction to be rolled back.

I do not like it. At all.

First, there is a chance that you won’t be able to start the database anymore. I have not heard of or met anyone who had this issue since Oracle7, but I still believe it can happen.

Second, shutdown abort is very useful if something goes seriously wrong. But if something goes wrong, you may want to find out what it is.

Third, you may hit more bugs than if you close the database normally. And you may get less help from support if the issue is due to an abusive shutdown abort. YMMV

Ok, small demo to preach to the converted
disclaimer: this demo is not innocent, do not try this on your database

SQL> create flashback archive fa tablespace ts retention 1 day;

Flashback archive created.

SQL> create table t(x number primary key);

Table created.

SQL> alter table t flashback archive fa;

Table altered.

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from DBA_SEGMENTS where tablespace_name='TS';

no rows selected

I have created a flashback archive table, and the committed transaction is not written down to the flashback tablespace yet.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup 
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.


SQL> sho parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDO1
SQL> create undo tablespace undo2 datafile '/u02/oradata/@/undo2_01.dbf' size 10m reuse;

Tablespace created.

SQL> alter system set undo_tablespace=undo2;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.

I have switched the undo tablespace. So far so good. But remember, the flashback archive did not write to the flashback tablespace before the shutdown abort.


SQL> drop tablespace undo1 including contents and datafiles;
drop tablespace undo1 including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU1_3544069484$' found, terminate dropping tablespace

You see… I cannot drop my old undo tablespace.
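
To find out which rollback segments are still keeping the old undo tablespace busy, you may query the dictionary (a quick sketch; any status other than OFFLINE prevents the drop):

SQL> select segment_name, status
  2  from dba_rollback_segs
  3  where tablespace_name='UNDO1'
  4  and status != 'OFFLINE';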

Q.E.D.

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time during releases and lots of space if you have a -legacy- application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t2(x number) segment creation deferred;
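
To check what happened, USER_TABLES has a SEGMENT_CREATED column in 11.2 (a quick sketch against the two tables above):

SQL> select table_name, segment_created from user_tables
  2  where table_name in ('T1','T2');

TABLE_NAME                     SEG
------------------------------ ---
T1                             YES
T2                             NO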

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment, and it will bring its share of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without INCLUDING CONTENTS, the drop tablespace will succeed and the table will remain

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query


SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES  UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

then you can recreate it and -if wished- drop it with contents


SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the contents of your tablespaces, you should probably also check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA_ views.

I met this feature while using transportable tablespaces. Transportable tablespace will also transport the segment-less objects that belong to the tablespace.

There are a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is not to use the feature, by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only newly created objects.
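
Setting it is straightforward, as the parameter is dynamic (a quick sketch):

SQL> alter system set deferred_segment_creation=false scope=both;

System altered.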

I am always very cautious about such major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible, yet dba scripts that used to work in previous releases may break here… This is obviously the price to pay for new features, right?

Datapump : table like ‘FOO%’ or like ‘BAR%’

Today I tried to put two like condition in an INCLUDE clause of datapump.

I have the following tables

SQL> select table_name from user_tables order by 1;

TABLE_NAME
------------------------------
AAA
BAR1   ***
BAR2   ***
BLA
FOO    ***
FOO1   ***
GOZ

and I want the tables like BAR% and the tables like FOO%

First try :

$ expdp scott/tiger include=table:"like'FOO%'or like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:13 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00936: missing expression

This does not work, because it would translate to WHERE {tablename} LIKE 'FOO%' OR LIKE 'BAR%', which is not valid SQL, hence the ORA-00936.

Ok, second try, let’s put multiple TABLE clause

$ expdp scott/tiger include=table:"like'FOO%'",table:"like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:15 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:"like'FOO%'",table:"like'BAR%'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 11:47:21

No data was found, because it translated to WHERE {tablename} LIKE 'FOO%' AND {tablename} LIKE 'BAR%', which returns no rows.

Ok, multiple INCLUDE conditions are joined by AND, so let’s do the math: (LIKE 'BAR%' OR LIKE 'FOO%') is the same as (>= 'BAR' AND < 'FOP' AND NOT BETWEEN 'BAS' AND 'FONZZZZZZZZZZZZZ'), where 'FONZZZZZZZZZZZZZ' stands for a string immediately below 'FOO'.
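
You can sanity-check the rewritten predicate in plain SQL before feeding it to datapump (a quick sketch against the demo tables above):

SQL> select table_name from user_tables
  2  where table_name >= 'BAR'
  3  and table_name < 'FOP'
  4  and table_name not between 'BAS' and 'FONZZZZZZZZZZZZZ'
  5  order by 1;

TABLE_NAME
------------------------------
BAR1
BAR2
FOO
FOO1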

$ expdp scott/tiger include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:21 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."BAR1"                             5.007 KB       1 rows
. . exported "SCOTT"."BAR2"                             5.007 KB       1 rows
. . exported "SCOTT"."FOO"                              5.007 KB       1 rows
. . exported "SCOTT"."FOO1"                             5.007 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DB01/dmp/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:47:31

It is that simple 😉

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)

SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)

SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
         X
----------
         1

Before / after a successful DDL statement :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

Before / after an unsuccessful DDL statement, sometimes :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
                    *
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

But not always :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
                    *
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, the statement failed at parse time (ORA-00936), so no DDL ever started and no implicit commit happened. In the case before that, the statement parsed fine, the DDL started, committed your transaction, and then failed on the invalid identifier.

If you want to commit, use COMMIT 🙂

sqlplus -prelim

If you cannot log in to the database, for instance due to ORA-00020 maximum number of processes exceeded, then there is a chance that you can use the -prelim option.

Documented in note 121779.1 for sqlplus version 10.1 and later :
In some cases, no connections are allowed on the instance (in some ORA-20 situations for example).
As of 10.1.x, there is a new option with SQL*Plus to allow access to an instance to
generate traces.
sqlplus -prelim / as sysdba

Only sysdba connection is possible.

sqlplus -prelim system/manager

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 4 10:38:36 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-24300: bad value for mode

And very little access is granted

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

You can then shutdown abort and restart your database, instead of rebooting your server, where other instances may be running.

This is the ultimate chance before a reboot. Before this, consider disconnecting / killing some user sessions to get a regular sqlplus / as sysdba.
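
A typical emergency sequence could look like this (a sketch; the abort goes through the preliminary connection, the restart through a regular one once the processes are freed):

$ sqlplus -prelim / as sysdba
SQL> shutdown abort
ORACLE instance shut down.
SQL> exit
$ sqlplus / as sysdba
SQL> startup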

On using Toad against a database

I got this question once again today on a previous post.

What’s wrong by using Toad against a database?

The worst case scenario:
– some non-technical staff is clicking around in your production database with read-write access 🙁

The best-case scenario :
– nobody has access to your database 🙂

Here is a short list of ways to protect your data (a minimal grant sketch follows the list) :
– Give the right privilege to the right person. DBA role to the DBA, CREATE TABLE/CREATE INDEX to the developer, INSERT/UPDATE/DELETE to the application
– Restrict access to your database server. Use a firewall. Allow only the dba workstations and the application server to reach the Production environment
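
For instance, a least-privilege setup could look like this (a sketch; the APP schema, table T and user APP_USER are made-up names):

SQL> grant create session to app_user;

Grant succeeded.

SQL> grant select, insert, update, delete on app.t to app_user;

Grant succeeded.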

What if the end-user PC needs access to the Production database with a powerful user? This often happens in the real world. A fat client is installed on the PC, the password is somehow hardcoded, and the privileges granted to the hardcoded user are utterly generous…

It is not a bad practice in this case to block access to the database server from Toad/SQL*Plus and the like. This will do very little against a determined garage hacker, but it will prevent your sales / marketing colleagues from deleting data, locking tables and degrading performance. This could be done with some logon triggers or, my preference, with administrative measures like information, auditing and sanctions.
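
Such a blocking logon trigger could look like the sketch below. Keep in mind that MODULE is supplied by the client and can be spoofed, and that users with the ADMINISTER DATABASE TRIGGER privilege (like your DBAs) are not blocked by a failing logon trigger:

SQL> create or replace trigger block_toad
  2  after logon on database
  3  begin
  4    -- MODULE is set by the client tool, e.g. 'TOAD.EXE'; it is advisory only
  5    if upper(sys_context('USERENV','MODULE')) like '%TOAD%'
  6    then
  7      raise_application_error(-20000, 'Toad is not allowed here');
  8    end if;
  9  end;
 10  /

Trigger created.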