Category Archives: dba

How to resolve ORA-09925?

This morning I had to solve an ORA-09925: Unable to create audit trail file and it was not as straightforward as usual…

There is a note 69642.1 on Metalink (edit: now up to date for 10gR2).

1) AUDIT_FILE_DEST is not writable

$ env
_=/usr/bin/env
ORACLE_SID=FOO
TERM=dtterm
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_3
PWD=/u01/app/oracle/product/10.2.0/db_3
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"                
SQL> startup 
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 9925
$ grep -i audit_file_dest $ORACLE_HOME/dbs/*$ORACLE_SID.ora
audit_file_dest=/bar
$ ls -lad /bar
/bar not found
$ su -      
root's Password:
# mkdir /bar 
# exit
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"                
SQL> startup 
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 13: Permission denied
Additional information: 9925
$ su -
root's Password:
# chown oracle /bar
# exit
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"
SQL> startup quiet nomount
ORACLE instance started.
SQL> shutdown abort
ORACLE instance shut down.

2) $ORACLE_BASE/admin/$ORACLE_SID/adump exists and is not writable :!:


$ ls -lad $ORACLE_BASE/admin/$ORACLE_SID/adump  
drwxr-xr-x   2 root   dba .../admin/FOO/adump
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"     

SQL*Plus: Release 10.2.0.3.0 - 
Production on Mon Dec 17 09:02:29 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

ERROR:
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 13: Permission denied
Additional information: 9925
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 13: Permission denied
Additional information: 9925

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus
$ su -
root's Password:
# chown oracle /bar/admin/FOO/adump
# exit
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"     

SQL*Plus: Release 10.2.0.3.0 - 
Production on Mon Dec 17 09:02:48 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> quit

3) $ORACLE_HOME/rdbms/audit is not writable


$ cat $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=FOO
$ $ORACLE_HOME/bin/sqlplus -L "/ as sysdba"

SQL*Plus: Release 10.2.0.3.0 - 
Production on Mon Dec 17 08:48:09 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

ERROR:
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 13: Permission denied
Additional information: 9925
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 13: Permission denied
Additional information: 9925

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus
$ ls -lad $ORACLE_HOME/rdbms/audit 
drwxr-x---   2 root     dba ... $ORACLE_HOME/rdbms/audit
$ cd $ORACLE_HOME; su
root's Password:
# chown oracle ./rdbms/audit
# exit
$ $ORACLE_HOME/bin/sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.3.0 - 
Production on Mon Dec 17 08:49:12 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> quit
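All three causes come down to a directory the oracle user cannot write to. Here is a hypothetical pre-flight check, just a sketch: the function name is mine, and the two paths passed at the bottom are placeholders for your own AUDIT_FILE_DEST, $ORACLE_BASE/admin/$ORACLE_SID/adump and $ORACLE_HOME/rdbms/audit.

```shell
#!/bin/sh
# hypothetical helper: report each audit directory that is missing
# or not writable by the current (oracle) user
check_audit_dirs() {
  rc=0
  for d in "$@"; do
    if [ -d "$d" ] && [ -w "$d" ]; then
      echo "OK   $d"
    else
      echo "FIX  $d"   # mkdir and/or chown oracle needed
      rc=1
    fi
  done
  return $rc
}

check_audit_dirs /tmp /no/such/dir || echo "some directories need fixing"
```

On the box above it would have flagged /bar, then the adump directory, then $ORACLE_HOME/rdbms/audit in turn.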

Oracle 11g Hot patching

Online patching: you can apply or roll back interim patches while the RDBMS instance is running

1) download an interim patch for 11g, e.g. the dummy patch 6198642
2) unzip p6198642_111060_LINUX.zip
3) cd 6198642
4) $ORACLE_HOME/OPatch/opatch apply -silent -connectString LSC08 -runSql

Invoking OPatch 11.1.0.6.0

Oracle Interim Patch Installer version 11.1.0.6.0
Copyright (c) 2007, Oracle Corporation.  All rights reserved.

Oracle Home       : /opt/oracle/product/11/db_4
Central Inventory : /opt/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.1.0.6.0
OUI version       : 11.1.0.6.0
OUI location      : /opt/oracle/product/11/db_4/oui
Log file location : /opt/oracle/product/11/db_4/cfgtoollogs/
    opatch/opatch2007-12-05_21-23-42PM.log

RollbackSession rolling back interim patch '6198642' from OH
    '/opt/oracle/product/11/db_4'

Running prerequisite checks...

OPatch detected non-cluster Oracle Home from the inventory 
    and will patch the local system only.

Please shutdown Oracle instances running out of this 
    ORACLE_HOME on the local system.
(Oracle Home = '/opt/oracle/product/11/db_4')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files affected by the patch '6198642' for restore. 
    This might take a while...
Execution of 'sh /opt/oracle/product/11/db_4/.patch_storage/
    6198642_May_07_2007_00_50_36/original_patch/custom/
    scripts/pre -rollback 6198642 ':

Return Code = 0

Patching component oracle.rdbms, 11.1.0.6.0...
Copying file to "/opt/oracle/product/11/db_4/cpu/CPUDummy2007/
    catcpu.sql"
RollbackSession removing interim patch '6198642' from inventory

---------------------------------------------------------------
This is a dummy patch for testing only
---------------------------------------------------------------
Execution of 'sh /opt/oracle/product/11/db_4/.patch_storage/
    6198642_May_07_2007_00_50_36/original_patch/custom/scripts/
    post -rollback 6198642 ':

Return Code = 0

Running the "apply" sql script "/opt/oracle/product/11/db_4/cpu/
    CPUDummy2007/catcpu.sql" with reference to 'patchmd.xml' file 
    for the patch "6198642"...

The local system has been patched and can be restarted.

OPatch succeeded.

OPatch ran the necessary script (catcpu for Dummy2007) on the various instances (LSC08). It took only one step and 34 seconds on my notebook. This patch is a dummy patch; let’s wait for the January CPU to see whether it can be applied online ;-)

alter database add logfile size 1e7;

I am in the process of adding logfiles to a 10gR2 database.

SQL> alter database add logfile group 10 size 1e7;

Database altered.
$ ls -l
-rw-r-----   1 oracle   dba      10000896 Nov  1 15:00
    o1_mf_10_3lmq05ld_.log

The file size is 10,000,896 bytes.
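The file is not exactly 1e7 bytes. My reading (an assumption on my part, not an official formula) is that the requested size is rounded up to whole 512-byte redo blocks, plus one extra block for the file header:

```shell
# 1e7 bytes rounded up to whole 512-byte redo blocks, plus one more
# block, which I assume is the file header
blocks=$(( (10000000 + 511) / 512 ))
echo $(( (blocks + 1) * 512 ))    # prints 10000896, the size ls reports
```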

What about this:

SQL> alter database drop logfile group 10;

Database altered.
SQL> alter database add logfile size 1e;
alter database add logfile size 1e
*
ERROR at line 1:
ORA-00741: logfile size of (2251799813685248) blocks 
exceeds maximum logfile size

No way! Oracle does not want me to add a logfile of 1 exabyte!

Remember the logfile blocks are OS blocks of 512 bytes. Not Oracle blocks.

Ok, let’s try something else

SQL> alter database add logfile size 1p;
alter database add logfile size 1p
*
ERROR at line 1:
ORA-00741: logfile size of (2199023255552) blocks 
exceeds maximum logfile size

No, one Petabyte is not a realistic size for a log file.
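The block counts in these ORA-00741 messages are easy to verify: they are simply the requested size divided by the 512-byte OS block.

```shell
# 2^60 (1E) and 2^50 (1P) divided by the 512-byte OS log block
echo "1E = $(( 1152921504606846976 / 512 )) blocks"
echo "1P = $(( 1125899906842624 / 512 )) blocks"
```

This prints 2251799813685248 and 2199023255552 blocks, matching the two error messages above.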

I have one more try, but unfortunately it works :evil:

SQL> alter database add logfile size 1t;

...

It just takes ages…

$ ls -l
-rw-r-----   1 oracle   dba      1099511628288 Nov  1 14:49
    o1_mf_5_3lmpb6w6_.log
$ du -g o1_mf_5_3lmpb6w6_.log
6.09    o1_mf_5_3lmpb6w6_.log
$ df -gt .
Filesystem    GB blocks      Used      Free %Used Mounted on
/dev/u02_lv      140.00     19.32    120.68   14% /u02

The ls shows the file size has been set to 1T, and 6 gigabytes have been allocated so far. Since I do not want to fill my filesystem, I just shutdown abort my instance and remove that file…

Display a blob

I have a table with a blob


create table t(b blob);
insert into t values ('585858');

In 11g sql*plus, I can display raw data

select b from t;
B
------
585858

Ok, but if I want to display XXX (the character content)


select utl_raw.cast_to_varchar2(b) from t;
UTL
--- 
XXX

However, in sql, a raw cannot be more than 2000 bytes long.
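Outside the database you can check the decoding by hand: each pair of hex digits 58 is the byte 0x58, decimal 88, i.e. the ASCII letter X. This little loop is only an illustration of the same hex-to-character conversion, not an Oracle tool:

```shell
# the raw 585858 is the byte 0x58 (decimal 88, ASCII 'X') three times;
# decode each byte via printf %b and an octal escape
for b in 88 88 88; do
  printf '%b' "\\0$(printf '%03o' "$b")"
done
echo
```

It prints XXX, the same as utl_raw.cast_to_varchar2 above.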

Another way to print your blob content is to use DBMS_LOB.CONVERTTOCLOB


var c clob
set autoprint on
declare
  b blob;
  dest_offset integer := 1 ;
  src_offset  integer := 1 ;
  lang_context integer:= 1 ;
  warning integer;
begin
  select b into b from t for update;
  dbms_lob.createtemporary(:c,true);
  dbms_lob.converttoclob(
    :c, b, DBMS_LOB.LOBMAXSIZE,
    dest_offset, src_offset,
    1, lang_context, warning);
end;
/
C
---
XXX

Please RTFOM !

Today I opened two SRs about flashback archive in 11g. In one of them, I complained that user SCOTT was not allowed to create a flashback archive. In the doc that I downloaded a few weeks ago, I read:
Prerequisites
You must have the FLASHBACK ARCHIVE ADMINISTER system privilege to create a flashback data archive. This privilege can be granted only by a user with DBA privileges. In addition, you must have the CREATE TABLESPACE system privilege to create a flashback data archive, as well as sufficient quota on the tablespace in which the historical information will reside.

So, as I was getting an ORA-55611, I opened an SR. The support engineer pointed me to the online documentation, where I was astonished to read:
Prerequisites
You must have the FLASHBACK ARCHIVE ADMINISTER system privilege to create a flashback data archive. In addition, you must have the CREATE TABLESPACE system privilege to create a flashback data archive, as well as sufficient quota on the tablespace in which the historical information will reside. To designate a flashback data archive as the system default flashback data archive, you must be logged in as SYSDBA.

Well, Read The Fine Online Manual!!!

The second SR is related to long retention (about the age of the earth)


SQL> alter flashback archive fba01 
  modify retention 4106694757 year;

Flashback archive altered.

SQL> select retention_in_days
  from DBA_FLASHBACK_ARCHIVE;
RETENTION_IN_DAYS
-----------------
                1

:mrgreen:

Oracle Database 11g: The Top Features for DBAs and Developers

I am always delighted to read the top features by Arup Nanda.

He started his 11g series : Oracle Database 11g: The Top Features for DBAs and Developers

There are many partitioning enhancements. The most exciting feature for me is INTERVAL partitioning. In 10g and before, range partitioning by dates required that the partitions be defined before the values are inserted, which was a major cause of downtime and wasted storage.

Now we have automatic partition creation :-D


create table t(d date) 
partition by range(d) 
interval(interval '1' month) 
(partition p1 values less than (date '0001-01-01'));

One partition must be created manually; here that partition will contain all dates from 1-JAN-4712BC to 31-DEC-0000 (which is not a legal date, by the way)

There is also new syntax to query the partition

SQL> insert into t values (date '2000-01-10');

1 row created.

SQL> insert into t values (date '2000-01-20');

1 row created.

SQL> insert into t values (date '2000-03-30');

1 row created.

SQL> select * from t partition for (date '2000-01-01');
D
-------------------
10.01.2000 00:00:00
20.01.2000 00:00:00

Note the syntax can be used in any form of partitioning. Here in a list-list composite


SQL> create table t(x number, y number) 
  partition by list(x) 
  subpartition by list(y) 
    subpartition template (
      subpartition sp1 values(1),
      subpartition sp2 values(2)) 
  (partition values(1), partition values(2));

Table created.

SQL> insert into t values(1,2); 
1 row created.

SQL> select * from t subpartition for (1,2);
         X          Y
---------- ----------
         1          2

Ok, one more feature Arup introduced is the REF partitioning, where you have a schema with both the parent and child tables partitioned, and you want to partition on a column of the parent table that is not in the child table (as you had bitmap join indexes, you have now ref partitions). Check it on his site.

Finally Arup explained SYSTEM partitioning, which is not inconceivable, but will hardly be used.

Imagine you have a table containing just one single LOB column, and a LOB cannot be used as a partition key.

SQL> create table t(x clob)
  partition by system (
    partition p1, 
    partition p2, 
    partition p3, 
    partition p4);

Table created.

So far this seems fine. So what is the problem? You cannot insert into that table!

SQL> insert into t values(1);
insert into t values(1)
            *
ERROR at line 1:
ORA-14701: partition-extended name or bind variable
must be used for DMLs on tables partitioned by the
System method

So you must specify in which partition you want to add the data. For example round robin. Or random. Whatever.


SQL> insert into t partition (P1)  values ('x');

1 row created.

SQL> insert into t partition (P2)  values ('y');

1 row created.

If you want to use bind variables, you can use dataobj_to_partition


SQL> select object_id 
  from user_objects 
  where object_name='T' 
    and subobject_name is not null;
 OBJECT_ID
----------
     55852
     55853
     55854
     55855

SQL> var partition_id number
SQL> exec :partition_id := 55852

PL/SQL procedure successfully completed.

SQL> insert into t 
  partition (dataobj_to_partition("T",:partition_id))
  values ('x');

1 row created.
SQL> exec :partition_id := 55853

PL/SQL procedure successfully completed.

SQL> insert into t 
  partition (dataobj_to_partition("T",:partition_id))
  values ('x');

1 row created.

Actually, the name SYSTEM partitioning is misleading: YOU are responsible for choosing the partition in which you want to insert, not the system :mrgreen:

flashback archive table

One of the problems with flashback queries in 10g and before is that you never know whether they will work; in particular, you cannot expect flashback queries to work on very old tables.

Let’s imagine you want to export your CUSTOMER as of 30/6/2007. No chance in 10g…

Well, with 11g you can create a flashback archive, and it will save all changes until the end of the retention period (many years if you want).

Here it is :

SQL> connect / as sysdba
Connected.
SQL> create tablespace s;

Tablespace created.

SQL> create flashback archive default fba01 tablespace s 
  retention 1 month;

Flashback archive created.

SQL> connect scott/tiger
Connected.
SQL> create table t(x number) flashback archive;

Table created.

SQL> host sleep 10

SQL> insert into t(x) values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;
GET_SYSTEM_CHANGE_NUMBER
------------------------
                  337754

SQL> update t set x=2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from t as of scn 337754;
         X
----------
         1

SQL> alter table t no flashback archive;

Table altered.

SQL> drop table t;

Table dropped.

SQL> select FLASHBACK_ARCHIVE_NAME, RETENTION_IN_DAYS,
  STATUS from DBA_FLASHBACK_ARCHIVE;
FLASHBACK_ARCHIVE_NAME RETENTION_IN_DAYS STATUS
---------------------- ----------------- -------
FBA01                                 30 DEFAULT

SQL> connect / as sysdba
Connected.
SQL> drop flashback archive fba01;

Flashback archive dropped.

SQL> drop tablespace s;

Tablespace dropped.

Note that a month is 30 days. If you try to create a flashback archive in a non-empty tablespace, you may get
ORA-55603: Invalid Flashback Archive command
which is not a very helpful message

on delete cascade

The purpose of a referential integrity constraint is to enforce that each child record has a parent.


SQL> CREATE TABLE DEPT
  2    (DEPTNO NUMBER PRIMARY KEY,
  3    DNAME VARCHAR2(10)) ;

Table created.

SQL> CREATE TABLE EMP
  2    (EMPNO NUMBER PRIMARY KEY,
  3    ENAME VARCHAR2(10),
  4    DEPTNO NUMBER
  5      CONSTRAINT EMP_DEPT_FK
  6      REFERENCES DEPT(deptno));

Table created.

SQL> INSERT INTO DEPT(deptno,dname) VALUES
  2    (50,'CREDIT');

1 row created.

SQL> INSERT INTO EMP(EMPNO,ENAME,DEPTNO) VALUES
  2    (9999,'JOEL',50);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> DELETE DEPT WHERE DEPTNO=50;
DELETE DEPT WHERE DEPTNO=50
*
ERROR at line 1:
ORA-02292: integrity constraint (SCOTT.EMP_DEPT_FK) violated
 - child record found

I cannot delete this department, because the department is not empty. Fortunately :!:

Let’s redefine the constraint with a DELETE CASCADE clause


SQL> alter table emp drop constraint emp_dept_fk;

Table altered.

SQL> alter table emp add constraint emp_dept_fk
  2  foreign key (deptno) references dept(deptno)
  3  on delete cascade;

Table altered.

SQL> DELETE DEPT WHERE DEPTNO=50;

1 row deleted.

SQL> select * from emp where ename='JOEL';

no rows selected

Note the line 1 row deleted. This is evil :evil: I have deleted a department, and there were employees in it, but I got no error, no warning and no feedback about the delete on EMP.

Instead of improving data quality, the ON DELETE CASCADE foreign key constraint here silently deleted rows. Joel will phone you one day and ask why he has been deleted…

There is one more clause of the foreign key, which sets the referencing column to NULL


SQL> INSERT INTO DEPT(deptno,dname) VALUES
  2    (60,'RESTAURANT');

1 row created.

SQL> INSERT INTO EMP(EMPNO,ENAME,DEPTNO) VALUES
  2    (9998,'MARC',60);

1 row created.

SQL> alter table emp drop constraint emp_dept_fk;

Table altered.

SQL> alter table emp add constraint emp_dept_fk
  2  foreign key (deptno) references dept(deptno)
  3  on delete set null;

Table altered.

SQL> DELETE DEPT WHERE DEPTNO=60;

1 row deleted.

SQL> select * from emp where ename='MARC';

     EMPNO ENAME          DEPTNO
---------- ---------- ----------
      9998 MARC

Marc has no department, because his department has been deleted. Again, no feedback, no warning, no error.

Instead of improving data quality, the ON DELETE SET NULL foreign key constraint here silently set columns to NULL. Marc will wonder why he gets no invitations to the department meetings.

What could be worse???

Triggers of course! Triggers not only remove rows in child tables; they can also do very weird things, like updating another table, changing the values you are trying to insert, outputting a message, etc.

Also triggers are programmed by your colleagues, so they must be full of bugs :twisted:

You cannot imagine the number of problems that are caused by triggers and revealed only when tracing.

I once had something like

SQL> CREATE INDEX I ON T(X);

P07431B processed

Well, after enabling tracing, I discovered one trigger that fired on any DDL, and the trigger was doing nothing but this distracting dbms_output for “debugging” purposes. As you can guess, searching Google and Metalink for that message did not help much…

errorlogging in 11g

This is a very neat feature in 11g.

I have a script called foo.sql


create table t(x number primary key);
insert into t(x) values (1);
insert into t(x) values (2);
insert into t(x) values (2);
insert into t(x) values (3);
commit;

It is plain to see that this script will return an error, but which one?

Let’s errorlog !


SQL> set errorl on
SQL> @foo

Table created.

1 row created.

1 row created.

insert into t(x) values (2)
*
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.SYS_C004200) violated

1 row created.

Commit complete.

SQL> set errorl off
SQL> select timestamp,script,statement,message from sperrorlog;
TIMESTAMP  SCRIPT  STATEMENT
---------- ------- ---------------------------
MESSAGE
---------------------------------------------------------
11:18:56   foo.sql insert into t(x) values (2)
ORA-00001: unique constraint (SCOTT.SYS_C004200) violated

There is also a huge bonus :-D

You can use it with 9i and 10g databases too! Only the client must be 11g. To download the 11g client only, go to the Oracle E-Delivery website

Even though it is small, this is one of my favorite new features!

How to compare schemas

If you have received DDL statements from your developers and you want to check whether they match the current state of the development database, because the developers have made a lot of changes in a quick and undocumented manner, what are your options?

I found this handy feature in Toad :
1) I create my objects on a separate database with the ddl I received from development
2) I compare the schema they use with the schema I created in Toad
–> Database –> Compare –> Schema
I select the options I want:
–> functions, indexes, packages, procedures, triggers, tables, view
I select the Reference and Comparison connections/schemas. Then I click compare
3) I receive the result:
(only) 29 differences
4) the real bonus: I receive a script to bring the live schema in line with the DDL I received. Undocumented changes should never happen, so I have a word with the developers


drop index foo;
drop table bar;
alter table gaz drop column bop;
alter table gaz modify (quux null);

This is not meant to be executed blindly; some changes are simply impossible to implement this way, but for my little test I was happy to discover that function

I have been using ERwin for this purpose before, but the version I have (4.1) is very buggy and does not support a lot of syntaxes (e.g. deferred constraints, create view v as select cast(1 as number(1)) x from dual, etc.). Also, ERwin can only compare against the current model, so there is no direct comparison between two database schemas.

ORA-01466: unable to read data – table definition has changed

I have re-edited this post and it is still unresolved. I thought it was related to system time, but apparently not :o


SQL> create table t(x number);

Table created.

SQL> set transaction read only ;

Transaction set.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed

If I wait one minute after my create table statement, it works


SQL> drop table t;

Table dropped.

SQL> create table t(x number);

Table created.

SQL> host sleep 60

SQL> set transaction read only;

Transaction set.

SQL> select * from t;

no rows selected

:twisted:

v$session_longops in 10gR2

I have read an excellent overview of DBMS_APPLICATION_INFO in Andy Campbell's blog on If only…

I have written a procedure P which executes 5 statements, and I want to set the name of the procedure P as the target. I do not want to hardcode the name, however, so I can use $$PLSQL_UNIT in 10gR2 (documented in Using Predefined Inquiry Directives With Conditional Compilation)

Ok, here I go


create or replace procedure p is
   rindex binary_integer;
   slno binary_integer;
   sofar number;
   totalwork number;
   target number;
begin
   totalwork := 5;
   sofar := 0;
   select object_id 
   into target 
   from user_objects 
   where object_name = $$PLSQL_UNIT;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
   -- do something 
   sofar:=sofar+1;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
   -- do something 
   sofar:=sofar+1;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
   -- do something 
   sofar:=sofar+1;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
   -- do something 
   sofar:=sofar+1;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
   -- do something 
   sofar:=sofar+1;
   DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
     rindex,slno,'Executing...',target,null,
     sofar,totalwork,null,'Statement');
end;
/

Now I execute it


SQL> exec p

PL/SQL procedure successfully completed.

and monitor it


SQL> select sid,opname,target,sofar,totalwork,units
  2   from v$session_longops;
 SID OPNAME       TARGET   SOFAR TOTALWORK UNITS    
---- ------------ -------- ----- --------- ---------
 538 Executing... SCOTT.P      5         5 Statement

create your database with dbca

With dbca you can speed up the process of creating databases in your company. You can also run it in silent mode and create exactly the database you want, with your own redo/undo sizes and parameter settings. You create the template once and use it many times. Fast and easy :-D

I have one database which I created with SQL*PLUS called LSC01.

1) Create template lsc-template
dbca
–> Manage Templates
–> Create a database template
–> from an existing database (structure as well as data)
–> LSC01
–> lsc-template
–> convert the file locations to use OFA structure

This takes some space on disk but speeds up database creation. Technically speaking, it takes a compressed RMAN backup that will later be restored, and restoring is way faster than CREATE DATABASE

2) Create database LSC99 in silent mode or progressOnly mode.
dbca -silent -createDatabase -templateName lsc-template.dbc -gdbName LSC99.lcsys.ch

It took me only two minutes to create my database on my notebook!

Try it! Of course I expect comments on this post :mrgreen:

sequence / thread / system change number

I have often seen confusion between the sequence and the system change number.

The sequence is a fairly small number: it is the number of log switches since database creation (it can be reset by OPEN RESETLOGS).

The thread is only relevant in RAC. In single-instance database, it is always 1.

You can find the current sequence number with the following query


select sequence#,thread# 
from v$log 
where status='CURRENT';

The system change number (SCN) is a much higher number, which is continuously increasing, even when you do nothing. The dbms_flashback package has a function to return the current system change number:


select dbms_flashback.get_system_change_number
from dual;

In 10g, there is a standard function to get the current system change number


select timestamp_to_scn(current_timestamp) from dual;

Before a major application/database upgrade, it is good practice to make a backup and write down the SCN for an easier recovery procedure.

read my comment about non-accuracy of timestamp_to_scn

dbms_xplan and v$sql_plan

do not miss yas' comment!

tested in 10.2


create or replace procedure 
  explain_plan(hash_value number) 
is begin
  insert into plan_table 
  select 
    null,
    (select nvl(max(plan_id),0)+1 from plan_table),
    timestamp,
    remarks,
    operation,
    options,
    object_node,
    object_owner,
    object_name,
    object_alias,
    null,
    object_type,
    optimizer,
    search_columns,
    id,
    parent_id,
    depth,
    position,
    cost,
    cardinality,
    bytes,
    other_tag,
    partition_start,
    partition_stop,
    partition_id,
    other,
    other_xml,
    distribution,
    cpu_cost,
    io_cost,
    temp_space,
    access_predicates,
    filter_predicates,
    projection,
    time,
    qblock_name
  from v$sql_plan
  where hash_value=explain_plan.hash_value;
  for f in (
    select PLAN_TABLE_OUTPUT from table(dbms_xplan.DISPLAY)) 
  loop 
    dbms_output.put_line(f.PLAN_TABLE_OUTPUT); 
  end loop;
end;
/

SQL> set lin 200 trims on pages 0 serverout on
SQL> select ename from emp where empno=7788;
ENAME
----------
SCOTT

SQL> select hash_value from v$sql where sql_text like
  2  'select ename from emp where empno=7788%';
HASH_VALUE
----------
1019401098

SQL> exec explain_plan(1019401098)
Plan hash value: 4066871323
------------------------------------------------------
| Id  | Operation                   | Name   | Rows  |
------------------------------------------------------
|   0 | SELECT STATEMENT            |        |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |
|*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |
------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("EMPNO"=7788)

PL/SQL procedure successfully completed.

constraints for referential integrity

On the developpez.net forums I answered a question about referential integrity: how can you delete or update a parent row when child rows exist?


SQL> create table continent(
  2    name varchar2(10),
  3    constraint continent_pk primary key(name));

Table created.

SQL> create table country(
  2    name varchar2(10),
  3    continent varchar2(10),
  4    constraint country_pk
  5      primary key(name),
  6    constraint country_continent_fk
  7      foreign key(continent) references continent);

Table created.

SQL> insert into continent values('Africa');

1 row created.

SQL> insert into country values('Benin', 'Africa');

1 row created.

CONTINENT
NAME
----------
Africa

COUNTRY
NAME       CONTINENT
---------- ----------
Benin      Africa

So in this default configuration, I cannot delete a parent row while a child exists.


SQL> delete continent where name='Africa';
delete continent where name='Africa'
*
ERROR at line 1:
ORA-02292: integrity constraint (AUDBA.COUNTRY_CONTINENT_FK)
violated - child record found

I could specify CASCADE to delete the child rows automatically


SQL> alter table country
  2    drop constraint country_continent_fk;

Table altered.

SQL> alter table country
  2    add constraint country_continent_fk
  3    foreign key(continent)
  4    references continent
  5    on delete cascade;

Table altered.

SQL> delete continent where name='Africa';

1 row deleted.

CONTINENT
NAME
----------

COUNTRY
NAME       CONTINENT
---------- ----------

but this is dangerous: when I read 1 row deleted, I am not informed that I have also deleted rows in the child table.

I could rather set the column to null


SQL> rollback;

Rollback complete.

SQL> alter table country
  2    drop constraint country_continent_fk;

Table altered.

SQL> alter table country
  2    add constraint country_continent_fk
  3    foreign key(continent)
  4    references continent
  5    on delete set null;

Table altered.

SQL> delete continent where name='Africa';

1 row deleted.

CONTINENT
NAME
----------

COUNTRY
NAME       CONTINENT
---------- ----------
Benin

Ok, but what if I need to rename Africa to EMEA? I can do this neither on the child nor on the parent


SQL> rollback;

Rollback complete.

SQL> 
SQL> update continent
  2  set name='EMEA'
  3  where name='Africa';
update continent
*
ERROR at line 1:
ORA-02292: integrity constraint (AUDBA.COUNTRY_CONTINENT_FK) 
violated - child record found

SQL> update country
  2  set continent ='EMEA'
  3  where continent ='Africa';
update country
*
ERROR at line 1:
ORA-02291: integrity constraint (AUDBA.COUNTRY_CONTINENT_FK) 
violated - parent key not found

So I could defer the constraint validation to the commit


SQL> alter table country
  2    drop constraint country_continent_fk;

Table altered.

SQL> alter table country
  2    add constraint country_continent_fk
  3    foreign key(continent)
  4    references continent
  5    deferrable initially deferred;

Table altered.

SQL> update continent
  2  set name='EMEA'
  3  where name='Africa';

1 row updated.

SQL> update country
  2  set continent='EMEA'
  3  where continent ='Africa';

1 row updated.

SQL> commit;

Commit complete.

CONTINENT
NAME
----------
EMEA

COUNTRY
NAME       CONTINENT
---------- ----------
Benin      EMEA

You can use triggers too, but constraints are more efficient than triggers.

How to avoid ORA errors when dropping nonexistent objects

There was a similar question in the OTN forums today. Ok, when I have to run a script in production, the operators complain about errors like ORA-00942: table or view does not exist. Of course I can provide some documentation to explain what can be ignored, but then they tend to ignore all ORA errors.

A script that creates table t will drop table t first if it exists. There is no CREATE OR REPLACE TABLE command, so I will simply check the dictionary and drop only if the table exists.


exec for f in (select 1 from user_tables where 
  table_name='T') loop execute immediate 
  'drop table t cascade constraints'; end loop
create table t(x number);

For context, directory, function, indextype (9iR1), java, library, operator, outline, package, procedure, synonym (9iR2), trigger, type, view, it is possible to use the create or replace syntax.

create or replace public synonym x for y;

For indexes (in case they are not dropped with the table)


exec for f in (select 1 from user_indexes where 
  index_name='I') loop execute immediate 
  'drop index i'; end loop
create index i on t(x);

For sequences


exec for f in (select 1 from user_sequences where 
  sequence_name='S') loop execute immediate 
  'drop sequence s'; end loop
create sequence s;
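The pattern is the same in any dictionary-driven database: query the catalog first, and drop only when the object is listed. A sketch of that check-then-drop logic using Python's sqlite3 for illustration (sqlite_master plays the role of user_tables; the helper name is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def drop_if_exists(conn, name, kind="table"):
    """Drop an object only if the dictionary says it exists,
    mimicking the PL/SQL loop over user_tables/user_indexes."""
    found = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = ? AND name = ?",
        (kind, name)).fetchone()
    if found:
        conn.execute("DROP {} {}".format(kind.upper(), name))

drop_if_exists(conn, "t")                  # no error even though T does not exist yet
conn.execute("CREATE TABLE t (x NUMBER)")
drop_if_exists(conn, "t")                  # now it exists, so it is dropped
conn.execute("CREATE TABLE t (x NUMBER)")  # the script is rerunnable
```

Running the script twice in a row produces no errors, which is the whole point: the operators see a clean log.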

Let’s try


SQL> exec for f in (select 1 from user_tables where tabl

PL/SQL procedure successfully completed.

SQL> create table t(x number);

Table created.

SQL> exec for f in (select 1 from user_indexes where ind

PL/SQL procedure successfully completed.

SQL> create index i on t(x);

Index created.

SQL> exec for f in (select 1 from user_sequences where s

PL/SQL procedure successfully completed.

SQL> create sequence s;

Sequence created.

End of complaints from production people :mrgreen:

v$sql and bind variable

When you see something like

select * from t where x = :1

you may wonder what :1 is.

Ok, here is a quick join I tested in 10gR2:

SQL> var y varchar2(255)
SQL> exec :y:='SCOTT'

PL/SQL procedure successfully completed.

SQL> select job from emp where ename=:y;
JOB
---------
ANALYST

SQL> select sql_text,name,value_string,datatype_string
  2  from v$sql_bind_capture join v$sql using (hash_value)
  3  where sql_text like
  4    'select job from emp where ename=:y%';
SQL_TEXT                              NAME VALUE DATATYPE_STRING
------------------------------------- ---- ----- ---------------
select job from emp where ename=:y    :Y   SCOTT VARCHAR2(2000)

CPU2007Apr

I just downloaded and installed the Critical Patch Update April 2007

$ lsnrctl stop 
...
The command completed successfully
$ sqlplus / as sysdba
SQL> shutdown immediate
...
ORACLE instance shut down.
SQL> quit
$ cd /tmp/5901891
$ $ORACLE_HOME/OPatch/opatch apply
... Finish at Wed Apr 18 10:28:17 CEST 2007
$ lsnrctl start
...
The command completed successfully
$ sqlplus / as sysdba
SQL> startup
...
Database opened.
SQL> @?/cpu/CPUApr2007/catcpu
SQL> @?/rdbms/admin/utlrp

The logfile of opatch is $ORACLE_HOME/cfgtoollogs/opatch/opatchdate.log and the logfile of the catcpu is APPLY_sid_date.log. ORA-02303 can be safely ignored.

SQL> select * from dba_registry_history

ACTION_TIME
------------------------------
ACTION
------------------------------
NAMESPACE
------------------------------
VERSION
------------------------------
        ID
----------
COMMENTS
------------------------------
18-APR-07 10.38.09.565465 AM
CPU
SERVER
10.2.0.3.0
   5901891
CPUApr2007

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

Successfully applied 8-)

backup database keep forever logs

If you want to keep some backups forever, you maybe tried

RMAN> backup database keep forever logs;

Starting backup at 13.04.2007 13:58:04
...
backup will never be obsolete
archived logs required to recover from this backup 
will expire when this backup expires
...
Finished backup at 13.04.2007 13:58:23

but the problem is that the archived logs needed to recover this backup at any point in time after the backup will NEVER become obsolete.

You could well try the NOLOGS option, but this requires you to put the database in the MOUNT state.

RMAN> backup database keep forever nologs;

Starting backup at 13.04.2007 14:06:36
...
backup will never be obsolete
archived logs required to recover from this backup 
will not be kept
...
Finished backup at 13.04.2007 14:07:25

This is fine if you can stop your database, but you probably want an online backup. What's next?

Ok, here is the way to go: you do your online backup, then you mark what you want to keep!

First I backup the old archivelogs, because I do not need to keep those yet.

RMAN> backup archivelog all;

Starting backup at 13.04.2007 14:10:00
...
Finished backup at 13.04.2007 14:10:08

Now I do a backup plus archivelog (with a tag for simplicity)

RMAN> backup database tag backuplsc 
plus archivelog tag backuplsc;

Starting backup at 13.04.2007 14:10:42
...
Finished backup at 13.04.2007 14:11:00

Now I can mark my backup as keep

RMAN> change backup tag backuplsc keep forever;

...
keep attributes for the backup are changed
backup will never be obsolete
backup set key=405 RECID=116 STAMP=619798257
keep attributes for the backup are changed
backup will never be obsolete
backup set key=406 RECID=117 STAMP=619798260

Now if I do a delete obsolete, it will never delete my backup.

RMAN> backup database plus archivelog
Starting backup at 13.04.2007 14:16:46
...
Finished backup at 13.04.2007 14:17:10

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
...
Deleting the following obsolete backups and copies:
...
Deleted 7 objects

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time    
------- -- -- - ----------- ------------------- 
 #Pieces #Copies Compressed Tag
------- ------- ---------- ---
...
405     B  F  A DISK        13.04.2007 14:10:57 
1       1       YES        BACKUPLSC
406     B  F  A DISK        13.04.2007 14:11:00 
1       1       YES        BACKUPLSC
...

variable in a view

Imagine you have a view and you want a parameter in it. You cannot use a bind variable in a view, but you can use a function that returns a package variable, and that package variable can be set manually in your session.

Here we go

SQL> create or replace package p is n number; end p;
  2  /

Package created.

SQL> create or replace function f return number is 
  2  begin return p.n; end;
  3  /

Function created.

SQL> create or replace view v as select ename from 
  2  emp where empno=f;

View created.

SQL> select * from v;

no rows selected

SQL> exec p.n:=7788

PL/SQL procedure successfully completed.

SQL> select * from v;

ENAME
----------
SCOTT
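For illustration only, the same trick — a view filtering on a function that reads session state — can be sketched with Python's sqlite3 and create_function. The emp data and the name f are just borrowed from the example above; the state dictionary plays the role of the package variable P.N.

```python
import sqlite3

state = {"empno": None}  # plays the role of the package variable P.N

conn = sqlite3.connect(":memory:")
# f() plays the role of the PL/SQL function returning P.N
conn.create_function("f", 0, lambda: state["empno"])

conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(7788, "SCOTT"), (7839, "KING")])
conn.execute("CREATE VIEW v AS SELECT ename FROM emp WHERE empno = f()")

print(conn.execute("SELECT * FROM v").fetchall())  # [] - variable not set yet
state["empno"] = 7788                              # like: exec p.n := 7788
print(conn.execute("SELECT * FROM v").fetchall())  # [('SCOTT',)]
```

The view definition never changes; only the session state behind the function does.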

why is bitmap index not designed for OLTP

In case you do not know it yet, having a bitmap index on a low-cardinality column like GENDER (male/female) is a very bad practice in OLTP, because each insert locks a whole range of the index, blocking concurrent DML.

create table t(name varchar2(10), gender varchar2(10));
create bitmap index bi on t(gender);

+--------------------------------+   +--------------------------------+ 
| Session 1                      |   | Session 2                      |
+--------------------------------+   +--------------------------------+ 
| SQL> insert into t             |   |                                |
|  2   values('JO','MALE');      |   |                                |
|                                |   |                                |
| 1 row created.                 |   |                                |
|                                |   |                                |
|                                |   | SQL> insert into t             |
|                                |   |  2   values('JANE','FEMALE');  |
|                                |   |                                |
| SQL> commit;                   |   |                                |
|                                |   |                                |
| Commit complete.               |   |                                |
|                                |   |                                |
|                                |   |                                |
|                                |   | 1 row created.                 |
|                                |   |                                |
|                                |   |                                |
+--------------------------------+   +--------------------------------+ 

A pending transaction is blocking a portion of the index. So session 2 has to wait for transaction 1 to complete.

read comments on this post

user identified externally with SSL certificate

Today I configured my database to identify users with certificates.

Check my previous post listener with tcps to find out how to configure a listener with SSL, which is a prerequisite.

Ok, I have a listener.ora and a tnsnames.ora with SSL. I do not need a sqlnet.ora, the default values work.
listener.ora

LISTENER=
  (DESCRIPTION_LIST=
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=chltlxlsc1)(PORT=1521))
    )
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCPS)(HOST=chltlxlsc1)(PORT=15210))
    )
  )

tnsnames.ora

LSC07=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=chltlxlsc1)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcps)(HOST=chltlxlsc1)(PORT=15210))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=LSC07.lcsys.ch)
    )
  )

sqlnet.ora

# empty file

Now I have one user, lsc, who is going to connect with his own certificate. First, I need to create a wallet for that user: I run owm as lsc, create a new wallet and a new certificate request with the DN CN=lsc, copy/paste the certificate request into the CA server, import the user certificate, and save it in the system default, /etc/ORACLE/WALLETS/lsc.

Now I need to trust that certificate, so I export my user certificate to a file and make that file readable for oracle. As oracle, I run the wallet manager owm, import the trusted user certificate, and save it in the system default, /etc/ORACLE/WALLETS/oracle.

One more thing I need to configure is the wallet location for lsc in the sqlnet parameter file. To avoid conflicts, I create a specific file for that user in his home directory, called .sqlnet.ora (with a leading dot).

~lsc/.sqlnet.ora

WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY = /etc/ORACLE/WALLETS/lsc)
    )
  )

Last thing to do: create the user in sqlplus.

SQL> show parameter os_authent_prefix

NAME               VALUE
------------------ ------------------------------
os_authent_prefix  ops$

SQL> create user ops$lsc identified externally as 'CN=lsc';

User created.

SQL> grant create session to ops$lsc;

Grant succeeded.

Now I can remotely log on my database without a password.

SQL> connect /@LSC07
Connected.

listener with tcps

How can you use SSL to encrypt your network traffic?

Here is how I did it.

  1. Install Oracle Certification Authority 10.1.4
     (you need a CA to approve a certification request)

  2. Install Oracle Database 10gR2 Enterprise Edition with Advanced Security Options

  3. Start Wallet Manager
     from the Database Oracle Home, start $ORACLE_HOME/bin/owm

  4. Create a new wallet
     define a password

  5. Add a certificate request
     fill in the fields, or choose advanced:
     CN=yourhost,DC=yourdomain,DC=com

  6. Start OCA
     from the OCA home, run $ORACLE_HOME/oca/bin/ocactl start

  7. Open the OCA homepage
     open your browser on

     https://yourhost.yourdomain.com:6600/oca/user

     and install the ROOTca certificate in your browser

  8. Request a new certificate
     Server/SubCA Certificates – Request a certificate – paste the string from OWM

  9. Approve the certificate
     log on to the admin page,

     https://yourhost.yourdomain.com:6600/oca/admin

     define your identity, then in Certificate Management, select the certification request and approve it

  10. Import the certificate in OWM
     select the approved certificate and view its details;
     in owm, import the user certificate and copy/paste the BASE64 string.
     To get the CA string, download the CA certificate from

     https://yourhost.yourdomain.com:6600/oca/certDownload

  11. Save the wallet
     the certificate should show [Ready];
     select Autologin and save in the system default (/etc/ORACLE/WALLETS/oracle)

Ok, we are now ready to use the TCPS protocol

listener.ora
LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521)))

tnsnames.ora
LSC01=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=LSC01)))

Start the listener
$ lsnrctl start
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521)))
The command completed successfully

Test it!
$ sqlplus scott/tiger@lsc01
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
SQL>

Monitoring the age of the last backup with OEM

My customer wants to receive alerts if a database has not been backed up (either unsuccessful backup or no backup) for ages.

As the customer has Oracle Enterprise Manager Grid Control 10gR2, I started using User-Defined Metrics (UDM) yesterday. Thanks to Troy for his comment at OEM Generic Service.

I will do two checks: backup of the datafiles and backup of the redo logs.

Here are the steps:
- Open a database target (LSC01)
- Click on User-Defined-Metrics
- Create
- Metric Name = Age of datafile backup
- Type = Number
- Output = Single Value
- SQL Query: the age in hours since the oldest checkpoint time among the newest backups of each datafile
select (sysdate-min(t))*24 from
(
  select max(b.CHECKPOINT_TIME) t
  from v$backup_datafile b, v$tablespace ts, v$datafile f
  where INCLUDED_IN_DATABASE_BACKUP='YES'
  and f.file#=b.file#
  and f.ts#=ts.ts#
  group by f.file#
)
- Credentials : dbsnmp/*****
- Threshold Operator > Warning 24 Critical 48
- Repeat every 1 hour
- OK

Same for the redo logs, with a name of Age of redolog backup and a query of
select (sysdate-max(NEXT_TIME))*24 from v$BACKUP_REDOLOG
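The arithmetic of the datafile metric, (sysdate - min(max(checkpoint_time) per file)) * 24, can be checked with a small sketch; the function name and sample data below are made up:

```python
from datetime import datetime, timedelta

def datafile_backup_age_hours(checkpoints, now):
    """checkpoints: {file_no: [checkpoint_time of each backup of that file]}.
    Returns the hours since the oldest 'newest backup' across all datafiles,
    i.e. (sysdate - min(max(checkpoint_time) per file)) * 24."""
    newest_per_file = [max(times) for times in checkpoints.values()]
    return (now - min(newest_per_file)) / timedelta(hours=1)

now = datetime(2007, 4, 18, 12, 0)
backups = {
    1: [datetime(2007, 4, 16, 12, 0), datetime(2007, 4, 17, 12, 0)],
    2: [datetime(2007, 4, 18, 0, 0)],   # file 2 was backed up more recently
}
# file 1's newest backup is the oldest one: 24 hours ago, i.e. at the warning threshold
print(datafile_backup_age_hours(backups, now))
```

So a warning at 24 and a critical at 48 mean: alert when any datafile has gone a day (or two) without a successful backup.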

I am not going to do this for each instance, so I will create a monitoring template
- Setup
- Monitoring Templates
- Create
- Select the target (LSC01)
- Name = Age of last backup
- Metric Threshold : remove all metrics except Age of datafile backup and Age of redolog backup
- Policies : remove all policies from template
- OK

Start applying your new metric to your databases.
- Apply
- Add
- Select all your databases
- Continue
- dbsnmp/*** (if you have the same database password for dbsnmp on all databases, it is easier)

It is now possible to define alerts.
- Preferences
- Notification Rules
- Create
- Apply to specific targets: Add your production databases group
- Deselect Availability Down
- Metric: Add : Show all: Check User defined metric : Select : Age of datafile backup , Age of redolog backup
- Severity : Critical and Clear
- Policy : None
- Method : Email

After a while, you can monitor the metric with a historical graphic
User Defined Metric graph in OEM

search for a string in all tables of a schema

This is often asked on the forums. I also needed it a while ago while reverse-engineering a database model.

Here is today's solution:

1) select * and extract the first column found per table with regexp (10g)

SQL> select table_name,column_name from (select rownum,table_name, regexp_substr(dbms_xmlgen.getxml('select * from "'||table_name||'"'),'<[^>]*>&string</[^<]*>') column_name from user_tables) where length(column_name)!=0;
Enter value for string: 20
TABLE_NAME COLUMN_NAME
---------- ------------------------------
DEPT       <DEPTNO>20</DEPTNO>
EMP        <DEPTNO>20</DEPTNO>

SQL> select table_name,column_name from (select rownum,table_name, regexp_substr(dbms_xmlgen.getxml('select * from "'||table_name||'"'),'<[^>]*>&string</[^<]*>') column_name from user_tables) where length(column_name)!=0;
Enter value for string: KING
TABLE_NAME COLUMN_NAME
---------- ------------------------------
EMP        <ENAME>KING</ENAME>
BONUS      <ENAME>KING</ENAME>

2) add a where condition. This is much slower of course, because it scans each table once per column, but the output is somewhat nicer. It is also more sensitive to datatypes; here for numbers.

SQL> select table_name, column_name from (select rownum,table_name, column_name, dbms_xmlgen.getxml('select 1 from "'||table_name||'" where "'||column_name||'"=&number') x from user_tab_columns where data_type='NUMBER') where length(x)!=0;
Enter value for number: 3000
TABLE_NAME COLUMN_NAME
---------- ------------------------------
EMP        SAL
BONUS      COMM
SALGRADE   HISAL
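The heart of the first query is the regular expression '<[^>]*>&string</[^<]*>' applied to the dbms_xmlgen output. Here is a small sketch of that matching in Python, with a made-up XML fragment standing in for the dbms_xmlgen.getxml result:

```python
import re

def find_column(xml, value):
    """Return the first <COL>value</COL> fragment in a dbms_xmlgen-style
    document, like regexp_substr(..., '<[^>]*>&string</[^<]*>')."""
    m = re.search(r"<[^>]*>" + re.escape(str(value)) + r"</[^<]*>", xml)
    return m.group(0) if m else None

# hypothetical output of dbms_xmlgen.getxml('select * from "DEPT"')
dept_xml = "<ROWSET><ROW><DEPTNO>20</DEPTNO><DNAME>RESEARCH</DNAME></ROW></ROWSET>"
print(find_column(dept_xml, 20))      # matches <DEPTNO>20</DEPTNO>
print(find_column(dept_xml, "KING"))  # no match -> None
```

The matched fragment conveniently reveals both the column name (in the tags) and the value, which is why the SQL version prints it as-is.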

su in sqlplus

How to switch user in Oracle ?

One approach is to change the password :

SQL> connect / as sysdba
Connected.
SQL> select password from dba_users where username='SCOTT';

PASSWORD
------------------------------
F894844C34402B67

SQL> alter user scott identified by abc123;

User altered.

SQL> connect scott/abc123
Connected.
SQL> create table t...

SQL> connect / as sysdba
Connected.
SQL> alter user scott identified by values 'F894844C34402B67';

User altered.

but this is unfair: the user cannot log in for a while, and the password expiry policy is reset.

Sometimes you can simply use


SQL> alter session set current_schema=scott;

Session altered.

but this does not really mean a “su”, because you do not change your privileges. You just change the “default” schema.

Another approach is to use the BECOME USER privilege and undocumented upicui OCI function. But this will not work in sqlplus.

Thanks to Jonathan Lewis post today Proxy Users, I could imagine using the proxy functionality in sqlplus to do a su


SQL> create user su identified by secret123;

User created.

SQL> alter user scott grant connect through su;

User altered.

SQL> connect su[scott]/secret123;
Connected.

The connect username[proxy]/password@db is documented in the 10gR2 SQL*Plus reference

Restrict network access to listener

If only a limited number of db clients are authorized to access my listener, I can restrict access by setting just two parameters in sqlnet.ora.

TCP.VALIDNODE_CHECKING = yes
TCP.INVITED_NODES = (dbclient001,chltlxlsc1)

chltlxlsc1 is my db server; it must be included in the list, as it is required to start the listener locally.

From dbclient001, I can connect:

$ sqlplus scott/tiger@lsc02

SQL*Plus: Release 10.1.0.4.2 - Production on Wed Nov 22 09:47:43 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production

From dbclient002, I cannot connect

$ sqlplus scott/tiger@lsc02

SQL*Plus: Release 10.1.0.4.2 - Production on Wed Nov 22 09:48:26 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

ERROR:
ORA-12547: TNS:lost contact

alias oraver 2.0

Thanks to an anonymous comment on my post yesterday, I can now provide a more flexible version of my alias, which requires neither a running database nor the SYSDBA privilege.


$ alias oraver
oraver='echo '\''ORACLE_SID VERSION %CPU RSZ VSZ START_TIME'\'';awk -F: '\''/^[^ *#]/{print "printf \42%-9s %11s %5s %8s %8s %s\\n\42",$1,"$(ORACLE_HOME="$2,$2"/bin/sqlplus -v 2>/dev/null|cut -d\47 \47 -f3) $(ps -eo pcpu,rsz,vsz,start_time,args|sed -n \42s/ [o]ra_pmon_"$1".*//p\42)"}'\'' /etc/oratab |sh'
$ oraver

ORACLE_SID    VERSION  %CPU      RSZ      VSZ START_TIME
LSC01      10.2.0.2.0   0.0    12184   508448 10:09
LSC02       9.2.0.8.0   0.0     8420   303320 10:10
LSC03       9.2.0.8.0

Well, it is based on the sqlplus version, which is not necessarily the same as the database version (8.1.7.3 shows up as 8.1.7.0, for instance), but for most recent versions it should be OK.

The ps parameters may not work on all OSes; you can use -ef if you prefer.

ps -ef |grep pmon alternative

I just wrote a new alias to check whether the databases are up and running. I added the version, and a dash for non-running databases. It needs /etc/oratab and SYSDBA access to the database.

Here it is :

awk -F: '/^[^*# ]/{system("echo \42select \47+ "$1" \11\47||version from v\\$instance;\42|ORACLE_SID="$1" ORACLE_HOME="$2" "$2"/bin/sqlplus -s \42/ as sysdba\42 2>/dev/null|grep \42^+\42||echo \42- "$1"\42")}' /etc/oratab

+ LSC01         10.2.0.2.0
+ LSC02         9.2.0.8.0
- LSC03

Or, as an alias:

alias oraver='awk -F: '\''/^[^*# ]/{system("echo \42select \47+ "$1" \11\47||version from v\\$instance;\42|ORACLE_SID="$1" ORACLE_HOME="$2" "$2"/bin/sqlplus -s \42/ as sysdba\42 2>/dev/null|grep \42^+\42||echo \42- "$1"\42")}'\'' /etc/oratab'
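Both one-liners start by extracting the SID and ORACLE_HOME fields from /etc/oratab, skipping blank lines, comments and * entries. That parsing step can be sketched in Python (the sample oratab content below is made up):

```python
def parse_oratab(text):
    """Yield (sid, oracle_home) pairs from /etc/oratab content,
    skipping blank lines and lines starting with '*', '#' or a space,
    like the awk pattern /^[^*# ]/ with -F: field splitting."""
    for line in text.splitlines():
        if not line or line[0] in "*# ":
            continue
        fields = line.split(":")
        if len(fields) >= 2:
            yield fields[0], fields[1]

oratab = """# sample /etc/oratab
*:/u01/app/oracle/product/10.2.0/db_3:N
LSC01:/u01/app/oracle/product/10.2.0/db_3:Y
LSC02:/u01/app/oracle/product/9.2.0:N
"""
print(list(parse_oratab(oratab)))
```

From each (sid, home) pair, the aliases then run $home/bin/sqlplus with ORACLE_SID and ORACLE_HOME set accordingly.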