How do I store the counts of all tables …

My answer to the question above uses dbms_xmlgen:

SQL> select
  2    table_name,
  3    to_number(
  4      extractvalue(
  5        xmltype(
  6 dbms_xmlgen.getxml('select count(*) c from '||table_name))
  7        ,'/ROWSET/ROW/C')) count
  8  from user_tables;

TABLE_NAME                      COUNT
------------------------------ ------
DEPT                                4
EMP                                14
BONUS                               0
SALGRADE                            5
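
If an approximate count is good enough, a cheaper alternative is to read the optimizer statistics instead of scanning every table; NUM_ROWS is only as fresh as the last DBMS_STATS run:

SQL> select table_name, num_rows from user_tables;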

CPU2007Apr

I just downloaded and installed the Critical Patch Update April 2007.

$ lsnrctl stop 
...
The command completed successfully
$ sqlplus / as sysdba
SQL> shutdown immediate
...
ORACLE instance shut down.
SQL> quit
$ cd /tmp/5901891
$ $ORACLE_HOME/OPatch/opatch apply
... Finish at Wed Apr 18 10:28:17 CEST 2007
$ lsnrctl start
...
The command completed successfully
$ sqlplus / as sysdba
SQL> startup
...
Database opened.
SQL> @?/cpu/CPUApr2007/catcpu
SQL> @?/rdbms/admin/utlrp

The logfile of opatch is $ORACLE_HOME/cfgtoollogs/opatch/opatch<date>.log and the logfile of catcpu is APPLY_<sid>_<date>.log. ORA-02303 errors can be safely ignored.

SQL> select * from dba_registry_history;

ACTION_TIME
------------------------------
ACTION
------------------------------
NAMESPACE
------------------------------
VERSION
------------------------------
        ID
----------
COMMENTS
------------------------------
18-APR-07 10.38.09.565465 AM
CPU
SERVER
10.2.0.3.0
   5901891
CPUApr2007

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

Successfully applied 😎

backup database keep forever logs

If you want to keep some backups forever, you may have tried

RMAN> backup database keep forever logs;

Starting backup at 13.04.2007 13:58:04
...
backup will never be obsolete
archived logs required to recover from this backup 
will expire when this backup expires
...
Finished backup at 13.04.2007 13:58:23

but the problem is that the archived logs needed to recover this backup to any point in time after the backup will NEVER become obsolete.

You could also try the NOLOGS option, but this requires the database to be in MOUNT state.

RMAN> backup database keep forever nologs;

Starting backup at 13.04.2007 14:06:36
...
backup will never be obsolete
archived logs required to recover from this backup 
will not be kept
...
Finished backup at 13.04.2007 14:07:25

This is fine if you can stop your database. But you probably want an online backup. What’s next?

Ok, here is the way to go: you do your online backup, then you mark what you want to keep!

First I back up the existing archived logs, because those do not need to be kept forever.

RMAN> backup archivelog all;

Starting backup at 13.04.2007 14:10:00
...
Finished backup at 13.04.2007 14:10:08

Now I do a backup plus archivelog (with a tag for simplicity)

RMAN> backup database tag backuplsc 
plus archivelog tag backuplsc;

Starting backup at 13.04.2007 14:10:42
...
Finished backup at 13.04.2007 14:11:00

Now I can mark my backup as keep

RMAN> change backup tag backuplsc keep forever;

...
keep attributes for the backup are changed
backup will never be obsolete
backup set key=405 RECID=116 STAMP=619798257
keep attributes for the backup are changed
backup will never be obsolete
backup set key=406 RECID=117 STAMP=619798260

Now if I do a delete obsolete, it will never delete my backup.

RMAN> backup database plus archivelog
Starting backup at 13.04.2007 14:16:46
...
Finished backup at 13.04.2007 14:17:10

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
...
Deleting the following obsolete backups and copies:
...
Deleted 7 objects

RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time    
------- -- -- - ----------- ------------------- 
 #Pieces #Copies Compressed Tag
------- ------- ---------- ---
...
405     B  F  A DISK        13.04.2007 14:10:57 
1       1       YES        BACKUPLSC
406     B  F  A DISK        13.04.2007 14:11:00 
1       1       YES        BACKUPLSC
...
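
To double-check that the keep attributes stuck, you can report the obsolete backups again and list the tagged backup sets:

RMAN> report obsolete;
RMAN> list backup tag backuplsc;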

10’000 columns in a query

You cannot have more than 1000 columns in a table or view, but what about a query?

I tried

select 1,2,3,4,....,10000 from dual;

it seems to work fine. However, when increasing the number of columns further, I ran into various errors:

select 1,2,3,4,....,17000 from dual;

*
ERROR at line 1:
ORA-00600: internal error code, arguments: [15201], 
[], [], [], [], [], [], []

or even

select 1,2,3,4,....,50000 from dual;

*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
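
If you want to reproduce this without typing thousands of literals, here is a minimal SQL*Plus sketch that generates and runs such a wide query (the /tmp/wide.sql path is just an example):

set pagesize 0 feedback off linesize 120 trimspool on
spool /tmp/wide.sql
select decode(rownum,1,'select ',',')||rownum from dual connect by level <= 10000;
select 'from dual;' from dual;
spool off
@/tmp/wide.sql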

variable in a view

Imagine you have a view and you want a parameter in it. You cannot use a bind variable in a view. But you can have a function that returns a package variable, and that package variable can be set manually in your session.

Here we go

SQL> create or replace package p is n number; end p;
  2  /

Package created.

SQL> create or replace function f return number is 
  2  begin return p.n; end;
  3  /

Function created.

SQL> create or replace view v as select ename from 
  2  emp where empno=f;

View created.

SQL> select * from v;

no rows selected

SQL> exec p.n:=7788

PL/SQL procedure successfully completed.

SQL> select * from v;

ENAME
----------
SCOTT

long lines in ps output

On Solaris, the ps output is truncated, so if a command has many long parameters, you will not see them all.

If you want to see the parameters of a Java program, /usr/bin/ps -ef will not give you the expected output:

$ ps -ef | grep java
 ldapusr 10744 10692  0 09:50:23 ?        0:09
    /var/opt/mps/oracle/oid/jdk/bin/java -server -Djava
.security.policy=/var/opt/mp

So you can use /usr/ucb/ps awwx to get the full command line:

$ /usr/ucb/ps awwx  | grep java
 10744 ?        S  0:09 /var/opt/mps/oracle/oid/jdk/bin/
java -server -Djava.security.policy=/var/opt/mps/oracle/
oid/j2ee/oca/config/java2.policy -Djava.awt.headless=tru
e -Xmx256M -Djava.awt.headless=true -Doracle.ons.oracleh
ome=/var/opt/mps/oracle/oid -DIASHOST=novgaasdv01 -java.
library.path=/var/opt/mps/oracle/oid/lib32:/var/opt/mps/
oracle/oid/lib:/var/opt/mps/oracle/oid/jlib:/var/opt/mps
/oracle/oid/oca/lib -DORACLE_HOME=/var/opt/mps/oracle/oi
d -Doracle.home=/var/opt/mps/oracle/oid -Doracle.ons.ora
clehome=/var/opt/mps/oracle/oid -Doracle.home=/var/opt/m
ps/oracle/oid -Doracle.ons.oracleconfighome=/var/opt/mps
/oracle/oid -Doracle.ons.clustername=C_160.61.98.160.161
7189.1119d9146e1.-8000 -Doracle.ons.instancename=oid1014
.novgaasdv01 -Dopmn.compatible=904 -Doracle.ons.indexid=
oca.default_island.1 -Doracle.ons.uid=120848407 -Doracle
.oc4j.instancename=oca -Doracle.oc4j.islandname=default_
island -DOPMN=true -jar oc4j.jar -config /var/opt/mps/or
acle/oid/j2ee/oca/config/server.xml -properties -propert
ies -ports ajp:12502,rmi:12402,jms:12602

Apparently some OSes have a ps -w flag to get long lines. Read Note 395113.1 on Metalink (or man ps).

Configure OID with SSL

First you need to install OID. Check the Installation Guide and the documentation, and download the software. If you do not need the dbconsole, stop it (emctl stop dbconsole) and remove the oracle_home/hostname_sid directory.

Once you have a running OID, test it with ldapsearch. For this workshop, I use two servers and two usernames; having the client and the server share the same wallet is not a good idea. If you have an Oracle Database running as user oracle, prefer a different user for the OID installation. When installing OID, I specified the following staticports.ini (missing from the CD, bug 5936042):

Oracle HTTP Server port  = 44000
Oracle HTTP Server Listen port  = 44001
Oracle HTTP Server SSL port  = 44002
Oracle HTTP Server Listen (SSL) port  = 44003
Oracle HTTP Server Diagnostic port  = 44004
Java Object Cache port  = 44005
DCM Discovery port  = 44006
Oracle Notification Server Request port  = 44007
Oracle Notification Server Local port  = 44008
Oracle Notification Server Remote port  = 44009
Application Server Control port  = 44010
Application Server Control RMI port  = 44011
Oracle Management Agent port  = 44012
Log Loader port  = 44013
ASG port  = 44014
Oracle Internet Directory port  = 44015
Oracle Internet Directory (SSL) port  = 44016
Oracle Certificate Authority SSL Server 
    Authentication port  = 44017
Oracle Certificate Authority SSL Mutual
    Authentication port  = 44018

Ok, let’s do the search from another server

lsc@dbserver01 $ ldapsearch -h oidserver01 -p 44015 -z 1
orclreplicaid=oidserver01_oid1014,cn=replication configuration
ldap_search: Sizelimit exceeded

Now let’s try with SSL. First, with no authentication (-U 1).

lsc@dbserver01 $ ldapsearch -h oidserver01 -p 44016 -z 1 -U 1
orclreplicaid=oidserver01_oid1014,cn=replication configuration
ldap_search: Sizelimit exceeded

Fine. Let’s create the wallets. You need a wallet for your client (lsc@dbserver01) and a wallet for your server (ldapusr@oidserver01). Create a certificate request for CN=lsc,cn=users,dc=yourdomain,dc=com, export your user certificate from your client wallet (lsc@dbserver01) and import it as a trusted certificate in your OID wallet (ldapusr@oidserver01). It is the same procedure as described in the user identified externally with SSL certificate post (see below).

Then launch oidadmin (you can launch it from a PC client) and configure OID for SSL.
The user for OID is CN=ORCLADMIN.
In Oracle Internet Directory Servers –> cn=orcladmin@oidserver01 –> Server Management –> Directory Server, right-click on Configuration Set1 and choose Create Like. In Configuration Set2, choose the SSL Settings, specify SSL Client and Server Authentication, SSL only, file://etc/ORACLE/WALLETS/lsc, SSL Port 44019. Apply. Quit.

Start a second instance with oidctl where oid1014 is your SID

ldapusr@oidserver01 $ oidctl connect=oid1014 server=oidldapd
    instance=2 configset=2 start

Check the status :

ldapusr@oidserver01 $ oidctl connect=oid1014 server=oidldapd
    status
oidctl:Checking Oracle Internet Directory Processes ...

    Process oidmon is alive as PID = 16191 

    Checking OIDLDAPD instance 1  ...
    Process oidldapd (dispatcher) is alive as PID = 16197
        port=44015 sslport=44016
            oidldapd (server)     is alive as PID = 16206

    Checking OIDLDAPD instance 2  ...
    Process oidldapd (dispatcher) is alive as PID = 16422
        sslport=44019
            oidldapd (server)     is alive as PID = 16426

    Checking ODISRV instance 1  ...
    Process odisrv   is alive as PID = 16203

so far not bad!

Let’s pray !

$ ldapbind -h novgaasdv01.eu.novartis.net -p 44019  -U 3
    -P mywalletpasswd -W file://etc/ORACLE/WALLETS/lsc  
    -D cn=orcladmin -w myorcladminpasswd
bind successful

What a satisfaction 😀 I had been failing at this for days, mostly getting meaningless messages like UnKnown Error Encountered. You cannot do anything with that error: it could be a wallet path error, a wallet password error, a non-authorized certificate, a certification authority problem, or many other things. Really poor error messaging there. One may argue that a meaningless error message on unsuccessful login increases security, but it is a nightmare to debug 😮

Now I need to stop the non-secure part of it :

ldapusr@oidserver01 $ oidctl connect=oid1014 server=oidldapd
    instance=1 stop

Added 29.3.2007
If I want to use SSL to authenticate my user, I must create a user which matches my certificate, for example with the Security Console at http://oidserver01:44000/oiddas/ui/oideushome.

why is bitmap index not designed for OLTP

In case you do not know it yet, a bitmap index on a column like GENDER (male/female) is a very bad practice in OLTP, because each insert effectively locks the whole table against concurrent inserts:

create table t(name varchar2(10), gender varchar2(10));
create bitmap index bi on t(gender);

+--------------------------------+   +--------------------------------+ 
| Session 1                      |   | Session 2                      |
+--------------------------------+   +--------------------------------+ 
| SQL> insert into t             |   |                                |
|  2   values('JO','MALE');      |   |                                |
|                                |   |                                |
| 1 row created.                 |   |                                |
|                                |   |                                |
|                                |   | SQL> insert into t             |
|                                |   |  2   values('JANE','FEMALE');  |
|                                |   |                                |
| SQL> commit;                   |   |                                |
|                                |   |                                |
| Commit complete.               |   |                                |
|                                |   |                                |
|                                |   |                                |
|                                |   | 1 row created.                 |
|                                |   |                                |
|                                |   |                                |
+--------------------------------+   +--------------------------------+ 

A pending transaction blocks a portion of the index, so session 2 has to wait for the transaction of session 1 to complete.
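
While session 2 is stuck, a third session can see the lock wait (10g, where v$session exposes the blocker):

select sid, event, blocking_session
from v$session
where event like 'enq: TX%';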

Read the comments on this post.

How to load a BLOB into the database?

I was asked yesterday how to read and write BLOBs in the database.

With Java, read from an input stream:

ResultSet rs = connection
  .createStatement()
  .executeQuery("select myBlob from t");
rs.next();  // position on the first row
InputStream myBlobInputStream =
  rs.getBlob(1).getBinaryStream();

and write to an output stream:

ResultSet rs2 = connection
  .createStatement()
  .executeQuery("select myBlob from t for update");
rs2.next();  // "select for update" requires autocommit off
OutputStream myBlobStream =
  ((oracle.sql.BLOB) rs2.getBlob(1)).getBinaryOutputStream();

where connection is your java.sql.Connection.

You could also use PL/SQL and the DBMS_LOB API. There is a complete book (306 pages!) in the documentation about working with large objects: Application Developer’s Guide – Large Objects.
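
For example, loading a file into a BLOB column with DBMS_LOB could look like this minimal sketch (the directory object MY_DIR and the file x.gif are assumptions; t(x) is the table created just below):

declare
  b blob;
  f bfile := bfilename('MY_DIR','x.gif');  -- MY_DIR is a hypothetical directory object
begin
  insert into t(x) values (empty_blob()) returning x into b;
  dbms_lob.fileopen(f, dbms_lob.file_readonly);
  dbms_lob.loadfromfile(b, f, dbms_lob.getlength(f));
  dbms_lob.fileclose(f);
  commit;
end;
/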

Sometimes, you can use plain SQL.

SQL> create table t(x BLOB);
Table created

SQL> insert into t values(utl_raw.cast_from_number(1));
1 row inserted

SQL> select utl_raw.cast_to_number(x) from t;
UTL_RAW.CAST_TO_NUMBER(X)
-------------------------
                        1

A smart move may be to use SQL*Loader. You can specify one file per row:

LOAD DATA INFILE '/tmp/x.txt' INTO TABLE "T"
(name filler char(255), x lobfile(name) terminated by EOF)

and your data file /tmp/x.txt will look like

x.gif
y.gif

but you could also load a 10000-character-long field from your input file into a CLOB column, just by specifying VARCHARC(4,10000) as the datatype.
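
A minimal control-file sketch for that; the 4 is the size of the length prefix in front of each value, and the file name and table here are assumptions:

LOAD DATA INFILE '/tmp/y.dat' INTO TABLE "T2"
(c VARCHARC(4,10000))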

xhost + is a huge security hole

Maybe you have a firewall in your company, only your PC can access the production server, only you have the root password in prod, and your company spends a lot of money on security. But you dare use xhost +.

This is a huge security hole, because it gives anyone access to your X resources: not only your display, but also your mouse and your keyboard, so anyone can read/modify/corrupt what you are typing/clicking. It is a bad habit to use xhost +. Even xhost +localhost gives access to your keyboard to anyone on localhost…

So what else could you use ?

The simplest is probably ssh tunnelling.

lsc@chltlxlsc1:$ ssh -X oracle@localhost
Last login: Fri Mar 2 10:24:09 2007 from localhost
oracle@chltlxlsc1:$ if xterm -e true; then echo success; fi
success

Another way to do this is to use X cookies.

lsc@chltlxlsc1:$ xauth extract ~/my-x-cookie $DISPLAY
lsc@chltlxlsc1:$ setfacl -m u:oracle:r ~/my-x-cookie
lsc@chltlxlsc1:$ su - oracle -c "DISPLAY=$DISPLAY bash"
Password:
oracle@chltlxlsc1:$ if xterm -e true; then echo success; fi
Xlib: connection to ":0.0" refused by server
Xlib: No protocol specified
xterm Xt error: Can't open display: :0.0
oracle@chltlxlsc1:$ xauth merge ~lsc/my-x-cookie
xauth: creating new authority file ~oracle/.Xauthority
oracle@chltlxlsc1:$ if xterm -e true; then echo success; fi
success

No need to type all of that. Here is my alias:

alias oracle='
xauth extract $HOME/my-x-cookie $DISPLAY;
setfacl -m u:oracle:r $HOME/my-x-cookie;
su - oracle -c "export DISPLAY=$DISPLAY;
xauth merge $HOME/my-x-cookie;
bash"'

user identified externally with SSL certificate

Today I configured my database to identify users with certificates.

Check my previous post, listener with tcps, to find out how to configure a listener with SSL, which is a prerequisite.

Ok, I have a listener.ora and a tnsnames.ora with SSL. I do not need a sqlnet.ora; the default values work.
listener.ora

LISTENER=
  (DESCRIPTION_LIST=
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=chltlxlsc1)(PORT=1521))
    )
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCPS)(HOST=chltlxlsc1)(PORT=15210))
    )
  )

tnsnames.ora

LSC07=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=chltlxlsc1)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcps)(HOST=chltlxlsc1)(PORT=15210))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=LSC07.lcsys.ch)
    )
  )

sqlnet.ora

# empty file

Now I have one user, lsc, which is going to connect with its own certificate. First, I need to create a wallet for that user. I run owm as lsc, create a new wallet and a new certificate request with the DN CN=lsc, copy/paste my certificate request into the CA server, import the user certificate, and save it in the system default location, /etc/ORACLE/WALLETS/lsc.

Now I need to trust that certificate, so I export my user certificate to a file and make that file readable for oracle. As oracle, I run the wallet manager owm, import the trusted user certificate, and save it in the system default, /etc/ORACLE/WALLETS/oracle.

One more thing I need to configure is the wallet location for lsc in the sqlnet parameter file. To avoid conflicts, I create a specific sqlnet.ora for that user in his home directory, called .sqlnet.ora (with a leading dot).

~lsc/.sqlnet.ora

WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY = /etc/ORACLE/WALLETS/lsc)
    )
  )

Last thing to do : create the user in sqlplus.

SQL> show parameter os_authent_prefix

NAME               VALUE
------------------ ------------------------------
os_authent_prefix  ops$

SQL> create user ops$lsc identified externally as 'CN=lsc';

User created.

SQL> grant create session to ops$lsc;

Grant succeeded.

Now I can remotely log on to my database without a password.

SQL> connect /@LSC07
Connected.
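
To double-check how the session was authenticated, you can query the userenv context (10g; it should report SSL for this certificate-authenticated connection):

SQL> select sys_context('USERENV','AUTHENTICATION_METHOD') from dual;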

listener with tcps

How can you use SSL to encrypt your network traffic?

Here is how I did it.

  1. Install Oracle Certification Authority 10.1.4
     (you need a CA to approve a certificate request)
  2. Install Oracle Database 10gR2 Enterprise Edition with Advanced Security Options
  3. Start Wallet Manager:
     from the Database Oracle Home, start $ORACLE_HOME/bin/owm
  4. Create a new wallet and define a password
  5. Add a certificate request:
     fill the fields or choose Advanced:
     CN=yourhost,DC=yourdomain,DC=com
  6. Start OCA:
     from the OCA home, run $ORACLE_HOME/oca/bin/ocactl start
  7. Open the OCA homepage:
     open your browser on
     https://yourhost.yourdomain.com:6600/oca/user
     and install the ROOTca in your browser
  8. Request a new certificate:
     Server/SubCA Certificates – Request a certificate – paste the string from OWM
  9. Approve the certificate:
     log on the admin page,
     https://yourhost.yourdomain.com:6600/oca/admin
     define your identity, then in Certificate Management, select the certificate request and approve it
  10. Import the certificate in OWM:
     select the approved certificate and view its details.
     In owm, import the user certificate and copy/paste the BASE64 string.
     To get the CA string, download the CA certificate from
     https://yourhost.yourdomain.com:6600/oca/certDownload
  11. Save the wallet:
     the certificate should show [Ready].
     Select Autologin and save in the system default (/etc/ORACLE/WALLETS/oracle)

Ok, we are now ready to use the TCPS protocol

listener.ora
LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521)))

tnsnames.ora
LSC01=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=LSC01)))

Start the listener
$ lsnrctl start
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)
(HOST=yourhost.yourdomain.com)(PORT=1521)))
The command completed successfully

Test it!
$ sqlplus scott/tiger@lsc01
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
SQL>
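
To verify that the connection really came in over TCPS and not plain TCP, you can ask the session itself (10g):

SQL> select sys_context('USERENV','NETWORK_PROTOCOL') from dual;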

OEM 10gR3 is out!

emgrid 10.2.0.3 is not the second patchset of 10gR2; it is Oracle Enterprise Manager Grid Control Release 3.

Well, it is still called a patch set and must be applied to 10.2.0.1 or 10.2.0.2.

But it is a new release !

You can download the software and the documentation on OTN:
http://www.oracle.com/technology/software/products/oem/index.html

Among the new features, you have a Linux pack for your Unbreakable Linux, management packs for Siebel and Oracle Content Database, and also Linux Server Administration to manage your Linux servers (RHEL4 and SuSE9).

Monitoring the age of the last backup with OEM

My customer wants to receive alerts if a database has not been backed up (either an unsuccessful backup or no backup at all) for ages.

As the customer has Oracle Enterprise Manager Grid Control 10gR2, I started using User Defined Metrics (UDM) yesterday. Thanks to Troy for his comment on OEM Generic Service.

I would do two checks: backup of the datafiles and backup of the redo logs.

Here are the steps:
– Open a database target (LSC01)
– Click on User-Defined-Metrics
– Create
– Metric Name = Age of datafile backup
– Type = Number
– Output = Single Value
– SQL Query: the age in hours since the oldest checkpoint time of the newest backup of each datafile
select (sysdate-min(t))*24 from
(
select max(b.CHECKPOINT_TIME) t
from v$backup_datafile b, v$tablespace ts, v$datafile f
where INCLUDED_IN_DATABASE_BACKUP='YES'
and f.file#=b.file#
and f.ts#=ts.ts#
group by f.file#
)
– Credentials : dbsnmp/*****
– Threshold Operator > Warning 24 Critical 48
– Repeat every 1 hour
– OK

Do the same for the redo logs, with a metric name of Age of redolog backup and the query:
select (sysdate-max(NEXT_TIME))*24 from v$BACKUP_REDOLOG

I am not going to do this for each instance, so I will create a monitoring template
– Setup
– Monitoring Templates
– Create
– Select the target (LSC01)
– Name = Age of last backup
– Metric Threshold : remove all metrics except Age of datafile backup and Age of redolog backup
– Policies : remove all policies from template
– OK

Start applying your new metric to your databases.
– Apply
– Add
– Select all your databases
– Continue
– dbsnmp/*** (if you have the same database password for dbsnmp on all databases, it is easier)

It is now possible to define alerts.
– Preferences
– Notification Rules
– Create
– Apply to specific targets: Add your production databases group
– Deselect Availability Down
– Metric: Add : Show all: Check User defined metric : Select : Age of datafile backup , Age of redolog backup
– Severity : Critical and Clear
– Policy : None
– Method : Email

After a while, you can monitor the metric with a historical graph:
[figure: User Defined Metric graph in OEM]

Critical Patch Update

The Critical Patch Update for January has been released. However, not for every version.

Already with CPU Oct 2006, many versions were delayed. The release date of 10gR1 on Tru64, for example, was Nov 15, but the note has not been updated for months. 9.2.0.8 on Linux was released so late (29 Dec) that CPU2006OCT-9208 is considered to be CPU2007JAN (patch 5490859). For 10.2.0.3, there is no CPU2007JAN.

I applied both 9208-cpu2006oct and 10105-cpu2007jan to my notebook database.

Both complained about a non-existent XDB schema, but I ignored that (I have no XDB schema). Otherwise it went fine.

I apply the CPU as soon as it is released, for testing purposes. However, applying the CPU in PROD as soon as possible is not the practice of most of my customers.

RAC workshop

As announced, I spoke to Credit Suisse employees last week. The objective was to give the database project and team leaders the key arguments for going or not going to RAC.

First: what is RAC?
Oracle Real Application Clusters.
You have shared storage, and two (or more) instances accessing the same database.

The challenges:

    The consistency

In the single-instance model, there is read-write consistency. This may not change in RAC: if one session reads a table while someone else writes to it, RAC must offer the same level of consistency.

    Crash recovery

In single instance, if the instance dies (shutdown abort / kill -9 / server crash), there is an instance recovery which reads the committed and uncommitted transactions from the redo logs. In RAC, each instance has its own UNDO tablespace and a separate thread of redo. So if one instance crashes (server crash), the other instances will do the crash recovery. If all instances crash at the same time (ex: disaster), the first one to restart will have to recover all transactions.

    Performance

You have more OSes and more network traffic, but you should achieve performance comparable to a multi-processor server. Some Oracle marketing slides show a 4-node cluster of 2-CPU servers performing better than an 8-CPU server. This is a challenge.

    Scalability

This is rather a Linux topic. If you have Sparc/Solaris, you can scale on a single server to more than 100 CPUs and 1Tb of memory. But if you have Linux, RAC offers additional scalability. However, it is not guaranteed that your cluster will scale over 4 nodes.

    High Availability (HA)

This is THE challenge. It is also the main reason to go RAC. If you cannot afford the downtime of a failover (a few minutes, 1-30), you probably need RAC.

    Maintainability

It must not be more difficult to maintain RAC than a single-instance database. This is a very difficult question. On failover clusters, there are lots of in-house scripts to do the failover, a big mess most of the time. In RAC, there is no failover: if one instance crashes, it just needs to be restarted some day, that’s it! However, the configuration of the cluster is twice as difficult, because you have not 1 but 2 instances to configure. In case of tuning / troubleshooting, you have 2 instances to tune, but also the interconnect and the cache fusion; I talk about the latter below. If you have an SLA (service level agreement) per database, going to RAC is a pain. If you have an SLA per instance, you will have more instances to maintain, so you may increase your productivity.

You have (typically) two servers (or more). On each server you have memory (buffer cache / shared pool / PGA), CPU (load balancing / parallelism) and processes (pmon / dbwr / dispatchers / arch). If the servers have different sizes, you can have different settings: one instance can have a bigger cache, more CPUs used for parallel query, more dispatchers, more DB writers.

The spfile has the capability of configuring multiple instances in a single shared parameter file.

alter system set shared_pool_size=400M sid='INST01';

The dictionary has views to let you monitor either a single instance (V$SESSION) or the whole cluster (GV$SESSION).
select inst_id,sid,serial# from gv$session;
Note that if you want to kill a session, you must be connected as a privileged user on that instance, or wait for the next release of RAC.
There are also performance views about cache fusion and the interconnect.
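
For example, a quick look at the global cache wait events across all instances (the "gc" event-name prefix is the 10g naming):

select inst_id, event, total_waits, time_waited
from gv$system_event
where event like 'gc%'
order by time_waited desc;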

The cache fusion: in a single instance, you can read data from your cache safely. In RAC, if you have a block in the cache, another instance may update this block, so there must be a mechanism to ensure cache invalidation. This mechanism already existed in Oracle 6 Parallel Server (OPS) and later. Here is the principle: one instance is the lock master (the first one that started; not configurable). Any time an instance reads or writes a block, the block can be cached. If one instance has the block in its cache, another instance can read it from there.
Access time :
local cache << remote cache << disk
Let’s take an example:

INSTA: hello lock master, I want to read block 1008
LOCKM: nobody has this block, please read it from disk
LOCKM updates his table; he knows INSTA has the block
INSTA reads the block
INSTB: hello lock master, I want to read block 1008
LOCKM: wait, INSTA has the block, I tell INSTA to send you that block
LOCKM knows both INSTA and INSTB have the block
INSTA sends the block to INSTB
INSTB: hello lock master, I want to update block 1008
LOCKM informs INSTA that its version of block 1008 is now invalid
LOCKM knows INSTB has block 1008 in cache (valid)
LOCKM: please do your update

This is cache fusion. If the lock master crashes, a new global cache table is built on another instance, voted as lock master.

This mechanism has a performance impact: you need more CPU. If you are at 100% CPU usage (ex: batch job), it will have a dramatic performance impact. So you need sufficient CPU (cost of licenses 😈). Running at 100% CPU in RAC is not good.

Also, if you have a bigger buffer cache (OLTP), the probability of reading a block over the network increases, so it will generate network traffic. Network traffic is good, because the interconnect is faster than disk. But a network bottleneck is terrible.

If you have more nodes, you will also have more traffic. If you have latch contention on a single instance, going to RAC will increase your contention: RAC will not solve your single-instance performance problems. If you have a packaged application, be sure it supports RAC and read the benchmarks for that particular application.

I had a lot of questions about planned downtime. Here are some facts:
If you have a standard OS upgrade/patch, you may achieve zero downtime with RAC. However, your application must support connection failover (a challenge 😎).
If you have an interim patch, like a Critical Patch Update, RAC may offer zero downtime by applying the patch locally in a rolling upgrade manner. Not for all versions, not for all interim patches.
If you have a patchset (10.2.0.2 to 10.2.0.3) or a major version upgrade (10gR1 to 10gR2), you need a logical standby database to achieve a downtime smaller than one minute. RAC will not help. On the contrary, you have more servers and more Oracle Homes to patch, so eventually it will increase the downtime in comparison to single instance.

The cost of the infrastructure was also of great interest. If you have RAC, you may need a different cluster infrastructure (ex: Infiniband) and a more efficient protocol than IP over Ethernet (ex: Reliable Datagram Sockets). It is way more expensive than single-instance failover. And the more nodes you add, the faster your network must perform.

One student asked: “Why invest time and money in RAC when the application / webserver is not redundant?” This is a good question. You need to offer HA across the whole service, from the DBA to the client. If one server crashes, you need another server. If one network crashes, you need another network. If one application server crashes, you need another application server. If one operator is ill, you need another operator to do his job. Oracle Application Server offers HA with farms of J2EE services. If your application does not support TAF (transparent application failover), you need to change your application first!

Finally, about the skills of the RAC DBA: lots of tasks are almost as easy as, or even easier than, on a failover cluster. runinstaller / opatch / the database configuration assistant (dbca) / enterprise manager are cluster-aware. Installing an additional node with OEM or ORACLE_HOME/oui/bin/addNode.sh and adding an additional instance with dbca or OEM is simple. However, the DBA must have some cluster skills, and it is less easy, at least in Switzerland, to find a DBA with RAC experience than a “normal” DBA 😁

Fit for RAC

I will be speaking tomorrow in German and on Wednesday in English at Credit Suisse.

The workshop is about the differences between single instance and RAC.

Some of them: architecture, availability, scalability, manageability, skills required, price. I will also talk about the operational aspects: upgrade, dictionary, parameter file.

Paul Moen’s article on MONTHS_BETWEEN

I have been shocked by Paul Moen’s article on MONTHS_BETWEEN.

Here is my own case:
SQL> select months_between( date '2000-03-01', date '2000-02-28') * 31 from dual;
4

Incredible! The real difference is only 2 days (February 2000 had 29 days), yet months_between times 31 pretends it is 4. I have always been very careful with months_between because of the documented end-of-month behaviour (there is one month between 30-APR and 31-MAY), but I did not know the fractional part of months_between was based on a 31-day month 👿
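
The arithmetic behind it: from 28-FEB to 28-MAR is one whole month, and months_between then subtracts (28-1)/31, so the result is 1 - 27/31 = 4/31, which times 31 gives 4:

SQL> select months_between( date '2000-03-01', date '2000-02-28') from dual;

MONTHS_BETWEEN(DATE'2000-03-01',DATE'2000-02-28')
-------------------------------------------------
                                       .129032258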

update: but it is documented

OEM Generic service

I want to monitor the output of one script in OEM.

Let’s say my script is as easy as this:

#!/bin/ksh
echo $RANDOM

So I go to

  • OEM 10gR2 Grid Control
  • All Targets
  • Add Generic Service – Go
  • Name : random
  • Select System : EM Website System
  • Define availability based on: Service Test
  • Test Type: custom test
  • Name : random
  • Collection Frequency : 1 minute
  • Command line : /home/oracle/random.sh
  • Username : oracle
  • Password : oracle
  • Add Beacon : oemsrv01_beacon
  • Next – Next – Finish
  • That’s all, your generic service is ready. Wait a bit and you will see it in your targets list. You can then click on Performance on your “random” homepage and choose “custom metric 1” to get a graph of your values ($RANDOM).

Happy Holidays!

For those who like free skiing games with really good graphics, check this:
Ski Challenge 2007
go to http://www.tsr.ch/tsr/index.html?siteSect=860020
and click Télécharger le jeu (“download the game”, about 41Mb)

-ignoreDiskWarning

I am in the process of installing the Oracle 10gR2 patchset 2 with a response file. There is not plenty of disk space available, but installing a patchset does not require as much space as the runInstaller claims.

When I start in silent mode, I get:

$ ./runInstaller -responseFile /home/oracle/10202.rsp -silent
Starting Oracle Universal Installer...
-----------------------------------------------------------------------------
You may not have enough space on your drive for a successful install. Do you still want to continue?

Where should I answer yes? There is a secret parameter called -ignoreDiskWarning which does the trick. The parameter is not listed in ./runInstaller -help.

So I installed my 10.2.0.1 + patchset 10.2.0.2 and still have 500Mb free, which is OK for an Oracle Home. Check your disk space before using this, to avoid a full filesystem.

length('')=null?

What is the length of an empty string?

According to the Oracle documentation:

Oracle Database currently treats a character value with a length of zero as null. However, this may not continue to be true in future releases, and Oracle recommends that you do not treat empty strings the same as nulls

And therefore the length of null is null, not 0 (there is no character value with a length of zero 😳).

However, this is not true for clobs 😈

SQL> create table t(x clob);
Table created.
SQL> insert into t values (empty_clob());
1 row created.
SQL> select x,length(x) from t where x is not null;
X  LENGTH(X)
- ----------
           0
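
For comparison, with a plain varchar2 the empty string is null, and so is its length:

SQL> select length('') from dual;

LENGTH('')
----------
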

search for a string in all tables of a schema

This is often asked on the forums. I also needed it a while ago while reverse-engineering a database model.

Here is my solution of today:

1) select * and extract the first matching column per table with a regexp (10g)

SQL> select table_name,column_name from (select rownum,table_name, regexp_substr(dbms_xmlgen.getxml('select * from "'||table_name||'"'),'<[^>]*>&string</[^<]*>') column_name from user_tables) where length(column_name)!=0;
Enter value for string: 20
TABLE_NAME COLUMN_NAME
---------- ------------------------------
DEPT       <DEPTNO>20</DEPTNO>
EMP        <DEPTNO>20</DEPTNO>

SQL> select table_name,column_name from (select rownum,table_name, regexp_substr(dbms_xmlgen.getxml('select * from "'||table_name||'"'),'<[^>]*>&string</[^<]*>') column_name from user_tables) where length(column_name)!=0;
Enter value for string: KING
TABLE_NAME COLUMN_NAME
---------- ------------------------------
EMP        <ENAME>KING</ENAME>
BONUS      <ENAME>KING</ENAME>

2) add a where condition. Much slower of course, because it scans each table once per column. Somehow nicer output. More sensitive to datatypes; here for numbers:

SQL> select table_name, column_name from (select rownum,table_name, column_name, dbms_xmlgen.getxml('select 1 from "'||table_name||'" where "'||column_name||'"=&number') x from user_tab_columns where data_type='NUMBER') where length(x)!=0;
Enter value for number: 3000
TABLE_NAME COLUMN_NAME
---------- ------------------------------
EMP        SAL
BONUS      COMM
SALGRADE   HISAL

Oracle 9iR2 desupport

How long is Oracle 9iR2/10g going to be supported? Indefinitely!

Metalink has updated the upcoming desupport advisory.

For 8iR3, you had something like:

Error Correction Support (ECS):     31-DEC-2004
Extended Support (ES):              31-DEC-2007
Extended Maintenance Support (EMS): 31-DEC-2006

Now for 9iR2, 10gR1 and 10gR2, you have this:

Release  GA Date   Premier   Extended  Sustaining
                   Support   Support   Support
-------  --------  --------  --------  ----------
DB 9.2   Jul 2002  Jul 2007  Jul 2010  Indefinite
DB 10.1  Jan 2004  Jan 2009  Jan 2012  Indefinite
DB 10.2  Jul 2005  Jul 2010  Jul 2013  Indefinite

Sustaining support is less than extended support: you do not get upgrade scripts or compatibility with new products (ex: a connection from a 9i client to Oracle 14x is not supported). But it lets you keep your very old stuff supported by Oracle.

Read more:
Oracle Lifetime Support Policy Datasheet (PDF, 110K)
Oracle Technical Support Policies (PDF, 258K)

su in sqlplus

How to switch user in Oracle?

One approach is to change the password temporarily:

SQL> connect / as sysdba
Connected.
SQL> select password from dba_users where username='SCOTT';

PASSWORD
------------------------------
F894844C34402B67

SQL> alter user scott identified by abc123;

User altered.

SQL> connect scott/abc123
Connected.
SQL> create table t...

SQL> connect / as sysdba
Connected.
SQL> alter user scott identified by values 'F894844C34402B67';

User altered.

but this is unfair: the user will be prevented from logging in for a while, and the password expiry policy will be reset.

Sometimes you can simply use

SQL> alter session set current_schema=scott;

Session altered.

but this is not really a “su”, because you do not change your privileges; you just change the “default” schema.

Another approach is to use the BECOME USER privilege and the undocumented upicui OCI function. But this will not work in sqlplus.

Thanks to Jonathan Lewis’s post today, Proxy Users, I could imagine using the proxy functionality in sqlplus to do a su:

SQL> create user su identified by secret123;

User created.

SQL> alter user scott grant connect through su;

User altered.

SQL> connect su[scott]/secret123;
Connected.

The connect proxy[username]/password@db syntax is documented in the 10gR2 SQL*Plus reference.
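
Inside the proxy session, you can check who you really are: the session user is SCOTT, while the userenv context still knows SU as the proxy (10g):

SQL> select sys_context('USERENV','PROXY_USER') from dual;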