All posts by Laurent Schneider

Oracle Certified Master

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)

SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)

SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
         X
----------
         1

before / after a successful DDL statement

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

Before / after an unsuccessful DDL statement, sometimes :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
                    *
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

But not always :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
                    *
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, no DDL was executed, but in the case before that, the DDL was executed and failed.

If you want to commit, use COMMIT :)

sqlplus -prelim

If you cannot login to the database, for instance due to ORA-00020 maximum number of processes exceeded, then chance exists that you could use the -prelim option.

Documented in note 121779.1 for sqlplus version 10.1 and later :
In some cases, no connections are allowed on the instance (in some ORA-20 situations for example).
As of 10.1.x, there is a new option with SQL*Plus to allow access to an instance to
generate traces.
sqlplus -prelim / as sysdba

Only sysdba connection is possible.

sqlplus -prelim system/manager

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 4 10:38:36 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-24300: bad value for mode

And very little access is granted

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

You can shutdown abort and then restart your database, instead of rebooting your server where other instances may be running.

This is the ultimate chance before reboot. Before this, consider disconnecting / killing some user session to get a regular sqlplus / as sysdba

CSV part 4, fast !!

I got some comments that my other csv solutions were slow to export gigabytes of data.

One more try.

thanks to the feedbacks, I provided a new version

This could generate very large files in just a few minutes (instead of hours).

I use bulk collect and utl_file to boost performance

CREATE TYPE collist IS TABLE OF VARCHAR2 (4000)
/

CREATE OR REPLACE PROCEDURE bulk_csv (directory_name    VARCHAR2,
                                      file_name         VARCHAR2,
                                      query        VARCHAR2)
   AUTHID CURRENT_USER
IS
   -- $Id$
   fh             UTL_FILE.file_type;
   stmt           VARCHAR2 (32767) := NULL;
   header         VARCHAR2 (32767) := NULL;
   curid          NUMBER;
   desctab        DBMS_SQL.DESC_TAB;
   colcnt         NUMBER;
   namevar        VARCHAR2 (32767);

   TYPE cola IS TABLE OF collist
                   INDEX BY BINARY_INTEGER;

   res            cola;
   rcur           SYS_REFCURSOR;
   current_line   VARCHAR2 (32767);
   next_line      VARCHAR2 (32767);

BEGIN
   curid := DBMS_SQL.open_cursor;
   DBMS_SQL.parse (curid, query, DBMS_SQL.NATIVE);
   DBMS_SQL.DESCRIBE_COLUMNS (curid, colcnt, desctab);

   FOR i IN 1 .. colcnt
   LOOP
      DBMS_SQL.DEFINE_COLUMN (curid,
                              i,
                              namevar,
                              32767);
   END LOOP;

   IF DBMS_SQL.execute (curid) = 0
   THEN
      FOR i IN 1 .. colcnt
      LOOP
         IF (i > 1)
         THEN
            header := header || ';';
            stmt := stmt || ',';
         END IF;

         header := header || desctab (i).col_name;
         stmt :=
               stmt
            || CASE
                  WHEN desctab (i).col_type IN
                          (DBMS_SQL.Varchar2_Type,
                           DBMS_SQL.Char_Type)
                  THEN
                     '"'||desctab (i).col_name || '"'
                  WHEN desctab (i).col_type IN
                          (DBMS_SQL.Number_Type,
                           DBMS_SQL.Date_Type,
                           DBMS_SQL.Binary_Float_Type,
                           DBMS_SQL.Binary_Bouble_Type,
                           DBMS_SQL.Timestamp_Type,
                           DBMS_SQL.Timestamp_With_TZ_Type,
                           DBMS_SQL.Interval_Year_to_Month_Type,
                           DBMS_SQL.Interval_Day_To_Second_Type,
                           DBMS_SQL.Timestamp_With_Local_TZ_type)
                  THEN
                     'to_char("' || desctab (i).col_name || '")'
                  WHEN desctab (i).col_type = DBMS_SQL.Raw_Type
                  THEN
                     'rawtohex("' || desctab (i).col_name || '")'
                  WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
                  THEN
                     '''unsupport datatype : ROWID'''
                  WHEN desctab (i).col_type = DBMS_SQL.Long_Type
                  THEN
                     '''unsupport datatype : LONG'''
                  WHEN desctab (i).col_type = DBMS_SQL.Long_Raw_Type
                  THEN
                     '''unsupport datatype : LONG RAW'''
                  WHEN desctab (i).col_type = DBMS_SQL.User_Defined_Type
                  THEN
                     '''unsupport datatype : User Defined Type'''
                  WHEN desctab (i).col_type = DBMS_SQL.MLSLabel_Type
                  THEN
                     '''unsupport datatype : MLSLABEL'''
                  WHEN desctab (i).col_type = DBMS_SQL.Ref_Type
                  THEN
                     '''unsupport datatype : REF'''
                  WHEN desctab (i).col_type = DBMS_SQL.Clob_Type
                  THEN
                     '''unsupport datatype : CLOB'''
                  WHEN desctab (i).col_type = DBMS_SQL.Blob_Type
                  THEN
                     '''unsupport datatype : BLOB'''
                  WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
                  THEN
                     '''unsupport datatype : ROWID'''
                  WHEN desctab (i).col_type = DBMS_SQL.Bfile_Type
                  THEN
                     '''unsupport datatype : BFILE'''
                  WHEN desctab (i).col_type = DBMS_SQL.Urowid_Type
                  THEN
                     '''unsupport datatype : UROWID'''
                  ELSE
                     '''unsupport datatype : '||desctab (i).col_type||''''
               END;
      END LOOP;

      stmt := 'select collist(' || stmt || ') from (' || query || ')';

      fh :=
         UTL_FILE.fopen (directory_name,
                         file_name,
                         'W',
                         32767);

      begin
            OPEN rcur FOR stmt;
      exception 
        when others then 
          dbms_output.put_line(stmt);
          raise;
      end;
      LOOP
         FETCH rcur
         BULK COLLECT INTO res
         LIMIT 10000;

         current_line := header;
         next_line := NULL;

         FOR f IN 1 .. res.COUNT
         LOOP
            FOR g IN 1 .. res (f).COUNT
            LOOP
               IF (g > 1)
               THEN
                  next_line := next_line || ';';
               END IF;

               IF (  NVL(LENGTH (current_line),0)
                   + NVL(LENGTH (next_line),0)
                   + NVL(LENGTH (res (f) (g)),0)
                   + 5 > 32767)
               THEN
                  UTL_FILE.put_line (fh, current_line);
                  current_line := NULL;
               END IF;

               IF (NVL(LENGTH (next_line),0) + NVL(LENGTH (res (f) (g)),0) + 5 > 32767)
               THEN
                  UTL_FILE.put_line (fh, next_line);
                  next_line := NULL;
               END IF;

               next_line := next_line || res (f) (g);
            END LOOP;

            current_line :=
                  CASE
                     WHEN current_line IS NOT NULL
                     THEN
                        current_line || CHR (10)
                  END
               || next_line;
            next_line := NULL;
         END LOOP;

         UTL_FILE.put_line (fh, current_line);
         EXIT WHEN rcur%NOTFOUND;
      END LOOP;

      CLOSE rcur;

      UTL_FILE.fclose (fh);
   END IF;

   DBMS_SQL.CLOSE_CURSOR (curid);
END;
/

CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

EXEC bulk_csv('TMP','emp.csv','SELECT * FROM EMP ORDER BY ENAME')


EMPNO;ENAME;JOB;MGR;HIREDATE;SAL;COMM;DEPTNO
7876;ADAMS;CLERK;7788;1987-05-23 00:00:00;1100;;20
7499;ALLEN;SALESMAN;7698;1981-02-20 00:00:00;1600;30;30
7698;BLAKE;MANAGER;7839;1981-05-01 00:00:00;2850;;30
7782;CLARK;MANAGER;7839;1981-06-09 00:00:00;2450;;10
7902;FORD;ANALYST;7566;1981-12-03 00:00:00;3000;;20
7900;JAMES;CLERK;7698;1981-12-03 00:00:00;950;;30
7566;JONES;MANAGER;7839;1981-04-02 00:00:00;2975;;20
7839;KING;PRESIDENT;;1981-11-17 00:00:00;5000;;10
7654;MARTIN;SALESMAN;7698;1981-09-28 00:00:00;1250;140;30
7934;MILLER;CLERK;7782;1982-01-23 00:00:00;1300;;10
7788;SCOTT;ANALYST;7566;1987-04-19 00:00:00;3000;;20
7369;SMITH;CLERK;7902;1980-12-17 00:00:00;800;;20
7844;TURNER;SALESMAN;7698;1981-09-08 00:00:00;1500;0;30
7521;WARD;SALESMAN;7698;1981-02-22 00:00:00;1250;50;30

on materialized view constraints

Oracle is pretty strong at enforcing constraint.

Table for this blog post:
create table t(x number primary key, y number);

For instance if you alter table t add check (y<1000); then Y will not be bigger than 1000, right?

SQL> insert into t values (1,2000);
insert into t values (1,2000)
Error at line 1
ORA-02290: check constraint (SCOTT.SYS_C0029609) violated

I believe this code to be unbreakable. If you have only SELECT and INSERT privilege on the table, you cannot bypass the constraint.

Let’s imagine some complex constraint. CHECK (sum(y) < 1000)

SQL> alter table t add check (sum(y) < 1000);
alter table t add check (sum(y) < 1000)
Error at line 1
ORA-00934: group function is not allowed here

Ok, clear enough I suppose, we cannot handle this complex constraint with a CHECK condition.

We could have some before trigger that fires an exception

CREATE TRIGGER tr
   BEFORE INSERT OR UPDATE
   ON T
   FOR EACH ROW
   WHEN (NEW.Y > 0)
DECLARE
   s   NUMBER;
BEGIN
   SELECT SUM (y) INTO s FROM t;

   IF (s + :new.y >= 1000)
   THEN
      raise_application_error (-20001, 'SUM(Y) would exceed 1000');
   END IF;
END;
/

Now the trigger will compute the sum and return an exception whenever it fails.

SQL> insert into t values (2, 600);

1 row created.

SQL> insert into t values (3, 600);
insert into t values (3, 600)
            *
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "SCOTT.TR", line 8
ORA-04088: error during execution of trigger 'SCOTT.TR'

SQL> drop trigger tr;

Trigger dropped.

SQL> truncate table t;

Table truncated.

But I am not good with triggers, and the triggers are as bad as their developers and have dark sides like mutating triggers and thelike.

As Tom Kyte mentioned in the comment, the code above is not efficient effective if more than one user update the table at the same time

Another popular approach is to create a fast-refreshable-on-commit mview with a constraint.

Let’s see how this works.


create materialized view log on t with rowid, primary key (y) including new values;

create materialized view mv
refresh fast 
on commit 
as select sum(y) sum from t;

alter table mv add check (sum < 1000);

The constraint is on the mview, so once you commit (and only at commit time), Oracle will try to refresh the mview.

SQL> insert into t values (4, 600);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into t values (5, 600);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.SYS_C0029631) violated

SQL> select * from t;

         X          Y
---------- ----------
         4        600

So far so good. The mechanism rollbacks the transaction in case of an ORA-12008. A bit similar to a DEFERABLE constraint.

But how safe is this after all? Oracle does not enforce anything on the table, it just fails on refresh…

Anything that does not fulfill the materialized view fast refresh requisites will also break the data integrity.

SQL> delete from t;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20;

19 rows created.

SQL> commit;

Commit complete.

SQL> select sum(y) from t;

    SUM(Y)
----------
     19000

SQL> select staleness from user_mviews;

STALENESS
-------------------
UNUSABLE

Your data integrity is gone. By “breaking” the mview, with only SELECT, INSERT and ALTER SESSION privilege, you can now insert any data.

This is documented as
FAST Clause

For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the eligibility of a materialized view for fast refresh.

Other operations like TRUNCATE may also prevent you from inserting fresh data


SQL> alter materialized view mv compile;

Materialized view altered.

SQL> exec dbms_mview.refresh('MV','COMPLETE');

PL/SQL procedure successfully completed.

SQL> select * from mv;

       SUM
----------

SQL> insert into t values(1,1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mv;

       SUM
----------
         1

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1,1);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-32321: REFRESH FAST of "SCOTT"."MV" unsupported after detail table
TRUNCATE

On using Toad against a database

I got this question once again today in a previous post.

What’s wrong by using Toad against a database?

The worst case scenario:
– some non-technical staff is clicking around in your production database with read-write access :(

The best-case scenario :
– nobody has access to your database :)

Here is a short list on how you could protect your data :
– Give the right privilege to the right person. DBA role to the DBA, CREATE TABLE/CREATE INDEX to the developer, INSERT/UPDATE/DELETE to the application
– Restrict access to your database server. Use some firewall. Allow only the dba workstation and the application server to the Production environment

What if the end-user PC needs access to the Production database with a powerfull user? This often happend in real world. A fat client is installed on the PC, the password is somehow hardcoded, the privileges granted to the hardcoded user are uterly generous…

It is not a bad practice in this case to block access to the database server to Toad/SQLPLUS and thelike. This will very ineffeciently prevent some garage-hacker from corrupting your database, but it will prevent your sales / marketing colleagues from deleting data, locking tables and degrading performance. This could be done by some login triggers or, my preference, some administrative measures like information, auditting and sanctions.

Troubleshoot ORA-10878

You will probably not hit this bug unless you perform some media recovery in 11.2.0.1/AIX.

Ok. In case you hit ORA-10878: parallel recovery slave died unexpectedly during a DUPLICATE or a RESTORE command, you can disable parallel media recovery with _log_parallelism_max=1.

The usual warning applies : do not use hidden parameter without guidance of Oracle Support. Open an SR if you hit this bug. Check for a patch on your plateform. Read notes 9728806.8 and 315631.1.

Note: for a RECOVER, the option RECOVER NOPARALLEL must be safer. Unfortunately there is no such thing like DUPLICATE NOPARALLEL

Update: This could also happened with standby, if you have stopped your standby site for a while and after restart you get ORA-10878 and ORA-00448 and evtl core dumps or internal errors, then stop dataguard (set dg_broker_start to false) and start the recovery manually with the noparallel option, until all logs are applied. Once this is done, you can restart dataguard, which will then in normal operation mode apply only one log at the time.
Diggout out from Helios’s Blog

scp tuning

I twitted yesterday :

laurentsch
copying 1TB over ssh sucks. How do you fastcopy in Unix without installing Software and without root privilege?

I got plenty of expert answers. I have not gone to far in recompile ssh and I did not try plain ftp.

Ok, let’s try first to transfer 10 files of 100M from srv001 to srv002 with scp :

time scp 100M* srv002:
100M1    100%   95MB   4.5MB/s   00:21
100M10   100%   95MB   6.4MB/s   00:15
100M2    100%   95MB   6.0MB/s   00:16
100M3    100%   95MB   4.2MB/s   00:23
100M4    100%   95MB   3.4MB/s   00:28
100M5    100%   95MB   4.2MB/s   00:23
100M6    100%   95MB   6.4MB/s   00:15
100M7    100%   95MB   6.8MB/s   00:14
100M8    100%   95MB   6.8MB/s   00:14
100M9    100%   95MB   6.4MB/s   00:15

real    3m4.50s
user    0m27.07s
sys     0m21.56s

more than 3 minutes for 1G.

I got hints about the buffer size, about SFTP, about the cipher algorythm, and about parallelizing. I did not install new software and I have a pretty old openssh client (3.8). Thanks to all my contributors tmuth, Ik_zelf, TanelPoder, fritshoogland, jcnars, aejes, surachart, syd_oracle and the ones the will answer after the writting of this blog post…

Ok, let’s try a faster algorythm, with sftp (instead of scp), a higher buffer and in parallel

$ cat batch.ksh
echo "progress\nput 100M1" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M2" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M3" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M4" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M5" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M6" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M7" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M8" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M9" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
echo "progress\nput 100M10" | sftp -B 260000 -o Ciphers=arcfour -R 512 srv002&
wait
$ time batch.ksh
real    0m19.07s
user    0m12.08s
sys     0m5.86s

This is a 1000% speed enhancement :-)

What is the current setting of NLS_LANG in sqlplus?

I just learnt a neat trick from Oracle Support.

How do you see the current value of NLS_LANG in SQLPLUS ?

HOST is not the right answer.

E.g.:
Unix:


SQL> host echo $NLS_LANG
AMERICAN_SWITZERLAND

Windows:

SQL> HOST ECHO %NLS_LANG%
%NLS_LANG%

The correct setting is revealed by @.[%NLS_LANG%]
E.g.:
Unix:


SQL> @.[$NLS_LANG]
SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8ISO8859P1]"

Windows:

SQL>  @.[%NLS_LANG%]
SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8ISO8859P1]"

It could well be that both return the same answer, but not necessarly, as shown above.

The unix discrepancy is related to the subshell created by HOST. The subshell may read some .profile and overwrite the value of NLS_LANG

In Windows, the NLS_LANG setting may be set by sqlplus according to some registry entries

Send html report per email from sqlplus

Your business partner wants to receive some daily mail with an sql query output in it. It does not need to be ultra-fancy, but some colors and titles would not hurt.

Here is the report in SQL:


select dname, sum(sal) from emp join dept using (deptno) group by rollup(dname);

Ok, let’s do the report within sqlplus.

rep.sql


set echo off numf 999G999G999G999 lin 32000 trims on pages 50000 head on feed off markup html off
alter session set nls_numeric_characters='.''' nls_date_format='Day DD. Month, YYYY';
spool /tmp/rep.html
prompt To: laurentschneider@example.com
prompt From: laurentschneider@example.com
prompt Subject: Daily department report
prompt Content-type: text/html
prompt MIME-Version: 1.0
set markup html on entmap off table 'BORDER="2" BGCOLOR="pink"'
prompt <i>Good morning, </i>
prompt <i>Here is the department report per &_DATE</i>
prompt <i>Kind Regards, </i>
prompt <i>Your IT Operations</i>

prompt <br/><h3>List of departments with the total salaries of their employees</h3>
select dname "Department", sum(sal) "Salary" from emp join dept using (deptno) group by rollup(dname);
spool off
host /usr/sbin/sendmail -t </tmp/rep.html
quit

Then simply call it from sqlplus (you may want to configure the sendmail part)

SQL> @rep

check your mail :

To: laurentschneider@example.com
From: laurentschneider@example.com
Subject: Daily department report
Good morning,

Here is the department report per Friday 15. April , 2011

Kind Regards,

Your IT Operations



List of departments with the total salaries of their employees

Department Salary
ACCOUNTING 8’750
RESEARCH 10’875
SALES 9’400
  29’025

It is pretty easier to maintain than APEX, but the capabilities are not that rich…

track ddl change (part 2)

I wrote about tracking ddl changes with a trigger there : track ddl changes

Another option is to use auditing.

A new and cool alternative is to use enable_ddl_logging (11gR2). This will track all ddl’s in the alert log

ALTER SYSTEM SET enable_ddl_logging=TRUE

Then later you issue

create table t(x number)

and you see in the alertLSC01.log

Tue Apr 05 14:43:32 2011
create table t(x number)

Wait, that’s not really verbose !?

Remember the alert log is just there for backward compatibility, it is time you start looking in the xml file :-)


<msg time='2011-04-05T14:43:42.210+02:00' org_id='oracle' comp_id='rdbms'
 msg_id='opiexe:3937:4222333111' client_id='' type='NOTIFICATION'
 group='schema_ddl' level='16' host_id='srv01'
 host_addr='192.168.0.141' module='TOAD Beta 11.0.0.52' pid='2777799'>
 <txt>create table t(x number)
 </txt>
</msg>

There is not really much more there but the module, which indeed reveals someone is using TOAD to access my database !

Time offset in Unix

What is the time offset of the current date in Unix?

perl -e '
  $t=time;
  @l=localtime($t);
  @g=gmtime($t);
  $d=$l[2]-$g[2]+($l[1]-$g[1])/60;
  $gd=$g[3]+$g[4]*31+$g[5]*365;
  $ld=$l[3]+$l[4]*31+$l[5]*365;
  if($gd<$ld){$d+=24};
  if($gd>$ld){$d-=24}
print ($d."\n")'
2

Am I in summer (DST)?

perl -e 'if((localtime)[8]){print"yes"}else{print "no"}'
yes

my first ADR package

You got an internal error and want to create a zip of all relevant files.

First, let’s generate an internal error. I found a quick way to generate an ora-600 or an ora-700 (which is a harmless ora-600 in 11g, read 737878.1) on oradeblog

SQL> oradebug unit_test dbke_test dde_flow_kge_soft foo bar baz
Statement processed.

Now start the command line interface, and set the ORACLE HOME

$ adrcli
adrci> show home
ADR Homes: 
diag/tnslsnr/precision/listener
diag/tnslsnr/localhost/listener
diag/rdbms/lsc02/LSC02
diag/rdbms/lsc03/LSC03
diag/rdbms/lsc01/LSC01
adrci> set homepath diag/rdbms/lsc02/LSC02
adrci> show home
ADR Homes: 
diag/rdbms/lsc02/LSC02

Check the incidents :

adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/lsc02/LSC02:
*************************************************************************
INCIDENT_ID          PROBLEM_KEY       CREATE_TIME                              
-------------------- ----------------- --------------------------------- 
53065                ORA 700 [foo]     2011-03-14 18:20:24 +01:00       
1 rows fetched

Create the package metadata :


adrci> IPS CREATE PACKAGE INCIDENT 53065
Created package 1 based on incident id 53065, correlation level typical
adrci> ips SHOW PACKAGE 1
DETAILS FOR PACKAGE 1:
   PACKAGE_ID             1
   PACKAGE_NAME           ORA700foo_20110314182607
   PACKAGE_DESCRIPTION    
   DRIVING_PROBLEM        1
   DRIVING_PROBLEM_KEY    ORA 700 [foo]
   DRIVING_INCIDENT       53065
   DRIVING_INCIDENT_TIME  2011-03-14 18:20:24.304000 +01:00
   STATUS                 New (0)
   CORRELATION_LEVEL      Typical (2)
   PROBLEMS               1 main problems, 0 correlated problems
   INCIDENTS              1 main incidents, 0 correlated incidents
   INCLUDED_FILES         4
   SEQUENCES              Last 0, last full 0, last base 0
   UNPACKED               FALSE
   CREATE_TIME            2011-03-14 18:26:07.566961 +01:00
   UPDATE_TIME            2011-03-14 18:26:07.620324 +01:00
   BEGIN_TIME             N/A
   END_TIME               N/A
   FLAGS                  0

The metadata files (in $ORACLE_BASE/rdbms/db_name/sid/*.ams) are in an Oracle binary format

Create the package zip file :

adrci> IPS GENERATE PACKAGE 1 in /home/lsc/foo
Generated package 1 in file /home/lsc/foo/ORA700foo_20110314182607_COM_1.zip, 
mode complete

This zip file contains all traces and alerts that you may ever need to diagnose/resolve the analysis

adrci>  ips show files package 1
   FILE_ID                1
   FILE_LOCATION          <ADR_HOME>/incident/incdir_53065
   FILE_NAME              LSC02_ora_14163_i53065.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2
   FILE_LOCATION          <ADR_HOME>/incident/incdir_53065
   FILE_NAME              LSC02_ora_14163_i53065.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                3
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_ora_14163.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                4
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_ora_14163.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                5
   FILE_LOCATION          <ADR_HOME>/alert
   FILE_NAME              log.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                6
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              alert_LSC02.log
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                7
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_diag_5247.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                8
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_diag_5247.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                12
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_mmon_5265.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                13
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_mmon_5265.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2007
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_CONFIGURATION.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2008
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2009
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_INCIDENT.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2010
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_FILE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2011
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_HISTORY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2012
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_FILE_METADATA.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2013
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_FILE_COPY_LOG.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2014
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_DEF.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2015
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_PARAMETER_DEF.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2016
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2017
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_PARAMETER.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2018
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_INCIDENT_TYPE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2019
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_INCIDENT_ACTION_MAP.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2020
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCIDENT.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2021
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCCKEY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2022
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCIDENT_FILE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2023
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              PROBLEM.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2024
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              HM_RUN.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2025
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              EM_USER_ACTIVITY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2026
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              config.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2027
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/crs
   FILE_NAME              crsdiag.log
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2028
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              metadata.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2029
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2030
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.html
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2031
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.txt
   LAST_SEQUENCE          1
   EXCLUDE                Included

Even an html file

Manifest for package 1

Manifest details

Package ID 1
Creation time 2011-03-14 18:26:07.566961 +01:00
Archive time 2011-03-14 18:37:14.499389 +01:00
Sequence 1
Package mode Complete
Package status Generating
Package flags Flags: (No flags set)

Contents summary

Main problems 1
Correlated problems 0
Main incidents 1
Correlated incidents 0

ADR details

Product rdbms
Target lsc02
Instance LSC02
ADR base /u01/app/oracle
ADR home /u01/app/oracle/diag/rdbms/lsc02/LSC02

Main problems

Problem ID Problem key Incidents included Incidents total
1 ORA 700 [foo] 1 1

Correlated problems

Problem ID Problem key Incidents included Incidents total

Main incidents

Incident ID Problem ID Error Message Incident time
53065 1 ORA-700 [foo] [bar] [baz] 2011-03-14 18:20:24.304000 +01:00

Correlated incidents

Incident ID Problem ID Error Message Incident time

Files

File name Location Size File time
LSC02_ora_14163_i53065.trm <ADR_HOME>/incident/incdir_53065 54828 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163_i53065.trc <ADR_HOME>/incident/incdir_53065 2433968 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trc <ADR_HOME>/trace 1308 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trm <ADR_HOME>/trace 210 2011-03-14 18:20:24.000000 +01:00
log.xml <ADR_HOME>/alert 885849 2011-03-14 18:20:27.000000 +01:00
alert_LSC02.log <ADR_HOME>/trace 164969 2011-03-14 18:20:27.000000 +01:00
LSC02_diag_5247.trc <ADR_HOME>/trace 1287 2011-03-14 18:20:26.000000 +01:00
LSC02_diag_5247.trm <ADR_HOME>/trace 77 2011-03-14 18:20:26.000000 +01:00
LSC02_mmon_5265.trc <ADR_HOME>/trace 8703 2011-03-14 18:33:43.000000 +01:00
LSC02_mmon_5265.trm <ADR_HOME>/trace 838 2011-03-14 18:33:43.000000 +01:00
IPS_CONFIGURATION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2818 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 476 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 193 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 1126 2011-03-14 18:37:14.000000 +01:00
IPS_PACKAGE_HISTORY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 280 2011-03-14 18:37:13.000000 +01:00
IPS_FILE_METADATA.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2888 2011-03-14 18:37:14.000000 +01:00
IPS_FILE_COPY_LOG.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 214 2011-03-14 18:37:14.000000 +01:00
DDE_USER_ACTION_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 908 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 708 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 204 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 198 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_TYPE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 353 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_ACTION_MAP.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 166 2011-03-14 18:37:13.000000 +01:00
INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 700 2011-03-14 18:37:13.000000 +01:00
INCCKEY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 303 2011-03-14 18:37:13.000000 +01:00
INCIDENT_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 268 2011-03-14 18:37:13.000000 +01:00
PROBLEM.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 395 2011-03-14 18:37:13.000000 +01:00
HM_RUN.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 342 2011-03-14 18:37:14.000000 +01:00
EM_USER_ACTIVITY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 207 2011-03-14 18:37:14.000000 +01:00
config.xml <ADR_HOME>/incpkg/pkg_1/seq_1 56180 2011-03-14 18:37:14.000000 +01:00
crsdiag.log <ADR_HOME>/incpkg/pkg_1/seq_1/crs 184 2011-03-14 18:37:14.000000 +01:00
metadata.xml <ADR_HOME>/incpkg/pkg_1/seq_1 556 2011-03-14 18:37:14.000000 +01:00

But did Oracle Support ever asked you for an ADR package? Or do they still ask for RDA

I used to select, zip and send traces files manually, I may consider ADR packages by my next ORA-600 !

How does random=random evaluates?

I had fun answering a question about random on the technical forums.

What is in your opinion the boolean value of DBMS_RANDOM.VALUE=DBMS_RANDOM.VALUE?

Or, how many rows would
select * from dual where dbms_random.value=dbms_random.value;
return?

It is wrong to assume the function will be evaluated twice.

The short answer would be : do not rely on random plsql functions in SQL…

here is a test case in 11.2.0.2 and 10.2.0.3


SQL> select version from v$instance;
VERSION
-----------------
10.2.0.3.0

SQL> select * from dual where dbms_random.value=dbms_random.value;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DBMS_RANDOM"."VALUE"()="DBMS_RANDOM"."VALUE"())

In 10g, the function is executed twice per row, and the chance to have two different values is more than 99.9999…%.


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.2.0

SQL> select * from dual where dbms_random.value=dbms_random.value
D
-
X

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DBMS_RANDOM"."VALUE"() IS NOT NULL)

Here the optimized execute the function only once per row, and since the result is never null, it always evaluates to true.

Is this a bug or a feature?

In my opinion it is a confusing tuning enhancement that may break badsome programs.

In this thread, I mentioned that prior dbms_random.value is not null is an unsafe construct.

List events in session, process or system

There is a new command in 11g to display the current events, which is oradebug eventdump.

For instance :

SQL> alter session set events '10046 trace name context forever,level 12:942 trace name ERRORSTACK level 3';

SQL> oradebug setmypid
Statement processed.
SQL> oradebug eventdump session
sql_trace level=12
942 trace name ERRORSTACK level 3

Read metalink note 436036.1

In 10g and before, the command was oradebug dump events 1 and the list was dumped in a trace file, 11g directly outputs to the console.

Note there is no backward compatibility with unsupported tools like oradebug.
In 11g you will get an ORA-76 with dump events

SQL> oradebug setmypid
Statement processed.
SQL> oradebug dump events 1
ORA-00076: dump EVENTS not found
$ oerr ora 76
00076, 00000, "dump %s not found"
// *Cause:  An attempt was made to invoke a dump that does not exist.
// *Action: Type DUMPLIST to see the list of available dumps.

Which index can you rebuild?

I recently wrote on table reorg and rebuild index

Rule number one : you cannot rebuild a partitioned index in whole. You need to rebuild each individual (sub-)partition

Rule number two : to rebuild an iot, move the table instead of trying to rebuild the underlying index

Rule number three : a LOB index is not really an index. Do not rebuild this

Rule number four : a NOSEGMENT index is not a supported type of index, but it may appear in your user_objects list. It is used internally by OEM and other tuning tools to do a what-if calculation on the explain plan. It is not listed in USER_INDEXES. Do not rebuild this

Test case :


SQL> CREATE CLUSTER c(x NUMBER);

Cluster created.

SQL> CREATE INDEX a01
  2    ON CLUSTER c;

Index created.

SQL> CREATE TABLE t
  2  (
  3    p     NUMBER PRIMARY KEY,
  4    a01   NUMBER,
  5    a02   NUMBER,
  6    a03   NUMBER,
  7    a04   NUMBER,
  8    a05   NUMBER,
  9    a06   NUMBER,
 10    a07   VARCHAR2 (40),
 11    a08   CLOB
 12  );

Table created.

SQL> CREATE INDEX a02
  2    ON t (a01);

Index created.

SQL> CREATE INDEX a03
  2    ON t (a02)
  3    REVERSE;

Index created.

SQL> CREATE INDEX a04
  2    ON t (SQRT (a01));

Index created.

SQL> CREATE INDEX a05
  2    ON t (COS (a01))
  3    REVERSE;

Index created.

SQL> CREATE BITMAP INDEX a06
  2    ON t (a03);

Index created.

SQL> CREATE BITMAP INDEX a07
  2    ON t (SIGN (a04));

Index created.

SQL> CREATE INDEX a08
  2    ON t (a07)
  3    INDEXTYPE IS ctxsys.context;

Index created.

SQL> CREATE INDEX a09
  2    ON t (a05)
  3    GLOBAL PARTITION BY HASH (a05)
  4       (PARTITION p);

Index created.

SQL> CREATE TABLE i (x NUMBER CONSTRAINT A10 PRIMARY KEY)
  2  ORGANIZATION INDEX;

Table created.

SQL> CREATE INDEX A11 on T(A06) NOSEGMENT;

Index created.

SQL>   SELECT index_name,
  2          index_type,
  3          partitioned,
  4          generated
  5     FROM user_indexes
  6  ORDER BY 1;

INDEX_NAME                     INDEX_TYPE                  PAR G
------------------------------ --------------------------- --- -
A01                            CLUSTER                     NO  N
A02                            NORMAL                      NO  N
A03                            NORMAL/REV                  NO  N
A04                            FUNCTION-BASED NORMAL       NO  N
A05                            FUNCTION-BASED NORMAL/REV   NO  N
A06                            BITMAP                      NO  N
A07                            FUNCTION-BASED BITMAP       NO  N
A08                            DOMAIN                      NO  N
A09                            NORMAL                      YES N
A10                            IOT - TOP                   NO  N
DR$A08$X                       NORMAL                      NO  N
SYS_C009276                    NORMAL                      NO  Y
SYS_IL0000028076C00009$$       LOB                         NO  Y
SYS_IL0000028087C00006$$       LOB                         NO  Y
SYS_IL0000028092C00002$$       LOB                         NO  Y
SYS_IOT_TOP_28090              IOT - TOP                   NO  Y
SYS_IOT_TOP_28095              IOT - TOP                   NO  Y

SQL> ALTER INDEX a01 REBUILD;

Index altered.

SQL> ALTER INDEX a02 REBUILD;

Index altered.

SQL> ALTER INDEX a03 REBUILD;

Index altered.

SQL> ALTER INDEX a04 REBUILD;

Index altered.

SQL> ALTER INDEX a05 REBUILD;

Index altered.

SQL> ALTER INDEX a06 REBUILD;

Index altered.

SQL> ALTER INDEX a07 REBUILD;

Index altered.

SQL> ALTER INDEX a08 REBUILD;

Index altered.

SQL> ALTER INDEX a09 REBUILD;
ALTER INDEX a09 REBUILD
            *
ERROR at line 1:
ORA-14086: a partitioned index may not be rebuilt as a whole

SQL> ALTER INDEX a09 REBUILD PARTITION P;

Index altered.

SQL> ALTER INDEX a10 REBUILD;
ALTER INDEX a10 REBUILD
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt

SQL> ALTER TABLE i MOVE;

Table altered.

SQL> ALTER INDEX A11 REBUILD;
ALTER INDEX A11 REBUILD
*
ERROR at line 1:
ORA-08114: can not alter a fake index

SQL> ALTER INDEX SYS_IL0000028076C00009$$ REBUILD;
ALTER INDEX SYS_IL0000028076C00009$$ REBUILD
*
ERROR at line 1:
ORA-02327: cannot create index on expression with datatype LOB

A function-based domain index should be rebuildable too, I have not tested this for you

EZCONNECT and HOSTNAME resolution methods

EZCONNECT is the easy connect protocol, available in 10g, whenever you want to connect to a database without tnsnames and without ldap.

$ grep -iw directory_path $TNS_ADMIN/sqlnet.ora
names.directory_path=EZCONNECT
$ sqlplus scott/tiger@//srv01:1521/db01

connect to server srv01 on port 1521 for service db01

HOSTNAME was the old-fashion way to connect to a database, where hostname = sid and port = 1521. In this regard EZCONNECT is just an extension of the hostname method.

Typical HOSTNAME usage, that is the same as EZCONNECT with default port 1521.
sqlplus scott/tiger@db01
connect to server db01 on port 1521 for service db01

There is a behavior change between 10g and 11g. In 10g, the default service name defaulted to the DNS alias used to connect. In 11g, the default is null.

$ nslookup db01
Server:  ns001.example.com
Address:  198.0.0.30

Name:    srv01.example.com
Address:  198.0.0.60
Aliases:  db01.example.com

$ nslookup db02
Server:  ns001.example.com
Address:  198.0.0.30

Name:    srv01.example.com
Address:  198.0.0.60
Aliases:  db02.example.com

Both DB01 and DB02 DNS aliases point to the same server.

Let’s try with 10g

$ sqlplus -L scott/tiger@db01.example.com

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:46:53 2011

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB01.EXAMPLE.COM
SQL> quit
$ sqlplus -L scott/tiger@db02.example.com

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:47:33 2011

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB02.EXAMPLE.COM

Let’s try with 11g sqlplus

$ sqlplus -L scott/tiger@db01.example.com

SQL*Plus: Release 11.2.0.2.0 Production on Mon Feb 7 15:50:27 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus

It no longer works. Period. This is documented as Problem 556996.1 in Metalink.

A 10g tnsping will reveal

$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production on 07-FEB-2011 15:52:34

Copyright (c) 1997, 2006, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (80 msec)

In 10g the service_name is the connection dns alias used

In contrary, the 11g tnsping service name is null

$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:56:55

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)

The tnsping works, but the service_name is empty.

How to fix this?

1) you specify the SID in easy connect (yes, this is easy!)

$ tnsping db01.example.com:1521/db01.example.com

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:59:10

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)

2) you use 10g, or 10g behavior in 11g with patch 9271246 (available only on a limited number of plateforms, os and db versions),

3) you specify a default service for your listener

$ vi listener.ora
DEFAULT_SERVER_LISTENER=DB01
$ lsnrctl reload
$ sqlplus -L scott/tiger@db01 

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> 

This is a bit confusing as if you are servicing more than one database per listener, all dns aliases will default to the same database. So I would not recommend a default service name if there is more than one service name.

Return NULL if the column does not exist

It is a very common challenge for a dba to create scripts that work on every version.

How do you return NULL if a column does not exists?

Imagine I have a view that returns the table_name, column_name and retention_type of my LOBS.


SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,
  column_name,retention_type from user_lobs;

View created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1  C   YES
T2  C   NO

Let’s imagine I try to run this on an antique version of Oracle


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.1.0

SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,column_name,retention_type from user_lobs;

Warning: View created with compilation errors.

SQL> select * from v where table_name in ('T1','T2');
select * from v where table_name in ('T1','T2')
              *
ERROR at line 1:
ORA-04063: view "SCOTT.V" has errors

Obviously the RETENTION_TYPE did not exist in that version.

Let’s default this to NULL !


SQL> create or replace function retention_type return varchar2 is 
  begin return null; end;
/

Function created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1  C
T2  C

Very simple workaround, is not it?

On table reorg and index rebuild

Before you start reading : do not rebuild all your indexes and reorganize all your tables every Sunday morning. One day you may find one of your table missing or one index invalid.

Ok, let’s take a case where table reorg and index rebuild is good.

One of your table was never cleaned up, it grew to 100000000 rows over the last 5 years and you need only the last 2 weeks.

One of your task will be to create a job to clean up your table on a weekly basis to delete rows older than 14 days. This is beyond the scope of this post.

Now you have deleted more than 99% of your rows and you want to reorganize your table and rebuild the index, to gain disk space and performance.

Here is the demo


SQL> DROP TABLE t1;

Table dropped.

SQL> 
SQL> CREATE TABLE t1
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  );

Table created.

SQL> 
SQL> CREATE INDEX i1
  2    ON t1 (r);

Index created.

SQL> 
SQL> INSERT INTO t1
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> DROP TABLE t2;

Table dropped.

SQL> 
SQL> CREATE TABLE t2
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  )
  7  PARTITION BY HASH (r)
  8    (PARTITION T2_P1);

Table created.

SQL> 
SQL> CREATE INDEX i2
  2    ON t2 (r)
  3    LOCAL (PARTITION i2_p1);

Index created.

SQL> 
SQL> INSERT INTO t2
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> DROP TABLE t3;

Table dropped.

SQL> 
SQL> CREATE TABLE t3
  2  (
  3    r     NUMBER,
  4    txt   VARCHAR2 (4000),
  5    y     NUMBER
  6  )
  7  PARTITION BY RANGE (r)
  8    SUBPARTITION BY HASH (r)
  9       SUBPARTITION TEMPLATE (SUBPARTITION s1 )
 10    (PARTITION T3_P1 VALUES LESS THAN (maxvalue));

Table created.

SQL> 
SQL> CREATE INDEX i3
  2    ON t3 (r)
  3    LOCAL (PARTITION i3_p1
  4        (SUBPARTITION i3_p1_s1));

Index created.

SQL> 
SQL> INSERT INTO t3
  2    WITH t
  3         AS (    SELECT *
  4             FROM DUAL
  5       CONNECT BY LEVEL < 1001)
  6    SELECT ROWNUM r, LPAD ('X', 100, '.') txt, MOD (ROWNUM, 2) y
  7      FROM t, t;

1000000 rows created.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                             16,777,216             31
I2         INDEX PARTITION    I2_P1          16,777,216             31
I3         INDEX SUBPARTITION I3_P1_S1       16,777,216             31
T1         TABLE                            134,217,728             87
T2         TABLE PARTITION    T2_P1         134,217,728             16
T3         TABLE SUBPARTITION T3_P1_S1      134,217,728             16

I created 3 tables, T1, T2 which is partitioned, T3 which is subpartitioned. There is a slight difference in the number of extents between partitioned and non-partitioned table, but this ASSM, so it is fine.


SQL> DELETE FROM t1
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL> DELETE FROM t2
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL> DELETE FROM t3
  2       WHERE r > 1;

999999 rows deleted.

SQL> 
SQL> COMMIT;

Commit complete.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                             16,777,216             31
I2         INDEX PARTITION    I2_P1          16,777,216             31
I3         INDEX SUBPARTITION I3_P1_S1       16,777,216             31
T1         TABLE                            134,217,728             87
T2         TABLE PARTITION    T2_P1         134,217,728             16
T3         TABLE SUBPARTITION T3_P1_S1      134,217,728             16

I deleted the completed table but one row, however the size of the table and the number of extents did not change.


SQL> ALTER TABLE t1 MOVE;

Table altered.

SQL> 
SQL> ALTER INDEX I1 REBUILD;

Index altered.

SQL> 
SQL> ALTER TABLE t2 MOVE PARTITION T2_P1;

Table altered.

SQL> 
SQL> ALTER INDEX I2 REBUILD PARTITION I2_P1;

Index altered.

SQL> 
SQL> ALTER TABLE t3 MOVE SUBPARTITION T3_P1_S1;

Table altered.

SQL> 
SQL> ALTER INDEX I3 REBUILD SUBPARTITION I3_P1_S1;

Index altered.

SQL> 
SQL>  SELECT segment_name,
  2          segment_type,
  3          partition_name,
  4          sum(bytes),
  5          count(*)
  6     FROM user_extents
  7    WHERE segment_name IN ('T1', 'T2', 'T3', 'I1', 'I2', 'I3')
  8  group by
  9    segment_name,
 10          segment_type,
 11          partition_name
 12  ORDER BY segment_name, partition_name;

SEGMENT_NA SEGMENT_TYPE       PARTITION_     SUM(BYTES)       COUNT(*)
---------- ------------------ ---------- -------------- --------------
I1         INDEX                                 65,536              1
I2         INDEX PARTITION    I2_P1              65,536              1
I3         INDEX SUBPARTITION I3_P1_S1           65,536              1
T1         TABLE                                 65,536              1
T2         TABLE PARTITION    T2_P1           8,388,608              1
T3         TABLE SUBPARTITION T3_P1_S1        8,388,608              1

Now I have reorganized my tables and rebuilt my indexes.

The size dropped to 64K or 8M, and the fragmentation disappeared as the number of extents dropped to 1.

Note you cannot rebuild a whole partitioned index in one statement (ORA-14086) nor reorganize a whole partitioned table in one statement (ORA-14511). You need to loop through each partition or subpartition, as sketched below.
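
A minimal sketch of such a loop for T2 above (a composite-partitioned table like T3 would loop over USER_TAB_SUBPARTITIONS and use MOVE SUBPARTITION / REBUILD SUBPARTITION instead):

BEGIN
   FOR p IN (SELECT partition_name
               FROM user_tab_partitions
              WHERE table_name = 'T2')
   LOOP
      -- MOVE marks the local index partitions UNUSABLE
      EXECUTE IMMEDIATE 'ALTER TABLE t2 MOVE PARTITION ' || p.partition_name;
   END LOOP;
   FOR i IN (SELECT index_name, partition_name
               FROM user_ind_partitions
              WHERE index_name = 'I2')
   LOOP
      EXECUTE IMMEDIATE
         'ALTER INDEX ' || i.index_name || ' REBUILD PARTITION ' || i.partition_name;
   END LOOP;
END;
/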

EXECUTE IMMEDIATE ‘SELECT’ does not execute anything

I am not sure whether some tuning guy at Oracle decided that any SELECT statement passed to EXECUTE IMMEDIATE should simply be ignored, to save time by doing nothing.

exec execute immediate 'select 1/0 from dual connect by level<9999999999999'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

But it is really annoying… and easy to miss: the PL/SQL Language Reference does mention that a dynamic SELECT without an INTO or BULK COLLECT INTO clause never executes.

Imagine I want to increase all my sequences by 1000


SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

   NEXTVAL
----------
         1

SQL> begin
  2    for f in (select sequence_name n from user_sequences)
  3    loop
  4      execute immediate
  5        'select '||f.n||'.nextval from dual connect by level<=1000';
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

   CURRVAL
----------
         1

Hmm, it does not work. Does SELECT work at all? Yes, when it is a SELECT INTO :-)


SQL> drop sequence s;

Sequence dropped.

SQL> create sequence s;

Sequence created.

SQL> select s.nextval from dual;

   NEXTVAL
----------
         1

SQL> declare
  2    type t is table of number index by pls_integer;
  3    c t;
  4  begin
  5    for f in (select sequence_name n from user_sequences)
  6    loop
  7      execute immediate
  8        'select '||f.n||'.nextval from dual connect by level<=1000'
  9        bulk collect into c;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> select s.currval from dual;

   CURRVAL
----------
      1001

I wonder in which version this optimization/bug was introduced…

xml and powershell : using XPATH

I wrote about powershell [xml] yesterday : xml and powershell

Let’s see how to use XPATH expressions in Powershell


<emplist>
  <emp no="1">
    <ename>John</ename>
  </emp>
  <emp no="2">
    <ename>Jack</ename>
  </emp>
</emplist>

With the [xml] datatype, we create a navigator :

(([xml](GC foo.xml)).psbase.createnavigator().evaluate(
'//emplist/emp[@no="1"]/ename/text()'
))|%{$_.Value}

John
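
The same expression also works without creating a navigator, through SelectNodes (a sketch in the same style):

(([xml](GC foo.xml)).SelectNodes(
'//emplist/emp[@no="2"]/ename/text()'
))|%{$_.Value}

Jack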

I have not been seduced by a Microsoft product in ages, but I must say I fell in love with this goody much more than with perl, cygwin, or whatever python, dos, java, vb…

It is simply great to use on the command line and it gets my work done.

1:0 for Microsoft

Use your own wallet for EM

If you want to get rid of the self-signed certificate and the annoying security warnings in your browser, here is how to do it in 2 easy steps:

1) create a new wallet in [OMS]/sysman/wallet/console.servername/, either with owm (GUI) or with orapki (command line, see the sketch after these steps)
2) restart opmn http server

opmnctl stopproc process-type=HTTP_Server
opmnctl startproc process-type=HTTP_Server 
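
For step 1, an orapki sketch could look like this (password, DN and file names are placeholders to adapt):

cd $OMS_HOME/sysman/wallet/console.yourserver.dom.com
orapki wallet create -wallet . -pwd yourpassword -auto_login
orapki wallet add -wallet . -pwd yourpassword -dn "CN=yourserver.dom.com" -keysize 2048
orapki wallet export -wallet . -pwd yourpassword -dn "CN=yourserver.dom.com" -request server.csr
orapki wallet add -wallet . -pwd yourpassword -trusted_cert -cert ca.crt
orapki wallet add -wallet . -pwd yourpassword -user_cert -cert server.crt

Have server.csr signed by your certificate authority before importing ca.crt and server.crt.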

Later, to access your Enterprise Manager Grid Control homepage, use the Apache server and not the OMS upload server:


opmnctl status -l 
HTTP_Server http1:7779,http2:7201,https1:4445,https2:1159,http3:4890

https1 is Apache (4445) and https2 is Upload (1159)

So the url will be https://yourserver.dom.com:4445/em

Check note 1278231.1

RMAN duplicate does change your DB_NAME !

I had a very serious issue last Friday with errors as weird as ORA-00322: log name of thread num is not current copy. After a clone from Prod to Test, the prod crashed. Both databases are located on the same server (I am not a virtualization fanatic), and clones from prod to test have been done by most of my DBA readers.

What did change in 11g ?

Incredibly, in 11g, rman issues the following statement before restore

sql clone "alter system set  db_name = ''PROD'' ...
restore clone primary controlfile...

This is probably related to the capability of cloning a database without connecting to the target database.

At the end of the clone, RMAN sets the db_name back to TEST and recreates the TEST controlfile

sql statement: alter system set  db_name = ''TEST'' ...
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TEST" ...
...
LOGFILE
GROUP 1 ('/.../TEST/redo1.dbf')...

So what’s wrong with this? How could a clone from prod to test screw up the prod db???

Simple: the RMAN job did not complete

1) set new name, restore prod controlfile to test
2) restore issue, for instance ORA-19870: error while restoring backup piece archive1234
3) RMAN-03002: failure of Duplicate Db command

At this point, the datafile restore itself was finished, so we restored the missing archivelog, recovered and opened resetlogs.
What happened then???
Remember, at this stage you still have the prod controlfile (and the prod db_name), so by doing an alter database open resetlogs, the production redologs get overwritten without notice !

This is a fairly important change that could really hurt if you are cloning two databases on the same server.

In case you are trying to save a failed database clone, make sure you check db_name and also v$logfile before doing an ALTER DATABASE OPEN RESETLOGS (see the quick check below)!!!
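
A quick sanity check could be as simple as this (my sketch):

select name from v$database;
show parameter db_name
select group#, member from v$logfile;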

[alert] Oracle agents on AIX may not work in 2011 with OMS10g

Fuadar recently wrote : Grid Control 10.2.0.5 AIX Alert

Basically, if you have a 10g oms server (any OS / any release) and AIX agents (any release), then according to Note 1171558.1 the communication between the [10g] Oracle Management Service and the [AIX] Management Agents will break, due to a default self-signed certificate expiring on 31 Dec 2010.

There is more than one way to solve this

1) You upgrade your oms to 11g. Good luck doing this before the end of the year…

2) You upgrade your oms to 10.2.0.5, apply patch 10034237 on your oms, create a new certificate, resecure all your agents. Pretty heavy stuff I promise.

3) You use a Third Party Certificate. This may work. I have not tested this for you.

4) You switch from https to http… this is of course not an acceptable workaround, as the connection between the agent and the oms will be insecure, but it may save your Silvester party.

  • allow both secure and unsecure connections to the oms
  • on all your OMS instances

    
    opmnctl stopall
    emctl secure unlock
    opmnctl startall

  • switch all your agents to http
  • On all your AIX hosts with an agent installed

    
    emctl unsecure agent -omsurl http://omsserver:4890/em/*

    You can find the port for unsecure in your oms server in OMSHOME/sysman/config/emoms.properties under oracle.sysman.emSDK.svlt.ConsoleServerPort.
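
    For example (a sketch):

    grep ConsoleServerPort OMSHOME/sysman/config/emoms.properties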

Happy holidays !


How to solve ORA-4068

I was amazed by this oneliner on Stack Overflow.

First, let me introduce you to my old foe, ORA-04068 :
Session 1:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  X NUMBER;Y NUMBER;END;
  3  /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  X NUMBER;Z NUMBER;END;
  3  /

Package created.

Session 1:

SQL> exec P.X := 2
BEGIN P.X := 2; END;

*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "SCOTT.P" has been invalidated
ORA-04065: not executed, altered or dropped package "SCOTT.P"
ORA-06508: PL/SQL: could not find program unit being called: "SCOTT.P"
ORA-06512: at line 1

Changing the package in session 2 did invalidate the package variable in session 1.

And the PRAGMA that saves the world : PRAGMA SERIALLY_REUSABLE

Session 1:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  PRAGMA SERIALLY_REUSABLE;X NUMBER;Y NUMBER;END;
  3  /

Package created.

SQL> exec P.X := 1

PL/SQL procedure successfully completed.

Session 2:

SQL> CREATE OR REPLACE PACKAGE P AS 
  2  PRAGMA SERIALLY_REUSABLE;X NUMBER;Z NUMBER;END;
  3  /

Package created.

Session 1:

SQL> exec P.X := 2

PL/SQL procedure successfully completed.

Oh yes!
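
One caveat worth adding, easy to verify: the state of a serially reusable package only lives for the duration of a single server call, so the value assigned above does not survive into the next call.

set serveroutput on
exec P.X := 3
exec dbms_output.put_line(nvl(to_char(P.X), 'NULL'))

The second call prints NULL, because the package state was reset at the end of the first one.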

make count(*) faster

I just installed Oracle Enterprise Linux on my new notebook.

I wanted to check how far I could improve the performance of a count(*).


SQL> drop table big_emp;

table big_emp dropped.
258ms elapsed

SQL> create table big_emp as 
  with l as(select 1 from dual connect by level<=3000) 
  select rownum empno,ename,job,mgr,hiredate,sal,comm,deptno from emp,l,l

table big_emp created.
330,390ms elapsed

SQL> alter table big_emp add primary key(empno)

table big_emp altered.
481,503ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
2,701ms elapsed

SQL> alter system flush shared_pool
system flush altered.
137ms elapsed

SQL> select count(*) from big_emp
COUNT(*)               
---------------------- 
126000000              

9,769ms elapsed

SQL> select count(*) from big_emp
COUNT(*)               
---------------------- 
126000000              

8,157ms elapsed

SQL> alter table big_emp drop primary key

table big_emp altered.
905ms elapsed

SQL> alter table big_emp add primary key(empno) 
  using index (
    create index big_i on big_emp(empno) 
    global partition by hash(empno) 
    partitions 16 parallel 16)

table big_emp altered.
974,300ms elapsed

SQL> alter system flush buffer_cache

system flush altered.
601ms elapsed

SQL> alter system flush shared_pool

system flush altered.
140ms elapsed

SQL> select count(*) from big_emp

COUNT(*)               
---------------------- 
126000000              

5,201ms elapsed

SQL> select count(*) from big_emp

COUNT(*)               
---------------------- 
126000000              

2,958ms elapsed
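
To check that the count is really served by a parallel fast full scan of the partitioned index, a plan check along these lines could be used (my sketch, not part of the timed run):

explain plan for select count(*) from big_emp;
select * from table(dbms_xplan.display);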

As it is on a notebook, I suppose the benefit of partitioning is not as good as what you could get on your server with lots of fast disks and lots of CPUs, but I am pretty happy with the results.

It is still counting 126 Million rows in less than 3 seconds :-)

Thanks for the very kind sponsor of the notebook !

SPARC Supercluster

Oracle buying Sun was an exciting announcement 20 months ago.

What did change in the Solaris/Oracle Database world?

First, Oracle delivered Exadata on Sun Hardware (x86_64).
Second, they delivered Exadata on Sun Solaris Operating System (x86_64).

But now they have announced a combination of software and hardware that will run the Oracle database faster than anything ever before.

I am happy to read that Oracle is still investing in R&D on the SPARC processor server line !