Category Archives: 11gR2

distinct listagg

One limitation of listagg: you cannot use DISTINCT. Okay, there are plenty of distinct-capable listagg workarounds, but getting the real listagg to do it is a bit of a headache

With one listagg


SELECT 
  DEPTNO,
  LISTAGG (JOB, ',') 
    WITHIN GROUP (ORDER BY JOB) JOBS
FROM (
  SELECT DISTINCT DEPTNO, JOB  FROM EMP)
GROUP BY DEPTNO;

    DEPTNO JOBS                          
---------- ------------------------------
        10 CLERK,MANAGER,PRESIDENT       
        20 ANALYST,CLERK,MANAGER         
        30 CLERK,MANAGER,SALESMAN        

ok, that was not too hard, but it gets more difficult with two listaggs


SELECT 
  LISTAGG (job, ',') 
    WITHIN GROUP (ORDER BY job) jobs,
  LISTAGG (deptno, ',') 
    WITHIN GROUP (ORDER BY deptno) deptnos
FROM (
  SELECT 
    DECODE(
      ROW_NUMBER () OVER (
        PARTITION BY deptno 
        ORDER BY 1),
      1, deptno) deptno,
    DECODE (
      ROW_NUMBER () OVER (
        PARTITION BY job 
        ORDER BY 1),
      1, job) job
  FROM emp
);
DEPTNOS  JOBS                                      
-------- ----------------------------------------
10,20,30 ANALYST,CLERK,MANAGER,PRESIDENT,SALESMAN

Too bad the DISTINCT keyword was not implemented
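
For reference, here is one of those distinct-capable workarounds: deduplicate after aggregation with REGEXP_REPLACE. Just a sketch; it assumes the elements are sorted (so duplicates are adjacent) and contain no comma.


SELECT 
  DEPTNO,
  -- collapse adjacent duplicates, then drop the trailing separator
  RTRIM(
    REGEXP_REPLACE(
      LISTAGG (JOB, ',') 
        WITHIN GROUP (ORDER BY JOB) || ',',
      '([^,]+,)\1+', '\1'),
    ',') JOBS
FROM EMP
GROUP BY DEPTNO;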

Rman backup compression

Did you know you can make your backup at least twice as fast with a single line?

Demo :


RMAN> backup as compressed backupset database;
Starting backup at 2013-06-05_13:08:01
...
Finished backup at 2013-06-05_13:13:59

6 minutes for a compressed backup on a NAS with 24 channels and 100GB of raw data. Not bad. But look at this!


RMAN> configure compression algorithm 'low';
new RMAN configuration parameters are successfully stored
RMAN> backup as compressed backupset database;
Starting backup at 2013-06-05_14:06:09
...
Finished backup at 2013-06-05_14:08:29
RMAN> configure compression algorithm clear;
RMAN configuration parameters are successfully reset to default value

By configuring this magic parameter, it is now more than twice as fast! This is incredible!

Go to your cashier and grab some coins to buy this amazing Advanced Compression option!
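
If you wonder which algorithms your release offers and which one is the default, there is a dictionary view for it. A quick sketch (column names as I remember them in 11gR2):


-- BASIC comes with the base license; LOW, MEDIUM and HIGH require
-- the Advanced Compression option
select algorithm_name, algorithm_description, is_default
from v$rman_compression_algorithm;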

On deferred segment creation and truncate

One year ago, I wrote about a side effect of deferred segment creation and drop tablespace :
on deferred segment creation

Today I discovered one more side effect :

In the good old days (I read once that you are old as soon as you start talking about the good old days) and according to the doc :
You cannot truncate the parent table of an enabled foreign key constraint. You must disable the constraint before truncating the table


SQL> alter session set deferred_segment_creation=false;

Session altered.

SQL> create table t1(x number primary key);

Table created.

SQL> create table t2(x number references t1);

Table created.

SQL> truncate table t1;
truncate table t1
               *
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

This however does not apply if you have deferred segment creation and empty tables


SQL> alter session set deferred_segment_creation=true;

Session altered.

SQL> create table t1(x number primary key);

Table created.

SQL> create table t2(x number references t1);

Table created.

SQL> truncate table t1;

Table truncated.

Table truncated. Ok, what’s the point in truncating an empty table …
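
And for the record, when the segments do exist, the documented way is still the one quoted above: disable the child constraint, truncate, re-enable. A sketch (the constraint name below is hypothetical; look yours up in USER_CONSTRAINTS):


-- find the referential constraint on the child table
select constraint_name from user_constraints
where table_name = 'T2' and constraint_type = 'R';

alter table t2 disable constraint SYS_C0012345;  -- hypothetical name
truncate table t1;
alter table t2 enable constraint SYS_C0012345;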

TNSNAMES and Active Directory

It is highly probable you already have MS AD in your company. Probably you use a local tnsnames.ora. Apart from setting up an Oracle Internet Directory or Oracle Virtual Directory, there is one more option that you may want to consider : AD.

Ok, here is a bit of a road map :

– Schema Extension :
extending the schema is irreversible, so you will have to test this properly and explain why you need it (removing the need to distribute a tnsnames, central administration) to your Microsoft Admin friends. To extend the schema, use Oracle Network Configuration Assistant. The step-by-step guide is there

– Anonymous or authenticated bind
prior to 11g, you needed to allow anonymous bind on the AD server. Your Security Admin friends will probably prefer the 11g approach of setting NAMES.LDAP_AUTHENTICATE_BIND to true. If you set NAMES.LDAP_AUTHENTICATE_BIND to true, the Oracle clients will use your windows credentials to do the tnsnames resolution.

For SQL Developer, use Connection Type=TNS, Connect Identifier=DB01. Connection Type=LDAP does not work with authenticated bind

– Import the tnsnames and / or create new entries
all done with Net Manager, pretty intuitively. Except that you will use “Directory –> Export Net Service Names” to import the tnsnames in AD

– Configure the clients
sqlnet.ora
NAMES.DIRECTORY_PATH= (LDAP)
NAMES.LDAP_AUTHENTICATE_BIND=1

ldap.ora
DEFAULT_ADMIN_CONTEXT = “DC=example,DC=com”
DIRECTORY_SERVER_TYPE = AD

– test it!
tnsping first


C:\> tnsping db01

TNS Ping Utility for 64-bit Windows: Version 11.2.0.2.0 - Production on 10-NOV-2011 14:42:16

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
C:\oracle\product\11.2.0\client_3\network\admin\sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=srv01)(Port=1521)))(CONNECT
_DATA=(SID=DB01)))
OK (20 msec)

I wrote a simple java program to check the connection :


import java.sql.*;
import oracle.jdbc.pool.*;
public class HelloWorld {
  public  static void main(String[] args) throws SQLException {
    OracleDataSource ods = new OracleDataSource();
    ods.setDriverType("oci");
    ods.setTNSEntryName("DB01");
    ods.setUser("scott");
    ods.setPassword("tiger");
    ResultSet res = ods.
      getConnection().
        prepareCall("select 'Hello World' txt from dual").
          executeQuery();
    res.next();
    System.out.println(res.getString("TXT"));
  }
}


C:\> set PATH=C:\oracle\product\11.2.0\client_3\bin
C:\> javac -classpath .;C:\oracle\product\11.2.0\client_3\jdbc\lib\ojdbc6.jar HelloWorld.java
C:\> java -classpath .;C:\oracle\product\11.2.0\client_3\jdbc\lib\ojdbc6.jar -Doracle.net.tns_admin=C:\oracle\product\11.2.0\client_3\network\admin HelloWorld
Hello World

If you get an issue with ocijdbc11, you either do not have the *ocijdbc11* library in your PATH / LD_LIBRARY_PATH / LIBPATH or you use the wrong driver. For instance if you run a 32-bit JVM, you cannot use the 64-bit OCI libraries.

If you use a jdbc thin ldap resolution and have no anonymous bind, it will return an error


import java.sql.*;
public class HelloWorld {
  public  static void main(String[] args) throws SQLException {
    DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
    ResultSet res = DriverManager.
      getConnection("jdbc:oracle:thin:@ldap://example.com:389/db01,cn=OracleContext", "scott", "tiger").
      prepareCall("select 'Hello World' txt from dual").
      executeQuery();
    res.next();
    System.out.println(res.getString("TXT"));
  }
}


C:\>java -classpath .;C:\oracle\product\11.2.0\client_3\jdbc\lib\ojdbc6.jar HelloWorld
Exception in thread "main" java.sql.SQLException: I/O-Fehler: JNDI Package failure avax.naming.NamingException: [LDAP:error code 1 - 000004DC: LdapErr: DSID-0C0906DC, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, v1db0 ]; remaining name 'cn=db01,cn=OracleContext'
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:419)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:536)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
        at java.sql.DriverManager.getConnection(DriverManager.java:525)
        at java.sql.DriverManager.getConnection(DriverManager.java:171)
        at HelloWorld.main(HelloWorld.java:5)

As the error message says, the ldap server requires a bind

Let’s try to bind


import java.sql.*;
import java.util.*;
import oracle.jdbc.pool.*;
public class HelloWorld {
  public  static void main(String[] args) throws SQLException {
    OracleDataSource ods = new OracleDataSource();
    ods.setDriverType("thin");
    Properties prop = new Properties();
    prop.put("java.naming.security.authentication", "simple");
    prop.put("java.naming.security.principal","CN=Laurent Schneider,CN=Users,DC=example,DC=com");
    prop.put("java.naming.security.credentials", "my_ad_pw");
 
    ods.setConnectionProperties(prop);

    ods.setURL("jdbc:oracle:thin:@ldap://w2k8.local:389/db01,cn=OracleContext,DC=example,DC=com");
    ods.setUser("scott");
    ods.setPassword("tiger");
    ResultSet res = ods.
      getConnection().
        prepareCall("select 'Hello World' txt from dual").
          executeQuery();
    res.next();
    System.out.println(res.getString("TXT"));
  }
}

This works!

_optimizer_random_plan parameter

I was trying to find a workaround for a bug in 11.2.0.2

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
         2          3

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER|      |     1 |     4 |     4   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Since DUMMY is not 'Y', the first inline view returns no rows, so B should be NULL in the outer join, not 2; the result above is wrong.

Ok, I tried :


alter session set "_optimizer_random_plan"=1;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

Execution Plan
----------------------------------------------------------
Plan hash value: 837538736

-------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1146 |  5730 |    27G|
|   1 |  MERGE JOIN OUTER    |      |   603K|  2946K|    27G|
|*  2 |   TABLE ACCESS FULL  | DUAL |   392K|   767K|   136K|
|   3 |   VIEW               |      |     2 |     6 | 69180 |
|*  4 |    FILTER            |      |       |       |       |
|*  5 |     TABLE ACCESS FULL| DUAL |   123K|   240K| 69180 |
-------------------------------------------------------------

Cool, I got correct results! The fact that the cost jumped from 4 to 27 billion is just a minor annoyance, I suppose :twisted:

I also tried


alter session set "_optimizer_random_plan"=0; -- default

alter session  set "_complex_view_merging"=false;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     5 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER  |      |     1 |     5 |     2   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|   3 |   VIEW               |      |     1 |     3 |            |          |
|*  4 |    FILTER            |      |       |       |            |          |
|*  5 |     TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------

The cost is now 2 instead of 4 and the results are correct

The first thing I did was open an SR; now I am impatiently waiting for Oracle Support guidance…
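
In the meantime, another thing I would try is to disable view merging for this statement only, with hints instead of a session parameter. Just a sketch, I have not verified it against this particular bug:


SELECT /*+ NO_MERGE(v1) NO_MERGE(v2) */ *
FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y') v1, 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') v2
WHERE v2.C = v1.B(+);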

how to run UTL_TCP, UTL_SMTP and the like in 11g

After we upgraded a db to 11g, someone complained about an ORA-24248: XML DB extensible security not installed

I thought it would be easy to revert to the 10g mechanism. Probably not, after reading Marco :
The default behavior for access control to network utility packages has been changed to disallow network operations to all nonprivileged users. This default behavior is different from, and is incompatible with, previous versions of Oracle Database.

I do not want to install XDB to send mail. Sounds like overkill…

Ok, as a hard-core dba I created a wrapper in the sys schema, something you probably should not do!

ex:
10g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
UTL_INADDR.GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

after upgrade
11g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual
       *
ERROR at line 1:
ORA-24248: XML DB extensible security not installed
ORA-06512: at "SYS.UTL_INADDR", line 19
ORA-06512: at "SYS.UTL_INADDR", line 40
ORA-06512: at line 1

My workaround to “disable” Fine-Grained Access to External Network Services


SQL> conn / as sysdba
SQL> create or replace function my_utl_inaddr_GET_HOST_ADDRESS(HOST VARCHAR2) return VARCHAR2 is begin return utl_inaddr.GET_HOST_ADDRESS(HOST); end;
  2  /

Function created.

SQL> grant execute on my_utl_inaddr_GET_HOST_ADDRESS to scott;

Grant succeeded.
SQL> conn scott/tiger
Connected.
SQL> select sys.my_utl_inaddr_GET_HOST_ADDRESS('localhost') from dual;
SYS.MY_UTL_INADDR_GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

If you want to use the recommended way of granting access to utl_tcp and the like, check note 453756.1
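
For the record, the recommended way in 11g (it does require XDB) looks roughly like this; a sketch with DBMS_NETWORK_ACL_ADMIN, to be adapted from the note:


-- run as sysdba: create an ACL granting the resolve privilege to SCOTT
-- and assign it to the host(s) he may resolve
begin
  dbms_network_acl_admin.create_acl(
    acl         => 'scott_resolve.xml',
    description => 'resolve privilege for SCOTT',
    principal   => 'SCOTT',
    is_grant    => true,
    privilege   => 'resolve');
  dbms_network_acl_admin.assign_acl(
    acl  => 'scott_resolve.xml',
    host => 'localhost');
  commit;
end;
/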

Do not upgrade 11.2.0.1 to 11.2.0.1

If you run @?/rdbms/admin/catupgrd from an 11.2.0.1 Oracle Home against a database that is already at 11.2.0.1, you may later realize some objects are missing (probably related to deferred segment creation).


SQL>   delete from t1
  2    where id in (
  3      select ca.id from ca, p
  4      where p.no_form like '%02.98'
  5      and p.id = ca.prod_id
  6    );
  delete from t1
              *
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],
[56480], [], [], [], [], [], [], [], [], [], []

The workaround on metalink is amazing :

  • Do not run catupgrd in 11.2.0.1 against a database that is already at 11.2.0.1

Well, the only solution is to restore your database ! How painful :-(

Reduce the number of commits

“Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms”
Op. Cit. Oracle Database Performance Tuning Guide 11g Release 2 (11.2)

Ok, let’s do this :-)

  • Finding: Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
  • Action: Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.

If your application is committing too often (maybe Enterprise Manager told you so), you may want to commit less often. Or maybe just do some magic to impress your customer.

As seen on Metalink 857576.1: if you can afford data loss, if you cannot change your application, and if you are the kind of dba who cares more about making a badly written application perform well than about data integrity, just have a quick look…

Ok, ins.sql is 30’000 insert and commits,


insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;

Let’s check the time on my old-fashioned pc…


SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:40:55.881948 +02:00
SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:41:19.115447 +02:00

23.3 seconds

Now take the risk of losing some commits (but yes, it is documented, not a hidden parameter) to boost your performance


SQL> alter session set commit_wait=nowait commit_logging=immediate;
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:37.284027 +02:00

SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:54.084547 +02:00

Hey hey hey, 16.8 seconds only :-)

PS: it used to be called commit_write in 10g
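
For the record, the 10g flavor of the same trick would be something like this (a sketch, with the same data-loss warning):


SQL> alter session set commit_write='batch,nowait';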

nothing in user_segments

I wrote on deferred segment creation recently.

Today I was looking for specific storage attributes that I used to find in user_segments. They are no longer here. Where are they then?

test case :
create table t(x clob) lob(x) store as securefile x (retention max storage(maxsize 8192000000));

Where do I find the RETENTION MAX MAXSIZE of my securefile? Once the segment is created, it is easy to find it in user_segments

SQL> insert into t values('x');
1 row created.

 SQL> select max_size from user_segments where segment_name='X';
  MAX_SIZE
----------
   1000000

1000000 blocks of 8192 bytes is my specified 8192000000 bytes.

let’s go back

SQL> truncate table t drop all storage;

Table truncated.

SQL> select max_size from user_segments where segment_name='X';

no rows selected

It is not there.

You must dig into the SYS tables to find it. Specifically, there is a new table for deferred segments

SQL> select maxsiz_stg from sys.deferred_stg$ where obj# in (select obj# from sys.obj$ where name='X');
MAXSIZ_STG
----------
   1000000

Of course you should not base your business logic on internal tables that may change in a future release. A more appropriate workaround would be to disable deferred segment creation :)
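
If you only need it for a few objects, the per-table clause is enough. A sketch of the same test case with an immediate segment (so USER_SEGMENTS is populated right away):


create table t(x clob) 
  segment creation immediate
  lob(x) store as securefile x (retention max storage(maxsize 8192000000));

select max_size from user_segments where segment_name = 'X';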

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time during releases and lots of space if you have a (legacy) application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t1(x number) segment creation deferred;

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment. This behavior will bring its share of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without INCLUDING CONTENTS, the drop tablespace will succeed and the table will remain

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query


SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES  UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

then you can recreate it and -if wished- drop it with contents


SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the content of the tablespaces, you probably should check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA views too.

I met this feature while using transportable tablespaces. Transportable tablespace will transport the objects that belong to the tablespace even when they have no segment.

There is a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is to not use the feature by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only new objects.
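
The parameter change itself is a one-liner, something like:


alter system set deferred_segment_creation=false scope=both;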

I am always very cautious about those major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible, but the dba scripts that used to work in previous releases may break here… This is obviously the price to pay to get new features, right?

my first ADR package

You got an internal error and want to create a zip of all relevant files.

First, let’s generate an internal error. I found a quick way to generate an ora-600 or an ora-700 (which is a harmless ora-600 in 11g, read 737878.1) on oradeblog

SQL> oradebug unit_test dbke_test dde_flow_kge_soft foo bar baz
Statement processed.

Now start the command-line interface and set the ADR home

$ adrci
adrci> show home
ADR Homes: 
diag/tnslsnr/precision/listener
diag/tnslsnr/localhost/listener
diag/rdbms/lsc02/LSC02
diag/rdbms/lsc03/LSC03
diag/rdbms/lsc01/LSC01
adrci> set homepath diag/rdbms/lsc02/LSC02
adrci> show home
ADR Homes: 
diag/rdbms/lsc02/LSC02

Check the incidents :

adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/lsc02/LSC02:
*************************************************************************
INCIDENT_ID          PROBLEM_KEY       CREATE_TIME                              
-------------------- ----------------- --------------------------------- 
53065                ORA 700 [foo]     2011-03-14 18:20:24 +01:00       
1 rows fetched

Create the package metadata :


adrci> IPS CREATE PACKAGE INCIDENT 53065
Created package 1 based on incident id 53065, correlation level typical
adrci> ips SHOW PACKAGE 1
DETAILS FOR PACKAGE 1:
   PACKAGE_ID             1
   PACKAGE_NAME           ORA700foo_20110314182607
   PACKAGE_DESCRIPTION    
   DRIVING_PROBLEM        1
   DRIVING_PROBLEM_KEY    ORA 700 [foo]
   DRIVING_INCIDENT       53065
   DRIVING_INCIDENT_TIME  2011-03-14 18:20:24.304000 +01:00
   STATUS                 New (0)
   CORRELATION_LEVEL      Typical (2)
   PROBLEMS               1 main problems, 0 correlated problems
   INCIDENTS              1 main incidents, 0 correlated incidents
   INCLUDED_FILES         4
   SEQUENCES              Last 0, last full 0, last base 0
   UNPACKED               FALSE
   CREATE_TIME            2011-03-14 18:26:07.566961 +01:00
   UPDATE_TIME            2011-03-14 18:26:07.620324 +01:00
   BEGIN_TIME             N/A
   END_TIME               N/A
   FLAGS                  0

The metadata files (in $ORACLE_BASE/rdbms/db_name/sid/*.ams) are in an Oracle binary format

Create the package zip file :

adrci> IPS GENERATE PACKAGE 1 in /home/lsc/foo
Generated package 1 in file /home/lsc/foo/ORA700foo_20110314182607_COM_1.zip, 
mode complete

This zip file contains all traces and alerts that you may ever need to diagnose and resolve the issue

adrci>  ips show files package 1
   FILE_ID                1
   FILE_LOCATION          <ADR_HOME>/incident/incdir_53065
   FILE_NAME              LSC02_ora_14163_i53065.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2
   FILE_LOCATION          <ADR_HOME>/incident/incdir_53065
   FILE_NAME              LSC02_ora_14163_i53065.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                3
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_ora_14163.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                4
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_ora_14163.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                5
   FILE_LOCATION          <ADR_HOME>/alert
   FILE_NAME              log.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                6
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              alert_LSC02.log
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                7
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_diag_5247.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                8
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_diag_5247.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                12
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_mmon_5265.trc
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                13
   FILE_LOCATION          <ADR_HOME>/trace
   FILE_NAME              LSC02_mmon_5265.trm
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2007
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_CONFIGURATION.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2008
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2009
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_INCIDENT.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2010
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_FILE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2011
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_PACKAGE_HISTORY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2012
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_FILE_METADATA.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2013
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              IPS_FILE_COPY_LOG.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2014
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_DEF.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2015
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_PARAMETER_DEF.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2016
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2017
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_ACTION_PARAMETER.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2018
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_INCIDENT_TYPE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2019
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              DDE_USER_INCIDENT_ACTION_MAP.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2020
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCIDENT.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2021
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCCKEY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2022
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              INCIDENT_FILE.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2023
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              PROBLEM.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2024
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              HM_RUN.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2025
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/export
   FILE_NAME              EM_USER_ACTIVITY.dmp
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2026
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              config.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2027
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1/crs
   FILE_NAME              crsdiag.log
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2028
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              metadata.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2029
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.xml
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2030
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.html
   LAST_SEQUENCE          1
   EXCLUDE                Included

   FILE_ID                2031
   FILE_LOCATION          <ADR_HOME>/incpkg/pkg_1/seq_1
   FILE_NAME              manifest_1_1.txt
   LAST_SEQUENCE          1
   EXCLUDE                Included

Even an html file

Manifest for package 1

Manifest details

Package ID 1
Creation time 2011-03-14 18:26:07.566961 +01:00
Archive time 2011-03-14 18:37:14.499389 +01:00
Sequence 1
Package mode Complete
Package status Generating
Package flags Flags: (No flags set)

Contents summary

Main problems 1
Correlated problems 0
Main incidents 1
Correlated incidents 0

ADR details

Product rdbms
Target lsc02
Instance LSC02
ADR base /u01/app/oracle
ADR home /u01/app/oracle/diag/rdbms/lsc02/LSC02

Main problems

Problem ID Problem key Incidents included Incidents total
1 ORA 700 [foo] 1 1

Correlated problems

Problem ID Problem key Incidents included Incidents total

Main incidents

Incident ID Problem ID Error Message Incident time
53065 1 ORA-700 [foo] [bar] [baz] 2011-03-14 18:20:24.304000 +01:00

Correlated incidents

Incident ID Problem ID Error Message Incident time

Files

File name Location Size File time
LSC02_ora_14163_i53065.trm <ADR_HOME>/incident/incdir_53065 54828 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163_i53065.trc <ADR_HOME>/incident/incdir_53065 2433968 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trc <ADR_HOME>/trace 1308 2011-03-14 18:20:26.000000 +01:00
LSC02_ora_14163.trm <ADR_HOME>/trace 210 2011-03-14 18:20:24.000000 +01:00
log.xml <ADR_HOME>/alert 885849 2011-03-14 18:20:27.000000 +01:00
alert_LSC02.log <ADR_HOME>/trace 164969 2011-03-14 18:20:27.000000 +01:00
LSC02_diag_5247.trc <ADR_HOME>/trace 1287 2011-03-14 18:20:26.000000 +01:00
LSC02_diag_5247.trm <ADR_HOME>/trace 77 2011-03-14 18:20:26.000000 +01:00
LSC02_mmon_5265.trc <ADR_HOME>/trace 8703 2011-03-14 18:33:43.000000 +01:00
LSC02_mmon_5265.trm <ADR_HOME>/trace 838 2011-03-14 18:33:43.000000 +01:00
IPS_CONFIGURATION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2818 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 476 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 193 2011-03-14 18:37:13.000000 +01:00
IPS_PACKAGE_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 1126 2011-03-14 18:37:14.000000 +01:00
IPS_PACKAGE_HISTORY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 280 2011-03-14 18:37:13.000000 +01:00
IPS_FILE_METADATA.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 2888 2011-03-14 18:37:14.000000 +01:00
IPS_FILE_COPY_LOG.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 214 2011-03-14 18:37:14.000000 +01:00
DDE_USER_ACTION_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 908 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER_DEF.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 708 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 204 2011-03-14 18:37:13.000000 +01:00
DDE_USER_ACTION_PARAMETER.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 198 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_TYPE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 353 2011-03-14 18:37:13.000000 +01:00
DDE_USER_INCIDENT_ACTION_MAP.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 166 2011-03-14 18:37:13.000000 +01:00
INCIDENT.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 700 2011-03-14 18:37:13.000000 +01:00
INCCKEY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 303 2011-03-14 18:37:13.000000 +01:00
INCIDENT_FILE.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 268 2011-03-14 18:37:13.000000 +01:00
PROBLEM.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 395 2011-03-14 18:37:13.000000 +01:00
HM_RUN.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 342 2011-03-14 18:37:14.000000 +01:00
EM_USER_ACTIVITY.dmp <ADR_HOME>/incpkg/pkg_1/seq_1/export 207 2011-03-14 18:37:14.000000 +01:00
config.xml <ADR_HOME>/incpkg/pkg_1/seq_1 56180 2011-03-14 18:37:14.000000 +01:00
crsdiag.log <ADR_HOME>/incpkg/pkg_1/seq_1/crs 184 2011-03-14 18:37:14.000000 +01:00
metadata.xml <ADR_HOME>/incpkg/pkg_1/seq_1 556 2011-03-14 18:37:14.000000 +01:00

But did Oracle Support ever ask you for an ADR package? Or do they still ask for RDA?

I used to select, zip and send trace files manually; I may consider ADR packages by my next ORA-600!
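
And if you (or Support) want to look inside such a package without unzipping it by hand, adrci can unpack it; a quick sketch:

adrci> ips unpack file /home/lsc/foo/ORA700foo_20110314182607_COM_1.zip into /tmp/unpacked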

How does random=random evaluate?

I had fun answering a question about random on the technical forums.

What is in your opinion the boolean value of DBMS_RANDOM.VALUE=DBMS_RANDOM.VALUE?

Or, how many rows would
select * from dual where dbms_random.value=dbms_random.value;
return?

It is wrong to assume the function will be evaluated twice.

The short answer would be : do not rely on random plsql functions in SQL…

here is a test case in 11.2.0.2 and 10.2.0.3


SQL> select version from v$instance;
VERSION
-----------------
10.2.0.3.0

SQL> select * from dual where dbms_random.value=dbms_random.value;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DBMS_RANDOM"."VALUE"()="DBMS_RANDOM"."VALUE"())

In 10g, the function is executed twice per row, and the chance to have two different values is more than 99.9999…%.


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.2.0

SQL> select * from dual where dbms_random.value=dbms_random.value
D
-
X

Execution Plan
----------------------------------------------------------
Plan hash value: 1224005312

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     2   (0)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DBMS_RANDOM"."VALUE"() IS NOT NULL)

Here the optimizer rewrites the equality into an IS NOT NULL check, so the function is evaluated only once per row, and since the result is never null, the predicate always evaluates to true.

Is this a bug or a feature?

In my opinion it is a confusing tuning enhancement that may break some programs.

In this thread, I mentioned that PRIOR dbms_random.value IS NOT NULL is an unsafe construct.
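
If you really want each row to compare two independent random values, one possible sketch is to evaluate them into columns first; the ROWNUM keeps the inline view from being merged. Not guaranteed against every future optimizer rewrite, so test on your version:


select * from (
  select dbms_random.value v1, dbms_random.value v2, rownum rn from dual
) where v1 = v2;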

EZCONNECT and HOSTNAME resolution methods

EZCONNECT is the easy connect protocol, available in 10g, whenever you want to connect to a database without tnsnames and without ldap.

$ grep -iw directory_path $TNS_ADMIN/sqlnet.ora
names.directory_path=EZCONNECT
$ sqlplus scott/tiger@//srv01:1521/db01

connect to server srv01 on port 1521 for service db01

HOSTNAME was the old-fashioned way to connect to a database, where hostname = sid and port = 1521. In this regard, EZCONNECT is just an extension of the hostname method.

Typical HOSTNAME usage, which is the same as EZCONNECT with the default port 1521:
sqlplus scott/tiger@db01
connect to server db01 on port 1521 for service db01

There is a behavior change between 10g and 11g. In 10g, the default service name defaulted to the DNS alias used to connect. In 11g, the default is null.

$ nslookup db01
Server:  ns001.example.com
Address:  198.0.0.30

Name:    srv01.example.com
Address:  198.0.0.60
Aliases:  db01.example.com

$ nslookup db02
Server:  ns001.example.com
Address:  198.0.0.30

Name:    srv01.example.com
Address:  198.0.0.60
Aliases:  db02.example.com

Both DB01 and DB02 DNS aliases point to the same server.

Let’s try with 10g

$ sqlplus -L scott/tiger@db01.example.com

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:46:53 2011

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB01.EXAMPLE.COM
SQL> quit
$ sqlplus -L scott/tiger@db02.example.com

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Feb 7 15:47:33 2011

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select global_name from global_name;
GLOBAL_NAME
---------------------------------------
DB02.EXAMPLE.COM

Let’s try with 11g sqlplus

$ sqlplus -L scott/tiger@db01.example.com

SQL*Plus: Release 11.2.0.2.0 Production on Mon Feb 7 15:50:27 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus

It no longer works. Period. This is documented as Problem 556996.1 in Metalink.

A 10g tnsping will reveal

$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production on 07-FEB-2011 15:52:34

Copyright (c) 1997, 2006, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (80 msec)

In 10g, the service_name is the DNS alias used for the connection.

In 11g, on the contrary, the tnsping service name is null

$ tnsping db01.example.com:1521

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:56:55

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)

The tnsping works, but the service_name is empty.

How to fix this?

1) you specify the service name in easy connect (yes, this is easy!)

$ tnsping db01.example.com:1521/db01.example.com

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production on 07-FEB-2011 15:59:10

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
/home/lsc/sqlnet.ora

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=db01.example.com))(ADDRESS=(PROTOCOL=TCP)(HOST=198.0.0.60)(PORT=1521)))
OK (10 msec)

2) you use 10g, or 10g behavior in 11g with patch 9271246 (available only on a limited number of platforms, OS and db versions),

3) you specify a default service for your listener

$ vi listener.ora
DEFAULT_SERVICE_LISTENER=DB01
$ lsnrctl reload
$ sqlplus -L scott/tiger@db01 

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> 

This is a bit confusing: if you are servicing more than one database per listener, all dns aliases will default to the same database. So I would not recommend a default service name if there is more than one service name.

Return NULL if the column does not exist

It is a very common challenge for a dba to create scripts that work on every version.

How do you return NULL if a column does not exist?

Imagine I have a view that returns the table_name, column_name and retention_type of my LOBS.


SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,
  column_name,retention_type from user_lobs;

View created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1  C   YES
T2  C   NO

Let’s imagine I try to run this on an antique version of Oracle


SQL> select version from v$instance;
VERSION
-----------------
11.2.0.1.0

SQL> create table t1(c clob) lob(c) store as (retention);

Table created.

SQL> create table t2(c clob) lob(c) store as (pctversion 10);

Table created.

SQL> create or replace force view v as select table_name,column_name,retention_type from user_lobs;

Warning: View created with compilation errors.

SQL> select * from v where table_name in ('T1','T2');
select * from v where table_name in ('T1','T2')
              *
ERROR at line 1:
ORA-04063: view "SCOTT.V" has errors

Obviously the RETENTION_TYPE did not exist in that version.

Let’s default this to NULL !


SQL> create or replace function retention_type return varchar2 is 
  begin return null; end;
/

Function created.

SQL> select * from v where table_name in ('T1','T2');
TAB COL RETENTION_TYPE
--- --- --------------
T1  C
T2  C

Very simple workaround, isn’t it?

RMAN duplicate does change your DB_NAME !

I had a very serious issue last Friday with errors as weird as ORA-00322: log name of thread num is not current copy. After a clone from Prod to Test, the prod crashed. Both databases are located on the same server (I am not a virtualization fanatic) and cloning from prod to test is something most of my dba readers have done.

What did change in 11g ?

Incredibly, in 11g, rman issues the following statement before restore

sql clone "alter system set  db_name = ''PROD'' ...
restore clone primary controlfile...

This is probably related to the capability of cloning a database without connecting to the target database.

At the end of the clone, rman sets the db_name back to TEST and recreates the TEST controlfile

sql statement: alter system set  db_name = ''TEST'' ...
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TEST" ...
...
LOGFILE
GROUP 1 ('/.../TEST/redo1.dbf')...

So what’s wrong with this? How could a clone from prod to test screw up the prod db???

Simple, the RMAN job did not complete

1) set new name, restore prod controlfile to test
2) restore issue, for instance ORA-19870: error while restoring backup piece archive1234
3) RMAN-03002: failure of Duplicate Db command

At this point the restore itself was finished, so we restored the missing archivelog, recovered and opened resetlogs.
What happened then???
At this point, remember you still have the prod controlfile (and the prod db_name), so by doing an alter database open resetlogs, the production redo logs get overwritten without notice!

This is a fairly important change that could really hurt if you are cloning two databases on the same server.

In case you are trying to save a failed database clone, make sure you check db_name and also v$logfile before doing an alter database resetlogs!!!
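
A minimal sanity check along those lines could be:


-- on the instance you are about to open: whose controlfile is this,
-- and whose redo logs would a resetlogs overwrite?
select name, db_unique_name from v$database;
select group#, member from v$logfile order by group#;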

SPARC Supercluster

Oracle buying Sun was an exciting announcement 20 months ago.

What did change in the Solaris/Oracle Database world?

First, Oracle delivered Exadata on Sun Hardware (x86_64).
Second, they delivered Exadata on Sun Solaris Operating System (x86_64).

But now, they announced a combination of software and hardware that will run Oracle database faster than anything ever before.

I am happy to read that Oracle is still investing in R&D on the SPARC processor server line!

connect by and recursive with (part 2)

According to the doc
The subquery_factoring_clause now supports recursive subquery
factoring (recursive WITH), which lets you query hierarchical data.
This feature is more powerful than CONNECT BY in that it
provides depth-first search and breadth-first search, and supports
multiple recursive branches. A new search_clause and cycle_clause
let you specify an ordering for the rows and mark cycles in the
recursion

As written there and there, recursive with has more capabilities. It is also ANSI and implemented in DB2 and MS SQL Server. CONNECT BY is an Oracle oddity. But does recursive with perform as well as connect by?

  1. Create a big emp
    create table big_emp as
    with t(empno,mgr) as
    (select 1, null from dual
    union all 
    select empno+1,trunc(dbms_random.value(1+empno/10,empno))
    from t 
    where empno<100000)
    select * from t
    

  2. execution plan

    CONNECT BY

    
    select empno,mgr
    from big_emp
    connect by mgr = prior empno
    start with mgr is null;
    
    E M 
    - -
    1 -  
    2 1 
    4 2 
    6 2 
    8 6 
    
    Operation                 Object    Rows Time Cost   Bytes
    ------------------------- ------- ------ ---- ---- -------
    SELECT STATEMENT                       3    3  185      78
    CONNECT BY WITH FILTERING
    TABLE ACCESS FULL         BIG_EMP      1    1   61      10
    HASH JOIN                              2    2  122      46
    CONNECT BY PUMP
    TABLE ACCESS FULL         BIG_EMP 100000    1   61 1000000
    

    recursive WITH

    
    with e(empno,mgr) as (
    select empno, mgr 
    from big_emp 
    where mgr is null
    union all
    select f.empno,f.mgr 
    from big_emp f, e 
    where e.empno=f.mgr)
    select empno,mgr
    from e;
    
    E M
    - -
    1 -  
    2 1 
    3 1 
    4 2 
    5 3 
    ...
    
    Operation                 Object    Rows Time Cost   Bytes
    ------------------------- ------- ------ ---- ---- -------
    SELECT STATEMENT                       3    3  183      78
    VIEW                                   3    3  183      78
    UNION ALL (RECURSIVE WITH) BREADTH FIRST
    TABLE ACCESS FULL BIG_EMP              1    1   61      10
    HASH JOIN                              2    2  122      46
    RECURSIVE WITH PUMP
    TABLE ACCESS FULL BIG_EMP         100000    1   61 1000000
    

    In this particular simple case, it seems CONNECT BY has a 1% higher cost.

  3. execution time

    CONNECT BY

    
    select sum(mgr)
    from 
    (
    select empno,mgr
    from big_emp
    connect by mgr = prior empno
    start with mgr is null
    )
    
    SUM(MGR) 
    ------------
    2745293877 
    
    1 rows returned in 0.73 seconds 
    

    recursive with

    
    with e(empno,mgr) as (
    select empno, mgr 
    from big_emp 
    where mgr is null
    union all
    select f.empno,f.mgr 
    from big_emp f, e 
    where e.empno=f.mgr)
    select sum(mgr)
    from e;
    
    SUM(MGR) 
    ------------
    2745293877 
    
    1 rows returned in 1.24 seconds 
    

Honestly, I am not surprised that CONNECT BY is faster; it has been in the Oracle database forever and was massively tuned in 8.1.7.4.

And then one day you find, ten years have got behind you.

CONNECT BY and Recursive CTE

11gR2 introduced a new mechanism to build up hierarchies.

I remembered a thread in developpez.net that reveals the dubious implementation of nocycle in 10g.

For the CONNECT_BY_ISLEAF equivalent, I read the technique on amis.nl.

Ok, here is my graph: A->B, A->C, C->D, D->C, D->E, E->E

The 10g query


with o as
(
SELECT 'A' obj, 'B' link from dual union all
SELECT 'A', 'C' from dual union all
SELECT      'C', 'D' from dual union all
SELECT           'D', 'C' from dual union all
SELECT           'D', 'E' from dual union all
SELECT                'E', 'E' from dual)
select connect_by_root obj root,level,obj,link,
  sys_connect_by_path(obj||
'->'
||link,','),
  connect_by_iscycle,
  connect_by_isleaf
from o 
connect by nocycle obj=prior link
start with obj='A';

ROOT LEVEL O L PATH                 CYCLE  LEAF
---- ----- - - -------------------- ----- -----
A        1 A B ,A->B                    0     1
A        1 A C ,A->C                    0     0
A        2 C D ,A->C,C->D               1     0
A        3 D E ,A->C,C->D,D->E          1     1

Obviously in 10g the connect by nocycle does not work that well with that kind of graph: D->C and E->E are missing, and C->D and D->E are marked as cycling…

Let’s try the 11gR2 equivalent.


with o(obj,link) as
(
SELECT 'A', 'B' from dual union all
SELECT 'A', 'C' from dual union all
SELECT      'C', 'D' from dual union all
SELECT           'D', 'C' from dual union all
SELECT           'D', 'E' from dual union all
SELECT                'E', 'E' from dual),
t(root,lev,obj,link,path) as (
select obj,1,obj,link,cast(obj||'->'||link 
as varchar2(4000))
from o 
where obj='A'  -- START WITH
union all
select 
  t.root,t.lev+1,o.obj,o.link,
  t.path||', '||o.obj||
    '->'
    ||o.link
from t, o 
where t.link=o.obj
)
search depth first by obj set ord
cycle obj set cycle to 1 default 0
select root,lev,obj,link,path,cycle,
    case
    when (lev - lead(lev) over (order by ord)) < 0
    then 0
    else 1
    end is_leaf
 from t;

ROOT LEV  OBJ  LINK PATH                        CYCLE IS_LEAF
---- ---- ---- ---- --------------------------- ----- -------
A    1    A    B    A->B                            0       1
A    1    A    C    A->C                            0       0
A    2    C    D    A->C, C->D                      0       0
A    3    D    C    A->C, C->D, D->C                0       0
A    4    C    D    A->C, C->D, D->C, C->D          1       1
A    3    D    E    A->C, C->D, D->E                0       0
A    4    E    E    A->C, C->D, D->E, E->E          0       0
A    5    E    E    A->C, C->D, D->E, E->E, E->E    1       1

It looks good :)

If you exclude the rows with cycle=1, you get the six rows for the graph.

number series

Patrick Wolf wrote about the newest Apex release, which contains an 11.2 db engine, so I had to play with recursive queries ;)


with t(x) as (select 1 from dual 
union all
select x+1 from t where x<5 )
select x from t;

X
1
2
3
4
5

with t(x,y) as (select 1 x, 1 y from dual 
union all
select x+1,y*(x+1) from t where x<5 )
select x,y "X!" from t;

X  X!
1  1
2  2
3  6
4  24
5  120

with t(r,x,y) as (select 1,1,1 from dual 
union all
select r+1,y,x+y from t where r<5)
select x fib from t

FIB 
1 
1 
2 
3 
5 

with t1(x) as (select 1 from dual 
union all
select x+1 from t1 where x<4),
t2(x,y,z) as (select x, 1,x from t1
union all 
select x,y+1,x*(y+1) from t2 where y<2)
select listagg(z,';') within group (order by x) s from t2 group by y;

S 
1;2;3;4 
2;4;6;8 

stragg in 11gR2

This will be a killer in the Oracle forums ;)

LISTAGG (measure_expr [, 'delimiter_expr'])
  WITHIN GROUP (order_by_clause) [OVER query_partition_clause]

for instance

SELECT LISTAGG(ename,',') WITHIN GROUP (ORDER BY ename) FROM emp;
ADAMS,ALLEN,BLAKE,CLARK,FORD,JAMES,JONES,KING,MARTIN,MILLER,SCOTT,SMITH,TURNER,WARD
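
And a quick sketch of the analytic form with the query_partition_clause:

SELECT deptno, ename,
  LISTAGG(ename,',') WITHIN GROUP (ORDER BY ename)
    OVER (PARTITION BY deptno) dept_enames
FROM emp;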