The reasons why I always avoid shutdown abort

It is a common practice to always shutdown abort the database before restarting and shutting it down immediate. This is because SHUTDOWN IMMEDIATE sometimes takes ages, for instance when a huge transaction has to be rolled back.

I do not like it. At all.

First, there is a chance that you won’t be able to start the database anymore. I have not heard of or met anyone who has had this issue since Oracle7, but I still believe it.

Second, shutdown abort is very useful if something goes seriously wrong. But if something goes wrong, you may want to find out what it is.

Third, you may hit more bugs than with a normal close. And you may get less help from support if the problem turns out to be due to an abusive shutdown abort. YMMV

Ok, a small demo to preach to the converted.
Disclaimer: this demo is not innocent, do not try this on your database.

SQL> create flashback archive fa tablespace ts retention 1 day;

Flashback archive created.

SQL> create table t(x number primary key);

Table created.

SQL> alter table t flashback archive fa;

Table altered.

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from DBA_SEGMENTS where tablespace_name='TS';

no rows selected

I have created a table with a flashback archive, and the committed transaction has not been written to the flashback archive tablespace yet.
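
By the way, to see which tables are tracked by which flashback archive, the dictionary offers a dedicated view. A quick check, shown here without output:

select owner_name, table_name, flashback_archive_name, archive_table_name
from dba_flashback_archive_tables;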

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup 
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.


SQL> sho parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDO1
SQL> create undo tablespace undo2 datafile '/u02/oradata/@/undo2_01.dbf' size 10m reuse;

Tablespace created.

SQL> alter system set undo_tablespace=undo2;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.

I have switched the undo tablespace. So far so good. But remember, the flashback archive did not write to the flashback tablespace before the shutdown abort.


SQL> drop tablespace undo1 including contents and datafiles;
drop tablespace undo1 including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU1_3544069484$' found, terminate dropping tablespace

You see… I cannot drop my old undo tablespace.

Q.E.D.
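
If you wonder what keeps the old undo tablespace busy, a look at the rollback segment status helps. A minimal check, nothing fancy:

select segment_name, status
from dba_rollback_segs
where tablespace_name = 'UNDO1';

In this demo the old undo apparently still holds history the flashback archiver has not written yet, hence the active segment and the ORA-01548.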

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time during releases and lots of space if you have a -legacy- application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t1(x number) segment creation deferred;

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment. This behavior will bring its share of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without INCLUDING CONTENTS, the drop tablespace will succeed and the table will remain

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query


SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES  UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

then you can recreate it and -if wished- drop it with contents


SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the contents of the tablespaces, you should probably also check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA views.

I met this feature while using transportable tablespaces. Transportable tablespace will transport the objects with no segment that belong to the tablespace.

There are a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is to not use the feature, by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only new objects.
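
A minimal sketch of that workaround; the SEGMENT CREATION IMMEDIATE clause shown earlier is the per-table equivalent:

alter system set deferred_segment_creation=false;
-- persist it in the spfile too, if you use one:
-- alter system set deferred_segment_creation=false scope=spfile;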

I am always very cautious about those major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible, but dba scripts that used to work in previous releases may break here… This is obviously the price to pay to get new features, right?

Datapump : table like ‘FOO%’ or like ‘BAR%’

Today I tried to put two LIKE conditions in an INCLUDE clause of datapump.

I have the following tables

SQL> select table_name from user_tables order by 1;

TABLE_NAME
------------------------------
AAA
BAR1   ***
BAR2   ***
BLA
FOO    ***
FOO1   ***
GOZ

and I want tables like BAR% and tables like FOO%

First try :

$ expdp scott/tiger include=table:"like'FOO%'or like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:13 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00936: missing expression

This does not work, because it would translate into WHERE {tablename} like 'FOO%' or like 'BAR%'

Ok, second try, let’s put multiple TABLE clauses

$ expdp scott/tiger include=table:"like'FOO%'",table:"like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:15 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:"like'FOO%'",table:"like'BAR%'" Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 11:47:21

No data was found, because it translated into WHERE {tablename} like 'FOO%' and {tablename} like 'BAR%', which returns no rows.

Ok, multiple INCLUDE conditions are joined by AND, so let’s do the math: (BAR% OR FOO%) = (>= 'BAR' AND < 'FOP' AND NOT BETWEEN 'BAS' AND 'FONZZZZZZZZZZZZZ'), where 'FONZZZZZZZZZZZZZ' is a value immediately smaller than 'FOO'.

$ expdp scott/tiger include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:21 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."BAR1"                             5.007 KB       1 rows
. . exported "SCOTT"."BAR2"                             5.007 KB       1 rows
. . exported "SCOTT"."FOO"                              5.007 KB       1 rows
. . exported "SCOTT"."FOO1"                             5.007 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DB01/dmp/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:47:31

It is that simple ;)
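
By the way, to spare the shell escaping, the same three conditions fit nicely in a parameter file. A sketch, where mypar.par is just a name I made up; multiple INCLUDE lines are combined with AND, exactly like the comma-separated clauses above:

$ cat mypar.par
include=table:">='BAR'"
include=table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'"
include=table:"<'FOP'"
$ expdp scott/tiger parfile=mypar.par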

On star transformation

How to configure a schema to support a star transformation query?

This is one of the topics I am currently preparing for the OCM DBA 11g upgrade exam for 9i OCMs.

Let’s create a star schema with a fact table T1 and two dimension tables T2 and T3 :


> create table t2(y number constraint t2_pk primary key, yy varchar2(50))
table t2 created.
> create table t3(z number constraint t3_pk primary key, zz varchar2(50))
table t3 created.
> create table t1(x number constraint t1_pk primary key, 
      y number constraint t1_t2_fk references t2, 
      z number constraint t1_t3_fk references t3)
table t1 created.

According to the Oracle Database Data Warehousing Guide, chapter 20 Schema Modeling Techniques
A prerequisite of the star transformation is that there be a single-column bitmap index on every join column of the fact table. These join columns include all foreign key columns.


> create bitmap index bi1 on t1(y)
bitmap index bi1 created.
> create bitmap index bi2 on t1(z)
bitmap index bi2 created.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Obviously star_transformation_enabled must be set to true. It was known to be buggy in the first releases; I hope most bugs have been fixed in the meantime.


> alter session set star_transformation_enabled=true
session set altered

Let’s try the transformation with a query similar to the one found in the doc :

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3 
      where yy='one' and zz in ('two','three')
COUNT(*)               
---------------------- 
0                      

Plan hash value: 3024982001
 
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                  |     1 |   106 |     1 (100)| 00:00:01 |
|   1 |  SORT AGGREGATE                   |                  |     1 |   106 |            |          |
|   2 |   NESTED LOOPS                    |                  |       |       |            |          |
|   3 |    NESTED LOOPS                   |                  |     1 |   106 |     1 (100)| 00:00:01 |
|   4 |     NESTED LOOPS                  |                  |     1 |    66 |     1 (100)| 00:00:01 |
|   5 |      VIEW                         | index$_join$_001 |     1 |    26 |     1 (100)| 00:00:01 |
|*  6 |       HASH JOIN                   |                  |       |       |            |          |
|   7 |        BITMAP CONVERSION TO ROWIDS|                  |     1 |    26 |     0   (0)| 00:00:01 |
|   8 |         BITMAP INDEX FULL SCAN    | BI1              |       |       |            |          |
|   9 |        BITMAP CONVERSION TO ROWIDS|                  |     1 |    26 |     0   (0)| 00:00:01 |
|  10 |         BITMAP INDEX FULL SCAN    | BI2              |       |       |            |          |
|* 11 |      TABLE ACCESS BY INDEX ROWID  | T2               |     1 |    40 |     0   (0)| 00:00:01 |
|* 12 |       INDEX UNIQUE SCAN           | T2_PK            |     1 |       |     0   (0)| 00:00:01 |
|* 13 |     INDEX UNIQUE SCAN             | T3_PK            |     1 |       |     0   (0)| 00:00:01 |
|* 14 |    TABLE ACCESS BY INDEX ROWID    | T3               |     1 |    40 |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access(ROWID=ROWID)
  11 - filter("T2"."YY"='one')
  12 - access("T1"."Y"="T2"."Y")
  13 - access("T1"."Z"="T3"."Z")
  14 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')

> set autotrace off
Autotrace Disabled

Something is missing. The star transformation is not used. What could it be???

Data !

We need to load truckloads of data into the T1 table and a few rows into the T2 and T3 tables

  
> insert into t2 select rownum, 
      to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp') 
      from dual connect by level<11
10 rows inserted.
> insert into t3 select rownum, 
      to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp') 
      from dual connect by level<11
10 rows inserted.
> insert into t1 select rownum, mod(rownum,7)+1, mod(rownum,8)+1 
      from dual connect by level<100001
100,000 rows inserted.
> commit
commited.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Let’s try the same query again


> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
      where yy='one' and zz in ('two','three')
COUNT(*)               
---------------------- 
3570                   

Plan hash value: 3170767457
 
-------------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     1 |     6 |    12   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |      |     1 |     6 |            |          |
|   2 |   BITMAP CONVERSION COUNT    |      |  2857 | 17142 |     6   (0)| 00:00:01 |
|   3 |    BITMAP AND                |      |       |       |            |          |
|   4 |     BITMAP MERGE             |      |       |       |            |          |
|   5 |      BITMAP KEY ITERATION    |      |       |       |            |          |
|*  6 |       TABLE ACCESS FULL      | T2   |     1 |     8 |     3   (0)| 00:00:01 |
|*  7 |       BITMAP INDEX RANGE SCAN| BI1  |       |       |            |          |
|   8 |     BITMAP MERGE             |      |       |       |            |          |
|   9 |      BITMAP KEY ITERATION    |      |       |       |            |          |
|* 10 |       TABLE ACCESS FULL      | T3   |     2 |    16 |     3   (0)| 00:00:01 |
|* 11 |       BITMAP INDEX RANGE SCAN| BI2  |       |       |            |          |
-------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - filter("T2"."YY"='one')
   7 - access("T1"."Y"="T2"."Y")
  10 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')
  11 - access("T1"."Z"="T3"."Z")
 
Note
-----
   - star transformation used for this statement

> set autotrace off
Autotrace Disabled

That’s it ! I have 100K rows in my fact table and 10 rows in my dimension tables, that does the trick ! Star transformation needs a lot of rows to work :)
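
If the transformation still does not show up in your own tests, two documented knobs are worth knowing. A sketch, to be verified with autotrace as above:

-- ask the optimizer explicitly for the transformation
select /*+ star_transformation */ count(*)
  from t1 natural join t2 natural join t3
 where yy='one' and zz in ('two','three');

-- allow the transformation but skip the temporary-table optimization
alter session set star_transformation_enabled=temp_disable;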

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)

SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)

SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
         X
----------
         1

before / after a successful DDL statement

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

Before / after an unsuccessful DDL statement, sometimes :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
                    *
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

But not always :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
                    *
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, the statement did not even parse, so no DDL was executed and there was no implicit commit, which is why the rollback undid the insert. In the case before that, the DDL was executed and failed, but the implicit commit had already taken place.

If you want to commit, use COMMIT :)
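
And if the client-side commits above bother you, both SQL*Plus settings can be switched off. A small sketch; DDL will of course still commit implicitly:

set autocommit off
set exitcommit off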

sqlplus -prelim

If you cannot log in to the database, for instance due to ORA-00020 maximum number of processes exceeded, then there is a chance that you can use the -prelim option.

Documented in note 121779.1 for sqlplus version 10.1 and later :
In some cases, no connections are allowed on the instance (in some ORA-20 situations for example).
As of 10.1.x, there is a new option with SQL*Plus to allow access to an instance to
generate traces.
sqlplus -prelim / as sysdba

Only a sysdba connection is possible.

sqlplus -prelim system/manager

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 4 10:38:36 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-24300: bad value for mode

And very little access is granted

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

You can shutdown abort and then restart your database, instead of rebooting your server where other instances may be running.
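
A minimal sketch of that sequence, assuming a regular connection becomes possible again once the abort has freed the processes:

$ sqlplus -prelim / as sysdba
SQL> shutdown abort
SQL> exit
$ sqlplus / as sysdba
SQL> startup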

This is the ultimate chance before a reboot. Before this, consider disconnecting or killing some user sessions to get a regular sqlplus / as sysdba.