What is the instance name?

If your Oracle SID doesn’t match your instance name in init.ora, this is quite confusing.

Check my previous post, “What is SID in Oracle?”

The instance_name column of the view v$instance, as well as the INSTANCE_NAME attribute of the USERENV context, matches the ORACLE_SID of the underlying operating system.

SQL> var ORACLE_SID varchar2(9)
SQL> set autoprint on
SQL> exec dbms_system.get_env('ORACLE_SID',:ORACLE_SID)
PL/SQL procedure successfully completed.
ORACLE_SID
------------
ORA001
SQL> select sys_context('USERENV','INSTANCE_NAME') from dual;
SYS_CONTEXT('USERENV','INSTANCE_NAME')
---------------------------------------
ORA001
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
ORA001
SQL>

This is not the same as the init.ora parameter

SQL> select name, value, description from v$parameter where name='instance_name';

NAME          VALUE     DESCRIPTION
------------- --------- ----------------------------------------
instance_name INS001    instance name supported by the instance
SQL>

The instance_name doesn’t have to match anything. It is relevant if you use the ADR. And you probably do: background_dump_dest and friends are deprecated now. In the ADR documentation you’ll read

{ORACLE_BASE}/diag/rdbms/{DB_UNIQUE_NAME}/{SID}/trace

But this SID is actually your init.ora instance name, not your ORACLE_SID.
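As a sketch (the paths and names below are illustrative, assuming a typical single-instance setup; on a real system read them from v$parameter), the trace directory is derived from db_unique_name and instance_name, not from ORACLE_SID:

```python
import os.path

# Illustrative values only, matching the example later in this post
oracle_base = "/u01/app/oracle"       # assumed ORACLE_BASE
db_unique_name = "UNI001"             # init.ora db_unique_name
instance_name = "INS001"              # init.ora instance_name, NOT ORACLE_SID

# The "{SID}" component of the documented ADR path is the instance name
trace_dir = os.path.join(oracle_base, "diag", "rdbms",
                         db_unique_name.lower(), instance_name, "trace")
print(trace_dir)  # /u01/app/oracle/diag/rdbms/uni001/INS001/trace
```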

What is SID in Oracle ?

In the documentation you’ll find that it is the Oracle system identifier.

Okay, let’s imagine the following

Environment:

ORACLE_SID=ORA001

init.ora:

DB_NAME=DB001
DB_UNIQUE_NAME=UNI001
INSTANCE_NAME=INS001
SERVICE_NAMES=SVC001,SVC002
DB_DOMAIN=EXAMPLE.COM
GLOBAL_NAMES=false

database:

SQL> select * from GLOBAL_NAME;
GLO001.example.com

listener.ora:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_NAME=GLO001.EXAMPLE.COM)
      (SID_NAME=ORA001)
    )
  )

What is my SID? Actually there is more than one correct answer.

In the environment, Oracle SID is ORA001. This matches SID_NAME in listener.ora. It does not have to match database name, unique name, global name or instance_name.

$ lsnrctl services
Services Summary...
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:7 refused:0
         LOCAL SERVER

As the instance is not running, I have only the static registrations from my listener.ora.

The SERVICE_NAME is GLO001.EXAMPLE.COM and the SID is ORA001.

$ sqlplus "sys/***@
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=SRV001)
      (PORT=1521)
    )
    (CONNECT_DATA=
      (SERVICE_NAME=GLO001.EXAMPLE.COM)
    )
  )" as sysdba
Connected to an idle instance.
SQL> 
$ sqlplus "sys/***@
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=SRV001)
      (PORT=1521)
    )
    (CONNECT_DATA=
      (SID=ORA001)
    )
  )" as sysdba
Connected to an idle instance.
SQL> 

Let’s start

SQL> startup

and check my services

$ lsnrctl services
Services Summary...
Service "SVC001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "SVC002.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "UNI001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:13 refused:0
         LOCAL SERVER
The command completed successfully

I now have 4 service names:

  1. The global name in listener.ora
  2. The unique name in init.ora
  3. Both service names in init.ora

And 2 SIDs:

  1. The SID in listener.ora
  2. The instance name in init.ora

While we often have sid = oracle_sid = service_name = service_names = global_name = instance_name = db_name = db_unique_name, if you switch from SID to SERVICE_NAME, this could help to identify legacy applications.

If you read the doc carefully, you may have noticed that SID is no longer documented as a valid clause of CONNECT_DATA in 11g and 12c.

In 10gR2 :
http://docs.oracle.com/cd/B19306_01/network.102/b14213/tnsnames.htm#i477921
Use the parameter SID to identify the Oracle8 database instance by its Oracle System Identifier (SID). If the database is Oracle9i or Oracle8, use the SERVICE_NAME parameter rather than the SID parameter.

This is probably a documentation bug; I would rephrase it as “If the database is Oracle9i or Oracle8i or later”.

In 11g and 12c, the comment disappeared. Oracle 8i was released last century, but SID didn’t completely disappear from tnsnames. Yet.

duplicate to a future date

If you work with large databases, you often wait way too long for clones. Typically you want to duplicate a 10TB database as of the production timestamp 9am, so you start at 9am and then you wait for hours.

Is it possible to start the clone, let’s say, at midnight, and set until time 9am?

No! You’ll get

RMAN-06617: UNTIL TIME (2016-05-21 09:00:00) is ahead of last NEXT TIME in archived logs (2016-05-20 23:58:52)

But… you could start to restore the datafiles at midnight.

sqlplus sys/***@db02 as sysdba <<EOF
  alter system set db_name='DB01' scope=spfile;
  alter system set db_unique_name='DB02' scope=spfile;
  startup force nomount
EOF

rman target sys/***@db01 auxiliary sys/***@db02 <<EOF
   restore clone primary controlfile;
   alter clone database mount;

run {
   set newname for datafile  1 to
 "/db02/system01.dbf";
   set newname for datafile  2 to
 "/db02/sysaux01.dbf";
   set newname for datafile  3 to
 "/db02/undotbs1_02.dbf";
   set newname for datafile  4 to
 "/db02/users01.dbf";
   restore clone database
   ;
}
EOF

This is exactly what RMAN does when you issue a duplicate. You could use the supported RESTORE command instead of the unsupported RESTORE CLONE command. But then it gets a bit more complex, as you need to find out the location of your backups and so on.

At 9am, you issue your duplicate, and you’ll see

skipping datafile 1; already restored to file /db02/system01.dbf
skipping datafile 2; already restored to file /db02/sysaux01.dbf
skipping datafile 3; already restored to file /db02/undotbs1_02.dbf
skipping datafile 4; already restored to file /db02/users01.dbf

You just saved nine hours πŸ™‚

column width change in 12c

In 11g I used to have 30 characters width in my dictionary

SQL> select table_name, column_name from user_tab_columns;

TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
BONUS                          COMM
BONUS                          SAL
BONUS                          JOB
BONUS                          ENAME
DEPT                           LOC
DEPT                           DNAME
DEPT                           DEPTNO
EMP                            DEPTNO
EMP                            COMM
EMP                            SAL
EMP                            HIREDATE

TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
EMP                            MGR
EMP                            JOB
EMP                            ENAME
EMP                            EMPNO
SALGRADE                       HISAL
SALGRADE                       LOSAL
SALGRADE                       GRADE

18 rows selected.

Which was pretty nice to read with no setting.

Now in 12c it is ugly enough to make your eyes tired

SQL> select table_name, column_name from user_tab_columns;

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
DEPT
DEPTNO

DEPT
DNAME

DEPT
LOC

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
EMP
EMPNO

EMP
ENAME

EMP
JOB

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
EMP
MGR

EMP
HIREDATE

EMP
SAL

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
EMP
COMM

EMP
DEPTNO

BONUS
ENAME

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
BONUS
JOB

BONUS
SAL

BONUS
COMM

TABLE_NAME
------------------------------------------------------------------------------------------------------------------------
COLUMN_NAME
------------------------------------------------------------------------------------------------------------------------
SALGRADE
GRADE

SALGRADE
LOSAL

SALGRADE
HISAL

18 rows selected.

SQL>

This is due to a change of the width of the underlying column in the dictionary. Probably good news for our data modeling colleagues that it may be 128 in the future.

But currently it is only 30 characters in 12c. So why not format it correctly?

Simply add the format in $ORACLE_HOME/sqlplus/admin/glogin.sql

col TABLE_NAME for a30
col COLUMN_NAME for a30 

DISCLAIMER: it’s fine to add formats for dictionary-specific columns like UNIQUE_KEY_LEVEL_NAME or REFERENCED_TRIGGER_NAME, but your application tables may have columns called OWNER or USER, and it is probably safer not to assume those are all shorter than 30 characters. So don’t add formats for common names.

Licensing Cloud Control

I just read the Enterprise Manager Licensing Information User Manual today. There are a lot of packs there, and you may not even know that autodiscovering targets is part of the lifecycle management pack or that blackouts are part of the diagnostic pack.

Have a look

powershell odbc sybase

Oracle goes there ==> …/oracle-odbc-hello-world-with-powershell.html

To test Sybase ODBC driver with Powershell, it’s not much different

  1. configure the DSN with odbcconf or %SYSTEMROOT%\SysWOW64\odbcconf for the 32-bit drivers under 64-bit Windows.
  2. 
    cmd /c "%SYSTEMROOT%\SysWOW64\odbcconf.exe /a {configdsn ""Adaptive Server Enterprise"" ""DSN=helloworld|database=mydb|port=25000|server=srv01""}"
    

    The name of the driver is important. It is probably also called “Adaptive Server Enterprise” on your machine.

  3. test it, same as with Oracle. If you need to test the 32-bit drivers under 64-bit Windows, use the 32-bit PowerShell %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe
  4. 
    $conn = New-Object data.odbc.odbcconnection
    $conn.ConnectionString = "dsn=helloworld;uid=scott;pwd=tiger"
    $conn.Open()
    (new-Object Data.Odbc.OdbcCommand("select 'Hello World'",$conn)).ExecuteScalar()
    $conn.close()
    

generate safe passwords

This is probably ambitious, and I’ll start with a disclaimer: there is no such thing.

But ok, we know that system/manager isn’t one.

Apart from system/manager, there are hundreds of weak passwords that are commonly used and easy to guess.

On your database server, after a few failed tries the account is locked. And maybe the attacker is detected by then.

So the worst passwords are the default passwords and passwords like oracle.

To enforce good passwords, we have verify functions, like ora12c_strong_verify_function in 12c, which checks for mixed case, special characters, etc. One may prefer to write their own and not disclose exactly what it checks.

That function in rdbms/admin states “The maximum length of any DB User password is 128 bytes.”, but it’s 30 characters in most cases.

If your limit on failed login attempts is 10, choosing “eleven” as a password does not make it safe. If the attacker gets the user metadata, you are screwed in no time. In Oracle 4, passwords were stored in clear text. From 7 to 10, it’s an unsalted double-DES encryption with a fixed, disclosed key. There a dictionary attack takes milliseconds, and a 6-character password falls in under a second. It got better in 11g, where cracking an 8-character SHA-1 password could take weeks to years, depending on its complexity. In 12c, generating a hash costs lots of CPU cycles, so it is no longer possible to test millions of passwords per second, even with the strongest hardware.

But to get a good password it is recommended and often required to use digit / letters / special signs / mixed case and no dictionary word.

I have made a small password generator for my readers using dbms_random.string, which generates a pseudorandom string. It would be better to use the cryptographically secure dbms_crypto.randombytes, but then you must still turn the result into a password that you can type. It should also be possible to use Unicode if you like. And depending on where you are going to use it, it is sometimes safer to avoid characters like * or ‘ because, who knows, your password may produce an error and end up in a logfile.

Okay, I wrote a small function that generates a 10-character string and verifies it with the 12c strong verifier, looping until one is good enough.

The chance that a random password is manager is pretty low, but it is probably best to check that you got not only a safe random value, but also a strong string.


-- @?/rdbms/admin/catpvf
CREATE OR REPLACE FUNCTION pw (username        VARCHAR2,
                               old_password    VARCHAR2 DEFAULT NULL)
   RETURN VARCHAR2
IS
   p   VARCHAR2 (30);
   c   BOOLEAN := FALSE;
   i   NUMBER := 0;
BEGIN
   WHILE NOT c AND i < 1000
   LOOP
      p := DBMS_RANDOM.string ('P', 10);
      i := i + 1;
      BEGIN
         c := sys.ora12c_strong_verify_function (username, p, old_PASSWORD);
      EXCEPTION
         WHEN OTHERS
         THEN
            NULL;
      END;
   END LOOP;
   RETURN p;
END;
/

SELECT pw ('SCOTT', 'TIGER') FROM DUAL;

#gA~82NxBv

This could well be a good initial expired password for your users. Later the user will find something easier to remember.
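The same generate-and-verify loop can be sketched outside the database, for example in Python with the cryptographically secure secrets module. The checks below only mimic the spirit of a strong verify function; they are my assumption, not Oracle’s ora12c_strong_verify_function:

```python
import secrets
import string

# Printable characters, similar in spirit to dbms_random.string('P', ...),
# minus quotes and '*' which tend to end up badly in logfiles
ALPHABET = string.ascii_letters + string.digits + "#$_~!%^-+="

def strong_enough(p):
    """Rough stand-in for a verify function: mixed case, digit, special."""
    return (any(c.islower() for c in p) and any(c.isupper() for c in p)
            and any(c.isdigit() for c in p)
            and any(not c.isalnum() for c in p))

def pw(length=10, tries=1000):
    """Loop until a random string passes the checks, like the PL/SQL above."""
    for _ in range(tries):
        p = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if strong_enough(p):
            return p
    raise RuntimeError("no acceptable password found")

print(pw())
```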

Generate 11g password hash

An easy way to generate the VALUES string from the shell is to use openssl.

Let’s take a random salt of ABCDEFGHIJ. The length of 10 is important.

The hexadecimal representation is -41-42-43-44-45-46-47-48-49-4A-


$ echo "SafePassw0rDABCDEFGHIJ\c" | openssl dgst -sha1
(stdin)= 47cc4102144d6e479ef3d776ccd9e0d0158842bb

With this hash, I can construct my value


SQL> create user testuser identified by values 'S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A';

User created.

SQL> grant create session to testuser;

Grant succeeded.

SQL> conn testuser/SafePassw0rD
Connected.

If you prefer PL/SQL over shell, use DBMS_CRYPTO


SQL> exec dbms_output.put_line('S:'||dbms_crypto.hash(utl_raw.cast_to_raw('SafePassw0rDABCDEFGHIJ'),dbms_crypto.HASH_SH1)||utl_raw.cast_to_raw('ABCDEFGHIJ'))
S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A

PL/SQL procedure successfully completed.
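The same construction — SHA-1 over the password concatenated with the 10-byte salt, then the salt appended in hex — can also be reproduced with Python’s hashlib:

```python
import hashlib

password = b"SafePassw0rD"
salt = b"ABCDEFGHIJ"  # the 10-byte salt from the example above

# SHA-1 of password||salt, then the salt itself in hex, prefixed with S:
digest = hashlib.sha1(password + salt).hexdigest().upper()
values = "S:" + digest + salt.hex().upper()
print(values)
# S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A
```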

In 12c there is also a “T” String. According to the doc
The cryptographic hash function used for generating the 12C verifier is based on a de-optimized algorithm involving PBKDF2 and SHA-512.

delete all data

How do you delete all data? The simplistic approach would be to truncate all tables


SQL> select table_name from user_tables;
TABLE_NAME
----------
T1
SQL> truncate table t1;
Table truncated.

You cannot truncate if you have referential integrity constraints.


SQL> truncate table t2;
ORA-02266: unique/primary keys in table 
  referenced by enabled foreign keys

Ok, let’s disable the RIC


SQL> select table_name, constraint_name
  from user_constraints
  where constraint_type='R';
TAB CONSTRAINT
--- ----------
T3  SYS_C00107
SQL> alter table t3 disable constraint SYS_C00107;
Table altered.
SQL> truncate table t2;
Table truncated.
SQL> truncate table t3;
Table truncated.

You cannot truncate cluster tables


SQL> truncate table t4;
ORA-03292: Table to be truncated is part of a cluster

Cluster tables can be truncated with TRUNCATE CLUSTER.


SQL> select cluster_name from user_clusters;
CLUSTER_NAME                  
------------
C                             
SQL> truncate cluster c;
Cluster truncated.

The code above doesn’t work with partitioned clusters (12.1.0.2) because they were not properly implemented at the time of writing.
Check Bug 20284579 : CAN NOT QUERY DYNAMIC CLUSTER PARTITIONS

For reference partitioning, it is not possible to disable the foreign key


SQL> alter table t6 disable constraint fk;
ORA-14650: operation not supported for 
  reference-partitioned tables

In 12c, if the foreign key is defined with ON DELETE CASCADE, you can truncate cascade the parent.


SQL> select table_name, REF_PTN_CONSTRAINT_NAME
  from user_part_tables 
  where partitioning_type='REFERENCE';
TAB REF
--- ---
T6  FK 
SQL> select r_constraint_name, delete_rule 
  from user_constraints 
  where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK    CASCADE    
SQL> select table_name
  from user_constraints 
  where constraint_name='PK';
TAB
---
T5 
SQL> truncate table t5 cascade;
Table truncated.

But if one of the child or grandchild tables uses reference partitioning without ON DELETE CASCADE, then the parent or grandparent cannot be truncated. And truncate cascade for reference partitioning is not documented (yet).

But there is a very nice alternative to TRUNCATE called DELETE πŸ™‚


SQL> select table_name, REF_PTN_CONSTRAINT_NAME
  from user_part_tables 
  where partitioning_type='REFERENCE';
TAB REF
--- ---
T8  FK 
SQL> select r_constraint_name, delete_rule 
  from user_constraints 
  where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK    NO ACTION  
SQL> select table_name
  from user_constraints 
  where constraint_name='PK';
TAB
---
T7 
SQL> truncate table t7 cascade;
ORA-14705: unique or primary keys referenced by enabled foreign keys in table "SCOTT"."T8"
SQL> truncate table t8;
Table truncated.
SQL> delete from t7;
2 rows deleted

To get the tables in the right order, parent tables after children, you can do a hierarchical query and then order by rownum desc, a construct I’m using for the first time, I confess. Note the leaf tables are truncatable.


select c_owner owner, child table_name   
FROM 
  (
    SELECT 
      p_OWNER, parent, nvl(c_owner, a.owner) c_owner,
      nvl(child, a.table_name ) child
    FROM 
    (
      SELECT 
        PT.OWNER P_owner, pt.table_name parent, 
        pt2.owner c_owner, pt2.table_name child
      FROM all_part_tables pt
      JOIN all_constraints c
      ON pt.OWNER = c.owner
        AND PT.TABLE_NAME = c.table_name
        AND c.constraint_type = 'P'
        AND c.status = 'ENABLED'
      JOIN all_constraints r
      ON r.r_owner = c.owner
        AND r.r_constraint_name = c.constraint_name
        AND r.constraint_type = 'R'
        AND r.status = 'ENABLED'
      JOIN all_part_tables pt2
      ON r.owner = pt2.owner
        AND r.constraint_name = pt2.REF_PTN_CONSTRAINT_NAME
        AND pt2.partitioning_type = 'REFERENCE'
    ) t
    RIGHT JOIN all_tables a 
    ON child = table_name and a.owner = T.c_OWNER
  )
where connect_by_isleaf=0  
CONNECT BY parent = PRIOR child and p_owner=PRIOR c_owner
start with parent is null 
order by rownum desc;

OWNER TAB
----- ---
SCOTT T10 
SCOTT T9

Note the query above is very slow. If dictionary performance is an issue, maybe we could delete from all tables, catch the exceptions, and loop until all tables are empty


SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
ORA-02292: integrity constraint (SCOTT.F11) violated - child record found
SQL> delete from t11;
1 row deleted.
SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
1 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
1 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
0 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.

If you have close to zero reference-partitioned tables, this approach will be more efficient.
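The delete-and-retry idea can be sketched generically. Below is a toy Python model (my illustration, not Oracle code) of the three tables above: each pass deletes from every table whose children are already empty, exactly like looping over DELETE statements and catching ORA-02292:

```python
# Toy model of t9 <- t10 <- t11: rows per table and child dependencies
rows = {"t9": 1, "t10": 1, "t11": 1}
children = {"t9": ["t10"], "t10": ["t11"], "t11": []}

def delete_all(rows, children):
    """Keep making passes until every table is empty; a table can only
    be emptied once all of its child tables are empty (else ORA-02292)."""
    passes = 0
    while any(rows.values()):
        passes += 1
        for table in rows:
            if all(rows[child] == 0 for child in children[table]):
                rows[table] = 0  # the DELETE succeeds
    return passes

print(delete_all(rows, children))  # 3 passes: leaves empty out first
```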

tracefile tim to readable date format

In a trace file, the time is recorded as tim=; this can be used to measure the time between two timestamps.

But how do you convert 31796862227375 to a human-readable format?

This is how I proceeded :

SQL> oradebug SETMYPID
Statement processed.
SQL> oradebug tracefile_name
/u01/log/oracle/diag/rdbms/db01/DB01/trace/DB01_ora_32440740.trc
SQL> alter session set 
  nls_date_format='YYYY-MM-DD_HH24:MI:SS' 
  sql_trace=true;
Session altered.
SQL> select '&_DATE','Start' from dual;
2015-09-23_13:17:50 Start
SQL> select '&_DATE','End' from dual;
2015-09-23_13:18:38 End
SQL> alter session set sql_trace=false;
Session altered.

I used the sqlplus variable _date and waited about a minute between the selects.

Now let’s look at the trace

PARSING IN CURSOR #4859519800 tim=31796814530524
select '2015-09-23_13:17:50','Start' from dual

PARSING IN CURSOR #4859511384 tim=31796862227375 
select '2015-09-23_13:18:38','End' from dual

Note the tim=

Between 13:17:50 and 13:18:38 I have 48 seconds; the difference between 31796862227375 and 31796814530524 is 47696851, roughly 48 million microseconds.

So the time on this specific version of Oracle is in microseconds.

Now if I subtract 31796814530524 microseconds from 2015-09-23_13:17:50, I get roughly 2014-09-20_12:50:55 on this specific instance.

So to convert tim= to a human-readable date, I simply add tim microseconds to Sep 20, 2014, 12:50:55.
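This arithmetic is easy to check in Python, with the two tim values and wall-clock timestamps taken from the trace above:

```python
from datetime import datetime, timedelta

# tim values and timestamps from the two PARSING IN CURSOR lines
start_tim, start_ts = 31796814530524, datetime(2015, 9, 23, 13, 17, 50)
end_tim, end_ts = 31796862227375, datetime(2015, 9, 23, 13, 18, 38)

# tim is in microseconds: the elapsed time matches the ~48 s wall clock
elapsed = timedelta(microseconds=end_tim - start_tim)
print(round(elapsed.total_seconds()))  # 48

# Subtracting tim from the wall clock gives the same base date for both
# statements: the instance-specific "epoch" to add tim values to
base_start = start_ts - timedelta(microseconds=start_tim)
base_end = end_ts - timedelta(microseconds=end_tim)
print(base_start)  # 2014-09-20 12:50:55.469476
```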

job_name cannot be null


exec dbms_scheduler.create_job(job_name=>null,job_type=>'PLSQL_BLOCK',job_action=>'BEGIN NULL; END;')
ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1

This sounds like a proper error message. A bit less obvious is the drop_job message


SQL> exec dbms_scheduler.drop_job(job_name=>null)
ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1

comma-separated list invalid near what?

Ok, why would you create an empty job? Obviously you wouldn’t. But remember job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.


SQL> begin 
  dbms_scheduler.create_job(job_name=>
'                  "SCOTT"                    '||
'                     .                       '||
'             "JOB10000000000000000000001"    ',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN NULL; END;');
end;
/

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.

If you use drop_job in the exception clause without catching the exception of the exception, it could lead to this ORA-20001 when the job name is null.

For exception handling, we could improve


BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    DROP JOB
    output message
    RAISE
END

into

BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    BEGIN
      DROP JOB
    EXCEPTION 
      WHEN IS_RUNNING
         sleep
      WHEN OTHERS
         output message
    END
    output message
    RAISE
END
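The control flow of that improved pattern can be sketched in Python; the job functions here are imaginary placeholders, only the nesting of the handlers matters:

```python
import time

def run_and_cleanup(create_job, run_job, drop_job):
    """Create, run and drop a job; if anything fails, retry the drop
    with its own handler so the cleanup cannot mask the original error."""
    try:
        create_job()
        run_job()
        drop_job()
    except Exception as exc:
        try:
            drop_job()            # cleanup attempt
        except InterruptedError:  # stand-in for "job is running"
            time.sleep(1)         # wait rather than fail loudly
        except Exception as drop_exc:
            print(f"cleanup failed: {drop_exc}")
        print(f"job failed: {exc}")
        raise                     # re-raise the original error
```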

DELETE is faster than TRUNCATE

Truncate is useful in some serial batch processing, but it breaks read-write consistency, generates strange errors and results for running selects, and it needs DROP ANY TABLE when run on a table that you do not own.

But also, DELETE is faster in the following test case.

In 12c, you could have over one million partitions in a table, but for the sake of the universe, I’ll try with 10000.


SQL> create table scott.t(x) 
  partition by range(x) 
  interval(1) 
  (partition values less than (0)) 
  as 
  select rownum 
  from dual 
  connect by level<10001;
SQL> select count(*) from scott.t;

  COUNT(*)
----------
     10000

The 10K-row table is created, each row in its own partition


SQL> delete scott.t;

10000 rows deleted.

Elapsed: 00:00:04.02
SQL> rollback;

Rollback complete.

Not tuned or parallelized or anything. It took 4 seconds for 10’000 rows. If you have one billion rows, it is doable in a few hours. But you’d better do it in chunks then.

Anyway, let’s truncate


SQL> truncate table scott.t;

Table truncated.

Elapsed: 00:05:19.24

Five minutes!!! To truncate that tiny table.

If you have one million partitions and underlying indexes and lobs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, sysaux and undo.

The dictionary changes are very slow here.

permission issue due to one role

Most permissions issues are due to a missing role or privilege.

But in the following test case, you need to remove a grant to get more privileges.


create table tt(x number);
create view v as select * from tt;
create role rw;
grant all on v to rw;

I’ve created a read-write role on a view. The owner of the role is the DBA, but the owner of the view is the application. Next release, the role may prevent an application upgrade


SQL> create or replace view v as select * from dual;
ORA-01720: grant option does not exist for 'SYS.DUAL'

Ok, if I drop the role, it works


SQL> drop role rw;
Role dropped.
SQL> create or replace view v as select * from dual;
View created.

It is not always a good thing to grant privileges on a view when you are not the owner of that view.

Open last one-note page

If you have a OneNote document, you may want to automatically go to the last page. This is possible with PowerShell.

First you create a ComObject. There are incredibly many ComObjects that can be manipulated in PowerShell.


$o = New-Object -ComObject OneNote.Application

Now it gets a bit confusing. First you open your document


[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")

Now you get the XML


$o.GetHierarchy("", "hsPages", $x)

With the XML, you select the last page. For instance :


$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID

And from the ID, you generate a URL with GetHyperlinkToObject.


[ref]$h = ""
$o.GetHyperlinkToObject($p,"",$h)

Now we can open the url onenote:///Z:\Reports.one#W31&section-id={12345678-1234-1234-123456789ABC}&page-id={12345678-1234-1234-123456789ABC}&end


start-process $h.value

Don’t call it test

There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away!

I’ll start with test


cd $HOME/bin
vi test
  echo hello world
chmod +x test
./test
  hello world

The problem is that it may break your other scripts


$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???

And it may break sooner or later, depending on your OS / version / default shell / default path / others.

There are quite a few filenames you should not use, like test, date, time, hostname, mail, view, touch, sort and make. The command type lists some of those as a reserved word, shell builtin, tracked alias or shell keyword. But again, it is not consistent across Unix flavors.


$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date
$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date

Your sysadmin may also alias things for colors and safety in the common profile: for instance vi, ls, rm. But if it annoys you, then use \vi instead of vi.

list database monitoring users

I am quite familiar with the SYSMAN tables, but this one required some googling beyond the Oracle documentation.

The list of targets in your Oracle Enterprise Manager is in SYSMAN.MGMT_TARGETS. Each database target is monitored by a database user, typically DBSNMP.

To retrieve this information, you need to somewhat hijack your database; read this post by GΓΆkhan Atil.

  1. you copy your encryption key to your repository database, on the OMS server
  2. 
    $ emctl config emkey -copy_to_repos
    Enter Enterprise Manager Root (SYSMAN) Password :
    

    Now anyone with SELECT ANY TABLE on your repository will see all passwords. You don’t want to do this, but unfortunately you have to, because even the username is encrypted.

  3. you decrypt the credentials for db monitoring
  4. 
    SELECT *
    FROM (
      SELECT target_name,
        sysman.em_crypto.decrypt (
          c.cred_attr_value, 
          c.cred_salt) cred,
        cred_attr_name attr
      FROM SYSMAN.EM_TARGET_CREDS c
      JOIN SYSMAN.mgmt_targets t USING (target_guid)
      JOIN sysman.EM_NC_CRED_COLUMNS c USING (cred_guid)
      WHERE c.target_type = 'oracle_database'
      AND c.set_name = 'DBCredsMonitoring' 
    ) PIVOT ( 
      MAX (cred)
      FOR (attr) IN (
        'DBUserName' AS USERNAME, 
        'DBRole' AS "ROLE")
      )
    

    
    TARGET_NAME USERNAME ROLE  
    ----------- -------- ------
    DB01        dbsnmp   NORMAL
    DB02        dbsnmp   NORMAL
    DB03        sys      SYSDBA
    

  5. remove the security leak
  6. 
    $ emctl config emkey -remove_from_repos
    Enter Enterprise Manager Root (SYSMAN) Password :
    

Now the em_crypto won’t work any more


select 
  sysman.em_crypto.decrypt('0','0')
from dual
  *
Error at line 2
ORA-28239: no key provided
ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
ORA-06512: at "SYS.DBMS_CRYPTO", line 44
ORA-06512: at "SYSMAN.EM_CRYPTO", line 250
ORA-06512: at line 1

This information could be used to change the password dynamically across all databases.


emcli login \
  -username=sysman \
  -password=sysmanpw
emcli update_db_password \
  -target_name=DB01 \
  -user_name=dbsnmp \
  -change_at_target=yes \
  -old_password=oldpw \
  -new_password=newpw \
  -retype_new_password=newpw

Drop table if exists

The syntax that you are looking for is
docs.oracle.com/…/drop-table.html

 
DROP [TEMPORARY] TABLE [IF EXISTS]
    tbl_name [, tbl_name] ...
    [RESTRICT | CASCADE]

Wait, this does not work !


SQL> drop table if exists t;
drop table if exists t
              *
ERROR at line 1:
ORA-00933: SQL command not properly ended

Okay. It was the Oracle MySQL book πŸ˜‰

In the Oracle database, I have created my own droptableifexists script.

I went for a SQL*Plus no-PL/SQL approach. PL/SQL is also possible, but it generates different error messages (ORA-06512: at line 1) and different feedback (PL/SQL procedure successfully completed.).

So I check the dictionary, put a drop command into the sqlplus buffer if the table exists, then run that command first.

droptableifexists.sql


set feed off ver off pages 0 newp none
def cmd="select 'OK: Table does not exist' from dual"
col cmd new_v cmd nopri
select 'drop table "'||table_name||'"' cmd 
from user_tables 
where table_name='&1';
1 select
del *
1 &cmd
set feedb 6 head off
/
set head on
del *
undef cmd
col cmd clear

Ok, let’s try


SQL> create table t(x number);

Table created.

SQL> @droptableifexists T

Table dropped.

SQL> @droptableifexists T
OK: Table does not exist

A PL/SQL approach could be


for f in (
  select 'drop table "'||table_name||'"' cmd 
  from user_tables where table_name='T')
loop 
  execute immediate f.cmd;
end loop;

Try it :


SQL> create table t(x number);

Table created.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

A bit easier to read. The same has to be done for USER, VIEW and so on.
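For illustration, the dictionary-check pattern itself is portable; here is a sketch in Python, using sqlite3 as a stand-in for Oracle (sqlite's catalog table sqlite_master plays the role of user_tables — the helper name is mine):

```python
import sqlite3

def drop_table_if_exists(conn, table_name):
    # query the dictionary first, as the SQL*Plus script queries user_tables
    cur = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (table_name,))
    if cur.fetchone() is None:
        return "OK: Table does not exist"
    # quote the identifier, like the '"'||table_name||'"' in the script
    conn.execute('DROP TABLE "%s"' % table_name)
    return "Table dropped."

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x NUMBER)")
first = drop_table_if_exists(conn, "t")    # drops it
second = drop_table_if_exists(conn, "t")   # nothing left to drop
```

Same two-step idea: look it up, then drop, instead of catching the error after the fact.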

PS: there are also other versions around that catch ORA-942, but error numbers may change: what is ORA-942 today may be ORA-943 in the next release. Try drop table t;; in 11g and 12c to see those things really happen !

Untrusted X11 forwarding

I wrote a while ago about my security concerns regarding


xhost +
xterm -display mypc:0

Way back then, I suggested an ssh tunnel. SSH is pretty easy to set up, by enabling the X11Forwarding option.

In the OpenSSH 3.8 release notes, 2004, there was a new default:

ssh(1) now uses untrusted cookies for X11-Forwarding

In the ssh_config man page, it’s still documented as being the default:

ForwardX11Trusted The default is ‘no’

But it actually isn’t on most *ix derivatives, e.g. RedHat /etc/ssh/ssh_config:

# If this option is set to yes then
# remote X11 clients will have full access
# to the original X11 display. As virtually
# no X11 client supports the untrusted
# mode correctly we set this to yes.
ForwardX11Trusted yes

Who is we?

Okay, let’s go back.

If you use the least safe method, xhost + and xterm -display pc:0, then you grant everybody the right to manipulate X.

If you use trusted ssh, which is the _undocumented_ default in Linux, then you grant this right to anyone with access to your X authority, most probably located in the file $HOME/.Xauthority. So root and yourself, at least.

If you trust neither yourself nor root, you could restrict access to your resources, preventing a hacker from switching your mouse buttons or taking a screenshot. But this is probably going to prevent most of your applications from working. Also, it probably won’t work at all if you use putty, Reflection or (virtually any?) other client tools.

If you want to force Trusted mode, use -Y or -o ForwardX11Trusted=yes.

If you want to force Untrusted mode, use -X and -o ForwardX11Trusted=no.

If you use only -X, it may transparently default to the more convenient but less secure -Y. Sometimes. At least on Linux OpenSSH. But if you use different Unix / SSH flavours, -X may end with an error message like connection to “localhost:10.0” refused by server. In that case, simply use -Y. Actually, always use -Y if you want Trusted.

Irrecoverable part III : a fix

After part I (the problem) and part II (two reports), here is part III.

We backed up archivelogs during a full backup and deleted them before the full completed.


RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
15 4.00K  DISK 00:00:00   13:31:08
    BP Key: 15   Status: AVAILABLE
    Piece Name: /FULL/0fq7gc0s_1_1

  List of Archived Logs in backup set 15
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
25 4.00K  DISK 00:00:00   13:31:26
    BP Key: 25   Status: AVAILABLE
    Piece Name: /ARCH/0pq7gc1e_1_1

  List of Archived Logs in backup set 25
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
26 3.00K  DISK 00:00:00 13:31:26
  BP Key: 26   Status: AVAILABLE
  Piece Name: /ARCH/0qq7gc1e_1_1

  List of Archived Logs in backup set 26
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  16  355777  13:31:08 355827  13:31:22

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
28 2.50K  DISK 00:00:00 13:31:28
    BP Key: 28   Status: AVAILABLE
    Piece Name: /FULL/0sq7gc1g_1_1

  List of Archived Logs in backup set 28
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  17  355827  13:31:22 355863  13:31:28

Sequence 16 is missing in /FULL/ and is available in /ARCH/

Ok, let’s copy the missing pieces from ARCH to FULL


$ cd /
$ cp -p ARCH/* FULL/

Now let’s try. I save my directory to an offline location (tape/usb/cdrom) and drop my database including backups.


cd /
tar cvf /tmp/FULL.tar FULL

Let’s drop (it’s just a test)


RMAN> startup force mount dba 
RMAN> drop database including backups;
database dropped

Ok let’s get this USB stick back from the moon to my datacenter and see if I can restore …


cd /
tar xvf /tmp/FULL.tar 

Let’s do this !


RMAN> startup force nomount
RMAN> restore controlfile from '/FULL/c-2414975447-20150521-01';
RMAN> alter database mount;

Now I want RMAN to look for other files in FULL and also mark missing stuff as expired. The START WITH clause does the magic.

RMAN> catalog start with '/FULL/' noprompt;
RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> crosscheck archivelog all;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

As @dbastorage said on twitter:
“if it was easy, we would not need DBAs! ”

Irrecoverable full backup part II : reporting

After my post Can you restore from a full online backup ?, I needed to come up with a report.

Assuming that each backup goes in a different directory, I just wrote two reports.

  1. Report gaps in v$backup_redolog (or rc_backup_redolog if you use the catalog)
    
    DIR     FIRST_CHANGE# NEXT_CHANGE#
    ------- ------------- ------------
    /bck01/        284891       285140
    /bck01/        285140       285178
    
    /bck02/        284891       285140
    === GAP ===
    /bck02/        285178       285245 
    
    /bck03/        285178       285245
    /bck03/        285245       286931
    /bck03/        286931       287803
    /bck03/        287803       288148
    

    This could be done with analytics, by checking where the last next_change is not the current first_change, within a directory

    
    SELECT dir, 
      LAG missing_from_change#, 
      first_change# missing_to_change#
    FROM (
      SELECT REGEXP_REPLACE (handle, '[^/\]+$') dir,
        first_change#,
        next_change#,
        LAG(next_change#) OVER (
          PARTITION BY REGEXP_REPLACE (handle, '[^/\]+$')
          ORDER BY first_change#
        ) LAG
      FROM v$backup_piece p
      JOIN v$backup_redolog l 
        USING (set_stamp, set_count))
    WHERE LAG != first_change#;
    
    DIR     MISSING_FROM_CHANGE# MISSING_TO_CHANGE#
    ------- -------------------- ------------------
    /bck02/               285140             285178
    
  2. Report directories where archivelogs don’t include changes (backup redolog) from the earliest to the latest checkpoint (backup datafile)
    
    SELECT 
      REGEXP_REPLACE (handle, '[^/\]+$') dir,
      MIN (checkpoint_change#),
      MAX (checkpoint_change#),
      MIN (first_change#),
      MAX (next_change#)
    FROM v$backup_piece p
      LEFT JOIN v$backup_datafile f 
        USING (set_stamp, set_count)
      LEFT JOIN v$backup_redolog l 
        USING (set_stamp, set_count)
    WHERE handle IS NOT NULL
    HAVING 
      MIN (checkpoint_change#) < MIN (first_change#)
      OR 
      MAX (checkpoint_change#) > MAX (next_change#)
    GROUP BY REGEXP_REPLACE (handle, '[^/\]+$');
    
    DIR     MINCHECKPT MAXCHECKPT   MINFIRST    MAXNEXT
    ------- ---------- ---------- ---------- ----------
    /bck04/     954292     954299     959487    1145473
    

    The archives for the changes from 954292 to 959487 are missing.
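The first report’s analytics can be prototyped outside the database; a Python sketch of the same LAG logic (tuples stand in for v$backup_piece joined to v$backup_redolog, the helper name is mine):

```python
from itertools import groupby
from operator import itemgetter

def find_gaps(rows):
    # rows: (dir, first_change, next_change); within each directory,
    # ordered by first_change, the previous next_change must equal the
    # current first_change -- exactly what the LAG() query checks
    gaps = []
    for d, grp in groupby(sorted(rows, key=itemgetter(0, 1)),
                          key=itemgetter(0)):
        prev_next = None
        for _, first_change, next_change in grp:
            if prev_next is not None and prev_next != first_change:
                gaps.append((d, prev_next, first_change))
            prev_next = next_change
    return gaps

backups = [
    ("/bck01/", 284891, 285140), ("/bck01/", 285140, 285178),
    ("/bck02/", 284891, 285140), ("/bck02/", 285178, 285245),
    ("/bck03/", 285178, 285245), ("/bck03/", 285245, 286931),
]
```

find_gaps(backups) reports the /bck02/ hole from 285140 to 285178, like the SQL output above.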

If some archive backups are missing in one directory, it does not mean the database is irrecoverable; the archive backups could be in another directory. But it means that this single directory would no longer permit you to restore or duplicate.

Another approach with RESTORE PREVIEW was provided by Franck in my previous post : List all RMAN backups that are needed to recover.

Usual disclaimer: there are plenty of other irrecoverability causes, from hardware defects to backup “optimization”, that are beyond the scope of this post.

Can you restore from a full online backup ?

The question is not HOW TO DO IT but WHETHER YOU CAN DO IT !

A typical backup script would contain something like BACKUP DATABASE PLUS ARCHIVELOG:


backup database format
'/u99/backup/DB01/20150518/full_0_%d_s%s_p%p'
plus archivelog format
'/u99/backup/DB01/20150518/arc_%d_s%s_p%p';

Starting backup at 2015-05-18_18:27:55
current log archived
input archived log thread=1 sequence=469
...
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s86_p1
Finished backup at 2015-05-18_18:27:58

Starting backup at 2015-05-18_18:27:58
input datafile file number=00002 name=
/u02/oradata/DB01/undots01.dbf
...
including current control file in backup set
including current SPFILE in backup set
piece handle=
/u99/backup/DB01/20150518/full_0_DB01_s88_p1
Finished backup at 2015-05-18_18:28:16

Starting backup at 2015-05-18_18:28:16
current log archived
input archived log thread=1 sequence=20
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s89_p1
Finished backup at 2015-05-18_18:28:17

This sounds pretty safe, but what happens if you keep this full backup forever? Do you have everything you need in it to restore?

It depends. Chances are that you can restore. To annoy the paranoiacs, here is a counterexample.

  1. you start your full backup at 6pm.
    backing up sequences 21,22,23,24
    backup datafiles…

  2. at 7pm a cronjob issues backup archivelog all delete input;
    backing up and deleting sequences 21,22,23,24,25,26,27,28,29,30

  3. at 8pm your backup is about to finish
    backing up sequence 31

Well, where is my sequence 27 ?
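The timeline above can be replayed with plain sets; a sketch in Python (sequence numbers as in the example, variable names are mine):

```python
# archivelogs on disk over the evening; sets stand in for files
on_disk = set(range(21, 25))        # 6pm: sequences 21-24 have been archived

full = set(on_disk)                 # 1. the full backup copies them

on_disk |= set(range(25, 31))       # redo keeps coming, sequences 25-30
arch_cron = set(on_disk)            # 2. 7pm: backup archivelog all ...
on_disk.clear()                     #    ... delete input

on_disk.add(31)                     # 3. 8pm: current log archived
full |= on_disk                     #    and backed up by the ending full

missing = set(range(21, 32)) - full # sequences the full alone cannot replay
```

The full backup directory ends up with 21-24 and 31; sequences 25-30, including 27, only exist in the deleted-archivelog backup.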

Let’s try


rm /tmp/arch_DB01*
startup force mount;
crosscheck backup;restore database;


Starting restore at 2015-05-18_18:47:45
channel ORA_DISK_1: restore complete, elapsed time: 00:02:05
Finished restore at 2015-05-18_18:49:51

Ok, the restore was fine. Now what?


RMAN> recover database;

Starting recover at 2015-05-18_18:50:35
using channel ORA_DISK_1

starting media recovery

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of recover command at 05/18/2015 18:50:36
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 30
RMAN-06025: no backup of archived log for thread 1 with sequence 29
RMAN-06025: no backup of archived log for thread 1 with sequence 27

RMAN> alter database open;

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of alter db command at 05/18/2015 18:51:29
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u02/oradata/DB01/system01.dbf'

You have lost your database. Even if you did a full online backup plus archivelog, it did not contain all the archivelogs and RMAN did not tell you.

There is more than one way to reduce this annoyance :

1) don’t backup archivelogs during full backup
2) don’t delete archivelog backups done during a full backup
3) report any archivelog backups that ran during a full, either by looking in the RMAN dictionary or in your own logfiles
4) do offline backup for long term archives
5) don’t trust RMAN

And as well :
Test your backups 😀

🎂 10 years 🎂

Thanks to all my readers for being so faithful 🙂
I’ll post a new solution to calculate factorial.

This is the one I posted 10 years ago :
http://laurentschneider.com/wordpress/2005/05/recursive-sql.html

I also used it in the obfuscation contest


with function f (x number) return number 
is 
begin 
  return case x when 1 then x else x*f(x-1) end; 
end;
select value(t), f(value(t)) 
from table(sys.odcinumberlist(4,6))t

  VALUE(T) F(VALUE(T))
---------- -----------
         4          24
         6         720

It is neither quicker nor shorter than the one I posted in 2005, but it could not have worked in 2005: WITH FUNCTION only arrived in 12c.
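For comparison, the same recursion in Python (not a new solution, just the definition restated):

```python
def f(x):
    # f(1) = 1, f(x) = x * f(x - 1), as in the WITH FUNCTION clause
    return x if x == 1 else x * f(x - 1)

results = [(t, f(t)) for t in (4, 6)]   # mirrors sys.odcinumberlist(4,6)
```

results is [(4, 24), (6, 720)], matching the query output above.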

run sudo, ssh, password, su in simulated interactive mode

Some commands do not like non-interactive mode


$ passwd <<EOF
> oldpassword
> newpassword
> newpassword
> EOF
Changing password for user lsc.
Current password for lsc@example.com: passwd: Authentication token manipulation error
$ echo oraclepassword | su - oracle
standard in must be a tty
$ echo sudopassword | sudo su - oracle
[sudo] password for lsc:
sudo: sorry, you must have a tty to run sudo

But ok, if you really want to run those in a script, you have plenty of clean (no hack there) ways of doing this.

For instance, let’s use a screen called BAR.


$ xterm -e "screen -S BAR" &
[1]     31732

Now we have an interactive terminal. It could be redirected to a frame buffer device if no X server is started.

Not really a human device, but an interactive terminal.

Now let’s send stuff


$ CR="$(printf '\r')"
$ screen -S BAR -X stuff "sudo su - oracle$CR"
$ screen -S BAR -X stuff "sudopassword$CR"
$ screen -S BAR -X stuff "id > /tmp/xxx$CR"
$ screen -S BAR -X stuff "exit$CR"
$ screen -S BAR -X stuff "exit$CR"
[1] +  Done                    xterm -e "screen -S BAR" &
$ cat /tmp/xxx
uid=100(oracle) gid=100(dba) groups=100(dba)
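The five stuff commands above follow one pattern, so they can be generated rather than typed; a Python sketch (hypothetical helper, it only builds the argument vectors for subprocess and does not run screen):

```python
CR = "\r"  # the carriage return that "presses enter" in the session

def screen_stuff(session, lines):
    # one argv vector per line: screen -S <session> -X stuff "<line>\r"
    # passing argv instead of a shell string sidesteps quoting issues
    return [["screen", "-S", session, "-X", "stuff", line + CR]
            for line in lines]

cmds = screen_stuff("BAR", ["sudo su - oracle", "sudopassword",
                            "id > /tmp/xxx", "exit", "exit"])
```

Each entry in cmds could then be handed to subprocess.run, reproducing the manual session above.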

Usual disclaimer: it is a bad security practice to hardcode your passwords in scripts. Use this only if you really understand security. Read man openssl about how to use openssl to encrypt your password. Ask your security friends before trying.

bypass ora-20

When you really need to run one script at all costs, an annoying error is ORA-00020: maximum number of processes (40) exceeded, which can occur even as sysdba.

Test case (processes=21 is not something to do in real life):


SQL> alter system set processes=21 scope=spfile;
System altered.
SQL> startup force quiet
ORACLE instance started.
Database mounted.
Database opened.
SQL> quit

From now on, sqlplus as sysdba is impossible.

$ sqlplus -s -L / as sysdba
ERROR:
ORA-00020: maximum number of processes (40) exceeded
SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus

Okay, if you really really need to run one script, you could connect with sqlplus -prelim and restart the database.


if :|sqlplus / as sysdba|grep ORA-00020
then
  echo shu abort|sqlplus -prelim / as sysdba
  echo startup quiet|sqlplus / as sysdba
fi

If ORA-20 is detected, then the database will be restarted.

switch user in Oracle

Almost a decade ago I wrote about su in sqlplus. This 10gR2 “new” feature allows delegation à la sudo.

By checking DBA_USERS in 12c, I found PROXY_ONLY_CONNECT. According to Miguel Anjo, there is a secret syntax for allowing only the proxy connection.


SQL> ALTER USER app_user PROXY ONLY CONNECT;
SQL> CONNECT app_user/xyz
ERROR:ORA-28058: login is allowed only through a proxy

comment in external table

Depending on the file, you may use different signs for comments, typically:


# hash
// slash slash 
/* slash-star star-slash */
: colon
-- dash dash

The last one is used in SQL and PL/SQL, but:


CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    FIELDS TERMINATED BY ';'  -- This is a comment
    (x))
  LOCATION ('x'));

SELECT * FROM t;
SELECT * FROM t
              *
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "minussign": expecting one of: 
  "column, enclosed, (, ltrim, lrtrim, ldrtrim, missing, 
  notrim, optionally, rtrim, reject"
KUP-01007: at line 2 column 38

not in external table access parameters.

No comment is allowed there!

How to convert Excel file to csv

One-liner to convert Excel to CSV (or to and from any other format).
There is a bug 320369 if you have Excel in English and your locale is not American. Just change your settings to us_en before conversion.
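The one-liner itself is not reproduced here, but as a rough illustration of why the conversion is scriptable at all: a .xlsx file is just a zip of XML. A stdlib-only Python sketch of my own (it handles inline strings and plain values only — no sharedStrings, styles or dates):

```python
import csv
import io
import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def xlsx_rows(data):
    # a .xlsx is a zip; the first sheet lives in xl/worksheets/sheet1.xml
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        root = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
    rows = []
    for row in root.iter(NS + "row"):
        cells = []
        for c in row.iter(NS + "c"):
            # inline strings sit under <is><t>, plain values under <v>
            node = (c.find(NS + "is/" + NS + "t")
                    if c.get("t") == "inlineStr" else c.find(NS + "v"))
            cells.append(node.text if node is not None and node.text else "")
        rows.append(cells)
    return rows

# build a minimal two-row workbook in memory to demonstrate
SHEET = (b'<worksheet xmlns="http://schemas.openxmlformats.org/'
         b'spreadsheetml/2006/main"><sheetData>'
         b'<row><c t="inlineStr"><is><t>name</t></is></c>'
         b'<c t="inlineStr"><is><t>qty</t></is></c></row>'
         b'<row><c t="inlineStr"><is><t>apples</t></is></c>'
         b'<c><v>3</v></c></row>'
         b'</sheetData></worksheet>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("xl/worksheets/sheet1.xml", SHEET)

rows = xlsx_rows(buf.getvalue())
out = io.StringIO()
csv.writer(out).writerows(rows)   # CSV text now in out.getvalue()
```

For real spreadsheets, a proper library (or Excel itself, as in the post) is of course the safer route.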

Unusable index

After table maintenance, like move or split partition, underlying indexes are marked as unusable.

This boils down to segment reorganisation, not dictionary change.

For instance :


CREATE TABLE t(x NUMBER)
PARTITION BY RANGE(x)
(PARTITION p1 VALUES LESS THAN (MAXVALUE));

CREATE INDEX i ON t (x);


INSERT INTO t VALUES (1);

ALTER TABLE T SPLIT PARTITION p1 AT 
  (100) INTO (PARTITION p2, PARTITION p3);

SELECT index_name, status 
FROM user_indexes 
WHERE index_name = 'I';

INDEX_NAME STATUS  
---------- --------
I          VALID

The partition p1 is split into two, p2 and p3; all rows from p1 belong to p2, p3 is created empty. No data movement, so the index remains valid.


delete from t; 
INSERT INTO t VALUES (250);

ALTER TABLE T SPLIT PARTITION p3 AT 
  (200) INTO (PARTITION p4, PARTITION p5);

SELECT index_name, status 
FROM user_indexes 
WHERE index_name = 'I';

INDEX_NAME STATUS  
---------- --------
I          VALID

Same here: all rows from p3 move to p5, p4 is created empty, no problem.


delete from t; 
INSERT INTO t VALUES (250);
INSERT INTO t VALUES (350);

ALTER TABLE T SPLIT PARTITION p5 AT 
  (300) INTO (PARTITION p6, PARTITION p7);

SELECT index_name, status 
FROM user_indexes 
WHERE index_name = 'I';
INDEX_NAME STATUS  
---------- --------
I          UNUSABLE

One row goes to p6, one row to p7, the index is invalidated.

To avoid this, use ALTER TABLE T SPLIT PARTITION p5 AT (300) INTO (PARTITION p6, PARTITION p7) UPDATE INDEXES;

What are the consequences?


INSERT INTO t VALUES (320);
1 row inserted

It does not prevent DML in this case.


select * from t where x=320;

         X
----------
       320

Execution Plan
---------------------------------------------------
 0   SELECT STATEMENT Optimizer Mode=ALL_ROWS
 1 0   PARTITION RANGE SINGLE
 2 1     TABLE ACCESS FULL SCOTT.T

The row is retrieved, but the unusable index is not used.

In some cases, DML may be prevented.


alter index i rebuild;
alter table t modify (x primary key using index);
ALTER TABLE T SPLIT PARTITION p7 AT 
  (330) INTO (PARTITION p8, PARTITION p9) ;

insert into t values (450);
ORA-01502: index 'SCOTT.I' or partition of such index is in unusable state

The index can no longer be used for constraint enforcement and the INSERT fails.

If the index is partitioned, the index is not marked as unusable as a whole, only the affected (sub)partitions are marked.

Check all your indexes status in dba_indexes.status, dba_ind_partitions.status and dba_ind_subpartitions.status.

Rebuild them with alter index i rebuild, alter index i rebuild partition p and alter index i rebuild subpartition sp.

Did you forget to run root.sh?

Not easy to detect, and depending on the product (agent/database), it may have only limited side effects.

Like external jobs not running, or operating system statistics not collected.

But it is not always easy to diagnose.

For instance if you patch from OMS 12cR2 to 12cR3 and you ran the root.sh only in 12cR2, there are very few statistics missing (one is OS_STORAGE_ENTITY).

Running the root.sh doesn’t generate a log file or an entry in the inventory.

To check if it was executed, check what it is supposed to do. It is a bit different in each version. One thing it always does is change the ownership to root and set the setuid bit on a few binaries. For the database, this is done in sub-scripts called rootadd.sh (10g) or rootadd_rdbms.sh (11g/12c).


eval ls -l $(find $ORACLE_HOME -name "rootadd*sh" -exec awk '$1=="$CHOWN"&&$2=="root"{print $3}' {} \;|sort -u)

-rwsr-x--- root dba .../product/11.2.0/db_4/bin/extjob
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/jssu
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmb
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmhs
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmo
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/oradism
-rw-r----- root dba ...11.2.0/db_4/rdbms/admin/externaljob.ora

If the ownership is root, you definitely did run the root.sh.
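That ls check boils down to a simple predicate: the file exists, belongs to root and carries the setuid bit (the s in -rwsr-x---). A Python sketch of the same test (hypothetical helper name; demonstrated on a scratch file, which of course is not setuid root):

```python
import os
import stat
import tempfile

def looks_root_sh_treated(path):
    # root.sh leaves the binary owned by root (uid 0) with the setuid
    # bit set, which is what -rwsr-x--- in the ls output above shows
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return False
    return st.st_uid == 0 and bool(st.st_mode & stat.S_ISUID)

# a scratch file is neither setuid nor root.sh-treated
with tempfile.NamedTemporaryFile() as f:
    scratch = looks_root_sh_treated(f.name)
```

Pointed at $ORACLE_HOME/bin/oradism or extjob, it answers the same question as the eval/find line above.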

On the 12c agent, there is a FULL_BINARY_LIST variable that points to the list of root binaries in sbin:


eval $(grep FULL_BINARY_LIST= $AGENT_HOME/root.sh)
cd $AGENT_HOME/../../sbin
ls -l $FULL_BINARY_LIST

-rws--x--- root dba nmb
-rws--x--- root dba nmhs
-rws--x--- root dba nmo

If all files exist and belong to root, it looks like you did run the root.sh.