changing container in plsql

One of today's challenges, since Oracle 12c deprecated the non-CDB architecture, is to make the dba scripts CDB-aware.

If you are lucky enough to have no 11g around, you can mostly replace DBA_* by CDB_*

OLD:

SQL> select count(*) from dba_users;
  COUNT(*)
----------
       121

NEW: non-cdb

SQL> select con_id, count(*) from cdb_users group by con_id;

CON_ID   COUNT(*)
------ ----------
     0        121

NEW: single-tenant

SQL> select con_id, count(*) from cdb_users group by con_id;
    CON_ID   COUNT(*)
---------- ----------
         1         23
         3         39

As mentioned in a white paper :
The set container privilege has certain restrictions in PL/SQL
multitenant-security-concepts-12c-2402462.pdf

Sometimes those certain restrictions will puzzle you

SQL> set feed off serverout on
SQL> exec dbms_output.put_line('root')
root
SQL> alter session set container=dora1;
SQL> sho serverout
serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED
SQL> exec dbms_output.put_line('dora1');
SQL> -- NO OUTPUT WTF !!!!
SQL> set serveroutput ON 
SQL> exec dbms_output.put_line('dora1');
dora1
SQL> 

The security model puts restrictions on PL/SQL when you use ALTER SESSION SET CONTAINER (whether with EXECUTE IMMEDIATE or interactively as in the previous example); here, SERVEROUTPUT had to be enabled again after the switch.

Now you know…

on input and output file descriptors

Let's start with some basics. The basics work on Unix, Linux and Windows alike; the later techniques only work on Linux/Unix.

$ ls -l hosts          
-rw-r--r--. 1 root root 211 Oct  5  2015 hosts
$ ls -l xxx  
ls: cannot access xxx: No such file or directory
$ read x
foo
$ 

Output and error are displayed on the screen and input is read from your keyboard

The output is known as file-descriptor-1 or stdout. Sometimes, depending on your OS, it may be exposed as /dev/fd/1 or /dev/stdout, but not all *nix have this.
The error is known as file-descriptor-2 or stderr.
The input is known as file-descriptor-0 or stdin.

Instead of keyboard and screen, it could be a file or any other devices, e.g. /dev/null or just a simple file.

$ ls -l hosts 1>file1
$ ls -l xxx 2>file2          
$ read x 0<file3

0 and 1 are optional here.

It is also possible to redirect stdout to stderr and vice versa

$ ls -l hosts 1>&2         
-rw-r--r--. 1 root root 211 Oct  5  2015 hosts
$ ls -l xxx 2>&1 
ls: cannot access xxx: No such file or directory

It is possible to close the file descriptor.

sleep 1 1>&- 2>&- 0<&-

Well, sleep has no output and no input and no error, so the effect is not impressive.

If you write to a closed file descriptor, you get an error. Ok, if you close both stdout and stderr, the error will be silent. But there will still be an error.

$ (echo foo) 1>&-     
bash: echo: write error: Bad file descriptor
$ echo $?        
1
$ (echo bar 1>&2) 2>&-      
$ echo $?             
1

If you want to redirect stdout to stderr and stderr to stdout (i.e. swap them), you'd better use a new file descriptor

$ (ls -l hosts xxx 1>&2 2>&3) 3>&1            
xxx not found
-rw-rw-r-- 1 root system 2133 Jun 22 2017 hosts

An old trick is to use an additional file descriptor to retrieve the return code of a command on the left-hand side of a pipe.

$ ((((ls hosts; echo $? >&3) | tr '[a-z]' '[A-Z]' >&4) 3>&1) | (read rc; exit $rc)) 4>&1       
HOSTS
$ echo $?
0
$ ((((ls xxx; echo $? >&3) | tr '[a-z]' '[A-Z]' >&4) 3>&1) | (read rc; exit $rc)) 4>&1        
ls: cannot access xxx: No such file or directory
$ echo $?
2

I have known this trick for so long that I could not credit the author; I only found some 21st-century posts.
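If your interactive shell is bash, a simpler, bash-only alternative (so not a replacement for the portable trick above) is the PIPESTATUS array; a minimal sketch:

$ ls xxx | tr '[a-z]' '[A-Z]'
ls: cannot access xxx: No such file or directory
$ echo ${PIPESTATUS[0]} $?
2 0

${PIPESTATUS[0]} holds the return code of ls, while $? only reflects the last command of the pipe.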

If you want to redirect all your outputs to a logfile, you can use exec

#!/bin/ksh
exec 1>>/tmp/mylog
exec 2>>/tmp/mylog
cd /etc 
ls -l hosts          
ls -l xxx
exit

If you want to be able to still use your stdout / stderr, again, open new descriptors

#!/bin/ksh
exec 3>&1
exec 4>&2
exec 1>>/tmp/mylog
exec 2>>/tmp/mylog
cd /etc 
ls -l hosts          
ls -l xxx
echo INFO >&3
echo ERROR >&4
exec 3>&-
exec 4>&-
exit

Bash also has a shortcut

ls xxx host &>log

&> redirects both 1 and 2 in one step. It doesn't work in ksh.
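The portable spelling, which works in ksh and bash alike, is to redirect stdout first and then send stderr to the same place:

ls -l hosts xxx >log 2>&1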

Standard date format

Let’s start with Powershell

Get-Date -format "o"
2019-03-08T17:41:02.7346332+01:00

The “O” or “o” standard format specifier represents a custom date and time format string using a pattern that preserves time zone information and emits a result string that complies with ISO 8601
docs.microsoft.com

Now Linux

date "+%Y%m%dT%H:%M:%S.%3N%z"

In SQL

to_char(current_timestamp, 'YYYYMMDD"T"HH24:MI:SS.FF3TZH:TZM') 

for my XML fans

extractvalue(xmlelement(t, current_timestamp),'/*') 

Now in AIX

 perl -e '
      use strict;
      use POSIX "strftime";
      use Time::Piece;
      use Time::HiRes "gettimeofday";
      my($x,$y)=gettimeofday;
      my $s=Time::Piece->new;
      my $t=$s->tzoffset;
      printf "%s.%03d%+03d:%02d\n",
        strftime("%Y%m%dT%H:%M:%S",localtime($x)),
        $y/1000,
        $t/3600,
        abs($t)%3600/60;
      '

Could not have figured it out without google 😉
The GNU date could also be installed in AIX, but I am not root

A more generic unix version would be the UTC date

date -u "+%Y%m%dT%H:%M:%SZ" 
20190308T16:58:13Z

register database in Cloud Control with command line

A common task of the DBA is to add databases in EM. If you do it 1000 times, you'll get bored. I guess…

Sure, you could go to EM Cloud Management Pack, but maybe you use SQLPLUS + CREATE DATABASE + catalog/catproc.

Okay, there are some ways/hacks to do it with emcli/emctl, but let’s face it, there is a tool around the corner that does everything for you :
DBCA

Wait, that GUI thing? Yes and no. Just use the silent option. It is that simple.


dbca -silent
-createDatabase
-templateName General_Purpose.dbc
-gdbname cdb1 -sid cdb1
-sysPassword ***
-systemPassword ***
-emConfiguration CENTRAL
-dbsnmpPassword ***
-emUser sysman
-emPassword ***
-omsHost em.example.com
-omsPort 4900

Wait, that simple?

Yep 😉

More details on how to use silent dbca on oracle-base.com

Audit pluggable database

In the old, now-deprecated, maybe-soon-desupported non-CDB infrastructure, auditing was set up right after connect / as sysdba.

In single-tenant (or multi-tenant), things get complicated.

Once again, the doc must be read at least twice 😉

If you issue an audit statement in the root, then the database performs auditing across the entire CDB, that is, in the root and all PDBs […] all common users are audited

This is very tricky, because you don’t want to audit common users only

Let’s try

SQL> conn / as sysdba
Connected.
SQL> create user c##u identified by ***;
User created.
SQL> grant create session to c##u container=all;
Grant succeeded.
SQL> alter session set container=pdb01;
Session altered.
SQL> create user u identified by ***;
User created.
SQL> grant create session to u;
Grant succeeded.
SQL> alter session set container=cdb$root;
Session altered.
SQL> audit connect container=all;
Audit succeeded.
SQL> select AUDIT_OPTION, CON_ID from cdb_STMT_AUDIT_OPTS;
AUDIT_OPTION   CON_ID
-------------- ------
CREATE SESSION      1
SQL> sho parameter audit_trail
NAME         VALUE
------------ -------------
audit_trail  DB, EXTENDED
SQL> 

Audit is now logging all connections of all common users on all databases…

SQL> conn c##u/***
Connected.
SQL> sho user
USER is "C##U"
SQL> sho con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> conn c##u/***@pdb01
Connected.
SQL> sho user
USER is "C##U"
SQL> sho con_name
CON_NAME
------------------------------
pdb01

Let’s verify :

SQL> conn / as sysdba
Connected.
SQL> select * from cdb_audit_trail order by timestamp;

CON_ID USERNAME ACTION TIMESTAM
------ -------- ------ --------
     1 C##U     LOGON  18:01:05
     1 C##U     LOGOFF 18:01:06
     3 C##U     LOGON  18:01:07
     3 C##U     LOGOFF 18:01:08

So far so good. What about local users?

SQL> conn u/***@pdb01
Connected.
SQL> sho user
USER is "U"
SQL> sho con_name
CON_NAME
------------------------------
pdb01

And???

SQL> conn / as sysdba
Connected.
SQL> select * from cdb_audit_trail where USERNAME='U';
no rows selected

Nope! DBAs like you and me don't care only about those C## users; we want ALL users, not just the common users.

For this purpose, we need to activate the audit in every pluggable database.

SQL> conn / as sysdba
Connected.
SQL> alter session set container=pdb01;
Session altered.
SQL> audit connect;
Audit succeeded.
SQL> alter session set container=cdb$root;
Session altered.
SQL> select AUDIT_OPTION, CON_ID from cdb_STMT_AUDIT_OPTS;
AUDIT_OPTION   CON_ID
-------------- ------
CREATE SESSION      1
CREATE SESSION      3

Now it should work

SQL> conn u/***@pdb01
Connected.
SQL> sho user
USER is "U"
SQL> sho con_name
CON_NAME
------------------------------
pdb01
SQL> host sleep 1

… and …

SQL> select * from cdb_audit_trail where username='U';
CON_ID USERNAME ACTION TIMESTAM
------ -------- ------ --------
     3 U        LOGON  18:01:12
     3 U        LOGOFF 18:01:13

If you already switched from non-cdb to single-tenant, please check your audit strategy NOW !!!
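If you have more than a handful of pluggable databases, here is a minimal sketch to generate and run the statement in every PDB that is open read write; it assumes sqlplus on the server and the plain AUDIT CONNECT shown above, adjust it to your own audit policy:

#!/bin/ksh
# generate "audit connect" for every PDB open read write, then run it
sqlplus -s / as sysdba <<'EOF'
set pages 0 feed off ver off
spool /tmp/audit_all_pdbs.sql
select 'alter session set container="'||name||'";'||chr(10)||'audit connect;'
  from v$pdbs
 where open_mode = 'READ WRITE';
spool off
@/tmp/audit_all_pdbs.sql
EOF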

19c

19c is a mini-release. Remember it is a new name for the second 12cR2 patchset, after 12.2.0.2/18c

https://mikedietrichde.com/2019/02/13/oracle-database-19-2-for-exadata-is-now-available-for-download/ was the first to mention it. By looking up in the doc I found

• Distinct listagg

• Desupport of the sqlplus product profile

• listener.log log rotation

Go to the doc to find more https://docs.oracle.com/en/database/oracle/oracle-database/19/whats-new.html

Unix ODBC Sybase

very similar to Unix ODBC Oracle

instead of tnsnames, the connections are defined in $SYBASE/interfaces.

The odbc.ini must exist as well in $SYBASE.

if you test with unixODBC-devel, keep in mind to use /usr/bin/isql and not $SYBASE_OCS/bin/isql

$ODBCSYSINI/odbc.ini
[syb]
Driver = Sybase16
DSN = syb
ServerName=SYB01

$ODBCSYSINI/odbcinst.ini
[Sybase16]
Description = Adaptive Server Enterprise
Driver = /u01/app/sybase/product/16.0/DataAccess64/ODBC/lib/libsybdrvodb.so

$SYBASE/odbc.ini
[syb]
DSN = syb
ServerName=SYB01

$SYBASE/interfaces
SYB01
        master tcp ether srv01.example.com 15000
        query tcp ether srv01.example.com 15000

/usr/bin/isql -v syb user01 passw01
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>

Unix ODBC Oracle

To connect via ODBC, check https://laurentschneider.com/wordpress/tag/odbc

This article is related to Unix/Linux. Often you have a fat client written in C, while java uses JDBC instead of ODBC.

Okay, it’s pretty easy, if you have an oracle client, you probably already have libsqora.so.xx.1 in your LD_LIBRARY_PATH. In this case you can connect using ODBC.

What you need is an odbc.ini where you define your connections
[DB01]
Driver = OracleODBC18
DSN = DB01
ServerName = DB01

and an odbcinst.ini where you define your driver
[OracleODBC18]
Description = Oracle ODBC driver for Oracle 18
Driver = /u01/app/oracle/product/18.1.0/client_64/lib/libsqora.so.18.1

the name / location and options may depend on your software / driver vendor.

ODBC uses TNSNAMES, so it is really easy: the odbc entry ServerName = DB01 simply has to match
DB01 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(Host = srv01.example.com)(Port = 1521))(CONNECT_DATA = (SERVICE_NAME=DB01.example.com)))
in tnsnames.ora.

ODBC full client allows you to use all connection features like LDAP and SSL.

To test it, I installed unixODBC-devel
sudo yum install unixODBC-devel
export ODBCSYSINI=/home/user1/odbc
cd $ODBCSYSINI
vi odbc.ini odbcinst.ini

The machine-wide ODBCSYSINI is /etc. You can choose to define ODBCINI user-wide (/home) and ODBCSYSINI machine-wide (/etc). I wouldn't use machine-wide passwords, but configuring the drivers only once may be an option, if you are root and you don't have too many drivers / versions / bitnesses.
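For illustration, the split could look like this (ODBCSYSINI and ODBCINI are standard unixODBC variables, the paths are only an example):

export ODBCSYSINI=/etc                 # machine-wide: odbcinst.ini with the drivers
export ODBCINI=$HOME/odbc/odbc.ini     # user-wide: your own DSN definitions
isql -v DB01 scott tiger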

Now try to connect :
isql DB01 scott tiger
SQL> select * from scott.emp;
+-------+-----------+----------+-------+--------------------+----------+----------+-------+
| EMPNO | ENAME | JOB | MGR | HIREDATE | SAL | COMM | DEPTNO|
+-------+-----------+----------+-------+--------------------+----------+----------+-------+
| 7369 | SMITH | CLERK | 7902 | 1980-12-17 00:00:00| 800 | | 20 |
| 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 00:00:00| 1600 | 300 | 30 |
| 7521 | WARD | SALESMAN | 7698 | 1981-02-22 00:00:00| 1250 | 500 | 30 |
| 7566 | JONES | MANAGER | 7839 | 1981-04-02 00:00:00| 2975 | | 20 |
| 7654 | MARTIN | SALESMAN | 7698 | 1981-09-28 00:00:00| 1250 | 1400 | 30 |
| 7698 | BLAKE | MANAGER | 7839 | 1981-05-01 00:00:00| 2850 | | 30 |
| 7782 | CLARK | MANAGER | 7839 | 1981-06-09 00:00:00| 2450 | | 10 |
| 7788 | SCOTT | ANALYST | 7566 | 1987-04-19 00:00:00| 3000 | | 20 |
| 7839 | KING | PRESIDENT| | 1981-11-17 00:00:00| 5000 | | 10 |
| 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 00:00:00| 1500 | 0 | 30 |
| 7876 | ADAMS | CLERK | 7788 | 1987-05-23 00:00:00| 1100 | | 20 |
| 7900 | JAMES | CLERK | 7698 | 1981-12-03 00:00:00| 950 | | 30 |
| 7902 | FORD | ANALYST | 7566 | 1981-12-03 00:00:00| 3000 | | 20 |
| 7934 | MILLER | CLERK | 7782 | 1982-01-23 00:00:00| 1300 | | 10 |
+-------+-----------+----------+-------+--------------------+----------+----------+-------+
SQLRowCount returns -1
14 rows fetched
SQL>

Privileges on a view

Granting too many privileges on a view could be disastrous. A view is often used as a security element; you grant access to only a subset of columns and rows to one user. Mostly only SELECT. If you want to grant update to only some rows, the security could be enhanced with the WITH CHECK OPTION.

But let’s talk about granting too much privs.
disclaimer: it may damage your database forever

SQL> create or replace view v as select trunc(sysdate) today from dual;
View created.
SQL> create public synonym v for v;
Synonym created.
SQL> grant all on v to public;
Grant succeeded.
SQL> conn u/***@db01
Connected.
SQL> select * from v;
TODAY
----------
2018-10-15
SQL> select * from v;
TODAY
----------
2018-10-15
SQL> delete from v;
1 row deleted.
SQL> select * from v;
TODAY
----------
2018-10-15
SQL> select count(*) from v;
COUNT(*)
----------
1
SQL> select count(dummy) from dual;
COUNT(DUMMY)
------------
0
SQL> rollback;
Rollback complete.

Wait… what happened ???

SQL> delete from v;
1 row deleted.
SQL> select count(*) from v;
COUNT(*)
----------
1

This is a biased test, because nobody creates views in the SYS schema and nobody shall ever do GRANT ALL TO PUBLIC. But sometimes people do. Because of the grant, you have emptied dual. 😮

The COUNT(*) is a magic thing. select count(*) from dual returns 1. Unless your instance collapses.


SQL> delete dual;
1 row deleted.
SQL> alter session set "_fast_dual_enabled"=false;
Session altered.
SQL> select count(*) from dual;
COUNT(*)
----------
1
SQL> rollback;

One reader once asked for assistance because he tried it and his db was broken. I won't help you. Just do it for fun, on a database that you can recreate afterwards.

Okay, enough fun for today, let’s see another side effect of excessive rights.

SQL> create user u identified by ***;
User created.
SQL> create role r;
Role created.
SQL> grant create view, create session to u;
Grant succeeded.
SQL> conn u/***@db01
Connected.
SQL> create view v as select trunc(trunc(sysdate)-.5) yesterday from dual;
View created.
SQL> create role r;
Role created.
SQL> select * from v;
YESTERDAY
----------
2018-10-14
SQL> delete from v;
delete from v
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> grant delete on v to r;
grant delete on v to r
*
ERROR at line 1:
ORA-01720: grant option does not exist for 'SYS.DUAL'
SQL> grant select on v to r;
Grant succeeded.

So far so good, I have created a view and granted select only on that view. I cannot delete DUAL. I cannot grant delete.

Now learn about this less-known annoyance

SQL> conn / as sysdba
Connected.
SQL> grant select, update, insert, delete on u.v to r;
Grant succeeded.

What? SYS can give access to my view to a role, even if I have no DELETE right on the underlying?

SQL> grant create session, r to user2;
Grant succeeded.
SQL> conn user2/***@DB01
Connected.
SQL> select * from u.v;
YESTERDAY
----------
2018-10-14
SQL> delete from v;
delete from v
*
ERROR at line 1:
ORA-01031: insufficient privileges

So far, it didn't have many side effects. It is not uncommon to see scripts that automatically generate grants; and it is also not uncommon to see those scripts go doolally.

But, one side effect is preventing future CREATE OR REPLACE statements.

SQL> create or replace view v as select trunc(trunc(sysdate+6,'YYYY')+400,'YYYY')-7 xmas from dual;
create or replace view v as select trunc(trunc(sysdate+6,'YYYY')+400,'YYYY')-7 xmas from dual;
*
ERROR at line 1:
ORA-01720: grant option does not exist for 'SYS.DUAL'

CREATE OR REPLACE no longer works. You need to revoke the grant, either with a DROP VIEW or with


SQL> revoke insert, update, delete on v from r;
Revoke succeeded.
SQL> create or replace view v as select trunc(trunc(sysdate+6,'YYYY')+400,'YYYY')-7 xmas from dual;
View created.
SQL> select * from v;
XMAS
----------
2018-12-25

I’d recommend against using SYS to grant access to user tables. Use the schema owner.
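To spot grants issued by SYS on other schemas' objects, a simple sketch (adjust the filter to your taste):

echo "select grantee, owner, table_name, privilege
        from dba_tab_privs
       where grantor = 'SYS'
         and owner not in ('SYS','SYSTEM');" |
sqlplus -s / as sysdba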

super-long-lines in CLOB

Sometimes you use sqlplus and your line is longer than your linesize


SQL> select n||';'||x from t2;
1;one
2;twoPxMQztzLaqjWjGKOXIVIVrrHC
fJkTLbRgCPiENfWrrCjUMRSkmCfLUY
RdVASFacGtyEnUplOAXspDJZmSLPfg
ziKYQBKoHdkqGXoBCgkKuVOdxEvhQy
lZLQROFxIxOzqeQeFMGXcGkuJwnGGg
zvNSuCFMVxyQgboLAUpDcYnBsuVyXX
vFtWPICwqmcEdmzRRkOzzPethrqjRR
aBSOTOZiYfxcYSqbqFlGuVOBRdxjFh
ZcxJBvSCVXwONmS
prVzTykEfSsePyYwyLVoyYrVLynUzs
MLFWQxwUKNsVcYzUOAhslNldnBpITS
rxPlpJbLSjJqgxNxsGVsrYhkWAMufk
QnRayieEkSDYrNqyLejJuggADNxcgV
tszjJIYKCxPweNGhXsOFKGbMkTBPCf
DXwjBNgQYswbaNWBOEtSTHjIhdLAyM
nbhyhRKKdfaTTpTgHqQelVWmnkBHjA
ZTrGqdtlYAgoXNHnoryxHxVVyaMiGR
SjdVlRwMas

3;three
SQL>

There are two extra line breaks, one after ONmS and one after wMas
An easy solution is to set long lines with set lin 32767 longc 32767 long 2000000000… but once you reach 32K, it's game over.

I came up with an easy workaround, using DBMS_LOB.CLOB2FILE

create directory d as '/tmp';
begin
for f in (select rownum r, n||';'||x||chr(10) txt from t)
loop
dbms_lob.clob2file(f.txt, 'D',
'F'||to_char(f.r,'FM00009')||'.txt');
end loop;
end;
/

ls -la F?????.txt
-rw-r----- 1 oracle 6 Sep 26 13:40 F00001.txt
-rw-r----- 1 oracle 506 Sep 26 13:40 F00002.txt
-rw-r----- 1 oracle 8 Sep 26 13:40 F00003.txt
cat F?????.txt

1;one
2;twoPxMQztzLaqjWjGKOXIVIVrrHC
fJkTLbRgCPiENfWrrCjUMRSkmCfLUY
RdVASFacGtyEnUplOAXspDJZmSLPfg
ziKYQBKoHdkqGXoBCgkKuVOdxEvhQy
lZLQROFxIxOzqeQeFMGXcGkuJwnGGg
zvNSuCFMVxyQgboLAUpDcYnBsuVyXX
vFtWPICwqmcEdmzRRkOzzPethrqjRR
aBSOTOZiYfxcYSqbqFlGuVOBRdxjFh
ZcxJBvSCVXwONmSprVzTykEfSsePyY
wyLVoyYrVLynUzsMLFWQxwUKNsVcYz
UOAhslNldnBpITSrxPlpJbLSjJqgxN
xsGVsrYhkWAMufkQnRayieEkSDYrNq
yLejJuggADNxcgVtszjJIYKCxPweNG
hXsOFKGbMkTBPCfDXwjBNgQYswbaNW
BOEtSTHjIhdLAyMnbhyhRKKdfaTTpT
gHqQelVWmnkBHjAZTrGqdtlYAgoXNH
noryxHxVVyaMiGRSjdVlRwMas
3;three

Now you can produce files with very long lines. It would also be possible to dump everything into one file (by using DBMS_LOB.CREATETEMPORARY + DBMS_LOB.APPEND) or to dump all files in parallel…
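For the single-file variant, here is a sketch reusing the directory D from above and connecting as the owner of the table (u/*** is a placeholder); I use DBMS_LOB.WRITEAPPEND rather than DBMS_LOB.APPEND because the loop produces VARCHAR2 buffers, not CLOBs:

sqlplus -s /nolog <<'EOF'
connect u/***
declare
  l clob;
begin
  dbms_lob.createtemporary(l, true);
  for f in (select n||';'||x||chr(10) txt from t)
  loop
    dbms_lob.writeappend(l, length(f.txt), f.txt);
  end loop;
  dbms_lob.clob2file(l, 'D', 'all.txt');
  dbms_lob.freetemporary(l);
end;
/
EOF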

in doubt transaction

Distributed transactions allow you to run multiple DMLs over multiple databases within a single transaction

For instance, one local and one remote

insert into t values(1);
insert into t@db02 values(2);
commit;

If you lose the connection to db02 and want to commit, your database server may not know the state of the remote transaction. The transaction then shows up as pending.

The Oracle documentation mentions the ORA-2PC-CRASH-TEST transaction comment to test this behavior; however, anything like note 126069.1, which starts with grant dba to scott, should be banned.

Apart from granting DBA to scott and using commit comment 'ORA-2PC-CRASH-TEST-7', I can still use my good (bad?) old shutdown abort.


SQL> insert into t values(1);
1 row created.
SQL> insert into t@db02 values(2);
1 row created.
SQL> -- shutdown abort on db02
SQL> commit;
commit
*
ERROR at line 1:
ORA-02054: transaction 2.7.4509 in-doubt
ORA-03150: end-of-file on communication channel for database link
ORA-02063: preceding line from DB02
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;

LOCAL_TRAN_ID STATE
---------------------- ----------------
2.7.4509 prepared

Now you've got an issue. Not only is the state of the transaction unknown, but the in-doubt transaction may prevent further DMLs

SQL> update t set x=x+1;
update t set x=x+1
*
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 2.7.4509

You need to decide whether to commit or rollback the transaction. Let’s say I want to rollback. I need to have FORCE TRANSACTION privilege


SQL> rollback force '2.7.4509';
Rollback complete.
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;

LOCAL_TRAN_ID STATE
---------------------- ----------------
2.7.4509 forced rollback
SQL> update t set x=x+1;
0 rows updated.
SQL> exec DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('2.7.4509')
PL/SQL procedure successfully completed.
SQL> select LOCAL_TRAN_ID, STATE from dba_2pc_pending;
no rows selected

The lock disappears; DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY can also clean up old entries.
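A quick check that is worth having in your monitoring (a sketch):

echo "select local_tran_id, state, fail_time
        from dba_2pc_pending;" |
sqlplus -s / as sysdba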

How to migrate non-cdb to pdb

In case you are desperately looking for an upgrade from non-cdb to pdb with two clicks, here is the answer from Mike: you can't

If your database is huge and you don't want to move anything, nor change the SID, nor change paths, this is my own recipe.

  1. Backup your database
  2. Create a XML description of your CDB

    shutdown immediate
    startup read only
    exec dbms_pdb.describe('/save/DB01/DB01.xml')
    shutdown immediate
  3. Setup init.ora parameter for PDB

    startup force nomount restrict
    alter system set enable_pluggable_database=true scope=spfile;
    alter system set db_name=CDBDB01 scope=spfile;
    alter system set instance_name=DB01 scope=spfile;
    startup force nomount
  4. Create a CDB database
    Yes, you have read it correctly. You are required to create a database. There is - as far as I've googled - no way to avoid this step. You can use dbca or sqlplus.
    Don’t overwrite the datafile. Use for instance /data/DB01/cdb/system.dbf instead of /data/DB01/system.dbf

    create database CDBDB01
    user sys identified by ***
    user system identified by ***
    character set al32utf8
    national character set al16utf16
    logfile
    ('/onlinelog/DB01/redo01a.dbf','/onlinelog/DB01/redo01b.dbf') size 32M REUSE,
    ('/onlinelog/DB01/redo02a.dbf','/onlinelog/DB01/redo02b.dbf') size 32M REUSE
    extent management local
    datafile '/data/DB01/cdb/system01.dbf' size 10M autoextend on
    sysaux datafile '/data/DB01/cdb/sysaux01.dbf' size 10M autoextend on
    default tablespace admin datafile '/data/DB01/cdb/admin01.dbf' size 10m autoextend on
    extent management local autoallocate
    default temporary tablespace temp
    tempfile '/data/DB01/cdb/temp01.dbf' size 10M autoextend on
    undo tablespace undotbs1 datafile '/data/DB01/cdb/undotbs1_01.dbf' size 10M autoextend on
    set time_zone='Europe/Zurich'
    ENABLE PLUGGABLE DATABASE
    SEED FILE_NAME_CONVERT = ('/data/DB01/cdb/', '/data/DB01/seed/')
    LOCAL UNDO ON
    USER_DATA TABLESPACE users datafile '/data/DB01/cdb/users01.dbf' size 10m autoextend on;
    perl catcon.pl -u sys/*** -d oh/rdbms/admin -b catalog_DB01 -e -l log catalog.sql
    perl catcon.pl -u sys/*** -d oh/rdbms/admin -b catproc_DB01 -e -l log catproc.sql

    It's a pretty cumbersome step. Maybe you need Oracle Text or maybe you have a DEC character set… all those details must be engineered by the DBA. It is pretty insane that Oracle doesn't offer a tool to migrate to a non-deprecated architecture.
  5. Plug-in the database

    CREATE PLUGGABLE DATABASE DB01 USING '/save/DB01/DB01.xml' nocopy tempfile reuse;

    If you miss this step, Ô rage, ô désespoir, you probably must restart from the beginning.
  6. run noncdb_to_pdb

    alter session set container=DB01;
    alter pluggable database DB01 open;
    alter session set container=DB01;
    @?/rdbms/admin/noncdb_to_pdb.sql

Once you are this far, you'll have all the advantages of Multi-Tenant: PDB clone, PDB flashback, lockdown profiles. And you'll be able to consolidate if you buy the Multitenant option. You'll save money on hardware (SGA/CPU) to spend on software (the Multitenant option).

Most importantly, you’ll have moved to a non-deprecated architecture

Conclusion: there is more than one way to move to pluggable. But there is no way to migrate to pluggable without creating a new database
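Whatever way you choose, after the plug-in and noncdb_to_pdb.sql it is worth checking PDB_PLUG_IN_VIOLATIONS; a sketch:

echo "select name, type, status, message
        from pdb_plug_in_violations
       where status <> 'RESOLVED';" |
sqlplus -s / as sysdba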

powershell code signing

Unix users don’t use this. Maybe some java developers do. But no Unix sysadmins. Never.

On Windows, things are getting more secure every release, especially if you pay attention to those details.

In Unix, if I have a script called “getdate” which shows me the date, I can copy it to another machine.


$ cat ./getdate
date
$ ./getdate
Mon Aug 20 13:05:40 CEST 2018

Works locally.


$ scp getdate srv02:
$ ssh srv02 ./getdate
Mon Aug 20 13:06:18 CEST 2018

Works on other servers.

This is a huge risk because anybody could modify any code and you'll never know.

Back to powershell.

On powershell, you can define policies.

Or disable policy because you do not want to sign your code.

> Set-ExecutionPolicy remotesigned

and if you are not admin

> Set-ExecutionPolicy -scope currentuser unrestricted

Until one day you find :

> Get-ExecutionPolicy -Scope MachinePolicy
AllSigned

what is the answer to life the universe and everything ?

Code signing. You go to your security admin, send him a certification request for code signing, import it in mmc, then sign your code. Your secadmin can show you how to do the request with mmc. Or google it. It is not specific to powershell at all. It could be done with openssl as well, I suppose.


> gc getdate.ps1
get-date
> .\getdate.ps1
.\getdate.ps1 : File C:\temp\getdate.ps1 cannot be loaded. The file C:\temp\getdate.ps1 is not digitally signed. You cannot run this script on the current system. For more information about running scripts and setting execution policy, see about_Execution_Policies at https:/go.microsoft.com
At line:1 char:1
+ .\getdate.ps1
+ ~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
> Set-AuthenticodeSignature getdate.ps1 (dir Cert:\CurrentUser\My\A232D77888B55318)[0]
> gc getdate.ps1
get-date
# SIG # Begin signature block
# MIITSQYJKoZIhvcNAQcCoIITOjC
# 9q4xO/0AczlLX5Zjjn3ByPNrAkkv
# 1GTsSZ9LkPUItDIpJZMk8nTzY4nI
# DUi0+XirQLiHiSB1hlhN/lVyMlyb
# vOdiHnCv9GMTMGsZbSjh/Q4lDIrX
# HIpaQH6BcIy8NAnnHw212dhqrJr7
# TqCHE8CYsvBFBs+9ZfD4zhUys1d
# SIG # End signature block
> .\getdate.ps1
Monday, August 20, 2018 1:22:00 PM
> Get-AuthenticodeSignature getdate.ps1
Directory: D:\temp
SignerCertificate Status Path
----------------- ------ -----------
A232D77888B55318B Valid getdate.ps1

If now I copy it to another server

I may get an error or a warning (depending on the policy)

> ./getdate.ps1

Do you want to run software from this untrusted publisher?
File C:\temp\getdate.ps1 is published by CN=srv01.example.com, OU=Example and is not trusted on your system.
Only run scripts from trusted publishers.
[V] Never run [D] Do not run [R] Run once [A] Always run [?] Help (default is "D"): R

Montag, 20. August 2018 13:29:43
>

If the code changes, you get an UnauthorizedAccess error

> gc getdate.ps1
get-date -format U
# SIG # Begin signature block
# MIITSQYJKoZIhvcNAQcCoIITOjC
# 9q4xO/0AczlLX5Zjjn3ByPNrAkkv
# 1GTsSZ9LkPUItDIpJZMk8nTzY4nI
# DUi0+XirQLiHiSB1hlhN/lVyMlyb
# vOdiHnCv9GMTMGsZbSjh/Q4lDIrX
# HIpaQH6BcIy8NAnnHw212dhqrJr7
# TqCHE8CYsvBFBs+9ZfD4zhUys1d
# SIG # End signature block
> ./getdate.ps1
./getdate.ps1 : File C:\temp\getdate.ps1 cannot be loaded. The contents of file C:\temp\getdate.ps1 might have been
changed by an unauthorized user or process, because the hash of the file does not match the hash stored in the digital
signature. The script cannot run on the specified system. For more information, run Get-Help about_Signing..
At line:1 char:1
+ ./getdate.ps1
+ ~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess
>

If you change the code, you need to re-sign it

> Set-AuthenticodeSignature getdate.ps1 (dir Cert:\CurrentUser\My\A232D77888B55318BE97E2AD7758EA0F0EA6C75B)[0]
> .\getdate.ps1
2018-08-20 13:35:00Z

Single Tenant duplicates

I recently reported an issue regarding single tenant.

In old-time non-cdb, the SID used to be unique on a server. If you connect to srv01:port:sid, then you know where you connect.

Unfortunately, this is no longer true. If for instance you have two database sids S01 and S02 with a pluggable P01, and both run on the same server, chances are you'll get an invalid username/password or connect to the wrong database.


$ lsnrctl services
LSNRCTL for IBM/AIX RISC System/6000: Version 12.2.0.1.0 - Production on 13-JUL-2018 14:20:23

Copyright (c) 1991, 2017, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv01.example.com)(PORT=1521)))
Services Summary...
Service "P01.example.com" has 2 instance(s).
Instance "S01", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Instance "S02", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
$ echo 'select instance_name from v$instance;'|sqlplus -s -L system/***@srv01.example.com:1521/p01.example.com
INSTANCE_NAME
----------------
S01
$ echo 'select instance_name from v$instance;'|sqlplus -s -L system/***@srv01.example.com:1521/p01.example.com
INSTANCE_NAME
----------------
S01
$ echo 'select instance_name from v$instance;'|sqlplus -s -L system/***@srv01.example.com:1521/p01.example.com
INSTANCE_NAME
----------------
S02

As demonstrated, it’s pretty random.

When copying database sid S01 to S02, it is no longer sufficient to rename the database (with NID or RMAN duplicate)

You can of course use DUPLICATE PLUGGABLE, but this means quite a bit of change to the procedures you have in place, if you switched to single-tenant only because non-cdb is deprecated.

Otherwise, if you use operating system commands + nid + rename datafile to copy your databases, you also need to rename the global name.

This is as simple as

SQL> alter pluggable database P01 open restricted force;
Pluggable database altered.
SQL> alter session set container=P01;
Session altered.
SQL> alter pluggable database P01 rename global_name to P02;
Pluggable database altered.
SQL> alter pluggable database P02 open force;
Pluggable database altered.

DISCLAIMER: this applies mostly to environments with dba scripts that were designed for non-cdbs. For more modern environments, use PLUG/UNPLUG/DUPLICATE PLUGGABLE…

disallow pseudo terminal in ssh

Some Oracle documentation wants you to set up ssh with no password and no passphrase.

Configuring ssh

This is not really something your security admin will like.

ssh-keygen -t dsa
First, using DSA, which is deprecated and disabled by default in OpenSSH 7.0, is a pretty dumb instruction.
OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use.
http://www.openssh.com/legacy.html
The two recommended key types are rsa and ecdsa. You should not use dsa

Second, an ssh key without a passphrase is a huge security hole. If someone gets access to your key, for instance on a disk, a tape backup, etc., she'll get access as oracle to all your database nodes. Best practice is to use a passphrase. Depending on your setup, it is sufficient to have ssh keys at installation/upgrade time only.

Third, providing interactive ssh login as oracle is against best practice for traceability. You'd better use SUDO or another elevation mechanism.

Let’s try:

First, use a recommended algorithm and key length.
ssh-keygen -t rsa -b 4096
or
ssh-keygen -t ecdsa -b 521

Then, use a passphrase

Enter passphrase (empty for no passphrase): ***
Enter same passphrase again: ***

Then, when creating your authorized key, disable unwanted features, like the pseudo-terminal
~/.ssh/id_ecdsa

-----BEGIN EC PRIVATE KEY-----
AAAABBBBCCCC
-----END EC PRIVATE KEY-----

~/.ssh/id_ecdsa.pub

ecdsa-sha2-nistp521 AAAABBBB/cccc== oracle@srv001

~/.ssh/authorized_keys

no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-x11-forwarding ecdsa-sha2-nistp521 AAAABBBB/cccc== oracle@srv001

Also, you could deactivate some features on the client config
~/.ssh/config

ForwardX11=no
BatchMode=yes
ForwardAgent=no

This could also be done in the server's sshd_config, but if you are not the sysadmin, don't mess with it.

Because you have a passphrase, you need to use an agent before starting your installation. Because pseudo-terminal (no-pty) is disabled, you cannot get a prompt. Because x11 is disabled (no-x11-forwarding), you cannot start an xterm

$ ssh srv002
Permission denied
$ eval $(ssh-agent)
Agent pid 12345
$ ps -fp 12345
UID PID PPID CMD
oracle 123451 0 ssh-agent
$ ssh-add ~/.ssh/id_ecdsa
Enter passphrase for ~/.ssh/id_ecdsa:
Identity added: ~/.ssh/id_ecdsa (~/.ssh/id_ecdsa)
$ ssh -t srv002
PTY allocation request failed on channel 0
$ ssh -Y srv002 aixterm
X11 forwarding request failed on channel 0
1363-008 X server named was not found.
$ ssh srv002 date
Fri Jul 13 12:50:22 CEST 2018

Those are basic steps to make your ssh less insecure.

dynamic linesize in 18.1

Whenever you select and describe in sqlplus it looks ugly

default: pagesize 14 linesize 80

change the default: it is often too large or too narrow

Let’s try WINDOW in sqlplus 18.1, which is available for download on Solaris / Linux / Windows

SQL> set lin window
SQL> sho lin
linesize 95 WINDOW
SQL> sho pages
pagesize 86

And look at the result:

Almost perfect 🙂

I think it would be nicer if DESC didn't have this empty line

The cool thing with SET LINESIZE WINDOW is that it is dynamic (as I tested with CMD.EXE/Windows and XTERM/Linux). If your window is too narrow, you make it bigger, re-run your select, and it looks nicer

but… once LINESIZE is WINDOW, PAGESIZE can no longer be set to a fixed, non-window value

SQL> set lin window
SQL> set pages 50000
SQL> sho pages
pagesize 21
SQL>

Add-OdbcDsn

I wrote a few odbc articles using ODBCCONF on my blog, so I am editing them because ODBCCONF will be removed; read https://docs.microsoft.com/en-us/sql/odbc/odbcconf-exe

Using Powershell Add-OdbcDsn is much easier

PS> remove-OdbcDsn -name DB01 -dsntype User
PS> Add-OdbcDsn -name DB01 -DriverName "Oracle in client12201" `
      -DsnType "User" -SetPropertyValue @("Server=DB01")
PS> Get-OdbcDsn

Name : DB01
DsnType : User
Platform : 64-bit
DriverName : Oracle in client12201
Attribute : {Password, StatementCache, ...}

PS> remove-OdbcDsn -name DB01 -dsntype User

How to get dbms_output to print line before the end of the procedure?


begin
dbms_output.put_line('Kilroy1');
dbms_lock.sleep(1);
dbms_output.put_line('Kilroy2');
dbms_lock.sleep(1);
dbms_output.put_line('Kilroy3');
dbms_lock.sleep(1);
dbms_output.put_line('Kilroy4');
dbms_lock.sleep(1);
dbms_output.put_line('Kilroy5');
dbms_lock.sleep(1);
end;
/

You wait five seconds, then get the output.

This is the way it works with dbms_output.

Now I try a new trick

The output does not wait for the end. There was a trick I wrote in 2007 using utl_file, but it was a bit of a cheat, because it only worked with local connections.

There are a lot of ways to simulate this, with DBMS_APPLICATION_INFO, UTL_FILE, TABLEs, DBMS_PIPE, but for today I chose Advanced Queuing.
My program has 4 parts. Here it is in shell, but it could as well be perl / java or something else.

#!/bin/ksh
# 1. create the objects
./init
# 2. create a dequeuing process in the background
./monitor &
# 3. create the connection
./connect
sleep 1
# 4. cleanup the objects
./cleanup

init.sql creates a message type, a queue table and a queue. Then it creates a procedure that enqueues messages.

create type o as object(t timestamp, txt varchar2(40))
/
exec dbms_aqadm.create_queue_table('QT','O')
exec dbms_aqadm.create_queue('Q','QT')
create procedure enqueue(txt varchar2) is
pragma autonomous_transaction;
msgid raw(16);
enqueue_options dbms_aq.enqueue_options_t;
message_properties dbms_aq.message_properties_t;
begin
dbms_aq.enqueue('Q', enqueue_options, message_properties,
o(current_timestamp, txt), msgid);
commit;
end;
/
exec dbms_aqadm.start_queue('Q')

In my "monitor" script, I loop over monitor.sql with a simple while : ; do sqlplus …; done, until it fails (see the sketch after the SQL below).

declare
msg o;
msgid raw(16);
dequeue_options dbms_aq.dequeue_options_t;
message_properties dbms_aq.message_properties_t;
begin
dbms_aq.dequeue('Q', dequeue_options, message_properties, msg, msgid);
dbms_output.put_line(msg.t||': '||msg.txt);
end;
/
quit
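The shell wrapper around it is nothing fancy; a sketch of a slight variation that stops as soon as sqlplus returns a failure status (it assumes monitor.sql contains WHENEVER SQLERROR EXIT FAILURE or similar, so that the loop ends once ./cleanup has dropped the queue, and scott/tiger stands for the real credentials):

#!/bin/ksh
# monitor: keep dequeuing and printing messages until sqlplus fails
while sqlplus -s -L scott/tiger @monitor.sql
do
  :
done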

My code now looks like

begin
enqueue('Kilroy1');
dbms_lock.sleep(1);
enqueue('Kilroy2');
dbms_lock.sleep(1);
enqueue('Kilroy3');
dbms_lock.sleep(1);
enqueue('Kilroy4');
dbms_lock.sleep(1);
enqueue('Kilroy5');
dbms_lock.sleep(1);
end;
/

But thanks to my second process, the monitor, I now have feedback before the end.

It is my first video on this blog 🙂

Dynamic number of columns revisited


The ingenious solution of Anton Scheffer using Data Cartridge is now beaten in 18c by polymorphic table functions

Anton's anthological post: forums.oracle.com
Chris's magic with Oracle 18c:
livesql.oracle.com

better than CTAS


SQL> create table t1(x number primary key);
Table created.
SQL> desc t1
Name Null? Type
----------------------- -------- ----------------
X NOT NULL NUMBER


SQL> create table t2 as select * from t1;
Table created.
SQL> desc t2
Name Null? Type
----------------------- -------- ----------------
X NUMBER

The table T2 has the column X, but not the constraints (primary key / not null).

If you want to do a create-table-as-select but keep indexes, constraints, etc., then you'd rather use Data Pump.

SQL> set autop on
SQL> var job_state varchar2(30)
SQL> declare
n number;
begin
n := DBMS_DATAPUMP.open('IMPORT','TABLE','DB01');
DBMS_DATAPUMP.metadata_filter(n,'NAME_LIST','''T1''');
DBMS_DATAPUMP.metadata_remap(n,'REMAP_TABLE','T1','T3');
DBMS_DATAPUMP.start_job(n);
DBMS_DATAPUMP.WAIT_FOR_JOB(n, :job_state);
end;
/
PL/SQL procedure successfully completed.
JOB_STATE
--------------------------------------------------
COMPLETED
SQL> desc t3
Name Null? Type
----------------------- -------- ----------------
X NOT NULL NUMBER

The Table T3 is a copy of T1. DB01 is my implicit loopback database link (database name).
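The same copy can also be done from the command line with impdp over a network link; a sketch, assuming you run it as the owner of T1 and that the DB01 loopback link resolves for that user:

impdp u/*** network_link=DB01 tables=T1 remap_table=T1:T3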

FBI = function based index

Let me today tell you, I now hate FBI for real.

Let’s start with an easy working example


SQL> CREATE TABLE t(x NUMBER PRIMARY KEY)
Table created.
SQL> insert into t(x) values (1)
1 row created.
SQL> insert into t(x) values (2)
1 row created.
SQL> insert into t(x) values (3)
1 row created.
SQL> commit
Commit complete.
SQL> CREATE FUNCTION f (x NUMBER)
RETURN NUMBER DETERMINISTIC IS
BEGIN
RETURN x * x;
END;
Function created.
SQL> CREATE INDEX i
ON t (f (x))
Index created.
SQL> select * from t where f(x)=4

X
----------
2
Execution Plan
------------------------------------------------
0 SELECT STATEMENT
1 0 TABLE ACCESS BY INDEX ROWID BATCHED T
2 1 INDEX RANGE SCAN I

Okay, this is a nice-working example. I can use where f(x)=4.

A non-fbi code would be something like

SQL> create or replace type tn as table of number;
Type created.
SQL> create or replace function f2(y number)
return tn deterministic is
begin
if (y<0) then return null; end if;
return tn(sqrt(y), -sqrt(y));
end;
Function created.
SQL> select * from t where x member of f2(4)

X
----------
2

Execution Plan
------------------------------------
0 SELECT STATEMENT
1 0 INDEX FULL SCAN SYS_C0026437

The reverse function is somewhat more challenging to code, but the benefit is enormous: I have no more FBI.

What’s wrong with fbi?

First example : I recreate my function:

SQL> DROP FUNCTION f
Function dropped.
SQL> CREATE FUNCTION f (x NUMBER)
RETURN NUMBER DETERMINISTIC IS
BEGIN
RETURN power(x,2);
END;
Function created.
SQL> select * from t where f(x)=4
*
Error at line 0
ORA-30554: function-based index I is disabled
SQL> SELECT object_type, object_name, status
FROM user_objects
WHERE object_name IN ('F','I')

OBJECT_TYPE OBJECT_NAME STATUS
----------------------- ------------ -------
INDEX I VALID
FUNCTION F VALID
SQL> SELECT index_name,
table_name,
index_type,
status,
funcidx_status
FROM user_indexes
WHERE index_name = 'I'

INDEX TABLE INDEX_TYPE STATUS FUNCIDX_STATUS
----- ----- -------------- ------ --------------
I T FUNCTION-BASED VALID DISABLED

Remember this error, ORA-30554, and this not-so-well-known column, USER_INDEXES.FUNCIDX_STATUS. The behavior is pretty aggressive: every object is valid, but you can no longer select from the table.

A small aside. We all know about unusable indexes. Indexes often become unusable due to partition maintenance and the like.


SQL> create table t2(x number)
Table created.
SQL> insert into t2 values (1)
1 row created.
SQL> create index i2 on t2(x) unusable
Index created.
SQL> SELECT index_name,
table_name,
status
FROM user_indexes
WHERE index_name = 'I2'

INDEX TABLE STATUS
----- ----- ---------
I2 T2 UNUSABLE
SQL> insert into t2 values (2)
1 row created.
SQL> select * from t2 where x=2

X
----------
2

Execution Plan
---------------------------------
0 SELECT STATEMENT
1 0 TABLE ACCESS FULL T2

The index is unusable, but it prevents neither INSERT nor SELECT.

Let’s add a constraint

SQL> alter index i2 rebuild
Index altered.
SQL> alter table t2 add primary key (x)
Table altered.
SQL> alter index i2 unusable
Index altered.
SQL> insert into t2 values (2)
ORA-01502: index 'I2' or partition of such index is in unusable state
SQL> select * from t2 where x=2

X
----------
2

If the index is used by a constraint or is unique, then INSERT is prevented. But SELECT is never prevented.

Okay, frequent readers may wonder why I did DROP FUNCTION and CREATE FUNCTION instead of CREATE OR REPLACE FUNCTION.

Fine, let’s try.

SQL> CREATE or replace FUNCTION f (x NUMBER)
RETURN NUMBER
DETERMINISTIC
IS
BEGIN
RETURN power(x,2);
END;
Function created.
SQL> alter index i rebuild
Index altered.
SQL> alter index i enable
Index altered.
SQL> select x, f(x) from t where f(x)=4

X F(X)
---------- ----------
2 4
SQL> create or replace function f(x number)
return number deterministic is
begin
return 1;
end;
Function created.
SQL> select x, f(x), f(2) from t where f(x)=4

X F(X) F(2)
---------- ---------- ----------
2 4 1

Oh my goodness, the select returns completely wrong results, but the index is valid and enabled.

There is more than one way to solve this

  1. rebuild your index after create function. You could find the candidates by looking at the last ddl time and dependencies

  2. SQL> select name from user_dependencies d
    where referenced_type = 'FUNCTION'
    and type = 'INDEX' and
    (
    select last_ddl_time
    from user_objects i
    where i.object_name=d.name
    ) < (
    select last_ddl_time
    from user_objects f
    where f.object_name=d.referenced_name
    )

    NAME
    -----
    I

    SQL> alter index i rebuild
    Index altered.
    SQL> select x, f(x), f(2) from t where f(x)=4
    no rows selected.
    SQL> select x, f(x), f(2) from t where f(x)=1

    X F(X) F(2)
    ---------- ---------- ----------
    1 1 1
    2 1 1
    3 1 1

    SQL> select name from user_dependencies d
    where referenced_type = 'FUNCTION'
    and type = 'INDEX' and
    (
    select last_ddl_time
    from user_objects i
    where i.object_name=d.name
    ) < (
    select last_ddl_time
    from user_objects f
    where f.object_name=d.referenced_name
    )

    no rows selected.

  3. file an SR and encourage Oracle to test features before making them available
  4. stop using FBI immediately

Administrative privileges like SYSDBA

The most well-known administrative privilege is SYSDBA. Back in Oracle 7.2, Oracle recommended to no longer use INTERNAL but to connect as SYSDBA. A second one, seldom used, is SYSOPER.

When you use sqlplus / as sysdba, or connect / as sysdba in sqlplus / svrmgrl, then you log in as SYS.

That's the most powerful user in the database.

SYSOPER has fewer privileges; it can shut down the database and the like.

While often referred to as the SYSDBA role or the SYSDBA system privilege, SYSDBA is an administrative privilege. It is not listed in DBA_ROLES, DBA_ROLE_PRIVS, DBA_SYS_PRIVS, etc…

In non-cdb, you’ll look in V$PWFILE_USERS.
In cdb/pdb, look into cdb_local_admin_privs


grant SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM to u;

SYSRAC and SYSASM also exist, but I have not tested them yet.

SQL> select CON, USERNAME, SYSDBA, SYSOPER, SYSASM,
SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS;

CON USERNAME SYSDBA SYSOPER SYSBACKUP SYSDG SYSKM
--- -------- ------ ------- --------- ----- -----
0 SYS TRUE TRUE FALSE FALSE FALSE
3 U TRUE TRUE TRUE TRUE TRUE

SQL> select * from cdb_local_admin_privs;

CON_NAME GRANTEE SYSDBA SYSOPER SYSBACKUP SYSDG SYSKM
-------- ------- ------ ------- --------- ----- -----
PDB01 U TRUE TRUE TRUE TRUE TRUE

What’s really happening when you log as sysdba?


SQL> conn u/***@DB01 as sysdba
Connected.
SQL> sho user
USER is "SYS"
SQL> select privilege from session_privs
where privilege like 'SYS%';
PRIVILEGE
------------------------------
SYSDBA
SYSOPER
SQL> def _PRIVILEGE
DEFINE _PRIVILEGE = "AS SYSDBA" (CHAR)
SQL> select privilege from dba_sys_privs
where privilege like 'SYS%';
no rows selected

Even if you use your U credentials, you are SYS and you have the SYSDBA privilege. It is not a system privilege but an administrative privilege. You also get SYSOPER for free.

What happens when you log as SYSOPER?


SQL> conn u/***@DB01 as sysoper
Connected.
SQL> sho user
USER is "PUBLIC"
SQL> select * from session_privs;
PRIVILEGE
------------------------------
CREATE SESSION
RESTRICTED SESSION
SYSOPER

SQL> create public synonym p for x.y;
create public synonym p for x.y
*
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> create synonym p for x.y;
create synonym p for x.y
*
ERROR at line 1:
ORA-01031: insufficient privileges

You are logged in as the pseudo-user PUBLIC. It doesn't give you the right to create PUBLIC objects, but you could do a few things with your SYSOPER privilege, like shutdown.

For the other administrative privileges, you are no longer mapped to SYS or PUBLIC; they have their own underlying users:


SQL> conn u/***@DB01 as sysbackup;
Connected.
SQL> sho user
USER is "SYSBACKUP"
SQL> select * from session_privs;

PRIVILEGE
------------------------------
ALTER SYSTEM
ALTER SESSION
ALTER TABLESPACE
DROP TABLESPACE
UNLIMITED TABLESPACE
CREATE ANY TABLE
CREATE ANY CLUSTER
AUDIT ANY
ALTER DATABASE
CREATE ANY DIRECTORY
RESUMABLE
SELECT ANY DICTIONARY
SELECT ANY TRANSACTION
SYSBACKUP
SQL> conn u/***@DB01 as sysdg;
Connected.
SQL> sho user
USER is "SYSDG"
SQL> select * from session_privs;

PRIVILEGE
------------------------------
ALTER SYSTEM
ALTER SESSION
ALTER DATABASE
SELECT ANY DICTIONARY
SYSDG
SQL> conn u/***@DB01 as syskm
Connected.
SQL> sho user
USER is "SYSKM"
SQL> select * from session_privs;

PRIVILEGE
------------------------------
SYSKM
ADMINISTER KEY MANAGEMENT

This allows different administrators, who are not in the core DBA team, to execute some administrative operations like Data Guard switchover / restore / shutdown / startup on their databases, without SYS.

Remember, SYS can now be locked in a PDB 🙂 see lock sys

check invalid directories

To get the status of a directory, I wrote my own function, which uses DBMS_LOB.FILEEXISTS.


CREATE FUNCTION
status (DIRECTORY_NAME VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
IF (DBMS_LOB.FILEEXISTS(
BFILENAME (DIRECTORY_NAME, '.')) = 1)
THEN
RETURN 'VALID';
ELSE
RETURN 'INVALID';
END IF;
EXCEPTION
WHEN OTHERS
THEN
RETURN SQLERRM;
END;
/


SELECT
directory_name NAME,
directory_path PATH,
status (directory_name) STATUS
FROM dba_directories;

NAME PATH STATUS
---- ---- ---------
FOO /foo INVALID
TMP /tmp VALID
BAK /u99 VALID

check invalid database link for the DBA

A follow-up of check invalid database link.
If you need to check a db link in another schema, you need to create code that runs as that schema.

Based on the example from yesterday, here is an extended version for the DBA.


CREATE FUNCTION dba_status
(owner VARCHAR2, db_link VARCHAR2)
RETURN VARCHAR2
IS
PRAGMA AUTONOMOUS_TRANSACTION;
status VARCHAR2 (4000);
BEGIN
EXECUTE IMMEDIATE
'create or replace function "'
|| owner
|| '".status(db_link varchar2) return varchar2 is '
|| 'x number;'
|| 'begin execute immediate ''select 1 from dual@"''
||DB_LINK||''"'' into x;'
|| 'return ''OK'';'
|| 'exception when others then return sqlerrm;'
|| 'end;';
EXECUTE IMMEDIATE
'begin :1 := "' || owner
||'".status(''' || db_link || '''); end;'
USING OUT status;
EXECUTE IMMEDIATE 'drop function "' || owner || '".status';
COMMIT;
RETURN status;
END;
/


SELECT
owner, db_link, dba_status (owner, db_link)
FROM dba_db_links;
OWNER DB_LINK DBA_STATUS(OWNER,DB_LINK)
----- ------- --------------------------------
SCOTT L3.EXAM OK
SCOTT L2.EXAM ORA-12154: TNS:could not resolve
SCOTT L1.EXAM ORA-01017: invalid username/pass

check invalid database link

If one database link is invalid, you cannot select through it

SQL> select * from dual@z;
select * from dual@z
*
ERROR at line 1:
ORA-02019: connection description for remote database not found

However, there is no STATUS column in USER_DB_LINKS that you could use. For this purpose, write a function.

create function status(db_link varchar2)
return varchar2 is
x number;
begin
execute immediate 'select 1 from dual@"'||
DB_LINK||
'"' into x;
return 'OK';
exception
when others then return sqlerrm;
end;
/


select db_link, status(db_link) from user_db_links;
DB_LINK STATUS(DB_LINK)
------- --------------------------------------------------
L1.EXAM ORA-01017: invalid username/password; logon denied
L2.EXAM ORA-12154: TNS:could not resolve the connect ident
L3.EXAM OK

list targets

As correctly pointed out by dhoogfr, the proper way to list the targets on an agent is to use emctl config agent listtargets

$ emctl config agent listtargets
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
[srv01.example.com, host]
[srv01.example.com:3872, oracle_emd]
[agent13c2_2_srv01.example.com_3830, oracle_home]
[DB01.example.com, oracle_database]
[OraDB12Home1_14_srv01.example.com_743, oracle_home]
[DB01_srv01.example.com_CDBROOT, oracle_pdb]
[DB01_srv01.example.com_PDB01, oracle_pdb]
[LISTENER001_srv01.example.com, oracle_listener]

Not really easy to parse, and it does not contain all the information. Let's imagine I want to get the TARGET_NAME from my SID. Hard…

What is emctl actually doing? It parses targets.xml with perl. Oracle wrote a module, called ias::simpleXPath, that helps parse the file.


$AGENT_HOME/perl/bin/perl -l -I$AGENT_HOME/sysman/admin/scripts -Mias::simpleXPath -e '
foreach $t(
simpleXPathQueryForNodes(
"targets.xml","Targets/Target")){
print
"[".($t->{"attributes"}->{"NAME"}).
", ".($t->{"attributes"}->{"TYPE"}).
"]"}'

ias::simpleXPath is a wrapper for XML::Parser. XML::Parser is a supported perl module that is included in the agent home. So no need to install your own perl modules for this purpose!

Back to my example: if I want to get the target name for my SID DB01

$AGENT_HOME/perl/bin/perl -l -MXML::Parser -e '
$xmlfile = "targets.xml";
die "Cannot find file $xmlfile"
unless -f $xmlfile;
$parser = new XML::Parser;
$parser->setHandlers(
Start => \&startElement,
End => \&endElement);
$parser->parsefile($xmlfile);
sub startElement {
( $parseinst, $element, %attrs ) = @_;
if ($element eq "Target") {
$tn=$attrs{"NAME"};
$tt=$attrs{"TYPE"};
}
if ($element eq "Property" &&
$attrs{"NAME"} eq "SID" ) {
$sid=$attrs{"VALUE"};
}
}
sub endElement {
( $parseinst, $element ) = @_;
if ($element eq "Target"){
if (
lc $sid eq lc "DB01"
) {
print $tn . ":" . $tt;
}
$sid="";
}
}
'
DB01.example.com:oracle_database

This could be useful, for instance if you want to start a blackout

emctl start blackout db01_black DB01.example.com:oracle_database

For listener, you could retrieve the LsnrName for your listener LISTENER001

$AGENT_HOME/perl/bin/perl -l -MXML::Parser -e '
$xmlfile = "targets.xml";
die "Cannot find file $xmlfile"
unless -f $xmlfile;
$parser = new XML::Parser;
$parser->setHandlers(
Start => \&startElement,
End => \&endElement);
$parser->parsefile($xmlfile);
sub startElement {
( $parseinst, $element, %attrs ) = @_;
if ($element eq "Target") {
$tn=$attrs{"NAME"};
$tt=$attrs{"TYPE"};
}
if ($element eq "Property" &&
$attrs{"NAME"} eq "LsnrName" ) {
$lsn=$attrs{"VALUE"};
}
}
sub endElement {
( $parseinst, $element ) = @_;
if ($element eq "Target"){
if (
lc $lsn eq lc "LISTENER001"
) {
print $tn . ":" . $tt;
}
$lsn="";
}
}
'
LISTENER001_srv01.example.com:oracle_listener

Which you could also blackout before rebooting.

The parser is not limited to Enterprise Manager targets; you could use it for oraInventory/ContentsXML/inventory.xml or whatever files.

There are plenty of other means to read XML: from the database, with xmllint, with powershell.
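For instance, if xmllint is installed, a rough equivalent of the first listing (target names only) could be:

xmllint --xpath '//Targets/Target/@NAME' targets.xml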

sid and pluggable

I wrote about SID there. (CONNECT_DATA=(SID=DB01)) is undocumented since Oracle 11gR1 and breaks things with multi-tenant / single-tenant.

You create a container CDB01 and a pluggable DB01, you can connect with


sqlplus "scott/tiger@
(description=(address=
(host=srv01)(port=1521)(protocol=tcp))
(connect_data=(service_name=DB01.example.com)))"

But one of your applications has hardcoded SID=DB01


sqlplus "scott/tiger@
(description=(address=
(host=srv01)(port=1521)(protocol=tcp))
(connect_data=(sid=DB01)))"
ORA-12514: TNS:listener does not currently know of
service requested in connect descriptor

How do you do the trick?

  • Remove the domain name in the pluggable

  • alter session set container=DB01;
    alter system set db_domain=''
    container=current
    scope=spfile;

  • You use the listener parameter USE_SID_AS_SERVICE
  • listener.ora

    USE_SID_AS_SERVICE_LISTENER = on

    You restart the listener and the database.

    sqlplus "scott/tiger@
    (description=(address=
    (host=srv01)(port=1521)(protocol=tcp))
    (connect_data=(sid=DB01)))"
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0

    You just connected with SID to a pluggable database

default listener port

A long time ago, Maxine Yuen registered 1521 for the nCube License Manager.

By googling I found: Ellison cleans house at nCube, and since then 1521 has been used as the default port for Oracle. Still, you'll see nCube in the IANA.ORG service names and port numbers registry, and the ncube-lm name in /etc/services. I don't know which one came first, Oracle using 1521 or Larry investing in nCube, but I am pretty sure it's related 😉

$ curl https://www.iana.org/assignments/
  service-names-port-numbers/service-names
  -port-numbers.txt | grep 1521
ncube-lm 1521 tcp nCube License Manager 
  [Maxine_Yuen] [Maxine_Yuen]
ncube-lm 1521 udp nCube License Manager 
  [Maxine_Yuen] [Maxine_Yuen]
$ grep -w 1521 /etc/services
ncube-lm 1521/tcp # nCube License Manager
ncube-lm 1521/udp # nCube License Manager
$ netstat -a | grep ncube
tcp 0 0 *.ncube-lm *.* LISTEN

Later, still a long time ago, Oracle officially registered 2483 and 2484 (tcps) for the listener communication, as documented in Recommended Port Numbers:
This port number may change to the officially registered port number of 2483 for TCP/IP and 2484 for TCP/IP with SSL.

Still, as of Oracle 12c Release 2, port 1521 is recommended.

Now, another question : do you really want to use port 1521?

On the one hand, it could be good for a hacker to know listener runs on 1521 and ssh on port 22. This is configurable of course.

On the other hand, you'd better use what is assigned to Oracle. RFC 6335 defines 1024-49151 as User Ports, and 49152-65535 as the Dynamic and/or Private Ports (aka ephemeral). Remember, if a port is already in use when you start your listener, the listener won't start.

Remember, every network connection keeps a port busy. So if you start a network client from your database server to another server (ssh, sqlnet, mail, dns, whatever), then your port 1028 or 57313 may be busy with a client connection, which will prevent your listener from starting. If you use port 9999, you could look on IANA and ask the owner if he plans anything on that port.

Very often, most ports are unused when you start the listener. If you find an unused port in the private range, 49152-65535, you may name it in /etc/services.

Very often I see database servers with more than one listener. Obviously, you cannot run more than one listener on port 1521. There are some cases where you want different listeners with different sqlnet.ora or different Oracle versions. But this renders consolidation (e.g. Multitenant) more painful.

The discussion on which port to use obviously goes far beyond Oracle. There are gazillions of TCP/UDP servers running in the digital world and fewer than 65535 ports. For sure you cannot have them all on IANA.ORG, right?

In most cases, stick to Oracle recommendation, use port 1521.

lock sys

In the old days, locking sys did not have much effect.

SQL> alter user sys identified by *** account lock;
User altered.
SQL> select account_status
from dba_users
where username='SYS';
ACCOUNT_STATUS
--------------------------------
LOCKED
SQL> conn / as sysdba
Connected.
SQL> conn sys/** as sysdba
Connected.
SQL> conn sys/***@db01 as sysdba
Connected.

Well, in the very old days (Oracle 7), or with the parameter O7_DICTIONARY_ACCESSIBILITY (deprecated in 12cR2), SYS could be locked. But this is out of the scope of this post.

In 12cR2, it is now possible to lock SYS.

SQL> alter user sys
identified by ***
account lock;
User altered.
SQL> select account_status
from dba_users
where username='SYS';
ACCOUNT_STATUS
--------------------------------
LOCKED
SQL> conn / as sysdba
Connected.
SQL> conn sys/** as sysdba
Connected.
SQL> conn sys/***@db01 as sysdba
ERROR:
ORA-28000: the account is locked

I like it 🙂 Oracle recommends you create other users to perform DBA tasks.

SQL> grant dba, sysdba
to user0001
identified by ***;
Grant succeeded.

Still, probably intentionally left so or simply forgotten, Oracle recommends locking all Oracle-supplied accounts except for SYS and SYSTEM (ref: Changing Passwords for Oracle Supplied Accounts)

Also note that you'll get an ORA-40365 if you use an old-style password file


SQL> alter user sys identified by *** account lock;
alter user sys identified by *** account lock
*
ERROR at line 1:
ORA-40365: The SYS user cannot be locked
while the password file is in its current format.

Single-Tenant over bequeath connections

If you follow Oracle's recommendation to use a single-tenant CDB instead of non-CDB, then a lot of things will break.

I won’t go into details, but basically, in this new architecture, you have one core container, called CDB$ROOT, which is the default container where you connect to if you connect locally

sqlplus / as sysdba
SQL> select cdb from v$database;
CDB
---
YES
SQL> select sys_context('USERENV','CON_NAME') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
---------------------------------
CDB$ROOT

SQL> select con_id, NAME from V$CONTAINERS

CON_ID NAME
---------- ----------
1 CDB$ROOT
2 PDB$SEED
3 ST01

Then you'll soon realise that you can no longer do what you used to do

SQL> create user u identified by u;
create user u identified by u
*
ERROR at line 1:
ORA-65096: invalid common user or role name

Some scripts still run in the root container. SHUTDOWN ABORT, ALTER SYSTEM SWITCH LOGFILE. Doing a full backup or clone probably won’t hurt you much. Relatively..

But now let’s imagine I have a very simple and very old script to lock scott

vintage.sh:
echo "alter user scott account lock;"|
sqlplus -s / as sysdba

This won’t work. I need to lock scott in the SingleTenant container ST01.

I could do this in the container

SQL> alter session set container=ST01;
Session altered.
SQL> alter user scott account lock;
User altered.

So far, so good. NOW: how do I make this work without changing the script?

Remember, non-cdb databases, as they were used in Oracle 11, 10, 9, 8, 7 …, are now deprecated. Remember, cdb is recommended. Now face it: it'll break your dba scripts.

As a production dba, I don’t want to rewrite all the existing scripts. Some are ultra-old and used by people who did not write them.

One method for my script would be to change the container in a login script.

echo "alter session set container=ST01;" > /tmp/login.sql
export ORACLE_PATH=/tmp
vintage.sh
Session altered.
User altered.

(ORACLE_PATH in the latest 12.1 and in 12.2, SQL_PATH in older releases)

However, if my script must work with both CDB and non-CDB, I need to set the container only in the CDB case.

In my login.sql, I first tried to implement some PL/SQL logic, but ALTER SESSION SET CONTAINER is not working (or is working only with limitations) within EXECUTE IMMEDIATE.

As well, I don’t want my script to break Oracle 11.

So I decided to do some sqlplus magic with defined variables.


set ver off feed off
-- 1) check if the column v$database.cdb exists
col column_name new_v cdb nopri
def cdb=null
select column_name from dba_tab_columns
where owner='SYS' and
table_name='V_$DATABASE' and column_name='CDB';
-- 2) if cdb is YES, then select a dynamic statement using V$PDB
col pdb new_v pdb nopri
def pdb="null stmt from dual where 1=0"
select
'''set container="''||name||''"'' stmt from v$pdbs where name!=''PDB$SEED'''
pdb
from v$database
where &cdb='YES';
-- 3) get a dynamic alter session statement. I use a dummy flagger for non-cdb
col stmt new_val stmt nopri
def stmt="SET FLAGGER=OFF"
select &pdb;
-- 4) alter session
alter session &stmt;
set feed 6
col column_name clear
col stmt clear
col pdb clear
undef cdb
undef stmt
undef pdb
del

Now I run my script

11g: ./vintage.sh
User altered.

12c-non-cdb: ./vintage.sh
User altered.

12cR2-single-tenant: ./vintage.sh
User altered.

DISCLAIMER: you shouldn’t use a global login.sql and you should know that secretly fixing old scripts may have side effects. Test, test and retest your code
DISCLAIMER 2: my frequent readers surely wonder if this statement generating a statement generating a statement is for an obfuscation contest