powershell odbc sybase

For Oracle, go there ==> …/oracle-odbc-hello-world-with-powershell.html

Testing a Sybase ODBC driver with PowerShell is not much different:

  1. configure the DSN with odbcconf, or %SYSTEMROOT%\SysWOW64\odbcconf for the 32-bit drivers under 64-bit Windows.

  2. cmd /c "%SYSTEMROOT%\SysWOW64\odbcconf.exe /a {configdsn ""Adaptive Server Enterprise"" ""DSN=helloworld|database=mydb|port=25000|server=srv01""}"

    The name of the driver is important. It is probably also called “Adaptive Server Enterprise” on your machine.

  3. test it, the same as for Oracle. If you need to test the 32-bit drivers under 64-bit Windows, use the 32-bit PowerShell %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe

  4. $conn = New-Object Data.Odbc.OdbcConnection
    $conn.ConnectionString = "dsn=helloworld;uid=scott;pwd=tiger"
    $conn.Open()
    (New-Object Data.Odbc.OdbcCommand("select 'Hello World'",$conn)).ExecuteScalar()
    $conn.Close()

Note: odbcconf will be removed in a future release; prefer Add-OdbcDsn.
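With the newer cmdlet, the same DSN could be created like this (a sketch: Add-OdbcDsn exists from Windows 8 / Server 2012 on, and the exact property names depend on your driver):

Add-OdbcDsn -Name "helloworld" -DriverName "Adaptive Server Enterprise" `
    -DsnType "System" -Platform "32-bit" `
    -SetPropertyValue @("server=srv01", "port=25000", "database=mydb")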

generate safe passwords

This is probably ambitious, so I’ll start with a disclaimer: there is no such thing.

But ok, we do know that system/manager isn’t one.

Apart from system/manager, there are hundreds of weak passwords that are commonly used and easy to guess.

On your database server, the account is locked after a few tries. And maybe the attacker is detected by then.

So the worst passwords are the default passwords and passwords like oracle.

To enforce good passwords, we have verify functions, like ora12c_strong_verify_function in 12c, which checks for mixed case, special characters, etc. One may prefer to write one’s own and not disclose exactly what it checks.

That function, in rdbms/admin, states The maximum length of any DB User password is 128 bytes. but it is 30 characters in most cases.

If your limit of failed login attempts is 10, choosing eleven as a password does not make it safe. If the attacker gets the user metadata, you are screwed in no time. In Oracle 4, it’s clear text. In 7 to 10, it’s an unsalted double-DES encryption with a fixed, disclosed key. There, any dictionary attack takes milliseconds, and a 6-character password falls in under a second. It got better in 11, where cracking an 8-character SHA-1 password could take weeks to years, depending on its complexity. In 12c, generating a hash costs lots of CPU cycles, so it is no longer possible to test millions of passwords per second, even with the strongest hardware.

But to get a good password, it is recommended and often required to use digits, letters, special characters, mixed case, and no dictionary words.

I have made a small password generator for my readers using dbms_random.string, which generates a pseudorandom string. It would be better to use the cryptographically secure dbms_crypto.randombytes, but then you must still derive a password that you can type. It should also be possible to use Unicode if you like. And depending on where you are going to use it, it is sometimes safer to avoid signs like * or ‘ because, who knows, your password may produce an error and end up in a logfile.
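For instance, a minimal sketch that turns cryptographically secure random bytes into a typeable string: base64-encode the raw bytes, then translate the shell-unfriendly +/= away (the 9-byte length is an arbitrary choice):

SELECT TRANSLATE (
          UTL_RAW.cast_to_varchar2 (
             UTL_ENCODE.base64_encode (DBMS_CRYPTO.randombytes (9))),
          '+/=', '._-') pw
FROM DUAL;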

Okay, I wrote a small function that generates a 10-character string, verifies it with the 12c strong verifier, and loops until one is good enough.

The chance that a random password is manager is pretty low, but it is probably best to check that you get not only a safe random string, but also a strong one.

-- @?/rdbms/admin/catpvf
CREATE OR REPLACE FUNCTION pw (username     VARCHAR2,
                               old_password VARCHAR2 DEFAULT NULL)
   RETURN VARCHAR2
IS
   p VARCHAR2 (30);
   c BOOLEAN := FALSE;
   i NUMBER := 0;
BEGIN
   WHILE NOT c AND i < 1000
   LOOP
      p := DBMS_RANDOM.string ('P', 10);
      i := i + 1;

      BEGIN
         c := sys.ora12c_strong_verify_function (username, p, old_password);
      EXCEPTION
         WHEN OTHERS THEN NULL;
      END;
   END LOOP;

   RETURN p;
END;
/

SELECT pw ('SCOTT', 'TIGER') FROM DUAL;

#gA~82NxBv

This could well be a good initial expired password for your users. Later, the user will find something easier to remember.

Generate 11g password hash

An easy way to generate the VALUES string from the shell is to use openssl.

Let’s take a random salt of ABCDEFGHIJ. The length of 10 is important.

The hexadecimal representation is -41-42-43-44-45-46-47-48-49-4A-
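You can verify this from the shell (xxd ships with vim; od -An -tx1 would do as well):

$ echo "ABCDEFGHIJ\c" | xxd -p
4142434445464748494a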


$ echo "SafePassw0rDABCDEFGHIJ\c" | openssl dgst -sha1
(stdin)= 47cc4102144d6e479ef3d776ccd9e0d0158842bb

With this hash, I can construct my value

SQL> create user testuser identified by values 'S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A';

User created.

SQL> grant create session to testuser;

Grant succeeded.

SQL> conn testuser/SafePassw0rD
Connected.

If you prefer PL/SQL over shell, use DBMS_CRYPTO


SQL> exec dbms_output.put_line('S:'||dbms_crypto.hash(utl_raw.cast_to_raw('SafePassw0rDABCDEFGHIJ'),dbms_crypto.HASH_SH1)||utl_raw.cast_to_raw('ABCDEFGHIJ'))
S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A

PL/SQL procedure successfully completed.

In 12c there is also a “T” string. According to the doc:
The cryptographic hash function used for generating the 12C verifier is based on a de-optimized algorithm involving PBKDF2 and SHA-512.

delete all data

How do you delete all data? The simplistic approach would be to truncate all tables


SQL> select table_name from user_tables;
TABLE_NAME
----------
T1
SQL> truncate table t1;
Table truncated.

You cannot truncate if you have referential integrity constraints.

SQL> truncate table t2;
ORA-02266: unique/primary keys in table
referenced by enabled foreign keys

Ok, let’s disable the referential integrity constraints


SQL> select table_name, constraint_name
from user_constraints
where constraint_type='R';
TAB CONSTRAINT
--- ----------
T3 SYS_C00107
SQL> alter table t3 disable constraint SYS_C00107;
Table altered.
SQL> truncate table t2;
Table truncated.
SQL> truncate table t3;
Table truncated.

You cannot truncate cluster tables

SQL> truncate table t4;
ORA-03292: Table to be truncated is part of a cluster

Cluster tables can be truncated with TRUNCATE CLUSTER.

SQL> select cluster_name from user_clusters;
CLUSTER_NAME
------------
C
SQL> truncate cluster c;
Cluster truncated.

The code above doesn’t work with partitioned clusters (12.1.0.2) because TRUNCATE CLUSTER was not properly implemented at the time of writing.
Check Bug 20284579 : CAN NOT QUERY DYNAMIC CLUSTER PARTITIONS

For reference partitioning, it is not possible to disable the foreign key

SQL> alter table t6 disable constraint fk;
ORA-14650: operation not supported for
reference-partitioned tables

In 12c, if the foreign key is defined with ON DELETE CASCADE, you can truncate cascade the parent.

SQL> select table_name, REF_PTN_CONSTRAINT_NAME
from user_part_tables
where partitioning_type='REFERENCE';
TAB REF
--- ---
T6 FK
SQL> select r_constraint_name, delete_rule
from user_constraints
where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK CASCADE
SQL> select table_name
from user_constraints
where constraint_name='PK';
TAB
---
T5
SQL> truncate table t5 cascade;
Table truncated.

But if one of the child or child-child tables uses reference partitioning without ON DELETE CASCADE, then the parent or grandparent cannot be truncated. And TRUNCATE CASCADE for reference partitioning is not documented (yet).

But there is a very nice alternative to TRUNCATE called DELETE 🙂


SQL> select table_name, REF_PTN_CONSTRAINT_NAME
from user_part_tables
where partitioning_type='REFERENCE';
TAB REF
--- ---
T8 FK
SQL> select r_constraint_name, delete_rule
from user_constraints
where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK NO ACTION
SQL> select table_name
from user_constraints
where constraint_name='PK';
TAB
---
T7
SQL> truncate table t7 cascade;
ORA-14705: unique or primary keys referenced by enabled foreign keys in table "SCOTT"."T8"
SQL> truncate table t8;
Table truncated.
SQL> delete from t7;
2 rows deleted

To get the tables in the right order, parent tables after children, you can do a hierarchical query and then order by rownum desc, a construct I confess I’m using for the first time. Note the leaf tables are truncatable.


select c_owner owner, child table_name
FROM
(
SELECT
p_OWNER, parent, nvl(c_owner, a.owner) c_owner,
nvl(child, a.table_name ) child
FROM
(
SELECT
PT.OWNER P_owner, pt.table_name parent,
pt2.owner c_owner, pt2.table_name child
FROM all_part_tables pt
JOIN all_constraints c
ON pt.OWNER = c.owner
AND PT.TABLE_NAME = c.table_name
AND c.constraint_type = 'P'
AND c.status = 'ENABLED'
JOIN all_constraints r
ON r.r_owner = c.owner
AND r.r_constraint_name = c.constraint_name
AND r.constraint_type = 'R'
AND r.status = 'ENABLED'
JOIN all_part_tables pt2
ON r.owner = pt2.owner
AND r.constraint_name = pt2.REF_PTN_CONSTRAINT_NAME
AND pt2.partitioning_type = 'REFERENCE'
) t
RIGHT JOIN all_tables a
ON child = table_name and a.owner = T.c_OWNER
)
where connect_by_isleaf=0
CONNECT BY parent = PRIOR child and p_owner=PRIOR c_owner
start with parent is null
order by rownum desc;

OWNER TAB
----- ---
SCOTT T10
SCOTT T9

Note the query above is very slow. If dictionary performance is an issue, we could instead delete from all tables, catch the exceptions, and loop until all tables are empty

SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
ORA-02292: integrity constraint (SCOTT.F11) violated - child record found
SQL> delete from t11;
1 row deleted.
SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
1 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
1 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
0 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.
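
Scripted, this pass-until-empty approach could look like the following minimal sketch (assuming the current schema, and simply skipping ORA-02292 until a full pass deletes nothing):

begin
   loop
      declare
         deleted number := 0;
      begin
         for t in (select table_name from user_tables)
         loop
            begin
               execute immediate 'delete from "' || t.table_name || '"';
               deleted := deleted + sql%rowcount;
            exception
               when others then null; -- typically ORA-02292; retry next pass
            end;
         end loop;

         exit when deleted = 0; -- a full pass deleted nothing: all empty
      end;
   end loop;
end;
/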

If you have close to zero reference-partitioned tables, this approach will be more efficient.

tracefile tim to readable date format

In trace files, the time is recorded; this can be used to measure the time between two timestamps.

But how do you convert 31796862227375 to a human-readable format?

This is how I proceeded :

SQL> oradebug SETMYPID
Statement processed.
SQL> oradebug tracefile_name
/u01/log/oracle/diag/rdbms/db01/DB01/trace/DB01_ora_32440740.trc
SQL> alter session set 
  nls_date_format='YYYY-MM-DD_HH24:MI:SS' 
  sql_trace=true;
Session altered.
SQL> select '&_DATE','Start' from dual;
2015-09-23_13:17:50 Start
SQL> select '&_DATE','End' from dual;
2015-09-23_13:18:38 End
SQL> alter session set sql_trace=false;
Session altered.

I used the sqlplus variable _date and waited about a minute between the selects.

Now let’s look at the trace

PARSING IN CURSOR #4859519800 tim=31796814530524
select '2015-09-23_13:17:50','Start' from dual

PARSING IN CURSOR #4859511384 tim=31796862227375 
select '2015-09-23_13:18:38','End' from dual

Note the tim=

Between 13:17:50 and 13:18:38 I have 48 seconds; the difference between 31796862227375 and 31796814530524 is 47696851, roughly 48 million microseconds.

So the time on this specific version of Oracle is in microseconds.

Now if I subtract 31796862227375 microseconds from 2015-09-23_13:17:50, I get 2014-09-20_12:50:08 on this specific instance.

So, to convert tim= to a human-readable date, I simply add tim microseconds to Sep 20, 2014, 12:50:08.
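
In SQL, that is simple timestamp arithmetic; a quick sketch using the instance-specific epoch derived above:

select timestamp '2014-09-20 12:50:08'
     + numtodsinterval (31796862227375 / 1e6, 'SECOND') readable_tim
from dual;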

job_name cannot be null


exec dbms_scheduler.create_job(job_name=>null,job_type=>'PLSQL_BLOCK',job_action=>'BEGIN NULL; END;')
ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1

This sounds like a proper error message. A bit less obvious is the drop_job message


SQL> exec dbms_scheduler.drop_job(job_name=>null)
ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1

comma-separated list invalid near what?

Ok, why would you create an empty job? Obviously you wouldn’t. But remember job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.


SQL> begin
dbms_scheduler.create_job(job_name=>
' "SCOTT" '||
' . '||
' "JOB10000000000000000000001" ',
job_type=>'PLSQL_BLOCK',
job_action=>'BEGIN NULL; END;');
end;
/

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.

If you call drop_job in the exception clause without catching the exception of the exception, it could lead to this ORA-20001 if the job name is null.

For exception handling, we could improve

BEGIN
CREATE JOB
RUN JOB
DROP JOB
EXCEPTION
WHEN OTHERS THEN
DROP JOB
output message
RAISE
END

into

BEGIN
CREATE JOB
RUN JOB
DROP JOB
EXCEPTION
WHEN OTHERS THEN
LOOP
BEGIN
DROP JOB
EXIT
EXCEPTION
WHEN IS_RUNNING
sleep
WHEN OTHERS
output message
EXIT
END
END LOOP
output message
RAISE
END
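
A runnable sketch of that improved pattern; the job name is hypothetical, mapping ORA-27478 (job is running) to IS_RUNNING is my assumption, and DBMS_LOCK.SLEEP needs an execute grant:

declare
   job_running exception;
   pragma exception_init (job_running, -27478); -- assumed "job is running"
   j constant varchar2 (61) := 'SCOTT.MYJOB';   -- hypothetical job name
begin
   dbms_scheduler.create_job (job_name   => j,
                              job_type   => 'PLSQL_BLOCK',
                              job_action => 'BEGIN NULL; END;');
   dbms_scheduler.run_job (j);
   dbms_scheduler.drop_job (j);
exception
   when others then
      loop
         begin
            dbms_scheduler.drop_job (j);
            exit;
         exception
            when job_running then
               dbms_lock.sleep (5); -- still running: wait, then retry
            when others then
               dbms_output.put_line ('drop failed: ' || sqlerrm);
               exit;
         end;
      end loop;

      dbms_output.put_line (sqlerrm);
      raise;
end;
/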

DELETE is faster than TRUNCATE

Truncate is useful in some serial batch processing, but it breaks read-write consistency, generates strange errors and results for running selects, and it needs DROP ANY TABLE when run on a table that you do not own.

But also, DELETE is faster in the following test case.

In 12c, you could have over one million partitions in a table, but for the sake of the universe, I’ll try with 10000.


SQL> create table scott.t(x)
partition by range(x)
interval(1)
(partition values less than (0))
as
select rownum
from dual
connect by level<10001;

SQL> select count(*) from scott.t;

COUNT(*)
----------
10000

The 10K-row table is created, with each row in its own partition

SQL> delete scott.t;

10000 rows deleted.

Elapsed: 00:00:04.02
SQL> rollback;

Rollback complete.

Not tuned or parallelized or whatever. It took 4 seconds for 10’000 rows. If you have one billion rows, it is doable in a few hours. But you had better do it in chunks then.

Anyway, let’s truncate

SQL> truncate table scott.t;

Table truncated.

Elapsed: 00:05:19.24

Five minutes !!! to truncate that tiny table.

If you have one million partitions and underlying indexes and lobs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, sysaux and undo.

The dictionary changes are very slow here.

permission issue due to one role

Most permissions issues are due to a missing role or privilege.

But in the following test case you need to revoke the right to get more privileges.


create table tt(x number);
create view v as select * from tt;
create role rw;
grant all on v to rw;

I’ve created a read-write role on a view. The owner of the role is the DBA, but the owner of the view is the application. At the next release, the role may prevent an application upgrade:


SQL> create or replace view v as select * from dual;
ORA-01720: grant option does not exist for 'SYS.DUAL'

Ok, if I drop the role, it works

SQL> drop role rw;
Role dropped.
SQL> create or replace view v as select * from dual;
View created.

It is not always a good thing to grant privileges on a view when you are not the owner of that view: as long as the grant exists, replacing the view requires the GRANT OPTION on every underlying object (here SYS.DUAL), since the grantees would indirectly get access to it.

Open last one-note page

If you have a OneNote document, you may want to automatically go to the last page. This is possible with PowerShell.

First you create a ComObject. There are incredibly many ComObjects that can be manipulated in PowerShell.

$o = New-Object -ComObject OneNote.Application

Now it gets a bit confusing. First you open your document

[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")

Now you get the XML

$o.GetHierarchy("", "hsPages", $x)

With the XML, you select the last page. For instance :


$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID

And from the ID, you generate a URL with GetHyperlinkToObject.


[ref]$h = ""
$o.GetHyperlinkToObject($p,"",$h)

Now we can open the URL onenote:///Z:\Reports.one#W31&section-id={12345678-1234-1234-123456789ABC}&page-id={12345678-1234-1234-123456789ABC}&end


start-process $h.value
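
Putting the pieces together (same document path and ComObject as above):

$o = New-Object -ComObject OneNote.Application
[ref]$x = ""
$o.OpenHierarchy("Z:\Reports.one", "", $x, "cftNone")
$o.GetHierarchy("", "hsPages", $x)
$p = (([xml]($x.value)).Notebooks.OpenSections.Section.Page | select -last 1).ID
[ref]$h = ""
$o.GetHyperlinkToObject($p, "", $h)
start-process $h.value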

Don’t call it test

There are quite a few names to avoid in your scripts. Even if they are not reserved words, keep away !

I’ll start with test

cd $HOME/bin
vi test        # put a single line in it: echo hello world
chmod +x test
./test
hello world

The problem is that it may break your other scripts

$ ssh localhost test 1 = 2 && echo WHAT???
hello world
WHAT???

And it may break sooner or later, depending on your OS / version / default shell / default path / others.

There are quite a few filenames you should not use, like test, date, time, hostname, mail, view, touch, sort and make. The command type lists some of those as reserved words, shell builtins, tracked aliases or shell keywords. But again, it is not consistent across Unix flavors.


$ uname -sr; type date
SunOS 5.10
date is /usr/bin/date
$ uname -sr; type date
Linux 2.6
date is a tracked alias for /bin/date

Your sysadmin may also alias things for colors and safety in the common profile, for instance vi, ls, rm. If it annoys you, use \vi instead of vi.

list database monitoring users

I am quite familiar with the SYSMAN tables, but this one required some googling beyond the Oracle documentation.

The list of targets in your Oracle Enterprise Manager is in SYSMAN.MGMT_TARGETS. Each database target is monitored by a database user, typically DBSNMP.

To retrieve this information, you somehow need to hijack your database; read this: Gökhan Atil

  1. you copy your encryption key to your repository database, on the OMS server

  2. $ emctl config emkey -copy_to_repos
    Enter Enterprise Manager Root (SYSMAN) Password :

    Now anyone with SELECT ANY TABLE on your repository will see all passwords. You don’t want to do this, but unfortunately you have to, because even the username is encrypted.

  3. you decrypt the credentials for db monitoring

  4. SELECT *
    FROM (
    SELECT target_name,
    sysman.em_crypto.decrypt (
    c.cred_attr_value,
    c.cred_salt) cred,
    cred_attr_name attr
    FROM SYSMAN.EM_TARGET_CREDS c
    JOIN SYSMAN.mgmt_targets t USING (target_guid)
    JOIN sysman.EM_NC_CRED_COLUMNS c USING (cred_guid)
    WHERE c.target_type = 'oracle_database'
    AND c.set_name = 'DBCredsMonitoring'
    ) PIVOT (
    MAX (cred)
    FOR (attr) IN (
    'DBUserName' AS USERNAME,
    'DBRole' AS "ROLE")
    )


    TARGET_NAME USERNAME ROLE
    ----------- -------- ------
    DB01 dbsnmp NORMAL
    DB02 dbsnmp NORMAL
    DB03 sys SYSDBA

  5. remove the security leak

  6. $ emctl config emkey -remove_from_repos
    Enter Enterprise Manager Root (SYSMAN) Password :

Now the em_crypto won’t work any more

select
sysman.em_crypto.decrypt('0','0')
from dual
*
Error at line 2
ORA-28239: no key provided
ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
ORA-06512: at "SYS.DBMS_CRYPTO", line 44
ORA-06512: at "SYSMAN.EM_CRYPTO", line 250
ORA-06512: at line 1

This information could be used to change the password dynamically across all databases.

emcli login \
-username=sysman \
-password=sysmanpw
emcli update_db_password \
-target_name=DB01 \
-user_name=dbsnmp \
-change_at_target=yes \
-old_password=oldpw \
-new_password=newpw \
-retype_new_password=newpw

Drop table if exists

The syntax that you are looking for is
docs.oracle.com/…/drop-table.html

DROP [TEMPORARY] TABLE [IF EXISTS]
tbl_name [, tbl_name] ...
[RESTRICT | CASCADE]

Wait, this does not work !

SQL> drop table if exists t;
drop table if exists t
*
ERROR at line 1:
ORA-00933: SQL command not properly ended

Okay. It was the Oracle MySQL book 😉

In the Oracle database, I have created my own droptableifexists script.

I went for a pure SQL*Plus, no-PL/SQL approach. PL/SQL is also possible, but it generates different error messages (ORA-06512: at line 1) and different feedback (PL/SQL procedure successfully completed.)

So I check the dictionary and, if the table exists, put a drop command into the sqlplus buffer, then run that command first.

droptableifexists.sql

set feed off ver off pages 0 newp none
def cmd="select 'OK: Table does not exist' from dual"
col cmd new_v cmd nopri
select 'drop table "'||table_name||'"' cmd
from user_tables
where table_name='&1';
1 select
del *
1 &cmd
set feedb 6 head off
/
set head on
del *
undef cmd
col cmd clear

Ok, let’s try

SQL> create table t(x number);

Table created.

SQL> @droptableifexists T

Table dropped.

SQL> @droptableifexists T
OK: Table does not exist

A PL/SQL approach could be

for f in (
select 'drop table "'||table_name||'"' cmd
from user_tables where table_name='T')
loop
execute immediate f.cmd;
end loop;

Try it :

SQL> create table t(x number);

Table created.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

SQL> exec for f in (select 'drop table "'||table_name||'"' cmd from user_tables where table_name='T')loop execute immediate f.cmd;end loop

PL/SQL procedure successfully completed.

A bit easier to read. The same can be done for USER, VIEW and so on.
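
For instance, for a view (a sketch with a hypothetical view name V):

SQL> exec for f in (select 'drop view "'||view_name||'"' cmd from user_views where view_name='V')loop execute immediate f.cmd;end loop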

PS: there are also other versions around that catch ORA-942; but ORA-942 may become ORA-943 in the next release, so try drop table t;; in 11g and 12c to see those things really happen !

Untrusted X11 forwarding

I wrote a while ago about my security concerns regarding

xhost +
xterm -display mypc:0

Way back then, I suggested an ssh tunnel. SSH is pretty easy to set up, by enabling the X11Forwarding option.

In the OpenSSH 3.8 release notes, 2004, there was a new default:

ssh(1) now uses untrusted cookies for X11-Forwarding

In the man ssh_config page, it’s still documented as being the default

ForwardX11Trusted The default is ‘no’

But it actually isn’t on most *ix derivatives, e.g. RedHat /etc/ssh/ssh_config:

# If this option is set to yes then
# remote X11 clients will have full access
# to the original X11 display. As virtually
# no X11 client supports the untrusted
# mode correctly we set this to yes.
ForwardX11Trusted yes

Who is we?

Okay, let’s go back.

If you use the unsafest method, xhost + and xterm -display pc:0, then you grant everybody the right to manipulate X.

If you use trusted ssh, which is the _undocumented_ default in Linux, then you grant this right only to anyone with access to your authority, most probably located in the file $HOME/.Xauthority. So root and yourself, at least.

If you trust neither yourself nor root, you could restrict access to your resource, preventing one hacker from switching your mouse buttons or doing a screenshot. But this is probably going to prevent most of your applications from working. Also, it probably won’t work at all if you use putty, reflection and (virtually any?) other client tools.

If you want to force Trusted mode, use -Y or -o ForwardX11Trusted=yes.

If you want to force Untrusted mode, use -X and -o ForwardX11Trusted=no.

If you use only -X, it may transparently default to the more convenient but less secure -Y. Sometimes. At least on Linux OpenSSH. But if you use different Unix / SSH flavours, -X may end with an error message like connection to “localhost:10.0” refused by server. In that case, simply use -Y. Actually, always use -Y if you want Trusted.

Irrecoverable part III : a fix

After part I (the problem) and part II (two reports), here is part III.

We have backed up archivelogs during a full backup and deleted them before the full completed.

RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Size Type Elapsed Completion
-- ------ ---- -------- ----------
15 4.00K DISK 00:00:00 13:31:08
BP Key: 15 Status: AVAILABLE
Piece Name: /FULL/0fq7gc0s_1_1

List of Archived Logs in backup set 15
Seq LowSCN LowTime NextSCN NextTime
--- ------- -------- ------- --------
15 355533 13:29:55 355777 13:31:08

BS Size Type Elapsed Completion
-- ------ ---- -------- ----------
25 4.00K DISK 00:00:00 13:31:26
BP Key: 25 Status: AVAILABLE
Piece Name: /ARCH/0pq7gc1e_1_1

List of Archived Logs in backup set 25
Seq LowSCN LowTime NextSCN NextTime
--- ------- -------- ------- --------
15 355533 13:29:55 355777 13:31:08

BS Size Type Elapsed Completion
-- ------ ---- -------- ----------
26 3.00K DISK 00:00:00 13:31:26
BP Key: 26 Status: AVAILABLE
Piece Name: /ARCH/0qq7gc1e_1_1

List of Archived Logs in backup set 26
Seq LowSCN LowTime NextSCN NextTime
--- ------- -------- ------- --------
16 355777 13:31:08 355827 13:31:22

BS Size Type Elapsed Completion
-- ------ ---- -------- ----------
28 2.50K DISK 00:00:00 13:31:28
BP Key: 28 Status: AVAILABLE
Piece Name: /FULL/0sq7gc1g_1_1

List of Archived Logs in backup set 28
Seq LowSCN LowTime NextSCN NextTime
--- ------- -------- ------- --------
17 355827 13:31:22 355863 13:31:28

Sequence 16 is missing in /FULL/ and is available in /ARCH/

Ok, let’s copy the missing pieces from ARCH to FULL

$ cd /
$ cp -p ARCH/* FULL/

Now let’s try. I saved my directory to an offline location (tape/usb/cdrom) and dropped my database, including backups.


cd /
tar cvf /tmp/FULL.tar FULL

Let’s drop (it’s just a test)

RMAN> startup force mount
RMAN> drop database including backups;
database dropped

Ok let’s get this USB stick back from the moon to my datacenter and see if I can restore …

cd /
tar xvf /tmp/FULL.tar

Let’s do this !

RMAN> startup force nomount
RMAN> restore controlfile from '/FULL/c-2414975447-20150521-01';
RMAN> alter database mount;

Now I want RMAN to look for other files in FULL and also mark expired stuff. The START WITH does the magic.

RMAN> catalog start with '/FULL/' noprompt;
RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> crosscheck archivelog all;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

As @dbastorage said on twitter:
“if it was easy, we would not need DBAs! ”

Irrecoverable full backup part II : reporting

After my post Can you restore from a full online backup ?, I needed to come up with a report.

Assuming that each backup goes in a different directory, I just wrote two reports.

  1. Report gaps in v$backup_redolog (or rc_backup_redolog if you use the catalog)

    DIR FIRST_CHANGE# NEXT_CHANGE#
    ------- ------------- ------------
    /bck01/ 284891 285140
    /bck01/ 285140 285178

    /bck02/ 284891 285140
    === GAP ===
    /bck02/ 285178 285245

    /bck03/ 285178 285245
    /bck03/ 285245 286931
    /bck03/ 286931 287803
    /bck03/ 287803 288148

    This can be done with analytics, by checking, within each directory, where the previous next_change# does not match the current first_change#:

    SELECT dir,
    LAG missing_from_change#,
    first_change# missing_to_change#
    FROM (
    SELECT REGEXP_REPLACE (handle, '[^/\]+$') dir,
    first_change#,
    next_change#,
    LAG(next_change#) OVER (
    PARTITION BY REGEXP_REPLACE (handle, '[^/\]+$')
    ORDER BY first_change#
    ) LAG
    FROM v$backup_piece p
    JOIN v$backup_redolog l
    USING (set_stamp, set_count))
    WHERE LAG != first_change#;

    DIR MISSING_FROM_CHANGE# MISSING_TO_CHANGE#
    ------- -------------------- ------------------
    /bck02/ 285140 285178

  2. Reports directories where archivelogs don’t include changes (backup redolog) from the earliest to the latest checkpoint (backup datafile)

    SELECT
    REGEXP_REPLACE (handle, '[^/\]+$') dir,
    MIN (checkpoint_change#),
    MAX (checkpoint_change#),
    MIN (first_change#),
    MAX (next_change#)
    FROM v$backup_piece p
    LEFT JOIN v$backup_datafile f
    USING (set_stamp, set_count)
    LEFT JOIN v$backup_redolog l
    USING (set_stamp, set_count)
    WHERE handle IS NOT NULL
    HAVING
    MIN (checkpoint_change#) < MIN (first_change#) OR MAX (checkpoint_change#) > MAX (next_change#)
    GROUP BY REGEXP_REPLACE (handle, '[^/\]+$');

    DIR MINCHECKPT MAXCHECKPT MINFIRST MAXNEXT
    ------- ---------- ---------- ---------- ----------
    /bck04/ 954292 954299 959487 1145473

    the archives for the changes from 954292 to 959487 are missing.

If some archivelog backups are missing in one directory, it does not mean the database is irrecoverable; the archivelog backups could be in another directory. But it means that this single directory alone would no longer permit you to restore or duplicate.

Another approach with RESTORE PREVIEW was provided by Franck in my previous post : List all RMAN backups that are needed to recover.

Usual disclaimer: there are plenty of other irrecoverability causes, from hardware defects to backup “optimization”, that are beyond the scope of this post.

Can you restore from a full online backup ?

The question is not HOW TO DO IT but WHETHER YOU CAN DO IT !

A typical backup script would contain something like BACKUP DATABASE PLUS ARCHIVELOG:

backup database format
'/u99/backup/DB01/20150518/full_0_%d_s%s_p%p'
plus archivelog format
'/u99/backup/DB01/20150518/arc_%d_s%s_p%p';

Starting backup at 2015-05-18_18:27:55
current log archived
input archived log thread=1 sequence=469
...
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s86_p1
Finished backup at 2015-05-18_18:27:58

Starting backup at 2015-05-18_18:27:58
input datafile file number=00002 name=
/u02/oradata/DB01/undots01.dbf
...
including current control file in backup set
including current SPFILE in backup set
piece handle=
/u99/backup/DB01/20150518/full_0_DB01_s88_p1
Finished backup at 2015-05-18_18:28:16

Starting backup at 2015-05-18_18:28:16
current log archived
input archived log thread=1 sequence=20
piece handle=
/u99/backup/DB01/20150518/arc_DB01_s89_p1
Finished backup at 2015-05-18_18:28:17

This sounds pretty safe, but what happens if you keep this full backup forever? Do you have everything you need in it to restore?

It depends. Chances are that you can restore. To annoy the paranoid, here is a counterexample.

  1. you start your full backup at 6pm.
    backing up sequences 21,22,23,24
    backing up datafiles…

  2. at 7pm one cronjob issues backup archivelog all delete input;
    backing up and deleting sequences 21,22,23,24,25,26,27,28,29,30

  3. at 8pm your backup is about to finish
    backing up sequence 31

Well, where is my sequence 27 ?

Let’s try

rm /tmp/arch_DB01*
startup force mount;
crosscheck backup;
restore database;


Starting restore at 2015-05-18_18:47:45
channel ORA_DISK_1: restore complete, elapsed time: 00:02:05
Finished restore at 2015-05-18_18:49:51

Ok, the restore was fine. Now what?

RMAN> recover database;

Starting recover at 2015-05-18_18:50:35
using channel ORA_DISK_1

starting media recovery

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of recover command at 05/18/2015 18:50:36
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 30
RMAN-06025: no backup of archived log for thread 1 with sequence 29
RMAN-06025: no backup of archived log for thread 1 with sequence 27

RMAN> alter database open;

RMAN-00571: =================================================
RMAN-00569: ========== ERROR MESSAGE STACK FOLLOWS ==========
RMAN-00571: =================================================
RMAN-03002: failure of alter db command at 05/18/2015 18:51:29
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u02/oradata/DB01/system01.dbf'

You have lost your database. Even though you did a full online backup plus archivelog, it did not contain all the archivelogs, and RMAN did not tell you.

There is more than one way to reduce this annoyance :

1) don’t backup archivelogs during full backup
2) don’t delete archivelog backups done during a full backup
3) report any archivelog backups that run during a full. Either by looking in the RMAN dictionary or in your house logfiles
4) do offline backup for long term archives
5) don’t trust RMAN

And as well :
Test your backups 😀

🎂 10 years 🎂

Thanks to all my readers for being so faithful 🙂
I’ll post a new solution to calculate factorial.

This is the one I posted 10 years ago :
https://laurentschneider.com/2005/05/recursive-sql.html

I also used it in the obfuscation contest

with function f (x number) return number
is
begin
return case x when 1 then x else x*f(x-1) end;
end;
select value(t), f(value(t))
from table(sys.odcinumberlist(4,6))t

VALUE(T) F(VALUE(T))
---------- -----------
4 24
6 720

It is neither quicker nor shorter than the one I posted in 2005, but it could not have worked in 2005: WITH FUNCTION requires 12c.

run sudo, ssh, passwd, su in simulated interactive mode

Some commands do not like non-interactive mode


$ passwd <<EOF
> oldpassword
> newpassword
> newpassword
> EOF
Changing password for user lsc.
Current password for [email protected]: passwd: Authentication token manipulation error
$ echo oraclepassword | su - oracle
standard in must be a tty
$ echo sudopassword | sudo su - oracle
[sudo] password for lsc:
sudo: sorry, you must have a tty to run sudo

But ok, if you really want to run those in a script, you have plenty of clean (no hack there) ways of doing this.

For instance, let’s use a screen called BAR.

$ xterm -e "screen -S BAR" &
[1] 31732

Now we have an interactive terminal. It could be redirected to a framebuffer device if no X server is started.

Not really a human device, but an interactive terminal.

Now let’s send stuff

$ CR="$(echo '\r')"
$ screen -S BAR -X stuff "sudo su - oracle$CR"
$ screen -S BAR -X stuff "sudopassword$CR"
$ screen -S BAR -X stuff "id > /tmp/xxx$CR"
$ screen -S BAR -X stuff "exit$CR"
$ screen -S BAR -X stuff "exit$CR"
[1] + Done xterm -e "screen -S BAR" &
$ cat /tmp/xxx
uid=100(oracle) gid=100(dba) groups=100(dba)

Usual disclaimer: it is a bad security practice to hardcode your passwords in scripts. Use this only if you really understand security. Read man openssl about how to use openssl to encrypt your password. Ask your security friends before trying

bypass ora-20

When you really need to run one script, at all costs, an annoying error is ORA-00020: maximum number of processes (40) exceeded, which can even occur as sysdba.

Test case (21 is not something to do in real life):

SQL> alter system set processes=21 scope=spfile;
System altered.
SQL> startup force quiet
ORACLE instance started.
Database mounted.
Database opened.
SQL> quit

From now on, sqlplus as sysdba is impossible.
$ sqlplus -s -L / as sysdba
ERROR:
ORA-00020: maximum number of processes (40) exceeded
SP2-0751: Unable to connect to Oracle. Exiting SQL*Plus

Okay, if you really, really need to run that one script, you could connect with sqlplus -prelim and restart the database:

if :|sqlplus / as sysdba|grep ORA-00020
then
echo shu abort|sqlplus -prelim / as sysdba
echo startup quiet|sqlplus / as sysdba
fi

If ORA-20 is detected, then the database will be restarted.

switch user in Oracle

Almost a decade ago I wrote about su in sqlplus. This 10gR2 “new” feature allows delegation à la sudo.

By checking DBA_USERS in 12c, I found PROXY_ONLY_CONNECT. According to Miguel Anjo, there is a secret syntax for allowing only the proxy user.


SQL> ALTER USER app_user PROXY ONLY CONNECT;
SQL> CONNECT app_user/xyz
ERROR:ORA-28058: login is allowed only through a proxy
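
The legitimate path then goes through a proxy user; a sketch, where proxy_user and its password are hypothetical:

SQL> ALTER USER app_user GRANT CONNECT THROUGH proxy_user;
SQL> CONNECT proxy_user[app_user]/proxy_password
Connected.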

comment in external table

Depending on the file, you may use different signs for comments, typically:

# hash
// slash slash
/* slash-star star-slash */
: colon
-- dash dash

The last one is used in SQL and PL/SQL, but:

CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_pump_dir
ACCESS PARAMETERS (
FIELDS TERMINATED BY ';' -- This is a comment
(x))
LOCATION ('x'));

SELECT * FROM t;
SELECT * FROM t
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "minussign": expecting one of:
"column, enclosed, (, ltrim, lrtrim, ldrtrim, missing,
notrim, optionally, rtrim, reject"
KUP-01007: at line 2 column 38

not in external table access parameters.

No comment is allowed there!

Unusable index

After table maintenance, like move or split partition, underlying indexes are marked as unusable.

This boils down to segment reorganisation, not dictionary change.

For instance :

CREATE TABLE t(x NUMBER)
PARTITION BY RANGE(x)
(PARTITION p1 VALUES LESS THAN (MAXVALUE));

CREATE INDEX i ON t (x);


INSERT INTO t VALUES (1);

ALTER TABLE T SPLIT PARTITION p1 AT
(100) INTO (PARTITION p2, PARTITION p3);

SELECT index_name, status
FROM user_indexes
WHERE index_name = 'I';

INDEX_NAME STATUS
---------- --------
I VALID

The partition p1 is split into two, p2 and p3; all rows from p1 belong to p2, p3 is created empty, no data manipulation, and the index remains valid


delete from t;
INSERT INTO t VALUES (250);

ALTER TABLE T SPLIT PARTITION p3 AT
(200) INTO (PARTITION p4, PARTITION p5);

SELECT index_name, status
FROM user_indexes
WHERE index_name = 'I';

INDEX_NAME STATUS
---------- --------
I VALID

Same here: all rows from p3 move to p5, p4 is created empty, no problem


delete from t;
INSERT INTO t VALUES (250);
INSERT INTO t VALUES (350);

ALTER TABLE T SPLIT PARTITION p5 AT
(300) INTO (PARTITION p6, PARTITION p7);

SELECT index_name, status
FROM user_indexes
WHERE index_name = 'I';
INDEX_NAME STATUS
---------- --------
I UNUSABLE

One row goes to p6, one row to p7, the index is invalidated.

To avoid this, use ALTER TABLE T SPLIT PARTITION p5 AT (300) INTO (PARTITION p6, PARTITION p7) UPDATE INDEXES;

What are the consequences?


INSERT INTO t VALUES (320);
1 row inserted

It does not prevent DML in this case.

select * from t where x=320;

X
----------
320

Execution Plan
---------------------------------------------------
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS
1 0 PARTITION RANGE SINGLE
2 1 TABLE ACCESS FULL SCOTT.T

The row is retrieved, but the unusable index is not used.

In some cases, DML may be prevented.

alter index i rebuild;
alter table t modify (x primary key using index);
ALTER TABLE T SPLIT PARTITION p7 AT
(330) INTO (PARTITION p8, PARTITION p9) ;

insert into t values (450);
ORA-01502: index 'SCOTT.I' or partition of such index is in unusable state

The index can no longer be used for constraint enforcement and the INSERT fails.

If the index is partitioned, it is not marked as unusable as a whole; only the affected (sub)partitions are marked.

Check all your indexes status in dba_indexes.status, dba_ind_partitions.status and dba_ind_subpartitions.status.

Rebuild them with alter index i rebuild, alter index i rebuild partition p and alter index i rebuild subpartition sp.
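
A sketch to generate the rebuild statements for unusable index partitions (do the same with dba_indexes and dba_ind_subpartitions):

select 'alter index "'||index_owner||'"."'||index_name
    ||'" rebuild partition '||partition_name||';'
from dba_ind_partitions
where status='UNUSABLE';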

Did you forget to run root.sh?

Not easy to detect, and depending on the product (agent/database), it may have only limited side effects.

Like external jobs not running, or operating system statistics not collected.

But it is not always easy to diagnose.

For instance, if you patch from OMS 12cR2 to 12cR3 and you run the root.sh only in 12cR2, there are very few statistics missing (one is OS_STORAGE_ENTITY).

Running the root.sh doesn’t generate a log file or an entry in the inventory.

To check if it was executed, check what it is supposed to do. It is a bit different in each version. One thing it always does is change the ownership to root and set the setuid bit for a few binaries. For the database, this is done in sub-scripts called rootadd.sh (10g) or rootadd_rdbms.sh (11g/12c).

eval ls -l $(find $ORACLE_HOME -name "rootadd*sh" -exec awk '$1=="$CHOWN"&&$2=="root"{print $3}' {} \;|sort -u)

-rwsr-x--- root dba .../product/11.2.0/db_4/bin/extjob
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/jssu
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmb
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmhs
-rws--x--- root dba .../product/11.2.0/db_4/bin/nmo
-rwsr-x--- root dba .../product/11.2.0/db_4/bin/oradism
-rw-r----- root dba ...11.2.0/db_4/rdbms/admin/externaljob.ora

If the ownership is root, you definitely did run the root.sh.

On the 12c agent, there is a FULL_BINARY_LIST variable that points to the list of root binaries in sbin

eval $(grep FULL_BINARY_LIST= $AGENT_HOME/root.sh)
cd $AGENT_HOME/../../sbin
ls -l $FULL_BINARY_LIST

-rws--x--- root dba nmb
-rws--x--- root dba nmhs
-rws--x--- root dba nmo

If all files exist and belong to root, it looks like you did run the root.sh.

last partition

If you really need to quickly find the latest partition per table, I have written this little gem:


WITH FUNCTION d (b BLOB, len number) RETURN DATE IS
d DATE;
BEGIN
IF DBMS_LOB.SUBSTR (b, 1, 1) = hextoraw('07') and len=83
THEN
DBMS_STATS.convert_raw_value (DBMS_LOB.SUBSTR (b, 12, 2), d);
ELSE
d := NULL;
END IF;
RETURN d;
END;
SELECT
u.name owner,
o.name table_name,
max(d (bhiboundval, hiboundlen)) last_partition
FROM sys.tabpart$ tp
JOIN sys.obj$ o USING (obj#)
JOIN sys.user$ u ON u.user# = o.owner#
WHERE u.name='SCOTT'
group by u.name, o.name
order by last_partition desc;

It doesn’t cover all partitioning types, but it is pretty fast and simple

Do you have a partition in 2015

You need to check the high_value from dba_tab_partitions.
Or you could use the metadata.

With metadata, it is not a LONG; it is either CLOB or CLOB-XML.

SELECT t.table_name,
       MAX (
          TO_DATE (
             REGEXP_SUBSTR (
                EXTRACT ( (VALUE (x)), 'HIBOUNDVAL').getStringVal (),
                ' \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'),
             'YYYY-MM-DD HH24:MI:SS'))
          high_value
  FROM user_tables t,
       TABLE (
          XMLSEQUENCE (
             EXTRACT (
                xmltype (DBMS_METADATA.get_xml ('TABLE', t.table_name)),
                '//ROWSET/ROW/TABLE_T/PART_OBJ/PART_LIST/PART_LIST_ITEM/HIBOUNDVAL'))) x
 WHERE partitioned = 'YES'
HAVING MAX (
          TO_DATE (
             REGEXP_SUBSTR (
                EXTRACT ( (VALUE (x)), 'HIBOUNDVAL').getStringVal (),
                ' \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'),
             'YYYY-MM-DD HH24:MI:SS')) < DATE '2015-01-02'
GROUP BY t.table_name
ORDER BY 1;

TABLE_NAME HIGH_VALUE
---------- ----------
T1         2015-01-01
T11        2000-01-01
T20        1436-03-09
T6         2000-01-01
T7         2014-12-21
T8         2015-01-01

Ok, out of the XML I get all the HIBOUNDVALs and hazardously try some regular expression to find a date pattern.

It already returned most of the tables that could have caused me a New Year's Eve headache. But partitioning is not that simple. Line 3, for instance, is not in a Gregorian format (it's a bug). And it does not cover index partitions, subpartitions, interval partitions, nor partitions with more than one date column as key.

That is a lot. And there is only one HIGH_VALUE for a multi-column key; the high_value is then something like 0,TO_DATE(' 3543-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'),MAXVALUE

Okay, if we forget about non-Gregorian calendars and ignore MAXVALUE, we could use something like:


curid := DBMS_SQL.open_cursor;
txt :=
'select '
|| REPLACE (g.high_value, 'MAXVALUE', 'NULL')
|| ' from dual';
DBMS_SQL.parse (curid, txt, DBMS_SQL.NATIVE);

Then from dba_part_key_columns (pkc) or dba_subpart_key_columns, joined with dba_tab_columns, we can retrieve the datatype (DATE or TIMESTAMP, with or without time zone) and the pkc.column_position:


DBMS_SQL.DEFINE_COLUMN (curid,
f.column_position,
'date',
12);

Casting to date should be fine.

Then we fetch and retrieve the Nth column

IF DBMS_SQL.FETCH_ROWS (curid) > 0
THEN
DBMS_SQL.COLUMN_VALUE (curid, f.column_position, namevar);
END IF;

If you read me that far, you probably can write the rest of the code for yourself.

Could be useful to do this before end-of-year 😉

SSL with PKCS12 truststore

Many many moons ago, I vaguely remember having a similar issue with java keystores / truststores and Microsoft certificate stores.

When you start using SSL for your listener, you could potentially face a large number of issues among your toolsets. In my opinion, the most disastrous one is that you cannot monitor your database with Enterprise Manager and a TCPS listener. I tried with 10g, 11g, 12c and I seriously doubt it will come in 13c, even though a dozen ERs have been filed. The best workaround I found is to use a separate listener to monitor your database and monitor the SSL listener itself with IPC.

Today I had to deal with a driver from Datadirect, which finally works perfectly fine with SSL, but the challenge was to know what to put in the keystore and truststore…

In SQLNET, you use the single-sign-on wallet (cwallet.sso) created by OWM/orapki or ldap.

In Java, per default, you use a java keystore, which you generate with keytool (or even use the default cacerts). There is only a lexical difference between a keystore and a truststore; they could both be the same file. As documented in the JSSE Ref:
A truststore is a keystore that is used when making decisions about what to trust

But for some other tools, the java keytool won’t do the trick, because the truststore cannot be of type JKS.

One common type is PKCS12. This is the ewallet.p12 you get with the Wallet Manager.

E.g. from java :

-Djavax.net.ssl.trustStore=ewallet.p12
-Djavax.net.ssl.trustStoreType=PKCS12
-Djavax.net.ssl.trustStorePassword=***

To use single-sign-on, use trustStoreType=SSO as I wrote there : jdbc-ssl

Other common formats are base64 X509 or DER files. The openssl command line allows easy conversion:

openssl pkcs12 -in ewallet.p12 -out file.pem
openssl x509 -outform der -in file.pem -out file.der

Or, in Windows Explorer, just click on your p12 file and then on the certificate to export it into the certificate store.

anonymous cipher suites for SSL (and a 12c pitfall)

If you configure your listener for encryption only, you do not really need authentication.

It works pretty fine until 11.2.0.2, I wrote multiple posts on ssl.

You add SSL_CLIENT_AUTHENTICATION=FALSE to your server sqlnet.ora and listener.ora and specify an “anon” cipher suite in your client. You do not need to validate the certificate, so a default wallet will do.


orapki wallet create -wallet . -auto_login_only

sqlnet.ora

WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=.)))
ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)
NAMES.DIRECTORY_PATH=(TNSNAMES)

tnsnames.ora

DB01=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=srv01.example.com)(PORT=1521))(CONNECT_DATA=(SID=DB01)))

Or if you use java, the default truststore, usually located in $JAVA_HOME/jre/lib/security/cacerts, will also do.

System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");

On some platforms, however, you may get something like: IBM’s Client TrustManager does not allow anonymous cipher suites.

So far so good, but if you upgrade your listener to 11.2.0.3/4 or 12c, the anonymous suites won’t be accepted unless explicitly set up in sqlnet.ora. This is documented in Note 1434966.1.

You will get something like “ORA-28860: Fatal SSL error“, “TNS-12560: TNS:protocol adapter error” in Oracle or “SSLHandshakeException: Received fatal alert: handshake_failure“, “SQLRecoverableException: I/O-Error: Received fatal alert: handshake_failure” in java.

There are two -obvious- ways to fix this. The preferred approach is to not use anonymous suites (they seem to have disappeared from the supported cipher suites in the doc).

For this, you use another cipher suite. The easiest way is to not specify any, or to use one like TLS_RSA_WITH_AES_128_CBC_SHA (java) / SSL_RSA_WITH_AES_128_CBC_SHA (sqlnet). Even if you do not use client authentication, you will then have to authenticate the server and import the root CA into the wallet or the keystore.
sqlnet.ora

# comment out ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)

java

// comment out : System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");
System.setProperty("javax.net.ssl.trustStore","keystore.jks");
System.setProperty("javax.net.ssl.trustStoreType","JKS");
System.setProperty("javax.net.ssl.trustStorePassword","***");

Or, as documented in metalink, define the suite in sqlnet.ora and listener.ora if you use 11.2.0.3 or 11.2.0.4.

How to *really* send a script to the background

Let’s check this small script

foo.sh

#!/bin/sh
echo foo.1:`date` | tee $HOME/tmp/foo.txt
sleep 3
echo foo.2:`date` | tee -a $HOME/tmp/foo.txt


$ $HOME/tmp/foo.sh
foo.1:Thu Nov 27 17:34:53 CET 2014
foo.2:Thu Nov 27 17:34:56 CET 2014

Very obvious, I write to the console, wait three seconds, then write to the console.

Ok, let’s take another script that would call this script in the background using &

bar.sh

#!/bin/sh
echo bar.1:`date`
$HOME/tmp/foo.sh &
echo bar.2:`date`


$ $HOME/tmp/bar.sh
bar.1:Thu Nov 27 17:36:32 CET 2014
bar.2:Thu Nov 27 17:36:32 CET 2014
$
foo.1:Thu Nov 27 17:36:32 CET 2014
foo.2:Thu Nov 27 17:36:35 CET 2014

bar is printing the date, calling foo in the background, then printing the date, then it returns to you, and foo is still running.

BUT this is only in a relative background …

Let’s try this

$ time $HOME/tmp/bar.sh > /dev/null

real 0m0.01s
user 0m0.00s
sys 0m0.00s

So you believe it takes no time to run bar ?

Let’s try, for instance, over ssh (or cron or whatever)

$ time ssh localhost $HOME/tmp/bar.sh > /dev/null
real 0m3.81s
user 0m0.01s
sys 0m0.01s

running bar suddenly waits 3 seconds for foo to finish.

To be sure the script is sent to the farthest background, you need to close the file descriptors stdin, stdout and stderr.

I rewrote it as

baz.sh

#!/bin/sh
echo bar.1:`date`
$HOME/tmp/foo.sh <&- >&- 2>&- &
echo bar.2:`date`


$ time ssh localhost $HOME/tmp/baz.sh >/dev/null
real 0m0.44s
user 0m0.00s
sys 0m0.00s

Now the script baz is immediately finished and does not wait for foo to complete
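
Closing the descriptors is the portable trick. A commonly seen near-equivalent, assuming you do not need the output, is nohup with explicit redirections:

nohup $HOME/tmp/foo.sh </dev/null >/dev/null 2>&1 &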