select from comma-separated list

This is asked over and over in the forums, so why not propose an 11g solution here 😉

create table t(description varchar2(12) primary key,
numbers varchar2(4000));
insert into t(description, numbers) values ('PRIME','2,3,5,7');
insert into t(description, numbers) values ('ODD','1,3,5,7,9');
commit;

DESCRIPTION NUMBERS
PRIME 2,3,5,7
ODD 1,3,5,7,9

Now I want to unpivot the numbers into rows


select description,(column_value).getnumberval()
from t,xmltable(numbers);

DESCRIPTION (COLUMN_VALUE).GETNUMBERVAL()
PRIME 2
PRIME 3
PRIME 5
PRIME 7
ODD 1
ODD 3
ODD 5
ODD 7
ODD 9

It is that simple 🙂

It also works with strings:

select (column_value).getstringval()
from xmltable('"a","b","c"');

(COLUMN_VALUE).GETSTRINGVAL()
a
b
c

to ftp or to sftp

Ftp is seen as an old-time insecure protocol. Many shops nowadays have switched or are switching to sftp. I will try to point out some differences:

Compatibility: none. The protocol is completely different. Many graphical clients do support both modes, but the basic "ftp" command-line client will not work with sftp.

Ascii mode: only in ftp. sftp is always binary, so there will be no conversion. There is also no blocksize, record length or primary/secondary space for your OS/390 connections.

Interactive mode: similar. You enter your username and password, then do cd, put and get. But to quit, bye will not work in sftp 😉 Use quit or exit instead.

Batch mode: different. Most probably you will end up setting up a private/public key infrastructure for your ssh connection and using scp (secure copy). If you are using an ssh client like putty, it is possible to do something like pscp -l user -pw password server:file .

Security: sftp is secure, ftp is not.

Speed: ftp is fast, sftp is slow 🙁 !

Oh NOOOOOOO!!!!! What’s the point in bringing something new if it is slower!

Ok, let’s try to download a 100MB file:
$ time (echo "open dbsrv01
user oracle secret
bin
get 100m"|ftp -n )

real 0m24.673s
user 0m0.030s
sys 0m0.016s
$ time scp -q oracle@dbsrv01:100m .

real 1m46.978s
user 0m0.108s
sys 0m0.202s

It is about 4x slower! Is there anything we could do about it?

Well, maybe :

$ time scp -q -o Compression=yes oracle@dbsrv01:100m .

real 0m18.634s
user 0m0.748s
sys 0m0.452s

ssh/scp/sftp have a compression mode. If you are transferring your large files across a slow network, this may be an interesting option to consider !

OpenSSH homepage : http://www.openssh.org

to_number(to_char(:n))

Is it safe to do a TO_NUMBER(TO_CHAR(:n)) ?

var n number
exec :n := 9.9999999999999999999999999999999999e125
select to_number(to_char(:n)) from dual;
ORA-01722: invalid number

Why?
SQL Reference
If a positive NUMBER value is extremely large and cannot be represented in the specified format, then the infinity sign (~) replaces the value

select to_char(:n) from dual;
TO_CHAR(:N)
--------------
~

I wrote more about infinity here

committing transaction

When and how do you commit a transaction?
SQL> insert into lsc_t(x) values(1); ... (1) not committed
SQL> commit; ... (1) committed

SQL> insert into lsc_t(x) values(2); ... (2) not committed
SQL> alter table lsc_t disable primary key; ... (2) committed

SQL> set autocommit ON
SQL> insert into lsc_t(x) values(3); ... (3) committed
SQL> set autocommit OFF

SQL> insert into lsc_t(x) values(4); ... (4) not committed
SQL> disc ... (4) committed

Row 1 is committed after the commit keyword.

Row 2 is implicitly committed before the alter table DDL statement runs.

Row 3 is autocommitted. Autocommit exists in sqlplus, but is more often seen in stateless applications (for instance a web application).

Row 4 is committed after a successful disconnection.

But what happens if you kill your current session?


SQL> insert into lsc_t(x) values (5);

1 row created.

SQL> !ps
PID TTY TIME CMD
13903 pts/33 0:00 ksh
22384 pts/33 0:00 sqlplus

SQL> !kill 22384
Terminated

Row 5 is not committed.

Ref: Overview of Transaction Management

A transaction ends when any of the following occurs:
– A user issues a COMMIT or ROLLBACK statement without a SAVEPOINT clause.
– A user runs a DDL statement such as CREATE, DROP, RENAME, or ALTER (*).
If the current transaction contains any DML statements, Oracle Database
first commits the transaction, and then runs and commits the DDL statement
as a new, single statement transaction.
– A user disconnects from Oracle Database. The current transaction is committed.
– A user process terminates abnormally. The current transaction is rolled back.

(*) but not ALTER SESSION

Please read On Implicit Commit

CONNECT BY NOCYCLE

the idea of this post was largely inspired by nuke_y on developpez.net (in French)

I will try to rephrase it in a hierarchical way.

Let’s take all managers of Adams in EMP :
SELECT empno, ename, mgr
FROM emp
CONNECT BY NOCYCLE PRIOR mgr = empno
START WITH ename = 'ADAMS';
EMPNO ENAME MGR
---------- ---------- ----------
7876 ADAMS 7788
7788 SCOTT 7566
7566 JONES 7839
7839 KING

So far so good. Let’s imagine SCOTT has two managers, JONES and PAUL.

create table lsc_t AS
(SELECT ename, empno, mgr
FROM emp
UNION ALL
SELECT 'SCOTT', 7788, 9999
FROM DUAL
UNION ALL
SELECT 'PAUL', 9999, NULL
FROM DUAL);
SELECT empno, ename, mgr
FROM lsc_t
CONNECT BY NOCYCLE PRIOR mgr = empno
START WITH ename = 'ADAMS';
EMPNO ENAME MGR
---------- ---------- ----------
7876 ADAMS 7788
7788 SCOTT 7566
7566 JONES 7839
7839 KING
7788 SCOTT 9999
9999 PAUL

EMP is not supposed to allow this (primary key on empno), so I created a table LSC_T.

So far still fine, Paul is in my list of managers.

Let’s imagine Scott is the manager of Paul. In 9i and below, this would result in an ORA-01436: CONNECT BY loop in user data. But in 10g, according to the doc, the NOCYCLE parameter instructs Oracle Database to return rows from a query even if a CONNECT BY loop exists in the data.

Should I try ???

UPDATE lsc_t
SET mgr = 7788
WHERE ename = 'PAUL';
COMMIT ;
SELECT empno, ename, mgr
FROM lsc_t
CONNECT BY NOCYCLE PRIOR mgr = empno
START WITH ename = 'ADAMS';
EMPNO ENAME MGR
---------- ---------- ----------
7876 ADAMS 7788
7788 SCOTT 7566
7566 JONES 7839
7839 KING
7788 SCOTT 9999

Due to a loop in user data, PAUL is no longer returned. I have no explanation so far.

How to reuse connection in shell

It is Friday, I wanted to give my readers some stuff for the week-end 😉

Imagine this piece of code:

countlines() {
  c=$(sqlplus -s /nolog <<EOF
connect scott/tiger
set feed off head off
select count(*) from $1;
EOF
)
  echo "there are $c lines in $1"
}

countlines EMP
countlines DEPT

I can run this

time ./script1
there are 14 lines in EMP
there are 4 lines in DEPT

real 0m0.46s
user 0m0.06s
sys 0m0.09s

Sounds like a regular shell script. How could we optimize the countlines function? Well, we could create the connection only once and use a coprocess pipe (with |&, that is, pipe-ampersand)

sqlplus -s /nolog |&

print -p "connect scott/tiger"

read -p line
if [ "$line" != Connected. ]
then
exit 1
fi

print -p "set feed off head off"

countlines() {
print -p "select count(*) from $1;"
read -p c
echo "there are $c lines in $1"
}

countlines EMP
countlines DEPT

print -p disconnect

A two-ways pipe is opened with sqlplus. There is only one connect, and one disconnect.

Let's check the performance:

$ time ./script2
there are 14 lines in EMP
there are 4 lines in DEPT

real 0m0.23s
user 0m0.00s
sys 0m0.01s

About twice as fast! Note that the "Connected." output may not exist in recent versions of sqlplus in silent mode. If you have a script that generates hundreds of connections, or which creates a connection every 5 seconds or so, think about it 🙂
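The same two-way-pipe idea exists in bash as coproc (bash 4+). Here is a minimal, hypothetical sketch of the mechanics, using cat as a stand-in for sqlplus (cat simply echoes back every line it receives):

```shell
#!/bin/bash
# Two-way pipe with bash's coproc; "cat" stands in for sqlplus and
# echoes back whatever we send it.
coproc cat
in=${COPROC[0]} out=${COPROC[1]}

# send a "statement" down the pipe (like ksh's print -p)
echo "select count(*) from EMP;" >&"$out"

# read the "result" back (like ksh's read -p)
read -r line <&"$in"
echo "got: $line"

# close our write end so the coprocess terminates cleanly
exec {out}>&-
wait "$COPROC_PID"
```

The two file descriptors in the COPROC array play the same role as ksh's -p option to print and read.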

Enjoy your week-end

return code before grep

In my previous post hide-password-from-ps-output-sql-loader I mentioned a way to pass the password to the loader through a parameter file. As correctly suggested by Brian Tkatch, the password can also be passed as standard input:

sqlldr control=x.ctl silent=header,feedback <<EOF
scott/tiger
EOF

The Username: prompt is displayed 🙁

How do we get rid of this ?

sqlldr control=x.ctl silent=header,feedback <<EOF |grep -v "^Username:"
scott/tiger
EOF

There is no output. But what's the error code?
echo $?
1

The return code is 1 🙁

This is not the error code from sqlldr, but the error code from grep!

Ok, here is the trick; it is a bit cryptic if you are not familiar with file descriptors:

( ( (sqlldr control=x <<EOF; echo $? >&3) |grep -v "^Username:" >&4 ) 3>&1 |(read x;exit $x) )4>&1
scott/tiger
EOF
echo $?
0

The return code is 0 🙂
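To see why the juggling works, here is the same descriptor dance with a harmless stand-in function in place of sqlldr (fake_loader and its messages are invented for illustration): the real exit code travels over fd 3 to the final subshell, while the filtered output travels over fd 4 to the terminal.

```shell
#!/bin/bash
# fake_loader stands in for sqlldr: it prints a Username: prompt,
# some real output, and exits with code 5.
fake_loader() {
  echo "Username:"
  echo "42 Rows successfully loaded."
  return 5
}

rc=0
# fd 3 carries the exit code to (read x; exit $x); fd 4 carries the
# grep-filtered output back to stdout.
( ( (fake_loader; echo $? >&3) | grep -v "^Username:" >&4 ) 3>&1 \
  | (read -r x; exit "$x") ) 4>&1 || rc=$?
echo "return code: $rc"   # 5, from fake_loader, not 0 from grep
```

The Username: line is filtered out, yet $rc is 5, the loader's own status.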

Hide password from ps output : sql loader

By reporting the process status with ps, any Unix user can see the command-line arguments:

ps -ef
UID PID PPID C STIME TTY TIME CMD
lsc 13837 13825 0 May 11 pts/17 0:01 -ksh
oracle 4698 6294 0 12:00:40 ? 0:00 sqlplus -s system/manager
appluser 4229 4062 0 12:00:03 ? 0:00 sqlldr scott/tiger
applrun0 28445 28281 0 11:54:03 ? 0:00 imp king/gold full=y
...

What you see above is definitely a security issue. For sqlplus, the trick is to use sqlplus /nolog and then pass connect system/manager as input or in a script.

For sqlldr (and exp/imp etc…), the trick is to use a parameter file.

To make it as safe as possible, the file must be unique, readable only by its owner, and removed after use.

Ex:
umask 0077
TMPFILE=$(mktemp)
echo "userid=scott/tiger" >$TMPFILE
sqlldr parfile=$TMPFILE control=x.ctl silent=header,feedback
rm $TMPFILE

mktemp is a Unix utility that creates temporary files with unique names.

future of MySQL

A google search on oracle and mysql returns concerns about the future of MySQL. When Sun bought MySQL, the enthusiasm in the open-source community was not unanimous either. But now it really needs repositioning.

I was just reading : http://www.mysql.com/products/enterprise/unlimited.html
Deploy an unlimited number of MySQL Enterprise Servers for the price of
a single CPU of Oracle Enterprise Edition

Today, there are over 5,000 organizations that spend more than $1Million on
License fees to Oracle, Microsoft, Sybase and IBM. In many cases, that’s money
that doesn’t need to be spent. Not only does open source save money, it
provides an architecture that is more scalable for modern web-based applications.
No wonder open source software has become a strategic initiative for CIOs & CFOs
in many organizations.

Clearly this needs repositioning. (For Oracle, money needs to be spent 😈 )

Regarding the redundant offering of MySQL and Oracle Database Express Edition, I could imagine Oracle getting rid of the free edition. Oracle seems to put effort into not releasing any database or security patches for it, and the limited features as well as the cpu/memory restrictions are just way too annoying.

This is why I believe MySQL will survive XE.

Pure guess of course…

ora-984 and sqlerrm

What’s wrong with this code ?

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production

SQL> drop table lsc_t ;

Table dropped.

SQL> create table lsc_t (x varchar2(255), y date);

Table created.

SQL> begin
2 null;
3 exception when others then
4 insert into lsc_t (x,y)
5 values (sqlerrm, sysdate);
6 end;
7 /

PL/SQL procedure successfully completed.

I was in the process of migrating a database with 10g and this piece of code became invalid 🙁

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table lsc_t ;

Table dropped.

SQL> create table lsc_t (x varchar2(255), y date);

Table created.

SQL> begin
2 null;
3 exception when others then
4 insert into lsc_t (x,y)
5 values (sqlerrm, sysdate);
6 end;
7 /
values (sqlerrm, sysdate);
*
ERROR at line 5:
ORA-06550: line 5, column 17:
PL/SQL: ORA-00984: column not allowed here
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored

SQL>

Ok, the code is wrong. It had been running for half a decade in production, but the exception was never reached.

Note it is a bug; putting sqlerrm after sysdate would have generated the exception even in 9i:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production

SQL> begin
2 null;
3 exception when others then
4 insert into lsc_t (y,x)
5 values (sysdate,sqlerrm);
6 end;
7 /
values (sysdate,sqlerrm);
*
ERROR at line 5:
ORA-06550: line 5, column 25:
PL/SQL: ORA-00984: column not allowed here
ORA-06550: line 4, column 3:
PL/SQL: SQL Statement ignored

Just having fun with 10g migration 😉

grant access to trace files

As a developer, you sometimes need to contact your dba to get a user trace. As a dba, sending trace files to developers is not much fun.

But how do you get access to your traces without a dba?

One way could be to set _trace_files_public=true, but this is bad: it is undocumented, it is unsupported, there is no control over who can access the files, and it probably will not help if you have no access to the database server.

I prefer to provide a function that reads from the trace directory.

Ok, here we go :

CREATE DIRECTORY user_dump_dest AS
'/app/oracle/admin/DB01/udump';

CREATE OR REPLACE FUNCTION get_tracefile (file_name VARCHAR2)
RETURN VARCHAR2
IS
dest_loc CLOB;
src_loc BFILE;
ret VARCHAR2 (4000);
BEGIN
src_loc := BFILENAME ('USER_DUMP_DEST', file_name);
DBMS_LOB.OPEN (src_loc, DBMS_LOB.lob_readonly);
DBMS_LOB.createtemporary (dest_loc, TRUE);
DBMS_LOB.loadfromfile (dest_loc, src_loc, 4000);
ret := DBMS_LOB.SUBSTR (dest_loc, 4000);
DBMS_LOB.CLOSE (src_loc);
RETURN ret;
END;
/

Just a small function that returns the first 4000 characters of the trace file. I can then grant execute on that function to the developers.

It works quite well:
SELECT get_tracefile ('db01_ora_6224.trc')
FROM DUAL;
GET_TRACEFILE('DB01_ORA_6224.TRC')
----------------------------------------------------------------------
/app/oracle/admin/DB01/udump/db01_ora_6224.trc
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
ORACLE_HOME = /app/oracle/product/dbms/DB01
System name: SunOS
Node name: dbsrv01
Release: 5.8
Version: Generic_117000-05
Machine: sun4u
Instance name: DB01
Redo thread mounted by this instance: 1
Oracle process number: 21
Unix process pid: 6224, image: oracle@dbsrv01 (TNS V1-V3)

*** SESSION ID:(273.54591) 2009-04-27 12:13:57.292
*** 2009-04-27 12:13:57.292
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [17069], [0x3EED05050],
[], [], [], [], [], []
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
ORA-00928: missing SELECT keyword
Current SQL statement for this session:
alter PACKAGE "LSC_PKG" compile body
----- PL/SQL Call Stack -----
...

I should mention that granting access to trace files is a security issue, as a hacker could dump some security modules. But it should be fine for your trusted developers.

Oracle buys Sun

http://www.oracle.com/sun

I really enjoyed this news. It is both my favorite OS (Solaris) and my favorite database (Oracle) committing to a common future. I have been more than sceptical about Oracle Linux and the Oracle Database Machine entering the datacenter world, but now I feel relieved that Oracle is putting value in acquiring a serious database platform (Oracle/Sparc) 🙂

On the application side, acquiring Java is more than a smart move against Websphere …

Oracle on Mac

I just read a post from Barry Mc Gillin about apex on MacOsX.

After having used Vista for one year, I would really welcome a Mac as my next notebook. But what about Oracle Database on Mac OS X?

According to otn, the latest release as of today is a deprecated 10gR1 for the deprecated PowerPC platform.

However, according to metalink, a 10gR2 is planned for the first quarter of 2009 (sic)
Db 10gR2 on MacOS X Intel
And 11g is planned too Db 11gR1 on MacOS X Intel, Schedule to be announced.

not a group by expression?

How do I count all objects in the current schema?


SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 1 09:58:46 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle 12g Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Expensive option

SQL> select
2 user,
3 count(*)
4 from
5 user_objects
6 where
7 status='INVALID';
user,
*
ERROR at line 2:
ORA-00937: not a single-group group function

Disconnected from Oracle 12g Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Expensive option

It does not seem to work … What’s wrong?

locking and database link

Last week-end a new release went to production. This week we got plenty of ORA-2049. What on earth went wrong? And why did we not catch this locking issue before?

We just did not have that concurrency effect in test.

How does locking work ?

With regular tables:
session 1
select * from lsc_t for update;
session 2
update lsc_t set id=3;

Session 2 waits until transaction in session 1 finishes.

But if the update uses a subquery that accesses a remote table, the update will fail after one minute:

session 1
select * from lsc_t for update;
session 2
update lsc_t set x=(select * from lsc_t@db02);
Session 2 will end with ORA-02049: timeout: distributed transaction waiting for lock

The timeout of one minute is defined by distributed_lock_timeout, and it is not modifiable in your session (chances are, you will not be able to modify this parameter).

1!=1

In the process of validating some calculations, I noticed strange differences from one run to another, with exactly the same data and exactly the same function.

I suppose the only difference was the order in which the transactions were processed (and since I had no ORDER BY, the order was not deterministic).

To reduce it to a minimal case, I noticed that, in Oracle, 1/3*3 is not 1!

SQL> select * from dual where 1/3*3=1
no rows selected.

Let me tell you, this was unexpected. Ok, on my first pocket calculator 30 years ago, I noticed that 1/3=.3333333 and .3333333*3=.9999999. But since then, I have come to expect 1.

In perl:

$ perl -e 'print 1 if 1/3*3==1'
1

OK, I tried in java:

public class X {
  public static void main(String[] args) {
    if (1/3*3 < 1) {
      System.out.println("<\n");
    }
  }
}

$ javac X.java
$ java X
<

In java it seems different too (there, 1/3 is an integer division).

So what is 1/3*3 in Oracle?

SQL> set numw 50
SQL> select 1/3*3 from dual;

1/3*3
--------------------------------------------------
.9999999999999999999999999999999999999999

Surprised?
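For comparison, languages that compute in IEEE-754 binary doubles happen to round (1/3)*3 back to exactly 1 (the two rounding errors cancel out), while Oracle's decimal NUMBER arithmetic keeps the truncated .9999… A quick check from the shell, using awk as a stand-in for any binary-float language:

```shell
# awk computes in binary doubles: 1/3 rounds slightly, and multiplying
# by 3 rounds the product back to exactly 1.0
awk 'BEGIN { if (1/3*3 == 1) print "equal"; else print "not equal" }'
# prints: equal
```

So the surprise is specific to decimal arithmetic with a fixed number of digits, not to floating point in general.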

package version control

Oracle does not offer any kind of version control like CVS or Subversion inside the database. CVS and Subversion can be used on the filesystem, and the code can then be deployed with sqlplus.

To quickly compare packages in Test and Production I used :


select
env,name,type,line,text
from (
select
env,name,type,line,text,
count(distinct text) over(partition by name,type,line) c
from (
select
'TEST' env,name,type,line,text
from
user_source
union all
select
'PROD' env,name,type,line,text
from
user_source@prod
)
)
where c>1 ;

where “PROD” is a database link to the production database. My security friends will scream when they hear I have a database link to production in my test schema, but you are free to do it the other way round :mrgreen:

I also explained the TOAD method in how to compare schemas a few months ago.

to SQL or to PLSQL

Iggy Fernandez posted a riddle to be solved without PLSQL

http://www.amazon.com/gp/blog/post/PLNKI2MYB0YCYAUL

I tend to second Steven Feuerstein's argument:
Some people can perform seeming miracles with straight SQL, but the statements can end up looking like pretzels created by someone who is experimenting with hallucinogens.

In my book I mentioned techniques that enhance the basic functionality of SQL. Many of those techniques allow you to write shorter statements than before (by avoiding redundant subqueries), and occasionally to gain performance as well.

However, the downside is that you may end up writing code that only you can read.

In my opinion a zero-plsql approach is wrong. If you have a problem similar to Iggy Fernandez's, then no no no, do not even try to solve it with SQL, unless you are really doing this for fun.

In my humble opinion, plsql and sql are complementary. If you want to program, use plsql. If you need to retrieve data, use sql.

And thanks a lot to Iggy for the excellent challenge !!!

ORA-2070

Today I received an error which was totally new to me. And totally unexpected…

For the purpose of this post, I reduced it to a minimum:
CREATE TABLE LSC_T1
( x number);
CREATE TABLE LSC_T2
( x number);
with c as (select nls_charset_decl_len(1, x)
from lsc_t1 join lsc_t2
using (x))
select 1 from c
union
select 1 from c;

Error at line 1
ORA-02070: database does not support operator NLS_CHARSET_DECL_LEN in this context

My basic query was actually joining two dictionary views, and NLS_CHARSET_DECL_LEN is used in USER_TAB_COLUMNS.CHAR_COL_DECL_LENGTH.
Kind of
with c as (
select table_name, column_name, constraint_name
from user_tab_columns
join user_constraints
using (table_name)
where constraint_type='P'
)
select
constraint_name,column_name
from c
union all
select
column_name,constraint_name
from c;
Error at line 1
ORA-02070: database does not support operator NLS_CHARSET_DECL_LEN in this context

I spent some time understanding why I got an ORA-2070 in a simple query that seemed totally unrelated to character sets. The database used for this test is 9.2.0.8/Solaris.

It is quite easy to avoid the bug, for instance by not using the ANSI join 😉

When analyzing such a defect, my approach is to simplify the query as much as possible. Very often on forums you see users posting queries of more than 100 lines, expecting the community to debug their code. Posting a tiny reproducible test case is key to receiving a proper solution!

dynamic database link

How do I select through a database link when the database link is not fixed?

SQL> var db_link varchar2(255);
SQL> exec :db_link := 'DB02';
SQL> select x from LSC_T@:db_link where x>0;
select x from LSC_T@:db_link where x>0;
*
ERROR at line 1:
ORA-01729: database link name expected

My solution uses dynamic SQL.
Remote database DB02:

create table lsc_t(x number);
insert into lsc_t(x) values (2);
commit;

Remote database DB03:

create table lsc_t(x number);
insert into lsc_t(x) values (3);
commit;

Local database DB01:
create or replace type lsc_t_o as object(x number);
/
create or replace type lsc_t_c as table of lsc_t_o;
/
create or replace function lsc_f(str varchar2)
return lsc_t_c is
rc lsc_t_c;
begin
execute immediate
'select lsc_t_o(x) from '
|| dbms_assert.qualified_sql_name(str) bulk collect into rc;
return rc;
end;
/

Note the DBMS_ASSERT call. DBMS_ASSERT enhances security by preventing SQL injection.

Ok, let’s try:
SQL> var db_link varchar2(255);
SQL> exec :db_link := 'DB02';
SQL> select x from table(lsc_f('LSC_T@'
||:db_link)) where x>0;
X
----------
2
SQL> exec :db_link := 'DB03';
SQL> select x from table(lsc_f('LSC_T@'||:db_link)) where x>0;
X
----------
3

Seems to work 🙂

SOUG-Tagung 09/2

I am looking forward to speaking at the Swiss Oracle User Group next month.

March 18, 2009, Baden-Daettwil
http://www.soug.ch

I will be presenting an XML session about selecting XML from the database.

select xmlserialize(document xmlcol) from agenda;

 

12:00  Reception and registration (Empfang und Registrierung), SOUG Sekretariat

12:30  Standing lunch (Stehlunch) in the foyer of seminar room Schottky

13:00  21st ordinary general assembly of the SOUG, seminar room Schottky
       Award for the SOUG Speaker of the Year 2008, seminar room Schottky

Track 1: Oracle DB and development, moderated by Manfred Drozd, In&Out AG
Track 2: Oracle Applications, moderated by Michele Savino, FI-Solutions AG

14:00  Track 1: XML in the Oracle Database
       Use SQL to retrieve and insert XML data in the database. Learn
       about the SQL functions, the XMLTYPE and various techniques to
       improve your performance when dealing with XML documents.
       Laurent Schneider, Credit Suisse

       Track 2: Oracle Applications portfolio strategy, added value and ecosystem
       Through the acquisition of more than 50 companies, Oracle has evolved
       from a technology leader into a partner for business applications.
       Thanks to its complete, open and integrated platform, Oracle enables
       a reduction of integration costs. Industry-specific blueprints allow
       customers to model their business processes efficiently.
       Beat Brunschwiler, Oracle

14:45  Track 1: Optimized backup with RMAN and the LiteSpeed Engine for Oracle
       The talk first gives an overview of the different backup methods
       (RMAN, hot backup, cold backup) and their respective advantages and
       drawbacks. Using RMAN with the LiteSpeed Engine for Oracle as an
       example, it then explains how a backup can be optimized and what
       impact this has on a recovery.
       Johannes Ahrends, Quest Software

       Track 2: Oracle Application integration architecture
       A large part of the implementation effort in IT projects goes into
       integrating application silos, which makes adapting to new business
       requirements harder. The Application Integration Platform standardizes
       integration, from the business process down to the data layer, so that
       innovation can be implemented quickly.
       Jean-Claude Haupfleisch, Oracle

15:30  Break (ca. 30')

16:00  Track 1: SQL Plan Baseline
       As of Oracle 11g, this technology replaces the former Stored Outlines.
       Chris Antognini, Trivadis

       Track 2: Siebel CRM reference project
       Example of a Siebel CRM implementation at a Swiss insurer.
       Lukas Ehrensperger, ec4u

16:45  Apéro and networking. We thank the sponsor Quest.

date or timestamp literals

Yesterday I commented on a post on the French developer forums http://www.developpez.net about using DATE and TIMESTAMP literals.

A date literal is DATE '2000-01-01'. It is not possible to have a date literal at 6am, like DATE '2000-01-01 06:00:00'; the syntax simply does not exist. You can either use the TO_DATE function, or if you are really hooked on literals, maybe DATE '2000-01-01'+INTERVAL '+0 06:00:00' DAY TO SECOND.

One suggestion was to use TIMESTAMP, because TIMESTAMP literals do have time in it.

However, comparing a date with a timestamp results in an implicit conversion to timestamp, and if the date column is indexed, the index will not be used.

SQL> create table lsc_t(x number, d date)
Table created.
SQL> create index lsc_i on lsc_t(d)
Index created.
SQL> insert into lsc_t(d) values (to_date('2000-01-01 06:00:00',
'YYYY-MM-DD HH24:MI:SS'))
1 row created.
SQL> select * from lsc_t where d>=timestamp '2000-01-01 06:00:00'
X D
---------- -------------------
2000-01-01 06:00:00
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer Mode=CHOOSE
1 0 TABLE ACCESS FULL TGPOWNER.LSC_T
SQL> select * from lsc_t where d>=to_date('2000-01-01 06:00:00',
'YYYY-MM-DD HH24:MI:SS')
X D
---------- -------------------
2000-01-01 06:00:00

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer Mode=CHOOSE
1 0 TABLE ACCESS BY INDEX ROWID TGPOWNER.LSC_T
2 1 INDEX RANGE SCAN TGPOWNER.LSC_I

As shown above, the index is not used when comparing to timestamp.

Why does Oracle convert the date to a timestamp? Because a timestamp may contain nanoseconds.
SQL> select * from lsc_t where
d>=timestamp '2000-01-01 06:00:00.000000001';

no rows selected

SQL> select * from lsc_t where
cast(d as timestamp)>=timestamp '2000-01-01 06:00:00.000000001';

no rows selected

SQL> select * from lsc_t where
d>=cast(timestamp '2000-01-01 06:00:00.000000001' as date);

X D
---------- -------------------
2000-01-01 06:00:00

Converting the timestamp to a date would deliver a wrong result, as the last example shows.

Tuning query over database link

I just learnt from a colleague a very useful hint for remote databases: over a db link, Oracle does not have the necessary statistics to make the right decision.

Here is the test case :
DB01

create database link lsc_link_2 using 'DB02';
create table lsc_small_1 as
  select rownum x from dual connect by level<100;
alter table lsc_small_1 add primary key(x);
create table lsc_big_1 as
  select rownum x from dual connect by level<1000000;
alter table lsc_big_1 add primary key(x);

DB02

create table lsc_small_2 as
  select rownum x from dual connect by level<100;
alter table lsc_small_2 add primary key(x);
create table lsc_big_2 as
  select rownum x from dual connect by level<1000000;
alter table lsc_big_2 add primary key(x);

And here are the two test queries (run on DB01):

select count(*) from lsc_big_1 b1
join lsc_small_2@lsc_link_2 s2 using (x);
COUNT(*)
----------
99
Elapsed: 00:00:00.10

select count(*) from lsc_big_2@lsc_link_2 b2
join lsc_small_1 s1 using (x);
COUNT(*)
----------
99
Elapsed: 00:00:10.31

As shown above, the first one is 100x faster. Anything to tune? For sure!

The trick is to execute the join remotely when the remote table is much larger than the local one!

select /*+ DRIVING_SITE(b2) */ count(*)
from lsc_big_2@lsc_link_2 b2
join lsc_small_1 s1 using (x);
COUNT(*)
----------
99
Elapsed: 00:00:00.06

Ref: the DRIVING_SITE hint

track DDL changes

Why is my package being invalidated? This is the question I have asked myself a few times these days. In order to find out what is happening in my schema, I created this simple DDL trigger, which tracks all DDL changes in my schema.


CREATE TABLE AUDIT_DDL (
d date,
OSUSER varchar2(255),
CURRENT_USER varchar2(255),
HOST varchar2(255),
TERMINAL varchar2(255),
owner varchar2(30),
type varchar2(30),
name varchar2(30),
sysevent varchar2(30),
sql_txt varchar2(4000));

create or replace trigger audit_ddl_trg after ddl on schema
declare
  sql_text ora_name_list_t;
  stmt     varchar2(4000) := '';
  n        number;
begin
  if (ora_sysevent = 'TRUNCATE')
  then
    null;
  else
    n := ora_sql_txt(sql_text);
    for i in 1 .. n
    loop
      stmt := substr(stmt || sql_text(i), 1, 4000);
    end loop;
    insert into audit_ddl(d, osuser, current_user, host, terminal,
                          owner, type, name, sysevent, sql_txt)
    values (
      sysdate,
      sys_context('USERENV', 'OS_USER'),
      sys_context('USERENV', 'CURRENT_USER'),
      sys_context('USERENV', 'HOST'),
      sys_context('USERENV', 'TERMINAL'),
      ora_dict_obj_owner,
      ora_dict_obj_type,
      ora_dict_obj_name,
      ora_sysevent,
      stmt
    );
  end if;
end;
/

Thanks Yas for your comment, I added the SQL text

high cost

What’s wrong with this query ?


select
(t6.column_value).getstringval() t6
from
table(xmlsequence(extract(xmltype('<x/>'),'/x'))) t1,
table(xmlsequence(t1.column_value))t2,
table(xmlsequence(t2.column_value))t3,
table(xmlsequence(t3.column_value))t4,
table(xmlsequence(t4.column_value))t5,
table(xmlsequence(t5.column_value))t6;
T6
------------------------
<x/>

Elapsed: 00:00:00.01

Well, let’s check the plan :

--------------------------------------------------------------------
| Id | Operation | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | 18E| 15E| 18E (0)|999:59:59 |
| 1 | NESTED LOOPS | 18E| 15E| 18E (0)|999:59:59 |
| 2 | NESTED LOOPS | 18E| 15E| 99P (3)|999:59:59 |
| 3 | NESTED LOOPS | 4451T| 31P| 12T (3)|999:59:59 |
| 4 | NESTED LOOPS | 544G| 3045G| 1490M (3)|999:59:59 |
| 5 | NESTED LOOPS | 66M| 254M| 182K (3)| 00:36:31 |
| 6 | COLLECTION I | | | | |
| 7 | COLLECTION I | | | | |
| 8 | COLLECTION IT | | | | |
| 9 | COLLECTION ITE | | | | |
| 10 | COLLECTION ITER | | | | |
| 11 | COLLECTION ITERA | | | | |
--------------------------------------------------------------------

It is returning 18 quintillion rows (18E) and 15 exabytes; the cost is 1.8E19 and the estimated time is about one month :mrgreen:

To bash or not to bash

I have been inspired by Chen to talk about bash…

I have been using ksh for many years, and I mean ksh88, not ksh93. The main reason is that I want my scripts to run the same way on any Unix flavor.

ksh93 has never been very popular. I used it a few times to sleep half a second:


echo sleep 0.5| /usr/dt/bin/dtksh

ksh has a lot of nice features. I just used one of them in my script :

$ typeset -u name
$ read name?"Enter your name : "
Enter your name : Laurent
$ echo $name
LAURENT

Much easier to force a variable to uppercase this way than going through echo | tr etc.

Bash has some nice features too, but unfortunately every OS release comes with a different bash version, which is the same pain as perl when you want to write a script that lasts for a decade or two.

Ok, just4fun

$ mkdir -p {a..z}/{1..9}
... create directories a/1 a/2 ... z/8 z/9
$ [[ text =~ t..t ]]
... check if text matches regular expression t..t
$ echo ${text/pattern/string}
... replace pattern by string

The first two commands require bash3, the last is just fine with bash2.
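A quick sketch of the last two, plus the bash counterpart of ksh's typeset -u (declare -u, which assumes bash 4 or later):

```shell
# Pattern substitution (bash 2+): replace the first match only
text="to bash or not to bash"
echo "${text/bash/ksh}"      # to ksh or not to bash

# Regular expression test (bash 3+)
[[ text =~ t..t ]] && echo "matches"

# Force a variable to uppercase (bash 4+, like ksh's typeset -u)
declare -u name="Laurent"
echo "$name"                 # LAURENT
```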

Have fun shell-scripting 🙂

How to tune something doing nothing?

To end this year in glory, I tried to replace a row-level trigger (LSC_TR1) with a statement-level trigger (LSC_TR2)


SQL> create global temporary table lsc_t1(x number primary key)
2 on commit delete rows;
Table created.
SQL> create table lsc_t2(x number primary key);
Table created.
SQL> create table lsc_t3(x number primary key);
Table created.
SQL> create or replace function f return number deterministic is
2 begin sys.dbms_lock.sleep(1); return 10; end;
3 /
Function created.
SQL> create or replace trigger lsc_tr1 after insert on lsc_t1 for each row
2 begin
3 if (:new.x > f)
4 then
5 insert into lsc_t2(x) values (:new.x);
6 end if;
7 end;
8 /
Trigger created.
SQL> create or replace trigger lsc_tr2 after insert on lsc_t1
2 begin
3 insert into lsc_t2(x) select x from lsc_t1 where x > f;
4 end;
5 /
Trigger created.
SQL> insert into lsc_t3 values(1);
1 row created.
SQL> insert into lsc_t3 values(5);
1 row created.
SQL> insert into lsc_t3 values(20);
1 row created.
SQL> insert into lsc_t3 values(50);
1 row created.
SQL> insert into lsc_t3 values(100);
1 row created.
SQL> insert into lsc_t3 values(200);
1 row created.
SQL> insert into lsc_t3 values(1000);
1 row created.
SQL> commit;
Commit complete.
SQL> alter trigger lsc_tr1 enable;
Trigger altered.
SQL> alter trigger lsc_tr2 disable;
Trigger altered.
SQL> insert into lsc_t1(x) select x from lsc_t3;
7 rows created.
Elapsed: 00:00:07.05

The current trigger LSC_TR1 is very slow, as it does some processing after each row. Since LSC_T1 is a temporary staging table, it makes no sense to process each row, so I wrote the trigger LSC_TR2 to boost performance.

Have a look.


SQL> truncate table lsc_t2;
Table truncated.
SQL> alter trigger lsc_tr1 disable;
Trigger altered.
SQL> alter trigger lsc_tr2 enable;
Trigger altered.
SQL> insert into lsc_t1(x) select x from lsc_t3;
7 rows created.
Elapsed: 00:00:01.03

Much faster ! LSC_TR1 just takes ages. What a glorious year 2008 🙂

Well, the table LSC_T3 is empty most of the time, and the insert runs again and again…


SQL> truncate table lsc_t3;
Table truncated.
SQL> alter trigger lsc_tr1 enable;
Trigger altered.
SQL> alter trigger lsc_tr2 disable;
Trigger altered.
SQL> insert into lsc_t1(x) select x from lsc_t3;
0 rows created.
Elapsed: 00:00:00.01
SQL> alter trigger lsc_tr1 disable;
Trigger altered.
SQL> alter trigger lsc_tr2 enable;
Trigger altered.
SQL> insert into lsc_t1(x) select x from lsc_t3;
0 rows created.
Elapsed: 00:00:01.03

Suddenly my tuning exercise turned out to be a disaster; in this example it is about 63 times slower 🙁

I could count the rows first in the LSC_TR2 trigger, but it would still be a bit slower when processing very few rows (in this example, with a single row).
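For the record, that counting variant would look something like this (a sketch only, not benchmarked; the extra COUNT(*) is one more scan of the temporary table):

```sql
-- Hypothetical variant of LSC_TR2: skip the expensive insert-select
-- when the staging table is empty, so the function f is never called.
-- Small batches now pay for both the count and the filtered insert.
create or replace trigger lsc_tr2 after insert on lsc_t1
declare
  n number;
begin
  select count(*) into n from lsc_t1;
  if n > 0 then
    insert into lsc_t2(x) select x from lsc_t1 where x > f;
  end if;
end;
/
```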

So I will keep this AFTER EACH ROW trigger until next year 😉

old-fashion listagg

Yesterday I had my first session about XML, today I have one about SQL Model

Ok, it was the first time I spoke about XML, so I did not really know where to focus. XML is so big: you have XQuery, XPath, dozens of XML functions in the database.

One of the XML functions is called XMLTRANSFORM; it transforms XML according to an XSLT stylesheet

I had a fun demo using XSLT to create a semicolon-separated list :

select
  deptno,
  xmltransform
  (
    sys_xmlagg
    (
      sys_xmlgen(ename)
    ),
    xmltype
    (
      '<xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/">
          <xsl:for-each select="ROWSET/ENAME">
            <xsl:value-of select="text()"/>;</xsl:for-each>
        </xsl:template>
      </xsl:stylesheet>'
    )
  ).getstringval() listagg
from emp
group by deptno;


DEPTNO LISTAGG
------- --------------------------------------
10 CLARK;KING;MILLER;
20 SMITH;FORD;ADAMS;SCOTT;JONES;
30 ALLEN;BLAKE;MARTIN;TURNER;JAMES;WARD;
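As the title suggests, this is the old-fashioned way; from 11gR2 on, the built-in LISTAGG aggregate does the same job (a sketch; LISTAGG requires an explicit ORDER BY, so the ordering may differ from the XSLT output above):

```sql
-- 11gR2+ built-in aggregate; the appended ';' mimics the
-- trailing separator produced by the XSLT demo
select deptno,
       listagg(ename, ';') within group (order by ename) || ';' listagg
from emp
group by deptno;
```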

HOT


CREATE TABLE T(X NUMBER PRIMARY KEY) ORGANIZATION HEAP;

I have read the SQL Reference more than once -believe me- but I did not notice this syntax until today… a HOT table, or heap-organized table, is just an ordinary table, and the ORGANIZATION HEAP clause is optional since it is the default. An IOT, or INDEX ORGANIZED TABLE, stores its rows in the primary key index itself. Good candidates for IOT are tables where you mostly access the rows through that index, for instance by full index scan.

Compare :


SQL> CREATE TABLE LSC_SELECTED_EMPLOYEES (EMPNO NUMBER PRIMARY KEY)
2 ORGANIZATION HEAP;

Table created.

SQL> INSERT INTO LSC_SELECTED_EMPLOYEES(EMPNO) VALUES (7839);

1 row created.

SQL> INSERT INTO LSC_SELECTED_EMPLOYEES(EMPNO) VALUES (7788);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> set autot on exp
SQL> select empno, ename from LSC_EMP
2 join LSC_SELECTED_EMPLOYEES using (EMPNO);

EMPNO ENAME
---------- ----------
7839 KING
7788 SCOTT

Execution Plan
----------------------------------------------------------
Plan hash value: 609992009

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| T
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 46 | 3 (0)| 0
| 1 | NESTED LOOPS | | 2 | 46 | 3 (0)| 0
| 2 | TABLE ACCESS FULL| LSC_EMP | 14 | 140 | 3 (0)| 0
|* 3 | INDEX UNIQUE SCAN| SYS_C0010139 | 1 | 13 | 0 (0)| 0
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("LSC_EMP"."EMPNO"="LSC_SELECTED_EMPLOYEES"."EMPNO")

Note
-----
- dynamic sampling used for this statement

SQL> set autot off
SQL> select segment_name,segment_type from user_segments
2 where segment_name in (
3 select object_name from user_objects t where t.CREATED >
4 sysdate-1/24 and segment_name not like 'BIN$%');

SEGMENT_NAME SEGMENT_TYPE
------------------------------ ------------------------------
LSC_SELECTED_EMPLOYEES TABLE
SYS_C0010139 INDEX

SQL> drop table LSC_SELECTED_EMPLOYEES;

Table dropped.

with

SQL> CREATE TABLE LSC_SELECTED_EMPLOYEES2 (EMPNO NUMBER PRIMARY KEY)
2 ORGANIZATION INDEX;

Table created.

SQL> INSERT INTO LSC_SELECTED_EMPLOYEES2(EMPNO) VALUES (7839);

1 row created.

SQL> INSERT INTO LSC_SELECTED_EMPLOYEES2(EMPNO) VALUES (7788);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> set autot on exp
SQL> select empno, ename from LSC_EMP
2 join LSC_SELECTED_EMPLOYEES2 using (EMPNO);

EMPNO ENAME
---------- ----------
7839 KING
7788 SCOTT

Execution Plan
----------------------------------------------------------
Plan hash value: 3539129569

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CP
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 46 | 3 (
| 1 | NESTED LOOPS | | 2 | 46 | 3 (
| 2 | TABLE ACCESS FULL| LSC_EMP | 14 | 140 | 3 (
|* 3 | INDEX UNIQUE SCAN| SYS_IOT_TOP_23608 | 1 | 13 | 0 (
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("LSC_EMP"."EMPNO"="LSC_SELECTED_EMPLOYEES2"."EMPNO")

Note
-----
- dynamic sampling used for this statement

SQL> set autot off
SQL> select segment_name,segment_type from user_segments
2 where segment_name in (
3 select object_name from user_objects t where t.CREATED >
4 sysdate-1/24 and segment_name not like 'BIN$%');

SEGMENT_NAME SEGMENT_TYPE
------------------------------ ------------------------------
SYS_IOT_TOP_23608 INDEX

SQL> drop table LSC_SELECTED_EMPLOYEES2;

Table dropped.

The plan is the same, but with the IOT you have only one segment, the INDEX, while with the HOT you have two segments, the INDEX and the TABLE.

Surely a good candidate for IOT 🙂