Check if a program is already running in Unix

There is more than one way to do it; the safest is probably to check whether /home/lsc/OH_YES_I_AM_RUNNING exists and believe it. This is called the PID file method and is widely used (Apache has used it for a long, long time). It needs a file, and it needs cleanup if you reboot your server in the middle of something (and surely you do not want to delete stale pid files yourself).
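
A minimal sketch of the idea, with my own file name and kill -0 to test that the pid is still alive (a recycled pid can still fool you) :

#!/bin/ksh
# pid file check, a sketch : adapt the path to your needs
pidfile=/home/lsc/OH_YES_I_AM_RUNNING
if [ -f $pidfile ] && kill -0 $(cat $pidfile) 2>/dev/null
then
  echo "already running as pid $(cat $pidfile)"
  exit 1
fi
echo $$ > $pidfile
# ... do the real work here ...
rm $pidfile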

Ok, often you see this :


ps -ef | grep program

There you list all processes and keep the lines that contain program. If someone does a vi program or anything worse (emacs?), you will get more rows than needed.

Also, maybe it is fine to run program with different arguments; this must be decided case by case.
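
If you stick with ps, here is a sketch of two classic tricks to reduce the noise (not bulletproof either) :

# the [x] trick : the grep command line itself no longer matches
ps -ef | grep '[x]1.sh'
# or compare the last column only, against the exact command
ps -ef | awk '$NF=="./x1.sh"'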

Well, take a simple test case :
x1.sh and x2.sh :

#!/bin/ksh
while :
do
  date  > /dev/null
done

let’s try to use ps


$ nohup ./x1.sh &
$ nohup ./x2.sh &
$ jobs
[2] +  Running                 nohup ./x2.sh &
[1] -  Running                 nohup ./x1.sh &
$ ps -ef | egrep 'x[12]'
  u22  9240796  6226164  30 14:56:52  pts/2  0:00 /bin/ksh ./x2.sh
  u22 20840608  6226164  31 14:56:48  pts/2  0:01 /bin/ksh ./x1.sh

So far so good, I see one instance of each program.

Let’s try to see if the results are consistent over time :

 $ n=9999;while :
  do 
    ps -ef | 
      egrep 'x[12].sh'>f
    if [ $(wc -l <f) != $n ]
    then 
      n=$(wc -l <f)
      echo
      date
      cat f
      echo "==> $n"
    fi
  done

Fri Oct 28 15:01:01 CEST 2011
  u22  9240796  6226164  32 14:56:52  pts/2  0:14 /bin/ksh ./x2.sh
  u22 20840608  6226164  28 14:56:48  pts/2  0:14 /bin/ksh ./x1.sh
==>        2

Fri Oct 28 15:01:08 CEST 2011
  u22  9240796  6226164  50 14:56:52  pts/2  0:14 /bin/ksh ./x2.sh
==>        1

Fri Oct 28 15:01:09 CEST 2011
  u22  9240796  6226164  52 14:56:52  pts/2  0:14 /bin/ksh ./x2.sh
  u22 20840608  6226164  53 14:56:48  pts/2  0:15 /bin/ksh ./x1.sh
==>        2

Fri Oct 28 15:01:17 CEST 2011
  u22  9240796  6226164  40 14:56:52  pts/2  0:15 /bin/ksh ./x2.sh
  u22 10944520  9240796   0 15:01:17  pts/2  0:00 /bin/ksh ./x2.sh
  u22 20840608  6226164  31 14:56:48  pts/2  0:16 /bin/ksh ./x1.sh
==>        3

The fact that a subshell (pid 10944520) of x2 appears is not a problem for me. I have much more of a problem at 15:01:08, where x1 disappeared!

Conclusion : you cannot trust ps

shell and list of files

How do you loop thru a list of files?

For instance, you want to archive then delete all pdf documents in the current directory :

Bad practice :


tar cvf f.tar *.pdf
rm *.pdf

There are multiple issues with the commands above.

1) new files could come during the tar, so the rm will delete files that have not been archived


filelist=$(ls *.pdf)
tar cvf f.tar $filelist
rm $filelist

2) if there is no file, tar and rm will return an error


filelist=$(ls|grep '\.pdf')
if [ -n "$filelist" ]
then
  tar cvf f.tar $filelist
  rm $filelist
fi

3) this will not work for long lists (above 100k documents)


filelist=/tmp/filelist.$(date "+%Y%m%d%H%M%S").$$.$RANDOM
ls|grep '\.pdf' > $filelist
if [ -s "$filelist" ]
then
  tar cvfL f.tar $filelist
  for f in $(<$filelist)
  do
    rm $f
  done
fi

As you see, this requires special handling. tar, for instance, uses the -L option (AIX tar) to accept a list of files; rm could delete files one by one (or in bunches with xargs -L).
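
For example, a one-line sketch of the delete step with xargs, which builds bunches of arguments below the limit by itself :

xargs rm < $filelist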

This 100’000 limit (actually a byte limit, ARG_MAX, so it varies with your shell/OS and the length of the file names) is something that often gets forgotten.

Typical errors that could occur are


ksh: no space
bash: Arg list too long

pstree in AIX

For those who do not want to download some Linux-like freeware on their AIX box, use ps -T :)


ps -fT 2412672
     UID     PID    PPID   C    STIME    TTY  TIME CMD
  oracle 2412672       1   0   Sep 05      -  0:00 /u01/app/oracle/product/OAS
  oracle  630956 2412672   0   Sep 05      -  6:11     \--/u01/app/oracle/prod
  oracle 1347672  630956   0   Sep 05      - 15:32        |\--/u01/app/oracle/
  oracle 1437836  630956   0   Sep 05      -  1:02        |\--/u01/app/oracle/
  oracle  880820 1437836   0   Sep 05      -  0:32        |   |\--/u01/app/ora
  oracle 1036532 1437836   0   Sep 05      -  0:00        |   |\--/u01/app/ora
  oracle 1134796 1437836   0   Sep 05      -  0:01        |   |\--/u01/app/ora
  oracle 1343712 1437836   0   Sep 05      -  0:33        |   |\--/u01/app/ora
  oracle 1368166 1437836   0   Sep 05      -  1:11        |   |\--/u01/app/ora
  oracle 1384684 1437836   0   Sep 05      -  0:33        |   |\--/u01/app/ora
  oracle 1392862 1437836   0   Sep 05      -  0:32        |   |\--/u01/app/ora
  oracle 1396898 1437836   0   Sep 05      -  0:33        |   |\--/u01/app/ora
  oracle 1482978 1437836   0   Sep 05      -  0:32        |   |\--/u01/app/ora
  oracle 1527890 1437836   0   Sep 05      -  0:00        |   |\--/u01/app/ora
  oracle 1781798 1437836   0   Sep 05      -  0:32        |   |\--/u01/app/ora
  oracle 2195474 1437836   0   Sep 26      -  0:13        |    \--/u01/app/ora
  oracle 1626296  630956   0   Sep 05      - 13:49         \--/u01/app/oracle/

Large zip on Windows

I have never been a Microsoft fanatic nor an anti-microsoft terrorist, but today I could not believe that large compressed folders got corrupted in Windows !

I sent a relatively small zip file (5GB, peanuts) from AIX to Windows via sftp, and in Windows Explorer some files in the compressed folder (read: zip) were just pointing to the wrong content.

I had some issues with large zip files on unix, but that was last century! How come a modern filesystem/operating system could have such issues?

I have found a few bugs on support.microsoft.com.

Ex: Compressed folder becomes corrupted when larger than 2 gigabytes
Workaround : make sure that you limit the size of a compressed folder to 2 GB or less

Amazing!

_optimizer_random_plan parameter

I was trying to find a workaround for a bug in 11.2.0.2

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
         2          3

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER|      |     1 |     4 |     4   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

As DUMMY is not 'Y', B could not be 2; the result above is wrong.

Ok, I tried :


alter session set "_optimizer_random_plan"=1;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

Execution Plan
----------------------------------------------------------
Plan hash value: 837538736

-------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1146 |  5730 |    27G|
|   1 |  MERGE JOIN OUTER    |      |   603K|  2946K|    27G|
|*  2 |   TABLE ACCESS FULL  | DUAL |   392K|   767K|   136K|
|   3 |   VIEW               |      |     2 |     6 | 69180 |
|*  4 |    FILTER            |      |       |       |       |
|*  5 |     TABLE ACCESS FULL| DUAL |   123K|   240K| 69180 |
-------------------------------------------------------------

Cool, I got correct results! The fact that the cost jumped from 4 to 27 billion is just a minor annoyance I suppose :twisted:

I also tried


alter session set "_optimizer_random_plan"=0; -- default

alter session  set "_complex_view_merging"=false;

SELECT *  FROM 
  (SELECT 2 B FROM DUAL WHERE DUMMY = 'Y'), 
  (SELECT 3 C FROM DUAL WHERE DUMMY LIKE '%') 
  WHERE C = B(+);

         B          C
---------- ----------
                    3

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     5 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER  |      |     1 |     5 |     2   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
|   3 |   VIEW               |      |     1 |     3 |            |          |
|*  4 |    FILTER            |      |       |       |            |          |
|*  5 |     TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------

The cost is now 2 instead of 4 and the results are correct.

The first thing I did was open an SR; now I am impatiently waiting for Oracle Support guidance…

Duplicate table over database link

The old-style approach would be CREATE TABLE T AS SELECT * FROM T@L, where T is the table you want to duplicate and L the database link pointing to the remote database.

If you want to keep more info about the structure of t, the constraints, the indexes, you may use Datapump. Here it is…


SQL> var n number
SQL> exec :n := dbms_datapump.open('IMPORT','TABLE','L')

PL/SQL procedure successfully completed.

SQL> print n
         N
----------
        28

SQL> exec dbms_datapump.metadata_filter(:n,'SCHEMA_LIST','''SCOTT''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.metadata_filter(:n,'NAME_LIST','''T''')

PL/SQL procedure successfully completed.

SQL> exec dbms_datapump.start_job(:n)

PL/SQL procedure successfully completed.
SQL> desc t
 Name              Null?    Type
 ----------------- -------- ------------
 X                 NOT NULL NUMBER

This is utterly simple. But if you mess up with the link name, the global name, the syntax and so on, you may end up with orphan jobs in DBA_SCHEDULER_JOBS. There is a metalink note on dropping the underlying tables (Note 336914.1), but first log off, get a coffee, and they may vanish by themselves after a few minutes.
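
For the record, the command line client can do the same over the network link; a sketch with the table T and the link L from above :

impdp scott/tiger network_link=L tables=T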

Generate network graph from command line

I recently wrote on gnuplot, today I tried another command line utility to generate graphs, graphviz, version 2.24.0 on AIX5L.

Pretty straightforward syntax :

(
  echo "digraph Emp {"
    sqlplus -s -L scott/tiger << EOF
      set pages 0 lin 120 hea off feed off
      select 
        ename ||'->'|| 
        (select ename from emp where empno=e.mgr) || ';' 
      from emp e where mgr is not null;
EOF
  echo "}"
)| neato -Tpng | uuencode Emp.png | mailx laurentschneider@example.com

(or neato -Tpng -o Emp.png to save locally, or -Tps|lp, etc…)

On using ROWID

I have been challenged to assert the safety of rowid in a sql statement.

Against all my beliefs, it is not safe to assume ROWID is consistent over one sql statement. If the ROWID changes, and you use ROWID in your query, you may get inconsistent results.

Obviously I would not write such a post without a test case ;-)


create table t(x, y) partition by hash(x) partitions 32 enable row movement
as select rownum, rownum from dual connect by level<30;

select sum(y) from t;
SUM(Y)
------
   435

Sum[1..29] = 29*30/2 = 435

Let’s write the query with a slow function using rowid


create or replace function f(r rowid) return number is 
  n number; 
begin 
  select y into n from t where rowid=r; 
  sys.dbms_lock.sleep(1); 
  return n; 
end;
/
select sum(f(rowid)) from t;
SUM(F(ROWID))
-------------
          435

Elapsed: 00:00:29.12

The query took 29.1 seconds for 29 rows and returned the same result.

Let’s update the partition key during the select


select sum(f(rowid)) from t;
... hurry up to a new session ...


update t set x=x+1;
commit;

Back in your session, you will get something inconsistent


...
SUM(F(ROWID))
-------------
            5

Elapsed: 00:00:02.04

Not only was the query faster than the expected 29 seconds, but the result is also inconsistent.

Probably not a bug, rowid is just not constant within the same transaction.

send graph per mail from sqlplus

How to send a graph with a single command from your database to your mail in Unix?

I tried this (gnuplot is available for Solaris, AIX and most Unix derivatives) :

echo '
  set hea off pages 0 feed off
  prom set title "salaries of EMP"
  prom unset key
  prom unset xtics
  prom unset xlabel
  prom set term png
  prom 
  prom plot "-" with circle 
  select row_number() over (order by sal),sal from emp;
  prom e
  prom quit
' | 
  sqlplus -s scott/tiger | 
    gnuplot | 
      uuencode emp.png | 
        mailx email@example.com

The picture is sent as an attachment.

If you want to embed your png in an HTML mail, use sendmail

/usr/sbin/sendmail -t <<EOF
To: email@example.com
From: email@example.com
Subject: EMP
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="FILEBOUNDARY"

--FILEBOUNDARY
Content-Type: multipart/alternative; boundary="MSGBOUNDARY"

--MSGBOUNDARY
Content-Type: text/html
Content-Disposition: inline

<html><body>Below a graph...<br/>
<img src="cid:png"/><br/></body></html>
--MSGBOUNDARY--
--FILEBOUNDARY
Content-Type: image/png
Content-Disposition: inline; filename="png.png"
Content-Transfer-Encoding: base64
Content-Id: <png>

iVBORw0KGgoAAAANSUhEUgAAAoAAAAHgCAMAAAACDyzWAAABIFBMVEX///8AAACgoKD/AAAAwAAA
gP/AAP8A7u7AQADIyABBaeH/wCAAgEDAgP8wYICLAABAgAD/gP9//9SlKir//wBA4NAAAAAaGhoz
MzNNTU1mZmZ/f3+ZmZmzs7PAwMDMzMzl5eX////wMjKQ7pCt2ObwVfDg///u3YL/tsGv7u7/1wAA
/wAAZAAA/38iiyIui1cAAP8AAIsZGXAAAIAAAM2HzusA////AP8AztH/FJP/f1DwgID/RQD6gHLp
lnrw5oy9t2u4hgv19dyggCD/pQDugu6UANPdoN2QUEBVay+AFACAFBSAQBSAQICAYMCAYP+AgAD/
gED/oED/oGD/oHD/wMD//4D//8DNt57w//Cgts3B/8HNwLB8/0Cg/yC+vr7MAY9LAAAKJElEQVR4
nO3d0XaiSBSGUV15/3ee7unEoKLWwYIfrL0vJhNDmLn4FgQoPacTAAAAAAAAAAAAbOl8Ps++uvAX
oWhhgH+2+N3o/L/T/9+f/33X8f+Qj7awlatf+/7m75fvMBXII9/Hp5/D1M8//2Vz+dnMlje/frXL
y48EyHOTPK5LmqZz+cddT5cXb6M8TwJ0Cuax6eHqND0Cnq9ivPR0f7CbffF0VSw8dhXR7cHussXN
YW/ud29+LkBeuz2xPg/w0SlYgCw1ubQ4T06zk1Pw9MXfX7j99Ztd/qYqQAAAAAAAAACGMP983cpJ
NjL7rgVrx9nKZHGvANneb3Lnuy+XTaCsFuF5LsKfH3ctnuP5Kr38TznAx6dgAY5uzQBnyxMgV2ZT
e9pfezXfp+vZL8Vd8anWDPA1AQ5PgGTNxPa8PwHSkwDJusvtRX8CpKuvr+ff3xEgfU2Te5mfAOnv
6193Xw35CZBVfLXVdxIgYQIkSoBECZAoARIlQKIESJQAiRIgUQIkSoBECZAoARIlQKIESJQAiRIg
UQIkSoBECZAoARIlQKIESJQAiRIgUQIkSoBECZAoARIlQKIESFR9VNf5fmprfVfwT6GayWQ4k5Lo
pL2a8/nnCChAuimM6roeEjcZmHnZojZ8k9FVipkc7IxrpZ/2AC+1mpZJP+WrYAHSUzFA41rpy41o
ogRIlACJEiBRAiRKgEQJkCgBEiVAogRIlACJEiBRAiRKgEQJkCgBEiVAogRIlACJEiBRAiRKgEQJ
kCgBEiVAogRIlACJEiBRAiRKgEQJkCgBEiVAogRIlACJEiBRAiRKgEQJkCgBEmVQDVEL5gUb1UU/
5XnBAqSn+rzg892XyxbmBVOxYF7wfX2OgLyjOC/YKZi+zAsmym0YotyIJkqARAmQKAESJUCiBEiU
AIkSIFECJEqARAmQKAESJUCiBEiUAIkSIFECJEqARAmQKAESJUCiBEiUAIkSIFECJEqARAmQKAES
JUCiBEiUAIkSIFECJEqARAmQKAESJUCiBEiUAIkqzIr7GRNyPy6kuCu4KM+Ku/3OpCTeUR7Vdf6p
ToB0UD4F/w5uNa6V91SL+T3/GtdKJ0sDdAqmiwUXIQKkn+LfgMa10pcb0UQJkCgBEiVAogRIlACJ
EiBRAiRKgEQJkCgBEiVAogRIlACJEiBRAiRKgEQJkCgBEiVAogRIlACJEiBRAiRKgEQJkCgBEiVA
ogRIlACJEiBRAuR0+vqj53YFAhzed1Qv22rdrkaAg5vm9Cyt1u2qBPjBXp8xb3/+aPvW7eoE+Kma
zph3P3sUYON2dQL8TG1nzJkfzG7but0CAvxEzWfWppcESFHjGfOt2DoVaE7IB3qroZ0GaFLSgXxi
gP9vK8BDePPPuMVXxovUx7VeT8o0rnV/DhPggnGt9/U5Au7OYQL8yyn44zT3cqgAXYQcxkb3/NyG
GdfzZ7wfGuCmu+Kxl894Bch6Wp7xbnLPz6O4EbU94xUgK2m7QC2cMS3HoqD/AcuCVArWOGNakk+r
N863T1dFe1MSTQoBFs+Y3pZJg8rtuLXOmN0J8DCKj2TXOWN2J8DDqK8JWOGM2Z0AD2PVRSkxAjwM
AW63K+asuSYgRoDHIcDNdsUcAW62K2atuCglRoAHIsCtdsW89VZFxQjwSNZbFRUjwGM5zDPeVgI8
moM8420lwAM6wjPeVgIkSoBECZAoARIlQKIESJQAiRIgUQIkSoBECZAoARIlQKIEuB+ftMilmQB3
4sOW+TUT4C583ELnZgLcgQ98q0ez8rDC36+XF6u74sYHvtmt2YJRXdffGdX1tk98u2+zcoA/U1sF
2I0AW7edTik0L7iXj/zIlya1Yi7bmhfc17gB/lW4CJn8m1NwRwJs2m7u6kOAHXzmB582aw9w7v6L
2zDvE+AOdzUSAe5wVyMR4A53NRQXIfvb1VAEuL9dDUWA+9vVWDyK292uxiLA3e1qMJZj7W1Xg7Eg
dW+7Go4l+fva1YC8KWlPuxqTt2XuZlcMQ4BECZAoARIlQKIESJQAiRIgUQIkSoBECZAoARIlQKIE
SJQAiRIgUQIkSoBECZAoARIlQKIESJQAiRIgUQIkSoBECZCo4rhWc0Loa8G4VpOS6EeARNXHtZ7v
vlx+bFwrFQvGtd7X5wjIO4rjWp2C6cu4VqKMayXKjWiiBEiUAIkSIFECJEqARAmQKAESJUCiBEiU
AIkSIFECJEqARAmQKAESJUCiBEiUAIkSIFECJEqARAlwfV9/pP8fdkuAK/uOT4MPCHBV0+wkOEeA
K7pNToL3BLiiu94EeEeA65nJTYG3BLgeATYQ4GpmY1PgDQGuRoAtBLjcixvMAmwhwIVe3mB+8LoC
rwlwkYYbzAJsIsAFmm4wC7CJABdousEswCYCrGu8v+cipIUA6wTYkQDLWsMSYIvKqK7To3Eh1V0d
W3NYHsU1KIzq+lfc9zcDT0oSYE/Nk5ImR8CxAyxc3VqO9Vr5FDwztfWywRDjWisBWpD6VHVc6++/
jDyutXR/z5L8V5YG6BTc9rI3Jb2w+BQ8bID12yvelvmEca1l7u/15EZ0mQB7EmCd+3sdCbBOgB0J
8M7rawY3mPsR4LWmuyZuMPcjwKnm+8ZuMPciwF+lA5sbzH0I8Ff1Tzs3mDs4foDNGbza0MVtwsED
bD4RNmwowIRDB9j1msEDjogDB9h8zdC2oQAjjhzgyxdKGwow4rgBNv/J1rah95FnCPDp7wpwbYcN
8J33ps29KMAMAT7bTICrE2B5h/R01ACbD1jvbSjAtQnw6YYCXJsAiy/RlwCLL9HXfgPs9Rn0bzz2
0N/6dhrg68UrawRopf32dhlg18UrlYsLK+03t8MAGw9EnR/FXf5jVtpvao8Bvnzh0asdAjxZab+t
/QW4PKz3lmMRceQA+y5IJWJ3Aa5yzeDiYrcOHWDhmsHFxU4dPMBT4ZrBxcUe7S1Ay/IGI0CiBEiU
AInaPMAFn9Civw+2bYDLPqFFgB9sywCbbgcLcCzbBdj6QMzC5KFUR3UtH1TTuiRAgENZMC942aiu
9q4sXhlJeV7wBgFavDKQ+rDC61GtrfOCS9cWFq+MYcG84Pv6Go+AxYtbi1dGsdW41vrdFYtXhrBR
gJ6wMW+jecECZN5GN6IFyDwBErXVoziPeJnVM8AunyXEWLoeAXt8lhBj6X0Kriyz0h8r/A347mcJ
MZQVLkIscqHdGlfBb36UCyPZLkCLXJixyn3A9z/KhVFsG+DJIheubR4gTK3zKE6BNBIgUQIkSoBE
uQghSoBECZCoLR/FwR0BErXdciyYsdmCVJiz2ZJ8mLPVm5Jg1lZvy4RZe5sTwmAESJQAiRIgUQIk
SoBECZAoARIlQKIESJQAiRIgUcVq5seFLNoVnBYEePlSnJYJc5YcAQVINwuOgE/HtULNgmYfjGuF
TTyalgmrezauFdb3fda+/gIAAAAAAAC0+Q9+0RhW4FaMYwAAAABJRU5ErkJggg==
--FILEBOUNDARY--
EOF

Sendmail syntax and more tips on : Sending Emails With Sendmail – Part 3

On the number of installed components

I recently posted about fine-grained access to network services. More precisely, I posted about the new requirement to have XDB to be able to send a mail or do an nslookup in 11g.

What options should you install on your database?


SQL> select COMP_NAME,VERSION from DBA_REGISTRY;

COMP_NAME                                VERSION
---------------------------------------- ----------
Oracle Database Catalog Views            11.2.0.2.0
Oracle Database Packages and Types       11.2.0.2.0

What else do you need? If you have java, you will need a java pool. If you have xdb, you will need an xdb schema. The more options you install, the more bugs you will get, the bigger the dictionary will be, the more memory you will need.

But in my experience the worst part of having java, xdb, olap and family installed on your database is that every upgrade will take you hours instead of minutes ! That’s for me a sufficient argument to stick to catalog and catproc (the top base components listed above).

how to run UTL_TCP, UTL_SMTP and the like in 11g

After we upgraded a db to 11g, someone complained about an ORA-24248: XML DB extensible security not installed

I thought it would be easy to revert to the 10g mechanism. Probably not, after reading Marco:
The default behavior for access control to network utility packages has been changed to disallow network operations to all nonprivileged users. This default behavior is different from, and is incompatible with, previous versions of Oracle Database.

I do not want to install XDB to send mail. Sounds like overkill…

Ok, as a hard-core dba I created a wrapper in the sys schema, something you probably should not do!

ex:
10g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
UTL_INADDR.GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

after upgrade
11g


SQL> conn scott/tiger
Connected.
SQL> select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual;
select utl_inaddr.GET_HOST_ADDRESS('localhost') from dual
       *
ERROR at line 1:
ORA-24248: XML DB extensible security not installed
ORA-06512: at "SYS.UTL_INADDR", line 19
ORA-06512: at "SYS.UTL_INADDR", line 40
ORA-06512: at line 1

My workaround to “disable” Fine-Grained Access to External Network Services


SQL> conn / as sysdba
SQL> create or replace function my_utl_inaddr_GET_HOST_ADDRESS(HOST VARCHAR2) return VARCHAR2 is begin return utl_inaddr.GET_HOST_ADDRESS(HOST); end;
  2  /

Function created.

SQL> grant execute on my_utl_inaddr_GET_HOST_ADDRESS to scott;

Grant succeeded.
SQL> conn scott/tiger
Connected.
SQL> select sys.my_utl_inaddr_GET_HOST_ADDRESS('localhost') from dual;
SYS.MY_UTL_INADDR_GET_HOST_ADDRESS('LOCALHOST')
--------------------------------------------------
127.0.0.1

If you want to use the recommended way of granting access to utl_tcp and the like, check note 453756.1
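
For reference, a minimal sketch of that recommended way, with a made-up ACL file name (the note has the full story) :

exec dbms_network_acl_admin.create_acl('resolve.xml','resolve','SCOTT',true,'resolve')
exec dbms_network_acl_admin.assign_acl('resolve.xml','localhost')
commit;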

Do not upgrade 11.2.0.1 to 11.2.0.1

If you run @?/rdbms/admin/catupgrd from an 11.2.0.1 Oracle Home against a database that is already at 11.2.0.1, you may later realize some objects are missing (probably related to deferred segment creation).


SQL>   delete from t1
  2    where id in (
  3      select ca.id from ca, p
  4      where p.no_form like '%02.98'
  5      and p.id = ca.prod_id
  6    );
  delete from t1
              *
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound],
[56480], [], [], [], [], [], [], [], [], [], []

The workaround on metalink is amazing :

  • Do not run catupgrd in 11.2.0.1 against a database that is already at 11.2.0.1

Well, the only solution is to restore your database ! How painful :-(

How to change the connection string of the Oracle Enterprise Manager Grid Control 11g repository

If you moved your repository to a new host and want to change the connection string, there is no need to drop it and no need to mess with the properties or xml files; simply read the doc

http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/ha_agent.htm#autoId13

emctl config oms -store_repos_details (-repos_host <host> -repos_port <port> -repos_sid <sid> | -repos_conndesc <connect descriptor>) -repos_user <username> [-repos_pwd <pwd>] [-no_check_db]

Yes it works!

List of table and column privileges, including those via roles

I could not find this quickly enough in google so I wrote it myself.

The list of table privileges, with a connect by subquery.

 COL roles FOR a60
COL table_name FOR a30
col privilege for a9
set lin 200 trims on pages 0 emb on hea on newp none

  SELECT *
    FROM (    SELECT CONNECT_BY_ROOT grantee grantee,
                     privilege,
                     REPLACE (
                        REGEXP_REPLACE (SYS_CONNECT_BY_PATH (granteE, '/'),
                                        '^/[^/]*'),
                        '/',
                        ' --> ')
                        ROLES,
                     owner,
                     table_name,
                     column_name
                FROM (SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             NULL column_name
                        FROM DBA_TAB_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT PRIVILEGE,
                             GRANTEE,
                             OWNER,
                             TABLE_NAME,
                             column_name
                        FROM DBA_COL_PRIVS
                       WHERE owner NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')
                      UNION 
                      SELECT GRANTED_ROLE,
                             GRANTEE,
                             NULL,
                             NULL,
                             NULL
                        FROM DBA_ROLE_PRIVS
                       WHERE GRANTEE NOT IN
                                ('SYS',
                                 'SYSTEM',
                                 'WMSYS',
                                 'SYSMAN',
                                 'MDSYS',
                                 'ORDSYS',
                                 'XDB',
                                 'WKSYS',
                                 'EXFSYS',
                                 'OLAPSYS',
                                 'DBSNMP',
                                 'DMSYS',
                                 'CTXSYS',
                                 'WK_TEST',
                                 'ORDPLUGINS',
                                 'OUTLN',
                                 'ORACLE_OCM',
                                 'APPQOSSYS')) T
          START WITH grantee IN (SELECT username FROM dba_users)
          CONNECT BY PRIOR PRIVILEGE = GRANTEE)
   WHERE table_name IS NOT NULL AND grantee != OWNER
ORDER BY grantee,
         owner,
         table_name,
         column_name,
         privilege;

sample output


GRANTEE PRIVILEGE ROLES           OWNER  TABLE_NAME COLUMN_NAME
------- --------- --------------- ------ ---------- -----------
U       UPDATE     --> R          SCOTT  DEPT       DNAME      
U       SELECT                    SCOTT  EMP                   
U2      UPDATE     --> R2 --> R   SCOTT  DEPT       DNAME      

Reduce the number of commits

“Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms”
Op. Cit. Oracle Database Performance Tuning Guide 11g Release 2 (11.2)

Ok, let’s do this :-)

  • Finding Waits on event “log file sync” while performing COMMIT and ROLLBACK operations were consuming significant database time.
  • Action Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions.

If your application is committing too often (maybe Enterprise Manager told you so), you may want to commit less often. Or maybe just do some magic to impress your customer.

As seen on Metalink 857576.1: if you can afford data loss, if you cannot change your application, and if you are that kind of dba who cares more about making a badly written application perform than about data integrity, just have a quick look…

Ok, ins.sql is 30’000 inserts and commits:


insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;
insert into scott.t values(1);
commit;

Let’s check the time on my old-fashioned pc…


SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:40:55.881948 +02:00
SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:41:19.115447 +02:00

23.3 seconds

Now take the risk of losing some commits (but yes, it is documented, no hidden parameter) to boost your performance


SQL> alter session set commit_wait=nowait commit_logging=immediate;
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:37.284027 +02:00

SQL> @ins
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------------------------
2011-08-29 20:43:54.084547 +02:00

Hey hey hey, 16.8 seconds only :-)

PS: it used to be called commit_write in 10g
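
From memory, the 10g equivalent of the session settings above was a single parameter taking both values :

alter session set commit_write='IMMEDIATE,NOWAIT';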

vi large files

Once upon a time a colleague asked me if there was a better editor than vi installed on my db server. Well, I was not really up for arguing the benefits of ed (less memory usage, no useless error messages). But one advantage of ed was (I believed) the ability to read large files


$ vi alert_DB01.log
Out of memory saving lines for undo - try using ed
:ed
"alert_DB01.log" 612637 lines, 20458891 characters
:612636
Thu Aug 25 16:08:20 2011
LNS: Standby redo logfile selected for thread 1 sequence 6898 for destination LOG_ARCHIVE_DEST_2
:q

Of course most readers are using some clicky fancy tools with colors and/or unlimited undo.

Well, for those who use vi on AIX, check this


$ vi -y 9999999 alert_DB01.log
Tue Oct 26 13:59:12 2010
Starting ORACLE instance (normal)
sskgpgetexecname failed to get name
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
"alert_DB01.log" 612662 lines, 20459538 characters

Oh yes !!!

Number of primes below 1000

Do not expect a SQL answer there. My daughter (9) came home from school with this question, and I suspect the expected homework was to google for an answer. I am pretty shocked by such practice; teachers encouraging google to do your homework are clearly not from my generation…

Ok, back to the rules, pen and paper. We write down all numbers from 1 to 1000 and start graying out the 1, then the 4, 6, 8, 10, 12, … Then the 9, 15, 21, 27, … Then the 25, 35, 55, … Then, for 7, the 49, 77, … And so on.

After having grayed out 1 one, 499 twos, 166 threes, 66 fives, 37 sevens, 20 elevens, 16 thirteens, 10 seventeens, 8 nineteens, 6 twentythrees, 2 twentynines and 1 thirtyone, we counted the remaining numbers and got to the solution. We used the calculator to gray out numbers like 23*43. But for sure no google there.
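
If you want to double-check the count: 1+499+166+66+37+20+16+10+8+6+2+1 = 832 numbers grayed out, and 1000-832 = 168, which is indeed the number of primes below 1000.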

I had a similar feeling when my boy’s teacher asked him to look for a description of some exotic butterfly on wikipedia. Is this all that the new generation of kids can do? Search on google?

Back to google, I once asked a young candidate in an interview: “How would you speed up a cursor for loop that increases the salary of the employees by 10% ?”. Answer: “I would search on google”.

Not worth mentioning all my traumatic experiences with IT specialists who advise turning on some tuning magic they found on google to solve the performance issues of their own poorly written code.

Using google is good. But do your homework kids :-)

nothing in user_segments

I wrote on deferred segment creation recently.

Today I was looking for specific storage attributes that I used to find in user_segments. They are no longer here. Where are they then?

test case :
create table t(x clob) lob (x) store as securefile x (retention max storage(maxsize 8192000000));

Where do I find the retention max storage maxsize of my securefile? Once the segment is created, it is easy to find in user_segments

SQL> insert into t values('x');
1 row created.

 SQL> select max_size from user_segments where segment_name='X';
  MAX_SIZE
----------
   1000000

1000000 blocks of 8k is my specified 8192000000 bytes.

let’s go back

SQL> truncate table t drop all storage;

Table truncated.

SQL> select max_size from user_segments where segment_name='X';

no rows selected

It is not there.

You must dig into the sys tables to find out. Specifically, there is a new table for the deferred segments

SQL> select maxsiz_stg from sys.deferred_stg$ where obj# in (select obj# from sys.obj$ where name='X');
MAXSIZ_STG
----------
   1000000

Of course you should not base your business logic on internal tables that may change in a next release. A more appropriate workaround would be to disable deferred segment creation :)

How to unload blob from the database?

There is more than one post on how to unload a blob from the database, mostly in plsql with utl_file.put_raw (see note 330146.1) or in java with FileOutputStream (see note 247546.1).

Unfortunately both are terribly slow, due to the 32k limitation of put_raw in utl_file and due to the low “optimum buffer size” retrieved by myBlob.getBufferSize(). So I increased the java stream buffer to 20M, or to the size of the lob, whichever is smaller.

The code is mostly copy-pasted from metalink. But I changed the size from getbuffersize() to length().

Using java for large blob (read large binary large object) is about 3x faster than plsql in this test.


SQL> CREATE USER USER1 IDENTIFIED BY SeCrEt;

User created.

Elapsed: 00:00:00.04
SQL>
SQL> GRANT CREATE SESSION , CREATE PROCEDURE TO USER1;

Grant succeeded.

Elapsed: 00:00:00.01
SQL>
SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE JAVA SOURCE NAMED "BlobHandler"
  2     AS import java.lang.*;
  3  import java.sql.*;
  4  import oracle.sql.*;
  5  import java.io.*;
  6  public class BlobHandler {
  7    public static void ExportBlob(String myFile, BLOB myBlob) throws Exception {
  8      File binaryFile = new File(myFile);
  9      FileOutputStream outStream = new FileOutputStream(binaryFile);
 10      InputStream inStream = myBlob.getBinaryStream();
 11      int size;
 12      if (myBlob.length()> 20000000) {  // tune this to whatever appropriate value
 13        size = 20000000;
 14      } else {
 15        size = (int)myBlob.length();
 16      }
 17      byte[] buffer = new byte[size];
 18      int length = -1;
 19      while ((length = inStream.read(buffer)) != -1)
 20      {
 21        outStream.write(buffer, 0, length);
 22        outStream.flush();
 23      }
 24      inStream.close();
 25      outStream.close();
 26    }
 27  }
 28  /

Java created.

Elapsed: 00:00:00.16
SQL>
SQL> ALTER JAVA SOURCE "BlobHandler" COMPILE;

Java altered.

Elapsed: 00:00:00.37
SQL>
SQL> sho error
No errors.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobJava (p_file   IN VARCHAR2,
  2                                              p_blob   IN BLOB)
  3  AS
  4     LANGUAGE JAVA
  5     NAME 'BlobHandler.ExportBlob(java.lang.String, oracle.sql.BLOB)';
  6  /

Procedure created.

Elapsed: 00:00:00.10
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> EXEC Dbms_Java.Grant_Permission( 'USER1', 'SYS:java.io.FilePermission','/tmp/javatest', 'write' )

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.52
SQL> 

A comment here: if you do care about your database server (and you probably won’t GRANT DBA TO PUBLIC), never grant full unlimited access to your filesystem, as shockingly suggested by metalink with Dbms_Java.Grant_Permission( 'SCOTT', 'java.io.FilePermission', '*', 'read ,write, execute, delete');

let’s move on


SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> CREATE OR REPLACE PROCEDURE ExportBlobPlsql (p_dir    IN VARCHAR2,
  2                                               p_file   IN VARCHAR2,
  3                                               p_blob   IN BLOB)
  4  AS
  5     blob_length     INTEGER;
  6     out_file        UTL_FILE.FILE_TYPE;
  7     v_buffer        RAW (32767);
  8     chunk_size      BINARY_INTEGER := 32767;
  9     blob_position   INTEGER := 1;
 10  BEGIN
 11     blob_length := DBMS_LOB.GETLENGTH (p_blob);
 12     out_file :=
 13        UTL_FILE.FOPEN (p_dir,
 14                        p_file,
 15                        'wb',
 16                        chunk_size);
 17     WHILE blob_position <= blob_length
 18     LOOP
 19        IF blob_position + chunk_size - 1 > blob_length
 20        THEN
 21           chunk_size := blob_length - blob_position + 1;
 22        END IF;
 23
 24        DBMS_LOB.READ (p_blob,
 25                       chunk_size,
 26                       blob_position,
 27                       v_buffer);
 28        UTL_FILE.PUT_RAW (out_file, v_buffer, TRUE);
 29        blob_position := blob_position + chunk_size;
 30     END LOOP;
 31     UTL_FILE.FCLOSE (out_file);
 32  END;
 33  /

Procedure created.

Elapsed: 00:00:00.08
SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

Directory created.

Elapsed: 00:00:00.03
SQL>
SQL> grant write on directory tmp to user1;

Grant succeeded.

Elapsed: 00:00:00.02
SQL>

Here again, I give write access to one directory, I do not grant dba to public…

SQL> connect user1/SeCrEt
Connected.
SQL>
SQL> VAR c BLOB
SQL>
SQL> exec :c := UTL_RAW.cast_to_raw ('X'); FOR i IN 1 .. 15 LOOP  DBMS_LOB.append (:C, :C); END LOOP

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
        32

Elapsed: 00:00:00.06

I have created a 32K BLOB variable, and I am doubling its size each time to see how it scales

SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
        64

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       128

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       256

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> SELECT DBMS_LOB.getlength (:c)/1024 KB FROM DUAL;
        KB
----------
       512

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         1

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         2

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         4

Elapsed: 00:00:00.00
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
         8

Elapsed: 00:00:00.01
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL>

For small files, we do not see much of a difference between java and plsql. This is because the 32k buffer of plsql is acceptable for small files.


SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        16

Elapsed: 00:00:00.02
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.20
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        32

Elapsed: 00:00:00.04
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.46
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
        64

Elapsed: 00:00:00.08
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.45
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.90
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.60
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       128

Elapsed: 00:00:00.14
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.96
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.77
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.08
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       256

Elapsed: 00:00:00.30
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.24
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.65
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:00:34.70
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
       512

Elapsed: 00:00:00.70
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.92
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.81
SQL>
SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:01:32.96
SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
      1024

Elapsed: 00:00:01.65
SQL> EXEC ExportBlobJava('/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.90
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:44.13

SQL> exec DBMS_LOB.append (:C, :C)

PL/SQL procedure successfully completed.

Elapsed: 00:03:33.14

SQL> SELECT DBMS_LOB.getlength (:c)/1024/1024 MB FROM DUAL;
        MB
----------
      2048

Elapsed: 00:00:00.00

SQL> EXEC ExportBlobJava('/u99/backup/sw/tmp/javatest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:00:26.10
SQL> EXEC ExportBlobPlsql('TMP','/plsqltest',:c)

PL/SQL procedure successfully completed.

Elapsed: 00:01:16.90

The larger the file, the more beneficial it is to have a large write buffer (which could well be larger than the 20M in this example). If java is not installed in the database, you can also retrieve the blob from the database with a client and write the stream to a local file.

The reasons why I always avoid shutdown abort

It is a common practice to always shutdown abort the database before restarting it and then shutting it down immediate. This is because SHUTDOWN IMMEDIATE sometimes takes ages, for instance due to a huge transaction to be rolled back.

I do not like it. At all.

First, the chance exists that you won’t be able to start the database anymore. I have not heard of or met anyone who had this issue since Oracle7, but I still believe it.

Second, shutdown abort is very useful if something goes seriously wrong. But if something goes wrong, you may want to find out what it is.

Third, you may hit more bugs than if you close normally. And you may get less help from support if the issue is due to an abusive shutdown abort. YMMV

Ok, small demo to preach to the converted
disclaimer: this demo is not innocent, do not try this on your database

SQL> create flashback archive fa tablespace ts retention 1 day;

Flashback archive created.

SQL> create table t(x number primary key);

Table created.

SQL> alter table t flashback archive fa;

Table altered.

SQL> insert into t values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from DBA_SEGMENTS where tablespace_name='TS';

no rows selected

I have created a flashback archive table, and the committed transaction is not written down to the flashback tablespace yet.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup 
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.


SQL> sho parameter undo_tablespace
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_tablespace                      string      UNDO1
SQL> create undo tablespace undo2 datafile '/u02/oradata/@/undo2_01.dbf' size 10m reuse;

Tablespace created.

SQL> alter system set undo_tablespace=undo2;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  417546240 bytes
Fixed Size                  2227072 bytes
Variable Size             234882176 bytes
Database Buffers          171966464 bytes
Redo Buffers                8470528 bytes
Database mounted.
Database opened.

I have switched the undo tablespace. So far so good. But remember, the flashback archive did not write to the flashback tablespace before the shutdown abort.


SQL> drop tablespace undo1 including contents and datafiles;
drop tablespace undo1 including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU1_3544069484$' found, terminate dropping tablespace

You see… I cannot drop my old undo tablespace.

Q.E.D.

On deferred segment creation

What’s deferred segment creation? It is a feature that saves lots of time during releases and lots of space if you have a (legacy) application with 10’000 tables, most of them empty.

When was it introduced ? Partly in 11.2.0.1 and partly in 11.2.0.2, depending on the object type.

What’s the opposite ? immediate segment creation

What’s the default ? deferred segment creation

How does it work ?
create table t1(x number) segment creation immediate;
and
create table t1(x number) segment creation deferred;

Where is the doc? Start with Understand Deferred Segment Creation

Ok, now my 2 cents on this feature. It is a major change to one of the most basic elements of Oracle, the segment. This behavior will bring its bunch of surprises to your dba scripts.

1) you will not see the segment in dba_segments
2) if you drop the tablespace containing the object without the INCLUDING CONTENTS clause, the drop tablespace will succeed and the table will remain

SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m;

Tablespace created.

SQL> create table t(x number) tablespace ts;

Table created.

SQL> drop tablespace ts;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00959: tablespace 'TS' does not exist

Neither SELECT nor DROP is possible at that stage

To quickly identify those almost-nonexistent tablespaces you may use this query


SELECT TABLESPACE_NAME FROM ALL_CLUSTERS UNION
SELECT TABLESPACE_NAME FROM ALL_INDEXES UNION
SELECT TABLESPACE_NAME FROM ALL_IND_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_IND_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOBS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_LOB_SUBPARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TABLES  UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_PARTITIONS UNION
SELECT TABLESPACE_NAME FROM ALL_TAB_SUBPARTITIONS MINUS
select tablespace_name from dba_tablespaces
;

TABLESPACE_NAME
------------------------------
TS

then you can recreate it and, if wished, drop it with contents


SQL> create tablespace ts datafile '/u02/oradata/@/ts.dbf' size 1m reuse;

Tablespace created.

SQL> drop tablespace ts including contents and datafiles;

Tablespace dropped.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00942: table or view does not exist

The table is gone for real.

If you based some scripts on dba_segments to list the content of the tablespaces, you probably should check the assigned tablespace in the tables, indexes, lobs and (sub)partitions DBA_ views too.

I met this feature while using transportable tablespaces: the objects that belong to the tablespace but have no segment are not transported along.

There are a bunch of published bugs on Metalink regarding deferred segment creation. An easy workaround is to not use the feature, by setting the initialization parameter DEFERRED_SEGMENT_CREATION to false. This of course affects only new objects.
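
The workaround itself is a one-liner :

alter system set deferred_segment_creation=false;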

I am always very cautious about those major changes affecting the dba scripts on the dictionary. While selecting from the base dictionary tables (TAB$, COL$, …) is never recommended, selecting from the USER_ and DBA_ views is supposed to be backward compatible; still, dba scripts that used to work in a previous release may break here… This is obviously the price to pay to get new features, right?

Datapump : table like ‘FOO%’ or like ‘BAR%’

Today I tried to put two like conditions in an INCLUDE clause of datapump.

I have the following tables

SQL> select table_name from user_tables order by 1;

TABLE_NAME
------------------------------
AAA
BAR1   ***
BAR2   ***
BLA
FOO    ***
FOO1   ***
GOZ

and I want tables like 'BAR%' and tables like 'FOO%'

First try :

$ expdp scott/tiger include=table:"like'FOO%'or like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:13 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00936: missing expression

This does not work, because it would translate into WHERE {tablename} like 'FOO%' or like 'BAR%', which is not valid SQL.

Ok, second try, let’s put multiple TABLE clauses

$ expdp scott/tiger include=table:"like'FOO%'",table:"like'BAR%'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:15 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:"like'FOO%'",table:"like'BAR%'" Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 11:47:21

No data was found, because it translated into WHERE {tablename} like 'FOO%' AND {tablename} like 'BAR%', which returns no rows.

Ok, multiple INCLUDE conditions are joined by AND, so let’s do the math: (like 'BAR%' OR like 'FOO%') is the same as (>= 'BAR' AND < 'FOP' AND NOT BETWEEN 'BAS' AND 'FONZZZ'), where FONZZZ is immediately smaller than FOO.

$ expdp scott/tiger include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"

Export: Release 11.2.0.2.0 - Production on Thu Jul 14 11:47:21 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  scott/******** include=table:">='BAR'",table:"not between 'BAS' and 'FONZZZZZZZZZZZZZ'",table:"<'FOP'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."BAR1"                             5.007 KB       1 rows
. . exported "SCOTT"."BAR2"                             5.007 KB       1 rows
. . exported "SCOTT"."FOO"                              5.007 KB       1 rows
. . exported "SCOTT"."FOO1"                             5.007 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DB01/dmp/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:47:31

It is that simple ;)

On star transformation

How to configure a schema to support a star transformation query?

This is one of the topics I am currently preparing for the OCM DBA 11g upgrade exam for 9i OCM’s.

Let’s create a star schema with a fact table T1 and two dimension tables T2 and T3 :


> create table t2(y number constraint t2_pk primary key, yy varchar2(50))
table t2 created.
> create table t3(z number constraint t3_pk primary key, zz varchar2(50))
table t3 created.
> create table t1(x number constraint t1_pk primary key, 
      y number constraint t1_t2_fk references t2, 
      z number constraint t1_t3_fk references t3)
table t1 created.

According to the Oracle Database Data Warehousing Guide, chapter 20 Schema Modeling Techniques
A prerequisite of the star transformation is that there be a single-column bitmap index on every join column of the fact table. These join columns include all foreign key columns.


> create bitmap index bi1 on t1(y)
bitmap index bi1 created.
> create bitmap index bi2 on t1(z)
bitmap index bi2 created.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Obviously star_transformation_enabled must be set to true. It was known to be buggy in the first releases; I hope most bugs have been fixed in the meantime.


> alter session set star_transformation_enabled=true
session set altered

Let’s try the transformation with a query similar to the one found in the doc :

> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3 
      where yy='one' and zz in ('two','three')
COUNT(*)               
---------------------- 
0                      

Plan hash value: 3024982001
 
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                  |     1 |   106 |     1 (100)| 00:00:01 |
|   1 |  SORT AGGREGATE                   |                  |     1 |   106 |            |          |
|   2 |   NESTED LOOPS                    |                  |       |       |            |          |
|   3 |    NESTED LOOPS                   |                  |     1 |   106 |     1 (100)| 00:00:01 |
|   4 |     NESTED LOOPS                  |                  |     1 |    66 |     1 (100)| 00:00:01 |
|   5 |      VIEW                         | index$_join$_001 |     1 |    26 |     1 (100)| 00:00:01 |
|*  6 |       HASH JOIN                   |                  |       |       |            |          |
|   7 |        BITMAP CONVERSION TO ROWIDS|                  |     1 |    26 |     0   (0)| 00:00:01 |
|   8 |         BITMAP INDEX FULL SCAN    | BI1              |       |       |            |          |
|   9 |        BITMAP CONVERSION TO ROWIDS|                  |     1 |    26 |     0   (0)| 00:00:01 |
|  10 |         BITMAP INDEX FULL SCAN    | BI2              |       |       |            |          |
|* 11 |      TABLE ACCESS BY INDEX ROWID  | T2               |     1 |    40 |     0   (0)| 00:00:01 |
|* 12 |       INDEX UNIQUE SCAN           | T2_PK            |     1 |       |     0   (0)| 00:00:01 |
|* 13 |     INDEX UNIQUE SCAN             | T3_PK            |     1 |       |     0   (0)| 00:00:01 |
|* 14 |    TABLE ACCESS BY INDEX ROWID    | T3               |     1 |    40 |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access(ROWID=ROWID)
  11 - filter("T2"."YY"='one')
  12 - access("T1"."Y"="T2"."Y")
  13 - access("T1"."Z"="T3"."Z")
  14 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')

> set autotrace off
Autotrace Disabled

Something is missing. The star transformation is not used. What could it be???

Data !

We need to load truckloads of data into the T1 table and a few rows into the T2 and T3 tables.

  
> insert into t2 select rownum, 
      to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp') 
      from dual connect by level<11
10 rows inserted.
> insert into t3 select rownum, 
      to_char(to_timestamp(to_char(rownum/1e9,'.000000000'),'.ff'),'ffsp') 
      from dual connect by level<11
10 rows inserted.
> insert into t1 select rownum, mod(rownum,7)+1, mod(rownum,8)+1 
      from dual connect by level<100001
100,000 rows inserted.
> commit
committed.
> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t2',cascade=>true)
anonymous block completed
> exec dbms_stats.gather_table_stats(user,'t3',cascade=>true)
anonymous block completed

Let’s try the same query again


> set autotrace on exp
Autotrace Enabled
Displays the execution plan only.
> select count(*) from t1 natural join t2 natural join t3
      where yy='one' and zz in ('two','three')
COUNT(*)               
---------------------- 
3570                   

Plan hash value: 3170767457
 
-------------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     1 |     6 |    12   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |      |     1 |     6 |            |          |
|   2 |   BITMAP CONVERSION COUNT    |      |  2857 | 17142 |     6   (0)| 00:00:01 |
|   3 |    BITMAP AND                |      |       |       |            |          |
|   4 |     BITMAP MERGE             |      |       |       |            |          |
|   5 |      BITMAP KEY ITERATION    |      |       |       |            |          |
|*  6 |       TABLE ACCESS FULL      | T2   |     1 |     8 |     3   (0)| 00:00:01 |
|*  7 |       BITMAP INDEX RANGE SCAN| BI1  |       |       |            |          |
|   8 |     BITMAP MERGE             |      |       |       |            |          |
|   9 |      BITMAP KEY ITERATION    |      |       |       |            |          |
|* 10 |       TABLE ACCESS FULL      | T3   |     2 |    16 |     3   (0)| 00:00:01 |
|* 11 |       BITMAP INDEX RANGE SCAN| BI2  |       |       |            |          |
-------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - filter("T2"."YY"='one')
   7 - access("T1"."Y"="T2"."Y")
  10 - filter("T3"."ZZ"='three' OR "T3"."ZZ"='two')
  11 - access("T1"."Z"="T3"."Z")
 
Note
-----
   - star transformation used for this statement

> set autotrace off
Autotrace Disabled

That’s it ! With 100K rows in the fact table and 10 rows in each dimension table, the transformation kicks in ! Star transformation is a cost-based decision: with empty tables, the optimizer simply does not consider it worth the effort :)
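
If you want to experiment without loading data, you could also try the STAR_TRANSFORMATION hint. This is my own sketch, and the optimizer remains free to ignore the hint when the transformation is not legal or not judged beneficial:


select /*+ STAR_TRANSFORMATION */ count(*)
  from t1 natural join t2 natural join t3
  where yy='one' and zz in ('two','three');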

On implicit commit

An explicit commit is when you issue a COMMIT statement

SQL> create table t(x number);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> commit;

Commit complete.

An implicit commit is when a commit is issued without your approval.

ex: AUTOCOMMIT (default is OFF)

SQL> set autoc on
SQL> insert into t values(1);

1 row created.

Commit complete.

ex: EXITCOMMIT (default is ON)

SQL> set autoc off exitc on
SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> conn scott/tiger
Connected.
SQL> select * from t;
         X
----------
         1

ex: before / after a successful DDL statement

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(x);

Index created.

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

ex: before / after an unsuccessful DDL statement, sometimes :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t(blabla);
create index i on t(blabla)
                    *
ERROR at line 1:
ORA-00904: "BLABLA": invalid identifier

SQL> rollback;

Rollback complete.

SQL> select * from t;
         X
----------
         1

But not always :

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1);

1 row created.

SQL> create index i on t();
create index i on t()
                    *
ERROR at line 1:
ORA-00936: missing expression

SQL> rollback;

Rollback complete.

SQL> select * from t;

no rows selected

In the last case, the statement failed at parse time, so no DDL was executed and nothing was committed; in the case before that, the DDL was executed and failed, after the implicit commit had already been issued.

If you want to commit, use COMMIT :)
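
If you do not want SQL*Plus to commit behind your back on exit, you can turn EXITCOMMIT off (my suggestion; the setting exists in SQL*Plus 11.1 and later):

SQL> set exitcommit off

With this setting, a pending insert is rolled back at disconnect time instead of being silently committed.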

sqlplus -prelim

If you cannot log in to the database, for instance due to ORA-00020 maximum number of processes exceeded, there is a chance that you can still use the -prelim option.

Documented in note 121779.1 for sqlplus version 10.1 and later :
In some cases, no connections are allowed on the instance (in some ORA-20 situations for example).
As of 10.1.x, there is a new option with SQL*Plus to allow access to an instance to
generate traces.
sqlplus -prelim / as sysdba

Only a sysdba connection is possible.

sqlplus -prelim system/manager

SQL*Plus: Release 11.2.0.2.0 Production on Mon Jul 4 10:38:36 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-24300: bad value for mode

And very little access is granted

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

You can shutdown abort and then restart your database, instead of rebooting a server where other instances may be running.

This is the last chance before a reboot. Before resorting to it, consider disconnecting or killing some user sessions to get a regular sqlplus / as sysdba.
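
For instance, a typical last-resort sequence could be to capture a hang trace before aborting. This is only a sketch; the oradebug dump and level to use depend on your actual problem:

sqlplus -prelim / as sysdba

SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> shutdown abort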

CSV part 4, fast !!

I got some comments that my other csv solutions were slow to export gigabytes of data.

One more try.

Thanks to the feedback, here is a new version.

It can generate very large files in just a few minutes (instead of hours).

I use BULK COLLECT and UTL_FILE to boost performance:

CREATE TYPE collist IS TABLE OF VARCHAR2 (4000)
/

CREATE OR REPLACE PROCEDURE bulk_csv (directory_name    VARCHAR2,
                                      file_name         VARCHAR2,
                                      query        VARCHAR2)
   AUTHID CURRENT_USER
IS
   -- $Id$
   fh             UTL_FILE.file_type;
   stmt           VARCHAR2 (32767) := NULL;
   header         VARCHAR2 (32767) := NULL;
   curid          NUMBER;
   desctab        DBMS_SQL.DESC_TAB;
   colcnt         NUMBER;
   namevar        VARCHAR2 (32767);

   TYPE cola IS TABLE OF collist
                   INDEX BY BINARY_INTEGER;

   res            cola;
   rcur           SYS_REFCURSOR;
   current_line   VARCHAR2 (32767);
   next_line      VARCHAR2 (32767);

BEGIN
   curid := DBMS_SQL.open_cursor;
   DBMS_SQL.parse (curid, query, DBMS_SQL.NATIVE);
   DBMS_SQL.DESCRIBE_COLUMNS (curid, colcnt, desctab);

   FOR i IN 1 .. colcnt
   LOOP
      DBMS_SQL.DEFINE_COLUMN (curid,
                              i,
                              namevar,
                              32767);
   END LOOP;

   IF DBMS_SQL.execute (curid) = 0
   THEN
      FOR i IN 1 .. colcnt
      LOOP
         IF (i > 1)
         THEN
            header := header || ';';
            stmt := stmt || ',';
         END IF;

         header := header || desctab (i).col_name;
         stmt :=
               stmt
            || CASE
                  WHEN desctab (i).col_type IN
                          (DBMS_SQL.Varchar2_Type,
                           DBMS_SQL.Char_Type)
                  THEN
                     '"'||desctab (i).col_name || '"'
                  WHEN desctab (i).col_type IN
                          (DBMS_SQL.Number_Type,
                           DBMS_SQL.Date_Type,
                           DBMS_SQL.Binary_Float_Type,
                           DBMS_SQL.Binary_Double_Type,
                           DBMS_SQL.Timestamp_Type,
                           DBMS_SQL.Timestamp_With_TZ_Type,
                           DBMS_SQL.Interval_Year_to_Month_Type,
                           DBMS_SQL.Interval_Day_To_Second_Type,
                           DBMS_SQL.Timestamp_With_Local_TZ_type)
                  THEN
                     'to_char("' || desctab (i).col_name || '")'
                  WHEN desctab (i).col_type = DBMS_SQL.Raw_Type
                  THEN
                     'rawtohex("' || desctab (i).col_name || '")'
                  WHEN desctab (i).col_type = DBMS_SQL.Rowid_Type
                  THEN
                     '''unsupported datatype : ROWID'''
                  WHEN desctab (i).col_type = DBMS_SQL.Long_Type
                  THEN
                     '''unsupported datatype : LONG'''
                  WHEN desctab (i).col_type = DBMS_SQL.Long_Raw_Type
                  THEN
                     '''unsupported datatype : LONG RAW'''
                  WHEN desctab (i).col_type = DBMS_SQL.User_Defined_Type
                  THEN
                     '''unsupported datatype : User Defined Type'''
                  WHEN desctab (i).col_type = DBMS_SQL.MLSLabel_Type
                  THEN
                     '''unsupported datatype : MLSLABEL'''
                  WHEN desctab (i).col_type = DBMS_SQL.Ref_Type
                  THEN
                     '''unsupported datatype : REF'''
                  WHEN desctab (i).col_type = DBMS_SQL.Clob_Type
                  THEN
                     '''unsupported datatype : CLOB'''
                  WHEN desctab (i).col_type = DBMS_SQL.Blob_Type
                  THEN
                     '''unsupported datatype : BLOB'''
                  WHEN desctab (i).col_type = DBMS_SQL.Bfile_Type
                  THEN
                     '''unsupported datatype : BFILE'''
                  WHEN desctab (i).col_type = DBMS_SQL.Urowid_Type
                  THEN
                     '''unsupported datatype : UROWID'''
                  ELSE
                     '''unsupported datatype : ' || desctab (i).col_type || ''''
               END;
      END LOOP;

      stmt := 'select collist(' || stmt || ') from (' || query || ')';

      fh :=
         UTL_FILE.fopen (directory_name,
                         file_name,
                         'W',
                         32767);

      begin
            OPEN rcur FOR stmt;
      exception 
        when others then 
          dbms_output.put_line(stmt);
          raise;
      end;
      -- write the header only once, not at every fetched batch
      current_line := header;
      next_line := NULL;

      LOOP
         FETCH rcur
         BULK COLLECT INTO res
         LIMIT 10000;

         FOR f IN 1 .. res.COUNT
         LOOP
            FOR g IN 1 .. res (f).COUNT
            LOOP
               IF (g > 1)
               THEN
                  next_line := next_line || ';';
               END IF;

               IF (  NVL(LENGTH (current_line),0)
                   + NVL(LENGTH (next_line),0)
                   + NVL(LENGTH (res (f) (g)),0)
                   + 5 > 32767)
               THEN
                  UTL_FILE.put_line (fh, current_line);
                  current_line := NULL;
               END IF;

               IF (NVL(LENGTH (next_line),0) + NVL(LENGTH (res (f) (g)),0) + 5 > 32767)
               THEN
                  UTL_FILE.put_line (fh, next_line);
                  next_line := NULL;
               END IF;

               next_line := next_line || res (f) (g);
            END LOOP;

            current_line :=
                  CASE
                     WHEN current_line IS NOT NULL
                     THEN
                        current_line || CHR (10)
                  END
               || next_line;
            next_line := NULL;
         END LOOP;

         EXIT WHEN rcur%NOTFOUND;
      END LOOP;

      -- flush the last buffered line
      IF current_line IS NOT NULL
      THEN
         UTL_FILE.put_line (fh, current_line);
      END IF;

      CLOSE rcur;

      UTL_FILE.fclose (fh);
   END IF;

   DBMS_SQL.CLOSE_CURSOR (curid);
END;
/

CREATE OR REPLACE DIRECTORY tmp AS '/tmp';

EXEC bulk_csv('TMP','emp.csv','SELECT * FROM EMP ORDER BY ENAME')


EMPNO;ENAME;JOB;MGR;HIREDATE;SAL;COMM;DEPTNO
7876;ADAMS;CLERK;7788;1987-05-23 00:00:00;1100;;20
7499;ALLEN;SALESMAN;7698;1981-02-20 00:00:00;1600;30;30
7698;BLAKE;MANAGER;7839;1981-05-01 00:00:00;2850;;30
7782;CLARK;MANAGER;7839;1981-06-09 00:00:00;2450;;10
7902;FORD;ANALYST;7566;1981-12-03 00:00:00;3000;;20
7900;JAMES;CLERK;7698;1981-12-03 00:00:00;950;;30
7566;JONES;MANAGER;7839;1981-04-02 00:00:00;2975;;20
7839;KING;PRESIDENT;;1981-11-17 00:00:00;5000;;10
7654;MARTIN;SALESMAN;7698;1981-09-28 00:00:00;1250;140;30
7934;MILLER;CLERK;7782;1982-01-23 00:00:00;1300;;10
7788;SCOTT;ANALYST;7566;1987-04-19 00:00:00;3000;;20
7369;SMITH;CLERK;7902;1980-12-17 00:00:00;800;;20
7844;TURNER;SALESMAN;7698;1981-09-08 00:00:00;1500;0;30
7521;WARD;SALESMAN;7698;1981-02-22 00:00:00;1250;50;30
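
Any query can be passed as the third argument; for instance, with the same TMP directory, you could dump a join (dept is the standard scott demo table, the file name is just an example):

EXEC bulk_csv('TMP','dept_emp.csv','SELECT d.dname, e.ename, e.sal FROM emp e JOIN dept d ON e.deptno = d.deptno')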

on materialized view constraints

Oracle is pretty strong at enforcing constraints.

Table for this blog post:
create table t(x number primary key, y number);

For instance if you alter table t add check (y<1000); then Y will not be bigger than 1000, right?

SQL> insert into t values (1,2000);
insert into t values (1,2000)
Error at line 1
ORA-02290: check constraint (SCOTT.SYS_C0029609) violated

I believe this code to be unbreakable. If you have only the SELECT and INSERT privileges on the table, you cannot bypass the constraint.

Let’s imagine some complex constraint. CHECK (sum(y) < 1000)

SQL> alter table t add check (sum(y) < 1000);
alter table t add check (sum(y) < 1000)
Error at line 1
ORA-00934: group function is not allowed here

Ok, clear enough I suppose, we cannot handle this complex constraint with a CHECK condition.

We could have some before trigger that fires an exception

CREATE TRIGGER tr
   BEFORE INSERT OR UPDATE
   ON T
   FOR EACH ROW
   WHEN (NEW.Y > 0)
DECLARE
   s   NUMBER;
BEGIN
   SELECT SUM (y) INTO s FROM t;

   IF (s + :new.y >= 1000)
   THEN
      raise_application_error (-20001, 'SUM(Y) would exceed 1000');
   END IF;
END;
/

Now the trigger computes the sum and raises an exception whenever the limit would be exceeded.

SQL> insert into t values (2, 600);

1 row created.

SQL> insert into t values (3, 600);
insert into t values (3, 600)
            *
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "SCOTT.TR", line 8
ORA-04088: error during execution of trigger 'SCOTT.TR'

SQL> drop trigger tr;

Trigger dropped.

SQL> truncate table t;

Table truncated.

But I am not good with triggers, and triggers are as bad as their developers, with dark sides like mutating tables and the like.

As Tom Kyte mentioned in the comments, the code above is not effective if more than one user updates the table at the same time.
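
Here is a sketch of the race, with hypothetical values: each session only sees its own uncommitted rows, so both pass the check in the trigger:

-- session 1
insert into t values (10, 600);  -- trigger sees sum(y) = 0, insert succeeds
-- session 2, before session 1 commits
insert into t values (11, 600);  -- also sees sum(y) = 0, insert succeeds
-- both sessions commit : sum(y) is now 1200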

Another popular approach is to create a fast-refreshable-on-commit mview with a constraint.

Let’s see how this works.


create materialized view log on t with rowid, primary key (y) including new values;

create materialized view mv
refresh fast 
on commit 
as select sum(y) sum from t;

alter table mv add check (sum < 1000);

The constraint is on the mview, so once you commit (and only at commit time), Oracle will try to refresh the mview.

SQL> insert into t values (4, 600);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into t values (5, 600);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.SYS_C0029631) violated

SQL> select * from t;

         X          Y
---------- ----------
         4        600

So far so good. The mechanism rolls back the transaction in case of an ORA-12008, a bit similar to a DEFERRABLE constraint.

But how safe is this after all? Oracle does not enforce anything on the table; it just fails on refresh…

Anything that does not fulfill the materialized view fast refresh prerequisites will also break the data integrity.

SQL> delete from t;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20;

19 rows created.

SQL> commit;

Commit complete.

SQL> select sum(y) from t;

    SUM(Y)
----------
     19000

SQL> select staleness from user_mviews;

STALENESS
-------------------
UNUSABLE

Your data integrity is gone. By “breaking” the mview, with only the SELECT, INSERT and ALTER SESSION privileges, you can now insert any data.

This is documented under the FAST clause:

For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the eligibility of a materialized view for fast refresh.

Other operations, like TRUNCATE, may also prevent you from inserting fresh data :


SQL> alter materialized view mv compile;

Materialized view altered.

SQL> exec dbms_mview.refresh('MV','COMPLETE');

PL/SQL procedure successfully completed.

SQL> select * from mv;

       SUM
----------

SQL> insert into t values(1,1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mv;

       SUM
----------
         1

SQL> truncate table t;

Table truncated.

SQL> insert into t values(1,1);

1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-32321: REFRESH FAST of "SCOTT"."MV" unsupported after detail table
TRUNCATE

Oracle Certified Master