select * from test where my_long like '%toto%'

A good way to learn is to try to answer user questions. Instead of referencing other posts, today I tried to answer that frequently asked question myself on developpez.com (a French forum).


SQL> create table test ( my_long long);

Table created.

SQL> insert into test values ('hello toto !');

1 row created.

SQL> exec for r in ( select my_long from test ) -
   loop if (r.my_long like '%toto%') then -
   dbms_output.put_line(r.my_long); end if; -
   end loop
hello toto !
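
If the table can be copied, another option, as a sketch (the copy name TEST_CLOB is made up), is to convert the LONG into a CLOB with TO_LOB, because LIKE works on CLOBs:

create table test_clob as select to_lob(my_long) my_long from test;
select * from test_clob where my_long like '%toto%';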

clear screen reports cleared columns, breaks and computes


$ echo clear screen|sqlplus scott/tiger
SQL> columns cleared
breaks cleared
computes cleared

but if I quit properly, it does not report that


$ echo "clear screen
quit"|sqlplus scott/tiger
SQL>

a good reason to improve the quality of your shell script by quitting at the end 😉

lock system, restrict dbsnmp

An unlocked user is a security problem. Currently, all my unlocked users have only CREATE SESSION as a system privilege, possibly also ALTER SESSION.

Except SYS, SYSTEM and DBSNMP

To minimize this security problem, I implemented the following strategy on my test system.

1) delete password file, set remote_login_passwordfile=NONE, O7_DICTIONARY_ACCESSIBILITY=FALSE
2) alter user SYSTEM account lock;
3a) in 10gR2 :
alter user dbsnmp quota 1T on sysaux;
create role secure_oem_role;
grant advisor, analyze any, analyze any dictionary, create job, create procedure, create session, create table, manage any queue, select any dictionary to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_AQ" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_AQADM" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_DRS" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_MONITOR" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_SERVER_ALERT" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_SYSTEM" to secure_oem_role;
grant EXECUTE on "SYS"."DBMS_WORKLOAD_REPOSITORY" to secure_oem_role;
exec SYS.DBMS_AQADM.GRANT_QUEUE_PRIVILEGE('DEQUEUE', 'ALERT_QUE', 'SECURE_OEM_ROLE')
revoke EXECUTE ON "SYS"."DBMS_SERVER_ALERT" from dbsnmp;
revoke EXECUTE ON "SYS"."DBMS_SYSTEM" from dbsnmp;
revoke UNLIMITED TABLESPACE from dbsnmp;
revoke SELECT ANY DICTIONARY from dbsnmp;
revoke CREATE PROCEDURE from dbsnmp;
revoke CREATE TABLE from dbsnmp;
revoke OEM_MONITOR from dbsnmp;
grant secure_oem_role to dbsnmp;
3b) in other versions, you can probably remove more and grant less; I think only in 10g is it necessary to have the "quota". In my other databases, DBSNMP has 0 segments.
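
A quick way to verify that claim, as a sketch:

select count(*) from dba_segments where owner='DBSNMP';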

Check which system privileges are potentially dangerous to the system:

select path
from
(
select
grantee,
sys_connect_by_path(privilege, ':')||':'||grantee path
from (select grantee, privilege, 0 role from dba_sys_privs union all select grantee, granted_role, 1 role from dba_role_privs)
connect by privilege=prior grantee
start with role=0
)
where
grantee in (
select username from dba_users
where lock_date is null
and password != 'EXTERNAL'
and username != 'SYS')
or grantee='PUBLIC'
/
:ADVISOR:SECURE_OEM_ROLE:DBSNMP
:ANALYZE ANY:SECURE_OEM_ROLE:DBSNMP
:ANALYZE ANY DICTIONARY:SECURE_OEM_ROLE:DBSNMP
:CREATE JOB:SECURE_OEM_ROLE:DBSNMP
:CREATE PROCEDURE:SECURE_OEM_ROLE:DBSNMP
:CREATE SESSION:USER1
:CREATE SESSION:USER2
:CREATE SESSION:SECURE_OEM_ROLE:DBSNMP
:CREATE TABLE:SECURE_OEM_ROLE:DBSNMP
:MANAGE ANY QUEUE:SECURE_OEM_ROLE:DBSNMP
:SELECT ANY DICTIONARY:SECURE_OEM_ROLE:DBSNMP

it sounds better…

idle events in 10gR2

I just noticed this morning that idle events are very easily identifiable in 10gR2 thanks to a new column called WAIT_CLASS.

To ignore idle events, I just wrote


select WAIT_CLASS, event
from (
  select * 
  from V$SYSTEM_EVENT 
  where WAIT_CLASS#!=6
  order by TIME_WAITED_MICRO desc)
where rownum<6 ;

WAIT_CLASS    EVENT
------------- ----------------------------------------
System I/O    log file parallel write                  
Configuration log file switch (checkpoint incomplete)
Configuration log file switch completion             
System I/O    db file parallel write                 
System I/O    control file parallel write            

sys_connect_by_path in 8i, or the danger of using undocumented parameters…

I have been posting on the Metalink technical forum about a query that ran against all my test databases but did not work in production.

In its simplest form:

select sys_connect_by_path(dummy,':') from dual connect by 1=2;

Well, there is nothing wrong with this query. I tried it on 8i, 9iR2, 10gR1 and 10gR2 and it worked fine. In production, it just refused to work on 8i.

Well, I have a different bit version of 8i in my test environment, and another operating system version, so I supposed it must be related to the old production 64-bit AIX 4 OS.

After reading the docs again, I discovered that SYS_CONNECT_BY_PATH is not documented in 8i. So why did this work on my test system then? Because I have _new_connect_by_enabled = true in my init.ora.

Ok, on the one hand, I found out an (unsupported) way to let my query run in 8i

alter session set "_new_connect_by_enabled"=TRUE;

On the other hand, I had been tricked by my own parameter file: I set a hidden parameter ages ago, forgot about it, and the query "unfortunately succeeded" in my test environment… revealing the bug only once distributed!

encrypted listener password

There are a few major changes in database administration and database security between 9i and 10g.

In 9i, I used to grep in the listener.ora to find out the password.

LISTENER_LSC61 =
(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10061)(QUEUESIZE=200))
))
PASSWORDS_LISTENER_LSC61 = 1234567890ABCDEF

this 64-bit encrypted string can be used in 9i to stop the listener

$ lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 9.2.0.6.0 – Production on 05-DEC-2005 14:33:51

Copyright (c) 1991, 2002, Oracle Corporation. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set current_listener listener_lsc61
Current Listener is listener_lsc61
LSNRCTL> set password 1234567890ABCDEF
The command completed successfully
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10061)(QUEUESIZE=200)))
The command completed successfully

As a DBA, it is quite handy, because you can use grep (or awk) to find the password in the listener.ora. As a security admin, you should make sure the listener.ora is not readable. Note that the default, when created by netmgr, is to be world-readable :-(

However, this does no longer work in 10g

LISTENER_LSC62 =
(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10062)(QUEUESIZE=200))
))

PASSWORDS_listener_LSC62 = 1234567890ABCDEF

the encrypted string can no longer be used

$ lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 10.1.0.4.0 – Production on 05-DEC-2005 14:37:24

Copyright (c) 1991, 2004, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set current_listener listener_lsc62
Current Listener is listener_lsc62
LSNRCTL> set password 1234567890ABCDEF
The command completed successfully
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10062)(QUEUESIZE=200)))
TNS-01169: The listener has not recognized the password
TNS-01189: The listener could not authenticate the user

As a security admin, you might think this is better. But how are you going to stop the listener in your scripts? Well, in 10g, we can use local authentication (the default). So if the script is started as oracle, we do not need a password.

LISTENER_LSC63 =
(DESCRIPTION=(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10016)(QUEUESIZE=200))
))

PASSWORDS_listener_LSC63 = 1234567890ABCDEF

$ whoami
oracle
$ hostname
dbsrv85a.ex.zkb.ch
$ lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 10.2.0.1.0 – Production on 05-DEC-2005 14:43:33

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set current_listener LISTENER_LSC63
Current Listener is LISTENER_LSC63
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbsrv85a.ex.zkb.ch)(PORT=10016)(QUEUESIZE=200)))
The command completed successfully

I read in an Alex Kornbrust post on Pete Finnigan's forum that an undocumented LOCAL_OS_AUTHENTICATION parameter could be used to disable local authentication, but in that case it is going to be a nightmare to stop the listener in an automated script. Well, we can still use kill, but it is not very pretty.

post from palindnilap

I just answered a question about the data dictionary on forums.oracle.com. It is all about the dictionary views.
How many tables should I know in the dictionary?

SQL> select count(*) from dict;
1857

Well, that's too much. But I can remove the GV$ views, which contain the per-instance data for RAC, and the DBA_, ALL_ and USER_ views have (almost) the same structure.

SQL> select count(*)
from dict
where table_name not like 'GV$%'
and table_name not like 'ALL%'
and table_name not like 'DBA%' ;
712

Anyway, who knows the 712 views by heart? Fortunately, there is one called DICTIONARY, which helps!

Coming back to the post, palindnilap wants to see which columns of a view are mapped to which columns of a table. A quick look at ALL_VIEWS could do the trick, but then you will need to "understand" the query to see which view column maps to which table column. What's more, ALL_VIEWS.TEXT is a LONG. Arghh!

If you have a view that contains all columns of a table, you could use ALL_DEPENDENCIES to see which table it is based on.

On my first answer, I pointed out that ALL_UPDATABLE_COLUMNS may reveal that a view column belongs to a table if the column is updatable.

My last try was to use the ACCESS_PREDICATES to get the column physically accessed.

SQL> select * from v02 where employee=123456;

no rows selected

SQL> select
max(substr(ACCESS_PREDICATES,1,instr(ACCESS_PREDICATES,'=')-1))
from v$sql_plan
where ACCESS_PREDICATES like '%=123456';
“EMPNO”

here we see EMPLOYEE is actually named "EMPNO" in the base table. It could be done with EXPLAIN PLAN and PLAN_TABLE too.
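
The EXPLAIN PLAN variant could look like this, as a sketch (assuming a PLAN_TABLE is available; the statement_id is arbitrary):

explain plan set statement_id='V02' for
select * from v02 where employee=123456;

select max(substr(ACCESS_PREDICATES,1,instr(ACCESS_PREDICATES,'=')-1))
from plan_table
where statement_id='V02'
and ACCESS_PREDICATES like '%=123456';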

oracle voyage worm

I wrote a mini script to protect my customers from being attacked by an "oracle voyage worm" variant:

revoke CREATE DATABASE LINK from CONNECT;
revoke ALL on SYS.UTL_FILE from PUBLIC;
revoke ALL on SYS.UTL_HTTP from PUBLIC;
revoke ALL on SYS.UTL_SMTP from PUBLIC;
revoke ALL on SYS.UTL_TCP from PUBLIC;
grant EXECUTE on SYS.UTL_FILE to XDB;
grant EXECUTE on SYS.UTL_HTTP to MDSYS;
grant EXECUTE on SYS.UTL_HTTP to ORDPLUGINS;
@?/rdbms/admin/utlrp


Then, in OEM 10g, check for policy violations.

I added a few grants to special Oracle internal users to avoid invalid objects, which are also a policy violation in OEM… OEM will report a violation if those accounts are not locked and expired.
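
A quick way to list what is left invalid before running utlrp, as a sketch:

select owner, object_name, object_type
from dba_objects
where status='INVALID';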

10.2.0.1 hidden parameters

In order to get a clean database configuration, I add the following two hidden parameters in my 10.2.0.1 parameter file.
Do not howl that loud, I can hear you from here!

Well, as I said already about the _pga_max_size, I never recommend using hidden parameters when you can do the same without.

However, I am going to share those parameters, and the Metalink notes referencing them.
You will need them for a cleaner 10.2.0.1 installation.

  • _kgl_large_heap_warning_threshold=33554432
    This parameter prevents "Heap size 2800K exceeds notification threshold" messages in the alert log and user trace dumps.
    Note: 330239.1 Bugs: 4286095, 4390265

  • __dg_broker_service_names=''
    In case you do not use Data Guard but you do use the local_listener parameter,
    this parameter prevents pmon from registering a <DB_NAME>_XPT.<DOMAIN_NAME> service in the listener.
    Thread: 611575.993 Bug: 4632635

    Probably all this will be fixed in 10.2.0.2
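
For completeness, a sketch of how they could also be set in the spfile (hidden parameters must be double-quoted; verify on a test system first):

alter system set "_kgl_large_heap_warning_threshold"=33554432 scope=spfile;
alter system set "__dg_broker_service_names"='' scope=spfile;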

restore to a new host: nid++

    Great challenge today: restore to a new host from a closed noarchivelog backup on tape library.

    In Oracle 8i and before, the only way to rename a database was to recreate the controlfile. In 9i, I could change it with nid; in 10gR2, I should never have a reason to recreate the controlfile again, because even MAXDATAFILES and MAXINSTANCES can be changed. Recreating the controlfile is bad. A colleague once forgot one file; when he noticed it, months later, he wondered how to recover it. Recreating a controlfile is far too sensitive a thing to do. It is almost as bad as changing the dictionary!

    Well. How to do it then?

    Ok, I have a database called LSC67 on prod, I want to recover it to LSC66 in devl. I have NO ACCESS to production, but I have access to tapes (one may wonder if this is a good security practice…)

    Let’s start.

    First, if LSC66 already exists, shutdown abort. Remove datafiles. Remove controlfiles. Remove redo logs.

    Now I restore the data/control/redo files from prod:/dbms/oracle/LSC67 to devl in /dbms/oracle/LSC66.

    First I rename the files; some are called systemLSC67.dbf, and I do not want that…

    find does the trick

    find /dbms/oracle/LSC66 -name "*LSC67*" |
    nawk '{printf "mv "$1" "; gsub(src,target);print}' src=LSC67 target=LSC66 |
    sh

    I just have to change the DB_NAME in my parameter file (which already exists at my site) to reflect the prod controlfile dbname

    startup quiet force nomount
    alter system set db_name='LSC67' scope=spfile;
    startup quiet force mount

    now I generate some statements for dynamically renaming the files

    set ver off emb on pages 0 newp 0 lin 9999 trims on head off feedb off termout off
    spool /tmp/rename_LSC67_to_LSC66.sql
    select 'alter database rename file '''||name||''' to '''||replace(name,'LSC67','LSC66')||''';'
    from (
    select name from v$datafile
    union all
    select member from v$logfile
    )
    where name like '%LSC67%';
    spool off
    spool /tmp/drop_temp_LSC67_to_LSC66.sql
    select 'alter database tempfile '''||tf.name||''' drop;'
    from v$tempfile tf
    where tf.name like '%LSC67%';
    spool off
    spool /tmp/create_temp_LSC67_to_LSC66.sql
    select 'alter tablespace "'||ts.name||'" add tempfile '''||
    replace(tf.name,'LSC67','LSC66')||
    ''' size 128M reuse autoextend on next 128M maxsize 2048M;'
    from v$tablespace ts , v$tempfile tf
    where tf.name like '%LSC67%' and tf.ts#=ts.ts#;
    spool off

    OK, now I am in mount; I rename the datafiles and drop the tempfiles. After that I open the database and add the tempfiles. I am not taking care of the original size and autoextend clauses of the original tempfiles, just 128M next 128M maxsize 2G.

    set echo on termout on feedb 6
    @/tmp/rename_LSC67_to_LSC66.sql
    @/tmp/drop_temp_LSC67_to_LSC66.sql
    alter database open;
    @/tmp/create_temp_LSC67_to_LSC66.sql

    now I nid

    shutdown immediate
    startup quiet mount restrict

    nid dbname=LSC66 target=/

    and I change the db name and open resetlogs

    startup quiet force nomount
    alter system set db_name='LSC66' scope=spfile;
    startup quiet force mount
    alter database open resetlogs;

FAILED_LOGIN_ATTEMPTS part 2

    Ref: part 1
    I reported this lack of documentation on http://forums.oracle.com/forums/thread.jspa?threadID=330359

    Here is my test case (take care, it will create a new db!):

    SQL> startup force quiet nomount;
    ORACLE instance started.
    SQL> create database controlfile reuse extent management
    local default tablespace users default temporary tablespace temp
    undo tablespace undotbs1;

    Database created.
    SQL> @?/rdbms/admin/catalog
    SQL> @?/rdbms/admin/catproc
    SQL> col username for a10
    SQL> col PROFILE for a7
    SQL> col LIMIT for a12
    SQL> select username, profile, limit from dba_users join
    dba_profiles using (profile)
    where resource_name='FAILED_LOGIN_ATTEMPTS';
    USERNAME   PROFILE LIMIT 
    ---------- ------- ------------ 
    SYSTEM     DEFAULT 10
    SYS        DEFAULT 10
    TSMSYS     DEFAULT 10
    DIP        DEFAULT 10
    DBSNMP     DEFAULT 10
    OUTLN      DEFAULT 10

undocumented parameter

    Just in case you read my success story on Don Burleson's webpage about undocumented parameters.

    Out of Metalink thread 460157.996:

    "I set appropriate values for pga_aggregate_target and _pga_max_size…

    alter system set pga_aggregate_target=6G;
    alter system set "_pga_max_size"=2000000000;

    …and I gave the query some hints "NOREWRITE FULL USE_HASH ORDERED". As a result, it boosted my query performance from 12 hours to 1.5 hours."

    A few lines below I mentioned:
    this parameter often leads to an ORA-4030, even when plenty of memory is available, for some obscure reason

    I think the last sentence is quite interesting, too.

    Well, I must say that I finally opted for a more maintainable solution:
    no more hints, no more undocumented parameters, but parallel processing with up to 16 threads on a 4-CPU server.

    As discussed in the iTAR, a supported way to increase the maximum PGA memory per single SQL query is to increase the degree of parallelism.
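
    As a sketch (the table name and degree are only for illustration), that can be as simple as:

    alter table big_table parallel 16;
    select /*+ parallel(b,16) */ count(*) from big_table b;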

    As a rule of thumb, if you can avoid hidden parameters, avoid them!

    see you soon @ SF

migrate database with imp exp

    I prefer to use exp/imp to migrate databases.

    I first create a fresh new database with a new spfile, a new system tablespace, a new undo, locally managed tablespace, automatic segment space management.

    I do not do a full export. I prefer a schema export. It only exports the schemas that I want, not WMSYS or PERFSTAT… I do not want to have old stuff in my system tablespace either.

    What is missing from a schema import? Profiles, roles, tablespaces, users, public synonyms, public database links, privileges.

    1) generate the create statements for profiles, roles, tablespaces, users, public synonyms, public database links and privileges

    for example with TOAD or with dbms_metadata. I am using SQL + spool; PL/SQL is also possible (see the sketch after this list).

    2) export the database with OWNER=user1,user2,… so all your users but not SYS, SYSTEM, PERFSTAT, DBSNMP, WMSYS, TSMSYS. Only your own users, not the ones created by Oracle

    3) create a new db

    4) create profiles, roles, tablespaces, users on the new db

    5) grant dba to public !!! Yes, a bit crazy, but it is convenient to do the import without warnings/errors.

    6) import

    7) create the public synonym, public database link, privileges

    8) revoke dba from public (!)

    9) recompile the db
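
    A sketch of step 1 with dbms_metadata, spooled from SQL*Plus (the object list is only an example, adapt it to what your database really contains):

    set long 100000 pages 0 lines 200 trimspool on feedback off
    exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true)
    spool create_users.sql
    select dbms_metadata.get_ddl('PROFILE', profile)
    from (select distinct profile from dba_profiles);
    select dbms_metadata.get_ddl('USER', username)
    from dba_users
    where username not in ('SYS','SYSTEM','DBSNMP','OUTLN');
    spool off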

    Well, I have written all that in a script, so migrating a db is no longer a problem for me :-) I can do a 7.3 -> 10.2 migration. And I am sure my db is clean. I have undo and temporary tablespaces. I can have Java or ASM. I have only 10.2 system objects in my 10.2 database. Since I am using exp/imp, it is no problem to change OS/server/domain/word size.

select column only if it exists

    I need to display tablespace attributes

    SQL> select tablespace_name, status, contents, logging, extent_management, allocation_type, segment_space_management, retention, bigfile from dba_tablespaces;
    TABLESPACE_NAME      STATUS    CONTENTS  LOGGING   EXTENT_MAN ALLOCATIO SEGMEN RETENTION   BIG
    -------------------- --------- --------- --------- ---------- --------- ------ ----------- ---
    SYSTEM               ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    MANUAL NOT APPLY   NO

    Looks fine. But what if I try that on my Oracle 7 database? I will get ORA-00904: invalid column name.

    OK, I will then select only the columns that exist, and then select from dba_tablespaces.

    t.sql:
    set termout off
    def logging=""
    def extent_management=""
    def allocation_type=""
    def segment_space_management=""
    def retention=""
    def bigfile=""
    col logging new_v logging
    col extent_management new_v extent_management
    col allocation_type new_v allocation_type
    col segment_space_management new_v segment_space_management
    col retention new_v retention
    col bigfile new_v bigfile
    select ',logging' logging from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='LOGGING';
    select ',extent_management' extent_management from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='EXTENT_MANAGEMENT';
    select ',allocation_type' allocation_type from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='ALLOCATION_TYPE';
    select ',segment_space_management' segment_space_management from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='SEGMENT_SPACE_MANAGEMENT';
    select ',retention' retention from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='RETENTION';
    select ',bigfile' bigfile from dba_tab_columns where owner='SYS' and table_name='DBA_TABLESPACES' and COLUMN_NAME='BIGFILE';
    set termout on ver off
    select tablespace_name, status, contents &logging &extent_management &allocation_type &segment_space_management &retention &bigfile from dba_tablespaces;

    let’s try

    SYS@LSC69 AS SYSDBA/7.3.4.5
    SQL> @t
    TABLESPACE_NAME      STATUS    CONTENTS
    -------------------- --------- ---------
    SYSTEM               ONLINE    PERMANENT

    SYS@LSC65 AS SYSDBA/8.1.7.4
    SQL> @t
    TABLESPACE_NAME      STATUS    CONTENTS  LOGGING   EXTENT_MAN ALLOCATIO
    -------------------- --------- --------- --------- ---------- ---------
    SYSTEM               ONLINE    PERMANENT LOGGING   DICTIONARY USER

    SYS@LSC67 AS SYSDBA/9.2.0.6
    SQL> @t
    TABLESPACE_NAME      STATUS    CONTENTS  LOGGING   EXTENT_MAN ALLOCATIO SEGMEN
    -------------------- --------- --------- --------- ---------- --------- ------
    SYSTEM               ONLINE    PERMANENT LOGGING   DICTIONARY USER      MANUAL

    SYS@LSC63 AS SYSDBA/10.2.0.1
    SQL> @t
    TABLESPACE_NAME      STATUS    CONTENTS  LOGGING   EXTENT_MAN ALLOCATIO SEGMEN RETENTION   BIG
    -------------------- --------- --------- --------- ---------- --------- ------ ----------- ---
    SYSTEM               ONLINE    PERMANENT LOGGING   LOCAL      SYSTEM    MANUAL NOT APPLY   NO

    one script for any version

return code

    There is a myth about using sql.sqlcode in sqlplus:

    whenever sqlerror exit sql.sqlcode

    This is not OK. You should prefer whenever sqlerror exit failure or exit 1.

    Why? Because the Unix return code is only 8 bits long (the exit status is taken modulo 256), so if you exit with ORA-600, you will get 88.

    Let’s try it

    $ sqlplus "/ as sysdba"
    SQL> create user gaston identified by lagaffe quota 1k on users default tablespace users;

    User created.

    SQL> grant create table to gaston;

    Grant succeeded.

    SQL> whenever sqlerror exit sql.sqlcode
    SQL> create table gaston.x as select * from all_objects;
    create table gaston.x as select * from all_objects
    *
    ERROR at line 1:
    ORA-01536: space quota exceeded for tablespace 'USERS'

    Disconnected from …
    $ echo $?
    0

    1536 in binary is 11000000000; only the 8 rightmost bits are kept, which gives 0, definitely not what you want in your Unix script!

    Well, there is also a return code from the OS, but it stopped working in 10g. It is the _RC defined variable.

    SQL> host ls /xxx
    ls: 0653-341 The file /xxx does not exist.

    SQL> def _rc
    DEFINE _RC = "2" (CHAR)

    Many users ask "how do I get the return code of a procedure?". Well, this is a FAQ: a procedure returns nothing, but you can have an OUT variable and then exit with that value.

    SQL> create or replace procedure p(o out number) is begin o:=1; end;
    2 /

    Procedure created.

    SQL> var rc number
    SQL> exec p(:rc)

    PL/SQL procedure successfully completed.

    SQL> exit :rc
    Disconnected …
    $ echo $?
    1

deterministic

    If I create a function for a materialized view with query rewrite or for a function-based index, I must create it DETERMINISTIC.

    f(x) = x*2 is deterministic. For a given x, f(x) will always be the same; f(5) will always be 10. Always.

    f(y) = sysdate+y is non-deterministic. For a given y, the return values may vary.

    OK, we cannot use sysdate in mviews. What else should I say?

    Well, some are less evident to find out!

    Some limit cases:
    to_date('2000','YYYY') is non-deterministic: it returns the 1st of the current month in year 2000, e.g. 2000-07-01 or 2000-08-01
    to_char(date '2000-01-01','DAY') is non-deterministic: it can deliver SATURDAY or SAMSTAG (see the quick demo below)
    to_char(date '2000-01-01','DAY','NLS_DATE_LANGUAGE=american') is deterministic
    to_char(date '2000-01-01','D') is non-deterministic: it can deliver 6, 7, 1 or 2, depending on your territory (in Bangladesh, the week starts on Friday)
    timestamp '2005-10-30 02:30:00 Europe/Zurich' is by default non-deterministic: if you try it in summer, you get the time offset +02:00, and in winter you get +01:00, because the period 02:00-02:59 on Oct 30th exists at both offsets. This is called a time boundary. I can make it deterministic by setting ERROR_ON_OVERLAP_TIME in the session, in which case boundary values are rejected.
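
    A quick demo of the DAY case, as a sketch (assuming both date languages are available):

    alter session set nls_date_language='GERMAN';
    select to_char(date '2000-01-01','DAY') from dual;   -- SAMSTAG
    alter session set nls_date_language='AMERICAN';
    select to_char(date '2000-01-01','DAY') from dual;   -- SATURDAY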

    I can always define a function as deterministic, at my own risk…

    For example, if I have t(id,gender) = {1,male; 2,female}, I could have a function

    f(id)=select gender from t where t.id=f.id;

    and I could define it as deterministic. However, if I later decide to change the gender stored for id=2, I must be prepared for corrupted data, wrong results, inconsistencies and even ORA-600 errors.
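
    For illustration, a sketch of such a function declared deterministic (at my own risk), using the hypothetical t(id,gender) table above:

    create or replace function f(p_id in number) return varchar2
    deterministic
    is
      l_gender t.gender%type;
    begin
      select gender into l_gender from t where id = p_id;
      return l_gender;
    end;
    /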

exotic constraints

    Today I read a post on metalink where the user wanted a unique constraint for not-null values…

    Sounds easy, because Oracle never indexes NULLs in a B-tree index.

    If I have only one column, I simply index it and it will work.

    SQL> create table t66 ( n number);

    Table created.

    SQL> create unique index i66 on t66(n);

    Index created.

    SQL> insert into t66 values (null);

    1 row created.

    SQL> insert into t66 values (null);

    1 row created.

    The NULLs are not indexed. In that post today, the user has a two-column index and does not want to enforce the constraint when one of the columns is null. No problem, we can use a function-based index (FBI) to enforce this.

    create unique index i on t(decode(col2,null,null,col1), decode(col1,null,null,col2));

    so the index will contain only entries where both columns are not null.

    Yesterday a user on forums.oracle.com wanted to have a not-different constraint, that is, only accepting an entry [p;r] if no [p;s] exists. It is quite hard to solve. I have read an interesting solution using an ON-COMMIT-REFRESH materialized view with aggregates and constraints.

    A long time ago, one user wanted a constraint “table should contain only one row”.

    create unique index i on t(col*0);
    would ensure at most one row, with col NOT NULL

    I think I can do better.

    Imagine the user wants always exactly one row in STATUS(code number)

    create table STATUS_MAXONE(code number, n number default 0 primary key);
    create view STATUS as select code from STATUS_MAXONE;
    insert into STATUS values (null);
    create table STATUS_MINONE(n number references STATUS_MAXONE(n));
    insert into STATUS_MINONE values (0);

    Now the user can update the view STATUS, but neither delete nor insert…

    Sometimes you can also have circular foreign key constraints; for example, a PERSON can only marry someone who exists in PERSON, that is a PERSON.SPOUSE => PERSON.ID relation. This is also quite special… Actually, a person can only marry his spouse, which means if I am your SPOUSE, you are my SPOUSE! But there is no way to reference a column that may be null (we must reference a primary key).

    Using a unique index on an FBI has the limitation of FBIs: the function must be deterministic. Same with triggers. Complex constraints need additional tables to enforce your business rules.

    Like the [p;r] case above, accepting [p;r] and [q;s] but not [p;s],
    we could simply have a helper table containing the allowed pair, with p as primary key (a sketch follows)…
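
    A sketch of that helper-table idea (table and column names are made up): the rule table can hold only one value of r per p, and the composite foreign key forces every row of PAIRS to match it.

    create table pair_rule (p number primary key, r number not null, unique (p, r));
    create table pairs (p number, r number,
      foreign key (p, r) references pair_rule (p, r));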

unexpected results!

    It makes you cry! It makes you claim you have found a bug! But it is working as specified!

    1) subquery refers to a column of the main query
    select * from emp where ename in (select ename from dept where deptno=10);

    The query does not complain that the column does not exist in DEPT: it is perfectly legal to use a non-prefixed column of the main query inside the subquery. This is effectively like

    select * from tab where 'foo' in (select 'foo' from dict);

    so the "in" clause is always true
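
    A sketch of the usual defence, always prefixing subquery columns with a table alias, which turns the silent surprise into an error:

    select e.* from emp e
    where e.ename in (select d.ename from dept d where d.deptno=10);
    -- raises ORA-00904, revealing that ENAME is not a DEPT column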

    2) “not in” does not deliver result

    select sysdate from dual where 400 not in (select comm from emp);

    This is because 400 != NULL evaluates to UNKNOWN. Check three-valued logic!

    It could be rewritten with a NOT EXISTS, or in 10g with LNNVL:

    select sysdate from dual where lnnvl(400 in (select comm from emp));

    3) rows are not delivered in the correct order
    It is a wrong assumption to think that rows will be delivered in a specific order if you do not specify ORDER BY.

    4) table not found

    
    SQL> select * from tab;
    TNAME                          TABTYPE  CLUSTERID
    ------------------------------ ------- ----------
    GreatTable                     TABLE
    
    SQL> desc GreatTable
    ERROR:
    ORA-04043: object GreatTable does not exist
    

    Whoever it was, someone created a case-sensitive table name.

    
    SQL> desc "GreatTable"
     Name                    Null?    Type
     ----------------------- -------- ----------------
     MagicColumn                      NUMBER(38,2)
    

Get disk space

    I just read today on sun.com that checking disk space in Java will be platform-independent in the java.io.File class before 2007; add a few years until that is integrated in Oracle. But I cannot wait that long, so I decided to write my own code around "df" on my AIX box.

    OK, let's do Java. The horrible regexp there parses the df output.

    create or replace and compile
    java source named "Df"
    as
    import java.io.*;
    public class Df
    {
    // free kilobytes on the filesystem containing args, parsed out of "df -kt"
    public static int getFree(String args)
    {
    return Integer.parseInt(df(args).replaceAll("[^0-9]*[ ]*[1 ][0-9 ][0-9][0-9][%-][^ ]* [^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*","").trim());
    }
    // filesystem (device) name containing args, parsed out of "df -kt"
    public static String getFS(String args)
    {
    return df(args).replaceAll("[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[1 ][0-9 ][0-9][0-9][%-][^ ]* [^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ ]*[ ]*[^ /]*","").trim();
    }
    // run "df -kt <args>" and return its output as a string
    public static String df(String args)
    {
    String rc = "";
    try
    {
    Process p = Runtime.getRuntime().exec("/bin/df -kt "+args);
    int bufSize = 4096;
    BufferedInputStream bis = new BufferedInputStream(p.getInputStream(), bufSize);
    int len;
    byte buffer[] = new byte[bufSize];
    while ((len = bis.read(buffer, 0, bufSize)) != -1)
    rc += new String(buffer, 0, len-1);
    p.waitFor();
    }
    catch (Exception e)
    {
    e.printStackTrace();
    }
    finally
    {
    return rc;
    }
    }
    }
    /

    Now I create two functions

    create or replace
    function getFree( p_cmd in varchar2) return number
    as
    language java
    name 'Df.getFree(java.lang.String) return int';
    /

    create or replace
    function getFS( p_cmd in varchar2) return varchar2
    as
    language java
    name 'Df.getFS(java.lang.String) return String';
    /

    Ok, let’s see if my files can autoextend

    select file_name, BYTES/1024 K, INCREMENT_BY*BYTES/BLOCKS/1024 INC, MAXBYTES/1024 MAXKBYTES, GETFREE(FILE_NAME) FREE, GETFS(FILE_NAME) FS
    from dba_data_files

    FILE_NAME                                           K        INC  MAXKBYTES       FREE FS
    ------------------------------------------ ---------- ---------- ---------- ---------- --------------
    /dbms/oracle/LSC68/data/system01LSC68.dbf      332800      25600    2097152    3579528 /dev/lsc68
    /dbms/oracle/LSC68/data/undo01LSC68.dbf        184320       2048     204800    3579528 /dev/lsc68
    /dbms/oracle/LSC68/data/sysaux01LSC68.dbf      228352      25600    2097152    3579528 /dev/lsc68
    /dbms/oracle/LSC68/data/users01LSC68.dbf        24576       2048    2097152    3579528 /dev/lsc68
    /dbms/oracle/LSC68/data/sysaud01_LSC68.dbf       4096       5120     204800    3579528 /dev/lsc68
    /app/oracle/product/10.1.0.3/dbs/t.dbf           1024          0          0     851784 /dev/ora10103

    Sounds good! Plenty of free space to let the files grow!