What is bigger than infinity?

NaN


select
BINARY_DOUBLE_INFINITY INF,
BINARY_DOUBLE_NAN NAN,
greatest(BINARY_DOUBLE_INFINITY, BINARY_DOUBLE_NAN) GRE
from t;

INF NAN GRE
--- --- ---
Inf Nan Nan

NaN means Not a Number. It could be the square root of -1, the logarithm of -1, 0/0, acos(1000), Inf-Inf, etc…


select
SQRT(-1d),
LN(-1d),
0/0d,
acos(1000d),
BINARY_DOUBLE_INFINITY-BINARY_DOUBLE_INFINITY
from t;
SQR LN- 00D ACO BIN
--- --- --- --- ---
Nan Nan Nan Nan Nan

According to the documentation, NaN is greater than any other value, including positive infinity.

To check whether a value is NaN, it can be compared to BINARY_DOUBLE_NAN:
where :z = BINARY_DOUBLE_NAN
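
Alternatively, there is an IS NAN floating-point condition; a minimal sketch:

select count(*) from dual where binary_double_nan is nan;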
There is also a function NANVL(:z, :y) which evaluates to :y when :z is NaN. If :z is not NaN and :y is not null, it evaluates to :z. NANVL evaluates to NULL when :z or :y is null.


select NANVL(1,null) from dual;
NANVL
------
[null]

Display a blob

I have a table with a blob

create table t(b blob);
insert into t values ('585858');

In 11g sql*plus, I can display raw data

select b from t;
B
------
585858

Ok, but if I want to display XXX (the character content)

select utl_raw.cast_to_varchar2(b) from t;
UTL
---
XXX

However, in SQL, a RAW cannot be more than 2000 bytes long.

Another way to print your blob content is to use DBMS_LOB.CONVERTTOCLOB


var c clob
set autoprint on
declare
  l_blob blob;
  dest_offset integer := 1;
  src_offset integer := 1;
  lang_context integer := 1;
  warning integer;
begin
  -- lock the row and fetch the blob into a local variable
  select b into l_blob from t for update;
  -- :c is the SQL*Plus bind variable declared above
  dbms_lob.createtemporary(:c, true);
  -- convert the whole blob into the clob; the literal 1 is the blob character set id
  dbms_lob.converttoclob(
    :c, l_blob, DBMS_LOB.LOBMAXSIZE,
    dest_offset, src_offset,
    1, lang_context, warning);
end;
/
C
---
XXX
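
For blobs where the interesting part fits in the first bytes, a shortcut (my sketch, not from the original example) is DBMS_LOB.SUBSTR, which returns the chunk as a RAW:

select utl_raw.cast_to_varchar2(dbms_lob.substr(b, 2000, 1)) from t;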

On associativity, transitivity and reflexivity

Addition is supposed to be associative.
a+(b+c)=(a+b)+c

This does not always hold in Oracle when mixing month and day intervals:

with t as (select
interval '1' month a,
date '2007-09-26' b,
interval '7' day c
from dual)
select a+(b+c),(a+b)+c
from t;

A+(B+C) (A+B)+C
----------- -----------
03-NOV-2007 02-NOV-2007

A+(B+C) adds the 7 days first (03-OCT-2007) and then one month (03-NOV-2007), whereas (A+B)+C adds one month first (26-OCT-2007) and then 7 days (02-NOV-2007).

Equality is supposed to be transitive:
if (a=b and b=c) then (a=c)
However, in Oracle the equality operator may imply an implicit datatype conversion:

with t as (select '.0' a, 0 b, '0.' c from dual)
select
case when a=b then 'YES' else 'NO ' end "A=B",
case when b=c then 'YES' else 'NO ' end "B=C",
case when a=c then 'YES' else 'NO ' end "A=C"
from t;
A=B B=C A=C
--- --- ---
YES YES NO

A=B and B=C each compare a string with a number, so the string is implicitly converted to a number; A=C compares two strings, and '.0' is not the same string as '0.'.

The equality operator is also supposed to be reflexive
a=a

This unfortunately does not hold for NULL, because NULL=NULL does not evaluate to TRUE:
with t as (select null a from dual)
select case when a=a then 'YES' else 'NO ' end "A=A"
from t;
A=A
---
NO

:mrgreen:

OOW schedule

I have started building my schedule. There are about 1722 sessions to choose from this year, so the choice is difficult!

Due to jet lag, I probably cannot do everything I planned. And I also like to spend some time with the exhibitors at the booths.

Ok, the ones I will not miss:
Steven Feuerstein : Break Your Addiction to SQL!
Amit Ganesh : Oracle Database 11g: The Next-Generation Data Management Platform
Bryn Llewellyn : Doing SQL from PL/SQL: Best and Worst Practices
Thomas Kyte : The Top 10–No, 11–New Features of Oracle Database 11g
Lucas Jellema : The Great Oracle SQL Quiz

SIG SOUG: TimesTen

I attended a SOUG SIG meeting last Thursday.

We first had a presentation from Thomas Koch about performance at Zurich Kantonalbank. As I worked as a DBA in that bank for about two years, I already had my own opinion about performance there 😕

The second presentation was about TimesTen. I must say I have never used TimesTen before, so I was glad to hear Stefan Kolmar from Oracle presenting the product. Ok, here it is in a few lines.

In TimesTen, the whole database resides in memory. TimesTen is an Oracle product and a database, but it is not an Oracle Database. The objective is a response time in microseconds and hundreds of thousands of transactions per second. You have a log buffer, and you can decide to flush the buffer to disk asynchronously.

Let me try to explain Stefan's example:
You have a mobile phone company. Foreign calls can be fairly expensive, so those transactions will be written synchronously to disk. Local calls cost about 1 Euro on average, so if you flush the log to disk every ten transactions, an average of 5 Euros will not be billed in case of a failure. In this way you can choose which transactions are synchronous and which are asynchronous. It looks promising, but probably not for critical businesses like banking, where you are required to guarantee zero data loss.

There is an additional feature in TimesTen which is called “cache for Oracle”. It is a layer between the client and the database. It does not offer the same functionality as Oracle (for example, you cannot run PL/SQL), but it may offer microsecond access.

I will describe two examples:
1) read only
You have a flight reservation company. Flight reservations are very important, so they will stay in the database (no data loss). Flight schedules are read-only for the clients; they will be cached in TimesTen. So accessing the timetables will be ultra-fast, while booking may take a few seconds.

2) on demand
You have a call center. When a customer phones, all data related to that customer (history, name, contracts, contract details) is immediately loaded from the database into TimesTen. So when the call center employee asks for any info, it is immediately available.

How much does it cost? Check on store.oracle.com

For a tiny database of up to 2 GB, it is $6000 per processor for 3 years. More options, more money…

Please RTFOM !

Today I opened two SRs about flashback archive in 11g. In one of them, I complained that user SCOTT was not allowed to create a flashback archive. In the documentation that I downloaded a few weeks ago, I read:
Prerequisites
You must have the FLASHBACK ARCHIVE ADMINISTER system privilege to create a flashback data archive. This privilege can be granted only by a user with DBA privileges. In addition, you must have the CREATE TABLESPACE system privilege to create a flashback data archive, as well as sufficient quota on the tablespace in which the historical information will reside.

So as I was getting an ORA-55611, I opened an SR. The support engineer pointed me to the online documentation, where I was astonished to read:
Prerequisites
You must have the FLASHBACK ARCHIVE ADMINISTER system privilege to create a flashback data archive. In addition, you must have the CREATE TABLESPACE system privilege to create a flashback data archive, as well as sufficient quota on the tablespace in which the historical information will reside. To designate a flashback data archive as the system default flashback data archive, you must be logged in as SYSDBA.

Well, Read The Fine Online Manual !!!
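
In other words, a DBA simply has to grant the privilege first; a one-line sketch of what SCOTT was missing:

grant flashback archive administer to scott;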

The second SR is related to a very long retention (about the age of the Earth):

SQL> alter flashback archive fba01
modify retention 4106694757 year;

Flashback archive altered.

SQL> select retention_in_days
from DBA_FLASHBACK_ARCHIVE;
RETENTION_IN_DAYS
-----------------
1

:mrgreen:

isNumber in sql

I tried this in 11g, on a table T with a single VARCHAR2 column X containing three rows:

X
-------
123
-1.2e-3
abc


select x,
to_number(
xmlquery('number($X)'
passing x as x
returning content)) n
from t;
X N
------- ----------
123 123
-1.2e-3 -.0012
abc

It is quite a common task to extract numbers from VARCHAR2 columns and to dig out poor-quality data.

select x, to_number(x) from t;
ERROR:
ORA-01722: invalid number

A well-known PL/SQL approach would be to use an exception handler. Example:

create or replace function f(x varchar2)
return number is
begin return to_number(x);
exception when others then return null;
end;
/
select x, f(x) n from t;
X N
------- ----------
123 123
-1.2e-3 -.0012
abc

Another approach in plain SQL could involve CASE and a regular expression:

select x,
case when
regexp_like(x,
'^-?(\d+\.?|\d*\.\d+)([eE][+-]\d+)?$')
then to_number(x)
end n
from t;
X N
------- ----------
123 123
-1.2e-3 -.0012
abc

installing OID 10.1.4.2 Preview 1

Download oracle-oid-10.1.4.2.0-1.0.i386.rpm
Download oracle-xe-univ-10.2.0.1-1.0.i386.rpm

Install the rpm
# rpm -i oracle-*.i386.rpm

In SLES 10, there is no /bin/cut, so let's create a symlink as root to avoid an error when running config-oid.sh:
# ln -s /usr/bin/cut /bin/cut

Run the configure script as root
# /etc/init.d/oracle-oid configure
That's all, folks! It created and configured a running Oracle XE 10gR2 database. Excellent!

The LDAP server is running and configured.
$ ldapsearch cn=orcladmin dn
cn=orcladmin, cn=Users, dc=com

There is a nice video to run on Linux: oracleauthenticationservices_demo.vvl
Save the file, set the display, then:
$ chmod +x oracleauthenticationservices_demo.vvl
$ ./oracleauthenticationservices_demo.vvl

It also shows how to use the Oracle LDAP server (OID) to authenticate your Linux users with the preview of Oracle Authentication Services.

Oracle Database 11g: The Top Features for DBAs and Developers

I am always delighted to read the top features by Arup Nanda.

He started his 11g series : Oracle Database 11g: The Top Features for DBAs and Developers

There are many partitioning enhancements. The most exciting feature for me is INTERVAL partitioning. Range partitioning has been a huge cause of downtime and wasted storage: in 10g and before, partitioning by date required that the partitions be defined before the values are inserted.

Now we have automatic partition creation 😀


create table t(d date)
partition by range(d)
interval(interval '1' month)
(partition p1 values less than (date '0001-01-01'));

One partition must be created manually; here that partition will contain all dates from 1-JAN-4712 BC up to 31-DEC-0000 (which is not even a legal date, by the way).
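
New partitions then appear automatically as rows arrive; a quick check (a sketch; the partition names are system-generated):

insert into t values (date '2007-10-15');
select partition_name, high_value
from user_tab_partitions
where table_name = 'T';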

There is also a new syntax to address a partition by value:
SQL> insert into t values (date '2000-01-10');

1 row created.

SQL> insert into t values (date '2000-01-20');

1 row created.

SQL> insert into t values (date '2000-03-30');

1 row created.

SQL> select * from t partition for (date '2000-01-01');
D
-------------------
10.01.2000 00:00:00
20.01.2000 00:00:00

Note that this syntax can be used with any form of partitioning. Here it is with a list-list composite:

SQL> create table t(x number, y number)
partition by list(x)
subpartition by list(y)
subpartition template (
subpartition sp1 values(1),
subpartition sp2 values(2))
(partition values(1), partition values(2));

Table created.

SQL> insert into t values(1,2);
1 row created.

SQL> select * from t subpartition for (1,2);
X Y
---------- ----------
1 2

Ok, one more feature Arup introduced is REF partitioning: both the parent and the child tables are partitioned, and the child is partitioned on a column of the parent table that does not exist in the child table (just as you had bitmap join indexes, you now have REF partitions). Check it on his site.
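
A minimal sketch of the REF partitioning syntax (the table and constraint names are mine, not Arup's; the foreign key column must be NOT NULL):

create table orders(
  order_id number primary key,
  order_date date)
partition by range(order_date)
(partition p2007 values less than (date '2008-01-01'));

create table order_items(
  item_id number primary key,
  order_id number not null,
  constraint fk_order foreign key(order_id) references orders(order_id))
partition by reference(fk_order);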

Finally Arup explained SYSTEM partitioning, which is not inconceivable, but will hardly be used.

Imagine you have a table containing just one single LOB column, and a LOB cannot be used as a partition key.

SQL> create table t(x clob)
partition by system (
partition p1,
partition p2,
partition p3,
partition p4);

Table created.

So far this seems fine. So what is the problem? You cannot insert into that table without naming a partition!
SQL> insert into t values(1);
insert into t values(1)
*
ERROR at line 1:
ORA-14701: partition-extended name or bind variable
must be used for DMLs on tables partitioned by the
System method

So you must specify in which partition you want to add the data: round robin, random, whatever you decide.


SQL> insert into t partition (P1) values ('x');

1 row created.

SQL> insert into t partition (P2) values ('y');

1 row created.

If you want to use bind variables, you can use DATAOBJ_TO_PARTITION:

SQL> select object_id
from user_objects
where object_name='T'
and subobject_name is not null;
OBJECT_ID
----------
55852
55853
55854
55855

SQL> var partition_id number
SQL> exec :partition_id := 55852

PL/SQL procedure successfully completed.

SQL> insert into t
partition (dataobj_to_partition("T",:partition_id))
values ('x');

1 row created.
SQL> exec :partition_id := 55853

PL/SQL procedure successfully completed.

SQL> insert into t
partition (dataobj_to_partition("T",:partition_id))
values ('x');

1 row created.

Actually, the name SYSTEM partitioning is misleading: YOU are responsible for choosing the partition into which you insert, not the system :mrgreen:

flashback archive table

One of the problems with flashback queries in 10g and before is that you never know whether they will work; in particular, you cannot expect flashback queries to reach far back into the past.

Let's imagine you want to export your CUSTOMER table as of 30/06/2007. No chance in 10g…
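
The query you would like to run is an as-of query like this sketch (CUSTOMER being a hypothetical table), and it only works as long as the undo, or in 11g the flashback archive, still covers that date:

select * from customer
as of timestamp to_timestamp('2007-06-30','YYYY-MM-DD');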

Well, with 11g, you can create a flashback archive, and it will save all changes until the end of the retention period (many years if you want).

Here it is :
SQL> connect / as sysdba
Connected.
SQL> create tablespace s;

Tablespace created.

SQL> create flashback archive default fba01 tablespace s
retention 1 month;

Flashback archive created.

SQL> connect scott/tiger
Connected.
SQL> create table t(x number) flashback archive;

Table created.

SQL> host sleep 10

SQL> insert into t(x) values (1);

1 row created.

SQL> commit;

Commit complete.

SQL> SELECT dbms_flashback.get_system_change_number FROM dual;
GET_SYSTEM_CHANGE_NUMBER
------------------------
337754

SQL> update t set x=2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from t as of scn 337754;
X
----------
1

SQL> alter table t no flashback archive;

Table altered.

SQL> drop table t;

Table dropped.

SQL> select FLASHBACK_ARCHIVE_NAME, RETENTION_IN_DAYS,
STATUS from DBA_FLASHBACK_ARCHIVE;
FLASHBACK_ARCHIVE_NAME RETENTION_IN_DAYS STATUS
---------------------- ----------------- -------
FBA01 30 DEFAULT

SQL> connect / as sysdba
Connected.
SQL> drop flashback archive fba01;

Flashback archive dropped.

SQL> drop tablespace s;

Tablespace dropped.

Note that a month counts as 30 days. Also, if you try to create a flashback archive in a non-empty tablespace, you may get
ORA-55603: Invalid Flashback Archive command
which is not a very helpful message.

select*from"EMP"where'SCOTT'="ENAME"…

What is wrong with this query?

select*from"EMP"where'SCOTT'="ENAME"and"DEPTNO"=20;
EMPNO ENAME JOB MGR HIREDATE
---------- ---------- --------- ---------- ---------
7788 SCOTT ANALYST 7566 13-JUL-87

It is a zero-space query 😎 The double-quoted identifiers and the string literal act as their own delimiters, so no whitespace is needed at all.

You could write it as

select
*
from
"EMP"
where
'SCOTT'="ENAME"
and
"DEPTNO"=20;

Personally, I would write it as

select *
from emp
where ename='SCOTT'
and deptno=20;

Formatting is very important: it makes your code nice to read, and indentation makes the blocks visible.

Auto-formatting is also fine, but I like to decide myself whether a line is too long, or whether I want FROM and EMP on the same line.

Have a look at the free online SQL Formatter SQLinForm

positive infinity

A long, long time ago I read the following note on positive infinity: http://www.ixora.com.au/notes/infinity.htm

Today I finally succeeded in inserting positive infinity into a NUMBER column:

create table t as select
STATS_F_TEST(cust_gender, 1, 'STATISTIC','F') f
from (
select 'M' cust_gender from dual union all
select 'M' from dual union all
select 'F' from dual union all
select 'F' from dual)
;

I am so happy 😀

Let's try a few queries. Note that SQL*Plus displays positive infinity in a NUMBER column as ~ and negative infinity as -~.


SQL> desc t
Name Null? Type
----------------- -------- ------
F NUMBER

SQL> select f from t;

F
----------
~

SQL> select f/2 from t;
select f/2 from t
*
ERROR at line 1:
ORA-01426: numeric overflow

SQL> select -f from t;

-F
----------
-~

SQL> select cast(f as binary_double) from t;

CAST(FASBINARY_DOUBLE)
----------------------
Inf

SQL> select * from t
2 where cast(f as binary_double) = binary_double_infinity;

F
----------
~

Now expect a lot of bugs with your Oracle clients 😎

Toad 9 for example returns

SQL> select f from t
select f from t
*
Error at line 1
OCI-22065: number to text translation for the given
format causes overflow

on delete cascade

The purpose of a referential integrity constraint is to enforce that each child record has a parent.

SQL> CREATE TABLE DEPT
2 (DEPTNO NUMBER PRIMARY KEY,
3 DNAME VARCHAR2(10)) ;

Table created.

SQL> CREATE TABLE EMP
2 (EMPNO NUMBER PRIMARY KEY,
3 ENAME VARCHAR2(10),
4 DEPTNO NUMBER
5 CONSTRAINT EMP_DEPT_FK
6 REFERENCES DEPT(deptno));

Table created.

SQL> INSERT INTO DEPT(deptno,dname) VALUES
2 (50,'CREDIT');

1 row created.

SQL> INSERT INTO EMP(EMPNO,ENAME,DEPTNO) VALUES
2 (9999,'JOEL',50);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> DELETE DEPT WHERE DEPTNO=50;
DELETE DEPT WHERE DEPTNO=50
*
ERROR at line 1:
ORA-02292: integrity constraint (SCOTT.EMP_DEPT_FK) violated
- child record found

I cannot delete this department, because the department is not empty. Fortunately ❗

Let's redefine the constraint with an ON DELETE CASCADE clause:


SQL> alter table emp drop constraint emp_dept_fk;

Table altered.

SQL> alter table emp add constraint emp_dept_fk
2 foreign key (deptno) references dept(deptno)
3 on delete cascade;

Table altered.

SQL> DELETE DEPT WHERE DEPTNO=50;

1 row deleted.

SQL> select * from emp where ename='JOEL';

no rows selected

Note the line "1 row deleted". This is evil 👿 I have deleted a department, and there were employees in it, but I got no error, no warning and no feedback about the rows deleted from EMP.

Instead of improving the data quality, the ON DELETE CASCADE foreign key constraint here silently deleted rows. Joel will phone you one day and ask why he has been deleted…

There is one more clause of the foreign key, which sets the referencing column to NULL:

SQL> INSERT INTO DEPT(deptno,dname) VALUES
2 (60,'RESTAURANT');

1 row created.

SQL> INSERT INTO EMP(EMPNO,ENAME,DEPTNO) VALUES
2 (9998,'MARC',60);

1 row created.

SQL> alter table emp drop constraint emp_dept_fk;

Table altered.

SQL> alter table emp add constraint emp_dept_fk
2 foreign key (deptno) references dept(deptno)
3 on delete set null;

Table altered.

SQL> DELETE DEPT WHERE DEPTNO=60;

1 row deleted.

SQL> select * from emp where ename='MARC';

EMPNO ENAME DEPTNO
---------- ---------- ----------
9998 MARC

Marc has no department, because his department has been deleted. Again, no feedback, no warning, no error.

Instead of improving the data quality, the ON DELETE SET NULL foreign key constraint here silently updated the column to NULL. Marc will wonder why he gets no invitations to the department meetings.
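
To spot such constraints before they surprise you, a quick look at the data dictionary (a sketch):

select table_name, constraint_name, delete_rule
from user_constraints
where constraint_type = 'R'
and delete_rule in ('CASCADE','SET NULL');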

What could be worse???

Triggers, of course! Triggers not only remove rows in child tables, they can also do very weird things, like updating another table, changing the values you are trying to insert, printing a message, etc.

Also triggers are programmed by your colleagues, so they must be full of bugs 😈

You cannot imagine the number of problems that are caused by triggers and revealed only when tracing.

I once had something like

SQL> CREATE INDEX I ON T(X);

P07431B processed

Well, after enabling the trace, I discovered one trigger that fired on any DDL and did nothing other than this distracting dbms_output call for “debugging” purposes. Guess what, googling and searching Metalink for the message did not help much…
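
For illustration, a hypothetical reconstruction of that kind of trigger (only the message text comes from the real case):

create or replace trigger ddl_debug_trg
after ddl on schema
begin
  dbms_output.put_line('P07431B processed');
end;
/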

errorlogging in 11g

This is a very neat feature in 11g.

I have a script called foo.sql

create table t(x number primary key);
insert into t(x) values (1);
insert into t(x) values (2);
insert into t(x) values (2);
insert into t(x) values (3);
commit;

It is obvious at a glance that this script will return an error, but which one?

Let’s errorlog !


SQL> set errorl on
SQL> @foo

Table created.

1 row created.

1 row created.

insert into t(x) values (2)
*
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.SYS_C004200) violated

1 row created.

Commit complete.

SQL> set errorl off
SQL> select timestamp,script,statement,message from sperrorlog;
TIMESTAMP SCRIPT STATEMENT
---------- ------- ---------------------------
MESSAGE
---------------------------------------------------------
11:18:56 foo.sql insert into t(x) values (2)
ORA-00001: unique constraint (SCOTT.SYS_C004200) violated

There is also a huge bonus 😀

You can use it with 9i and 10g databases too! Only the client must be 11g. To download just the 11g client, go to the Oracle E-Delivery website.
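
SET ERRORLOGGING can also write to a table of your choice instead of SPERRORLOG; a minimal sketch (the table name is invented):

SQL> set errorlogging on table scott.my_errorlog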

Even though it is small, this is one of my favorite new features!

the password is no longer displayed in dba_users.password in 11g

By reading Pete Finnigan’s Oracle security weblog today, I discovered that the password is no longer displayed in DBA_USERS in 11g.


select username,password
from dba_users
where username='SCOTT';
USERNAME PASSWORD
-------- ------------------------------
SCOTT

select name,password
from sys.user$
where name='SCOTT';
NAME PASSWORD
----- ------------------------------
SCOTT F894844C34402B67

On the one hand, it is good for security.

On the other hand, it is a huge change which is not documented (I immediately sent comments to the Security and Reference book authors), and it will make a lot of scripts fail (scripts that used to change the password temporarily to log in and change it back to the original value afterwards).
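
Such scripts typically restore the original hash afterwards with the undocumented but widely used IDENTIFIED BY VALUES syntax; a sketch reusing the hash shown above:

alter user scott identified by values 'F894844C34402B67';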

Protecting the hash is extremely important, check your scripts for 11g compatibility!

keep dense_rank with multiple columns

create table t(
deptno number,
firstname varchar2(10),
lastname varchar2(10),
hiredate date);

insert into t values (
10,'Jo','Smith',date '2001-01-01');

insert into t values (
10,'Jack','River',date '2002-02-02');

To get the latest hiredate per department:
select deptno,
max(hiredate) hiredate
from t
group by deptno;

DEPTNO HIREDATE
---------- ---------
10 02-FEB-02

If you want to get the name of the employee hired at that date, you might mistakenly believe the following works:

select deptno,
max(firstname) keep (dense_rank last
order by hiredate) firstname,
max(lastname) keep (dense_rank last
order by hiredate) lastname,
max(hiredate) hiredate
from t
group by deptno;

DEPTNO FIRSTNAME LASTNAME HIREDATE
---------- ---------- ---------- ---------
10 Jack River 02-FEB-02

This will produce a wrong result if hiredate is not unique:

insert into t values (10,'Bob','Zhong', date '2002-02-02');
select deptno,
max(firstname) keep (dense_rank last
order by hiredate) firstname,
max(lastname) keep (dense_rank last
order by hiredate) lastname,
max(hiredate) hiredate
from t
group by deptno;

DEPTNO FIRSTNAME LASTNAME HIREDATE
---------- ---------- ---------- ---------
10 Jack Zhong 02-FEB-02

Of course, there is no employee called Jack Zhong: the first name and the last name come from two different rows.

To get a consistent record, it is possible to add all the columns to the ORDER BY:

select deptno,
max(firstname) keep (dense_rank last
order by hiredate,firstname,lastname) firstname,
max(lastname) keep (dense_rank last
order by hiredate,firstname,lastname) lastname,
max(hiredate) hiredate
from t
group by deptno;

DEPTNO FIRSTNAME LASTNAME HIREDATE
---------- ---------- ---------- ---------
10 Jack River 02-FEB-02
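
An alternative sketch with an analytic function, keeping one deterministic row per department (same tie-breaking columns as above):

select deptno, firstname, lastname, hiredate
from (
  select t.*,
         row_number() over (
           partition by deptno
           order by hiredate desc, firstname desc, lastname desc) rn
  from t)
where rn = 1;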

get Nth column of a table

I answered this question twice, once on the OTN forums and once on developpez.net.

Here is the latest one, to get the third column of EMP:

select
column_name as name,
extractvalue(column_value,'/ROW/'||column_name) as value
from table(xmlsequence(cursor(select * from emp))),
user_tab_columns
where COLUMN_ID=3 and table_name='EMP'
;


NAME VALUE
---- ----------
JOB CLERK
JOB SALESMAN
JOB SALESMAN
JOB MANAGER
JOB SALESMAN
JOB MANAGER
JOB MANAGER
JOB ANALYST
JOB PRESIDENT
JOB SALESMAN
JOB CLERK
JOB CLERK
JOB ANALYST
JOB CLERK

probably useless, but fun 😉

How to compare schema

If you have received DDL statements from your developers and you want to check whether they match the current state of the development database, because the developers have made a lot of changes in a quick and undocumented manner, what are your options?

I found this handy feature in Toad :
1) I create my objects on a separate database with the ddl I received from development
2) I compare the schema they use with the schema I created in Toad
–> Database –> Compare –> Schema
I select the options I want:
–> functions, indexes, packages, procedures, triggers, tables, view
I select the Reference and Comparison connections/schemas. Then I click compare
3) I receive the result
(only) 29 differences
4) the real bonus: I receive a script to bring the live database in line with the DDL I received. Undocumented changes should never happen, so I do some communication with the developers:

drop index foo;
drop table bar;
alter table gaz drop column bop;
alter table gaz modify (quux null);

This is not going to be blindly executable (some changes are simply impossible to implement this way), but for my little test, I was happy to discover that function.

I have been using ERwin for this purpose before, but the version I have (4.1) is very buggy and does not support a lot of syntax (ex: deferred constraints, create view v as select cast(1 as number(1)) x from dual, etc…). Also, ERwin can only compare against the current model, so there is no direct comparison between two database schemas.

how to spell 999999999 ?


begin
dbms_output.put_line(
to_char(
timestamp '9999-12-31 23:59:59.999999999',
'FF9SP'));
end;
/
NINE HUNDRED NINETY-NINE MILLION NINE HUNDRED NINETY-NINE
THOUSAND NINE HUNDRED NINETY-NINE

Unfortunately, I could not get this to work in SQL in 10.2.0.2:

select
to_char(
timestamp '9999-12-31 23:59:59.999999999',
'FF9SP') X
from
dual;
ORA-01877: string is too long for internal buffer

Well, since the string is too long, let's try with an even LONGER format string 😈 and keep only the part before the / separator with REGEXP_SUBSTR:

select
regexp_substr(
to_char(
timestamp '9999-12-31 23:59:59.999999999',
'FF9SP/FMDAY MONTH DDTHSP YYYYSP A.D. HH24SP MISP SSSP')
,'[^/]+')X
from
dual;

X
---------------------------------------------------------
----------------------------------
NINE HUNDRED NINETY-NINE MILLION NINE HUNDRED NINETY-NINE
THOUSAND NINE HUNDRED NINETY-NINE

SQL Expert?

I attended the SQL certified expert beta exam this morning. There were a lot of errors in it; I added in the comments that they have to thoroughly review their regular expression questions. There were a lot of rubbish questions, but hardly any challenge; it is more about detecting incorrect syntax. So I am disappointed. I have to wait 3 months for the result, but I expect no more than 90% 😎

They even have an exhibit with a table containing many columns with the same name (!)

Well, I hope they will improve the exam, based on the comments I made, by the time the production release comes out.

The time given to answer the questions is sufficient. I had only 139 questions in 3 hours, and I actually needed only 2 hours, with plenty of time to review.

ORA-01466: unable to read data – table definition has changed

I re-edited this post and it is still unresolved. I thought it was related to the system time, but apparently not 😮


SQL> create table t(x number);

Table created.

SQL> set transaction read only ;

Transaction set.

SQL> select * from t;
select * from t
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed

If I wait one minute after my create table statement, it works


SQL> drop table t;

Table dropped.

SQL> create table t(x number);

Table created.

SQL> host sleep 60

SQL> set transaction read only;

Transaction set.

SQL> select * from t;

no rows selected

😈