Oracle is pretty strong at enforcing constraints.
Here is the table used throughout this blog post:
create table t(x number primary key, y number);
For instance, if you run alter table t add check (y<1000);
then Y will never be bigger than 1000, right?
SQL> insert into t values (1,2000);
insert into t values (1,2000)
Error at line 1
ORA-02290: check constraint (SCOTT.SYS_C0029609) violated
I believe this code to be unbreakable: if you have only the SELECT and INSERT privileges on the table, you cannot bypass the constraint.
Now let's imagine a more complex constraint: CHECK (sum(y) < 1000)
SQL> alter table t add check (sum(y) < 1000);
alter table t add check (sum(y) < 1000)
Error at line 1
ORA-00934: group function is not allowed here
OK, clear enough I suppose: we cannot handle this aggregate constraint with a CHECK condition.
We could use a BEFORE trigger that raises an exception:
CREATE TRIGGER tr
   BEFORE INSERT OR UPDATE
   ON t
   FOR EACH ROW
   WHEN (NEW.y > 0)
DECLARE
   s NUMBER;
BEGIN
   SELECT NVL (SUM (y), 0) INTO s FROM t; -- NVL guards the empty-table case

   IF (s + :new.y >= 1000)
   THEN
      raise_application_error (-20001, 'SUM(Y) would exceed 1000');
   END IF;
END;
/
The trigger now computes the sum and raises an exception whenever the limit would be exceeded.
SQL> insert into t values (2, 600);
1 row created.
SQL> insert into t values (3, 600);
insert into t values (3, 600)
*
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "SCOTT.TR", line 8
ORA-04088: error during execution of trigger 'SCOTT.TR'
SQL> drop trigger tr;
Trigger dropped.
SQL> truncate table t;
Table truncated.
But I am not good with triggers, and triggers are only as good as their developers; they have dark sides like mutating-table errors and the like.
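For instance, a multi-row insert makes this row-level trigger read the very table it is modifying (single-row INSERT ... VALUES statements are exempt, which is why the examples above worked). A sketch, assuming the trigger tr above is still in place:

insert into t select 10+rownum, 10 from dual connect by level <= 3;
-- ORA-04091: table SCOTT.T is mutating, trigger/function may not see it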
As Tom Kyte mentioned in the comments, the code above is not effective if more than one user updates the table at the same time.
Another popular approach is to create a fast-refreshable-on-commit mview with a constraint.
Let's see how this works.
create materialized view log on t with rowid, primary key (y) including new values;
create materialized view mv
refresh fast
on commit
as select sum(y) sum from t;
alter table mv add check (sum < 1000);
The constraint is on the mview, so once you commit (and only at commit time), Oracle will try to refresh the mview.
SQL> insert into t values (4, 600);
1 row created.
SQL> commit;
Commit complete.
SQL> insert into t values (5, 600);
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (SCOTT.SYS_C0029631) violated
SQL> select * from t;
X Y
---------- ----------
4 600
So far so good. The mechanism rolls back the transaction in case of an ORA-12008, a bit similar to a DEFERRABLE constraint.
But how safe is this after all? Oracle does not enforce anything on the table itself; it just fails on refresh...
Anything that does not meet the materialized view fast refresh prerequisites will also break the data integrity.
SQL> delete from t;
1 row deleted.
SQL> commit;
Commit complete.
SQL> alter session enable parallel dml;
Session altered.
SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20;
19 rows created.
SQL> commit;
Commit complete.
SQL> select sum(y) from t;
SUM(Y)
----------
19000
SQL> select staleness from user_mviews;
STALENESS
-------------------
UNUSABLE
Your data integrity is gone. By "breaking" the mview, with only the SELECT, INSERT and ALTER SESSION privileges, you can now insert any data.
This is documented under the FAST Clause:
"For both conventional DML changes and for direct-path INSERT operations, other conditions may restrict the eligibility of a materialized view for fast refresh."
Other operations, like TRUNCATE, may also prevent you from inserting fresh data:
SQL> alter materialized view mv compile;
Materialized view altered.
SQL> exec dbms_mview.refresh('MV','COMPLETE');
PL/SQL procedure successfully completed.
SQL> select * from mv;
SUM
----------
SQL> insert into t values(1,1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from mv;
SUM
----------
1
SQL> truncate table t;
Table truncated.
SQL> insert into t values(1,1);
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-32321: REFRESH FAST of "SCOTT"."MV" unsupported after detail table
TRUNCATE
You wrote:
… But I am not good with triggers, and the triggers are as bad as their developers and have dark sides like mutating triggers and the like …
I’d like to add that your trigger doesn’t even work correctly! Open two SQL*Plus sessions and execute that insert in each:
insert into t values (2, 600);
Neither will fail. Now commit both and check your data. This is the truly evil side of using triggers to do constraint processing: reads do not block writes, writes do not block reads, and we usually have more than one user!
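A minimal sketch of that race, assuming two concurrent sessions (the session labels are illustrative):

-- Session 1
insert into t values (2, 600);   -- trigger sees SUM(y) = 0, allows it

-- Session 2, before session 1 commits
insert into t values (3, 600);   -- session 1's uncommitted row is invisible here, so the trigger allows this too

-- both sessions commit: SUM(y) is now 1200 and the "constraint" is broken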
Glad you mentioned this 🙂
MVIEW constraints are clever… but not as safe as constraints on tables.
Ha, it seems to me that I am on a safer platform than you; I tried your example step by step:
SQL> select * from t;
no rows selected
SQL> alter session enable parallel dml;
Session altered.
SQL> insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20;
insert /*+PARALLEL*/ into t select 100+rownum, rownum*100 from dual connect by level<20
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P002
ORA-00600: internal error code, arguments: [6805], [CDC_RSID_SEQ$], [60], [],
[], [], [], [], [], [], [], []
select * from t;
no rows selected
so, sum(y) is still < 1000
Indeed, very safe!
I added the <code> CODE </code> tags.

Disabling table locks might fix those two unfortunate ways of side-stepping the constraint, I should imagine.
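If David means ALTER TABLE ... DISABLE TABLE LOCK, here is a quick sketch (the exact error text is from memory, so treat it as an assumption):

alter table t disable table lock;

-- TRUNCATE and parallel DML both need a table lock, so they should now fail up front:
truncate table t;
-- ORA-00069: cannot acquire lock -- table locks disabled for T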
@David: Nice!!! But it fixes the issue with the refresh, not the fundamental design problem of having the constraint on the mview. Any bug or issue that invalidates the mview would compromise data integrity. By contrast, if you have a trigger and the trigger is invalid, you will consistently get ORA-04098: trigger is invalid.
You added:
“As Tom Kyte mentioned in the comment, the code above is not efficient if more than one user update the table at the same time”
Well, it’s just plain wrong, or did you mean effective instead of efficient ??? Efficiency is not related to consistency and integrity. And integrity is broken here.
@Bart Thanks for the English lesson, as both words translate to “efficace” in French. I corrected the red thing.
Probably the most effective approach would be revoking all access to the table and granting access through a package that does not allow concurrent transactions on that table.
A very light version:
SQL> CREATE TABLE t
2 (
3 x NUMBER PRIMARY KEY,
4 y NUMBER
5 );
Table created.
SQL>
SQL> CREATE OR REPLACE PROCEDURE insert_t (x NUMBER, y NUMBER)
2 IS
3 sum_y NUMBER;
4 BEGIN
5 LOCK TABLE t IN EXCLUSIVE MODE;
6
7 SELECT SUM (y) INTO sum_y FROM t;
8
9 IF (sum_y + y >= 1000)
10 THEN
11 raise_application_error (-20001, 'SUM(Y) would exceed 1000');
12 END IF;
13
14 INSERT INTO t (x, y)
15 VALUES (x, y);
16
17 COMMIT;
18 END insert_t;
19 /
Procedure created.
SQL>
SQL> exec insert_t(1,600);
PL/SQL procedure successfully completed.
SQL> select * from t;
X Y
---------- ----------
1 600
SQL> exec insert_t(2,600);
BEGIN insert_t(2,600); END;
*
ERROR at line 1:
ORA-20001: SUM(Y) would exceed 1000
ORA-06512: at "U22166.INSERT_T", line 11
ORA-06512: at line 1
SQL> select * from t;
X Y
---------- ----------
1 600
SQL> exec insert_t(3,300);
PL/SQL procedure successfully completed.
SQL> select * from t;
X Y
---------- ----------
1 600
3 300
SQL> grant execute on insert_t to public;
Grant succeeded.