To delete a large number of rows, for instance all rows with a date up to the end of 2010, you can issue this simple statement:
SQL> DELETE FROM T WHERE C<DATE '2011-01-01';
This is perfectly fine. The table remains online, other users are not much affected (maybe they will not even notice the lower IO performance).
It will generate quite a lot of UNDO, and you will need enough space in your archivelog destination, a large undo tablespace and a large undo retention setting (to prevent ORA-01555 snapshot too old).
If your table is something like 100GB big, you do it over the weekend, and you have 500GB of undo and 250GB of free space in your archive destination, you will be fine. Well. Maybe.
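If you want to keep an eye on how much undo the running delete has already consumed, you can watch it from another session. A small sketch (it simply lists every session with an open transaction; USED_UBLK is in undo blocks, USED_UREC in undo records):

-- undo consumed by currently open transactions, one row per active transaction
select s.sid, s.username, t.used_ublk, t.used_urec
  from v$transaction t
  join v$session s on s.taddr = t.addr;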
There are workarounds where you create a new table, then rename, etc…, but this is not the scope of this post, and you will need to validate your index / foreign key / online strategy with the application guys.
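For the record, the skeleton of such a workaround could look like this (only a rough sketch: T2 is an illustrative name, and all indexes, constraints, grants and triggers would have to be re-created and validated):

-- keep only the rows you want, then swap the tables
CREATE TABLE T2 AS SELECT * FROM T WHERE C>=DATE '2011-01-01';
DROP TABLE T;
RENAME T2 TO T;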
Another way to decrease the runtime per statement and the undo requirement per statement (but increase the overall elapsed time) is to divide the job into chunks, for instance to delete 100’000’000 rows each night for 10 days.
SQL> DELETE FROM T WHERE C<DATE '2011-01-01' AND ROWNUM<=1e8;
Or, if you want to delete in much smaller chunks to accommodate your tiny undo tablespace, you could try
BEGIN
  LOOP
    DELETE FROM T WHERE C<DATE '2011-01-01' AND ROWNUM<=1e6;
    EXIT WHEN SQL%ROWCOUNT=0;
    COMMIT;
  END LOOP;
END;
/
This will run longer than a single transaction, but it is quite useful if your undo tablespace is too small. Also, if you abort it (CTRL-C or kill session), you will not lose all progress (but you lose integrity/atomicity) and your KILL SESSION will not last forever. With a single transaction, your session may be marked as killed for hours or days...
Hi Laurent –
I would suggest never deleting one billion rows! use a create table as select to keep the rows you want (negate your delete predicate in the CTAS).
or if you persist in deleting – use dbms_parallel_execute. the rownum trick is inefficient (especially in the presence of a full scan! it restarts the full scan each time, each delete takes longer and longer and longer 🙂 )
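For reference, a minimal dbms_parallel_execute sketch could look like this (task name, chunk size and parallel degree are illustrative, and the predicate assumes the table T from the post):

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('DELETE_T');
  -- split the table into rowid ranges of about 100000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name => 'DELETE_T', table_owner => user, table_name => 'T',
    by_row => true, chunk_size => 100000);
  -- run the delete against each chunk with 4 job slaves
  DBMS_PARALLEL_EXECUTE.RUN_TASK('DELETE_T',
    'delete from t where c<date ''2011-01-01'' and rowid between :start_id and :end_id',
    DBMS_SQL.NATIVE, parallel_level => 4);
  DBMS_PARALLEL_EXECUTE.DROP_TASK('DELETE_T');
END;
/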
create table as select would be quite nice if I could stop the underlying application… I just did one delete of one billion rows in a SAP database in one statement (without rownum); it ran for about 24 hours. Without stopping SAP. Nor invalidating objects.
Indeed the delete with rownum will be slower than dbms_parallel_execute, but it will also consume less resource (CPU) and will probably not take longer and longer, as there will be less and less data to scan. Maybe just as long.
I tried in the SCOTT schema:
SQL> drop table t;
Table dropped.
SQL> create table t as select dbms_random.value from dual;
Table created.
SQL> insert into t select dbms_random.value from t;
1 row created.
SQL> r
1* insert into t select dbms_random.value from t
2 rows created.
SQL> r
1* insert into t select dbms_random.value from t
4 rows created.
SQL> r
1* insert into t select dbms_random.value from t
8 rows created.
SQL> r
1* insert into t select dbms_random.value from t
16 rows created.
SQL> r
1* insert into t select dbms_random.value from t
32 rows created.
SQL> r
1* insert into t select dbms_random.value from t
64 rows created.
SQL> r
1* insert into t select dbms_random.value from t
128 rows created.
SQL> r
1* insert into t select dbms_random.value from t
256 rows created.
SQL> r
1* insert into t select dbms_random.value from t
512 rows created.
SQL> r
1* insert into t select dbms_random.value from t
1024 rows created.
SQL> r
1* insert into t select dbms_random.value from t
2048 rows created.
SQL> r
1* insert into t select dbms_random.value from t
4096 rows created.
SQL> r
1* insert into t select dbms_random.value from t
8192 rows created.
SQL> r
1* insert into t select dbms_random.value from t
16384 rows created.
SQL> r
1* insert into t select dbms_random.value from t
32768 rows created.
SQL> r
1* insert into t select dbms_random.value from t
65536 rows created.
SQL> r
1* insert into t select dbms_random.value from t
131072 rows created.
SQL> r
1* insert into t select dbms_random.value from t
262144 rows created.
SQL> r
1* insert into t select dbms_random.value from t
524288 rows created.
SQL> r
1* insert into t select dbms_random.value from t
1048576 rows created.
SQL> r
1* insert into t select dbms_random.value from t
2097152 rows created.
SQL> commit;
Commit complete.
SQL> select count(*), count(case when value<.99 then 1 end), count(*)-count(case when value<.99 then 1 end) diff from t;

  COUNT(*) COUNT(CASEWHENVALUE<.99THEN1END)       DIFF
---------- -------------------------------- ----------
   4194304                          4151998      42306

SQL> set timi on
SQL> delete from t where value<.99 and rownum<1e6;

999999 rows deleted.

Elapsed: 00:00:12.15
SQL> commit;
Commit complete.
Elapsed: 00:00:00.01
SQL> delete from t where value<.99 and rownum<1e6;

999999 rows deleted.

Elapsed: 00:00:12.50
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> delete from t where value<.99 and rownum<1e6;

999999 rows deleted.

Elapsed: 00:00:12.35
SQL> commit;
Commit complete.
Elapsed: 00:00:00.03
SQL> delete from t where value<.99 and rownum<1e6;

999999 rows deleted.

Elapsed: 00:00:12.41
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> delete from t where value<.99 and rownum<1e6;

152002 rows deleted.

Elapsed: 00:00:01.95
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> select count(*), count(case when value<.99 then 1 end), count(*)-count(case when value<.99 then 1 end) diff from t;

  COUNT(*) COUNT(CASEWHENVALUE<.99THEN1END)       DIFF
---------- -------------------------------- ----------
     42306                                0      42306
A bit less than one minute in total. Really not bad. Almost as fast as a single delete in my environment. Creating chunks of equal size would surely take some time too...
Hi Laurent –
What is the implication of revisiting the same block over and over to process the delete action?
I did something like this in the past:
delete from ABC where rowid in (select /*+ full(x) parallel(x,4) */ rowid from ABC x where Col='xyz')
It did work almost 4 times faster, or even more, if I recall.
/S/
Hi Sudhir,
Thanks for the hint. I tried but could not reproduce it.
SQL> delete from t1 where x<.95;

996343 rows deleted.

Elapsed: 00:00:14.68

SQL> delete from t1 where rowid in (select /*+ FULL(t) PARALLEL(t,4) */ rowid from t1 t where x<.95);

996343 rows deleted.

Elapsed: 00:00:15.09
On the other hand, if you really want to run in parallel, you could do:
SQL> alter session enable parallel dml
Session altered.
Elapsed: 00:00:00.03
SQL> delete /*+ PARALLEL(t,4) */ from t1 t where x<.95;

996343 rows deleted.

Elapsed: 00:00:05.86
But still, it is not quite 4 times faster. AND (the big AND) if you have multiple tables to delete, you'd better start multiple sqlplus sessions instead of parallelizing within one session.
Hi Laurent,
Thanks for the test case. I am going fuzzy on the context under which it happened in 2004 or so 🙂
But the developer gave a DELETE statement with a proper WHERE clause, and the explain plan looked super clean with just the right index use.
Unfortunately, the clustering factor killed the performance, as it was visiting the same block over and over, and the job just ran and ran.
I basically did a FTS, got hold of all the rowids, sorted them and deleted them. My observation was that the redo generated was lower as well, and the elapsed time was of course 4+ times smaller.
Since you are already ahead of the game by leaps and bounds, this process didn’t help much, but I wanted to mention the context.
Because developers at times want to develop a process that can be run using ROWNUM<1000 and a commit, and then may run it again the next day… and after that…
If you use an index, it works and it is not going to "feel" as expensive: you pay extra overall, but the system is impacted less. That works well for an ILM strategy.
Btw, I am a big fan of yours. I came across your Excel data manipulation and it’s just absolutely amazing. I am going to dig further into that.
I get your blog on Google Reader and I have heard it’s going to stop soon, so I need to find an alternative for that :-)
Thanks again!
/S/
Hi Sudhir,
Thanks for the nice words. Indeed, if you use hints like FULL on an old version, it may not reproduce without hints, with fresh statistics, on the latest patchset. The optimizer is much better than in 2004, I would claim.
I will write a separate post on Google Reader for my readers
Cheers
Laurent
Hi Laurent
I have a delete statement with the rowid, the primary key and a date value (yyyymmdd) in the where clause, but somehow the statement has been running for more than 12 hours. The number of records to be deleted is 107262 out of 218967984.
delete from test.snum where rowid=:B3 and snum_id=:B2 and yyyymmdd=:B1
---------------------------------------
| Id  | Operation                     |
---------------------------------------
|   0 | DELETE STATEMENT              |
|   1 |  DELETE                       |
|*  2 |   TABLE ACCESS BY USER ROWID  |
---------------------------------------
indexes: pk_snum_id (snum_id)
idx1_prod_id (prod_id)
idx2_mnr (mat_num)
Any advice?
Regards
-Haris
I fail to understand how you could delete more than one row with where rowid=:B3. Probably you are using some cursor loop; you should simply have fresh stats and use
delete from test.snum where yyyymmdd=:B1
Does test.snum have any trigger? I remember Tom Kyte mentioning something about the "restart" of the internal activity of a trigger when a consistent get fails, or some such thing…
Or maybe just a blocking lock? An unindexed FK (see the sketch below)?
Thanks
/S/
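Regarding the unindexed-FK theory, a quick and simplified dictionary check could look like this sketch; it only matches the leading column of each index, so multi-column foreign keys deserve a closer look, and it assumes the parent table is TEST.SNUM:

-- child-table FK columns referencing TEST.SNUM that are not covered by any index
select c.owner, c.table_name, c.constraint_name, cc.column_name
  from dba_constraints c
  join dba_cons_columns cc
    on cc.owner = c.owner and cc.constraint_name = c.constraint_name
 where c.constraint_type = 'R'
   and c.r_owner = 'TEST'
   and c.r_constraint_name in (select constraint_name
                                 from dba_constraints
                                where owner = 'TEST'
                                  and table_name = 'SNUM'
                                  and constraint_type in ('P','U'))
   and not exists (select 1
                     from dba_ind_columns ic
                    where ic.table_owner = cc.owner
                      and ic.table_name  = cc.table_name
                      and ic.column_name = cc.column_name
                      and ic.column_position = cc.position);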