TRUNCATE is useful in some serial batch processing, but it breaks read consistency, produces strange errors and wrong results for selects that are already running, and it requires the DROP ANY TABLE system privilege when run against a table you do not own.
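For example, to let a hypothetical BATCH_USER account clean up SCOTT's table, the grants look quite different (a minimal sketch; BATCH_USER is only an illustration, and a select that is running while the segment gets truncated may also hit errors such as ORA-08103):
-- DELETE on a table you do not own only needs an object privilege:
grant delete on scott.t to batch_user;
-- TRUNCATE on that same table needs the DROP ANY TABLE system privilege:
grant drop any table to batch_user;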
On top of that, DELETE turns out to be faster in the following test case.
In 12c, you can have over one million partitions in a table, but for the sake of the universe, I’ll try with 10,000.
SQL> create table scott.t(x)
partition by range(x)
interval(1)
(partition values less than (0))
as
select rownum
from dual
connect by level<10001;
SQL> select count(*) from scott.t;
COUNT(*)
----------
10000
The 10K-row table is created; each row is in its own partition.
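You can double-check in the dictionary that every row really landed in its own interval partition (the count should be the 10,000 interval partitions plus the initial range partition):
SQL> select count(*)
from dba_tab_partitions
where table_owner = 'SCOTT'
and table_name = 'T';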
SQL> delete scott.t;
10000 rows deleted.
Elapsed: 00:00:04.02
SQL> rollback;
Rollback complete.
Not tuned, not parallelized, nothing fancy. It took 4 seconds for 10,000 rows. If you have one billion rows, it is doable in a few hours, but you had better do it in chunks then.
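A minimal sketch of such a chunked delete, assuming intermediate commits are acceptable for your batch (the 100000 batch size is arbitrary):
begin
  loop
    -- delete one chunk at a time and commit to keep undo small
    delete from scott.t where rownum <= 100000;
    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/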
Anyway, let’s truncate
SQL> truncate table scott.t;
Table truncated.
Elapsed: 00:05:19.24
Five minutes!!! To truncate that tiny table.
If you have one million partitions with underlying indexes and LOBs, it will probably fail with out-of-memory errors after hours, with a large impact on the dictionary, SYSAUX and undo.
The dictionary changes are what make it so slow here.
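To get an idea of how much segment and dictionary housekeeping the TRUNCATE has to do, you can count the partition segments it must touch (a quick look via the standard DBA_SEGMENTS view):
SQL> select count(*)
from dba_segments
where owner = 'SCOTT'
and segment_name = 'T'
and segment_type = 'TABLE PARTITION';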
Of course, in the real world you’d compare a one-billion-row DELETE with a 10,000-partition TRUNCATE.
Also see Bug 18251841 and Doc 1667223.1