I just installed Oracle Enterprise Linux on my new notebook.
I wanted to check how much I could improve the performance of a count(*).
SQL> drop table big_emp;
table big_emp dropped.
258ms elapsed
SQL> create table big_emp as
with l as(select 1 from dual connect by level<=3000)
select rownum empno,ename,job,mgr,hiredate,sal,comm,deptno from emp,l,l
table big_emp created.
330,390ms elapsed
SQL> alter table big_emp add primary key(empno)
table big_emp altered.
481,503ms elapsed
SQL> alter system flush buffer_cache
system flush altered.
2,701ms elapsed
SQL> alter system flush shared_pool
system flush altered.
137ms elapsed
SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000
9,769ms elapsed
SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000
8,157ms elapsed
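A side note, not part of the timed run: the primary key helps here because the optimizer can count the rows from the much smaller index instead of full-scanning the table. A quick sketch to check it, using the objects created above:
select count(*) from big_emp;
select * from table(dbms_xplan.display_cursor());
-- the plan should show an INDEX FAST FULL SCAN of the primary key index
-- rather than a TABLE ACCESS FULL of BIG_EMP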
SQL> alter table big_emp drop primary key
table big_emp altered.
905ms elapsed
SQL> alter table big_emp add primary key(empno)
using index (
create index big_i on big_emp(empno)
global partition by hash(empno)
partitions 16 parallel 16)
table big_emp altered.
974,300ms elapsed
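A quick sanity check, not part of the timed run: the 16 hash partitions of the new index can be listed from the dictionary.
select partition_name, status
from user_ind_partitions
where index_name = 'BIG_I'
order by partition_position;
-- expect 16 rows, all USABLE; hash partitions get system-generated names like SYS_P...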
SQL> alter system flush buffer_cache
system flush altered.
601ms elapsed
SQL> alter system flush shared_pool
system flush altered.
140ms elapsed
SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000
5,201ms elapsed
SQL> select count(*) from big_emp
COUNT(*)
----------------------
126000000
2,958ms elapsed
As this is on a notebook, I suppose the benefit of partitioning is not as good as what you could get on a server with lots of fast disks and lots of CPUs, but I am pretty happy with the results.
It is still counting 126 million rows in less than 3 seconds 🙂
Thanks to the very kind sponsor of the notebook!
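To see how much parallelism the notebook really delivers for the count, one rough check (a sketch, to be run right after the query in the same session):
select statistic, last_query
from v$pq_sesstat
where statistic in ('Queries Parallelized','Allocation Height','Server Threads');
-- Allocation Height shows the degree of parallelism the last query actually got,
-- which may well be lower than 16 on a notebook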
I also tried
SQL> sho parameter keep
NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
buffer_pool_keep                     string
control_file_record_keep_time        integer     7
db_keep_cache_size                   big integer 3G
SQL> alter index big_i storage (buffer_pool keep)
index big_i altered.
SQL> select count(*) from big_emp;
...
SQL> select count(*) from big_emp;
COUNT(*)
----------
126000000
Elapsed: 00:00:01.81
Caching the index helps, especially with slow disks (which is not the case here)!
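To double-check where the index now lives, a small sketch using the names above:
select buffer_pool from user_indexes where index_name = 'BIG_I';
select partition_name, buffer_pool from user_ind_partitions where index_name = 'BIG_I';
-- depending on whether BIG_I is still partitioned, the KEEP setting shows up
-- at the index level or at the partition level; db_keep_cache_size (3G above)
-- must of course be large enough to hold it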
alter table big_emp modify deptno not null;
create bitmap index bigemp_deptno_idx on big_emp(deptno);
select count(*) from big_emp;
SQL> select count(*) from big_emp;
COUNT(*)
----------
126000000
Elapsed: 00:00:00.13
Nice!
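One reason it is that fast, worth checking with a sketch like this: the bitmap index on the low-cardinality DEPTNO column is far smaller than the b-tree on EMPNO.
select segment_name, round(sum(bytes)/1024/1024) mb
from user_segments
where segment_name in ('BIG_I','BIGEMP_DEPTNO_IDX')
group by segment_name;
-- fewer blocks to read means a faster count, even from disk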
And the NOT NULL constraint is not even needed when using bitmap indexes, as null values are included in the index.
exec dbms_stats.gather_table_stats(user,'EMP');
select num_nulls from user_tab_columns where table_name = 'EMP' and column_name = 'MGR';
1
create bitmap index ebi on emp(mgr);
select count(*) from emp where mgr is null;
select * from table(dbms_xplan.display_cursor());
SQL_ID 0dka84vcrz7hx, child number 0
-------------------------------------
select count(*) from emp where mgr is null

Plan hash value: 3756882435

----------------------------------------------------------------------------------------
| Id  | Operation                       | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |      |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE                 |      |     1 |     4 |            |          |
|   2 |   BITMAP CONVERSION COUNT       |      |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX FAST FULL SCAN  | EBI  |       |       |            |          |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("MGR" IS NULL)
Hi Laurent,
Just out of curiosity, how do you get the elapsed time in ms?
@Bundit this is the default in SQL Developer 😉
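In SQL*Plus the nearest thing is the timing switch, which reports elapsed time in seconds with two decimals, as in the Elapsed lines above:
set timing on
select count(*) from big_emp;
-- SQL*Plus then prints a line like: Elapsed: 00:00:01.81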
Also note the amazing feedback:
system flush altered.
Degree of index big_i is 16, so parallel execution will be used, right?
It will take a bit more CPU, but it won't be a problem for this SQL.
But how will this "Degree" influence other SQL that was previously using this primary key?
The degree was part of the create index. It does not affect the table.
Technically you could do
create table t(x number primary key using index
(create unique index i on t(x) parallel 4)) parallel 2;
select degree from user_tables where table_name = 'T';
D
-
2
select degree from user_indexes where index_name = 'I';
D
-
4
Obviously you will use more CPU, and this will impact your overall performance. As long as I am alone on my multi-threaded quad-core notebook, it is pretty safe, I suppose!
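And if the extra CPU ever became a concern on a busier box, the degree could simply be dialled back on the index, for example:
alter index big_i noparallel;
-- or keep a modest degree: alter index big_i parallel 2;
select degree from user_indexes where index_name = 'BIG_I';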