Exadata Part III: Compression
With this posting, we look at Hybrid Columnar Compression, a New Feature that is only available with the Exadata Database Machine, offering stunning compression ratios together with high query speed. We now have no fewer than six different Table Data Compression methods:
- BASIC compression, introduced as far back as Oracle 8 and recommended only for Data Warehouses
- OLTP compression, introduced in Oracle 11 and recommended for OLTP Databases as well
- QUERY LOW compression (Exadata only), recommended for Data Warehouse with Load Time as a critical factor
- QUERY HIGH compression (Exadata only), recommended for Data Warehouse with focus on Space Saving
- ARCHIVE LOW compression (Exadata only), recommended for Archival Data with Load Time as a critical factor
- ARCHIVE HIGH compression (Exadata only), recommended for Archival Data with maximum Space Saving
Methods 1 and 2 work by compressing the rows inside Oracle blocks – the more redundant values inside a block, the better the compression ratio. OLTP compression can be used on the Exadata Database Machine, because Exadata V2 is designed to support OLTP as well as Data Warehouse workloads. Methods 3 to 6 use Hybrid Columnar Compression (HCC), which is designed to achieve better compression ratios by organizing sets of rows into compression units and then detaching the columns from the rows and storing them together. Since the same column tends to store similar (redundant) data, the compression ratio improves with HCC.
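Conceptually, the difference between row-major block compression and an HCC compression unit can be pictured like this (a sketch only, not the actual on-disk format):

```sql
-- Conceptual sketch only (not the actual on-disk format):
--
-- BASIC/OLTP (row-major, within one block):
--   (id1, flag1, 'Oracle Enterprise Edition', ...)
--   (id2, flag2, 'Oracle Enterprise Edition', ...)
--
-- HCC compression unit (column-major, spanning several blocks):
--   ids:      id1, id2, id3, ...
--   flags:    flag1, flag2, flag3, ...
--   products: 'Oracle Enterprise Edition', 'Oracle Enterprise Edition', ...
--
-- The product run is a single repeated value and compresses extremely well.
```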
I will now give examples for the six compression methods. Keep in mind that I have only limited access to our Oracle University Exadata Database Machine, restricting me to just one Database Server and two Storage Servers – a Full Rack will probably be faster. Also, the demo table is somewhat artificial and may therefore deliver better compression ratios than Real World tables.
In short: although the examples should give you the picture, results will differ at your site, and any kind of compression requires thorough testing before you take it into production.
The (uncompressed) demonstration table is the same as in the previous posting and was created with this statement:
SQL> create table sales as
select rownum as id, rownum + 1 as flag,
'Oracle Enterprise Edition' as product,
mod(rownum,5) as channel_id, mod(rownum,1000) as cust_id,
5000 as amount_sold,
trunc(sysdate - 10000 + mod(rownum,10000)) as order_date,
trunc(sysdate - 9999 + mod(rownum,10000)) as ship_date
from dual connect by level<=2e7;
It is 1.5 GB in size. Let’s see how small we can compress it and how fast we can load!
BASIC method
SQL> create table comp_classic compress as select * from sales where 1=2;
Table created.
SQL> alter table comp_classic nologging;
Table altered.
SQL> insert /*+ append */ into comp_classic select * from sales;
20000000 rows created.
Elapsed: 00:00:39.80
SQL> commit;
Commit complete.
OLTP method. Notice that we cannot assume direct path loads in an OLTP environment, therefore a conventional insert:
SQL> create table comp_oltp compress for oltp as select * from sales where 1=2;
Table created.
SQL> insert into comp_oltp select * from sales;
20000000 rows created.
Elapsed: 00:02:16.56
SQL> commit;
Commit complete.
QUERY LOW method. We expect this method to have the fastest Load Time:
SQL> create table comp_query_low compress for query low as select * from sales where 1=2;
Table created.
SQL> alter table comp_query_low nologging;
Table altered.
SQL> insert /*+ append */ into comp_query_low select * from sales;
20000000 rows created.
Elapsed: 00:00:26.96
SQL> commit;
Commit complete.
QUERY HIGH method
SQL> create table comp_query_high compress for query high as select * from sales where 1=2;
Table created.
SQL> alter table comp_query_high nologging;
Table altered.
SQL> insert /*+ append */ into comp_query_high select * from sales;
20000000 rows created.
Elapsed: 00:00:33.99
SQL> commit;
Commit complete.
ARCHIVE LOW method
SQL> create table comp_archive_low compress for archive low as select * from sales where 1=2;
Table created.
SQL> alter table comp_archive_low nologging;
Table altered.
SQL> insert /*+ append */ into comp_archive_low select * from sales;
20000000 rows created.
Elapsed: 00:00:35.14
SQL> commit;
Commit complete.
ARCHIVE HIGH method. We expect this method to have the slowest Load Time and the best Compression Ratio:
SQL> create table comp_archive_high compress for archive high as select * from sales where 1=2;
Table created.
SQL> alter table comp_archive_high nologging;
Table altered.
SQL> insert /*+ append */ into comp_archive_high select * from sales;
20000000 rows created.
Elapsed: 00:04:13.13
SQL> commit;
Commit complete.
That was indeed slow! Now let’s look at the outcome:
SQL> select segment_name,bytes/1024/1024 as mb from user_segments order by 1;
SEGMENT_NAME                           MB
------------------------------ ----------
COMP_ARCHIVE_HIGH                       5
COMP_ARCHIVE_LOW                        8
COMP_CLASSIC                          736
COMP_OLTP                             864
COMP_QUERY_HIGH                         8
COMP_QUERY_LOW                        304
SALES                              1535.5
7 rows selected.
We have condensed 1.5 GB into 5 MB with the ARCHIVE HIGH method!
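The chosen compression method is also visible in the data dictionary, and the ratio can be computed from the segment sizes – a sketch (1535.5 MB is the size of the uncompressed SALES table from above):

```sql
select t.table_name,
       t.compress_for,
       round(1535.5 / (s.bytes/1024/1024), 1) as ratio
  from user_tables t
  join user_segments s on s.segment_name = t.table_name
 where t.table_name like 'COMP%'
 order by ratio;
```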
Summary: Except for ARCHIVE HIGH, we have faster Load Times with HCC than with Block Compression. Every HCC method delivers a better compression ratio than the Block Compression methods. Now how about Query Times?
SQL> set timing on
SQL> connect adam/adam
Connected.
SQL> select count(*) from sales where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:02.07
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 46.807457
We take the Full Table Scan (with Smart Scan) on the uncompressed table as the baseline. I reconnect each time to initialize v$mystat. Now the Query Performance of BASIC compression:
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_classic where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:01.79
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 123.650536
Runtime was faster than on the uncompressed table, and Smart Scan was possible. Now OLTP compression:
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_oltp where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:01.80
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 130.22644
Ditto. Now QUERY LOW (HCC) compression:
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_query_low where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:00.86
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 2.68922424
Fastest Runtime yet. Smart Scan was possible. Now QUERY HIGH (HCC):
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_query_high where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:01.27
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 0
Still faster than uncompressed, although not using Smart Scan. Now ARCHIVE LOW (HCC):
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_archive_low where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:01.13
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 0
Faster than uncompressed (and surprisingly faster than QUERY HIGH) without Smart Scan. Finally ARCHIVE HIGH (HCC):
SQL> connect adam/adam
Connected.
SQL> select count(*) from comp_archive_high where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:02.93
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name = 'cell physical IO interconnect bytes returned by smart scan';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO interconnect bytes returned by smart scan 0
Slowest Query Performance (the only one slower than uncompressed in this experiment) without Smart Scan.
Conclusion: HCC delivers (far) better Compression Ratios than the Block Compression methods. Load Time increases from QUERY LOW (best) over QUERY HIGH and ARCHIVE LOW (both moderate) to ARCHIVE HIGH (longest Load Time). Query Performance decreases similarly from QUERY LOW (best) to ARCHIVE HIGH (longest Query Time). All HCC methods except ARCHIVE HIGH delivered better Query Performance than the uncompressed table, though. In short: HCC is most likely able to save large amounts of space in your Data Warehouse without decreasing Query Performance much, if at all. It will probably compress your Archival Data significantly.
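If you have no Exadata test system at hand, the DBMS_COMPRESSION package (11gR2) can estimate HCC ratios on ordinary storage – a sketch, assuming a scratch tablespace named USERS exists:

```sql
set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',   -- assumption: scratch tablespace name
    ownname        => user,
    tabname        => 'SALES',
    partname       => null,
    comptype       => dbms_compression.comp_for_query_high,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line(l_comptype_str || ' estimated ratio: ' ||
                       round(l_cmp_ratio, 1));
end;
/
```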

Exadata Part II: Storage Index
Storage Indexes are another important feature that makes the Exadata Database Machine more than just a collection of High-End Hardware. They are constructed automatically inside the memory of the Storage Servers when Database segments (like heap tables) are being scanned. Storage is divided into 1 MB chunks, and for each chunk, the minimum and maximum values of the table columns that allocated space inside that chunk are recorded. It is very well possible that these Storage Indexes make many of your "conventional indexes" obsolete. Look at the following drastic example:
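The pruning logic can be pictured like this – purely conceptual pseudocode, not real syntax:

```sql
-- Conceptual sketch of Storage Index pruning (not real syntax):
-- for each 1 MB chunk of the scanned segment, the cell keeps e.g.
--   chunk_min(id), chunk_max(id)
-- and for a predicate like  WHERE id = 4711  a chunk is skipped entirely when
--   NOT (4711 BETWEEN chunk_min(id) AND chunk_max(id))
-- so only chunks that could possibly contain the value are read from disk.
```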
SQL> connect adam/adam
Connected.
SQL> select bytes/1024/1024/1024 as gb from user_segments where segment_name='SALES';
        GB
----------
       1.5
This demonstration table was created with the following statement:
SQL> create table sales as select rownum as id, rownum + 1 as flag, 'Oracle Enterprise Edition' as product, mod(rownum,5) as channel_id, mod(rownum,1000) as cust_id , 5000 as amount_sold, trunc(sysdate - 10000 + mod(rownum,10000)) as order_date, trunc(sysdate - 9999 + mod(rownum,10000)) as ship_date from dual connect by level<=2e7;
I needed to set WORKAREA_SIZE_POLICY=MANUAL and specify SORT_AREA_SIZE=1048576000 to be able to do the connect by level<=2e7, by the way. That is because of the limitation of 200 MB of PGA memory for a single session when PGA_AGGREGATE_TARGET is used. Anyway, please notice that id has unique values and no conventional index is present.
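For reference, those session settings looked like this – a sketch:

```sql
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 1048576000;  -- roughly 1 GB sort area
```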
SQL> set timing on
SQL> select * from sales where id=4711;
        ID       FLAG PRODUCT                   CHANNEL_ID    CUST_ID AMOUNT_SOLD ORDER_DAT SHIP_DATE
---------- ---------- ------------------------- ---------- ---------- ----------- --------- ---------
      4711       4712 Oracle Enterprise Edition          1        711        5000 28-JUL-96 29-JUL-96
Elapsed: 00:00:00.11
I have just scanned 1.5 GB of data in a fraction of a second! [I previously said 11 milliseconds, but that was of course a mistake – apart from SQL*Plus not being able to produce that exact timing, I simply got the unit wrong.] That was possible because the Storage Cells knew they could exclude 1,499 chunks from scanning; 4711 could only be present inside one of the chunks. Please notice that this was not a Smart Scan in the first place – we did not offload much I/O to the Storage Layer. Instead, we avoided I/O almost completely:
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 1483.67969
cell physical IO interconnect bytes returned by smart scan .001525879
To show the difference, I will do a Smart Scan. I connect again to initialize v$mystat:
SQL> connect adam/adam
Connected.
SQL> select count(*) from sales where channel_id=1;
COUNT(*)
----------
4000000
Elapsed: 00:00:02.04
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan 46.7903976
In this case, we scanned 1.5 GB on the Storage Layer but delivered only 46 MB to the Database Layer. Now back to the Storage Index. Recently, I encountered the opinion that Storage Indexes do not work together with Bind Variables for some reason. That is not the case:
SQL> connect adam/adam
Connected.
SQL> vari n number
SQL> exec :n:=4711
PL/SQL procedure successfully completed.
SQL> set timing on
SQL> select * from sales where id=:n;
        ID       FLAG PRODUCT                   CHANNEL_ID    CUST_ID AMOUNT_SOLD ORDER_DAT SHIP_DATE
---------- ---------- ------------------------- ---------- ---------- ----------- --------- ---------
      4711       4712 Oracle Enterprise Edition          1        711        5000 28-JUL-96 29-JUL-96
Elapsed: 00:00:00.04
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                    1483.67969
cell physical IO interconnect bytes returned by smart scan       .001525879
The Storage Index could clearly be used. The same holds with implicit datatype conversion:
SQL> vari c char(4)
SQL> exec :c:='4711'
SQL> select * from sales where id=:c;
        ID       FLAG PRODUCT                   CHANNEL_ID    CUST_ID AMOUNT_SOLD ORDER_DAT SHIP_DATE
---------- ---------- ------------------------- ---------- ---------- ----------- --------- ---------
      4711       4712 Oracle Enterprise Edition          1        711        5000 28-JUL-96 29-JUL-96
Elapsed: 00:00:00.03
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                    2967.35938
cell physical IO interconnect bytes returned by smart scan       .003051758
It would have surprised me a lot if Bind Variables had introduced any issues with Storage Indexes, because they are processed on the Database Layer. In fact, from the Optimizer's viewpoint all these statements lead to the same execution plan: Full Table Scan. There could be an issue (though not exclusively related to Bind Variables) if conventional indexes were present here as well, because the Optimizer might favor them over the Storage Index – the Optimizer is not aware of Storage Indexes. Before I get another comment from Kevin suggesting that I show joins in the example, let's just do that:
SQL> select * from channels;
CHANNEL_ID CHANNEL_DESC
---------- ----------------------------------------
         0 Telesales
         1 Web
         2 Field
         3 Retail
         4 Promotion
        99 Special
SQL> create table customers as select distinct cust_id, 'John Doe' as cust_name from sales;
Table created.
Channels and Customers are now two dimension tables.
SQL> set timing on
SQL> select product,channel_desc,cust_name
  2  from sales natural join channels
  3  natural join customers
  4  where id=4711;
PRODUCT                   CHANNEL_DESC                             CUST_NAM
------------------------- ---------------------------------------- --------
Oracle Enterprise Edition Web                                      John Doe
Elapsed: 00:00:00.06
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                    1483.67969
cell physical IO interconnect bytes returned by smart scan       .001495361
Storage Indexes can obviously be used in this case as well. When we select on columns with non-unique values, whether a Storage Index is beneficial depends on how well clustered the values are. The values of the order_date column of the sales table are spread across the whole table, which makes it impossible for a Storage Index to speed up the query:
SQL> connect adam/adam
Connected.
SQL> set timing on
SQL> select count(*) from sales where order_date=to_date('11.03.2009','dd.mm.yyyy');
COUNT(*)
----------
2000
Elapsed: 00:00:02.30
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 0
cell physical IO interconnect bytes returned by smart scan .672538757
When we create the same table, but ordered by order_date, this changes:
SQL> create table sales_order as select * from sales where 1=2;
Table created.
SQL> alter table sales_order nologging;
Table altered.
SQL> insert /*+ append */ into sales_order select * from sales order by order_date;
20000000 rows created.
SQL> commit;
Commit complete.
Now we need to access the new table multiple times to get the Storage Index built:
SQL> connect adam/adam
Connected.
SQL> select count(*) from sales_order where order_date=to_date('11.03.2009','dd.mm.yyyy');
COUNT(*)
----------
2000
Elapsed: 00:00:02.02
SQL> select count(*) from sales_order where order_date=to_date('11.03.2009','dd.mm.yyyy');
COUNT(*)
----------
2000
Elapsed: 00:00:01.73
SQL> select count(*) from sales_order where order_date=to_date('11.03.2009','dd.mm.yyyy');
COUNT(*)
----------
2000
Elapsed: 00:00:00.03
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 1488.77344
cell physical IO interconnect bytes returned by smart scan .506607056
Now the Storage has "learned" that it is beneficial to have a Storage Index on the sales_order table. This index will be kept as long as the Cells are up. The Storage Index is apparently created across all table columns. Please notice that I now access the column ship_date (strongly correlated with order_date) for the first time:
SQL> select count(*) from sales_order where ship_date=to_date('11.03.2009','dd.mm.yyyy');
  COUNT(*)
----------
      2000
Elapsed: 00:00:00.37
SQL> select name,value/1024/1024 as mb from v$statname natural join v$mystat
  2  where name in
  3  ('cell physical IO interconnect bytes returned by smart scan',
  4  'cell physical IO bytes saved by storage index');
NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                    2765.77344
cell physical IO interconnect bytes returned by smart scan       .562469482
Conclusion: Storage Indexes are another Smart Storage feature of the Exadata Database Machine. They may make many conventional indexes obsolete. Although they are created and maintained automatically by the Storage Servers, it is important to know their behavior. In particular, the (non-unique) column values that we later query for need to occur in an ordered fashion for Storage Indexes to be beneficial.
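To monitor the benefit of Storage Indexes instance-wide rather than per session, a sketch using v$sysstat instead of v$mystat:

```sql
select name, round(value/1024/1024/1024, 2) as gb
  from v$sysstat
 where name = 'cell physical IO bytes saved by storage index';
```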
Exadata Part I: Smart Scan
As many of you will know, Oracle has introduced a new flagship: The Exadata Database Machine is a combined Hardware & Software offering, consisting – in case of a Full Rack – of an 8-node cluster (each node with 2 six-core CPUs and 72 GB RAM) and 14 Storage Servers (each with 2 six-core CPUs and 12 SAS or 12 SATA drives *), connected (RAC Interconnect & Storage) over InfiniBand. The Operating System is Oracle Enterprise Linux; the Database Version is 11gR2 Enterprise Edition, both mandatory.
I have the privilege of access to our Oracle University Database Machine – I really do love my job 🙂 But I am limited to only one Database Node with two Storage Servers – which explains the relatively slow speed during the demonstrations compared to a Full Rack.
The idea of this posting is to show that there is more to the Exadata Database Machine than sophisticated and well-combined Hardware: there is actually some (Database) intelligence built into the Storage Layer that enables it to do column & predicate filtering before it sends results to the Database Layer. This functionality is called Smart Scan:
SQL> create table sales as
select
'Oracle Enterprise Edition' as product,
mod(rownum,5) as channel_id,
mod(rownum,1000) as cust_id ,
5000 as amount_sold,
to_date
('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2010' ,'dd.mm.yyyy')
as time_id
from dual connect by level<=2e7;
Table created.
SQL> select bytes/1024/1024/1024 as gb from user_segments;
GB
----------
1.0625
The above created a roughly 1 GB demo table that is now blown up to 4 GB:
SQL> alter table sales nologging;
Table altered.
SQL> insert /*+ append */ into sales select * from sales;
20000000 rows created.
SQL> commit;
Commit complete.
SQL> insert /*+ append */ into sales select * from sales;
40000000 rows created.
SQL> commit;
Commit complete.
SQL> select bytes/1024/1024/1024 as gb from user_segments;
GB
----------
4.17822266
SQL> exec dbms_stats.gather_table_stats('ADAM','SALES')
PL/SQL procedure successfully completed.
The playing field is set up. First we will look at the scan speed without Smart Scan:
SQL> alter session set cell_offload_processing=false;
Session altered.
SQL> set timing on
SQL> select /* NO_SMART_SCAN */ count(*) from sales where channel_id=1;
  COUNT(*)
----------
  16000000
Elapsed: 00:00:22.96
SQL> select /* NO_SMART_SCAN */ count(*) from sales where channel_id=1;
  COUNT(*)
----------
  16000000
Elapsed: 00:00:22.85
SQL> set timing off
The statement is run twice to show that caching has not much of a say in this situation. We needed about 23 seconds to scan about 4 GB, with the Storage Servers delivering the full amount of data to the Database Layer. Now the very same statement with the standard functionality Smart Scan:
SQL> alter session set cell_offload_processing=true;
Session altered.
SQL> set timing on
SQL> select /* WITH_SMART_SCAN */ count(*) from sales where channel_id=1;
  COUNT(*)
----------
  16000000
Elapsed: 00:00:05.58
SQL> select /* WITH_SMART_SCAN */ count(*) from sales where channel_id=1;
  COUNT(*)
----------
  16000000
Elapsed: 00:00:05.64
SQL> set timing off
Runtime was reduced to less than 6 seconds with Smart Scan. Let's examine the execution plans for the two statements. I included the comment (it is NOT a hint) to make it easier to retrieve the statements and their execution plans from the Library Cache:
SQL> select sql_id from v$sql where sql_text like '%NO_SMART_SCAN%' and sql_text not like '%like%';
SQL_ID
-------------
81vy8y7fkvzta
SQL> select plan_table_output from table (dbms_xplan.display_cursor('81vy8y7fkvzta'));
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID 81vy8y7fkvzta, child number 0
-------------------------------------
select /* NO_SMART_SCAN */ count(*) from sales where channel_id=1
Plan hash value: 1047182207
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 147K(100)| |
| 1 | SORT AGGREGATE | | 1 | 3 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SALES | 16M| 46M| 147K (1)| 00:29:30 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CHANNEL_ID"=1)
An ordinary Full Table Scan, as expected. Now the second statement:
SQL> select sql_id from v$sql where sql_text like '%WITH_SMART_SCAN%' and sql_text not like '%like%';
SQL_ID
-------------
91w7a8mx8c977
SQL> select plan_table_output from table (dbms_xplan.display_cursor('91w7a8mx8c977'));
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID 91w7a8mx8c977, child number 0
-------------------------------------
select /* WITH_SMART_SCAN */ count(*) from sales where channel_id=1
Plan hash value: 1047182207
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 147K(100)| |
| 1 | SORT AGGREGATE | | 1 | 3 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SALES | 16M| 46M| 147K (1)| 00:29:30 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - storage("CHANNEL_ID"=1)
filter("CHANNEL_ID"=1)
The Storage Layer filtered on the predicate channel_id=1 before transmitting the result to the Database Layer, which is the reason for the reduced runtime. Apart from Smart Scan, we have some more conventional ways to speed up the statement, like running it in parallel:
SQL> select /*+ parallel(sales) */ count(*) from sales where channel_id=1;
COUNT(*)
----------
16000000
Elapsed: 00:00:02.83
SQL> set timing off
SQL> select sql_id from v$sql where sql_text like '% parallel(sales) %' and sql_text not like '%like%';
SQL_ID
-------------
ckq93tnythxa7
SQL> select plan_table_output from table (dbms_xplan.display_cursor('ckq93tnythxa7'));
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID ckq93tnythxa7, child number 0
-------------------------------------
select /*+ parallel(sales) */ count(*) from sales where channel_id=1
Plan hash value: 3130505568
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3409 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | 3 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 3 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 3 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 16M| 46M| 3409 (1)| 00:00:01 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS STORAGE FULL| SALES | 16M| 46M| 3409 (1)| 00:00:01 | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - storage(:Z>=:Z AND :Z<=:Z AND "CHANNEL_ID"=1)
filter("CHANNEL_ID"=1)
Parallel Query combined with Smart Scan was even faster, delivering
SQL> select 4.17/2.83 as GB_PER_SEC from dual;
GB_PER_SEC
----------
1.47349823
a scan rate of about 1.5 Gigabyte per Second. With an Oracle Database at hand, I don’t need a calculator 🙂
No idea what the weird storage filter :Z>=:Z and :Z<=:Z means here, to be honest.
Conclusion: Exadata is not only strong hardware but also Database intelligence on the storage layer. Smart Scan means the capability of the Storage Layer to do filtering of columns and predicates before sending the result to the Database Layer.
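Even without Enterprise Manager, V$SQL carries offload-related columns in 11gR2 that show per statement how much I/O was eligible for Smart Scan and how much actually crossed the interconnect – a sketch (the SQL_ID is the one from the Smart Scan example above):

```sql
select sql_id,
       io_cell_offload_eligible_bytes/1024/1024 as eligible_mb,
       io_interconnect_bytes/1024/1024          as interconnect_mb
  from v$sql
 where sql_id = '91w7a8mx8c977';
```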
Addendum:
Using Enterprise Manager (Grid Control or Database Control flavor, even with Exadata), we have an easier way to spot Smart Scans.
* Second Addendum: Meanwhile (May 2011), also the High Capacity Drives are SAS – no more SATA

