DBMS_COMPRESSION Example

During my current Exadata course in Munich, we discussed Hybrid Columnar Compression (HCC) and how to estimate the size of tables after compression. In addition to my previous posting that introduced HCC, I have therefore prepared this little script that calls DBMS_COMPRESSION. Usually, the documentation gives an example, but not in this case, and my brief web research did not turn up a (simple) example either.

Please note that you can use this script on a non-Exadata Oracle Database (from 11.2 onwards) to get an estimate not only for BASIC and OLTP compression, but also for HCC. The estimate is quite good, but it takes much space: about as much as the tables you evaluate are in size. It may therefore be advisable to use a scratch tablespace just for that purpose and drop it afterwards.
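A minimal sketch of such a throwaway tablespace; the datafile path, name and size below are just placeholders for illustration, adjust them to your environment:

SQL> create tablespace scratch_ts datafile
     '/u01/app/oracle/oradata/orcl/scratch_ts01.dbf'
     size 1g autoextend on;
-- run the estimation with &ScratchTBS = SCRATCH_TS, then clean up:
SQL> drop tablespace scratch_ts including contents and datafiles;

Now to the script itself: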

SQL> set serveroutput on
declare
 v_blkcnt_cmp     pls_integer;
 v_blkcnt_uncmp   pls_integer;
 v_row_cmp        pls_integer;
 v_row_uncmp      pls_integer;
 v_cmp_ratio      number;
 v_comptype_str   varchar2(60);
begin
 dbms_compression.get_compression_ratio(
  scratchtbsname => upper('&ScratchTBS'), -- tablespace for the temporary sample tables
  ownname        => user,                 -- schema of the table to evaluate
  tabname        => upper('&TableName'),  -- table to evaluate
  partname       => NULL,                 -- partition name (NULL for non-partitioned tables)
  comptype       => dbms_compression.comp_for_query_high, -- compression type to estimate
  blkcnt_cmp     => v_blkcnt_cmp,         -- OUT: blocks used by the compressed sample
  blkcnt_uncmp   => v_blkcnt_uncmp,       -- OUT: blocks used by the uncompressed sample
  row_cmp        => v_row_cmp,            -- OUT: rows per block in the compressed sample
  row_uncmp      => v_row_uncmp,          -- OUT: rows per block in the uncompressed sample
  cmp_ratio      => v_cmp_ratio,          -- OUT: estimated compression ratio
  comptype_str   => v_comptype_str);      -- OUT: compression type as string
 dbms_output.put_line('Estimated Compression Ratio: '||to_char(v_cmp_ratio));
 dbms_output.put_line('Blocks used by compressed sample: '||to_char(v_blkcnt_cmp));
 dbms_output.put_line('Blocks used by uncompressed sample: '||to_char(v_blkcnt_uncmp));
end;
/
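The script above estimates QUERY HIGH. To estimate another compression type, just replace the comptype parameter; in 11.2, DBMS_COMPRESSION exposes, among others, these constants:

comptype => dbms_compression.comp_for_oltp         -- OLTP compression
comptype => dbms_compression.comp_for_query_low    -- HCC QUERY LOW
comptype => dbms_compression.comp_for_query_high   -- HCC QUERY HIGH
comptype => dbms_compression.comp_for_archive_low  -- HCC ARCHIVE LOW
comptype => dbms_compression.comp_for_archive_high -- HCC ARCHIVE HIGH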

Addendum: See Jonathan Lewis' nice cheat to keep the HCC compressed table that gets created under the covers during the evaluation, even if you are NOT running DBMS_COMPRESSION on Exadata!

Second addendum: This posting was published in the current Oracle University Newsletter.


Replacing a damaged Hard Disk on Exadata Cells

A task that you will most likely encounter during the administration of Exadata is the replacement of a damaged Hard Disk on the storage servers. Fortunately, this is quite easy, because almost everything is done by the system itself 🙂

In particular, the original Celldisks and Griddisks are rebuilt automatically on the Cell Layer. On the Database Layer, the related ASM disks also get rebuilt automatically, and thanks to the (at least) normal redundancy, the availability of the Databases relying on the Diskgroups is not affected. The task is briefly described in this MOS Note.

As soon as the Hard Disk failure is noticed by the MS (Management Server) background process on the Cell, it raises an alert that is also published to Grid Control, if configured. Due to Pro-Active Disk Quarantine, the ASM Disks, Griddisks and Celldisks get dropped immediately, and an ASM rebalance is triggered. You as the responsible Admin notice the alert and order a replacement Disk, or use a Spare Disk, plugging it into the Cell after you have plugged out the damaged one. The Cell can stay online, because the Hard Disks are hot-pluggable.
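If you want to follow the automatic rebalance from the Database Layer, a quick look at v$asm_operation on the ASM instance will do; a minimal sketch:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
     from v$asm_operation;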

Typically, no further administrative work needs to be done. Easy, isn't it? Mr. Sengonul from Turkcell (a leading mobile communications provider in Turkey and one of our Exadata Customer references) has published the logfiles from such an incident in this posting. Thank you for that and also for your fine presentation about the Exadata Migration!


Important Statistics & Wait Events on Exadata

Statistics

With this posting, I'd like to highlight what are, in my view, the most important Statistics & Wait Events on Exadata that a DBA needs to be familiar with. We start with Statistics, retrievable from v$sysstat, for example. For Exadata, we have introduced 43 new Statistics, which all start with 'cell%':

SQL> select name from v$statname where name like 'cell%';

NAME
----------------------------------------------------------------
cell physical IO interconnect bytes
cell physical IO bytes saved during optimized file creation
cell physical IO bytes saved during optimized RMAN file restore
cell physical IO bytes eligible for predicate offload
cell physical IO bytes saved by storage index
cell smart IO session cache lookups
cell smart IO session cache hits
cell smart IO session cache soft misses
cell smart IO session cache hard misses
cell smart IO session cache hwm
cell num smart IO sessions in rdbms block IO due to user
cell num smart IO sessions in rdbms block IO due to no cell mem
cell num smart IO sessions in rdbms block IO due to big payload
cell num smart IO sessions using passthru mode due to user
cell num smart IO sessions using passthru mode due to cellsrv
cell num smart IO sessions using passthru mode due to timezone
cell num smart file creation sessions using rdbms block IO mode
cell physical IO interconnect bytes returned by smart scan
cell num fast response sessions
cell num fast response sessions continuing to smart scan
cell smart IO allocated memory bytes
cell smart IO memory bytes hwm
cell num active smart IO sessions
cell smart IO sessions hwm
cell scans
cell blocks processed by cache layer
cell blocks processed by txn layer
cell blocks processed by data layer
cell blocks processed by index layer
cell commit cache queries
cell transactions found in commit cache
cell blocks helped by commit cache
cell blocks helped by minscn optimization
cell simulated physical IO bytes eligible for predicate offload
cell simulated physical IO bytes returned by predicate offload
cell CUs sent uncompressed
cell CUs sent compressed
cell CUs sent head piece
cell CUs processed for uncompressed
cell CUs processed for compressed
cell IO uncompressed bytes
cell index scans
cell flash cache read hits

43 rows selected.

The Statistics that are, in my opinion, most important are marked in red. Now to their meaning:

SQL> connect adam/adam
Connected.
SQL> select * from sales;
*Output suppressed, because I don't want millions of rows in the posting*
SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);

NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                          428.4375
cell physical IO interconnect bytes                                428.4375
cell physical IO interconnect bytes returned by smart scan                0

The demo table sales is 428 MB in size, large enough to cause a Serial Direct Read and thereby make Smart Scans possible. You see that the Statistic 'cell physical IO interconnect bytes' is in this case equal to 'physical read total bytes', because the whole amount of data scanned on the Storage Layer was sent over the Storage Interconnect (do not confuse this with a RAC Interconnect!) to the Database Layer. Consequently, this was no Smart Scan: we did neither column projection nor filtering. The red Statistic above counts the bytes sent between the Storage Layer and the Database Layer, and v$mystat displays that value only for my session. Let's do a Smart Scan:

SQL> select * from sales where channel_id=2;
*Output suppressed*
SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                        855.851563
cell physical IO interconnect bytes                              506.039314
cell physical IO interconnect bytes returned by smart scan       77.6018143

The values in v$mystat are cumulative for the whole session. You see that the value of 'physical read total bytes' has doubled, because we did again a Full Table Scan on the Storage Layer, but 'cell physical IO interconnect bytes' did not double, because the filtering by the WHERE clause was done on the Storage Layer. Only about 77 MB had to be sent to the Database Layer instead, which is also reflected by 'cell physical IO interconnect bytes returned by smart scan'. So this Statistic shows the data volume that was sent to the Database Layer on behalf of a Smart Scan, which is supposed to be significantly smaller than 'physical read total bytes' for a Database that does many Full Table Scans or Index Fast Full Scans on large Segments.
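As a rough measure of how much a session has benefited from offloading, you can relate the returned bytes to the eligible bytes. A sketch; it assumes the session has already done offloadable I/O, so that eligible is greater than zero:

SQL> select round((1 - returned/eligible)*100,2) as saving_pct
     from (select
            sum(case name when 'cell physical IO bytes eligible for predicate offload'
                then value end) as eligible,
            sum(case name when 'cell physical IO interconnect bytes returned by smart scan'
                then value end) as returned
           from v$mystat natural join v$statname
           where name in
           ('cell physical IO bytes eligible for predicate offload',
            'cell physical IO interconnect bytes returned by smart scan'));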

Storage Indexes have the ability to speed up Full Table Scans dramatically:

SQL> select * from sales where id=4711;
PRODUCT                   CHANNEL_ID    CUST_ID AMOUNT_SOLD TIME_ID           ID
------------------------- ---------- ---------- ----------- --------- ----------
Oracle Enterprise Edition          3        998        5000 01-NOV-10       4711
SQL> select name,value/1024/1024 as mb from v$statname 
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                        1283.26563
cell physical IO interconnect bytes                              506.041885
cell physical IO interconnect bytes returned by smart scan       77.6043854

Although the Statistic 'physical read total bytes' again increased by 428 MB, we see only a very slight increase in 'cell physical IO interconnect bytes' and 'cell physical IO interconnect bytes returned by smart scan', because the data volume (containing just one row of interest) sent to the Database Layer was very small. Also, the Smart Scan operation was done very fast, because a Storage Index told the Cells where the ID we searched for could not possibly be:

SQL> select name,value/1024/1024 as mb from v$statname 
natural join v$mystat where name='cell physical IO bytes saved by storage index';

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                    419.742188

The red Statistic shows the data volume that the Cells did not need to scan through, because they knew the data could not be there. In this case, the vast majority of the table was not scanned (although 'physical read total bytes' counts it as if the Cells had scanned the whole table).
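By the way, offloading can also be observed per statement: from 11.2 on, v$sql contains columns like io_cell_offload_eligible_bytes and io_interconnect_bytes. A sketch:

SQL> select sql_id,
     io_cell_offload_eligible_bytes/1024/1024 as eligible_mb,
     io_interconnect_bytes/1024/1024 as interconnect_mb
     from v$sql
     where io_cell_offload_eligible_bytes > 0;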

The next Statistic is very important for an OLTP Database running on Exadata:

SQL> select name,value  from v$sysstat where name in
('physical read total IO requests','cell flash cache read hits'); 

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
physical read total IO requests                                      142476
cell flash cache read hits                                            32897

If a large share of 'physical read total IO requests' was satisfied from the Flash Cache, we can conclude that we have the basis for a high I/O per second (IOPS) rate, because on Flash Storage, we do not experience the latency caused by the positioning of the Read/Write Head as on Spinning Drives. The blue Statistic counts the number of read requests that were resolved from the Flash Cache.
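Expressed as a hit ratio (instance-wide and cumulative since startup), this could look as follows; a sketch, assuming the instance has done physical reads at all:

SQL> select round(f.value / r.value * 100, 2) as flash_cache_hit_pct
     from v$sysstat f, v$sysstat r
     where f.name = 'cell flash cache read hits'
     and r.name = 'physical read total IO requests';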

Wait Events

We have introduced 17 new Wait Events for Exadata. Like the Statistics, they are also visible on non-Exadata platforms, but they will never show up there. They can be seen in this view:

SQL> select name from v$event_name where name like 'cell%';

NAME
----------------------------------------------------------------
cell smart table scan
cell smart index scan
cell statistics gather
cell smart incremental backup
cell smart file creation
cell smart restore from backup
cell single block physical read
cell multiblock physical read
cell list of blocks physical read
cell manager opening cell
cell manager closing cell
cell manager discovering disks
cell worker idle
cell smart flash unkeep
cell worker online completion
cell worker retry
cell manager cancel work request

17 rows selected.

Again, I have highlighted the ones that are most important in my view. I'd like to give a brief explanation of them:

'cell smart table scan':

Wait Event that occurs during a Smart Scan on a Table.

'cell smart index scan':

Wait Event that occurs during a Smart Scan on an Index (on behalf of a Direct Index Fast Full Scan).

We expect both to be among the Top Wait Events on a Data Warehouse Database running on Exadata. There is nothing wrong with that; something has to be the Top Wait Event 🙂

'cell multiblock physical read':

This Wait Event occurs most likely because of a conventional Full Table Scan or a conventional Index Fast Full Scan (both not offloaded), typically because the Table or the Index was small. It would be surprising to see this as a major Wait Event on a Data Warehouse Database running on Exadata, but it will show up regularly on OLTP Databases running on Exadata.

'cell single block physical read':

The most likely reason for this is a conventional (not offloaded) Index Access. Again, we expect to see this Wait Event high on OLTP Databases but low on Data Warehouse Databases running on Exadata.
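To see how much time your Database has actually spent on these Wait Events since instance startup, you can query v$system_event, for example:

SQL> select event, total_waits,
     round(time_waited_micro/1000000) as seconds_waited
     from v$system_event
     where event like 'cell%'
     order by time_waited_micro desc;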
