With this posting, I'd like to highlight the Statistics & Wait Events on Exadata that, in my view, a DBA needs to be familiar with. We start with the Statistics, retrievable from v$sysstat, for example. For Exadata, we have introduced 43 new Statistics that all start with 'cell%':
SQL> select name from v$statname where name like 'cell%';

NAME
----------------------------------------------------------------
cell physical IO interconnect bytes
cell physical IO bytes saved during optimized file creation
cell physical IO bytes saved during optimized RMAN file restore
cell physical IO bytes eligible for predicate offload
cell physical IO bytes saved by storage index
cell smart IO session cache lookups
cell smart IO session cache hits
cell smart IO session cache soft misses
cell smart IO session cache hard misses
cell smart IO session cache hwm
cell num smart IO sessions in rdbms block IO due to user
cell num smart IO sessions in rdbms block IO due to no cell mem
cell num smart IO sessions in rdbms block IO due to big payload
cell num smart IO sessions using passthru mode due to user
cell num smart IO sessions using passthru mode due to cellsrv
cell num smart IO sessions using passthru mode due to timezone
cell num smart file creation sessions using rdbms block IO mode
cell physical IO interconnect bytes returned by smart scan
cell num fast response sessions
cell num fast response sessions continuing to smart scan
cell smart IO allocated memory bytes
cell smart IO memory bytes hwm
cell num active smart IO sessions
cell smart IO sessions hwm
cell scans
cell blocks processed by cache layer
cell blocks processed by txn layer
cell blocks processed by data layer
cell blocks processed by index layer
cell commit cache queries
cell transactions found in commit cache
cell blocks helped by commit cache
cell blocks helped by minscn optimization
cell simulated physical IO bytes eligible for predicate offload
cell simulated physical IO bytes returned by predicate offload
cell CUs sent uncompressed
cell CUs sent compressed
cell CUs sent head piece
cell CUs processed for uncompressed
cell CUs processed for compressed
cell IO uncompressed bytes
cell index scans
cell flash cache read hits

43 rows selected.
The Statistics that are in my opinion the most important are marked in red. Now to their meaning:
SQL> connect adam/adam
Connected.
SQL> select * from sales;
* Output suppressed, because I don't want millions of rows in the posting*
select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME MB
---------------------------------------------------------------- ----------
physical read total bytes 428.4375
cell physical IO interconnect bytes 428.4375
cell physical IO interconnect bytes returned by smart scan 0
The demo table sales is 428 MB in size – large enough to cause a Serial Direct Read and make Smart Scans possible. You see the Statistic 'cell physical IO interconnect bytes' – in this case equal to 'physical read total bytes', because the whole amount of data scanned on the Storage Layer was sent over the Storage Interconnect (do not confuse this with a RAC Interconnect!) to the Database Layer. Consequently, this was no Smart Scan, because we did neither column projection nor filtering. The Statistic marked in red counts the bytes sent between the Storage Layer and the Database Layer, and v$mystat displays that value for my session only. Let's do a Smart Scan:
SQL> select * from sales where channel_id=2;
* Output suppressed*
select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME MB
---------------------------------------------------------------- ----------
physical read total bytes 855.851563
cell physical IO interconnect bytes 506.039314
cell physical IO interconnect bytes returned by smart scan 77.6018143
The values in v$mystat are cumulative for the whole session. You see that the value of 'physical read total bytes' has doubled, because we again did a Full Table Scan on the Storage Layer, but 'cell physical IO interconnect bytes' did not double, because the filtering by the WHERE clause was done on the Storage Layer. Only about 77 MB had to be sent to the Database Layer instead, which is also reflected by 'cell physical IO interconnect bytes returned by smart scan'. So this Statistic shows the data volume that was sent to the Database Layer on behalf of a Smart Scan – which is supposed to be significantly smaller than 'physical read total bytes' for a Database that does many Full Table Scans or Index Fast Full Scans on large Segments.
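To relate the two Statistics directly, here is a minimal sketch against v$mystat – keep in mind that the values are cumulative for the session, so it is most meaningful right after running a single test statement:

select round(
         100 * sum(decode(name, 'cell physical IO interconnect bytes returned by smart scan', value)) /
         nullif(sum(decode(name, 'physical read total bytes', value)), 0), 2
       ) as pct_returned_by_smart_scan -- small values mean strong offloading
from v$statname natural join v$mystat
where name in ('physical read total bytes',
               'cell physical IO interconnect bytes returned by smart scan');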
Storage Indexes have the ability to speed up Full Table Scans dramatically:
SQL> select * from sales where id=4711;

PRODUCT                   CHANNEL_ID    CUST_ID AMOUNT_SOLD TIME_ID           ID
------------------------- ---------- ---------- ----------- --------- ----------
Oracle Enterprise Edition          3        998        5000 01-NOV-10       4711

SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);

NAME                                                                     MB
---------------------------------------------------------------- ----------
physical read total bytes                                        1283.26563
cell physical IO interconnect bytes                              506.041885
cell physical IO interconnect bytes returned by smart scan       77.6043854
Although the Statistic 'physical read total bytes' again increased by 428 MB, we see only a very slight increase in 'cell physical IO interconnect bytes' and 'cell physical IO interconnect bytes returned by smart scan', because the data volume (containing just one row of interest) sent to the Database Layer was very small. Also, the Smart Scan operation itself was very fast, because a Storage Index told the Cells where the ID we searched for could not possibly be:
SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name='cell physical IO bytes saved by storage index';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 419.742188
The Statistic marked in red shows the data volume the Cells did not need to scan through, because they knew the data could not be there. In this case, the vast majority of the table was not scanned (although 'physical read total bytes' counts it as if the Cells had scanned the whole table).
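The same kind of ratio works for Storage Indexes – a minimal sketch, again session-scoped and cumulative:

select round(
         100 * sum(decode(name, 'cell physical IO bytes saved by storage index', value)) /
         nullif(sum(decode(name, 'physical read total bytes', value)), 0), 2
       ) as pct_saved_by_storage_index -- share of the read volume the Cells could skip
from v$statname natural join v$mystat
where name in ('physical read total bytes',
               'cell physical IO bytes saved by storage index');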
The next Statistic is very important for an OLTP Database running on Exadata:
SQL> select name,value from v$sysstat where name in
('physical read total IO requests','cell flash cache read hits');
NAME VALUE
---------------------------------------------------------------- ----------
physical read total IO requests 142476
cell flash cache read hits 32897
If a large share of 'physical read total IO requests' was satisfied from the Flash Cache, we can conclude that we have the basis for a high I/O per second (IOPS) rate, because on Flash Storage we do not experience the latency caused by the positioning of the Read/Write Head as on Spinning Drives. The Statistic marked in blue counts the number of read requests that were resolved from the Flash Cache.
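A minimal sketch to turn the two values into a hit ratio (instance-wide, cumulative since instance startup):

select round(
         100 * max(decode(name, 'cell flash cache read hits', value)) /
         nullif(max(decode(name, 'physical read total IO requests', value)), 0), 2
       ) as flash_cache_hit_pct -- share of read requests resolved from the Flash Cache
from v$sysstat
where name in ('physical read total IO requests',
               'cell flash cache read hits');

With the numbers above, that would be roughly 23 percent.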
Wait Events
We introduced 17 new Wait Events for Exadata. Like the Statistics, they are also visible on non-Exadata platforms but have no value there. They can be seen in this view:
SQL> select name from v$event_name where name like 'cell%';

NAME
----------------------------------------------------------------
cell smart table scan
cell smart index scan
cell statistics gather
cell smart incremental backup
cell smart file creation
cell smart restore from backup
cell single block physical read
cell multiblock physical read
cell list of blocks physical read
cell manager opening cell
cell manager closing cell
cell manager discovering disks
cell worker idle
cell smart flash unkeep
cell worker online completion
cell worker retry
cell manager cancel work request

17 rows selected.
Again, I have highlighted the ones that are in my view the most important. I'd like to give a brief explanation of them:
'cell smart table scan':
Wait Event that occurs during a Smart Scan on a Table.
'cell smart index scan':
Wait Event that occurs during a Smart Scan on an Index (on behalf of a Direct Index Fast Full Scan).
We expect both to be among the Top Wait Events on a Data Warehouse Database, running on Exadata. There is nothing wrong with that; something has to be the Top Wait Event 🙂
'cell multiblock physical read':
This Wait Event occurs most likely because of a conventional Full Table Scan or a conventional Index Fast Full Scan (neither offloaded), typically because the Table or the Index was small. It would be surprising to see this as a major Wait Event on a Data Warehouse Database running on Exadata, but it will show up regularly on an OLTP Database running on Exadata.
'cell single block physical read':
The most likely reason for this is a conventional (not offloaded) Index Access. Again, we expect to see this Wait Event high on OLTP Databases but low on Data Warehouse Databases running on Exadata.
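To see how much time an instance actually spends in these Wait Events, you can query v$system_event – a minimal sketch, cumulative since instance startup:

select event,
       total_waits,
       round(time_waited_micro / 1e6)                               as seconds_waited,
       round(time_waited_micro / 1000 / nullif(total_waits, 0), 2) as avg_wait_ms
from v$system_event
where event like 'cell%'
order by time_waited_micro desc;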
#1 by Kellyn Potvin on July 6, 2011 - 18:22
Excellent as always, Uwe!
#2 by Uwe Hesse on July 7, 2011 - 08:59
Thank you, Kellyn, very much appreciated 🙂
#3 by Ravi Swaminathan on January 17, 2012 - 04:18
Hi Tanel,
We have a 2-node RAC on 11.2.0.2 on an Exadata 2-2 quarter rack.
1 select b.name , a.value_MB
2 from (select name ,statistic# from v$statname
3 where name in (
4 'cell physical IO interconnect bytes',
5 'physical read total bytes',
6 'physical write total bytes'
7 )) b,
8 ( select statistic#, sum(value/1024/1024) Value_MB
9 from v$sesstat
10 group by statistic#) a
11* where b.statistic#=a.statistic#
18:08:29 SQL> /
NAME                                                                 VALUE_MB
---------------------------------------------------------------- ------------
cell physical IO interconnect bytes                              7,667,378.09
physical write total bytes                                       3,343,361.75
physical read total bytes                                          953,338.61
What would cause "cell physical IO interconnect bytes" > "physical read total bytes" + "physical write total bytes"?
From your post I take it that "cell physical IO interconnect bytes" should always be less than the total physical IO. Did I miss something?
#4 by Uwe Hesse on January 17, 2012 - 10:26
I am not Tanel but Uwe 🙂
However, let me guess: You use HCC compression, right? The decompression can be offloaded to the Cells. So they read less than they send over the Storage Interconnect in that case.
That is indeed a good thing, though, because the decompression is a CPU-intensive task that might be a burden for the database nodes – which are fewer in number than the storage nodes.
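If that guess applies, it should show up in the 'cell%' Statistics from the list at the top of the posting – a minimal sketch; a growing 'cell CUs sent uncompressed' would indicate that the Cells decompress Compression Units before shipping them:

select name, value
from v$sysstat
where name in ('cell CUs sent uncompressed',
               'cell CUs sent compressed',
               'cell IO uncompressed bytes');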
#5 by Ravi Swaminathan on January 17, 2012 - 13:13
SQL> select compression,count(*) from dba_tables group by compression;

COMPRESSION    COUNT(*)
------------ ----------
                     95
DISABLED          17349
ENABLED               2
SQL> select owner,table_name,COMPRESSION,COMPRESS_FOR from dba_tables where COMPRESSION='ENABLED';

OWNER      TABLE_NAME      COMPRESSION  COMPRESS_FOR
---------- --------------- ------------ ------------
SYSMAN     MGMT_METRICS    ENABLED      OLTP
SYSMAN     MGMT_POLICIES   ENABLED      OLTP
SQL> select count(*) from v$segment_statistics where (owner,OBJECT_NAME) in (select owner,table_name from dba_tables where compression='ENABLED');

  COUNT(*)
----------
         0
The compression does not seem to be the culprit in this case.
#6 by Uwe Hesse on January 18, 2012 - 10:10
Beats me. Maybe it is a Bug – I have seen Bugs reported about high values of the 'cell physical IO interconnect bytes' statistic even on non-Exadata systems.
#7 by Tanel Poder on March 5, 2012 - 16:40
Just noticed this. It's likely due to ASM mirroring: the "physical write" metric is counted above the ASM layer in the stack, but ASM will then double (or triple) these writes due to mirroring – and the "cell physical IO interconnect bytes" metric measures the real traffic (all 2x or 3x writes) across the interconnect. I've explained this in the book too…
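To verify that on a given system, one can check the diskgroup redundancy – a minimal sketch; NORMAL redundancy roughly doubles and HIGH roughly triples each write on the Storage Interconnect:

select name,
       type, -- EXTERN, NORMAL or HIGH redundancy
       decode(type, 'NORMAL', 2, 'HIGH', 3, 1) as approx_write_multiplier
from v$asm_diskgroup;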
#8 by Uwe Hesse on March 6, 2012 - 13:44
Thank you, Tanel, for the explanation! It is indeed in your book in Chapter 12 Monitoring Exadata Performance. I was not completely wrong with the HCC guess, though, as this could have been another reason for the phenomenon – you list that also there 🙂
#9 by linamos on August 23, 2012 - 06:05
Uwe,
Good job! I enjoy reading your blog.
2 dumb questions:
1. Is 'cell physical IO interconnect bytes' in v$sysstat the same as IO_INTERCONNECT_BYTES in v$sql?
2. What is the relationship between those stats in v$sysstat and the IO_CELLxxxx columns in v$sql?
Thanks,
Amos
#10 by Gary on September 2, 2012 - 01:40
Hi UWE
I have OLTP compression enabled on the table, and this statement is taking a very long time:

CREATE BITMAP INDEX I1
ON T1 (col1 ASC)
NOLOGGING TABLESPACE ts_index01;
The wait event is : cell single block physical read.
Any ideas what I can do?
Thank u
#11 by Dontcheff on September 4, 2012 - 11:13
I find this post extremely interesting and useful for all DBAs. Well-done Uwe!
Best,
Julian
#12 by Kate JI on August 25, 2015 - 06:53
Hi Uwe,
I'm currently tuning a RAC on Exadata system.
I would like your help with the following question.
How do I check the Exadata wait events for a specific SQL (SQL_ID) in Oracle RAC?
I scanned through a lot of websites, including yours, looking for an answer, however most of them use v$mystat, which shows the wait events only for the current session. Could you provide a script to show the Exadata wait events for a specific SQL (SQL_ID)? I'm about to check the "IO saved by smart scan" of a "full table storage scan".
Thank you so much!
Wen Liu
#13 by Chris Hammond on November 22, 2015 - 07:22
Hi,
I’m new to Exadata – 2 weeks. Thrown into performance tuning a system that was just migrated from 4-node RAC.
I've been Googling non-stop, and ordered "Expert Exadata" on Amazon. Glad to have found this site!
My question is regarding the cell single block physical read wait event. My app has a 6-way UNION in a PL/SQL package that takes 28 minutes to run – 3 out of 4 times. About one time out of 4 it takes twice as long: 1 hour to 1 hour 20 minutes.
There doesn't seem to be a pattern. The data is exactly the same – we're testing in a DEV environment. (Each SQL in the UNION has a parallel(6) hint.)
Examining all the AWRs, the thing that stood out was that the cell single block physical read waits for the long-running tests were much higher every time: for example, 7 million waits versus ~184,000. Again, the data/totals processed are exactly the same for every test run. Also, I noticed that the elapsed time of the query is weighted more toward I/O than CPU only during the long-running tests. So it looks like I/O is a good candidate for contributing to the longer run times.
Any ideas on what might be happening? You mentioned an index scan not being offloaded. I could use some expert input/thoughts. What might cause that to happen sometimes, but not most of the time? What other things can I look at?
Or am I going down the wrong path here, and cell single block physical read is probably not the source of the problem?
Thanks a lot for your time and expert help.
#14 by Uwe Hesse on November 23, 2015 - 10:05
Chris, it could be that we are not able to smart scan all blocks every time. A typical reason why we need to read some blocks conventionally is that concurrent transactions modify the tables at the same time we are selecting from them. a) Check in the AWR reports whether the transactional load is higher in the 1/4 runs with longer execution time. b) Also, check the top SQL statements and compare their execution plans to see if we (for some reason) did not choose the full table/index scan in the 1/4 runs. If it's a), there's not much you can do about it; if it's b), you may use SQL Plan Baselines to prevent the usage of the different execution plan. Also, in case of b), you should investigate whether the optimizer environment was different in that period (different parameters, different statistics etc.)
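For case b), a minimal sketch of what loading the good plan as a fixed SQL Plan Baseline could look like – sql_id and plan_hash_value below are placeholders to be taken from a fast run, not values from your system:

declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'dummy_sql_id', -- placeholder: sql_id of the statement
         plan_hash_value => 1234567890,     -- placeholder: hash value of the good plan
         fixed           => 'YES');         -- fixed baseline: other plans will not be used
  dbms_output.put_line(n || ' plan(s) loaded');
end;
/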
#15 by Chris Hammond on November 26, 2015 - 02:13
Hi Uwe – thanks very much for your quick response and extremely helpful information. Looking at the good/bad AWRs, it definitely looks like I/O is at least a contributing factor to the slow times – the physical I/O is twice as much, and the query's elapsed time is more heavily weighted to I/O rather than CPU only during the slow executions. All SQLs run in under a minute – except the monster, a UNION of 6 queries, almost the same SQL except for "Category=" (Financials: Credit Memo types, etc.). No updates are being done – at least no jobs that do updates are supposed to be running at the same time, but we know how that works. I didn't see any in the AWRs, though.
Interesting point on maybe getting a different cursor/access path on the slow runs. I captured the fast one. I'll have to capture every time the job runs, because I never know which run will be a slow one. I'm working on a Mac for the first time and looking for a tool that will help with doing plan compares. Any suggestions would be greatly appreciated.
Thanks again for your time and help!
Regards,
Chris
#16 by Chris Hammond on November 26, 2015 - 02:41
Hi – just wanted to add something I did not mention, which, I believe, can be particularly relevant with Exadata (please correct me if I'm wrong on that): the query – all 6 parts of the UNION – includes a parallel(6) hint. Also, I do see the TABLE FULL STORAGE scans being used (from the cursor's plan).
Thanks again and regards,
Chris
PS – Happy Thanksgiving!