Posts tagged with Performance Tuning
Monitor RAC I/O with gv$iostat_function
I was delivering an 11gR2 New Features course, followed by an 11gR2 RAC accelerated course. That led me to combine the not so widely known new view v$iostat_function with the RAC area 🙂
We can now very easily monitor which DB function is doing I/O on which instance in our RAC database:
SQL> select inst_id,function_name,
sum(small_read_megabytes+large_read_megabytes) as read_mb,
sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by inst_id,function_name;
INST_ID FUNCTION_NAME READ_MB WRITE_MB
---------- ------------------ ---------- ----------
1 ARCH 0 0
1 Archive Manager 0 0
1 Buffer Cache Reads 610 0
1 DBWR 12 373
1 Data Pump 0 0
1 Direct Reads 1162 1
1 Direct Writes 1 167
1 LGWR 1 346
1 Others 5215 2116
1 RMAN 0 0
1 Recovery 0 0
1 Smart Scan 0 0
1 Streams AQ 1 0
1 XDB 0 0
1 7002 3003
2 ARCH 0 0
2 Archive Manager 0 0
2 Buffer Cache Reads 187 0
2 DBWR 11 520
2 Data Pump 0 0
2 Direct Reads 6 0
2 Direct Writes 0 0
2 LGWR 1 299
2 Others 3898 1030
2 RMAN 0 0
2 Recovery 0 0
2 Smart Scan 0 0
2 Streams AQ 1 0
2 XDB 0 0
2 4104 1849
3 ARCH 0 0
3 Archive Manager 0 0
3 Buffer Cache Reads 131 0
3 DBWR 2 79
3 Data Pump 0 0
3 Direct Reads 0 0
3 Direct Writes 0 0
3 LGWR 0 58
3 Others 1140 269
3 RMAN 0 0
3 Recovery 0 0
3 Smart Scan 0 0
3 Streams AQ 0 0
3 XDB 0 0
3 1273 406
ARCH 0 0
Archive Manager 0 0
Buffer Cache Reads 928 0
DBWR 25 972
Data Pump 0 0
Direct Reads 1168 1
Direct Writes 1 167
LGWR 2 703
Others 10253 3415
RMAN 0 0
Recovery 0 0
Smart Scan 0 0
Streams AQ 2 0
XDB 0 0
12379 5258
60 rows selected.
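A note on the output: GROUP BY CUBE produces not only the per-instance, per-function rows, but also per-instance totals (rows with empty FUNCTION_NAME), per-function totals across all instances (rows with empty INST_ID), and a grand total at the end, which is why we get 60 rows. If those subtotal rows are hard to tell apart, the GROUPING function can label them. A minimal sketch of that variant:

select decode(grouping(inst_id), 1, 'ALL', to_char(inst_id)) as inst,
       decode(grouping(function_name), 1, 'ALL FUNCTIONS', function_name) as fname,
       sum(small_read_megabytes+large_read_megabytes) as read_mb,
       sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by 1,2;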
This is a 3 node cluster with not much I/O done yet, especially no Data Pump related I/O. Let's change that:
SQL> create directory dpdir as '+data';

Directory created.

SQL> create directory logdir as '/home/oracle/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@host01 ~]$ mkdir /home/oracle/logdir
[oracle@host01 ~]$ expdp full=y directory=dpdir logfile=logdir:mylog.txt

-- Output not shown --

Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:00:33
The dumpfile was created in the ASM diskgroup DATA here, while the logfile was placed in a conventional directory on the local node. When we look into the diskgroup, we see that an alias was created:
ASMCMD> ls data -al
Type Redund Striped Time Sys Name
Y ORCL/
Y cluster01/
N expdat.dmp => +DATA/ORCL/DUMPSET/SYSSYS_EXPORT_FULL_01_74638_1.272.774121815
gv$iostat_function now shows that Data Pump related I/O:
SQL> select inst_id,function_name,
sum(small_read_megabytes+large_read_megabytes) as read_mb,
sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by inst_id,function_name;

INST_ID FUNCTION_NAME READ_MB WRITE_MB
---------- ------------------ ---------- ----------
1 ARCH 0 0
1 Archive Manager 0 0
1 Buffer Cache Reads 770 0
1 DBWR 14 425
1 Data Pump 795 540
1 Direct Reads 1194 1
1 Direct Writes 1 167
1 LGWR 1 451
1 Others 5297 2131
1 RMAN 0 0
1 Recovery 0 0
1 Smart Scan 0 0
1 Streams AQ 1 0
1 XDB 0 0
1 8073 3715
2 ARCH 0 0
2 Archive Manager 0 0
2 Buffer Cache Reads 191 0
2 DBWR 13 541
2 Data Pump 0 0
2 Direct Reads 6 0
2 Direct Writes 0 0
2 LGWR 1 309
2 Others 3955 1044
2 RMAN 0 0
2 Recovery 0 0
2 Smart Scan 0 0
2 Streams AQ 1 0
2 XDB 0 0
2 4167 1894
3 ARCH 0 0
3 Archive Manager 0 0
3 Buffer Cache Reads 142 0
3 DBWR 4 83
3 Data Pump 0 0
3 Direct Reads 0 0
3 Direct Writes 0 0
3 LGWR 0 68
3 Others 1233 283
3 RMAN 0 0
3 Recovery 0 0
3 Smart Scan 0 0
3 Streams AQ 0 0
3 XDB 0 0
3 1379 434
ARCH 0 0
Archive Manager 0 0
Buffer Cache Reads 1103 0
DBWR 31 1049
Data Pump 795 540
Direct Reads 1200 1
Direct Writes 1 167
LGWR 2 828
Others 10485 3458
RMAN 0 0
Recovery 0 0
Smart Scan 0 0
Streams AQ 2 0
XDB 0 0
13619 6043

60 rows selected.
In this case, all the Data Pump I/O was done by instance 1. There is also a relation to Exadata when you look at the function Smart Scan 🙂
Addendum: The Data Pump export fails if the log file is not redirected out of the ASM diskgroup:
[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir
Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:42:37 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
That is why I redirected the log file to a local directory above, which is less elegant than using an ACFS based directory:
SQL> drop directory logdir;

Directory dropped.

SQL> create directory logdir as '/u01/app/grid/acfsmounts/data_myvol/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir logfile=logdir:mylog.txt

Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:06:46 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

-- output not shown --

Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 17:07:26
On Exadata, we may use DBFS instead.
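A minimal sketch of that, assuming a DBFS file system is already mounted at the hypothetical path /dbfs_direct/FS1:

-- hypothetical DBFS mount point, adjust to your environment
SQL> create directory logdir as '/dbfs_direct/FS1/logdir';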
Auto DOP: Differences of parallel_degree_policy=auto/limited
Recently, I delivered a seminar about Parallel Processing in 11g where I came across some interesting findings that I'd like to share with the Oracle Community. See my introduction into the 11g New Feature Auto DOP here, if that topic is completely new to you. There are big differences in the handling of hints and table-assigned parallel degrees, depending on the setting of parallel_degree_policy.
The parameter defaults to MANUAL, which gives you the known behavior of versions before 11g. LIMITED will assign a system-computed degree of parallelism (DOP) only to tables decorated with a parallel degree of DEFAULT, while parallel_degree_policy=AUTO will consider assigning a system-computed DOP to all tables. Let's see some details:
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
I did my tests on 11.2.0.1 and 11.2.0.2.
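By the way, parallel_degree_policy can also be changed at session level, which makes it convenient to compare the three values without affecting other users. A minimal sketch:

SQL> alter session set parallel_degree_policy=limited;

Session altered.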
SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam@prima
Connected.
SQL> set pages 300
SQL> show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 20
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 8
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0

SQL> create table t as select * from dual;

Table created.
That is my test case. All the parameters shown have their default values. The table t is of course way too small to justify a parallel operation; in particular, it will not meet the parallel_min_time_threshold of estimated runtime (about 10 seconds with AUTO). The setting parallel_degree_policy=MANUAL would nevertheless leave the system as dumb as in earlier versions regarding an appropriate DOP: it would give me any DOP I demand with hints or a parallel degree on the table. See how that is different with AUTO/LIMITED:
SQL> alter system set parallel_degree_policy=auto;

System altered.

SQL> select /*+ parallel (t,8) */ * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 0 0
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 0 0
Server Threads 0 0
Allocation Height 0 0
Allocation Width 0 0
Local Msgs Sent 0 0
Distr Msgs Sent 0 0
Local Msgs Recv'd 0 0
Distr Msgs Recv'd 0 0

11 rows selected.
Although this was a valid hint, I got no parallel operation! That is different with LIMITED:
SQL> alter system set parallel_degree_policy=limited;

System altered.

SQL> select /*+ parallel (t,8) */ * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 8 0
Allocation Height 8 0
Allocation Width 1 0
Local Msgs Sent 26 26
Distr Msgs Sent 0 0
Local Msgs Recv'd 26 26
Distr Msgs Recv'd 0 0

11 rows selected.
Same statement, but now I got my (not sensible) DOP of 8. There is a new statement-level hint in 11g, though, that also delivers my requested DOP with AUTO:
SQL> connect adam/adam@prima
Connected.
SQL> alter system set parallel_degree_policy=auto;

System altered.

SQL> select /*+ parallel (8) */ * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 8 0
Allocation Height 8 0
Allocation Width 1 0
Local Msgs Sent 26 26
Distr Msgs Sent 0 0
Local Msgs Recv'd 26 26
Distr Msgs Recv'd 0 0

11 rows selected.
That is the only way to overrule Auto DOP with parallel_degree_policy=AUTO. It is similar with a parallel degree set on the table:
SQL> connect adam/adam@prima
Connected.
SQL> alter system set parallel_degree_policy=auto;

System altered.

SQL> alter table t parallel;

Table altered.

SQL> select * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 0 0
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 0 0
Server Threads 0 0
Allocation Height 0 0
Allocation Width 0 0
Local Msgs Sent 0 0
Distr Msgs Sent 0 0
Local Msgs Recv'd 0 0
Distr Msgs Recv'd 0 0

11 rows selected.

SQL> alter table t parallel 8;

Table altered.

SQL> select * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 0 0
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 0 0
Server Threads 0 0
Allocation Height 0 0
Allocation Width 0 0
Local Msgs Sent 0 0
Distr Msgs Sent 0 0
Local Msgs Recv'd 0 0
Distr Msgs Recv'd 0 0

11 rows selected.

SQL> alter system set parallel_degree_policy=limited;

System altered.

SQL> alter table t parallel;

Table altered.

SQL> select * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 0 0
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 0 0
Server Threads 0 0
Allocation Height 0 0
Allocation Width 0 0
Local Msgs Sent 0 0
Distr Msgs Sent 0 0
Local Msgs Recv'd 0 0
Distr Msgs Recv'd 0 0

11 rows selected.

SQL> alter table t parallel 8;

Table altered.

SQL> select * from t;

D
-
X

SQL> select * from v$pq_sesstat;

STATISTIC LAST_QUERY SESSION_TOTAL
------------------------------ ---------- -------------
Queries Parallelized 1 1
DML Parallelized 0 0
DDL Parallelized 0 0
DFO Trees 1 1
Server Threads 8 0
Allocation Height 8 0
Allocation Width 1 0
Local Msgs Sent 26 26
Distr Msgs Sent 0 0
Local Msgs Recv'd 26 26
Distr Msgs Recv'd 0 0

11 rows selected.
The behavior I observed is summarized in the table below:
| parallel_degree_policy | parallel (t,8) | parallel (8) | degree DEFAULT | degree 8 |
|------------------------|----------------|--------------|----------------|----------|
| manual                 | 8              | 8            | 4              | 8        |
| limited                | 8              | 8            | 1              | 8        |
| auto                   | 1*             | 8            | 1              | 1        |
The default degree with parallel_degree_policy=MANUAL is cpu_count * parallel_threads_per_cpu; 4 in my case.
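As a quick sanity check of that formula, the two underlying parameters can be queried directly. A minimal sketch; given parallel_threads_per_cpu=2 from the parameter listing above, a default degree of 4 implies cpu_count=2 on my test system:

select name, value from v$parameter
where name in ('cpu_count','parallel_threads_per_cpu');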
Apart from the differences shown between AUTO and LIMITED, only AUTO enables the other new features Parallel Statement Queuing and In-Memory Parallel Execution. My personal impression is that LIMITED works like we always hoped parallel_automatic_tuning would, but never did 🙂
Conclusion: parallel_degree_policy=LIMITED will give you the DOP you request and will compute an appropriate DOP only for tables decorated with a parallel degree of DEFAULT. Use this value if you trust that your applications/designers know why they use a certain DOP. parallel_degree_policy=AUTO will overrule any specific DOP you give (except with the new 11g parallel (n) hint) and will consider doing things in parallel for all tables, even without a hint or degree.
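Before switching the policy, it can help to audit which tables already carry an explicit DOP or a DEFAULT decoration. A minimal sketch; note that DEGREE is stored as a blank-padded string in the dictionary:

select owner, table_name, trim(degree) as degree
from dba_tables
where trim(degree) <> '1'
order by owner, table_name;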
Addendum: See this nice related posting by Gwen Shapira, especially the part about the I/O calibration.
* Second Addendum: With 11.2.0.3, the hint /*+ parallel (t,8) */ sets the DOP to 8, regardless of the parallel_degree_policy setting. Everything else stays as shown, especially the different behavior of AUTO and LIMITED with the parallel degree of the table t explicitly set to 8.
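By the way, with parallel_degree_policy=AUTO, the Note section of the execution plan reveals the degree the optimizer computed, which is handy to verify the behavior shown above. A minimal sketch, using the table t from the test case:

SQL> explain plan for select /*+ parallel (t,8) */ * from t;
SQL> select * from table(dbms_xplan.display(format => 'basic +note'));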
Important Statistics & Wait Events on Exadata

With this posting, I'd like to highlight the most important Statistics & Wait Events on Exadata that, in my view, a DBA needs to be familiar with. We start with Statistics, retrievable from v$sysstat, for example. For Exadata, we have introduced 43 new Statistics that all start with 'cell%':
SQL> select name from v$statname where name like 'cell%';

NAME
----------------------------------------------------------------
cell physical IO interconnect bytes
cell physical IO bytes saved during optimized file creation
cell physical IO bytes saved during optimized RMAN file restore
cell physical IO bytes eligible for predicate offload
cell physical IO bytes saved by storage index
cell smart IO session cache lookups
cell smart IO session cache hits
cell smart IO session cache soft misses
cell smart IO session cache hard misses
cell smart IO session cache hwm
cell num smart IO sessions in rdbms block IO due to user
cell num smart IO sessions in rdbms block IO due to no cell mem
cell num smart IO sessions in rdbms block IO due to big payload
cell num smart IO sessions using passthru mode due to user
cell num smart IO sessions using passthru mode due to cellsrv
cell num smart IO sessions using passthru mode due to timezone
cell num smart file creation sessions using rdbms block IO mode
cell physical IO interconnect bytes returned by smart scan
cell num fast response sessions
cell num fast response sessions continuing to smart scan
cell smart IO allocated memory bytes
cell smart IO memory bytes hwm
cell num active smart IO sessions
cell smart IO sessions hwm
cell scans
cell blocks processed by cache layer
cell blocks processed by txn layer
cell blocks processed by data layer
cell blocks processed by index layer
cell commit cache queries
cell transactions found in commit cache
cell blocks helped by commit cache
cell blocks helped by minscn optimization
cell simulated physical IO bytes eligible for predicate offload
cell simulated physical IO bytes returned by predicate offload
cell CUs sent uncompressed
cell CUs sent compressed
cell CUs sent head piece
cell CUs processed for uncompressed
cell CUs processed for compressed
cell IO uncompressed bytes
cell index scans
cell flash cache read hits

43 rows selected.
Let's go through the Statistics that are, in my opinion, the most important. Now to their meaning:
SQL> connect adam/adam
Connected.
SQL> select * from sales;
* Output suppressed, because I don't want millions of rows in the posting *
select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME MB
---------------------------------------------------------------- ----------
physical read total bytes 428.4375
cell physical IO interconnect bytes 428.4375
cell physical IO interconnect bytes returned by smart scan 0
The demo table sales is 428 MB in size, large enough to cause a Serial Direct Read and thereby make Smart Scans possible. You see the Statistic 'cell physical IO interconnect bytes' is in this case equal to 'physical read total bytes', because the whole amount of data scanned on the Storage Layer was sent over the Storage Interconnect (do not confuse this with a RAC Interconnect!) to the Database Layer. Consequently, this was no Smart Scan: we did neither column projection nor filtering. 'cell physical IO interconnect bytes' counts the bytes sent between the Storage Layer and the Database Layer, and v$mystat displays that value only for my session. Let's do a Smart Scan:
SQL> select * from sales where channel_id=2;
* Output suppressed *
select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);
NAME MB
---------------------------------------------------------------- ----------
physical read total bytes 855.851563
cell physical IO interconnect bytes 506.039314
cell physical IO interconnect bytes returned by smart scan 77.6018143
The values in v$mystat are cumulative for the whole session. You see the value of 'physical read total bytes' has doubled, because we did another Full Table Scan on the Storage Layer, but 'cell physical IO interconnect bytes' did not double, because the filtering of the WHERE clause was done on the Storage Layer. Only about 77 MB had to be sent to the Database Layer instead, which is also reflected by 'cell physical IO interconnect bytes returned by smart scan'. So this Statistic shows the data volume that was sent to the Database Layer on behalf of a Smart Scan. For a Database that does many Full Table Scans or Index Fast Full Scans on large Segments, it is supposed to be significantly smaller than 'physical read total bytes'.
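From these two statistics, a rough per-session interconnect reduction can be computed. A minimal sketch; the ratio is only a meaningful indicator for sessions that mainly do Smart Scans, since 'physical read total bytes' includes conventional reads as well:

select round(100 * (1 - ic.mb / nullif(pr.mb,0)), 1) as pct_saved
from (select value/1024/1024 as mb from v$statname natural join v$mystat
      where name = 'cell physical IO interconnect bytes') ic,
     (select value/1024/1024 as mb from v$statname natural join v$mystat
      where name = 'physical read total bytes') pr;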
Storage Indexes have the ability to speed up Full Table Scans dramatically:
SQL> select * from sales where id=4711;

PRODUCT CHANNEL_ID CUST_ID AMOUNT_SOLD TIME_ID ID
------------------------- ---------- ---------- ----------- --------- ----------
Oracle Enterprise Edition 3 998 5000 01-NOV-10 4711

SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name in
(
'physical read total bytes',
'cell physical IO interconnect bytes',
'cell physical IO interconnect bytes returned by smart scan'
);

NAME MB
---------------------------------------------------------------- ----------
physical read total bytes 1283.26563
cell physical IO interconnect bytes 506.041885
cell physical IO interconnect bytes returned by smart scan 77.6043854
Although the Statistic 'physical read total bytes' again increased by 428 MB, we see only a very slight increase in 'cell physical IO interconnect bytes' and 'cell physical IO interconnect bytes returned by smart scan', because the data volume (containing just one row of interest) sent to the Database Layer was very small. Also, the Smart Scan operation completed very fast, because a Storage Index told the Cells where the ID we searched for could not possibly be:
SQL> select name,value/1024/1024 as mb from v$statname
natural join v$mystat where name='cell physical IO bytes saved by storage index';
NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 419.742188
This Statistic shows the data volume the Cells did not need to scan through, because the Storage Index told them the data could not be there. In this case, the vast majority of the table was skipped, although 'physical read total bytes' counts it as if the Cells had scanned the whole table.
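The same statistic is also available system-wide and cumulative since instance startup. A minimal sketch:

select name, round(value/1024/1024/1024,2) as gb
from v$sysstat
where name = 'cell physical IO bytes saved by storage index';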
The next Statistic is very important for an OLTP Database running on Exadata:
SQL> select name,value from v$sysstat where name in
('physical read total IO requests','cell flash cache read hits');
NAME VALUE
---------------------------------------------------------------- ----------
physical read total IO requests 142476
cell flash cache read hits 32897
If a large share of 'physical read total IO requests' was satisfied by 'cell flash cache read hits', we can conclude that we have the basis for a high I/O per second (IOPS) rate, because on Flash Storage we do not experience the latency caused by positioning the Read/Write Head as on Spinning Drives. 'cell flash cache read hits' counts the number of read requests that were resolved from the Flash Cache.
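A rough flash cache hit ratio can be derived from these two counters; with the numbers above, that is about 23 percent. A minimal sketch; both values are cumulative since instance startup, so it is best read as a trend:

select round(100 * hits.value / nullif(reqs.value,0), 1) as flash_hit_pct
from (select value from v$sysstat
      where name = 'cell flash cache read hits') hits,
     (select value from v$sysstat
      where name = 'physical read total IO requests') reqs;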
Wait Events
We introduced 17 new Wait Events for Exadata. Like the Statistics, they are also visible on non-Exadata platforms but show no values there. They can be seen in this view:
SQL> select name from v$event_name where name like 'cell%';

NAME
----------------------------------------------------------------
cell smart table scan
cell smart index scan
cell statistics gather
cell smart incremental backup
cell smart file creation
cell smart restore from backup
cell single block physical read
cell multiblock physical read
cell list of blocks physical read
cell manager opening cell
cell manager closing cell
cell manager discovering disks
cell worker idle
cell smart flash unkeep
cell worker online completion
cell worker retry
cell manager cancel work request

17 rows selected.
Again, I'd like to give a brief explanation of the ones that are, in my view, the most important:
‚cell smart table scan‘:
Wait Event that occurs during a Smart Scan on a Table.
‚cell smart index scan‘:
Wait Event that occurs during a Smart Scan on an Index (on behalf of a Direct Index Fast Full Scan)
We expect both to be among the Top Wait Events on a Data Warehouse Database running on Exadata. There is nothing wrong with that; something has to be the Top Wait Event 🙂
‚cell multiblock physical read‘:
This Wait Event occurs most likely because of a conventional Full Table Scan or a conventional Index Fast Full Scan (both not offloaded), typically because the Table or the Index was small. It would be surprising to see this as a major Wait Event on a Data Warehouse Database running on Exadata, but it will show up regularly on OLTP Databases running on Exadata.
‚cell single block physical read‘:
The most likely reason for this is a conventional (not offloaded) Index Access. Again, we expect to see this Wait Event high on OLTP but low on Data Warehouse Databases running on Exadata.
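To see which of these Wait Events actually dominate on a given system, v$system_event can be filtered accordingly. A minimal sketch:

select event, total_waits,
       round(time_waited_micro/1000000) as seconds_waited
from v$system_event
where event like 'cell%'
order by time_waited_micro desc;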
