Posts tagged with 11g New Features
Multiple dbverify sessions accessing one large datafile in parallel
Do you use dbverify (dbv) against large datafiles to check for block corruption? Then chances are that you missed an 11gR1 New Feature: dbv can now divide one file into multiple parts and check them in parallel:
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> col name for a50
SQL> select name,blocks from v$datafile;

NAME                                                   BLOCKS
-------------------------------------------------- ----------
/home/oracle/prima/system01.dbf                         38400
/home/oracle/prima/sysaux01.dbf                         25600
/home/oracle/prima/undotbs01.dbf                        22912
/home/oracle/prima/users01.dbf                           1280
My demo system is on 11.2.0.3, but the feature has been available since 11gR1, as the documentation page here shows – see the parameters start and end there.
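Incidentally, matching start/end pairs can be generated straight from the BLOCKS column shown above – a minimal sketch (the four-way split is an arbitrary choice of mine, not part of the feature):

SQL> select 'dbv file=' || name ||
            ' start=' || ((part-1) * ceil(blocks/4) + 1) ||
            ' end='   || least(part * ceil(blocks/4), blocks) as dbv_command
     from v$datafile,
          (select level as part from dual connect by level <= 4)
     order by name, part;

For the demo I simply picked the ranges by hand: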
[oracle@uhesse1 ~]$ dbv file=/home/oracle/prima/users01.dbf start=1 end=500

DBVERIFY: Release 11.2.0.3.0 - Production on Mon Aug 20 21:59:28 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = /home/oracle/prima/users01.dbf

DBVERIFY - Verification complete

Total Pages Examined         : 500
Total Pages Processed (Data) : 11
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 136
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 353
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 212814 (0.212814)

[oracle@uhesse1 ~]$ dbv file=/home/oracle/prima/users01.dbf start=501 end=1280

DBVERIFY: Release 11.2.0.3.0 - Production on Mon Aug 20 22:10:16 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = /home/oracle/prima/users01.dbf

DBVERIFY - Verification complete

Total Pages Examined         : 780
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 0
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 780
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 0 (0.0)
Granted, the file here is not large – but you get the picture: I could just as well have run the two dbv sessions in parallel from two different terminal windows. A recent Twitter thread made me realize that this 11g New Feature, which I have been teaching for years, is probably widely unknown – did you know it?
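Here is a minimal shell sketch of that parallel idea – the file path and block count are taken from the demo above, while the log file names are made up for illustration:

#!/bin/bash
# Verify two halves of the same datafile concurrently (sketch).
DF=/home/oracle/prima/users01.dbf
dbv file=$DF start=1   end=640  logfile=dbv_part1.log &
dbv file=$DF start=641 end=1280 logfile=dbv_part2.log &
wait    # block until both background dbv sessions have finished
grep "Marked Corrupt" dbv_part*.log    # quick corruption summary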
As always: Don’t believe it, test it! 🙂
Materialized Views & Partition Change Tracking
During the 11g Data Warehouse Administration course that I delivered this week in Munich, I showed this demonstration about Partition Change Tracking (PCT) that I'd like to share with the Oracle Community. You may have a look here for an introduction to Materialized Views if that topic is new to you. Often, the Base Tables of Materialized Views are partitioned, which gives us additional options for the Refresh as well as for the Rewrite:
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.
SQL> create table sales
     (product varchar2(50), channel_id number, cust_id number,
      amount_sold number(10,2), time_id date)
     partition by list (channel_id)
     (partition c0 values (0),
      partition c1 values (1),
      partition c2 values (2),
      partition c3 values (3),
      partition c4 values (4));

Table created.

SQL> alter table sales nologging;

Table altered.

SQL> insert /*+ append */ into sales
     select 'Oracle Enterprise Edition' as product,
            mod(rownum,5) as channel_id,
            mod(rownum,1000) as cust_id,
            5000 as amount_sold,
            to_date('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2010',
                    'dd.mm.yyyy') as time_id
     from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.
The above was my demo setup, done with 11.2.0.1. Most of what is shown should work with lower versions as well, though. We will now create a Materialized View that contains the Partition Key of the Base Table, which makes it easy for the system to associate the rows of the Materialized View with the Partitions. Otherwise, we would need to establish that association with a Partition Marker, as sketched further below.
SQL> create materialized view mv_pct enable query rewrite as
     select channel_id, sum(amount_sold)
     from sales group by channel_id;

Materialized view created.
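Just for contrast: had the MV not contained the Partition Key, a Partition Marker could take over its role. A sketch, not part of the original demo – the MV name mv_pmarker is made up, DBMS_MVIEW.PMARKER is the documented function:

SQL> create materialized view mv_pmarker
     enable query rewrite as
     select dbms_mview.pmarker(s.rowid) as pmark,  -- maps each rowid to its partition
            sum(s.amount_sold) as amount_sold
     from sales s
     group by dbms_mview.pmarker(s.rowid);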
Back to mv_pct: the MV is now fresh and may get used for Query Rewrite:
SQL> select mview_name,last_refresh_type,staleness from user_mviews;

MVIEW_NAME                     LAST_REF STALENESS
------------------------------ -------- -------------------
MV_PCT                         COMPLETE FRESH

SQL> set autotrace on explain
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
         0       1000000000
         1       1000000000
         2       1000000000
         3       1000000000
         4       1000000000

Execution Plan
----------------------------------------------------------
Plan hash value: 470332451

---------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |     5 |   130 |     3   (0)| 00:00:01 |
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV_PCT |     5 |   130 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
Now we modify one row:
SQL> set autotrace off
SQL> update sales set amount_sold=1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.
This makes the MV stale, right? Well, actually not all of it:
SQL> select mview_name,last_refresh_type,staleness from user_mviews;

MVIEW_NAME                     LAST_REF STALENESS
------------------------------ -------- -------------------
MV_PCT                         COMPLETE NEEDS_COMPILE

SQL> select detail_partition_name, freshness from user_mview_detail_partition;

DETAIL_PARTITION_NAME          FRESH
------------------------------ -----
C4                             FRESH
C3                             FRESH
C2                             FRESH
C1                             FRESH
C0                             STALE
The second Dictionary View is new in 11g – the behavior shown by the next query is not. Only Partition C0 is stale, which is why queries that do not need channel_id 0 can still use the MV:
SQL> set autotrace on explain
SQL> select sum(amount_sold) from sales where channel_id=1;
SUM(AMOUNT_SOLD)
----------------
1000000000
Execution Plan
----------------------------------------------------------
Plan hash value: 1805549181
----------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |     1 |    26 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |        |     1 |    26 |            |          |
|*  2 |   MAT_VIEW REWRITE ACCESS FULL| MV_PCT |     1 |    26 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("MV_PCT"."CHANNEL_ID"=1)
Note
-----
- dynamic sampling used for this statement (level=2)
The Rewrite is still possible although part of the MV is stale – we call that PCT Rewrite. Furthermore, we can now do a PCT Fast Refresh, although there is no MV Log on the Base Table sales. The PCT Refresh does not need to scan the whole Base Table but only the stale Partitions, which is in this case about 5 times faster than a Complete Refresh, because we have 5 equally sized Partitions. The refresh method 'P' is new in 11g, but PCT Refresh also works with 'F' or '?' in earlier versions.
SQL> set autotrace off
SQL> exec dbms_mview.refresh('MV_PCT','P')

PL/SQL procedure successfully completed.

SQL> select mview_name,last_refresh_type,staleness from user_mviews;

MVIEW_NAME                     LAST_REF STALENESS
------------------------------ -------- -------------------
MV_PCT                         FAST_PCT FRESH
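By the way, whether an MV supports PCT Refresh and PCT Rewrite at all can be verified with DBMS_MVIEW.EXPLAIN_MVIEW. A minimal sketch, assuming MV_CAPABILITIES_TABLE has been created with the utlxmv.sql script first:

SQL> @?/rdbms/admin/utlxmv.sql
SQL> exec dbms_mview.explain_mview('MV_PCT')
SQL> select capability_name, possible, msgtxt
     from mv_capabilities_table
     where capability_name like '%PCT%';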
Conclusion: Partitioning delivers some extra benefits when used together with Materialized Views. Especially, Rewrite & Refresh can then be done on the Partition level. As always: "Don't believe it, test it!" 🙂
Monitor RAC I/O with gv$iostat_function
I was delivering an 11gR2 New Features course, followed by an 11gR2 RAC accelerated course. That led me to combine the not so widely known new view v$iostat_function with the RAC area 🙂
We can now very easily monitor which DB function is doing I/O on which instance of our RAC database:
SQL> select inst_id,function_name,
sum(small_read_megabytes+large_read_megabytes) as read_mb,
sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by inst_id,function_name;
   INST_ID FUNCTION_NAME         READ_MB   WRITE_MB
---------- ------------------ ---------- ----------
         1 ARCH                        0          0
         1 Archive Manager             0          0
         1 Buffer Cache Reads        610          0
         1 DBWR                       12        373
         1 Data Pump                   0          0
         1 Direct Reads             1162          1
         1 Direct Writes               1        167
         1 LGWR                        1        346
         1 Others                   5215       2116
         1 RMAN                        0          0
         1 Recovery                    0          0
         1 Smart Scan                  0          0
         1 Streams AQ                  1          0
         1 XDB                         0          0
         1                          7002       3003
         2 ARCH                        0          0
         2 Archive Manager             0          0
         2 Buffer Cache Reads        187          0
         2 DBWR                       11        520
         2 Data Pump                   0          0
         2 Direct Reads                6          0
         2 Direct Writes               0          0
         2 LGWR                        1        299
         2 Others                   3898       1030
         2 RMAN                        0          0
         2 Recovery                    0          0
         2 Smart Scan                  0          0
         2 Streams AQ                  1          0
         2 XDB                         0          0
         2                          4104       1849
         3 ARCH                        0          0
         3 Archive Manager             0          0
         3 Buffer Cache Reads        131          0
         3 DBWR                        2         79
         3 Data Pump                   0          0
         3 Direct Reads                0          0
         3 Direct Writes               0          0
         3 LGWR                        0         58
         3 Others                   1140        269
         3 RMAN                        0          0
         3 Recovery                    0          0
         3 Smart Scan                  0          0
         3 Streams AQ                  0          0
         3 XDB                         0          0
         3                          1273        406
           ARCH                        0          0
           Archive Manager             0          0
           Buffer Cache Reads        928          0
           DBWR                       25        972
           Data Pump                   0          0
           Direct Reads             1168          1
           Direct Writes               1        167
           LGWR                        2        703
           Others                  10253       3415
           RMAN                        0          0
           Recovery                    0          0
           Smart Scan                  0          0
           Streams AQ                  2          0
           XDB                         0          0
                                   12379       5258

60 rows selected.
This is a 3-node cluster with not much I/O done yet – especially no Data Pump related I/O. Let's change that:
SQL> create directory dpdir as '+data';

Directory created.

SQL> create directory logdir as '/home/oracle/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@host01 ~]$ mkdir /home/oracle/logdir
[oracle@host01 ~]$ expdp full=y directory=dpdir logfile=logdir:mylog.txt

-- Output not shown --

Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:00:33
The dumpfile was created in the ASM diskgroup DATA here, while the logfile was placed in a conventional directory on the local node. When we look into the diskgroup, we see that an alias was created:
ASMCMD> ls data -al
Type     Redund  Striped  Time  Sys  Name
                                Y    ORCL/
                                Y    cluster01/
                                N    expdat.dmp => +DATA/ORCL/DUMPSET/SYSSYS_EXPORT_FULL_01_74638_1.272.774121815
gv$iostat_function now shows that Data Pump related I/O:
SQL> select inst_id,function_name,
     sum(small_read_megabytes+large_read_megabytes) as read_mb,
     sum(small_write_megabytes+large_write_megabytes) as write_mb
     from gv$iostat_function
     group by cube (inst_id,function_name)
     order by inst_id,function_name;

   INST_ID FUNCTION_NAME         READ_MB   WRITE_MB
---------- ------------------ ---------- ----------
         1 ARCH                        0          0
         1 Archive Manager             0          0
         1 Buffer Cache Reads        770          0
         1 DBWR                       14        425
         1 Data Pump                 795        540
         1 Direct Reads             1194          1
         1 Direct Writes               1        167
         1 LGWR                        1        451
         1 Others                   5297       2131
         1 RMAN                        0          0
         1 Recovery                    0          0
         1 Smart Scan                  0          0
         1 Streams AQ                  1          0
         1 XDB                         0          0
         1                          8073       3715
         2 ARCH                        0          0
         2 Archive Manager             0          0
         2 Buffer Cache Reads        191          0
         2 DBWR                       13        541
         2 Data Pump                   0          0
         2 Direct Reads                6          0
         2 Direct Writes               0          0
         2 LGWR                        1        309
         2 Others                   3955       1044
         2 RMAN                        0          0
         2 Recovery                    0          0
         2 Smart Scan                  0          0
         2 Streams AQ                  1          0
         2 XDB                         0          0
         2                          4167       1894
         3 ARCH                        0          0
         3 Archive Manager             0          0
         3 Buffer Cache Reads        142          0
         3 DBWR                        4         83
         3 Data Pump                   0          0
         3 Direct Reads                0          0
         3 Direct Writes               0          0
         3 LGWR                        0         68
         3 Others                   1233        283
         3 RMAN                        0          0
         3 Recovery                    0          0
         3 Smart Scan                  0          0
         3 Streams AQ                  0          0
         3 XDB                         0          0
         3                          1379        434
           ARCH                        0          0
           Archive Manager             0          0
           Buffer Cache Reads       1103          0
           DBWR                       31       1049
           Data Pump                 795        540
           Direct Reads             1200          1
           Direct Writes               1        167
           LGWR                        2        828
           Others                  10485       3458
           RMAN                        0          0
           Recovery                    0          0
           Smart Scan                  0          0
           Streams AQ                  2          0
           XDB                         0          0
                                   13619       6043

60 rows selected.
In this case, all the Data Pump I/O was done by instance 1. There is also a relation to Exadata, as the function Smart Scan shows 🙂
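If you are interested in just one function across the cluster, the cube is overkill – a simpler sketch, using the same columns as above:

SQL> select inst_id,
            small_read_megabytes  + large_read_megabytes  as read_mb,
            small_write_megabytes + large_write_megabytes as write_mb
     from gv$iostat_function
     where function_name = 'Data Pump'
     order by inst_id;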
Addendum: The Data Pump export fails if the log file is not redirected out of the ASM diskgroup:
[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir
Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:42:37 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Therefore I used a local directory above – which is less elegant than an ACFS-based directory:
SQL> drop directory logdir;

Directory dropped.

SQL> create directory logdir as '/u01/app/grid/acfsmounts/data_myvol/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir logfile=logdir:mylog.txt

Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:06:46 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

-- output not shown --

Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 17:07:26
On Exadata, we may use DBFS instead.
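A sketch of that – the mount point /dbfs/logdir is just an assumption for illustration, use whatever path your DBFS is mounted on:

SQL> -- '/dbfs/logdir' below is a hypothetical DBFS mount point
SQL> drop directory logdir;
SQL> create directory logdir as '/dbfs/logdir';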