Record DDL Statements in DBA_LOGSTDBY_EVENTS & alert.log
This week, I am teaching another 11g Data Guard course in Kista (Sweden, near Stockholm). Our education center is there in a quiet area (Kronborgsgränd 17, 164 28 Kista), together with other buildings rented mostly by IT companies.
One nice little feature I came across during that course is the possibility to get DDL statements recorded in the alert.log file and in the DBA_LOGSTDBY_EVENTS view. If I recall correctly, that was even the default with 10g Data Guard Logical Standby, but in 11g, you have to do the following at the Logical Standby:
SQL> exec dbms_logstdby.apply_set('RECORD_APPLIED_DDL','TRUE')
PL/SQL procedure successfully completed.
SQL> exec dbms_logstdby.apply_set('EVENT_LOG_DEST','DEST_ALL')
PL/SQL procedure successfully completed.

SQL> connect scott/tiger@prima
Connected.
SQL> create table test as select * from dept;

Table created.
The second exec leads to the additional recording of the DDL in the alert.log file; without it, the DDL would only be visible in DBA_LOGSTDBY_EVENTS. The following is from the alert.log afterwards:
Wed Jun 17 08:18:11 2009
LOGSTDBY: APPLY_SET: RECORD_APPLIED_DDL changed to TRUE
Wed Jun 17 08:18:43 2009
LOGSTDBY: APPLY_SET: EVENT_LOG_DEST changed to DEST_ALL
Wed Jun 17 08:18:43 2009
LOGSTDBY status: EVENT_LOG_DEST changed to DEST_ALL
Wed Jun 17 08:19:17 2009
LOGSTDBY stmt: create table test as select * from dept
LOGSTDBY status: ORA-16204: DDL successfully applied
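At the Logical Standby, the recorded DDL can then also be retrieved from the view directly. A minimal sketch (the column formatting is just for readability; the view additionally offers SCN and transaction identifier columns):

```
SQL> column event_time format a20
SQL> column status format a40
SQL> select event_time, status, event from dba_logstdby_events order by event_time;
```

The EVENT column contains the applied statement, while STATUS shows the outcome, like the ORA-16204 above.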
Restrict the usage of new execution plans with SQL Plan Baselines
We have several new features in the 11g version related to change management. One of them is SQL Plan Baselines. They support a very conservative approach regarding the usage of new execution plans computed by the optimizer. In other words: You suspect that after a change, the newly computed execution plans may be bad and lead to performance degradation.
Before the introduction of SQL Plan Baselines, you were able to fix the 'good old execution plans' with hints or with stored outlines. Both have the disadvantage that they make it impossible for the optimizer to come up with even better execution plans than the 'good old' ones. With SQL Plan Baselines, you let the optimizer compute the new plans and store them, with the possibility to test them later on and allow them for usage if they have proved to be good. That process is called evolving.
I have developed a little scenario to demonstrate that. First we see a statement with the 'good old' execution plan of TABLE ACCESS BY INDEX ROWID:
SQL> select channel_id,count(*) from sales group by channel_id;
CHANNEL_ID COUNT(*)
---------- ----------
1 1
2 2064200
4 947328
3 4322623
9 16592
SQL> create bitmap index sales_bix on sales (channel_id) nologging;
Index created.
SQL> begin
dbms_stats.gather_table_stats('ADAM','SALES',cascade=>true,
method_opt=>'FOR ALL INDEXED COLUMNS SIZE 5');
end;
/
SQL> select sum(amount_sold) from sales where channel_id=1;
SUM(AMOUNT_SOLD)
----------------
1232.16
Execution Plan
----------------------------------------------------------
Plan hash value: 3388944652
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 1107 (0)| 00:00:14 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | TABLE ACCESS BY INDEX ROWID | SALES | 5774 | 46192 | 1107 (0)| 00:00:14 |
| 3 | BITMAP CONVERSION TO ROWIDS| | | | | |
|* 4 | BITMAP INDEX SINGLE VALUE | SALES_BIX | | | | |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("CHANNEL_ID"=1)
Now I want to create a SQL Plan Baseline for that statement with that execution plan. One possibility to create it is the following:
SQL> show parameter baseline

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE

SQL> select * from dba_sql_plan_baselines;

no rows selected

SQL> alter session set optimizer_capture_sql_plan_baselines=true;

Session altered.
The following (repeated) statements in this session now get recorded together with their execution plans:
SQL> select sum(amount_sold) from sales where channel_id=1;

SUM(AMOUNT_SOLD)
----------------
1232.16

SQL> select sum(amount_sold) from sales where channel_id=1;

SUM(AMOUNT_SOLD)
----------------
1232.16

SQL> alter session set optimizer_capture_sql_plan_baselines=false;

Session altered.
The following shows that one statement with one plan was recorded. This one plan
(the index access) is the only accepted one.
SQL> column sql_text format a29
SQL> select SQL_TEXT,SQL_HANDLE,PLAN_NAME,ENABLED,ACCEPTED,FIXED from dba_sql_plan_baselines;
SQL_TEXT SQL_HANDLE PLAN_NAME ENA ACC FIX
----------------------------- ------------------------------ ------------------------------ --- --- ---
select sum(amount_sold) from SYS_SQL_ec28978ecd0909c3 SYS_SQL_PLAN_cd0909c333c3099a YES YES NO
sales where channel_id=1
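By the way, automatic capture is not the only way to get a plan into a baseline. Plans can also be loaded manually, for example from the cursor cache with DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE. A sketch, under the assumption that the statement is still in the shared pool; the SQL_ID shown is hypothetical and would have to be looked up in V$SQL first:

```
SQL> select sql_id, sql_text from v$sql
     where sql_text like 'select sum(amount_sold)%';

SQL> declare
       numplans pls_integer;
     begin
       numplans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
                     sql_id => 'abcd1234efgh5');  -- hypothetical SQL_ID from V$SQL
       dbms_output.put_line(numplans || ' plan(s) loaded');
     end;
     /
```

Plans loaded this way are accepted right away, which is useful when you already trust them.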
Now we change the conditions, so that a Full Table Scan becomes attractive to the optimizer:
SQL> alter table sales nologging;
SQL> insert /*+ append */ into sales select PROD_ID,CUST_ID,TIME_ID,1,PROMO_ID,QUANTITY_SOLD,AMOUNT_SOLD from sales;
7350744 rows created.
SQL> commit;
Commit complete.
SQL> delete from sales where channel_id=3;
4322623 rows deleted.
SQL> commit;
Commit complete.
SQL> alter table sales enable row movement;
Table altered.
SQL> alter table sales shrink space cascade;
Table altered.
SQL> begin dbms_stats.gather_table_stats('ADAM','SALES',cascade=>true,
method_opt=>'FOR ALL INDEXED COLUMNS SIZE 5');
end;
/
PL/SQL procedure successfully completed.
The majority of rows now has the channel_id 1. The optimizer would prefer a Full Table Scan now, but the baseline restricts it to the usage of the 'good old' index access:
SQL> set autotrace on explain
SQL> select sum(amount_sold) from sales where channel_id=1;
SUM(AMOUNT_SOLD)
----------------
785647882
Execution Plan
----------------------------------------------------------
Plan hash value: 3388944652
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 59594 (1)| 00:11:56 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | TABLE ACCESS BY INDEX ROWID | SALES | 7340K| 56M| 59594 (1)| 00:11:56 |
| 3 | BITMAP CONVERSION TO ROWIDS| | | | | |
|* 4 | BITMAP INDEX SINGLE VALUE | SALES_BIX | | | | |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("CHANNEL_ID"=1)
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_cd0909c333c3099a" used for this statement
The new plan, however, was recorded, but not accepted:
SQL> select SQL_TEXT,SQL_HANDLE,PLAN_NAME,ENABLED,ACCEPTED,FIXED from dba_sql_plan_baselines;

SQL_TEXT SQL_HANDLE PLAN_NAME ENA ACC FIX
----------------------------- ------------------------------ ------------------------------ --- --- ---
select sum(amount_sold) from SYS_SQL_ec28978ecd0909c3 SYS_SQL_PLAN_cd0909c333c3099a YES YES NO
sales where channel_id=1

select sum(amount_sold) from SYS_SQL_ec28978ecd0909c3 SYS_SQL_PLAN_cd0909c335032dee YES NO NO
sales where channel_id=1
Evolving has to prove a strong improvement compared to the old
plan in order to get the new plan accepted for usage by the optimizer:
set serverout on
DECLARE
report clob;
BEGIN
report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE (
sql_handle => 'SYS_SQL_ec28978ecd0909c3',
plan_name=>'SYS_SQL_PLAN_cd0909c335032dee' );
dbms_output.put_line(report);
END;
/

-------------------------------------------------------------------------------
Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
SQL_HANDLE = SYS_SQL_ec28978ecd0909c3
PLAN_NAME  = SYS_SQL_PLAN_cd0909c335032dee
TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
VERIFY     = YES
COMMIT     = YES

Plan: SYS_SQL_PLAN_cd0909c335032dee
-----------------------------------
Plan was verified: Time used 14.7 seconds.
Failed performance criterion: Compound improvement ratio <= 1.01.

                  Baseline Plan  Test Plan  Improv. Ratio
                  -------------  ---------  -------------
Execution Status:      COMPLETE   COMPLETE
Rows Processed:               1          1
Elapsed Time(ms):         10883       3750            2.9
CPU Time(ms):              9697       2959           3.28
Buffer Gets:              49997      50091              1
Disk Reads:               46911      47217            .99
Direct Writes:                0          0
Fetches:                  46911       1089          43.08
Executions:                   1          1

-------------------------------------------------------------------------------
Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 0.

PL/SQL procedure successfully completed.
Although an improvement is visible, it is not big enough to lead to acceptance of the
new plan. This SQL Plan Baseline approach is really conservative.
But we can force the acceptance:
set serverout on
DECLARE
report clob;
BEGIN
report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE (
sql_handle => 'SYS_SQL_ec28978ecd0909c3',
plan_name=>'SYS_SQL_PLAN_cd0909c335032dee',
verify=>'NO' );
dbms_output.put_line(report);
END;
/

-------------------------------------------------------------------------------
Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
SQL_HANDLE = SYS_SQL_ec28978ecd0909c3
PLAN_NAME  = SYS_SQL_PLAN_cd0909c335032dee
TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
VERIFY     = NO
COMMIT     = YES

Plan: SYS_SQL_PLAN_cd0909c335032dee
-----------------------------------
Plan was changed to an accepted plan.

-------------------------------------------------------------------------------
Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 0.
Number of SQL plan baselines evolved: 1.

PL/SQL procedure successfully completed.
Now the optimizer is free to choose this accepted plan – or the other accepted ones.
SQL> select SQL_TEXT,SQL_HANDLE,PLAN_NAME,ENABLED,ACCEPTED,FIXED from dba_sql_plan_baselines;
SQL_TEXT SQL_HANDLE PLAN_NAME ENA ACC FIX
----------------------------- ------------------------------ ------------------------------ --- --- ---
select sum(amount_sold) from SYS_SQL_ec28978ecd0909c3 SYS_SQL_PLAN_cd0909c333c3099a YES YES NO
sales where channel_id=1
select sum(amount_sold) from SYS_SQL_ec28978ecd0909c3 SYS_SQL_PLAN_cd0909c335032dee YES YES NO
sales where channel_id=1
SQL> set autotrace on explain
SQL> select sum(amount_sold) from sales where channel_id=1;
SUM(AMOUNT_SOLD)
----------------
785647882
Execution Plan
----------------------------------------------------------
Plan hash value: 1047182207
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 13795 (2)| 00:02:46 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| SALES | 7340K| 56M| 13795 (2)| 00:02:46 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CHANNEL_ID"=1)
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_cd0909c335032dee" used for this statement
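The plans stored under a handle can also be displayed without executing the statement, using the table function DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE. A sketch with the handle from above:

```
SQL> select * from table(
       DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE (
         sql_handle => 'SYS_SQL_ec28978ecd0909c3',
         format     => 'basic'));
```

This lists both the accepted index plan and the accepted full scan plan for that handle.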
We can even fix one execution plan – for example for runtime comparison.
First we demand index access:
declare
numplans pls_integer;
begin
numplans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
sql_handle=>'SYS_SQL_ec28978ecd0909c3',
plan_name=>'SYS_SQL_PLAN_cd0909c333c3099a',
attribute_name=>'FIXED',
attribute_value=>'YES');
dbms_output.put_line(numplans);
end;
/

SQL> set autotrace on explain
SQL> select sum(amount_sold) from sales where channel_id=1;

SUM(AMOUNT_SOLD)
----------------
785647882

Execution Plan
----------------------------------------------------------
Plan hash value: 3388944652

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 59594 (1)| 00:11:56 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | TABLE ACCESS BY INDEX ROWID | SALES | 7340K| 56M| 59594 (1)| 00:11:56 |
| 3 | BITMAP CONVERSION TO ROWIDS| | | | | |
|* 4 | BITMAP INDEX SINGLE VALUE | SALES_BIX | | | | |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("CHANNEL_ID"=1)

Note
-----
- SQL plan baseline "SYS_SQL_PLAN_cd0909c333c3099a" used for this statement

SQL> set autotrace off
SQL> set timing on
SQL> select sum(amount_sold) from sales where channel_id=1;

SUM(AMOUNT_SOLD)
----------------
785647882

Elapsed: 00:00:07.08
Now we demand Full Table Scan:
SQL> declare
numplans pls_integer;
begin
numplans :=
DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
sql_handle=>'SYS_SQL_ec28978ecd0909c3',
plan_name=>'SYS_SQL_PLAN_cd0909c333c3099a',
attribute_name=>'FIXED',
attribute_value=>'NO');
dbms_output.put_line(numplans);
end;
/
1
PL/SQL procedure successfully completed.
declare
numplans pls_integer;
begin
numplans:=
DBMS_SPM.ALTER_SQL_PLAN_BASELINE (
sql_handle=>'SYS_SQL_ec28978ecd0909c3',
plan_name=>'SYS_SQL_PLAN_cd0909c335032dee',
attribute_name=>'FIXED',
attribute_value=>'YES');
dbms_output.put_line(numplans);
end;
/
SQL> select sum(amount_sold) from sales where channel_id=1;
SUM(AMOUNT_SOLD)
----------------
785647882
Elapsed: 00:00:02.04
As you see, the new execution plan is really faster in this case, so it was the right decision to allow its usage. And we first saw proof of that during the evolve process. So we did not trust the optimizer blindly. We had strong control over the change process!
Addendum: Please see this excellent series of articles regarding SQL Plan Baselines from the Optimizer Development Group:
http://optimizermagic.blogspot.com/2009/01/plan-regressions-got-you-down-sql-plan.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-2-of-4-spm.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-3-of-4.html
http://optimizermagic.blogspot.com/2009/02/sql-plan-management-part-4-of-4-user.html
Second Addendum: Migrate Stored Outlines to SQL Plan Baselines – Why & How
http://blogs.oracle.com/optimizer/entry/how_do_i_migrate_stored
Incremental Backups with Block Change Tracking & Recover Copy
I am in Berlin this week for an Administration Workshop II course. Unfortunately, it’s raining right now, so it would be no fun to stroll around the capital. Instead, there is time now for me to post something 🙂 Two major areas of that course are Performance Tuning and Backup & Recovery. One interesting feature that we discuss from the latter area is the ability to drastically speed up incremental backups (compared to versions before 10g) and to use these incremental backups to roll forward level 0 image copies. Let’s look at an example:
SQL> alter database enable block change tracking using file '/home/oracle/bc.ora';
This command brings up the background process Change Tracking Writer (CTWR), which maintains a change tracking file. This file will contain the database block addresses of blocks modified after the next level 0 full backup. Because of this file, RMAN no longer has to scan all production datafiles and compare their blocks with the blocks of the level 0 backup in order to determine which blocks have changed. That was a very time-consuming process before 10g. Now we will invoke RMAN with the following command:
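Whether change tracking is really enabled (and how big the tracking file has become) can be checked in the view V$BLOCK_CHANGE_TRACKING; a quick sketch:

```
SQL> select status, filename, bytes from v$block_change_tracking;
```

STATUS should show ENABLED here, with FILENAME pointing at the file given above.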
RMAN> run {
recover copy of database with tag 'backrec';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'backrec' database;
}
Starting recover at 04-MAY-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=127 devtype=DISK
no copy of datafile 1 found to recover
no copy of datafile 2 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
Finished recover at 04-MAY-09
Starting backup at 04-MAY-09
using channel ORA_DISK_1
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 5 found
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 6 found
no parent backup or copy of datafile 4 found
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_system_4zy6pnsg_.dbf tag=BACKREC recid=14 stamp=685996946
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_sysaux_4zy6qqvp_.dbf tag=BACKREC recid=15 stamp=685996969
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile fno=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_example_4zy6rk0w_.dbf tag=BACKREC recid=16 stamp=685996982
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_undotbs1_4zy6rr46_.dbf tag=BACKREC recid=17 stamp=685996987
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile fno=00006 name=/u01/app/oracle/oradata/orcl/users02.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_users_4zy6rv92_.dbf tag=BACKREC recid=18 stamp=685996987
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
output filename=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_users_4zy6rwc5_.dbf tag=BACKREC recid=19 stamp=685996988
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 04-MAY-09
Starting Control File and SPFILE Autobackup at 04-MAY-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/autobackup/2009_05_04/o1_mf_s_685996989_4zy6rxvx_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 04-MAY-09
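If you want to double-check, the image copies just created can be listed from within RMAN, for instance simply with:

```
RMAN> list copy of database;
```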
If you look at the output above, you will basically see that we did a level 0 full backup with image copies of all the datafiles. Now let’s modify some blocks in the database:
SQL> update sh.sales set amount_sold=amount_sold*1 where rownum<10000;
9999 rows updated.
SQL> commit;
Commit complete.
Now we run the same RMAN command again:
RMAN> run {
recover copy of database with tag 'backrec';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'backrec' database;
}
Starting recover at 04-MAY-09
using channel ORA_DISK_1
no copy of datafile 1 found to recover
no copy of datafile 2 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
Finished recover at 04-MAY-09

Starting backup at 04-MAY-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile fno=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile fno=00002 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile fno=00006 name=/u01/app/oracle/oradata/orcl/users02.dbf
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 04-MAY-09
channel ORA_DISK_1: finished piece 1 at 04-MAY-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_05_04/o1_mf_nnnd1_TAG20090504T185902_4zy7ppmg_.bkp tag=TAG20090504T185902 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 04-MAY-09

Starting Control File and SPFILE Autobackup at 04-MAY-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/autobackup/2009_05_04/o1_mf_s_685997943_4zy7pqr4_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 04-MAY-09
Please notice the very short time it took RMAN to back up the (few) modified blocks during the incremental backup. Although this database is of course tiny, that would have taken much longer without block change tracking. The effect is far more dramatic with databases of realistic sizes. Again we modify some blocks as before. From the third run of the command onward, we will always see the following behavior:
RMAN> run {
recover copy of database with tag 'backrec';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'backrec' database;
}
Starting recover at 04-MAY-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy fno=00001 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_system_4zy6pnsg_.dbf
recovering datafile copy fno=00002 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_undotbs1_4zy6rr46_.dbf
recovering datafile copy fno=00003 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_sysaux_4zy6qqvp_.dbf
recovering datafile copy fno=00004 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_users_4zy6rwc5_.dbf
recovering datafile copy fno=00005 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_example_4zy6rk0w_.dbf
recovering datafile copy fno=00006 name=/u01/app/oracle/flash_recovery_area/ORCL/datafile/o1_mf_users_4zy6rv92_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_05_04/o1_mf_nnnd1_TAG20090504T185902_4zy7ppmg_.bkp
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_05_04/o1_mf_nnnd1_TAG20090504T185902_4zy7ppmg_.bkp tag=TAG20090504T185902
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 04-MAY-09
Starting backup at 04-MAY-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile fno=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile fno=00002 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile fno=00006 name=/u01/app/oracle/oradata/orcl/users02.dbf
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 04-MAY-09
channel ORA_DISK_1: finished piece 1 at 04-MAY-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_05_04/o1_mf_nnnd1_TAG20090504T190410_4zy80cd8_.bkp tag=TAG20090504T190410 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-MAY-09
Starting Control File and SPFILE Autobackup at 04-MAY-09
piece handle=/u01/app/oracle/flash_recovery_area/ORCL/autobackup/2009_05_04/o1_mf_s_685998253_4zy80g2q_.bkp comment=NONE
First, RMAN rolls forward the image copies of the datafiles in the recover copy part of the command. Then again a very fast incremental backup is done. In this fashion, we always get a quite fast backup and keep our backup very close (in terms of time) to the production datafiles, so that restore & recovery can also be done fairly fast. This picture illustrates the Recovery Area as well as the Recommended Backup Strategy that you have seen at work in this posting:
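The payoff of this strategy shows in case of a failure: because up-to-date image copies sit in the Recovery Area, we do not have to restore at all but can switch to the copies directly. A sketch of that recovery path, assuming the whole database is damaged but the instance can still mount:

```
RMAN> shutdown immediate
RMAN> startup mount
RMAN> switch database to copy;
RMAN> recover database;
RMAN> alter database open;
```

The switch merely updates the control file to point at the copies, so the usually long restore phase is skipped entirely.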
Conclusion: Incremental backups can be taken very fast with the 10g New Feature Block Change Tracking. Together with image copies kept current via recover copy, this is the Recommended Backup Strategy for Oracle Databases. Is there any reason why it is not implemented at your site?
