"Total Recall": A brief introduction to Flashback Data Archive
With Oracle Database 11g, a new option called Total Recall is available. It extends Flashback Query, which has been possible since 9i.
We can now designate particular tables so that all changes to them can be tracked back even after years. This addresses in particular legal requirements to keep the history of certain data for several years. With Total Recall, or rather Flashback Data Archive, this can be achieved quite conveniently and efficiently. To demonstrate it, I create a demo user and give him the DBA role (somewhat "quick & dirty" of course, but it makes the demo easier):
SQL> grant dba to quaid identified by quaid;
Grant succeeded.
SQL> connect quaid/quaid
Connected.
SQL> create tablespace flatbs datafile
'/u01/app/oracle/oradata/orcl/flatbs01.dbf' size 50m;
Tablespace created.
This is just an ordinary tablespace, but I am going to use it to hold the Flashback Archive that I create now:
SQL> create flashback archive fla_10y tablespace flatbs retention 10 year;
Flashback archive created.
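To double-check the archive and its retention, the dictionary views DBA_FLASHBACK_ARCHIVE and DBA_FLASHBACK_ARCHIVE_TS can be queried – a quick sketch (column selection is my choice):

```sql
-- Which Flashback Archives exist, and with what retention?
select flashback_archive_name, retention_in_days, status
  from dba_flashback_archive;

-- Which tablespaces back them, and with what quota?
select flashback_archive_name, tablespace_name, quota_in_mb
  from dba_flashback_archive_ts;
```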
If I assign a table to fla_10y, I will be able to do Flashback Query on that table even after 10 years – provided I have the space to hold that history in the tablespace(s) associated with the Flashback Archive. The following creates two ordinary tables with ordinary rows in them:
SQL> create table norecall (id number, name varchar2(50));
Table created.
SQL> create table totalrecall (id number, name varchar2(50));
Table created.
SQL> insert into norecall values (1,'QUAID');
1 row created.
SQL> insert into totalrecall values (1,'QUAID');
1 row created.
SQL> commit;
Commit complete.
I want to be able to keep the history of the second table for 10 years:
SQL> alter table totalrecall flashback archive fla_10y;
Table altered.
SQL> select to_char(systimestamp,'yyyy-mm-dd:hh24:mi:ss') from dual;
TO_CHAR(SYSTIMESTAM
-------------------
2009-10-21:10:55:47
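Which tables are history-tracked, and into which archive, can be looked up in the dictionary view DBA_FLASHBACK_ARCHIVE_TABLES – a sketch:

```sql
-- List the tables assigned to a Flashback Archive, together with the
-- internal history table that stores their past row versions
select owner_name, table_name, flashback_archive_name, archive_table_name
  from dba_flashback_archive_tables;
```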
I will now do DML on the two tables. As you probably know, this leads to the creation of before images in the undo tablespace, which we can use to look into the past of the tables – but most likely not after 10 years, because the before images in the undo tablespace get overwritten at some point.
SQL> delete from norecall;
1 row deleted.
SQL> commit;
Commit complete.
SQL> select * from norecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
ID NAME
---------- --------------------------------------------------
1 QUAID
SQL> delete from totalrecall;
1 row deleted.
SQL> commit;
Commit complete.
The select above is a Flashback Query, as has been possible since 9i. I will now make sure that there are no before images left in the undo tablespace by creating a new one and dropping the old one.
SQL> create undo tablespace undonew datafile
'/u01/app/oracle/oradata/orcl/undonew01.dbf' size 50m;
Tablespace created.
SQL> alter system set undo_tablespace='UNDONEW';
System altered.
SQL> select * from norecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
ID NAME
---------- --------------------------------------------------
1 QUAID
As long as the old undo tablespace is still there, it may still get used for a conventional Flashback Query – but no longer after the drop tablespace. The before images of the history-tracked table, however, are saved into the Flashback Archive (and compressed during the transfer); the same would apply if the content of the old undo tablespace got overwritten by new before images. That is why I can still do a Flashback Query on totalrecall:
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate
SQL> startup
SQL> drop tablespace undotbs1 including contents and datafiles;
Tablespace dropped.
SQL> connect quaid/quaid
Connected.
SQL> select * from norecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
select * from norecall as of timestamp
*
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number  with name "" too small
SQL> select * from totalrecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
ID NAME
---------- --------------------------------------------------
1 QUAID
I will insert a new row into the ordinary table to demonstrate further differences between it and the history-tracked table:
SQL> insert into norecall values (2,'John Doe');
1 row created.
SQL> commit;
Commit complete.
SQL> select to_char(sysdate,'yyyy-mm-dd:hh24:mi:ss') from dual;
TO_CHAR(SYSDATE,'YY
-------------------
2009-10-23:02:14:28
Up to this point, everything shown is already available in 11g Release 1. What follows is new in 11g Release 2: we can now do DDL statements on history-tracked tables and still do Flashback Query!
SQL> alter table norecall drop column name;
Table altered.
SQL> select * from norecall as of timestamp
  2  to_timestamp('2009-10-23:02:14:28','yyyy-mm-dd:hh24:mi:ss');
select * from norecall as of timestamp
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed
SQL> select to_char(sysdate,'yyyy-mm-dd:hh24:mi:ss') from dual;
TO_CHAR(SYSDATE,'YY
-------------------
2009-10-23:02:17:06
SQL> truncate table norecall;
Table truncated.
SQL> select * from norecall as of timestamp
  2  to_timestamp('2009-10-23:02:17:06','yyyy-mm-dd:hh24:mi:ss');
select * from norecall as of timestamp
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed
The above demonstrates that DDL on an ordinary table makes Flashback Query across that DDL impossible – nothing new there. What is new:
SQL> alter table totalrecall drop column name;
Table altered.
SQL> select * from totalrecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
ID NAME
---------- --------------------------------------------------
1 QUAID
SQL> truncate table totalrecall;
Table truncated.
SQL> select * from totalrecall as of timestamp
to_timestamp('2009-10-21:10:55:47','yyyy-mm-dd:hh24:mi:ss');
ID NAME
---------- --------------------------------------------------
1 QUAID
With 11g Release 2, we can do Flashback Query across DDL on tables in the Flashback Archive! In 11g Release 1, DDL on history-tracked tables was prohibited, which was a show-stopper for many customers. One DDL statement we still cannot get through is, of course:
SQL> drop table totalrecall;
drop table totalrecall
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table
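The drop is only prohibited while the table is assigned to the archive. If the history is really no longer needed, the table can be disassociated first – note that this throws away the tracked history:

```sql
-- Caution: removing the table from the archive discards its history!
alter table totalrecall no flashback archive;

-- Now the drop goes through like on any ordinary table
drop table totalrecall;
```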
Logminer: Not as modern as Flashback but still useful!
Inspired by a thread in the OTN Discussion Forum, I'd like to demonstrate something about the Logminer utility. It has been available as a package since 8i and can still be useful even in 10g and above, when the modern Flashback technology cannot be used to undo a logical mistake.
The following was done on an 11g version but would work identically even on 9i.
SQL> select to_char(sysdate,'hh24:mi:ss') from dual;
TO_CHAR(
--------
10:37:03
SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ---------
         1          1         10   52428800          1 YES INACTIVE
         2          1         11   52428800          1 NO  CURRENT
         3          1          9   52428800          1 YES INACTIVE
SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast;
DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500
SQL> delete from goodoldpast;
5 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from goodoldpast;
no rows selected
The rows from goodoldpast are definitely gone. But the DML left before images in the undo tablespace that can be used for a Flashback Query (since 9i) and for a Flashback Table To Timestamp (since 10g). If you click on the Flashback tag on my blog, you will find some brief explanations and demonstrations of that.
SQL> select * from goodoldpast as of timestamp systimestamp - interval '1' minute;
DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500
Right now, the before images in the undo tablespace are still present. But what if big transactions overwrite them? I have a small undo tablespace with autoextend off on its datafile, containing 10 undo segments. I then run a parallel DML with 10 parallel processes to make sure every undo segment gets used by this update:
SQL> connect sh/sh
Connected.
SQL> create table big as select * from sales;
Table created.
SQL> insert into big select * from big;
918843 rows created.
SQL> commit;
Commit complete.
SQL> alter session enable parallel dml;
Session altered.
SQL> update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold;
update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P007
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'
Now there cannot be any before images left from the delete of user HR. Consequently, Flashback Query and Flashback Table no longer work.
SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast as of timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 22 with name
"_SYSSMU22_1248331648$" too small
SQL> alter table goodoldpast enable row movement;
Table altered.
SQL> flashback table goodoldpast to timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12801: error signaled in parallel query server P000
ORA-01555: snapshot too old: rollback segment number 22 with name
"_SYSSMU22_1248331648$" too small
The modern Flashback is impossible here – and I don't have flashback logs either, so even a Flashback Database is impossible. Do I have to do a time-consuming Database Point-In-Time Recovery now? No! Enter Logminer to save the day:
SQL> connect / as sysdba
Connected.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- --------
1 1 28 52428800 1 YES ACTIVE
2 1 29 52428800 1 YES ACTIVE
3 1 30 52428800 1 NO CURRENT
Meanwhile I am on sequence# 30. Incidentally, I know exactly the sequence# of the archived logfile I need to investigate. That is of course a little unrealistic and would be the hardest part of a real-world scenario: finding out when exactly the logical mistake happened.
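In a real-world case, one would typically narrow down the candidate logfiles by time with a query against v$archived_log, roughly like this (the timestamp here is the one I noted before the delete):

```sql
-- Find the archived logs that cover the time of the logical mistake
select sequence#, name, first_time, next_time
  from v$archived_log
 where to_date('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss')
       between first_time and next_time;
```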
SQL> select name from v$archived_log where sequence#=11;
NAME
------------------------------------------------------------
/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc
SQL> begin
  2  dbms_logmnr.add_logfile ('/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc',
  3  dbms_logmnr.new);
  4  end;
  5  /
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options=>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
SQL> select sql_undo from v$logmnr_contents where seg_name='GOODOLDPAST';
SQL_UNDO
--------------------------------------------------
insert into "HR"."GOODOLDPAST" values ('10','Administration','200','1700');
insert into "HR"."GOODOLDPAST" values ('20','Marketing','201','1800');
insert into "HR"."GOODOLDPAST" values ('30','Purchasing','114','1700');
insert into "HR"."GOODOLDPAST" values ('40','Human Resources','203','2400');
insert into "HR"."GOODOLDPAST" values ('50','Shipping','121','1500');
I could now run these inserts and easily correct the logical mistake with them. Logminer can still be extremely useful!
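When done, the Logminer session should be closed to release its resources; filtering on seg_owner as well keeps the result small on busy systems – a sketch:

```sql
-- Optionally filter by owner in addition to segment name
select sql_undo
  from v$logmnr_contents
 where seg_owner='HR' and seg_name='GOODOLDPAST';

-- Release the resources held by the Logminer session
exec dbms_logmnr.end_logmnr
```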
Retrieve old versions of procedures with Flashback Query
Each stored procedure (or trigger, package, etc.) is reflected in DBA_SOURCE, of course. Now imagine you modified or even dropped a procedure without having the code at hand to recreate it properly. In this case, you may save the day with a Flashback Query (possible since 9i) against DBA_SOURCE, like in this example:
SQL> select text from dba_source where name='UPD_SALES' order by line;
TEXT
--------------------------------------------------------------------------------
procedure upd_sales as
begin
for i in 1..10000 loop
update sales set amount_sold=amount_sold*1 where rownum<2;
commit;
end loop;
end;
7 rows selected.
SQL> drop procedure upd_sales;
Procedure dropped.
SQL> connect / as sysdba
Connected.
SQL> select text from dba_source
as of timestamp systimestamp - interval '5' minute
where name='UPD_SALES' order by line;
TEXT
--------------------------------------------------------------------------------
procedure upd_sales as
begin
for i in 1..10000 loop
update sales set amount_sold=amount_sold*1 where rownum<2;
commit;
end loop;
end;
7 rows selected.
I could spool the above output into a text file and use it afterwards to recreate my procedure as it was 5 minutes ago! By the way, I know that the commit inside the loop is not a good idea here 🙂
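A sketch of that spool, with a few SQL*Plus settings so the output file stays clean (the file name is my choice; the leading "create or replace" must be added because DBA_SOURCE stores the text without it):

```sql
set heading off feedback off pagesize 0 trimspool on
spool upd_sales.sql
-- DBA_SOURCE text begins with "procedure upd_sales as", so prepend the DDL keywords
select 'create or replace' from dual;
select text from dba_source
  as of timestamp systimestamp - interval '5' minute
 where name='UPD_SALES' order by line;
select '/' from dual;
spool off
```

Afterwards, running upd_sales.sql recreates the procedure as it was before the drop.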