Inspired by a thread from the OTN Discussion Forum, I’d like to demonstrate something about the Logminer utility. It has been available as a package since 8i and can still be useful even in 10g and above, when the modern Flashback technology can’t be used to undo a logical mistake.
The following was done on an 11g version but would work identically even on 9i.
SQL> select to_char(sysdate,'hh24:mi:ss') from dual;

TO_CHAR(
--------
10:37:03

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ---------
         1          1         10   52428800          1 YES INACTIVE
         2          1         11   52428800          1 NO  CURRENT
         3          1          9   52428800          1 YES INACTIVE

SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast;

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500

SQL> delete from goodoldpast;

5 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from goodoldpast;

no rows selected
The rows from goodoldpast are definitely gone. But the DML left before images in the undo tablespace that can be used for a Flashback Query since 9i and for a Flashback Table To Timestamp since 10g. Click on the Flashback tag on my blog for some brief explanations and demonstrations of that.
SQL> select * from goodoldpast as of timestamp systimestamp - interval '1' minute;

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500
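Since 10g, the same undo data can also be inspected with a Flashback Versions Query, which shows each retained row version together with the operation that produced it. A minimal sketch against the table above (the VERSIONS pseudocolumns are standard; the one-minute window is just an example, not taken from the session):

```sql
-- Flashback Versions Query: list row versions still kept in undo
-- (after the delete, these would show up with versions_operation = 'D')
select versions_starttime,
       versions_operation,
       department_id,
       department_name
  from goodoldpast
       versions between timestamp systimestamp - interval '1' minute
                    and systimestamp;
```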
Right now, the before images are still present in the undo tablespace. But what if big transactions overwrite them? I have a small undo tablespace with autoextend off on its datafile, containing 10 undo segments. Then I run a parallel DML with 10 parallel processes to make sure every undo segment gets used by this update:
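For reference, a setup like the one described could be created as follows. This is a hedged sketch, since the post does not show it; the datafile path and size are hypothetical:

```sql
-- Hypothetical: small, fixed-size undo tablespace so that large
-- transactions will exhaust it (autoextend off is the key part)
create undo tablespace undotbs2
  datafile '/u01/app/oracle/oradata/orcl/undotbs2_01.dbf'
  size 10m autoextend off;

-- switch the instance to the new undo tablespace
alter system set undo_tablespace = undotbs2;
```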
SQL> connect sh/sh
Connected.
SQL> create table big as select * from sales;

Table created.

SQL> insert into big select * from big;

918843 rows created.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold;
update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P007
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'
Now there can’t be any before images left from the delete of user HR. Consequently, Flashback Query and Flashback Table no longer work.
SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast as of timestamp to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');
ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 22 with name
"_SYSSMU22_1248331648$" too small

SQL> alter table goodoldpast enable row movement;

Table altered.

SQL> flashback table goodoldpast to timestamp to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12801: error signaled in parallel query server P000
ORA-01555: snapshot too old: rollback segment number 22 with name
"_SYSSMU22_1248331648$" too small
The modern Flashback options are impossible here – I don’t have flashback logs either, so even a Flashback Database is out. Do I have to do a time-consuming Database Point-In-Time Recovery now? No! Enter Logminer to save the day:
SQL> connect / as sysdba
Connected.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- --------
1 1 28 52428800 1 YES ACTIVE
2 1 29 52428800 1 YES ACTIVE
3 1 30 52428800 1 NO CURRENT
I am on sequence# 30 meanwhile. In this case I happen to know exactly the sequence# of the archived logfile I need to investigate. That is of course a little unrealistic: in a real-world scenario, finding out when exactly the logical mistake happened would be the hardest part.
SQL> select name from v$archived_log where sequence#=11;

NAME
------------------------------------------------------------
/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc

SQL> begin
  2  dbms_logmnr.add_logfile ('/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc',
  3  dbms_logmnr.new);
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.start_logmnr(options=>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

SQL> select sql_undo from v$logmnr_contents where seg_name='GOODOLDPAST';

SQL_UNDO
--------------------------------------------------
insert into "HR"."GOODOLDPAST" values ('10','Administration','200','1700');
insert into "HR"."GOODOLDPAST" values ('20','Marketing','201','1800');
insert into "HR"."GOODOLDPAST" values ('30','Purchasing','114','1700');
insert into "HR"."GOODOLDPAST" values ('40','Human Resources','203','2400');
insert into "HR"."GOODOLDPAST" values ('50','Shipping','121','1500');
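In practice one would usually restrict the mining query further and close the Logminer session when done. A sketch using the documented package interface (the extra predicates are assumptions, not part of the session above):

```sql
-- narrow the mining result to the relevant schema and operation
select sql_undo
  from v$logmnr_contents
 where seg_owner = 'HR'
   and seg_name  = 'GOODOLDPAST'
   and operation = 'DELETE';

-- release the Logminer session and its resources
exec dbms_logmnr.end_logmnr
```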
I could now run these inserts and easily correct the logical mistake with them. Logminer can still be extremely useful!
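For completeness, the repair itself would be nothing more than running the mined undo statements as HR and committing, roughly like this:

```sql
insert into "HR"."GOODOLDPAST" values ('10','Administration','200','1700');
insert into "HR"."GOODOLDPAST" values ('20','Marketing','201','1800');
insert into "HR"."GOODOLDPAST" values ('30','Purchasing','114','1700');
insert into "HR"."GOODOLDPAST" values ('40','Human Resources','203','2400');
insert into "HR"."GOODOLDPAST" values ('50','Shipping','121','1500');
commit;
```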
#1 by Aman.... on July 22, 2009 - 15:36
And that just proves that “old is indeed gold” 🙂 . Very nice post Uwe!
regards
Aman….
#2 by coskan on July 22, 2009 - 17:05
Clear as sunshine. Thanks for sharing Uwe.
#3 by Noons on July 23, 2009 - 00:56
Good one, Uwe!
#4 by Surachart Opun on July 23, 2009 - 02:58
Excellence!
Thank you
#5 by Anand on July 24, 2009 - 01:07
Nice one!!!!
Easy and simple to understand…
Anand
#6 by Uwe Hesse on July 24, 2009 - 07:07
Dear Aman, Coskan, Noons and Anand,
thank you for your nice feedback! It is very encouraging for me (as a newbie blogger) to see that my postings are valued 🙂
Kind regards
Uwe
#7 by Aman.... on July 24, 2009 - 11:21
Dear Uwe,
At least I don’t come into the category of seasoned bloggers, with very few posts on my blog and very limited knowledge 🙂 . Your blog posts are really good, easy to understand, with reproducible scripts. And that actually is the sign of a good tech post.
regards
Aman….
#8 by Samrat Wani on June 20, 2012 - 14:43
What an explanation! Very clear.
#9 by Uwe Hesse on June 23, 2012 - 17:03
Thank you 🙂