
Logminer: Not as modern as Flashback but still useful!

Inspired by a thread on the OTN Discussion Forum, I’d like to demonstrate something about the Logminer utility. It has been available as a package since 8i and can still be useful even in 10g and above, when the modern Flashback technology can’t be used to fix a logical mistake.

The following was done on an 11g version but would work identically even on 9i.

SQL> select to_char(sysdate,'hh24:mi:ss') from dual;

TO_CHAR(
--------
10:37:03

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- --------
         1          1         10   52428800          1 YES INACTIVE
         2          1         11   52428800          1 NO  CURRENT
         3          1          9   52428800          1 YES INACTIVE

SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast;

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500

SQL> delete from goodoldpast;

5 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from goodoldpast;

no rows selected

The rows from goodoldpast are definitely gone. But the DML left before images in the undo tablespace, which can be used for a Flashback Query since 9i and for a Flashback Table To Timestamp since 10g. If you click on the Flashback tag on my blog, you will find some brief explanations and demonstrations of that.

SQL> select * from goodoldpast as of timestamp systimestamp - interval '1' minute;

DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           10 Administration                        200        1700
           20 Marketing                             201        1800
           30 Purchasing                            114        1700
           40 Human Resources                       203        2400
           50 Shipping                              121        1500

Right now, the before images in the undo tablespace are still present. But what if big transactions overwrite them? I have a small undo tablespace with autoextend turned off on its datafile, and it contains 10 undo segments. I then run a parallel DML with 10 parallel processes to make sure that every undo segment gets used by the update; a sketch of how such a setup could be prepared follows below.
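The tablespace name UNDOTBS2 matches the error message further down, but the datafile path and size here are assumptions for illustration, not the exact values used above:

create undo tablespace undotbs2
  datafile '/u01/oradata/ORCL/undotbs2_01.dbf' size 10m autoextend off;

alter system set undo_tablespace = 'UNDOTBS2';

-- how many undo segments are currently online?
select count(*) from v$rollname;

Now for the big transaction that overwrites the before images: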

SQL> connect sh/sh
Connected.
SQL> create table big as select * from sales;

Table created.

SQL> insert into big select * from big;

918843 rows created.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold;
update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P007
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'

Now there can’t be any more before images from the delete of user HR. Consequently, Flashback Query and Flashback Table no longer work.
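To confirm the pressure on undo, v$undostat keeps statistics for each ten-minute interval, including counters for ORA-01555 errors (ssolderrcnt) and out-of-space errors (nospaceerrcnt). A quick diagnostic sketch:

select begin_time, end_time, ssolderrcnt, nospaceerrcnt
  from v$undostat
 order by begin_time desc;

Back to HR's table: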

SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast as of timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');

ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 22
with name "_SYSSMU22_1248331648$" too small

SQL> alter table goodoldpast enable row movement;

Table altered.

SQL>  flashback table goodoldpast to timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12801: error signaled in parallel query server P000
ORA-01555: snapshot too old: rollback segment number 22
with name "_SYSSMU22_1248331648$" too small

The modern Flashback is impossible here, and I don’t have flashback logs either, so even a Flashback Database is out of the question. Do I have to do a time-consuming Database Point-In-Time Recovery now? No! Enter Logminer to save the day.
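One prerequisite worth noting: the Oracle documentation recommends enabling at least minimal supplemental logging before the redo of interest is generated, so that Logminer can reliably identify the affected rows (this matters especially for chained rows and clustered tables):

alter database add supplemental log data;

Now for the mining session itself: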

SQL> connect / as sysdba
Connected.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- --------
         1          1         28   52428800          1 YES ACTIVE
         2          1         29   52428800          1 YES ACTIVE
         3          1         30   52428800          1 NO  CURRENT

Meanwhile I am on sequence# 30. Incidentally, I know exactly the sequence# of the archived logfile I need to investigate. That is of course a little unrealistic and would be the hardest part of a real-world scenario: finding out when exactly the logical mistake happened.
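If you only knew the approximate time of the mistake, the time range of each archived log can narrow down the candidates; a sketch, using the timestamp noted at the start of the demo:

select sequence#, name, first_time, next_time
  from v$archived_log
 where to_date('2009-07-22 10:37:00','yyyy-mm-dd hh24:mi:ss')
       between first_time and next_time;

Here it is sequence# 11: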

SQL> select name from v$archived_log where sequence#=11;
NAME
------------------------------------------------------------
/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc

SQL> begin
 2  dbms_logmnr.add_logfile
('/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc',
 3  dbms_logmnr.new);
 4  end;
 5  /

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.start_logmnr(options=>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.

SQL> select sql_undo from v$logmnr_contents where seg_name='GOODOLDPAST';

SQL_UNDO
--------------------------------------------------
insert into "HR"."GOODOLDPAST"
 values ('10','Administration','200','1700');
insert into "HR"."GOODOLDPAST"
 values ('20','Marketing','201','1800');
insert into "HR"."GOODOLDPAST"
 values ('30','Purchasing','114','1700');
insert into "HR"."GOODOLDPAST"
 values ('40','Human Resources','203','2400');
insert into "HR"."GOODOLDPAST"
 values ('50','Shipping','121','1500');
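Once the undo statements are extracted, the mining session should be closed again; dbms_logmnr.end_logmnr is the documented counterpart to start_logmnr, and v$logmnr_contents is only accessible while the session is open:

exec dbms_logmnr.end_logmnr;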

I could now run these inserts and easily correct the logical mistake. Logminer can still be extremely useful!
