Archive for the category TOI

Logminer: Not as modern as Flashback but still useful!

Inspired by a thread from the OTN Discussion Forum, I'd like to demonstrate something about the Logminer utility. It has been available as a package since 8i and can still be useful even in 10g and above, whenever the modern Flashback technology can't be used to resolve a logical mistake.

The following was done on an 11g version but would work identically even on 9i.
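
As a general aside that is not part of the demo below: Oracle recommends enabling at least minimal supplemental logging before the relevant redo is generated, so that Logminer can reconstruct complete SQL statements. That would look like this:

SQL> alter database add supplemental log data;

Database altered.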

SQL> select to_char(sysdate,'hh24:mi:ss') from dual;

TO_CHAR(
--------
10:37:03

SQL> select * from v$log;

 GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS    
---------- ---------- ---------- ---------- ---------- --- ---------
 1          1         10   52428800          1 YES INACTIVE  
 2          1         11   52428800          1 NO  CURRENT   
 3          1          9   52428800          1 YES INACTIVE  

SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast;

DEPARTMENT_ID DEPARTMENT_NAME          MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
 10 Administration                        200        1700
 20 Marketing                             201        1800
 30 Purchasing                            114        1700
 40 Human Resources                       203        2400
 50 Shipping                              121        1500

SQL> delete from goodoldpast;

5 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from goodoldpast;

no rows selected

The rows from goodoldpast are definitely gone. But the DML left before images in the undo tablespace that can be used for a Flashback Query (since 9i) and for a Flashback Table To Timestamp (since 10g). If you click on the Flashback tag on my blog, you will find some brief explanations and demonstrations of that.

SQL> select * from goodoldpast as of timestamp systimestamp - interval '1' minute;

DEPARTMENT_ID DEPARTMENT_NAME           MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
 10 Administration                        200        1700
 20 Marketing                             201        1800
 30 Purchasing                            114        1700
 40 Human Resources                       203        2400
 50 Shipping                              121        1500

Right now, the before images in the undo tablespace are still present. But what if big transactions overwrite them? I have a small undo tablespace whose datafile has autoextend turned off, and it contains 10 undo segments. I will now run a parallel DML statement with 10 parallel processes to make sure that every undo segment gets used by the update:

SQL> connect sh/sh
Connected.
SQL> create table big as select * from sales;

Table created.

SQL> insert into big select * from big;

918843 rows created.

SQL> commit;

Commit complete.

SQL> alter session enable parallel dml;

Session altered.

SQL> update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold;
update /*+ parallel (sales,10) */ sales set amount_sold=amount_sold
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P007
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'
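
By the way, the setup described above can be verified with queries along these lines (UNDOTBS2 being the undo tablespace name taken from the error message):

SQL> select tablespace_name, autoextensible from dba_data_files where tablespace_name='UNDOTBS2';

SQL> select count(*) from dba_rollback_segs where tablespace_name='UNDOTBS2';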

Now there can't be any more before images left from the delete by user HR. Consequently, Flashback Query and Flashback Table no longer work.

SQL> connect hr/hr
Connected.
SQL> select * from goodoldpast as of timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');

ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 22
with name "_SYSSMU22_1248331648$" too small

SQL> alter table goodoldpast enable row movement;

Table altered.

SQL>  flashback table goodoldpast to timestamp
to_timestamp('2009-07-22:10:37:00','yyyy-mm-dd:hh24:mi:ss');

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12801: error signaled in parallel query server P000
ORA-01555: snapshot too old: rollback segment number 22
with name "_SYSSMU22_1248331648$" too small

The modern Flashback is impossible here. I don't have flashback logs either, so even a Flashback Database is out of the question. Do I have to do a time-consuming Database Point In Time Recovery now? No! Enter Logminer to save the day:

SQL> connect / as sysdba
Connected.
SQL> select * from v$log;

 GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- --------
 1          1         28   52428800          1 YES ACTIVE
 2          1         29   52428800          1 YES ACTIVE
 3          1         30   52428800          1 NO  CURRENT

Meanwhile, I am on sequence# 30. Incidentally, I know exactly the sequence# of the archived logfile I need to investigate. That is of course a little unrealistic and would be the hardest part of a real-world scenario: finding out when exactly the logical mistake happened.
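
If only the approximate time of the mistake were known, a time-based lookup against v$archived_log could narrow down the candidate logfile; a sketch along these lines (with the timestamp as a placeholder):

SQL> select sequence#, name from v$archived_log
     where first_time <= to_date('2009-07-22:10:38:00','yyyy-mm-dd:hh24:mi:ss')
       and next_time  >  to_date('2009-07-22:10:38:00','yyyy-mm-dd:hh24:mi:ss');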

SQL> select name from v$archived_log where sequence#=11;
NAME
------------------------------------------------------------
/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc

SQL> begin
 2  dbms_logmnr.add_logfile
('/u01/fra/ORCL/archivelog/2009_07_22/o1_mf_1_11_56fn039g_.arc',
 3  dbms_logmnr.new);
 4  end;
 5  /

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.start_logmnr(options=>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.

SQL> select sql_undo from v$logmnr_contents where seg_name='GOODOLDPAST';

SQL_UNDO
--------------------------------------------------
insert into "HR"."GOODOLDPAST"
 values ('10','Administration','200','1700');
insert into "HR"."GOODOLDPAST"
 values ('20','Marketing','201','1800');
insert into "HR"."GOODOLDPAST"
 values ('30','Purchasing','114','1700');
insert into "HR"."GOODOLDPAST"
 values ('40','Human Resources','203','2400');
insert into "HR"."GOODOLDPAST"
 values ('50','Shipping','121','1500');

I could now run these inserts and easily correct the logical mistake. Logminer can still be extremely useful!
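
One detail I should add: once the undo statements are extracted, the Logminer session should be closed again, which is done like this:

SQL> exec dbms_logmnr.end_logmnr

PL/SQL procedure successfully completed.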


Brief introduction to Materialized Views

Materialized Views can speed up queries (even dramatically) while remaining transparent to those queries, much as indexes and partitioned tables do. That means we do not have to modify our queries in order to benefit from these structures. Unlike an ordinary view, which is only a stored select statement that runs whenever we use the view, a materialized view stores the result set of its select statement in a container table. Let me demonstrate the benefit of that. I have a (for my little database) relatively large table and run an aggregation query on it:

SQL> desc sales
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PROD_ID                                   NOT NULL NUMBER
 CUST_ID                                   NOT NULL NUMBER
 TIME_ID                                   NOT NULL DATE
 CHANNEL_ID                                NOT NULL NUMBER
 PROMO_ID                                  NOT NULL NUMBER
 QUANTITY_SOLD                             NOT NULL NUMBER(10,2)
 AMOUNT_SOLD                               NOT NULL NUMBER(10,2)

SQL> select count(*) from sales;

 COUNT(*)
----------
 7350744

SQL> select bytes/1024/1024 as mb from user_segments where segment_name='SALES';

 MB
----------
 286

SQL> show sga

Total System Global Area  373293056 bytes
Fixed Size                  1219496 bytes
Variable Size             134218840 bytes
Database Buffers          234881024 bytes
Redo Buffers                2973696 bytes

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Elapsed: 00:00:04.51

As you can see, the aggregation takes around 5 seconds. Now I will speed that up using a Materialized View:

SQL> create materialized view mv1 enable query rewrite
 2  as select channel_id,sum(amount_sold) from sales group by channel_id;

Materialized view created.

Elapsed: 00:00:05.69
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Elapsed: 00:00:00.01

The very same statement now takes far less time! Why is that?

SQL> set autotrace on explain
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Execution Plan
----------------------------------------------------------
Plan hash value: 2958490228
--------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     4 |   104 | 3   (0)| 00:00:01
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV1  |     4 |   104 | 3   (0)| 00:00:01
--------------------------------------------------------------------------------

Because the optimizer transparently rewrote the query against the Materialized View! But what if we now change the content of the base table SALES?

SQL> update sales set amount_sold=amount_sold+1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.

The materialized view is now stale and will no longer be used for query rewrite (as the query runtime below already shows):

SQL> select mview_name,staleness from user_mviews;

MVIEW_NAME                     STALENESS
------------------------------ -------------------
MV1                            NEEDS_COMPILE

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002086
 9       2219410.08

Elapsed: 00:00:04.52
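
As an aside, whether stale materialized views are considered for rewrite at all is governed by instance parameters: with the default QUERY_REWRITE_INTEGRITY=enforced, a stale MV is skipped. You can check the settings like this (the values shown are the defaults I assume on my system):

SQL> show parameter query_rewrite

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced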

In order to get statements rewritten against the materialized view again, we must refresh it by some method.

One method is on demand with a procedure call like in this example:

SQL> exec dbms_mview.refresh(list=>'MV1',method=>'C')

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.62

SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002086
 9       2219410.08

Elapsed: 00:00:00.01

Now MV1 can be used again, as the runtime shows. That was a complete refresh, which can take a long time. We would like to get this done faster.

One convenient way to get a fast refreshable materialized view is to use the DBMS_ADVISOR package:

SQL> vari t varchar2(50)
SQL> begin
 2   dbms_advisor.tune_mview(task_name=>:t,
 3                           mv_create_stmt=>'create materialized view mv1'
 4                               || ' refresh fast as'
 5                               || ' select channel_id,sum(amount_sold)'
 6                               || ' from sales group by channel_id');
 7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> set long 5000
SQL> select statement from user_tune_mview where task_name=:t order by action_id;

STATEMENT
--------------------------------------------------------------------------------
CREATE MATERIALIZED VIEW LOG ON "ADAM"."SALES" WITH ROWID, SEQUENCE ("CHANNEL_ID
","AMOUNT_SOLD")  INCLUDING NEW VALUES

ALTER MATERIALIZED VIEW LOG FORCE ON "ADAM"."SALES" ADD ROWID, SEQUENCE ("CHANNE
L_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES

CREATE MATERIALIZED VIEW ADAM.MV1   REFRESH FAST WITH ROWID DISABLE QUERY REWRIT
E AS SELECT ADAM.SALES.CHANNEL_ID C1, SUM("ADAM"."SALES"."AMOUNT_SOLD") M1, COUN
T("ADAM"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM ADAM.SALES GROUP BY ADAM.SA
LES.CHANNEL_ID

STATEMENT
--------------------------------------------------------------------------------
DROP MATERIALIZED VIEW ADAM.MV1

SQL> exec dbms_advisor.delete_task(:t)
PL/SQL procedure successfully completed.

Usually, we need Materialized View Logs on the base tables and special expressions in the materialized view query to be able to do a fast refresh.
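
Before creating such a materialized view, you can also ask Oracle directly whether a given query would be fast refreshable, using DBMS_MVIEW.EXPLAIN_MVIEW. A sketch, assuming the MV_CAPABILITIES_TABLE has first been created with utlxmv.sql:

SQL> @?/rdbms/admin/utlxmv.sql

Table created.

SQL> exec dbms_mview.explain_mview('select channel_id,sum(amount_sold) from sales group by channel_id')

PL/SQL procedure successfully completed.

SQL> select capability_name, possible, msgtxt from mv_capabilities_table
     where capability_name like 'REFRESH_FAST%';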

I simply take the generated statements from above now:

SQL> CREATE MATERIALIZED VIEW LOG ON "ADAM"."SALES"
  2  WITH ROWID, SEQUENCE ("CHANNEL_ID","AMOUNT_SOLD")
  3  INCLUDING NEW VALUES;

Materialized view log created.

SQL> CREATE MATERIALIZED VIEW ADAM.MV1
 REFRESH FAST WITH ROWID
 enable QUERY REWRITE AS
 SELECT ADAM.SALES.CHANNEL_ID C1,
 SUM("ADAM"."SALES"."AMOUNT_SOLD") M1,
 COUNT("ADAM"."SALES"."AMOUNT_SOLD") M2,
 COUNT(*) M3
 FROM ADAM.SALES
 GROUP BY ADAM.SALES.CHANNEL_ID;

Materialized view created.

After again modifying the base table sales, I will then try a fast refresh of the MV1:

SQL> set timing off
SQL> update sales set amount_sold=amount_sold*1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select mview_name,staleness from user_mviews;

MVIEW_NAME                     STALENESS
------------------------------ -------------------
MV1                            NEEDS_COMPILE

SQL> set timing on
SQL> exec dbms_mview.refresh('MV1','F')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.62


It worked! But do I always have to refresh manually? No, we can also schedule a regular refresh like this:

SQL>  alter materialized view mv1 refresh start with sysdate next sysdate + interval '1' minute;

Materialized view altered.

SQL> alter session set nls_date_format='hh24:mi:ss';

Session altered.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     10:40:05

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     10:41:04
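
Behind the scenes, this scheduled refresh is implemented as a job. Assuming the classic DBMS_JOB mechanism (as on my 11g system), it can be inspected like this:

SQL> select job, what, interval from user_jobs;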

Of course, a one-minute refresh interval is extraordinarily short. Another convenient possibility is to refresh the MV on commit:

CREATE MATERIALIZED VIEW ADAM.MV1
 REFRESH FAST on commit 
 WITH ROWID
 enable QUERY REWRITE AS
 SELECT ADAM.SALES.CHANNEL_ID C1,
 SUM("ADAM"."SALES"."AMOUNT_SOLD") M1,
 COUNT("ADAM"."SALES"."AMOUNT_SOLD") M2,
 COUNT(*) M3
 FROM ADAM.SALES
 GROUP BY ADAM.SALES.CHANNEL_ID;

Materialized view created.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            COMPLETE 11:16:28

SQL> update sales set amount_sold=amount_sold*1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     11:19:56

But this may slow down the DML on the base table (the refresh happens as part of the commit processing), so it requires testing whether the slowdown is noticeable or not; a quick way to test is sketched below.
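
To quantify the effect, time the same small transaction once with the ON COMMIT materialized view in place and once after dropping it; a minimal sketch:

SQL> set timing on
SQL> update sales set amount_sold=amount_sold+1 where rownum<2;
SQL> commit;
SQL> REM now drop the MV and repeat: the commit should be faster
SQL> drop materialized view mv1;
SQL> update sales set amount_sold=amount_sold+1 where rownum<2;
SQL> commit;

So much about Materialized Views for now. There is of course much more to say, but I think this is enough for a brief introduction to the topic.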


Record DDL Statements in DBA_LOGSTDBY_EVENTS & alert.log

This week, I am teaching another 11g Data Guard course in Kista (Sweden, near Stockholm). Our education center there is in a quiet area (Kronborgsgränd 17, 164 28 Kista), alongside other buildings rented mostly by IT companies.

One nice little feature I came across during that course is the possibility to get DDL statements recorded in the alert.log file and in the DBA_LOGSTDBY_EVENTS view. If I recall correctly, that was even the default with 10g Data Guard Logical Standby, but in 11g, you have to do the following on the Logical Standby:

SQL> exec dbms_logstdby.apply_set('RECORD_APPLIED_DDL','TRUE')
PL/SQL procedure successfully completed.
SQL> exec dbms_logstdby.apply_set('EVENT_LOG_DEST','DEST_ALL')
PL/SQL procedure successfully completed.
SQL> connect scott/tiger@prima
Connected.
SQL> create table test as select * from dept;
Table created.

The second exec leads to the additional recording of the DDL in the alert.log file; otherwise, it would only be visible in DBA_LOGSTDBY_EVENTS. The following is from the alert.log:

Wed Jun 17 08:18:11 2009
LOGSTDBY: APPLY_SET: RECORD_APPLIED_DDL changed to TRUE
Wed Jun 17 08:18:43 2009
LOGSTDBY: APPLY_SET: EVENT_LOG_DEST changed to DEST_ALL
Wed Jun 17 08:18:43 2009
LOGSTDBY status: EVENT_LOG_DEST changed to DEST_ALL
Wed Jun 17 08:19:17 2009
LOGSTDBY stmt: create table test as select * from dept
LOGSTDBY status: ORA-16204: DDL successfully applied
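
The recorded DDL can also be queried directly from the view on the Logical Standby; a query along these lines should show it (column selection assumed from the 11g documentation):

SQL> select event_time, status, event from dba_logstdby_events order by event_time;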
