
Real-Time Materialized Views in #Oracle 12c


In 12cR2, a Materialized View that is STALE can still speed up queries while delivering correct results. The data from the stale MV is combined on the fly with the change information from the MV logs, in an operation called ON QUERY COMPUTATION. The result is delivered slightly slower than if the MV were FRESH, so there is some overhead involved. But it should still be noticeably faster than the Full Table Scans that were required in this situation before 12cR2.

Operationally, that means REFRESH can be done less frequently while query performance stays satisfactory all the time. Let’s see that in action:

[oracle@uhesse ~]$ sqlplus adam/adam@pdb1

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 5 14:31:00 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Thu Jan 05 2017 10:57:35 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
	 1	    4000000
	 2	    4000000
	 4	    4000000
	 3	    4000000
	 0	    4000000

Elapsed: 00:00:03.47
SQL> set timing off

Without an MV, the query initially takes more than three seconds.

SQL> create materialized view log on sales
     with rowid, sequence(channel_id,amount_sold)
     including new values;   

Materialized view log created.

SQL> create materialized view mv1
     refresh fast on demand
     enable query rewrite 
     enable on query computation
     as
     select channel_id,
     sum(amount_sold),
     count(amount_sold),
     count(*)
     from sales
     group by channel_id;  

Materialized view created.

SQL> set timing on                                                     
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
	 1	    4000000
	 2	    4000000
	 4	    4000000
	 3	    4000000
	 0	    4000000

Elapsed: 00:00:00.07
SQL> set timing off

The FRESH MV speeds up the query – nothing new so far; the same kind of execution plan would have been used in 11g:

SQL> @lastplan

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID	9wwp2am6pm4dz, child number 1
-------------------------------------
select channel_id,sum(amount_sold) from sales group by channel_id

Plan hash value: 2958490228

-------------------------------------------------------------------------------------
| Id  | Operation		     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |	    |	    |	    |	  3 (100)|	    |
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV1  |	  5 |	 30 |	  3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------


13 rows selected.

Now I change something in the sales table, making the MV STALE:

SQL> update sales set amount_sold=2 where rownum<2; 

1 row updated. 

SQL> commit;

Commit complete.

SQL> select mview_name,staleness,on_query_computation from user_mviews;

MVIEW_NAME STALENESS	       O
---------- ------------------- -
MV1	   NEEDS_COMPILE       Y

In spite of the STALE MV, the next query is still fast, although not as fast as with the FRESH MV:

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
	 2	    4000000
	 3	    4000000
	 4	    4000000
	 0	    4000000
	 1	    4000001

Elapsed: 00:00:00.12
SQL> set timing off

So what happens is roughly this:

[Figure: Real-Time Materialized View – at query time, the stale MV is combined with the deltas from the MV log]

That some work is being done under the covers is revealed by the (rather scary) execution plan now:

SQL> @lastplan

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------
SQL_ID	9wwp2am6pm4dz, child number 2
-------------------------------------
select channel_id,sum(amount_sold) from sales group by channel_id

Plan hash value: 2525395710

---------------------------------------------------------------------------------------------------
| Id  | Operation			    | Name	  | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		    |		  |	  |	  |    18 (100)|	  |
|   1 |  VIEW				    |		  |   363 |  9438 |    18  (23)| 00:00:01 |
|   2 |   UNION-ALL			    |		  |	  |	  |	       |	  |
|*  3 |    VIEW 			    | VW_FOJ_0	  |   100 |  2900 |	7  (15)| 00:00:01 |
|*  4 |     HASH JOIN FULL OUTER	    |		  |   100 |  4300 |	7  (15)| 00:00:01 |
|   5 |      VIEW			    |		  |	5 |   160 |	3   (0)| 00:00:01 |
|   6 |       MAT_VIEW ACCESS FULL	    | MV1	  |	5 |    60 |	3   (0)| 00:00:01 |
|   7 |      VIEW			    |		  |   100 |  1100 |	4  (25)| 00:00:01 |
|   8 |       HASH GROUP BY		    |		  |	  |	  |	4  (25)| 00:00:01 |
|*  9 |        TABLE ACCESS FULL	    | MLOG$_SALES |	2 |    74 |	3   (0)| 00:00:01 |
|  10 |    VIEW 			    |		  |   263 |  6838 |    11  (28)| 00:00:01 |
|  11 |     UNION-ALL			    |		  |	  |	  |	       |	  |
|* 12 |      FILTER			    |		  |	  |	  |	       |	  |
|  13 |       NESTED LOOPS OUTER	    |		  |   250 | 16000 |	4  (25)| 00:00:01 |
|  14 |        VIEW			    |		  |   100 |  5200 |	4  (25)| 00:00:01 |
|* 15 | 	FILTER			    |		  |	  |	  |	       |	  |
|  16 | 	 HASH GROUP BY		    |		  |	  |	  |	4  (25)| 00:00:01 |
|* 17 | 	  TABLE ACCESS FULL	    | MLOG$_SALES |	2 |    74 |	3   (0)| 00:00:01 |
|* 18 |        INDEX UNIQUE SCAN	    | I_SNAP$_MV1 |	3 |    36 |	0   (0)|	  |
|  19 |      MERGE JOIN 		    |		  |    13 |   871 |	7  (29)| 00:00:01 |
|  20 |       MAT_VIEW ACCESS BY INDEX ROWID| MV1	  |	5 |    60 |	2   (0)| 00:00:01 |
|  21 |        INDEX FULL SCAN		    | I_SNAP$_MV1 |	5 |	  |	1   (0)| 00:00:01 |
|* 22 |       FILTER			    |		  |	  |	  |	       |	  |
|* 23 |        SORT JOIN		    |		  |   100 |  5500 |	5  (40)| 00:00:01 |
|  24 | 	VIEW			    |		  |   100 |  5500 |	4  (25)| 00:00:01 |
|  25 | 	 SORT GROUP BY		    |		  |	  |	  |	4  (25)| 00:00:01 |
|* 26 | 	  TABLE ACCESS FULL	    | MLOG$_SALES |	2 |    74 |	3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("AV$0"."OJ_MARK" IS NULL)
   4 - access(SYS_OP_MAP_NONNULL("SNA$0"."CHANNEL_ID")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
   9 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-01-05 14:32:07', 'syyyy-mm-dd hh24:mi:ss'))
  12 - filter(CASE  WHEN ROWID IS NOT NULL THEN 1 ELSE NULL END  IS NULL)
  15 - filter(SUM(DECODE(DECODE("MAS$"."OLD_NEW$$",'N','I','D'),'I',1,(-1)))>0)
  17 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-01-05 14:32:07', 'syyyy-mm-dd hh24:mi:ss'))
  18 - access("MV1"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
  22 - filter("MV1"."COUNT(*)"+"AV$0"."D0">0)
  23 - access("MV1"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
       filter("MV1"."SYS_NC00005$"=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
  26 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-01-05 14:32:07', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


60 rows selected.

But the query delivers the correct result – so this is not simply using the STALE MV the way QUERY_REWRITE_INTEGRITY=STALE_TOLERATED does. Just to confirm:

SQL> show parameter query_rewrite_integrity

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_integrity 	     string	 enforced
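
By the way, on query computation can be switched off and on per MV if you ever want the pre-12.2 behaviour back. A minimal sketch, assuming the ALTER MATERIALIZED VIEW clause mirrors the CREATE syntax used above (not run in this demo):

alter materialized view mv1 disable on query computation;

alter materialized view mv1 enable on query computation;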

Still, a REFRESH should be done from time to time, like here:

SQL> exec dbms_mview.refresh('MV1','F')

PL/SQL procedure successfully completed.

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
	 1	    4000001
	 2	    4000000
	 4	    4000000
	 3	    4000000
	 0	    4000000

Elapsed: 00:00:00.06
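
Since the MV log keeps growing as long as the MV stays stale, it still makes sense to automate that refresh. A minimal sketch using DBMS_SCHEDULER – the job name and the interval are just examples, not part of the demo:

begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_MV1',          -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin dbms_mview.refresh('MV1','F'); end;]',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=HOURLY',          -- example: fast refresh once per hour
    enabled         => true);
end;
/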

Isn’t it nice that the good old stuff gets enhanced too, and not only fancy new things like the In-Memory Option? At least I think so 🙂

Watch me on YouTube explaining and demonstrating the above:



How to move Partitions ONLINE and make them READ ONLY in #Oracle 12c


You’ll see two New Features about Partitions covered here: MOVE ONLINE (12cR1) and READ ONLY (12cR2).

[oracle@uhesse ~]$ sqlplus adam/adam@pdb1

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 5 08:53:59 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Thu Jan 05 2017 08:41:13 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select partition_name,compress_for,read_only from user_tab_partitions;

PARTITION_ COMPRESS_FOR 		  READ
---------- ------------------------------ ----
Q1					  NO
Q2					  NO
Q3					  NO
Q4					  NO

SQL> select partition_name,bytes/1024/1024 as mb from user_segments where partition_name like 'Q%';

PARTITION_	   MB
---------- ----------
Q1		   40
Q2		   47
Q3		   40
Q4		   46

The next command features 12cR1 enhancements: row store compress basic is new syntax for compress basic, having the same effect as the 11g syntax. The online clause is the major improvement here, since it allows the movement while other sessions run DML statements against that partition:

SQL> alter table sales_range move partition q1 row store compress basic online;

Table altered.

SQL> select partition_name,compress_for,read_only from user_tab_partitions;

PARTITION_ COMPRESS_FOR 		  READ
---------- ------------------------------ ----
Q1	   BASIC			  NO
Q2					  NO
Q3					  NO
Q4					  NO

SQL> select partition_name,bytes/1024/1024 as mb from user_segments where partition_name like 'Q%';

PARTITION_	   MB
---------- ----------
Q1		   16
Q2		   47
Q3		   40
Q4		   46

The following command is a 12cR2 New Feature – Partitions can be made READ ONLY now, preventing DML and DDL statements against them:

SQL> alter table sales_range modify partition q1 read only;

Table altered.

SQL> select partition_name,compress_for,read_only from user_tab_partitions;

PARTITION_ COMPRESS_FOR 		  READ
---------- ------------------------------ ----
Q1	   BASIC			  YES
Q2					  NO
Q3					  NO
Q4					  NO

SQL> delete from sales_range where time_id=to_date('01.01.2016','dd.mm.yyyy');
delete from sales_range where time_id=to_date('01.01.2016','dd.mm.yyyy')
            *
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.


SQL> alter table sales_range drop partition q1;
alter table sales_range drop partition q1
*
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.
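
Queries against the READ ONLY partition still work, of course – only modifications are blocked. A quick check of my own, not part of the original demo:

select count(*) from sales_range partition (q1);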

To undo the above:

SQL> alter table sales_range modify partition q1 read write;

Table altered.

SQL> alter table sales_range move partition q1 nocompress online;

Table altered.

SQL> select partition_name,compress_for,read_only from user_tab_partitions;

PARTITION_ COMPRESS_FOR 		  READ
---------- ------------------------------ ----
Q1					  NO
Q2					  NO
Q3					  NO
Q4					  NO

SQL> select partition_name,bytes/1024/1024 as mb from user_segments where partition_name like 'Q%';

PARTITION_	   MB
---------- ----------
Q1		   40
Q2		   47
Q3		   40
Q4		   46

Besides the basic compression shown here, OLTP compression and Hybrid Columnar Compression (HCC) can also be applied online. Covering that new syntax could be a topic for another article 🙂
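
Just to give an idea what that could look like – a sketch only, not run in this demo, and HCC additionally requires supported storage like Exadata or the ZFS Storage Appliance:

alter table sales_range move partition q2 row store compress advanced online;

alter table sales_range move partition q3 column store compress for query high online;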

Watch me on YouTube demonstrating the above:



Hot cloning and refreshing PDBs in #Oracle 12cR2

Hot cloning PDBs is new in 12.2: you no longer have to put the source PDB into READ ONLY mode before the cloning, provided the CDB runs in local undo mode. I suppose shared undo mode will become very unpopular. Another 12.2 New Feature is the option to clone a PDB that can be refreshed from the source PDB. I will show both features in this article, but you may of course do hot cloning without a later refresh. In that case, just leave out the REFRESH MODE MANUAL clause and you don’t have to OPEN READ ONLY the cloned PDB afterwards. On a high level, what I demonstrate is this:

[Figure: Hot cloning & refreshing a PDB]

Now let’s see it in detail:

SQL> connect sys/oracle_4U@cdb1 as sysdba
Connected.
SQL> select name,open_mode from v$pdbs;

NAME	   OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1	   READ WRITE
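
Hot cloning an open PDB requires the CDB to run in local undo mode, as mentioned above. A quick way to verify that – a sketch of the check I would run, assuming the 12.2 property name LOCAL_UNDO_ENABLED:

select property_value from database_properties
where property_name = 'LOCAL_UNDO_ENABLED';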

I have prepared a tnsnames.ora with connect descriptors for pdb1 and the future pdb2:

PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
PDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb2)
    )
  )

Now I create the clone user inside the source PDB and the database link inside the destination CDB:

SQL> connect sys/oracle_4U@pdb1 as sysdba
Connected.
SQL> grant create session,create pluggable database to cloner identified by cloner;

Grant succeeded.

SQL> connect sys/oracle_4U@cdb2 as sysdba
Connected.
SQL> select name,open_mode from v$pdbs;

NAME	   OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY

SQL> create database link clone_link connect to cloner identified by cloner using 'pdb1';

Database link created.
SQL> select * from session_privs@clone_link;

PRIVILEGE
----------------------------------------
CREATE PLUGGABLE DATABASE
CREATE SESSION

The steps so far are the same as with 12.1. My databases are single instances running on the same Linux server without ASM.

SQL> connect sys/oracle_4U@cdb2 as sysdba

SQL> host mkdir /u01/app/oracle/oradata/pdb2
SQL> create pluggable database pdb2 from pdb1@clone_link file_name_convert=('pdb1','pdb2')
     refresh mode manual;  

Pluggable database created.
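
As an aside, instead of REFRESH MODE MANUAL the syntax also allows an automatic refresh interval – a sketch of that variant, not used in this demo:

create pluggable database pdb2 from pdb1@clone_link
  file_name_convert=('pdb1','pdb2')
  refresh mode every 60 minutes;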

SQL> alter pluggable database pdb2 open read only;

Pluggable database altered.

The source pdb1 remains OPEN READ WRITE all the time during the demo. Now I change something there and refresh pdb2 afterwards:

SQL> connect adam/adam@pdb2
Connected.
SQL> select count(*) from sales;

  COUNT(*)
----------
    748999

SQL> connect adam/adam@pdb1
Connected.
SQL> delete from sales where rownum<1000; 

999 rows deleted. 

SQL> commit;

Commit complete.

SQL> select count(*) from sales;

  COUNT(*)
----------
    748000

SQL> connect sys/oracle_4U@cdb2 as sysdba
Connected.
SQL> alter pluggable database pdb2 close immediate;

Pluggable database altered.

Notice that you need to perform the refresh from inside the cloned PDB, otherwise you get this error message:

SQL> alter pluggable database pdb2 refresh;
alter pluggable database pdb2 refresh
*
ERROR at line 1:
ORA-65118: operation affecting a pluggable database cannot be performed from
another pluggable database

So I connect into the cloned PDB to do the refresh there:

SQL> connect sys/oracle_4U@pdb2 as sysdba
Connected.
SQL> alter pluggable database refresh;

Pluggable database altered.

SQL> alter pluggable database open read only;

Pluggable database altered.

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
    748000

Fortunately, you are not allowed to open this cloned PDB in normal READ WRITE mode because of the REFRESH clause added to the CREATE PLUGGABLE DATABASE command:

SQL> shutdown immediate
Pluggable Database closed.
SQL> connect sys/oracle_4U@cdb2 as sysdba
Connected.
SQL> alter pluggable database pdb2 open;
alter pluggable database pdb2 open
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode
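
The data dictionary also shows which PDBs are refreshable clones – a sketch, assuming DBA_PDBS carries the 12.2 columns REFRESH_MODE and REFRESH_INTERVAL (run in the destination CDB root):

select pdb_name, refresh_mode, refresh_interval from dba_pdbs;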

Another possible implementation would have been to silently fall back to OPEN READ ONLY, like a Physical Standby does. Don’t believe it, test it! 🙂

Watch me explaining and demonstrating the above on YouTube:


