
Index Competition in #Oracle 12c

Suppose you want to find out which type of index is best for performance with your workload. Why not set up a competition and let the optimizer decide? The playground:
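
In case you want to build it yourself, something along these lines would do (the exact data distribution here is made up and won't reproduce the costs shown below):

ADAM@pdb1 > create table sales as
            select rownum as id,
                   mod(rownum, 10) as channel_id,
                   round(dbms_random.value(1, 5000)) as amount_sold
            from   dual
            connect by level <= 1e6;

Table created.

ADAM@pdb1 > create index bstar on sales(channel_id);

Index created.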

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
SQL_ID  3hrvrf1r6kn8s, child number 0
-------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 3593230073

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | BSTAR |     1 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=9)


20 rows selected.
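
By the way, lastplan is just a small helper script that wraps DBMS_XPLAN.DISPLAY_CURSOR for the last statement of the session, roughly like this:

-- lastplan.sql (my assumption of its content)
set linesize 150 pagesize 100
select * from table(dbms_xplan.display_cursor);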

There is a standard B*tree index on the column CHANNEL_ID that speeds up the SELECT above. I think a bitmap index would be better:

ADAM@pdb1 > create bitmap index bmap on sales(channel_id) invisible nologging;

Index created.
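
Mind the INVISIBLE clause here: even in 12c, a second visible index on the same column would still be rejected, roughly like this:

ADAM@pdb1 > create bitmap index bmap2 on sales(channel_id);
create bitmap index bmap2 on sales(channel_id)
                                  *
ERROR at line 1:
ORA-01408: such column list already indexed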

ADAM@pdb1 > alter index bstar invisible;

Index altered.

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 2178022915

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     3   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | BMAP  |       |       |            |          |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CHANNEL_ID"=9)


21 rows selected.

With this 12c New Feature (two indexes on the same column, of which at most one may be visible), I got a smooth transition to the new index type. But this left the optimizer no choice. What about this?

ADAM@pdb1 > alter index bmap invisible;

Index altered.

ADAM@pdb1 > alter session set optimizer_use_invisible_indexes=true;

Session altered.

Now both indexes are invisible and the optimizer may choose either of them. It turns out that it likes the bitmap index better here. Instead of watching the execution plans, V$SEGMENT_STATISTICS can also be used to find out:

ADAM@pdb1 > select object_name,statistic_name,value
            from v$segment_statistics
            where object_name in ('BSTAR','BMAP')
            and statistic_name in ('physical reads','logical reads');

OBJECT STATISTIC_NAME                      VALUE
------ ------------------------------ ----------
BSTAR  logical reads                       22800
BSTAR  physical reads                       6212
BMAP   logical reads                        1696
BMAP   physical reads                          0

The numbers for BSTAR remain static, while those for BMAP increase. You may also monitor that with DBA_HIST_SEG_STAT across AWR snapshots. Now isn’t that cool? 🙂
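
A sketch of such a query against the AWR repository (it assumes the Diagnostics Pack license that AWR requires, and that snapshots cover the test period):

select o.object_name,
       s.snap_id,
       s.logical_reads_delta,
       s.physical_reads_delta
from   dba_hist_seg_stat s
join   dba_hist_seg_stat_obj o
on     s.obj# = o.obj# and s.dataobj# = o.dataobj#
where  o.object_name in ('BSTAR','BMAP')
order  by o.object_name, s.snap_id;
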
A couple of things to be aware of here:
Watch out for more than just physical/logical reads: bitmap indexes can cause serious locking problems in an OLTP environment.
Don’t keep the two indexes invisible forever. Once you have seen which one performs better, drop the other one, as sketched below: invisible indexes still need to be maintained upon DML and therefore slow it down.
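
If the bitmap index wins the competition, that cleanup could look like this:

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > drop index bstar;

Index dropped.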


Data Redaction and Data Pump in #Oracle 12c

What happens upon Data Pump Export if tables are exported that have a Data Redaction Policy? I got that question several times in class, which is why I put the answer here, so I can refer to it later. It might also be of interest to the Oracle Community 🙂

SYS@orcl > BEGIN
             DBMS_REDACT.ADD_POLICY(
               object_schema => 'SCOTT',
               object_name   => 'EMP',
               policy_name   => 'EMPSAL_POLICY',
               column_name   => 'SAL',
               function_type => DBMS_REDACT.FULL,
               expression    => '1=1');
           END;
           /

PL/SQL procedure successfully completed.
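
To double-check, the policy shows up in the REDACTION_POLICIES view (output omitted here):

SYS@orcl > select object_owner, object_name, policy_name, enable
           from redaction_policies;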

SYS@orcl > connect scott/tiger
Connected.
SCOTT@orcl > select ename,sal from emp;

ENAME             SAL
---------- ----------
SMITH               0
ALLEN               0
WARD                0
JONES               0
MARTIN              0
BLAKE               0
CLARK               0
SCOTT               0
KING                0
TURNER              0
ADAMS               0
JAMES               0
FORD                0
MILLER              0

14 rows selected.

Scott doesn’t see the values of the SAL column because of the Data Redaction Policy. SYS is not subject to that policy, because SYS has the privilege EXEMPT REDACTION POLICY:

SYS@orcl > select ename,sal from scott.emp;

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            9000
KING             5000
TURNER           1500
ADAMS            1100
JAMES             950
FORD             9000
MILLER           1300

14 rows selected.
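
Which users and roles hold that privilege can be checked like this (output omitted here):

SYS@orcl > select grantee, admin_option
           from   dba_sys_privs
           where  privilege = 'EXEMPT REDACTION POLICY';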

If Data Pump Export is done as a user who holds that privilege, the table is just exported with all its content, regardless of the policy:

SYS@orcl > create directory dpdir as '/home/oracle/';

Directory created.

[oracle@uhesse ~]$ expdp tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:56:51 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/RADM_POLICY
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Aug 5 08:57:10 2016 elapsed 0 00:00:15

Notice the object type TABLE_EXPORT/TABLE/RADM_POLICY in the output above: the Data Redaction Policy itself is part of the export. If Scott tries to export the table himself, an error is raised:

SYS@orcl > grant read,write on directory dpdir to scott;

Grant succeeded.

[oracle@uhesse ~]$ expdp scott/tiger tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:55:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-28081: Insufficient privileges - the command references a redacted object.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Aug 5 08:55:28 2016 elapsed 0 00:00:16
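
If Scott really needed to export the table himself, he could be granted the same privilege that exempts SYS, with the obvious side effect that he would then also see the clear text values:

SYS@orcl > grant exempt redaction policy to scott;

Grant succeeded.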

Taken from the 12c New Features class that I delivered this week in Hinckley. As always: Don’t believe it, test it 🙂


DEFAULT_CACHE_SIZE mentioned in alert.log of an #Oracle database

Today, I got this message in my alert.log file:

Full DB Caching disabled: DEFAULT_CACHE_SIZE should be at least 709 MBs bigger than current size.
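
In case you wonder where to find the alert.log of your database: V$DIAG_INFO points to the directory, for example:

SYS@cloudcdb > select value from v$diag_info where name = 'Diag Trace';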

When I look at the datafile sizes and compare them with the buffer cache size, I see this:

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        3296
In-Memory Area Size                                      2048
Shared Pool Size                                          736
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                       208
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,258133
Free SGA Memory Available                                   0

14 rows selected.

SYS@cloudcdb > select sum(bytes)/1024/1024 as mb from v$datafile;

        MB
----------
      3675

It is true: the database doesn’t fit completely into the buffer cache, missing roughly the amount of space mentioned. There is no such parameter as DEFAULT_CACHE_SIZE, though.
What we have instead is DB_CACHE_SIZE. To fix the issue, I used this initialization parameter file to create a new spfile:

[oracle@uhesse-service2 dbs]$ cat initCLOUDCDB.ora
*.audit_file_dest='/u02/app/oracle/admin/CLOUDCDB/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/u02/app/oracle/oradata/CLOUDCDB/control01.ctl','/u03/app/oracle/fra/CLOUDCDB/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='CLOUDCDB'
*.db_recovery_file_dest='/u03/app/oracle/fra'
*.db_recovery_file_dest_size=10737418240
*.diagnostic_dest='/u02/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLOUDCDBXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.sga_target=6g
*.pga_aggregate_target=2g
*.inmemory_size=1g
*.db_cache_size=4g
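
Since the pfile sits in the default $ORACLE_HOME/dbs location, creating the new spfile and restarting the instance can be as simple as this sketch:

SYS@cloudcdb > create spfile from pfile;

File created.

SYS@cloudcdb > startup force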

That reduced the size of the In-Memory Column Store to make room for the buffer cache. Now the database fits nicely into the buffer cache again:

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        4256
In-Memory Area Size                                      1024
Shared Pool Size                                          800
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                         0
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,290176
Free SGA Memory Available                                   0

14 rows selected.

Accordingly, the message in the alert.log now reads:

Buffer Cache Full DB Caching mode changing from FULL CACHING DISABLED to FULL CACHING ENABLED

Don’t get me wrong: I’m not arguing here against the In-Memory Option or in favor of Full Database Caching. Or whether it makes sense to use any of them or both. This post is just about clarifying the strange message in the alert.log that may confuse people.

And by the way, my demo database is running in the Oracle Cloud 🙂
