Posts Tagged 12c New Features

How to do PDB PITR in #Oracle 12c

A logical error happened in one Pluggable Database. A PDB Point-In-Time Recovery rewinds just that PDB while the other containers remain open and unaffected.

Logical error in PDB2

The blue arrow represents the Multitenant Database cdb1 with all its containers. At some point in the past, a logical error affected only pdb2. cdb1 is in Archive Log Mode, and backups from before the logical error of at least the root container and pdb2 are available. What happens upon PDB PITR is quite similar to a Tablespace PITR: backups of root and pdb2 are restored, while all other PDBs can be skipped. The rewind of pdb2 alone is done with the help of a temporary auxiliary instance while cdb1 keeps running. Space is needed to restore the root container files to an auxiliary destination, whereas the pdb2 files are restored over the existing files of pdb2:
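Before going further, it is worth verifying those prerequisites. In SQL*Plus, connected to the root container as SYSDBA, archive log list confirms Archive Log Mode; in RMAN, connected to cdb1 as target, something like the following (a sketch, output omitted) confirms that usable backups exist:

list backup summary;
list backup of pluggable database pdb2;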

PDB PITR

Let’s see that in action!

[oracle@uhesse ~]$ export NLS_LANG=american_america.utf8
[oracle@uhesse ~]$ export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'

[oracle@uhesse ~]$ sqlplus sys/oracle_4U@pdb2 as sysdba

SQL> select * from scott.dept;

DEPTNO DNAME      LOC
------ ---------- ----------
    10 ACCOUNTING NEW YORK
    20 RESEARCH   DALLAS
    30 SALES      CHICAGO
    40 OPERATIONS BOSTON

SQL> select sysdate from dual;

SYSDATE
-------------------
2016-09-27 15:06:08

SQL> drop user scott cascade;

User dropped.

The DROP USER represents the logical error. Now on to the PDB PITR:

SQL> alter pluggable database pdb2 close immediate;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@edvmr1p0 ~]$ rman target /
	

RMAN> run {
        set until time='2016-09-27 15:06:05';
        restore pluggable database pdb2;
        recover pluggable database pdb2 auxiliary destination '/home/oracle/';
        alter pluggable database pdb2 open resetlogs;
      }

executing command: SET until clause

Starting restore at 2016-09-27 15:09:38
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/3D69FAB014BF7D48E0532A40CE0A8038/backupset/2016_09_27/o1_mf_nnndf_TAG20160927T101919_cynkyhtd_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/3D69FAB014BF7D48E0532A40CE0A8038/backupset/2016_09_27/o1_mf_nnndf_TAG20160927T101919_cynkyhtd_.bkp tag=TAG20160927T101919
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 2016-09-27 15:09:55

Starting recover at 2016-09-27 15:09:55
current log archived
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='gkon'

initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=gkon_pitr_pdb2_CDB1
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=752M
processes=200
db_create_file_dest=/home/oracle/
log_archive_dest_1='location=/home/oracle/'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used


starting up automatic instance CDB1

Oracle instance started

Total System Global Area     788529152 bytes

Fixed Size                     2929352 bytes
Variable Size                218107192 bytes
Database Buffers             562036736 bytes
Redo Buffers                   5455872 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2016-09-27 15:06:05";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2016-09-27 15:10:10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=165 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/autobackup/2016_09_27/o1_mf_s_923661139_cynspmom_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/autobackup/2016_09_27/o1_mf_s_923661139_cynspmom_.bkp tag=TAG20160927T123219
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/home/oracle/CDB1/controlfile/o1_mf_cyo2ymfv_.ctl
Finished restore at 2016-09-27 15:10:12

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2016-09-27 15:06:05";
# switch to valid datafilecopies
switch clone datafile  12 to datafilecopy
 "/u01/app/oracle/oradata/pdb2/system01.dbf";
switch clone datafile  13 to datafilecopy
 "/u01/app/oracle/oradata/pdb2/sysaux01.dbf";
switch clone datafile  14 to datafilecopy
 "/u01/app/oracle/oradata/pdb2/users01.dbf";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  4 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  6 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 4, 3, 6;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

datafile 12 switched to datafile copy
input datafile copy RECID=7 STAMP=923670618 file name=/u01/app/oracle/oradata/pdb2/system01.dbf

datafile 13 switched to datafile copy
input datafile copy RECID=8 STAMP=923670618 file name=/u01/app/oracle/oradata/pdb2/sysaux01.dbf

datafile 14 switched to datafile copy
input datafile copy RECID=9 STAMP=923670618 file name=/u01/app/oracle/oradata/pdb2/users01.dbf

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 2016-09-27 15:10:17
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /home/oracle/CDB1/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /home/oracle/CDB1/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /home/oracle/CDB1/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /home/oracle/CDB1/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_09_27/o1_mf_nnndf_TAG20160927T101919_cynkxps1_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_09_27/o1_mf_nnndf_TAG20160927T101919_cynkxps1_.bkp tag=TAG20160927T101919
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 2016-09-27 15:10:42

datafile 1 switched to datafile copy
input datafile copy RECID=14 STAMP=923670642 file name=/home/oracle/CDB1/datafile/o1_mf_system_cyo2yshr_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=15 STAMP=923670642 file name=/home/oracle/CDB1/datafile/o1_mf_undotbs1_cyo2ysjc_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=16 STAMP=923670642 file name=/home/oracle/CDB1/datafile/o1_mf_sysaux_cyo2ysj1_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=17 STAMP=923670642 file name=/home/oracle/CDB1/datafile/o1_mf_users_cyo2ysjr_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2016-09-27 15:06:05";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  4 online";
sql clone "alter database datafile  3 online";
sql clone 'PDB2' "alter database datafile
 12 online";
sql clone 'PDB2' "alter database datafile
 13 online";
sql clone 'PDB2' "alter database datafile
 14 online";
sql clone "alter database datafile  6 online";
# recover pdb
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX", "USERS" pluggable database
 'PDB2'   delete archivelog;
sql clone 'alter database open read only';
plsql <<>>;
plsql <<>>;
# shutdown clone before import
shutdown clone abort
plsql <<  'PDB2');
end; >>>;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  4 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  12 online

sql statement: alter database datafile  13 online

sql statement: alter database datafile  14 online

sql statement: alter database datafile  6 online

Starting recover at 2016-09-27 15:10:43
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 43 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_43_cynm4w09_.arc
archived log for thread 1 with sequence 44 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_44_cynmy3k9_.arc
archived log for thread 1 with sequence 45 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_45_cynn3bds_.arc
archived log for thread 1 with sequence 46 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_46_cynn3d80_.arc
archived log for thread 1 with sequence 47 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_47_cynn6341_.arc
archived log for thread 1 with sequence 48 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_48_cynrz3bw_.arc
archived log for thread 1 with sequence 49 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_49_cyo2y39z_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_43_cynm4w09_.arc thread=1 sequence=43
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_44_cynmy3k9_.arc thread=1 sequence=44
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_45_cynn3bds_.arc thread=1 sequence=45
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_46_cynn3d80_.arc thread=1 sequence=46
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_47_cynn6341_.arc thread=1 sequence=47
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_48_cynrz3bw_.arc thread=1 sequence=48
archived log file name=/u01/app/oracle/fast_recovery_area/CDB1/archivelog/2016_09_27/o1_mf_1_49_cyo2y39z_.arc thread=1 sequence=49
media recovery complete, elapsed time: 00:00:04
Finished recover at 2016-09-27 15:10:47

sql statement: alter database open read only



Oracle instance shut down


Removing automatic instance
Automatic instance removed
auxiliary instance file /home/oracle/CDB1/datafile/o1_mf_sysaux_cyo2ysj1_.dbf deleted
auxiliary instance file /home/oracle/CDB1/controlfile/o1_mf_cyo2ymfv_.ctl deleted
Finished recover at 2016-09-27 15:10:51

Statement processed
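A side note: the SET UNTIL TIME clause above relies on the NLS_DATE_FORMAT exported at the beginning. A sketch of an NLS-independent way to express the same point in time:

run {
  set until time "to_date('2016-09-27 15:06:05','yyyy-mm-dd hh24:mi:ss')";
  restore pluggable database pdb2;
  recover pluggable database pdb2 auxiliary destination '/home/oracle/';
  alter pluggable database pdb2 open resetlogs;
}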

Now, what is the state of affairs after the PDB PITR?

RMAN> exit

[oracle@uhesse ~]$ sqlplus sys/oracle_4U@pdb2 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 27 15:19:45 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from scott.dept;

  COUNT(*)
----------
         4

SQL> select DB_INCARNATION#,PDB_INCARNATION#,INCARNATION_TIME from v$pdb_incarnation;

DB_INCARNATION# PDB_INCARNATION# INCARNATION_TIME
--------------- ---------------- -------------------
              2                2 2016-09-27 15:06:29
              2                0 2015-03-17 16:49:58

SQL> connect / as sysdba
Connected.
SQL> select sequence#,status from v$log;

 SEQUENCE# STATUS
---------- ------------------------------------------------
        49 INACTIVE
        50 CURRENT
        48 INACTIVE

SQL> select INCARNATION#,RESETLOGS_TIME from v$database_incarnation;

INCARNATION# RESETLOGS_TIME
------------ -------------------
           1 2014-07-07 05:38:47
           2 2015-03-17 16:49:58

The logical error inside pdb2 is undone! In spite of the RESETLOGS clause, the CDB stays in the same incarnation as before and the online redo logs are not reinitialized. The new view V$PDB_INCARNATION, however, confirms that a new incarnation has been created for pdb2.
I took this from a live demonstration in my current Oracle Database 12c New Features class. It was done with 12.1, where Flashback on the PDB layer is not yet available. Hope you find it useful 🙂
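For comparison: from 12.2 onwards, given local undo and either flashback logging or a suitable restore point, a PDB can be rewound without restoring any backups. A rough sketch of that syntax, not applicable to the 12.1 demo above:

alter pluggable database pdb2 close immediate;
flashback pluggable database pdb2
  to timestamp to_timestamp('2016-09-27 15:06:05','yyyy-mm-dd hh24:mi:ss');
alter pluggable database pdb2 open resetlogs;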


Index Competition in #Oracle 12c

Suppose you want to find out which type of index is best for performance with your workload. Why not set up a competition and let the optimizer decide? The playground:
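The setup of the playground is not shown here; a minimal sketch that produces a comparable situation (table, column and index names taken from the statements below, data volume and skew merely assumed) could be:

create table sales as
select rownum as id,
       case when mod(rownum, 1000) = 0 then 9 else mod(rownum, 5) end as channel_id,
       round(dbms_random.value(1, 5000)) as amount_sold
  from dual connect by level <= 1e5;

create index bstar on sales(channel_id);

exec dbms_stats.gather_table_stats(user, 'SALES')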

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
SQL_ID  3hrvrf1r6kn8s, child number 0
-------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 3593230073

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | BSTAR |     1 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=9)


20 rows selected.

There is a standard B*tree index on the column CHANNEL_ID that speeds up the SELECT above. I think a bitmap index would be better:

ADAM@pdb1 > create bitmap index bmap on sales(channel_id) invisible nologging;

Index created.

ADAM@pdb1 > alter index bstar invisible;

Index altered.

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 2178022915

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     3   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | BMAP  |       |       |            |          |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CHANNEL_ID"=9)


21 rows selected.

With this 12c New Feature (two indexes on the same column set, only one of them visible at a time), I got a smooth transition to the new index type. But that left the optimizer no choice. What about this?

ADAM@pdb1 > alter index bmap invisible;

Index altered.

ADAM@pdb1 > alter session set optimizer_use_invisible_indexes=true;

Now both indexes are invisible, but with OPTIMIZER_USE_INVISIBLE_INDEXES set to TRUE the optimizer may choose either of them. It turns out that it prefers the bitmap index here. Instead of watching the execution plans, V$SEGMENT_STATISTICS can also be used to find out:

ADAM@pdb1 > select object_name,statistic_name,value
            from v$segment_statistics
            where object_name in ('BSTAR','BMAP')
            and statistic_name in ('physical reads','logical reads');

OBJECT STATISTIC_NAME                      VALUE
------ ------------------------------ ----------
BSTAR  logical reads                       22800
BSTAR  physical reads                       6212
BMAP   logical reads                        1696
BMAP   physical reads                          0

The numbers of BSTAR remain static while BMAP numbers increase. You may also monitor that with DBA_HIST_SEG_STAT across AWR snapshots. Now isn’t that cool?🙂
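A sketch of such an AWR-based check, assuming the usual DBA_HIST_SEG_STAT and DBA_HIST_SEG_STAT_OBJ columns (and keeping the Diagnostics Pack license requirement in mind):

select o.object_name,
       st.snap_id,
       st.logical_reads_delta,
       st.physical_reads_delta
  from dba_hist_seg_stat st
  join dba_hist_seg_stat_obj o
    on  o.dbid     = st.dbid
    and o.obj#     = st.obj#
    and o.dataobj# = st.dataobj#
 where o.object_name in ('BSTAR','BMAP')
 order by st.snap_id, o.object_name;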
A couple of things to be aware of here:
Watch out for more than just physical and logical reads: bitmap indexes can cause locking problems in an OLTP environment.
Don't keep both indexes invisible forever. Once you have seen which one performs better, drop the other one (see the sketch below); invisible indexes still need to be maintained upon DML and therefore slow it down.
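Wrapping up the competition could then look like this (a sketch, assuming the bitmap index won):

alter session set optimizer_use_invisible_indexes=false;
alter index bmap visible;
drop index bstar;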


Data Redaction and Data Pump in #Oracle 12c

What happens upon Data Pump Export if the exported tables have a Data Redaction Policy? I got that question several times in class, which is why I put the answer here so I can refer to it later. It might also be of interest to the Oracle Community 🙂

SYS@orcl > BEGIN
DBMS_REDACT.ADD_POLICY(
  object_schema => 'SCOTT',
  object_name   => 'EMP',
  policy_name   => 'EMPSAL_POLICY',
  column_name   => 'SAL',
  function_type => DBMS_REDACT.FULL,
  expression    => '1=1');
END;
/

PL/SQL procedure successfully completed.

SYS@orcl > connect scott/tiger
Connected.
SCOTT@orcl > select ename,sal from emp;

ENAME             SAL
---------- ----------
SMITH               0
ALLEN               0
WARD                0
JONES               0
MARTIN              0
BLAKE               0
CLARK               0
SCOTT               0
KING                0
TURNER              0
ADAMS               0
JAMES               0
FORD                0
MILLER              0

14 rows selected.

Scott doesn’t see the values of the SAL column because of the Data Redaction Policy. SYS is not subject to that policy, because SYS has the privilege EXEMPT REDACTION POLICY:

SYS@orcl > select ename,sal from scott.emp;

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            9000
KING             5000
TURNER           1500
ADAMS            1100
JAMES             950
FORD             9000
MILLER           1300

14 rows selected.

If Data Pump Export is done as a user who has that privilege, the table is simply exported with all its content, regardless of the policy:

SYS@orcl >  create directory dpdir as '/home/oracle/';
[oracle@uhesse ~]$ expdp tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:56:51 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/RADM_POLICY
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Aug 5 08:57:10 2016 elapsed 0 00:00:15

If Scott tries to export the table himself, an error is raised:

SYS@orcl > grant read,write on directory dpdir to scott;

Grant succeeded.

[oracle@uhesse ~]$ expdp scott/tiger tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:55:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-28081: Insufficient privileges - the command references a redacted object.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Aug 5 08:55:28 2016 elapsed 0 00:00:16

Taken from the 12c New Features class that I delivered this week in Hinckley. As always: Don’t believe it, test it🙂
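To clean up after the demo, the Data Redaction Policy can be removed again with DBMS_REDACT.DROP_POLICY, roughly like this:

BEGIN
  DBMS_REDACT.DROP_POLICY(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    policy_name   => 'EMPSAL_POLICY');
END;
/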
