Archive for the category TOI
Speaking at the OUGN Spring Seminar 2012
The Oracle User Group Norway (OUGN) Spring Seminar 2012 will again take place on board a cruise ship. I will participate and speak about the benefits of 11g Data Guard beyond Disaster Recovery (Real-Time Query, first and foremost). Looking at the program, I really get excited about this event and the opportunity to see all the other speakers. Hope to see you there too 🙂
Monitor RAC I/O with gv$iostat_function
I was delivering an 11gR2 New Features course, followed by an accelerated 11gR2 RAC course. That led me to combine the not so widely known new view v$iostat_function with the RAC area 🙂
We can now very easily monitor which DB function is doing I/O on which instance in our RAC DB:
SQL> select inst_id,function_name,
sum(small_read_megabytes+large_read_megabytes) as read_mb,
sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by inst_id,function_name;
INST_ID FUNCTION_NAME READ_MB WRITE_MB
---------- ------------------ ---------- ----------
1 ARCH 0 0
1 Archive Manager 0 0
1 Buffer Cache Reads 610 0
1 DBWR 12 373
1 Data Pump 0 0
1 Direct Reads 1162 1
1 Direct Writes 1 167
1 LGWR 1 346
1 Others 5215 2116
1 RMAN 0 0
1 Recovery 0 0
1 Smart Scan 0 0
1 Streams AQ 1 0
1 XDB 0 0
1 7002 3003
2 ARCH 0 0
2 Archive Manager 0 0
2 Buffer Cache Reads 187 0
2 DBWR 11 520
2 Data Pump 0 0
2 Direct Reads 6 0
2 Direct Writes 0 0
2 LGWR 1 299
2 Others 3898 1030
2 RMAN 0 0
2 Recovery 0 0
2 Smart Scan 0 0
2 Streams AQ 1 0
2 XDB 0 0
2 4104 1849
3 ARCH 0 0
3 Archive Manager 0 0
3 Buffer Cache Reads 131 0
3 DBWR 2 79
3 Data Pump 0 0
3 Direct Reads 0 0
3 Direct Writes 0 0
3 LGWR 0 58
3 Others 1140 269
3 RMAN 0 0
3 Recovery 0 0
3 Smart Scan 0 0
3 Streams AQ 0 0
3 XDB 0 0
3 1273 406
ARCH 0 0
Archive Manager 0 0
Buffer Cache Reads 928 0
DBWR 25 972
Data Pump 0 0
Direct Reads 1168 1
Direct Writes 1 167
LGWR 2 703
Others 10253 3415
RMAN 0 0
Recovery 0 0
Smart Scan 0 0
Streams AQ 2 0
XDB 0 0
12379 5258
60 rows selected.
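The rows above with an empty FUNCTION_NAME or an empty INST_ID are the subtotals and the grand total that GROUP BY CUBE adds: it aggregates over every subset of the grouping columns. A minimal Python sketch of that logic, using made-up sample figures (not taken from the view) for illustration:

```python
from collections import defaultdict

# Hypothetical per-(instance, function) read figures, standing in for
# a few rows of gv$iostat_function (read_mb only, for brevity).
rows = [
    (1, "Direct Reads", 1162),
    (1, "Others", 5215),
    (2, "Others", 3898),
    (3, "Others", 1140),
]

# GROUP BY CUBE(inst_id, function_name) aggregates over every subset of
# the grouping columns: (inst, fn), (inst), (fn), and the grand total.
# None plays the role of the blank column in the SQL*Plus output.
totals = defaultdict(int)
for inst, fn, mb in rows:
    for key in [(inst, fn), (inst, None), (None, fn), (None, None)]:
        totals[key] += mb

print(totals[(1, None)])     # subtotal for instance 1 -> 6377
print(totals[(None, None)])  # grand total -> 11415
```

This is why the query returns 60 rows on a 3-node cluster: 14 functions plus one subtotal per instance, plus 14 function totals and the grand total.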
This is a 3-node cluster with not much I/O done yet, and especially no Data Pump related I/O. Let's change that:
SQL> create directory dpdir as '+data';

Directory created.

SQL> create directory logdir as '/home/oracle/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@host01 ~]$ mkdir /home/oracle/logdir
[oracle@host01 ~]$ expdp full=y directory=dpdir logfile=logdir:mylog.txt
-- Output not shown --
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:00:33
The dump file was created in the ASM diskgroup DATA here, while the log file was placed in a conventional directory on the local node. When we look into the diskgroup, we see that an alias was created:
ASMCMD> ls data -al
Type Redund Striped Time Sys Name
Y ORCL/
Y cluster01/
N expdat.dmp => +DATA/ORCL/DUMPSET/SYSSYS_EXPORT_FULL_01_74638_1.272.774121815
gv$iostat_function now shows that Data Pump related I/O:
SQL> select inst_id,function_name,
sum(small_read_megabytes+large_read_megabytes) as read_mb,
sum(small_write_megabytes+large_write_megabytes) as write_mb
from gv$iostat_function
group by cube (inst_id,function_name)
order by inst_id,function_name;
INST_ID FUNCTION_NAME READ_MB WRITE_MB
---------- ------------------ ---------- ----------
1 ARCH 0 0
1 Archive Manager 0 0
1 Buffer Cache Reads 770 0
1 DBWR 14 425
1 Data Pump 795 540
1 Direct Reads 1194 1
1 Direct Writes 1 167
1 LGWR 1 451
1 Others 5297 2131
1 RMAN 0 0
1 Recovery 0 0
1 Smart Scan 0 0
1 Streams AQ 1 0
1 XDB 0 0
1 8073 3715
2 ARCH 0 0
2 Archive Manager 0 0
2 Buffer Cache Reads 191 0
2 DBWR 13 541
2 Data Pump 0 0
2 Direct Reads 6 0
2 Direct Writes 0 0
2 LGWR 1 309
2 Others 3955 1044
2 RMAN 0 0
2 Recovery 0 0
2 Smart Scan 0 0
2 Streams AQ 1 0
2 XDB 0 0
2 4167 1894
3 ARCH 0 0
3 Archive Manager 0 0
3 Buffer Cache Reads 142 0
3 DBWR 4 83
3 Data Pump 0 0
3 Direct Reads 0 0
3 Direct Writes 0 0
3 LGWR 0 68
3 Others 1233 283
3 RMAN 0 0
3 Recovery 0 0
3 Smart Scan 0 0
3 Streams AQ 0 0
3 XDB 0 0
3 1379 434
ARCH 0 0
Archive Manager 0 0
Buffer Cache Reads 1103 0
DBWR 31 1049
Data Pump 795 540
Direct Reads 1200 1
Direct Writes 1 167
LGWR 2 828
Others 10485 3458
RMAN 0 0
Recovery 0 0
Smart Scan 0 0
Streams AQ 2 0
XDB 0 0
13619 6043
60 rows selected.
In this case, all the Data Pump I/O was done by instance 1. There is also a relation to Exadata, as you can see from the function Smart Scan 🙂
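Since the view shows cumulative figures, attributing the I/O of a single action to an instance means comparing a snapshot taken before it with one taken after. A hedged Python sketch of that delta logic, with made-up snapshot numbers (not real view output):

```python
# Two snapshots of (inst_id, function_name) -> write_mb, taken before and
# after a Data Pump export. The figures are illustrative only.
before = {(1, "Data Pump"): 0, (2, "Data Pump"): 0, (3, "Data Pump"): 0}
after  = {(1, "Data Pump"): 540, (2, "Data Pump"): 0, (3, "Data Pump"): 0}

# The per-instance delta shows who actually performed the writes.
delta = {key: after[key] - before.get(key, 0) for key in after}

busiest = max(delta, key=delta.get)
print(busiest)  # -> (1, 'Data Pump'): instance 1 did the export I/O
```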
Addendum: The Data Pump export fails if the log file is not redirected out of the ASM diskgroup:
[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir
Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:42:37 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Therefore I used a local directory above, which is less elegant than using an ACFS based directory:
SQL> drop directory logdir;

Directory dropped.

SQL> create directory logdir as '/u01/app/grid/acfsmounts/data_myvol/logdir';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@host01 ~]$ expdp tables=scott.dept directory=dpdir logfile=logdir:mylog.txt

Export: Release 11.2.0.1.0 - Production on Thu Feb 2 17:06:46 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba
-- output not shown --
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  +DATA/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 17:07:26
On Exadata, we may use DBFS instead.
After a Test Failover, make the NEW Primary a Standby again
We may want to test Failover although the Primary is working fine. After the failover has succeeded, we have an OLD Primary and a NEW Primary. There is a well documented way to convert the OLD Primary into a Standby; this procedure is called Reinstate. This posting shows instead how to make the NEW Primary a Standby again, while the OLD Primary keeps running as Primary, with all productive users still connected there. A special case where we may want to do that is after testing Failover to a Snapshot Standby that has no network connection to the Primary.
The initial configuration:
DGMGRL> show configuration

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
The version is still 11.2.0.1 like in the previous posting. I will now failover to physt while prima keeps running. Attention: If there is a productive service started on the NEW Primary, make sure to stop it; otherwise new productive connections will access the NEW Primary! We deliberately cause a Split Brain situation here, with two Primary Databases. This may cause problems in a productive environment and is not recommended.
DGMGRL> failover to physt;
Performing failover NOW, please wait...
Error: ORA-16600: not connected to target standby database for failover
Failed.
Unable to failover
DGMGRL> exit
[oracle@uhesse ~]$ dgmgrl sys/oracle@physt
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> failover to physt;
Performing failover NOW, please wait...
Failover succeeded, new primary is "physt"
The error above shows that we cannot failover while connected to the (still working) Primary; we must connect to the Standby first. Now there are two Primary Databases:
[oracle@uhesse ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Jan 16 10:25:51 2012

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

SQL> connect sys/oracle@prima as sysdba
Connected.
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY
I want to make the NEW Primary a Standby again. Like a Reinstate, this requires Flashback Logs. Both of my databases generate Flashback Logs, so that requirement is met.
SQL> connect sys/oracle@physt as sysdba
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 313159680 bytes
Fixed Size 2212936 bytes
Variable Size 264244152 bytes
Database Buffers 41943040 bytes
Redo Buffers 4759552 bytes
Database mounted.
SQL> flashback database to before resetlogs;
Flashback complete.
SQL> alter database convert to physical standby;
Database altered.
This modifies the controlfile and leaves the instance in NOMOUNT state. We need to restart into MOUNT:
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 313159680 bytes
Fixed Size 2212936 bytes
Variable Size 264244152 bytes
Database Buffers 41943040 bytes
Redo Buffers 4759552 bytes
Database mounted.
SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL> select status,sequence# from v$managed_standby where process='MRP0';
STATUS SEQUENCE#
------------ ----------
WAIT_FOR_LOG 12
We want to see APPLYING_LOG here, but the redo is not yet transmitted from the OLD Primary.
SQL> connect sys/oracle@prima as sysdba
Connected.
SQL> select sequence# from v$log where status='CURRENT';
SEQUENCE#
----------
13
SQL> alter system set log_archive_dest_2='service=physt db_unique_name=physt';
System altered.
SQL> alter system set log_archive_dest_state_2=enable;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> connect sys/oracle@physt as sysdba
Connected.
SQL> select status,sequence# from v$managed_standby where process='MRP0';
STATUS SEQUENCE#
------------ ----------
APPLYING_LOG 15
Everything is fine now on the Database Layer: OLD Primary is still Primary, NEW Primary is again Standby, applying Redo from the OLD Primary. Only the Data Guard Broker is confused now – we need to create a new configuration:
[oracle@uhesse ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;
ORA-16795: the standby database needs to be re-created
Configuration details cannot be determined by DGMGRL
[oracle@uhesse ~]$ sqlplus sys/oracle@physt as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Jan 16 10:46:13 2012
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> alter system set dg_broker_start=false;
System altered.
SQL> connect sys/oracle@prima as sysdba
Connected.
SQL> alter system set dg_broker_start=false;
System altered.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@uhesse ~]$ rm $ORACLE_HOME/dbs/dr*
This deletes the Broker config files. Now we create a new Broker configuration:
[oracle@uhesse ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Jan 16 10:48:22 2012

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system set dg_broker_start=true;

System altered.

SQL> connect sys/oracle@prima as sysdba
Connected.
SQL> alter system set dg_broker_start=true;

System altered.

[oracle@uhesse ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> create configuration mycon as primary database is prima connect identifier is prima;
Configuration "mycon" created with primary database "prima"
DGMGRL> add database physt as connect identifier is physt maintained as physical;
Database "physt" added
DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration;

Configuration - mycon

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
That was it 🙂
