Posts tagged with 11g New Features

Real-Time Query for Read-Mostly Applications

Here is a nice little demonstration from my last 11gR2 Data Guard class that I'd like to share with the Oracle Community. It is about enabling Read-Mostly Applications to run on a Physical Standby database, leveraging the 11g New Feature Real-Time Query (part of the Active Data Guard Option). One of the major 11g New Features is the ability to run Reports anytime on our Physical Standby while it is doing Redo Apply. I have already posted about that here. Now what if we have an Application that mostly reads but also needs to write a small amount? At first sight, that is a problem:

DGMGRL> show configuration

Configuration - myconf

 Protection Mode: MaxAvailability
 Databases:
 prima - Primary database
 physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database physt

Database - physt

 Role:            PHYSICAL STANDBY
 Intended State:  APPLY-ON
 Transport Lag:   0 seconds
 Apply Lag:       0 seconds
 Real Time Query: ON
 Instance(s):
 physt

Database Status:
SUCCESS

This is my (11gR2) Data Guard Configuration. Now for my "Read-Mostly Application" – enter Scott:

SYS@prima > create user scott identified by tiger;

User created.
SYS@prima > alter user scott quota unlimited on users;

User altered.

SYS@prima > grant create session,create table to scott;

Grant succeeded.
SYS@prima > connect scott/tiger@prima

Connected.
SCOTT@prima>  CREATE TABLE DEPT
 (DEPTNO NUMBER(2),
 DNAME VARCHAR2(14),
 LOC VARCHAR2(13) );

INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK');
INSERT INTO DEPT VALUES (20, 'RESEARCH',   'DALLAS');
INSERT INTO DEPT VALUES (30, 'SALES',      'CHICAGO');
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON');
commit;
Commit complete.

SCOTT@prima > create table flag (database_role varchar2(30), start_date date, end_date date);

Table created.

My Read-Mostly Application inserts into the flag table, then runs a "huge" report and inserts into the flag table again:

SCOTT@prima > @report

PL/SQL procedure successfully completed.


1 row created.


Commit complete.


 DEPTNO DNAME          LOC
---------- -------------- -------------
 10 ACCOUNTING     NEW YORK
 20 RESEARCH       DALLAS
 30 SALES          CHICAGO
 40 OPERATIONS     BOSTON


1 row created.


Commit complete.

The content of the report.sql script:

SCOTT@prima > host cat report.sql
vari c char(30)
exec select SYS_CONTEXT('USERENV','DATABASE_ROLE') into :c from dual;
insert into flag values(:c, sysdate,null);
commit;
select * from dept;
insert into flag values(:c, null, sysdate);
commit;

Right now, this cannot run on the Physical Standby because of the insert commands:

SCOTT@prima > connect scott/tiger@physt

Connected.

SCOTT@physt > @report

PL/SQL procedure successfully completed.

insert into flag values(:c, sysdate,null)
 *
ERROR at line 1:
ORA-16000: database open for read-only access

Commit complete.

 DEPTNO DNAME          LOC
---------- -------------- -------------
 10 ACCOUNTING     NEW YORK
 20 RESEARCH       DALLAS
 30 SALES          CHICAGO
 40 OPERATIONS     BOSTON

insert into flag values(:c, null, sysdate)
 *
ERROR at line 1:
ORA-16000: database open for read-only access

Commit complete.

We need to enable that in an application-transparent way. Therefore, we create a dummy user to switch to silently when Scott accesses the Physical Standby:

SYS@prima > grant create session to rmostly identified by rmostly;

Grant succeeded.

SYS@prima > grant select on scott.dept to rmostly;

Grant succeeded.

SYS@prima > grant all on scott.flag to rmostly;

Grant succeeded.

SYS@prima > grant create synonym to rmostly;

Grant succeeded.

The user rmostly now gets synonyms with the same names as the original Application's tables. The table that needs to receive inserts is reached via a database link that points to the Primary – so it gets modified there.

SYS@prima > create public database link prod connect to scott identified by tiger using 'prod';

Database link created.

SYS@prima > connect rmostly/rmostly@prima

Connected.

RMOSTLY@prima > create synonym dept for scott.dept;

Synonym created.

RMOSTLY@prima > create synonym flag for scott.flag@prod;

Synonym created.

The connect descriptor prod used here should always lead to the Primary Database, even after Role Changes. We will discuss that in more detail later on. Next, we need a Logon Trigger that silently switches to the rmostly schema when Scott connects to the Physical Standby:

SYS@prima > CREATE or replace TRIGGER switch_schema_trigger
AFTER LOGON ON scott.schema
BEGIN
  IF (SYS_CONTEXT('USERENV','DATABASE_ROLE')
      IN ('PHYSICAL STANDBY'))
  THEN
    execute immediate
      'alter session set current_schema = rmostly';
  END IF;
END;
/
Trigger created.

Also, we need a connect descriptor that always connects Scott to the Physical Standby regardless of Role Changes. Now we are going to set up this part:

SYS@prima > exec dbms_service.create_service('prod','prod')

PL/SQL procedure successfully completed.

SYS@prima > exec dbms_service.create_service('rmostly','rmostly')

PL/SQL procedure successfully completed.

SYS@prima > exec dbms_service.start_service('prod')

PL/SQL procedure successfully completed.

SYS@prima > @physt
Connected.
SYS@physt > exec dbms_service.start_service('rmostly')

PL/SQL procedure successfully completed.
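
A quick way to verify where each service currently runs is to query V$ACTIVE_SERVICES on both instances (a small sketch, not part of the original demo):

```sql
-- Run on both prima and physt: lists the services started on this instance
select name from v$active_services order by name;
```

On the Primary we would expect to see prod among the results, and rmostly only on the open Physical Standby.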

Please notice that the rmostly service was started on the Physical Standby. For now, the services are on the appropriate Database, but the following trigger ensures that this will still be the case after Role Changes such as a switchover:

SYS@prima > CREATE or replace TRIGGER service_management AFTER STARTUP ON DATABASE
DECLARE
  VROLE      VARCHAR2(30);
  VOPEN_MODE VARCHAR2(30);
BEGIN
  SELECT DATABASE_ROLE INTO VROLE FROM V$DATABASE;
  SELECT OPEN_MODE INTO VOPEN_MODE FROM V$DATABASE;
  IF VROLE = 'PRIMARY' THEN
    DBMS_SERVICE.START_SERVICE('PROD');
    DBMS_SERVICE.STOP_SERVICE('RMOSTLY');
  ELSIF VROLE = 'PHYSICAL STANDBY' THEN
    IF VOPEN_MODE LIKE 'READ ONLY%' THEN
      DBMS_SERVICE.START_SERVICE('RMOSTLY');
    END IF;
    DBMS_SERVICE.STOP_SERVICE('PROD');
  END IF;
END;
/
Trigger created.

The tnsnames.ora used by the Application Clients looks like this:

$ cat $ORACLE_HOME/network/admin/tnsnames.ora
PROD =
 (DESCRIPTION =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse1)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse2)(PORT = 1521))
 )
 (CONNECT_DATA =
 (SERVER = DEDICATED)
 (SERVICE_NAME = prod)
 )
 )

RMOSTLY =
 (DESCRIPTION =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse1)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = uhesse2)(PORT = 1521))
 )
 (CONNECT_DATA =
 (SERVER = DEDICATED)
 (SERVICE_NAME = rmostly)
 )
 )
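
To double-check which service and role a session actually ended up with, SYS_CONTEXT can be queried after connecting (a small sketch using the same USERENV parameters as report.sql):

```sql
-- After connecting via the RMOSTLY alias:
select sys_context('userenv','service_name')  as service_name,
       sys_context('userenv','database_role') as database_role
from dual;
```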

In effect, an Application Client connecting to the Physical Standby with RMOSTLY can insert into the Primary table while selecting from the Physical Standby. We assume that the main part of that Application is doing selects, so we have still offloaded the major work from the Primary to the Physical Standby:

SCOTT@rmostly > alter session set nls_date_format='hh24:mi:ss';

Session altered.

SCOTT@rmostly > @report

PL/SQL procedure successfully completed.


1 row created.


Commit complete.


 DEPTNO DNAME          LOC
---------- -------------- -------------
 10 ACCOUNTING     NEW YORK
 20 RESEARCH       DALLAS
 30 SALES          CHICAGO
 40 OPERATIONS     BOSTON


1 row created.


Commit complete.

SCOTT@rmostly > select * from flag;

DATABASE_ROLE                  START_DA END_DATE
------------------------------ -------- --------
PRIMARY                        14:24:35
PRIMARY                                 14:24:35
PHYSICAL STANDBY               14:53:21
PHYSICAL STANDBY                        14:53:21

By the way, the SQL prompt changes accordingly because of this little login.sql:

SCOTT@rmostly > host cat login.sql
set lines 800
set pages 300
column name format a50
column file_name format a50
column client_info format a10
column machine format a20
column MASTER_LINK format a10
column parameters format a10
column def_parameters format a10
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER > "

Conclusion: We can even offload Read-Mostly Applications to Physical Standby Databases in 11g with little effort!



Advert: I will present at the DOAG Conference about 11g Data Guard New Features

The German Oracle User Group (DOAG) is staging its annual Conference. I will give a presentation there on 19-NOV-2010 about 11g Data Guard New Features. It will be in German and cover

  • Creation of Physical Standby Database without previous Backup
  • Embedding the Primary & Standby into Oracle Restart
  • Real-Time Query (Using Physical Standby for Reporting while Redo Apply is on)
  • Snapshot Standby (Using Physical Standby for Testing while still receiving Redo from the Primary)
  • Fast-Start Failover in Maximum Performance Mode

Every Topic will be accompanied by Live Demonstrations on the Command Line. You may enroll online here.



Voting Disk and OCR in 11gR2: Some changes

Having just delivered an Oracle Database 11gR2 RAC Admin course, I'd like to point out some remarkable changes in the way we now handle the important Clusterware components Voting Disk and Oracle Cluster Registry (OCR): Amazingly, we can now store the two inside of an Automatic Storage Management (ASM) Disk Group, which was not possible in 10g.

The OCR is striped and mirrored (if we have a redundancy other than external), similar to ordinary Database Files. So we can now leverage the mirroring capabilities of ASM to mirror the OCR as well, without having to use multiple RAW devices for that purpose only. The Voting Disk (or Voting File, as it is now also referred to) is not striped but put as a whole on ASM Disks – if we use normal redundancy on the Diskgroup, three Voting Files are placed, each on one ASM Disk in a different failgroup. Therefore, you need at least three failgroups for that Diskgroup, like in this picture:

[Figure: ASM Diskgroup that contains Voting Files and OCR]

This is a concern if our ASM Diskgroup consists of only two ASM Disks, i.e. only two failgroups, as with Extended RAC! Therefore, the new quorum failgroup clause was introduced:

create diskgroup data normal redundancy
 failgroup fg1 disk 'ORCL:ASMDISK1'
 failgroup fg2 disk 'ORCL:ASMDISK2'
 quorum failgroup fg3 disk 'ORCL:ASMDISK3'
 attribute 'compatible.asm' = '11.2.0.0.0';

The failgroup fg3 above needs only one small Disk (300 MB should be on the safe side here, since the Voting File is only about 280 MB in size) to keep one mirror of the Voting File. fg1 and fg2 will each contain one Voting File plus all the other stripes of the Database Area, but fg3 will only get that one Voting File.
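
To see which Disk landed in which failgroup together with its size, V$ASM_DISK can be queried from the ASM instance (a small sketch; the Diskgroup name DATA is taken from the example above):

```sql
-- From the ASM instance: disks of diskgroup DATA with failgroup and size
select d.name, d.failgroup, d.total_mb
from   v$asm_disk d join v$asm_diskgroup g
       on d.group_number = g.group_number
where  g.name = 'DATA';
```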

[root@uhesse1 ~]#  /u01/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   511de6e64e354f9bbf4be318fc928c28 (ORCL:ASMDISK1) [DATA]
 2. ONLINE   2f1973ed4be84f50bffc2475949b428f (ORCL:ASMDISK2) [DATA]
 3. ONLINE   5ed44fb7e79c4f79bfaf09b402ba70df (ORCL:ASMDISK3) [DATA]

Another important change regarding the Voting File is that it is no longer supported to take a manual backup of it with dd. Instead, the Voting File gets backed up automatically into the OCR. As a New Feature, you can now take a manual backup of the OCR any time you like, without having to wait until it is done automatically – which also still happens:

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup

uhesse1     2010/10/06 09:37:30     /u01/app/11.2.0/grid/cdata/cluhesse/backup00.ocr
uhesse1     2010/10/06 05:37:29     /u01/app/11.2.0/grid/cdata/cluhesse/backup01.ocr
uhesse1     2010/10/06 01:37:27     /u01/app/11.2.0/grid/cdata/cluhesse/backup02.ocr
uhesse1     2010/10/05 01:37:21     /u01/app/11.2.0/grid/cdata/cluhesse/day.ocr
uhesse1     2010/10/04 13:37:19     /u01/app/11.2.0/grid/cdata/cluhesse/week.ocr

Above are the automatic backups of the OCR as in earlier versions. Now the manual backup:

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
uhesse1     2010/10/06 13:07:03     /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr

I got a manual backup in the default location on my master node. We can define another backup location for the automatic backups as well as for the manual backups – preferably on a Shared Device that is accessible by all the nodes (which is not the case with /home/oracle, unfortunately 🙂 ):

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -backuploc /home/oracle
[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
uhesse1     2010/10/06 13:10:50     /home/oracle/backup_20101006_131050.ocr
uhesse1     2010/10/06 13:07:03     /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup
uhesse1     2010/10/06 09:37:30     /u01/app/11.2.0/grid/cdata/cluhesse/backup00.ocr
uhesse1     2010/10/06 05:37:29     /u01/app/11.2.0/grid/cdata/cluhesse/backup01.ocr
uhesse1     2010/10/06 01:37:27     /u01/app/11.2.0/grid/cdata/cluhesse/backup02.ocr
uhesse1     2010/10/05 01:37:21     /u01/app/11.2.0/grid/cdata/cluhesse/day.ocr
uhesse1     2010/10/04 13:37:19     /u01/app/11.2.0/grid/cdata/cluhesse/week.ocr
uhesse1     2010/10/06 13:10:50     /home/oracle/backup_20101006_131050.ocr
uhesse1     2010/10/06 13:07:03     /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr

Conclusion: The way we handle Voting Disk and OCR has changed significantly – in particular, they can now be kept inside an ASM Diskgroup.

