Drop an ASM Disk that contains a Voting Disk?

That was a question I got during the Oracle 11gR2 RAC accelerated course I am currently teaching in Duesseldorf: What happens if we drop an ASM Disk that contains a Voting Disk? My answer was: "I suppose that is not allowed." But my motto is "Don't believe it, test it!", and that is what I did. That is actually one of the good things about doing a course at Oracle University: In our course environment, we can simply check things out without affecting critical production systems:

[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   48d3710843274f88bf1eb9b3b5129a7d (ORCL:ASMDISK01) [DATA]
 2. ONLINE   354cfa8376364fd2bfaa1921534fe23b (ORCL:ASMDISK02) [DATA]
 3. ONLINE   762ad94a98554fdcbf4ba5130ac0384c (ORCL:ASMDISK03) [DATA]
Located 3 voting disk(s).

We are on 11.2.0.1 here. The Voting Disk being part of an ASM Diskgroup was an 11gR2 New Feature that I already introduced in an earlier posting. Now let's try to drop ASMDISK01:

[grid@host01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jun 13 17:18:21 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> select name,group_number from v$asm_diskgroup;

NAME                           GROUP_NUMBER
------------------------------ ------------
DATA                                      1
ACFS                                      2
FRA                                       3

SQL> select name from v$asm_disk where group_number=1;

NAME
------------------------------
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04

SQL> alter diskgroup data drop disk 'ASMDISK01';

Diskgroup altered.

It just did it, without any error message! Let's look further:

SQL> select name from v$asm_disk where group_number=1;

NAME
------------------------------
ASMDISK02
ASMDISK03
ASMDISK04

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   354cfa8376364fd2bfaa1921534fe23b (ORCL:ASMDISK02) [DATA]
 2. ONLINE   762ad94a98554fdcbf4ba5130ac0384c (ORCL:ASMDISK03) [DATA]
 3. ONLINE   3f0bf16b6eb64f3cbf440a3c2f0da2fd (ORCL:ASMDISK04) [DATA]
Located 3 voting disk(s).

It just moved the Voting Disk silently to another ASM Disk of that Diskgroup. When I then try to drop another ASM Disk from that Diskgroup, the command seems to be silently ignored, because 3 ASM Disks are required here to keep the 3 Voting Disks.
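
If you want to verify that yourself, here is a minimal sketch (these commands are not from the original session, but they reuse the disk names from the listing above): issue the drop, then check whether the disk is actually gone:

SQL> alter diskgroup data drop disk 'ASMDISK02';
SQL> select name from v$asm_disk where group_number=1;
SQL> exit
[grid@host01 ~]$ crsctl query css votedisk

We see similar behavior with External Redundancy: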

[grid@host01 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      9788     9645                0            9645              0             N  ACFS/
MOUNTED  NORMAL  N         512   4096  1048576      7341     6431              438            2996              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      4894     4755                0            4755              0             N  FRA/

Incidentally, it is a bug in 11.2.0.1 that the Voting_files flag does not show Y for the DATA Diskgroup here.
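
As a cross-check from inside ASM: in 11.2, V$ASM_DISKGROUP exposes a VOTING_FILES column that should show the same information. A sketch (not taken from the original session):

SQL> select name, voting_files from v$asm_diskgroup;

I will now move the Voting Disk to the FRA Diskgroup: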

[grid@host01 ~]$ sudo crsctl replace votedisk +FRA
Successful addition of voting disk 4d586fbecf664f8abf01d272a354fa67.
Successful deletion of voting disk 354cfa8376364fd2bfaa1921534fe23b.
Successful deletion of voting disk 762ad94a98554fdcbf4ba5130ac0384c.
Successful deletion of voting disk 3f0bf16b6eb64f3cbf440a3c2f0da2fd.
Successfully replaced voting disk group with +FRA.
CRS-4266: Voting file(s) successfully replaced
[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4d586fbecf664f8abf01d272a354fa67 (ORCL:ASMDISK10) [FRA]
Located 1 voting disk(s).
[grid@host01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jun 13 17:36:06 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup fra drop disk 'ASMDISK10';

Diskgroup altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0b051cf6e6a14ff1bf31ef7bc66098e0 (ORCL:ASMDISK11) [FRA]
Located 1 voting disk(s).

Not sure whether I would dare to do all that on a production system, though 🙂

Conclusion: We can drop ASM Disks that contain Voting Disks as long as there are enough Disks left in the Diskgroup to retain the same number of Voting Disks (each inside a separate Failure Group) afterwards. Apparently, at least. But: "Don't believe it, test it!"
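
If you want to confirm the Failure Group placement yourself, V$ASM_DISK exposes FAILGROUP and VOTING_FILE columns in 11.2. A sketch (not from the original session), assuming DATA is still Diskgroup number 1:

SQL> select name, failgroup, voting_file from v$asm_disk where group_number=1;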


  1. #1 by Ryan on October 11, 2012 - 22:20

    Awesome find! Thanks for writing this up!

    I had a question I was hoping you could indulge. I am fairly new to Oracle and have been working on standing up a Cold Failover 11g R2 Grid environment. I made a separate diskgroup for the grid install (named CRS) and chose normal redundancy using 3 drives.

    I had one of my drives go "missing". Everything (my two failover DBs along with my DATA and FRA diskgroups) still functioned, but when I tried to bring the missing drive online, I was told to change the compatibility (which I did) and was never able to bring the drive back online successfully. I was never able to add another drive either.

    Would you be so kind as to humor me by providing some insight into how to go about fixing or preventing this? I have scrapped my install and decided to stick with a minimum of 4 drives for my CRS diskgroup, much like how you used 4 drives in your DATA diskgroup and were able to drop one without any headaches.

    Is this overkill? Am I just missing something fundamental?

    Regards,

    Ryan

  2. #2 by Uwe Hesse on October 12, 2012 - 09:04

    Ryan,
    your first stop for problems like these should be Oracle Support.
    If you have no account there, try OTN forums like http://forums.oracle.com/forums/forum.jspa?forumID=62

    To your question about overkill: I would consider it a Best Practice to have at least 4 drives for a diskgroup with normal redundancy that contains voting disks, exactly because of this situation: One disk may fail.

    Now when you still have 3 disks available, the clusterware will create a new voting disk on one of them (after the failed ASM disk has been dropped automatically), and redundancy is still maintained.
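
    To make that concrete, here is a sketch of such a diskgroup creation (the disk names are hypothetical; adapt them to your environment):

    SQL> create diskgroup crs normal redundancy
         disk 'ORCL:CRSDISK1', 'ORCL:CRSDISK2', 'ORCL:CRSDISK3', 'ORCL:CRSDISK4'
         attribute 'compatible.asm'='11.2';

    Each disk gets its own Failure Group by default, so the clusterware can still keep 3 voting disks in 3 separate Failure Groups after one of the 4 disks has failed.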

    Kind regards
    Uwe

  3. #3 by Satish on June 2, 2014 - 10:34

    Hi Uwe,

    As always, your illustration of concepts is at its best. I salute the teacher in you again.

    May I request your suggestion on relocating the standard NFS Votedisk in an Extended RAC Cluster configuration?

    Briefly, here are the configuration details:

    We have an Extended RAC Cluster set up using Oracle 11.2.0.3.0 Standard Edition on RHEL 5.9 (64 bit).
    We have stored the OCR and Votedisk in an ASM diskgroup named OCRVOTDG.
    This diskgroup, OCRVOTDG, is of NORMAL redundancy and contains 5 ASM disks, though 3 of them are lying spare. The reason is that 1 disk each from the PDC and SDC site storage is used, leaving the remaining 2 unused; these 2 are from PDC.
    The third votedisk, needed to maintain normal redundancy of OCRVOTDG, is a standard NFS mount on a Linux server at location A.

    Now we are planning to build an NFS mount server in a different location, B, and to move the existing NFS Votedisk to that yet-to-be-built new NFS server.

    In connection with the information in this post: If I delete the existing NFS Votedisk, will Oracle silently build one more votedisk, because we have 2 spare disks in the diskgroup?

    If yes, then I would just mount the new NFS disk in the new location B to both nodes in the Extended RAC and add it as a quorum disk.

    If no, should I then create a normal redundancy dummy diskgroup and replace the existing votedisks into that dummy group, thereby effectively removing the existing NFS Votedisk, and afterwards add the NFS Votedisk from location B again?

    May I please know your thoughts and suggestions on this?

    Satish

  4. #4 by Uwe Hesse on June 3, 2014 - 13:18

    Hi Satish, I think you should be able to add another quorum failgroup to OCRVOTDG at the new location and then delete the first quorum failgroup that contains the 3rd voting disk. But I did not test that, so you had better talk to our support about it first.
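
    An untested sketch of what I mean (the failgroup names and the NFS path are made up; please verify the exact syntax and compatibility requirements with support first):

    SQL> alter diskgroup ocrvotdg add quorum failgroup nfs_loc_b disk '/nfs_b/vote_ocrvotdg';
    SQL> alter diskgroup ocrvotdg drop disks in failgroup nfs_loc_a;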

  5. #5 by Satish on June 4, 2014 - 04:20

    Hi Uwe,

    Thanks for the reply. I shall certainly ask Oracle Support about this.
    I am of the understanding that when votedisks are stored in a Normal Redundancy ASM diskgroup, we can have only ONE NFS-mounted quorum disk.
    As the system is well within the confines of security and change management, I am unable to test.
    I will keep this post updated as and when I make progress.
    Thanks again for your thoughts.

    Satish
