How to configure Quorum devices on Solaris cluster?

June 30, 2014 By Cloud_Devops

Once you have configured the Solaris cluster, you have to add a quorum device to provide an additional vote. Without a quorum device, the cluster stays in installed mode; you can verify this with the "cluster show -t global | grep installmode" command. Each node in a configured cluster has one (1) quorum vote, and the cluster requires a minimum of two votes to stay up. If one node goes down, the cluster can no longer gather two votes, and it will panic the second node as well to avoid data corruption on the shared storage. To avoid this situation, we can configure a small SAN disk as a quorum device, which contributes one more vote. That way, if one node fails, the surviving node of a two-node cluster can still reach two votes.
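
For a two-node cluster the arithmetic is: two node votes plus one quorum device vote gives three possible votes, and a majority of two keeps the surviving node up. A quick sketch of the checks, re-using commands that appear in the steps below:

# Vote math: 2 node votes + 1 quorum device vote = 3 possible, majority (2) needed
UASOL1:#cluster show -t global | grep installmode
UASOL1:#clq status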

Once you have configured the two-node Solaris cluster, you can start configuring the quorum device.

1. Check the cluster node status.

UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name                                       Status
---------                                       ------
UASOL2                                          Online
UASOL1                                          Online
UASOL1:#

2. You can see that the cluster is currently in install mode.

# cluster show -t global | grep installmode
  installmode:                                     enabled

3. Check the current cluster quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
            Needed   Present   Possible
            ------   -------   --------
            1        1         1
--- Quorum Votes by Node (current status) ---
Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          1             1              Online
UASOL1          0             0              Online
UASOL1:#
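
If you need more detail than the summary above, the clquorum command also offers list and show sub-commands. A short sketch (output omitted here since it will differ per cluster):

UASOL1:#clquorum list -v
UASOL1:#clquorum show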

4. Make sure a small LUN from the SAN is assigned to both cluster nodes.

UASOL1:#echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <VMware,-VMware Virtual -1.0  cyl 1824 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL1:#

5. Label the disk and set a volume name on it.

UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]
FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> fdisk
The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
y
format> volname quorum
format> quit
UASOL1:#
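
Before handing the disk to the cluster framework, you can read the label back to confirm it was written. A minimal sketch using prtvtoc, assuming slice 2 is the usual whole-disk slice:

UASOL1:#prtvtoc /dev/rdsk/c1t1d0s2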

6. You can see the same LUN on the UASOL2 node as well.

UASOL2:#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <VMware,-VMware Virtual -1.0  cyl 1824 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32>  quorum
          /pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#

7. Populate the devices in Solaris Cluster.

UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#

UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#
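
Both nodes must agree on the DID mapping, so it is worth cross-checking it before adding the quorum device. A short sketch using the legacy scdidadm listing and the cldevice consistency check, run from either node:

UASOL1:#scdidadm -L
UASOL1:#cldevice check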

8. Check the device status.

UASOL1:#cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  UASOL2:/dev/rdsk/c1t0d0
d1                  UASOL1:/dev/rdsk/c1t0d0
d4                  UASOL2:/dev/rdsk/c1t1d0
d4                  UASOL1:/dev/rdsk/c1t1d0
UASOL1:#cldev show d4
=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                                UASOL1:/dev/rdsk/c1t1d0
  Full Device Path:                                UASOL2:/dev/rdsk/c1t1d0
  Replication:                                     none
  default_fencing:                                 global
UASOL1:#

9. Add d4 as a quorum device in the cluster.

UASOL1:#clquorum add d4
UASOL1:#
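
On a two-node cluster, adding the first quorum device normally takes the cluster out of install mode automatically. A quick sketch to confirm, re-using the check from step 2 (if it still showed enabled, clsetup or clquorum reset could be used to clear it, but verify that against your release's documentation first):

UASOL1:#cluster show -t global | grep installmode
# expected to report "installmode: disabled" once the quorum device is in place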

10. Check the quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
            Needed   Present   Possible
            ------   -------   --------
            2        3         3
--- Quorum Votes by Node (current status) ---
Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          1             1              Online
UASOL1          1             1              Online
--- Quorum Votes by Device (current status) ---
Device Name       Present      Possible      Status
-----------       -------      --------      ------
d4                1            1             Online
UASOL1:#

We have successfully configured the quorum device on a two-node Solaris Cluster 3.3u2.

How can we test whether the quorum device is working?

Just reboot one of the nodes and check the voting status.

UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
            Needed   Present   Possible
            ------   -------   --------
            2        2         3
--- Quorum Votes by Node (current status) ---
Node Name       Present       Possible       Status
---------       -------       --------       ------
UASOL2          0             1              Offline
UASOL1          1             1              Online
--- Quorum Votes by Device (current status) ---
Device Name       Present      Possible      Status
-----------       -------      --------      ------
d4                1            1             Online
UASOL1:#
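
Once UASOL2 finishes booting and rejoins the cluster, its vote should come back automatically. A quick re-check using the same commands as steps 1 and 10:

UASOL1:#clnode status
UASOL1:#clq status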

We can see that UASOL1 was not panicked by the cluster, so the quorum device is working as expected.

If you don’t have real SAN storage for the shared LUN, you can use Openfiler.
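
Openfiler presents the shared LUN over iSCSI, so the Solaris initiator has to discover it before the disk shows up in format. A minimal sketch, assuming 192.168.1.50:3260 is a hypothetical Openfiler portal address; run it on both nodes:

UASOL1:#svcadm enable network/iscsi/initiator
UASOL1:#iscsiadm add discovery-address 192.168.1.50:3260
UASOL1:#iscsiadm modify discovery --sendtargets enable
UASOL1:#devfsadm -i iscsi
UASOL1:#echo | format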

What’s next? We will configure a resource group for a failover local zone and perform the test.

Share it ! Comment it !! Be Sociable !!

Filed Under: Solaris Cluster Tagged With: Solaris Cluster


Comments

  1. Biman Chandra Roy says

    September 17, 2019 at 8:43 pm

    Hi Lingesh,
    I am just experimenting with a 2-node cluster in VirtualBox. I can share storage with the 2 VMs (Sol11) and can see the same disk, even its label, in both VMs. Even though the quorum is set as per the command output below, each node panics when the other node goes down, and both panic when a 2nd quorum disk is added.

    Why so?

    Regards

    === Cluster Quorum ===

    --- Quorum Votes Summary from (latest node reconfiguration) ---

                Needed   Present   Possible
                ------   -------   --------
                2        2         3

    --- Quorum Votes by Node (current status) ---

    Node Name       Present       Possible       Status
    ---------       -------       --------       ------
    sol4            1             1              Online
    sol3            1             1              Online

    --- Quorum Votes by Device (current status) ---

    Device Name       Present      Possible      Status
    -----------       -------      --------      ------
    d3                1            1             Online

  2. Giri says

    February 24, 2016 at 11:50 am

    Worth Full notes…

  3. srinivas says

    December 31, 2015 at 3:38 pm

    I added the quorum device as per your article, but it is in an offline state.

    I tried to enable it, removed it and added it again, but it remains in the same state.

    Can you suggest how to bring it online?

    • Lingeswaran R says

      December 31, 2015 at 10:04 pm

      Is your quorum device coming from SAN storage? Is that disk SCSI-3 persistent reservation compliant?

      Lingesh

      • srinivas says

        January 3, 2016 at 12:16 pm

        I configured cluster 3.3 in my Oracle VirtualBox.
        I created an HDD with the (multi-attached) option and added it as a quorum device, but clq status shows it offline.

        • Lingeswaran R says

          January 4, 2016 at 5:27 pm

          That will not work… You must have SAN storage or iSCSI LUNs with SCSI-3 PR. (Just google for SCSI-3 PR.)

          Regards
          Lingesh

  4. poobalan says

    March 23, 2015 at 10:18 am

    When I add the quorum disk in Sun Cluster, it throws an I/O error. The shared LUN is mapped to both nodes from Openfiler.
