How to configure a High Availability zone on Solaris Cluster?

July 1, 2014 By Cloud_Devops

In this article, we will see how to add a local zone as a resource in Solaris Cluster to make the zone highly available. In the past, we have seen a similar setup in Veritas Cluster (Flying zone on Solaris). By configuring the zone as a resource, if one node fails, the zone automatically moves to the other node with minimal downtime. Once you have the following in place, we can proceed with bringing the local zone under Solaris Cluster control.

  • A two-node Solaris cluster
  • Quorum device(s)
  • A configured resource group

Unlike Veritas Cluster, the local zone's IP is managed from the global zone as a cluster resource. So let me create the IP resource before proceeding with the local zone creation.
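
The steps below assume that the failover resource group UA-HA-ZRG already exists. If it does not yet exist, a minimal sketch of creating it looks like this (the node list is optional and is shown only for illustration):

UASOL1:#clresourcegroup create -n UASOL1,UASOL2 UA-HA-ZRG
UASOL1:#clresourcegroup status UA-HA-ZRG
UASOL1:#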

1. Log in to both Solaris cluster nodes and add the local zone's IP and hostname to the /etc/hosts file.
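
For example, the entry can be appended on both nodes like this (the IP address and hostname are the ones used later in this article; adjust them for your environment):

UASOL1:#echo "192.168.2.94    UAHAZ1" >> /etc/hosts
UASOL1:#ssh UASOL2 'echo "192.168.2.94    UAHAZ1" >> /etc/hosts'
UASOL1:#

Verify the entry on both nodes: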

UASOL1:#cat /etc/hosts |grep UAHAZ1
192.168.2.94    UAHAZ1
UASOL1:#ssh UASOl2 grep UAHAZ1 /etc/hosts
Password:
192.168.2.94    UAHAZ1
UASOL1:#

Here, my local zone's IP is 192.168.2.94 and its hostname is UAHAZ1.

2. Add the logical hostname as a resource in Solaris Cluster.

UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1
UASOL1:#
  • Resource group name = -g UA-HA-ZRG
  • Local zone hostname = -h UAHAZ1
  • Local zone IP resource name = CLUAHAZ1

3. Check the Solaris cluster resource status.

UASOL1:#clresource status

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
CLUAHAZ1            UASOL2         Online       Online - LogicalHostname online.
                    UASOL1         Offline      Offline

CLUAZPOOL           UASOL2         Online       Online
                    UASOL1         Offline      Offline

UASOL1:#
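
The output above also shows a CLUAZPOOL resource, which keeps the zone's ZFS pool online on the active node. Its creation is not covered in this article; a rough sketch of how such an HAStoragePlus resource is typically created is shown below, assuming the pool is named UAZPOOL (as the zonepath /UAZPOOL/UAHAZ1 suggests):

UASOL1:#clresourcetype register SUNW.HAStoragePlus
UASOL1:#clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL
UASOL1:#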

4. Test the resource by pinging the local zone's IP.

UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#

5. You can see that the local zone's IP has been plumbed by Solaris Cluster on the node where the resource group is online.

UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
        inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
        groupname sc_ipmp0
        ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
        ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
        ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
        ether 0:0:0:0:0:1
UASOL2:#

6. Fail over the resource group to UASOL1 and check the status.

UASOL2:#clrg switch -n UASOL1 +
UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
UA-HA-ZRG        UASOL2          No             Offline
                 UASOL1          No             Online

UASOL1:#clresource status
=== Cluster Resources ===
Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
CLUAHAZ1            UASOL2         Offline      Offline - LogicalHostname offline.
                    UASOL1         Online       Online - LogicalHostname online.

CLUAZPOOL           UASOL2         Offline      Offline
                    UASOL1         Online       Online
UASOL1:#

We have successfully created the logical hostname cluster resource and tested it on both nodes.

7. Create a local zone on any one of the cluster nodes and copy its /etc/zones/index entry along with the /etc/zones/zonename.xml file to the other node, so that the zone configuration is available on both cluster nodes. Create the local zone without the network part (i.e. skip "add net"), since the zone's IP is managed by the cluster.

UASOL1:#zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - UAHAZ1           installed  /UAZPOOL/UAHAZ1                native   shared
UASOL1:#ssh UASOL2 zoneadm list -cv
Password:
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - UAHAZ1           configured /UAZPOOL/UAHAZ1                native   shared
UASOL1:#

You can refer to this article for creating the local zone, but do not configure the network.
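
A minimal sketch of such a zone configuration, and of copying it to the second node, is shown below. The zone name and zonepath match this article, but the remaining settings are assumptions (autoboot is set to false because the cluster, not the node, boots the zone), and note that there is no "add net" section:

UASOL1:#zonecfg -z UAHAZ1
zonecfg:UAHAZ1> create
zonecfg:UAHAZ1> set zonepath=/UAZPOOL/UAHAZ1
zonecfg:UAHAZ1> set autoboot=false
zonecfg:UAHAZ1> verify
zonecfg:UAHAZ1> commit
zonecfg:UAHAZ1> exit
UASOL1:#zoneadm -z UAHAZ1 install
UASOL1:#scp -p /etc/zones/UAHAZ1.xml UASOL2:/etc/zones/
UASOL1:#

After copying the XML file, append the UAHAZ1 line from /etc/zones/index on UASOL1 to /etc/zones/index on UASOL2, so that the zone shows up there in the "configured" state.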

8. Halt the local zone on UASOL1 and fail over the resource group to UASOL2 to test the zone there.

UASOL1:#zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - UAHAZ1           running    /UAZPOOL/UAHAZ1                native   shared
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#ssh UASOL2
Password:
Last login: Tue Jul  1 00:27:14 2014 from uasol1
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
UASOL2:#clrg status
=== Cluster Resource Groups ===
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
UA-HA-ZRG        UASOL2          No             Online
                 UASOL1          No             Offline
UASOL2:#

9. Attach the local zone on UASOL2 and boot it.

UASOL2:#zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - UAHAZ1           configured /UAZPOOL/UAHAZ1                native   shared
UASOL2:#zoneadm -z UAHAZ1 attach -F
UASOL2:#zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - UAHAZ1           installed  /UAZPOOL/UAHAZ1                native   shared
UASOL2:#zoneadm -z UAHAZ1 boot
UASOL2:#

10. Log in to the local zone and perform a health check. If everything looks fine, halt the local zone.

UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.2# uptime
 12:37am  up  1 user,  load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#

Click Page 2 to see how to create the resource for the local zone and add it to the resource group.
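
As a rough preview of Page 2: the zone itself is usually made highly available with the HA for Solaris Containers agent (SUNWsczone package), which registers a SUNW.gds-based zone boot resource. The outline below is only a sketch under that assumption; the resource name UA-HA-Zone-rs is hypothetical, and the exact parameters in sczbt_config depend on your setup:

UASOL2:#clresourcetype register SUNW.gds
UASOL2:#cd /opt/SUNWsczone/sczbt/util
UASOL2:#vi sczbt_config            (set RS, RG, Zonename, HAS_RS, SC_LH, etc. for UAHAZ1)
UASOL2:#./sczbt_register -f ./sczbt_config
UASOL2:#clresource enable UA-HA-Zone-rs
UASOL2:#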


Filed Under: Solaris Cluster Tagged With: Solaris Cluster


Comments

  1. Gopi R says

    April 14, 2016 at 6:32 pm

    Hi Lingeswaran,

    I am facing a challenge with this setup; below is my scenario:

    1. Solaris 10 – GZ1, GZ2 configured with Sun Cluster 3.3
    2. Created the zonepath and data mountpoint HA resources as zone-boot-hasp-rs, zone-has-rs
    3. When I create zone-ora-rs and zone-lis-rs for automatic DB start during failover of the zone (Solaris 9), the cluster switchover is fine, but the DB gets started in the GZ itself instead of in the Solaris 9 zone.

    My question here: if this is the situation, how can I configure automatic DB start for the Solaris 9 zone?
    I even tried SC_Network=true, but no luck…

    I would really appreciate it if you could help with this…

    Gopi R

  2. Yadunandan Jha says

    February 1, 2015 at 10:31 am

    Hi Lingeswaran,

    I would like to configure an Oracle database in Sun Cluster for HA. Which file system shall I use: QFS, ZFS, or SVM? I don't want to use RAC.

    I have installed Sun Cluster 3.3 on Solaris 10 with ZFS and allocated shared storage space on both systems. Now I would like to create a shared file system so that I can install the DB in the cluster. Which file system shall I use for HA?

    Regards,

    Yadu

    • Lingeswaran R says

      February 1, 2015 at 9:26 pm

      ZFS pools are better than SVM disk sets. ZFS is not a cluster filesystem, but you can use it for a failover service group.

  3. Vikrant Aggarwal says

    July 2, 2014 at 10:55 am

    Good Article.

    Just to add here: zone clusters are now the way to go instead of zone failover. This helps minimize the downtime associated with failing over a non-global zone.

    Thanks
    Vikrant Aggarwal

    • Lingeswaran R says

      July 2, 2014 at 10:57 am

      Yeah, we can make a cluster between zones. Thank you, Vikrant, for your comments.

