In this article, we will see how to add a local zone as a resource in Solaris cluster to make the zone highly available. In the past we have seen a similar setup in Veritas cluster. By configuring the zone as a resource, if one node fails, the zone automatically flies to the other node with minimal downtime (a "flying zone" on Solaris). Once you have configured the items below, we can proceed with bringing the local zone under Solaris cluster control.
Unlike Veritas cluster, the local zone's IP will be managed from the global zone as a cluster resource. So let me create an IP resource before proceeding with the local zone creation.
1. Log in to both Solaris cluster nodes and add the local zone's IP and hostname to the /etc/hosts file.
UASOL1:#cat /etc/hosts |grep UAHAZ1
192.168.2.94    UAHAZ1
UASOL1:#ssh UASOL2 grep UAHAZ1 /etc/hosts
Password:
192.168.2.94    UAHAZ1
UASOL1:#
Here, my local zone IP is 192.168.2.94 and the hostname is UAHAZ1.
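If you want to script the check in step 1, a small sketch follows. The helper function `same_hosts_entry` is hypothetical (not part of Solaris cluster), and the commented ssh step assumes root access between the nodes:

```shell
# Sketch: verify both nodes carry the same /etc/hosts entry for the zone.
same_hosts_entry() {
    # $1 = hostname, $2 = local hosts file, $3 = copy of the peer's file
    a=$(grep -w "$1" "$2")
    b=$(grep -w "$1" "$3")
    # succeed only if the entry exists and is identical on both nodes
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# Usage on UASOL1 (requires ssh access to UASOL2):
# ssh UASOL2 cat /etc/hosts > /tmp/hosts.UASOL2
# same_hosts_entry UAHAZ1 /etc/hosts /tmp/hosts.UASOL2 && echo "in sync"
```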
2. Add the logical hostname as a resource in Solaris cluster.
UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1
UASOL1:#
- Resource group name = -g UA-HA-ZRG
- Logical hostname (here, the local zone's hostname) = -h UAHAZ1
- Logical hostname resource name = CLUAHAZ1
3. Check the Solaris cluster resource status.
UASOL1:#clresource status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
CLUAHAZ1         UASOL2       Online     Online - LogicalHostname online.
                 UASOL1       Offline    Offline

CLUAZPOOL        UASOL2       Online     Online
                 UASOL1       Offline    Offline

UASOL1:#
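If you need to find the online node for a resource from a script, the clresource status output can be parsed. This is a sketch with a hypothetical helper, `online_node`, and the parsing assumes the column layout shown above (resource name on the first line of each block, continuation lines indented):

```shell
# Sketch: print the node where resource $1 is Online, reading
# "clresource status" style output on stdin.
online_node() {
    awk -v r="$1" '
        /^[^ ]/ { blk = $1 }                       # line starting a new block
        blk == r {
            if ($1 == r) { node = $2; st = $3 }    # first line of the block
            else         { node = $1; st = $2 }    # indented continuation line
            if (st == "Online") { print node; exit }
        }'
}

# Usage:
# clresource status | online_node CLUAHAZ1
```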
4. Test the resource by pinging the local zone IP.
UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#
5. You can see that the local zone IP has been plumbed by Solaris cluster on the node where the resource group is online.
UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
        inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
        groupname sc_ipmp0
        ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
        inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
        ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
        ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
        ether 0:0:0:0:0:1
UASOL2:#
6. Fail over the resource group to UASOL1 and check the status.
UASOL2:#clrg switch -n UASOL1 +
UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clrg status

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
UA-HA-ZRG     UASOL2       No           Offline
              UASOL1       No           Online

UASOL1:#clresource status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
CLUAHAZ1         UASOL2       Offline    Offline - LogicalHostname offline.
                 UASOL1       Online     Online - LogicalHostname online.

CLUAZPOOL        UASOL2       Offline    Offline
                 UASOL1       Online     Online

UASOL1:#
We have successfully created the logical hostname cluster resource and tested it on both nodes.
7. Create a local zone on any one of the cluster nodes, then copy the zone's entry from /etc/zones/index along with /etc/zones/<zonename>.xml to the other node, so the zone configuration is available on both cluster nodes. Create the local zone without the network part (i.e. skip "add net").
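The copy step can be sketched as follows. The helper `index_entry_for_peer` is hypothetical, and the commented scp/ssh steps assume root access from UASOL1 to UASOL2; note that on the peer node the zone's state must read "configured" so it can be attached later:

```shell
# Sketch: make the zone's configuration visible on the second node.
index_entry_for_peer() {
    # $1 = zone name; reads an /etc/zones/index style file on stdin and
    # prints the zone's entry with its state rewritten to "configured"
    sed -n "s/^$1:installed:/$1:configured:/p"
}

# Usage on UASOL1 (requires root ssh access to UASOL2):
# scp -p /etc/zones/UAHAZ1.xml UASOL2:/etc/zones/UAHAZ1.xml
# index_entry_for_peer UAHAZ1 < /etc/zones/index | \
#     ssh UASOL2 'cat >> /etc/zones/index'
```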
UASOL1:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   installed   /UAZPOOL/UAHAZ1  native   shared
UASOL1:#ssh UASOL2 zoneadm list -cv
Password:
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   configured  /UAZPOOL/UAHAZ1  native   shared
UASOL1:#
You can refer to this article for creating a local zone, but do not configure the network.
8. Halt the local zone on UASOL1 and fail over the resource group to UASOL2 to test the zone there.
UASOL1:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   running     /UAZPOOL/UAHAZ1  native   shared
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#ssh UASOL2
Password:
Last login: Tue Jul  1 00:27:14 2014 from uasol1
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
UASOL2:#clrg status

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
UA-HA-ZRG     UASOL2       No           Online
              UASOL1       No           Offline

UASOL2:#
9. Attach the local zone on UASOL2 and boot it.
UASOL2:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   configured  /UAZPOOL/UAHAZ1  native   shared
UASOL2:#zoneadm -z UAHAZ1 attach -F
UASOL2:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   installed   /UAZPOOL/UAHAZ1  native   shared
UASOL2:#zoneadm -z UAHAZ1 boot
UASOL2:#
10. Log in to the local zone and perform a health check. If everything looks fine, halt the local zone.
UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
# bash
bash-3.2# uptime
 12:37am  up  1 user,  load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#
11. Register the SUNW.gds resource type in Solaris cluster for the highly available local zone.
UASOL1:#clresourcetype register SUNW.gds
UASOL1:#
12. Navigate to the directory below and copy the config file under the new zone resource name. CLUAHAZ-OS1 is the cluster resource for the local zone, which we are going to create.
UASOL1:#cd /opt/SUNWsczone/sczbt/util
UASOL1:#ls -lrt
total 26
-r-xr-xr-x   1 root     bin         6949 Jan 28  2013 sczbt_register
-rw-r--r--   1 root     bin         4806 Jan 28  2013 sczbt_config
UASOL1:#cp -p sczbt_config sczbt_config.CLUAHAZ-OS1
UASOL1:#
13. Edit sczbt_config.CLUAHAZ-OS1 with the information below.
UASOL1:#grep -v "#" sczbt_config.CLUAHAZ-OS1
RS=CLUAHAZ-OS1
RG=UA-HA-ZRG
PARAMETERDIR=/UAZPOOL/UAHAZ1/params
SC_NETWORK=true
SC_LH=CLUAHAZ1
FAILOVER=true
HAS_RS=CLUAZPOOL
Zonename="UAHAZ1"
Zonebrand="native"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
UASOL1:#
- RS – local zone cluster resource name (the suffix of the config file name)
- RG – existing resource group
- PARAMETERDIR – directory where the agent keeps its parameter file (here, under the zone's root path)
- SC_NETWORK – if true, the local zone IP is managed by Solaris cluster, so you must also provide the SC_LH value
- SC_LH – the logical hostname resource we created earlier
- FAILOVER – set to true to enable automatic failover
- HAS_RS – the zpool cluster resource this resource depends on
- Zonename – the local zone name
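Before running sczbt_register, you can sanity-check the edited file. This is a minimal sketch; `check_config` is a hypothetical helper (not shipped with the agent) that sources the config in a subshell and reports a few obviously required variables:

```shell
# Sketch: report missing variables in a sczbt config file.
check_config() {
    # $1 = path to the config file; prints the name of each missing variable
    (
        . "$1"
        for v in RS RG PARAMETERDIR Zonename; do
            eval val=\"\$$v\"
            if [ -z "$val" ]; then
                echo "missing: $v"
            fi
        done
        # SC_NETWORK=true requires the logical hostname resource in SC_LH
        if [ "$SC_NETWORK" = "true" ] && [ -z "$SC_LH" ]; then
            echo "missing: SC_LH"
        fi
    )
}

# Usage:
# check_config ./sczbt_config.CLUAHAZ-OS1   # no output means the basics are set
```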
14. Create the params directory under the zone's root path on the node where the resource group is online.
UASOL1:#mkdir -p /UAZPOOL/UAHAZ1/params
15. Create the local zone resource by running the script below with the config file as input.
UASOL1:#pwd
/opt/SUNWsczone/sczbt/util
UASOL1:#./sczbt_register -f ./sczbt_config.CLUAHAZ-OS1
sourcing ./sczbt_config.CLUAHAZ-OS1
Registration of resource CLUAHAZ-OS1 succeeded.
Validation of resource CLUAHAZ-OS1 succeeded.
UASOL1:#
16. Enable the zone resource.
UASOL1:#clresource enable CLUAHAZ-OS1
UASOL1:#
17. Check the resource status.
UASOL1:#clresource status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
CLUAHAZ-OS1      UASOL2       Offline    Offline
                 UASOL1       Online     Online - Service is online.

CLUAHAZ1         UASOL2       Offline    Offline - LogicalHostname offline.
                 UASOL1       Online     Online - LogicalHostname online.

CLUAZPOOL        UASOL2       Offline    Offline
                 UASOL1       Online     Online

UASOL1:#
18. The CLUAHAZ-OS1 resource is online, so the zone should have booted. Check the zone status using the zoneadm command.
UASOL1:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   4 UAHAZ1   running     /UAZPOOL/UAHAZ1  native   shared
UASOL1:#
19. Switch the resource group to the other node and check the zone status.
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   - UAHAZ1   installed   /UAZPOOL/UAHAZ1  native   shared
UASOL1:#
On UASOL1, the zone has been halted by the cluster.
20. Log in to UASOL2 and check the resource status.
UASOL2:#clresource status

=== Cluster Resources ===

Resource Name    Node Name    State       Status Message
-------------    ---------    -----       --------------
CLUAHAZ-OS1      UASOL2       Starting    Unknown - Starting
                 UASOL1       Offline     Offline

CLUAHAZ1         UASOL2       Online      Online - LogicalHostname online.
                 UASOL1       Offline     Offline - LogicalHostname offline.

CLUAZPOOL        UASOL2       Online      Online
                 UASOL1       Offline     Offline

UASOL2:#zoneadm list -cv
  ID NAME     STATUS      PATH             BRAND    IP
   0 global   running     /                native   shared
   4 UAHAZ1   running     /UAZPOOL/UAHAZ1  native   shared
UASOL2:#clresource status

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
CLUAHAZ-OS1      UASOL2       Online     Online
                 UASOL1       Offline    Offline

CLUAHAZ1         UASOL2       Online     Online - LogicalHostname online.
                 UASOL1       Offline    Offline - LogicalHostname offline.

CLUAZPOOL        UASOL2       Online     Online
                 UASOL1       Offline    Offline

UASOL2:#
We have successfully failed over the resource group to UASOL2.
The UAHAZ1 local zone is now highly available under Solaris cluster. This type of setup is often called a failover zone, or a flying zone.
Hope this article is informative to you.