How to configure a two-node VCS cluster on Solaris using a laptop/desktop

In Unix administration, everyone wants to become an expert in at least one of the cluster technologies, especially Veritas Cluster Server (VCS) because of its market share. But not everybody gets the opportunity to work on it, since VCS usually runs in critical environments. So how do you learn? Will just reading the textbooks and attending a five-day training course give you enough confidence? No. I am sure that without hands-on experience, you cannot become an expert in VCS.


I tried setting up a VCS cluster on a laptop/desktop using VMware Workstation. VMware Workstation hosted two Solaris nodes, with Openfiler providing the shared storage. This setup worked like a charm. Why not try it on your laptop?

Hardware requirements:
1. Laptop/PC with 4 GB RAM
2. The processor must support Intel VT (or AMD-V) to run 64-bit guest operating systems under VMware.
(Most Intel Core 2 Duo and later processors have the VT feature; for older dual-core processors, please check www.intel.com.)
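If your host runs Linux, a quick way to check for hardware virtualization support is to count the vmx (Intel VT-x) or svm (AMD-V) CPU flags; this is just a convenience sketch (on Windows, use a tool such as Intel's Processor Identification Utility instead):

host$ egrep -c '(vmx|svm)' /proc/cpuinfo
2

A non-zero count means the processor supports hardware virtualization; it must also be enabled in the BIOS.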

Required software:
1. VMware Workstation, for creating the virtual Solaris nodes
2. Solaris 10 – operating system
3. Symantec Storage Foundation HA – VCS cluster software
4. Openfiler – virtual SAN storage (iSCSI shared storage)

Before starting to configure VCS, the prerequisites below need to be completed.
1. Create two virtual Solaris guests with three NIC cards each in VMware.
2. Configure passwordless root SSH authentication between the two cluster nodes (a sketch follows this list).
3. Install Symantec Storage Foundation HA on both Solaris nodes.
4. Install Openfiler as a virtual guest in VMware for shared storage.
5. Provision a new LUN in Openfiler to share across the nodes.
6. Add the newly provisioned LUN on both nodes using iscsiadm (see the sketch after this list).
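Two of these prerequisites deserve a quick sketch. For step 2, passwordless root SSH between the nodes can be set up roughly as follows (a minimal sketch; run it on node1 with an empty passphrase, then repeat in the other direction from node2):

Arena-Node1#ssh-keygen -t rsa
Arena-Node1#cat ~/.ssh/id_rsa.pub | ssh node2 'cat >> ~/.ssh/authorized_keys'

For step 6, the Solaris 10 iSCSI initiator can discover the Openfiler LUNs roughly like this; the discovery address 192.168.1.100 is an assumed Openfiler IP, so substitute your own and repeat on both nodes:

Arena-Node1#svcadm enable network/iscsi_initiator
Arena-Node1#iscsiadm add discovery-address 192.168.1.100:3260
Arena-Node1#iscsiadm modify discovery --sendtargets enable
Arena-Node1#devfsadm -i iscsi
Arena-Node1#iscsiadm list target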

Once you have completed the above prerequisites, you can proceed with the VCS cluster configuration.
Run the command below to begin the configuration:
Arena-Node1#/opt/VRTS/install/installsf -configure
  Storage Foundation 5.1 Configure Program
Logs are being written to /var/tmp/installsf-201207292354mTw while installsf is in progress.
Enter the Solaris x64 system names separated by spaces: [q,?] node1 node2
                          
                         Storage Foundation 5.1 Configure Program
                                             node1 node2
Logs are being written to /var/tmp/installsf-201207292354mTw while installsf is in progress
    Verifying systems: 100%
    Estimated time remaining: 0:00 5 of 5
    Checking system communication ....................................Done
    Checking release compatibility .......................................Done
    Checking installed product .............................................Done
    Checking platform version ..............................................Done
    Performing product prechecks .........................................Done
System verification checks completed successfully
                                
         Storage Foundation and High Availability 5.1 Configure Program
                                            node1 node2
To configure VCS, answer the set of questions on the next screen.
When [b] is presented after a question, 'b' may be entered to go back to the first question of the configuration set.
When [?] is presented after a question, '?' may be entered for help or additional information about the question.
Following each set of questions, the information you have entered will be presented for confirmation. To repeat the set of
questions and correct any previous errors, enter 'n' at the confirmation prompt.
No configuration changes are made to the systems until all configuration questions are completed and confirmed.
Press [Enter] to continue:

To configure VCS for SF51 the following information is required:
        A unique Cluster name
        A unique Cluster ID number between 0-65535
        Two or more NIC cards per system used for heartbeat links
        One or more heartbeat links are configured as private links
        One heartbeat link may be configured as a low priority link
        All systems are being configured to create one cluster
Enter the unique cluster name: [q,?] arena
Enter a unique Cluster ID number between 0-65535: [b,q,?] (0) 5
    Discovering NICs on node1 ............. Discovered e1000g0 e1000g1 e1000g2
To use aggregated interfaces for private heartbeat, enter the name of an aggregated interface.
To use a NIC for private heartbeat, enter a NIC which is not part of an aggregated interface.
Enter the NIC for the first private heartbeat link on node1: [b,q,?] e1000g1
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on node1: [b,q,?] e1000g2
Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n) y
Enter the NIC for the low priority heartbeat link on node1: [b,q,?] (e1000g0)
Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
    Checking Media Speed for e1000g1 on node1 .................................1000
    Checking Media Speed for e1000g2 on node1 ................................ 1000
    Checking Media Speed for e1000g1 on node2 .................................1000
    Checking Media Speed for e1000g2 on node2 .................................1000

              Storage Foundation and High Availability 5.1 Configure Program
                                             node1 node2
Cluster information verification:
        Cluster Name: arena
        Cluster ID Number: 5
        Private Heartbeat NICs for node1:
                link1=e1000g1
                link2=e1000g2
        Low Priority Heartbeat NIC for node1: link-lowpri=e1000g0
        Private Heartbeat NICs for node2:
                link1=e1000g1
                link2=e1000g2
        Low Priority Heartbeat NIC for node2: link-lowpri=e1000g0
Is this information correct? [y,n,q,b,?] (y)

Virtual IP can be specified in RemoteGroup resource, and can be used to connect to the cluster using Java GUI
The following data is required to configure the Virtual IP of the Cluster:
        A public NIC used by each system in the cluster
        A Virtual IP address and netmask
Do you want to configure the Virtual IP? [y,n,q,?] (n)
              Storage Foundation and High Availability 5.1 Configure Program
                                         node1 node2
Veritas Cluster Server can be configured to utilize Symantec Security Services
Running VCS in Secure Mode guarantees that all inter-system communication is encrypted, and users are verified with security credentials. When running VCS in Secure Mode, NIS and system usernames and passwords are used to verify identity. VCS usernames and passwords are no longer utilized when a cluster is running in Secure Mode.

Before configuring a cluster to operate using Symantec Security Services, another system must already have Symantec Security Services installed and be operating as a Root Broker. Refer to the Veritas Cluster Server Installation Guide for more information on configuring a Symantec Product Authentication Service Root Broker.

Would you like to configure VCS to use Symantec Security Services? [y,n,q] (n)
                                 
               Storage Foundation and High Availability 5.1 Configure Program
                                              node1 node2

The following information is required to add VCS users:
        A user name
        A password for the user
        User privileges (Administrator, Operator, or Guest)

Do you want to set the username and/or password for the Admin user
(default username = 'admin', password='password')? [y,n,q] (n)
Do you want to add another user to the cluster? [y,n,q] (n)
VCS User verification:
        User: admin Privilege: Administrators
        Passwords are not displayed
Is this information correct? [y,n,q] (y)

             Storage Foundation and High Availability 5.1 Configure Program
                                              node1 node2
The following information is required to configure SMTP notification:
        The domain-based hostname of the SMTP server
        The email address of each SMTP recipient
        A minimum severity level of messages to send to each recipient
Do you want to configure SMTP notification? [y,n,q,?] (n)
                                 
           Storage Foundation and High Availability 5.1 Configure Program
                                      node1 node2
The following information is required to configure SNMP notification:
        System names of SNMP consoles to receive VCS trap messages
        SNMP trap daemon port numbers for each console
        A minimum severity level of messages to send to each console
Do you want to configure SNMP notification? [y,n,q,?] (n)

All SFHA processes that are currently running must be stopped
Do you want to stop SFHA processes now? [y,n,q,?] (y)

            Storage Foundation and High Availability 5.1 Configure Program
                                         node1 node2
Logs are being written to /var/tmp/installsf-201207300006zNi while installsf is in progress
    Stopping SFHA: 100%
    Estimated time remaining: 0:00 8 of 8
    Performing SFHA prestop tasks .....................................Done
    Stopping vxatd ....................................................Done
    Stopping had ......................................................Done
    Stopping hashadow .................................................Done
    Stopping CmdServer ................................................Done
    Stopping vxfen ....................................................Done
    Stopping gab ......................................................Done
    Stopping llt ......................................................Done
          Storage Foundation High Availability Shutdown completed successfully
                                 
               Storage Foundation and High Availability 5.1 Configure Program
                                               node1 node2
Logs are being written to /var/tmp/installsf-201207300006zNi while installsf is in progress
    Starting SFHA: 100%
    Estimated time remaining: 0:00 19 of 19
    Performing SFHA configuration .........................................Done
    Starting vxdmp ........................................................Done
    Starting vxio .........................................................Done
    Starting vxspec .......................................................Done
    Starting vxconfigd ....................................................Done
    Starting vxesd ........................................................Done
    Starting vxrelocd .....................................................Done
    Starting vxconfigbackupd ..............................................Done
    Starting vxportal .....................................................Done
    Starting fdd ..........................................................Done
    Starting llt ..........................................................Done
    Starting gab ..........................................................Done
    Starting vxfen ........................................................Done
    Starting had ..........................................................Done
    Starting hashadow .....................................................Done
    Starting CmdServer ....................................................Done
    Starting vxdbd ........................................................Done
    Starting odm ..........................................................Done
    Performing SFHA poststart tasks .......................................Done

      Storage Foundation High Availability Startup completed successfully

installsf log files, summary file, and response file are saved at:
        /opt/VRTS/install/logs/installsf-201207300006zNi
Arena-Node1#

Arena-Node1#/opt/VRTS/bin/hastatus -sum
-- SYSTEM STATE
-- System               State                Frozen

A  node1                RUNNING              0
A  node2                RUNNING              0
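In addition to hastatus, you can confirm that the heartbeat stack is healthy. On a working two-node cluster, gabconfig -a should show port a (GAB membership) and port h (the had engine) with both nodes joined; the gen values below are illustrative, not captured from this session:

Arena-Node1#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   4b2d01 membership 01
Port h gen   4b2d04 membership 01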
Once the configuration is completed as above, we need to create a service group.

Create a disk group for the cluster using VxVM. First, list the available disks:

Arena-Node1#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c2t2d0
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.58533bc21b9e0001,0
       2. c2t3d0
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.58533bc21b9e0001,1
Specify disk (enter its number): Specify disk (enter its number):

Arena-Node1#vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c1t0d0s2     auto:none       -            -            online invalid
c2t2d0s2     auto:none       -            -            online invalid
c2t3d0s2     auto:none       -            -            online invalid

Bring the Openfiler LUNs under VxVM control.

Arena-Node1#/etc/vx/bin/vxdisksetup -i c2t2d0
Arena-Node1#/etc/vx/bin/vxdisksetup -i c2t3d0
Arena-Node1#vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c1t0d0s2     auto:none       -            -            online invalid
c2t2d0s2     auto:cdsdisk    -            -            online
c2t3d0s2     auto:cdsdisk    -            -            online


Create a new disk group.

Arena-Node1#vxdg init arenadg iscsi1=c2t2d0 iscsi2=c2t3d0
Arena-Node1#vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c1t0d0s2     auto:none       -            -            online invalid
c2t2d0s2     auto:cdsdisk    iscsi1       arenadg      online
c2t3d0s2     auto:cdsdisk    iscsi2       arenadg      online

Create the volume.

Arena-Node1#vxassist -g arenadg make zoravol1 3g
Arena-Node1#vxprint -hvt
Disk group: arenadg
v  zoravol1     -            ENABLED  ACTIVE   6291456  SELECT    -        fsgen
pl zoravol1-01  zoravol1     ENABLED  ACTIVE   6291456  CONCAT    -        RW
sd iscsi1-01    zoravol1-01  iscsi1   0        4050688  0         c2t2d0   ENA
sd iscsi2-01    zoravol1-01  iscsi2   0        2240768  4050688   c2t3d0   ENA

Create the filesystem on the new volume.

Arena-Node1#mkfs -F vxfs /dev/vx/rdsk/arenadg/zoravol1
    version 7 layout
    6291456 sectors, 3145728 blocks of size 1024, log size 16384 blocks
    largefiles supported
Arena-Node1#
Arena-Node1#mkdir /zoravol1
Arena-Node1#mount -F vxfs /dev/vx/dsk/arenadg/zoravol1 /zoravol1
Arena-Node1#df -h /zoravol1
Filesystem                    size   used  avail  capacity  Mounted on
/dev/vx/dsk/arenadg/zoravol1  3.0G   18M   2.8G      1%     /zoravol1
Arena-Node1#umount /zoravol1

Unmount the filesystem manually, because from now on VCS will control the mount. Next we have to create a service group containing the resources created above. To see how to create the service group using the graphical VCS Java Console, refer to the follow-up article; a command-line sketch is also shown below.
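As a minimal command-line sketch, assuming illustrative names of my own choosing (arenasg for the group; arenadg_dg, zoravol1_vol, and zoravol1_mnt for the resources), the service group could be built like this:

Arena-Node1#haconf -makerw
Arena-Node1#hagrp -add arenasg
Arena-Node1#hagrp -modify arenasg SystemList node1 0 node2 1
Arena-Node1#hagrp -modify arenasg AutoStartList node1
Arena-Node1#hares -add arenadg_dg DiskGroup arenasg
Arena-Node1#hares -modify arenadg_dg DiskGroup arenadg
Arena-Node1#hares -add zoravol1_vol Volume arenasg
Arena-Node1#hares -modify zoravol1_vol DiskGroup arenadg
Arena-Node1#hares -modify zoravol1_vol Volume zoravol1
Arena-Node1#hares -add zoravol1_mnt Mount arenasg
Arena-Node1#hares -modify zoravol1_mnt MountPoint /zoravol1
Arena-Node1#hares -modify zoravol1_mnt BlockDevice /dev/vx/dsk/arenadg/zoravol1
Arena-Node1#hares -modify zoravol1_mnt FSType vxfs
Arena-Node1#hares -modify zoravol1_mnt FsckOpt %-y
Arena-Node1#hares -link zoravol1_vol arenadg_dg
Arena-Node1#hares -link zoravol1_mnt zoravol1_vol
Arena-Node1#hagrp -enableresources arenasg
Arena-Node1#haconf -dump -makero
Arena-Node1#hagrp -online arenasg -sys node1

Once the group is online, hagrp -switch arenasg -to node2 performs a manual failover test between the nodes.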


Thank you for reading this article. Please leave a comment if you have any doubts, and I will get back to you as soon as possible.

17 comments

  1. Great work… Is the process almost the same if I try it with RHEL 5 using Storage Foundation HA for RHEL?

  2. Almost… You can give it a try…

  3. I am replicating the above-mentioned steps… Once done, I will update.
    Superb job done by you.

    Cheers
    Shahid

  4. Have you seen that SFHA version 6 is available, and that it needs high-end memory? Is something new introduced, as it contains an installer for Sol-11…

    ???

  5. I have installed it on VMware… I have not performed any of the cluster part.

  6. Can you post something related to VCS configuration for Oracle?

  7. I have installed SFHA 6 in Solaris 11.

  8. Hi

    When creating the new VMs on Workstation, you mentioned three NICs. Do all three NICs need to be NAT or Host Only? What should their type be?

  9. Hi,

    Can you please provide the steps and sequence to create a service group and resources, and how to fail over through the command line in Solaris 10?

    Thanks in advance.

  10. VCS (6.0.1) configuration in Solaris 10 (VMware) is successful.
    Please provide a link to download the VCS Cluster Manager to install on a laptop (Windows 7).

  11. Can anyone advise on how to create 3 NICs?

    • VMware Workstation -> View -> Console View -> Edit Virtual Machine Settings -> Add -> Select Network Adapter -> Add it. Perform the same steps twice, and reboot the guest to see the newly created NICs.

  12. Excellent work, Lingeswaran… Thank you.

  13. My sincere thanks to the owner and publisher of this documentation… Awesome!!!!

  14. Hi,

    Can we have a two-node Veritas cluster with SCSI-3 in a Solaris topology?
    For Solaris Cluster with SCSI-3, three nodes are recommended.

    regards
    Kishan

  15. Hi Lingeswaran,

    Can you please share the steps and prerequisites to install on Solaris 11.2?
