How to Configure Redhat Cluster in Linux?

October 20, 2013 By Cloud_Devops 10 Comments

Once the Redhat cluster packages have been installed, you can configure a new cluster using luci, the web-based graphical user interface available in Redhat Linux. Luci can be installed on any management host (a Redhat Linux machine from which you manage all the Redhat clusters in your environment). Here I have installed luci on one of the cluster nodes since this is a test environment. You can access the luci web interface from a browser using the hostname or IP address (refer to the screenshots below). Unlike a Sun Cluster configuration, this one is very simple and easy to implement.

Server Names: UANODE1P (192.168.2.200), UANODE2P (192.168.2.201)
Cluster Aliases: UANODE1 (192.168.2.210), UANODE2 (192.168.2.211)

In this setup, I have installed luci on the UANODE2P server, but it can be installed on any Redhat Linux node. Luci provides a web interface for Redhat cluster management.
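
If luci is not installed yet, a minimal sketch of the installation, assuming a RHEL 6 node with the High Availability Management channel/repository enabled:

[root@UANODE2P ~]# yum install -y luci
[root@UANODE2P ~]# service luci start      # luci listens on TCP port 8084 by default
[root@UANODE2P ~]# chkconfig luci on       # start luci automatically at boot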

1. Login to UANODE1P and configure the new IP (192.168.2.210) with the host alias UANODE1.

[root@UANODE1P ~]# ifconfig bond1
bond1     Link encap:Ethernet  HWaddr 00:1B:ED:1C:28:5A
          inet addr:192.168.2.210  Bcast:172.16.25.63  Mask:255.255.255.224
          inet6 addr: fe80::20b:cdff:fe1c:185a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2341604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2217673 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:293460932 (279.8 MiB)  TX bytes:1042006549 (993.7 MiB)
          Interrupt:185 Memory:f7fe0000-f7ff0000
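
The post does not show the commands used to bring up this alias IP. A minimal sketch, assuming the address is added as a bond1:1 label on the existing bond (the label name and the netmask below are assumptions; match whatever mask your network actually uses):

[root@UANODE1P ~]# ifconfig bond1:1 192.168.2.210 netmask 255.255.255.0 up

To make it persistent across reboots, the equivalent ifcfg file (/etc/sysconfig/network-scripts/ifcfg-bond1:1) would contain:

DEVICE=bond1:1
IPADDR=192.168.2.210
NETMASK=255.255.255.0
ONBOOT=yes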

The /etc/hosts file may already have the below entries on both nodes.
192.168.2.200                UANODE1P
192.168.2.201                UANODE2P

Add the below entries to the /etc/hosts file on both servers.
192.168.2.210                UANODE1
192.168.2.211                UANODE2

2. Login to UANODE2P and configure the new IP (192.168.2.211) with the host alias UANODE2.

[root@UANODE2P ~]# ifconfig bond1
bond1     Link encap:Ethernet  HWaddr 00:1B:ED:1C:28:5A
          inet addr:192.168.2.211  Bcast:172.16.25.63  Mask:255.255.255.224
          inet6 addr: fe80::20b:cdff:fe1c:185a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2341604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2217673 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:293460932 (279.8 MiB)  TX bytes:1042006549 (993.7 MiB)
          Interrupt:185 Memory:f7fe0000-f7ff0000
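
Before creating the cluster, it is worth confirming that each node can resolve and reach the other's alias (a quick sanity check, assuming the /etc/hosts entries above are in place on both nodes):

[root@UANODE1P ~]# ping -c 2 UANODE2
[root@UANODE2P ~]# ping -c 2 UANODE1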



3. Login to the luci web interface using the root user (by default luci is reachable at https://<luci-host>:8084).
[Screenshot: luci login page]
4. From the home page, navigate to "Manage Clusters".

[Screenshot: luci home page with the "Manage Clusters" option]
5. Click on the "Create a new Cluster" link.

6. Enter the cluster name and the cluster node details, and provide the "ricci" account password here.
The Redhat cluster will send heartbeats using these alias names; that is why we separated them from the public network by configuring the new IPs (steps 1 & 2).
You may get the below error while creating the cluster:

The following errors occurred while creating cluster "UACL1": Unable to establish an SSL connection to UANODE1:11111 after 5 tries. Is the ricci service running? Operation already in progress. Authentication to the ricci agent at UANODE2:11111 failed.

To fix the above error (a sketch follows the list):
1. Start the ricci service on both nodes.
2. Set a password for the "ricci" account on both nodes and enter the same password here.
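
A minimal sketch of both fixes, to be run on each node (assuming the ricci package is already installed):

[root@UANODE1P ~]# service ricci start
[root@UANODE1P ~]# chkconfig ricci on      # keep ricci running after reboots
[root@UANODE1P ~]# passwd ricci            # set the password you will enter in luci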

Cluster creation in progress.

7. After the cluster has been created, join the nodes to the cluster by clicking the "Join Cluster" link.

8. You will get a notification indicating whether the cluster services have started.

9. Refresh the window and both nodes will show as cluster nodes.

10. You can also view the cluster status from the command line.
[root@UANODE1 ~]# clustat
Cluster Status for UACL1 @ Sun Oct 13 12:47:10 2013
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 UANODE1                                    1 Online, Local
 UANODE2                                    2 Online

[root@UANODE1 ~]#
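
Behind the scenes, luci writes /etc/cluster/cluster.conf to both nodes. A minimal sketch of what it should look like at this stage (the config_version and exact attributes may differ in your setup):

<?xml version="1.0"?>
<cluster config_version="1" name="UACL1">
        <clusternodes>
                <clusternode name="UANODE1" nodeid="1"/>
                <clusternode name="UANODE2" nodeid="2"/>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm/>
</cluster>

The two_node="1" setting is what allows a two-node cluster to remain quorate with a single vote.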

So we have just added the nodes to the cluster. There is still plenty of work to do on this configuration before it can be used in production.

Pending items in the Redhat cluster configuration:
1. Configure fencing to reboot/halt a node when a network partition occurs or a server hangs (a rough sketch follows this list).
2. Configure the service group with KVM virtual machines and make sure that the Redhat cluster provides high availability to the VMs.
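
For the fencing item, a rough sketch of how the entries might look in /etc/cluster/cluster.conf, assuming IPMI-capable servers (the fence agent, IP address, and credentials below are placeholders, not values from this setup):

<clusternode name="UANODE1" nodeid="1">
        <fence>
                <method name="1">
                        <device name="ipmi_uanode1"/>
                </method>
        </fence>
</clusternode>
...
<fencedevices>
        <fencedevice agent="fence_ipmilan" name="ipmi_uanode1" ipaddr="192.168.2.230" login="admin" passwd="password"/>
</fencedevices>

The same entries can be created from luci under the cluster's fence device configuration, and fencing can be exercised from the command line with fence_node.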

I hope to come up with the above setup soon to provide a complete live cluster environment. In the meantime, you can try the above steps and let us know if you run into any issues.

Filed Under: Redhat Cluster


Comments

  1. dinesh says

    January 6, 2017 at 1:46 am

Facing the below error in luci:

    The following errors occurred while creating cluster “webcluster”: Error receiving header from node1.example.com:11111

Any idea?

    Thanks in advance

  2. Victor Flores says

    January 20, 2016 at 8:53 pm

    Hi,

    First of all, I would like to thank you for the time to share your knowledge. It has been very useful.

    Now, I need to configure a cluster with very similar characteristics.

I have followed the steps, but I am getting the following output from the "clustat" command:

Any ideas why one of the nodes shows as offline? Did I miss anything?

    On Node 1:

    [root@nakdb01 ~]# clustat
    Cluster Status for 3parsto @ Wed Jan 20 10:16:47 2016
    Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 nakdb01                                    1 Online, Local
 nakdb02                                    2 Offline

    On node 2:

    [root@nakdb02 ~]# clustat
    Cluster Status for 3parsto @ Wed Jan 20 10:17:00 2016
    Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 nakdb01                                    1 Offline
 nakdb02                                    2 Online, Local

    Thanks in advance.

Kind regards,
    Victor

    • Rachana says

      August 31, 2016 at 5:49 pm

Please try: service iptables stop

  3. Nitin R says

    February 17, 2015 at 12:38 pm

Your shared documents are very useful to me. Can you share further documents?

  4. K venkatesh says

    January 7, 2015 at 10:52 pm

Hi Lingesh, while creating a new cluster I am getting the below error. Please help me fix the issue.

    The following errors occurred while creating cluster “cluster”: Authentication to the ricci agent at node1:11111 failed, Authentication to the ricci agent at node2:11111 failed

    • Lingeswaran R says

      January 7, 2015 at 11:02 pm

Set a password for user ricci on all cluster nodes and give the same password when adding the cluster nodes to the new cluster in the luci interface.
User ricci needs a password set so that the conga interface can authenticate to make changes. For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the first time you propagate an updated cluster configuration from any particular node. You set the ricci password as root, after installing ricci on your system, with the "passwd ricci" command.

      Look for following messages in /var/log/luci/luci.log:
      14:45:40,241 INFO [luci.lib.ricci_communicator] Authentication to the ricci agent at node1:11111 failed
      14:45:42,257 INFO [luci.lib.ricci_communicator] Authentication to the ricci agent at node2:11111 failed
There could be many reasons behind the authentication failure, such as missing permissions, an incorrect password, etc.

      Look for ricci messages on both cluster nodes:

      Feb 18 15:01:22 node1 ricci: ricci user password is not set, clients will be unable to connect
      Feb 18 15:01:22 node1 ricci: startup succeeded
      Feb 19 11:08:20 node1 ricci: ricci user password is not set, clients will be unable to connect
Here, the user ricci has not been assigned a password.

  5. sivakumar says

    November 30, 2014 at 7:21 pm

    Hi Lingeswaran,

Could you share any document covering the below steps?

    Pending items in Redhat cluster configuration:
    1.We need to configure the fencing to reboot/halt the nodes if any network partition or server hung happens.
    2. Need to Configure the service group with KVM virtual machines and make sure that redhat cluster provides high availability to the VM’s.

  6. Venkat says

    September 15, 2014 at 3:00 pm

    Hello Lingeswaran,

If you don't mind, kindly share the below info with us if possible.

    Pending items in Redhat cluster configuration:
    1.We need to configure the fencing to reboot/halt the nodes if any network partition or server hung happens.
    2. Need to Configure the service group with KVM virtual machines and make sure that redhat cluster provides high availability to the VM’s.

    Thank you

    Regards,
    Venkat

  7. Elumalai M says

    November 20, 2013 at 8:12 pm

    HI,

How do I create the bond0 alias? Can you give some idea?

    Thanks

    • Lingeswaran R says

      November 20, 2013 at 10:06 pm

      https://www.unixarena.com/2013/06/how-to-configure-bondingteaming-on.html

