How to Install Cluster LVM2 and GFS2 on Redhat Linux?

October 24, 2013 By Cloud_Devops

Linux deployments are increasing day by day, and a common question is whether Linux can replace older enterprise operating systems such as IBM AIX, Sun Solaris, or HP-UX. Many customers assume it cannot, but clustering two or more Linux systems together can meet the same requirements at a much lower cost than those legacy Unix systems, and that is exactly what is happening in the Linux market today: purchase two x86 blades, install Redhat Linux, and configure a Redhat cluster with Cluster LVM and the Global File System (GFS). To use this feature, your application must support parallel processing (for example, Oracle RAC).

Operating system: Redhat Enterprise Linux 6.3 (RHEL 6.3).

Once you have configured the Redhat cluster between the two nodes, you can proceed to install the Cluster-LVM and GFS2 packages.
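Before installing the packages, it is worth confirming that the cluster is healthy on both nodes. A minimal check, assuming the standard RHEL 6 cluster stack (cman) used elsewhere in this article:

[root@uagl1 ~]# clustat
[root@uagl1 ~]# cman_tool nodes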

CLVMD (Cluster Logical Volume Manager daemon):
 
First, we will see how to install Cluster-LVM on Redhat Linux.

1. Login to the first cluster node (assuming that yum is already configured on the server for package installation).

2. Install the Cluster LVM package using the below command.
[root@uagl1 Packages]# yum install lvm2-cluster-2.02.95-10.el6.x86_64.rpm 
Setting up Install Process
Examining lvm2-cluster-2.02.95-10.el6.x86_64.rpm: lvm2-cluster-2.02.95-10.el6.x86_64
Marking lvm2-cluster-2.02.95-10.el6.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package lvm2-cluster.x86_64 0:2.02.95-10.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package      Arch       Version         Repository                    Size
================================================================================
Installing:
 lvm2-cluster  x86_64 2.02.95-10.el6  lvm2-cluster-2.02.95-10.el6.x86_64   667 k

Transaction Summary
=================================================================================
Install       1 Package(s)

Total size: 667 k
Installed size: 667 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : lvm2-cluster-2.02.95-10.el6.x86_64                                                                                                        1/1 
  Verifying  : lvm2-cluster-2.02.95-10.el6.x86_64                                                                                                        1/1 

Installed:
  lvm2-cluster.x86_64 0:2.02.95-10.el6                                                                                                                       

Complete!
[root@uagl1 Packages]#

3. In some cases, if you have already configured a clustered volume group and the cluster services are enabled, you will see the error below. It can be resolved by installing this package and starting the clvmd service.

[root@uagl1 Desktop]# vgs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  VG       #PV #LV #SN Attr   VSize   VFree
  vg_uagl1   1   3   0 wz--n- 148.52g    0 
[root@uagl1 Desktop]#
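
The "connect() failed" message appears because LVM is configured for clustered locking while clvmd is not yet running. If LVM is still using local file-based locking instead, the lvm2-cluster package ships the lvmconf helper, which switches /etc/lvm/lvm.conf to clustered locking (locking_type = 3). A minimal sketch; run it on both nodes before starting clvmd:

[root@uagl1 ~]# lvmconf --enable-cluster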



4. Try to start the clvmd service on the first node.
Here it fails because we haven't installed the clvmd package on the second node yet.

[root@uagl1 Packages]# service clvmd start
Starting clvmd: 
Activating VG(s):   Volume group "HAVG" is exported
  clvmd not running on node uagl2h
  3 logical volume(s) in volume group "vg_uagl1" now active
  clvmd not running on node uagl2h
                                                           [FAILED]
[root@uagl1 Packages]#


5. Since we have two nodes in the cluster, clvmd must be installed on the second node as well. If it is not, install it and start the service there (a sketch follows), then start clvmd on the first node again.
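
A sketch of that step on the second node, using the same package as on the first:

[root@uagl2 ~]# yum install lvm2-cluster-2.02.95-10.el6.x86_64.rpm
[root@uagl2 ~]# service clvmd start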

[root@uagl1 Packages]# service clvmd start
Starting clvmd: 
Activating VG(s):   Volume group "HAVG" is exported
  clvmd not running on node uagl2h
  3 logical volume(s) in volume group "vg_uagl1" now active
  clvmd is running on node uagl2h
                                                           [OK]
[root@uagl1 Packages]#
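
To make clvmd start automatically at boot, enable it with chkconfig on both nodes (a standard RHEL 6 housekeeping step, assumed here rather than shown above):

[root@uagl1 ~]# chkconfig clvmd on
[root@uagl2 ~]# chkconfig clvmd on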


We have successfully deployed Cluster-LVM on both cluster nodes.
Let's start the GFS2 package installation.

GFS2 (Global File System 2):

1. Login to both cluster nodes and install the global filesystem package. (To learn more, see the dedicated GFS2 article.)

[root@uagl1 yum.repos.d]# yum install gfs*
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package gfs2-utils.x86_64 0:3.0.12.1-32.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch      Version        Repository      Size
================================================================================
Installing:
 gfs2-utils   x86_64   3.0.12.1-32.el6   pkgrepo3       281 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 281 k
Installed size: 748 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : gfs2-utils-3.0.12.1-32.el6.x86_64                                                                                                         1/1 
  Verifying  : gfs2-utils-3.0.12.1-32.el6.x86_64                                                                                                         1/1 

Installed:
  gfs2-utils.x86_64 0:3.0.12.1-32.el6                                                                                                                        

Complete!
[root@uagl1 yum.repos.d]#
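
As with clvmd, the gfs2 init script can be enabled at boot on both nodes so that the GFS2 entries in /etc/fstab are mounted automatically (again an assumed housekeeping step, not shown in the original output):

[root@uagl1 ~]# chkconfig gfs2 on
[root@uagl2 ~]# chkconfig gfs2 on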

Here we will see how to create a shared filesystem between the two cluster nodes.

Configuring GFS2 on Redhat Linux:

You need shared storage here; I am using Openfiler as the shared storage and have provisioned two LUNs to both cluster nodes using iSCSI.

1. Create a new volume group (name: HAVG). (Read the LVM Tutorial.)

2. Create a new logical volume with a size of 10GB. (Read the LVM Tutorial.) A combined sketch of both steps follows.
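
A minimal sketch of steps 1 and 2; the LUN device names (/dev/sdb and /dev/sdc) are illustrative and will differ on your system. The -cy flag marks the volume group as clustered so that clvmd manages its locking:

[root@uagl1 ~]# pvcreate /dev/sdb /dev/sdc
[root@uagl1 ~]# vgcreate -cy HAVG /dev/sdb /dev/sdc
[root@uagl1 ~]# lvcreate -L 10G -n havmvol HAVG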

3. Create the GFS2 filesystem.
You need the below information to create the gfs2 filesystem:
      1. Cluster name (uacl1) – with the lock_dlm protocol this must match the cluster name defined in cluster.conf; it forms the first half of the lock table.
      2. Logical volume name (havmvol) – here it is also used as the filesystem name, the second half of the lock table.
      3. Logical volume full path (/dev/HAVG/havmvol)
The -j option sets the number of journals; one journal is required for each node that will mount the filesystem, so -j 4 leaves room for two more nodes.

[root@uagl1 /]# mkfs -t gfs2 -p lock_dlm -t uacl1:havmvol -j 4 /dev/HAVG/havmvol 
This will destroy any data on /dev/HAVG/havmvol.
It appears to contain: symbolic link to `../dm-3'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/HAVG/havmvol
Blocksize:                 4096
Device Size                10.00 GB (2621440 blocks)
Filesystem Size:           10.00 GB (2621438 blocks)
Journals:                  4
Resource Groups:           40
Locking Protocol:          "lock_dlm"
Lock Table:                "uacl1:havmvol"
UUID:                      9de294ae-6df5-96cc-e65f-6a5be1f699b8
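
Before mounting, the /havol1 mount point must exist on both nodes (an implicit step in the article):

[root@uagl1 /]# mkdir /havol1
[root@uagl2 ~]# mkdir /havol1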

[root@uagl1 /]# mount -t gfs2 -o noatime,nodiratime /dev/HAVG/havmvol /havol1/


4. Add the below entry to /etc/fstab on both nodes.

/dev/HAVG/havmvol /havol1   gfs2 defaults,noatime,nodiratime 0  0


5. Unmount and re-mount it to verify the fstab entry.

[root@uagl1 /]# umount /havol1/
[root@uagl1 /]# mount /havol1/


6. Sometimes the gfs2 service will not start if the volume group is deactivated.
Activate the VG using the "vgchange" command.

[root@uagl2 ~]# service gfs2 start
Mounting GFS2 filesystem (/havol1): invalid device path "/dev/HAVG/havmvol"
                                                           [FAILED]
[root@uagl2 ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree  
  HAVG       2   1   0 wz--n-  10.18g 180.00m
  vg_uagl2   1   3   0 wz--n- 148.52g      0 
[root@uagl2 ~]# vgchange -ay HAVG
  1 logical volume(s) in volume group "HAVG" now active
[root@uagl2 ~]#


7. You can check the GFS2 status using the below command.

[root@uagl1 ~]# service gfs2 status
Configured GFS2 mountpoints: 
/havol1
Active GFS2 mountpoints: 
/havol1
[root@uagl1 ~]#
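
To confirm that the volume is genuinely shared, a quick sanity check (not part of the original article; the file name is illustrative) is to create a file on one node and list it from the other:

[root@uagl1 ~]# touch /havol1/testfile
[root@uagl2 ~]# ls -l /havol1/testfile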


Now we have successfully installed GFS2 and created a new shared volume (/havol1).
This volume will be mounted simultaneously on both the cluster nodes.

Check out:

  • Redhat Enterprise Linux 7 Tutorials
  • Redhat Linux – LVM Tutorials
  • Redhat Cluster

Thank you for reading this article.

Filed Under: Redhat Cluster

Comments

  1. Ranjith says

    January 26, 2017 at 2:56 am

    Hi Lingeswaran,
    I have configured a 3-node cluster with gfs2. While formatting the filesystem with gfs2 I made a mistake: instead of giving the cluster name, I gave some other name (REDHAT instead of redhat; the cluster name is redhat).

    mkfs.gfs2 -p lock_dlm -t REDHAT:gfslv -j 4 /dev/vg_cluster/lv_http

    So when I try to mount, it gives this error:

    fs is for a different cluster
    error mounting lockproto lock_dlm

    To solve this, I re-formatted with the proper cluster name, but that didn't help.

    Now the status is: if I run any command I get

    [root@node1 mapper]# lvs
    connect() failed on local socket: Connection refused
    Internal cluster locking initialisation failed.
    WARNING: Falling back to local file-based locking.
    Volume Groups with the clustered attribute will be inaccessible.
    Skipping clustered volume group vg_cluster

    I cannot see the LV or the VG.

    So how can I fix this issue?
    Looking for your valuable help.

    Thanks,
    Ranjith

  2. Vikrant Aggarwal says

    November 5, 2013 at 7:08 pm

    Appreciate your help Lingeswaran, but it's still not working. clvmd is running on both nodes. I am able to mount the filesystem on one node, but when I go to the other node and try to start the gfs2 service, it throws the same error.

    On the other node (lserver2) the gfs filesystem is also hanging.

    [root@lserver1 /]# /etc/init.d/gfs2 start
    Mounting GFS2 filesystems: /sbin/mount.gfs2: lock_dlm_join: gfs_controld join error: -16
    /sbin/mount.gfs2: error mounting lockproto lock_dlm
    [FAILED]
    [root@lserver1 /]# /etc/init.d/gfs2 status
    Configured GFS2 mountpoints:
    /data1
    [root@lserver1 /]# /etc/init.d/clvmd status
    clvmd (pid 5640) is running…
    active volumes: LogVol00 LogVol01 LogVol02 LogVol04
    [root@lserver1 /]# /etc/init.d/cman status
    cman is running.
    [root@lserver1 /]# more /etc/fstab | grep -i data1
    /dev/VolGroup03/LogVol04 /data1 gfs2 defaults,noatime,nodiratime 0 0

    [root@lserver2 ~]# /etc/init.d/gfs2 status
    Configured GFS2 mountpoints:
    /data1
    Active GFS2 mountpoints:
    /data1
    [root@lserver2 ~]# /etc/init.d/clvmd status
    clvmd (pid 7775) is running…
    active volumes: LogVol00 LogVol01 LogVol04

    Thanks,
    Vikrant

    • Lingeswaran R says

      November 5, 2013 at 7:41 pm

      No idea… Have a look at /var/log/messages.

      Regards
      Lingeswaran

  3. Vikrant Aggarwal says

    November 4, 2013 at 7:54 pm

    Thanks for your quick reply, Lingeswaran. That works for me.

    I have configured a two-node cluster; the node names are lserver1 and lserver2. I am not able to mount the gfs2 filesystem. As shown in the excerpt, I am getting the error on both nodes. cman is running.

    [root@lserver1 ~]# clustat
    Member Status: Quorate

    Member Name            ID   Status
    ------ ----            ---- ------
    lserver1.mgmt.local    1    Online, Local
    lserver2.mgmt.local    2    Online

    [root@lserver1 ~]# service cman status
    cman is running.

    [root@lserver2 ~]# cman_tool nodes
    Node  Sts   Inc   Joined               Name
    1     M     16    2013-10-31 00:12:23  lserver1.mgmt.local
    2     M     4     2013-10-31 00:01:18  lserver2.mgmt.local

    [root@lserver2 ~]# clustat
    Member Status: Quorate

    Member Name            ID   Status
    ------ ----            ---- ------
    lserver1.mgmt.local    1    Online
    lserver2.mgmt.local    2    Online, Local

    [root@lserver2 ~]# service cman status
    cman is running.

    [root@lserver2 ~]# mount /data
    /sbin/mount.gfs2: lock_dlm_join: gfs_controld join error: -22
    /sbin/mount.gfs2: error mounting lockproto lock_dlm

    [root@lserver2 ~]# more /etc/fstab | grep -i data
    /dev/VolGroup02/LogVol03 /data gfs2 defaults,noatime,nodiratime 0 0

    Kindly help me with the above error. I have not found anything useful on Google.

    Thanks,
    Vikrant

    • Lingeswaran R says

      November 4, 2013 at 9:06 pm

      Have you started the clvmd & gfs2 services? If not, please start them.
      You can also try restarting the cman service.

  4. Vikrant Aggarwal says

    November 3, 2013 at 12:18 pm

    Hi Lingeswaran,

    I am trying to configure a two-node cluster. The packages below have been installed on one node of the cluster, and the services are running fine.
    But I am not able to get the graphical session of luci; can you please help me with that? I am not able to figure out what I am missing.
    [root@lserver1 ~]# rpm -qa luci rgmanager ricci cman httpd
    httpd-2.2.3-6.el5
    rgmanager-2.0.23-1
    ricci-0.8-30.el5
    cman-2.0.60-1.el5
    luci-0.8-30.el5
    [root@lserver1 ~]# ps -ef | egrep -i "luci|httpd" | grep -v egrep
    root 18092 1 0 16:02 ? 00:00:00 /usr/sbin/httpd
    luci 18333 1 3 16:13 pts/0 00:00:07 /usr/bin/python /usr/lib/luci/zope/lib/python/Zope2/Startup/run.py -C /var/lib/luci/etc/zope.conf
    luci 18338 1 0 16:13 ? 00:00:00 /usr/sbin/stunnel -fd 0
    apache 18356 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18357 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18358 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18359 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18360 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18361 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18362 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd
    apache 18363 18092 0 16:13 ? 00:00:00 /usr/sbin/httpd

    Thanks
    Vikrant

    • Lingeswaran R says

      November 4, 2013 at 7:31 am

      Make sure you have disabled the IP filters and firewalls

      Regards
      Lingeswaran

