As you know, Linux deployment is increasing day by day. A common question is whether Linux can replace the older enterprise operating systems such as IBM AIX, Sun Solaris, and HP-UX. Many customers believe that a single Linux box cannot, but by combining two or more Linux systems you can meet the same requirements at a much lower cost than those legacy UNIX systems, and that is exactly what is happening in the Linux market today. Purchase two x86 blades, install Red Hat Linux, and configure Red Hat Cluster with clustered LVM (CLVM) and the Global File System (GFS). To use this feature, your application must support parallel processing (e.g. Oracle RAC).
[root@uagl1 Packages]# yum install lvm2-cluster-2.02.95-10.el6.x86_64.rpm
Setting up Install Process
Examining lvm2-cluster-2.02.95-10.el6.x86_64.rpm: lvm2-cluster-2.02.95-10.el6.x86_64
Marking lvm2-cluster-2.02.95-10.el6.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package lvm2-cluster.x86_64 0:2.02.95-10.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch     Version          Repository                        Size
================================================================================
Installing:
 lvm2-cluster   x86_64   2.02.95-10.el6   lvm2-cluster-2.02.95-10.el6.x86_64
                                                                           667 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total size: 667 k
Installed size: 667 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : lvm2-cluster-2.02.95-10.el6.x86_64                           1/1
  Verifying  : lvm2-cluster-2.02.95-10.el6.x86_64                           1/1

Installed:
  lvm2-cluster.x86_64 0:2.02.95-10.el6

Complete!
[root@uagl1 Packages]#
3. In some cases, if you have already configured the volume group and the cluster services are already enabled, you will see the error below. It can be fixed by installing this package and starting the service.
[root@uagl1 Desktop]# vgs
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  VG       #PV #LV #SN Attr   VSize   VFree
  vg_uagl1   1   3   0 wz--n- 148.52g    0
[root@uagl1 Desktop]#
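The "Falling back to local file-based locking" warning usually means LVM is still configured for local locking. On RHEL 6 the lvm2-cluster package ships an lvmconf helper that switches /etc/lvm/lvm.conf to clustered locking (locking_type = 3). A sketch of the fix, to be run on every cluster node:

```shell
# Switch LVM to clustered locking (requires the lvm2-cluster package)
lvmconf --enable-cluster

# Confirm the change took effect in /etc/lvm/lvm.conf
grep '^\s*locking_type' /etc/lvm/lvm.conf

# Restart clvmd so it picks up the new locking type
service clvmd restart
```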
4. Try to start the clvmd service on the first node.
It fails here because we haven't installed the clvmd package on the second node.
[root@uagl1 Packages]# service clvmd start
Starting clvmd:
Activating VG(s):   Volume group "HAVG" is exported
  clvmd not running on node uagl2h
  3 logical volume(s) in volume group "vg_uagl1" now active
  clvmd not running on node uagl2h
                                                           [FAILED]
[root@uagl1 Packages]#
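Before starting clvmd cluster-wide, it is worth verifying that the package actually exists on every node. A quick check from the first node (this assumes passwordless SSH to uagl2h, the node name from the error above):

```shell
# Check lvm2-cluster on the local node
rpm -q lvm2-cluster

# Check it on the second node (uagl2h)
ssh uagl2h 'rpm -q lvm2-cluster'

# If it is missing on uagl2h, install it there and start the service:
# ssh uagl2h 'yum install -y lvm2-cluster && service clvmd start'
```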
5. Since we have two nodes in the cluster, make sure that clvmd is installed on the second node as well. If not, install it there, then start the service again.
[root@uagl1 Packages]# service clvmd start
Starting clvmd:
Activating VG(s):   Volume group "HAVG" is exported
  clvmd not running on node uagl2h
  3 logical volume(s) in volume group "vg_uagl1" now active
  clvmd is running on node uagl2h
                                                           [OK]
[root@uagl1 Packages]#
We have successfully deployed clustered LVM on both cluster nodes.
Let’s start GFS2 package installation.
GFS2 (Global File System 2):
[root@uagl1 yum.repos.d]# yum install gfs*
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package gfs2-utils.x86_64 0:3.0.12.1-32.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package      Arch      Version            Repository     Size
================================================================================
Installing:
 gfs2-utils   x86_64    3.0.12.1-32.el6    pkgrepo3      281 k

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 281 k
Installed size: 748 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : gfs2-utils-3.0.12.1-32.el6.x86_64                            1/1
  Verifying  : gfs2-utils-3.0.12.1-32.el6.x86_64                            1/1

Installed:
  gfs2-utils.x86_64 0:3.0.12.1-32.el6

Complete!
[root@uagl1 yum.repos.d]#
Configuring GFS2 on Red Hat Linux: You need shared storage here; I am using openfiler as the shared storage.
I have provisioned two LUNs to both cluster nodes using iSCSI.
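Attaching the openfiler LUNs over iSCSI looks roughly like this (the target IP 192.0.2.10 is a placeholder; substitute your openfiler address):

```shell
# Discover the iSCSI targets exported by the openfiler box
# (192.0.2.10 is a placeholder for your openfiler IP)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to all discovered targets
iscsiadm -m node -l

# The new LUNs should now appear as /dev/sd* block devices
fdisk -l
```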
1. Create a new volume group (name: HAVG). (Read the LVM tutorial.)
2. Create a new logical volume of size 10GB. (Read the LVM tutorial.)
3. Create a GFS2 filesystem.
You need the following information to create the GFS2 filesystem:
1. Cluster name (uacl1) – this can be the cluster name or any unique name for the lock table.
2. Logical volume name (havmvol)
3. Logical volume full path (/dev/HAVG/havmvol)
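The `-t` option of mkfs.gfs2 takes a lock table of the form `clustername:fsname`, where the cluster name must match the one defined in /etc/cluster/cluster.conf and the filesystem name must be unique within the cluster. Assembling it from the values above:

```shell
# Build the lock table string from the pieces listed above
cluster_name="uacl1"
fs_name="havmvol"
lock_table="${cluster_name}:${fs_name}"
echo "$lock_table"   # uacl1:havmvol
```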
[root@uagl1 /]# mkfs -t gfs2 -p lock_dlm -t uacl1:havmvol -j 4 /dev/HAVG/havmvol
This will destroy any data on /dev/HAVG/havmvol.
It appears to contain: symbolic link to `../dm-3'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/HAVG/havmvol
Blocksize:                 4096
Device Size                10.00 GB (2621440 blocks)
Filesystem Size:           10.00 GB (2621438 blocks)
Journals:                  4
Resource Groups:           40
Locking Protocol:          "lock_dlm"
Lock Table:                "uacl1:havmvol"
UUID:                      9de294ae-6df5-96cc-e65f-6a5be1f699b8

[root@uagl1 /]# mount -t gfs2 -o noatime,nodiratime /dev/HAVG/havmvol /havol1/
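A note on `-j 4` above: GFS2 needs at least one journal per node that will mount the filesystem, so four journals leave headroom beyond this two-node cluster. If you later add more nodes, journals can be added to a mounted GFS2 filesystem with gfs2_jadd (a sketch, assuming /havol1 is mounted):

```shell
# Add one more journal to the mounted GFS2 filesystem
gfs2_jadd -j 1 /havol1
```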
4. Add the following entry to /etc/fstab.
/dev/HAVG/havmvol /havol1 gfs2 defaults,noatime,nodiratime 0 0
5. Unmount and re-mount it.
[root@uagl1 /]# umount /havol1/
[root@uagl1 /]# mount /havol1/
6. Sometimes the gfs2 service will not start if the volume group is inactive.
Activate the VG using the "vgchange" command.
[root@uagl2 ~]# service gfs2 start
Mounting GFS2 filesystem (/havol1): invalid device path "/dev/HAVG/havmvol"
                                                           [FAILED]
[root@uagl2 ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  HAVG       2   1   0 wz--n-  10.18g 180.00m
  vg_uagl2   1   3   0 wz--n- 148.52g      0
[root@uagl2 ~]# vgchange -ay HAVG
  1 logical volume(s) in volume group "HAVG" now active
[root@uagl2 ~]#
7. You can check the GFS2 status using the command below.
[root@uagl1 ~]# service gfs2 status
Configured GFS2 mountpoints:
/havol1
Active GFS2 mountpoints:
/havol1
[root@uagl1 ~]#
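Finally, make sure the cluster stack and the filesystem services come back up automatically after a reboot; on RHEL 6 this is handled with chkconfig:

```shell
# Enable the cluster services at boot (run on both nodes)
chkconfig cman on     # cluster manager
chkconfig clvmd on    # clustered LVM daemon
chkconfig gfs2 on     # mounts the GFS2 entries from /etc/fstab
```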
Now we have successfully installed GFS2 and created a new shared volume (/havol1).
This volume will be mounted simultaneously on both the cluster nodes.