
How to Configure Software RAID on Linux?

Software RAID is one of the great features in Linux for protecting data against disk failure. Linux also offers LVM for configuring mirrored volumes, but software RAID recovery is much easier after a disk failure compared to Linux LVM. I have seen environments configured with both software RAID and LVM (volume groups built on top of RAID devices). Using simple mdadm commands, we can easily add and remove disks from a RAID array. The RAID levels listed below are supported in Red Hat Linux, and you can choose any level according to your requirements.

Supported Software RAID Configurations on Linux:


RAID Level   Description                                 Linux Option
RAID 0       Striping                                    --level=0 --raid-devices=3
RAID 1       Mirroring                                   --level=mirror --raid-devices=2
RAID 5       Striping with distributed parity            --level=5 --raid-devices=3
RAID 6       Striping with distributed double parity     --level=6 --raid-devices=4
RAID 10      Mirrored stripe                             --level=10 --raid-devices=4
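
These options plug straight into the mdadm --create command. For example, a three-disk RAID 5 array could be created as sketched below (the md device name and member partitions are assumptions for illustration):

# Sketch: RAID 5 across three partitions already tagged for RAID (names assumed)
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1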

Here I am going to demonstrate configuring software RAID volumes and configuring LVM on top of software RAID.

Available disks for configuring software RAID:
/dev/sdb
/dev/sdc

1. Label the disks with the software RAID tag:
Before configuring software RAID, you have to label the disks properly using the fdisk command, so that you can easily identify which disks belong to the RAID and the kernel can read them properly.
[root@mylinz ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x92acd7eb.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-512, default 512):
Using default value 512

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): l
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now perform the same steps for disk /dev/sdc as well.

[root@mylinz ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x1a0cd674.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-512, default 512):
Using default value 512

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
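
If you have several disks to prepare, the same interactive answers can be fed to fdisk from a script. A minimal sketch, assuming a blank disk that gets one full-size primary partition (the device name is an assumption):

# Answers: new, primary, partition 1, default start, default end,
# type fd (Linux raid autodetect), write. Device name is assumed.
printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk /dev/sdd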


2. Verify the flag status of both disks using the fdisk command.

[root@mylinz ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x92acd7eb

Device Boot Start End Blocks Id System
/dev/sdb1 1 512 524272 fd Linux raid autodetect
[root@mylinz ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1a0cd674

Device Boot Start End Blocks Id System
/dev/sdc1 1 512 524272 fd Linux raid autodetect
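
A quicker way to confirm the flag on both disks at once is to filter the fdisk listing, as sketched below:

# Show only the partitions tagged as Linux raid autodetect (fd)
fdisk -l /dev/sdb /dev/sdc | grep "Linux raid autodetect"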


3. Configure the desired RAID level. Here I am configuring RAID 1.

[root@mylinz ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
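
As the note in the output above warns, v1.x metadata lives at the start of the device and may not be readable by older boot loaders. If the array is meant to carry /boot, you can request the older metadata format instead; a sketch:

# Same mirror, but with 0.90 metadata so legacy boot loaders can read it
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1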

If the mdadm command is not found, you have to install the mdadm package using yum, as shown below.

[root@mylinz ~]# yum install mdadm*
Loaded plugins: refresh-packagekit, rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Package mdadm-3.1.3-1.el6.x86_64 already installed and latest version
Nothing to do
[root@mylinz ~]#


4. Create a filesystem on the md device and mount it. Do not forget to add the device details to /etc/fstab to mount the volume across system reboots (see the sample entry after the listing below).

[root@mylinz ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524260 blocks
26213 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@mylinz ~]# mkdir /appvol1
[root@mylinz ~]# mount /dev/md0 /appvol1/
[root@mylinz ~]# df -h /appvol1/
Filesystem Size Used Avail Use% Mounted on
/dev/md0 496M 11M 460M 3% /appvol1
[root@mylinz ~]#
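
A minimal /etc/fstab entry for this mount could look like the line below (the mount options are assumptions; adjust them to your standards):

# /etc/fstab entry to mount the RAID 1 filesystem at every boot
/dev/md0    /appvol1    ext4    defaults    0 2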


5. If you want to use the md device with LVM, skip step 4 and continue from here.
You can add the logical volume details to /etc/fstab to mount the volume across reboots (a sample entry follows the listing below).

[root@mylinz ~]# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
[root@mylinz ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 lvm2 a- 511.97m 511.97m
/dev/sda2 vg_mylinz lvm2 a- 19.51g 0
/dev/sdd lvm2 a- 512.00m 512.00m
/dev/sde lvm2 a- 512.00m 512.00m
/dev/sdf lvm2 a- 5.00g 5.00g
[root@mylinz ~]# pvs |grep /dev/md0
/dev/md0 lvm2 a- 511.97m 511.97m
[root@mylinz ~]# vgcreate raidvg /dev/md0
Volume group "raidvg" successfully created

[root@mylinz ~]# vgs raidvg
VG #PV #LV #SN Attr VSize VFree
raidvg 1 0 0 wz--n- 508.00m 508.00m

[root@mylinz ~]# lvcreate -L 200M raidvg
Logical volume "lvol0" created

[root@mylinz ~]# lvs |grep lvol0
lvol0 raidvg -wi-a- 200.00m

[root@mylinz ~]# mkfs.ext4 /dev/raidvg/lvol0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@mylinz ~]# mount /dev/raidvg/lvol0 /appvol1/
[root@mylinz ~]# df -h /appvol1
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/raidvg-lvol0
194M 5.6M 179M 4% /appvol1
[root@mylinz ~]#
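
As mentioned at the start of this step, a matching /etc/fstab entry keeps the logical volume mounted across reboots. A sketch, using the names created above (the mount options are assumptions):

# /etc/fstab entry for the logical volume built on top of the RAID device
/dev/mapper/raidvg-lvol0    /appvol1    ext4    defaults    0 2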

6. To monitor the RAID synchronization status, use the command below. You can press Ctrl-C at any time to terminate it.

[root@mylinz ~]# watch -n 2 cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Tue Jul 2 13:18:59 2013

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
524260 blocks super 1.2 [2/2] [UU]
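
If you would rather block until the initial synchronization finishes than watch the progress, mdadm can wait on the array; a sketch:

# Block until any resync/recovery activity on md0 has completed
mdadm --wait /dev/md0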


7. The same information can be retrieved directly from /proc/mdstat as well.

[root@mylinz ~]# tail -f /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
524260 blocks super 1.2 [2/2] [UU]

unused devices: <none>
tail: /proc/mdstat: file truncated
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
524260 blocks super 1.2 [2/2] [UU]

unused devices: <none>
^C

8. The mdadm configuration file is /etc/mdadm.conf.

[root@mylinz ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all

9. You have to update mdadm.conf with the newly configured RAID information, using the method below.

[root@mylinz ~]# mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=feac1ebf:af5d479c:c2d2ad78:3906bd9d name=mylinz:0
[root@mylinz ~]# mdadm --examine --scan >> /etc/mdadm.conf
[root@mylinz ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/0 metadata=1.2 UUID=feac1ebf:af5d479c:c2d2ad78:3906bd9d name=mylinz:0
[root@mylinz ~]#
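
Alternatively, the same ARRAY line can be generated from the running array instead of from the member superblocks; a sketch:

# Scan the active arrays (rather than the member disks) and append the result
mdadm --detail --scan >> /etc/mdadm.conf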


10. To see the configured RAID information, use the commands below.

[root@mylinz ~]# mdadm --query /dev/md0
/dev/md0: 511.97MiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
[root@mylinz ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Jul 2 12:24:46 2013
Raid Level : raid1
Array Size : 524260 (512.06 MiB 536.84 MB)
Used Dev Size : 524260 (512.06 MiB 536.84 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Jul 2 12:28:02 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : mylinz:0 (local to host mylinz)
UUID : feac1ebf:af5d479c:c2d2ad78:3906bd9d
Events : 19

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
[root@mylinz ~]#
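
As mentioned in the introduction, disks can be added to and removed from the array just as easily. A sketch of replacing a mirror member (device names are assumptions; the --fail step is only needed to force out a disk that has not failed on its own):

# Mark a member faulty, pull it out, then add the replacement disk;
# mdadm starts the resync automatically after the add.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1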

Now you have successfully configured Linux software RAID and viewed the RAID status using various commands.

If you want to remove the software RAID, use the method below. First, stop the RAID using the mdadm command. Once it is stopped, you can zero the superblocks to destroy the complete RAID configuration on the member disks. Also remember to remove the corresponding ARRAY line from /etc/mdadm.conf if you added one in step 9.

[root@mylinz ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@mylinz ~]# mdadm --query /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[root@mylinz ~]# mdadm --zero-superblock /dev/sdb1
[root@mylinz ~]# mdadm --zero-superblock /dev/sdc1
[root@mylinz ~]# watch cat /proc/mdstat
[root@mylinz ~]#
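
Note that if the array was placed under LVM as in step 5, the LVM stack has to be torn down before the array can be stopped; a sketch, assuming the names used above:

# Unmount and remove the LVM objects sitting on top of /dev/md0 first
umount /appvol1
lvremove /dev/raidvg/lvol0
vgremove raidvg
pvremove /dev/md0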


Thank you for reading this article. I hope you are now familiar with Linux software RAID.
