
How to configure Solaris 10 IPMP?

This article describes how to configure link-based IPMP interfaces in Solaris 10. IPMP (Internet Protocol Multi-Pathing) protects against a single network card failure and ensures that the system remains reachable over the network. The failure detection time can be tuned in the "/etc/default/mpathd" file; the default value is 10 seconds. The same file also has a "FAILBACK" option that controls whether the IP moves back to the primary interface once it recovers from a fault. "in.mpathd" is the daemon that handles IPMP operations. There are two types of IPMP configuration available in Solaris 10:


1. Link Based IPMP
Link-based IPMP detects network failures by checking the "IFF_RUNNING" flag on the interface. Unlike probe-based IPMP, it does not normally require any test IP addresses.

2. Probe Based IPMP
Probe-based IPMP detects network failures by sending ICMP "ECHO_REQUEST" probes to target systems. Unlike link-based IPMP, it requires a test IP address on each interface (see the example below).
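
For reference, a probe-based setup assigns a test address to each interface in the group and marks it with the "deprecated" and "-failover" flags so that the test address itself never fails over. The command below is only a minimal sketch; the test address 192.168.2.51 is an assumed example and is not part of the configuration in this article.

Arena-Node1#ifconfig e1000g1 addif 192.168.2.51 deprecated -failover netmask + broadcast + up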

1. Link Based IPMP

Request:
Configure the IP address "192.168.2.50" on e1000g1 and e1000g2 using link-based IPMP.
 
Step:1
Find out the NICs installed on the system and their status, and verify the ifconfig output as well.
Make sure the NICs to be used are up and not already in use.
Arena-Node1#dladm show-dev
e1000g0         link: up        speed: 1000  Mbps       duplex: full
e1000g1         link: up        speed: 1000  Mbps       duplex: full
e1000g2         link: up        speed: 1000  Mbps       duplex: full
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
Arena-Node1#
 
Step:2
Add the IP address to /etc/hosts and specify the netmask value in /etc/netmasks as shown below. On SPARC systems, also make sure each NIC uses its own MAC address by setting the "local-mac-address?" EEPROM variable to true.
Arena-Node1#cat /etc/hosts |grep 192.168.2.50
192.168.2.50    arenagroupIP
Arena-Node1#cat /etc/netmasks |grep 192.168.2
192.168.2.0     255.255.255.0
Arena-Node1#eeprom "local-mac-address?=true"
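
If the /etc/hosts and /etc/netmasks entries do not exist yet, you can append them from the command line as well. A simple sketch, using the same values as above:

Arena-Node1#echo "192.168.2.50    arenagroupIP" >> /etc/hosts
Arena-Node1#echo "192.168.2.0     255.255.255.0" >> /etc/netmasks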



Step:3
Plumb the interfaces which you are going to use for the new IP address and check their status in the "ifconfig" output.

Arena-Node1#ifconfig e1000g1 plumb
Arena-Node1#ifconfig e1000g2 plumb
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=1000842 mtu 1500 index 3
        inet 0.0.0.0 netmask 0
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 0:c:29:ec:b3:c3


Step:4
Configure the IP on the primary interface and add both interfaces to an IPMP group with a group name of your choice.

Arena-Node1#ifconfig e1000g1 192.168.2.50 netmask 255.255.255.0 broadcast + up
Arena-Node1#ifconfig e1000g1
e1000g1: flags=1000843 mtu 1500 index 3
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:b9

Arena-Node1#ifconfig e1000g1 group arenagroup-1
Arena-Node1#ifconfig e1000g2 group arenagroup-1
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=1000843 mtu 1500 index 3
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
        groupname arenagroup-1
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:c3


Step:5
Now we have to ensure that IPMP is working fine. This can be done in two ways.

i. Test:1 Remove the primary LAN cable and check the behaviour. Here I have removed the LAN cable from e1000g1; let's see what happens.

Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=19000802 mtu 0 index 3
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:c3
e1000g2:1: flags=1000843 mtu 1500 index 4
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255

Now I have connected the LAN cable back to e1000g1.

Arena-Node1#dladm show-dev
e1000g0         link: up        speed: 1000  Mbps       duplex: full
e1000g1         link: up        speed: 1000  Mbps       duplex: full
e1000g2         link: up        speed: 1000  Mbps       duplex: full
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=1000843 mtu 1500 index 3
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
        groupname arenagroup-1
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:c3

The configured IP moves back to the original interface where it was running before, because "FAILBACK=yes" is set in /etc/default/mpathd. In the same file you can also set the failure detection time for in.mpathd using the "FAILURE_DETECTION_TIME" parameter (in milliseconds), as shown below.

Arena-Node1#cat /etc/default/mpathd |grep -v "#"
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
Arena-Node1#
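
For example, to reduce the failure detection time to 5 seconds (5000 ms is just an example value), edit /etc/default/mpathd and then send a SIGHUP to in.mpathd so that it re-reads the file:

Arena-Node1#grep FAILURE_DETECTION_TIME /etc/default/mpathd
FAILURE_DETECTION_TIME=5000
Arena-Node1#pkill -HUP in.mpathd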


ii. Test:2 Most Unix admins work from a remote site, so you may not be able to perform the cable-pull test above. In that case, you can use the "if_mpadm" command to disable the interface at the OS level.
First I am going to disable e1000g1 and see what happens.

Arena-Node1#if_mpadm -d e1000g1
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=89000842 mtu 0 index 3
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:c3
e1000g2:1: flags=1000843 mtu 1500 index 4
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255


Now I am going to re-enable it.

Arena-Node1#if_mpadm -r e1000g1
Arena-Node1#ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:ec:b3:af
e1000g1: flags=1000843 mtu 1500 index 3
        inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
        groupname arenagroup-1
        ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842 mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        groupname arenagroup-1
        ether 0:c:29:ec:b3:c3

In the same way, you can manually fail over the IP from one interface to another.

In both tests, we can clearly see the IP moving from e1000g1 to e1000g2 automatically without any issues. So we have successfully configured link-based IPMP on Solaris.
These failover events are logged in /var/adm/messages as shown below.

Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 215189 daemon.error] The link has gone down on e1000g1
Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 594170 daemon.error] NIC failure detected on e1000g1 of group arenagroup-1
Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 820239 daemon.error] The link has come up on e1000g1
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 299542 daemon.error] NIC repair detected on e1000g1 of group arenagroup-1
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g1
Jun 26 21:03:59 node1 in.mpathd[3800]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2
Jun 26 21:04:07 node1 in.mpathd[3800]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g1
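
During such a test you can also watch these messages in real time, for example:

Arena-Node1#tail -f /var/adm/messages | grep in.mpathd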

To make the above configuration persistent across reboots, create the configuration files for both network interfaces.

Arena-Node1#cat /etc/hostname.e1000g1
arenagroupIP netmask + broadcast + group arenagroup-1 up
Arena-Node1#cat /etc/hostname.e1000g2
group arenagroup-1 up
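
These files can be created with a simple redirect, for example (using the same group name as above):

Arena-Node1#echo "arenagroupIP netmask + broadcast + group arenagroup-1 up" > /etc/hostname.e1000g1
Arena-Node1#echo "group arenagroup-1 up" > /etc/hostname.e1000g2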

Thank you for reading this article. Please leave a comment if you find it useful.
