
SOLARIS 11 – Link Aggregation vs IPMP: How to Configure Them?

How many Solaris administrators know the difference between link aggregation and IPMP (IP multipathing)? Even I didn't know the difference until last month, when I had to choose between IPMP and link aggregation for one of our servers. After reading the Solaris 11 manuals, I finally worked out the difference between the two.

Both IPMP and link aggregation increase network bandwidth by bonding multiple NICs into a single logical interface, and both support automatic failover. Link aggregation, however, uses the IEEE 802.3ad (LACP) protocol, so it can combine only Ethernet links into a logical interface and cannot be used on InfiniBand networks. Another big drawback of link aggregation is that all the aggregated links must be connected to the same switch. Its main advantage is that it spreads inbound traffic across multiple links.

IPMP, on the other hand, allows the links in a group to be connected to different switches, and it works on InfiniBand networks as well. In Solaris 11, IPMP can even be configured on top of link aggregations.
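As a rough sketch of that layered setup (assuming an aggregation named aggr1 already exists and the group name ipmp0 is free; names and addresses here are placeholders, not from this walkthrough):

```shell
# Create an IP interface over the existing aggregation.
ipadm create-ip aggr1

# Create an IPMP group and place the aggregation's IP interface under it.
ipadm create-ipmp ipmp0
ipadm add-ipmp -i aggr1 ipmp0

# Assign the data address to the IPMP interface, not to aggr1 itself.
ipadm create-addr -T static -a local=192.168.2.60/24 ipmp0/v4
```

This only illustrates the order of the commands involved; the rest of this article configures plain link aggregation and plain IPMP separately.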

SOLARIS 11’s LINK AGGREGATION

Let's see how to configure link aggregation on Solaris 11; it is completely different from Solaris 10. Here we are going to set up link aggregation between e1000g1 (net1) and e1000g2 (net2).


1. To list the installed physical Ethernet links:

root@Unixarena-SOL11:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             unknown    0      unknown   e1000g1
net2              Ethernet             unknown    0      unknown   e1000g2


2. Create a link-level aggregation using the dladm command.

root@Unixarena-SOL11:~# dladm create-aggr -l net1 -l net2 aggr1


3. Configure a new IP address on aggr1.

root@Unixarena-SOL11:~# ipadm create-ip aggr1
root@Unixarena-SOL11:~# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
net0: flags=1000843 mtu 1500 index 2
        inet 192.168.2.51 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:98:1:da
aggr1: flags=1000842 mtu 1500 index 3
        inet 0.0.0.0 netmask 0
        ether 0:c:29:98:1:e4
lo0: flags=2002000849 mtu 8252 index 1
        inet6 ::1/128
net0: flags=20002004841 mtu 1500 index 2
        inet6 fe80::20c:29ff:fe98:1da/10
        ether 0:c:29:98:1:da
aggr1: flags=20002000840 mtu 1500 index 3
        inet6 ::/0
        ether 0:c:29:98:1:e4
root@Unixarena-SOL11:~# ipadm create-addr -T static -a local=192.168.2.52 aggr1/buffbandwidth


4. Verify the newly configured logical interface aggr1.

root@Unixarena-SOL11:~# ifconfig aggr1
aggr1: flags=1000843 mtu 1500 index 3
        inet 192.168.2.52 netmask ffffff00 broadcast 192.168.2.255
        ether 0:c:29:98:1:e4


5. dladm has many sub-commands to verify the physical interface status and its properties.

root@Unixarena-SOL11:~# dladm help
The following subcommands are supported:
Link Aggregation: add-aggr       create-aggr     delete-aggr
                  modify-aggr    remove-aggr     show-aggr
root@Unixarena-SOL11:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             up         1000   full      e1000g1
net2              Ethernet             up         1000   full      e1000g2
root@Unixarena-SOL11:~# dladm show-link
LINK                CLASS     MTU    STATE    OVER
net0                phys      1500   up       --
net1                phys      1500   up       --
net2                phys      1500   up       --
aggr1               aggr      1500   up       net1 net2
root@Unixarena-SOL11:~# dladm show-linkprop |head
LINK     PROPERTY            PERM VALUE        DEFAULT      POSSIBLE
net0     speed               r-   1000         1000         --
net0     autopush            rw   --           --           --
net0     zone                rw   --           --           --
net0     duplex              r-   full         full         half,full
net0     state               r-   up           up           up,down
net0     adv_autoneg_cap     rw   1            1            1,0
net0     mtu                 rw   1500         1500         1500-16362
net0     flowctrl            rw   no           bi           no,tx,rx,bi,pfc,
                                                            auto
root@Unixarena-SOL11:~# dladm show-aggr
LINK              MODE  POLICY   ADDRPOLICY           LACPACTIVITY LACPTIMER
aggr1             trunk L4       auto                 off          short

6. Remove the IP address and the link aggregation.

root@Unixarena-SOL11:~# ipadm delete-ip aggr1
root@Unixarena-SOL11:~# dladm delete-aggr  aggr1
root@Unixarena-SOL11:~# dladm
LINK                CLASS     MTU    STATE    OVER
net0                phys      1500   up       --
net1                phys      1500   unknown  --
net2                phys      1500   unknown  --


That's all for the link aggregation part. Now we will move on to Solaris 11 IPMP configuration.

Solaris 11’s IPMP

Many of you may wonder what has changed in the Solaris 11 IPMP setup. In Solaris 10, you had to use the ifconfig command along with additional configuration files. Solaris 11 provides an advanced IP management command called "ipadm", which lets you set up IPMP in seconds and makes interface tuning persistent across reboots. To monitor IPMP interfaces, there is a tool called ipmpstat. Let's start now.

1. List the physical interfaces which you are going to use for IPMP (net1 & net2).

root@Unixarena-SOL11:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             unknown    1000   full      e1000g1
net2              Ethernet             unknown    1000   full      e1000g2
root@Unixarena-SOL11:~#


2. Create an IPMP group.

root@Unixarena-SOL11:~# ipadm create-ipmp ipmp1
root@Unixarena-SOL11:~# ifconfig ipmp1
ipmp1: flags=8011000802 mtu 68 index 4
        inet 0.0.0.0 netmask 0
        groupname ipmp1
root@Unixarena-SOL11:~#


3. Plumb the interfaces if they are not already plumbed.

root@Unixarena-SOL11:~# ifconfig net1 plumb
root@Unixarena-SOL11:~# ifconfig net2 plumb
root@Unixarena-SOL11:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             up         1000   full      e1000g1
net2              Ethernet             up         1000   full      e1000g2


4. Add the interfaces to the IPMP group which we created in the step above.

root@Unixarena-SOL11:~# ipadm add-ipmp -i net1 -i net2 ipmp1
root@Unixarena-SOL11:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
ipmp1             ipmp       down         --         --
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
net0              ip         ok           --         --
   net0/v4        static     ok           --         192.168.2.51/24
   net0/v6        addrconf   ok           --         fe80::20c:29ff:fe98:1da/10
net1              ip         ok           ipmp1      --
net2              ip         ok           ipmp1      --
root@Unixarena-SOL11:~#
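Both interfaces are active by default, so traffic can use either one. If you prefer an active/standby configuration, one interface can be marked as standby (a sketch; the choice of net2 is arbitrary, and the property name is per ipadm(1M)):

```shell
# Mark net2 as a standby interface within the IPMP group.
ipadm set-ifprop -p standby=on -m ip net2

# ipmpstat -i should now list net2 as inactive until a failover occurs.
ipmpstat -i
```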


5. Create an IP address on the IPMP group interface.

root@Unixarena-SOL11:~# ipadm create-addr -T static -a local=192.168.2.100 ipmp1/UnixVIP


6. Verify your work.

root@Unixarena-SOL11:~# ifconfig ipmp1
ipmp1: flags=8001000843 mtu 1500 index 6
        inet 192.168.2.100 netmask ffffff00 broadcast 192.168.2.255
        groupname ipmp1
root@Unixarena-SOL11:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
ipmp1             ipmp       ok           --         --
   ipmp1/UnixVIP  static     ok           --         192.168.2.100/24
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
net0              ip         ok           --         --
   net0/v4        static     ok           --         192.168.2.51/24
   net0/v6        addrconf   ok           --         fe80::20c:29ff:fe98:1da/10
net1              ip         ok           ipmp1      --
net2              ip         ok           ipmp1      --

root@Unixarena-SOL11:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           192.168.2.51/24
ipmp1/UnixVIP     static   ok           192.168.2.100/24
lo0/v6            static   ok           ::1/128
net0/v6           addrconf ok           fe80::20c:29ff:fe98:1da/10


Now have a look at the ipmpstat tool.

root@Unixarena-SOL11:~# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
::                        down   ipmp1       --          --
192.168.2.100             up     ipmp1       net1        net2 net1
root@Unixarena-SOL11:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp1       ipmp1       ok        --        net2 net1
root@Unixarena-SOL11:~# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
net2        yes     ipmp1       -------   up        disabled  ok
net1        yes     ipmp1       --mb---   up        disabled  ok
root@Unixarena-SOL11:~# ipmpstat -p
ipmpstat: probe-based failure detection is disabled
root@Unixarena-SOL11:~# ipmpstat -t
INTERFACE   MODE       TESTADDR            TARGETS
net2        disabled   --                  --
net1        disabled   --                  --
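The output above shows that probe-based failure detection is disabled, so IPMP is relying on link-state detection only. Probe-based detection can be enabled by adding test addresses to the underlying interfaces (a sketch; the addresses below are placeholders and must be reachable on the group's subnet):

```shell
# Add a test address to each underlying interface (placeholder addresses).
ipadm create-addr -T static -a local=192.168.2.101/24 net1/test
ipadm create-addr -T static -a local=192.168.2.102/24 net2/test

# ipmpstat -p should now report probe traffic to the discovered targets.
ipmpstat -p
```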


I really feel Solaris 11 is much better than Solaris 10 for Unix administrators, since Solaris 11 presents all the information in a neat format with meaningful commands.
See how to create a VNIC in Solaris 11 and assign an IP address to it.

You can also see how to enable probe-based IPMP failure detection here.

Thank you for reading this article.

