NIC teaming/bonding is used mostly in scenarios where you cannot afford to lose connectivity due to an Ethernet link failure, and it has other advantages as well, such as distributing bandwidth across links and providing fault tolerance.
Let us start with the configuration steps
Make sure you have at least two physical Ethernet cards in your Linux machine.
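If you are not sure which interfaces are present, you can list them first (example commands; your interface names may differ):
# ip link show
# lspci | grep -i ethernet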
Edit the configuration files of both Ethernet cards with the options shown below:
# less /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
MASTER=bond0
USERCTL=no
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
# less /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
Create a new file, /etc/sysconfig/network-scripts/ifcfg-bond0, with the parameters shown below:
# less /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.0.100
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
PEERDNS=yes
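Note that since the slave interfaces use BOOTPROTO=none, all of the IP settings now live on bond0. If eth0 currently carries the address you plan to move onto the bond, you can note it down before making the change, for example:
# ifconfig eth0 | grep "inet addr"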
In RED HAT 5
Append the following changes to the file mentioned below:
# vi /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=1 miimon=100
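If you want to confirm that your kernel's bonding driver accepts these parameters, you can list them first (an optional check):
# modinfo -p bonding | grep -E '^(mode|miimon)'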
In RED HAT 6
You will not find the modprobe.conf file, so you need to define the bonding options inside your ifcfg-bond0 configuration file as shown below:
# less /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.0.100
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"
USERCTL=no
PEERDNS=yes
BOOTPROTO=none
ONBOOT=yes
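On RHEL 6 it is also common to point modprobe at the bond device by dropping a small file under /etc/modprobe.d/ (the file name here is just an example; any name ending in .conf will do):
# vi /etc/modprobe.d/bonding.conf
alias bond0 bonding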
Here, you can use different values for mode and miimon
What are the different types of mode available for NIC bonding?
You can configure NIC teaming for various purposes, so during configuration you will have to specify the purpose for which you want to use it.
Here is the list of available modes:
balance-rr or 0: Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
balance-xor or 2: Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.
broadcast or 3: Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
802.3ad or 4: Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.
balance-tlb or 5: Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
balance-alb or 6: Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
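Whichever mode you pick, once the bond is up you can confirm the mode that is actually in effect through sysfs (assuming your bond device is named bond0; the output is the mode name followed by its number):
# cat /sys/class/net/bond0/bonding/mode
active-backup 1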
What is miimon in NIC Teaming?
Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports the MII tool, type the following command as root:
# ethtool <interface name> | grep "Link detected:"
# ethtool eth0 | grep "Link detected:"
Link detected: yes
# ethtool eth1 | grep "Link detected:"
Link detected: yes
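Later, once the bond is up, you can also read back the effective polling interval from sysfs (again assuming the bond device is named bond0):
# cat /sys/class/net/bond0/bonding/miimon
100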
For our demo purposes we will use mode 1 (active-backup) to configure NIC bonding for fault tolerance.
Now it is time to load the bonding module:
# modprobe bonding
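You can confirm that the module is loaded with:
# lsmod | grep bonding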
Restart the network service to make the changes take effect:
# service network restart
Verify that your configuration works properly using the command below:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 5f:5a:56:3b:23:54
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 4f:76:23:b4:76:f6
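To actually test the fault tolerance, take the active slave down and check which slave is active now (run this from the console of a test machine, not over the bonded link itself); you should see eth1 take over:
# ifdown eth0
# grep "Currently Active Slave" /proc/net/bonding/bond0
Currently Active Slave: eth1
# ifup eth0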
Check your network status
# ifconfig
bond0 Link encap:Ethernet HWaddr 5F:5A:56:3B:23:54
inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:675166546 errors:0 dropped:0 overruns:0 frame:0
TX packets:60123345 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:717558660669 (668.2 GiB) TX bytes:680121390699 (633.4 GiB)
eth0 Link encap:Ethernet HWaddr 5F:5A:56:3B:23:54
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:675130834 errors:0 dropped:0 overruns:0 frame:0
TX packets:601230970 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:717553120481 (668.2 GiB) TX bytes:680121390699 (633.4 GiB)
Interrupt:169 Memory:96000000-96012800
eth1 Link encap:Ethernet HWaddr 4F:76:23:B4:76:F6
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:35302 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5540188 (5.2 MiB) TX bytes:0 (0.0 b)
Interrupt:122 Memory:94000000-94012800
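A simple end-to-end check is to keep a ping running to the gateway (192.168.0.1 in this example) while you unplug one of the network cables; with mode 1 the ping should continue without interruption:
# ping 192.168.0.1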
Let me know your successes and failures.