I'm running RHEL 6.4, kernel-2.6.32-358.el6.i686, on an HP ProLiant ML350 G5 with two onboard Broadcom NetXtreme II BCM5708 1000Base-T NICs. My goal is to channel bond the two interfaces into a mode=1 (active-backup) failover pair.
My problem is that in spite of all evidence that the bond is set up and accepted, pulling the cable out of the primary NIC causes all communication to cease.
ifcfg-eth0 and ifcfg-eth1
First, ifcfg-eth0:
DEVICE=eth0
HWADDR=00:22:64:F8:EF:60
TYPE=Ethernet
UUID=99ea681d-831b-42a7-81be-02f71d1f7aa0
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Next, ifcfg-eth1:
DEVICE=eth1
HWADDR=00:22:64:F8:EF:62
TYPE=Ethernet
UUID=92d46872-eb4a-4eef-bea5-825e914a5ad6
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
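As a sanity check, slave membership can be confirmed both from the ifcfg files and from the bonding driver's own sysfs view; a minimal sketch, assuming the stock RHEL 6 paths:
# Confirm the MASTER/SLAVE directives made it into both slave configs
grep -E 'MASTER|SLAVE' /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
# Ask the bonding driver which interfaces it has actually enslaved
cat /sys/class/net/bond0/bonding/slaves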
ifcfg-bond0
My bond's config file:
DEVICE=bond0
IPADDR=192.168.11.222
GATEWAY=192.168.11.1
NETMASK=255.255.255.0
DNS1=192.168.11.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimmon=100"
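For reference, whether the driver actually picked these options up can be cross-checked at runtime via sysfs; a quick sketch, assuming the standard bonding knobs:
# Mode as applied by the driver (should read "active-backup 1")
cat /sys/class/net/bond0/bonding/mode
# Link-monitoring interval in milliseconds (0 would mean MII monitoring is effectively off)
cat /sys/class/net/bond0/bonding/miimon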
/etc/modprobe.d/bonding.conf
I have an /etc/modprobe.d/bonding.conf file that is populated thusly:
alias bond0 bonding
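(If it matters, the alias can be verified as below; I've kept the bonding options themselves in BONDING_OPTS rather than in modprobe.conf, which I believe is what RHEL 6 recommends.)
# Confirm modprobe resolves the bond0 alias to the bonding driver
modprobe -c | grep 'alias bond0'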
ip addr output
The bond is up and I can access the server's public services through the bond's IP address:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
inet 192.168.11.222/24 brd 192.168.11.255 scope global bond0
inet6 fe80::222:64ff:fef8:ef60/64 scope link
valid_lft forever preferred_lft forever
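(The identical MAC address on eth0, eth1, and bond0 is expected, as far as I know: with the driver default fail_over_mac=0, slaves are assigned the bond's MAC.) The currently active slave can also be read directly; a sketch, assuming the usual sysfs knobs:
# Which slave is currently carrying traffic in active-backup mode
cat /sys/class/net/bond0/bonding/active_slave
# MAC handling policy; the default (0/none) means all slaves share the bond's MAC
cat /sys/class/net/bond0/bonding/fail_over_mac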
Bonding Kernel Module
...is loaded:
# cat /proc/modules | grep bond
bonding 111135 0 - Live 0xf9cdc000
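(The driver version can also be double-checked against the v3.6.0 banner that shows up in /var/log/messages below.)
# Should match the version the kernel logs when the bond is created
modinfo bonding | grep ^version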
/sys/class/net
The /sys/class/net tree shows good things:
cat /sys/class/net/bonding_masters
bond0
cat /sys/class/net/bond0/operstate
up
cat /sys/class/net/bond0/slave_eth0/operstate
up
cat /sys/class/net/bond0/slave_eth1/operstate
up
cat /sys/class/net/bond0/type
1
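If it helps, /proc/net/bonding/bond0 aggregates the same information in one place (mode, polling interval, currently active slave, and per-slave MII status and link failure counts); I can paste that output if useful:
# One-stop status for the bond and each slave
cat /proc/net/bonding/bond0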
/var/log/messages
Nothing of concern appears in the log file. In fact, everything looks rather happy.
Jun 15 15:47:28 rhsandbox2 kernel: Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: Adding slave eth0.
Jun 15 15:47:28 rhsandbox2 kernel: bnx2 0000:03:00.0: eth0: using MSI
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: making interface eth0 the new active one.
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: first active interface up!
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: enslaving eth0 as an active interface with an up link.
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: Adding slave eth1.
Jun 15 15:47:28 rhsandbox2 kernel: bnx2 0000:05:00.0: eth1: using MSI
Jun 15 15:47:28 rhsandbox2 kernel: bonding: bond0: enslaving eth1 as a backup interface with an up link.
Jun 15 15:47:28 rhsandbox2 kernel: 8021q: adding VLAN 0 to HW filter on device bond0
Jun 15 15:47:28 rhsandbox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Up, 1000 Mbps full duplex
Jun 15 15:47:28 rhsandbox2 kernel: bnx2 0000:05:00.0: eth1: NIC Copper Link is Up, 1000 Mbps full duplex
So what's the problem?!
Yanking the network cable from eth0 causes all communication to go dark. What could the problem be and what further steps should I take to troubleshoot this?
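In case it points at anything, here is the kind of live test I can run next; a rough sketch, and the manual-failover write assumes active_slave is writable on this kernel, which I have not verified:
# Watch the bond's view while the eth0 cable is pulled; a working MII monitor
# should flip "Currently Active Slave" from eth0 to eth1
watch -n1 cat /proc/net/bonding/bond0
# Keep a ping to the gateway running in parallel to time the outage
ping 192.168.11.1
# Force a manual failover to eth1 to see whether the backup path works at all
echo eth1 > /sys/class/net/bond0/bonding/active_slave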
EDIT:
Further Troubleshooting:
The network is a single subnet, single VLAN provided by a ProCurve 1800-8G switch. I have added primary=eth0 to ifcfg-bond0 and restarted networking services, but that has not changed any behavior. I checked /sys/class/net/bond0/bonding/primary both before and after adding the option, and it has a null value, which I'm not sure is good or bad.
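(One more thing I could try, assuming the knob is writable on this kernel: set the primary slave at runtime through sysfs, bypassing the ifcfg plumbing, and read it back.)
# Set the preferred slave directly, then confirm the driver accepted it
echo eth0 > /sys/class/net/bond0/bonding/primary
cat /sys/class/net/bond0/bonding/primary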
Tailing /var/log/messages when eth0 has its cable removed shows nothing more than:
Jun 15 16:51:16 rhsandbox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Down
Jun 15 16:51:24 rhsandbox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Up, 1000 Mbps full duplex
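My understanding is that a working MII monitor would also log a failover from the bonding driver itself, something along the lines of "bonding: bond0: link status definitely down for interface eth0, disabling it" followed by "making interface eth1 the new active one"; I see nothing like that, only the bnx2 link messages. A filtered tail makes this easy to watch during the cable pull:
# Show only the bonding and NIC driver messages while the cable is out
tail -f /var/log/messages | grep -Ei 'bond|bnx2'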
I added use_carrier=0 to the BONDING_OPTS in ifcfg-bond0 to make the driver use MII/ETHTOOL ioctls for link detection instead of netif_carrier. After restarting the network service, there was no change in symptoms: pulling the cable from eth0 still causes all network communication to cease, and once again there are no errors in /var/log/messages save for the notification that the link on that port went down.
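(Two more readbacks I can post if useful, just to confirm that use_carrier actually took effect and that the kernel even notices the physical link drop; ethtool and the sysfs knob are assumed to be present as on a stock RHEL 6 install.)
# Confirm the use_carrier change was applied (expect 0)
cat /sys/class/net/bond0/bonding/use_carrier
# Confirm the kernel sees the physical link state on each slave
ethtool eth0 | grep 'Link detected'
ethtool eth1 | grep 'Link detected'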