bonding, bridging, and port density

Today is the day. You finally got 10GbE networking! Boy, is it expensive, though. The price per port is very high, $500-$1000 on the low end. On top of that, "production-grade" deployments usually need to be highly available. This raises the question: how can we best take advantage of this costly networking?

Enter bonding, also known as NIC teaming, channel teaming, or link aggregation. Bonding can solve the high-availability problem with several different modes of operation; the most common are active-backup and LACP. As far as making use of your ports goes, active-backup is the worst: it provides only high availability, leaving an entire port unused until a failure happens. Given the price per port, this is not ideal. A better solution is LACP, since it uses both interfaces to send and receive. But it requires switch support and carries protocol overhead, and traffic is hashed per flow, so a single stream is still limited to one link's speed. This means bonding two interfaces with LACP does not yield 2x the bandwidth.
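For reference, the two modes can be created with iproute2 like so (a sketch; the interface names are examples, and these commands require root):

```shell
# Active-backup: one slave carries traffic, the other sits idle as a spare.
# Slaves generally need to be down before they can join a bond.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip link set bond0 up

# LACP (802.3ad): both slaves carry traffic, but the switch must speak LACP
# too, and each flow is hashed onto exactly one member link.
ip link add bond1 type bond mode 802.3ad miimon 100 lacp_rate fast
```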

For me, the solutions above were less than ideal. I have a home lab with a small 8-port 10GbE switch (XS708E), and every port counts. I have three servers with two 10GbE NICs each (Intel X540-T2). I simply can't afford (figuratively and literally) to let a single interface sit unused with active-backup bonding, and LACP support on the switch was difficult to use and configure. That left me using all of my ports without high availability, without bonding of any kind. But then I had an idea…

I have figured out a way to put each interface, essentially, in two active-backup bonds at once. This allows me to use each interface at 100% without affecting the other unless an interface has failed, at which point the traffic is merged onto the single surviving physical port. It looks roughly like this:

  br0 members:  bond0, veth11        br1 members:  bond1, veth01
  bond0 slaves: eth2 (primary), veth00
  bond1 slaves: eth3 (primary), veth10
  veth pairs:   veth00 <-> veth01,   veth10 <-> veth11

If eth2 fails, bond0 fails over to veth00; its peer veth01 sits on br1, so br0's traffic rides out through eth3 (and vice versa). In this example I have two physical interfaces, eth2 and eth3. The rest of the interfaces we will be creating now.

I start by creating two bridges:

# ip link add br0 type bridge
# ip link add br1 type bridge

I then create two bonds:

# ip link add bond0 type bond mode active-backup
# ip link add bond1 type bond mode active-backup

Then I create the veth pairs that link the bonds to the bridges:

# ip link add veth00 type veth peer name veth01
# ip link add veth10 type veth peer name veth11

Finally, we need to plug in all the veth pairs and add our interfaces to the bond.

# ifenslave bond0 eth2 veth00
# ifenslave bond1 eth3 veth10
# brctl addif br0 bond0
# brctl addif br0 veth11
# brctl addif br1 bond1
# brctl addif br1 veth01
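On systems without ifenslave and brctl, the same wiring can be done with iproute2 alone (a sketch; slaves generally need to be down before joining a bond, and the primary slave is set through sysfs):

```shell
# Enslave the physical NIC and one veth end to each bond.
ip link set eth2 down;   ip link set eth2 master bond0
ip link set veth00 down; ip link set veth00 master bond0
ip link set eth3 down;   ip link set eth3 master bond1
ip link set veth10 down; ip link set veth10 master bond1

# Cross-connect: each bridge gets its own bond plus the *other* bond's
# veth peer, so failover traffic lands on the opposite bridge.
ip link set bond0 master br0
ip link set veth11 master br0
ip link set bond1 master br1
ip link set veth01 master br1

# Prefer the physical NIC in each active-backup bond, then bring it all up.
echo eth2 > /sys/class/net/bond0/bonding/primary
echo eth3 > /sys/class/net/bond1/bonding/primary
for i in eth2 eth3 veth00 veth01 veth10 veth11 bond0 bond1 br0 br1; do
  ip link set "$i" up
done
```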

And there you have it: fancy networking. You'll want to address the bridges as if they were the physical interfaces. In my case, br0 == eth2 and br1 == eth3. Things to check if you are having issues:

  • bonds are in active-backup mode
  • all bonds, interfaces, bridges, and veth pairs are in state UP
  • the physical interface is set as the primary interface in each bond
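Each of those checks can be done from the shell (a sketch; bond0 here stands in for whichever bond you are inspecting):

```shell
# Confirm the bonding mode, the primary slave, and which slave is active.
grep -E 'Bonding Mode|Primary Slave|Currently Active Slave' /proc/net/bonding/bond0

# One-line state summary (UP/DOWN) for every interface involved.
ip -br link show

# Confirm which ports are attached to which bridge.
bridge link show
```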

I am not sure whether I will stick with this configuration in the long run, but it is what I have been running for a little while now, and it is working as well as I could hope. There is no noticeable delay when either interface drops: failover occurs just as it would in a normal bonding setup (polling defaults to 500ms). There is also no measurable performance degradation when failing over and traversing two bridges. Overall, I am very happy with the entire setup.

Bonus: here are the commands to implement this in Open vSwitch (note that in this version the veth names are OVS patch ports rather than kernel veth devices). One caveat: I could not find a way to enforce a default or primary bond member, so after a failover you must run `ovs-appctl` to set the active slave in the bond again. I solved this with a systemd timer (a cron job would also work) set to run every minute. Not ideal, but certainly functional.

# ovs-vsctl add-br br0
# ovs-vsctl add-br br1
# ovs-vsctl add-bond br0 bond0 eth2 veth00 -- set port bond0 bond_mode=active-backup -- set interface veth00 type=patch options:peer=veth01
# ovs-vsctl add-bond br1 bond1 eth3 veth10 -- set port bond1 bond_mode=active-backup -- set interface veth10 type=patch options:peer=veth11
# ovs-vsctl add-port br0 veth11 -- set interface veth11 type=patch options:peer=veth10
# ovs-vsctl add-port br1 veth01 -- set interface veth01 type=patch options:peer=veth00
# ovs-appctl bond/set-active-slave bond0 eth2
# ovs-appctl bond/set-active-slave bond1 eth3
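The failback hack might look something like this (a sketch; the script name is hypothetical, and the only essential part is the ovs-appctl call):

```shell
#!/bin/sh
# restore-bond-primary.sh: re-pin the physical NICs as the active slaves.
# Safe to run repeatedly; if a NIC is still down, the command simply fails
# and OVS keeps using the backup (veth/patch) member.
ovs-appctl bond/set-active-slave bond0 eth2 || true
ovs-appctl bond/set-active-slave bond1 eth3 || true
```

Wired to a systemd timer with OnCalendar=minutely (or a `* * * * *` cron entry), this restores eth2/eth3 as the active slaves within a minute of the link coming back.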

2 thoughts on "bonding, bridging, and port density"

  1. What about simply using a bond with balance-alb or some other cumulative mode? 🙂

    • Certainly an option. However, I originally built this with 10G + 1G bonding in mind, and in that case balance-alb would be far from ideal. In the end I did use the same switch for everything, which means balance-alb should be perfectly fine. The only reason I am still running this setup is to see whether any problems show up in the long run, so I can use this method in the future when I am stuck with mixed-speed interfaces.

      There is also the issue of RX balancing with balance-alb, which would limit receive bandwidth and be less efficient in this situation.
