Bandwidth testing of an LACP bonded link in Linux with iperf

Validating our multi-channel ethernet teams on Debian Lenny & Ubuntu Lucid Lynx

Over the past two months my company (xtendx AG) moved our servers to a new data center at green.ch. One of the primary motivations for this move was to gain access to a multi-gigabit-per-second Internet link. Each of our production streaming servers has either a 2-channel or 4-channel ethernet bonding configuration with LACP. Once they were configured, I set out to test their capacity and validate the entire design and configuration.

To that end, I installed iperf on each of our servers. One box was configured as a server. Two others were configured as clients. Because of the method LACP uses to split up traffic, it is impossible, or at least very difficult, to set up a single server-to-server test that uses more than 1Gbps. LACP generally achieves multi-link speeds by splitting individual client-server traffic across the links on a per client-server address pair basis, so any one pair stays on one physical link.
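To make that concrete, here is a rough sketch of the layer3+4 transmit hash described in the kernel's bonding documentation (Documentation/networking/bonding.txt). It is illustrative only; the function names and example values are mine, and the real kernel code handles non-IP traffic and fragments differently.

import socket, struct

def ip_to_int(addr):
    # dotted-quad IPv4 address -> 32-bit integer
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def layer34_hash(src_ip, dst_ip, src_port, dst_port, slave_count):
    # ((src port XOR dst port) XOR ((src IP XOR dst IP) AND 0xffff)) modulo slave count
    ip_xor = (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) & 0xffff
    return ((src_port ^ dst_port) ^ ip_xor) % slave_count

# A single flow always hashes to the same slave, so one client-server
# iperf stream can never exceed the capacity of one physical link.
print(layer34_hash("192.0.2.10", "192.0.2.20", 43512, 5001, 2))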

Our setup here is fairly simple: three servers, each with either a 2-channel or 4-channel LACP ethernet bonding setup, connected to a lone HP ProCurve 2510G-24 switch. The switch was manually configured to place the ports into a dynamic LACP trunk. The bonds themselves are configured with the kernel bonding module parameters mode=4 miimon=100 max_bonds=4 xmit_hash_policy=1.
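For reference, here is a minimal sketch of how such a bond can be brought up on Debian Lenny or Ubuntu Lucid with the ifenslave-2.6 package. The filename, interface names, and addresses are placeholders rather than our production configuration, and the option names in /etc/network/interfaces vary slightly between ifenslave versions.

# /etc/modprobe.d/bonding (example filename)
alias bond0 bonding
options bonding mode=4 miimon=100 max_bonds=4 xmit_hash_policy=1

# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    slaves eth0 eth1

Once the bond is up, cat /proc/net/bonding/bond0 shows the negotiated 802.3ad state and the member slaves.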

Running iperf in server mode. Note that iperf listens on port 5001 by default, so adjust your firewall rules if necessary.

iperf -s -i 2
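If the server sits behind iptables, a rule along these lines opens the default iperf port (this assumes a default-deny INPUT chain; adapt it to your own ruleset):

iptables -A INPUT -p tcp --dport 5001 -j ACCEPT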

Running iperf in client mode. This was done on two physically separate machines.

iperf -c svr.example.com -t 2400 -i 2
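With xmit_hash_policy=1 (layer3+4) the TCP port numbers also enter the hash, so several parallel streams from one client may spread across that client's slaves. Whether they actually do depends on how the ports happen to hash, and the switch applies its own hashing on the far leg, so this does not guarantee more than 1Gbps end to end:

iperf -c svr.example.com -t 2400 -i 2 -P 4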

Yup, looks good. There was the possibility that both clients would have come in on the same link, because the decision about which channel to use is based upon the source and destination addresses. It is also by design, so don't fret! Simply using a different machine for one of the clients would resolve the issue.
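One way to confirm which slave carried each flow is to watch the per-slave byte counters while the tests run (the interface names here are placeholders for whatever slaves make up your bond):

watch -n 2 "grep -E 'eth0|eth1' /proc/net/dev"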