Bonding and Teaming Network Interfaces

There was a time when one network interface was all a server needed. You plugged it into a switch, gave it an IP, and called it a day. But as infrastructure got more critical — and downtime got more expensive — that single point of failure became a liability.

Enter: network bonding.

Bonding was the first solution to let you combine multiple NICs into one logical interface. It let servers survive cable failures, balance traffic across links, and maintain uptime even when part of the network went down. For years, it was the go-to method for building in redundancy at the network layer — especially on bare metal.

Then came teaming — a more flexible, userspace-driven approach designed to address some of bonding's rough edges. It offered faster failover detection, better event monitoring, and cleaner integration with newer Linux networking tools.

Today, both are still in use — and if you're running bare metal servers, especially in production, you're likely to use one or the other to keep things resilient and performing under load.

Before virtualization became the norm, bonding was the standard way to make physical servers more resilient at the network layer. It let you take two or more physical NICs and combine them into a single logical interface — usually for redundancy, sometimes for extra throughput.

In Linux, bonding was implemented as a kernel module starting in the early 2000s and became widely used across data centers and on-prem deployments. It was particularly common on:

  • Storage nodes that couldn't afford to drop connections
  • HA database clusters that needed predictable failover
  • Frontend web servers with high availability targets
  • Any setup where touching the box to fix a network issue was a costly disruption

Bonding worked reliably and was baked into ifcfg-style network scripts and early init systems. It had some quirks, but it got the job done.
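For a sense of what that looked like, here's a minimal sketch in the classic ifcfg style, assuming a RHEL-family layout; bond0, the eth0/eth1 names, and the address are placeholders:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the logical bond interface
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    BOOTPROTO=none
    IPADDR=192.0.2.10
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one physical slave (eth1 looks the same)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes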

Over time, though, its age started to show. Failover detection could be slow, integration with newer networking stacks was clunky, and debugging was limited. Enter teaming — a more modern approach that handled events in userspace and integrated better with evolving network management tools.

Today, bonding is still supported, and still widely used, but several major distributions have pushed teaming as the more modern option.

Where Did Bonding and Teaming Come From?

Linux bonding came out of necessity in the early days of high-availability Linux servers. It was developed and maintained by the open source community with key contributions from enterprise-focused Linux vendors like Red Hat and SUSE, who needed a solution for data center networking where downtime wasn't acceptable.

As networking hardware matured and distributed applications took over, the kernel bonding driver began to hit its limits. It wasn't very extensible, and it relied on fairly old assumptions about link state.

Teaming was introduced as a cleaner, userspace-managed alternative. Rather than handling everything inside the kernel, it uses a daemon (teamd) that monitors interfaces and manages behavior more flexibly. This allowed for faster failover, better monitoring, and simpler integration with newer tools like NetworkManager and its nmcli front end.
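As a rough illustration, teamd is driven by a JSON config. The sketch below sets up an active/backup team; team0 and the port names are placeholders:

    # /etc/teamd/team0.conf -- minimal active/backup team
    {
      "device": "team0",
      "runner": { "name": "activebackup" },
      "link_watch": { "name": "ethtool" },
      "ports": { "eth0": {}, "eth1": {} }
    }

    # Start the daemon against that config, then inspect live state
    teamd -f /etc/teamd/team0.conf -d
    teamdctl team0 state

In practice you'd usually let NetworkManager own the team rather than running teamd by hand, but the JSON config is the same either way.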

Red Hat was a major contributor to teaming, shipping it as a fully supported option in RHEL 7 and 8 and encouraging its adoption across the broader Linux ecosystem.

Here's how bonding and teaming compare:

    Feature          Bonding (Linux)              Teaming (Linux)
    ---------------  ---------------------------  -----------------------------
    Implementation   Kernel module                Userspace daemon (teamd)
    Performance      Good, limited modes          Flexible, event-driven
    Monitoring       Basic (MII/ARP monitoring)   Advanced link-watchers
    Tooling          Built into legacy stacks     Integrated with modern tools
    Status           Legacy, still supported      Preferred in modern distros

In most environments, either will work — but if you're deploying a new fleet or working with current tooling, teaming will likely fit better.

Common Modes and What They're Good For

Not all bonding or teaming setups are the same. The "mode" you choose determines how traffic is handled across your interfaces — and what kind of switch configuration (if any) is required on the other end.

Active/Backup (Bonding mode 1, Teaming activebackup)

  • Only one NIC is active at a time. If it fails, traffic switches to the backup.
  • No switch configuration required.
  • Most reliable, least fancy; a great default for general use (see the sketch just below).
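Here's a quick sketch of wiring this up by hand with iproute2; the names and address are placeholders, and in production you'd normally persist this through your distro's network manager:

    # Create the bond in active/backup mode with 100 ms link monitoring
    ip link add bond0 type bond mode active-backup miimon 100

    # Slaves have to be down before they can be enslaved
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0

    # Bring the bond up and address it
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0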

LACP / 802.3ad (Bonding mode 4, Teaming lacp)

  • Link aggregation with dynamic negotiation between server and switch.
  • Requires switch configuration (LAG or port channel).
  • Supports load balancing and redundancy.
  • More throughput potential, but more moving parts (server-side sketch below).
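The server side of an LACP bond can be sketched like this; the names are placeholders, and the switch ports must be configured as a matching LAG or the links will flap in confusing ways:

    # 802.3ad (LACP) bond with fast LACPDUs and L3+L4 hashing for better flow spread
    ip link add bond0 type bond mode 802.3ad miimon 100 \
        lacp_rate fast xmit_hash_policy layer3+4

    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up

    # Verify the aggregator actually negotiated with the switch
    grep -A 3 '802.3ad info' /proc/net/bonding/bond0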

Balance-RR (mode 0) and XOR (mode 2)

  • Older options that use simple round-robin or MAC/XOR-based distribution.
  • Generally need static link aggregation on the switch; without it, they can trigger MAC flapping, and round-robin can reorder packets.
  • Not widely recommended for modern environments.

Broadcast, TLB/ALB (modes 3, 5, 6)

  • Broadcast sends traffic on all interfaces — useful for some odd setups.
  • TLB/ALB try to automatically balance load without switch awareness.
  • Rare in production bare metal, but useful in isolated edge cases.

Where It Fits — and What to Watch Out For

Bonding and teaming aren't just for fancy HA clusters or ultra-sensitive workloads. They make a lot of sense in everyday production environments — especially if you're working with bare metal in a place where uptime matters.

If your servers have dual NICs, it's often worth putting them to use. A basic active/backup setup can give you clean failover without much complexity, and you'll be glad it's there the first time a cable gets knocked loose or a switch gets rebooted mid-deploy. This kind of redundancy is especially helpful in colocated or remote deployments where hands-on troubleshooting isn't quick or cheap.

That said, setting up bonding or teaming isn't always plug-and-play. You'll want to make sure your switch config matches your link mode — especially with LACP — or you'll run into flaky connections that are hard to debug. Some modes (like balance-rr or TLB) can confuse switches into thinking MAC addresses are bouncing between ports, which leads to some very strange behavior.

Even the order in which your NICs come up at boot can cause issues if you're not locking them down — interface naming can shift, especially if you're using udev, NetworkManager, or a custom provisioning system. And the way bonding is configured varies wildly between distros: Netplan, systemd-networkd, ifcfg, and NetworkManager all have their own quirks and assumptions.
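One common way to pin interface names is a systemd .link file that matches on MAC address; a small sketch, with the MAC and name as placeholders:

    # /etc/systemd/network/10-lan0.link -- give this NIC a stable name across reboots
    [Match]
    MACAddress=aa:bb:cc:dd:ee:01

    [Link]
    Name=lan0

With names pinned, your bond or team config can reference lan0 without worrying about which PCI slot enumerates first.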

The best advice? Test it. Simulate a failure — unplug a cable, reboot a switch — and see how your config behaves. That's the only way to know if your setup will hold when it actually matters.
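A simple drill, assuming an active/backup bond named bond0 and a reachable gateway at 192.0.2.1 (both placeholders):

    # Terminal 1: keep traffic flowing so you can see any drops
    ping 192.0.2.1

    # Terminal 2: watch which slave is currently active
    watch -n1 cat /proc/net/bonding/bond0

    # Terminal 3: simulate a failed link, confirm the ping barely hiccups, restore
    ip link set eth0 down
    ip link set eth0 up

    # For a teamed interface, the equivalent status check is:
    teamdctl team0 state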

Recap

Bonding and teaming aren't just “nice to have” — they're part of what makes bare metal resilient. Especially in environments where servers don't live behind a hypervisor or orchestrator, building in that redundancy at the link level gives you a better shot at staying online when something goes sideways.

Whether you go with the battle-tested bonding driver or the newer, more flexible teaming approach, both can give you reliability where it counts — with no magic, no middleware, and no regrets.
