Network Concepts — Bridges, OVS, and SDN
Key principles behind Linux bridges, Open vSwitch, and Software-Defined Networking in Proxmox.
Quick-reference for the three virtual networking layers used throughout this guide. Each builds on the previous one — read them in order.
Linux Bridge
A bridge is a virtual L2 switch implemented in the Linux kernel. It connects multiple network interfaces (physical NICs, VM tap devices, veth pairs) so that frames can flow between them as if they were plugged into the same physical switch.
How it works:
- Each interface attached to a bridge becomes a bridge port.
- The bridge maintains a MAC address table (FDB — Forwarding Database). When a frame arrives on a port, the bridge learns the source MAC and records which port it came from.
- For a destination MAC it already knows, the bridge forwards only to that port. For unknown MACs, it floods the frame to all other ports (the same behavior as an unmanaged switch).
- No routing (L3) — it only forwards L2 Ethernet frames.
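The learning behavior above can be inspected directly. A minimal sketch using iproute2 (requires root; `eth1` is a placeholder interface name, not one from this guide):

```shell
# Create a bridge and attach an interface to it as a bridge port
ip link add name br0 type bridge
ip link set dev eth1 master br0   # eth1: placeholder for a real NIC
ip link set dev br0 up

# Dump the FDB: one entry per learned source MAC, with the port it was seen on
bridge fdb show br br0
```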
In Proxmox: vmbr0 is a Linux bridge. Every VM you create gets a tap interface named tap&lt;VMID&gt;i&lt;N&gt; (e.g. tap100i0 for VM 100's first NIC) that is automatically attached to the bridge. The bridge acts as the virtual switch connecting all your VMs and the physical NIC.
[ enp1s0 (physical NIC) ]
|
[ vmbr0 ] ← Linux bridge
/ | \
tap100 tap101 tap102 ← VM virtual NICs

When to use: Default for most setups. Zero overhead, kernel-native, and simple to configure in /etc/network/interfaces.
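For reference, a typical vmbr0 stanza in /etc/network/interfaces looks like the following (the address, gateway, and NIC name are illustrative assumptions, not values from this guide):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
```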
Open vSwitch (OVS)
OVS is a programmable virtual switch. Where Linux Bridge has a fixed forwarding model, OVS exposes a flow table you can populate with OpenFlow rules — giving you full control over how packets are forwarded, rewritten, or dropped.
Key concepts:
- Bridge (`ovs-vsctl add-br`): same role as a Linux bridge, but backed by OVS.
- Port: an interface attached to the OVS bridge. Can be a physical NIC, a VM tap, or an internal port (a virtual interface local to the host, no physical NIC needed).
- Flow table: a prioritized list of match+action rules. A packet is checked against the table and the highest-priority matching rule wins. Default rule: `normal` (behaves like a Linux bridge).
- OpenFlow controller: an external process that programs the flow table. OVS ships without one — the default `normal` action handles forwarding just like Linux Bridge until you add custom rules.
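To make the match+action model concrete, here is a sketch of populating the flow table with `ovs-ofctl` (the bridge name, IP address, and OpenFlow port number are hypothetical):

```shell
# Higher-priority rules are consulted first; the best match wins.
# Drop IPv4 traffic destined for one host
ovs-ofctl add-flow ovsbr0 "priority=100,ip,nw_dst=10.20.0.99,actions=drop"

# Steer ARP out OpenFlow port 2 instead of flooding it
ovs-ofctl add-flow ovsbr0 "priority=50,arp,actions=output:2"

# Fallback: behave like a learning L2 switch (the default 'normal' action)
ovs-ofctl add-flow ovsbr0 "priority=0,actions=normal"

# Verify the installed flows
ovs-ofctl dump-flows ovsbr0
```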
What it adds over Linux Bridge:
| Feature | Linux Bridge | OVS |
|---|---|---|
| Programmable flows | No | Yes (OpenFlow) |
| Port mirroring | Limited | Built-in |
| VXLAN tunnels | No | Yes (native) |
| QoS / rate limiting | No | Yes |
| Used by OpenStack | No | Yes (Neutron/OVN) |
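As an example of the built-in port mirroring from the table, a hedged sketch that copies all bridge traffic to one port (the bridge and port names are placeholders; `tap102` stands in for a monitoring VM's interface):

```shell
# Create a mirror that copies every frame on ovsbr0 to the monitor port
ovs-vsctl -- --id=@out get Port tap102 \
    -- --id=@m create Mirror name=span0 select-all=true output-port=@out \
    -- set Bridge ovsbr0 mirrors=@m

# Remove the mirror again
ovs-vsctl clear Bridge ovsbr0 mirrors
```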
Internal port (useful when you have no spare NIC):
An internal port is a virtual interface created by OVS on the bridge. Assign it an IP — VMs attached to that OVS bridge can reach the host through it. No physical hardware required.
```shell
ovs-vsctl add-port ovsbr0 ovs-int -- set Interface ovs-int type=internal
ip addr add 10.20.0.1/24 dev ovs-int
ip link set ovs-int up
```
When to use: When you need VXLAN tunnels between hosts, per-flow traffic control, or want to learn how OpenStack/OVN works under the hood.
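And a sketch of the VXLAN feature from the table above: a tunnel port that encapsulates L2 frames toward a second host (the peer IP and VNI key are assumptions):

```shell
# On host A: encapsulate frames from ovsbr0 in VXLAN toward host B
ovs-vsctl add-port ovsbr0 vx0 -- set Interface vx0 type=vxlan \
    options:remote_ip=192.168.1.11 options:key=42
# Host B runs the mirror-image command with remote_ip pointing back at host A
```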
Software-Defined Networking (SDN)
SDN is an architectural pattern, not a specific technology. The core idea: separate the control plane (deciding where traffic goes) from the data plane (actually forwarding packets). A centralized controller programs many switches at once, instead of each switch making independent decisions.
In the context of Proxmox SDN:
Proxmox SDN is a cluster-wide management layer built on top of Linux bridges or OVS. It introduces:
- Zones: define what kind of network a group of VNets uses (Simple, VLAN, VXLAN, EVPN). A zone maps to a forwarding technology.
- VNets: named virtual networks within a zone. Proxmox generates a Linux bridge named after the VNet on every cluster node.
- Subnets: IP ranges and DHCP/DNS config attached to a VNet.
- EVPN / BGP integration: for multi-node L3 routing, Proxmox SDN delegates to FRR (the same daemon used in Exercise 4).
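Under the hood these objects live in /etc/pve/sdn/ and replicate cluster-wide via pmxcfs. A hedged sketch of what a Simple zone with one VNet might look like on disk (identifiers are made up; exact keys vary by zone type and Proxmox version):

```
# /etc/pve/sdn/zones.cfg
simple: labzone
        ipam pve

# /etc/pve/sdn/vnets.cfg
vnet: lab
        zone labzone
```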
Proxmox SDN (management)
|
Zone: VXLAN ──→ creates VXLAN tunnels between nodes
Zone: Simple ──→ creates plain Linux bridges
Zone: EVPN ──→ creates VXLAN tunnels + FRR for L3 routing
|
VNet: "lab" ──→ generates bridge "lab" on each node

Broader SDN (outside Proxmox): In Kubernetes, the CNI plugin is your SDN. Cilium's control plane programs eBPF maps (data plane) for pod routing — the same separation of control and data plane that OpenFlow introduced for physical switches.
When to use Proxmox SDN: When managing multiple Proxmox nodes and you want consistent, declarative network definitions across the cluster without editing /etc/network/interfaces on each node by hand.
How They Relate
Linux Bridge ── simple, kernel-native, single-host L2
↓ adds programmability
Open vSwitch ── flow tables, VXLAN tunnels, port mirroring
↓ adds lifecycle management
Proxmox SDN ── declarative zones/VNets, cluster-wide, FRR for L3
↓ adds workload identity
Kubernetes CNI ── Cilium/Calico: pod-level policy, BGP, eBPF

Each layer builds on the one before it, adding complexity. Start with Linux Bridge; reach for OVS when you need programmability or VXLAN; use Proxmox SDN when managing a cluster; and understand all three before reasoning about CNI plugins.