Jumbo Frames and MTU 9000 in Your Homelab
The default Ethernet MTU (Maximum Transmission Unit) is 1500 bytes — a value from 1980s networking standards. Jumbo frames extend this to 9000 bytes, fitting 6× more data per packet. For high-throughput storage traffic between servers and a NAS, jumbo frames reduce CPU overhead and can meaningfully improve throughput. But they require every device in the path to support them.
What Jumbo Frames Do
Each Ethernet frame has overhead: headers, framing bytes, checksums. At MTU 1500, a 9000-byte payload requires 6 frames and 6× the per-packet processing. With MTU 9000, it's one frame.
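The overhead arithmetic is easy to check. Here is a quick sketch in plain shell plus awk — the 38-byte figure (Ethernet header, FCS, preamble/SFD, inter-frame gap) and the 40-byte IPv4+TCP header assumption are standard values, not anything specific to this setup:

```shell
# Fraction of wire bandwidth carrying TCP payload at each MTU.
# Per-frame overhead outside the MTU: 14B Ethernet header + 4B FCS
# + 8B preamble/SFD + 12B inter-frame gap = 38B.
# Inside the MTU, 20B IPv4 + 20B TCP headers leave MTU-40 bytes of payload.
for mtu in 1500 9000; do
  awk -v m="$mtu" 'BEGIN {
    printf "MTU %d: %.1f%% payload efficiency\n", m, 100 * (m - 40) / (m + 38)
  }'
done
# → MTU 1500: 94.9% payload efficiency
# → MTU 9000: 99.1% payload efficiency
```

The bandwidth gain alone (~4 points) is modest; the bigger win is the per-packet processing reduction described below.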
Benefits:
- Reduced CPU cycles for network processing (fewer packets = fewer interrupts)
- Slightly higher effective throughput at 10G (reduced header overhead)
- Lower latency jitter for large sequential transfers
Where it helps most:
- iSCSI storage traffic (block protocols benefit heavily from fewer per-packet operations)
- NFS and SMB large file transfers on 10G networks
- VM live migration with large RAM
- Backup traffic between servers and NAS
Where it doesn't help:
- General web browsing (packets are small anyway)
- 1G networks (overhead savings are proportionally smaller)
- Latency-sensitive workloads (larger frames can add serialization delay)
The Critical Requirement: End-to-End MTU Consistency
Jumbo frames only work if every device in the path supports and is configured for the same MTU. If one switch, NIC, or endpoint is at MTU 1500, packets will be fragmented or dropped.
Every hop in the path — Server A → switch → NAS — must be at MTU 9000.
MTU mismatch causes: packet drops, severely degraded performance, connection hangs, silent corruption in some edge cases.
Check Your Current MTU
ip link show eth0
# Look for: mtu 1500
# Or
ip addr show | grep mtu
Configure MTU on Linux
Temporary (lost on reboot)
ip link set dev eth0 mtu 9000
Permanent: systemd-networkd
# /etc/systemd/network/10-eth0.network
[Match]
Name=eth0
[Link]
MTUBytes=9000
systemctl restart systemd-networkd
Permanent: Netplan (Ubuntu)
# /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      mtu: 9000
      dhcp4: true
netplan apply
Permanent: /etc/network/interfaces (Debian)
auto eth0
iface eth0 inet dhcp
    mtu 9000
For Proxmox bridges
# /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    mtu 9000
The bridge and the underlying physical interface both need MTU 9000.
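A minimal sketch of the matching stanza for the physical port (the interface name enp2s0 is carried over from the bridge example; substitute your own):

```
# /etc/network/interfaces — physical port carrying the bridge
auto enp2s0
iface enp2s0 inet manual
    mtu 9000
```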
Configure MTU on Switches
Mikrotik (RouterOS)
/interface ethernet set [find name=sfp-sfpplus1] mtu=9000
/interface ethernet set [find name=sfp-sfpplus2] mtu=9000
Or via web UI: Interfaces → select interface → MTU → 9000.
Note: Mikrotik sets MTU per interface, not globally. Set it on every port that will carry jumbo frame traffic.
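To confirm the switch itself passes jumbo frames, RouterOS can originate a large ping with the don't-fragment bit set. A hedged sketch (the target IP is an example; note that RouterOS's size parameter counts the whole IP packet, so 9000 here roughly corresponds to Linux's -s 8972 — verify the semantics against your RouterOS version's documentation):

```
# From the Mikrotik terminal: large non-fragmenting ping toward the NAS
/ping 192.168.1.20 size=9000 do-not-fragment count=4
```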
Ubiquiti UniFi
Jumbo frame support varies across UniFi switch models. Check the datasheet — USW-Pro-Aggregation and USW-Enterprise models support MTU 9000; some access switches don't.
For supported switches: Settings → Switching → enable Jumbo Frames.
Brocade/enterprise switches
interface TenGigabitEthernet 1/1/1
mtu 9216
Note: many enterprise switches use MTU 9216 as their jumbo frame size, leaving headroom for VLAN tags and encapsulation headers. Use 9000 on Linux endpoints and 9216 on the switch.
Configure MTU on NAS
TrueNAS Scale
Network → Interfaces → select interface → MTU → 9000 → Save.
Apply to both physical interfaces and any LAGG (link aggregation) groups.
Synology
Control Panel → Network → Network Interface → select → Edit → MTU → Jumbo Frame 9000.
Unraid
Settings → Network Settings → MTU → 9000.
Verify Jumbo Frames Work
Test by pinging with a large packet that requires jumbo frames:
# Linux: ping with 8972-byte payload (+ 20B IP + 8B ICMP headers = 9000)
ping -M do -s 8972 192.168.1.20
# If jumbo frames work: replies received
# If MTU mismatch: "Message too long" or no replies
The -M do flag prevents fragmentation — the packet must travel intact at full size. Any MTU mismatch in the path causes failure.
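To check several storage hosts at once, the same ping test can be wrapped in a loop. A sketch — the IP addresses are examples, substitute your own:

```shell
# Non-fragmenting jumbo ping against each storage host (example IPs)
for host in 192.168.1.20 192.168.1.30; do
  if ping -M do -s 8972 -c 3 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host: jumbo path OK"
  else
    echo "$host: MTU mismatch or unreachable"
  fi
done
```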
# Also test with iperf3 to measure actual throughput
# On server (receiver):
iperf3 -s
# On client (sender):
iperf3 -c 192.168.1.20 -t 30
# Compare results with MTU 1500 baseline
Expected Performance Impact
Results vary, but common observations on 10G storage links:
- NFS sequential read/write: 5-15% improvement
- iSCSI: 10-20% improvement in some configs
- CPU usage for network processing: 10-20% reduction at high throughput
The gains are most visible on CPU-limited paths (especially VMs or embedded NAS CPUs) and iSCSI workloads. For NVMe-to-NVMe transfers on fast servers, you may see minimal improvement because the CPU isn't the bottleneck.
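The interrupt-reduction claim is easy to quantify: at 10 Gbit/s line rate, the frame rate drops by roughly 6×. A shell + awk sketch (38 bytes is the standard per-frame Ethernet overhead outside the MTU):

```shell
# Frames per second needed to saturate 10 Gbit/s at each MTU
for mtu in 1500 9000; do
  awk -v m="$mtu" 'BEGIN {
    wire_bits = (m + 38) * 8   # bits on the wire per frame, incl. overhead
    printf "MTU %d: %.0f frames/s at line rate\n", m, 10e9 / wire_bits
  }'
done
# → MTU 1500: 812744 frames/s at line rate
# → MTU 9000: 138305 frames/s at line rate
```

Fewer frames per second means fewer interrupts and protocol-stack traversals, which is exactly where CPU-limited paths recover headroom.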
When NOT to Use Jumbo Frames
Mixed-MTU networks: If any traffic on the segment crosses a 1500 MTU device (like your router for internet traffic), jumbo frames on the same segment cause problems. Keep jumbo frames on a dedicated storage VLAN or isolated switch.
VMs using bridged networking: The VM, the bridge, and the physical NIC must all match. Missing one causes silent performance degradation.
Troubleshooting mode: Jumbo frame misconfiguration is a common source of confusing network issues (slow transfers, timeouts, connection resets). If you're troubleshooting unexplained problems, temporarily dropping to MTU 1500 everywhere is a useful diagnostic step.
Summary
Jumbo frames are worth enabling if you have 10G storage links and CPU overhead is measurable. The setup is straightforward — configure every device in the path to MTU 9000, verify with a non-fragmenting ping, and benchmark before and after. If your switch doesn't support jumbo frames or you're mixing storage and internet traffic on the same segment, stick with MTU 1500.
