PCIe Bifurcation: Add Multiple NVMe Drives to One Slot
A common homelab storage problem: you want multiple NVMe SSDs for a ZFS pool, caching, or VM storage, but your motherboard has only one M.2 slot — or you need more drives than slots allow. PCIe bifurcation solves this, if your motherboard and CPU support it.
What Is PCIe Bifurcation?
PCIe lanes are the bandwidth channels between your CPU and PCIe devices. A physical x16 slot provides 16 lanes, typically used by a GPU. PCIe bifurcation lets you split those 16 lanes into multiple independent channels — commonly x4+x4+x4+x4 (four devices) or x8+x8 (two devices).
A bifurcation-enabled NVMe expansion card plugs into your x16 slot and provides 4 M.2 slots, each getting 4 PCIe lanes. Every drive operates independently with its own bandwidth — no RAID controller, no shared bandwidth bottleneck.
Why it matters: 4x NVMe on a single x16 slot, no RAID overhead, each drive presents as a separate device to the OS. ZFS loves this.
Checking Bifurcation Support
Not all motherboards support PCIe bifurcation. Check your BIOS/UEFI:
Intel platforms (Desktop):
- Go to BIOS → PCIe configuration or CPU PCIe Lane settings
- Look for "PCIe x16 Slot Bifurcation" options: x16, x8+x8, x8+x4+x4, x4+x4+x4+x4
- Intel 11th gen and newer desktop platforms generally offer bifurcation; earlier ones often don't. Note that mainstream Intel desktop CPUs typically split the x16 slot only to x8+x8 or x8+x4+x4 — full x4+x4+x4+x4 is generally reserved for HEDT, so a 4-slot card may expose only two or three drives
AMD platforms (Desktop):
- Ryzen 3000/5000/7000 series with X570/B550/X670/B650 boards: most support bifurcation
- Check BIOS → AMD PBS or PCIe Configuration
Servers (HEDT and server-class):
- EPYC, Xeon, Threadripper: almost always support bifurcation — this is standard data center functionality
Easy check: Look up your motherboard's name + "PCIe bifurcation" on forums. The community has documented most popular boards.
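To get the exact board name to search with, you can read it straight from Linux. A minimal sketch assuming the standard sysfs DMI paths (the board_id helper is just for this example, not a standard tool):

```shell
# Print board vendor/model so you can search "<model> PCIe bifurcation".
# board_id is a tiny illustrative helper; it falls back to "unknown"
# if the sysfs path is missing (e.g. inside some containers/VMs).
board_id() { cat "$1" 2>/dev/null || echo "unknown"; }
echo "Vendor: $(board_id /sys/devices/virtual/dmi/id/board_vendor)"
echo "Board:  $(board_id /sys/devices/virtual/dmi/id/board_name)"
```

The same strings are what `dmidecode -s baseboard-product-name` reports, but the sysfs route needs no root.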
Types of NVMe Expansion Cards
Bifurcation Cards (Preferred)
These require motherboard bifurcation support but offer the best performance. Each M.2 slot gets direct CPU-connected PCIe lanes.
Popular options:
- Asus Hyper M.2 X16 Gen5 Card: 4x M.2 PCIe Gen 5 slots, x16 slot
- Sabrent Rocket 4 Plus-G: 4x M.2 slots, Gen 4 support
- GLOTRENDS PA12E: Budget-friendly 4x M.2 card
These cards are passive (no active components) — all the intelligence is in the CPU's lane management.
PLX Bridge Cards (No Bifurcation Required)
These use a PLX PCIe switch chip to multiplex multiple devices over a single PCIe connection. No bifurcation needed, but:
- The PLX chip adds latency
- All drives share bandwidth through the chip
- More complex, generates more heat
- More expensive
For homelab use, if you have bifurcation support, skip PLX cards.
RAID Controller Cards
Traditional HBA/RAID cards (LSI, Adaptec) are SAS/SATA controllers and don't drive NVMe at all. If you need hardware RAID with NVMe, look for tri-mode or NVMe-over-PCIe RAID controllers. But for ZFS, you want raw drives, not a RAID controller.
BIOS Configuration
Once you have a bifurcation NVMe card:
- Install the card in your x16 slot
- Boot into BIOS
- Find the PCIe bifurcation setting for that slot
- Set it to x4+x4+x4+x4 (for a 4-slot card)
- Save and boot
If set incorrectly, the system may not boot or may only see one drive. If drives don't appear:
- Verify bifurcation is set correctly for the right slot
- Some cards require specific slot placement (primary x16 slot vs secondary)
- Check that the M.2 slots on the card are properly seated and the retention screws are secure
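If drives still refuse to appear, the kernel log usually says why (link training failures, lane downgrades, probe errors). A quick, non-destructive check:

```shell
# Look for PCIe link or NVMe probe messages in the kernel log.
# Runs cleanly even on systems with no NVMe hardware present.
dmesg 2>/dev/null | grep -iE 'pcie|nvme' | tail -n 20 || true
```

Messages about a link downgrading to x1/x2, or an NVMe controller timing out, point at seating or bifurcation-setting problems rather than dead drives.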
Verifying Detection
After configuring bifurcation:
# List all NVMe devices
lsblk -d -o NAME,SIZE,TYPE,ROTA | grep -i nvme
# Detailed NVMe info
nvme list
# Check PCIe topology
lspci -tv | grep -i nvme
Each drive should appear as a separate NVMe device (/dev/nvme0n1, /dev/nvme1n1, etc.) with its own PCIe address.
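Beyond mere detection, it's worth confirming each drive actually negotiated a x4 link — a badly seated card can train at x1 or x2 and silently quarter your bandwidth. A sketch assuming the usual sysfs layout under /sys/class/nvme:

```shell
# Report negotiated PCIe link width/speed per NVMe controller.
# Prints nothing if no NVMe devices exist (the glob stays unmatched).
for dev in /sys/class/nvme/nvme*; do
  [ -e "$dev" ] || continue
  width=$(cat "$dev/device/current_link_width" 2>/dev/null || echo '?')
  speed=$(cat "$dev/device/current_link_speed" 2>/dev/null || echo '?')
  echo "$(basename "$dev"): x$width @ $speed"
done
```

Each line should read x4 at the generation your drive and slot support.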
ZFS Pool on Multiple NVMe Drives
With 4 independent NVMe drives, you can create various ZFS configurations:
# 4-drive mirror pool (2 mirrors = RAID10-like)
zpool create datapool mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1
# RAIDZ1 pool (like RAID5, one drive fault tolerance)
zpool create datapool raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1
# Striped pool (no redundancy, maximum speed)
zpool create datapool nvme0n1 nvme1n1 nvme2n1 nvme3n1
# Check pool status
zpool status datapool
For VM storage, a 2-drive mirror offers good redundancy and performance. For scratch/cache storage, striped or RAIDZ1 gives maximum capacity.
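One caution with the short names above: /dev/nvme0n1-style names can shuffle across reboots. ZFS tolerates this, but building the pool from stable /dev/disk/by-id paths (and setting ashift explicitly) avoids confusion later. The drive IDs below are placeholders — substitute your own:

```shell
# List stable identifiers for your NVMe drives (empty if none present).
ls -l /dev/disk/by-id/ 2>/dev/null | grep -i nvme || true
# Then build the pool from those names, e.g. (placeholder IDs):
# zpool create -o ashift=12 datapool \
#   mirror /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B \
#   mirror /dev/disk/by-id/nvme-DRIVE_C /dev/disk/by-id/nvme-DRIVE_D
```

ashift=12 pins the pool to 4K sectors, which matches virtually all modern NVMe drives.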
Performance Expectations
With x4 lanes each and PCIe Gen 4 drives, each drive gets ~7 GB/s theoretical bandwidth. In practice:
- Single drive random read: ~1 million IOPS
- Sequential read: ~6–7 GB/s per drive
- In the striped-mirror pool above, reads can come from all four drives: ~12–14 GB/s aggregate
- CPU cache effects and DRAM speed may bottleneck before the drives do
This is enough for hosting VMs, containers, and ZFS datasets that would overwhelm any SATA setup. For comparison, a 6x SATA SSD RAIDZ2 pool delivers ~3–4 GB/s max.
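To see whether your setup approaches those numbers, fio is the standard benchmarking tool. A read-only sketch (random 4K reads straight off the raw device; only switch to --rw=randwrite on a drive holding nothing you care about):

```shell
# 30-second random-read benchmark against the first NVMe drive.
# Reads only -- safe for data. Skips gracefully if fio or the
# device is absent.
if command -v fio >/dev/null && [ -e /dev/nvme0n1 ]; then
  sudo fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --direct=1 \
    --runtime=30 --time_based --ioengine=libaio --group_reporting
else
  echo "fio not installed or /dev/nvme0n1 not present"
fi
```

For sequential throughput, swap in --rw=read --bs=1M --iodepth=8; the IOPS figure from the 4K run is what matters for VM and database workloads.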
Thermal Considerations
Four NVMe drives in one slot generate heat. Thermal throttling reduces performance when drives exceed 70°C.
Mitigations:
- Buy expansion cards with integrated heatsinks (some include them)
- Add active cooling: a small 80mm fan aimed at the card keeps all drives cool
- Check drive temperatures under load: nvme smart-log /dev/nvme0 shows temperature data
- In cases with poor airflow, add a case fan specifically for the GPU/PCIe slot area
High-end drives (Samsung 990 Pro, WD Black SN850X) run hotter than value drives. Check thermal benchmarks for your specific drives before assuming they'll be fine.
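To keep an eye on all four drives at once, a small loop over nvme smart-log does the job (parse_temp here is an illustrative helper, not part of nvme-cli):

```shell
# Print the composite temperature line for every NVMe character device.
parse_temp() { grep -i '^temperature'; }
for d in /dev/nvme[0-9]; do
  [ -e "$d" ] || continue          # skip cleanly if no NVMe devices exist
  echo "$d: $(sudo nvme smart-log "$d" 2>/dev/null | parse_temp)"
done
```

Run it before and during a sustained write to see how close each drive gets to its throttle point.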
Proxmox and VM Use Case
A bifurcation NVMe array is excellent for Proxmox:
# Pass a whole drive through to a VM (use VirtIO SCSI rather than
# SATA emulation for high-I/O workloads like databases)
qm set 100 --scsi0 /dev/nvme0n1
# Or create a ZFS pool in Proxmox and use it for all VM storage
# In Proxmox web UI: Datacenter → Storage → Add → ZFS
For a homelab Proxmox node, 2 NVMe drives in a ZFS mirror gives you fast, redundant VM storage. The drives appear directly to ZFS with no RAID controller overhead.
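Registering an existing pool as Proxmox storage can also be done from the shell instead of the web UI. This sketch assumes a pool named datapool and picks nvme-zfs as the storage ID (both names are arbitrary):

```shell
# Register the ZFS pool as VM/container storage (Proxmox node only).
if command -v pvesm >/dev/null; then
  pvesm add zfspool nvme-zfs --pool datapool --content images,rootdir
else
  echo "pvesm not found -- run this on the Proxmox node"
fi
```

After that, nvme-zfs shows up as a storage target when creating VMs and containers.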
Common Issues and Fixes
All drives not appearing: Bifurcation setting mismatch. Verify x4+x4+x4+x4 is set for your specific slot.
One or two drives missing: Seating issue — check M.2 screw retention and SSD insertion angle.
Performance lower than expected: Check if drives are throttling (temperature). Also verify you're in the primary PCIe slot — secondary slots may have fewer lanes or slower connections to the CPU.
Boot issues after adding card: Some BIOS versions default to boot from any detected NVMe. Set your boot SSD explicitly in BIOS boot order.
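On UEFI systems you can inspect the firmware boot order from Linux before rebooting into the BIOS; this assumes a UEFI boot with efivars available:

```shell
# List UEFI boot entries; BootOrder shows which device wins at power-on.
if command -v efibootmgr >/dev/null && [ -d /sys/firmware/efi ]; then
  efibootmgr
else
  echo "not a UEFI boot (or efibootmgr not installed)"
fi
```

If a new drive has jumped to the front of BootOrder, fix it in the BIOS boot menu (or with efibootmgr's -o flag).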
Budget Build Example
For a homelab storage node:
- Sabrent Rocket 4 Plus-G card: ~$40
- 4x Silicon Power XD80 2TB NVMe (Gen 3): ~$70 each → $280
- Total: ~$320 for 8TB raw of NVMe (about 6TB usable in RAIDZ1)
For comparison, 8TB of enterprise SATA SSDs costs more and delivers a fraction of the IOPS. NVMe bifurcation brings data-center-style NVMe density to a single consumer PCIe slot.
