
PCIe Bifurcation: Add Multiple NVMe Drives to One Slot

Hardware · 2026-03-04 · 5 min read · Tags: pcie, nvme, bifurcation, storage, homelab, hardware
By the HomeLab Starter Editorial Team. Home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.

A common homelab storage problem: you want multiple NVMe SSDs for a ZFS pool, caching, or VM storage, but your motherboard has only one M.2 slot — or you need more drives than slots allow. PCIe bifurcation solves this, if your motherboard and CPU support it.


What Is PCIe Bifurcation?

PCIe lanes are the bandwidth channels between your CPU and PCIe devices. A physical x16 slot provides 16 lanes, typically used by a GPU. PCIe bifurcation lets you split those 16 lanes into multiple independent channels — commonly x4+x4+x4+x4 (four devices) or x8+x8 (two devices).

A bifurcation-enabled NVMe expansion card plugs into your x16 slot and provides 4 M.2 slots, each getting 4 PCIe lanes. Every drive operates independently with its own bandwidth — no RAID controller, no shared bandwidth bottleneck.

Why it matters: 4x NVMe on a single x16 slot, no RAID overhead, each drive presents as a separate device to the OS. ZFS loves this.
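
You can verify from Linux how many lanes a device actually negotiated. A quick check with lspci (the address 01:00.0 below is only an example; substitute the address your drive reports):

# Find the PCIe address of each NVMe controller
lspci | grep -i non-volatile

# Show the negotiated link speed and width for one device
sudo lspci -s 01:00.0 -vv | grep LnkSta

A drive behind a x4+x4+x4+x4 split should report Width x4; anything narrower points at a slot or seating problem.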

Checking Bifurcation Support

Not all motherboards support PCIe bifurcation. Check your BIOS/UEFI:

Intel platforms (Desktop): Consumer Intel CPUs typically split the primary x16 slot only into x8+x8 or x8+x4+x4, so a four-slot card may expose only two or three drives. Full x4+x4+x4+x4 is uncommon on mainstream Intel desktop boards.

AMD platforms (Desktop): Ryzen boards (B550, X570, and newer) commonly support x4+x4+x4+x4 on the primary CPU-connected slot; the setting's name and location in the BIOS vary by vendor.

Servers (HEDT and server-class): Threadripper, EPYC, and Xeon platforms have far more PCIe lanes and typically allow bifurcation on most or all slots.

Easy check: Look up your motherboard's name + "PCIe bifurcation" on forums. The community has documented most popular boards.
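
If you don't know the exact board model, you can read it off a running Linux system (assuming dmidecode is installed):

# Print the motherboard vendor and model to search for
sudo dmidecode -s baseboard-manufacturer
sudo dmidecode -s baseboard-product-name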

Types of NVMe Expansion Cards

Bifurcation Cards (Preferred)

These require motherboard bifurcation support but offer the best performance. Each M.2 slot gets direct CPU-connected PCIe lanes.

Popular options: the ASUS Hyper M.2 x16 series is the best-known quad-M.2 bifurcation card, and several other vendors sell similar passive adapters for less.

These cards are passive (no active components) — all the intelligence is in the CPU's lane management.

PLX Bridge Cards (No Bifurcation Required)

These use a PLX PCIe switch chip to multiplex multiple devices over a single PCIe connection. No bifurcation needed, but the switch chip makes these cards considerably more expensive, all drives share the bandwidth of the one upstream link, and the switch adds a little latency and heat of its own.

For homelab use, if you have bifurcation support, skip PLX cards.

RAID Controller Cards

Traditional HBA/RAID cards (LSI/Broadcom, Adaptec) are built for SAS/SATA drives. NVMe needs a different class of controller, so look for NVMe-capable (tri-mode) RAID controllers if you genuinely need hardware RAID with NVMe. For ZFS, though, you want raw drives, not a RAID controller.


BIOS Configuration

Once you have a bifurcation NVMe card:

  1. Install the card in your x16 slot
  2. Boot into BIOS
  3. Find the PCIe bifurcation setting for that slot
  4. Set it to x4+x4+x4+x4 (for a 4-slot card)
  5. Save and boot

If set incorrectly, the system may not boot or may detect only one drive. If drives don't appear, see Common Issues and Fixes below.

Verifying Detection

After configuring bifurcation:

# List all NVMe devices
lsblk -d -o NAME,SIZE,TYPE,ROTA | grep -i nvme

# Detailed NVMe info
nvme list

# Check PCIe topology
lspci -tv | grep -i nvme

Each drive should appear as a separate NVMe device (/dev/nvme0n1, /dev/nvme1n1, etc.) with its own PCIe address.
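
To confirm each drive landed on its own PCIe link, one option is to walk sysfs; the last path component of each result is the drive's PCIe address:

# Map each NVMe controller to its PCIe device path
for d in /sys/class/nvme/nvme*; do echo "$d -> $(readlink -f "$d/device")"; done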

ZFS Pool on Multiple NVMe Drives

With 4 independent NVMe drives, you can create various ZFS configurations:

# 4-drive mirror pool (2 mirrors = RAID10-like)
zpool create datapool mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# RAIDZ1 pool (like RAID5, one drive fault tolerance)
zpool create datapool raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1

# Striped pool (no redundancy, maximum speed)
zpool create datapool nvme0n1 nvme1n1 nvme2n1 nvme3n1

# Check pool status
zpool status datapool

For VM storage, a 2-drive mirror offers good redundancy and performance. For scratch/cache storage, striped or RAIDZ1 gives maximum capacity.
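
Short names like nvme0n1 work, but for a pool you intend to keep, /dev/disk/by-id/ paths are stable across reboots and device reordering. A minimal sketch with hypothetical by-id names (list your real ones with ls -l /dev/disk/by-id/):

# Mirror pool with stable device names and 4K sector alignment
zpool create -o ashift=12 datapool \
  mirror /dev/disk/by-id/nvme-Example_SSD_SN0001 /dev/disk/by-id/nvme-Example_SSD_SN0002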

Performance Expectations

With x4 lanes each and PCIe Gen 4 drives, each drive gets ~7 GB/s of theoretical bandwidth. In practice, expect roughly 5–7 GB/s sequential reads per drive, somewhat lower sustained writes, and four-drive aggregate throughput limited more by CPU and filesystem overhead than by the PCIe links themselves.

This is enough for hosting VMs, containers, and ZFS datasets that would overwhelm any SATA setup. For comparison, a 6x SATA SSD RAIDZ2 pool delivers ~3–4 GB/s max.
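
To measure what your setup actually delivers, fio is the standard tool (assuming it's installed). Reading from the raw device is safe; never point a write test at a device holding data:

# 30-second sequential read test against one drive
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
  --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based --readonly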

Thermal Considerations

Four NVMe drives in one slot generate real heat, and most consumer drives start thermal throttling once the controller passes roughly 70°C, cutting performance under sustained load.

Mitigations: pick a card with an integrated heatsink and fan (the ASUS Hyper M.2 ships with both), fit individual M.2 heatsinks otherwise, and make sure case airflow actually reaches the card. Monitor temperatures under load before trusting the setup.

High-end drives (Samsung 990 Pro, WD Black SN850X) run hotter than value drives. Check thermal benchmarks for your specific drives before assuming they'll be fine.
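
Checking temperatures from Linux is easy with nvme-cli, which the nvme list command above already uses:

# Read the controller temperature from the drive's SMART log
sudo nvme smart-log /dev/nvme0 | grep -i temperature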

Proxmox and VM Use Case

A bifurcation NVMe array is excellent for Proxmox:

# Pass an individual drive through to a VM (VirtIO SCSI outperforms SATA emulation for high I/O)
qm set 100 --scsi1 /dev/nvme0n1

# Or create a ZFS pool in Proxmox and use it for all VM storage
# In Proxmox web UI: Datacenter → Storage → Add → ZFS

For a homelab Proxmox node, 2 NVMe drives in a ZFS mirror gives you fast, redundant VM storage. The drives appear directly to ZFS with no RAID controller overhead.
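
If you prefer the CLI to the web UI, the same storage step is one pvesm command; the storage ID nvme-zfs here is just an example name:

# Register the ZFS pool as Proxmox storage for VM disks and containers
pvesm add zfspool nvme-zfs --pool datapool --content images,rootdir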

Common Issues and Fixes

No drives appearing: Bifurcation setting mismatch. Verify x4+x4+x4+x4 is set for the specific slot the card is installed in.

One or two drives missing: Seating issue — check M.2 screw retention and SSD insertion angle.

Performance lower than expected: Check if drives are throttling (temperature). Also verify you're in the primary PCIe slot — secondary slots may have fewer lanes or slower connections to the CPU.

Boot issues after adding card: Some BIOS versions default to boot from any detected NVMe. Set your boot SSD explicitly in BIOS boot order.

Budget Build Example

For a homelab storage node: a motherboard with x4+x4+x4+x4 support on its primary slot (many AMD B550/X570 boards qualify), a passive quad-M.2 bifurcation card, and four 2 TB PCIe Gen 4 NVMe drives, for 8 TB raw or roughly 6 TB usable in RAIDZ1.

For comparison, 8TB of enterprise SATA SSDs costs more and delivers a fraction of the IOPS. NVMe bifurcation brings data-center-style NVMe density to a single consumer PCIe slot.

Get free weekly tips in your inbox. Subscribe to HomeLab Starter