
ext4 vs XFS vs Btrfs vs ZFS: Choosing a Linux Filesystem for Your Homelab

Linux · 2026-02-09 · 11 min read · linux · filesystem · ext4 · xfs · btrfs · zfs · storage · nas

Filesystem choice is one of those decisions that's easy to make and painful to change. Unlike swapping a Docker container or switching a reverse proxy, changing your filesystem means backing up everything, reformatting, and restoring. Get it right the first time, and you'll never think about it again. Get it wrong, and you'll be planning a migration weekend.

The good news: there's no universally wrong choice among the major Linux filesystems. Each one has genuine strengths for different homelab use cases. This guide covers the practical differences that actually matter — not theoretical benchmarks on enterprise hardware you don't own, but real-world behavior on the kind of hardware homelabbers actually use.


Quick Comparison

| Feature | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| Max volume size | 1 EiB | 8 EiB | 16 EiB | 256 ZiB |
| Max file size | 16 TiB | 8 EiB | 16 EiB | 16 EiB |
| Copy-on-write | No | No (reflink yes) | Yes | Yes |
| Snapshots | No | No | Yes (subvolumes) | Yes (datasets) |
| Checksumming | Metadata only | Metadata only | Data + metadata | Data + metadata |
| Compression | No | No | zstd, lzo, zlib | lz4, zstd, gzip |
| RAID support | External (mdraid) | External (mdraid) | Built-in | Built-in |
| Deduplication | No | No | Yes (offline) | Yes (inline) |
| Self-healing | No | No | Yes (with RAID) | Yes (with mirrors/raidz) |
| RAM usage | Low | Low | Moderate | High (ARC cache) |
| Maturity | Very high | Very high | High | Very high |
| Default in | Most distros | RHEL/Fedora | SUSE, some Fedora | TrueNAS, Proxmox option |

ext4: The Reliable Default

ext4 has been the default Linux filesystem since 2008. It's the Toyota Camry of filesystems — not exciting, not flashy, but it starts every morning and gets you where you need to go without drama.

Strengths

  - Extremely mature, with the best-tested repair tooling on Linux (e2fsck)
  - Low RAM and CPU overhead; runs happily on anything from a Raspberry Pi up
  - Supported everywhere: every kernel, rescue disk, and recovery tool understands it
  - Predictable, consistent performance across workloads

Weaknesses

  - No snapshots, no data checksumming (metadata only), no built-in compression
  - 16 TiB maximum file size, which can matter for very large VM images or backup archives
  - No built-in RAID or volume management; you need mdraid or LVM on top

Creating an ext4 Filesystem

# Basic ext4 filesystem
mkfs.ext4 /dev/sdb1

# With optimizations for large storage
mkfs.ext4 -T largefile4 -O extent,huge_file,flex_bg,metadata_csum,64bit /dev/sdb1

# Disable reserved blocks (default 5% reserved for root — wasteful on data drives)
tune2fs -m 0 /dev/sdb1

# Or set reserved blocks during creation
mkfs.ext4 -m 0 /dev/sdb1
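That 5% reservation is worth doing the math on. A back-of-the-envelope sketch for a 4 TB data drive (the drive size here is just an example):

```shell
# Default 5% root reservation on a hypothetical 4 TB (decimal) drive.
drive_bytes=$(( 4 * 1000 * 1000 * 1000 * 1000 ))
reserved=$(( drive_bytes * 5 / 100 ))
echo "$reserved"    # 200000000000 bytes, roughly 200 GB you can't use
```

On a boot drive that reservation is useful (it keeps root able to log and recover when the disk fills); on a pure data drive it's dead space.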

Recommended fstab Options

/dev/sdb1 /mnt/data ext4 defaults,noatime,errors=remount-ro 0 2

When to Choose ext4

  - Boot and system drives on virtually any distro
  - Single data drives where simplicity beats features
  - Low-RAM hardware (Raspberry Pi, thin clients, old laptops)
  - Anywhere you want the most widely understood, easiest-to-recover filesystem

XFS: The Performance Workhorse

XFS was developed by Silicon Graphics in 1993 and has been the default filesystem in RHEL and Fedora since 2014. It excels at handling large files and parallel I/O — exactly the workload pattern of media servers, databases, and VM storage.

Strengths

  - Excellent throughput on large files and parallel I/O
  - Scales to enormous volumes (8 EiB) with consistent performance
  - Reflink support for instant copy-on-write file copies (cp --reflink)
  - Three decades of production hardening; the default in RHEL and Fedora

Weaknesses

  - Filesystems can be grown but never shrunk
  - No snapshots, no data checksumming, no transparent compression
  - Handles many small files less gracefully than ext4 or Btrfs
  - xfs_repair can need significant RAM on very large filesystems

Creating an XFS Filesystem

# Basic XFS filesystem
mkfs.xfs /dev/sdb1

# Optimized for RAID (match stripe unit and width to your array)
mkfs.xfs -d su=64k,sw=4 /dev/md0

# For NVMe/SSD with reflink enabled (copy-on-write file copies via cp --reflink)
mkfs.xfs -f -m reflink=1 /dev/nvme0n1p1

Recommended fstab Options

/dev/sdb1 /mnt/data xfs defaults,noatime,logbufs=8,logbsize=256k 0 2

When to Choose XFS

  - Media servers storing large video and image files
  - Database servers that need consistent, low-variance I/O
  - Large mdraid arrays where you want proven performance without CoW overhead
  - RHEL/Fedora systems where it's the well-supported default

Btrfs: The Feature-Rich Contender

Btrfs (B-tree filesystem, pronounced "butter-FS" or "better-FS") brings modern features like snapshots, checksumming, and compression to Linux without the complexity of ZFS. It's been the default in openSUSE since 2014 and is increasingly adopted by Fedora and other distributions.

Strengths

  - Instant, cheap snapshots via subvolumes
  - Data and metadata checksumming, with self-healing on RAID 1/10
  - Transparent compression (zstd, lzo, zlib)
  - Flexible by design: add or remove devices and convert RAID profiles online
  - In the mainline kernel, so no out-of-tree modules to maintain

Weaknesses

  - RAID 5/6 modes are still not considered production-safe (the write-hole problem)
  - Copy-on-write fragments databases and VM images (mitigated with chattr +C or the nodatacow mount option)
  - Free-space reporting can be confusing, and near-full behavior needs watching
  - Trails ext4 and XFS on some raw-performance workloads

Creating a Btrfs Filesystem

# Single drive
mkfs.btrfs /dev/sdb1

# RAID 1 mirror (two drives)
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# RAID 10 (four drives)
mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Subvolumes and Snapshots

# Mount the top-level subvolume
mount /dev/sdb1 /mnt/btrfs-root

# Create subvolumes
btrfs subvolume create /mnt/btrfs-root/@data
btrfs subvolume create /mnt/btrfs-root/@docker
btrfs subvolume create /mnt/btrfs-root/@snapshots

# Mount subvolumes individually
mount -o subvol=@data /dev/sdb1 /mnt/data
mount -o subvol=@docker /dev/sdb1 /var/lib/docker

# Create a snapshot
btrfs subvolume snapshot /mnt/data /mnt/btrfs-root/@snapshots/data-2026-02-09

# Create a read-only snapshot (required for send/receive)
btrfs subvolume snapshot -r /mnt/data /mnt/btrfs-root/@snapshots/data-2026-02-09-ro

Enabling Compression

# /etc/fstab
/dev/sdb1 /mnt/data btrfs defaults,noatime,compress=zstd:3,subvol=@data 0 0

Compression levels for zstd range from 1 (fast, less compression) to 15 (slow, more compression). Level 3 is the sweet spot for most data.

To check the compression ratio (and optionally recompress data written before compression was enabled):

# Recompress existing data in place (optional; new writes are compressed automatically)
btrfs filesystem defragment -rv -czstd /mnt/data

# Show original size, compressed size, and ratio
compsize /mnt/data
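To put a number on what a compression ratio means, take hypothetical compsize figures of 100 GiB referenced and 62 GiB on disk; the saving works out as:

```shell
# Hypothetical figures: percent saved = (referenced - on_disk) / referenced.
referenced_gib=100
on_disk_gib=62
echo "$(( (referenced_gib - on_disk_gib) * 100 / referenced_gib ))% saved"   # 38% saved
```

Text, logs, and databases land in that range easily; already-compressed media (video, JPEG) saves almost nothing, which is why zstd bails out quickly on incompressible data.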

Automated Snapshots with Snapper

# Install snapper
sudo apt install snapper    # Debian/Ubuntu
sudo dnf install snapper    # Fedora

# Create a snapper config for a subvolume
snapper -c data create-config /mnt/data

# Configure retention
snapper -c data set-config "TIMELINE_CREATE=yes"
snapper -c data set-config "TIMELINE_LIMIT_HOURLY=24"
snapper -c data set-config "TIMELINE_LIMIT_DAILY=7"
snapper -c data set-config "TIMELINE_LIMIT_WEEKLY=4"
snapper -c data set-config "TIMELINE_LIMIT_MONTHLY=12"

# List snapshots
snapper -c data list

# Restore files changed since a snapshot ("snapper rollback" itself only
# works on the root filesystem config with a suitable subvolume layout)
snapper -c data undochange <snapshot-number>..0

When to Choose Btrfs

  - You want snapshots and checksumming without ZFS's RAM demands or out-of-tree modules
  - Single drives or small RAID 1/10 mirrors
  - Root filesystems with rollback (the openSUSE/Snapper model)
  - Docker hosts where subvolume snapshots fit the workflow

ZFS: The Enterprise Powerhouse

ZFS is the most feature-complete filesystem available on Linux. Originally developed by Sun Microsystems for Solaris, it combines a filesystem and volume manager into a single, integrated system. It's the foundation of TrueNAS and a popular choice in Proxmox.

Strengths

  - End-to-end checksumming with automatic self-healing on redundant vdevs
  - Filesystem, volume manager, and RAID combined in one coherent system
  - Snapshots, clones, and efficient incremental send/receive replication
  - ARC caching, transparent compression, and inline deduplication
  - Decades of production use; the foundation of TrueNAS

Weaknesses

  - Hungry for RAM: the ARC wants memory, and inline dedup wants far more
  - Out-of-tree on Linux for licensing reasons (CDDL), so a kernel update can leave the module unbuilt until it catches up
  - Pools are harder to reshape after creation than Btrfs arrays
  - Steeper learning curve than the other three

ZFS Concepts

Pool (tank)                    ← Top-level storage container
├── VDev (mirror-0)           ← Redundancy group (mirror, raidz1/2/3)
│   ├── /dev/sdb              ← Physical disk
│   └── /dev/sdc              ← Physical disk
├── VDev (mirror-1)           ← Another redundancy group
│   ├── /dev/sdd
│   └── /dev/sde
└── Datasets                  ← Filesystems within the pool
    ├── tank/data
    ├── tank/docker
    ├── tank/backups
    └── tank/media

Creating a ZFS Pool

# Install ZFS
sudo apt install zfsutils-linux    # Debian/Ubuntu
sudo dnf install zfs               # Fedora (from ZFS repo)

# Mirror (2 drives) — like RAID 1
zpool create tank mirror /dev/sdb /dev/sdc

# RAIDZ1 (3+ drives) — like RAID 5, one drive can fail
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# RAIDZ2 (4+ drives) — like RAID 6, two drives can fail
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Striped mirrors (4 drives) — like RAID 10, best performance
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
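Usable capacity for the layouts above is easy to estimate: each raidz level gives up one drive of parity per vdev, and mirrors give up half. A sketch with hypothetical 4 TB drives (ignoring metadata overhead and the usual advice to keep pools below ~80% full):

```shell
# Rough usable-capacity math for the zpool layouts above.
drive_tb=4
echo "raidz1, 3 drives:    $(( (3 - 1) * drive_tb )) TB usable"   # one drive of parity
echo "raidz2, 4 drives:    $(( (4 - 2) * drive_tb )) TB usable"   # two drives of parity
echo "2x mirror, 4 drives: $(( 4 / 2 * drive_tb )) TB usable"     # half lost to mirroring
```

Note that raidz2 and striped mirrors yield the same capacity at four drives; the trade-off is resilience (any two failures vs. one per mirror) against rebuild speed and random I/O.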

Creating Datasets

# Create datasets with specific properties
zfs create tank/data
zfs create tank/docker
zfs create tank/media
zfs create tank/backups

# Set properties
zfs set compression=zstd tank/data
zfs set compression=lz4 tank/docker     # lz4 for speed on container layers
zfs set atime=off tank                  # Disable access time updates
zfs set recordsize=1M tank/media        # Large records for media files
zfs set recordsize=16K tank/docker      # Small records for database-like workloads

# Check compression ratio
zfs get compressratio tank/data

Snapshots and Replication

# Create a snapshot
zfs snapshot tank/data@2026-02-09

# List snapshots
zfs list -t snapshot

# Rollback to a snapshot
zfs rollback tank/data@2026-02-09

# Send a snapshot to another pool
zfs send tank/data@2026-02-09 | zfs receive backup/data

# Incremental send (only changes since last snapshot)
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | zfs receive backup/data

# Remote replication
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | ssh nas zfs receive backup/data
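Dated snapshot names like the ones above are easy to script. A minimal sketch of a name builder; the dataset name and date format are assumptions, while `zfs snapshot` itself is standard:

```shell
# Build a snapshot name of the form dataset@YYYY-MM-DD.
snap_name() {
    printf '%s@%s\n' "$1" "$(date +%F)"
}

snap_name tank/data    # e.g. tank/data@2026-02-09
```

A daily cron job could then run `zfs snapshot "$(snap_name tank/data)"`; tools like sanoid or zfs-auto-snapshot wrap this same idea with retention policies.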

ZFS RAM Tuning

# Check current ARC usage
arc_summary

# Limit ARC size (useful if ZFS is using too much RAM)
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592    # 8 GB maximum ARC size

# Apply without reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
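The magic number above is just 8 GiB expressed in bytes; a line of shell arithmetic avoids transcription mistakes when picking a different cap:

```shell
# Convert a GiB cap to the byte value zfs_arc_max expects.
arc_gib=8
echo $(( arc_gib * 1024 * 1024 * 1024 ))    # 8589934592
```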

When to Choose ZFS

  - Multi-drive NAS builds with 8 GB+ RAM
  - Data you cannot afford to lose silently: photos, documents, backups
  - Setups that benefit from snapshot-based replication to a second box
  - Proxmox or TrueNAS, where ZFS is a first-class citizen

Practical Decision Guide

By Use Case

| Use Case | Recommended | Runner-up | Avoid |
|---|---|---|---|
| Boot/system drive | ext4 | XFS | ZFS (kernel update risks) |
| Single data drive | ext4 or Btrfs | XFS | ZFS (overkill) |
| NAS (2-4 drives) | ZFS mirror | Btrfs RAID 1 | ext4 on mdraid |
| NAS (4+ drives) | ZFS RAIDZ2 | mdraid + XFS | Btrfs RAID 5/6 |
| Media storage | XFS | ext4 | - |
| Docker host | Btrfs or ext4 | XFS | ZFS (overhead) |
| Database server | XFS | ext4 | Btrfs (CoW fragmentation) |
| VM storage | ZFS (zvols) | XFS | Btrfs |
| Raspberry Pi | ext4 | - | ZFS (RAM), Btrfs |

By Hardware

| Hardware | Recommended | Reason |
|---|---|---|
| < 4GB RAM | ext4 or XFS | ZFS and Btrfs want more RAM |
| 4-8GB RAM | ext4, XFS, or Btrfs | ZFS possible with limited ARC |
| 8-16GB RAM | Any filesystem | ZFS comfortable with small pools |
| 16GB+ RAM | ZFS for storage pools | ARC can cache effectively |
| HDD storage | ZFS or Btrfs | Checksumming catches bit rot on spinning drives |
| NVMe/SSD only | XFS or ext4 | Less need for data integrity features on reliable media |
| Mixed HDD + SSD | ZFS with SLOG/L2ARC | SSD as write log and read cache |

Performance Comparison

Real-world performance on typical homelab hardware (consumer SSDs, 4-8 drives):

| Operation | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| Sequential write | Excellent | Excellent | Good | Good |
| Sequential read | Excellent | Excellent | Good | Excellent (ARC) |
| Random write (small) | Good | Good | Fair (CoW) | Fair (CoW) |
| Random read (small) | Good | Good | Good | Excellent (ARC) |
| Many small files | Good | Fair | Good | Fair |
| Large files | Good | Excellent | Good | Good |
| Compression benefit | N/A | N/A | 30-50% savings | 30-50% savings |
| Metadata operations | Fast | Fast | Fast | Moderate |

The performance differences between ext4, XFS, and Btrfs are generally small enough that workload characteristics, drive hardware, and configuration matter more than filesystem choice. ZFS stands out with its ARC cache for repeated reads and with compression for reducing I/O.

Migration Tips

If you need to change filesystems:

  1. Back up everything to an independent device (not just another partition on the same disk)
  2. Verify your backup — restore a few files to confirm integrity
  3. Create the new filesystem on the target drive(s)
  4. Restore data from backup
  5. Update /etc/fstab with new UUID and filesystem type
  6. Test thoroughly before deleting the backup
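Steps 1-2 are plain copy-and-verify work. A sketch against throwaway directories (a real run would point rsync or tar at the actual mount points; the paths here are stand-ins):

```shell
# Simulate backup (step 1) and verification (step 2) on temp directories.
src=$(mktemp -d); dst=$(mktemp -d)
echo "important data" > "$src/file.txt"

cp -a "$src/." "$dst/"                        # step 1: copy everything, preserving attributes
cmp -s "$src/file.txt" "$dst/file.txt" \
    && echo "backup verified"                 # step 2: byte-for-byte comparison
```

Verifying with cmp (or checksums) matters more than the copy tool: a backup you've never read back is a hope, not a backup.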

For Docker volumes, also export container configurations and database dumps separately — don't rely solely on filesystem-level backup of volume directories.

Final Thoughts

The filesystem landscape on Linux has never been better. ext4 remains a rock-solid default for single drives. XFS delivers consistent performance for media and database workloads. Btrfs brings modern features without the complexity of ZFS. And ZFS provides unmatched data integrity for serious storage pools.

For most homelabbers, the recommendation is simple: use ext4 for your boot drive, and either ZFS (if you have 8GB+ RAM and multiple drives) or Btrfs (if you want snapshots without ZFS's complexity) for your data storage. XFS is the right choice if raw performance on large files matters more than data management features.

Don't overthink it. Pick the filesystem that matches your hardware and use case, format your drives, and start using them. You can always migrate later — it's just a backup and restore away.