Practical Btrfs Snapshots for Homelab Backups and Rollbacks
Btrfs snapshots are one of those features that seem too good to be true until you actually need one. You're about to run a major system upgrade. You create a snapshot: it takes less than a second and consumes essentially no additional space. The upgrade breaks everything. You roll back to the snapshot in seconds. Done. No restoring from a backup drive, no reinstalling packages, no lost afternoon.
This is the power of a copy-on-write filesystem. Because Btrfs never overwrites data in place, creating a snapshot is just saving a reference to the current state of your data. Only when data changes after the snapshot does Btrfs need to store both the old and new versions. For a homelab, this means you can snapshot before every risky operation with effectively zero cost.
This guide covers practical Btrfs snapshot workflows for homelab use — not the theory, but the actual commands and configurations you'll use daily.
Subvolume Layout: Get This Right First
Btrfs snapshots work at the subvolume level, not the filesystem level. If all your data is in the root subvolume, you can only snapshot everything or nothing. A good subvolume layout lets you snapshot different data independently with different retention policies.
Here's a practical layout for a homelab server:
# Create the filesystem
mkfs.btrfs -L homelab /dev/sda2
# Mount the top-level subvolume to set things up
mount /dev/sda2 /mnt
# Create subvolumes
btrfs subvolume create /mnt/@ # Root filesystem
btrfs subvolume create /mnt/@home # Home directories
btrfs subvolume create /mnt/@var-log # Logs (snapshot separately, prune aggressively)
btrfs subvolume create /mnt/@snapshots # Snapshot storage
btrfs subvolume create /mnt/@docker # Docker data (optional)
# Unmount and remount with proper layout
umount /mnt
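Before writing any fstab entries, it's easy to sanity-check the layout: remount the top level and list the subvolumes you just created:
# Verify the subvolume layout
mount /dev/sda2 /mnt
btrfs subvolume list /mnt
umount /mnt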
Update /etc/fstab to mount each subvolume:
# /etc/fstab
UUID=your-uuid / btrfs subvol=@,compress=zstd:3,space_cache=v2,noatime 0 0
UUID=your-uuid /home btrfs subvol=@home,compress=zstd:3,space_cache=v2,noatime 0 0
UUID=your-uuid /var/log btrfs subvol=@var-log,compress=zstd:3,space_cache=v2,noatime 0 0
UUID=your-uuid /.snapshots btrfs subvol=@snapshots,compress=zstd:3,space_cache=v2,noatime 0 0
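Two small gaps to fill in before these entries will mount: the real UUID in place of your-uuid, and the /.snapshots mount point, which won't exist yet. Assuming /dev/sda2 as above:
# Get the filesystem UUID for the fstab entries
blkid -s UUID -o value /dev/sda2
# Create the missing mount point, then mount everything
mkdir -p /.snapshots
mount -a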
Why separate subvolumes? Because you probably want hourly snapshots of @ (your OS) but only daily snapshots of @home. And you definitely don't want log files eating up snapshot storage — @var-log can have a shorter retention.
Manual Snapshots
The most common use case: snapshot before doing something risky.
# Snapshot the root filesystem before an upgrade
btrfs subvolume snapshot -r / /.snapshots/@-pre-upgrade-2026-02-14
# Snapshot home directories before you mess with dotfiles
btrfs subvolume snapshot -r /home /.snapshots/@home-pre-dotfiles
# List all snapshots
btrfs subvolume list -s /
# Check snapshot disk usage
btrfs filesystem du -s /.snapshots/*
# Inspect snapshot metadata ('subvolume list' prints paths relative to the
# top-level subvolume, so iterate over the mounted directory instead)
for snap in /.snapshots/*; do
    echo "--- $snap ---"
    btrfs subvolume show "$snap" | grep -E "Name|UUID|Creation time|Generation"
done
The -r flag creates a read-only snapshot. Always use read-only snapshots for backups: they're immutable, which is the entire point of a backup, and btrfs send (covered below) only accepts read-only snapshots.
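Read-only isn't a one-way door, though. If a snapshot ever needs surgery (say, purging a secret that shouldn't live in your backups), you can flip the ro property and set it back:
# Check and toggle the read-only property
btrfs property get -ts /.snapshots/@-pre-upgrade-2026-02-14 ro
btrfs property set -ts /.snapshots/@-pre-upgrade-2026-02-14 ro false
btrfs property set -ts /.snapshots/@-pre-upgrade-2026-02-14 ro true
One caveat: on a backup destination, flipping ro clears the snapshot's received UUID and breaks future incremental receives, so only ever do this on the source side.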
Restoring a Snapshot
There are two approaches to restoring:
Method 1: Replace the subvolume (clean rollback)
# Boot from a live USB or another root, then:
mount -o subvolid=5 /dev/sda2 /mnt   # subvolid=5 is always the top-level subvolume
# Move the broken subvolume out of the way
mv /mnt/@ /mnt/@-broken
# Create a writable snapshot from the backup
btrfs subvolume snapshot /mnt/@snapshots/@-pre-upgrade-2026-02-14 /mnt/@
# Reboot into the restored system
reboot
# After verifying everything works, delete the broken subvolume
btrfs subvolume delete /mnt/@-broken
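An alternative to the mv shuffle is btrfs subvolume set-default, which points the filesystem's default subvolume at your snapshot. This is a sketch with an illustrative subvolume ID, and it only helps if your fstab and bootloader mount without an explicit subvol= option:
# Find the numeric ID of the snapshot you want to boot into
btrfs subvolume list /mnt
# Make it the default (what gets mounted when no subvol= is given)
btrfs subvolume set-default 256 /mnt
With the subvol=@ fstab entries used in this guide, Method 1 as written is the simpler path.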
Method 2: Copy specific files from the snapshot
# Mount the snapshot somewhere temporary (it's already read-only)
mkdir -p /mnt/snap
mount -o subvol=@snapshots/@-pre-upgrade-2026-02-14 /dev/sda2 /mnt/snap
# Copy specific files you need
cp /mnt/snap/etc/nginx/nginx.conf /etc/nginx/nginx.conf
cp -r /mnt/snap/etc/systemd/system/ /etc/systemd/system/
umount /mnt/snap
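If you're not sure which files changed, rsync's dry-run mode can diff the snapshot against the live system before you copy anything (run it while the snapshot is still mounted):
# List files under /etc that differ from the snapshot, without changing anything
rsync -avn --delete /mnt/snap/etc/ /etc/ | less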
Method 2 is less disruptive — you're surgically restoring only what broke instead of rolling back everything.
Automated Snapshots with Snapper
Manual snapshots are great for one-off operations, but you want automated, scheduled snapshots with automatic cleanup. That's what snapper does.
# Install snapper
sudo apt install snapper # Debian/Ubuntu
sudo dnf install snapper # Fedora
# Create a snapper configuration for the root filesystem
sudo snapper -c root create-config /
# Create a configuration for /home
sudo snapper -c home create-config /home
Configure snapshot policies:
# Edit the root config
sudo vim /etc/snapper/configs/root
# /etc/snapper/configs/root
SUBVOLUME="/"
# Create timeline snapshots
TIMELINE_CREATE="yes"
# Cleanup old timeline snapshots
TIMELINE_CLEANUP="yes"
# Retention policy
TIMELINE_LIMIT_HOURLY="12"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="6"
TIMELINE_LIMIT_YEARLY="1"
# Minimum age before cleanup (seconds)
TIMELINE_MIN_AGE="1800"
# Space-aware cleanup — delete snapshots when space is tight
SPACE_LIMIT="0.5"
FREE_LIMIT="0.2"
# Enable the timers
sudo systemctl enable --now snapper-timeline.timer
sudo systemctl enable --now snapper-cleanup.timer
# Verify it's working
snapper -c root list
Snapper output looks like this:
 # | Type   | Pre # | Date                     | User | Cleanup  | Description | Userdata
---+--------+-------+--------------------------+------+----------+-------------+---------
 0 | single |       |                          | root |          | current     |
 1 | single |       | Fri 14 Feb 2026 02:00:00 | root | timeline | timeline    |
 2 | single |       | Fri 14 Feb 2026 03:00:00 | root | timeline | timeline    |
 3 | single |       | Fri 14 Feb 2026 04:00:00 | root | timeline | timeline    |
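Timeline cleanup prunes these automatically, but you can also delete snapshots by hand; the numbers here are whatever snapper list shows:
# Delete one snapshot, or an inclusive range
snapper -c root delete 42
snapper -c root delete 100-120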
Pre/Post Snapshots
Snapper can bracket system changes with before/after snapshots:
# Create a pre-snapshot, run a command, create a post-snapshot
snapper -c root create --command "apt upgrade -y" -d "system upgrade"
# Or manually bracket the operation
PRE=$(snapper -c root create -t pre -p -d "manual upgrade")
apt upgrade -y
POST=$(snapper -c root create -t post -p --pre-number "$PRE" -d "manual upgrade")
# Compare changes between pre and post
snapper -c root diff "$PRE..$POST"
# Undo the changes (restore pre-snapshot state for modified files)
snapper -c root undochange "$PRE..$POST"
The undochange command is surgical — it only reverts files that changed between the two snapshots, leaving everything else untouched. This is safer than a full rollback when you want to undo a specific operation.
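Before committing to an undochange, snapper status lists exactly which files it would touch, using the PRE and POST numbers captured above:
# +, -, and c mark created, deleted, and changed files
snapper -c root status "$PRE..$POST"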
Offsite Backups with btrfs send
Snapshots protect against software mistakes but not hardware failure. If the drive dies, your snapshots die with it. The solution is btrfs send — a way to efficiently transmit snapshots to another machine.
Initial Full Send
# Create a read-only snapshot
btrfs subvolume snapshot -r / /.snapshots/@-2026-02-14
# Send it to a remote server
btrfs send /.snapshots/@-2026-02-14 | \
ssh backup-server "btrfs receive /mnt/backups/homelab/"
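Over a slow WAN link it can be worth compressing the stream in transit. Nothing Btrfs-specific here, just zstd on both ends, assuming it's installed on both machines:
# Compress the send stream in flight, decompress before receive
btrfs send /.snapshots/@-2026-02-14 | zstd | \
ssh backup-server "zstd -d | btrfs receive /mnt/backups/homelab/"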
Incremental Sends (Fast)
After the initial full send, subsequent sends only transfer the differences:
# Create today's snapshot
btrfs subvolume snapshot -r / /.snapshots/@-2026-02-15
# Send only the changes since yesterday's snapshot
btrfs send -p /.snapshots/@-2026-02-14 /.snapshots/@-2026-02-15 | \
ssh backup-server "btrfs receive /mnt/backups/homelab/"
The incremental send only transmits the blocks that changed between the two snapshots. For a typical homelab server where perhaps a few hundred megabytes change per day, this finishes in minutes even over a modest uplink, versus hours for a full send.
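btrfs send prints no progress by itself. If you have pv installed, dropping it into the pipeline shows throughput and total bytes transferred:
# Watch the incremental stream as it transfers
btrfs send -p /.snapshots/@-2026-02-14 /.snapshots/@-2026-02-15 | pv | \
ssh backup-server "btrfs receive /mnt/backups/homelab/"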
Automated Offsite Script
#!/bin/bash
# /usr/local/bin/btrfs-backup.sh
# Abort on any failure so a bad snapshot never gets "backed up" silently
set -euo pipefail
REMOTE="backup-server"
REMOTE_PATH="/mnt/backups/homelab"
SNAP_DIR="/.snapshots"
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)
# Create today's snapshot
btrfs subvolume snapshot -r / "$SNAP_DIR/@-$TODAY"
# Check if yesterday's snapshot exists on both sides
if btrfs subvolume show "$SNAP_DIR/@-$YESTERDAY" &>/dev/null && \
ssh "$REMOTE" "btrfs subvolume show $REMOTE_PATH/@-$YESTERDAY" &>/dev/null; then
# Incremental send
echo "Sending incremental backup..."
btrfs send -p "$SNAP_DIR/@-$YESTERDAY" "$SNAP_DIR/@-$TODAY" | \
ssh "$REMOTE" "btrfs receive $REMOTE_PATH/"
else
# Full send (first time or missing parent)
echo "Sending full backup..."
btrfs send "$SNAP_DIR/@-$TODAY" | \
ssh "$REMOTE" "btrfs receive $REMOTE_PATH/"
fi
# Prune old local snapshots (keep the 7 most recent)
find "$SNAP_DIR" -maxdepth 1 -name "@-*" -type d | sort | head -n -7 | while read -r snap; do
echo "Deleting old snapshot: $snap"
btrfs subvolume delete "$snap"
done
echo "Backup complete: @-$TODAY"
# Make it executable and schedule it
chmod +x /usr/local/bin/btrfs-backup.sh
# Add a systemd timer or cron job
echo "0 3 * * * root /usr/local/bin/btrfs-backup.sh >> /var/log/btrfs-backup.log 2>&1" | \
sudo tee /etc/cron.d/btrfs-backup
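If you'd rather use the systemd timer mentioned above, a minimal service/timer pair looks like this; the unit names are my own choice:
# /etc/systemd/system/btrfs-backup.service
[Unit]
Description=Daily Btrfs send/receive backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/btrfs-backup.sh

# /etc/systemd/system/btrfs-backup.timer
[Unit]
Description=Run btrfs-backup daily at 03:00

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target

# Enable the timer instead of the cron entry
sudo systemctl enable --now btrfs-backup.timer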
Common Pitfalls
Snapshots are not free over time. A snapshot starts at zero space, but as data changes, both the original and snapshot versions must be stored. If you have 100 hourly snapshots and your system writes 1 GB per hour, those snapshots consume up to 100 GB of extra space. Set retention limits and monitor disk usage.
# Check actual space used by snapshots
btrfs filesystem du -s /.snapshots/*
# Check overall filesystem usage
btrfs filesystem usage /
Don't snapshot everything. Exclude data that changes constantly and isn't worth snapshotting: Docker volumes (snapshot at the application level instead), database data directories (use pg_dump or mysqldump), and temp files. This is why the subvolume layout matters.
The parent snapshot must exist for incremental sends. If you delete the parent on either the source or destination, the next send must be a full send. Keep at least one common snapshot on both sides.
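You can confirm a usable parent exists by comparing UUIDs: the source snapshot's UUID should show up as the Received UUID on the destination copy. Paths here match the script above:
# UUID on the source side
btrfs subvolume show /.snapshots/@-2026-02-14 | grep -i uuid
# On the destination, Received UUID should equal the source's UUID
ssh backup-server "btrfs subvolume show /mnt/backups/homelab/@-2026-02-14 | grep -i uuid"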
Btrfs RAID5/6 is still risky. The long-standing write-hole bug in Btrfs parity RAID can corrupt data after an unclean shutdown, and snapshots stored there are just as exposed as everything else. Keep snapshotted data on RAID1, RAID10, or single-disk Btrfs.
Disk space reporting is confusing. df reports shared space between subvolumes in ways that can be misleading. Use btrfs filesystem usage / instead of df for accurate space reporting.
A Practical Workflow
Here's how this comes together for a typical homelab:
- Subvolume layout: Separate subvolumes for root, home, logs, and Docker data
- Snapper: Hourly snapshots with 12-hour retention for root, daily snapshots with 7-day retention for home
- Manual snapshots: Before any upgrade, config change, or risky operation
- Offsite: Daily incremental sends to a backup server
- Monitoring: Weekly check of btrfs filesystem usage / to catch runaway snapshot growth
The entire setup takes about 30 minutes. The first time a snapshot saves you from a botched upgrade, you'll wonder why you didn't set this up on day one.