Offsite Backup Strategies for Your Home Lab
A homelab without offsite backups is a homelab where one house fire, one burglary, or one power surge destroys everything. Local backups protect against disk failures and accidental deletions, but they can't protect against physical disasters. The 3-2-1 rule exists for a reason: three copies of your data, on two different media types, with one copy offsite.
Offsite backup used to mean carrying tapes to a safe deposit box. Today it means encrypting your data and sending it to a cloud storage provider or a remote machine you control. The key requirements: your data must be encrypted before it leaves your network, the process must be automated so it actually happens, and the cost must be predictable.

Cloud Storage Providers Compared
| Provider | Storage Cost | Egress Cost | Protocol | Min Retention | Free Tier |
|---|---|---|---|---|---|
| Backblaze B2 | $6/TB/mo | $0.01/GB | S3-compatible API | None | 10 GB |
| rsync.net | $14.50/TB/mo | Free | rsync, SSH, SFTP, Borg, rclone | None | None |
| Hetzner Storage Box | ~$3.50/TB/mo | Free (rsync/SFTP) | rsync, SFTP, SMB, WebDAV | None | None |
| Wasabi | $7/TB/mo | Free | S3-compatible API | 90 days | None |
| AWS S3 Glacier Deep | $1/TB/mo | $0.02/GB + retrieval fee | S3 API | 180 days | None |
| Storj | $4/TB/mo | $7/TB | S3-compatible API | None | 25 GB |
For most homelabs, Backblaze B2 offers the best balance of cost, simplicity, and ecosystem support. Hetzner Storage Box is the cheapest per-TB option with free egress. rsync.net costs more but gives you a full Unix shell, native Borg support, and ZFS snapshots on their end. AWS Glacier Deep Archive is dirt cheap for storage but expensive and slow to retrieve — use it only for "break glass in emergency" archives.
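To sanity-check the table against your own data size, here is a small awk sketch (the function name is mine; rates are copied from the comparison table, and egress/retrieval costs are not included):

```shell
# Estimate monthly storage cost: terabytes stored * rate per TB
# (rates from the comparison table; egress and retrieval fees not included)
monthly_cost() {
  awk -v tb="$1" -v rate="$2" 'BEGIN { printf "$%.2f\n", tb * rate }'
}
monthly_cost 2 6.00    # 2 TB on Backblaze B2 → $12.00
monthly_cost 2 3.50    # 2 TB on a Hetzner Storage Box → $7.00
```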
Method 1: Restic to Backblaze B2
Restic is a modern backup tool with built-in encryption, deduplication, and support for many cloud backends. It's the easiest path from zero to encrypted offsite backups.
Setup
# Install restic
sudo apt install restic # Debian/Ubuntu
sudo dnf install restic # Fedora
# Create a B2 bucket (use the Backblaze web UI or b2 CLI)
b2 authorize-account <keyID> <applicationKey>
b2 create-bucket homelab-backups allPrivate
# Initialize the restic repository
export B2_ACCOUNT_ID="your-key-id"
export B2_ACCOUNT_KEY="your-application-key"
restic -r b2:homelab-backups:/restic init
Restic will ask for a repository password. This password encrypts everything — lose it and your backups are irrecoverable. Store it in a password manager and keep a physical copy in a safe.
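For unattended runs, one common pattern is to keep that password in a file only root can read and point `RESTIC_PASSWORD_FILE` at it. A sketch (the path here uses `$HOME` for illustration; the automation script later in this article expects `/root/.restic-password`, and the password is obviously a placeholder):

```shell
# Keep the repo password in a file readable only by its owner
# (path and password are placeholders)
PASSFILE="${PASSFILE:-$HOME/.restic-password}"
umask 077                                   # new files get owner-only permissions
printf '%s\n' 'correct-horse-battery' > "$PASSFILE"
chmod 600 "$PASSFILE"
export RESTIC_PASSWORD_FILE="$PASSFILE"
# restic now runs without prompting, e.g.:
# restic -r b2:homelab-backups:/restic snapshots
```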
Running Backups
# Back up specific directories
restic -r b2:homelab-backups:/restic backup \
/home \
/etc \
/var/lib/docker/volumes \
--exclude="*.tmp" \
--exclude=".cache"
# Check backup integrity
restic -r b2:homelab-backups:/restic check
# List snapshots
restic -r b2:homelab-backups:/restic snapshots
# Restore a file
restic -r b2:homelab-backups:/restic restore latest --target /tmp/restore --include /etc/nginx
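As the exclude list grows, it can move out of the command line into a file referenced with `--exclude-file`. A sketch using a throwaway path:

```shell
# Collect excludes in one file instead of repeating --exclude flags
# (/tmp/restic-excludes is a throwaway example path)
cat > /tmp/restic-excludes <<'EOF'
*.tmp
.cache
node_modules
EOF
# Then reference it on each run:
# restic -r b2:homelab-backups:/restic backup /home /etc --exclude-file=/tmp/restic-excludes
```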
Automated Schedule
#!/bin/bash
# /usr/local/bin/offsite-backup.sh
set -euo pipefail
export B2_ACCOUNT_ID="your-key-id"
export B2_ACCOUNT_KEY="your-application-key"
export RESTIC_REPOSITORY="b2:homelab-backups:/restic"
export RESTIC_PASSWORD_FILE="/root/.restic-password"
# Run backup
restic backup \
/home \
/etc \
/var/lib/docker/volumes \
/opt/stacks \
--exclude="*.tmp" \
--exclude=".cache" \
--exclude="node_modules"
# Prune old snapshots: keep 7 daily, 4 weekly, 6 monthly
restic forget \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--prune
# Verify integrity weekly (Sunday), not on every run
if [ "$(date +%u)" -eq 7 ]; then
restic check
fi
# /etc/systemd/system/offsite-backup.timer
[Unit]
Description=Nightly offsite backup
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=1800
[Install]
WantedBy=timers.target
# /etc/systemd/system/offsite-backup.service
[Unit]
Description=Offsite backup with restic
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/offsite-backup.sh
Nice=10
IOSchedulingClass=idle
sudo systemctl enable --now offsite-backup.timer
Method 2: BorgBackup to rsync.net
If you're already using BorgBackup locally (and you should be — it's excellent), rsync.net offers first-class Borg support. Their servers have Borg installed, so you push directly from your Borg client.
Setup
# rsync.net gives you an SSH account
# Initialize a Borg repository on rsync.net
borg init --encryption=repokey ssh://[email protected]/./borg-homelab
# Export the repo key and back it up separately
borg key export ssh://[email protected]/./borg-homelab /root/borg-key-backup.txt
Running Backups
# Create a backup
borg create \
ssh://[email protected]/./borg-homelab::'{hostname}-{now:%Y-%m-%d}' \
/home \
/etc \
/var/lib/docker/volumes \
--exclude '*.tmp' \
--exclude '.cache' \
--compression zstd,3
# Prune old archives
borg prune \
ssh://[email protected]/./borg-homelab \
--keep-daily=7 \
--keep-weekly=4 \
--keep-monthly=6
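The long repository URL can be factored out with Borg's own environment variables, which also makes scripting less error-prone (values are examples; the passphrase file has to be created separately):

```shell
# BORG_REPO and BORG_PASSCOMMAND shorten every borg invocation
# (values are examples; create /root/.borg-passphrase yourself)
export BORG_REPO='ssh://[email protected]/./borg-homelab'
export BORG_PASSCOMMAND='cat /root/.borg-passphrase'
# The repository argument can now be dropped:
# borg create ::'{hostname}-{now:%Y-%m-%d}' /home /etc
# borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6
```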
rsync.net also takes ZFS snapshots of your account nightly, providing an additional layer of protection against accidental deletion or ransomware.
Method 3: Rclone to Any Provider
Rclone is the Swiss Army knife of cloud file transfer. It supports over 70 backends and can encrypt data with its crypt overlay.
Setup with Encryption
# Install rclone
sudo apt install rclone
# Configure a remote (interactive)
rclone config
# Or create the config manually
# ~/.config/rclone/rclone.conf
[b2-raw]
type = b2
account = your-key-id
key = your-application-key
[b2-encrypted]
type = crypt
remote = b2-raw:homelab-backups/encrypted
password = <obscured password from 'rclone obscure'>
password2 = <obscured salt from 'rclone obscure'>
Sync vs. Copy
# Sync mirrors a local directory to the remote (deletes removed files)
rclone sync /data/important b2-encrypted:/important --progress
# Copy only adds new/changed files (safer — doesn't delete)
rclone copy /data/important b2-encrypted:/important --progress
# For backups, 'copy' is safer. Use 'sync' only when you want an exact mirror.
Method 4: Self-Hosted Remote
If you have a friend with a homelab, a VPS, or a second location (office, family member's house), you can run your own offsite backup target.
Minimal Setup with SSH + Borg
On the remote machine:
# Create a dedicated backup user (access is restricted via authorized_keys below, not the shell)
sudo useradd -m -s /bin/bash borgbackup
sudo mkdir -p /backup/homelab
sudo chown borgbackup:borgbackup /backup/homelab
# Add your SSH public key to the backup user
sudo -u borgbackup mkdir -p /home/borgbackup/.ssh
echo "command=\"borg serve --restrict-to-path /backup/homelab\",restrict ssh-ed25519 AAAA..." \
| sudo -u borgbackup tee /home/borgbackup/.ssh/authorized_keys
The borg serve --restrict-to-path restriction ensures the key can only access the backup directory and only through Borg — it can't be used for general SSH access.
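If the remote should also resist ransomware or a compromised client, borg serve has an append-only mode; adding the flag to the same authorized_keys line means deletions and prunes issued through this key are not actually committed until an administrator lifts the flag on the server side. A sketch of the modified line:

```
command="borg serve --restrict-to-path /backup/homelab --append-only",restrict ssh-ed25519 AAAA...
```

The trade-off is that pruning then has to happen from the remote side (or via a separate, unrestricted key), so old archives accumulate until you do.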
WireGuard for Secure Transport
If your remote backup target isn't accessible over the public internet, set up a WireGuard tunnel between your homelab and the remote location. The backup traffic stays encrypted inside the tunnel, and you don't need to expose SSH to the internet.
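A minimal homelab-side config might look like this (keys, addresses, and the endpoint are all placeholders to adapt):

```
# /etc/wireguard/wg0.conf on the homelab side -- a sketch; keys,
# addresses, and the endpoint are placeholders
[Interface]
PrivateKey = <homelab-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <remote-public-key>
Endpoint = remote.example.com:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
```

Borg then targets the tunnel address instead of a public hostname, e.g. `ssh://[email protected]/backup/homelab`.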
Bandwidth Considerations
Your first backup will be large. Subsequent backups are incremental and much smaller thanks to deduplication.
| Data Size | Upload Speed | Initial Backup Time |
|---|---|---|
| 100 GB | 10 Mbps | ~24 hours |
| 100 GB | 100 Mbps | ~2.5 hours |
| 1 TB | 10 Mbps | ~10 days |
| 1 TB | 100 Mbps | ~1 day |
| 1 TB | 1 Gbps | ~2.5 hours |
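The table rows follow from size x 8 / speed; this awk sketch (function name is mine) lets you plug in your own numbers. Real-world throughput is lower than line rate, which is why the table rounds up:

```shell
# Hours for an initial backup at a given size and upload speed
backup_hours() {
  # GB * 8 = gigabits; * 1000 = megabits; / Mbps = seconds; / 3600 = hours
  awk -v gb="$1" -v mbps="$2" 'BEGIN { printf "%.1f\n", gb * 8000 / mbps / 3600 }'
}
backup_hours 100 10    # 100 GB at 10 Mbps  → 22.2 (the table's "~24 hours")
backup_hours 1000 100  # 1 TB at 100 Mbps   → 22.2 (the table's "~1 day")
```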
For slow connections, consider seeding your initial backup by shipping a drive. Both rsync.net and Backblaze offer import services.
Rate Limiting
Don't saturate your internet connection during backup:
# Restic: limit to ~50 Mbit/s (--limit-upload takes KiB/s)
restic -r b2:homelab-backups:/restic backup /data --limit-upload 6250
# Rclone: limit to ~50 Mbit/s (--bwlimit takes bytes/s, so ~6.25 MB/s)
rclone copy /data b2-encrypted:/data --bwlimit 6.25M
# Borg: limit to ~50 Mbit/s (--remote-ratelimit takes KiB/s)
borg create --remote-ratelimit 6250 ssh://... /data
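The three tools disagree on units, which makes these numbers easy to mix up: restic and borg take KiB/s, rclone takes bytes/s. A conversion sketch (helper names are mine):

```shell
# Convert a Mbit/s target into each tool's units
to_kibps()   { awk -v m="$1" 'BEGIN { printf "%d\n", m * 125000 / 1024 }'; }   # restic/borg: KiB/s
to_bwlimit() { awk -v m="$1" 'BEGIN { printf "%.2fM\n", m / 8 }'; }            # rclone: MB/s
to_kibps 50     # → 6103 KiB/s
to_bwlimit 50   # → 6.25M
```

The 6250 used above treats a kilobyte as 1000 bytes; either value lands within a few percent of 50 Mbit/s.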
Monitoring and Testing
Backups that aren't tested are wishes, not backups. Regularly verify your offsite backups:
# Restic: verify backup integrity
restic -r b2:homelab-backups:/restic check --read-data-subset=5%
# Borg: verify archive integrity
borg check --verify-data ssh://[email protected]/./borg-homelab
Set up monitoring to alert you when backups fail:
# After successful backup, touch a health check file
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-uuid-here
# Or use Uptime Kuma's push monitors
curl -fsS "http://uptime.homelab.lan:3001/api/push/abc123?status=up&msg=OK"
The worst time to discover your offsite backups don't work is when you need them. Test restores quarterly. Pick a random file, restore it from your offsite backup, and verify it matches the original. If you can't restore, you don't have a backup — you have a bill from a cloud storage provider.
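The quarterly test can be as small as a checksum comparison; a sketch (the function name is mine, and the paths in the comment are illustrative):

```shell
# Succeeds only when the restored copy is byte-identical to the original
verify_restore() {
  [ "$(sha256sum < "$1")" = "$(sha256sum < "$2")" ]
}
# Example, after 'restic restore latest --target /tmp/restore --include /etc/nginx':
# verify_restore /etc/nginx/nginx.conf /tmp/restore/etc/nginx/nginx.conf && echo "restore OK"
```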