MinIO: S3-Compatible Object Storage for Your Homelab
If you use any cloud service that stores files — Backblaze B2, Wasabi, AWS S3 — you're using object storage. MinIO brings that same model to your homelab: an S3-compatible API running on your own hardware, no AWS account required.
Once MinIO is running, anything that speaks S3 can use it. Backups from Restic, Velero, or Rclone. App uploads from Nextcloud or Immich. Off-site snapshots from Proxmox. All pointing at your own box.
What Is Object Storage?
Traditional filesystems organize data as files in folders. Object storage treats everything as a flat blob with a key — think of it as a giant key-value store optimized for large files. S3 (Simple Storage Service) is Amazon's implementation, and it became the de facto API standard. MinIO implements the same API, so most S3 clients work without modification.
The core concepts:
- Bucket: A named container for objects (like a folder, but flat)
- Object: A blob of data plus metadata, addressed by key
- Endpoint: The URL your S3 clients connect to
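The flat-namespace idea is easy to demonstrate. Here a plain Python dict stands in for a bucket (an illustration only, not MinIO's API): "folders" are just key prefixes, and a directory listing is really a prefix filter over keys.

```python
# Object storage has no real directories: every object lives in a flat
# namespace, and slashes in keys are just characters.
store = {
    "backups/2024/host1.tar.gz": b"...",
    "backups/2024/host2.tar.gz": b"...",
    "media/photo.jpg": b"...",
}

def list_objects(store, prefix):
    """Mimic an S3 list with a Prefix filter: a string-prefix match
    over keys, not a directory walk."""
    return sorted(k for k in store if k.startswith(prefix))

print(list_objects(store, "backups/2024/"))
# → ['backups/2024/host1.tar.gz', 'backups/2024/host2.tar.gz']
```

This is why renaming a "folder" in object storage is expensive: there is no folder to rename, only many keys to copy.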
Why Run MinIO in Your Homelab?
- Cost: No per-GB fees. Use drives you already have.
- Privacy: Backup data stays on your hardware.
- Speed: LAN speeds are much faster than cloud for large restores.
- S3 API compatibility: Every modern backup tool speaks S3. Zero client-side code changes.
- Kubernetes-ready: Works as an S3 backup target for tools like Velero and Longhorn, or runs in-cluster via the MinIO Operator.
Prerequisites
- A Linux server with spare disk space (a dedicated data directory)
- Docker or a bare-metal install (this guide covers Docker)
- Port 9000 (API) and 9001 (console) available
Installation with Docker Compose
Create a directory and compose file:
```shell
mkdir -p ~/minio/{data,config}
cd ~/minio
```
```yaml
# docker-compose.yml
services:
  minio:
    image: quay.io/minio/minio:latest
    container_name: minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: changeme-use-strong-password
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
```
Start it:
```shell
docker compose up -d
```
Access the console at http://<your-server-ip>:9001 and log in with your root credentials.
First-Time Setup
Create a Bucket
From the console, click Create a Bucket. Name it something meaningful — backups, media, k8s-velero. Enable versioning if you want point-in-time restores.
Create a Service Account
Never use root credentials in your apps. Instead, create a scoped service account:
- Go to Identity → Service Accounts → Create Service Account
- Set an expiry if you want rotation enforcement
- Optionally attach an inline policy to restrict which buckets this account can access
- Copy the generated access key and secret key
Example Bucket Policy (read/write to one bucket)
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::backups", "arn:aws:s3:::backups/*"]
    }
  ]
}
```
Connecting Clients
Restic (Backups)
Restic has native S3 support. Point it at your MinIO endpoint:
```shell
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export RESTIC_REPOSITORY=s3:http://192.168.1.50:9000/backups
export RESTIC_PASSWORD=your-repo-password

restic init
restic backup /important/data
```
The s3:http:// prefix tells Restic to use HTTP (fine for LAN). Use s3:https:// if you've put MinIO behind a TLS reverse proxy.
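To make this a recurring backup, a simple cron entry is enough. The env-file path below is an assumption for illustration — it would contain the four exports above so credentials stay out of the crontab:

```shell
# /etc/cron.d/restic-minio (hypothetical file) — nightly backup at 02:00.
# /root/restic-minio.env is an assumed path holding the four exports above.
0 2 * * * root . /root/restic-minio.env && restic backup /important/data
```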
Rclone
Add a remote to ~/.config/rclone/rclone.conf:
```ini
[minio]
type = s3
provider = Minio
endpoint = http://192.168.1.50:9000
access_key_id = your-access-key
secret_access_key = your-secret-key
```
Then sync files:
```shell
rclone sync /local/path minio:backups/subdir
```
AWS CLI
The AWS CLI works with MinIO by overriding the endpoint:
```shell
aws --endpoint-url http://192.168.1.50:9000 \
  s3 ls s3://backups/
```
Or configure a profile:
```shell
aws configure --profile minio
# Enter your MinIO access/secret keys; region can be anything (e.g., "us-east-1")
aws --profile minio --endpoint-url http://192.168.1.50:9000 \
  s3 cp file.tar.gz s3://backups/
```
Putting MinIO Behind a Reverse Proxy
For TLS termination and a clean URL, add MinIO to Nginx Proxy Manager or Caddy:
```text
# Caddyfile snippet
minio.homelab.local {
    reverse_proxy localhost:9000
}

minio-console.homelab.local {
    reverse_proxy localhost:9001
}
```
Then update your client endpoints to use HTTPS and your internal domain. This is especially useful when accessing MinIO from outside your LAN via VPN.
Multi-Drive Setup: Erasure Coding
MinIO supports erasure coding — similar to RAID, but applied per object. With 4 or more drives, each object is split into data and parity shards, and depending on the parity setting MinIO can reconstruct objects even after losing up to half the drives:
```yaml
command: server /data{1...4} --console-address ":9001"
volumes:
  - ./data1:/data1
  - ./data2:/data2
  - ./data3:/data3
  - ./data4:/data4
```
With this config, MinIO stripes objects across all four drives. You get redundancy without a separate RAID controller. Use equally sized drives where possible — MinIO treats every drive in the set as having the capacity of the smallest one.
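The capacity math is worth sketching. With N drives and P parity shards per stripe, usable space is roughly raw capacity × (N − P)/N — this sketch assumes the EC:2 default for a 4-drive set and ignores MinIO's small per-object overhead:

```python
def usable_capacity(drives, drive_size_tb, parity):
    """Rough usable capacity of an erasure-coded set:
    data shards = drives - parity, so usable = raw * (data / drives).
    Sketch only; real MinIO adds small per-object metadata overhead."""
    raw = drives * drive_size_tb
    return raw * (drives - parity) / drives

# 4 x 4 TB drives at parity 2 -> half the raw 16 TB is usable
print(usable_capacity(4, 4, 2))  # 8.0
```

Raising parity buys more failure tolerance at the cost of usable space; parity 2 on 4 drives means any 2 drives can be lost without losing data.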
Monitoring
MinIO exposes Prometheus metrics at /minio/v2/metrics/cluster. Add a scrape job to your Prometheus config:
```yaml
- job_name: minio
  metrics_path: /minio/v2/metrics/cluster
  static_configs:
    - targets: ["192.168.1.50:9000"]
  bearer_token: <prometheus-scrape-token>
```
Generate a scrape token from the MinIO console under Monitoring → Metrics.
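Once scraping works, a couple of starter queries — note the exact metric names come from MinIO's v2 metrics and can vary between releases, so verify them against your instance's `/minio/v2/metrics/cluster` output:

```promql
# Fraction of usable cluster capacity still free
minio_cluster_capacity_usable_free_bytes
  / minio_cluster_capacity_usable_total_bytes

# Per-bucket usage in bytes
minio_bucket_usage_total_bytes
```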
Storage Sizing Tips
MinIO's storage overhead is modest — expect roughly the size of the objects stored, plus on the order of 1% for metadata. For erasure-coded setups, effective capacity is roughly half your raw capacity (depending on drive count and parity).
Plan for your use case:
- Restic deduplicates and compresses — actual backup data is often 30–60% of the original size
- Media files (photos, video) don't compress, so budget 1:1
- Database dumps compress well — expect 5–10x reduction with gzip before uploading
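Those rules of thumb turn into a quick back-of-envelope estimate. The ratios below are the assumed midpoints from the list above, not measurements — your data will differ:

```python
def estimate_backup_size(source_gb, ratio):
    """Estimated space consumed on MinIO, where ratio is
    stored-bytes / source-bytes (rule of thumb, not a guarantee)."""
    return source_gb * ratio

# Assumed ratios for a hypothetical 500 GB of each data type:
estimates = {
    "restic backups (dedup + compression)": estimate_backup_size(500, 0.45),
    "media library (already compressed)":   estimate_backup_size(500, 1.0),
    "gzipped database dumps":               estimate_backup_size(500, 0.15),
}
for name, gb in estimates.items():
    print(f"{name}: ~{gb:.0f} GB")
```

Add headroom on top of the estimate — versioned buckets and retention policies keep old object versions around until they expire.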
Conclusion
MinIO turns spare drives into a proper object storage backend with full S3 API compatibility. For homelab use, it's especially valuable as a backup target — Restic + MinIO on a local server gives you fast, private, deduplicated backups that cost nothing per GB. Once it's running, any S3-aware tool in your stack can use it without code changes.
