Hosting Databases in Your Home Lab: PostgreSQL, MariaDB, Redis, and MongoDB
Almost every self-hosted application needs a database. Nextcloud wants MySQL/MariaDB or PostgreSQL. Gitea needs one. Authentik needs one. Grafana, Vikunja, BookStack, Wiki.js: they all need a database. The default approach is to spin up a separate database container for each application. This works, but it's wasteful: you end up with eight PostgreSQL containers, each using its own RAM and disk, each needing separate backup jobs.
A better approach is to run shared database instances that multiple applications connect to. One PostgreSQL server with multiple databases, one Redis instance handling caching for several services. Fewer containers, lower resource usage, simpler backups, and you learn database administration in the process.

PostgreSQL: The Default Choice
PostgreSQL is the best general-purpose database for self-hosted applications. It's what most modern applications default to, it handles everything from small configs to large datasets, and its ecosystem is mature.
Deployment
# docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    container_name: postgres  # fixed name so the docker exec commands below work as written
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: change-this-strong-password
      POSTGRES_DB: admin
    volumes:
      - ./data:/var/lib/postgresql/data
      - ./backups:/backups
    shm_size: '256mb'  # important for performance: parallel queries use shared memory
    command: >  # conservative starting values; tune per the table below
      postgres
      -c shared_buffers=512MB
      -c effective_cache_size=1536MB
      -c work_mem=16MB
      -c maintenance_work_mem=128MB
      -c max_connections=200
      -c wal_buffers=16MB
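Before pointing applications at it, confirm the server came up cleanly. pg_isready ships in the postgres image, so a quick sanity check looks like this:
# Verify the server is accepting connections
docker exec postgres pg_isready -U admin
# Expected: /var/run/postgresql:5432 - accepting connections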
Creating Databases for Each Application
# Connect to PostgreSQL
docker exec -it postgres psql -U admin
-- Create databases and users for each application
CREATE USER nextcloud WITH PASSWORD 'nc-password';
CREATE DATABASE nextcloud OWNER nextcloud;
CREATE USER gitea WITH PASSWORD 'gitea-password';
CREATE DATABASE gitea OWNER gitea;
CREATE USER authentik WITH PASSWORD 'auth-password';
CREATE DATABASE authentik OWNER authentik;
-- Verify
\l
Each application gets its own database and user, which provides isolation: a bug in Gitea can't corrupt Nextcloud's data. One caveat: PostgreSQL grants CONNECT on every new database to PUBLIC by default, so any role can still open a session in another application's database (though it can't read the owner's tables). For strict isolation, revoke the default grant per database, as shown below.
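A minimal hardening pass, run from the host:
# Each database owner keeps full access; PUBLIC loses the implicit CONNECT
docker exec postgres psql -U admin -c "REVOKE CONNECT ON DATABASE nextcloud FROM PUBLIC;"
docker exec postgres psql -U admin -c "REVOKE CONNECT ON DATABASE gitea FROM PUBLIC;"
docker exec postgres psql -U admin -c "REVOKE CONNECT ON DATABASE authentik FROM PUBLIC;"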
Performance Tuning
PostgreSQL's default configuration is conservative. For a homelab server with 8-32 GB of RAM, these settings make a significant difference:
| Parameter | Default | 8 GB RAM | 16 GB RAM | 32 GB RAM |
|---|---|---|---|---|
| shared_buffers | 128 MB | 2 GB | 4 GB | 8 GB |
| effective_cache_size | 4 GB | 6 GB | 12 GB | 24 GB |
| work_mem | 4 MB | 16 MB | 32 MB | 64 MB |
| maintenance_work_mem | 64 MB | 512 MB | 1 GB | 2 GB |
| wal_buffers | -1 (auto) | 16 MB | 16 MB | 16 MB |
| max_connections | 100 | 200 | 200 | 300 |
Use PGTune to generate configuration based on your hardware. Set the application type to "Mixed" for homelab workloads.
# Apply changes via command line flags in Docker, or with a custom config:
# /etc/postgresql/postgresql.conf
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 16MB
maintenance_work_mem = 512MB
max_connections = 200
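After restarting the container, it's worth confirming the settings actually took effect; the pg_settings view shows the live values:
# Check the running configuration
docker exec postgres psql -U admin -c "SHOW shared_buffers;"
docker exec postgres psql -U admin -c "SELECT name, setting, unit FROM pg_settings WHERE name IN ('work_mem', 'max_connections');"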
Backups
PostgreSQL backups are non-negotiable. A corrupt data directory without backups means starting over.
#!/bin/bash
# /usr/local/bin/backup-postgres.sh
set -euo pipefail
BACKUP_DIR="/opt/backups/postgres"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=14
mkdir -p "$BACKUP_DIR"
# Dump all databases
docker exec postgres pg_dumpall -U admin | gzip > "$BACKUP_DIR/all-databases-$TIMESTAMP.sql.gz"
# Or dump individual databases for granular restores
for db in nextcloud gitea authentik; do
docker exec postgres pg_dump -U admin "$db" | gzip > "$BACKUP_DIR/$db-$TIMESTAMP.sql.gz"
done
# Clean old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "PostgreSQL backup completed: $TIMESTAMP"
# Restore a single database (into an empty database; drop and recreate it
# first if it already has objects, or the restore will hit conflicts)
gunzip < /opt/backups/postgres/nextcloud-20260209_020000.sql.gz | \
docker exec -i postgres psql -U admin nextcloud
Schedule this with a systemd timer or cron:
0 2 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup.log 2>&1
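If you prefer systemd timers, an equivalent pair of units might look like this (unit names are my own; adjust the path if your script lives elsewhere):
# /etc/systemd/system/backup-postgres.service
[Unit]
Description=Nightly PostgreSQL dump

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-postgres.sh

# /etc/systemd/system/backup-postgres.timer
[Unit]
Description=Run backup-postgres.service daily at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
Enable it with systemctl daemon-reload && systemctl enable --now backup-postgres.timer.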
MariaDB: MySQL Compatibility
Some applications only support MySQL/MariaDB (WordPress, some PHP applications). MariaDB is the community fork of MySQL and is functionally equivalent for most use cases.
Deployment
services:
  mariadb:
    image: mariadb:11
    container_name: mariadb  # fixed name so the docker exec commands below work as written
    restart: unless-stopped
    ports:
      - "3306:3306"
    environment:
      MARIADB_ROOT_PASSWORD: change-this-root-password
    volumes:
      - ./data:/var/lib/mysql
      - ./backups:/backups
    command: >
      --innodb-buffer-pool-size=1G
      --innodb-log-file-size=256M
      --max-connections=200
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
Creating Databases
-- Connect: docker exec -it mariadb mariadb -u root -p
CREATE DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'wordpress'@'%' IDENTIFIED BY 'wp-password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%';
CREATE DATABASE leantime CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'leantime'@'%' IDENTIFIED BY 'lt-password';
GRANT ALL PRIVILEGES ON leantime.* TO 'leantime'@'%';
FLUSH PRIVILEGES;
Backups
# Dump all databases
docker exec mariadb mariadb-dump -u root -p'change-this-root-password' --all-databases | \
gzip > /opt/backups/mariadb/all-$(date +%Y%m%d).sql.gz
# Dump a specific database
docker exec mariadb mariadb-dump -u root -p'change-this-root-password' wordpress | \
gzip > /opt/backups/mariadb/wordpress-$(date +%Y%m%d).sql.gz
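Restores mirror the PostgreSQL workflow: stream the dump back through the client. The filename follows the dump commands above; restore into an existing, ideally empty, database:
# Restore a single database
gunzip < /opt/backups/mariadb/wordpress-20260209.sql.gz | \
docker exec -i mariadb mariadb -u root -p'change-this-root-password' wordpress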
Redis: In-Memory Cache and Message Broker
Redis is an in-memory data store used for caching, session storage, and message queuing. Many applications optionally support Redis to improve performance: Nextcloud uses it for file locking and session caching, Authentik uses it for task queuing, and Gitea uses it for caching.
Deployment
services:
  redis:
    image: redis:7-alpine
    container_name: redis  # fixed name so the redis-cli examples below work as written
    restart: unless-stopped
    ports:
      - "6379:6379"
    command: >
      redis-server
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
      --save 60 1000
      --appendonly yes
      --requirepass your-redis-password
    volumes:
      - ./data:/data
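To confirm the password and eviction settings are live (redis-cli is bundled in the image; the warning about a password on the command line is expected):
# Authenticated ping
docker exec redis redis-cli -a your-redis-password ping  # expect: PONG
docker exec redis redis-cli -a your-redis-password config get maxmemory-policy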
Key Configuration
| Parameter | Purpose | Recommended |
|---|---|---|
| maxmemory | Memory limit before eviction | 256 MB - 1 GB depending on usage |
| maxmemory-policy | What to evict when full | allkeys-lru for cache, noeviction for queues |
| save | RDB snapshot frequency | 60 1000 (every 60s if 1000+ writes) |
| appendonly | Append-only file for durability | yes for persistent data |
| requirepass | Authentication password | Always set, even on internal networks |
Connecting Applications
// Nextcloud config.php (inside the $CONFIG array)
'memcache.local' => '\\OC\\Memcache\\Redis',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => [
'host' => '10.0.0.5',
'port' => 6379,
'password' => 'your-redis-password',
'dbindex' => 0,
],
Use different dbindex values (0-15) for different applications to keep their data separated within a single Redis instance.
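redis-cli takes the same index via its -n flag, which makes it easy to see what each application has stored (KEYS is fine at homelab scale; avoid it on large datasets):
# Inspect per-application databases
docker exec redis redis-cli -a your-redis-password -n 0 dbsize
docker exec redis redis-cli -a your-redis-password -n 1 keys '*'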
MongoDB: Document Store
MongoDB is required by a few self-hosted applications (Rocket.Chat, the UniFi Controller). Use it only when a service specifically requires it; PostgreSQL is a better choice for most workloads.
Deployment
services:
  mongodb:
    image: mongo:7
    container_name: mongodb  # fixed name so the mongodump examples below work as written
    restart: unless-stopped
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: change-this-password
    volumes:
      - ./data:/data/db
    command: --wiredTigerCacheSizeGB 1
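mongosh ships in the mongo:7 image, so you can verify authentication immediately after startup:
# Ping the server as the root user
docker exec mongodb mongosh -u admin -p 'change-this-password' --authenticationDatabase admin --eval 'db.runCommand({ ping: 1 })'  # expect: { ok: 1 }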
Backups
# Dump all databases
docker exec mongodb mongodump -u admin -p 'change-this-password' --authenticationDatabase admin \
--out /tmp/mongodump
docker cp mongodb:/tmp/mongodump /opt/backups/mongodb/dump-$(date +%Y%m%d)
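Restoring reverses the process: copy a dated dump back into the container and replay it. The --drop flag replaces existing collections rather than merging:
# Restore from a dated dump directory
docker cp /opt/backups/mongodb/dump-20260209 mongodb:/tmp/mongorestore
docker exec mongodb mongorestore -u admin -p 'change-this-password' --authenticationDatabase admin \
--drop /tmp/mongorestore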
Replication for Critical Services
If a database failure would cause significant disruption, set up replication. PostgreSQL streaming replication is the most practical option for homelabs.
PostgreSQL Streaming Replication
On the primary:
-- Create a replication user
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'repl-password';
# postgresql.conf (primary)
wal_level = replica
max_wal_senders = 3
# pg_hba.conf (primary): allow the replication user to connect from the replica
host replication replicator <replica-ip>/32 scram-sha-256
On the replica:
# Stop the replica, clear its data directory, and pull from primary
pg_basebackup -h primary-host -U replicator -D /var/lib/postgresql/data -Fp -Xs -P -R
The -R flag creates the standby.signal file and configures the connection, so the replica starts following the primary automatically.
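Once the replica is up, confirm it is actually streaming. On the primary, pg_stat_replication shows one row per connected standby; on the replica (the container name postgres-replica here is illustrative), pg_is_in_recovery() should return true:
# On the primary: state should read 'streaming'
docker exec postgres psql -U admin -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
# On the replica: returns 't' while in standby mode
docker exec postgres-replica psql -U admin -c "SELECT pg_is_in_recovery();"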
Monitoring
Enable the pg_stat_statements extension (add it to shared_preload_libraries, then run CREATE EXTENSION pg_stat_statements;) for query-level analysis, and wire the databases into your Grafana stack with Prometheus exporters:
# Prometheus postgres_exporter
services:
  postgres-exporter:
    image: prometheuscommunity/postgres-exporter:latest
    environment:
      DATA_SOURCE_NAME: "postgresql://admin:change-this-strong-password@postgres:5432/admin?sslmode=disable"
    ports:
      - "9187:9187"
For Redis:
  redis-exporter:
    image: oliver006/redis_exporter:latest
    environment:
      REDIS_ADDR: "redis://redis:6379"
      REDIS_PASSWORD: "your-redis-password"
    ports:
      - "9121:9121"
Key metrics to alert on: connection count approaching max_connections, replication lag on replicas, cache hit ratio below 95% (PostgreSQL), and memory usage approaching maxmemory (Redis).
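As a concrete starting point, a Prometheus rule for the connection-count alert might look like the sketch below; the metric names match current postgres_exporter defaults, but verify them against your own /metrics endpoint:
# /etc/prometheus/rules/databases.yml
groups:
  - name: databases
    rules:
      - alert: PostgresConnectionsHigh
        # fire when sessions exceed 80% of max_connections for 5 minutes
        expr: sum(pg_stat_activity_count) > 0.8 * max(pg_settings_max_connections)
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL connections above 80% of max_connections"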
Running shared database instances instead of per-application containers cuts your resource usage, simplifies backups, and gives you a real understanding of database administration. Start with a single PostgreSQL instance, add Redis for caching when your services support it, and add MariaDB only if a specific application requires MySQL compatibility.