
Self-Hosted Secret Management with HashiCorp Vault

Security · 2026-02-14 · 8 min read · vault · secrets · security · hashicorp · encryption · pki

Every homelab accumulates secrets. Database passwords live in plaintext Docker Compose files. API keys get copy-pasted into environment variables. TLS certificates sit in directories with permissions that seemed right at the time. SSH keys multiply across machines. And when you need to rotate a credential, you're grepping through dozens of config files hoping you found every reference.

HashiCorp Vault is a secrets management tool that centralizes all of this. Instead of scattering credentials across your infrastructure, every service asks Vault for the secrets it needs at runtime. Vault handles encryption, access control, audit logging, and even dynamic credential generation. It can issue short-lived database passwords, generate TLS certificates on demand, and revoke everything with a single command if a machine gets compromised.


For a homelab, Vault might sound like overkill. But once you have more than a handful of services, the alternative -- secrets sprawled across files, environment variables, and your memory -- becomes a real operational liability. Vault provides a single source of truth, and the skills you build using it translate directly to production environments.

Architecture Overview

Vault operates on a client-server model. The Vault server stores encrypted secrets and enforces access policies. Clients authenticate using one of several methods (tokens, AppRole, certificates, LDAP) and receive a time-limited token that grants access to specific secret paths.

Key concepts:

Secrets Engine: a backend that stores or generates secrets (KV, PKI, database, etc.)
Auth Method: how clients prove their identity (token, AppRole, LDAP, certificates)
Policy: rules defining which paths a token can access and what operations are allowed
Seal/Unseal: Vault starts sealed (its storage is encrypted); a quorum of unseal keys is required to reconstruct the key that decrypts the master key
Lease: a time-to-live attached to a secret; when it expires, the secret is revoked
Audit Device: logs every request and response for security auditing
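Most of these concepts are visible directly from the CLI. A few introspection commands worth knowing (a sketch; it assumes VAULT_ADDR points at your server and you are already logged in):

```
# Seal status, storage backend, and version
vault status

# The current token's policies, TTL, and accessor
vault token lookup

# What the current token is allowed to do on a given path
vault token capabilities secret/data/services/grafana
```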

Deploying Vault with Docker Compose

For a homelab, running Vault in dev mode is tempting but wrong. Dev mode stores everything in memory, starts already unsealed, and prints a root token to the console -- fine for testing, terrible for actual use. We'll set up a proper deployment with file-based storage, which is appropriate for a single-node homelab.

Directory Structure

mkdir -p ~/docker/vault/{config,data,logs,policies}

Docker Compose

# ~/docker/vault/docker-compose.yml
services:
  vault:
    image: hashicorp/vault:1.17
    container_name: vault
    restart: unless-stopped
    ports:
      - "8200:8200"
    volumes:
      - ./config:/vault/config:ro
      - ./data:/vault/data
      - ./logs:/vault/logs
      - ./policies:/vault/policies:ro
    environment:
      VAULT_ADDR: "http://127.0.0.1:8200"
      VAULT_API_ADDR: "http://vault.yourdomain.com:8200"
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/vault.hcl

The IPC_LOCK capability prevents Vault's memory from being swapped to disk, which would expose decrypted secrets.

Vault Configuration

Create ~/docker/vault/config/vault.hcl:

# Storage backend - file-based for single-node homelab
storage "file" {
  path = "/vault/data"
}

# Listener configuration
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1  # Use a reverse proxy for TLS in production
}

# Enable the UI
ui = true

# API and cluster addresses advertised to clients
api_addr     = "http://vault.yourdomain.com:8200"
cluster_addr = "https://vault.yourdomain.com:8201"

# Logging
log_level = "info"
log_file  = "/vault/logs/vault.log"

For production homelabs, you should terminate TLS at your reverse proxy (Traefik, Caddy, Nginx) rather than disabling it entirely. The connection between your proxy and Vault should still be on a trusted network.
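As an example, a minimal Caddyfile sketch for terminating TLS in front of Vault (the hostname is illustrative; Caddy obtains and renews the certificate itself):

```
vault.yourdomain.com {
    reverse_proxy 127.0.0.1:8200
}
```

Once TLS terminates at the proxy, api_addr should advertise the HTTPS address clients actually use.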

Initialize and Unseal

Start the container and initialize Vault:

docker compose up -d

# Initialize with 5 key shares, requiring 3 to unseal
docker exec vault vault operator init \
  -key-shares=5 \
  -key-threshold=3

# Save the output! It contains your unseal keys and root token
# Store these SECURELY - losing them means losing access to all secrets

The initialization output looks like this:

Unseal Key 1: abc123...
Unseal Key 2: def456...
Unseal Key 3: ghi789...
Unseal Key 4: jkl012...
Unseal Key 5: mno345...

Initial Root Token: hvs.XXXXXXXXXXXX

Store each unseal key in a different location. A password manager, a printed copy in a safe, an encrypted USB drive -- the point is that no single compromise reveals enough keys to unseal Vault.
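One way to handle this step is to run the initialization above with -format=json and encrypt the output immediately -- a sketch assuming the age CLI is installed (any PGP or password-manager workflow works just as well):

```
# Initialize straight to JSON and passphrase-encrypt the output
docker exec vault vault operator init -format=json \
  -key-shares=5 -key-threshold=3 > vault-init.json
age -p -o vault-init.json.age vault-init.json

# Remove the plaintext copy once the encrypted file is verified
shred -u vault-init.json
```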

Now unseal:

# Run this 3 times with 3 different unseal keys
docker exec -it vault vault operator unseal
# Enter unseal key when prompted (repeat 2 more times)

After the third key, Vault transitions from sealed to unsealed and begins serving requests.

Auto-Unseal Consideration

Manually unsealing after every restart gets old fast. For a homelab, you have a few options:

- Auto-unseal with a cloud KMS (AWS KMS, Azure Key Vault, GCP Cloud KMS): hands-off, but it ties your homelab to a cloud provider.
- Transit auto-unseal against a second Vault instance: fully self-hosted, but now you operate two Vaults.
- A script that feeds the unseal keys to the API after each restart.

The script approach works for most homelabs. Create a systemd service that runs after Docker starts, reads unseal keys from an age-encrypted file, and posts them to the Vault API. It's not perfect, but it's pragmatic.
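A minimal sketch of that script (assumes the unseal keys live one per line in an age-encrypted file and that an age identity file is available at boot; paths are illustrative):

```
#!/bin/bash
# vault-autounseal.sh - post three unseal keys to the Vault API
set -euo pipefail

VAULT_ADDR="${VAULT_ADDR:-http://127.0.0.1:8200}"
KEY_FILE="/root/vault-unseal-keys.age"
IDENTITY="/root/age-identity.txt"

# Decrypt the key file and submit the first three keys
age -d -i "$IDENTITY" "$KEY_FILE" | head -n 3 | while read -r key; do
  curl -s --request POST \
    --data "{\"key\":\"${key}\"}" \
    "${VAULT_ADDR}/v1/sys/unseal" > /dev/null
done
```

Wrap it in a systemd unit with After=docker.service so it fires once the container is up, and add a short retry loop if Vault takes a moment to start listening.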

Setting Up the KV Secrets Engine

The Key-Value secrets engine is where most of your homelab secrets will live. Vault supports two versions: KV v1 (simple key-value) and KV v2 (versioned, with metadata and soft-delete).

# Log in with your root token
export VAULT_ADDR="http://localhost:8200"
vault login

# Enable KV v2 at the path "secret"
vault secrets enable -path=secret -version=2 kv

Storing and Retrieving Secrets

# Store a database password
vault kv put secret/databases/postgres \
  username="app_user" \
  password="super-secret-password" \
  host="10.0.0.50" \
  port="5432"

# Retrieve it
vault kv get secret/databases/postgres

# Get just the password
vault kv get -field=password secret/databases/postgres

# Store a JSON secret from a file
vault kv put secret/services/grafana @grafana-creds.json
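Because this is KV v2, every write creates a new version and deletes are soft by default. A few versioning commands worth knowing (sketch; the version numbers assume the writes above):

```
# Rotate just the password without clobbering the other fields
vault kv patch secret/databases/postgres password="rotated-password"

# Read an older version
vault kv get -version=1 secret/databases/postgres

# Soft-delete the latest version, then bring it back
vault kv delete secret/databases/postgres
vault kv undelete -versions=2 secret/databases/postgres
```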

Organizing Secrets

A good path structure makes policies easier to write:

secret/
├── databases/
│   ├── postgres
│   ├── mariadb
│   └── redis
├── services/
│   ├── grafana
│   ├── nextcloud
│   └── gitea
├── infrastructure/
│   ├── proxmox
│   ├── truenas
│   └── router
└── api-keys/
    ├── cloudflare
    ├── github
    └── smtp

PKI Secrets Engine: Your Own Certificate Authority

This is where Vault really shines for homelabs. Instead of using Let's Encrypt for internal services (which requires DNS challenges and public domain ownership) or manually generating self-signed certificates, Vault can act as your own Certificate Authority.

Set Up a Root CA

# Enable the PKI engine for the root CA
vault secrets enable -path=pki pki

# Set the max TTL to 10 years
vault secrets tune -max-lease-ttl=87600h pki

# Generate the root certificate
vault write pki/root/generate/internal \
  common_name="Homelab Root CA" \
  ttl=87600h \
  key_bits=4096

# Configure the CA and CRL URLs
vault write pki/config/urls \
  issuing_certificates="http://vault.yourdomain.com:8200/v1/pki/ca" \
  crl_distribution_points="http://vault.yourdomain.com:8200/v1/pki/crl"

Set Up an Intermediate CA

Never issue certificates directly from the root CA. Create an intermediate:

# Enable a second PKI engine for the intermediate
vault secrets enable -path=pki_int pki

vault secrets tune -max-lease-ttl=43800h pki_int

# Generate the intermediate CSR
vault write -format=json pki_int/intermediate/generate/internal \
  common_name="Homelab Intermediate CA" \
  key_bits=4096 | jq -r '.data.csr' > intermediate.csr

# Sign it with the root CA
vault write -format=json pki/root/sign-intermediate \
  csr=@intermediate.csr \
  format=pem_bundle \
  ttl=43800h | jq -r '.data.certificate' > signed_intermediate.pem

# Import the signed intermediate
vault write pki_int/intermediate/set-signed \
  certificate=@signed_intermediate.pem

Create a Role and Issue Certificates

# Create a role for issuing server certificates
vault write pki_int/roles/homelab-server \
  allowed_domains="yourdomain.com,local.yourdomain.com" \
  allow_subdomains=true \
  max_ttl=720h \
  key_bits=2048 \
  key_type=rsa

# Issue a certificate
vault write pki_int/issue/homelab-server \
  common_name="grafana.local.yourdomain.com" \
  ttl=720h

The output includes the certificate, private key, and CA chain. You can feed these directly into your reverse proxy or application configuration.
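In practice you'll want those pieces in separate files. A sketch using -format=json and jq (assumes jq is installed):

```
# Issue the certificate and capture the full JSON response
vault write -format=json pki_int/issue/homelab-server \
  common_name="grafana.local.yourdomain.com" ttl=720h > issued.json

# Split it into files a reverse proxy can consume
jq -r '.data.certificate' issued.json > grafana.crt
jq -r '.data.ca_chain[]'  issued.json >> grafana.crt   # append the chain
jq -r '.data.private_key' issued.json > grafana.key
chmod 600 grafana.key

# The response contained the private key - don't leave it lying around
rm issued.json
```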

Trust the Root CA

For browsers and operating systems to trust your certificates, install the root CA:

# Export the root CA certificate
vault read -field=certificate pki/cert/ca > homelab-root-ca.pem

# On Linux (Fedora/RHEL)
sudo cp homelab-root-ca.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

# On Debian/Ubuntu
sudo cp homelab-root-ca.pem /usr/local/share/ca-certificates/homelab-root-ca.crt
sudo update-ca-certificates

AppRole Authentication

The root token is for administrative work only. Services should authenticate using AppRole, which provides machine-oriented authentication through a role ID (like a username) and a secret ID (like a password).

# Enable AppRole auth
vault auth enable approle

# Create a policy for Grafana
vault policy write grafana-policy - <<EOF
path "secret/data/services/grafana" {
  capabilities = ["read"]
}

path "secret/data/databases/postgres" {
  capabilities = ["read"]
}
EOF

# Create an AppRole for Grafana
vault write auth/approle/role/grafana \
  token_policies="grafana-policy" \
  token_ttl=1h \
  token_max_ttl=4h \
  secret_id_ttl=720h \
  secret_id_num_uses=0

# Get the role ID (stable, like a username)
vault read auth/approle/role/grafana/role-id

# Generate a secret ID (rotatable, like a password)
vault write -force auth/approle/role/grafana/secret-id

Using AppRole in a Service

Here's how a service authenticates and retrieves secrets:

# Authenticate and get a token
VAULT_TOKEN=$(curl -s \
  --request POST \
  --data "{\"role_id\":\"$ROLE_ID\",\"secret_id\":\"$SECRET_ID\"}" \
  http://vault.yourdomain.com:8200/v1/auth/approle/login | jq -r '.auth.client_token')

# Use the token to read a secret
curl -s \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  http://vault.yourdomain.com:8200/v1/secret/data/services/grafana | jq '.data.data'
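With token_ttl=1h as configured above, a long-running service needs to renew its token before expiry. A sketch of the renewal call:

```
# Renew the current token; the response reports the new lease duration
curl -s --request POST \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  http://vault.yourdomain.com:8200/v1/auth/token/renew-self | jq '.auth.lease_duration'
```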

Integrating Vault with Docker Services

The cleanest integration pattern for Docker-based homelabs uses an init container or entrypoint script that fetches secrets from Vault before starting the application.

Entrypoint Script Pattern

#!/bin/bash
# vault-init.sh - Fetch secrets before starting the application
set -euo pipefail

VAULT_ADDR="${VAULT_ADDR:-http://vault:8200}"

# Allow the secret ID to arrive via a file (e.g. a Docker secret) or the environment
if [ -n "${VAULT_SECRET_ID_FILE:-}" ]; then
  VAULT_SECRET_ID=$(cat "${VAULT_SECRET_ID_FILE}")
fi

# Authenticate with AppRole and extract the client token
TOKEN=$(curl -s --request POST \
  --data "{\"role_id\":\"${VAULT_ROLE_ID}\",\"secret_id\":\"${VAULT_SECRET_ID}\"}" \
  "${VAULT_ADDR}/v1/auth/approle/login" | jq -r '.auth.client_token')

# Fetch database credentials (KV v2 nests the payload under .data.data)
DB_CREDS=$(curl -s --header "X-Vault-Token: ${TOKEN}" \
  "${VAULT_ADDR}/v1/secret/data/databases/postgres" | jq -r '.data.data')

export DB_USER=$(echo "$DB_CREDS" | jq -r '.username')
export DB_PASS=$(echo "$DB_CREDS" | jq -r '.password')
export DB_HOST=$(echo "$DB_CREDS" | jq -r '.host')

# Start the actual application
exec "$@"

Docker Compose with Vault Integration

services:
  app:
    image: your-app:latest
    entrypoint: ["/vault-init.sh"]
    command: ["node", "server.js"]
    environment:
      VAULT_ADDR: "http://vault:8200"
      VAULT_ROLE_ID: "abc-123-def"
      VAULT_SECRET_ID_FILE: "/run/secrets/vault_secret_id"
    secrets:
      - vault_secret_id
    depends_on:
      - vault

secrets:
  vault_secret_id:
    file: ./secrets/vault_secret_id

Audit Logging

Enable audit logging so you have a complete record of who accessed what:

# Enable file-based audit logging
vault audit enable file file_path=/vault/logs/audit.log

# Every request and response is logged (with secrets HMAC-hashed)
# Example log entry shows the accessor, path, operation, and timestamp

Vault HMAC-hashes all secret values in audit logs by default, so the log itself doesn't become a security risk. But it does record which paths were accessed, by whom, and when -- invaluable for debugging access issues and investigating incidents.
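To get a feel for the format, here is an abridged, illustrative entry in the shape the file audit device writes, plus a quick grep to pull out the accessed path -- handy when jq isn't on the box:

```shell
# Write a sample (abridged) audit entry to work against
cat > /tmp/audit-sample.log <<'EOF'
{"time":"2026-02-14T10:00:00Z","type":"response","auth":{"display_name":"approle"},"request":{"operation":"read","path":"secret/data/services/grafana"}}
EOF

# Extract which path was accessed
grep -o '"path":"[^"]*"' /tmp/audit-sample.log
```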

Backup and Recovery

The file storage backend stores everything on disk. Back it up like any other directory, but with care:

# Take a Vault snapshot (Raft storage only)
# For file storage, back up the data directory while Vault is sealed or paused

# Stop Vault, back up, restart
docker compose stop vault
tar czf vault-backup-$(date +%Y%m%d).tar.gz -C ~/docker/vault data/
docker compose start vault
# Then unseal again

For homelabs using the file backend, consider scheduling this as a cron job during low-activity hours.
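A crontab entry for that might look like the following (paths are illustrative; note that Vault comes back sealed after the restart, so pair this with your unseal routine):

```
# Nightly at 03:30: stop Vault, archive the data dir, restart
30 3 * * * cd ~/docker/vault && docker compose stop vault && tar czf /backups/vault-$(date +\%Y\%m\%d).tar.gz -C ~/docker/vault data/ && docker compose start vault
```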

Comparison: Vault vs. Alternatives

HashiCorp Vault: dynamic secrets, a built-in PKI/CA, and database credential rotation; high complexity, roughly 200 MB RAM. Best for full-featured secret management.

Infisical: limited dynamic secrets and database credential rotation, but no PKI; medium complexity, roughly 500 MB RAM. Best for team-oriented secrets.

SOPS: no dynamic secrets, PKI, or rotation; low complexity, CLI only. Best for Git-encrypted secrets.

Bitwarden Secrets Manager: no dynamic secrets, PKI, or rotation; low complexity, roughly 300 MB RAM. Best for password management plus basic secrets.

What's Next

With Vault running, you've centralized your secrets and gained the ability to issue internal TLS certificates on demand. From here, consider:

- Enabling the database secrets engine so services receive short-lived, automatically rotated PostgreSQL or MariaDB credentials.
- Running Vault Agent alongside your services to handle AppRole login, token renewal, and secret templating for you.
- Automating certificate renewal so your reverse proxy pulls fresh certificates from pki_int before they expire.

The initial setup is the hardest part. Once Vault is running and your services are configured to pull credentials from it, rotating a compromised secret becomes a single API call instead of an hour of grepping through config files.