Load Balancing with HAProxy in Your Home Lab
Every homelab eventually reaches the point where you have multiple instances of a service or a pile of web UIs that all need clean access through a single IP. You could keep track of port numbers manually — Grafana on 3000, Uptime Kuma on 3001, Gitea on 3002 — but that gets old fast. HAProxy solves this cleanly: it sits in front of your services and routes traffic based on hostnames, paths, or ports, while also load balancing across multiple backends when you need redundancy.
HAProxy is the same software that handles traffic for GitHub, Reddit, and Stack Overflow. It's fast, stable, battle-tested, and uses very little memory. For a homelab, it's arguably the best tool for both reverse proxying and actual load balancing.

Installing HAProxy
On Debian/Ubuntu:
sudo apt update
sudo apt install -y haproxy
On Fedora/RHEL:
sudo dnf install -y haproxy
Check the installed version:
haproxy -v
Aim for version 2.4 or later: some directives used below, such as http-request return (added in 2.2), need a reasonably recent release, and newer versions bring improved health checks. If your distro ships an older version, the HAProxy PPA (for Ubuntu) or Copr (for Fedora) has recent releases.
Enable and start the service:
sudo systemctl enable --now haproxy
Configuration Basics
HAProxy's configuration lives at /etc/haproxy/haproxy.cfg. The config has four main sections:
- global — Process-level settings (logging, max connections, user/group)
- defaults — Default values inherited by all frontends and backends
- frontend — Where traffic comes in (binds to IP:port, applies routing rules)
- backend — Where traffic goes (lists of servers, health checks, balancing algorithms)
Here's a minimal starting configuration:
global
    log /dev/log local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    retries 3

frontend http_front
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.101:8080 check
    server web2 192.168.1.102:8080 check
This listens on port 80 and distributes traffic across two backend servers using round-robin. The check keyword enables health checks — HAProxy will stop sending traffic to a server that fails its health check.
After editing the config, always validate before reloading:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
If it reports no errors, reload:
sudo systemctl reload haproxy
Host-Based Routing
The real power of HAProxy in a homelab is routing traffic to different backends based on the hostname. This lets you point grafana.homelab.local, gitea.homelab.local, and nas.homelab.local all at the same HAProxy IP, and it routes each to the correct service.
frontend http_front
    bind *:80
    # Route based on hostname
    acl host_grafana hdr(host) -i grafana.homelab.local
    acl host_gitea hdr(host) -i gitea.homelab.local
    acl host_nas hdr(host) -i nas.homelab.local
    use_backend grafana if host_grafana
    use_backend gitea if host_gitea
    use_backend nas if host_nas
    default_backend fallback

backend grafana
    server grafana1 192.168.1.50:3000 check

backend gitea
    server gitea1 192.168.1.51:3000 check

backend nas
    server nas1 192.168.1.60:8080 check

backend fallback
    http-request return status 503 content-type text/plain string "Service not found"
Set up DNS entries (in Pi-hole, your router, or /etc/hosts) pointing all those hostnames to the HAProxy server's IP. Now every service is accessible on port 80 with a clean hostname.
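If you don't run a local DNS server, a per-machine hosts file works too. A minimal sketch, assuming the HAProxy box sits at 192.168.1.10 (substitute your own IP):

```
# /etc/hosts on each client machine
192.168.1.10    grafana.homelab.local
192.168.1.10    gitea.homelab.local
192.168.1.10    nas.homelab.local
```

Note that /etc/hosts doesn't support wildcards, so each hostname needs its own entry.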
Health Checks
The check keyword on each server line enables basic TCP health checks. HAProxy connects to the server periodically and marks it as down if the connection fails. You can customize this:
backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server web1 192.168.1.101:8080 check inter 10s fall 3 rise 2
    server web2 192.168.1.102:8080 check inter 10s fall 3 rise 2
- option httpchk GET /health — Send an HTTP GET to /health instead of just a TCP connect
- http-check expect status 200 — Only consider the server healthy if it returns 200
- inter 10s — Check every 10 seconds
- fall 3 — Mark as down after 3 consecutive failures
- rise 2 — Mark as up after 2 consecutive successes
For services that don't have a dedicated health endpoint, a basic TCP check (the default) works fine.
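Some services expose their health endpoint on a separate admin port. HAProxy can probe a different port than the one it forwards traffic to. A sketch, assuming a hypothetical admin port 8081:

```
backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    # Serve traffic on 8080, but run the health probe against port 8081
    server web1 192.168.1.101:8080 check port 8081 inter 10s fall 3 rise 2
    server web2 192.168.1.102:8080 check port 8081 inter 10s fall 3 rise 2
```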
Load Balancing Algorithms
HAProxy supports several balancing strategies:
- roundrobin — Distributes requests evenly across servers. Best general-purpose option.
- leastconn — Sends traffic to the server with the fewest active connections. Good for long-lived connections like websockets or database pools.
- source — Hashes the client IP to always send the same client to the same server. Simple sticky sessions without cookies.
- uri — Hashes the request URI for consistent routing. Useful for caching layers.
For most homelab setups, roundrobin or leastconn is what you want.
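If your backends aren't identical (say, one runs on a beefy server and another on a Raspberry Pi), you can skew roundrobin with weights. A sketch, with the weights picked as an example; tune them to your hardware:

```
backend web_servers
    balance roundrobin
    # web1 gets roughly two requests for every one sent to web2
    server web1 192.168.1.101:8080 check weight 2
    server web2 192.168.1.102:8080 check weight 1
```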
SSL Termination
HAProxy can handle HTTPS, terminating SSL at the load balancer and forwarding plain HTTP to backends. This means your backend services don't need to deal with certificates.
First, combine your certificate and private key into a single PEM file:
cat /etc/letsencrypt/live/homelab.local/fullchain.pem \
/etc/letsencrypt/live/homelab.local/privkey.pem \
> /etc/haproxy/certs/homelab.local.pem
Then configure the frontend:
frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/homelab.local.pem
    bind *:80
    # Redirect HTTP to HTTPS
    http-request redirect scheme https unless { ssl_fc }
    # Forward the original protocol to backends
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    acl host_grafana hdr(host) -i grafana.homelab.local
    use_backend grafana if host_grafana
    default_backend fallback

backend grafana
    server grafana1 192.168.1.50:3000 check
For self-signed certificates (common in homelabs), use mkcert to generate locally trusted certs, or accept the browser warning.
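If you accumulate several certificates, crt can also point at a directory instead of a single file; HAProxy loads every PEM in it and picks the right one per request via SNI:

```
frontend https_front
    # Loads all PEM files from the directory; SNI selects the matching cert
    bind *:443 ssl crt /etc/haproxy/certs/
```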
The Stats Page
HAProxy includes a built-in statistics dashboard that shows real-time traffic, server health, and connection counts. Enable it by adding:
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
Access it at http://haproxy-ip:8404/stats. The stats page shows each frontend and backend, which servers are up or down, request rates, error rates, and session counts. The stats admin if LOCALHOST line lets you enable/disable servers from the dashboard when accessed from localhost — useful for maintenance.
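Since the stats page exposes your whole topology, it's worth locking down. A sketch that binds only to the LAN address and adds basic auth; the IP and credentials here are placeholders:

```
frontend stats
    bind 192.168.1.10:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    # Basic auth; change these before deploying
    stats auth admin:changeme
    stats admin if LOCALHOST
```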
Sticky Sessions
Some applications require that a user's requests consistently go to the same backend server — session state stored in memory, for instance. HAProxy handles this with cookie-based stickiness:
backend app_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server app1 192.168.1.101:8080 check cookie s1
    server app2 192.168.1.102:8080 check cookie s2
HAProxy inserts a SERVERID cookie into responses. Subsequent requests from the same client include this cookie, and HAProxy routes them to the same backend.
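For clients that don't keep cookies (some API clients, IoT devices), an alternative is a stick table keyed on source IP. A sketch, with table size and expiry chosen arbitrarily:

```
backend app_servers
    balance roundrobin
    # Remember which server each source IP landed on, for 30 minutes
    stick-table type ip size 100k expire 30m
    stick on src
    server app1 192.168.1.101:8080 check
    server app2 192.168.1.102:8080 check
```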
TCP Mode for Non-HTTP Services
HAProxy isn't limited to HTTP. You can proxy any TCP service — databases, MQTT, game servers, SSH:
frontend mysql_front
    bind *:3306
    mode tcp
    default_backend mysql_servers

backend mysql_servers
    mode tcp
    balance leastconn
    option mysql-check user haproxy
    server mysql1 192.168.1.101:3306 check
    server mysql2 192.168.1.102:3306 check
Make sure both the frontend and backend are set to mode tcp. Protocol-specific health checks are available for MySQL, PostgreSQL, Redis, SMTP, and others.
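The same pattern works for Redis, which gets its own built-in check. A sketch assuming two standalone Redis instances (note this balances blindly; in a real master/replica setup you'd want writes routed to the master only):

```
frontend redis_front
    bind *:6379
    mode tcp
    default_backend redis_servers

backend redis_servers
    mode tcp
    balance leastconn
    # Sends a PING and expects +PONG before marking the server healthy
    option redis-check
    server redis1 192.168.1.101:6379 check
    server redis2 192.168.1.102:6379 check
```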
Practical Homelab Setup
A realistic homelab HAProxy config ties all of this together. You have one machine running HAProxy (or a VM/container — it needs minimal resources), DNS entries pointing your service hostnames to it, and backends defined for each service. The typical deployment looks like:
- HAProxy runs on a dedicated VM or LXC container with a static IP
- DNS (Pi-hole, AdGuard, or your router) resolves *.homelab.local to the HAProxy IP
- Each service gets a backend block with its real IP and port
- SSL termination happens at HAProxy using a wildcard cert
- The stats page runs on a separate port for monitoring
HAProxy uses roughly 10-20 MB of RAM for a homelab workload. It can handle thousands of concurrent connections without breaking a sweat. It starts instantly, reloads without dropping connections, and its configuration syntax — while different from Nginx — is logical once you understand the frontend/backend model.
If you've been using Nginx Proxy Manager or Traefik and want something lighter and more transparent, HAProxy is worth the switch. The config file is plain text, the behavior is predictable, and the stats page alone is worth the setup time.