
Kubernetes in Your Homelab: K3s and K0s Setup Guide

Containers · 2026-02-09 · 8 min read
Tags: kubernetes, k3s, k0s, containers, docker, homelab, orchestration
By HomeLab Starter Editorial Team
Home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.

There's a running joke in the homelab community: the fastest way to mass-downvote a Reddit post is to suggest someone "just use Kubernetes" for their simple self-hosting setup. And honestly, the joke lands because it's usually true. Docker Compose handles 90% of homelab workloads with a fraction of the complexity.


But there's a legitimate 10% where Kubernetes makes sense — and a much larger group of people who want to learn Kubernetes because their career demands it. Running a small cluster in your homelab is one of the best ways to build real Kubernetes skills without a cloud bill or a production outage to worry about.


This guide covers lightweight Kubernetes distributions designed for homelabs, when they're worth the complexity, and how to get a practical cluster running without losing your mind.

Docker Compose vs Kubernetes: An Honest Assessment

Before diving into setup, let's be honest about when each tool makes sense.

| Factor | Docker Compose | Kubernetes |
|---|---|---|
| Setup time | 5 minutes | 30-60 minutes |
| Learning curve | Low | High |
| Single-node | Perfect | Overkill |
| Multi-node | Awkward (Docker Swarm) | Native |
| Auto-healing | Restart policies only | Full self-healing |
| Rolling updates | Manual | Built-in |
| Config management | .env files | ConfigMaps, Secrets |
| Storage | Bind mounts | PVCs, CSI drivers |
| Networking | Simple port mapping | Service mesh, ingress |
| Resource limits | Basic | Granular (requests/limits) |
| Career value | Minimal | Very high |

Stay with Docker Compose if:

- You run a single node and just need services to come back up after a reboot
- Restart policies cover your reliability needs
- You'd rather spend time using your services than managing the platform

Consider Kubernetes if:

- You have (or want) multiple nodes and need workloads to move between them
- You want self-healing, rolling updates, and declarative configuration
- You're building Kubernetes skills for your career

Lightweight Kubernetes Distributions

Full upstream Kubernetes (kubeadm) is heavy and complex. These lightweight distributions strip it down to what homelabs actually need.

K3s (by Rancher/SUSE)

K3s is the most popular homelab Kubernetes distribution by a wide margin. It packages Kubernetes into a single binary under 100MB, uses SQLite instead of etcd by default, and includes essential components (Traefik ingress, CoreDNS, local-path storage) out of the box.

Best for: Most homelabbers. Excellent community, tons of guides, battle-tested.

K0s (by Mirantis)

K0s is a zero-friction Kubernetes distribution that bundles everything into a single binary with zero host OS dependencies. It's slightly more opinionated about component choices but easier to manage at scale.

Best for: Clean installations where you want minimal host OS changes.

MicroK8s (by Canonical)

MicroK8s installs via snap and offers an add-on system for enabling features. It's Ubuntu-focused and very easy to get started with, but the snap packaging can cause issues with storage and networking.

Best for: Ubuntu users who want the simplest possible installation.

| Feature | K3s | K0s | MicroK8s |
|---|---|---|---|
| Install method | Shell script / binary | Shell script / binary | Snap package |
| Default datastore | SQLite (single) / etcd (HA) | SQLite / etcd | Dqlite |
| Included ingress | Traefik | None | Nginx (addon) |
| Included storage | local-path-provisioner | None | hostpath (addon) |
| Included CNI | Flannel | kube-router / Calico | Calico (addon) |
| Resource usage | ~512MB RAM | ~512MB RAM | ~700MB RAM |
| Multi-node | Easy (join token) | Easy (join token) | Easy (add-node) |
| ARM support | Excellent | Good | Good |

Setting Up K3s: Step by Step

K3s is the recommendation for most homelabs. Here's a complete setup.

Prerequisites

- One or more Linux machines or VMs (K3s runs in 1 GB of RAM; 2 GB+ is comfortable)
- A static IP or DHCP reservation for each node
- SSH access with sudo privileges

Single-Node Installation

For a single-server setup (great for learning):

# Install K3s
curl -sfL https://get.k3s.io | sh -

# Verify it's running
sudo k3s kubectl get nodes

# Copy the kubeconfig for regular user access
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

# Verify
kubectl get nodes
# NAME        STATUS   ROLES                  AGE   VERSION
# homelab01   Ready    control-plane,master   30s   v1.31.x+k3s1
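A common next step is managing the cluster from your workstation rather than SSH-ing into the node. K3s writes its kubeconfig with a 127.0.0.1 server address, so after copying `/etc/rancher/k3s/k3s.yaml` off the node (with scp, for example) you need to substitute the node's LAN IP. A self-contained sketch of just the rewrite step, using a sample file and an assumed address of 192.168.1.101:

```shell
# Simulate the kubeconfig fix-up locally; on a real setup this file would be
# the copy of /etc/rancher/k3s/k3s.yaml from your K3s node.
cat > /tmp/k3s-demo-config <<'EOF'
clusters:
  - cluster:
      server: https://127.0.0.1:6443
    name: default
EOF

# Rewrite the loopback address to the node's LAN IP (assumed here)
sed -i 's#https://127.0.0.1:6443#https://192.168.1.101:6443#' /tmp/k3s-demo-config

grep server /tmp/k3s-demo-config
```

On the real file, point `KUBECONFIG` at the copied config (`export KUBECONFIG=~/.kube/config-homelab`) and `kubectl get nodes` should answer from your workstation.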

Multi-Node Cluster

For a proper cluster with high availability:

Server node 1 (initial control plane):

# Install first server node with cluster-init for etcd
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --tls-san=k8s.home.lab \
  --disable=traefik     # We'll install our own ingress

# Get the node token for joining other nodes
sudo cat /var/lib/rancher/k3s/server/node-token

Server node 2 (additional control plane):

curl -sfL https://get.k3s.io | sh -s - server \
  --server https://192.168.1.101:6443 \
  --token <node-token-from-server-1> \
  --tls-san=k8s.home.lab \
  --disable=traefik

Worker node(s):

curl -sfL https://get.k3s.io | sh -s - agent \
  --server https://192.168.1.101:6443 \
  --token <node-token-from-server-1>

Verify the cluster:

kubectl get nodes
# NAME        STATUS   ROLES                       AGE   VERSION
# server01    Ready    control-plane,etcd,master    5m    v1.31.x+k3s1
# server02    Ready    control-plane,etcd,master    3m    v1.31.x+k3s1
# worker01    Ready    <none>                       1m    v1.31.x+k3s1


Setting Up K0s

If you prefer K0s for its zero-dependency approach:

Single-Node Installation

# Download K0s
curl -sSLf https://get.k0s.sh | sudo sh

# Install as a controller+worker (single node)
sudo k0s install controller --single

# Start K0s
sudo k0s start

# Wait for it to be ready, then check status
sudo k0s status

# Get kubeconfig
sudo k0s kubeconfig admin > ~/.kube/config

kubectl get nodes

K0sctl for Multi-Node

K0s provides k0sctl, a tool that manages your entire cluster from a single config file:

# k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    - role: controller
      ssh:
        address: 192.168.1.101
        user: admin
        keyPath: ~/.ssh/id_ed25519
    - role: controller
      ssh:
        address: 192.168.1.102
        user: admin
        keyPath: ~/.ssh/id_ed25519
    - role: worker
      ssh:
        address: 192.168.1.103
        user: admin
        keyPath: ~/.ssh/id_ed25519
  k0s:
    version: "1.31.2+k0s.0"

# Apply the cluster configuration
k0sctl apply --config k0sctl.yaml

# Get kubeconfig
k0sctl kubeconfig --config k0sctl.yaml > ~/.kube/config

Essential Tools to Install

A bare Kubernetes cluster needs a few additions to be useful for homelab workloads.

Helm (Package Manager)

Helm is the de facto package manager for Kubernetes. Most homelab applications have Helm charts available.

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Storage: Longhorn

K3s includes local-path-provisioner, which works but doesn't replicate data across nodes. Longhorn provides distributed block storage:

helm repo add longhorn https://charts.longhorn.io
helm repo update

helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.defaultDataPath="/mnt/longhorn"
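The chart registers a default `longhorn` StorageClass that keeps three replicas of every volume. For bulk data you could easily re-create, replication is wasted space, and you can add a second class with fewer replicas. A sketch using Longhorn's documented `numberOfReplicas` parameter (the class name is illustrative):

```yaml
# longhorn-single.yaml: a one-replica class for easily re-created data
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "2880"
```

Reference it from a PVC with `storageClassName: longhorn-single-replica` instead of the default.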

Ingress Controller

If you disabled Traefik (recommended for more control), install your preferred ingress:

# Nginx Ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=LoadBalancer

MetalLB (Load Balancer)

Cloud Kubernetes gets load balancers from the cloud provider. On bare metal, MetalLB fills that gap:

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb \
  --namespace metallb-system \
  --create-namespace

Configure an IP address pool:

# metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool

kubectl apply -f metallb-config.yaml
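Once the pool exists, any Service of `type: LoadBalancer` automatically gets an address from it. You can also pin a specific IP, which matters for things clients reference by address, like a DNS server. A sketch (the Pi-hole names are illustrative; `metallb.io/loadBalancerIPs` is MetalLB's current annotation for this):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns
  namespace: networking
  annotations:
    # Pin a stable address from the pool so clients can point at it
    metallb.io/loadBalancerIPs: 192.168.1.200
spec:
  type: LoadBalancer
  selector:
    app: pihole
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
```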

cert-manager (SSL Certificates)

Automate Let's Encrypt certificates for your services:

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

Create a ClusterIssuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

Deploying Your First Application

Here's a complete example deploying Jellyfin on Kubernetes:

# jellyfin.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
  namespace: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          volumeMounts:
            - name: config
              mountPath: /config
            - name: media
              mountPath: /media
              readOnly: true
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 4000m
              memory: 4Gi
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: jellyfin-config
        - name: media
          hostPath:
            path: /mnt/media
            type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: media
spec:
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jellyfin.home.lab
      secretName: jellyfin-tls
  rules:
    - host: jellyfin.home.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096

kubectl apply -f jellyfin.yaml
kubectl get pods -n media -w
# Wait for it to be Running

Compare that to the Docker Compose equivalent (about 15 lines). This is the Kubernetes trade-off: more explicit, more powerful, more verbose.
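For reference, a rough Docker Compose equivalent of the same deployment (a sketch; paths and ports mirror the manifests above, and the `restart` policy is Compose's stand-in for self-healing):

```yaml
# docker-compose.yml: approximately the same Jellyfin setup
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
      - /mnt/media:/media:ro
    restart: unless-stopped
```

What the Compose version can't express: replicated storage, TLS-terminating ingress with automatic certificates, or moving the workload to another node when this one dies.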

GitOps with Flux

The real power of Kubernetes in a homelab is GitOps — storing all your cluster configuration in a Git repository and having it automatically applied. Flux is the most popular GitOps tool for homelabs.

# Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash

# Bootstrap Flux with your GitHub repo
flux bootstrap github \
  --owner=your-username \
  --repository=homelab-cluster \
  --path=clusters/homelab \
  --personal

Now create a directory structure in your repo:

homelab-cluster/
├── clusters/
│   └── homelab/
│       ├── flux-system/        # Auto-generated by bootstrap
│       └── apps.yaml           # Points to your apps
├── apps/
│   ├── jellyfin/
│   │   ├── namespace.yaml
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── ingress.yaml
│   ├── nextcloud/
│   │   └── ...
│   └── kustomization.yaml
└── infrastructure/
    ├── cert-manager/
    ├── ingress-nginx/
    ├── longhorn/
    └── metallb/

Push changes to Git, and Flux automatically applies them to your cluster. This is infrastructure as code done properly.
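The `apps.yaml` entry point in that layout can be a Flux Kustomization that reconciles everything under `apps/`. A sketch (the 10-minute interval and pruning are choices, not defaults; the API group is Flux's current `kustomize.toolkit.fluxcd.io/v1`):

```yaml
# clusters/homelab/apps.yaml: tell Flux to sync the apps/ directory
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true            # delete cluster objects removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
```

With `prune: true`, deleting a manifest from the repo also removes the object from the cluster, which keeps Git as the single source of truth.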

Monitoring Your Cluster

The kube-prometheus-stack Helm chart installs Prometheus, Grafana, and AlertManager with pre-built Kubernetes dashboards:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=your-password \
  --set prometheus.prometheusSpec.retention=30d \
  --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi

This gives you dashboards for node health, pod resource usage, cluster-wide metrics, and more — all pre-configured.

Common Pitfalls

1. Storage Confusion

Kubernetes storage is the single biggest source of frustration for newcomers. Understand these concepts:

- emptyDir: scratch space tied to the pod; wiped when the pod is deleted
- hostPath: a directory on the node; survives restarts but ties the pod to that node
- PersistentVolumeClaim (PVC): a request for storage fulfilled by a StorageClass; survives pod restarts and rescheduling
- StorageClass: the provisioner that creates volumes on demand (local-path, longhorn, etc.)

If your data disappears after a pod restart, you probably used emptyDir instead of a PVC.

2. DNS Resolution

Pods communicate via service names. jellyfin.media.svc.cluster.local resolves to the Jellyfin service. If inter-service communication fails, check CoreDNS:

kubectl get pods -n kube-system | grep coredns
kubectl logs -n kube-system -l k8s-app=kube-dns
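A throwaway debug pod makes it easy to test resolution from inside the cluster. This mirrors the approach in the upstream Kubernetes DNS-debugging docs, which use the `jessie-dnsutils` image:

```yaml
# dnsutils.yaml: a pod with dig/nslookup for in-cluster DNS checks
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command: ["sleep", "infinity"]
  restartPolicy: Always
```

Apply it, then run lookups from inside: `kubectl exec -it dnsutils -- nslookup jellyfin.media`. Delete the pod when you're done.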

3. Resource Limits

Always set resource requests and limits. Without them, a single misbehaving pod can starve the entire node:

resources:
  requests:      # Minimum guaranteed resources
    cpu: 250m    # 0.25 CPU cores
    memory: 512Mi
  limits:        # Maximum allowed
    cpu: 2000m   # 2 CPU cores
    memory: 2Gi

4. Namespace Organization

Group related services into namespaces:

media/       - Jellyfin, Sonarr, Radarr, Prowlarr
monitoring/  - Prometheus, Grafana, AlertManager
home/        - Home Assistant, Node-RED
networking/  - Pi-hole, Unbound
default/     - Don't put anything here

Uninstalling K3s

If you decide Kubernetes isn't for you (no shame in that):

# On server nodes
/usr/local/bin/k3s-uninstall.sh

# On agent nodes
/usr/local/bin/k3s-agent-uninstall.sh

This cleanly removes K3s and all its components.

Final Thoughts

Kubernetes in a homelab is a learning investment. The first week will be frustrating — you'll fight with YAML indentation, storage provisioning, and networking concepts that Docker Compose handles implicitly. But once it clicks, you'll have a genuinely powerful platform and skills that are in very high demand professionally.

Start with a single-node K3s cluster, deploy two or three services, and get comfortable with kubectl before adding more nodes or GitOps tooling. The homelab Kubernetes community is incredibly helpful — r/homelab and the K3s Discord are great resources when you get stuck.

And if you try it and decide Docker Compose is enough for your needs, that's a perfectly valid conclusion. The best homelab tool is the one that works for you without making the hobby feel like work.
