
Cilium: eBPF-Based Networking for Your Homelab Kubernetes Cluster

Kubernetes · 2026-03-14 · 4 min read · Tags: cilium, ebpf, kubernetes, networking, network-policy
By the HomeLab Starter Editorial Team, home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.

If you're running Kubernetes in your homelab, you've probably accepted kube-proxy as just "the thing that handles networking." It works, but it's been around since Kubernetes 1.0 and relies on iptables rules that don't scale well and are difficult to debug. Cilium takes a fundamentally different approach: it uses eBPF (extended Berkeley Packet Filter) to handle networking directly in the Linux kernel, bypassing iptables entirely.


The result is faster packet processing, richer network policies, and a built-in observability layer that makes debugging network issues dramatically easier. This guide walks you through installing Cilium in a homelab Kubernetes cluster and configuring its most useful features.

What Is eBPF and Why Does It Matter?

eBPF lets you run sandboxed programs inside the Linux kernel without changing kernel source code or loading kernel modules. Cilium uses eBPF programs to intercept and process network packets at the kernel level — before they ever reach iptables.

The practical benefits in a homelab:

- Faster packet processing, since service routing happens in the kernel instead of traversing long iptables chains
- Network policies that understand Kubernetes labels and DNS names, not just IPs and ports
- Built-in flow-level observability through Hubble, which turns network debugging from guesswork into reading a log

Prerequisites

You'll need:

- A Kubernetes cluster (k3s and kubeadm are both covered below)
- Nodes running a reasonably recent Linux kernel (Cilium's eBPF datapath requires roughly 4.19.57 or newer; a 5.x kernel unlocks more features)
- Helm 3 and kubectl on your workstation
- SSH and root access to the nodes
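Kernel support is the one prerequisite worth verifying up front. A quick check to run on each node (the config file path varies by distro; this assumes the common Debian/Ubuntu layout):

```shell
# Check the node's kernel version; Cilium's eBPF datapath needs roughly
# 4.19.57 or newer (a 5.x kernel unlocks more features)
uname -r

# Verify the kernel was built with eBPF support; some distros ship the
# config at /proc/config.gz instead of /boot
grep -E 'CONFIG_BPF=y|CONFIG_BPF_SYSCALL=y' "/boot/config-$(uname -r)" 2>/dev/null \
  || echo "kernel config not found at /boot; check /proc/config.gz"
```

Any mainstream distro released in the last few years will pass both checks.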

For k3s specifically, you need to disable its built-in Flannel CNI and kube-proxy:

# /etc/rancher/k3s/config.yaml (on server node before first start)
flannel-backend: none
disable-kube-proxy: true
disable:
  - servicelb

If you have an existing k3s cluster, you'll need to reinstall with these flags. For fresh kubeadm clusters, add --skip-phases=addon/kube-proxy to kubeadm init.
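For a fresh node, the same settings can be passed straight to the k3s installer via its `INSTALL_K3S_EXEC` variable instead of writing the config file first (flag names as of recent k3s releases):

```shell
# Install k3s without Flannel, kube-proxy, or the built-in service LB,
# leaving networking entirely to Cilium
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--flannel-backend=none --disable-kube-proxy --disable=servicelb" sh -
```

Note that nodes will report NotReady until Cilium is installed, since the cluster has no CNI yet. That's expected.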

Installing Cilium with Helm

First, add the Cilium Helm repository:

helm repo add cilium https://helm.cilium.io/
helm repo update

For a homelab cluster with kube-proxy replacement enabled:

helm install cilium cilium/cilium --version 1.16.0 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}') \
  --set k8sServicePort=6443 \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Key flags explained:

- kubeProxyReplacement=true has Cilium take over service load balancing entirely, replacing kube-proxy
- k8sServiceHost and k8sServicePort point the agent directly at the API server, since the in-cluster kubernetes service isn't reachable before the CNI is up
- hubble.relay.enabled and hubble.ui.enabled turn on the Hubble observability stack covered below

The jsonpath expression grabs the first address of the first node; on a multi-node cluster, substitute your control-plane VIP or load balancer address instead.

Watch the pods come up:

kubectl -n kube-system rollout status ds/cilium


Verifying the Installation

Install the Cilium CLI:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

Run the built-in connectivity test:

cilium connectivity test

This deploys test pods and runs ~40 network checks: pod-to-pod, pod-to-service, egress policy enforcement, and more. A clean homelab installation typically passes all tests in 5–10 minutes.
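Independent of the connectivity test, the CLI's status command gives a quick health summary of the agents, operator, and Hubble components:

```shell
# Block until all Cilium components report ready, then print a summary
cilium status --wait
```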

Network Policies That Actually Make Sense

Standard Kubernetes NetworkPolicy objects are limited to L3/L4 rules built from pod selectors, namespace selectors, IP blocks, and ports. Cilium extends this with CiliumNetworkPolicy CRDs that add DNS-aware egress rules and L7 filtering on top of the same label model.

Here's a policy that allows your monitoring stack to scrape metrics from all pods with a specific label, while blocking everything else:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: my-app
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: prometheus
        k8s:io.kubernetes.pod.namespace: monitoring
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
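A quick way to confirm the policy behaves as intended (the filename, service name, and port here are illustrative, matching the example above):

```shell
# Apply the policy
kubectl apply -f allow-prometheus-scrape.yaml

# From a pod NOT labeled app=prometheus, the scrape port should now
# time out, since the ingress rule silently drops the traffic
kubectl run test-client --rm -it --image=curlimages/curl --restart=Never -- \
  curl -m 5 http://my-app.default.svc.cluster.local:8080/metrics
```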

You can also write egress policies based on DNS names — useful for controlling which external services pods can reach:

spec:
  endpointSelector:
    matchLabels:
      app: my-app
  egress:
  - toFQDNs:
    - matchName: "api.github.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
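One caveat: toFQDNs rules work by having Cilium observe the pod's DNS lookups and map names to IPs, so the same policy must also allow and inspect DNS traffic. The commonly documented companion egress rule looks like this (label keys assume the standard kube-dns/CoreDNS deployment in kube-system):

```
  # Allow DNS to kube-dns and let Cilium record the lookups,
  # which is what makes FQDN-to-IP matching possible
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
```

Without this rule, egress to api.github.com will fail even though the FQDN is allowed, because the pod can't resolve the name in the first place.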

Hubble: Built-In Network Observability

Hubble is Cilium's observability layer. Once enabled, it records every network flow in the cluster. Access the Hubble UI:

cilium hubble ui

This opens a browser showing a live graph of all network traffic between your services. You can filter by namespace, pod, verdict (forwarded/dropped), and protocol.

For CLI access, install hubble and port-forward the relay:

kubectl port-forward -n kube-system svc/hubble-relay 4245:80 &
hubble observe --namespace default --last 20

Sample output:

Mar 14 10:23:01.445  default/frontend → default/backend:8080  HTTP GET /api/users  FORWARDED
Mar 14 10:23:01.447  default/backend  → default/postgres:5432 TCP  FORWARDED
Mar 14 10:23:04.112  default/scraper  → default/backend:8080  HTTP GET /internal/metrics  DROPPED (policy)

That last line shows a network policy drop in real time — try getting that kind of visibility with raw iptables.
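When you're hunting for a misbehaving policy, filtering flows by verdict is usually the fastest route:

```shell
# Stream only dropped flows, cluster-wide, as they happen
hubble observe --verdict DROPPED --follow
```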

Bandwidth Management (Optional)

Cilium supports per-pod bandwidth limiting using eBPF-based traffic shaping. This is useful in homelabs where a single download-happy container can saturate your home network. The annotation is read at pod creation, so set it in the pod template rather than annotating a running pod:

# In the pod template's metadata
annotations:
  kubernetes.io/egress-bandwidth: "100M"

Enable the feature at install time with --set bandwidthManager.enabled=true. Note that the bandwidth manager enforces only the egress annotation; kubernetes.io/ingress-bandwidth is not supported.
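On an existing cluster, the feature can be switched on without reinstalling, assuming the release name and namespace used earlier:

```shell
# Flip on the bandwidth manager while keeping all other values intact
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set bandwidthManager.enabled=true
```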

Troubleshooting Common Issues

Pods can't reach the API server after install: Verify k8sServiceHost is correct. On multi-node clusters, use the VIP or load balancer address, not a single node IP.

Hubble shows no flows: Check that hubble-relay pod is running (kubectl -n kube-system get pods -l app.kubernetes.io/name=hubble-relay). The relay needs to connect to agents on all nodes.

Network policy not enforced: Policies take effect as soon as they're applied, with no agent restart needed, so the usual culprit is an endpointSelector whose labels don't actually match the pod. Verify the labels, then watch cilium monitor on the affected node for drop events:

kubectl -n kube-system exec ds/cilium -- cilium monitor --type drop

Is It Worth It for a Homelab?

Absolutely — especially if you're running any Kubernetes workloads. The Hubble observability alone pays for the migration cost: being able to see exactly which pods are talking to what, and which network policy is dropping traffic, saves hours of debugging compared to reading iptables logs.

The eBPF performance gains matter less in a homelab (you're unlikely to saturate 1GbE purely from kube-proxy overhead), but the richer network policy model and DNS-aware egress filtering are genuinely useful for keeping services properly isolated.

Start with a fresh k3s node if you're nervous about migration — Cilium's connectivity test suite gives you confidence before rolling it out to your main cluster.

Get free weekly tips in your inbox. Subscribe to HomeLab Starter