
Managing Your Homelab with Nix Flakes: Reproducible Server Configs

Configuration Management · 2026-02-15 · 8 min read · nix · flakes · configuration-management · reproducibility · infrastructure-as-code
By the HomeLab Starter Editorial Team, home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.

If you have more than one server in your homelab, you have a configuration management problem. Maybe you are running Ansible playbooks that drift over time, or SSHing into each box and hoping you remember what you changed last month. Nix flakes offer a different approach: a single repository that declaratively defines every machine in your lab, with exact version pinning and reproducible builds.


This guide focuses specifically on Nix flakes as a configuration management tool for multi-machine homelabs. If you are new to NixOS itself, check out our NixOS immutable infrastructure guide first. Here we will assume you know the basics of NixOS and the Nix language, and go deep on using flakes to manage a fleet.


What Flakes Solve

Before flakes, NixOS configurations had a reproducibility gap. Your configuration.nix might reference <nixpkgs>, but what version of nixpkgs? The answer depended on whatever channel your machine happened to be subscribed to. Two machines running the "same" configuration could produce different systems because they were pulling from different nixpkgs revisions.

Flakes fix this with three mechanisms:

Version pinning. The flake.lock file records the exact git revision of every input (nixpkgs, home-manager, hardware quirks, third-party modules). When you build, you get the same packages regardless of when or where you build. Updating is explicit: you run nix flake update and commit the new lock file.
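For a sense of what actually gets pinned, here is a trimmed flake.lock entry for a single input (the hash, revision, and timestamp are placeholders; a real lock file records concrete values for every input):

```json
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1700000000,
        "narHash": "sha256-<hash>",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "<git commit sha>",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixos-24.11",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": { "nixpkgs": "nixpkgs" }
    }
  },
  "root": "root",
  "version": 7
}
```

The "original" block records what you asked for (the nixos-24.11 branch); the "locked" block records the exact commit you got.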

Hermetic evaluation. A flake cannot reference anything outside its declared inputs. No more <nixpkgs> channel lookups, no ambient state. The configuration evaluates the same way on your laptop, in CI, and on the target machine.

Composability. Flakes can consume other flakes as inputs. Need hardware-specific quirks from nixos-hardware? Add it as an input. Want secrets management from sops-nix? Another input. Each dependency is version-pinned independently.

Flake Structure

A flake is a directory (usually a git repo) containing a flake.nix file. For a homelab, the structure looks like this:

homelab-flake/
├── flake.nix              # Inputs, outputs, machine definitions
├── flake.lock             # Pinned dependency versions (auto-generated)
├── hosts/
│   ├── nas/
│   │   ├── default.nix    # NAS-specific configuration
│   │   └── hardware.nix   # Auto-generated hardware config
│   ├── compute/
│   │   ├── default.nix
│   │   └── hardware.nix
│   └── gateway/
│       ├── default.nix
│       └── hardware.nix
├── modules/
│   ├── common.nix         # Base config all machines share
│   ├── monitoring.nix     # Prometheus + node_exporter
│   ├── docker-host.nix    # Docker runtime setup
│   └── networking.nix     # Shared network settings
├── overlays/
│   └── default.nix        # Package customizations
└── secrets/
    ├── secrets.yaml       # Encrypted secrets (sops)
    └── .sops.yaml         # Encryption rules

The flake.nix is the entry point. Here is a practical example managing three homelab machines -- a NAS, a compute node, and a network gateway:

{
  description = "Homelab infrastructure";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
    nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";

    # Hardware-specific optimizations
    nixos-hardware.url = "github:NixOS/nixos-hardware";

    # Secrets management
    sops-nix = {
      url = "github:Mic92/sops-nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };

    # Remote deployment
    deploy-rs = {
      url = "github:serokell/deploy-rs";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, nixpkgs-unstable, nixos-hardware,
              sops-nix, deploy-rs, ... }:
  let
    system = "x86_64-linux";
    # Allow using unstable packages selectively
    unstable = import nixpkgs-unstable {
      inherit system;
      config.allowUnfree = true;
    };
  in {
    nixosConfigurations = {
      nas = nixpkgs.lib.nixosSystem {
        inherit system;
        specialArgs = { inherit unstable; };
        modules = [
          ./hosts/nas
          ./modules/common.nix
          ./modules/monitoring.nix
          sops-nix.nixosModules.sops
        ];
      };

      compute = nixpkgs.lib.nixosSystem {
        inherit system;
        specialArgs = { inherit unstable; };
        modules = [
          ./hosts/compute
          ./modules/common.nix
          ./modules/monitoring.nix
          ./modules/docker-host.nix
          sops-nix.nixosModules.sops
        ];
      };

      gateway = nixpkgs.lib.nixosSystem {
        inherit system;
        specialArgs = { inherit unstable; };
        modules = [
          ./hosts/gateway
          ./modules/common.nix
          ./modules/monitoring.nix
          ./modules/networking.nix
          sops-nix.nixosModules.sops
        ];
      };
    };

    deploy.nodes = {
      nas = {
        hostname = "10.0.20.10";
        profiles.system = {
          user = "root";
          sshUser = "deploy";
          path = deploy-rs.lib.${system}.activate.nixos
            self.nixosConfigurations.nas;
        };
      };
      compute = {
        hostname = "10.0.20.11";
        profiles.system = {
          user = "root";
          sshUser = "deploy";
          path = deploy-rs.lib.${system}.activate.nixos
            self.nixosConfigurations.compute;
        };
      };
      gateway = {
        hostname = "10.0.20.1";
        profiles.system = {
          user = "root";
          sshUser = "deploy";
          path = deploy-rs.lib.${system}.activate.nixos
            self.nixosConfigurations.gateway;
        };
      };
    };
  };
}

A few things to note. The inputs.nixpkgs.follows directive on sops-nix and deploy-rs forces them to use the same nixpkgs as your main configuration, avoiding duplicate package builds. The specialArgs pattern passes the unstable package set into modules so you can selectively pull newer versions of specific packages without moving your entire system to unstable.

Shared Modules and Machine-Specific Configs

The power of this structure is the separation between shared and machine-specific configuration. Your modules/common.nix handles everything that applies across all machines -- SSH hardening, timezone, base packages, nix garbage collection, deploy user setup. Each host's default.nix only contains what makes that machine unique.
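As a concrete sketch of those shared concerns (values here are illustrative; adjust the user name, keys, and package list to taste), a minimal modules/common.nix might look like:

```nix
# modules/common.nix -- baseline shared by every machine
{ config, pkgs, ... }:
{
  time.timeZone = "UTC";

  # SSH hardening
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = false;
    settings.PermitRootLogin = "no";
  };

  # Deploy user for remote activation (e.g. deploy-rs)
  users.users.deploy = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA... deploy@workstation" ];
  };
  security.sudo.wheelNeedsPassword = false;

  # Keep the nix store from growing unbounded
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 30d";
  };
  nix.settings.experimental-features = [ "nix-command" "flakes" ];

  environment.systemPackages = with pkgs; [ git vim htop ];
}
```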

For example, the NAS host config focuses on storage:

# hosts/nas/default.nix
{ config, pkgs, ... }:
{
  imports = [ ./hardware.nix ];

  networking.hostName = "nas";
  networking.hostId = "a1b2c3d4"; # Required for ZFS

  boot.supportedFilesystems = [ "zfs" ];

  # ZFS pool auto-import
  boot.zfs.extraPools = [ "tank" ];

  # Samba shares
  services.samba = {
    enable = true;
    settings = {
      global = {
        security = "user";
        "server min protocol" = "SMB3";
      };
      media = {
        path = "/tank/media";
        "read only" = "no";
        "valid users" = "media";
      };
      backups = {
        path = "/tank/backups";
        "read only" = "no";
        "valid users" = "backup";
      };
    };
  };

  # NFS exports
  services.nfs.server = {
    enable = true;
    exports = ''
      /tank/media 10.0.20.0/24(rw,sync,no_subtree_check)
    '';
  };

  networking.firewall.allowedTCPPorts = [ 445 139 2049 ];
}

Meanwhile the gateway config handles routing and VPN:

# hosts/gateway/default.nix
{ config, pkgs, ... }:
{
  imports = [ ./hardware.nix ];

  networking.hostName = "gateway";

  # Enable IP forwarding
  boot.kernel.sysctl = {
    "net.ipv4.ip_forward" = 1;
  };

  # WireGuard VPN
  networking.wireguard.interfaces.wg0 = {
    ips = [ "10.100.0.1/24" ];
    listenPort = 51820;
    privateKeyFile = config.sops.secrets.wireguard-key.path;
    peers = [
      { publicKey = "abc123..."; allowedIPs = [ "10.100.0.2/32" ]; }
      { publicKey = "def456..."; allowedIPs = [ "10.100.0.3/32" ]; }
    ];
  };

  # Nginx reverse proxy for internal services
  services.nginx = {
    enable = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts."grafana.lab.local" = {
      locations."/".proxyPass = "http://10.0.20.11:3000";
    };
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];
  networking.firewall.allowedUDPPorts = [ 51820 ];
}

This pattern scales cleanly. Adding a fourth or fifth machine means creating a new directory under hosts/, writing its specific config, and wiring it into flake.nix. The shared modules come along for free.


Overlays: Customizing Packages

Overlays let you modify or add packages across your entire flake without forking nixpkgs. A common homelab use case is pinning a specific version of a package or applying a patch:

# overlays/default.nix
# Written as a function of the unstable package set, which flake.nix
# supplies when applying the overlay (a plain `final: prev:` overlay
# has no `prev.unstable` attribute to reach for)
unstable: final: prev: {
  # Use a newer version of a monitoring tool from unstable
  prometheus-node-exporter = unstable.prometheus-node-exporter;

  # Custom wrapper script available on all machines
  homelab-status = prev.writeShellScriptBin "homelab-status" ''
    echo "=== $(hostname) ==="
    ${prev.curl}/bin/curl -s http://localhost:9100/metrics | head -5
  '';
}

Apply the overlay via nixpkgs.overlays in flake.nix (or in a module every machine imports) and each system picks up the customized packages.
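One way to wire that in, assuming the overlay file is written as a function that takes the unstable package set as its first argument, is an inline module appended to each machine's modules list in flake.nix (where the let-bound unstable from earlier is in scope):

```nix
# In flake.nix, inside each nixosSystem's modules list (sketch):
({ ... }: {
  nixpkgs.overlays = [
    (import ./overlays/default.nix unstable)
  ];
})
```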

Deployment Workflows

Direct Rebuild

For a single machine, the standard approach works:

sudo nixos-rebuild switch --flake .#nas

This builds the configuration for the nas host and activates it. The #nas selector matches the key in nixosConfigurations.
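For a lower-stakes check, nixos-rebuild can also boot a configuration in a throwaway local QEMU VM. Note the VM uses its own virtual disk, so hardware-dependent pieces (ZFS pools, real NICs) will not behave like the physical machine:

```shell
# Build a QEMU VM image of the nas configuration and run it
nixos-rebuild build-vm --flake .#nas
./result/bin/run-nas-vm
```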

Remote Deployment with deploy-rs

For deploying to multiple machines from your workstation (or from CI), deploy-rs is a popular choice. With the deploy nodes already defined in the flake above:

# Deploy to one machine
deploy .#nas

# Deploy to all machines
deploy .

# Dry run (build but don't activate)
deploy .#compute --dry-activate

deploy-rs includes automatic rollback. If the target machine does not confirm the new system within a configurable timeout, deploy-rs reverts to the previous generation. This prevents a bad config from bricking a remote machine.
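The rollback behavior can be tuned per node. The option names below are my reading of deploy-rs's node schema, so check its documentation for the current spelling:

```nix
# In flake.nix, alongside the node definition (sketch):
deploy.nodes.nas = {
  hostname = "10.0.20.10";
  magicRollback = true;   # revert if the new system is never confirmed
  confirmTimeout = 60;    # seconds to wait for confirmation
  profiles.system = { /* as defined above */ };
};
```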

Colmena: An Alternative

Colmena is another multi-machine deployment tool, with a slightly different configuration style that some people prefer. It supports parallel deployment across machines and has good progress reporting. The choice between deploy-rs and colmena is largely a matter of taste -- both work well for homelab-scale fleets.

Secrets Management

Your flake repo will be in git, so secrets need encryption. Two tools integrate tightly with Nix flakes:

sops-nix builds on the SOPS format (originally a Mozilla project). Secrets are encrypted in YAML or JSON files and decrypted at activation time on the target machine. It supports age keys and GPG:

# In your host config
sops.defaultSopsFile = ../../secrets/secrets.yaml;
sops.age.keyFile = "/var/lib/sops-nix/key.txt";

sops.secrets.wireguard-key = {};
sops.secrets."database/password" = {
  owner = "postgres";
};
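The repo's .sops.yaml tells sops which keys may decrypt which files. A per-machine setup looks something like this, where the age recipients are placeholders for each host's public key plus your own:

```yaml
# secrets/.sops.yaml -- encryption rules (age keys are placeholders)
creation_rules:
  - path_regex: secrets/secrets\.yaml$
    key_groups:
      - age:
          - age1nas...        # nas host key
          - age1gateway...    # gateway host key
          - age1admin...      # your workstation key
```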

agenix is simpler, using age encryption directly. Each secret is a separate .age file. It has fewer moving parts but less flexibility with complex secret hierarchies.
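With agenix, the equivalent mapping lives in a secrets.nix next to the .age files (public keys again placeholders):

```nix
# secrets.nix (agenix sketch) -- host SSH public keys are placeholders
let
  nas = "ssh-ed25519 AAAA... root@nas";
  gateway = "ssh-ed25519 AAAA... root@gateway";
in
{
  "wireguard-key.age".publicKeys = [ gateway ];
  "database-password.age".publicKeys = [ nas ];
}
```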

Both support per-machine encryption keys, so each server can only decrypt the secrets it needs. For a homelab, either works fine. sops-nix is more popular in the broader Nix community; agenix is easier to set up initially.

Flakes vs. Ansible and Terraform

Ansible is the most common homelab configuration management tool, and for good reason -- it is easy to start with, works on any Linux distro, and does not require changing your OS. But Ansible playbooks describe procedures (do this, then this, then this), not end states. Run the same playbook twice and you might get a different result if the system drifted between runs. Nix flakes describe the desired state and build a system that matches it exactly, every time.

Terraform manages infrastructure provisioning -- creating VMs, DNS records, cloud resources -- but does not manage what runs inside those machines. Nix flakes and Terraform complement each other. You could use Terraform to provision your VMs and Nix flakes to configure what runs on them.

The honest tradeoff: Nix has a steeper learning curve than Ansible, and it requires NixOS on your machines (or at least the Nix package manager). If your homelab runs Ubuntu or Debian and you are happy with Ansible, switching is a significant undertaking. But if you value true reproducibility -- being able to git clone your repo and rebuild your entire lab from scratch -- flakes deliver something Ansible cannot.

Getting Started

If you already have NixOS machines, the migration path is straightforward:

  1. Create a git repo with a flake.nix defining your machines.
  2. Move each machine's configuration.nix into the hosts/ directory structure.
  3. Extract common settings into shared modules.
  4. Run nix flake lock (or nix flake update) to generate your initial flake.lock.
  5. Test with nixos-rebuild build --flake .#hostname before switching.
  6. Once validated, switch with nixos-rebuild switch --flake .#hostname.
  7. Add deploy-rs when you are ready for centralized deployments.

Pin your nixpkgs to a stable release (like nixos-24.11) for production machines. Use nixpkgs-unstable as a secondary input only for packages where you need the latest version. Keep flake.lock committed to git, and treat nix flake update as a deliberate upgrade step, not something that happens automatically.
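A deliberate upgrade then looks like the following. The single-input form needs a recent Nix; older versions spell it nix flake lock --update-input nixpkgs:

```shell
# Update just the nixpkgs input, leaving everything else pinned
nix flake update nixpkgs
git diff flake.lock                   # review exactly what moved
nixos-rebuild build --flake .#nas     # test-build before switching
git add flake.lock && git commit -m "nixpkgs: bump 24.11"
```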

The result is a single repository that fully describes your homelab infrastructure. Every change is tracked in git. Every machine can be rebuilt from scratch. And when you inevitably add that next server to the rack, configuring it is a matter of writing one file and running one command.
