Proxmox Cloud-Init Templates: Automate VM Provisioning in Your Homelab
Every time you spin up a new VM in Proxmox, you go through the same routine: install the OS from an ISO, wait 10 minutes, configure the user account, set up networking, install your SSH keys, update packages, install the tools you always need. Multiply that by every VM in your homelab and you've burned hours on repetitive setup.
Cloud-init eliminates all of that. You build a template once, and every VM cloned from that template configures itself on first boot — user accounts, SSH keys, network settings, packages, and custom scripts all applied automatically in under a minute. It's the same technology that AWS, GCP, and Azure use to initialize instances, and it works perfectly on your local Proxmox server.

What Cloud-Init Is and Why It Matters
Cloud-init is a service that runs during early boot on Linux systems. It reads configuration data from a datasource — in Proxmox's case, a small virtual CD-ROM drive attached to the VM — and uses that data to configure the system. The configuration can include:
- User accounts and passwords — create users, set passwords, grant sudo access
- SSH authorized keys — inject your public key so you can log in immediately
- Network configuration — set static IPs, DNS servers, search domains, or use DHCP
- Package installation — install packages on first boot
- Custom scripts — run arbitrary commands after boot (runcmd)
- File creation — write config files to disk before services start
Cloud-init is baked into the official cloud images published by Ubuntu, Debian, Rocky Linux, Fedora, and most other major distributions. These images are minimal (typically 300-600 MB), pre-configured to run cloud-init on first boot, and designed to be used as base images for cloning.
For a homelab, this means you can go from "I need a new VM" to "the VM is running and SSH-accessible" in about 30 seconds. No ISO boots, no installation wizards, no manual configuration.
Downloading Cloud Images
The major distributions publish cloud images specifically designed for this workflow. Download the qcow2 format — that's what Proxmox uses natively with its QEMU/KVM backend.
Ubuntu 24.04 LTS (Noble Numbat):
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Debian 12 (Bookworm):
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
Rocky Linux 9:
wget https://dl.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2
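Images occasionally get corrupted in transit, so verify the download before building a template from it. Ubuntu publishes a SHA256SUMS file in the same directory as each image; the other distributions publish equivalent checksum files. The snippet below shows the verification pattern against a local stand-in file so it runs anywhere:

```shell
# Verify a downloaded cloud image against the distro's published checksums.
# For Ubuntu, the checksum file lives alongside the image:
#   wget https://cloud-images.ubuntu.com/noble/current/SHA256SUMS
#   sha256sum -c SHA256SUMS --ignore-missing
#
# The same pattern, demonstrated with a local stand-in file:
printf 'stand-in image payload' > noble-server-cloudimg-amd64.img
sha256sum noble-server-cloudimg-amd64.img > SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing && echo "checksum OK"
rm -f noble-server-cloudimg-amd64.img SHA256SUMS
```

The `--ignore-missing` flag lets you check one image against a checksum file that lists every image in the directory.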
Store these somewhere persistent on your Proxmox host. I keep mine in /var/lib/vz/template/cloud-images/:
mkdir -p /var/lib/vz/template/cloud-images
mv noble-server-cloudimg-amd64.img /var/lib/vz/template/cloud-images/
mv debian-12-generic-amd64.qcow2 /var/lib/vz/template/cloud-images/
mv Rocky-9-GenericCloud-Base.latest.x86_64.qcow2 /var/lib/vz/template/cloud-images/
Creating a Proxmox Template with qm
The process for turning a cloud image into a Proxmox template is straightforward: create a VM shell, import the cloud image as its disk, configure the hardware, then convert it to a template. Here's the full sequence for Ubuntu 24.04 using VM ID 9000.
Step 1: Create the VM
qm create 9000 --name ubuntu-2404-cloud --memory 2048 --cores 2 \
--net0 virtio,bridge=vmbr0 --ostype l26
This creates an empty VM with 2 cores, 2 GB of RAM, and a virtio NIC on your default bridge. The --ostype l26 tells Proxmox it's a Linux 2.6+ kernel (which covers all modern Linux distributions).
Step 2: Import the Cloud Image as a Disk
qm importdisk 9000 /var/lib/vz/template/cloud-images/noble-server-cloudimg-amd64.img local-lvm
This converts the qcow2 image and imports it into the local-lvm storage as an unused disk. After import, attach it to the VM:
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
Step 3: Add a Cloud-Init Drive
Proxmox uses a virtual CD-ROM to pass cloud-init configuration to the VM. Add it:
qm set 9000 --ide2 local-lvm:cloudinit
Step 4: Configure Boot and Serial Console
Set the boot order to use the SCSI disk, and add a serial console (required for cloud images to display output correctly):
qm set 9000 --boot order=scsi0 --serial0 socket --vga serial0
Step 5: Resize the Disk (Optional but Recommended)
Cloud images come with small disks (typically 2-3 GB). Resize to something usable:
qm resize 9000 scsi0 +30G
The guest filesystem will automatically expand on first boot thanks to cloud-init's growpart module.
Step 6: Enable the QEMU Guest Agent
qm set 9000 --agent enabled=1
The guest agent lets Proxmox query the VM's IP address, perform clean shutdowns, and freeze filesystems for snapshots. Note that many cloud images (Ubuntu's included) don't ship the qemu-guest-agent package preinstalled, so install it via cloud-init user-data; the custom user-data example later in this article does exactly that.
Step 7: Convert to Template
qm template 9000
This locks the VM and marks it as a template. It can no longer be started directly — only cloned.
Configuring Cloud-Init
With the template created, you configure cloud-init settings either per-template (as defaults) or per-clone (overriding for each new VM). Proxmox exposes the most common cloud-init settings through qm commands.
Setting Defaults on the Template
# Default user and SSH key
qm set 9000 --ciuser hailey
qm set 9000 --sshkeys ~/.ssh/id_ed25519.pub
# Network: DHCP by default
qm set 9000 --ipconfig0 ip=dhcp
# Or set a static IP
qm set 9000 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1
# DNS
qm set 9000 --nameserver 192.168.1.1 --searchdomain homelab.local
The --sshkeys option reads the public key file and embeds it in the cloud-init configuration. When you clone this template, every new VM will accept your SSH key immediately.
Password Authentication
If you also want password-based login (useful for console access when networking is broken):
qm set 9000 --cipassword "$(openssl passwd -6 yourpassword)"
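The openssl command above produces a SHA-512 crypt hash, so the plaintext password never lands in the VM config. A quick sketch of what the hash looks like — the fixed salt here is only so the output is reproducible; omit `-salt` in real use and openssl generates a random one:

```shell
# openssl passwd -6 emits a SHA-512 crypt hash of the form $6$salt$digest.
# -salt is fixed here purely for reproducibility in this demo.
HASH=$(openssl passwd -6 -salt demosalt 'yourpassword')
echo "$HASH"
case "$HASH" in
  '$6$demosalt$'*) echo "SHA-512 crypt hash, as cloud-init expects" ;;
esac
```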
Cloud-init will set this as the user's password. Password SSH login is disabled by default in cloud images — only key-based auth works over SSH, which is the right security posture.
Cloning Templates
Create a new VM from the template with a single command:
qm clone 9000 101 --name web-server --full
The --full flag creates a full copy of the disk (independent of the template). Without it, Proxmox creates a linked clone that shares the template's base image and only stores differences — faster to create and uses less space, but the template can't be deleted while linked clones exist.
Override cloud-init settings for this specific clone:
qm set 101 --ipconfig0 ip=192.168.1.101/24,gw=192.168.1.1
qm set 101 --nameserver 192.168.1.1
Start the VM:
qm start 101
Within 20-30 seconds, the VM will be running, configured with its static IP, and accepting SSH connections with your key. No manual intervention required.
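Rather than guessing when SSH is ready, you can poll for it. Below is a sketch of a hypothetical helper (the `wait_for_ssh` name and defaults are my own) that uses bash's built-in /dev/tcp redirection, so it needs no extra tools on the Proxmox host:

```shell
#!/bin/bash
# wait_for_ssh (hypothetical helper): poll a TCP port until it accepts
# connections or a timeout expires. Uses bash's /dev/tcp redirection.
wait_for_ssh() {
  local host="$1" port="${2:-22}" timeout="${3:-60}" waited=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    sleep 2
    waited=$((waited + 2))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $host:$port" >&2
      return 1
    fi
  done
  echo "$host:$port is accepting connections"
}

# Typical use after qm start:
#   wait_for_ssh 192.168.1.101 22 90 && ssh hailey@192.168.1.101
```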
Advanced Cloud-Init: Custom User-Data
Proxmox's built-in cloud-init options cover the basics, but cloud-init supports far more through custom user-data configuration. You can pass a full cloud-config YAML file to control package installation, file creation, service management, and custom scripts.
Create a user-data file — for example, a base configuration that installs common tools and hardens SSH:
#cloud-config
package_update: true
package_upgrade: true
packages:
  - qemu-guest-agent
  - curl
  - wget
  - vim
  - htop
  - git
  - unattended-upgrades
  - fail2ban
write_files:
  - path: /etc/ssh/sshd_config.d/hardened.conf
    content: |
      PermitRootLogin no
      PasswordAuthentication no
      MaxAuthTries 3
      ClientAliveInterval 300
      ClientAliveCountMax 2
    permissions: "0644"
  - path: /etc/fail2ban/jail.local
    content: |
      [sshd]
      enabled = true
      maxretry = 3
      bantime = 3600
      findtime = 600
    permissions: "0644"
runcmd:
  - systemctl enable --now qemu-guest-agent
  - systemctl enable --now fail2ban
  - systemctl restart sshd
  - timedatectl set-timezone America/Los_Angeles
  - echo "Cloud-init provisioning complete" > /var/log/cloud-init-done
final_message: "System provisioned in $UPTIME seconds"
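It's worth validating user-data before any VM consumes it. On a machine with cloud-init installed, `cloud-init schema --config-file user-data.yml` runs a full schema check. A lighter pre-flight you can run on the Proxmox host itself is verifying the header line, since cloud-init won't treat a file as cloud-config unless it begins with exactly `#cloud-config` — an easy mistake to make. The `check_userdata` helper below is a sketch of my own:

```shell
# check_userdata (hypothetical helper): catch a missing #cloud-config
# header before the file ever reaches a VM. For a full schema check,
# run `cloud-init schema --config-file <file>` where cloud-init exists.
check_userdata() {
  if [ "$(head -n 1 "$1")" = "#cloud-config" ]; then
    echo "$1: header OK"
  else
    echo "$1: missing #cloud-config header" >&2
    return 1
  fi
}

# check_userdata user-data.yml && cp user-data.yml /var/lib/vz/snippets/
```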
Applying Custom User-Data in Proxmox
Proxmox stores cloud-init configuration as snippets. First, make sure you have a storage configured for snippets. The default local storage works; note that --content replaces the storage's entire content-type list, so include every type it already serves:
pvesm set local --content images,rootdir,vztmpl,backup,iso,snippets
Save your user-data file:
mkdir -p /var/lib/vz/snippets
cp user-data.yml /var/lib/vz/snippets/base-userdata.yml
Attach it to a VM or template:
qm set 9000 --cicustom "user=local:snippets/base-userdata.yml"
You can also provide separate vendor-data and network-config files:
qm set 9000 --cicustom "user=local:snippets/base-userdata.yml,network=local:snippets/network-config.yml"
Custom user-data is merged with Proxmox's built-in cloud-init settings. The built-in settings (user, SSH key, IP config) take priority for the fields they control, while your custom user-data handles everything else.
Automating the Entire Workflow with a Script
Once you understand the individual commands, wrap them into a script that builds templates automatically. This is especially useful when cloud images are updated — you can rebuild your templates from fresh images in minutes.
#!/bin/bash
set -euo pipefail
# Configuration
VMID="${1:?Usage: $0 <vmid> <image-path> <template-name>}"
IMAGE="${2:?Provide path to qcow2/img file}"
NAME="${3:?Provide template name}"
STORAGE="local-lvm"
SNIPPET_STORAGE="local"
MEMORY=2048
CORES=2
BRIDGE="vmbr0"
SSH_KEY="$HOME/.ssh/id_ed25519.pub"
CI_USER="hailey"
echo "Creating template: $NAME (ID: $VMID)"
# Destroy existing VM if it exists
qm destroy "$VMID" --purge 2>/dev/null || true
# Create VM
qm create "$VMID" --name "$NAME" --memory "$MEMORY" --cores "$CORES" \
--net0 "virtio,bridge=$BRIDGE" --ostype l26
# Import disk
qm importdisk "$VMID" "$IMAGE" "$STORAGE"
# Attach disk and configure hardware
qm set "$VMID" --scsihw virtio-scsi-pci --scsi0 "$STORAGE:vm-$VMID-disk-0"
qm set "$VMID" --ide2 "$STORAGE:cloudinit"
qm set "$VMID" --boot order=scsi0 --serial0 socket --vga serial0
qm set "$VMID" --agent enabled=1
# Resize disk
qm resize "$VMID" scsi0 +30G
# Cloud-init defaults
qm set "$VMID" --ciuser "$CI_USER"
qm set "$VMID" --sshkeys "$SSH_KEY"
qm set "$VMID" --ipconfig0 ip=dhcp
# Apply custom user-data if available
USERDATA="/var/lib/vz/snippets/base-userdata.yml"
if [ -f "$USERDATA" ]; then
qm set "$VMID" --cicustom "user=$SNIPPET_STORAGE:snippets/base-userdata.yml"
echo "Applied custom user-data from $USERDATA"
fi
# Convert to template
qm template "$VMID"
echo "Template $NAME (ID: $VMID) created successfully"
Use it to build all your templates:
chmod +x create-template.sh
./create-template.sh 9000 /var/lib/vz/template/cloud-images/noble-server-cloudimg-amd64.img ubuntu-2404-cloud
./create-template.sh 9001 /var/lib/vz/template/cloud-images/debian-12-generic-amd64.qcow2 debian-12-cloud
./create-template.sh 9002 /var/lib/vz/template/cloud-images/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2 rocky-9-cloud
Provisioning Script for New VMs
Take it a step further with a script that clones a template and configures the new VM in one shot:
#!/bin/bash
set -euo pipefail
TEMPLATE_ID="${1:?Usage: $0 <template-id> <new-vmid> <name> <ip>}"
NEW_VMID="${2:?Provide new VM ID}"
VM_NAME="${3:?Provide VM name}"
IP="${4:?Provide IP address (e.g., 192.168.1.101/24)}"
GATEWAY="192.168.1.1"
echo "Cloning template $TEMPLATE_ID -> VM $NEW_VMID ($VM_NAME)"
qm clone "$TEMPLATE_ID" "$NEW_VMID" --name "$VM_NAME" --full
qm set "$NEW_VMID" --ipconfig0 "ip=$IP,gw=$GATEWAY"
qm set "$NEW_VMID" --nameserver "$GATEWAY"
qm start "$NEW_VMID"
echo "VM $VM_NAME ($NEW_VMID) started with IP $IP"
echo "SSH will be available in ~30 seconds"
Now deploying a new VM is a single command:
./provision-vm.sh 9000 110 k8s-node-1 192.168.1.110/24
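For a whole fleet, such as a Kubernetes cluster, you can drive the script from a host list. This is a sketch under my own assumptions: the "vmid name ip" list format and the DRY_RUN switch are inventions, and the default is a dry run, so it's safe to execute as-is and review the plan before flipping DRY_RUN=0:

```shell
#!/bin/bash
# Batch-provisioning sketch: feed provision-vm.sh from a host list of
# "vmid name ip" lines. DRY_RUN=1 (the default here) prints each
# command instead of running it.
set -euo pipefail
TEMPLATE_ID=9000
DRY_RUN="${DRY_RUN:-1}"

while read -r vmid name ip; do
  if [ -z "$vmid" ]; then continue; fi   # skip blank lines
  cmd="./provision-vm.sh $TEMPLATE_ID $vmid $name $ip"
  if [ "$DRY_RUN" = 1 ]; then
    echo "DRY RUN: $cmd"
  else
    $cmd
  fi
done <<'EOF'
110 k8s-node-1 192.168.1.110/24
111 k8s-node-2 192.168.1.111/24
112 k8s-node-3 192.168.1.112/24
EOF
```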
Using the Proxmox API
Everything available through qm is also available through the Proxmox REST API, which means you can integrate VM provisioning into any automation system. The pvesh command provides a convenient CLI interface to the API.
Clone a template via the API:
pvesh create /nodes/pve/qemu/9000/clone \
--newid 120 --name api-test-vm --full 1
Set cloud-init parameters:
pvesh set /nodes/pve/qemu/120/config \
--ipconfig0 "ip=192.168.1.120/24,gw=192.168.1.1" \
--nameserver "192.168.1.1"
Start the VM:
pvesh create /nodes/pve/qemu/120/status/start
The API is particularly useful when integrating with tools like Terraform (via the Proxmox provider), Ansible, or custom dashboards. Every operation you can perform in the Proxmox web UI has an API equivalent.
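pvesh talks to this same REST API locally; from another machine you can call it over HTTPS with curl and an API token (created under Datacenter > Permissions > API Tokens in the web UI). The host, node, and token values below are placeholders, and the request is echoed rather than executed so the sketch can run without a Proxmox host in reach:

```shell
#!/bin/bash
# Cloning via the HTTP API with curl and an API token. Host and token
# are placeholders -- substitute your own. The request is echoed, not
# executed, so this sketch is safe to run anywhere.
PVE_HOST="pve.homelab.local:8006"
API_TOKEN="root@pam!automation=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

clone_request=(curl -k -s -X POST \
  -H "Authorization: PVEAPIToken=$API_TOKEN" \
  --data-urlencode "newid=120" \
  --data-urlencode "name=api-test-vm" \
  --data-urlencode "full=1" \
  "https://$PVE_HOST/api2/json/nodes/pve/qemu/9000/clone")

printf '%q ' "${clone_request[@]}"; echo
# Drop the printf wrapper and run "${clone_request[@]}" to execute it.
```

API tokens can be scoped to specific privileges and revoked independently of your root password, which makes them the right credential for automation.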
Debugging Cloud-Init
When a VM doesn't configure itself correctly, cloud-init provides detailed logs. SSH into the VM (or use the Proxmox console) and check:
# Full cloud-init log
cat /var/log/cloud-init.log
# Output log (what you'd see on the console)
cat /var/log/cloud-init-output.log
# Cloud-init status
cloud-init status --long
# Re-run cloud-init (useful during debugging)
cloud-init clean --reboot
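The main log runs to thousands of DEBUG lines, so a small grep helper goes a long way. `ci_errors` below is a hypothetical helper of my own, demonstrated against a fixture log so the snippet runs anywhere; on a real VM, point it at /var/log/cloud-init.log:

```shell
# ci_errors (hypothetical helper): surface suspicious lines from a
# cloud-init log without paging through every DEBUG entry.
ci_errors() {
  grep -inE 'warn|error|traceback|fail' "$1" || echo "no problems found in $1"
}

# Fixture standing in for /var/log/cloud-init.log:
cat > /tmp/cloud-init-sample.log <<'EOF'
2025-01-01 10:00:01 - util.py[DEBUG]: Reading config from datasource
2025-01-01 10:00:02 - schema.py[WARNING]: Invalid cloud-config provided
2025-01-01 10:00:03 - main.py[DEBUG]: Ran 20 modules
EOF
ci_errors /tmp/cloud-init-sample.log
```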
Common issues and their solutions:
- VM gets no IP address: Check that the cloud-init drive is attached (qm config <vmid> should show ide2: local-lvm:vm-XXX-cloudinit). Verify the network bridge exists and has connectivity.
- SSH key not working: Make sure you passed the public key file (not the private key) to --sshkeys. Check that the file is in the right format (starts with ssh-ed25519 or ssh-rsa).
- Custom user-data not applied: Verify snippets are enabled on the storage. Check that the path in --cicustom matches the actual file location. Examine /var/log/cloud-init.log for YAML parsing errors.
- Disk not resized: The growpart module needs a cloud image that supports it. Most official cloud images do. Check lsblk; if the partition hasn't grown, run growpart /dev/sda 1 manually and resize the filesystem.
Best Practices
Use a dedicated VM ID range for templates. I use 9000-9099 for templates. This keeps them visually separated in the Proxmox UI and makes scripting easier (you know anything in that range is a template, not a running VM).
Keep templates minimal. Don't install application-specific software in the template. Use cloud-init user-data or a configuration management tool (Ansible, etc.) to install application software after cloning. The template should be a clean base OS with your SSH keys and basic utilities.
Update templates monthly. Cloud images are updated regularly with security patches. Download fresh images and rebuild your templates on a monthly schedule. This means new VMs start with current packages instead of needing 200+ MB of updates on first boot.
Use linked clones for ephemeral VMs. If you're spinning up VMs for testing and plan to destroy them within hours or days, linked clones are faster and save disk space. Use full clones for long-lived VMs that need to be independent of the template.
Store your cloud-init configs in version control. Your user-data YAML files, provisioning scripts, and template-building scripts should live in a git repo. This makes your homelab infrastructure reproducible — if your Proxmox host dies, you can rebuild all your templates from the repo.
Test changes on a clone before modifying the template. Clone your template, apply the change to the clone, verify it works, then rebuild the template. Modifying templates directly (after converting back with qm set <vmid> --template 0) risks breaking something for all future clones.
Tag your templates. Use Proxmox tags to identify template purpose and the source image version:
qm set 9000 --tags "template,ubuntu-2404,cloud-init"
This makes it easy to identify templates in the web UI and through the API.
Conclusion
Cloud-init templates transform VM provisioning from a tedious manual process into a single command. The initial setup — downloading cloud images, creating template VMs, writing your base cloud-init configuration — takes about 30 minutes. After that, every new VM in your homelab takes 30 seconds to deploy and arrives fully configured.
The combination of Proxmox's cloning capabilities, cloud-init's first-boot configuration, and a few shell scripts gives you the same VM provisioning workflow that cloud providers use, running entirely on your own hardware. Once you start using templates, manually installing VMs from ISOs will feel like burning CDs to install software.
