
iSCSI Storage for Your Homelab: Shared Block Storage Across VMs

Storage 2026-03-04 · 5 min read iscsi storage networking nas san proxmox truenas block-storage homelab
By HomeLab Starter Editorial Team. Home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.


Photo by Stan Hutter on Unsplash

NFS and SMB share files. iSCSI shares block devices — raw disk space that the client sees as a locally-attached drive. This distinction matters for homelab use cases: iSCSI enables Proxmox live migration without shared file systems, supports databases that need direct block access, and lets you build a homelab SAN (Storage Area Network) without enterprise hardware.

This guide covers setting up iSCSI on TrueNAS (as the target) and connecting Proxmox nodes (as initiators).

When to Use iSCSI vs. NFS

Use iSCSI when:

  - You run (or plan to run) a multi-node Proxmox cluster and want live migration
  - A workload, such as a database, needs direct block access rather than file access
  - You want to learn SAN concepts on homelab hardware

Use NFS when:

  - You have a single node and just need simple file-level sharing
  - Multiple clients must access the same files concurrently
  - You want to browse and back up your storage as ordinary files

For most single-node homelabs, NFS is sufficient. iSCSI becomes compelling when you add a second Proxmox node and want live migration.

Setting Up TrueNAS as an iSCSI Target

TrueNAS SCALE has a built-in iSCSI wizard that handles most of the complexity.

Create a ZFS Dataset or zvol

First, decide what you're sharing:

zvol (recommended for VMs): A virtual block device backed by ZFS. VMs see it as a raw disk. Best performance for VM storage.

Extent from file: A file within a dataset that behaves as a block device. Slightly worse performance but easier to manage alongside regular files.

For VM storage, create a zvol:

  1. Storage → Add Zvol
  2. Name: vm-storage
  3. Size: However much space you want to allocate (e.g., 500G)
  4. Block size: 16K (good default for VM workloads)
  5. Enable compression if desired

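If you prefer the command line, the same zvol can be created from a TrueNAS shell. This is a sketch of the wizard steps above; the pool name tank is an assumption, so substitute your own pool.

```shell
# Create a 500G zvol with a 16K block size for VM storage.
# Pool name "tank" is an assumption -- replace with your pool.
zfs create -V 500G \
    -o volblocksize=16K \
    -o compression=lz4 \
    tank/vm-storage

# Confirm the zvol exists and check its properties
zfs get volsize,volblocksize,compression tank/vm-storage
```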
Configure iSCSI in TrueNAS

Sharing → iSCSI → Wizard:

  1. Create or choose block device: Select your zvol
  2. Portal: Create a new portal on your storage network IP (not your management IP)
  3. Initiators: For testing, allow all. For production, restrict to your Proxmox nodes' IPs
  4. Auth: None for initial setup, CHAP for production
  5. Review and save

TrueNAS creates the Target, Portal, and associated configuration automatically.

Note your IQN (iSCSI Qualified Name) — it looks like:

iqn.2005-10.org.freenas.ctl:vm-storage

You'll need this on the initiator side.

Configuring a Dedicated Storage Network

iSCSI is chatty — it generates constant traffic for storage I/O. Running it on your primary LAN works but competes with user traffic.

Best practice: Use a dedicated storage network (VLAN or separate switch) for iSCSI.

In a typical homelab setup:

  - A dedicated VLAN (e.g., VLAN 100, subnet 10.0.100.0/24) carries only storage traffic
  - TrueNAS gets an IP on that VLAN (e.g., 10.0.100.20)
  - Each Proxmox node gets a second NIC or a VLAN interface on the same subnet (e.g., 10.0.100.11)

Configure TrueNAS to listen for iSCSI connections only on the storage VLAN IP. Configure Proxmox nodes with a second NIC (or VLAN tag) on the storage network.

Even on 1GbE, a dedicated storage network prevents iSCSI I/O from affecting your network experience.
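If you use a VLAN tag rather than a physical second NIC, the Proxmox side might look like the following sketch. The interface name eth1 and VLAN ID 100 are assumptions for this example.

```shell
# /etc/network/interfaces fragment on a Proxmox node (VLAN-tagged variant).
auto eth1.100
iface eth1.100 inet static
    address 10.0.100.11/24
    mtu 9000
    # No gateway line: storage traffic never leaves this subnet
```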


Connecting Proxmox as an iSCSI Initiator

Install and Enable the iSCSI Initiator

Proxmox uses the Linux open-iscsi package:

apt install open-iscsi
systemctl enable --now iscsid

Discover iSCSI Targets

iscsiadm -m discovery -t sendtargets -p 10.0.100.20

Replace 10.0.100.20 with your TrueNAS storage network IP. This should output your target IQN.

Log In to the Target

iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vm-storage -p 10.0.100.20 --login

Verify the connection:

lsblk  # Should show the new iSCSI disk (e.g., /dev/sdb)
iscsiadm -m session  # Shows active sessions
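Depending on your distribution's defaults, a manually discovered node may not log back in after a reboot. One way to make the session persistent is to set the node's startup mode to automatic:

```shell
# Re-establish this session automatically at boot
iscsiadm -m node \
    -T iqn.2005-10.org.freenas.ctl:vm-storage \
    -p 10.0.100.20 \
    -o update -n node.startup -v automatic
```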

Add iSCSI Storage to Proxmox

In the Proxmox web interface:

  1. Datacenter → Storage → Add → iSCSI
  2. ID: truenas-iscsi (a label)
  3. Portal: 10.0.100.20
  4. Target: Select your IQN from the dropdown
  5. Content: Leave blank (iSCSI provides the raw device)

Then layer LVM on top of the raw iSCSI device:

  1. Datacenter → Storage → Add → LVM
  2. Base storage: truenas-iscsi
  3. Base volume: Select the iSCSI device
  4. Volume group name: vm-storage
  5. Content: Disk image, Container

This creates an LVM volume group on the iSCSI device, which Proxmox uses to allocate individual VM disks.
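The same two steps can be done from the Proxmox command line. This is a sketch: the device path /dev/sdb is an assumption, so check lsblk for your actual iSCSI disk first.

```shell
# Register the iSCSI target as a Proxmox storage
pvesm add iscsi truenas-iscsi \
    --portal 10.0.100.20 \
    --target iqn.2005-10.org.freenas.ctl:vm-storage

# Create the volume group on the iSCSI disk, then register it,
# marking it shared so all cluster nodes can allocate from it
pvcreate /dev/sdb
vgcreate vm-storage /dev/sdb
pvesm add lvm vm-storage --vgname vm-storage --shared 1
```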

Multi-Node Setup for Live Migration

For live migration between Proxmox nodes, both nodes need access to the same iSCSI target:

  1. Connect both Proxmox nodes to the storage network
  2. Follow the "Connect Proxmox as initiator" steps on both nodes
  3. Use the same iSCSI target IQN on both nodes
  4. Add the same LVM storage to both nodes in Proxmox

When Proxmox VMs are stored on shared iSCSI-backed LVM, live migration (moving a running VM between nodes) works because both nodes can access the disk simultaneously.
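Before relying on migration, it's worth confirming that both nodes really see the same LUN. The device name can differ between nodes, but the serial/WWN should match; /dev/sdb is an assumption here.

```shell
# Run on each node and compare: matching SERIAL/WWN values mean
# both nodes are attached to the same iSCSI LUN
lsblk -o NAME,SIZE,SERIAL,WWN /dev/sdb
```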

Important: Shared block storage is only safe through cluster-aware layers. Proxmox's cluster-wide locking coordinates LVM operations across nodes, but never place an LVM-thin pool or a regular (non-cluster) filesystem directly on a shared iSCSI LUN; simultaneous writes from two nodes will corrupt it.

Performance Tuning

Jumbo frames: Enable MTU 9000 on your storage network for better iSCSI throughput. Set it consistently on TrueNAS, your switch (if applicable), and your Proxmox network interfaces.

# On Proxmox, in /etc/network/interfaces
iface eth1 inet static
    address 10.0.100.11/24
    mtu 9000

Multiple sessions: open-iscsi can open more than one session to the same target, which helps on hosts with multiple NICs or high throughput requirements. Note that the extra sessions only pay off if you also aggregate the paths with dm-multipath:

iscsiadm -m node -T <IQN> -o update -n node.session.nr_sessions -v 2

TrueNAS block size: Match the zvol block size to your workload. VMs typically benefit from 8K-16K. Databases prefer 4K.

Monitoring iSCSI Health

Check active sessions:

iscsiadm -m session -P 3

Monitor disk I/O on Proxmox:

iostat -x 1 /dev/sdb  # Replace with your iSCSI disk

TrueNAS → Reporting → Disk shows throughput and IOPS on the storage side.

Troubleshooting Common Issues

Target not discovered: Verify firewall allows port 3260/tcp from initiator to target IP. Check TrueNAS is listening on the storage VLAN IP specifically.

Login successful but device not visible: Run iscsiadm -m node --rescan and check dmesg for error messages.

Slow performance: Check jumbo frames are enabled and consistent. Verify no packet fragmentation with ping -M do -s 8972 10.0.100.20.

Connection drops: Check network stability. iSCSI is sensitive to packet loss. Consider enabling iSCSI error recovery settings in /etc/iscsi/iscsid.conf.
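The relevant knobs live in /etc/iscsi/iscsid.conf. The values below are typical open-iscsi defaults, shown here so you know what to look for; tune them for your network and restart iscsid afterwards.

```shell
# /etc/iscsi/iscsid.conf -- timeout settings worth reviewing

# How long to queue I/O for a dropped connection before failing it
# up the stack (lower this if you run multipath and want fast failover)
node.session.timeo.replacement_timeout = 120

# Periodic NOP-Out pings that detect dead connections
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
```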

The Payoff

For a single-node homelab, iSCSI adds complexity without obvious benefit — NFS is simpler and sufficient. The value appears when you add nodes: shared iSCSI storage is what enables a Proxmox cluster to live-migrate VMs and maintain high availability. It transforms a collection of independent servers into a coordinated cluster.

If you're building toward a multi-node homelab or want to learn SAN concepts in a low-stakes environment, iSCSI with TrueNAS and Proxmox is an excellent combination.

Get free weekly tips in your inbox. Subscribe to HomeLab Starter