iSCSI Storage for Your Homelab: Shared Block Storage Across VMs
NFS and SMB share files. iSCSI shares block devices — raw disk space that the client sees as a locally-attached drive. This distinction matters for homelab use cases: iSCSI enables Proxmox live migration without shared file systems, supports databases that need direct block access, and lets you build a homelab SAN (Storage Area Network) without enterprise hardware.
This guide covers setting up iSCSI on TrueNAS (as the target) and connecting Proxmox nodes (as initiators).
When to Use iSCSI vs. NFS
Use iSCSI when:
- You need live VM migration between Proxmox nodes (requires shared storage)
- Running databases that perform better on block storage than NFS
- Building clustered storage where multiple servers need access to the same raw disk
- You want VM disks stored remotely but accessible with near-native performance
Use NFS when:
- Sharing general files and media across the network
- Storing VM disks for single-node Proxmox (NFS works fine here)
- You need easy filesystem-level access (browse, copy, modify files directly)
- Simplicity matters more than performance
For most single-node homelabs, NFS is sufficient. iSCSI becomes compelling when you add a second Proxmox node and want live migration.
Setting Up TrueNAS as an iSCSI Target
TrueNAS SCALE has a built-in iSCSI wizard that handles most of the complexity.
Create a ZFS Dataset or zvol
First, decide what you're sharing:
zvol (recommended for VMs): A virtual block device backed by ZFS. VMs see it as a raw disk. Best performance for VM storage.
Extent from file: A file within a dataset that behaves as a block device. Slightly worse performance but easier to manage alongside regular files.
For VM storage, create a zvol:
- Storage → Add Zvol
- Name: vm-storage
- Size: however much space you want to allocate (e.g., 500G)
- Block size: 16K (good default for VM workloads)
- Enable compression if desired
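If you prefer the shell, the rough CLI equivalent of those GUI steps is a single zfs command. The pool name tank below is an assumption — substitute your own pool:

```shell
# Sketch of the GUI steps above, run from the TrueNAS shell.
# "tank" is a placeholder pool name; adjust to match your system.
zfs create -V 500G \
    -o volblocksize=16K \
    -o compression=lz4 \
    tank/vm-storage
```

Note that `-V` (sparse with `-s -V`) is what makes this a zvol rather than a regular dataset.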
Configure iSCSI in TrueNAS
Sharing → iSCSI → Wizard:
- Create or choose block device: Select your zvol
- Portal: Create a new portal on your storage network IP (not your management IP)
- Initiators: For testing, allow all. For production, restrict to your Proxmox nodes' IPs
- Auth: None for initial setup, CHAP for production
- Review and save
TrueNAS creates the Target, Portal, and associated configuration automatically.
Note your IQN (iSCSI Qualified Name) — it looks like:
iqn.2005-10.org.freenas.ctl:vm-storage
You'll need this on the initiator side.
Configuring a Dedicated Storage Network
iSCSI is chatty — every read and write your VMs perform crosses the network. Running it on your primary LAN works, but it competes with user traffic.
Best practice: Use a dedicated storage network (VLAN or separate switch) for iSCSI.
In a typical homelab setup:
- Management/LAN: 192.168.1.0/24 or similar
- Storage VLAN: 10.0.100.0/24 (iSCSI, NFS, storage replication)
Configure TrueNAS to listen for iSCSI connections only on the storage VLAN IP. Configure Proxmox nodes with a second NIC (or VLAN tag) on the storage network.
Even on 1GbE, a dedicated storage network prevents iSCSI I/O from affecting your network experience.
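On the Proxmox side, the storage interface can be sketched in /etc/network/interfaces. The NIC names and addresses below are assumptions — adjust them to your hardware:

```
# /etc/network/interfaces on a Proxmox node — a sketch; NIC names/IPs are examples.

# Option A: dedicated second NIC on the storage network
auto eth1
iface eth1 inet static
    address 10.0.100.11/24

# Option B: VLAN tag 100 on the existing NIC instead of a second port
auto eth0.100
iface eth0.100 inet static
    address 10.0.100.11/24
```

Apply the change with `ifreload -a` (Proxmox ships ifupdown2), and give each node a unique address on the storage subnet.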
Connecting Proxmox as an iSCSI Initiator
Install and Enable the iSCSI Initiator
Proxmox uses the Linux open-iscsi package:
apt install open-iscsi
systemctl enable --now iscsid
Discover iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.0.100.20
Replace 10.0.100.20 with your TrueNAS storage network IP. This should output your target IQN.
Log In to the Target
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vm-storage -p 10.0.100.20 --login
Verify the connection:
lsblk # Should show the new iSCSI disk (e.g., /dev/sdb)
iscsiadm -m session # Shows active sessions
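Proxmox re-establishes the session itself once you add the storage in the next step, but if you want the initiator to log back in at boot regardless, you can mark the node record automatic (using the example IQN and IP from above):

```shell
# Make the session persist across reboots (example IQN/IP from this guide)
iscsiadm -m node \
    -T iqn.2005-10.org.freenas.ctl:vm-storage \
    -p 10.0.100.20 \
    -o update -n node.startup -v automatic
```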
Add iSCSI Storage to Proxmox
In the Proxmox web interface:
- Datacenter → Storage → Add → iSCSI
- ID: truenas-iscsi (a label)
- Portal: 10.0.100.20
- Target: select your IQN from the dropdown
- Content: leave blank (iSCSI provides the raw device)
Then add a second LVM layer on top:
- Datacenter → Storage → Add → LVM
- Base storage: truenas-iscsi
- Base volume: select the iSCSI device
- Volume group name: vm-storage
- Content: Disk image, Container
This creates an LVM volume group on the iSCSI device, which Proxmox uses to allocate individual VM disks.
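Behind the scenes, those two "Add" steps write entries to /etc/pve/storage.cfg that look roughly like this. The base volume ID is device-specific — the one shown is a placeholder Proxmox fills in for you:

```
# /etc/pve/storage.cfg — approximate result of the two GUI steps above
iscsi: truenas-iscsi
        portal 10.0.100.20
        target iqn.2005-10.org.freenas.ctl:vm-storage
        content none

lvm: vm-storage
        vgname vm-storage
        base truenas-iscsi:0.0.0.scsi-XXXX   # placeholder; the real LUN ID is set for you
        content images,rootdir
        shared 1
```

The `shared 1` flag is what tells Proxmox every cluster node can reach this storage — it matters for the multi-node setup below.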
Multi-Node Setup for Live Migration
For live migration between Proxmox nodes, both nodes need access to the same iSCSI target:
- Connect both Proxmox nodes to the storage network
- Follow the "Connect Proxmox as initiator" steps on both nodes
- Use the same iSCSI target IQN on both nodes
- Add the same LVM storage to both nodes in Proxmox
When Proxmox VMs are stored on shared iSCSI-backed LVM, live migration (moving a running VM between nodes) works because both nodes can access the disk simultaneously.
Important: Use plain LVM (not LVM-thin) on shared iSCSI. Proxmox's cluster-wide locking coordinates LVM volume activation across nodes, but LVM-thin has no such protection — two nodes writing to the same thin pool will corrupt it.
Performance Tuning
Jumbo frames: Enable MTU 9000 on your storage network for better iSCSI throughput. Set it consistently on TrueNAS, your switch (if applicable), and your Proxmox network interfaces.
# On Proxmox, in /etc/network/interfaces
iface eth1 inet static
address 10.0.100.11/24
mtu 9000
Multiple sessions: open-iscsi can open more than one session to the same target, which helps on systems with multiple NICs or high-throughput requirements (pair this with dm-multipath so the kernel actually uses both paths):
iscsiadm -m node -T <IQN> -o update -n node.session.nr_sessions -v 2
TrueNAS block size: Match the zvol block size (volblocksize) to your workload — it can only be set at creation time, not changed later. VMs typically benefit from 8K-16K; databases doing small random I/O often prefer 4K.
Monitoring iSCSI Health
Check active sessions:
iscsiadm -m session -P 3
Monitor disk I/O on Proxmox:
iostat -x 1 /dev/sdb # Replace with your iSCSI disk
TrueNAS → Reporting → Disk shows throughput and IOPS on the storage side.
Troubleshooting Common Issues
Target not discovered: Verify firewall allows port 3260/tcp from initiator to target IP. Check TrueNAS is listening on the storage VLAN IP specifically.
Login successful but device not visible: Run iscsiadm -m node --rescan and check dmesg for error messages.
Slow performance: Check jumbo frames are enabled and consistent. Verify no packet fragmentation with ping -M do -s 8972 10.0.100.20.
Connection drops: Check network stability. iSCSI is sensitive to packet loss. Consider enabling iSCSI error recovery settings in /etc/iscsi/iscsid.conf.
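The relevant error-recovery knobs live in /etc/iscsi/iscsid.conf. These are genuine open-iscsi parameters, but the values shown are starting points to tune, not tested recommendations:

```
# /etc/iscsi/iscsid.conf — error recovery tuning (values shown are common defaults)

# How long to wait for a dead connection to recover before failing I/O upward:
node.session.timeo.replacement_timeout = 120

# NOP-Out pings detect a dead target sooner; interval and timeout in seconds:
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
```

Restart iscsid (or log out and back in to the target) for changes to take effect on existing sessions.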
The Payoff
For a single-node homelab, iSCSI adds complexity without obvious benefit — NFS is simpler and sufficient. The value appears when you add nodes: shared iSCSI storage is what enables a Proxmox cluster to live-migrate VMs and maintain high availability. It transforms a collection of independent servers into a coordinated cluster.
If you're building toward a multi-node homelab or want to learn SAN concepts in a low-stakes environment, iSCSI with TrueNAS and Proxmox is an excellent combination.
