NFS vs SMB vs iSCSI: Choosing the Right Storage Protocol
Every homelab with a NAS or file server faces the same question: how should other machines access the storage? The three main options — NFS, SMB, and iSCSI — each have distinct strengths, and picking the wrong one leads to frustration: slow VM performance, permission headaches, or unnecessary complexity.
This isn't a theoretical comparison. It's a practical guide to which protocol works best for common homelab workloads, how to set each one up, and when to mix and match.

The Quick Answer
If you want the short version:
- NFS — Best for Linux-to-Linux file sharing, VM datastores, and container persistent storage
- SMB — Best for mixed Windows/Mac/Linux environments and media sharing
- iSCSI — Best for VM disk images and any workload that needs raw block storage
Most homelabs use at least two of these. A typical setup: NFS for Proxmox VM storage and container data, SMB for the shared media library and documents accessible from any OS, and maybe iSCSI for a high-performance database VM.
NFS (Network File System)
NFS is the standard file sharing protocol in the Linux and Unix world. It's simple, fast, and has minimal overhead. NFS exports a directory from the server, and clients mount it as if it were a local filesystem.
Strengths
- Low overhead — NFS has less protocol complexity than SMB, which translates to lower CPU usage and latency
- Unix permissions — NFS maps Unix UIDs/GIDs natively. File ownership and permissions just work (assuming matching UIDs across machines)
- Excellent hypervisor support — Proxmox, ESXi, and KVM all support NFS datastores natively
- Simple configuration — A working NFS export is about 3 lines of config
Weaknesses
- Poor Windows support — Windows can mount NFS shares, but it's clunky and lacks proper integration
- UID/GID mapping headaches — If user IDs don't match between server and client, permissions get confusing. NFSv4 with ID mapping helps, but it adds complexity
- No built-in encryption — NFSv4 supports Kerberos for authentication and encryption, but the setup effort is rarely worth it in a homelab. NFS traffic is unencrypted by default
Basic NFS Setup
On the server:
sudo apt install -y nfs-kernel-server
# Create and export a directory
sudo mkdir -p /srv/nfs/data
echo "/srv/nfs/data 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra
sudo systemctl restart nfs-kernel-server
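To confirm the export is live before touching a client, you can list it from the server itself (showmount ships alongside the NFS packages):
showmount -e localhost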
On the client:
sudo apt install -y nfs-common
sudo mkdir -p /mnt/data
sudo mount -t nfs 192.168.1.50:/srv/nfs/data /mnt/data
For persistent mounts, add to /etc/fstab:
192.168.1.50:/srv/nfs/data /mnt/data nfs defaults,_netdev 0 0
NFS Performance
NFS performance depends heavily on your network. Over 1 GbE, expect 100-110 MB/s for sequential reads and writes. Over 10 GbE, NFS can push 1+ GB/s without breaking a sweat. Random I/O performance is limited by the underlying storage, not the protocol.
NFSv4.1+ supports session trunking (multiple network paths), which can aggregate bandwidth across multiple NICs.
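If session trunking is more plumbing than you need, the Linux client's nconnect mount option (kernel 5.3+) opens multiple TCP connections over a single path and often helps on fast networks. A minimal sketch, reusing the example export from above:
sudo mount -t nfs -o nconnect=4 192.168.1.50:/srv/nfs/data /mnt/data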
SMB (Server Message Block)
SMB — also called CIFS in its older incarnation — is Microsoft's file sharing protocol. It's the default for Windows file sharing and is well-supported on macOS and Linux through Samba. If you need a share that works on every operating system without client-side configuration beyond "connect to network drive," SMB is your answer.
Strengths
- Universal compatibility — Works natively on Windows, macOS, and Linux. Android and iOS apps support it too.
- Good for media — Plex, Jellyfin, Kodi, and most media players handle SMB shares well
- Access control — Username/password authentication built in, no UID matching required
- Discovery — SMB shares show up in Windows Network and macOS Finder automatically
Weaknesses
- Higher overhead than NFS — SMB is a chattier protocol. More round trips per operation means higher latency, especially for small files
- Slower on Linux — The Samba server adds CPU overhead compared to NFS kernel server
- Permission model mismatch — Mapping Windows ACLs to Unix permissions is an ongoing pain point in Samba
Basic SMB Setup
On the server (using Samba):
sudo apt install -y samba
# Create a share directory (the setgid bit makes new files inherit the directory's group)
sudo mkdir -p /srv/samba/shared
sudo chmod 2775 /srv/samba/shared
Add to /etc/samba/smb.conf:
[shared]
path = /srv/samba/shared
browseable = yes
writable = yes
valid users = @smbusers
create mask = 0664
directory mask = 2775
Create a Samba user:
sudo groupadd smbusers
sudo useradd -M -G smbusers smbuser
# Give the share directory to the group so members can actually write to it
sudo chgrp smbusers /srv/samba/shared
sudo smbpasswd -a smbuser
sudo systemctl restart smbd
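Samba also ships testparm, which validates smb.conf and prints the effective configuration; it's worth running after any config edit:
testparm -s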
On Windows, open File Explorer and type \\192.168.1.50\shared. On Linux:
sudo mkdir -p /mnt/shared
sudo mount -t cifs //192.168.1.50/shared /mnt/shared -o username=smbuser,password=yourpass
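Passing the password on the command line leaves it in your shell history. A safer pattern for permanent mounts is a root-only credentials file plus an /etc/fstab entry (the file path here is just an example):
printf 'username=smbuser\npassword=yourpass\n' | sudo tee /etc/cifs-credentials
sudo chmod 600 /etc/cifs-credentials
# Then in /etc/fstab:
//192.168.1.50/shared /mnt/shared cifs credentials=/etc/cifs-credentials,_netdev 0 0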
SMB Performance
SMB3 (the current version) is significantly faster than older versions. Over 1 GbE, expect 90-105 MB/s sequential, slightly below NFS due to protocol overhead. SMB3 supports multichannel (aggregating multiple network connections), which can improve throughput on multi-NIC setups.
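Multichannel has to be enabled on the Samba side and requested by the Linux client; Windows clients negotiate it automatically once the server advertises it. A sketch, assuming a reasonably recent Samba (4.4+) and a kernel with cifs multichannel support (roughly 5.5+):
# In the [global] section of /etc/samba/smb.conf:
server multi channel support = yes
# Linux client mount, requesting up to 4 channels:
sudo mount -t cifs //192.168.1.50/shared /mnt/shared -o credentials=/etc/cifs-credentials,multichannel,max_channels=4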
For small file operations (thousands of tiny files), SMB is noticeably slower than NFS due to its chattier nature. This matters for things like npm packages or git repositories.
iSCSI (Internet Small Computer Systems Interface)
iSCSI is fundamentally different from NFS and SMB. Instead of sharing files, it shares a raw block device over the network. The client (initiator) sees what looks like a local disk — it can partition it, format it with any filesystem, and use it exactly like a physical drive. The server (target) just serves blocks.
Strengths
- Best performance for VMs — Because the hypervisor sees a raw block device, it can use native disk caching and I/O scheduling. No filesystem translation layer
- Filesystem agnostic — Format it with ext4, XFS, NTFS, ZFS — whatever the client needs
- Excellent for databases — Databases love raw block devices. No NFS locking overhead, no SMB chattiness
- Multipathing — iSCSI supports multiple network paths for redundancy and bandwidth aggregation
Weaknesses
- Not shareable — A basic iSCSI LUN should only be mounted by one client at a time (unless using a cluster filesystem). You can't have two VMs writing to the same iSCSI target without special handling
- More complex — iSCSI involves targets, LUNs, initiators, and IQNs. More concepts to learn than "export a directory"
- Overkill for file sharing — You wouldn't use iSCSI to share your movie collection. It's block storage, not file storage
Basic iSCSI Setup
On the server (target), using targetcli:
sudo apt install -y targetcli-fb
sudo targetcli
Inside the targetcli shell:
# Back the LUN with a real block device (a file-backed backstore also works)
/backstores/block create disk1 /dev/sdb
# Create the target, named by an IQN (iSCSI Qualified Name)
/iscsi create iqn.2024-01.local.homelab:storage
# Expose the backstore as a LUN on the default target portal group
/iscsi/iqn.2024-01.local.homelab:storage/tpg1/luns create /backstores/block/disk1
# Whitelist the client's initiator IQN
/iscsi/iqn.2024-01.local.homelab:storage/tpg1/acls create iqn.2024-01.local.homelab:client1
exit
On the client (initiator):
sudo apt install -y open-iscsi
# Set the initiator name
echo "InitiatorName=iqn.2024-01.local.homelab:client1" | sudo tee /etc/iscsi/initiatorname.iscsi
# Discover targets
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# Login to the target
sudo iscsiadm -m node --login
The iSCSI disk appears as a new block device (e.g., /dev/sdb). Format and mount it like any disk:
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/iscsi
sudo mount /dev/sdb /mnt/iscsi
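The session doesn't survive a reboot on its own. To reconnect automatically, mark the node for automatic startup and give /etc/fstab the _netdev hint so mounting waits for the network; a sketch reusing the example IQN and portal:
sudo iscsiadm -m node -T iqn.2024-01.local.homelab:storage -p 192.168.1.50 --op update -n node.startup -v automatic
# In /etc/fstab (prefer UUID=... from blkid, since /dev/sdb can move between boots):
/dev/sdb /mnt/iscsi ext4 defaults,_netdev 0 0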
iSCSI Performance
iSCSI has the lowest protocol overhead of the three. On 10 GbE, iSCSI can match local SSD performance for most workloads. On 1 GbE, it's limited by the network (as are NFS and SMB), but random I/O performance is better than NFS because the client has direct block-level caching.
For VM disk images, iSCSI typically shows 10-20% better random I/O performance compared to NFS, and significantly better than SMB.
When to Use What
VM Storage (Proxmox, ESXi)
Best: iSCSI or NFS. iSCSI gives the best raw performance. NFS is easier to manage and works well for most homelab VM workloads. Both are fully supported by Proxmox and ESXi. Don't use SMB for VM storage — it adds unnecessary overhead.
Container Persistent Volumes (Kubernetes, Docker)
Best: NFS. Kubernetes NFS provisioners are mature and simple. Docker supports NFS volumes natively. iSCSI works but is more complex to automate. SMB is possible but not ideal.
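Docker's built-in local volume driver can mount NFS directly, which keeps container data on the NAS without any bind-mount plumbing on the host. A minimal sketch, reusing the example export from the NFS section (the volume name is arbitrary):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw \
  --opt device=:/srv/nfs/data \
  nas-data
# Any container can then use it: docker run -v nas-data:/data ...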
Shared Media Library (Plex, Jellyfin)
Best: SMB or NFS. SMB if you have Windows machines that also access the library. NFS if everything is Linux. Media streaming is mostly sequential reads, so the two perform about the same in practice.
Shared Documents (Mixed OS)
Best: SMB. Windows and macOS support SMB natively with no extra software. NFS on Windows is painful. SMB's user authentication makes access control straightforward.
Backups
Best: NFS or SMB. Depends on your backup tool. Borgmatic and restic work well over NFS. Veeam prefers SMB. Both are fine for backup workloads.
Database Storage
Best: iSCSI. Databases generate heavy random I/O and benefit from direct block access. If iSCSI is too complex for your setup, NFS with sync mount options is acceptable.
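If you do put a database on NFS, mount with the sync and hard options so writes are committed to the server before the syscall returns and the client retries rather than erroring out on network blips. A one-line sketch, assuming a hypothetical /srv/nfs/db export and a Postgres data directory:
sudo mount -t nfs -o sync,hard 192.168.1.50:/srv/nfs/db /var/lib/postgresql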
Mixing Protocols
There's no rule that says you can only pick one. A well-configured NAS can export the same underlying storage through all three protocols simultaneously:
- iSCSI LUNs for Proxmox VM disks
- NFS exports for Kubernetes persistent volumes and Linux file sharing
- SMB shares for the family media library and document access from Windows laptops
TrueNAS, Unraid, and OpenMediaVault all support running NFS, SMB, and iSCSI targets from the same storage pool. The key is matching each workload to the protocol that suits it best, rather than forcing everything through one.
Choose based on what you're actually doing with the storage. Protocol religious wars are fun on Reddit, but in your homelab, pragmatism wins.