
Linux Software RAID with mdadm: Setup and Management

Storage · 2026-03-04 · 3 min read
Tags: mdadm, raid, linux, storage, homelab, disk redundancy, data protection
By the HomeLab Starter Editorial Team, home lab enthusiasts covering hardware setup, networking, and self-hosted services for home and small office environments.

Hardware RAID controllers add cost and introduce single points of failure (the controller itself). Linux software RAID via mdadm provides equivalent protection using the CPU, works with any drives, and is well-supported across distributions. For homelabs without ZFS, mdadm is the standard RAID solution.

Photo by Sam Szuchan on Unsplash

RAID Levels

| Level   | Description                     | Min Drives | Usable Space | Fault Tolerance    |
|---------|---------------------------------|------------|--------------|--------------------|
| RAID 0  | Striping (speed, no redundancy) | 2          | 100%         | 0 drives           |
| RAID 1  | Mirroring                       | 2          | 50%          | 1 drive            |
| RAID 5  | Striping + 1 parity             | 3          | 67-94%       | 1 drive            |
| RAID 6  | Striping + 2 parity             | 4          | 50-88%       | 2 drives           |
| RAID 10 | Mirrored stripes                | 4          | 50%          | 1 per mirror pair  |

Homelab recommendations: RAID 1 is the simplest choice for two drives. RAID 5 offers good capacity efficiency, but rebuilds on today's large drives are long and stressful, so RAID 6 is the safer pick for four or more drives since it survives a second failure mid-rebuild. RAID 10 trades capacity for better write performance.

RAID is not backup — it protects against drive failure, not file deletion, corruption, or ransomware.

Install mdadm

sudo apt install mdadm   # Debian/Ubuntu
sudo dnf install mdadm   # Fedora/RHEL

Create a RAID Array

Identify drives

lsblk
# or
fdisk -l

Important: Use drives without existing partitions. Wipe if needed:

wipefs -a /dev/sdb
wipefs -a /dev/sdc
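To confirm the drives are actually clean before creating the array, you can check that no filesystem or partition signatures remain (device names here are examples matching the ones above):

```shell
# FSTYPE should be empty and no child partitions should be listed
lsblk -f /dev/sdb /dev/sdc
```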

RAID 1 (mirror, 2 drives)

mdadm --create /dev/md0 \
  --level=1 \
  --raid-devices=2 \
  /dev/sdb /dev/sdc

RAID 5 (3 drives)

mdadm --create /dev/md0 \
  --level=5 \
  --raid-devices=3 \
  /dev/sdb /dev/sdc /dev/sdd

RAID 6 (4 drives)

mdadm --create /dev/md0 \
  --level=6 \
  --raid-devices=4 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

After creation, the array immediately begins syncing (building parity/mirrors). Check progress:

cat /proc/mdstat
# Shows sync progress: [=====>...........] resync = 28.1%

Full sync on large arrays takes hours. The array is usable during the initial sync, but it has no full redundancy until the sync completes.
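If the initial sync crawls (or conversely saturates your disks), the kernel's md rebuild speed limits can be tuned. These are global knobs that apply to all md arrays:

```shell
# Current limits, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the sync keeps pace even under light I/O load
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```

Changes made this way last until reboot; add them to /etc/sysctl.d/ to make them permanent.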

Format and Mount

# Create filesystem
mkfs.ext4 /dev/md0
# or for large arrays:
mkfs.xfs /dev/md0

# Create mount point
mkdir /mnt/data

# Mount
mount /dev/md0 /mnt/data

Persistent mount via /etc/fstab

# Get UUID
blkid /dev/md0

# Add to /etc/fstab:
UUID=your-uuid  /mnt/data  ext4  defaults  0  2
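One caution with the default fstab entry: if the array ever fails to assemble, systemd will drop the machine into emergency mode at boot. Adding the nofail option lets the system come up anyway, which is a common choice for data (non-root) arrays:

```shell
# /etc/fstab — UUID is a placeholder
UUID=your-uuid  /mnt/data  ext4  defaults,nofail  0  2

# Verify the entry without rebooting
mount -a
```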

Save Array Configuration

mdadm --detail --scan >> /etc/mdadm/mdadm.conf  # /etc/mdadm.conf on Fedora/RHEL
update-initramfs -u  # Debian/Ubuntu; on Fedora/RHEL run dracut -f instead

Monitor Array Health

# Current status
cat /proc/mdstat

# Detailed info
mdadm --detail /dev/md0

# Watch live
watch cat /proc/mdstat

The output from mdadm --detail shows the array state (clean, degraded, or resyncing), counts of active, working, and failed devices, and a per-drive list with each member's role and status.

Email Alerts

Configure mdadm to email on events:

# /etc/mdadm/mdadm.conf
MAILADDR you@example.com
# Test alert
mdadm --monitor --scan --test
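The --test invocation sends a single message; for ongoing alerts the mdadm monitor daemon must be running. On most systemd-based distributions it ships as mdmonitor.service (the unit name can vary slightly by distribution):

```shell
systemctl enable --now mdmonitor
systemctl status mdmonitor
```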

Simulating and Handling Drive Failure

Mark a drive as failed (for testing)

mdadm /dev/md0 --fail /dev/sdb

Remove failed drive

mdadm /dev/md0 --remove /dev/sdb

Add replacement drive

mdadm /dev/md0 --add /dev/sde

The array automatically begins rebuilding. Monitor with cat /proc/mdstat.
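You can skip the manual remove/add step entirely by keeping a hot spare attached: mdadm starts rebuilding onto the spare the moment a member fails. As an illustrative example, a RAID 5 created with one spare (device names are placeholders):

```shell
mdadm --create /dev/md0 \
  --level=5 \
  --raid-devices=3 \
  --spare-devices=1 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

Adding an extra drive to an existing healthy array with --add has the same effect: the surplus drive sits idle as a spare until it's needed.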

Growing an Array (Adding Drives)

Add a drive to RAID 5 (increasing from 3 to 4 drives):

# Add as spare first
mdadm /dev/md0 --add /dev/sde

# Grow the array
mdadm --grow /dev/md0 --raid-devices=4

# After reshape, resize the filesystem
resize2fs /dev/md0    # ext4
xfs_growfs /mnt/data  # xfs

Reshape takes time proportional to array size. The array is usable throughout.
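Growing also works in the other direction: replace each member with a larger drive (fail, remove, add, wait for the rebuild, repeat), then expand the array to use the new capacity:

```shell
# After every member has been replaced with a larger drive
mdadm --grow /dev/md0 --size=max

# Then grow the filesystem as above
resize2fs /dev/md0
```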

Scheduled Scrubs

RAID 5/6 arrays can develop inconsistencies. Schedule regular scrubs:

# Manual scrub
echo check > /sys/block/md0/md/sync_action

# Automated monthly scrub (add to cron)
0 3 1 * * root echo check > /sys/block/md0/md/sync_action

Or use the included scrub script:

# Debian includes this automatically:
ls /etc/cron.d/mdadm
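After a scrub finishes, the array's mismatch counter reports how many inconsistent blocks were found; a repair pass rewrites them from the parity or mirror data:

```shell
# 0 means the last check found no inconsistencies
cat /sys/block/md0/md/mismatch_cnt

# If nonzero, run a repair pass and re-check afterwards
echo repair > /sys/block/md0/md/sync_action
```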

mdadm vs ZFS

| Feature                  | mdadm   | ZFS                   |
|--------------------------|---------|-----------------------|
| Data integrity checksums | No      | Yes                   |
| Inline compression       | No      | Yes                   |
| Snapshots                | No      | Yes                   |
| Self-healing             | No      | Yes (with checksums)  |
| Memory overhead          | Minimal | Significant (ARC)     |
| Learning curve           | Low     | High                  |
| Filesystem flexibility   | Any     | ZFS only              |

For data integrity and advanced features, ZFS is superior. For simplicity, flexibility, and lower overhead, mdadm with ext4 or XFS is a solid choice.

mdadm is the right tool when you want RAID without ZFS's complexity and memory requirements, particularly on systems with limited RAM or when you want to keep using a familiar filesystem such as ext4 or XFS.
