
Unraid Data Recovery Service

Your Unraid server is down. The array will not start, the cache pool is unmountable, or a failed New Config scrambled your disk assignments. Unraid's architecture is different from traditional NAS RAID systems: each data disk has its own independent filesystem, parity is computed across all members and stored on a dedicated drive, and Docker/VM data often lives on a separate btrfs cache pool. We recover all of it. Free evaluation. No data = no charge.

Written by Louis Rossmann
Founder & Chief Technician
Updated March 2026
10 min read

How Unraid Storage Works and Why Recovery Is Different

Unraid does not stripe data across disks like RAID 5 or RAID 6. Each data disk holds a standalone XFS or btrfs filesystem. A dedicated parity drive stores XOR parity computed across all data disks, allowing any single disk to be reconstructed if it fails. This architecture makes Unraid recovery different from traditional RAID recovery in several ways.

Advantage for Recovery

Because each data disk has an independent filesystem, we can mount and read individual disks without reconstructing the entire array. If three out of four data disks are healthy, we can extract files from those three immediately while working on the failed member separately.

Complication for Recovery

Unraid splits files across disks based on allocation rules and share settings. A single share (e.g., "Media") can span multiple data disks with no single metadata index tying them together. Reconstructing a complete share requires reading every data disk and reassembling the directory tree from each disk's individual filesystem.

The parity drive itself contains no user files. It stores only XOR parity data. If parity is valid and a single data disk fails, the missing disk can be reconstructed bit-for-bit from the remaining data disks plus parity. Dual parity (available since Unraid 6.2) extends this to two simultaneous disk failures, analogous to RAID 6. If more disks fail than parity can cover, the individual data disks that still read remain directly recoverable, since each holds a standalone filesystem.
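The single-parity math above can be sketched in a few lines. This is an illustrative toy (short byte strings stand in for whole disks), not our production tooling:

```python
# Toy sketch of Unraid-style single-parity reconstruction.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_disks = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]

# Parity is the XOR of every data disk at the same offset.
parity = data_disks[0]
for d in data_disks[1:]:
    parity = xor_bytes(parity, d)

# Simulate losing disk 1, then rebuild it from survivors plus parity.
survivors = [data_disks[0], data_disks[2]]
rebuilt = parity
for d in survivors:
    rebuilt = xor_bytes(rebuilt, d)

assert rebuilt == data_disks[1]  # bit-for-bit identical
```

The same XOR runs sector by sector across entire disk images during a real reconstruction.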

Common Unraid Failure Scenarios

Unraid fails differently than traditional NAS devices. The most common scenarios involve cache pool corruption, parity invalidation after a New Config, multiple disk failures exceeding parity coverage, and flash drive corruption that prevents the array from starting.

  • Array will not start / "Too many missing disks": Unraid refuses to start the array when it detects more missing or unresponsive disks than parity can cover. This can happen after a power event damages multiple drives, after a backplane/controller failure causes drives to drop, or after a New Config with incorrect assignments. Do not force-start with missing members; this invalidates parity.
  • Cache pool unmountable: The btrfs cache pool (one or two SSDs in RAID-1) becomes unmountable after an SSD failure, power loss during a write, or btrfs metadata corruption. Docker containers, VMs, and any user shares set to "prefer cache" become inaccessible. The array data disks are unaffected, but operational data (Plex databases, Home Assistant configs, Nextcloud files) lives on the cache.
  • New Config with wrong assignments: New Config resets Unraid's disk assignment table. If a parity rebuild runs after disks are assigned to the wrong slots, the parity drive is overwritten with parity computed from the wrong order. The original parity data is destroyed, and any future disk failure cannot be reconstructed from parity. The individual data disk filesystems are unaffected since each is independent.
  • Flash drive failure: Unraid boots from a USB flash drive that stores the OS, license key, and array configuration. If the flash drive fails, the array cannot start, but all data remains on the data disks. We can read the data disks directly without needing the original flash drive configuration.
  • Parity check errors / unclean shutdown: An unclean shutdown (power loss, kernel panic) can leave XFS journals in a dirty state and invalidate parity sync. Running a parity check after an unclean shutdown is normal, but if it reports thousands of errors, the parity may have been corrupted. Do not blindly write parity corrections; contact us if the error count is abnormal.

Do not force-start an Unraid array with missing members. Force-starting writes zeroes in place of the missing disk's data during parity calculations, corrupting the parity drive and eliminating the ability to reconstruct the missing disk later.

How We Recover Data from a Failed Unraid Server

We follow an image-first, offline workflow. Every disk is cloned through a write-blocker before any analysis. Original drives are never modified. The key advantage of Unraid recovery is that healthy data disks can be read directly since each contains an independent XFS or btrfs filesystem.

  1. Free evaluation: We document the Unraid version, number of data disks, parity configuration (single or dual), cache pool setup (SSD count, btrfs RAID level), share allocation settings, and any prior recovery attempts or New Config history.
  2. Write-blocked imaging: Each data disk, parity disk, and cache SSD is imaged through a hardware write-blocker. HDDs are imaged with PC-3000 or DeepSpar using conservative retry profiles. Cache SSDs are imaged with PC-3000 SSD. Drives with mechanical failures (clicking, not spinning) receive head swaps on our clean bench before imaging.
  3. Direct filesystem extraction: For each healthy data disk image, we mount the XFS or btrfs filesystem directly and extract files. Unlike RAID recovery, there is no array reassembly needed for disks that read cleanly. Each disk's share directories are catalogued and merged into a unified recovery set.
  4. Parity reconstruction (if needed): If a data disk failed and parity is valid, we reconstruct the missing disk's contents by XOR-ing all remaining data disk images with the parity image. For dual parity, we use both P and Q parity computations to handle two missing members. This reconstruction runs entirely on images.
  5. Cache pool recovery: For corrupted btrfs cache pools, we reconstruct the btrfs superblock, chunk tree, and device tree from the SSD image. Docker appdata directories, VM disk images, and cached user share files are extracted from the reconstructed filesystem.
  6. Verification and delivery: Recovered data is merged across all source disks, verified against your priority file list, and copied to a target drive. Working copies are securely purged on request.
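Step 3's share reassembly amounts to a union of each disk's directory tree. A minimal sketch, with hypothetical disk names, paths, and sizes standing in for mounted disk images:

```python
# Sketch: rebuilding one share's tree from several per-disk filesystems.
# A real recovery walks each mounted image instead of these dicts.
from collections import defaultdict

per_disk_files = {
    "disk1": {"Media/Movies/a.mkv": 700, "Media/TV/s01e01.mkv": 350},
    "disk2": {"Media/Movies/b.mkv": 800, "Media/TV/s01e01.mkv": 350},
    "disk3": {"Media/TV/s01e02.mkv": 360},
}

merged = defaultdict(list)
for disk, files in per_disk_files.items():
    for path in files:
        merged[path].append(disk)

# The union of paths is the reassembled share. A path present on more
# than one disk is flagged for manual review (e.g. a stale copy left
# behind by an interrupted mover run).
duplicates = sorted(p for p, disks in merged.items() if len(disks) > 1)
print(sorted(merged))  # full share tree across all disks
print(duplicates)      # ['Media/TV/s01e01.mkv']
```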

How Much Does Unraid Recovery Cost?

Unraid recovery uses two-tiered pricing: a per-disk imaging fee based on each drive's condition, plus a $400 to $800 reconstruction fee covering filesystem extraction, parity computation, and share reassembly. If we recover nothing, you owe $0.

Logical/Firmware per Disk

$250 to $900

Data disks with XFS/btrfs corruption, firmware faults, or bad sectors requiring PC-3000 terminal access. Most Unraid members with logical failures fall in this range.

Mechanical per Disk

$1,200 to $1,500

Drives with clicking, beeping, or failed heads that require clean-bench donor transplants. 50% deposit required since donor parts are consumed during the procedure.

Array Reconstruction

$400 to $800

Covers parity computation, per-disk filesystem extraction, share reassembly across disks, and cache pool btrfs reconstruction. Cost scales with disk count and filesystem complexity.

Unraid servers with many data disks (8, 12, or more) involve more imaging time, but healthy disks that read without issues are imaged at the lowest tier. The per-disk architecture often makes Unraid recovery less expensive than traditional RAID recovery because healthy disks require no reconstruction at all.

No Data = No Charge. If we cannot recover usable data from your Unraid server, you owe nothing. Optional return shipping is the only potential cost on an unsuccessful case.

Unraid Parity Architecture: How It Works and Where It Breaks

Unraid computes parity using a bitwise XOR operation across all data disks. The result is stored on a dedicated parity drive. Dual parity adds a second drive using a Reed-Solomon-derived computation (similar to RAID 6's Q syndrome), protecting against two simultaneous disk failures.

  • Single parity: One parity drive protects against one data disk failure. The parity disk must be equal to or larger than the largest data disk in the array. Reconstruction is straightforward: XOR all surviving data disk sectors at the same offset to produce the missing disk's sectors.
  • Dual parity: Two parity drives protect against two simultaneous data disk failures. The second parity computation uses a different polynomial to produce independent parity data. Reconstruction of two missing disks requires solving a system of two equations (XOR + polynomial) for each sector offset.
  • Real-time write parity: Unraid computes parity in real time during writes using a read-modify-write cycle: read the old data sector from the target disk, read the old parity sector, XOR the old data, the new data, and the old parity together to produce the new parity, then write both the new data and the new parity. A power loss during this four-step cycle leaves parity out of sync for those sectors.
  • Parity validity: Parity is only valid when a full parity sync or check has completed without errors since the last unclean shutdown. Invalid parity means a failed data disk cannot be reconstructed from parity alone. However, the individual data disks still contain their standalone XFS or btrfs filesystems, so direct reads from surviving disks remain possible.
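The read-modify-write cycle can be verified with a toy model. The key identity is new_parity = old_parity XOR old_data XOR new_data, which updates parity without reading any other disk:

```python
# Toy model of Unraid's read-modify-write parity update for one sector.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

disks = [bytearray(b"\x0f"), bytearray(b"\xf0"), bytearray(b"\x33")]
parity = bytes(disks[0])
for d in disks[1:]:
    parity = xor(parity, d)

# Write new data to disk 0 without re-reading disks 1 and 2:
old_data = bytes(disks[0])
new_data = b"\xa5"
parity = xor(xor(parity, old_data), new_data)  # old_parity ^ old ^ new
disks[0][:] = new_data

# Parity is still the XOR of all data disks.
check = bytes(disks[0])
for d in disks[1:]:
    check = xor(check, d)
assert check == parity
```

A crash between the data write and the parity write is exactly what leaves those sectors out of sync.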

Cache Pool Recovery: Docker Appdata, VMs, and Cached Shares

Many Unraid users store their most operationally critical data on the cache pool rather than the array. Docker container configurations (Plex, Nextcloud, Home Assistant), VM disk images, and user shares configured with "prefer cache" all reside on the btrfs cache filesystem. When the cache fails, the array data may be intact but the services and databases are gone.

  • Btrfs RAID-1 cache pools: Unraid supports multi-device btrfs cache pools (and, since 6.9, multiple named pools), typically configured as RAID-1 across two SSDs. If one SSD fails, btrfs should continue operating on the surviving SSD. If both SSDs fail or btrfs metadata is corrupted across both devices, we image both SSDs and reconstruct the btrfs device/chunk/extent trees to locate and extract file data blocks.
  • Docker appdata recovery: Docker containers on Unraid store persistent data in /mnt/cache/appdata/ by default. This includes database files for Plex (SQLite media library), Home Assistant (configuration.yaml, recorder database), and Nextcloud (MySQL/MariaDB data directory). We extract these directories from the btrfs image so containers can be reattached to their original data.
  • VM image recovery: Virtual machine disk images (typically qcow2 or raw format) stored on the cache pool are extracted as complete files from the btrfs image. If the btrfs extent tree is damaged, we locate the VM image data blocks by scanning for qcow2 headers and file signatures within the raw SSD image.
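Signature scanning for qcow2 images reduces to checking block-aligned offsets for the qcow2 magic bytes. A simplified sketch on a synthetic image (real tooling also validates the rest of the header and carves out the file extents):

```python
# Sketch: scanning a raw SSD image for qcow2 headers when the btrfs
# extent tree is too damaged to walk.
QCOW2_MAGIC = b"QFI\xfb"  # 0x514649fb, the start of every qcow2 file

def find_qcow2_headers(image: bytes, align: int = 4096):
    """Yield offsets where a qcow2 header could begin.

    Filesystems allocate on block boundaries, so checking only
    aligned offsets keeps false positives down.
    """
    for off in range(0, len(image) - 4, align):
        if image[off:off + 4] == QCOW2_MAGIC:
            yield off

# Demo on a synthetic 16 KiB image with one header at block 2.
img = bytearray(4096 * 4)
img[8192:8196] = QCOW2_MAGIC
print(list(find_qcow2_headers(bytes(img))))  # [8192]
```

Raw-format VM images have no magic of their own, which is why an intact or reconstructed btrfs tree matters more for those.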

Recovering from a Failed or Incorrect New Config

New Config is Unraid's mechanism for resetting disk assignments. It clears the super.dat file on the USB flash drive, which maps serial numbers to array slots. The data on each disk is not touched. The danger comes after New Config: if disks are assigned to the wrong slots and a parity rebuild starts, the parity drive is overwritten with incorrect parity data.

  1. Before parity rebuild: If you ran New Config but have not started a parity rebuild, the data on every disk is intact and the old parity may still be valid. The correct approach: do not start the array with the "parity is already valid" checkbox unless you are certain the assignments match the original configuration. If you are unsure, power down and contact us.
  2. After parity rebuild on wrong assignments: The parity drive now contains parity computed from the wrong disk order. The original parity data is gone. However, every individual data disk still contains its original XFS or btrfs filesystem. We image each disk and extract files directly from the per-disk filesystems.
  3. Corrupted flash drive: If the USB flash drive itself is corrupted or unreadable, the array configuration is lost. We do not need the flash drive to recover data. Each data disk is self-contained. We image them, mount the individual filesystems, and reassemble shares from the per-disk directory structures.

XFS vs Btrfs: Per-Disk Filesystem Recovery on Unraid

Unraid lets users choose XFS (default) or btrfs for each individual data disk. The recovery approach differs for each filesystem type.

XFS (Default)

  • XFS uses allocation groups (AGs) that subdivide the filesystem into independent regions. Each AG has its own free space B+ tree and inode allocation. Corruption in one AG does not necessarily affect others.
  • The XFS log (journal) records metadata changes before committing them to disk. After a power loss, log replay restores the filesystem to a consistent state in most cases.
  • XFS does not natively support data checksumming. Silent bit rot on a data disk will not be detected by the filesystem. Unraid's parity check can detect the resulting mismatch, but only if parity is valid, and it cannot identify which disk holds the corrupted data.

Btrfs (Optional)

  • Btrfs is a copy-on-write filesystem with CRC32C checksums on data and metadata. This makes corruption detectable but recovery more complex: the B-tree metadata structure must be traversed to locate file data.
  • Single-device btrfs volumes on Unraid typically use the DUP metadata profile, storing two copies of each metadata block. This gives us a fallback if one metadata copy is damaged.
  • Btrfs snapshots (if enabled via plugins) create additional B-tree roots. If the current tree is damaged, snapshot trees may still reference intact data, enabling recovery of older file versions.

Both filesystem types are recoverable. XFS is generally simpler to reconstruct due to its well-documented allocation group structure. Btrfs offers better data integrity detection but requires more specialized tooling for tree reconstruction when metadata is damaged.
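The checksum btrfs stores is CRC-32C (the Castagnoli polynomial). A minimal bitwise sketch of the math follows; production code uses lookup tables or the CPU's dedicated crc32 instruction, but the result is identical:

```python
# Minimal bitwise CRC-32C (Castagnoli), the checksum btrfs records for
# each data and metadata block. 0x82F63B78 is the reflected polynomial.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the CRC-32C polynomial.
assert crc32c(b"123456789") == 0xE3069283

# A single changed byte changes the checksum, which is how btrfs
# detects silent corruption that XFS would return unnoticed.
assert crc32c(b"some file data") != crc32c(b"Some file data")
```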

Unraid Recovery FAQ

Is Unraid parity the same as RAID 5?
No. RAID 5 stripes data across all members with distributed parity. Unraid stores each file on a single disk using an independent XFS or btrfs filesystem. Parity is computed across all data disks and written to a dedicated parity drive. This means individual Unraid data disks are mountable and readable on their own, which makes partial recovery easier than RAID 5. The tradeoff: if the parity drive fails and a data disk also fails, you lose that data disk with no way to reconstruct it from the remaining members.
My cache pool is unmountable. Is my Docker data gone?
Not necessarily. Unraid cache pools use btrfs in a RAID-1 or single-disk configuration. If a cache SSD fails, the btrfs filesystem may become unmountable due to corrupted metadata trees. We image the cache SSD, reconstruct the btrfs chunk and device trees from the image, and extract Docker appdata directories, VM images, and any user shares configured to use the cache. If the SSD has NAND-level failures, we use PC-3000 SSD to access the flash directly.
I ran New Config and my array assignments are wrong. Can you recover?
Usually yes. New Config clears the disk assignment table in Unraid's flash drive configuration but does not erase the data on the individual disks. The files remain on each XFS or btrfs partition. We image every disk, identify each filesystem's content, and extract files directly. If parity was invalidated by a New Config with a subsequent parity rebuild on wrong assignments, the old parity data is gone, but the per-disk data is still intact.
One data disk failed and I have a valid parity drive. Do I need professional recovery?
If your parity is valid and only one disk failed, Unraid can rebuild the missing disk's contents from the remaining data disks plus parity. This works if the remaining disks and parity drive read without errors. If the failed disk has partial reads (bad sectors, clicking, firmware faults), the rebuild will produce gaps. In that case, professional imaging of the failed disk with PC-3000 or DeepSpar fills in the missing sectors before the rebuild, giving you a complete reconstruction.
My Unraid array is encrypted. Can you still recover data?
Unraid uses LUKS encryption on individual data disks. If you have the passphrase or keyfile, we can decrypt each disk after imaging. Without the passphrase, the data cannot be recovered by anyone. Check your Unraid flash drive for stored keyfiles before shipping your drives.
How much does Unraid recovery cost?
Unraid recovery uses the same two-tiered pricing as our other NAS services: a per-disk imaging fee based on each drive's condition ($250 to $900 for logical/firmware issues, $1,200 to $1,500 for head swaps), plus a $400 to $800 array reconstruction fee. Cache pool SSD recovery is priced separately based on SSD complexity. If we recover nothing, you owe $0.

Unraid server down? Start a free evaluation.

Ship your drives or walk into our Austin lab. No data = no charge.