What "Volume Crashed" Means at the Linux Level
Synology DSM runs on Linux and uses mdadm for software RAID management. "Volume Crashed" means the mdadm array has entered an inactive or failed state. It is not a DSM UI glitch; it reflects a real failure in the underlying RAID layer.
1. DSM creates Linux md (multiple device) arrays using mdadm. Each storage pool corresponds to one or more md devices (/dev/md0, /dev/md1, etc.).
2. Each drive in the array carries an mdadm superblock containing the array UUID, layout, chunk size, and device role.
3. When enough member drives fail, disconnect, or report I/O errors, mdadm marks the array as inactive. DSM reads this state and displays "Volume Crashed."
4. The drives themselves are usually still readable individually. The RAID metadata binding them into a single volume is what has broken.
Example: A DS920+ with four 4TB drives in SHR-1. Drive 3 develops bad sectors over several weeks. DSM marks the volume as degraded. Before the admin notices, drive 1 also reports read errors during a scheduled data scrub. mdadm cannot maintain the array with two faulty members in a single-parity configuration. It marks the array inactive, and DSM shows "Volume Crashed."
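If you still have SSH access to the NAS, you can read the same state DSM is reporting directly from the md layer. A minimal read-only sketch; the device name /dev/md2 is a typical first data array but not guaranteed, so match it against the /proc/mdstat output:

```bash
# Read-only checks of the md layer DSM is reporting on; neither command writes to the drives.
cat /proc/mdstat               # lists every md array and its member status, e.g. [UU_U] or "inactive"
sudo mdadm --detail /dev/md2   # /dev/md2 is typically the first data array; adjust to what /proc/mdstat shows
```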
SHR Architecture and mdadm Underneath
Synology Hybrid RAID (SHR) is not a custom RAID implementation. It is a partition layout that creates standard mdadm RAID arrays across partitions of different sizes, allowing mixed-capacity drives to share one storage pool.
1. SHR partitions each drive into slices sized to match the smallest drive in the pool.
2. Each slice group forms a standard mdadm RAID 5 array (for SHR-1) or RAID 6 array (for SHR-2).
3. Leftover capacity on larger drives forms additional mdadm arrays (often RAID 1 pairs) to use the extra space.
4. All md arrays are combined into a single LVM volume group, and the logical volume is formatted with Btrfs or EXT4.
5. A "Volume Crashed" error means at least one of these md arrays has failed, which takes the entire LVM volume offline.
Example: A DS1621+ with two 8TB and two 4TB drives in SHR-1. DSM creates: md2 as RAID 5 across four 4TB partitions, and md3 as RAID 1 across the extra 4TB from each 8TB drive. Both are joined in a single LVM volume group. If md2 fails, the entire volume crashes even though md3 is healthy.
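You can walk this stack yourself over SSH with read-only commands. A minimal sketch, assuming the LVM userspace tools are available on your DSM build (on some builds they must be invoked as "lvm pvs" and so on) and that the volume group is named vg1 (names vary):

```bash
# Walk the SHR stack from the bottom up; every command here is read-only.
cat /proc/mdstat                      # md2, md3, ... : the per-slice RAID arrays SHR created
sudo pvs                              # each md array appears as an LVM physical volume
sudo vgs                              # the storage pool's volume group (often vg1 or vg1000)
sudo lvs                              # the logical volume that carries the Btrfs or EXT4 filesystem
sudo lsblk -o NAME,SIZE,TYPE,FSTYPE   # the whole stack in one tree
```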
Btrfs vs EXT4: Filesystem Recovery Differences
Synology supports two filesystems: Btrfs (copy-on-write, with snapshots and checksumming) and EXT4 (traditional journaled filesystem). The filesystem type affects which recovery tools work and which failure modes are possible.
Btrfs
- Copy-on-write: data is never overwritten in place. Metadata and data checksums detect silent corruption.
- Snapshots may preserve earlier file versions even after corruption of the latest copy.
- Recovery tools (btrfs restore, btrfs check) are Btrfs-specific. Standard undelete tools do not understand COW metadata.
EXT4
- Journal-based: metadata writes are journaled but data may not be (depends on mount options).
- No built-in checksumming. Silent corruption from a RAID parity mismatch is not detected at the filesystem level.
- Standard Linux recovery tools (e2fsck, debugfs, extundelete) are well-documented and widely available.
Example: A Btrfs volume on SHR-1 crashes after a power loss. The mdadm array reassembles, but Btrfs refuses to mount because the filesystem tree root checksum does not match. Running "btrfs check --repair" can fix metadata inconsistencies but may also delete files whose metadata cannot be validated. On EXT4, e2fsck would replay the journal and reconnect orphaned inodes to lost+found; the risk profile is different.
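Both filesystems offer non-destructive first passes that report damage without changing anything, which is where any assessment should start. A minimal sketch, assuming the array has already been reassembled read-only and the logical volume appears as /dev/vg1/volume_1 (an assumed name; check lvs for yours):

```bash
# Btrfs: report errors without repairing anything (check runs read-only unless --repair is given).
sudo btrfs check --readonly /dev/vg1/volume_1
# Btrfs: dry-run restore; lists what could be pulled out. The target directory is required
# by the command but nothing is written to it while -D is set.
sudo btrfs restore -D -v /dev/vg1/volume_1 /mnt/scratch
# EXT4: open read-only and answer "no" to every proposed fix.
sudo e2fsck -n /dev/vg1/volume_1
```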
Using photorec and testdisk on Crashed Volumes
photorec and testdisk are open-source tools that scan raw block devices for file signatures (file carving). They can recover files from damaged filesystems, but they carry real risks when run against live or degraded arrays.
1. photorec scans raw sectors for known file headers (JPEG, PDF, DOCX, etc.) and extracts files regardless of filesystem state. It does not preserve filenames or directory structure.
2. testdisk analyzes partition tables and can sometimes rebuild a damaged partition map or recover a deleted partition.
3. On Btrfs, file carving with photorec is less effective because COW scatters file extents across the device. Large files are often fragmented in ways that photorec cannot reconstruct.
Image first, scan second. Running recovery tools directly on a degraded array triggers additional reads that stress failing drives and can provoke mdadm into resync attempts; if any filesystem gets mounted read-write along the way, the kernel may also issue TRIM commands on SSD-based arrays. For irreplaceable data, create write-blocked images of every drive using ddrescue before running photorec, testdisk, or any other scanning tool. Work from the images, not the source drives.
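A minimal image-first sketch, assuming /dev/sdb is one member drive attached behind a write-blocker and /mnt/dest is a separate, larger destination disk (both names are placeholders; confirm with lsblk before running anything):

```bash
# Image the drive with ddrescue; the map file lets an interrupted run resume where it stopped.
sudo ddrescue -d -r3 /dev/sdb /mnt/dest/bay1.img /mnt/dest/bay1.map   # -d: direct I/O, -r3: three retry passes
# Repeat for every bay, then point the carving tools at the images, never at the source drives.
sudo photorec /d /mnt/dest/carved /mnt/dest/bay1.img   # /d sets the output directory
sudo testdisk /log /mnt/dest/bay1.img                  # inspect the image's partition table
```

On striped arrays, carving a single member image mainly recovers files smaller than the chunk size; for better results, carve the array reassembled from the images, as described in the diagnostic steps below.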
Example: An admin runs photorec directly on /dev/md2 (the data array) of a crashed 4-drive SHR array. The scan issues sequential reads across all surviving drives. Drive 2, which was already reporting SMART warnings, develops additional bad sectors under the sustained read load. mdadm kicks drive 2 from the array entirely. What was a single-drive degraded failure is now a double-drive failure.
Why Reinstalling DSM Destroys Your Data
When Synology DSM prompts you to reinstall or migrate after a volume crash, it is preparing to rewrite the system partition on every drive. While user data on partition 3 remains untouched, reinstalling the OS can alter partition tables and complicate the recovery of corrupted LVM metadata.
1. Each Synology drive contains a small system partition (partition 1) holding the DSM operating system, a swap partition (partition 2), and one or more data partitions (partition 3 and up) holding the RAID members.
2. The DSM installer rewrites partition 1 (md0, system) and partition 2 (md1, swap). A standard Mode 2 reinstall does not directly touch user data on partition 3+ (md2+), but partition table changes can complicate reassembly.
3. Without valid mdadm superblocks, the array cannot be automatically reassembled. Manual reconstruction requires knowing the exact RAID level, chunk size, layout, and drive order: information that was stored in the superblocks.
4. Moving drives to a new Synology unit carries the same risk. The new unit's DSM installer may treat the drives as uninitialized if it cannot read the existing RAID metadata.
Example: A DS918+ shows "Volume Crashed." The admin removes the drives and inserts them into a new DS920+. The DS920+ boots and offers two options: "Migrate" or "Install fresh." The admin selects Migrate. DSM rewrites the system partitions and attempts to import the md arrays. The import fails because the array metadata was damaged in the original crash. DSM now offers only "Install fresh," which will destroy the remaining data partitions. The admin is stuck. Synology NAS data recovery at this stage requires imaging the drives before any further DSM operations.
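Before any installer touches the drives, it is worth capturing the superblock details that manual reconstruction would need. A minimal sketch from a Linux workstation, assuming the drives appear as /dev/sdb through /dev/sde behind a write-blocker and the data members sit on partition 3 (both are assumptions; verify with lsblk, and substitute partition 5 if that is where your SHR data members live):

```bash
# mdadm --examine is read-only; it prints the array UUID, RAID level, chunk size,
# layout, and each device's role. Save it all to a text file while it is still readable.
for part in /dev/sd[b-e]3; do
    sudo mdadm --examine "$part"
done | tee synology-superblocks.txt
```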
Safe Diagnostic Steps
If the volume contains data you need, the correct sequence is: stop DSM from making changes, image every drive, then attempt reassembly on the images. The original drives should not be written to at any point.
1. Power down the NAS. Do not click Repair, Migrate, or Reinstall in DSM.
2. Label each drive with its bay number. Remove the drives.
3. Connect each drive to a Linux workstation through a write-blocker, or at minimum make sure nothing mounts it read-write.
4. Run mdadm --examine /dev/sdX3 (the data partition; often /dev/sdX5 on SHR pools) on each drive to read the RAID superblock. This tells you the array UUID, RAID level, chunk size, and device roles.
5. Image each drive with ddrescue to a separate destination disk. This preserves the original state.
6. Attempt mdadm --assemble --readonly on the images. If the array assembles, mount the filesystem read-only and copy files to a new destination. A sketch of steps 4 through 6 follows this list.
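The following sketch works entirely from the ddrescue images; the image paths, loop device numbers, the spare array name /dev/md100, and the volume path are all assumptions to substitute with your own:

```bash
# Attach each image read-only and expose its partitions (-r: read-only, -P: partition scan).
sudo losetup -r -P -f --show /mnt/dest/bay1.img   # prints e.g. /dev/loop0
sudo losetup -r -P -f --show /mnt/dest/bay2.img   # repeat for every image
# Confirm the metadata, then assemble the data members read-only under a spare md name.
sudo mdadm --examine /dev/loop0p3
sudo mdadm --assemble --readonly /dev/md100 /dev/loop[0-3]p3
# For SHR, activate the LVM volume group on top of the assembled array, then mount read-only.
sudo vgchange -ay
sudo mount -o ro /dev/vg1/volume_1 /mnt/recovered   # or mount /dev/md100 directly if there is no LVM layer
```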
If you are not comfortable working with mdadm, LVM, and Btrfs or EXT4 at the command line, or if the array does not assemble from images, professional NAS data recovery with write-blocked imaging is the lower-risk path.
Frequently Asked Questions
What does Volume Crashed mean on Synology?
Volume Crashed means the underlying Linux mdadm RAID array has entered a failed state. Synology DSM runs on Linux and uses mdadm for software RAID management. When enough drives fail, report errors, or become inconsistent, mdadm marks the array as inactive. DSM translates this to 'Volume Crashed' in its web interface. The drives themselves may still be readable individually; the RAID metadata binding them into a single volume is what has broken.
Can I recover data after a Synology volume crash?
In most cases, yes. The data is still on the individual drives. Recovery involves imaging each drive with a write-blocker, then reassembling the mdadm array offline using the RAID metadata stored on each disk. If the filesystem (Btrfs or EXT4) is intact, files can be extracted directly from the reassembled image. If the filesystem is also damaged, file carving tools can recover data based on file signatures.
Should I reinstall DSM after a volume crash?
No. Reinstalling DSM reformats the system and swap partitions (partitions 1 and 2), which alters partition tables and complicates recovery of corrupted LVM metadata. If DSM prompts you to reinstall or 'migrate,' do not proceed until the drives have been imaged. The DSM installer is designed to provision a new system, not to preserve an existing crashed volume.
Related Recovery Services
Synology volume crashed?
Free evaluation. Write-blocked drive imaging. mdadm array reconstruction. No data, no fee.