RAID Recovery

mdadm Missing Superblock Recovery

Your Linux software RAID array is reporting mdadm: No md superblock detected on /dev/sdX or mdadm: no recogniseable md superblock on /dev/sdX. In most cases the data on the member drives is still intact; only the metadata header that described the array geometry has been zeroed, overwritten, or corrupted.

This guide covers the four mdadm superblock metadata versions, their exact on-disk locations, the commands that destroy arrays during recovery attempts, and how professional recovery reconstructs array geometry from hex-level filesystem signatures without writing to original media.

Written by Louis Rossmann, Founder & Chief Technician
Updated March 2026

mdadm Superblock Metadata Versions

The Linux Multiple Device (MD) driver supports four superblock formats. Each version places the metadata at a different byte offset on the member drive. Knowing which version the array used is the first step in any recovery, because it determines where the data payload begins and where to search for filesystem signatures.

Version 0.90 (Legacy)

The superblock is 4KB and resides in a 64KB-aligned block at the end of the device. The data payload starts at byte offset 0. Maximum member size: 2TB. Maximum 28 devices per array. Still found on older Debian and Ubuntu installations and some early-generation Synology NAS units.

Version 1.0

The superblock sits between 8KB and 12KB from the end of the device. Like 0.90, the data payload starts at offset 0. This means the OS can see a valid ext4 or XFS filesystem starting at the beginning of the raw device, which leads to dangerous auto-mount scenarios where the kernel mounts an individual member drive as a standalone volume.

Version 1.1

The superblock sits at byte offset 0 (the very start of the device). The data payload begins immediately after the superblock. Because the superblock occupies the first bytes, the OS will not accidentally identify the device as a standard filesystem volume. Rarely used in production.

Version 1.2 (Modern Default)

The superblock sits 4KB from the start of the device. The data payload begins after a calculated data_offset, which is typically aligned to a 1MB boundary (2048 sectors). This is the default for all modern Linux installations, Synology DSM, and QNAP QTS. The 1MB alignment was introduced in mdadm 3.2+ to ensure optimal I/O alignment with 4K-sector drives.

Why this matters for recovery: If you run mdadm --create --assume-clean with version 1.2 but the original array used version 1.0, mdadm writes a new superblock at 4KB from the start of the drive. The data payload offset will be wrong, and every block read from the reconstructed array will be shifted, producing garbled output across all member drives. The original payload is still on disk, but the incorrect metadata now points to the wrong byte ranges.
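The four locations above can be checked mechanically. The sketch below, written against a raw image file rather than a live device, computes where each metadata version would place its superblock and looks for the MD magic number (0xa92b4efc, stored little-endian on disk). The offset formulas approximate mdadm's own; this is an illustration of the layout, not mdadm's actual implementation.

```python
import struct

MD_MAGIC = struct.pack("<I", 0xA92B4EFC)  # md superblock magic, little-endian on disk

def candidate_offsets(size):
    """Byte offset where each metadata version would place its superblock
    on a device of `size` bytes (formulas approximate mdadm's own)."""
    return {
        "0.90": (size & ~0xFFFF) - 0x10000,            # last full 64KB-aligned block
        "1.0":  ((size - 0x2000) // 0x1000) * 0x1000,  # ~8KB from the end, 4KB-aligned
        "1.1":  0,                                     # very start of the device
        "1.2":  0x1000,                                # 4KB from the start
    }

def probe_versions(image_path):
    """Return the metadata versions whose expected offset holds the MD magic."""
    found = []
    with open(image_path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        for version, off in candidate_offsets(size).items():
            f.seek(off)
            if f.read(4) == MD_MAGIC:
                found.append(version)
    return found

# Demo on a scratch 1 MiB image with the magic where a v1.2 superblock sits:
import tempfile, os
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(0x100000)
    f.seek(0x1000)
    f.write(MD_MAGIC)
    demo = f.name
versions = probe_versions(demo)   # ["1.2"]
os.remove(demo)
```

Running this probe against each member image before doing anything else tells you immediately whether the superblocks are truly gone or merely not where an assumed version would place them.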

How mdadm Superblocks Get Destroyed

Superblock loss is rarely spontaneous. It is almost exclusively the result of administrative commands or partitioning operations that overwrite the metadata region.

  1. Accidental --zero-superblock: Running mdadm --zero-superblock /dev/sdX on the wrong device. This erases only the metadata header (4KB or less). The filesystem payload on each member drive remains intact, but mdadm can no longer identify the device as an array member.
  2. Partitioning tool overwrites: Using fdisk, parted, or a NAS initialization wizard on a drive that was already an array member. For version 1.1 and 1.2 arrays, the superblock is at the beginning of the device, so any partition table write destroys it. For version 0.90 and 1.0, the superblock is at the end, so writing a new partition table may leave it intact as long as nothing is subsequently written to the end of the disk.
  3. Kernel auto-assembly with wrong members: A known issue with udev rules on older Ubuntu kernels (particularly 12.04 through 16.04 with kernel 3.2.0-40) caused arrays to fail auto-assembly on boot. Administrators mistakenly assumed the superblocks were destroyed when the issue was a udev race condition.
  4. Controller metadata conflict: Moving drives from a hardware RAID controller (Dell PERC, LSI MegaRAID) to a software RAID environment. The controller's DDF metadata at the end of the disk can overlap with mdadm version 0.90 or 1.0 superblock regions, causing both systems to reject the drives.
  5. NAS OS upgrades: Major firmware updates on Synology DSM or QNAP QTS can trigger mdadm/LVM unbinding if the update process is interrupted by a power failure. The physical data remains on the drives, but the metadata linking mdadm to LVM becomes inconsistent.
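The first failure mode is worth demonstrating, because it is the reason these arrays are usually recoverable. The sketch below simulates --zero-superblock on a scratch file standing in for a version 1.2 member: zeroing the 4KB metadata region leaves a payload signature placed past the 1MB data_offset untouched. All offsets are the typical v1.2 values; the "superblock" and "payload" here are fake markers, not real md structures.

```python
import os, tempfile

SB_OFFSET = 0x1000                    # version 1.2 superblock location (4KB from start)
DATA_OFFSET = 0x100000                # typical 1MB data_offset for a v1.2 array
EXT4_MAGIC_AT = DATA_OFFSET + 0x438   # where the ext4 magic bytes would land

# Build a scratch "member drive" with a fake superblock and a payload signature.
with tempfile.NamedTemporaryFile(delete=False) as img:
    img.truncate(2 * 1024 * 1024)
    img.seek(SB_OFFSET)
    img.write(b"fake-md-superblock")
    img.seek(EXT4_MAGIC_AT)
    img.write(b"\x53\xef")            # ext4 magic as it appears on disk
    path = img.name

# Simulate --zero-superblock: clear only the 4KB metadata region.
with open(path, "r+b") as img:
    img.seek(SB_OFFSET)
    img.write(b"\x00" * 0x1000)

# The metadata is gone, but the payload signature survives.
with open(path, "rb") as img:
    img.seek(SB_OFFSET)
    sb = img.read(4)                  # now all zeros
    img.seek(EXT4_MAGIC_AT)
    magic = img.read(2)               # still b"\x53\xef"
os.remove(path)
```

This is exactly why --zero-superblock on the wrong device is survivable while a wrong --create is not: the former clears a 4KB header, the latter remaps and can resync the entire payload.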

Commands That Destroy Arrays During Recovery

The instinct after seeing "no md superblock detected" is to recreate the array. Each of the following commands can convert a recoverable metadata loss into permanent data destruction.

  • mdadm --create --assume-clean with wrong parameters. If the disk order, metadata version, chunk size, or parity layout does not exactly match the original array, mdadm writes new superblocks with incorrect geometry. The existing data payload is now mapped to wrong offsets. For RAID 5/6, incorrect parity rotation corrupts every stripe.
  • fsck on individual member drives. Running e2fsck or xfs_repair on a raw RAID member interprets the scattered stripe chunks as filesystem corruption. The tool "repairs" the damage by rewriting inode tables and directory entries, permanently destroying the RAID structure.
  • mdadm --create without --assume-clean. Without the --assume-clean flag, mdadm performs a full resync: it recalculates parity across all member drives. On an existing array with real data, this overwrites every parity block with values computed from potentially wrong disk ordering, destroying the original parity permanently.
  • Consumer recovery software on raw members. Tools like Disk Drill, EaseUS, or PhotoRec cannot parse Linux MD parity layout. Running them on individual member drives produces fragmented, unusable output because they cannot reconstruct the stripe interleaving.

Before running any mdadm command on array members: image every drive to a separate storage target using write-blocked connections. All recovery attempts must operate on images, not original media. If any drive has physical faults (bad sectors, degraded heads), the imaging step captures recoverable data before the drive condition deteriorates.

Locating Data Without a Superblock: Filesystem Magic Bytes

When the superblock is gone, the array geometry must be reverse-engineered by examining raw hex data on each member drive. The first step is locating the filesystem payload by searching for known magic byte signatures.

Filesystem signatures and their offsets from the start of the data payload:

  • ext4 — magic 0xEF53 (stored little-endian, so a hex dump shows the bytes 53 EF) at 0x438 (1080 bytes) from data start. The ext4 superblock sits at byte 1024 within the first block group; the magic number is at offset 0x38 within that superblock structure.
  • XFS — magic 0x58465342 ("XFSB" in ASCII) at byte 0 of the data payload. XFS writes its superblock at the very beginning of the filesystem, making it the easiest signature to locate.
  • btrfs — magic _BHRfS_M (ASCII) at 0x10040 (64KiB + 0x40) from data start. The btrfs primary superblock sits at 64KiB; the magic is at offset 0x40 within the superblock structure.

By finding these signatures on each member drive, we calculate the exact data_offset and confirm which metadata version was in use. The distance between byte 0 of the raw device and the filesystem magic reveals the superblock version: if the ext4 magic appears at absolute offset 0x438, the data payload begins at byte 0 of the drive, so the array used version 0.90 or 1.0 (metadata at the end). If the ext4 magic appears at 0x100438 (1MB + 0x438), the array used version 1.2 with the standard 1MB data_offset alignment.
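The signature search can be sketched in a few lines. This scans the start of a drive image for the three signatures above and reports the data_offset each hit implies. It is deliberately minimal: a two-byte ext4 magic alone will produce false positives on real images, so a production tool also validates other superblock fields (block counts, checksums) before trusting a hit. The image path and 8MB search window are assumptions for the example.

```python
# Filesystem magic bytes as they appear on disk, with each signature's
# fixed offset from the start of the filesystem (the data payload).
SIGNATURES = {
    "ext4":  (0x438, b"\x53\xef"),     # 0xEF53 little-endian, sb at 1KB + 0x38
    "xfs":   (0x000, b"XFSB"),         # XFS superblock at byte 0
    "btrfs": (0x10040, b"_BHRfS_M"),   # primary sb at 64KiB + 0x40
}

def find_data_offsets(image_path, window=8 * 1024 * 1024):
    """Scan the start of a drive image for filesystem signatures and return
    (filesystem, implied data_offset) pairs. Only 4KB-aligned data_offsets
    are kept, since mdadm data offsets are block aligned; a real tool would
    also validate other superblock fields to rule out false positives."""
    with open(image_path, "rb") as f:
        blob = f.read(window)
    hits = []
    for fs, (fs_off, magic) in SIGNATURES.items():
        pos = blob.find(magic)
        while pos != -1:
            data_offset = pos - fs_off
            if data_offset >= 0 and data_offset % 0x1000 == 0:
                hits.append((fs, data_offset))
            pos = blob.find(magic, pos + 1)
    return hits

# Demo: a scratch image with the ext4 magic where a v1.2 array would put it.
import tempfile, os
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(2 * 1024 * 1024)
    f.seek(0x100438)
    f.write(b"\x53\xef")
    demo = f.name
hits = find_data_offsets(demo)   # [("ext4", 0x100000)]
os.remove(demo)
```

A hit with data_offset 0 points at version 0.90 or 1.0; a hit at 0x100000 points at version 1.2 with the standard 1MB alignment.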

How We Recover mdadm Arrays Without Superblocks

Professional recovery bypasses mdadm entirely. We image each member drive independently, extract the filesystem geometry from hex analysis, and reconstruct the array virtually. No bytes are written to the original media at any point in the process.

  1. Image all member drives. Each drive is connected via write-blocked interface and imaged sector-by-sector using PC-3000 or DeepSpar Disk Imager. If drives have physical media damage, PC-3000 selective head imaging captures data from healthy heads while skipping damaged areas, then returns to attempt damaged zones with adjusted read parameters.
  2. Identify the metadata version from hex signatures. Search each drive image for filesystem magic bytes at known offsets. The position of the signature relative to byte 0 of the raw device reveals the data_offset and therefore the superblock version. Cross-reference with any surviving mdadm.conf, /proc/mdstat logs, or NAS configuration database entries.
  3. Determine chunk size and parity rotation. Analyze the repeating patterns in filesystem structures (ext4 block group descriptors, XFS allocation group headers) across multiple member drive images. The spacing between identical structural elements reveals the chunk size. The pattern of which drive holds parity for each stripe determines the parity rotation algorithm (left-symmetric, left-asymmetric, etc.).
  4. Determine correct disk ordering. mdadm assigns a unique device role to each member drive. If the order is wrong, every stripe read pulls data from the wrong physical location. We verify ordering by assembling candidate configurations against known filesystem structures and checking for valid directory entries.
  5. Virtual assembly using read-only loop devices. Map the drive images into a virtual RAID array using calculated parameters. The virtual device is mounted read-only. Filesystem integrity is verified by checking superblock checksums, inode consistency, and directory tree traversal before any data extraction begins.

Superblock-less reconstruction approach: When mdadm --zero-superblock has been run on all drives, the array parameters must be inferred from the data itself. For ext4 volumes, the position of the magic bytes (53 EF on disk) at their known internal offset reveals the data_offset (typically 1MB for metadata version 1.2). Block group descriptor spacing determines chunk size. Parity rotation layout (left-symmetric, left-asymmetric, etc.) is identified by checking which drive holds parity for the first several stripes. Once all parameters are determined, the array can be virtually assembled and the filesystem mounted read-only for extraction.
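The parity rotation being tested in step 3 is just an address mapping. The sketch below models mdadm's default RAID 5 layout, left-symmetric, where parity rotates from the last disk downward and data wraps around starting on the disk after parity. During reconstruction, candidate layouts like this one are tried until known filesystem structures land at the right addresses; the function is an illustration of the layout math, not recovery tooling.

```python
def raid5_left_symmetric(chunk, n):
    """Map a logical data chunk index to (disk, stripe) for an n-disk RAID 5
    array using the left-symmetric layout (mdadm's default). Each stripe
    holds n-1 data chunks; parity rotates from the last disk downward, and
    data chunks start on the disk immediately after parity, wrapping around."""
    stripe = chunk // (n - 1)
    slot = chunk % (n - 1)
    p = parity_disk(stripe, n)
    return (p + 1 + slot) % n, stripe

def parity_disk(stripe, n):
    """Disk holding parity for a given stripe in left-symmetric layout."""
    return (n - 1) - (stripe % n)

# Demo: first four data chunks and first three parity positions, 3 disks.
layout = [raid5_left_symmetric(c, 3) for c in range(4)]
# [(0, 0), (1, 0), (2, 1), (0, 1)] — stripe 0: data on disks 0,1, parity on 2
par = [parity_disk(s, 3) for s in range(3)]   # [2, 1, 0]
```

A left-asymmetric array uses the same parity rotation but fills data disks in ascending order while skipping the parity disk, which is why a candidate assembly with the wrong layout produces directories that decode on some stripes and turn to garbage on others.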

NAS Devices That Use mdadm Internally

Consumer and prosumer NAS devices from Synology, QNAP, and Western Digital use mdadm as their RAID engine, but wrap it in proprietary abstraction layers that complicate standard recovery procedures.

Synology (DSM / SHR)

Synology Hybrid RAID (SHR) is fundamentally LVM layered over multiple mdadm arrays. When drives of different sizes are used, DSM partitions each drive into chunks, creates separate RAID arrays from size-matched chunks, and merges them using LVM pvresize into a single logical volume. A "missing superblock" on a Synology NAS means the mdadm layer is broken, but the LVM physical volume headers and volume group metadata must also be intact for the volume to mount. Running standard mdadm --create on an SHR member will overwrite the LVM headers immediately.

QNAP (QTS)

QNAP QTS uses standard mdadm with ext4 or ext3 on LVM. The recovery approach is similar to standard Linux mdadm recovery, with one important distinction: QNAP stores its own configuration database on a separate partition (partition 1 on each drive). This database maps storage pools to mdadm arrays. Corrupting or overwriting partition 1 while attempting mdadm recovery will desynchronize the QTS management interface from the underlying array state.

QuTS hero exception: QNAP's QuTS hero uses ZFS (RAID-Z) instead of mdadm. ZFS does not use mdadm superblocks; it relies on ZFS labels and transaction groups. mdadm recovery procedures do not apply to QuTS hero arrays.

WD My Cloud

Multi-bay WD My Cloud devices (EX2, EX4, PR2100, PR4100) use mdadm with ext4 for RAID 0, RAID 1, RAID 5, JBOD, and spanning configurations. The mdadm implementation is straightforward with no LVM layer. Recovery follows standard mdadm procedures: image, identify metadata version, calculate data_offset, and reconstruct.

TerraMaster

TerraMaster NAS devices running TOS use mdadm with btrfs on LVM. The btrfs filesystem adds a complication: btrfs has its own internal RAID implementation. Some TerraMaster configurations use mdadm RAID with btrfs single profile on top, while others use btrfs native RAID. The recovery approach depends on which RAID layer is in use.

The data_offset Alignment Problem

Starting with mdadm version 3.2, the data_offset parameter determines the exact byte position where the filesystem payload begins on each member drive. Getting this value wrong by even one sector shifts the entire data mapping, producing garbled output from an otherwise correct reconstruction.

  1. Modern mdadm (3.2+) defaults to aligning data_offset to a 1MB (2048 sector) boundary for version 1.2 arrays. Older versions used smaller alignments. An array created with mdadm 3.1 may have a 4KB data_offset, while the same RAID level created with mdadm 4.x has a 1MB offset.
  2. The mdadm --examine command reads and displays the data_offset from an existing superblock. When the superblock is missing, this information must be derived from hex analysis of the filesystem signatures.
  3. If you specify the wrong data_offset during mdadm --create, the new superblock will point to the wrong start position for the data region. The filesystem structures will not align, and the volume will fail to mount or mount with corrupt data.

How we determine data_offset: We scan the raw drive image for filesystem magic bytes and calculate the distance from byte 0 of the device to the first filesystem structure. For ext4, if the magic bytes 53 EF appear at absolute offset 0x100438, the data_offset is 0x100000 (1MB). For XFS, if "XFSB" appears at offset 0x100000, the data_offset is 0x100000. This calculation is verified across all member drives; consistent results confirm the correct offset.
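That cross-member verification reduces to one subtraction per drive plus an agreement check, sketched below. The helper and its inputs are illustrative; the internal offset (0x438 for ext4) comes from the signature table earlier in this guide.

```python
def consistent_data_offset(magic_positions, fs_internal_offset):
    """Given the absolute offset of the filesystem magic found on each member
    image, return the shared data_offset, or None if the members disagree
    (a sign of mixed metadata versions or a misidentified signature)."""
    offsets = {pos - fs_internal_offset for pos in magic_positions}
    return offsets.pop() if len(offsets) == 1 else None

# Three ext4 members, magic bytes at 1MB + 0x438 on every drive:
offset = consistent_data_offset([0x100438, 0x100438, 0x100438], 0x438)
# 0x100000 — all members agree on a 1MB data_offset (metadata version 1.2)

# One member with the magic at 0x438 (payload at byte 0) does not belong:
mismatch = consistent_data_offset([0x438, 0x100438], 0x438)   # None
```

A None result is itself diagnostic: it usually means one drive was re-initialized with a different metadata version, or a signature match was a false positive.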

Frequently Asked Questions

Can I rebuild a missing superblock by running mdadm --create?
Only if you know the exact parameters used during the original array creation: disk order, metadata version, chunk size, layout, and data offset. If any parameter is wrong, mdadm --create overwrites the existing parity layout and payload data, causing permanent loss. Work from cloned images, not original drives.
Why does my drive show up as a normal filesystem after a RAID failure?
Arrays created with version 1.0 or 0.90 metadata store the superblock at the end of the device. The filesystem (ext4, XFS) begins at byte offset 0. The OS may auto-mount the individual member drive as a standalone disk, which desynchronizes the array parity the moment any write occurs.
What does 'mdadm: device /dev/sdX is not an md array' mean?
mdadm cannot locate a valid superblock magic number at the expected offset (0, 4KB from start, or near the end of the disk depending on version). Either the drive's superblock was zeroed or overwritten by a partitioning tool, or the drive was never a member of an mdadm array.
Is data recoverable after running mdadm --zero-superblock?
In most cases, yes. The --zero-superblock command erases only the metadata header (4KB or less). The filesystem payload on each member drive remains intact. Recovery requires calculating the correct data_offset for the original metadata version and reassembling the array virtually without writing new superblocks to the original media.
My Synology NAS lost its RAID. Can I use standard mdadm commands to fix it?
Synology Hybrid RAID (SHR) uses mdadm internally, but wraps it in LVM. The NAS partitions uneven drives into multiple slices, builds separate RAID sets from those slices, and merges them using LVM pvresize. Recovering the mdadm superblocks is only the first step; the LVM physical volume headers and volume group metadata must also be reconstructed. Running standard mdadm --create on an SHR member will overwrite the LVM headers.

mdadm array reporting missing superblock?

Free evaluation. Write-blocked imaging. Virtual array reconstruction from hex-level analysis. No data, no fee.