NAS & RAID Recovery

Migration Failed Data Recovery

A NAS migration, RAID reshape, or LVM pvmove was interrupted. The source array may have inconsistent superblocks. The destination array may be partially initialized. Both systems report errors, and neither will mount your data.

This page covers the specific failure modes of interrupted storage migrations: Synology Migration Assistant failures, QNAP cross-model transfers, mdadm RAID reshapes, and LVM server migrations. Each type leaves different metadata in different states, and each requires a different reconstruction approach.

Written by
Louis Rossmann
Founder & Chief Technician
Updated March 2026

Why Migration Failures Differ from Standard RAID Failures

A standard RAID failure involves one or more drives becoming unreadable while the array geometry stays constant. A migration failure is different: the array geometry itself was in the process of changing when the interruption occurred.

During a migration, reshape, or expansion, the RAID layer rewrites stripe data from one layout to another. The mdadm superblock tracks this progress using a reshape_position field that records how far the conversion has advanced. An interruption at any point creates a split-state array: stripes below the checkpoint follow the new geometry, stripes above it follow the old geometry.

Standard RAID recovery tools that assume a single consistent geometry will produce garbage output when applied to a split-state array. Recovery requires reading the reshape checkpoint and applying two different parity calculations to the correct stripe ranges.
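The split-state logic can be modeled in a few lines. This is an illustrative sketch, not mdadm's internal code: `reshape_position` marks the boundary in the array's address space, and every stripe is decoded with whichever geometry matches its side of that boundary. The chunk size, device counts, and checkpoint value below are hypothetical.

```python
# Illustrative model of a split-state array after an interrupted grow:
# sectors below the reshape checkpoint follow the new geometry, sectors
# at or above it still follow the old one. Values are hypothetical; real
# recovery reads them from each member's mdadm superblock.

CHUNK_SECTORS = 128          # 64 KiB chunks (128 x 512-byte sectors)
OLD_DATA_DISKS = 3           # RAID 5 on 4 drives before the grow
NEW_DATA_DISKS = 4           # RAID 5 on 5 drives after the grow
RESHAPE_POSITION = 40_960    # checkpoint, in sectors of array address space

def geometry_for_sector(sector: int) -> int:
    """Return the data-disk count to use when decoding this array sector."""
    return NEW_DATA_DISKS if sector < RESHAPE_POSITION else OLD_DATA_DISKS

# A sector below the checkpoint decodes with the new layout; one above
# it still follows the pre-reshape layout.
print(geometry_for_sector(0), geometry_for_sector(RESHAPE_POSITION))  # 4 3
```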

Failure Type | Array Geometry | Recovery Approach
Standard RAID drive failure | Consistent across all members | Image failed drives, rebuild parity from survivors
Interrupted RAID reshape | Split: old geometry above checkpoint, new below | Read reshape_position, apply dual-geometry parity reconstruction
NAS-to-NAS migration failure | Source array intact, destination partially initialized | Recover from source drives; destination drives may supplement missing data
LVM pvmove interrupted | Extents split between source and destination PVs | Parse LVM metadata from both PVs, map extent locations, reconstruct LV

Synology Migration Assistant Failures

Synology's Migration Assistant transfers data and configuration from an older NAS to a newer model. The process involves copying volumes, reconfiguring DSM, and updating SHR/mdadm array metadata to match the new hardware. Failures during this process leave the source and destination in different states depending on where the interruption occurred.

Network Transfer Interruption

Migration Assistant copies data over the LAN between the two NAS units. If the network connection drops, one NAS reboots, or DSM encounters an error, the transfer stops. The source NAS drives retain their original mdadm arrays and filesystems intact. The destination NAS may have a partially initialized storage pool with incomplete Btrfs metadata. Recovery focuses on the source drives; the destination drives can be discarded.

DSM Version Upgrade During Migration

Migrating from a DSM 6.x NAS to a DSM 7.x model triggers a simultaneous data transfer and DSM upgrade. DSM 7.x changed the internal LVM and Btrfs structures. If the migration fails after the destination has begun restructuring volumes but before data transfer completes, the destination may have DSM 7.x metadata pointing to empty or partial volumes. The source drives typically remain in their DSM 6.x state with data intact, but DSM may mark them as "Not Initialized" if they are placed back into the original NAS after the migration was started.

SHR Pool Reconfiguration Failure

Synology Hybrid RAID (SHR) creates multiple mdadm arrays from drive partitions of different sizes. Migrating between models with different drive bay counts can trigger SHR pool restructuring. If the restructuring process is interrupted, the mdadm superblocks across the SHR slice groups may reference different array geometries. Some slices may follow the pre-migration layout; others may follow the new target layout. Standard mdadm --assemble fails because no single geometry matches all superblocks.
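SHR's tiered slicing can be illustrated with a small model. This is a simplified sketch, not Synology's actual allocator: it assumes SHR partitions every drive into capacity tiers and builds one mdadm array per tier from every drive large enough to contribute. The drive sizes are hypothetical.

```python
# Toy model of SHR slicing: each capacity tier becomes its own mdadm
# array built from every drive at least that large. Sizes in TB are
# hypothetical; real SHR layouts come from the drives' partition tables.

def shr_slice_groups(drive_sizes: list[int]) -> list[tuple[int, int]]:
    """Return (slice_size, member_count) per tier, smallest tier first."""
    tiers = sorted(set(drive_sizes))
    groups, base = [], 0
    for tier in tiers:
        members = sum(1 for size in drive_sizes if size >= tier)
        groups.append((tier - base, members))  # slice height, drives in it
        base = tier
    return groups

# Four bays: two 4 TB and two 8 TB drives produce two slice groups:
# a 4-member array over the first 4 TB, then a 2-member array above it.
print(shr_slice_groups([4, 4, 8, 8]))  # [(4, 4), (4, 2)]
```

A migration that restructures the pool rewrites the superblocks of each slice group separately, which is why an interruption can leave the groups disagreeing about geometry.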

Source drives are the priority. In most Synology migration failures, the source NAS drives contain a complete, consistent mdadm array from before the migration started. Image the source drives first. The destination drives are only needed if the source was modified during migration.

QNAP Cross-Model Migration Failures

QNAP supports moving drives directly between NAS models in some configurations. This physical migration (pulling drives from one NAS and inserting them into another) relies on QTS being able to recognize the existing mdadm arrays and LVM volumes. Several conditions cause this recognition to fail.

ARM to Intel Architecture Change
QNAP's ARM-based models (TS-x28A series) and Intel-based models (TS-x53D series) use different system partition layouts on drive 1. The system partition contains QTS configuration, user accounts, and app data. Moving drives from ARM to Intel or vice versa corrupts the system partition. QTS on the new unit cannot read the configuration and may offer to "initialize" the drives, which destroys the data volumes.
Different Bay Count and RAID Geometry
Moving a 4-bay RAID 5 array into an 8-bay unit works if you insert drives into the first 4 bays in the correct order. But QTS may attempt an automatic RAID expansion to use the additional bays. If this expansion starts and is then interrupted (power failure, user cancellation, drive error), the array enters a reshape state identical to an interrupted mdadm --grow operation.
QTS vs QuTS Hero (ZFS) Incompatibility
QTS uses mdadm + LVM + EXT4. QuTS hero uses ZFS. Drives formatted under one system cannot be read by the other. Moving mdadm drives into a QuTS hero NAS results in the system not recognizing any volumes. Moving ZFS drives into a standard QTS NAS has the same result. In both cases, the data is intact on the drives but invisible to the operating system.
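Distinguishing the two formats on a raw image comes down to metadata signatures. The sketch below checks a single buffer for the mdadm v1.x superblock magic (0xa92b4efc) and the ZFS uberblock magic (0x00bab10c); a real scan would probe the known on-disk offsets for each format (e.g. 4 KiB into the partition for mdadm v1.2) and handle byte-swapped ZFS labels.

```python
import struct

MD_MAGIC = 0xA92B4EFC      # mdadm v1.x superblock magic
ZFS_UB_MAGIC = 0x00BAB10C  # ZFS uberblock magic

def classify_block(buf: bytes) -> str:
    """Guess whether a metadata block belongs to mdadm or ZFS.
    Simplified: only inspects the first 4 bytes of one buffer; real
    detection probes each format's documented superblock locations."""
    word = struct.unpack_from("<I", buf, 0)[0]
    if word == MD_MAGIC:
        return "mdadm"
    if word == ZFS_UB_MAGIC:
        return "zfs"
    return "unknown"

# Fabricated buffer carrying the mdadm magic, for illustration only.
fake_md = struct.pack("<I", MD_MAGIC) + bytes(4092)
print(classify_block(fake_md))  # mdadm
```

This is also why the data survives a wrong-OS insertion: neither system recognizes the other's magic values, so it reports no volumes rather than overwriting them, until an initialization is confirmed.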

Do not initialize, format, or create a new storage pool. If QTS cannot recognize drives after a cross-model migration, the mdadm arrays and data volumes are still intact on the raw drives. Initialization destroys them. Power down, remove drives, and image with write-blockers.

RAID Reshape and Expansion Interruptions

RAID reshape changes the stripe layout of an existing array without destroying data. Common reshape operations include adding a new drive to expand capacity, changing chunk size, or converting from RAID 5 to RAID 6. mdadm performs these reshapes by rewriting data stripes from left to right across the array.

The reshape process maintains a checkpoint position in the mdadm superblock. All stripes to the left of the checkpoint have been rewritten to the new geometry. All stripes to the right remain in the old geometry. Under normal operation, the kernel can resume a reshape from the last checkpoint after a reboot.

When Reshape Recovery Fails

  1. Superblock corruption at the checkpoint boundary. If power was lost during a superblock write, the reshape_position field may contain an invalid value. The kernel cannot determine where old geometry ends and new geometry begins. Manual analysis of the stripe data is required to find the actual transition point.
  2. Drive failure during reshape. If a drive fails mid-reshape, the array has two problems: a missing member and a split geometry. Standard rebuild procedures cannot handle both. The reshape must be resolved first, then the missing member reconstructed from parity.
  3. NAS firmware update during reshape. Some NAS firmware updates require a reboot. If a firmware update is applied while a reshape is in progress, the reboot interrupts the reshape. The updated kernel may handle the mdadm superblock format differently than the previous version, preventing automatic resume.

Interrupted reshape failure pattern: When a NAS begins an mdadm reshape to incorporate a new member drive (equivalent to mdadm --grow --raid-devices=N), a power outage during the reshape leaves the array in a split-geometry state. The reshape_position superblock field marks where the conversion stopped. Stripes before that offset follow the new drive count; stripes after it follow the old geometry. No single RAID parameter set can read the entire volume. Recovery requires imaging all drives and stitching the two geometry segments independently.
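Stitching the two segments means applying a different chunk-to-disk mapping on each side of the checkpoint. The sketch below models the left-symmetric RAID 5 layout (mdadm's default, with parity rotating backward and data following the parity disk) to show that the same logical chunk lands on different drives under the old and new device counts; it is an illustration, not recovery tooling.

```python
def raid5_left_symmetric(chunk: int, ndisks: int) -> tuple[int, int]:
    """Map a logical data chunk to (disk, stripe) under the left-symmetric
    RAID 5 layout, mdadm's default. Sketch for illustration only."""
    data_disks = ndisks - 1
    stripe, idx = divmod(chunk, data_disks)
    parity_disk = (ndisks - 1) - (stripe % ndisks)   # parity rotates backward
    disk = (parity_disk + 1 + idx) % ndisks          # data follows the parity
    return disk, stripe

# The same logical chunk sits on a different drive before and after a
# grow from 4 to 5 devices, so no single geometry can read both halves.
print(raid5_left_symmetric(10, 4), raid5_left_symmetric(10, 5))
```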

LVM Migration Failures Between Servers

LVM pvmove is the standard Linux tool for migrating data between physical volumes while a volume group remains online. It is used during server migrations, storage upgrades, and data center relocations. An interrupted pvmove leaves logical extents scattered across source and destination PVs.

pvmove creates a temporary mirror for each logical extent being moved. The migration sequence for each extent is: (1) create mirror on destination PV, (2) synchronize data from source to destination, (3) remove source extent from mirror, (4) update LVM metadata. An interruption at any stage leaves the extent in a different state.

Mirrored (Both Valid)

The extent was fully synchronized but the source copy was not yet removed. Both source and destination contain identical data. Either copy can be used.

Migrated (Destination Only)

The extent was fully migrated and the source copy removed. Only the destination PV contains valid data for this extent. The source PV extent is stale or zeroed.

Unmigrated (Source Only)

The extent was not yet processed by pvmove. Only the source PV contains valid data. The destination PV may have uninitialized or old data at this offset.

LVM records pvmove progress in its metadata area, which is written to every PV in the volume group. Parsing this metadata from all PVs reveals which extents are in which state. The complete logical volume can be reconstructed by reading the correct version of each extent from the appropriate PV.
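The per-extent selection reduces to a lookup over the three states described above. The extent numbers and states below are hypothetical; in practice they are derived from the LVM metadata area on each PV.

```python
# Sketch: choose the valid copy of each logical extent after an
# interrupted pvmove. States would come from parsed LVM metadata;
# the values here are made up for illustration.
extent_states = {
    0: "unmigrated",  # only the source PV copy is valid
    1: "mirrored",    # both copies valid; either may be used
    2: "migrated",    # only the destination PV copy is valid
}

def pv_for_extent(state: str) -> str:
    """Pick which physical volume to read this extent from."""
    if state == "migrated":
        return "destination"
    return "source"   # unmigrated and mirrored both read fine from source

plan = {ext: pv_for_extent(state) for ext, state in extent_states.items()}
print(plan)  # {0: 'source', 1: 'source', 2: 'destination'}
```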

Split-Brain Metadata Structure and Resolution

Every migration failure creates some form of split-brain metadata: different drives in the same array contain superblocks that disagree about the array state. mdadm, LVM, and ZFS all refuse to assemble when they detect inconsistent metadata because assembling with the wrong assumptions destroys data.

In mdadm, the key fields are: events (write counter), reshape_position (geometry transition progress), dev_roles (which drive fills which slot), and the write-intent bitmap (dirty region tracking). When these fields disagree across members, the array cannot self-assemble.
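Selecting the authoritative members can be sketched as a comparison over dumped superblock fields. The field names mirror `mdadm --examine` output; the device names and values below are hypothetical.

```python
# Sketch: group member superblocks by (events, reshape_position) and
# treat the largest consistent group as the authoritative array state.
# Values are hypothetical; real ones come from mdadm --examine dumps.
from collections import Counter

superblocks = {
    "sda3": {"events": 5120, "reshape_position": 819200},
    "sdb3": {"events": 5120, "reshape_position": 819200},
    "sdc3": {"events": 5120, "reshape_position": 819200},
    "sdd3": {"events": 4788, "reshape_position": 655360},  # stale member
}

states = Counter((sb["events"], sb["reshape_position"])
                 for sb in superblocks.values())
authoritative, members = states.most_common(1)[0]
print(authoritative, members)  # (5120, 819200) 3
```

The outvoted member is not discarded: its data regions may still be needed once the authoritative geometry is established.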

How We Resolve Split-Brain Arrays

  1. Image every drive with write-blockers. PC-3000 Portable III or DeepSpar Disk Imager for drives with physical symptoms. ddrescue for mechanically healthy drives. Every member from every system involved in the migration.
  2. Dump and compare superblocks. Read the mdadm superblock from each image. Map the event counts, reshape positions, and device roles across all members. Identify which drives were part of the pre-migration state and which reflect post-migration changes.
  3. Reconstruct parity using the correct geometry for each stripe range. For interrupted reshapes, apply the old parity calculation above the checkpoint and the new calculation below it. For NAS migrations, prioritize the source array superblocks over destination superblocks.
  4. Validate filesystem integrity. After array reconstruction, mount the Btrfs, EXT4, or ZFS filesystem read-only. Verify directory structures, file sizes, and checksums before extracting data to a destination drive.

What to Do When a Migration Fails

The priority after any failed migration is preserving the current state of every drive involved. Do not retry the migration, do not click repair, and do not reinitialize any storage pool.

  1. Power down all involved systems. Both source and destination NAS units, or both source and destination servers. Clean shutdown if the web interface is accessible; hold the power button if not.
  2. Do not retry the migration. Retrying writes additional data to the destination and may modify the source. Each retry attempt changes the superblock event counts, making split-brain resolution more complex.
  3. Label drives from both systems separately. Mark each drive with its system of origin (source or destination) and bay number. Photograph both NAS units before removing drives.
  4. Image all drives from both systems. The source drives typically contain the most complete data. The destination drives may contain partial transfers needed to fill gaps.

Reinitializing a storage pool destroys everything. Some NAS interfaces suggest reinitialization after a failed migration. This creates a new, empty pool. It does not recover the failed migration. If the data matters, send drives for professional NAS recovery before touching the web interface.

Recovery Pricing for Migration Failures

Migration failure recovery is priced per member drive based on the type of work required. Logical recovery (healthy drives with split-brain metadata) starts at the file system recovery tier. Drives with mechanical failures follow standard HDD pricing. No data recovered, no fee.

Simple Copy (Low complexity): $100
Your drive works, you just need the data moved off it. Functional drive; data transfer to new media. Rush available: +$100.

File System Recovery (Low complexity): From $250
Your drive isn't recognized by your computer, but it's not making unusual sounds. File system corruption: accessible with professional recovery software but not by the OS. Starting price; final depends on complexity.

Firmware Repair (Medium complexity, PC-3000 required): $600–$900
Your drive is completely inaccessible. It may be detected but shows the wrong size or won't respond. Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access. Standard drives at the lower end; high-density drives at the higher end.

Head Swap (High complexity, clean bench surgery): $1,200–$1,500, 50% deposit
Your drive is clicking, beeping, or won't spin. The internal read/write heads have failed. Head stack assembly failure: transplanting heads from a matching donor drive on a clean bench. 50% deposit required; donor parts are consumed in the repair.

Surface / Platter Damage (High complexity, clean bench surgery): $2,000, 50% deposit
Your drive was dropped, has visible damage, or a head crash scraped the platters. Platter scoring or contamination: requires platter cleaning and head swap. 50% deposit required; donor parts are consumed in the repair. Most difficult recovery type.

Hardware Repair vs. Software Locks

Our "no data, no fee" policy applies to hardware recovery. We do not bill for unsuccessful physical repairs. If we replace a hard drive read/write head assembly or repair a liquid-damaged logic board to a bootable state, the hardware repair is complete and standard rates apply. If data remains inaccessible due to user-configured software locks, a forgotten passcode, or a remote wipe command, the physical repair is still billable. We cannot bypass user encryption or activation locks.

All tiers: Free evaluation and firm quote before any paid work. No data, no fee on simple copy, file system, and firmware tiers. Head swap and surface damage require a 50% deposit because donor parts are consumed in the attempt.

Target drive: The destination drive we copy recovered data onto. You can supply your own or we provide one at cost. For ultra-high-capacity drives (20TB and above), the target drive costs approximately $400+ due to the large media required. All prices are plus applicable tax.

Frequently Asked Questions

My Synology Migration Assistant failed mid-transfer. Is my data gone?
Not necessarily. Synology Migration Assistant copies data at the volume level and updates DSM configuration files on the destination NAS. If the migration failed mid-transfer, the source NAS drives still contain the original mdadm array and Btrfs/EXT4 filesystem. The destination NAS may have a partially initialized pool. Power down both units, remove drives from the source NAS, and image them before attempting any retry. The source array is usually recoverable through offline mdadm reassembly.
Can I move drives from a QNAP ARM-based NAS to an Intel-based QNAP?
QNAP supports cross-architecture migration in some cases, but the process is not guaranteed. ARM and Intel QNAP models use different bootloader layouts and system partition schemes. QTS stores system configuration in a reserved partition on drive 1. Moving drives between incompatible architectures can corrupt the system partition while leaving the data volume intact. If the migration fails and QTS cannot initialize, do not format or reinitialize the drives. The mdadm array containing your data is separate from the system partition and can be reconstructed offline.
My RAID reshape was interrupted by a power failure. Can I recover?
RAID reshape (changing stripe width, adding a member, or converting RAID level) rewrites data across all drives in a rolling pattern. An interruption creates a split state: some stripes follow the old geometry and some follow the new. mdadm records reshape progress in the superblock, and modern kernels (4.x+) can resume an interrupted reshape. If the reshape cannot resume, the array requires manual reconstruction using the reshape checkpoint position to determine which stripes use old versus new geometry. We reconstruct these split-state arrays by reading the mdadm superblock reshape_position field and applying the correct parity calculation for each stripe range.
LVM pvmove was interrupted during a server migration. Is the data recoverable?
pvmove works by creating a temporary mirror of each logical extent, copying data to the destination physical volume, then removing the source extent from the mirror. An interruption leaves some extents mirrored (both copies valid), some fully migrated (only destination copy valid), and some unmigrated (only source copy valid). LVM stores the migration state in its metadata area on every physical volume. We parse the LVM metadata from both source and destination PVs to map which extents live where, then extract the correct version of each extent to reconstruct the complete logical volume.
How much does migration failure recovery cost?
Migration failure recovery is priced per drive based on the work required. If the drives are physically healthy and the issue is purely logical (corrupted metadata, incomplete reshape, split-brain superblocks), pricing starts at $250 per drive (file system recovery tier). If member drives have mechanical failures requiring head swaps or firmware repair, standard HDD pricing applies: firmware $600 to $900, head swap $1,200 to $1,500. No data recovered, no fee. We publish full pricing tiers at rossmanngroup.com/pricing.
What is split-brain metadata and why does it prevent my array from assembling?
Split-brain metadata occurs when different member drives in an array have inconsistent superblock event counts. Each write to an mdadm array increments the superblock event counter. If a drive is temporarily disconnected (cable issue, timeout, power glitch) and then reconnected after additional writes occurred, its event count is behind the rest of the array. mdadm refuses to assemble the array because it cannot determine which drive has the authoritative state. Recovery requires examining the event counts, bitmap state, and reshape position across all members to reconstruct a consistent array view.

Migration failed? Data stuck between two systems?

Free evaluation. We image all drives from source and destination, reconstruct split-brain metadata, and extract your data. No data, no fee.