Migration Failed Data Recovery
A NAS migration, RAID reshape, or LVM pvmove was interrupted. The source array may have inconsistent superblocks. The destination array may be partially initialized. Both systems report errors, and neither will mount your data.
This page covers the specific failure modes of interrupted storage migrations: Synology Migration Assistant failures, QNAP cross-model transfers, mdadm RAID reshapes, and LVM server migrations. Each type leaves different metadata in different states, and each requires a different reconstruction approach.

Why Migration Failures Differ from Standard RAID Failures
A standard RAID failure involves one or more drives becoming unreadable while the array geometry stays constant. A migration failure is different: the array geometry itself was in the process of changing when the interruption occurred.
During a migration, reshape, or expansion, the RAID layer rewrites stripe data from one layout to another. The mdadm superblock tracks this progress using a reshape_position field that records how far the conversion has advanced. An interruption at any point creates a split-state array: stripes below the checkpoint follow the new geometry, stripes above it follow the old geometry.
Standard RAID recovery tools that assume a single consistent geometry will produce garbage output when applied to a split-state array. Recovery requires reading the reshape checkpoint and applying two different parity calculations to the correct stripe ranges.
| Failure Type | Array Geometry | Recovery Approach |
|---|---|---|
| Standard RAID drive failure | Consistent across all members | Image failed drives, rebuild parity from survivors |
| Interrupted RAID reshape | Split: old geometry above checkpoint, new below | Read reshape_position, apply dual-geometry parity reconstruction |
| NAS-to-NAS migration failure | Source array intact, destination partially initialized | Recover from source drives; destination drives may supplement missing data |
| LVM pvmove interrupted | Extents split between source and destination PVs | Parse LVM metadata from both PVs, map extent locations, reconstruct LV |
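The split-state rule in the table can be expressed as a one-line decision. This is a simplified sketch: it assumes reshape_position is an array-data offset in 512-byte sectors (how mdadm commonly reports it) and ignores per-drive data_offset shifts that real reshapes also involve.

```python
def geometry_for_offset(offset_bytes: int,
                        reshape_position_sectors: int) -> str:
    """Decide which stripe geometry applies at a given array offset.

    Simplified assumption: reshape_position is an array-data offset
    in 512-byte sectors. Stripes below the checkpoint have already
    been rewritten to the new layout; stripes at or above it still
    use the old layout.
    """
    checkpoint_bytes = reshape_position_sectors * 512
    return "new" if offset_bytes < checkpoint_bytes else "old"
```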
Synology Migration Assistant Failures
Synology's Migration Assistant transfers data and configuration from an older NAS to a newer model. The process involves copying volumes, reconfiguring DSM, and updating SHR/mdadm array metadata to match the new hardware. Failures during this process leave the source and destination in different states depending on where the interruption occurred.
Network Transfer Interruption
Migration Assistant copies data over the LAN between the two NAS units. If the network connection drops, one NAS reboots, or DSM encounters an error, the transfer stops. The source NAS drives retain their original mdadm arrays and filesystems intact; the destination NAS may hold a partially initialized storage pool with incomplete Btrfs metadata. Recovery focuses on the source drives, and the destination drives are usually not needed.
DSM Version Upgrade During Migration
Migrating from a DSM 6.x NAS to a DSM 7.x model triggers a simultaneous data transfer and DSM upgrade. DSM 7.x changed the internal LVM and Btrfs structures. If the migration fails after the destination has begun restructuring volumes but before data transfer completes, the destination may have DSM 7.x metadata pointing to empty or partial volumes. The source drives typically remain in their DSM 6.x state with data intact, but DSM may mark them as "Not Initialized" if they are returned to the original NAS after a migration has already started.
SHR Pool Reconfiguration Failure
Synology Hybrid RAID (SHR) creates multiple mdadm arrays from drive partitions of different sizes. Migrating between models with different drive bay counts can trigger SHR pool restructuring. If the restructuring process is interrupted, the mdadm superblocks across the SHR slice groups may reference different array geometries. Some slices may follow the pre-migration layout; others may follow the new target layout. Standard mdadm --assemble fails because no single geometry matches all superblocks.
Source drives are the priority. In most Synology migration failures, the source NAS drives contain a complete, consistent mdadm array from before the migration started. Image the source drives first. The destination drives are only needed if the source was modified during migration.
QNAP Cross-Model Migration Failures
QNAP supports moving drives directly between NAS models in some configurations. This physical migration (pulling drives from one NAS and inserting them into another) relies on QTS being able to recognize the existing mdadm arrays and LVM volumes. Several conditions cause this recognition to fail.
ARM to Intel Architecture Change
QNAP's ARM-based models (TS-x28A series) and Intel-based models (TS-x53D series) use different system partition layouts on drive 1. The system partition contains QTS configuration, user accounts, and app data. Moving drives from ARM to Intel or vice versa corrupts the system partition. QTS on the new unit cannot read the configuration and may offer to "initialize" the drives, which destroys the data volumes.
Different Bay Count and RAID Geometry
Moving a 4-bay RAID 5 array into an 8-bay unit works if you insert the drives into the first four bays in the correct order. But QTS may attempt an automatic RAID expansion to use the additional bays. If this expansion starts and is then interrupted (power failure, user cancellation, drive error), the array enters a reshape state identical to an interrupted mdadm --grow operation.
QTS vs QuTS Hero (ZFS) Incompatibility
QTS uses mdadm + LVM + EXT4; QuTS hero uses ZFS. Drives formatted under one system cannot be read by the other. Moving mdadm drives into a QuTS hero NAS results in the system not recognizing any volumes, and moving ZFS drives into a standard QTS NAS has the same result. In both cases the data is intact on the drives but invisible to the operating system.
Do not initialize, format, or create a new storage pool. If QTS cannot recognize drives after a cross-model migration, the mdadm arrays and data volumes are still intact on the raw drives. Initialization destroys them. Power down, remove drives, and image with write-blockers.
RAID Reshape and Expansion Interruptions
RAID reshape changes the stripe layout of an existing array without destroying data. Common reshape operations include adding a new drive to expand capacity, changing chunk size, or converting from RAID 5 to RAID 6. mdadm performs these reshapes by rewriting data stripes from left to right across the array.
The reshape process maintains a checkpoint position in the mdadm superblock. All stripes to the left of the checkpoint have been rewritten to the new geometry. All stripes to the right remain in the old geometry. Under normal operation, the kernel can resume a reshape from the last checkpoint after a reboot.
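As a sketch, the checkpoint and related fields can be pulled out of mdadm --examine output with a few regular expressions. The field labels below are assumptions based on typical mdadm 4.x formatting and can vary between versions:

```python
import re

def parse_examine(text: str) -> dict:
    """Extract the fields relevant to reshape analysis from
    `mdadm --examine` output. Labels are assumptions based on
    common mdadm 4.x formatting and may differ by version."""
    patterns = {
        "events": r"Events\s*:\s*(\d+)",
        "reshape_position": r"Reshape pos'n\s*:\s*(\d+)",
        "raid_devices": r"Raid Devices\s*:\s*(\d+)",
    }
    fields = {}
    for name, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            fields[name] = int(m.group(1))
    return fields

# Hypothetical excerpt of --examine output for illustration
sample = """
     Raid Devices : 5
           Events : 18421
    Reshape pos'n : 1048576 sectors
"""
```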
When Reshape Recovery Fails
1. Superblock corruption at the checkpoint boundary. If power was lost during a superblock write, the reshape_position field may contain an invalid value. The kernel cannot determine where old geometry ends and new geometry begins. Manual analysis of the stripe data is required to find the actual transition point.
2. Drive failure during reshape. If a drive fails mid-reshape, the array has two problems: a missing member and a split geometry. Standard rebuild procedures cannot handle both. The reshape must be resolved first, then the missing member reconstructed from parity.
3. NAS firmware update during reshape. Some NAS firmware updates require a reboot. If a firmware update is applied while a reshape is in progress, the reboot interrupts the reshape. The updated kernel may handle the mdadm superblock format differently than the previous version, preventing automatic resume.
Interrupted reshape failure pattern: When a NAS begins an mdadm reshape to incorporate a new member drive (equivalent to mdadm --grow --raid-devices=N), a power outage during the reshape leaves the array in a split-geometry state. The reshape_position superblock field marks where the conversion stopped. Stripes before that offset follow the new drive count; stripes after it follow the old geometry. No single RAID parameter set can read the entire volume. Recovery requires imaging all drives and stitching the two geometry segments independently.
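A minimal sketch of the stitching logic, assuming a RAID 5 left-symmetric layout (the mdadm default) and ignoring the data_offset changes that real reshapes also introduce:

```python
def chunk_location(data_chunk: int, n_drives: int) -> tuple:
    """Map a logical data chunk to (member_drive, stripe_row) for a
    RAID 5 left-symmetric layout. Illustration only; real recoveries
    must also account for chunk size and data_offset per member."""
    stripe = data_chunk // (n_drives - 1)       # one parity chunk per stripe
    within = data_chunk % (n_drives - 1)
    parity = (n_drives - 1) - (stripe % n_drives)  # parity rotates backwards
    drive = (parity + 1 + within) % n_drives       # data continues after parity
    return drive, stripe

def locate(data_chunk: int, checkpoint_chunk: int,
           old_drives: int, new_drives: int) -> tuple:
    """Pick old or new geometry depending on the reshape checkpoint:
    chunks before the checkpoint use the new drive count, chunks
    after it use the old one."""
    if data_chunk < checkpoint_chunk:
        return chunk_location(data_chunk, new_drives)
    return chunk_location(data_chunk, old_drives)
```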
LVM Migration Failures Between Servers
LVM pvmove is the standard Linux tool for migrating data between physical volumes while a volume group remains online. It is used during server migrations, storage upgrades, and data center relocations. An interrupted pvmove leaves logical extents scattered across source and destination PVs.
pvmove creates a temporary mirror for each logical extent being moved. The migration sequence for each extent is: (1) create mirror on destination PV, (2) synchronize data from source to destination, (3) remove source extent from mirror, (4) update LVM metadata. An interruption at any stage leaves the extent in a different state.
Mirrored (Both Valid)
The extent was fully synchronized but the source copy was not yet removed. Both source and destination contain identical data. Either copy can be used.
Migrated (Destination Only)
The extent was fully migrated and the source copy removed. Only the destination PV contains valid data for this extent. The source PV extent is stale or zeroed.
Unmigrated (Source Only)
The extent was not yet processed by pvmove. Only the source PV contains valid data. The destination PV may have uninitialized or old data at this offset.
LVM records pvmove progress in its metadata area, which is written to every PV in the volume group. Parsing this metadata from all PVs reveals which extents are in which state. The complete logical volume can be reconstructed by reading the correct version of each extent from the appropriate PV.
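The three extent states reduce to a lookup table when deciding which PV to read each extent from. This is a sketch; preferring the destination copy of mirrored extents is an arbitrary choice, since both copies are valid in that state.

```python
def valid_copy(extent_state: str) -> str:
    """Return which PV holds valid data for an extent in the given
    pvmove state. 'mirrored' extents are valid on both PVs; the
    destination is preferred here purely by convention."""
    table = {
        "mirrored": "destination",   # both copies identical
        "migrated": "destination",   # source copy stale or zeroed
        "unmigrated": "source",      # destination not yet written
    }
    return table[extent_state]

def reconstruction_plan(extent_states: list) -> list:
    """Per-extent read plan for rebuilding the logical volume."""
    return [valid_copy(s) for s in extent_states]
```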
Split-Brain Metadata Structure and Resolution
Every migration failure creates some form of split-brain metadata: different drives in the same array contain superblocks that disagree about the array state. mdadm, LVM, and ZFS all refuse to assemble when they detect inconsistent metadata because assembling with the wrong assumptions destroys data.
In mdadm, the key superblock fields are: events (write counter), reshape_position (geometry transition progress), dev_roles (which drive fills which slot), and the write-intent bitmap (dirty-region tracking). When these fields disagree across members, the array cannot self-assemble.
How We Resolve Split-Brain Arrays
1. Image every drive with write-blockers. PC-3000 Portable III or DeepSpar Disk Imager for drives with physical symptoms. ddrescue for mechanically healthy drives. Every member from every system involved in the migration.
2. Dump and compare superblocks. Read the mdadm superblock from each image. Map the event counts, reshape positions, and device roles across all members. Identify which drives were part of the pre-migration state and which reflect post-migration changes.
3. Reconstruct parity using the correct geometry for each stripe range. For interrupted reshapes, apply the old parity calculation above the checkpoint and the new calculation below it. For NAS migrations, prioritize the source array superblocks over destination superblocks.
4. Validate filesystem integrity. After array reconstruction, mount the Btrfs, EXT4, or ZFS filesystem read-only. Verify directory structures, file sizes, and checksums before extracting data to a destination drive.
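Step 2 can be sketched as grouping member drives by event count, assuming each superblock has already been parsed into a per-drive dictionary (for example, from mdadm --examine output):

```python
def classify_members(superblocks: dict) -> dict:
    """Group member drives by event count. The group with the highest
    count reflects the latest consistent state; stragglers were frozen
    at an earlier point in the migration. Input maps drive name to a
    dict of parsed superblock fields containing at least 'events'."""
    groups = {}
    for drive, sb in superblocks.items():
        groups.setdefault(sb["events"], []).append(drive)
    latest = max(groups)
    return {
        "current": sorted(groups[latest]),
        "stale": sorted(d for e, ds in groups.items()
                        if e != latest for d in ds),
    }
```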
What to Do When a Migration Fails
The priority after any failed migration is preserving the current state of every drive involved. Do not retry the migration, do not click repair, and do not reinitialize any storage pool.
1. Power down all involved systems. Both source and destination NAS units, or both source and destination servers. Clean shutdown if the web interface is accessible; hold the power button if not.
2. Do not retry the migration. Retrying writes additional data to the destination and may modify the source. Each retry attempt changes the superblock event counts, making split-brain resolution more complex.
3. Label drives from both systems separately. Mark each drive with its system of origin (source or destination) and bay number. Photograph both NAS units before removing drives.
4. Image all drives from both systems. The source drives typically contain the most complete data. The destination drives may contain partial transfers needed to fill gaps.
Reinitializing a storage pool destroys everything. Some NAS interfaces suggest reinitialization after a failed migration. This creates a new, empty pool. It does not recover the failed migration. If the data matters, send drives for professional NAS recovery before touching the web interface.
Recovery Pricing for Migration Failures
Migration failure recovery is priced per member drive based on the type of work required. Logical recovery (healthy drives with split-brain metadata) starts at the file system recovery tier. Drives with mechanical failures follow standard HDD pricing. No data recovered, no fee.
| Service Tier | Price | Description |
|---|---|---|
| Simple Copy (Low complexity) | $100 | Your drive works; you just need the data moved off it. Functional drive; data transfer to new media. Rush available: +$100. |
| File System Recovery (Low complexity) | From $250 | Your drive isn't recognized by your computer, but it's not making unusual sounds. File system corruption; accessible with professional recovery software but not by the OS. Starting price; final cost depends on complexity. |
| Firmware Repair (Medium complexity; PC-3000 required) | $600–$900 | Your drive is completely inaccessible; it may be detected but shows the wrong size or won't respond. Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access. Standard drives at the lower end; high-density drives at the higher end. |
| Head Swap (High complexity; clean-bench surgery; 50% deposit) | $1,200–$1,500 | Your drive is clicking, beeping, or won't spin; the internal read/write heads have failed. Head stack assembly failure: heads are transplanted from a matching donor drive on a clean bench. 50% deposit required; donor parts are consumed in the repair. |
| Surface / Platter Damage (High complexity; clean-bench surgery; 50% deposit) | $2,000 | Your drive was dropped, has visible damage, or a head crash scraped the platters. Platter scoring or contamination; requires platter cleaning and a head swap. 50% deposit required; donor parts are consumed in the repair. The most difficult recovery type. |
Hardware Repair vs. Software Locks
Our "no data, no fee" policy applies to hardware recovery. We do not bill for unsuccessful physical repairs. If we replace a hard drive read/write head assembly or repair a liquid-damaged logic board to a bootable state, the hardware repair is complete and standard rates apply. If data remains inaccessible due to user-configured software locks, a forgotten passcode, or a remote wipe command, the physical repair is still billable. We cannot bypass user encryption or activation locks.
All tiers: Free evaluation and firm quote before any paid work. No data, no fee on simple copy, file system, and firmware tiers. Head swap and surface damage require a 50% deposit because donor parts are consumed in the attempt.
Target drive: The destination drive we copy recovered data onto. You can supply your own or we provide one at cost. For ultra-high-capacity drives (20TB and above), the target drive costs approximately $400+ due to the large media required. All prices are plus applicable tax.
Frequently Asked Questions
My Synology Migration Assistant failed mid-transfer. Is my data gone?
Can I move drives from a QNAP ARM-based NAS to an Intel-based QNAP?
My RAID reshape was interrupted by a power failure. Can I recover?
LVM pvmove was interrupted during a server migration. Is the data recoverable?
How much does migration failure recovery cost?
What is split-brain metadata and why does it prevent my array from assembling?
Related Recovery Services
Full NAS recovery service overview
Btrfs metadata and mdadm recovery for DSM
Guide for degraded storage pool response
Linux software RAID superblock reconstruction
Recovery after a failed RAID rebuild attempt
Hardware RAID controller failure recovery
Transparent cost breakdown
Migration failed? Data stuck between two systems?
Free evaluation. We image all drives from source and destination, reconstruct split-brain metadata, and extract your data. No data, no fee.