
Enterprise Server Data Recovery Services

We recover failed server arrays from Dell PowerEdge, HP ProLiant, and IBM Power Series hardware. SAS and NVMe drive imaging, RAID controller reconstruction, and VM extraction from VMware ESXi and Hyper-V environments.

Free evaluation. No data = no charge.

Written by Louis Rossmann, Founder & Chief Technician
Updated April 2026 · 14 min read
Call (512) 212-9111 · No data, no recovery fee · Free evaluation, no diagnostic fees
Quick answer

Rossmann Repair Group recovers Dell PowerEdge, HP ProLiant, and IBM Power Series server arrays with RAID 5, 6, and 10 reconstruction, VMware VMFS extraction, Hyper-V VHDX recovery, and Windows Storage Spaces repair. Degraded arrays typically complete virtual destriping 3 to 5 calendar days after intake; catastrophic cases run 7 to 14 days.

Which Server Hardware and RAID Controllers Do We Support?

We recover data from Dell PowerEdge, HP ProLiant, and IBM Power Series servers, including arrays managed by PERC, Smart Array, and SAS HBA controllers. Both SAS and SATA member drives are supported. Enterprise arrays use proprietary RAID controller firmware that standard recovery software cannot interpret; we reconstruct arrays offline using PC-3000 RAID Edition without relying on the original controller hardware.

Dell PowerEdge (PERC H730, H740, and H755 controllers): PERC cards store RAID metadata in proprietary formats on each member drive. When the controller fails or metadata becomes corrupted, we image each SAS or SATA member and reconstruct the array configuration offline using PC-3000 RAID Edition.

HP ProLiant (P408i, P440ar, and E208i Smart Array controllers, plus MR416i-p controllers): Smart Array controllers use proprietary on-disk metadata stored in the RAID Information Sector on each member drive. Newer MR-series controllers use Broadcom MegaRAID DDF-based metadata written to the final sectors of each drive. We capture both metadata formats during imaging and use them to determine stripe size, parity rotation, and member ordering without requiring the original controller hardware.

IBM Power Series (hardware-managed RAID on POWER hardware): IBM servers running AIX or Linux use JFS2, ext4, or XFS filesystems over hardware-managed RAID. We image member drives through SAS HBAs and reconstruct the array layout from captured controller metadata.

SAS and SATA arrays (SAS drives, SATA drives, and SAS HBAs): Enterprise servers commonly use SAS drives, which require different interface hardware than consumer SATA drives. Our lab uses SAS HBAs and PC-3000 SAS support to image these drives while preserving access to non-standard sector sizes (520/528 bytes) often used in enterprise arrays.

PCIe NVMe servers (Intel P5510, Samsung PM9A3, and Micron 7450 drives): PCIe NVMe enterprise drives are common in modern server deployments. These drives connect over PCIe rather than SAS or SATA and require NVMe-aware imaging procedures.

Because that controller firmware cannot be interpreted by standard recovery software, we reverse-engineer each controller's configuration and rebuild the array offline in PC-3000 RAID Edition. This applies to RAID 6 recovery with dual parity, striped mirrors in RAID 10 recovery, and every other common enterprise RAID level.
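
To make the offline reconstruction concrete, here is a minimal Python sketch of how stripe size, parity rotation, and member order determine where each logical chunk of a RAID 5 set lives, and how XOR parity fills in a missing member. The left-symmetric rotation shown is one common scheme, not the layout of any particular controller; real PERC and Smart Array rotations vary, which is exactly why the on-disk metadata has to be recovered rather than guessed.

```python
# Minimal sketch of a left-symmetric RAID 5 layout (one common rotation
# scheme; real controller layouts vary, which is why recovered metadata
# matters more than assumptions).
def chunk_location(chunk: int, members: int) -> tuple[int, int]:
    """Map a logical chunk index to (member_index, stripe_row)."""
    data_per_row = members - 1
    row = chunk // data_per_row
    parity_member = (members - 1 - row) % members        # parity rotates left each row
    member = (parity_member + 1 + chunk % data_per_row) % members
    return member, row

def xor_recover(row_chunks: list[bytes]) -> bytes:
    """Any single missing chunk in a stripe row is the XOR of all the others."""
    out = bytearray(len(row_chunks[0]))
    for chunk in row_chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# Four members: row 0 puts parity on member 3 and data on members 0-2.
print([chunk_location(c, members=4) for c in range(6)])
# -> [(0, 0), (1, 0), (2, 0), (3, 1), (0, 1), (1, 1)]
```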

Enterprise Helium Drive Recovery in Server Arrays

High-capacity enterprise servers (Dell PowerEdge R740xd, HP ProLiant DL380 Gen10 Plus) increasingly ship with hermetically sealed helium drives from the Seagate Exos, WD Ultrastar HC, and Toshiba MG series lines. Because the chamber is sealed, these drives require different mechanical handling than air-filled drives.

When a helium drive fails mechanically, it cannot be serviced like a standard air-breather drive. Breaking the hermetic seal exposes the platters to atmospheric pressure and particulate contamination.

Donor parts must come from the same helium-sealed model family with matching head counts and firmware revisions. All open-drive work is performed on our 0.02µm ULPA-filtered clean bench.

Helium drive recovery pricing starts at $200 for simple data copies and can reach $4,000–$5,000 for mechanical failures requiring head swaps, plus additional helium refill cost.

Dell PERC and LSI MegaRAID DDF Metadata Corruption

Dell PERC H730, H740, and H755 controllers write Disk Data Format (DDF) metadata to member drives; these blocks describe the virtual disk layout.

The DDF blocks record stripe size, parity rotation sequence, and member ordering. When the controller's NVRAM cache fails or a forced rebuild crashes mid-operation, the DDF metadata on the drives desynchronizes with the controller's internal state, and the array presents as a Foreign Configuration.

We do not import foreign configurations through the PERC BIOS. Instead, we image each SAS member drive through write-blocked SAS HBAs and use PC-3000 RAID Edition to parse the surviving DDF blocks from the end of each drive image. The tool extracts the original parity rotation, stripe size, and member ordering to reconstruct the virtual disk offline.
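
For a sense of what that parsing step involves, here is a rough Python sketch that locates the DDF anchor header at the end of a member image. The 0xDE11DE11 signature and big-endian field order come from the published SNIA DDF specification; everything past the signature check (virtual disk records, spare assignments) is omitted here.

```python
# Rough sketch: locate the SNIA DDF anchor header stored in the last block
# of a member image. Real parsing continues into the virtual disk records
# that hold stripe size, parity rotation, and member order.
import struct

DDF_SIGNATURE = 0xDE11DE11   # header signature per the SNIA DDF specification

def find_ddf_anchor(image_path: str) -> int | None:
    with open(image_path, "rb") as f:
        size = f.seek(0, 2)                      # seek to end to get image size
        for sector in (512, 4096):               # try both common logical sector sizes
            f.seek(size - sector)
            (sig,) = struct.unpack(">I", f.read(4))  # DDF fields are big-endian
            if sig == DDF_SIGNATURE:
                return size - sector             # byte offset of the anchor header
    return None
```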

If individual SAS drives require physical repair before imaging (burned TVS diodes, seized spindle motors), that work happens first on our 0.02µm ULPA-filtered clean bench. For arrays containing NVMe enterprise SSDs alongside SAS spinners, the NVMe members are imaged through separate PCIe adapters using PC-3000 SSD.

Enterprise NVMe Firmware Failures in Server Arrays

Modern servers use PCIe NVMe drives as high-speed caching tiers or primary storage. Under sustained write loads, these drives hit controller firmware bugs that SAS spinners don't experience.

Enterprise NVMe drives can enter controller diagnostic states where the drive reports the wrong capacity instead of its actual size. When the controller's Flash Translation Layer (FTL) cannot initialize normally, the host OS cannot access the user data area.

Software cannot scan a drive that reports the wrong capacity to the host operating system. Standard commercial recovery tools do not support many modern enterprise NVMe controllers. Because these drives use hardware-bound encryption, direct NAND extraction yields ciphertext.

The original controller must be brought back to a state where it can decrypt and serve the data.
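
From the host side, this failure mode is easy to spot before any recovery attempt. A triage sketch for Linux follows; the device name and the expected-capacity table are illustrative assumptions, not lab data.

```python
# Triage sketch (Linux): a drive that enumerates but reports a bogus
# capacity is likely stuck in a controller diagnostic state. The device
# name and the expected-capacity table are illustrative assumptions.
from pathlib import Path

EXPECTED_BYTES = {
    # hypothetical entry: a 3.84 TB enterprise NVMe model string
    "SAMSUNG MZQL23T8HCLS-00A07": 3_840_755_982_336,
}

def reported_capacity(dev: str) -> int:
    # /sys/block/<dev>/size counts 512-byte units regardless of LBA format
    return int(Path(f"/sys/block/{dev}/size").read_text()) * 512

def capacity_looks_sane(dev: str) -> bool:
    model = Path(f"/sys/block/{dev}/device/model").read_text().strip()
    expected = EXPECTED_BYTES.get(model)
    return expected is not None and abs(reported_capacity(dev) - expected) < expected // 100

# capacity_looks_sane("nvme0n1") returning False on a multi-terabyte part
# that reports a few megabytes means the FTL never initialized; software
# scanners that trust the reported capacity have nothing to scan.
```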

For consumer-grade PCIe Gen4 NVMe drives (such as those using the Phison PS5016-E16 controller) occasionally repurposed in entry-level server caches, PCIe link training failures can prevent the drive from enumerating on the bus entirely; PC-3000 SSD's Phison utility handles the low-level initialization sequence to bring the drive online in a recoverable state. NVMe enterprise SSDs in server arrays follow our firmware corruption recovery workflow, with firmware recovery priced at $900–$1,200.

SATA SSD Controller Failures in Legacy Server Arrays

Older enterprise servers and storage appliances still use SATA SSDs for boot volumes, read caches, or tiered storage. Silicon Motion SM2259XT controllers can enter a safe mode where corrupted wear-leveling tables cause the drive to report 0 bytes capacity to the host. Marvell 88SS1074 controllers fail to load firmware from NAND and lock into a BSY state (stuck initialization loop) where the drive never completes its power-on self-test.

Both failure modes block any software-level recovery tool. We use PC-3000 SSD's controller-specific loader modules to work around the corrupted firmware, access the raw NAND pages, and rebuild the FTL mapping.

For SM2259XT drives, the PC-3000 loader reads past the misconfigured capacity boundary to access the full NAND contents. For Marvell 88SS1074 drives, the VanGogh family utility handles the stuck BSY state. SATA SSD recovery in server arrays starts at $200 for drives that read normally.

How Do We Recover Virtual Machines and SAN Storage?

We extract .vmdk and .vhdx virtual disk files from failed RAID arrays running VMware ESXi and Microsoft Hyper-V. When a SAN controller or firmware fails, the data still resides on the physical drives; we image them through SAS HBAs and reconstruct the LUN layout and filesystem from raw images.

VMware ESXi and VMFS recovery
VMFS (VMware Virtual Machine File System) is a clustered filesystem designed for virtualization. When a VMFS volume becomes corrupted or the underlying RAID array degrades, we reconstruct the array from member drive images and parse the VMFS metadata to locate and extract individual .vmdk files for each virtual machine.
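
As a small illustration of how a VMFS volume is identified inside a reconstructed array image, the sketch below checks for the volume-info signature. The 0xc001d00d magic and the 1 MiB offset are taken from the open-source vmfs-tools project and should be treated as assumptions rather than official VMware documentation.

```python
# Sketch: detect a VMFS volume header inside a reconstructed array image.
# Magic and offset follow the open-source vmfs-tools project (assumptions,
# not official VMware documentation).
import struct

VMFS_VOLINFO_OFFSET = 0x100000        # volume info begins 1 MiB into the volume
VMFS_VOLINFO_MAGIC = 0xC001D00D

def is_vmfs_volume(image_path: str, partition_offset: int = 0) -> bool:
    with open(image_path, "rb") as f:
        f.seek(partition_offset + VMFS_VOLINFO_OFFSET)
        (magic,) = struct.unpack("<I", f.read(4))   # stored little-endian on disk
    return magic == VMFS_VOLINFO_MAGIC
```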
Hyper-V and .vhdx recovery
Microsoft Hyper-V stores virtual machines as .vhdx files on NTFS or ReFS volumes. When the host server's RAID array fails, we image the members, rebuild the array, and extract the .vhdx files intact. Individual files within the virtual disk can then be recovered as needed.
Traditional SANs
Dell EMC Unity, NetApp FAS, and HPE Nimble storage arrays present LUNs over iSCSI or Fibre Channel to host servers. When the SAN controller or firmware fails, the data still resides on the physical drives. We remove drives from the SAN chassis, image them through SAS HBAs, and reconstruct the LUN layout and filesystem from the raw images.
Virtual SANs (VMware vSAN)
vSAN distributes VM storage objects across local disks in an ESXi cluster. A multi-node failure or metadata corruption can make the entire vSAN datastore inaccessible. We image the member drives from each affected node and reconstruct the distributed object layout to recover the underlying .vmdk files.
Software-Defined Storage (SDS)
When the software layer (ZFS, Ceph, GlusterFS, or similar) fails but the underlying drives are physically intact, recovery focuses on imaging the drives and reconstructing the storage pool metadata. The physical drives contain all the data; the software layer is the map. We rebuild that map from the raw images. For ZFS-specific failures, our ZFS pool recovery guide covers pool states, TXG rollbacks, and safe diagnostic steps.
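
For ZFS in particular, "rebuilding the map" starts with the vdev labels. The sketch below walks the uberblock ring in label 0 of a member image and reports the newest transaction group; the 256 KiB label layout and the 0x00bab10c magic follow the documented ZFS on-disk format, while the fixed 1 KiB uberblock spacing is a simplification (real spacing depends on the pool's ashift).

```python
# Simplified sketch: report the newest transaction group (TXG) recorded in
# vdev label 0 of a ZFS member image. Fixed 1 KiB uberblock spacing is a
# simplification; actual spacing depends on the pool's ashift.
import struct

UB_RING_OFFSET = 128 * 1024   # uberblock array fills the upper half of the 256 KiB label
UB_RING_SIZE = 128 * 1024
UB_MAGIC = 0x00BAB10C         # uberblock magic, stored in the pool's native endianness

def newest_txg(image_path: str) -> int | None:
    with open(image_path, "rb") as f:
        f.seek(UB_RING_OFFSET)
        ring = f.read(UB_RING_SIZE)
    best = None
    for off in range(0, UB_RING_SIZE, 1024):
        for endian in ("<", ">"):                 # pool may be little- or big-endian
            magic, _version, txg = struct.unpack_from(endian + "QQQ", ring, off)
            if magic == UB_MAGIC:
                best = txg if best is None else max(best, txg)
    return best   # None means label 0 holds no valid uberblock
```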
Microsoft SQL Server and database extraction
After reconstructing the RAID array and parsing the host filesystem, we extract Microsoft SQL Server .mdf and .ldf files directly from the recovered volume. If a sudden RAID controller cache failure caused torn pages in the database, the transaction log (.ldf) often contains enough state to repair tables without requiring a full backup. PostgreSQL, MySQL, and Oracle tablespace files follow the same extraction workflow. Our database recovery service covers application-layer corruption in detail.
ERP and accounting system databases
When a RAID controller cache fails during active writes, enterprise databases supporting ERPs (Microsoft Dynamics, SAP) or accounting systems often suffer from torn pages: 8KB SQL Server page boundaries interrupted mid-write. After array reconstruction, we extract the damaged .mdf and .ldf files using PC-3000 Data Extractor, repair corrupted page headers, and use the transaction log chain to force-attach the database in suspect state. DBCC CHECKDB then recovers critical table rows that automated software skips. Our SQL Server recovery service covers application-layer database corruption in detail.
Windows Server Storage Spaces (S2D)
When a Windows Server "Virtual Disk" goes Offline or Detached in Storage Spaces due to stale metadata or dropped members, running the Repair-VirtualDisk PowerShell cmdlet on degraded pools can cause irreversible data loss by forcing parity recalculation across surviving members. We image the physical pool members and reconstruct the Storage Spaces metadata using PC-3000 Data Extractor, treating the pool as a software-defined RAID without relying on the original Windows Server host.
What Makes Enterprise RAID Recovery Different?

Enterprise server recovery differs from consumer NAS recovery in several ways: proprietary RAID controller metadata, SAS drive interfaces, multi-layer storage architectures that stack RAID, virtualization, and SAN protocols on top of one another, and the sheer number of members to image. PC-3000 RAID Edition is designed to interpret these proprietary formats and reconstruct arrays without the original controller.

Proprietary controller metadata: Enterprise RAID controllers from Dell (PERC), HP (Smart Array), and LSI/Broadcom write proprietary on-disk metadata that generic recovery tools cannot parse, while consumer NAS devices from Synology or QNAP use Linux mdadm or Btrfs RAID, which store metadata in well-documented formats. PC-3000 RAID Edition is designed to interpret the proprietary formats and reconstruct arrays without the original controller.

SAS drive handling: Enterprise servers use SAS drives with dual-port connectivity and controller behaviors that do not present cleanly through a standard SATA port. Our imaging hardware includes SAS HBAs that communicate at the drive's native protocol, preserving access to all addressable sectors, including those outside the standard SATA command set.

Multi-layer storage stacks: A typical enterprise failure might involve a RAID array presenting a LUN over iSCSI to an ESXi host running VMFS with multiple VMs, each containing its own filesystem. Recovery requires reassembling every layer of that stack, from raw disk images up through the guest filesystem.

Scale: Enterprise arrays routinely contain 8, 12, or 24 drives, and each member must be imaged individually before array reconstruction can begin. The imaging phase alone can take days for large arrays with degraded members, and the reconstruction phase must correctly handle the metadata from every drive in the set.

Controller Reconstruction

PERC, Smart Array, and LSI controller metadata is parsed from drive images and used to reconstruct stripe maps, parity rotation, and member ordering offline.

VM Extraction

After array reconstruction, VMFS and ReFS/NTFS volumes are parsed to locate .vmdk and .vhdx files. Individual VMs can be delivered separately.

Board-Level Repair

SAS drives with burned PCB components or TVS diode failures are repaired at the component level to restore readability before imaging begins.

Why Do Forced RAID Rebuilds Fail on Enterprise Arrays?

When an enterprise server drops a drive from a RAID 5 or RAID 6 array, the standard IT response is to hot-swap the failed member and initiate a rebuild. On modern high-capacity drives (8TB, 12TB, 20TB+), this is the single most common cause of total data loss in enterprise environments.

During a rebuild, the RAID controller must read every addressable sector on every surviving drive to recalculate parity. That read load is highest on large arrays because every surviving member is stressed at the same time.

The probability of encountering at least one URE during that rebuild is not trivial, and when one occurs, the rebuild fails. The controller drops a second drive offline, and the array transitions from degraded to destroyed.
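
The arithmetic behind that risk is worth spelling out. A sketch, assuming the vendor-quoted unrecoverable read error rates of 1 in 10^14 bits (desktop-class) and 1 in 10^15 bits (a typical enterprise rating), with errors treated as independent:

```python
# Probability that a rebuild hits at least one URE while reading every
# sector of every surviving member. Assumes independent bit errors at the
# vendor-quoted rate; aging drives do worse, so treat these as floors.
import math

def p_ure_during_rebuild(drive_tb: float, survivors: int, ure_per_bit: float) -> float:
    bits_read = drive_tb * 1e12 * 8 * survivors
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# Six-member RAID 5 of 12 TB drives: one failed, five survivors read in full.
print(f"{p_ure_during_rebuild(12, 5, 1e-14):.0%}")   # ~99% at 1 in 10^14
print(f"{p_ure_during_rebuild(12, 5, 1e-15):.0%}")   # ~38% at 1 in 10^15
```

At the 1-in-10^14 rating, a full-read rebuild across five surviving 12TB members is more likely to fail than to succeed; even at enterprise ratings the risk is far from negligible.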

This is why we do not rely on the original controller hardware to recover data. We image each member drive individually using PC-3000 in read-only mode through SAS HBAs, then reconstruct the array offline using PC-3000 RAID Edition.

This approach reads each sector exactly once, maps bad sectors without triggering a parity recalculation, and preserves the original data layout. For RAID 5 failures specifically, see our RAID 5 rebuild failure recovery guide. For general failed RAID rebuilds, the imaging-first approach is the same.

Do not force-import a foreign configuration. If your Dell server shows a PERC foreign configuration prompt after a controller swap or firmware update, importing the configuration can overwrite the on-disk metadata that maps drive order, stripe size, and parity rotation. Power off the server and contact a recovery lab before pressing any buttons.

How Much Does Enterprise Server Recovery Cost?

Enterprise server recovery follows the same transparent pricing model as every other service we offer: per-drive imaging based on each drive's condition, plus an array reconstruction fee for the virtual destripe. The fee depends on RAID level, member count, filesystem type, and whether stripe parameters must be detected from raw data or captured from surviving controller metadata.

Per-Drive Imaging Cost (Air-Filled SAS & SATA Spinners)

  1. Low complexity

    Simple Copy

    Your drive works, you just need the data moved off it

    Functional drive; data transfer to new media

    Rush available: +$100

    $100

    3-5 business days

  2. Low complexity

    File System Recovery

    Your drive isn't recognized by your computer, but it's not making unusual sounds

    File system corruption. Accessible with professional recovery software but not by the OS

    Starting price; final depends on complexity

    From $250

    2-4 weeks

  3. Medium complexity

    Firmware Repair

    Your drive is completely inaccessible. It may be detected but shows the wrong size or won't respond

    Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access

    CMR drive: $600. SMR drive: $900.

    $600–$900

    3-6 weeks

  4. High complexity

    Most Common

    Head Swap

    Your drive is clicking, beeping, or won't spin. The internal read/write heads have failed

    Head stack assembly failure. Transplanting heads from a matching donor drive on a clean bench

    50% deposit required. CMR: $1,200-$1,500 + donor. SMR: $1,500 + donor.

    $1,200–$1,500

    4-8 weeks

  5. High complexity

    Surface / Platter Damage

    Your drive was dropped, has visible damage, or a head crash scraped the platters

    Platter scoring or contamination. Requires platter cleaning and head swap

    50% deposit required. Donor parts are consumed in the repair. Most difficult recovery type.

    $2,000

    4-8 weeks

Hardware Repair vs. Software Locks

Our "no data, no fee" policy applies to hardware recovery. We do not bill for unsuccessful physical repairs. If we replace a hard drive read/write head assembly or repair a liquid-damaged logic board to a bootable state, the hardware repair is complete and standard rates apply. If data remains inaccessible due to user-configured software locks, a forgotten passcode, or a remote wipe command, the physical repair is still billable. We cannot bypass user encryption or activation locks.

No data, no fee. Free evaluation and firm quote before any paid work. Full guarantee details. Head swap and surface damage require a 50% deposit because donor parts are consumed in the attempt.

Rush fee
+$100 rush fee to move to the front of the queue
Donor drives
Donor drives are matching drives used for parts. Typical donor cost: $50–$150 for common drives, $200–$400 for rare or high-capacity models. We source the cheapest compatible donor available.
Target drive
The destination drive we copy recovered data onto. You can supply your own or we provide one at cost plus a small markup. For larger capacities (8TB, 10TB, 16TB and above), target drives cost $400+ extra. All prices are plus applicable tax.

Per-Drive Imaging Cost (Helium SAS & SATA Spinners)

Enterprise Exos, Ultrastar HC, WD Gold, Toshiba MG, and similar sealed 12TB+ members use helium HDD pricing when firmware complexity or mechanical work requires helium-specific handling.

  1. Low complexity

    Simple Copy

    Your helium drive works, you just need the data moved off it

    Functional drive; data transfer to new media

    Rush available: +$100

    $200

    3-5 business days

  2. Low complexity

    File System Recovery

    Your helium drive isn't recognized by your computer, but it's not making unusual sounds

    File system corruption. Accessible with professional recovery software but not by the OS

    Starting price; final depends on complexity

    From $600

    2-4 weeks

  3. Medium complexity

    Most Common

    Firmware Repair

    Your helium drive is completely inaccessible. It may be detected but shows the wrong size or won't respond

    Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access

    Helium drive firmware recovery is more complex due to sealed chamber architecture

    $900–$1,200

    3-6 weeks

  4. High complexity

    Head Swap

    Your helium drive is clicking, beeping, or won't spin. The internal read/write heads have failed

    Head stack assembly failure. Transplanting heads from a matching helium donor drive on a clean bench. Helium refill required.

    50% deposit required (usually $1,100 non-refundable deposit). Helium cost ($400-$800) and donor drive cost additional.

    $3,000–$4,500

    4-8 weeks

  5. High complexity

    Surface / Platter Damage

    Your helium drive was dropped, has visible damage, or a head crash scraped the platters

    Platter scoring or contamination. Requires platter cleaning, head swap, and helium refill

    50% deposit required. Helium cost ($400-$800) and donor drive cost additional. Most difficult recovery type.

    $4,000–$5,000

    4-8 weeks

No data, no fee. Free evaluation and firm quote before any paid work. Full guarantee details. Head swap and surface damage require a 50% deposit because donor parts and helium are consumed in the attempt.

Rush fee
+$100 rush fee to move to the front of the queue
Helium cost
Helium cost: $400-$800 additional for head swap and surface damage tiers. This covers the helium refill required after opening the sealed chamber.
Donor drives
Helium donor drives must be an exact match. Typical donor cost: $200–$600 depending on model and availability, plus helium refill cost ($400–$800) required after opening the sealed chamber.
Target drive
The destination drive we copy recovered data onto. You can supply your own or we provide one at cost plus a small markup. For larger capacities (8TB, 10TB, 16TB and above), target drives cost $400+ extra. All prices are plus applicable tax.

Per-Drive Imaging Cost (NVMe Enterprise SSDs)

  1. Low complexity

    Simple Copy

    Your NVMe drive works, you just need the data moved off it

    Functional drive; data transfer to new media

    Rush available: +$100

    $200

    3-5 business days

  2. Low complexity

    File System Recovery

    Your NVMe drive isn't showing up, but it's not physically damaged

    File system corruption. Visible to recovery software but not to the OS

    Starting price; final depends on complexity

    From $250

    2-4 weeks

  3. Medium complexity

    Circuit Board Repair

    Your NVMe drive won't power on or has shorted components

    PCB issues: failed voltage regulators, dead PMICs, shorted capacitors

    May require a donor drive (additional cost)

    $600–$900

    3-6 weeks

  4. Medium complexity

    Most Common

    Firmware Recovery

    Your NVMe drive is detected but shows the wrong name, wrong size, or no data

    Firmware corruption: ROM, modules, or system files corrupted

    Price depends on extent of bad areas in NAND

    $900–$1,200

    3-6 weeks

  5. High complexity

    PCB / NAND Swap

    Your NVMe drive's circuit board is severely damaged and requires NAND chip transplant to a donor PCB

    NAND swap onto donor PCB. Precision microsoldering and BGA rework required

    50% deposit required; donor drive cost additional

    $1,200–$2,500

    4-8 weeks

No data, no fee. Free evaluation and firm quote before any paid work. Full guarantee details. NAND swap requires a 50% deposit because donor parts are consumed in the attempt.

Rush fee
+$100 rush fee to move to the front of the queue
Donor drives
A donor drive is a matching SSD used for its circuit board. Typical donor cost: $40–$100 for common models, $150–$300 for discontinued or rare controllers.
Target drive
The destination drive we copy recovered data onto. You can supply your own or we provide one at cost plus a small markup. All prices are plus applicable tax.

Array Reconstruction Fee

Quoted per array on top of per-drive imaging costs. The fee depends on RAID level, member count, filesystem type, and whether stripe parameters must be detected from raw data or captured from surviving controller metadata. A 4-drive array where all members read normally costs less than a 12-drive array with two mechanically failed members.

Rush option: a +$100 per-drive fee moves the array to the front of the imaging queue. For a 6-drive array, the rush fee applies per drive ($100 x 6 = $600).

No Data = No Charge: If we recover nothing from your server array, you owe nothing. Free evaluation, no obligation.

Published pricing: We publish our per-drive pricing because the work is based on the condition of each drive and the array reconstruction required.

We sign NDAs for corporate data recovery. All drives remain in our Austin lab under chain-of-custody documentation. We do not sign BAAs, but we are willing to discuss your specific data-handling requirements before work begins.

How Should Businesses Ship Server Arrays for Recovery?

Multi-drive server arrays require careful disassembly and packaging before shipping. Do not ship the entire server chassis. Remove each drive from its hot-swap bay, label it with its slot position (bay 0, bay 1, etc.), and ship each drive individually in an anti-static bag with at least 2 inches of closed-cell foam.

  1. Label slot positions before removal. Photograph the front panel with drives seated, then mark each drive caddy with its bay number using a label or marker on the caddy handle. Slot order determines stripe mapping in PERC, Smart Array, and LSI controllers.
  2. Package drives in anti-static bags with foam padding. Each drive ships in its own anti-static bag, wrapped in at least 2 inches of closed-cell foam on all sides. Standard bubble wrap transmits shock; closed-cell foam absorbs it.
  3. Ship via FedEx or UPS with tracking and insurance. We accept standard parcel carriers for most server arrays. For full-rack shipments exceeding parcel weight limits, use a freight carrier. Declare the shipment value for insurance purposes. See our mail-in data recovery page for packaging details and our shipping address.
  4. Include array configuration notes. Write down the RAID level, stripe size (if known), filesystem type (VMFS, NTFS, ReFS, XFS, ext4), and any recent events (power loss, rebuild attempt, firmware update) that preceded the failure. This context saves diagnostic time and reduces your total cost.

What Should IT Do in the First 30 Minutes After a Server Outage?

Power the server down, refuse rebuild or repair commands, photograph bay order, and preserve controller error screens before the chassis is touched. Those four artifacts cut hours or days off Dell PERC, Smart Array, VMware VMFS, and Storage Spaces recovery because they preserve the original stripe map and failure context.

Foreign Configuration or Missing Disks prompt: Power the server off, photograph the controller screen, label every bay, and do not import the configuration. That preserves the original metadata path for Dell PERC foreign configuration and other RAID recovery cases.

Storage Spaces or ReFS virtual disk goes Offline: Do not run Repair-VirtualDisk or chkdsk /f. Export the error text and record every pool member serial number before removal. We rebuild the virtual disk from member images, then parse ReFS metadata and any attached .vhdx files from clones instead of the live host.

VMware datastore disappears after power loss: Stop rescans, note the datastore names, and list which VMs matter first before the drives leave the rack. VMFS recovery moves faster when the target datastore and VM names are known up front. See our VMware ESXi recovery page for the filesystem-specific workflow.

SQL Server goes into suspect state after a cache event: If the volume is still stable in read-only mode, save the SQL error log and list the affected .mdf and .ldf names. If storage is unstable, power the box off instead of attaching the database again. We recover the array first, then extract the database files for database recovery work on cloned media.

Tell us which datastore, LUN, or database matters first. Once the stripe map is confirmed, we can prioritize a VMware VMFS volume, a ReFS volume, or a SQL file set before lower-priority shares. Package the drives using our mail-in instructions and include that priority list in the box.

What Happens After the Drives Reach the Austin Lab?

Once the drives arrive in Austin, we verify bay order, clone each member in read-only mode through SAS HBAs, capture controller metadata, and build the array virtually before file extraction starts. That keeps writes off the evidence and lets us target the highest-value VM, LUN, or database first.

  1. Intake photos and slot-map verification. We compare the drive labels, intake photos, and your notes before a member is removed from its recorded bay order. That preserves the stripe map when a PERC or Smart Array controller wrote stale metadata to one member but not the rest.
  2. Read-only imaging of every member. SAS and SATA disks are cloned through dedicated SAS HBAs, PC-3000, or a DeepSpar Disk Imager. We log weak heads, unreadable LBAs, and dropped links during the image instead of learning about them after a rebuild has already started.
  3. Controller and pool metadata capture. We pull DDF blocks, HP Smart Array metadata, Storage Spaces headers, GPT maps, and filesystem superblocks from the images before any destriping attempt. Those structures tell us stripe size, parity rotation, member order, and where the live volume begins.
  4. Virtual destriping and filesystem parsing. Once the map is confirmed, we reconstruct the virtual disk in software and parse VMFS, ReFS, NTFS, XFS, ext4, or ZFS from the working copies. That is where a server recovery case separates into VM extraction, database extraction, or raw file export.
  5. Priority extraction and engineer update. If you told us one VM, one LUN, or one SQL database matters most, that target moves first. The same engineer doing the imaging and RAID reconstruction gives the update, not a sales relay.

If you want the broader chain from intake through return shipment, our recovery process page shows how the lab documents each step around diagnostics, imaging, and data return.

Production Downtime and Recovery Prioritization

A failed database server or virtualization host halts access to production applications, internal tools, and customer-facing services. The financial impact compounds by the hour.

We prioritize enterprise server cases that involve active production outages and can begin imaging within hours of receiving drives. SAS drives are imaged through dedicated SAS HBAs; we do not bottleneck enterprise media through consumer SATA adapters.

Confidentiality, NDAs, and Chain of Custody

We execute bilateral NDAs for enterprise clients, law firms, and financial institutions before any drive is opened or powered on. Every case maintains chain-of-custody documentation from intake through return shipment.

Drives remain in our Austin lab on isolated diagnostic networks throughout the recovery process. We offer certificates of media destruction upon request after data is delivered.

We do not sign Business Associate Agreements (BAAs). If your organization requires BAA coverage, we are happy to discuss the specifics of your data-handling requirements before work begins.

Recovery Time and Recovery Point Objectives

  1. Refuse writes to the source array. RPO protection starts by keeping every write off the failed array.
  2. Image every surviving member. Every surviving member is imaged sector by sector through a SAS HBA using a DeepSpar Disk Imager or PC-3000 Portable III before any reconstruction is attempted (see the sketch after this list).
  3. Avoid controller-led rebuilds. A controller-led rebuild on a degraded array reads every sector on every survivor. If one member hits an Unrecoverable Read Error, the controller writes corrupt parity back to the stripe and the last-consistent on-disk state of the array is gone.
  4. Work from clones. Imaging first freezes that state on working copies so all subsequent analysis happens on clones, never on the evidence.
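
A toy version of the read-only pass in step 2, to show the shape of the workflow. Real imaging runs on hardware imagers (PC-3000, DeepSpar) that control drive firmware, read timeouts, and per-head maps well below anything the operating system exposes, so treat this strictly as an illustration of the principle that nothing is ever written back to the source:

```python
# Illustration only: read-only, sector-by-sector imaging with a bad-region
# log. Nothing is ever opened for writing on the source side.
import os

SECTOR = 4096   # assumed granularity; enterprise media may use 512/520/528/4096

def image_member(source: str, dest: str, log_path: str) -> None:
    src = os.open(source, os.O_RDONLY)            # evidence stays read-only
    size = os.lseek(src, 0, os.SEEK_END)
    with open(dest, "wb") as out, open(log_path, "w") as log:
        for offset in range(0, size, SECTOR):
            length = min(SECTOR, size - offset)
            os.lseek(src, offset, os.SEEK_SET)
            try:
                out.write(os.read(src, length))
            except OSError:                        # unreadable region on the source
                out.write(b"\x00" * length)        # keep the image's geometry intact
                log.write(f"unreadable: bytes {offset}-{offset + length - 1}\n")
    os.close(src)
```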

RTO ranges depend on the failure mode, not on a marketing tier.

Degraded-but-accessible RAID 5 or RAID 6 array: 3 to 5 calendar days after intake. Every member still reads, and members are imaged in parallel through dedicated SAS HBAs before virtual destriping begins.

Two or more failed members, Dell PERC NVRAM mismatch, or helium drive mechanical failure: 7 to 14 days. Mechanical work on helium enterprise drives extends the timeline by donor-drive lead time; head stacks for Seagate Exos and WD Ultrastar HC families are kept in stock but not always in a matching firmware revision.

PCB-level repairs on controller boards use a Hakko FM-2032 on an FM-203 base for TVS diode and ROM chip replacement when the board shorts the 12V rail. A $100 rush fee is available to move the array to the front of the imaging queue.

Forensic Chain of Custody for Litigation Cases

Law firm, internal-investigation, and e-discovery cases receive a numbered custody log from intake photo through return shipment.

Source drives are imaged behind a hardware write-blocker so the physical evidence cannot be mutated by the host OS or by the imaging tool itself, and SHA-256 hashes of the working image are recorded at clone completion and re-verified after each analysis pass.

Custody artifacts are delivered alongside the recovered data so counsel can match image hashes against intake hashes in deposition, and the original source drives are returned untouched or held in bonded storage at the case owner's instruction.
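
The hashing discipline itself is simple enough to show directly. A sketch of the record-then-reverify step using Python's standard library; the file name is a placeholder for a working image in a case:

```python
# Record a SHA-256 when the clone completes, then re-verify after each
# analysis pass. The file name is a placeholder for a case's working image.
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

intake_hash = sha256_of("member0.img")        # recorded at clone completion
# ... analysis passes run against the working copy ...
assert sha256_of("member0.img") == intake_hash, "working image changed since intake"
```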

Direct-Engineer Communication for IT Teams

There is no account manager between your IT team and the engineer handling the array. The technician running SAS HBA passthrough, parsing the DDF or HP Smart Array metadata, and virtually destriping in PC-3000 RAID Edition is the same person answering your phone calls and email threads.

Status updates cite the actual work performed (members imaged, bad-sector maps generated, parity rotation detected, stripe size confirmed), not a generic ticket state a sales rep is relaying from a back office. If the array cannot be recovered, our no data, no charge guarantee applies.

1. Intake. Every package is opened on camera. Your drive gets a serial number tied to your ticket before we touch anything else.

2. Diagnosis. Chris figures out what's actually wrong: firmware corruption, failed heads, seized motor, or something else. You get a quote based on the problem, not the "value" of your data.

3. Recovery. Firmware work happens on the PC-3000. Head swaps and platter surgery happen on our ULPA-filtered clean bench. Nothing gets outsourced.

4. Return. Original drive plus recovered data on new media. FedEx insured, signature required.

Data Recovery Standards & Verification

Our Austin lab operates on a transparency-first model. We use industry-standard recovery tools, including PC-3000 and DeepSpar, combined with strict environmental controls to make sure your hard drive is handled safely and properly. This approach allows us to serve clients nationwide with consistent technical standards.

Open-drive work is performed in a ULPA-filtered laminar-flow bench, validated to 0.02 µm particle count, verified using TSI P-Trak instrumentation.

Transparent History

Serving clients nationwide via mail-in service since 2008. Our lead engineer holds PC-3000 and HEX Akademia certifications for hard drive firmware repair and mechanical recovery.

Media Coverage

Our repair work has been covered by The Wall Street Journal and Business Insider, with CBC News reporting on our pricing transparency. Louis Rossmann has testified in Right to Repair hearings in multiple states and founded the Repair Preservation Group.

Aligned Incentives

Our "No Data, No Charge" policy means we assume the risk of the recovery attempt, not the client.

We believe in proving standards rather than just stating them. We use TSI P-Trak instrumentation to verify that clean-air benchmarks are met before any drive is opened.

See our clean bench validation data and particle test video
What Specialized Enterprise Recovery Services Are Available?

Detailed technical documentation on specific enterprise recovery scenarios covers VMFS datastore recovery, .vmdk extraction, snapshot chain repair, VHD/VHDX file recovery, ZFS pool failures, Ceph OSD recovery, WAFL filesystem parsing, RAID-DP diagonal parity reconstruction, ReFS B+ tree metadata reconstruction, SAN LUN reconstruction, and enterprise SAS drive imaging via SAS HBAs and PC-3000.

The dedicated pages below carry the full technical detail for each scenario.

Which Recovery Page Matches Your Stack?

The overview page covers the decision tree. If your outage centers on one storage layer such as VMFS, ReFS, SAN LUN mapping, ZFS, or a failing SAS member, the deeper subpages carry the filesystem, controller, and extraction detail that does not fit on a single overview.

VMware ESXi or vSAN datastore failure: we reconstruct VMFS metadata, datastore layout, snapshot chains, and the target .vmdk set first. Best next page: VMware ESXi recovery.

Hyper-V, Cluster Shared Volume, or ReFS outage: we reconstruct ReFS metadata trees, CSV layout, and the affected .vhdx chain first. Best next pages: Hyper-V recovery and ReFS recovery.

Proxmox VE, Ceph, or ZFS-backed virtualization: we reconstruct ZFS pool metadata, Ceph object placement, and qcow2 or raw guest disks first. Best next page: Proxmox VE recovery.

TrueNAS pool import failure or degraded vdev: we reconstruct ZFS labels, uberblocks, MOS metadata, and the dataset tree behind the failed pool first. Best next page: TrueNAS recovery.

SAN controller failure or offline LUN: we reconstruct LUN maps, 520-byte sector members, and the host filesystem sitting on top of the virtual disk first. Best next page: SAN storage recovery.

SQL Server or ERP data after an array crash: we reconstruct the RAID first, then the .mdf and .ldf files that need database-level repair. Best next page: SQL Server recovery.

Physical SAS drive failure inside the array: member imaging, firmware access, and any board-level or clean-bench work needed before the array can be rebuilt safely. Best next page: SAS hard drive recovery.

VMware ESXi Recovery

VMFS datastore recovery, .vmdk extraction, snapshot chain repair, and vSAN distributed datastore reconstruction. Covers VMFS5 and VMFS6 metadata structures.

Microsoft Hyper-V Recovery

VHD/VHDX file recovery, Cluster Shared Volume reconstruction, checkpoint chain repair, and differencing disk merge from failed Hyper-V hosts.

Proxmox VE Recovery

ZFS pool failures on Proxmox, Ceph OSD recovery, qcow2/raw disk image extraction, and LXC container restoration from degraded clusters.

TrueNAS / FreeNAS Recovery

ZFS pool import failures, vdev degradation, scrub errors, GELI encryption, and dataset recovery from TrueNAS CORE and SCALE systems.

Dell EMC PowerVault Recovery

ME4/ME5 ADAPT erasure coding, MD-series RAID, NX appliance recovery. Quarantine workaround, SAS drive imaging, virtual disk group reconstruction.

NetApp FAS & ONTAP Recovery

WAFL filesystem parsing, RAID-DP diagonal parity reconstruction, RAID-TEC triple parity. FAS2750, FAS8700, FAS9500, AFF A-Series, E-Series SANtricity.

ReFS File System Recovery

ReFS B+ tree metadata reconstruction, Storage Spaces Direct cluster failures, Hyper-V VHDX extraction, and deduplication recovery from Windows Server 2012 R2 through 2025.

SAN Storage Recovery

Dell EMC Unity/VNX, NetApp FAS, HPE Nimble/3PAR, and Pure Storage recovery. LUN reconstruction, controller-independent imaging, 520-byte sector handling.

SAS Hard Drive Recovery

Enterprise SAS drive imaging via SAS HBAs and PC-3000. Dual-port 12Gbps, 520-byte sectors, firmware zone repair. Seagate Exos, WD Ultrastar, HGST.

Enterprise Server Recovery: Common Questions

Can you recover data from a Dell PowerEdge with a failed PERC controller?
Yes. We reverse-engineer PERC controller metadata and reconstruct the array offline from member drive images using PC-3000 RAID Edition, without needing the original controller.
Do you recover VMware VMFS volumes?
Yes. We reconstruct VMFS volumes from the underlying RAID array images and extract individual .vmdk virtual disk files for each VM. Hyper-V .vhdx recovery follows the same imaging-first approach.
Can you image SAS drives, or only SATA?
We image both. SAS (Serial Attached SCSI) drives require different interface hardware than SATA. Our lab uses SAS HBAs and PC-3000 with SAS support to image enterprise drives through the native SAS protocol.
How is enterprise server recovery priced?
Same transparent model as all our services: per-drive imaging fee based on drive condition, plus an array reconstruction fee. No data recovered means no charge. We do not add opaque emergency surcharges.
Can you handle sensitive corporate data?
We sign NDAs and maintain chain-of-custody documentation for every case. Drives remain in our Austin lab throughout the process. We do not sign BAAs, but we are happy to discuss your data-handling requirements on a case-by-case basis.
Why shouldn't our IT department force a RAID rebuild on the degraded array?
A RAID rebuild reads every sector on every surviving drive to recalculate parity. On high-capacity enterprise disks, the probability of hitting an Unrecoverable Read Error (URE) during this process is high. When a URE occurs, the rebuild fails, a second drive drops offline, and the array transitions from degraded to destroyed. We image each member in read-only mode using PC-3000 through SAS HBAs, then reconstruct the array in software without triggering destructive parity writes.
How do we ship a multi-drive server array for recovery?
Remove each drive from its hot-swap bay, label it with its slot position (bay 0, bay 1, etc.), and ship each drive individually in an anti-static bag with closed-cell foam padding. Do not ship the server chassis. Include notes on the RAID level, stripe size, and filesystem type. Ship via FedEx or UPS with tracking and insurance.
What should our IT team do in the first 30 minutes after a server outage?
Power the server down, refuse rebuild or repair commands, and photograph every drive bay and controller error screen before anything is moved. Label each member by slot position and record the RAID level, filesystem, hypervisor, and last event before failure. That context lets us map the array faster once the drives reach our Austin lab.
Can you extract individual virtual machines from a failed server?
Yes. After imaging the physical drives and reconstructing the RAID array, we parse the host filesystem (VMFS for VMware ESXi, ReFS or NTFS for Hyper-V, ZFS for Proxmox) to locate .vmdk, .vhdx, or qcow2 virtual disk files. Each VM can be delivered separately. Guest OS files within the virtual disk can also be extracted individually.
Do you prioritize cases involving active production outages?
Yes. Enterprise server cases with active production downtime are prioritized in our intake queue. SAS drives are imaged through dedicated SAS HBAs, and array reconstruction begins as soon as all member images are complete. Contact us with your array configuration details so we can provide a timeline estimate before you ship.
My Dell PowerEdge shows 'The following virtual disks have missing disks' and asks to press C. What should I do?
Power off the server immediately. Do not press 'C' to load the configuration utility and do not force-import the foreign configuration. The controller has detected that the DDF RAID metadata on the drives does not match the controller's NVRAM. Importing the configuration on physically degraded drives can trigger an automated rebuild that destroys the parity stripe when it hits an Unrecoverable Read Error on a surviving member.
How do you recover data from an HP ProLiant server showing POST Error 1785?
Error 1785 ('Drive Array Not Configured') indicates the Smart Array controller cannot detect a valid RAID configuration, often because multiple members dropped offline or the array metadata was corrupted. We do not use the HP Smart Array controller to recover the data. We image each SAS or SATA member drive through write-blocked HBAs and use PC-3000 RAID Edition to parse the proprietary HP Smart Array metadata directly from the drives.
Should I run chkdsk or force an array rebuild if multiple drives show errors?
No. Running chkdsk /f on a physically degrading array forces Windows to aggressively truncate the Master File Table (MFT) and orphan critical file indexes when it encounters read errors on failing sectors. On a server with database files, this destroys .mdf headers and transaction log chains. Forcing a RAID rebuild is also destructive: the controller reads every sector on every surviving drive to recalculate parity, and a single Unrecoverable Read Error during that process drops a second member offline and destroys the array. Power down the server, label each drive bay, and ship the drives for read-only imaging with PC-3000.
Can you recover a Windows Storage Spaces or Storage Spaces Direct cluster after the virtual disk goes Offline?
Yes. We image each pool member, capture the Storage Spaces metadata from the disks, and rebuild the virtual disk layout without running Repair-VirtualDisk on the source members. ReFS, NTFS, and .vhdx extraction all happen from those images, not from the live cluster.
Do enterprise helium drives require different recovery procedures?
Yes. Helium-filled enterprise drives (Seagate Exos, WD Ultrastar HC, Toshiba MG series) use hermetically sealed chambers to reduce air turbulence, allowing tighter platter spacing and higher capacity. Opening a helium drive exposes the platters to atmospheric pressure and particulate contamination. These drives require donor parts from the same helium-sealed model family, and all open-drive work is performed on our 0.02µm ULPA-filtered clean bench. Helium drive recovery pricing starts at $200 for simple copies and reaches $4,000+ for surface damage cases, plus additional helium refill cost.
Why shouldn't I use 'Import Foreign Configuration' on a degraded Dell PERC array?
When a Dell PowerEdge halts with a Foreign Configuration error, the DDF metadata stored on the physical drives no longer matches the controller's NVRAM. Importing the foreign configuration forces degraded drives online and can immediately trigger a Background Initialization or parity rebuild. If a surviving member hits an Unrecoverable Read Error during that forced parity write, the array transitions from recoverable to destroyed. Power off the server, label each drive bay, and ship the drives for read-only imaging via PC-3000 RAID Edition.
Can you recover data if the RAID 5 array failed during a rebuild?
Yes. A failed RAID 5 rebuild usually means a surviving member hit an Unrecoverable Read Error (URE) during parity recalculation, dropping the array offline. We don't resume the rebuild. We write-block and image all member drives through SAS HBAs using PC-3000, map every bad sector, then virtually destripe the array using PC-3000 RAID Edition. The tool uses the XOR parity blocks from the original pre-rebuild state to reconstruct data across the failed sectors without triggering additional destructive writes.
Can you recover SQL Server or ERP databases from a crashed server array?
Yes. After imaging the member drives and reconstructing the RAID array offline, we extract .mdf and .ldf database files from the recovered NTFS or ReFS volume. If the RAID controller cache failed during active writes, the database often contains torn pages where SQL Server's 8KB page boundaries were interrupted mid-write. We repair corrupted page headers and use the transaction log (.ldf) chain to force-attach the database in suspect state, allowing DBCC CHECKDB to recover critical ERP table rows that automated software skips.
How much does a 6-drive RAID 5 server array recovery cost?
Each drive is priced by its physical condition: $100 for a healthy drive that reads normally, up to $2,000 for surface damage requiring platter work. A 6-drive RAID 5 where all members read normally costs approximately $100 x 6 = $600, plus an array reconstruction fee for the PC-3000 RAID Edition virtual destripe. If two drives need head swaps ($1,200–$1,500 each), the total increases accordingly. No data recovered means no charge.
Is there a rush option for production server outages?
Yes. A +$100 per-drive rush fee is available for any drive in the array. For a 6-drive array, the rush fee applies per drive ($100 x 6 = $600). Rush cases are moved to the front of the imaging queue, and SAS drives are imaged through dedicated SAS HBAs. Contact us with the array configuration before shipping so we can stage the appropriate SAS hardware.
How is NVMe enterprise SSD recovery in servers priced?
NVMe enterprise SSDs (Intel P5510, Samsung PM9A3, Micron 7450) follow our NVMe SSD pricing: $200 for a simple data copy, up to $1,200–$2,500 for advanced recovery. Supported Phison-based caching drives may be handled through PC-3000 SSD; proprietary enterprise controllers often require board repair or terminal access to restore controller function before data can be read. Servers that mix SAS spinners with NVMe caching drives are priced per drive based on each drive's condition and interface type. The array reconstruction fee applies once per array, regardless of how many drives are NVMe vs. SAS.
How does a destructive RAID rebuild destroy our Recovery Point Objective?
A controller-driven rebuild reads every sector on every surviving member to recalculate parity. If one of those survivors hits an Unrecoverable Read Error mid-rebuild, the controller writes corrupt parity back to the stripe, and the last-consistent on-disk state of the array is gone. We avoid that path entirely: each member is imaged behind a write-blocked SAS HBA using PC-3000 or a DeepSpar Disk Imager, then virtually destriped in PC-3000 RAID Edition so no write ever touches the source drives and the pre-failure state is preserved on our working images.
What is a realistic RTO for a failed multi-drive server array?
Degraded-but-accessible arrays where every member still reads normally typically complete virtual destriping 3-5 calendar days after intake. Catastrophic cases (two or more failed members, Dell PERC NVRAM mismatch, or helium drive mechanical failure) typically run 7-14 days, and the timeline on mechanical cases is driven by donor head-stack availability for the specific Seagate Exos or WD Ultrastar HC revision in the array. A +$100 per-drive rush fee moves the case to the front of the imaging queue.
Can you prioritize one VM, LUN, or database before the rest of the array?
Yes. Once the stripe map and filesystem metadata are confirmed, we can target a named VMFS datastore, .vhdx, .vmdk, .mdf, or .ldf set first instead of spending time on lower-priority shares. Tell us the business-critical workload before you ship so the extraction order matches your RTO.
Do you provide forensic chain of custody for litigation and e-discovery cases?
Yes. Law firm, internal-investigation, and e-discovery cases receive a numbered custody log from intake photo through return shipment. Source drives are imaged behind a hardware write-blocker so the physical evidence cannot be mutated by the host OS or by the imaging tool, and SHA-256 hashes of the working image are recorded and re-verified after each analysis pass. Custody artifacts are delivered with the recovered data so counsel can match image hashes against intake hashes in deposition.
Do IT administrators speak to the engineer doing the work, or to a sales rep?
You speak to the engineer. The technician running SAS HBA passthrough on your array, parsing the DDF or HP Smart Array metadata, and virtually destriping in PC-3000 RAID Edition is the same person answering your phone calls and email threads. Status updates cite the actual work performed (members imaged, bad-sector maps, parity rotation detected), not a generic ticket state.
Shipping

Secure Mail-In from Anywhere in the US

Transit Time

1 Business Day

FedEx Priority Overnight delivers to Austin by 10:30 AM the next business day from most US addresses.

Major Origins
  • New York City 1 Business Day
  • Los Angeles 1 Business Day
  • Chicago 1 Business Day
  • Seattle 1 Business Day
  • Denver 1 Business Day
Security & Insurance

Fully Insured

Use FedEx Declared Value to cover hardware costs. We return your original drive and recovered data on new media.

Packaging Standards

  • Use the box-in-box method: float a small box inside a larger box with 2 inches of bubble wrap.
  • Wrap the bare drive in an anti-static bag to prevent electrical damage.
  • Do not use packing peanuts. They compress during transit and allow heavy drives to strike the edge of the box.

Ready to recover your server?

Free evaluation. No data = no charge. Mail-in from anywhere in the U.S.

(512) 212-9111 · Mon-Fri 10am-6pm CT
No diagnostic fee
No data, no fee
4.9 stars, 1,837+ reviews