
Database Recovery Service

When a server drive fails, database repair software cannot help because it cannot read the files. We recover the drive first using PC-3000 and clean bench techniques, then repair the database structure from the recovered image. SQL Server, Exchange, MySQL, Oracle, QuickBooks, PostgreSQL, MongoDB, and SharePoint. No data, no fee.

Call (512) 212-9111. No data, no recovery fee. Free evaluation, no diagnostic fees.
No Data = No Charge
8 Database Engines
Physical + Logical Repair
Nationwide Mail-In
Written by Louis Rossmann, Founder & Chief Technician
Updated March 31, 2026

Why Does Database Repair Software Fail on Crashed Drives?

Database repair software fails because it requires a functional file system to read page headers. When the underlying drive has failed heads, corrupted firmware, or damaged platters, the operating system cannot mount the volume and every repair tool reports the same result: file not found.

Software repair tools (Stellar, SysTools, mysqlcheck)
Operate on files that already exist on a readable filesystem. They parse database page headers, rebuild indexes, & patch torn pages. None of them can read a drive with failed heads, corrupted firmware, or damaged platters.
Server crash failure sequence
A drive develops bad sectors or a mechanical fault, the OS retries reads aggressively, the database engine detects I/O errors & marks the database as suspect or dismounts the store, and the administrator discovers that the underlying drive is no longer accessible.
Two-layer approach
We handle both layers. The physical recovery produces a clean sector-level image of the drive using PC-3000. The database repair phase operates on that image, never on the original media.

What Hardware Failures Cause Database Corruption?

Database engines don't corrupt themselves in isolation. SQL Server Error 824 (torn page detected), Exchange Dirty Shutdown, & PostgreSQL PANIC entries in pg_log all trace back to the same root cause: the underlying storage returned bad data or went offline mid-write. The database engine is the messenger; the drive is the problem.

SQL Server Error 824 from SAS/SATA Drive Degradation

Error 824 means SQL Server requested a data page & the storage layer returned a page with a mismatched checksum. On enterprise hard drives with developing bad sectors, this happens when the drive's internal ECC can no longer correct read errors on specific tracks. The drive returns stale or corrupted bytes instead of failing the read outright.

Running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS on a drive with active bad sectors destroys the recoverable pages it can't validate. We image the drive first using PC-3000 Express with adaptive read parameters that retry failing sectors at different head offsets, capturing pages that a single-pass read would miss. The MDF repair runs on the cloned image, not the failing media.
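Before anything destructive, it is safe to look at what the engine has already logged. A minimal sketch; the instance name, authentication flags, and database name are placeholders, and both queries are strictly read-only:

```shell
# Hypothetical instance and database names; both commands only READ state.

# 1. Pages SQL Server has already flagged (event_type 3 = torn page,
#    event_type 2 = bad checksum)
sqlcmd -S localhost -E -Q "SELECT database_id, file_id, page_id, event_type, error_count FROM msdb.dbo.suspect_pages"

# 2. Read-only physical integrity check -- reports corruption, repairs nothing
sqlcmd -S localhost -E -Q "DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, PHYSICAL_ONLY"
```

If either command shows checksum or torn-page errors that keep growing, the drive itself is degrading and further reads make it worse.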

Exchange Dirty Shutdown from RAID Controller Failure

When a Dell PERC or HP Smart Array controller drops a drive mid-write, the Exchange Information Store service terminates without flushing its transaction logs. The EDB database enters Dirty Shutdown state. Error -541 (JET_errLogFileSizeMismatch) appears when the log files on the failed drive are truncated or unreadable.

After we image each RAID member drive & reconstruct the virtual array, we extract the EDB & transaction logs from the recovered volume. If log files are missing sectors from unreadable regions on the original media, we use eseutil /r with the /a switch to replay available logs & skip the damaged ones, bringing the database to Clean Shutdown state without touching the original drives.
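The log-replay step above looks roughly like this on the recovered copy. A sketch only; the paths and log prefix (E00) are placeholders for your environment, and it must never be pointed at the original drives:

```shell
REM Hypothetical paths; run against the RECOVERED copy, never the originals.

REM 1. Dump the database header -- look for "State: Dirty Shutdown"
eseutil /mh "D:\Recovered\Mailbox Database.edb"

REM 2. Replay the available transaction logs; /a permits recovery
REM    even when some log files are missing or damaged
eseutil /r E00 /l "D:\Recovered\Logs" /d "D:\Recovered" /a

REM 3. Confirm the header now reports "State: Clean Shutdown"
eseutil /mh "D:\Recovered\Mailbox Database.edb"
```

The /a replay is lossy by definition: transactions that only existed in the skipped logs are gone, which is why we image the damaged log regions as completely as possible first.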

PostgreSQL PANIC on ZFS Pool Import Failure

PostgreSQL running on ZFS-based NAS appliances (TrueNAS, FreeNAS) writes its WAL (Write-Ahead Log) segments sequentially. When a drive in the ZFS pool develops unreadable sectors in the middle of a WAL segment, the pool goes into a DEGRADED or FAULTED state. PostgreSQL logs PANIC: could not locate a valid checkpoint record because the WAL file it needs for crash recovery is physically damaged.

Forcing a ZFS pool import with zpool import -f on a degraded pool replays the intent log (ZIL) to the original disks, advancing the on-disk state past a recoverable point and destroying the previous transaction group consistency. We clone each pool member individually, reconstruct the ZFS vdev layout from the cloned images, & extract the PostgreSQL data directory with the WAL segments intact for proper crash recovery.
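The safe counterpart to a forced import is a read-only import of the cloned members. A sketch, assuming the pool is named "tank" and the dataset holds the PostgreSQL data directory; both names and paths are placeholders:

```shell
# Non-destructive import, run against CLONED pool members only.
# readonly=on defers ZIL replay, so nothing is written to the images.
zpool import -o readonly=on -N -R /mnt/recover tank

# Mount the dataset and copy the PostgreSQL data directory to safe storage
zfs mount tank/pgdata
rsync -a /mnt/recover/tank/pgdata/ /safe/target/pgdata/
```

With the WAL segments copied out intact, PostgreSQL's own crash recovery can run normally on a healthy server.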

Two-Layer Recovery Workflow

Layer 1: Physical Drive Recovery

  • Write-blocked forensic imaging using PC-3000 and DeepSpar with conservative retry settings
  • Head swap in 0.02 µm ULPA-filtered clean bench if heads have failed
  • Firmware repair for drives reporting wrong capacity, wrong model ID, or refusing to spin
  • Bad sector management with adaptive read parameters to maximize data yield from degraded platters
  • Full sector-level clone to a healthy target drive before any database work begins
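The "clone first, then verify" discipline in the last step can be illustrated with scratch files. This is a toy sketch only (real imaging uses hardware imagers, not dd); it shows the verification idea: no repair starts until the clone hashes identically to what was read:

```shell
# Toy stand-in for a drive: 4 MiB of random data
dd if=/dev/urandom of=/tmp/source.img bs=1M count=4 2>/dev/null

# "Image" it; conv=noerror,sync is the dd idiom for skipping bad blocks
dd if=/tmp/source.img of=/tmp/clone.img bs=64K conv=noerror,sync 2>/dev/null

# The clone must match bit for bit before any database work begins
src=$(sha256sum /tmp/source.img | awk '{print $1}')
dst=$(sha256sum /tmp/clone.img | awk '{print $1}')
[ "$src" = "$dst" ] && echo "clone verified"
```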

Layer 2: Database Structure Repair

  • Mount recovered MDF/EDB/InnoDB files from the cloned image in an isolated environment
  • Page-level integrity scan to identify torn pages, checksum failures, and orphaned records
  • Hex-level repair of damaged page headers and allocation metadata
  • Transaction log reconstruction when LDF/E00 log files are missing or corrupt
  • Final export: restored database files, mailbox PSTs, or raw table dumps depending on engine

How We Extract Databases from Failed Server Arrays

Production databases rarely sit on single drives. SQL Server, Exchange, & Oracle instances typically run on RAID 5, 6, or 10 arrays behind a Dell PowerEdge PERC or HP Smart Array controller. When the controller fails or multiple drives drop simultaneously, forcing a rebuild destroys parity & overwrites the data you need.

  1. Clone each member drive individually. Every RAID member is connected to a PC-3000 Express or Portable III hardware unit via write-blocked SATA or SAS channels. We image each drive to a healthy target, handling bad sectors with adaptive read parameters. No data is written to the original media.
  2. Reconstruct the virtual array offline. Using the cloned images, we detect the stripe size, parity rotation, & drive order. For RAID 5 & RAID 6 arrays, we virtually reassemble the array without writing to any original disk, then parse the LVM or NTFS volume from the reconstructed block device.
  3. Extract & repair database files. Once the virtual array is mounted, we locate the MDF, EDB, InnoDB tablespace, or other database files & run engine-specific repair on the recovered copies. The original drives remain untouched throughout.
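Step 2 works because RAID 5 parity is a pure XOR relationship: any one missing member can be recomputed from the survivors. A toy single-byte demonstration of that arithmetic:

```shell
# Toy RAID-5 math on one byte per member.
# Parity P = A xor B is written when the array stripes data.
A=0x5a; B=0xc3
P=$(( A ^ B ))

# Member B fails: XOR the survivor with the parity block to rebuild it
REBUILT=$(( A ^ P ))
[ "$REBUILT" -eq "$(( B ))" ] && echo "member reconstructed"
```

This is also why forcing a controller rebuild is so dangerous: if the controller picks the wrong stripe order or a stale member, it recomputes and overwrites parity using bad inputs, and the relationship above is destroyed.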

Consumer SSD Failures in Database Servers: Phison and SMI Controller Panics

Small business database servers are sometimes built using budget consumer SATA SSDs as primary storage. When an unexpected power loss occurs while the controller is flushing its internal SLC cache to TLC/QLC NAND, the Flash Translation Layer (FTL) mapping table corrupts. The drive drops to 0 bytes or becomes unresponsive.

Phison PS3111-S11 (SATAFIRM S11 bug)
Reports 0-byte capacity or shows "SATAFIRM S11" as its model string after power loss during cache flush. We connect the drive to PC-3000 SSD, enter Technological Mode via ATA Vendor Specific Commands, & rebuild the FTL translator from surviving NAND service area copies. Database files cached on the SSD at the time of failure are recoverable if TRIM has not already zeroed the blocks.
Silicon Motion SM2259XT (BSY state)
Enters a locked BSY (Busy) state showing 0 bytes or incorrect capacity after FTL corruption. The drive is shorted into Safe Mode via PCB test points, then PC-3000 SSD loads a volatile microcode loader into the controller SRAM and rebuilds the translator. This procedure is impossible for standard recovery software, which requires the controller to be functional.

TRIM/UNMAP note: modern SSDs with TRIM enabled permanently erase deleted blocks. If the database engine deleted records before the crash, those records are unrecoverable regardless of recovery method.
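On a Linux host you can check in seconds whether TRIM is in play before deciding how urgent the power-down is. A sketch; the device name is a placeholder:

```shell
# Does the device support discard at all? Non-zero DISC-GRAN / DISC-MAX
# columns mean TRIM commands reach the drive.
lsblk --discard /dev/sda

# Is periodic TRIM scheduled? Most mainstream distros enable this timer
# by default, so deleted blocks may be erased on a weekly schedule.
systemctl status fstrim.timer
```

If the timer is active on an SSD-backed database server, every day the machine stays powered on is another chance for deleted blocks to be erased permanently.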

Marvell 88SS1074 Database Drive Failures

Budget SATA SSDs running the Marvell 88SS1074 controller are common in small business database servers where cost drives the hardware decision. When the firmware boot sequence corrupts after a power loss or NAND degradation event, the drive enters a permanent BSY (Busy) state. The SATA bus reports the drive but returns no data; the operating system hangs on mount attempts, and the SQL Server or PostgreSQL instance cannot start.

Recovery requires PC-3000 SSD with the Marvell VanGogh family utility. We block the corrupted main firmware from loading, boot the controller into a minimal service mode via ATA Vendor Specific Commands, and rebuild the Flash Translation Layer from surviving NAND metadata. Once the FTL is reconstructed, the drive responds normally and we image the full capacity to extract the database files.

NVMe Database Cache Failures: Phison E12 and E16 ROM Mode

High-performance NVMe SSDs with Phison E12 (PCIe 3.0) and E16 (PCIe 4.0) controllers are frequently deployed as L2ARC read caches or ZFS SLOG write caches in database arrays. During unexpected power loss while the controller is flushing its FTL mapping table to NAND, the drive locks into a protective ROM state. The server BIOS detects the NVMe device but reports a generic capacity of 1 GB or 2 MB instead of the actual drive size.

Standard NVMe management tools (nvme-cli, Samsung Magician, Intel SSD Toolbox) cannot communicate with a drive in ROM mode because the controller's main firmware never loads. We connect the drive to PC-3000 NVMe, enter Technological Mode via the PCIe interface, and upload a custom microcode loader directly into the controller's SRAM. The loader replaces the corrupted firmware in volatile memory, allowing us to reconstruct the FTL from surviving NAND service area copies and extract the cached database segments before the operating system triggers a destructive TRIM/UNMAP command on reboot.

Lenovo Server NVMe: Marvell 88SS1093 Timeout Errors

Enterprise database servers from Lenovo (ThinkSystem SR series) and OEM partners using Ramaxel-branded NVMe drives frequently run the Marvell 88SS1093 controller. When NAND cells degrade past the controller's internal ECC threshold, or a firmware translation fault corrupts the logical-to-physical block map, the drive generates persistent timeout errors during database I/O. SQL Server logs Error 823 (I/O error) and PostgreSQL reports "could not read block" failures that trace back to the NVMe timeout.

We use PC-3000 SSD Extended to block the main firmware from loading and boot the drive into Extended (service) mode. From there, we read the TCG Opal subsystem configuration (if enterprise encryption is enabled), rebuild the logical image from raw NAND, and extract the database files. This avoids the timeout loop entirely because the drive's normal I/O path is never engaged.

Recovering Databases from Encrypted Server Drives

Many Windows Server deployments use BitLocker, and Linux production servers frequently use LUKS full-disk encryption. When the underlying drive fails, the physical failure prevents access to the encrypted volume. After imaging, the encryption prevents reading the database files without the original key. We handle this in two separate phases.

Phase 1: Physical recovery of the encrypted volume
The failing drive is connected to PC-3000 Express or Portable III in write-blocked mode. We create a sector-level clone of the entire encrypted partition to a healthy target drive, including the BitLocker metadata header and LUKS key slots. Bad sectors are handled with adaptive read parameters; the clone captures every readable sector without writing to the original media.
Phase 2: Decryption & database extraction
Once the physical clone is complete, you provide the BitLocker recovery key or LUKS passphrase. We mount the encrypted volume on the cloned image and extract the database files (MDF, EDB, InnoDB tablespace, or PostgreSQL data directory). If the database uses SQL Server Transparent Data Encryption (TDE), the database master key and certificate must also be available; without it, the MDF file contents remain encrypted even after the volume is unlocked.

We don't break or bypass encryption. Recovery requires the original recovery key, passphrase, or TDE certificate. Without credentials, the encrypted data is mathematically unrecoverable regardless of the physical condition of the drive.
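Phase 2 on a LUKS clone looks roughly like the following. A sketch under stated assumptions: the clone path, mapper name, and mountpoints are placeholders, and everything is attached read-only so the clone itself is never modified either:

```shell
# Attach the sector-level clone read-only
loopdev=$(losetup --find --show --read-only /srv/recovery/clone.img)

# Unlock with the client-supplied passphrase; --readonly keeps the
# decrypted mapping write-protected too
cryptsetup open --readonly "$loopdev" recovered_luks
mount -o ro /dev/mapper/recovered_luks /mnt/recovered

# Copy the database files off the decrypted volume
cp -a /mnt/recovered/var/lib/mysql /safe/target/
```

BitLocker clones follow the same pattern using the dislocker FUSE driver with the client's recovery key in place of the LUKS passphrase.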

Database Recovery from Virtual Machine Hosts

Production databases frequently run inside VMware ESXi, Hyper-V, or Proxmox VE virtual machines. When the host server fails, the database files are locked inside VMDK, VHDX, or raw disk images on a VMFS, ReFS, or ZFS datastore. Software tools can't reach the database because the virtualization layer is broken.

  1. Image the physical host drives. Each drive from the host (or SAN shelf) is cloned via PC-3000 in write-blocked mode, exactly as with a bare-metal RAID array recovery.
  2. Reconstruct the datastore. The VMFS, ReFS, or ZFS volume is reassembled from the cloned images. We locate the target VM's virtual disk files (VMDK, VHDX, or QCOW2) and verify their integrity.
  3. Mount the virtual disk & extract database files. The virtual disk is mounted as a loopback device. We navigate the guest OS file system to locate the MDF, EDB, InnoDB tablespace, or PostgreSQL data directory, then run the appropriate database repair on the extracted files.

Accidental VM deletion, snapshot corruption, & SAN LUN failures are all recoverable if the underlying physical storage still contains the VMDK/VHDX data blocks. TRIM on thin-provisioned datastores is the exception; if VMFS UNMAP has already zeroed the blocks, those sections are permanently gone.
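The loopback-mount step can be sketched with qemu-nbd, which exposes a VMDK, VHDX, or QCOW2 image as a block device. Paths and the partition number are placeholders for whatever the guest actually uses:

```shell
# Expose the recovered virtual disk as /dev/nbd0, read-only
modprobe nbd max_part=16
qemu-nbd --read-only --connect=/dev/nbd0 /srv/recovery/guest-flat.vmdk

# Mount the guest's data partition read-only (partition number varies)
mount -o ro /dev/nbd0p2 /mnt/guest

# Copy the database files out of the guest file system
cp -a /mnt/guest/var/lib/postgresql /safe/target/
```

The read-only flags matter: mounting a journaled guest file system read-write would replay its journal and alter the extracted image.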

Supported Database Engines

Microsoft SQL Server

MDF/NDF/LDF recovery. Error 5171, 823, 824 resolution. Suspect mode databases. DBCC alternatives that preserve data instead of deleting it. SQL Server 2000 through 2022.

Microsoft Exchange Server

EDB file recovery with per-mailbox extraction. Dirty shutdown repair, -1018 checksum errors, JET database corruption. Exchange 2003 through 2019. PST export per user.

MySQL / MariaDB

InnoDB tablespace recovery (ibdata1, .ibd per-table files). Redo log reconstruction when ib_logfile0/1 are corrupted. MyISAM .MYD/.MYI repair for legacy tables. MySQL 5.0 through 8.x, MariaDB 10.x.
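For InnoDB, the non-destructive salvage path is a low forced-recovery level plus an immediate dump, always on the recovered copy. A sketch; levels 1-3 skip background operations without purging data, unlike level 6, which the warning below covers:

```shell
# In my.cnf on the salvage server (or pass --innodb-force-recovery=1):
#   [mysqld]
#   innodb_force_recovery = 1

# With the server up in forced-recovery mode, export everything at once
mysqldump --all-databases > /safe/target/salvage.sql
```

Raise the level one step at a time only if the server still crashes on startup, and dump as soon as it stays up.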

Oracle Database

Datafile (.dbf) recovery, ASM diskgroup reassembly, control file reconstruction. ORA-01578 block corruption, ORA-00600 internal errors. Oracle 11g through 21c, including CDB/PDB multitenant.

QuickBooks

QBW company file recovery from failed drives. Error -6000/-301, -6000/-82, and C=343 resolution. Sybase ASA page-level repair. QuickBooks 2006 through 2025, Pro/Premier/Enterprise.

PostgreSQL

Data directory recovery from failed drives. WAL segment reconstruction, pg_control repair, TOAST table recovery, and per-table COPY export from recovered clusters. PostgreSQL 9.x through 18.

MongoDB

WiredTiger collection file extraction, BSON document recovery, oplog replay, and catalog metadata rebuild. Sharded cluster reassembly and GridFS file reconstruction. MongoDB 3.2 through 8.0.

Microsoft SharePoint

Content database recovery from failed SAN/RAID arrays. MDF extraction, Remote BLOB Storage stub reconciliation, torn page repair, and FILESTREAM relinking. SharePoint 2010 through Subscription Edition.

Stop Before Running Repair Commands

DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS, eseutil /p, and innodb_force_recovery=6 are destructive operations. They delete data they cannot validate. If the corruption originates from a physical drive problem, these commands destroy data that a proper drive recovery would have preserved. Power down the server and contact us before running repair utilities.

How Much Does Database Recovery Cost?

Database recovery pricing is based on the physical condition of the drive, not the database engine. The database repair phase is included at no additional cost. Drives in RAID arrays are priced per member drive.

Simple Copy

Low complexity

Your drive works, you just need the data moved off it

$100

3-5 business days

Functional drive; data transfer to new media

Rush available: +$100

File System Recovery

Low complexity

Your drive isn't recognized by your computer, but it's not making unusual sounds

From $250

2-4 weeks

File system corruption. Accessible with professional recovery software but not by the OS

Starting price; final depends on complexity

Firmware Repair

Medium complexity

Your drive is completely inaccessible. It may be detected but shows the wrong size or won't respond

$600–$900

3-6 weeks

Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access

CMR drive: $600. SMR drive: $900.

Head Swap

High complexity (most common)

Your drive is clicking, beeping, or won't spin. The internal read/write heads have failed

$1,200–$1,500

4-8 weeks

Head stack assembly failure. Transplanting heads from a matching donor drive on a clean bench

50% deposit required. CMR: $1,200-$1,500 + donor. SMR: $1,500 + donor.

Surface / Platter Damage

High complexity

Your drive was dropped, has visible damage, or a head crash scraped the platters

$2,000

4-8 weeks

Platter scoring or contamination. Requires platter cleaning and head swap

50% deposit required. Donor parts are consumed in the repair. Most difficult recovery type.

Hardware Repair vs. Software Locks

Our "no data, no fee" policy applies to hardware recovery. We do not bill for unsuccessful physical repairs. If we replace a hard drive read/write head assembly or repair a liquid-damaged logic board to a bootable state, the hardware repair is complete and standard rates apply. If data remains inaccessible due to user-configured software locks, a forgotten passcode, or a remote wipe command, the physical repair is still billable. We cannot bypass user encryption or activation locks.

No data, no fee. Free evaluation and firm quote before any paid work. Full guarantee details. Head swap and surface damage require a 50% deposit because donor parts are consumed in the attempt.

Rush fee: +$100 to move to the front of the queue.

Donor drives: Donor drives are matching drives used for parts. Typical donor cost: $50–$150 for common drives, $200–$400 for rare or high-capacity models. We source the cheapest compatible donor available.

Target drive: The destination drive we copy recovered data onto. You can supply your own or we provide one at cost plus a small markup. For larger capacities (8TB, 10TB, 16TB and above), target drives cost $400+ extra. All prices are plus applicable tax.

Why Recover Databases With Rossmann Repair Group

Two-layer recovery

Physical drive repair (PC-3000, head swaps, firmware correction) followed by logical database structure repair. One lab handles both.

Multi-engine support

SQL Server MDF/NDF, Exchange EDB, MySQL InnoDB, PostgreSQL. The drive recovery is universal; the database repair adapts to each engine's on-disk format.

Transparent pricing

Five published tiers based on drive condition. Database repair is included. If we recover nothing usable, you pay $0.

Direct engineer access

Talk to the person doing the work. No sales scripts, no account managers, no call center.

No evaluation fee

Free assessment of drive condition and recovery feasibility before any paid work begins.

Image-first workflow

Every drive is forensically imaged before any repair attempts. Original media is never modified. All database repair runs against the cloned image.

Data Recovery Standards & Verification

Our Austin lab operates on a transparency-first model. We use industry-standard recovery tools, including PC-3000 and DeepSpar, combined with strict environmental controls to make sure your hard drive is handled safely and properly. This approach allows us to serve clients nationwide with consistent technical standards.

Open-drive work is performed in a ULPA-filtered laminar-flow bench, validated to 0.02 µm particle count, verified using TSI P-Trak instrumentation.

Transparent History

Serving clients nationwide via mail-in service since 2008. Our lead engineer holds PC-3000 and HEX Akademia certifications for hard drive firmware repair and mechanical recovery.

Media Coverage

Our repair work has been covered by The Wall Street Journal and Business Insider, with CBC News reporting on our pricing transparency. Louis Rossmann has testified in Right to Repair hearings in multiple states and founded the Repair Preservation Group.

Aligned Incentives

Our "No Data, No Charge" policy means we assume the risk of the recovery attempt, not the client.


Louis Rossmann

Louis Rossmann's well-trained staff review our lab protocols to ensure technical accuracy and honest service. Since 2008, his focus has been on clear technical communication and accurate diagnostics rather than sales-driven explanations.

We believe in proving standards rather than just stating them. We use TSI P-Trak instrumentation to verify that clean-air benchmarks are met before any drive is opened.

See our clean bench validation data and particle test video

Database Recovery FAQ

What types of databases do you recover?
SQL Server (MDF/NDF/LDF), Exchange Server (EDB), MySQL/MariaDB (InnoDB tablespace files), Oracle (datafiles, ASM diskgroups), QuickBooks (QBW company files), PostgreSQL data directories, and MongoDB (WiredTiger collection files). The physical drive recovery process is the same regardless of database engine. The logical repair phase differs by format.
Can you recover a database from a physically failed drive?
Yes. Software-only database repair tools require the drive to be readable. When the underlying drive has failed (clicking, firmware corruption, bad sectors), we recover the raw data using PC-3000 and clean bench techniques first, then repair the database structure from the recovered image.
How is database recovery priced?
Pricing follows our standard drive recovery tiers based on the physical condition of the drive: From $250 for file system issues, $600–$900 for firmware repair, $1,200–$1,500 for head swaps. Donor drives are matching drives used for parts. Typical donor cost: $50–$150 for common drives, $200–$400 for rare or high-capacity models. We source the cheapest compatible donor available. Database structure repair is included at no additional charge. No data, no fee.
Do you need our entire server?
Usually just the drives containing the database files. For hardware RAID configurations, send all member drives and label each drive's slot position before removal.
How long does database recovery take?
Single-drive recoveries with healthy reads: 3-5 business days. Firmware repair: 3-6 weeks. Head swaps: 4-8 weeks. RAID arrays with multiple failed members add additional time for per-member imaging. A +$100 rush fee moves your case to the front of the queue.
Why does SQL Server show Suspect Mode or Exchange report Dirty Shutdown?
Both conditions occur when the database engine detects I/O anomalies from the underlying storage. SQL Server marks a database as suspect (Error 824, torn page detected) when it can't read a data page consistently. Exchange reports dirty shutdown when the EDB transaction logs are incomplete or the drive returned corrupted sectors during a write. In both cases, the drive itself is usually developing bad sectors or has a degraded RAID stripe. Running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS or eseutil /p on physically failing storage will permanently destroy database pages that a proper drive recovery would have preserved.
What is the difference between logical database corruption and physical drive failure?
Logical corruption means the drive hardware is healthy but the database file structure is damaged (torn pages, orphaned records, mismatched checksums). Software repair tools can fix this. Physical failure means the drive itself has failed heads, corrupted firmware, or bad sectors. The database files are inaccessible because the storage layer is broken. We handle both: PC-3000 imaging recovers the physical layer, then database-specific repair tools address logical corruption on the recovered image.
Can you recover a database from a BitLocker or LUKS encrypted server drive?
We don't break encryption. We recover the physical drive first using PC-3000 & DeepSpar, creating a sector-level clone of the failing media. Once the clone is on a healthy drive, you provide the BitLocker recovery key or LUKS passphrase, and we mount the encrypted volume to extract the database files. SQL Server Transparent Data Encryption (TDE) adds a second layer; the database master key must also be available for the MDF files to be readable after extraction.
Can you recover a database running inside a VMware or Hyper-V virtual machine?
Yes. We image the physical host drives, reconstruct the VMFS or ReFS datastore, locate the VMDK or VHDX virtual disk files, then mount the virtual disk to access the database files inside. The database repair phase runs on the extracted MDF, EDB, or InnoDB files from the virtual disk image. Physical host failure, SAN LUN corruption, & accidental VM deletion are all recoverable if the underlying storage is intact.
Is rush service available for database recovery?
Yes. A +$100 rush fee moves your case to the front of the imaging queue, which is the bottleneck for most server recoveries. The database repair phase after imaging is typically fast (hours, not days) since it operates on an already-recovered image.
Can you recover SharePoint content databases from a failed SAN or RAID array?
Yes. SharePoint stores all sites, documents, & lists inside SQL Server .mdf content databases. When a SAN or Dell PERC RAID array drops multiple drives, the SharePoint front-end can't connect. We image the failed SAS/SATA drives individually using PC-3000, reconstruct the virtual array's block size & parity rotation in software, then extract the raw SharePoint .mdf files. Database repair runs on the recovered image; the original drives remain untouched.
Can you recover a database encrypted by ransomware?
If ransomware encrypted MDF, EDB, or InnoDB files while the drive was still physically healthy, recovery depends on whether the original unencrypted data was overwritten or just marked deleted. On HDDs without TRIM, we can often recover pre-encryption file versions from unallocated sectors using PC-3000 imaging. On SSDs with TRIM enabled, the original blocks are permanently erased. See our ransomware data recovery page for the full process.
Can you recover ERP or accounting databases like SAP HANA, NetSuite, or Sage?
Yes. ERP systems store financial data in standard database engines: SAP HANA uses its own column-store format on XFS or ext4 volumes, SAP Business One runs on SQL Server or HANA, Sage runs on SQL Server or Pervasive PSQL, and NetSuite is Oracle-based. The drive recovery process is identical regardless of the ERP layer. After we image the failed drive with PC-3000, we extract the database files & run engine-specific repair. The ERP application reads the repaired database once it's restored to a functional server.
Can you recover legacy database formats like FileMaker, Lotus Notes, and IBM Db2?
Yes. We recover proprietary & legacy database engines including FileMaker (.fmp12), Lotus Notes (.nsf), IBM Db2, Paradox (.db), and Sybase ASA. The physical drive recovery process using PC-3000 is identical regardless of the database software. Once we clone the failing drive at our Austin lab, the raw legacy database files are extracted from the recovered volume for restoration on a healthy server.
Why is our database NVMe cache drive showing up as 2MB in BIOS?
A 2 MB or 1 GB capacity in BIOS indicates a severe firmware panic on the NVMe controller. This is common in Phison E12 & E16 controllers used as database L2ARC or SLOG caches. When the Flash Translation Layer (FTL) mapping table corrupts during a power loss, the controller locks into a protective ROM state and reports a generic capacity instead of the actual drive size. Software recovery tools can't reach the drive in this state. We use PC-3000 NVMe Technological Mode to upload a microcode loader into the controller SRAM, reconstruct the FTL, & extract the cached database files before TRIM erases the blocks.
How do you recover a database from a failed SAN or iSCSI LUN?
SAN LUN failures typically involve multiple drives in a RAID group behind a Dell EMC, NetApp, or HP MSA controller. We image each physical drive individually using PC-3000 Express via SAS or SATA write-blocked channels. After cloning, we reconstruct the LUN's block layout (stripe size, parity type, drive order) from the cloned images in software. Once the virtual LUN is reassembled, we mount the VMFS, NTFS, or ext4 volume and extract the database files for repair. The original SAN drives are never written to.
What if our server runs multiple databases on the same drive array?
We recover all databases from the same imaging pass. After the physical recovery produces a clean sector-level clone, every database file on the recovered volume is accessible: SQL Server MDF files, Exchange EDB stores, MySQL data directories, and application files. There's no per-database charge for the physical recovery. The drive condition determines the price, not the number of databases stored on it.

How to Ship Server Drives for Database Recovery

We serve all 50 states through mail-in data recovery. Server drives require more careful packing than consumer drives because enterprise SAS drives & RAID members must arrive with their slot positions documented.

  1. Label each drive before removal. Use masking tape & a marker. Write the slot number (0, 1, 2...) and the server model. For RAID arrays, the slot order determines stripe reconstruction.
  2. Wrap each drive individually. Use anti-static bags if available. Wrap in bubble wrap. Never let bare drives touch each other in transit.
  3. Ship in a sturdy box with 2+ inches of padding. Double-box for enterprise SAS drives. FedEx & UPS both work. Carrier declared-value coverage applies to the physical hardware only, not the data stored on it.
  4. Include a note. Database engine (SQL Server, Exchange, PostgreSQL), file names if known (e.g., the .mdf filename), & your contact info. The more context we have, the faster the diagnosis.

Ship to: 2410 San Antonio Street, Austin, TX 78705. Local Austin drop-offs accepted during Mon-Fri 10am-6pm CT. Call (512) 212-9111 before shipping if you have questions about packing a specific server configuration.

Recover Your Database

Call Mon-Fri 10am-6pm CT or email for a free evaluation.

(512) 212-9111Mon-Fri 10am-6pm CT
No diagnostic fee
No data, no fee
Free return shipping
4.9 stars, 1,837+ reviews