“Had a RAID 0 array (Windows Storage Pool, with a failed 2TB Seagate and a working 1TB WD Blue) recovered last year. It was much cheaper than the $1,500 to $3,500 Canadian dollars I was quoted by a Canadian data recovery service. The price, while expensive, was a comparatively reasonable $900 USD (about $1,100 CAD at the time). They had very good communication with me about the status of my recovery and were extremely professional. The drive they sent back was very well packaged. I would 100% have a drive recovered by them again if I ever needed to.”
Database Recovery Service
When a server drive fails, database repair software cannot help because it cannot read the files. We recover the drive first using PC-3000 and clean bench techniques, then repair the database structure from the recovered image. SQL Server, Exchange, MySQL, Oracle, QuickBooks, PostgreSQL, and MongoDB. No data, no fee.

What is enterprise database recovery?
Enterprise database recovery is a two-layer process. Layer one images the failed physical storage (SAS/SATA/NVMe drive, SAN LUN, RAID member, or ZFS vdev) using PC-3000 Express in write-blocked mode. Layer two parses the recovered sector-level clone to extract database files (MDF/LDF, EDB, ibdata1, datafiles, WAL segments) and repair page-level, block-level, or tablespace-level corruption. Software-only repair tools fail on failing storage because the file system cannot mount the volume.
Why Does Database Repair Software Fail on Crashed Drives?
Database repair software fails because it requires a functional file system to read page headers. When the underlying drive has failed heads, corrupted firmware, or damaged platters, the operating system cannot mount the volume and every repair tool reports the same result: file not found.
- Software repair tools (Stellar, SysTools, mysqlcheck)
- Operate on files that already exist on a readable filesystem. They parse database page headers, rebuild indexes, & patch torn pages. None of them can read a drive with failed heads, corrupted firmware, or damaged platters.
- Server crash failure sequence
- A drive develops bad sectors or a mechanical fault, the OS retries reads aggressively, the database engine detects I/O errors & marks the database as suspect or dismounts the store, and the administrator discovers that the underlying drive is no longer accessible.
- Two-layer approach
- We handle both layers. The physical recovery produces a clean sector-level image of the drive using PC-3000. The database repair phase operates on that image, never on the original media.
What Hardware Failures Cause Database Corruption?
Database engines don't corrupt themselves in isolation. SQL Server Error 824 (torn page detected), Exchange Dirty Shutdown, & PostgreSQL PANIC entries in pg_log all trace back to the same root cause: the underlying storage returned bad data or went offline mid-write. The database engine is the messenger; the drive is the problem.
SQL Server Error 824 from SAS/SATA Drive Degradation
Error 824 means SQL Server requested a data page & the storage layer returned a page with a mismatched checksum. On enterprise hard drives with developing bad sectors, this happens when the drive's internal ECC can no longer correct read errors on specific tracks. The drive returns stale or corrupted bytes instead of failing the read outright.
Running DBCC CHECKDB WITH REPAIR_ALLOW_DATA_LOSS on a drive with active bad sectors destroys the recoverable pages it can't validate. We image the drive first using PC-3000 Express with adaptive read parameters that retry failing sectors at different head offsets, capturing pages that a single-pass read would miss. The MDF repair runs on the cloned image, not the failing media.
Exchange Dirty Shutdown from RAID Controller Failure
When a Dell PERC or HP Smart Array controller drops a drive mid-write, the Exchange Information Store service terminates without flushing its transaction logs. The EDB database enters Dirty Shutdown state. Error -541 (JET_errLogFileSizeMismatch) appears when the log files on the failed drive are truncated or unreadable.
After we image each RAID member drive & reconstruct the virtual array, we extract the EDB & transaction logs from the recovered volume. If log files are missing sectors from unreadable regions on the original media, we use eseutil /r with the /a switch to replay available logs & skip the damaged ones, bringing the database to Clean Shutdown state without touching the original drives.
PostgreSQL PANIC on ZFS Pool Import Failure
PostgreSQL running on ZFS-based NAS appliances (TrueNAS, FreeNAS) writes its WAL (Write-Ahead Log) segments sequentially. When a drive in the ZFS pool develops unreadable sectors in the middle of a WAL segment, the pool goes into a DEGRADED or FAULTED state. PostgreSQL logs PANIC: could not locate a valid checkpoint record because the WAL file it needs for crash recovery is physically damaged.
Forcing a ZFS pool import with zpool import -f on a degraded pool replays the intent log (ZIL) to the original disks, advancing the on-disk state past a recoverable point and destroying the previous transaction group consistency. We clone each pool member individually, reconstruct the ZFS vdev layout from the cloned images, & extract the PostgreSQL data directory with the WAL segments intact for proper crash recovery.
SQL Server MDF and LDF Corruption at the Page Level
SQL Server stores data in 8 KB pages grouped into 64 KB extents. When the storage layer returns corrupted bytes, the page checksum in the header fails and SQL Server raises Error 824 (logical consistency) or Error 823 (operating system returned an I/O error). The damaged page is logged to msdb.dbo.suspect_pages with its file_id, page_id, & event_type. Every page-level recovery starts with that row.
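The arithmetic behind locating a suspect page inside the recovered image is simple. A minimal sketch (helper names are ours, not SQL Server's):

```python
PAGE_SIZE = 8192          # SQL Server data page size
PAGES_PER_EXTENT = 8      # 8 pages = one 64 KB extent

def page_byte_offset(page_id: int) -> int:
    """Byte offset of a page inside its MDF/NDF file."""
    return page_id * PAGE_SIZE

def extent_of(page_id: int) -> int:
    """Extent number containing the page."""
    return page_id // PAGES_PER_EXTENT

# Example: suspect_pages reports file_id=1, page_id=1577
offset = page_byte_offset(1577)   # 12,918,784 bytes into the file
```

That offset is where the hex-level inspection of the cloned image begins for the page suspect_pages flagged.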
Torn Writes and the Double-Write Buffer
A torn write occurs when SQL Server issues an 8 KB page write but the underlying storage only commits a partial 512-byte or 4 KB sector before a power loss or controller fault. The page header CRC or torn-page detection bits no longer match the page body. Unlike InnoDB, SQL Server does not use a double-write buffer; it relies on PAGE_VERIFY CHECKSUM (the default since SQL Server 2005) & the transaction log to detect and replay partial writes.
The LDF file holds the log records that preceded the torn write. If the drive is still readable, we image the MDF & LDF as a matched pair (same LSN snapshot), then walk the log forward from the MinLSN of the last full checkpoint. Transactions that committed before the torn write can be reapplied from the log. Transactions that were mid-flight at the moment of failure are rolled back using the undo records written to the LDF.
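Keying the MDF and LDF to the same point in time means reading LSNs out of page headers and log records. The LSN is stored as a 10-byte big-endian value (VLF sequence, log block offset, slot number); a hedged Python illustration assuming that layout:

```python
import struct

def parse_mssql_lsn(raw: bytes) -> str:
    """Split a 10-byte SQL Server LSN into its VLF sequence,
    log block offset, and slot number (all big-endian)."""
    vlf, block, slot = struct.unpack(">IIH", raw)
    return f"{vlf:08X}:{block:08X}:{slot:04X}"

# A 10-byte LSN as it might appear in a page header's m_lsn field
print(parse_mssql_lsn(bytes.fromhex("0000002500000134000A")))
# -> 00000025:00000134:000A
```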
Page Splits and Index B-Tree Corruption
When a clustered index page fills & SQL Server performs a page split, it allocates a new page, moves half the rows across, & updates the sibling pointers in the B-tree. If the storage layer fails mid-split (one sibling updated, the other not), the index becomes logically inconsistent. Queries walking the B-tree hit orphaned records or infinite loops.
We detect the broken sibling pointer by comparing m_prevPage & m_nextPage fields in every page of the index, then rebuild the chain from the recovered image. The IAM (Index Allocation Map) and PFS (Page Free Space) bitmaps are regenerated from the repaired B-tree so the database can be attached cleanly via CREATE DATABASE ... FOR ATTACH_REBUILD_LOG or a full dump-and-reload.
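The sibling-pointer check can be sketched as a walk over the recovered pages. This toy version uses a plain dict and treats a pointer of 0 as end-of-chain, a simplification of SQL Server's (file:page) page IDs:

```python
def find_broken_siblings(pages):
    """pages: dict page_id -> (m_prevPage, m_nextPage).
    Returns (left, right) pairs whose pointers disagree."""
    broken = []
    for pid, (_prev, nxt) in pages.items():
        if nxt == 0:          # 0 = end of the leaf chain (simplified)
            continue
        right = pages.get(nxt)
        if right is None or right[0] != pid:
            broken.append((pid, nxt))
    return broken

# Chain 100 <-> 101 <-> 102, except 102's m_prevPage points at 100
pages = {100: (0, 101), 101: (100, 102), 102: (100, 0)}
print(find_broken_siblings(pages))   # [(101, 102)]
```

Each reported pair marks a link to splice when the chain is rebuilt from the image.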
Log Chain Breaks and LDF Truncation
The LDF is a circular file composed of VLFs (Virtual Log Files). Each VLF carries a sequence number, the previous LSN, & a parity bit. A log chain break occurs when a VLF is physically unreadable or when a tail-log backup was not taken before an accidental restore. SQL Server reports "The log scan number (...) passed to log scan in database (...) is not valid" errors & refuses to bring the database online.
Running ALTER DATABASE ... SET EMERGENCY followed by DBCC CHECKDB ... REPAIR_ALLOW_DATA_LOSS is destructive: it deallocates pages the repair cannot validate. We image the LDF first, parse the surviving VLFs to identify the last consistent LSN, then reconstruct a synthetic log with correct header LSNs so the MDF can be attached & the consistent rows exported before any write touches the original database.
PostgreSQL WAL Replay and Base Backup Reconstruction
PostgreSQL durability depends on the Write-Ahead Log. Every data modification is written to pg_wal/ (formerly pg_xlog/) as a 16 MB segment before the corresponding page in a table or index is flushed. When the server crashes, the recovery process replays committed WAL records from the last checkpoint LSN forward to bring the cluster to a consistent state.
When the storage fails mid-write, that replay is where the damage surfaces.
pg_control and the Checkpoint LSN
The global/pg_control file is 8 KB. It carries the database system identifier, the catalog version, the checkpoint location, the timeline ID, & the minimum recovery point LSN. If pg_control is zeroed or the checkpoint LSN points into a missing WAL segment, the postmaster refuses to start & logs PANIC: could not locate a valid checkpoint record.
We rebuild pg_control from the most recent base backup combined with WAL segment headers that survived on the failing drive. pg_resetwal is avoided on the original cluster because it advances the LSN past unrecoverable records; instead, we run it on a copy of the recovered image once all surviving segments have been replayed.
Walking WAL Records with pg_waldump
Each WAL segment contains XLOG records identified by a resource manager (Heap, Btree, Hash, GIN, SPGist, Transaction, XLOG). We feed the recovered pg_wal directory into pg_waldump to walk forward record by record. The walk stops at the first unreadable record or the last valid XLOG_CHECKPOINT_SHUTDOWN or XLOG_CHECKPOINT_ONLINE record. That LSN becomes the PITR target for the recovered cluster.
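Mapping the target LSN back to the WAL segment file that holds it follows the same naming scheme as pg_walfile_name(). A minimal sketch for the default 16 MB segment size:

```python
WAL_SEG_SIZE = 16 * 1024 * 1024   # default 16 MB segments

def wal_segment_name(tli: int, lsn: int) -> str:
    """WAL segment file containing the given LSN: 8 hex digits of
    timeline, then the segment number split across two 8-digit fields."""
    segno = lsn // WAL_SEG_SIZE
    return f"{tli:08X}{segno // 0x100:08X}{segno % 0x100:08X}"

def parse_lsn(text: str) -> int:
    """Parse PostgreSQL's 'X/Y' LSN notation into a 64-bit integer."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

print(wal_segment_name(1, parse_lsn("0/16000000")))
# -> 000000010000000000000016
```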
TOAST and Relfilenode Extraction
PostgreSQL stores large values (over 2 KB) in the TOAST table associated with the parent relation. If a TOAST relfilenode is damaged, rows in the parent table return missing chunk number errors. We extract surviving TOAST chunks by their chunk_id & chunk_seq, reassemble the detoasted value, & reinsert it against the parent row's OID. When pg_class is damaged, we map relfilenodes back to relation names by matching the on-disk relfilenode to the recovered catalog or to the base/<dboid>/ directory listing on the imaged drive.
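Chunk reassembly itself is mechanical once the surviving rows are extracted. A simplified sketch that ignores TOAST compression (real values may be pglz- or lz4-compressed before chunking):

```python
def reassemble_toast(chunks):
    """chunks: list of (chunk_id, chunk_seq, data) rows salvaged from
    a damaged TOAST relation. Returns chunk_id -> reassembled value."""
    values = {}
    for cid, _seq, data in sorted(chunks, key=lambda c: (c[0], c[1])):
        values.setdefault(cid, bytearray()).extend(data)
    return {cid: bytes(v) for cid, v in values.items()}

# Rows often come back out of order from a page-level scan
rows = [(16403, 1, b"world"), (16403, 0, b"hello ")]
print(reassemble_toast(rows)[16403])   # b'hello world'
```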
MySQL InnoDB Tablespace Extraction
InnoDB organizes data into tablespaces composed of 16 KB pages. The shared system tablespace ibdata1 holds the data dictionary, the rollback segments, the doublewrite buffer, & the change buffer. With innodb_file_per_table=ON, each table also has its own .ibd file. Recovery strategy depends on which file survived.
- Per-table .ibd surviving, ibdata1 damaged
- The .ibd file carries the tablespace header, the index root page (FIL_PAGE_INDEX), & the data pages. We parse each .ibd directly using undrop-for-innodb, rebuild the schema from the .frm files (MySQL 5.7) or the .sdi serialized dictionary (MySQL 8.0), & emit each table as a CSV or SQL dump. The damaged ibdata1 is not required for this path.
- ibdata1 required (shared tablespace, MySQL 5.5 or file-per-table disabled)
- Every table lives inside ibdata1. If the file is partially readable, we scan the image for pages whose FIL header contains the FIL_PAGE_INDEX type (0x45BF), extract the space_id & index_id from each page, & group pages into their parent B-trees. The data dictionary is reconstructed by locating the SYS_TABLES & SYS_INDEXES cluster pages inside space_id 0.
- Redo log (ib_logfile0/1) corruption
- InnoDB refuses to start when ib_logfile0 or ib_logfile1 is truncated or corrupted. Setting innodb_force_recovery=6 lets the server start but blocks writes & can mark pages inconsistent. We extract data from the recovered image with force_recovery set to 1 or 2, dump every table, & rebuild the database in a new initialized instance rather than attempting a destructive in-place redo replay.
- MyISAM legacy tables (.MYD and .MYI)
- Older MySQL installs still carry MyISAM tables. The .MYD file stores row data, & the .MYI file stores the index. A corrupt .MYI is rebuilt with myisamchk -r against the recovered .MYD on the cloned image, never the original.
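The FIL-header scan used on the ibdata1 path can be sketched in a few lines. Field offsets follow the published InnoDB page layout (FIL_PAGE_TYPE at byte 24, space ID at byte 34, index ID at byte 66 inside the index page header):

```python
import struct

INNODB_PAGE_SIZE = 16 * 1024
FIL_PAGE_INDEX = 0x45BF   # B-tree node page type

def scan_index_pages(image: bytes):
    """Yield (file_offset, space_id, index_id) for every page in a
    raw image whose FIL header marks it as a B-tree index page."""
    for off in range(0, len(image) - INNODB_PAGE_SIZE + 1, INNODB_PAGE_SIZE):
        page = image[off:off + INNODB_PAGE_SIZE]
        (page_type,) = struct.unpack_from(">H", page, 24)   # FIL_PAGE_TYPE
        if page_type != FIL_PAGE_INDEX:
            continue
        (space_id,) = struct.unpack_from(">I", page, 34)    # FIL_PAGE_SPACE_ID
        (index_id,) = struct.unpack_from(">Q", page, 66)    # PAGE_INDEX_ID
        yield off, space_id, index_id
```

Grouping the yielded pages by index_id reconstitutes each B-tree even when the data dictionary is gone.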
Oracle ASM Disk Group Reconstruction
Oracle ASM (Automatic Storage Management) presents raw LUNs as ASM disks grouped into disk groups. Each disk carries a 4 KB header block at AU (Allocation Unit) 0 containing the KFDHDB signature, the disk group name, the disk number, & the AU size (default 1 MB, 4 MB on Exadata).
Datafiles, redo logs, control files, & the SPFILE live inside these AUs & are referenced by ASM file numbers in the ASM file directory (file number 1).
KFDHDB Header Scan and Disk Membership
When the disk group fails to mount because headers are damaged, we image every LUN member individually, scan each image for the KFDHDB signature, & rebuild the disk group roster from the disk_nr, disk_grp_nr, & ausize fields. The failgroup assignments are reconstructed from the failgroup_type in the header; without them, ASM mirroring (NORMAL or HIGH redundancy) cannot be unwound correctly.
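A signature scan for candidate header blocks can be sketched as follows. We assume here that the ORCLDISK provision string sits inside the 4 KB header block, so each hit is aligned back to its block boundary before the KFDHDB fields are parsed:

```python
SIGNATURE = b"ORCLDISK"   # provision string in the ASM disk header

def find_asm_headers(image: bytes, block: int = 4096):
    """Return block-aligned offsets of candidate KFDHDB header
    blocks found by scanning a raw LUN image for the signature."""
    hits = []
    pos = image.find(SIGNATURE)
    while pos != -1:
        base = pos - pos % block      # align back to the 4 KB block
        if base not in hits:
            hits.append(base)
        pos = image.find(SIGNATURE, pos + 1)
    return hits
```

Each candidate block is then parsed for disk_nr, disk_grp_nr, & ausize to rebuild the roster.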
File Directory, FST, and Allocation Table
ASM file 1 is the file directory. It maps ASM file numbers to the list of AUs that store the file. The FST (Free Space Table, located at AU 0 block 1 on every ASM disk as physically addressed metadata) tracks free space per disk. The AT (Allocation Table, one per disk) maps physical AUs to logical file extents. Once the file directory is recovered, datafiles are located by filename pattern (+DATA/ORCL/DATAFILE/SYSTEM.256.123456789) & their AUs are pulled from the AT & concatenated into a flat .dbf file outside ASM.
Control File and SPFILE Recovery
Oracle will not open a database without a control file. The control file carries the database SCN, the datafile list, the redo log thread members, & the RMAN backup metadata. If the control file AUs are damaged, we extract them from the ASM image, parse them using the Oracle block header signatures (block type 0x0b for the datafile header, 0x15 for the control file block), & rebuild a consistent control file via CREATE CONTROLFILE ... RESETLOGS against the extracted datafiles. The SPFILE (ASM file number in the parameter file directory) is recovered the same way & converted to a PFILE for bootstrap.
Storage Layer Context: SAN, RAID, and NAS Under the DBMS
A database engine error is a messenger. The file format (MDF, EDB, .ibd, .dbf, pg_wal) sits on top of a stack: filesystem, volume manager, block device, physical drive. Every database recovery starts with the lowest layer that actually lives on spinning platters or NAND cells, not the DBMS itself.
- SAN LUN on Fibre Channel or iSCSI
- SQL Server & Oracle instances frequently mount LUNs from a Dell EMC Unity, NetApp FAS, HPE 3PAR / Primera, or Pure Storage FlashArray over FC or iSCSI. The LUN is a virtual block device; the physical data lives on SAS and SATA drives (Unity, FAS, 3PAR mixed configurations) or on NVMe DirectFlash modules and SAS SSDs (Pure FlashArray) behind a RAID DP, RAID-TP, or distributed erasure coding scheme. We image the physical member drives with PC-3000 Express, reconstruct the LUN offline using the array's stripe geometry, then hand the virtual block device to the filesystem parser.
- Hardware RAID (Dell PERC, HP Smart Array, LSI MegaRAID)
- Mid-range database servers run on PERC H730, H740, or HP P440ar controllers presenting RAID 5, 6, or 10 logical drives. When the controller fails or multiple members drop, forcing a rebuild on the original drives overwrites parity & destroys recoverable data. We image each member, detect stripe size & parity rotation from the cloned images (left symmetric, right asymmetric, Q-shift for RAID 6), & reconstruct the array in software. Original drives remain untouched throughout. See our RAID data recovery process for the full workflow.
- ZFS zvol and NAS Volumes (TrueNAS, Synology, QNAP)
- PostgreSQL & MySQL on TrueNAS zvols or Synology Btrfs volumes depend on the pool's health. A DEGRADED raidz2 vdev will still serve I/O but returns corrupted blocks once redundancy is exhausted. We clone each pool member, import the pool read-only on a lab bench using zpool import -o readonly=on -R /mnt/recovery against the clones, & extract the DBMS data directory before any scrub or resilver touches the original drives. See our NAS data recovery workflow for the matching NAS procedure.
- Ceph RBD and Distributed Block Storage
- Kubernetes-hosted databases (PostgreSQL operators, Percona XtraDB clusters) frequently consume Ceph RBD images backed by BlueStore OSDs spread across many drives. When enough OSDs fail to breach the CRUSH failure domain, the RBD image becomes unreadable. We image every failed OSD drive, extract the BlueStore object metadata, & reassemble the RBD image object by object (4 MB default object size) until the database files inside the ext4 or XFS filesystem on the image are accessible.
- VMFS Datastores and Virtual Disk Chains
- Production VMs running SQL Server or Oracle often live on VMFS 6 datastores with snapshot chains (base VMDK plus many -delta.vmdk files). A damaged snapshot breaks the chain & the running VM hangs. We image the underlying VMFS extents, parse the VMFS resource metadata, walk the snapshot chain from base to current, & flatten it into a single consistent VMDK before mounting the guest filesystem to extract the database files.
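For the Ceph RBD case, object-by-object reassembly relies on RBD's deterministic object naming: each 4 MB slice of the image lives in a RADOS object named from a per-image block prefix plus a 16-digit hex index. A sketch with a hypothetical prefix:

```python
OBJECT_SIZE = 4 * 1024 * 1024   # default RBD object size

def rbd_object_for_offset(block_prefix: str, byte_offset: int):
    """RADOS object holding a given byte of an RBD image, plus the
    offset of that byte inside the object."""
    idx = byte_offset // OBJECT_SIZE
    return f"{block_prefix}.{idx:016x}", byte_offset % OBJECT_SIZE

# Hypothetical block-name prefix for illustration
name, off = rbd_object_for_offset("rbd_data.ab12cd34ef56", 9 * 1024 * 1024)
print(name, off)   # rbd_data.ab12cd34ef56.0000000000000002 1048576
```

Iterating that mapping over the image size produces the full list of objects to pull from the recovered OSDs.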
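For the VMFS snapshot-chain case, walking from the newest delta back to the base disk follows the parentFileNameHint key in each VMDK descriptor. A simplified sketch over descriptor text (real descriptors may be embedded inside the sparse file rather than standalone):

```python
def walk_snapshot_chain(descriptors, leaf):
    """descriptors: dict of vmdk filename -> descriptor text.
    Follows parentFileNameHint from the newest delta back to the
    base disk and returns the chain base-first."""
    chain, cur = [], leaf
    while cur is not None:
        chain.append(cur)
        cur = None
        for line in descriptors[chain[-1]].splitlines():
            if line.startswith("parentFileNameHint"):
                cur = line.split('"')[1]   # value is quoted
    return chain[::-1]

descs = {
    "db-000002.vmdk": 'parentFileNameHint="db-000001.vmdk"',
    "db-000001.vmdk": 'parentFileNameHint="db.vmdk"',
    "db.vmdk": 'createType="vmfs"',       # base disk: no parent hint
}
print(walk_snapshot_chain(descs, "db-000002.vmdk"))
# -> ['db.vmdk', 'db-000001.vmdk', 'db-000002.vmdk']
```

Flattening then applies each delta's grains onto the base in that order.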
Two-Layer Recovery Workflow
Database recovery at Rossmann runs in two layers. Layer 1 is physical drive repair: PC-3000 imaging, head swaps in a 0.02µm ULPA-filtered clean bench, & firmware correction. Layer 2 is logical database structure repair on the recovered sector-level clone. Both layers run in-house at our Austin, TX lab. One lab handles both.
Layer 1: Physical Drive Recovery
- Write-blocked forensic imaging using PC-3000 and DeepSpar with conservative retry settings
- Head swap in 0.02µm ULPA-filtered clean bench if heads have failed
- Firmware repair for drives reporting wrong capacity, wrong model ID, or refusing to spin
- Bad sector management with adaptive read parameters to maximize data yield from degraded platters
- Full sector-level clone to a healthy target drive before any database work begins
Layer 2: Database Structure Repair
- Mount recovered MDF/EDB/InnoDB files from the cloned image in an isolated environment
- Page-level integrity scan to identify torn pages, checksum failures, and orphaned records
- Hex-level repair of damaged page headers and allocation metadata
- Transaction log reconstruction when LDF/E00 log files are missing or corrupt
- Final export: restored database files, mailbox PSTs, or raw table dumps depending on engine
How We Extract Databases from Failed Server Arrays
Production databases rarely sit on single drives. SQL Server, Exchange, & Oracle instances typically run on RAID 5, 6, or 10 arrays behind a Dell PowerEdge PERC or HP Smart Array controller. When the controller fails or multiple drives drop simultaneously, forcing a rebuild destroys parity & overwrites the data you need.
- Clone each member drive individually. Every RAID member is connected to a PC-3000 Express or Portable III hardware unit via write-blocked SATA or SAS channels. We image each drive to a healthy target, handling bad sectors with adaptive read parameters. No data is written to the original media.
- Reconstruct the virtual array offline. Using the cloned images, we detect the stripe size, parity rotation, & drive order. For RAID 5 & RAID 6 arrays, we virtually reassemble the array without writing to any original disk, then parse the LVM or NTFS volume from the reconstructed block device.
- Extract & repair database files. Once the virtual array is mounted, we locate the MDF, EDB, InnoDB tablespace, or other database files & run engine-specific repair on the recovered copies. The original drives remain untouched throughout.
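Step two's parity detection amounts to testing candidate layouts against the cloned images until the filesystem parses cleanly. As an illustration, the logical-to-physical mapping for RAID 5 left-symmetric (the common Linux md default) looks like this, at chunk granularity with helper names of our own:

```python
def locate_chunk(data_chunk: int, n_disks: int):
    """Map a logical data chunk number to (disk, stripe_row) for a
    RAID 5 left-symmetric layout: parity rotates backward, data
    chunks continue on the disk after parity and wrap around."""
    data_per_row = n_disks - 1
    row = data_chunk // data_per_row
    d = data_chunk % data_per_row
    parity_disk = (n_disks - 1) - (row % n_disks)
    disk = (parity_disk + 1 + d) % n_disks
    return disk, row

# Three-disk array: the first six data chunks land on
# disk0, disk1, disk2, disk0, disk1, disk2
print([locate_chunk(i, 3)[0] for i in range(6)])   # [0, 1, 2, 0, 1, 2]
```

Other layouts (right asymmetric, RAID 6 Q-shift) swap out only the parity_disk and disk formulas; the reconstruction harness tries each until the stripes line up.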
Consumer SSD Failures in Database Servers: Phison and SMI Controller Panics
Small business database servers are sometimes built using budget consumer SATA SSDs as primary storage. When an unexpected power loss occurs while the controller is flushing its internal SLC cache to TLC/QLC NAND, the Flash Translation Layer (FTL) mapping table corrupts. The drive drops to 0 bytes or becomes unresponsive.
- Phison PS3111-S11 (SATAFIRM S11 bug)
- Reports 0-byte capacity or shows "SATAFIRM S11" as its model string after power loss during cache flush. We connect the drive to PC-3000 SSD, enter Technological Mode via ATA Vendor Specific Commands, & rebuild the FTL translator from surviving NAND service area copies. Database files cached on the SSD at the time of failure are recoverable if TRIM has not already zeroed the blocks.
- Silicon Motion SM2259XT (BSY state)
- Enters a locked BSY (Busy) state showing 0 bytes or incorrect capacity after FTL corruption. The drive is shorted into Safe Mode via PCB test points, then PC-3000 SSD loads a volatile microcode loader into the controller SRAM and rebuilds the translator. This procedure is impossible for standard recovery software, which requires the controller to be functional.
TRIM/UNMAP note: modern SSDs with TRIM enabled permanently erase deleted blocks. If the database engine deleted records before the crash, those records are unrecoverable regardless of recovery method.
Marvell 88SS1074 Database Drive Failures
Budget SATA SSDs running the Marvell 88SS1074 controller are common in small business database servers where cost drives the hardware decision. When the firmware boot sequence corrupts after a power loss or NAND degradation event, the drive enters a permanent BSY (Busy) state. The SATA bus reports the drive but returns no data; the operating system hangs on mount attempts, and the SQL Server or PostgreSQL instance cannot start.
Recovery requires PC-3000 SSD with the Marvell VanGogh family utility. We block the corrupted main firmware from loading, boot the controller into a minimal service mode via ATA Vendor Specific Commands, and rebuild the Flash Translation Layer from surviving NAND metadata. Once the FTL is reconstructed, the drive responds normally and we image the full capacity to extract the database files.
NVMe Database Cache Failures: Phison E12 and E16 ROM Mode
High-performance NVMe SSDs with Phison E12 (PCIe 3.0) and E16 (PCIe 4.0) controllers are frequently deployed as L2ARC read caches or ZFS SLOG write caches in database arrays. During unexpected power loss while the controller is flushing its FTL mapping table to NAND, the drive locks into a protective ROM state. The server BIOS detects the NVMe device but reports a generic capacity of 1 GB or 2 MB instead of the actual drive size.
Standard NVMe management tools (nvme-cli, Samsung Magician, Intel SSD Toolbox) cannot communicate with a drive in ROM mode because the controller's main firmware never loads. We connect the drive to PC-3000 NVMe, enter Technological Mode via the PCIe interface, and upload a custom microcode loader directly into the controller's SRAM. The loader replaces the corrupted firmware in volatile memory, allowing us to reconstruct the FTL from surviving NAND service area copies and extract the cached database segments before the operating system triggers a destructive TRIM/UNMAP command on reboot.
Lenovo Server NVMe: Marvell 88SS1093 Timeout Errors
Enterprise database servers from Lenovo (ThinkSystem SR series) and OEM partners using Ramaxel-branded NVMe drives frequently run the Marvell 88SS1093 controller. When NAND cells degrade past the controller's internal ECC threshold, or a firmware translation fault corrupts the logical-to-physical block map, the drive generates persistent timeout errors during database I/O. SQL Server logs Error 823 (I/O error) and PostgreSQL reports "could not read block" failures that trace back to the NVMe timeout.
We use PC-3000 SSD Extended to block the main firmware from loading and boot the drive into Extended (service) mode. From there, we read the TCG Opal subsystem configuration (if enterprise encryption is enabled), rebuild the logical image from raw NAND, and extract the database files. This avoids the timeout loop entirely because the drive's normal I/O path is never engaged.
Recovering Databases from Encrypted Server Drives
Many Windows Server deployments use BitLocker, and Linux production servers frequently use LUKS full-disk encryption. When the underlying drive fails, the physical failure prevents access to the encrypted volume. After imaging, the encryption prevents reading the database files without the original key. We handle this in two separate phases.
- Phase 1: Physical recovery of the encrypted volume
- The failing drive is connected to PC-3000 Express or Portable III in write-blocked mode. We create a sector-level clone of the entire encrypted partition to a healthy target drive, including the BitLocker metadata header and LUKS key slots. Bad sectors are handled with adaptive read parameters; the clone captures every readable sector without writing to the original media.
- Phase 2: Decryption & database extraction
- Once the physical clone is complete, you provide the BitLocker recovery key or LUKS passphrase. We mount the encrypted volume on the cloned image and extract the database files (MDF, EDB, InnoDB tablespace, or PostgreSQL data directory). If the database uses SQL Server Transparent Data Encryption (TDE), the database master key and certificate must also be available; without it, the MDF file contents remain encrypted even after the volume is unlocked.
We don't break or bypass encryption. Recovery requires the original recovery key, passphrase, or TDE certificate. Without credentials, the encrypted data is mathematically unrecoverable regardless of the physical condition of the drive.
Database Recovery from Virtual Machine Hosts
Production databases frequently run inside VMware ESXi, Hyper-V, or Proxmox VE virtual machines. When the host server fails, the database files are locked inside VMDK, VHDX, or raw disk images on a VMFS, ReFS, or ZFS datastore. Software tools can't reach the database because the virtualization layer is broken.
- Image the physical host drives. Each drive from the host (or SAN shelf) is cloned via PC-3000 in write-blocked mode, exactly as with a bare-metal RAID array recovery.
- Reconstruct the datastore. The VMFS, ReFS, or ZFS volume is reassembled from the cloned images. We locate the target VM's virtual disk files (VMDK, VHDX, or QCOW2) and verify their integrity.
- Mount the virtual disk & extract database files. The virtual disk is mounted as a loopback device. We navigate the guest OS file system to locate the MDF, EDB, InnoDB tablespace, or PostgreSQL data directory, then run the appropriate database repair on the extracted files.
Accidental VM deletion, snapshot corruption, & SAN LUN failures are all recoverable if the underlying physical storage still contains the VMDK/VHDX data blocks. TRIM on thin-provisioned datastores is the exception; if VMFS UNMAP has already zeroed the blocks, those sections are permanently gone.
Supported Database Engines
We recover SQL Server (MDF/NDF/LDF), Exchange Server (EDB), MySQL & MariaDB (InnoDB tablespace files), Oracle (datafiles, ASM diskgroups), QuickBooks (QBW company files), PostgreSQL data directories, MongoDB (WiredTiger collection files), & SharePoint content databases. The physical drive recovery process is the same regardless of database engine.
Microsoft SQL Server
MDF/NDF/LDF recovery. Error 5171, 823, 824 resolution. Suspect mode databases. DBCC alternatives that preserve data instead of deleting it. SQL Server 2000 through 2022.
Microsoft Exchange Server
EDB file recovery with per-mailbox extraction. Dirty shutdown repair, -1018 checksum errors, JET database corruption. Exchange 2003 through 2019. PST export per user.
MySQL / MariaDB
InnoDB tablespace recovery (ibdata1, .ibd per-table files). Redo log reconstruction when ib_logfile0/1 are corrupted. MyISAM .MYD/.MYI repair for legacy tables. MySQL 5.0 through 8.x, MariaDB 10.x.
Oracle Database
Datafile (.dbf) recovery, ASM diskgroup reassembly, control file reconstruction. ORA-01578 block corruption, ORA-00600 internal errors. Oracle 11g through 21c, including CDB/PDB multitenant.
QuickBooks
QBW company file recovery from failed drives. Error -6000/-301, -6000/-82, and C=343 resolution. Sybase ASA page-level repair. QuickBooks 2006 through 2025, Pro/Premier/Enterprise.
PostgreSQL
Data directory recovery from failed drives. WAL segment reconstruction, pg_control repair, TOAST table recovery, and per-table COPY export from recovered clusters. PostgreSQL 9.x through 18.
MongoDB
WiredTiger collection file extraction, BSON document recovery, oplog replay, and catalog metadata rebuild. Sharded cluster reassembly and GridFS file reconstruction. MongoDB 3.2 through 8.0.
Microsoft SharePoint
Content database recovery from failed SAN/RAID arrays. MDF extraction, Remote BLOB Storage stub reconciliation, torn page repair, and FILESTREAM relinking. SharePoint 2010 through Subscription Edition.
Stop Before Running Repair Commands
DBCC REPAIR_ALLOW_DATA_LOSS, eseutil /p, and innodb_force_recovery=6 are destructive operations. They delete data they cannot validate. If the corruption originates from a physical drive problem, these commands destroy data that a proper drive recovery would have preserved. Power down the server and contact us before running repair utilities.
How Much Does Database Recovery Cost?
Database recovery pricing is based on the physical condition of the drive, not the database engine. The database repair phase is included at no additional cost. Pricing follows five tiers from a simple file copy to a head swap on a mechanically failed drive. Drives in RAID arrays are priced per member drive.
Simple Copy
Low complexity. Your drive works; you just need the data moved off it.
$100
3-5 business days
Functional drive; data transfer to new media
Rush available: +$100
File System Recovery
Low complexity. Your drive isn't recognized by your computer, but it's not making unusual sounds.
From $250
2-4 weeks
File system corruption. Accessible with professional recovery software but not by the OS
Starting price; final depends on complexity
Firmware Repair
Medium complexity. Your drive is completely inaccessible. It may be detected but shows the wrong size or won't respond.
$600–$900
3-6 weeks
Firmware corruption: ROM, modules, or translator tables corrupted; requires PC-3000 terminal access
CMR drive: $600. SMR drive: $900.
Head Swap
High complexity. Most common. Your drive is clicking, beeping, or won't spin; the internal read/write heads have failed.
$1,200–$1,500
4-8 weeks
Head stack assembly failure. Transplanting heads from a matching donor drive on a clean bench
50% deposit required. CMR: $1,200-$1,500 + donor. SMR: $1,500 + donor.
Surface / Platter Damage
High complexity. Your drive was dropped, has visible damage, or a head crash scraped the platters.
$2,000
4-8 weeks
Platter scoring or contamination. Requires platter cleaning and head swap
50% deposit required. Donor parts are consumed in the repair. Most difficult recovery type.
Hardware Repair vs. Software Locks
Our "no data, no fee" policy applies to hardware recovery. We do not bill for unsuccessful physical repairs. If we replace a hard drive read/write head assembly or repair a liquid-damaged logic board to a bootable state, the hardware repair is complete and standard rates apply. If data remains inaccessible due to user-configured software locks, a forgotten passcode, or a remote wipe command, the physical repair is still billable. We cannot bypass user encryption or activation locks.
No data, no fee. Free evaluation and firm quote before any paid work. Full guarantee details. Head swap and surface damage require a 50% deposit because donor parts are consumed in the attempt.
Rush fee: +$100 to move to the front of the queue.
Donor drives: matching drives used for parts. Typical donor cost: $50–$150 for common models, $200–$400 for rare or high-capacity models. We source the cheapest compatible donor available.
Target drive: The destination drive we copy recovered data onto. You can supply your own or we provide one at cost plus a small markup. For larger capacities (8TB, 10TB, 16TB and above), target drives cost $400+ extra. All prices are plus applicable tax.
Why Recover Databases With Rossmann Repair Group
Physical drive repair & logical database structure repair run in one lab with no outsourcing. Five published pricing tiers based on drive condition. Database repair is included at no additional cost. Free evaluation before any paid work begins. No data, no fee. Direct access to the engineer doing the work.
Two-layer recovery
Physical drive repair (PC-3000, head swaps, firmware correction) followed by logical database structure repair. One lab handles both.
Multi-engine support
SQL Server MDF/NDF, Exchange EDB, MySQL InnoDB, PostgreSQL. The drive recovery is universal; the database repair adapts to each engine's on-disk format.
Transparent pricing
Five published tiers based on drive condition. Database repair is included. If we recover nothing usable, you pay $0.
Direct engineer access
Talk to the person doing the work. No sales scripts, no account managers, no call center.
No evaluation fee
Free assessment of drive condition and recovery feasibility before any paid work begins.
Image-first workflow
Every drive is forensically imaged before any repair attempts. Original media is never modified. All database repair runs against the cloned image.
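Because all database repair runs against the cloned image, the logical layer is essentially parsing raw sectors for database page structures. As a hedged illustration only (not our production tooling), here is a minimal Python sketch of scanning a recovered image for MySQL InnoDB B-tree index pages using the FIL-header page-type field; the 16 KiB page size and the type constant are standard InnoDB defaults, and real recovery also validates checksums, LSNs, and space IDs.

```python
# Hedged sketch: locate InnoDB index pages in a raw drive image.
# Assumptions (standard InnoDB defaults, not specific to any one recovery case):
#   - 16 KiB page size
#   - 2-byte big-endian page-type field at byte offset 24 of the FIL header
#   - 0x45BF (FIL_PAGE_INDEX) marks a B-tree node, where row data lives

PAGE_SIZE = 16 * 1024          # default InnoDB page size
FIL_PAGE_TYPE_OFFSET = 24      # page-type field inside the FIL header
FIL_PAGE_INDEX = 0x45BF        # B-tree index page

def find_index_pages(image: bytes) -> list[int]:
    """Return byte offsets of pages whose FIL header marks them as index pages."""
    hits = []
    for off in range(0, len(image) - PAGE_SIZE + 1, PAGE_SIZE):
        page_type = int.from_bytes(
            image[off + FIL_PAGE_TYPE_OFFSET : off + FIL_PAGE_TYPE_OFFSET + 2],
            "big",
        )
        if page_type == FIL_PAGE_INDEX:
            hits.append(off)
    return hits

if __name__ == "__main__":
    # Tiny synthetic "image": page 0 blank, page 1 tagged as an index page.
    img = bytearray(PAGE_SIZE * 2)
    img[PAGE_SIZE + 24 : PAGE_SIZE + 26] = FIL_PAGE_INDEX.to_bytes(2, "big")
    print(find_index_pages(bytes(img)))   # [16384]
```

In practice, a carve like this is only the first step: pages found this way are grouped by space ID and page number, then records are extracted from each page against the table's schema.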
Data Recovery Standards & Verification
Our Austin lab operates on a transparency-first model. We use industry-standard recovery tools, including PC-3000 and DeepSpar, combined with strict environmental controls to make sure your hard drive is handled safely and properly. This approach allows us to serve clients nationwide with consistent technical standards.
Open-drive work is performed in a ULPA-filtered laminar-flow bench, validated for particles down to 0.02 µm and verified with TSI P-Trak instrumentation.
Transparent History
Serving clients nationwide via mail-in service since 2008. Our lead engineer holds PC-3000 and HEX Akademia certifications for hard drive firmware repair and mechanical recovery.
Media Coverage
Our repair work has been covered by The Wall Street Journal and Business Insider, with CBC News reporting on our pricing transparency. Louis Rossmann has testified in Right to Repair hearings in multiple states and founded the Repair Preservation Group.
Aligned Incentives
Our "No Data, No Charge" policy means we assume the risk of the recovery attempt, not the client.
Technical Oversight
Louis Rossmann
Louis Rossmann's well-trained staff review our lab protocols to ensure technical accuracy and honest service. Since 2008, his focus has been on clear technical communication and accurate diagnostics rather than sales-driven explanations.
We believe in proving standards rather than just stating them. We use TSI P-Trak instrumentation to verify that clean-air benchmarks are met before any drive is opened.
See our clean bench validation data and particle test video
Database Recovery FAQ
What types of databases do you recover?
Can you recover a database from a physically failed drive?
How is database recovery priced?
Do you need our entire server?
How long does database recovery take?
Why does SQL Server show Suspect Mode or Exchange report Dirty Shutdown?
What is the difference between logical database corruption and physical drive failure?
Can you recover a database from a BitLocker or LUKS encrypted server drive?
Can you recover a database running inside a VMware or Hyper-V virtual machine?
Is rush service available for database recovery?
Can you recover a database encrypted by ransomware?
Can you recover ERP or accounting databases like SAP HANA, NetSuite, or Sage?
Can you recover legacy database formats like FileMaker, Lotus Notes, and IBM Db2?
Why is our database NVMe cache drive showing up as 2MB in BIOS?
How do you recover a database from a failed SAN or iSCSI LUN?
What if our server runs multiple databases on the same drive array?
How do you recover an SQL Server MDF after a torn page (Error 824) on a SAN LUN?
Can you recover a PostgreSQL cluster when pg_control is zeroed and WAL segments are missing?
What does MySQL InnoDB tablespace extraction look like when ibdata1 is corrupted?
How do you reconstruct an Oracle ASM disk group when disk headers are damaged?
Do you diagnose the storage stack below the DBMS (iSCSI, FC LUN, ZFS zvol, Ceph RBD)?
How to Ship Server Drives for Database Recovery
We serve all 50 states through mail-in data recovery. Server drives require more careful packing than consumer drives because enterprise SAS drives & RAID members must arrive with their slot positions documented. Label each drive with its slot number before removal. Double-box enterprise SAS drives. Include the database engine name & file names if known.
- Label each drive before removal. Use masking tape & a marker. Write the slot number (0, 1, 2...) and the server model. For RAID arrays, the slot order determines stripe reconstruction.
- Wrap each drive individually. Use anti-static bags if available. Wrap in bubble wrap. Never let bare drives touch each other in transit.
- Ship in a sturdy box with 2+ inches of padding. Double-box for enterprise SAS drives. FedEx & UPS both work. Carrier declared-value coverage applies to the physical hardware only, not the data stored on it.
- Include a note. Database engine (SQL Server, Exchange, PostgreSQL), file names if known (e.g., the .mdf filename), & your contact info. The more context we have, the faster the diagnosis.
Ship to: 2410 San Antonio Street, Austin, TX 78705. Local Austin drop-offs accepted during Mon-Fri 10am-6pm CT. Call (512) 212-9111 before shipping if you have questions about packing a specific server configuration.
Related Recovery Services
MDF/NDF file recovery, suspect mode, Error 5171/823/824
EDB corruption, mailbox extraction, PST export
InnoDB tablespace, ibdata1, redo log reconstruction
ASM diskgroups, datafile block repair, control files
QBW company files, Error -6000/-301, Sybase repair
WAL reconstruction, pg_control repair, TOAST recovery
Content database extraction, RBS reconciliation
WiredTiger extraction, oplog replay, BSON recovery
RAID 0, 1, 5, 6, 10 arrays and NAS devices
Recover Your Database
Call Mon-Fri 10am-6pm CT or email for a free evaluation.