RAID 60 Data Recovery Services
RAID 60 stripes across multiple RAID 6 sub-arrays, combining dual-parity fault tolerance per span with striped throughput. When one span degrades beyond its two-drive tolerance, the stripe layer takes the entire volume offline. We recover RAID 60 arrays by imaging every member through write-blocked channels, identifying span boundaries, reconstructing each sub-array independently, and assembling the cross-span stripe from cloned images. No data, no charge.

RAID 60 Array Geometry
RAID 60 nests multiple RAID 6 sub-arrays (spans) under a RAID 0 stripe layer. Each span provides dual parity (P + Q) independently. The stripe layer distributes I/O across spans for sequential throughput that a single RAID 6 cannot match.
Drive Count and Layout
- Minimum 8 drives: two spans of four. Each span must contain at least four members to support RAID 6 (two data + two parity).
- Typical production deployments use 12 to 24 drives across three or four spans. A 24-drive array with four spans of six yields 16 drives' worth of usable capacity (each span loses two members to parity).
- Adding spans increases sequential throughput linearly (more parallel I/O paths) at the cost of additional parity overhead and recovery complexity.
Fault Tolerance Per Span
- Each RAID 6 span tolerates two simultaneous drive failures independently. A four-span array can lose up to eight drives total, provided the failures distribute across spans.
- If a third drive fails within any single span before that span completes its rebuild, the span loses all parity protection. The RAID 0 stripe layer cannot compensate; the entire volume goes offline.
- Usable capacity = (drives per span − 2) × number of spans × smallest member size.
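The capacity formula above translates directly into code. A quick sanity check (Python, illustrative only):

```python
def raid60_usable_capacity(drives_per_span: int, spans: int,
                           smallest_drive_tb: float) -> float:
    """Usable capacity of a RAID 60 array in TB.

    Each RAID 6 span loses two members to P and Q parity, and
    capacity is limited by the smallest member drive.
    """
    if drives_per_span < 4:
        raise ValueError("RAID 6 spans need at least 4 drives")
    if spans < 2:
        raise ValueError("RAID 60 needs at least 2 spans")
    return (drives_per_span - 2) * spans * smallest_drive_tb

# 24-drive array: four spans of six 18 TB drives
print(raid60_usable_capacity(6, 4, 18.0))  # 288.0 TB usable (16 drives' worth)
```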
Why Enterprises Deploy RAID 60 Over RAID 6
RAID 6 provides dual-parity fault tolerance but bottlenecks on write performance as array size grows. RAID 60 solves this by distributing writes across multiple independent RAID 6 spans, each with its own P + Q parity calculations running in parallel.
Database Servers
SQL Server and Oracle instances on Dell PowerEdge R740xd or HP ProLiant DL380 Gen10 servers use RAID 60 across 12 to 24 SAS drives. The striped spans provide the IOPS density that single-span RAID 6 cannot match for transactional workloads.
Virtualization Hosts
VMware ESXi and Hyper-V hosts running 50+ VMs need both the fault tolerance and the parallel I/O paths that RAID 60 provides. A single RAID 6 with 24 drives creates excessive rebuild times; splitting into four 6-drive spans limits each rebuild to reading five surviving members instead of 23.
Video Surveillance
Large-scale surveillance systems writing 200+ camera feeds continuously require sustained sequential write throughput that RAID 60 delivers through its parallel span architecture. The dual-parity per span ensures footage survives drive failures without interrupting recording.
Rebuild Risks with 18 TB+ Enterprise Drives
High-capacity enterprise drives (Seagate Exos X18/X20, WD Ultrastar DC HC550/HC560) reduce the per-drive cost of RAID 60, but they increase rebuild time per span and the probability of a secondary failure during that window.
- A six-drive span with 18 TB members requires reading five surviving drives in full during a single-drive rebuild: 90 TB of source reads feeding an 18 TB write to the replacement. At 150 MB/s sustained throughput per drive, the theoretical floor is roughly 33 hours; retries, bad sectors, and controller overhead routinely push real rebuilds past 72 hours.
- With 20 TB Ultrastar HC560 drives, rebuild time stretches past 96 hours. The surviving drives in that span have been running for the same number of power-on hours as the one that already failed. Each additional hour of sustained sequential reads increases the probability of a latent sector error or secondary head failure.
- If the array loses one drive per span and both spans attempt simultaneous rebuilds (common with hot spares configured per span), the controller's I/O scheduler splits bandwidth between rebuild traffic and production I/O. Rebuild times double or triple under production load.
- RAID 60's per-span isolation limits rebuild scope compared to a monolithic RAID 6. A 24-drive RAID 60 with four 6-drive spans only rebuilds within the affected span (a five-drive read). The same 24 drives in a flat RAID 6 would require reading all 23 surviving members, nearly five times the I/O exposure.
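The rebuild-window arithmetic above can be estimated with a simple model. The 150 MB/s throughput figure and the load factor are ballpark assumptions, not measurements from any specific controller:

```python
def rebuild_hours(drive_tb: float, throughput_mb_s: float,
                  load_factor: float = 1.0) -> float:
    """Estimate hours to rebuild one failed member of a RAID 6 span.

    The controller must write the full capacity of the replacement
    drive; load_factor > 1 models slowdown from concurrent
    production I/O competing with rebuild traffic.
    """
    seconds = (drive_tb * 1e12) / (throughput_mb_s * 1e6) * load_factor
    return seconds / 3600

# 18 TB member at 150 MB/s sustained, idle array
print(round(rebuild_hours(18, 150), 1))       # 33.3 h best case
# Same rebuild competing with production I/O (assumed 2.5x slowdown)
print(round(rebuild_hours(18, 150, 2.5), 1))  # 83.3 h
```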
Critical: If your RAID 60 array shows a degraded span, do not force a rebuild on aging drives with high power-on hours. Power down the system, label each drive with its bay position and span assignment, and contact us. Offline imaging eliminates the rebuild risk entirely.
Controller-Specific RAID 60 Implementations
Each RAID controller family stores span membership, parity rotation, and stripe configuration in proprietary on-disk metadata. Recovery requires parsing that metadata from cloned images to reconstruct the array without the original controller hardware.
Dell PERC H740P / H755
The PERC H740P supports RAID 60 with up to 8 spans of 32 drives per span. Span membership is recorded in DDF (Disk Data Format) metadata headers on each member. When one drive drops out of a span and rejoins with a stale timestamp, the controller marks it "Foreign." Importing a foreign config with stale data forces a backward resync that overwrites current blocks with outdated data across every stripe changed while the drive was absent.
We image all members, inspect DDF headers in a hex editor to identify epoch timestamps per drive, and determine span boundaries before any assembly decision. This prevents the stale-member resync failure that is the most common cause of PERC RAID 60 data destruction.
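As a rough illustration of what that inspection looks for, here is a minimal sketch that checks a cloned member image for the SNIA DDF anchor signature. The field layout shown is simplified; real DDF headers carry timestamps, sequence numbers, and configuration records well beyond what is parsed here:

```python
import struct

DDF_SIGNATURE = 0xDE11DE11  # SNIA DDF anchor header magic

def read_ddf_anchor(image_path: str, sector_size: int = 512):
    """Check the final sector of a cloned member image for a DDF
    anchor header. Simplified sketch: only the signature, CRC, and
    header GUID are pulled out; actual recovery work walks the full
    DDF structure to extract span membership and epoch data."""
    with open(image_path, "rb") as f:
        f.seek(-sector_size, 2)  # anchor lives in the drive's last sector
        sector = f.read(sector_size)
    sig, crc, guid = struct.unpack_from(">II24s", sector, 0)
    if sig != DDF_SIGNATURE:
        return None  # no DDF metadata at the expected location
    return {"crc": crc, "guid": guid}
```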
Broadcom MegaRAID 9460-16i
The MegaRAID 9460-16i handles RAID 60 with up to 240 drives and supports spanning across multiple enclosures via SAS expanders. The controller writes its own DDF metadata plus a proprietary MegaRAID configuration record to each member. Recovery from this controller requires parsing both metadata layers to correctly identify span membership, because the DDF layer may reference physical enclosure positions that changed if drives were reseated during troubleshooting.
PC-3000 RAID Edition includes a MegaRAID metadata parser that reads both DDF and proprietary records from cloned images. When metadata is partially corrupted (common after firmware crashes), we fall back to raw data continuity analysis across member images to determine span boundaries.
HP Smart Array P408i-a / P816i-a
HP Smart Array controllers on Gen10 ProLiant servers support RAID 60 but store configuration in a proprietary format distinct from the DDF standard used by Dell and Broadcom. The controller writes span membership to a reserved 256 KB region on each drive (the "RIS" area). When the Smart Storage Battery fails (POST Error 313 on Gen9/Gen10), cached writes may be trapped in the volatile cache, and the controller may refuse to present the virtual disk until the battery is replaced or the cache is manually flushed.
We bypass the controller by imaging each member drive directly via HBA passthrough, reading the RIS metadata from each image, and reconstructing the RAID 60 in PC-3000 using the parsed span map. For cache-trapped writes, we power the cache module independently to flush pending data before imaging.
Our RAID 60 Recovery Process
RAID 60 recovery has two layers: reconstruct each RAID 6 span from its member images, then assemble the RAID 0 stripe across all reconstructed spans. Both layers use offline reconstruction from cloned images. No data is written to original drives.
- Free evaluation and span mapping: Document the controller type, total drive count, span count, drives per span, stripe size, and parity rotation. If you have screenshots of the controller BIOS (PERC Configuration Utility, MegaRAID WebBIOS, Smart Storage Administrator), these accelerate parameter detection.
- Write-blocked forensic imaging: Every member drive is connected to PC-3000 or DeepSpar hardware through a write-blocked channel. We clone the full LBA range of each member, including reserved sectors where controller metadata is stored. Drives with mechanical failures receive head swaps or motor work on our 0.02 µm filtered clean bench before imaging.
- Span boundary identification: Using controller metadata parsed from the cloned images, we map which drives belong to each RAID 6 sub-array. When metadata is damaged or missing, we analyze data continuity patterns across member images to determine span membership empirically.
- Per-span RAID 6 reconstruction: Each span is reconstructed independently using PC-3000 RAID Edition. We identify the stripe block size, P and Q parity rotation patterns, and member ordering within each span. Both parity calculations are validated against sample stripes before the span is marked as reconstructed.
- Cross-span RAID 0 stripe assembly: The reconstructed span images are loaded as virtual drives and striped according to the controller's cross-span configuration. We verify the assembled volume by checking filesystem superblocks, partition tables, and directory structures.
- Filesystem extraction and delivery: The virtual array is parsed (NTFS, EXT4, XFS, ZFS, VMFS) using R-Studio or UFS Explorer. Files are extracted to verified target media. You receive a file listing for review before final delivery.
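The per-span parity validation step above can be illustrated with a minimal RAID 6 P/Q computation. This sketch uses the common GF(2^8) scheme with polynomial 0x11D and generator coefficients 2^i per member index; a given controller's rotation and coefficient assignment may differ:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with reduction polynomial 0x11D,
    the field commonly used for RAID 6 Q parity."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
        b >>= 1
    return p

def pq_parity(data_blocks: list) -> tuple:
    """Compute P (XOR) and Q (Reed-Solomon) parity for one stripe."""
    n = len(data_blocks[0])
    p = bytearray(n)
    q = bytearray(n)
    for i, block in enumerate(data_blocks):
        g = 1
        for _ in range(i):
            g = gf_mul(g, 2)  # coefficient 2^i for member index i
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(byte, g)
    return bytes(p), bytes(q)
```

Validation then amounts to recomputing P and Q from the data blocks of a sample stripe and comparing against the parity blocks read from the member images; a mismatch points to wrong member ordering or a misidentified rotation.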
When RAID 60 Still Fails
RAID 60's per-span dual parity protects against drive failures, but several failure modes bypass the parity layer entirely.
Three+ Failures in One Span
RAID 60's fault tolerance is per-span, not per-array. If three drives fail in the same span (common when drives are from the same manufacturing batch with identical wear), that span loses all parity protection. The RAID 0 stripe layer treats the span as a missing stripe member, and the entire volume becomes inaccessible. Recovery from this scenario requires partial reconstruction using whatever parity data remains on the surviving drives within the affected span.
Controller Firmware Corruption
A firmware crash or bad flash on the RAID controller can corrupt the on-disk metadata that defines span membership. Without valid metadata, the controller cannot present the virtual disk, even though every member drive is physically healthy. Recovery requires parsing whatever metadata fragments survive on each member image and filling gaps through raw data analysis.
Extended Rebuild with Cross-Span Failures
When two spans each lose one drive simultaneously, both spans enter degraded mode and begin parallel rebuilds. The controller splits I/O bandwidth between production traffic and two concurrent rebuild streams. If a second drive fails in either span during this extended rebuild window, that span crosses its parity threshold. The combination of heat, vibration, and sustained I/O load on aging drives makes this outcome more likely on arrays that have been in production for 3+ years.
Accidental Foreign Config Import
On Dell PERC controllers, importing a foreign configuration with stale member data forces the controller to resync the array backward, overwriting current data with outdated blocks. On RAID 60 arrays, this affects every span that contained the stale member. The same trap applies to Broadcom MegaRAID controllers when accepting a "consistency check" prompt after drive reseating.
PC-3000 RAID Workflow for Multi-Span Reconstruction
PC-3000 RAID Edition handles RAID 60 as a two-phase reconstruction: first the RAID 6 sub-arrays, then the RAID 0 stripe layer. This mirrors the hardware controller's own logical architecture.
- Member image loading: All cloned member images are loaded into PC-3000 RAID Edition. The software scans each image for controller metadata (DDF headers, MegaRAID records, HP RIS blocks) to auto-detect span assignments.
- Span separation: Members are grouped into their respective RAID 6 sub-arrays. When auto-detection fails, we use parity signature analysis: the P and Q parity patterns within a span are mathematically distinct from data, allowing us to identify which drives share parity relationships.
- Per-span parameter detection: For each sub-array, PC-3000 detects the stripe block size (typically 64 KB or 256 KB for enterprise controllers), parity rotation direction (left-symmetric, left-asymmetric, right-symmetric), and P/Q parity algorithm variant. Each span may use different parameters if the controller was configured with non-uniform spans.
- Parity validation: Before assembling the stripe layer, we validate P and Q parity consistency across sample stripes in each sub-array. Inconsistencies indicate a misidentified member or incorrect rotation, which we correct before proceeding.
- Cross-span stripe assembly: The reconstructed sub-array images are treated as virtual drives and striped according to the detected cross-span stripe size and ordering.
- Filesystem verification: The assembled virtual volume is checked for valid filesystem structures (MBR/GPT partition table, NTFS MFT, EXT4 superblock, ZFS uberblock, VMFS volume header). A valid parse confirms the reconstruction parameters are correct.
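The cross-span assembly step reduces to interleaving the reconstructed span images at the detected cross-span stripe size. A toy sketch; real assembly streams terabyte-scale images from disk rather than holding them in memory:

```python
def stripe_spans(spans: list, stripe_size: int) -> bytes:
    """Assemble the RAID 0 layer by interleaving fixed-size stripes
    across the reconstructed RAID 6 span images, in span order."""
    out = bytearray()
    offset = 0
    span_len = min(len(s) for s in spans)
    while offset < span_len:
        for span in spans:
            out += span[offset:offset + stripe_size]
        offset += stripe_size
    return bytes(out)

# Two reconstructed spans, 4-byte stripes for illustration
s1 = b"AAAACCCC"
s2 = b"BBBBDDDD"
print(stripe_spans([s1, s2], 4))  # b'AAAABBBBCCCCDDDD'
```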
RAID 60 Recovery Pricing
RAID 60 pricing follows the same two-component model as all RAID recovery: per-member imaging fees for each drive in the array, plus an array reconstruction fee. RAID 60 reconstruction fees are at the higher end of the range due to the multi-span complexity. If we recover nothing, you owe $0.
Per-Member Imaging
- Logical or firmware-level issues: $250 to $900 per drive. Covers filesystem corruption, firmware module damage requiring PC-3000 terminal access, and SMART threshold failures that prevent normal reads.
- Mechanical failures (head swap, motor seizure): $1,200 to $1,500 per drive with a 50% deposit. Donor parts are consumed during the transplant. Head swaps are performed on a 0.02 µm filtered clean bench before write-blocked cloning.
Array Reconstruction
- $400-$800 depending on span count, members per span, filesystem type, and whether parameters must be detected from raw data versus parsed from surviving metadata. RAID 60 reconstructions require per-span P and Q parity validation plus cross-span stripe assembly, which adds computation time and verification steps compared to flat RAID 6.
- PC-3000 RAID Edition performs span identification, per-span parameter detection, and virtual assembly from cloned member images. R-Studio and UFS Explorer handle filesystem-level extraction after the array is reconstructed.
No Data = No Charge: If we recover nothing from your array, you owe $0. Free evaluation, no obligation.
RAID 60 arrays with 16+ members or mechanical failures on multiple drives will receive a custom quote after free evaluation.
RAID 60 Recovery Questions
How many drives can fail in a RAID 60 before data is lost?
What is the minimum number of drives for RAID 60?
When should I choose RAID 60 over RAID 6?
How long does a RAID 60 rebuild take with 18 TB drives?
Why is RAID 60 recovery harder than RAID 6?
Can you recover a RAID 60 if the controller card failed?
Data Recovery Standards & Verification
Our Austin lab operates on a transparency-first model. We use industry-standard recovery tools, including PC-3000 and DeepSpar, combined with strict environmental controls to make sure your hard drive is handled safely and properly. This approach allows us to serve clients nationwide with consistent technical standards.
Open-drive work is performed in a ULPA-filtered laminar-flow bench, validated to 0.02 µm particle count, verified using TSI P-Trak instrumentation.
Transparent History
Serving clients nationwide via mail-in service since 2008. Our lead engineer holds PC-3000 and HEX Akademia certifications for hard drive firmware repair and mechanical recovery.
Media Coverage
Our repair work has been covered by The Wall Street Journal and Business Insider, with CBC News reporting on our pricing transparency. Louis Rossmann has testified in Right to Repair hearings in multiple states and founded the Repair Preservation Group.
Aligned Incentives
Our "No Data, No Charge" policy means we assume the risk of the recovery attempt, not the client.
Technical Oversight
Louis Rossmann
Louis Rossmann's well-trained staff review our lab protocols to ensure technical accuracy and honest service. Since 2008, his focus has been on clear technical communication and accurate diagnostics rather than sales-driven explanations.
We believe in proving standards rather than just stating them. We use TSI P-Trak instrumentation to verify that clean-air benchmarks are met before any drive is opened.
See our clean bench validation data and particle test video.
Degraded RAID 60? Power down before rebuilding.
Free evaluation. Offline multi-span reconstruction from cloned images. No data = no charge. Mail-in from anywhere in the U.S.