Windows Server Recovery
ReFS File System Data Recovery
We recover data from corrupted ReFS volumes on Windows Server 2012 R2 through Server 2025. B+ tree metadata reconstruction, Storage Spaces Direct cluster failures, Hyper-V VHDX extraction, and deduplication-aware recovery. Free evaluation. No data = no charge.

How ReFS Volumes Fail and How We Recover Them
ReFS (Resilient File System) is Microsoft's B+ tree-based filesystem designed for Windows Server environments. Unlike NTFS, which uses a flat Master File Table (MFT) for metadata, ReFS stores all metadata in B+ trees and uses an allocate-on-write model: metadata updates are written to new disk locations rather than overwriting existing data. When the root B+ tree node corrupts or the checkpoint area (superblock) becomes unreadable, Windows cannot mount the volume and displays it as RAW. Recovery requires parsing the B+ tree structures offline, locating historical metadata checkpoints preserved by the allocate-on-write design, and reconstructing the directory hierarchy from surviving tree nodes.
ReFS was introduced in Windows Server 2012 and has gone through several on-disk format versions (1.2 through 3.7+). Each version changes the internal B+ tree layout, page sizes, and metadata structures. Server 2016 moved to ReFS 3.1, adding 4 KB cluster support and block cloning. Server 2019 shipped ReFS 3.4 with improved tiering and deduplication integration. A recovery tool built for ReFS 1.2 cannot parse a 3.x volume; the metadata format is not backward compatible. Our parsing handles all production ReFS versions.
ReFS On-Disk Architecture
Understanding the ReFS on-disk format is necessary for targeted recovery. ReFS differs from NTFS in every structural dimension.
B+ Trees and the Object Table
- ReFS organizes all metadata into B+ trees. The root of the filesystem is the object table, a B+ tree whose keys are object IDs and whose values are table descriptors pointing to other B+ trees
- Each directory is a B+ tree. Each file's extent map is a B+ tree. The entire metadata hierarchy is trees pointing to trees
- If the root object table node is damaged, the entire volume is unmountable. Recovery requires locating a previous version of the root node from the checkpoint area or scanning for orphaned B+ tree nodes on disk
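The trees-pointing-to-trees layout described above can be sketched as a toy lookup chain. Everything here is illustrative — the class, field names, and the `0x600` object ID are hypothetical, not the on-disk format:

```python
# Minimal sketch of how ReFS metadata resolution chains B+ trees together.
# All structures and IDs here are illustrative stand-ins, not the real format.

class BPlusTree:
    """Toy stand-in for an on-disk B+ tree: a sorted key -> value mapping."""
    def __init__(self, entries):
        self.entries = dict(entries)

    def lookup(self, key):
        return self.entries.get(key)

# File extent map: logical extent index -> (allocation unit kind, number)
file_extents = BPlusTree({0: ("container", 17), 1: ("container", 42)})

# Directory tree: file name -> file record, whose extent map is itself a tree
directory = BPlusTree({"report.docx": {"extent_tree": file_extents}})

# Object table: object ID -> descriptor pointing at another tree
object_table = BPlusTree({0x600: directory})  # 0x600: hypothetical dir object ID

def resolve(dir_object_id, name):
    """Walk object table -> directory tree -> file extent tree."""
    dir_tree = object_table.lookup(dir_object_id)
    record = dir_tree.lookup(name)
    return record["extent_tree"]

extents = resolve(0x600, "report.docx")
```

The practical point: losing the object table root severs every downstream lookup at once, which is why a damaged root makes the whole volume unmountable.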
Allocate-on-Write (AoW) Model
- When ReFS updates metadata, it writes the new version to a different location on disk and then atomically updates the parent pointer. The old version remains at its original location until the space is reclaimed
- This means metadata updates do not overwrite prior versions immediately. After a corruption event, historical metadata snapshots often survive on disk
- Recovery exploits this: we scan for B+ tree pages at historical offsets, compare page sequence numbers, and reconstruct the directory tree from the most recent consistent set of pages
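The selection step in that last bullet — keeping, for each page identity, the newest version that still passes its checksum — can be sketched as follows (field names are illustrative):

```python
# Sketch: allocate-on-write leaves multiple versions of the same metadata page
# on disk. For each page identity, keep the candidate with the highest
# sequence number that passes its checksum. Dict keys here are illustrative.

def select_pages(candidates):
    """candidates: list of dicts with 'page_id', 'seq', 'checksum_ok'."""
    best = {}
    for page in candidates:
        if not page["checksum_ok"]:
            continue                      # discard torn or corrupted versions
        cur = best.get(page["page_id"])
        if cur is None or page["seq"] > cur["seq"]:
            best[page["page_id"]] = page  # newer consistent version wins
    return best

scanned = [
    {"page_id": 7, "seq": 100, "checksum_ok": True},
    {"page_id": 7, "seq": 112, "checksum_ok": False},  # latest write was torn
    {"page_id": 7, "seq": 108, "checksum_ok": True},   # prior AoW copy survives
]
chosen = select_pages(scanned)
```

In this example the most recent write was torn, so the surviving allocate-on-write copy one generation back (sequence 108) is selected — exactly the behavior that makes AoW volumes more recoverable than in-place filesystems.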
Checkpoint Area (Superblock)
- ReFS maintains two checkpoint areas at fixed offsets on the volume. Each checkpoint stores a pointer to the current root of the object table and a sequence number
- Windows alternates between the two checkpoints on each metadata flush. If one is corrupted, the other may still be valid (one transaction behind)
- If both checkpoints are damaged, we locate the object table root by scanning for B+ tree page signatures across the volume surface
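The dual-checkpoint fallback above reduces to a simple selection rule. A minimal sketch, with the on-disk field layout abstracted away (only the alternating dual-copy scheme is taken from the text):

```python
# Sketch: choosing between the two checkpoint copies after reading both
# fixed-offset areas. 'valid' models a passed checksum; 'seq' models the
# checkpoint sequence number. Field names are illustrative.

def pick_checkpoint(cp_a, cp_b):
    """Each arg: dict with 'valid' and 'seq', or None if unreadable.
    Returns the newest valid checkpoint, or None if both are damaged."""
    live = [cp for cp in (cp_a, cp_b) if cp and cp["valid"]]
    if not live:
        return None          # next step: full-surface B+ tree page scan
    return max(live, key=lambda cp: cp["seq"])

# Torn write damaged the newer copy; the older copy is one transaction behind.
newer = {"valid": False, "seq": 901}
older = {"valid": True,  "seq": 900}
chosen = pick_checkpoint(newer, older)
```

A `None` result corresponds to the worst case in the bullet above: both checkpoints lost, so recovery falls back to scanning the volume surface for tree pages.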
Container Table and Extents
- ReFS tracks free space and allocated extents through a container table (also a B+ tree). Containers are large allocation units (typically 64 MB in ReFS 3.x)
- File data extents are referenced through the file's extent B+ tree, which maps logical file offsets to physical container numbers
- For recovery, we parse the container table to resolve file extent references. If the container table is corrupted, we reconstruct extent maps from individual file B+ tree nodes
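The offset translation those bullets describe — logical file offset, through the extent map, through the container table, to a physical volume offset — can be sketched like this. The 64 MB container granularity and the flat table layout are simplifying assumptions for illustration:

```python
# Sketch: mapping a logical file offset to a physical volume offset through
# a container-based allocator. Container size and table layout are assumed.

CONTAINER_SIZE = 64 * 1024 * 1024   # assumed container granularity

# Container table: container number -> physical start offset on the volume
container_table = {17: 5 * CONTAINER_SIZE, 42: 9 * CONTAINER_SIZE}

# File extent map: (logical start, length, container number, offset in container)
extent_map = [
    (0,       1 << 20, 17, 0),
    (1 << 20, 1 << 20, 42, 0),
]

def logical_to_physical(offset):
    """Resolve a logical file offset via the extent map and container table."""
    for start, length, container, intra in extent_map:
        if start <= offset < start + length:
            return container_table[container] + intra + (offset - start)
    raise KeyError("offset not mapped")
```

This also shows why a corrupted container table blocks data access even when file trees are intact: the extent map alone yields container numbers, not disk addresses.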
Common ReFS Failure Scenarios
Volume Shows as RAW
The most common ReFS failure. Windows cannot parse the checkpoint area or root object table, so it presents the volume as RAW in Disk Management. Causes include power loss during a metadata flush, firmware bugs in the storage controller, or bad sectors at the checkpoint offsets.
Storage Spaces Pool Failure
Storage Spaces (non-Direct) pools lose redundancy when more disks fail than the resiliency type allows (one for mirror, two for parity). When the pool goes offline, all ReFS volumes on it become inaccessible. Recovery requires reconstructing the Storage Spaces virtual disk layout from the pool metadata database before parsing ReFS.
Storage Spaces Direct (S2D) Quorum Loss
S2D requires a majority of cluster nodes to maintain quorum. If two of three nodes fail simultaneously, the cluster loses quorum and all cluster shared volumes go offline. The data remains on the physical drives in each node. Recovery involves imaging drives from all nodes and reassembling the S2D pool geometry from each node's metadata partition.
Chkdsk Damage
Running chkdsk /f or chkdsk /r on a corrupted ReFS volume can make recovery harder. Chkdsk attempts to repair B+ tree inconsistencies by deleting orphaned nodes and rewriting the tree structure. This destroys historical metadata that allocate-on-write had preserved. Contact us before running chkdsk on any ReFS volume that contains data you need.
Hyper-V Host Crash
When a Hyper-V host crashes with VMs running, the ReFS volume may have uncommitted metadata for in-flight VHDX writes. The volume itself usually recovers on reboot (ReFS is designed for this), but if the crash was caused by a hardware failure (dead controller, failing drive), the combination of ReFS metadata damage and partial VHDX writes creates a two-layer recovery problem.
Deduplication Corruption
Windows Server deduplication on ReFS (supported since Server 2019) creates chunk stores that replace inline file data with references. If the chunk store B+ tree or the reparse point data is corrupted, deduplicated files become unreadable even though the raw chunks exist on disk. Recovery requires reconstructing the dedup mapping table.
ReFS vs NTFS: Why Recovery Requires Different Tools
| Attribute | NTFS | ReFS |
|---|---|---|
| Metadata structure | Master File Table (MFT): flat record array at a fixed offset | B+ trees: hierarchical, self-balancing, rooted in the object table |
| Update model | In-place updates with journaling ($LogFile) | Allocate-on-write for metadata; old pages preserved until space reclaimed |
| Checksums | None (relies on hardware or Storage Spaces for integrity) | CRC-64 on metadata pages; optional CRC-64 on data blocks (integrity streams) |
| Recovery advantage | MFT backup ($MFTMirr) provides a partial copy of the first MFT records | Historical checkpoints from AoW; dual checkpoint areas; B+ tree page scanning |
| Max volume size | 256 TB (practical) | 35 PB (theoretical); designed for hyper-converged multi-petabyte storage |
Most data recovery software and even many professional recovery labs do not support ReFS parsing. The tools that handle NTFS MFT reconstruction cannot parse ReFS B+ trees. This is why ReFS recovery is a specialized service.
Storage Spaces Direct (S2D) Recovery
Storage Spaces Direct is Microsoft's hyper-converged infrastructure (HCI) solution. It pools local storage from multiple cluster nodes into a single software-defined storage layer. ReFS is the recommended filesystem for S2D cluster shared volumes (CSVs) in production configurations.
S2D Pool Geometry
S2D creates a storage pool from all eligible drives across cluster nodes. The pool is divided into slabs, and data is distributed across nodes according to the resiliency type (mirror, parity, or mirror-accelerated parity). Each node stores a copy of the pool metadata database in a hidden partition on every pool drive. This metadata database describes the slab-to-node mapping, virtual disk layout, and resiliency configuration. Recovery starts by reading this metadata from each node's drives to reconstruct the virtual disk geometry.
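The reassembly step at the end of that paragraph — reading one healthy copy of each slab from the imaged node drives — can be sketched for a two-way mirror. The slab map format is a toy stand-in; real Storage Spaces metadata is a binary database parsed from a hidden partition on each pool drive:

```python
# Sketch: reassembling a mirrored virtual disk from per-slab copies spread
# across node drive images. All sizes and mappings here are illustrative.

SLAB = 4  # toy slab size in bytes (real slabs are far larger)

# slab index -> list of (node image, byte offset) holding a copy of that slab
slab_map = {0: [("node1", 0), ("node2", 0)],
            1: [("node2", 4), ("node3", 0)]}

images = {"node1": b"AAAA", "node2": b"BBBBCCCC", "node3": b"CCCC"}
healthy = {"node1", "node3"}            # node2 failed: use surviving copies

def assemble():
    """Concatenate one surviving copy of each slab into the virtual disk."""
    out = bytearray()
    for slab in sorted(slab_map):
        copy = next((images[n][off:off + SLAB]
                     for n, off in slab_map[slab] if n in healthy), None)
        if copy is None:
            raise IOError(f"slab {slab}: no surviving copy")
        out += copy
    return bytes(out)
```

With a mirror resiliency type, any one surviving copy per slab is enough; the virtual disk assembles completely even though node2's drives are gone.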
Cluster Quorum Failure
A three-node S2D cluster requires at least two nodes to maintain quorum. If two nodes fail simultaneously (power event, network partition, cascading hardware failure), the cluster loses quorum and all CSVs go offline. The data is intact on the physical drives, but the cluster refuses to bring the storage online without quorum. We bypass the cluster layer entirely: image all drives from all nodes, reconstruct the pool metadata, assemble the virtual disks from the raw images, and parse the ReFS volumes directly.
Cache Tier and Capacity Tier
S2D uses a tiered storage model: NVMe or SSD drives serve as a cache tier, and HDDs serve as the capacity tier. Data is destaged from cache to capacity asynchronously. If a node fails during destaging, some data exists only in the cache tier on that node's SSDs. Recovery must account for both tiers: we image cache-tier SSDs and capacity-tier HDDs from every node, then reconstruct the complete virtual disk including data that had not yet been destaged.
Recovery Methodology
1. Drive Imaging
Every drive in the server or S2D cluster is imaged through PC-3000 with write-blocking. SAS drives (standard in Dell PowerEdge and HP ProLiant servers) are imaged via SAS HBAs. NVMe cache-tier drives are imaged through NVMe-to-PCIe adapters. For drives with bad sectors, we capture healthy sectors first using head maps, then retry damaged areas with aggressive read parameters.
2. Storage Pool Reconstruction (if applicable)
For Storage Spaces or S2D configurations, we parse the Storage Spaces metadata database from the hidden partition on each pool drive. This database describes the virtual disk layout: which slabs on which physical drives compose each virtual disk, and what resiliency type (mirror, parity) protects each virtual disk. We reassemble the virtual disks from the raw drive images using the parsed metadata.
3. ReFS B+ Tree Parsing
With the virtual disk (or raw volume for non-pooled configurations) assembled, we parse the ReFS metadata. First, read the two checkpoint areas at fixed offsets. If a valid checkpoint exists, follow the root pointer to the object table B+ tree. If both checkpoints are damaged, scan the volume for B+ tree page signatures (magic number + page header structure) and rebuild the object table from discovered pages, sorted by page sequence number to select the most recent consistent set.
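The signature scan in the damaged-checkpoint branch can be sketched as a stride scan over the image. The `MSB+` page signature and 16 KB page size are figures reported by community reverse engineering of ReFS 3.x; treat both as assumptions here:

```python
# Sketch: brute-force scan of a volume image for candidate metadata pages.
# Signature bytes and page size are assumptions (community reverse engineering
# of ReFS 3.x), not documented constants.

PAGE_SIZE = 16 * 1024
SIGNATURE = b"MSB+"

def scan_for_pages(image: bytes):
    """Yield offsets of page-aligned candidates starting with the signature."""
    for off in range(0, len(image) - len(SIGNATURE) + 1, PAGE_SIZE):
        if image[off:off + len(SIGNATURE)] == SIGNATURE:
            yield off

# Toy 3-page image with one planted candidate page
img = bytearray(PAGE_SIZE * 3)
img[PAGE_SIZE:PAGE_SIZE + 4] = SIGNATURE
hits = list(scan_for_pages(bytes(img)))
```

Discovered candidates then feed the sequence-number selection described in step 3: among pages claiming the same tree position, the newest checksummed-valid copy wins.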
4. File Extraction and Verification
Directory entries are resolved from the directory B+ trees. File data extents are mapped from the file extent B+ trees through the container table. Files are extracted to target media with full path preservation. For volumes with integrity streams, we verify each block's CRC-64 checksum during extraction and flag any blocks with checksum mismatches.
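The checksum verification pass can be sketched as below. The CRC-64/ECMA-182 polynomial is an assumption for illustration; the exact checksum variant ReFS uses on integrity streams is not something we rely on readers to take from this sketch:

```python
# Sketch: flagging extracted blocks whose stored checksum disagrees with a
# recomputed one. The polynomial choice is an assumption, not the documented
# ReFS variant; the comparison logic is the point.

POLY = 0x42F0E1EBA9EA3693  # CRC-64/ECMA-182 polynomial (assumed)
MASK = (1 << 64) - 1

def crc64(data: bytes) -> int:
    """Bitwise MSB-first CRC-64 over the block contents."""
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) if crc & (1 << 63) else (crc << 1)
            crc &= MASK
    return crc

def verify_block(data: bytes, stored: int) -> bool:
    """True if the recomputed checksum matches the one stored in metadata."""
    return crc64(data) == stored
```

Blocks that fail this comparison are flagged rather than silently extracted, so the client knows exactly which files carry damaged regions.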
Hyper-V VHDX and SQL Database Extraction
VHDX Recovery
Hyper-V stores virtual machine disks as VHDX files on the host ReFS volume. Microsoft recommends ReFS for Hyper-V storage because ReFS block cloning enables instant VM checkpoint creation. After extracting VHDX files from the reconstructed ReFS directory tree, we verify the VHDX header, BAT (block allocation table), and metadata region. If the VHDX is intact, we mount the virtual disk and confirm the guest filesystem (NTFS, ext4, XFS) is accessible. For dynamic VHDX files, we also verify the parent locator chain for differencing disks used by VM checkpoints.
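The header verification step can be sketched directly from the published MS-VHDX layout: the file opens with the ASCII signature `vhdxfile`, and two header copies with `head` signatures sit at 64 KiB and 128 KiB, the valid copy with the higher sequence number being current. Checksum validation of the headers is omitted in this sketch:

```python
# Sketch: triaging an extracted VHDX before mounting. Offsets follow the
# MS-VHDX specification; CRC validation of each header is skipped here.

HEADER_OFFSETS = (64 * 1024, 128 * 1024)

def triage_vhdx(image: bytes) -> str:
    if image[:8] != b"vhdxfile":
        return "not a VHDX"
    seqs = []
    for off in HEADER_OFFSETS:
        if image[off:off + 4] == b"head":
            # SequenceNumber: 8-byte little-endian field at header offset 8
            seqs.append(int.from_bytes(image[off + 8:off + 16], "little"))
    if not seqs:
        return "both headers damaged"
    return f"current header sequence {max(seqs)}"
```

A file that fails even this cheap triage usually indicates the ReFS extent map resolved to the wrong clusters, so we revisit the filesystem-layer reconstruction before touching the guest filesystem.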
SQL Server Database Extraction
SQL Server databases on ReFS volumes consist of .mdf (primary data) and .ldf (transaction log) files. After extracting these files from the reconstructed ReFS tree, we verify the database header page (page 0) for consistency, check the boot page (page 9) for database metadata, and attempt to bring the database online in a recovery instance. For databases with torn pages (common after a crash), we run DBCC CHECKDB to assess damage scope before attempting repair.
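A cheap pre-attach sanity check on an extracted .mdf can be sketched as below. SQL Server pages are 8 KB and the page type byte sits at offset 1 of each page header; the boot-page type value of 13 is taken from community documentation and should be treated as an assumption here:

```python
# Sketch: quick sanity checks on an extracted .mdf before attaching it to a
# recovery instance. Page-header offsets and type values are assumptions
# drawn from community documentation of the SQL Server page format.

PAGE_SIZE = 8192
BOOT_PAGE_TYPE = 13   # assumed type value for the boot page (1:9)

def mdf_sanity(image: bytes):
    """Return a list of human-readable issues found in the extracted file."""
    issues = []
    if len(image) % PAGE_SIZE:
        issues.append("file size not a multiple of 8 KB (truncated extract?)")
    if len(image) >= 10 * PAGE_SIZE:
        boot_type = image[9 * PAGE_SIZE + 1]   # page (1:9) is the boot page
        if boot_type != BOOT_PAGE_TYPE:
            issues.append("boot page type mismatch")
    return issues
```

Files that pass are attached to an isolated recovery instance; files that fail get page-level repair attention before DBCC CHECKDB is run.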
ReFS Deduplication Recovery
Windows Server 2019 introduced deduplication support on ReFS volumes. Deduplication on ReFS uses a post-process model: files are initially written normally, and the deduplication service runs in the background to identify duplicate chunks, store them in a chunk store, and replace the original file data with reparse points referencing the chunk store.
- Chunk store corruption: The chunk store is a set of container files stored in a hidden System Volume Information directory. Each chunk is identified by a hash. If the chunk store containers or the chunk store index B+ tree are corrupted, deduplicated files become unreadable because the reparse points cannot resolve to valid chunk data.
- Reparse point damage: Deduplicated files use NTFS reparse points (even on ReFS) to redirect reads to the chunk store. If the reparse point metadata in a file's B+ tree node is damaged, the file appears as zero-length or fails to open with an I/O error. We reconstruct the reparse point data from the chunk store index by matching file chunk hashes.
- Recovery approach: We parse the chunk store containers, rebuild the chunk index, and resolve reparse points for each deduplicated file. Files that were not yet processed by the dedup service retain their original inline data and are recovered normally through the ReFS B+ tree.
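The rehydration step in that last bullet can be sketched as stitching chunks back together from a hash-keyed index. The index and per-file chunk list below are toy stand-ins; the real Data Deduplication chunk store format is considerably more involved:

```python
# Sketch: resolving a deduplicated file by reassembling its chunks from a
# rebuilt chunk-store index. Index layout and hashing choice are illustrative.

import hashlib

chunk_store = {}          # chunk hash -> chunk bytes

def add_chunk(data: bytes) -> str:
    """Index a chunk by its content hash, as a dedup store would."""
    h = hashlib.sha256(data).hexdigest()
    chunk_store[h] = data
    return h

# "Reparse point" stand-in: ordered list of chunk hashes for one file
file_chunks = [add_chunk(b"hello "), add_chunk(b"world")]

def rehydrate(chunk_hashes):
    """Reassemble a file from its chunk references, failing loudly on gaps."""
    missing = [h for h in chunk_hashes if h not in chunk_store]
    if missing:
        raise IOError(f"{len(missing)} chunk(s) unrecoverable")
    return b"".join(chunk_store[h] for h in chunk_hashes)
```

Because chunks are content-addressed, a rebuilt index can resolve files even when the original reparse point data is damaged, which is the basis of the hash-matching approach described above.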
ReFS Recovery Pricing
Same transparent model: per-drive imaging based on each drive's condition, plus a $400-$800 volume reconstruction fee covering ReFS metadata parsing, B+ tree reconstruction, and file extraction. No data recovered means no charge.
| Service Tier | Price Range (Per Drive) | Description |
|---|---|---|
| Logical / Firmware Imaging | $250-$900 | Firmware faults, SMART threshold failures, or filesystem corruption on individual pool members. |
| Mechanical (Head Swap / Motor) | $1,200-$1,500 (50% deposit) | Donor parts required. SAS drives in enterprise servers require SAS-specific donors. |
| ReFS Volume Reconstruction | $400-$800 (per volume) | ReFS metadata parsing, B+ tree reconstruction, file extraction. Includes Storage Spaces pool assembly if applicable. |
No Data = No Charge: If we recover nothing from your ReFS volume, you owe $0. Free evaluation, no obligation.
Before sending drives: do not run chkdsk on the ReFS volume. Chkdsk rewrites B+ tree structures and destroys historical metadata that allocate-on-write had preserved.
ReFS Recovery: Common Questions
My Windows Server shows a ReFS volume as RAW. Can you recover the data?
Can you recover data from a failed Storage Spaces Direct (S2D) cluster?
Is ReFS recovery different from NTFS recovery?
My ReFS volume had integrity streams enabled. Does that help recovery?
Can you extract Hyper-V virtual machines from a corrupted ReFS volume?
How is ReFS recovery priced?
Need Recovery for Other Devices?
Dell, HP, IBM enterprise servers
VHDX and VM extraction
RAID 0, 1, 5, 6, 10 arrays
VMFS and VMDK recovery
Storage Spaces pool metadata and slab reconstruction
All HDD brands and models
Complete service catalog
Ready to recover your ReFS volume?
Free evaluation. No data = no charge. Mail-in from anywhere in the U.S.