
How Hard Drive Platters Store Data

Written by Louis Rossmann
Founder & Chief Technician
Published March 8, 2026
Updated April 19, 2026

Hard drive platters store data as patterns of magnetic polarity on a thin cobalt-chromium-platinum alloy coating. Each bit is represented by a tiny magnetic domain whose north-south orientation encodes a 1 or 0. The read/write heads detect and alter these orientations as they fly nanometers above the spinning platter surface. A single modern platter can hold over 1 TB of data across billions of magnetic domains packed into concentric tracks. Platter integrity is what every stage of our hard drive data recovery workflow is built to preserve.

Magnetic Domains and Bit Encoding

The magnetic recording layer is divided into microscopic crystalline regions called grains, each a few nanometers across with a uniformly aligned magnetic orientation. A single data bit occupies a cluster of these grains. The write head generates a localized magnetic field strong enough to flip the orientation of the grains in the target area without affecting neighboring clusters.

Modern drives do not use simple binary encoding. They use run-length-limited (RLL) coding schemes that translate raw data into patterns optimized for reliable magnetic storage. The read channel chip on the PCB applies partial response maximum likelihood (PRML) signal processing to extract the original data from the analog signal, which is noisy and attenuated at the bit densities used in current drives.

Each grain must maintain its magnetic orientation against thermal energy that tends to randomize it. This is the superparamagnetic limit: as grains get smaller to increase density, they become less thermally stable. Manufacturers combat this with perpendicular magnetic recording (PMR), which orients grains vertically rather than horizontally, allowing denser packing with larger effective grain volume.
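The superparamagnetic limit can be put in rough numbers with the Néel-Arrhenius relation. This is a minimal sketch using assumed, illustrative values for grain size and anisotropy energy density; the real figures vary by media generation:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def stability_ratio(ku, diameter, height, temp):
    """Thermal stability factor Ku*V / (kB*T) for a cylindrical grain."""
    volume = math.pi * (diameter / 2) ** 2 * height
    return ku * volume / (KB * temp)

def neel_relaxation_time(ratio, attempt_time=1e-9):
    """Néel-Arrhenius mean time before thermal energy flips the grain."""
    return attempt_time * math.exp(ratio)

# Assumed values: 8 nm grain diameter, 15 nm recording layer thickness,
# anisotropy energy density Ku = 2e5 J/m^3, room temperature.
ratio = stability_ratio(ku=2e5, diameter=8e-9, height=15e-9, temp=300)
print(f"KuV/kBT = {ratio:.1f}")
# At this assumed ratio the mean flip time is months, not decades, which is
# why manufacturers target a considerably higher stability factor.
print(f"relaxation time ~ {neel_relaxation_time(ratio):.2e} s")
```

Because the relaxation time is exponential in the stability factor, a modest shrink in grain volume collapses retention from decades to days, which is the cliff PMR was designed to step back from.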

Perpendicular Magnetic Recording and Recovery Implications

Before 2005, drives used longitudinal magnetic recording (LMR), where grains lay flat along the track surface with their magnetic poles oriented parallel to the platter. LMR hit the superparamagnetic wall at roughly 100 to 200 Gbit/in² because flatter grains needed to shrink below the threshold where thermal energy could flip them spontaneously. Shun-ichi Iwasaki at Tohoku University proposed perpendicular magnetic recording in 1976, but it took until 2005 for Toshiba to ship the first commercial PMR drive at 133 Gbit/in². By 2009, PMR powered roughly 75% of all hard drives shipped worldwide.

PMR orients the recording grains perpendicular to the platter surface, standing them upright like fence posts rather than laying them flat. A soft magnetic underlayer (SUL) beneath the recording layer acts as a magnetic mirror for the write head's field, creating a return flux path that roughly doubles the effective field strength at the grain surface. That stronger write field allows the recording layer to use higher-coercivity materials with finer grain boundaries, which resist thermal demagnetization at smaller grain footprints.

For recovery, PMR's tighter grain packing means the analog readback signal from each bit is weaker relative to noise. When read/write heads degrade or when donor heads from a matched donor drive fly at a slightly different height than the originals, the signal-to-noise ratio drops further. PC-3000 compensates by modifying the drive's service area adaptive parameters that control read channel equalization, retraining the read channel to extract usable data from a weakened signal. This process is described in the read channel section below, but it becomes critical on PMR media where the margin between a readable signal and noise is thin.

Areal Density, Track Pitch, and Recovery Tolerances

Areal density is the number of bits per square inch of platter surface. It is the product of two dimensions: linear bit density (bits per inch along a track) and track density (tracks per inch across the platter radius). Both have grown by orders of magnitude since the first hard drive.

Era | Areal Density | Approximate Track Pitch
1956 IBM RAMAC | 2,000 bits/in² | ~5 mm
1990s MR heads / PRML | 100-500 Mbit/in² | ~1-5 µm
2005-2006 PMR introduction | 133-325 Gbit/in² | ~100-150 nm
Modern 18-26 TB drives | >1 Tbit/in² | <70 nm
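The product relationship is simple enough to sketch. The track pitch and bit length here are assumed round numbers near the modern end of the scale:

```python
INCH_NM = 25.4e6  # nanometers per inch

def areal_density(track_pitch_nm, bit_length_nm):
    """Bits per square inch = (tracks per inch) * (bits per inch along a track)."""
    tpi = INCH_NM / track_pitch_nm   # track density
    bpi = INCH_NM / bit_length_nm    # linear bit density
    return tpi * bpi

# Assumed illustrative dimensions: 70 nm track pitch, 12 nm bit length.
density = areal_density(track_pitch_nm=70, bit_length_nm=12)
print(f"{density / 1e12:.2f} Tbit/in^2")  # -> 0.77 Tbit/in^2
```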

The bit aspect ratio (BAR) describes the shape of each recorded bit as the ratio of its along-track length to its cross-track width. Older drives maintained a BAR of 4:1 to 6:1, producing elongated bits that were relatively easy for the read channel to distinguish. Modern high-density drives have compressed BAR toward 2:1 or even 1:1 (square bits) in two-dimensional magnetic recording (TDMR) designs. Square bits pack more tightly but generate stronger inter-track interference, which forces the read channel to filter signal from adjacent tracks using multiple readers per head.

At sub-70 nm track pitch, the physical tolerances for head positioning during imaging become extreme. Thermal expansion of the actuator arm by a fraction of a degree can shift the heads off-center by tens of nanometers. Micro-vibrations from the spindle motor or from placing the drive on an unstable surface have the same effect. When donor heads replace failed originals, even a slight difference in head geometry or fly height characteristic alters the effective read width, and the track margin error budget that was already near zero shrinks further. This is why high-capacity drives (16 TB and above) have lower tolerance for imperfect donor matches and why firmware-level adaptive parameters (head profile tables, zone-specific gain settings) must often be rebuilt in PC-3000 to get a clean image from a donor head set.

Tracks, Sectors, and Zones

Data on a platter is organized into concentric circular tracks. Each track is divided into sectors, typically 512 bytes or 4,096 bytes (Advanced Format) of user data plus error correction codes (ECC), sync bytes, and address markers.

Track
A single concentric ring of data. Modern drives have hundreds of thousands of tracks per platter surface, spaced fractions of a micrometer apart.
Sector
The smallest addressable unit of storage. Contains user data, a sector header with the logical block address (LBA), and ECC bytes for error detection and correction.
Zone
A group of adjacent tracks that share the same sectors-per-track count. Outer zones have more sectors per track than inner zones because the outer circumference is longer. This is zoned-bit recording (ZBR).

Zoned-bit recording means that outer tracks store more data per revolution than inner tracks. The read/write speed varies accordingly: outer tracks are faster because more data passes under the head per rotation. This is why sequential read benchmarks show higher throughput at the beginning of a drive (outer tracks) and lower throughput near capacity (inner tracks).
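A quick sketch of why outer zones benchmark faster, using assumed sector counts (real zone tables divide the radius far more finely):

```python
def track_throughput_mb_s(sectors_per_track, sector_bytes=4096, rpm=7200):
    """Sustained sequential throughput while reading one track per revolution."""
    revs_per_second = rpm / 60
    return sectors_per_track * sector_bytes * revs_per_second / 1e6

# Assumed sector counts for an outer and an inner zone of a 7,200 RPM drive:
outer = track_throughput_mb_s(500)   # longer circumference, more sectors
inner = track_throughput_mb_s(250)   # shorter circumference, fewer sectors
print(f"outer zone: {outer:.0f} MB/s, inner zone: {inner:.0f} MB/s")
# -> outer zone: 246 MB/s, inner zone: 123 MB/s
```

The RPM never changes across the platter; only the amount of data passing under the head per revolution does, which is exactly the shape of a typical HDD sequential benchmark curve.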

Anatomy of a Data Sector

A user data sector has its own internal field sequence, distinct from the servo wedge format described in the next section. As the platter spins beneath the read head, each sector passes the head in a strict temporal order: inter-sector gap, preamble (PLL field), data sync mark, modulated user data, ECC parity field, postamble. The read channel processes each of these fields with a different subsystem.

Preamble (PLL field)
A periodic high-frequency pattern, historically 12 to 30 bytes and around 100 bits at current densities. The read channel's voltage-controlled oscillator locks its phase and frequency to this pattern so the analog-to-digital converter samples the waveform at the correct bit-cell intervals. No PLL lock means no usable samples, which means no readable sector.
Data Sync Mark (DSM)
An asymmetric zero-phase pattern, typically around 20 bits, that marks the exact boundary where user data begins. The preamble establishes bit timing; the DSM establishes byte framing. This is a different signal from the Servo Address Mark in the servo wedge. A DSM read failure causes a single-sector read error without losing head position, because the servo tracking loop runs on the SAM from its own wedges and is not affected.
User data (modulation-coded)
4,096 bytes on modern Advanced Format drives, 512 bytes on legacy drives. The host payload is first encoded with a run-length-limited or maximum-transition-run modulation code to guarantee frequent magnetic transitions. Without enforced transitions, a long string of identical bits would let the PLL drift and corrupt every bit that followed.
ECC parity field
Approximately 50 bytes on legacy 512-byte sectors, expanded to roughly 100 bytes on 4K Advanced Format sectors. Modern drives store LDPC parity here; early-to-mid 2000s drives stored Reed-Solomon parity. The difference between these two coding families determines how much degradation a sector can sustain before it becomes unrecoverable.
Postamble
A short trailing pattern, 0.5 to 2 bytes, that flushes the final bits of the ECC field through the Viterbi detector and any trellis-based decoder pipeline so no tail bits are truncated before decoding completes.

Servo wedges are written into the platter surface at the factory and run radially across all tracks. When a single data sector straddles a servo wedge (common at inner zones with fewer sectors per track), the read channel freezes its data-path state into trap registers, processes the servo wedge with a separate control loop, then restores the saved state and finishes the data sector. From the host's perspective, the sector is read atomically; at the read-channel level, two independent signal paths are time-multiplexed across the same head.
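Plugging the field sizes above into a quick calculation shows how little of the track arc the framing consumes on one 4K sector versus eight 512-byte sectors. The inter-sector gap size is an assumption; the other field sizes are the ones listed above:

```python
def format_efficiency(data_bytes, ecc_bytes, preamble_bytes=12.5,
                      sync_bytes=2.5, postamble_bytes=1, gap_bytes=10):
    """Fraction of the sector's track arc that carries user data, not framing."""
    overhead = preamble_bytes + sync_bytes + ecc_bytes + postamble_bytes + gap_bytes
    return data_bytes / (data_bytes + overhead)

legacy = format_efficiency(512, ecc_bytes=50)      # eight of these per 4 KiB
advanced = format_efficiency(4096, ecc_bytes=100)  # one set of fields per 4 KiB
print(f"512-byte sector: {legacy:.1%}, 4K sector: {advanced:.1%}")
# -> 512-byte sector: 87.1%, 4K sector: 97.0%
```

Under these assumed numbers, amortizing one set of framing fields over 4,096 bytes instead of 512 recovers roughly ten percentage points of platter capacity.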

Advanced Format (4K) Sector Layout

The 2009 IDEMA Advanced Format specification replaced the 512-byte physical sector with a 4,096-byte sector. Eight legacy sectors, each requiring its own gap, preamble, sync mark, and 50-byte ECC field, collapsed into a single 4K payload with one set of overhead fields and a 100-byte ECC field. Seagate and Western Digital reported a 7% to 11% gain in usable platter capacity from this change alone, lifting overall format efficiency to roughly 97%. The longer ECC block also corrects burst errors more effectively than eight independent 50-byte blocks, because coding gain scales with block length.

Drives ship in two variants: 512e (4K physical, 512-byte logical) and 4Kn (4K physical and logical). A 512e drive that receives an unaligned 512-byte write from the host must read the full 4K sector into DRAM, modify 512 bytes, recompute the 100-byte ECC, and write the 4K sector back. This read-modify-write penalty shows up on misaligned partitions created by legacy operating systems and is a common cause of unexpectedly slow benchmarks on otherwise healthy Advanced Format drives.
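The alignment logic can be sketched directly. `rmw_reads` is a hypothetical helper name; the ratio of 8 logical sectors per physical sector is the 512e standard:

```python
def rmw_reads(lba_512, count_512, ratio=8):
    """4K physical sectors a 512e drive must read before servicing a write.

    Any write that begins or ends partway through a 4K physical sector forces
    the drive to read that sector first; full, aligned sectors are written
    straight through.
    """
    start_partial = lba_512 % ratio != 0              # write begins mid-sector
    end_partial = (lba_512 + count_512) % ratio != 0  # write ends mid-sector
    first = lba_512 // ratio
    last = (lba_512 + count_512 - 1) // ratio
    if first == last:
        return 1 if (start_partial or end_partial) else 0
    return int(start_partial) + int(end_partial)

print(rmw_reads(0, 8))   # aligned 4K write: 0 extra reads
print(rmw_reads(3, 1))   # lone 512-byte write: 1 read-modify-write
print(rmw_reads(1, 8))   # misaligned-partition write: 2 read-modify-writes
```

The last case is the misaligned-partition penalty: every 4K-sized write straddles two physical sectors and both need a read-modify-write.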

Reed-Solomon to LDPC ECC Transition

Through the 1990s and early 2000s, hard drive sectors protected user data with Reed-Solomon codes. Reed-Solomon is an algebraic block code: it makes a hard binary decision about each bit (1 or 0) and then uses the Berlekamp-Massey or Euclidean algorithm to locate and correct errors up to a fixed algebraic minimum distance. Once the read channel quantizes an analog sample into a hard bit, the confidence information in the original signal is discarded.

As drives crossed the 500 Gbit/in² threshold and approached 1 Tbit/in² around 2009, grain sizes and track pitches shrank to the point where analog readback signals routinely fell close to the decision threshold. Hard-decision Reed-Solomon decoders became unable to keep bit error rates within acceptable limits. HGST (later absorbed into Western Digital) introduced the Iterative Detection Read Channel (IDRC) to replace Reed-Solomon with low-density parity-check (LDPC) codes. By the early 2010s, LDPC was the dominant ECC across the industry.

LDPC is a soft-decision code. Instead of forcing each sample into 1 or 0, the read channel emits a log-likelihood ratio (LLR) for every bit, which represents how confident the channel is in that bit's value. The LDPC decoder passes these LLRs around a bipartite graph of variable nodes (bits) and check nodes (parity constraints) using belief-propagation algorithms such as sum-product or min-sum. Each iteration, the check nodes flag parity violations and send feedback that nudges the least-reliable variable nodes toward a consistent solution. The decoder iterates until all parity checks pass or a maximum iteration count is hit.
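The message-passing loop can be shown in miniature. This is a toy min-sum decoder over a 7-bit code with an assumed parity-check matrix, not any drive's actual LDPC code (production codes run thousands of bits with much sparser checks), but the iterate-until-parity-passes structure is the same:

```python
import numpy as np

# Toy parity-check matrix: rows are checks, columns are bits.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def min_sum_decode(llr, h=H, max_iters=20):
    """Min-sum belief propagation. llr[i] > 0 means bit i is probably 0."""
    m, n = h.shape
    c2v = np.zeros((m, n))                     # check-to-variable messages
    for _ in range(max_iters):
        total = llr + c2v.sum(axis=0)          # posterior LLR per bit
        bits = (total < 0).astype(int)         # tentative hard decision
        if not (h @ bits % 2).any():           # all parity checks satisfied
            return bits
        v2c = h * (total - c2v)                # exclude each check's own feedback
        for i in range(m):
            idx = np.flatnonzero(h[i])
            for j in idx:
                others = v2c[i, idx[idx != j]]
                # Min-sum approximation: sign product times smallest magnitude.
                c2v[i, j] = np.prod(np.sign(others)) * np.abs(others).min()
    return bits

# All-zero codeword sent; bit 6 arrives weakly (and wrongly) leaning toward 1.
received_llr = np.array([4.0, 4, 4, 4, 4, 4, -1])
print(min_sum_decode(received_llr))  # -> [0 0 0 0 0 0 0]
```

The confident neighbors outvote the one ambiguous bit through the shared parity checks, which is exactly the mechanism that rescues marginal sectors on degraded media.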

For recovery, this change is the difference between a sector being readable or not on degraded media. HGST documented a 1 dB SNR gain on their first-generation LDPC channel and another 1 dB on the second generation, which translates into roughly an 8% increase in raw per-drive capacity achievable from the same magnetic media. In practice, LDPC can iteratively reconstruct bits made ambiguous by thermal asperities, adjacent-track interference, or media defects by leaning on the high-confidence probabilities of surrounding bits. PC-3000 exposes the read-channel parameters that govern LDPC iteration count, LLR quantization, and soft-decoder thresholds, so a technician working a drive with marginal heads or worn media can trade decode time for decode success on a sector-by-sector basis during hard drive data recovery imaging.

Servo Wedges and Head Positioning

Servo wedges are pre-written positioning markers embedded between data sectors on every track. They are written during manufacturing by a servo track writer, a precision instrument that programs the position reference data onto blank platters. After manufacturing, servo data is never overwritten by the drive.

Each servo wedge contains a preamble (synchronization pattern), a servo address mark (SAM), a track ID encoded in Gray code, and burst fields that provide sub-track positioning. The heads read servo wedges continuously during operation. The servo controller on the PCB uses this feedback to adjust voice coil motor current and maintain the heads on the target track center.

When a drive clicks repeatedly, it is often because the heads cannot read the servo wedges. Without servo feedback, the controller cannot position the heads, and the drive retries by sweeping the actuator across the platters searching for readable servo data.

Servo Sector Format in Detail

Modern drives embed 200 to 300 servo wedges per track, each consuming a small arc of the track between data sectors. Every servo wedge follows the same field sequence:

AGC Preamble
A repeating pattern that lets the phase-locked loop (PLL) lock onto the servo signal timing and the automatic gain control circuit normalize the signal amplitude. Without a stable AGC lock, every subsequent field in the wedge is unreadable.
Servo Address Mark (SAM)
A unique bit pattern that does not occur elsewhere in the servo data. It tells the servo controller where the track ID field begins. The SAM acts as a sync boundary; if it is corrupted, the controller misinterprets the track number and positions the heads on the wrong track.
Track ID (Gray Code)
The cylinder address encoded in Gray code, where adjacent track numbers differ by only one bit. Gray code prevents large position errors from single-bit misreads during seeks. A misread in standard binary could shift the decoded cylinder number by thousands of tracks; a single-bit Gray code error shifts it by one.
Position Error Signal Bursts (A/B/C/D)
Four offset burst patterns whose relative amplitudes tell the servo controller exactly where the head sits relative to the track center, at sub-nanometer resolution. The controller computes a position error signal (PES) from the amplitude ratios of these bursts and feeds it to the voice coil motor control loop to correct head position in real time. This is what keeps the head centered on a track that is narrower than a wavelength of visible light.
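The Gray code property behind the track ID field is easy to verify directly. This sketch uses the standard binary-reflected Gray code:

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the encoding by XOR-folding the shifted code into itself."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent track IDs always differ in a single bit, so a head straddling two
# tracks mid-seek reads one of the two valid neighboring codes instead of a
# wildly wrong cylinder number.
for track in range(200000):
    diff = gray_encode(track) ^ gray_encode(track + 1)
    assert diff != 0 and diff & (diff - 1) == 0   # exactly one bit set
print(gray_decode(gray_encode(123456)))  # -> 123456
```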

Servo Damage and Recovery with PC-3000

A hardware servo gate signal isolates servo wedge reads from data reads; the controller never attempts to interpret servo fields as user data or vice versa. When servo data is unreadable on one or more heads, the drive enters a BSY (busy) state and never reaches DRDY (drive ready) because it cannot establish the head position reference needed to begin reading user data.

In recovery, PC-3000 addresses servo and head positioning failures through service area module repair. The per-head adaptive parameters and translator modules in the drive's firmware map logical block addresses to physical head/cylinder/sector locations. If these modules are corrupted, PC-3000 can rebuild them from backup copies stored elsewhere in the service area. For drives where specific heads have failed, the technician modifies the RAM head map (the firmware table that tells the drive which physical heads are active) to disable the damaged head and allow imaging of data from the remaining good heads. The data that lived on the disabled head's platters is lost, but the rest of the drive becomes accessible.

Platter Materials and Coatings

A typical hard drive platter is a stack of thin layers deposited on a substrate:

  1. Substrate: Aluminum alloy (most desktop/enterprise drives) or glass-ceramic (laptop drives, some helium-filled enterprise drives). The substrate is polished to sub-nanometer surface roughness.
  2. Underlayer: A magnetically soft layer that helps orient the recording layer's magnetic grains perpendicular to the surface in PMR drives.
  3. Magnetic recording layer: A cobalt-chromium-platinum alloy, 10 to 20 nanometers thick. This is where data is stored.
  4. Overcoat: A diamond-like carbon (DLC) protective layer, approximately 2 to 3 nanometers thick, that protects the magnetic layer from corrosion and head contact.
  5. Lubricant: A perfluoropolyether (PFPE) layer, roughly 1 nanometer thick, that reduces friction during head start/stop events and protects the DLC overcoat.

What Platter Damage Looks Like

Platter damage manifests in several ways, each with different implications for data recovery:

Damage Type | Visual Appearance | Recovery Impact
Concentric scoring | Circular scratches visible as rings on the platter surface | Data on scored tracks is destroyed; unscored tracks may be recoverable with a head swap and careful imaging
Debris contamination | Particulate matter on the platter surface, sometimes visible as a haze or specks | Debris can sometimes be cleaned before imaging; embedded debris that has scratched the surface causes localized data loss
Platter deformation | Warped or bent platters from severe impact | Deformed platters cannot maintain the required fly height; recovery is rarely possible without specialized platter transplant procedures

Running a drive with platter damage accelerates data loss.

Each rotation at 5,400 or 7,200 RPM drags debris across the remaining intact surface, expanding the damaged area. The first power-on after a head crash should be in a lab environment where the heads can be replaced and the platters cleaned before imaging begins.

Read Channel Signal Processing

The analog signal coming off a read head is not a clean sequence of ones and zeros. It is a continuous waveform where each magnetic transition produces a voltage pulse that overlaps with pulses from adjacent bits. The job of the read channel chip on the drive's PCB is to extract the original bit sequence from this noisy, overlapping analog signal.

Peak Detection vs. PRML

Before 1990, read channels used peak detection: the circuit looked for voltage spikes above a threshold and recorded a 1 for each peak. This worked at low bit densities where pulses were spaced far enough apart that they did not overlap. As densities increased, adjacent pulses began to merge (intersymbol interference, or ISI), and peak detection could no longer distinguish individual bits reliably.

IBM introduced partial response maximum likelihood (PRML) read channels around 1990. Instead of fighting intersymbol interference, PRML embraces it. The read channel samples the continuous analog waveform at discrete intervals and feeds those samples into a Viterbi detector. The Viterbi detector uses a trellis diagram (a graph of all possible bit sequence state transitions) and an add-compare-select (ACS) algorithm to find the most probable original bit sequence given the observed samples. PRML increased achievable densities by 30 to 40% over peak detection because it extracted usable data from signal that peak detection would have rejected as ambiguous.
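The trellis search itself fits in a few lines. This is a toy Viterbi detector for an ideal PR4 (1 − D²) target with NRZ bits, a sketch of the add-compare-select idea rather than any vendor's read channel:

```python
import itertools

def pr4_samples(bits):
    """Ideal PR4 channel output y[n] = x[n] - x[n-2] for an NRZ bit sequence."""
    x = [0, 0] + list(bits)
    return [x[n] - x[n - 2] for n in range(2, len(x))]

def viterbi_pr4(samples):
    """Most probable bit sequence given noisy PR4 samples."""
    INF = float("inf")
    states = list(itertools.product([0, 1], repeat=2))  # (x[n-1], x[n-2])
    metric = {s: (0.0 if s == (0, 0) else INF) for s in states}
    history = {s: [] for s in states}
    for y in samples:
        new_metric = {s: INF for s in states}
        new_history = {}
        for (a, b), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):                      # candidate written bit
                cost = m + (y - (u - b)) ** 2     # add: squared error vs target
                nxt = (u, a)
                if cost < new_metric[nxt]:        # compare-select survivor path
                    new_metric[nxt] = cost
                    new_history[nxt] = history[(a, b)] + [u]
        metric, history = new_metric, new_history
    return history[min(metric, key=metric.get)]

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
noise = [0.1, -0.2, 0.15, -0.1, 0.2, -0.15, 0.1, 0.05, -0.1, 0.2]
noisy = [y + n for y, n in zip(pr4_samples(bits), noise)]
print(viterbi_pr4(noisy) == bits)  # -> True
```

Every sample contributes evidence to the whole path, so the detector recovers bits whose individual samples a threshold-based peak detector would misjudge.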

Target Polynomials and Adaptive Filters

The “partial response” in PRML refers to a target polynomial that defines the expected shape of intersymbol interference. PR4 (class IV partial response) models each bit as affecting the current sample and the previous sample. EPR4 (extended partial response) extends that window to three samples. The read channel's equalizer filter shapes the raw analog signal to match the chosen target polynomial before the Viterbi detector processes it. Modern drives use adaptive finite impulse response (FIR) filters that continuously adjust their coefficients based on the actual signal characteristics of the media under the current head.
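The adaptation loop is ordinary least-mean-squares (LMS). This sketch trains an FIR equalizer against an assumed two-tap "channel" standing in for the head's smeared response; a real read channel adapts against a partial-response target rather than the raw bits, but the coefficient update is the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_equalize(received, desired, taps=5, mu=0.02):
    """Adapt FIR coefficients so the filtered signal tracks the target."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1][::-1]   # newest sample first
        e = desired[n] - w @ x                   # error against target
        w += mu * e * x                          # LMS coefficient update
    return w

# Assumed toy channel: each bit smeared across two samples, plus noise.
bits = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.5])
received = np.convolve(bits, channel)[:len(bits)] + 0.05 * rng.normal(size=len(bits))

w = lms_equalize(received, desired=bits)
restored = np.convolve(received, w)[:len(bits)]
accuracy = np.mean(np.sign(restored[10:]) == bits[10:])
print(f"bit accuracy after equalization: {accuracy:.3f}")  # close to 1.0
```

The point for recovery is that `w` is calibrated to one specific head-media pairing; swap in a donor head with a different response and the stored coefficients no longer match the signal, which is what PC-3000's adaptive-parameter retraining addresses.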

During recovery, these adaptive parameters matter directly. A degraded head or a donor head with different fly height characteristics produces an analog signal whose shape does not match the equalization profile the drive's firmware was calibrated for. PC-3000 modifies the service area firmware parameters that control read channel behavior, including the adaptive filter coefficients and equalization targets stored in the per-head calibration tables. The tool can also command repeated re-reads of the same sector while stepping through different read channel configurations until it finds a parameter set that produces a valid ECC decode. On drives with severe head degradation, PC-3000's physical block address (PBA) mode bypasses the translator module entirely, reading raw sectors in physical order rather than logical order to minimize head seeks and reduce the thermal cycling that accelerates further head deterioration.

CMR vs. SMR Recording Geometry

The track layout described above assumes conventional magnetic recording (CMR), where each track is written independently with a guard band between adjacent tracks. Shingled magnetic recording (SMR) eliminates most of that guard band by overlapping tracks like roof shingles. The write head writes a full-width track, then the next track partially overwrites the previous one, leaving only a narrow readable strip. This increases track density (and therefore areal density) without shrinking the write head, but it means random writes must rewrite entire shingle bands.
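The band-rewrite penalty reduces to a one-liner, assuming a hypothetical 100-track shingle band:

```python
def smr_write_cost(band_tracks, modified_track):
    """Tracks rewritten to change one track inside a shingle band.

    Writing track k full-width clobbers the narrow readable strip of track
    k+1, so the modified track and every track below it in the band must be
    read out and rewritten in order.
    """
    return band_tracks - modified_track

# Assumed geometry: 100-track shingle bands.
print(smr_write_cost(band_tracks=100, modified_track=0))   # -> 100 (worst case)
print(smr_write_cost(band_tracks=100, modified_track=99))  # -> 1 (last track)
```

Drive-managed SMR firmware hides this by staging random writes in a non-shingled media cache and rewriting bands during idle time, which is why the translation layer it maintains is so critical to recovery.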

For recovery, SMR adds complexity at the firmware tier. The drive's firmware manages a shingled translation layer that maps logical writes to physical shingle bands, and corruption of that translation layer can make the drive appear empty even though the magnetic data on the platters is intact. PC-3000's firmware repair modules can rebuild the shingled translation tables for supported drive families, converting what looks like a total loss into a firmware-tier recovery.

Frequently Asked Questions

What are hard drive platters made of?

Most modern hard drive platters are aluminum alloy discs with a thin magnetic coating. The aluminum substrate is polished to a surface roughness below 1 nanometer. Some high-performance and laptop drives use glass or glass-ceramic substrates because they can be made thinner, smoother, and are more resistant to thermal expansion. The magnetic recording layer is typically a cobalt-chromium-platinum alloy deposited through sputtering, just 10 to 20 nanometers thick.

Can data be recovered from a scratched platter?

It depends on the extent of the damage. Light surface contamination from a brief head contact may allow recovery of data from undamaged tracks using a head swap and careful imaging with PC-3000. Deep circular scratches destroy the magnetic layer in the damaged zone. Data on those tracks is permanently lost, but data on unscored tracks and on other platter surfaces may still be recoverable.

How do hard drives store data when powered off?

The recording layer is a cobalt-chromium-platinum alloy whose ferromagnetic grains hold a stable magnetic orientation without any applied power. The coercivity of the alloy (the strength of the external field required to flip the grain orientation) is high enough that ambient thermal energy and stray magnetic fields do not disturb the stored pattern under normal conditions. Over years or decades, thermal relaxation can gradually weaken the magnetic signal, but a functioning drive refreshes data through normal read/write activity long before this becomes an issue.

How does areal density affect data recovery?

Higher areal density means smaller grains, narrower tracks, and less margin for error in head positioning. Modern drives above 1 Tbit/in² have track pitch under 70 nm, so even minor thermal expansion or vibration during imaging can push the heads off-track. Donor heads that flew acceptably on an older, lower-density drive may produce unusable signal-to-noise ratios on a high-density platter. PC-3000 compensates with firmware-level adaptive parameter adjustments, but recovery from high-density drives is inherently more sensitive to head quality, vibration isolation, and firmware parameter tuning.

What is LDPC error correction in hard drives?

Low-density parity-check (LDPC) is the error correction code modern hard drives use to protect user data in each sector. LDPC replaced Reed-Solomon around 2009 as areal densities approached 1 Tbit per square inch. Unlike Reed-Solomon, which forces each bit into a hard 1 or 0 decision before decoding, LDPC works on log-likelihood ratios: soft probability values that preserve how confident the read channel is in each bit. An iterative belief-propagation decoder then uses those probabilities and the parity constraints to reconstruct the original bits. The result is roughly a 2 to 3 dB signal-to-noise gain over Reed-Solomon, which is the difference between a degraded sector being readable or unrecoverable on modern high-density media.

What is perpendicular magnetic recording?

Perpendicular magnetic recording (PMR) stores data by orienting magnetic grains vertically, perpendicular to the platter surface. Older longitudinal recording (LMR) laid grains flat along the track. PMR was proposed in 1976 by Shun-ichi Iwasaki at Tohoku University and first shipped commercially by Toshiba in 2005 at 133 Gbit/in². The vertical orientation and a soft magnetic underlayer beneath the recording layer allow higher write field strength and better thermal stability at smaller grain sizes, which is how modern drives achieve densities above 1 Tbit/in².

If you are experiencing this issue, learn about our hard drive recovery service.