What Causes I/O Errors During Pool Import
Pool import reads vdev labels, uberblocks, and the Meta Object Set (MOS) from every vdev member to reconstruct the pool's in-memory state. An I/O error during this process means ZFS could not read at least one required metadata block.
1. Physical drive failure: One or more drives in the pool have failed or are returning read errors on sectors that contain ZFS metadata. If the failed drive is part of a raidz group and the group has exhausted its parity tolerance, import fails.
2. Controller or cable issues: SATA/SAS cables degrade, controller ports fail, and HBA firmware can develop bugs. If kernel logs show ATA errors or device resets preceding the ZFS I/O error, the problem may be at the transport layer, not the drive.
3. Corrupted uberblock: ZFS stores a ring buffer of uberblocks on each vdev member. If the most recent uberblock is corrupted (from power loss during a TXG commit, for example), ZFS cannot determine the pool's last consistent state and refuses to import.
4. Insufficient vdev members: raidz1 tolerates one drive failure; if two or more drives are missing or faulted, the vdev becomes UNAVAIL. raidz2 tolerates two drive failures; three or more faulted drives exceed its redundancy, and the import fails.
Example: A TrueNAS server with a 6-drive raidz2 pool loses power during a large file write. On reboot, `zpool import` lists the pool but `zpool import tank` returns "cannot import 'tank': I/O error." The cause: the uberblock for the most recent TXG was mid-write when power was cut. The partial uberblock fails checksum validation. ZFS refuses to import because it cannot verify the pool's consistency.
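To see whether earlier entries in the uberblock ring survived, zdb can print the uberblock array alongside the labels. A minimal sketch, assuming /dev/sda is one pool member (hypothetical path) and an OpenZFS zdb build that supports the documented -l and -u label options:

```sh
# Dump the four vdev labels on one member: pool name, GUID,
# vdev tree, and the TXG recorded in each label.
zdb -l /dev/sda

# Add -u to also print the uberblock array; each valid entry
# shows its TXG and timestamp. Repeat -u for more detail.
zdb -l -u /dev/sda
```

A member whose newest uberblock was torn by the power cut will typically still show older, checksum-valid entries; that is what the TXG rollback strategy described later relies on.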
Diagnostic Steps Before Taking Action
Gather information about the pool and drive health before attempting any import. These commands are read-only and do not modify the pool or drives.
1. List available pools: `zpool import` (no arguments) scans all connected drives for ZFS metadata and lists pools available for import. Note whether the pool shows as ONLINE, DEGRADED, FAULTED, or UNAVAIL.
2. Read vdev labels: `zdb -l /dev/sdX` reads the ZFS vdev label from a specific drive. This shows the pool name, pool GUID, vdev tree, and TXG number. Run it on each drive to confirm all pool members are present and their labels are consistent.
3. Check SMART data: `smartctl -a /dev/sdX` for each drive. Non-zero values in Reallocated_Sector_Ct, Current_Pending_Sector, or Offline_Uncorrectable indicate degraded media. For SSDs, check Media_Wearout_Indicator and Reallocated_NAND_Blk_Cnt.
4. Check kernel logs: Review `dmesg` or `journalctl -k` for I/O errors, SATA link resets, or device timeouts. These entries appear before the ZFS error and indicate whether the problem is at the drive level or the transport level.
5. List uberblocks: `zdb -e -u poolname` lists the ring buffer of uberblocks and their TXG numbers. The `-e` flag lets zdb read an exported or non-imported pool by scanning raw devices; without it, zdb requires the pool to be imported or present in the zpool cache. If the most recent TXG is corrupt, earlier TXGs may still be valid. For pools where even `-e` fails, fall back to `zdb -l /dev/sdX` to read individual device labels directly.
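The checks above can be scripted so nothing is missed. A read-only triage sketch, assuming four members at /dev/sda through /dev/sdd (hypothetical paths) with smartmontools and zdb installed:

```sh
#!/bin/sh
# Read-only triage of suspected pool members; nothing here writes to the drives.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "=== $dev ==="
    # SMART verdict plus the media-degradation counters called out above
    smartctl -a "$dev" | grep -Ei 'test result|Reallocated_Sector|Current_Pending|Offline_Uncorr'
    # Pool name, GUIDs, and label TXG from the vdev label
    zdb -l "$dev" | grep -E 'name:|guid:|txg:'
done

# Transport-level errors that preceded the ZFS failure
dmesg | grep -Ei 'ata[0-9]|i/o error|link reset|timeout' | tail -n 40
```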
Example: A FreeBSD server with a 4-drive raidz1 pool. `zpool import` lists the pool as DEGRADED with one UNAVAIL device. `smartctl` on the UNAVAIL drive shows 847 reallocated sectors and 12 current pending sectors. The remaining three drives show clean SMART data. `dmesg` shows repeated "READ FPDMA QUEUED" errors on that device. The diagnosis: one drive has media degradation. The raidz1 pool has zero remaining margin.
Safe Import Strategies
If drives and cables are healthy and the issue is at the ZFS metadata level, these import strategies proceed from least invasive to most. Always attempt read-only import first.
1. Read-only import: `zpool import -o readonly=on poolname` imports the pool without writing metadata to the member drives; no TXGs are synced and no ZIL is replayed. Add `-o cachefile=none` to prevent updating `zpool.cache`. However, a read-only import still forces massive random read I/O across every member drive to traverse the ZFS metadata tree. If any drive is suspected of physical failure (clicking, slow response, I/O timeouts), image all drives with a hardware write-blocker first and import the cloned images instead. If ZFS can assemble the vdev tree, copy data to a separate destination immediately.
2. TXG rollback: If read-only import fails because the latest TXG is corrupt, try importing at an earlier transaction group: `zpool import -T <txg-number> -o readonly=on poolname`. Use the TXG numbers from `zdb -l` to find valid candidates. Each TXG represents roughly 5 seconds of writes at default settings.
3. Force import (unclean shutdown only): `zpool import -f poolname` tells ZFS to accept a pool that was not cleanly exported. ZFS replays pending TXGs and updates on-disk metadata. This is safe when all drives are healthy and the only issue is an unclean shutdown. If drives are failing, this writes to the drives and can overwrite recoverable metadata.
The order matters. Read-only import writes nothing to the pool's member drives. TXG rollback with `readonly=on` also writes nothing to the pool. Force import writes metadata updates. Each subsequent method is more invasive. If you skip straight to `-f` on a pool with failing drives, you may overwrite the blocks needed for recovery.
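Put together as commands, the escalation might look like the sketch below. The pool name tank and TXG 294810 are placeholders; take real TXG candidates from the zdb output, and stop at the first step that succeeds:

```sh
# Step 1: read-only import; writes nothing to the member drives.
zpool import -o readonly=on -o cachefile=none tank

# Step 2, only if step 1 fails: retry at an earlier TXG, still read-only.
zpool import -T 294810 -o readonly=on -o cachefile=none tank

# If either import succeeds, copy data off immediately. Mountpoints
# vary by dataset; adjust the source path for your layout.
zfs list -r tank
rsync -a /tank/ /mnt/rescue/tank/

# Step 3, healthy drives and unclean shutdown ONLY: force import.
# This replays pending state and WRITES metadata to the drives.
zpool import -f tank
```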
Example: A QNAP NAS lost power during a scrub. The admin connects the drives to a Linux workstation. `zpool import -o readonly=on tank` fails with an I/O error. `zdb -l /dev/sda` shows the latest TXG is 294817. `zpool import -T 294810 -o readonly=on tank` succeeds, rolling back approximately 35 seconds of writes. The admin copies data to a new destination.
When You Can Resolve This Yourself
Several common import failure scenarios have straightforward fixes that do not require professional recovery tools.
1. Device paths changed. Moving drives to a new controller, changing SATA ports, or booting a different OS changes `/dev/sd*` assignments. ZFS identifies drives by GUID, not device path. If `zpool import` does not find the pool, try `zpool import -d /dev/disk/by-id/` to scan by stable device identifiers.
2. Unclean export after power loss. If the only issue is that the pool was not exported before shutdown, `zpool import -f poolname` is safe. ZFS replays the intent log and brings the pool to a consistent state. Confirm SMART data is clean on all drives before proceeding.
3. Single drive failure in raidz2 or raidz3. If the I/O error is caused by one failed drive and the pool has sufficient parity margin, the pool may import in DEGRADED state. Replace the failed drive and resilver. Image all drives first if the data is irreplaceable.
4. Cable or controller port failure. If kernel logs show SATA link resets or device timeouts on a specific port, try a different cable or port. If the drive passes SMART tests on a different port, the original cable or port was the problem.
Example: A homelab Proxmox server with a 4-drive raidz1 pool. After a kernel update, `zpool import` shows the pool but import fails. The admin notices the kernel updated the disk naming scheme (SCSI vs ATA enumeration). Running `zpool import -d /dev/disk/by-id/` finds all four drives and the pool imports cleanly.
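As commands, that fix is short; a sketch assuming the pool is named tank, as in the earlier examples:

```sh
# Map stable by-id names to the current kernel device nodes.
ls -l /dev/disk/by-id/ | grep -v -- -part   # filter out partition symlinks

# Scan the stable identifiers instead of the default /dev nodes.
zpool import -d /dev/disk/by-id/

# Import by name; the pool config then records the by-id paths,
# so future enumeration changes do not break import.
zpool import -d /dev/disk/by-id/ tank
```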
When to Stop and Image the Drives
Stop attempting imports and image every drive if any of the following conditions apply. Further import attempts on failing hardware risk making the data unrecoverable.
1. Multiple drives show I/O errors or SMART degradation. If more drives are faulted than the parity level allows (for raidz1, two or more; for raidz2, three or more), the pool has exceeded its redundancy. Further import attempts stress drives that are already failing.
2. Read-only import and TXG rollback both fail. If ZFS cannot find a valid uberblock across any TXG, the metadata damage extends beyond what standard import tools can handle. The on-disk data may still be intact below the metadata layer, but extraction requires ZFS-aware forensic tools.
3. Previous repair attempts did not resolve the issue. If you have already run `zpool clear`, `zpool replace`, or `zpool scrub` on the degraded pool without resolution, these operations wrote metadata updates to the drives. Image what remains before any further writes.
4. Drives are making abnormal sounds. Clicking, grinding, or repetitive seeking sounds indicate mechanical failure. Power the drive down immediately. These drives need to be imaged with hardware that can manage weak heads and bad sectors at a level `ddrescue` cannot reach.
For imaging, use `ddrescue` with a separate destination drive for each pool member. Work from the images for all subsequent recovery attempts. For drives with physical faults, professional NAS data recovery uses write-blocked connections and PC-3000/DeepSpar hardware to image drives that consumer tools cannot read, then reconstructs the pool offline from the images.
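A sketch of that image-first workflow, assuming GNU ddrescue, two of four members shown, and destination storage with room for full images; per its man page, `zpool import -d` accepts a directory and scans the files inside it for ZFS labels the same way it scans /dev:

```sh
# First pass per member: -n skips the slow scrape phase so the easy
# sectors are captured quickly; the map file makes the run resumable.
ddrescue -f -n /dev/sda /mnt/img/sda.img /mnt/img/sda.map
ddrescue -f -n /dev/sdb /mnt/img/sdb.img /mnt/img/sdb.map
# ...one image + map per remaining member, then optional retry passes:
ddrescue -f -r3 /dev/sda /mnt/img/sda.img /mnt/img/sda.map

# From here on, work only from the images, never the original drives.
zpool import -d /mnt/img/ -o readonly=on tank
```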
Frequently Asked Questions
What does 'cannot import pool: I/O error' mean?
ZFS attempted to read metadata structures (vdev labels, uberblocks, or the Meta Object Set) from the pool's member drives and at least one read failed. The error can indicate a failed drive, a corrupted uberblock, a cable or controller issue, or too many missing vdev members for the pool's parity level. Run 'zpool import' without a pool name to see the pool's reported state, then check SMART data and kernel logs to narrow the cause.
Is 'zpool import -f' safe to run?
It depends on why the import failed. If the pool was not cleanly exported (unclean shutdown, power loss) but all drives are healthy, -f is safe. ZFS replays the intent log (ZIL) to recover synchronous writes that were acknowledged but not yet committed to a transaction group. Uncommitted asynchronous writes that existed only in RAM are lost. If drives are physically failing, -f forces ZFS to write to those drives, which can overwrite recoverable data. Check SMART data on every drive before using -f.
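A minimal pre-flight sketch for that check, assuming members /dev/sda through /dev/sdd (hypothetical paths):

```sh
# smartctl -H exits non-zero when a drive reports failing health;
# do not force-import if any member trips this check.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    smartctl -H "$dev" || echo "WARNING: $dev not healthy; do not run zpool import -f"
done
```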
Can data be recovered from a FAULTED ZFS pool?
In most cases, yes. FAULTED means ZFS has determined that the pool cannot guarantee data integrity with the current vdevs, but the data is still on the drives. Recovery involves imaging each drive and reconstructing the pool offline. ZFS stores redundant copies of critical metadata (uberblocks, vdev labels) that forensic tools can use even when the live pool refuses to import.
Related Recovery Services
ZFS pool import failing with I/O errors?
Free evaluation. Write-blocked drive imaging. Offline pool reconstruction with TXG history preserved. No data, no fee.