Zpool corruption - Message: ZFS-8000-8A


My zpool is trashed :fearful: I don’t have a backup, can “zpool scrub” fix that?

zpool status -v -x
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     7
          ada0p2    ONLINE       0     0    14

errors: Permanent errors have been detected in the following files:


Noticed the issue when I got an HD error message while backing up my jail. Then during reboot I was dropped to the db> prompt (the kernel debugger).
After the reboot, I checked my zpool. Wonder if my SSD is going south.


Have you tried using smartmontools?



Thanks for the hint! Working on it right now.


I ran update-smart-drivedb, and that went OK, but then:

smartctl -a -P showall /dev/ada0
No presets are defined for this drive.  Its identity strings:
MODEL:    /dev/ada0
do not match any of the known regular expressions.

And this tells me:

smartctl -a /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 12.0-CURRENT amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Device Model:     LITEONIT LCT-256M3S-41 7mm 256GB FDE
Serial Number:    TW0YM2P4550852B21081
Firmware Version: SRDB
User Capacity:    256,060,514,304 bytes [256 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS, ATA/ATAPI-7 T13/1532D revision 4a
SATA Version is:  SATA 3.0, 6.0 Gb/s
Local Time is:    Wed Nov 29 15:20:10 2017 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART overall-health self-assessment test result: PASSED

How did this test come out PASSED if the tool doesn’t know anything about my storage device? lol


because it’s a “Smarttool” :wink:


smartmontools may be OK, but zpool scrub is a killer: not only did it not fix anything, it added 2 more corrupted files to the pool. I didn’t want to accept the facts as stated in the message from illumos.org:

Damaged files may or may not be able to be removed depending on the type of corruption. If the corruption is within the plain data, the file should be removable. If the corruption is in the file metadata, then the file cannot be removed, though it can be moved to an alternate location. In either case, the data should be restored from a backup source. It is also possible for the corruption to be within pool-wide metadata, resulting in entire datasets being unavailable. If this is the case, the only option is to destroy the pool and re-create the datasets from backup.
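For the record, the remove-and-verify workflow that message describes boils down to roughly this (the file path here is hypothetical):

zpool status -v tank       # list the files flagged with permanent errors
rm /tank/jail/some.file    # hypothetical path: works only if the corruption is in plain file data
zpool scrub tank           # re-walk the pool so the error list gets rebuilt
zpool clear tank           # reset the error counters once the scrub comes back clean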

So, now I have an excuse to install the latest UNSTABLE and start a new fight with the bleeding-edge tech. :slight_smile:


SMART has some standard sections which all devices supporting the protocol respond to, just like all USB devices must respond to some basic queries. That’s why your device was able to report that the basic testing passed. If the device is not in the SMART database, the device-specific data from the more intensive tests won’t be interpreted for you, but it should still be reported in raw form.
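If you only want those standardized parts, smartctl can query them directly; a few examples (ada0 as in your output above):

smartctl -H /dev/ada0           # just the overall health self-assessment
smartctl -A /dev/ada0           # vendor attribute table; raw values are readable even without a drivedb entry
smartctl -t short /dev/ada0     # start the device's built-in short self-test
smartctl -l selftest /dev/ada0  # read the self-test log once it finishes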

Is this machine a laptop or desktop? If desktop or even a laptop with room for an extra SSD, you may want to think about adding a second one and setting up a mirror.
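Turning a single-disk pool into a mirror is just an attach; a rough sketch, assuming the new SSD shows up as ada1 and gets a matching ada1p2 partition:

gpart backup ada0 | gpart restore -F ada1   # clone the partition layout onto the new disk
zpool attach tank ada0p2 ada1p2             # convert the single vdev into a two-way mirror
zpool status tank                           # watch the resilver run to completion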


When I install smartmontools, I add a few things to /etc/periodic.conf so that at least the current status of the devices gets reported every day in /var/log/daily.log.

This is from my /etc/periodic.conf. It pushes the output to log files instead of email to root, skips DNS lookups, and adds ZFS status and SMART status.

daily_scrub_zfs_enable="NO"                # set to YES for autoscrubbing at threshold days
daily_scrub_zfs_default_threshold="45"     # days between scrubs
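The log/DNS/SMART bits mentioned above aren’t shown here; with the stock periodic(8) knobs plus the one smartmontools installs, they would look roughly like this (the device name is an assumption):

daily_output="/var/log/daily.log"        # write the daily report to a log file instead of mailing root
daily_status_network_usedns="NO"         # skip reverse DNS lookups in the network status
daily_status_zfs_enable="YES"            # include zpool status in the daily report
daily_status_smart_devices="/dev/ada0"   # smartmontools' periodic script: report SMART status daily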


In my experience, smartd(8) has found HDD/SSD problems before ZFS did.
Please post the long output of:

smartctl -x /dev/ada0


This post/topic refers to my Dell Precision M4600 laptop.

I am sure that my SSD’s logic gates are fine, despite all the abuse they took from various experiments with partitioning, cloning, and forced (power-off) reboots. The laptop has a (BIOS-level) self-test utility that, among other things, checks the storage device(s), and it doesn’t indicate any issues with the SSD.

Since I have such an option, I could, can, and most likely will add another SSD, the mini-SSD, to my M4600 laptop. Though in the case of zpool corruption, I don’t know if mirroring would help. Wouldn’t the second storage device in the mirror have a corrupted zpool too, since it mirrors the first one?

I have a good backup of my jails (not the latest snapshot, but the last clean one before the zpool corruption) which holds all the data that was important to me. The zpool corruption incident was yet another great lesson in learning about zpools and ZFS. And thanks to all of you for the hints and tips related to smart(ass)tools. I’ll make sure to use them in my TrueOS/FreeBSD installs to protect the storage devices from my dangerous experiments.


Well, ZFS mirrors are interesting. ZFS checksums a bunch of things (metadata, data blocks, etc.), and the two sides of a mirror are independent of each other to some extent. The same data gets pushed for write to both devices, and each copy is checksummed independently. When that block is read, the checksum is verified, so the copy on one device can fail while the copy on the other passes. The one that fails gets rewritten from the one that passes. Mirrors self-correct.

A periodic scrub is a good thing to do; it walks all the used blocks, verifying checksums and correcting any that fail. On consumer-grade hardware you may want to do that every 30 to 60 days; on enterprise grade, maybe every quarter.
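If you’d rather kick one off by hand than wait for periodic(8):

zpool scrub tank    # start a scrub of the whole pool
zpool status tank   # the "scan:" line shows progress and how much was repaired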

Having the smartmontools output as part of the daily status gives you an idea of how healthy the hardware is, and therefore of how often you should scrub.


No. If you have a problem with one device, you have a degraded but fully functional pool.


In its default settings - if I’m remembering correctly - a ZFS mirror keeps a full copy of the data on each disk, and metadata additionally gets extra ditto copies. So if one disk goes bad, the other is still good - unless they both die at the same moment.
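Related knob, for what it’s worth: the copies property can add extra data redundancy per dataset, even on a single disk (the dataset name here is hypothetical):

zfs get copies tank               # user data defaults to 1 copy; metadata already gets extra ditto copies
zfs set copies=2 tank/important   # hypothetical dataset: store every data block twice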