“I’ve verified the primary drive and it does not appear to be a matter of drive corruption but rather looks like not having received the BE update, or able to complete writing it.”
Since the pool is configured as a mirror, the entire BE update must, by definition, be written to every disk in the vdev unless the pool became degraded. In your case that could mean a failing primary disk (i.e. the boot disk configured in the BIOS), and if so, the pool may still be in a degraded state.
To start with, check the “health” of the pool:
zpool status -x
If the pool is healthy, then I don’t know how the new BE could have been written to only one of the disks in the mirror; as I said, that goes against the definition of mirrored storage. I suspect the pool isn’t healthy and the primary disk is failing. If so, start by taking the primary disk offline:
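As a sketch, assuming the pool is named rpool and the failing disk is c0t0d0 (both are placeholders; substitute the names reported by zpool status):

```shell
# Take the suspect disk offline so ZFS stops issuing I/O to it.
# "rpool" and "c0t0d0" are placeholders; use the names from `zpool status`.
zpool offline rpool c0t0d0

# Confirm the device now shows as OFFLINE in the pool status.
zpool status rpool
```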
After the device is offline, ZFS will no longer try to use it.
I suppose the next step would be to detach the failed disk and physically replace it.
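A sketch of that step, using the same placeholder names (rpool for the pool, c0t0d0 for the failed disk, c0t1d0 for the surviving disk, and c0t2d0 for the new one):

```shell
# Detach the failed disk from the mirror.
zpool detach rpool c0t0d0

# After physically installing the new disk, attach it alongside the
# surviving disk to rebuild the mirror; ZFS will resilver automatically.
zpool attach rpool c0t1d0 c0t2d0

# Watch the resilver progress until the pool reports ONLINE again.
zpool status rpool
```

Note that for a boot mirror the new disk typically also needs boot blocks installed (the exact command depends on your OS and boot loader) before the system can boot from it.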
Here’s a guide on how to do that: