Whacked my zfs mountpoints, cannot boot


#1

Hello,

While trying to upgrade from 10.3 to 12.0 (which fails for me and others for as yet unknown reasons), I thought I was being clever and tried to execute a missing file from my local ZFS tank while still in the installer's emergency console.

Not knowing how to mount my ZFS pool correctly, I somehow altered the mountpoints, and now my system won't boot anymore.
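For the record, I suspect my mistake was changing the stored mountpoint properties with zfs set instead of importing the pool with an altroot (which only prefixes the mountpoints temporarily). Roughly, the safe way from the emergency console would have been something like this (pool name tank1 in my case):

zpool import -f -R /mnt tank1   # altroot import: datasets get mounted under /mnt,
                                # the mountpoint properties on disk stay untouched
zfs mount -a                    # mount any datasets that did not mount automatically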

Could anyone kindly point me in the direction of what mountpoints PC-BSD 10.3 expects?

I am attaching three screenshots: 1) the error message during boot, 2) the error messages when importing via the emergency console, and 3) the zfs list output from the emergency console.

https://postimg.org/gallery/gppwob3q/343fddbe/

Many thanks!


#2

Could anyone please paste the output of

zfs list

here? With that I should be able to get my system booting again.

Thank you!
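In case it matters, this is roughly how I am inspecting the pool from the emergency console (imported without mounting anything, so nothing else gets clobbered):

zpool import -f -N tank1                 # import the pool but do not mount any datasets
zfs list -o name,mountpoint -r tank1     # show the (now wrong) mountpoint of every dataset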


#3
[kenmoore@CarbonWraith] ~% zfs list
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
tank1                                       47.8G   179G    19K  none
tank1/ROOT                                  39.6G   179G    19K  none
tank1/ROOT/12.0-CURRENT-201612050948        3.16G   179G  3.16G  none
tank1/ROOT/12.0-CURRENT-201612060734        3.20G   179G  3.20G  none
tank1/ROOT/12.0-CURRENT-up-20170708_121415   185K   179G  7.92G  /
tank1/ROOT/12.0-CURRENT-up-20170809_135917   117K   179G  7.98G  /
tank1/ROOT/12.0-CURRENT-up-20170822_150727   108K   179G  8.06G  /
tank1/ROOT/12.0-CURRENT-up-20170920_081810  31.5G   179G  8.32G  /
tank1/ROOT/12.0-CURRENT-up-20170927_200826  1.70G   179G  7.31G  /
tank1/ROOT/initial                           129K   179G  4.13G  /mnt
tank1/iocage                                1.64M   179G  1.50M  /iocage
tank1/iocage/download                         23K   179G    23K  /iocage/download
tank1/iocage/images                           23K   179G    23K  /iocage/images
tank1/iocage/jails                            23K   179G    23K  /iocage/jails
tank1/iocage/log                              23K   179G    23K  /iocage/log
tank1/iocage/releases                         23K   179G    23K  /iocage/releases
tank1/iocage/templates                        23K   179G    23K  /iocage/templates
tank1/tmp                                    423K   179G  80.5K  /tmp
tank1/usr                                   8.05G   179G    19K  none
tank1/usr/home                              5.91G   179G   138M  /usr/home
tank1/usr/home/kenmoore                     5.77G   179G  5.49G  /usr/home/kenmoore
tank1/usr/jails                               28K   179G    19K  /usr/jails
tank1/usr/obj                                 28K   179G    19K  /usr/obj
tank1/usr/ports                             2.15G   179G  2.00G  /usr/ports
tank1/usr/src                                 28K   179G    19K  /usr/src
tank1/var                                   21.7M   179G    19K  none
tank1/var/audit                               28K   179G    19K  /var/audit
tank1/var/log                               21.3M   179G  2.32M  /var/log
tank1/var/mail                                45K   179G    24K  /var/mail
tank1/var/tmp                                336K   179G   130K  /var/tmp

You can probably ignore the “iocage” mountpoints - those are autocreated/managed by iocage when you start using it. Similarly, the tank1/ROOT/* datasets are the various boot environments I have on this laptop right now - so you can skip those too.
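If your pool follows the same layout, putting the mountpoints back should be roughly the set of commands below. This is only a sketch: substitute your own pool name and boot environment for the tank1/ROOT/<your-boot-environment> placeholder, and run it with the pool imported via -N (or an altroot) so nothing gets mounted over the rescue environment. Child datasets such as tank1/usr/home/<user> inherit from their parent, so they don't need to be set individually.

# root filesystem: your active boot environment (placeholder name below)
zfs set mountpoint=/ tank1/ROOT/<your-boot-environment>
zpool set bootfs=tank1/ROOT/<your-boot-environment> tank1   # tell the loader which BE to boot, if not already set

# container datasets that should never be mounted themselves
zfs set mountpoint=none tank1
zfs set mountpoint=none tank1/ROOT
zfs set mountpoint=none tank1/usr
zfs set mountpoint=none tank1/var

# the regular filesystems, matching the listing above
zfs set mountpoint=/tmp       tank1/tmp
zfs set mountpoint=/usr/home  tank1/usr/home
zfs set mountpoint=/usr/jails tank1/usr/jails
zfs set mountpoint=/usr/obj   tank1/usr/obj
zfs set mountpoint=/usr/ports tank1/usr/ports
zfs set mountpoint=/usr/src   tank1/usr/src
zfs set mountpoint=/var/audit tank1/var/audit
zfs set mountpoint=/var/log   tank1/var/log
zfs set mountpoint=/var/mail  tank1/var/mail
zfs set mountpoint=/var/tmp   tank1/var/tmp

# when done, export cleanly and reboot
zpool export tank1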


#4

@beanpole135 Many thanks for posting this, Ken. It helped restore my system!

