This just cost me a stupidly large amount of time, so I’m going to explain it here in the hopes of saving some other folks that time and effort, though it’s likely to get a bit ranty. I was trying to get Arch Linux running in a VM, and I wanted that VM to be in bhyve. So, I installed it in bhyve, and initially, everything seemed to be fine, but when I went back to it the next day, I couldn’t ssh in. So, I started up TigerVNC, and lo and behold, it claimed that it couldn’t boot, because it was a UEFI hard drive. Huh? bhyve supports UEFI. Why would that be a problem? Maybe there’s a good reason for the error message being what it is, but at the moment, it seems like an utterly stupid error message to me. Regardless, no matter how many times I restarted bhyve or rebooted my TrueOS box, I got the same result. There was zero indication of what was wrong, so I decided that bhyve must be broken right now (at least with regard to UEFI) and switched to VirtualBox.
However, VirtualBox ultimately had the same problem. It didn’t display the horrible error message; rather, it just went to the UEFI prompt, which I later figured out bhyve does as well if you wait for something like a minute - long enough that you’re more likely to shut the VM down because it’s not working than you are to ever see the prompt. But regardless, it wouldn’t boot properly, and the UEFI prompt certainly was of no use to me. Maybe a UEFI guru could make sense of it, but as far as I’m concerned, the OS’s bootloader is supposed to show up, not the UEFI junk. If I’m seeing the UEFI prompt, then something is seriously wrong.
Well, since VirtualBox had the problem too, the implication was that Arch had a problem. So, I did a bunch of digging in their documentation and was finally able to piece together what was going on. For some reason, VirtualBox does not persist the UEFI boot settings. On real hardware, they get stored in the firmware’s NVRAM, which VirtualBox just plain fails to emulate. And presumably, bhyve has exactly the same problem - which seems like a huge flaw to me. How can you properly support UEFI without retaining the settings? That’s like claiming that you support a bootloader when you just put it in memory and delete it later, so that when you reboot your host machine, the VM doesn’t work, because the bootloader is gone.
In any case, after digging through Arch’s documentation enough and after experimenting with several of their solutions (which generally were not explained well enough; they don’t seem to like to actually show you what you have to do, preferring to give you a one-liner which gives you the idea without actual instructions, which is borderline useless when dealing with something that you’re not familiar with), I finally figured out how to fix the problem. Apparently, there’s a default place that UEFI looks for the bootloader when it has no stored boot entries, and for some reason, grub doesn’t put itself there by default. I guess that the UEFI settings that aren’t being retained are what tell UEFI where to look. But if you follow the instructions that Arch gives you, you end up with the bootloader at EFI/grub/grubx64.efi on the EFI system partition,
whereas what you really need is EFI/BOOT/BOOTX64.EFI.
If you pass --bootloader-id=BOOT to grub-install instead of --bootloader-id=grub (which is what the instructions tell you to use), then it gives you EFI/BOOT/grubx64.efi, which is closer but still named incorrectly. I don’t know how to get grub to do it correctly for you. However, if you just move/rename the grubx64.efi file so that it’s EFI/BOOT/BOOTX64.EFI,
then Arch Linux will boot correctly with both VirtualBox and bhyve. I don’t know how applicable this is to other distros, but if you run into a similar problem with Ubuntu or whatever, then it may be that you need to fix what it did with the bootloader so that it’s in EFI/BOOT/BOOTX64.EFI on the EFI system partition.
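To make the fix concrete, here’s a sketch of the move, simulated in a throwaway temp directory rather than on a real EFI system partition (the /boot/efi mount point in the comment is an assumption; yours may be mounted elsewhere):

```shell
# Simulate an EFI system partition in a temp dir; on a real system this
# would be wherever your ESP is mounted (commonly /boot/efi - an assumption).
esp=$(mktemp -d)

# What grub-install --bootloader-id=grub leaves you with:
mkdir -p "$esp/EFI/grub"
touch "$esp/EFI/grub/grubx64.efi"

# The fix: put a copy of the image at the UEFI fallback path,
# EFI/BOOT/BOOTX64.EFI, which the firmware checks when it has no
# stored boot entries to go on.
mkdir -p "$esp/EFI/BOOT"
cp "$esp/EFI/grub/grubx64.efi" "$esp/EFI/BOOT/BOOTX64.EFI"

ls "$esp/EFI/BOOT"
```

Copying rather than moving means the normal EFI/grub entry keeps working on hardware that does retain its settings, while the fallback copy covers VMs that forget them.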
If not, hopefully this rant of mine will at least help you figure it out…