LXC VM running FreeBSD can't see hard disk

Hello everyone! I have been having an interesting issue with LXC and the new VM functionality when installing a FreeBSD-based (pfSense) guest.

The below series of commands is being used to set up an LXC VM and attach an ISO inside QEMU.

sudo lxc init pfsense --empty --vm -c limits.cpu=4 -c limits.memory=4GB -c security.secureboot=false -n lxcbr0 
sudo lxc config device override pfsense root size=32GB 
sudo echo -n '-device virtio-vga -vnc :2 -drive file=/home/wyatt/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso,index=0,media=cdrom,if=ide' | sudo lxc config set pfsense raw.qemu - 
sudo lxc start pfsense && sudo lxc console pfsense

Booting from the ISO and beginning the installation runs as expected, until the installer indicates that no hard drives are present. Going into the install ISO’s shell, I was able to confirm that the device does not exist.
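For reference, this is roughly how I confirmed it from the installer's shell (standard FreeBSD commands, only runnable inside the guest; the device names are what I expected to see, not what was there):

```shell
# From the pfSense installer's shell (FreeBSD live environment):
camcontrol devlist                  # enumerate disks seen by the CAM layer; came back empty
ls /dev/ada* /dev/da* /dev/vtbd*    # none of these device nodes existed
```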

The same set of instructions has worked for CentOS so far, and for Windows 10 (when adding the appropriate Windows drivers).

Are there drivers for FreeBSD I seem to be missing, or is there something I’m doing wrong here?

Here is the VM YAML for reference:

architecture: x86_64
config:
  limits.cpu: "4"
  limits.memory: 4GB
  raw.qemu: -device virtio-vga -vnc :2 -drive file=/home/wyatt/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso,index=0,media=cdrom,if=ide
  security.secureboot: "false"
  volatile.last_state.power: STOPPED
  volatile.lxcbr0.hwaddr: 00:16:3e:ec:17:c0
  volatile.vm.uuid: b2139fb5-3345-44a1-9cf5-ed9325f3e851
devices:
  lxcbr0:
    nictype: bridged
    parent: lxcbr0
    type: nic
  root:
    path: /
    pool: local
    size: 32GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Sounds like FreeBSD may be missing a driver for virtio-scsi?

That’s what I was thinking, can anyone point me to a quick guide on adding those in? I can’t even find the download link for that.

So, from racking my head on this for the last five hours, I’ve been led to believe it’s an issue with how OVMF and FreeBSD interact.

I have a SeaBIOS boot image that I would typically use within KVM. How would I specify a BIOS in LXC?

You can’t. LXD only supports a modern Q35 layout with OVMF.

Well, maybe you can try passing -bios through raw.qemu, but I would expect other machine details to be incompatible with that.
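If you want to experiment with that, it would look something like the below (untested sketch: the SeaBIOS path is distro-dependent, and as noted, the rest of the Q35/OVMF machine setup will likely conflict):

```shell
# Hypothetical: point QEMU at a SeaBIOS image instead of OVMF.
# /usr/share/seabios/bios.bin is a common location, but check your distro.
echo -n '-bios /usr/share/seabios/bios.bin' | sudo lxc config set pfsense raw.qemu -
```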

It’d be quite good if FreeBSD would behave on virtual UEFI systems, though. Do they have an open bug report for this?

Thanks! I had meant to post an update on what I’ve found out, and it’s given me new questions.

raw.qemu ignores the -bios flag in the LXC implementation.

Looking through the QEMU documentation and chatting with people on the Discord, the hard disk needs to be in qcow2 format and defined in raw.qemu. I’ll be trying that shortly.
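For anyone following along, a disk like the one used further down can be created with qemu-img (assuming it is installed; the path and the 32 GB size mirror the root device defined earlier):

```shell
# create a sparse 32 GB qcow2 image to hand to QEMU via raw.qemu
qemu-img create -f qcow2 /home/wyatt/pfsense.qcow2 32G
```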

If this works, how do I ensure the manually created file can enjoy the migration features within the LXD cluster?

On them working with virtual UEFI systems, I’m not sure. Considering I’ve been just throwing crap at it to see what works, I don’t understand enough of the issue to open a proper bug report.

So my best guess here is that the disks allocated by LXC are presented as IDE devices. Manually defining a manually created disk as virtio resolves the issue. This breaks the clustering and backup features I’m used to in LXD, but that could be remediated by other means.

The below was added to the beginning of raw.qemu:

-drive file=/home/wyatt/pfsense.qcow2,index=0,media=disk,if=virtio

Adding this to the thread for posterity:

LXC VMs using QEMU don’t like to listen to LXC. The only way I’ve had any luck with this so far is to define everything in raw.qemu, mainly disk drives and NICs.

Documentation on what to put in raw.qemu has been hard for me to find. The following link is the only place I was able to find the relevant man page: https://linux.die.net/man/1/qemu-kvm

Stéphane, do you know of a cleaner way to handle this in LXC? With the disk being created externally and defined within raw.qemu being a pain in the butt, it feels like KVM running alongside LXC for VMs may be the better option.

LXD unpacks the image templates to raw format so they can be used directly on block devices (or loopback images). These raw disk devices are exposed to the VM as virtio-scsi-pci devices, not IDE.
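One way to confirm what LXD actually hands to QEMU (assuming a running VM; the exact process name and paths can differ between LXD versions):

```shell
# show the full command line of the running VM's QEMU process,
# including the -device/-drive arguments LXD generated
pgrep -af qemu-system-x86_64
# newer LXD releases may also write the generated configuration to
# /var/log/lxd/<instance>/qemu.conf for inspection
```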

Thanks Tom!

Are there any configuration options to change how the LXC-provided resources are presented?

My thought is that the disks and NICs defined in raw.qemu as virtio are being recognized by the OS, but the disk and NIC presented by LXC are not being seen at all. In the disk example, I would assume passing virtio is significantly different from virtio-scsi-pci, and that is the reason why it’s working like that. Maybe changing how it attaches to the VM would remedy issues like this for pain-in-the-rear OSes like FreeBSD.

Have you tried enabling the driver? https://www.freebsd.org/cgi/man.cgi?query=virtio_scsi&sektion=4

I’ll give that a try and let you know the results. Friends more familiar with FreeBSD had indicated it was enabled by default in the current release.
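For completeness, enabling the modules at boot would look like this inside the guest (standard FreeBSD loader.conf syntax; as it turns out further down, the module was already loaded):

```shell
# /boot/loader.conf inside the FreeBSD/pfSense guest
virtio_load="YES"
virtio_pci_load="YES"
virtio_scsi_load="YES"
```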

Okay, so this is getting weirder…

In the ISO’s preboot environment, lsdev reveals the disk and network interfaces without issue. Here I run set virtio_scsi_load="YES".

I get into the installer environment and it sees the AHCI device as an Intel ICH9 SATA controller, but no disks.
I go into the installer’s shell, run "kldload virtio_scsi", and am told it’s already loaded.

The only errors I’m seeing are PCI related allocation errors.

pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> mem 0xc1a46000-0xc1a46fff irq 21 at device 1.0 on pci0
pcib1: failed to allocate initial I/O port window: 0xa000-0xafff
pcib1: [GIANT-LOCKED]
pcib2: <PCI-PCI bridge> mem 0xc1a45000-0xc1a45fff irq 21 at device 1.1 on pci0
pcib2: Failed to allocate interrupt for PCI-e events
pcib3: <PCI-PCI bridge> mem 0xc1a44000-0xc1a44fff irq 21 at device 1.2 on pci0
pcib3: Failed to allocate interrupt for PCI-e events
pcib4: <PCI-PCI bridge> mem 0xc1a43000-0xc1a43fff irq 21 at device 1.3 on pci0
pcib4: Failed to allocate interrupt for PCI-e events
pcib5: <PCI-PCI bridge> mem 0xc1a42000-0xc1a42fff irq 21 at device 1.4 on pci0
pcib5: Failed to allocate interrupt for PCI-e events

I’m stumped, any ideas?

EDIT: The error and hardware presence (or lack thereof) is the same even without set virtio_scsi_load="YES" being set. I’m just an idiot and didn’t look at the console startup messages.

Doing some more digging, I found https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243640.

It’s a known issue with PCI passthrough in FreeBSD. :man_facepalming:

So, I de-evolved and just kept changing the configuration for three hours to figure out how to make this work.

Given the current known issue with FreeBSD, the solution I’ve found is to specify "-machine pc-q35-2.6" in raw.qemu. Here is my quick write-up on how to install pfSense on LXC.

sudo lxc init pfsense --empty --vm -c limits.cpu=4 -c limits.memory=4GB -c security.secureboot=false -n lxcbr0
sudo echo -n '-boot menu=on -machine pc-q35-2.6 -device virtio-vga -vnc :2 -drive file=/home/wyatt/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso,index=0,media=cdrom,if=ide' | sudo lxc config set pfsense raw.qemu -
sudo lxc start pfsense && sudo lxc console pfsense

Now you should be able to install pfSense. Once you are done installing, shut down the VM, then run the following. We are just removing the ISO and getting rid of the boot menu.
sudo echo -n '-machine pc-q35-2.6 -device virtio-vga -vnc :2' | sudo lxc config set pfsense raw.qemu -

Connect via VNC to finish the setup. Once you are done setting up, shut down and run the below to get rid of VNC.
sudo echo -n '-machine pc-q35-2.6' | sudo lxc config set pfsense raw.qemu -

Let me know if you guys spot any problems with this! I know using 2.6 is completely non-ideal, and the flag should be removed whenever FreeBSD fixes this bug, but it appears to be low on the priority list for them.
