Running virtual machines with LXD 4.0

All is well for me (init, download, etc) until I start the VM (lxc start). I then get this error message:-

Error: Failed to run: modprobe vhost_vsock: modprobe: ERROR: could not insert 'vhost_vsock'

:disappointed:

Does it say why vhost_vsock won’t load?

It’s sometimes because of some vmware/virtualbox tools being already loaded.

It says 'Device or resource busy'.

It is a VM running Ubuntu with lxc/lxd 4.0.1 and yes it has open-vm-tools running.

Thanks.

If you run lsmod | grep vsock it should show which existing vsock modules are loaded; you then need to rmmod them so that LXD can load the vhost_vsock module.
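As a rough sketch of that sequence (the module names below are only an assumption of what open-vm-tools typically loads; go by your own lsmod output):

```shell
# Assumed sample of `lsmod | grep vsock` output on a VMware guest:
sample='vmw_vsock_vmci_transport 32768 0
vsock 36864 1 vmw_vsock_vmci_transport'

# The module name is the first column; remove dependants first,
# then load vhost_vsock so LXD can use it.
modules=$(printf '%s\n' "$sample" | awk '{print $1}')
printf '%s\n' "$modules"
# On the real host:
#   sudo rmmod vmw_vsock_vmci_transport && sudo rmmod vsock
#   sudo modprobe vhost_vsock
```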


vsock doesn’t work with nested virtual machines at this time, so unless the parent hypervisor doesn’t use vsock itself, this effectively prevents running LXD virtual machines inside of an existing virtual machine.

It’s worth noting that nested virtualization, at least on Intel platforms, is also not always reliable. If you want to test LXD virtual machines, a bare-metal host is strongly advised.

Just now I tried the same thing on a Digital Ocean Droplet (a fancy name for a VM) in the cloud. And it works!

I do not know which hypervisor Digital Ocean uses, but it clearly works with that one nested. Normally with neofetch I can see which hypervisor is being used. For example with AWS neofetch says "Host: HVM domU", which I believe is Xen. And with Linode it says "Host: KVM/QEMU". But with Digital Ocean it says "Host: Droplet" - I guess they are hiding it?

Interestingly, the VM running nested says "Host: KVM/QEMU" and not "Host: LXC/LXD"!

It is a shame that it will not work under VMware. I hope one day this is fixed.

By the way you mentioned that nested virtualization is not always reliable on Intel. So I assume you are saying that AMD is a better choice? Is there a specific technical reason for this?

Thanks.

I suspect they’re using QEMU but do not use vsock at all, leaving it free for the VM to use.

For nested virt, I’m not a CPU expert, but what I’ve been told is that on the Intel side, VMs are effectively tracked on a flat plane, so nested virt actually means a VM being able to create a VM parallel to itself. AMD instead has a tree-type structure where a child VM is tracked by the CPU as a child of its parent. I don’t expect this to impact performance, but it likely impacts stability and security.

I just tried on a Linode VM and it failed with this error message:-

Error: Failed to run: /snap/lxd/current/bin/lxd forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/14890/bin/qemu-system-x86_64 -S -name ubuvm -uuid e00f23fb-146c-4ddf-8662-be1690eb5cbb -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/ubuvm/qemu.conf -pidfile /var/snap/lxd/common/lxd/logs/ubuvm/qemu.pid -D /var/snap/lxd/common/lxd/logs/ubuvm/qemu.log -chroot /var/snap/lxd/common/lxd/virtual-machines/ubuvm -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: : exit status 1
Try 'lxc info --show-log ubuvm' for more info
root@localhost:~# lxc info --show-log ubuvm
Name: ubuvm
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/04/28 22:01 UTC
Status: Stopped
Type: virtual-machine
Profiles: default

Log:

Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize KVM: No such file or directory

The VM is running a fresh install of Ubuntu 20.04 LTS. It has 2 CPU & 4GB RAM.

That indicates you’re missing the kvm kernel modules.
On intel, that’s kvm and kvm_intel.
On AMD, that’s kvm and kvm_amd.

Depending on the kernel you’re running, those may not be available, which would explain this behavior.


Because it is a snap package, does it not include all the dependencies like KVM? :thinking:

By the way, I am a complete beginner and new to Linux! So forgive me if these are dumb questions :blush:

Snaps include all the userspace bits they need, they however do not get to run their own kernels, so you still need your system’s kernel to support all the features we need.

OK, thanks. I installed all of these: qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager, and I still get the same error.

By the way does virtualization in LXD require hardware enabled VT-x?

That won’t do you any good, those are userspace packages, you’re missing kernel modules.

Show:

  • uname -a
  • modinfo kvm
  • modinfo kvm_intel

Yes, LXD requires hardware virtualization (VT-x on Intel or SVM on AMD).
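As a hedged sketch of how to tell which KVM module applies, the check boils down to the CPU flags (on a real host you would read them from /proc/cpuinfo; the flag string below is an assumed sample):

```shell
# vmx = Intel VT-x, svm = AMD SVM. On a real host:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu vme de pse msr svm nx lm"   # assumed sample from an AMD guest
case " $flags " in
  *" vmx "*) module=kvm_intel ;;
  *" svm "*) module=kvm_amd ;;
  *)         module="" ;;
esac
echo "${module:-no hardware virtualization exposed}"   # prints: kvm_amd
```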

uname -a
Linux localhost 5.4.0-26-generic #30-Ubuntu SMP Mon Apr 20 16:58:30 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

modinfo kvm
filename: /lib/modules/5.4.0-26-generic/kernel/arch/x86/kvm/kvm.ko
license: GPL
author: Qumranet
srcversion: 0406E6D7275BE2E610C1AA9
depends:
retpoline: Y
intree: Y
name: kvm
vermagic: 5.4.0-26-generic SMP mod_unload
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 2E:1C:6B:CE:DF:4D:6E:F0:5B:25:79:E8:B6:0E:F2:9A:9A:01:CB:AF
sig_hashalgo: sha512
signature: 42:29:AE:40:7C:BD:A2:D1:92:B3:60:48:31:BC:CA:B8:FA:4A:45:04:
38:B9:09:75:69:4A:84:B7:E3:FD:ED:07:EB:65:40:90:B5:BA:0E:84:
D0:A7:43:DA:37:69:F8:F3:FF:1D:8D:88
parm: nx_huge_pages:bool
parm: nx_huge_pages_recovery_ratio:uint
parm: ignore_msrs:bool
parm: report_ignored_msrs:bool
parm: min_timer_period_us:uint
parm: kvmclock_periodic_sync:bool
parm: tsc_tolerance_ppm:uint
parm: lapic_timer_advance_ns:int
parm: vector_hashing:bool
parm: enable_vmware_backdoor:bool
parm: force_emulation_prefix:bool
parm: pi_inject_timer:bint
parm: halt_poll_ns:uint
parm: halt_poll_ns_grow:uint
parm: halt_poll_ns_grow_start:uint
parm: halt_poll_ns_shrink:uint

modinfo kvm_intel
filename: /lib/modules/5.4.0-26-generic/kernel/arch/x86/kvm/kvm-intel.ko
license: GPL
author: Qumranet
srcversion: DCBB34BC10742394CDE65F8
alias: cpu:type:x86,ven*fam*mod*:feature:*0085*
depends: kvm
retpoline: Y
intree: Y
name: kvm_intel
vermagic: 5.4.0-26-generic SMP mod_unload
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 2E:1C:6B:CE:DF:4D:6E:F0:5B:25:79:E8:B6:0E:F2:9A:9A:01:CB:AF
sig_hashalgo: sha512
signature: 6A:A7:17:71:5B:B0:0A:D2:8F:B5:76:8A:FB:21:1E:7D:89:9D:4B:13:
DF:64:60:DA:16:9B:7C:87:D6:94:91:B8
parm: enable_shadow_vmcs:bool
parm: nested_early_check:bool
parm: vpid:bool
parm: vnmi:bool
parm: flexpriority:bool
parm: ept:bool
parm: unrestricted_guest:bool
parm: eptad:bool
parm: emulate_invalid_guest_state:bool
parm: fasteoi:bool
parm: enable_apicv:bool
parm: nested:bool
parm: pml:bool
parm: dump_invalid_vmcs:bool
parm: preemption_timer:bool
parm: ple_gap:uint
parm: ple_window:uint
parm: ple_window_grow:uint
parm: ple_window_shrink:uint
parm: ple_window_max:uint
parm: pt_mode:int
parm: enlightened_vmcs:bool

Ok, try modprobe kvm and modprobe kvm_intel

root@localhost:~# modprobe kvm
root@localhost:~# modprobe kvm_intel
modprobe: ERROR: could not insert 'kvm_intel': Operation not supported

The CPU is AMD Epyc.

Maybe Linode has VT-x off and Digital Ocean has it on?

Thanks.

Ah right, if it’s an Epyc, then you’d want modprobe kvm_amd instead.

root@localhost:~# modprobe kvm_amd
modprobe: ERROR: could not insert 'kvm_amd': Operation not supported

I also did this:-

root@localhost:~# kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

Ok, so looks like Linode is stripping the virtualization extensions from their guests.
That explains why those modules didn’t auto-load and why this isn’t working for you.

On a system where virtualization is supported, the kernel will normally auto-load the needed modules so none of this is needed. Running the commands by hand is a good way to see why it’s not working though :slight_smile:
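For anyone checking another provider, the same diagnosis can be sketched against a cpuinfo flags line (the sample below assumes a guest with the extensions stripped, as on Linode here; on a real host you would read /proc/cpuinfo directly):

```shell
# Assumed sample 'flags' line from /proc/cpuinfo with no vmx/svm present:
cpuinfo='flags : fpu vme de pse tsc msr pae cx8 apic sep nx lm'
if printf '%s\n' "$cpuinfo" | grep -qE '(^| )(vmx|svm)( |$)'; then
  verdict="hardware virtualization exposed"
else
  verdict="no virtualization extensions; kvm modules will refuse to load"
fi
echo "$verdict"
```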

Thanks for solving this! And thanks for making LXD such a great product, with both containers and VMs now running side by side :smile:
