Ubuntu 20.04 and LXD 5.5: VM can't be created because vhost_vsock is missing

$ lxc launch ubuntu:22.04 --vm
Creating the instance
Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: vhost_vsock kernel module not loaded

This host is itself a virtual machine (nested), but CPU virtualization is enabled.
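
For reference, a quick way to confirm the virtualization extensions are actually exposed to this guest (a non-zero count means the vmx/svm CPU flags are visible):

$ grep -cE 'vmx|svm' /proc/cpuinfo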

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"

$ lxc version
Client version: 5.5
Server version: 5.5

$ uname -a
Linux svr 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

$ sudo apt install linux-modules-extra-5.4.0-58-generic
Reading package lists... Done
Building dependency tree       
Reading state information... Done
linux-modules-extra-5.4.0-58-generic is already the newest version (5.4.0-58.64).
0 upgraded, 0 newly installed, 0 to remove and 240 not upgraded.
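
The package being installed doesn't guarantee the module file matches the running kernel, so it's worth double-checking that the module is actually on disk for this kernel:

$ modinfo vhost_vsock | grep filename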

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Here it says the vhost_vsock module doesn't have to be loaded in advance, so it appears LXD should load it itself when it tries to use it.
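
As a workaround, the module can also be loaded by hand and made persistent across reboots (standard modprobe / modules-load.d usage, nothing LXD-specific):

$ sudo modprobe vhost_vsock
$ echo vhost_vsock | sudo tee /etc/modules-load.d/vhost-vsock.conf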

Related thread (but for the Raspberry Pi; that fix doesn't work for me, maybe because I'm on 20.04):

I saw the "240 not upgraded", so I ran apt upgrade and rebooted. I still get the same error.

I removed open-vm-tools and rebooted. The vhost_vsock module now loads, but I get a new error:

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

$ lsmod | grep vso
vhost_vsock            24576  0
vmw_vsock_virtio_transport_common    32768  1 vhost_vsock
vhost                  49152  1 vhost_vsock
vsock                  36864  2 vmw_vsock_virtio_transport_common,vhost_vsock

$ sudo lxc launch ubuntu:22.04 --vm
Creating the instance
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found

After that I created another pool (for some reason I couldn't use the previous one), and then it worked.

$ lxc storage create beegfs dir source=/mnt/beegfs/lxd-pool
Storage pool beegfs created

$ lxc storage list
+----------+--------+----------------------+-------------+---------+---------+
|   NAME   | DRIVER |        SOURCE        | DESCRIPTION | USED BY |  STATE  |
+----------+--------+----------------------+-------------+---------+---------+
| beegfs   | dir    | /mnt/beegfs/lxd-pool |             | 0       | CREATED |
+----------+--------+----------------------+-------------+---------+---------+
| datapool | dir    | /mnt/beegfs/lxd      |             | 0       | CREATED |
+----------+--------+----------------------+-------------+---------+---------+

$ sudo lxc launch ubuntu:22.04 --vm --storage beegfs
Creating the instance
Instance name is: natural-locust          
Starting natural-locust

$ sudo lxc list
+----------------+---------+------+------+-----------------+-----------+
|      NAME      |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+----------------+---------+------+------+-----------------+-----------+
| natural-locust | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+----------------+---------+------+------+-----------------+-----------+

$ sudo lxc shell natural-locust
root@natural-locust:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:20:e0:72 brd ff:ff:ff:ff:ff:ff
    inet 10.63.129.177/24 metric 100 brd 10.63.129.255 scope global dynamic enp5s0
       valid_lft 3600sec preferred_lft 3600sec
    inet6 fe80::216:3eff:fe20:e072/64 scope link tentative 
       valid_lft forever preferred_lft forever
root@natural-locust:~# ping -c 1 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=6.42 ms

--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.421/6.421/6.421/0.000 ms

Means you’ve likely not got a root disk device in your profile.
The reason it worked with the new pool is that you provided the -s flag which adds an instance level root disk device to the instance, rather than using the (missing) one from the profile.
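
If you want launching without the flag to work again, you can inspect the profile and add a root disk device back to it (the pool name is whichever pool should be the default, e.g. your datapool):

$ lxc profile show default
$ lxc profile device add default root disk path=/ pool=datapool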

Yeah, I agree about the second error; I think I incorrectly assumed the pool name. But I don't think my first attempt had that problem (it was a very different error).