Incus Cluster Rebooted and Incus VMs Offline

I rebooted an Incus cluster and all three nodes are up and running. The two VMs I had are now stopped. When I attempt to start either one, I get:

scott@vmsfog-incus:~$ incus start Desktop
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited fd=3 fd=4 -- /opt/incus/bin/qemu-system-x86_64 -S -name Desktop -uuid 620bee36-3056-449c-a425-e57da466e0f6 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/Desktop/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/Desktop/qemu.spice -pidfile /run/incus/Desktop/qemu.pid -D /var/log/incus/Desktop/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus: /opt/incus/bin/qemu-system-x86_64: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
: exit status 127

I have rebooted this incus cluster before and never seen this issue. Any ideas?

It looks like a shared library is missing.
Perhaps a distro package was removed? Do you have the package libaio1 installed?
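One quick way to check is something like:

dpkg -l | grep libaio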

No, that’s not one of the libraries I have installed. I created a new VM and it works just fine. The two existing VMs won’t start and produce the errors listed in the original post when I try to start them. All I can conclude is that the cluster state might have had an issue that corrupted the two existing VMs.

Yes, it’s installed.

scott@vmsfog-incus:~$ sudo apt search  libaio
Sorting... Done
Full Text Search... Done
libaio-dev/noble 0.3.113-6build1 amd64
  Linux kernel AIO access library - development files

libaio1t64/noble,now 0.3.113-6build1 amd64 [installed,automatic]
  Linux kernel AIO access library - shared library

Can you tell us what OS this is running on and what version of Incus this is?

@stgraber This was an Ubuntu 24.04 guest VM running on an Incus 6.1 server in a three-node cluster, with all three hosts running Ubuntu 24.04. It appears that my host rebooted and lost nested virtualization support. The three Incus cluster hosts were themselves VMs. When their host rebooted, they lost:

insmod /lib/modules/KVM/kvm-amd.ko nested=1

I stopped the VMs and performed the “insmod” above.
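To confirm the flag actually took effect (these hosts use the AMD KVM module), the module parameter can be checked with:

cat /sys/module/kvm_amd/parameters/nested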

When the Incus servers restarted, the Incus daemon of course started OK and containers were fine. The Incus VMs did not start and were apparently “corrupted”. I was able to export them, but every attempt at importing them again results in:

scott@vmscloud-incus:~$ incus import Desktop-2404-20240529211716.tar.gz Desktop-2404
Error: Failed importing backup: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: Failed getting QEMU version

However:

incus launch images:ubuntu/24.04/desktop --vm

works just fine. So I have verified that nested virtualization is operational on the host. However, importing an Incus VM, as you can see above, is not working.

I find the QEMU version error interesting because I can create a new Incus VM as mentioned above.

So, interestingly:

incus launch images:ubuntu/24.04/desktop --vm -c boot.autostart=true -c limits.cpu=2 -c limits.memory=4GiB

is now failing with…

Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: Failed getting QEMU version

I have qemu-system installed. Also:

scott@vmscloud-incus:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Despite having all QEMU dependencies installed, I still get the errors about virtual-machine not supported. This seems very much like:

https://github.com/lxc/incus/issues/894

So despite a fully functional QEMU 9.0, I get:

Failed getting QEMU version
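One way to see what Incus itself is running into (assuming the standard Zabbly layout under /opt/incus) is to invoke the bundled binary directly:

LD_LIBRARY_PATH=/opt/incus/lib/ /opt/incus/bin/qemu-system-x86_64 --version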

You said it’s a three-node cluster, so is it only one of the three that’s having the issue?
I wonder if that’s why new stuff works: it’s just being placed on another node.

All three nodes of the cluster were created with the same commands. At this point, one node reports that it can access the QEMU components. The other two act as though there is no QEMU support loaded at all. Regardless of which of the three nodes I try to create an Incus VM on, they all fail with “Failed getting QEMU version”. I have shut down and rebooted the cluster one node at a time, and there is still no difference. I am baffled how this happened, because I had no issues with this cluster until a day ago. Regular Incus containers run perfectly. It’s only Incus VMs that are affected, and my analysis points to Incus not recognizing that the QEMU components are accessible. All three nodes of the cluster believe that virtualization is properly configured.
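To rule out a member being down or in an error state, the status of each member can be checked with:

incus cluster list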


What’s in /var/log/incus/incusd.log?

I don’t understand this error since the component is loaded.

root@vmscloud-incus:/var/log/incus# ls
Desktop       dnsmasq.incusbr0.log  incusd.log.1     incusd.log.3.gz  incusd.log.5.gz  incusd.log.7.gz
Desktop-2404  incusd.log            incusd.log.2.gz  incusd.log.4.gz  incusd.log.6.gz
root@vmscloud-incus:/var/log/incus# cd Desktop-2404
root@vmscloud-incus:/var/log/incus/Desktop-2404# ls
qemu.early.log  qemu.log.old
root@vmscloud-incus:/var/log/incus/Desktop-2404# cat qemu.early.log
/opt/incus/bin/qemu-system-x86_64: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
root@vmscloud-incus:/var/log/incus/Desktop-2404# 

Does this shed any more light?

root@vmscloud-incus:/var/log/incus/Desktop-2404# ls -al
total 12
drwx------ 2 root root 4096 May 29 17:35 .
drwx------ 4 root root 4096 May 30 00:00 ..
-rw-r--r-- 1 root root  144 May 29 21:36 qemu.early.log
-rw-r--r-- 1 root root    0 May 29 17:18 qemu.log.old
root@vmscloud-incus:/var/log/incus/Desktop-2404# cat qemu.early.log
/opt/incus/bin/qemu-system-x86_64: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
root@vmscloud-incus:/var/log/incus/Desktop-2404# cd /opt/incus/bin/qemu-system-x86_64
bash: cd: /opt/incus/bin/qemu-system-x86_64: Not a directory
root@vmscloud-incus:/var/log/incus/Desktop-2404# cd /opt/incus/bin
root@vmscloud-incus:/opt/incus/bin# ls -al
total 247628
drwxr-xr-x 2 root root      4096 May 29 16:58 .
drwxr-xr-x 8 root root      4096 May  6 18:36 ..
-rwxr-xr-x 1 root root   1624320 May 28 22:30 criu
-rwxr-xr-x 1 root root  15322128 May 28 22:30 incus
-rwxr-xr-x 1 root root  10783088 May 28 22:30 incus-benchmark
-rwxr-xr-x 1 root root  45699856 May 28 22:30 incusd
-rwxr-xr-x 1 root root  10700832 May 28 22:30 incus-user
-rwxr-xr-x 1 root root     35248 May 28 22:30 lxcfs
-rwxr-xr-x 1 root root  10774992 May 28 22:30 lxd-to-incus
-rwxr-xr-x 1 root root  26500752 May 28 22:30 mc
-rwxr-xr-x 1 root root 102265232 May 28 22:30 minio
-rwxr-xr-x 1 root root     55816 May 28 22:30 nvidia-container-cli
-rwxr-xr-x 1 root root   2053112 May 28 22:30 qemu-img
-rwxr-xr-x 1 root root  24876224 May 28 22:30 qemu-system-x86_64
-rwxr-xr-x 1 root root     40600 May 28 22:30 swtpm
-rwxr-xr-x 1 root root    605160 May 28 22:30 virtfs-proxy-helper
-rwxr-xr-x 1 root root   2197976 May 28 22:30 virtiofsd
root@vmscloud-incus:/opt/incus/bin# 

Can you do: LD_LIBRARY_PATH=/opt/incus/lib/ ldd /opt/incus/bin/qemu-system-x86_64
And also, for good measure: which qemu-system-x86_64

Ok, here ya go:

root@vmscloud-incus:/home/scott#  LD_LIBRARY_PATH=/opt/incus/lib/ ldd /opt/incus/bin/qemu-system-x86_64
	linux-vdso.so.1 (0x00007ffc5f9d1000)
	libpixman-1.so.0 => /lib/x86_64-linux-gnu/libpixman-1.so.0 (0x0000706552cbf000)
	libspice-server.so.1 => /lib/x86_64-linux-gnu/libspice-server.so.1 (0x00007065512e4000)
	libgnutls.so.30 => /lib/x86_64-linux-gnu/libgnutls.so.30 (0x00007065510ea000)
	libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x0000706552c8c000)
	libusb-1.0.so.0 => /lib/x86_64-linux-gnu/libusb-1.0.so.0 (0x0000706552c6e000)
	libseccomp.so.2 => /lib/x86_64-linux-gnu/libseccomp.so.2 (0x0000706552c4c000)
	libnuma.so.1 => /lib/x86_64-linux-gnu/libnuma.so.1 (0x0000706552c3e000)
	libgio-2.0.so.0 => /lib/x86_64-linux-gnu/libgio-2.0.so.0 (0x0000706550f1a000)
	libgobject-2.0.so.0 => /lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x0000706550eb7000)
	libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x0000706550d6e000)
	libusbredirparser.so.1 => /lib/x86_64-linux-gnu/libusbredirparser.so.1 (0x0000706552c34000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x0000706552c16000)
	liburing.so.2 => /opt/incus/lib/liburing.so.2 (0x0000706552c0e000)
	libgmodule-2.0.so.0 => /lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x0000706552c07000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x0000706550c85000)
	libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x0000706552bf6000)
	libfuse3.so.3 => /lib/x86_64-linux-gnu/libfuse3.so.3 (0x0000706550c45000)
	libaio.so.1 => not found
	librbd.so.1 => /lib/x86_64-linux-gnu/librbd.so.1 (0x0000706550200000)
	librados.so.2 => /lib/x86_64-linux-gnu/librados.so.2 (0x0000706550ab4000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x000070654fe00000)
	/lib64/ld-linux-x86-64.so.2 (0x0000706552d79000)
	libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x0000706550156000)
	libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x000070654f800000)
	libopus.so.0 => /lib/x86_64-linux-gnu/libopus.so.0 (0x00007065500f7000)
	libjpeg.so.8 => /lib/x86_64-linux-gnu/libjpeg.so.8 (0x0000706550074000)
	libgstreamer-1.0.so.0 => /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 (0x000070654f6ad000)
	libgstapp-1.0.so.0 => /lib/x86_64-linux-gnu/libgstapp-1.0.so.0 (0x0000706552bdc000)
	liborc-0.4.so.0 => /lib/x86_64-linux-gnu/liborc-0.4.so.0 (0x000070654fd4e000)
	liblz4.so.1 => /lib/x86_64-linux-gnu/liblz4.so.1 (0x0000706550a90000)
	libsasl2.so.2 => /lib/x86_64-linux-gnu/libsasl2.so.2 (0x000070655005a000)
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x000070654f400000)
	libp11-kit.so.0 => /lib/x86_64-linux-gnu/libp11-kit.so.0 (0x000070654f25c000)
	libidn2.so.0 => /lib/x86_64-linux-gnu/libidn2.so.0 (0x0000706550038000)
	libunistring.so.5 => /lib/x86_64-linux-gnu/libunistring.so.5 (0x000070654f0af000)
	libtasn1.so.6 => /lib/x86_64-linux-gnu/libtasn1.so.6 (0x0000706550022000)
	libnettle.so.8 => /lib/x86_64-linux-gnu/libnettle.so.8 (0x000070654f05a000)
	libhogweed.so.6 => /lib/x86_64-linux-gnu/libhogweed.so.6 (0x000070654f012000)
	libgmp.so.10 => /lib/x86_64-linux-gnu/libgmp.so.10 (0x000070654ef8e000)
	libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x0000706550015000)
	libmount.so.1 => /lib/x86_64-linux-gnu/libmount.so.1 (0x000070654ef41000)
	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x000070654fd21000)
	libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x000070654fd15000)
	libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x000070654eea7000)
	libaudit.so.1 => /lib/x86_64-linux-gnu/libaudit.so.1 (0x000070654f67f000)
	libcryptsetup.so.12 => /lib/x86_64-linux-gnu/libcryptsetup.so.12 (0x000070654ee18000)
	libceph-common.so.2 => /usr/lib/x86_64-linux-gnu/ceph/libceph-common.so.2 (0x000070654e200000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x000070654edeb000)
	libunwind.so.8 => /lib/x86_64-linux-gnu/libunwind.so.8 (0x000070654edd0000)
	libdw.so.1 => /lib/x86_64-linux-gnu/libdw.so.1 (0x000070654e14c000)
	libgstbase-1.0.so.0 => /lib/x86_64-linux-gnu/libgstbase-1.0.so.0 (0x000070654e0c7000)
	libblkid.so.1 => /lib/x86_64-linux-gnu/libblkid.so.1 (0x000070654ed95000)
	libcap-ng.so.0 => /lib/x86_64-linux-gnu/libcap-ng.so.0 (0x000070654ed8d000)
	libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x000070654ed83000)
	libdevmapper.so.1.02.1 => /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1 (0x000070654e05a000)
	libargon2.so.1 => /lib/x86_64-linux-gnu/libargon2.so.1 (0x000070654ed7a000)
	libjson-c.so.5 => /lib/x86_64-linux-gnu/libjson-c.so.5 (0x000070654e046000)
	libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x000070654e033000)
	libboost_thread.so.1.83.0 => /lib/x86_64-linux-gnu/libboost_thread.so.1.83.0 (0x000070654e012000)
	libboost_iostreams.so.1.83.0 => /lib/x86_64-linux-gnu/libboost_iostreams.so.1.83.0 (0x000070654dffb000)
	libibverbs.so.1 => /lib/x86_64-linux-gnu/libibverbs.so.1 (0x000070654dfd8000)
	librdmacm.so.1 => /lib/x86_64-linux-gnu/librdmacm.so.1 (0x000070654dfb8000)
	liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x000070654df86000)
	libelf.so.1 => /lib/x86_64-linux-gnu/libelf.so.1 (0x000070654df68000)
	libzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x000070654deae000)
	libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x000070654de9a000)
	libnl-route-3.so.200 => /lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x000070654de0b000)
	libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x000070654dde9000)

Also…

root@vmscloud-incus:/home/scott# which qemu-system-x86_64
/usr/bin/qemu-system-x86_64

Right, so that confirms the issue, now why is that missing on your system…

Here I’m getting:

root@v1:~# LD_LIBRARY_PATH=/opt/incus/lib/ ldd /opt/incus/bin/qemu-system-x86_64  | grep libaio
	libaio.so.1t64 => /lib/x86_64-linux-gnu/libaio.so.1t64 (0x000073f212a20000)
root@v1:~# 

That makes me very suspicious of the version of the Incus package that you have installed on your system now.

Can you show dpkg -l | grep incus?

Your output makes it likely that you have the Incus build for Ubuntu 22.04 installed on that Ubuntu 24.04 system.

If you did an upgrade it’s quite possible that the Ubuntu upgrade logic didn’t properly re-enable the Incus repository in /etc/apt/sources.list.d and/or didn’t change the file to refer to noble rather than jammy.
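You can check what the file currently points at with something like the following (the file name may vary depending on how the repository was added):

grep -E 'Enabled|Suites' /etc/apt/sources.list.d/zabbly-incus-stable.sources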

You are ABSOLUTELY Correct!
I did upgrade Ubuntu.

zabbly-incus-stable.sources

was at:

Enabled: no
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: jammy
Components: main
Architectures: amd64
Signed-By: /etc/apt/keyrings/zabbly.asc

After enabling it and setting the suite to noble, it upgraded and QEMU works again. I wish I had not deleted my Incus VMs thinking they were corrupted.
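For anyone else hitting this, the fix was just changing two lines in that file (Enabled: yes and Suites: noble) and then refreshing packages with something like:

sudo apt update && sudo apt upgrade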

So, it does appear that the repositories were not upgraded. After fixing that, the upgrades listed were:

incus-base/noble 1:6.1-202405282242-ubuntu24.04 amd64 [upgradable from: 1:6.1-202405282230-ubuntu22.04]
incus-client/noble 1:6.1-202405282242-ubuntu24.04 amd64 [upgradable from: 1:6.1-202405282230-ubuntu22.04]
incus/noble 1:6.1-202405282242-ubuntu24.04 amd64 [upgradable from: 1:6.1-202405282230-ubuntu22.04]
vim-common/noble-updates 2:9.1.0016-1ubuntu7.1 all [upgradable from: 2:9.1.0016-1ubuntu7]
vim-runtime/noble-updates 2:9.1.0016-1ubuntu7.1 all [upgradable from: 2:9.1.0016-1ubuntu7]
vim-tiny/noble-updates 2:9.1.0016-1ubuntu7.1 amd64 [upgradable from: 2:9.1.0016-1ubuntu7]
vim/noble-updates 2:9.1.0016-1ubuntu7.1 amd64 [upgradable from: 2:9.1.0016-1ubuntu7]
xxd/noble-updates 2:9.1.0016-1ubuntu7.1 amd64 [upgradable from: 2:9.1.0016-1ubuntu7]

Now, when I create the Incus VM:

incus launch images:ubuntu/24.04/desktop --vm Desktop-2404 -c boot.autostart=true -c limits.cpu=2 -c limits.memory=4GiB

The Incus VM is created and started. FYI, I had to “incus stop -f” my other container and restart it because it was in an error state.

Now all is well. Thanks.

scott@vmscloud-incus:~$ incus list
+--------------+---------+-----------------------+------+-----------------+-----------+----------------+
|     NAME     |  STATE  |         IPV4          | IPV6 |      TYPE       | SNAPSHOTS |    LOCATION    |
+--------------+---------+-----------------------+------+-----------------+-----------+----------------+
| Desktop-2404 | RUNNING | 172.16.1.112 (enp5s0) |      | VIRTUAL-MACHINE | 0         | vmscloud-incus |
+--------------+---------+-----------------------+------+-----------------+-----------+----------------+
| UptimeKuma   | RUNNING | 172.16.1.173 (eth0)   |      | CONTAINER       | 0         | vmsfog-incus   |
+--------------+---------+-----------------------+------+-----------------+-----------+----------------+

Did we expect that the Ubuntu upgrade would have re-enabled the repository? Did I discover something here?

Unfortunately, no. Ubuntu usually just disables all the repositories on upgrade and lets you deal with them by hand afterwards…

I knew that with regard to Ubuntu Desktop. I guess I assumed that Server wouldn’t do that, for exactly this kind of reason. Probably something I should point out to the community. Thanks for all your assistance.

@Scott_T I’m using Ubuntu 24.04 and I faced the same issue. The solution is to create a symlink from libaio.so.1 to libaio.so.1t64:

ln -s /usr/lib/x86_64-linux-gnu/libaio.so.1t64 /usr/lib/x86_64-linux-gnu/libaio.so.1

It looks like Ubuntu 24.04 LTS ships libaio1t64 in its repositories instead of libaio1.
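After creating the symlink, the ldd check from earlier should resolve the library:

LD_LIBRARY_PATH=/opt/incus/lib/ ldd /opt/incus/bin/qemu-system-x86_64 | grep libaio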


Hi,
I confirm I just faced the issue yesterday after an upgrade of my server to Ubuntu 24.04.
libaio1 was replaced by libaio1t64.
I couldn’t boot up my desktop VM anymore.
I had to do the ln -s thing.
cf. Ubuntu 24.04 PHP 8.3 OCI8 and libaio.so.1 - Ask Ubuntu
Maybe the configure step would get it right on a fresh 24.04 install, but going through an upgrade from 22.04 to 24.04 breaks it.