Starting a virtual machine puts it into ERROR status

  • environment
    LXD host OS (LXD cluster): Ubuntu 22.04 aarch64 (all three nodes use this OS and CPU architecture)
    lxc version: 5.5
    lxd version: 5.5
    storage driver: ceph 17.2.0

  • my problem
      I created a virtual machine from an image (ubuntu/focal/cloud).
    When I start it, it goes to ERROR status, and after a few seconds the status becomes RUNNING automatically. It took about 1 minute to start the virtual machine.
      Then I downloaded a new Ubuntu image without the cloud tag and created a virtual machine from it. When I start that one, it goes to RUNNING directly, in about 15 seconds, and the ERROR status never appears.
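For what it's worth, the difference can be quantified with a small timing wrapper; a rough sketch (the lxc launch lines and the vm-cloud / vm-plain instance names are placeholders for your own setup, kept as comments):

```shell
#!/bin/sh
# Minimal timing helper: runs a command and reports its whole-second duration.
elapsed() {
    _start=$(date +%s)
    "$@"
    echo "elapsed: $(( $(date +%s) - _start ))s"
}

# Intended use on the cluster (placeholder names, commented out so the
# sketch stands alone):
#   elapsed lxc launch images:ubuntu/focal/cloud vm-cloud --vm
#   elapsed lxc launch images:ubuntu/focal vm-plain --vm

elapsed sleep 1   # demo of the wrapper itself
```

Note that this measures until the lxc command returns, not until the guest is fully booted, so it is only a rough comparison.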

Here is my lxc info --show-log result:

root@lxdserver1:~# lxc info --show-log rapid-snail
Name: rapid-snail
Status: ERROR
Type: virtual-machine
Architecture: aarch64
Location: lxdserver2
PID: 36977
Created: 2023/01/11 09:23 UTC
Last Used: 2023/01/11 09:24 UTC

Resources:
  Processes: 0
  Disk usage:
    root: 80.00MiB

Log:

warning: tap: open vhost char device failed: Permission denied
warning: tap: open vhost char device failed: Permission denied
qemu-system-aarch64: warning: 9p: degraded performance: a reasonable high msize should be chosen on client/guest side (chosen msize is <= 8192). See https://wiki.qemu.org/Documentation/9psetup#msize for details.

Here is my dmesg result. I found a DENIED entry, but I don't know if it is the cause of the ERROR:

[166457.937437] device tapf83ab24b entered promiscuous mode
[166457.961209] audit: type=1400 audit(1673429326.360:144): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-rapid-snail_</var/snap/lxd/common/lxd>" pid=38280 comm="apparmor_parser"
[166458.010720] audit: type=1400 audit(1673429326.408:145): apparmor="DENIED" operation="open" profile="lxd-rapid-snail_</var/snap/lxd/common/lxd>" name="/proc/38281/cpuset" pid=38281 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[166458.751950] lxdbr0: port 1(tapf83ab24b) entered blocking state
[166458.751963] lxdbr0: port 1(tapf83ab24b) entered forwarding state
[166464.826112] device tapc3837c3d left promiscuous mode

I still want to use the first image (the one with the cloud tag). Can anyone tell me how to avoid the ERROR status?

I don’t think my image is damaged, because I created a new Ceph storage pool and
used the same Ubuntu image with the cloud tag, and it still goes to ERROR.

Is it about the lxd-agent? Or cloud-init? Or something else?
What is the difference between my two images?
Does the image with the cloud tag execute more steps than the image without it?

So are you saying both VMs start OK, it's just that one takes longer and for a while shows ERROR status?
Are you able to reproduce this on LXD 5.10, as LXD 5.5 isn't supported anymore?

Yeah, that’s exactly what I mean.
I just upgraded to LXD 5.10 and started the VM again.
This time, my VM stopped in ERROR status instead of reaching RUNNING.

That’s what I found via lxc start xxxx --console:

BdsDxe: loading Boot0001 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1)
BdsDxe: starting Boot0001 "UEFI QEMU QEMU HARDDISK " from PciRoot(0x0)/Pci(0x1,0x1)/Pci(0x0,0x0)/Scsi(0x0,0x1)
[   14.117127] reboot: Restarting system
Error: write unix:@->/var/snap/lxd/common/lxd/logs/innocent-corgi/qemu.console: file already closed
root@lxdserver1:~#

It seems that some kinds of images have this problem.
The reboot: Restarting system step takes a long time.

And that’s what I found via dmesg | grep xxxx:

root@lxdserver3:/var/snap/lxd/common/lxd/logs/innocent-corgi# dmesg | grep inno
[665046.649556] ext4 filesystem being mounted at /var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/innocent-corgi supports timestamps until 2038 (0x7fffffff)
[665050.047864] ext4 filesystem being mounted at /var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/innocent-corgi supports timestamps until 2038 (0x7fffffff)
[665060.938586] ext4 filesystem being mounted at /var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/innocent-corgi supports timestamps until 2038 (0x7fffffff)
[665061.683178] audit: type=1400 audit(1673928352.473:194): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-innocent-corgi_</var/snap/lxd/common/lxd>" pid=89196 comm="apparmor_parser"
[665061.708984] audit: type=1400 audit(1673928352.497:195): apparmor="DENIED" operation="open" profile="lxd-innocent-corgi_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/etc/resolv.conf" pid=89197 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[665282.352021] audit: type=1400 audit(1673928573.142:196): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-innocent-corgi_</var/snap/lxd/common/lxd>" pid=89563 comm="apparmor_parser"
[665282.722075] ext4 filesystem being mounted at /var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/innocent-corgi supports timestamps until 2038 (0x7fffffff)
[665282.842129] audit: type=1400 audit(1673928573.630:197): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-innocent-corgi_</var/snap/lxd/common/lxd>" pid=89673 comm="apparmor_parser"
[665282.867592] audit: type=1400 audit(1673928573.658:198): apparmor="DENIED" operation="open" profile="lxd-innocent-corgi_</var/snap/lxd/common/lxd>" name="/var/lib/snapd/hostfs/etc/resolv.conf" pid=89674 comm="lxd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

Here is my lxc info --show-log:

root@lxdserver3:/var/snap/lxd/common/lxd/logs/innocent-corgi# lxc info --show-log innocent-corgi
Name: innocent-corgi
Status: RUNNING
Type: virtual-machine
Architecture: aarch64
Location: lxdserver3
PID: 89691
Created: 2023/01/17 04:05 UTC
Last Used: 2023/01/17 04:09 UTC

Resources:
  Processes: 17
  Disk usage:
    root: 1.52GiB
  CPU usage:
    CPU usage (in seconds): 52
  Memory usage:
    Memory (current): 615.50MiB
    Memory (peak): 564.59MiB
  Network usage:
    enp5s0:
      Type: broadcast
      State: UP
      Host interface: tap7d68d3c4
      MAC address: 00:16:3e:85:52:42
      MTU: 1500
      Bytes received: 4.03kB
      Bytes sent: 4.32kB
      Packets received: 38
      Packets sent: 49
      IP addresses:
        inet:  10.101.94.67/24 (global)
        inet6: fd42:8f4f:7c3c:444f:216:3eff:fe85:5242/64 (global)
        inet6: fe80::216:3eff:fe85:5242/64 (link)
    lo:
      Type: loopback
      State: UP
      MTU: 65536
      Bytes received: 6.37kB
      Bytes sent: 6.37kB
      Packets received: 84
      Packets sent: 84
      IP addresses:
        inet:  127.0.0.1/8 (local)
        inet6: ::1/128 (local)

Log:

warning: tap: open vhost char device failed: Permission denied
warning: tap: open vhost char device failed: Permission denied
qemu-system-aarch64: warning: 9p: degraded performance: a reasonable high msize should be chosen on client/guest side (chosen msize is <= 8192). See https://wiki.qemu.org/Documentation/9psetup#msize for details.

I’m not sure if this has anything to do with the DENIED entries.
Could you help me check it please? Thank you so much.

The AppArmor denial isn’t a problem and won’t prevent boot.

I’m a bit more confused now, are you saying that on LXD 5.10 the VM starts to boot and then stops entirely and never fully starts up?

Do you see this using images:ubuntu/jammy or images:ubuntu/jammy/cloud?


I tried the following 4 images:(aarch64)

  • images:ubuntu/jammy (OK)
  • images:ubuntu/jammy/cloud (OK)
  • images:ubuntu/focal (OK)
  • images:ubuntu/focal/cloud (ERROR)

When using images:ubuntu/focal/cloud, the VM's status becomes ERROR on lxc start. A few minutes later, the VM starts up successfully and the status is RUNNING.

The problem is that it takes too long to start the VM, and its status is ERROR for a while.
While the ERROR status is showing, other requests to LXD (such as lxc ls) hang.

I have always used this image (images:ubuntu/focal/cloud); it seems that I need to change to images:ubuntu/jammy/cloud.

This is the image I have the problem with:

root@lxdserver1:~# lxc image list "images:d1535"
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
|            ALIAS            | FINGERPRINT  | PUBLIC |             DESCRIPTION             | ARCHITECTURE |      TYPE       |   SIZE   |          UPLOAD DATE          |
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
| ubuntu/focal/cloud (3 more) | d15350bf6ae8 | yes    | Ubuntu focal arm64 (20230115_07:43) | aarch64      | VIRTUAL-MACHINE | 251.29MB | Jan 15, 2023 at 12:00am (UTC) |
+-----------------------------+--------------+--------+-------------------------------------+--------------+-----------------+----------+-------------------------------+
root@lxdserver1:~#

Here is my lxc image list result:

root@lxdserver1:~# lxc image list
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+
|                ALIAS                 | FINGERPRINT  | PUBLIC |                         DESCRIPTION                          | ARCHITECTURE |      TYPE       |   SIZE    |          UPLOAD DATE          |
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| Ubuntu_Focal                         | 541ac7fec8c0 | no     | Ubuntu focal arm64 (20230115_07:44)                          | aarch64      | VIRTUAL-MACHINE | 234.52MB  | Jan 17, 2023 at 11:42am (UTC) |
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| Ubuntu_Focal_Cloud                   | d15350bf6ae8 | no     | Ubuntu focal arm64 (20230115_07:43)                          | aarch64      | VIRTUAL-MACHINE | 251.29MB  | Jan 17, 2023 at 12:04pm (UTC) |
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| Ubuntu_Jammy                         | 4ed838f8a5c3 | no     | Ubuntu jammy arm64 (20230116_08:50)                          | aarch64      | VIRTUAL-MACHINE | 254.91MB  | Jan 17, 2023 at 10:28am (UTC) |
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| Ubuntu_Jammy_Cloud                   | a73dc51e9ea5 | no     | Ubuntu jammy arm64 (20230115_07:43)                          | aarch64      | VIRTUAL-MACHINE | 280.17MB  | Jan 17, 2023 at 11:08am (UTC) |
+--------------------------------------+--------------+--------+--------------------------------------------------------------+--------------+-----------------+-----------+-------------------------------+

Can you run lxc monitor --type=logging --pretty in a separate window and then try and start the problem VM again, as it will be interesting to see if that highlights where the delay is occurring.


The lxc monitor --type=logging --pretty result:


DEBUG  [2023-01-18T00:46:20Z] API Request
        {
                "action": "start",
                "timeout": 0,
                "force": false,
                "stateful": false
        }  ip=@ method=PUT protocol=unix url=/1.0/instances/intimate-jackal/state username=root
DEBUG  [2023-01-18T00:46:20Z] New operation                                 class=task description="Starting instance" operation=46c58054-dfe1-4f7a-82a3-0962259fd261 project=default
DEBUG  [2023-01-18T00:46:20Z] Instance operation lock created               action=start instance=intimate-jackal project=default reusable=false
DEBUG  [2023-01-18T00:46:20Z] Started operation                             class=task description="Starting instance" operation=46c58054-dfe1-4f7a-82a3-0962259fd261 project=default
DEBUG  [2023-01-18T00:46:20Z] Start started                                 instance=intimate-jackal instanceType=virtual-machine project=default stateful=false
DEBUG  [2023-01-18T00:46:20Z] Handling API request                          ip=@ method=GET protocol=unix url=/1.0/operations/46c58054-dfe1-4f7a-82a3-0962259fd261 username=root
DEBUG  [2023-01-18T00:46:20Z] MountInstance started                         instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:20Z] Matched trusted cert                          fingerprint=3a3ec80297877648d889423e91637deace656e8540ceeb81f514b145ade2177b subject="CN=root@lxdserver1,O=linuxcontainers.org"
DEBUG  [2023-01-18T00:46:20Z] Handling API request                          fingerprint=3a3ec80297877648d889423e91637deace656e8540ceeb81f514b145ade2177b ip="172.31.30.129:52370" method=GET protocol=cluster url=/1.0/operations/46c58054-dfe1-4f7a-82a3-0962259fd261
DEBUG  [2023-01-18T00:46:20Z] Activated RBD volume                          dev=/dev/rbd0 driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal.block
DEBUG  [2023-01-18T00:46:21Z] Activated RBD volume                          dev=/dev/rbd1 driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal
DEBUG  [2023-01-18T00:46:21Z] Mounted RBD volume                            dev=/dev/rbd1 driver=ceph options=discard path=/var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/intimate-jackal pool=lxd-ceph-storagepool
DEBUG  [2023-01-18T00:46:21Z] MountInstance finished                        instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] Installing lxd-agent                          installPath=/var/snap/lxd/common/lxd/virtual-machines/intimate-jackal/config/lxd-agent instance=intimate-jackal instanceType=virtual-machine project=default srcPath=/snap/lxd/24308/bin/lxd-agent
DEBUG  [2023-01-18T00:46:21Z] Generating NVRAM                              instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:46:21Z] MountInstance started                         instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] MountInstance finished                        instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] UnmountInstance started                       instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] Skipping unmount as in use                    driver=ceph pool=lxd-ceph-storagepool refCount=1 volName=intimate-jackal
DEBUG  [2023-01-18T00:46:21Z] UnmountInstance finished                      instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] Starting device                               device=eth0 instance=intimate-jackal instanceType=virtual-machine project=default type=nic
DEBUG  [2023-01-18T00:46:21Z] Scheduler: network: tap5a52ae31 has been added: updating network priorities
DEBUG  [2023-01-18T00:46:21Z] Starting device                               device=root instance=intimate-jackal instanceType=virtual-machine project=default type=disk
WARNING[2023-01-18T00:46:21Z] Unable to use virtio-fs for config drive, using 9p as a fallback  err="Architecture unsupported" instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:46:21Z] UpdateInstanceBackupFile started              instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:21Z] Skipping unmount as in use                    driver=ceph pool=lxd-ceph-storagepool refCount=1 volName=intimate-jackal
DEBUG  [2023-01-18T00:46:21Z] UpdateInstanceBackupFile finished             instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:46:22Z] Instance operation lock finished              action=start err="<nil>" instance=intimate-jackal project=default reusable=false
DEBUG  [2023-01-18T00:46:22Z] Start finished                                instance=intimate-jackal instanceType=virtual-machine project=default stateful=false
DEBUG  [2023-01-18T00:46:22Z] Success for operation                         class=task description="Starting instance" operation=46c58054-dfe1-4f7a-82a3-0962259fd261 project=default
DEBUG  [2023-01-18T00:46:22Z] Handling API request                          ip=@ method=POST protocol=unix url=/1.0/instances/intimate-jackal/console username=root
DEBUG  [2023-01-18T00:46:22Z] API Request
        {
                "width": 120,
                "height": 30,
                "type": "console"
        }  ip=@ method=POST protocol=unix url=/1.0/instances/intimate-jackal/console username=root
DEBUG  [2023-01-18T00:46:22Z] Connecting to a remote LXD over HTTPS
DEBUG  [2023-01-18T00:46:22Z]
        {
                "width": 120,
                "height": 30,
                "type": "console"
        }
DEBUG  [2023-01-18T00:46:22Z] Sending request to LXD                        etag= method=POST url="https://172.31.30.130:8443/1.0/instances/intimate-jackal/console"
DEBUG  [2023-01-18T00:46:22Z] Matched trusted cert                          fingerprint=3a3ec80297877648d889423e91637deace656e8540ceeb81f514b145ade2177b subject="CN=root@lxdserver1,O=linuxcontainers.org"
DEBUG  [2023-01-18T00:46:22Z] Handling API request                          fingerprint=3a3ec80297877648d889423e91637deace656e8540ceeb81f514b145ade2177b ip="172.31.30.129:52382" method=POST protocol=cluster url=/1.0/instances/intimate-jackal/console
DEBUG  [2023-01-18T00:46:22Z] New operation                                 class=websocket description="Showing console" operation=7fe00d97-158d-413d-9ecd-aaf9c9616dcd project=default
DEBUG  [2023-01-18T00:46:22Z] Started operation                             class=websocket description="Showing console" operation=7fe00d97-158d-413d-9ecd-aaf9c9616dcd project=default
DEBUG  [2023-01-18T00:46:53Z] Instance operation lock created               action=restart instance=intimate-jackal project=default reusable=false
DEBUG  [2023-01-18T00:46:53Z] Instance initiated stop                       action=reboot instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:46:53Z] Instance stopped                              instance=intimate-jackal instanceType=virtual-machine project=default reason=guest-reset target=reboot
DEBUG  [2023-01-18T00:46:53Z] onStop hook started                           instance=intimate-jackal instanceType=virtual-machine project=default target=reboot
DEBUG  [2023-01-18T00:46:53Z] Waiting for VM process to finish              instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:48:03Z] Finished mirroring console to websocket
DEBUG  [2023-01-18T00:48:03Z] Finished mirroring websocket to console
DEBUG  [2023-01-18T00:48:03Z] DefaultWriter got error getting next reader   err="websocket: close 1006 (abnormal closure): unexpected EOF"
DEBUG  [2023-01-18T00:48:03Z] Got error getting next reader: read tcp 172.31.30.130:8443->172.31.30.129:52398: use of closed network connection
DEBUG  [2023-01-18T00:48:03Z] Console control websocket finished
DEBUG  [2023-01-18T00:48:03Z] Console websocket finished
DEBUG  [2023-01-18T00:48:03Z] Failure for operation                         class=websocket description="Showing console" err="write unix:@->/var/snap/lxd/common/lxd/logs/intimate-jackal/qemu.console: file already closed" operation=7fe00d97-158d-413d-9ecd-aaf9c9616dcd project=default
DEBUG  [2023-01-18T00:48:03Z] Event listener server handler stopped         listener=8b063036-133a-4b3a-bb40-b961f4d3c457 local=/var/snap/lxd/common/lxd/unix.socket remote=@
DEBUG  [2023-01-18T00:48:04Z] VM process finished                           instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:48:04Z] Stopping device                               device=root instance=intimate-jackal instanceType=virtual-machine project=default type=disk
DEBUG  [2023-01-18T00:48:04Z] Stopping device                               device=eth0 instance=intimate-jackal instanceType=virtual-machine project=default type=nic
DEBUG  [2023-01-18T00:48:04Z] UnmountInstance started                       instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:04Z] Unmounted RBD volume                          driver=ceph keepBlockDev=false path=/var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/intimate-jackal pool=lxd-ceph-storagepool volName=intimate-jackal
DEBUG  [2023-01-18T00:48:05Z] Deactivated RBD volume                        driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal
DEBUG  [2023-01-18T00:48:05Z] Deactivated RBD volume                        driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal.block
DEBUG  [2023-01-18T00:48:05Z] UnmountInstance finished                      instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:06Z] Start started                                 instance=intimate-jackal instanceType=virtual-machine project=default stateful=false
DEBUG  [2023-01-18T00:48:06Z] Instance operation lock inherited             action=restart inheritedByAction=start instance=intimate-jackal project=default reusable=false
DEBUG  [2023-01-18T00:48:06Z] MountInstance started                         instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:06Z] Activated RBD volume                          dev=/dev/rbd0 driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal.block
DEBUG  [2023-01-18T00:48:06Z] Activated RBD volume                          dev=/dev/rbd1 driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal
DEBUG  [2023-01-18T00:48:06Z] Mounted RBD volume                            dev=/dev/rbd1 driver=ceph options=discard path=/var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/intimate-jackal pool=lxd-ceph-storagepool
DEBUG  [2023-01-18T00:48:06Z] MountInstance finished                        instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:06Z] Skipping lxd-agent install as unchanged       installPath=/var/snap/lxd/common/lxd/virtual-machines/intimate-jackal/config/lxd-agent instance=intimate-jackal instanceType=virtual-machine project=default srcPath=/snap/lxd/24308/bin/lxd-agent
DEBUG  [2023-01-18T00:48:06Z] Starting device                               device=eth0 instance=intimate-jackal instanceType=virtual-machine project=default type=nic
DEBUG  [2023-01-18T00:48:06Z] Scheduler: network: tapc3ee4e50 has been added: updating network priorities
DEBUG  [2023-01-18T00:48:06Z] Starting device                               device=root instance=intimate-jackal instanceType=virtual-machine project=default type=disk
WARNING[2023-01-18T00:48:06Z] Unable to use virtio-fs for config drive, using 9p as a fallback  err="Architecture unsupported" instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:48:06Z] UpdateInstanceBackupFile started              instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:06Z] Skipping unmount as in use                    driver=ceph pool=lxd-ceph-storagepool refCount=1 volName=intimate-jackal
DEBUG  [2023-01-18T00:48:06Z] UpdateInstanceBackupFile finished             instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:07Z] Start finished                                instance=intimate-jackal instanceType=virtual-machine project=default stateful=false
DEBUG  [2023-01-18T00:48:07Z] Instance operation lock finished              action=restart err="<nil>" instance=intimate-jackal project=default reusable=false
DEBUG  [2023-01-18T00:48:07Z] onStop hook finished                          instance=intimate-jackal instanceType=virtual-machine project=default target=reboot

The only highlighted content is:

WARNING[2023-01-18T00:48:06Z] Unable to use virtio-fs for config drive, using 9p as a fallback  err="Architecture unsupported" instance=intimate-jackal instanceType=virtual-machine project=default

I watched the VM console and lxc monitor --type=logging --pretty at the same time.
About one minute after I executed lxc start xxxx --console, the following log appeared in the console:

Error: write unix:@->/var/snap/lxd/common/lxd/logs/firm-quetzal/qemu.console: file already closed

and then

WARNING[2023-01-18T00:48:06Z] Unable to use virtio-fs for config drive, using 9p as a fallback  err="Architecture unsupported" instance=intimate-jackal instanceType=virtual-machine project=default

appears in the monitor terminal; it is highlighted.

No, that's normal on non-x86_64 architectures.

The issue is that the VM guest is initiating a reboot:

DEBUG  [2023-01-18T00:46:53Z] Instance initiated stop                       action=reboot instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:46:53Z] Instance stopped                              instance=intimate-jackal instanceType=virtual-machine project=default reason=guest-reset target=reboot
DEBUG  [2023-01-18T00:46:53Z] onStop hook started                           instance=intimate-jackal instanceType=virtual-machine project=default target=reboot
DEBUG  [2023-01-18T00:46:53Z] Waiting for VM process to finish              instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:48:03Z] Finished mirroring console to websocket
DEBUG  [2023-01-18T00:48:03Z] Finished mirroring websocket to console
DEBUG  [2023-01-18T00:48:03Z] DefaultWriter got error getting next reader   err="websocket: close 1006 (abnormal closure): unexpected EOF"
DEBUG  [2023-01-18T00:48:03Z] Got error getting next reader: read tcp 172.31.30.130:8443->172.31.30.129:52398: use of closed network connection
DEBUG  [2023-01-18T00:48:03Z] Console control websocket finished
DEBUG  [2023-01-18T00:48:03Z] Console websocket finished
DEBUG  [2023-01-18T00:48:03Z] Failure for operation                         class=websocket description="Showing console" err="write unix:@->/var/snap/lxd/common/lxd/logs/intimate-jackal/qemu.console: file already closed" operation=7fe00d97-158d-413d-9ecd-aaf9c9616dcd project=default
DEBUG  [2023-01-18T00:48:03Z] Event listener server handler stopped         listener=8b063036-133a-4b3a-bb40-b961f4d3c457 local=/var/snap/lxd/common/lxd/unix.socket remote=@
DEBUG  [2023-01-18T00:48:04Z] VM process finished                           instance=intimate-jackal instanceType=virtual-machine project=default
DEBUG  [2023-01-18T00:48:04Z] Stopping device                               device=root instance=intimate-jackal instanceType=virtual-machine project=default type=disk
DEBUG  [2023-01-18T00:48:04Z] Stopping device                               device=eth0 instance=intimate-jackal instanceType=virtual-machine project=default type=nic
DEBUG  [2023-01-18T00:48:04Z] UnmountInstance started                       instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:04Z] Unmounted RBD volume                          driver=ceph keepBlockDev=false path=/var/snap/lxd/common/lxd/storage-pools/lxd-ceph-storagepool/virtual-machines/intimate-jackal pool=lxd-ceph-storagepool volName=intimate-jackal
DEBUG  [2023-01-18T00:48:05Z] Deactivated RBD volume                        driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal
DEBUG  [2023-01-18T00:48:05Z] Deactivated RBD volume                        driver=ceph pool=lxd-ceph-storagepool volName=virtual-machine_intimate-jackal.block
DEBUG  [2023-01-18T00:48:05Z] UnmountInstance finished                      instance=intimate-jackal project=default
DEBUG  [2023-01-18T00:48:06Z] Start started                                 instance=intimate-jackal instanceType=virtual-machine project=default stateful=false

The VM process may be taking a while to stop as it flushes the disk data back to the storage volume.
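The length of the stall can be read straight off the monitor timestamps; a rough sketch (assuming GNU date, with the two bracketing lines from the capture above pasted into a hypothetical monitor.log):

```shell
#!/bin/sh
# Sample input: the two bracketing lines from the monitor capture above.
cat > monitor.log <<'EOF'
DEBUG  [2023-01-18T00:46:53Z] Waiting for VM process to finish              instance=intimate-jackal
DEBUG  [2023-01-18T00:48:04Z] VM process finished                           instance=intimate-jackal
EOF

# Pull the [timestamp] out of the first line matching a pattern.
ts() { grep "$1" monitor.log | sed -n 's/.*\[\([^]]*\)\].*/\1/p' | head -n1; }

t1=$(date -d "$(ts 'Waiting for VM process to finish')" +%s)
t2=$(date -d "$(ts 'VM process finished')" +%s)
echo "VM process took $((t2 - t1))s to stop"   # 71s for the lines above
```

The same pattern works on a full capture saved with `lxc monitor --type=logging --pretty > monitor.log`.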

I believe there is a bug/behaviour in some of the Ubuntu images that requires a reboot.

See Bug #1873809 “Make linux-kvm bootable in LXD VMs” : Bugs : cloud-images

The disk-kvm.img images which are to be preferred when run under virtualization, currently completely fail to boot under UEFI.
A workaround was put in place such that LXD instead will pull generic-based images until this is resolved, this however does come with a much longer boot time (as the kernel panics, reboots and then boots) and also reduced functionality from cloud-init, so we’d still like this fixed in the near future.
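As a side note, you can tell which path a given guest ended up on from its kernel flavour; a hedged sketch (the lxc exec line and the sample kernel string are illustrative, not taken from this thread):

```shell
#!/bin/sh
# Inside the guest, the kernel flavour suffix tells the two cases apart:
#   lxc exec <vm-name> -- uname -r
kernel="5.4.0-137-generic"   # sample value for illustration only
case "$kernel" in
  *-kvm)     echo "kvm flavour (the image that fails to boot under UEFI)" ;;
  *-generic) echo "generic flavour (the workaround path: panic, reboot, then boot)" ;;
  *)         echo "other flavour: $kernel" ;;
esac
```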


Thank you very much, I learned from you again!
:+1:
