darmon77
(Darmon)
June 1, 2021, 6:06pm
1
Citing Bug #1910696 “QEMU fails to start with error "There is no optio...” : Bugs : QEMU
I am running LXD 4.14 on a Void Linux system and got the following error, the same error as in the bug I am citing:
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name sid -uuid e4fdf7e7-d954-4365-ab84-559fd3763869 -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/sid/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/log/lxd/sid/qemu.spice -pidfile /var/log/lxd/sid/qemu.pid -D /var/log/lxd/sid/qemu.log -chroot /var/lib/lxd/virtual-machines/sid -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: : Process exited with a non-zero value
stgraber
(Stéphane Graber)
June 1, 2021, 8:34pm
2
Please show the output of lxc info sid --show-log. Also, what version of QEMU are you using?
LXD currently requires QEMU >= 4.0 and < 6.0, so if you’re running QEMU 6.0, that’d be the problem.
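The supported range described above can be expressed as a simple version check. This is a minimal sketch of that gate, not LXD's actual code; the function name and the parsing are illustrative assumptions:

```python
# Sketch of the QEMU version gate described above: supported when >= 4.0 and < 6.0.
# Hypothetical helper for illustration only, not LXD's implementation.

def qemu_supported(version: str) -> bool:
    """Return True if a QEMU version string falls in the supported range."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (4, 0) <= (major, minor) < (6, 0)

print(qemu_supported("5.2.0"))  # True
print(qemu_supported("6.0.0"))  # False: QEMU 6.0 is outside the supported range
```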
darmon77
(Darmon)
June 1, 2021, 9:07pm
3
$ lxc info sid --show-log
Name: sid
Location: Dennis
Remote: unix://
Architecture: x86_64
Created: 2021/04/13 20:50 UTC
Status: Stopped
Type: virtual-machine
Profiles: vm
QEMU version: qemu-6.0.0_2
tomp
(Thomas Parrott)
June 1, 2021, 9:16pm
4
Yep so version 6.0 of QEMU is most likely the problem.
That version of QEMU has several bugs that upstream is aware of and working on fixes for. Once those are merged and backported to the 6.0 branch, the packager of your distro can update its version.
We are going to continue testing on QEMU 6.0 and raising issues as we find them. For users of the snap package it may be that we can manually backport some of the fixes too.
darmon77
(Darmon)
June 1, 2021, 9:19pm
5
I appreciate your answers; I have no alternative but to downgrade QEMU.
Thank you
tomp
(Thomas Parrott)
June 1, 2021, 9:20pm
6
The patches from upstream are being posted here:
(Issue opened 02 Feb 2021, closed 21 Apr 2021; labeled: Incomplete)
# Required information
* Distribution: openSUSE
* Distribution version: Tumbleweed
<details>
<summary>lxc info</summary>
```
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB7zCCAXWgAwIBAgIRANhl7wynTn9E71xLTaClXhcwCgYIKoZIzj0EAwMwMzEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzETMBEGA1UEAwwKcm9vdEB5YXZp
    bjAeFw0xOTA0MTEwNTE4MDFaFw0yOTA0MDgwNTE4MDFaMDMxHDAaBgNVBAoTE2xp
    bnV4Y29udGFpbmVycy5vcmcxEzARBgNVBAMMCnJvb3RAeWF2aW4wdjAQBgcqhkjO
    PQIBBgUrgQQAIgNiAAT+PONmtu+qgT9JEF2nkLsL5LlPp3c4l3xMoiMmoSl0F4hY
    qwQjqfRf+eoJKSafC+b95aD8H2mP1Aq0LIkZGHw2nY2kYVGeMVl17jezt4fX07fQ
    UHVGUX+d43WQZUVb0MqjTTBLMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr
    BgEFBQcDATAMBgNVHRMBAf8EAjAAMBYGA1UdEQQPMA2CBXlhdmluhwQKKhRtMAoG
    CCqGSM49BAMDA2gAMGUCMHCaqAqqEgVZaBtiOGa5DwTpdOdRXsf8/rnOgjWlvQ0o
    /mOaX58feUAURmAa19E9OQIxAKizR6KMA0RJ84vUbtsDphzMJAP3UyDlkn4dnRVV
    Ct/K6H16CwFGJHDkxzjqkLyy4g==
    -----END CERTIFICATE-----
  certificate_fingerprint: bd18dcc1b4b614aca0b28ffbb4f3bf99ebb9c40e491579937b8e03aa1c99d49b
  driver: lxc | qemu
  driver_version: 4.0.5 | 5.2.0
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.10.7-1-default
  lxc_features:
    cgroup2: "true"
    devpts_fd: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: openSUSE Tumbleweed
  os_version: "20210131"
  project: default
  server: lxd
  server_clustered: false
  server_name: yavin
  server_pid: 7493
  server_version: "4.10"
  storage: dir
  storage_version: "1"
```
</details>
# Issue description
When trying to run LXD with QEMU 5.2 and later, you get the following error:
```
% lxc launch images:opensuse/15.2 fred-vm --vm --console
Creating fred-vm
Starting fred-vm
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name fred-vm -uuid 1b4eb331-a46e-4a1a-a52e-36d94f95021b -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/fred-vm/qemu.conf -pidfile /var/log/lxd/fred-vm/qemu.pid -D /var/log/lxd/fred-vm/qemu.log -chroot /var/lib/lxd/virtual-machines/fred-vm -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: qemu-system-x86_64:/var/log/lxd/fred-vm/qemu.conf:27: There is no option group 'spice'
qemu-system-x86_64: -readconfig /var/log/lxd/fred-vm/qemu.conf: read config /var/log/lxd/fred-vm/qemu.conf: Invalid argument
: Process exited with a non-zero value
Try `lxc info --show-log local:fred-vm` for more info
```
This is a [known issue with QEMU](https://bugs.launchpad.net/qemu/+bug/1910696) and the [SUSE QEMU maintainers have said that QEMU is soft-deprecating `-readconfig`](https://bugzilla.suse.com/show_bug.cgi?id=1181549#c11) -- with the recommended workaround being that you specify spice configuration parameters directly.
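The recommended workaround above amounts to moving the settings out of the `[spice]` option group in the `-readconfig` file and onto the command line as a single `-spice` argument. The sketch below illustrates that translation; the helper name is hypothetical and the option values are taken from the command in this report:

```python
# Illustration (not LXD code) of the workaround: render a spice option group
# from a -readconfig file as the equivalent -spice command-line argument,
# since newer QEMU rejects the 'spice' option group in config files.

def spice_group_to_arg(options: dict) -> list:
    """Render spice options as ['-spice', 'key=value,...'] CLI arguments."""
    rendered = ",".join(f"{key}={value}" for key, value in options.items())
    return ["-spice", rendered]

args = spice_group_to_arg({
    "unix": "on",
    "disable-ticketing": "on",
    "addr": "/var/log/lxd/fred-vm/qemu.spice",
})
print(" ".join(args))
# -spice unix=on,disable-ticketing=on,addr=/var/log/lxd/fred-vm/qemu.spice
```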
# Steps to reproduce
1. Run LXD with QEMU 5.2.
2. Try to run a VM.
Note that on openSUSE I'm in the middle of figuring out how to package LXD to work with VMs, so you need some extra setup steps (which will be part of the LXD package once we sort this stuff out):
```
% mkdir -p /opt/lxd/OVMF
% ln -s /usr/share/qemu/ovmf-x86_64-ms-code.bin /opt/lxd/OVMF/OVMF_CODE.fd
% ln -s /usr/share/qemu/ovmf-x86_64-vars.bin /opt/lxd/OVMF/OVMF_VARS.fd
% ln -s /usr/share/qemu/ovmf-x86_64-ms-vars.bin /opt/lxd/OVMF/OVMF_VARS.ms.fd
```
And then you need to run LXD with `LXD_OVMF_PATH=/opt/lxd/OVMF` but that shouldn't be causing this issue.
# Information to attach
- [ ] Any relevant kernel output (`dmesg`)
- [x] Container log (`lxc info NAME --show-log`)
<details>
```
Name: fred-vm
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/02 00:21 UTC
Status: Stopped
Type: virtual-machine
Profiles: default
Error: open /var/log/lxd/fred-vm/qemu.log: no such file or directory
```
</details>
- [x] Container configuration (`lxc config show NAME --expanded`)
<details>
```
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Opensuse 15.2 amd64 (20210201_04:20)
  image.os: Opensuse
  image.release: "15.2"
  image.serial: "20210201_04:20"
  image.type: disk-kvm.img
  image.variant: default
  security.idmap.isolated: "true"
  snapshots.expiry: 6w
  snapshots.pattern: '{{ creation_date | date:"2006-01-02" }}'
  snapshots.schedule: 0 0 * * *
  volatile.base_image: db74e0609ba4863716104234a6bb7c634040972d1f6c1e6607d86670b60a5278
  volatile.eth0.hwaddr: 00:16:3e:2d:a3:ff
  volatile.uuid: 1b4eb331-a46e-4a1a-a52e-36d94f95021b
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
```
</details>
- [x] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
<details>
```
t=2021-02-02T11:27:07+1100 lvl=info msg="LXD 4.10 is starting in normal mode" path=/var/lib/lxd
t=2021-02-02T11:27:07+1100 lvl=info msg="Kernel uid/gid map:"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - u 0 0 4294967295"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - g 0 0 4294967295"
t=2021-02-02T11:27:07+1100 lvl=info msg="Configured LXD uid/gid map:"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - u 0 400000000 500000001"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - g 0 400000000 500000001"
t=2021-02-02T11:27:07+1100 lvl=info msg="Kernel features:"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - closing multiple file descriptors efficiently: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - netnsid-based network retrieval: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - pidfds: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - uevent injection: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - seccomp listener: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - seccomp listener continue syscalls: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - seccomp listener add file descriptors: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - attach to namespaces via pidfds: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - safe native terminal allocation : yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - unprivileged file capabilities: yes"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - cgroup layout: hybrid"
t=2021-02-02T11:27:07+1100 lvl=warn msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
t=2021-02-02T11:27:07+1100 lvl=info msg=" - shiftfs support: no"
t=2021-02-02T11:27:07+1100 lvl=info msg="Initializing local database"
t=2021-02-02T11:27:08+1100 lvl=info msg="Starting /dev/lxd handler:"
t=2021-02-02T11:27:08+1100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
t=2021-02-02T11:27:08+1100 lvl=info msg="REST API daemon:"
t=2021-02-02T11:27:08+1100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
t=2021-02-02T11:27:08+1100 lvl=info msg="Initializing global database"
t=2021-02-02T11:27:08+1100 lvl=info msg="Firewall loaded driver \"nftables\""
t=2021-02-02T11:27:08+1100 lvl=info msg="Initializing storage pools"
t=2021-02-02T11:27:08+1100 lvl=info msg="Initializing daemon storage mounts"
t=2021-02-02T11:27:08+1100 lvl=info msg="Initializing networks"
t=2021-02-02T11:27:08+1100 lvl=info msg="Pruning leftover image files"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done pruning leftover image files"
t=2021-02-02T11:27:08+1100 lvl=info msg="Loading daemon configuration"
t=2021-02-02T11:27:08+1100 lvl=info msg="Started seccomp handler" path=/var/lib/lxd/seccomp.socket
t=2021-02-02T11:27:08+1100 lvl=info msg="Pruning expired images"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done pruning expired images"
t=2021-02-02T11:27:08+1100 lvl=info msg="Pruning expired instance backups"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done pruning expired instance backups"
t=2021-02-02T11:27:08+1100 lvl=info msg="Updating instance types"
t=2021-02-02T11:27:08+1100 lvl=info msg="Expiring log files"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done expiring log files"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done updating instance types"
t=2021-02-02T11:27:08+1100 lvl=info msg="Updating images"
t=2021-02-02T11:27:08+1100 lvl=info msg="Done updating images"
t=2021-02-02T11:27:26+1100 lvl=info msg="Creating instance" ephemeral=false instance=fred-vm instanceType=virtual-machine project=default
t=2021-02-02T11:27:26+1100 lvl=info msg="Created instance" ephemeral=false instance=fred-vm instanceType=virtual-machine project=default
t=2021-02-02T11:28:03+1100 lvl=warn msg="Unable to use virtio-fs for config drive, using 9p as a fallback: virtiofsd missing" instance=fred-vm instanceType=virtual-machine project=default
t=2021-02-02T11:28:03+1100 lvl=warn msg="Using writeback cache I/O" DevPath=/var/lib/lxd/storage-pools/default/virtual-machines/fred-vm/root.img fsType=btrfs instance=fred-vm instanceType=virtual-machine project=default
```
</details>
- [ ] Output of the client with --debug
- [x] Output of the daemon with --debug (alternatively output of `lxc monitor` while reproducing the issue)
<details>
```
location: none
metadata:
context: {}
level: dbug
message: 'New event listener: 9c87d7b2-f2aa-4926-9ccf-bef19fef2cb2'
timestamp: "2021-02-02T11:27:22.421826183+11:00"
type: logging
location: none
metadata:
context:
ip: '@'
method: GET
protocol: unix
url: /1.0
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:27:25.961953325+11:00"
type: logging
location: none
metadata:
context:
ip: '@'
method: GET
protocol: unix
url: /1.0/events
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:27:26.010917077+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'New event listener: c1198efa-1c6f-4930-89ea-5718101d5516'
timestamp: "2021-02-02T11:27:26.011266086+11:00"
type: logging
location: none
metadata:
context:
ip: '@'
method: POST
protocol: unix
url: /1.0/instances
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:27:26.011719471+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: Connecting to a remote simplestreams server
timestamp: "2021-02-02T11:27:26.011911897+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: Responding to instance create
timestamp: "2021-02-02T11:27:26.011734363+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'New task Operation: 939a39e7-f84f-44dc-8db1-722064355a57'
timestamp: "2021-02-02T11:27:26.127909849+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Started task operation: 939a39e7-f84f-44dc-8db1-722064355a57'
timestamp: "2021-02-02T11:27:26.128281196+11:00"
type: logging
location: none
metadata:
class: task
created_at: "2021-02-02T11:27:26.122251067+11:00"
description: Creating instance
err: ""
id: 939a39e7-f84f-44dc-8db1-722064355a57
location: none
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/fred-vm
instances:
- /1.0/instances/fred-vm
status: Pending
status_code: 105
updated_at: "2021-02-02T11:27:26.122251067+11:00"
timestamp: "2021-02-02T11:27:26.128258283+11:00"
type: operation
location: none
metadata:
class: task
created_at: "2021-02-02T11:27:26.122251067+11:00"
description: Creating instance
err: ""
id: 939a39e7-f84f-44dc-8db1-722064355a57
location: none
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/fred-vm
instances:
- /1.0/instances/fred-vm
status: Running
status_code: 103
updated_at: "2021-02-02T11:27:26.122251067+11:00"
timestamp: "2021-02-02T11:27:26.128548358+11:00"
type: operation
location: none
metadata:
context: {}
level: dbug
message: Connecting to a remote simplestreams server
timestamp: "2021-02-02T11:27:26.129385214+11:00"
type: logging
location: none
metadata:
context:
ip: '@'
method: GET
protocol: unix
url: /1.0/operations/939a39e7-f84f-44dc-8db1-722064355a57
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:27:26.129855859+11:00"
type: logging
location: none
metadata:
context:
fingerprint: db74e0609ba4863716104234a6bb7c634040972d1f6c1e6607d86670b60a5278
level: dbug
message: Image already exists in the DB
timestamp: "2021-02-02T11:27:26.259756203+11:00"
type: logging
location: none
metadata:
context:
ephemeral: "false"
instance: fred-vm
instanceType: virtual-machine
project: default
level: info
message: Creating instance
timestamp: "2021-02-02T11:27:26.264557084+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: FillInstanceConfig started
timestamp: "2021-02-02T11:27:26.266139817+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: FillInstanceConfig finished
timestamp: "2021-02-02T11:27:26.266154792+11:00"
type: logging
location: none
metadata:
action: instance-created
source: /1.0/instances/fred-vm
timestamp: "2021-02-02T11:27:26.271728933+11:00"
type: lifecycle
location: none
metadata:
context:
ephemeral: "false"
instance: fred-vm
instanceType: virtual-machine
project: default
level: info
message: Created instance
timestamp: "2021-02-02T11:27:26.271679817+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: CreateInstanceFromImage started
timestamp: "2021-02-02T11:27:26.274576758+11:00"
type: logging
location: none
metadata:
context:
dev: /var/lib/lxd/storage-pools/default/virtual-machines/fred-vm/root.img
driver: dir
path: /var/lib/lxd/storage-pools/default/virtual-machines/fred-vm
pool: default
level: dbug
message: Running filler function
timestamp: "2021-02-02T11:27:26.27589076+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Updated metadata for task Operation: 939a39e7-f84f-44dc-8db1-722064355a57'
timestamp: "2021-02-02T11:27:26.277108813+11:00"
type: logging
location: none
metadata:
class: task
created_at: "2021-02-02T11:27:26.122251067+11:00"
description: Creating instance
err: ""
id: 939a39e7-f84f-44dc-8db1-722064355a57
location: none
may_cancel: false
metadata:
create_instance_from_image_unpack_progress: 'Unpack: 100% (6.68GB/s)'
progress:
percent: "100"
speed: "6676258992"
stage: create_instance_from_image_unpack
resources:
containers:
- /1.0/containers/fred-vm
instances:
- /1.0/instances/fred-vm
status: Running
status_code: 103
updated_at: "2021-02-02T11:27:26.277098213+11:00"
timestamp: "2021-02-02T11:27:26.277340962+11:00"
type: operation
location: none
metadata:
context:
imageFile: /var/lib/lxd/images/db74e0609ba4863716104234a6bb7c634040972d1f6c1e6607d86670b60a5278
vol: fred-vm
level: dbug
message: Checking image unpack size
timestamp: "2021-02-02T11:27:26.340352012+11:00"
type: logging
location: none
metadata:
context:
dstPath: /var/lib/lxd/storage-pools/default/virtual-machines/fred-vm/root.img
imageFile: /var/lib/lxd/images/db74e0609ba4863716104234a6bb7c634040972d1f6c1e6607d86670b60a5278
imgPath: /var/lib/lxd/images/db74e0609ba4863716104234a6bb7c634040972d1f6c1e6607d86670b60a5278.rootfs
vol: fred-vm
level: dbug
message: Converting qcow2 image to raw disk
timestamp: "2021-02-02T11:27:26.340386181+11:00"
type: logging
location: none
metadata:
context:
dev: /var/lib/lxd/storage-pools/default/virtual-machines/fred-vm/root.img
driver: dir
pool: default
level: dbug
message: Moved GPT alternative header to end of disk
timestamp: "2021-02-02T11:28:03.482010328+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UpdateInstanceBackupFile started
timestamp: "2021-02-02T11:28:03.491635628+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: CreateInstanceFromImage finished
timestamp: "2021-02-02T11:28:03.491580874+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Success for task operation: 939a39e7-f84f-44dc-8db1-722064355a57'
timestamp: "2021-02-02T11:28:03.499494933+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UpdateInstanceBackupFile finished
timestamp: "2021-02-02T11:28:03.49944648+11:00"
type: logging
location: none
metadata:
class: task
created_at: "2021-02-02T11:27:26.122251067+11:00"
description: Creating instance
err: ""
id: 939a39e7-f84f-44dc-8db1-722064355a57
location: none
may_cancel: false
metadata:
create_instance_from_image_unpack_progress: 'Unpack: 100% (6.68GB/s)'
progress:
percent: "100"
speed: "6676258992"
stage: create_instance_from_image_unpack
resources:
containers:
- /1.0/containers/fred-vm
instances:
- /1.0/instances/fred-vm
status: Success
status_code: 200
updated_at: "2021-02-02T11:27:26.277098213+11:00"
timestamp: "2021-02-02T11:28:03.50023567+11:00"
type: operation
location: none
metadata:
context:
ip: '@'
method: GET
protocol: unix
url: /1.0/instances/fred-vm
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:28:03.5031272+11:00"
type: logging
location: none
metadata:
context:
ip: '@'
method: PUT
protocol: unix
url: /1.0/instances/fred-vm/state
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:28:03.512238373+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'New task Operation: a09586b3-f5dd-4134-95c5-d8959e25ca30'
timestamp: "2021-02-02T11:28:03.523415697+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Started task operation: a09586b3-f5dd-4134-95c5-d8959e25ca30'
timestamp: "2021-02-02T11:28:03.524159976+11:00"
type: logging
location: none
metadata:
class: task
created_at: "2021-02-02T11:28:03.517789371+11:00"
description: Starting instance
err: ""
id: a09586b3-f5dd-4134-95c5-d8959e25ca30
location: none
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/fred-vm
status: Running
status_code: 103
updated_at: "2021-02-02T11:28:03.517789371+11:00"
timestamp: "2021-02-02T11:28:03.524757241+11:00"
type: operation
location: none
metadata:
class: task
created_at: "2021-02-02T11:28:03.517789371+11:00"
description: Starting instance
err: ""
id: a09586b3-f5dd-4134-95c5-d8959e25ca30
location: none
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/fred-vm
status: Pending
status_code: 105
updated_at: "2021-02-02T11:28:03.517789371+11:00"
timestamp: "2021-02-02T11:28:03.524122546+11:00"
type: operation
location: none
metadata:
context:
ip: '@'
method: GET
protocol: unix
url: /1.0/operations/a09586b3-f5dd-4134-95c5-d8959e25ca30
username: cyphar
level: dbug
message: Handling
timestamp: "2021-02-02T11:28:03.528037732+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance started
timestamp: "2021-02-02T11:28:03.528637597+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance finished
timestamp: "2021-02-02T11:28:03.531502179+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UpdateInstanceBackupFile started
timestamp: "2021-02-02T11:28:03.531532424+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UpdateInstanceBackupFile finished
timestamp: "2021-02-02T11:28:03.534666281+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance started
timestamp: "2021-02-02T11:28:03.550505523+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance finished
timestamp: "2021-02-02T11:28:03.552047188+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance started
timestamp: "2021-02-02T11:28:03.581335367+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance finished
timestamp: "2021-02-02T11:28:03.582532372+11:00"
type: logging
location: none
metadata:
context:
instance: fred-vm
instanceType: virtual-machine
project: default
level: warn
message: 'Unable to use virtio-fs for config drive, using 9p as a fallback: virtiofsd
missing'
timestamp: "2021-02-02T11:28:03.587568801+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance started
timestamp: "2021-02-02T11:28:03.594364779+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: MountInstance finished
timestamp: "2021-02-02T11:28:03.595734199+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance started
timestamp: "2021-02-02T11:28:03.595960204+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance finished
timestamp: "2021-02-02T11:28:03.597076476+11:00"
type: logging
location: none
metadata:
context:
device: eth0
instance: fred-vm
instanceType: virtual-machine
project: default
type: nic
level: dbug
message: Starting device
timestamp: "2021-02-02T11:28:03.59709998+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Scheduler: network: tap7b787d5c has been added: updating network priorities'
timestamp: "2021-02-02T11:28:03.598924305+11:00"
type: logging
location: none
metadata:
context:
device: root
instance: fred-vm
instanceType: virtual-machine
project: default
type: disk
level: dbug
message: Starting device
timestamp: "2021-02-02T11:28:03.612927711+11:00"
type: logging
location: none
metadata:
context:
DevPath: /var/lib/lxd/storage-pools/default/virtual-machines/fred-vm/root.img
fsType: btrfs
instance: fred-vm
instanceType: virtual-machine
project: default
level: warn
message: Using writeback cache I/O
timestamp: "2021-02-02T11:28:03.613465624+11:00"
type: logging
location: none
metadata:
context:
device: root
instance: fred-vm
instanceType: virtual-machine
project: default
type: disk
level: dbug
message: Stopping device
timestamp: "2021-02-02T11:28:03.808359963+11:00"
type: logging
location: none
metadata:
context:
device: eth0
instance: fred-vm
instanceType: virtual-machine
project: default
type: nic
level: dbug
message: Stopping device
timestamp: "2021-02-02T11:28:03.808431141+11:00"
type: logging
location: none
metadata:
context:
dev: eth0
host_name: tap7b787d5c
hwaddr: 00:16:3e:58:27:54
instance: fred-vm
ipv4: 0.0.0.0
ipv6: '::'
parent: lxdbr0
project: default
level: dbug
message: Clearing instance firewall static filters
timestamp: "2021-02-02T11:28:03.870580195+11:00"
type: logging
location: none
metadata:
context:
dev: eth0
host_name: tap7b787d5c
hwaddr: 00:16:3e:58:27:54
instance: fred-vm
ipv4: <nil>
ipv6: <nil>
parent: lxdbr0
project: default
level: dbug
message: Clearing instance firewall dynamic filters
timestamp: "2021-02-02T11:28:03.877962839+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance started
timestamp: "2021-02-02T11:28:03.891496775+11:00"
type: logging
location: none
metadata:
context:
driver: dir
instance: fred-vm
pool: default
project: default
level: dbug
message: UnmountInstance finished
timestamp: "2021-02-02T11:28:03.892626161+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: |-
Failure for task operation: a09586b3-f5dd-4134-95c5-d8959e25ca30: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name fred-vm -uuid a46a0e8c-6000-4eda-b9af-9c422bcde128 -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/fred-vm/qemu.conf -pidfile /var/log/lxd/fred-vm/qemu.pid -D /var/log/lxd/fred-vm/qemu.log -chroot /var/lib/lxd/virtual-machines/fred-vm -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: qemu-system-x86_64:/var/log/lxd/fred-vm/qemu.conf:27: There is no option group 'spice'
qemu-system-x86_64: -readconfig /var/log/lxd/fred-vm/qemu.conf: read config /var/log/lxd/fred-vm/qemu.conf: Invalid argument
: Process exited with a non-zero value
timestamp: "2021-02-02T11:28:03.892645802+11:00"
type: logging
location: none
metadata:
class: task
created_at: "2021-02-02T11:28:03.517789371+11:00"
description: Starting instance
err: |-
Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name fred-vm -uuid a46a0e8c-6000-4eda-b9af-9c422bcde128 -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/fred-vm/qemu.conf -pidfile /var/log/lxd/fred-vm/qemu.pid -D /var/log/lxd/fred-vm/qemu.log -chroot /var/lib/lxd/virtual-machines/fred-vm -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: qemu-system-x86_64:/var/log/lxd/fred-vm/qemu.conf:27: There is no option group 'spice'
qemu-system-x86_64: -readconfig /var/log/lxd/fred-vm/qemu.conf: read config /var/log/lxd/fred-vm/qemu.conf: Invalid argument
: Process exited with a non-zero value
id: a09586b3-f5dd-4134-95c5-d8959e25ca30
location: none
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/fred-vm
status: Failure
status_code: 400
updated_at: "2021-02-02T11:28:03.517789371+11:00"
timestamp: "2021-02-02T11:28:03.892845978+11:00"
type: operation
location: none
metadata:
context: {}
level: dbug
message: 'Event listener finished: c1198efa-1c6f-4930-89ea-5718101d5516'
timestamp: "2021-02-02T11:28:03.894431747+11:00"
type: logging
location: none
metadata:
context: {}
level: dbug
message: 'Disconnected event listener: c1198efa-1c6f-4930-89ea-5718101d5516'
timestamp: "2021-02-02T11:28:03.894598377+11:00"
type: logging
```
</details>
tomp
(Thomas Parrott)
June 4, 2021, 11:36am
7
Tracking QEMU 6.0 support here:
(Issue opened 04 Jun 2021; labeled: Blocked, External)
This is a place to track the progress for LXD supporting QEMU 6.0
Currently there are these outstanding patch sets from @bonzini required for upstream QEMU:
- https://patchew.org/QEMU/20210518131542.2941207-1-pbonzini@redhat.com/
- https://patchew.org/QEMU/20210518154014.2999326-1-pbonzini@redhat.com/
- https://github.com/qemu/qemu/commit/941a4736d2b465be1d6429415f8b1f26e2167585
M11
(Michał Policht)
December 3, 2022, 8:38pm
8
Does this still hold true? My virtual machines stopped working. I noticed I have QEMU version 7.1.0, so I suspect a system update caused the problem. Interestingly, I can spawn new virtual machines; only the old ones stopped working.
tomp
(Thomas Parrott)
December 3, 2022, 9:03pm
10
It looks like you’ve double posted.
Let’s continue this discussion on VM won't start - empty console output instead.