Correct, mediastack is the one with issues. I’ve updated to 4.22 in the meantime as well.
Here’s the config:
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu focal amd64 (20211228_07:42)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20211228_07:42"
  image.type: disk-kvm.img
  image.variant: cloud
  limits.cpu: "4"
  limits.memory: 4096MB
  migration.stateful: "false"
  user.user-data: |
    #cloud-config
    package_upgrade: true
    packages:
      - openssh-server
      - curl
      - wget
      - nano
      - htop
    phone_home:
      url: http://10.10.0.1:4040/phone-home
      post:
        - hostname
      tries: 10
  volatile.base_image: 3909be5d5c59409c001c40805c86bcb29ac787e10618a3c10ddfd425300d7adb
  volatile.eth0.hwaddr: 00:16:3e:a1:cb:60
  volatile.last_state.power: RUNNING
  volatile.uuid: 1ef955a7-dd26-4c11-9932-6626651f4dd9
  volatile.vsock_id: "125"
devices:
  eth0:
    nictype: bridged
    parent: br1
    type: nic
  root:
    path: /
    pool: storage-pool
    size: 10GB
    size.state: 5GB
    type: disk
ephemeral: false
profiles:
- default
- cloud-init
- limits.medium
stateful: false
description: ""
The fields config.migration.stateful and devices.root.size.state were added to get it to work with 4.22; otherwise it would fail with: Error: Stateful start requires that the instance limits.memory is less than size.state on the root disk device
(Note that passing --stateless does not fix this.)
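For anyone hitting the same error, the two workaround keys above can be set from the CLI without editing the full config. This is a sketch using the instance name and sizes from this post; adjust them to your setup:

```shell
# Workaround for "Stateful start requires that the instance limits.memory
# is less than size.state on the root disk device" seen after upgrading to 4.22.
# "mediastack" and the 5GB value match the config shown above.
lxc config set mediastack migration.stateful false
lxc config device set mediastack root size.state 5GB

# Verify the effective values before trying to start the VM again
lxc config show mediastack --expanded | grep -E 'migration.stateful|size.state'
lxc start mediastack
```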
The logs from lxc monitor --pretty:
DBUG[02-07|18:39:17] New task Operation: 11446d3a-2052-43dd-a4b2-c1ee8440fc90
DBUG[02-07|18:39:17] Started task operation: 11446d3a-2052-43dd-a4b2-c1ee8440fc90
INFO[02-07|18:39:17] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Pending StatusCode=Pending Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err= Location=aphrodite
INFO[02-07|18:39:17] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Pending StatusCode=Pending Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err= Location=aphrodite
DBUG[02-07|18:39:17] Instance operation lock created reusable=false action=start instance=mediastack project=default
DBUG[02-07|18:39:17] Start started instanceType=virtual-machine project=default stateful=false instance=mediastack
INFO[02-07|18:39:17] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Running StatusCode=Running Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err= Location=aphrodite
INFO[02-07|18:39:17] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Running StatusCode=Running Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err= Location=aphrodite
DBUG[02-07|18:39:17] Handling API request ip=@ method=GET protocol=unix url=/1.0/operations/11446d3a-2052-43dd-a4b2-c1ee8440fc90 username=tobias
DBUG[02-07|18:39:17] MountInstance started driver=zfs instance=mediastack pool=storage-pool project=default
DBUG[02-07|18:39:17] MountInstance finished driver=zfs instance=mediastack pool=storage-pool project=default
DBUG[02-07|18:39:17] Skipping lxd-agent install as unchanged project=default srcPath=/snap/lxd/22340/bin/lxd-agent installPath=/var/snap/lxd/common/lxd/virtual-machines/mediastack/config/lxd-agent instance=mediastack instanceType=virtual-machine
DBUG[02-07|18:39:17] MountInstance started driver=zfs instance=mediastack pool=storage-pool project=default
DBUG[02-07|18:39:17] MountInstance finished pool=storage-pool project=default driver=zfs instance=mediastack
DBUG[02-07|18:39:17] UnmountInstance started project=default driver=zfs instance=mediastack pool=storage-pool
DBUG[02-07|18:39:17] UnmountInstance finished driver=zfs instance=mediastack pool=storage-pool project=default
DBUG[02-07|18:39:17] Skipping unmount as in use volName=mediastack driver=zfs pool=storage-pool refCount=1
DBUG[02-07|18:39:18] Starting device device=eth0 instance=mediastack instanceType=virtual-machine project=default type=nic
DBUG[02-07|18:39:18] Database error: api.StatusError{status:404, msg:"Network not found"}
DBUG[02-07|18:39:18] Scheduler: network: tapd201eb3e has been added: updating network priorities
DBUG[02-07|18:39:18] Starting device device=root instance=mediastack instanceType=virtual-machine project=default type=disk
DBUG[02-07|18:39:19] UpdateInstanceBackupFile started project=default driver=zfs instance=mediastack pool=storage-pool
DBUG[02-07|18:39:19] Skipping unmount as in use driver=zfs pool=storage-pool refCount=1 volName=mediastack
DBUG[02-07|18:39:19] UpdateInstanceBackupFile finished pool=storage-pool project=default driver=zfs instance=mediastack
DBUG[02-07|18:39:19] Instance operation lock finished action=start err="Failed to run: forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/22340/bin/qemu-system-x86_64 -S -name mediastack -uuid 1ef955a7-dd26-4c11-9932-6626651f4dd9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mediastack/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mediastack/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mediastack/qemu.pid -D /var/snap/lxd/common/lxd/logs/mediastack/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/1 (label console)\n: Process exited with non-zero value 1" instance=mediastack project=default reusable=false
DBUG[02-07|18:39:19] Stopping device device=root instance=mediastack instanceType=virtual-machine project=default type=disk
DBUG[02-07|18:39:19] Stopping device instanceType=virtual-machine project=default type=nic device=eth0 instance=mediastack
DBUG[02-07|18:39:19] Database error: api.StatusError{status:404, msg:"Network not found"}
DBUG[02-07|18:39:20] Clearing instance firewall static filters parent=br1 project=default dev=eth0 host_name=tapd201eb3e hwaddr=00:16:3e:a1:cb:60 instance=mediastack ipv4=0.0.0.0 ipv6=::
DBUG[02-07|18:39:20] UnmountInstance started instance=mediastack pool=storage-pool project=default driver=zfs
DBUG[02-07|18:39:30] Start finished instance=mediastack instanceType=virtual-machine project=default stateful=false
DBUG[02-07|18:39:30] UnmountInstance finished driver=zfs instance=mediastack pool=storage-pool project=default
DBUG[02-07|18:39:30] Failure for task operation: 11446d3a-2052-43dd-a4b2-c1ee8440fc90: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/22340/bin/qemu-system-x86_64 -S -name mediastack -uuid 1ef955a7-dd26-4c11-9932-6626651f4dd9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mediastack/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mediastack/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mediastack/qemu.pid -D /var/snap/lxd/common/lxd/logs/mediastack/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/1 (label console)
: Process exited with non-zero value 1
INFO[02-07|18:39:30] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Failure StatusCode=Failure Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err="Failed to run: forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/22340/bin/qemu-system-x86_64 -S -name mediastack -uuid 1ef955a7-dd26-4c11-9932-6626651f4dd9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mediastack/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mediastack/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mediastack/qemu.pid -D /var/snap/lxd/common/lxd/logs/mediastack/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/1 (label console)\n: Process exited with non-zero value 1" Location=aphrodite
INFO[02-07|18:39:30] ID: 11446d3a-2052-43dd-a4b2-c1ee8440fc90, Class: task, Description: Starting instance CreatedAt=2022-02-07T18:39:17+0000 UpdatedAt=2022-02-07T18:39:17+0000 Status=Failure StatusCode=Failure Resources=map[instances:[/1.0/instances/mediastack]] Metadata=map[] MayCancel=false Err="Failed to run: forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/22340/bin/qemu-system-x86_64 -S -name mediastack -uuid 1ef955a7-dd26-4c11-9932-6626651f4dd9 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/mediastack/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/snap/lxd/common/lxd/logs/mediastack/qemu.spice -pidfile /var/snap/lxd/common/lxd/logs/mediastack/qemu.pid -D /var/snap/lxd/common/lxd/logs/mediastack/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: char device redirected to /dev/pts/1 (label console)\n: Process exited with non-zero value 1" Location=aphrodite
The only thing I noticed is a repeated 404, "Network not found". I’m not sure what causes this, as the config lists br1 as the NIC’s parent, and it is present (lxc network list):
+------+----------+---------+------+------+-------------+---------+-------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+------+----------+---------+------+------+-------------+---------+-------+
| br1 | bridge | NO | | | | 25 | |
+------+----------+---------+------+------+-------------+---------+-------+
| eno1 | physical | NO | | | | 0 | |
+------+----------+---------+------+------+-------------+---------+-------+
| eno2 | physical | NO | | | | 0 | |
+------+----------+---------+------+------+-------------+---------+-------+
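Since the monitor output only reports that QEMU exited with status 1, the underlying error is usually in the per-instance QEMU log that the failing command line writes to with -D (the path below is taken from the log output above; the snap revision in your paths may differ):

```shell
# The qemu-system-x86_64 invocation in the failure message logs to the -D path;
# the last lines typically contain the real startup error.
tail -n 20 /var/snap/lxd/common/lxd/logs/mediastack/qemu.log

# lxc info can also surface recent log output for the instance
lxc info mediastack --show-log
```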