Appreciate any feedback on this. It probably started after switching the ZFS mounts to legacy, though there may be another cause; in an otherwise unchanged environment and config, I am no longer able to start a container after creating it.
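For context, "switching to legacy" means the datasets were changed along these lines (the pool/dataset names here are illustrative, not my exact layout):

zfs set mountpoint=legacy tank/data
mount -t zfs tank/data /mnt/data    # legacy datasets are no longer auto-mounted; they need a manual mount or an fstab entry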
lxc launch images:ubuntu/focal focal1
Creating focal1
Starting focal1
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart 101_focal1 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/101_focal1/lxc.conf:
Try lxc info --show-log local:focal1 for more info
Name: focal1
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/03/04 12:12 CET
Last Used: 2022/03/04 12:12 CET
Log:
lxc 101_focal1 20220304111211.366 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc 101_focal1 20220304111211.367 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 101_focal1 20220304111211.368 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc 101_focal1 20220304111211.368 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 101_focal1 20220304111211.370 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1252 - No such file or directory - Failed to fchownat(40, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc 101_focal1 20220304111211.601 ERROR start - start.c:start:2164 - No such file or directory - Failed to exec "/sbin/init"
lxc 101_focal1 20220304111211.601 ERROR sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 7)
lxc 101_focal1 20220304111211.614 WARN network - network.c:lxc_delete_network_priv:3631 - Failed to rename interface with index 0 from "eth1" to its initial name "veth64805348"
lxc 101_focal1 20220304111211.614 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc 101_focal1 20220304111211.614 ERROR start - start.c:__lxc_start:2074 - Failed to spawn container "101_focal1"
lxc 101_focal1 20220304111211.614 WARN start - start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 41 for process 341634
lxc 101_focal1 20220304111216.727 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc 101_focal1 20220304111216.728 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20220304111216.783 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20220304111216.783 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors for command "get_state"
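Since the fatal line is the Failed to exec "/sbin/init" one, my suspicion is that the container's rootfs is empty or unmounted when LXD tries to start it. A quick check along these lines (the pool name "default" and the snap paths are assumptions about the layout):

sudo zfs list -o name,mountpoint,mounted | grep focal1
sudo ls /var/snap/lxd/common/lxd/storage-pools/default/containers/101_focal1/rootfs/    # should contain a full root filesystem, including sbin/init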
Can you show snap info lxd please, so we can see which revision you're on? It feels like there may be something wrong with the image (either how it's unpacked or how it's cached on your system).
name: lxd
summary: LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/lxc/lxd/issues
license: unset
description: |
LXD is a system container and virtual machine manager.
It offers a simple CLI and REST API to manage local or remote instances,
uses an image based workflow and support for a variety of advanced features.
Images are available for all Ubuntu releases and architectures as well
as for a wide number of other Linux distributions. Existing
integrations with many deployment and operation tools, makes it work
just like a public cloud, except everything is under your control.
LXD containers are lightweight, secure by default and a great
alternative to virtual machines when running Linux on Linux.
LXD virtual machines are modern and secure, using UEFI and secure-boot
by default and a great choice when a different kernel or operating
system is needed.
With clustering, up to 50 LXD servers can be easily joined and managed
together with the same tools and APIs and without needing any external
dependencies.
Supported configuration options for the snap (snap set lxd [<key>=<value>]):
- ceph.builtin: Use snap-specific Ceph configuration [default=false]
- ceph.external: Use the system's ceph tools (ignores ceph.builtin) [default=false]
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increase logging to debug level [default=false]
- daemon.group: Set group of users that can interact with LXD [default=lxd]
- daemon.preseed: Pass a YAML configuration to `lxd init` on initial start
- daemon.syslog: Send LXD log events to syslog [default=false]
- lvm.external: Use the system's LVM tools [default=false]
- lxcfs.pidfd: Start per-container process tracking [default=false]
- lxcfs.loadavg: Start tracking per-container load average [default=false]
- lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
- shiftfs.enable: Enable shiftfs support [default=auto]
For system-wide configuration of the CLI, place your configuration in
/var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
Oh wait, you're on tracking: latest/edge, which contradicts what you said originally (LXD 4.23), because latest/edge is unstable and further along than the 4.23 release.
Is there a reason you're on edge? You're likely being affected by https://github.com/lxc/lxd/pull/9975 (so a snap refresh lxd might help, but you'll probably also need to run lxc image delete on the affected cached images).
If you’re on latest/edge you can expect further breakages in the future as the snap is automatically built from the main LXD git branch every few hours.
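Getting back onto a stable channel would be roughly this (note that snapd may keep the newer edge revision installed until stable catches up to it):

sudo snap refresh lxd --channel=latest/stable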
Right.
I've set the snap channel to latest/stable and restarted the LXD service. Still the same.
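For reference, the restart was along these lines (the systemd unit name assumes the snap install):

sudo systemctl restart snap.lxd.daemon

snap list now shows: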
Name           Version   Rev    Tracking       Publisher   Notes
core18         20211215  2284   latest/stable  canonical✓  base
core20         20220215  1361   latest/stable  canonical✓  base
distrobuilder  2.0       1125   latest/stable  stgraber    classic
lxd            4.23      22525  latest/stable  canonical✓  -
snapd          2.54.3    14978  latest/stable  canonical✓  snapd
Yes, I think the damage has been done. You will need to use lxc image ls and lxc image delete to remove the images you've already downloaded, and delete the containers that used the broken unpacked images.
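Roughly this sequence, where the fingerprint and container names are placeholders:

lxc image list                    # note the fingerprints of the cached images
lxc image delete <fingerprint>    # remove each cached image
lxc list
lxc delete <container> --force    # remove the containers created from the broken images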
Running into the same issue again.
The steps mentioned above are not helping.
snap list:
lxd 4.23 22652 latest/stable canonical✓ -
All images deleted, all broken containers deleted; still can't start a container launched from a fresh image. Tried both the images: and ubuntu: remotes so far.
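Concretely, the launches were along these lines (the exact image aliases are approximate):

lxc launch images:ubuntu/focal h21
lxc launch ubuntu:20.04 h21    # after deleting the previous attempt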
lxc info --show-log local:h21
Name: h21
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/03/13 13:22 CET
Last Used: 2022/03/13 13:22 CET
lxc h21 20220313122230.636 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122230.636 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc h21 20220313122230.637 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122230.637 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc h21 20220313122230.638 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1252 - No such file or directory - Failed to fchownat(40, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc h21 20220313122230.751 ERROR start - start.c:start:2164 - No such file or directory - Failed to exec "/sbin/init"
lxc h21 20220313122230.752 ERROR sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 7)
lxc h21 20220313122230.752 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc h21 20220313122230.753 ERROR start - start.c:__lxc_start:2074 - Failed to spawn container "h21"
lxc h21 20220313122230.753 WARN start - start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 41 for process 20405
lxc h21 20220313122235.764 WARN conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122235.764 WARN conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20220313122235.816 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20220313122235.816 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors for command "get_state"