myuser@mylaptop:~$ sudo snap install lxd
lxd 5.7-c62733b from Canonical✓ installed
myuser@mylaptop:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, cephobject, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
myuser@mylaptop:~$ lxc launch ubuntu:20.04 kube-train
Creating kube-train
Starting kube-train
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart kube-train /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/kube-train/lxc.conf: exit status 1
Try lxc info --show-log local:kube-train for more info
myuser@mylaptop:~$ lxc info --show-log local:kube-train
Name: kube-train
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/10/31 11:58 PDT
Last Used: 2022/10/31 11:59 PDT
Log:
lxc kube-train 20221031185905.138 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc kube-train 20221031185905.138 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc kube-train 20221031185905.139 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc kube-train 20221031185905.139 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc kube-train 20221031185905.245 ERROR cgfsng - …/src/src/lxc/cgroups/cgfsng.c:cgfsng_mount:2131 - No such file or directory - Failed to create cgroup at_mnt 24()
lxc kube-train 20221031185905.245 ERROR conf - …/src/src/lxc/conf.c:lxc_mount_auto_mounts:851 - No such file or directory - Failed to mount "/sys/fs/cgroup"
lxc kube-train 20221031185905.245 ERROR conf - …/src/src/lxc/conf.c:lxc_setup:4396 - Failed to setup remaining automatic mounts
lxc kube-train 20221031185905.245 ERROR start - …/src/src/lxc/start.c:do_start:1272 - Failed to setup container "kube-train"
lxc kube-train 20221031185905.245 ERROR sync - …/src/src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
lxc kube-train 20221031185905.263 WARN network - …/src/src/lxc/network.c:lxc_delete_network_priv:3631 - Failed to rename interface with index 0 from "eth0" to its initial name "veth57dc951d"
lxc kube-train 20221031185905.264 ERROR lxccontainer - …/src/src/lxc/lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc kube-train 20221031185905.264 ERROR start - …/src/src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "kube-train"
lxc kube-train 20221031185905.264 WARN start - …/src/src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 19 for process 15121
lxc kube-train 20221031185910.366 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc kube-train 20221031185910.366 WARN conf - …/src/src/lxc/conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20221031185910.458 ERROR af_unix - …/src/src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20221031185910.458 ERROR commands - …/src/src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_state"
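The log above points at two distinct problems: the WARN lines about missing `newuidmap`/`newgidmap` binaries (used for unprivileged user-namespace ID mapping), and the ERROR lines showing the container failed to mount `/sys/fs/cgroup`. A minimal diagnostic sketch, assuming an Ubuntu host (on Ubuntu/Debian the mapping binaries ship in the `uidmap` package; package names may differ elsewhere):

```shell
#!/bin/sh
# Check whether the user-namespace mapping helpers exist; the WARN lines
# in the log above fire when they are absent.
command -v newuidmap >/dev/null || echo "newuidmap not found (on Ubuntu: apt install uidmap)"
command -v newgidmap >/dev/null || echo "newgidmap not found (on Ubuntu: apt install uidmap)"

# The ERROR lines show the cgroup filesystem could not be mounted inside
# the container; inspect how cgroups are mounted on the host (legacy v1
# controllers vs the unified cgroup2 hierarchy).
grep -E 'cgroup2?' /proc/self/mountinfo
```

If the host exposes only the unified cgroup v2 hierarchy, the LXC runtime bundled in the snap must support it; refreshing the LXD snap to a current channel is a common first step when the `Failed to mount "/sys/fs/cgroup"` error appears.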