New LVM behaviour in LXD?

It’s not working.

I wiped the PV first, then set the external flag (lvm.external), rebooted the server, and then recreated the PV, the VG and the thin pool, exactly as described above. I’m still getting the same error.
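The sequence I ran was roughly this (the device path, VG name and thin-pool name here are placeholders, not the real ones):

```shell
# Sketch of the wipe-and-recreate steps; device, VG and pool names are
# placeholders, substitute your own.
dev=/dev/sdb1

# Guard: only act on a real block device with the LVM tools present.
if command -v pvcreate >/dev/null 2>&1 && [ -b "$dev" ]; then
  wipefs -a "$dev"                    # wipe the old PV signatures
  snap set lxd lvm.external=true      # use the system's LVM tools
  # (reboot here so the snap picks up the flag)
  pvcreate "$dev"                     # recreate the PV
  vgcreate vg0 "$dev"                 # ...the VG
  lvcreate --type thin-pool -l 100%FREE -n thinpool vg0   # ...and the thin pool
fi
```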

root@sb32 ~ # lxc launch ubuntu-minimal:20.04 c2 -s testpool 
Creating c2
Starting c2                               
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart c2 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/c2/lxc.conf: 
Try `lxc info --show-log local:c2` for more info

The log:

Name: c2
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/03/14 17:49 CET
Last Used: 2022/03/14 17:49 CET

Log:

lxc c2 20220314164925.317 ERROR    utils - utils.c:lxc_can_use_pidfd:1792 - Kernel does not support pidfds
lxc c2 20220314164925.318 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc c2 20220314164925.318 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc c2 20220314164925.319 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc c2 20220314164925.319 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc c2 20220314164925.607 ERROR    start - start.c:start:2164 - No such file or directory - Failed to exec "/sbin/init"
lxc c2 20220314164925.607 ERROR    sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 7)
lxc c2 20220314164925.613 WARN     network - network.c:lxc_delete_network_priv:3617 - Failed to rename interface with index 0 from "eth0" to its initial name "veth41a799b2"
lxc c2 20220314164925.613 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc c2 20220314164925.613 ERROR    start - start.c:__lxc_start:2074 - Failed to spawn container "c2"
lxc c2 20220314164925.613 WARN     start - start.c:lxc_abort:1045 - No such process - Failed to send SIGKILL to 3651
lxc c2 20220314164930.747 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc c2 20220314164930.747 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20220314164930.752 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20220314164930.752 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors for command "get_state"

Can you try launching from another image?

lxc launch images:alpine/edge a1 or something for a quick test?

The error you’re getting suggests the image is somehow empty, which could point to an issue with the image server or to a past download/unpack that went bad.

I just did… the regular Ubuntu image instead of the minimal one. But I can also try Alpine.

What snap channel are you on? Can you show the output of snap info lxd, please?

Sounds similar to “Container fails to start after creation from image”

Same error.

@tomp: Latest / stable - refresh-date: today at 02:58 CET

Please show full output of snap info lxd

root@sb32 ~ # snap info lxd
name:      lxd
summary:   LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact:   https://github.com/lxc/lxd/issues
license:   unset
description: |
  LXD is a system container and virtual machine manager.
  
  It offers a simple CLI and REST API to manage local or
  remote instances,
  uses an image based workflow and support for a variety of
  advanced features.
  
  Images are available for all Ubuntu releases and
  architectures as well
  as for a wide number of other Linux distributions.
  Existing
  integrations with many deployment and operation tools,
  makes it work
  just like a public cloud, except everything is under your
  control.
  
  LXD containers are lightweight, secure by default and a
  great
  alternative to virtual machines when running Linux on
  Linux.
  
  LXD virtual machines are modern and secure, using UEFI
  and secure-boot
  by default and a great choice when a different kernel or
  operating
  system is needed.
  
  With clustering, up to 50 LXD servers can be easily
  joined and managed
  together with the same tools and APIs and without needing
  any external
  dependencies.
  
  
  Supported configuration options for the snap (snap set
  lxd [<key>=<value>...]):
  
    - ceph.builtin: Use snap-specific Ceph configuration
    [default=false]
    - ceph.external: Use the system's ceph tools (ignores
    ceph.builtin) [default=false]
    - criu.enable: Enable experimental live-migration
    support [default=false]
    - daemon.debug: Increase logging to debug level
    [default=false]
    - daemon.group: Set group of users that have full
    control over LXD [default=lxd]
    - daemon.user.group: Set group of users that have
    restricted LXD access [default=lxd]
    - daemon.preseed: Pass a YAML configuration to `lxd
    init` on initial start
    - daemon.syslog: Send LXD log events to syslog
    [default=false]
    - lvm.external: Use the system's LVM tools
    [default=false]
    - lxcfs.pidfd: Start per-container process tracking
    [default=false]
    - lxcfs.loadavg: Start tracking per-container load
    average [default=false]
    - lxcfs.cfs: Consider CPU shares for CPU usage
    [default=false]
    - openvswitch.builtin: Run a snap-specific OVS daemon
    [default=false]
    - ovn.builtin: Use snap-specific OVN configuration
    [default=false]
    - shiftfs.enable: Enable shiftfs support [default=auto]
  
  For system-wide configuration of the CLI, place your
  configuration in
  /var/snap/lxd/common/global-conf/ (config.yml and
  servercerts)
commands:
  - lxd.benchmark
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd.lxc-to-lxd
  - lxd
  - lxd.migrate
services:
  lxd.activate:    oneshot, enabled, inactive
  lxd.daemon:      simple, enabled, active
  lxd.user-daemon: simple, enabled, inactive
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     latest/stable
refresh-date: today at 02:58 CET
channels:
  latest/stable:    4.23        2022-03-13 (22652) 82MB -
  latest/candidate: 4.24        2022-03-13 (22657) 82MB -
  latest/beta:      4.23        2022-03-12 (22652) 82MB -
  latest/edge:      git-d5cb885 2022-03-11 (22638) 82MB -
  4.24/stable:      –                                   
  4.24/candidate:   4.24        2022-03-13 (22657) 82MB -
  4.24/beta:        ↑                                   
  4.24/edge:        ↑                                   
  4.23/stable:      4.23        2022-03-13 (22652) 82MB -
  4.23/candidate:   4.23        2022-03-10 (22633) 82MB -
  4.23/beta:        ↑                                   
  4.23/edge:        ↑                                   
  4.22/stable:      4.22        2022-02-12 (22407) 79MB -
  4.22/candidate:   4.22        2022-02-11 (22407) 79MB -
  4.22/beta:        ↑                                   
  4.22/edge:        ↑                                   
  4.0/stable:       4.0.9       2022-02-25 (22526) 71MB -
  4.0/candidate:    4.0.9       2022-02-24 (22541) 71MB -
  4.0/beta:         ↑                                   
  4.0/edge:         git-d0940f2 2022-02-24 (22535) 71MB -
  3.0/stable:       3.0.4       2019-10-10 (11348) 55MB -
  3.0/candidate:    3.0.4       2019-10-10 (11348) 55MB -
  3.0/beta:         ↑                                   
  3.0/edge:         git-81b81b9 2019-10-10 (11362) 55MB -
  2.0/stable:       2.0.12      2020-08-18 (16879) 38MB -
  2.0/candidate:    2.0.12      2021-03-22 (19859) 39MB -
  2.0/beta:         ↑                                   
  2.0/edge:         git-82c7d62 2021-03-22 (19857) 39MB -
installed:          4.23                   (22652) 82MB -

Can you also show output of sudo journalctl -b | grep DENIED

root@sb32 ~ # journalctl -b | grep DENIED
Mar 14 17:49:23 sb32 audit[3500]: AVC apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-testpool-images-2890a9342fd0f0fe48e636dfa9bc385daad70a0deb9fe3b67b20cd113aa497fc-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/2890a9342fd0f0fe48e636dfa9bc385daad70a0deb9fe3b67b20cd113aa497fc.rootfs" pid=3500 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mar 14 17:49:23 sb32 kernel: audit: type=1400 audit(1647276563.289:52): apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-testpool-images-2890a9342fd0f0fe48e636dfa9bc385daad70a0deb9fe3b67b20cd113aa497fc-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/2890a9342fd0f0fe48e636dfa9bc385daad70a0deb9fe3b67b20cd113aa497fc.rootfs" pid=3500 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mar 14 17:52:54 sb32 audit[3932]: AVC apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-testpool-images-06460ff79260729ba686608f11eb3d6eff26a72449dfd71e9d22a42f0038b897-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/06460ff79260729ba686608f11eb3d6eff26a72449dfd71e9d22a42f0038b897.rootfs" pid=3932 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mar 14 17:52:54 sb32 kernel: audit: type=1400 audit(1647276774.972:59): apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-testpool-images-06460ff79260729ba686608f11eb3d6eff26a72449dfd71e9d22a42f0038b897-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/06460ff79260729ba686608f11eb3d6eff26a72449dfd71e9d22a42f0038b897.rootfs" pid=3932 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mar 14 17:53:57 sb32 audit[4290]: AVC apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-lvmthin1-images-366eb5ca6175b9d25f2190997e5826d3767ceac30d8c048fa73552d742d1af0f-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/366eb5ca6175b9d25f2190997e5826d3767ceac30d8c048fa73552d742d1af0f.rootfs" pid=4290 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mar 14 17:53:57 sb32 kernel: audit: type=1400 audit(1647276837.200:66): apparmor="DENIED" operation="open" profile="lxd_archive-var-snap-lxd-common-lxd-storage-pools-lvmthin1-images-366eb5ca6175b9d25f2190997e5826d3767ceac30d8c048fa73552d742d1af0f-rootfs" name="/var/snap/lxd/common/lxd/storage-pools/imagespool/custom/default_lxdimages/366eb5ca6175b9d25f2190997e5826d3767ceac30d8c048fa73552d742d1af0f.rootfs" pid=4290 comm="unsquashfs" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

Ah, damn it… that launch problem could be another problem altogether… sec

OK can you try this:

  1. lxc image delete <fingerprint> for the image you just downloaded.
  2. Delete any instances created from the image that won’t start.
  3. Refresh your snap onto the latest/candidate channel temporarily (this will get you LXD 4.24 which is going to be in latest/stable very soon, you can move back to latest/stable once LXD 4.24 lands there).
  4. Then try again. It will be interesting to see if you still get the AppArmor denials then, as we may need to add additional allow rules to the AppArmor profile for image unpacking. Although we are not seeing that issue locally in our tests.
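The four steps above, sketched as shell (the fingerprint and instance name are placeholders; substitute your own from lxc image list and lxc list):

```shell
# Sketch of the recovery steps; fingerprint and instance name are placeholders.
fingerprint=2890a9342fd0
instance=c2

# Only run against a live LXD daemon.
if command -v lxc >/dev/null 2>&1; then
  lxc image delete "$fingerprint"               # 1. drop the bad cached image
  lxc delete --force "$instance"                # 2. remove instances made from it
  snap refresh lxd --channel=latest/candidate   # 3. move to LXD 4.24 temporarily
  lxc launch ubuntu-minimal:20.04 "$instance" -s testpool   # 4. try again
fi
```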

Because I’m running a bit low on space on my root volume and didn’t want to touch it (it’s an LV), I created a new partition, mounted it at /media/lxdvolumes, and added that as a dir pool for images.

lxc config set storage.images_volume imagespool/lxdimages

Not sure why it’s not working, though; the permissions are set to lxd:lxd.
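For context, the pool and volume were set up roughly like this (a sketch; the two create commands are my reconstruction, only the config set line is verbatim from above):

```shell
pool=imagespool

# Only run against a live LXD daemon.
if command -v lxc >/dev/null 2>&1; then
  # Dir-backed pool on the new partition mounted at /media/lxdvolumes.
  lxc storage create "$pool" dir source=/media/lxdvolumes
  # Custom volume to hold the image tarballs.
  lxc storage volume create "$pool" lxdimages
  # Point LXD's image storage at that volume.
  lxc config set storage.images_volume "$pool"/lxdimages
fi
```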

Could be affected by

Should I just unset the storage.images_volume key for the time being? I only set it up to try to restore from a larger backup that didn’t fit on my root volume.

I would suggest refreshing onto latest/candidate, as that is what you’re going to get in latest/stable in a day or so.

Ok, after the refresh, I can launch Alpine. Will try Ubuntu now…


Ok, it works with Ubuntu but only with the image I previously deleted. The minimal image which I didn’t delete still doesn’t work.

Is it safe to delete an image that was used to build a container?

You should delete it using lxc image delete and then delete any containers you created from that image so that any cached image volumes on the target storage pool are also removed.

Then creating a new instance will download and unpack the image into a fresh image volume on the target storage pool.

Rather frustratingly, unsquashfs doesn’t fail when AppArmor denies it access; it just stops and exits with a normal exit code, which means LXD cannot detect that it failed.
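A sketch of the kind of check that is needed instead of trusting the exit code (the paths here are illustrative, not LXD’s actual ones):

```shell
# unsquashfs can exit 0 even when AppArmor denied it read access, so verify
# the unpacked tree instead of trusting the exit code. Paths are illustrative.
src=/tmp/image.squashfs
dst=/tmp/rootfs

if command -v unsquashfs >/dev/null 2>&1 && [ -e "$src" ]; then
  unsquashfs -f -d "$dst" "$src"    # -d: destination dir, -f: overwrite
fi

# An empty rootfs (no /sbin/init) matches the "Failed to exec /sbin/init"
# error in the container log above.
if [ ! -e "$dst/sbin/init" ]; then
  echo "rootfs looks incomplete"
fi
```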

Ok. I just really can’t lose all the stuff I worked on today…

I’m currently testing what we agreed on. I’d really appreciate it if we could find a way for me to at least back up the other containers before deleting them.
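In case it helps, a backup pass before deleting anything could look like this (the instance names and target directory are assumptions):

```shell
# One way to back up instances before deleting anything: export each one to a
# tarball. Instance names and the target directory are assumptions.
backup_dir=/media/lxdvolumes

# Only run against a live LXD daemon.
if command -v lxc >/dev/null 2>&1; then
  for c in c1 c2; do
    # Restore later on any LXD server with `lxc import`.
    lxc export "$c" "$backup_dir/${c}-backup.tar.gz"
  done
fi
```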