Container fails to start after creation from image

Right.
I have set the snap channel to latest/stable and restarted the lxd service. Still the same.
Name           Version   Rev    Tracking       Publisher   Notes
core18         20211215  2284   latest/stable  canonical✓  base
core20         20220215  1361   latest/stable  canonical✓  base
distrobuilder  2.0       1125   latest/stable  stgraber    classic
lxd            4.23      22525  latest/stable  canonical✓  -
snapd          2.54.3    14978  latest/stable  canonical✓  snapd

Yes, I think the damage has been done. You will need to use lxc image ls and lxc image delete to remove the images you’ve downloaded already, and delete the containers that used the broken unpacked images.
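
For example (a rough sketch; the fingerprint and instance name are placeholders, take the real values from the list output):

lxc image ls                    # note the fingerprint of each cached image that was unpacked while broken
lxc image delete <fingerprint>  # remove the broken cached image
lxc list                        # find the instances that were created from it
lxc delete <instance>           # delete those instances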

:slight_smile:

I had deleted all images as you mentioned.
Now I tried some other fresh images and it is working now. Everything is fine.

Thanks very much for your help Thomas.

Excellent :slight_smile:

Running into the same issue again.
The steps mentioned above are not helping.

snap list:
lxd 4.23 22652 latest/stable canonical✓ -
All images deleted, all broken containers deleted, still can't start when launching from a new image. Tried the images: and ubuntu: remotes so far.

lxc info --show-log local:h21
Name: h21
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2022/03/13 13:22 CET
Last Used: 2022/03/13 13:22 CET

lxc h21 20220313122230.636 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122230.636 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc h21 20220313122230.637 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122230.637 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc h21 20220313122230.638 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1252 - No such file or directory - Failed to fchownat(40, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc h21 20220313122230.751 ERROR    start - start.c:start:2164 - No such file or directory - Failed to exec "/sbin/init"
lxc h21 20220313122230.752 ERROR    sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 7)
lxc h21 20220313122230.752 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc h21 20220313122230.753 ERROR    start - start.c:__lxc_start:2074 - Failed to spawn container "h21"
lxc h21 20220313122230.753 WARN     start - start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 41 for process 20405
lxc h21 20220313122235.764 WARN     conf - conf.c:lxc_map_ids:3592 - newuidmap binary is missing
lxc h21 20220313122235.764 WARN     conf - conf.c:lxc_map_ids:3598 - newgidmap binary is missing
lxc 20220313122235.816 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20220313122235.816 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors for command "get_state"

lxd.log:

t=2022-03-13T13:09:59+0100 lvl=info msg="Creating container" ephemeral=false instance=alp1 instanceType=container project=default
t=2022-03-13T13:09:59+0100 lvl=info msg="Created container" ephemeral=false instance=alp1 instanceType=container project=default
t=2022-03-13T13:09:59+0100 lvl=info msg="Image unpack started" imageFile=/var/snap/lxd/common/lxd/images/69ab23b357ef5f020de8088116023f14ec56552ff534c5aae77bc1a4858f7245 vol=69ab23b357ef5f020de8088116023f14ec56552ff534c5aae77bc1a4858f7245
t=2022-03-13T13:10:00+0100 lvl=info msg="Image unpack stopped" imageFile=/var/snap/lxd/common/lxd/images/69ab23b357ef5f020de8088116023f14ec56552ff534c5aae77bc1a4858f7245 vol=69ab23b357ef5f020de8088116023f14ec56552ff534c5aae77bc1a4858f7245

Same problem here.

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

snap list

Name    Version   Rev    Tracking       Publisher   Notes
core18  20211215  2284   latest/stable  canonical✓  base
core20  20220304  1376   latest/stable  canonical✓  base
lxd     4.23      22652  latest/stable  canonical✓  -
snapd   2.54.3    14978  latest/stable  canonical✓  snapd

Please can you refresh onto LXD 4.24: sudo snap refresh lxd --channel=latest/candidate.

Then please show the output of lxc storage volume ls <pool> and lxc image ls.

If this fixes it then wait until LXD 4.24 is moved to stable and then you can refresh back to latest/stable channel.
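
For reference, the channel round trip would look roughly like this (a sketch; it assumes 4.24 does get promoted to stable as expected):

sudo snap refresh lxd --channel=latest/candidate   # pick up the 4.24 candidate now
snap list lxd                                      # confirm the tracked channel and revision
sudo snap refresh lxd --channel=latest/stable      # run this later, once 4.24 has reached stable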

Please can you show output of lxc config show?

sudo snap refresh lxd —channel=latest/candidate

error: cannot refresh "lxd", "---channel=latest/candidate": snap "---channel=latest/candidate" is
       not installed

lxc config show

config:
  storage.backups_volume: s860/backup
  storage.images_volume: s860/images

There was one “-” too many.
Try:

sudo snap refresh lxd --channel=latest/candidate

lxc config show

config:
  backups.compression_algorithm: zstd
  core.https_address: :8443
  core.trust_password: true
  images.auto_update_interval: "0"
  images.compression_algorithm: zstd
  images.default_architecture: x86_64
  images.remote_cache_expiry: "0"
  storage.images_volume: pl/images

lxc storage ls

+------+--------+--------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
+------+--------+--------+-------------+---------+---------+
| pl   | zfs    | zp2/pl |             | 10      | CREATED |
+------+--------+--------+-------------+---------+---------+

lxc storage volume ls pl

+-----------+--------+-------------+--------------+---------+
|   TYPE    |  NAME  | DESCRIPTION | CONTENT-TYPE | USED BY |
+-----------+--------+-------------+--------------+---------+
| container | FC-HA  |             | filesystem   | 1       |
+-----------+--------+-------------+--------------+---------+
| container | FC-Sql |             | filesystem   | 1       |
+-----------+--------+-------------+--------------+---------+
| custom    | images |             | filesystem   | 1       |
+-----------+--------+-------------+--------------+---------+

sudo snap refresh lxd --channel=latest/candidate
sudo snap list

Name           Version   Rev    Tracking          Publisher   Notes
core18         20211215  2284   latest/stable     canonical✓  base
core20         20220304  1376   latest/stable     canonical✓  base
distrobuilder  2.0       1125   latest/stable     stgraber    classic
lxd            4.24      22662  latest/candidate  canonical✓  -
snapd          2.54.4    15177  latest/stable     canonical✓  snapd

lxc launch images:alpine/edge alp1
Creating alp1
Starting alp1
All worked fine.

I just don't get where the images are stored.
With the config key storage.images_volume: pl/images,
images are supposed to go into the custom volume:
zp2/pl/custom 2.56M 95.4G 24K none
zp2/pl/custom/default_images 2.54M 95.4G 2.54M legacy

Yet the image for the newly created container above went to:
zp2/pl/images/366eb5ca6175b9d25f2190997e5826d3767ceac30d8c048fa73552d742d1af0f 8.29M 95.4G 8.29M legacy

Ah yes, sorry about that, I have corrected it.

OK so the 4.24 candidate snap channel should fix your issue (although do be sure to refresh back to the latest/stable channel once LXD 4.24 is pushed to stable so you don’t get the next candidate unexpectedly in the future).

You both use a dedicated custom image volume (storage.images_volume), so you are both being affected by:

Which is fixed in 4.24.

When the compressed image files are downloaded, they are normally stored in a location on your / partition inside the snap. However, if you set storage.images_volume, then these compressed files will be stored in the custom volume you specify. This can prevent the / partition from being filled up with downloaded images.

Then when you create an instance from one of those images, if the storage pool being used supports efficient snapshots, then first the compressed image file is unpacked into an “image volume” on the storage pool in question.

Then the instance being launched is created by taking a snapshot of that image volume, and subsequent instances created from that image are also created as snapshots from that image volume.

In this way multiple instances do not need to duplicate the base image file contents repeatedly.
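
As a rough illustration of that flow (a sketch only; the pool name pl and the alpine/edge alias are just the examples from this thread, and the fingerprints will differ on your system):

lxc config show                         # storage.images_volume: pl/images -> compressed downloads land there
lxc launch images:alpine/edge c1 -s pl  # first launch unpacks the compressed file into an image volume on the pool
lxc storage volume ls pl                # now lists an "image" volume alongside the container volume for c1
lxc launch images:alpine/edge c2 -s pl  # further instances are created as snapshots of that image volume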

snap list

Name    Version   Rev    Tracking          Publisher   Notes
core18  20211215  2284   latest/stable     canonical✓  base
core20  20220304  1376   latest/stable     canonical✓  base
lxd     4.24      22662  latest/candidate  canonical✓  -
snapd   2.54.4    15177  latest/stable     canonical✓  snapd

lxc launch ubuntu:20.04 test -p default -p macvlan -s s860

Creating test
Starting test

lxc image ls

+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|       | 06460ff79260 | no     | ubuntu 20.04 LTS amd64 (release) (20220308) | x86_64       | CONTAINER | 387.48MB | Mar 15, 2022 at 8:50am (UTC) |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+

lxc storage ls

+------+--------+----------------+-------------+---------+---------+
| NAME | DRIVER |     SOURCE     | DESCRIPTION | USED BY |  STATE  |
+------+--------+----------------+-------------+---------+---------+
| s860 | dir    | /mnt/s860/lxd4 |             | 16      | CREATED |
+------+--------+----------------+-------------+---------+---------+

lxc storage volume ls s860 (not a complete list)

+----------------------+--------------+-------------+--------------+---------+
|         TYPE         |     NAME     | DESCRIPTION | CONTENT-TYPE | USED BY |
+----------------------+--------------+-------------+--------------+---------+
| ....                 | ...          |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| container            | pw1          |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| container (snapshot) | pw1/20211028 |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| container            | t2004        |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| container            | test         |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| custom               | backup       |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+
| custom               | images       |             | filesystem   | 1       |
+----------------------+--------------+-------------+--------------+---------+

lxc config show

config:
  storage.backups_volume: s860/backup
  storage.images_volume: s860/images

Is it working now?

If you run lxc image delete 06460ff79260 and then delete any instances you cannot start, this should clean up the problematic image volumes and allow you to re-download and unpack a fresh image now that you’re running LXD 4.24.
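
That is, roughly (a sketch reusing the names from your output above; adjust to whatever lxc image ls and lxc list show):

lxc image delete 06460ff79260                               # remove the cached image and its broken image volume
lxc delete test                                             # remove the instance that failed to start
lxc launch ubuntu:20.04 test -p default -p macvlan -s s860  # re-download and unpack a fresh copy on 4.24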

Yes; the error is gone

Excellent, thanks.

You are tracking latest/edge, which is counter to what you said originally (LXD 4.23), because latest/edge is unstable and further on than the 4.23 release.

Is there a reason you’re on edge? You’re likely being affected by https://github.com/lxc/lxd/pull/9975, so a snap refresh lxd might help, but you probably also need to do lxc image delete on the affected cached images.
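
If there is no particular reason to stay on edge, something along these lines might help (a sketch; the fingerprint is a placeholder for whatever lxc image ls reports on your system):

sudo snap refresh lxd --channel=latest/stable   # move off the edge channel onto stable
lxc image ls                                    # find the cached images that were unpacked while broken
lxc image delete <fingerprint>                  # delete them so they get re-downloaded cleanly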
