Error: Failed container creation: Create container from image: Failed to clone the filesystem:

Try to zfs destroy the image (first the @readonly snapshot, then the image dataset itself); assuming it doesn't complain about anything depending on it, that should do the trick.
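
For the image in question, that would look roughly like this (a sketch, assuming the pool is named lxd as in your storage layout and using the full fingerprint of the affected image):

$ sudo zfs destroy lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0@readonly
$ sudo zfs destroy lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0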

My best guess is that something went wrong with the filesystem during image decompression, so you ended up with an existing dataset that is broken and is also missing the snapshot.

Thanks for your answer. I had previously tried deleting the image with lxc image delete 368bb7174b67 and launching a container so the image would be fetched again, but that failed consistently with the same error.

I’ve now used zfs destroy to remove the snapshot and then the image, and that raised another issue:

$ sudo lxc launch ubuntu:bionic bionic
Creating bionic
Error: Failed container creation: Create container from image: UNIQUE constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.project_id, storage_volumes.name, storage_volumes.type
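
Presumably LXD's database still had a row for that image volume even though the ZFS dataset was gone, hence the constraint violation. For what it's worth, that leftover row could probably be inspected with something like the query below (a guess on my part; the table and column names are taken straight from the error message):

$ sudo lxd sql global "SELECT * FROM storage_volumes WHERE name LIKE '368bb7174b67%';"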

Therefore I recreated the ZFS dataset and then deleted the image:

$ sudo zfs create lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0
$ sudo lxc image list                                                                             
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
|       | 368bb7174b67 | no     | ubuntu 18.04 LTS amd64 (release) (20190722.1) | x86_64 | 177.56MB | Aug 2, 2019 at 6:18pm (UTC)   |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
|       | 60f53d7289be | no     | ubuntu 16.04 LTS amd64 (release) (20190729)   | x86_64 | 158.77MB | Jul 30, 2019 at 11:28pm (UTC) |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
$ sudo lxc image delete 368bb7174b67
$ sudo lxc launch ubuntu:bionic bionic  # it downloaded the image at this step
Creating bionic
Error: Failed container creation: Create container from image: Failed to clone the filesystem: 

So I’m back to the same situation I was in before:

$ sudo zfs list 
NAME                                                                                  USED  AVAIL  REFER  MOUNTPOINT
lxd                                                                                  6.59G  2.37G    24K  none
lxd/containers                                                                       5.09G  2.37G    24K  none
lxd/containers/db                                                                     290M  2.37G   575M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/db
lxd/containers/lamp                                                                  2.08G  2.37G  2.21G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/lamp
lxd/containers/mongo                                                                  291M  2.37G   578M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mongo
lxd/containers/msf                                                                   2.44G  2.37G  2.54G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/msf
lxd/custom                                                                             24K  2.37G    24K  none
lxd/deleted                                                                          1.20G  2.37G    24K  none
lxd/deleted/images                                                                   1.20G  2.37G    24K  none
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386   307M  2.37G   307M  none
lxd/deleted/images/8b430b6d827140412a85a1f76f0fc76ebc42c3e1ca8d628cb90b12e9cef175c9   305M  2.37G   305M  none
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b   307M  2.37G   307M  none
lxd/deleted/images/f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34   306M  2.37G   306M  none
lxd/images                                                                            305M  2.37G    24K  none
lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0            24K  2.37G    24K  none
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba           305M  2.37G   305M  none
lxd/snapshots                                                                          48K  2.37G    24K  none
lxd/snapshots/lamp                                                                     24K  2.37G    24K  none

$ sudo zfs list -t snapshot
NAME                                                                                           USED  AVAIL  REFER  MOUNTPOINT
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386@readonly     0B      -   307M  -
lxd/deleted/images/8b430b6d827140412a85a1f76f0fc76ebc42c3e1ca8d628cb90b12e9cef175c9@readonly     0B      -   305M  -
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b@readonly     0B      -   307M  -
lxd/deleted/images/f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34@readonly     0B      -   306M  -
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba@readonly             0B      -   305M  -

Out of curiosity I tried launching a container from trusty and everything went well:

$ sudo lxc launch ubuntu:trusty t     
Creating t
Starting t                                  
$ sudo lxc list t
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| t    | RUNNING | 10.0.3.237 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

Therefore I guess the problem is only with that image.

Try deleting the image again, then look for its fingerprint in zfs list -t all; it may show up under /deleted or just be left in place due to the missing snapshot. Clean that up and then try launching from it again.
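
Concretely, something along these lines (a sketch; adjust the dataset path to whatever the listing actually shows):

$ sudo lxc image delete 368bb7174b67
$ sudo zfs list -t all | grep 368bb7174b67
$ sudo zfs destroy lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0
$ sudo lxc launch ubuntu:bionic bionic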

OK, I managed to solve it. After deleting the image with lxc image delete 368bb7174b67, it was still present in zfs list under lxd/images (not lxd/deleted/images), using only 24K of space:

$ lxc image list
+--------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+--------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|        | 4bfe62583826 | no     | ubuntu 14.04 LTS amd64 (release) (20190514) | x86_64 | 122.40MB | Aug 9, 2019 at 9:25am (UTC)   |
+--------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|        | 60f53d7289be | no     | ubuntu 16.04 LTS amd64 (release) (20190729) | x86_64 | 158.77MB | Jul 30, 2019 at 11:28pm (UTC) |
+--------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+

$ zfs list -t all
NAME                                                                                           USED  AVAIL  REFER  MOUNTPOINT
lxd                                                                                           6.04G  2.92G    24K  none
lxd/containers                                                                                4.52G  2.92G    24K  none
lxd/containers/lamp                                                                           2.08G  2.92G  2.21G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/lamp
lxd/containers/msf                                                                            2.44G  2.92G  2.54G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/msf
lxd/custom                                                                                      24K  2.92G    24K  none
lxd/deleted                                                                                    615M  2.92G    24K  none
lxd/deleted/images                                                                             615M  2.92G    24K  none
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386            307M  2.92G   307M  none
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386@readonly     0B      -   307M  -
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b            307M  2.92G   307M  none
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b@readonly     0B      -   307M  -
lxd/images                                                                                     938M  2.92G    24K  none
lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0                     24K  2.92G    24K  none
lxd/images/4bfe6258382622a6a67c7930669a831885fae82e7288f170a33330f51f1d757d                    293M  2.92G   293M  none
lxd/images/4bfe6258382622a6a67c7930669a831885fae82e7288f170a33330f51f1d757d@readonly             0B      -   293M  -
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba                    305M  2.92G   305M  none
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba@readonly             0B      -   305M  -
lxd/snapshots                                                                                   48K  2.92G    24K  none
lxd/snapshots/lamp                                                                              24K  2.92G    24K  none

Therefore I ran zfs destroy lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0 and everything went smoothly afterwards :).

Thanks a lot for the help, @stgraber!