Error: Failed container creation: Create container from image: Failed to clone the filesystem:

I have a machine with some existing containers that work fine, but I can't create any new ones.

alan@hal:~$ lxc launch ubuntu:16.04 test
Creating test
Error: Failed container creation: Create container from image: Failed to clone the filesystem: cannot open 'default/images/5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9@readonly': dataset does not exist
alan@hal:~$ lxc version
Client version: 3.13
Server version: 3.13

Host is Ubuntu 19.04 and was fine a day ago.

What's the output of

lxc storage info default (assuming you did not create additional storage pools, of course)
lxc image list

-> If you already have a downloaded image, can you try using it instead of downloading a new one?

Anything strange in syslog when you get this problem?

alan@hal:~$ lxc storage info default 
info:
  description: ""
  driver: zfs
  name: default
  space used: 29.89GB
  total space: 59.05GB
used by:
  containers:
  - alan
  - electron1604
  - signal
  - signal-desktop-20190322-132001
  - signal-desktop-20190325-201828
  - signal-desktop-20190325-203914
  - signal-desktop-20190325-204528
  - signal-desktop-20190325-210050
  - signal-desktop-20190402-132001
  - signal-desktop-20190411-132001
  - signal-desktop-20190412-132001
  - signal-desktop-20190504-132001
  - signal-desktop-20190514-132001
  - signal-desktop-20190515-132001
  - snapcrafters
  - ubuntu-bionic
  - ubuntu-xenial
  images:
  - 5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9
  - f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34
  profiles:
  - default
alan@hal:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
|       | 5a01c2c8552f | no     | ubuntu 16.04 LTS amd64 (release) (20190514) | x86_64 | 159.52MB | May 15, 2019 at 5:29pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
alan@hal:~$ 

Nothing in syslog when I launch.

Launching a new machine with a different image works…

alan@hal:~$ lxc launch ubuntu:18.04 test
Creating test
Starting test   

Not sure what to do now though

That seems to suggest there is something wrong with the cached Ubuntu 16.04 image. If I try the same launch here it works, so the problem is probably not the server image itself; more likely the trouble is with your cached data.
Can you do an export of the cached image with
lxc export 5a01c2c8552f
You will (hopefully) get 2 files. If you don't, the issue is clear: something went wrong in your download.
Then you can try
cat <meta…> <…squashfs file…> | sha256sum
If the result doesn't match the fingerprint (as shown in the squashfs file name), that would be interesting (it would show a small missing feature in LXD, namely not verifying download integrity thoroughly enough).
If something is wrong at this stage, either the export not working or the image integrity not being correct, the fix should be obvious I guess (lxc image delete).

alan@hal:~$  lxc export 5a01c2c8552f
Error: Create container backup: not found

I tried deleting the image…

alan@hal:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 5a01c2c8552f | no     | ubuntu 16.04 LTS amd64 (release) (20190514) | x86_64 | 159.52MB | May 15, 2019 at 5:29pm (UTC)  |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | c4681ac755d9 | no     | ubuntu 18.04 LTS amd64 (release) (20190514) | x86_64 | 177.89MB | May 18, 2019 at 10:42pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
alan@hal:~$ lxc image delete 5a01c2c8552f

Then launch a new one

alan@hal:~$ lxc launch ubuntu:16.04 test1604
Creating test1604
Error: Failed container creation: Create container from image: Failed to clone the filesystem: cannot open 'default/images/5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9@readonly': dataset does not exist


It downloaded and failed again…

Oh well, I forgot the (important) word 'image'.
The exact syntax is
lxc export image (hash)

I appreciate the help!

alan@hal:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 5a01c2c8552f | no     | ubuntu 16.04 LTS amd64 (release) (20190514) | x86_64 | 159.52MB | May 19, 2019 at 3:25pm (UTC)  |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | c4681ac755d9 | no     | ubuntu 18.04 LTS amd64 (release) (20190514) | x86_64 | 177.89MB | May 18, 2019 at 10:42pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
alan@hal:~$ lxc export image 5a01c2c8552f
Error: Create container backup: not found

Doh, I still didn't get it right.
That's lxc image export (hash)

alan@hal:~$ lxc image export 5a01c2c8552f
Image exported successfully!
-rw-rw-r--   1 alan alan  844 May 19 17:25  meta-5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9.tar.xz
-rw-rw-r--   1 alan alan 160M May 19 17:26  5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9.squashfs
alan@hal:~$ cat meta-5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9.tar.xz 5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9.squashfs | sha256sum
5a01c2c8552f1fafe1d78e863c67e513da76d285afa6f22c4bfa93a802596af9  -

Looks okay?

Yes, it seems the image is perfect. So the problem must come from something rotten in your storage.
Looking back at the output of the first 2 commands I asked you to try, there is something not quite right:
lxc storage info default shows 2 entries in the images part, while lxc image list shows only 1 image. Unless you did not run the 2 commands in sequence and did something else in between, this is not coherent. Is there still the same discrepancy between lxc storage info default and lxc image list?
If there is, this is not normal. A "phantom" image in your storage may indicate there is something wrong with it that should be repaired.
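
If you want to peek at the pool directly, comparing the image datasets ZFS knows about against lxc image list should expose any phantom entry. A rough sketch, assuming the zpool carries the same name as the storage pool (default) and the zfs tools are installed on the host:

sudo zfs list -r default/images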

The second image (Ubuntu 18.04) was a test after the first reply, I think. I was testing whether I could successfully launch a container that isn't based on this dodgy image.

I think now it’s probably best if I just uninstall / reinstall lxd?

Well yes, but it was in your first reply. And the hash does not seem to match (the 18.04 hash is c4681ac755d9, while the fingerprint in your first reply is f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34).
That's why I was asking whether there is still a discrepancy, by running the 2 commands lxc storage info default and lxc image list again.

If you have enough available disk space, you can create a new storage pool and use it; that would be a very easy way to test whether your current storage is bad:

lxc storage create newstorage zfs size=10GB
lxc launch -s newstorage ubuntu:16.04 mytestcontainer
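
If the test works, the scratch pool can be cleaned up again afterwards; a minimal sketch (the test container has to be deleted before the pool):

lxc delete -f mytestcontainer
lxc storage delete newstorage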

As for your reinstalling question: never do an uninstall/reinstall cycle without having good backups (lxc export container-name).
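
A minimal backup round trip, assuming a recent LXD 3.x with backup support (the names are placeholders):

lxc export container-name container-name.tar.gz   # write a tarball backup
lxc import container-name.tar.gz                  # restore it after reinstalling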

Yup, that worked.

alan@hal:~$ lxc storage create newstorage zfs size=10GB
Storage pool newstorage created
alan@hal:~$  lxc launch -s newstorage ubuntu:16.04 mytestcontainer
Creating mytestcontainer
Starting mytestcontainer
alan@hal:~$ lxc list mytestcontainer
+-----------------+---------+-----------------------+------+------------+-----------+
|      NAME       |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------+---------+-----------------------+------+------------+-----------+
| mytestcontainer | RUNNING | 10.162.121.223 (eth0) |      | PERSISTENT | 0         |
+-----------------+---------+-----------------------+------+------------+-----------+

This proves at 99.9% that your default storage is misbehaving, I think.

Backups are in order if you don’t have them already.

Then you can try to repair your default storage with ZFS commands.
I'm a bit hazy on this as I don't use ZFS. I'd first stop all containers, and maybe even stop LXD itself (sudo snap stop lxd, then check whether lxd is still running with ps aux | grep lxd).
Then I think it's zpool status to find out if there is something bad.
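
Something along these lines, assuming the snap packaging and that the zpool carries the same name as the storage pool (untested on my side, so adapt to your setup):

lxc stop <each running container>
sudo snap stop lxd
ps aux | grep lxd          # make sure the daemon is really gone
sudo zpool status default  # look for READ/WRITE/CKSUM errors
sudo zpool scrub default   # optionally re-verify all data on the pool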

I appreciate the help, thank you. I wasn't sure whether this was a "broken system" issue, a bug, or pilot error.

I am happy with backups and data so will nuke and reset the system.

Thanks again.

That's your call for sure, but it's possible you could save a lot of time if you can just repair the default pool.

Actually I don't care about the containers that are there; the single one I care about is easily re-created. I don't tend to have many long-lived ones, but use it for spinning up machines and throwing them away pretty quickly. So for my use case it'll be faster to nuke-and-pave than to learn the esoteric zfs commands to fix it :wink:

Appreciate the comments though. Probably would apply if I kept containers around.

I am facing the same problem on my machine. The difference is that I cannot create any ubuntu:bionic containers, but it works fine using the ubuntu:xenial that is already cached.

I've checked that the checksum is correct, and I have also tried removing the bionic image and downloading it again.
This is the machine that I use daily, mostly for development purposes.


$ lxc version
Client version: 3.15
Server version: 3.15

$ sudo lxc list 
+-------+---------+-------------------+------+------------+-----------+
| NAME  |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+-------------------+------+------------+-----------+
| couch | RUNNING | 10.0.3.248 (eth0) |      | PERSISTENT | 0         |
+-------+---------+-------------------+------+------------+-----------+
| db    | STOPPED |                   |      | PERSISTENT | 0         |
+-------+---------+-------------------+------+------------+-----------+
| lamp  | RUNNING | 10.0.3.116 (eth0) |      |            |           |
+-------+---------+-------------------+------+------------+-----------+
| mongo | STOPPED |                   |      | PERSISTENT | 0         |
+-------+---------+-------------------+------+------------+-----------+
| msf   | RUNNING | 10.0.3.165 (eth0) |      | PERSISTENT | 0         |
+-------+---------+-------------------+------+------------+-----------+

$ sudo lxc image list
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
|       | 368bb7174b67 | no     | ubuntu 18.04 LTS amd64 (release) (20190722.1) | x86_64 | 177.56MB | Aug 2, 2019 at 6:18pm (UTC)   |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
|       | 60f53d7289be | no     | ubuntu 16.04 LTS amd64 (release) (20190729)   | x86_64 | 158.77MB | Jul 30, 2019 at 11:28pm (UTC) |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+
|       | 8b430b6d8271 | no     | ubuntu 16.04 LTS amd64 (release) (20190628)   | x86_64 | 158.72MB | Jul 21, 2019 at 1:04pm (UTC)  |
+-------+--------------+--------+-----------------------------------------------+--------+----------+-------------------------------+

$ sudo lxc launch ubuntu:xenial
Creating the container
Container name is: artistic-ray
Starting artistic-ray

$ sudo lxc launch ubuntu:bionic
Creating the container
Error: Failed container creation: Create container from image: Failed to clone the filesystem: 

$ sudo zfs list
NAME                                                                                  USED  AVAIL  REFER  MOUNTPOINT
lxd                                                                                  6.90G  2.07G    24K  none
lxd/containers                                                                       5.21G  2.07G    24K  none
lxd/containers/artistic-ray                                                          6.42M  2.07G   305M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/artistic-ray
lxd/containers/couch                                                                  119M  2.07G   411M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/couch
lxd/containers/db                                                                     290M  2.07G   575M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/db
lxd/containers/lamp                                                                  2.08G  2.07G  2.20G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/lamp
lxd/containers/mongo                                                                  291M  2.07G   578M  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mongo
lxd/containers/msf                                                                   2.44G  2.07G  2.55G  /var/snap/lxd/common/lxd/storage-pools/lxd/containers/msf
lxd/custom                                                                             24K  2.07G    24K  none
lxd/deleted                                                                           921M  2.07G    24K  none
lxd/deleted/images                                                                    921M  2.07G    24K  none
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386   307M  2.07G   307M  none
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b   307M  2.07G   307M  none
lxd/deleted/images/f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34   306M  2.07G   306M  none
lxd/images                                                                            798M  2.07G    24K  none
lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0           189M  2.07G   189M  none
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba           305M  2.07G   305M  none
lxd/images/8b430b6d827140412a85a1f76f0fc76ebc42c3e1ca8d628cb90b12e9cef175c9           305M  2.07G   305M  none
lxd/snapshots                                                                          48K  2.07G    24K  none
lxd/snapshots/lamp                                                                     24K  2.07G    24K  none

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jul 14 00:24:34 2019
config:

	NAME                                      STATE     READ WRITE CKSUM
	lxd                                       ONLINE       0     0     0
	  /var/snap/lxd/common/lxd/disks/lxd.img  ONLINE       0     0     0

errors: No known data errors

After fiddling with ZFS, I think I may have discovered one of the issues that leads to this behavior.
When running zfs list -t snapshot I noticed that the new image is missing the @readonly snapshot that the other images have, so I created one. Unfortunately, the newly created container does not start.
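
As far as I understand, that would explain the original error: on the ZFS backend, LXD creates each new container by cloning the image's @readonly snapshot, roughly like this (dataset names taken from my zfs list output above):

sudo zfs clone lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0@readonly lxd/containers/<name>

So with the snapshot missing, the clone, and therefore container creation, fails.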

$ sudo zfs list -t snapshot
NAME                                                                                           USED  AVAIL  REFER  MOUNTPOINT
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386@readonly     0B      -   307M  -
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b@readonly     0B      -   307M  -
lxd/deleted/images/f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34@readonly     0B      -   306M  -
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba@readonly             0B      -   305M  -
lxd/images/8b430b6d827140412a85a1f76f0fc76ebc42c3e1ca8d628cb90b12e9cef175c9@readonly             0B      -   305M  -

$ sudo zfs snapshot lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0@readonly

$ sudo zfs list -t snapshot                                                                                  
NAME                                                                                           USED  AVAIL  REFER  MOUNTPOINT
lxd/deleted/images/2a7896bae0f2322559e5b9452b0adf58a5a76f7b772fa6906c825407ea6c3386@readonly     0B      -   307M  -
lxd/deleted/images/9023b2feede581884cf45be29f60207ccc5553d762ea8088e849858a58762f6b@readonly     0B      -   307M  -
lxd/deleted/images/f32f9de84a9e70b23f128f909f72ba484bc9ea70c69316ea5e32fb3c11282a34@readonly     0B      -   306M  -
lxd/images/368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0@readonly             0B      -   189M  -
lxd/images/60f53d7289be4147834523e3c7ffb2d1f5b8a7cbf86afe80e22585a5380534ba@readonly             0B      -   305M  -
lxd/images/8b430b6d827140412a85a1f76f0fc76ebc42c3e1ca8d628cb90b12e9cef175c9@readonly             0B      -   305M  -

$ sudo lxc launch ubuntu:bionic dummy
Creating dummy
Starting dummy

$ sudo lxc list
+--------------+---------+-------------------+------+------------+-----------+
|     NAME     |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+--------------+---------+-------------------+------+------------+-----------+
| artistic-ray | RUNNING | 10.0.3.206 (eth0) |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+
| couch        | RUNNING | 10.0.3.248 (eth0) |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+
| db           | STOPPED |                   |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+
| dummy        | STOPPED |                   |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+
| lamp         | RUNNING | 10.0.3.116 (eth0) |      |            |           |
+--------------+---------+-------------------+------+------------+-----------+
| mongo        | STOPPED |                   |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+
| msf          | RUNNING | 10.0.3.165 (eth0) |      | PERSISTENT | 0         |
+--------------+---------+-------------------+------+------------+-----------+

$ sudo lxc start dummy

$ sudo lxc list dummy
+-------+---------+------+------+------------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+------+------+------------+-----------+
| dummy | STOPPED |      |      | PERSISTENT | 0         |
+-------+---------+------+------+------------+-----------+

$ sudo lxc start dummy --debug
DBUG[08-02|22:03:42] Connecting to a local LXD over a Unix socket 
DBUG[08-02|22:03:42] Sending request to LXD                   method=GET url=http://unix.socket/1.0 etag=
DBUG[08-02|22:03:42] Got response struct from LXD 
DBUG[08-02|22:03:42] 
    {
        "config": {
            "core.trust_password": true,
            "images.auto_update_interval": "0"
        },
        "api_extensions": [
            "storage_zfs_remove_snapshots",
            "container_host_shutdown_timeout",
            "container_stop_priority",
            "container_syscall_filtering",
            "auth_pki",
            "container_last_used_at",
            "etag",
            "patch",
            "usb_devices",
            "https_allowed_credentials",
            "image_compression_algorithm",
            "directory_manipulation",
            "container_cpu_time",
            "storage_zfs_use_refquota",
            "storage_lvm_mount_options",
            "network",
            "profile_usedby",
            "container_push",
            "container_exec_recording",
            "certificate_update",
            "container_exec_signal_handling",
            "gpu_devices",
            "container_image_properties",
            "migration_progress",
            "id_map",
            "network_firewall_filtering",
            "network_routes",
            "storage",
            "file_delete",
            "file_append",
            "network_dhcp_expiry",
            "storage_lvm_vg_rename",
            "storage_lvm_thinpool_rename",
            "network_vlan",
            "image_create_aliases",
            "container_stateless_copy",
            "container_only_migration",
            "storage_zfs_clone_copy",
            "unix_device_rename",
            "storage_lvm_use_thinpool",
            "storage_rsync_bwlimit",
            "network_vxlan_interface",
            "storage_btrfs_mount_options",
            "entity_description",
            "image_force_refresh",
            "storage_lvm_lv_resizing",
            "id_map_base",
            "file_symlinks",
            "container_push_target",
            "network_vlan_physical",
            "storage_images_delete",
            "container_edit_metadata",
            "container_snapshot_stateful_migration",
            "storage_driver_ceph",
            "storage_ceph_user_name",
            "resource_limits",
            "storage_volatile_initial_source",
            "storage_ceph_force_osd_reuse",
            "storage_block_filesystem_btrfs",
            "resources",
            "kernel_limits",
            "storage_api_volume_rename",
            "macaroon_authentication",
            "network_sriov",
            "console",
            "restrict_devlxd",
            "migration_pre_copy",
            "infiniband",
            "maas_network",
            "devlxd_events",
            "proxy",
            "network_dhcp_gateway",
            "file_get_symlink",
            "network_leases",
            "unix_device_hotplug",
            "storage_api_local_volume_handling",
            "operation_description",
            "clustering",
            "event_lifecycle",
            "storage_api_remote_volume_handling",
            "nvidia_runtime",
            "container_mount_propagation",
            "container_backup",
            "devlxd_images",
            "container_local_cross_pool_handling",
            "proxy_unix",
            "proxy_udp",
            "clustering_join",
            "proxy_tcp_udp_multi_port_handling",
            "network_state",
            "proxy_unix_dac_properties",
            "container_protection_delete",
            "unix_priv_drop",
            "pprof_http",
            "proxy_haproxy_protocol",
            "network_hwaddr",
            "proxy_nat",
            "network_nat_order",
            "container_full",
            "candid_authentication",
            "backup_compression",
            "candid_config",
            "nvidia_runtime_config",
            "storage_api_volume_snapshots",
            "storage_unmapped",
            "projects",
            "candid_config_key",
            "network_vxlan_ttl",
            "container_incremental_copy",
            "usb_optional_vendorid",
            "snapshot_scheduling",
            "container_copy_project",
            "clustering_server_address",
            "clustering_image_replication",
            "container_protection_shift",
            "snapshot_expiry",
            "container_backup_override_pool",
            "snapshot_expiry_creation",
            "network_leases_location",
            "resources_cpu_socket",
            "resources_gpu",
            "resources_numa",
            "kernel_features",
            "id_map_current",
            "event_location",
            "storage_api_remote_volume_snapshots",
            "network_nat_address",
            "container_nic_routes",
            "rbac",
            "cluster_internal_copy",
            "seccomp_notify",
            "lxc_features",
            "container_nic_ipvlan",
            "network_vlan_sriov",
            "storage_cephfs",
            "container_nic_ipfilter",
            "resources_v2",
            "container_exec_user_group_cwd"
        ],
        "api_status": "stable",
        "api_version": "1.0",
        "auth": "trusted",
        "public": false,
        "auth_methods": [
            "tls"
        ],
        "environment": {
            "addresses": [],
            "architectures": [
                "x86_64",
                "i686"
            ],
            "certificate": "-----BEGIN CERTIFICATE-----\nMIIFRDCCAyygAwIBAgIRANH6teCoaXUkKMFfOV7wWeEwDQYJKoZIhvcNAQELBQAw\nNTEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBj\nYXNjYXJhMB4XDTE4MDgwNzA5NDkzMloXDTI4MDgwNDA5NDkzMlowNTEcMBoGA1UE\nChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBjYXNjYXJhMIIC\nIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAvvR0cELUV567EWW/f9rUHE12\nmricBtKGnKbUB2z1VkmjxzXs4e7ipfaybsn5zEIv7TlLmkB186RJyyNCmmFOpsK5\nRojrFEw/ZOnEr121uDbNBPAyzSxyAG3BeG/Z/Pk2atsbukoxvxkilYFssG0CuQFl\nzqgrA1HMyMgBD++++ps2MVVfrPUU0aIzqufRm3taRv58ZUatdkh6MEbn7EPLipHw\ngsX1Kv5ZgOlizKW16u/3tvEhsggMQkcfIVJ3HHA1Tbcet9KzAls9gMrqwiQVKyoy\nhwVtrZ+mwQ/Y/2595gVYIld5XTNIz07cvAHzuItI3mGPQfh7dkWLZNZROHFR2E0Y\nVhr4PFaLZ+sn9SlwStZDIB4TzXxYmIEoTCpMAPMjafa5P7qrS4bKTtcGIoX49tYL\nhCg75wCJ+kH6XAaEgeceHQB1lC4oizZx/6TjJp9Oda/cTB68JiPfMvpLNSXXRKK+\nxgmMTh78dP/3V+Nfxdy1Q5VbxDXkJW6b24V1ujiBJ+hbKiOW8qIkkdTa0RZpdMhM\nYusbgNWae4WPKC7vpxQSksaRYtJbnNp8+rYJv04IvWm9YhFxUZ62EDR22XIPEZaa\nKWS9eyULWC/mBh9ZASXaoVHfYTT2a1A3xVxTH2rVCO2ASQ0HvuASFSafo8v54OaA\nHQLeVBN9/D8WNXNimPECAwEAAaNPME0wDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQM\nMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwGAYDVR0RBBEwD4IHY2FzY2FyYYcE\nCgAApTANBgkqhkiG9w0BAQsFAAOCAgEAh4hUDteGJ39zpRKPxR05LgtcC6GfVhp2\nQbDXRLQJFKF4Myg0BgHzSmcG8Mg/p49fTaqmtObcA7Q+4XDdQ7ioMZ1mo70lnff7\nkZcinrCrkApq1quaLhJgaDbwtZRO1P99ohTMoCV2F0q3OMdxIP162d+mTlVQsHbd\nkku4GsF9vPRqpmo+J2EG5yeXYceENOunUi/fCRE7hzsRVrgHpVfHV5YzyctUEaXF\nSBljIAKaoh9RL4lAgaTNecgCy4EEXkX2+TupybPPsXup/OdBCpV1Wvt0sbUk64bd\n0Y7rGo2o4aONk2icZAwlvKQVd0Qp44sEAIDERO18D0EZS003DOVby+Xhhi3Msmwt\nucY90oWpXGrxLPFSDwslyo8kNiHdK8MPm30ma1QnSBFDIqvIipweuptzdrmIrU73\nR2JL3wGpQcdwjaNosKyriK8T7JYnVNgs7atYHzq6z+M5X/goiEb5cLgnJVuBMVaL\nztLL+7QxA06lPtfQiD9MLRKE3XFLQ8ANG8JDCKALSWmA6fSLmuoB3MmfJ7knYzhQ\nLnN2SH8eDh+jWppJ1/OVaOc4dILME5p13LPWWIuSHtHvQu6jzqG5uXbg6wYNgRRY\noCYW1J6CC4It968Ub4j5X2uVExNrjGa6tCXR9DUXBCImIVKC66VVt9Z/7vfyfmdJ\n8KOm0Tcg60s=\n-----END CERTIFICATE-----\n",
            "certificate_fingerprint": "74c19d9477ccde1f2100cbf6187a394e999d6f1b8233e95c51e06deaf79caea4",
            "driver": "lxc",
            "driver_version": "3.2.1",
            "kernel": "Linux",
            "kernel_architecture": "x86_64",
            "kernel_features": {
                "netnsid_getifaddrs": "false",
                "seccomp_listener": "false",
                "shiftfs": "false",
                "uevent_injection": "true",
                "unpriv_fscaps": "true"
            },
            "kernel_version": "4.18.0-25-generic",
            "lxc_features": {
                "mount_injection_file": "true",
                "network_gateway_device_route": "true",
                "network_ipvlan": "true",
                "network_l2proxy": "true",
                "network_phys_macvlan_mtu": "true",
                "seccomp_notify": "true"
            },
            "project": "default",
            "server": "lxd",
            "server_clustered": false,
            "server_name": "cascara",
            "server_pid": 8795,
            "server_version": "3.15",
            "storage": "zfs",
            "storage_version": "0.7.9-3ubuntu6"
        }
    } 
DBUG[08-02|22:03:42] Sending request to LXD                   method=GET url=http://unix.socket/1.0/containers/dummy etag=
DBUG[08-02|22:03:42] Got response struct from LXD 
DBUG[08-02|22:03:42] 
    {
        "architecture": "x86_64",
        "config": {
            "image.architecture": "amd64",
            "image.description": "ubuntu 18.04 LTS amd64 (release) (20190722.1)",
            "image.label": "release",
            "image.os": "ubuntu",
            "image.release": "bionic",
            "image.serial": "20190722.1",
            "image.version": "18.04",
            "volatile.base_image": "368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0",
            "volatile.eth0.hwaddr": "00:16:3e:83:d6:96",
            "volatile.idmap.base": "0",
            "volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.last_state.power": "STOPPED"
        },
        "devices": {},
        "ephemeral": false,
        "profiles": [
            "default"
        ],
        "stateful": false,
        "description": "",
        "created_at": "2019-08-02T21:51:14.002798629+03:00",
        "expanded_config": {
            "image.architecture": "amd64",
            "image.description": "ubuntu 18.04 LTS amd64 (release) (20190722.1)",
            "image.label": "release",
            "image.os": "ubuntu",
            "image.release": "bionic",
            "image.serial": "20190722.1",
            "image.version": "18.04",
            "volatile.base_image": "368bb7174b679ece9bd0dfe2ab953c02c47ff4451736cb255655ba8348f17bc0",
            "volatile.eth0.hwaddr": "00:16:3e:83:d6:96",
            "volatile.idmap.base": "0",
            "volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
            "volatile.last_state.power": "STOPPED"
        },
        "expanded_devices": {
            "eth0": {
                "name": "eth0",
                "nictype": "bridged",
                "parent": "lxdbr0",
                "type": "nic"
            },
            "root": {
                "path": "/",
                "pool": "lxd",
                "type": "disk"
            }
        },
        "name": "dummy",
        "status": "Stopped",
        "status_code": 102,
        "last_used_at": "2019-08-02T21:57:44.810840436+03:00",
        "location": "none"
    } 
DBUG[08-02|22:03:42] Connected to the websocket 
DBUG[08-02|22:03:42] Sending request to LXD                   method=PUT url=http://unix.socket/1.0/containers/dummy/state etag=
DBUG[08-02|22:03:42] 
    {
        "action": "start",
        "timeout": 0,
        "force": false,
        "stateful": false
    } 
DBUG[08-02|22:03:42] Got operation from LXD 
DBUG[08-02|22:03:42] 
    {
        "id": "7f1f0d89-7b15-41a7-ac4a-8389b96864c6",
        "class": "task",
        "description": "Starting container",
        "created_at": "2019-08-02T22:03:42.417782261+03:00",
        "updated_at": "2019-08-02T22:03:42.417782261+03:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "containers": [
                "/1.0/containers/dummy"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "",
        "location": "none"
    } 
DBUG[08-02|22:03:42] Sending request to LXD                   method=GET url=http://unix.socket/1.0/operations/7f1f0d89-7b15-41a7-ac4a-8389b96864c6 etag=
DBUG[08-02|22:03:42] Got response struct from LXD 
DBUG[08-02|22:03:42] 
    {
        "id": "7f1f0d89-7b15-41a7-ac4a-8389b96864c6",
        "class": "task",
        "description": "Starting container",
        "created_at": "2019-08-02T22:03:42.417782261+03:00",
        "updated_at": "2019-08-02T22:03:42.417782261+03:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "containers": [
                "/1.0/containers/dummy"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "",
        "location": "none"
    } 

Any hints on how I can debug this further?