Container fails to start

Hi,

A container refuses to start, and I don't know why.

Here is the container info and log:

Name: mutnovsky
Location: none
Remote: unix://
Architecture: x86_64
Created: 2019/10/31 11:30 UTC
Status: Stopped
Type: persistent
Profiles: bridge_profile

Log:

lxc mutnovsky 20191115104010.143 WARN     cgfsng - cgroups/cgfsng.c:chowmod:1525 - No such file or directory - Failed to chown(/sys/fs/cgroup/unified//lxc.payload/mutnovsky/memory.oom.group, 1000000000, 0)
lxc mutnovsky 20191115104010.178 ERROR    dir - storage/dir.c:dir_mount:198 - No such file or directory - Failed to mount "/var/snap/lxd/common/lxd/containers/mutnovsky/rootfs" on "/var/snap/lxd/common/lxc/"
lxc mutnovsky 20191115104010.178 ERROR    conf - conf.c:lxc_mount_rootfs:1353 - Failed to mount rootfs "/var/snap/lxd/common/lxd/containers/mutnovsky/rootfs" onto "/var/snap/lxd/common/lxc/" with options "(null)"
lxc mutnovsky 20191115104010.178 ERROR    conf - conf.c:lxc_setup_rootfs_prepare_root:3447 - Failed to setup rootfs for
lxc mutnovsky 20191115104010.178 ERROR    conf - conf.c:lxc_setup:3550 - Failed to setup rootfs
lxc mutnovsky 20191115104010.178 ERROR    start - start.c:do_start:1321 - Failed to setup container "mutnovsky"
lxc mutnovsky 20191115104010.178 ERROR    sync - sync.c:__sync_wait:62 - An error occurred in another process (expected sequence number 5)
lxc mutnovsky 20191115104010.178 WARN     network - network.c:lxc_delete_network_priv:3377 - Failed to rename interface with index 19 from "eth0" to its initial name "veth8f9abf15"
lxc mutnovsky 20191115104010.178 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:873 - Received container state "ABORTING" instead of "RUNNING"
lxc mutnovsky 20191115104010.178 ERROR    start - start.c:__lxc_start:2039 - Failed to spawn container "mutnovsky"
lxc 20191115104010.270 WARN     commands - commands.c:lxc_cmd_rsp_recv:135 - Connection reset by peer - Failed to receive response for command "get_state" 

The bridge_profile:

config: {}
description: Bridged network LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: bridge_profile
used_by:
- /1.0/containers/karimsky
- /1.0/containers/gorely
- /1.0/containers/koriaksky
- /1.0/containers/mutnovsky

And the container's config is:

architecture: x86_64
config:
  volatile.base_image: 4008aaa396e33476b06fdfb49fb4e4048446ff6311c701b2b6a5b4285e977dcf
  volatile.eth0.hwaddr: 00:16:3e:42:c6:c5
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- bridge_profile
stateful: false
description: ""

After an lxc rename, the container kept its old name and refuses to start.
I've tried to re-import the container from a tarball exported earlier, but nothing changed.
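For reference, this is roughly the sequence I used, if I remember the commands correctly (the tarball name is just an example):

```shell
# Backup made earlier, while the container still worked
lxc export mutnovsky mutnovsky-backup.tar.gz

# Attempted restore from that tarball (did not fix the problem)
lxc import mutnovsky-backup.tar.gz
```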

My storage pool is on a btrfs volume at /media/lxd.
I use LXD 3.18 on Debian 10 with the snap package.
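If it helps with debugging, the pool configuration can be inspected like this ("default" is the pool name taken from my profile above):

```shell
# List the storage pools known to LXD
lxc storage list

# Show the configuration of the pool the container uses
lxc storage show default
```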

Can anyone help me, please?

(Sorry for my bad English.)

Update: after several unsuccessful attempts, I decided to delete the container and re-import my backup.
Everything is back to normal and my container starts.
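For anyone hitting the same problem, these are roughly the steps that fixed it for me (the backup file name is an example):

```shell
# Remove the broken container entirely
lxc delete mutnovsky

# Re-import it from the backup tarball
lxc import mutnovsky-backup.tar.gz

# Start it again
lxc start mutnovsky
```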