Restoring a container after it all goes wrong

Sorry for the ambiguous title, but I wasn’t sure how else to describe my issue. As memorialized in other threads, when I attempted to upgrade to Ubuntu 19.04, something went wrong and everything went to sh*t. For whatever reason, LXD would hang every time I invoked an LXD command (N.B. LXD was not the only program that had issues; for the most part, EVERYTHING broke).

As such, I was unable to use the standard LXD backup/copy commands and instead chose to tar the entire /var/snap/lxd/common/lxd directory. What makes this additionally sticky is that the two containers I wish to restore relied on a storage pool on an external drive. Luckily that drive is still intact.
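For reference, a minimal sketch of that kind of raw backup (assuming the snap install paths; stopping the daemon first avoids copying the LXD database mid-write):

```shell
# Stop the daemon so the LXD database isn't captured mid-write.
sudo snap stop lxd

# Archive the whole LXD state directory, preserving permissions (-p).
sudo tar -czpf /root/lxd-backup.tar.gz -C /var/snap/lxd/common lxd

sudo snap start lxd
```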

So my question is this: is there a way to restore the containers? I’m primarily concerned with the filesystems inside the containers, as they hold important databases.

Thank you for everyone’s help. I really appreciate how giving people are with their time and expertise.

What is the content of the /var/snap/lxd/common/lxd/disks directory?
Do you remember what your storage configuration was? (zfs, btrfs, lvm, file storage, partition…)

What you are asking is how to troubleshoot LXD when something goes wrong. This requires a good understanding of how LXD works, diagnosing the specifics of the problem, and then safely trying out solutions.

Here is a recent case where LXD appeared to have forgotten all about its created containers: Not listing containers after restarting

Some time ago, when LXD was at version 2, there was a rather common issue with messing up and needing to start over. At that point, I wrote a post about it.

What you are looking for, I believe, is an updated version of that post, with best practices on how to salvage containers when things go bad.


Thank you. Most links were very helpful; however, I have a very rudimentary question that I just can’t seem to find the answer to. The ZFS storage pool that the old containers used is on an external USB hard drive. I can import it via zpool import <pool name>, but lxd import <container name> produces Error: The container "<container name>" does not seem to exist on any storage pool.

zfs list -t all shows (among other things):
<pool name>/containers/<container name> 1.31G 401G 1.56G /var/snap/lxd/common/lxd/storage-pools/<pool name>/containers/<container name>

I must be missing a step.

root@optiplex3040:/var/snap/lxd/common/lxd/disks# ls -lh
total 13G
-rw------- 1 root root 24G May 21 21:02 Default.img


If you can mount your container somewhere using ZFS, so that you are able to see a directory named ‘rootfs’ and a file named ‘metadata.yaml’ under a mount point called, let’s say, /mnt/mysavedcontainer, you should be able to pack those two into a tarball and import it as an image (if perms are set right):

lxc image import /tmp/mysavedcontainer.tar.gz --alias mysavedcontainer

After some time, LXD should say that it has created a new image with some fingerprint.
You should then be able to create a new container based on this image:
lxc launch mysavedcontainer mycontainer
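Since lxc image import expects an image tarball rather than a bare directory, the steps above can be sketched like this (paths and names are illustrative):

```shell
# Pack metadata.yaml and rootfs from the mounted container into a tarball.
cd /mnt/mysavedcontainer
sudo tar -czf /tmp/mysavedcontainer.tar.gz metadata.yaml rootfs

# Import it as an image, then launch a new container from it.
lxc image import /tmp/mysavedcontainer.tar.gz --alias mysavedcontainer
lxc launch mysavedcontainer mycontainer
```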

Wanted to thank you all. I finally got the old container imported. My issue was my lack of understanding of how to gain access to the “old” ZFS storage pool, which came down entirely to my ignorance of ZFS generally. Once I educated myself on pool vs. filesystem, import vs. mount, and zpool vs. zfs, I was able to set the mount point for the container I wanted to rescue to /var/snap/lxd/common/lxd/storage-pools/<Old LXD Pool Name>/, import the pool, and mount the container filesystem. Now I am working through the various errors lxd import throws, but I know enough about the backup.yaml file to get that resolved.
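In other words, the recovery boiled down to something like the following sketch (pool and container names here are placeholders, and lxd import is the LXD 3.x disaster-recovery command that reads the container’s backup.yaml):

```shell
# Import the old pool from the external drive.
sudo zpool import oldpool

# Point the container dataset at the path LXD expects, then mount it.
sudo zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/oldpool/containers/mycontainer \
    oldpool/containers/mycontainer
sudo zfs mount oldpool/containers/mycontainer

# Re-register the container with LXD from its backup.yaml.
sudo lxd import mycontainer
```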

Thanks, again. Cheers