LXD Restore, unix.socket: Connection Refused


First off, I am running Kubuntu 19.10 eoan
LXD/Snap Package Version: 3.18

I’ve been trying to get this done literally all day. Before re-installing my OS, I backed up /var/snap/lxd to an external HDD. According to the documentation here (https://lxd.readthedocs.io/en/latest/backup/), that’s how to perform a full backup. So after re-installing LXD, I stopped the LXD service, restored the lxd folder from common on my backup drive to /var/snap/lxd/common/lxd, and started the service back up. I did this as root, since that folder has always had root permissions set on it.

When I ran lxc list, I got an error: /var/snap/lxd/common/lxd/unix.socket: Permission Denied. A reboot fixed that, but now it reads the same except it says Connection refused. After hours of trying to figure this out, I’ve noticed that if I move the database folder out of common/lxd and then run lxc list, I no longer get the connection-refused error; instead it just shows no containers at all, as if it doesn’t pick up the data there. Restoring the database folder brings back the same error.
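For what it’s worth, the stop/copy/start flow I followed can be sketched on a throwaway directory (all paths below are illustrative, not the real /var/snap/lxd ones; tar’s -p flag is what preserves the root ownership and permissions that the socket errors hinge on):

```shell
# Sketch of the backup/restore flow on a throwaway directory instead of
# the real /var/snap/lxd (all paths here are illustrative).
src=$(mktemp -d); backup=$(mktemp -d); restore=$(mktemp -d)
mkdir -p "$src/common/lxd/database"
echo "db" > "$src/common/lxd/database/local.db"

# Backup: archive the whole tree; -p preserves permissions, which matters
# for the real thing since everything under common/lxd is owned by root.
tar -cpzf "$backup/lxd.tar.gz" -C "$src" common

# Restore: unpack into the new location (the real restore would happen
# with the LXD service stopped).
tar -xpzf "$backup/lxd.tar.gz" -C "$restore"
cat "$restore/common/lxd/database/local.db"   # → db
```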

I am new to LXC to begin with, so I’m not sure what the difference really is between a storage pool, a container, and a volume, but according to the documentation it seems like I did this the correct way.

I tried the disaster-recovery steps and got nowhere at all. I don’t know what the zfs tool is for or how to use it; it’s not like the regular mount command. I saw the zfs mount command, but it doesn’t accept an img file, and I don’t know where the ‘datasets’ it wants are supposed to be, or even where they would be mounted. The documentation doesn’t seem to explain this.

Is there any way to simply start the containers from this data? I don’t see why it wouldn’t work, since all the data has been restored exactly how it was before: same LXD package version, same OS, everything is the same.

Thank you for any help with this.

Edit: I’ve managed to fix this particular issue by manually re-creating the zpool. I had to look up the ZFS documentation and read for hours to understand it. The command was sudo zpool create pool_name <path/to/disks/default.img>, followed by a restart of the computer.

However, whenever I start a container it results in this error:
Error: Common start logic: Failed to mount ZFS dataset “dhammel/containers/debian” onto “/var/snap/lxd/common/lxd/storage-pools/dhammel/containers/debian”: no such file or directory
Try lxc info --show-log debian for more info

The lxc info command shows
Name: debian
Location: none
Remote: unix://
Architecture: x86_64
Created: 2019/12/03 01:11 UTC
Status: Stopped
Type: persistent
Profiles: default, gui-nvidia

I have checked and confirmed a number of times that that directory does indeed exist, so I don’t know why LXD acts as if it doesn’t. mkdir simply gives an error that the file exists, and ls shows it.
I also don’t know what ‘Location: none’ is supposed to mean, if it’s anything beyond what the log command above shows.

It seems like I’m getting farther, but every step I take results in more errors.

I managed to solve this issue; I was wrong to create the zpool again.
Basically, the zpool create command re-created the filesystem inside the image file, erasing everything within it. So I restored from my backup again and used the import subcommand of zpool:

sudo zpool import -d /var/snap/lxd/common/lxd/disks/filename.img <pool_name>

I had tried this import command before, but the syntax I used was wrong. A simple mistake, but the error it gave me just listed all the other options I could use without telling me what was actually wrong with my command. I didn’t know I had to include the pool name in the command; I figured it would work that out from the filename of the img. After running this command, I restarted the computer and everything was back up and running.
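For anyone who lands here with the same problem, here is the whole working sequence condensed in one place. This is a sketch, not verbatim what I typed: the backup path, pool name, and image filename are placeholders for your own setup, and adjust the service commands to however you manage the snap.

```shell
# 1. Stop LXD before touching its data directory
sudo snap stop lxd

# 2. Restore the backed-up data directory, preserving ownership and
#    permissions (/path/to/backup is a placeholder for your copy)
sudo cp -a /path/to/backup/lxd /var/snap/lxd/common/lxd

# 3. Re-import the existing pool from the loop-file image.
#    Do NOT 'zpool create' here -- that re-formats the image and erases it.
#    (pool_name and filename.img are placeholders)
sudo zpool import -d /var/snap/lxd/common/lxd/disks/filename.img pool_name

# 4. Start LXD again
sudo snap start lxd
```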