How can I restore a container on a "new" machine?

I had initial issues with my install of Ubuntu 18.04, so I swapped out the OS disk and rebuilt it.

My original LXD storage lived in a ZFS dataset on a different data pool, so when I rebuilt the system I ran “zfs import DATA” and have access to my snapshots, etc.

When I ran “lxd init” on the new machine, I tried to give it the dataset where the pool currently existed: DATA/LXD.

However, it told me it couldn’t use that dataset because it was not empty, so I used a new location, DATA/LXC.

How can I restore my containers?

I tried to make a new pool sourced from my old data:

@SERVER:~$ lxc storage create data zfs source=DATA/LXD
Error: Provided ZFS pool (or dataset) isn't empty

edit: attempting to follow this

I mounted my container:

zfs mount DATA/LXD/containers/piholeunifi

which worked

SERVER:/var/lib/lxd/storage-pools/default/containers/piholeunifi$ sudo ls -lah
total 49K
 11K drwx--x--x  4 165536 165536    6 Sep 30 10:05 .
 4.0K drwxr-xr-x  3 root   root   4.0K Oct 15 18:33 ..
 6.0K -r--------  1 root   root   2.4K Oct 15 05:18 backup.yaml
 6.0K -rw-r--r--  1 root   root   1.1K Sep 11 17:58 metadata.yaml
 11K drwxr-xr-x 22 165536 165536   22 Sep 11 16:59 rootfs
 11K drwxr-xr-x  2 root   root      7 Sep 11 17:58 templates

sudo lxd import piholeunifi
Error: The container "piholeunifi" does not seem to exist on any storage pool

So I’m still stuck.

What version of LXD are you running?

Did you perhaps switch to the snap which would explain why LXD isn’t looking at that path?

lxd version is 3.6

> Did you perhaps switch to the snap which would explain why LXD isn’t looking at that path?

I’m not sure what you mean here. Is “the snap” what you get through the Ubuntu 18.04 “Software” app?

I did not install it through “Software” on the last build, but through the command line. This one is installed through “Software”.

Ok, LXD 3.6 is only available as a snap package, so that’s indeed the problem.

Right, so you’ll want to mount your container onto /var/snap/lxd/common/lxd/storage-pools/default/containers/piholeunifi instead; that should fix this issue.

You’re going to need to change the ZFS mountpoint property to do that. Something like zfs set mountpoint=/var/snap/lxd/common/lxd/storage-pools/default/containers/piholeunifi DATA/LXD/containers/piholeunifi followed by zfs mount DATA/LXD/containers/piholeunifi should do the trick.
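Put together, the recovery steps might look like the sketch below. The container name (piholeunifi) and dataset (DATA/LXD) are taken from this thread; the path translation reflects that the deb package keeps state under /var/lib/lxd while the snap uses /var/snap/lxd/common/lxd.

```shell
# Translate the old deb-era mountpoint into the snap's equivalent path.
# (Names and paths taken from this thread; adjust for your own container.)
deb_path=/var/lib/lxd/storage-pools/default/containers/piholeunifi
snap_path="${deb_path/\/var\/lib\/lxd//var/snap/lxd/common/lxd}"
echo "$snap_path"
# /var/snap/lxd/common/lxd/storage-pools/default/containers/piholeunifi

# Then, on the LXD host:
# sudo zfs set mountpoint="$snap_path" DATA/LXD/containers/piholeunifi
# sudo zfs mount DATA/LXD/containers/piholeunifi
# sudo lxd import piholeunifi
```

The actual zfs/lxd commands are left commented since they change system state; only the path translation runs as-is.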


Thanks, I will be able to look at this tomorrow. I had a terrible DHCP issue take out my home network that I just finished fixing! Ubiquiti EdgeOS doesn’t like it when two reservations have the same name, even if they are in different VLANs. :wink:

Would you recommend I just install LXD from the terminal instead of the snap?

No, the snap is what we recommend, so you’re running the right thing, it’s just that your previous installation was using the deb and so isn’t using the same paths.

Changing the mount property should let you import your containers just fine.


This worked, thanks! I knew disaster recovery shouldn’t be this difficult.

I’m happy it’s restored, but if you have time for one more question: when I ran lxd init and tried to give it the previous pool, it said it couldn’t because the pool wasn’t empty. Can I reassign it, or does it have to be a completely new pool?

Importing a container from the old pool should have caused LXD to define the pool again, effectively doing what lxd init refused to do.


I have a similar problem but I couldn’t solve it even with this thread. I was running LXD on an old but perfectly good Ubuntu 16.04 system, using the original LTS version of LXD (hey, it wasn’t broke). My computer died, but the separate, reliable zpool drive is intact. I have installed this drive in another PC that runs LXD 3.0.3, and I would like to recover and run my old containers if possible.

I got ZFS to import the old drive easily enough. I can even “see” my old containers: sudo zfs list shows them (in the old file structure, …/var/lib/lxd/containers/c1 etc.), but I don’t know how to get my old containers imported/recognized. Hours of googling and I still can’t figure out how to get them into the ‘new’ machine’s LXD setup. Can you advise please?

Sorry for what is probably a dumb question.


lxd import <container> - note lxd not lxc.

Thank you. I have tried that, in all kinds of variations, and it does not work. The old drive I put into my computer was recognized and imported by ZFS (which I assume is a good start):

~$ sudo zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
lxdpool    464G   293G   171G         -   20%  63%  1.00x  ONLINE  -
lxdzpool  3.62T   762G  2.88T         -    2%  20%  1.00x  ONLINE  -

The ‘lxdpool’ is my old storage drive, and ‘lxdzpool’ is the existing one in the new computer:

~$ lxc storage list
| NAME       | DESCRIPTION | DRIVER | SOURCE   | USED BY |
| lxd4TBpool |             | zfs    | lxdzpool | 25      |

I can list the datasets in zfs, e.g. here’s one of (many) datasets - an OpenVPN container that used to run on my old machine:

~$ sudo zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
lxdpool/containers/OpenVPN  729M   156G   455M  /var/lib/lxd/containers/OpenVPN.zfs

But this command:
sudo lxd import OpenVPN

…does not work (‘Error: The container “OpenVPN” does not seem to exist on any storage pool’)

Neither does it work if I spell out the path and/or use OpenVPN.zfs; I get the same error message. It’s as if lxd can’t ‘see’ the lxdpool zpool, even though zfs can. I am hoping it’s a mount point issue or similar…

I also can’t add the zpool to my existing LXD instance:

~$ lxc storage create OB1 zfs source=lxdpool
Error: Provided ZFS pool (or dataset) isn’t empty

So it looks like in this case it can see it, but it’s not happy that it isn’t empty.

I am hoping it’s a simple trick to get my containers from the old ‘lxdpool’ zfs drive into the newer one.

Thank you for the reply though. :slight_smile:

Hi again. I am still struggling with this. I have made a little progress but I still can’t see my old containers in the new LXD instance.

I have a new install of LXD 3.0.3 configured to use zfs pool (“lxdzpool”) on a new PC.
I also have an older zfs pool (“oldpool”) - created under ubuntu 16.04 with the then LTS version of LXD (2.x) on a computer that has died.
I have an old container (c1) on ‘oldpool’ that I want to recover/import into the new LXD instance.
My operating lxdpool containers are mounted at: /var/lib/lxd/storage-pools/lxdpool/containers/

I mounted my old c1 container using:

sudo zfs set mountpoint=/var/lib/lxd/storage-pools/oldpool/containers/speed-test oldpool/containers/c1

zfs list confirms the mountpoint assignment seems to work, but sudo lxd import gives me an error:

“Error: The container “speed-test” does not seem to exist on any storage pool”

I am wondering if I am doing that right, though, as I can’t list any files with ls at the mounted path.

I cannot get past this, nor can I add the old storage to LXD, because LXD complains that it’s not empty.

My oldpool was created under Ubuntu 16.04, so it’s an old version. I don’t know if the version I am importing with (3.0.3) can work with such old files? For example, I note that the ZFS datasets for the oldpool are shown with a .zfs suffix, which is not how they are shown in the newer lxdzpool.

I would very much like to recover some of my old containers if I can, so any suggestions welcome.


Do you actually see files in /var/lib/lxd/storage-pools/oldpool/containers/speed-test?

If it’s empty, that’s probably a sign that you didn’t run zfs mount. If it has files, including a rootfs directory, but no backup.yaml, then the source dataset predates support for the import feature and so can’t be imported that way.
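That diagnosis can be sketched as a small shell check. This is only an illustration based on the criteria above: check_importable is a hypothetical helper name, and the path is the one used in this thread.

```shell
# Report why `lxd import` might refuse a container directory:
# no rootfs        -> the dataset probably isn't mounted yet;
# no backup.yaml   -> the dataset predates lxd import support.
check_importable() {
    dir="$1"
    if [ ! -d "$dir/rootfs" ]; then
        echo "not mounted (run zfs mount first)"
    elif [ ! -f "$dir/backup.yaml" ]; then
        echo "no backup.yaml (dataset predates lxd import support)"
    else
        echo "looks importable"
    fi
}

check_importable /var/lib/lxd/storage-pools/oldpool/containers/speed-test
```

Running it against the directory from this thread would print the first message until the zfs mount succeeds.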

Thank you for the reply!

No, I do not see anything in there. I was worried about the zfs dataset. I do still have access to the old LXD config files on the old PC drive, but the computer itself does not work. Is there anything I can copy/use from the old-drive’s root files that can help me recover this? (The drive from the oldpc seems to be OK, but the PC itself does not boot anymore, and attempts to fix that have also failed.)

THANK YOU Stephane

Try zfs mount oldpool/containers/c1 and see if that makes things appear.

No sir. zfs set mountpoint gets executed, but the actual mount does not work, nor are there any files shown at the mountpoint:

~$ sudo zfs list # confirms mountpoint is set
NAME                   USED  AVAIL  REFER  MOUNTPOINT
oldpool                293G   156G    96K  none
oldpool/containers     290G   156G    96K  none
oldpool/containers/c1  185M   156G   486M  /var/lib/lxd/storage-pools/oldpool/containers/c1

~$ sudo zfs mount oldpool/containers/c1
cannot open ‘oldpool/containers/c1’: dataset does not exist

~$ sudo ls -la /var/lib/lxd/storage-pools/oldpool/containers/ # no files present either:
total 8
drwxr-xr-x 2 root root 4096 Jun 17 10:08 .
drwxr-xr-x 3 root root 4096 Jun 17 10:08 ..

I hope this is just me doing something seriously dumb but fixable.

That zfs mount error is odd, it’s saying that oldpool/containers/c1 doesn’t exist in ZFS despite you showing it in zfs list.

Nice to know I am not the only one thrown by this. Maybe it didn’t import into ZFS properly; it too is an older version of ZFS. I may have no choice but to completely reinstall from scratch, but I am going to keep at it a while longer, as two of the containers would be hard work for me to recreate. :slight_smile:

Thank you for your time Stephane, I do appreciate it. If I ever do get this to work (somehow) I will create a FYI post in case it can help others.

Maybe try mount -t zfs oldpool/containers/c1 /var/lib/lxd/storage-pools/oldpool/containers/c1 and see if bypassing the zfs tool helps.