How to connect an old storage pool and see all old containers

In my test environment (localhost) I used to run the LXD snap from the edge channel, because the stable/candidate channels didn't work for me, and my pool wasn't auto-mounted (btrfs on LUKS, with an entry only in crypttab and none in fstab). At some point the snap updated and every container ended up with only a single /bin/bash process. But instead of just switching to stable, I did a stupid thing: I removed the snap with snap remove lxd and installed the local LXD deb package. By then LXD had already been migrated to the snap, so after removing the packages and installing LXD from the snap again, I ended up with a completely wiped database.

Right now I'm looking for a way to restore or reconnect the old btrfs pool (and import the container entries back into LXD). I've mounted it manually at /mnt/lxd (and added it to fstab); it contains the containers, custom, images and snapshots directories, with all containers intact.

And yes, I did suspect that removing the snap would not preserve the data, but I typed the command on autopilot after a long work day. :man_facepalming:

Mounting your storage pool at the right location under /var/lib/lxd/storage-pools/NAME should then let you run lxd import CONTAINER-NAME for each of your containers, causing LXD to re-create the database entries from the backup.yaml file that’s included in the container’s on-disk storage.

Note that this is a disaster recovery process so it’s unlikely that every bit of information you had in the database before will be re-created properly, but if done right, at least the containers should come out fine.
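
A minimal sketch of that recovery, assuming the old pool was called default and sits on /dev/mapper/name as in this thread, with a container named byteball:

# mount the old btrfs pool where LXD expects it
mkdir -p /var/lib/lxd/storage-pools/default
mount /dev/mapper/name /var/lib/lxd/storage-pools/default

# re-create the database entry from the container's backup.yaml
lxd import byteball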


I mounted /dev/mapper/name at /var/lib/lxd/storage-pools/default and it doesn't seem to work.

Inside container’s backup.yaml:

pool:
  config:
    source: 945a5f35-....
  description: ""
  name: default
  driver: btrfs
  used_by: []
volume:
  config: {}
  description: ""
  name: byteball
  type: container
  used_by: []

Label: 'default'  uuid: 945a5f35-...
	Total devices 1 FS bytes used 28.19GiB
	devid    1 size 309.44GiB used 64.02GiB path /dev/mapper/name

What happens when you run lxd import byteball?

Just typical:
error: The container "byteball" does not seem to exist on any storage pool
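
For what it's worth, my assumption is that lxd import expects to find the container's data and backup.yaml at a fixed place under the mounted pool, so I checked that the layout roughly matches:

# expected layout under the pool mount (my understanding, not verified)
ls /var/lib/lxd/storage-pools/default/containers/
cat /var/lib/lxd/storage-pools/default/containers/byteball/backup.yaml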

I do still seem to be able to import each container directory as an image (as root), export those images, back up every directory (to be safe) and move the images around. Then I could reformat the drive holding my pool (no, I don't have a spare HDD), recreate the pool and recreate the containers from the images. But sadly that would take a lot of time, and I've only created one image so far, since I'm short on time. :tired_face:

It's actually funny how I made this mistake: I just used snap on autopilot the same way I use apt. In a way it's for the better, though. We wanted to ship our own software as a snap and had mostly finished that work, but since snap has no confirmation prompt on remove and install, that's really not an option now: if someone wiped their data that way, our users could lose a lot of money and might well sue us.

Also, LXD itself doesn't seem to behave that well in the snap: I get constant timeout disconnects from /bin/bash sessions in my terminal tabs, and shell auto-completion doesn't work well either, it just hangs. It seems to be snap-related, so I'm not reporting it; I'm pretty sure you're aware.
I haven't moved the production servers to the snap yet, but there doesn't seem to be a PPA with current versions any more.

One more thing: with the snap it seems files are first copied to a temporary location, so when I do lxc file push I run out of space, even though there's plenty of space on the pool and only limited space on /.

Hmm, the snap does ship with a working bash completion profile, it’s working fine here.
It would however misbehave if mixed with a non-snapped LXD installed on the same system. Can you confirm that you don't have the lxd or lxd-client packages installed on your system?

For the import failure, I’m not sure what’s going on, I’ll have to test that particular feature which may take a few days.

Hmm, the snap does ship with a working bash completion profile, it's working fine here.

It works, but in practice it might as well not. For example, I type lxc exec plus half of a container name and press Tab; it does complete the name but hangs afterwards, so I'm never really able to finish the command.

can you confirm you don’t have the lxd or lxd-client packages installed on your system?

I'm sure I don't. I started importing the containers one by one as images today, but it sure takes a lot of time. I think I should just move the containers directory out of the pool and import the directories as images directly, instead of exporting every image for a bulk import after I recreate the pool on the same storage. That should save some time, but I'd still feel better with a second copy of the data.

I typed lxc exec gm and pressed Tab for auto-completion, and got:

error: invalid format "csv"

That's on another machine with the LXD snap too, so auto-completion does seem broken with the snap, yes. I don't remember the error message from the earlier case with the btrfs backend; this one uses the dir backend.

Just tried on a few systems here using the snap and completion works fine for lxc exec <some chars><tab>.

Your error really sounds like you have some other version of the lxc client on your system which is taking precedence over the snap provided one. That other lxc binary would be an older one like 2.0.x which doesn’t support the csv output, causing the error above.
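
One quick way to check which binary wins (a sketch, assuming a Debian/Ubuntu system):

# list every lxc binary on the PATH, in precedence order
which -a lxc

# check whether the distribution packages are still installed
dpkg -l lxd lxd-client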

Turns out this system was installed from the full Ubuntu Server image (I normally install a minimal system), so the LXD deb package was indeed installed. I can't really test it now that I've removed lxd and lxd-client, because after the reboot this VPS, which isn't mine, appears to be down. :rofl: But I'm 100% sure I had no leftover LXD packages in my test environment.

I was able to restore all the old containers within 30 minutes today with lxd import, but it only worked on the new pool, after I copied the containers directory over manually. Luckily I had some scripts for the containers and a good memory of the profiles, so recreating the network subnets and profiles by hand was fairly easy. As for the test server that isn't mine, I was able to restore everything quickly by switching to LXD from backports. That also seems like the only viable option for me right now, since some things, like the systemd service for our geth fork, did not start under the snapped LXD, plus other small bugs.
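
Roughly what that manual copy looked like, as a sketch (my old pool is mounted at /mnt/lxd, the new pool is called default; on btrfs a subvolume snapshot would arguably be cleaner than a plain copy):

for c in /mnt/lxd/containers/*; do
    name=$(basename "$c")
    # plain recursive copy into the new pool's containers directory
    cp -a "$c" /var/lib/lxd/storage-pools/default/containers/
    lxd import "$name"
done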

Another thing: I don't seem to be able to pass /dev/mapper/name as an existing block device in lxd init, and I'm pretty sure that used to work.
I never had the btrfs filesystem under LUKS mounted via fstab, only the LUKS entry in crypttab, and LXD happily used the btrfs filesystem UUID as the pool source. Now it can't find it, or rather it can't find it only in the snap version.
Anyway, I now mount the new pool via fstab and use a folder as the source, which works: a folder path as volatile.initial_source works fine.
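
In other words, instead of passing the block device, I point the pool at the already mounted filesystem, something along these lines (my mount point; I haven't double-checked the exact syntax against the docs):

# btrfs on LUKS is unlocked via crypttab and mounted via fstab at /mnt/lxd
lxc storage create default btrfs source=/mnt/lxd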

If the storage pool is LVM, how do I mount it at /var/lib/lxd/storage-pools/NAME?

You can see the available volumes using the lvs command, then you will need to activate the instance volumes so they appear as block devices before they can be mounted.

Take a look at the command LXD runs to activate a volume:

Thanks for the explanation

I use

lvchange -ay --ignoreactivationskip /dev/pool-lvm2/containers_ins--lvm2
mount /dev/mapper/pool--lvm2-containers_ins----lvm2 /var/lib/lxd/storage-pools/pool-lvm2/containers/ins-lvm2
lxd import ins-lvm2

got

Error: Failed creating instance record: Failed initialising instance: Failed to add device "test": Failed loading custom volume: No such object

The "test" device refers to a custom storage volume.

How do I load it?

I’m not sure if you can import a custom volume. Perhaps if you created one of the same name on the same storage pool that would allow you to recover your instance.

@stgraber should we modify LXD to ignore device errors when importing, a bit like we do when restoring snapshots?

Ah yeah, that’s a slightly tricky one.
So with planned work on lxd import, we will allow disaster recovery on those too, but we don’t have that right now.

For now, the best option is likely to create an empty custom volume and restore its data manually; once it's there, restore the container.
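
A sketch of that workaround with the names from this thread (you would still need to restore the volume's data yourself before relying on it):

# create an empty custom volume with the same name as the missing one
lxc storage volume create pool-lvm2 test

# then re-import the instance
lxd import ins-lvm2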