Hmm, I’m getting:
[root@shl1 lxd]# lxc export git-http git-http.tgz -vvv
Error: Create backup: Backup storage: Could not create snapshot LV named snapshot--05236fd3--6b5f--4499--b074--7ae20cfb9d88
The reason I’m trying to back up containers is that I’m in the situation below. My sense is the above error might be related?
Do you think I should reboot the host to recover state?
I cancelled an lxc publish because the partition it was using was about to run out of space. After starting the container again, all UIDs have been lost (everything is owned by nobody:nogroup) and services do not start:
[root@shl1 lxd]# lxc exec omeka-s bash
bash: /root/.bashrc: Permission denied
Any clues on how to fix the UID mappings?
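For diagnosing the mapping state, it may help to look at the idmap LXD last recorded for the container and compare it against the host’s subordinate ID ranges. The keys below exist on recent LXD versions, but availability on 3.6 may differ, so treat this as a sketch:

```shell
# Show the idmap LXD last applied to the container's filesystem,
# and the one it intends to apply on next start
# ("omeka-s" is the container from the post above).
lxc config get omeka-s volatile.last_state.idmap
lxc config get omeka-s volatile.idmap.next

# Compare against the host's allotted subordinate UID/GID ranges:
grep -E 'root|lxd' /etc/subuid /etc/subgid
```

If the two idmaps disagree, a clean stop/start normally triggers a re-shift of ownership on disk.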
Thanks in advance
For some reason, the above error persists even after a reboot.
Thanks for your help so far - any advice on diagnosing further?
Is the container running when you run the export?
Try stopping it and doing the export, as that might skip the snapshot.
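Something along these lines (container name taken from your first post):

```shell
# Stop the container so the export doesn't need a live snapshot,
# then bring it back up afterwards.
lxc stop git-http
lxc export git-http git-http.tgz
lxc start git-http
```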
Alas, it appears to want to snapshot when STOPPED too…
Can you create a new storage pool of dir type, copy the instance to it locally, and then try exporting it from there?
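A sketch of that, assuming the pool name pool2 and a source directory on a partition with spare room (both are placeholders - adjust to taste, and note the `--storage` flag on copy may behave differently on LXD 3.6):

```shell
# Create a dir-backed pool; "source" must be an empty directory
# on a partition with enough free space.
lxc storage create pool2 dir source=/mnt/spare/lxd-pool2

# Copy the instance onto the new pool, then export the copy.
lxc copy git-http git-http-bak --storage pool2
lxc export git-http-bak git-http-bak.tgz
```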
[root@shl1 lxd-export]# lxc export git-http-bak git-http-bak.tgz
Backup exported successfully!
Fab, thanks so much!!
Nearly there! I can export the smaller ones, but even when exporting to a different partition, LXD wants to consume space on the root partition, which is nearly full. Is there some way to use only the target partition?
[root@shl1 sw206]# lxc config set storage.backups_volume pool2/storage
Error: cannot set 'storage.backups_volume' to 'pool2/storage': unknown key
I’m on LXD 3.6 - was this config under a different key back then?
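(For reference: on LXD 4.0 and later the key does exist. A hedged sketch of how it is wired up there - the pool and volume names are placeholders:)

```shell
# On LXD >= 4.0: stage backups on a custom volume instead of the
# root partition by pointing storage.backups_volume at it.
lxc storage volume create pool2 backups-vol
lxc config set storage.backups_volume pool2/backups-vol
```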
Ah, you cannot upgrade from 3.x to 5.x (only from 4.x), as 3.x is very old.
You could try upgrading the 3.x host to 4.x and then exporting and reimporting.
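If the host’s LXD came from the snap (an assumption - the channel names below are snap-specific and may not match your install), the stepped upgrade would look roughly like:

```shell
# Step through the 4.0 LTS track first, verify, then move to 5.x.
snap refresh lxd --channel=4.0/stable
# ...check containers and exports work, then:
snap refresh lxd --channel=5.0/stable
```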
Going from 3.6 → 5.1
Sorry, I should have thought of that earlier.
Can you provide the contents of the backup.yaml file in the exported tarball? It may be easier to just manually modify that file.
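The file can be pulled out of the tarball without unpacking everything. A sketch - the path inside the archive follows the usual LXD export layout, but verify it with the listing first:

```shell
# Find where backup.yaml sits inside the exported tarball...
tar -tzf git-http-bak.tgz | grep backup.yaml
# ...then extract just that member (path is the common layout):
tar -xzf git-http-bak.tgz backup/container/backup.yaml
cat backup/container/backup.yaml
```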
Apologies, I’m not following. My problem is that the root partition runs out of disk space on export. The container in question is only 7.5G and there’s 15G of free space, yet it still runs out.
storage.backups_volume isn’t an option in 3.6 - is it safe to upgrade to 4.x whilst containers are running?
I personally wouldn’t, especially as we know it’s going to alter the instance’s config.
(Apologies this is dragging on!)
Error: Create backup: Backup storage: exit status 1 - that’s from the export command, and I’m fairly sure there’s enough spare disk space…
Should I shutdown all containers and upgrade to 4.x?
Yes, I think that’s the best approach - either that, or get some more disk space.