LXD export and root file system size

I don't know why this directory was never created, or why it got deleted, maybe there is a bug somewhere, but how about

sudo mkdir /var/snap/lxd/common/lxd/backups
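and then restart the daemon so it picks the new directory up. Assuming the snap package is in use (which the path suggests), something like:

sudo systemctl restart snap.lxd.daemon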

Hmm, yes, that's clearer; cancel my advice to create it manually.

Duh. That's bad.
How about

sudo mkdir -p /var/snap/lxd/common/lxd/storage-pools/exports/custom/sav
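and then, as a quick sanity check on the path above, verify that it exists before restarting the daemon:

ls -ld /var/snap/lxd/common/lxd/storage-pools/exports/custom/sav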

Well, maybe this time I got it right :frowning:

No, it's still not starting. I am starting to worry.

What's the error message in the log this time?
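Assuming the snap package is in use (which your paths suggest), the daemon log can be read with either of:

sudo tail -n 100 /var/snap/lxd/common/lxd/logs/lxd.log

sudo snap logs lxd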

sudo tail -n 100 /var/snap/lxd/common/lxd/logs/lxd.log

t=2020-02-05T18:29:05+0100 lvl=info msg="LXD 3.20 is starting in normal mode" path=/var/snap/lxd/common/lxd
t=2020-02-05T18:29:05+0100 lvl=eror msg="Failed to start the daemon: Failed to chmod dir /var/snap/lxd/common/lxd/backups: chmod /var/snap/lxd/common/lxd/backups: no such file or directory"
t=2020-02-05T18:29:05+0100 lvl=info msg="Starting shutdown sequence"

If I unlink the backup folder and recreate it, would that work?

I had to create the folder exports_volume.
Now it's running again. How odd! I will investigate this some more tomorrow.

Yes, after 5 edits I still had not got it right, since in my quick test I had not named the volume as you did. Sorry for the botched advice; I was doing something else at the same time and I'm bad at multitasking.

I created a snapshot before I rebooted. Now I cannot delete the snapshot because LXD cannot find it. This is of course expected. Is there a forced way to tell LXD the snapshot is not there anymore?

Err, I don't quite see why a snapshot should be lost because you rebooted. This has never happened to me. I can understand it could happen if the reboot was unintended following a crash, but if it's a normal reboot without error, it should not happen. What's the output of lxc list container-with-snapshot? Is there a '1' in the snapshots column?
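You can also check with lxc info, which lists the snapshots LXD believes exist for a container:

lxc info container-with-snapshot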

On the original subject, I think you should file a bug with the LXD issue tracker referencing this thread. Really it should have been a new topic; this is not very orderly, but that's one of the problems with this forum: it lets users post follow-ups instead of automatically closing topics after a month of inactivity, which gets messy.
At the moment, to make this feature work you would have to unset the key before stopping the system and set it again before doing your backup. I reproduced your problem with a makeshift partition on a USB key; maybe the problem is caused by the partition being automounted, but I'm not sure at all. Anyway, the thing is not working as it should in your case either, and it's a bug.
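For reference, here is a sketch of that workaround, assuming the key in question is storage.backups_volume and taking the pool/volume names from the paths above (adjust them to your actual names):

# before shutting down or rebooting
lxc config unset storage.backups_volume

# before running the backup/export again
lxc config set storage.backups_volume exports/sav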

If I do lxc list container-with-snapshot, no containers are shown. If I do lxc list, then indeed I see a 1 in the snapshots column for the container.

When everything was running (before my reboot) I tried to make a snapshot, but it gave me an error that it could not find a folder, the same folder I had to create after the reboot when LXD was no longer running. However, LXD believes that the snapshot exists. If I create a snapshot now, it works as expected.

But is there no way to drop the snapshot?

Also, when asking for the log, all containers are showing this:

Log:

lxc dcfs01 20200205201100.834 WARN cgfsng - cgroups/cgfsng.c:chowmod:1525 - No such file or directory - Failed to chown(/sys/fs/cgroup/unified//lxc.payload/dcfs01/memory.oom.group, 1000000000, 0)

What's the error returned? And what's the storage type (zfs, btrfs…)?

This is a warning about a missing kernel feature; it has nothing to do with the backup problem.

It's btrfs.

The error that I get when I do lxc delete kopano/kopano is:

Error: lstat /var/snap/lxd/common/lxd/storage-pools/lxd/containers-snapshots/kopano/kopano/: no such file or directory

Right, let's check if the snapshot has been created at the storage level:

lxc storage show lxd

and

sudo nsenter -t $(pgrep daemon.start) -m -- /snap/lxd/current/bin/btrfs sub list /var/snap /lxd/common/lxd/storage-pools/lxd

If I run the first command, I see the snapshot. The second command won't run because of too many arguments.

1.0/containers/kopano/snapshots/kopano

That's because the forum is inserting a space between /var/snap and the rest of the command; delete the space before hitting Enter.
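In other words, the intended command is:

sudo nsenter -t $(pgrep daemon.start) -m -- /snap/lxd/current/bin/btrfs sub list /var/snap/lxd/common/lxd/storage-pools/lxd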

OK, thank you. This is the output:

ID 257 gen 23100404 top level 5 path containers
ID 258 gen 23141448 top level 5 path containers-snapshots
ID 259 gen 7909279 top level 5 path images
ID 260 gen 10 top level 5 path custom
ID 261 gen 11 top level 5 path custom-snapshots
ID 1394 gen 23162522 top level 257 path containers/controller
ID 1482 gen 23162529 top level 257 path containers/zwave
ID 2608 gen 23162526 top level 257 path containers/ex02
ID 2751 gen 14061354 top level 257 path containers/ex02760190707/.backup
ID 2752 gen 14061444 top level 257 path containers/ex02174436253/.backup
ID 2753 gen 14061448 top level 257 path containers/ex02691178967/.backup
ID 2754 gen 14061452 top level 257 path containers/ex02047286817/.backup
ID 2755 gen 14061459 top level 257 path containers/ex02077840635/.backup
ID 2756 gen 14061601 top level 257 path containers/ex02613717733/.backup
ID 2757 gen 14061646 top level 257 path containers/kms158885983/.backup
ID 2938 gen 23162514 top level 257 path containers/opsi
ID 2997 gen 23162524 top level 257 path containers/dcfs01
ID 3002 gen 23162529 top level 257 path containers/kopano
ID 3240 gen 23162349 top level 257 path containers/kms

There is no snapshot for the kopano container at the storage level (no subvolume for it under containers-snapshots). Can you try to do

lxc storage edit lxd

and delete the snapshot entry for this container? I have no idea if this is supposed to be possible :slight_smile:
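If that doesn't work, something else that might be worth trying (untested on my side, so treat it as a guess) is asking the API directly to delete the snapshot, using the path that showed up in the pool's used_by list:

lxc query --wait -X DELETE /1.0/containers/kopano/snapshots/kopano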