Corrupted LVM volumes - can't delete default storage pool

Hi,

I'm not a native English speaker, and I think I made several mistakes that led to this: all my containers (3) and VMs (2) are down. I thought I had set up the space quotas correctly on my LVM thin pool storage configuration, but it seems I did not understand a lot of it. My LVs and default.img are corrupted, and I tried many different things (e2fsck was kind of a last resort) thanks to all the great work of everyone here and to the documentation, but nothing worked, so I'm writing to you. Anyway, thanks a lot to anyone involved in Incus <3.

I made backups of a lot of things (storage, network, profiles, etc.) in YAML format.

Now I just want to run incus import backup.tar.gz with my safe backups (taken before everything "exploded") into a new ZFS or dir storage pool. But I can't manage to delete my corrupted default LVM storage pool:

incus storage delete default
Error: The storage pool is currently in use
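From what I understand, the pool can only be deleted once nothing references it anymore, so something like this should show what is still holding it (a rough sketch, "default" being my pool name):

incus storage show default          # the used_by list shows profiles, instances and images still on the pool
incus storage volume list default   # any leftover custom volumes would also have to go first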

I tried this "solution", but I just end up with these messages saying my containers and VMs are down, which I already know:

printf 'config: {}\ndevices: {}' | sudo incus profile edit default
Error: The following instances failed to update (profile change still saved):
 - Project: default, Instance: xxx: Failed to write backup file: Failed to mount LVM logical volume: Failed to mount "/dev/default/virtual-machines_xxx" on "/var/lib/incus/storage-pools/default/virtual-machines/xxx" using "ext4": invalid argument
 - Project: default, Instance: xxx: Failed to write backup file: Failed to mount LVM logical volume: Failed to mount "/dev/default/containers_xxx" on "/var/lib/incus/storage-pools/default/containers/xxx" using "ext4": structure needs cleaning
 - Project: default, Instance: xxx: Failed to write backup file: Failed to mount LVM logical volume: Failed to mount "/dev/default/containers_xxx" on "/var/lib/incus/storage-pools/default/containers/xxx" using "ext4": structure needs cleaning
 - Project: default, Instance: xxx: Failed to write backup file: Failed to mount LVM logical volume: Failed to mount "/dev/default/virtual-machines_xxx" on "/var/lib/incus/storage-pools/default/virtual-machines/xxx" using "ext4": invalid argument
 - Project: default, Instance: xxx: Failed to write backup file: Failed to mount LVM logical volume: Failed to mount "/dev/default/containers_xxx" on "/var/lib/incus/storage-pools/default/containers/containers_xxx" using "ext4": structure needs cleaning

And then whatever I do, I still end up with "Error: The storage pool is currently in use".
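If I understand correctly, the pool stays "in use" as long as the instances and images created on it still exist, so the cleanup would look roughly like this (names are placeholders, and my data on the pool is lost anyway):

incus list                     # find the instances still tied to the pool
incus delete xxx --force       # force-remove each broken container/VM
incus image list               # cached images may also live on the pool
incus image delete xxx
incus storage delete default   # should work once nothing uses the pool anymore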

Now I just want to delete this storage pool that Incus created at the beginning when I set up the LVM storage, but I can't, and it takes up too much "ghost space" (I don't understand how Incus and LVM talk to each other, but I guess I should let Incus do the work here?) on the partition where I want to create the new storage pool.

I want to avoid erasing everything I have in Incus (profiles, network, macvlan, etc.) as that post may suggest; I only want to remove the LVM storage pool.

Thanks for the help here,

So what’s the state of LVM? Did you manually delete the VG/PV already?
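Something like this would show whether the PV/VG/LVs are still there at all (plain LVM tools, with "default" being the VG that Incus created for the pool):

sudo pvs
sudo vgs
sudo lvs -a default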

Sounds like those instances need to go away. If you can’t delete them the normal way, you may need to do some database surgery to get rid of them.
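Roughly speaking, the database surgery goes through the built-in SQL access. The exact schema depends on your version, so treat this as a sketch and make a copy of /var/lib/incus/database first:

incus admin sql global "SELECT id, name FROM instances"
incus admin sql global "DELETE FROM instances WHERE name='xxx'"
incus admin sql global "SELECT id, name FROM storage_volumes"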

Though if you're looking at deleting everything on the machine, wouldn't you be better off just wiping Incus entirely (basically delete everything in /var/lib/incus/ and reboot the system)?
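In rough terms (service names and paths can differ a bit depending on how Incus was installed):

sudo systemctl stop incus.service incus.socket
sudo rm -rf /var/lib/incus/
sudo reboot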


Thanks a lot @stgraber for your answer, I feel very lucky.

I also thought it would be easier to wipe it all out and redo it, but I don't know how to easily "rebuild" everything without redoing all the configuration. Is there a way to export the configuration (network, macvlan, shared disks, sizes of containers/virtual machines, etc.) and then import it again?
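What I had in mind is something like dumping each object to YAML and re-applying it later (names here are just from my setup, and I'm not sure this covers everything):

incus profile show default > profile-default.yaml
incus network show incusbr0 > network-incusbr0.yaml
incus config show xxx > instance-xxx.yaml
incus admin init --dump > preseed.yaml   # whole-server preseed, if my version supports it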

EDIT: a few more details

Sorry, the instance states are "STOPPED", the "LV Status" values are "NOT available", nothing shows up in df -h anymore, and vgchange --activate y doesn't help either.
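For anyone finding this later, the LVM-side recovery attempt would look roughly like this (the VG/LV and thin-pool names are guesses based on my setup, and I'm not sure the thin-pool repair even applies here):

sudo vgchange -ay default
sudo lvchange -ay default/containers_xxx
sudo lvs -a default                             # to find the actual thin pool name
sudo lvconvert --repair default/IncusThinPool   # thin pool name may differ; check the lvs -a output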

I ended up wiping it all because everything was super buggy at this point. Thanks a lot!