I used lxd-snapper for a long time, and recently migrated to incus with lxd-to-incus. The 5 containers converted fine, using the same ZFS storage. The snapshots “exist” according to incus but cannot be deleted. There was a “volatile-uuid” error that I fixed after googling, but now the containers won’t start because incus still has records for snapshots that no longer exist.
Any workaround, or should I rebuild the containers from scratch in incus?
The specific error is: “Error: Instance snapshot record count doesn’t match instance snapshot volume record count”
Can I reset the Instance snapshot record count to 0?
incus snapshot list <containername> --format csv
[sudo] zfs list -t snap <zfspath>/containers/<containername>
I think it’s basically complaining that something in one list doesn’t appear in the other. How to fix it depends on which one is missing.
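A rough way to see the mismatch is to diff the two lists. The sketch below uses sample data standing in for the command output above (in practice you’d generate the files with `incus snapshot list <containername> --format csv | cut -d, -f1 | sort` and `sudo zfs list -H -t snap -o name <zfspath>/containers/<containername> | sed 's/.*@snapshot-//' | sort`; the `@snapshot-` prefix is how the ZFS driver names its snapshots, so adjust if your layout differs):

```shell
# Sample data standing in for the real command output; both files must
# be sorted for comm(1) to work.
printf 'snap0\nsnap1\n' > incus-snaps.txt                     # what incus has on record
printf 'autosnap_daily_x\nsnap0\nsnap1\n' > zfs-snaps.txt     # what ZFS actually has

# Snapshots present on ZFS but unknown to incus:
comm -13 incus-snaps.txt zfs-snaps.txt
# Snapshots incus has on record but missing from ZFS:
comm -23 incus-snaps.txt zfs-snaps.txt
```

Whichever side of the diff is non-empty tells you which records to clean up.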
TBH, I think incus should be more robust in this case; in particular, it should not complain if a manually created zfs snapshot exists that “incus snapshot” was unaware of.
I don’t know what “lxd-snapper” is, but it’s possible that it’s created one or more zfs snapshots that are not incus snapshots. If so, then just deleting the snapshots that lxd-snapper created should be sufficient to fix the problem.
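If that’s the case, one cautious approach is to generate the `zfs destroy` commands for the stray snapshots and review them before running anything. This is only a sketch with sample data; `<zfspath>` and `<containername>` are the same placeholders as in the commands above:

```shell
# Sample list standing in for the stray snapshots found earlier.
printf 'autosnap_daily_a\nautosnap_daily_b\n' > stray-snaps.txt

# Emit (but do not run) one destroy command per stray snapshot; inspect
# the output, then pipe it to sh only once you're happy with it.
while read -r snap; do
  echo "sudo zfs destroy <zfspath>/containers/<containername>@snapshot-${snap}"
done < stray-snaps.txt
```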
lxd-snapper is an open-source script that I used with LXD to make snapshots; I don’t need it anymore. I did some more digging on my backup incus server: the snapshot naming format that I used for lxd-snapper (in a somewhat misguided attempt to co-exist with Sanoid), i.e. autosnap_daily_2024-08-21_04:00:02, breaks incus snapshot handling.
I renamed a “badly named” snapshot like autosnap_daily_2024-08-21_04:00:02 to autosnap_daily_2024-08-21, and after that it could be seen and deleted fine. I have now added configuration to snapshot the containers daily and expire the snapshots automatically. I am sorted on my backup server.
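For anyone finding this later, the built-in daily-snapshot-plus-expiry setup is just two instance config keys. The values below are examples, not what I necessarily used; `<containername>` is a placeholder:

```shell
# Take an automatic snapshot every day and prune anything older than a
# week. snapshots.schedule takes a cron expression or a @daily/@hourly
# shorthand; snapshots.expiry takes a duration string.
incus config set <containername> snapshots.schedule "@daily"
incus config set <containername> snapshots.expiry "7d"
```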
Thanks Brian. Yes, I read the documentation; the migration kind of failed because of the badly named snapshots from lxd-snapper. Starting over on prod (cuz it’s Saturday).