Most of the containers are fine, but I seem to have some phantom snapshots. Example case:
lxc snapshot collabora
gives me this error:
Error: Create instance snapshot: Error inserting volume "collabora/snap3" for project "default" in pool "default" of type "containers" into database "Insert volume snapshot: UNIQUE constraint failed: storage_volumes_snapshots.storage_volume_id, storage_volumes_snapshots.name"
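The constraint it's complaining about lives in LXD's global database, so I assume the row it trips over could be checked directly with something like this (untested on my end; storage_volumes_snapshots and its storage_volume_id/name columns are taken straight from the error message):
lxd sql global "SELECT storage_volume_id, name FROM storage_volumes_snapshots WHERE name = 'snap3';"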
If I do
lxc info collabora
I get the following, with no snap3 listed:
+----------------------------+----------------------+------------+----------+
|            NAME            |       TAKEN AT       | EXPIRES AT | STATEFUL |
+----------------------------+----------------------+------------+----------+
| after-upgrade-ubuntu-20.04 | 2022/04/30 18:02 UTC |            | NO       |
+----------------------------+----------------------+------------+----------+
| snap1                      | 2022/01/29 00:17 UTC |            | NO       |
+----------------------------+----------------------+------------+----------+
| snap2                      | 2022/04/30 17:20 UTC |            | NO       |
+----------------------------+----------------------+------------+----------+
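If it helps, I assume the same list can also be pulled straight from the REST API with:
lxc query /1.0/instances/collabora/snapshots
I haven't pasted that output here, but I'd expect it to match lxc info, i.e. still no snap3.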
If I do:
zfs list -r -t snapshot osgeo7/containers/collabora
the mysterious snap3 again does not show up:
NAME                                                              USED  AVAIL  REFER  MOUNTPOINT
osgeo7/containers/collabora@snapshot-snap1                        327M      -  1.19G  -
osgeo7/containers/collabora@snapshot-snap2                        451M      -  1.26G  -
osgeo7/containers/collabora@snapshot-after-upgrade-ubuntu-20.04   117M      -  1.44G  -
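As I understand it, LXD names its ZFS snapshots snapshot-<lxd snapshot name>, so if snap3 actually existed on disk I'd expect an osgeo7/containers/collabora@snapshot-snap3 entry above. For a quick check of just the names:
zfs list -r -t snapshot -o name osgeo7/containers/collabora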
It's only when I do this that I see the phantom snapshot:
lxc storage show default | grep collabora
- /1.0/instances/collabora
- /1.0/instances/collabora%252Fafter-2019-10-20-system-updates
- /1.0/instances/collabora%252Fafter-nextcloud-config
- /1.0/instances/collabora%252Fbefore-2019-11-14-updates
- /1.0/instances/collabora%252Fsnap0
- /1.0/instances/collabora%252Fsnap1
- /1.0/instances/collabora%252Fsnap2
- /1.0/instances/collabora%252Fsnap3
- /1.0/instances/collabora%252Fsnap4
What's also odd is that after-upgrade-ubuntu-20.04 is not in the above list, and yet I do see it in lxc info collabora.
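To see which of these the database actually has snapshot rows for, I assume something like this would work (the join is my guess at the schema; only the storage_volumes_snapshots side is confirmed by the error above):
lxd sql global "SELECT v.name, s.name FROM storage_volumes v JOIN storage_volumes_snapshots s ON s.storage_volume_id = v.id WHERE v.name = 'collabora';"
I haven't run that yet, so apologies if the storage_volumes column names are off.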
The only thing I can think of that is special about this container is that, I believe, I created it a very long time ago as a copy of another container, so I'm wondering if something went wrong in that copy process.
I have another container with a similar issue; I think that one was also copied a long time ago. Other containers seem fine.
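If these really are just stale rows left over from that old copy, is deleting them by hand the right fix, e.g. something along the lines of (guessing at the exact WHERE clause, and obviously scoped to the right volume id):
lxd sql global "DELETE FROM storage_volumes_snapshots WHERE name = 'snap3' AND storage_volume_id = <collabora volume id>;"
or is there a safer, supported way to get the database back in sync with what's actually on disk?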