tl;dr: Cruft accumulated over the years leads to issues when deleting snapshots.
Hi, I have a long-running Incus server (years and years, starting on fairly early LXD releases, then migrated) and it has been exceedingly stable and reliable. A few months ago, however, I noticed an error in the journal but didn't find time to look into it right away:
Aug 25 13:17:40 (...) incusd[1717]: time="2025-08-25T13:17:40+02:00" level=error msg="Failed pruning instance snapshots" err="Failed to delete expired instance snapshot \"nodered/snap8\" in project \"default\": Error deleting storage volume from database: Storage volume found on more than one cluster member. Please target a specific member"
I have lots of these messages (snapshots keep expiring but fail to be deleted), but only for two containers, one of them being nodered. Note that I'm not running a cluster.
I found an older post (2023, for LXD) describing what is probably the same issue, but I'm hesitant to just edit the database without input/confirmation from someone more knowledgeable. After all, the post is over two years old and was about LXD, so plenty could have changed since; I don't want to needlessly risk messing up my setup.
The instance in question has a number of snapshots:
$ incus info nodered | grep \|
| NAME | TAKEN AT | EXPIRES AT | STATEFUL |
| snap6 | 2024/02/17 06:23 CET | 2024/05/11 06:23 CEST | NO |
| snap7 | 2024/02/24 06:23 CET | 2024/05/18 06:23 CEST | NO |
| snap8 | 2024/03/02 06:23 CET | 2024/05/25 06:23 CEST | NO |
| snap9 | 2024/03/09 06:23 CET | 2024/06/01 06:23 CEST | NO |
| snap10 | 2024/03/16 06:23 CET | 2024/06/08 06:23 CEST | NO |
| snap54 | 2025/06/07 06:23 CEST | 2025/08/30 06:23 CEST | NO |
| snap55 | 2025/07/05 06:23 CEST | 2025/09/27 06:23 CEST | NO |
| snap56 | 2025/07/12 06:23 CEST | 2025/10/04 06:23 CEST | NO |
| snap57 | 2025/07/19 06:23 CEST | 2025/10/11 06:23 CEST | NO |
| snap58 | 2025/07/26 06:23 CEST | 2025/10/18 06:23 CEST | NO |
| snap59 | 2025/08/09 06:23 CEST | 2025/11/01 06:23 CET | NO |
| snap60 | 2025/08/23 06:23 CEST | 2025/11/15 06:23 CET | NO |
| after-alp320-before-nodred-upgrade | 2025/08/24 00:37 CEST | 2025/11/16 00:37 CET | NO |
| nodered4.0 | 2025/08/25 13:04 CEST | | NO |
snap6–snap10 are the culprits that appear over and over in the host's journal; the others are valid. If I try to manually delete one of the older ones, I get the same message as in the journal:
$ incus snapshot delete nodered snap6
Error: Error deleting storage volume from database: Storage volume found on more than one cluster member. Please target a specific member
I also investigated a bit using commands adapted from the older post:
$ incus storage volume list ssd | grep nodered
| container | nodered | | filesystem | 1 |
| container (snapshot) | nodered/after-alp320-before-nodred-upgrade | | filesystem | 0 |
| container (snapshot) | nodered/alpine3.10-but-before-npm | | filesystem | 0 |
| container (snapshot) | nodered/fixed-timezone | | filesystem | 0 |
| container (snapshot) | nodered/nodered4.0 | | filesystem | 0 |
| container (snapshot) | nodered/snap6 | | filesystem | 0 |
| container (snapshot) | nodered/snap6 | | filesystem | 0 |
| container (snapshot) | nodered/snap7 | | filesystem | 0 |
| container (snapshot) | nodered/snap7 | | filesystem | 0 |
| container (snapshot) | nodered/snap8 | | filesystem | 0 |
| container (snapshot) | nodered/snap8 | | filesystem | 0 |
| container (snapshot) | nodered/snap9 | | filesystem | 0 |
| container (snapshot) | nodered/snap9 | | filesystem | 0 |
| container (snapshot) | nodered/snap10 | | filesystem | 0 |
| container (snapshot) | nodered/snap10 | | filesystem | 0 |
| container (snapshot) | nodered/snap54 | | filesystem | 0 |
| container (snapshot) | nodered/snap55 | | filesystem | 0 |
| container (snapshot) | nodered/snap56 | | filesystem | 0 |
| container (snapshot) | nodered/snap57 | | filesystem | 0 |
| container (snapshot) | nodered/snap58 | | filesystem | 0 |
| container (snapshot) | nodered/snap59 | | filesystem | 0 |
| container (snapshot) | nodered/snap60 | | filesystem | 0 |
| container (snapshot) | nodered/with-basic-knx-working | | filesystem | 0 |
| container (snapshot) | noderedbak/after-alpinev3.8-nodered0.19 | | filesystem | 0 |
| container (snapshot) | noderedbak/afterupgradeto0.18 | | filesystem | 0 |
| container (snapshot) | noderedbak/before-NUT | | filesystem | 0 |
| container (snapshot) | noderedbak/before-alpinev3.8-nodered0.19 | | filesystem | 0 |
| container (snapshot) | noderedbak/beforeupgradeto0.18 | | filesystem | 0 |
| container (snapshot) | noderedbak/snap37 | | filesystem | 0 |
| container (snapshot) | noderedbak/snap38 | | filesystem | 0 |
| container (snapshot) | noderedbak/snap39 | | filesystem | 0 |
| container (snapshot) | noderedbak/snap40 | | filesystem | 0 |
| container (snapshot) | noderedbak/snap41 | | filesystem | 0 |
Note that a number of volumes are listed that should no longer exist. This also goes for noderedbak, a container that no longer exists! Note also that the problematic snapshots (snap6–snap10) are listed twice here. My server definitely needs some garbage collection.
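For what it's worth, the duplication can also be confirmed straight from the database with a grouping query (my own adaptation, not taken from the older post):

```sql
-- Run via: incus admin sql global "…"
-- Lists each nodered snapshot volume name that has more than one row
SELECT name, COUNT(*) AS copies
FROM storage_volumes_all
WHERE name LIKE 'nodered/%'
GROUP BY name
HAVING COUNT(*) > 1;
```

On my server this returns exactly the snap6–snap10 names seen twice above.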
Here’s some more info:
$ incus admin sql global 'select * from storage_volumes where name like "nodered/%"' | grep \|
| ID | NAME | STORAGE POOL ID | NODE ID | TYPE | DESCRIPTION | PROJECT ID | CONTENT TYPE | CREATION DATE |
| 3109 | nodered/alpine3.10-but-before-npm | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3186 | nodered/fixed-timezone | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3195 | nodered/with-basic-knx-working | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3462 | nodered/snap6 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3510 | nodered/snap7 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3547 | nodered/snap8 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3582 | nodered/snap9 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3625 | nodered/snap10 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
$ incus admin sql global 'select * from storage_volumes_all' | grep nodered
| 109 | noderedbak/afterupgradeto0.18 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 110 | noderedbak/beforeupgradeto0.18 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 111 | noderedbak/before-NUT | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 125 | noderedbak/before-alpinev3.8-nodered0.19 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 126 | noderedbak/after-alpinev3.8-nodered0.19 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3108 | nodered | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3109 | nodered/alpine3.10-but-before-npm | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3186 | nodered/fixed-timezone | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3195 | nodered/with-basic-knx-working | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3462 | nodered/snap6 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3463 | noderedbak/snap37 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3510 | nodered/snap7 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3511 | noderedbak/snap38 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3547 | nodered/snap8 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3548 | noderedbak/snap39 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3582 | nodered/snap9 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3583 | noderedbak/snap40 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3625 | nodered/snap10 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 3626 | noderedbak/snap41 | 1 | 1 | 0 | | 1 | 0 | 0001-01-01T00:00:00Z |
| 12248 | nodered/snap6 | 1 | 1 | 0 | | 1 | 0 | 2024-02-17T05:23:49.973056488Z |
| 12264 | nodered/snap7 | 1 | 1 | 0 | | 1 | 0 | 2024-02-24T05:23:58.098998989Z |
| 12285 | nodered/snap8 | 1 | 1 | 0 | | 1 | 0 | 2024-03-02T05:23:59.021987654Z |
| 12312 | nodered/snap9 | 1 | 1 | 0 | | 1 | 0 | 2024-03-09T05:23:05.353531576Z |
| 12342 | nodered/snap10 | 1 | 1 | 0 | | 1 | 0 | 2024-03-16T05:23:14.374994958Z |
| 13500 | nodered/snap54 | 1 | 1 | 0 | | 1 | 0 | 2025-06-07T04:23:00.140652788Z |
| 13548 | nodered/snap55 | 1 | 1 | 0 | | 1 | 0 | 2025-07-05T04:23:53.116167821Z |
| 13563 | nodered/snap56 | 1 | 1 | 0 | | 1 | 0 | 2025-07-12T04:23:34.071663157Z |
| 13577 | nodered/snap57 | 1 | 1 | 0 | | 1 | 0 | 2025-07-19T04:23:19.221669956Z |
| 13601 | nodered/snap58 | 1 | 1 | 0 | | 1 | 0 | 2025-07-26T04:23:18.036743537Z |
| 13645 | nodered/snap59 | 1 | 1 | 0 | | 1 | 0 | 2025-08-09T04:23:45.878861569Z |
| 13677 | nodered/snap60 | 1 | 1 | 0 | | 1 | 0 | 2025-08-23T04:23:58.192765096Z |
| 13679 | nodered/after-alp320-before-nodred-upgrade | 1 | 1 | 0 | | 1 | 0 | 2025-08-23T22:37:09.741277272Z |
| 13684 | nodered/nodered4.0 | 1 | 1 | 0 | | 1 | 0 | 2025-08-25T11:04:06.792092313Z |
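Given the above, my guess is that the five old rows with the zero creation date (IDs 3462, 3510, 3547, 3582, 3625) are the stale duplicates, and the cleanup suggested in the older post would amount to something like this. Completely untested on my part, and I'd obviously back up the database first (assuming the default /var/lib/incus/database path):

```sql
-- Untested sketch: remove the stale duplicate rows by ID, keeping the
-- newer rows that have real creation dates. IDs taken from my output above.
-- Run via: incus admin sql global "…"
DELETE FROM storage_volumes WHERE id IN (3462, 3510, 3547, 3582, 3625);
```

Is this the right approach, or is there a safer, built-in way to reconcile the database with what's actually on disk?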
On disk it seems as though only the correct snapshots are actually there:
/mnt/ssdbtrfs/lxd/containers-snapshots/nodered# ls
after-alp320-before-nodred-upgrade nodered4.0 snap54 snap55 snap56 snap57 snap58 snap59 snap60
I’d be grateful for any help in this matter. I’ll happily provide more information if needed.