LXD fails to start after 4.0/stable -> 6/stable upgrade

Hello All,

I’ve just tried upgrading my LXD install (running as a snap on Ubuntu 24.04.3 LTS) from the 4.0/stable channel to 6/stable. Unfortunately, LXD now fails to start. journalctl -u snap.lxd.daemon -f shows it repeatedly trying and failing to start, with errors like the following:

Oct 04 10:08:58 ruby lxd.daemon[166672]: => Starting LXD
Oct 04 10:08:58 ruby lxd.daemon[167568]: time="2025-10-04T10:08:58Z" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
Oct 04 10:09:00 ruby lxd.daemon[167568]: time="2025-10-04T10:09:00Z" level=error msg="Failed starting daemon" err="Failed applying patch \"storage_missing_snapshot_records\": Failed applying patch to pool \"default\": Error inserting volume \"btrbk/btrbk_2021-08-01\" for project \"default\" in pool \"default\" of type \"containers\" into database \"Failed creating volume snapshot record: UNIQUE constraint failed: storage_volumes_snapshots.storage_volume_id, storage_volumes_snapshots.name\""
Oct 04 10:09:01 ruby lxd.daemon[167568]: time="2025-10-04T10:09:01Z" level=warning msg="Dqlite last entry" index=0 term=0
Oct 04 10:09:01 ruby lxd.daemon[167568]: Error: Failed applying patch "storage_missing_snapshot_records": Failed applying patch to pool "default": Error inserting volume "btrbk/btrbk_2021-08-01" for project "default" in pool "default" of type "containers" into database "Failed creating volume snapshot record: UNIQUE constraint failed: storage_volumes_snapshots.storage_volume_id, storage_volumes_snapshots.name"
Oct 04 10:09:01 ruby lxd.daemon[166672]: Killed
Oct 04 10:09:01 ruby lxd.daemon[166672]: => LXD failed to start

So it looks like there’s a problem with some of my containers’ snapshot records. Downgrading back to 4.0/stable no longer works either. Can anyone suggest how I can fix the situation?
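For context, my understanding is that LXD applies any SQL placed in a patch.global.sql file in its database directory at the next daemon startup, which is its documented recovery mechanism for exactly this kind of thing. I’m wondering whether something like the following (untested, with the volume and snapshot names taken straight from the error message above) would clear the conflicting record so the storage_missing_snapshot_records patch can re-insert it. I’d obviously back up /var/snap/lxd/common/lxd/database first.

```sql
-- Hypothetical sketch: remove the pre-existing snapshot row that collides
-- with the one the storage_missing_snapshot_records patch tries to insert.
-- "btrbk" and "btrbk_2021-08-01" are taken from the log output above;
-- the table/column names are from the UNIQUE constraint in the error.
DELETE FROM storage_volumes_snapshots
 WHERE name = 'btrbk_2021-08-01'
   AND storage_volume_id IN (
     SELECT id FROM storage_volumes WHERE name = 'btrbk'
   );
```

Is this a sane approach, or is there a safer way to clean up duplicate snapshot records before the patch runs?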

Regards,

Chris

I am sorry. We don’t provide support for LXD on this forum. You might have better luck contacting Canonical.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.