LXD does not start: cannot activate volume group (LVM)

I was trying to migrate my LXD LVM thin pool (“LXDPool”) to a new disk and apparently didn’t know what I was doing. Once I can recover my containers, I’ll ask for guidance on how to do this properly. But for now, I’ve gotten myself into a pickle where LXD won’t start:

# lxd --debug --trace --group lxd
INFO[01-03|00:01:56] LXD 3.8 is starting in normal mode       path=/var/snap/lxd/common/lxd
INFO[01-03|00:01:56] Kernel uid/gid map: 
INFO[01-03|00:01:56]  - u 0 0 4294967295 
INFO[01-03|00:01:56]  - g 0 0 4294967295 
INFO[01-03|00:01:56] Configured LXD uid/gid map: 
INFO[01-03|00:01:56]  - u 0 1000000 1000000000 
INFO[01-03|00:01:56]  - g 0 1000000 1000000000 
WARN[01-03|00:01:56] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[01-03|00:01:56] Kernel features: 
INFO[01-03|00:01:56]  - netnsid-based network retrieval: no 
INFO[01-03|00:01:56]  - uevent injection: no 
INFO[01-03|00:01:56]  - unprivileged file capabilities: yes 
INFO[01-03|00:01:56] Initializing local database 
DBUG[01-03|00:01:56] Initializing database gateway 
DBUG[01-03|00:01:56] Start database node                      id=1 address=
DBUG[01-03|00:01:56] Raft: Restored from snapshot 1-1024-1545746259596 
DBUG[01-03|00:01:56] Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}] 
DBUG[01-03|00:01:56] Raft: Node at 0 [Leader] entering Leader state 
DBUG[01-03|00:01:56] Dqlite: starting event loop 
DBUG[01-03|00:01:56] Dqlite: accepting connections 
INFO[01-03|00:01:56] Starting /dev/lxd handler: 
INFO[01-03|00:01:56]  - binding devlxd socket                 socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[01-03|00:01:56] REST API daemon: 
INFO[01-03|00:01:56]  - binding Unix socket                   socket=/var/snap/lxd/common/lxd/unix.socket
INFO[01-03|00:01:56] Initializing global database 
DBUG[01-03|00:01:56] Dqlite: handling new connection (fd=19) 
DBUG[01-03|00:01:56] Dqlite: connected address=0 attempt=0 
INFO[01-03|00:01:56] Initializing storage pools 
DBUG[01-03|00:01:56] Initializing and checking storage pool "vg0" 
DBUG[01-03|00:01:56] Checking LVM storage pool "vg0" 
EROR[01-03|00:02:15] Failed to start the daemon: could not activate volume group "vg0":   device-mapper: reload ioctl on (253:47) failed: No data available
  24 logical volume(s) in volume group "vg0" now active
 
INFO[01-03|00:02:15] Starting shutdown sequence 
INFO[01-03|00:02:15] Stopping REST API handler: 
INFO[01-03|00:02:15]  - closing socket                        socket=/var/snap/lxd/common/lxd/unix.socket
INFO[01-03|00:02:15] Stopping /dev/lxd handler: 
INFO[01-03|00:02:15]  - closing socket                        socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[01-03|00:02:15] Closing the database 
DBUG[01-03|00:02:15] Dqlite: closing client 
DBUG[01-03|00:02:15] Stop database gateway 
DBUG[01-03|00:02:15] Stop raft instance 
DBUG[01-03|00:02:15] Dqlite: stopping event loop 
DBUG[01-03|00:02:15] Dqlite: event loop stopped 
INFO[01-03|00:02:15] Unmounting temporary filesystems 
INFO[01-03|00:02:15] Done unmounting temporary filesystems 
INFO[01-03|00:02:15] Saving simplestreams cache 
INFO[01-03|00:02:15] Saved simplestreams cache 
Error: could not activate volume group "vg0":   device-mapper: reload ioctl on (253:47) failed: No data available
  24 logical volume(s) in volume group "vg0" now active

Indeed, there is no device at 253:47:

# ls -l /dev/|grep 253
brw-rw---- 1 root         disk    253,   0 Jan  1 13:44 dm-0
brw-rw---- 1 root         disk    253,   1 Jan  1 13:44 dm-1
brw-rw---- 1 root         disk    253,  10 Jan  1 13:44 dm-10
brw-rw---- 1 root         disk    253,  11 Jan  1 13:44 dm-11
brw-rw---- 1 root         disk    253,  12 Jan  1 13:44 dm-12
brw-rw---- 1 root         disk    253,  13 Jan  1 13:44 dm-13
brw-rw---- 1 libvirt-qemu kvm     253,  14 Jan  3 00:08 dm-14
brw-rw---- 1 root         disk    253,  15 Jan  1 13:44 dm-15
brw-rw---- 1 root         disk    253,  16 Jan  1 13:44 dm-16
brw-rw---- 1 root         disk    253,  17 Jan  1 13:44 dm-17
brw-rw---- 1 root         disk    253,  18 Jan  1 13:44 dm-18
brw-rw---- 1 root         disk    253,  19 Jan  1 13:44 dm-19
brw-rw---- 1 root         disk    253,   2 Jan  1 13:44 dm-2
brw-rw---- 1 root         disk    253,  20 Jan  1 13:44 dm-20
brw-rw---- 1 root         disk    253,  21 Jan  1 13:44 dm-21
brw-rw---- 1 root         disk    253,  22 Jan  1 13:44 dm-22
brw-rw---- 1 root         disk    253,  23 Jan  1 13:44 dm-23
brw-rw---- 1 root         disk    253,  24 Jan  1 13:44 dm-24
brw-rw---- 1 root         disk    253,  25 Jan  1 13:44 dm-25
brw-rw---- 1 root         disk    253,  26 Jan  1 13:44 dm-26
brw-rw---- 1 root         disk    253,  27 Jan  1 13:44 dm-27
brw-rw---- 1 root         disk    253,  28 Jan  1 13:44 dm-28
brw-rw---- 1 root         disk    253,  29 Jan  1 13:44 dm-29
brw-rw---- 1 root         disk    253,   3 Jan  1 13:44 dm-3
brw-rw---- 1 root         disk    253,  30 Jan  1 13:44 dm-30
brw-rw---- 1 root         disk    253,  31 Jan  1 13:44 dm-31
brw-rw---- 1 root         disk    253,  32 Jan  1 13:44 dm-32
brw-rw---- 1 root         disk    253,  33 Jan  1 13:44 dm-33
brw-rw---- 1 root         disk    253,  34 Jan  1 13:44 dm-34
brw-rw---- 1 root         disk    253,  35 Jan  1 13:44 dm-35
brw-rw---- 1 root         disk    253,  36 Jan  1 13:44 dm-36
brw-rw---- 1 root         disk    253,  37 Jan  1 13:44 dm-37
brw-rw---- 1 root         disk    253,  38 Jan  1 13:44 dm-38
brw-rw---- 1 root         disk    253,  39 Jan  1 13:44 dm-39
brw-rw---- 1 libvirt-qemu kvm     253,   4 Jan  2 23:50 dm-4
brw-rw---- 1 root         disk    253,  40 Jan  1 13:44 dm-40
brw-rw---- 1 root         disk    253,  41 Jan  1 13:44 dm-41
brw-rw---- 1 root         disk    253,  42 Jan  1 13:44 dm-42
brw-rw---- 1 root         disk    253,  43 Jan  1 13:44 dm-43
brw-rw---- 1 root         disk    253,  44 Jan  1 13:44 dm-44
brw-rw---- 1 root         disk    253,  45 Jan  1 13:44 dm-45
brw-rw---- 1 root         disk    253,  46 Jan  1 13:44 dm-46
brw-rw---- 1 root         disk    253,   5 Jan  1 13:44 dm-5
brw-rw---- 1 root         disk    253,   6 Jan  1 13:44 dm-6
brw-rw---- 1 root         disk    253,   7 Jan  1 13:44 dm-7
brw-rw---- 1 root         disk    253,   8 Jan  1 13:44 dm-8
brw-rw---- 1 root         disk    253,   9 Jan  1 13:44 dm-9
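
For reference, the same thing can be cross-checked with dmsetup, which lists the device-mapper nodes together with their major:minor numbers; nothing there should be claiming 253:47 either:

# dmsetup info -c
# dmsetup ls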

I tried creating that device node manually, but it did not change anything. Please help me understand where to start. I think it’s possible that there is a lingering reference somewhere to a container I deleted recently, “great-python”, whose LV shows as inactive:

# lvscan
  ACTIVE            '/dev/vg0/squeezeserver' [8.50 GiB] inherit
  ACTIVE            '/dev/vg0/couv' [34.18 GiB] inherit
  ACTIVE            '/dev/vg0/owncloud' [64.00 GiB] inherit
  ACTIVE            '/dev/vg0/e2fs-cache' [200.00 GiB] inherit
  ACTIVE            '/dev/vg0/LXDPool' [382.70 GiB] inherit
  inactive          '/dev/vg0/containers_great--python' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_mythserver' [64.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_a425df004fda0876a060d5b4f76ac790e85c725449273d991939a4cdb179ad83' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_local--nextcloud--server' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/Windows7' [45.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_nextcloud' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_roundcube' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_mailinabox' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_mailserver' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_test--ui1' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_ldap' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/containers_myNCcontainer' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_ea1d9641ca09f8d7b55548447493ed808113322401861ab1e09d1017e07d4ebd' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/images_84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9' [21.00 GiB] inherit
  ACTIVE            '/dev/vg0/LXDPool_meta0' [1.00 GiB] inherit
  ACTIVE            '/dev/vg0/LXDPool_meta1' [1.00 GiB] inherit
  ACTIVE            '/dev/vg0/LXDPool_meta2' [1.00 GiB] inherit
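
One way to confirm whether that inactive LV is the one behind the 253:47 reload failure would be to try activating it on its own and see if the same error comes back:

# lvchange -ay -v vg0/containers_great--python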

Thanks in advance; I really do not want to start over and lose all my containers.

Did you try rebooting the system yet? LVM can sometimes get into a funny state in the kernel which requires a reboot to fully resolve.

I’ve never seen this particular error before. It appears that the VG was in fact found on disk but just won’t fully come online; a reboot may resolve things, or at least provide a cleaner state from which to debug.

Thanks for the quick reply, Stéphane. I had rebooted several times earlier, but not for a while, and not since (possibly) corrupting the LVM setup. I just rebooted and have the same issue on LXD startup.

Can you show (as root):

  • pvscan --cache
  • pvs
  • vgs
  • lvs
  • vgchange vg0 -ay

root@avoton:~# pvscan
  PV /dev/md1   VG vg0             lvm2 [<3.64 TiB / <2.92 TiB free]
  PV /dev/md0   VG vg0             lvm2 [931.38 GiB / <578.68 GiB free]
  Total: 2 [<4.55 TiB] / in use: 2 [<4.55 TiB] / in no VG: 0 [0   ]
root@avoton:~# pvscan --cache
root@avoton:~# pvs
  PV         VG  Fmt  Attr PSize   PFree   
  /dev/md0   vg0 lvm2 a--  931.38g <578.68g
  /dev/md1   vg0 lvm2 a--   <3.64t   <2.92t
root@avoton:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg0   2  25   0 wz--n- <4.55t 3.48t
root@avoton:~# lvs
  LV                                                                      VG  Attr       LSize   Pool    Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDPool                                                                 vg0 twi-aotz-- 382.70g                                                                                 5.42   1.30                            
  LXDPool_meta0                                                           vg0 -wi-a-----   1.00g                                                                                                                        
  LXDPool_meta1                                                           vg0 -wi-a-----   1.00g                                                                                                                        
  LXDPool_meta2                                                           vg0 -wi-a-----   1.00g                                                                                                                        
  Windows7                                                                vg0 rwi-a-r---  45.00g                                                                                                        100.00          
  containers_great--python                                                vg0 Vwi---tz--  10.00g LXDPool                                                                                                                
  containers_ldap                                                         vg0 Vwi-a-tz--  21.00g LXDPool images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 5.80                                   
  containers_local--nextcloud--server                                     vg0 Vwi-a-tz--  21.00g LXDPool                                                                         8.74                                   
  containers_mailinabox                                                   vg0 Vwi-a-tz--  21.00g LXDPool                                                                         13.46                                  
  containers_mailserver                                                   vg0 Vwi-a-tz--  21.00g LXDPool images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 4.03                                   
  containers_myNCcontainer                                                vg0 Vwi-a-tz--  21.00g LXDPool                                                                         8.72                                   
  containers_mythserver                                                   vg0 Vwi-a-tz--  64.00g LXDPool                                                                         5.41                                   
  containers_nextcloud                                                    vg0 Vwi-a-tz--  21.00g LXDPool                                                                         11.40                                  
  containers_roundcube                                                    vg0 Vwi-a-tz--  21.00g LXDPool                                                                         4.91                                   
  containers_test--ui1                                                    vg0 Vwi-a-tz--  21.00g LXDPool images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e 4.09                                   
  couv                                                                    vg0 rwi-a-r---  34.18g                                                                                                        100.00          
  e2fs-cache                                                              vg0 rwi-a-r--- 200.00g                                                                                                        100.00          
  images_84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9 vg0 Vwi-a-tz--  21.00g LXDPool                                                                         5.30                                   
  images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 vg0 Vwi-a-tz--  21.00g LXDPool                                                                         3.49                                   
  images_8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d vg0 Vwi-a-tz--  10.00g LXDPool                                                                         8.76                                   
  images_a425df004fda0876a060d5b4f76ac790e85c725449273d991939a4cdb179ad83 vg0 Vwi-a-tz--  10.00g LXDPool                                                                         8.76                                   
  images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e vg0 Vwi-a-tz--  21.00g LXDPool                                                                         4.08                                   
  images_ea1d9641ca09f8d7b55548447493ed808113322401861ab1e09d1017e07d4ebd vg0 Vwi-a-tz--  10.00g LXDPool                                                                         9.00                                   
  owncloud                                                                vg0 rwi-aor---  64.00g                                                                                                        100.00          
  squeezeserver                                                           vg0 rwi-aor---   8.50g                                                                                                        100.00          
root@avoton:~# vgchange vg0 -ay
  device-mapper: reload ioctl on  (253:47) failed: No data available
  24 logical volume(s) in volume group "vg0" now active

Huh. I actually fixed this by removing the inactive thin LV:

lvremove /dev/vg0/containers_great--python

I had removed this container earlier, but its storage had not disappeared. I found that weird, but shrugged it off, as I don’t quite understand how LXD manages storage for deleted containers; maybe I made an error. Anyway, on the assumption that the container itself was already gone (I couldn’t verify for sure because LXD wouldn’t start), I removed the LV, since activating it didn’t work. Voila! All good.
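
For anyone who hits the same symptom later, the rough recovery sequence (only appropriate if the container really is gone from LXD and the orphaned thin volume holds nothing you need) would be something like:

# drop the orphaned thin LV that refuses to activate
lvremove /dev/vg0/containers_great--python
# the rest of the VG should then activate cleanly
vgchange -ay vg0
# and the snap-packaged LXD daemon should start again
systemctl restart snap.lxd.daemon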

P.S. I was actually able to confirm that “great-python” was gone, as it no longer appeared in the containers table of the LXD database.
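
(With the daemon running again, something along the lines of the query below should show what LXD still knows about; the exact table name may differ between LXD releases, so treat it as a sketch.)

# lxd sql global "SELECT name FROM containers;"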