I was trying to migrate my LXD LVM thin pool (“LXDPool”) to a new disk and clearly didn’t know what I was doing. Once my containers are recovered, I’ll ask for guidance on how to do this properly. For now, I’ve gotten myself into a pickle where LXD won’t start:
# lxd --debug --trace --group lxd
INFO[01-03|00:01:56] LXD 3.8 is starting in normal mode path=/var/snap/lxd/common/lxd
INFO[01-03|00:01:56] Kernel uid/gid map:
INFO[01-03|00:01:56] - u 0 0 4294967295
INFO[01-03|00:01:56] - g 0 0 4294967295
INFO[01-03|00:01:56] Configured LXD uid/gid map:
INFO[01-03|00:01:56] - u 0 1000000 1000000000
INFO[01-03|00:01:56] - g 0 1000000 1000000000
WARN[01-03|00:01:56] CGroup memory swap accounting is disabled, swap limits will be ignored.
INFO[01-03|00:01:56] Kernel features:
INFO[01-03|00:01:56] - netnsid-based network retrieval: no
INFO[01-03|00:01:56] - uevent injection: no
INFO[01-03|00:01:56] - unprivileged file capabilities: yes
INFO[01-03|00:01:56] Initializing local database
DBUG[01-03|00:01:56] Initializing database gateway
DBUG[01-03|00:01:56] Start database node id=1 address=
DBUG[01-03|00:01:56] Raft: Restored from snapshot 1-1024-1545746259596
DBUG[01-03|00:01:56] Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]
DBUG[01-03|00:01:56] Raft: Node at 0 [Leader] entering Leader state
DBUG[01-03|00:01:56] Dqlite: starting event loop
DBUG[01-03|00:01:56] Dqlite: accepting connections
INFO[01-03|00:01:56] Starting /dev/lxd handler:
INFO[01-03|00:01:56] - binding devlxd socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[01-03|00:01:56] REST API daemon:
INFO[01-03|00:01:56] - binding Unix socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[01-03|00:01:56] Initializing global database
DBUG[01-03|00:01:56] Dqlite: handling new connection (fd=19)
DBUG[01-03|00:01:56] Dqlite: connected address=0 attempt=0
INFO[01-03|00:01:56] Initializing storage pools
DBUG[01-03|00:01:56] Initializing and checking storage pool "vg0"
DBUG[01-03|00:01:56] Checking LVM storage pool "vg0"
EROR[01-03|00:02:15] Failed to start the daemon: could not activate volume group "vg0": device-mapper: reload ioctl on (253:47) failed: No data available
24 logical volume(s) in volume group "vg0" now active
INFO[01-03|00:02:15] Starting shutdown sequence
INFO[01-03|00:02:15] Stopping REST API handler:
INFO[01-03|00:02:15] - closing socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[01-03|00:02:15] Stopping /dev/lxd handler:
INFO[01-03|00:02:15] - closing socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[01-03|00:02:15] Closing the database
DBUG[01-03|00:02:15] Dqlite: closing client
DBUG[01-03|00:02:15] Stop database gateway
DBUG[01-03|00:02:15] Stop raft instance
DBUG[01-03|00:02:15] Dqlite: stopping event loop
DBUG[01-03|00:02:15] Dqlite: event loop stopped
INFO[01-03|00:02:15] Unmounting temporary filesystems
INFO[01-03|00:02:15] Done unmounting temporary filesystems
INFO[01-03|00:02:15] Saving simplestreams cache
INFO[01-03|00:02:15] Saved simplestreams cache
Error: could not activate volume group "vg0": device-mapper: reload ioctl on (253:47) failed: No data available
24 logical volume(s) in volume group "vg0" now active
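My guess is that the same failure can be reproduced outside LXD with a plain LVM activation, which would at least confirm this is a device-mapper/LVM problem rather than something LXD-specific (I have not verified this; the commands below are just what I would try):

```shell
# Re-run the activation LXD attempts, to see if the same
# "reload ioctl ... failed" error appears without LXD involved:
vgchange -ay vg0

# List the device-mapper tables LVM has set up, with names,
# so the failing 253:47 target can hopefully be identified:
dmsetup ls
```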
Indeed, there is no device at 253:47:
# ls -l /dev/|grep 253
brw-rw---- 1 root disk 253, 0 Jan 1 13:44 dm-0
brw-rw---- 1 root disk 253, 1 Jan 1 13:44 dm-1
brw-rw---- 1 root disk 253, 10 Jan 1 13:44 dm-10
brw-rw---- 1 root disk 253, 11 Jan 1 13:44 dm-11
brw-rw---- 1 root disk 253, 12 Jan 1 13:44 dm-12
brw-rw---- 1 root disk 253, 13 Jan 1 13:44 dm-13
brw-rw---- 1 libvirt-qemu kvm 253, 14 Jan 3 00:08 dm-14
brw-rw---- 1 root disk 253, 15 Jan 1 13:44 dm-15
brw-rw---- 1 root disk 253, 16 Jan 1 13:44 dm-16
brw-rw---- 1 root disk 253, 17 Jan 1 13:44 dm-17
brw-rw---- 1 root disk 253, 18 Jan 1 13:44 dm-18
brw-rw---- 1 root disk 253, 19 Jan 1 13:44 dm-19
brw-rw---- 1 root disk 253, 2 Jan 1 13:44 dm-2
brw-rw---- 1 root disk 253, 20 Jan 1 13:44 dm-20
brw-rw---- 1 root disk 253, 21 Jan 1 13:44 dm-21
brw-rw---- 1 root disk 253, 22 Jan 1 13:44 dm-22
brw-rw---- 1 root disk 253, 23 Jan 1 13:44 dm-23
brw-rw---- 1 root disk 253, 24 Jan 1 13:44 dm-24
brw-rw---- 1 root disk 253, 25 Jan 1 13:44 dm-25
brw-rw---- 1 root disk 253, 26 Jan 1 13:44 dm-26
brw-rw---- 1 root disk 253, 27 Jan 1 13:44 dm-27
brw-rw---- 1 root disk 253, 28 Jan 1 13:44 dm-28
brw-rw---- 1 root disk 253, 29 Jan 1 13:44 dm-29
brw-rw---- 1 root disk 253, 3 Jan 1 13:44 dm-3
brw-rw---- 1 root disk 253, 30 Jan 1 13:44 dm-30
brw-rw---- 1 root disk 253, 31 Jan 1 13:44 dm-31
brw-rw---- 1 root disk 253, 32 Jan 1 13:44 dm-32
brw-rw---- 1 root disk 253, 33 Jan 1 13:44 dm-33
brw-rw---- 1 root disk 253, 34 Jan 1 13:44 dm-34
brw-rw---- 1 root disk 253, 35 Jan 1 13:44 dm-35
brw-rw---- 1 root disk 253, 36 Jan 1 13:44 dm-36
brw-rw---- 1 root disk 253, 37 Jan 1 13:44 dm-37
brw-rw---- 1 root disk 253, 38 Jan 1 13:44 dm-38
brw-rw---- 1 root disk 253, 39 Jan 1 13:44 dm-39
brw-rw---- 1 libvirt-qemu kvm 253, 4 Jan 2 23:50 dm-4
brw-rw---- 1 root disk 253, 40 Jan 1 13:44 dm-40
brw-rw---- 1 root disk 253, 41 Jan 1 13:44 dm-41
brw-rw---- 1 root disk 253, 42 Jan 1 13:44 dm-42
brw-rw---- 1 root disk 253, 43 Jan 1 13:44 dm-43
brw-rw---- 1 root disk 253, 44 Jan 1 13:44 dm-44
brw-rw---- 1 root disk 253, 45 Jan 1 13:44 dm-45
brw-rw---- 1 root disk 253, 46 Jan 1 13:44 dm-46
brw-rw---- 1 root disk 253, 5 Jan 1 13:44 dm-5
brw-rw---- 1 root disk 253, 6 Jan 1 13:44 dm-6
brw-rw---- 1 root disk 253, 7 Jan 1 13:44 dm-7
brw-rw---- 1 root disk 253, 8 Jan 1 13:44 dm-8
brw-rw---- 1 root disk 253, 9 Jan 1 13:44 dm-9
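For completeness, I believe dmsetup can confirm this without parsing /dev (again, just my assumption about the right way to check):

```shell
# Show every device-mapper device with its major and minor number;
# if the node is genuinely missing, 253:47 should be absent here too:
dmsetup info -c -o name,major,minor
```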
I tried creating one manually, but that had no effect. Please help me understand where to start. I suspect there may be a lingering reference somewhere to a container I recently deleted, “great-python”, whose LV shows as inactive:
# lvscan
ACTIVE '/dev/vg0/squeezeserver' [8.50 GiB] inherit
ACTIVE '/dev/vg0/couv' [34.18 GiB] inherit
ACTIVE '/dev/vg0/owncloud' [64.00 GiB] inherit
ACTIVE '/dev/vg0/e2fs-cache' [200.00 GiB] inherit
ACTIVE '/dev/vg0/LXDPool' [382.70 GiB] inherit
inactive '/dev/vg0/containers_great--python' [10.00 GiB] inherit
ACTIVE '/dev/vg0/containers_mythserver' [64.00 GiB] inherit
ACTIVE '/dev/vg0/images_8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d' [10.00 GiB] inherit
ACTIVE '/dev/vg0/images_a425df004fda0876a060d5b4f76ac790e85c725449273d991939a4cdb179ad83' [10.00 GiB] inherit
ACTIVE '/dev/vg0/containers_local--nextcloud--server' [21.00 GiB] inherit
ACTIVE '/dev/vg0/Windows7' [45.00 GiB] inherit
ACTIVE '/dev/vg0/containers_nextcloud' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_roundcube' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_mailinabox' [21.00 GiB] inherit
ACTIVE '/dev/vg0/images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_mailserver' [21.00 GiB] inherit
ACTIVE '/dev/vg0/images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_test--ui1' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_ldap' [21.00 GiB] inherit
ACTIVE '/dev/vg0/containers_myNCcontainer' [21.00 GiB] inherit
ACTIVE '/dev/vg0/images_ea1d9641ca09f8d7b55548447493ed808113322401861ab1e09d1017e07d4ebd' [10.00 GiB] inherit
ACTIVE '/dev/vg0/images_84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9' [21.00 GiB] inherit
ACTIVE '/dev/vg0/LXDPool_meta0' [1.00 GiB] inherit
ACTIVE '/dev/vg0/LXDPool_meta1' [1.00 GiB] inherit
ACTIVE '/dev/vg0/LXDPool_meta2' [1.00 GiB] inherit
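Would something like the following be a sensible first step for that inactive LV? I have not run either command yet; this is only what I am considering, and I assume the lvremove is destructive:

```shell
# Try activating the stale LV directly, to see whether it triggers
# the same "No data available" error during the table reload:
lvchange -ay vg0/containers_great--python

# If it is truly orphaned and activation keeps failing, removing it
# might clear the failing device-mapper target (DESTRUCTIVE, not run yet):
lvremove vg0/containers_great--python
```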
Thanks in advance. I really don’t want to start over and lose all my containers.