I have an LVM volume group on a pair of 4TB drives that are mirrored using mdadm. The volume group hosts several conventional LVs for qemu/kvm VM root filesystems, plus my LXD thin pool. I am in the process of moving the LVs to a new mdadm mirror of 1TB SSDs. The physical volumes look like this:
# pvs
PV       VG  Fmt  Attr PSize   PFree
/dev/md0 vg0 lvm2 a--  931.38g <578.68g
/dev/md1 vg0 lvm2 a--   <3.64t   <2.92t
(md1 is the old 4TB array; md0 is the new 1TB SSD array.)
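For context, the new array went into the VG the usual way. Roughly this (the member partitions /dev/sdX1 and /dev/sdY1 are placeholders for my actual SSD partitions):

```shell
# Assemble the new 1TB SSD mirror (member devices are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1

# Label it as an LVM physical volume and add it to the existing VG
pvcreate /dev/md0
vgextend vg0 /dev/md0
```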
Previously, when I needed to rotate out old disks (or upgrade the array), I could add the new array to the volume group as a PV, mirror each LV onto it, and then drop the old array. That has gone well so far for the regular qemu LVs: I used the command
lvconvert -m 1 vg0/<LVname> /dev/md0
to create a mirrored copy on the new storage, and it succeeded for every qemu LV. But it turns out this does not work for thin pool LVs. My LVs now look like this:
# lvs -o+devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
LXDPool vg0 twi-aotz-- 382.70g 5.42 1.30 LXDPool_tdata(0)
LXDPool_meta0 vg0 -wi-a----- 1.00g /dev/md0(90036)
LXDPool_meta1 vg0 -wi-a----- 1.00g /dev/md1(139951)
LXDPool_meta2 vg0 -wi-a----- 1.00g /dev/md1(140207)
Windows7 vg0 rwi-a-r--- 45.00g 100.00 Windows7_rimage_0(0),Windows7_rimage_1(0)
containers_ldap vg0 Vwi-a-tz-- 21.00g LXDPool images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 5.80
containers_local--nextcloud--server vg0 Vwi-a-tz-- 21.00g LXDPool 8.74
containers_mailinabox vg0 Vwi-a-tz-- 21.00g LXDPool 13.46
containers_mailserver vg0 Vwi-a-tz-- 21.00g LXDPool images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 4.03
containers_myNCcontainer vg0 Vwi-a-tz-- 21.00g LXDPool 8.72
containers_mythserver vg0 Vwi-a-tz-- 64.00g LXDPool 5.41
containers_nextcloud vg0 Vwi-aotz-- 21.00g LXDPool 11.46
containers_roundcube vg0 Vwi-a-tz-- 21.00g LXDPool 4.91
containers_test--ui1 vg0 Vwi-a-tz-- 21.00g LXDPool images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e 4.09
couv vg0 rwi-a-r--- 34.18g 100.00 couv_rimage_0(0),couv_rimage_1(0)
e2fs-cache vg0 rwi-a-r--- 200.00g 100.00 e2fs-cache_rimage_0(0),e2fs-cache_rimage_1(0)
images_84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9 vg0 Vwi-a-tz-- 21.00g LXDPool 5.30
images_8a825c7097bd5e61c292049503c8d00f68fc98d8b5f5ebe4a45fd844d688aaf9 vg0 Vwi-a-tz-- 21.00g LXDPool 3.49
images_8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d vg0 Vwi-a-tz-- 10.00g LXDPool 8.76
images_a425df004fda0876a060d5b4f76ac790e85c725449273d991939a4cdb179ad83 vg0 Vwi-a-tz-- 10.00g LXDPool 8.76
images_d18af57022afb992e26314013406e3dca64e458b8a63f0e1668c7eb60038596e vg0 Vwi-a-tz-- 21.00g LXDPool 4.08
images_ea1d9641ca09f8d7b55548447493ed808113322401861ab1e09d1017e07d4ebd vg0 Vwi-a-tz-- 10.00g LXDPool 9.00
owncloud vg0 rwi-aor--- 64.00g 100.00 owncloud_rimage_0(0),owncloud_rimage_1(0)
squeezeserver vg0 rwi-aor--- 8.50g 100.00 squeezeserver_rimage_0(0),squeezeserver_rimage_1(0)
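For the regular (non-thin) LVs above, the full rotation I've been doing is roughly this (the LV name here is a placeholder):

```shell
# 1. Add a raid1 leg on the new array
lvconvert -m 1 vg0/someLV /dev/md0

# 2. Watch Cpy%Sync until the new leg reaches 100%
lvs -o name,copy_percent vg0/someLV

# 3. Drop the leg that lives on the old array
lvconvert -m 0 vg0/someLV /dev/md1
```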
Hopefully this makes sense (it barely does to me).
What is the recommended procedure for migrating thin pool LVs (or perhaps the entire LXDPool itself?) from one physical volume to another? Ideally I would copy them first, so I can drop the redundancy later and remove the PV on the slower array. Note also that I have three copies of the LXDPool metadata strewn across md0 and md1; those are left over from an earlier (failed) attempt that nearly left my LXD pool completely unusable.
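For what it's worth, the only alternative I can think of is moving extents at the PV level rather than mirroring at the LV level, something like the sketch below. But I don't know whether that is safe or recommended for a live thin pool, which is really what I'm asking:

```shell
# Move every allocated extent off the old array (online, resumable) --
# this assumes the new PV has enough free extents to hold them all
pvmove /dev/md1 /dev/md0

# Once the old PV is empty, drop it from the VG and clear its LVM label
vgreduce vg0 /dev/md1
pvremove /dev/md1
```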