This tutorial covers clearing an existing partition (/dev/sda4) which contains a volume group with a single logical volume and a mounted filesystem on it, then configuring a fresh volume group from that partition plus three additional disks (/dev/sdb, /dev/sdc and /dev/sdd) for use as an LXD storage pool.
Clear existing logical volume(s) (if needed)
On my host I had set up an existing logical volume called ‘all’, mounted on /home, and gave it all remaining space (leaving 30GB for the Ubuntu 20.04 system on a separate partition).
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 62M 1 loop /snap/core20/1611
loop1 7:1 0 67.8M 1 loop /snap/lxd/22753
loop2 7:2 0 47M 1 loop /snap/snapd/16292
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part /boot/efi
├─sda2 8:2 0 3G 0 part /
├─sda3 8:3 0 512M 0 part [SWAP]
├─sda4 8:4 0 1.8T 0 part
│ └─vg-all 253:0 0 1.8T 0 lvm /home
└─sda5 8:5 0 2M 0 part
sdb 8:16 0 1.8T 0 disk
sdc 8:32 0 1.8T 0 disk
sdd 8:48 0 1.8T 0 disk
You can see above that /home (shown as vg-all) is on sda4; we want the new VG to include that partition later, along with sdb, sdc and sdd.
I’m logged in as the default ubuntu user, so I set passwords for the ubuntu and root users. Then, to allow root SSH logins:
nano /etc/ssh/sshd_config
Add `PermitRootLogin yes`, then reload the SSH daemon and log out:
service ssh reload
exit
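As an aside, the sshd_config edit can be scripted instead of done in nano. A minimal sketch, run here against a scratch copy (/tmp/sshd_config.demo is just an illustration; point the sed at /etc/ssh/sshd_config once you are happy with the result):

```shell
# Scratch copy so the real config stays untouched while experimenting.
cfg=/tmp/sshd_config.demo
printf '#PermitRootLogin prohibit-password\nPasswordAuthentication yes\n' > "$cfg"

# Replace any (possibly commented-out) PermitRootLogin line:
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' "$cfg"

grep '^PermitRootLogin' "$cfg"   # should now print: PermitRootLogin yes
```

Remember to `service ssh reload` afterwards so sshd picks up the change.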
Next, working as root, I want to make sure /home is never mounted again and that the ubuntu user won’t be holding it open, blocking later commands with a ‘device busy’ error.
nano /etc/fstab
Remove the line containing /home so the system no longer tries to mount it.
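The fstab edit can also be scripted. A sketch, demonstrated on a scratch copy with made-up entries (swap in /etc/fstab and your actual device names once verified):

```shell
# Build a scratch fstab with a hypothetical /home entry.
fstab=/tmp/fstab.demo
cat > "$fstab" <<'EOF'
/dev/sda2 / ext4 defaults 0 1
/dev/mapper/vg-all /home ext4 defaults 0 2
EOF

# Delete any line whose mount-point field is /home
# (the \#...# form lets us use '#' as the regex delimiter).
sed -i '\#[[:space:]]/home[[:space:]]#d' "$fstab"

cat "$fstab"
```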
reboot
Log in as root and confirm that /home is not mounted:
umount /home
umount: /home: not mounted.
Run lvs to show the current LVM state:
lvs
WARNING: Couldn't find device with uuid 7B1V6G-NVcM-4HMF-PFVO-RqJa-tH6P-urUXEM.
WARNING: VG ALL is missing PV 7B1V6G-NVcM-4HMF-PFVO-RqJa-tH6P-urUXEM (last written to /dev/sda4).
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
all vg -wi-ao---- <1.82t
So I see above that the LV ‘all’ belongs to the VG ‘vg’ (LVM exposes it as /dev/vg/all, with an equivalent /dev/mapper/vg-all node). I can confirm the device exists with:
ls -l /dev/vg/all
Disable the LV and delete it:
lvchange -an /dev/vg/all # disable
lvremove /dev/vg/all # delete
Create new volume group
Make sure no filesystem signatures remain on any of the devices we want to join (note: wipefs is destructive):
wipefs --all --force /dev/sdb
wipefs --all --force /dev/sdc
wipefs --all --force /dev/sdd
wipefs --all --force /dev/sda4
Now initialize the devices as physical volumes with pvcreate, then join them together into a volume group called ‘ALL’ with vgcreate:
pvcreate /dev/sda4 /dev/sdb /dev/sdc /dev/sdd
vgcreate ALL /dev/sda4 /dev/sdb /dev/sdc /dev/sdd
The vg should be ready for LXD to consume!
vgs
VG #PV #LV #SN Attr VSize VFree
ALL 4 0 0 wz--n- 7.27t 7.27t
Let’s get an up-to-date LXD:
apt update && apt upgrade -y
snap remove lxd
snap install lxd --channel=latest/stable
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: lvm
Create a new LVM pool? (yes/no) [default=yes]: no
Name of the existing LVM pool or dataset: ALL
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Show the result:
lxc storage list
+---------+--------+--------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+--------+-------------+---------+---------+
| default | lvm | ALL | | 1 | CREATED |
+---------+--------+--------+-------------+---------+---------+
Let’s have a look at the LXD storage pool that was created, as I’d never seen an LVM pool before:
lxc storage show default
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg_name: ALL
  source: ALL
  volatile.initial_source: ALL
description: ""
name: default
driver: lvm
used_by:
- /1.0/profiles/default
status: Created
locations:
- none
Check that it’s not mounted and not using a loop device:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.6M 6.3G 1% /run
/dev/sda2 29G 2.9G 25G 11% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/loop0 62M 62M 0 100% /snap/core20/1611
/dev/loop2 47M 47M 0 100% /snap/snapd/16292
/dev/sda1 511M 5.3M 506M 2% /boot/efi
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/loop1 103M 103M 0 100% /snap/lxd/23270
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
Use lvs to show the new LXD-created LVM thin pool on the volume group ALL:
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LXDThinPool ALL twi-a-tz-- <7.25t 0.00 10.42
Hope this helps new users with LVM storage.
For more info on LVM with LXD, please see LVM - lvm - LXD documentation.