How to configure an LXD storage pool to use an entire partition?

I experimented a lot with a ZFS storage pool, but databases work slowly with virtualized files like this.
So I want to follow the advice:
“Whenever possible, you should dedicate a full disk or partition to your LXD storage pool.”
However, I don’t have a full disk, only a partition.

I am not experienced with partition and volume management. Is there a simple way to do this on Ubuntu 18.04?

Hi!

You can view your available partitions using GNOME Disks (application: gnome-disks).
You need to identify the partition name, something that looks similar to /dev/sda9 or /dev/sdb8. That is, the first part is /dev/sd, then a letter like a, b, or c, and finally a number.
You need to get this right.

Then you can enable this partition as a storage pool with ZFS. If you have an existing installation of LXD, you can then move your containers to the new storage pool.
If the above makes sense to you, tell us and we will give further instructions.
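
As a command-line alternative to GNOME Disks, you can list your block devices with lsblk (read-only, safe to run):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

The partition you want should show up with type "part" and, ideally, no mountpoint.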

Does the partition have to be mounted?

Or just created and formatted?

I already have a partition and want to use the logical volume /dev/ubuntu-vg/lxd0 for the storage pool.
I mounted it on /lxd0 but I can unmount and reformat it…

For reference, here is the current vgdisplay output:

$ vgdisplay -v
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <34.00 GiB
  PE Size               4.00 MiB
  Total PE              8703
  Alloc PE / Size       8703 / <34.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               bmkf38-tqqx-f3Kt-Pqvb-dLsH-5DGn-jHlgY8

  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                aHZv83-mmBE-W0fU-p0NC-wRuK-robb-3UkPXj
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2020-09-21 13:15:53 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/lxd0
  LV Name                lxd0
  VG Name                ubuntu-vg
  LV UUID                XKTzDa-oj4E-oXHD-4cKe-5oxI-IhEg-eMTMPy
  LV Write Access        read/write
  LV Creation host, time lxd30, 2020-09-21 14:46:04 +0000
  LV Status              available
  # open                 1
  LV Size                <24.00 GiB
  Current LE             6143
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Physical volumes ---
  PV Name               /dev/sda3
  PV UUID               23UTK9-17p7-m4eD-6jSJ-YXHM-z6jk-lFfpYs
  PV Status             allocatable
  Total PE / Free PE    8703 / 0

LXD prefers the partition to be unformatted and unmounted.

In your case, you have LVM. LXD supports LVM as a storage backend; see the LXD storage documentation.
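
If the logical volume is currently formatted and mounted (as /dev/ubuntu-vg/lxd0 is on /lxd0 above), a minimal sketch for returning it to an unformatted, unmounted state would be the following; wipefs is destructive, so double-check the device name first:

sudo umount /lxd0
sudo wipefs -a /dev/ubuntu-vg/lxd0

Also remove any corresponding /etc/fstab entry so it is not remounted at boot.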

I have tried a lot, but I always get an error or some other problem.
Is there an example of a working configuration with an lxd init storage pool on a partition?

You have to decide between LVM and ZFS. Since you have LVM, I’ll go with LVM.
You can either configure the LVM storage pool when you run lxd init, or add it afterwards. In your case, you most likely want to add it afterwards.

  1. You need to run lxc storage create to create an entry in LXD for the LVM storage pool.
  2. You need to add that storage pool to the default LXD profile, so that new containers will be created in that storage pool (see the sketch after this list).
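
A minimal sketch of those two steps, assuming a pool named pool1 (a placeholder) on the ubuntu-vg volume group from your vgdisplay output:

lxc storage create pool1 lvm source=ubuntu-vg
lxc profile device add default root disk path=/ pool=pool1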

I found this blog post that shows exactly that, https://www.pither.com/simon/blog/2018/09/28/lxd-lvm-thinpool-setup

The linked post is incomplete and did not work as-is.
But finally, drawing on other posts and my own understanding, a storage pool on a dedicated LVM partition works, and I get much better results with databases when the MariaDB files are not virtualized. Much faster…

So now I have a working configuration for a storage pool with LVM.

HowTo:
Prepare your system by leaving free (unallocated) space in your volume group; this space will be used to create the storage pool volume.

# vgdisplay -v
  --- Volume group ---
  VG Name               ubuntu-vg   <= this is the demo volume group to be used
Keep unallocated space in it for the dedicated LVM storage pool.
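
To confirm how much unallocated space remains in the volume group, vgs gives a compact summary (the VFree column is the unallocated space):

sudo vgs ubuntu-vg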

1: Install thin-provisioning-tools:

apt install thin-provisioning-tools

2: Create the LXDPool thin pool logical volume using 100% of the free space in volume group ubuntu-vg:

lvcreate --type thin-pool --thinpool LXDPool -l 100%FREE ubuntu-vg
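
You can check that the thin pool was created with lvs; LXDPool should be listed with attributes starting with "t" (thin pool):

sudo lvs ubuntu-vg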

3: Run lxd init without configuring a storage pool:

lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: no
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
storage_pools: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
  name: default
cluster: null
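
As an aside, this printed preseed can be fed back to lxd init on another machine to reproduce the same configuration (the file name here is just an example):

lxd init --preseed < preseed.yaml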

4: Create the default storage pool using the thin pool on volume group ubuntu-vg:

lxc storage create default lvm source=ubuntu-vg lvm.vg.force_reuse=true lvm.use_thinpool=true lvm.thinpool_name=LXDPool

nb: lvm.vg.force_reuse forces LXD to use an existing, non-empty volume group (needed here, because ubuntu-vg already contains the root logical volume).
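
At this point the pool should be visible to LXD; you can verify with:

lxc storage list
lxc storage show default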

5: Add this default storage pool to the default LXD profile:

lxc profile device add default root disk path=/ pool=default
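
You can confirm the profile now carries the root device with:

lxc profile show default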

6: To set the default container root disk size:
lxc profile device set default root size=20GB

Now you can launch your containers; they will use the dedicated storage pool, with direct access to the real disk… faster.
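
For example (the image alias and container name are just placeholders):

lxc launch ubuntu:18.04 db1
lxc config show db1 --expanded | grep -A3 root

The expanded config should show the root disk device with pool: default.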
In my case, with a ZFS pool without a dedicated disk or volume, an SQL re-import took 3 h 42 min.
With the dedicated LVM disk, the same SQL re-import, based on the same container, took only 44 minutes…
And this is on VirtualBox, with only 3 CPUs… the gain is real.

So, this is how to do it with LVM.
Could ZFS be better or faster in this case?
