The posts linked above are incomplete and do not work as-is.
But finally, by piecing together other posts and some understanding, a storage pool on a dedicated LVM partition works, and I get much better results with databases when the MariaDB files are not virtualized. Much faster.
So now I have a working configuration for a storage pool on LVM.
HowTo:
Prepare your system: leave free space (not allocated to any logical volume) in your volume group.
This space will be used to create the storage pool partition.
# vgdisplay -v
  --- Volume group ---
  VG Name               ubuntu-vg   <= this is the demo volume group to be used
Keep unallocated space in the volume group; it will be used by the dedicated LVM partition for the storage pool.
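A quick way to check how much unallocated space remains (assuming the volume group is named ubuntu-vg, as in this demo):

```shell
# Show the total size and the remaining free space of the volume group.
# VFree must be non-zero, or there is nothing left for the thin pool.
sudo vgs -o vg_name,vg_size,vg_free ubuntu-vg
```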
1: install thin-provisioning-tools
apt install thin-provisioning-tools
2: create the LXDPool logical volume using 100% of the free space in volume group ubuntu-vg
lvcreate --type thin-pool --thinpool LXDPool -l 100%FREE ubuntu-vg
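If the pool was created correctly, it should show up as a thin pool. A quick check (volume and group names as used above):

```shell
# List the logical volumes in ubuntu-vg; LXDPool should appear
# with a "t" (thin pool) attribute, e.g. "twi-a-tz--".
sudo lvs -o lv_name,lv_attr,lv_size,data_percent ubuntu-vg
```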
3: lxd init without storage pool
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: no
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
storage_pools:
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
  name: default
cluster: null
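As a side note, the printed preseed can be saved and replayed non-interactively on another machine; `lxd init` accepts it on stdin (the file name here is just an example):

```shell
# Replay a previously saved preseed instead of answering the prompts.
sudo lxd init --preseed < preseed.yaml
```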
4: create the default storage pool using the thin pool on volume group ubuntu-vg
lxc storage create default lvm source=ubuntu-vg lvm.vg.force_reuse=true lvm.use_thinpool=true lvm.thinpool_name=LXDPool
NB: lvm.vg.force_reuse forces the use of an existing, non-empty volume group (just in case…)
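To confirm that the pool was created and is backed by the thin pool:

```shell
# The new pool should be listed with driver "lvm".
lxc storage list

# Show the pool's configuration, including lvm.thinpool_name and source.
lxc storage show default
```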
5: add this default storage pool to the default LXC profile
lxc profile device add default root disk path=/ pool=default
6: set the default container root disk size
lxc profile device set default root size=20GB
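To verify the profile and watch the thin volumes being created per container (the image alias and container name below are just examples):

```shell
# The root device should now point at pool "default" with size 20GB.
lxc profile show default

# Launch a test container; its root disk becomes a thin LV inside LXDPool.
lxc launch ubuntu:22.04 c1

# A new thin volume for c1 should appear under LXDPool in the volume group.
sudo lvs ubuntu-vg
```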
Now you can launch your containers; they will use the dedicated storage pool with direct access to the real disk… faster.
In my case, with a ZFS pool without a dedicated disk or volume, an SQL re-import took 3 h 42 min.
With the dedicated LVM disk, the same SQL re-import, on the same container, took only 44 minutes…
And this is on VirtualBox, with only 3 CPUs… the gain is real.
So, that's it for LVM.
Could ZFS be better or faster in this case?