How to autoprovision additional disks using profiles or Ansible in a cluster?

Hi

I’m trying to deploy containers/VMs with additional disks created in different pools, then attached and mounted at deployment time. Is this possible?

The long story…

We split some application servers this way:

disk0 = OS, binaries, configs, logs…
disk1 = translogs
disk2 = databases

I’m expecting these additional disks to be created automatically, on the fly, on whichever host LXD chooses to deploy the instance to. If I set them in a profile, LXD won’t let us save the profile without specifying the “source” volume, which won’t exist yet because the destination host hasn’t been determined yet:

Config parsing error: Device validation failed for "translogs_disk1": Disk entry is missing the required "source" property
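(For reference, the profile does seem to save if “source” names an existing custom volume in that pool, so a sketch like the one below should validate; but the volume, here a hypothetical translogs_vol, would already have to exist, which is exactly the problem:)

    devices:
      disk1:
        path: "/mnt-appsrv/translogs_disk1"
        pool: "lxd_local_disk1"
        source: "translogs_vol"   # hypothetical pre-created custom volume
        type: disk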

The devices section of the profile, or of the Ansible LXD_Container playbook, for a Linux container:

devices:
  disk1:
    path: "/mnt-appsrv/translogs_disk1"
    pool: "lxd_local_disk1"
    size: 10GB
    type: disk
  disk2:
    path: "/mnt-appsrv/databases_disk2"
    pool: "lxd_local_disk2"
    size: 20GB
    type: disk

There will be Windows VMs as well.

Hosts and storage pools, e.g.:

lxdhost1

disk0 = SAS RAID10, primary OS files, app binaries
disk1 = NVMe, data
disk2 = NVMe, data

lxdhost2

disk0 = SAS RAID10, primary OS files, app binaries
disk1 = NVMe, data
disk2 = NVMe, data

lxdhost3

disk0 = SAS RAID10, primary OS files, app binaries
disk1 = SAS RAID10, data
disk2 = SAS RAID10, data

LXD Cluster Storage Pools

lxd_local_disk0 = lxdhost1.disk0, lxdhost2.disk0, lxdhost3.disk0
lxd_local_disk1 = lxdhost1.disk1, lxdhost2.disk1, lxdhost3.disk1
lxd_local_disk2 = lxdhost1.disk2, lxdhost2.disk2, lxdhost3.disk2
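(For context, per-member pools like these get created in two passes in a cluster: first pending on each member, then instantiated cluster-wide. A rough Ansible sketch, assuming ZFS and a hypothetical /dev/nvme0n1 source device, which differs per host:)

    - name: Create the pool on each member (left in pending state)
      ansible.builtin.command: >
        lxc storage create lxd_local_disk1 zfs
        source=/dev/nvme0n1 --target {{ item }}
      loop: [lxdhost1, lxdhost2, lxdhost3]

    - name: Instantiate the pool across the whole cluster
      ansible.builtin.command: lxc storage create lxd_local_disk1 zfs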

Thanks


So for the above scenario of additional disks in instances (containers & virtual-machines), it looks to me like these additional disks are best kept on storage managed externally to LXD, with their backends then referenced in source.

Configuration of this seems to be:

  • figure out whether to use at least 1 LXD-managed storage pool for the instances, or go fully externally-managed storage… probably on ZFS
  • figure out how to reference source dynamically
  • prepare the additional storage using external storage management tools
  • potentially have to configure cgroup & apparmor permissions
  • stipulate --target for all instances (see the sketch after this list)
  • get all of the above to work in Ansible
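Pulling the --target and source pieces together, a sketch of what I have in mind via the community.general.lxd_container module (the instance name appsrv1 and volume name appsrv1-translogs are hypothetical, and the volume is assumed to already exist on lxdhost1):

    - name: Deploy an instance on a chosen member with a pre-created volume attached
      community.general.lxd_container:
        name: appsrv1
        state: started
        target: lxdhost1          # pin the instance to a cluster member
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          protocol: simplestreams
          alias: ubuntu/22.04
        devices:
          disk1:
            type: disk
            pool: lxd_local_disk1
            source: appsrv1-translogs          # pre-created custom volume
            path: /mnt-appsrv/translogs_disk1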

References

Instance configuration > device > disks

incus/doc/instances.md at main · lxc/incus · GitHub
  • size is only supported for the root disk
  • pool is not required
  • source is required and can refer to one of several storage backends
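(So, in contrast to the profile above, a root disk entry is the one place size is honoured in a profile, e.g.:)

    devices:
      root:
        path: /
        pool: lxd_local_disk0
        size: 20GiB
        type: disk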

Device cgroups & apparmor in LXC article:

Permission denied for mount filesystem - #6 by JeToJedno

Anything else, or other pointers that you can think of?

Thanks

I had the same question, and after some searching I understand that there is currently no way to automate the creation of new storage volumes through profiles, or any other way in LXD (I do not know if it can be done using Ansible). The storage volume has to be created first and then attached to the instance. You can see the other questions:

Good morning Ahmad

I didn’t get LXD to do it automatically, but it worked fine via Ansible, using variables in the inventory for instance groups. I cheated and used:

  • the shell module to issue lxc storage volume create for standard filesystem and block types (sketched below)
  • the zfs module to create a separate custom dataset hierarchy, so as not to interfere with LXD’s own, plus zvols for block types where the service that was going to use them didn’t like ZFS, e.g. Docker

but I suppose we could use the API as well. The catch with this method is that we need to know beforehand which host to deploy the storage and instance to; for us that’s not an issue, so it’s just another inventory variable. Or I suppose you could let LXD auto-allocate the instance (init only), then track down which host it landed on and store that in a variable, provision and attach the additional storage, and then start the instance.
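Roughly what those tasks look like (a sketch: the names, the lxd_target variable, and the tank/custom dataset hierarchy are illustrative, not our real inventory):

    - name: Create a custom filesystem volume for translogs
      ansible.builtin.command: >
        lxc storage volume create lxd_local_disk1
        {{ inventory_hostname }}-translogs size=10GiB
        --target {{ lxd_target }}
      delegate_to: localhost

    - name: Create a zvol outside LXD's hierarchy for Docker
      community.general.zfs:
        name: "tank/custom/{{ inventory_hostname }}-docker"
        state: present
        extra_zfs_properties:
          volsize: 20G
      delegate_to: "{{ lxd_target }}"

    - name: Attach the volume and start the instance
      ansible.builtin.shell: |
        lxc config device add {{ inventory_hostname }} disk1 disk \
          pool=lxd_local_disk1 source={{ inventory_hostname }}-translogs \
          path=/mnt-appsrv/translogs_disk1
        lxc start {{ inventory_hostname }}
      delegate_to: localhost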

LXD 5.11 now has a change which makes this easier for block volume requirements:
LXD 5.11 has been released
whereas before there were a few extra steps to get into the block volume and align permissions before attaching it to the instance.
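For anyone following along, the basic block-volume flow is roughly this (a sketch with illustrative names; block volumes attach to VMs without a path):

    - name: Create a block-typed custom volume and attach it to a VM
      ansible.builtin.shell: |
        lxc storage volume create lxd_local_disk2 appsrv1-databases \
          --type=block size=20GiB --target lxdhost1
        lxc config device add appsrv1 disk2 disk \
          pool=lxd_local_disk2 source=appsrv1-databases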
