LXD VM storage customization

Hello,
I want to boot a couple of VMs to use as a cluster and I am having issues with the storage.
I want my VMs to have a custom disk size (e.g. 100GB) on a specific path. I finally found out that my image needs cloud-init for the storage to be customizable, so I create my VMs with the following command:

lxc launch images:ubuntu/22.04/cloud test --vm

The end result is the following:

$ lxc exec test -- df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.8G  796K  2.8G   1% /run
/dev/sda2        91G  876M   90G   1% /
tmpfs            14G     0   14G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   13M   38M  25% /run/lxd_agent
/dev/sda1        99M  4.0M   95M   4% /boot/efi

The problem is that the 91GB of storage is mounted on / and not under, say, /home. Is there a way for me to ensure this storage ends up under a specific path?

My default profile looks like this:

config:
  limits.cpu: "6"
  limits.memory: 30GB
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 100GB
    type: disk
name: default
used_by: []

Another storage issue I have is the following: I am using the dir driver. Is everything I do to the VM's disk contained within the VM? For example, if I reformat the filesystem inside the VM for HDFS, would that affect the host partition the VM is located on? Furthermore, if the preconfigured storage in the VM is exhausted, does the VM start consuming the rest of the host's storage, or does it report that it is full? Usually LXC containers are unlimited in the storage they can use and I haven't been able to limit them, so I wonder whether the VM has the same caveat or is entirely self-contained.

Additionally, would zfs be a more useful storage driver for HDFS/Hadoop, or is dir good enough?


Regarding the first part about the additional storage in the instance (container/VM): I understand from your question that you want your newly provisioned instance to have additional storage space (with a specific size) mounted on some mount point, let's say /some-mount-point.

To do this in LXD, the storage volume (device, space, disk, or whatever you want to call it) that you want to attach (add) to the instance must be created before it can be used by the instance (the exception is the storage device used for the instance's root filesystem, which is created/allocated automatically when the instance is created).
As far as I know, there is currently no way to automate the creation of new storage volumes through profiles or any other mechanism. The storage volume must be created first and then attached to the instance.
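
That said, once a custom volume exists, it can be referenced from a profile as a disk device, so that every instance using the profile gets it attached automatically. A rough sketch (storage-profile is a hypothetical profile name; the volume c1-vol1 in pool zfs-pool must already exist):

lxc profile device add storage-profile data disk pool=zfs-pool source=c1-vol1 path=/mnt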

How do you create a new storage volume? You need to create it in an existing storage pool.

$ lxc storage list

+----------+--------+------------------------------------------------+-------------+---------+---------+
|   NAME   | DRIVER |                     SOURCE                     | DESCRIPTION | USED BY |  STATE  |
+----------+--------+------------------------------------------------+-------------+---------+---------+
| zfs-pool | zfs    | zfs-pool                                       |             | 17      | CREATED |
+----------+--------+------------------------------------------------+-------------+---------+---------+

$ lxc storage volume create zfs-pool c1-vol1

$ lxc storage volume set zfs-pool c1-vol1 size 10GiB    # in case you want to set its size to 10GiB
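
As a small shortcut (assuming a reasonably recent LXD), the size can also be passed directly at creation time as a key=value pair, combining the two steps above:

$ lxc storage volume create zfs-pool c1-vol1 size=10GiB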

The new storage volume should now be created successfully. Let's attach/add it to the instance. There are two methods: one with lxc storage volume attach and one with lxc config device add.

Method 1 (assuming the instance name is c1 and you want to mount the new storage volume at /mnt inside the instance):

lxc storage volume attach zfs-pool c1-vol1 c1 /mnt

or Method 2:

lxc config device add c1 c1-vol1 pool=zfs-pool source=c1-vol1 path=/mnt
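
Either way, you can check from inside the instance that the volume is mounted where you expect (reusing the example names above):

lxc exec c1 -- df -h /mnt

As far as I know, for VMs a custom filesystem volume like this is shared into the guest through the LXD agent (e.g. 9p/virtiofs) rather than showing up as a raw block device.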

Please also see these related questions: Custom block volumes defined in profiles? and Add additional storage pool to profile

Unfortunately, I do not have an answer to the second part of your question.

Hello,

Thank you very much for your reply. Unfortunately, I hit one issue with the following command:

lxc storage volume set zfs-pool c1-vol1 size 10GiB

I am unable to set the size of my volume; it automatically takes the size of my entire partition. I followed your tutorial exactly as is, with the only difference being that I use the dir storage driver instead of zfs. Do you have any ideas/suggestions as to why storage limitation is not working on a dir volume? My commands are the following:

lxc storage volume create dir-default node
lxc storage volume set dir-default node size=50GB
lxc launch images:ubuntu/22.04/cloud nodeVM --storage dir-default --vm
lxc storage volume attach dir-default node nodeVM /home

and ultimately

root@nodeVM:/# df /home/ -h
Filesystem      Size  Used Avail Use% Mounted on
lxd_node          3T  500G  3.0T  16% /home

Hi cli0, you are right: a storage volume of type dir is not affected by setting a size. I reproduced your issue and got the same results: after creating a storage volume, setting its size to a fixed value, and attaching it to a container, the volume's size as seen from inside the container is the size of the host filesystem where the dir volume was created.

As per the documentation [1]:

The dir driver supports storage quotas when running on either ext4 or XFS with project quotas enabled at the file system level

However, you can still set quotas on dir storage volumes. See [1] and [2].

References:
[1] https://linuxcontainers.org/lxd/docs/master/reference/storage_dir/#storage-dir-quotas
[2] https://linuxcontainers.org/lxd/docs/master/reference/storage_drivers/#feature-comparison

You may find it helpful to look at how to enable ext4 project quotas; a rough sketch follows.
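
A minimal sketch of enabling project quotas on ext4 (assuming the dir pool lives on an ext4 filesystem at /dev/sdb1 mounted at /mnt/lxd-dir; both names are just examples, and tune2fs needs the filesystem unmounted):

# enable the project feature and project quota accounting (filesystem must be unmounted)
sudo tune2fs -O project -Q prjquota /dev/sdb1
# remount with project quotas active
sudo mount -o prjquota /dev/sdb1 /mnt/lxd-dir

After that, size limits set on dir volumes should actually be enforced.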

Otherwise, the size of a dir volume is only restricted by the size of the host filesystem.
