Hello,
I want to boot a couple of VMs to use as a cluster, and I am having issues with the storage.
I want my VMs to have a custom disk size (e.g. 100GB) mounted at a specific path. I finally found out that my image needs cloud-init for the storage to be customizable, so I create my VMs with the following command:
lxc launch images:ubuntu/22.04/cloud test --vm
The end result is the following:
$ lxc exec test -- df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.8G  796K  2.8G   1% /run
/dev/sda2        91G  876M   90G   1% /
tmpfs            14G     0   14G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   13M   38M  25% /run/lxd_agent
/dev/sda1        99M  4.0M   95M   4% /boot/efi
The problem is that the 91GB of storage is located at / and not, say, under /home. Is there a way for me to ensure this available storage is mounted under a specific path?
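Would attaching a separate custom volume with a path be the way to go here? I'm imagining something like the commands below, where the volume name "data" and the /home path are just placeholders (and I'm not sure this is the right approach for VMs):

# create a 100GB custom volume on the default pool
lxc storage volume create default data size=100GB
# attach it to the VM so it gets mounted at /home inside the guest
lxc config device add test data disk pool=default source=data path=/home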
My default profile looks like this:
config:
  limits.cpu: "6"
  limits.memory: 30GB
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 100GB
    type: disk
name: default
used_by: []
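If it matters, I assume the equivalent one-off override at launch (instead of setting size in the profile) would be something like this, though I haven't confirmed it:

# assumed: override the root disk size for a single instance at launch
lxc launch images:ubuntu/22.04/cloud test --vm --device root,size=100GB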
Another storage issue I have is the following: I am using the dir driver. Does the VM disk limit everything I do inside the VM to the VM itself? For example, if I format the filesystem inside the VM for HDFS, would that affect the entire host partition the VM is located on? Furthermore, if the preconfigured storage in the VM is exhausted, does the VM use the rest of the host's storage, or does it signal that it is full? Usually LXC containers are limitless in the storage they use and I haven't been able to limit them, so I wonder whether the VM has the same caveat or is entirely self-contained.
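I suppose one crude way to check would be to try writing past the limit from inside the VM and see whether it errors out with "no space left on device", something like:

# rough test: attempt to write ~200GB into a VM whose disk is 100GB
lxc exec test -- dd if=/dev/zero of=/root/fill.bin bs=1M count=200000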
Additionally, would zfs be more useful as a storage driver for HDFS/Hadoop, or is dir good enough?
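For reference, I assume switching would just mean creating a zfs-backed pool and pointing the profile's root device at it, roughly like this (pool name and size are made up):

# create a loop-backed zfs pool
lxc storage create zpool zfs size=200GB
# repoint the default profile's root disk at the new pool
lxc profile device remove default root
lxc profile device add default root disk pool=zpool path=/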