How to set root device size when creating a new instance?

Hi, new user, be gentle. I am using LVM as my storage option, and for $reasons I have set my default volume.size to be quite small:

# lxc storage show vg_fast
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg.force_reuse: "true"
  lvm.vg_name: vg_fast
  source: vg_fast
  volatile.initial_source: vg_fast
  volume.block.filesystem: xfs
  volume.size: 2500MB
description: ""
name: vg_fast
driver: lvm
used_by:

However, I wanted to create a new instance which needed a larger root filesystem, around 3GB, to fit the initial image. Since the default is to create a 2.5GB root partition, I tried the following in order to get a larger initial partition:

lxc init migrate-gitea gitea2 -d root,size=5GB

However, this still leads to an “out of disk space” error as the tar file unpacks (i.e. the root is still being created as a 2.5GB partition). I was able to work around it by temporarily raising the volume.size key on the storage pool, creating my instance, and then setting it back again. However, is there an incantation to do this as part of the initial init/launch?
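
To spell out that workaround, what I ended up doing was roughly this (5GB being just what this particular image needed):

lxc storage set vg_fast volume.size 5GB      # temporarily raise the pool default
lxc init migrate-gitea gitea2                # root volume now gets created at 5GB
lxc storage set vg_fast volume.size 2500MB   # put the default back afterwards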

Note that creating a separate storage volume to mount into this instance was simpler:

lxc storage volume create vg_fast gitea-data size=20GB

As a new user, it feels like there is a mismatch in how these two sizes are specified?

Not terribly related, but I also have some doubts about how to access my new storage volume above, e.g. in order to migrate some data into it. I could create a new instance and attach the volume there, then use sshfs to access the mount. However, I wonder if it’s acceptable to simply mount the LVM volume from the host and copy files in that way?
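
In case it helps frame the question, the route I can imagine being “supported” (the paths and directory names here are just my example) would be to attach the volume to an instance and push files through LXD:

lxc storage volume attach vg_fast gitea-data gitea2 /srv/data   # expose the custom volume inside the instance
lxc file push -r ./somedata gitea2/srv/data/                    # copy a directory in from the host via LXD

…versus just mounting the underlying LV directly on the host, which is what I’m really asking about.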

(Edited for clarity)


How big is the migrate-gitea image once unpacked? If it’s around or greater than 5GB, then that is why you are getting those out of disk space errors.

Have you tried using increasingly larger sizes to see where it works?

Hi, thanks for replying. The question was far simpler… How do I increase the disk partition size!!!

(For the sake of illustration: in the example above I default the storage to a 2.5GB size, but I need around 3GB for this specific image. How do I override the default size?)

Note, I wonder if there are some clues in what happens when I try to resize a partition after building the instance. I observe that for the first resize I need to use this:

lxc config device override instance-name root size=4GB

Then to resize it a second time I need to use:

lxc config device set instance-name root size=5GB

So I deduce that my original question, about how to set the initial root partition size, MIGHT hinge on trying to pass “override” to the init/launch call, rather than trying to set “root,size=5GB”? However, I’m still unsure how to do this…
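
My current (possibly wrong) reading of those two commands is that “override” copies the profile’s root disk into the instance’s own config with the new size, while “set” only works once the instance already carries that device itself:

lxc config device override instance-name root size=4GB   # first resize: root only exists in the profile, so copy it locally with the new size
lxc config device set instance-name root size=5GB        # later resizes: the instance now has its own root entry
lxc config device show instance-name                     # shows which devices the instance carries locally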

Ah I see. Well, because LXD has no idea what filesystem or partition layout is in use inside the VM’s disks, it cannot resize the partitions or filesystems themselves, only the virtual disk. That is what you’re specifying when you set size=4GB.

But you can resize the filesystem manually (if just using a single partition), see these previous posts:

The specific package to install may vary between OSes.
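
As a rough illustration (assuming a single-partition layout with the root on /dev/sda2, like the VM examples further down; adjust for your image), the manual steps inside the guest look like:

growpart /dev/sda 2    # grow the root partition to fill the enlarged virtual disk (from cloud-utils / cloud-guest-utils)
resize2fs /dev/sda2    # for an ext4 root filesystem
xfs_growfs /           # or this instead, for an XFS root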

Hmm, ok that seems a little unsatisfactory in general? So how are others managing this?

Say you want to do some automated creation of instances, but you don’t have unlimited disk space. So you set your default profile to give the root filesystem, say, 10GB. However, then you want to create some instance which, for whatever reason, needs a much larger root filesystem, say 50GB.

I presume there must be a way to, for example, create an initial instance with no files, resize the filesystem, and then re-run “launch” to copy in the image contents? I’m using LVM, if that affects the instructions. Note, I know how to resize the filesystem after the instance is created (“lxc config device override instance-name root size=50GB”); the question is how to integrate that into the instance build process?
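
The best I’ve come up with so far is keeping a second profile around for the occasional big instance (untested, and “bigroot” is just a name I invented):

lxc profile copy default bigroot                  # clone the default profile
lxc profile device set bigroot root size=50GiB    # give its root disk the larger size
lxc launch myimage big-instance -p bigroot        # use it only for the odd large instance

…but that still feels clunky compared to a per-instance override at launch time.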

However, can I suggest that this sounds like a right old b@ll ache? Could I ask for a feature request to allow the initial default profile to be overridden when creating an instance??!! It would be massively easier to be able to do something along the lines of:

lxc launch myimage new-inst-name -d root,size=50GB

given:

# lxc profile show default
config: {}
description: Default LXD profile
devices:
  root:
    path: /
    pool: vg_fast
    type: disk
name: default

# lxc storage show vg_fast
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg.force_reuse: "true"
  lvm.vg_name: vg_fast
  source: vg_fast
  volatile.initial_source: vg_fast
  volume.block.filesystem: xfs
  volume.size: 2500MB
description: ""
name: vg_fast
driver: lvm

It feels baffling to me that the simplest way to change the initial size of an instance is to edit the defaults for your storage pool, create the instance, and then revert the changes on the storage pool? Surely there must be a simpler method?

(Background: in my case, nearly all my root filesystems are tiny, much less than 1GB, so I prefer to keep instances tiny. However, every so often I need to create other instances that are more like the default image sizes, and this very moment I’m trying to migrate some old linux vserver images, one of which includes its data and so is about 1TB. I don’t want my default instance size to need to be in the TB range just so that I can start up the odd TB-sized root filesystem? Crazy…)

Using an image with cloud-init installed will automatically grow the root filesystem as it boots, as mentioned in the other post.

OK, but I am building tiny custom images, so I don’t have cloud-init.

This page talks a lot about all of this being configurable with various keys:
How to manage storage volumes - LXD documentation

It just feels unexpected that one of the most important aspects of a virtual machine (storage size of the root filesystem) cannot be specified anywhere, except very indirectly via the storage pool itself? I feel I must be missing something? Can we not use profiles or config keys to override the storage volume?

E.g.

lxc storage create lvm lvm

Trying to create a VM with a root disk smaller than the image size fails:

lxc launch images:ubuntu/jammy v1 --vm -s lvm -d root,size=3GiB
Creating v1
Error: Failed instance creation: Failed creating instance from image: Source image size (4294967296) exceeds specified volume size (3221225472)

Creating a VM with a root disk the same size as the image works fine:

lxc launch images:ubuntu/jammy v1 --vm -s lvm -d root,size=4GiB
lxc exec v1 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       3.8G  698M  3.1G  19% /

Creating a VM with a root disk larger than the image works fine, but the resulting filesystem isn’t grown:

lxc launch images:ubuntu/jammy v2 --vm -s lvm -d root,size=10GiB
lxc exec v2 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       3.8G  698M  3.1G  19% /

Creating a VM with cloud-init installed with a root disk larger than the image gets the filesystem grown:

lxc launch images:ubuntu/jammy/cloud v3 --vm -s lvm -d root,size=10GiB
lxc exec v3 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.6G  780M  8.8G   9% /

This is incorrect.

The root disk size can be specified in the profile or instance device config via the root disk’s size setting.
The instance device config can be overridden from the profile at create time using the -d flag.

But, for VMs, this is controlling the volume size, not the filesystem size.
As I explained above, this is because for VMs (not containers) LXD does not mandate a particular partition layout or filesystem be used.

This is what allows for Windows and Unix VMs to be usable with LXD, for example.

I think we are talking at cross purposes? I am not using VMs; I’m only using containers (apologies if that wasn’t clear above; I thought this was the default?)

Then your custom image should use the smallest possible disk size (and by implication filesystem size), that way when you create VMs from it you can specify the image size or larger for the instance’s root disk size.

But you’ll still need something inside the image itself that detects a larger disk and grows the filesystem.
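
As a sketch only (assuming an XFS root on /dev/sda2; cloud-init’s growpart module is the polished version of this), a tiny script run once at boot from the image’s own init system would do it:

#!/bin/sh
# grow-root.sh: grow the root partition and filesystem to fill whatever disk size LXD provided
growpart /dev/sda 2 || true   # a non-zero exit just means there was nothing to grow
xfs_growfs /                  # grow the XFS root filesystem to fill the partition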

Oh yes we are.

What is the problem with containers then?

This is what led me to think you were talking about VMs.

This is incorrect.

Hmm, no I don’t think it is. Something peculiar is going on? Perhaps it’s the ordering of events which isn’t working as expected?

Consider:

lxc launch images:gentoo/openrc v2 -d root,size=10GiB
Creating v2
Starting v2

lxc exec v2 -- df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/vg_fast/containers_v2   10G  2.0G  8.1G  20% /

So creating an instance from an image where the initial filesystem is sub-2GB HAS allowed me to override the storage volume size and create the instance.

But consider this, where I zipped up a gentoo instance, previously running under linux-vservers, and followed the instructions here:

I think the tar file is slightly larger than the default root filesystem size, but much smaller than 10GB (specifically, the default filesystem size is 2.5GB and I think the tar image is around 2.8GB).

lxc launch migrate-gitea v3 -s vg_fast -d root,size=10GiB
Creating v3
...spew of tar ... Cannot create ...: No space left on device

Can you advise if there is a way to create an initial server with a blank filesystem, grow it, and then have the image installed later on? Note, a process which would be convenient for this current requirement (migrating VMs from another machine) is to be able to create a skeleton VM, rsync the data across, and replace the whole image.

Right now I have experimented with the following (I doubt it’s a supported approach):

  • linux vserver image consists of the root filesystem / plus the main data directories mounted in at various points, e.g. pretend it’s a mail server: the root fs is 1GB and the data dir is huge and mounted at /home/vmail
  • lxc launch images:randomimage migratedvm
  • lxc storage volume create …
  • the filesystem is at /var/lib/lxd/storage-pools/vg_fast/containers/migratedvm/rootfs, so rsync the complete filesystem into that, replacing the existing random image (can we avoid needing to create from an image at all?)
  • lxc storage volume attach vg_fast mail-data migratedvm /home/vmail
  • Now do the same to rsync the data in (is there a supported way to access custom storage volumes from the host, without attaching them to a random instance?)
  • now restart instance

Is there another way to create the initial instances? Assume a user moderately familiar with LVM and not looking for a supported lxc path here. This is strictly to get access to the raw partitions to preload initial data?
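
For the record, what I’m experimenting with right now (instance stopped, and I accept this is outside any supported path) follows the LV naming visible above (/dev/vg_fast/containers_v2); “migratedvm” and the rsync source are just my placeholder names:

lxc stop migratedvm                                        # make sure LXD isn’t using the volume
lvchange -ay vg_fast/containers_migratedvm                 # the thin LV may be deactivated while the instance is stopped
mount /dev/vg_fast/containers_migratedvm /mnt
rsync -aHAX oldhost:/vservers/migratedvm/ /mnt/rootfs/     # the container’s root lives under rootfs/ on the volume
umount /mnt
lvchange -an vg_fast/containers_migratedvm
lxc start migratedvm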

(Edit: Apologies for using the words “VM” in several places, probably including here. I’m coming from linux vservers and only intend to use container based virtualisation. However, I may have inadvertently used the wrong terminology. To be clear - no VMs anticipated in my future, only: LVM based containers, running XFS file system and linux)

Ah OK, it’s likely the use of -s which is causing you to hit a bug, fixed in LXD 5.9.

See:

https://github.com/lxc/lxd/pull/11158

When you use the -s flag the lxc command replaces the profile’s root disk with an instance level root disk that uses the specified storage pool.

However there was a bug in the recently added -d flag that would not work properly when combined with the -s flag, in that it would restore the original profile root disk storage pool, and then override the size.

So I can only assume that the storage pool from the profile isn’t big enough.

Anyway, I hit this exact issue and fixed it in LXD 5.9.

It’s in the candidate channel at the moment, but using sudo snap refresh lxd --channel=latest/candidate will get it.

The -d flag was only added in LXD 5.8:

Related question (sorry, no threading here, perhaps I should start a new question?)

Say I have already created an instance. This instance is small, say 5GB (running on an LVM pool with XFS as the underlying filesystem). Then I run the following:

  • lxc config device set some-container root size=10GB

Then I see this correctly expand the LVM volume and call the appropriate tools to resize the inner filesystem.

How does this happen? (Assume I understand how to do this myself using LVM/XFS tooling.) My assumption is that the host (i.e. where I’m typing the commands) calls LVM tooling to expand the volume, then calls XFS tools to resize the filesystem, all from OUTSIDE the container, i.e. it never enters the container and doesn’t care that it contains a valid Linux image?
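
In other words, my mental model is that for an LVM-backed XFS container LXD is effectively doing something equivalent to this from the host (my guess at the steps, not LXD’s actual code):

lvextend -L 10G /dev/vg_fast/containers_some-container                    # grow the thin LV (exact LV name may be mangled differently)
xfs_growfs /var/lib/lxd/storage-pools/vg_fast/containers/some-container   # grow XFS via its host-side mount point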

So if that were true, then I’m not understanding why my “migrate-gitea” example above cannot work. My assumption is simply that LXD is running the steps in the “wrong” order, i.e. creating a default 2.5GB filesystem, then unpacking the initial template image into it, and only then resizing the filesystem?

So if that were the case, how do I report a bug (as in my opinion the resize should happen before the initial image is unpacked)? This is using LXD 5.8 FWIW. Do you agree with my understanding above? And that it’s not optimal?

That is correct.

I suspect you’re being affected by this issue:

I might be misunderstanding, but I see this problem on 5.8 both with -s and without it.

However, the specific case I am testing is with a large(r) tar file source image, imported with custom metadata, i.e. following these steps:

I wonder if there is a difference in how the source image is provided? Or perhaps it’s related to how I specified a default volume size in the storage pool itself?

I’m using gentoo for my outer machine, so it is a little inconvenient to build the release candidate at present. I can work around this if there is a solution due for release soon. I wonder if you have capacity to test that my specific case is covered? I guess the repro is (rough command sketch after the list):

  • Create random instance
  • tar it up
  • import instance as a custom image
  • set default lvm storage image size to something stupidly small (0.5GB say)
  • ensure you can create an instance correctly, using that image with “… -d size=5GB”?
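
Something like this, I think (using lxc publish as a stand-in for my tar-up-and-import route, and an arbitrary small image):

lxc launch images:alpine/edge src              # any small throwaway instance
lxc stop src
lxc publish src --alias migrate-test           # stand-in for “tar it up / import as a custom image”
lxc storage set vg_fast volume.size 500MB      # stupidly small pool default
lxc launch migrate-test test1 -s vg_fast -d root,size=5GB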

Just an FYI, but I’m trying to migrate some linux vservers to something more modern, and I feel like I’m hitting tons of problems with lxd… Almost anything that goes wrong inside the container leads to the whole of the lxd tooling locking up and apparently needing a reboot to resolve. So far: mounting a subdir of the host’s tmp dir into the instance’s /tmp dir (testing mounting random dirs) leads to a lockup of the lxd tooling when you try to kill the instance, and --force won’t stop it. Starting up 2 or more migrated machines leads to them near-immediately becoming unresponsive; I can’t --force stop them, nor enter them with lxc exec, and the console is unresponsive. I have a hope it’s related to them having identical MAC addresses, but… It’s feeling terribly fragile? (I’ll write more about the specifics in a different thread if you have some interest in following up?)