How to set root device size when creating new instance?

I think we are talking at cross purposes? I am not using “VMs”. I’m only using containers (apologies if that wasn’t clear above? I thought this was the default?)

Then your custom image should use the smallest possible disk size (and, by implication, filesystem size); that way, when you create VMs from it, you can specify the image size or larger for the instance’s root disk size.

But you’ll still need something inside the image itself that detects a larger disk and grows the filesystem.

Oh yes we are.

What is the problem with containers then?

This is what led me to think you were talking about VMs.

This is incorrect.

Hmm, no I don’t think it is. Something peculiar is going on? Perhaps it’s the ordering of events which isn’t working as expected?

Consider:

lxc launch images:gentoo/openrc v2 -d root,size=10GiB
Creating v2
Starting v2

lxc exec v2 -- df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/vg_fast/containers_v2   10G  2.0G  8.1G  20% /

So creating an image where the initial filesystem is under 2GB HAS allowed me to override the storage volume size and create the instance.

But consider this, where I zipped up a gentoo instance, previously running under linux-vservers, and followed the instructions here:

I think the tar file is slightly larger than the default root filesystem size, but much smaller than 10GB (specifically: the default filesystem size is 2.5GB, and I think the tar image is around 2.8GB).

lxc launch migrate-gitea v3 -s vg_fast -d root,size=10GiB
Creating v3
...spew of tar ... Cannot create ...: No space left on device

Can you advise whether there is a way to create an initial server with a blank filesystem, grow it, and then have the image installed later? Note, a process which would be convenient for this current requirement (migrating VMs from another machine) would be to create a skeleton VM, rsync the data across, and replace the whole image?

Right now I have experimented with the following (I doubt it’s a supported suggestion; a rough sketch of the commands follows the list):

  • the linux vserver image consists of a root filesystem / plus the main data directories mounted in at various points; e.g. pretend it’s a mail server, the root fs is 1GB and the data dir is huge and mounted at /home/vmail
  • lxc launch images:randomimage migratedvm
  • lxc storage volume create …
  • filesystem is at /var/lib/lxd/storage-pools/vg_fast/containers/migratedvm/rootfs, so rsync complete filesystem into that, replacing the existing random image (can we avoid needing to create the image?)
  • lxc storage volume attach vg_fast mail-data migratedvm /home/vmail
  • Now do the same to rsync the data in (is there a supported way to access custom storage volumes from the host, without attaching them to a random instance?)
  • now restart the instance
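
Roughly, as commands (a sketch only; the image, host and volume names here are made up, and with LVM the rootfs path is only populated on the host while the instance is running):

lxc launch images:alpine/edge migratedvm -s vg_fast -d root,size=10GiB
# overwrite the placeholder rootfs with the real one, straight from the old host
rsync -aHAX --numeric-ids oldhost:/vservers/mailserver/ /var/lib/lxd/storage-pools/vg_fast/containers/migratedvm/rootfs/
# create and attach the big data volume
lxc storage volume create vg_fast mail-data size=500GiB
lxc storage volume attach vg_fast mail-data migratedvm /home/vmail
# fill the data volume from inside the instance (assumes rsync/ssh exist in the image)
lxc exec migratedvm -- rsync -aHAX oldhost:/vservers/mailserver/home/vmail/ /home/vmail/
lxc restart migratedvm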

Is there another way to create the initial instances? Assume a user moderately familiar with LVM and not looking for a supported lxc path here; this is strictly about getting access to the raw volumes to preload initial data.
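
If going straight at the raw LVM volume is acceptable, something like this is what I have in mind (completely unsupported; the containers_<name> LV naming is simply what df shows above, and the old-host path is made up — done with the instance stopped):

lvchange -ay vg_fast/containers_migratedvm
mount /dev/vg_fast/containers_migratedvm /mnt
# LXD keeps the instance's root filesystem under a rootfs/ subdirectory of the volume
rsync -aHAX --numeric-ids oldhost:/vservers/mailserver/ /mnt/rootfs/
umount /mnt
lvchange -an vg_fast/containers_migratedvm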

(Edit: Apologies for using the word “VM” in several places, probably including here. I’m coming from linux vservers and only intend to use container-based virtualisation, but I may have inadvertently used the wrong terminology. To be clear: no VMs are anticipated in my future, only LVM-based containers running an XFS filesystem on Linux.)

Ah OK, it’s likely the use of -s that is causing you to hit a bug, fixed in LXD 5.9.

See:

https://github.com/lxc/lxd/pull/11158

When you use the -s flag, the lxc command replaces the profile’s root disk with an instance-level root disk that uses the specified storage pool.

However, there was a bug in the recently added -d flag when combined with the -s flag: it would restore the original profile root disk’s storage pool and then override the size.
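
To illustrate, after a successful launch the instance should end up with an instance-level root disk roughly like this (illustrative output shape, using your pool and size):

lxc config device show v3
root:
  path: /
  pool: vg_fast
  size: 10GiB
  type: disk

With the bug, the pool key would revert to the profile’s root disk pool even though the size override was still applied.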

So I can only assume that the storage pool from the profile isn’t big enough.

Anyway, I hit this exact issue and fixed it in LXD 5.9.

It’s in the candidate channel at the moment, but using sudo snap refresh lxd --channel=latest/candidate will get it.

The -d flag was only added in LXD 5.8:

Related question (sorry, no threading here, perhaps I should start a new question?)

Suppose I had already created an instance. This instance is small, say 5GB (running on an LVM pool, with XFS as the underlying filesystem). Then I run the following:

  • lxc config device set some-container root size=10GB

Then I see this correctly expand the LVM volume, plus call the appropriate tools to resize the inner filesystem.

How does this happen (assume I understand how to do this myself using lvm/xfs tooling)? My assumption is that the outside machine (i.e. where I’m typing the commands) calls LVM tooling to expand the volume, then calls XFS tools to resize the filesystem, all from OUTSIDE the container, i.e. it never enters the container and doesn’t care that it contains a valid linux image?
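
In other words, I assume it boils down to something like the following on the host (a sketch of my mental model only, using the LV and mount-point names I see on my system; I’m not claiming these are LXD’s actual internal calls):

# grow the logical volume backing the instance's root disk
lvextend -L 10G /dev/vg_fast/containers_some-container
# XFS is grown while mounted, so target the host-side mount of the volume
xfs_growfs /var/lib/lxd/storage-pools/vg_fast/containers/some-container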

So if that were true, then I’m not understanding why my “migrate-gitea” example above cannot work? My assumption is simply that lxd is running the steps in the “wrong” order, i.e. creating a default 2.5GB filesystem, then unpacking the initial template filesystem image into it, and only then resizing the filesystem?

So if that were the case, how do I report a bug (in my opinion the resize should happen before the initial filesystem image is unpacked)? This is using LXD 5.8 FWIW. Do you agree with my understanding above, and that it’s not optimal?

That is correct.

I suspect you’re being affected by this issue:

I might be misunderstanding, but I see this problem on 5.8 both with -s and without it.

However, the specific case I am testing is with a larger tar file source image, imported with custom metadata, i.e. following these steps:

I wonder if there is a difference in how the source image is provided? Or perhaps it’s related to how I specified a default volume size in the storage pool itself?

I’m using gentoo for my outer machine. It is a little inconvenient to build the release candidate at present. I think I can work around this if there is a solution for release soon? I wonder if you have capacity to test whether my specific case is covered? I guess the repro is:

  • Create random instance
  • tar it up
  • import instance as a custom image
  • set the default LVM storage volume size to something stupidly small (0.5GB say)
  • ensure you can create an instance correctly, using that image with “… -d size=5GB”?

Just an FYI, but I’m trying to migrate some linux vservers to something more modern, and I feel like I’m hitting tons of problems with lxd. Almost anything that goes wrong inside the container leads to the whole of the lxd tooling locking up and apparently needing a reboot to resolve.

So far: mounting a subdir of my outer tmp dir into the instance’s /tmp dir (testing mounting random dirs) leads to a lockup of the lxd tooling when you try to kill the instance, and --force won’t stop it. Starting up 2 or more migrated machines leads to them stopping responding almost immediately; I can’t --force stop them, nor enter them with lxc exec, and the console is unresponsive. I have a hope it’s related to them having identical MAC addresses, but… It’s feeling terribly fragile? (I’ll write more about the specifics in a different thread if you have some interest to follow up?)

Bugs with reproducer steps can be reported here: Issues · lxc/incus · GitHub

Generally speaking though, you should be trying to do as much as possible via LXD’s supported functionality (I’m not saying you’re not, but when you’re talking about passing mounts in, it’s not clear how you’re doing that).

Can you post the exact reproducer steps here please, i.e. what is “tar it up” specifically?

I just mean to create a root filesystem suitable for creating a new instance. As far as I can tell it doesn’t actually matter what is in the tar file at all though? (I agree it won’t boot if it’s not a linux image, but the error is way before this point)

So in my case I cd into an existing linux vserver image and “tar -czf …/back.tar.gz .”

However, for the repro, I presume it would be sufficient to just create a 6GB empty file, tar it up, and check that this tar file can be used to create a new instance where the default instance size is smaller than 6GB but the incantation is there to make the initial size larger than 6GB. No?
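
Something along these lines is what I have in mind (a sketch only; the image alias, the minimal metadata.yaml and the 2GiB default are just illustrative values, not what I actually used):

mkdir rootfs
dd if=/dev/zero of=rootfs/filler bs=1M count=6144   # ~6GB of real data
tar -czf rootfs.tar.gz -C rootfs .
printf 'architecture: x86_64\ncreation_date: 1670000000\n' > metadata.yaml
tar -czf metadata.tar.gz metadata.yaml
lxc image import metadata.tar.gz rootfs.tar.gz --alias space-test
lxc storage set vg_fast volume.size=2GiB            # pool default smaller than the tar contents
lxc launch space-test v-test -s vg_fast -d root,size=10GiB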

Perhaps this is already covered in your test case though?

Do you have any other pointers or tips on ways to create initial instances from “other virtualisation solutions”? I mention above that I create an initial instance using any old random source, then rsync new data over the top of it through knowledge of the mount point. Is this the “best” solution for quick migrations (given the params in this thread)?

https://www.youtube.com/watch?v=F9GALjHtnUU

https://linuxcontainers.org/lxd/docs/master/migration/

Using the lxd-migrate tool (described in the video above) would be preferable to creating the rootfs manually. You would run this tool either inside the running vserver or point it at the existing rootfs.
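
Roughly (a sketch; grab the lxd-migrate binary for your architecture from the LXD releases page — the tool itself is interactive and will prompt for the target LXD server and the rootfs path to transfer):

chmod +x lxd-migrate
sudo ./lxd-migrate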

Appreciated! Thanks for all your help above!

I upgraded to LXD 5.9 and I can confirm that this does NOT resolve the issue. So perhaps there are LVM specifics, or it’s related to how the image is being provided.

lxc launch migrate-gitea v3 -s vg_fast -d root,size=10GiB

continues to spew “tar … No space left on device” errors, so it’s not resizing the volume before unpacking the image data.

It does. I’m just waiting for your exact reproducer steps to try it.

If possible, share your image too, or instructions on how to make an identical image.