How to set the root device size when creating a new instance?

A bug with reproducer steps can be reported here: Issues · lxc/incus · GitHub

Generally speaking though, you should be trying to do as much as possible via LXD-supported functionality (I’m not saying you’re not, but when you’re talking about passing mounts in, it’s not clear how you’re doing that).

Can you post the exact reproducer steps here please, i.e. what is “tar it up” specifically?

I just mean to create a root filesystem suitable for creating a new instance. As far as I can tell it doesn’t actually matter what is in the tar file at all though? (I agree it won’t boot if it’s not a Linux image, but the error occurs well before that point.)

So in my case I cd into an existing Linux vserver image and run “tar -czf …/back.tar.gz .”

However, for the repro, I presume it would be sufficient to create a 6GB empty file, tar it up, and check whether that tar file can be used to create a new instance where the default instance size is smaller than 6GB but the incantation is there to make the initial size larger than 6GB. No?

Perhaps this is already covered in your test case though?
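For what it’s worth, the content-free reproducer described above can be built cheaply with a sparse file, so the tarball step itself costs almost no disk. This is a sketch; the file and archive names are made up for illustration:

```shell
# Build a tarball containing a single ~6GB file.  truncate creates a
# sparse file (6GiB apparent size, ~0 bytes actually allocated), and
# tar --sparse keeps the archive tiny.
mkdir -p rootfs
truncate -s 6G rootfs/bigfile
tar --sparse -czf back.tar.gz -C rootfs .
ls -l back.tar.gz rootfs/bigfile
```

This keeps the test image content-free, matching the point above that what is in the tar file doesn’t matter for triggering the size error.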

Do you have any other pointers or tips to ways to create initial instances from “other virtualisation solutions”. I mention above that I create an initial instance using any old random source, then rsync new data over the top of it through knowledge of the mount point. Is this the “best” solution for quick migrations? (given the params in this thread)

https://www.youtube.com/watch?v=F9GALjHtnUU

https://linuxcontainers.org/lxd/docs/master/migration/

Using the lxd-migrate tool (described in the video above) would be preferable to creating the rootfs manually. You would run this tool either inside the running vserver or point it at the existing rootfs.

Appreciated! Thanks for all your help above!

I upgraded to lxd 5.9 and I can confirm that this does NOT resolve the issue. So perhaps there are LVM specifics, or it’s related to how the image is being provided.

lxc launch migrate-gitea v3 -s vg_fast -d root,size=10GiB

continues to spew “tar … No space left on disk” errors, so it’s not resizing the partition before unpacking the image data.

It does. I’m just waiting for your exact reproducer steps to try it.

If possible, share your image too, or instructions on how to make an identical image.

Apologies, here is a repro:

> lxc storage set vg_fast volume.size 500MB

> lxc storage show vg_fast
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg.force_reuse: "true"
  lvm.vg_name: vg_fast
  source: vg_fast
  volatile.initial_source: vg_fast
  volume.block.filesystem: xfs
  volume.size: 500MB
description: ""
name: vg_fast
driver: lvm

> lxc launch images:gentoo/openrc v1 -s vg_fast -d root,size=10GiB
Creating v1
Error: Failed instance creation: Failed creating instance from image: Unpack failed: Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/vg_fast/images/372be8e5773e57f11f152001dd15215a436b0951eac510cb9ede4fb62dec1ca8/rootfs -n /var/lib/lxd/images/372be8e5773e57f11f152001dd15215a436b0951eac510cb9ede4fb62dec1ca8.rootfs: Process exited with non-zero value 1 (FATAL ERROR: write_file: failed to create file /var/lib/lxd/storage-pools/vg_fast/images/372be8e5773e57f11f152001dd15215a436b0951eac510cb9ede4fb62dec1ca8/rootfs/usr/lib/python3.10/test/test_difflib_expect.html, because No space left on device)

> lxc storage set vg_fast volume.size 2500MB
> lxc launch images:gentoo/openrc v1 -s vg_fast -d root,size=10GiB
Creating v1

The instance you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to an instance, use: lxc network attach

Starting v1

> lxc exec v1 -- df
Filesystem                 1K-blocks    Used Available Use% Mounted on
/dev/vg_fast/containers_v1  10420224 2011424   8408800  20% /

So you can see that setting a small initial size prevents the instance from being created; however, if I make the default size large enough for the initial image, then it’s created successfully and then resized. From this it appears that the resize happens AFTER the unpack?

Right, I see what’s going on now.

When you create an instance from an image on an LVM thin pool, the image is first unpacked into a new volume that represents the image (not the instance being created). By default this image volume is sized at 10GiB, but that can be controlled using the pool’s volume.size setting. This can be useful as a defensive mechanism against an untrusted or unknown-size image consuming all the space in your storage pool.

Assuming the image volume is sized sufficiently to unpack the image tarball into, a snapshot of the image volume is then created for the new instance. That snapshot is resized to the size specified by the instance’s root disk device (which can come from either the profile(s) or the -d flag). If neither of those specifies a size, it retains the current pool volume.size setting.

For some reason the gentoo/openrc image is much larger than most of our images. For example, images:ubuntu/jammy fits into less than 1GiB of space.

So what’s happening is that you’re setting the pool’s volume.size smaller than is required to unpack the gentoo/openrc image into its image volume.

So LXD never even gets to consult the -d flag, because it never gets as far as creating the instance volume (the snapshot of the image volume).

But this is all working as expected.

If you feel that there is something in the gentoo/openrc image that is taking up a lot of unnecessary space then maybe @monstermunchkin can help trim it down a bit.

Hi, very helpful, thanks. Yet at the same time, it doesn’t tell me how to solve the problem?

I hesitate to ask the question, because it seems well specified in the messages above. But how do I specify a split “size=” config such that image volumes can be constrained differently from instance volumes?

To be very specific. If I do this:

> lxc storage set vg_fast volume.size 500MB

How can I now launch an instance? Any instance? E.g. you mentioned images:ubuntu/jammy; how can I launch that?

In case the question seems belligerent: quite a few bits of LXD documentation refer to creating new instances by “publishing” existing images (in fact I want to roll out new containers using a carefully pre-configured template container image). So given that “all working as advertised” means one cannot create any instance based on an image which is larger than the pool’s volume.size setting, this seems extremely limiting? Do you have a suggested workaround?

Specifics:

  • I wanted to use XFS as my filesystem, which means I cannot shrink the filesystem later
  • In general I want to constrain most of the container root filesystems to around 2.5GB max, hence I set “volume.size=2500MB” so that I can’t forget when creating an image (remember they can’t be shrunk later)
  • However, I will have some images which are larger than 2500MB. How do I create an instance from such an image?
  • The only way I can see at the moment is to temporarily increase the pool’s volume.size (“lxc storage set vg_fast volume.size 50GB”), create the instance, then revert the pool’s default volume size
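That temporary-bump workaround from the last bullet would look something like the following sketch; it assumes the vg_fast pool and sizes from this thread, a hypothetical instance name big1, and an LXD host to run it on:

```shell
# Temporarily raise the pool's default volume size so the image volume
# is large enough to unpack the big image into.
lxc storage set vg_fast volume.size 50GB

# Create the instance; -d still controls the final root volume size.
lxc launch images:gentoo/openrc big1 -s vg_fast -d root,size=10GiB

# Restore the cap; this only affects volumes created from now on.
lxc storage set vg_fast volume.size 2500MB
```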

Note this is the original question I asked. How do I do this? Suggestions appreciated! Thanks

So what you need to do is:

  1. Set the pool’s volume.size large enough to unpack all the images you want into their image volumes.
  2. Then, when you create an instance from that image, you can specify the size of the instance’s root volume either by setting the size of the root disk device in one of the profiles (lxc profile device set <profile> root size=<size>) or at instance create time using lxc launch <image> <instance> -d root,size=<size>.

This will then take a snapshot of the image volume and resize it to the specified size.

Sorry I still don’t understand what the issue is.

Yes, it’s true that you cannot create an image volume larger than the pool’s volume.size, but that doesn’t mean you can’t create an instance from that image that is larger than the pool’s volume.size.

I’m still not seeing the issue, I’m afraid.

If you want to set the default instance root volume size then you can do this via the profile (so you don’t have to remember to specify -d unless you want it larger).

This way it won’t default to 10GiB.

This is where a reproducer would be useful, here I’ve tried it myself:

Create LVM pool using XFS with maximum image unpack size of 1GiB.

lxc storage create lvm lvm volume.block.filesystem=xfs volume.size=1GiB

Launch a container without any root disk size override (it will default to volume.size or whatever is in the profile(s)).
We can see both images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 and containers_c1 are 1GiB in size.

lxc launch images:ubuntu/jammy c1 -s lvm
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.71   11.17                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.07                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m                  

Launch another container with root disk size override.
We can see containers_c2 is 2GiB in size.

lxc launch images:ubuntu/jammy c2 -s lvm -d root,size=2GiB
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.77   11.41                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c2                                                           lvm      Vwi-aotz-k   2.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 25.08                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m                               

Modify default profile to set default root disk size for instances, and launch a new instance without an override. This time it should take the root disk size from the profile (over the volume.size of the pool).
We expect containers_c3 to be 3GiB.

lxc profile device set default root size=3GiB
lxc launch images:ubuntu/jammy c3 -s lvm
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.82   11.56                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c2                                                           lvm      Vwi-aotz-k   2.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 25.16                                  
  containers_c3                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.07                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m                    

Oh wait, what’s this, it’s 1GiB. Hmm, this doesn’t look right, investigating…

OK, so this isn’t a bug: when using the -s flag at instance create time, the root disk device from the profile(s) is ignored and a new root device is added referencing the specified pool instead. This means it won’t contain the size setting from the profile(s) and will revert to using the pool’s volume.size setting.

Instead we should create a profile that uses the LVM pool directly so we don’t have to use the -s flag.

lxc profile create lvm
lxc profile device add lvm root disk pool=lvm size=3GiB path=/
lxc launch images:ubuntu/jammy c4 -p default -p lvm
lxc config show c4 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20221211_07:43)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20221211_07:43"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7
  volatile.cloud-init.instance-id: 4573c53b-c4d0-469b-9e90-41099312c204
  volatile.eth0.host_name: veth0dfc18ed
  volatile.eth0.hwaddr: 00:16:3e:e7:75:48
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 019cae4e-e1bb-4e34-9fdc-e3c50b977502
devices:
  eth0:
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: lvm
    size: 3GiB
    type: disk
ephemeral: false
profiles:
- default
- lvm
stateful: false
description: ""
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.89   11.72                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c2                                                           lvm      Vwi-aotz-k   2.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 25.16                                  
  containers_c3                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c4                                                           lvm      Vwi-aotz-k   3.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 16.80                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m                                                                                                                            

Ah, that’s better: containers_c4 is now 3GiB in size.
And by using -p default -p lvm we get the default profile (with the NIC device in it) applied first, and then the root disk device is overridden with the one from the lvm profile.

FWIW I think it would be nice if, when using the -s flag, the size property of the root disk were copied from the profile(s). But this would be a change of behaviour and could affect other users who rely on the current behaviour of -s not copying anything from the profile(s).

One thing to keep in mind is that when using a profile, if you change it later, those changes are applied to all instances using it. So if you had created multiple containers using the lvm profile with size=3GiB and then later did lxc profile device set lvm root size=4GiB, they would all be grown, not just new instances using that profile.

However you can override a profile device after the instance has been created if you want to apply per-instance settings using:

lxc config device override <instance> root size=5GiB

This would copy the root disk device from the effective profile(s) and then add the custom size setting.

E.g. changing the lvm profile’s root disk size to 4GiB will grow containers_c4, which is using it.

lxc profile device set lvm root size=4GiB
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.91   11.72                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c2                                                           lvm      Vwi-aotz-k   2.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 25.16                                  
  containers_c3                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c4                                                           lvm      Vwi-aotz-k   4.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 12.63                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m                               

But if we want to alter that container only and decouple it from the profile we can do:

lxc config device override c4 root size=5GiB
sudo lvs
  LV                                                                      VG       Attr       LSize   Pool        Origin                                                                  Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             lmv      twi-a-tz--  29.93g                                                                                     0.00   10.47                           
  LXDThinPool                                                             lvm      twi-aotz--  29.93g                                                                                     1.91   11.72                           
  containers_c1                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c2                                                           lvm      Vwi-aotz-k   2.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 25.16                                  
  containers_c3                                                           lvm      Vwi-aotz-k   1.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 50.23                                  
  containers_c4                                                           lvm      Vwi-aotz-k   5.00g LXDThinPool images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 10.11                                  
  images_80fd87bb2e699ea0639ede30497300de1fa835ce7cd4f14fd81e690e3eb36eb7 lvm      Vwi---tz-k   1.00g LXDThinPool                                                                                                                
  root                                                                    vgubuntu -wi-ao---- 236.25g                                                                                                                            
  swap_1                                                                  vgubuntu -wi-ao---- 976.00m