Hi, new user, be gentle. I am using LVM as my storage option, and for $reasons I have set my default volume.size to be quite small:
# lxc storage show vg_fast
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg.force_reuse: "true"
  lvm.vg_name: vg_fast
  source: vg_fast
  volatile.initial_source: vg_fast
  volume.block.filesystem: xfs
  volume.size: 2500MB
description: ""
name: vg_fast
driver: lvm
used_by:
However, I wanted to create a new instance that needs a larger root file system, roughly 3GB, to fit the initial image. Since the default creates a 2.5GB root volume, I tried the following to get a larger initial volume:
lxc init migrate-gitea gitea2 -d root,size=5GB
However, this still leads to an “out of disk space” error as the tar file unpacks (i.e. the root is still being created as a 2.5GB volume). I was able to work around it by temporarily raising the volume.size key on the storage pool, creating my instance, and then setting the key back again (roughly the commands below). Is there an incantation to do this as part of the initial init/launch?
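For reference, the temporary workaround was something like this (the 5GB value is just what this particular image needed):
lxc storage set vg_fast volume.size 5GB
lxc init migrate-gitea gitea2
lxc storage set vg_fast volume.size 2500MB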
Note that creating a separate storage volume to mount into this instance was simpler:
lxc storage volume create vg_fast gitea-data size=20GB
As a new user, it feels like there's a mismatch in how these two sizes are specified?
Not terribly related, but I also have some doubts about how to access my new storage volume above, e.g. in order to migrate some data into it. I can create a new VM, attach the volume to it, and then use sshfs to access the mount. However, I wonder if it's acceptable to simply mount the LVM logical volume on the host and stuff files in that way?
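In case it clarifies what I'm after, the attach-and-push route would look roughly like this (with /srv/gitea-data as an arbitrary mount path and ./gitea-backup standing in for my existing data):
lxc storage volume attach vg_fast gitea-data gitea2 /srv/gitea-data
lxc file push -r ./gitea-backup gitea2/srv/gitea-data/
versus just mounting the logical volume directly on the host and copying files into it there.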
(Edited for clarity)