I’m just getting started with Incus and have set up a single server that I’m managing with Terraform. The Incus server runs on an ext4 filesystem; nothing fancy for this lab test. Eventually I’d probably run it on ZFS.
I’m trying to enforce a 16GiB root filesystem limit on a simple Ubuntu 22.04 container that I created in Terraform:
resource "incus_instance" "testinstance" {
  (...)
  device {
    type = "disk"
    name = "root"
    properties = {
      pool = incus_storage_pool.default.name,
      path = "/",
      size = "16GiB",
    }
  }
}
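For reference, the root device above points at a Terraform-managed pool via incus_storage_pool.default.name. A minimal sketch of what that referenced resource might look like (the name and driver here are my assumptions, matching the incus storage show output further down; adjust to your actual definition):

```hcl
# Hypothetical sketch of the pool resource referenced by
# incus_storage_pool.default.name above. A dir-backed pool
# stores instances under /var/lib/incus/storage-pools/<name>.
resource "incus_storage_pool" "default" {
  name   = "default"
  driver = "dir"
}
```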
This creates an Incus instance on my server with the following configuration:
$ incus config show testinstance
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20240215_07:42)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20240215_07:42"
  image.type: squashfs
  image.variant: cloud
  (...)
devices:
  eth0:
    nictype: bridged
    parent: bridge0
    type: nic
  root:
    path: /
    pool: default
    size: 16GiB
    type: disk
Here’s the default storage pool; it’s very simple (a dir pool on an ext4 filesystem that’s about 500GB):
$ incus storage show default
config:
  source: /var/lib/incus/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/instances/testinstance
status: Created
locations:
- none
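Since I mentioned eventually moving to ZFS, for context this is roughly how I’d expect a ZFS-backed pool to look in Terraform. This is only a sketch; the pool name and size are placeholder values I haven’t actually applied:

```hcl
# Sketch only: a ZFS-backed pool definition. "zfspool" and the
# 100GiB size are placeholder assumptions, not part of my setup.
resource "incus_storage_pool" "zfspool" {
  name   = "zfspool"
  driver = "zfs"
  config = {
    size = "100GiB" # size of the loop-backed image for the zpool
  }
}
```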
Here’s the problem I’m running into: if I run df -h in the created testinstance container, I see the full filesystem size of the parent Incus server:
$ incus exec testinstance -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       495G  8.5G  487G   2% /
(...)
And from within the container I’m able to write test files that exceed 16GiB and end up consuming the parent server’s entire disk.
Is there something I missed when setting the 16GiB limit?