Incus Container Exceeding Set CPU and Memory Limits - How to Properly Enforce Them?
Hi everyone,
I’m running an Incus container (instance name: IVR-STG) on Ubuntu, and I’ve set CPU and memory limits via a profile, but it seems like they’re not being strictly enforced. The container is overcommitting resources based on the monitoring graphs, and I’d like advice on the correct way to limit RAM and CPU usage, as well as best practices for applying these limits effectively.
Setup Details
- Incus version: 6.0.5
- Host OS: Ubuntu 24.04
- Container OS: Ubuntu 22.04 LTS
Here’s the output from incus profile list:
```text
+---------+-------------------------------------------------------------+---------+
| NAME | DESCRIPTION | USED BY |
+---------+-------------------------------------------------------------+---------+
| Container | memory=8GiB, cpus=4, storage=30GB, pool=local, bridge=br300 | 1 |
+---------+-------------------------------------------------------------+---------+
```
Profile configuration (incus profile show container):
```text
config:
limits.cpu: "4"
limits.memory: 8GiB
description: memory=8GiB, cpus=4, storage=30GB, pool=local, bridge=bridge
devices:
eth0:
nictype: bridged
parent: bridge
type: nic
root:
path: /
pool: local
size: 30GB
type: disk
name: container
used_by:
- /1.0/instances/Container?project=Project
```
Instance configuration (incus config show container):
```text
architecture: x86_64
config:
boot.autostart: "true"
image.architecture: amd64
image.description: ubuntu 22.04 LTS amd64 (release) (20241004)
image.label: release
image.os: ubuntu
image.release: jammy
image.serial: "20241004"
image.type: squashfs
image.version: "22.04"
volatile.base_image: c15fcb01a6eb2f72e74742d69e58a44707c6a6d974451c2d6f553e83e0cacf46
volatile.cloud-init.instance-id: 41dd2c60-0c31-45ae-b4cc-30d286bc780d
volatile.eth0.host_name: veth609002d3
volatile.eth0.hwaddr: 00:16:3e:e6:8e:10
volatile.eth0.name: eth0
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.last_state.power: RUNNING
volatile.last_state.ready: "false"
volatile.uuid: a1eabcc9-a733-4a6c-94b2-2b77406f2aed
volatile.uuid.generation: a1eabcc9-a733-4a6c-94b2-2b77406f2aed
devices: {}
ephemeral: false
profiles:
- container
stateful: false
description: ""
```
Observed Issue
The monitoring graphs for the instance in Grafana clearly show overcommitment:
- CPU Usage: Spikes up to around 200% (with user and system components), despite the limit of 4 CPUs. There are multiple peaks between 17:20 and 17:30.
- Memory Usage: RAM/SWAP Used starts high (around 500 GiB) and drops sharply to near 0 by 17:30, with RAM Total at 512 GiB. This seems odd since the limit is only 8 GiB; could the panel be showing host-level stats rather than container-specific ones? RAM Cache, Free, and Swap Used are also tracked, with Swap Used staying minimal. I sketch just below how I've been trying to cross-check this.
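Before concluding that the limits are broken, I wanted to compare the dashboards against what Incus itself reports for the instance. I'm assuming here that `incus info` shows per-instance usage and that the metrics endpoint can be queried this way; both are guesses on my part:
```text
# Per-instance CPU and memory usage as reported by Incus
incus info IVR-STG

# Prometheus-style metrics, to compare directly against the Grafana panels
# (assuming the /1.0/metrics endpoint can be reached via incus query)
incus query /1.0/metrics
```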
From what I’ve read in the docs, limits.memory should cap the container at 8 GiB, and limits.cpu=4 should restrict it to 4 cores. However, the usage appears to exceed these, possibly due to soft limits or caching/swap behavior.
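For context, this is how I've been trying to verify what actually gets applied. I'm assuming the usual lxc.payload.<name> cgroup path under cgroup v2 on the Ubuntu 24.04 host, so please correct me if that's not where Incus puts things:
```text
# Merged profile + instance config, as Incus resolves it
incus config show IVR-STG --expanded

# On the host: the values the container's cgroup actually received
# (assuming the lxc.payload.<name> path applies here)
cat /sys/fs/cgroup/lxc.payload.IVR-STG/memory.max
cat /sys/fs/cgroup/lxc.payload.IVR-STG/cpuset.cpus

# From inside the container
incus exec IVR-STG -- free -h
incus exec IVR-STG -- nproc
```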
Questions
- Is there something wrong with my configuration? Should limits be set directly on the instance instead of the profile?
- How can I enforce hard limits for memory (e.g., prevent overcommitment entirely) and CPU (e.g., cap total usage percentage or time slices)?
- What are best practices for monitoring and verifying that limits are applied? For example, commands to check from inside the container or host.
- Could this be related to swap priority or enforcement settings? I’ve seen mentions of limits.memory.enforce=hard and limits.memory.swap; should I add those (see the sketch after this list for what I have in mind)?
- Any tips on optimizing resource allocation for a production setup like this?
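For reference, this is roughly what I was considering setting directly on the instance, based on my reading of the docs. I'm not certain these keys and values are the right combination (the limits.cpu.allowance value in particular is only illustrative), so corrections are welcome:
```text
# Pin the instance to 4 CPUs and make the 8 GiB memory limit a hard cap
incus config set IVR-STG limits.cpu=4
incus config set IVR-STG limits.memory=8GiB
incus config set IVR-STG limits.memory.enforce=hard

# Keep the workload out of swap entirely
incus config set IVR-STG limits.memory.swap=false

# If CPU time slicing turns out to be the right tool, I understand the
# syntax looks roughly like this (the value here is just an example)
incus config set IVR-STG limits.cpu.allowance=25ms/100ms
```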

