Incus Memory Utilization

@stgraber I have noticed on two of my Incus servers, both hosted on Ubuntu Server 22.04, that memory usage is “inconsistent”. If I boot these servers up and let their containers run normally, I see no issues. If I perform snapshot restores on one or more containers, the memory utilization rises and never falls back to its original level. As an example:

The server shown above has 50% memory utilization, and it hovers around that level for weeks. If I do two or more snapshot restores, the memory utilization rises to around 72% and stays there. If I reboot, I am back at 50%. Thought I would pass this along and see if there might be a memory leak somewhere. --Scott

What storage driver are you using?

I am using zfs.

grep ^c /proc/spl/kstat/zfs/arcstats

That will show you your ZFS arc stats. If the c value is pretty high (it’s in bytes though), then that could explain some of your memory consumption.
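The c line can be converted to something human-readable. A minimal sketch, using a hypothetical arcstats excerpt (the sample values below are made up; on a real host you would read /proc/spl/kstat/zfs/arcstats directly):

```shell
# Hypothetical arcstats excerpt in the kernel's name/type/value format;
# the value column is in bytes.
cat <<'EOF' > /tmp/arcstats.sample
c                               4    8589934592
c_min                           4    1073741824
c_max                           4    16884901888
EOF

# Convert the ARC target size (c) to GiB for readability.
awk '/^c / {printf "ARC target (c): %.1f GiB\n", $3 / (1024^3)}' /tmp/arcstats.sample
```

With the sample value above, this prints `ARC target (c): 8.0 GiB`.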

Basically ZFS has its own cache which is reported as used memory on the host and can make things look pretty confusing.

As an example on one of my Incus servers:

I am not sure that this helps me. If it is ZFS using the memory, that points to a ZFS memory leak. Is that right?

Assuming I didn’t typo it, that’s about 8GB of RAM that ZFS is using as a cache on this system.

It’s not a memory leak; ZFS will basically use as much memory as it can to cache things and make the filesystem faster. It releases that memory back to the kernel upon memory pressure, but annoyingly that memory isn’t reported as cached/buffered in tools like free.

There is a module option to set an upper limit on the ARC size, so you could do that if it helps with monitoring your systems, though it will obviously come at the cost of slower ZFS performance.

Hi @stgraber, with the zfs_arc_min and zfs_arc_max values, we can adjust the cache size in memory, right?

Yep, that’s right, you can set the zfs_arc_max kernel module option to set an upper size in bytes for the ARC.
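As a sketch (the 2 GiB figure is just an example), the option takes a byte count, so the limit has to be converted first:

```shell
# Hypothetical example: a 2 GiB ARC limit, expressed in bytes since
# zfs_arc_max takes a byte count.
LIMIT=$((2 * 1024 * 1024 * 1024))
echo "zfs_arc_max=${LIMIT}"
# On a running system with the zfs module loaded, this can be applied
# without a reboot:
#   echo ${LIMIT} | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```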


If we set an upper size on the ARC, what negative impacts might that have? It seems that limiting this growth would always be good. Am I missing something?

Well, it means less data being cached, so more actual disk access.

So then, technically there is less performance for disk access, but lower memory utilization; is that the only caveat? Can you provide guidelines or recommendations on how to tune this setting for the best results?

I usually set it through /etc/modprobe.d/

For the amount, it depends on the system. For my laptop or desktop with limited memory, I usually set it to 2GB; for my servers with hundreds of GBs of RAM, I’ll usually set it to 16GB.
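For the persistent route, a minimal sketch using the 16GB server figure above and a drop-in file named /etc/modprobe.d/zfs.conf (the filename is an assumption; any .conf name under that directory works):

```shell
# Hypothetical example: generate the modprobe.d line capping the ARC
# at 16 GiB (zfs_arc_max is in bytes).
ARC_MAX=$((16 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX}"
# Save that line to /etc/modprobe.d/zfs.conf, then update the initramfs
# (update-initramfs -u on Ubuntu) so the limit applies at boot.
```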

So, if there is no explicit setting are you saying that it will grow arbitrarily? Can I examine where it is now?

Yeah, with no upper limit set, it can grow to use all your memory.
That’s supposed to be fine, as ZFS reacts to memory pressure: when applications actually need the memory, the ARC will shrink.

The c value in your output from earlier is the currently used memory for the ARC.