I’ve noticed that RAM doesn’t get freed on the host. A VM with 5 GB of allocated RAM uses around 4 GB, and after a few minutes the consumption inside the VM drops to 2 GB, yet the host’s memory usage stays the same. Nothing else is running on my host, just this one VM. So it looks like once a VM uses some amount of RAM, that memory stays reserved for the VM and is never returned to the host.
Why is that? Or is my observation incorrect?
You need memory ballooning for that: https://pmhahn.github.io/virtio-balloon/
However, I’m not sure whether it’s automatic with LXD; I believe it’s meant for manually reducing a VM’s memory while it’s running.
Right, you can force the balloon to inflate by reducing limits.memory on a running VM, then setting it back to its prior value.
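For illustration, the sequence might look like this (a sketch, assuming a VM named `v1` that was originally given 5 GiB; adjust the name and sizes to your setup):

```shell
# Shrink the memory limit on the running VM; this inflates the
# virtio balloon and pressures the guest to give pages back.
lxc config set v1 limits.memory 2GiB

# Once the guest has settled, restore the original limit so the
# VM can grow again when it actually needs the memory.
lxc config set v1 limits.memory 5GiB
```

Between the two commands the host should see the VM’s resident memory drop, subject to the caveats below about how well the guest tolerates the inflation.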
Is it not possible to automate it so the VM can automatically give the unused RAM back to the host?
No. It’s also usually a bad idea to do that in the first place.
While the memory may not be actively used by applications, the guest kernel will typically still use it for caching.
Forcing it to be evicted results in a lot more disk I/O in the guest, as things either get moved over to swap or simply get dropped and need to be read back from disk the next time they’re needed.
It’s also quite CPU intensive to force the guest to re-shuffle its memory and then for the hypervisor to find memory ranges that can be given back to the kernel.
It’s worth doing if you know your VM had a big, unexpected increase in memory usage, say from applying a large update, since that lets you reset it back to baseline; doing it constantly, however, definitely isn’t recommended.
Also worth noting that this particular operation will very often fail. That happens when the guest can’t handle the balloon inflating at that rate. It typically leads to only a partial memory reclaim, but in more extreme cases it can crash the guest kernel as it rapidly runs out of memory.