About LXD container density and best practices for overcommit


I have been reading this article:

It says that the server had 16GB of RAM and that they could provision:

  • 37 KVM guests
  • 536 LXD guests

I wonder what the specs of those containers were. How much RAM did each one have?

What would be the best practices in terms of memory overcommit with LXD?
For KVM, the recommendation is to allocate no more than 1.5 times the physical RAM.
That would mean that in this test we could have 24GB / 37 guests ≈ 664MB of RAM each (let’s say 512MB each).
Were those LXD guests 512MB, too?
That would imply about 16 times the physical RAM (the article states that you can achieve 14.5 times greater density than KVM).
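To make my arithmetic explicit, here is the calculation I am doing (the 512MB-per-guest figure is my assumption, not something the article states):

```shell
#!/bin/sh
# Density arithmetic for the figures above, assuming:
#   16GB physical RAM, 1.5x overcommit for KVM, 512MB per guest.

RAM_MB=$((16 * 1024))            # 16384 MB physical RAM
KVM_BUDGET=$((RAM_MB * 3 / 2))   # 1.5x overcommit -> 24576 MB
KVM_GUESTS=37
LXD_GUESTS=536

echo "MB per KVM guest: $((KVM_BUDGET / KVM_GUESTS))"            # ~664 MB
echo "LXD total at 512MB each: $((LXD_GUESTS * 512)) MB"
echo "LXD overcommit factor: ~$((LXD_GUESTS * 512 / RAM_MB))x"
```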

Where can I read about the best practices for achieving this? How much swap would be needed for those figures?
Thank you very much

The benchmark was to test the maximum number of VMs and LXD containers that could possibly fit on the same server. Those VMs and containers had no workload; I suppose they would just start up, get an IP, and remain idle. That somewhat makes sense, because if you add a workload, you then need to spend time describing that workload in your benchmark.
Still, it would be good to find some sources of best practices.
Have you seen https://github.com/lxc/lxd/blob/master/doc/production-setup.md ?
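For what it’s worth, much of that document is about raising host limits so that many containers can run at once. A sketch of the kind of sysctl tuning it discusses (values quoted from my reading of the doc; verify against the current version before applying):

```
# /etc/sysctl.conf additions along the lines of production-setup.md
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```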


Hi @simos,

Thank you for your answer. Still, I would like to understand what the memory distribution of those containers looked like, and what would be expected in a real production environment.

Have you seen https://github.com/lxc/lxd/blob/master/doc/production-setup.md ?

No, I had not seen it, thank you very much!

I cannot think of a reference for this, i.e. someone who went through the process and documented the workload of their real production environment.

If you would like to go ahead, set out the workload, and then see how it performs, you can:

  1. run lxc info mycontainer1 for each container to get the current and peak memory use. See How to monitor LXC running container metrics during migration phase as well.
  2. use sysdig (see: https://blog.simos.info/how-to-use-sysdig-and-falco-with-lxd-containers/ ) to get more detailed information over the lifetime of the containers. Note that sysdig installs a kernel module that collects all sorts of information. There are tutorials on setting up monitoring of the memory requirements of containers, such as https://sysdig.com/blog/monitoring-greedy-containers-part-1/
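For step 1, a minimal sketch that loops over all containers and pulls the memory figures out of lxc info (assumes a recent LXD client; the exact labels in the output may vary by version):

```shell
#!/bin/sh
# Print current and peak memory use for every running container.
# "lxc list --format csv -c n" emits one container name per line.
for c in $(lxc list --format csv -c n); do
    echo "=== $c ==="
    # lxc info shows a "Memory usage" section with current and peak values.
    lxc info "$c" | grep -A 2 'Memory usage'
done
```

You could redirect this into a file on a cron schedule to get a rough memory profile over time before reaching for sysdig.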

Awesome, thank you very much @simos

I will study this thoroughly and test. I will publish my findings if they turn out to be interesting enough.

Thank you again