I would like a comparison to clarify the CPU limits, in particular limits.cpu.allowance in its two different modes: % vs ms.
To explain: if I impose a limit of 50%, the container still uses more than that limit, while if I impose 50ms/100ms I have noticed that it does not exceed 50% CPU usage.
Can you help me understand how this block/limit works?
Thanks for the reply.
I would like to go deeper into the two parameters, specifically:
- limits.cpu: what kind of values can be set? Only the processors to pin to? A percentage like 50%, or 0.5?
- limits.cpu.allowance: to set two CPUs with double ms parameters (20ms/50ms, 100ms/50ms), must the limits.cpu parameter also be set to 2, or not?
I can’t find more in-depth documentation on limit handling, or maybe I’m not looking in the correct sections.
Finally, how would you set the limits for 50 webserver/WordPress containers on a dedicated cloud server with 4 vCPUs at 2.3GHz (4 threads)?
limits.cpu is either an absolute number of CPUs to expose to the guest or a range of specific CPUs to pin to.
limits.cpu.allowance lets you apply a CPU time limit as opposed to a CPU pinning limit.
If you want your guest to just see 4 CPUs, set
limits.cpu=4; if you want it to see the first 4 cores of your system, set it to
limits.cpu=0-3. If you want it to see all the CPUs but only get to use one CPU’s worth of time, use
limits.cpu.allowance=100ms/100ms; if you want it to see all the CPUs but get to use up to 2 CPUs’ worth of time, use
limits.cpu.allowance=200ms/100ms. If you want your container to see all the CPUs but, when the system is under load, be limited to the equivalent of 1 CPU, use a percentage value such as limits.cpu.allowance=25% (on a 4-CPU system); percentages are soft limits that only apply when the system is under load.
If you set both limits.cpu=2 and
limits.cpu.allowance=200ms/100ms, then in theory you’ll find yourself pinned to two specific CPUs allocated by LXD, and your CPU time will be limited to a maximum of 2 CPUs’ worth of CPU time.
In general, if you use
limits.cpu.allowance with a time limit, you don’t want to use
limits.cpu as well, as that just puts a lot of constraints on the scheduler, which may lead to less efficient allocations.
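Put together as commands, a sketch of the options discussed above (`c1` is a placeholder container name):

```shell
# Expose 4 CPUs to the container (LXD picks which ones)
lxc config set c1 limits.cpu 4

# Or pin the container to the first 4 cores of the host
lxc config set c1 limits.cpu 0-3

# Hard time limit: 1 CPU's worth of time, all CPUs visible
lxc config set c1 limits.cpu.allowance 100ms/100ms

# Hard time limit: up to 2 CPUs' worth of time
lxc config set c1 limits.cpu.allowance 200ms/100ms

# Soft limit: only enforced when the host is under load
lxc config set c1 limits.cpu.allowance 25%
```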
@ru-fu is it worth adding this to the docs? ^ Seems like useful knowledge.
I think it would be very useful to add an in-depth analysis to the docs; it is a topic that gets little coverage online.
I can offer my experience: with everything set in ms, the entire node and its containers work very well.
I never fully understood the “math” behind CPU CFS scheduler time for cgroups. With systemd and Kubernetes, you have to think about it less due to some abstraction (maybe this is bad, I don’t know). You simply “divide” the quota by the period and voilà, you get the number of CPUs’ worth of CPU time?
Some examples:
- Quota of 200ms every 100ms period => 2 CPU worth of CPU time
- Quota of 400ms every 200ms period => also 2 CPU worth of CPU time then ?
- Quota of 50ms every 100ms period => half a CPU worth of CPU time ?
Is this correct, or is there something else to understand?
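If that rule of thumb is right, it really is just quota divided by period; a tiny sketch to make the three examples concrete (the function name is mine, not an LXD or Kubernetes API):

```python
def cpus_worth(quota_ms: float, period_ms: float) -> float:
    """Number of CPUs' worth of time a CFS quota grants per period."""
    return quota_ms / period_ms

# The three examples from above:
print(cpus_worth(200, 100))  # 2.0 -> 2 CPUs' worth
print(cpus_worth(400, 200))  # 2.0 -> same ratio
print(cpus_worth(50, 100))   # 0.5 -> half a CPU
```

The ratio is identical for 200ms/100ms and 400ms/200ms; the practical difference is burstiness, since a longer period lets a task run flat-out for longer before the scheduler throttles it.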