Resource isolation/limitation causing high load on HOST

Hi there,

I was wondering whether resource limitation can have a negative impact on the server and cause very high load on the system.
For example, with a container limited to 2 vCPUs: if that container uses 100% of the CPU allowed by LXD for a long period of time, the host is impacted and the load sits around 30 instead of the normal 1.5 to 3.

The host has plenty of RAM available and lots of free CPU, but that container using 100% of its limit seems to have a negative impact on the host.
Is that true? If so, what are the options to avoid such a case?

In production, if one of your containers gets infected somehow and the attacker runs a cryptominer that bursts against the CPU limits, and this impacts the host and adds a lot of latency to the other containers, it would be a problem.

Thanks a lot for your inputs.
Regards,
Benoît

Yes, it will raise the system loadavg value. That's normal, and people should really get out of the habit of thinking high load == bad; this is simply not the case.

The load average is the number of processes on the entire system that are currently runnable, i.e. running on or queued for a CPU. With no restrictions applied, an ideal value is about the same as the number of CPU threads on the system.
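If you want to check this on your own host, here is a minimal Python sketch (standard library only) that reads the same three values as /proc/loadavg and compares them to the thread count. One nuance: on Linux the figure also counts tasks blocked in uninterruptible (D-state) sleep, not only runnable ones.

```python
import os

# Same three values as /proc/loadavg: 1, 5 and 15 minute averages.
load1, load5, load15 = os.getloadavg()
threads = os.cpu_count() or 1

print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f}")
print(f"CPU threads:   {threads}")

# Roughly: load == thread count means the machine is fully busy;
# well above it means tasks are waiting in the run queue.
print(f"load per thread (1 min): {load1 / threads:.2f}")
```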

When using CPU cgroups, a container that's running a lot of processes will have some of those processes remain queued for scheduling, which increases the global load average. This in no way indicates a problem, and the rest of the system will keep working as normal.
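To see from the host side whether a limited container is actually hitting its CPU quota, you can look at the throttling counters its cgroup exposes. A minimal sketch, assuming cgroup v2 and a hypothetical container named mycontainer; the lxc.payload.&lt;name&gt; path is a guess that depends on your LXD/LXC setup (check /proc/&lt;pid&gt;/cgroup for a container process), and the counters only move when a CFS quota (e.g. limits.cpu.allowance) is set, not with plain CPU pinning:

```python
from pathlib import Path

# Hypothetical cgroup path for an LXD container; adjust for your setup.
CGROUP = Path("/sys/fs/cgroup/lxc.payload.mycontainer")

# cpu.stat holds lines like "nr_periods 120", "nr_throttled 37", ...
stats = {}
for line in (CGROUP / "cpu.stat").read_text().splitlines():
    key, value = line.split()
    stats[key] = int(value)

# nr_throttled / nr_periods = fraction of scheduling periods in which
# the cgroup hit its quota; throttled_usec is total time spent throttled.
if stats.get("nr_periods"):
    ratio = stats["nr_throttled"] / stats["nr_periods"]
    print(f"throttled in {ratio:.1%} of periods, "
          f"{stats['throttled_usec'] / 1e6:.1f}s total")
```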


Perfect, I understand; I just wanted to confirm the behavior.
As you said, most people (like me) make the association **high load == bad**, mostly because it is rarely explained in detail (or I did not find it) how the load of a Linux system is calculated.