Docker doesn't care about the LXD container limits

Hello,

We have migrated all our VMs to LXD, and it worked wonderfully.
But all our containers run Docker, and that works inside the containers as well.
Unfortunately, the CPU and memory limits are not picked up by Docker, so the services in the Docker containers still see the complete host resources.

This leads to situations where a service wants to use 20% of the system memory, but since the container only has 5 GB of RAM, the container crashes.

I am desperate.

Thank you very much.

https://medium.com/@Alibaba_Cloud/kubernetes-demystified-using-lxcfs-to-improve-container-resource-visibility-86f48ce20c6 is the first result I'm getting when searching for how to make Docker use lxcfs.

That may be useful to you in this case.
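For reference, the approach in that article boils down to bind-mounting the lxcfs-provided `/proc` files into each Docker container, so that tools like `free` see the limit instead of the host totals. A minimal sketch, assuming lxcfs is installed inside the LXD container and mounted at its default location `/var/lib/lxcfs`:

```
# Bind-mount the lxcfs views of /proc into a memory-limited Docker container
docker run -it -m 4g \
  -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:ro \
  -v /var/lib/lxcfs/proc/diskstats:/proc/diskstats:ro \
  -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:ro \
  -v /var/lib/lxcfs/proc/stat:/proc/stat:ro \
  -v /var/lib/lxcfs/proc/swaps:/proc/swaps:ro \
  -v /var/lib/lxcfs/proc/uptime:/proc/uptime:ro \
  ubuntu:22.04 free -m
```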

Thanks for the answer. I had already discovered this, but it is very impractical to mount the "procs" for every container when you have about 20 Docker containers per host.

At least, I don't know of any way to make Docker add such volumes by default.

I already tried that, but Docker containers running Java don't seem to care; they still grab the whole RAM.
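(A possible workaround for the JVM specifically, sketched here as an assumption rather than something from this thread: recent JVMs can size the heap from the cgroup limit instead of the reported system memory, which sidesteps `/proc/meminfo` entirely. The flags below exist in JDK 10+ and were backported to 8u191; the image name is made up.)

```
# Let the JVM read the cgroup memory limit and size its heap from it:
docker run -m 4g my-java-app \
  java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar

# On older JVMs, cap the heap explicitly below the container limit:
docker run -m 4g my-java-app java -Xmx3g -jar app.jar
```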

Yeah, ideally Docker itself would support lxcfs…

It’s worth noting that this issue would happen outside of LXD too.
If you set up your Docker container with a memory limit, the workload in the Docker container will still see the whole host memory and will crash in much the same way.
That’s why I’m really surprised that Docker still hasn’t added support for lxcfs, as that would be a solution for a good 90% of affected workloads (some workloads unfortunately ignore /proc/meminfo and pull the maximum memory some other way…).
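To illustrate the point (my own sketch, not from the thread): even with a Docker memory limit in place, `/proc/meminfo` inside the container still shows the host totals, while the real limit only lives in the cgroup filesystem:

```
# Without lxcfs, a memory-limited container still reports the host's RAM:
docker run --rm -m 256m ubuntu:22.04 free -m
# -> the "total" column shows the host memory, not 256 MB

# The enforced limit is only visible via the cgroup files
# (path shown for cgroup v2; cgroup v1 uses memory.limit_in_bytes):
docker run --rm -m 256m ubuntu:22.04 cat /sys/fs/cgroup/memory.max
# -> 268435456  (256 MiB in bytes)
```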

Why is there no way to give the container different default values?

For such cases it would be much easier to manipulate the parameters in one place instead of telling each piece of software individually what it can and cannot use.

Well, that’s a Docker problem :slight_smile:

In LXD, that’s why we have profiles: add something to the default profile and all containers get it :wink:
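For example (a sketch using standard LXD commands; the container and profile names are made up):

```
# Set a memory limit on the default profile;
# every container using that profile inherits it:
lxc profile set default limits.memory 5GB

# Or keep the default profile untouched and use a dedicated profile:
lxc profile create docker-hosts
lxc profile set docker-hosts limits.memory 5GB
lxc profile set docker-hosts limits.cpu 4
lxc profile add my-container docker-hosts
```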