LXC eating memory

Hi there,
I've been playing with LXC for a week or so; I have 3 containers running some HTTP services for WordPress hosting.
I noticed that the containers are eating my host machine's RAM: 3 containers with only httpd, MySQL and WordPress, no production traffic, just a fresh install, and 16+ GB of RAM are gone; stop the containers and reboot, and the 16 GB are back. That makes me wonder, if I had these containers in production with posts, visitors, etc., would that eat 32 GB or even more? Am I better off dropping LXC and using KVM in this case? Just speculating, and I appreciate your input.
My other question: if I just tar the whole container and then download it to my other machine, will it run just fine? Or is there a better way to back up my container?
thanks.

You should run htop on the host to see what’s consuming the memory (if it’s indeed being actively used rather than just buffered).

What’s the output of free -m on the host?

Hi, here is the output of free -m:

free -m
              total        used        free      shared  buff/cache   available
Mem:          64200        1093       45998         105       17109       62293
Swap:         65535           0       65535

After rebooting the host:

free -m
              total        used        free      shared  buff/cache   available
Mem:          64200        1004       62602          71         594       62522
Swap:         65535           0       65535

I also ran the command under watch free -m;
I noticed buffered increases quickly over time.

Can you please answer the backup part?
My other question: if I just tar the whole container and then download it to my other machine, will it run just fine? Or is there a better way to back up my container?
Thank you.

Yes, a tarball of the container should be fine, as would rsyncing it.
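A minimal sketch of that workflow (the /tmp paths and the container name web1 are stand-ins; a real container would live under something like /var/lib/lxc/web1, and it should be stopped before archiving):

```shell
# Stand-in for a stopped container's directory (real path: /var/lib/lxc/web1)
mkdir -p /tmp/lxcdemo/web1/rootfs
echo "hello" > /tmp/lxcdemo/web1/rootfs/testfile

# Archive it; --numeric-owner keeps numeric uid/gid values, which matters
# because the destination host may map user names to different ids
tar --numeric-owner -czpf /tmp/web1-backup.tar.gz -C /tmp/lxcdemo web1

# On the other machine, unpack the same way
mkdir -p /tmp/lxcdemo-restore
tar --numeric-owner -xzpf /tmp/web1-backup.tar.gz -C /tmp/lxcdemo-restore

# The rsync equivalent would be something like:
#   rsync -aHX --numeric-ids /var/lib/lxc/web1/ otherhost:/var/lib/lxc/web1/
```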

The output above shows your system is using slightly over 1 GB of RAM in both cases.
The 16 GB difference you’re seeing is in buff/cache, which is perfectly normal for a running system and actually a good thing, so there’s nothing to worry about here.

Thanks for making the code tags :slight_smile:
As for backup, I noticed I’d better stop the containers first and then make the tarball. By rsyncing it, did you mean using rsync with the -avh flags?

But why a 16 GB buffer for a system that runs only 3 containers? I know the buffer helps disk caching and speeds up operations overall; my point is about the number 16. What if I were in production?
And yes, I’ve been to https://www.linuxatemyram.com/ where they say the same as you, that it’s fine.
Is it good practice to run a cron job to flush the cache, say every 6 hours, with sync; echo 3 > /proc/sys/vm/drop_caches?

In production, would you advise limiting memory usage on each container? Or just leave it at the default, which I believe is “as you go”.

Nope, no need to flush your cache, you’re just making your system slower by doing that.
The kernel will automatically free cached pages.

For a production server, you probably should apply a memory limit to your containers, just to avoid a buggy process in there eating all the memory and bringing your entire system down.
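A sketch of what that could look like in the container’s config file (cgroup v1 keys as used by older LXC releases; the 2560M value is just an example, and the memsw key additionally requires swap accounting to be enabled on the host):

```
# Hard RAM limit for the container
lxc.cgroup.memory.limit_in_bytes = 2560M
# Optional: cap memory+swap too, so the container can't swap past the limit
lxc.cgroup.memory.memsw.limit_in_bytes = 2560M
```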

Thanks for the suggestion. I used the following line in my config file to limit RAM to 2.5 GB: lxc.cgroup.memory.limit_in_bytes = 2560M
Is there any good reason why I should limit CPU as well? Or disk, or any limits besides RAM?

Just to help understanding: Linux tries to make use of “free” RAM to speed up access to recently read files. The memory in buff/cache is basically memory that was free and can be put to better use than just sitting there idle, so it’s used for buff/cache. Any time some of this memory is really needed, by processes or other more important functions, it will be released from buff/cache and handed back for use.
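A quick way to watch that happening (a small sketch; the /tmp path and 100 MB size are arbitrary, and the exact numbers will vary with memory pressure):

```shell
# Page cache size before touching the file (value in kB)
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

# Create and read back a 100 MB file; its pages stay in the page cache
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=100 status=none
cat /tmp/cache-demo > /dev/null

after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached grew by roughly $(( (after - before) / 1024 )) MB"
rm /tmp/cache-demo
```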

I believe this also happens in Android to some degree, and it confuses a lot of people who see “my memory is all occupied”, which is correct, because RAM is one of the fastest memories in a device and Linux tries to put it to use as much as possible, without it meaning you are “running out of memory” :slight_smile:

htop makes it easier to visualize at a glance [2]

[2] https://codeahoy.com/2017/01/20/hhtop-explained-visually/

Hi.
I do understand why caching exists and how it can help disk speed. My question was about a fresh OS on the host with only 3 containers on it, running httpd, MySQL and WordPress with no production traffic; 16 GB is a little too much, maybe 3 to 6 GB would be OK. Anyway, it’s all fine. Thanks for the input.

Hi, I just noticed that my cache has increased to 17 GB again, in the time frame from yesterday’s post till now. Is that normal? Because I don’t think so. Is there a way to track the cause of this in depth?
free -m
              total        used        free      shared  buff/cache   available
Mem:          64200        1119       45139         109       17942       62263
Swap:         65535           0       65535

It’s normal, the cache will effectively contain a copy of every file that you ever read since the system started so that you don’t need to actually access the disk. As I said before, when your system actually needs RAM, the cache will get freed accordingly.
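If you want to look a bit deeper than free, /proc/meminfo breaks that figure down (with procps, free’s buff/cache column is roughly Buffers + Cached + SReclaimable, the last being the reclaimable part of kernel slab caches):

```shell
# The main components behind free's "buff/cache" column (values in kB)
grep -E '^(Buffers|Cached|SReclaimable|Shmem):' /proc/meminfo
```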

Hi, I have added lxc.cgroup.memory.limit_in_bytes = 2560M to my container’s config file and restarted the host and the container, but the output of free -m still shows the whole 64 GB of the host instead of the container’s 2.5 GB limit. Is there a way to make it show only the amount of RAM it is limited to? And how do I restrict the amount of buffer/cache a certain container can have?
Thanks.

You’d need lxcfs for free to show cgroup limits; otherwise, they’ll be enforced but aren’t visible from userspace.

Hi, thanks Stephane. Bear with me please; could you re-read my post and answer it keeping in mind that I’m a newbie?

Cache/buffer is global; you cannot restrict it per container. To have your memory limit show up, you need to install lxcfs.

I will google how to restrict the OS buffer/cache globally.
lxcfs is already installed; I can see it has a directory in /var/lib/lxcfs as well. Is it off and needs to be turned on? If so, how do I do that?
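For what it’s worth, checking whether the lxcfs service is actually running could look like this (a sketch, assuming a systemd host and that the service is named lxcfs, as on Debian/Ubuntu):

```shell
# Query the lxcfs service state; prints "active" when it's running
status=$(systemctl is-active lxcfs 2>/dev/null || echo "inactive")
echo "lxcfs service: $status"

# To start it now and enable it at boot:
#   sudo systemctl enable --now lxcfs

# When running, lxcfs exports per-container views of proc files here,
# which containers then see in place of the host's /proc files
ls /var/lib/lxcfs/proc 2>/dev/null || echo "/var/lib/lxcfs not mounted"
```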