Memory leak with stacking forkproxy processes?

I’ve been trying to understand why resource usage keeps creeping up until total server failure.
There are 4 containers with different projects and 1 with HAProxy managing the requests.
On the dashboard, we can see that the memory usage keeps growing; we restart the server, it drops, and then it starts growing again.


On the main server, there are a lot of forkproxy processes draining the memory, but inside the individual containers memory usage is minimal.
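For reference, one rough way to tally those processes and their resident memory on the host (a sketch; the exact process name can vary by LXD version):

ps -eo pid,rss,args | grep '[f]orkproxy' | awk '{sum += $2; n++} END {printf "%d forkproxy processes, %.1f MiB RSS\n", n, sum/1024}'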
Any clue on what might be causing it?
Currently running LXC 3.0.3 on bionic.
Thanks,

Please can you post your container config using lxc config show <container> --expanded?

Also, how busy are these servers, how many requests are they handling roughly?

 lxc config show ests-wp --expanded
architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Ubuntu 18.04 LTS server (20180522)
  image.os: ubuntu
  image.release: bionic
  limits.cpu.allowance: 30%
  limits.memory: 1024MB
  limits.memory.enforce: hard
  limits.memory.swap: "false"
  volatile.base_image: 7810e42b8556b4beb9dac2cbdfeef778550d5adce888fd227b2af0b3bf0b0cb6
  volatile.eth0.hwaddr: 00:16:3e:44:52:3b
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

One of the containers gets around 1K requests per hour.
The other 3 are projects under development with even fewer requests.
Thanks,

That container does not appear to have a proxy device, so it cannot be starting forkproxy processes.

I think I need to see the haproxy container’s config.
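For context, a proxy device in the expanded config would look something like this (the device name myport80 is hypothetical), and each such device normally spawns one forkproxy process on the host:

devices:
  myport80:
    connect: tcp:127.0.0.1:80
    listen: tcp:0.0.0.0:80
    type: proxy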

Any chance you’d consider moving to the LXD snap on bionic?
That’d move you over to 4.0.1 which does contain a LOT of proxy fixes.
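For reference, the usual route is to install the snap alongside the deb and let the migration tool move everything over, roughly:

snap install lxd
lxd.migrate

lxd.migrate is interactive; it copies the data out of /var/lib/lxd into the snap and then removes the deb packages.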

I ran
snap remove lxd
and then
snap install lxd
Now I’m on LXD 4.0.1 and LXC 3.0.3
How can I get to LXC 4?
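As an aside, the lxc reporting 3.0.3 is most likely the deb lxd-client still first on the PATH; the snap ships its own client, which can be checked with:

which lxc              # probably /usr/bin/lxc from the deb
/snap/bin/lxc version  # the snap’s own 4.x client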

Meanwhile, all my containers disappeared :grimacing:
I was bringing them back from the previous server following this guide:


but with little success:
Error: The instance "haproxy" does not seem to exist on any storage pool
I saw on another post that the location changed to:
/var/snap/lxd/common/lxd/storage-pools/default/containers/haproxy
but that folder doesn’t even exist.
Any help would be appreciated.
A memory leak does not look that bad now :stuck_out_tongue:
Thanks,
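Aside: that error message suggests the import looked for the container’s directory under the snap’s storage pool path, while data created by the deb LXD lives under /var/lib/lxd instead. A quick comparison of the two default locations:

ls -l /var/snap/lxd/common/lxd/storage-pools/default/containers/   # where the snap looks
ls -l /var/lib/lxd/storage-pools/default/containers/               # where the deb put the data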

Ok, let’s see how bad it is. You should never run snap remove lxd unless you actually want it and everything it stores removed… Though hopefully we can still recover somehow.

Please show:

  • dpkg -l | grep lxd
  • snap list
  • ls -lh /var/lib/snapd/snapshots/

Likely not…

dpkg -l | grep lxd
rc lxd 3.0.3-0ubuntu1~18.04.1 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 3.0.3-0ubuntu1~18.04.1 amd64 Container hypervisor based on LXC - client

snap list
Name              Version    Rev    Tracking         Publisher   Notes
amazon-ssm-agent  2.3.714.0  1566   latest/stable/…  aws✓        classic
core              16-2.44.3  9066   latest/stable    canonical✓  core
core18            20200427   1754   latest/stable    canonical✓  base
lxd               4.0.1      14890  latest/stable    canonical✓  -

ls -lh /var/lib/snapd/snapshots/
total 564K
-rw------- 1 root root 557K May 5 22:27 1_lxd_4.0.1_14890.zip
-rw------- 1 root root 1.4K May 5 23:10 2_lxd_4.0.1_14890.zip
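If those snapshots held anything useful, snapd could list and restore them (the set IDs are the leading numbers in the filenames), e.g.:

snap saved          # list the automatic snapshots
snap restore 1 lxd  # restore snapshot set 1 for the lxd snap

Given the sizes, though, they are unlikely to contain container data, which under the deb install lived in /var/lib/lxd anyway.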

Ok, can you show ls -lh /var/lib/lxd/containers/?

And ls -lh /var/lib/lxd too for good measure?

ls -lh /var/lib/lxd/containers/
total 0
lrwxrwxrwx 1 root root 54 May 5 23:31 haproxy -> /var/lib/lxd/storage-pools/default/containers/haproxy/

ls -lh /var/lib/lxd
total 52K
drwx--x--x 2 root root 4.0K May 5 23:31 containers
drwx------ 3 root root 4.0K May 5 16:10 database
drwx--x--x 2 root root 4.0K May 5 16:10 devices
drwxr-xr-x 2 root root 4.0K May 5 16:10 devlxd
drwx------ 2 root root 4.0K May 5 16:10 disks
drwx------ 2 root root 4.0K May 5 22:32 images
drwx--x--x 3 root root 4.0K May 5 22:32 networks
drwx------ 2 root root 4.0K May 5 16:10 security
-rw-r--r-- 1 root root 2.0K May 5 16:10 server.crt
-rw------- 1 root root 3.2K May 5 16:10 server.key
drwx--x--x 2 root root 4.0K May 5 16:10 shmounts
drwx------ 2 root root 4.0K May 5 16:10 snapshots
drwx--x--x 3 root root 4.0K Feb 11 22:58 storage-pools
srw-rw---- 1 root lxd 0 May 5 23:11 unix.socket
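That haproxy symlink pointing into /var/lib/lxd/storage-pools is a good sign: the deb-era container data is probably still on disk. One quick way to confirm (a sketch):

ls -lh /var/lib/lxd/storage-pools/default/containers/
du -sh /var/lib/lxd/storage-pools/default/containers/haproxy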

Ok, good. Try running apt-get install lxd.
Then run /usr/bin/lxc list to see if things look reasonable.
Try starting a container to make sure things are all working again.

Once that’s all good, run lxd.migrate to move to the snap.
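Once migrated, a quick sanity check against the snap’s client could look like this (container name taken from this thread):

/snap/bin/lxc list
/snap/bin/lxc start haproxy
/snap/bin/lxc exec haproxy -- uptime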