Incorrect memory reporting

I remember that, a while ago, containers showed the right memory size, corresponding to the config setting limits.memory=

It worked on Ubuntu with free -m, and on Alpine via /proc/meminfo.

Now, to get back to that state, do I need to apply lxc config set a1 security.syscalls.intercept.sysinfo=true
to each container and restart it?
sysinfo=true should really be the default: that was the behaviour before the LXD version upgrades, and it delivers correct information inside the container, which again argues for making it the default.

No. Emulation of /proc/meminfo is provided by LXCFS (enabled by default in the LXD snap package).

The security.syscalls.intercept.sysinfo option is only needed if your application doesn’t rely on /proc/meminfo (e.g. because it calls the sysinfo() syscall directly).
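To make the distinction concrete, here is a minimal sketch (mine, not part of any LXD tooling; assumes 64-bit Linux and glibc) comparing the two paths a process can use to learn the memory size. Run inside a container, the first path goes through the LXCFS-emulated file, while the second hits the raw syscall, which returns host values unless security.syscalls.intercept.sysinfo=true rewrites it:

```python
import ctypes

# Layout of glibc's "struct sysinfo" on 64-bit Linux.
class Sysinfo(ctypes.Structure):
    _fields_ = [
        ("uptime", ctypes.c_long),
        ("loads", ctypes.c_ulong * 3),
        ("totalram", ctypes.c_ulong),
        ("freeram", ctypes.c_ulong),
        ("sharedram", ctypes.c_ulong),
        ("bufferram", ctypes.c_ulong),
        ("totalswap", ctypes.c_ulong),
        ("freeswap", ctypes.c_ulong),
        ("procs", ctypes.c_ushort),
        ("pad", ctypes.c_ushort),
        ("totalhigh", ctypes.c_ulong),
        ("freehigh", ctypes.c_ulong),
        ("mem_unit", ctypes.c_uint),
    ]

def meminfo_total_kib():
    # What procps free(1) reads: inside a container this file is the
    # LXCFS-emulated view, so it reflects limits.memory.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])

def sysinfo_total_bytes():
    # Raw sysinfo(2) syscall (what e.g. BusyBox free on Alpine uses):
    # inside a container this returns host totals unless the
    # sysinfo syscall interception is enabled.
    libc = ctypes.CDLL(None, use_errno=True)
    info = Sysinfo()
    if libc.sysinfo(ctypes.byref(info)) != 0:
        raise OSError(ctypes.get_errno())
    return info.totalram * info.mem_unit

print("MemTotal via /proc/meminfo:", meminfo_total_kib(), "KiB")
print("MemTotal via sysinfo(2):   ", sysinfo_total_bytes() // 1024, "KiB")
```

On a plain host the two numbers agree; in a memory-limited container without the intercept, only the first one follows limits.memory.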

lxd 5.1-4ae3604 23001 latest/stable canonical✓ -

Somehow limits.memory=8GiB is not working anymore.
The container shows the total server memory, despite:

image.os: ubuntu
image.release: focal
image.serial: "20210108_0856"
limits.memory: 8GiB
security.syscalls.intercept.sysinfo: "true"

after restart:
lxc exec c1 -- free -h

              total        used        free      shared  buff/cache   available
Mem:          125Gi       556Mi       125Gi        85Mi       138Mi       125Gi
Swap:            0B          0B          0B

On the host, free -h:

              total        used        free      shared  buff/cache   available
Mem:          125Gi        62Gi        21Gi       770Mi        42Gi        61Gi
Swap:          29Gi       6.0Mi        29Gi

If you didn’t need security.syscalls.intercept.sysinfo: "true" before to see the memory limits reflected in the container, then you should disable it, as you won’t need it.

Most likely LXCFS isn’t running. What does ps aux | grep lxcfs show on the host?

root 3122644 0.0 0.0 520552 3572 ? Sl Apr28 1:38 lxcfs /var/snap/lxd/common/var/lib/lxcfs -p /var/snap/lxd/common/lxcfs.pid

Have you tried restarting the container?

I did, and I also hard-restarted LXD (snap restart lxd).

Are you nesting containers?

I’ve not been able to reproduce this using a Focal LXD host and container:

root@home02:~# snap install lxd --channel=latest/stable
lxd 5.1-1f6f485 from Canonical✓ installed
root@home02:~# lxd init --auto
root@home02:~# ps aux | grep lxcfs
root        2406  0.0  0.0  85660  1756 ?        Sl   10:05   0:00 lxcfs /var/snap/lxd/common/var/lib/lxcfs -p /var/snap/lxd/common/lxcfs.pid
root        5066  0.0  0.0   6440   720 pts/0    S+   10:05   0:00 grep --color=auto lxcfs
root@home02:~# lxc init images:ubuntu/focal c1
Creating c1
root@home02:~# lxc config set c1 limits.memory=2GiB
root@home02:~# lxc start c1
root@home02:~# lxc exec c1 -- free -m
              total        used        free      shared  buff/cache   available
Mem:           2048          29        2014           0           4        2018
Swap:          2048           0        2048
root@home02:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          11777         366        9926           1        1484       11114
Swap:          4095           0        4095

No. The host is Ubuntu Focal, kernel 5.4.0-109-generic #123-Ubuntu,
with unprivileged containers on a ZFS pool (zfs-0.8.3-1ubuntu12.13).

Does it occur on a freshly created container? It may be that something inside the container is unmounting the proc files.

You are right. In freshly created containers, memory is shown correctly:
lxc exec ub1 -- free -h

              total        used        free      shared  buff/cache   available
Mem:          488Mi        40Mi       427Mi       0.0Ki        20Mi       448Mi
Swap:            0B          0B          0B

So most likely something inside the container is unmounting/remounting /proc (possibly systemd).
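One quick way to check that diagnosis from inside an affected container is to look for the fuse.lxcfs entries in /proc/mounts. A small sketch (the helper name lxcfs_overlays is mine, not an LXD/LXCFS API):

```python
def lxcfs_overlays(mounts_path="/proc/mounts"):
    # Return the mount points LXCFS still covers. In a healthy LXD
    # container this includes /proc/meminfo, /proc/cpuinfo, etc.
    # If something (e.g. systemd) remounted /proc, the list comes
    # back empty and tools fall back to the kernel's host-wide files.
    with open(mounts_path) as f:
        return [line.split()[1] for line in f
                if line.split()[2] == "fuse.lxcfs"]

print(lxcfs_overlays())
```

An empty list inside a container where limits.memory is set would match the "something unmounted the proc files" explanation; on a plain host the list is expected to be empty anyway.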

All existing (old) containers are affected, whether Ubuntu or Alpine.
Only the freshly created ones behave properly.

I’ve moved this (now quite long) discussion to its own topic.