Problems with lxc memory limits

Fresh install of Ubuntu 16.04. I am able to set a CPU quota and see it reflected in the output from the container, but not a memory quota.
I’m sure it is my fault, but I cannot see what is missing.

# lxc config set test-ubuntu-16-04 limits.memory 256MB

# lxc exec test-ubuntu-16-04 free
              total        used        free      shared  buff/cache   available
Mem:        3947908       61548     3881556       14016        4804     3881556
Swap:       8224764           0     8224764

I tried different kernels without success. Do I need to pass a special option to the kernel at boot time?

# lxc --version
2.0.11

# uname -a
4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.4.0-112-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Warning: newuidmap is not setuid-root
Warning: newgidmap is not setuid-root
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

# cat /proc/mounts |grep lxcfs
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0

That’s weird, does restarting the container make a difference somehow?

Also, can you paste the output of cat /proc/mounts from inside the container (want to make sure meminfo is properly overmounted by lxcfs)?

And can you also look at /var/log/lxd/lxd.log for errors?
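
For reference, something like this should be enough to check the meminfo overmount (assuming the container name from your first post):

lxc exec test-ubuntu-16-04 cat /proc/mounts | grep meminfo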

Thank you for the very quick reply.

A container restart or even a full machine reboot makes no difference.

I found the following warning in /var/log/lxd/lxd.log, but it seems to refer to swap.
I can get rid of it by passing “swapaccount=1” on the kernel command line, but even with that option I still cannot get the LXC container to report the correct RAM limit.

lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-02-19T15:03:18+0000
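
For reference, this is roughly how I passed it (assuming GRUB is the bootloader): append swapaccount=1 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then

# update-grub
# reboot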

/proc/mounts from inside the container:

# lxc exec test-ubuntu-16-04 cat /proc/mounts
lxd/containers/test-ubuntu-16-04 / zfs rw,relatime,xattr,noacl 0 0
none /dev tmpfs rw,nodev,relatime,size=492k,mode=755,uid=100000,gid=100000 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nodev,relatime 0 0
udev /dev/fuse devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/net/tun devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
tmpfs /dev/lxd tmpfs rw,relatime,size=100k,mode=755 0 0
tmpfs /dev/.lxd-mounts tmpfs rw,relatime,size=100k,mode=711 0 0
lxcfs /proc/cpuinfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/diskstats fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/meminfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/stat fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/swaps fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/uptime fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
udev /dev/null devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/zero devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/full devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/urandom devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/random devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
udev /dev/tty devtmpfs rw,nosuid,relatime,size=1953732k,nr_inodes=488433,mode=755 0 0
devpts /dev/console devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
devpts /dev/pts devpts rw,relatime,gid=100005,mode=620,ptmxmode=666 0 0
devpts /dev/ptmx devpts rw,relatime,gid=100005,mode=620,ptmxmode=666 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,uid=100000,gid=100000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755,uid=100000,gid=100000 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,uid=100000,gid=100000 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755,uid=100000,gid=100000 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0

The running config seems to confirm that LXD is aware of the limits:

# lxc config show test-ubuntu-16-04
architecture: x86_64
config:
  limits.cpu: "1"
  limits.memory: 256MB
  volatile.base_image: 069b95ed3a60645ee1905b7625a468d1357f00bd61bf096fc597063c6ed42cf1
  volatile.eth0.hwaddr: 00:16:3e:16:60:9d
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  root:
    path: /
    size: 5GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

What does cat /sys/fs/cgroup/memory/memory.limit_in_bytes show from inside the container?

root@test-ubuntu-16-04:~# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712

Hmm, okay, so lxcfs is right and the memory limit isn’t applied (that value is the cgroup default, i.e. no limit)…

Does this fix it?

echo 268435456 > /sys/fs/cgroup/memory/lxc/test-ubuntu-16-04/memory.limit_in_bytes

(As root on the host)
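
If it takes effect, the same value (268435456 = 256 * 1024 * 1024 bytes, i.e. 256MiB) should then be visible from inside the container as well:

lxc exec test-ubuntu-16-04 cat /sys/fs/cgroup/memory/memory.limit_in_bytes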

Yes, I can now see 256MB inside the LXC container.

Can you post the output of lxc info?

# lxc info
config:
  storage.zfs_pool_name: lxd
api_extensions:
- id_map
- id_map_base
- resource_limits
api_status: stable
api_version: "1.0"
auth: trusted
auth_methods: []
public: false
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
cut
    -----END CERTIFICATE-----
  certificate_fingerprint: cut

  driver: lxc
  driver_version: 2.0.8
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-112-generic
  server: lxd
  server_pid: 2432
  server_version: 2.0.11
  storage: zfs
  storage_version: "5"

Thanks.

Can you run lxc monitor in one terminal while running the following in another:

lxc config set test-ubuntu-16-04 limits.memory 512MB
lxc config set test-ubuntu-16-04 limits.memory 256MB

And then post the output you get in that first terminal?

# lxc monitor
metadata:
  context: {}
  level: dbug
  message: 'New events listener: 710df958-b4fd-4647-b2fc-07da8b5768b0'
timestamp: 2018-02-20T08:57:12.324418025Z
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:23.00502783Z
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/test-ubuntu-16-04
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:23.018013893Z
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/events
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:23.028935643Z
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/test-ubuntu-16-04
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:23.02980569Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New events listener: dbf50e1c-5828-4a87-a5ef-d57d8e648233'
timestamp: 2018-02-20T08:57:23.029790679Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:23.031154485Z
  err: ""
  id: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Running
  status_code: 103
  updated_at: 2018-02-20T08:57:23.031154485Z
timestamp: 2018-02-20T08:57:23.031299643Z
type: operation


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6'
timestamp: 2018-02-20T08:57:23.031273422Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6'
timestamp: 2018-02-20T08:57:23.031177501Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:23.031154485Z
  err: ""
  id: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Pending
  status_code: 105
  updated_at: 2018-02-20T08:57:23.031154485Z
timestamp: 2018-02-20T08:57:23.031224059Z
type: operation


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:23.039736843Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6'
timestamp: 2018-02-20T08:57:23.076733761Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:23.031154485Z
  err: ""
  id: 2e6cb93a-9fd2-48f1-9b26-0568e9e6b1e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Success
  status_code: 200
  updated_at: 2018-02-20T08:57:23.031154485Z
timestamp: 2018-02-20T08:57:23.076813998Z
type: operation


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:24.805030154Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Disconnected events listener: dbf50e1c-5828-4a87-a5ef-d57d8e648233'
timestamp: 2018-02-20T08:57:24.829844113Z
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/test-ubuntu-16-04
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:24.836973617Z
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/events
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:24.847511982Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New events listener: f530d1cc-6e4a-4567-bca2-5fa6a05b3377'
timestamp: 2018-02-20T08:57:24.848257373Z
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/test-ubuntu-16-04
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:24.848818578Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 1b09dc28-eeda-48f1-91df-23635e28e16e'
timestamp: 2018-02-20T08:57:24.850289335Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:24.850266709Z
  err: ""
  id: 1b09dc28-eeda-48f1-91df-23635e28e16e
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Pending
  status_code: 105
  updated_at: 2018-02-20T08:57:24.850266709Z
timestamp: 2018-02-20T08:57:24.850344619Z
type: operation


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 1b09dc28-eeda-48f1-91df-23635e28e16e'
timestamp: 2018-02-20T08:57:24.85039073Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:24.850266709Z
  err: ""
  id: 1b09dc28-eeda-48f1-91df-23635e28e16e
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Running
  status_code: 103
  updated_at: 2018-02-20T08:57:24.850266709Z
timestamp: 2018-02-20T08:57:24.850417174Z
type: operation


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/1b09dc28-eeda-48f1-91df-23635e28e16e
  level: dbug
  message: handling
timestamp: 2018-02-20T08:57:24.856218268Z
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 1b09dc28-eeda-48f1-91df-23635e28e16e'
timestamp: 2018-02-20T08:57:24.89229784Z
type: logging


metadata:
  class: task
  created_at: 2018-02-20T08:57:24.850266709Z
  err: ""
  id: 1b09dc28-eeda-48f1-91df-23635e28e16e
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/test-ubuntu-16-04
  status: Success
  status_code: 200
  updated_at: 2018-02-20T08:57:24.850266709Z
timestamp: 2018-02-20T08:57:24.892389127Z
type: operation

And I don’t suppose you actually saw the container’s limit go to 512MB and then back down to 256MB when you did that?

The complete lack of error is very puzzling…

No, the container is still reporting the full memory of the host.
The host is a KVM VM, but that should not make a difference, should it?

Shouldn’t make a difference

That’s what I thought. The fact that I can set a CPU limit makes me think that something else is causing the issue.

I ended up creating a new LXC host; this time everything worked.

That’s very confusing, I really wonder why memory limits wouldn’t get applied on that first host and especially why rebooting didn’t fix whatever was wrong.

Looking at LXD, the only case I can think of where it’d silently skip applying the limit is if the memory controller wasn’t mounted at the time LXD started, but given that mounting it is one of the first things systemd does on boot, that’s pretty much impossible…
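
For anyone debugging this later, a quick sanity check on the host would be something like:

grep memory /proc/cgroups
grep memory /proc/mounts

The first line shows whether the memory controller is enabled in the kernel, the second whether it is actually mounted.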

It’s an old topic, but I have seen the exact same behavior.

lxc info:

config: {}
api_extensions:
- id_map
- id_map_base
- resource_limits
api_status: stable
api_version: "1.0"
auth: trusted
auth_methods: []
public: false
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  certificate_fingerprint: 
  driver: lxc
  driver_version: 2.0.8
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-112-generic
  server: lxd
  server_pid: 616
  server_version: 2.0.11
  storage: dir
  storage_version: ""

Running LXD 3.1 on a KVM virtual server (snap lxd installed). Started an alpine container and set limits.memory to 256MB.

Checked top: 256MB = correct.
Checked free: 4000MB = incorrect; this is the KVM server’s total system memory.

Restarted the container. Still same effect.

Top: Mem: 900K used, 261244K free, 87904K shrd, 0K buff, 292K cached
Free: Mem: 3948048 2748944 1199104 87904 263000 308 (reported in KB)

Tried a different alpine/edge container:

lxc config set a2 limits.memory 256MB

Same effect.

Tried with a Debian (debian/9) container:

Top: KiB Mem : 262144 total, 247760 free
Free: Mem: 262144

Here the stats are correct.

It seems that with some images the internally reported limits are not set correctly.

This is spam… It’s a copy of: https://github.com/lxc/lxc/issues/2312#issuecomment-392730173

@stgraber Can you ban this user?

He’s also here: Fan Networking creation?