LXCFS 3.1.2 has been released

Introduction

The LXCFS team is pleased to announce the release of LXCFS 3.1.2!

We had to re-roll the 3.1.0 release twice: first because of a bad Makefile causing an invalid release tarball to be generated, and then again to fix an upgrade issue affecting some users of LXCFS 3.0.4.

New features

Add support for per-container cpu usage in /proc/stat

Newer LXCFS releases make it possible to virtualize cpu usage per container by using the cpuacct cgroup.
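
For example, assuming an LXD container named c1, the cpu lines in /proc/stat seen from inside the container now reflect its own usage rather than the host's:

lxc exec c1 -- head -n 3 /proc/stat   # aggregate cpu line plus the first per-cpu lines, virtualized per container
grep '^cpu ' /proc/stat               # host view, for comparison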

Add support for load average (loadavg) virtualization

LXCFS now supports virtualizing /proc/loadavg. It will calculate the loadavg for a container based on the cpu cgroup.
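
For example, when lxcfs is started with the -l flag (the mountpoint and the container name c1 below are assumptions), the load average seen inside a container only reflects that container's own tasks:

lxcfs -l /var/lib/lxcfs            # start lxcfs with loadavg tracking enabled
lxc exec c1 -- cat /proc/loadavg   # container-local load average
cat /proc/loadavg                  # host load average, for comparison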

Display cpus in /proc/cpuinfo based on cpu quotas

LXCFS will virtualize the cpus displayed in /proc/cpuinfo using the cpu cgroup and quotas calculated there.
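
For example, assuming an LXD container c1 given a hard CPU quota equivalent to two CPUs, the processor count reported inside should follow that quota:

lxc config set c1 limits.cpu.allowance 200ms/100ms   # quota equivalent to 2 CPUs
lxc exec c1 -- grep -c '^processor' /proc/cpuinfo    # expected to report 2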

Allow to disable swap in /proc/meminfo output

This adds the -u option to disable swap info output in /proc/meminfo.
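
For example (the mountpoint and the container name c1 are assumptions):

lxcfs -u /var/lib/lxcfs                        # start lxcfs with swap info disabled
lxc exec c1 -- grep -i '^swap' /proc/meminfo   # swap values no longer mirror the host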

Virtualize /sys/devices/system/cpu/online

LXCFS now also partially virtualizes sysfs. The first file to virtualize is /sys/devices/system/cpu/online per container.
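
For example, assuming an LXD container c1 limited to two CPUs:

lxc config set c1 limits.cpu 2
lxc exec c1 -- cat /sys/devices/system/cpu/online   # expected to show something like 0-1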

Enable higher precision output in /proc/uptime

The calculations for /proc/uptime are now more precise.

Add support for FUSE nonempty option

The lxcfs binary can now be passed the -d option. When passed, lxcfs will also start when the mountpoint is not empty.

Bugfixes

  • bindings: ensure that opts is non NULL
  • Makefile: Fix typo in file name
  • remove unused functions
  • sys dirs do not need to implement ‘read’ method
  • lxcfs: coding style update
  • config: Adds RPM spec file.
  • config: Adds reload mode to sysvinit and systemd scripts.
  • bindings: prevent NULL pointer dereference
  • stat: check for out of bound access to cpuacct usage data
  • calc_hash(): do not apply modulo LOAD_SIZE
  • tests: include missing sys/sysmacros.h header
  • bindings: prevent double free
  • bindings: better logging for write_string()
  • meminfo: set ShmemHugePages and ShmemPmdMapped to zero
  • bindings: fix memory leak in calc_pid()
  • travis: fix .travis.yml
  • bindings: fix memory leak in proc_loadavg_read()

Support and upgrade

LXCFS 3.1.2 is only supported until the next feature release of LXCFS.
For long term support, you should prefer LXCFS 3.0.4 LTS, which is supported until June 2023.

Downloads


Add support for load average (loadavg) virtualization

YAY!! This is amazing. A couple of questions:

  • How can I install this?
  • Will LXD be exposing any load averages (etc.) in any endpoints?
  • Does the user need to do anything special to virtualize the load average or will it be enabled by default?

Thanks

I believe it is opt-in; you need to pass a flag to lxcfs to get it, as it is expensive for lxcfs to keep track of that data.

Can you be any more specific or does it still require compiling by yourself?

Usage:

lxcfs [-f|-d] -u -l -n [-p pidfile] mountpoint
  -f running foreground by default; -d enable debug output 
  -l use loadavg 
  -u no swap 
  Default pidfile is /run/lxcfs.pid

This shows that you need to pass -l to have loadavg enabled.
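
On a non-snap install this typically means adding -l to the lxcfs command line, for example through a systemd override (the unit name and paths below are assumptions and may differ per distribution):

systemctl edit lxcfs
# In the override file:
# [Service]
# ExecStart=
# ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs
systemctl restart lxcfs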

Is there a way to enable it with the snap package of LXD 3.16?

Not right now, but it’s something we certainly could offer as an option.

Can you file a bug at https://github.com/lxc/lxd-pkg-snap so I don’t forget?

How do I configure an LXD container to enable this?

Currently the load average inside the container looks the same as on the node.

Latest LXD 3.18 from the snap.

Fairly certain you still have to custom compile LXCFS.

The edge build has a suitable version of lxcfs with a config option for it:

snap set lxd lxcfs.loadavg=true && systemctl reload snap.lxd.daemon

This will make it to stable with the release of LXCFS 4.0.


Any chance of including that load average in the 4.0 container info API? :innocent:

Unlikely. As we mentioned before, even with lxcfs tracking it, there is no clean way for LXD to retrieve the data that doesn't involve accessing the container's filesystem. Doing so is a big security risk we'd rather not take.

Thanks! Will snap set lxd lxcfs.loadavg=true work with the next LXD 3.19 or only with 4.0+?

This is not a LXD feature but a LXCFS feature, so you’ll need the next stable release of LXCFS which is going to be 4.0 in a couple of months.

Well, I found that LXD 3.18 from the snap stable channel already ships with the new lxcfs:

/snap/lxd/current/bin/lxcfs --version
3.1.2

But there is no documentation about the new snap options such as lxcfs.loadavg and the other new flags. I googled it and still did not find anything about it.

snap info lxd shows only this block:

  **Configuration options**

  Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
   - criu.enable: Enable experimental live-migration support [default=false]
   - daemon.debug: Increases logging to debug level [default=false]
   - daemon.group: Group of users that can interact with LXD [default=lxd]
   - ceph.builtin: Use snap-specific ceph configuration [default=false]
   - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]

Is it documented at all? Where can I find info about the other options? I think they might help me with a lot of tasks.

The list of flags shown by the store sometimes gets out of date (which is annoying); the yaml in /snap/lxd/current/meta should have them all.
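
For example (the exact file name below is an assumption based on the standard snap layout):

less /snap/lxd/current/meta/snap.yaml   # the description field should list all supported options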

snap set lxd lxcfs.loadavg=true
systemctl reload snap.lxd.daemon

/snap/lxd/current/bin/lxcfs --version
3.1.2
/snap/lxd/current/bin/lxc --version
3.18

lxc exec c99 uptime ; uptime
 15:12:10 up 26 days,  6:01,  0 users,  load average: 4.50, 6.02, 4.46
 15:12:11 up 32 days, 16:33,  1 user,  load average: 4.50, 6.02, 4.46

c99 is a freshly created container without any load inside.
So the load average from the container and the node is still the same. What am I doing wrong?

The same with 3.19 from the snap:

root@d1:~# /snap/lxd/current/bin/lxcfs --version
3.1.2
root@d1:~# /snap/lxd/current/bin/lxc --version
3.19

root@d1:~# lxc exec c99 uptime ; uptime
 08:37:56 up 37 days, 12:58,  0 users,  load average: 2.06, 1.92, 1.95
 11:37:56 up 37 days, 12:59,  1 user,  load average: 2.06, 1.92, 1.95

root@d1:~# snap get lxd lxcfs.loadavg
true

ps aux | grep lxcfs