/dev/kmsg on cgroupv2

So I’m running k8s clusters in LXD successfully on CentOS 8 and Oracle Linux 8. However, on Fedora 34 (which uses cgroup v2) the Kubernetes build is failing with these errors:

root@maestro (LXD container) :  journalctl -u kubelet

Dec 13 17:18:57 maestro kubelet[9850]: E1213 17:18:57.144925    9850 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: failed to create kubelet: open /dev/kmsg: operation not permitted"

and the kubeadm init output is:

[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
... repeats...

Based on the logging, the issue seems to be /dev/kmsg. The LXD profile I use for these Kubernetes LXD containers is as follows:

[orabuntu@f34sv1 ~]$ lxc profile show k8s-weavenet
config:
  limits.cpu: "4"
  limits.memory: 8GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw
    sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file"
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD WeaveNet
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: sw1a
    type: nic
  root:
    path: /
    pool: local
    type: disk
name: k8s-weavenet
used_by:
- /1.0/instances/maestro
- /1.0/instances/violin1
- /1.0/instances/violin2

and the part of the above profile that handles /dev/kmsg is:

sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file
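For reference, one way to set that raw.lxc key from the host shell without opening an editor is with `lxc profile set` (a sketch that simply re-emits the same value shown in the profile above; printf expands the \n escapes into the multi-line value LXD stores):

```shell
# Re-apply the raw.lxc value from the k8s-weavenet profile shown above.
# printf turns the \n escapes into real newlines before LXD stores the key.
lxc profile set k8s-weavenet raw.lxc "$(printf '%b' \
  'lxc.apparmor.profile=unconfined\nlxc.cap.drop=\nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file')"
```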

Do I need to change this line in the k8s-weavenet profile in some way for cgroup v2? That seems to be the cardinal difference between CentOS 8/Oracle Linux 8 and Fedora 34.

The other difference is that Fedora uses nftables, unlike CentOS and OL, which use iptables.

/dev/kmsg is present in the LXD containers, but for some reason on Fedora 31+ it isn’t working:

[root@maestro ~]# ls -l /dev/kmsg
crw-r--r--. 1 root root 1, 11 Dec 13 16:58 /dev/kmsg
[root@maestro ~]# 

Thanks!

In case this helps anyone else: the “easiest” fix turned out to be a one-liner.

For non-cgroup v2 systems (e.g. default CentOS 8 and default Oracle Linux 8), the above-referenced line in the profile is sufficient for handling /dev/kmsg:

sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file

However, if you are running Kubernetes on cgroup v2 (e.g. default Fedora 31+, which uses cgroup v2 and nftables), you will also need to run this command in addition to the line in the profile:

lxc config device add "ContainerName" "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
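If the profile is shared by several containers (as in the used_by list above), the same device has to be added to each one; a small sketch, using the container names from my own profile:

```shell
# Add the kmsg unix-char device to every container under the k8s-weavenet
# profile. Container names here are the ones from the used_by list above.
for c in maestro violin1 violin2; do
  lxc config device add "$c" kmsg unix-char source=/dev/kmsg path=/dev/kmsg
done
```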

The “acid test” for a successful configuration, from an operational point of view for the kubelet, is that this command must succeed inside the container:

cat /dev/kmsg
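From the host, one way to script that check (a sketch; "maestro" is one of my containers, and timeout is used because /dev/kmsg is a stream that cat would otherwise read forever):

```shell
# Read the kernel log device inside the container for two seconds.
# A misconfigured container fails immediately with
# "cat: /dev/kmsg: Operation not permitted"; a working one prints log lines
# until timeout stops it (exit status 124 from timeout is expected).
lxc exec maestro -- timeout 2 cat /dev/kmsg
```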
