Kubernetes in LXD

I’m working on putting Kubernetes in LXD on Ubuntu 21.04 and have some LXD questions related to that.

When running:

kubeadm init --pod-network-cidr=10.244.0.0/16

It complains about btrfs as the Docker graph driver:

[ERROR SystemVerification]: unsupported graph driver: btrfs

So after researching others' solutions to this, it seemed ext4 would be an option, which is where my question arises. To get ext4 I used the "lvm" storage driver in LXD as shown below:

lxc storage create docker lvm
lxc storage volume create docker kmaster1
lxc config device add kmaster1 docker disk pool=docker source=kmaster1 path=/var/lib/docker

and the resulting filesystem created INSIDE the Kubernetes LXD container seems to be ext4:

root@k8s:~# df -TH | grep docker
/dev/docker/custom_default_kmaster1 ext4 9.8G 1.5G 7.9G 16% /var/lib/docker
root@k8s:~#
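For reference, the filesystem used for LVM-backed volumes can also be pinned explicitly in the pool or volume config. A minimal sketch, assuming LXD's lvm driver and that the volume.block.filesystem / block.filesystem keys behave as I expect; I did not end up needing this since the volume already came up as ext4:

# Make ext4 the default filesystem for all new volumes in the pool
lxc storage create docker lvm volume.block.filesystem=ext4

# Or set it per volume at creation time
lxc storage volume create docker kmaster1 block.filesystem=ext4

# Verify what LXD thinks the volume is using
lxc storage volume show docker kmaster1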

However, when "kubeadm init …" is run, it reports the graph driver as vfs rather than ext4:

[ERROR SystemVerification]: unsupported graph driver: vfs

So the question related to the above is:

Why is kubeadm, in its pre-flight check, reporting "vfs" when the filesystem actually "seems" to be ext4 based on the df -TH output?
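One thing I still need to check, so take this as a guess: the "graph driver" in the kubeadm pre-flight error is Docker's storage driver (vfs is its last-resort fallback), not the filesystem that /var/lib/docker sits on, so it is worth asking Docker itself what it picked:

# Which storage driver did Docker actually select?
docker info --format '{{.Driver}}'
# And which backing filesystem it detected under /var/lib/docker
docker info | grep -i 'backing filesystem'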

My other question related to this: in the preseed for my LXD cluster I thought I had correctly set 2 CPUs, but here again "kubeadm init …" complains that there is only one available CPU:

[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

My preseed is as follows. It works fine for creating the LXD cluster with no errors, but perhaps I have not used the right syntax for the CPU setting in the k8s profile (note: pasting the preseed into this forum mangles some of the formatting, but the preseed itself works fine).

config:
  cluster.https_address: 10.xxx.53.1:8443
  core.https_address: 10.xxx.53.1:8443
  core.trust_password: ubuntu
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
  project: default
storage_pools:
- config:
    source: olxc-001
    volatile.initial_source: olxc-001
    zfs.pool_name: olxc-001
  description: "o83sv1-olxc-001"
  name: local
  driver: zfs
profiles:
- config: {}
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: default
- config: {}
  description: Orabuntu-LXD OpenvSwitch Clone profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sw1a
- config: {}
  description: Orabuntu-LXD OpenvSwitch Seeds profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sx1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sx1a
- config:
    limits.cpu: "2"
    limits.memory: 2GB
    limits.memory.swap: "false"
    linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
    raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
    security.nesting: "true"
    security.privileged: "true"
  description: Kubernetes LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: k8s
projects:
- config: {}
  description: Default LXD project
  name: default
cluster:
  server_name: o83sv1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_password: ""
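For what it's worth, these are the sanity checks I am using to compare what the profile requests against what the container actually sees (nothing exotic, just for completeness):

# What the k8s profile requests
lxc profile show k8s | grep limits.cpu
# What is actually applied to the instance (expanded config includes profile values)
lxc config show kmaster1 --expanded | grep limits.cpu
# What the container itself sees
lxc exec kmaster1 -- nproc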

Then there is the other, apparently well-known, problem that kubelet does not "like" to run inside an LXD container. There are various recommended tweaks for this, but so far kubelet is not running despite several of them being applied.
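For anyone following along, I am inspecting the kubelet failures inside the container with nothing more than the usual systemd checks:

# Inside the LXD container: is kubelet crash-looping, and why?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -50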

I will just add that I'm revisiting different approaches to containerizing Kubernetes in LXD, including one using Juju and Charmed Kubernetes, so the floor is open for any general comments you might have on other approaches to this general task of putting Kubernetes in LXD containers.

My overall goal is to test out Project Antrea with a "suitable" LXD-containerized Kubernetes cluster. I've already ruled out microk8s after some in-depth testing and evaluation (Calico would have to be ripped out first, and Antrea did not play well with multus-calico), so the options next up are a conventional but tweaked manual setup as described above, the Juju Charmed Kubernetes approach, or this guide by Cornelius Weig (again a tweaked "conventional" install).

UPDATE 2021-10-14 18:59 CST: I'm taking another look at the "microk8s in LXD" guide. I think microk8s in LXD might be the best way to go, even if I have to deal with ripping out Calico etc.

UPDATE 2021-10-14 23:36 CST: MicroK8s in LXD worked exactly as advertised, and very nicely indeed, on an Ubuntu 20.04 host. However, on a RedHat-family host (Oracle Linux 8), the basic microk8s container will not start. For one thing, I would guess the AppArmor settings in the microk8s profile are not appropriate for an SELinux-based RedHat-family host. Anyway, the errors are as follows:

[ubuntu@o83sv1 ~]$ cat microk8s.profile | lxc profile edit microk8s
[ubuntu@o83sv1 ~]$ lxc launch -p default -p microk8s ubuntu:21.04 microk8s
Creating microk8s
Starting microk8s
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart microk8s /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/microk8s/lxc.conf:
Try lxc info --show-log local:microk8s for more info
[ubuntu@o83sv1 ~]$ lxc info --show-log local:microk8s
Name: microk8s
Status: STOPPED
Type: container
Architecture: x86_64
Location: o83sv1
Created: 2021/10/14 22:53 CDT
Last Used: 2021/10/14 22:53 CDT

Log:

lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.408 ERROR utils - utils.c:mkdir_p:234 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:mount_entry_create_dir_file:2428 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/parameters/enabled”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:lxc_setup:4104 - Failed to setup mount entries
lxc microk8s 20211015035331.408 ERROR start - start.c:do_start:1291 - Failed to setup container “microk8s”
lxc microk8s 20211015035331.410 ERROR sync - sync.c:sync_wait:36 - An error occurred in another process (expected sequence number 3)
lxc microk8s 20211015035331.425 WARN network - network.c:lxc_delete_network_priv:3622 - Failed to rename interface with index 0 from “eth0” to its initial name “veth931ca79c”
lxc microk8s 20211015035331.425 ERROR start - start.c:__lxc_start:2053 - Failed to spawn container “microk8s”
lxc microk8s 20211015035331.425 WARN start - start.c:lxc_abort:1050 - No such process - Failed to send SIGKILL via pidfd 42 for process 239895
lxc microk8s 20211015035331.425 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state “ABORTING” instead of “RUNNING”
lxc 20211015035336.529 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:220 - Connection reset by peer - Failed to receive response
lxc 20211015035336.529 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:129 - Failed to receive file descriptors

[ubuntu@o83sv1 ~]$
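My (unverified) reading of the log is that the microk8s profile asks for a bind mount of /sys/module/apparmor/parameters/enabled, which cannot be satisfied on a host that has no AppArmor at all; a quick check on the Oracle Linux host seems consistent with that:

# On the Oracle Linux 8 host: AppArmor absent, SELinux present
ls /sys/module/apparmor 2>/dev/null || echo "no apparmor on this host"
getenforce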

TIA

Gilbert

Just an update: the 2-CPU problem is no longer an issue, so that seems solved. The syntax in my preseed is correct, so I think the preseed is fine as is.

Now I think it's mainly just a matter of getting kubelet to run inside LXD.

Then there is the secondary issue of:

[ERROR SystemVerification]: unsupported graph driver: vfs

Maybe this link helps? Not sure about the ext4, but at least it seems it can work: A step by step demo on Kubernetes cluster creation | by Asish M Madhu | Geek Culture | Medium

@zekrioca Great suggestion! Thank you! This guide you have referenced by Asish is one of the better guides available on the net, and it is the guide I have used extensively to get to where I am now! :slight_smile:

The ext4 problem might be because the "physical" Kubernetes directory has to be ext4, and not ext4 over a vfs, meaning you may need to mount the directory directly from your host. Does that make sense?

It does make sense. That was actually what I tried first: a LUN partitioned and formatted with ext4 and then presented to the container, which is discussed comprehensively here and here.

So now I am testing the solution described here in those same threads by Stephane:

lxc storage create docker dir
lxc storage volume create docker my-container
lxc config device add my-container docker disk pool=docker source=my-container path=/var/lib/docker

Using the dir storage driver is also what Asish used.
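Since the dir driver just places the volume as a directory on the host's filesystem, the quick verification I am doing is simply this (a sketch, using the names above):

# Inside the container: what backs /var/lib/docker now?
df -TH /var/lib/docker
# On the host: where the dir pool actually lives
lxc storage show docker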

Have you considered using an LXD VM rather than a container for this?

The reason I ask is, as well as getting kubernetes working, from what I can tell Project Antrea uses openvswitch, and I’m not sure that works inside a network namespace.


That is an excellent possible solution @tomp which I will definitely look into. Thank you!
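In the meantime, a quick sanity check I can run to see whether the openvswitch kernel module is even available to a container (this is only a sketch and does not prove Antrea will work inside a network namespace):

# On the host: load the module and confirm it is present
sudo modprobe openvswitch
lsmod | grep openvswitch

# Ask LXD to expose it to the k8s containers by extending the profile's module list
lxc profile set k8s linux.kernel_modules ip_tables,ip6_tables,nf_nat,overlay,br_netfilter,openvswitch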

You should be able to just add the --vm flag to your instance launches, e.g.

lxc launch ... --vm

@tomp I got the following message when trying to create an Ubuntu 21.04 LXD VM on Oracle Linux 8 (part of the mission of Orabuntu-LXC is to bring LXC and LXD to RedHat-family Linuxes). Most things "just work" on Debian-family Ubuntu Linux, but the challenges arise over here in RedHat-family land…

[ubuntu@o83sv1 ~]$ lxc launch -p olxc_sw1a ubuntu:21.04 kubern1 --vm
Creating kubern1
Error: Failed instance creation: Failed creating instance record: Instance type “virtual-machine” is not supported on this server
[ubuntu@o83sv1 ~]$

Is the LXD --vm option a Debian-family-only thing, or should it work on RedHat-family hosts as well?

I should mention that o83sv1 is itself a VirtualBox VM. However, the --vm option worked fine on a similar Ubuntu Focal VirtualBox VM, so I'm thinking that's not the issue?

Also, I tried with "-p default" instead of my custom "olxc_sw1a" profile, just in case it didn't like the custom profile, and that did not work either.
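My working theory (not confirmed) is that this is about nested virtualization rather than the distro family, since LXD VMs need KVM. The checks I am running on the Oracle Linux guest and on the VirtualBox side are roughly as follows (the VM name is assumed to match the hostname):

# Inside the Oracle Linux 8 guest: is KVM usable at all?
ls -l /dev/kvm
egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no virtualization extensions exposed

# On the VirtualBox host (6.0+), nested VT-x/AMD-V must be enabled per VM
VBoxManage modifyvm "o83sv1" --nested-hw-virt on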

So many good ideas suggested, thank you! Some dev test work ahead!

Btw, how did you fix the CPU = 1 problem?

This project worked for me very recently, and it uses containers, not VMs. It only works locally out of the box, but the configuration and scripts here could be adapted to your use case.


IF I created the k8s profile separately on the Ubuntu 18.04 host, not using the preseed (I haven't had a chance to test the preseed on an 18.04 host yet):

AND used Ubuntu 18.04 as my LXD host server
AND used Ubuntu 20.04 for my LXD containers
AND used containerd instead of docker

THEN the k8s cluster builds and runs successfully, reproducing the results from this author as discussed here and here.
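Since this run used containerd rather than Docker, the only kubeadm-side difference, as I understand it, is pointing init at the containerd socket (kubeadm normally auto-detects the runtime when only one CRI socket is present, so this may not even be needed; the socket path below is assumed to be containerd's default):

kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /run/containerd/containerd.sock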

The failure of the k8s cluster on the Ubuntu 20.04 LXD host was apparently due to the handling of /dev/kmsg in the LXD containers. On 18.04 an inelegant but effective command-line hack was used in bootstrap-kube.sh to address the /dev/kmsg issue:

# Hack required to provision K8s v1.15+ in LXC containers
mknod /dev/kmsg c 1 11
echo 'mknod /dev/kmsg c 1 11' >> /etc/rc.local
chmod +x /etc/rc.local

However, in Ubuntu 20.04 this hack no longer works. It turns out there are a couple of fairly recent pull requests sitting here that have not yet been merged, and this pull request fixes the /dev/kmsg issue that was breaking the k8s build on Ubuntu 20.04 Focal Fossa. The k8s LXD profile that includes the /dev/kmsg fix is shown below.

config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw
    sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file"
  security.privileged: "true"
  security.nesting: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by: []
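For completeness, I apply this profile the same way as the microk8s profile earlier in the thread (k8s.profile is just whatever file the YAML above was saved as):

lxc profile create k8s
cat k8s.profile | lxc profile edit k8s
lxc launch -p k8s ubuntu:20.04 kmaster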

Also bootstrap-kube.sh needed some package installs added, so I added this as [TASK 0] in bootstrap-kube.sh and filed a pull request.

Both of these fixes have been applied to my fork.

echo "[TASK 0] Install packages and update"
apt-get install -y -qq apt-transport-https ca-certificates curl gnupg lsb-release openssh-server net-tools software-properties-common >/dev/null 2>&1
apt-get update -qq

Result:

root@kmaster:~# kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-7sxd2          1/1     Running   0          7m41s
kube-system   coredns-78fcd69978-csshk          1/1     Running   0          7m41s
kube-system   etcd-kmaster                      1/1     Running   0          7m50s
kube-system   kube-apiserver-kmaster            1/1     Running   0          7m56s
kube-system   kube-controller-manager-kmaster   1/1     Running   0          7m57s
kube-system   kube-flannel-ds-cjdkj             1/1     Running   0          3m59s
kube-system   kube-flannel-ds-kbm2b             1/1     Running   0          7m41s
kube-system   kube-flannel-ds-lf85k             1/1     Running   0          52s
kube-system   kube-proxy-55x4l                  1/1     Running   0          52s
kube-system   kube-proxy-67rwr                  1/1     Running   0          7m41s
kube-system   kube-proxy-qcs87                  1/1     Running   0          3m59s
kube-system   kube-scheduler-kmaster            1/1     Running   0          7m55s

And then testing NodePort for a deployment:

root@kmaster:~# kubectl create deploy nginx --image nginx
deployment.apps/nginx created
root@kmaster:~# kubectl expose deploy nginx --port 80 --type NodePort
service/nginx exposed
root@kmaster:~# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        8m45s
nginx        NodePort    10.98.99.181   <none>        80:30902/TCP   11s
root@kmaster:~# curl -I 10.228.91.177:30902
HTTP/1.1 200 OK
Server: nginx/1.21.3
Date: Tue, 19 Oct 2021 13:42:34 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 07 Sep 2021 15:21:03 GMT
Connection: keep-alive
ETag: "6137835f-267"
Accept-Ranges: bytes

And then a check with curl against a different node's IP:

root@kmaster:~# curl -I 10.228.91.92:30902
HTTP/1.1 200 OK
Server: nginx/1.21.3
Date: Tue, 19 Oct 2021 13:42:49 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 07 Sep 2021 15:21:03 GMT
Connection: keep-alive
ETag: "6137835f-267"
Accept-Ranges: bytes

root@kmaster:~# 

AND

ubuntu@u1804sv1:~$ lxc list
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |          IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kmaster  | RUNNING | 10.244.0.1 (cni0)      | fd42:29eb:d2d1:3986:216:3eff:fe73:9666 (eth0) | PERSISTENT | 0         |
|          |         | 10.244.0.0 (flannel.1) |                                               |            |           |
|          |         | 10.228.91.177 (eth0)   |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker1 | RUNNING | 10.244.1.1 (cni0)      | fd42:29eb:d2d1:3986:216:3eff:fe7f:7b7b (eth0) | PERSISTENT | 0         |
|          |         | 10.244.1.0 (flannel.1) |                                               |            |           |
|          |         | 10.228.91.35 (eth0)    |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker2 | RUNNING | 10.244.2.0 (flannel.1) | fd42:29eb:d2d1:3986:216:3eff:fe61:8b41 (eth0) | PERSISTENT | 0         |
|          |         | 10.228.91.92 (eth0)    |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
ubuntu@u1804sv1:~$

Thank you @dontlaugh, I will have a look at this directly.

UPDATE: The k8s cluster builds fine in Ubuntu 21.04 Hirsute LXD containers as well.
Below are both the 20.04 k8s cluster and the 21.04 k8s cluster running on an Ubuntu 18.04 host.

kmaster2/kworker3/kworker4 (Hirsute LXD Containerized k8s cluster)
kmaster/kworker1/kworker2 (Focal LXD Containerized k8s cluster)

ubuntu@u1804sv1:~$ lxc list
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |          IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kmaster  | RUNNING | 10.244.0.1 (cni0)      | fd42:29eb:d2d1:3986:216:3eff:fe73:9666 (eth0) | PERSISTENT | 0         |
|          |         | 10.244.0.0 (flannel.1) |                                               |            |           |
|          |         | 10.228.91.177 (eth0)   |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kmaster2 | RUNNING | 10.244.0.1 (cni0)      | fd42:29eb:d2d1:3986:216:3eff:febb:def5 (eth0) | PERSISTENT | 0         |
|          |         | 10.244.0.0 (flannel.1) |                                               |            |           |
|          |         | 10.228.91.251 (eth0)   |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker1 | RUNNING | 10.244.1.1 (cni0)      | fd42:29eb:d2d1:3986:216:3eff:fe7f:7b7b (eth0) | PERSISTENT | 0         |
|          |         | 10.244.1.0 (flannel.1) |                                               |            |           |
|          |         | 10.228.91.35 (eth0)    |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker2 | RUNNING | 10.244.2.0 (flannel.1) | fd42:29eb:d2d1:3986:216:3eff:fe61:8b41 (eth0) | PERSISTENT | 0         |
|          |         | 10.228.91.92 (eth0)    |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker3 | RUNNING | 10.244.1.0 (flannel.1) | fd42:29eb:d2d1:3986:216:3eff:fe7b:13c9 (eth0) | PERSISTENT | 0         |
|          |         | 10.228.91.43 (eth0)    |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
| kworker4 | RUNNING | 10.244.2.0 (flannel.1) | fd42:29eb:d2d1:3986:216:3eff:fe96:bde (eth0)  | PERSISTENT | 0         |
|          |         | 10.228.91.143 (eth0)   |                                               |            |           |
+----------+---------+------------------------+-----------------------------------------------+------------+-----------+
ubuntu@u1804sv1:~$ lxc exec kmaster2 bash
root@kmaster2:~# kubectl get -A pods
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-6mnkq           1/1     Running   0          24m
kube-system   coredns-78fcd69978-v5m5s           1/1     Running   0          24m
kube-system   etcd-kmaster2                      1/1     Running   0          24m
kube-system   kube-apiserver-kmaster2            1/1     Running   0          24m
kube-system   kube-controller-manager-kmaster2   1/1     Running   0          24m
kube-system   kube-flannel-ds-cf4cw              1/1     Running   0          15m
kube-system   kube-flannel-ds-sxk6h              1/1     Running   0          11m
kube-system   kube-flannel-ds-z4qzp              1/1     Running   0          24m
kube-system   kube-proxy-f7mg4                   1/1     Running   0          11m
kube-system   kube-proxy-lgn7s                   1/1     Running   0          15m
kube-system   kube-proxy-xssvg                   1/1     Running   0          24m
kube-system   kube-scheduler-kmaster2            1/1     Running   0          24m
root@kmaster2:~# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=21.04
DISTRIB_CODENAME=hirsute
DISTRIB_DESCRIPTION="Ubuntu 21.04"
root@kmaster2:~#