Kubernetes in LXD

I’m working on putting Kubernetes in LXD on Ubuntu 21.04 and have some LXD questions related to that.

When running:

kubeadm init --pod-network-cidr=

It complains about btrfs as the Docker graph driver:

[ERROR SystemVerification]: unsupported graph driver: btrfs

So after researching workarounds others have found for this, it seemed ext4 would be an option, which is where my question arises. To get ext4 I used the “lvm” storage driver in LXD as shown below:

lxc storage create docker lvm
lxc storage volume create docker kmaster1
lxc config device add kmaster1 docker disk pool=docker source=kmaster1 path=/var/lib/docker

and the resulting filesystem created INSIDE the Kubernetes LXD container seems to be ext4:

root@k8s:~# df -TH | grep docker
/dev/docker/custom_default_kmaster1 ext4 9.8G 1.5G 7.9G 16% /var/lib/docker

However, when “kubeadm init …” is run, it reports the graph driver as vfs rather than ext4:

[ERROR SystemVerification]: unsupported graph driver: vfs

So the question related to the above is:

Why is kubeadm’s pre-flight check reporting “vfs” when the filesystem “seems” to be “ext4” based on the df -TH output?
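One thing worth checking (a sketch, assuming the docker CLI is available inside the container): kubeadm’s SystemVerification pre-flight reads the storage driver that Docker itself reports, not the filesystem type from df, so if dockerd fell back to vfs, the check will say vfs regardless of the ext4 mount underneath:

```shell
# Ask Docker which storage (graph) driver it actually selected; kubeadm's
# pre-flight reports this value, not the df -TH filesystem type.
docker info 2>/dev/null | grep -i 'storage driver'
```

If this prints “vfs”, dockerd decided at startup that it could not use overlay2 (or another preferred driver) on /var/lib/docker and fell back to vfs.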

My other question related to this: in the preseed for my LXD cluster I “thought” I had correctly set 2 CPUs, but here again “kubeadm init …” complains that only one CPU is available:

[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

My preseed is as follows, and it works fine for creating the LXD cluster with no errors, but perhaps I have not used the right syntax for the CPU setting in the k8s profile (note: pasting the preseed into this forum destroys some of its formatting, but the preseed itself is working fine).

config:
  cluster.https_address: 10.xxx.53.1:8443
  core.https_address: 10.xxx.53.1:8443
  core.trust_password: ubuntu
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
  project: default
storage_pools:
- config:
    source: olxc-001
    volatile.initial_source: olxc-001
    zfs.pool_name: olxc-001
  description: o83sv1-olxc-001
  name: local
  driver: zfs
profiles:
- config: {}
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: default
- config: {}
  description: Orabuntu-LXD OpenvSwitch Clone profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sw1a
- config: {}
  description: Orabuntu-LXD OpenvSwitch Seeds profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sx1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sx1a
- config:
    limits.cpu: "2"
    limits.memory: 2GB
    limits.memory.swap: "false"
    linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
    raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop=\nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw"
    security.nesting: "true"
    security.privileged: "true"
  description: Kubernetes LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: k8s
projects:
- config: {}
  description: Default LXD project
  name: default
cluster:
  server_name: o83sv1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_password: ""
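For the CPU question, a quick cross-check (a sketch; the k8s profile and the kmaster1 instance names are taken from the setup above) is to compare what the profile requests with what the container actually sees:

```shell
# What the profile requests:
lxc profile get k8s limits.cpu
# What the container actually sees (this is what kubeadm's NumCPU check counts):
lxc exec kmaster1 -- nproc
```

If nproc reports 1 while the profile says 2, the profile either was not applied to the instance or was applied after the instance started.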

Then the other issue is the apparently well-known problem that kubelet does not “like” to run inside an LXD container. There are various recommended tweaks for this, but so far kubelet is not running despite several of them being applied.
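For reference, the tweak most k8s-in-LXD guides converge on (just one of the commonly suggested fixes; the instance name kmaster1 is assumed from above) is giving the container a /dev/kmsg, which kubelet insists on opening:

```shell
# kubelet wants to read /dev/kmsg, which LXD containers lack by default;
# bind the host's device into the instance as a unix-char device.
lxc config device add kmaster1 kmsg unix-char source=/dev/kmsg path=/dev/kmsg
lxc restart kmaster1
```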

I will just add that I’m revisiting different approaches to containerizing Kubernetes in LXD, including one using Juju and Charmed Kubernetes, so the floor is open for any general comments you might have on other approaches to this general task of putting Kubernetes in LXD containers.

My overall goal is to test out Project Antrea with a “suitable” LXD-containerized Kubernetes cluster. I’ve already ruled out microk8s after some in-depth testing and evaluation (Calico would have to be ripped out first, and Antrea did not play well with multus-calico), so the options that seem to be next up are a somewhat conventional but tweaked manual setup as described above, the Juju Charmed Kubernetes approach, or this guide by Cornelius Weig (again a tweaked “conventional” install).

UPDATE 2021-10-14 18:59 CST: I’m taking another look at the “microk8s in LXD” guide. I think MicroK8s in LXD might be the best way to go, even if I have to deal with ripping out Calico etc.

UPDATE 2021-10-14 23:36 CST: MicroK8s in LXD worked exactly as advertised, and very nicely indeed, on an Ubuntu 20.04 host. However, on the Red Hat-family host Oracle Linux 8, the basic microk8s container will not start. I would guess the AppArmor configs in the microk8s profile are not appropriate for the SELinux-based Red Hat family, for one thing. Anyway, the errors are as follows:

[ubuntu@o83sv1 ~]$ cat microk8s.profile | lxc profile edit microk8s
[ubuntu@o83sv1 ~]$ lxc launch -p default -p microk8s ubuntu:21.04 microk8s
Creating microk8s
Starting microk8s
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart microk8s /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/microk8s/lxc.conf:
Try lxc info --show-log local:microk8s for more info
[ubuntu@o83sv1 ~]$ lxc info --show-log local:microk8s
Name: microk8s
Type: container
Architecture: x86_64
Location: o83sv1
Created: 2021/10/14 22:53 CDT
Last Used: 2021/10/14 22:53 CDT


lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.408 ERROR utils - utils.c:mkdir_p:234 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:mount_entry_create_dir_file:2428 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/parameters/enabled”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:lxc_setup:4104 - Failed to setup mount entries
lxc microk8s 20211015035331.408 ERROR start - start.c:do_start:1291 - Failed to setup container “microk8s”
lxc microk8s 20211015035331.410 ERROR sync - sync.c:sync_wait:36 - An error occurred in another process (expected sequence number 3)
lxc microk8s 20211015035331.425 WARN network - network.c:lxc_delete_network_priv:3622 - Failed to rename interface with index 0 from “eth0” to its initial name “veth931ca79c”
lxc microk8s 20211015035331.425 ERROR start - start.c:__lxc_start:2053 - Failed to spawn container “microk8s”
lxc microk8s 20211015035331.425 WARN start - start.c:lxc_abort:1050 - No such process - Failed to send SIGKILL via pidfd 42 for process 239895
lxc microk8s 20211015035331.425 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state “ABORTING” instead of “RUNNING”
lxc 20211015035336.529 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:220 - Connection reset by peer - Failed to receive response
lxc 20211015035336.529 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:129 - Failed to receive file descriptors

[ubuntu@o83sv1 ~]$



Just an update: the 2-CPU problem is no longer an issue, so that seems solved. The syntax in my preseed was correct as is.

Now I think it is mainly a matter of getting kubelet to run inside LXD.

Then there is the secondary issue of the:

[ERROR SystemVerification]: unsupported graph driver: vfs

Maybe this link helps? Not sure about the ext4, but at least it seems it can work: A step by step demo on Kubernetes cluster creation | by Asish M Madhu | Geek Culture | Medium

@zekrioca Great suggestion! Thank you! This guide you have referenced by Asish is one of the better guides available on the net, and it is the guide that I have used extensively to get to where I am now! :slight_smile:

The ext4 problem might be because the “physical” Kubernetes directory has to be ext4, and not ext4 over a vfs, meaning you may need to mount the directory directly from your host. Does that make sense?

It does make sense. That was actually what I tried first: LUN partitioned and formatted with ext4 and then presented to the container which is discussed comprehensively here and here.

So now I am testing the solution described here in those same threads by Stephane:

lxc storage create docker dir
lxc storage volume create docker my-container
lxc config device add my-container docker disk pool=docker source=my-container path=/var/lib/docker

Using a dir-type pool is also what Asish used.

Have you considered using a LXD VM rather than a container for this?

The reason I ask is that, as well as getting Kubernetes working, from what I can tell Project Antrea uses Open vSwitch, and I’m not sure that works inside a network namespace.

That is an excellent possible solution @tomp which I will definitely look into. Thank you!

You should be able to just add the --vm flag to your instance launches, e.g.

lxc launch ... --vm

@tomp I got the following message when trying to create an Ubuntu 21.04 LXD VM on Oracle Linux 8 (part of the mission of Orabuntu-LXC is to bring LXC and LXD to Red Hat-family Linuxes). Most things “just work” on Debian-family Ubuntu Linux, but the challenges arise over here in Red Hat-family land…

[ubuntu@o83sv1 ~]$ lxc launch -p olxc_sw1a ubuntu:21.04 kubern1 --vm
Creating kubern1
Error: Failed instance creation: Failed creating instance record: Instance type “virtual-machine” is not supported on this server
[ubuntu@o83sv1 ~]$

Is the LXD --vm option a Debian-family-only thing, or should it work on the Red Hat family as well?

I should mention that o83sv1 is itself a VirtualBox VM. However, the --vm option worked fine on a similar Ubuntu Focal VirtualBox VM, so I’m thinking that’s not the issue?

Also I tried with “-p default” instead of my custom “olxc_sw1a” profile, just in case it didn’t like the custom profile, and that did not work either.
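A guess at a diagnostic (a sketch under the assumption that the Intel KVM module is in use; substitute kvm_amd on AMD hardware): LXD virtual machines need working KVM on the host, so on the Oracle Linux 8 VirtualBox guest it may be worth checking whether /dev/kvm exists and nested virtualization is enabled:

```shell
# LXD VMs require /dev/kvm on the host; inside a VirtualBox guest this means
# nested virtualization must be enabled for that guest.
ls -l /dev/kvm
# Nested-virt flag for the Intel KVM module (Y or 1 means enabled):
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null
```

If /dev/kvm is missing, LXD will refuse to create “virtual-machine” instances regardless of the distro family.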

So many good ideas suggested, thank you ! Some dev test work ahead!

Btw, how did you fix the CPU = 1 problem?

This project worked for me very recently. And it uses containers, not VMs. It only works locally out of the box, but the configuration and scripts could be adapted to your use case.