I’m working on running Kubernetes in LXD containers on Ubuntu 21.04 and have some LXD questions related to that.
When running:
kubeadm init --pod-network-cidr=10.244.0.0/16
it complains about btrfs as the Docker graph driver:
[ERROR SystemVerification]: unsupported graph driver: btrfs
After researching how others have worked around this, ext4 seemed like an option, which is where my question arises. To get ext4 I created an LVM storage pool in LXD as shown below:
lxc storage create docker lvm
lxc storage volume create docker kmaster1
lxc config device add kmaster1 docker disk pool=docker source=kmaster1 path=/var/lib/docker
and the resulting filesystem created INSIDE the Kubernetes LXD container seems to be ext4:
root@k8s:~# df -TH | grep docker
/dev/docker/custom_default_kmaster1 ext4 9.8G 1.5G 7.9G 16% /var/lib/docker
root@k8s:~#
However, when “kubeadm init …” is run, it apparently reports vfs rather than ext4:
[ERROR SystemVerification]: unsupported graph driver: vfs
So my question on the above is: why is kubeadm, in its pre-flight check, reporting “vfs” when the filesystem “seems” to be ext4 based on the df -TH output?
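One thing I came across while researching, which may explain it: kubeadm’s SystemVerification check apparently reads the storage driver name from docker info, not the filesystem type from df, and dockerd falls back to the vfs driver when it decides it cannot use overlay2 (or another supported driver) on the backing filesystem. A sketch of the check and a possible fix, assuming the standard /etc/docker/daemon.json config path and a fresh /var/lib/docker (existing images become invisible after a driver change):

```shell
# What kubeadm actually looks at -- the Docker storage driver, not the fs type:
docker info --format '{{.Driver}}'

# Sketch: pin the driver to overlay2 on the new ext4-backed /var/lib/docker.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
systemctl restart docker
docker info --format '{{.Driver}}'
```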
My other question related to this: in the preseed for my LXD cluster I “thought” I had correctly set 2 CPUs, but here again “kubeadm init …” complains that only one CPU is available:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
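In case it helps with debugging, here is how I understand the check, assuming the container is named kmaster1: kubeadm counts the CPUs actually visible inside the container (what nproc reports), so the profile limit has to reach the container for the check to pass:

```shell
# How many CPUs does the container actually see? This is what kubeadm counts.
lxc exec kmaster1 -- nproc

# Did the limit from the profile reach the container's expanded config?
lxc config show kmaster1 --expanded | grep limits.cpu

# Workaround: set the limit directly on the container and re-check.
lxc config set kmaster1 limits.cpu 2
lxc exec kmaster1 -- nproc
```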
My preseed is as follows, and it creates the LXD cluster with no errors, but perhaps I have not used the right syntax for the CPU setting in the k8s profile (note: pasting the preseed into this forum destroys some of its formatting, but the preseed itself works fine).
config:
  cluster.https_address: 10.xxx.53.1:8443
  core.https_address: 10.xxx.53.1:8443
  core.trust_password: ubuntu
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
  project: default
storage_pools:
- config:
    source: olxc-001
    volatile.initial_source: olxc-001
    zfs.pool_name: olxc-001
  description: "o83sv1-olxc-001"
  name: local
  driver: zfs
profiles:
- config: {}
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: default
- config: {}
  description: Orabuntu-LXD OpenvSwitch Clone profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sw1a
- config: {}
  description: Orabuntu-LXD OpenvSwitch Seeds profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sx1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: olxc_sx1a
- config:
    limits.cpu: "2"
    limits.memory: 2GB
    limits.memory.swap: "false"
    linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
    raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
    security.nesting: "true"
    security.privileged: "true"
  description: Kubernetes LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: sw1a
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: k8s
projects:
- config: {}
  description: Default LXD project
  name: default
cluster:
  server_name: o83sv1
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: ""
  cluster_password: ""
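One thing I intend to double-check, since pasting (and indentation generally) is easy to get wrong here: I believe limits.cpu only takes effect if it sits under the profile’s config: key, not at the profile’s top level. A quick verification, assuming the profile is named k8s and the container kmaster1:

```shell
# Show the k8s profile as LXD actually stored it; limits.cpu should appear
# under the config: section.
lxc profile show k8s

# Confirm a container using the profile actually inherited the limit.
lxc config show kmaster1 --expanded | grep limits.cpu
```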
Then there is the other issue: the apparently well-known problem that kubelet does not “like” to run inside an LXD container. There are various recommended tweaks for this, but so far kubelet is not running despite several tweaks being applied.
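For reference, the tweaks I have been trying are the ones commonly suggested in the guides (the paths and flags below are those guides’ suggestions, not something I can vouch for yet):

```shell
# kubelet wants /dev/kmsg, which does not exist inside the container;
# the usual workaround is to symlink it to the console device.
lxc exec kmaster1 -- ln -sf /dev/console /dev/kmsg

# Then skip the pre-flight checks that cannot pass inside a container:
kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=SystemVerification,Swap,NumCPU
```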
I will just add that I’m revisiting different approaches to containerizing Kubernetes in LXD, and am also trying out this one using Juju and Charmed Kubernetes, so the floor is open for any general comments you might have on other approaches to this general task of putting Kubernetes in LXD containers.
My overall goal is to test out Project Antrea with a “suitable” LXD-containerized Kubernetes cluster. I’ve already ruled out microk8s after some in-depth testing and evaluation (Calico would have to be ripped out first, and Antrea did not play well with multus-calico), so the options that seem to be next up are: a conventional-but-tweaked manual setup as described above, the Juju Charmed Kubernetes approach, or this guide by Cornelius Weig (again, a tweaked “conventional” install).
UPDATE 2021-10-14 18:59 CST : I’m taking another look at the “microk8s in LXD” guide. I think microk8s in LXD might be the best way to go, even if I have to deal with ripping out calico etc.
UPDATE 2021-10-14 23:36 CST: MicroK8s in LXD worked exactly as advertised, and very nicely indeed, on an Ubuntu 20.04 host. However, on a Red Hat-family host (Oracle Linux 8), the basic microk8s container will not start. I would guess that, for one thing, the AppArmor configs in the microk8s profile are not going to be appropriate for the SELinux-based Red Hat family. Anyway, the errors are as follows:
[ubuntu@o83sv1 ~]$ cat microk8s.profile | lxc profile edit microk8s
[ubuntu@o83sv1 ~]$ lxc launch -p default -p microk8s ubuntu:21.04 microk8s
Creating microk8s
Starting microk8s
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart microk8s /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/microk8s/lxc.conf:
Try lxc info --show-log local:microk8s for more info
[ubuntu@o83sv1 ~]$ lxc info --show-log local:microk8s
Name: microk8s
Status: STOPPED
Type: container
Architecture: x86_64
Location: o83sv1
Created: 2021/10/14 22:53 CDT
Last Used: 2021/10/14 22:53 CDT
Log:
lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.406 ERROR conf - conf.c:turn_into_dependent_mounts:3724 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc microk8s 20211015035331.408 ERROR utils - utils.c:mkdir_p:234 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:mount_entry_create_dir_file:2428 - Operation not permitted - Failed to create directory “/var/snap/lxd/common/lxc//sys/module/apparmor/parameters/enabled”
lxc microk8s 20211015035331.408 ERROR conf - conf.c:lxc_setup:4104 - Failed to setup mount entries
lxc microk8s 20211015035331.408 ERROR start - start.c:do_start:1291 - Failed to setup container “microk8s”
lxc microk8s 20211015035331.410 ERROR sync - sync.c:sync_wait:36 - An error occurred in another process (expected sequence number 3)
lxc microk8s 20211015035331.425 WARN network - network.c:lxc_delete_network_priv:3622 - Failed to rename interface with index 0 from “eth0” to its initial name “veth931ca79c”
lxc microk8s 20211015035331.425 ERROR start - start.c:__lxc_start:2053 - Failed to spawn container “microk8s”
lxc microk8s 20211015035331.425 WARN start - start.c:lxc_abort:1050 - No such process - Failed to send SIGKILL via pidfd 42 for process 239895
lxc microk8s 20211015035331.425 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state “ABORTING” instead of “RUNNING”
lxc 20211015035336.529 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:220 - Connection reset by peer - Failed to receive response
lxc 20211015035336.529 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:129 - Failed to receive file descriptors
[ubuntu@o83sv1 ~]$
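Given the /sys/module/apparmor mount failures above, my working theory is that the microk8s profile carries mount entries for AppArmor paths that simply do not exist on an SELinux host. What I plan to try next, assuming the profile is named microk8s as created above:

```shell
# Find any AppArmor-related entries in the profile...
lxc profile show microk8s | grep -in apparmor

# ...then remove the lxc.mount.entry lines referencing /sys/module/apparmor
# in the editor, and retry the launch.
lxc profile edit microk8s
lxc start microk8s
```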
TIA
Gilbert