Error while creating a Kubernetes master node in an LXC container

I’m setting up a Kubernetes cluster using LXC containers. While I was configuring the master node with kubeadm init, it showed the following error:

kubeadm init --apiserver-advertise-address=10.102.126.160 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-43-generic
DOCKER_VERSION: 18.06.1-ce
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-43-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-43-generic\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

Can anyone help me solve this issue?

Not very familiar with Kubernetes, but in this case it looks like it’s running into issues because of some missing kernel modules; you should make sure those modules are loaded before your container is started.

That can be done by hand with modprobe, or you can set the linux.kernel_modules config option on the container with a comma-separated list of modules to load, as sketched below.
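
For example (the container name is a placeholder; the module list just mirrors the one in the config further down, with br_netfilter added since that’s what provides the /proc/sys/net/bridge/bridge-nf-call-iptables file from the first preflight error):

# On the host, load the modules right away:
modprobe -a br_netfilter openvswitch nbd ip_tables ip6_tables netlink_diag nf_nat overlay

# Or have LXD load them every time the container starts:
lxc config set <container> linux.kernel_modules br_netfilter,openvswitch,nbd,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay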

Kubernetes also looks unhappy about a swap file? That would need to be disabled system-wide, unfortunately, though I’m not sure it’s actually that hard a requirement.

Yeah, swapoff -a is needed.
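
Something along these lines on the host should do it (the sed line is just one way of keeping swap off across reboots, by commenting out the swap entries in fstab):

# Turn off all swap immediately:
swapoff -a

# Keep it off after a reboot by commenting out swap lines in /etc/fstab:
sed -i '/ swap / s/^/#/' /etc/fstab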

You might also need to run the container in privileged mode.
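
If you go down that road, these are the LXD keys involved (the same ones that appear in the config below):

lxc config set <container> security.privileged true
lxc config set <container> security.nesting true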

Docker won’t run properly on a ZFS-backed container, so make sure it’s using dir storage.
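
One way to do that for a fresh container (pool and container names are just examples):

# Create a dir-backed storage pool and launch the container on it:
lxc storage create kubepool dir
lxc launch ubuntu:16.04 kube-master -s kubepool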

There are many things to sort out to get Kubernetes working in LXD.

Also make sure you pass --ignore-preflight-errors to kubeadm.
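
For example, to skip exactly the checks that failed in the output above (or use --ignore-preflight-errors=all to skip everything, at your own risk):

kubeadm init --apiserver-advertise-address=10.102.126.160 --pod-network-cidr=192.168.0.0/16 \
  --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,Swap,SystemVerification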

cheers,
Jon.

Here is a config file from an LXD container that was part of a kube cluster; it’s probably very insecure, as I had to hack about a lot to get it working!

### This is a yaml representation of the configuration.
### Any line starting with a '# will be ignored.
###
### A sample configuration looks like:
### name: container1
### profiles:
### - default
### config:
###   volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f
### devices:
###   homedir:
###     path: /extra
###     source: /home/user
###     type: disk
### ephemeral: false
###
### Note that the name is shown but cannot be changed
 
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20181004)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20181004"
  image.version: "16.04"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cap.drop=
    lxc.cgroup.devices.allow=a
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: c966933fdfd390d301fed3447528e2f910bf72c0615b2caaf3235a791fed3541
  volatile.eth0.hwaddr: 00:16:3e:46:5f:f7
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.lxdbr1.hwaddr: 00:16:3e:9d:64:5a
  volatile.lxdbr1.name: eth1
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  mem:
    path: /dev/mem
    type: unix-char