Running Rancher 2 or Kubernetes in an unprivileged LXC container

Back to the eternal topic: Running Docker in LXC…

So far I have managed to get Docker running in an unprivileged LXC container by using the following container config:

# Unprivileged container uid and gid mapping
lxc.include = /usr/share/lxc/config/userns.conf
lxc.id_map = u 0 1000000 65536
lxc.id_map = g 0 1000000 65536

# Adjust for Docker inside LXC
lxc.aa_profile = unconfined
lxc.cap.drop =
lxc.cap.drop = sys_time sys_module sys_rawio
lxc.mount.auto = proc:rw sys:rw cgroup
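The two lxc.id_map lines shift ids by a fixed offset: container id N shows up on the host as 1000000 + N, and only 65536 ids are mapped in total. A minimal sketch of that arithmetic (base and range are the values from the config above):

```shell
# Unprivileged mapping: container id N appears on the host as base + N.
# base and range are taken from the lxc.id_map lines above.
base=1000000
range=65536
container_uid=0
host_uid=$((base + container_uid))
echo "container uid $container_uid -> host uid $host_uid"
echo "mappable container ids: 0..$((range - 1))"
```

Anything owned by an id outside that 0..65535 window has no host-side translation, which becomes relevant for the Rancher image below.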

This allows the Docker service to run just fine:

root@kube1:~# docker info
 Debug Mode: false

 Containers: 9
  Running: 0
  Paused: 0
  Stopped: 9
 Images: 2
 Server Version: 19.03.8
 Storage Driver: vfs
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.0-0.bpo.6-amd64
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 15.66GiB
 Name: kube1
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
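These two warnings matter for Kubernetes networking: the br_netfilter module must be loaded and the bridge sysctls enabled on the host, since the unprivileged container typically cannot change them itself. A sketch of the sysctl fragment (written to /tmp here for illustration; the real location would be /etc/sysctl.d/):

```shell
# Kubernetes pod networking needs these bridge sysctls enabled on the HOST.
# Write the fragment (demo path /tmp; real path /etc/sysctl.d/99-k8s-bridge.conf):
cat > /tmp/99-k8s-bridge.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Then, as root on the host:
#   modprobe br_netfilter
#   sysctl -p /tmp/99-k8s-bridge.conf
```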

Simple container images work just fine. This can be tested with the “hello-world” image:

root@kube1:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

But more complicated container images, such as Rancher 2 (rancher/rancher), seem to need more permissions. The difficult part is figuring out exactly which permissions.
Side spoiler: installing rancher/rancher works in a privileged LXC container.

Trying to install the Rancher (single node) server gives a permission error during layer extraction:

root@kube1:~# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable
Unable to find image 'rancher/rancher:stable' locally
stable: Pulling from rancher/rancher
5bed26d33875: Already exists 
f11b29a9c730: Already exists 
930bda195c84: Already exists 
78bf9a5ad49e: Already exists 
12a73929b6a7: Pull complete 
8434af3b0a23: Pull complete 
28db93a68de0: Pull complete 
e6dfd852f705: Pull complete 
a1fa824ccd2c: Extracting [==================================================>]  99.67MB/99.67MB
1e2d165916be: Download complete 
aaf1116b238c: Download complete 
375fded79e14: Download complete 
e2c84878ed8a: Download complete 
f7a8fcb48ebd: Download complete 
docker: failed to register layer: ApplyLayer exit status 1 stdout:  stderr: lchown /usr/bin/etcd: invalid argument.
See 'docker run --help'.

This very error is documented on
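One common cause of this kind of lchown failure in user namespaces: an image layer contains a file owned by an id outside the mapped 65536-id window, so lchown() receives an id the kernel cannot translate and fails with EINVAL ("invalid argument"). A hedged sketch of the check (file_uid is a hypothetical owner, not taken from the actual rancher image):

```shell
# A file owner outside the mapped id window cannot be translated on extraction;
# lchown() then fails with EINVAL. range is the size from the lxc.id_map lines;
# file_uid is a hypothetical owner id recorded in an image layer.
range=65536
file_uid=1000000
if [ "$file_uid" -lt "$range" ]; then
  echo "uid $file_uid is mappable"
else
  echo "uid $file_uid is outside the window -> lchown: invalid argument"
fi
```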

Trying to install a Kubernetes cluster node using rancher/rancher-agent returns the following error:

root@kube1:~# sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.6 --server --token something --ca-checksum something --etcd --controlplane --worker
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "apply caps: operation not permitted": unknown.
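--privileged asks the runtime to grant the full capability set, but inside an unprivileged container the capability bounding set is already reduced, and capabilities dropped by LXC cannot be re-acquired; hence "apply caps: operation not permitted". One way to inspect what is actually available, using only the standard /proc interface:

```shell
# Print the capability bounding set mask of the current shell; inside an
# unprivileged container this is narrower than what --privileged requires.
capbnd=$(awk '/^CapBnd/ {print $2}' /proc/self/status)
echo "CapBnd mask: $capbnd"
# Compare it against the mask of a root shell on the host;
# missing bits are capabilities that were dropped.
```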

Concerning the first case, the Rancher 2 management server, I believe this has something to do with the uid mapping.

Unfortunately, neither the logs nor the docker output provide relevant information on where to look further.

Any ideas? Did anyone get this to work correctly in an unprivileged LXC container?


In addition to your config, add the following lines:

lxc.cgroup.devices.allow = a
lxc.hook.mount =
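Config changes like these only take effect after the container is restarted; a sketch, assuming the container name kube1 from the prompts above:

```shell
# Restart the container so the added config lines take effect
# (container name "kube1" is taken from the shell prompts above).
lxc-stop -n kube1
lxc-start -n kube1
```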