Imported LXD container breaks nested Docker instance

I have an LXD container for Discourse that runs fine on one LXD host. I exported a copy of that container and imported it on another host. The container runs nested Docker inside it as part of the application. On the new system, Docker inside the LXD container fails to start properly.

 docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Mon 2022-10-17 17:44:19 UTC; 10ms ago
     Docs: https://docs.docker.com
  Process: 1165 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 1165 (code=exited, status=1/FAILURE)

The container is privileged and has the following set:

security.nesting=true
security.syscalls.intercept.mknod=true
security.syscalls.intercept.setxattr=true
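
For anyone reproducing the setup, these keys can be applied with lxc config set; a minimal sketch, assuming the container is named container_name:

lxc config set container_name security.privileged true
lxc config set container_name security.nesting true
lxc config set container_name security.syscalls.intercept.mknod true
lxc config set container_name security.syscalls.intercept.setxattr true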

The specific issue is that my application seems to require the overlay2 storage driver for Docker. The original container uses this driver, but on the new host Docker will not start with overlay2.

scott@Discourse-1:~$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
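
To see why the daemon is dying, rather than just that its socket is absent, the journal is more useful than docker ps; a quick check, run inside the container:

sudo journalctl -u docker.service --no-pager | tail -n 20

In a case like this it should show dockerd exiting during storage/graphdriver initialization, which is what points the finger at overlay2.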

I am confused, since the original container that the export came from runs fine. When I import it, Docker is broken.
Any ideas?

OK, so I determined the solution. My LXD default storage pool is ZFS, and my nested Docker application requires an overlay2 filesystem. Apparently overlay2 cannot be nested inside of a ZFS storage pool, because overlayfs does not support ZFS as its backing filesystem.
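
A quick way to confirm the mismatch is to check which backing filesystem sits under Docker's data root; a sketch, run inside the imported container:

df -T /var/lib/docker

On the imported container this should report zfs, which overlay2 cannot use as its backing filesystem, whereas a container on a dir pool backed by ext4 reports ext4.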

My solution was to create a new “dir”-based pool and move the imported container from the ZFS pool to the new pool (the move to a different pool requires a temporary rename, hence the two-step move):

lxc storage create newpool dir
lxc stop container_name
lxc move container_name temp_container_name -s newpool
lxc move temp_container_name container_name
lxc start container_name
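
After the move, a quick sanity check that Docker is back on the expected driver (assuming the usual Docker CLI inside the container):

docker ps
docker info --format '{{.Driver}}'

The second command should now print overlay2.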

Yes, that’s right: overlayfs sadly doesn’t work atop ZFS.

Hi

If you are interested in keeping your existing ZFS pools because the dir storage type isn’t optimal, you could also attach a second disk of type block to the container: a zvol created via sudo zfs create..., formatted with ext4 (or another filesystem supported by Docker), and mounted at a generic mount point within the container. You then bind-mount the directories on that second disk over the default Docker locations so that they sit on the attached disk.

The Docker Swarm feature is problematic in a container, based on what I’ve read and on the issues that others and I have been experiencing. Sadly, I think that for now Docker Swarm needs to live in VMs.

E.g. creating the zvol with zfs create (note this deliberately does not use LXD’s own dataset structure):

sudo zfs create -p -s -V 50GB storagepoolname/acme-custom/block/instancename_disk01

E.g. formatting the block device (zvol):

sudo mkfs.ext4 /dev/zvol/storagepoolname/acme-custom/block/instancename_disk01

E.g. mounting the new filesystem, pre-creating the Docker directories, and correcting permissions (the 1000000:1000000 ownership assumes LXD’s default idmap for unprivileged containers):

sudo mount /dev/zvol/storagepoolname/acme-custom/block/instancename_disk01 /mnt
sudo mkdir -p /mnt/etc/docker /mnt/var/lib/{docker,docker-bootstrap,docker-certs,docker-engine}
sudo chown -R 1000000:1000000 /mnt/
sudo umount /mnt

E.g. container config:

devices:
  disk01:
    path: /mnt/disk01
    source: /dev/zvol/storagepoolname/acme-custom/block/instancename_disk01
    type: disk
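
Equivalently, rather than editing the YAML by hand, the device can be added from the host with lxc config device add; a sketch, using the same hypothetical names:

lxc config device add instancename disk01 disk source=/dev/zvol/storagepoolname/acme-custom/block/instancename_disk01 path=/mnt/disk01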

E.g. container /etc/fstab:

/mnt/disk01/etc/docker /etc/docker none bind,auto 0 0
/mnt/disk01/var/lib/docker /var/lib/docker none bind,auto 0 0
/mnt/disk01/var/lib/docker-bootstrap /var/lib/docker-bootstrap none bind,auto 0 0
/mnt/disk01/var/lib/docker-certs /var/lib/docker-certs none bind,auto 0 0
/mnt/disk01/var/lib/docker-engine /var/lib/docker-engine none bind,auto 0 0
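
To activate the bind mounts without a reboot, a sketch run inside the container (stopping Docker first so nothing is written to the old locations mid-switch):

sudo systemctl stop docker
sudo mount -a
df -T /var/lib/docker   # should now report ext4
sudo systemctl start docker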

Hope this helps.
