Who's got Docker working in an Incus LXC?

Seeing as the recent topics from people looking for help troubleshooting issues similar to mine don’t have happy endings, I’m thinking it might be more fruitful to work backwards from success.

So any of y’all who have succeeded in getting Docker working in an Incus LXC, I want to bite your steez. What distro is your host system? Container distro? What storage driver does your Incus host use? Anyone using a different storage driver for the container than the host? Which security policies have you enabled? Nesting, I assume. security.syscalls.intercept.mknod? security.syscalls.intercept.setxattr? ZFS delegation?
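For concreteness, these are the knobs I'm asking about, expressed as Incus commands (the container name `docker-host` is just a placeholder):

```shell
# The config keys in question, applied to a hypothetical container "docker-host"
incus config set docker-host security.nesting=true
incus config set docker-host security.syscalls.intercept.mknod=true
incus config set docker-host security.syscalls.intercept.setxattr=true

# ZFS delegation is a disk device option rather than an instance key
incus config device set docker-host root zfs.delegate=true
```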

Appreciate you in advance, and don’t worry, I’ll make sure everyone knows that you’re the real Slim Shady.

Not 100% what you’re looking for, but I’ve been able to run Podman without issues in a fedora/42 container with just security.nesting=true.
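As a rough sketch of that setup (the instance name and the final smoke-test image are mine, not from the post):

```shell
# Fedora 42 container with only nesting enabled
incus launch images:fedora/42 podman-test -c security.nesting=true

# Install Podman inside the container and run a throwaway container as a check
incus exec podman-test -- dnf install -y podman
incus exec podman-test -- podman run --rm docker.io/library/alpine:latest echo hello
```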

ZFS delegation is still something I’m looking into for RHEL-based instances, but for Debian-based ones you’re unlikely to run into any significant trouble.

EDIT - Re: ZFS delegation: a tip if you go this route: don’t enable ZFS delegation on root disks. If you set it from the storage pool, then set initial.zfs.delegate=false for your root disk in the instance configuration or profile. I can’t nail this down to anything definitive, but it might be an upstream OpenZFS bug, as the datasets get pinned to a non-existent namespace (or zone, in OpenZFS terminology).
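A sketch of what that tip looks like in practice, assuming a storage pool named `default` and an instance named `c1` (both names are placeholders):

```shell
# Pool-wide default: delegate ZFS datasets to containers
incus storage set default volume.zfs.delegate=true

# ...but opt the root disk of an instance back out, per the tip above
incus config device override c1 root initial.zfs.delegate=false
```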

Hi, while I don’t use Docker “directly”, I host a couple of Discourse forums inside Incus instances, and the deployment mechanism for Discourse is to build Docker images and deploy them inside the host environment, in this case an Incus container. So effectively I am hosting Docker in a container and deploying Docker images into it.

Typically works fine, with production uptime. I want to say that all I needed to do at the time was enable nesting … but on checking the instance now, nesting isn’t enabled (nor is privileged mode). The only customisation I seem to have is that swap is enabled, and I think that’s just because Discourse is a bit hungry when it rebuilds.

root@rpi3:~# incus shell linuxforums
root@linuxforums:~# docker ps
CONTAINER ID   IMAGE                 COMMAND        CREATED        STATUS      PORTS                                                                      NAMES
36b1bceccc5e   local_discourse/app   "/sbin/boot"   3 months ago   Up 3 days   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   app

Fairly standard low-power install: 8GB RPi5, local three-machine cluster, stock ZFS filesystem volume.

Typically works fine, with production uptime. I want to say that all I needed to do at the time was enable nesting … but on checking the instance now, nesting isn’t enabled (nor is privileged mode).

Thank you for this bit, I just did a test without security.nesting set and Podman worked just fine.

FYI, I’m making extensive use of Docker images now directly in Incus using the (relatively) new OCI format. Works great, no need for Docker, at least not for the things I’ve tried.

The most challenging part thus far has been getting Nginx Proxy Manager working in different configurations; I now have one straddling my OVN network and the Internet, and another straddling my local network and OVN. I still managed to get it all working using the stock Docker images, despite needing multiple network ports inside the image. I’m also running standard Docker images for GitLab and Unbound, all relatively straightforward, although maybe the UI could do with some OCI-based enhancement… :wink:
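For anyone curious, running OCI images directly in Incus looks roughly like this (the remote and instance names are arbitrary):

```shell
# Add Docker Hub as an OCI remote
incus remote add oci-docker https://docker.io --protocol=oci

# Launch a stock image as an application container
incus launch oci-docker:nginx web
incus list web
```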

I’ve largely been doing the same until recently, but Docker Hub’s new authentication requirements have put using OCI on Incus on temporary hold until this issue gets resolved: OCI authentication support · Issue #1700 · lxc/incus · GitHub

Podman or Docker in an instance remains a temporary stopgap for me until then.

Mm, I’ve not come across the authentication feature as a casual Docker user … I’m now wondering what the use case is for authenticated images; I’ll look it up :slight_smile:

It’s mainly for rate limiting, IIRC, but I do enough experiments that I hit that 10 pulls/hr limit, so I stopped using Incus’s OCI support for the time being.

Mmm … well, just FYI: thus far I’ve not had any issues. Install seems to work, and I’m setting things up in such a way that rebuild also seems happy re: custom data volumes … but then I don’t update often enough to hit any per-hour limits. Once you’ve pulled an OCI image it’s cached locally, so re-deploying and re-using doesn’t seem likely to incur any remote hits.

If someone does hit pull limits on Docker Hub, they can set up a system container as a caching proxy for Docker images, and configure their Docker instances to use that instead.
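A minimal sketch of such a cache, using Docker's stock `registry` image as a pull-through proxy (the host name and port are illustrative):

```shell
# In the caching container: run a pull-through cache for Docker Hub
docker run -d --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# On each Docker instance, point the daemon at the mirror via
# /etc/docker/daemon.json, then restart the Docker service:
#   { "registry-mirrors": ["http://mirror-host:5000"] }
```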