Thanks!
I ended up being too impatient and just downloaded and unpacked Ubuntu Base into the folder.
I am using runc (crun after testing) to run a container from an “OCI bundle”, which can be just a folder with everything in it, so no overlay hell on ZFS. I am doing this because I could not get dir-based LXD inside LXD to work, and I need an application container inside my ZFS containers, or rather several of them.
And all of this because nobody got the idea to make it possible in ld to force a binary to search only one folder for its dynamic libraries, and trust me, I tried all the -Wl,-rpath= stuff I could find. I guess this is why app containers were invented in the first place.
For sure, I am still writing the script, can share it here when it’s done. Will take me a while though.
The theory is quite simple:
Inside a folder with any name, let’s say “container”, you make a folder “rootfs” (the name can be changed in the config to anything you’d like, but who cares).
In rootfs you put your minimal Linux system (I also saw an article saying you don’t need any kind of system, it can be just a binary; I will have to try that out, though I guess dynamically linked binaries will want ld-linux-x86-64.so.2 and all the necessary libs).
Of course you also put the app you want to run in the minimal system
In the folder “container” you run “runc spec”, which generates the basic config.json
In that json file you can configure bind mounts, grant additional permissions (I needed CAP_SETUID and CAP_SETGID for PHP-FPM) and so on. There is also a setting to make rootfs mutable.
Inside the container folder, where config.json and rootfs sit next to each other, you just run “runc run container” and you should have a working container.
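The steps above, as a rough shell sketch (the folder names, the tarball, and the container ID “app1” are placeholders of mine; the runc steps are guarded or commented because they need runc installed and root):

```shell
# Sketch of the bundle layout described above; names are placeholders.
set -e

mkdir -p container/rootfs

# Unpack a minimal system into rootfs, e.g. an Ubuntu base tarball,
# and copy in the app you want to run:
#   tar -xzf ubuntu-base-*.tar.gz -C container/rootfs

# "runc spec" generates the default config.json next to rootfs
# (skipped here if runc is not installed):
if command -v runc >/dev/null 2>&1; then
  (cd container && runc spec)
fi

# With config.json and rootfs side by side, start it (needs root):
#   sudo runc run app1
```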
For the minimal system I would suggest these, due to their broader CPU arch support: https://partner-images.canonical.com/oci/
They are similar to the Ubuntu base image.
I wouldn’t go with Alpine because of musl libc, which I’ve read is less performant. And who cares about 40 MB extra these days.
I still have to figure out if one can run different binaries in different containers from the same base OCI bundle folder…
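On that last point, one approach worth trying (a sketch under two assumptions: that config.json accepts an absolute root.path, and that runc run takes -b/--bundle; all paths and IDs here are made up):

```shell
# Hypothetical layout: two bundle dirs sharing one rootfs.
mkdir -p /tmp/base/rootfs /tmp/bundle-php /tmp/bundle-nginx

# In each bundle you would run "runc spec", then edit its config.json:
#   "root":    { "path": "/tmp/base/rootfs", "readonly": true }
#   "process": { "args": ["/usr/sbin/php-fpm"] }   # or nginx, etc.
# A read-only shared rootfs keeps the containers from scribbling over
# each other; writable paths can be added per container as bind mounts.

# Then start each one with its own ID (needs root):
#   sudo runc run -b /tmp/bundle-php   php1
#   sudo runc run -b /tmp/bundle-nginx web1
```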
Thanks for providing some details. I’m always looking for alternatives, because I really don’t like Dockerfiles.
I like this more chroot-ish approach. I’ve used it with systemd-nspawn, and I’m pleased to learn you can do it with runc. I would like to explore using an ostree repository to have git-like management of the binaries in the chroot. If I ever figure out ostree, that is.
I’ve never used the bind mount options myself, but it’s possible systemd-nspawn supports them as well.
I wouldn’t call it simple, per se. But it’s not hard to use. Become root and run systemd-nspawn -D ./yourchroot.
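For bind mounts, nspawn does have dedicated flags; a sketch (the chroot and host paths are made up, and the actual invocation needs root, so it is only printed here):

```shell
# systemd-nspawn takes --bind=SRC[:DST] (read-write) and
# --bind-ro=SRC[:DST] (read-only); the paths here are hypothetical.
cmd='systemd-nspawn -D ./yourchroot --bind=/srv/shared:/mnt/shared --bind-ro=/etc/resolv.conf'
echo "sudo $cmd"
```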
I haven’t used it for a while, but I would consider it if I were running in an environment with an investment in systemd-networkd configs. nspawn has been around for a long time.
Yeah, ok. Another factor will be performance, where I guess both should be similar, and which one has better ARM device support, which I would like to add to my stackbuilder in the future.
So one big disadvantage: you have to install systemd-container. When testing in an 18.04 LXC for backwards compatibility, I got the following error:
The following packages have unmet dependencies:
systemd-container : Depends: systemd (= 237-3ubuntu10.53) but 237-3ubuntu10.54 is to be installed
So dependency issues with “managed” packages…
Wait, what problem did I want to solve again? Sorry, couldn’t resist.
So I will continue looking into runc, as it comes as a simple download from GitHub as a precompiled static binary. Seems less trouble than downgrading systemd with aptitude. There must be a reason for that update to .54.