As I understand the situation, this capability has been dropped from LXD.
There are valid use cases for being able to host a container containing a foreign arch rootfs using the user-space QEMU interpreter shims.
At Camgian Microsystems, we develop software targeting armhf and other non-x86 processor families.
In our build pipelines, we currently use Docker to host cross-target containers with foreign-arch root filesystems for executing various quality-assurance tests. There are other very useful tactics we employ that help us deal with certain software that is not set up for cross-compilation, etc.
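For context, the Docker side of this looks roughly like the sketch below (image names are illustrative, not our actual pipeline images; the multiarch/qemu-user-static helper image is one common way to register the handlers):

```
# Register qemu user-mode handlers in binfmt_misc on the host
# (this helper image wraps qemu-binfmt-conf.sh and registers handlers
#  for all supported foreign architectures)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Run an armhf rootfs on an x86_64 host; armhf ELF binaries are handed to
# the qemu-arm user-space interpreter transparently via binfmt_misc
docker run --rm arm32v7/debian uname -m    # reports armv7l
```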
We really like the functionality and feature set of LXD and would like to migrate to it, but the lack of equivalent binfmt support is presently a deal-breaker for our shop.
Performance attributes and lack of ptrace support are not primary considerations for our use cases.
We are just getting on the LXD train. I have minimal personal experience with LXC, and none of it as applied to the use cases described above.
We are as yet unable to get binfmt support to function for the simple case of interpreting an armhf ELF image in a standard LXD container. This functionality is a core constituent of our existing Docker build/release pipelines.
There appears to be binfmt support present in the default Ubuntu LXD images for launching python3 scripts, but the update-binfmts tool within the container will not permit us to activate the qemu-arm shim.
We are not moving ahead with construction of a full cross-target container under LXD until we are able to execute the simple case of user-space emulation with qemu-arm-static (using binfmt).
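For the record, the "simple case" amounts to something like the following inside a stock Ubuntu container (a sketch only; package and tool names are the Ubuntu ones, and ./hello-armhf stands in for any armhf ELF binary):

```
# Install the static user-mode emulators and the binfmt registration tooling
sudo apt install qemu-user-static binfmt-support

# Inspect / enable the qemu-arm handler (this is the step that currently
# fails for us inside the container)
update-binfmts --display qemu-arm
sudo update-binfmts --enable qemu-arm

# Explicit invocation works without any binfmt registration at all
qemu-arm-static ./hello-armhf

# With a working registration this should run transparently
./hello-armhf
```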
Our ultimate goal is a wholesale migration to LXD with full support of cross-target filesystems.
We are willing to contribute development effort toward that goal.
LXD now supports VMs using qemu, including the ability to pass configuration options through to it. It is possible to run Windows in an LXD VM on Linux, but I have not seen discussion of running other architectures.
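For reference, launching a VM and passing extra options through to qemu looks roughly like this (the raw.qemu value below is only a placeholder, not a working option string):

```
# --vm asks for a qemu virtual machine instead of a container
lxc launch ubuntu:22.04 myvm --vm

# Extra command-line arguments can be appended to the qemu invocation
# via the raw.qemu config key (placeholder value shown)
lxc config set myvm raw.qemu="-extra-qemu-arguments-here"
lxc restart myvm
```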
Yeah, we have foreign arches on the idea list, but it’s a bit harder than it sounds: not only would we need to start shipping qemu for all the architectures, but also the matching firmware.
On top of that, there are some virtualized devices which don’t play nice when not on the native architecture, so we would either need to ban those architectures or maintain two different machine definitions.
To clarify, I’m specifically looking at a use case for nested virtualization here, which involves either qemu-user-static (as mentioned by the OP) or qemu-user-binfmt in order to use build images for multiple architectures in conjunction with pbuilder.
In an x86_64/amd64 LXD container (sharing the same architecture as the host), it’s possible to add the arm64/aarch64 architecture, install the hello:arm64 test binary, and either seamlessly or explicitly (by means of qemu-aarch64[-static]) run /usr/bin/hello and other binaries without any problem.
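For reference, the working (non-chroot) part looks roughly like this inside the amd64 container (on Ubuntu the arm64 packages also need apt sources pointing at ports.ubuntu.com, which is omitted here):

```
# Inside the amd64 container: enable the foreign architecture
sudo dpkg --add-architecture arm64
sudo apt update

# Static user-mode emulator plus a trivial arm64 test binary
sudo apt install qemu-user-static hello:arm64

# Explicit invocation through the emulator
qemu-aarch64-static /usr/bin/hello

# Seamless invocation via the binfmt_misc handler
/usr/bin/hello
```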
However, pbuilder requires the use of chroot, and this is where everything comes to a full stop, because binfmt-related handlers registered outside of the chroot no longer work “on the inside”.
There are a number of (painfully terse, partially outdated) “howtos” like QemuUserEmulation that suggest using systemd-container, or bind mounting “the usual suspects” (/dev etc.), but this doesn’t work.
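One general binfmt_misc detail worth checking in this situation (not something the howtos spell out, just an observation): whether the handler was registered with the F (fix-binary) flag, and if not, whether the interpreter actually exists inside the chroot.

```
# Show the registered handler and its flags; an 'F' in the flags line means
# the kernel opened the interpreter when the rule was registered, so it keeps
# working inside chroots that do not contain the interpreter themselves
cat /proc/sys/fs/binfmt_misc/qemu-aarch64

# Without the F flag, the interpreter path must also exist inside the chroot,
# e.g. by copying the static binary into it before building
cp /usr/bin/qemu-aarch64-static /path/to/chroot/usr/bin/
```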
Because I’m only now looking to include additional (foreign) architectures in an existing setup that was placed inside an LXD container from the start, I have never tried this inside a VM or elsewhere. In order to rule out (additional?) LXD-related side effects, I’m currently retracing those steps inside a VM (and, if need be, on a dedicated host machine without any kind of nested virtualization, although this defeats the ultimate objective of having a single pbuilder container accessible in an LXD cluster).
As long as foreign binaries can be executed inside a chroot, that is “good enough” for me; I actually want to avoid having to maintain build images/environments in the form of multiple full-fledged containers at this point in time (because that would also require more or less complicated changes to the existing toolchain and waste resources).
Short update: the issue mentioned above seems to be resolved after uninstalling and reinstalling the binfmt-related packages. Neither bind mounts nor the schroot/proot packages were necessary.
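For anyone hitting the same wall, the reinstall amounted to roughly the following (the exact package set may differ depending on whether qemu-user-static or qemu-user-binfmt is in use):

```
sudo apt purge qemu-user-static binfmt-support
sudo apt install qemu-user-static binfmt-support

# Verify the handler is registered and enabled again
update-binfmts --display qemu-aarch64
```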