Bwrap and flatpaks inside Proxmox 8 LXC

I’ve been trying to get flatpak (which uses bwrap) to work inside an unprivileged LXC container in Proxmox 8. That particular container used to work fine under LXD (with LXD’s nesting), but LXC clearly manages things differently.

I have been trying to work around it with AppArmor and came across the discussion at [Question] bwrap in LXC · Issue #362 · containers/bubblewrap · GitHub, but Proxmox’s LXC clearly has some nuance that I cannot pin down.

Does anyone here have any hints/experience with this issue?



If someone doesn’t know much about bubblewrap/bwrap but would like to look into this, can you give a cheat-sheet-style list of instructions to install the software in an LXC or Incus container? Show the output you get when running in a container, and what the typical output would be when not running in a container.

Well, the current setup is a vanilla Fedora desktop with flatpak configured. You can get the relevant rootfs from the container images repo and then install xorgxrdp to get a working Remote Desktop, although I suspect you will need to do something like:

sudo dnf install @"Fedora Workstation product core" avahi binutils gnome-shell-extension-dash-to-dock gnome-tweaks htop net-tools nss-mdns setroubleshoot rsms-inter-fonts tmux vim xorgxrdp xrdp
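
After installing those packages you will most likely also need to enable the RDP service; a minimal sketch, assuming the stock Fedora unit name:

sudo systemctl enable --now xrdp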

Since the container accesses my GPU and runs Tailscale, it is set up with:

# cat /etc/pve/lxc/103.conf 
arch: amd64
cores: 6
features: nesting=1,keyctl=1
hostname: fedora
memory: 65536
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=1E:40:8A:B6:B4:B3,ip=dhcp,type=veth
onboot: 1
ostype: fedora
rootfs: local-lvm:vm-103-disk-0,size=256G
swap: 2048
tags: intel;rdp
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 508:* rwm
lxc.cgroup.devices.allow: c 226:* rwm
lxc.cgroup.devices.allow: c 237:* rwm
lxc.mount.entry: /dev/tty0 dev/tty0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.idmap: u 0 100000 39
lxc.idmap: g 0 100000 39
lxc.idmap: u 39 44 1
lxc.idmap: g 39 44 1
lxc.idmap: u 40 100040 65
lxc.idmap: g 40 100040 65
lxc.idmap: u 105 103 1
lxc.idmap: g 105 103 1
lxc.idmap: u 106 100106 65430
lxc.idmap: g 106 100106 65430
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
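
For those custom lxc.idmap entries to work, the Proxmox host also has to delegate the mapped host IDs to root in /etc/subuid and /etc/subgid. Mine look roughly like this (a sketch; host IDs 44 and 103 follow from the idmap lines above, and root:100000:65536 is the Proxmox default):

# /etc/subuid (and the same entries in /etc/subgid)
root:100000:65536
root:44:1
root:103:1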

Setting “nesting” on the container in the GUI, or making it privileged (by editing the configuration and removing the “unprivileged” key), does not change anything relevant. The configuration above is the “privileged” one.
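
For reference, the same toggle done from the CLI instead of the GUI:

pct set 103 --features nesting=1,keyctl=1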

When trying to run Ghidra from flatpak, a terminal session inside that Xrdp desktop yields:

# flatpak run org.ghidra_sre.Ghidra
F: Can't get document portal: GDBus.Error:org.freedesktop.portal.Error.Failed: Can't mount path /run/user/1000/doc
bwrap: cannot open /proc/sys/user/max_user_namespaces: Read-only file system

This seems to indicate that bwrap cannot set up its sandbox, i.e. create the user namespace and pivot into the new root; note that /proc/sys/user is apparently mounted read-only inside the container.
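
A quick way to check how far unprivileged namespace creation gets inside the container, independent of flatpak (assuming util-linux’s unshare is available):

# run inside the container, as the desktop user
cat /proc/sys/user/max_user_namespaces
unshare --user --map-root-user true && echo "userns OK"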

This is the host log from the moment I start the container to when I try to run the flatpak (there is nothing notable here):

# tail -f /var/log/kern.log
2024-01-31T11:32:39.071870+00:00 borg kernel: [ 1019.170761] EXT4-fs (dm-7): mounted filesystem 4e9ac51c-342d-48fd-b81f-a01fb61724b9 r/w with ordered data mode. Quota mode: none.
2024-01-31T11:32:39.391989+00:00 borg kernel: [ 1019.493132] audit: type=1400 audit(1706700759.384:30): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-103_</var/lib/lxc>" pid=8410 comm="apparmor_parser"
2024-01-31T11:32:39.895975+00:00 borg kernel: [ 1019.994730] vmbr0: port 2(fwpr103p0) entered blocking state
2024-01-31T11:32:39.895984+00:00 borg kernel: [ 1019.994733] vmbr0: port 2(fwpr103p0) entered disabled state
2024-01-31T11:32:39.895985+00:00 borg kernel: [ 1019.994746] fwpr103p0: entered allmulticast mode
2024-01-31T11:32:39.895986+00:00 borg kernel: [ 1019.994776] fwpr103p0: entered promiscuous mode
2024-01-31T11:32:39.895987+00:00 borg kernel: [ 1019.994799] vmbr0: port 2(fwpr103p0) entered blocking state
2024-01-31T11:32:39.895987+00:00 borg kernel: [ 1019.994801] vmbr0: port 2(fwpr103p0) entered forwarding state
2024-01-31T11:32:39.899917+00:00 borg kernel: [ 1020.001378] fwbr103i0: port 1(fwln103i0) entered blocking state
2024-01-31T11:32:39.899921+00:00 borg kernel: [ 1020.001381] fwbr103i0: port 1(fwln103i0) entered disabled state
2024-01-31T11:32:39.899922+00:00 borg kernel: [ 1020.001392] fwln103i0: entered allmulticast mode
2024-01-31T11:32:39.899922+00:00 borg kernel: [ 1020.001419] fwln103i0: entered promiscuous mode
2024-01-31T11:32:39.899923+00:00 borg kernel: [ 1020.001454] fwbr103i0: port 1(fwln103i0) entered blocking state
2024-01-31T11:32:39.899923+00:00 borg kernel: [ 1020.001456] fwbr103i0: port 1(fwln103i0) entered forwarding state
2024-01-31T11:32:39.907979+00:00 borg kernel: [ 1020.007923] fwbr103i0: port 2(veth103i0) entered blocking state
2024-01-31T11:32:39.907983+00:00 borg kernel: [ 1020.007926] fwbr103i0: port 2(veth103i0) entered disabled state
2024-01-31T11:32:39.907983+00:00 borg kernel: [ 1020.007935] veth103i0: entered allmulticast mode
2024-01-31T11:32:39.907984+00:00 borg kernel: [ 1020.007961] veth103i0: entered promiscuous mode
2024-01-31T11:32:39.936006+00:00 borg kernel: [ 1020.038358] eth0: renamed from vethOFRL1U
2024-01-31T11:32:41.023859+00:00 borg kernel: [ 1021.124078] fwbr103i0: port 2(veth103i0) entered blocking state
2024-01-31T11:32:41.023870+00:00 borg kernel: [ 1021.124082] fwbr103i0: port 2(veth103i0) entered forwarding state
2024-01-31T11:32:52.003899+00:00 borg kernel: [ 1032.103108] traps: gdm-session-wor[9069] trap int3 ip:7f15220ec1b1 sp:7ffde4bea160 error:0 in[7f15220a9000+a1000]

The app ran perfectly inside LXD, so I started looking into the apparmor profiles applied. This is the “nesting” one:

# cat /etc/apparmor.d/lxc/lxc-default-with-nesting 
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nesting flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  #include <abstractions/lxc/start-container>

  deny /dev/.lxc/proc/** rw,
  deny /dev/.lxc/sys/** rw,
  mount fstype=proc -> /var/cache/lxc/**,
  mount fstype=sysfs -> /var/cache/lxc/**,
  mount options=(rw,bind),
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=cgroup2 -> /sys/fs/cgroup/**,
}

It bears noting that I migrated this from LXD to LXC by exporting the rootfs and reimporting, but a fresh Ubuntu container created from scratch has the same issue.

Going through the GitHub thread linked above yielded a few suggestions, but creating a new apparmor profile didn’t help.
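
For completeness, the wiring was roughly this: drop the new profile under /etc/apparmor.d/lxc/, reload, and point the container at it (the profile name lxc-103-nesting-test is just what I called mine):

# on the host, after placing the profile in /etc/apparmor.d/lxc/
apparmor_parser -r /etc/apparmor.d/lxc-containers

# then in /etc/pve/lxc/103.conf
lxc.apparmor.profile: lxc-103-nesting-test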

Here is me, not knowing a single thing about flatpaks, running a flatpak inside an Incus container.

First, create a GUI Incus container using the instructions at Incus / LXD profile for GUI apps: Wayland, X11 and Pulseaudio. Following those instructions, you create an Incus profile and name it gui.

I am using the Debian container image in my case. Run the following on the host.

incus launch images:debian/12/cloud --profile default --profile gui bubblewrap

Then, get a non-root shell inside the Incus container. After you run this command, the prompt changes, and you are in the container.

incus exec bubblewrap -- sudo --login --user debian

Now, inside the container, update the package lists and install flatpak.

sudo apt update
sudo apt install flatpak
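
If flatpak complains that there is no remote configured (a fresh Debian container has none), add Flathub first:

sudo flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo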

Still in the container, install some random simple flatpak for testing. It has to be simple or we are doing it wrong.

sudo flatpak -v install org.gnome.clocks

Run the installed flatpak in the container.

flatpak run org.gnome.clocks

Here is the running application.

Since I am using Incus, and the GUI-related configuration resides in an Incus profile, the security.nesting configuration key sits neatly in that profile as well.
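
That is, the nesting toggle lives in the profile rather than on the instance; something like:

incus profile set gui security.nesting true
incus profile show gui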

Note that I am not using xorgxrdp, but taking advantage of the host’s hardware acceleration.

In your case, can you provide such instructions for LXC?

Well, Incus is effectively LXD, and I did mention that yes, this container worked inside LXD, so what you’ve done doesn’t help me much… I am using vanilla LXC 5.0.2 as shipped with Proxmox, not LXD or Incus.

(For clarity, what you’ve done is essentially what I had before I moved away from LXD, and what I need to know is how to translate LXD’s nesting settings to LXC, not how to set them up.)

I do not use LXC with local displays. I only use them for remote desktops on my Proxmox machines, so the configuration I posted above does use hardware acceleration, but for rendering to the RDP in-memory frame buffer via xorgxrdp-glamor.
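
With the bind mounts from the configuration above, a quick sanity check inside the container shows whether the idmap lines land on the right groups (device names vary per host):

# inside the container
ls -l /dev/dri
# card0/renderD128 should be owned by real groups (video/render), not nobody/nogroup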

What you are trying to achieve is quite interesting and will likely help others.

I posted those instructions using Incus as an example of such a cheat-sheet description, for someone who may not be well-versed in LXC. I concluded my earlier reply with:

In your case, can you provide such instructions for LXC?

The error you get does not look like it relates to the choice of distro for the host. I would like to replicate your exact steps.

Well, the exact steps are to deploy an LXC container inside Proxmox and try to run flatpak inside it. I suppose any host that has LXC 5.0.2 will provide a similar environment.
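
If you want a near-minimal repro, something like this should do on any Proxmox 8 host (the template filename is whatever pveam offers you; adjust storage names to taste):

pveam update
pveam download local debian-12-standard_12.2-1_amd64.tar.zst
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname bwrap-test --memory 2048 --unprivileged 1 --features nesting=1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp --rootfs local-lvm:8
pct start 200
pct exec 200 -- bash -c 'apt update && apt install -y flatpak'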

Edited to add: I have a Proxmox test instance inside Gnome Boxes on my laptop, so it actually takes very little time to set this up. 2GB RAM is enough.