Well, the current setup is a vanilla Fedora desktop with Flatpak configured. You can get the relevant rootfs from the container images repo and then install xorgxrdp to get a working remote desktop, although I suspect you will need something like:
sudo dnf install @"Fedora Workstation product core" avahi binutils gnome-shell-extension-dash-to-dock gnome-tweaks htop net-tools nss-mdns setroubleshoot rsms-inter-fonts tmux vim xorgxrdp xrdp
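For the rootfs itself, this is roughly how I would fetch a Fedora template with Proxmox's template manager (the exact template name changes per release, so the download line is a placeholder):

# pveam update
# pveam available --section system | grep fedora
# pveam download local <fedora-template-name-from-the-list>

Inside the container, after the dnf line above, enabling the service should be enough to get an RDP listener:

# systemctl enable --now xrdp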
Since the container accesses my GPU and runs Tailscale, it is set up with:
# cat /etc/pve/lxc/103.conf
arch: amd64
cores: 6
features: nesting=1,keyctl=1
hostname: fedora
memory: 65536
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=1E:40:8A:B6:B4:B3,ip=dhcp,type=veth
onboot: 1
ostype: fedora
rootfs: local-lvm:vm-103-disk-0,size=256G
swap: 2048
tags: intel;rdp
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 508:* rwm
lxc.cgroup.devices.allow: c 226:* rwm
lxc.cgroup.devices.allow: c 237:* rwm
lxc.mount.entry: /dev/tty0 dev/tty0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.idmap: u 0 100000 39
lxc.idmap: g 0 100000 39
lxc.idmap: u 39 44 1
lxc.idmap: g 39 44 1
lxc.idmap: u 40 100040 65
lxc.idmap: g 40 100040 65
lxc.idmap: u 105 103 1
lxc.idmap: g 105 103 1
lxc.idmap: u 106 100106 65430
lxc.idmap: g 106 100106 65430
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
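One aside for anyone copying the idmap block: with custom lxc.idmap entries like these, the host's root user also needs to be allowed to map the passed-through IDs (44 and 103 here) when the container runs unprivileged. A minimal sketch of the host side, assuming the stock 100000 range is already present:

# /etc/subuid and /etc/subgid on the Proxmox host
root:100000:65536
root:44:1
root:103:1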
Enabling “nesting” for the container in the GUI, or making it privileged (by editing the configuration and removing the “unprivileged” key), changes nothing relevant. The configuration above is the “privileged” variant.
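For completeness, the CLI equivalent of the GUI toggle, which should match the features line in the config above:

# pct set 103 --features nesting=1,keyctl=1
# pct stop 103 && pct start 103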
When I try to run Ghidra from Flatpak, a terminal session inside that xrdp desktop yields:
# flatpak run org.ghidra_sre.Ghidra
F: Can't get document portal: GDBus.Error:org.freedesktop.portal.Error.Failed: Can't mount path /run/user/1000/doc
bwrap: cannot open /proc/sys/user/max_user_namespaces: Read-only file system
This seems to indicate that bwrap (Flatpak's bubblewrap sandbox) fails while setting up its user namespace: /proc/sys inside the container is read-only, so it cannot open max_user_namespaces, and it never gets as far as pivoting into the new root.
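A quick way to confirm that from inside the container (unshare ships with util-linux; this is just a sanity check, not a fix):

# cat /proc/sys/user/max_user_namespaces
# unshare --user --map-root-user true && echo "userns works"

If the unshare line fails, the problem is the container's confinement rather than anything Flatpak-specific.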
This is the host log from the moment I start the container until the moment I try to run the flatpak (there is nothing notable here):
# tail -f /var/log/kern.log
2024-01-31T11:32:39.071870+00:00 borg kernel: [ 1019.170761] EXT4-fs (dm-7): mounted filesystem 4e9ac51c-342d-48fd-b81f-a01fb61724b9 r/w with ordered data mode. Quota mode: none.
2024-01-31T11:32:39.391989+00:00 borg kernel: [ 1019.493132] audit: type=1400 audit(1706700759.384:30): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-103_</var/lib/lxc>" pid=8410 comm="apparmor_parser"
2024-01-31T11:32:39.895975+00:00 borg kernel: [ 1019.994730] vmbr0: port 2(fwpr103p0) entered blocking state
2024-01-31T11:32:39.895984+00:00 borg kernel: [ 1019.994733] vmbr0: port 2(fwpr103p0) entered disabled state
2024-01-31T11:32:39.895985+00:00 borg kernel: [ 1019.994746] fwpr103p0: entered allmulticast mode
2024-01-31T11:32:39.895986+00:00 borg kernel: [ 1019.994776] fwpr103p0: entered promiscuous mode
2024-01-31T11:32:39.895987+00:00 borg kernel: [ 1019.994799] vmbr0: port 2(fwpr103p0) entered blocking state
2024-01-31T11:32:39.895987+00:00 borg kernel: [ 1019.994801] vmbr0: port 2(fwpr103p0) entered forwarding state
2024-01-31T11:32:39.899917+00:00 borg kernel: [ 1020.001378] fwbr103i0: port 1(fwln103i0) entered blocking state
2024-01-31T11:32:39.899921+00:00 borg kernel: [ 1020.001381] fwbr103i0: port 1(fwln103i0) entered disabled state
2024-01-31T11:32:39.899922+00:00 borg kernel: [ 1020.001392] fwln103i0: entered allmulticast mode
2024-01-31T11:32:39.899922+00:00 borg kernel: [ 1020.001419] fwln103i0: entered promiscuous mode
2024-01-31T11:32:39.899923+00:00 borg kernel: [ 1020.001454] fwbr103i0: port 1(fwln103i0) entered blocking state
2024-01-31T11:32:39.899923+00:00 borg kernel: [ 1020.001456] fwbr103i0: port 1(fwln103i0) entered forwarding state
2024-01-31T11:32:39.907979+00:00 borg kernel: [ 1020.007923] fwbr103i0: port 2(veth103i0) entered blocking state
2024-01-31T11:32:39.907983+00:00 borg kernel: [ 1020.007926] fwbr103i0: port 2(veth103i0) entered disabled state
2024-01-31T11:32:39.907983+00:00 borg kernel: [ 1020.007935] veth103i0: entered allmulticast mode
2024-01-31T11:32:39.907984+00:00 borg kernel: [ 1020.007961] veth103i0: entered promiscuous mode
2024-01-31T11:32:39.936006+00:00 borg kernel: [ 1020.038358] eth0: renamed from vethOFRL1U
2024-01-31T11:32:41.023859+00:00 borg kernel: [ 1021.124078] fwbr103i0: port 2(veth103i0) entered blocking state
2024-01-31T11:32:41.023870+00:00 borg kernel: [ 1021.124082] fwbr103i0: port 2(veth103i0) entered forwarding state
2024-01-31T11:32:52.003899+00:00 borg kernel: [ 1032.103108] traps: gdm-session-wor[9069] trap int3 ip:7f15220ec1b1 sp:7ffde4bea160 error:0 in libglib-2.0.so.0.7800.3[7f15220a9000+a1000]
The app ran perfectly inside LXD, so I started looking into the AppArmor profiles being applied. This is the “nesting” one:
# cat /etc/apparmor.d/lxc/lxc-default-with-nesting
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc
profile lxc-container-default-with-nesting flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  #include <abstractions/lxc/start-container>

  deny /dev/.lxc/proc/** rw,
  deny /dev/.lxc/sys/** rw,
  mount fstype=proc -> /var/cache/lxc/**,
  mount fstype=sysfs -> /var/cache/lxc/**,
  mount options=(rw,bind),
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=cgroup2 -> /sys/fs/cgroup/**,
}
It bears noting that I migrated this container from LXD to LXC by exporting the rootfs and re-importing it, but a fresh Ubuntu container created from scratch has the same issue.
Going through the GitHub thread linked above yielded a few suggestions, but creating a new AppArmor profile didn’t help.
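For reference, the kind of derived profile I tried looks roughly like this (a sketch: the profile name and the relaxed mount rules are my additions on top of the stock nesting profile):

# /etc/apparmor.d/lxc/lxc-default-with-nesting-flatpak
profile lxc-container-default-with-nesting-flatpak flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  #include <abstractions/lxc/start-container>

  deny /dev/.lxc/proc/** rw,
  deny /dev/.lxc/sys/** rw,
  # widened so the sandbox can (re)mount proc and sysfs anywhere, not just under /var/cache/lxc
  mount fstype=proc,
  mount fstype=sysfs,
  mount options=(rw,bind),
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=cgroup2 -> /sys/fs/cgroup/**,
}

It gets loaded via the parent file (per the comment at the top of the stock profile) and referenced from the container config:

# apparmor_parser -r /etc/apparmor.d/lxc-containers

and in /etc/pve/lxc/103.conf:

lxc.apparmor.profile: lxc-container-default-with-nesting-flatpak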