How to run a containerized Desktop Environment in a Window?

I’m trying to run a distro with a Desktop Environment in a container and let it render on DISPLAY :1. Basically the same thing @simos suggested here.

However, I don’t seem to be able to get it working. Any chance you can point me in the right direction? I’m using the following profile and user-data:

When I now log in to the container as root and run gdm3 (to start GNOME), it does not output any error and keeps running - but Ctrl + Alt + F8 still shows an empty (black) screen.

What am I missing here?

UPDATE: I’m on elementary 5.1 Hera - which is based on Ubuntu 18.04 LTS. LXD v3.0.3


tl;dr:
Take a look at the comparison of methods for using X servers in containers by @simos:


Well, I am not an expert here,
but I think the main point is that the method from simos you are trying to use (there are other methods, see below) is, as far as I know, for running individual applications, not a whole desktop.

Some more specific problems of your setup:

  1. As far as I can see, you use the X0 socket, which means you should use display :0.
    Maybe you can try it the other way round and use “X1” for display :1.
    But then again, I only have one socket.

Note that the bold text below (i.e. X1) should be adapted for your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to X0 instead.

Quote from simos blog post: https://blog.simos.info/running-x11-software-in-lxd-containers/
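As a quick way to check this mapping, here is a small sketch (not from the original posts; `display_to_socket` is a made-up helper for illustration) that derives the socket path from a DISPLAY value:

```shell
# Derive the host X socket path from a DISPLAY value.
# display_to_socket is a made-up helper name for illustration.
display_to_socket() {
    d="${1#:}"         # drop the leading colon: ":1" -> "1"
    d="${d%%.*}"       # drop any screen suffix: "1.0" -> "1"
    echo "/tmp/.X11-unix/X${d}"
}

display_to_socket ":1"   # prints /tmp/.X11-unix/X1
```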

  2. You are using an outdated method from simos with disk devices; he now recommends using proxy devices.
    (Personal note: I still use disk devices, because I could not get proxy devices to work, but that should not keep you from trying.)
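For reference, a proxy device for the X socket could look roughly like this sketch (container name, device name and the abstract-socket paths are assumptions; adjust X0 to match your host’s $DISPLAY):

```shell
# Map the host's abstract X socket into the container via an LXD proxy device.
# "mycontainer" and the device name "X0" are example values.
lxc config device add mycontainer X0 proxy \
    connect=unix:@/tmp/.X11-unix/X0 \
    listen=unix:@/tmp/.X11-unix/X0 \
    bind=container
```

This requires a running LXD daemon, so treat it as a config fragment rather than a copy-paste recipe.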

Take a look at simos blog: https://blog.simos.info
Maybe you find something useful.

Edit: Found a blog post from simos with a comparison of methods:

The two basic scenarios are:

  • Scenario 1 is to use the host’s X server (that’s what I do, for example).
    This seems to be what you tried to do and it is the method linked here.

  • Scenario 2 is to create a separate X server (inside the container or on the host).

@toby63 thank you so much for putting this information together - lots of good stuff there! Will need to play around with this a bit. What I already figured out: it appears we need to be root to execute gdm - so the uid mapping probably works a bit differently.

We’ll see :wink:

What kind of Desktop graphics/video do you require?

There are several great approaches for Remote Desktop use of LXD containers to host desktops, including x2go (www.x2go.org), X redirection via ssh, my project CIAB (Cloud In A Box) Remote Desktop system (https://github.com/bmullan/ciab-remote-desktop ) and others.
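The plain SSH variant, for example, needs nothing but sshd running in the container (the user and address below are made-up examples):

```shell
# X redirection via SSH: -X forwards the remote client's X11 traffic
# back to the local session; the IP is a placeholder for the container.
ssh -X ubuntu@10.0.3.10 xterm
```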


I’d love an LXD-based desktop solution which runs locally, is as performant as possible and is easily reproducible.

The goal is to integrate it into my Tins app to allow users to set up a desktop environment in a container on their local host with just a few clicks. It’s for developers and curious people alike - who want to tinker with stuff and get near-native performance (incl. GPU acceleration), but don’t want to break their stable everyday system:

EDIT: Added the following clarifications:

  • … want to tinker on their local host
  • … near native performance (incl. GPU acceleration)

After digging a bit deeper into this, I found this StackExchange answer which clarifies the X terminology and helped me wrap my head around the ideal solution I am seeking:

The ideal solution would be to render the containerized desktop environment in a window, on top of the currently running desktop environment (within the current X session).

So I guess Xephyr or xpra are the way to go - sadly @simos has not written a tutorial about this yet, so it remains unclear whether this solution provides GPU acceleration.
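For a first impression of the Xephyr route, a minimal sketch (display number and geometry are example values, and an X session must already be running):

```shell
# Start a nested X server that shows up as a window in the current session,
# then point any X client at it.
Xephyr :2 -screen 1280x800 -resizeable &
DISPLAY=:2 xterm &   # renders inside the Xephyr window
```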

Very cool projects :+1:.

@marbetschar
For xpra you might take a look at existing LXC or Docker material.
I once searched for that too (though I never really tried it), and there are some howtos and repos with scripts and profiles; maybe you can adapt them for use with LXD.

so it remains unclear if this solution provides GPU acceleration.

Update: Interesting question, I searched a bit:

The feature list of xpra (website) includes:

There is a difference between client rendering and using OpenGL for applications inside xpra: http://xpra.org/trac/wiki/Usage/OpenGL

by using VirtualGL, which delegates OpenGL acceleration to a real GPU
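Combining the two might look like this sketch (the display number and the application are examples; assumes xpra and VirtualGL are installed):

```shell
# Start an xpra session whose child application runs under VirtualGL,
# then attach to view it; ":100" is an example display number.
xpra start :100 --start-child="vglrun glxgears"
xpra attach :100
```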

I got some sort of working solution now - unfortunately it seems to be highly unstable (see below for further information on this):

How it (sort of) works as of now:

  1. After the user has made their choice, Tins creates a new container based on the selected criteria and applies the generic X11 profile and the specific desktop environment profile (e.g. for X11 this, and for GNOME this one)
  2. During container creation, LXD downloads the latest available, matching image from images.linuxcontainers.org (e.g. images:ubuntu/bionic/cloud)
  3. The desktop-specific profile provides a cloud-init script, which gets executed upon container start and downloads, installs and configures the desktop environment (e.g. for GNOME this one).
  4. If the user wants to open a desktop-enabled container, Xephyr is started on an available $DISPLAY number
  5. In the container, systemctl start display-manager is executed and LightDM sends its output to Xephyr
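In shell terms, the steps above amount to roughly this sketch (container and profile names follow the description above but are assumptions; the Xephyr display number is an example):

```shell
# Create the container with the generic X11 and desktop-specific profiles.
lxc launch images:ubuntu/bionic/cloud desktop-demo \
    --profile default --profile x11 --profile gnome
# ... wait for cloud-init to download and configure the desktop ...
# Start the nested X server, then the display manager in the container.
Xephyr :2 -screen 1280x800 &
lxc exec desktop-demo -- systemctl start display-manager
```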

… overall it seems we have to be rather lucky for everything to go well. Which leads me to the problems I encounter:

Problems with this approach:

  1. The images from images.linuxcontainers.org seem to be highly unstable. Sometimes networking does not work, at other times apt is broken, etc.
  2. cloud-init takes very long to complete the initial setup, because it needs to download all desktop packages
  3. cloud-init often fails to install the needed packages for various reasons (e.g. failure to unpack a package, connectivity issues, …)
  4. I was not able to find a good way to monitor cloud-init’s progress and exit status - so as of now it is impossible to tell when it has finished, which leads to a bad user experience
  5. Xephyr runs completely independently of Tins, so it’s just fire-and-forget - hoping everything goes well

Any help to mitigate these issues and/or improve the overall process would be highly appreciated

I’m also wondering if it is possible to build and publish out-of-the-box, desktop-environment-enabled images to images.linuxcontainers.org (or some other public mirror)? This would save a ton of bandwidth, and would probably dramatically improve the success rate of setting up such a container.

In case you want to experiment yourself

Feel free to clone the project from GitHub; as of now, Ubuntu Focal and Ubuntu Bionic are the only two distro flavours enriched with a desktop environment.


Regarding your problems:

  1. and 3. are odd; maybe some of the devs will respond to this :thinking:

  2. You could consider providing your own images, or let the user build images for that, using: https://github.com/lxc/distrobuilder/

  4. I was not able to find a good way to monitor cloud-init’s progress and exit status - so as of now it is impossible to say when it is finished which leads to a bad user experience

There are some solutions in cloud-init:
https://cloudinit.readthedocs.io/en/latest/topics/examples.html#reboot-poweroff-when-finished

https://cloudinit.readthedocs.io/en/latest/topics/examples.html#call-a-url-when-finished

https://cloudinit.readthedocs.io/en/latest/topics/examples.html#alter-the-completion-message

cloud-init status

Though that’s not perfect. I guess I will write a feature request for this.
Update: I opened an issue report for this: https://github.com/lxc/lxd/issues/7364
You might want to add info, suggestions or changes to it :slightly_smiling_face: .
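In the meantime, polling can be scripted around whatever status command is available. `wait_for_status` below is a made-up helper, sketched so it can be tested with a stub; with cloud-init you would pass it `cloud-init status`:

```shell
# Made-up helper: poll a status command until its output contains
# "done" or "error". Usage: wait_for_status cloud-init status
wait_for_status() {
    while true; do
        out="$("$@" 2>/dev/null)"
        case "$out" in
            *done*)  echo "finished"; return 0 ;;
            *error*) echo "failed";   return 1 ;;
        esac
        sleep 1
    done
}
```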


Thank you very much for pointing me towards cloud-init status; it seems cloud-init status --wait does the trick :+1:

Don’t go for GNOME desktops; they require more resources. Go for LXDE, MATE or XFCE.

Use xrdp and then add an iptables rule to forward the container’s xrdp port 3389 to a host port.

  • This only works in ubuntu:18.04, as the xrdp PulseAudio module has no package for Ubuntu 20.04 LTS at the time of writing.
  • VNC will not work, only RDP.
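Instead of raw iptables, an LXD proxy device can do the same port mapping (container and device names below are examples):

```shell
# Forward host port 3389 to the container's xrdp port 3389.
lxc config device add rdp-container rdp3389 proxy \
    listen=tcp:0.0.0.0:3389 connect=tcp:127.0.0.1:3389
```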

@Manishfoodtechs the goal is to provide a general-purpose solution which allows running any combination of desktop environment and distribution in a container - whether a particular desktop is too resource-heavy or not is up to the end user to decide.

Thanks for pointing me towards Xrdp, I’ll check it out! Although I’m not sure what it implies in terms of performance: Does the container need to run its own X server for this to work? If so, is GPU acceleration available to it? Why/when should one choose Xrdp over Xephyr and vice versa?

Regarding the Xephyr topic:

Sadly I am no expert at this (so I might be getting things wrong), but I just found this:

I don’t know if that is still the current status
(it looks like there have been some changes:
https://phoronix.com/scan.php?page=news_item&px=Xephyr-7-GPU-Multi-Seat
https://www.phoronix.com/scan.php?page=news_item&px=MTYyMjc )
but if it is, Xephyr might not be so powerful (regarding 2D and 3D rendering etc.).
Unfortunately my quick search could not turn up better information on this.
But it seems that at least the Glamor changes should be merged by now (so better 2D acceleration is likely included).

Interesting comparison (by mviereck; I read that you already discovered his repo):

Update:
I see you already got some answers:

@marbetschar Thanks for looking into all this.

Another example of an applet to launch systems can be found in multipass (available as a snap package).
multipass creates VMs and is somewhat similar to LXD VMs. They have created an applet that launches terminal windows into the VMs. There is no GUI support to run applications from within the VM, just a nice applet to launch a terminal window in the VM. It is written in C++.

Yet another option is to get GNOME Boxes to support LXD containers. This will be more involved though, since GNOME Boxes currently interfaces with the VM over libvirt only (last checked over a year ago).

Linux offers too many ways to run GUI programs on some other computer/VM/container and get the output on your local X11 session. You can just pick the easiest for you for now, just to get something working.

I have tried to use tins in an LXD container that supports nested containers.

$ lxc launch ubuntu:18.04 tins --profile default --profile x11 -c security.nesting=true
Creating tins
Starting tins
$ 

Then, got a shell in there.

$ lxc exec tins -- sudo --user ubuntu --login
ubuntu@tins:~$ git clone https://github.com/marbetschar/tins.git
Cloning into 'tins'...
remote: Enumerating objects: 256, done.
remote: Counting objects: 100% (256/256), done.
remote: Compressing objects: 100% (166/166), done.
remote: Total 849 (delta 165), reused 176 (delta 88), pack-reused 593
Receiving objects: 100% (849/849), 820.42 KiB | 2.08 MiB/s, done.
Resolving deltas: 100% (531/531), done.
ubuntu@tins:~$ sudo lxd init
ubuntu@tins:~$ sudo apt install build-essential valac libjson-glib-dev libgtk-3-dev libgranite-dev desktop-file-utils
ubuntu@tins:~$ cd tins/
ubuntu@tins:/home/ubuntu/tins$ ./install.sh 

The above instructions have two issues:

  1. The ubuntu user may not be a member of the lxd group, so add them manually.
  2. The PulseAudio socket location is hard-coded, so the container will fail to start. The code should look for PULSE_SERVER, if it exists, and use that location instead of the default.
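The suggested fix could look like this sketch (`pulse_socket` is a hypothetical helper, not code from Tins):

```shell
# Hypothetical helper: prefer $PULSE_SERVER when set, otherwise fall
# back to PulseAudio's default per-user socket path.
pulse_socket() {
    if [ -n "$PULSE_SERVER" ]; then
        echo "${PULSE_SERVER#unix:}"   # "unix:/path" -> "/path"
    else
        echo "/run/user/$(id -u)/pulse/native"
    fi
}
```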

By having a reproducible environment, it should be easier to develop.
It is great work and I love reading this thread.

@simos thanks for your kind words! For now I’m primarily focused on getting the graphical environment up and running, and so far I have had success starting and configuring weston and Xwayland (audio support should follow as soon as the GUI works).
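For reference, a windowed weston session with Xwayland support can be started roughly like this (availability of the flag depends on the weston version; the geometry is an example):

```shell
# Run weston as a window inside the current session, with Xwayland enabled
# so plain X11 clients can connect too.
weston --xwayland --width=1280 --height=800 &
```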

I followed the instructions from your blog, and with Xephyr it worked - but now something seems to be wrong. Any chance you can point me in the right direction?

You’ll find the complete details with config outputs etc. in this GitHub issue: