VirtualGL in a container

Thanks for finding the typo. I just fixed it.

The short answer is that, currently, proxy devices do not work across LXD installations.
See the recent post at Forward port to be accessible from remote container

That is, if you

lxc launch ubuntu:18.04 myremotelxd:mytest --profile default --profile x11
lxc exec myremotelxd:mytest -- sudo --user ubuntu --login
xclock

then the xclock application will (try to) appear on the system running myremotelxd, not on your local computer. The reason is that the lxc client, when working with a remote LXD server, performs some operations between your local machine and the remote host (such as lxc file push), and others between the remote host and the remote container (such as lxc config device).
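For context, the proxy device in an X11 profile typically maps the host's X11 socket into the container. A sketch of such a device follows; the device name X0, the socket paths, and the uid/gid values are assumptions based on a common X11-forwarding profile, not the exact profile from this thread:

```yaml
# Hypothetical excerpt of an "x11" LXD profile.
devices:
  X0:
    type: proxy
    bind: container
    # "listen" is created inside the container; "connect" is resolved
    # on the host running LXD. With a remote LXD server, that host is
    # the remote machine, not your local desktop -- which is why the
    # window tries to appear on the remote system.
    listen: unix:@/tmp/.X11-unix/X0
    connect: unix:@/tmp/.X11-unix/X0
    security.uid: "1000"
    security.gid: "1000"
```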

If you look into the container logs on the remote server, you will likely see the proxy device failing to get created. This should be on myremotelxd, at /var/snap/lxd/common/lxd/logs/mytest/.

So, how do you deal with this issue? The communication between the X server and X11 applications is uncompressed and needs a fast link, so that the applications do not appear to stutter when you use them. For this reason there are alternative protocols, such as VNC, that compress this communication quite efficiently.

However, what can you do if you really want to do CUDA stuff on a remote computer that has the GPU?

  1. You can create a new LXD profile (x11network), one that does not have the X0 proxy device.
  2. Use SSH and socat, for example, to pass your local /tmp/.X11-unix/X0 socket to the container.
  3. Test first that xclock works.
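Step 1 above can be sketched with the lxc profile commands. This assumes your existing X11 profile is named x11 and its proxy device is named X0, as in the common setup; adjust both names to match your own profile:

```shell
# Copy the existing x11 profile and strip the proxy device from the copy.
# "x11" and "X0" are assumed names; verify with: lxc profile show x11
lxc profile copy x11 x11network
lxc profile device remove x11network X0

# Launch the container on the remote server with the new profile.
lxc launch ubuntu:18.04 myremotelxd:mytest --profile default --profile x11network
```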

Hint: Here is a command line that works. It is quite slow, and you would need to figure out how to optimize it.

socat exec:'ssh -p 2222 ubuntu@remotelxd.example.com socat unix-l\:/tmp/.X11-unix/X1 -' unix:/tmp/.X11-unix/X0

X0 is the socket of your local desktop. X1 is the new socket on the remote side.
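To test from inside the container, point DISPLAY at the new socket. An X11 socket named Xn in /tmp/.X11-unix corresponds to display :n, so assuming the forwarded socket is visible as /tmp/.X11-unix/X1 inside the container, that is display :1:

```shell
# Inside the remote container; assumes the forwarded socket is visible
# there as /tmp/.X11-unix/X1, i.e. X display :1.
export DISPLAY=:1
xclock
```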