VirtualGL in a container

Hi,

I have set up the container following your article. There is one difference between your setup and mine: my host is a headless server running lightdm, with the X configuration generated using,

nvidia-xconfig -a --allow-empty-initial-configuration --use-display-device=None --virtual=1920x1200
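
To rule out the X server itself, this is roughly how I would first check the headless display on the host (a sketch; it assumes xdpyinfo from x11-utils is installed and that the user running it is authorized against the lightdm session's display):

```shell
# Query display :0 on the host; this only succeeds if the X server is
# actually running and the current user is allowed to connect to it.
xdpyinfo -display :0 | head -n 5
```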

I have disabled VirtualGL on the host for the time being to match your setup.

I SSHed into the host and then ran,

lxc exec mycontainer -- sudo --user ubuntu --login

ubuntu@mycontainer:~$ glxinfo -B
Error: unable to open display :0

$ pactl info
xcb_connection_has_error() returned true
Server String: /tmp/pulse-PKdhtXMmr18n/native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 0
Tile Size: 65472
User Name: ubuntu
Host Name: mycontainer
Server Name: pulseaudio
Server Version: 11.1
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: auto_null
Default Source: auto_null.monitor
Cookie: db60:445c

I don't see the X socket being created in the container,

$ lxc exec mycontainer -- ls -la /home/ubuntu          
total 53
drwxr-xr-x 4 ubuntu ubuntu   10 Dec  9 20:17 .
drwxr-xr-x 3 root   root      3 Dec  9 20:14 ..
-rw------- 1 ubuntu ubuntu  147 Dec  9 20:24 .bash_history
-rw-r--r-- 1 ubuntu ubuntu  220 Apr  5  2018 .bash_logout
-rw-r--r-- 1 ubuntu ubuntu 3771 Apr  5  2018 .bashrc
drwx------ 3 ubuntu ubuntu    3 Dec  9 20:17 .config
-rw-r--r-- 1 ubuntu ubuntu  807 Apr  5  2018 .profile
drwx------ 2 ubuntu ubuntu    3 Dec  9 20:14 .ssh
-rw-r--r-- 1 ubuntu ubuntu    0 Dec  9 20:16 .sudo_as_admin_successful
srwxrwxrwx 1 ubuntu ubuntu    0 Dec  9 20:16 pulse-native

$ lxc exec mycontainer -- ls -la /tmp/.X11-unix/
total 49
drwxrwxrwt  2 root root  2 Dec  9 20:16 .
drwxrwxrwt 10 root root 10 Dec  9 20:24 ..
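
One thing I noticed while digging: in the x11 profile, the X0 proxy listens on `unix:@/tmp/.X11-unix/X0`, and the leading `@` denotes an abstract unix socket, which never appears as a file. So, if I understand proxy devices correctly, an empty `/tmp/.X11-unix/` inside the container might be expected, and the socket would instead be listed in `/proc/net/unix` (a sketch of that check):

```shell
# Abstract sockets have no filesystem entry; /proc/net/unix lists them
# with a leading @, so look for the X11 socket there instead of /tmp.
lxc exec mycontainer -- sh -c 'grep X11-unix /proc/net/unix || echo "no X11 socket listed"'
```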

$ lxc exec mycontainer -- nvidia-smi -L         
GPU 0: GeForce GT 1030 (UUID: GPU-1ee28638-8821-c66d-f2a5-e92da7f7d91d)

The container config looks like this,

$ lxc config show mycontainer
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20191205)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20191205"
  image.type: squashfs
  image.version: "18.04"
  volatile.base_image: f75468c572cc50eca7f76391182e6fdaf58431f84c3d35a2c92e83814e701698
  volatile.eth0.host_name: veth40e56cc5
  volatile.eth0.hwaddr: 00:16:3e:41:43:b7
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
- x11
stateful: false
description: ""

$ lxc profile show x11       
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    gid: "1000"
    listen: unix:/home/ubuntu/pulse-native
    mode: "0777"
    security.gid: "100"
    security.uid: "1001"
    type: proxy
    uid: "1000"
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X0
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "100"
    security.uid: "1001"
    type: proxy
  mygpu:
    productid: 1d01
    type: gpu
    vendorid: 10de
name: x11
used_by:
- /1.0/containers/mycontainer

The host user I am trying to map has a uid of 1001 and a gid of 100, so I changed your x11 profile to match those. On the host I can see that X0 has been created,

$ ls -la /tmp/.X11-unix/
total 0
drwxrwxrwt  2 root root  60 Dec  9 20:11 .
drwxrwxrwt 10 root root 260 Dec  9 20:11 ..
srwxrwxrwx  1 root root   0 Dec  9 20:11 X0
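
For completeness, the host-side X listeners can also be listed including any abstract ones (a sketch, assuming `ss` from iproute2 is available; abstract sockets show up with a leading `@`):

```shell
# Show every unix-domain listener whose name mentions X11-unix,
# covering both the filesystem socket and any abstract counterpart.
ss -lx | grep -F X11-unix
```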

Why don't I get X0 in the container?