AMD GPU passthrough to containers, obs-studio

Hi, does anyone have any experience using obs-studio from within a container? I installed the latest AMD driver on the host (for a 5700U in my case). I create a container and install xrdp and GNOME, but I cannot seem to get either Xorg/xrdp or obs-studio to use my amdgpu.

I guess a better question is: does anyone know of instructions on how to properly pass an amdgpu to containers? The documentation gave me the idea that, since it's a container, it has access without further work, but I am not sure. I also found a command to add a device as a GPU: incus config device add TheWatcher gpu gpu. I found several posts about NVIDIA but nothing about AMD.
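For reference, this is roughly how I understand the device-add command: it takes the container name, a device name of your choosing, and the device type. A sketch (the device name gpu0 is my choice, TheWatcher is the container from above):

```shell
# Add the host GPU(s) to the container as a device of type "gpu".
# "gpu0" is an arbitrary device name; "gpu" is the device type.
incus config device add TheWatcher gpu0 gpu

# Verify the device ended up in the container's configuration.
incus config device show TheWatcher
```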

Currently, this is the error I am seeing:

libEGL warning: DRI2: failed to authenticate
info: Loading up OpenGL on adapter Mesa/X.org llvmpipe (LLVM 15.0.6, 256 bits)
info: OpenGL loaded successfully, version 4.5 (Core Profile) Mesa 22.3.6, shading language 4.50

and
info: [pipewire] No captures available
info: FFmpeg VAAPI H264 encoding not supported

Further investigation led me to install vainfo on the host, which shows:

vainfo: Driver version: Mesa Gallium driver 22.3.6 for AMD Radeon Graphics (renoir, LLVM 15.0.6, DRM 3.49, 6.1.0-23-amd64)
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVide

But in the container:

luigi@TheWatcher:~$ vainfo
libva info: VA-API version 1.17.0
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
vaInitialize failed with error code -1 (unknown libva error),exit

I tried something simple I found online: I ran chmod -R 777 /dev/dri inside the container, started OBS, and now I get info: FFmpeg VAAPI H264 encoding supported

Does anyone know of a better or simpler way than changing permissions? Also, once the container restarts, it no longer works.

You can set the uid/gid/mode properties on the GPU device to control that.
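For example, assuming gid 44 is the video group inside the container (the Debian default), something along these lines should make the device nodes accessible and survive restarts; the gid=44 and mode values here are assumptions to adapt to your setup:

```shell
# Pass the GPU through with its device nodes owned by gid 44 ("video")
# and group-readable/writable; unlike chmod inside the container,
# these settings persist across container restarts.
incus config device add TheWatcher gpu0 gpu gid=44 mode=0660

# Properties can also be changed on an already-existing device:
incus config device set TheWatcher gpu0 gid=44
```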

Thanks Stephan, as always. What is the difference between using

devices:
  dri_card0:
    gid: "44"
    source: /dev/dri/card0
    type: unix-char
  dri_renderD128:
    gid: "44"
    source: /dev/dri/renderD128
    type: unix-char

or

devices:
  gpu0:
    gid: "44"
    type: gpu

Not much. The gpu syntax basically saves you from having to figure out the right devices on a multi-GPU system. It can also do slightly more than just card+render, especially on NVIDIA, where it picks up a number of other related devices.
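On a multi-GPU host you can also pin the gpu device to a specific card with selector properties; a sketch (the id and pci values below are placeholders for whatever your host actually has):

```shell
# Select a specific GPU by its DRM card id...
incus config device add TheWatcher gpu0 gpu id=0 gid=44

# ...or by PCI address (placeholder address shown):
# incus config device add TheWatcher gpu0 gpu pci=0000:05:00.0 gid=44
```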

I am seeing my mini PC (5700U, 8 cores/16 threads) reach temperatures of 90 °C just from OBS. I investigated for errors and found a bunch from rtkit, PulseAudio, and some from xrdp. Does anyone have any experience with this? I will try running on the host to test whether it is caused by the process being inside the Incus container or by the mini PC simply not being able to handle the workload.

The container itself will have minimal overhead.

It could be the mini PC not being able to handle the workload. But it could also be the process inside the container still not running properly on the GPU and using the CPU when it shouldn't.

When you mention related devices, it reminds me of the ROCm and PyTorch tutorial. It requires passing through /dev/kfd, which is not passed by the gpu syntax. Since /dev/kfd seems specific to AMD GPUs, shouldn't it also be passed through by the gpu syntax?
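In the meantime, /dev/kfd can be added alongside the gpu device as a plain unix-char device; a sketch (gid=44 assumes the container's video group, though ROCm setups often expect the render group instead, so check which group your tutorial uses):

```shell
# ROCm needs the compute interface /dev/kfd in addition to /dev/dri.
incus config device add TheWatcher kfd unix-char source=/dev/kfd gid=44
```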

I'll give it a try and post what I find. Thanks for reviewing.

You were absolutely correct: I was able to ensure it used my GPU and got it working. I am still struggling with getting the X socket working; it is listed with owner nobody and group nobody, but I'm going through all the tutorials and will figure it out. Thanks!
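One approach from the GUI-in-container tutorials is a proxy device between the host and container X sockets, which avoids the nobody:nobody ownership problem; a hedged sketch (the display number :0 on both sides and uid/gid 1000 for the container's first user are assumptions to adapt):

```shell
# Proxy the host's X11 abstract socket into the container as display :0.
# security.uid/security.gid make the socket appear owned by the
# container's first user rather than nobody:nobody.
incus config device add TheWatcher X0 proxy \
    bind=container \
    listen=unix:@/tmp/.X11-unix/X0 \
    connect=unix:@/tmp/.X11-unix/X0 \
    security.uid=1000 security.gid=1000
```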

Maybe this topic will help you with X11 socket. I updated it moments ago.
