Hello,
Has anybody gotten Plex transcoding working in an LXD container? There is a guide available with instructions for plain LXC.
I have tried setting it up using LXD, but Plex fails to recognize the GPU. Here is my LXD config:
$ lxc config show plex
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Archlinux current amd64 (20190406_04:18)
  image.os: Archlinux
  image.release: current
  image.serial: "20190406_04:18"
  nvidia.runtime: "true"
  volatile.base_image: de43692c92c19e78d99b1168955f629129529599e00adca4256b858473a330e9
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    nictype: bridged
    parent: vlan300br
    type: nic
  gpu:
    gid: "986"
    type: gpu
  sharemedia:
    path: /mnt/media
    source: /mnt/media
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
nvidia-smi works inside the container:
# nvidia-smi
Sun Apr  7 18:44:25 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 1030     Off  | 00000000:82:00.0 Off |                  N/A |
| 40%   55C    P0    N/A /  30W |      0MiB /  2001MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
The /dev/dri directory shows both the onboard card and the NVIDIA card:
[root@plex ~]# ls -lh /dev/dri/
total 2.0K
crw-rw---- 1 root video 226, 0 Apr 7 10:12 card0
crw-rw---- 1 root video 226, 1 Apr 7 10:12 card1
crw-rw---- 1 root video 226, 0 Apr 7 10:12 controlD64
crw-rw-rw- 1 root video 226, 128 Apr 7 10:12 renderD128
[root@plex ~]# ls -lh /dev/nv*
crw-rw-rw- 1 nobody nobody 235, 0 Apr 7 10:06 /dev/nvidia-uvm
crw-rw-rw- 1 nobody nobody 235, 1 Apr 7 10:06 /dev/nvidia-uvm-tools
crw-rw-rw- 1 root video 195, 0 Apr 7 10:12 /dev/nvidia0
crw-rw-rw- 1 nobody nobody 195, 255 Apr 6 21:11 /dev/nvidiactl
[root@plex ~]#
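To double-check which node belongs to which GPU, looking at the driver each DRM card is bound to should work (assuming /sys is readable from inside the container; the NVIDIA card's symlink should point at the nvidia driver):

[root@plex ~]# readlink /sys/class/drm/card0/device/driver
[root@plex ~]# readlink /sys/class/drm/card1/device/driver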
I believe Plex is trying to use card0, which is the onboard card, so it obviously fails. Is it possible to disable the export of card0 into the container and only allow card1?
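I was hoping for something like restricting the gpu device to a single card, e.g. by PCI address (0000:82:00.0 according to the nvidia-smi output above) or by vendor ID, but I am not sure these properties behave the way I expect:

$ lxc config device set plex gpu pci 0000:82:00.0
$ # or, matching by vendor instead (10de is NVIDIA):
$ lxc config device set plex gpu vendorid 10de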