I’d like my Linux container to have access to the host’s NVIDIA GPU. It’s a simple use case for my home lab. Searching around turned up a few tutorials and references, but none of them strike me as on point. For instance, Graber’s explanation from 2017 would fit the bill, but it is surely outdated.
Ubuntu’s site also has this tutorial, but it is meant for Ubuntu 18.04.
The setup hasn’t changed since that Ubuntu 18.04 tutorial, and the steps above should work just fine so long as your host system has the NVIDIA drivers and tools installed.
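For reference, the steps from that tutorial boil down to something like this (the container name c1 is just an example):

```shell
# Create a container, enable the NVIDIA runtime, and pass the host GPU through
lxc launch ubuntu:18.04 c1
lxc config set c1 nvidia.runtime true
lxc config device add c1 gpu gpu
```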
Ah, thank you so much. That is pretty straightforward.
Is it literally gpu gpu, or do I need to hunt down how my computer references the GPU?
Also… is it possible to add the GPU functionality to the container after creation? My container, cn1, has a full desktop installed, and it is networked to a Docker container running Apache Guacamole so I can access it remotely. So, re-creating the container takes a bit of legwork. If I can add the GPU functionality to it with an lxc one-liner, I’d rather do that.
NB: When I mentioned “uncomplicated” in my first post, I had in mind the idea that I could simply pass the entire GPU through to the container. Obviously, the rest of the setup is a bit more complex.
The ‘gpu gpu’ means: add a device called gpu that’s of type gpu. When no further property is set, it tells LXD to just pass in whatever GPUs the host has.
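And yes, you can add it to an existing container after creation. A sketch, using the cn1 container mentioned above (a restart is needed for nvidia.runtime to take effect):

```shell
# Enable the NVIDIA runtime and pass the host GPU into the existing container
lxc config set cn1 nvidia.runtime true
lxc config device add cn1 gpu gpu
lxc restart cn1
```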
To simplify launch you could do:
lxc profile create nvidia
lxc profile set nvidia nvidia.runtime true
lxc profile device add nvidia gpu gpu
Then you can just do ‘lxc launch images:ubuntu/20.04 u1 -p default -p nvidia’
That will create a container with both the default and nvidia profiles applied.
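Once the container is up, you can check that the GPU is actually visible from inside it. Since nvidia.runtime passes the host’s NVIDIA userspace tools into the container, nvidia-smi should be available there:

```shell
# Verify the GPU shows up inside the container
lxc exec u1 -- nvidia-smi
```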