Frigate GPU passthrough to Docker inside an Incus container

Hi all,

I want to do GPU passthrough inside Docker. I have set up an Incus container, and inside the container I installed Docker with Frigate. I did a GPU passthrough from my host to the Incus container. This works well, and if I run nvidia-smi inside the container I can see my video card.
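For reference, the host-to-container passthrough looks roughly like this (a sketch; the instance name frigate-ct and device name gpu0 are placeholders, not my exact commands):

```
# Pass the host GPU into the Incus system container
# (instance "frigate-ct" and device "gpu0" are placeholder names)
incus config device add frigate-ct gpu0 gpu
incus config set frigate-ct nvidia.runtime=true   # expose the host's NVIDIA userspace libraries
incus config set frigate-ct security.nesting=true # needed to run Docker inside the container
```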

However, I can't seem to get the GPU passed on to Docker and Frigate.

What am I missing here?

Please provide more info on the Docker container. Is that container designed to work with GPU passthrough? Which image is it?

Hi,

Yes, it is. I installed Docker (my Incus container is Debian 12).

Then I followed the installation instructions from Frigate.

Frigate should support hardware acceleration, but inside the Docker container (Frigate) it is not working when I run an nvidia-smi command.
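For context, the GPU part of the compose file follows Frigate's documented NVIDIA example and looks roughly like this (a sketch, not necessarily my exact file):

```
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```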

This is the output:

```
✔ Container frigate  Recreated  3.0s
Attaching to frigate
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: mount error: failed to add device rules: unable to find any existing device filters attached to the cgroup: bpf_prog_query(BPF_CGROUP_DEVICE) failed: operation not permitted: unknown
```

Which NVIDIA packages did you install in your Incus container?
Docker usually requires the NVIDIA Container Toolkit to allow passthrough from the host to Docker, and the same applies in this case. You have the passthrough working into the Incus container, but you also need to pass the GPU through from the container to Docker. Make sure the version you install in the container matches the host NVIDIA version, otherwise you will run into a lot of strange issues.
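In practice that means installing the toolkit inside the Incus container and pointing Docker at the NVIDIA runtime, roughly like this (a sketch for Debian 12; it assumes NVIDIA's apt repository is already configured):

```
# Inside the Debian 12 Incus container
apt-get install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker   # registers the nvidia runtime in /etc/docker/daemon.json
systemctl restart docker
```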

An alternative approach would be to install the latest Incus monthly stable release and make use of the native Incus OCI support. In that case you only need to enable GPU passthrough to your OCI container and it will work flawlessly. There are quite a few people here using it this way.

Of course, if you need to stick with the LTS release for any reason, running Docker inside a system container is the only option.

Hi,

Thanks for your reply. That sounds interesting to try out. I upgraded to the latest Incus version. What would be the steps to install Frigate with native Incus OCI support?

Incus OCI support is still new and the documentation only contains some basic instructions. However, there are quite a few posts here in the forum and some blog posts which explain the basic mechanics of how to deploy an OCI container on Incus.

There is also a community project around supporting docker-compose to deploy a full stack.

In general, it requires registering the correct remote and launching the OCI container with the correct options. Finding the correct options requires translating the Docker instructions into Incus ones.
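As a rough sketch of what that translation can look like (the image path, instance name, device names, and host paths below are assumptions you will need to adapt, not tested instructions):

```
# Add an OCI remote and launch Frigate as an application container
incus remote add ghcr https://ghcr.io --protocol=oci
incus launch ghcr:blakeblackshear/frigate:stable frigate

# Pass the GPU and a persistent config directory to the OCI container
incus config device add frigate gpu0 gpu
incus config device add frigate config disk source=/srv/frigate/config path=/config
```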

Hope this helps you get started.

Final question:

Is it possible to share a single GPU card with two Incus containers?

Shouldn’t be an issue as long as there are enough resources available…
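For what it's worth, sharing the same physical GPU between two containers is just a matter of adding a gpu device to each (a sketch; the container names are examples):

```
# Attach the same GPU to two Incus containers
incus config device add container1 gpu0 gpu
incus config device add container2 gpu0 gpu
```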

OK, perfect. Can you help me get started? How would I install Frigate as an OCI image inside Incus and pass through my video card?