Nvidia drivers not enabled after installing gpu-support

I’m trying to run an OCI container with nvidia.runtime enabled, but setting that config key to true leaves the container unable to start.
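For reference, this is roughly what I’m doing (the container name here is just the one from my setup; nvidia.runtime is the standard Incus container config key):

```shell
# Enable NVIDIA runtime passthrough for an existing container,
# then try to start it (this is the step that fails for me).
incus config set jellyfin nvidia.runtime=true
incus start jellyfin
```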

After some research, I found and installed the gpu-support application. According to a comment in the GitHub issue that inspired gpu-support, the application should detect and load the NVIDIA driver automatically, but that doesn’t appear to be happening in my case. After installing gpu-support and rebooting the machine, the GPU still loads the nouveau driver.
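For what it’s worth, this is how I’m checking which driver is bound to the GPU (PCI address taken from the `incus info --resources` output below; adjust for your own machine):

```shell
# Print the kernel driver currently bound to the GPU's PCI device;
# on my machine this shows nouveau rather than an NVIDIA driver.
basename "$(readlink /sys/bus/pci/devices/0000:01:00.0/driver)"

# Alternatively, via lspci:
lspci -nnk -s 01:00.0 | grep "Kernel driver in use"
```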

Here is some command output that I hope is relevant to troubleshooting:

incus admin os debug log -b0 | grep -i firmware

[2026/03/25 23:27:30 EDT] kernel: faux_driver regulatory: Direct firmware load for regulatory.db failed with error -2
[2026/03/25 23:27:33 EDT] systemd: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-29.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-28.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-27.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-26.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-25.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-24.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-23.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-3168-22.ucode failed with error -2
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: no suitable firmware found!
[2026/03/25 23:27:34 EDT] kernel: iwlwifi 0000:03:00.0: check git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
[2026/03/25 23:27:35 EDT] kernel: bluetooth hci0: Direct firmware load for intel/ibt-hw-37.8.10-fw-22.50.19.14.f.bseq failed with error -2
[2026/03/25 23:27:35 EDT] kernel: Bluetooth: hci0: failed to open Intel firmware file: intel/ibt-hw-37.8.10-fw-22.50.19.14.f.bseq (-2)
[2026/03/25 23:27:35 EDT] kernel: bluetooth hci0: Direct firmware load for intel/ibt-hw-37.8.bseq failed with error -2
[2026/03/25 23:27:36 EDT] kernel: nouveau 0000:01:00.0: pmu: firmware unavailable
[2026/03/25 23:27:36 EDT] kernel: nouveau 0000:01:00.0: gr: firmware unavailable
[2026/03/25 23:27:36 EDT] kernel: nouveau 0000:01:00.0: sec2: firmware unavailable
[2026/03/25 23:27:36 EDT] systemd: Startup finished in 7.447s (firmware) + 7.239s (loader) + 774ms (kernel) + 3.510s (initrd) + 3.150s (userspace) = 22.122s.
[2026/03/25 23:27:46 EDT] kernel: nouveau 0000:01:00.0: pmu: firmware unavailable
[2026/03/25 23:27:46 EDT] kernel: nouveau 0000:01:00.0: gr: firmware unavailable
[2026/03/25 23:27:46 EDT] kernel: nouveau 0000:01:00.0: sec2: firmware unavailable

incus admin os application show gpu-support

config: {}
state:
  initialized: true
  version: "202603240012"

incus info --resources

System:
UUID: 0259a1a8-a1ab-0000-0000-000000000000
Vendor: OriginPC
Product: CHRONOS
Family: To Be Filled By O.E.M.
Version: To Be Filled By O.E.M.
SKU: To Be Filled By O.E.M.
Serial: M80-C9015001715
Type: physical
Chassis:
Vendor: OriginPC
Type: Desktop
Version: To Be Filled By O.E.M.
Serial: M80-C9015001715
Motherboard:
Vendor: ASRock
Product: Z390M-ITX/ac
Serial: M80-C9015001715
Firmware:
Vendor: American Megatrends Inc.
Version: P4.20
Date: 08/05/2019

Load:
Processes: 516
Average: 0.01 0.04 0.06

CPU:
Architecture: x86_64
Vendor: GenuineIntel
Name: Intel(R) Core™ i7-9700K CPU @ 3.60GHz
Caches:

  • Level 1 (type: Data): 32KiB
  • Level 1 (type: Instruction): 32KiB
  • Level 2 (type: Unified): 256KiB
  • Level 3 (type: Unified): 12MiB
    Cores:
  • Core 0
    Frequency: 800Mhz
    Threads:
  • 0 (id: 0, online: true, NUMA node: 0)
  • Core 1
    Frequency: 799Mhz
    Threads:
  • 0 (id: 1, online: true, NUMA node: 0)
  • Core 2
    Frequency: 800Mhz
    Threads:
  • 0 (id: 2, online: true, NUMA node: 0)
  • Core 3
    Frequency: 800Mhz
    Threads:
  • 0 (id: 3, online: true, NUMA node: 0)
  • Core 4
    Frequency: 800Mhz
    Threads:
  • 0 (id: 4, online: true, NUMA node: 0)
  • Core 5
    Frequency: 800Mhz
    Threads:
  • 0 (id: 5, online: true, NUMA node: 0)
  • Core 6
    Frequency: 799Mhz
    Threads:
  • 0 (id: 6, online: true, NUMA node: 0)
  • Core 7
    Frequency: 800Mhz
    Threads:
  • 0 (id: 7, online: true, NUMA node: 0)
    Frequency: 799Mhz (min: 800Mhz, max: 4900Mhz)

Memory:
Free: 27.71GiB
Used: 4.29GiB
Total: 32.00GiB

GPU:
NUMA node: 0
Vendor: NVIDIA Corporation (10de)
Product: TU106 [GeForce RTX 2060 SUPER] (1f47)
PCI address: 0000:01:00.0
Driver: nouveau (6.19.9-zabbly+)
DRM:
ID: 0
Card: card0 (226:0)
Control: controlD64 (226:0)
Render: renderD128 (226:128)

NICs:
Card 0:
NUMA node: 0
Vendor: Intel Corporation (8086)
Product: I211 Gigabit Network Connection (1539)
PCI address: 0000:02:00.0
Driver: igb (6.19.9-zabbly+)
Ports:

  • Port 0 (ethernet)
    ID: _pa8a15902aba1
    Address: a8:a1:59:02:ab:a1
    Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
    Supported ports: twisted pair
    Port type: twisted pair
    Transceiver type: internal
    Auto negotiation: true
    Link detected: false
    Card 1:
    NUMA node: 0
    Vendor: Intel Corporation (8086)
    Product: Ethernet Connection (7) I219-V (15bc)
    PCI address: 0000:00:1f.6
    Driver: e1000e (6.19.9-zabbly+)
    Ports:
  • Port 0 (ethernet)
    ID: _pa8a15902aba3
    Address: a8:a1:59:02:ab:a3
    Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
    Supported ports: twisted pair
    Port type: twisted pair
    Transceiver type: internal
    Auto negotiation: true
    Link detected: true
    Link speed: 1000Mbit/s (full duplex)
    Card 2:
    NUMA node: 0
    Vendor: Intel Corporation (8086)
    Product: Dual Band Wireless-AC 3168NGW [Stone Peak] (24fb)
    PCI address: 0000:03:00.0

Disks:
Disk 0:
NUMA node: 0
ID: nvme0n1
Device: 259:0
Model: Force MP510
Type: nvme
Size: 223.57GiB
WWN: nvme.1987-3230303738323336303030313238383634364642-466f726365204d50353130-00000001
Read-Only: false
Removable: false
Partitions:

  • Partition 1
    ID: nvme0n1p1
    Device: 259:1
    Read-Only: false
    Size: 2.00GiB
  • Partition 10
    ID: nvme0n1p10
    Device: 259:10
    Read-Only: false
    Size: 25.00GiB
  • Partition 11
    ID: nvme0n1p11
    Device: 259:11
    Read-Only: false
    Size: 190.27GiB
  • Partition 2
    ID: nvme0n1p2
    Device: 259:2
    Read-Only: false
    Size: 100.00MiB
  • Partition 3
    ID: nvme0n1p3
    Device: 259:3
    Read-Only: false
    Size: 16.00KiB
  • Partition 4
    ID: nvme0n1p4
    Device: 259:4
    Read-Only: false
    Size: 100.00MiB
  • Partition 5
    ID: nvme0n1p5
    Device: 259:5
    Read-Only: false
    Size: 1.00GiB
  • Partition 6
    ID: nvme0n1p6
    Device: 259:6
    Read-Only: false
    Size: 16.00KiB
  • Partition 7
    ID: nvme0n1p7
    Device: 259:7
    Read-Only: false
    Size: 100.00MiB
  • Partition 8
    ID: nvme0n1p8
    Device: 259:8
    Read-Only: false
    Size: 1.00GiB
  • Partition 9
    ID: nvme0n1p9
    Device: 259:9
    Read-Only: false
    Size: 4.00GiB
    Disk 1:
    NUMA node: 0
    ID: sda
    Device: 8:0
    Model: Expansion HDD
    Type: scsi
    Size: 21.83TiB
    Read-Only: false
    Removable: false
    Partitions:
  • Partition 1
    ID: sda1
    Device: 8:1
    Read-Only: false
    Size: 21.83TiB
  • Partition 9
    ID: sda9
    Device: 8:9
    Read-Only: false
    Size: 8.00MiB
    Disk 2:
    NUMA node: 0
    ID: sdb
    Device: 8:16
    Model: Samsung SSD 860
    Type: scsi
    Size: 931.51GiB
    Read-Only: false
    Removable: false
    Partitions:
  • Partition 1
    ID: sdb1
    Device: 8:17
    Read-Only: false
    Size: 931.50GiB
  • Partition 9
    ID: sdb9
    Device: 8:25
    Read-Only: false
    Size: 8.00MiB

USB devices:
Device 0:
Vendor: Intel Corp.
Vendor ID: 8087
Product: Wireless-AC 3168 Bluetooth
Product ID: 0aa7
Bus Address: 1
Device Address: 4
Device 1:
Vendor: Logitech, Inc.
Vendor ID: 046d
Product: G513 Carbon GX Blue
Product ID: c33c
Bus Address: 1
Device Address: 2
Device 2:
Vendor: Logitech, Inc.
Vendor ID: 046d
Product: USB Receiver
Product ID: c52b
Bus Address: 1
Device Address: 3
Device 3:
Vendor: Seagate RSS LLC
Vendor ID: 0bc2
Product: Expansion HDD
Product ID: 2038
Bus Address: 2
Device Address: 2

PCI devices:
Device 0:
Address: 0000:00:00.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: 8th/9th Gen Core 8-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S]
Product ID: 3e30
NUMA node: 0
IOMMU group: 0
Driver: skl_uncore
Device 1:
Address: 0000:00:01.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: 6th-10th Gen Core Processor PCIe Controller (x16)
Product ID: 1901
NUMA node: 0
IOMMU group: 1
Driver: pcieport
Device 2:
Address: 0000:00:12.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH Thermal Controller
Product ID: a379
NUMA node: 0
IOMMU group: 2
Driver: intel_pch_thermal
Device 3:
Address: 0000:00:14.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH USB 3.1 xHCI Host Controller
Product ID: a36d
NUMA node: 0
IOMMU group: 3
Driver: xhci_hcd
Device 4:
Address: 0000:00:14.2
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH Shared SRAM
Product ID: a36f
NUMA node: 0
IOMMU group: 3
Driver:
Device 5:
Address: 0000:00:16.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH HECI Controller
Product ID: a360
NUMA node: 0
IOMMU group: 4
Driver: mei_me
Device 6:
Address: 0000:00:17.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH SATA AHCI Controller
Product ID: a352
NUMA node: 0
IOMMU group: 5
Driver: ahci
Device 7:
Address: 0000:00:1c.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH PCI Express Root Port #6
Product ID: a33d
NUMA node: 0
IOMMU group: 6
Driver: pcieport
Device 8:
Address: 0000:00:1c.6
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH PCI Express Root Port #7
Product ID: a33e
NUMA node: 0
IOMMU group: 7
Driver: pcieport
Device 9:
Address: 0000:00:1d.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH PCI Express Root Port #9
Product ID: a330
NUMA node: 0
IOMMU group: 8
Driver: pcieport
Device 10:
Address: 0000:00:1f.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Z390 Chipset LPC/eSPI Controller
Product ID: a305
NUMA node: 0
IOMMU group: 9
Driver:
Device 11:
Address: 0000:00:1f.3
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH cAVS
Product ID: a348
NUMA node: 0
IOMMU group: 9
Driver: snd_hda_intel
Device 12:
Address: 0000:00:1f.4
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH SMBus Controller
Product ID: a323
NUMA node: 0
IOMMU group: 9
Driver: i801_smbus
Device 13:
Address: 0000:00:1f.5
Vendor: Intel Corporation
Vendor ID: 8086
Product: Cannon Lake PCH SPI Controller
Product ID: a324
NUMA node: 0
IOMMU group: 9
Driver: intel-spi
Device 14:
Address: 0000:00:1f.6
Vendor: Intel Corporation
Vendor ID: 8086
Product: Ethernet Connection (7) I219-V
Product ID: 15bc
NUMA node: 0
IOMMU group: 9
Driver: e1000e
Device 15:
Address: 0000:01:00.0
Vendor: NVIDIA Corporation
Vendor ID: 10de
Product: TU106 [GeForce RTX 2060 SUPER]
Product ID: 1f47
NUMA node: 0
IOMMU group: 1
Driver: nouveau
Device 16:
Address: 0000:01:00.1
Vendor: NVIDIA Corporation
Vendor ID: 10de
Product: TU106 High Definition Audio Controller
Product ID: 10f9
NUMA node: 0
IOMMU group: 1
Driver: snd_hda_intel
Device 17:
Address: 0000:01:00.2
Vendor: NVIDIA Corporation
Vendor ID: 10de
Product: TU106 USB 3.1 Host Controller
Product ID: 1ada
NUMA node: 0
IOMMU group: 1
Driver: xhci_hcd
Device 18:
Address: 0000:01:00.3
Vendor: NVIDIA Corporation
Vendor ID: 10de
Product: TU106 USB Type-C UCSI Controller
Product ID: 1adb
NUMA node: 0
IOMMU group: 1
Driver: nvidia-gpu
Device 19:
Address: 0000:02:00.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: I211 Gigabit Network Connection
Product ID: 1539
NUMA node: 0
IOMMU group: 10
Driver: igb
Device 20:
Address: 0000:03:00.0
Vendor: Intel Corporation
Vendor ID: 8086
Product: Dual Band Wireless-AC 3168NGW [Stone Peak]
Product ID: 24fb
NUMA node: 0
IOMMU group: 11
Driver:
Device 21:
Address: 0000:04:00.0
Vendor: Phison Electronics Corporation
Vendor ID: 1987
Product: E12 NVMe Controller
Product ID: 5012
NUMA node: 0
IOMMU group: 12
Driver: nvme

incus info --show-log jellyfin

(Container log from attempting to start after setting nvidia.runtime to true.)
Name: jellyfin
Description:
Status: STOPPED
Type: container (application)
Architecture: x86_64
Created: 2026/01/11 17:38 EST
Last Used: 2026/03/26 00:39 EDT

Log:

lxc homelab_jellyfin 20260326043908.407 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroup_tree_create:747 - File exists - Creating the final cgroup 10(lxc.monitor.homelab_jellyfin) failed
lxc homelab_jellyfin 20260326043908.407 WARN cgfsng - ../src/lxc/cgroups/cgfsng.c:cgroup_tree_create:807 - File exists - Failed to create monitor cgroup 10(lxc.monitor.homelab_jellyfin)
lxc homelab_jellyfin 20260326043908.439 ERROR utils - ../src/lxc/utils.c:safe_mount:1332 - Invalid argument - Failed to mount "none" onto "/opt/incus/lib/lxc/rootfs/run"
lxc homelab_jellyfin 20260326043908.455 ERROR utils - ../src/lxc/utils.c:run_buffer:569 - Script exited with status 1
lxc homelab_jellyfin 20260326043908.455 ERROR conf - ../src/lxc/conf.c:lxc_setup:3945 - Failed to run mount hooks
lxc homelab_jellyfin 20260326043908.455 ERROR start - ../src/lxc/start.c:do_start:1270 - Failed to setup container "homelab_jellyfin"
lxc homelab_jellyfin 20260326043908.455 ERROR sync - ../src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
lxc homelab_jellyfin 20260326043908.459 ERROR network - ../src/lxc/network.c:lxc_netdev_restore_altnames:1422 - Invalid argument - Failed to get altnames for interface "vethcd9b04a2"
lxc homelab_jellyfin 20260326043908.459 WARN network - ../src/lxc/network.c:lxc_delete_network_priv:3940 - Failed to restore altnames for interface with index 0 and initial name "vethcd9b04a2"
lxc homelab_jellyfin 20260326043908.459 WARN network - ../src/lxc/network.c:lxc_delete_network_priv:3945 - Failed to rename interface with index 0 from "physiZ0Ioo" to its initial name "vethcd9b04a2"
lxc homelab_jellyfin 20260326043908.459 ERROR start - ../src/lxc/start.c:__lxc_start:2118 - Failed to spawn container "homelab_jellyfin"
lxc homelab_jellyfin 20260326043908.459 WARN start - ../src/lxc/start.c:lxc_abort:1037 - No such process - Failed to send SIGKILL via pidfd 17 for process 13463
lxc homelab_jellyfin 20260326043908.459 ERROR lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:837 - Received container state "ABORTING" instead of "RUNNING"
lxc 20260326043908.506 ERROR af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20260326043908.506 ERROR commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

I may just be misunderstanding the purpose/implementation of gpu-support, but I hope I haven’t veered too far off the path of expected use cases!

I think NVIDIA drivers are a bit tricky given how they are licensed, and they require a lot more disk space, which makes them tricky to install on a default IncusOS system.

One option would be to use a VM: pass through the GPU, install the latest driver and Incus inside it, and run all GPU containers in that VM. This approach also isolates problems where the driver could impact the host. It isn’t optimal, but it works.
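Sketched roughly (image and VM names here are just examples; the exact driver installation steps inside the VM depend on the distribution):

```shell
# Create a VM and pass the host GPU through to it.
incus create images:ubuntu/24.04 gpu-vm --vm
incus config device add gpu-vm gpu0 gpu gputype=physical
incus start gpu-vm

# Then, inside the VM: install the NVIDIA driver and Incus,
# and run the GPU containers there with nvidia.runtime=true.
```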

I’m not sure if full NVIDIA support is planned; last time I checked GitHub, it wasn’t.

My understanding is that the gpu-support application is an add-on to IncusOS that provides the large GPU drivers from the three main GPU vendors.

The VM route is definitely a workaround! But I’d really like to be able to use my GPU across several different containers.
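For context, what I’m hoping for is something like attaching the same physical GPU to multiple containers (container names below are just examples):

```shell
# Unlike VM passthrough, a container gpu device can be shared across
# containers -- but the NVIDIA userspace bits still require
# nvidia.runtime, and thus a working NVIDIA driver on the host.
incus config device add jellyfin gpu0 gpu
incus config device add transcode gpu0 gpu
```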

I’m also not entirely clear on the extent of NVIDIA support, but that optional application seems to be offering something.

That is correct, but only open-source drivers are included, as they are extracted from the Linux kernel source tree. The drivers you have in mind are the ones NVIDIA ships on its own. As far as I remember, those use a different license, even though NVIDIA now has an open driver for newer cards.

It would probably require creating a completely separate installation package and an API to allow installing them. Would this be possible, @stgraber, @gibmat?


It’s possible that we could include and ship NVIDIA’s open-gpu kernel modules, which would bring in full support for some of their GPUs, and then expand the gpu-support package to include more of the userspace components, so long as those components can be legally redistributed.

That may end up providing sufficient support for most folks, though there will always be a whole bunch of situations where you need the full NVIDIA driver pack, which we can’t include/redistribute.

Also worth noting that IncusOS tracks the latest stable mainline kernel, so adding more out-of-tree drivers always comes with some potential for issues when we bump to the next kernel release. It’s probably worth doing here, but that’s something we’ll need to consider before rolling out something like NVIDIA’s open-gpu drivers.

In any case, please file a request on GitHub so we have something to track the investigation and the work on getting the NVIDIA open-gpu driver support into IncusOS.


Thanks a bunch @osch and @stgraber!

I’ve created #992 - Add support for NVIDIA open GPU kernel modules and userspace components in gpu-support. I hope I got the details mostly accurate.