System containers should be the easiest option. Once you've got the NVIDIA drivers installed on the host and your card shows up in nvidia-smi, you can pass it through to the container the same way as any other GPU in your system.
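For reference, a minimal sketch of what that can look like with the Incus CLI (the container name and PCI address below are placeholders):

```
# Confirm the host sees the card first
nvidia-smi

# Attach the GPU to an existing container
incus config device add mycontainer egpu gpu

# With more than one GPU, you can pin the device by PCI address instead
# incus config device add mycontainer egpu gpu pci=0000:0a:00.0

# Optionally have Incus map the NVIDIA user-space libraries into the container
# (requires libnvidia-container on the host)
incus config set mycontainer nvidia.runtime=true
```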
For VMs, it's likely to be difficult or impossible. Basically, the GPU needs to live in its own dedicated IOMMU group. If it does, and you have the IOMMU enabled both in firmware and on the kernel command line, then it should be easy to pass it through, but I have yet to see a Thunderbolt controller that puts its devices in dedicated IOMMU groups.
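If you want to check how your system groups things, a sketch like this lists every IOMMU group and the devices in it (ideally the eGPU sits in a group of its own, or only with its own audio function):

```sh
#!/bin/sh
# List every IOMMU group and the PCI devices it contains.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "  $(lspci -nns "${d##*/}")"
    done
done
```

If /sys/kernel/iommu_groups is empty, the IOMMU isn't enabled at all; that typically means enabling VT-d/AMD-Vi in firmware and adding the appropriate intel_iommu=on or amd_iommu=on option to the kernel command line.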
I followed the instructions in your video and got things working with CTs. It’s quite simple. Didn’t try with VMs.
For me (Ubuntu 22.04), the Thunderbolt connection was a non-issue. Over Thunderbolt, the GPU just appears as a normal device on the PCI bus (try lspci).
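For example, once the Thunderbolt link is up, something like this should show the card:

```
lspci -nn | grep -i nvidia
```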
I had to understand that the Nvidia toolset requires both kernel-space packages (nvidia-driver-NNN) and user-space packages (libcudnn*, cuda*). The kernel-space driver must be loaded on the host and is shared by every CT. The user-space tools can be unique to each CT.
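As a rough sketch of that split (the package names, driver version, and container name are just examples; adjust for your GPU and distro):

```
# Host: kernel-space driver, loaded once and shared by all CTs
sudo apt install nvidia-driver-535
nvidia-smi                              # card visible on the host

# Each CT: only the user-space pieces (CUDA, cuDNN), which can differ per container
incus exec mycontainer -- apt install -y nvidia-cuda-toolkit
incus exec mycontainer -- nvidia-smi    # should work once the GPU device is attached
                                        # and the NVIDIA user-space bits are available in the CT
```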
IMO the Nvidia documentation and support websites are just terrible. Budget extra time to sort through the mess. Once the Nvidia issues are resolved, the Incus GPU passthrough itself is simple.