Help with Setup

Hi all,

I'm sorry, this is going to be a long one asking for help, but I'm really frustrated after spending weeks trying Proxmox and other solutions that got me close to what I want, but not there yet. Then I found out about Incus and tried the online demo, and it was like a dream! Everything was clear and I finally had a path to victory, but, well, here I am asking for help, since I could not pull it off.

My dream setup right now is: a base system running only Incus + DNS; a container with the onboard VGA + HDMI port passed through, to manage the system and mess with desktop environments and window managers at will; a headless container for Docker apps; and a Windows VM with an external GPU + HDMI passed through that I can game on. Everything on the same mini-PC. That way I can leave the base system mostly untouched, run my Docker images in a container 24/7, use the container with the onboard VGA for work and random web browsing, and use the Windows VM to game from time to time.

First, the hardware I'm using: a mini-PC with an Intel 1360P, 32 GB of RAM and a 1 TB NVMe drive. I also have a DAS with 5 disks running ZFS, and I'm waiting on delivery of the eGPU to finish the setup.

I'm running Debian but can start over and use Ubuntu if it's any better, or any other base that is recommended. I started with a clean install of Debian and added only the Zabbly kernel, Zabbly ZFS and Incus. Then I created an Ubuntu container to try Docker on, compile stuff and generally mess around with and delete at will. My first roadblock was the network: I already had a bridge, but I found out how to attach it to the default profile, so that's not a problem anymore.

Then the storage. Reading the documentation, I got worried about adding my existing pool, since it says that Incus might delete things, so I just used it with the dir driver, and the same with the NVMe on the mini-PC. Now my questions about storage: Proxmox resizes the LVM and uses that for the base containers/VMs; is the same thing possible here, or do I have to partition and hand over the pieces myself for Incus to manage? And about ZFS: do I have to destroy my pool and start over, or can Incus take control of it without deleting what I already have there?

Next, the bane of my existence: video passthrough over HDMI for the container. I noticed that Incus is 100 times friendlier than Proxmox in this regard, but I still could not do it. I attached the GPU as physical, but I don't get any video output from the container. I tried privileged and unprivileged, IOMMU is on in GRUB and the BIOS, I tried with the VFIO modules on and off, blacklisting the Intel drivers from loading, and everything else all the guides on the internet have to offer, be it for Proxmox or Incus, or even random things. There was a time when I had so many options in GRUB that Linux could not even start up anymore. Reading this forum, my last attempt was following this guide: GitHub - bryansteiner/gpu-passthrough-tutorial, and it got me nowhere as usual.

I'm going insane here. Incus is my last hope of success, because you can sense that on the Proxmox forums they do everything to give you the most cryptic answer possible instead of a straight one (maybe in hopes of selling their support or something?). By the way, the journey I'm on is way longer than what I described here, but I can't even think about it anymore: so many reinstalls of everything, dependencies, compiling, failing and frustration that I'm going to explode!

Someone plz! SAVE ME FROM MYSELF!!!


Welcome!

If you already have a ZFS pool, you can arrange for Incus to use a dataset in that pool.

Here’s an educational example. I created a VM, and in there I’ll create a ZFS pool over a loopback file (it’s easier for me), and from the zpool I’ll arrange for Incus to use a separate dataset. The important part is what to answer when you run sudo incus admin init so that you are prompted for a dataset name. Incus will then only touch its own dataset and nothing else.

root@myvm:~# fallocate -l 5G /VOLUME
root@myvm:~# losetup /dev/loop2 /VOLUME 
root@myvm:~# zpool create zfsdisk /dev/loop2
root@myvm:~# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zfsdisk   102K  4.36G    24K  /zfsdisk
root@myvm:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (zfs, dir) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfsdisk/incus
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: 
root@myvm:~# zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
zfsdisk                                  588K  4.36G    24K  /zfsdisk
zfsdisk/incus                            288K  4.36G    24K  legacy
zfsdisk/incus/buckets                     24K  4.36G    24K  legacy
zfsdisk/incus/containers                  24K  4.36G    24K  legacy
zfsdisk/incus/custom                      24K  4.36G    24K  legacy
zfsdisk/incus/deleted                    144K  4.36G    24K  legacy
zfsdisk/incus/deleted/buckets             24K  4.36G    24K  legacy
zfsdisk/incus/deleted/containers          24K  4.36G    24K  legacy
zfsdisk/incus/deleted/custom              24K  4.36G    24K  legacy
zfsdisk/incus/deleted/images              24K  4.36G    24K  legacy
zfsdisk/incus/deleted/virtual-machines    24K  4.36G    24K  legacy
zfsdisk/incus/images                      24K  4.36G    24K  legacy
zfsdisk/incus/virtual-machines            24K  4.36G    24K  legacy
root@myvm:~# 
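For completeness, if Incus is already initialized you don’t have to re-run incus admin init: you can also register a new storage pool on top of an existing dataset directly. A sketch, where the dataset and pool names are just examples to substitute with your own:

```shell
# Create an empty dataset in the existing zpool for Incus to manage,
# then register it as an Incus storage pool.
# "zfsdisk/incus2" and "mypool" are example names.
zfs create zfsdisk/incus2
incus storage create mypool zfs source=zfsdisk/incus2
```

Either way, Incus only creates and destroys things inside the dataset you hand it; your other datasets in the same pool are left alone.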

GPU passthrough requires first figuring out whether your hardware supports it. It’s a combination of BIOS/firmware, motherboard, CPU and GPU. Once you know that GPU passthrough is supported on paper (for your motherboard’s GPU or that other eGPU), you would need to experiment with a tutorial like the one you posted above. That is, try to make GPU passthrough work as simply as possible with just your Linux distribution. Perhaps a fresh install on a separate disk?
There are also other communities that can help, such as Reddit - Dive into anything
I am also interested in this in case you get the necessary insight.

Once you know that GPU passthrough works in the simplest possible setup, you can then re-implement it in Incus.
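A quick way to verify part of this on any distribution is to list the IOMMU groups from sysfs. This is a generic sketch, not Incus-specific: if it prints nothing, the IOMMU is not active, and the GPU you want to pass through should ideally sit in a group by itself (or only with its own audio function).

```shell
# List every IOMMU group and the PCI devices in it.
# No output means the IOMMU is off (check the BIOS and the
# intel_iommu=on / amd_iommu=on kernel parameters).
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue        # glob did not match: no IOMMU groups
    group=${dev%/devices/*}          # strip "/devices/<address>"
    group=${group##*/}               # keep only the group number
    echo "IOMMU group $group: $(lspci -nns "${dev##*/}")"
done
```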

@simos Thank you very much for the ZFS dataset part. I was using ZFS as a filesystem just because, in my noob mind, it was the cool new fun way of doing software RAID with auto-healing; I had no idea how powerful ZFS was. Reading about datasets made me understand how Incus works in that regard, and I even organized all my stuff in a better way now! I'm really happy about that part.

But back to the bane of my existence: GPU passthrough. My CPU/BIOS supports it for sure; I see IOMMU enabled in dmesg, and I even installed Jellyfin and tried hardware transcoding, which works too. I got to this point on both Proxmox and Incus, and as I said, Incus makes it 100 times easier by attaching the GPU with just a command. But what I never get, on any software, is video output from the HDMI port. That is what eludes me no matter which guide I follow, no matter which options or combination of settings I try; I've tried them all at this point. I've been on this quest for 3 weeks now and I'm not giving up! Trial and error for life! I even got the vGPU thing to work, just to try it on more than one container at a time, but what I really want is to use the HDMI port from a VM or LXC, and that seems impossible to me right now.

I’ve been trying to do this too. I’d recommend proving your GPU passthrough using only QEMU first, just to be sure it works. Something like this:

qemu-system-x86_64 -bios /usr/share/ovmf/x64/OVMF.fd -cpu host -enable-kvm -vga none -display none -device vfio-pci,host=d8:00.0

Just so you’re sure it works (pass your own GPU’s PCI host ID, the one you configured for IOMMU at boot); then you can narrow any issues down to Incus…
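Once the bare QEMU test shows output on the monitor, the rough equivalent on the Incus side is attaching the GPU as a physical device by its PCI address. A sketch, where the instance name and PCI address are placeholders to substitute with your own:

```shell
# Attach the whole GPU to an Incus VM by PCI address, then boot it.
# "win11" and "0000:d8:00.0" are placeholders.
incus config device add win11 gpu0 gpu gputype=physical pci=0000:d8:00.0
incus start win11
```

If QEMU shows video but Incus doesn’t, at least you know the problem is in the Incus/VM configuration rather than the IOMMU/VFIO setup.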