Access Host SCSI Adaptor From Container

I have a use case where I have some old but still good hardware (a storage array) that is connected to the host by a PCI SCSI card. The drivers for the storage array are unfortunately Red Hat specific. I thought I would try to spin up a CentOS guest, grant access to the SCSI card inside the container, and install the drivers there. I’m not entirely clear on how to go about this, so I have a few questions:

  1. Would this even work? I gather some devices cannot be passed to containers unless there is specific hardware support at the CPU/motherboard level (IOMMU, VT-x, VT-d, etc.)

  2. Should I be adding it as a device in the container profile? Should that be of type unix-char? And would the PCI address (03:08.0) be passed as major/minor, with the path then being /dev/scsi or something inside the container?

  3. Should I also be adding all the block devices from the host to the container, and then do the requisite multipath configuration inside the container too? (Sorry if I’m getting too much into the weeds here.)
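To make question 2 concrete, this is the sort of profile entry I had in mind — purely a sketch, with the device name and major/minor numbers (/dev/sg0, 21:0) as placeholders rather than values from my actual host:

```yaml
# Hypothetical LXD profile snippet. The real node and numbers would come
# from `ls -l /dev/sg*` on the host, not from the PCI address 03:08.0.
devices:
  scsi-array:
    type: unix-char
    source: /dev/sg0    # device node on the host
    path: /dev/sg0      # where it appears inside the container
    major: "21"
    minor: "0"
```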

Assuming we’re talking about a kernel driver (which seems likely here), then no, this won’t work.
Containers share their kernel with the host; if the host kernel doesn’t recognize the hardware, there’s no way for a container to recognize it either.

Alternatives here would be:

  • Find a way to get the driver to build on the host kernel
  • Use another machine on a suitable OS to connect to the SAN and export it over iSCSI or similar for other systems to consume (turning it into a NAS effectively)
  • If the IOMMU/VFIO stars align, maybe run a VM instead and pass the controller to it, run a suitable OS in there and similarly export the storage back to the host over something like iSCSI
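For the second and third options, the export side might look something like this — a rough sketch using LIO’s targetcli, where the backing device path and the IQN are made-up placeholders, not values from any real setup:

```shell
# Sketch: export the array's block device over iSCSI from whichever
# machine can actually see it. Device path and IQN are placeholders.
targetcli /backstores/block create name=array0 dev=/dev/mapper/mpatha
targetcli /iscsi create iqn.2024-01.example.com:array0
targetcli /iscsi/iqn.2024-01.example.com:array0/tpg1/luns \
    create /backstores/block/array0
targetcli saveconfig
```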

Alas, the drivers are old and binary-only. There are some notes on getting the array to function by converting the RPMs to debs etc., so I’ll try that next. Thanks for the speedy reply as always, you guys are awesome.