Weird issue with storage backends running in Docker

So, I have a LINSTOR satellite and controller set up on each of my nodes in a storage cluster. I want to use the LINSTOR driver, but the issue has to do with my setup. Since LINSTOR provides Dockerfiles for its services, I decided to deploy them in Docker containers so the services are kept separate, in case things need to be restarted or for other reasons.

I did not install Incus in that Docker container; I installed Incus on the main host. The running Docker containers hold the LINSTOR satellite and the LINSTOR controller. Since the LINSTOR stuff is not on my host, I don't think Incus sees it. So, when I run incus admin init, I just get the following error …

Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=5.0.0.5]: incus-gigabyte.local
Are you joining an existing cluster? (yes/no) [default=no]: no
What member name should be used to identify this server in the cluster? [default=gigabyte]:
Do you want to configure a new local storage pool? (yes/no) [default=yes]: no
Do you want to configure a new remote storage pool? (yes/no) [default=no]: yes
Error: No storage backends available
root@gigabyte:/home/m#

How can I make Incus see the storage backends in the Docker containers? I can use the storage tooling directly and it works, so it should work with Incus as well, right?

How does Incus search for storage backends?

Can I just install without storage and then set up storage afterwards?

Could someone help me with that?

During bootstrap, Incus doesn't know about any remote storage backends unless you have installed them locally and/or they use a kernel module.

Looking at the LINSTOR guide, it requires setting a few config options before you can use it. These settings are not available during the initial bootstrap process. As such, you need to set up Incus without any storage, or just define a default dir storage pool, and update/remove your storage config after the service is up and running.

It is similar to installing your OS and adding the correct drivers for your hardware after the OS is installed.
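Concretely, something along these lines should work once the controller is reachable (the storage.linstor.* key name is taken from the Incus LINSTOR documentation for recent releases, so double-check it against your version):

# Bootstrap with only a local dir pool (or none at all), answering "no"
# to the remote storage pool question, then add the remote pool later:
incus storage create default dir

# Point Incus at the LINSTOR controller's REST API (default port 3370)
# and only then create the LINSTOR-backed pool:
incus config set storage.linstor.controller_connection http://<controller-ip>:3370
incus storage create remote linstor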

I've had it work before, but I never had LINSTOR in Docker containers … I had it installed locally. So if I try to set it up manually afterwards, I get the same issue:

root@gigabyte:/home/mihai# incus storage ls
+------+--------+-------------+---------+-------+
| NAME | DRIVER | DESCRIPTION | USED BY | STATE |
+------+--------+-------------+---------+-------+
root@gigabyte:/home/mihai# incus storage create remote linstor
Error: LINSTOR satellite executable not found

Where exactly is it looking for the satellite executable? … Maybe I can make an alias or a shortcut to docker exec -it linstor-satellite /usr/share/linstor-server/bin/Satellite? Would that work?

It does not work lol

Error: Failed to run: /usr/share/linstor-server/bin/Satellite --version: fork/exec /usr/share/linstor-server/bin/Satellite: exec format error

Any idea how to fix this?

The error clearly says that it can't find the binary.

I suggest installing the required binary on your hosts and all will be good.
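On Debian or Ubuntu that would be something like the following, assuming you have a repository that ships the LINSTOR packages (LINBIT's or your distro's):

apt install linstor-satellite linstor-client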

That’s weird. I’ve never used Linstor with incus, but IMO, incus should not be trying to invoke the satellite binary itself; it should only be talking to the linstor controller API.

There's a full setup guide here. It clearly shows the LINSTOR storage environment being configured before telling Incus about it, and it shows configuring the LINSTOR API endpoint. I think you've missed out a lot of steps.

This isn't my first time setting it up, I promise you. I haven't missed any steps, and it's clear that Incus tries to invoke the satellite binary before actually making the storage available. I didn't install the binary on the host because it pulls in Java and I'm trying to keep the host as clean as possible, so I dockerized the satellite service.

Would you know if it's possible to do this with LINSTOR running fully in Docker containers?

Maybe @stgraber can clarify whether you need the backend installed on the host so that Incus can offer it?

The answers are in the source code.

# ./internal/server/storage/drivers/driver_linstor_utils.go

// LinstorSatellitePaths lists the possible FS paths for the Satellite script.
var LinstorSatellitePaths = []string{"/usr/share/linstor-server/bin"}
...
// controllerVersion returns the LINSTOR controller version.
func (d *linstor) controllerVersion() (string, error) {
        var satellitePath string
        for _, path := range LinstorSatellitePaths {
                candidate := filepath.Join(path, "Satellite")
                _, err := os.Stat(candidate)
                if err == nil {
                        satellitePath = candidate
                        break
                }
        }

        if satellitePath == "" {
                return "", errors.New("LINSTOR satellite executable not found")
        }

        out, err := subprocess.RunCommand(satellitePath, "--version")
        if err != nil {
                return "", err
        }

        for _, line := range strings.Split(out, "\n") {
                if strings.HasPrefix(line, "Version:") {
                        fields := strings.Fields(line)
                        if len(fields) < 2 {
                                return "", errors.New("Could not parse LINSTOR satellite version")
                        }

                        return fields[1], nil
                }
        }

        return "", errors.New("Could not parse LINSTOR satellite version")
}

That is, rather than querying the API endpoint for the controller version, it runs the satellite binary and asks it for its version. That should give the same answer, since LINSTOR only works if the controller and satellites run exactly the same version.

Similarly, there is code in ./internal/server/storage/drivers/driver_linstor.go which checks for DRBD version on the host.

Therefore, it looks like it’s a hard-coded assumption that every incus node is also a Linstor satellite.

Arguably this is reasonable, given that wherever incus runs a VM or container backed by Linstor, it will need a /dev/drbdXXXX device created to access the content (even if it’s a diskless resource and the content is elsewhere).

Wrapping Linstor inside a docker container makes this difficult: to start with, for docker to be able to manipulate /dev/drbd device nodes it would need to be a privileged container. Also, running docker and incus on the same host is generally a bad idea due to the way they interact over firewall rules.

Creating a dummy /usr/share/linstor-server/bin/Satellite that does some docker exec magic might work. But at this point, you're on your own; you're not using the software the way it was designed, so you'll have to support it yourself.
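For illustration only, such a wrapper would be a small executable script placed at /usr/share/linstor-server/bin/Satellite on the host, roughly like this (assuming a running container named linstor-satellite, as in your docker exec line; the "exec format error" above is what you get when the file isn't an executable script with a shebang):

#!/bin/sh
# Unsupported sketch: forward the Satellite call into the container.
# No -i/-t flags, since Incus invokes this non-interactively.
exec docker exec linstor-satellite /usr/share/linstor-server/bin/Satellite "$@"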

Therefore, I think you're best off swallowing the Java on the host (and I don't like it either).

Darn, I tried making a shortcut but that didn't work … it gave me some exec error, I think.

Darn, I spent so much time getting LINSTOR dockerized, lol.

I ended up just making a Debian 13 slim Docker container and a script that downloads and installs the LINSTOR satellite in it. I made it privileged because LINSTOR would need to be privileged anyway.

I really thought I had something going there, haha.

Mkay, but keep in mind that each Incus node has to have a local satellite, and having proper mappings between DRBD mounts on Docker and on the host looks very hard to me. My advice is, simply don’t install the client within Docker.

IIRC we need to get LINSTOR’s version before initializing the driver and the controller connection.

Because I wrote half of the driver, I can answer: “yes”. You absolutely need the client on the host. The client itself is responsible for creating the proper DRBD devices on the host. And because we need it, we can just query it locally, as we’re doing.

Which is also absolutely needed.

Yeah, please don’t :smiley:

I decided against it. Oh well :slight_smile: