Linstor: "Unable to start cloning" error

Hi everybody!

Kind of new to the Incus world, and to be honest I have the impression that my approach is over-complicated, although I could not yet find a better option. Please bear with me if the question is too weird :slight_smile:

What I am trying to do:

  • Run IncusOS in my home lab
  • Run a Linstor controller on a VPS
  • Have S3-compatible storage on a very cheap device (a Raspberry Pi at a friend’s house) as a backup target for one of the IncusOS storage pools

The idea: Use the “Snapshot Shipping” feature of Linstor to back up the IncusOS data to the S3-compatible storage (see Controlling Data Replication with Snapshot Shipping - LINBIT). I am aware that this is kind of a corner-case usage of Linstor, but I could not come up with a better backup/restore solution (scheduled, incremental, deduplicated, encrypted; my limiting resource is primarily the bandwidth of the Raspberry Pi).
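For context, the snapshot-shipping flow I have in mind would look roughly like this, a sketch based on the LINBIT documentation; the remote name, endpoint, bucket, and credentials are placeholders:

```shell
# Register the S3-compatible target (the minio on the Raspberry Pi)
# as a LINSTOR "remote". Endpoint, bucket, region, and keys are placeholders.
linstor remote create s3 pi-backup http://192.168.122.8:9000 my-bucket us-east-1 ACCESS_KEY SECRET_KEY

# Ship a backup of a resource to that remote
# (the first shipment is full, later ones incremental).
linstor backup create pi-backup my-resource

# List what has been shipped so far.
linstor backup list pi-backup
```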

# Configure the Linstor backend in Incus
incus config set incusos: storage.linstor.controller_connection=http://192.168.122.116:3370
incus config set incusos: storage.linstor.satellite.name=18e79163-b792-4521-a7cf-d2956550223d

# Create "MyLinstorPool" as a ZFS pool - it is supposed to be the actual Incus storage pool that backs the Linstor storage pool
incus storage create MyLinstorPool zfs size=20GB
linstor storage-pool create zfs 18e79163-b792-4521-a7cf-d2956550223d MyLinstorPool MyLinstorPool

# Create a "remote" Incus storage pool - a pool that uses the Linstor driver now
incus storage create remote linstor linstor.resource_group.storage_pool=MyLinstorPool linstor.resource_group.place_count=1

Now my problem: if I create an instance in the “remote” pool, I get an ApiCallError:

incus launch images:debian/12 c1 --storage remote
Launching c1
Error: Failed instance creation: Failed creating instance from image: Unable to start cloning resource definition: json: cannot unmarshal object into Go value of type client.ApiCallError

The Linstor error log shows this:

user@vps:~$ sudo journalctl -u linstor-controller -n 100 | grep -i "clone\|error\|remote"
Dec 30 11:35:58 vps Controller[699]: 2025-12-30 11:35:58.191 [grizzly-http-server-0] INFO  LINSTOR/Controller/717627 SYSTEM - REST/API RestClient(192.168.122.236; 'Go-http-client/1.1')/CloneRscDfn
Dec 30 11:35:58 vps Controller[699]: 2025-12-30 11:35:58.226 [grizzly-http-server-0] ERROR LINSTOR/Controller/ SYSTEM - No suitable storage pools found for cloning. [Report number 69537AF1-00000-000013]
user@vps:~$ linstor err show 69537AF1-00000-000013
ERROR REPORT 69537AF1-00000-000013

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            1.33.1
Build ID:                           95da7940d6efb6a39ea303c5f37b03478a6fab0b
Build time:                         2025-12-22T16:04:57+00:00
Error time:                         2025-12-30 11:35:58
Node:                               vps
Thread:                             grizzly-http-server-0
Access context information

Identity:                           PUBLIC
Role:                               PUBLIC
Domain:                             PUBLIC

Peer:                               RestClient(192.168.122.236; 'Go-http-client/1.1')

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         ApiRcException
Class canonical name:               com.linbit.linstor.core.apicallhandler.response.ApiRcException
Generated at:                       Method 'findCloneStoragePools', Source file 'CtrlRscDfnApiCallHandler.java', Line #1231

Error message:                      No suitable storage pools found for cloning.

Error context:
        No suitable storage pools found for cloning.
ApiRcException entries: 
  Message:     No suitable storage pools found for cloning.
  NumericCode: -4611686018427386908


Asynchronous stage backtrace:
    
    Error has been observed at the following site(s):
        *__checkpoint ⇢ Clone resource-definition
    Original Stack Trace:

Call backtrace:

    Method                                   Native Class:Line number
    findCloneStoragePools                    N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscDfnApiCallHandler:1231

Suppressed exception 1 of 1:
===============
Category:                           RuntimeException
Class name:                         OnAssemblyException
Class canonical name:               reactor.core.publisher.FluxOnAssembly.OnAssemblyException
Generated at:                       Method 'findCloneStoragePools', Source file 'CtrlRscDfnApiCallHandler.java', Line #1231

Error message:                      
Error has been observed at the following site(s):
        *__checkpoint ⇢ Clone resource-definition
Original Stack Trace:

Error context:
        No suitable storage pools found for cloning.
Call backtrace:

    Method                                   Native Class:Line number
    findCloneStoragePools                    N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscDfnApiCallHandler:1231
    cloneRscDfnInTransaction                 N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscDfnApiCallHandler:1411
    lambda$cloneRscDfn$7                     N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscDfnApiCallHandler:851
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:178
    lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:101
    call                                     N      reactor.core.publisher.MonoCallable:72
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:128
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8833
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.Flux:8848
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.Flux:8848
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    onNext                                   N      reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber:129
    completePossiblyEmpty                    N      reactor.core.publisher.Operators$BaseFluxToMonoOperator:2096
    onComplete                               N      reactor.core.publisher.MonoCollect$CollectSubscriber:145
    onComplete                               N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:261
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:850
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:612
    drain                                    N      reactor.core.publisher.FluxFlatMap$FlatMapMain:592
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapMain:469
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:850
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:612
    drain                                    N      reactor.core.publisher.FluxFlatMap$FlatMapMain:592
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapMain:469
    complete                                 N      reactor.core.publisher.Operators:137
    subscribe                                N      reactor.core.publisher.FluxIterable:144
    subscribe                                N      reactor.core.publisher.FluxIterable:83
    subscribe                                N      reactor.core.publisher.Flux:8848
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:202
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8833
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.Flux:8848
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:430
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:251
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribeInner                         N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:150
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:246
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:193
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8833
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.InternalMonoOperator:76
    subscribe                                N      reactor.core.publisher.MonoUsing:102
    subscribe                                N      reactor.core.publisher.Mono:4576
    subscribeWith                            N      reactor.core.publisher.Mono:4642
    subscribe                                N      reactor.core.publisher.Mono:4542
    subscribe                                N      reactor.core.publisher.Mono:4478
    subscribe                                N      reactor.core.publisher.Mono:4450
    doFlux                                   N      com.linbit.linstor.api.rest.v1.RequestHelper:345
    clone                                    N      com.linbit.linstor.api.rest.v1.ResourceDefinitions:394
    invoke                                   N      jdk.internal.reflect.DirectMethodHandleAccessor:103
    invoke                                   N      java.lang.reflect.Method:580
    lambda$static$0                          N      org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory:52
    run                                      N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1:146
    invoke                                   N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher:189
    doDispatch                               N      org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker:159
    dispatch                                 N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher:93
    invoke                                   N      org.glassfish.jersey.server.model.ResourceMethodInvoker:478
    apply                                    N      org.glassfish.jersey.server.model.ResourceMethodInvoker:400
    apply                                    N      org.glassfish.jersey.server.model.ResourceMethodInvoker:81
    run                                      N      org.glassfish.jersey.server.ServerRuntime$1:256
    call                                     N      org.glassfish.jersey.internal.Errors$1:248
    call                                     N      org.glassfish.jersey.internal.Errors$1:244
    process                                  N      org.glassfish.jersey.internal.Errors:292
    process                                  N      org.glassfish.jersey.internal.Errors:274
    process                                  N      org.glassfish.jersey.internal.Errors:244
    runInScope                               N      org.glassfish.jersey.process.internal.RequestScope:265
    process                                  N      org.glassfish.jersey.server.ServerRuntime:235
    handle                                   N      org.glassfish.jersey.server.ApplicationHandler:684
    service                                  N      org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer:356
    run                                      N      org.glassfish.grizzly.http.server.HttpHandler$1:190
    doWork                                   N      org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker:535
    run                                      N      org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker:515
    run                                      N      java.lang.Thread:1583


END OF ERROR REPORT.

Unclear: I found several notes that encrypted ZFS is unusable in combination with Linstor, and a bug report that looks very similar (In the case of using a zfs encrypted pool as the linstor storage backend and linstor as the incus storage backend, incus launch and incus init do not work properly · Issue #2075 · lxc/incus · GitHub), although both the Incus error and the Linstor error are slightly different.

Could you please help me?

I wouldn’t say it’s a corner-case usage, it’s a pretty sane backup mechanism.

Weird… Maybe we’re not catching an error on the Incus side.

Could you show the usual linstor node list, linstor storage-pool list and linstor resource-group list, please?

That’s correct. Encrypted ZFS doesn’t work with LINSTOR. If you want encrypted volumes, you need to use LUKS. If you tried using encrypted ZFS volumes, we can’t help you any further, I’m afraid.

Thanks Benjamin, for this super quick reply! :slight_smile:

Here is the info you asked for:

user@vps:~$ linstor node list
╭──────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node                                 ┊ NodeType  ┊ Addresses                    ┊ State  ┊
╞══════════════════════════════════════════════════════════════════════════════════════════╡
┊ 18e79163-b792-4521-a7cf-d2956550223d ┊ SATELLITE ┊ 192.168.122.236:3366 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────────────────────────────────────╯

user@vps:~$ linstor storage-pool list
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node                                 ┊ Driver   ┊ PoolName      ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                                                ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ 18e79163-b792-4521-a7cf-d2956550223d ┊ DISKLESS ┊               ┊              ┊               ┊ False        ┊ Ok    ┊ 18e79163-b792-4521-a7cf-d2956550223d;DfltDisklessStorPool ┊
┊ MyLinstorPool        ┊ 18e79163-b792-4521-a7cf-d2956550223d ┊ ZFS      ┊ MyLinstorPool ┊     7.76 GiB ┊     18.50 GiB ┊ True         ┊ Ok    ┊ 18e79163-b792-4521-a7cf-d2956550223d;MyLinstorPool        ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
user@vps:~$ linstor resource-group list
╭─────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter                  ┊ VlmNrs ┊ Description                     ┊
╞═════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltRscGrp    ┊ PlaceCount: 2                 ┊        ┊                                 ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ remote        ┊ PlaceCount: 1                 ┊        ┊ Resource group managed by Incus ┊
┊               ┊ StoragePool(s): MyLinstorPool ┊        ┊                                 ┊
╰─────────────────────────────────────────────────────────────────────────────────────────╯

Regarding the encryption: as far as I understood, IncusOS ALWAYS uses encrypted ZFS (in contrast to “pure” Incus). If this is a misunderstanding, I would appreciate a correction. The only other “match” between Linstor and IncusOS that I could find is of type file-system (which, on the other hand, does not provide all the efficient snapshot features).

Again, thank you so much! :slight_smile:

IncusOS has a create-volume command to create a volume (dataset in practice) for use with either Incus or Linstor.

When a volume is created for use by Linstor, that dataset is marked as unencrypted.

Ok, I just wanted to be extra sure that you set up the proper place count.
I need to clarify something: is 192.168.122.116 your local IncusOS instance or your remote VPS that’s somehow visible from your local network?

  • If it’s your local instance, I’d highly suggest setting the IP to 127.0.0.1
  • If it’s your remote VPS, then here’s your mistake: you need a local LINSTOR satellite on your Incus server (this is what allows you to create DRBD disks that get mounted by Incus).

EDIT: to continue debugging, I’d need to know what 192.168.122.116 and 192.168.122.236 are, as that could lead to very different diagnoses :slight_smile:

I’m by no means an IncusOS expert, but I’d say that only applies to local volumes, as ZFS encryption is not very well supported by replicated storage drivers (and LINSTOR is unfortunately one of those).

In your case, it looks like you’ve created a file-backed ZFS pool to use with Linstor, so that won’t be encrypted.

See here for how to create a dataset for use by Linstor directly on IncusOS managed storage:

Not that I would expect this to be the source of the issue here; it is just a better way to do what you’re currently doing, avoiding a layer of indirection and letting you use what’s likely a bigger chunk of your storage.

Wow, you guys are responsive :smiley: Thanks so much!

Ok, at the moment I am still experimenting in a virtualized environment. Ultimately I want to go on a hardware setup.

Role    | Usage                                              | Experimental Address | Final Address
IncusOS | VMs/Containers/Data on a Mini PC                   | 192.168.122.236      | 10.11.12.15
VPS     | Linstor controller, … on a publicly accessible VPS | 192.168.122.116      | 10.11.12.13
Backup  | minio (S3) on a Raspberry Pi                       | 192.168.122.8        | 10.11.12.14

Answering your questions:

  • 192.168.122.116 is the VPS (in my experimental setup)
    • It is NOT the Linstor satellite, just the controller
  • The IncusOS host (192.168.122.236) is supposed to be the one and only satellite (at the same time providing storage and using it itself for VMs/Containers/Data)
    • My idea here: Linstor would be clever enough that I can somehow make a pool or volume part of the distributed storage space (even though there is just one participant) and directly use it in Incus, with the Linstor controller acting just as the “scheduler” for the backups to the remote S3 storage.

Thanks, Stéphane, for the link! I tried to use this (in various ways, since, to be honest, I did not really understand the concept of pools/volumes/resources), but I failed:

incus config set incusos: storage.linstor.controller_connection=http://192.168.122.116:3370
incus config set incusos: storage.linstor.satellite.name=18e79163-b792-4521-a7cf-d2956550223d

# Creating a new volume in the "local" ZFS pool
incus admin os system storage create-volume -d '{"pool":"local","name":"mylinstorvolume5","use":"linstor"}'
linstor storage-pool create zfs 18e79163-b792-4521-a7cf-d2956550223d mylinstorvolume5 local/mylinstorvolume5
incus storage create mylinstorvolume5 linstor source=mylinstorvolume5 linstor.resource_group.storage_pool=mylinstorvolume5 linstor.resource_group.place_count=1

Unfortunately the error remains:

Instance creation failed
Failed creating instance from image: Unable to start cloning resource definition: json: cannot unmarshal object into Go value of type client.ApiCallError

user@vps:~$ sudo journalctl -u linstor-controller -n 100 | grep -i "clone\|error\|remote"
Dec 30 17:05:23 vps Controller[699]: 2025-12-30 17:05:23.343 [grizzly-http-server-1] INFO  LINSTOR/Controller/4cf9c4 SYSTEM - REST/API RestClient(192.168.122.236; 'Go-http-client/1.1')/CloneRscDfn
Dec 30 17:05:23 vps Controller[699]: 2025-12-30 17:05:23.352 [grizzly-http-server-1] ERROR LINSTOR/Controller/ SYSTEM - No suitable storage pools found for cloning. [Report number 69537AF1-00000-000022]

I am almost sure that I am using the functions to create the volume/pool/resource wrong, or in the wrong order :smiley: Can you see the problem?

That feels pretty unsafe: if you lose the connection to your VPS, you’ll be unable to operate the cluster.

For the actual problem, I’m out of ideas. No suitable storage pools found for cloning suggests that there are constraints that cannot be satisfied by the currently defined storage pools, but I don’t know which (and I’m not really focused right now).
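A few LINSTOR commands that might help narrow down which constraint fails (a sketch; the resource-group name `remote` is taken from the output earlier in this thread):

```shell
# Inspect what LINSTOR has actually created so far.
linstor resource-definition list
linstor volume list

# Ask the controller how much the resource group's placement filter can
# still place; this often reveals why "No suitable storage pools" is reported.
linstor resource-group query-max-volume-size remote
```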

You are right - maybe the overall solution is somehow too complex.

Actually what I want to do is pretty simple: Have a full backup on a different machine:

  • Incremental (so that I can select from various earlier times when rolling back)
  • Encrypted (so that the backup machine can be set up at friendโ€™s home without giving them full access to all data)
  • Compressed/Deduplicated (so that the transfer is possible even with very low bandwidth)
  • Scheduled (so that I can run the backup only during the maintenance window at night time)
  • Partial recovery (possibility to roll back only a part of the VMs/Containers/Volumes)

Do you guys know anything in IncusOS that suits these needs? (I researched a lot of options already, but there is always one little thing that does not work)

Backup tools like kopia would fulfill all these needs if they just had access to the /var/lib/incus filesystem. I am wondering if there would be interest in someone implementing a kopia backup service (or application) in IncusOS.

Would there be interest? Or am I on the wrong track? :slight_smile:
(I could maybe try to start a proof-of-concept, if you guys say this is sensible and you could give me some architectural directions)
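For illustration, if such a service had access to the Incus data, the kopia side might look like this. This is only a sketch: the bucket, endpoint, and credentials are placeholders, and snapshotting /var/lib/incus directly assumes the instances are quiesced or snapshotted first for consistency:

```shell
# Create an encrypted, deduplicated repository on the S3-compatible minio box
# (bucket name, endpoint, and credentials are placeholders).
kopia repository create s3 --bucket incus-backups --endpoint 192.168.122.8:9000 \
  --access-key ACCESS_KEY --secret-access-key SECRET_KEY

# Snapshot the Incus data directory; kopia only uploads changed chunks,
# which keeps the transfer small on a low-bandwidth link.
kopia snapshot create /var/lib/incus

# List available snapshots (the basis for partial, point-in-time restores).
kopia snapshot list /var/lib/incus
```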

I am using BorgBackup (https://www.borgbackup.org/) for that. BorgBackup is extremely efficient at deduplicating and has fully encrypted backups.

What I do is:

  • export individual containers/VMs or storage volumes to a tar file with incus export / incus storage volume export
  • feed the resulting tar file to borg with borg import-tar ...

Restore is basically the same in reverse: borg export-tar ... and then incus import .... And using import-tar instead of just backing up the tar files lets me access the backed-up individual files in containers or storage volumes as well (for VMs it does not really matter, because the root fs of a VM is exported to tar as a blob instead of individual files in a directory tree).
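A minimal sketch of that round trip (the repository path, archive name, and instance name are placeholders):

```shell
# Back up: export the instance to a tar file on disk,
# then let borg deduplicate and encrypt it into the repository.
incus export c1 /tmp/c1.tar
borg import-tar /path/to/repo::c1-{now} /tmp/c1.tar

# Restore: reconstruct the tar from the borg archive,
# then re-import it into Incus.
borg export-tar /path/to/repo::c1-2025-12-30 /tmp/c1-restore.tar
incus import /tmp/c1-restore.tar
```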

And what I’d really like to see in Incus is a way to export to / import from stdin directly, without the need to create potentially huge intermediate tar archives on disk…

Incus is admittedly a bit lacking backup-wise. I’ll personally be working on a connector for Bareos in January, but it may not suit everybody (especially homelabbers), as it’s a fairly hard-to-configure piece of software.

Thanks for sharing your experience! :slight_smile:

I thought about a solution like this as well, but having these big intermediate files worried me. First of all, they eat up a lot of resources (both disk space and computing time), which is somewhat wasteful. And secondly, they cause lots of write cycles on my SSDs, which will probably make them fail sooner rather than later.

Thanks @bensmrs for sharing this. I hadn’t heard of it before, but as you say, it might be targeted more at other people than me :wink:

At the moment I am really a little lost on the backup topic. I have to admit that (although I really love the IncusOS approach!!!) I have now started experimenting with Proxmox (and their great Proxmox Backup Server). I would prefer IncusOS in several ways, but without the feeling that I can recover from a disaster, I would not dare to use it :confused:

It’s hard to imagine that I am the only one needing this; I assume I just don’t see the most obvious approach :smiley: Does anybody have a geo-redundant backup solution set up (over a low-bandwidth connection)?

Depends on what you define as “low bandwidth”, and on how much your data changes between increments. Actually, it’s a bit of the wrong question: with a good incremental solution (like BorgBackup, or even simply using incus copy --refresh, provided that both ends run the same storage system and that system supports incremental transfers), the bandwidth you need for the daily increments pretty much depends only on how much your data changes between increments.
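For completeness, the incus copy --refresh variant could look like this (the remote name and URL are placeholders):

```shell
# One-time: add the backup host as an Incus remote on the source machine.
incus remote add backuphost https://backup.example.net:8443

# The initial copy transfers everything; subsequent runs with --refresh
# only send the differences since the last copy.
incus copy c1 backuphost:c1 --refresh
```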

That said, I’ve been running my BorgBackup solution for a system that runs the office IT of a (very) small software development company (mail server, Git repositories, test systems, …) with daily incremental backups over a 10 Mbit connection without any problems.