Incus existing ZFS pool

Hey all,

So I have an existing zpool and datasets available on my server. I created a new dataset for Incus with zfs create incus, but I can't seem to get Incus to actually use it. When I pick the zfs driver and point the Incus storage pool at my dataset location, I still end up with an image at:

/var/lib/incus/disks/incus-zfs.img

What I want is for it to actually use my existing ZFS dataset. How do I do this?

During incus admin init you can do that by answering no to the question about creating a new zpool, at which point you can provide an existing zpool or dataset name.
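
For a non-interactive setup, the same can be done with a preseed file piped into incus admin init --preseed. This is only a minimal sketch; the dataset name tank/incus and the storage pool name default are placeholders, and you would normally add your network configuration to it as well:

cat <<EOF | incus admin init --preseed
storage_pools:
- name: default
  driver: zfs
  config:
    source: tank/incus
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
EOF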

After incus admin init you can do it through incus storage create some-name zfs source=zpool/dataset
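
For example, assuming an existing pool called tank (the pool, dataset, storage pool and instance names below are just placeholders), you could dedicate an empty child dataset to Incus and then launch an instance on it:

zfs create tank/incus
incus storage create tank-pool zfs source=tank/incus
incus launch images:debian/12 c1 --storage tank-pool

The dataset handed to Incus has to be empty; whatever Incus creates underneath it is then fully managed (and may be deleted) by Incus.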

Thanks, that worked. I was trying to do it through the web UI and I guess I was doing something wrong, because when I set the source to the same dataset there, it just created a new image on disk anyway instead of using my existing pool.

Sorry to revive this, but this does not seem to be the solution:

# incus storage create incus zfs source=tank/incus
Error: Provided ZFS pool (or dataset) isn't empty, run "sudo zfs list -r tank/incus" to see existing entries

Why is it expecting source to be empty?

EDIT: I see, apparently “incus admin recover” must be used for an existing Incus storage pool.
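
To summarise the two cases (tank/incus below is a placeholder dataset name): incus storage create only accepts an empty dataset, whereas a dataset that already holds data from a previous Incus installation has to be re-imported through the interactive incus admin recover tool:

zfs list -r tank/incus    # must list only tank/incus itself for "storage create" to accept it
incus admin recover       # interactive; recovers pools, instances and volumes created by a previous Incus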

Hello there,

I have a very similar problem.

I moved instances from one Incus host to another.

There is also an HDD with a ZFS pool which I want to add (physically mounting the device) to the new host. Inside the storage pool, there are some Incus custom storage volumes.

How is it possible to “import” them into Incus?

With incus admin restore it was not possible; I also tried incus admin init, and incus storage create… (not possible because there's already data inside the ZFS storage pool).

Do you have a working Incus installation, plus an orphan Incus storage pool (with important data) that you want to somehow merge into the working installation?

It’s an interesting question and an interesting exercise as well. Normally you would recover that orphan storage pool into a new Incus installation, get both Incus installations to communicate with each other, and finally move anything useful from the orphan storage pool to your working Incus installation.
But isn't it cumbersome to install Incus on another host? Well, don't do that; instead, install Incus in a container or VM on the working Incus installation and continue from there.
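
As a rough sketch of that approach (every name below is a placeholder, and the exact trust/remote steps may vary slightly between Incus versions):

# on the working host: create a VM and pass the orphan disk through to it
incus launch images:debian/12 recovery --vm
incus config device add recovery orphan disk source=/dev/sdb

# inside the VM: install ZFS and Incus, import the pool, then recover it
zpool import tank
incus admin init
incus admin recover

# expose the VM's Incus API and add it as a remote on the working host
incus config set core.https_address :8443     # inside the VM
incus config trust add recovery-client        # inside the VM, prints a trust token
incus remote add recovery <VM-IP>             # on the host, paste the token when prompted

# finally, move anything useful over to the working installation
incus copy recovery:someinstance someinstance
incus storage volume copy recovery:tank/somevolume default/somevolume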

If you think this is an interesting issue and you want to get better help, I suggest starting a new thread.

Did you try incus admin recover? If so, what did it show?

Hi candlerb,

It’s been a few days and I can’t remember exactly.

incus admin recover “worked”, i.e. there was no error message, but it did not import the custom volumes correctly or as I would have expected.

I then copied the volumes from one host to the other over the network and that worked fine.

BR

Is there any update on this? I have two huge pools:

# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backup1      196T   169T  27.2T        -         -    44%    86%  1.00x    ONLINE  -
backup2      218T  47.3T   171T        -         -     1%    21%  1.00x    ONLINE  -
containers   290G  1.29G   289G        -         -     0%     0%  1.00x    ONLINE  -

I'm updating this backup server to use Incus to run the backups, which is why these pools were pre-created. It would be difficult to offload 200 TB of data temporarily just to create new pools that are managed by Incus as storage pools.

Any way I can let Incus take control of these pools and use them within containers?

(I also have a pool named containers, which is managed by Incus as the default storage pool)

Edit: Is it just incus storage create backup1 zfs source=backup1? I’m discouraged by this:

Incus assumes that it has full control over the ZFS pool and dataset. Therefore, you should never maintain any datasets or file system entities that are not owned by Incus in a ZFS pool or dataset, because Incus might delete them.

My two pools contain thousands of datasets, one for each user in the system. I'm also planning to use zfs.delegate so that the backup daemon inside the Incus container can create new datasets for future users.

ChatGPT proposes creating a new sub-dataset to be managed by Incus, and afterwards moving the existing non-Incus-managed datasets under it. Is that viable?

Yes, you can use a new dataset for Incus to manage containers, custom volumes, etc.

Works flawlessly on my systems, and it is actually similar to how iXsystems has implemented it for the upcoming TrueNAS SCALE.

I would recommend testing this before placing it on your production system, to avoid any issues.
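
A minimal sketch of that layout, assuming the backup1 pool from above and a hypothetical child dataset backup1/incus reserved for Incus:

zfs create backup1/incus
incus storage create backup1-pool zfs source=backup1/incus

The existing per-user datasets stay outside backup1/incus, so the "full control" warning quoted above only applies to the subtree that Incus actually manages.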

See also here for how to test this in a VM before going into production: Help with Setup - #2 by simos

It works, but sub-datasets and snapshots aren't shown inside the container, perhaps because they use legacy mountpoints and don't inherit the mount point from the parent. Do you have any experience with this? I saw you had another post about a similar issue.

Yes, I played with it a bit but didn't come to a full conclusion. My guess is that this is security related, i.e. how Incus protects the host system.

For sub-datasets you need to enable the delegation flag (zfs.delegate) on your volume; see here: Is it possible to mount nested ZFS datasets from within a container?
Though I can't remember if this worked for snapshots…
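
For what it's worth, a minimal sketch of that flag on a custom volume (the pool, volume, instance and path names are placeholders, and delegation needs a reasonably recent OpenZFS on the host):

incus storage volume create backup1-pool userdata zfs.delegate=true
incus config device add c1 userdata disk pool=backup1-pool source=userdata path=/srv/userdata

With the flag set, the dataset is delegated to the instance, so tools inside it can manage their own child datasets.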