So I have an existing zpool and datasets available on my server. I created a new dataset for Incus with zfs create tank/incus, but I can’t seem to get Incus to actually use it. When I pick the zfs driver and point the Incus storage source at my dataset, I still end up with a loop-backed image at:
/var/lib/incus/disks/incus-zfs.img
What I want is for it to actually use my existing ZFS dataset. How do I do this?
During incus admin init you can do that by answering no to the question about creating a new ZFS pool, at which point you can provide the name of an existing zpool or dataset.
After incus admin init, you can do it with incus storage create some-name zfs source=zpool/dataset
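For example, to sanity-check the second approach (the pool and instance names here are hypothetical), assuming an existing dataset tank/incus:

# incus storage create mypool zfs source=tank/incus
# incus launch images:debian/12 c1 --storage mypool

The launch just confirms that new instances land on the existing dataset rather than on a loop-backed image.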
Thanks, that worked. I was trying to do it through the web UI, and I guess I was doing something wrong, because when I set source to the same dataset there, it just created a new image on disk anyway instead of using my existing pool.
Sorry to revive this, but this does not seem to be the solution:
# incus storage create incus zfs source=tank/incus
Error: Provided ZFS pool (or dataset) isn't empty, run "sudo zfs list -r tank/incus" to see existing entries
Why is it expecting source to be empty?
EDIT: I see, apparently incus admin recover must be used for existing Incus storage.
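For anyone landing here later: incus admin recover is interactive. It asks for the pool name, the backend, and the source, then scans for unknown volumes and offers to recover them. Roughly like this (prompt wording may differ between versions, so treat this transcript as approximate):

# incus admin recover
Name of the storage pool: incus
Name of the storage backend (btrfs, ceph, cephfs, dir, lvm, zfs): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): tank/incus
...
Would you like those to be recovered? (yes/no) [default=no]: yes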
There is also an HDD with a ZFS pool which I want to add to the new host (physically mounting the device). Inside that storage pool there are some Incus custom storage volumes.
How is it possible to “import” them into Incus?
With incus admin recover it was not possible; I also tried incus admin init, and incus storage create… (not possible because there’s already data inside the ZFS storage pool).
Do you have a working Incus installation, and there’s an orphan Incus storage pool (with important data) that you want to somehow merge into the working installation?
It’s an interesting question and an interesting exercise as well. Normally you would recover that orphan storage pool into a new Incus installation, get both Incus installations to communicate with each other, and finally move anything useful from the orphan storage pool to your working Incus installation.
But isn’t it cumbersome to install Incus on another host? Well, don’t do that: install Incus in a container or VM on the working Incus installation and continue from there.
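A minimal sketch of that last step, assuming the orphan pool has been recovered into a nested Incus (the remote name recovered, the address, and the volume names are all hypothetical): expose the nested server’s API, add it as a remote on the working installation, and copy volumes across:

# incus config set core.https_address :8443      (run on the nested Incus)
# incus remote add recovered 10.0.3.100:8443     (run on the working Incus)
# incus storage volume copy recovered:pool1/myvol default/myvol

incus copy works the same way for moving whole instances between the two servers.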
If you think this is an interesting issue and you want better help, I suggest starting a new thread.
I’m updating this backup server to use Incus to run the backups, so that’s why the pools were pre-created. It would be difficult to temporarily offload 200 TB of data just to recreate them as Incus-managed storage volumes.
Is there any way I can let Incus take control of these pools and use them within containers?
(I also have a pool named containers, which is managed by Incus as the default storage pool.)
Edit: Is it just incus storage create backup1 zfs source=backup1? I’m discouraged by this warning in the docs:
Incus assumes that it has full control over the ZFS pool and dataset. Therefore, you should never maintain any datasets or file system entities that are not owned by Incus in a ZFS pool or dataset, because Incus might delete them.
My two pools contain thousands of datasets, one for each user in the system. I’m also planning to use zfs.delegate so that the backup daemon inside the Incus container can create new datasets for future users.
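A hedged sketch of the delegation part, assuming a container named backup-ct with its root volume on pool backup1 (as far as I know, zfs.delegate needs ZFS 2.2+ and a reasonably recent Incus):

# incus storage volume set backup1 container/backup-ct zfs.delegate=true
# incus restart backup-ct

With that set, the zfs command inside the container can create child datasets under the container’s delegated dataset.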
ChatGPT proposes creating a new sub-dataset to be managed by Incus, and afterwards moving the existing non-Incus-managed datasets into it. Is that viable?
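For reference, the layout ChatGPT is describing would look roughly like this (pool, volume, and user names are hypothetical, and this is untested, so treat it as a sketch only):

# zfs create backup1/incus
# incus storage create backup1 zfs source=backup1/incus
# incus storage volume create backup1 users
# zfs rename backup1/user1 backup1/incus/custom/default_users/user1

The rename moves an existing per-user dataset under the Incus-managed custom volume; custom/default_users is where a recent Incus puts a custom volume named users in the default project, but verify the exact dataset path with zfs list before renaming anything.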
It works, but sub-datasets and snapshots aren’t shown inside the container, perhaps because their mountpoints are set to legacy and they don’t inherit the mountpoint from the parent. Do you have any experience with this? I saw you had another post about a similar issue.
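One thing worth checking (a guess based on the symptoms, not a confirmed fix): if the moved datasets carry mountpoint=legacy, they won’t be auto-mounted by ZFS inside the delegated tree. Something like this would show and reset that (dataset names hypothetical, matching the sketch above):

# zfs get -r mountpoint backup1/incus/custom/default_users
# zfs inherit -r mountpoint backup1/incus/custom/default_users

zfs inherit makes the children derive their mountpoint from the parent again instead of keeping the legacy setting.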