Incus admin init: how to use an existing ZFS pool (correctly)?

Hi, I'm new to Incus. I just received my new mini PC, installed Ubuntu 24.04.2, and did a basic setup (updates, bridge interface & ZFS pool).

Now I need to run incus admin init, but I'm not sure how to answer the setup questions "correctly".
My ZFS pool is called zfspool and I want all Incus stuff to live in zfspool/incus.
Should I answer incus at
Name of the new storage pool [default=default]:
or do I have to create the dataset first (sudo zfs create zfspool/incus)?
And later, when I am asked
Name of the existing ZFS pool or dataset:
what do I have to answer?

PS:
I will introduce myself properly in a later post and ask a few more questions, but for now I want to get started with Incus; some of those questions may well answer themselves once I've experimented a bit.

Welcome!

Since you already have a ZFS pool on your system, you want to make Incus use a dataset in that pool. See the example at Help with Setup - #2 by simos.

Therefore,

  1. Name of the new storage pool [default=default]: you may leave this empty to accept default. This is just the local name Incus will use for the Incus storage pool; you can put anything here.
  2. Name of the existing ZFS pool or dataset: answer zfspool/incus. Incus will create the dataset for you; do not create it manually.
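
For reference, the same thing can be done non-interactively on a server that has already been initialized, by creating the storage pool directly and pointing source at the dataset. A minimal sketch, using the pool and dataset names from the question above:

sudo incus storage create default zfs source=zfspool/incus  # "default" and "zfspool/incus" as in the question
sudo incus storage show default   # source: should report zfspool/incus
sudo zfs list -r zfspool/incus    # the dataset Incus created, plus its children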

@marvindinges If you already have a ZFS pool created on your system and want to pass it to "incus admin init", then use my setup as a reference. NOTE: THIS IS ONLY IF YOU ALREADY HAVE AN EXISTING ZFS POOL.

  1. Make sure the ZFS pool you want to provide to "incus admin init" is in place.

  2. In my case, I have a ZFS pool named "basepool", and that is the pool I passed to "incus admin init".

  3. The command "zpool list" should show all your existing ZFS pools.

  4. Note that in my case "basepool", which I am going to pass to "incus admin init", is only 5.5GB (experimental). Make sure to allocate a bigger size.

# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
basepool  5.50G   132K  5.50G        -         -     0%     0%  1.00x    ONLINE  -
zpool-1    316G   110K   316G        -         -     0%     0%  1.00x    ONLINE  -
zpool-2    316G   108K   316G        -         -     0%     0%  1.00x    ONLINE  -
zpool-3    316G   110K   316G        -         -     0%     0%  1.00x    ONLINE  -
zpool-4    316G   110K   316G        -         -     0%     0%  1.00x    ONLINE  -
zpool-5    316G   110K   316G        -         -     0%     0%  1.00x    ONLINE  -
zpool-6    316G   110K   316G        -         -     0%     0%  1.00x    ONLINE  -
  5. Here is the output of my "incus admin init". Note that I am also assigning an IP address of my choice (10.1.1.253/24) to incusbr0.
# incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: basepool
Name of the storage backend to use (zfs, btrfs, dir, lvm, lvmcluster) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: basepool
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 10.1.1.253/24
Would you like to NAT IPv4 traffic on your bridge? [default=yes]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the server to be available over the network? (yes/no) [default=no]: yes
Address to bind to (not including port) [default=all]:
Port to bind to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config:
  core.https_address: '[::]:8443'
networks:
- config:
    ipv4.address: 10.1.1.253/24
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    source: basepool
  description: ""
  name: basepool
  driver: zfs
storage_volumes: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: basepool
      type: disk
  name: default
  project: default
projects: []
cluster: null
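
One nice side effect of answering yes to that last preseed question: the printed YAML can be saved to a file and replayed on a fresh machine, so you do not have to walk through the prompts again. A minimal sketch, assuming you saved the output above as preseed.yaml:

# incus admin init --preseed < preseed.yaml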

And this is how it will be listed:

# incus storage list
+----------+--------+-------------+---------+---------+
|   NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+----------+--------+-------------+---------+---------+
| basepool | zfs    |             | 4       | CREATED |
+----------+--------+-------------+---------+---------+
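
To double-check that Incus really is using your existing zpool rather than a loop file, you can look at the pool from both sides; the source key in the first command should report basepool, and the second should show the datasets Incus created underneath it:

# incus storage show basepool
# zfs list -r basepool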

Hope this helps.

Thanks mate, works the way I wanted it to!

Thanks, I just adopted @simos' solution the moment you posted yours.

If you want to use an existing pool that already holds data for something else, you can use something like this to create and use a new dataset within that pool:

sudo zfs create zroot/incus
sudo incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: incus
Name of the storage backend to use (zfs, dir) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zroot/incus
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

The result is:

$ sudo incus storage list
+-------+--------+-------------+---------+---------+
| NAME  | DRIVER | DESCRIPTION | USED BY |  STATE  |
+-------+--------+-------------+---------+---------+
| incus | zfs    |             | 3       | CREATED |
+-------+--------+-------------+---------+---------+

After creating a container, you’d end up with something like this:

$ sudo incus launch images:debian/13 debian-test-13
Launching debian-test-13
$ zfs list
NAME                                                                                  USED  AVAIL     REFER  MOUNTPOINT
zroot                                                                                1.67G  1.72T       96K  none
zroot/ROOT                                                                           1.44G  1.72T       96K  none
zroot/ROOT/debian                                                                    1.44G  1.72T     1.44G  /
zroot/home                                                                            660K  1.72T      660K  /home
zroot/incus                                                                           229M  1.72T       96K  legacy
zroot/incus/buckets                                                                    96K  1.72T       96K  legacy
zroot/incus/containers                                                               3.52M  1.72T       96K  legacy
zroot/incus/containers/debian-test-13                                                3.43M  1.72T      224M  legacy
zroot/incus/custom                                                                     96K  1.72T       96K  legacy
zroot/incus/deleted                                                                   576K  1.72T       96K  legacy
zroot/incus/deleted/buckets                                                            96K  1.72T       96K  legacy
zroot/incus/deleted/containers                                                         96K  1.72T       96K  legacy
zroot/incus/deleted/custom                                                             96K  1.72T       96K  legacy
zroot/incus/deleted/images                                                             96K  1.72T       96K  legacy
zroot/incus/deleted/virtual-machines                                                   96K  1.72T       96K  legacy
zroot/incus/images                                                                    224M  1.72T       96K  legacy
zroot/incus/images/c4f17b293ea6413a120b169de518c2a75c72c311281cc45ddabbcc8500be4c2e   224M  1.72T      224M  legacy
zroot/incus/virtual-machines                                                           96K  1.72T       96K  legacy
$
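
You can also view the same structure from the Incus side; the storage volume listing shows the container and image volumes that back those datasets:

$ sudo incus storage volume list incus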