Using additional drives as storage that can be shared with host and containers

Hey team, in my setup I have a separate hard drive that I use for additional storage, and I would like to use it as normal storage for both containers and the host. In my case I am using ZFS as the filesystem. I am aware that I can share the drive using the directory type, but as per the documentation the performance might not be optimal. Is there a more optimal way of sharing the drive with the host and the containers?
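To be concrete, by "directory type" I mean creating a dir-backed storage pool on the existing mountpoint, roughly like this (the pool name extdir is just an example):

$ incus storage create extdir dir source=/mnt/ext_storage   # "extdir" is only an example name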

In my case it is a 512GB SSD (/dev/sda) with a mountpoint on the host at /mnt/ext_storage:
squeige@Ragnarok:~$ sudo zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
MyPool        126G  6.52G   119G        -         -     0%     5%  1.00x    ONLINE  -
ext_storage   464G   804K   464G        -         -     0%     0%  1.00x    ONLINE  -
squeige@Ragnarok:~$ zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
ext_storage   804K   450G    96K  /mnt/ext_storage

A good use case for this is having the data on the host, passing it to a container for processing, and having it available in other containers.

As background, the documentation page About storage pools, volumes and buckets - Incus documentation explains the several options for how to use storage. The part that is relevant to you is Shared with the host.

In your case, you have a ZFS pool called ext_storage, you have created a filesystem in it (which one?), and you have mounted it at /mnt/ext_storage. I think this means that the full 450GB is now accessible through something like ext4.

What you need to do is create a dataset on that external storage and use that dataset with Incus. The rest of the space on that external storage can be used on the host as you wish.
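In outline, that is a single command (the dataset name ext_storage/incus and the Incus storage pool name extpool are only placeholders; pick whatever names you prefer):

$ incus storage create extpool zfs source=ext_storage/incus   # placeholder names; Incus creates the dataset if it does not already exist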

Hi Simos,

Please feel free to correct me, as I’m still learning and trying to grasp everything. From what I understand, the link you provided seems to focus on storage for instances. However, I was trying to add a disk-type device as described here: Reference - Devices Disk.

I realize now that using a path essentially creates a bind mount, which shouldn't introduce any significant overhead; would that be correct? Another option I'm considering is adding the ZFS volume, in this case ext_storage, but I am somehow thinking it would be the same thing?
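For reference, what I had in mind was along these lines (the container name mycontainer and the device name extdata are just placeholders):

$ incus config device add mycontainer extdata disk source=/mnt/ext_storage path=/mnt/ext_storage   # placeholder container/device names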

Please feel free to close this thread; I really appreciate you taking the time.
Thank you for your help!

Let’s go through an example. I am creating an Incus VM called incusserver and then a 20GiB block device ExtStorage that I will attach to the Incus VM. From within the VM, there will be a block device /dev/sdb with that 20GiB of space. I think this replicates your setup.

$ incus launch images:ubuntu/24.04/cloud incusserver --vm
Launching incusserver
$ incus storage volume create default ExtStorage --type=block size=20GiB
Storage volume ExtStorage created
$ incus storage volume attach default ExtStorage incusserver
$ 

Then, I’ll set up Incus as usual.

$ incus shell incusserver
root@incusserver:~# apt install zfsutils-linux incus
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 5GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

root@incusserver:~# 

Next, I’ll create a ZFS pool on the 20GiB block device. Incus has already created its own storage pool (through incus admin init), and I am not going to touch that.

root@incusserver:~# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
default                           1.45M  4.36G    24K  legacy
default/buckets                     24K  4.36G    24K  legacy
default/containers                  24K  4.36G    24K  legacy
default/custom                      24K  4.36G    24K  legacy
default/deleted                    144K  4.36G    24K  legacy
default/deleted/buckets             24K  4.36G    24K  legacy
default/deleted/containers          24K  4.36G    24K  legacy
default/deleted/custom              24K  4.36G    24K  legacy
default/deleted/images              24K  4.36G    24K  legacy
default/deleted/virtual-machines    24K  4.36G    24K  legacy
default/images                      24K  4.36G    24K  legacy
default/virtual-machines            24K  4.36G    24K  legacy
root@incusserver:~# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  4.50G  1.45M  4.50G        -         -     0%     0%  1.00x    ONLINE  -
root@incusserver:~# fdisk -l /dev/sdb
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: QEMU HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@incusserver:~# zpool create extstorage /dev/sdb
root@incusserver:~# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default     4.50G  1.45M  4.50G        -         -     0%     0%  1.00x    ONLINE  -
extstorage  19.5G    96K  19.5G        -         -     0%     0%  1.00x    ONLINE  -
root@incusserver:~# 

In the extstorage ZFS pool I’ll create a dataset called extstorage/incus that I will add to Incus. The rest of the 20GiB will be used as other storage for the server. Sorry for using the name extstorage for both the ZFS pool and the Incus storage pool.

root@incusserver:~# incus storage create extstorage zfs source=extstorage/incus
Storage pool extstorage created
root@incusserver:~# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default     4.50G  1.45M  4.50G        -         -     0%     0%  1.00x    ONLINE  -
extstorage  19.5G   736K  19.5G        -         -     0%     0%  1.00x    ONLINE  -
root@incusserver:~# incus storage list
+------------+--------+----------------------------------+-------------+---------+---------+
|    NAME    | DRIVER |              SOURCE              | DESCRIPTION | USED BY |  STATE  |
+------------+--------+----------------------------------+-------------+---------+---------+
| default    | zfs    | /var/lib/incus/disks/default.img |             | 1       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
| extstorage | zfs    | extstorage/incus                 |             | 0       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
root@incusserver:~# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
default                                    1.45M  4.36G    24K  legacy
default/buckets                              24K  4.36G    24K  legacy
default/containers                           24K  4.36G    24K  legacy
default/custom                               24K  4.36G    24K  legacy
default/deleted                             144K  4.36G    24K  legacy
default/deleted/buckets                      24K  4.36G    24K  legacy
default/deleted/containers                   24K  4.36G    24K  legacy
default/deleted/custom                       24K  4.36G    24K  legacy
default/deleted/images                       24K  4.36G    24K  legacy
default/deleted/virtual-machines             24K  4.36G    24K  legacy
default/images                               24K  4.36G    24K  legacy
default/virtual-machines                     24K  4.36G    24K  legacy
extstorage                                  626K  18.9G    24K  /extstorage
extstorage/incus                            288K  18.9G    24K  legacy
extstorage/incus/buckets                     24K  18.9G    24K  legacy
extstorage/incus/containers                  24K  18.9G    24K  legacy
extstorage/incus/custom                      24K  18.9G    24K  legacy
extstorage/incus/deleted                    144K  18.9G    24K  legacy
extstorage/incus/deleted/buckets             24K  18.9G    24K  legacy
extstorage/incus/deleted/containers          24K  18.9G    24K  legacy
extstorage/incus/deleted/custom              24K  18.9G    24K  legacy
extstorage/incus/deleted/images              24K  18.9G    24K  legacy
extstorage/incus/deleted/virtual-machines    24K  18.9G    24K  legacy
extstorage/incus/images                      24K  18.9G    24K  legacy
extstorage/incus/virtual-machines            24K  18.9G    24K  legacy
root@incusserver:~# 

First, we make use of the external storage for Incus purposes. I create a container and specify that it should use the extstorage storage pool instead of default. I can also create storage volumes on either Incus storage pool, and so on (see the example after the output below).

root@incusserver:~# incus storage list
+------------+--------+----------------------------------+-------------+---------+---------+
|    NAME    | DRIVER |              SOURCE              | DESCRIPTION | USED BY |  STATE  |
+------------+--------+----------------------------------+-------------+---------+---------+
| default    | zfs    | /var/lib/incus/disks/default.img |             | 1       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
| extstorage | zfs    | extstorage/incus                 |             | 0       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
root@incusserver:~# incus launch images:alpine/edge alpine --storage extstorage
Launching alpine
root@incusserver:~# incus storage list
+------------+--------+----------------------------------+-------------+---------+---------+
|    NAME    | DRIVER |              SOURCE              | DESCRIPTION | USED BY |  STATE  |
+------------+--------+----------------------------------+-------------+---------+---------+
| default    | zfs    | /var/lib/incus/disks/default.img |             | 1       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
| extstorage | zfs    | extstorage/incus                 |             | 2       | CREATED |
+------------+--------+----------------------------------+-------------+---------+---------+
root@incusserver:~# 
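As an example of that last point, a custom storage volume on extstorage can be created and shared with the container roughly like this (the volume name mydata and the target path /mnt/data are only placeholders):

root@incusserver:~# incus storage volume create extstorage mydata                     # placeholder volume name
root@incusserver:~# incus storage volume attach extstorage mydata alpine /mnt/data    # placeholder target path inside the container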

Second, I’ll use the ZFS pool extstorage for non-Incus tasks. I create a dataset, then set the mountpoint. Finally, I use df to demonstrate that /mnt/ has been mounted from extstorage, as it has a different size from the root filesystem.

root@incusserver:~# zfs create extstorage/mystorage
root@incusserver:~# zfs set mountpoint=/mnt extstorage/mystorage
root@incusserver:~# zfs get mountpoint extstorage/mystorage
NAME                  PROPERTY    VALUE       SOURCE
extstorage/mystorage  mountpoint  /mnt        local
root@incusserver:~# zfs get mounted extstorage/mystorage
NAME                  PROPERTY  VALUE    SOURCE
extstorage/mystorage  mounted   yes      -
root@incusserver:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.6G 1007M  8.6G  11% /
root@incusserver:~# df -h /mnt/
Filesystem            Size  Used Avail Use% Mounted on
extstorage/mystorage   19G  128K   19G   1% /mnt
root@incusserver:~# 

This information is awesome, thanks Simos. I haven't replied with my results as I have been playing around with my setup and experimenting based on your results.

A question does arise: my main drive is an NVMe, and the second drive is a SATA SSD. Would it be better to have the Incus database/storage use a partition on the NVMe, or use the SSD? I mean in regards to performance and such.

It depends whether you care more about data integrity, or about performance and/or capacity.

I have a small NUC-style server which has both an NVMe and a SATA SSD, and I use a ZFS mirror across both of them. I strongly recommend this setup, because it gives an extremely high level of data integrity: the mirror can deal not only with failed drives but also with data corruption on either drive. Thanks to ZFS integrity checksums, it knows which drive has the correct data, and copies it back to the other drive. (Normal mirroring can’t do this: at best, all it could tell you is that the two halves are different.)

If you have a spare partition on the other drive equal in size to your existing zpool vdev, you can easily add this as a mirror.
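Roughly speaking (the pool and device names below are placeholders for whatever your actual setup uses):

$ sudo zpool attach mypool nvme0n1p3 sda2   # attach the spare partition as a mirror of the existing vdev; names are examples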

Of course, this halves your total capacity, so if you need the full capacity it’s up to you how you deploy it, and how (or if) you need to back it up. The NVMe will perform much better than the SATA drive, but if your workloads are not I/O intensive that might not be important. Also, if the SATA drive is much bigger than the NVMe, then that affects your decision; for example, you might want to keep some of the SATA drive for bulk storage, and back it up externally.


Thank you, this worked great to let the host access the default ZFS storage area(!) as a standard disk and let Incus do its thing as well…