Comparing Storage Drivers

I did some comparisons for the btrfs, lvm and zfs storage drivers using the incus-benchmark tool.

The Incus host is an Ubuntu virtual machine (with nested virtualization) running on an M3 MacBook Pro. I do admit this may not be a favourable scenario for all storage drivers.

All storage pools were created with the default settings.

incus storage create btrfs btrfs
incus storage create lvm lvm
incus storage create zfs zfs

Observations

Before getting into the benchmark, here are some things I observed while trying out the storage drivers.

  • With btrfs, the quota set on an instance is not visible from inside the container; df -h reports the size of the whole pool.
  • With zfs, only the most recent snapshot can be restored; newer snapshots have to be deleted before an older one can be restored (see the example commands just below).
  • lvm is also a solid option, even though zfs and btrfs are the ones getting all the attention.
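
To make the first two points concrete, here is roughly what they look like on the command line (c1/c2 and the image alias are just placeholders, and the snapshot subcommands assume the current incus CLI):

# btrfs: a root disk quota is not reflected inside the container
incus launch images:ubuntu/24.04 c1 --storage btrfs
incus config device override c1 root size=5GiB
incus exec c1 -- df -h /          # still reports the full pool size, not 5GiB

# zfs: only the latest snapshot can be restored directly
incus launch images:ubuntu/24.04 c2 --storage zfs
incus snapshot create c2 snap0
incus snapshot create c2 snap1
incus snapshot restore c2 snap0   # refused while snap1 exists
incus snapshot delete c2 snap1
incus snapshot restore c2 snap0   # works now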

Speed (creation of containers)

The clear winner is btrfs, and it is not even close; lvm beats zfs.

The virtual machine was initially configured with 2 CPUs and 4 GiB of memory.

I assumed (wrongly) that the storage drivers would take advantage of multiple CPUs and more resources, so I bumped the virtual machine to 8 CPUs and 8 GiB of memory.

Even though the performance of btrfs and lvm is slightly better, the results are similar.
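
As a rough sketch of how each run was set up: the pool under test is selected by pointing the default profile's root device at it (one way to do it, not necessarily the only one), then incus-benchmark launches the containers; check incus-benchmark --help for the exact flags on your version.

# point the default profile at the pool under test, then launch 500 containers
incus profile device set default root pool=btrfs
incus-benchmark launch --count 500 images:alpine/edge
# clean up the test containers before switching pools
incus-benchmark delete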

Speed (instance export)

I compared exporting a plain Ubuntu instance with --optimized-storage for both a container and a virtual machine, though I am aware lvm has no support for optimised storage exports.

They all performed similarly for containers. For virtual machines, zfs and btrfs are still comparable, with lvm lagging far behind.
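
For reference, each export was timed roughly like this (u1 is a placeholder instance name and the archive path is arbitrary):

time incus export u1 /tmp/u1.tar.gz --optimized-storage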

Disk usage

Lastly, I compared the disk usage after the benchmarks were done to see which of the drivers is the most space efficient.

In this category zfs shines; btrfs is a distant second at more than twice the usage of zfs. As for lvm, this contest is too much for it to handle.
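
The usage figures come from incus storage info on each pool; the full output for each is included in the numbers below.

incus storage info btrfs
incus storage info lvm
incus storage info zfs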

The numbers

The full benchmark results are below.

# exporting with --optimized-storage | virtual machines
=======================================================
btrfs:  time: 19s archive_size: 264m
lvm:    time: 61s archive_size: 274m
zfs:    time: 20s archive_size: 270m

# exporting with --optimized-storage | containers
=================================================
btrfs:  time: 15s archive_size: 163m
lvm:    time: 13s archive_size: 162m
zfs:    time: 14s archive_size: 166m

# benchmark: cpu 2, memory 4GiB | 500 containers
================================================
btrfs: ubuntu: 30.375s  alpine: 19.112s
lvm:   ubuntu: 137.585s alpine: 124.836s
zfs:   ubuntu: 161.012s alpine: 146.246s

# benchmark: cpu 8, memory 8GiB | 500 containers
================================================
btrfs: ubuntu: 21.006s  alpine: 19.050s
lvm:   ubuntu: 115.015s alpine: 110.528s
zfs:   ubuntu: 164.790s alpine: 166.859s

# disk usage after benchmarks
=============================
$ incus storage info btrfs
info:
  description: ""
  driver: btrfs
  name: btrfs
  space used: 588.69MiB
  total space: 30.00GiB
used by:
  images:
  - d7d17321ac277c94a3a49dcef4a2a10ab7e44d48e14e6accb6dead0fc1dde65d
  - ec70399387b8877539b36e40787ca0426aa870bd6c514da7b209f97cbdfa8291

$ incus storage info lvm
info:
  description: ""
  driver: lvm
  name: lvm
  space used: 4.45GiB
  total space: 29.93GiB
used by:
  images:
  - d7d17321ac277c94a3a49dcef4a2a10ab7e44d48e14e6accb6dead0fc1dde65d
  - ec70399387b8877539b36e40787ca0426aa870bd6c514da7b209f97cbdfa8291

$ incus storage info zfs
info:
  description: ""
  driver: zfs
  name: zfs
  space used: 264.93MiB
  total space: 28.58GiB
used by:
  images:
  - d7d17321ac277c94a3a49dcef4a2a10ab7e44d48e14e6accb6dead0fc1dde65d
  - ec70399387b8877539b36e40787ca0426aa870bd6c514da7b209f97cbdfa8291

Note that in all 3 cases, you’re effectively running those storage drivers on top of a file on another filesystem. This isn’t exactly ideal for performance or reliability, though at least they were all on equal footing in that regard.

It may be interesting to pass in a second disk to the VM and use that second disk as the source for all 3 storage pools, to see if eliminating that intermediate loop + filesystem layer changes anything.

It’d also obviously be interesting if someone could replicate the exact same test on x86_64 to see if there is a big difference in performance between architectures. I’m particularly interested to see if ZFS is somehow better optimized on x86.

I can try with an additional disk as well.

Also, I do have a 7th gen Intel device lying around; if that's not too old I can replicate the tests with it.


7th gen should be fine. The numbers won't be comparable between arm64 and x86_64, but that's not the point; the point is to compare the different storage drivers on the same machine, so that'll do just fine.

Benchmarks v2

I have re-run the benchmarks and the performance of zfs is indeed different: still far behind btrfs, but now notably better than lvm.

I am not sure what was going on with my machine at the time that skewed the earlier zfs results, but the benchmarks (and other tests) this time around follow the same pattern throughout.

Scenarios

Hosts

The benchmarks were performed on three different Incus hosts:

  • Ubuntu 24.04 running on an Intel i5-7300U device with 16 GiB of memory
  • Ubuntu 24.04 virtual machine with QEMU on the above host, allocated 2 CPUs and 4 GiB of memory
  • Ubuntu 24.04 virtual machine with QEMU on an M3 Pro MacBook, allocated 2 CPUs and 4 GiB of memory

Disks

The benchmarks were done with both loop disks and physical disks.

Loop Disks

For the loop disks, the storage pools were created with the default command.

incus storage create btrfs btrfs
incus storage create lvm lvm
incus storage create zfs zfs

Physical Disks

For the Intel device, three extra partitions were created on the NVMe drive, one for each storage driver.

$ lsblk
...
nvme0n1     259:0    0 476.9G  0 disk
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
├─nvme0n1p2 259:2    0 372.5G  0 part /var/snap/firefox/common/host-hunspell
│                                     /
├─nvme0n1p3 259:3    0  14.9G  0 part [SWAP]
├─nvme0n1p4 259:4    0  29.3G  0 part
├─nvme0n1p5 259:5    0  29.3G  0 part
└─nvme0n1p6 259:6    0  29.3G  0 part

For the virtual machines, three extra 30 GiB disks were attached, one for each storage driver.

The storage pools were created with the default command, using the disks (virtual machines) or partitions (physical machine) as the source.

incus storage create btrfs btrfs source=/dev/X
incus storage create lvm lvm source=/dev/X
incus storage create zfs zfs source=/dev/X

Charts

  • x86_64 physical machine: loop disk slower for lvm
  • x86_64 virtual machine: loop disk slightly faster for zfs
  • aarch64 virtual machine: loop disk negligibly slower

Numbers

The actual numbers are below.

--------------------------
x86_64 
--------------------------

# 500 containers | alpine | dedicated disk | virtual machine
============================================================
btrfs: 51.906s
lvm:   317.994s
zfs:   214.629s

# 500 containers | alpine | loop disk | virtual machine
=======================================================
btrfs: 55.211s
lvm:   316.483s
zfs:   199.297s

# 500 containers | alpine | dedicated disk | physical machine
=============================================================
btrfs: 41.816s
lvm:   184.098s
zfs:   115.946s

# 500 containers | alpine | loop disk | physical machine
========================================================
btrfs: 43.193s
lvm:   221.709s
zfs:   117.339s

--------------------------
aarch64
--------------------------

# 500 containers | alpine | dedicated disk | virtual machine
============================================================
btrfs: 12.054s
lvm:   105.542s
zfs:   53.681s

# 500 containers | alpine | loop disk | virtual machine
=======================================================
btrfs: 12.076s
lvm:   106.630s
zfs:   55.659s

Conclusion

  • The initial benchmark can be regarded as an outlier for zfs.
  • btrfs is sooo fast.
  • The performance penalty is not a concern when deciding between a loop disk and a physical disk.

More Comparisons?

I am thinking of comparing disk read and write speeds of the mounted storage disks in each of the above scenarios.

I am still expecting to see a clear advantage of a dedicated disk over a loop disk.
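
Something along these lines with fio, run inside an instance on each pool, is roughly what I have in mind (c1 is a placeholder instance name and the parameters are only a sketch):

# install fio, then run a sequential write and a random read test inside the instance
incus exec c1 -- apt-get install -y fio
incus exec c1 -- fio --name=seqwrite --rw=write --bs=1M --size=1G --directory=/root
incus exec c1 -- fio --name=randread --rw=randread --bs=4k --size=1G --directory=/root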
