This is an introductory post on installing Incus on a cloud server. It walks through setting up the Zabbly repository, installing Incus, and doing some benchmarking with incus-benchmark.
I am using a cloud server from Hetzner, one of the ARM64 instances by Ampere, with 2 vCPUs and 4GB RAM. The storage pool is on a Hetzner volume. You can launch 10 system containers (Ubuntu/Debian) and still have 1GB RAM available, or you can launch 50 Alpine system containers and have about 280MB available.
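For completeness, here is a sketch of the Zabbly repository setup and Incus installation on Debian/Ubuntu. Run it as root; the URLs and paths follow Zabbly's published instructions as I remember them, so check the zabbly/incus repository README for the current version before copying.

```shell
# Add Zabbly's signing key (location and URL per Zabbly's instructions).
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc

# Add the Incus stable repository for this distribution and architecture.
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'

# Install Incus; incus-tools is the package that ships incus-benchmark
# (which is why the binary lives under /opt/incus/bin below).
apt-get update
apt-get install -y incus incus-tools
```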
This and the previous tutorial fall into the content-creation type of tutorials.
We have tested out Incus on a Hetzner 4GB RAM ARM64 server. The storage pool was on a volume, which should be slower than putting it on the server itself.
I think you mean “faster” here, unless the Hetzner extra volumes are known to be slower than the main storage?
The Hetzner volumes are implemented on some sort of networked block storage based on SSDs. They are also replicated to three different physical servers for reliability and the triple replication is transparent to the cloud server. The network latency would make it a bit slower than having the storage pool on the cloud server’s disk.
Let’s test the two scenarios with /opt/incus/bin/incus-benchmark init --count 50 --parallel 10 images:alpine/edge.
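For reference, the two storage pools could be created roughly like this. The pool names are mine, and /dev/sdb is an assumption for where the attached Hetzner volume shows up; adjust to your setup.

```shell
# Scenario 1: ZFS pool directly on the attached Hetzner volume
# (assuming the volume appears as /dev/sdb).
incus storage create volpool zfs source=/dev/sdb

# Scenario 2: ZFS pool backed by a loopback image file
# on the cloud server's own default storage.
incus storage create localpool zfs size=30GiB
```

You can then point the benchmark at either pool and compare.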
Storage pool on volume (network block device) with ZFS: 62 seconds
[Jan 18 00:58:26.373] Importing image into local store: de834b5936ca94aacad4df1ff55b5e6805b06220b9610a6394a518826929a160
[Jan 18 00:58:28.544] Found image in local store: de834b5936ca94aacad4df1ff55b5e6805b06220b9610a6394a518826929a160
[Jan 18 00:58:28.544] Batch processing start
[Jan 18 00:58:41.400] Processed 10 containers in 12.856s (0.778/s)
[Jan 18 00:58:55.870] Processed 20 containers in 27.325s (0.732/s)
[Jan 18 00:59:14.683] Processed 40 containers in 46.139s (0.867/s)
[Jan 18 00:59:30.305] Batch processing completed in 61.760s
Storage pool on ZFS loopback image, stored on the cloud server’s default storage (QEMU HARDDISK): 10 seconds
[Jan 18 16:48:04.844] Importing image into local store: 4c2196a526d78e647f0979c522b085b50e6e1b9a2a7596c710ed633e140b06b2
[Jan 18 16:48:06.884] Found image in local store: 4c2196a526d78e647f0979c522b085b50e6e1b9a2a7596c710ed633e140b06b2
[Jan 18 16:48:06.884] Batch processing start
[Jan 18 16:48:09.450] Processed 10 containers in 2.566s (3.897/s)
[Jan 18 16:48:11.212] Processed 20 containers in 4.328s (4.621/s)
[Jan 18 16:48:15.086] Processed 40 containers in 8.202s (4.877/s)
[Jan 18 16:48:17.059] Batch processing completed in 10.176s
Another test with the command /opt/incus/bin/incus-benchmark launch --count 10 images:ubuntu/22.04
On volume:
[Jan 18 01:49:45.417] Found image in local store: fe0be2b7f4d2b97330883f7516a3962524d2a37a9690397384d2bbf46d0da909
[Jan 18 01:49:45.417] Batch processing start
[Jan 18 01:49:52.104] Processed 2 containers in 6.688s (0.299/s)
[Jan 18 01:49:59.088] Processed 4 containers in 13.671s (0.293/s)
[Jan 18 01:50:12.628] Processed 8 containers in 27.212s (0.294/s)
[Jan 18 01:50:20.227] Batch processing completed in 34.810s
On cloud server’s storage:
[Jan 18 17:07:34.752] Importing image into local store: 0e49300093f967b23648ef693c5f52104d516111db9adb1053fe5aeea8153a47
[Jan 18 17:07:35.708] Found image in local store: 0e49300093f967b23648ef693c5f52104d516111db9adb1053fe5aeea8153a47
[Jan 18 17:07:35.708] Batch processing start
[Jan 18 17:07:44.529] Processed 2 containers in 8.821s (0.227/s)
[Jan 18 17:07:48.659] Processed 4 containers in 12.951s (0.309/s)
[Jan 18 17:07:56.557] Processed 8 containers in 20.849s (0.384/s)
[Jan 18 17:08:00.881] Batch processing completed in 25.173s
Here the gap is smaller (about 35 versus 25 seconds). Volumes are cheaper and they come with triple replication, so it would take a bit more benchmarking to figure out whether the difference matters for typical workloads.
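To put numbers on the gap, we can divide the batch totals reported above:

```shell
# Speedup of the local loopback pool over the network-volume pool,
# computed from the "Batch processing completed" totals.
awk 'BEGIN {
  printf "init (50 Alpine containers):   %.1fx faster\n", 61.760 / 10.176
  printf "launch (10 Ubuntu containers): %.1fx faster\n", 34.810 / 25.173
}'
```

Roughly a 6x gap for the init test but only about 1.4x for the launch test, which suggests (though this is just my reading) that init, with its many small filesystem operations, is far more sensitive to the volume’s network latency.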
Ah, interesting that they only offer highly available network storage as additional volumes.
On clouds like GCP it’s usually the other way around: your instance volume is replicated, but you can attach local NVMe drives as additional volumes, which don’t get replicated but are wicked fast.