On my testing cluster (3 HP DL380 Gen10 servers, each with a RAID10 array of 6 SSDs), moving VM block disks is very slow, as it seems to involve a TLS connection between the incusd processes. The receiving process uses 100% CPU, which leads to barely 12MB/s throughput (~30MB/s during the sparse part of the transfer).
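For reference, this is the standard cluster-member move; a minimal sketch of what was run looks like this (the VM name `vm1` and member name `node2` are placeholders, not the actual ones):

```
# Cold-move a VM to another cluster member; the disk is copied
# over the TLS-secured incusd-to-incusd connection.
incus stop vm1
incus move vm1 --target node2
```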
Migrating the same VM using libvirt and a direct QEMU transfer (TCP with no encryption) uses the full bandwidth of the gigabit interface involved, giving 120MB/s.
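For comparison, the libvirt path was along these lines (guest and host names are placeholders; the `tcp://` migration URI is what carries the data stream unencrypted):

```
# Live migration with full storage copy: control traffic goes over SSH,
# while the disk/memory stream uses plain TCP (no encryption).
virsh migrate --live --copy-storage-all --verbose vm1 \
    qemu+ssh://node2/system tcp://node2/
```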
Is there a way to achieve this with Incus?
Incus indeed ensures that all traffic is always carried securely over TLS and we don’t have plans to relax that given how much of a security issue it could be to transfer both disk and memory data unencrypted.
That said, your performance numbers look very odd. I commonly get 200-300MB/s transfer speeds during that kind of migration.
I just did a quick test here with a Windows VM (50GiB allocated, around 10GiB used) and even though I’m running in a pretty crappy environment (nested VMs on a busy physical server), the VM moved at an average of 180MB/s.
Thank you Stéphane,
You’re right, I probably have an issue with my LVM pool …
I tested on another cluster with the same hardware but using a ZFS pool, and everything works OK.
The weird thing is that I/O benchmarking in the VM (backed by the LVM pool) seems OK.
And the benchmark on the receiving pool is OK too.
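In case it helps to compare, the benchmarks were simple sequential write tests, roughly like the following fio run (the job name and sizes are illustrative, not the exact parameters used):

```
# Sequential 1MiB writes with direct I/O, bypassing the page cache,
# run both inside the VM and against the receiving pool's storage.
fio --name=seqwrite --rw=write --bs=1M --size=4G \
    --direct=1 --ioengine=libaio --numjobs=1
```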