LXD VM slow write performance on BTRFS

Hi All,

I've noticed that write performance is really slow inside a VM. I think it has become slower over time.

This is inside a container:

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.93097 s, 556 MB/s

This is inside a VM:

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.5192 s, 102 MB/s

Is there something I can do to make it fast again?

The backend storage is BTRFS on an SSD.

Even when I create a new VM with LXD, SSD speed is slow inside the VM. Containers are fast on the same storage device.

Is there anything I can do?

BTRFS is going to be the slowest storage backend for virtual machines, due to its copy-on-write behavior and the fact that a VM writes through one very large file on disk.
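As an aside, a common BTRFS-level mitigation (a general technique, not LXD-specific advice) is to mark the directory holding VM images as no-CoW. The path below is a placeholder, and the flag only affects files created after it is set:

# New files created in this directory skip copy-on-write
chattr +C /path/to/vm-images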

What dd command were you running exactly?

Hi,

OK, so I must avoid VMs and do everything in containers.

This was the dd command I was using:

dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
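For what it's worth, bs=1G count=1 with oflag=dsync issues a single huge write followed by one sync; two variants (sketches, same /tmp target) separate streaming throughput from per-write sync latency:

# Throughput with a single flush at the end:
dd if=/dev/zero of=/tmp/test1.img bs=1M count=1024 conv=fdatasync
# Per-write sync latency (each 1 MiB block is flushed before the next):
dd if=/dev/zero of=/tmp/test2.img bs=1M count=1024 oflag=dsync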

Is there a simple way to convert a full LXD VM to a container?

For BTRFS VM info see:

To convert a VM to a container, you can use lxd-migrate.

See
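Roughly, the workflow looks like this (a sketch, assuming the stock interactive lxd-migrate binary; it runs inside the source machine and pushes its filesystem to a target LXD server as a new instance):

# Run inside the running VM (the migration source); the tool prompts
# for the target LXD server address, authentication, and the new
# instance's name and type.
sudo ./lxd-migrate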


Does the migration also work as follows?

I now have a running LXD VM. Can I do the migration on this host machine (the VM is the guest) and copy it onto the host as a container?

I did the migration.

Now when I want to start the container, I receive the following error:

lxc mailserver 20221104173048.403 ERROR cgroup2_devices - …/src/src/lxc/cgroups/cgroup2_devices.c:bpf_program_load_kernel:332 - Operation not permitted - Failed to load bpf program: (null)
lxc mailserver 20221104173048.531 ERROR conf - …/src/src/lxc/conf.c:turn_into_dependent_mounts:3919 - No such file or directory - Failed to recursively turn old root mount tree into dependent mount. Continuing…
lxc mailserver 20221104173048.665 ERROR start - …/src/src/lxc/start.c:start:2197 - No such file or directory - Failed to exec "/sbin/init"
lxc mailserver 20221104173048.665 ERROR sync - …/src/src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 7)
lxc mailserver 20221104173048.679 WARN network - …/src/src/lxc/network.c:lxc_delete_network_priv:3631 - Failed to rename interface with index 0 from "eth0" to its initial name "vethe2d68ece"
lxc mailserver 20221104173048.679 ERROR lxccontainer - …/src/src/lxc/lxccontainer.c:wait_on_daemonized_start:877 - Received container state "ABORTING" instead of "RUNNING"
lxc mailserver 20221104173048.679 ERROR start - …/src/src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "mailserver"
lxc mailserver 20221104173048.679 WARN start - …/src/src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 18 for process 343115
lxc 20221104173053.811 ERROR af_unix - …/src/src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20221104173053.811 ERROR commands - …/src/src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_state"
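For what it's worth, the 'Failed to exec "/sbin/init"' line suggests the migrated root filesystem is incomplete or missing its init; a quick check from the host (a sketch; the path assumes a non-snap LXD install, while snap installs keep their data under /var/snap/lxd/common/lxd instead):

ls -l /var/lib/lxd/storage-pools/<pool>/containers/mailserver/rootfs/sbin/init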

What would be the fastest storage backend for VMs?

Can anybody help me with this error?

@tomp @stgraber

It does not matter; I get the same error with different VMs that I want to copy as containers…

Does a new VM on the same storage pool exhibit the same issue?

Yes

I would suggest trying a different storage pool type.

If you have limited space, you could try creating a dir pool on an existing (non-BTRFS) directory using:

lxc storage create mypool dir source=/some/directory

Then create the VM on that pool using lxc init <image> myvm --vm -s mypool.

This will use file-based disk images, and it would be interesting to compare its performance against BTRFS.
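Putting those steps together (a sketch; /srv/lxd-dir and <image> are placeholders):

lxc storage create mypool dir source=/srv/lxd-dir
lxc init <image> myvm --vm -s mypool
lxc start myvm
# then repeat the same dd test inside myvm and compare with the BTRFS pool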

See also:

And the storage videos:

Hi,

I will try this. However, in the past it was never this slow.

Kind regards,

Chris Kruijntjens.

Hi Tomp,

What would be the perfect setup here? I have four 1 TB SSD drives. I would like to use VMs and containers and be able to snapshot.

I want fast I/O throughput and, if possible, a software RAID option. How would you set this up?

What can you advise?

Hi @ckruijntjens,
The best answer to your question is ZFS. Since you have four 1 TB SSD drives and are looking for a RAID option, you can use those four drives as RAIDZ/RAIDZ1. And if storage size is not your first priority, you can even choose RAID10.
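For four 1 TB drives, the two layouts compare roughly like this (a sketch; the pool name and device names are placeholders):

# RAIDZ1: ~3 TB usable, survives any single drive failure
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Striped mirrors ("RAID10"): ~2 TB usable, typically better IOPS,
# survives one failure per mirror pair
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd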
Regards.

Hi, thank you,

I will create a ZFS RAID10 then.

What is the best approach for this? Just lxc export everything, create the ZFS pool, and redo lxd init?

Or put an external USB drive in as a second storage location, move all containers there, reconfigure the main storage, and move everything back?

You can use that option as you mentioned, but there is another, easier way: define a new storage pool with lxc storage create and use lxc move <container_name> -s new_pool.
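In practice the instance usually needs to be stopped for a cross-pool move; a minimal sketch with a hypothetical container name:

lxc stop mycontainer
lxc move mycontainer -s new_pool
lxc start mycontainer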
Regards.

Perfect.

I will try this.

Is it possible, once all machines are moved, to reformat the default storage to ZFS and then move all systems back?

What would be the commands to do this?

Thanks for your help.

Let's suppose our disks are sda, sdb, sdc, and sdd, and the ZFS pool name is zfspool. The zpool command is:

zpool create zfspool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

Then you can add the new pool to LXD:

lxc storage create new_pool zfs source=zfspool

And move the containers to the new pool:

lxc move <container_name> -s new_pool
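Once everything has been moved off the old BTRFS pool, retiring it looks roughly like this (a sketch, assuming the default profile's root disk still points at the old pool, here named "default"):

# Repoint the default profile's root disk, then drop the empty pool
lxc profile device set default root pool=new_pool
lxc storage delete default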

Regards.