Data inconsistency when attaching LXD block volume to multiple VM instances

Summary

I half expected problems to occur, but decided to try this anyway …

First, I created a storage pool, using the LXD dir storage pool driver.
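
For reference, the pool was created with something like this (the pool name is just an example):

```
lxc storage create demo-pool dir
```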

Next, I created two Alpine Linux virtual machine instances on LXD 5.13, on Ubuntu 23.04 Lunar Lobster, using the dir storage pool.
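
The launch commands looked roughly like this (the image alias and instance names are examples):

```
lxc launch images:alpine/3.17 vm1 --vm --storage demo-pool
lxc launch images:alpine/3.17 vm2 --vm --storage demo-pool
```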

Next, I created a custom block storage volume, on the dir storage pool, and attached it to both running VM instances.
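
Something along these lines (the volume name and size are examples):

```
lxc storage volume create demo-pool shared-block --type=block size=10GiB
lxc storage volume attach demo-pool shared-block vm1
lxc storage volume attach demo-pool shared-block vm2
```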

I exec’d into one of the VMs and created a GPT partition table, with a single ext4 partition, on the block device /dev/sdb.
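
Roughly like this, after installing the needed tools (package names assume Alpine):

```
lxc exec vm1 -- sh
# inside vm1
apk add parted e2fsprogs
parted --script /dev/sdb mklabel gpt mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/sdb1
```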

I used partprobe /dev/sdb on the second VM, to refresh the disk, and ensure it could “see” the newly created partition.
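
For example:

```
lxc exec vm2 -- partprobe /dev/sdb
lxc exec vm2 -- cat /proc/partitions   # sdb1 should now be listed
```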

On both VMs, I mounted /dev/sdb1 to /mnt/data.
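
For example:

```
lxc exec vm1 -- mkdir -p /mnt/data
lxc exec vm1 -- mount /dev/sdb1 /mnt/data
lxc exec vm2 -- mkdir -p /mnt/data
lxc exec vm2 -- mount /dev/sdb1 /mnt/data
```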

Finally, on one VM, I used vim to create a text file on the partition. However, the other VM couldn’t “see” the file.
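
In other words, something like this reproduces the inconsistency:

```
lxc exec vm1 -- sh -c 'echo hello > /mnt/data/test.txt'
lxc exec vm2 -- ls /mnt/data   # test.txt does not show up on vm2
```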

Question: How can I ensure that multiple LXD VMs, with the same block device attached and mounted, can access the data created by the other VMs? Are there any volume synchronization options available to ensure data consistency?

Indeed you can expect problems with this approach.

What is your use case here that requires sharing a block volume?

Have you considered sharing a custom filesystem volume instead? That can be attached to multiple instances concurrently.
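
For example, reusing the names from above, a filesystem volume (the default custom volume type) can be attached to both VMs at a path, something like:

```
lxc storage volume create demo-pool shared-fs
lxc storage volume attach demo-pool shared-fs vm1 /mnt/data
lxc storage volume attach demo-pool shared-fs vm2 /mnt/data
```

For VMs this is exposed through the LXD agent (9p/virtiofs) rather than as a raw block device, so both instances work against the same underlying host filesystem.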

We have been discussing internally whether to block sharing a block volume with multiple instances concurrently, because of the issues it can cause. If we do, we will add an option to allow it, so that those who know why they are doing it can still do so.

There are reasons for doing it, but it requires the instances themselves to coordinate access, using something like lvmlockd or a clustered filesystem.


For the same block device to be attached and mounted on multiple VMs concurrently, as Tom said, you need a clustered filesystem like GFS2 or OCFS2. Both filesystems use the Distributed Lock Manager (DLM) to manage concurrent access and ensure that no corruption or inconsistency occurs.
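
As a very rough sketch only (this assumes a working corosync/dlm_controld cluster, which is not shown, and the cluster/filesystem names are placeholders), the GFS2 side looks something like:

```
# On one node: format the shared device for the DLM lock manager.
# -t is <clustername>:<fsname> and must match the corosync cluster name;
# -j is the number of journals, one per node that will mount the FS.
mkfs.gfs2 -p lock_dlm -t mycluster:shareddata -j 2 /dev/sdb1

# On each node (with dlm_controld running), mount it:
mount -t gfs2 /dev/sdb1 /mnt/data
```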


I don’t have a specific use case for mounting a single volume to multiple VM instances. I’m simply seeing what’s possible by experimentation.

I like the idea of preventing users from attaching a block device to multiple VM instances by default, while allowing someone to force the operation if they know what they're doing.

Providing some in-app messaging that points users toward the right tools, such as lvmlockd, would be helpful.
