Could LXD support Hyper-V Shared Disks?

So I work in a very Hyper-V-centric environment. At bare metal in our colo everything is Hyper-V, and we have various workloads running in VMs, from entirely Windows-focused ones like RDS to more varied stuff like websites, phone systems, etc.

What I was wondering is whether LXD could in some way support Hyper-V Shared Disks. These are designed for guest clustering scenarios where you need to run a cluster that requires shared storage and, for obvious reasons, do not want to expose your bare-metal storage to a VM and give it a way to cross over into your physical infrastructure.

What I’m thinking of is an LXD cluster using Shared Disks for the storage pool, so that any failover or migration from host to host is basically instantaneous, kind of like how it is with Ceph but without the headache of, well… Ceph lol. To be clear, this would be the same disk attached to all the nodes so they could all see it.

Any response welcome. It’s fine if the answer is “no, because of X”; I’m just very curious, as I would think it would be useful for a lot of people.


What are these Hyper-V Shared Disks in technical terms? Is it just network file storage, using the CIFS/SMB protocol?
See more on the distinction between file, block and object storage.

In a nutshell, LXD can do

  1. block storage, without much effort because of the way that block storage works. You can use, for example, ZFS on a block storage device.
  2. file storage using the dir storage driver. Not very performant. Might work with Shared Disks.
  3. object storage, with a dedicated storage driver. For example, Ceph; LXD supports Ceph.
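If the Shared Disk does show up inside a Linux guest as a block device, option 1 could be sketched roughly like this. This is untested with Hyper-V Shared Disks, and the device and pool names are placeholders:

```shell
# Rough sketch, untested with Hyper-V Shared Disks. Assumes the
# shared disk appears in the guest as a block device, e.g. /dev/sdb.

# Check which block devices the guest can see.
lsblk

# Create a ZFS-backed LXD storage pool directly on that device.
# "shared" and /dev/sdb are placeholder names for illustration.
lxc storage create shared zfs source=/dev/sdb
```

Note that ZFS itself is not cluster-aware, so importing the same pool on two nodes at once would be a problem; this only shows how LXD consumes a raw block device.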

Therefore, you might be able to set up LXD to store the files on a Hyper-V Shared Disk.
I have not seen a post that discusses setting up LXD to store the files using the dir storage driver on a CIFS/SMB share. LXD might need some feature that CIFS/SMB does not support.
Having said that, give it a go and report back.
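For option 2, a minimal sketch might look like the following. The share path, mount point, and username are assumptions for illustration, not something I have tested:

```shell
# Assumes cifs-utils is installed and an SMB share exists at
# //fileserver/lxd; all names here are placeholders.
sudo mkdir -p /mnt/lxd
sudo mount -t cifs //fileserver/lxd /mnt/lxd -o username=lxduser

# Create a dir-backed LXD storage pool on the mounted share.
lxc storage create smbpool dir source=/mnt/lxd
```

If LXD needs a filesystem feature that CIFS/SMB does not provide (e.g. certain xattrs), the pool creation or container start is where it would likely fail.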

Hi Simos

Thanks for the response. Hyper-V Shared Virtual Disks are a type of VHD attached to the virtual SCSI controller of a virtual machine; the disk presents itself to the guest OS as a shared SAS disk, so it is attached to the VM directly rather than over the network.

The most traditional use I can think of for them would be in a Windows failover cluster, when you need shared storage between two or more nodes and those nodes are VMs. Usually one node would “own” the disk and the other members of the cluster would be on standby, waiting to take over if something happened to the current primary node, so an Active/Passive relationship. You can also do an Active/Active relationship, but I think that is specific to the Scale-Out File Server role.

I suppose I could try having the Shared Disks mounted to a pair of Windows file server VMs in a cluster and then presenting that over SMB3. That would be option 2 on your list, I think?

What I was thinking of was attaching the disks directly to the VMs I’m using as my LXD nodes, so they see the storage as “local”. Because both nodes can “see” the disks directly, if a node went down the other one would be able to start the container without any move or migration. Likewise, a live migration would be quick because there isn’t actually anything to move storage-wise. Thinking about it, the LXD nodes would probably need some way of agreeing who “owns” the container and its storage, like Windows does with VMs.

I will have a play with some test servers when I get a moment, see what I find, and report back.


If there is any possibility of getting those Shared Disks to appear on the host as block devices, then make that your first priority.
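A quick way to check, from inside the Linux guest, whether the Shared Disk is visible as a block device:

```shell
# List the block devices the guest kernel can see; a Shared Disk
# attached to the virtual SCSI controller should appear much like
# a local SAS disk.
lsblk -o NAME,SIZE,TYPE

# Stable identifiers for the same devices, useful for confirming
# that two nodes are seeing the same disk.
ls -l /dev/disk/by-id/
```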

Otherwise, try with CIFS/SMB.

It would be nice if you could post a link to a tutorial that shows how to use those Shared Disks on Linux.