How to attach a storage volume with the same name defined on multiple nodes in an LXD cluster?

Hi,
I am playing with LXD storage volumes in an LXD cluster and cannot figure out how to attach a storage volume to a container when a volume with the same name has been defined on multiple nodes in the cluster…

My scenario:

  1. created a 3-node LXD cluster (nodes named: lxd-host-01, lxd-host-02, lxd-host-03)
  2. created a storage pool (named: mypool) of type “dir” and targeted it to all nodes of the cluster
  3. created a storage volume (named: myfilevolume) and targeted it to all nodes of the cluster
  4. so, lxc storage volume list mypool returns:

+--------+--------------+-------------+--------------+---------+-------------+
|  TYPE  |     NAME     | DESCRIPTION | CONTENT TYPE | USED BY |  LOCATION   |
+--------+--------------+-------------+--------------+---------+-------------+
| custom | myfilevolume |             | filesystem   | 0       | lxd-host-01 |
+--------+--------------+-------------+--------------+---------+-------------+
| custom | myfilevolume |             | filesystem   | 0       | lxd-host-02 |
+--------+--------------+-------------+--------------+---------+-------------+
| custom | myfilevolume |             | filesystem   | 0       | lxd-host-03 |
+--------+--------------+-------------+--------------+---------+-------------+
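For reference, steps 2 and 3 above can be sketched roughly as follows (pool name, driver, volume name, and node names are taken from the post; the exact commands I used may have differed slightly):

```shell
# In a cluster, the pool is first defined on each member with --target,
# then created cluster-wide with a final call without --target:
lxc storage create mypool dir --target lxd-host-01
lxc storage create mypool dir --target lxd-host-02
lxc storage create mypool dir --target lxd-host-03
lxc storage create mypool dir

# Custom volumes, by contrast, are created per member, one --target at a time:
lxc storage volume create mypool myfilevolume --target lxd-host-01
lxc storage volume create mypool myfilevolume --target lxd-host-02
lxc storage volume create mypool myfilevolume --target lxd-host-03
```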
  5. created a new container: lxc launch ubuntu:20.10 c11 --target lxd-host-01
  6. however, trying to attach the volume to it, I am getting an error:
    lxc storage volume attach mypool myfilevolume c11 mydevice /home/mydevicepath
    Error: More than one cluster member has a volume named "myfilevolume"

So, how do I tell “lxc storage volume attach” to use the volume from a specific node? There is no “--target” flag available for “lxc storage volume attach”…

Or, maybe attach should default to the node on which the container runs anyway, because I guess it is not possible to attach a volume from a node other than the one where the container runs?

I am playing with the latest LXD (4.7).

Thanks for any suggestions.

Cheers,
Waldemar

That’s likely a bug in the recent logic changes that @tomp implemented.
We’ll sort it out, as this should definitely work.

Ah, that’s new info for me: we can define a volume with the same name on the same pool on multiple nodes (I thought they were unique per pool, except for the legacy workaround for ceph/cephfs volumes)?

In fact, that “More than one cluster member has a volume named” error is an old one: there was previously an explicit check that treated multiple returned volumes as an error condition (which is why I left it in), although clearly it wasn’t triggering before for some reason.

Are you able to attach the disk device using the following command?

lxc config device add c1 mydevice disk source=myfilevolume pool=mypool

Yes, I am (in my case it was: lxc config device add c11 mydevice disk source=myfilevolume pool=mypool path=/home/mydevicepath), but lxc storage volume list mypool still shows both volumes as in use (and I have just one container running):

lxc list
+------+---------+---------------------+------+-----------+-----------+-------------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |  LOCATION   |
+------+---------+---------------------+------+-----------+-----------+-------------+
| c11  | RUNNING | 240.131.0.37 (eth0) |      | CONTAINER | 0         | lxd-host-01 |
+------+---------+---------------------+------+-----------+-----------+-------------+
lxc storage volume list mypool
+--------+--------------+-------------+--------------+---------+-------------+
|  TYPE  |     NAME     | DESCRIPTION | CONTENT TYPE | USED BY |  LOCATION   |
+--------+--------------+-------------+--------------+---------+-------------+
| custom | myfilevolume |             | filesystem   | 1       | lxd-host-01 |
+--------+--------------+-------------+--------------+---------+-------------+
| custom | myfilevolume |             | filesystem   | 1       | lxd-host-02 |
+--------+--------------+-------------+--------------+---------+-------------+

And the volume on the second node: lxc storage volume show mypool myfilevolume --target lxd-host-02

config: {}
description: ""
name: myfilevolume
type: custom
used_by:
- /1.0/instances/c11
location: lxd-host-02
content_type: filesystem

It is quite possibly related to the bug, but volume deletion in this situation reports errors too, as in this scenario (same cluster, nodes and storage pool as above, no containers running):

  1. creating a new file volume and targeting it to a node (lxc storage volume create mypool myfilevolume --target lxd-host-01)

  2. creating a new container: lxc launch ubuntu:20.10 c11 --target lxd-host-01

  3. attaching the new volume to the container: lxc storage volume attach mypool myfilevolume c11 mydevice /home/mydevicepath

  4. creating storage volume with the same name, but targeted to a different node: lxc storage volume create mypool myfilevolume --target lxd-host-02

  5. trying to delete the newly created volume:

        lxc storage volume delete mypool myfilevolume --target lxd-host-02
        Error: The storage volume is still in use
  6. somehow the volume on the second node gets associated with the container running on the first node, as shown by: lxc storage volume show mypool myfilevolume --target lxd-host-02

config: {}
description: ""
name: myfilevolume
type: custom
used_by:
- /1.0/instances/c11
location: lxd-host-02
content_type: filesystem
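As a possible workaround until this is fixed (untested, and based only on the behaviour reported above), detaching the volume from the container first should clear the spurious in-use state, after which each member’s copy can be deleted explicitly:

```shell
# Detach the volume from the container, then delete each member's copy:
lxc storage volume detach mypool myfilevolume c11 mydevice
lxc storage volume delete mypool myfilevolume --target lxd-host-02
lxc storage volume delete mypool myfilevolume --target lxd-host-01
```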

Cheers,
Waldemar

OK thanks, I’m going to see how broken this was before I added the patch https://github.com/lxc/lxd/pull/8074 as that will give me a better basis to work from on finding a resolution.

@stgraber

I’ve checked, and this bug (the same behaviour) was already present before the patch above.

This is in LXD 4.6:

lxc storage volume create local vol1 --target v1
lxc storage volume create local vol1 --target v2
lxc storage volume create local vol1 --target v3
lxc init images:alpine/3.12 c1
lxc storage volume attach local vol1 c1 vol path=/mnt/
Error: more than one node has a volume named vol1
lxc storage volume attach local vol1 c1 vol path=/mnt/ --target v1
Error: unknown flag: --target

Where things have changed, and are not correct, is around showing volumes as in use and around deleting volumes (the in-use check).

I’ll look into fixing all of it now.

Thanks!

Working PR for this: