Move or copy storage volume between nodes in a cluster

[root@wh-0001 ~]# lxc config show wh3
architecture: aarch64
config:
  image.architecture: aarch64
  image.description: Ubuntu 20.04 LTS server (20210622)
  image.os: ubuntu
  image.release: focal
  volatile.base_image: 7cf35d6166ca0a7f4a3706170331da488b438f99ab3d15f3ee27d65bb65ac800
  volatile.eth0.host_name: vethabfafa5c
  volatile.eth0.hwaddr: 00:16:3e:1c:fd:17
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 6d9e6920-71c8-4019-831c-2555361e65c9
devices:
  data:
    path: /data
    pool: data
    source: test1
    type: disk
  root:
    path: /
    pool: test
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
[root@wh-0001 ~]# lxc config show wh3 | grep device
devices:
[root@wh-0001 ~]# lxc storage volume list data
+--------+-------+-------------+---------+----------+
|  TYPE  | NAME  | DESCRIPTION | USED BY | LOCATION |
+--------+-------+-------------+---------+----------+
| custom | test1 |             | 1       | wh-0001  |
+--------+-------+-------------+---------+----------+

I have an instance named wh3 (spawned on node wh-0001) with an attached data volume that lives in the storage pool data on node wh-0001.
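
For reference, the volume was created and attached roughly like this (reconstructed from the device config above, so the exact commands may differ slightly):

lxc storage volume create data test1 --target wh-0001
lxc storage volume attach data test1 wh3 data /data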

I’m trying to migrate instance wh3 from node wh-0001 to node wh-0002. How can I migrate it together with the attached data volume?

Already tried:

  1. Move the instance from wh-0001 to wh-0002 directly; this failed.
[root@wh-0001 ~]# lxc move wh3 --target wh-0002 --storage test
Error: Migration operation failure: Copy instance operation failed: Failed instance creation: Failed creating instance record: Failed initialising instance: Failed to add device "data": Failed loading custom volume: No such object
  2. Detach the volume, move the instance, move (or copy) the storage volume, then re-attach it.
    This failed at the move (or copy) storage volume step; I haven’t found a way to move (or copy) a storage volume between nodes.
[root@wh-0001 ~]# lxc storage volume detach data test1 wh3
[root@wh-0001 ~]# lxc storage volume list data
+--------+-------+-------------+---------+----------+
|  TYPE  | NAME  | DESCRIPTION | USED BY | LOCATION |
+--------+-------+-------------+---------+----------+
| custom | test1 |             | 0       | wh-0001  |
+--------+-------+-------------+---------+----------+
[root@wh-0001 ~]# lxc storage volume copy data/test1 data/test2 --target wh-0002
Storage volume copied successfully!
[root@wh-0001 ~]# lxc storage volume list data
+--------+-------+-------------+---------+----------+
|  TYPE  | NAME  | DESCRIPTION | USED BY | LOCATION |
+--------+-------+-------------+---------+----------+
| custom | test1 |             | 0       | wh-0001  |
+--------+-------+-------------+---------+----------+
| custom | test2 |             | 0       | wh-0001  |
+--------+-------+-------------+---------+----------+

I specified the target as wh-0002, but it doesn’t seem to work…

[root@wh-0001 ~]# lxc storage volume --help | grep move
  move           Move storage volumes between pools

I also tried moving the storage volume, but it doesn’t seem to support moving between nodes…
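
As far as I can tell it only handles moving (or renaming) a volume between pools on the same member, e.g. something along these lines (using the two pools from this cluster):

lxc storage volume move data/test1 test/test1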

Hoping for some help.
Thanks!

Updated post’s category to LXD rather than General.

Moving/copying volumes to another remote seems to work OK in LXD 4.16 using:

lxc storage volume copy default/vol1 v2:default/vol1
lxc storage volume move default/vol1 v2:default/vol2
lxc storage volume ls v2:default
+--------+------+-------------+--------------+---------+
|  TYPE  | NAME | DESCRIPTION | CONTENT-TYPE | USED BY |
+--------+------+-------------+--------------+---------+
| custom | vol1 |             | filesystem   | 0       |
+--------+------+-------------+--------------+---------+
| custom | vol2 |             | filesystem   | 0       |
+--------+------+-------------+--------------+---------+

I’ll check cluster mode using --target now…
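
(Presumably with something like the following on a clustered deployment; the pool, volume and member names here are placeholders:)

lxc storage volume copy default/vol1 default/vol1-copy --target node2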


Yep this looks like a bug.

I’ve opened a bug report here:

Thanks for the reply!
We’re using LXD 4.0.7, installed with snap install lxd --channel 4.0/stable.
Copying the storage volume worked after setting up a remote, while the plain lxc CLI call still returned an error :ok_man:
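
In case it helps others, a rough sketch of what that remote-based workaround could look like end to end (the remote name, address and new volume name are placeholders, not something verified in this thread):

lxc remote add cluster https://wh-0001:8443                                   # add the cluster's API as a client remote
lxc storage volume copy data/test1 cluster:data/test1-new --target wh-0002    # copy the detached volume onto the wh-0002 member
lxc move wh3 --target wh-0002                                                 # move the instance itself
lxc storage volume attach data test1-new wh3 data /data                       # re-attach the copy on the new member
lxc storage volume delete data test1 --target wh-0001                         # remove the original volume from wh-0001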