How to move profiles or images to another storage pool?

Hi.

I have just migrated my containers from an LVM storage pool to a (RAID1) btrfs storage pool - on the same host.
I used the lxc move command to do so.
But now I have some images and profiles left in the LVM pool that I would also like to move to the btrfs pool - so I can eventually abandon the old LVM storage.
Is there a way to do so?

I see this is somewhat covered in How do I move an image from one storage pool to another?
But does that mean the “images” will remain available if I delete the old storage pool, because they’re not actually in a pool per se?
And what about the profiles, which don’t seem to have a directory of their own?

I also see the question of the profiles somewhat covered in Move default storage location with profile,
which I interpret as meaning I should duplicate the profiles - but that still leaves the question of getting the profiles to reside in the new pool?

I did:

lxc move firty --storage brpool00
For each container.
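For many containers this can be scripted; a sketch (the names are hardcoded here to the one container from this thread - in practice they could come from `lxc list -c n -f csv`):

```shell
# Sketch: move each container to the new pool.
# Container names are hardcoded; `lxc list -c n -f csv` would list them all.
containers="firty"                          # add the other container names
for c in $containers; do
    echo "lxc move $c --storage brpool00"   # remove `echo` to actually run it
done
```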

My storage pools are:

[root@asusi7 storage-pools]# lxc storage ls
+----------+--------+--------------------------------------+-------------+---------+---------+
|   NAME   | DRIVER |                SOURCE                | DESCRIPTION | USED BY |  STATE  |
+----------+--------+--------------------------------------+-------------+---------+---------+
| brpool00 | btrfs  | 7115d9f6-f852-43a9-a626-3870e74596e8 |             | 24      | CREATED |
+----------+--------+--------------------------------------+-------------+---------+---------+
| defnew   | lvm    | defnew                               |             | 4       | CREATED |
+----------+--------+--------------------------------------+-------------+---------+---------+

With defnew being the old and brpool00 the new one.

[root@asusi7 storage-pools]# lxc storage show defnew
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg_name: defnew
  source: defnew
  volatile.initial_source: /dev/sdb5
description: ""
name: defnew
driver: lvm
used_by:
- /1.0/images/524f749ce98abd92ea3c8badd01a66cf8baa1daef5054516a4f60a27fb6a6c1e
- /1.0/profiles/default
- /1.0/profiles/newdef
- /1.0/profiles/only_disk
status: Created
locations:
- none
[root@asusi7 storage-pools]# lxc storage show brpool00
config:
  source: 7115d9f6-f852-43a9-a626-3870e74596e8
  volatile.initial_source: 7115d9f6-f852-43a9-a626-3870e74596e8
description: ""
name: brpool00
driver: btrfs
used_by:
- /1.0/instances/firty
... and more ...
status: Created
locations:
- none
  • So, you see the “residuals” that I want to move out of the old pool are:

    • /1.0/images/524f749ce98abd92ea3c8badd01a66cf8baa1daef5054516a4f60a27fb6a6c1e
    • /1.0/profiles/default
    • /1.0/profiles/newdef
    • /1.0/profiles/only_disk
  • Does LXD have some notion of a “default” storage pool? If so, how can I change it to the new one?

  • The LVM pool uses a thinpool (and by now it is also multi-device).

  • (BTW: I can’t remember what the image is for - or if it is still required.)

  • If necessary, I can endure the pain of duplicating the profiles to the new pool and reassigning them to the containers (before abandonment).

  • This is on Fedora 39 - and snap is not installed.

I hope this makes sense, and that I have provided proper information.
Kind regards, Jakob.

The profiles you just need to update with lxc profile edit so they don’t reference the pool you’re trying to get rid of.

The images don’t matter, you can just delete those storage volumes with lxc storage volume delete defnew image/FINGERPRINT as those are just cached volumes for faster instance creation, the image data itself is stored outside of storage pools.

Hi, Stéphane, thanks for the reply.

  1. Profiles
  2. Image(s)

Profiles:

Yeah, that became obvious - and it should have been a clue that the old pool only listed 3 profiles out of many more.
The 3 listed profiles were those that referenced a pool, like:
lxc profile edit <profile-name>

config: {}
devices:
  root:
    path: /
    pool: <old-pool-name>
    type: disk
...

So after editing to reference the new pool, those 3 profiles were automagically shown in the new pool.

[root@asusi7 storage-pools]# lxc storage show <new-pool-name>
used_by:
- /1.0/profiles/brpool00_default [renamed]
- /1.0/profiles/default
- /1.0/profiles/only_disk
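For the record, the same change can be made non-interactively with `lxc profile device set`; a sketch, assuming the disk device is named `root` as in the YAML above:

```shell
# Point a profile's root disk at the new pool without opening an editor
# (assumes the disk device is named "root", as in the YAML above)
lxc profile device set default root pool brpool00
```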

Image(s):

The image was a little trickier.
The command lxc storage volume delete defnew image/FINGERPRINT did not work, and I tried a number of variations of lxc storage volume delete, but none worked.
Then it struck me that the thing I wanted to delete was understood as both a volume and an image:

  • lxc image ls
  • lxc storage volume list defnew
image   FINGERPRINT:    '524f749ce98a'
volume  NAME:           '524f749ce98abd92ea3c8badd01a66cf8baa1daef5054516a4f60a27fb6a6c1e'
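The short fingerprint shown by `lxc image ls` is just a prefix of the full volume name, which can be checked with plain shell:

```shell
# The volume name is the full image fingerprint;
# `lxc image ls` shows only a prefix of it.
volume=524f749ce98abd92ea3c8badd01a66cf8baa1daef5054516a4f60a27fb6a6c1e
image=524f749ce98a
case "$volume" in
    "$image"*) echo "same object" ;;
    *)         echo "different objects" ;;
esac
```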

So this worked:
lxc image delete 524f749ce98a
It also “removed” the volume, and the container referencing the volume started speedily without any errors.
The cached image is apparently not recreated, so that is moot?
I did notice another thread, where you solved a similar problem with lxd sql global "DELETE FROM storage_volumes WHERE name='focaltest'"
But I persisted with lxc and solved it by addressing the image instead of the volume.
(Maybe there are issues with lxc storage volume delete?)

Goal:

After editing profiles and deleting image, the old pool was now “empty” and could be deleted.

lxc storage show <old-pool-name>
    used_by: []

lxc storage delete <old-pool-name>
[root@asusi7 storage-pools]# lxc storage ls
+----------+--------+--------------------------------------+-------------+---------+---------+
|   NAME   | DRIVER |                SOURCE                | DESCRIPTION | USED BY |  STATE  |
+----------+--------+--------------------------------------+-------------+---------+---------+
| brpool00 | btrfs  | 7115d9f6-f852-43a9-a626-3870e74596e8 |             | 27      | CREATED |
+----------+--------+--------------------------------------+-------------+---------+---------+

Old pool gone, only new pool remains.

So goal achieved, thanks for the help.

LVM:

Since the old LVM pool was no longer required, I could also delete the underlying LVM volume group and physical volumes.
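Before doing so, it is worth confirming that nothing is left in the volume group; a sketch using the standard LVM tools (defnew is the VG name from lxc storage show):

```shell
# Expect no logical volumes to remain in the group
lvs defnew
# Show which physical volumes still belong to the group
pvs -S vg_name=defnew
```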

vgremove defnew
pvremove /dev/sda1 /dev/sdb6