I have been trying to move all my containers from their original, default zfs storage pool to a new pool using the btrfs driver. The main reason is that I have real performance issues with Docker on ZFS, a problem already well documented by others.
Containers all moved without issue; however, I can’t delete the original zfs pool (pool1).
This is the error I get:
> $ lxc storage delete pool1
> Error: The storage pool is currently in use.
Things I have done:
lxc move mycontainer --instance-only --storage pool2
I have two weeks of daily snapshots per container, and
--instance-only proved necessary because LXD will not copy the ZFS snapshots over as btrfs snapshots on
pool2. Without this flag, the new pool fills up fast! The loop I used is sketched below.
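For context, this is roughly what I ran; the container names are from my own setup, and stopping each container first is just me playing it safe:

for c in bazarr books ddns emby2 freshrss grav1 nginx-reverse; do
    lxc stop "$c"
    lxc move "$c" --instance-only --storage pool2
    lxc start "$c"
done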
lxc profile device remove default root
lxc profile device add default root disk path=/ pool=pool2
It seems it worked:
> $ lxc profile device show default
> eth0:
>   name: eth0
>   nictype: bridged
>   parent: lxdbr0
>   type: nic
> root:
>   path: /
>   pool: pool2
>   type: disk
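To double-check that everything really landed on the new pool, I also compared the volumes on each pool and the effective root device of one container (standard listing commands, nothing specific to my setup):

lxc storage volume list pool2
lxc storage volume list pool1
lxc config show mycontainer --expanded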
Now I see there is a lot of residue left in pool1, but I do not know how to delete it.
None of the snapshots below show up under
lxc info mycontainer.
There are also a lot of snapshots listed for containers that have long since been deleted.
Lastly, I am surprised to see that the profiles are tied to my old storage pool (pool1). How do I move those over to pool2?
Should I just
zfs destroy those ZFS datasets and snapshots by hand? (See the sketch after the output below for what I mean.)
> $ lxc storage show pool1
> config:
>   source: pool1
>   volatile.initial_source: /dev/mapper/datasetpool1
>   zfs.pool_name: pool1
> description: ""
> name: pool1
> driver: zfs
> used_by:
> - /1.0/instances/bazarr/snapshots/r.a.s
> - /1.0/instances/books/snapshots/calibre-web_clean1
> - /1.0/instances/ddns/snapshots/ok
> - /1.0/instances/emby2/snapshots/before-snapshot
> - /1.0/instances/freshrss/snapshots/working
> - /1.0/instances/grav1/snapshots/before-skeleton
> - /1.0/instances/grav1/snapshots/before-skeleton2
> - /1.0/instances/grav1/snapshots/skeleton-ok1
> - /1.0/instances/nginx-reverse/snapshots/http-ok
> - /1.0/instances/nginx-reverse/snapshots/http-ok-2
> - /1.0/instances/nginx-reverse/snapshots/http-ok-3
> - /1.0/instances/nginx-reverse/snapshots/ssl-embi-ombi
> - /1.0/instances/nginx-reverse/snapshots/work-in-progress
> - /1.0/instances/nginx-reverse/snapshots/working
> - /1.0/instances/nginx-reverse/snapshots/working-grav1
> - /1.0/instances/nginx-reverse/snapshots/working2
> [...]
> - /1.0/profiles/default-highres
> - /1.0/profiles/default-lowres
> - /1.0/profiles/default-lowres-docker
> - /1.0/profiles/default-midres-docker
> - /1.0/storage-pools/pool1/volumes/image/028d045b1cfcfc8a69cc68674557bd86e015c0ba4bb5c3d6851043f785963728
> - /1.0/storage-pools/pool1/volumes/image/412fb387e01d8130016d300bbc33fbaee84c1b17ddfb7cb9f85ae63e0c4fa618
> - /1.0/storage-pools/pool1/volumes/image/d6f281a2e523674bcd9822f3f61be337c51828fb0dc94c8a200ab216d12a0fff
> status: Created
> locations:
> - none
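To make the question concrete, this is what I am tempted to run, assuming the leftovers still sit under the usual pool1/containers/<name> layout LXD creates (bazarr is just one example here; -n with -v makes destroy a dry run so nothing is removed yet):

# see what is actually left on the old pool
zfs list -r -t all pool1
# dry run on one leftover dataset before destroying anything for real
zfs destroy -rnv pool1/containers/bazarr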
Thanks in advance for any suggestion!
I figured out the part about the profiles; I just had to change the root disk's pool to pool2 in each profile.
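For anyone hitting the same thing, this is essentially what I did for each remaining profile (the same remove/add dance as for default; the profile names are the ones from the used_by list above):

for p in default-highres default-lowres default-lowres-docker default-midres-docker; do
    lxc profile device remove "$p" root
    lxc profile device add "$p" root disk path=/ pool=pool2
done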