```
	NAME                              STATE     READ WRITE CKSUM
	RAID0                             ONLINE       0     0     0
	  mirror-0                        ONLINE       0     0     0
	    wwn-0x50014ee2acc24fc1-part1  ONLINE       0     0     0
	    wwn-0x50014ee2576b7307-part1  ONLINE       0     0     0

errors: No known data errors

  pool: storage1
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	storage1                                      ONLINE       0     0     0
	  /var/snap/lxd/common/lxd/disks/storage1.img ONLINE       0     0     0

errors: No known data errors
```
I would like to use the pool RAID0 as the default storage pool for LXD, then remove/destroy the ZFS pool storage1, freeing up that space for the local system.
Thank you very much in advance for your suggestions.
You can use `lxc profile device set default root pool=<new pool>`.
This will change the pool used for instances using that profile.
Be aware, though, that any instances you have already created using that profile will not be moved to the new pool; either the command will fail with errors, or the instances will fail to start on the next attempt (because they think their root disk is on the new pool when it is in fact still on the old one).
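For example, a rough sketch (pool and instance names are placeholders, and this assumes an LXD version where `lxc move` accepts `--storage`):

```
# Point the default profile's root disk at the new pool:
lxc profile device set default root pool=RAID0

# Existing instances keep their volumes on the old pool, so move
# each one explicitly ("c1" is a placeholder name):
lxc stop c1
lxc move c1 c1-tmp --storage RAID0   # relocates the root disk
lxc move c1-tmp c1                   # move back to the original name
lxc start c1
```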
You can also launch an instance on a specific pool using `lxc launch images:ubuntu/focal c1 -s <pool>`; this will override the profile's root disk pool with the pool specified.
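A quick sketch of that (the pool name here is just an example):

```
# Launch a container whose root disk lives on a specific pool:
lxc launch images:ubuntu/focal c1 -s RAID0

# The override is recorded as a local root device on the instance:
lxc config device show c1
```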
If you want to create a new storage pool you can use `lxc storage create <pool name> zfs source=<existing pool or dataset>`, where source is the existing ZFS pool (or dataset) you want LXD to use. Please can you show `sudo zfs list` so we can see what datasets are available on your new zpool?
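For example, something along these lines (the pool name `mypool` and the dataset `RAID0/lxd` are hypothetical):

```
# See which pools and datasets exist before handing one to LXD:
sudo zfs list

# Reuse the existing zpool as an LXD storage pool:
lxc storage create mypool zfs source=RAID0

# ...or point LXD at a dedicated (empty) dataset instead, so LXD
# does not take over the whole pool:
lxc storage create mypool zfs source=RAID0/lxd
```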
Thank you for your valuable advice. I ended up editing the default profile first:
```
lxc profile edit default
```
I set the pool to be the new one, in my case RAID1 (I renamed it because it was confusing having a pool called RAID0 when in reality the two drives are in a mirror). I changed the labels of the drives as well with e2label, so the UUIDs and labels were on point.
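For reference, the relevant fragment of the edited profile ended up looking roughly like this (a sketch; the rest of the profile is omitted):

```
devices:
  root:
    path: /
    pool: RAID1
    type: disk
```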
Finally, I removed the old pool from LXD using `lxc storage delete storage1`, created a new one with `lxc storage create RAID1 zfs source=RAID1`, then verified the default profile settings and the available storage pools, and tested creating a container and a VM. All good, without having to change anything else (I hope it all still works after next month's maintenance, after a reboot):
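(A sketch of those checks; the instance names here are examples, not my originals.)

```
lxc profile show default               # root device now points at RAID1
lxc storage list                       # storage1 gone, RAID1 present
lxc launch images:ubuntu/focal test-c1
lxc launch images:ubuntu/focal test-vm1 --vm
```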