I moved all my containers from my default storage pool to a newly created one.
I don’t need any containers from the default pool, and I want to recover all of the unused storage from the ZFS rpool, since it is crowding out my workstation’s system files. What do you suggest?
From the lxc CLI, there are no containers left in the default pool:
> lxc list -c n,b -f csv | grep ',default'
>
All 30 containers were moved, so they now show up on the new pool:
❯ lxc list -c n,b -f csv | grep 'workstation-default' | wc -l
30
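As a cross-check (not part of the original commands above, but standard lxc subcommands), LXD can also report directly what it still tracks on a pool:
❯ lxc storage volume list default   # container/image/custom volumes LXD still records on the pool
❯ lxc storage info default          # space usage summary and counts of resources using the pool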
lxc storage reports:
❯ lxc storage show default
config:
source: rpool/lxd
volatile.initial_source: rpool/lxd
zfs.pool_name: rpool/lxd
description: ""
name: default
driver: zfs
used_by:
- /1.0/images/00e2c079ee7f3d22c61d3c75e455f7628b016e8bb4a2b4366ed22ae84be6302a
- /1.0/images/1c72c70f037b2fd8ce2db8ac00cbbe82e8c6091eed1f2e08831bc1365ae4dcf2
- /1.0/images/d8fb3478449407cc78bd02a4695b580e3e922048bc21cd9d350cbc2f8872329f
- /1.0/images/dec3930f0401db2bb1c1673195623fce483d1dcad4ae40b3f94bb3778cc75ae2
- /1.0/images/e299296138c256b79dda4e61ac7454cf4ac134b43f5521f1ac894f49a9421d00
- /1.0/profiles/default
- /1.0/profiles/fastbook
- /1.0/profiles/kr-test
status: Created
locations:
- none
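Those used_by entries (images and profiles) are what keep a pool marked as in use. A rough sketch of how they could be cleared before deleting the pool, assuming the new pool is workstation-default and the profiles carry a root disk device on the old pool (the fingerprint below is simply the first one from the list above, truncated):
❯ lxc profile edit default          # change the root device's "pool: default" to the new pool (repeat for fastbook and kr-test)
❯ lxc image delete 00e2c079ee7f     # cached images can be deleted; they are re-downloaded from the remote if needed
❯ lxc storage delete default        # only succeeds once used_by is empty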
zfs shows
❯ zfs list rpool/lxd -r
NAME USED AVAIL REFER MOUNTPOINT
rpool/lxd 8.10G 113G 96K none
rpool/lxd/containers 129M 113G 96K none
rpool/lxd/containers/gbif2 129M 113G 927M /var/snap/lxd/common/lxd/storage-pools/default/containers/gbif2
rpool/lxd/containers/pyc-gitub-fork 140K 113G 2.01G /var/snap/lxd/common/lxd/storage-pools/default/containers/pyc-gitub-fork
rpool/lxd/custom 96K 113G 96K none
rpool/lxd/deleted 5.14G 113G 96K none
rpool/lxd/deleted/containers 3.21G 113G 96K none
rpool/lxd/deleted/containers/8f447242-b807-4c44-94d5-c07819ad7d0f 1.13G 113G 1.72G legacy
rpool/lxd/deleted/containers/d81452ba-eb87-4dfe-a3e8-f48193b53407 248M 113G 857M /var/snap/lxd/common/lxd/storage-pools/default/containers/ansible-template
rpool/lxd/deleted/containers/d9c9bc12-4d32-42de-83a8-e19492b228fb 1.84G 113G 2.37G /var/snap/lxd/common/lxd/storage-pools/default/containers/pycharm
rpool/lxd/deleted/custom 96K 113G 96K none
rpool/lxd/deleted/images 1.93G 113G 96K none
rpool/lxd/deleted/images/690801402e1d4e02c07ba2d1a29bb9a9b4825f037c12ccad8cb4d062d2450d2c 644M 113G 644M /var/snap/lxd/common/lxd/storage-pools/default/images/690801402e1d4e02c07ba2d1a29bb9a9b4825f037c12ccad8cb4d062d2450d2c
rpool/lxd/deleted/images/e0c3495ffd489748aa5151628fa56619e6143958f041223cb4970731ef939cb6 638M 113G 638M /var/snap/lxd/common/lxd/storage-pools/default/images/e0c3495ffd489748aa5151628fa56619e6143958f041223cb4970731ef939cb6
rpool/lxd/deleted/images/e9589b6e9c886888b3df98aee0f0e16c5805383418b3563cd8845220f43b40ff 696M 113G 696M legacy
rpool/lxd/deleted/virtual-machines 96K 113G 96K none
rpool/lxd/images 2.84G 113G 96K none
rpool/lxd/images/00e2c079ee7f3d22c61d3c75e455f7628b016e8bb4a2b4366ed22ae84be6302a 336M 113G 336M legacy
rpool/lxd/images/1c72c70f037b2fd8ce2db8ac00cbbe82e8c6091eed1f2e08831bc1365ae4dcf2 708M 113G 708M legacy
rpool/lxd/images/d8fb3478449407cc78bd02a4695b580e3e922048bc21cd9d350cbc2f8872329f 316M 113G 316M legacy
rpool/lxd/images/dec3930f0401db2bb1c1673195623fce483d1dcad4ae40b3f94bb3778cc75ae2 112K 99.9M 104K legacy
rpool/lxd/images/dec3930f0401db2bb1c1673195623fce483d1dcad4ae40b3f94bb3778cc75ae2.block 538M 113G 538M -
rpool/lxd/images/e11dadbafbaaa28de59fe7c07cd060edbf90658981f383b8d656cbf016669e5d 308M 113G 308M /var/snap/lxd/common/lxd/storage-pools/default/images/e11dadbafbaaa28de59fe7c07cd060edbf90658981f383b8d656cbf016669e5d
rpool/lxd/images/e299296138c256b79dda4e61ac7454cf4ac134b43f5521f1ac894f49a9421d00 698M 113G 698M legacy
rpool/lxd/virtual-machines
rpool itself seems ok:
❯ sudo zpool status -Lv rpool
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 00:07:29 with 0 errors on Sun Jul 11 00:31:31 2021
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sdb4 ONLINE 0 0 0
errors: No known data errors
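Since the goal is to see where the space is actually going, another generic ZFS check (not run in the thread) is the space-accounting view, which splits USED into snapshots, the dataset itself, and children:
❯ zfs list -o space -r rpool/lxd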
tomp (Thomas Parrott), August 25, 2022, 7:41am
Changed to LXD topic.
It looks like LXD considers your default pool to be empty of instances (it’s only used by images and profiles).
The underlying ZFS pool shows there are still 2 containers on it though:
rpool/lxd/containers/gbif2 129M 113G 927M /var/snap/lxd/common/lxd/storage-pools/default/containers/gbif2
rpool/lxd/containers/pyc-gitub-fork
Do those instances still exist in your lxc ls output?
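A quick way to check that (the instance names are taken from the zfs output above; empty output means LXD has no record of them):
❯ lxc list -c n,b gbif2
❯ lxc list -c n,b pyc-gitub-fork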
Hi Thomas,
No, those containers are not visible via the lxc CLI, which is why I believe my default pool is inconsistent. I would be happy to delete the default pool, but it is marked as ‘in use’. Are the profiles doing that?
The default pool is empty, as shown above, but more clearly:
❯ lxc list -c n,b
+---------------------------+---------------------+
| NAME | STORAGE POOL |
+---------------------------+---------------------+
| Detectron2 | workstation-default |
+---------------------------+---------------------+
| Gitea | workstation-default |
+---------------------------+---------------------+
| Haskell-ghcup | workstation-default |
+---------------------------+---------------------+
| Julia-Dataframes-Tutorial | workstation-default |
+---------------------------+---------------------+
| ProgWithCatMIT | workstation-default |
+---------------------------+---------------------+
| ansible-host | workstation-default |
+---------------------------+---------------------+
| ansible-host-II | workstation-default |
+---------------------------+---------------------+
| craft-text-detector | workstation-default |
+---------------------------+---------------------+
| fastai22-2022-06-25 | workstation-default |
+---------------------------+---------------------+
| fastai-fastchan | workstation-default |
+---------------------------+---------------------+
| fastbook-01 | workstation-default |
+---------------------------+---------------------+
| gbif1 | workstation-default |
+---------------------------+---------------------+
| gbif-data | workstation-default |
+---------------------------+---------------------+
| git-server | workstation-default |
+---------------------------+---------------------+
| gitea | workstation-default |
+---------------------------+---------------------+
| haskell-anu-comp1100 | workstation-default |
+---------------------------+---------------------+
| jammy | workstation-default |
+---------------------------+---------------------+
| julia-cuda2 | workstation-default |
+---------------------------+---------------------+
| julia-vscode | workstation-default |
+---------------------------+---------------------+
| kr-client | workstation-default |
+---------------------------+---------------------+
| lxd-kr-test | workstation-default |
+---------------------------+---------------------+
| nixos | workstation-default |
+---------------------------+---------------------+
| opencv-east | workstation-default |
+---------------------------+---------------------+
| pandas-dwca-reader | workstation-default |
+---------------------------+---------------------+
| pydev1 | workstation-default |
+---------------------------+---------------------+
| pytorch-CUDA11 | workstation-default |
+---------------------------+---------------------+
| ssh-test | workstation-default |
+---------------------------+---------------------+
| test-me | workstation-default |
+---------------------------+---------------------+
| v1 | workstation-default |
+---------------------------+---------------------+
| x11-dev-base | workstation-default |
+---------------------------+---------------------+
| zim | workstation-default |
+---------------------------+---------------------+
tomp (Thomas Parrott), August 26, 2022, 7:35am
And are those 2 instances ones that you thought you had moved, or did they not show up before you moved the others?
tomp (Thomas Parrott), August 26, 2022, 8:14am
You could try running lxd recover to see if they can be re-created, before moving them to another pool.
See: Backing up a LXD server - LXD documentation
The two instances did not show up until I moved the others. When I ran lxd recover, I was asked to recreate a missing profile before I could continue.
You are currently missing the following:
- Profile "lxc-remote-pydev" in project "default"
Please create those missing entries and then hit ENTER:
The following unknown volumes have been found:
- Container "gbif2" on pool "default" in project "default" (includes 0 snapshots)
- Container "pyc-gitub-fork" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]:
I accepted the default (no). After that there were no instances in the default pool.
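For anyone hitting the same prompt, re-creating the missing profile named in it can be as simple as the line below; this only creates a bare profile so lxd recover can proceed, and its original configuration would have to be restored separately:
❯ lxc profile create lxc-remote-pydev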
tomp (Thomas Parrott), September 5, 2022, 7:54am
So lxd recover worked. What is the output of zfs list rpool/lxd -r now?
tomp: zfs list rpool/lxd -r
Sorry - I missed your question. A lot going on.
Here are the results:
NAME USED AVAIL REFER MOUNTPOINT
rpool/lxd 6.88G 103G 96K none
rpool/lxd/containers 129M 103G 96K none
rpool/lxd/containers/gbif2 129M 103G 927M legacy
rpool/lxd/containers/pyc-gitub-fork 140K 103G 2.01G legacy
rpool/lxd/custom 96K 103G 96K none
rpool/lxd/deleted 5.14G 103G 96K none
rpool/lxd/deleted/containers 3.21G 103G 96K none
rpool/lxd/deleted/containers/8f447242-b807-4c44-94d5-c07819ad7d0f 1.13G 103G 1.72G legacy
rpool/lxd/deleted/containers/d81452ba-eb87-4dfe-a3e8-f48193b53407 248M 103G 857M /var/snap/lxd/common/lxd/storage-pools/default/containers/ansible-template
rpool/lxd/deleted/containers/d9c9bc12-4d32-42de-83a8-e19492b228fb 1.84G 103G 2.37G /var/snap/lxd/common/lxd/storage-pools/default/containers/pycharm
rpool/lxd/deleted/custom 96K 103G 96K none
rpool/lxd/deleted/images 1.93G 103G 96K none
rpool/lxd/deleted/images/690801402e1d4e02c07ba2d1a29bb9a9b4825f037c12ccad8cb4d062d2450d2c 644M 103G 644M /var/snap/lxd/common/lxd/storage-pools/default/images/690801402e1d4e02c07ba2d1a29bb9a9b4825f037c12ccad8cb4d062d2450d2c
rpool/lxd/deleted/images/e0c3495ffd489748aa5151628fa56619e6143958f041223cb4970731ef939cb6 638M 103G 638M /var/snap/lxd/common/lxd/storage-pools/default/images/e0c3495ffd489748aa5151628fa56619e6143958f041223cb4970731ef939cb6
rpool/lxd/deleted/images/e9589b6e9c886888b3df98aee0f0e16c5805383418b3563cd8845220f43b40ff 696M 103G 696M legacy
rpool/lxd/deleted/virtual-machines 96K 103G 96K none
rpool/lxd/images 1.62G 103G 96K none
rpool/lxd/images/3214ef9caade0dfb7c7619cf54040139763ebacf999702feab5a7e9bf928ca8b 315M 103G 315M legacy
rpool/lxd/images/60c17cd9d86741c6f042a434ed612b3a67417a60e27cbaf30924c5900a75feff 335M 103G 335M legacy
rpool/lxd/images/e11dadbafbaaa28de59fe7c07cd060edbf90658981f383b8d656cbf016669e5d 308M 103G 308M /var/snap/lxd/common/lxd/storage-pools/default/images/e11dadbafbaaa28de59fe7c07cd060edbf90658981f383b8d656cbf016669e5d
rpool/lxd/images/e299296138c256b79dda4e61ac7454cf4ac134b43f5521f1ac894f49a9421d00 698M 103G 698M legacy
rpool/lxd/virtual-machines 96K 103G 96K none
Looks like a dog’s breakfast to me. Should I back up my storage, purge LXD, and import my storage again? I have sources for all my profiles.
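For the backup part of that, a minimal sketch using the LXD export/import commands covered in the backup documentation linked earlier (the instance name is just one from the listing above):
❯ lxc export zim zim-backup.tar.gz --instance-only   # writes a portable tarball of the instance, without snapshots
❯ lxc import zim-backup.tar.gz                       # re-creates it on a (re)initialised server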
tomp (Thomas Parrott), September 26, 2022, 11:12am
I’m not entirely following what you have done so far.
Do those instances start OK?
Have you tried moving them to the new storage pool using lxc move, now that they have been recovered?
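A rough sketch of that move, assuming the two instances really were recovered into the default pool and that this LXD version supports pool moves via lxc move (the instance must be stopped first):
❯ lxc stop gbif2
❯ lxc move gbif2 --storage workstation-default   # moves the instance's root disk to the new pool
❯ lxc start gbif2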