Can you post the df -hT output, please?
Yes
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 12G 0 12G 0% /dev
tmpfs tmpfs 2.4G 5.3M 2.4G 1% /run
/dev/sda1 ext4 210G 129G 72G 65% /
tmpfs tmpfs 12G 12K 12G 1% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 12G 0 12G 0% /sys/fs/cgroup
tmpfs tmpfs 12G 0 12G 0% /run/qemu
/dev/md0 ext4 2.7T 2.1T 485G 82% /Volumes/Media
Media2 zfs 3.6T 2.5T 1.1T 70% /Volumes/Media2
/dev/loop2 squashfs 9.7M 9.7M 0 100% /snap/canonical-livepatch/229
/dev/loop3 squashfs 303M 303M 0 100% /snap/code/129
/dev/loop1 squashfs 2.0M 2.0M 0 100% /snap/btop/617
/dev/loop5 squashfs 117M 117M 0 100% /snap/core/14946
/dev/loop4 squashfs 303M 303M 0 100% /snap/code/130
/dev/loop6 squashfs 56M 56M 0 100% /snap/core18/2745
/dev/loop0 squashfs 2.0M 2.0M 0 100% /snap/btop/612
/dev/loop7 squashfs 56M 56M 0 100% /snap/core18/2751
/dev/loop9 squashfs 39M 39M 0 100% /snap/thelounge/280
/dev/loop8 squashfs 117M 117M 0 100% /snap/core/14784
/dev/loop10 squashfs 125M 125M 0 100% /snap/yt-dlp/233
/dev/loop11 squashfs 8.5M 8.5M 0 100% /snap/distrobuilder/1125
/dev/loop12 squashfs 171M 171M 0 100% /snap/lxd/24918
/dev/loop13 squashfs 167M 167M 0 100% /snap/lxd/24846
/dev/loop14 squashfs 74M 74M 0 100% /snap/core22/750
/dev/loop15 squashfs 9.7M 9.7M 0 100% /snap/canonical-livepatch/216
/dev/loop16 squashfs 125M 125M 0 100% /snap/yt-dlp/220
/dev/loop17 squashfs 54M 54M 0 100% /snap/snapd/19122
/dev/loop18 squashfs 64M 64M 0 100% /snap/core20/1879
/dev/loop19 squashfs 8.7M 8.7M 0 100% /snap/distrobuilder/1364
/dev/loop20 squashfs 64M 64M 0 100% /snap/core20/1891
/dev/loop21 squashfs 54M 54M 0 100% /snap/snapd/19361
/dev/loop22 squashfs 74M 74M 0 100% /snap/core22/634
tmpfs tmpfs 2.4G 0 2.4G 0% /run/user/1000
tmpfs tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/3e7599447063f1ced6dbf6982aef39bd2a79c208c6bb9bb52f723fb888c070ab/merged
/dev/sdg1 ext4 458G 395G 40G 91% /run/timeshift/backup
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/cefed74d03ee25c22e47239dbe336cf81100852715a573b43fdcf8b9a9e799f3/merged
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/3bf2e85985da8f83d831831a1aab6e375266ff603776b6d1533f86e54e18f167/merged
shm tmpfs 64M 16K 64M 1% /var/lib/docker/containers/051666a07682d4a63fb203032a67c732bdce83be110c9f26a259c28f55611226/mounts/shm
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/fe6e72526f1aa96ad78ae7272e875bab004d9ff02beee5dea4cb0cc286d00bc4/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/074143dc35689a75ea08a413a5d5f5ad658cb668f2ef1d2816788ed04f37de4c/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/e58fb60e63909cb2b2a8ded96e2e5aa9dceed6c6f9759e31cf6fb9aac8b8a141/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/e451d0ca4f9e1904e37969aa159b814bec7b2e0dc5eaf6cc25c37f834ddd9d74/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/7fadfaa2de9b156965b704caca900837673e08d4083e42dd18326906fe8019e7/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/91fd8246f4f1d12c656d688a85d5ecd7b94175114e16844ca2b2b644647f714f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/84d7abf5cf1a0cd12fbd3fcc4e3f975e33c6cb65f97379b74584769d8998d39a/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/8db3e17f529d69ae663914a2dd4f6bf4a56c02e40df2973514c83dcafa20473f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/521a080fb196f4d0a37c207118ae381348e9eba63be91aee271ed27a059bd81e/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/e8120d1628b65090450f09b69e08982e6ecb2bcc54883d73d8325b2221627c78/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/059c0649a18cbbda3f665c074d01757586fc9c53622ba3586c44c6912b8b0c97/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/a0e7faaa10a7c5860da2a657c26d1a1b26f18157c3bee6f071c698c494d4bd2e/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/0032b6f7ba1b9ca867fa48f0cf66eefd5fa889e42dbe497850abc31e25f3c18f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/813fa90e94abbef99036a63691c21f9ef2e87bacdc0950c5801ae38ee58773e9/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/e402f2aed7259f7acbf86d18d16f7d42f2bd73040baf8a609e1e7ba216fc015f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/7d6c7b9e4ead1046a531876f94d8d39d8e0eb356f39d3e965a4bc7cbfae7f284/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a0727fbbf304999e1e053b8e1674147734a0d681accc7310dcd3f1d31b195f70/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/4740cda75c7f7c02623f164c645bbd7a52e68d91474bd53103ba9c4b4fa2f9ba/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/0bf4a064b56340b2033d0019b55933c45aa9be6c672ac7134650bd44d8e7f0d6/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/8167f085718f14e2d008c35dfa30d75754f7f64a15832e45f1d0fc11d194c19f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/efe6d2e4f2e35bc55372b2c6446606b2168436675a0a8829639590ceff2e4c49/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/fe7ec8904c633dfbcb07709c5e49a86f14e49b3c1b3c5a585407d40144aab094/merged
shm tmpfs 64M 4.0K 64M 1% /var/lib/docker/containers/e827dfe3988d87e464eb839485b3ec5a05653bedcd00db6f6708fe3edadb09f2/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/d391027ee210de51471179c6ad4edac4397f1a1cc1148bb1f245f9a260c4f8ee/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/bf5f7cb2a021dc866ce9f5a5a7a1630843cfdd31d894b4ed8eed68c4a34d3a5b/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/ee73d057ea023c2eeee211a9bcc4456cdf681f40c9a1c5dc6297893be0343b90/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/5103d5c01516aeffc96c1d809175d3146a28319401a1ad78a0950ec6fcddbf14/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/d641f6cfc9e40705fa6d6cf316030e7c2c8dfd80c12116ca04b08da193e46afe/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/b5940ab39890301e4820fb625b75196ae67b6208875b0bc24212588600105803/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/2ccae3801265b7369ee13dc8bf46881872605c34fe963f62cc182aae03c11fd5/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/f84ce44ed92acd694c282fffcd790e656168144c2629d83de5815caf4eed54a3/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/294413bcde6b45d01e5983f01e6278a7e7459e1c890c64beb342ea7cbbf1fecd/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a42943dc60027a507328878af6b0c6b907b872512f54c7f1676f265d9378597f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/1bcfccd5155a99d148c913797cde63ca2f3810dace219ee9da83dcc37bb2ef83/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a14374eb1527fdcade2a323b3c038c6758773cf98cc15b5609b0a3aab4a0399b/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/2e85b1ce40f4ffb29381078478e6cf465b2bb7d65d73fd2f28492d812e8459b0/merged
shm tmpfs 64M 1.3M 63M 2% /var/lib/docker/containers/fe34550f9d91b1c5cb44219709a2c80e22cc83b913d432d661f0a2b94bcb6dce/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/0689a0bcceb9bef65d1afcb76bccc2d49c298f1afffd283e9c7ff72896668076/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/044bc9645ccdce0590adea331b4f9047f4794298fca7fd7d9c282d2e5fc88d54/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/4f38bbd79edb09e40466aa1fc3021bd69f5900578f565adca43dae63e20d0e26/merged
shm tmpfs 64M 28K 64M 1% /var/lib/docker/containers/1afc12d92666982696d175ad949b011f2bc222ce355d5d0c273a8df72a9fd039/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/a92921cd2c25e9cdc168a006de44a6597f692da872be27b1d76c2ec7f3d94338/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/8e6f4c5f67bcff0036de4bd03076a8fc48e2557c435a0e6d8c58506ab338ccdd/mounts/shm
I haven't figured it out; it looks fine to me. Have you tried rebooting your host? One more thing, can you post the output of this command: lxc storage show default
By the way, you can format your outputs by placing them between lines of three backtick characters.
Regards.
lxc storage show default
config:
  size: 30GB
  source: /var/snap/lxd/common/lxd/disks/default.img
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/images/46c0b8bf83411ce5cc2eb7f27dead107b1699c7f8391b4ec1986ae782b1a045a
- /1.0/images/561195dedea294fc279824c25b18bdc21efed94db63a7749c49378a1e94cdf41
- /1.0/images/c51241b9673c1fd4d206caf8fc49bb62e445b67c647af0d37c567753b774325b
- /1.0/instances/PodcastGenerator
- /1.0/instances/bedrock
- /1.0/instances/bedrock/snapshots/snap0
- /1.0/instances/centos9-stream
- /1.0/instances/cups
- /1.0/instances/cups/snapshots/snap0
- /1.0/instances/cups/snapshots/snap1
- /1.0/instances/darkweb
- /1.0/instances/dircaster
- /1.0/instances/duckdns
- /1.0/instances/duckdns/snapshots/snap0
- /1.0/instances/gimme-iphotos
- /1.0/instances/homeassistant
- /1.0/instances/homeassistant/snapshots/snap0
- /1.0/instances/minecraft
- /1.0/instances/minecraft/snapshots/snap0
- /1.0/instances/pihole
- /1.0/instances/pihole/snapshots/20220918
- /1.0/instances/pihole/snapshots/snap0
- /1.0/instances/windows10
- /1.0/profiles/default
status: Unavailable
locations:
- none
I can’t understand what is wrong either…
Edit: I did reboot the host, yes.
I have noticed two things that might be of interest (or might be normal and of no interest ;)).
- " lxc storage show default" shows 30GB while the default.img is 53GB.
- Normally when I take a snapshot with timeshift it takes quite a long time (I think because of default.img), but after it failed it takes no time at all.
Edit: This is of course normal because default.img has not changed
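In case it matters, here is how I could double-check whether the 53GB is only the file's apparent size or space actually allocated on disk (the loop file is presumably sparse; untested, assuming GNU coreutils):
```
# Apparent size (what ls reports) vs. space actually allocated on disk
ls -lh /var/snap/lxd/common/lxd/disks/default.img
du -h  /var/snap/lxd/common/lxd/disks/default.img

# Both in one line via GNU stat
stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' \
    /var/snap/lxd/common/lxd/disks/default.img
```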
Any more ideas?
PS: Thank you for the help so far!
I’m just guessing here, but you can try to resize your default.img; maybe it helps. Follow this post: Resizing zfs filesystem in LXD container - #5 by stuartlangridge
Regards.
You are welcome.
I tried what was suggested in the thread you linked to.
zfs list -t all
This only shows my other pool (Media2), not default.
NAME USED AVAIL REFER MOUNTPOINT
Media2 2.43T 1.08T 2.43T /Volumes/Media2
sudo snap stop lxd
2023-06-15T15:22:03+02:00 INFO Waiting for "snap.lxd.daemon.service" to stop.
Stopped.
sudo truncate -s +10G /var/snap/lxd/common/lxd/disks/default.img
This gives no output.
sudo zpool set autoexpand=on default
cannot open 'default': no such pool
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
cannot open 'default': no such pool
sudo zpool set autoexpand=off default
cannot open 'default': no such pool
PS: /var/lib/lxd/disks/default.img does not exist on my system. Should it?
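Should I instead point zpool import at the directory that holds the loop file? As I understand it, zpool set and zpool online only work on pools that are already imported, and zpool import only scans /dev unless told otherwise. Something like this (untested):
```
# List importable pools found in the directory that holds the loop file;
# if the pool is damaged this usually prints a reason as well
sudo zpool import -d /var/snap/lxd/common/lxd/disks

# If "default" shows up, try importing it by name to see the full error
# (or, if it succeeds, the autoexpand/online steps become possible)
sudo zpool import -d /var/snap/lxd/common/lxd/disks default

# Export again before handing the pool back to LXD
sudo zpool export default
sudo snap start lxd
```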
Can you post the LXD version and how you installed LXD?
Version: 5.14-7072c7b
It is installed as a snap. I believe I originally installed it via “apt install lxd”, but it still ended up installed as a snap.
There is something wrong in your configuration: if you installed it with snap, the directory you mentioned can't be under /var/lib; it should be something like /var/snap/lxd/common/lxd.
Another point: if you installed it with snap, you can check it with "systemctl status snap.lxd.daemon" and "snap list". There is some confusion in your setup.
Regards.
I think maybe you misunderstood (maybe ;)).
I do have “/var/snap/lxd/common/lxd/disks/default.img”
I don’t have “/var/lib/lxd/disks/default.img”. The only reason I asked about it is that it was referenced in the post you linked to.
systemctl status snap.lxd.daemon
Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; static; vendor preset: enabled)
Active: active (running) since Thu 2023-06-15 15:50:23 CEST; 5h 17min ago
TriggeredBy: ● snap.lxd.daemon.unix.socket
Main PID: 4401 (daemon.start)
Tasks: 0 (limit: 28330)
Memory: 16.4M
CGroup: /system.slice/snap.lxd.daemon.service
‣ 4401 /bin/sh /snap/lxd/24918/commands/daemon.start
Jun 15 20:59:13 aa-srv3 lxd.daemon[6720]: time="2023-06-15T20:59:13+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:00:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:00:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:01:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:01:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:02:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:02:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:03:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:03:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:04:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:04:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:05:14 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:05:14+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:06:15 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:06:15+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:07:15 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:07:15+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de>
Jun 15 21:08:15 aa-srv3 lxd.daemon[6720]: time="2023-06-15T21:08:15+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'de
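(The trailing ">" just means systemctl cut those lines off at my terminal width, so the actual reason zpool gives for refusing the import is missing. If the full message would help, I can pull it with something like:)
```
# Show the untruncated daemon log lines
sudo journalctl -u snap.lxd.daemon --no-pager | grep "Failed mounting storage pool" | tail -n 3

# Or tell systemctl not to ellipsize long lines
sudo systemctl status snap.lxd.daemon --full --lines=20
```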
snap list
Name Version Rev Tracking Publisher Notes
btop 1.2.13 617 latest/stable kz6fittycent -
canonical-livepatch 10.5.7 229 latest/stable canonical✓ -
code 4cb974a7 131 latest/stable vscode✓ classic
core 16-2.58.3 14946 latest/stable canonical✓ core
core18 20230530 2785 latest/stable canonical✓ base
core20 20230503 1891 latest/stable canonical✓ base
core22 20230531 750 latest/stable canonical✓ base
distrobuilder 2.1 1364 latest/stable stgraber classic
lxd 5.14-7072c7b 24918 latest/stable/… canonical✓ -
snapd 2.59.4 19361 latest/stable canonical✓ snapd
thelounge 4.2.0 280 latest/stable snapcrafters✪ disabled
yt-dlp 59d9fe083 272 latest/stable degville -
Ohhh, I see. That link is just a reference for expanding the img file, so feel free to adapt it to your needs.
Regards.
Anyone else have ideas? This is still not solved.
Idea: if there is nothing wrong with my default.img (and I believe there is not), should I be able to move it to a new system and restore it there (and then retrieve my data)? If so, how do I do that?
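What I imagine is roughly the following (completely untested; “newhost” is just a placeholder for the new system, and I am assuming the new host runs a comparable LXD snap):
```
# Old host: stop LXD and copy the backing file off the machine
sudo snap stop lxd
sudo rsync -avP /var/snap/lxd/common/lxd/disks/default.img newhost:/tmp/

# New host: put the file where the snap keeps loop-backed pools,
# then let LXD rediscover the pool and the instances on it
sudo snap stop lxd
sudo mv /tmp/default.img /var/snap/lxd/common/lxd/disks/default.img
sudo snap start lxd
sudo lxd recover   # interactive: point it at the existing "default" zfs pool
```
Is "lxd recover" the right tool for picking up an existing pool like this, or is there a better way?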
Is there no way to directly mount this image? Or something?
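For example, could I just open the pool read-only with plain ZFS tools, without LXD at all? Something like this (untested; “pihole” is just one of my containers, used as an example, and I am assuming my host's zfs can read a pool created by the snap's bundled zfs):
```
# Import the pool read-only under an alternate root, bypassing LXD
sudo zpool import -d /var/snap/lxd/common/lxd/disks \
    -o readonly=on -R /mnt/lxd-default default

# Container filesystems normally live under default/containers/<name>
sudo zfs list -r default

# I think LXD sets mountpoint=legacy on instance datasets, so mount
# one explicitly to copy data out
sudo mkdir -p /mnt/recover
sudo mount -t zfs -o ro default/containers/pihole /mnt/recover
```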
Anyone?