I have a container with a server in it that had been working fine for months. About two days ago, I ran OS upgrade commands on the host (Ubuntu 22.04: sudo apt upgrade, sudo snap refresh, etc.). After all the updates were done, I shut down the container and then restarted the host server.
When the host came back online, the container did not. Trying to run lxc start <name> now returns:

Error: Storage pool "lxd" unavailable on this server

I've had several cases before where LXD broke itself for no apparent reason, and while I can't be sure, it certainly looks like it has done so again. No ZFS or LXC commands were run before the restart, so I'm not sure why the storage pool would suddenly become "unavailable".
Some system info that might be helpful:
$ lxc storage list
+------+--------+----------+-------------+---------+-------------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+------+--------+----------+-------------+---------+-------------+
| lxd | zfs | tank/lxd | | 2 | UNAVAILABLE |
+------+--------+----------+-------------+---------+-------------+
# Version Info
$ lxc --version
5.6
# Using the snap package
$ which lxc
/snap/bin/lxc
# Snap Info
$ snap list
Name Version Rev Tracking Publisher Notes
core20 20220826 1623 latest/stable canonical✓ base
lxd 5.6-794016a 23680 latest/stable canonical✓ -
snapd 2.57.2 17029 latest/stable canonical✓ snapd
# Profile Info
$ lxc profile show default
config: {}
description: Default LXD profile
devices:
eth0:
name: eth0
nictype: macvlan
parent: eno1
type: nic
root:
path: /
pool: lxd
type: disk
name: default
used_by:
- /1.0/instances/arma
$ lxc profile show arma
config:
boot.autostart: "true"
limits.cpu.allowance: 95%
limits.memory: 8GB
description: Arma3 dedicated server profile.
devices:
eth0:
name: eth0
nictype: macvlan
parent: eno1
type: nic
name: arma
used_by:
- /1.0/instances/arma
ZFS Pool info, etc:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 127G 1.63T 42.0K /mnt/tank
tank/lxd 127G 1.63T 48.0K /mnt/tank/lxd
tank/lxd/containers 127G 1.63T 42.0K /mnt/tank/lxd/containers
tank/lxd/containers/arma 127G 1.63T 127G legacy # <<--- this changed on its own across the reboot
tank/lxd/custom 42.0K 1.63T 42.0K /mnt/tank/lxd/custom
tank/lxd/deleted 216K 1.63T 48.0K /mnt/tank/lxd/deleted
tank/lxd/deleted/containers 42.0K 1.63T 42.0K /mnt/tank/lxd/deleted/containers
tank/lxd/deleted/custom 42.0K 1.63T 42.0K /mnt/tank/lxd/deleted/custom
tank/lxd/deleted/images 42.0K 1.63T 42.0K /mnt/tank/lxd/deleted/images
tank/lxd/deleted/virtual-machines 42.0K 1.63T 42.0K /mnt/tank/lxd/deleted/virtual-machines
tank/lxd/images 42.0K 1.63T 42.0K /mnt/tank/lxd/images
tank/lxd/virtual-machines 42.0K 1.63T 42.0K /mnt/tank/lxd/virtual-machines
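Since the only visible change is that one mountpoint flipped to legacy, here is a quick filter I used to confirm no other dataset is in that state. This is a minimal sketch driven by the pasted output above; on the host itself you would pipe zfs list -H into the awk instead of the sample variable:

```shell
# Two lines from the `zfs list` output above, pasted in for illustration;
# on the real host, pipe `zfs list -H` (no header) into awk directly.
zfs_output='tank/lxd/containers 127G 1.63T 42.0K /mnt/tank/lxd/containers
tank/lxd/containers/arma 127G 1.63T 127G legacy'

# Column 5 is the mountpoint; print dataset names where it reads "legacy".
legacy=$(printf '%s\n' "$zfs_output" | awk '$5 == "legacy" { print $1 }')
echo "$legacy"
# prints: tank/lxd/containers/arma
```

On my system, only the arma container dataset comes back from this filter.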
Notice the legacy mountpoint above, which appeared after the restart without any input from me. Running

$ sudo zfs set mountpoint=/mnt/tank/lxd/containers/arma tank/lxd/containers/arma

changes the listing above to:

tank/lxd/containers/arma 127G 1.63T 127G /mnt/tank/lxd/containers/arma

but it does not bring the container back, and the directory looks empty when inspected with ls. The pool itself is online and healthy according to zpool status -v tank.
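My suspicion is that the empty directory just means the dataset is not actually mounted there, regardless of what the mountpoint property says. A sketch of the check I have in mind; the sample line below is an assumption about what the broken host would report, not captured output:

```shell
# Illustrative: the kind of line I'd expect from
#   zfs get -H mounted,mountpoint tank/lxd/containers/arma
# on the broken host. "mounted no" would explain the empty directory.
# The sample value is an assumption, not real captured output.
sample='tank/lxd/containers/arma	mounted	no	-'

# Fields in `zfs get -H` are tab-separated: name, property, value, source.
state=$(printf '%s\n' "$sample" | awk -F'\t' '$2 == "mounted" { print $3 }')
echo "mounted: $state"
# prints: mounted: no
```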
So what happened to the storage pool, and why would it happen? How can I fix or recover this so that I can start the container and get my server back online again?
Thanks in advance.