Containers not listed after restarting

Hi,

After restarting the LXD service, my containers have disappeared. I have restarted multiple times but no luck. Could you please help me with this?

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Commands used to restart:
/etc/init.d/lxd stop
/etc/init.d/lxd start

There is no error while restarting.

Thank you.

You are not giving a lot of info on what your config is, what was working before, or what happened that could have triggered a change… It’s not really useful to speculate on what could have gone wrong.
How many containers were there before? Did you try to restart the computer? What version of LXD? What is the OS? Any error messages in syslog when you restart the service?
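For instance, something like this would surface errors around the restart (a rough sketch, assuming a standard Debian box; the unit name and log paths may differ on your setup):

journalctl -u lxd --since "1 hour ago"    # if systemd manages the lxd service
grep -i lxd /var/log/syslog | tail -n 50  # classic syslog fallback
cat /var/log/lxd/lxd.log                  # LXD's own daemon log on non-snap installs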

I would try creating a container to see what it says.

Don’t create new containers…

What kind of storage backend do you use? Is the storage/disk still working properly?
Do you use the snap version?
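If it helps, these are the kinds of checks I mean (a sketch; "default" is just the usual pool name and may differ on your host):

lxc storage list           # shows the configured pools and their drivers (zfs, btrfs, dir, ...)
lxc storage show default   # details for the pool named "default"
snap list 2>/dev/null | grep -i lxd   # non-empty output means the LXD snap is installed
dpkg -l | grep -i lxd                 # shows any deb-packaged lxd/lxc bits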

Anyone, please help?

Hi,
I am using Debian 9 and LXD 3.8. The storage backend is ZFS. There were 3 containers. No, I did not restart the computer.

I’m getting the logs below while starting LXD:

t=2019-05-16T07:30:58+0000 lvl=info msg=" - unprivileged file capabilities: no"
t=2019-05-16T07:30:58+0000 lvl=info msg="Initializing local database"
t=2019-05-16T07:30:58+0000 lvl=info msg="Starting /dev/lxd handler:"
t=2019-05-16T07:30:58+0000 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
t=2019-05-16T07:30:58+0000 lvl=info msg="REST API daemon:"
t=2019-05-16T07:30:58+0000 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
t=2019-05-16T07:30:58+0000 lvl=info msg=" - binding TCP socket" socket=[::]:8448
t=2019-05-16T07:30:58+0000 lvl=info msg="Initializing global database"
t=2019-05-16T07:30:58+0000 lvl=info msg="Initializing storage pools"
t=2019-05-16T07:30:58+0000 lvl=info msg="Initializing networks"
t=2019-05-16T07:30:58+0000 lvl=info msg="Pruning leftover image files"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done pruning leftover image files"
t=2019-05-16T07:30:58+0000 lvl=info msg="Loading daemon configuration"
t=2019-05-16T07:30:58+0000 lvl=info msg="Pruning expired images"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done pruning expired images"
t=2019-05-16T07:30:58+0000 lvl=info msg="Pruning expired container backups"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done pruning expired container backups"
t=2019-05-16T07:30:58+0000 lvl=info msg="Expiring log files"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done expiring log files"
t=2019-05-16T07:30:58+0000 lvl=info msg="Updating instance types"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done updating instance types"
t=2019-05-16T07:30:58+0000 lvl=info msg="Updating images"
t=2019-05-16T07:30:58+0000 lvl=info msg="Done updating images"

I am using the ZFS backend and the disks are working fine.
I am not using the snap version.

OK, can you check the output of the following command: zfs list
Let me know if your containers are listed there.

Also make sure that you do not use the snap version and the Debian package together.

Debian 9 with LXD 3.8, but not the snap version? Where did you find deb packages for LXD 3.8? In some unofficial repo (since LXD is not packaged by Debian), or did you compile it yourself? If it’s not packaged by Debian, it’s because it’s really hard to do right. If you use unofficial debs or a hand-compiled LXD, search no further for the reason for your problems. If you made backups, install the snap version and restore the backups.

If you are in fact using snap LXD (I hope so, but if you were using snap you would be at version 3.13…), there is nothing obviously wrong in your log. By any chance, did you create persistent containers?
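For reference, moving to the snap would look roughly like this (a sketch only; it assumes the backups were made with lxc export, and the backup file name below is just an example):

snap install lxd
lxd init                             # create a fresh config and storage pool
lxc import vm620353-backup.tar.gz    # restore a container from an exported backup tarball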

This version I compiled myself, as 3.8 is not available for Debian.
zfs list output:
NAME USED AVAIL REFER MOUNTPOINT
default 174G 703G 19K none
default/containers 1.28G 703G 19K none
default/containers/vm620353 611M 9.40G 611M /var/lib/lxd/storage-pools/default/containers/vm620353
default/containers/vm629113 584M 9.43G 584M /var/lib/lxd/storage-pools/default/containers/vm629113
default/containers/vm638724 119M 9.88G 253M /var/lib/lxd/storage-pools/default/containers/vm638724
default/custom 172G 703G 19K none
default/custom/references 154G 146G 154G /var/lib/lxd/storage-pools/default/custom/references
default/custom/vm620353 2.04G 703G 2.04G /var/lib/lxd/storage-pools/default/custom/vm620353
default/custom/vm638724 16.1G 23.0G 16.1G /var/lib/lxd/storage-pools/default/custom/vm638724
default/deleted 19K 703G 19K none
default/images 170M 703G 19K none
default/images/4e5ada574b4e0dfb26c75569135d9f6afa2027667127d0dfd9feb1d6a805864f 170M 703G 170M none
default/snapshots 19K 703G 19K none

The VM data is present. I think LXD might be picking up some other config files.
How do I check that? Could you please guide me?

root@cpu-blabla:~# lxd version
3.8

The zfs list output above shows the containers.

I am on mobile now and have not read the whole thread. I strongly believe that you have more than one installation of LXD and somehow got switched to an empty installation (hence no containers).

The lxc command connects to the LXD server through a Unix socket. Can you find that Unix socket? Then check which LXD process uses that socket (command: fuser -u mysocket).
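Something along these lines (a sketch; the snap path only exists if the snap is installed):

ls -l /var/lib/lxd/unix.socket                 # socket of a deb/hand-compiled LXD
ls -l /var/snap/lxd/common/lxd/unix.socket     # socket of a snap LXD, if present
fuser -u /var/lib/lxd/unix.socket              # process (and user) holding that socket
ss -lxp | grep -i lxd                          # listening Unix sockets owned by LXD processes
echo $LXD_DIR    # if set, the lxc client talks to this directory's unix.socket instead of the default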

Anyone wanting to compile LXD for Debian should begin by reading the Debian tracker showing five years of effort at packaging LXD in deb format, in particular the message where the best LXD expert dismisses as futile any effort at packaging anything beyond the LTS release.
The most recent messages also show that recent LXD (not LTS, that is > 3.0.x) uses packaged versions of ZFS, BTRFS and SQLite. So anyone wanting to compile LXD > 3.0.x should either know the innards of SQLite, ZFS or BTRFS very well, so that they know if and when they can use their own system versions with LXD, or take the Ubuntu patches and compile these libraries to the same level as the compiled LXD version. If not, they are using an untested configuration.

So, in a few words: good luck! I am not good enough to help you, sorry.

Edit: I said ‘Ubuntu patches’, but the appropriate term is ‘snap LXD patches’.

There is something wrong with the updates in Ubuntu that breaks the Debian package, even in 18.04. It has happened to me before, and now I have a big clusterf$$k. LXD just turns to crap after the upgrade; it becomes useless. It is even impossible to export a container. As someone mentioned, it seems to want to run two LXDs, but they also lose connectivity to each other, perhaps because one of the two LXDs is hanging. So far the developers are not seeing the problem or do not have a fix for it.

If I had only 5 or 6 containers, I would look at the possibility of backing them up by mounting the ZFS datasets and rebuilding them from scratch on a new LXD init with a new pool (see the sketch below).
I tell you this because I have been working on this problem since Friday, but unfortunately I have 5 LXD servers with over 50 containers, and right now I am rebuilding.
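Roughly what I mean by mounting the ZFS datasets to back them up (only a sketch, reusing the dataset names from the zfs list output above; the target tarball path is just an example, and you should double-check mountpoints before copying):

zfs list -o name,mountpoint               # confirm dataset names and mountpoints
zfs mount default/containers/vm620353     # mount the container dataset if it is not mounted yet
tar -czf /root/vm620353-rootfs.tar.gz -C /var/lib/lxd/storage-pools/default/containers/vm620353 .
# repeat for the custom volumes (default/custom/...) you want to keep,
# then restore the data into containers created on the new pool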

Actually, it is easy to compile LXD (a rough outline is sketched below). The difficulty in packaging LXD is that Debian has rules (and that’s a good thing) that require breaking the single LXD deb package down into individual sub-packages.
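For what it’s worth, the upstream build for the 3.x series goes roughly like this (a sketch from memory; check the README in the lxc/lxd repository for the exact steps for your version):

go get -d -v github.com/lxc/lxd/lxd       # fetch the source into $GOPATH (pre Go modules)
cd $GOPATH/src/github.com/lxc/lxd
make deps                                 # builds the bundled sqlite/dqlite/raft libraries
# export the CGO_CFLAGS, CGO_LDFLAGS and LD_LIBRARY_PATH values that "make deps" prints
make                                      # builds the lxd and lxc binaries into $GOPATH/bin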

It is actually too easy to compile LXD.

Reading the Debian bug, I don’t think that is the only problem; I see a wide range of compatibility problems with existing packages. That’s the whole point of snap: working around this kind of problem. Just compiling LXD can be easy; getting the rest of the system to a good working state is the main difficulty. Even with all the libraries fixed, LXD being an Ubuntu project helps with keeping the stock Ubuntu kernel compatible, which is not the case with Debian. If one uses the LXD long-term-support version, the kernel issues are hopefully not too bad, since no advanced features are used.

I’m trying to track down the problem now. It seems that when you do an apt upgrade, it installs the snap version of LXD alongside the Debian version. I’m finding the snap version on my servers after the upgrade even though I did not install it.
Then the whole thing craps out. Unfortunately there’s no way to reinstall LXD without losing your storage pool.
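To see whether two LXD daemons ended up running side by side, something like this (a sketch; service names as commonly used on Ubuntu/Debian):

ps aux | grep '[l]xd'                          # more than one lxd daemon is a red flag
systemctl status lxd 2>/dev/null               # the deb/init-script service
systemctl status snap.lxd.daemon 2>/dev/null   # the snap's service, if the snap is installed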

I have used deb LXD (please use the correct terminology: Ubuntu deb packages are Ubuntu packages in deb format, not ‘Debian’ packages) and I have upgraded everything to snap, and I have never seen snap LXD install itself. I have never found or heard of a deb package being upgraded automatically to a snap version without user intervention. On future Ubuntu versions more packages will be snap by default and there will not be a deb package version, but automatic upgrading to snap does not exist for LXD.