LXD not working after reboot on Debian 10

I installed LXD on Debian 10 with the following commands:

apt update
apt install snapd
snap install core
snap install lxd

After this, I was able to create a container:

root@b24:~# lxc launch images:ubuntu/20.04 first-vm
Creating first-vm
Starting first-vm                           
root@b24:~#

After rebooting the server, I am not able to run lxd commands:

root@b24:~# lxd list
cannot change profile for the next exec call: No such file or directory
snap-update-ns failed with code 1: File exists
root@b24:~# 

How do I fix this?

I think you meant to run lxc, not lxd, e.g. lxc ls

Sorry, that was my mistake. I ran the lxc command, and it gives the same error:

root@b24:~# lxc list
cannot change profile for the next exec call: No such file or directory
snap-update-ns failed with code 1: File exists
root@b24:~# 

Also, the network interface added by LXD is not showing up:

root@b24:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:26:b9:84:0d:49 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:26:b9:84:0d:4a brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:57:58:05 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:57:58:05 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:36:c4:70:17 brd ff:ff:ff:ff:ff:ff
root@b24:~#

That sounds like a snapd issue.
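If you want to narrow it down, checking the snapd service and its logs usually shows the snap-update-ns failure in more detail. A sketch, assuming the standard systemd units on a Debian 10 snapd install:

```shell
# Is snapd itself running?
systemctl status snapd

# Recent snapd log entries, which typically include the
# full snap-update-ns / apparmor error
journalctl -u snapd --since today

# State of the installed snaps and the LXD snap's services
snap list
snap services lxd
```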

There is a historical issue about this. It seems restarting AppArmor helped (https://github.com/lxc/lxd/issues/4402#issuecomment-404102641), so it may be worth a go.
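Restarting AppArmor is worth trying before reinstalling anything, since removing AppArmor drags snapd and LXD out with it. Roughly (a sketch, assuming the standard systemd service names):

```shell
# Reload AppArmor so snapd's confinement profiles are re-applied
systemctl restart apparmor

# Restart snapd, then try the lxc client again
systemctl restart snapd
lxc list
```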

Thanks for the reply. I reinstalled AppArmor, and that fixed the issue. Removing AppArmor removed snapd and LXD too, so I had to reinstall everything again.

After reinstalling, I was asked to run “lxd init”. Then I was able to create a new container, but I could not find the container I created before. Did it get deleted when I created a new “storage pool”?

Last time, when I created a container, it was not able to connect to the internet (ping/apt failed). Now it works; the only change I made during “lxd init” was:

Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes

which I set to yes on my second run. If I just need the LXD containers to have internet access, do I need this? What I need is to run some containers and, if possible, do some port forwarding so I can access the containers from the internet.

Yes, I believe removing both LXD and snapd would remove your old containers. @stgraber, do you know if snapd snapshots remain after removing snapd?

As for your question about Would you like the LXD server to be available over the network?: this question only asks whether you want to expose the LXD API network socket onto the network. It does not affect your containers’ ability to access the network, nor is it required (in fact, unless you need it, it is more secure not to expose LXD onto the network).
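Container internet access instead comes from the NAT bridge (lxdbr0 by default) that the other lxd init questions set up. If a container cannot reach the internet, checking the bridge is a reasonable first step, e.g.:

```shell
# Show the bridge lxd init creates by default, including
# whether IPv4 NAT is enabled
lxc network show lxdbr0

# Confirm the container actually got an address on that bridge
lxc list
```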

You can use the proxy instance device to do port forwarding, see https://linuxcontainers.org/lxd/docs/master/instances#type-proxy
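As a concrete sketch (the container name first-vm and the ports here are just examples), forwarding host port 8080 to port 80 inside a container could look like:

```shell
# Add a proxy device named "web" to the container:
# listen on all host interfaces on port 8080 and forward
# to port 80 inside the container
lxc config device add first-vm web proxy \
    listen=tcp:0.0.0.0:8080 \
    connect=tcp:127.0.0.1:80

# Remove the device again if no longer needed
lxc config device remove first-vm web
```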

Thanks @tomp, will check the proxy.