Containers recovery after snap channel switch

Indeed there are other things not working.

$ lxc launch ubuntu-minimal:bionic test
Creating test
Error: Failed instance creation: Create instance: Create instance: Invalid devices: Failed detecting root disk device: No root device could be found

I fear there is something seriously wrong. I will back up all the containers using `lxc export`. Would it then be possible to remove LXD and ZFS entirely and reinstall, so that the containers keep working, including services like networking? If that would work, it would also be a good opportunity to get rid of the blocked deleted images in ZFS that are eating up my storage.
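For reference, the backup step could be sketched like this (the `/backup` target directory is just an example; `lxc export` writes one tarball per container):

```shell
# Export every container to a compressed tarball (paths are illustrative).
# "lxc list -c n -f csv" prints just the container names, one per line.
for c in $(lxc list -c n -f csv); do
    lxc export "$c" "/backup/${c}.tar.gz"
done
```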

The first error about networking says that LXD could not find an unused IPv6 subnet. This is somewhat odd, as the IPv6 address space is rather large. If you do not use IPv6 anyway, you can simply create lxdbr0 without IPv6.

The second error says that your default profile does not mention a storage pool.
Show us the output of

lxc profile show default

Then, run the following to show what storage pools exist in LXD.

lxc storage list
$ lxc profile show default
config: {}
description: Default LXD profile
devices: {}
name: default
used_by:
- /instances/mycontainer
... (17 more)
$ lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | zfs    | /var/snap/lxd/common/lxd/disks/default.img | 18      |
+---------+-------------+--------+--------------------------------------------+---------+

Okay, your profile is missing the two necessary devices, which should look as follows.

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk

First, you need to create the lxdbr0 network device.
Try the following:

lxc network create lxdbr0 ipv6.address=none

Then, the storage. The following should show the details of the ZFS pool.

lxc storage show default

Once you have both the lxdbr0 network device running and the default storage pool, you can complete the configuration of the default profile, and everything should work.

So the bridge has now been created, but the containers still have no internet access.

$ lxc network list
+--------+----------+---------+-----------------+------+-------------+---------+
|  NAME  |   TYPE   | MANAGED |      IPV4       | IPV6 | DESCRIPTION | USED BY |
+--------+----------+---------+-----------------+------+-------------+---------+
| enp5s0 | physical | NO      |                 |      |             | 0       |
+--------+----------+---------+-----------------+------+-------------+---------+
| lxdbr0 | bridge   | YES     | 10.122.224.1/24 | none |             | 0       |
+--------+----------+---------+-----------------+------+-------------+---------+

When I run `lxc list`, it shows the containers as running, but the IPV4 and IPV6 columns are empty for all of them.

The storage pool is

$ lxc storage show default
config:
  size: 64GB
  source: /var/snap/lxd/common/lxd/disks/default.img
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/instances/mycontainer
... (17 more)
status: Created
locations:
- none

You need to add both the network and the storage information to the default profile.
When you then run `lxc profile show default`, you should get output similar to what I showed above.

Here are the commands to add them to your default LXD profile.

lxc profile device add default eth0 nic nictype=bridged parent=lxdbr0
lxc profile device add default root disk pool=default path="/"
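Note that containers which were already running may need a restart before they pick up the new eth0 device and obtain an address via DHCP (the container name below is just an example):

```shell
# Restart a single container so it re-attaches eth0 and requests a DHCP lease.
lxc restart mycontainer

# Or restart all containers at once.
lxc restart --all
```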

Thank you so much! So far it works again :slight_smile: