LXD containers lost IP addresses after reboot

LXD was working fine until I rebooted. Now my container and VM don't have IP addresses.

[kxn2@rhel8 ~]$ lxc list
+------------+---------+------+------+-----------------+-----------+
|    NAME    |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+------------+---------+------+------+-----------------+-----------+
| nfs-server | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+------------+---------+------+------+-----------------+-----------+
| smb        | RUNNING |      |      | CONTAINER       | 0         |
+------------+---------+------+------+-----------------+-----------+

[kxn2@rhel8 ~]$ uname -a
Linux rhel8.localdomain 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Mon Jun 1 20:24:55 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

[kxn2@rhel8 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)

[kxn2@rhel8 ~]$ lxc version
Client version: 4.2
Server version: 4.2

[kxn2@rhel8 ~]$ lxd version
4.2

[kxn2@rhel8 ~]$ snap list
Name     Version                  Rev    Tracking  Publisher   Notes
core18   20200427                 1754   stable    canonical✓  base
lxd      4.2                      15457  stable    canonical✓  -
snapd    2.45                     7777   stable    canonical✓  snapd
spotify  1.1.26.501.gbe11e53b-15  41     stable    spotify✓    -

[kxn2@rhel8 ~]$ lxc network show lxdbr0
config:
  ipv4.address: 10.42.39.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/nfs-server
- /1.0/instances/smb
managed: true
status: Created
locations:
- none

[kxn2@rhel8 ~]$ sudo netstat -lnp | grep ":53 "
tcp 0 0 10.42.39.1:53 0.0.0.0:* LISTEN 2873/dnsmasq
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 2561/dnsmasq
tcp6 0 0 fe80::d47b:70ff:fe27:53 :::* LISTEN 2873/dnsmasq
udp 0 0 10.42.39.1:53 0.0.0.0:* 2873/dnsmasq
udp 0 0 192.168.122.1:53 0.0.0.0:* 2561/dnsmasq
udp6 0 0 fe80::d47b:70ff:fe27:53 :::* 2873/dnsmasq

[kxn2@rhel8 ~]$ systemctl status dnsmasq
● dnsmasq.service - DNS caching server.
   Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

[kxn2@rhel8 ~]$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/smb
- /1.0/instances/nfs-server

[kxn2@rhel8 ~]$ lxc config show nfs-server --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Centos 8 amd64 (20200608_07:08)
  image.os: Centos
  image.release: "8"
  image.serial: "20200608_07:08"
  image.type: disk-kvm.img
  volatile.base_image: 1060cb163388755d80daf461772aa7cf368872bf1f71d93a74f067b05274a6da
  volatile.eth0.host_name: tapc8c133d5
  volatile.eth0.hwaddr: 00:16:3e:62:85:2a
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: d1c0c016-62b8-404c-be5a-50d2ebed3bcf
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Most likely firewalling. You'll want to take a look at firewalld, and if you're also using Docker on that system, check whether it may have messed with your firewall too.

Those are the most common issues we’ve seen causing this.
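For anyone else hitting this, one way to check and fix the firewalld side is roughly the following (a sketch, assuming firewalld is the active firewall and lxdbr0 is the LXD-managed bridge; Docker/iptables interference would need to be checked separately):

# Show which zone currently covers the LXD bridge, if any
sudo firewall-cmd --get-zone-of-interface=lxdbr0

# Put the bridge into the trusted zone so instances can reach LXD's
# dnsmasq for DHCP/DNS, and keep the change across reboots
sudo firewall-cmd --permanent --zone=trusted --add-interface=lxdbr0
sudo firewall-cmd --reload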

Prior to rebooting, I had run:

[kxn2@rhel8 ~]$ sudo firewall-cmd --add-interface=lxdbr0 --zone=trusted
success

[kxn2@rhel8 ~]$ sudo firewall-cmd --get-active-zones
libvirt
  interfaces: virbr0
public
  interfaces: enp3s0
trusted
  interfaces: lxdbr0

But I had not made the change permanent before rebooting, so it was lost. Running it now:
[kxn2@rhel8 ~]$ sudo firewall-cmd --runtime-to-permanent
success
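(To double-check that the binding will survive the next reboot, the permanent configuration can be inspected directly; a small sketch using standard firewall-cmd flags:)

# lxdbr0 should now appear in the trusted zone's permanent interface list
sudo firewall-cmd --permanent --zone=trusted --list-interfaces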

[kxn2@rhel8 ~]$ lxc list
+------------+---------+---------------------+------+-----------------+-----------+
|    NAME    |  STATE  |        IPV4         | IPV6 |      TYPE       | SNAPSHOTS |
+------------+---------+---------------------+------+-----------------+-----------+
| nfs-server | RUNNING | 10.42.39.186 (eth0) |      | VIRTUAL-MACHINE | 0         |
+------------+---------+---------------------+------+-----------------+-----------+
| smb        | RUNNING | 10.42.39.153 (eth0) |      | CONTAINER       | 0         |
+------------+---------+---------------------+------+-----------------+-----------+

Thanks for the one-minute response and effective suggestion, Stéphane.