Cluster upgraded automatically to 4.4 a few minutes ago, and now all my containers have no IPs

Yeah, my containers are running but are useless since they have lost all their IPs. What is the best way to fix this? I would hate to reboot blindly and make it worse.

These automatic upgrades are a bad idea.

I restarted an individual container; it does not make a difference.
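As an aside, if you want to stop surprise refreshes like this in future, snapd can defer them; a minimal sketch, assuming GNU date and snapd's refresh.hold setting:

# Hold automatic snap refreshes for 30 days (re-run before it expires to extend).
sudo snap set system refresh.hold="$(date --date='+30 days' +%Y-%m-%dT%H:%M:%S%:z)"
# Confirm the hold is in place.
snap get system refresh.hold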

Can you show dmesg and cat /var/log/snap/lxd/common/lxd/logs/lxd.log on an affected node?

Also showing lxc network show NAME --target NODE for one of the affected nodes may be useful.

It sounds like it may be the apparmor profile for dnsmasq getting in the way somehow.
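A quick way to check that theory (assuming a standard Ubuntu setup where AppArmor audit messages land in the kernel log): actual denials are tagged apparmor="DENIED", so any hits from the following would confirm it.

# Search the kernel ring buffer for AppArmor denials.
dmesg | grep -i 'apparmor="DENIED"'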

cat /var/log/snap/lxd/common/lxd/logs/lxd.log
cat: /var/log/snap/lxd/common/lxd/logs/lxd.log: No such file or directory

Ok, that output is unrelated to apparmor, so not the issue.

Maybe try grep -i apparmor /var/log/kern.log to get those potential failures specifically.

Oops, I think it's cat /var/snap/lxd/common/lxd/logs/lxd.log.

+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WP-VIRTUALLY2025-2020-MAY29               | RUNNING |      |      | CONTAINER | 0         | Q1       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WP-VIRTUALLY2025-2020-MAY29-bk-Jul20-2020 | STOPPED |      |      | CONTAINER | 0         | Q4       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WP-WARHAPPENS-2020-APR29                  | RUNNING |      |      | CONTAINER | 0         | Q1       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WP-WARHAPPENS-2020-APR29-bk-Jul20-2020    | STOPPED |      |      | CONTAINER | 0         | Q4       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WPZ-DATABASE                              | RUNNING |      |      | CONTAINER | 0         | Q1       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WPZ-DATABASE-bk-Jul20-2020                | STOPPED |      |      | CONTAINER | 0         | Q4       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| WW-CCNESNOTICIAS-2020-mar28               | RUNNING |      |      | CONTAINER | 0         | Q3       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+
| lxdMosaic2020B2                           | STOPPED |      |      | CONTAINER | 0         | Q4       |
+-------------------------------------------+---------+------+------+-----------+-----------+----------+

Yeah, I was hoping for an LXD log entry from when the network is brought up, as that's when things may have failed.

Anyway, please provide as requested:

  • lxc network show NETWORK-NAME --target AFFECTED-NODE
  • grep -i apparmor /var/log/kern.log

It's almost certainly the new security layer around dnsmasq that's missing something used in your environment, but we need to see what's failing and what the config looks like.
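If you want to inspect the generated profile itself, it should be on disk under LXD's state directory; a sketch, with the directory and filename assumed from the snap layout:

# View the dnsmasq profile LXD generated for the fan bridge (filename assumed).
cat /var/snap/lxd/common/lxd/security/apparmor/profiles/lxd_dnsmasq-lxdfan0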

grep -i apparmor /var/log/kern.log
Aug 3 10:17:51 Q2 kernel: [1425893.134362] audit: type=1400 audit(1596464271.492:127): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/snap/core/9665/usr/lib/snapd/snap-confine" pid=30337 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.134369] audit: type=1400 audit(1596464271.492:128): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/snap/core/9665/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=30337 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.226234] audit: type=1400 audit(1596464271.584:129): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.install" pid=30345 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.238259] audit: type=1400 audit(1596464271.596:130): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxd" pid=30349 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.238375] audit: type=1400 audit(1596464271.596:131): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.migrate" pid=30350 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.238776] audit: type=1400 audit(1596464271.596:132): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.buginfo" pid=30341 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.239570] audit: type=1400 audit(1596464271.596:133): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc" pid=30347 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.239605] audit: type=1400 audit(1596464271.596:134): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.activate" pid=30339 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.239668] audit: type=1400 audit(1596464271.596:135): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.check-kernel" pid=30342 comm="apparmor_parser"
Aug 3 10:17:51 Q2 kernel: [1425893.240457] audit: type=1400 audit(1596464271.596:136): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.remove" pid=30346 comm="apparmor_parser"
Aug 3 10:18:09 Q2 kernel: [1425910.676796] audit: type=1400 audit(1596464289.032:144): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd_dnsmasq-lxdfan0_</var/snap/lxd/common/lxd>" pid=30916 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435219.697090] audit: type=1400 audit(1596473598.070:145): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>" pid=24125 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.333829] audit: type=1400 audit(1596473598.706:146): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="/usr/bin/lxc-start" pid=24630 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.334339] audit: type=1400 audit(1596473598.710:147): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="/usr/lib/snapd/snap-confine" pid=24632 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.334346] audit: type=1400 audit(1596473598.710:148): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=24632 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.334866] audit: type=1400 audit(1596473598.710:149): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="/usr/bin/man" pid=24631 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.334872] audit: type=1400 audit(1596473598.710:150): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="man_filter" pid=24631 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.334877] audit: type=1400 audit(1596473598.710:151): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="man_groff" pid=24631 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.336922] audit: type=1400 audit(1596473598.710:152): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="lxc-container-default" pid=24627 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.336929] audit: type=1400 audit(1596473598.710:153): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="lxc-container-default-cgns" pid=24627 comm="apparmor_parser"
Aug 3 12:53:18 Q2 kernel: [1435220.336933] audit: type=1400 audit(1596473598.710:154): apparmor="STATUS" operation="profile_load" label="lxd-UVIPIN-2020-MAY11_</var/snap/lxd/common/lxd>//&:lxd-UVIPIN-2020-MAY11_:unconfined" name="lxc-container-default-with-mounting" pid=24627 comm="apparmor_parser"

lxc network show NETWORK-NAME --target Q1
Error: not found

Not sure what the network name is.

The only security that I explicitly set up is UFW.
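Since UFW is in the mix, it may also be worth checking whether any of its rules could be filtering traffic on the bridge; a quick look:

# Anything matching lxdfan0 or DHCP (ports 67/68 udp) in the output would be suspect.
ufw status verbose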

Can you show the output of lxc network list, please?

lxc network list
+---------+----------+---------+------+------+-------------+---------+---------+
| NAME    | TYPE     | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE   |
+---------+----------+---------+------+------+-------------+---------+---------+
| enp1s0  | physical | NO      |      |      |             | 0       |         |
+---------+----------+---------+------+------+-------------+---------+---------+
| enp2s0  | physical | NO      |      |      |             | 0       |         |
+---------+----------+---------+------+------+-------------+---------+---------+
| lxdfan0 | bridge   | YES     |      |      |             | 167     | CREATED |
+---------+----------+---------+------+------+-------------+---------+---------+

So lxdfan0 is the network name; run:

lxc network show lxdfan0 --target Q1

Thanks
root@Q2:/home/ic2000# lxc network show lxdfan0 --target Q2
config:
  bridge.mode: fan
  fan.underlay_subnet: 84.17.40.0/24
description: ""
name: lxdfan0
type: bridge
used_by:
  • /1.0/instances/AI-GENIE-2020-mar6
  • /1.0/instances/AI-GENIE-2020-mar6-bk
  • /1.0/instances/CHAT
  • /1.0/instances/DIASPORA-2020-mar6
  • /1.0/instances/DIASPORA-2020-mar6-bk
  • /1.0/instances/DIMELO24
  • /1.0/instances/DIMELO24-bk-Jul20-2020
  • /1.0/instances/EMPODERATE-2020-mar6
  • /1.0/instances/ENLACES24-ADSERVER-REVIVE-2020-APR27
  • /1.0/instances/ENLACES24-ADSERVER-REVIVE-2020-APR30
  • /1.0/instances/ENLACES24-ADSERVER-REVIVE-2020-mar6
  • /1.0/instances/ENLACES24-ADSERVER-REVIVE-2020-mar6-bk
  • /1.0/instances/ENLACES24-ANALY-2020-2020-APR27
  • /1.0/instances/ENLACES24-ANALY-2020-2020-mar6
  • /1.0/instances/ENLACES24-ANALY-2020-2020-mar6-bk
  • /1.0/instances/ENLACES24-ANALY-2020-APR-29
  • /1.0/instances/ENLACES24-ANALY-2020-APR29
  • /1.0/instances/ENLACES24-ANALY-2020-APR30
  • /1.0/instances/ENLACES24-MONITORING-1-2020-mar6
  • /1.0/instances/ENLACES24-MONITORING-1-2020-mar6-bk
  • /1.0/instances/ENLACES24-MONITORING-2-2020-mar6
  • /1.0/instances/ENLACES24-MONITORING-2-2020-mar6-bk
  • /1.0/instances/ENLACES24-MONITORING-2020-mar6
  • /1.0/instances/ENLACES24-MONITORING-2020-mar6-bk
  • /1.0/instances/ENLACES24-MONITORING-Pandora-2020-mar6
  • /1.0/instances/ENLACES24-MONITORING-Pandora-2020-mar6-bk
  • /1.0/instances/ENLACES24-SPLUNK-2020-mar6
  • /1.0/instances/ENLACES24-SPLUNK-2020-mar6-bk
  • /1.0/instances/GOOD-EATS
  • /1.0/instances/JITSI
  • /1.0/instances/NEXTCLOUD-2020-mar6-bk
  • /1.0/instances/PHP-AGT-801-2019-11-26-2020-mar6-bk
  • /1.0/instances/PHP-AGT-801-2020-MAY2
  • /1.0/instances/PHP-AGT-801-2020-mar6
  • /1.0/instances/PHP-AGT-HESK-2019-11-28a-2020-mar6-bk
  • /1.0/instances/PHP-AGT-HESK-2020-MAY2
  • /1.0/instances/PHP-AGT-HESK-2020-mar6
  • /1.0/instances/PHP-CASEMGR-2020-MAY2
  • /1.0/instances/PHP-CASEMGR-2020-mar6
  • /1.0/instances/PHP-CASEMGR-2020-mar6-bk
  • /1.0/instances/PHP-CONSTMGR-2020-MAY2
  • /1.0/instances/PHP-CONSTMGR-2020-mar6
  • /1.0/instances/PHP-CONSTMGR-2020-mar6-bk
  • /1.0/instances/PHP-ELMONSTRO24-2020-MAY-23
  • /1.0/instances/PHP-ELMONSTRO24-2020-MAY-23-bk-Jul20-2020
  • /1.0/instances/PHP-OAI-2019-06-03-2020-mar6-bk
  • /1.0/instances/PHP-OAI-2020-MAY10
  • /1.0/instances/PHP-OAI-2020-MAY10-bk-Jul20-2020
  • /1.0/instances/PHP-OAI-2020-MAY2
  • /1.0/instances/PHP-OAI-2020-mar6
  • /1.0/instances/PHP-POLICY1-2020-MAY2
  • /1.0/instances/PHP-POLICY1-2020-MAY2-bk-Jul20-2020
  • /1.0/instances/PHP-POLICY1-2020-mar6
  • /1.0/instances/PHP-POLICY1-2020-mar6-bk
  • /1.0/instances/PHP-SERVER2-2020-MAY2
  • /1.0/instances/PHP-SERVER2-2020-mar6
  • /1.0/instances/PHP-SERVER2-2020-mar6-bk
  • /1.0/instances/PHP-SERVER3-2020-MAY2
  • /1.0/instances/PHP-SERVER3-2020-mar6
  • /1.0/instances/PHP-SERVER3-2020-mar6-bk
  • /1.0/instances/PHP-SOCIAL1-2020-MAY2
  • /1.0/instances/PHP-SOCIAL1-2020-MAY2-bk-Jul20-2020
  • /1.0/instances/PHP-SOCIAL1-2020-mar6
  • /1.0/instances/PHP-SOCIAL1-2020-mar6-bk
  • /1.0/instances/PLANEINVOICE
  • /1.0/instances/QAI-CLOUD-2020-mar6
  • /1.0/instances/RETHINK-2020-APR30
  • /1.0/instances/RETHINK-2020-APR30-bk-Jul20-2020
  • /1.0/instances/TUMUNDO-2020-APR29
  • /1.0/instances/TUMUNDO-2020-APR29-bk-Jul20-2020
  • /1.0/instances/UVI-ELMONSTRO4
  • /1.0/instances/UVIPIN-2019-11-24-2020-mar6-bk
  • /1.0/instances/UVIPIN-2020-APR29
  • /1.0/instances/UVIPIN-2020-MAY11
  • /1.0/instances/V-Video-Encoder1-2020-1-04b-2020-mar7
  • /1.0/instances/V-Video-Encoder1-2020-1-04b-2020-mar7-bk
  • /1.0/instances/V-Video-Encoder1-bk-may-20-2020
  • /1.0/instances/V-VideoTuyos24–2020-mar7
  • /1.0/instances/V-VideoTuyos24–2020-mar7-bk2
  • /1.0/instances/V-VideoTuyos24-2019-11-24-2020-mar7-bk
  • /1.0/instances/V-VideoTuyos24-BK-may-20-2020
  • /1.0/instances/VideoEncoder-2020-mar7
  • /1.0/instances/VideoEncoder-2020-mar7-bk
  • /1.0/instances/VideoEncoder-2020-mar7-bk2
  • /1.0/instances/WP-CCNESNOTICIAS-2020-55-2020-mar8-bk2
  • /1.0/instances/WP-CoronaVirus19-2020-APR29
  • /1.0/instances/WP-CoronaVirus19-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-DYLANTHRUST-2020-APR30
  • /1.0/instances/WP-DYLANTHRUST-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-ELGRUPO24-2020-APR29
  • /1.0/instances/WP-ELGRUPO24-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-ELPORTAL24-2019-05-23-2020-mar7-bk2
  • /1.0/instances/WP-ELPORTAL24-2020-APR29
  • /1.0/instances/WP-ELPORTAL24-2020-MAY11
  • /1.0/instances/WP-ELPORTAL24-2020-mar7
  • /1.0/instances/WP-FREEDOMRINGS-2020-APR27
  • /1.0/instances/WP-FREEDOMRINGS-2020-APR27-bk-Jul20-2020
  • /1.0/instances/WP-FREEDOMRINGS-2020-APR30
  • /1.0/instances/WP-HAPPYDOGS-2020-APR-29
  • /1.0/instances/WP-HAPPYDOGS-2020-APR-30
  • /1.0/instances/WP-HAPPYDOGS-2020-APR29
  • /1.0/instances/WP-HAPPYDOGS-2020-mar28
  • /1.0/instances/WP-HAPPYDOGS-2020-mar28-bk-Jul20-2020
  • /1.0/instances/WP-JOBSHOOKUP-2020-JUN9
  • /1.0/instances/WP-JOBSHOOKUP-2020-JUN9-bk-Jul20-2020
  • /1.0/instances/WP-LAGACETALIBRE-2020-APR30
  • /1.0/instances/WP-LAGACETALIBRE-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-LAGACETALIBRE-FORUM-2020-APR30
  • /1.0/instances/WP-LAGACETALIBRE-FORUM-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-LEGALFOX3-2020-APR29
  • /1.0/instances/WP-LEGALFOX3-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-MIAMICOLOSERVICE-APR30
  • /1.0/instances/WP-MIAMICOLOSERVICE-APR30-bk-Jul20-2020
  • /1.0/instances/WP-MIAMISHORES24-2020-APR30
  • /1.0/instances/WP-MIAMISHORES24-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-MYPROXIMOS-2020-APR30
  • /1.0/instances/WP-MYPROXIMOS-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-OPERAZION-2019-08-23W-NEW-2020-mar6-bk
  • /1.0/instances/WP-OPERAZION-2020-APR27
  • /1.0/instances/WP-OPERAZION-2020-APR30
  • /1.0/instances/WP-OPERAZION-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-PASSIONWEBHOSTING-2020-APR30
  • /1.0/instances/WP-PASSIONWEBHOSTING-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-PATRIOTNEWS24-2020-APR30
  • /1.0/instances/WP-PATRIOTNEWS24-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-PATRIOTNEWS24-2020-mar6
  • /1.0/instances/WP-PATRIOTNEWS24-2020-mar6-bk
  • /1.0/instances/WP-QUANTUMAI-2020-APR30
  • /1.0/instances/WP-QUANTUMAI-2020-mar6
  • /1.0/instances/WP-QUANTUMAI-2020-mar6-bk
  • /1.0/instances/WP-QUANTUMGRIDS-2020-APR30
  • /1.0/instances/WP-QUANTUMGRIDS-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-REALHARDPOLITICS-2020-APR29
  • /1.0/instances/WP-REALHARDPOLITICS-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-REALSOCIALISMO-2020-APR29
  • /1.0/instances/WP-REALSOCIALISMO-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-REOPENING2020-MAY21-2020
  • /1.0/instances/WP-REOPENING2020-MAY21-2020-bk-Jul20-2020
  • /1.0/instances/WP-RICHARDO1-2020-MAY14
  • /1.0/instances/WP-RICHARDO1-2020-MAY14-bk-Jul20-2020
  • /1.0/instances/WP-SPACEWATCH-2020-APR30
  • /1.0/instances/WP-SPACEWATCH-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-SURVIVING
  • /1.0/instances/WP-SURVIVING-bk-Jul20-2020
  • /1.0/instances/WP-THEMIAMIMETROPOLIS-2020-JULY5
  • /1.0/instances/WP-THEMIAMIMETROPOLIS-2020-JULY5-bk-Jul20-2020
  • /1.0/instances/WP-THEMIAMIMETROPOLIS-BK-2020-may-20
  • /1.0/instances/WP-THEMIAMIREPORTER-2020-JULY5
  • /1.0/instances/WP-THEMIAMIREPORTER-2020-JULY5-bk-Jul20-2020
  • /1.0/instances/WP-TUMUNDO24-2020-JUL5
  • /1.0/instances/WP-TUMUNDO24-2020-JUL5-bk-Jul20-2020
  • /1.0/instances/WP-TUMUNDO24-2020-MAY4
  • /1.0/instances/WP-TWOTIGERS-2020-2020-APR29
  • /1.0/instances/WP-TWOTIGERS-2020-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WP-UBER-2020-mar6-bk
  • /1.0/instances/WP-VIRTUALHUMANS-2020-APR30
  • /1.0/instances/WP-VIRTUALHUMANS-2020-APR30-bk-Jul20-2020
  • /1.0/instances/WP-VIRTUALLY2025-2020-MAY29
  • /1.0/instances/WP-VIRTUALLY2025-2020-MAY29-bk-Jul20-2020
  • /1.0/instances/WP-WARHAPPENS-2020-APR29
  • /1.0/instances/WP-WARHAPPENS-2020-APR29-bk-Jul20-2020
  • /1.0/instances/WPZ-DATABASE
  • /1.0/instances/WPZ-DATABASE-bk-Jul20-2020
  • /1.0/instances/WW-CCNESNOTICIAS-2020-mar28
  • /1.0/instances/lxdMosaic2020B2
  • /1.0/instances/move-8eaeeeb7-804d-44ca-8622-2b25c34baea0
  • /1.0/profiles/default
managed: true
status: Created
locations:
  • Q1
  • Q3
  • Q2
  • Q4

And can you see dnsmasq running on that node:

ps aux | grep dnsmasq

ps aux | grep dnsmasq
lxd 30919 0.0 0.0 49968 3680 ? Ss 10:18 0:01 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdfan0 --listen-address=240.19.0.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.hosts --dhcp-range 240.19.0.2,240.19.0.254,1h -s lxd -S /lxd/240.19.0.1#1053 --rev-server=240.0.0.0/8,240.19.0.1#1053 --conf-file=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.raw -u lxd
root 31474 0.0 0.0 14428 1012 pts/0 R+ 13:26 0:00 grep --color=auto dnsmasq

Great, thanks, so dnsmasq is running.
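Since it's up, it may also be worth confirming it is actually listening where expected; a quick check, run as root:

# dnsmasq should be bound to 240.19.0.1 on 53/udp (DNS) and 67/udp (DHCP).
ss -ulpn | grep dnsmasq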

Also, can you show the output of ip a on your host?

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:30:48:cf:7c:4c brd ff:ff:ff:ff:ff:ff
inet 84.17.40.19/26 brd 84.17.40.63 scope global noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::85e2:7c79:defd:ade1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:30:48:cf:7c:4d brd ff:ff:ff:ff:ff:ff
4: lxdfan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 02:f0:ac:00:7b:b8 brd ff:ff:ff:ff:ff:ff
inet 240.19.0.1/8 scope global lxdfan0
valid_lft forever preferred_lft forever
inet6 fe80::d0e9:96ff:fe67:4cba/64 scope link
valid_lft forever preferred_lft forever
16: vethf27985a3@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP group default qlen 1000
link/ether 8e:f4:b9:f7:fd:0c brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: lxdfan0-mtu: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN group default qlen 1000
link/ether ce:cc:5c:de:d7:94 brd ff:ff:ff:ff:ff:ff
inet6 fe80::cccc:5cff:fede:d794/64 scope link
valid_lft forever preferred_lft forever
18: lxdfan0-fan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UNKNOWN group default qlen 1000
link/ether 02:f0:ac:00:7b:b8 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f0:acff:fe00:7bb8/64 scope link
valid_lft forever preferred_lft forever
20: vethf9c373d8@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master lxdfan0 state UP group default qlen 1000
link/ether 3e:c0:87:3c:98:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Please can you show the output of lxc config show <container> --expanded, where <container> is the name of one of your affected containers.

Then also, inside the affected container, please can you manually add an IP using the command:

ip a add 240.19.0.x/24 dev eth0

where .x is a free IP in your subnet; you can pick any address in the range 240.19.0.2 to 240.19.0.254.

Then inside the container show output of:

ip a
ip r

and

ping 240.19.0.1 -c 5

What I'm trying to ascertain is whether the bridge is functioning correctly (i.e. can you ping the gateway?), and whether it's specifically something blocking DHCP requests to dnsmasq.
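If it's easier to drive these checks from the host, the same commands can be run via lxc exec; a sketch, where CONTAINER and the .50 address are placeholders:

# Add a static address (pick a free one), then check addresses, routes, and the gateway.
lxc exec CONTAINER -- ip a add 240.19.0.50/24 dev eth0
lxc exec CONTAINER -- ip a
lxc exec CONTAINER -- ip r
lxc exec CONTAINER -- ping -c 5 240.19.0.1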