Incus containers are not assigned IP addresses

Hi,

I have Incus set up on my laptop with a few containers. All of them were
migrated from LXD 5 to Incus 6.0.1 (from Debian backports) using the
lxd-to-incus tool. The problem I have is similar, if not identical, to the one
I had with LXD.

incus ls
+-----------+---------+------+------+-----------+-----------+
|   NAME    |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-----------+---------+------+------+-----------+-----------+
| almadev   | RUNNING |      |      | CONTAINER | 3         |
+-----------+---------+------+------+-----------+-----------+
| debiandev | STOPPED |      |      | CONTAINER | 3         |
+-----------+---------+------+------+-----------+-----------+
| fedoradev | STOPPED |      |      | CONTAINER | 5         |
+-----------+---------+------+------+-----------+-----------+
| rockydev  | STOPPED |      |      | CONTAINER | 1         |
+-----------+---------+------+------+-----------+-----------+

I can start the containers and they work well, except that they are
not assigned an IP address and thus cannot access the network or
the internet. I have tried starting all of the containers, and the
issue is the same for each of them.

The network bridge, lxdbr0, looks like it should work to me:

incus network info lxdbr0 
Name: lxdbr0
MAC address: 00:16:3e:8f:49:c6
MTU: 1500
State: up
Type: broadcast

IP addresses:
  inet  10.122.68.1/24 (global)
  inet6 fd42:75e0:e025:3ad7::1/64 (global)
  inet6 fe80::216:3eff:fe8f:49c6/64 (link)

Network usage:
  Bytes received: 10.79kB
  Bytes sent: 780B
  Packets received: 85
  Packets sent: 10

Bridge:
  ID: 8000.00163e8f49c6
  STP: false
  Forward delay: 1500
  Default VLAN ID: 1
  VLAN filtering: true
  Upper devices: 

It also looks like AppArmor is blocking the container operations,
at least partially:

sudo dmesg
[   41.024614] audit: type=1400 audit(1722343081.668:20): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=2591 comm="apparmor_parser"
[   41.024622] audit: type=1400 audit(1722343081.668:21): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=2591 comm="apparmor_parser"
[   41.024625] audit: type=1400 audit(1722343081.668:22): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=2591 comm="apparmor_parser"
[   41.024628] audit: type=1400 audit(1722343081.668:23): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=2591 comm="apparmor_parser"
[   41.075087] NET: Registered PF_VSOCK protocol family
[   65.398636] audit: type=1400 audit(1722343106.293:24): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=2661 comm="apparmor_parser"
[   65.398647] audit: type=1400 audit(1722343106.293:25): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=2661 comm="apparmor_parser"
[   65.398652] audit: type=1400 audit(1722343106.293:26): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=2661 comm="apparmor_parser"
[   65.398671] audit: type=1400 audit(1722343106.293:27): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=2661 comm="apparmor_parser"
[   65.897585] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   65.942587] audit: type=1400 audit(1722343106.837:28): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=2727 comm="apparmor_parser"
[   67.876182] audit: type=1400 audit(1722343108.772:29): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=2775 comm="apparmor_parser"
[   67.876192] audit: type=1400 audit(1722343108.772:30): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=2775 comm="apparmor_parser"
[   67.876196] audit: type=1400 audit(1722343108.772:31): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=2775 comm="apparmor_parser"
[   67.876200] audit: type=1400 audit(1722343108.772:32): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=2775 comm="apparmor_parser"
[   68.321143] audit: type=1400 audit(1722343109.220:33): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=2826 comm="apparmor_parser"
[  198.599578] audit: type=1400 audit(1722343239.510:34): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=3411 comm="apparmor_parser"
[  198.933460] audit: type=1400 audit(1722343239.842:35): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=3427 comm="apparmor_parser"
[  198.933470] audit: type=1400 audit(1722343239.842:36): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=3427 comm="apparmor_parser"
[  198.933474] audit: type=1400 audit(1722343239.842:37): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=3427 comm="apparmor_parser"
[  198.933478] audit: type=1400 audit(1722343239.842:38): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=3427 comm="apparmor_parser"
[  199.672785] audit: type=1400 audit(1722343240.582:39): apparmor="STATUS" operation="profile_load" profile="unconfined" name="incus_dnsmasq-lxdbr0_</var/lib/incus>" pid=3494 comm="apparmor_parser"
[  269.934398] lxdbr0: port 1(veth035f7020) entered blocking state
[  269.934404] lxdbr0: port 1(veth035f7020) entered disabled state
[  269.936918] device veth035f7020 entered promiscuous mode
[  270.197663] audit: type=1400 audit(1722343311.112:40): apparmor="STATUS" operation="profile_load" profile="unconfined" name="incus-almadev_</var/lib/incus>" pid=5898 comm="apparmor_parser"
[  270.280867] physXdIa0r: renamed from veth1adcd8dd
[  270.297438] eth0: renamed from physXdIa0r
[  270.333177] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  270.333227] lxdbr0: port 1(veth035f7020) entered blocking state
[  270.333233] lxdbr0: port 1(veth035f7020) entered forwarding state
[  270.333333] IPv6: ADDRCONF(NETDEV_CHANGE): lxdbr0: link becomes ready
[  270.419005] audit: type=1400 audit(1722343311.332:41): apparmor="STATUS" operation="profile_load" profile="unconfined" name="incus_forkproxy-Waylandsocket_almadev_</var/lib/incus>" pid=5930 comm="apparmor_parser"
[  270.869524] lxdbr0: port 1(veth035f7020) entered disabled state
[  270.965021] audit: type=1400 audit(1722343311.880:42): apparmor="DENIED" operation="file_lock" profile="incus-almadev_</var/lib/incus>" pid=6065 comm="(ostnamed)" family="unix" sock_type="dgram" protocol=0 requested_mask="send"
[  270.965034] audit: type=1400 audit(1722343311.880:43): apparmor="DENIED" operation="file_lock" profile="incus-almadev_</var/lib/incus>" pid=6065 comm="(ostnamed)" family="unix" sock_type="dgram" protocol=0 requested_mask="send"
[  270.980686] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  270.980713] lxdbr0: port 1(veth035f7020) entered blocking state
[  270.980716] lxdbr0: port 1(veth035f7020) entered forwarding state
[  290.246824] audit: type=1400 audit(1722343331.161:44): apparmor="DENIED" operation="file_lock" profile="incus-almadev_</var/lib/incus>" pid=6161 comm="(ostnamed)" family="unix" sock_type="dgram" protocol=0 requested_mask="send"
[  290.246830] audit: type=1400 audit(1722343331.161:45): apparmor="DENIED" operation="file_lock" profile="incus-almadev_</var/lib/incus>" pid=6161 comm="(ostnamed)" family="unix" sock_type="dgram" protocol=0 requested_mask="send"
[  388.766309] physXdIa0r: renamed from eth0
[  388.781787] lxdbr0: port 1(veth035f7020) entered disabled state
[  388.790325] veth1adcd8dd: renamed from physXdIa0r
[  388.839163] IPv6: ADDRCONF(NETDEV_CHANGE): veth1adcd8dd: link becomes ready
[  388.839291] lxdbr0: port 1(veth035f7020) entered blocking state
[  388.839298] lxdbr0: port 1(veth035f7020) entered forwarding state
[  388.893814] device veth035f7020 left promiscuous mode
[  388.893868] lxdbr0: port 1(veth035f7020) entered disabled state
[  389.668868] audit: type=1400 audit(1722343430.583:46): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="incus_forkproxy-Waylandsocket_almadev_</var/lib/incus>" pid=6467 comm="apparmor_parser"
[  389.834561] audit: type=1400 audit(1722343430.751:47): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="incus-almadev_</var/lib/incus>" pid=6469 comm="apparmor_parser"

The containers used to have internet access, so I suspect a software
update interfered with them. I am using the MullvadVPN app, which is my
prime suspect, but I am pretty new to Incus and am having difficulty
debugging the issue.

My firewall is firewalld, and it looks like the lxdbr0 bridge has
been properly added to the trusted zone:

sudo firewall-cmd --get-active-zones
home
  interfaces: wlp0s20f3
trusted
  interfaces: lxdbr0

EDIT:
I have also verified that Incus' own firewall is disabled:

incus network show lxdbr0
config:
  ipv4.address: 10.122.68.1/24
  ipv4.firewall: "false"
  ipv4.nat: "true"
  ipv6.address: fd42:75e0:e025:3ad7::1/64
  ipv6.firewall: "false"
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/almadev
- /1.0/instances/debiandev
- /1.0/instances/fedoradev
- /1.0/instances/rockydev
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default
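
For reference, these keys can be toggled with incus network set, e.g.:

incus network set lxdbr0 ipv4.firewall=false
incus network set lxdbr0 ipv6.firewall=false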

Any help with debugging this issue would be highly appreciated.

When your containers boot up, they send DHCP requests to the bridge (here, lxdbr0, which was retained during the migration) in order to get their network configuration.
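
You can check the result from inside a container; for example, with your almadev container (assuming the container image ships the ip tool):

incus exec almadev -- ip addr show eth0

If DHCP failed, eth0 will have no IPv4 address, at most a link-local IPv6 one.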

The lxdbr0 bridge is a managed bridge, and you can verify this when you run incus network list. When we say managed, we mean managed by Incus. Incus manages this bridge by launching a DHCP/DNS service that works specifically for that network interface. It’s a dnsmasq process.
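
To verify, run the following and check the MANAGED column in the lxdbr0 row (the exact columns may vary slightly between versions):

incus network list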

Run the following on the host to locate that process and check that it is actually running. We run ps to show all processes, then filter for dnsmasq, and finally for the name of the interface, lxdbr0.

ps ax | grep dnsmasq | grep lxdbr0

You should get a long line that looks like this (shortened).

   4289 ?        Ss     0:00 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdbr0 --dhcp-rapid-commit --no-negcache --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address= .......

You have mentioned VPN software and a firewall. The Incus containers are self-contained and rely on the lxdbr0 network interface. Your next step is to figure out whether some firewall rule applies to this interface. If you can momentarily stop the firewall and the VPN and then restart the containers, you can verify what is stopping your Incus containers from getting a DHCP configuration.
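
A quick way to look for rules that touch the bridge (a sketch, assuming an nftables-based setup, which both firewalld and Mullvad use on modern Debian):

sudo nft list ruleset | grep -B2 -A2 lxdbr0
sudo firewall-cmd --get-zone-of-interface=lxdbr0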

Since your Incus containers are self-contained (DHCP only needs the local bridge), you can disconnect the host from the network while the firewall is temporarily down.
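
A minimal test sequence might look like this (a sketch, assuming firewalld runs as a systemd service; re-enable everything afterwards):

sudo systemctl stop firewalld
incus restart almadev
incus list almadev
sudo systemctl start firewalld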

If you want to dive even deeper, you can use the tshark command, which shows you the network packets as they move from one IP address to another. On the host, you would run the following: tshark listens on the lxdbr0 network interface, and -n disables DNS name resolution (only IP addresses are shown).

sudo apt install tshark
sudo tshark -i lxdbr0 -n

Then restart a container (incus restart mycontainer) and observe whether the container actually sends the DHCP request and whether the managed network interface actually replies with the network configuration.
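
If the capture is noisy, you can restrict it to DHCP traffic with a capture filter (a sketch; ports 67/68 are the standard DHCP ports):

sudo tshark -i lxdbr0 -n -f "port 67 or port 68"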

Hi @simos

Thank you for your reply! I experimented with various settings and have
verified that it is indeed MullvadVPN which blocks the lxdbr0 interface.

The VPN service automatically enabled a “lockdown” mode when I tried
to disable it before, which made it seem like disabling the VPN did not
actually make a difference. If I disable MullvadVPN and start a container,
everything works as expected.

I suppose the culprit is the firewall rules created by Mullvad. I have
contacted their support for assistance with fixing this issue. Meanwhile, if
you have any ideas on how to fix it, I would happily try your suggestions.

MullvadVPN describes here what it does with regard to your system's firewall. In a nutshell, it takes over your firewall and imposes its own firewall rules.

I glanced through this document and noticed there is a setting called Allow LAN.
By enabling this, MullvadVPN should not affect the 10.0.0.0/8 IP range (used by Incus).
Can you check whether that is a good workaround for you?
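
If you prefer the command line, the Mullvad app also ships a mullvad CLI where this can be toggled (a sketch; verify against your app version):

mullvad lan set allow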

Hi @simos

Your proposed workaround works perfectly, thank you very much
for your help with this issue!
