Container: enabling security.nesting disables IP networking

The mission

I’m trying to migrate my GitLab deployment (Docker on a VM) to an Incus container by following the instructions at Frequently asked questions - Incus documentation for setting up Docker in an Incus container, and I’m stuck.
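
For reference, the container config I applied per that FAQ looks roughly like this (the instance name matches the hostname you’ll see in the logs below; the syscall-interception keys are the ones the FAQ suggests for Docker, so take the exact set as an approximation of my setup):

$ incus config set 005-registry-infra security.nesting=true
$ incus config set 005-registry-infra security.syscalls.intercept.mknod=true
$ incus config set 005-registry-infra security.syscalls.intercept.setxattr=true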

What I’ve done so far

I have created a fresh Debian 12 container with two NICs, each connected to a network bridge: eth0 to lan0 and eth1 to lan0-proxy. Its network config:

$ cat /etc/systemd/network/eth0.network 
[Match]
Name=eth0

[Network]
LinkLocalAddressing=ipv4
DHCP=ipv4

[DHCPv4]
UseDomains=true

[DHCP]
ClientIdentifier=mac
$ cat /etc/systemd/network/eth1.network 
[Match]
Name=eth1

[Network]
Address=10.127.0.10/16
LinkLocalAddressing=ipv4
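
The two NICs are attached to the bridges along these lines (reconstructed from memory, so the exact device options are an approximation):

$ incus config device add 005-registry-infra eth0 nic network=lan0 name=eth0
$ incus config device add 005-registry-infra eth1 nic network=lan0-proxy name=eth1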

My networking setup in a nutshell

For context, I’m running a separate Incus container for DHCP/DNS at 10.0.0.2 with Technitium DNS (Technitium DNS Server | An Open Source DNS Server For Privacy & Security) on the lan0 bridge, so eth0 gets its IP from that container. lan0-proxy is just an internal bridge to play around with; I’ve set an arbitrary static IP address on eth1 with no gateway, as lan0-proxy itself doesn’t have an IP address either. I also have a pfSense VM at 10.0.0.3 that acts as the default gateway for instances on lan0.
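
For orientation, when everything works the routing table inside the container looks roughly like this (default route handed out via DHCP, pointing at pfSense):

$ ip route
default via 10.0.0.3 dev eth0 proto dhcp src 10.0.1.3 metric 1024
10.0.0.0/16 dev eth0 proto kernel scope link src 10.0.1.3 metric 1024
10.127.0.0/16 dev eth1 proto kernel scope link src 10.127.0.10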

To nest or not to nest

With security.nesting=false, the container’s networking is the following:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
43: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:27:47:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.1.3/16 metric 1024 brd 10.0.255.255 scope global dynamic eth0
       valid_lft 59sec preferred_lft 59sec
45: eth1@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:bc:a0:79 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.127.0.10/16 brd 10.127.255.255 scope global eth1
       valid_lft forever preferred_lft forever
$ journalctl -xeu systemd-networkd
Apr 05 01:40:21 005-registry-infra systemd[1]: Starting systemd-networkd.service - Network Configuration...
      Subject: A start job for unit systemd-networkd.service has begun execution
      Defined-By: systemd
      Support: https://www.debian.org/support

      A start job for unit systemd-networkd.service has begun execution.

      The job identifier is 77.
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth1: Link UP
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth1: Gained carrier
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth1: Configuring with /etc/systemd/network/eth1.network.
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth0: Link UP
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth0: Gained carrier
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth0: Configuring with /etc/systemd/network/eth0.network.
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: lo: Link UP
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: lo: Gained carrier
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: Enumeration completed
Apr 05 01:40:21 005-registry-infra systemd[1]: Started systemd-networkd.service - Network Configuration.
      Subject: A start job for unit systemd-networkd.service has finished successfully
      Defined-By: systemd
      Support: https://www.debian.org/support

      A start job for unit systemd-networkd.service has finished successfully.

      The job identifier is 77.
Apr 05 01:40:21 005-registry-infra systemd-networkd[147]: eth0: DHCPv4 address 10.0.1.3/16, gateway 10.0.0.3 acquired from 10.0.0.2
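
To switch between the two states I simply toggle the flag and restart the container, along these lines:

$ incus config set 005-registry-infra security.nesting=true
$ incus restart 005-registry-infra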

However, setting security.nesting=true results in this:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
47: eth0@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:27:47:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
49: eth1@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:bc:a0:79 brd ff:ff:ff:ff:ff:ff link-netnsid 0
$ journalctl -xeu systemd-networkd --no-pager
Apr 05 01:43:01 005-registry-infra systemd[1]: Starting systemd-networkd.service - Network Configuration...
      Subject: A start job for unit systemd-networkd.service has begun execution
      Defined-By: systemd
      Support: https://www.debian.org/support

      A start job for unit systemd-networkd.service has begun execution.
      
      The job identifier is 83.
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: eth1: Link UP
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: eth1: Gained carrier
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: eth0: Link UP
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: eth0: Gained carrier
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: lo: Link UP
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: lo: Gained carrier
Apr 05 01:43:02 005-registry-infra systemd-networkd[144]: Enumeration completed
Apr 05 01:43:02 005-registry-infra systemd[1]: Started systemd-networkd.service - Network Configuration.
      Subject: A start job for unit systemd-networkd.service has finished successfully
      Defined-By: systemd
      Support: https://www.debian.org/support

      A start job for unit systemd-networkd.service has finished successfully.

      The job identifier is 83.

I also disabled IPv6 completely on the Incus host by setting the kernel cmdline parameter ipv6.disable=1, because 1) I’m not familiar with it and 2) I wanted to rule it out as the cause. It isn’t the cause, so that’s good to know. (By the way, with IPv6 enabled, running ip a in the nesting-enabled container shows an additional inet6 line and nothing else of particular interest.)
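
For the record, this is how I disabled it, assuming the usual GRUB workflow on Debian (the rest of my kernel command line is omitted here):

$ grep GRUB_CMDLINE_LINUX= /etc/default/grub
GRUB_CMDLINE_LINUX="ipv6.disable=1"
$ update-grub && reboot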

OS & Incus details

$ incus version
Client version: 6.11
Server version: 6.11
$ cat /etc/apt/sources.list.d/zabbly-incus-stable.sources 
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: bookworm
Components: main
Architectures: amd64
Signed-By: /etc/apt/keyrings/zabbly.asc
$ cat /etc/apt/sources.list.d/zabbly-kernel-stable.sources 
Enabled: yes
Types: deb deb-src
URIs: https://pkgs.zabbly.com/kernel/stable
Suites: bookworm
Components: main zfs
Architectures: amd64
Signed-By: /etc/apt/keyrings/zabbly.asc
$ uname -a
Linux incus.thetre.dev 6.13.9-zabbly+ #debian12 SMP PREEMPT_DYNAMIC Mon Mar 31 02:08:07 UTC 2025 x86_64 GNU/Linux
$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ incus network list
+------------+----------+---------+---------------+------+-------------+---------+---------+
|    NAME    |   TYPE   | MANAGED |     IPV4      | IPV6 | DESCRIPTION | USED BY |  STATE  |
+------------+----------+---------+---------------+------+-------------+---------+---------+
| enp7s0     | physical | NO      |               |      |             | 1       |         |
+------------+----------+---------+---------------+------+-------------+---------+---------+
| lan0       | bridge   | YES     | 10.0.0.1/16   | none |             | 10      | CREATED |
+------------+----------+---------+---------------+------+-------------+---------+---------+
| lan0-proxy | bridge   | YES     | none          | none |             | 6       | CREATED |
+------------+----------+---------+---------------+------+-------------+---------+---------+
| lo         | loopback | NO      |               |      |             | 0       |         |
+------------+----------+---------+---------------+------+-------------+---------+---------+
$ incus network show lan0
config:
  ipv4.address: 10.0.0.1/16
  ipv4.dhcp: "false"
  ipv4.firewall: "false"
  ipv4.nat: "true"
  ipv6.address: none
[REDACTED]
$ incus network show lan0-proxy 
config:
  ipv4.address: none
  ipv4.dhcp: "false"
  ipv4.firewall: "false"
  ipv4.nat: "true"
  ipv6.address: none
[REDACTED]

What to do next?

The DHCP/DNS/pfSense setup can’t be the reason: if it were, only eth0 would misbehave, while eth1 would still get the static IP address set by systemd-networkd, which has nothing to do with either of those deployments - lan0-proxy is completely separate from everything else.

Is there any way I can debug this further? Thanks!

I just realized I forgot to set the kernel module option for nested virtualization with KVM on AMD:

echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

After a reboot the option is applied, but security.nesting=true still doesn’t give me IP addresses in the container:

$ cat /sys/module/kvm_amd/parameters/nested
1

I guess that’s because LXC containers don’t use KVM at all, but explicitly setting the option shouldn’t hurt either way.

UPDATE 2: It IS the firewall. I’m using UFW. I’ll reply with a solution as soon as I figure out the culprit.

UPDATE: Still no IP addresses with a fresh container. Doesn’t matter which distro. No idea why it worked for a second. I guess it’s the host’s firewall…
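
The simplest way to test that guess is to take ufw out of the picture temporarily and restart the container:

$ ufw disable
$ incus restart 005-registry-infra
$ incus exec 005-registry-infra -- ip a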

I narrowed it down a bit to my own template container, which is based on an older linuxcontainers.org image release - a brand-new container works as expected now. It didn’t when I made this post, though, and apt upgrades didn’t fix it either. Maybe the image released tonight fixed it somehow.
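
To compare what my template was built from against what is currently published, something like this should do the job (as far as I know, the image.* keys on the instance record the upstream build it came from):

$ incus config show 005-registry-infra | grep 'image\.'
$ incus image list images:debian/12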

I’ll play around with it a bit and see if the issue persists after I’ve got GitLab running inside the container.

In the future I’ll use that GitLab instance to build my own images, configured to my liking, each night after a new upstream image has been released. That should keep my custom images in lockstep with upstream, so I’m always up to date when I run incus create/launch.

Found something interesting: Whenever I start a container that has security.nesting=true set, all of my instances - containers as well as VMs - lose network connectivity. The question is why?
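
What I’ll check next is whether the host’s firewall ruleset actually changes when such a container starts, e.g. by diffing it before and after (assuming nftables is what’s in use on the host):

$ nft list ruleset > /tmp/before.nft
$ incus start 005-registry-infra
$ nft list ruleset > /tmp/after.nft
$ diff -u /tmp/before.nft /tmp/after.nft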

I guess I’ve overcomplicated things with Technitium DNS + pfSense, but I’d love to keep those running for testing purposes. I’ll dig further into the routing side of things; I have a feeling I misconfigured something in pfSense.

I “fixed” the issue by removing ufw completely from the system. For context, the server is hosted at Hetzner, so I now rely on Hetzner’s firewall, which sits in front of the server. Basically, I allowed SSH and a few other protocols to the primary IP address and allowed everything to the secondary IP address, which is the WAN of my pfSense test VM; since pfSense is a firewall itself, I can manage the WAN rules I need from there. The only real difference from ufw is that Hetzner’s firewall is stateless, so I had to adjust the outgoing rules, too.
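
For anyone who would rather keep ufw: the Incus firewall documentation suggests explicitly allowing traffic on the managed bridges, roughly like this (bridge names are mine, and I haven’t re-tested this since removing ufw, so treat it as a pointer rather than a verified fix):

$ ufw allow in on lan0
$ ufw route allow in on lan0
$ ufw route allow out on lan0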

Anyway, this specific issue is resolved: without ufw and with a fresh container template, I’m getting IP addresses both via DHCP and statically configured. Because my system was misconfigured in several different places/layers, I doubt this issue will apply to many other people.