No IPv6 in external bridge network

Hi,

I have a /56 IPv6 prefix on my FritzBox with prefix delegation enabled.
On my Ubuntu Machine, I’ve set up my networking using netplan as follows:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp6: false
      dhcp4: false
  bridges:
    br0:
      interfaces: 
        - eno1
      dhcp6: true
      dhcp4: true

My br0 receives an IPv6 address.
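For reference, a quick way to confirm that on the host is something like:

ip -6 addr show dev br0 scope global

which should list the global address picked up from the FritzBox’s delegated prefix.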

Then I’ve created a profile and assigned it to my containers:

config: {}
description: MyBridge
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: bridge
used_by:
- /1.0/instances/win11
- /1.0/instances/bionic

Nevertheless, my containers only receive an IPv4 address from the pool and no IPv6 address.

330: eth0@if331: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b4:34:3b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.1.146/24 brd 10.10.1.255 scope global dynamic eth0
       valid_lft 863762sec preferred_lft 863762sec
    inet6 fe80::216:3eff:feb4:343b/64 scope link 
       valid_lft forever preferred_lft forever

Does anybody know whether IPv6 is possible at all in my scenario using an unmanaged bridge?

Did you configure netplan inside the container with dhcp6: true?
By default our configuration relies on SLAAC for IPv6, so DHCPv6 is disabled.
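For comparison, a minimal container-side netplan relying on SLAAC would look roughly like this (a sketch; accept-ra defaults to true and is spelled out here only for illustration, and the exact file shipped by the image may differ):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      # SLAAC: accept router advertisements, no DHCPv6
      dhcp6: false
      accept-ra: true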

No, the default inside the container is SLAAC. It’s the same for Ubuntu Bionic as for Win11; neither gets an IPv6 address.

To debug this, I suggest you start by running tcpdump on the host:

tcpdump -i br0 -nn -s0 icmp6

and at the same time inside the container:

incus exec foo -- tcpdump -i eth0 -nn -s0 icmp6

Check whether router advertisements (RAs) are seen on the bridge, and whether they are seen inside the container.
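If nothing shows up inside the container, you can also actively solicit an RA from there; a sketch, assuming the ndisc6 package is installed in the container (rdisc6 sends a router solicitation and prints any advertisement it receives):

incus exec foo -- rdisc6 eth0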

But FYI, this works absolutely fine for me on an Ubuntu 22.04 host with an Ubuntu 22.04 container. The only difference is that I assign the host bridge’s own IP addresses statically; the container on the bridge picks up its addresses from the upstream router (a Mikrotik).

# incus launch images:ubuntu/22.04/cloud -p br255 foobar
Launching foobar
# incus exec foobar -- ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
64: eth0@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:28:e4:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.12.255.239/24 metric 100 brd 10.12.255.255 scope global dynamic eth0
       valid_lft 42864sec preferred_lft 42864sec
    inet6 XXXX:XXXX:XXXX:XXXX:216:3eff:fe28:e4c8/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86375sec preferred_lft 14375sec
    inet6 fe80::216:3eff:fe28:e4c8/64 scope link
       valid_lft forever preferred_lft forever

incus profile:

# incus profile show br255
config: {}
description: Bridge to backbone
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br255
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br255
used_by:
...

Relevant parts of netplan config:

  ethernets:
    enp3s0:
      wakeonlan: true
      dhcp4: false
      accept-ra: false
      link-local: []
...
  vlans:
    vlan255:
      id: 255
      link: enp3s0
      accept-ra: false
      link-local: []
...
  bridges:
    br255:
      # See https://bugs.launchpad.net/netplan/+bug/1782221
      macaddress: c0:3f:d5:63:64:11
      interfaces: [vlan255]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      addresses: [10.12.255.11/24, "XXXX:XXXX:XXXX:XXXX::11/64"]
      gateway4: 10.12.255.1
      gateway6: "XXXX:XXXX:XXXX:XXXX::1"
      nameservers:
        addresses: [10.12.255.1]
        search: [example.net]

(Aside: you may note that I disable link-local addresses on bridges; I don’t want containers/VMs to be able to see the host machine on the bridge.)
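If you do the same, a quick sanity check (a sketch; assuming networkd honours link-local: [] by suppressing the fe80:: address) is:

ip -6 addr show dev br255

which should show only the static global address and no fe80:: entry.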