Is IPv6 Working for bridges created by Incus?

Hello all.

I understand that, as of now, Incus bridges support only stateless IPv6 and no DHCPv6. Is that right? I’m not seeing any IP address from the ipv6.dhcp.ranges being assigned to the container.

My containers are also not being assigned any stateless IPv6 addresses. And when I manually assign an IPv6 address to a container, I’m not able to ping to or from it.

I’m manually assigning the container an IPv6 address from the subnet already in use on our network, and the same goes for the bridge.

Any suggestions?

I’m testing with a Void Linux container.

In advance, thanks for any help/pointers I can get.

Sam

It works for me. Did you remember to set ipv6.dhcp.stateful on the bridge? dnsmasq will then hand out the IPv6 addresses.
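On a managed bridge that’s a one-liner (incusbr0 below is just a placeholder; substitute your own bridge name):

$ incus network set incusbr0 ipv6.dhcp.stateful=true
$ incus network show incusbr0    # confirm the key took effect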

If that’s not it, then please show:

  • The version of incus you are using, and the OS of the incus host
  • The network bridge configuration
  • The container creation command
  • The container’s full configuration (if you changed it after creating the container)

Ah, then it sounds like you’re not using an incus-managed bridge, but bridging containers directly onto the upstream network. In that case, automatic address allocation comes from whatever the upstream network provides; incus doesn’t control that.

incus will only assign addresses via DHCP if it is managing its own network, i.e. where incus itself is acting as the gateway. Here’s one that works for me:

config:
  dns.domain: nmm.internal
  ipv4.address: 192.0.2.254/24
  ipv4.dhcp.ranges: 192.0.2.200-192.0.2.250
  ipv4.nat: "true"
  ipv6.address: fdfd::254/64
  ipv6.dhcp.ranges: fdfd::1000-fdfd::1fff
  ipv6.dhcp.stateful: "true"
  ipv6.nat: "true"
description: ""
name: bridge0
type: bridge
managed: true
status: Created
locations:
- none
project: default
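If you’d rather build a similar bridge from scratch than edit an existing one, something along these lines should do it (same addresses as in the config above; adjust to taste):

$ incus network create bridge0 \
    dns.domain=nmm.internal \
    ipv4.address=192.0.2.254/24 \
    ipv4.dhcp.ranges=192.0.2.200-192.0.2.250 \
    ipv4.nat=true \
    ipv6.address=fdfd::254/64 \
    ipv6.dhcp.ranges=fdfd::1000-fdfd::1fff \
    ipv6.dhcp.stateful=true \
    ipv6.nat=true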

incus starts a dnsmasq process to provide the DHCP service on this network:

$ ps auxwww | grep bridge0
incus       3643  0.0  0.0  13196  4644 ?        Ss   Apr29   0:00 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=bridge0 --dhcp-rapid-commit --no-negcache --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=192.0.2.254 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/incus/networks/bridge0/dnsmasq.leases --dhcp-hostsfile=/var/lib/incus/networks/bridge0/dnsmasq.hosts --dhcp-range 192.0.2.200,192.0.2.250,1h --listen-address=fdfd::254 --enable-ra --dhcp-range fdfd::1000,fdfd::1fff,64,1h -s nmm.internal --interface-name _gateway.nmm.internal,bridge0 -S /nmm.internal/ --conf-file=/var/lib/incus/networks/bridge0/dnsmasq.raw -u incus -g incus
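You can also ask incus which leases that dnsmasq has actually handed out, rather than digging through its files (assuming a reasonably recent incus):

$ incus network list-leases bridge0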

It also puts the assigned IP for each host in /var/lib/incus/networks/bridge0/dnsmasq.hosts/<project>_<container>.<interface>, e.g.

$ incus config show --project nsrc-builder nfsen
...
devices:
  eth0:
    ipv4.address: 192.0.2.3
    ipv6.address: fdfd::3
    name: eth0
    network: bridge0
    type: nic
...

$ cat /var/lib/incus/networks/bridge0/dnsmasq.hosts/nsrc-builder_nfsen.eth0
00:16:3e:32:81:15,192.0.2.3,[fdfd::3],nfsen
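Those per-device static addresses can be set on an existing instance with incus config device set; eth0 here is just whatever your NIC device happens to be called:

$ incus config device set nfsen eth0 \
    ipv4.address=192.0.2.3 ipv6.address=fdfd::3 --project nsrc-builder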

If you can’t run things this way, then you’re better off assigning static addresses to each container, as you do today. This can be scripted with cloud-init when you create the container (assuming your OS supports cloud-init; I don’t know whether Void Linux does). Example in this thread.
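As a very rough sketch of that cloud-init route (untested here; the image name is just an example of a cloud-enabled image, and the addresses are made up to match the bridge above):

$ incus launch images:debian/12/cloud c1 --network bridge0 \
    --config cloud-init.network-config="$(cat <<'EOF'
version: 2
ethernets:
  eth0:
    addresses:
      - 192.0.2.10/24
      - fdfd::10/64
    routes:
      - to: 0.0.0.0/0
        via: 192.0.2.254
      - to: ::/0
        via: fdfd::254
EOF
)"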

Thank you Brian.

Let me try this out.

Will get back.

As to some of the points you raised -

  • You’re right that I’m not using the Incus default/auto IP addresses; I’m assigning the bridge an IP address from the range my ISP has assigned to me
  • I’m setting ipv6.dhcp.stateful to true
  • The DHCP range is also a subset of the ISP-assigned addresses
  • The Void image I’ve used is standard, no real changes made
  • Except that, when I wasn’t getting any IPv6 response even with the default values, I tried the above

Anyway, I’ll try what worked for you: create a second bridge, remove my customizations, and see if I can get it all working that way.

Regards.
Sam

EDIT - and the host OS is Arch/CachyOS.

I’ll probably also have to post a follow-up soon - if I have two bridges, how do I get VMs to communicate across them? I’m not that strong on the networking stuff :frowning:

So, this is what I did

  • Create new bridge
project: default
name: pvbr0
description: Private Bridge Conf
type: bridge
config:
  dns.domain: zetacloud.lan
  dns.search: zetacloud.lan
  ipv4.address: 192.168.68.68/24
  ipv4.dhcp.ranges: 192.168.68.101-192.168.68.250
  ipv4.nat: 'true'
  ipv6.address: fd68::254/64
  ipv6.dhcp.ranges: fd68::6800-fd68::68ff
  ipv6.dhcp.stateful: 'true'
  ipv6.nat: 'true'
  • Attach pvbr0 to the existing Void container void00 and remove the existing bridge (rough equivalents of the commands are sketched after this list)
  • Create a fresh Void Linux container void01
  • Start up both containers
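Roughly the commands involved (reconstructed from memory, so image/device names may not be exact):

$ incus network create pvbr0 \
    dns.domain=zetacloud.lan \
    dns.search=zetacloud.lan \
    ipv4.address=192.168.68.68/24 \
    ipv4.dhcp.ranges=192.168.68.101-192.168.68.250 \
    ipv4.nat=true \
    ipv6.address=fd68::254/64 \
    ipv6.dhcp.ranges=fd68::6800-fd68::68ff \
    ipv6.dhcp.stateful=true \
    ipv6.nat=true

# swap void00 from the old bridge onto pvbr0 (device name eth0 assumed)
$ incus config device remove void00 eth0
$ incus network attach pvbr0 void00 eth0

# fresh container straight onto the new bridge
$ incus launch images:voidlinux/current void01 --network pvbr0
$ incus start void00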

Result - FAILURE!
Here are the IPs assigned to both containers -

void00 # pre-existing container, where I'd assigned static IPv4/6 addresses
IPv4	192.168.168.99 (eth0), 192.168.68.147 (eth0)
IPv6	2402:e280:220:fc:ffff:0:c0a8:aa99 (eth0), fe80::1266:6aff:fe60:232d (eth0)
void01 # freshly created container
IPv4	192.168.68.122 (eth0)
IPv6	fe80::1266:6aff:fecc:9b60 (eth0)

In both cases, the IPv6 address assigned to the container is not what was expected, and the container can’t even ping pvbr0:

bash-5.2# ping -6 fd68::254
ping: connect: Network is unreachable
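For what it’s worth, the check inside the container for a global address and a default route looks something like this (sketch only, I haven’t pasted the actual output here):

# inside the container: anything beyond the fe80:: link-local address? any default route?
$ ip -6 addr show dev eth0
$ ip -6 route show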

Perhaps this is an issue with the image?

I had installed nmap on void00 to see if it was receiving DHCPv4/v6 broadcasts. void00 was seeing the DHCPv4 traffic, but nothing for DHCPv6.
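(For anyone reproducing this: an equivalent check with tcpdump, assuming it’s available in the container, would be watching the DHCP ports directly:)

# DHCPv6 uses UDP 546/547, DHCPv4 uses 67/68
$ tcpdump -ni eth0 'udp port 546 or udp port 547 or udp port 67 or udp port 68'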

Brian, thanks again.

Sam