To debug this, I suggest you start by running tcpdump on the host:
tcpdump -i br0 -nn -s0 icmp6
and at the same time inside the container:
incus exec foo -- tcpdump -i eth0 -nn -s0 icmp6
Check whether Router Advertisements (RAs) are seen on the bridge, and whether they also make it inside the container.
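To avoid waiting for the router's periodic unsolicited RA, you can trigger one while the captures are running, and narrow the filter to just RS/RA packets. This is a sketch assuming the `ndisc6` package is available in the container; RAs are ICMPv6 type 134 and Router Solicitations are type 133:

```shell
# On the host: capture only Router Solicitations (133) and Router Advertisements (134)
tcpdump -i br0 -nn -s0 'icmp6 and (ip6[40] == 133 or ip6[40] == 134)'

# In the container: send a Router Solicitation and print any RA that comes back
# (rdisc6 is from the "ndisc6" package -- assumption: it is installed)
incus exec foo -- rdisc6 eth0
```

Note the `ip6[40]` offset assumes no IPv6 extension headers before the ICMPv6 header, which is the normal case for RS/RA traffic.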
But FYI, it works absolutely fine for me, on an Ubuntu 22.04 host with an Ubuntu 22.04 container. The only difference is that I assign the host bridge's own IP addresses statically; the container on the bridge still picks up its addresses from the upstream router (a Mikrotik).
# incus launch images:ubuntu/22.04/cloud -p br255 foobar
Launching foobar
# incus exec foobar -- ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
64: eth0@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:28:e4:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.12.255.239/24 metric 100 brd 10.12.255.255 scope global dynamic eth0
       valid_lft 42864sec preferred_lft 42864sec
    inet6 XXXX:XXXX:XXXX:XXXX:216:3eff:fe28:e4c8/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86375sec preferred_lft 14375sec
    inet6 fe80::216:3eff:fe28:e4c8/64 scope link
       valid_lft forever preferred_lft forever
The incus profile used:
# incus profile show br255
config: {}
description: Bridge to backbone
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br255
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: br255
used_by:
...
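One more thing worth checking if you find RAs on br0 but not inside the container: MLD snooping on the bridge can filter the all-nodes multicast traffic that RAs are sent to. A quick check, assuming br0 is a plain Linux bridge:

```shell
# 1 = snooping enabled, 0 = disabled
cat /sys/class/net/br0/bridge/multicast_snooping

# Temporarily disable snooping to see whether RAs then reach the container
echo 0 > /sys/class/net/br0/bridge/multicast_snooping
```

If that fixes it, you can make the setting persistent in your network configuration rather than via sysfs.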