IPv6 without NAT inside LXC container

Hi all, I’m struggling to get direct (non-NAT) IPv6 working inside LXC with veth.

The summary: IPv6 on the host works fine, and IPv6 in the container works fine if I change the adapter to macvlan, but not with veth. NDP seems to be OK, so it may be a routing or sysctl issue?

Here’s more detail about the setup.

Host networking
Configured per Ryan Young’s “How to Assign IPv6 Addresses to LXD Containers on a VPS”, so it looks like this:

$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3C:A8:DB:1B
          inet addr:1.2.3.4  Bcast:0.0.0.0  Mask:255.255.255.192
          inet6 addr: fe80::216:3cff:fea8:db1b/64 Scope:Link
          inet6 addr: 2a00:1234:1:5678::abcd/128 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:437999 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24782 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36212339 (34.5 MiB)  TX bytes:7944323 (7.5 MiB)
$ ip -6 route show
2a00:1234:1::1 dev eth0 metric 1024 onlink pref medium
2a00:1234:1:5678::abcd dev eth0 proto kernel metric 256 pref medium
2a00:1234:1:5678::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev vethW2ZiF8 proto kernel metric 256 pref medium
default via 2a00:1234:1::1 dev eth0 metric 1024 pref medium
$ ping ipv6.google.com -c 3
PING ipv6.google.com (2a00:1450:400d:804::200e): 56 data bytes
64 bytes from 2a00:1450:400d:804::200e: seq=0 ttl=116 time=153.407 ms
64 bytes from 2a00:1450:400d:804::200e: seq=1 ttl=116 time=30.142 ms
64 bytes from 2a00:1450:400d:804::200e: seq=2 ttl=116 time=30.115 ms

--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 30.115/71.221/153.407 ms

LXC with macvlan
Seems to be fine.

$ ip link add mvlan0 link eth0 type macvlan mode bridge
$ ifconfig mvlan0 up
$ cat /var/lib/lxc/v6test/config
...
lxc.net.2.type = macvlan
lxc.net.2.macvlan.mode = bridge
lxc.net.2.link = mvlan0
lxc.net.2.flags = up
lxc.net.2.hwaddr = 00:16:3e:2f:80:a8
$ lxc-start -n v6test
$ lxc-attach -n v6test
$ ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:16:3E:2F:80:A8
          inet6 addr: 2a00:1234:1:5678:216:3eff:feef:88aa/64 Scope:Global
          inet6 addr: fe80::216:3eff:feef:88aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:382 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23541 (22.9 KiB)  TX bytes:752 (752.0 B)
$ ip -6 route show
2a00:1234:1::1 dev eth2 metric 1024 pref medium
2a00:1234:1:5678::/64 dev eth2 proto kernel metric 256 pref medium
fe80::/64 dev eth2 proto kernel metric 256 pref medium
default via 2a00:1234:1::1 dev eth2 metric 1024 pref medium
$ ping ipv6.google.com -c 3
PING ipv6.google.com (2a00:1450:4017:80d::200e): 56 data bytes
64 bytes from 2a00:1450:4017:80d::200e: seq=0 ttl=118 time=0.656 ms
64 bytes from 2a00:1450:4017:80d::200e: seq=1 ttl=118 time=0.633 ms
64 bytes from 2a00:1450:4017:80d::200e: seq=2 ttl=118 time=0.746 ms

--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.633/0.678/0.746 ms

LXC with veth
Here’s where the problem comes in. Even though they are for LXD, I generally tried to follow these resources: “Can’t get IPv6 Netmask 64 to work (no NAT, should be end to end)”, “Getting universally routable IPv6 Addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS”, and Ryan Young’s “How to Assign IPv6 Addresses to LXD Containers on a VPS”.

I’ve assigned the /64 to lxcbr0 (see above for the routes):

$ ifconfig lxcbr0
lxcbr0    Link encap:Ethernet  HWaddr 00:16:3E:A7:17:58
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fea7:1758/64 Scope:Link
          inet6 addr: 2a00:1234:1:5678::/64 Scope:Global
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:567 errors:0 dropped:0 overruns:0 frame:0
          TX packets:604 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:42757 (41.7 KiB)  TX bytes:997835 (974.4 KiB)

For the LXC container:

$ cat /var/lib/lxc/v6test/config
...
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:2f:80:a7
$ cat /etc/default/lxc
...
LXC_IPV6_ADDR="2a00:1234:1:5678::"
LXC_IPV6_MASK="64"
LXC_IPV6_NETWORK="${LXC_IPV6_ADDR}/${LXC_IPV6_MASK}"
$ lxc-attach -n v6test
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:2F:80:A7
          inet addr:10.0.3.122  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: 2a00:1234:1:5678:216:3eff:feef:80aa/64 Scope:Global
          inet6 addr: fe80::216:3eff:feef:80aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1932 (1.8 KiB)  TX bytes:1951 (1.9 KiB)
$ ip -6 route show
2a00:1234:1:5678::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::216:3eff:fea7:1758 dev eth0 proto ra metric 1024 expires 1739sec hoplimit 64 pref medium

Note that it seems to pull an address from dnsmasq perfectly OK. But:

$ ping ipv6.google.com -c 3
PING ipv6.google.com (2a00:1450:4001:831::200e): 56 data bytes

--- ipv6.google.com ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

Ping in the other direction (to the container) doesn’t seem to work either. I’ve set an NDP proxy entry on the host, and it seems to be functional:

$ ip -6 neigh show proxy
2a00:1234:1:5678:216:3eff:feef:80aa dev eth0  proxy
$ tcpdump ip6 | grep 80aa
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
09:25:38.211704 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff2f:80aa: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
09:25:38.795718 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
09:25:39.326321 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff2f:80aa: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
09:25:39.915634 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
09:25:40.294708 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff2f:80aa: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
09:25:40.535611 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678:216:3eff:feef:80aa, length 32
^C86 packets captured
90 packets received by filter
0 packets dropped by kernel
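For reference, that proxy entry was added with the kernel’s ND proxy facility; roughly like this (the address is the container’s global address from above):

```shell
# Let the host answer NDP solicitations for the container's address on eth0
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2a00:1234:1:5678:216:3eff:feef:80aa dev eth0
```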

Firewall shouldn’t be an issue, since I’ve just set ip6tables -P INPUT ACCEPT ; ip6tables -P FORWARD ACCEPT.

It could be a sysctl setting, but I think I’ve covered the obvious ones:

net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.eth0.forwarding = 1
net.ipv6.conf.lxcbr0.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.eth0.proxy_ndp = 1
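For completeness, this is roughly how I apply and persist them (the sysctl.d file name is just my own choice):

```shell
# Apply immediately
sysctl -w net.ipv6.conf.all.forwarding=1 \
          net.ipv6.conf.default.forwarding=1 \
          net.ipv6.conf.eth0.forwarding=1 \
          net.ipv6.conf.lxcbr0.forwarding=1 \
          net.ipv6.conf.all.proxy_ndp=1 \
          net.ipv6.conf.eth0.proxy_ndp=1

# Persist across reboots (file name is arbitrary)
cat > /etc/sysctl.d/90-lxc-ipv6.conf <<'EOF'
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.eth0.forwarding = 1
net.ipv6.conf.lxcbr0.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.eth0.proxy_ndp = 1
EOF
sysctl --system
```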

Do you have any ideas? NDP? Routing? Sysctl? Thoughts appreciated!

Please can you show ip a and ip r on the host and inside the container?

Hi @tomp, thanks for the response. Here’s the output, with IPv4 addresses changed to protect the guilty.

Host

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3c:a8:db:1b brd ff:ff:ff:ff:ff:ff
    inet 1.100.150.200/26 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678::abcd/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3cff:fea8:db1b/64 scope link
       valid_lft forever preferred_lft forever
21: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a7:17:58 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678::/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fea7:1758/64 scope link
       valid_lft forever preferred_lft forever
36: vethdKKIRd@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP group default qlen 1000
    link/ether fe:44:ed:a0:ac:85 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc44:edff:fea0:ac85/64 scope link
       valid_lft forever preferred_lft forever
# ip r
default via 1.100.150.193 dev eth0 metric 1
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1
1.100.150.192/26 dev eth0 proto kernel scope link src 1.100.150.200

LXC container

v6test:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:2f:80:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.122/24 brd 10.0.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678:216:3eff:feef:80aa/64 scope global dynamic mngtmpaddr
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feef:80aa/64 scope link
       valid_lft forever preferred_lft forever
v6test:/# ip r
default via 10.0.3.1 dev eth0
10.0.3.0/24 dev eth0 proto kernel scope link src 10.0.3.122

And ip -6 r on both please.

I’m not sure if it’s the cause of the issue, but having 2a00:1234:1:5678::/64 as the lxcbr0 address seems unusual. Can you set it to 2a00:1234:1:5678::2/64 or 2a00:1234:1:5678::FFFF/64?

Host

# ip -6 r
2a00:1234:1::1 dev eth0 metric 1024 onlink pref medium
2a00:1234:1:5678::abcd dev eth0 proto kernel metric 256 pref medium
2a00:1234:1:5678::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev vethdKKIRd proto kernel metric 256 pref medium
default via 2a00:1234:1::1 dev eth0 metric 1024 pref medium

LXC container

v6test:/# ip -6 r
2a00:1234:1:5678::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::216:3eff:fea7:1758 dev eth0 proto ra metric 1024 expires 1631sec hoplimit 64 pref medium

So to confirm we are on the same page:

  1. You want to share a single IPv6 /64 subnet with the host and some containers.
  2. You want the containers to talk with the host (hence not using macvlan).
  3. You don’t want to attach eth0 on the host to lxcbr0 because you have a different IPv4 subnet on eth0 vs lxcbr0.
  4. I’m assuming (and you understand) that your upstream does not route the /64 directly to your host’s eth0 interface, and instead relies on NDP advertisements to direct IPv6 traffic to your host. That is why each container’s IP will need to be advertised to the upstream network using proxy NDP entries on the host’s eth0 interface.

Assuming all of that is correct, then I would suggest the following IPv6 addressing schema:

I wouldn’t use 2a00:1234:1:5678::/64 as the IP address of lxcbr0 as I’ve seen some issues with that approach in the past.

Host:

eth0: 2a00:1234:1:5678::1/128
lxcbr0: 2a00:1234:1:5678::FFFF/64
Default gateway: 2a00:1234:1::1

Containers:

eth0: 2a00:1234:1:5678::NNNN/64
Default gateway: 2a00:1234:1:5678::FFFF

That will set up your host as a router, with distinct addresses on each interface.
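As a rough sketch with plain ip commands (you’d normally put this in your distro’s network config instead; NNNN is a placeholder for each container’s chosen suffix):

```shell
# Host
ip -6 addr add 2a00:1234:1:5678::1/128 dev eth0
ip -6 addr add 2a00:1234:1:5678::ffff/64 dev lxcbr0
ip -6 route add default via 2a00:1234:1::1 dev eth0 onlink

# Inside each container
ip -6 addr add 2a00:1234:1:5678::NNNN/64 dev eth0
ip -6 route add default via 2a00:1234:1:5678::ffff dev eth0
```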

Hi @tomp, the assumptions are mostly correct:

  1. Correct.
  2. Mostly correct, although the reason is also that I want to keep NAT’ed IPv4 and prefer to use the existing LXC dnsmasq to get an IPv6 address in the container. Neither of these is super critical.
  3. Correct.
  4. Correct.

I’ve revised the addressing to follow your recommendation (and rebooted), but still no connectivity in the container, unfortunately. Here’s the revised output:

Host

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3c:a8:db:1b brd ff:ff:ff:ff:ff:ff
    inet 1.100.150.200/26 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678::1/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3cff:fea8:db1b/64 scope link
       valid_lft forever preferred_lft forever
4: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a7:17:58 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678::ffff/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fea7:1758/64 scope link
       valid_lft forever preferred_lft forever
9: vethnSzFhW@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxcbr0 state UP group default qlen 1000
    link/ether fe:32:01:35:54:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc32:1ff:fe35:54fa/64 scope link
       valid_lft forever preferred_lft forever
# ip r
default via 1.100.150.193 dev eth0 metric 1
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1
1.100.150.192/26 dev eth0 proto kernel scope link src 1.100.150.200
# ip -6 r
2a00:1234:1::1 dev eth0 metric 1024 onlink pref medium
2a00:1234:1:5678::1 dev eth0 proto kernel metric 256 pref medium
2a00:1234:1:5678::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev vethnSzFhW proto kernel metric 256 pref medium
default via 2a00:1234:1::1 dev eth0 metric 1024 pref medium

LXC container

v6test:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:2f:80:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.122/24 brd 10.0.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a00:1234:1:5678::1234/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe2f:80a7/64 scope link
       valid_lft forever preferred_lft forever
v6test:/# ip r
default via 10.0.3.1 dev eth0
10.0.3.0/24 dev eth0 proto kernel scope link src 10.0.3.122
v6test:/# ip -6 r
2a00:1234:1:5678::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via 2a00:1234:1:5678::ffff dev eth0 metric 1 pref medium

I adjusted the NDP proxy on the host accordingly:

# ip -6 neigh show proxy
2a00:1234:1:5678::1234 dev eth0  proxy

OK good thanks.

Can you ping 2a00:1234:1:5678::ffff from outside of the host and from inside the container?
Can you ping 2a00:1234:1:5678::1 from the container?

If you run a sudo tcpdump -l -nn -i eth0 ip6 inside the container and then ping it from externally can you see packets for 2a00:1234:1:5678::1234 from the external source arriving at the container’s interface?

That was interesting:

  • Ping 2a00:1234:1:5678::ffff from inside the container works
  • Ping 2a00:1234:1:5678::ffff from the internet does not work
  • Ping 2a00:1234:1:5678::1 from the container does not work
  • No sign of any packets arriving in the tcpdump

OK, so Linux should respond to NDP solicitations by default for locally bound addresses, even if the NDP request comes in on a different interface than the one the address is configured on (at least that is how it works with IPv4 ARP).

Can you try adding proxy entries for 2a00:1234:1:5678::ffff on the host’s eth0 interface and 2a00:1234:1:5678::1 on the host’s lxcbr0 interface?
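That is, something like:

```shell
ip -6 neigh add proxy 2a00:1234:1:5678::ffff dev eth0
ip -6 neigh add proxy 2a00:1234:1:5678::1 dev lxcbr0
```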

That fixed pinging 2a00:1234:1:5678::1 from the container. Still nothing on the others. So, to recap:

  • Ping 2a00:1234:1:5678::ffff from inside the container works
  • Ping 2a00:1234:1:5678::1 from inside the container works
  • Ping 2a00:1234:1:5678::ffff from the internet does not work
  • No sign of any packets arriving in the tcpdump (running inside the container)

My NDP proxies:

# ip -6 neigh show proxy
2a00:1234:1:5678::ffff dev eth0  proxy
2a00:1234:1:5678::1 dev lxcbr0  proxy
2a00:1234:1:5678::1234 dev eth0  proxy

It is as though traffic is not making it to the host even though the NDP proxy is set. I did a tcpdump on the host, and when I ping 2a00:1234:1:5678::ffff from outside, it receives a neighbor solicitation.

And do you see your host respond to it?

Can you show the trace?

I believe so. Here’s the dump; I’ve removed some “noise”.

# tcpdump -l -nn -i eth0 ip6
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
11:45:57.266226 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff00:ffff: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678::ffff, length 32
11:45:57.746031 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678::ffff, length 32
11:45:58.394241 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff00:ffff: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678::ffff, length 32
11:45:58.915962 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678::ffff, length 32
11:45:59.309864 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff00:ffff: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678::ffff, length 32
11:45:59.376002 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678::ffff, length 32
11:46:01.322946 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff00:ffff: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678::ffff, length 32
11:46:01.575984 IP6 fe80::216:3cff:fea8:db1b > fe80::3e61:40b:b8a4:1f7c: ICMP6, neighbor advertisement, tgt is 2a00:1234:1:5678::ffff, length 32
11:46:02.337030 IP6 fe80::3e61:40b:b8a4:1f7c > ff02::1:ff00:ffff: ICMP6, neighbor solicitation, who has 2a00:1234:1:5678::ffff, length 32
^C
85 packets captured
85 packets received by filter
0 packets dropped by kernel

Yes, that looks right. It suggests something upstream is ignoring it, though.

Yes. Maybe it isn’t an LXC problem after all but a pure IPv6 issue? Interesting that everything works fine when I use macvlan, so it seems related to NDP proxying.

Perhaps your upstream only allows specific IPs per MAC address, since in this setup all IPs will appear to be using the host’s MAC address.

You could try adding an IP alias on the host’s eth0; if that’s not reachable, it’s likely an upstream issue.

Seems it is fixed. There were a couple of things going on. First, the NA response was coming from an fe80:: address, which was causing issues (see “IPv6 Neighbor Discovery Responder for KVM VPS”). Using the development version of ndppd fixes that.

However, I had tried ndppd before posting and it didn’t work, so I went back to the kernel ND proxy to simplify and remove “unnecessary” parts. Therefore, I think something else you suggested changing (maybe the addressing scheme) was also required. Thank you for whatever that thing was, and for your persistence!
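For reference, the ndppd config I’m using is roughly this (treat it as a sketch; check the ndppd docs for the exact rule options):

```
proxy eth0 {
    rule 2a00:1234:1:5678::/64 {
        iface lxcbr0
    }
}
```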

Hmm it still isn’t reliably solved. I do have a better explanation of what is happening though.

The route on the host:

$ ip -6 r
2a00:1234:1::1 dev eth0 metric 1024 pref medium
2a00:1234:1:5678::/64 dev lxcbr0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev lxcbr0 proto kernel metric 256 pref medium
default via 2a00:1234:1::1 dev eth0 metric 1024 pref medium

Now when there is traffic from the host, the neighbor looks like this:

$ ip -6 n
2a00:1234:1::1 dev eth0 lladdr 3c:61:04:a4:1f:7c router REACHABLE

However, if the only traffic is coming out of the LXC container (nothing from the host itself), the neighbor eventually goes stale and then fails. I can then kick-start it into working for a while again by running ndisc6 -s 2a00:1234:1:5678::1 2a00:1234:1::1 eth0 on the host:

$ ip -6 n ; ndisc6 -s 2a00:1234:1:5678::1 2a00:1234:1::1 eth0 ; ip -6 n
2a00:1234:1::1 dev eth0  router FAILED
Soliciting 2a00:1234:1::1 (2a00:1234:1::1) on eth0...
Target link-layer address: 3C:61:04:A4:1F:7C
 from 2a00:1234:1::1
2a00:1234:1::1 dev eth0 lladdr 3c:61:04:a4:1f:7c router REACHABLE

I guess this is something to do with the config of lxcbr0? Perhaps the bridge is sending the ND request from its link-local address rather than its global one?
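In the meantime I’m keeping it alive with a crude loop on the host (a workaround, not a fix):

```shell
# Re-solicit the upstream gateway every 60s so the eth0 neighbor entry
# doesn't go stale/FAILED when only container traffic is flowing
while true; do
    ndisc6 -s 2a00:1234:1:5678::1 2a00:1234:1::1 eth0 >/dev/null
    sleep 60
done
```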

What does ip -6 n inside the container show when this occurs?

Also what target are you trying to reach that isn’t reachable?

Can you show ip -6 r inside the container at the same time please.