How to make LXD give public IPv6 addresses to containers?

I posted previously on this. I got it working with Hetzner by changing the host's IPv6 to a single address as advised by @tomp, setting the IPv6 range on the bridge, and then switching the container to macvlan, after which the container received a public IPv6 address. Search this forum for Hetzner public IPv6 addresses; there are instructions there.
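For anyone following along, a minimal sketch of those two approaches, assuming a routed /64 (2001:db8:1234::/64 here is a placeholder), a host interface eth0, and a container named c1; substitute your own names and prefix:

# Give the bridge the public range so LXD hands out addresses from it:
lxc network set lxdbr0 ipv6.address 2001:db8:1234::1/64

# Or attach the container directly to the external interface via macvlan:
lxc config device add c1 eth0 nic nictype=macvlan parent=eth0
lxc restart c1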

Here you go


Thought I should follow up: this has completely failed. Containers still get addresses as usual, but connections fail in both directions.

My Docker containers' IPv6 addresses also weren't working for inbound connections, and outbound traffic apparently just went out via the host's IPv6.

I thought it might have been related to mailcow's update, which uses Docker IPv6 with the experimental and ip6tables options in daemon.json, so I disabled all of mailcow's IPv6, deleted Docker's daemon.json and restarted it, and even flushed all ip6tables rules, but LXC IPv6 still doesn't work.

So I have no idea what's wrong and have just given up on IPv6, as in my experience it is only marginally helpful yet constantly has mysterious problems. Even the home internet service provider I'm using now has issues with IPv6.

Probably best to post your configuration here to see what is wrong.

OK, it turns out that for whatever reason the ip6tables FORWARD chain policy (in the filter table) was set to DROP :man_facepalming:

Thanks for coming back to let people know.

How did you fix it?

systemctl edit docker

[Service]
# The leading "-" makes systemd ignore a non-zero exit from these commands.
ExecStartPost=-ip6tables -P FORWARD ACCEPT
ExecStartPost=-iptables -P FORWARD ACCEPT
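If it helps anyone verify, after restarting Docker the policy change can be checked with standard commands:

systemctl restart docker
# Should print the policy first: -P FORWARD ACCEPT
ip6tables -S FORWARD | head -1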

Docker's ip6tables implementation is incomplete; it does not add the DOCKER-USER chain: DOCKER-USER chain is not created with ip6tables enabled · Issue #1248 · docker/for-linux · GitHub

However, I've recently made all the Docker IPv6 networks use a global /80 IPv6 prefix, manually added firewall rules to protect the containers (below), and disabled Docker's ip6tables option so it won't interfere. Now it's clean and simple, with no NAT, the way IPv6 was intended. A sketch of the matching Docker config follows the rules.

-A FORWARD -d 2605:a140:2045:1635:d::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:d::/80 -i eth0 -j DROP
-A FORWARD -d 2605:a140:2045:1635:a::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:a::/80 -i eth0 -j DROP
-A FORWARD -d 2605:a140:2045:1635:3::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:3::/80 -i eth0 -j DROP
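For completeness, a sketch of the Docker side of the same setup, using the first /80 from the rules above; the network name net-a is just an example, and the ip6tables/experimental keys mentioned earlier are simply left out of daemon.json:

/etc/docker/daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "2605:a140:2045:1635:d::/80"
}

Extra networks each get their own /80 at creation time:

docker network create --ipv6 --subnet 2605:a140:2045:1635:a::/80 net-a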

So if there's no NDP proxying, then how is it working… I can tcpdump lxdbr0 and see the NDP solicitations and advertisements for container IPs. Maybe because in the route table the IP block is routed to that interface?
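For reference, these standard commands show both things (the ip6[40] byte match assumes no IPv6 extension headers, which is typical for NDP):

# Watch neighbour solicitations (type 135) and advertisements (type 136)
tcpdump -i lxdbr0 -n 'icmp6 and (ip6[40] = 135 or ip6[40] = 136)'

# Check whether the block is routed to the bridge
ip -6 route show dev lxdbr0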

I wonder what the prefix length even does on a link prefix? What difference does it make if the VPS's interface has a /64 vs a /128?

Some ISPs will route the public IPv6 subnet to the LXD host’s external interface without it advertising specific IPs via NDP, in which case the LXD host will then be able to route the entire subnet into lxdbr0 without proxy NDP.
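If the provider does route the block that way, a minimal sketch of the routed, NAT-free bridge setup, with a placeholder prefix (ipv6.nat is a standard lxdbr0 option):

# Use the block your provider routes to the host, not this placeholder.
lxc network set lxdbr0 ipv6.address 2001:db8:1234:5678::1/64
lxc network set lxdbr0 ipv6.nat false

# The host must forward IPv6:
sysctl -w net.ipv6.conf.all.forwarding=1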