How to make LXD give public IPv6 addresses to containers?

It’s also worth noting that the IPv6 address assignment RFC (RFC 6177) recommends an allocation of /56 to end users, or /48 for larger sites.

Providers that give you a single /64 (not even routed separately from the /64 used by the physical host) are not following the recommended IPv6 subnet allocation for customers.

You can also normally reconfigure your host’s external interface to keep a single IP from that /64 block, but configure it as a /128 address. You can then use the rest of the /64 on the lxdbr0 network with stateless SLAAC, rather than needing to use ipv6.dhcp.stateful=true.
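A rough sketch of that setup, assuming a hypothetical provider-assigned 2001:db8:1:2::/64 and an external interface named eth0 (substitute your own values):

# Host keeps one address from the /64, configured as a /128
# (netplan fragment, hypothetical file /etc/netplan/01-netcfg.yaml):
#   ethernets:
#     eth0:
#       addresses:
#         - "2001:db8:1:2::2/128"
#       routes:
#         - to: "::/0"
#           via: "fe80::1"    # Hetzner's usual IPv6 gateway
# Hand the rest of the /64 to lxdbr0 and let SLAAC advertise it:
lxc network set lxdbr0 ipv6.address=2001:db8:1:2::1/64
lxc network set lxdbr0 ipv6.nat=false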

This has been done a couple of times on this forum with Hetzner, as they route the /64 subnet directly to the host without needing an NDP response from the host for each IP.


Does Docker do NDP proxying, BTW?

I’ve made a couple of corrections to the post marked as solution to avoid misunderstandings in the future.

Oh interesting, that is a good idea. But then the subnet can’t be used for anything else I guess.

Stateful DHCP works out of the box with Ubuntu, Fedora and openSUSE images 😄! But not with Debian or Rocky Linux images.

So if you’re using one of those images (and you can’t use a /64) then it boils down to this and nothing more*:

lxc network set lxdbr0 ipv6.address=<a subnet within your subnet> ipv6.nat=false ipv6.dhcp.stateful=true

*NDP thing is still a mystery though.
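For example, with a hypothetical provider-assigned 2001:db8:1:2::/64, carving a /112 out of it for the bridge could look like this (stateful DHCPv6 is needed because SLAAC only works on a /64):

lxc network set lxdbr0 ipv6.address=2001:db8:1:2::aaaa:1/112 ipv6.nat=false ipv6.dhcp.stateful=true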

The /64 subnet with Hetzner is only routed to each individual server, and thus can only be used for that server anyway.

This is where it really helps to have an ISP/host that is following IPv6 deployment recommendations and providing an option to route additional subnet(s) to your host machine. Your host can then act as a router and allocate the additional subnets to LXD’s network. That way there’s no need for messing about with proxy NDP (this is where you tell the host to artificially reply to NDP enquiries for a particular IP on a particular external interface that is not actually bound on the LXD host).

Proxy NDP can be used to ‘advertise’ specific IPs that LXD is using on its internal network to the wider external network when the ISP only provides a single /64 subnet at layer 2 to the host. This allows traffic for those IPs to arrive at the LXD host’s external interface, at which point, if the LXD host has the correct local routes, it will route the packets into the LXD network as normal.
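A minimal sketch of doing that by hand for one container IP (hypothetical addresses; eth0 is the external interface):

# Allow the kernel to answer NDP solicitations on behalf of other hosts:
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
# Answer NDP for this specific container IP on the external interface:
ip -6 neigh add proxy 2001:db8:1:2::100 dev eth0
# Route the address into the LXD bridge (if it isn't already covered
# by the bridge's connected route):
ip -6 route add 2001:db8:1:2::100/128 dev lxdbr0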

We provide a routed NIC type that allows this to occur. However, because Linux’s proxy NDP support only allows single IPs to be advertised, rather than an entire subnet, the routed NIC doesn’t support SLAAC/DHCPv6, and IPs must be statically allocated in LXD and configured inside the container.
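For example, attaching a routed NIC to a container might look like this (assuming a container named c1 and a hypothetical static address):

lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv6.address=2001:db8:1:2::100

LXD then takes care of the proxy NDP entry on the parent interface and the host-side route for that address.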


The other alternative is to run a separate daemon, something like ndppd, which can be configured to artificially respond to all NDP queries for a particular subnet:

https://manpages.ubuntu.com/manpages/focal/man1/ndppd.1.html
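A minimal ndppd.conf sketch (hypothetical prefix; eth0 facing the ISP, lxdbr0 holding the subnet):

proxy eth0 {
    rule 2001:db8:1:2::/64 {
        # Answer NDP for any address in this prefix that is
        # reachable via the LXD bridge:
        iface lxdbr0
    }
}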

I meant using subnets within that subnet for other things on the host, like Docker.

I suppose if Docker uses a single IP per container and adds a static route on the host, then you could also use proxy NDP on lxdbr0 to advertise those additional single IPs there. It’s a bit of a faff though; having multiple /64 subnets would be a lot easier.

I posted previously on this. I got this working with Hetzner by changing the IPv6 to a single address as advised by @tomp, setting the IPv6 range on the bridge, and then changing the container to macvlan, and the container received a public IPv6 address. Search this forum for Hetzner public IPv6 addresses; it has instructions.

Here you go


Thought I should follow up that this has completely failed. Containers still get addresses as usual, but no connections work in either direction.

My Docker containers’ IPv6 addresses also weren’t working for inbound connections, and outbound traffic apparently just went through the host’s IPv6.

I thought it might have been related to mailcow’s update, which uses Docker IPv6 with the experimental and ip6tables options in daemon.json, so I disabled all of mailcow’s IPv6, deleted Docker’s daemon.json and restarted it, and even flushed all ip6tables rules, but LXC IPv6 still doesn’t work.

So I have no idea what’s wrong and have just given up on IPv6, as in my experience it is only marginally helpful yet constantly has mysterious problems. Even the home internet service provider I’m using now has issues with IPv6.

Probably best to post your configuration here to see what is wrong.

OK, turns out that for whatever reason the ip6tables FORWARD chain policy was set to DROP 🤦

Thanks for coming back to let people know.

How did you fix it?

systemctl edit docker

[Service]
ExecStartPost=-ip6tables -P FORWARD ACCEPT
ExecStartPost=-iptables -P FORWARD ACCEPT
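(The leading - on the ExecStartPost= lines tells systemd to ignore a non-zero exit status from those commands. After saving the drop-in, restart Docker with systemctl restart docker so they take effect.)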

Docker’s ip6tables implementation is incomplete; it does not add the DOCKER-USER chain: DOCKER-USER chain is not created with ip6tables enabled · Issue #1248 · docker/for-linux · GitHub

However, recently I’ve made all the Docker IPv6 networks use a global /80 IPv6 prefix, manually added firewall rules to protect the containers, and disabled Docker’s ip6tables option so it won’t mess with them. Now it’s clean and simple, with no NAT, as IPv6 was intended:

-A FORWARD -d 2605:a140:2045:1635:d::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:d::/80 -i eth0 -j DROP
-A FORWARD -d 2605:a140:2045:1635:a::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:a::/80 -i eth0 -j DROP
-A FORWARD -d 2605:a140:2045:1635:3::/80 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 2605:a140:2045:1635:3::/80 -i eth0 -j DROP
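For reference, the Docker side of that might look something like this (hypothetical network name; the subnet matches the rules above):

# /etc/docker/daemon.json: leave Docker's ip6tables option disabled so it
# won't manage IPv6 firewall rules itself:
#   { "ip6tables": false }
# Create a user-defined network with a global /80 carved from the host's prefix:
docker network create --ipv6 --subnet=2605:a140:2045:1635:d::/80 mail_net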

So if there’s no NDP proxying, then how is it working? I can tcpdump lxdbr0 and see the NDP solicitations and advertisements for container IPs. Maybe because in the route table the IP block is routed to that interface?

I wonder what the prefix length even does on a link prefix? What difference does it make if the VPS’s interface has a /64 vs a /128?

Some ISPs will route the public IPv6 subnet to the LXD host’s external interface without it having to advertise specific IPs via NDP, in which case the LXD host will be able to route the entire subnet into lxdbr0 without proxy NDP.
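If you want to check which case applies, one option is to watch for neighbour solicitations (ICMPv6 type 135) on each interface; with a routed subnet, individual container addresses shouldn’t be solicited on the external interface (interface names here are assumptions):

# Neighbour solicitations on the external interface:
tcpdump -ni eth0 'icmp6 && ip6[40] == 135'
# The solicitations seen on lxdbr0 are just normal on-link resolution
# between the bridge and its containers:
tcpdump -ni lxdbr0 'icmp6 && ip6[40] == 135'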