I want to use SLAAC to network my containers with IPv6, but the addresses and routes keep expiring and not getting renewed. This is my network configuration:
And here’s the view from one of my containers. At first, all is well,
# ip -6 addr
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2602:ff75:7:373c:216:3eff:feb6:747e/64 scope global mngtmpaddr dynamic
       valid_lft 3472sec preferred_lft 3472sec
    inet6 fe80::216:3eff:feb6:747e/64 scope link
       valid_lft forever preferred_lft forever
# ip -6 ro
2602:ff75:7:373c::/64 dev eth0 proto kernel metric 256 expires 3356sec pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::d057:eeff:fea6:bfc9 dev eth0 proto ra metric 1024 expires 1556sec hoplimit 64 pref medium
But after those timers count down to 0, the address and the default route disappear, leaving the container without connectivity. It’s my understanding that dnsmasq should be sending out router advertisements to periodically renew them…
I’m running on an Ubuntu 18.04 host, and I’m seeing this with both the standard Ubuntu 18.04 and Debian 9 images. Any ideas? Thanks in advance.
So I’ve re-created the behaviour in my LXD environment, in that I didn’t see any router advertisements being periodically sent without adding something like ra-param=lxdbr0,high,120,3600 to the dnsmasq command line arguments.
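If you want to try the same thing on an LXD-managed bridge, one way (a sketch, using my bridge name lxdbr0 and the same test values, which aren’t a recommendation) is to append the option via the bridge’s raw.dnsmasq key rather than editing the command line directly:

# append extra dnsmasq configuration to the lxdbr0 bridge
lxc network set lxdbr0 raw.dnsmasq "ra-param=lxdbr0,high,120,3600"
# confirm the key was set
lxc network get lxdbr0 raw.dnsmasq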
Can you run this inside your container for 10 minutes: tcpdump -i eth0 icmp6 -nn -l, and check whether your container is getting any router advertisements?
What’s interesting is that although I couldn’t see any advertisements after a few minutes, I also didn’t see the “expires” output in my ip -6 ro command inside the container, suggesting the route I got wasn’t going to expire.
I’m not sure what the default dnsmasq advertisement frequency is, but it certainly is more than 5 minutes.
Background: I’m running LXD on a VPS, and these providers do really strange things with their IPv6 configs. Typically, they will “assign” you a /64 prefix (or part of it), but instead of routing said prefix to your VM, they’ll place all of your addresses on a single /48 subnet, with the gateway located at the beginning of that prefix. In concrete terms, my VPS is assigned the address range from 2602:ff75:7:373c::0 to 2602:ff75:7:373c:ffff:ffff:ffff:ffff, and my gateway is located at 2602:ff75:7::1.
My goal is to give each LXD container a unique IPv6 address from this range. So, I assigned the prefix 2602:ff75:7:373c::1/64 to lxdbr0, and used ndppd to proxy the Neighbor Advertisements necessary to route traffic between the WAN interface (ens3) and the containers (lxdbr0).
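For reference, the relevant part of my /etc/ndppd.conf looks roughly like this (interface name and prefix are from my setup):

# answer Neighbor Solicitations arriving on ens3 for addresses in the container prefix
proxy ens3 {
    rule 2602:ff75:7:373c::/64 {
        static
    }
}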
With this setup, my routing table resembled this:
2602:ff75:7:373c:: dev ens3
2602:ff75:7:373c::/64 dev lxdbr0
2602:ff75:7::/48 dev ens3
default via 2602:ff75:7::1 dev ens3
As you can see, I had overlapping prefixes on ens3 and lxdbr0. I suspect this somehow confused dnsmasq and prevented it from periodically refreshing Router Advertisements.
The fix was to remove the /48 prefix from ens3 and use on-link addressing. (Side note: this requires some scripting, because neither netplan nor ifupdown can configure this natively; a rough sketch of the commands follows the table below.) This is my new routing table:
2602:ff75:7:373c:: dev ens3
2602:ff75:7::1 dev ens3
2602:ff75:7:373c::/64 dev lxdbr0
default via 2602:ff75:7::1 dev ens3
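For anyone wanting to do the same, the script boils down to something like this (a sketch using the addresses from my setup; your exact ip -6 ro output may differ slightly):

# assign the WAN address with a /128 so the kernel doesn't install the broad /48 prefix route
ip -6 addr add 2602:ff75:7:373c::/128 dev ens3
# declare the provider gateway reachable on-link, even though it sits outside any local prefix
ip -6 route add 2602:ff75:7::1/128 dev ens3
# then point the default route at it
ip -6 route add default via 2602:ff75:7::1 dev ens3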
And now my containers are receiving RAs and maintaining connectivity indefinitely, as intended. As a bonus, my routes no longer show expiry timers either.
Hi @yoryan, glad to hear you figured it out. Yes, the route expiry timers were strange, as I wasn’t seeing them on my setup.
Your setup is quite common: running LXD on a VPS with multiple IPs either routed to the host or in the same layer 2 domain as the host, with the desire to get the public IPs directly into the containers.
If you don’t need your containers to access services running on the host VPS (and vice versa), then you could also look into using the ipvlan NIC type (https://lxd.readthedocs.io/en/latest/containers/#nictype-ipvlan), which lets you statically define one or more IPs from the host’s subnet inside the container. Under the hood it also uses proxy NDP, so it would do away with the need for ndppd too. This assumes you’re just trying to give each container one or more static IPs.
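For example, something like this (the container name c1 and the address are just placeholders):

# give container c1 an ipvlan NIC on the host's WAN interface with a static IPv6 address
lxc config device add c1 eth0 nic nictype=ipvlan parent=ens3 ipv6.address=2602:ff75:7:373c::100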
The downside of IPVLAN is that containers and the host cannot communicate with each other (this is by design of the IPVLAN module). However, there are ongoing discussions about adding a “routed” mode that would use a veth pair between container and host, but do away with the bridge and use proxy NDP and proxy ARP settings to allow the container to have an IP from the parent LAN.
This would allow containers to talk to the host, and containers to have public IPs.