Cannot keep Alpine container from acquiring extra IPv6 address

Somewhat as a follow-up to my prior post, I’m having trouble keeping an Alpine container from acquiring a second, automatically generated local IPv6 address like the one marked below:

eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 00:16:3e:00:ac:0e brd ff:ff:ff:ff:ff:ff
inet 10.100.0.101/24 scope global eth0
  valid_lft forever preferred_lft forever
inet6 fd42:abbe:1234:5678::101/64 scope global 
  valid_lft forever preferred_lft forever
inet6 fd42:abbe:1234:5678:216:3eff:fe00:ac0e/64 scope global dynamic # ← THIS
  valid_lft 2652sec preferred_lft 2652sec
inet6 fe80::216:3eff:fe00:ac0e/64 scope link 

My LXD bridge config:

config:
  ipv4.address: 10.100.0.1/24
  ipv4.firewall: "false"
  ipv4.nat: "false"
  ipv6.address: fd42:abbe:1234:5678::1/64
  ipv6.firewall: "false"
  ipv6.nat: "false"
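
For reference, that is a trimmed version of what something like the following prints:

lxc network show lxdbr0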

With Ubuntu containers, the following netplan config works:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false       # These seem
      accept-ra: false   # to suffice
      addresses:
        - 10.100.0.81/24
        - "fd42:abbe:1234:5678::81/64"
      gateway4: 10.100.0.1
      gateway6: "fd42:abbe:1234:5678::1"

With Alpine, I haven’t had any luck with the equivalent /etc/network/interfaces setup:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.100.0.101
    netmask 255.255.255.0
    gateway 10.100.0.1

iface eth0 inet6 static
    address fd42:abbe:1234:5678::101
    netmask 64
    gateway fd42:abbe:1234:5678::1
    pre-up /sbin/sysctl -w net.ipv6.conf.all.autoconf=0
    pre-up /sbin/sysctl -w net.ipv6.conf.eth0.autoconf=0
    privext 0
    autoconf 0

Note: I also tried accept_ra 0 and pre-up /sbin/sysctl -w net.ipv6.conf.(all/eth0).accept_ra=0, as well as setting all of these via /etc/sysctl.conf, but to no avail.
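
For completeness, the persistent variant looked roughly like this, whether in /etc/sysctl.conf or a drop-in under /etc/sysctl.d/ (the file name is arbitrary):

# e.g. /etc/sysctl.d/99-no-slaac.conf (example name)
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.eth0.autoconf = 0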

If I run rc-service networking restart inside the Alpine container, it gets rid of the superfluous IP, and I think that survives a container restart as well, but rebooting the host machine brings the address back.

Suggestions welcome.

It seems quite clear that something is giving your container an IPv6 address based on the LXD network prefix and the MAC address of your container, so I don’t see any other culprit than the dnsmasq instance managed by LXD.

Can you confirm that? Running something along the lines of

ps fauxww | grep dnsmasq

should get you easily to the LXD instance (if you have another dnsmasq running, the LXD one should have by far the most complex command line). Then take the --dhcp-leasefile parameter from that command line and dump the file; it should (probably) contain your address. I don’t use IPv6 for containers myself; maybe there is another file for IPv6? It doesn’t seem so from the man page.
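
The lease-file path varies by install; on a non-snap LXD it is typically something like the path below, but take the real one from the --dhcp-leasefile argument you see:

cat /var/lib/lxd/networks/lxdbr0/dnsmasq.leases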

You probably need to set a sysctl to stop your OS from generating an address from the router advertisements that LXD sends.

net.ipv6.conf.eth0.accept_ra = 0

If you don’t need any of your containers to dynamically generate default routes and addresses, then you could disable ipv6.dhcp on the LXD-managed bridge:

e.g.

lxc network set lxdbr0 ipv6.dhcp false
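
You can verify that it took effect with, for example:

lxc network get lxdbr0 ipv6.dhcp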

so I don’t see any other culprit than the dnsmasq instance managed by LXD

Definitely, that was one of the first things I looked at, but dnsmasq runs with flags like --enable-ra (which might be essential for IPv6 to function) no matter how I configure the bridge. I then decided that having the container somehow ignore router advertisements would have to be the workaround.

As a side note: I’ve read multiple opinions that RAs are crucial for robust IPv6 routing and should never be ignored, but I don’t know whether that’s relevant to manually configured containers like this.


You probably need to set a sysctl to stop your OS from generating an address from the router advertisements that LXD sends.

You mean the container OS, right? As mentioned in my post, I tried all of that, even via /etc/sysctl.conf and sysctl.d/…, before coming here as a last resort. What stood out was that rebooting the host caused this behavior, whereas container restarts seemed to respect the .eth0.accept_ra (and related) toggles.

If you don’t need any of your containers to dynamically generate default routes and addresses, then you could disable ipv6.dhcp on the LXD-managed bridge

I didn’t know I could do away with DHCP entirely. That seems to resolve the issue for me: no more spurious IPv6 assignments.


On a related note, does dnsmasq handle DNS in any way here? I used to be able to set the nameserver inside containers (whether in /etc/resolv.conf or /etc/systemd/resolved.conf) to the bridge IP. In my most recent setup I couldn’t get that to work from the get-go, so eventually I just pointed my containers at public DNS servers.

Yes. If you disable RA (I assume that disabling dhcp6 will remove --enable-ra from the dnsmasq command line), you will have to point your containers explicitly at the dnsmasq instance as their DNS server. It works pretty well; the only difficulty is using it to provide the host with the containers’ addresses. There are a gazillion how-tos and pieces of advice on how to achieve that, none of them working in all cases; search the @simos blog for that if you are interested, let me see:
it’s here
I dimly remember that among the impressive collection of knobs in IPv6 there is (maybe) an option for the gateway to provide network info to clients (DHCP-like) without actually handing out an address, but I can’t find the reference just now, so I’m not sure whether it really exists and, if so, whether it’s relevant in your case.
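
For a statically configured container like yours, that just means pointing resolv.conf at the bridge address, e.g. (10.100.0.1 being your bridge here):

# inside the container
echo "nameserver 10.100.0.1" > /etc/resolv.conf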

I didn’t know I could do away with DHCP entirely. That seems to resolve the issue for me: no more spurious IPv6 assignments.

I spoke too soon. Restarting the host still gives the Alpine container a second fd… IPv6 address.

I assume that disabling dhcp6 will remove --enable-ra from the dnsmasq command line

It does not. I suspect the developers left it in unconditionally because it’s crucial to IPv6 functionality in some way.

There are a gazillion how-tos and pieces of advice on how to achieve that, none of them working in all cases; search the @simos blog for that if you are interested, let me see (…)

Yes, I got that impression too. I’ve been very attached to the manual address assignment approach, maybe unreasonably so. Something like SLAAC is probably what a network admin would use; after all, the huge address space and simplified (auto)configuration are said to be key strengths of IPv6. What is your opinion on using “random” SLAAC addresses for containers?


As a test, I created another Alpine container, attached the bridge¹, and started it just to see what happens. It runs for a while without an address, then acquires an IPv6 address (no IPv4) just like the other container. It’s like a d(a)emon that cannot be slain :wink:

¹:

config:
  ipv4.address: 10.100.0.1/24
  ipv4.dhcp: "false"
  ipv4.firewall: "false"
  ipv4.nat: "false"
  ipv6.address: fd42:abbe:1234:5678::1/64
  ipv6.dhcp: "false"
  ipv6.firewall: "false"
  ipv6.nat: "false"

Edit: so many typos today, sorry.

Reading the dnsmasq man page, and looking at the command line generated in your case:

--enable-ra --dhcp-range ::,constructor:lxdbr0,ra-only

I’d say that either dnsmasq is wrongly setting the A (autonomous) bit in its announcements, or the Alpine network stack wrongly believes it is set. Your IPv6 address is NOT generated by dnsmasq; it is a SLAAC address. Trying an Ubuntu 18.04 container gives the same behaviour, so my money is on this being a dnsmasq bug, after a looong analysis of about 10 minutes (take it with a grain of salt).
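
If you want to double-check that yourself, you can watch the router advertisements from inside the container and see whether the autonomous-configuration flag is set on the advertised prefix, for instance:

# rdisc6 is in the ndisc6 package and prints the prefix flags of received RAs
rdisc6 eth0
# or capture RAs (ICMPv6 type 134) directly; the offset assumes no IPv6 extension headers
tcpdump -vvv -i eth0 'icmp6 and ip6[40] = 134'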

I think this solves my woes regarding unwanted IPv6 addresses as well. If the firewall ate important protocol negotiations, the containers probably fell back to an emergency address, thinking they had none. As to why none of the kernel parameters in the containers helped, and why rebooting the LXD host exposed the issue, I don’t know. But I think I’ll leave it at that for now and be glad things work well again.

Until next time :wink: