Can't get IPv6 Netmask 64 to work (no NAT, should be end to end)

See solution here: Getting universally routable IPv6 Addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS

I followed this tutorial:
https://youngryan.com/2019/02/how-to-assign-ipv6-addresses-to-lxd-containers-on-a-vps/
But it does not work: the /etc/network/interfaces content it describes does not work at all, and I have tried to fix it but can’t.
I wish there were an easy-to-follow, working tutorial for IPv6 with LXD with direct (end-to-end) connections.

lxdbr0 works fine with the supplied command sudo lxc network set lxdbr0 ipv6.address :::/64, but the host setup does not work.

Using Ubuntu 18.04 and LXD installed through snap.

Please describe a little about your external network setup, as there are various ways to achieve external IPv6 addressing with LXD.

Some questions:

  1. Are you looking to use statically assigned IPs or SLAAC/DHCPv6 from an external router?
  2. Does your network route an IPv6 prefix or specific IPs to your host’s external network interface (in addition to the address on the host itself), or have they just assigned a single shared prefix for the external interface?
  3. Does your network/ISP allow multiple MAC addresses per network port?

Thanks
Tom

I would like a publicly reachable IPv6 address from my /64 prefix to be assigned automatically on launch of a container.

I am sad to say that I have no idea how to answer your questions better than this.

And you have working IPv6 connectivity on your host now?

Can you show output of:

ip a

and

ip -6 r

Thanks
Tom

Also, will you be needing IPv4 connectivity as well? How will these IPv4 addresses be assigned? Do you have multiple public ones of those too, or are you going to want to do NAT?

I changed it to the settings specified there:
(IP addresses have been changed, of course)

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 1.2.3.4/24
        gateway 1.2.3.1
        netmask 255.255.255.255
iface eth0 inet6 static
       address 1111:aaaa:3004:9978:0000:0000:0000:0001
       netmask 64
       gateway fe80::1
       accept_ra 0
       autoconf 0
       privext 0

Then rebooted, but still no IPv6 on eth0:

ip a output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:50:56:3f:4e:37 brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/32 scope global eth0
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:79:fb:26:1c:37 brd ff:ff:ff:ff:ff:ff
    inet6 1111:aaaa:3004:9978::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::2c5b:9dff:fe37:215d/64 scope link 
       valid_lft forever preferred_lft forever
5: veth503d9cb1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 16:79:fb:26:1c:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: veth6390f4fa@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 4e:73:2a:68:2c:cf brd ff:ff:ff:ff:ff:ff link-netnsid 1

ip -6 r output:

1111:aaaa:3004:9978::/64 dev lxdbr0 proto kernel metric 256 pref medium
fe80::/64 dev lxdbr0 proto kernel metric 256 pref medium

Before that I had the following /etc/network/interfaces content:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 1.2.3.4/24
        gateway 1.2.3.1
        netmask 255.255.255.255
iface eth0 inet6 static
       address 1111:aaaa:3004:9978::1
       netmask 64
       gateway fe80::1
       up ip -6 address add 1111:aaaa:3004:9978::1/128 dev eth0
       up ip -6 route add fe80::1/128 onlink dev eth0
       up ip -6 route add default via fe80::1 dev eth0
       down ip -6 route del default via fe80::1 dev eth0
       down ip -6 route del fe80::1/128 onlink dev eth0
       down ip -6 address del 1111:aaaa:3004:9978::1/128 dev eth0

Nothing works; I am doing something wrong. Your help is very much appreciated!

Only IPv6: Cloudflare will do the magic for me, meaning I will not need any reverse proxying to host sites in the containers. At least that is the plan so far.
On the host server I do require IPv4 (the one address I have) to remain functional. The containers, as I said, are fine IPv6-only.

So first things first, you need to get IPv6 connectivity working on the host before trying to get it working on LXD.

The fact that your lxdbr0 interface is sharing an IP on the same subnet as your external interface looks suspect (1111:aaaa:3004:9978::1/64), so I would first change the IPv6 prefix on the lxdbr0 interface to something else (e.g. a randomly generated private range, like LXD uses normally) to avoid any routing conflicts with the external network on eth0.
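For example, something like this (the fd42:… prefix below is just an illustrative ULA range, not something from your provider; any fd00::/8 prefix that doesn’t clash with your networks will do):

lxc network set lxdbr0 ipv6.address fd42:4242:4242:4242::1/64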

You also don’t need these lines:

       up ip -6 address add 1111:aaaa:3004:9978::1/128 dev eth0
       down ip -6 address del 1111:aaaa:3004:9978::1/128 dev eth0

Next up, please confirm with your ISP how many IPv6 addresses they have assigned you: the whole /64 subnet, or literally just 1111:aaaa:3004:9978::1?

Sorry, thought that was clear already: Definitely the whole /64 subnet.
That’s why I chose SLAAC in the tutorial instead of DHCPv6.

Speaking of the tutorial, I ran lxc network set lxdbr0 ipv6.address 1111:aaaa:3004:9978::1/64; that’s where that comes from. (https://youngryan.com/2019/02/how-to-assign-ipv6-addresses-to-lxd-containers-on-a-vps/)
In the tutorial, trying to connect the host network comes later. So I guess I can completely forget about that tutorial?

Happy to learn the right way!
Do I undo the lxdbr0 setup or simply start the server over?

What is the best way to get IPv6 onto the host so that the containers can later get their addresses automatically?
Also what are the steps (and commands) after I get ipv6 on eth0?

Ah OK, it wasn’t clear because the ISP-provided page only makes reference to a single IP in their control panel, and suggests a purely static configuration.

So the tutorial you link to uses an approach called “proxy NDP” to make it appear like the lxdbr0 interface is on the same network as your host’s eth0 interface. It does this by running an additional process on the host called the NDP proxy daemon, which listens for layer 2 NDP requests arriving at eth0 and responds to them for IPs that are in your /64 subnet, but not actually assigned to your host’s eth0 address.

Packets are then forwarded to your host’s eth0 interface, at which point your host must route them internally to the lxdbr0 interface.
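To give a feel for it, the NDP proxy daemon that tutorial uses (ndppd) is driven by a small config file roughly along these lines (a sketch only; the prefix and interface name are this thread’s placeholders):

proxy eth0 {
    # Answer neighbour solicitations arriving on eth0 for any address
    # in the /64, so the upstream router sends that traffic to this host.
    rule 1111:aaaa:3004:9978::/64 {
        static
    }
}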

I would still recommend, at least for now, disabling/removing lxdbr0 and getting IPv6 connectivity working on your host first, as I find it helps to break down networking issues into smaller chunks to resolve them fully.

That way you can be confident that basic IPv6 networking is working, and if it’s not, then there’s no need to progress further :slight_smile:

Once we’ve got it working, then we can proceed to the next step of adding lxdbr0 back in and adding the NDP proxy daemon.

Ok, thanks a lot!

So I will:

  1. Reinstall the server, with LXD in standard settings and ifupdown
  2. Try to get a single IPv6 address bound to eth0, as described on the ISP’s site
  3. Get back to you here for the right way

I have to say, though, that it would be wonderful to have the right steps already here, also for someone else who might run into similar issues. :slight_smile:

Also done for now, see you later or tomorrow.

The challenge with having a step-by-step tutorial is that there are many different ways to configure this, and many different types of ISP networks, so it’s not a case of one-size-fits-all.

This is why LXD comes out of the box with a private bridge, so that it doesn’t have to take into account the external connectivity much (apart from NAT).

For example, if you did not need automatic IP assignment via SLAAC, I would have suggested you just use LXD’s built-in routed NIC type, which uses Linux’s built-in proxy NDP functionality without the need for an LXD bridge or the NDP proxy daemon at all. This would be the simplest option IMHO, as it also avoids relying on your ISP’s router advertisement daemon (if they have one), and avoids any MAC filtering on the ISP network side. However, it requires static IP assignment.

https://linuxcontainers.org/lxd/docs/master/instances#nictype-routed

Another option would be to use the macvlan NIC type, which, again, would allow you to avoid using lxdbr0 and the NDP proxy daemon, and instead directly connects your container to the host’s network. If the ISP provides a router advertisement daemon (and there’s no indication from their page that they do), SLAAC would then work directly with external addresses. This may cause issues if the ISP has MAC filtering in place. It also doesn’t allow your containers to talk to your host (which some people need).

https://linuxcontainers.org/lxd/docs/master/instances#nictype-macvlan
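As a sketch, attaching a container via macvlan is a one-liner (container and parent interface names are placeholders):

lxc config device add <container> eth0 nic nictype=macvlan parent=eth0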

Next there is ipvlan, which is like a cross between routed and macvlan, in that it requires static assignment and shares the host’s MAC address, but doesn’t allow containers to talk to the host.

https://linuxcontainers.org/lxd/docs/master/instances#nictype-ipvlan
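Again as a sketch (the address is one of this thread’s placeholder IPs):

lxc config device add <container> eth0 nic nictype=ipvlan parent=eth0 ipv6.address=1111:aaaa:3004:9978::2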

There’s also the option of creating a new unmanaged bridge, say br0, and then linking both your host’s eth0 interface and your containers’ eth0 interfaces to it, effectively joining your containers to the external network of your host. This has most of the same properties (pros and cons) as macvlan above, but does allow your containers to talk to your host. However, it is the trickiest to set up, as you have to remove the IP from your host’s eth0 and move it to the unmanaged bridge interface.

https://linuxcontainers.org/lxd/docs/master/instances#nictype-bridged
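For illustration, a hedged /etc/network/interfaces sketch for such a bridge, using this thread’s placeholder addresses (it assumes the bridge-utils package is installed, and this is exactly the kind of change that can lock you out over SSH if it goes wrong):

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        bridge_ports eth0
        address 1.2.3.4/24
        gateway 1.2.3.1
iface br0 inet6 static
        address 1111:aaaa:3004:9978::1/64
        gateway fe80::1

The container would then attach with something like lxc config device add <container> eth0 nic nictype=bridged parent=br0.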

Regarding the unmanaged bridge interface: are you talking about an IPv6 address or the IPv4 address that would have to be assigned to the bridge? Because if it is just the IPv6 addresses, I have to assign those manually anyway, so it might not be much more complicated than what I already have to do. It is also the only approach I have found another tutorial on. (http://www.makikiweb.com/Pi/lxc_on_the_pi.html)

Otherwise, if it is really that much more complicated, we can go with the simplest setup instead.

Regarding a step-by-step tutorial: it would need to cover all the different things to find out in order to choose the best option for your case, and then how to implement it.

With the unmanaged bridge option you have to move all of the host’s eth0 IPs (IPv4 and IPv6) to the newly created bridge interface (e.g. br0), because once you connect eth0 to br0, any IPs on eth0 will stop working.

This can be tricky because oftentimes people are logged in via SSH over the same IPs they are trying to move, and can get locked out.

You’d also need to confirm that your ISP is providing a router advertisement daemon on their network, otherwise using an unmanaged bridge or macvlan isn’t going to work with SLAAC and you’d have to use static assignments (at which point it’d be easier to use routed or ipvlan).

You should also check whether the ISP allows multiple MAC addresses on the eth0 interface, before you go down the unmanaged bridge or macvlan approach.

If you were going to use the routed NIC type, then the steps would be:

  1. Remove lxdbr0 (or at least change its IP prefix so it doesn’t conflict with your public /64).
  2. Ensure that your host has IPv6 connectivity.
  3. Pick an IP in your /64 that isn’t being used.
  4. Then run lxc config device add <container> eth0 nic nictype=routed ipv6.address=<your IPv6 address> parent=eth0

This last step will check for the required sysctl settings and inform you if you need to tweak them. Remember to persist these if you do need to change any of them so a reboot doesn’t wipe them out.

This will then configure the IP inside your container, and a default gateway, as well as the proxy NDP and static routes on the host required to make it appear that your container is on the external network.
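To persist those sysctls across reboots, one way (a sketch; adjust it to whichever keys the command actually reports) is a drop-in file:

# /etc/sysctl.d/99-lxd-routed.conf
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp=1

Files under /etc/sysctl.d/ are applied at boot; sudo sysctl --system loads them immediately.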

Ok, so I got the IPv6 to be bound to eth0.

The image from my ISP seems to have included net.ipv6.conf.all.disable_ipv6 = 1 in /etc/sysctl.conf, which made my life difficult.

How do I persist the routed NIC type changes that you mentioned?

Also, is there a way to have the host’s eth0 IPv6 addresses routed to the containers’ IPv6 addresses on lxdbr0, which would also allow automatic IPv6 assignment to the containers? I guess that needs the NDP proxy again, which is broken under netplan; and I now have to use netplan, as switching back to ifupdown has proven too difficult.

Tried different approaches, the last one being not to set up a bridge when initializing. Then I added the eth0 NIC to the container with your command, but when trying to start it I get this error:
Error: Common start logic: Failed to start device "eth0": Routed mode requires sysctl net.ipv6.conf.all.forwarding=1

I added this on the host and in the instance and ran sudo netplan apply, but no luck:

bash -c "cat >>/etc/sysctl.conf <<EOL
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.eth0.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp=1
EOL"

This networking stuff is painful… :slight_smile:

After rebooting and adding the eth0 NIC, the container comes up but has a totally different IPv6 address, and lxc list shows an empty IPv6 field. So something is wrong there too.

root@container1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:40:d3:e8:91:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5440:d3ff:fee8:91e0/64 scope link 
       valid_lft forever preferred_lft forever

It may be something inside the container resetting the global IP that LXD set up before it started. Try disabling DHCP and any network config in the container. The IP you see is the randomly generated link-local address; that’s normal and will remain even with a static IP.

Here’s an example with ubuntu 18.04:

lxc init ubuntu:18.04 c1
lxc config device add c1 eth0 nic nictype=routed parent=wlp0s20f3 ipv6.address=2a02:nnn:76f4:1::200
sudo sysctl net.ipv6.conf.all.proxy_ndp=1
sudo sysctl net.ipv6.conf.wlp0s20f3.proxy_ndp=1
lxc start c1
lxc exec c1 -- rm /etc/netplan/50-cloud-init.yaml
lxc restart c1
lxc exec c1 -- ip a show dev eth0
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:dc:53:a9:44:29 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:nnn:76f4:1::200/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ccdc:53ff:fea9:4429/64 scope link 
       valid_lft forever preferred_lft forever

Ping google

lxc exec c1 -- ping 2a00:1450:4009:811::2004
PING 2a00:1450:4009:811::2004(2a00:1450:4009:811::2004) 56 data bytes
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=1 ttl=53 time=1023 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=2 ttl=53 time=26.6 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=3 ttl=53 time=26.4 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=4 ttl=53 time=25.6 ms
^C
--- 2a00:1450:4009:811::2004 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3019ms
rtt min/avg/max/mdev = 25.609/275.570/1023.518/431.828 ms, pipe 2

@tomp you are the best! :smiley:
Thank you so much!

Removing that yaml file from the container’s netplan config finally made it accept the IPv6 address!
Pinging outside sources was also successful.
Now I guess I just need to set up DNS in the container and I am good to go. Unfortunately, that is the next wall I am hitting. I am trying to set up lxdbr0 again so that the container can at least resolve names.

You have been very helpful and I thank you very much!

I have tried setting the netplan DNS this way inside the container:

sudo lxc exec container1 -- bash -c "cat >>/etc/netplan/01-netcfg.yaml<<EOL
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 1111:aaaa:3004:9978:0000:0000:0000:0002/128
      nameservers:
        addresses:
          - 2606:4700:4700::1111
          - 2606:4700:4700::1001
EOL"

But no luck. Doesn’t resolve.
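
For anyone retracing this: note that the >> in the command above appends, so if 01-netcfg.yaml already contained a network: block, the result would be invalid YAML. A variant worth trying first (hedged; it reuses the same placeholder addresses, and the solution post linked at the top is the authoritative fix) writes the file fresh and re-applies netplan inside the container:

sudo lxc exec container1 -- bash -c "cat >/etc/netplan/01-netcfg.yaml<<EOL
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 1111:aaaa:3004:9978:0000:0000:0000:0002/128
      nameservers:
        addresses:
          - 2606:4700:4700::1111
          - 2606:4700:4700::1001
EOL"
sudo lxc exec container1 -- netplan apply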