Can't get IPv6 Netmask 64 to work (no NAT, should be end to end)

Regarding the unmanaged bridge interface, are you talking about an IPv6 address or the IPv4 address that would have to be assigned to the bridge? If it is just the IPv6 address, I have to assign it/them manually anyway, so it might not be much more complicated than what I already have to do. It is also the only approach I have found another tutorial on. (http://www.makikiweb.com/Pi/lxc_on_the_pi.html)

Otherwise, if it really is that much more complicated, we can go with the simplest setup.

Regarding a step-by-step tutorial: it would need to cover all the different things you have to find out in order to choose the best option for your case, and then how to implement it.

With the unmanaged bridge option you have to move all of the host’s eth0 IPs to the newly created bridge interface (e.g. br0), because once eth0 is connected to br0, any IPs left on eth0 will stop working.

This can be tricky because oftentimes people are logged in via SSH over the same IPs they are trying to move, and can get locked out.
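For illustration, a bridge setup in netplan might look something like this (a minimal sketch; the file name, addresses and gateways are placeholders, substitute your own):

# /etc/netplan/01-br0.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [eth0]
      addresses:
        - 192.0.2.10/24            # example IPv4, use your host's current address
        - 2001:db8:1:2::10/64      # example IPv6 address from your /64
      gateway4: 192.0.2.1
      gateway6: fe80::1
      nameservers:
        addresses: [1.1.1.1, 2606:4700:4700::1111]

Apply it with sudo netplan apply, ideally from a console session rather than over SSH, in case the addresses don’t come up on br0 as expected.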

You’d also need to confirm that your ISP is providing a router advertisement daemon on their network, otherwise using an unmanaged bridge or macvlan isn’t going to work with SLAAC, and you’d have to use static assignments (at which point it’d be easier to use routed or ipvlan).

You should also check whether the ISP allows multiple MAC addresses on the eth0 interface, before you go down the unmanaged bridge or macvlan approach.
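One quick way to check for router advertisements (assuming you install the ndisc6 package; eth0 is an example interface name):

sudo apt install ndisc6
sudo rdisc6 eth0                        # solicits and prints any router advertisement received on eth0
sysctl net.ipv6.conf.eth0.accept_ra     # should be 1 or 2 for SLAAC to be applied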

If you were going to use the routed NIC type, then the steps would be (a rough command sketch follows the list):

  1. Remove lxdbr0 (or at least change its IP prefix so it doesn’t conflict with your public /64).
  2. Ensure that your host has IPv6 connectivity.
  3. Pick an IP in your /64 that isn’t being used.
  4. Then run lxc config device add <container> eth0 nic nictype=routed ipv6.address=<your IPv6 address> parent=eth0
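As a rough sketch, steps 1 to 3 could look like this (2001:db8:1:2::/64 and the chosen address are placeholders for your own /64; c1 is an example container name):

lxc network delete lxdbr0                         # step 1 (or change its prefix instead, see below)
# lxc network set lxdbr0 ipv6.address fd42:1234:5678::1/64
ping -6 -c 3 2606:4700:4700::1111                 # step 2: confirm the host has working IPv6
ping -6 -c 3 2001:db8:1:2::100                    # step 3: no reply suggests the address is free
lxc config device add c1 eth0 nic nictype=routed ipv6.address=2001:db8:1:2::100 parent=eth0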

This last step will check for the required sysctl settings and inform you if you need to tweak them. Remember to persist these if you do need to change any of them so a reboot doesn’t wipe them out.
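One way to persist them is a drop-in file under /etc/sysctl.d/ (the file name is just an example), reloaded without a reboot:

# /etc/sysctl.d/99-lxd-routed.conf
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp=1

sudo sysctl --system     # re-reads all sysctl configuration files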

This will then configure the IP inside your container and a default gateway, as well as the proxy NDP entries and static routes on the host required to make it appear that your container is on the external network.
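You can verify what LXD has added on the host afterwards (the address is the placeholder from the sketch above):

ip -6 neigh show proxy                          # should list the container's address as a proxy NDP entry on eth0
ip -6 route show | grep 2001:db8:1:2::100       # should show a host route pointing at the container's veth device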

Ok, so I got the IPv6 to be bound to eth0.

The image from my ISP seems to have included net.ipv6.conf.all.disable_ipv6 = 1 in /etc/sysctl.conf and made my life difficult.

How do I persist the routed NIC type changes that you mentioned?

Also, is there a way to have the host’s eth0 IPv6 addresses routed to the containers’ IPv6 addresses on lxdbr0, which would also allow automatic IPv6 assignment to the containers? I guess that needs the NDP proxy again, which is broken in netplan, which I now have to use since switching back to ifupdown has proven too difficult.

I tried different approaches, the last one being not to set up a bridge when initializing. Then I added the eth0 device to the container with your command, but when trying to start it I get this error:
Error: Common start logic: Failed to start device "eth0": Routed mode requires sysctl net.ipv6.conf.all.forwarding=1

I added this on the host and in the instance and ran sudo netplan apply, but no luck:

bash -c "cat >>/etc/sysctl.conf <<EOL
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.eth0.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp=1
EOL"

This networking stuff is painful… :slight_smile:

After rebooting and adding the eth0 NIC, the container comes up but has a totally different IPv6 address, and lxc list shows an empty IPv6 field. So something is wrong there as well.

root@container1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:40:d3:e8:91:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5440:d3ff:fee8:91e0/64 scope link 
       valid_lft forever preferred_lft forever

It may be something inside the container resetting the global IP that LXD set up before it started. Try disabling DHCP and any network config in the container. The IP you see is the randomly generated link-local address; that’s normal and will remain even with a static IP.
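If the image uses cloud-init, a sketch of how to stop it from regenerating the network config (the disable file follows cloud-init’s documented convention; c1 is an example container name):

lxc exec c1 -- bash -c 'echo "network: {config: disabled}" > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg'
lxc exec c1 -- rm /etc/netplan/50-cloud-init.yaml
lxc restart c1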

Here’s an example with Ubuntu 18.04:

lxc init ubuntu:18.04 c1
lxc config device add c1 eth0 nic nictype=routed parent=wlp0s20f3 ipv6.address=2a02:nnn:76f4:1::200
sudo sysctl net.ipv6.conf.all.proxy_ndp=1
sudo sysctl net.ipv6.conf.wlp0s20f3.proxy_ndp=1
lxc start c1
lxc exec c1 -- rm /etc/netplan/50-cloud-init.yaml
lxc restart c1
lxc exec c1 -- ip a show dev eth0
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:dc:53:a9:44:29 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:nnn:76f4:1::200/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ccdc:53ff:fea9:4429/64 scope link 
       valid_lft forever preferred_lft forever

Ping Google:

lxc exec c1 -- ping 2a00:1450:4009:811::2004
PING 2a00:1450:4009:811::2004(2a00:1450:4009:811::2004) 56 data bytes
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=1 ttl=53 time=1023 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=2 ttl=53 time=26.6 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=3 ttl=53 time=26.4 ms
64 bytes from 2a00:1450:4009:811::2004: icmp_seq=4 ttl=53 time=25.6 ms
^C
--- 2a00:1450:4009:811::2004 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3019ms
rtt min/avg/max/mdev = 25.609/275.570/1023.518/431.828 ms, pipe 2

@tomp you are the best! :smiley:
Thank you so much!

Removing that yaml file from the container’s netplan config finally made it accept the IPv6 address!
Pinging outside sources was also successful.
Now I guess I just need to set up DNS in the container and I am good to go. Unfortunately, that is the next wall I am hitting. I am trying to install with lxdbr0 again so the container can at least resolve.

You have been very helpful and I thank you very much!


I have tried setting the netplan DNS this way inside the container:

sudo lxc exec container1 -- bash -c "cat >>/etc/netplan/01-netcfg.yaml<<EOL
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 1111:aaaa:3004:9978:0000:0000:0000:0002/128
      nameservers:
        addresses:
          - 2606:4700:4700::1111
          - 2606:4700:4700::1001
EOL"

But no luck. Doesn’t resolve.

Check that setting up netplan isn’t removing the gateway route that LXD is adding in:

ip -6 r

You should expect to see a default route via fe80::1.
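If it has gone missing, it can be re-added manually inside the container while you adjust the netplan config (assuming the routed NIC is eth0):

ip -6 route add default via fe80::1 dev eth0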

The output is:

1111:aaaa:3004:9978::2 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 proto static metric 1024 pref medium

But I cannot resolve any domain names. Resolving IPv4 domain names should also be possible, as I will want to install programs and other things.
(I have hence included gateway6: fe80::1 in the netplan config.)

What are the contents of /etc/resolv.conf and the output of systemd-resolve --status?

Can you still ping outside IPs?

As for resolving IPv4 domains, once you get DNS working, you will be able to resolve IPv4 domains, but they will be unreachable as you’ve not configured any IPv4 address (not even private ones that could be NATed by your host).

Before the reset I just did for the 1000th time, I could ping outside IPv6 addresses, yes.
I assigned the container the address 192.168.1.2, but pinging outside domain names still didn’t work.

I had put the following contents in /etc/resolv.conf:

nameserver 1.1.1.1
nameserver 2606:4700:4700::1111

systemd-resolve --status had shown me the nameservers in one of the setups before, but still no name resolving; I have no idea why not.

Right now I am trying to have a normal managed bridge on eth0 of the containers and add an eth1 with the static IPv6 address, in the hope that the managed bridge will handle the DNS.

I also allowed port 53 in ufw on the host, but that does not seem to have been the issue either.

Ah, OK, so you’ve introduced 2 interfaces inside the container, and the requirement to access IPv4 nameservers (before, you said you didn’t need IPv4, only IPv6), which will change things a fair bit.

If you have a 2nd interface in the container, connected to the managed bridge, it’s likely that the SLAAC autoconfiguration for IPv6 on that interface will wipe out the default IPv6 gateway route for the routed interface and replace it with a default route out of the managed bridge interface instead.

So either disable IPv6 on the managed bridge (ipv6.dhcp=false and ipv6.address=none) so that your 2nd interface is just for IPv4, or add a static private IPv4 address to the routed NIC:

lxc config device set c1 eth0 ipv4.address 192.168.0.n

Then you need to add an outbound masquerade firewall rule so that outbound traffic is translated to your host’s external IP. That way you’d only need 1 interface inside the container.
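A minimal sketch of such a rule with iptables, assuming the routed NIC was given 192.168.0.2 and the host’s uplink interface is eth0:

sudo iptables -t nat -A POSTROUTING -s 192.168.0.2/32 -o eth0 -j MASQUERADE

You’d also want to persist it (e.g. with the iptables-persistent package) so it survives a reboot.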


Here’s an example using 2 interfaces: routed for IPv6 (eth1) and bridged for IPv4 using LXD’s NAT (eth0).

Disable IPv6 on lxdbr0:

lxc network set lxdbr0 ipv6.address=none
lxc network set lxdbr0 ipv6.dhcp=false
lxc init ubuntu:18.04 c1
lxc config device add c1 eth1 nic nictype=routed ipv6.address=2a02:nnn:76f4:1::1234 parent=wlp0s20f3
sysctl net.ipv6.conf.all.proxy_ndp=1
sysctl net.ipv6.conf.wlp0s20f3.proxy_ndp=1
lxc start c1

Then use this netplan config inside the container:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
        eth1:
            addresses:
             - 2a02:nnn:76f4:1::1234/128
            gateway6: fe80::1

Resulting config:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:a6:92:ed:12:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:nnn:76f4:1::1234/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::80a6:92ff:feed:1267/64 scope link 
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b0:fb:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.87.57.6/24 brd 10.87.57.255 scope global dynamic eth0
       valid_lft 3274sec preferred_lft 3274sec
    inet6 fe80::216:3eff:feb0:fb9c/64 scope link 
       valid_lft forever preferred_lft forever
ip -4 r
default via 10.87.57.1 dev eth0 proto dhcp src 10.87.57.6 metric 100 
10.87.57.0/24 dev eth0 proto kernel scope link src 10.87.57.6 
10.87.57.1 dev eth0 proto dhcp scope link src 10.87.57.6 metric 100 
ip -6 r
2a02:nnn:76f4:1::1234 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 proto static metric 1024 pref medium
systemd-resolve --status
Global
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa
                      18.172.in-addr.arpa
                      19.172.in-addr.arpa
                      20.172.in-addr.arpa
                      21.172.in-addr.arpa
                      22.172.in-addr.arpa
                      23.172.in-addr.arpa
                      24.172.in-addr.arpa
                      25.172.in-addr.arpa
                      26.172.in-addr.arpa
                      27.172.in-addr.arpa
                      28.172.in-addr.arpa
                      29.172.in-addr.arpa
                      30.172.in-addr.arpa
                      31.172.in-addr.arpa
                      corp
                      d.f.ip6.arpa
                      home
                      internal
                      intranet
                      lan
                      local
                      private
                      test

Link 9 (eth0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.87.57.1
          DNS Domain: lxd
lxc ls
+------+---------+-------------------+------------------------------+-----------+-----------+
| NAME |  STATE  |       IPV4        |             IPV6             |   TYPE    | SNAPSHOTS |
+------+---------+-------------------+------------------------------+-----------+-----------+
| c1   | RUNNING | 10.87.57.6 (eth0) | 2a02:nnn:76f4:1::1234 (eth1) | CONTAINER | 0         |
+------+---------+-------------------+------------------------------+-----------+-----------+

I didn’t think I needed IPv4 in the container, but DNS would not work. And of course I need DNS so I can install things in the container. I will try the setup adjustments you have given me, thanks a lot!

I thought my config was fine, as pinging the IPv6 address from the outside worked. But of course I wouldn’t want any conflicts, and trying to connect to the containers through ports 80 and 443 from the outside is also proving difficult…

Thank you very much for all the provided help!

You could also try using an IPv6 resolver address rather than an IPv4 one if you would like to keep a pure IPv6-only environment.


How do I do that?
(I will try to use Cloudflare to host a site inside the container and provide the IPv4 connectivity.)
PS: The config works for me without setting the netplan config inside the container.

Well, say you were using Google’s 8.8.8.8 as your DNS resolver IP. Instead of that you could use their equivalent IPv6 resolver IP:

https://developers.google.com/speed/public-dns/docs/using

Your ISP may provide an IPv6 resolver address too.
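For example, with the netplan layout above you could put IPv6-only resolvers on the routed interface (Google’s public IPv6 resolvers shown here; any IPv6 resolver will do):

        eth1:
            addresses:
             - 2a02:nnn:76f4:1::1234/128
            gateway6: fe80::1
            nameservers:
                addresses: [2001:4860:4860::8888, 2001:4860:4860::8844]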

I did try Cloudflare’s IPv6 resolvers:
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001

But it didn’t work, DNS simply wouldn’t resolve… Must have been something else.

Try this:

ping 2606:4700:4700::1111
dig @2606:4700:4700::1111 www.google.com