Can't get IPv6 Netmask 64 to work (no NAT, should be end to end)

Ah, that's interesting. Even with the NDP proxy sysctls activated, the kernel still needs static routes (as generated by the routed NIC type) to work. If it's working without those, or without the NDP proxy daemon ndppd, then it suggests your ISP is routing your /64 subnet directly to your host rather than expecting NDP resolution to take place. In that case your host is just doing the router part of the job and doesn't need to proxy NDP as well.
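For reference, here is a minimal sketch of what the routed NIC type sets up on the host; the interface names and the 2001:db8: documentation prefix are placeholders, not values from this thread:

# Enable proxy NDP (only matters when the ISP expects NDP resolution):
sysctl net.ipv6.conf.all.proxy_ndp=1
sysctl net.ipv6.conf.eth0.proxy_ndp=1

# Per-address static route pointing at the container's veth device:
ip -6 route add 2001:db8:1234:5678::10/128 dev veth-c1

# Proxy NDP entry so the host answers neighbour solicitations for that IP:
ip -6 neigh add proxy 2001:db8:1234:5678::10 dev eth0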

We have a Tutorials section in this forum (although of course we would be happy for you to put a tutorial up on askubuntu too). If you post a tutorial here, we could link to it from our Tutorials section as well.

Ok, so the prerequisites for our tutorial are:

  • Having a /64 IPv6 subnet
  • The ISP routes the /64 subnet directly to the host (if not, the NDP proxy daemon ndppd has to be used, see here); one way to check this is sketched after this list
  • Running Ubuntu 18.04 and LXD 4.0
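A rough way to check whether the ISP routes the /64 directly (assuming tcpdump is available; eth0 is a placeholder for the uplink interface):

# Watch the uplink for neighbour solicitations while pinging an unused
# address in the /64 from an outside host. If the ping reaches the host
# without a solicitation for that address appearing, the subnet is being
# routed directly.
tcpdump -i eth0 -n icmp6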

Anything I forgot?

Sounds good.


I don't have access to create Tutorial topics.

If you post it as a normal new post, I'll move it into Tutorials for you.


Here is our tutorial: Getting universally routable IPv6 addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS

Please let me know if there are any mistakes or inconsistencies.


Hey @tomp, how would I go one level of nesting deep with this?

Make bridges on the host with /112 subnets, assign the same-subnet bridge to container-level-one, and make a bridge with the same subnet inside container-level-one that then gets assigned to container-level-two? And set up the NDP and forwarding kernel settings in container-level-one like on the host?

Thanks! :slight_smile:

So the bridged NIC type has support for adding static routes to the host that point into the bridge network using the ipv6.routes NIC property (https://linuxcontainers.org/lxd/docs/master/instances#nictype-bridged).
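A minimal sketch of that property in use; the container name, bridge name and the 2001:db8: prefix are placeholders:

# Add a bridged NIC whose /112 gets a static route from the host into
# the bridge network:
lxc config device add c1 eth0 nic nictype=bridged parent=lxdbr0 ipv6.routes=2001:db8:1234:5678::/112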

However, unlike your ISP's routing of your /64 subnet (which, if I recall correctly, routes directly to your host's interface and avoids the need for proxy NDP), the routes that LXD sets up won't route directly to your container's MAC address; they will instead just route into the bridge network and expect the container to respond to NDP requests for the routed IPs.

i.e. it adds a route just to the bridge like:

ip -6 r add fd40:eab1:5993:f7b8::/64 dev lxdbr0

Rather than directly to the container like:

ip -6 r add fd40:eab1:5993:f7b8::/64 via fd40:eab1:5993:f7b7::2 dev lxdbr0

Inside the container you could then set up a new bridged network for the /112, but you would need to make the container respond to NDP requests from the host.
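One way to do that would be running the NDP proxy daemon ndppd inside the container; a minimal sketch, assuming eth0 is the container's NIC and using a placeholder /112:

apt install ndppd
cat > /etc/ndppd.conf <<'EOF'
proxy eth0 {
    rule 2001:db8:1234:5678::/112 {
        auto
    }
}
EOF
systemctl restart ndppd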

The alternative approach is to use routed NICs inside the first container for the nested containers, as this would then set up the proxy NDP entries in the first container. Combined with the routes at the host level, this should work.
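As a sketch, from inside the first container (the nested container name and address are placeholders, and this assumes LXD is initialised inside that container):

# Give the nested container a routed NIC whose parent is the first
# container's own NIC; the routed NIC type adds the proxy NDP entry:
lxc config device add nested1 eth1 nic nictype=routed parent=eth0 ipv6.address=2001:db8:1234:5678::100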

Thank you so much again!

So…

Option 1:
Make another bridge like lxdbr0, this time with the /112 IPv6 range, inside Container-Level-1.
NDP request response: Do I need to use the NDP proxy daemon, or are the enabled kernel features enough? Or is there a netplan setup that does that? Like

  routes:
    - to: ::/0
      via: ipv6-of-container-1

inside Container-Level-2?

Option 2:
Host - Container-Level-1 – Same setup as before
Container-Level-1 - Container-Level-2 – routed NIC

I am sure that I got both wrong :smiley: :smiley:

For Option 1, a /112 subnet is not sufficient for automatic SLAAC allocation to work, so you would have to assign your IPs manually. Also you would need to either add the proxy NDP entries manually (for each individual IP) or use something like the NDP proxy daemon ndppd to automate it.

Option 2 effectively automates Option 1 without having to use additional software.

Thank you!

For Option 2, would I set up the host and lxdbr0 on the host as I do now? Is that what you mean by “routes at host level”?

Would I need to give the first container more than one IPv6 address?

I would leave the parent container as it is, connected to the existing bridge with a /64 subnet.

This will naturally cause a route to be added to your host for the /64 subnet.

As bridged NICs allow the containers to pick their own IPs and advertise them using NDP, you can just allocate individual IPs inside that /64 to routed NICs inside the container, and the top-level container will then advertise them via proxy NDP to the host's bridge.
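To verify, the proxy NDP entries that the routed NICs register can be listed from inside the top-level container:

ip -6 neigh show proxy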


Fancy stuff! I wish I had teachers like you in school. Or, for that matter, a school system that teaches you anything useful for actual life. :slight_smile: :slight_smile:

Thank you very much!


Hi,
I am trying to follow this, of course modifying the network name,
but I keep getting this error:
Error: Failed to run: ip -6 addr add fe80::1/128 dev vethd08ddefc: RTNETLINK answers: Permission denied
Note: I am using Hetzner Cloud, where they provide me with a /64 IPv6 subnet.
What matters in my case: a bridge has a single NATted IPv4; later on I want to divide the /64 IPv6 subnet into small portions across multiple bridges that each use a different IPv4 as the NAT IP.
How would I go about this, please advise?

Can you clarify what command you are running to get that error please?

lxc network set lxdbr0 ipv6.address=none
lxc network set lxdbr0 ipv6.dhcp=false
lxc init ubuntu:18.04 c1
lxc config device add c1 eth1 nic nictype=routed ipv6.address=2a02:nnn:76f4:1::1234 parent=wlp0s20f3
sysctl net.ipv6.conf.all.proxy_ndp=1
sysctl net.ipv6.conf.wlp0s20f3.proxy_ndp=1
lxc start c1

I get it immediately after start.
I came across this page by searching for static IPv6, and this might or might not be what I need. My goal is as described above: divide the /64 into small portions across multiple bridges that each use a different IPv4 as the NAT IP. How would I go about this, please advise?

@tomp
In other words, I am trying to have:
lxdbr1 using IPV4_1 for NAT.
lxdbr2 using IPV4_2 for NAT.
lxdbr3 using IPV4_3 for NAT.
Then make use of the huge /64 IPv6 subnet to add IPv6 addresses that make the containers publicly accessible over IPv6.
Does what I am trying to do sound right?

If I recall correctly, Hetzner route the /64 directly to the host, and so there is no need for proxy NDP to use the /64 addresses.

In that case you could configure each of your 3 bridges with a subset of your /64 subnet; for example a /120, which allows 256 statically assigned IPv6 addresses per bridge.

Then you would set up the IPv4 side as you need with ipv4.nat=true, and set ipv6.nat=false.

Since SLAAC needs a /64 or larger, you wouldn't be able to use it here, so you'd need to configure the IPv6 addresses statically.
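To give each bridge a different IPv4 source address for NAT, here is a sketch using the ipv4.nat.address network key (the 203.0.113.x addresses are documentation placeholders and would need to be addresses already assigned to the host):

lxc network set lxdbr1 ipv4.nat=true
lxc network set lxdbr1 ipv4.nat.address=203.0.113.1
lxc network set lxdbr2 ipv4.nat=true
lxc network set lxdbr2 ipv4.nat.address=203.0.113.2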


So for example if your /64 was fd0b:bc39:4820:d4f8::/64

Then you could create two networks as follows, each with a separate /120 subnet, IPv6 NAT disabled, and stateful DHCPv6 enabled.

lxc network create lxdbr1 ipv6.address=fd0b:bc39:4820:d4f8::1:1/120 ipv6.nat=false ipv6.dhcp.stateful=true
lxc network create lxdbr2 ipv6.address=fd0b:bc39:4820:d4f8::2:1/120 ipv6.nat=false ipv6.dhcp.stateful=true

Then launch some containers on the networks:

lxc init images:ubuntu/focal cbr1
lxc config device add cbr1 eth0 nic network=lxdbr1
lxc start cbr1

lxc init images:ubuntu/focal cbr2
lxc config device add cbr2 eth0 nic network=lxdbr2
lxc start cbr2

lxc ls cbr
+------+---------+----------------------+----------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4         |               IPV6               |   TYPE    | SNAPSHOTS |
+------+---------+----------------------+----------------------------------+-----------+-----------+
| cbr1 | RUNNING | 10.196.72.119 (eth0) | fd0b:bc39:4820:d4f8::1:75 (eth0) | CONTAINER | 0         |
+------+---------+----------------------+----------------------------------+-----------+-----------+
| cbr2 | RUNNING | 10.115.99.13 (eth0)  | fd0b:bc39:4820:d4f8::2:f6 (eth0) | CONTAINER | 0         |
+------+---------+----------------------+----------------------------------+-----------+-----------+

Then from my upstream router, which represents Hetzner's router, I would add a static route for the /64 subnet and test connectivity:

ip -6 r add fd0b:bc39:4820:d4f8::/64 via <LXD host IP> dev enp2s0
ping -c1 fd0b:bc39:4820:d4f8::1:75
ping -c1 fd0b:bc39:4820:d4f8::2:f6

This setup still allows dynamic IP allocation as long as the container can do DHCPv6.
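If an image doesn't request a DHCPv6 lease by default, here is a sketch of enabling it inside an Ubuntu container via netplan (the file name is arbitrary and eth0 is the usual interface name in these images):

cat > /etc/netplan/60-dhcpv6.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
EOF
netplan apply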

This only works if your ISP routes the /64 directly to your LXD host and doesn’t rely on your LXD host responding to NDP queries.

@tomp Thanks.
It indeed works.
Is there a way to make the IPv6 static, instead of it being scope global dynamic noprefixroute as it is now?
Edit:
Manually restarting the network inside the container gets the IP back.
Note: I made a change to the config file, adding:
ipv6.address: fd0b:bc39:4820:d4f8::1:75
As I said, on first reboot of the whole system it does not get the IPv6, but after restarting the network inside the container it does get it back.
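For reference, here is a sketch of expressing that as a device-level setting instead of editing the config file by hand (instance and device names are from the example above; with stateful DHCPv6 on the bridge this should become a static lease):

lxc config device set cbr1 eth0 ipv6.address fd0b:bc39:4820:d4f8::1:75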