Ah, that's interesting. If that is working without the NDP proxy daemon, then even with the NDP proxy sysctls activated, the kernel still needs static routes (as generated by the routed NIC type) to work. If it's working without those, or without ndppd, it suggests your ISP is routing your /64 subnet directly to your host rather than expecting NDP resolution to take place. In that case your host is just doing the router part of the job and doesn't need to proxy NDP as well.
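For reference, a minimal sketch of what that routed-style setup involves on the host. The interface names (eth0, veth0) and the 2001:db8: documentation prefix are placeholders, not taken from this thread:

```shell
# Enable IPv6 forwarding so the host can route the subnet onwards
sysctl -w net.ipv6.conf.all.forwarding=1

# Only needed when the ISP expects NDP resolution: answer NDP on the uplink
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1

# Static route towards the container, as the routed NIC type generates
ip -6 route add 2001:db8:0:1::2/128 dev veth0

# Per-IP proxy NDP entry; unnecessary if the ISP routes the /64 directly
ip -6 neigh add proxy 2001:db8:0:1::2 dev eth0
```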
We have a Tutorials section in this forum (although of course we would be happy for you to put a tutorial up on Ask Ubuntu too). If you post a tutorial there, we could link to it from our Tutorials section as well.
Hey @tomp, how would I go one level of nesting deep with this?
Would I make bridges on the host with /112 subnets, assign one of those bridges to container-level-one, then make a bridge with the same subnet inside container-level-one that gets assigned to container-level-two? And set up the NDP and forwarding kernel settings in container-level-one, like on the host?
However, unlike your ISP's routing of your /64 subnet, which (if I recall correctly) routes directly to your host's interface (avoiding the need for proxy NDP), the routes that LXD will set up won't route directly to your container's MAC address; they will instead just route into the bridge network and expect the container to respond to NDP requests for the routed IPs.
i.e. it adds a route just to the bridge, like:
ip -6 r add fd40:eab1:5993:f7b8::/64 dev lxdbr0
Rather than directly to the container like:
ip -6 r add fd40:eab1:5993:f7b8::/64 via fd40:eab1:5993:f7b7::2 dev lxdbr0
Inside the container you could then set up a new bridged network for the /112, but you would need to make the container respond to NDP requests from the host.
The alternative approach is to use a routed NIC inside the first container for the nested containers, as this would then set up the proxy NDP entries in the first container. This, combined with the routes at the host level, should work.
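A sketch of that alternative, run inside the first container (assuming LXD is nested inside it; the nested container name `c2`, the device/parent names, and the address are placeholders):

```shell
# Add a routed NIC to the nested container; LXD creates the static route
# and the proxy NDP entry on the first container's interface automatically
lxc config device add c2 eth0 nic nictype=routed \
    parent=eth0 \
    ipv6.address=2001:db8:0:1::10
```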
Option 1:
Make another bridge like lxdbr0, this time with the IPv6 /112 range, inside Container-Level-1.
NDP request response: do I need to use the NDP proxy daemon, or are the enabled kernel features enough? Or is there a netplan setup that does that? Something like:
routes:
- to: ::/0
via: ipv6-of-container-1
inside Container-Level-2?
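For what it's worth, a fuller sketch of what that netplan might look like inside Container-Level-2, assuming the NIC is named eth0 and using placeholder addresses from a /112 (both are assumptions, not from this thread):

```yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - "fd42:0:0:1::2:10/112"   # placeholder static address within the /112
      routes:
        - to: "::/0"
          via: "fd42:0:0:1::2:1"   # placeholder: Container-Level-1's bridge address
```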
Option 2:
Host - Container-Level-1 – Same setup as before
Container-Level-1 - Container-Level-2 – routed NIC
For Option 1, a /112 subnet is not sufficient for automatic SLAAC allocation to work, so you would have to assign your IPs manually. You would also need to either add the proxy NDP entries manually (for each individual IP) or use something like the NDP proxy daemon to automate it.
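For the manual route, the per-IP entries would look something like this (placeholder addresses and interface name):

```shell
# One proxy NDP entry per individual IP in use inside the /112
ip -6 neigh add proxy fd42:0:0:1::2:10 dev eth0
ip -6 neigh add proxy fd42:0:0:1::2:11 dev eth0
```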
Option 2 effectively automates Option 1 without the need for additional software.
I would leave the parent container as it is, connected to the existing bridge with a /64 subnet.
This will naturally cause a route to be added to your host for the /64 subnet.
As bridged NICs allow containers to pick their own IPs and advertise them using NDP, you can just allocate individual IPs inside that /64 to routed NICs inside the container, and the top-level container will then advertise them via proxy NDP to the host's bridge.
Hi,
I am trying to follow this, of course modifying the network name, but I keep getting this error:
Error: Failed to run: ip -6 addr add fe80::1/128 dev vethd08ddefc: RTNETLINK answers: Permission denied
I get it immediately after starting the container.
Now, I came across this page while searching for static IPv6 configuration.
This is actually my goal, and it might or might not be what I need:
Goal:
Note: I am using Hetzner Cloud, where they provide me with a /64 IPv6 subnet.
What matters in my case is that each bridge has a single IPv4 address for NAT; later on I want to divide the /64 subnet into small portions across multiple bridges, each using a different IPv4 address as its NAT IP.
How would I go about this? Please advise.
@tomp
In other words, I am trying to have: lxdbr1 using IPV4_1 for NAT, lxdbr2 using IPV4_2 for NAT, lxdbr3 using IPV4_3 for NAT.
Then make use of the huge IPv6 /64 subnet to give the containers publicly accessible IPv6 addresses.
Does what I am trying to do sound right?
If I recall correctly, Hetzner routes the /64 directly to the host, so there is no need for proxy NDP to use the /64 addresses.
In that case you could configure each of your 3 bridges with a subset of your /64 subnet; for example, a /120 would allow 256 statically assigned IPv6 addresses per bridge.
Then you would set the IPv4 as you need with ipv4.nat=true and set ipv6.nat=false.
As you wouldn’t be able to use SLAAC, as that needs a /64 or larger, you’d need to configure the IPv6 addresses statically.
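A sketch of such a bridge, using a placeholder private IPv4 range and the 2001:db8: documentation prefix standing in for your real Hetzner /64:

```shell
# One /120 slice of the /64 per bridge; NAT for IPv4 only.
# Repeat with different addresses and /120 slices for lxdbr2 and lxdbr3.
lxc network create lxdbr1 \
    ipv4.address=10.10.1.1/24 ipv4.nat=true \
    ipv6.address=2001:db8:0:1::100:1/120 ipv6.nat=false
```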
@tomp Thanks.
It indeed works.
Is there a way to make the IPv6 address static, instead of it being scope global dynamic noprefixroute as it is now?
Edit:
Manually restarting the network inside the container gets the IP back.
Note: I made a change to the config file, adding:
ipv6.address: fd0b:bc39:4820:d4f8::1:75
As I said, on the first reboot of the whole system it does not get the IPv6 address, but after restarting the network inside the container it does get it back.
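For reference, an equivalent way to set that from the CLI rather than editing files, assuming the container is named `c1` and its bridged NIC device is named `eth0` (both names are assumptions):

```shell
# Pin a static IPv6 for the container's bridged NIC; override creates a
# container-local copy of the profile NIC with the given property set
lxc config device override c1 eth0 ipv6.address=fd0b:bc39:4820:d4f8::1:75
```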