Can't get IPv6 Netmask 64 to work (no NAT, should be end to end)

Yes, you could create a new bridge manually (using netplan, for instance), e.g. called br0, connected to your host’s external interface. You’d need to ensure that the host’s current static IPs are moved from the host’s interface to the new bridge interface, otherwise they will stop working.

Then you could add a new NIC device to your containers (in addition to the private IPv4 one) that connects directly to the new bridge, e.g. lxc config device add <container> eth1 nic nictype=bridged parent=br0

This would allow you to also use the limits settings on those devices, and they would be directly connected to the host’s external network.
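
For example, something like this (a sketch using a container called c1 and illustrative 10Mbit limits, adjust to your own values):

lxc config device add c1 eth1 nic nictype=bridged parent=br0 limits.ingress=10Mbit limits.egress=10Mbit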

However, IIRC your ISP does not run an IPv6 router advertisement service, so your containers would not be able to auto-configure their IPv6 addresses; you’d need to configure them statically using netplan inside each container.
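
A minimal sketch of what that in-container netplan could look like, assuming the bridged NIC shows up as eth1, a placeholder address from your /64 (2001:db8:1234:5678::10 here) and the ISP’s gateway on fe80::1:

network:
  version: 2
  ethernets:
    eth1:
      dhcp4: no
      dhcp6: no
      addresses:
        - 2001:db8:1234:5678::10/64
      routes:
        - to: ::/0
          via: fe80::1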

Also worth noting that your ISP may enforce that only a single MAC address can be present on each network port; if they do, then bridging will not work.

If this is the case, then you’d need to use the original approach you linked to with a private managed bridge, and then use the NDP proxy daemon to advertise your containers’ IPv6 addresses onto the external interface.
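
A minimal ndppd.conf sketch for that approach, assuming eth0 is the external interface and 2001:db8:1234:5678::/64 stands in for your real prefix:

proxy eth0 {
    rule 2001:db8:1234:5678::/64 {
        auto
    }
}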


Trying to set the right netplan config on the host with this:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      match:
        macaddress: 12:49:56:3f:4e:37
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: no
      dhcp6: no
      addresses:
        - 111.12.100.70/32
        - 1111:aaaa:3004:9978::1/64
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 111.12.100.1
      gateway6: fe80::1
      nameservers:
        addresses:
          - 111.12.100.11
          - 111.12.100.10
          - 1111:aaaa::2:53
          - 1111:aaaa::1:53

But I get locked out after rebooting or running netplan apply.
Any tips?

Or, getting back to routed: can I route to a bridge instead of to a container, and then connect the containers to that bridge?

I have since learned that my host does indeed use MAC filtering, so the unmanaged bridge br0 is out, I guess.

What is the easiest way to have traffic go through a bridge from the containers so I can limit the network and still work on a “restricted” host like mine?

So you could try the original approach you were attempting with the NDP proxy daemon (ndppd) and see if you can get that working.

But also, I don’t see any reason why we couldn’t add our limit support that we have for bridged NICs to routed NICs, so I’ll add that to our ideas board for the future.


Yeah, but the guy states that there is a bug in netplan about on-link (whatever that is :smiley:), so it won’t work for IPv6 with netplan, and my attempts to get rid of netplan and go back to ifupdown were unsuccessful. I will give it a try anyway; it won’t be the first suicide mission I go on. :smiley:

Would p2p work with MAC address filtering on the host? It is also veth-based, so it could work?

And I am happy that I could even give you an idea, thanks for considering it!

Good thing you sent me on that suicide mission, as it worked out after all, @tomp!

So here is, in my opinion, the simplest and most feature-rich approach, as I can still limit network ingress and egress.

Setting up Netplan

$macaddress, $ipv6address, $ipv4address and $ipv4gateway have to be set/changed to your own addresses. And eth0, my default physical interface, may have a different name on your system.
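
For example, set them like this before running the command below (these are placeholder documentation addresses, replace them with your own):

macaddress="aa:bb:cc:dd:ee:ff"
ipv4address="203.0.113.10"
ipv4gateway="203.0.113.1"
ipv6address="2001:db8:1234:5678::1"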

cat > /etc/netplan/01-netcfg.yaml <<EOF
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      match:
        macaddress: $macaddress
      addresses:
        - $ipv4address/32
        - $ipv6address/128
      routes:
        - to: ::/0
          via: fe80::1
        - to: 0.0.0.0/0
          via: $ipv4gateway
          on-link: true
      nameservers:
        search: [ invalid ]
        addresses:
          - 1.1.1.1 # These four entries are Cloudflare's DNS
          - 1.0.0.1
          - 2606:4700:4700::1111
          - 2606:4700:4700::1001
EOF
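
Then apply it. If you are worried about locking yourself out (like I did earlier in this thread), netplan try rolls the change back automatically unless you confirm it within the timeout; netplan apply makes it permanent:

netplan try
netplan apply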

Setting up the Kernel NDP proxying and forwarding

cat >>/etc/sysctl.conf <<EOF
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.eth0.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp=1
EOF

Also make sure IPv6 is not disabled in this file.
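
If you want the settings to take effect without waiting for the reboot below, you can load them with:

sysctl -p /etc/sysctl.conf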

UFW Change - If UFW is used
nano /etc/default/ufw

Make this change: DEFAULT_FORWARD_POLICY="ACCEPT"
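
Or, if you prefer to script the change, a one-liner along these lines should do it:

sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw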

Then do a reboot.

Install and set up LXD
When initializing LXD after the install, set your IPv6 /64 range as the lxdbr0 IPv6 address. If LXD is already installed, you can run:

lxc network set lxdbr0 ipv6.address $ipv6address/64

This way the containers will get an IPv6 address from lxdbr0.

Also the following options should be set:

lxc network set lxdbr0 ipv6.dhcp false
lxc network set lxdbr0 ipv6.nat false
lxc network set lxdbr0 ipv6.routing true

The IPv4 settings can be left alone and stay on NAT.
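
To double-check the resulting bridge configuration, you can inspect it with:

lxc network show lxdbr0
lxc network get lxdbr0 ipv6.address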

Run a Linux Container and enjoy
lxc launch ubuntu:18.04 c1
Enjoy a container with a universally routable IPv6 address.
To get the address, you can run lxc list.
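
For example, to see the address and test connectivity (c1 is the container launched above; the column flags are optional):

lxc list c1 -c n46
lxc exec c1 -- ping -6 -c 3 2606:4700:4700::1111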

Special Thanks
This would not have been possible without the help and tutorials of Thomas Parrott @tomp and Ryan Young @yoryan. Thank you both very very much!

Glad to hear you got it working how you wanted it. Was that still running ndppd by the way (it wasn’t in your setup steps)?


No, just using the kernel NDP proxying features that you told me about before and that Ryan also mentioned in his tutorial; that’s how I was able to put two and two together. :slight_smile:
Thanks again you guys! Especially you Thomas!

Do you think I should put this small tutorial up on askubuntu?

Ah, that’s interesting. If that is working without an NDP proxy daemon: even with the NDP proxy sysctls activated, the kernel still needs static routes (as generated by the routed NIC type) to work. If it’s working without those or ndppd, then it suggests your ISP is routing your /64 subnet directly to your host rather than expecting NDP resolution to take place. In that case your host is just doing the router part of the job and doesn’t need to proxy NDP as well.
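
A quick way to check which of the two is happening (eth0 being the external interface):

ip -6 route show dev lxdbr0      # the /64 route pointing into the bridge
ip -6 neigh show proxy dev eth0  # static proxy NDP entries; empty if the ISP routes the /64 directly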

We have a Tutorials section in this forum (although of course we would be happy for you to put a tutorial up on askubuntu too). If you post a tutorial here, we could link to it from our Tutorials section as well.

Ok, so the prerequisites for our tutorial are:

  • Having a /64 IPv6 subnet
  • The ISP routes the /64 subnet directly to the host (if not, the NDP proxy daemon ndppd has to be used, see here)
  • Running Ubuntu 18.04 and LXD 4.0

Anything I forgot?

Sounds good.


I don’t have access to create Tutorial topics.

If you post it as a normal new post, I’ll move it into Tutorials for you.


Here is our tutorial: Getting universally routable IPv6 addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS

Please let me know if there are any mistakes or inconsistencies.


Hey @tomp, how would I go 1 nesting deep with this?

Make bridges on the host with /112 subnets, assign one of those /112 bridges to container-level-one, then make a bridge with the same subnet inside container-level-one that gets assigned to container-level-two? And set up the NDP and forwarding kernel settings inside container-level-one like on the host?

Thanks! :slight_smile:

So the bridged NIC type has support for adding static routes to the host that point into the bridge network using the ipv6.routes NIC property (https://linuxcontainers.org/lxd/docs/master/instances#nictype-bridged).
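
For example (a sketch with a placeholder /112 prefix, where c1 is the first container and eth0 its profile-defined bridged NIC, hence the override):

lxc config device override c1 eth0 ipv6.routes=2001:db8:1234:5678:1::/112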

However, unlike (if I recall correctly) your ISP’s routing of your /64 subnet, which routes directly to your host’s interface (avoiding the need for proxy NDP), the routes that LXD will set up won’t route directly to your container’s MAC address; they will instead just route into the bridge network and expect the container to respond to NDP requests for the routed IPs.

i.e. it adds a route just to the bridge, like:

ip -6 r add fd40:eab1:5993:f7b8::/64 dev lxdbr0

Rather than directly to the container like:

ip -6 r add fd40:eab1:5993:f7b8::/64 via fd40:eab1:5993:f7b7::2 dev lxdbr0

Inside the container you could then set up a new bridged network for the /112, but you would need to make the container respond to NDP requests from the host.

The alternative approach is to use a routed NIC inside the first container for the nested containers, as this would then set up the proxy NDP entries in the first container. This, combined with the routes at the host level, should work.
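
A sketch of what that could look like inside the first container, assuming a nested container called c2, eth0 as the first container’s own interface, and a spare placeholder address from your /64:

lxc config device add c2 eth0 nic nictype=routed parent=eth0 ipv6.address=2001:db8:1234:5678::100

The routed NIC should then add the corresponding proxy NDP entry on the first container’s eth0 for you.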

Thank you so much again!

So…

Option 1:
Make another lxdbr0, this time with the IPv6 /112 range, inside Container-Level-1.
NDP request response: do I need to use the NDP proxy daemon, or are the enabled kernel features enough? Or is there a netplan setup that does that? Like

  routes:
    - to: ::/0
      via: ipv6-of-container-1

inside Container-Level-2?

Option 2:
Host - Container-Level-1 – Same setup as before
Container-Level-1 - Container-Level-2 – routed NIC

I am sure that I got both wrong :smiley: :smiley:

For Option 1, a /112 subnet is not sufficient for automatic SLAAC allocation to work, so you would have to assign your IPs manually. Also, you would need to either add the proxy NDP entries manually (for each individual IP) or use something like the NDP proxy daemon to automate it.
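
Manually that would be one entry per address, along the lines of (placeholder address and interface):

ip -6 neigh add proxy 2001:db8:1234:5678::100 dev eth0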

For Option 2, it effectively automates Option 1 without having to use additional software.

Thank you!

For Option 2, would I set up the host and lxdbr0 on the host as I do now? Is that what you mean by “routes at the host level”?

Would I need to give the first container more than one IPv6 address?

I would leave the parent container as it is, connected to the existing bridge with a /64 subnet.

This will naturally cause a route to be added to your host for the /64 subnet.

As bridged NICs allow the containers to pick their own IPs and advertise them using NDP, it means you can just allocate individual IPs inside that /64 to routed NICs inside the container and the top-level container will then advertise them via proxy NDP to the host’s bridge.
