LXD IPv6: are the physical interface and the bridge allowed to be in the same IPv6 network?

Hi folks,

In this post, Getting universally routable IPv6 Addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS, @michacassola
writes "lxc network set lxdbr0 ipv6.address $ipv6address2/64 # Other one than eth0"

Just to clarify: by “Other one than eth0” he means another IPv6 address, but the two are allowed to be in the same network. Right?

Example:
Host Physical Interface
ens3:

  • 2a02:1748:f7df:ABC0::2/64

LXD Bridge used by containers
lxdbr0:

  • 2a02:1748:f7df:ABC0::3/64

Or would this lead to IPv6 routing issues?

Kind regards,
Raphael


The reason I am asking (I have messed around with the LXD IPv6 setup for days…):
here is my situation:

IPv6 in the container is now working for me with the following setup,
meaning the global IPv6 address inside the container can reach the www and is reachable from the www globally.

System Info:
Host: Ubuntu 20.04 LTS + LXD v4.21

lxc network show lxdbr0
config:
  ipv4.address: 10.251.186.1/24
  ipv4.dhcp: "true"
  ipv4.dhcp.ranges: 10.251.186.100-10.251.186.140
  ipv4.nat: "true"
  ipv6.address: 2a02:1748:1234:ABC0::21/64
  ipv6.dhcp: "false"
  ipv6.dhcp.stateful: "true"
  ipv6.nat: "false"
  ipv6.routing: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c1
- /1.0/instances/c2
- /1.0/instances/c3
- /1.0/instances/c4
- /1.0/instances/vsftpd
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Note: example IPs such as XXXX:XXXX:1234:ABC0 mask the real IPs for security reasons.

Guest container: Ubuntu 20.04 LTS

vim /etc/netplan/50-cloud-init.yaml

# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: true
      gateway4: 192.168.0.138
      # gateway6 is the global IP of the lxdbr0 bridge. Is there a best practice? Is it really necessary for lxdbr0 to have a global IPv6 address?
      gateway6: 2a02:1748:1234:ABC0::21
      nameservers:
        addresses:
        - 192.168.0.138
        - 1.1.1.1
        - 8.8.4.4
        - 2001:4860:4860::8844
        # choose your favorite IPv6 DNS here!!!
        search: []

PROBLEM:

ping6 from and to the container vsftpd does not work yet. One has to apply the following commands on the host:

  1. Add a route to the container:
    ip -6 route add 2a02:1748:1234:ABC0:216:3eff:fe65:806e dev lxdbr0

  2. Add neighbour solicitation proxies on the lxdbr0 bridge and the ens3 physical interface:
    ip -6 neigh add proxy 2a02:1748:1234:ABC0::11 dev lxdbr0
    ip -6 neigh add proxy 2a02:1748:1234:ABC0:216:3eff:fe65:806e dev ens3
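As an aside, container addresses ending in 216:3eff:fe65:806e look like SLAAC/EUI-64 addresses: 00:16:3e is LXD's MAC prefix, and the interface ID is the MAC with the universal/local bit flipped and ff:fe inserted. Assuming that is the case here, the derivation can be reproduced with this throwaway helper (the MAC value is an example inferred from the address, not taken from the actual container):

```shell
# Derive the EUI-64 interface ID from a MAC address:
# flip the universal/local bit of the first byte and insert ff:fe in the middle.
mac="00:16:3e:65:80:6e"   # assumed example container MAC
IFS=: read -r a b c d e f <<< "$mac"
flipped=$(printf '%02x' $(( 0x$a ^ 0x02 )))
printf '%s%s:%sff:fe%s:%s%s\n' "$flipped" "$b" "$c" "$d" "$e" "$f"
# prints 0216:3eff:fe65:806e, i.e. the ...:216:3eff:fe65:806e suffix seen above
```

This is handy when you need to predict a container's SLAAC address before it boots, e.g. for firewall or proxy-NDP entries.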

Now IPv6 is working, but the above ‘route’ and ‘neigh add proxy’ entries are gone after a reboot.
How can I make them permanent?
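One common way to make such entries survive a reboot is a oneshot systemd unit that re-runs the commands once the network is up. This is only a sketch under the assumption of a systemd-based host; the unit name is made up for illustration, and the addresses are the masked examples from above:

```shell
# Hypothetical oneshot unit that re-adds the route and NDP proxy entries at boot.
cat > /etc/systemd/system/lxd-ipv6-proxy.service <<'EOF'
[Unit]
Description=Re-add IPv6 route and NDP proxy entries for LXD containers
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip -6 route add 2a02:1748:1234:ABC0:216:3eff:fe65:806e dev lxdbr0
ExecStart=/sbin/ip -6 neigh add proxy 2a02:1748:1234:ABC0::11 dev lxdbr0
ExecStart=/sbin/ip -6 neigh add proxy 2a02:1748:1234:ABC0:216:3eff:fe65:806e dev ens3

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now lxd-ipv6-proxy.service
```

On Ubuntu, a networkd-dispatcher script would work as well; the point is simply to re-run the two ip commands after the interfaces come up.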

Or is it bad practice anyway to have the physical interface ens3 and lxdbr0 on the same IPv6 network, so that the route and neigh proxy entries are overkill?

Thanks for your help and advice in advance!

Kind regards,
Raphael

You understood correctly in your first post; that is how it is meant, as your config shows. As to the problem, I cannot answer, as it is beyond my knowledge. Wish you all the best!

Yes, it would lead to routing issues.

In the originally linked post, the external interface used an IP inside the /64 but with a /128 prefix, so that the rest of the /64 was routed to lxdbr0.

If your ISP routes the /64 to your LXD host without the need for NDP responses, then using a /128 address on the external interface and routing the rest of the /64 to lxdbr0 will work OK.
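Using the masked example prefix from this thread, that layout would look roughly like the following. This is a sketch of the idea only; the default-route gateway address is a placeholder for whatever your provider uses:

```shell
# The external interface gets a single /128 out of the delegated /64 ...
ip -6 addr add 2a02:1748:1234:ABC0::2/128 dev ens3

# ... and the rest of the /64 is handed to the LXD bridge,
# so the kernel routes the whole prefix towards the containers.
lxc network set lxdbr0 ipv6.address 2a02:1748:1234:ABC0::1/64

# Traffic out of the host still leaves via the provider's gateway
# (fe80::1 is a placeholder; use your actual upstream gateway).
ip -6 route add default via fe80::1 dev ens3
```

With this layout no address on ens3 overlaps the bridge's /64, so there is no ambiguity about which interface owns the prefix.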

However, if you need to respond to NDP requests for IPs in lxdbr0 on the external interface, then you’ll need either to add proxy NDP entries manually or to have some software respond for you (such as ndppd).
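For the ndppd route, a minimal configuration could look like this. It is a sketch using the masked example prefix; the auto rule lets ndppd decide on its own whether to answer a neighbour solicitation for an address inside the prefix:

```shell
# Sketch: have ndppd answer neighbour solicitations arriving on ens3
# for addresses in the container /64 (masked example prefix).
cat > /etc/ndppd.conf <<'EOF'
proxy ens3 {
    rule 2a02:1748:1234:ABC0::/64 {
        auto
    }
}
EOF
systemctl restart ndppd
```

This replaces the per-address `ip -6 neigh add proxy` entries and also covers new containers automatically.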

Alternatively, you can use the ‘routed’ NIC type in LXD, which automates adding the proxy NDP entries and static routes for you and doesn’t use lxdbr0.
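For reference, attaching a container via the routed NIC type looks roughly like this. A sketch only: the container name c1 matches the thread, but the device name and the chosen address are example values:

```shell
# Give container c1 a routed NIC with one IPv6 address out of the /64.
# LXD then sets up the host-side static route and proxy-NDP entry itself.
lxc config device add c1 eth0 nic \
    nictype=routed \
    parent=ens3 \
    ipv6.address=2a02:1748:1234:ABC0::100
lxc restart c1
```

The trade-off is that routed NICs carry specific addresses rather than a bridged segment, so there is no DHCP/SLAAC from lxdbr0 involved.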

Thanks to @michacassola and @tomp for your help on this IPv6 Odyssey.

In the hopefully near future I am going to write a summary blog post about my Ovirt (VPS) > LXD host > LXD container IPv6 adventure and link to it here. Host and container run Ubuntu 20.04 LTS.

Right now I can only report how far I have gotten with this trial and error. I would like to add that I am on an Ovirt datacenter with an OpnSense router, but since IPv6 traffic arrives as expected from the OpnSense interface, I assume that part is working correctly.

What I could not achieve:

  • ens3 and lxdbr0 in the same /64 network.
  • I could not get long-term stable communication when both ens3 and lxdbr0 were on the same /64 IPv6 network. Not even when ens3 was a /128 in netplan, and not even when I created a /128 route for ens3 with no other routes to ens3 present, while lxdbr0 got the /64 network. It worked for some minutes, but after a while I got error messages such as “ICMPv6 Beyond Scope of Source Address”.
  • More on this in the upcoming blog post.

What I could more or less achieve:

  • Extend the prefix on the router (OpnSense) from /64 to /63.
  • Put ens3 into one /64 network, e.g. 2a02:1748:1234:ABC0::11/64.
  • Put lxdbr0 into the other /64 network, 2a02:1748:1234:ABC1::21/64.
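As a quick sanity check on those prefixes (a throwaway calculation, nothing setup-specific): a /63 leaves exactly one free bit in the fourth hextet, so ABC0 and ABC1 are precisely the two /64 halves of one /63:

```shell
# The fourth hextets of the two /64s differ only in their lowest bit,
# so shifting that bit away must yield the same /63 prefix.
a=$(( 0xABC0 )); b=$(( 0xABC1 ))
if [ $(( a >> 1 )) -eq $(( b >> 1 )) ]; then
    echo "same /63"
fi
# prints: same /63
```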

lxc network show lxdbr0

config:
  ipv4.address: 10.251.186.1/24
  ipv4.dhcp: "true"
  ipv4.dhcp.ranges: 10.251.186.100-10.251.186.140
  ipv4.nat: "true"
  ipv6.address: 2a02:1748:1234:ABC1::21/64
  ipv6.dhcp: "false"
  ipv6.dhcp.ranges: 2a02:1748:1234:ABC1:216:3eff:fe65:8000-2a02:1748:1234:ABC1:216:3eff:fe65:81ff
  ipv6.dhcp.stateful: "true"
  ipv6.nat: "false"
  ipv6.routing: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c1
- /1.0/instances/c2
- /1.0/instances/c3
- /1.0/instances/c4
- /1.0/instances/vsftpd
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

At first glance I thought this setup works perfectly, apart from the sad fact that I had to use a /63 network for it. That is not a big deal, since our provider gives us a /60 IPv6 network, but it would have been nice to know a working solution where a /64 was sufficient. I couldn’t find one.

Why was it not perfect?

  1. I still had to manually add the proxy command after reboot:

ip -6 neigh add proxy 2a02:1748:1234:ABC1:216:3eff:fe65:806e dev ens3

  2. Even the first tests were great: pinging from inside c1 and from outside (from the router interface or the www, via Online_IPv6_ICMP_Test) worked perfectly. After about 2 hours I tried once more. Ping from c1 to the outside world still worked, and ping from the router interface to c1 worked, BUT ping from Online_IPv6_ICMP_Test to the container did not.

The reason: when debugging via ICMP, the working ping communication uses global IPv6 addresses. But somehow the broken IPv6 communication from the outside switched the neighbor solicitation request from a global IP to a link-local fe80:: IP, and that request never gets answered with a neighbor advertisement!

Working neighbor solicitation request (ICMP on ens3):
2a02:1748:1234:ABC0::10 > 2a02:1748:1234:ABC1:216:3eff:fe65:806e: ICMP6, neighbor solicitation, who has 2a02:1748:1234:ABC1:216:3eff:fe65:806e, length 32

Broken request, which never gets answered by a neighbor advertisement (ICMP on ens3):
fe80::a236:9fff:fe85:7fbf > ff02::1:ff65:806e: ICMP6, neighbor solicitation, who has 2a02:1748:1234:ABC0:216:3eff:fe65:806e, length 32

So the neigh proxy added to ens3 seems to work for global IPv6 source addresses, but it does not answer neighbour solicitation requests sent from the router interface’s link-local address fe80::a236:9fff:fe85:7fbf.

ip -6 neigh show proxy
Output:
2a02:1748:1234:ABC1:216:3eff:fe65:806e dev ens3 proxy

Could I solve this with ndppd? Or can I force the OpnSense router interface to not send solicitation requests via link-local but always from its global IP? Which sounds odd.
I guess I should just fix the Ubuntu 20.04 / LXD 4.21 host so that it answers the fe80::a236:9fff:fe85:7fbf > ff02::1:ff65:806e: ICMP6, neighbor solicitation, who has 2a02:1748:1234:ABC1:216:3eff:fe65:806e, length 32 request.

Could someone help me out with this last issue of the fe80 neighbour solicitation not getting answered?

Is this an Ubuntu 20.04 bug?
As you can see, I already added the neigh proxy to ens3. Why does it answer a neighbor solicitation if the requester has a global IPv6 address, but not if the requester uses an fe80 link-local address?

Thanks for all your help!
Kind regards,
Raphael


I am sorry; after some sleep I realized that with the /63 everything works just fine.

I misinterpreted the neighbor solicitation. Right at the moment of my tests (about 2 hours after the first ones) I saw the following solicitation request, which did not get answered.

But as you can see in the supposedly “broken” neighbor solicitation request:

fe80::a236:9fff:fe85:7fbf > ff02::1:ff65:806e: ICMP6, neighbor solicitation, who has 2a02:1748:1234:ABC0:216:3eff:fe65:806e, length 32

the router asks via its link-local fe80:: address for the old 2a02:1748:1234:ABC0:216:3eff:fe65:806e IP. This happened exactly while I was testing the new container IP, which is 2a02:1748:1234:ABC1:216:3eff:fe65:806e.

So the old 2a02:1748:1234:ABC0:216:3eff:fe65:806e request did not get answered, which is to be expected. And the reason I could not see tcpdump entries for ICMP to the new IP 2a02:1748:1234:ABC1:216:3eff:fe65:806e was that I still had a firewall rule letting ICMP requests through for the old IP, but not for the new one.

So after adding the ICMP pass rule for the new IP, everything works perfectly. The only downside is that I had to create a /63 network, as I could not achieve working routing with just a /64 on lxdbr0 and a /128 on ens3.

Thanks for all your time!
Best wishes to you all and a happy new year 2022!!! :milky_way::star2::champagne::fireworks:
