Going mad trying to configure unrouted static ipv6 for my containers

My hosting provider has assigned me an unrouted /64 IPv6 block, and despite following and attempting almost all the discussions in threads here, I have been banging my head against the wall trying to get this working for the last several weeks.

What’s weird is that if I add extra IPv6 addresses on the host, they all respond to pings from outside, but when I try to assign a /64 or /112 using an incremented publicly routed IPv6 address, the guest container loses all IPv6 connectivity to the internet. If, instead of specifying an IPv6 range, I set it to auto in lxd init, IPv6 connectivity works just fine inside the container. Why is this so hard :frowning: :cry:

I have recently been thinking of setting up IPv6 NAT by adding extra publicly routable IPv6 addresses on the host and directing them to local IPv6 addresses in the containers, but why can’t I just simply tell a container to use an IPv6 address?!

Please can you show ip -6 a and ip -6 r on the host?

I suspect that the ISP has assigned a /64 to the host’s external network, but it requires the host to respond to NDP solicitation requests for each address (like ARP with IPv4). This is why adding individual IPs works, but setting up a virtual bridge with all/part of the allocation doesn’t work.

You would likely have the same issues with IPv4; it’s just that ISPs rarely provide an entire subnet these days due to address shortages.

If that is the case then you have 2 options:

  1. Use a routed type NIC to pass specific IPv6 addresses into the container(s) statically. See How to get LXD containers get IP from the LAN with routed network
  2. Use a bridged network with the IPv6 allocation and then use ndppd to respond to NDP solicitations for that subnet. See Getting universally routable IPv6 Addresses for your Linux Containers on Ubuntu 18.04 with LXD 4.0 on a VPS. Rough sketches of both options follow below.
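
Rough sketches of both options, with a placeholder container name and an elided prefix, assuming eth0 is the host’s external interface:

# Option 1: add a routed NIC to an existing container "c1" (placeholder name)
lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv6.address=2606:#:#:a::100

# Option 2: /etc/ndppd.conf answering NDP solicitations for the whole /64 on eth0
proxy eth0 {
    rule 2606:#:#:a::/64 {
        static
    }
}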

Alternatively, the “proper” solution is to get your ISP to actually route an additional IPv6 subnet to an address in your current subnet; that way you can use it directly on your LXD bridge without needing to respond to NDP solicitations.
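
For example, if the ISP routed an extra prefix (say 2606:#:#:b::/64, a placeholder) to your host’s current address, you could use it directly on the bridge:

lxc network set lxdbr0 ipv6.address=2606:#:#:b::1/64
lxc network set lxdbr0 ipv6.nat=false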


Thank you for the response.

Following the article from option 1, I set up a profile with a static IP. Before that, during lxd init, a bridge was set up using auto. What I simply want to achieve is to get no more than a dozen containers set up with static IPv6 addresses.
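
For reference, the profile amounts to roughly the following (a sketch; the exact commands I used may have differed slightly):

lxc profile create hadi_ipv6
lxc profile device add hadi_ipv6 eth19 nic nictype=routed parent=eth0 ipv6.address=2606:#:#:a::c19
lxc profile add c19 hadi_ipv6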

The situation right now is as follows:

  1. From the host I can ping the public IPv6 of the container.
  2. lxdbr0’s own allocation includes a private IPv6 and a private IPv4 address.
  3. The container has both IPv4 and IPv6 connectivity (I suspect because of the bridge), i.e. I can ping -6 google.com successfully from inside the container.
  4. From inside the container I can ping the IPv6 of the host.
  5. The host’s sysctl has the following lines (a sketch for persisting them follows after this list):
    net.ipv6.conf.all.proxy_ndp = 1
    net.ipv6.conf.all.forwarding = 1
    net.ipv6.conf.eth0.proxy_ndp = 1
  6. On the host I have also run
    ip -6 neigh add proxy 2606:#:#:#:#:#:#:c19 dev eth0
  7. Regardless of the above, the container’s IPv6 remains unreachable from the outside world and is only reachable from within the host.
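
For item 5, the settings can be made persistent across reboots with a sysctl drop-in along these lines (a sketch; the file name is arbitrary):

# /etc/sysctl.d/90-ipv6-proxy-ndp.conf
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.proxy_ndp = 1

# reload without rebooting
sysctl --system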

The following is the outcome of running ip -6 a on the host


1: lo:  mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 state UP qlen 1000
    inet6 2606:#:#:#:#:#:#:a/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::236:b9ff:fe0c:89b/64 scope link
       valid_lft forever preferred_lft forever
3: lxdbr0:  mtu 1500 state UP qlen 1000
    inet6 fd42:2c2c:2b15:5afd::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe3e:ec35/64 scope link
       valid_lft forever preferred_lft forever
7: veth5bf13f8e@if6:  mtu 1500 state UP qlen 1000
    inet6 fe80::d4aa:cff:fe2f:f6f1/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::1/128 scope link
       valid_lft forever preferred_lft forever

The following is the outcome of running ip -6 r on the host


::1 dev lo proto kernel metric 256 pref medium
2606:#:#:#:#:#:#:c19 dev veth5bf13f8e metric 1024 pref medium
2606:#:#:a::/64 dev eth0 proto kernel metric 256 pref medium
fd42:2c2c:2b15:5afd::/64 dev lxdbr0 proto kernel metric 256 pref medium
fe80::1 dev veth5bf13f8e proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev veth5bf13f8e proto kernel metric 256 pref medium
fe80::/64 dev lxdbr0 proto kernel metric 256 pref medium
default via 2606:#:#::1 dev eth0 metric 1024 onlink pref medium

The IPv6 of the host: 2606:#:#:a::a
The IPv6 of the container: 2606:#:#:a::c19

Can you please identify the issue with this? The only time I can get an extra IPv6 address to work is by assigning a secondary IPv6 in the host’s network interfaces file; other than that, nothing seems to work.

Anyone?

Can you show the output of lxc config show <instance> --expanded, and can you show ip -6 a and ip -6 r inside the container?

The outcome of executing lxc config show c19 --expanded on the host


architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20230104_05:25)
  image.os: Debian
  image.release: bullseye
  image.serial: "20230104_05:25"
  image.type: squashfs
  image.variant: default
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
        eth0:
          dhcp4: false
          dhcp6: false
          routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
  user.user-data: |
    #cloud-config
    bootcmd:
      - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
      - systemctl restart resolvconf
  volatile.base_image: 93d985be253baa30063c16033233035408d40b424ca61a4af56c648eb5f35a29
  volatile.cloud-init.instance-id: 722e23b6-bb84-4956-a277-c488a4a70c16
  volatile.eth0.host_name: veth161c3ba0
  volatile.eth0.hwaddr: 00:11:1a:73:fd:43
  volatile.eth19.host_name: veth5bf13f8e
  volatile.eth19.hwaddr: 00:11:1a:bf:f3:fe
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: bf85f789-edb8-4219-9881-74025f28905b
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  eth19:
    ipv6.address: 2606:#:#:a::c19
    name: eth19
    nictype: routed
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- hadi_ipv6
stateful: false
description: ""

The outcome of executing ip -6 a in the container


1: lo:  mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5:  mtu 1500 state UP qlen 1000
    inet6 fd42:2c2c:2b15:5afd:216:3eff:fe73:fd43/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3025sec preferred_lft 3025sec
    inet6 fe80::216:3eff:fe73:fd43/64 scope link
       valid_lft forever preferred_lft forever
6: eth19@if7:  mtu 1500 state UP qlen 1000
    inet6 2606:#:#:a::c19/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:febf:f3fe/64 scope link
       valid_lft forever preferred_lft forever

The outcome of executing ip -6 r in the container


2606:#:#:a::c19 dev eth19 proto kernel metric 256 pref medium
fd42:2c2c:2b15:5afd::/64 dev eth0 proto ra metric 1024 expires 3409sec pref medium
fe80::/64 dev eth19 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default metric 1024 pref medium
        nexthop via fe80::1 dev eth19 weight 1
        nexthop via fe80::216:3eff:fe3e:ec35 dev eth0 weight 1

Right, so you have 2 IPv6 addresses in your container, one from lxdbr0 and one from the routed NIC.
Try disabling IPv6 on the lxdbr0 network by doing lxc network set lxdbr0 ipv6.address=none and then restarting the container.
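
For example, using the container name from earlier:

lxc network set lxdbr0 ipv6.address=none
lxc restart c19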

  1. Set ipv6.address to none for lxdbr0 on the host. lxdbr0 was previously showing a private IPv6 address; now it shows none.

  2. Stopped and started the container.

  3. The container still cannot be pinged from the outside world.

  4. Additionally, IPv6 connectivity inside the container has been lost.

Can you show ip a and ip r in the container now please?

The outcome of ip a in the container


1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
24: eth0@if25:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:73:fd:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.252.151.221/24 brd 10.252.151.255 scope global dynamic eth0
       valid_lft 2841sec preferred_lft 2841sec
    inet6 fe80::216:3eff:fe73:fd43/64 scope link
       valid_lft forever preferred_lft forever
26: eth19@if27:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:bf:f3:fe brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2606:#:#:a::c19/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:febf:f3fe/64 scope link
       valid_lft forever preferred_lft forever

The outcome of ip r in the container


default via 10.252.151.1 dev eth0 proto dhcp src 10.252.151.221 metric 1024
10.252.151.0/24 dev eth0 proto kernel scope link src 10.252.151.221
10.252.151.1 dev eth0 proto dhcp scope link src 10.252.151.221 metric 1024

Ah sorry I meant ip -6 r

That would be:

The outcome of executing ip -6 r in the container


2606:#:#:a::c19 dev eth19 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth19 proto kernel metric 256 pref medium
default via fe80::1 dev eth19 metric 1024 pref medium

Can you ping the container’s IPv6 address from the host?

  1. Yes I can ping from the host to the container.

  2. I can also ping from the container to the host.

  3. Host has full ipv6 connectivity.

  4. However, the container can only ping the host’s IPv6 address and no other IPv6 addresses.

Have you got a firewall running on the host?
What is the output of sudo ip6tables-save and sudo nft list ruleset?

Also, can you run sudo tcpdump -i eth0 icmp6 -nn on the host and then run the ping to 2001:4860:4860::8888 from the container so we can see where the packets are going?
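
For example (using lxc exec to run the ping from inside the container):

# on the host: check for firewall rules
sudo ip6tables-save
sudo nft list ruleset

# on the host: watch ICMPv6 on the external interface (leave this running)
sudo tcpdump -i eth0 icmp6 -nn

# in a second terminal: ping Google DNS from inside the container
lxc exec c19 -- ping -6 -c 4 2001:4860:4860::8888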

No firewall at the moment.


16:44:24.240772 IP6 2606:#:1:2::a > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:44:24.522907 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32
16:44:24.580179 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:44:25.307227 IP6 2606:#:1:b::a > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:44:25.604179 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:44:29.166300 IP6 2606:#:1:28::a > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:44:29.522953 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32
16:44:34.522974 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32
16:44:39.522518 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32

Pinged again:


solicitation, who has 2606:#:1::1, length 32
16:47:14.524860 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32
16:47:14.852441 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:47:15.876185 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:47:16.900249 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:47:17.924400 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:47:18.948201 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32
16:47:19.526065 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:0:2::a, length 32
16:47:19.972227 IP6 fe80::236:b9ff:fe0c:89b > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:1::1, length 32

Pinging from inside the container seems to produce these repeated requests for 2606:***1::1 on the host.

I wonder if this is a bug in your ISP’s router policy, in that it’s not responding to solicitations sent from your host’s link-local IPv6 address (fe80::236:b9ff:fe0c:89b), which it should.

I found something similar here:

https://yoursunny.com/t/2021/ndpresponder/

So you may find using GitHub - yoursunny/ndpresponder: IPv6 Neighbor Discovery Responder for KVM servers helps (or asking your ISP to change their policy).

Thank you for taking all the time to address this. I too have been suspecting it’s a provider network thing. I’ll try yoursunny’s solution, but frankly I’m now thinking that I should just put a dozen or so IPv6 addresses in the host’s interfaces file and use ip6tables to NAT them to the local IPv6 addresses of the containers.
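
That would be roughly one DNAT rule per container, something like this (a sketch only; the target here is the container’s private lxdbr0 address from earlier, and IPv6 NAT needs a reasonably recent kernel):

# DNAT an extra public IPv6 on the host to the container’s private address
ip6tables -t nat -A PREROUTING -d 2606:#:#:a::c19/128 -j DNAT --to-destination fd42:2c2c:2b15:5afd:216:3eff:fe73:fd43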


You can use a network forward, or a proxy device in NAT mode, for that; both automate the DNAT firewall rules:
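
For example, a network forward that sends everything arriving on an extra public address to the container’s lxdbr0 address would look something like this (addresses are placeholders, and lxdbr0 IPv6 would need to be enabled again):

lxc network forward create lxdbr0 2606:#:#:a::d target_address=fd42:2c2c:2b15:5afd:216:3eff:fe73:fd43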


OK, just before trying the proxy or NAT, I thought of going through the tcpdump results once more. When I pinged the container from the outside world, I could see a solicitation message for 2606:#:#:a::c19 (the container’s IPv6) in the host’s tcpdump. I realized the ip -6 neighbour add proxy entry gets nulled after every reboot, so I ran the following command again:
ip -6 neighbour add proxy 2606:a8c0:1:a::c19 dev eth0
Now there is a response, but the ping still shows as unreachable. The following is the excerpt from the tcpdump after running the add proxy command:

ip -6 neighbour add proxy 2606:#:#:a::c19 dev eth0

tcpdump -i eth0 icmp6 -nn

11:15:38.820207 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor solicitation, who has fe80::226:8800:cd55:b7c1, length 32
11:15:39.560209 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:#:2::a, length 32
11:15:39.690821 IP6 2606:#:#:21::a > ff02::1:ff00:1: ICMP6, neighbor solicitation, who has 2606:#:#::1, length 32
11:15:39.844203 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor solicitation, who has fe80::226:8800:cd55:b7c1, length 32
11:15:40.560155 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:c19: ICMP6, neighbor solicitation, who has 2606:#:#:a::c19, length 32
11:15:41.220291 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor advertisement, tgt is 2606:#:#:a::c19, length 32
11:15:41.559214 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:c19: ICMP6, neighbor solicitation, who has 2606:#:#:a::c19, length 32
11:15:42.180210 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor advertisement, tgt is 2606:#:#:a::c19, length 32
11:15:42.560148 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:c19: ICMP6, neighbor solicitation, who has 2606:#:#:a::c19, length 32
11:15:42.796234 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor advertisement, tgt is 2606:#:#:a::c19, length 32
11:15:44.560249 IP6 fe80::226:8800:cd55:b7c1 > ff02::1:ff00:a: ICMP6, neighbor solicitation, who has 2606:#:#:2::a, length 32
11:15:46.244243 IP6 fe80::236:b9ff:fe0c:89b > fe80::226:8800:cd55:b7c1: ICMP6, neighbor solicitation, who has fe80::226:8800:cd55:b7c1, length 32
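
For reference, the proxy entry can also be re-added automatically at boot from the host’s interfaces file, roughly like this (a sketch, assuming ifupdown is what manages eth0 here):

# /etc/network/interfaces (excerpt), appended to the existing eth0 inet6 stanza
iface eth0 inet6 static
    # ... existing address/gateway lines ...
    post-up ip -6 neigh add proxy 2606:#:#:a::c19 dev eth0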