Hetzner server setup with public IPv6 addresses

I have purchased a server from Hetzner, and I am trying to set up LXD to use the IPv6 block that they gave me.

My understanding is that:

  1. I can use the free public IPv6 addresses they gave me for the containers without having to buy any IPv4 addresses.
  2. I can do this without any complicated routed or bridged setup if I use the IPv6 addresses: I simply enter the IPv6 subnet in CIDR notation during init, and LXD will handle the rest for me.

Here is how I set up LXD:

root@baremetal01 ~ # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=30GB]: 100GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: cafe1:1234:6789:abcd::2/64
Would you like LXD to NAT IPv6 traffic on your bridge? [default=yes]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
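
(I gather the same bridge settings can also be adjusted after init without re-running it, e.g.:)

# Presumably equivalent to the init answers above (redacted placeholder address):
lxc network set lxdbr0 ipv6.address cafe1:1234:6789:abcd::2/64
lxc network set lxdbr0 ipv6.nat true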

I created a container

root@baremetal01 ~ # lxc launch ubuntu:20.04 c1
Creating c1
Starting c1                                 
root@baremetal01 ~ # lxc list  
+------+---------+----------------------+------------------------------------------------+-----------+-----------+
| NAME |  STATE  |         IPV4         |                    IPV6                        |   TYPE    | SNAPSHOTS |
+------+---------+----------------------+------------------------------------------------+-----------+-----------+
| c1   | RUNNING | 10.56.132.142 (eth0) | cafe1:1234:6789:abcd:216:3eff:fe1c:6824 (eth0) | CONTAINER | 0         |
+------+---------+----------------------+------------------------------------------------+-----------+-----------+

From inside the container

root@c1:~# ping www.google.com
PING www.google.com(arn11s04-in-x04.1e100.net (2a00:1450:400f:80b::2004)) 56 data bytes


From home

jimbo ~
> ping cafe1:1234:6789:abcd:216:3eff:fe1c:6824
ping: cannot resolve cafe1:1234:6789:abcd:216:3eff:fe1c:6824: Unknown host

Any help would be appreciated, thanks.

I think you need to put IPv6 addresses in square brackets, like so:

ping [cafe1:1234:6789:abcd:216:3eff:fe1c:6824]

Does your home machine have an interface with a valid, routable IPv6 address?

This whole IPv6 thing is new to me. If I go to whatismyip, it shows an IPv6 address.
I tried putting it in brackets and there's no difference:

ping [cafe1:1234:6789:abcd:216:3eff:fe1c:6824]
ping: cannot resolve [cafe1:1234:6789:abcd:216:3eff:fe1c:6824]: Unknown host

Try

ping6 [cafe1:1234:6789:abcd:216:3eff:fe1c:6824]

and if that doesn’t work, try it without the square brackets.
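
For example, with the redacted address from above (modern Linux iputils also accepts ping -6; macOS ships a separate ping6):

# Legacy IPv6-only binary:
ping6 -c 3 cafe1:1234:6789:abcd:216:3eff:fe1c:6824
# Equivalent on recent Linux:
ping -6 -c 3 cafe1:1234:6789:abcd:216:3eff:fe1c:6824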

Please can you show ip a and ip r on the LXD host and inside the container?

Here you go. I replaced the public IP addresses with 123 and abc, so if something does not make sense let me know.

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp41s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a8:a1:59:8b:35:a5 brd ff:ff:ff:ff:ff:ff
    inet 123.123.123.72/32 scope global enp41s0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:abcd:abcd:abcd/64 scope link 
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:c4:57:41 brd ff:ff:ff:ff:ff:ff
    inet 10.56.132.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:abcd:abcd:abcd/64 scope link 
       valid_lft forever preferred_lft forever
7: veth15f0dc10@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 1a:0c:d1:ae:55:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
$ ip r
default via 123.123.123.72 dev enp41s0 proto static onlink 
10.56.132.0/24 dev lxdbr0 proto kernel scope link src 10.56.132.1 

The Additional IP addresses docs talk about forwarding IPv6 addresses. In this case I have not bought additional IP addresses; they gave me one IPv4 address and a block of IPv6 addresses.

Ah, can you also provide ip -6 r from the host and container please?

Although I think I can already see the issue. You cannot have 2a01:abcd:abcd:abcd::2/64 defined on both enp41s0 and lxdbr0, as that will create two routes for 2a01:abcd:abcd:abcd::/64: one going out of enp41s0 and the other going out of lxdbr0. When you ping an IP in the subnet, which interface will the host use? (Hint: there’s no happy answer here :))

This has come up in the past and you’ve got a couple of options:

  1. Remove the IPs from enp41s0 and move them to an unmanaged bridge such as br0 (e.g. using Netplan | Backend-agnostic network configuration in YAML) and then get your containers to attach directly to the external network using lxc config device add <instance> eth0 nic nictype=bridged parent=br0. This will then rely on the external network’s DHCP/SLAAC and DNS services (if they exist). It will also mean that each instance will get its own MAC address, which may be restricted by Hetzner’s network.

  2. Use a routed approach, as it sounds like you have the whole /64 routed to your LXD host directly without the need for NDP proxying. You could just take a single IP from the /64 subnet and assign it to enp41s0 with a /128 subnet (so it doesn’t add any routes to the host). Then pick a different IP for the lxdbr0 interface and use the /64 subnet there. This way you’ll only have one /64 route on the LXD host for your subnet (going down lxdbr0) and the host will still respond on its own IP on enp41s0. LXD will then provide DHCP/SLAAC and DNS services for lxdbr0, as it is solely responsible for the subnet, and all packets leaving the host will use the host’s external interface MAC address. There’s a rough sketch of this below.
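
A rough command-level sketch of option 2, using the redacted prefix from your ip a output (adjust to your real /64; the NAT line is my assumption, as a publicly routed subnet shouldn’t need NAT):

# Host keeps a single address from the block as a /128 on the uplink
# (a /128 adds no /64 subnet route, so lxdbr0 stays the only path to the block):
ip -6 addr add 2a01:abcd:abcd:abcd::1/128 dev enp41s0
# The LXD bridge owns the whole /64 and serves SLAAC/DHCPv6 and DNS on it:
lxc network set lxdbr0 ipv6.address 2a01:abcd:abcd:abcd::2/64
# Assumption: with a routed public /64, NAT would normally be turned off:
lxc network set lxdbr0 ipv6.nat false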

$  ip -6 r
::1 dev lo proto kernel metric 256 pref medium
2a01:4f9:abcd:abcd::/64 dev enp41s0 proto kernel metric 256 pref medium
2a01:4f9:abcd:abcd::/64 dev lxdbr0 proto kernel metric 256 pref medium
fe80::/64 dev enp41s0 proto kernel metric 256 pref medium
fe80::/64 dev lxdbr0 proto kernel metric 256 pref medium
default via fe80::1 dev enp41s0 proto static metric 1024 pref medium

Yep as expected:

2a01:4f9:abcd:abcd::/64 dev enp41s0 proto kernel metric 256 pref medium
2a01:4f9:abcd:abcd::/64 dev lxdbr0 proto kernel metric 256 pref medium

Now that subnet is unreachable, or at best unpredictably reachable.
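
You can ask the kernel which of the two routes it would pick for a given destination:

# Which route wins for an arbitrary address inside the /64?
ip -6 route get 2a01:4f9:abcd:abcd::100
# With two equal-metric /64 routes the choice is effectively arbitrary,
# so traffic for the containers may leave via enp41s0 instead of lxdbr0.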

I had asked Hetzner for additional MAC addresses (as per their docs) for the IPv6 addresses, and they told me: “The IPv6 subnet must use the MAC address of the primary IP address.”

It sounds to me like I am trying to set up option 2, so when you say ‘you could just take a single IP from the /64 subnet and assign it to enp41s0 with a /128 subnet (so it doesn’t add any routes to the host)’, do you mean adjusting the netplan below so the IPv6 address is written with /128?

### Hetzner Online GmbH installimage
network:
  version: 2
  renderer: networkd
  ethernets:
    enp41s0:
      addresses:
        - 123.123.123.72/32
        - 2a01:abcd:abcd:abcd::2/64
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 123.123.123.65
      gateway6: fe80::1
      nameservers:
        addresses:
          - 213.133.99.99
          - 213.133.98.98
          - 213.133.100.100
          - 2a01:4f8:0:1::add:9898
          - 2a01:4f8:0:1::add:1010
          - 2a01:4f8:0:1::add:9999
$ lxc network show lxdbr0
config:
  ipv4.address: 10.56.132.1/24
  ipv4.nat: "true"
  ipv6.address: 2a01:abcd:abcd:abcd::2/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/c1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Yeah so enp41s0 could have an address of 2a01:abcd:abcd:abcd::1/128 and then lxdbr0 could have an address of 2a01:abcd:abcd:abcd::2/64 (as it does now).

You already have the default gateway set to fe80::1 statically, so it won’t affect how that is reached.

Your ip -6 r output should then only show one line for 2a01:abcd:abcd:abcd::/64.

There’s a certain symmetry with your IPv4 config too, as you’ve already got the equivalent setup for IPv4 using a /32 (single IP) and an out-of-subnet on-link default route.
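
Non-persistently, that’s something like the following (redacted addresses; the persistent fix is the same one-line change in your netplan addresses list):

# Remove the conflicting /64 from the uplink and re-add the host's IP as a /128:
ip -6 addr del 2a01:abcd:abcd:abcd::2/64 dev enp41s0
ip -6 addr add 2a01:abcd:abcd:abcd::1/128 dev enp41s0
# In netplan the enp41s0 addresses entry would become:
#   - 2a01:abcd:abcd:abcd::1/128
# Verify that only the lxdbr0 route remains for the /64:
ip -6 r | grep 2a01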

I did that and now I get internet inside the container, but I still can’t reach the container from the outside.

ip -6 r
::1 dev lo proto kernel metric 256 pref medium
2a01:abcd:abcd:abcd::2 dev enp41s0 proto kernel metric 256 pref medium
2a01:abcd:abcd:abcd::/64 dev lxdbr0 proto kernel metric 256 pref medium
fe80::/64 dev enp41s0 proto kernel metric 256 pref medium
fe80::/64 dev lxdbr0 proto kernel metric 256 pref medium
default via fe80::1 dev enp41s0 proto static metric 1024 pref medium

Can you show the output of ip a now, and please confirm you can ping the container’s IPv6 address from the LXD host.

Please also send me in PM the actual addresses and subnets for clarity.
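
For example:

# From the LXD host, grab the container's current IPv6 address:
lxc list c1
# ...then confirm the host itself can reach it:
ping6 -c 3 <container IPv6 from the list above>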

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp41s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a8:a1:59:8b:35:a5 brd ff:ff:ff:ff:ff:ff
    inet 123.123.123.72/32 scope global enp41s0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:abcd:abcd:abcd/64 scope link 
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:c4:57:41 brd ff:ff:ff:ff:ff:ff
    inet 10.56.132.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:abcd:abcd:abcd/64 scope link 
       valid_lft forever preferred_lft forever
5: veth9d5b817d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 42:a3:44:6b:75:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0

You still appear to have the same IP bound to both interfaces (albeit with different subnets now):

2a01:abcd:abcd:abcd::2

Any reason you are doing this and not using separate IPs? It feels like it could be the cause of the problem now.
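
A quick way to spot it:

# Show interface headers plus any address from the redacted /64:
ip -6 a | grep -E '^[0-9]+:|2a01:abcd:abcd:abcd'
# The same ::2 should not show up under both enp41s0 and lxdbr0.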

I thought I was using separate IPs; I’m still trying to get my head around IPv6.

I changed the netplan IP address to 2a01:abcd:abcd:abcd:0000:0000:0000:0001/128.
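
For anyone following along, the resulting (redacted) layout is:

# enp41s0: only the host's own address, as a /128
#   inet6 2a01:abcd:abcd:abcd::1/128
# lxdbr0: a different address, owning the routed /64
#   inet6 2a01:abcd:abcd:abcd::2/64
ip -6 a | grep 'inet6 2a01'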

Holy IPv6 - it works!!! This is awesome.

Thank you @tomp :pray:

Notes:

  • Using ping6, I can now ping the IP address; the normal ping command does not work. Thanks @pajot
  • I can access the Apache server installed in the container by putting brackets around the IPv6 address: http://[ipv6] (example below)
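
For example (hypothetical placeholder address; -g just stops older curl versions from treating the brackets as a glob pattern):

curl -g 'http://[<container IPv6>]/'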