Hetzner Public IPv6 addresses + Additional Subnet

Current Setup:

  • Hetzner Dedicated Server → Virtual Machine (KVM) → LXD
  • Single public IPv4 Address: 123.123.123.72
  • Public IPv6 Block: 2a01:abcd:abcd:abcd::2/64
  • Additional IPv4 Subnet: 222.222.222.104/29

My objective is to have each project in its own VM with LXD set up inside; each container will be allocated a public IPv6 address. Ideally each VM would have its own block, e.g. 2a01:abcd:abcd:abcd::prefix:2, to prevent collisions, I guess.

Previously I was able to get IPv6 addresses with macvlan, but I have since bought the additional IPv4 subnet, and to set that up I had to configure a bridge; now macvlan no longer seems to work.

Currently I get no internet in the containers if I enable IPv6 on the LXD network.
From inside the VM I can ping the LXC containers on their private IPv6 addresses, but I can't reach any other IPv6 address.

me@project1~$ ping6 google.com
ping6: connect: Network is unreachable

I have enabled forwarding so the host can act as a gateway.

$ vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Host /etc/netplan/01-netcfg.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    enp41s0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces:
        - enp41s0
      dhcp4: no
      dhcp6: no
      addresses:
        - 123.123.123.72/32
        - 2a01:abcd:abcd:abcd::2/128
      routes:
        - to: 0.0.0.0/0
          via: 123.123.123.65
          on-link: true
        - to: 222.222.222.104/29
          scope: link
        - to: "::/0"
          via: "fe80::1"
          on-link: true
      nameservers:
        addresses:
          - 213.133.100.100
          - 213.133.98.98
          - 213.133.99.99
          - 2a01:4f8:0:1::add:9898
          - 2a01:4f8:0:1::add:1010
          - 2a01:4f8:0:1::add:9999

Guest /etc/netplan/00-installer-config.yaml

network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 222.222.222.111/29
        - 2a01:abcd:abcd:abcd::111/128
      gateway4: 123.123.123.72
      gateway6: 2a01:abcd:abcd:abcd::2
      nameservers:
        addresses:
          - 213.133.100.100
          - 213.133.98.98
          - 213.133.99.99
          - 2a01:4f8:0:1::add:9898
          - 2a01:4f8:0:1::add:1010
          - 2a01:4f8:0:1::add:9999
      routes:
        - to: 123.123.123.72/32
          via: 0.0.0.0
          scope: link
        - to: "2a01:abcd:abcd:abcd::2/128"
          via: "::/0"
          scope: link

It’s taken me days to get this far, but I need help. Thanks in advance.

If you’re wanting to use automatically assigned IPv6 addresses via SLAAC in your containers then you’re going to need a /64 subnet for each VM. Can you request multiple /64s routed to your server, or even better a /56 or /48 subnet that you can then subdivide as you need?

Otherwise you could try using smaller subnets and then use stateful DHCPv6 on the lxdbr0 (lxc network set lxdbr0 ipv6.dhcp.stateful=true) inside the VM for containers, or manual static config of course.

From what I am aware, I can’t order extra IPv6 addresses.

So I guess the next step is to forget the free public IPv6 addresses, and get IPv6 working inside the VM and containers, which is probably something to do with the host netplan, maybe moving the IPv6 traffic out of the bridge. Does that sound correct?

Why is this? macvlan should have worked fine for ipv4 the same as bridge?

Before, IIRC, you were using an internal lxdbr0 with routing, and then macvlan inside the VM.

Hetzner has MAC address restrictions, I believe, so you need to understand them and make sure you’re not using a MAC address that isn’t allowed on their network.

IIRC, using bridge or macvlan onto the external interface isn’t going to work, as each VM will then expose its MAC address onto the external network.

I bought additional IPv4 addresses, and the only way I could use them was to set up a bridge. Without the bridge it was really easy, but then I can’t have separate IPv4 addresses for projects.

Something is fishy there though; there’s no reason a bridge would work and macvlan wouldn’t (both expose MAC addresses onto the external network), which suggests there’s some other difference you’re not aware of. I believe there is an interface in the Hetzner control panel where you can associate IPv4s with MAC addresses. Perhaps there is an equivalent for IPv6, or if not, perhaps that’s the issue.

So IPv4 works in the VM guest on the bridge, but IPv6 doesn’t?

Can you ping the VM guest’s IPv6 address from the host, and vice versa?

If I ping6 the VM from the host, I get Destination unreachable: Address unreachable, and if I ping the host from the VM, I get the same Destination unreachable: Address unreachable.

If I do this from the VM:

ping6 google.com
ping6: connect: Network is unreachable

As I bought a block of IPs, this is not the case. Basically, from what I understand, if I had bought individual IPs I would get a different MAC address for each; as I bought a block, I have to use the MAC address of my host, if that makes sense.

FWIW, in a somewhat similar setup (albeit with a different ISP), and tens of containers:

Each container has a single public IPv6 (using routed).
A single extra container runs HAProxy, and directs inbound HTTP and HTTPS traffic from IPv4 and IPv6 to the individual containers. It also runs Tayga (NAT64), so that the IPv6-only containers have outbound IPv4 connectivity (for things like package upgrades).

Originally I was able to get IPv6 traffic, but I needed extra IPv4 addresses for each VM (each has its own LXD server, as they are different projects). I finally got the new addresses working in the VMs, but IPv6 traffic stopped once I configured the bridge. As I am not getting IPv6 traffic on the LXD host, I assume the problem is my bridge, but I’ve spent days trying different combinations and couldn’t fix it.

I would be quite happy with every container having a public IPv6 address and a private IPv4 address, but as these are business services, my understanding is that without a public IPv4 address for web traffic, some people won’t be able to access them. IPv6 is new to me, so I am still getting my head around it.

So when you say smaller subnets, do you mean dividing up the 2a01:abcd:abcd:abcd::2/64 that they gave me into:

  • 2a01:abcd:abcd:abcd:1234:2/64
  • 2a01:abcd:abcd:abcd:2345:2/64

I have been told that if I want to use both the IPv6 block and the additional IPv4 subnet block, I have to use a routed setup; I can’t use bridged.

Yep that makes sense and is expected as that is how the /64 IPv6 is working too.

The thing your original post is missing is how the VMs are connected (if at all) to the br0 or enp41s0 interfaces.

Assuming that Hetzner also routes the /29 IPv4 block directly to your machine like they do with the /64 block (i.e. no ARP responses are required from your machine in order for packets destined for those addresses to reach your machine).

Then I would do the following:

This approach uses a private bridge, and relies on the assumption that packets for your two subnets are routed to your machine’s external interface, meaning the host machine doesn’t need to respond to ARP/NDP requests for each IP. By using a routed approach, all external packets will use the MAC address of your host machine’s interface.

  1. Remove br0 and just configure the host machine with the 123.123.123.72/32 and 2a01:abcd:abcd:abcd::FFFF/128 addresses (I picked FFFF, as it is the last IP in the range, which is sometimes used as the gateway IP, and it reduces the chance of a later sub-subnet allocation overlapping this address). Also remove any custom configuration you added for the 222.222.222.104/29 subnet, such as in the routes part of netplan.
  2. Ensure IP forwarding for IPv4 and IPv6 is enabled (for all interfaces and globally) using sysctl net.ipv4.conf.all.forwarding=1 and sysctl net.ipv6.conf.all.forwarding=1, and ensure that forwarding packets is allowed in your host’s firewall. If LXD on the host machine were used to create the bridge, it would set these sysctls for you.
  3. Re-create the br0 bridge (without connecting it to enp41s0) and assign it the IPs 222.222.222.105/29 and 2a01:abcd:abcd:abcd::1/64. This will create routes on the host machine for those subnets that point towards the bridge interface.
  4. Connect each VM to br0.
  5. Inside each VM configure an IPv4 address in the 222.222.222.104/29 subnet and use the br0 IPv4 address for the gateway.
  6. Inside each VM configure an arbitrary IPv6 address in the 2a01:abcd:abcd:abcd::/64 subnet (you may change this later; it is just to prove connectivity works) and use the br0 IPv6 address for the gateway.

Stop here and check IPv4 and IPv6 connectivity is working as needed to/from the VM.
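Steps 5 and 6 could look something like this inside a guest, as an untested netplan sketch (the .106 and ::106 addresses are arbitrary examples; the gateways are the br0 addresses from step 3):

```
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 222.222.222.106/29          # any free IP in the /29 (example)
        - 2a01:abcd:abcd:abcd::106/64 # arbitrary IP in the /64 (example)
      routes:
        - to: 0.0.0.0/0
          via: 222.222.222.105        # br0's IPv4 address from step 3
        - to: "::/0"
          via: "2a01:abcd:abcd:abcd::1" # br0's IPv6 address from step 3
```

Both gateways sit on the guest's own subnets, so no on-link tricks are needed here.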

On how to subdivide the /64 subnet once you’ve confirmed that approach is working, see Hetzner Public IPv6 addresses + Additional Subnet - #16 by tomp

Thanks for taking the time to explain. I gave up in the end on getting it working in Ubuntu. I got the setup working on Debian, but did not know how to translate it to netplan. I can provide the configuration files so somebody can translate them for the benefit of the LXD community.

I was running into major performance issues and strange LXD errors on both hosts using Ubuntu and Debian, so I had to ditch them and set up the host with Rocky Linux. Now I have an LXD server on Ubuntu installed inside a VM on the Rocky Linux host, and everything seems fine.

I believe the performance issue is something to do with Debian/Ubuntu. When I ran cat /proc/mdstat:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

md1 : active raid1 sda2[0] sdb2[1]
      523264 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      1919301696 blocks super 1.2 [2/2] [UU]
      bitmap: 4/15 pages [16KB], 65536KB chunk

md0 : active (auto-read-only) raid1 sda1[0] sdb1[1]
      33520640 blocks super 1.2 [2/2] [UU]
        resync=PENDING

unused devices: <none>

Again, I have documented the config for a routed setup using Debian, so if somebody wants to translate that to Ubuntu netplan, I think that could be good.

For subdividing the /64 IPv6 subnet into smaller subnets for use with containers inside each VM, I suggest you use a /112 subnet. This allows 65k containers per VM, and has the nice property of being on one of the nibble boundaries before the last colon:

E.g.

2402:9400:0000:0000:0000:0000:0000:0001
XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX
      ||| |||| |||| |||| |||| |||| ||||
      ||| |||| |||| |||| |||| |||| |||128
      ||| |||| |||| |||| |||| |||| ||124
      ||| |||| |||| |||| |||| |||| |120
      ||| |||| |||| |||| |||| |||| 116
      ||| |||| |||| |||| |||| |||112

Source: IPv6 Subnet Cheat Sheet and IPv6 Cheat Sheet Reference | Broadcast | Crucial
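As a quick sanity check on those numbers (plain shell arithmetic, nothing environment-specific):

```shell
# A /112 leaves 128 - 112 = 16 host bits, i.e. 65536 addresses per subnet,
# and the parent /64 contains 2^(112-64) = 2^48 such /112 subnets.
hosts_per_112=$(( 1 << (128 - 112) ))
subnets_in_64=$(( 1 << (112 - 64) ))
echo "$hosts_per_112 addresses per /112, $subnets_in_64 /112s in the /64"
```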

Again, we’ll use the routed approach inside the VM so that containers cannot hijack address space outside of their allocated /112 subnet, and so that each LXD inside a VM can control its own automatic allocation using stateful DHCPv6.

For the first VM I allocate the /112 subnet ipv6.address=2a01:abcd:abcd:abcd:0000:0000:0001::/112, which is the first one after the 0::/112 subnet that we are using for the host’s /128 address.
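A sketch of that allocation scheme (hypothetical numbering: VM n takes the /112 whose seventh hextet is n, written here in compressed form):

```shell
# ::0:0/112 stays reserved for the host's /128 address; VM n then gets
# 2a01:abcd:abcd:abcd::n:0/112, with n rendered as a hex hextet by %x.
for vm in 1 2 3; do
  printf '2a01:abcd:abcd:abcd::%x:0/112\n' "$vm"
done
```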

Inside a VM, let’s do the following:

lxc network create lxdbr0 \
    ipv6.address=2a01:abcd:abcd:abcd:0000:0000:0001:0001/112 \
    ipv6.dhcp.stateful=true

Now on the main host we need to add a static route for each VM’s /112 subnet pointing at the VM’s primary IPv6 address, so that NDP isn’t required for the /112 subnet (this must be made persistent later):

sudo ip -6 r add 2a01:abcd:abcd:abcd:0000:0000:0001::/112 via <VM IPv6 address> dev br0
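One way to persist that route, assuming the host still uses netplan with systemd-networkd, is to add it under the br0 stanza; this is an untested sketch, and the via address below is a placeholder for the VM's primary IPv6:

```
# added under the host's br0 definition in the netplan config
      routes:
        - to: "2a01:abcd:abcd:abcd::1:0/112"
          via: "2a01:abcd:abcd:abcd::101"   # placeholder: the VM's primary IPv6
```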

Now check we can ping the VM’s lxdbr0 IPv6 address 2a01:abcd:abcd:abcd:0000:0000:0001:0001 from the host (and externally).

Now inside the VM you can launch an Ubuntu Focal container using the VM’s lxdbr0, and it should use stateful DHCPv6 to request an IP in the /112 subnet that is globally reachable.

I’ve tested this on my home setup and it works perfectly.


Please post it here for future reference.

Hetzner Routed (brouter) Setup

I bought a dedicated server from Hetzner; they gave me a free IPv4 address and an IPv6 /64 block. I then bought an additional IPv4 /29 block. These all work through the MAC address of the server itself, with no virtual MAC addresses, contrary to what many online guides suggest.

123.123.123 is the IP range of the free IPv4 address, and 222.222.222 is the additional subnet block.

It would be really awesome if somebody could translate the host network config, which is for Debian, into Ubuntu netplan. This has been tested on Debian a few times.

$ sudo apt install bridge-utils
$ sudo cp /etc/network/interfaces /etc/network/interfaces.backup

IMPORTANT: check the interface name, as different servers might use a different one, e.g. enp41s0.

Having problems? The first and last addresses in the subnet are not available, e.g. .104 and .111.
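The arithmetic behind that warning, as a plain-shell sketch:

```shell
# A /29 spans 2^(32-29) = 8 addresses; the first (network) and the
# last (broadcast) are unusable, leaving .105-.110 for hosts.
base=104
size=$(( 1 << (32 - 29) ))
network="222.222.222.$base"
first_usable="222.222.222.$(( base + 1 ))"
last_usable="222.222.222.$(( base + size - 2 ))"
broadcast="222.222.222.$(( base + size - 1 ))"
echo "network=$network usable=$first_usable-$last_usable broadcast=$broadcast"
```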

$ sudo vi /etc/network/interfaces
# Hetzner Network Configuration Hostsystem Routed
# Version 2021062601
auto lo
iface lo inet loopback

auto enp41s0
iface enp41s0 inet static
  address 123.123.123.72
  netmask 255.255.255.192
  pointopoint 123.123.123.65
  gateway 123.123.123.65

iface enp41s0 inet6 static
  address 2a01:abcd:abcd:abcd::2
  netmask 128
  gateway fe80::1
  up sysctl -p

# Subnet
auto br0
iface br0 inet static
  address 222.222.222.105
  netmask 29
  bridge_ports none
  bridge_stp off
  bridge_fd 0

iface br0 inet6 static
  address 2a01:abcd:abcd:abcd::3
  netmask 64
  up ip -6 route add 2a01:abcd:abcd:abcd::/64 dev br0
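Since the thread asks for a netplan translation, here is one possible, untested sketch of an equivalent Ubuntu netplan config. Assumptions: the systemd-networkd renderer, the same enp41s0 interface name, and the .105 br0 address shown in the ip a output further down; verify carefully before rebooting a remote host.

```
# /etc/netplan/01-netcfg.yaml (untested sketch translating the Debian
# /etc/network/interfaces config above)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp41s0:
      addresses:
        - 123.123.123.72/32
        - 2a01:abcd:abcd:abcd::2/128
      routes:
        - to: 0.0.0.0/0
          via: 123.123.123.65
          on-link: true
        - to: "::/0"
          via: "fe80::1"
          on-link: true
  bridges:
    br0:
      interfaces: []
      parameters:
        stp: false
        forward-delay: 0
      addresses:
        - 222.222.222.105/29
        - 2a01:abcd:abcd:abcd::3/64
```

Assigning ::3/64 to br0 should create the on-link /64 route that the Debian config adds manually with ip -6 route add.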

Making mistakes can cause RTNETLINK File exists errors and other problems, so you need to be careful and try not to get locked out: sudo ip addr flush dev enp41s0 && sudo ip addr flush dev br0 && sudo ifup enp41s0

Restart networking

$ sudo systemctl restart networking

IP forwarding needs to be set up:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo sysctl -w net.ipv6.conf.all.forwarding=1

Also edit /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.enp41s0.send_redirects=0
net.ipv6.conf.all.forwarding=1

Note: the bridge will show as down until you create a VM that uses it.

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp41s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a8:a1:59:8b:35:a5 brd ff:ff:ff:ff:ff:ff
    inet 123.123.123.72 peer 123.123.123.65/32 brd 123.123.123.127 scope global enp41s0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::aaa1:59ff:fe8b:35a5/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether da:c2:0a:5a:24:de brd ff:ff:ff:ff:ff:ff
    inet 222.222.222.105/29 brd 222.222.222.111 scope global br0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::d8c2:aff:fe5a:24de/64 scope link
       valid_lft forever preferred_lft forever

Note: Guest setups are specific to this network configuration.

This is the network configuration for the GUEST:

network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 222.222.222.110/29
        - 2a01:abcd:abcd:abcd::110/64
      gateway4: 222.222.222.105
      gateway6: 2a01:abcd:abcd:abcd::3
      nameservers:
        addresses:
          - 213.133.100.100
          - 213.133.98.98
          - 213.133.99.99
          - 2a01:4f8:0:1::add:9898
          - 2a01:4f8:0:1::add:1010
          - 2a01:4f8:0:1::add:9999