Hetzner additional IP (bridged) working

Hobbyist here. I have been running a happy LXD container setup with an additional IP via macvlan for years, and for the new server I wanted host/container communication that would also work for a VM.
As far as I understand, only a bridged profile is capable of both.

The setup below is working, but I was wondering what the current LXD way would be, and whether any problems might show up later on.

Ubuntu 20.04 Server; Netplan removed for now.

Host IP: 213.239.210.243
Gateway: 213.239.210.225
Netmask: 255.255.255.224
Broadcast: 213.239.210.255

Additional IP: 213.239.211.94
Gateway: 213.239.211.65
Netmask: 255.255.255.224
Broadcast: 213.239.211.95

I ran lxd init with all default settings, and lxdbr0 was created.
Then I manually added br0 for the additional IP.

$ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

# The loopback network interface
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet static
  address 213.239.210.243
  gateway 213.239.210.225
  netmask 255.255.255.224
  dns-nameserver 185.12.64.1 185.12.64.2 8.8.8.8 8.8.4.4

auto br0
iface br0 inet static
        address 213.239.211.65
        netmask 255.255.255.224
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        pre-up brctl addbr br0
        up ip route add 213.239.211.94/32 dev br0
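For anyone who kept Netplan instead of ifupdown, roughly the same host configuration could be expressed like this. This is only a sketch I have not tested: the file name is made up, and the empty `interfaces:` list (to create a bridge with no ports) and the link-scoped route are assumptions based on the ifupdown config above.

```yaml
# /etc/netplan/01-bridges.yaml (untested sketch)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      addresses: [213.239.210.243/27]
      gateway4: 213.239.210.225
      nameservers:
        addresses: [185.12.64.1, 185.12.64.2]
  bridges:
    br0:
      # bridge with no physical ports; LXD attaches instance NICs to it
      interfaces: []
      addresses: [213.239.211.65/27]
      routes:
        # device route for the additional IP, as in the ifupdown config
        - to: 213.239.211.94/32
          scope: link
```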

The device name below (enp5s0) needs to be adjusted in the profile, depending on the device name inside the container or VM.

$ lxc profile show bridgedprofile 
config: {}
description: Bridged networking LXD profile
devices:
  enp5s0:
    name: enp5s0
    nictype: bridged
    parent: br0
    type: nic
name: bridgedprofile
used_by:
- /1.0/instances/c1
- /1.0/instances/vm1
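For completeness, a profile like this can be built from the CLI with standard `lxc profile` commands; the profile, device, and instance names here are just the ones from my setup:

```shell
# create the profile and add the bridged NIC device
lxc profile create bridgedprofile
lxc profile device add bridgedprofile enp5s0 nic \
    nictype=bridged parent=br0 name=enp5s0

# attach it to an instance (on top of the default profile)
lxc profile add vm1 bridgedprofile
```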

Inside the VM.

$ lxc exec vm1 -- bash
root@vm1:~# cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  renderer: networkd
  ethernets:
    enp5s0:
      addresses:
        - 213.239.211.94/32
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 213.239.211.65
      nameservers:
        addresses:
          - 185.12.64.2
          - 185.12.64.1

On the host:

$ ip r
default via 213.239.210.225 dev enp2s0 onlink 
10.49.224.0/24 dev lxdbr0 proto kernel scope link src 10.49.224.1 
213.239.210.224/27 dev enp2s0 proto kernel scope link src 213.239.210.243 
213.239.211.64/27 dev br0 proto kernel scope link src 213.239.211.65 
213.239.211.94 dev br0 scope link 

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 44:8a:5b:5d:d5:32 brd ff:ff:ff:ff:ff:ff
    inet 213.239.210.243/27 brd 213.239.210.255 scope global enp2s0
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:96:e1:ba:ea:e7 brd ff:ff:ff:ff:ff:ff
    inet 213.239.211.65/27 brd 213.239.211.95 scope global br0
       valid_lft forever preferred_lft forever
4: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:78:a8:43 brd ff:ff:ff:ff:ff:ff
    inet 10.49.224.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
18: tapc987c466: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether fe:96:e1:ba:ea:e7 brd ff:ff:ff:ff:ff:ff
19: tap782b98a7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master lxdbr0 state UP group default qlen 1000
    link/ether 2a:31:4e:a3:6d:b3 brd ff:ff:ff:ff:ff:ff

Inside the VM:

root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:6c:0b:d7 brd ff:ff:ff:ff:ff:ff
    inet 213.239.211.94/32 scope global enp5s0
       valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:16:3e:df:4e:41 brd ff:ff:ff:ff:ff:ff

I’m surprised this works, but then Hetzner tends to use device routing (i.e. not requiring ARP resolution), so perhaps that is why it works.

However, there are still problems with the way you have configured br0.

If this is the info that Hetzner gave you:

Additional IP: 213.239.211.94
Gateway: 213.239.211.65
Netmask: 255.255.255.224
Broadcast: 213.239.211.95

Then you should never configure the gateway IP on your own server(s). That address belongs to Hetzner’s router, and the router will not be reachable if you configure its IP on your own machine.

Additionally, you imply that you have been allocated only a single additional IP, 213.239.211.94. Yet the way you have configured br0 means you won’t be able to reach any of the other IPs in that IP’s wider subnet (probably owned by other Hetzner customers), which may cause strange connectivity issues if you ever need to communicate with them.

My suggestion would be to remove the br0 interface entirely and use the routed NIC type, which allows you to pass individual external IPs into an instance.
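For example, a routed NIC for your additional IP could look something like this. The device name `eth0` is just an illustration, and whether you put this in a profile or per-instance config is up to you:

```shell
# pass the external IP straight into the VM; LXD sets up the
# host-side /32 route and proxy ARP for you
lxc config device add vm1 eth0 nic nictype=routed \
    parent=enp2s0 ipv4.address=213.239.211.94
```

With routed, no br0 is needed at all, and the host keeps a single /32 route toward the instance rather than claiming Hetzner’s gateway address.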

See How to get LXD containers get IP from the LAN with routed network