Howto: LXD init 3.9, Hetzner, Single Public IP, MAC, Ubuntu 18.04

My hobby VPS mail server at Hetzner had aged, so I decided to take a noob dive into LXD and put it on my root server. I am making this post in the hope of cutting down someone else's research and trial & error time.
You might be relieved to know that we do not need to touch the Netplan configs.
Suggestions are welcome; my next step is to actually install a mail server in a container and see it running for a while without problems.
Note: with macvlan you will not be able to SSH or ping from the host to the container, or from the container to the host.

LXD 3.9 Hetzner: Single Public IP Setup with Macvlan on Ubuntu 18.04

$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=100GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML “lxd init” preseed to be printed? (yes/no) [default=no]:
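
For reference, answering yes to that last question prints the chosen answers as a preseed, which can be replayed with lxd init --preseed. A rough sketch of what it should look like for the answers above (the exact fields may differ between LXD versions):

config: {}
networks: []
storage_pools:
- name: default
  driver: zfs
  config:
    size: 100GB
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk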

Follow Simos' macvlan howto: https://blog.simos.info/configuring-public-ip-addresses-on-cloud-servers-for-lxd-containers/

Replace parent with your host's interface:
$ lxc profile create macvlan
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp4s0
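
The profile should then look roughly like this (parent being your host interface):

$ lxc profile show macvlan
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp4s0
    type: nic
name: macvlan
used_by: []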

$ lxc launch --profile default --profile macvlan ubuntu-minimal:18.04 c1
Creating c1
Starting c1

$ lxc stop c1

Obtain the MAC via Hetzner Robot and replace the placeholder MAC below.
$ lxc config device override c1 eth0 hwaddr=00:AA:BB:CC:DD:FF
$ lxc start c1

$ lxc exec c1 -- bash
$ ifconfig
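
If the minimal image does not ship ifconfig, the iproute2 equivalent shows the DHCP-pulled address just as well:

$ ip addr show eth0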


Hi!

I have not tried this with Hetzner. Did it work for you?

The issue with macvlan is that the container gets an IP address from the network (not from the host);
therefore, for this to work, you would somehow need to get Hetzner to hand the container a separate public IP address via DHCP. My understanding is that it will not work unless you pay for an additional public IP address.

However, you do not really need an extra IP address for the mail server if your mail server (in the container) will be using the same single public IP address as the host. That is, you would just need to set up port forwarding on the host so that network connections from the Internet to port 25/TCP (SMTP) at your VPS get forwarded to the mail LXD container. You would need to follow a guide on this which explains how to set up security (SMTPS, etc.).
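
A LXD proxy device is one easy way to do such forwarding. A minimal sketch, assuming the container is named mail and runs its SMTP service on port 25 (the device name smtp25 is arbitrary):

$ lxc config device add mail smtp25 proxy listen=tcp:0.0.0.0:25 connect=tcp:127.0.0.1:25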


Hello!

Yes, I paid for an extra single IP, and I knew I could forward ports, but I wanted to try it out this way.
Once the container got its Hetzner MAC, DHCP pulled its Hetzner IP.

Also, thanks for your blog posts; I tried out so many things until I went without a network during lxd init.
Less is more, I guess, in this particular case.


Hello Jimi,

volatile.ethX.hwaddr is also required for OVH and online.net.

Stale cached image updating is a great option for new images/containers; yes is the default.
Explanation from Stephane: 3.0 - image cache update?

Root/dedicated servers:
2 disks: My advice is to create a partition with a ZFS pool (block device) instead of a loop device (see the sketch below).
3 disks: Buy an OS disk (SSD or HDD) at Hetzner and use the other 2 disks dedicated for containers.
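
A minimal sketch of the 2-disk case, assuming /dev/sdb4 is a spare partition reserved for containers (alternatively, answer yes to the "existing block device" question during lxd init):

$ lxc storage create default zfs source=/dev/sdb4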

I have a 3-node cluster at Hetzner and it’s great!


Good to know about these other Hosters!

Yes, you are correct; I think I decided against the image auto-updating itself since I do not have plans to install more containers at the moment. If I understand this correctly, once the image was used to create a container, it is just using disk space in the pool, and if the need for another container with the same OS arises, I can delete the old image and download a new one. Thanks for pointing this out though.

Thanks for the advice; I am going to check if it is possible for my SB root server, as the 240GB SSD is rather pricey at 7,74 €/month. I was thinking of redoing my FDE server install, this time opting for ZFS on my RAID1 2.7TB HDDs. I would have to read up on ZFS best practice for my hardware, since I have never used it before and have no idea about disk speeds. I don't really need super disk speed, yet (;

Thank you Tom.

I use hetzner with bare metal servers.

In my situation, rather than just buying extra disks, you can re-partition your current 2x2TB using installimage during a rescue boot. I have 100GB for the Proxmox OS and the rest is empty space.

I had a stupid HW RAID card, which I had to present as 2x RAID0 so as to enable me to use ZFS.

In the Proxmox CLI, install the usual ZFS utils.

Install LXD: snap install lxd

I then fdisk /dev/sda and /dev/sdb and create two large ~1.7TB Linux partitions (or whatever size you want).
Get the partition IDs (ideally with blkid) or use /dev/sdx1 /dev/sdy1 (replace with the real disk/partition number).

Then:
zpool create zfs1 raidz partitionid1 partitionid2

lxc storage create zfs1 zfs source=zfs1
lxc profile device add default root disk path=/ pool=zfs1

Now you have a separate ZFS pool that you can do what you want with, and you also have your LXD containers on there in their own dataset.
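
If it helps, the result can be sanity-checked from both sides (pool name zfs1 as above):

zpool status zfs1
lxc storage info zfs1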

Also, regarding IPs, I'm pretty sure you can route your IP in rather than just do macvlan, which I have never used.

Create the new IP, e.g. 3.3.3.3/32.

then

Create an internet-facing bridge:
lxc network create br-outside
Plug your internet-facing container into br-outside:
lxc network attach br-outside c1 eth0 eth0

Now create an interface route:

ip route add 3.3.3.3/32 dev br-outside

Pretty sure that works, and AFAIK I don't remember enabling proxy ARP or anything like that, or there is a possibility it's already enabled in sysctl.conf.
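
If routed traffic does not flow, forwarding and proxy ARP can be enabled explicitly; a sketch, assuming the uplink interface is enp4s0 (adjust to your host):

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.enp4s0.proxy_arp=1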

I do all my routing with FRR.

Also, you can do the same thing but use a VM instead and use that as the firewall for entry and exit into your host. Currently I'm using pfSense running in a VM, with a leg on the internet-facing bridge (outside interface) and then multiple internal interfaces on bridges where containers live, which are firewalled off.

It also works with IPv6. I have a /48 IPv6 range I purchased along with an ASN. I spun up a VM in Vultr, where they allow you to peer BGP with them so you can advertise your /48 from there (also other locations if you want, assuming they allow you to peer - unfortunately Hetzner don't allow it - bad guys!).

In that case I also ran LXD inside the VM, created a bridge, used FRR for eBGP peering and then routed the public IPv6 space internally over a ZeroTier "overlay", where I can then use any public IPv6 addresses on containers or VMs scattered in different geo locations so long as they are on the same ZeroTier network… it's all pretty pointless but still nice to do as a PoC!

Cheers!
Jon.


Thanks Jon, my head spins from trying to understand what you have accomplished here; I might look into creating a pfSense container.

I needed an additional container but did not want to order another Hetzner IP/MAC as I did previously for the mail container. Let's say I spent some good time figuring out how to get a NAT bridge working without wiping everything and redoing lxd init.
The Netplan changes to create a working bridge on the host turned out to be easier than I thought, if only those disconnects hadn't been so punishing.

Maybe someone can use this info; my YAML file:

### Hetzner Online GmbH installimage
network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: no
      dhcp6: no

  bridges:
    br0:
      macaddress: de:ad:be:ef:00:00
      dhcp4: no
      dhcp6: no
      interfaces: [ enp4s0 ]
      addresses: [ 100.100.10.12/32 ]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 100.100.10.4
      nameservers:
        addresses:
          - 213.133.99.99
          - 213.133.98.98
          - 213.133.100.100
      parameters:
        stp: false
        forward-delay: 1
        hello-time: 2
        max-age: 12

Then I created a NAT-enabled network without IPv6:
$ lxc network create lxdbr0 ipv6.address=none ipv4.address=10.10.10.1/24 ipv4.nat=true
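
The result can be inspected with:

$ lxc network show lxdbr0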

More details can be found here https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/

The rest in my case was editing my bridge profile's parent to use lxdbr0, but for those in need:
Create a bridge profile as described by Simos here: How to make your LXD containers get IP addresses from your LAN using a bridge – Mi blog lah!

Either you assign this profile to an already existing container via the lxc profile assign command (see the example below), or you add the profile to the container you are about to create. In my case:
$ lxc launch -p default -p lxdbr0 yourImageVersion c2
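
For an already existing container, the assign variant would look like this (note that the container ends up with exactly the profiles listed, so include default):

$ lxc profile assign c1 default,lxdbr0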

$ lxc profile show lxdbr0

config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: lxdbr0
used_by:
- /1.0/containers/

I really need to learn more about networking. I am happy to have come this far; I never bothered much with networks and bridges before. I hope I have done this in a correct way; otherwise, please feel free to point out any mistakes. And thanks for this amazing piece of software!

PS:
I was itching to reformat to ZFS, but since I am not really going to be doing much more right now and ZoL integrated encryption is not yet an LTS reality, I am going to postpone.


Ah, are you trying to run more containers internally and port-forward to them via a single public Hetzner IP?

If so that is the most straightforward way, and is what I do for the most part.

Anything that needs inbound access from the internet has iptables rules, via iptables-persistent, forwarding inbound traffic through the LXD bridge to the specific container.
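
A sketch of such rules, assuming the public interface is enp4s0 and a web container sits at 10.10.10.20 on the LXD bridge (both placeholders):

# forward inbound port 80 to the container and allow it through FORWARD
iptables -t nat -A PREROUTING -i enp4s0 -p tcp --dport 80 -j DNAT --to-destination 10.10.10.20:80
iptables -A FORWARD -d 10.10.10.20 -p tcp --dport 80 -j ACCEPT
# then persist via iptables-persistent (netfilter-persistent save)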



right now it looks like this,

         <-----> macvlan (public ip 2) <-----> postfix stuff and fw
        /
host (br0, public ip 1) <-----> lxdbr0 <-----> other tmp test containers


Can you post the netplan config file?

@jimi How can I use additional IPs bought from Hetzner for one host in your config?
I am not talking about macvlan, but br0 to the outside internet, with containers using those additional IPs.
I don't know how to configure this with netplan :frowning:

Hi, I don't use Netplan as it restricts everything. I use ifupdown or ifupdown2 and remove Netplan with the GRUB command. The iptables rules are done using iptables-persistent, modifying the /etc/iptables/rules.v4 file.

I've not used macvlan (mentioned above) before, but from the sounds of it, it allows you to pass external network IPs to the containers in an easy way. I might experiment with that.

Can you post the specific procedure used to remove netplan and the config files from /etc/network?
I still can't find reasonable netplan instructions from Hetzner.
Regards

Hiya,

Something along these lines:


apt install ifupdown

rm -rf /etc/netplan

# add "netcfg/do_not_use_netplan=true" to the GRUB file:

nano /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="splash quiet netcfg/do_not_use_netplan=true"

sudo update-grub

reboot


Hi everyone,

What is the recommended way to assign a static IPv6 address to a Debian 10 container that is using macvlan?

The Debian 10 container uses macvlan with a separate MAC address (requested from Hetzner Robot and configured using lxc config set myct volatile.eth0.hwaddr xyz), and its static IPv4 is one of the "additional" IPs of the Hetzner EX server (Hetzner Robot -> Server -> IPs -> Order additional IPs/Nets).

The host is running Ubuntu 18.04 with LXD, installed on an EX bare-metal server at Hetzner.

Thank you in advance, K.

Please don’t set the MAC address using the volatile keys, these are meant for internal use only.

Instead use:

lxc config device set <container> <nic> hwaddr <mac address>

As for setting the IP inside the container, you should modify the network configuration files inside the container to set the IP statically (rather than the default of using DHCP).
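
For a Debian 10 container that would typically be /etc/network/interfaces. A minimal sketch with a placeholder address; fe80::1 is the usual Hetzner gateway, but verify for your subnet:

auto eth0
iface eth0 inet dhcp

# static IPv6 (replace with your actual address/prefix)
iface eth0 inet6 static
    address 2001:db8:1234::2
    netmask 64
    gateway fe80::1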

What is the risk of using the volatile keys? I have been doing this since the beginning for public IP addresses because there was no good solution. And how about VMs?

The risk is the meaning/usage of that key may change in the future with no warning.

Generally speaking you should try and only use keys that are documented for external use, e.g. https://linuxcontainers.org/lxd/docs/master/instances#nictype-bridged

I’m not sure what you mean about public IP addresses, what is the use-case for needing to specify MAC address via volatile?

At OVH or Hetzner, you can only assign a public IP address to a MAC address that you receive/generate from their control panels. It's not possible to assign the public IP without a static MAC address.