Howto: LXD 3.9 init, Hetzner, Single Public IP, MAC, Ubuntu 18.04


#1

My hobby mail server VPS at Hetzner was getting old, so I decided to take the noob dive into LXD and put it on my root server. I'm making this post in the hope of cutting down someone else's research and trial-and-error time.
You might be relieved to know that we do not need to touch the Netplan configs.
Suggestions are welcome; my next step is to actually install a mail server in a container and see it run for a while without problems.
Note: with macvlan you will not be able to SSH or ping from the host to the container, or from the container to the host.

LXD 3.9 Hetzner: Single Public IP Setup with Macvlan on Ubuntu 18.04

$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=100GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML “lxd init” preseed to be printed? (yes/no) [default=no]:

Follow Simos Macvlan howto https://blog.simos.info/configuring-public-ip-addresses-on-cloud-servers-for-lxd-containers/

Replace parent with your host's interface:
$ lxc profile create macvlan
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp4s0

$ lxc launch --profile default --profile macvlan ubuntu-minimal:18.04 c1
Creating c1
Starting c1

$ lxc stop c1

Obtain the MAC via the Hetzner Robot and replace the placeholder MAC below.
$ lxc config set c1 volatile.eth0.hwaddr 00:AA:BB:CC:DD:FF
$ lxc start c1

$ lxc exec c1 -- bash
$ ifconfig
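Note that minimal images may not ship net-tools, so ifconfig can be missing; the ip command is a safe alternative for checking that DHCP handed the container its Hetzner address:

```shell
# inside the container: confirm the NIC picked up the Hetzner IP via DHCP
ip addr show eth0
# and that the default route points at Hetzner's gateway
ip route show
```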


#2

Hi!

I have not tried this with Hetzner. Did it work for you?

The issue with macvlan is that the container gets an IP address from the network (not from the host).
For this to work, you would therefore need Hetzner to hand out a separate public IP address to the container via DHCP. My understanding is that it will not work unless you pay for an additional public IP address.

However, you do not really need an extra IP address for the mail server if your mail server (in the container) will be using the same single public IP address as the host. That is, you would just need to set up port forwarding on the host so that network connections from the Internet to port 25/TCP (SMTP) at your VPS get forwarded to the mail LXD container. You would need to follow a guide on this, one that also explains how to set up security (SMTPS, etc.).
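One way to do that forwarding without touching iptables is LXD's proxy device. A minimal sketch, assuming the container is named c1 and the mail service listens on port 25 inside it:

```shell
# forward inbound SMTP on the host's public IP into the container;
# LXD manages the forwarding itself, no manual iptables rules needed
lxc config device add c1 smtp25 proxy \
    listen=tcp:0.0.0.0:25 \
    connect=tcp:127.0.0.1:25
```

The connect address is resolved inside the container, so 127.0.0.1 here means the container's own loopback.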


#3

Geia sou!

Yes, I paid for an extra single IP, and I knew I could forward ports, but I wanted to try it out this way.
Once the container got its Hetzner MAC, DHCP pulled its Hetzner IP.

Also, thanks for your blog posts; I tried so many things before I went without a network bridge during lxd init.
Less is more, I guess, in this particular case.


(Tom) #4

Hello Jimi,

volatile.ethX.hwaddr is also required for OVH and online.net.

Automatically updating stale cached images is a great option for new images/containers; yes is the default.
Explanation from Stephane: 3.0 - image cache update?

Root/dedicated servers:
2 disks: my advice is to create a partition with a ZFS pool on it (block device) instead of a loop device.
3 disks: buy an OS disk (SSD or HDD) at Hetzner and dedicate the other two disks to containers.

I have a 3-node cluster at Hetzner and it’s great!


#5

Good to know about these other Hosters!

Yes, you are correct. I think I decided against the image auto-updating since I have no plans to install more containers at the moment. If I understand this correctly, once the image has been used to create a container, it just uses disk space in the pool, and if the need for another container with the same OS arises, I can delete the old image and download a new one. Thanks for pointing this out, though.

Thanks for the advice. I'm going to check if it's possible for my SB root server; the 240 GB SSD is rather pricey at €7.74/month. I was thinking of redoing my FDE server install, this time opting for ZFS on my RAID1 2.7 TB HDDs. I would have to read up on ZFS best practices for my hardware first, since I have never used it before and have no idea about disk speeds. I don't really need super disk speed, yet (;

Thank you Tom.


(Jon Clayton) #6

I use hetzner with bare metal servers.

In my situation, rather than just buying extra disks, you can re-partition your current 2x2 TB using installimage during rescue boot. I have 100 GB for the Proxmox OS and the rest is empty space.

I had a stupid HW RAID card which I had to present as 2xRAID0 so that I could use ZFS.

In the Proxmox CLI, install the usual ZFS utils.

Install LXD: “snap install lxd”

I then fdisk /dev/sda and /dev/sdb and create two large ~1.7 TB Linux partitions (or whatever size you want).
Get the partition IDs (ideally with blkid), or use /dev/sdX1 and /dev/sdY1 (replace with the real disk/partition numbers).

Then
zpool create zfs1 raidz partitionid1 partitionid2

lxc storage create zfs1 zfs source=zfs1
lxc profile device add default root disk path=/ pool=zfs1

Now you have a separate ZFS pool that you can do what you want with, and your LXD containers live on it in their own dataset.
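A quick way to verify that the steps above went through (the output depends on your host, so these are just the checks I would run):

```shell
# check pool health and that LXD registered the storage pool
zpool status zfs1
lxc storage show zfs1
# confirm the default profile's root disk now points at the new pool
lxc profile show default
```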

Also, regarding IPs: I'm pretty sure you can route your IP in rather than do macvlan, which I have never used.

create the new IP e.g. 3.3.3.3/32

then

Create an internet-facing bridge:
lxc network create br-outside
Plug your internet-facing container into br-outside:
lxc network attach br-outside c1 eth0 eth0

Now create an interface route:

ip route add 3.3.3.3/32 dev br-outside

Pretty sure that works; AFAIK I don't remember enabling proxy ARP or anything like that, or possibly it's already enabled in sysctl.conf?
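If the routed IP doesn't respond, proxy ARP is the usual suspect. A hedged check (the interface name enp4s0 is an assumption; use your actual uplink interface):

```shell
# 1 means the host will answer ARP for routed addresses like 3.3.3.3
sysctl net.ipv4.conf.enp4s0.proxy_arp
# enable it if it reads 0 (add the line to /etc/sysctl.conf to persist)
sudo sysctl -w net.ipv4.conf.enp4s0.proxy_arp=1
```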

I do all my routing with FRR.

Also, you can do the same thing but use a VM instead and use that as the firewall for entry and exit into your host. Currently I'm using pfSense running in a VM, with one leg on the internet-facing bridge (outside interface) and multiple internal interfaces on bridges where the firewalled-off containers live.

It also works with IPv6: I have a /48 IPv6 range I purchased along with an ASN. I spun up a VM at Vultr, where they allow you to peer BGP with them, so you can advertise your /48 from there (also from other locations if you want, assuming they allow you to peer - unfortunately Hetzner don't allow it - bad guys!).

In that case I also ran LXD inside the VM, created a bridge, used FRR for eBGP peering, and then routed the public IPv6 space internally over a ZeroTier “overlay”, where I can use any public IPv6 addresses on containers or VMs scattered across different geo locations, so long as they are on the same ZeroTier network… it's all pretty pointless, but still nice to do as a PoC!

Cheers!
Jon.


#7

Thanks Jon, my head spins from trying to understand what you have accomplished here; I might look into creating a pfSense container.

I needed an additional container, but did not want to order another Hetzner IP/MAC as I did previously for the mail container. Let's say I spent some good time figuring out how to get a NAT bridge working without wiping everything and redoing lxd init.
The Netplan changes to create a working bridge on the host turned out to be easier than I thought, if only the disconnects hadn't been so demanding.

Maybe someone can use this info. My YAML file:

### Hetzner Online GmbH installimage
network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: no
      dhcp6: no

  bridges:
    br0:
      macaddress: 54:04:a6:7e:f4:e4
      dhcp4: no
      dhcp6: no
      interfaces: [ enp4s0 ]
      addresses: [ 176.9.48.115/32 ]
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 176.9.48.97
      nameservers:
        addresses:
          - 213.133.99.99
          - 213.133.98.98
          - 213.133.100.100
      parameters:
        stp: false
        forward-delay: 1
        hello-time: 2
        max-age: 12
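To avoid locking yourself out while testing a config like the one above, netplan can apply it with an automatic rollback:

```shell
# applies the config, then reverts after 120 s unless you confirm;
# a lifesaver when a bad bridge config would kill your SSH session
sudo netplan try
# once "netplan try" survives, make it permanent
sudo netplan apply
```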

Then I created a NAT network with IPv6 disabled:
$ lxc network create lxdbr0 ipv6.address=none ipv4.address=10.10.10.1/24 ipv4.nat=true

More details can be found here https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/

In my case, the rest was just editing my bridge profile's parent to use lxdbr0, but for those in need:
Create a bridge profile as described by Simos here https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/

Either assign this profile to an already existing container via the lxc profile assign command, or add the profile to the container you are about to create. In my case:
$ lxc launch -p default -p lxdbr0 yourImageVersion c2

$ lxc profile show lxdbr0

config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: lxdbr0
used_by:
- /1.0/containers/

I really need to learn more about networking. I am happy to have come this far; I never bothered much with networks and bridges before. I hope I have done this correctly; please feel free to point out any mistakes, and thanks for this amazing piece of software!

PS:
I was itching to reformat to ZFS, but since I am not going to be doing much more right now, and ZoL's integrated encryption is not yet an LTS reality, I am going to postpone.


(Jon Clayton) #8

Ah, are you trying to run more containers internally and port forward to them via a single public Hetzner IP?

If so, that is the most straightforward way, and is what I do for the most part.

Anything that needs inbound access from the internet has iptables rules (persisted with iptables-persistent) forwarding the traffic in via the LXD bridge to the specific container.
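Such a rule might look like this (the interface enp4s0, container address 10.10.10.10 and port 25 are placeholders; substitute your own values):

```shell
# DNAT inbound SMTP on the public interface to a container on lxdbr0
sudo iptables -t nat -A PREROUTING -i enp4s0 -p tcp --dport 25 \
    -j DNAT --to-destination 10.10.10.10:25
# with iptables-persistent installed, save so the rule survives reboots
sudo netfilter-persistent save
```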



#9

right now it looks like this,

         <-----> macvlan (public ip 2) <-----> postfix stuff and fw
       /

host (br0, public ip 1) <-----> lxdbr0 <-----> other tmp test containers


(Nik S Firefly) #10

Can you post the netplan config file?


(Nik S Firefly) #11

@jimi How can I use additional IPs bought from Hetzner for one host in your config?
I am not talking about macvlan, but about br0 to the outside internet,
with containers using those additional IPs.
I don't know how to configure this with netplan :frowning: