LXD installed within a VM and public IPv6 addresses

This is kind of a follow-up to Hetzner server setup with public IPv6 addresses.

I have set up the server again (#7), created a virtual machine, and installed LXD inside the virtual machine.

The virtual machine has a public IPv6 address (and is reachable). I created a network (vnet0), set its address from the public IPv6 prefix in CIDR notation, and then created a container. LXD gave the container a public IPv6 address, but that address is not reachable from the outside.

From the virtual machine:

VM1 $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:77:43:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.132/24 brd 192.168.122.255 scope global dynamic enp1s0
       valid_lft 2835sec preferred_lft 2835sec
    inet6 2a01:abcd:abcd:abcd:5054:ff:fe77:4320/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3579sec preferred_lft 3579sec
    inet6 fe80::5054:ff:fe77:4320/64 scope link
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:73:f1:5a brd ff:ff:ff:ff:ff:ff
    inet 10.207.139.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:fa85:138c:a438::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe73:f15a/64 scope link
       valid_lft forever preferred_lft forever
9: veth62f97733@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 2e:d1:6b:42:3c:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0
12: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:0d:4d:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 scope global vnet0
       valid_lft forever preferred_lft forever
    inet6 2a01:abcd:abcd:abcd::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe0d:4de2/64 scope link
       valid_lft forever preferred_lft forever
18: veth5da724ff@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vnet0 state UP group default qlen 1000
    link/ether f2:a4:d0:b2:85:1e brd ff:ff:ff:ff:ff:ff link-netnsid 2
VM1 $ ip -6 r
::1 dev lo proto kernel metric 256 pref medium
2a01:abcd:abcd:abcd::/64 dev enp1s0 proto ra metric 100 expires 3064sec pref medium
2a01:abcd:abcd:abcd::/64 dev vnet0 proto kernel metric 256 pref medium
fd42:fa85:138c:a438::/64 dev lxdbr0 proto kernel metric 256 expires 3118sec pref medium
fe80::/64 dev enp1s0 proto kernel metric 256 pref medium
fe80::/64 dev lxdbr0 proto kernel metric 256 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
default via fe80::5054:ff:fe3c:b2d3 dev enp1s0 proto ra metric 100 expires 1264sec mtu 1500 pref medium

The container is allocated a public IPv6 address, but it is not pingable from my home connection:

HOME $ ping6 2a01:abcd:abcd:abcd:216:3eff:fea4:ae2
PING 2a01:abcd:abcd:abcd:216:3eff:fea4:ae2(2a01:abcd:abcd:abcd:216:3eff:fea4:ae2) 56 data bytes
From 2a01:abcd:abcd:abcd::2 icmp_seq=1 Destination unreachable: Address unreachable
From 2a01:abcd:abcd:abcd::2 icmp_seq=2 Destination unreachable: Address unreachable
From 2a01:abcd:abcd:abcd::2 icmp_seq=3 Destination unreachable: Address unreachable

From within the container:

APACHE $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:a4:0a:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.171/24 brd 10.0.0.255 scope global dynamic eth0
       valid_lft 3329sec preferred_lft 3329sec
    inet6 2a01:abcd:abcd:abcd:216:3eff:fea4:ae2/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3331sec preferred_lft 3331sec
    inet6 fe80::216:3eff:fea4:ae2/64 scope link
       valid_lft forever preferred_lft forever
APACHE $ ip -6 r
2a01:abcd:abcd:abcd::/64 dev eth0 proto ra metric 100 expires 3364sec pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::216:3eff:fe0d:4de2 dev eth0 proto ra metric 100 expires 1564sec mtu 1500 pref medium

This is how netplan is set up in the VM; could this be causing the problem for the LXD bridge?

VM1 $ cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: true
  version: 2
VM1 $ lxc network show vnet0
config:
  ipv4.address: 10.0.0.1/24
  ipv4.nat: "true"
  ipv6.address: 2a01:abcd:abcd:abcd::2/64
  ipv6.nat: "true"
description: Public IPv6 Addresss
name: vnet0
type: bridge
used_by:
- /1.0/instances/apache
managed: true
status: Created
locations:
- none

I tried adjusting the netplan config in the VM as follows, setting a static IPv6 address and a gateway, but this made no difference.

network:
  ethernets:
    enp1s0:
      dhcp4: true
      addresses:
        - 2a01:abcd:abcd:abcd:0000:0000:0000:0007/128
      gateway6: fe80::1
  version: 2

Any thoughts on what LXD needs to get this working?

So if I understand you correctly, your setup is now something like this:

Hetzner network → your LXD host → a LXD VM → LXD inside the VM → LXD container inside VM.

Or to put it another way:

Your LXD host:

  • External interface: 2a01:abcd:abcd:abcd::1/128
  • lxdbr0: 2a01:abcd:abcd:abcd::2/64

Your VM:

  • enp1s0 (connected to lxdbr0 on LXD host): 2a01:abcd:abcd:abcd:5054:ff:fe77:4320/64
  • vnet0: 2a01:abcd:abcd:abcd::2/64

Your container:

  • eth0 (connected to vnet0 on LXD VM): 2a01:abcd:abcd:abcd:216:3eff:fea4:ae2/64

If so, then I wouldn’t have expected that to “just work” automatically, especially since once again it looks like you are reusing the same /64 subnet (2a01:abcd:abcd:abcd::/64) on two different networks (now it’s the one outside the VM and the one inside the VM, whereas before it was the external interface vs the lxdbr0 interface on the LXD host).

You can see it’s the same problem as before in your ip -6 r output inside the VM:

VM1 $ ip -6 r
::1 dev lo proto kernel metric 256 pref medium
2a01:abcd:abcd:abcd::/64 dev enp1s0 proto ra metric 100 expires 3064sec pref medium
2a01:abcd:abcd:abcd::/64 dev vnet0 proto kernel metric 256 pref medium

The same route to 2a01:abcd:abcd:abcd::/64 is present via two different interfaces.

What is your aim? Do you want to route a separate IPv6 subnet to your VM out of your ISP’s delegated /64 (or can you get another /64 delegation, or a single larger /56 or /48 for your LXD host that you can then subdivide into per-VM subnets)? Or do you just want to join your VM’s containers to the LXD host’s lxdbr0 at layer 2, so they share the original /64 allocation as if they were running directly on the LXD host?

By the way, what are you using to run the VM? LXD VMs normally have their first Ethernet device named enp5s0.

Actually, the host is Ubuntu Server with a KVM setup (it took me two days to get this working with each VM allocated its own public IP address).

I am a little bit nervous. I have two production web applications that I want to get out there, but I am concerned about having everything in just one setup; if I make a mistake on the OS, the whole thing falls apart.

So my idea is to carve up the server (64 GB RAM, 2 TB storage) into VMs using KVM; each of those VMs will have its own LXD server loaded with containers such as Apache, MySQL, Redis, etc. This also gives me the freedom to spin up new servers, migrate things around, and just be flexible without having to get a second server. I am probably abusing virtualization here.

I have not been able to test the performance overhead of doing it this way yet, but I have seen a number of companies running LXD inside VMs and they seem to do fine. (At home I do it this way, and quite frankly it’s faster, because at home I have an SSD versus the server, where I went with larger storage.)

I want to be able to spin up VMs with public IPv6 addresses, and containers also with public IPv6 addresses, but with the LXD servers installed inside each VM, not on the host.

I had bought a /29 subnet which gave me some extra IPv4 addresses (these were totally different IP addresses), but I could not get that working. None of the guides online worked; I reinstalled the server over and over again, switching between Debian and Ubuntu and trying everything. My initial goal was to give each VM its own IPv4 address and then use IPv6 in the LXD setups, but I was not able to.

I asked support if there were any MAC addresses; they said not for subnets, they have to use the main one.

Is it possible to do what I am trying to do with just one IPv6 block? Can it be divided up?

Let’s stick with IPv6 for the moment and circle back to IPv4.

As an aside is there a reason you’re using KVM rather than LXD VMs? Not that this really affects the networking principles, just the specific setup steps.

In an ideal world your ISP would route you a single IPv6 subnet that is large enough to contain multiple /64 subnets. This is normally achieved using a /56 (which can contain 256 /64 subnets) or a /48 (which can contain 65k /64 subnets). You would then be free to carve out non-overlapping /64 subnets as you see fit, i.e. static routes from the VM host to each of the VMs, which they can then use internally however they choose (in your case for an lxdbr0 bridge).
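
For example (using the 2001:db8 documentation prefix rather than a real allocation), a routed /56 such as 2001:db8:1:ff00::/56 could be split into per-VM /64s like this:

2001:db8:1:ff00::/64 → VM1’s lxdbr0
2001:db8:1:ff01::/64 → VM2’s lxdbr0
…
2001:db8:1:ffff::/64 → the 256th /64 available in that /56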

Another approach some ISPs use is to route multiple /64 subnets to your VM host, which has the same result as above, it just needs more admin with your ISP when allocating new subnets.

The reason having a /64 for each VM (and by extension each lxdbr0 bridge) is desirable is that a /64 is the minimum sized subnet that will work with IPv6 SLAAC (auto addressing). If your lxdbr0 bridge has a prefix longer than /64 (i.e. a smaller number of hosts), then you need to either do manual static addressing inside each instance or use DHCPv6 (which requires a working DHCPv6 client inside the instances).

If you just have a single /64 subnet to play with then you have 2 options:

  1. Carve up the /64 subnet into several smaller subnets (one for each VM’s lxdbr0 bridge), with the understanding that SLAAC won’t be possible, perhaps using /120s, which allow for 256 hosts each. You then need to set up static routes on the VM host that route each of the smaller subnets to the correct VM IP, so that the VM host knows where to send packets it receives for a VM’s smaller subnet (see the sketch after this list).
  2. Join the containers inside the VMs to the VM host’s bridge at layer 2 so they form one “big” network and use the same /64 subnet and SLAAC/DHCP and DNS from the host itself (as your VMs do).
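
To make option 1 concrete, here is a rough, untested sketch of what it could look like, assuming a hypothetical /120 carved out of your /64 for vnet0 (the addresses are placeholders based on your output above, and HOST is just a placeholder prompt for the VM host). Since SLAAC won’t work on a /120, this also enables LXD’s stateful DHCPv6, which then needs a DHCPv6 client inside the containers:

HOST $ # route a /120 slice of the /64 to the VM's own address (its enp1s0 address from above)
HOST $ ip -6 route add 2a01:abcd:abcd:abcd::100/120 via 2a01:abcd:abcd:abcd:5054:ff:fe77:4320

VM1 $ # give the LXD bridge an address inside that /120, disable NAT and use stateful DHCPv6
VM1 $ lxc network set vnet0 ipv6.address 2a01:abcd:abcd:abcd::101/120
VM1 $ lxc network set vnet0 ipv6.nat false
VM1 $ lxc network set vnet0 ipv6.dhcp.stateful true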

I suspect option 2 will be easier to achieve and conceptually simpler to understand, but there are several ways to achieve it.

Right now I assume you have a bridge on your VM host that is assigned the address 2a01:abcd:abcd:abcd::2/64 and is providing SLAAC, DHCP and DNS service to your VMs.

These VMs will likely be attached to the VM host’s bridge via a TAP device that provides a virtual Ethernet connection between the bridge and the VM’s enp1s0 interface.

You then need to connect your containers inside the VM to that interface so that they can access the VM host’s bridge (rather than connecting them to another internal lxdbr0 or vnet0).

To do this you have several options:

  1. The simplest is to use the macvlan NIC type with the parent set to the VM’s enp1s0 interface, e.g. lxc config device add <instance> eth0 nic nictype=macvlan parent=enp1s0. This will cause the container to appear connected to the VM host’s bridge with its own MAC address, and it will proceed to do SLAAC/DHCP from the main /64 subnet. This means you won’t then need any additional bridges inside the VMs. The downside is that macvlan inherently prevents the containers from communicating with the VM guest (the parent interface). This may be tolerable, or even desirable, but if you need to communicate with the VM guest from the container then it is a non-starter.

  2. Set up a br0 bridge manually inside each VM, move the VM’s network config from enp1s0 to br0, and then connect enp1s0 to br0 (see Netplan | Backend-agnostic network configuration in YAML, and the sketch below). You can then use the bridged NIC type with the parent set to the br0 interface, e.g. lxc config device add <instance> eth0 nic nictype=bridged parent=br0. This achieves the same effect as option 1, with somewhat reduced performance, but with the benefit that the containers can communicate with the VM guest. It effectively creates a virtual switch inside each VM with the VM’s enp1s0 interface connected to it (so back to the VM host’s bridge too), and then you also connect each container to it.
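
For option 2, a rough, untested netplan sketch of what br0 inside the VM could look like (br0 is just an example name; depending on your environment you may need to tweak the accept-ra/dhcp6 settings):

network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [enp1s0]
      dhcp4: true
      # IPv6 should then be picked up on br0 via router advertisements
      # from the VM host's bridge, as enp1s0 does today
      accept-ra: true

After applying that with netplan apply, containers can be attached with the bridged NIC command above.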

Firstly, thank you again @tomp for your detailed and easy-to-understand answer. I am pretty sure other people will find this sometime in the future and it will be helpful to them too.

I am pretty sure that when I first installed LXD on the bare-metal server at Hetzner, macvlan did not work for me, but I can’t remember at what stage I tested it.

To cut a long story short, I switched the apache container in the VM to macvlan, and it worked straight away. The container is visible on the KVM default network (e.g. 192.168.122.171) and it has its own public IPv6 address, so that’s great. I can’t believe I missed that, since at home macvlan works brilliantly.

As for your question “As an aside is there a reason you’re using KVM rather than LXD VMs?”: simply because right now that is how I learnt to set up virtual machines. Also, the LXD manual still states:

This feature is currently considered to be experimental

Plus, I am using the stable branch, not the latest features etc., and I have no intention of switching until the next stable release.

This leads me to another question: if I installed LXD on the Hetzner host, would it play nicely with KVM? Can the two coexist okay if I created virtual machines elsewhere?

Thank you again for the help and explanations.

Can you point me to where the manual says VMs are experimental? I’m not sure we would consider that to be the case any more, even in the 4.0 LTS branch, which has VM support. I’ll check with @stgraber and get that changed if so.

You’re correct that macvlan is unlikely to work on Hetzner’s external network, because they restrict which MAC addresses can appear on a physical port. However, in this case you are using macvlan internally, just to get the container to communicate with the internal bridge on the VM host.

The traffic between the Hetzner external network and the VM host bridge is still being routed, and so it’ll all show up as coming from your server’s own MAC address.

Thanks, I’ll check with @stgraber and confirm. We’ve done a lot of work on VMs recently and added a lot of features, so they should be quite a bit closer to feature parity with containers now.

Yes LXD VMs should play fine with an existing KVM VM. We use QEMU under the hood.

The only thing I can think of that may cause coexistence issues is if you have the earlier vsock kernel module loaded rather than the virtio vsock module, as this is what LXD uses for lxd-agent communication with the VM guest (to enable lxc exec and lxc shell access).

If you have no vsock module loaded at all or if /dev/vsock exists, you should be fine.
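
As an illustration, one way to check this on the host (standard commands, nothing LXD-specific; HOST is just a placeholder prompt for the Hetzner machine):

HOST $ lsmod | grep vsock     # shows which vsock-related modules are currently loaded
HOST $ ls -l /dev/vsock       # present when the host vsock device is available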

Thanks.

@stgraber has confirmed that VM support is considered stable now. We will update the docs.

This removes the experimental statement: