Maintaining IP with lxc move on LXD Cluster 3.4

Using LXD 3.4 on Ubuntu 16.04 via snap. The cluster uses Ceph block storage and Fan networking, with three nodes, each having a different /8 subnet on the fan. If we move a container from one node to another in the cluster, it's assigned a new IP from the /8 of the new host. How should this be configured so a container keeps its IP across a move?

The FAN bridge fundamentally relies on a subnet per host, with a deterministic way to figure out the target host based on the IP.

So you can't keep your IP when moving hosts with the FAN bridge. What you'd need is a shared L2 network. On physical hardware, you'd usually achieve that with VLANs, connecting the containers directly to the VLAN and using a normal router as gateway and DHCP server, or perhaps something like MAAS, which would then let LXD control IP assignment for the network.
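As a sketch of that physical-hardware approach (the uplink name `eth0`, VLAN ID 42, bridge name and container name below are placeholders, not taken from this thread):

```shell
# Create a VLAN interface on the host's uplink and put it in a bridge
# (hypothetical names: eth0 as uplink, VLAN 42, br-vlan42 as the bridge).
ip link add link eth0 name eth0.42 type vlan id 42
ip link add br-vlan42 type bridge
ip link set eth0.42 master br-vlan42
ip link set eth0.42 up
ip link set br-vlan42 up

# Attach a container's NIC to that bridge; the external router on
# VLAN 42 then provides the gateway and DHCP for every cluster member,
# so the container's IP survives a move between hosts.
lxc config device add mycontainer eth0 nic nictype=bridged parent=br-vlan42
```

With the same bridge set up on every cluster node, the container stays in the same L2 segment no matter which node it runs on.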

In a virtual environment, you may be able to use VXLAN directly to achieve a similar shared L2, then using one of the VMs to act as a router.
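A minimal sketch of that VXLAN idea between two VMs (the interface names, VTEP addresses 10.0.0.11/10.0.0.12 and VNI 100 are all placeholders for your environment):

```shell
# On VM A (10.0.0.11), build a unicast VXLAN tunnel to VM B (10.0.0.12)
# and attach it to a bridge that LXD containers can join.
ip link add vxlan100 type vxlan id 100 dev ens5 \
    local 10.0.0.11 remote 10.0.0.12 dstport 4789
ip link add br-shared type bridge
ip link set vxlan100 master br-shared
ip link set vxlan100 up
ip link set br-shared up
# Repeat on VM B with local/remote swapped, then point the profile's
# nic device at br-shared (nictype: bridged, parent: br-shared).
```

One of the VMs (or a dedicated container) would then act as the router and DHCP server for the shared segment.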

@stgraber, thanks for the suggestion. The cluster I'm working with is in AWS, so to get the shared L2 network for LXD containers you mentioned, I've added a second interface to the EC2 VM on a new subnet and am attempting to bridge my containers to that second interface, which has DHCP and DNS services. But I'm having issues; here's my config:

Running LXD 3.4 on Ubuntu 16.04…

#cat /etc/network/interfaces.d/50-cloud-init.cfg
auto lo
iface lo inet loopback

auto ens5
iface ens5 inet dhcp

auto br0
iface br0 inet dhcp
bridge_ports ens6
bridge_stp off
bridge_fd 0
bridge_maxwait 0

#ifconfig
br0 Link encap:Ethernet HWaddr 0e:03:80:9f:2e:20
inet addr:172.34.206.62 Bcast:172.34.207.255 Mask:255.255.248.0
inet6 addr: fe80::c03:80ff:fe9f:2e20/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:592 errors:0 dropped:0 overruns:0 frame:0
TX packets:570 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:35048 (35.0 KB) TX bytes:30348 (30.3 KB)

ens5 Link encap:Ethernet HWaddr 0e:62:4a:d5:e6:64
inet addr:172.34.100.30 Bcast:172.34.100.255 Mask:255.255.255.0
inet6 addr: fe80::c62:4aff:fed5:e664/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:117835 errors:0 dropped:0 overruns:0 frame:0
TX packets:45693 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:175728273 (175.7 MB) TX bytes:3117095 (3.1 MB)

ens6 Link encap:Ethernet HWaddr 0e:03:80:9f:2e:20
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:560 errors:0 dropped:0 overruns:0 frame:0
TX packets:602 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:34480 (34.4 KB) TX bytes:39204 (39.2 KB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:38 errors:0 dropped:0 overruns:0 frame:0
TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:3164 (3.1 KB) TX bytes:3164 (3.1 KB)

veth50DWSL Link encap:Ethernet HWaddr fe:3a:1e:ba:46:d1
inet6 addr: fe80::fc3a:1eff:feba:46d1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:32 errors:0 dropped:0 overruns:0 frame:0
TX packets:527 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8856 (8.8 KB) TX bytes:22446 (22.4 KB)

#lxc network list
+------+----------+---------+-------------+---------+
| NAME | TYPE     | MANAGED | DESCRIPTION | USED BY |
+------+----------+---------+-------------+---------+
| br0  | bridge   | NO      |             | 1       |
+------+----------+---------+-------------+---------+
| ens5 | physical | NO      |             | 0       |
+------+----------+---------+-------------+---------+
| ens6 | physical | NO      |             | 0       |
+------+----------+---------+-------------+---------+

#lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/first

#brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0e03809f2e20 no ens6
veth50DWSL

#lxc list
+-------+---------+------+------+------------+-----------+
| NAME  | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------+---------+------+------+------------+-----------+
| first | RUNNING |      |      | PERSISTENT |           |
+-------+---------+------+------+------------+-----------+

#lxc exec first bash
On the container:

first#ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3e:45:7c:2a
inet6 addr: fe80::216:3eff:fe45:7c2a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:536 errors:0 dropped:0 overruns:0 frame:0
TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:22824 (22.8 KB) TX bytes:8856 (8.8 KB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:120 errors:0 dropped:0 overruns:0 frame:0
TX packets:120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:9840 (9.8 KB) TX bytes:9840 (9.8 KB)

Since the veth shows up under the bridge, the container does seem to be on the bridge, but it doesn't pick up an IPv4 address via DHCP. I've also tried a static address on the container in the new subnet and still can't reach anything on that subnet. What have I missed?

@stgraber any insight?

I'm really not familiar with AWS, but I wouldn't be surprised if their virtual networks perform MAC filtering, so that only the MAC addresses of the instances themselves are allowed to get an address from DHCP.
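Two hedged ways to probe that theory (the instance ID below is a placeholder; as far as I know, disabling the source/destination check only relaxes IP-level filtering, while AWS still filters frames from MAC addresses that don't belong to an ENI):

```shell
# Watch the bridge for DHCP traffic: if requests go out but no
# replies ever come back, something upstream is filtering them.
tcpdump -ni br0 port 67 or port 68

# Disable the EC2 source/destination check on the instance (required
# for any routing/bridging appliance; instance ID is hypothetical).
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
```

If the MAC filtering theory holds, bridging containers with their own MACs onto the ENI won't work, and a routed or NAT setup (or assigning secondary private IPs to the ENI) would be needed instead.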

If I can get DNS resolution for the lxd domain working across the three hosts, I wouldn't have to worry about the IP changing.

Starting with 3.4, DNS resolution is actually supposed to work across hosts on the same fan bridge.
Can you check that you have a lxd forkdns process running on each of your hosts and that the IPs passed to it look correct?

If that’s the case, then DNS is supposed to be forwarded to your different hosts.

root@lxd1-a.dev:/home/choyle

$> ps aux | grep forkdns

root 2239 0.0 0.5 207776 22988 ? Sl Sep04 0:19 /snap/lxd/current/bin/lxd forkdns 240.176.0.1:1053 lxd 240.176.0.1 240.217.0.1 240.172.0.1

root 1104875 0.0 0.0 12944 1020 pts/0 S+ 12:36 0:00 grep --color=auto forkdns

forkdns is running, but the host is not receiving any DNS updates from LXD:

root@lxd1-a.dev:/home/choyle

$> ping nova-02.lxd

ping: unknown host nova-02.lxd

nova-02 is a container on this host. Its record (nova-02.lxd at 240.176.0.9) is not being made available to the host.

Does resolution from inside a container work? That’s the part that LXD handles.

The host isn't usually configured to resolve LXD network domains; setting that up is up to you, and @simos has written some instructions for Ubuntu 18.04.
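For reference, that host-side setup with systemd-resolved on Ubuntu 18.04 looks roughly like this (a sketch: the bridge name lxdfan0 is an assumption, and the DNS address is taken from the forkdns output earlier in this thread):

```shell
# Tell systemd-resolved to send *.lxd queries for this link to the
# dnsmasq listening on the fan bridge, which forwards cross-host
# names on to the forkdns processes of the other cluster members.
systemd-resolve --interface lxdfan0 \
    --set-dns 240.176.0.1 \
    --set-domain lxd
```

Note that this does not persist across reboots on its own; you'd typically wrap it in a small systemd unit or networkd configuration.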