(Updated) Starting new container causes other container to lose IP / eth0

UPDATE #1
I was able to get a stable eth0 / IP for each container by switching to the bridged NIC type, e.g. lxc config device add c2 eth0 nic nictype=bridged parent=br0 name=eth0 (and the same for c1).
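For anyone following along, the same bridged NIC can also be attached once on the shared profile instead of per container. This is only a sketch, assuming the t1 profile and the OVS bridge br0 described further down:

# Sketch: drop the per-container physical NICs first, then add the bridged
# eth0 once on the shared t1 profile so every container using it gets a port
# on the OVS bridge br0 (LXD plugs the veth host side into br0 itself).
lxc config device remove c1 eth0
lxc config device remove c2 eth0
lxc profile device add t1 eth0 nic nictype=bridged parent=br0 name=eth0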

From my searching, I thought I would be able to use an OVS internal port as a physical NIC type for a container, for better performance vs veth. Does anyone have any insight into that, or into what I was doing incorrectly in the original setup? Thanks!

URL References for what I was originally trying to accomplish with LXC / OVS internal ports:
https://arthurchiao.github.io/blog/ovs-deep-dive-6-internal-port/ (Section 3 & 4)
https://www.opencloudblog.com/?p=66
https://www.opencloudblog.com/?p=96 (Performance of veth vs internal port)

I’m working on setting up a testing environment with VirtualBox/Vagrant running Ubuntu 18.04 as the LXC host (not ideal, as there are a lot of layers, but I don’t have a better solution at the moment).

I’ve got it working about 90% of the way, but I’m hitting an issue: when I start a second container on the host, the first container loses its IP and eth0 no longer shows up when running ifconfig inside that container, and vice versa; starting the 1st container again makes the 2nd container lose its IP / eth0.

The goal is for each container to have its own (static) IP on my LAN. I’ve got this working when only one container is running.

Ubuntu 18.04 Host Network Setup:

  • A bridge named br0 is set up with: ovs-vsctl add-br br0
  • The bridge is connected to the ethernet port enp0s9 with: ovs-vsctl add-port br0 enp0s9
  • No IP is assigned to br0 or enp0s9, as the host is managed via a separate NIC
  • Ports for the containers are added to br0 with: ovs-vsctl add-port br0 vport1 -- set Interface vport1 type=internal and ovs-vsctl add-port br0 vport2 -- set Interface vport2 type=internal (the full sequence is collected just after this list)
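For reference, here is the full host-side sequence in one place (nothing new, just the commands from the list above):

# Create the OVS bridge and attach the physical LAN uplink
ovs-vsctl add-br br0
ovs-vsctl add-port br0 enp0s9

# One OVS internal port per container
ovs-vsctl add-port br0 vport1 -- set Interface vport1 type=internal
ovs-vsctl add-port br0 vport2 -- set Interface vport2 type=internal

# Sanity check (output pasted under "Host networking output" below)
ovs-vsctl show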

Container setup (c1 and c2):

  • Containers are created with lxc init ubuntu:18.04 c1 -p t1 (the t1 profile is listed below)
  • Each is configured with a static IP on the LAN via netplan (file contents pasted below)
  • eth0 is added to the containers with: lxc config device add c1 eth0 nic nictype=physical parent=vport1 name=eth0 and lxc config device add c2 eth0 nic nictype=physical parent=vport2 name=eth0 (collected just after this list)
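And the container-side commands collected together (again just the commands from the list above, plus the starts):

# Create both containers from the Ubuntu 18.04 image using the t1 profile
lxc init ubuntu:18.04 c1 -p t1
lxc init ubuntu:18.04 c2 -p t1

# Hand each container one OVS internal port as its eth0 (physical NIC type)
lxc config device add c1 eth0 nic nictype=physical parent=vport1 name=eth0
lxc config device add c2 eth0 nic nictype=physical parent=vport2 name=eth0

# Start them (this is where the problem below shows up)
lxc start c1
lxc start c2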

What happens:

  • Launching c1 causes it to come up with its static IP (the router also shows it in its client list)
  • c1 can access the internet and is reachable over the LAN (installed nginx to test), which is the expected result
  • When c2 is launched, c1 loses its IP and eth0 (ifconfig inside the container no longer shows eth0)
  • c2 shows as running with an IP but isn’t accessible either, and lxc list no longer shows an IP for c1
  • I have to stop both containers and start just one (c1 or c2) for it to grab its IP and become accessible; the two cannot run at the same time, as starting one causes the other to lose its IP and neither has working network access

Thank you in advance!!!

Below I’ve pasted expanded details for container and host config:

t1 profile info

lxc profile show t1
config:
  limits.cpu: "1"
  limits.memory: 512MB
  security.devlxd: "false"
  security.idmap.isolated: "true"
description: Default LXD profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: t1
used_by:
- /1.0/containers/c1
- /1.0/containers/c2

c1 config show expanded

lxc config show c1 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20190918)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20190918"
  image.type: squashfs
  image.version: "18.04"
  limits.cpu: "1"
  limits.memory: 512MB
  security.devlxd: "false"
  security.idmap.isolated: "true"
  volatile.base_image: 9ff5784302bfd6d556ac4c4c1176a37e86d89ac4d1aced14d9388919fa58bee8
  volatile.eth0.host_name: vport1
  volatile.eth0.last_state.created: "false"
  volatile.eth0.last_state.hwaddr: 02:cf:64:d5:55:72
  volatile.eth0.last_state.mtu: "1500"
  volatile.idmap.base: "1065536"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: physical
    parent: vport1
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- t1
stateful: false
description: ""

c2 config show expanded

lxc config show c2 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20190918)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20190918"
  image.type: squashfs
  image.version: "18.04"
  limits.cpu: "1"
  limits.memory: 512MB
  security.devlxd: "false"
  security.idmap.isolated: "true"
  volatile.base_image: 9ff5784302bfd6d556ac4c4c1176a37e86d89ac4d1aced14d9388919fa58bee8
  volatile.idmap.base: "1131072"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1131072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1131072,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1131072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1131072,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1131072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1131072,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    nictype: physical
    parent: vport2
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- t1
stateful: false
description: ""

I noticed that c1 has some extra volatile fields; however, I’m not sure what they are for or whether they are causing an issue:

  volatile.eth0.host_name: vport1
  volatile.eth0.last_state.created: "false"
  volatile.eth0.last_state.hwaddr: 02:cf:64:d5:55:72
  volatile.eth0.last_state.mtu: "1500"
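For reference, a quick way to compare just these keys between the two containers (standard lxc config show output piped through grep, nothing special):

# Show only the eth0-related volatile keys for each container
lxc config show c1 | grep 'volatile.eth0'
lxc config show c2 | grep 'volatile.eth0'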

lxc list when one container is running (and working with network access)

+------+---------+----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 192.168.1.113 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+
| c2   | STOPPED |                      |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

lxc list when the 2nd container is started and networking stops working for both

+------+---------+----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING |                      |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+
| c2   | RUNNING | 192.168.1.114 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

ifconfig on c1 when it’s working vs. when it stops working

Working:

root@c1:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.113  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::cf:64ff:fed5:5572  prefixlen 64  scopeid 0x20<link>
        ether 02:cf:64:d5:55:72  txqueuelen 1000  (Ethernet)
        RX packets 865  bytes 85073 (85.0 KB)
        RX errors 0  dropped 617  overruns 0  frame 0
        TX packets 31  bytes 2781 (2.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4  bytes 280 (280.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 280 (280.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Not working:

root@c1:~# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4  bytes 280 (280.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 280 (280.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Contents of /etc/netplan/50-cloud-init.yaml

C1:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: no
            addresses: [192.168.1.113/24]
            gateway4: 192.168.1.254
            nameservers:
                addresses: [8.8.8.8]

C2:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: no
            addresses: [192.168.1.114/24]
            gateway4: 192.168.1.254
            nameservers:
                addresses: [8.8.8.8]
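For completeness, after editing this file inside a container the config is re-applied with the standard netplan command (restarting the container also re-applies it on boot):

# Run inside the container after changing /etc/netplan/50-cloud-init.yaml
netplan apply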

Host networking output:

sudo ovs-vsctl show
5d5744dd-4eb4-4b77-b206-c3cc1cabc9cd
    Bridge "br0"
        Port "vport2"
            Interface "vport2"
                type: internal
        Port "enp0s9"
            Interface "enp0s9"
        Port "vport1"
            Interface "vport1"
                type: internal
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.9.2"

ifconfig on host

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:feca:2ca5  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:ca:2c:a5  txqueuelen 1000  (Ethernet)
        RX packets 18066  bytes 1861492 (1.8 MB)
        RX errors 0  dropped 8499  overruns 0  frame 0
        TX packets 161  bytes 13990 (13.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::27:29ff:fe89:289f  prefixlen 64  scopeid 0x20<link>
        ether 02:27:29:89:28:9f  txqueuelen 1000  (Ethernet)
        RX packets 15038  bytes 3013562 (3.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8822  bytes 843196 (843.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.33.10  netmask 255.255.255.0  broadcast 192.168.33.255
        inet6 fe80::a00:27ff:fe73:a60e  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:73:a6:0e  txqueuelen 1000  (Ethernet)
        RX packets 207  bytes 18054 (18.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1616 (1.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:feca:2ca5  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:ca:2c:a5  txqueuelen 1000  (Ethernet)
        RX packets 297403  bytes 205203712 (205.2 MB)
        RX errors 0  dropped 38  overruns 0  frame 0
        TX packets 24505  bytes 1999936 (1.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 154  bytes 14788 (14.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 154  bytes 14788 (14.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.182.32.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::3819:95ff:fe36:ee2d  prefixlen 64  scopeid 0x20<link>
        inet6 fd42:ee98:50a6:f88f::1  prefixlen 64  scopeid 0x0<global>
        ether 3a:19:95:36:ee:2d  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1972 (1.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Container’s route info:
When working:

root@c1:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

When not working:

root@c1:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.1.254   0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

Where _gateway resolves to:

root@c1:~# nslookup _gateway
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   _gateway
Address: 192.168.1.254
** server can't find _gateway: NXDOMAIN