Containers routing into other containers?

I’ve had a bit of trouble, as the networking side of LXD doesn’t seem to be hugely documented, and I’m having trouble deciding which interfaces I should use for my containers.

The networks on my LXD host:

  • eth0.3, which routes to my 1st WAN connection
  • eth0.4, an intranet VLAN that does not masquerade once it reaches the router
  • eth1.2, which routes to my 2nd WAN connection

My LXD host’s routing table:

# ip r
default via 192.168.3.1 dev eth0.3 metric 204
default via 192.168.4.1 dev eth0.4 metric 205
default via 192.168.2.1 dev eth1.2 metric 206
192.168.2.0/24 dev eth1.2 proto kernel scope link src 192.168.2.253
192.168.3.0/24 dev eth0.3 proto kernel scope link src 192.168.3.254
192.168.4.0/24 dev eth0.4 proto kernel scope link src 192.168.4.254

A central DHCP server is listening on all 3 VLANs:

  • 192.168.3.1
  • 192.168.4.1
  • 192.168.2.1

Ideally I’d like to be able to configure my containers that way too, not just my LXD host. That means my containers must have their own MAC addresses, so I am thinking macvlan is the right interface type for this.
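If macvlan is the right fit, a device can be attached to a container directly; a minimal sketch, assuming a container named webapp1 (substitute your own container and parent VLAN interface):

```shell
# attach a macvlan NIC on the eth0.4 VLAN; it gets its own MAC address,
# so the central DHCP server will hand the container its own lease
lxc config device add webapp1 eth1 nic nictype=macvlan parent=eth0.4
```

One caveat with macvlan: the container can reach the rest of the VLAN, but not the LXD host itself (and vice versa).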

Webapp 1

  • webapp1 routes to the internet via eth0.3 and has its own IP address from my central DHCP server
  • webapp1 serves its webapp/SSH on eth0.4

Webapp 2

  • webapp2 routes through the vpn1 container and leaves via eth1.2
  • webapp2 serves its webapp/SSH on eth0.4

vpn1

  • vpn1 has internal network with webapp2
  • vpn1 routes via eth1.2

I’m a bit unsure how to configure the interfaces that route via eth0.3, eth0.4, and eth1.2.

I’m pretty sure the connection between webapp2 and vpn1 would just be a bridge, like:

eth0:
  name: eth0
  nictype: bridged
  parent: lxdbr0
  type: nic

I think I solved this.

Webapp1

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic

This interface is a standard bridge to my host.

Webapp2

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    nictype: macvlan
    parent: eth0.4
    type: nic

With this, the webapp container is bridged with vpn1, and can be accessed on the LAN via eth1 (SSH, etc.).

vpn1

devices:
  eth0:
    nictype: macvlan
    parent: eth1.2
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: lxdbr0
    type: nic

With this container, traffic goes out via my main interface, eth1.2.

I wonder if there’s something like the “Internal Networking” mode of VirtualBox:

Internal networking: This can be used to create a different kind of software-based network which is visible to selected virtual machines, but not to applications running on the host or to the outside world.

Ideally that’s what I’d like eth0 on webapp2 and eth1 on vpn1 to be. That way, if the VPN goes down, my data won’t fall back to being routed out of my host on VLAN 3, i.e. eth0.3.

Is there a way to do an “internal only” network? Maybe routed is the correct NIC type, and I have to set this up manually?

macvlan and ipvlan do not allow the containers to communicate with the host and vice versa.

It may work to parent a macvlan or ipvlan interface off a dummy NIC on the host, so they are not connected to any external NIC.

Alternatively, you could set up a bridge and connect your containers to it without assigning an address on the host.
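The dummy-NIC idea can be sketched like this (dummy0 and the device names are placeholders):

```shell
# a dummy interface on the host: up, but cabled to nothing
ip link add dummy0 type dummy
ip link set dummy0 up

# parent the containers' NICs off it; they can reach each other
# over dummy0, but there is no path to any external NIC
lxc config device add webapp2 eth2 nic nictype=macvlan parent=dummy0
lxc config device add vpn1 eth2 nic nictype=macvlan parent=dummy0
```

Since macvlan NICs sharing a parent operate in bridge mode, the containers see each other but nothing else; note that nothing on this segment will serve DHCP, so the containers would need static addresses.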

That’s fine; there’s no reason for them to do so, although I don’t see why they can’t go through the router and return.

This would allow me to create a subnet just for the containers to communicate with each other on (which is something I think I might do, unless there’s a better way).

In fact with this example I don’t want them communicating with the host, or leaving via the host’s IP.

So I have these two containers, and I have been playing with this.

I want my webapp containers to forward their traffic to the VPN container. eth0.4 is a VLAN on my network with no external route (on the router).

I want the internal lxdbr0 to work only between my containers.
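One way to do the forwarding half: inside each webapp container, replace the default route with vpn1’s address on lxdbr0 (192.168.33.10 below is a placeholder for whatever lease vpn1 actually holds):

```shell
# inside webapp2: send everything via vpn1 on the internal bridge
ip route replace default via 192.168.33.10 dev eth0
```

For the webapps’ traffic to actually leave, vpn1 would also need net.ipv4.ip_forward=1 and a masquerade rule on its outbound (VPN) interface.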

Webapp

# lxc profile show wapp
config: {}
description: webapp route to VPN
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    nictype: macvlan
    parent: eth0.4
    type: nic
  root:
    path: /
    pool: lxd_storage
    type: disk
  shared:
    path: /mnt/shared
    source: /mnt/data/shared
    type: disk
name: wapp
used_by:
- /1.0/instances/app1

VPN

# lxc profile show vpn
config: {}
description: VPN Instances
devices:
  eth0:
    nictype: macvlan
    parent: eth1.2
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd_storage
    type: disk
name: vpn
used_by:
- /1.0/instances/vpn1

lxdbr0 is currently configured:

# lxc network show lxdbr0
config:
  ipv4.address: 192.168.33.1/24
  ipv4.dhcp: "true"
  ipv4.dhcp.ranges: 192.168.33.2-192.168.33.254
  ipv4.nat: "true"
  ipv6.address: 2001:0db8:1234:33::1/64
  ipv6.nat: "true"
description: Internal network
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/app1
- /1.0/instances/vpn1
managed: true
status: Created
locations:
- none

I found that if I didn’t set ipv4.address or ipv6.address (i.e. 192.168.33.1 or 2001:0db8:1234:33::1), none of the containers would get an IP address via DHCP.

I wonder if there is a way to do DHCP but not have traffic from app1 leaking out via my host?
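One option that might work: keep ipv4.address set (so LXD’s dnsmasq keeps serving DHCP on lxdbr0) but turn NAT off, so the host no longer masquerades container traffic out of its own uplinks:

```shell
# DHCP keeps working, but traffic from app1 can no longer
# leave NATed through the host's default routes
lxc network set lxdbr0 ipv4.nat false
lxc network set lxdbr0 ipv6.nat false
```

Replies from the outside would then have nowhere to return to, so in practice app1 can only reach the internet via its route through vpn1; a firewall rule on the host would make the block explicit rather than incidental.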