Hello,
My team is running a small (and ageing) LXD setup: 90+ containers spread across 4+ physical servers, using vpncloud (L2) as an overlay network, plus a dedicated container running a DHCP + DNS server that hands out internal IP addresses to the containers joining the overlay.
All containers have 2 NICs: one connected to the overlay network, and an optional second one if internet connectivity through the LXD bridge is needed.
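For context, each container's NICs are attached roughly like this (simplified sketch; `br-vpn` stands in for the host bridge carrying the vpncloud tap, and `web01` is just an example container):

```
# NIC on the overlay: bridged onto the host bridge carrying the vpncloud (L2) traffic
lxc config device add web01 eth0 nic nictype=bridged parent=br-vpn

# Optional second NIC on the default LXD bridge, for internet access
lxc config device add web01 eth1 nic nictype=bridged parent=lxdbr0
```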
The setup works but involves a lot of moving parts and components outside of the LXD ecosystem. Four-plus years later, is there a standard/recommended way to achieve the same setup? Since LXD and its associated technologies are evolving quickly, there may now be a more standard stack to support this inter-container communication. In particular, OVN looks like a suitable candidate, but it may fall short on encryption (and L2?).
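From the docs, my understanding is that the OVN route on a clustered LXD would look roughly like this (interface names, subnets and network names below are placeholders, not our actual config):

```
# Uplink network used by the OVN virtual routers, declared per cluster member
# first, then once globally with the ranges OVN is allowed to use
lxc network create UPLINK --type=physical parent=enp5s0 --target=server1
lxc network create UPLINK --type=physical parent=enp5s0 --target=server2
lxc network create UPLINK --type=physical \
    ipv4.gateway=192.0.2.1/24 \
    ipv4.ovn.ranges=192.0.2.100-192.0.2.254 \
    dns.nameservers=192.0.2.53

# OVN network spanning the cluster, with built-in DHCP and DNS
lxc network create internal --type=ovn network=UPLINK

# Attach a container to it
lxc config device add web01 eth0 nic network=internal
```

If that is roughly right, it seems to cover the cross-server connectivity and internal naming, but I don't see what it offers for encrypting the traffic between hosts.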
Required features:
- Containers should be able to reach each other using internal naming, regardless of the physical server they are hosted on
- Dynamic “internal” IP address allocation through DHCP, with internal DNS: e.g. $HOSTNAME.internal is added to the internal DNS resolver as soon as $HOSTNAME is given an IP address by the DHCP server
- Traffic between servers (or containers) is encrypted
- Containers are not reachable from the internet by default; explicit exposure (e.g. through an LXD proxy device) is required (see the sketch after this list)
- Ability to add new physical servers and have guests on those servers access the overlay network, preferably without having to declare the new physical server on each existing one (auto-discovery)
- (Optional) A container keeps the same IP address when it is moved from one physical server to another. Today this is achieved by migrating the MAC address along with the container (see the sketch after this list)
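For reference, the last two points are handled today with an LXD proxy device and by pinning the MAC of the overlay NIC, roughly as follows (container name, port and MAC are illustrative):

```
# Explicit exposure: forward a host port into the container, nothing else is reachable
lxc config device add web01 http proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80

# Pin the overlay NIC's MAC so the DHCP server keeps assigning the same internal IP
# after the container is moved to another physical server
lxc config device set web01 eth0 hwaddr=00:16:3e:aa:bb:cc
```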
I regularly check https://linuxcontainers.org/lxd/docs/latest/networks/ but can't tell whether any of the proposed setups would be a suitable replacement/upgrade. As mentioned, what is in place somehow “works”, even if it feels clunky.
Thanks!