New to LXD (Linux networking/Containers in General) Guidance Request

I went through the tutorials and built containers connected to a bridge, NAT’d behind my host’s real IP. I played with the port forwarding feature to expose some container services as desired. It all worked great; I’m really pleased. It may serve me just as it is; however, I was thinking of segregating containers from communicating with the host, or at least putting some firewall filtering between them. I also pictured filtering the traffic between containers by creating additional bridges with some type of filtering between them.
I love the simplicity of NATting and port forwarding based on container name; it makes things flexible. What are some examples of the preferred method for filtering traffic between a container and the host, or between two containers? What is the preferred networking for containers in a production environment? Are people mostly going with bridges with NAT?

I am familiar with VMware switching/routing and networking in general. I have attached some ideas I had. I like visuals! What comes to mind is router-on-a-stick, which would give you filtering at a physical firewall with good logging and monitoring capabilities, but that would add hairpin traffic over the link. I also thought of routing at the host level, where I would rely on iptables for filtering. Maybe the current configuration works for me: bridges and NATting through the host. Perhaps those of you with Linux networking and container experience could point me to some best practices on efficiency and design. The firewall filtering is where I really have a lot of questions about how to do it properly. I have been reading all the great documentation on the website, but there isn’t a lot of guidance for these questions, more a ton of options. Thanks for the help.

I apologize that my original request for guidance/best practices was too broad. Maybe more than anything I am trying to understand Linux networking and packet filtering (firewalls) so I can make design decisions.

I have read man pages, tutorials, etc. and still feel confused. Let me try to be more specific with some questions.

This is how I believe the networking generated by lxd init works. A bridge (lxdbr0) is created with an auto-generated subnet. An IP from that subnet is assigned to the bridge, which is the default gateway for all containers connected to the bridge.
Containers connect to this bridge with a virtual patch cable: a veth pair, with the host-side interface (no IP) attached to the bridge and the container-side interface assigned an IP by dnsmasq.
All container traffic leaving its network is source-NAT’d to the IP address of the host’s physical interface.
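If it helps to check that mental model against a real host, these read-only commands (assuming the default bridge name lxdbr0) show the managed network and the bridge interface:

```shell
# Show the LXD-managed network configuration (subnet, DHCP, NAT settings)
lxc network show lxdbr0

# The bridge shows up as an ordinary host interface holding the gateway IP
ip addr show lxdbr0
```

The output will vary per host, since the subnet is generated at lxd init time.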

  1. Is my understanding correct thus far?
  2. Why isn’t the host’s physical NIC required to connect to lxdbr0 with a veth? (Edit: I think I partly answered this myself. The host’s physical NIC is in a different subnet; if it connected to lxdbr0 it would become part of that bridge network, so it doesn’t.) Is lxdbr0 considered an interface on the host? The physical interface (eth0) on the host does not seem to be connected to the bridge.
  3. How do I configure firewall filtering policies between the bridged network and other networks?
    Looking for the preferred method, I know in Linux there are multiple ways to accomplish things. I guess preferably with lxc commands.
  4. What is the best way to view these filtering policies?

Thanks again, and I apologize for the very broad original post; I hope this is more manageable. I am having a hard time visualizing how everything works. I love learning, though it takes a while to click, and I do not like using technology I do not completely understand. I have been trying to draw out the networking. Any help will be greatly appreciated.

I am not sure whether I will be able to answer any of your questions.
I’ll go through how I see networking in LXD containers, and I hope it helps you in some way.

LXD gives you a default setup with the private bridge. This gets you working and is a sane default.
This private bridge is a managed (by LXD) network, and you get DHCP/DNS.
Containers get access to the Internet, and can also communicate with each other (unless you change that, see security.* keys).

If you want to start tinkering, you can start off with no networking at all. Isolated. See https://blog.simos.info/a-network-isolated-container-in-lxd/
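As a concrete sketch of that isolated setup (the profile and image names here are just examples; the default profile’s NIC device is usually called eth0, but check with `lxc profile show default`):

```shell
# Copy the default profile and remove its network device
lxc profile copy default isolated
lxc profile device remove isolated eth0

# Containers launched with this profile start with no network interface
lxc launch ubuntu:18.04 lonely -p isolated
```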

Going back to bridging, there is the option of native bridging and openvswitch. See https://linuxcontainers.org/lxd/docs/master/networks
For complex networks, look into openvswitch.

There are several options for network devices, shown here, https://linuxcontainers.org/lxd/docs/master/instances#type-nic
Above, we already mentioned the bridged and no-network-device options.

Thank you for taking the time to respond. I will read through the documentation you provided and hope LXD and Linux networking click!

Your understanding of the private lxdbr0 is correct. It is not connected to the host’s physical interface and is considered a separate interface on the host, with a randomly generated IPv4 and IPv6 subnet.

Each container is connected to the private bridge using a veth-pair, with one end in the container and the other end in the host connected to the bridge (which is why the veth interface on the host does not have an IP address).
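You can observe both ends of the veth pair; for example (the container name here is a placeholder, and interface names will differ on your system):

```shell
# Host-side veth interfaces attached to the bridge (note: no IP addresses)
ip link show master lxdbr0

# The peer end inside the container carries the dnsmasq-assigned IP
lxc exec mycontainer -- ip addr show eth0
```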

Outbound traffic leaving lxdbr0 is then source-NATted to the address of the host interface it leaves on.
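That SNAT is implemented with an ordinary masquerade rule on the host, which you can inspect (the subnet in the comment below is illustrative; yours is randomly generated):

```shell
# LXD's NAT rule is roughly of the form:
#   -A POSTROUTING -s 10.x.x.0/24 ! -d 10.x.x.0/24 -j MASQUERADE
iptables -t nat -S POSTROUTING | grep -i lxd
```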

As @simos said, this is a sane default, as it allows users to get up and running quickly with containers without having to worry about their external network setup. It is also relatively secure, because by default it does not expose any services running in the containers to the wider network (only between themselves and the host).

You can however expose services running inside containers to the wider network and make it appear that they are running on the host’s IP by using the proxy device (see the docs here https://linuxcontainers.org/lxd/docs/master/instances#type-proxy).
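For example (the container and device names are placeholders), forwarding the host’s port 80 to a web server listening inside a container:

```shell
# Listen on port 80 of all host addresses, forward to port 80 in the container
lxc config device add mycontainer myport80 proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
```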

However, as @simos said these are just the defaults, and there are also several other networking options available that allow containers to be directly connected to the external network.


Thanks for confirming! I am feeling pretty good about the networking options after some tinkering, but I am still unsure about filtering traffic between containers and other networks. I will have to study more. Should I be looking at iptables, ufw, or lxc-type commands to understand filtering network traffic for containers?

When using a private bridge (i.e. lxdbr0), all traffic to/from your host (and any external networks) goes via the lxdbr0 interface, as it acts like a virtual switch.

This means you can use software firewalls on the host, such as iptables/nftables to filter traffic.
If you’re looking to filter traffic between containers or restrict traffic for specific containers then you can use a combination of:

  1. Statically assigned DHCP leases, using lxc config device set <container> eth0 ipv4.address=n.n.n.n, combined with the built-in filtering features that prevent containers from spoofing their MAC and IP addresses (see the security.*_filtering settings here: https://linuxcontainers.org/lxd/docs/master/instances#nictype-bridged). You can then use iptables, ebtables, or nftables to filter traffic going through the bridge using the known static IPs.

Note: if you want to use iptables to filter bridge traffic, you will need to ensure the following sysctls are enabled:

net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
  2. Alternatively, rather than rely on predictable IP addresses, you could statically assign a host-side interface name to each container and then filter by interface name on the host.

You can set a host-side interface name for a container using: lxc config device set <container> eth0 host_name=myct
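Putting the two approaches together, a minimal sketch (the container name, addresses, and host_name value are assumptions; if eth0 is inherited from a profile you may need `lxc config device override` first):

```shell
# Make sure iptables sees bridged traffic
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Approach 1: pin the container's IP, then filter by address
lxc config device set web eth0 ipv4.address=10.0.3.10
iptables -I FORWARD -s 10.0.3.10 -d 10.0.3.20 -j DROP

# Approach 2: pin the host-side interface name, then filter by interface
lxc config device set web eth0 host_name=veth-web
iptables -I FORWARD -m physdev --physdev-in veth-web -d 10.0.3.20 -j DROP
```

The physdev match is used here because bridged traffic traverses FORWARD with the bridge (lxdbr0) as the logical interface, not the individual veth.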

Thank you, this was helpful. I appreciate everyone. I believe the “sane” default LXD options serve my needs, with the ability to filter traffic with iptables and port-forward inbound traffic to containers.