How do I configure firewalld on the host so that "routed" containers are reachable from the internet?

I’m using AlmaLinux 9 with firewalld and LXD 5.10.

The container runs an ubuntu/jammy/cloud image.

The container is routed.

I followed instructions from this tutorial: How to use a second IP with a container and routed NIC

What’s weird is that outbound connections worked for me out of the box; I didn’t need to add the container’s veth to the trusted zone. I won’t complain though :slight_smile: . The issue is that the container is not accessible from the internet as long as firewalld is up and running (I’m getting a Packet filtered error on ping, and a No route to host error on an ssh attempt).

The tutorial mentions adding the following rules to firewalld’s configuration to fix the inbound connectivity issues:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -o [my-containers-veth] -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -o [my-containers-veth] -j ACCEPT
firewall-cmd --reload
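(The bracketed interface name is a placeholder. If you’re not sure what the host-side veth of your routed NIC is called, one way to find it is to list the veth interfaces on the host; this is just a sketch, the names on your system will differ:)

```shell
# On the host: list veth interfaces in brief form.
# The routed NIC's host-side end will appear here.
ip -br link show type veth
```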

However, in my case it has no effect.

When I stop firewalld I can log into the container from the internet just fine.

Is there anything else that I’m missing?

Try repeating the same filter commands with the -i option as well (for the input interface).
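For reference, the mirrored rules would look something like this (a sketch, keeping the placeholder interface name from above; -i matches the interface a packet arrives on, -o the one it leaves by):

```shell
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i [my-containers-veth] -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -i [my-containers-veth] -j ACCEPT
firewall-cmd --reload
```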

Alternatively, try to use a rule like this, which enables forwarding for all the interfaces, in both directions:

firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload

There are some more details in this section.

Unfortunately, it doesn’t work. I swear Linux networking hates me. :confused:

[luken@localhost ~]$ sudo iptables-save
# Generated by iptables-save v1.8.8 (nf_tables) on Fri Feb 17 16:10:17 2023
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Fri Feb 17 16:10:17 2023

This is also interesting.
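(Worth noting: on EL9 firewalld uses the nftables backend by default, so its rules don’t necessarily show up in iptables-save. To see what’s actually loaded, you can dump the nftables ruleset instead, assuming the nft tool is installed:)

```shell
# Dump the full nftables ruleset, including firewalld's own tables and chains.
sudo nft list ruleset
```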

I just realized that the container’s IP is from a different subnet. Does that matter? Is it possible that’s why firewalld is filtering out the traffic coming to it?

I enabled firewalld’s logDenied, and I can see the rejected packets of my pings:

filter_FWD_public_REJECT: "IN=enp1s0f0 OUT=veth-fb (…)
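(For anyone wanting to reproduce this: logDenied can be switched on at runtime, roughly like this; the rejected packets then show up in the kernel log.)

```shell
sudo firewall-cmd --set-log-denied=all   # log all denied packets
sudo firewall-cmd --get-log-denied       # verify the setting
# Rejected/dropped packets then appear in the kernel log:
sudo journalctl -k | grep -i reject
```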

I put veth-fb (my container’s veth) into a custom lxd zone configured the same way as the trusted zone.

Then I tried to allow forwarding between the two zones with a custom policy, like this:

firewall-cmd --permanent --new-policy lxd-forwarding
firewall-cmd --permanent --policy lxd-forwarding --add-ingress-zone public
firewall-cmd --permanent --policy lxd-forwarding --add-ingress-zone lxd
firewall-cmd --permanent --policy lxd-forwarding --add-egress-zone public
firewall-cmd --permanent --policy lxd-forwarding --add-egress-zone lxd

firewall-cmd --permanent --policy lxd-forwarding --set-target ACCEPT

But after that, I’m just getting Destination Host Unreachable on ping.

I also tried putting my main interface and the veth into the same public zone with intra-zone forwarding enabled. Destination Host Unreachable again.
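For reference, the intra-zone forwarding attempt was along these lines (a sketch, assuming firewalld 1.0+ where zones have a forward setting; the interface name is a placeholder):

```shell
firewall-cmd --permanent --zone=public --add-interface=[your-containers-veth-name]
firewall-cmd --permanent --zone=public --add-forward   # allow forwarding between interfaces within the zone
firewall-cmd --reload
```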

Did anyone try to run LXD with routed containers on Almalinux 9 with firewalld enabled?

Alright, I figured it out, but it was… a painful learning experience :slight_smile: . My last attempt was actually correct, but I had a misconfiguration in the container (I had a different image loaded than I thought, one with broken networking). To sum up, there seem to be two ways of configuring this:

  1. The simplest one is to move the container’s veth to the public zone where your main interface is. However, I assume that this way all public firewall rules will also apply to your containers (please correct me if I’m wrong, but it seems logical), and you probably want separation there.
  2. You can also keep your veths in a separate zone. For that, you need to do this:

Create an lxd zone for the containers, then assign the container’s veth to it:

firewall-cmd --permanent --new-zone lxd
firewall-cmd --permanent --zone=lxd --change-interface=[your-containers-veth-name]

The next thing is to create a policy:

firewall-cmd --permanent --new-policy lxd-forwarding
firewall-cmd --permanent --policy lxd-forwarding --add-ingress-zone public
firewall-cmd --permanent --policy lxd-forwarding --add-egress-zone lxd
firewall-cmd --permanent --policy lxd-forwarding --set-target ACCEPT
firewall-cmd --reload

This seems to be the minimal setup that did it for me. I hope it helps someone!
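(To sanity-check the result, the new zone and policy can be inspected after the reload; zone and policy names here assume the setup above.)

```shell
firewall-cmd --get-active-zones            # the veth should be listed under the lxd zone
firewall-cmd --info-zone=lxd               # show the zone's interfaces and settings
firewall-cmd --info-policy=lxd-forwarding  # should show target: ACCEPT and the ingress/egress zones
```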


And one more thing that may clear up some confusion: I believe the direct rule ipv4 filter FORWARD 0 -j ACCEPT didn’t work because it’s an iptables rule, and firewalld uses nftables as its backend by default (at least on AlmaLinux 9).

That’s what’s often confusing about LXD: to get working networking, you often have to do different things on each distro (host and container alike). Containers vs. VMs may also require different steps, and so on. It would be nice to have a single source of knowledge on setting up bridged/routed networking on all the popular distros (and their major versions). I could contribute what I’m discovering :slight_smile: .

It appears that I can no longer edit the solution, but I wanted to add that when I later installed firewalld on an Ubuntu host, it turned out I also had to add the reverse ingress/egress settings (outgoing packets from the guest were filtered). It’s unclear to me why it worked without them on the AlmaLinux host. So, to be safe, also run:

firewall-cmd --permanent --policy lxd-forwarding --add-ingress-zone lxd
firewall-cmd --permanent --policy lxd-forwarding --add-egress-zone public
firewall-cmd --reload