Disabling a container's access to its host's network

Hey LXD folks. I am trying to disable access to the LXD container’s host’s network, while still allowing connections from the host to the container and from the container to the internet.

I am using a straightforward bridged network. Here is the configuration when initializing LXD:

networks:
- config:
    ipv4.address: 10.0.0.1/16
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""

In all of my testing, I have only been able to block all traffic to and from the host. Is it possible to just block traffic to the host, while still allowing the host to connect to the container and the container to connect to the internet?

Thanks in advance.

You can add a firewall rule to the lxdbr0 interface on the INPUT chain to stop instances on lxdbr0 from communicating directly with services running on the LXD host.

However, you should be aware that LXD provides DHCP and DNS services to instances on lxdbr0 via dnsmasq, so be careful not to block those.
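
For example, a minimal sketch of such rules, scoped to the bridge interface (each -I inserts at the top of the chain, so the ACCEPT rules end up above the REJECT appended with -A; adjust to your setup):

# Allow DHCP and DNS from instances to dnsmasq on the host.
iptables -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT
iptables -I INPUT -i lxdbr0 -p udp --dport 53 -j ACCEPT
iptables -I INPUT -i lxdbr0 -p tcp --dport 53 -j ACCEPT
# Reject everything else arriving on lxdbr0 that is addressed to the host itself.
iptables -A INPUT -i lxdbr0 -j REJECT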

Are there specific services you’re trying to block access to?

Hi!

There are several networking options, and you can choose among them depending on what you have in mind.

You mention that you are trying to “disable access to the LXD container’s host’s network”. I suppose you mean that you want to disable access to the host itself, but let the container access the rest of the LAN?

  1. If you are OK with setting up macvlan networking for the container, then a feature of that setup is that the container CANNOT communicate with the host, and there is no way around it. macvlan lets the container obtain an IP address directly from your LAN. However, it does not work if the host is connected to the LAN over WiFi (it works over Ethernet). See the sketch after this list.
  2. An alternative is to avoid configuring any networking for the container at all, and then selectively make specific network services accessible to it.
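
For option 1, a minimal sketch of attaching a macvlan NIC (c1 and eth0 are placeholders for your own container and wired host interface names):

# Give the container a macvlan NIC on the host's Ethernet interface.
lxc config device add c1 eth0 nic nictype=macvlan parent=eth0

After that, the container obtains its address from the LAN's DHCP server rather than from lxdbr0.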

Thanks so much for the fast responses!

To clarify, what I am hoping to accomplish is to allow the container to continue to connect to the internet, but completely disable access to the host’s LAN. The context is that I would like to provide sandboxed container environments for use by external parties, but do not want to allow those sandboxes to be able to access network services running on the host.

So for example, in an ideal scenario, the following would happen:

  1. From inside the container, ssh 10.0.0.1, curl 10.0.0.1, etc. would not work.
  2. From inside the container, curl google.com would work.
  3. From the host, curl <container ip>:<an open service port>, e.g. curl 10.0.111.111:8000, would work.

So far I have tried using iptables with an INPUT rule like the following, which does block traffic to the host as I want for (1):

iptables -I INPUT -s 10.0.0.1/16 -d 10.0.0.1 -j REJECT

Per @tomp’s suggestion, I just added a rule like:

iptables -I INPUT -p udp -s 10.0.0.1/16 --dport 53 -j ACCEPT

That gets DNS working (since -I inserts the new rule at the top of the chain, above the earlier REJECT), so (2) is good to go now! Last up, for (3), I try:

iptables -I INPUT -s 10.0.0.1 -d 10.0.0.1/16 -j ACCEPT

That does not work (I still cannot curl the container from the host).

My understanding of iptables is pretty limited, so perhaps there’s just a different rule I need to add?

I think you would need to make your rules stateful, so that they allow return packets from the container to the host when they are part of a connection initiated from the host. (Your last ACCEPT rule never matches anything: packets from the host to the container traverse the OUTPUT chain, not INPUT, while the container’s replies arrive on INPUT with a source address inside 10.0.0.0/16 and so still hit the REJECT rule.)

So add something like this before the REJECT or DROP rule:

# Allow inbound packets that are part of, or related to, an already-tracked connection.
# (-I inserts this at the top of the chain, ahead of the REJECT.)
iptables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
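
You can then verify the ordering with something like:

# List the INPUT chain with rule positions to confirm the ACCEPT sits above the REJECT.
iptables -L INPUT -n --line-numbers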

Great, thank you. This appears to be doing exactly what I want.

One other thing I have noticed is that containers do not seem to have the existing iptables rules applied to them upon creation, requiring the iptables rules on the container’s host to be removed and re-created. Is this the expected behaviour?

I’m not sure what you mean; by default we do not apply any firewall rules to the host or container for each container started.

This happens when we first create and start a container: it doesn’t seem to respect the existing iptables rules that have been set on the host, requiring us to remove and re-create all of the rules.

Can you give me an example of the rules you’ve added, the container you create, and the tests you run that make you think iptables isn’t taking effect, please?

And can you show the output of iptables-save before and after the issue arises? Thanks.

Hmm… that issue is proving harder to reproduce than I thought. I will open a new post if I can get a solid reproduction case.

Thank you very much for all the help with this. Everything is working splendidly, except websocket connections seem to drop after about 10 minutes of no activity on the socket. Do you know if there is a way to configure iptables that will prevent that?

Are the websocket connections active? Can you enable TCP keepalives on them at a lower level?
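
For example, on Linux you could shorten the kernel’s keepalive timers so an otherwise idle connection still generates periodic traffic (a sketch with illustrative values; it only helps if the application enables SO_KEEPALIVE on its sockets):

# Send the first keepalive probe after 4 minutes of idle time (default is 7200s),
# then probe every 30 seconds, giving up after 5 unanswered probes.
sysctl -w net.ipv4.tcp_keepalive_time=240
sysctl -w net.ipv4.tcp_keepalive_intvl=30
sysctl -w net.ipv4.tcp_keepalive_probes=5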

When there’s no activity on them after 10 minutes, they seem to disconnect. We do have timeouts disabled in our proxies, but I’m guessing there’s something at an application level somewhere that is causing it.

I’m going to close this post since the original question has been addressed. Thanks again for your help. I looked around for a post containing this information for quite a while before asking, so I’m sure others will benefit from it in the future as well.

Wishing you and the team happy holidays and a safe new year.
