Can't manage to: Restrict network egress to public hosts only (should not be able to reach hosts in local network)

Hello,

I’m running Docker inside an LXD instance. In Docker, I’m running Uptime Kuma monitoring, which has to reach HTTP/HTTPS/DNS/ICMP endpoints in order to check the availability of external services.

The Docker container, or the whole LXD instance, should only be able to talk to the internet, not to my local network 192.168.0.0/16. I tried to achieve that with LXD network ACLs, as shown here:

https://www.youtube.com/watch?v=mu34G0cX6Io&t=356s

Here’s my ACL definition:

egress:
- action: drop
  destination: 192.168.0.0/16
  state: enabled
- action: allow
  protocol: icmp4
  description: Ping
  state: enabled
- action: allow
  protocol: udp
  destination_port: "53"
  description: DNS
  state: enabled
- action: allow
  protocol: tcp
  destination_port: 80,443,587
  description: HTTP,HTTPS,Mail
  state: enabled
ingress:
- action: allow
  protocol: tcp
  destination_port: 80,3001
  description: Incoming HTTP
  state: enabled

With this in place, I can’t ping e.g. 192.168.1.123 (good), but I can still curl 192.168.1.123 (not what I wanted). Is there a way to completely isolate my LXD instance from the hosts in my local network?

Thank you in advance for the help.

Greetings from Switzerland

Please show ip a and ip r on the LXD host and inside the instance.

Also please show lxc config show <instance> --expanded and lxc network show <network> so I can see how you’ve assigned the ACL.

LXD host

lxc config show d1 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20221110_15:05)
  image.os: Debian
  image.release: bullseye
  image.serial: "20221110_15:05"
  image.type: squashfs
  image.variant: cloud
  raw.lxc: lxc.apparmor.profile = unconfined
  security.nesting: "true"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
  volatile.base_image: 8a2a2f7e14611e7990c0794ecafeb002aca2467dbc417cbb1f5758a336498f3c
  volatile.cloud-init.instance-id: aa187ccf-7af0-430b-8d56-ce656d73c1dd
  volatile.eth0.host_name: vethab2263d8
  volatile.eth0.hwaddr: 00:16:3e:ac:d3:75
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 8f0809b5-116a-44a3-94d5-1d9b4047ac5b
devices:
  eth0:
    name: eth0
    network: demobr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc network show demobr0 
config:
  ipv4.address: 10.183.95.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:78bc:f7c6:965f::1/64
  ipv6.nat: "true"
  security.acls: demo
description: ""
name: demobr0
type: bridge
used_by:
- /1.0/instances/d1
managed: true
status: Created
locations:
- none
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 70:85:c2:fc:3a:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.110/24 brd 192.168.1.255 scope global dynamic noprefixroute enp4s0
       valid_lft 83375sec preferred_lft 83375sec
    inet6 fe80::68de:868:651a:5472/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: wlp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 48:89:e7:34:35:73 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:e0:97:78:53 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e0ff:fe97:7853/64 scope link 
       valid_lft forever preferred_lft forever
5: br-f7e076eb67c1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:58:0a:87:a3 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-f7e076eb67c1
       valid_lft forever preferred_lft forever
6: br-f7f1cd827055: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f3:3e:7a:7d brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-f7f1cd827055
       valid_lft forever preferred_lft forever
7: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.114.1/24 brd 192.168.114.255 scope global vmnet1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fec0:1/64 scope link 
       valid_lft forever preferred_lft forever
8: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
    inet 192.168.204.1/24 brd 192.168.204.255 scope global vmnet8
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fec0:8/64 scope link 
       valid_lft forever preferred_lft forever
9: br-4ddd178bc073: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:84:7c:ef:6c brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-4ddd178bc073
       valid_lft forever preferred_lft forever
    inet6 fe80::42:84ff:fe7c:ef6c/64 scope link 
       valid_lft forever preferred_lft forever
11: vethf2b7431@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4ddd178bc073 state UP group default 
    link/ether a6:30:6d:ea:ee:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a430:6dff:feea:ee50/64 scope link 
       valid_lft forever preferred_lft forever
13: veth6e267a0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4ddd178bc073 state UP group default 
    link/ether aa:5e:ef:2d:8c:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::a85e:efff:fe2d:8c2f/64 scope link 
       valid_lft forever preferred_lft forever
15: veth7b2d40b@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4ddd178bc073 state UP group default 
    link/ether 62:98:d3:0a:8e:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::6098:d3ff:fe0a:8ec4/64 scope link 
       valid_lft forever preferred_lft forever
17: veth2ddff55@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4ddd178bc073 state UP group default 
    link/ether 62:1a:ce:ad:f9:af brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::601a:ceff:fead:f9af/64 scope link 
       valid_lft forever preferred_lft forever
19: veth0eb2e25@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-4ddd178bc073 state UP group default 
    link/ether 46:74:a2:bf:55:18 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::4474:a2ff:febf:5518/64 scope link 
       valid_lft forever preferred_lft forever
20: demobr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:26:66:7e brd ff:ff:ff:ff:ff:ff
    inet 10.183.95.1/24 scope global demobr0
       valid_lft forever preferred_lft forever
    inet6 fd42:78bc:f7c6:965f::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe26:667e/64 scope link 
       valid_lft forever preferred_lft forever
21: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:8c:7a:d0 brd ff:ff:ff:ff:ff:ff
    inet 10.123.208.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:b2aa:5af2:f285::1/64 scope global 
       valid_lft forever preferred_lft forever
23: vethf4213db8@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether de:2b:51:93:14:ce brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet 169.254.0.1/32 scope global vethf4213db8
       valid_lft forever preferred_lft forever
    inet6 fe80::dc2b:51ff:fe93:14ce/64 scope link 
       valid_lft forever preferred_lft forever
25: veth1ca340d@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether b2:50:15:52:f0:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::b050:15ff:fe52:f0f6/64 scope link 
       valid_lft forever preferred_lft forever
27: vethab2263d8@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master demobr0 state UP group default qlen 1000
    link/ether f2:37:2d:86:14:df brd ff:ff:ff:ff:ff:ff link-netnsid 8

ip r
default via 192.168.1.1 dev enp4s0 proto dhcp metric 100 
10.123.208.0/24 dev lxdbr0 proto kernel scope link src 10.123.208.1 linkdown 
10.183.95.0/24 dev demobr0 proto kernel scope link src 10.183.95.1 
169.254.0.0/16 dev enp4s0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.18.0.0/16 dev br-f7e076eb67c1 proto kernel scope link src 172.18.0.1 linkdown 
172.19.0.0/16 dev br-f7f1cd827055 proto kernel scope link src 172.19.0.1 linkdown 
172.20.0.0/16 dev br-4ddd178bc073 proto kernel scope link src 172.20.0.1 
192.168.1.0/24 dev enp4s0 proto kernel scope link src 192.168.1.110 metric 100 
192.168.1.221 dev vethf4213db8 scope link 
192.168.114.0/24 dev vmnet1 proto kernel scope link src 192.168.114.1 
192.168.204.0/24 dev vmnet8 proto kernel scope link src 192.168.204.1

LXD instance (d1)

root@d1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:23:65:1e:06 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ac:d3:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.183.95.205/24 brd 10.183.95.255 scope global dynamic eth0
       valid_lft 3189sec preferred_lft 3189sec
    inet6 fd42:78bc:f7c6:965f:216:3eff:feac:d375/64 scope global dynamic mngtmpaddr 
       valid_lft 3255sec preferred_lft 3255sec
    inet6 fe80::216:3eff:feac:d375/64 scope link 
       valid_lft forever preferred_lft forever
root@d1:~# ip r
default via 10.183.95.1 dev eth0 
10.183.95.0/24 dev eth0 proto kernel scope link src 10.183.95.205 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown

Now that’s a bit surprising. I tested the setup in another environment:

lxc network create demobr0 --type=bridge

lxc launch images:debian/11/cloud d1 --network demobr0

lxc network acl create demo
lxc network set demobr0 security.acls=demo

lxc network acl edit demo
...
egress:
- action: drop
  destination: 192.168.0.0/16
  state: enabled
- action: allow
  protocol: icmp4
  description: Ping
  state: enabled
- action: allow
  protocol: udp
  destination_port: "53"
  description: DNS
  state: enabled
- action: allow
  protocol: tcp
  destination_port: 80,443,587
  description: HTTP,HTTPS,Mail
  state: enabled
ingress:
- action: allow
  protocol: tcp
  destination_port: 80,3001
  description: Incoming HTTP
  state: enabled
...

In this other environment, I’m not able to curl 192.168.1.123 (which is what I want). The main difference between the original and the new environment is that in the original one, Docker is also installed on the LXD host. Maybe Docker interferes with LXD’s iptables rules… But demobr0 is isolated and has nothing to do with Docker, right?

The bridges are separate, true. But the firewall rules are a single global ruleset for the whole host.
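To see why that matters: iptables evaluates the FORWARD chain top to bottom, so a broad ACCEPT written by one tool can shadow a later jump into another tool's ACL chain. A minimal sketch (the two rules below are a made-up excerpt standing in for real `iptables-save` output; on your host you'd grep the actual dump):

```shell
# Made-up excerpt standing in for real `iptables-save` output; the point is
# only that FORWARD rules are evaluated in order, so a broad ACCEPT from one
# tool can shadow a later jump into another tool's ACL chain.
cat > /tmp/forward-demo.txt <<'EOF'
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD ! -i demobr0 -o demobr0 -j lxd_acl_demobr0
EOF
# Number the FORWARD rules in evaluation order:
grep '^-A FORWARD' /tmp/forward-demo.txt | nl
```

Depending on which tool wrote its rules last, the real ordering on your host may differ from what you expect, which is why I'd like to see the full dump.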

Please show lxc info | grep firewall: and sudo iptables-save and sudo nft list ruleset (if installed).

lxc info | grep firewall:

firewall: xtables


sudo nft list ruleset

<no output>


sudo iptables-save

# Generated by iptables-save v1.8.4 on Tue Nov 22 14:19:56 2022
*raw
:PREROUTING ACCEPT [795865:938609803]
:OUTPUT ACCEPT [273289:38634613]
-A PREROUTING -i veth9048143b -m rpfilter --invert -m comment --comment "generated for LXD container dr (eth0) rpfilter" -j DROP
COMMIT
# Completed on Tue Nov 22 14:19:56 2022
# Generated by iptables-save v1.8.4 on Tue Nov 22 14:19:56 2022
*mangle
:PREROUTING ACCEPT [796885:939816795]
:INPUT ACCEPT [785616:936936007]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [273766:38678083]
:POSTROUTING ACCEPT [279636:43601635]
-A POSTROUTING -o lxdbr0 -p udp -m udp --dport 68 -m comment --comment "generated for LXD network lxdbr0" -j CHECKSUM --checksum-fill
-A POSTROUTING -o demobr0 -p udp -m udp --dport 68 -m comment --comment "generated for LXD network demobr0" -j CHECKSUM --checksum-fill
COMMIT
# Completed on Tue Nov 22 14:19:56 2022
# Generated by iptables-save v1.8.4 on Tue Nov 22 14:19:56 2022
*filter
:INPUT ACCEPT [785550:936854931]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [272587:37911543]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:lxd_acl_demobr0 - [0:0]
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p udp -m udp --sport 68 --dport 67 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p udp -m udp --dport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -m comment --comment "generated for LXD network demobr0" -j lxd_acl_demobr0
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p udp -m udp --dport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A INPUT -i demobr0 -p udp -m udp --dport 67 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A FORWARD -o lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -i lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD ! -i demobr0 -o demobr0 -m comment --comment "generated for LXD network demobr0" -j lxd_acl_demobr0
-A FORWARD -i demobr0 ! -o demobr0 -m comment --comment "generated for LXD network demobr0" -j lxd_acl_demobr0
-A FORWARD -o demobr0 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A FORWARD -i demobr0 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-f7f1cd827055 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-f7f1cd827055 -j DOCKER
-A FORWARD -i br-f7f1cd827055 ! -o br-f7f1cd827055 -j ACCEPT
-A FORWARD -i br-f7f1cd827055 -o br-f7f1cd827055 -j ACCEPT
-A FORWARD -o br-f7e076eb67c1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-f7e076eb67c1 -j DOCKER
-A FORWARD -i br-f7e076eb67c1 ! -o br-f7e076eb67c1 -j ACCEPT
-A FORWARD -i br-f7e076eb67c1 -o br-f7e076eb67c1 -j ACCEPT
-A FORWARD -o br-4ddd178bc073 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-4ddd178bc073 -j DOCKER
-A FORWARD -i br-4ddd178bc073 ! -o br-4ddd178bc073 -j ACCEPT
-A FORWARD -i br-4ddd178bc073 -o br-4ddd178bc073 -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p tcp -m tcp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p udp -m udp --sport 67 --dport 68 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -m comment --comment "generated for LXD network demobr0" -j lxd_acl_demobr0
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p tcp -m tcp --sport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p udp -m udp --sport 53 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A OUTPUT -o demobr0 -p udp -m udp --sport 67 -m comment --comment "generated for LXD network demobr0" -j ACCEPT
-A DOCKER -d 172.20.0.2/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER -d 172.20.0.3/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 5800 -j ACCEPT
-A DOCKER -d 172.20.0.4/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 5800 -j ACCEPT
-A DOCKER -d 172.20.0.5/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.20.0.6/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 17001 -j ACCEPT
-A DOCKER -d 172.20.0.6/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 8180 -j ACCEPT
-A DOCKER -d 172.20.0.6/32 ! -i br-4ddd178bc073 -o br-4ddd178bc073 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-f7f1cd827055 ! -o br-f7f1cd827055 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-f7e076eb67c1 ! -o br-f7e076eb67c1 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-4ddd178bc073 ! -o br-4ddd178bc073 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-f7f1cd827055 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-f7e076eb67c1 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-4ddd178bc073 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A lxd_acl_demobr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A lxd_acl_demobr0 -d 192.168.0.0/16 -i demobr0 -j DROP
-A lxd_acl_demobr0 -o demobr0 -p tcp -m multiport --dports 80,3001 -j ACCEPT
-A lxd_acl_demobr0 -i demobr0 -p icmp -j ACCEPT
-A lxd_acl_demobr0 -i demobr0 -p udp -m multiport --dports 53 -j ACCEPT
-A lxd_acl_demobr0 -i demobr0 -p tcp -m multiport --dports 80,443,587 -j ACCEPT
-A lxd_acl_demobr0 -i demobr0 -j REJECT --reject-with icmp-port-unreachable
-A lxd_acl_demobr0 -o demobr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Tue Nov 22 14:19:56 2022
# Generated by iptables-save v1.8.4 on Tue Nov 22 14:19:56 2022
*nat
:PREROUTING ACCEPT [16145:4283883]
:INPUT ACCEPT [4880:1404267]
:OUTPUT ACCEPT [16008:6756145]
:POSTROUTING ACCEPT [14822:5991016]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 10.123.208.0/24 ! -d 10.123.208.0/24 -m comment --comment "generated for LXD network lxdbr0" -j MASQUERADE
-A POSTROUTING -s 10.183.95.0/24 ! -d 10.183.95.0/24 -m comment --comment "generated for LXD network demobr0" -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.19.0.0/16 ! -o br-f7f1cd827055 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o br-f7e076eb67c1 -j MASQUERADE
-A POSTROUTING -s 172.20.0.0/16 ! -o br-4ddd178bc073 -j MASQUERADE
-A POSTROUTING -s 172.20.0.2/32 -d 172.20.0.2/32 -p tcp -m tcp --dport 8080 -j MASQUERADE
-A POSTROUTING -s 172.20.0.3/32 -d 172.20.0.3/32 -p tcp -m tcp --dport 5800 -j MASQUERADE
-A POSTROUTING -s 172.20.0.4/32 -d 172.20.0.4/32 -p tcp -m tcp --dport 5800 -j MASQUERADE
-A POSTROUTING -s 172.20.0.5/32 -d 172.20.0.5/32 -p tcp -m tcp --dport 9000 -j MASQUERADE
-A POSTROUTING -s 172.20.0.6/32 -d 172.20.0.6/32 -p tcp -m tcp --dport 17001 -j MASQUERADE
-A POSTROUTING -s 172.20.0.6/32 -d 172.20.0.6/32 -p tcp -m tcp --dport 8180 -j MASQUERADE
-A POSTROUTING -s 172.20.0.6/32 -d 172.20.0.6/32 -p tcp -m tcp --dport 8080 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-f7f1cd827055 -j RETURN
-A DOCKER -i br-f7e076eb67c1 -j RETURN
-A DOCKER -i br-4ddd178bc073 -j RETURN
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 9999 -j DNAT --to-destination 172.20.0.2:8080
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 5800 -j DNAT --to-destination 172.20.0.3:5800
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 5801 -j DNAT --to-destination 172.20.0.4:5800
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 9000 -j DNAT --to-destination 172.20.0.5:9000
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 17001 -j DNAT --to-destination 172.20.0.6:17001
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 17005 -j DNAT --to-destination 172.20.0.6:8180
-A DOCKER ! -i br-4ddd178bc073 -p tcp -m tcp --dport 17004 -j DNAT --to-destination 172.20.0.6:8080
COMMIT
# Completed on Tue Nov 22 14:19:56 2022

If you run sudo iptables -F and then reload LXD with sudo systemctl reload snap.lxd.daemon, that should temporarily wipe the Docker rules and re-apply the LXD ones. Do you then still see the same behaviour?

If so, could you provide the output of sudo iptables-save again?

To re-apply the Docker rules, reboot.
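The same steps, gathered into one sketch for reference (it needs root and bails out with a message otherwise; the final grep is just a sanity check that LXD's ACL chain came back):

```shell
# Recovery sketch: flush all filter-table rules (Docker's included), then
# reload LXD so it re-creates its own chains and ACLs. Rebooting restores
# Docker's rules afterwards.
run_fix() {
    if [ "$(id -u)" -ne 0 ]; then
        echo "run as root (sudo)"
        return 1
    fi
    iptables -F                        # wipe all filter-table rules
    systemctl reload snap.lxd.daemon   # LXD re-applies its rules and ACL chains
    iptables-save | grep lxd_acl       # sanity check: ACL chain is back
}
run_fix || true
```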

Throwing away the Docker rules and re-applying the LXD rules did the trick. Now my ACL dropping 192.168.0.0/16 works correctly.

It seems that running Docker and LXD side by side on the host isn’t a good idea. I will uninstall Docker from the host and run it inside an LXD instance instead.

Regarding the ACL:
Is my approach with

- action: drop
  destination: 192.168.0.0/16
  state: enabled

the best way of doing it? Or would there also be an approach with

- action: allow
  protocol: tcp
  destination_port: 80,443,587
  description: HTTP,HTTPS,Mail
  state: enabled

that excludes 192.168.0.0/16 in the destination, or allows only public IP ranges?
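Concretely, what I have in mind would look something like this (a sketch, assuming the destination field accepts a comma-separated list of CIDRs): drop all RFC1918 private ranges up front, so the later allow rules only ever match public addresses. DHCP and DNS to the bridge address itself are accepted before the ACL chain, as the iptables dump above shows, so dropping 10.0.0.0/8 shouldn’t break name resolution via 10.183.95.1, but that’s worth verifying.

```yaml
egress:
# Drop every RFC1918 private range first; the allow rules below then only
# ever match public destinations. (Sketch; adjust ranges to your setup,
# e.g. link-local 169.254.0.0/16 could be added too.)
- action: drop
  destination: 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
  description: Block private ranges
  state: enabled
- action: allow
  protocol: icmp4
  description: Ping
  state: enabled
- action: allow
  protocol: udp
  destination_port: "53"
  description: DNS
  state: enabled
- action: allow
  protocol: tcp
  destination_port: 80,443,587
  description: HTTP,HTTPS,Mail
  state: enabled
```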

Many thanks for the help.
