How to create a container with two network interfaces

Hi everyone,
I am trying to create two containers (server and client) with this topology:

The server container will run a Debian image and the client container will run an Ubuntu image.
My question is: how can I create these two containers, with the following conditions:

  1. the server container needs to have two network interfaces
  2. the link between server and client must come up without any automatic network configuration (no DHCP or autoconfiguration), so that I can also set up the server container as a router running its own DHCP server to assign the client container's IP address

Is this LXC or LXD?

Sorry, this is my first time using LXC/LXD (I'm switching from Docker), so I'm still confused about the difference. Let me describe my last attempt, after reading this thread: Add multiple nic to lxc containers

First I tried this configuration:

lxc network create external ipv6.address=none ipv4.address=10.20.20.1/24 ipv4.nat=true
lxc network create internal ipv6.address=none ipv4.address=10.10.10.1/24 ipv4.nat=false

lxc launch ubuntu:20.04 server
lxc launch ubuntu:20.04 client

lxc config device add server eth0 nic name=eth0 nictype=bridged parent=external ipv4.address=10.20.20.10
lxc config device add server eth1 nic name=eth1 nictype=bridged parent=internal ipv4.address=10.10.10.10

lxc config device add client eth0 nic name=eth0 nictype=bridged parent=internal ipv4.address=10.10.10.11

When I test the connection, I can ping in both directions, from client to server and from server to client.

Then I changed my configuration to this:

lxc network create external ipv6.address=none ipv4.address=10.20.20.1/24 ipv4.nat=true
lxc network create internal ipv6.address=none ipv4.address=none ipv4.nat=false

lxc launch ubuntu:20.04 server
lxc launch ubuntu:20.04 client

lxc config device add server eth0 nic name=eth0 nictype=bridged parent=external ipv4.address=10.20.20.10
lxc config device add server eth1 nic name=eth1 nictype=bridged parent=internal ipv4.address=10.10.10.1

lxc config device add client eth0 nic name=eth0 nictype=bridged parent=internal ipv4.address=10.10.10.2

But with this the server and client fail to communicate with each other. I also set up netplan manually and ran netplan apply to reload the config, and they still can't connect.

OK thanks, so we are talking about LXD (I have updated the post's tags).

For this scenario I suggest you use the existing lxdbr0 network for the first interface (eth0) between the server and the internet. This will allow the server to make outbound SNATted connections to the external network (internet).

Then for the 2nd interface, between the client and server, I would suggest creating a new LXD-managed bridge with IP addressing disabled (which will also disable DHCP, IPv6 RA and DNS on the bridge).

Then connect the server to it using eth1 and the client to it using eth0:

Set up the server with a connection to lxdbr0:

lxc launch ubuntu:20.04 server -n lxdbr0

lxc ls server
+--------+---------+---------------------+---------------------------------------------+-----------+-----------+
|  NAME  |  STATE  |        IPV4         |                    IPV6                     |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+---------------------------------------------+-----------+-----------+
| server | RUNNING | 10.64.199.30 (eth0) | fd42:bafd:ac21:9f:216:3eff:fead:cf4d (eth0) | CONTAINER | 0         |
+--------+---------+---------------------+---------------------------------------------+-----------+-----------+

lxc config show server
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220711)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220711"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: e9589b6e9c886888b3df98aee0f0e16c5805383418b3563cd8845220f43b40ff
  volatile.cloud-init.instance-id: c5186d0a-4af6-49b1-aa56-0d646f0424e2
  volatile.eth0.host_name: vethb877c0c9
  volatile.eth0.hwaddr: 00:16:3e:ad:cf:4d
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: f123e8aa-d999-4d96-8ad6-af119f4d6f27
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

Next create the managed network for the link between server and client:

lxc network create internal ipv4.address=none ipv6.address=none
lxc network show internal
config:
  ipv4.address: none
  ipv6.address: none
description: ""
name: internal
type: bridge
used_by: []
managed: true
status: Created
locations:
- none

lxc network info internal
Name: internal
MAC address: 00:16:3e:6f:19:1a
MTU: 1500
State: up
Type: broadcast

Network usage:
  Bytes received: 0B
  Bytes sent: 0B
  Packets received: 0
  Packets sent: 0

Bridge:
  ID: 8000.00163e6f191a
  STP: false
  Forward delay: 1500
  Default VLAN ID: 1
  VLAN filtering: true
  Upper devices: 

Now connect the server to internal using eth1:

lxc network attach internal server eth1

lxc config show server
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220711)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220711"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: e9589b6e9c886888b3df98aee0f0e16c5805383418b3563cd8845220f43b40ff
  volatile.cloud-init.instance-id: ab75d3ed-7db4-4ebe-9212-833dbe4d51c6
  volatile.eth0.host_name: vethb877c0c9
  volatile.eth0.hwaddr: 00:16:3e:ad:cf:4d
  volatile.eth1.host_name: veth029b6c8a
  volatile.eth1.hwaddr: 00:16:3e:9c:62:cf
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: f123e8aa-d999-4d96-8ad6-af119f4d6f27
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  eth1:
    network: internal
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

lxc exec server -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ad:cf:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.64.199.30/24 brd 10.64.199.255 scope global dynamic eth0
       valid_lft 3061sec preferred_lft 3061sec
    inet6 fd42:bafd:ac21:9f:216:3eff:fead:cf4d/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fead:cf4d/64 scope link 
       valid_lft forever preferred_lft forever
14: eth1@if15: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:16:3e:9c:62:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0

Now you need to set up eth1 inside the server however you wish.
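For example, here is a minimal sketch (the 10.10.10.0/24 subnet, the netplan file name, and the NAT rule are just illustrations, adjust to your own addressing): give eth1 a static address, and if the server is also meant to act as a router for the client, enable forwarding and NAT out of eth0:

```shell
# Run inside the server container, e.g. via: lxc exec server -- bash

# Static address for eth1 (example subnet and file name).
cat > /etc/netplan/60-eth1.yaml <<'EOF'
network:
    version: 2
    ethernets:
        eth1:
            dhcp4: false
            addresses:
                - 10.10.10.1/24
EOF
netplan apply

# Only if the server should route/NAT the client's traffic out via eth0:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
```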

Next set up the client to connect to internal using eth0:

lxc launch ubuntu:20.04 client -n internal

lxc ls client
+--------+---------+------+------+-----------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+--------+---------+------+------+-----------+-----------+
| client | RUNNING |      |      | CONTAINER | 0         |
+--------+---------+------+------+-----------+-----------+

lxc config show client
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20220711)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220711"
  image.type: squashfs
  image.version: "20.04"
  volatile.base_image: e9589b6e9c886888b3df98aee0f0e16c5805383418b3563cd8845220f43b40ff
  volatile.cloud-init.instance-id: 52be5635-1e6b-4784-8d9d-7609dd31b197
  volatile.eth0.host_name: veth94d71fd6
  volatile.eth0.hwaddr: 00:16:3e:f5:fa:0c
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 30713eb2-376f-4768-944e-ae163c60c5cd
devices:
  eth0:
    name: eth0
    network: internal
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""

lxc exec client -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f5:fa:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fef5:fa0c/64 scope link 
       valid_lft forever preferred_lft forever

Thanks for the information @tomp.
But I still cannot ping between them with this configuration:

Server

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
        eth1:
            dhcp4: no
            addresses:
                - 192.168.1.1/24

Client

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: false
            addresses:
                - 192.168.1.221/24
            gateway4: 192.168.1.1
            nameservers:
                addresses: [8.8.8.8,8.8.4.4]

What diagnostics have you done? Like tcpdump on the internal bridge and the respective container interfaces?

This is the output of tcpdump -i internal

listening on internal, link-type EN10MB (Ethernet), capture size 262144 bytes
19:47:06.160352 IP 192.168.1.221.55338 > dns.google.domain: Flags [S], seq 1476596982, win 64240, options [mss 1460,sackOK,TS val 1932557156 ecr 0,nop,wscale 7], length 0
19:47:08.630749 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 1, length 64
19:47:09.644628 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 2, length 64
19:47:10.412388 IP 192.168.1.221.55338 > dns.google.domain: Flags [S], seq 1476596982, win 64240, options [mss 1460,sackOK,TS val 1932561408 ecr 0,nop,wscale 7], length 0
19:47:10.668484 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 3, length 64
19:47:11.692481 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 4, length 64
19:47:12.716955 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 5, length 64
19:47:13.372143 IP 192.168.1.221.39474 > dns.google.domain: 64632+ AAAA? api.snapcraft.io. (34)
19:47:13.740569 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1012, seq 6, length 64
19:47:16.770325 IP 192.168.1.221.46169 > dns.google.domain: 10712+ A? api.snapcraft.io. (34)
19:47:18.621793 IP 192.168.1.221.47061 > dns.google.domain: 64632+ AAAA? api.snapcraft.io. (34)
19:47:21.773896 IP 192.168.1.221.44700 > dns.google.domain: Flags [S], seq 2243332184, win 64240, options [mss 1460,sackOK,TS val 1798209497 ecr 0,nop,wscale 7,tfo  cookiereq,nop,nop], length 0
19:47:22.796646 IP 192.168.1.221.44700 > dns.google.domain: Flags [S], seq 2243332184, win 64240, options [mss 1460,sackOK,TS val 1798210520 ecr 0,nop,wscale 7], length 0
19:47:23.871802 IP 192.168.1.221.47061 > dns.google.domain: 64632+ AAAA? api.snapcraft.io. (34)
19:47:24.812820 IP 192.168.1.221.44700 > dns.google.domain: Flags [S], seq 2243332184, win 64240, options [mss 1460,sackOK,TS val 1798212536 ecr 0,nop,wscale 7], length 0
19:47:28.844471 IP 192.168.1.221.44700 > dns.google.domain: Flags [S], seq 2243332184, win 64240, options [mss 1460,sackOK,TS val 1798216568 ecr 0,nop,wscale 7], length 0
19:47:28.896361 IP 192.168.1.1 > 192.168.1.221: ICMP echo request, id 1091, seq 1, length 64
19:47:29.868343 ARP, Request who-has 192.168.1.1 tell 192.168.1.221, length 28
19:47:29.868548 ARP, Reply 192.168.1.1 is-at 00:16:3e:db:26:43 (oui Unknown), length 28
19:47:29.900489 IP 192.168.1.1 > 192.168.1.221: ICMP echo request, id 1091, seq 2, length 64
19:47:30.924894 IP 192.168.1.1 > 192.168.1.221: ICMP echo request, id 1091, seq 3, length 64
19:47:31.871972 IP 192.168.1.221.51561 > dns.google.domain: 10712+ A? api.snapcraft.io. (34)
19:47:31.948437 IP 192.168.1.1 > 192.168.1.221: ICMP echo request, id 1091, seq 4, length 64
19:47:32.972489 IP 192.168.1.1 > 192.168.1.221: ICMP echo request, id 1091, seq 5, length 64
19:47:33.964337 ARP, Request who-has 192.168.1.221 tell 192.168.1.1, length 28
19:47:33.964531 ARP, Reply 192.168.1.221 is-at 00:16:3e:7e:a1:af (oui Unknown), length 28
19:47:37.121989 IP 192.168.1.221.44702 > dns.google.domain: Flags [S], seq 4040712644, win 64240, options [mss 1460,sackOK,TS val 1798224845 ecr 0,nop,wscale 7,tfo  cookiereq,nop,nop], length 0
19:47:38.124410 IP 192.168.1.221.44702 > dns.google.domain: Flags [S], seq 4040712644, win 64240, options [mss 1460,sackOK,TS val 1798225848 ecr 0,nop,wscale 7], length 0
19:47:40.140762 IP 192.168.1.221.44702 > dns.google.domain: Flags [S], seq 4040712644, win 64240, options [mss 1460,sackOK,TS val 1798227864 ecr 0,nop,wscale 7], length 0

And this is the output of tcpdump -i eth0 from the client when I try to ping the server:

12:49:46.700534 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1023, seq 3, length 64
12:49:47.724486 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1023, seq 4, length 64
12:49:48.748511 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1023, seq 5, length 64
12:49:49.772921 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1023, seq 6, length 64
12:49:50.796508 IP 192.168.1.221 > 192.168.1.1: ICMP echo request, id 1023, seq 7, length 64

Please show the output of ip a and ip r on the host and inside both containers.
Also please provide the output of sudo iptables-save and sudo nft list ruleset.

Server side:

root@server:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b2:a0:51 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.118.65.43/24 brd 10.118.65.255 scope global dynamic eth0
       valid_lft 3426sec preferred_lft 3426sec
    inet6 fd42:5d06:fb0b:d669:216:3eff:feb2:a051/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6893sec preferred_lft 3293sec
    inet6 fe80::216:3eff:feb2:a051/64 scope link 
       valid_lft forever preferred_lft forever
30: eth1@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:db:26:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedb:2643/64 scope link 
       valid_lft forever preferred_lft forever

root@server:~# ip r
default via 10.118.65.1 dev eth0 proto dhcp src 10.118.65.43 metric 100 
10.118.65.0/24 dev eth0 proto kernel scope link src 10.118.65.43 
10.118.65.1 dev eth0 proto dhcp scope link src 10.118.65.43 metric 100 
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1 

root@server:~# iptables-save 
# Generated by iptables-save v1.8.4 on Fri Jul 15 13:14:56 2022
*filter
:INPUT ACCEPT [86:32038]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [101:18775]
COMMIT
# Completed on Fri Jul 15 13:14:56 2022

Client Side:

root@client:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:7e:a1:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.221/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe7e:a1af/64 scope link 
       valid_lft forever preferred_lft forever

root@client:~# ip r
default via 192.168.1.1 dev eth0 proto static 
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.221 

root@client:~# iptables-save 
# Generated by iptables-save v1.8.4 on Fri Jul 15 13:16:02 2022
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2:124]
COMMIT
# Completed on Fri Jul 15 13:16:02 2022

And the LXD host?

I suspect a firewall on your LXD host, because ARP replies are coming from the server to the client over the bridge, but ICMP packets aren’t making it.
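A quick way to confirm is to check the FORWARD chain's default policy on the host:

```shell
# On the LXD host: if this prints "-P FORWARD DROP", a firewall
# (commonly Docker) is dropping bridged traffic between the containers.
sudo iptables -S FORWARD | head -n 1
```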

host:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 88:d7:f6:1c:ce:80 brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f0:03:8c:e8:18:2d brd ff:ff:ff:ff:ff:ff
    inet 10.10.40.2/27 brd 10.10.40.31 scope global dynamic noprefixroute wlp3s0
       valid_lft 40sec preferred_lft 40sec
    inet6 fe80::7587:7d0:4c81:89d0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4e:c2:cb:80:b0:13 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a6:b3:9b:29:62:48 brd ff:ff:ff:ff:ff:ff
8: ztyxaqwjfl: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2800 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 6e:e1:e1:db:c5:dd brd ff:ff:ff:ff:ff:ff
    inet 172.28.21.160/16 brd 172.28.255.255 scope global ztyxaqwjfl
       valid_lft forever preferred_lft forever
    inet6 fe80::4404:18ff:fe6a:b1a6/64 scope link 
       valid_lft forever preferred_lft forever
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:58:dc:97:f0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:58ff:fedc:97f0/64 scope link 
       valid_lft forever preferred_lft forever
11: br-0f459cd71525: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f3:02:e9:fa brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.1/16 brd 172.22.255.255 scope global br-0f459cd71525
       valid_lft forever preferred_lft forever
12: br-2b1445dcaca0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:97:77:99:0e brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-2b1445dcaca0
       valid_lft forever preferred_lft forever
13: br-49dd924e9479: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:aa:ca:99:4d brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-49dd924e9479
       valid_lft forever preferred_lft forever
19: vethd5be6da@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 82:5c:0a:11:57:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::805c:aff:fe11:5742/64 scope link 
       valid_lft forever preferred_lft forever
25: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:88:5d:67 brd ff:ff:ff:ff:ff:ff
    inet 10.118.65.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:5d06:fb0b:d669::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe88:5d67/64 scope link 
       valid_lft forever preferred_lft forever
28: veth1277bcab@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 1e:ee:d8:2a:d2:3d brd ff:ff:ff:ff:ff:ff link-netnsid 1
29: internal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:60:c1:35 brd ff:ff:ff:ff:ff:ff
31: vethc856fb3c@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master internal state UP group default qlen 1000
    link/ether 7a:c9:38:93:14:99 brd ff:ff:ff:ff:ff:ff link-netnsid 1
33: vethe6009a86@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master internal state UP group default qlen 1000
    link/ether 32:5c:c1:26:2f:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3
35: teredo: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet6 2001:0:c38c:c38c:1007:b845:dbb6:de39/32 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ffff:ffff:ffff/64 scope link 
       valid_lft forever preferred_lft forever
    inet6 fe80::7e93:a870:825b:6009/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever
       
host:~$ ip r
default via 10.10.40.1 dev wlp3s0 proto dhcp metric 600 
10.10.40.0/27 dev wlp3s0 proto kernel scope link src 10.10.40.2 metric 600 
10.118.65.0/24 dev lxdbr0 proto kernel scope link src 10.118.65.1 
169.254.0.0/16 dev wlp3s0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.18.0.0/16 dev br-2b1445dcaca0 proto kernel scope link src 172.18.0.1 linkdown 
172.19.0.0/16 dev br-49dd924e9479 proto kernel scope link src 172.19.0.1 linkdown 
172.22.0.0/16 dev br-0f459cd71525 proto kernel scope link src 172.22.0.1 linkdown 
172.28.0.0/16 dev ztyxaqwjfl proto kernel scope link src 172.28.21.160 

host:~$ sudo iptables-save 
# Generated by iptables-save v1.8.4 on Fri Jul 15 20:18:39 2022
*mangle
:PREROUTING ACCEPT [330270:295955069]
:INPUT ACCEPT [325982:295523801]
:FORWARD ACCEPT [3023:247153]
:OUTPUT ACCEPT [260302:42663909]
:POSTROUTING ACCEPT [261432:42797329]
-A POSTROUTING -o lxdbr0 -p udp -m udp --dport 68 -m comment --comment "generated for LXD network lxdbr0" -j CHECKSUM --checksum-fill
COMMIT
# Completed on Fri Jul 15 20:18:39 2022
# Generated by iptables-save v1.8.4 on Fri Jul 15 20:18:39 2022
*filter
:INPUT ACCEPT [325917:295511732]
:FORWARD DROP [2824:178044]
:OUTPUT ACCEPT [260245:42652495]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -o lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -i lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-49dd924e9479 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-49dd924e9479 -j DOCKER
-A FORWARD -i br-49dd924e9479 ! -o br-49dd924e9479 -j ACCEPT
-A FORWARD -i br-49dd924e9479 -o br-49dd924e9479 -j ACCEPT
-A FORWARD -o br-2b1445dcaca0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-2b1445dcaca0 -j DOCKER
-A FORWARD -i br-2b1445dcaca0 ! -o br-2b1445dcaca0 -j ACCEPT
-A FORWARD -i br-2b1445dcaca0 -o br-2b1445dcaca0 -j ACCEPT
-A FORWARD -o br-0f459cd71525 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-0f459cd71525 -j DOCKER
-A FORWARD -i br-0f459cd71525 ! -o br-0f459cd71525 -j ACCEPT
-A FORWARD -i br-0f459cd71525 -o br-0f459cd71525 -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 12 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 11 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p icmp -m icmp --icmp-type 3 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p tcp -m tcp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-49dd924e9479 ! -o br-49dd924e9479 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-2b1445dcaca0 ! -o br-2b1445dcaca0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-0f459cd71525 ! -o br-0f459cd71525 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-49dd924e9479 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-2b1445dcaca0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-0f459cd71525 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Jul 15 20:18:39 2022
# Generated by iptables-save v1.8.4 on Fri Jul 15 20:18:39 2022
*nat
:PREROUTING ACCEPT [4630:484438]
:INPUT ACCEPT [535:122139]
:OUTPUT ACCEPT [7251:1254259]
:POSTROUTING ACCEPT [7078:1225131]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 10.118.65.0/24 ! -d 10.118.65.0/24 -m comment --comment "generated for LXD network lxdbr0" -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.19.0.0/16 ! -o br-49dd924e9479 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o br-2b1445dcaca0 -j MASQUERADE
-A POSTROUTING -s 172.22.0.0/16 ! -o br-0f459cd71525 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 9000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 8000 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-49dd924e9479 -j RETURN
-A DOCKER -i br-2b1445dcaca0 -j RETURN
-A DOCKER -i br-0f459cd71525 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9000 -j DNAT --to-destination 172.17.0.2:9000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8000 -j DNAT --to-destination 172.17.0.2:8000
COMMIT
# Completed on Fri Jul 15 20:18:39 2022

Oh, Docker. Yes, that will be the issue then.

See How to configure your firewall - LXD documentation

And Lxd and Docker Firewall Redux - How to deal with FORWARD policy set to drop - #3 by tomp


Docker sets the FORWARD chain's default policy to DROP, which affects all interfaces.
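If removing Docker from the host is not an option, one workaround (covered in the linked posts) is to insert explicit ACCEPT rules for the internal bridge ahead of Docker's policy; note from your iptables-save output that LXD already adds equivalent rules for lxdbr0. This is a sketch using the bridge name from this thread:

```shell
# On the LXD host: accept traffic entering/leaving the "internal" bridge
# so Docker's FORWARD DROP policy no longer swallows it.
sudo iptables -I FORWARD -i internal -j ACCEPT
sudo iptables -I FORWARD -o internal -j ACCEPT
```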


Okay, I will try a new host that doesn't have Docker.

Thanks @tomp.
It works, using another host without Docker.
