UFW and routed Containers

Greetings, back with another UFW/LXD question. I have two containers that obtain IPs via the “routed” feature as described here: Mi blog la!. Aside from one container being assigned two virtual NICs (see below), it appears to work flawlessly:

+------------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| navidrome  | RUNNING | 192.168.86.105 (eth0) |                                               | CONTAINER | 0         |
+------------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| plex       | RUNNING | 192.168.86.106 (eth1) | fd42:248c:b6e4:e4ac:216:3eff:fe0e:8dde (eth0) | CONTAINER | 0         |
|            |         | 192.168.86.106 (eth0) |                                               |           |           |
+------------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

The issue I am running into is that I can’t seem to create a UFW rule that persistently forwards requests through the firewall and on to the two routed containers.

Previously I had been having issues with a container that obtained its IP via lxdbr0. That was easily solved via the advice found here: lxdbr0 fix.
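For completeness, the usual lxdbr0 allowances look something like the following (a sketch using standard UFW syntax; they appear to correspond to rules [4]–[6] in the listing further down):

```shell
# Let the host accept lxdbr0 traffic and forward it in both directions
# (the usual "lxdbr0 fix" for UFW blocking a LXD bridge):
sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0
```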

However, this will not work for routed containers because the host-side veth interfaces (pardon if “bridges” is not the correct term) change names. For instance, here are the relevant devices now:

vethb4912077: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.0.1  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::401c:f8ff:fe52:959f  prefixlen 64  scopeid 0x20<link>
        ether 42:1c:f8:52:95:9f  txqueuelen 1000  (Ethernet)
        RX packets 22733  bytes 4089562 (3.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33391  bytes 2890201 (2.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethc0dcb475: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.0.1  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f098:aaff:fe15:c2b5  prefixlen 64  scopeid 0x20<link>
        ether f2:98:aa:15:c2:b5  txqueuelen 1000  (Ethernet)
        RX packets 21384  bytes 100737503 (96.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20636  bytes 1482100 (1.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Whereas they were previously called

vethea2ddd10

and

veth95e57722

respectively.
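One way around the changing names (a sketch, not a definitive fix) is to key the forward rules to the containers’ static routed addresses instead of the host-side veth interfaces, since the IPs stay fixed across restarts:

```shell
# Allow forwarded traffic to the routed containers by destination address,
# so the rules survive the randomly generated veth names changing
# (container IPs taken from the lxc list output above):
sudo ufw route allow to 192.168.86.105
sudo ufw route allow to 192.168.86.106
```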

So, any advice? Bonus points if you know how to solve the “two-NIC” issue for container plex.

As always, thank you!

EDIT: Current UFW Rules

     To                         Action      From
     --                         ------      ----
[ 1] <redacted>                 ALLOW IN    192.168.86.0/24           
[ 2] <redacted>                 ALLOW IN    192.168.87.0/24            
[ 3] plexmediaserver            ALLOW IN    Anywhere                  
[ 4] Anywhere on lxdbr0         ALLOW FWD   Anywhere                   (out)
[ 5] Anywhere                   ALLOW FWD   Anywhere on lxdbr0        
[ 6] Anywhere on lxdbr0         ALLOW IN    Anywhere                  
[ 7] 4533 on any                ALLOW IN    Anywhere                  
[ 8] 192.168.86.0/24 on any     ALLOW FWD   Anywhere on enp42s0       
[ 9] plexmediaserver (v6)       ALLOW IN    Anywhere (v6)             
[10] Anywhere (v6) on lxdbr0    ALLOW FWD   Anywhere (v6)              (out)
[11] Anywhere (v6)              ALLOW FWD   Anywhere (v6) on lxdbr0   
[12] Anywhere (v6) on lxdbr0    ALLOW IN    Anywhere (v6)             
[13] 4533 (v6) on any           ALLOW IN    Anywhere (v6)

You can set the host_name property on each of the NICs in the LXD config; they will then produce a static LXD host-side interface name rather than a randomly generated one.

As for the 2 NICs issue, your instance is probably inheriting another NIC from the profile because the routed NIC doesn’t have the same name (and thus isn’t overriding it).

Re: Two NICs
Done! Thank you.

@tomp
Thank you so much for your responses! Unfortunately I have a follow-up regarding setting the host_name property. I am guessing that is relatively easy, but I don’t know how to do it :confused:

Thank you!

lxc config device set <instance> <device> host_name=<host_side_name>

@tomp
Unfortunately that did not work for me as the device is added via profile and not individually:


Error: Device from profile(s) cannot be modified for individual instance. Override device or modify profile instead

So I modified the profile (incorrectly, probably):

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses: [192.168.86.107/24]
            nameservers:
                addresses: [192.168.86.79]
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    host_name: ISveth0
    ipv4.address: 192.168.86.107
    nictype: routed
    parent: enp42s0
    type: nic
name: routed_192.168.86.107
used_by: []

And now when I start my container, I get this:

Error: Failed to start device "eth0": Failed adding host route "192.168.86.107/32": Failed to run: ip -4 route add table main 192.168.86.107/32 dev ISveth0: RTNETLINK answers: File exists

Removing the profile allows me to start the container, but now there is no IPv4 connectivity within the container (presumably because the routing table is messed up).

I apologize for acting without asking but genuinely and sincerely appreciate your help.
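For reference, the profile-level equivalent of the earlier lxc config device set command (assuming the standard LXD CLI and the profile/device names shown above) would be:

```shell
# Set host_name on a NIC device defined in a profile rather than overriding
# it on the individual instance:
lxc profile device set routed_192.168.86.107 eth0 host_name=ISveth0
```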

Please show output of ip r on the host.

default via 192.168.86.1 dev enp42s0 proto static 
default via 192.168.86.1 dev wlo1 proto dhcp metric 600 
10.9.21.0/24 dev lxdbr0 proto kernel scope link src 10.9.21.1 
192.168.86.0/24 dev enp42s0 proto kernel scope link src 192.168.86.100 
192.168.86.0/24 dev wlo1 proto kernel scope link src 192.168.86.50 metric 600 
192.168.86.105 dev veth8c707098 scope link 
192.168.86.106 dev veth0543597b scope link 
192.168.86.107 dev veth033e4f11 scope link

Looks like you have another instance using the same IP, or a leftover interface and route.

192.168.86.105 dev veth8c707098 scope link 
192.168.86.106 dev veth0543597b scope link 
192.168.86.107 dev veth033e4f11 scope link

So xxx.xxx.xxx.105 and xxx.xxx.xxx.106 are LXC containers obtaining IPs via routed profiles. The container that was receiving xxx.xxx.xxx.107 via a routed profile is now receiving its IP via lxdbr0 (i.e. the 10.xxx.xxx.xxx subnet), so 192.168.86.107 dev veth033e4f11 scope link is definitely “leftover”.

Do you recommend clearing those veth* devices from the route table and then modifying the profiles?

Can you show the ip l output on the host?

If you’re sure you don’t need it, then sudo ip l delete <interface> will remove it.
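Applied to the leftover entry from the routing table above (assuming veth033e4f11 really is stale), that would look something like:

```shell
# Deleting the stale host-side veth also removes its scope-link route:
sudo ip link delete veth033e4f11
# Confirm the 192.168.86.107 host route is gone:
ip -4 route show 192.168.86.107
```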

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 2c:f0:5d:71:37:61 brd ff:ff:ff:ff:ff:ff
3: wlo1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
    link/ether e4:5e:37:fd:0a:75 brd ff:ff:ff:ff:ff:ff
    altname wlp41s0
23: vetheb0b7ba6@veth033e4f11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:b9:09:92 brd ff:ff:ff:ff:ff:ff
24: veth033e4f11@vetheb0b7ba6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:1a:a4:07:7b:b1 brd ff:ff:ff:ff:ff:ff
37: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:ef:96:d7 brd ff:ff:ff:ff:ff:ff
39: vetha5379a57@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP mode DEFAULT group default qlen 1000
    link/ether de:1c:a9:b4:a6:d5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
41: vethd4ea1cd8@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 52:18:af:35:2c:aa brd ff:ff:ff:ff:ff:ff link-netnsid 1
43: veth8c707098@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:17:6f:c5:66:00 brd ff:ff:ff:ff:ff:ff link-netnsid 2
45: veth0543597b@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:92:d8:b0:5e:44 brd ff:ff:ff:ff:ff:ff link-netnsid 4
47: vethca948382@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP mode DEFAULT group default qlen 1000
    link/ether f6:73:ed:f0:55:46 brd ff:ff:ff:ff:ff:ff link-netnsid 6
69: veth998bee53@if68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP mode DEFAULT group default qlen 1000
    link/ether de:30:f4:5d:96:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 3