Greetings, back with another UFW/LXD question. I have two containers that obtain IPs via the “routed” feature as described here: Mi blog la!. Notwithstanding one container being assigned two virtual NICs (see below), it appears to work flawlessly:
The issue I am running into is that I can’t seem to create a UFW rule that persistently routes requests through the firewall to the two routed containers.
Previously I had been having issues with a container that obtained its IP via lxdbr0. This was easily solved via the advice found here: lxdbr0 fix.
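For context, that fix boils down to allowing inbound and forwarded traffic on the bridge interface; assuming the default lxdbr0 bridge, the commands were along these lines (they correspond to rules [4]–[6] in the UFW output below):

sudo ufw allow in on lxdbr0
sudo ufw route allow in on lxdbr0
sudo ufw route allow out on lxdbr0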
However, this will not work for routed containers because the host-side veth interfaces (pardon if “bridges” is not the correct term) change names. For instance, here are the relevant devices now:
So, any advice? Bonus points if you know how to solve the “two-NIC” issue for container plex.
As always, thank you!
EDIT: Current UFW Rules
     To                         Action      From
     --                         ------      ----
[ 1] <redacted>                 ALLOW IN    192.168.86.0/24
[ 2] <redacted>                 ALLOW IN    192.168.87.0/24
[ 3] plexmediaserver            ALLOW IN    Anywhere
[ 4] Anywhere on lxdbr0         ALLOW FWD   Anywhere (out)
[ 5] Anywhere                   ALLOW FWD   Anywhere on lxdbr0
[ 6] Anywhere on lxdbr0         ALLOW IN    Anywhere
[ 7] 4533 on any                ALLOW IN    Anywhere
[ 8] 192.168.86.0/24 on any     ALLOW FWD   Anywhere on enp42s0
[ 9] plexmediaserver (v6)       ALLOW IN    Anywhere (v6)
[10] Anywhere (v6) on lxdbr0    ALLOW FWD   Anywhere (v6) (out)
[11] Anywhere (v6)              ALLOW FWD   Anywhere (v6) on lxdbr0
[12] Anywhere (v6) on lxdbr0    ALLOW IN    Anywhere (v6)
[13] 4533 (v6) on any           ALLOW IN    Anywhere (v6)
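Rule [8] was my (so far unsuccessful) attempt at covering the routed containers; if I recall the invocation correctly, it was something along the lines of:

sudo ufw route allow in on enp42s0 to 192.168.86.0/24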
You can set the host_name property on each of the NICs in the LXD config and they will then produce a static LXD host-side interface name rather than a randomly generated one.
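As a minimal sketch, assuming the container is named plex, the routed NIC device is named eth0, and plexveth0 is just an illustrative host-side name:

lxc config device override plex eth0 host_name=plexveth0
lxc restart plex

(Use lxc config device set instead of override if the NIC is defined directly on the instance rather than inherited from a profile.)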
As for the 2 NICs issue, your instance is probably inheriting another NIC from the profile because the routed NIC doesn’t have the same name (and thus isn’t overriding it).
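You can check this with something like (again using plex as the example):

lxc config show plex --expanded

If the expanded config lists two nic devices with different names, giving the routed NIC the same name as the profile's NIC will make it override the inherited one instead of being added alongside it.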
@tomp
Thank you so much for your responses! Unfortunately I have a follow-up re: setting the host_name property. I am guessing it is relatively easy, but I don’t know how to do it.
Error: Failed to start device "eth0": Failed adding host route "192.168.86.107/32": Failed to run: ip -4 route add table main 192.168.86.107/32 dev ISveth0: RTNETLINK answers: File exists
Removing the profile allows me to start the container, but now there is no IPv4 connectivity within the container (presumably because the routing table is messed up).
I apologize for acting without asking but genuinely and sincerely appreciate your help.
default via 192.168.86.1 dev enp42s0 proto static
default via 192.168.86.1 dev wlo1 proto dhcp metric 600
10.9.21.0/24 dev lxdbr0 proto kernel scope link src 10.9.21.1
192.168.86.0/24 dev enp42s0 proto kernel scope link src 192.168.86.100
192.168.86.0/24 dev wlo1 proto kernel scope link src 192.168.86.50 metric 600
192.168.86.105 dev veth8c707098 scope link
192.168.86.106 dev veth0543597b scope link
192.168.86.107 dev veth033e4f11 scope link
So xxx.xxx.xxx.105 and xxx.xxx.xxx.106 are LXC containers obtaining IPs via routed profiles. The container that was receiving xxx.xxx.xxx.107 via a routed profile is now receiving its IP via lxdbr0 (i.e. the 10.xxx.xxx.xxx subnet), so the 192.168.86.107 dev veth033e4f11 scope link entry is definitely “leftover”.
Do you recommend clearing those veth* devices from the route table and then modifying the profiles?
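To be concrete, I assume clearing them would look something like this for the leftover .107 entry (and similarly for the others):

sudo ip route del 192.168.86.107 dev veth033e4f11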