NAT using ufw inside LXD container

Hello,

I have two LXD containers:

  • the first, called “front”, has two network interfaces: one internal (on the host’s bridge lxdbr0) and one with a public IP (on the host’s bridge br0)
  • the second, called “mysql”, has only an internal IP (on the host’s bridge lxdbr0)

I want to forward port 3306 from the public-facing interface of “front” to the internal interface of the “mysql” container. It doesn’t work as expected: I can connect from “front” to “mysql” on port 3306, but not from the outside/public network.
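
(A quick way to see where the packets stop, assuming eth0 is the public interface and eth1 the lxdbr0 one, as in the config below: probe from an external host and watch both interfaces inside “front”:)

# from an external host
nc -vz [front public IP] 3306
# inside “front”, one interface at a time
tcpdump -ni eth0 tcp port 3306
tcpdump -ni eth1 tcp port 3306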

Front’s config:

root@front:~# cat /etc/default/ufw
[...]
DEFAULT_FORWARD_POLICY="ACCEPT"
[...]
root@front:~# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
443/tcp                    ALLOW IN    Anywhere                  
80/tcp                     ALLOW IN    Anywhere                  
3306/tcp                   ALLOW IN    Anywhere                  
443/tcp (v6)               ALLOW IN    Anywhere (v6)             
80/tcp (v6)                ALLOW IN    Anywhere (v6)             
3306/tcp (v6)              ALLOW IN    Anywhere (v6)
root@front:~# sysctl net/ipv4/ip_forward
net.ipv4.ip_forward = 1
root@front:~# sysctl net/ipv6/conf/default/forwarding
net.ipv6.conf.default.forwarding = 1
root@front:~# sysctl net/ipv6/conf/all/forwarding
net.ipv6.conf.all.forwarding = 1
root@front:~# cat /etc/ufw/before.rules
[...]
*nat
:PREROUTING ACCEPT [0:0]
-F
-A PREROUTING -i eth0 -p tcp --dport 3306 -j DNAT --to-destination 10.2.249.204:3306
[...]
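
(For reference, the *nat section must end with a COMMIT line or none of it is loaded; that is presumably hidden in the [...] above, since the counters below show the rule is active. A minimal sketch of the whole block, using the addresses from this thread:)

*nat
:PREROUTING ACCEPT [0:0]
# flush this table on “ufw reload” so rules are not duplicated
-F
-A PREROUTING -i eth0 -p tcp --dport 3306 -j DNAT --to-destination 10.2.249.204:3306
COMMIT
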
root@front:~# iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 536 packets, 53325 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    4   240 DNAT       tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3306 to:10.2.249.204:3306

Chain INPUT (policy ACCEPT 77 packets, 4278 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 4615 packets, 277K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 4619 packets, 278K bytes)
 pkts bytes target     prot opt in     out     source               destination
root@front:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6a:74:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet [redacted]/24 brd 202.22.232.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 [redacted]/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe6a:749c/64 scope link 
       valid_lft forever preferred_lft forever
33: eth1@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:4e:d0:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.2.249.2/24 brd 10.2.249.255 scope global dynamic eth1
       valid_lft 3445sec preferred_lft 3445sec
    inet6 fe80::216:3eff:fe4e:d0c2/64 scope link 
       valid_lft forever preferred_lft forever

Do you have a NAT proxy device in the MySQL VM’s config? Something like:

devices:
  db3306:
    connect: tcp:127.0.0.1:3306
    listen: tcp:0.0.0.0:3306
    type: proxy
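
(If not, a proxy device like that can be added from the host with something along these lines, assuming the container is named “mysql”:)

lxc config device add mysql db3306 proxy listen=tcp:0.0.0.0:3306 connect=tcp:127.0.0.1:3306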

I also have this in UFW, which may be too broad, but at least I don’t have to mess around with iptables and complicated rules:

Anywhere                   ALLOW FWD   Anywhere on lxdbr0 
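
(That entry comes from ufw’s route syntax; something like the following should produce it:)

ufw route allow in on lxdbr0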

Does your internal container use the external container’s IP on lxdbr0 as its default gateway, or does it use the address of the lxdbr0 interface on the LXD host?

If it’s the latter, then I expect the issue is that the DNAT rule you’ve added in your external container forwards ingress packets to the internal container but doesn’t rewrite their source address (it remains the original client IP). Return packets from the internal container then go back out via the default gateway on lxdbr0, and fall foul of the stateful firewall rules on your host (the egress packets won’t match an ESTABLISHED connection).

You either need to make the internal container use the external container as its default gateway, or add an SNAT rule to the external container’s firewall so that packets forwarded to the internal container have their source address rewritten to the external container’s IP on the lxdbr0 network.
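
(A sketch of such an SNAT rule, appended to the existing *nat section of front’s /etc/ufw/before.rules before its COMMIT, using the addresses shown in this thread:)

:POSTROUTING ACCEPT [0:0]
# rewrite the client source address to front's lxdbr0 IP so replies return through front
-A POSTROUTING -o eth1 -d 10.2.249.204 -p tcp --dport 3306 -j SNAT --to-source 10.2.249.2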

This has the downside that all MySQL clients will appear to be coming from the external container’s IP, but that may not be an issue for you.

It does seem OK:

default via [redacted public gateway] dev eth0 proto static 
10.2.249.0/24 dev eth1 proto kernel scope link src 10.2.249.2 
[redacted public network]/24 dev eth0 proto kernel scope link src [redacted public ip]

Please show ip a and ip r inside both containers.

I rebooted the whole server:

Front container:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6a:74:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet  [redacted]/24 brd 202.22.232.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 [redacted]/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe6a:749c/64 scope link 
       valid_lft forever preferred_lft forever
16: eth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:4e:d0:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.2.249.2/24 brd 10.2.249.255 scope global dynamic eth1
       valid_lft 2559sec preferred_lft 2559sec
    inet6 fe80::216:3eff:fe4e:d0c2/64 scope link 
       valid_lft forever preferred_lft forever
default via [redacted].254 dev eth0 proto static 
10.2.249.0/24 dev eth1 proto kernel scope link src 10.2.249.2 
[redacted].0/24 dev eth0 proto kernel scope link src [redacted].241 

MySQL container:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e1:cb:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.2.249.204/24 metric 100 brd 10.2.249.255 scope global dynamic eth0
       valid_lft 2025sec preferred_lft 2025sec
    inet6 fe80::216:3eff:fee1:cb08/64 scope link 
       valid_lft forever preferred_lft forever
default via 10.2.249.1 dev eth0 proto dhcp src 10.2.249.204 metric 100 
10.2.249.0/24 dev eth0 proto kernel scope link src 10.2.249.204 metric 100 
10.2.249.1 dev eth0 proto dhcp scope link src 10.2.249.204 metric 100

Yep, so the MySQL container will need to use 10.2.249.2 as its gateway in order for DNAT rules in the front container to work, as otherwise the front container won’t get the response packets.
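
(One way to make that persistent, if the mysql container uses netplan: keep DHCP for the address but ignore the DHCP-supplied route — a sketch, with an illustrative filename:)

# /etc/netplan/99-gateway.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        use-routes: false
      routes:
        - to: 0.0.0.0/0
          via: 10.2.249.2

Then run netplan apply.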

I see. If I run ip route add default via 10.2.249.2, it works, but then the container can’t access the internet anymore.
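
(That is presumably because front now forwards the container’s outbound traffic out eth0 with its internal 10.2.249.x source address, which isn’t routable from the internet. A masquerade rule in the same *nat section of front’s before.rules should cover it — a sketch:)

# rewrite the internal subnet's source address to front's public IP on the way out
-A POSTROUTING -s 10.2.249.0/24 -o eth0 -j MASQUERADE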