Iptables + LXD + NGINX: how to make them work together?

Hey guys,

I have a fresh install of:

Ubuntu Server 20.04 host (10.0.1.0/24)
LXD 4 (Ubuntu 20.04 containers) (10.177.0.0/24)
NGINX (on the host and inside the containers)

The goal is to have one container per domain.
While testing I have 3 containers.

On the host my NIC is ens160 (ESXi server) and it has multiple IPs (10.0.1.20/24, 10.0.1.21/24, 10.0.1.22/24, 10.0.1.23/24 - which will be one per domain).
In NGINX on the host I use the stream module to forward SSH connections to the containers, and proxy_pass to reach the web servers inside the containers.

Example /etc/nginx/streams.conf:

stream {
        server {
                listen                  10.0.1.121:121;
                proxy_pass              10.177.0.21:22;
        }

        server {
                listen                  10.0.1.122:131;
                proxy_pass              10.177.0.22:22;
        }

        server {
                listen                  10.0.1.123:132;
                proxy_pass              10.177.0.23:22;
        }
}

This works fine without iptables enabled.

For the proxy, every domain on the host uses an NGINX config similar to:

        location / {
                allow                   all;
                proxy_pass              https://10.177.0.121;
        }
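
For context, the surrounding virtual host is along these lines (the listen address and server_name here are placeholders; the real ones vary per domain):

server {
        listen                  10.0.1.21:80;
        server_name             example.com;

        location / {
                allow                   all;
                proxy_pass              https://10.177.0.121;
        }
}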

For the LXD install I used:

Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=btrfs]: btrfs
Would you like to create a new btrfs dataset under rpool/lxd? (yes/no) [default=yes]: no
Create a new BTRFS pool? (yes/no) [default=yes]: no
Name of the existing BTRFS pool or dataset: /lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 10.177.0.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]: yes
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes

With this basic setup, everything works fine.

Now when I try to use iptables, I haven’t found a way to keep this working.

I have read a lot of posts here about PREROUTING and FORWARD/DNAT rules, and it seems none of the suggestions work in my specific scenario.

I’m searching on Google too, but without success.

Could someone point me to what I need to do to get this working with iptables? Or tell me if it’s not possible in my current scenario?

LXD proxy devices might be useful for what you’re doing. They seem to be a good alternative to using iptables.

There’s a tutorial for it at How to use the LXD Proxy Device to map ports between the host and the containers – Mi blog lah! That tutorial shows how to apply the proxy device settings directly to the container.

You can also configure profiles with the settings, then apply the profiles to the containers. The nice part about that is you can add & remove the profiles from containers as needed while the profile maintains those settings. It also seems to be a good way to keep track of all the LXD proxy devices and the various port forwardings. A tutorial with the profile method is at Forwarding host ports to LXD instances – LXDWARE
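
Not the tutorials verbatim, but the commands are roughly of this shape (the container name web1, device name hostport80, and the IPs are placeholders):

# attach a proxy device directly to a container
lxc config device add web1 hostport80 proxy listen=tcp:10.0.1.21:80 connect=tcp:127.0.0.1:80

# or keep the forwarding in a reusable profile
lxc profile create fwd-web1
lxc profile device add fwd-web1 hostport80 proxy listen=tcp:10.0.1.21:80 connect=tcp:127.0.0.1:80
lxc profile add web1 fwd-web1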

Thanks for the reply @John !!!

I tried both links and got the same error:

Error: Failed to start device "hostport80": Error occurred when starting proxy device: Error: Failed to listen on 0.0.0.0:80: listen tcp 0.0.0.0:80: bind: address already in use

The host already has NGINX running, bound to ports 80 and 443; there are 2 domains on the host:

Default site
Admin interface of the server

The guests have NGINX inside too (domain1 on lxd1, domain2 on lxd2, etc…).

I have tried specifying the IP instead of listening on all interfaces with 0.0.0.0, but got the same error.

My guess is that Nginx on the host OS being bound to port 80 on all IPs conflicts with trying to set an LXD proxy device on any or all IPs.

Not sure, as I have not yet done something like this, but you might try binding Nginx on the host OS to only the specific IPs that you need pointed there. Then configure the LXD proxy device with a specific IP address that is not already bound on port 80 by Nginx on the host.
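
Something like this on the host Nginx side (the addresses and names here are only examples):

server {
        listen                  10.0.1.114:80;          # one specific host IP instead of all
        server_name             admin.example.com;      # placeholder
        root                    /var/www/html;
}

Then point the LXD proxy device at one of the other host IPs that Nginx no longer claims.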

I checked the default website on the host and it had no IP specified (i.e., binding all addresses) in NGINX.

Then I added the IP.

I have also removed the virtual host config previously used for:

        location / {
                allow                   all;
                proxy_pass              https://10.177.0.121;
        }

Same error. From the command:

ss -lntu

I got:

Netid         State          Recv-Q         Send-Q                  Local Address:Port                 Peer Address:Port         Process
udp           UNCONN         0              0                          10.177.0.1:53                        0.0.0.0:*
udp           UNCONN         0              0                       127.0.0.53%lo:53                        0.0.0.0:*
udp           UNCONN         0              0                      0.0.0.0%lxdbr0:67                        0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.137:137                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.138:138                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.141:141                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.114:80                        0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.147:147                       0.0.0.0:*
tcp           LISTEN         0              32                         10.177.0.1:53                        0.0.0.0:*
tcp           LISTEN         0              4096                    127.0.0.53%lo:53                        0.0.0.0:*
tcp           LISTEN         0              128                      10.0.1.114:22                        0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.121:121                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.114:443                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.131:131                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.132:132                       0.0.0.0:*
tcp           LISTEN         0              511                      10.0.1.133:133                       0.0.0.0:*

How dangerous would it be to keep the setup without a firewall?

Not sure how dangerous; if your system is kept updated you might be OK, but it would be much better to have the firewall on. In Ubuntu 20.04 the default tool for managing the firewall is UFW, which is basically an easier interface to iptables. You might try using that instead: set your iptables back to default, then use UFW. For ports 80, 443 & 22 you would only need to:

ufw allow http
ufw allow https
ufw allow ssh
ufw enable

And you can view the UFW settings via ‘ufw status’. If you do ‘ufw status numbered’ you’ll get a numbered list where you can delete settings by number.
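
For example (the rule number is whatever position the rule has in that list):

ufw status numbered
ufw delete 2       # removes rule number 2 from the numbered list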

With those ports open & managed by UFW, maybe then try to get the LXD proxy device forwarding to work.

Another thing you might test is disabling nginx on the host OS to ensure it’s not listening on port 80, then trying the LXD proxy device forwarding. This would at least help narrow down whether it’s Nginx on the host OS interfering in some way.
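
Something along these lines (the container and device names are only placeholders):

systemctl stop nginx
# with port 80 free, retry the proxy device:
lxc config device add web1 hostport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
systemctl start nginx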

I tried your suggestion and tested UFW, and it worked partially!!!

Access from web clients is working, I mean, all the other computers can now access the websites inside LXD.

But the LXD containers have lost internet access. On the first test, checking for system updates with:

apt update

I got an error about being unable to resolve the name (Temporary failure resolving ‘security.ubuntu.com’), so I added some extra rules like:

ufw allow out 123/udp
ufw allow out 53/tcp
ufw allow out 53/udp
ufw allow out 49160:65530/udp

But the error persists. Looking at /etc/resolv.conf inside the container, it seems Ubuntu makes that file dynamic. Do I need to set up DNS on LXD? If yes, could you point me in the right direction?
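
(For reference, I was checking from the host like this; the container name is just an example, and the default Ubuntu images have systemd-resolved managing that file:)

lxc exec web1 -- cat /etc/resolv.conf
lxc exec web1 -- resolvectl status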

Reading the docs (https://linuxcontainers.org/lxd/docs/master/networks/) I saw this option:

dns.zone.forward

But I’m unable to get it working pointing to my internal DNS server or to external DNS servers.
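
(For reference, I was applying it along these lines; the zone name is only a placeholder:)

lxc network set lxdbr0 dns.zone.forward=lxd.example.net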

Can you ping any other public websites from within the containers? If not, when you set up LXD via ‘lxd init’, did you have the init add the default network bridge (lxdbr0)? If not, maybe that’s what is causing the connection issues out to the public internet.

When I set up LXD with ‘lxd init’ I had the script automatically add the lxdbr0 network bridge, and I used 10.0.0.1/24 for the private IPs. After that I haven’t noticed any problems with accessing the internet from within containers.

With UFW enabled I can’t; I get the error:

ping: google.com: Temporary failure in name resolution

Without UFW / iptables it works (I can ping from the container to the outside and from the outside to the container).

In my setup with ‘lxd init’ I got the automatic bridge too. Without iptables or UFW, everything works nicely using NGINX to proxy the sites and to stream SSH.

My motivation for using containers is to increase security. Instead of having one server with multiple sites together (i.e., NGINX / Apache with lots of virtual hosts), it sounds more secure to put every site inside a container. Then, with this setup, if I have a problem with one or more sites, I only lose those problematic containers.

Most sites in this setup will be using Node.js, and since there have already been a lot of issues with maintainers “protesting” by making their packages behave badly and unpredictably, this approach sounds like the logical way to isolate that.

On this VM I have 2 safety nets: snapshots from the ESXi server and snapshots from BTRFS. The first one doesn’t work well; it’s basically a testing workaround (VMware says to delete a snapshot as soon as testing ends and not to let it accumulate more than 24 hours of use, because the VM starts getting slow and can become corrupted). The second one I still need to test more before giving an opinion.

I was using ZFS on Linux in previous months, but after a power loss the system became read-only, so I thought it best to run a test with BTRFS (I have read about the same issue on power loss there too, but haven’t tested it).

I’m also in the process of moving my hosting infrastructure to LXD & containers for pretty much the same reasons. LXD seems great for this purpose.

For your connection issues when the firewall is turned on, I wonder if somehow iptables got a setting that’s interfering with your outbound connections. I’m also using Ubuntu 20.04 for the host and containers. I never touched iptables, only ever used UFW for firewall stuff, and didn’t have this type of outbound connection issue.

Not sure how it’s done, but maybe there’s a way for you to easily reset iptables back to the original default config. Then delete the UFW settings with the ‘ufw status numbered’ and ‘ufw delete #’ method, disable UFW with ‘ufw disable’, and then reboot the server (not sure if rebooting is needed, but maybe).
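
A common accept-all reset is something like the following sketch (run as root; note it drops all active rules immediately, so be careful over a remote session):

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X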

After that you should have a default firewall setup for Ubuntu 20.04 (you might also need to do the same inside your containers).

From there, enable the ports on UFW again & enable it. Don’t touch iptables unless it’s really necessary for something; UFW will handle the iptables stuff automatically.

Then test connecting to the internet both with UFW enabled & disabled. Hopefully it will work after all that. This is what I would try.

I forgot to do something basic:

Restart the LXD service after installing UFW.
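
For the snap-packaged LXD that Ubuntu 20.04 uses, that restart is:

sudo systemctl restart snap.lxd.daemon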

This is working with UFW after a server reboot.

Thank you very much!!!

Adding some extra info.

After getting this working using UFW, I started a new check against the CIS Benchmark recommendations for Ubuntu 20.04, in the UFW chapter.

Enabling every recommendation one by one and testing before trying the next, I got the same broken-communication error, in this case after enabling:

ufw default deny outgoing

Using the next one instead keeps everything working:

ufw default allow outgoing

It seems there is some hidden communication which doesn’t appear among the listening ports shown by:

ss -lntu

Note: the CIS recommendation says to block all traffic that has no rule, and to add rules for all programs before applying this setting. One step of adding rules is watching the output of ‘ss -lntu’.

‘ufw default deny outgoing’ blocks everything inside your server from connecting to the outside internet. Denying outgoing does increase security as well, but when you do that you will need to whitelist all the ports that your server needs for normal operations.
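
A sketch of that whitelist approach (the exact port list depends on what the server actually needs):

ufw default deny outgoing
ufw allow out 53            # DNS
ufw allow out 80/tcp        # HTTP, e.g. apt
ufw allow out 443/tcp       # HTTPS
ufw allow out 123/udp       # NTP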

The primary firewall concern is to block outside traffic into your server and only enable the inbound ports that you need. That is probably much more important security-wise than blocking outbound ports.