"first-10-minutes-on-securing-an-ubuntu-server" - in a container?

I am starting to host a server in the cloud and have learned that you need to take quite a few measures to secure such a server; I thought following this guide would be a fair start.

Now I am wondering how to apply those measures to a container that will be living inside of the cloud server.

Especially the questions about SSH access, the role of root, etc. are something I don’t quite know what to make of, as a container by default behaves quite differently than a ‘normal’ (Ubuntu) server. For instance, you enter the root account without any sort of password.

I think I won’t need direct SSH access to the container; I can get in via the CLI of the host with the lxc exec command just fine. So maybe disabling SSH inside the container altogether is an option.
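For example (a sketch, assuming a container named mycontainer — the name is a placeholder):

```shell
# Get a root shell in the container without any SSH at all:
lxc exec mycontainer -- bash

# Optionally stop and disable the SSH service inside the container entirely:
lxc exec mycontainer -- systemctl disable --now ssh
```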

On the other hand … my containers will be connected to the outside world via a proxy device (port 80, port 443) as per this thread.

The more I write, the more I realize how little I understand of the matter. So a container-specific howto like the one mentioned above would be ideal. Is there anything out there?


In terms of security, each server is secure to some degree; it’s just that different users follow different security best practices. The question is how far you go with those best practices.

The first part is to secure the host (the VPS itself).
For example, some VPS providers allow password authentication. It is a best practice to disable password authentication.
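For illustration, disabling password authentication usually comes down to one sshd_config line. A minimal sketch (the sed pattern assumes the stock commented-out line; on a real host the file is /etc/ssh/sshd_config and you would validate and reload sshd afterwards — here the edit is demonstrated on a temporary copy):

```shell
# On a real host: edit /etc/ssh/sshd_config, then
#   sudo sshd -t && sudo systemctl reload ssh
# Demonstrated here on a temporary copy of the file:
cfg=$(mktemp)
printf '#PasswordAuthentication yes\n' > "$cfg"
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"
grep '^PasswordAuthentication no' "$cfg"   # prints the hardened line
```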
Having said that, we are not touching the host configuration here because it is dealt with elsewhere.

Each container by default is not directly accessible from the Internet (due to NAT networking). If, however, you set up macvlan or bridged networking, you may decide to expose one or more containers to the Internet. In this case, the Ubuntu container images have sane defaults (no password authentication).

The guide that you reference,

  1. says to set a password for root. For me, that’s a no-no, because we do not do password authentication: the idea is that you SSH to a user account (public-key authentication only) and then sudo to root. Later on in the post, they say to enable full sudo access for the non-root account, which makes a root password superfluous.
  2. says to add a non-root user to the host. The instructions are missing the chown of ~nonrootuser/.ssh to the account nonrootuser. Regarding containers, the Ubuntu container images already have a non-root account called ubuntu.
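The missing step, spelled out (the real commands need root and an actual account, so sane permissions are demonstrated here on a temporary directory):

```shell
# The missing step from the guide, with "nonrootuser" as a placeholder name:
#   chown -R nonrootuser:nonrootuser /home/nonrootuser/.ssh
# Plus sane permissions, demonstrated on a temporary directory:
home=$(mktemp -d)
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 700 "$home/.ssh"
chmod 400 "$home/.ssh/authorized_keys"
stat -c '%a' "$home/.ssh" "$home/.ssh/authorized_keys"   # prints 700, then 400
```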
  3. The Ubuntu container images already have unattended-upgrades installed, and by default they perform the upgrades for security packages.
  4. Setting up fail2ban on SSH is probably not needed, since you have disallowed password authentication.
  5. 2FA is generally not needed for services not directly facing the Internet. Therefore, while you may set it up for the host, I do not think it is required for the containers.
  6. You may install logwatch in some containers. Note that you would need to figure out how to send emails: most providers block outgoing connections to port 25 as a way to fight spam. Normally you would create an account with a third-party mail provider.

To what has been said, I’d add that the first thing I do to any container created for something other than a test is to disable the default configuration that allows the container to access anywhere on the outside. If you have a container for a server application, that is precisely what to contain. If your container is owned, there is another threat beyond your host being compromised: the breached container can be used for nefarious purposes. Blocking everything and giving the container access only to the ports it needs is a good beginning, and that particularly includes blocking mail if your provider doesn’t block port 25 (unless you are running a mail server, of course :-))

So if you need administrative alerts from your container software (and you usually do, unless you have some advanced monitoring), the container should only be allowed to access a mail relay on the host, with the relay set up to relay only to your domain. That way there is no possibility of your container being used by a spammer (wrecking your domain reputation, irritating your provider…).
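For example, with Postfix as the relay on the host, the shape could be something like the following (the domain and the bridge subnet are placeholders, and this is one possible setup, not a complete relay configuration):

```shell
# Host-side Postfix sketch (illustrative values):
# accept mail only from loopback and the container bridge subnet...
sudo postconf -e 'mynetworks = 127.0.0.0/8 10.0.0.0/24'
# ...and relay it only to your own domain:
sudo postconf -e 'relay_domains = example.com'
sudo systemctl reload postfix
```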

Can you be a little more precise on what that means in detail and how to do it (disallowing access anywhere on the outside)?

In my case I will have ERPNext running in the said container, which serves a website via nginx and has MariaDB in the back. In order to serve anything in such a scenario, I have added 2 proxy devices (lxc config device add ...), one for port 80, another for port 443, as per this guide, which references a post here on the forum by @simos a lot. So, does that satisfy the “allowing the container to be able to access anywhere on the outside” requirement?
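Spelled out, the two devices look like this (the container name "erpnext" stands in for mine, and [host-IP] is the host's public address):

```shell
# Forward ports 80 and 443 from the host into the container:
lxc config device add erpnext myport80  proxy listen=tcp:[host-IP]:80  connect=tcp:localhost:80
lxc config device add erpnext myport443 proxy listen=tcp:[host-IP]:443 connect=tcp:localhost:443
```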

It’s mentioned somewhere that this practice is controversial, and they explain why they do it. But I agree that you can probably just unset the root password (passwd -d root, I guess), or not set it to begin with.

It does actually say, a little further down: chmod 400 /home/non-root/.ssh/authorized_keys && chown non-root:non-root /home/non-root -R

I’m not sure whether a container serving a browser interface via nginx (which, from the container, reaches the internet via 2 LXD proxy devices) falls under that categorization of “not facing the Internet directly”.

(this will only work with LXD >= 3):

lxc network get lxdbr0 ipv4.firewall 
lxc network get lxdbr0 ipv6.firewall 

should return false. It’s not intuitive, but setting the firewall keys to false disables the automatic rules set up by LXD, which you can see with

sudo iptables -L | grep -i lxd

Of course, study these rules before disabling the firewall, because you will have to set up some of them explicitly yourself, or else your containers will be mostly useless.
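To actually turn the automatic rules off once you have noted the ones you need, you set the same keys (again LXD >= 3):

```shell
# Disable LXD's automatic firewall rules for the lxdbr0 network:
lxc network set lxdbr0 ipv4.firewall false
lxc network set lxdbr0 ipv6.firewall false
```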
Once your containers are working with explicit rules, go inside each of your containers and try to access a mail server on port 25 with telnet, something like

telnet aspmx.l.google.com 25

If it connects, you have not succeeded, and what you did amounts to what LXD does by default:

ACCEPT  all  --  anywhere  anywhere  /* generated for LXD network lxdbr0 */
ACCEPT  all  --  anywhere  anywhere  /* generated for LXD network lxdbr0 */

Those defaults are all very well to begin using containers, but not so much for a server.
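One possible shape for such an explicit rule, blocking outbound SMTP from the bridge and logging attempts (an illustrative sketch, not my exact ruleset — rule placement will depend on what else is in your FORWARD chain):

```shell
# Reject outbound SMTP from containers on lxdbr0.
# Insert REJECT first, then LOG at position 1, so LOG sits above REJECT
# and attempts are logged before being refused:
sudo iptables -I FORWARD 1 -i lxdbr0 -p tcp --dport 25 -j REJECT
sudo iptables -I FORWARD 1 -i lxdbr0 -p tcp --dport 25 -j LOG --log-prefix "lxd-smtp: "
```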

About your ERPNext server: the whole point of your server is to be accessed from the Internet, while my point is entirely about the other direction. Your container doesn’t need to access the whole Internet, only what you are ready to allow it to see.

And you can use this as an easy warning system: if the firewall on the host reports access to an unauthorized port by your container, that is something you will want to know ASAP.

Thanks, quite a bit to digest due to my limited knowledge of the basics, apparently.

For my understanding: would this result in the container obeying the firewall rules of the host and abandoning the default firewall rules set for the container?

this is run on the host, right?

Actually, get does not do anything by itself; it’s to check the current configuration, which is ‘true’ by default (it’s true if it does not return anything). If it’s true, LXD adds default rules, some very useful and some I can live without. You will see them by running sudo iptables -L | grep -i lxd on the host.

For such tasks, there are three things to secure:

  1. Secure the host.
  2. Secure the containers.
  3. Secure the services (like WordPress).

To simplify all this, it would be good to have a set of container-specific security best practices.

is there an explainer on how to implement such “container-specific security best practices” out there somewhere?

lxc network get lxdbr0 ipv4.firewall 
lxc network get lxdbr0 ipv6.firewall

The above command works on the lxdbr0 interface, which is the one my containers’ profile is also using. However, the container broadcasts to the world via a proxy device:
lxc config device add [container] myport443 proxy listen=tcp:[host-IP]:443 connect=tcp:localhost:443
Does the above get still apply? I guess lxdbr0 may still be doing something in the communication between my host and the container.