Strategies for managing networking for 'public' containers


Hello, this is more of a workflow question than a technical question.

I have a largish server that is managed centrally by my institution. I want to run lxd on this server and provide an arbitrary (but probably fewer than 100) number of containers to others in my department. However, I do not have easy control over the host's firewall: the iptables configuration is centrally managed, and any changes I make will be wiped. As far as I understand, if I wanted to grant ssh access to someone, I would need to create iptables rules to forward the traffic from the host to the container bridge. But if containers change regularly, I can't wait for these firewall changes to be applied each time.

What would be a good way to handle a situation like this? It might be possible to ask the institution to unblock a ‘pool’ of ports and then forward them dynamically when things change, although this feels a bit manual. Is there any higher-level way to manage this?
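One way to make the 'pool of ports' idea less manual is LXD's proxy device: it forwards a host port to a container from userspace, without modifying the host's iptables rules, so a pre-unblocked range of ports can be remapped as containers come and go. A minimal sketch, assuming a container named `web1` and port 2201 from the unblocked pool (both placeholders):

```shell
# Forward host port 2201 to the SSH daemon inside container 'web1'.
# The proxy device runs as a userspace forwarder, so no iptables
# changes are needed on the host.
lxc config device add web1 ssh-forward proxy \
    listen=tcp:0.0.0.0:2201 \
    connect=tcp:127.0.0.1:22

# When the container is retired, the port returns to the pool:
lxc config device remove web1 ssh-forward
```

A user would then reach the container with `ssh -p 2201 user@host`, and reassigning the port to a different container is a single `lxc config device` change.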

(Ron Kelley) #2

When you say, “grant ssh access to someone”, is this ssh access to the LXD server or a container?


Sorry for the confusion – it would be access to a certain container.


There are many options, and it helps if you can talk a bit about what you are really trying to achieve.
For example, are these students that need a Unix shell to do and test their homework?

One solution could be that you do not need SSH access at all: you can offer shell access over the Web, just like this (source code:

(Ron Kelley) #5

Further to what Simos said, I would recommend separating the management access to the server ("eth0") from the container network ("eth1"). This way, you can manage access to the containers separately from access to the server itself.
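As a sketch of that separation, assuming the second NIC is `eth1` and using a placeholder bridge name (`containerbr`), you could uplink a dedicated bridge to it and point the default profile at that bridge:

```shell
# Create a bridge uplinked to the second NIC so container traffic
# stays off the management interface; addressing is left to the
# external network rather than LXD's own dnsmasq.
lxc network create containerbr \
    bridge.external_interfaces=eth1 \
    ipv4.address=none ipv6.address=none

# Attach new containers to that bridge via the default profile.
lxc profile device add default eth0 nic \
    nictype=bridged parent=containerbr
```

Containers then sit on the `eth1` segment, and any central firewall policy for that segment applies to all of them without per-container rules on the host.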


Clients have nearly arbitrary requirements, but most would fall into two groups: long number-crunching jobs and low-traffic web services. Some containers would eventually need to expose non-SSH services (probably only HTTP) from a public IP.
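For the HTTP side, one higher-level option that avoids per-container firewall changes is a single reverse-proxy container: only ports 80/443 would need to be unblocked once, and traffic fans out over the internal bridge. A hypothetical nginx vhost (the hostname and container name are placeholders; `app1.lxd` assumes LXD's default bridge DNS):

```nginx
# Vhost inside a dedicated 'proxy' container. Each service container
# is reached over the internal bridge by its LXD DNS name.
server {
    listen 80;
    server_name app1.example.org;   # placeholder public hostname

    location / {
        proxy_pass http://app1.lxd:8080;  # 'app1' container on the bridge
        proxy_set_header Host $host;
    }
}
```

Adding or removing a web service is then just an nginx config change inside the proxy container, with no involvement from the institution's firewall team.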