LXD Performance: Stack per container VS service per container

Hi guys, I’m wondering how much extra load (mostly on the internal virtual network) would be put on the host by running 100 Apache containers plus 100 MySQL containers vs just 100 LAMP stack containers.

After much reading, I have opted for one individual database engine per instance of the PHP app, instead of one big database engine for everyone. However, there’s no literature about running the entire stack in a single LXC container vs separating the different services, in this particular case Apache2 plus MariaDB, each in its own container.

I understand the isolation benefits of separating these two services, and I also understand the portability and simpler backups that come with deploying both services together. However, I am now worried about the overhead caused by all the internal communication between the containers, and I also wonder which option would perform better at low usage.

The app makes relatively heavy use of the DB. HAProxy routes traffic to the right instance.
Each instance of the PHP app only talks to one database; there are no shared tables or databases of any kind.
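For reference, the routing part is nothing exotic; a host-header setup roughly like this is what I mean (hostnames and container IPs below are placeholders, not my real config):

```
# Append an illustrative per-site routing block to HAProxy's config
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend http-in
    bind *:80
    acl host_site1 hdr(host) -i site1.example.com
    acl host_site2 hdr(host) -i site2.example.com
    use_backend be_site1 if host_site1
    use_backend be_site2 if host_site2

backend be_site1
    server web1 10.10.10.11:80 check

backend be_site2
    server web2 10.10.10.12:80 check
EOF
systemctl reload haproxy
```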

Thank you!

I’d love to know how you get on with this… I’ve split Apache and MySQL with mixed results, but that might just be me!

@bruce78 Please share your experience…

I’m using both methods right now. The first noticeable effect of using a single container is that, when managing instances manually, your job is 100% simpler/more productive.
This benefit disappears as soon as you automate container creation, configuration and backups.

So for non-automated environments with few containers, I am pretty sure the single-container benefits outweigh the drawbacks (unless you really need that extra isolation).
However, once you reach a certain number of containers, it makes a lot of sense to start automating things, and that advantage vanishes, but new concerns arise.

Ironically, with the old, non-automated host, most of the work was setting up new accounts manually, so all-in-one made much more sense. However, as we had very few instances and little administrative overhead, we used, and still use, separate containers for the theoretical increase in security and stability.
Now, with an automated cluster, that obvious advantage is gone, and instead we worry about density, performance, bottlenecks and so on. Since we no longer have much administrative overhead, we are considering all-in-one instances… if they can give us some performance benefit.

I think it would be overkill to have a separate DB container for each website.
It would make sense to have a single DB server in a container, and each website would have a separate limited account on the DB server.
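With that approach, each website would get something along these lines on the shared server (site name and password are just placeholders):

```
# Hypothetical per-website database and limited account on one shared MariaDB server
mysql -u root -e "
  CREATE DATABASE site1;
  CREATE USER 'site1'@'%' IDENTIFIED BY 'change-me';
  GRANT ALL PRIVILEGES ON site1.* TO 'site1'@'%';
  FLUSH PRIVILEGES;"
```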

I think the biggest issue to watch is the amount of memory that these containers would need. You would want to keep the requirements of the container image of your choice as low as possible.
Perhaps you would prefer to use the Ubuntu Minimal container image instead of the Ubuntu image.
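Launching one is a one-liner, assuming the default ubuntu-minimal remote is available (the release and container name here are just examples):

```
# Ubuntu Minimal images are considerably smaller than the stock ubuntu: images
lxc launch ubuntu-minimal:22.04 web1
lxc image list ubuntu-minimal: 22.04   # compare with: lxc image list ubuntu: 22.04
```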

Thank you @simos. I have already weighed the pros and cons of one single DB engine vs multiple DB engines and decided to go with multiple engines, which allows for fine-tuning of each DB engine (big databases require different settings than smaller ones) and faster/simpler portability of instances within the cluster nodes and eventually across clusters in different datacenters. It also provides an extra layer of separation between individual DBs, which means that one database failure only affects one instance of the app instead of all the websites on one host. Additionally, some instances need a single master while other instances require master-slave replication, and finally, it facilitates backup and recovery.

The question is not “one single DB engine vs many”, but rather whether the many engines I’m using should live in their own containers or in the same container as Apache + PHP, from a performance perspective.

Thanks for the advice, I’m already using minimal for both the hosts and the containers.
Memory is limited per container with a typical Small-Medium-Big scheme and there is no over-provisioning.
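To give an idea, the scheme is nothing fancier than per-container limits roughly like these (container names and sizes are illustrative, not our actual values):

```
# Small / Medium / Big memory caps, with no over-provisioning of the host
lxc config set lamp-small  limits.memory 512MiB
lxc config set lamp-medium limits.memory 1GiB
lxc config set lamp-big    limits.memory 4GiB

# CPU can be capped the same way if one busy stack starts starving its neighbours
lxc config set lamp-big limits.cpu 2
```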

My experience has been good with LXD and splitting MySQL out from the LAMP stack into a different container… I think some of my “issues” are at the application level and not the container level… I’ve just not gotten round to focusing deeply on the issue…

On both hosts I run, LXD has been excellent… I run everything in Debian on the host, and only rarely do I run into problems; when I do, it’s almost always at the app level, as above!

Thank you @bruce78
I’m gonna assume the internal networking overhead is minimal and do just what you did…

Cool, let me know how you get on…

Hi Yosu,

There are exceptional use cases, but the deciding factor for me is usually the answer to this question:

Will this service require multiple frontends in the foreseeable future?

If the answer is likely or a definitive yes, then I start with the DB separate from the beginning, to avoid having to split it out later.

If the answer is unlikely, I install the DBs in the same container for all of the reasons you list above.

As far as automation goes, just have your default be separate, and have a local_mysql variable determine whether the DB is in the container or deployed on a separate machine. This is how I manage fully automated deployments.
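In shell form, that boils down to something like this (container names, image and package list are placeholders, not my actual tooling):

```
# One switch decides whether MariaDB lives in the app container or in its own container
local_mysql=true
app="wp01"

lxc launch ubuntu-minimal:22.04 "${app}"
lxc exec "${app}" -- apt-get update
lxc exec "${app}" -- apt-get install -y apache2 php libapache2-mod-php

if [ "${local_mysql}" = true ]; then
    # all-in-one: DB engine inside the same container as Apache/PHP
    lxc exec "${app}" -- apt-get install -y mariadb-server
else
    # split: dedicated DB container, reachable over the LXD bridge
    lxc launch ubuntu-minimal:22.04 "${app}-db"
    lxc exec "${app}-db" -- apt-get update
    lxc exec "${app}-db" -- apt-get install -y mariadb-server
fi
```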


That definitely makes sense @VinceHillier.
In my case there’s only ever going to be one frontend, like a WP site or some other PHP script. A new frontend always means a new database.

Thanks for the local_mysql variable tip!

As an aside: keep in mind that each container requires a certain amount of memory overhead to start and run. From experience, Alpine only needs about 3MB per container, while Ubuntu/CentOS require much more (around 50-60MB). When running lots of containers on a host, make sure you know the total overhead required to avoid any RAM issues on your compute server. For example, 50 Alpine containers require about 150MB of RAM, while 50 Ubuntu containers will require about 2.5GB of RAM overhead. This is memory that cannot be reclaimed unless you stop the container.
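A quick way to eyeball what each running container is actually holding is to loop over LXD’s own accounting (nothing here is specific to any distro):

```
# Print current memory usage per container, as reported by LXD
for c in $(lxc list --format csv -c n); do
    printf '%-20s ' "$c"
    lxc info "$c" | grep -i 'memory (current)'
done
```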


Thank you for the heads up @rkelleyrtp

I’ve accustomed myself to using Ubuntu Minimal images for both the host and containers, installing packages as needed. Hope it helps with overall overhead. Can’t move to Alpine for now; only Debian / Ubuntu for me…