They raise the maximum number of entries allowed in the ARP (neighbour) tables, for IPv4 and IPv6 respectively, and will allow you to run a lot more containers.
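For reference, the settings in question are the `gc_thresh` sysctls; a minimal sketch, with the threshold values as examples only (size them to the number of containers per host):

```shell
# /etc/sysctl.d/90-lxd-neigh.conf
# Raise the neighbour (ARP/NDP) table limits; values below are examples.
net.ipv4.neigh.default.gc_thresh3 = 8192   # hard maximum of IPv4 entries
net.ipv6.neigh.default.gc_thresh3 = 8192   # hard maximum of IPv6 entries

# Apply without rebooting:
#   sudo sysctl --system
```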
On that note, I believe the standard Linux bridge has a hard limit of 1024 hosts, so if you might go over that, either spread your hosts across multiple bridges or, my preference, go with Open vSwitch.
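A rough sketch of what switching to Open vSwitch looks like; the bridge and container names here are just examples:

```shell
# Create an Open vSwitch bridge (name "br0" is an example)
sudo ovs-vsctl add-br br0

# Attach a container's NIC to that bridge
lxc config device add mycontainer eth0 nic nictype=bridged parent=br0
```

LXD's `bridged` NIC type works with an OVS bridge the same way it does with a native Linux bridge, so the container side needs no changes.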
We have been running LXD in production for about 2 years now. Each container hosts a WordPress site for one of our customers (nginx and PHP); we have about 650 sites spread across 10 LXD container servers. We also run MariaDB servers for the WordPress sites in other containers.
So far, so good. We have run into minor issues along the way (mostly learning-curve issues with containers), but overall, the containers run very well.
Things we really like about LXD:
For our workloads (web site hosting), the containers offer a great way to isolate each site from the others
Using the BTRFS filesystem allows us to take snapshots of each site in a couple of seconds
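On a BTRFS storage pool, snapshots are copy-on-write, which is why they are nearly instant. A quick sketch (the container and snapshot names are examples):

```shell
# Snapshot a container before risky work (near-instant on BTRFS)
lxc snapshot site01 before-update

# List snapshots, and roll back if the update goes wrong
lxc info site01
lxc restore site01 before-update
```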
Starting/stopping containers is very fast; very little downtime for our customers
Very easy to spin up a new container
Things we struggle with:
No centralized management tool to see all container servers with their running containers (LXDUI is per-server only at this time)
Getting per-container stats is difficult - especially to see which sites are misbehaving
Until LXD 3.7 (just released), no easy way to incrementally move a site from one container server to another
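If I understand the new LXD 3.7 feature correctly, an incremental move looks something like this (container and remote names are examples):

```shell
# First full copy to the target server
lxc copy site01 server2:site01

# Later, re-sync only what changed since the last copy
lxc copy site01 server2:site01 --refresh

# Final cutover: stop, refresh once more, then start on the target
lxc stop site01
lxc copy site01 server2:site01 --refresh
lxc start server2:site01
```

The `--refresh` flag keeps the downtime window to the last small delta rather than a full transfer.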
All containers share the same profile because the same software runs inside each one.
We defined CPU and RAM limits in each container's configuration, but you could create dedicated profiles instead.
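For example, per-container limits look something like this (the names and values are examples, not our actual settings):

```shell
# Per-container resource limits
lxc config set site01 limits.cpu 2
lxc config set site01 limits.memory 1GB

# Or bake the same limits into a dedicated profile
lxc profile create wordpress
lxc profile set wordpress limits.cpu 2
lxc profile set wordpress limits.memory 1GB
lxc profile add site01 wordpress
```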
We run MariaDB on a different server and connect over TCP.
We have been using BTRFS for so long, it is not even a question anymore. I tried ZFS in the past, and it required lots more RAM than BTRFS to get similar performance. Maybe things have changed over the past couple of years. For us, BTRFS requires minimal overhead to perform well, and the snapshots are almost instantaneous. We don’t use BTRFS to create any RAID devices. The underlying storage for the VM already does this.
Hi,
Thanks for writing this down. Could you explain how you make the containers reachable from the network? Is there any configuration you did inside the containers to map each domain? I am using a proxy container that routes requests to each domain's container, and this setup does not seem right to me.
I’m not sure who coined it first to be honest. It is a way to convey the shift in how we treat our servers: as pets, meaning we nurture them and help them live for a long time, or as cattle, where we have them fulfill their purpose and then kill them. Here is an interesting post on this distinction.
You can run stateful workloads in Docker containers by mounting external persistent storage (persistent volumes). These can be local folders or shared storage; many plugins are available.
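A quick sketch of both options; the container names, paths, and image tag are examples:

```shell
# Named volume managed by Docker (survives container removal)
docker volume create dbdata
docker run -d --name db \
  -v dbdata:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb:10.4

# Or a plain bind mount of a host folder
docker run -d --name db2 \
  -v /srv/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb:10.4
```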
If you're using Kubernetes, you can run persistent storage with something like Rook or OpenEBS, which can deploy Ceph on your Kubernetes worker nodes and will create the Ceph OSDs and the RBD images and mount persistent volumes into your pods. Quite nifty really.
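Once Rook (or similar) exposes a StorageClass, a pod claims Ceph-backed storage with an ordinary PVC. A sketch, where the StorageClass name `rook-ceph-block` is an assumption taken from Rook's examples:

```shell
# Claim 5Gi from a Ceph-backed StorageClass (name is an assumption)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 5Gi
EOF
```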
Firewalling with Docker containers is more akin to policy-based rules and an SDN; everything iptables-related is automated in Kubernetes, similar to how it is with OpenStack Neutron or Contrail. It's designed to scale, so you don't touch things at the individual container level.
The downside is that running Docker en masse with Kubernetes requires a lot more planning to get it running correctly. LXD is easier to set up and has a lower barrier to entry, so for smaller scale it seems to fit the bill perfectly, and it's easier to get your head around if you're coming from the world of VMs.
I don't know why more people don't use LXD; it still seems most people only know VMs or Docker/Kubernetes these days.
It can definitely be used for that, and a few others have used it for this before.
You’ll need to be careful about security and networking, but that’s the case regardless of the platform you use.