Need Advice on Implementing this for 600+ Machines

  • We have around 600 machines; some are using Ubuntu 16.04 and others 20.04, and the 16.04 machines have OpenSSL 1.0.2g
  • All of the machines have a bandwidth limit of 5GiB per day
  • Our code base is messy because of the lack of control we have over the machines once they’re sent out, the outdated software they run, and the lack of a consistent runtime environment
  • My intent is to deploy containers to these 600 machines so the code has only one runtime it has to accommodate.
  • The machines themselves are all consistent in terms of hardware, except that each has a variable number of camera devices that the code inside the container needs access to: some have 6, some have 7, 8, 9, etc…
  • The machines are all inside a VPN

I was thinking I should:

  • Create a remote with around 3 TB of disk space to hold the images/instances
  • Install LXC and add this remote on each of the 600 machines
  • Create a profile for each possible number of cameras, so one for machines with 6 cameras, one for 7, one for 8, etc…
  • Create an instance for each of those 600 machines and then launch it from the machine.
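In concrete terms, I imagine the per-machine setup looking something like this (the remote name, address, and profile name are placeholders, not a tested configuration):

```shell
# Sketch of the per-machine setup; address and names are placeholders.
lxc remote add imageserver https://10.0.0.5:8443   # VPN-internal address
# (LXD prompts for the trust token / certificate confirmation here)
lxc launch imageserver:app-image app --profile default --profile cameras-6
```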

Some questions I have are:

  • Can I implement this in such a way that the remote the machines use is inside of the VPN while still being publicly accessible to individuals who are authenticated?

  • Does the remote actually need to be in the VPN?

  • Will I run into errors with token-based authentication, given that some of the machines use OpenSSL 1.0.2g while the remote, like most of the other machines, would use OpenSSL 1.3.0+?

  • Is it the best route to create an instance for each of the machines or should I just have one image that I launch from each of the machines with a profile that matches its number of cameras?

  • Is there any general advice you can give me so I can be sure I implement this correctly the first time?

  • Given that these machines have limited bandwidth (5 GiB per day), would it make more sense to use a cloud-init image? Would that mean only the base image is downloaded, with packages downloaded and installed once the image is on the machine, or do the provisioning steps take place before the image is downloaded?

By “Install LXC” do you mean install LXD? I assume that is the case, because you mention remotes and you reference the LXD documentation for token-based authentication.

So you are thinking of putting your application in a container. I believe this is generally a good idea. One reason for this is that you can port a container to another machine, so if you replace an Ubuntu 16.04 machine with an Ubuntu 22.04 machine (or Debian), you can then move your application to the new host without changing it.

My first question is: how are you going to maintain changes to your application? LXD does not provide a direct mechanism to update an instance to a new image. But it makes it possible to do so, through its attached disk devices feature, if you carefully separate the image from your data. You can put your application data in attached disk devices, which you can define in profiles. When you upgrade to a new image, you keep the existing profiles, so a new instance from the updated image has access to the existing application data. You can probably also use profiles to move your camera devices to the new instances, as you mentioned.
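As a rough sketch of that separation (the device name, source path, and instance names here are examples, not from your setup):

```shell
# Keep application data in a disk device defined in a profile, so a new
# instance from an updated image reuses the same data. Names are examples.
lxc profile create appdata
lxc profile device add appdata data disk source=/srv/app-data path=/var/lib/app
# Later, replace the instance with one built from the updated image:
lxc launch imageserver:app-image-v2 app-new --profile default --profile appdata
```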

Another issue is distribution of your application image, which I assume is private. I have been looking into this too, and I think that LXD is lacking here. My preference would be to find a tool that can create a simplestreams tree for the images, which you could then secure with HTTPS Basic Auth or something similar. I have collected a list of related topics on this.

I think it’s best to use a single image with profiles for the different numbers of cameras. You can create these profiles on the fly with a script (create a profile for N cameras).
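A minimal sketch of such a script, assuming the cameras show up as /dev/video0 … /dev/video(N-1) on the host (the device naming is an assumption about your setup):

```shell
#!/bin/sh
# Sketch only: emit LXD profile devices for a machine with N cameras.
# Assumes cameras appear as /dev/video0 .. /dev/video(N-1) on the host.
camera_profile_yaml() {
    n="$1"
    echo "devices:"
    i=0
    while [ "$i" -lt "$n" ]; do
        printf '  camera%s:\n' "$i"
        printf '    type: unix-char\n'
        printf '    source: /dev/video%s\n' "$i"
        i=$((i + 1))
    done
}

# Apply on a machine with 6 cameras (requires the lxc client):
#   lxc profile create cameras-6
#   camera_profile_yaml 6 | lxc profile edit cameras-6
camera_profile_yaml 6
```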

I’m in favor of pre-installing packages in your own pre-built image, rather than using a stock base image that provisions itself when it is instantiated. An image installs faster than individual packages, and it may use less bandwidth. More importantly, by pre-installing packages in your image, you make sure that you use exactly the same package versions everywhere; if you install packages in the instance, you may occasionally get different versions. But I prefer running the final configuration in the instance, so that it is easier to make configuration changes (without having to build a whole new image every time). You can distribute these configuration scripts separately from the image, via a git repository.
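For example, one way to bake the packages into your own image and push it to your remote (the package, instance, and alias names are placeholders):

```shell
# Build a pre-installed image once, publish it, and copy it to the remote.
lxc launch ubuntu:20.04 builder
lxc exec builder -- apt-get update
lxc exec builder -- apt-get install -y your-app-dependencies
lxc stop builder
lxc publish builder --alias app-image
lxc image copy local:app-image imageserver: --copy-aliases
```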

Using Ubuntu 16.04 with LXD is problematic, as 16.04 is quite old and I think there is some incompatibility in the LXD images built for it and images built for subsequent Ubuntu LTS versions. I can’t find a reference to this right now.

Another issue is distribution of your application image

In the code base, there’s logic that pulls from a git repo branch every iteration, and the machines are updated that way; the problem is that the environment the code runs in (the OS) is not so easy to update. Pulling from a git repo branch is certainly not the cleanest way to do this, though. I was thinking that maybe I could create a cron job on the devices that would pull the image from my LXD server, but I’m sure there’s a better way.
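Something like this cron entry is what I had in mind (the names and schedule are placeholders), though I noticed `lxc image copy` has an `--auto-update` flag that makes LXD refresh the cached copy itself, which might be that better way:

```shell
# /etc/cron.d/refresh-app-image — what I had in mind (placeholder names):
#   30 3 * * * root lxc image copy imageserver:app-image local: --copy-aliases
# Possibly better: copy once with auto-update and let LXD refresh it:
lxc image copy imageserver:app-image local: --copy-aliases --auto-update
```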

I’m not super familiar with remote server protocols, so I’m not sure of the purpose of these simplestreams servers. Doesn’t LXD by default allow you to create images and add the server where those images were created as a remote? How does a simplestreams server improve on that?
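From what I’ve read so far, adding a simplestreams remote looks like this (the URL is a placeholder), with the server being just a static HTTPS file tree rather than a running LXD daemon:

```shell
lxc remote add my-images https://images.example.com --protocol simplestreams
```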

An image installs faster than individual packages, and it may use less bandwidth

I would think that using cloud-init to provision the image after it’s on the machine would require less bandwidth, given that packages are in their compressed form before they’re installed.

Using Ubuntu 16.04 with LXD is problematic, as 16.04 is quite old and I think there is some incompatibility in the LXD images built for it and images built for subsequent Ubuntu LTS versions.

I’m hoping it’s not the case because upgrading the host OS on the machines would be fairly cumbersome, knock on wood.