Did Linux Containers invest time and effort to implement a user-agent sniffer against LXD users?

As far as I understand it, you invested the time and effort to code a user-agent sniffer specifically to block LXD users? The packages are bit-identical, could be used just as well in LXD, and the blockade can be bypassed with a simple user-agent spoofer.

I like open source, but you just shredded our CI/CD infrastructure with this.

Please see this post:

Thank you for the link!

Though it did not answer the question: did the Linux Containers team sit down and invest time and effort to implement a user-agent sniffer and change file delivery for a certain subset of user agents?

Because it sure looks like it: the simplestreams delivery mechanism is basically a webserver AFAICT, and webservers usually deliver the same files to all user agents.
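Just to illustrate how trivial the spoofing side is: a simplestreams index is fetched over plain HTTPS, so a client can present whatever agent string it likes. Here is a minimal sketch in Python; the URL and agent string are placeholders, not the real endpoint or the exact header LXD sends.

```python
# Minimal sketch: fetch a simplestreams index while presenting an arbitrary
# User-Agent. The URL and agent string are placeholders for illustration.
import json
import urllib.request

INDEX_URL = "https://images.example.org/streams/v1/index.json"  # placeholder

request = urllib.request.Request(
    INDEX_URL,
    headers={"User-Agent": "Incus 0.5.1"},  # any string the server expects
)

with urllib.request.urlopen(request) as response:
    index = json.load(response)

# A simplestreams index.json typically has "format" and "index" keys.
print(index.get("format"), sorted(index.get("index", {})))
```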

Some more background.

In a nutshell, LXD is now a Canonical-only project and is no longer related to Linux Containers (this site and community). Linux Containers is working on Incus, a successor to LXD.

The images: remote has always been part of the Linux Containers community. When the split happened, Canonical did not find a replacement for images: for LXD users. They had ample time to find a replacement, but somehow they did not do so.

I am a bystander in all this. I think Canonical messed up and that they consider non-customers a lower priority.

Incus is currently on version 0.5.1. In a couple of months, Incus will jump to version 6, on par with the LXD version numbering.

Most of us here provide support on Incus.

We have always been looking at the user agent, as not all versions of LXC, LXD, OpenNebula, or a number of other consumers of the image server are compatible with the same set of images.

So that’s not something new that needed custom engineering or anything.
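Purely as an illustration of the kind of logic that has always been there (the client names, version checks, and stream paths below are invented for this sketch and are not the actual server configuration):

```python
# Invented sketch of user-agent based stream selection: different consumers
# of the image server may be pointed at different index files. None of the
# names or rules below reflect the real images server.
def select_index(user_agent: str) -> str:
    agent = user_agent.lower()
    if agent.startswith("incus"):
        return "streams/v1/index.json"
    if agent.startswith("lxd") or agent.startswith("lxc"):
        # Older clients might only understand a reduced or legacy index.
        return "streams/v1/index-legacy.json"
    # Unknown consumers get the default index.
    return "streams/v1/index.json"


print(select_index("Incus 0.5.1"))  # streams/v1/index.json
print(select_index("LXD 5.20"))     # streams/v1/index-legacy.json
```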

To provide a tiny bit more of a timeline than what’s in the main post:

  • Canonical was reminded on September 1st that continued access to the image server was dependent on their continued investment in maintaining tooling and image definitions. This was done after the main engineer working on images there was told to de-prioritize his work on distrobuilder and community images.
  • End of September, that engineer decided to leave Canonical. I once again reached out to make sure that someone would be taking over that work; Canonical declined to commit to doing so.
  • End of November, the engineer in question was out of Canonical completely. Canonical still didn’t commit to having anyone take over that work.
  • End of December, Canonical re-licensed LXD and added a CLA, making us completely unable to support anything related to LXD, as we can no longer legally look at the LXD source code due to the risk of being tainted for our own work on Incus.
  • Immediately following that, the phase out of access was announced.

So any mention of this coming as a shock to anyone at Canonical is sadly just not true :slight_smile:


Thank you for giving more detail! I understand now that this is a complicated situation.

We are using LXD in our Ansible playbooks. Do you know if Incus will receive the same support in third-party ecosystems like Ansible? We would consider a switch as well.

I contributed an Ansible connection plugin a few weeks ago and have been using it for my own infrastructure. I actually linked to the Ansible connection plugin, Terraform provider, and Packer plugin in the recent Incus 0.5 announcement:

The inventory plugin for Ansible is a bit weird. It's actually quite outdated, even by LXD standards. If it got updated to use the modern LXD API, then it would also work against Incus.

Though I also don't really like how the inventory plugin interacts with LXD/Incus, as it doesn't really understand remotes, authentication, or projects. I think if I were to work on that particular plugin, I'd base it on the CLI tool, following the example of the connection plugin.
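As a rough sketch of what basing it on the CLI tool could look like (this is not the actual plugin, just the general idea; it assumes an incus client is configured on the controller and uses `incus list --format json`):

```python
# Sketch of a CLI-backed inventory source: ask the incus client for the
# instance list as JSON and group the instances by project. This is not the
# Ansible inventory plugin itself, just the general idea.
import json
import subprocess


def list_instances(remote: str = "local") -> list:
    # `incus list <remote>: --format json` returns full instance objects,
    # including name, project, config and state.
    output = subprocess.run(
        ["incus", "list", f"{remote}:", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    return json.loads(output)


def build_inventory(instances: list) -> dict:
    inventory = {}
    for instance in instances:
        group = instance.get("project", "default")
        inventory.setdefault(group, []).append(instance["name"])
    return inventory


if __name__ == "__main__":
    print(json.dumps(build_inventory(list_instances()), indent=2))
```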

There's also another plugin to handle creation of the instances themselves; I think that one is in the worst shape of the three, and I'm not too sure how much effort should be put into it. Lately I've been in the camp of preferring Terraform/OpenTofu to create the instances and other resources, then using Ansible to deploy everything inside of those instances, trying to keep the two concerns deliberately separate…


This is how I always do it, Incus or not: first Terraform with cloud-init, then, if needed, some Ansible.

It makes things really clean and easy to maintain.