What's your Preferred Configuration Management and Automation Tool for LXD?

Hi guys, I'm trying to choose a tool for myself, so I was wondering what you all are using and why.

For users (i.e. devs) quickly spinning up machines to poke around on something, I gave them LXDMosaic and they seem pretty happy. The guy behind it is super helpful and responsive, so supporting it for my users (i.e. ironing out a few bugs, even some feature requests for my use case) was a real breeze.

For configuration management I'm still looking. Right now I have some basic settings written down in a wiki (lxd init --dump, lxc profile|project list, and then lxc profile|project show for each), and for the rest I kinda hope the database on at least one member of my cluster will survive long enough for me to scavenge the finer details :wink:
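For anyone who'd rather script that wiki dump than maintain it by hand, here is a minimal Python sketch of the idea; it is my illustration, not the poster's actual tooling, and it assumes the stock lxc/lxd CLIs with `--format csv` support:

```python
#!/usr/bin/env python3
"""Sketch: snapshot LXD settings (lxd init --dump, profile/project shows)
as one text blob. The helpers are pure functions so they can be
exercised without a live LXD daemon."""
import subprocess

# commands whose raw output is worth keeping as-is
STATIC_COMMANDS = [
    ["lxd", "init", "--dump"],
    ["lxc", "profile", "list", "--format", "csv"],
    ["lxc", "project", "list", "--format", "csv"],
]

def parse_csv_names(csv_text: str) -> list[str]:
    # `--format csv` prints one record per line; the name is the first field
    return [line.split(",")[0] for line in csv_text.splitlines() if line.strip()]

def show_commands(kind: str, names: list[str]) -> list[list[str]]:
    # one `lxc profile show X` / `lxc project show X` per entry
    return [["lxc", kind, "show", name] for name in names]

def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def dump_all() -> str:
    # concatenate the static dumps plus a `show` for every profile and project
    out = [run(cmd) for cmd in STATIC_COMMANDS]
    for kind in ("profile", "project"):
        names = parse_csv_names(run(["lxc", kind, "list", "--format", "csv"]))
        out += [run(cmd) for cmd in show_commands(kind, names)]
    return "\n".join(out)
```

Piping `dump_all()` into a dated file (or a git commit) would replace the hand-maintained wiki page.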

For automation I've written a few shell scripts that curl the API or use the lxc client for the few things I need to do across the whole server, like lxrun, which basically loops over lxc ls -c n --format csv [other filters] and does lxc exec.


I use Python scripts via pylxd, and cloud-init via LXD profiles.

I find most of the off-the-shelf orchestration systems to be over-engineered and bloated…

IMHO most solutions involving LXD only require a few hundred lines of Python to maintain as custom solutions, so it's a much more elegant tool set.
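For concreteness, the cloud-init-via-profiles approach looks roughly like this; a generic sketch, not the poster's actual profile, and it assumes the classic `user.user-data` config key (newer LXD releases also accept `cloud-init.user-data`):

```yaml
# LXD profile fragment carrying cloud-init user data
description: base profile with cloud-init
config:
  user.user-data: |
    #cloud-config
    package_update: true
    packages:
      - htop
    runcmd:
      - systemctl enable --now ssh
devices: {}
```

Applied with something like `lxc profile create base`, `lxc profile edit base < base.yaml`, then `lxc launch ubuntu:22.04 c1 -p default -p base`.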

There are a few options, each with strengths and drawbacks:

Applications (Configuration Management)

  • LXDMosaic (Self Promo, cloud-init / “deployments” support expanded in master branch)
  • Puppet Bolt (thread, @dontlaugh)
  • Ansible (docs)
  • JUJU (docs)
  • Other web interfaces (0 of which I believe make an effort for config management)
  • Your own scripts (as @Ozymandias does; I'd be interested in what LXDMosaic can do to help)

Bare metal (Automation)


We run our CI on LXD containers, and we pre-bake all the tools and packages with Packer.

The plugin is pretty good. We use it all the time, but it needs some love from some Go programmers.
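For reference, a pre-baked CI image build with Packer's LXD builder looks roughly like this; this is a sketch from memory, not the poster's config, and attribute names may differ between plugin versions:

```hcl
# build a CI base image from an upstream Ubuntu image,
# publishing the result as a local image alias
source "lxd" "ci_base" {
  image        = "ubuntu:22.04"
  output_image = "ci-base"
}

build {
  sources = ["source.lxd.ci_base"]

  provisioner "shell" {
    inline = [
      "apt-get update",
      "apt-get install -y git build-essential",
    ]
  }
}
```

`packer build .` then leaves a `ci-base` image that CI jobs can `lxc launch` without any per-run provisioning.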


I’m a big fan of Bolt, obvs. But note that it is not a “desired state” system like Ansible (or regular Puppet). It is more like a “framework for scripts”. I was fascinated by the approach and my team has found it useful. If you do want to get into Bolt, be sure to join the Puppet Community Slack.

This is an LXD-adjacent issue I'd love to see addressed in Bolt soon: Allow specifying transport config (e.g. LXD remote) at the command line · Issue #3053 · puppetlabs/bolt · GitHub

Oh, wow! Great to see so much tooling diversity…
@turtle0x1 even built his own LXD management portal!

I’ve discussed how best to build a custom LXD automation with a few people knowledgeable about the topic, including Turtle…

I've been debating for a long time whether to write a bunch of bash + Python scripts and then build an API with them over time.

Salt (similar to Ansible, Chef, and Puppet) was recently recommended to me as a way to put everything together. Has anyone got good experience (or any experience at all) with Salt + LXD?

@Ozymandias, I like your minimalist approach; do you think something like Salt could be helpful, or more of a burden?

@Aleks Like you, that's what I have now: a set of documents with scripts to be used manually when required. I've been improving the scripts for over 4 years now; it is time for me to step up and put everything together.
Ha! The first LXD-related script I ever created was the "run" command:
#!/bin/bash
# open a shell in the named container: run <container>
lxc exec "$1" -- bash

@turtle0x1, my first option was, as you know, your PHP LXD lib.
Second was Ansible, and now I might try SaltStack, so I'm moving away from PHP and more into bash / Python.
MAAS is out of scope for me now; I'm just running a bunch of dedis that I can manage fine one by one or in an LXD cluster.

@dontlaugh I'm more concerned with deployment automation than anything else. I create LXD images with the pre-configured applications ready to run inside, so code deployment is a problem I am pushing off for later. I'll be sure to check out Bolt as soon as I start facing code automation issues.


Simplicity is the greatest sophistication. :smiley:


I was someone who fell in love with Salt, but then broke up with it last year. I'm going to strongly encourage you to avoid it. If you are already invested in it, do not deepen your investment.

What I Love About Salt

Salt has, in my opinion, a great DSL. Jinja-templated YAML is a pretty productive way to work. The module system is pretty easy to understand.
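For readers who haven't seen it, Jinja-templated YAML in a Salt state file looks like this; a generic illustration of the DSL, not taken from the poster's setup:

```yaml
{% set packages = ['nginx', 'curl'] %}

install_packages:
  pkg.installed:
    - pkgs:
{% for pkg in packages %}
      - {{ pkg }}
{% endfor %}

nginx_running:
  service.running:
    - name: nginx
    - require:
      - pkg: install_packages
```

The Jinja layer is rendered first, then the resulting YAML is compiled into ordered states, which is what makes the format both templatable and declarative.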

Salt is fast. Or, it can be, because Salt is agent-based over a fast transport (ZeroMQ pub-sub).

Salt combines both desired-state and imperative orchestration in the DSL itself. Bolt can kind of do this too, but with far fewer libraries, and the code is a bit more low-level.

Why I Removed Salt from our Infra :frowning:

Bugs, bugs, bugs, bugs, bugs. You are guaranteed to hit bugs. Last year I was very excited to use Salt as a way to deploy to Kubernetes. I was excited by the possibility of embedding k8s objects directly in my salt DSL, templating them from values extracted by other workflows, and just kicking ass.

I kid you not, in 2021, their built-in Kubernetes plugin flat out didn't work. Now, perhaps my use case isn't what people mainly use Salt for, but … it's KUBERNETES. I'm not a huge k8s fan, but … it's kind of a big deal? I read through the Python code and it's basically what I would have written by hand: templating values, shelling out to kubectl apply, etc. The only reason it didn't work is that they didn't keep the Python library up to date. It hadn't been worked on in years.

Look how many open issues there are: Issues · saltstack/salt · GitHub

A good summary is the discussion on this issue:

There's a lot more I could say, but this is the core of it. I love open source, and I really try to do my best to contribute back to the stuff that I use for free: code, issues, docs, just saying what's up to new folks on this forum (and others). But with Salt, the project management felt so chaotic that I didn't really feel like it was worth it for me.

Pick literally anything else, in my opinion. And I say that with great sadness. It has a lot of great ideas.


Very, very interesting, @dontlaugh. Thank you very much for sharing.

The software I work with is also an open source tool/company, recently acquired by a corporation (a similar situation to Salt and VMware). So I understand first-hand what corporations, even with their "good vibe hats on", will do to an open source community. When Salt was recommended to me, I went directly to the Salt Slack to find some answers.

The first thing I encountered, in the #general channel, was a link to a post where somebody explained why they were leaving Salt for good. They described more or less the kind of situation you just did, but with other, let's call them "outlier", parts of Salt.
I asked whether they thought their case was rare or widespread, and they told me it was probably rare.

  • I started a poll asking how good community support was.
    Most people gave free, community support 4 out of 5 stars, immediately followed by 5 stars; the minimum rating was 3 stars (yes, I understand that nobody would give 1-2 stars for fear of not receiving good support anymore).

  • I then asked about the situation after the acquisition: VMware's initial promises vs reality 2 years later. I expected vibrant opinions and a few rants, but nothing of the sort happened, which makes me think either most of the past community is already gone or they are just OK with it.

  • I also asked: have there been any major changes in the community since the acquisition (anything jeopardizing)?
    I got 2 types of answers:
    a) Things are more or less exactly as they were before, so all good.
    b) Nothing happens here; this and that didn't work 2 years ago and it still does not work today.

Even T. Hatch explains, in a video available on YouTube, how after the acquisition former SaltStack employees would initially be helping VMware realize their new acquisition, rather than developing new features or fixing bugs, which is not only fine but to be expected. Kudos to Hatch for being honest and transparent about that.

In general, it looks like teams and talent have been scattered around the company and there is not a lot going on towards maintaining good old Salt.
There's a new project called Idem, which dates back to 2018 but is still just a playground rather than a production-ready tool. Not even the programmers know exactly whether it is a substitute for Salt or a new branch of it; they call it "a sibling" of Salt.

I'm listening to what you are saying; your words will help me keep my eyes peeled and ears open.
But I do need to try it for myself (I always do) to see if I can squeeze my needs in around the bugs.
At the same time, I already know Ansible will most probably not fit the bill; nonetheless, I am building an MVP with Ansible anyway.

Fascinating insight, thanks for sharing… It's not very surprising, sadly.

For managing and automating LXD containers, here are some top tools:

  1. Ansible: Lightweight, agentless, and easy to use with YAML playbooks. It integrates well with LXD for tasks like provisioning and configuration.
  2. Terraform: Great for infrastructure as code, it allows you to define and manage LXD containers and resources in a declarative way.
  3. SaltStack: Powerful for real-time automation and scaling. It supports both agent-based and agentless modes, making it flexible for complex setups.
  4. Puppet: Ideal for enforcing configurations and ensuring consistency across LXD containers with its declarative language.
  5. Attune: Agentless, cloud-friendly, and great for automating scripts like Bash or PowerShell across LXD containers. Its real-time debugging and multi-container coordination are key strengths.

These tools offer flexibility depending on your needs; Ansible and Puppet are great for simplicity.
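As a hedged illustration of option 1, a minimal playbook using Ansible's `community.general.lxd_container` module might look like this; container name and image alias here are placeholders, not anything from the thread:

```yaml
# create and start an LXD container from the controller
- hosts: localhost
  connection: local
  tasks:
    - name: Create and start a container
      community.general.lxd_container:
        name: web1
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          protocol: simplestreams
          alias: debian/12
        wait_for_ipv4_addresses: true
```

Because the module talks to the LXD API directly, the controller only needs the lxc client socket or a trusted remote; no agent runs inside the container.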