lxc exec shell vs ssh -- best practice for communicating with a container

Is there any downside of using lxc exec (shell) to “log in” to a container from the host, as opposed to using ssh?

Are there scenarios (again from host to container) where ssh would be better?

I’m not an expert, but I think that for system containers - like LXC - you will normally have multiple users in the system (e.g. ubuntu, root, etc.), and so SSH is better as it allows different users to access the container, with no more requirements than having network access and knowing the user’s password.

If you rely on “lxc exec” instead of SSH, then you are effectively requiring that:

  • the command has to be run as a local user of the LXC host, like root (or another user with lxc permissions)

  • the command has to be run on the LXC host itself (and not from another network location, like, for example, a Windows PC that could SSH into the container with PuTTY). It could be this same LXC host, or a remote LXD host properly configured to control it; in either case, it is like asking the person managing LXD to act as a proxy and log in with “lxc exec” any time a system user wants to do something…
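To make the contrast concrete, here is a minimal sketch (the container name “web1” and the user “ubuntu” are hypothetical):

```shell
# lxc exec: must be run on the LXD host, by a user with lxd access:
lxc exec web1 -- su --login ubuntu

# ssh: works from any machine with network access to the container,
# for any user account that exists inside it:
ssh ubuntu@web1.lxd
```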

So, better to go with SSH - much more “normal” for a system container.

NOTE: Docker containers are “application containers”, meaning the philosophy is one process per container, so they are reluctant to have a parallel SSH daemon running in the Docker container. But LXC/LXD is a system container, more like a (very lightweight) VM.

Thanks for the run-down. Like I said, I am asking strictly in the host-to-container scenario, and really strictly for container management, which is essentially done as root in the container.

ok, I read it in a different direction

For that stricter scenario, have a read of this blog post [1] written by an LXC/LXD developer - it basically summarizes the effort to make “lxc exec” a solid alternative to “ssh”, including highlights of the differences and challenges involved.

On the other hand, personally I see “ssh” as a more widespread/standard “interface” for management, which enables, for example, coupling with other tools - like Ansible - to automate the deployment and configuration of new containers (beware: the Ansible module for LXC/LXD was outdated last time I checked…)

[1] https://cbrauner.wordpress.com/2017/01/20/lxc-exec-vs-ssh/

I have read that blog; an interesting read, but it did not help me answer my question.

BUT…something you said just made it click for me. I plan on using Ansible for deployment, and so ssh makes a lot more sense to me than lxc exec.

Thanks!

The only problem with this approach is keeping the Ansible inventory updated. Do you have any solution for that? (I’m in the same boat, hence the question.)

I recall that you can build a “dynamic inventory” with Ansible [1] - i.e., instead of having a file as the inventory (which you would have to update manually), you can get the inventory from the output of an executable (like a bash script that conveniently outputs the containers you want :slight_smile: ). It sounds harder than it is; I played with it a year ago.

That could be one way to do it

[1] https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html
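A minimal sketch of such an executable inventory, assuming it is saved as e.g. `lxd_inventory.sh` (a hypothetical name) on the LXD host, and that `lxc list` output looks like `name,STATE` in CSV form:

```shell
#!/bin/sh
# Hypothetical dynamic-inventory script: prints all RUNNING LXD
# containers as one Ansible inventory group called "lxd".
if [ "$1" = "--list" ]; then
    # lxc list -c ns --format csv gives lines like: web1,RUNNING
    hosts=$(lxc list -c ns --format csv \
        | awk -F, '$2 == "RUNNING" {print "\"" $1 "\""}' \
        | paste -sd, -)
    printf '{"lxd": {"hosts": [%s]}, "_meta": {"hostvars": {}}}\n' "$hosts"
else
    # Ansible also calls the script with --host <name>; no per-host vars here.
    echo '{}'
fi
```

You could then run something like `ansible -i lxd_inventory.sh lxd -m ping` and the group membership would always reflect the currently running containers.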


I faced the same situation: how to manage configuration inside LXC containers. I use Ansible to manage all the KVM virtual machines in my infrastructure, and I chose LXC as a lightweight alternative to KVM. I need to manage mysql/nginx/haproxy/redis/etc. and deploy a Node.js application. I don’t need Docker in my case; I have Ansible roles to deploy all of that.

I use the lxd_container module to deploy new LXC containers on remote hypervisor hosts and to manage the containers’ CPU/RAM. That is a very comfortable way for me.
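For reference, a task using that module looks roughly like this (a sketch, not a verified playbook - the container name, image alias and limits are assumptions):

```yaml
- name: Create and start a container with CPU/RAM limits
  lxd_container:
    name: web1
    state: started
    source:
      type: image
      mode: pull
      protocol: simplestreams
      alias: ubuntu/18.04
    config:
      limits.cpu: "2"
      limits.memory: 1GB
```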

But the Ansible LXD connection plugin only works on localhost (issue) and doesn’t work with remote hypervisors; at the least, it’s not easy.

That’s why I think it’s more comfortable to manage applications inside LXC containers via SSH, like any other VM.