Pull file from sibling container on the same host (SOLVED?)

Hi fellow lxders,

Since it is not possible to pull a file living in a sibling container on the same host (right?), I tried the following workaround (which does the job just fine):

  • Add the host as a remote inside the container.
  • Pull the file from the sibling container via that remote (which is actually the host the containers live on).
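In commands, the workaround looks roughly like this (the remote name, address, and paths are illustrative, not my real values; depending on the LXD version, a trust token may be required instead of a password):

```shell
# Run inside the Destination container (the lxc client must be installed there).

# Trust the host's LXD API over the network; the host must be listening,
# e.g. core.https_address must be set on it:
lxc remote add HostOne 192.0.2.10 --password "trust-password"

# Pull a file from a sibling container through the host's API:
lxc file pull HostOne:Origin/folder/file.txt /folder/
```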

My question is: does this represent an issue of any kind? A security risk?

Can you pull a file from a host? I thought you always pull from an instance (container or VM).

Why not use scp/rsync/rrsync? It works everywhere and it works well for large files.

Hi votsalo, thanks for the insight.

I’ve realized I didn’t properly describe the scenario…
So I’ve just edited the title and description.

I still don’t understand the scenario.

Can you clarify what exactly you do:

  • Where is the file/folder you want to pull? (On the host or in a specific container?)
  • Which instance should pull the file, and why does it not work?
    Also, what is the exact relation between this instance and the instance it wants to pull from?

Because normally you would have multiple solutions:

  • push a file from host to container
  • disk-share
  • volume-share (between containers)
  • network-share

All of these are better from a security point of view.
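For illustration, rough sketches of some of these options (container, pool, and path names are made up; exact flags can vary by LXD version):

```shell
# All run on the host; "container1", "container2", "default" etc. are made up.

# push a file from host to container:
lxc file push ./app.conf container1/etc/app/app.conf

# disk-share: expose a host directory inside a container:
lxc config device add container1 shareddir disk source=/srv/shared path=/mnt/shared

# volume-share: one custom storage volume attached to two containers:
lxc storage volume create default configs
lxc storage volume attach default configs container1 /mnt/configs
lxc storage volume attach default configs container2 /mnt/configs
```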

In terms of security, if a container has access to the host’s LXD Unix socket and that container is compromised, then the whole server is compromised. But if you are certain that the container will never be compromised, then you are fine.

Personally, I would put all services in containers (insert meme: All The Things). And would not give any access for the containers to the host.

Also, tutorial at https://blog.simos.info/how-to-manage-lxd-from-within-one-of-its-containers/

I suppose your question could be: “Here is a scenario where, as far as I can tell, I need the container to access the host. Can you suggest an alternative to avoid that?” And we would take it from there.

Thanks @toby63 and @simos.

Here’s a more detailed explanation:

I have 2 containers which live on the same host; let me name the players:

  • Origin (the container where the file I want to pull lives)
  • Destination (the container I am pulling the file into; note this could be any container, on any host)
  • HostOne (the host Origin lives on)
  • AnyHost (any other host where I can run a copy of Destination)

OK, I want to copy a file from Origin to Destination, so I added HostOne as a remote within the Destination container, and now, from Destination, I can run:

lxc file pull HostOne:Origin/folder/file.txt /folder
(please note this works too if the command is issued from AnyHost:Destination)

Effectively, this pulls a file from Origin into its sibling container, Destination.
It works whether both containers are on the same host or on different hosts, which makes it a fantastic tool for my use case (explained below).

So, I am assuming that the LXC developers decided not to allow the push and pull commands to work among sibling containers for some reason, probably security related?

On the other hand, it might just be that the command was simply never meant to do this, so the functionality was never implemented. After all, there are many workarounds that allow just that, as @toby63 just pointed out, so maybe there was simply no need for it.

Today, LXD’s API allows me to break this old rule, but are there any risks involved?
Is this workaround functional (allowed) because it is safe, or is it just a corner case that no one thought needed to be disabled, but which could pose some risk…

So, I guess the real questions I am looking for an answer to are:

Is it safe to add a remote to a container?
Is there any additional risk if the remote is the host the container lives in?


By adding the remote in the container, the communication is encrypted, protected by keys, and does not use any socket (as far as I know), as it travels through the network until it reaches the API on HostOne.
In theory, it shouldn’t be safer or riskier than any communication between LXD-enabled hosts, for example the members of a cluster.

I really don’t think I am giving the container any access to the host, I am just executing the pull command by proxy of HostOne. Please note this works from any container on any host…

Here’s my use case:

What I’m really doing is using HostOne:Origin as a config file repository of sorts.
Scripts run in all containers, on different hosts, and those scripts need to be able to retrieve the config files when there are new versions.

This workaround allows the same 2 lines of code to be valid on any host I deploy to, including the host Origin lives on, but not limited to it, as I can also pull the files from any container on any other host. This helps me centralize config files for all hosts and containers.
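The two lines in question are roughly the following (paths and the reload step are placeholders; it assumes the remote HostOne was already added as described above):

```shell
#!/bin/sh
# Placeholder paths/names; "HostOne" must already exist as a remote here.
lxc file pull HostOne:Origin/srv/configs/app.conf /etc/app/app.conf
/etc/init.d/app reload  # or however the consuming service picks up the change
```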

That’s why I am asking, and in my mind, just considering the implications opens a new batch of questions…

  • The pull and push commands are restricted to container-host communication for a specific reason, and I should not be adding a remote within one of the containers; remotes are only meant to be used by other hosts. (So either it should not be used, or understanding that reason might help determine whether there is an added risk to this workaround.)

  • This was never implemented simply because it wasn’t really needed.
    (and hence, maybe there isn’t any inherent security risk to this workaround).

  • There is a specific risk, but removing the remote after the operation completes would also remove the risk?

  • There is a risk when Origin lives on HostOne, but not when you pull the files from Destination containers living on hosts other than the one Origin lives on.

About the possible solutions you gave me earlier @toby63

  • push a file from host to container: the file lives in another container, not on the host. OK, I guess I could first pull it from Origin and then push it to Destination; however, the reason I’m doing this is to run a script in Destination, and since the host is out of reach for that script, it would not work for me.
    Looking at it from a different perspective, this is exactly what I am doing: because I can’t issue the pull command directly on Destination, I am making the host do it for me by adding it as a remote.

  • disk-share: This is just one file, pulled once every few months, and I want to be able to use it from any container, and ideally from any host, as I intend to use it for refreshing configuration files. Since those hosts live in very heterogeneous environments, this would be (even) harder to implement than just adding a remote to the containers.

  • volume-share (between containers): That would work just fine. But adding a remote seems simpler to implement, more portable, and somehow more secure, as all the communication is encrypted and there is a key-protected handshake each time the file is transferred.

  • network-share: That would work fine too, but I can’t count on having network shares available; everything is contained within the host, and siblings of this host live with different cloud providers and even on bare metal, around the world.

Thank you for the clarification.

So to summarize:
You have two containers and want to pull a file from container1 to container2, and
as a solution, for now, you added the host’s LXD server (on which container1 runs) as a remote LXD server to container2.
So I assume you set security.nesting to true in container2, installed LXD inside it, and then added the host as a remote?

So it seems you give container2 access to the host’s LXD server, and yes, that’s a possible security risk, as the intention of containers is to keep them separate from the host (and other containers).

Think about it: now container2 can control the LXD server and all instances in it, and through the LXD server it can potentially harm the host.
This is a very bad scenario.

This is also an exaggerated solution, imo; it’s like shooting a mosquito with a tank.

I would encourage you to use one (or, if necessary, multiple) of the other solutions instead, or search for another solution.

This is a quite common use case, so you will find a solution for it.
For example, provide the file via a webserver and/or via a VPN (to separate it from the rest of the internet), etc.
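A minimal sketch of the webserver idea (port, paths, and hostname are placeholders; in practice you would put this behind a VPN and/or TLS rather than serve it plainly):

```shell
# On the machine holding the config files (placeholder directory and port):
python3 -m http.server 8080 --directory /srv/configs &

# In any container that should fetch the file (placeholder hostname):
curl -fsS http://config-host.example:8080/app.conf -o /etc/app/app.conf
```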

I know this seems much more complicated than your solution, but your solution is really bad (if I understand it correctly), so you need a better one.