Thanks @toby63 and @simos.
Here’s a more detailed explanation:
I have two containers that live on the same host; let's name the pieces involved:
- Origin (the container where the file I want to pull lives)
- Destination (the container I am pulling the file into; note this could be any container, on any host)
- HostOne (the host Origin lives on)
- AnyHost (any other host where a Destination container can live)
OK, I want to copy a file from Origin to Destination, so I added HostOne as a remote inside the Destination container. Now, from Destination, I can run:
lxc file pull HostOne:Origin/folder/file.txt /folder
(Please note this also works if the command is issued from a Destination container on AnyHost.)
Effectively, this pulls a file from Origin into its sibling container, Destination.
It works whether both containers are on the same host or on different hosts, which makes it a fantastic tool for my use case (explained below).
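For reference, the one-time setup inside Destination looks roughly like this; the hostname, port, and trust password are placeholders, not values from my actual setup:

```shell
# Inside the Destination container: register HostOne's LXD API as a remote.
# HostOne must already expose its API over the network, e.g. with:
#   lxc config set core.https_address "[::]:8443"
lxc remote add HostOne https://hostone.example.net:8443 \
    --accept-certificate --password "$LXD_TRUST_PASSWORD"

# After that, the pull works exactly as described above:
lxc file pull HostOne:Origin/folder/file.txt /folder
```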
So, I am assuming that the LXC developers decided not to allow the push and pull commands to work among sibling containers for some reason, probably security related?
On the other hand, it might just be that the command was never meant to do this, so the functionality was never implemented. After all, there are many workarounds that achieve the same thing, as @toby63 just pointed out, so maybe there was simply no need for it.
Today, LXD’s API lets me break this old rule, but are there any risks involved?
Is this workaround functional (allowed) because it is safe, or is it just a corner case that no one thought needed to be disabled but that could pose some risk?
So, I guess the real questions I am looking to answer are:
Is it safe to add a remote to a container?
Is there any additional risk if the remote is the host the container lives in?
With the remote added inside the container, the communication is encrypted and protected by keys, and (as far as I know) it does not use any socket: it travels over the network until it reaches the API on HostOne.
In theory, it shouldn’t be any safer or riskier than communication between LXD-enabled hosts, for example, the members of a cluster.
I really don’t think I am giving the container any access to the host; I am just executing the pull command by proxy through HostOne. Please note this works from any container on any host.
Here’s my use case:
What I’m really doing is using HostOne:Origin as a config file repository of sorts.
Scripts run in all containers, on different hosts, and those scripts need to be able to retrieve the config files when there are new versions.
This workaround lets the same two lines of code work on any host I deploy to, including the host Origin lives on, but not only that one: I can also pull the files from any container on any other host, which helps me centralize config files for all hosts and containers.
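To make the pattern concrete, here is a sketch of what those "same two lines" could look like in each container's update script; the file paths and the reload step are illustrative, not my actual script:

```shell
# Refresh the local config from the central Origin container.
# Identical on every container, on every host, once the HostOne remote exists.
lxc file pull HostOne:Origin/etc/myapp/config.yaml /etc/myapp/config.yaml
systemctl reload myapp   # illustrative: make the service pick up the new file
```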
That’s why I am asking, and in my mind, just considering the implications opens a new batch of questions. The possible answers, as I see them:
- The pull and push commands are restricted to host-to-container communication for a specific reason, and I should not be adding a remote inside one of the containers; remotes are only meant to be used by other hosts. (So either this should not be done at all, or understanding that reason might help determine whether the workaround adds risk.)
- This was simply never implemented because it wasn’t really needed (and hence, maybe there isn’t any inherent security risk in this workaround).
- There is a specific risk, but removing the remote after the operation completes also removes the risk?
- There is a risk when Origin lives on HostOne, but not when you pull the files from Destination containers living on hosts other than the one Origin lives on.
About the possible solutions you gave me earlier, @toby63:
- push a file from host to container: the file lives in another container, not on the host. OK, I guess I could first pull it from Origin and then push it to Destination; however, the reason I’m doing this is to run a script inside Destination, and since the host is out of reach for that script, it would not work for me. Looking at it from a different perspective, though, this is exactly what I am doing: because I can’t issue the pull command directly on Destination, I am making the host do it for me by adding it as a remote.
- disk-share: This is just one file, pulled once every few months, and I want to be able to use it from any container, and ideally from any host, since I intend to use it for refreshing configuration files. Because those hosts live in very heterogeneous environments, this would be really hard(-er) to implement than just adding a remote to the containers.
- volume-share (between containers): That would work just fine, but adding a remote seems simpler to implement, more portable, and somehow more secure, as all the communication is encrypted and there is a key-protected handshake each time we transfer the file.
- network-share: That would work fine too, but I can’t count on having network shares available; everything is contained within the host, and siblings of this host live with different cloud providers, and even on bare metal, around the world.
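For completeness, the host-mediated two-step (pull then push) would look something like the sketch below when run on HostOne itself; paths are illustrative. It only works where a script can run on the host, which is exactly what my containers cannot do:

```shell
# On HostOne (not inside a container): relay the file between siblings.
lxc file pull Origin/folder/file.txt /tmp/file.txt
lxc file push /tmp/file.txt Destination/folder/file.txt
rm /tmp/file.txt   # clean up the temporary copy on the host
```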