@stgraber Is there a mechanism for passing parameters to the new OCI Docker containers?
I notice that in your release notes you have
-c environment.MYSQL_DATABASE=wordpress
for environment variables.
How would I pass values for exposing ports or defining non-volatile volumes?
Is there perhaps a plan to pass “docker compose” types of parameters?
Where would persistent volume data be stored? In the container or with an external mapping?
So, how would you pull or refresh a container to a newer release? And is there a way to create an OCI container with multiple apps, similar to a docker-compose file that might contain an app and a database?
I think the primary thing is to detach from the perspective of seeing Incus simply as a way to run Docker containers, and from wanting Incus to behave exactly like Docker. Rather, Incus also supports creating containers from OCI images.
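To illustrate the Incus-native approach to the questions above, here is a hedged sketch of how environment variables, persistent volumes, and port exposure map onto Incus concepts. The image name, instance name, paths, and ports are all made up for the example:

```shell
# Launch an OCI container from Docker Hub, setting an environment variable
incus launch docker:mysql mysql-c1 -c environment.MYSQL_DATABASE=wordpress

# Persistent data: attach a host path (or an Incus storage volume) as a disk device
incus config device add mysql-c1 data disk source=/srv/mysql path=/var/lib/mysql

# Port exposure: proxy a port on the host to a port inside the container
incus config device add mysql-c1 sql proxy \
    listen=tcp:0.0.0.0:3306 connect=tcp:127.0.0.1:3306
```

As far as I know there is no built-in docker-compose equivalent; each service typically runs as its own OCI instance, with shared settings grouped into profiles.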
It would be fantastic if incus info <oci_container> showed something we could compare against skopeo inspect docker://docker.io/xxx/xxx, to be able to recognise whether there is a new image version. What do you think?
That tells you it was an OCI image, what the image was and where it came from as well as its hash on the image server side. So you should be able to pretty easily feed that to skopeo to see if the hash is still the same, if not, then a rebuild will refresh the container.
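As a rough sketch of that comparison, with placeholder values standing in for what incus config get and skopeo inspect would actually return:

```shell
# Placeholder values standing in for the real commands:
#   SHA_LOCAL="$(incus config get c1 volatile.base_image)"
#   DIGEST_REMOTE="$(skopeo inspect docker://docker.io/library/nginx | jq -r .Digest)"
SHA_LOCAL="abc123"
DIGEST_REMOTE="sha256:abc123"

# Incus stores the bare hash; skopeo prefixes the algorithm, so strip it first
SHA_REMOTE="${DIGEST_REMOTE#sha256:}"

if [ "$SHA_LOCAL" = "$SHA_REMOTE" ]; then
    echo "image up to date"
else
    echo "new image available, rebuild needed"
fi
```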
In my tests, when I add a profile to an OCI container that bridges it to the main LAN, as I would with an Incus container, it causes the eth0 state to go down. Is there something that I am missing?
When applying profiles live, significant changes to network interfaces are handled by removing the device from the instance and adding a new one back in. For system containers and VMs, that's usually fine because systemd-networkd will see the hotplug event and apply the configuration.
For OCI containers, because the network configuration is performed directly by Incus before the container starts, and there's no init system or anything running inside to react to hotplug events, that leads to an unconfigured eth0.
Note that the same would happen with a container or VM using something like ifupdown rather than systemd-networkd as not all network management tools handle hotplugging.
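One way around this, sketched below with made-up profile and instance names, is to apply the profile while the OCI container is stopped, so that Incus configures eth0 fresh on the next start:

```shell
incus stop oci-c1
incus profile add oci-c1 lan-bridge   # hypothetical profile bridging to the main LAN
incus start oci-c1                    # Incus configures eth0 during startup
```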
The proxy only proxies between the host and the container, so yeah, you can’t really use it for that.
Does Docker allow for port remapping other than by proxying the traffic from the host's IP?
In your case, the ideal solution would be for the container image to have an environment variable used to customize the port it listens on, but I think it's pretty uncommon for application containers to let you customize that.
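For the rare images that do support it, a sketch would look something like the following (Grafana's GF_SERVER_HTTP_PORT is one real example of such a variable; the instance name is made up):

```shell
# Grafana honours GF_SERVER_HTTP_PORT, so the container itself listens on 8080
incus launch docker:grafana/grafana graf -c environment.GF_SERVER_HTTP_PORT=8080
```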
Thanks, I did not notice the SHA sum. So I now have this script, run from cron, to rebuild the container when a new version is published: a poor man's version of Docker's Watchtower.
Beware, it forces the rebuild! Usage: incustower.sh <container>
#!/usr/bin/env bash
URI_SHORT=$(incus config get "$1" image.description | sed 's/ .*//')
URI_FULL="docker://${URI_SHORT}"
URI_INCUS="docker:${URI_SHORT#*/}"
SHA_LOCAL=$(incus config get "$1" volatile.base_image)
SHA_REMOTE=$(skopeo inspect "$URI_FULL" | jq -r .Digest | sed 's/.*://')
if [ "$SHA_LOCAL" = "$SHA_REMOTE" ]; then
    echo "You are on the current version on $1."
else
    incus rebuild --force "$URI_INCUS" "$1"
    echo "Your container got updated."
fi
In the past I have created an Incus container bridged to an address on the main LAN. In that container I would nest Docker and use docker compose to deploy a Docker stack, often specifying a ports directive to present a port inside the Docker application on a different port of the Incus container, and thus on the bridged address. Since the implementation of OCI containers means that the container is the actual Docker container, it appears that the only choice is to have the Incus host address present the port number without redirection. That's why I have chosen to bridge the OCI container to its own address, like so:
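A sketch of such a bridged setup, with the profile name, bridge name, and image all as placeholders rather than my actual values:

```shell
# Hypothetical profile attaching eth0 to an existing host bridge br0 on the main LAN
incus profile create lan
incus profile device add lan eth0 nic nictype=bridged parent=br0 name=eth0
incus launch docker:nginx web -p default -p lan
```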
It turns out that there are use cases for presenting a different port number. If OCI containers are not bridged, then an Incus host with multiple OCI containers may (and probably will) have port conflicts, since the ports are all provided from the Incus host address.
You’d run the containers on an Incus-managed bridge (e.g. incusbr0), which also handles the IP address allocation with DHCP. Clients connect to the Incus host’s IP address.
If OCI containers are not bridged, then an Incus host with multiple OCI containers may (and probably will) have port conflicts, since the ports are all provided from the Incus host address.
No, the proxy can listen on one port and forward to a different one. Obviously it’s up to you to assign unique ports on the host side - as you would with Docker. The containers themselves don’t care, as they’re on different IP addresses internally.
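For example (instance names and ports are illustrative), two containers can both serve port 80 internally while the host exposes them on distinct ports:

```shell
incus config device add web1 http proxy listen=tcp:0.0.0.0:8081 connect=tcp:127.0.0.1:80
incus config device add web2 http proxy listen=tcp:0.0.0.0:8082 connect=tcp:127.0.0.1:80
```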
When it comes to networking there is a huge difference between Docker and Incus.
From a high-level view, Incus instances run in their own network namespace and expose their “local” ports. Say you have them all attached to the default incusbr0 bridge: you can reach each individual instance from the host itself, but not from outside the host.
To expose these services you can use proxy devices, as explained. Depending on how many instances and services you have, that can remain simple enough to keep track of. An alternative solution is a reverse-proxy instance running nginx, Apache, etc.: create a subdomain or subdirectory for each of your services and expose only the reverse proxy.
For all backend communication between services, just use the internal IPs or domain names ({name}.lxd). There is no need to expose those; you rather want to hide them.
Incus is very flexible and requires some more thought on how to set up your environment. It takes a moment until you see the benefits.
Based on your Docker example above, a translation to Incus would look like the following, using the command line:
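A hedged sketch of the shape of such a translation (image, paths, and ports are placeholders, not the original poster's values):

```shell
incus create docker:nginx web-c1
incus config device add web-c1 conf disk source=/srv/web/conf path=/etc/nginx/conf.d
incus config device add web-c1 http proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
incus start web-c1
```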
This configures and would start an Incus OCI container. It mounts the required volumes (in this case using a local disk path, but it can also be Incus storage; see the docs) and maps the correct ports between your host and the instance.
Why I say would start is that there is an issue starting this image; see Incus unable to start/run (docker) container natively: ‘Error: stat /proc/-1: no such file or directory’. I just ran this in my test lab to verify the exact steps, and it won’t start by default unless you add the workaround to “/init”.