Docker OCI Parameter Passing Incus 6.3

@stgraber Is there a mechanism for passing parameters to the new OCI Docker containers?

I notice that in your release notes you have

-c environment.MYSQL_DATABASE=wordpress 

for environment variables.

How would I pass values for exposing ports or defining non-volatile volumes?
Is there perhaps a plan to pass “docker compose” types of parameters?
Where would persistent volume data be stored? In the container or with an external mapping?

Is there something like this?

incus launch docker:ghost Ghost -c environment.ports="80:2368" -c environment.volumes="./data:/var/lib/ghost/content"

Is the “console” option roughly equivalent to “docker exec -it”?

Exposed ports and persistent storage should be handled the normal Incus way, with devices.

Proxy devices should be used to expose ports, and disk devices should be used for persistent data.
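As a sketch, assuming a container named Ghost that listens on port 2368 internally and a host directory /srv/ghost-data (both names hypothetical):

```shell
# Forward TCP port 80 on the host to port 2368 inside the container.
incus config device add Ghost web proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:2368

# Mount a host directory into the container for persistent data.
incus config device add Ghost content disk \
    source=/srv/ghost-data path=/var/lib/ghost/content
```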

So, how would you pull or refresh a container to a newer release and is there a way to create an OCI container with multiple apps similar to a docker-compose file that might contain an app and a database?

Refreshing a container can be done with incus rebuild.
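For example, assuming a container named Ghost that was created from docker:ghost (names hypothetical), a refresh to the latest published image would look like:

```shell
# Re-create the container's root filesystem from the current image,
# keeping its configuration and devices.
incus rebuild docker:ghost Ghost
```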

To deploy a set of containers, with storage volumes, networking, … I’d probably recommend using Terraform/OpenTofu for that.


Let’s assume that I have a profile that bridges the container to the LAN so that it has a dedicated address.

Now let’s assume that I want to remap a docker container from port 1234 to port 80.

I was thinking something like a profile like this:

incus profile device add proxy-80 hostport1234 proxy connect="tcp:127.0.0.1:1234" listen="tcp:0.0.0.0:80"

My hope was to be able to connect to the service at port 80. That doesn’t seem to work.

I think the primary thing is to detach from the perspective of seeing Incus simply as a way to run Docker containers, and wanting Incus to behave exactly like Docker.

But rather, Incus also supports creating containers from OCI images.


Terraform/OpenTofu is good enough

It would be fantastic if incus info <oci_container> showed anything we could compare with skopeo inspect docker://docker.io/xxx/xxx, to be able to recognise whether there is a new image version. What do you think about it?

There are a few things you can get from what we record about the image a container was created from:

  image.architecture: x86_64
  image.description: docker.io/library/nginx (OCI)
  image.type: oci
  volatile.base_image: 67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df

That tells you it was an OCI image, what the image was and where it came from as well as its hash on the image server side. So you should be able to pretty easily feed that to skopeo to see if the hash is still the same, if not, then a rebuild will refresh the container.

In my tests, when I add a profile to an OCI container that bridges it to the main LAN as I would with an incus container, it causes the eth0 state to go down. Is there something that I am missing?

If applying profiles live, significant changes to network interfaces are handled by removing the device from the instance and adding a new one back in. For system containers and VMs, that's usually fine because systemd-networkd will see the hotplug event and apply the configuration.

For OCI containers because the network configuration is performed directly by Incus prior to the container starting and there’s no init system or anything running in there to react to hotplug events, that’d lead to an unconfigured eth0.

Note that the same would happen with a container or VM using something like ifupdown rather than systemd-networkd as not all network management tools handle hotplugging.

Do you have a recommendation for bridging an OCI APP container to the main LAN? Is that possible?
I tried:

incus network create OCI --type=macvlan parent=bridge0
incus init docker:nginx webtest --network OCI  

That does work. However, if I wanted to remap the nginx port, the proxy option does not seem to work.

The proxy only proxies between the host and the container, so yeah, you can’t really use it for that.
Does Docker allow for port remapping outside of proxying the traffic from the host's IP?

In your case, the ideal solution would be for the container image to have an env variable used to customize the port it listens on, but I think that’s pretty uncommon for application containers to let you customize that.

Thanks, I did not notice the SHA sum. So I finally have this script, run from cron, to rebuild the container when a new version is published: a poor man's version of Docker's Watchtower.

Beware, it forces the rebuild! Usage: incustower.sh <container>

#!/usr/bin/env bash
# incustower.sh - rebuild an OCI container when a newer image is published.
# Usage: incustower.sh <container>
set -eu

# e.g. "docker.io/library/nginx" from "docker.io/library/nginx (OCI)"
URI_SHORT="$(incus config get "$1" image.description | sed 's/ .*//')"
URI_FULL="docker://${URI_SHORT}"
URI_INCUS="docker:${URI_SHORT#*/}"
SHA_LOCAL="$(incus config get "$1" volatile.base_image)"
SHA_REMOTE="$(skopeo inspect "$URI_FULL" | jq -r .Digest | sed 's/.*://')"

if [ "$SHA_LOCAL" = "$SHA_REMOTE" ]; then
        echo "You are on the current version of $1."
else
        incus rebuild --force "$URI_INCUS" "$1"
        echo "Your container got updated."
fi

Note that you can do without the skopeo call.

DOCKER_IMG="$(incus config get ${1} image.description | sed -e 's/^docker.io\///' -e 's/ .*//')"
SHA_LOCAL="$(incus config get ${1} volatile.base_image)"
SHA_REMOTE="$(incus image info docker:${DOCKER_IMG} | grep ^Fingerprint | cut -d' ' -f2)"

This will basically do the skopeo inspect call for you through incus image info.


So there is no plan to add something similar to image “auto-update”?

Could be a really useful feature I guess…

In the past I have created an Incus container that I bridged to an address on the main LAN. In that container I would nest Docker and use "docker compose" to deploy a stack, often with a "ports" directive to map a port inside the Docker application to a different port on the Incus container, and thus on the bridged address. Since the implementation of OCI containers means that the container is the actual Docker container, it appears that the only choice is to have the Incus host address present the port number without redirection. That's why I have chosen to bridge the OCI container to its own address like this:

incus network create OCI --type=macvlan parent=bridge0
incus init docker:nginx webtest --network OCI  

However, there seems to be no way to abstract port numbers as in the following example:

services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '8080:80'
      - '8181:81'
      - '4443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

It turns out that there are use cases for presenting a different port number. If OCI containers are not bridged, then an incus host with multiple OCI containers may (and probably will) have port conflicts since the ports are all provided from the incus host address.

The equivalent incus functionality is the “proxy” device: Type: proxy - Incus documentation

You’d run the containers on an Incus-managed bridge (e.g. incusbr0), which also handles IP address allocation with DHCP. Clients connect to the Incus host's IP address.

If OCI containers are not bridged, then an incus host with multiple OCI containers may (and probably will) have port conflicts since the ports are all provided from the incus host address.

No, the proxy can listen on one port and forward to a different one. Obviously it’s up to you to assign unique ports on the host side - as you would with Docker. The containers themselves don’t care, as they’re on different IP addresses internally.
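A sketch of this, with two hypothetical containers web1 and web2 that both listen on port 80 internally:

```shell
# Each proxy device listens on a unique host port and forwards to
# port 80 inside its container, so there is no conflict on the host.
incus config device add web1 http proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
incus config device add web2 http proxy \
    listen=tcp:0.0.0.0:8081 connect=tcp:127.0.0.1:80
```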

When it comes to networking there is a huge difference between Docker and Incus.

From a high-level view, Incus instances run in their own network namespace and expose their "local" ports. Let's say you have them all attached to the default incusbr0 bridge: you can reach each individual instance from the host itself, but not from outside the host.

To expose these services you can use proxy devices, as explained. Depending on how many instances/services you have, this stays simple to keep track of. An alternative solution is a reverse-proxy instance running nginx or Apache: create a subdomain or subdirectory for each of your services and expose only the reverse proxy.

For all backend communication between services, just use the internal IPs or domain names ({name}.lxd). No need to expose them; you rather want to hide them.

Incus is very flexible and requires some more thought about how to set up your environment. It takes a moment until you see the benefits :wink:

Based on your docker example above a translation to incus would look like the following using command line:

incus init docker:jc21/nginx-proxy-manager:latest nginx-manager
incus config device add nginx-manager letsencrypt disk path=/etc/letsencrypt source=/<somewhere-local>/path/to/nginx-manager/letsencrypt
incus config device add nginx-manager data disk path=/data source=/<somewhere-local>/path/to/nginx-manager/data
incus config device add nginx-manager web-81 proxy connect=tcp:127.0.0.1:81 listen=tcp:<host-ip>:8081
incus config device add nginx-manager web-80 proxy connect=tcp:127.0.0.1:80 listen=tcp:<host-ip>:8080
incus config device add nginx-manager web-443 proxy connect=tcp:127.0.0.1:443 listen=tcp:<host-ip>:8443
incus start nginx-manager --console

This configures and would start an Incus OCI container. It mounts the required volumes (in this case using a local disk path, but it can also be an Incus storage volume, see the docs) and maps the correct ports between your host and the instance.
Why I say "would start" is because there is an issue starting this image, see Incus unable to start/run (docker) container natively: ‘Error: stat /proc/-1: no such file or directory’. I actually just ran this in my test lab to verify the exact steps, and it won't start by default unless you add the workaround to “/init” :wink:


However, rebuild will not work if there are snapshots.

Error: Failed instance rebuild: Failed rebuilding instance from image: Cannot remove an instance volume that has snapshots

So do I need to either take the risk of removing all snapshots and rebuilding, or stick with the current version?

With OCI, I think it makes sense not to keep snapshots of the OCI container itself, but it is worth snapshotting its configuration.
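For example, the configuration (including devices) can be dumped to a file before a rebuild and compared or re-applied afterwards (container name hypothetical):

```shell
# Save the full configuration, including devices, to a file.
incus config show Ghost > ghost-config.yaml
```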