OCI support with authentication (e.g. GCP Artifact Registry)

I was able to launch an Incus OCI (i.e. Docker) container pulled from Google Artifact Registry (very cool!!! :partying_face:)… however, getting authentication working required some extra steps which I haven’t found documented anywhere else, so I’m posting them here:

  1. create an auth.json file in skopeo’s expected format
  2. set XDG_RUNTIME_DIR in incusd’s environment so that skopeo can find the above file (example below)
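
For example (a minimal sketch; /run/incus-oci is an arbitrary path I picked for illustration), skopeo looks for ${XDG_RUNTIME_DIR}/containers/auth.json by default:

mkdir -p /run/incus-oci/containers
cat > /run/incus-oci/containers/auth.json <<'EOF'
{
  "auths": {
    "us-central1-docker.pkg.dev": {
      "auth": "<base64 of USER:PASSWORD>"
    }
  }
}
EOF
# then start incusd with XDG_RUNTIME_DIR=/run/incus-oci so skopeo finds the file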

I considered opening a doc PR to the repo for this, however it’s a bit ugly, and it would be fairly easy to improve the UX by standardizing on a location for the auth.json file (and setting the XDG_RUNTIME_DIR environment variable, or specifying the path in the skopeo invocation).

If maintainers agree, I’d be happy to open a feature request issue, and could look at adding this as it seems straightforward.

Thanks!

What does authentication typically look like for private registries?

Having to use environment variables and text configuration files isn’t likely to be a particularly good fit for Incus’ architecture where systems can be clustered and the client and server are decoupled.

What would make more sense for us would be to use something like HTTP basic-auth syntax, so you’d do something like incus remote add oci-private-registry https://USER:PASSWORD@server/path and then have Incus handle feeding that to skopeo.

But we’d need to have a better understanding of what authentication methods are currently in use/possible for registries so we don’t get ourselves stuck with a solution that won’t work for a bunch of users.

GCP Artifact Registry does support docker login-style auth. The two most applicable options for Incus would be either a service account key or an access token.

An access token is only valid for 60 minutes, so it wouldn’t lend itself to being part of the remote config. I could see a token being passed at launch time and then handed to the image pull via skopeo.

Probably the best option for the first iteration would be more like what you suggest: using a service account key and passing it in at remote creation.

Example
As an example, the following skopeo command uses a GCP service account key and works with no extra auth file/environment:

# note: -w0 keeps GNU base64 output on one line so the credential isn't split
PASS=$(base64 -w0 key.json); /opt/incus/bin/skopeo inspect --creds "_json_key_base64:$PASS" docker://us-central1-docker.pkg.dev/blah/blah:blah

Your suggestion could work well: have the remote add command take the typical URL w/ USER:PASS and pass that on to skopeo via --creds.

Right, so the above would turn into incus remote add oci-private https://_json_key_base64:${PASS}@us-central1-docker.pkg.dev/blah/blah:blah --protocol=oci

We don’t really have infrastructure to easily test this, though. Any chance you can spin up a private registry with a test image in there and provide me with some credentials I can test with?

Hey @stgraber, sorry for the delayed response.

Yes, I have some infra that can be used to test.

Seems pretty straightforward. I’d like to work on it, if you think it’s a reasonable project for an Incus newbie?

I’ve got some time over the next couple weeks.

Yep, that’d be great!

I am also interested in this change, as it will make things easier (for example, having Incus rebuild instances based on images stored in a custom private registry). If there is a need for AWS- or GCP-based resources, or other types of resources, I may be able to assist with setting those up.

There is Zot, which supports username/password (bearer authentication), TLS mutual auth, and other kinds of authentication. Google Cloud’s registry supports more types of authentication. IMO a lightweight solution that passes as many of these values as possible through to skopeo would be adequate.

Right now the OCI repository must have a remote URL with an HTTPS prefix, and Incus does not support any authentication (as far as I can tell). So the only setup that works right now is a privately hosted repository with no authentication, but with TLS driven by a self-signed certificate.
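
So today, the only thing I can get working looks like this (a sketch, assuming a self-hosted Zot instance at zot.example.com serving TLS with no auth; remote and image names chosen for illustration):

$ incus remote add my-zot https://zot.example.com --protocol=oci
$ incus launch my-zot:myimage c1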

AWS ECR, GCP, Chainguard cgr.dev, and others typically use credential helpers.

With docker, skopeo, crane, etc. as my user, it looks like this:

$ cat ~/.docker/config.json
{
	"auths": {},
	"credHelpers": {
		"cgr.dev": "cgr",
		"891377301584.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
	}
}

I have credential helpers configured in there, and everything looks at those. The credential helpers can do whatever they need (e.g. use SSO, YubiKeys, OIDC, tokens, stuff stored in keyrings, etc.), but all of them emit a short-lived token which is then used for auth.

Cred helpers are matched by URL prefix. See GitHub - GoogleCloudPlatform/docker-credential-gcr: A Docker credential helper for GCR users for an example of how a GCP credential helper works.

Thus upon doing:

$ incus remote add cgr.dev --protocol oci https://cgr.dev
$ incus launch cgr.dev:chainguard/node:latest

I expect this to happen (a rough client-side sketch follows the list):

  • incus client calls the incus daemon and gets the OCI URL
  • incus client locally executes the docker credHelpers to get a short-lived token, if any
  • incus client passes the token to the daemon (well, whatever Username & Secret the cred helper returned)
  • incus daemon passes the token on to skopeo
  • incus daemon queries the image / searches images / pulls them / etc.
  • incus daemon then launches stuff from the local registry
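
Here is that client-side sketch (steps two and three), assuming jq is installed and the helper is registered in ~/.docker/config.json as above:

# look up which helper handles cgr.dev, then drive it via the
# credential-helper protocol: registry host on stdin, JSON on stdout
HELPER=$(jq -r '.credHelpers["cgr.dev"]' ~/.docker/config.json)
CREDS=$(echo "cgr.dev" | "docker-credential-${HELPER}" get)
USERNAME=$(echo "$CREDS" | jq -r .Username)
SECRET=$(echo "$CREDS" | jq -r .Secret)
# USERNAME and SECRET are what the client would send to the daemon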

A realistic example is:

echo "cgr.dev" | docker-credential-cgr get
{"ServerURL":"cgr.dev","Username":"_token","Secret":"eyJhbGc....."}

Which can then be used dynamically for auth.
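
For instance, the daemon side could feed that pair straight into skopeo’s --creds flag (reusing USERNAME/SECRET from the sketch above):

skopeo inspect --creds "${USERNAME}:${SECRET}" docker://cgr.dev/chainguard/node:latest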

This is also similar to how all the other tools do it for end users with ephemeral tokens (crane, skopeo, docker, etc.).
I don’t know if there is something like an fd/socket option to cred helpers that the incus client could grab and pass through to the daemon to hand to skopeo, as that would work for local development with the incus daemon and client on the same machine.

This is the preferred mode of operation for interactive users, as it allows strong authentication (both ecr-login and cgr support OIDC with multi-factor authentication and delegation to things like Google accounts, GitHub, etc.).

Separately, it is possible with most cred helpers to issue long-lived tokens, but it is still best to go via the cred helpers to continuously exchange/refresh those tokens. Thus it would help to specify how to configure cred helpers for the incus daemon account/skopeo, even if it’s done by sort of backdooring it: leaving global creds owned by the incus daemon user account in the incus daemon’s home directory. In that use case, the creds/tokens provided should be a long-lived, non-interactive secret usable to obtain ephemeral tokens.

cgr.dev is the one I care the most about. One can log into it for free to get access to the free images, but credential-helper authentication is needed to access e.g. the cgr.dev/chainguard/node image. Install chainctl (How to Install chainctl — Chainguard Academy); do the one-time account sign-up (Authenticating to Chainguard Registry — Chainguard Academy); then use chainctl auth configure-docker to configure the cred helper; then attempt to pull/launch the node image using docker/skopeo. It will pop open a browser window to log in and obtain a long-lived auth, which is then exchanged for shorter-lived tokens.
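
Condensed, that walkthrough is roughly (command names per the linked Chainguard Academy docs):

$ chainctl auth login                # one-time browser-based sign-in
$ chainctl auth configure-docker     # registers the cred helper in ~/.docker/config.json
$ docker pull cgr.dev/chainguard/node:latest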

This would be representative of all the other cred helpers too: AWS, Azure, GCP, etc.

Injecting creds into the incus account is hitting a snag: neither $XDG_CONFIG_HOME nor $HOME is defined when skopeo is invoked.
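
One possible workaround (an untested sketch): skopeo also honors the REGISTRY_AUTH_FILE environment variable, which bypasses the $HOME/$XDG_CONFIG_HOME lookup entirely. Assuming incusd runs under a systemd unit named incus.service, something like:

$ systemctl edit incus.service
# then add, pointing at a daemon-owned auth file (path chosen for illustration):
[Service]
Environment=REGISTRY_AUTH_FILE=/var/lib/incus/oci-auth.json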


Sorry for the necro, but is there a GitHub issue we can follow along on?

Not related to the original poster, but Docker Hub is planning to introduce image pull rate limiting for unauthenticated users in a few months.
