oauth2-proxy OCI container does not run due to networking issue

I can’t seem to get oauth2-proxy to run.

The container either complains about no route to host, or seems to fall back to ::1 as a DNS server and fails (I presume because the network doesn’t come up). I am using the same profile that works fine for all my other containers.

Even stranger, it works on first boot of the host, but after a stop and then a start, it doesn’t come up.


srv01:/home/amaccuish # incus init quay:oauth2-proxy/oauth2-proxy:latest oauth2-proxy --storage local --profile default
Creating oauth2-proxy
srv01:/home/amaccuish # incus config device add oauth2-proxy config disk source=/incus/data/oauth2-proxy/oauth2-proxy.cfg path=/oauth2-proxy.cfg
Device config added to oauth2-proxy
srv01:/home/amaccuish # incus start oauth2-proxy
srv01:/home/amaccuish # incus console --show-log oauth2-proxy
[2026/01/12 18:57:17] [provider.go:55] Performing OIDC Discovery…
[2026/01/12 18:57:17] [main.go:59] ERROR: Failed to initialise OAuth2 Proxy: error initialising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://xxx.xxx.xxx.xxx/realms/xxx/.well-known/openid-configuration": dial tcp: lookup xxx.xxx.xxx.xxx on 10.216.157.1:53: dial udp 10.216.157.1:53: connect: network is unreachable
srv01:/home/amaccuish #

Has anyone faced a similar issue with a Go app? I note that the container base image is the GoogleContainerTools distroless image; perhaps these are not supported by Incus?

Config


architecture: x86_64
config:
  environment.HOME: /home/nonroot
  environment.PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  environment.SSL_CERT_FILE: /etc/ssl/certs/ca-certificates.crt
  environment.TERM: xterm
  image.architecture: x86_64
  image.description: quay.io/oauth2-proxy/oauth2-proxy (OCI)
  image.id: oauth2-proxy/oauth2-proxy:latest
  image.type: oci
  oci.cwd: /home/nonroot
  oci.entrypoint: /bin/oauth2-proxy --config /oauth2-proxy.cfg
  oci.gid: '65532'
  oci.uid: '65532'
  volatile.base_image: 56e3daedf765c7a1eea6e366fbe684be7d3084830ade14b6174570d3c7960954
  volatile.cloud-init.instance-id: 76d6fe9e-6d48-4298-82c4-7b2ad4ba97e2
  volatile.container.oci: 'true'
  volatile.eth0.hwaddr: 10:66:6a:91:99:f4
  volatile.idmap.base: '0'
  volatile.idmap.current: >-
    [{"Isuid":true,"Isgid":false,"Hostid":400000000,"Nsid":0,"Maprange":500000001},{"Isuid":false,"Isgid":true,"Hostid":400000000,"Nsid":0,"Maprange":500000001}]
  volatile.idmap.next: >-
    [{"Isuid":true,"Isgid":false,"Hostid":400000000,"Nsid":0,"Maprange":500000001},{"Isuid":false,"Isgid":true,"Hostid":400000000,"Nsid":0,"Maprange":500000001}]
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: 'false'
  volatile.uuid: cceaea66-bf73-491c-b141-da8a8bd29803
  volatile.uuid.generation: cceaea66-bf73-491c-b141-da8a8bd29803
devices:
  config:
    path: /oauth2-proxy.cfg
    source: /incus/data/oauth2-proxy/oauth2-proxy.cfg
    type: disk
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
  - default
stateful: false
description: ''
created_at: '2026-01-12T18:55:41.904851868Z'
name: oauth2-proxy
status: Stopped
status_code: 102
last_used_at: '2026-01-12T18:55:53.693452252Z'
location: none
type: container
project: default

It’s probably being slightly too quick, trying to connect while the container is still performing DHCP.

You could try the ugly hack of changing oci.entrypoint to something like /bin/sh -c "sleep 5 ; exec /bin/oauth2-proxy --config /oauth2-proxy.cfg" to add a 5s delay before the binary gets run.
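Something like this should apply it (a sketch, untested, and it assumes the image actually ships /bin/sh):

incus config set oauth2-proxy oci.entrypoint='/bin/sh -c "sleep 5 ; exec /bin/oauth2-proxy --config /oauth2-proxy.cfg"'
incus restart oauth2-proxy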

Thank you for your speedy reply.

I had thought of this too but got stuck on the fact that the container has almost none of the normal userspace, no sleep or sh. There is only the app binary in /bin. I also tried the various lxc.hook options but with no success :frowning:

Try:

incus config set NAME raw.lxc='lxc.hook.start-host=/bin/sleep 5s'
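If I recall the LXC docs correctly, start-host hooks run in the host’s namespace rather than inside the container, so this uses the host’s /bin/sleep and it doesn’t matter that the distroless image ships no tools. For this container that would be:

incus config set oauth2-proxy raw.lxc='lxc.hook.start-host=/bin/sleep 5s'
incus start oauth2-proxy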

Legend, that did it :heart:

Thanks for posting this. It’s a common and frustrating situation when an OCI container like oauth2-proxy can start but then fails with networking errors like “network is unreachable” or DNS falling back to something like ::1. Your log shows the OIDC discovery request failing at the network layer (dial udp … connect: network is unreachable) when trying to reach the identity provider URL.

What Stéphane suggested, that the container tries to connect before the network is fully up, makes a lot of sense, especially with minimal distroless images that have no sleep or shell available to delay startup. A quick workaround, as mentioned, is to introduce a delay before the oauth2-proxy binary starts, even if that means wrapping the entrypoint in a tiny script or handling the timing externally.

A few other tips that often help with this kind of issue:

Ensure the container actually receives an IPv4 address and DNS: some OCI container setups have trouble getting an address via DHCP or proper DNS entries, which produces similar “no route” errors. Networking problems for OCI containers (missing IPv4, broken DNS resolution) have been discussed elsewhere on the Incus forum. See the commands after this list for a quick check.

Check Incus network timing: if the container is launched immediately after a restart of the host or the network bridge, it might start too early. Adding a short delay, or retry logic around the OIDC endpoint, can drastically improve reliability.

Consider using a slightly less minimal base image (if possible) that includes at least a shell or basic utilities; that makes techniques like startup delays or healthchecks much easier.
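For the first point, something like this should show whether the container actually got an address and which network it is on (incusbr0 here is just the usual default bridge name; substitute your own):

incus list oauth2-proxy
incus network show incusbr0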

Overall it doesn’t necessarily mean the image is unsupported. Rather, this distroless image doesn’t give you the normal userspace tools to handle timing and network readiness, so you may need to adapt the entrypoint or startup flow a bit to cope with the container’s network coming up.
