How to give a reverse proxy (Caddy) access to other networks' instances?

I’d like to run Caddy as a reverse proxy. Ideally, I’d have a setup something like:

  • Project & Network: alpha
    • foo: runs web server on port 5000
    • network dns.domain=alpha.incus
    • Cannot access foo.bravo.incus’s IP address
    • Ok if it can DNS resolve foo.bravo.incus
  • Project & Network: bravo
    • foo: runs web server on port 5000
    • network dns.domain=bravo.incus
    • Cannot access foo.alpha.incus’s IP address
    • Ok if it can DNS resolve foo.alpha.incus
  • Project & Network: core
    • caddy: runs on ports 80 & 443 which are proxied to host (i.e. serves internet traffic)
    • Can access by DNS & IP: foo.alpha.incus:5000
    • Can access by DNS & IP: foo.bravo.incus:5000
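
For concreteness, the layout above could be sketched with commands along these lines (the bridge names alphabr, bravobr, and corebr are my own placeholders, and exact flags may vary by Incus version):

```shell
# Isolated projects, one per application plus one for the proxy
# (placeholder names matching the list above)
incus project create alpha -c features.networks=true
incus project create bravo -c features.networks=true
incus project create core -c features.networks=true

# Per-project bridges with their own DNS domains
incus network create alphabr --project alpha dns.domain=alpha.incus
incus network create bravobr --project bravo dns.domain=bravo.incus
incus network create corebr --project core
```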

I thought I’d solve this by adding an ethX device to the caddy instance for each network. But the second ethX device didn’t get an IP address. Internet and LLM answers suggested manually setting an IP address on that interface, but that wouldn’t populate the DNS info in /etc/resolv.conf (I don’t think).

They also suggested manually running a DHCP client on that interface and/or altering the DHCP/network config for that host. But my goal is to run OCI/Docker images, and some of my images won’t have any of those tools or configs. So I must be missing something about how Incus DHCP works, because it seemingly can’t rely on the image’s tools.
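
One thing that may be relevant here: for a bridged NIC, Incus lets you pin the address on the device itself via ipv4.address, so the static lease lives in Incus’s built-in dnsmasq rather than in the guest (this covers the IP part, not the resolv.conf part). A sketch, with the network name and address as assumptions:

```shell
# Attach a second NIC to the caddy instance on the bravo network,
# reserving a fixed address in Incus's built-in DHCP/dnsmasq
# (bravobr and the 10.x address are illustrative placeholders)
incus config device add caddy eth1 nic \
    network=bravobr \
    ipv4.address=10.158.97.10
```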

I could give caddy access to the networks through firewall rules instead of adding an interface. In some ways that might be cleaner, but it doesn’t solve the DNS problem either.
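
If going the firewall route, Incus network ACLs would be the native way to express it, e.g. only letting the proxy network’s subnet reach port 5000. A sketch under assumed names and an illustrative CIDR:

```shell
# ACL allowing only the proxy subnet to reach the web servers
# (ACL name, network name, and CIDR are placeholders)
incus network acl create allow-proxy --project alpha
incus network acl rule add allow-proxy ingress \
    action=allow protocol=tcp destination_port=5000 \
    source=10.10.10.0/24
incus network set alphabr --project alpha security.acls=allow-proxy
```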

My current focus is getting this to work on a single Linux host, although a solution that also worked for a cluster would give more flexibility for the future.

Thanks in advance for any input you can provide.

Just wondering if anyone can give me some pointers on the above? Am I thinking about this in a way that’s not compatible with how Incus is intended to work?

You probably need to give a bit more context on how you have configured your projects.

Assuming both projects are fully isolated, there is no simple way to solve this. Even if you put a network device into each project, using DHCP or manual config, you would still need to manually set up correct default routes, DNS, etc. As you say, that isn’t currently supported for OCI containers; you would need to roll your own real container.

What you could try is to place your caddy OCI container in its own project and create network peerings to alpha and bravo to allow network traffic. That way your caddy should be able to communicate with the other projects and serve the applications.
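
As a rough sketch (note that network peering in Incus targets OVN networks, and all names here are placeholders), the peering is created from both sides:

```shell
# From the core project: propose a peering towards alpha's network
incus network peer create corebr to-alpha alpha/alphanet --project core

# From the alpha project: create the mirror entry to establish it
incus network peer create alphanet to-core core/corebr --project alpha
```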