Running OCI in Incus - System Container network configuration

Hello there! :wave:

From what I can understand, with the added OCI images capability, Incus is adding a “system OS layer” to an application container image. Am I right?
In that sense, is it in any way possible to add some kind of advanced configuration parameters to configure the network stack on that system OS layer?

Here’s what I am trying to achieve:

  • I have an application container image to which I would like to add 2 network interfaces, both of which should get an IP via DHCP.
  • However I configure the network interfaces on the container, only an interface named “eth0” is able to do DHCP.
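For reference, here’s roughly how I’m attaching the second interface (the remote, container, and network names below are just placeholders for my setup):

```shell
# Add an OCI-protocol remote and launch the application container
# ("docker" and "pihole" are placeholder names)
incus remote add docker https://docker.io --protocol=oci
incus launch docker:pihole/pihole pihole

# Attach a second NIC, named eth1 inside the container,
# backed by a second Incus network ("second-net" is a placeholder)
incus config device add pihole eth1 nic network=second-net name=eth1
```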

Is this something configurable, or is this a limitation of the application container image (in this case, pihole)?

Cheers!

Hi!

Incus has its own runtime and uses that runtime to run Docker containers.
With Incus you can specify additional network interfaces but it’s up to the application container to perform the necessary network configuration (i.e. send a DHCP request for each network interface).

That is, if you get a shell into the running OCI container, and run a networking command to list the network interfaces, it would show the network interfaces that Incus has been tasked to provide.

Therefore, the question is how to get an OCI container to configure additional network interfaces.
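As a sketch, assuming the image ships a DHCP client such as BusyBox’s udhcpc (many minimal images don’t), you could inspect the interfaces and trigger a lease request manually:

```shell
# List the network interfaces Incus has provided to the container
incus exec pihole -- ip addr

# Manually request a DHCP lease on the second interface
# (assumes the image includes BusyBox udhcpc; by default it
# backgrounds itself once a lease is obtained)
incus exec pihole -- udhcpc -i eth1
```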

Also this,

Exactly!
If there were a way to include more than one interface in this initial DHCP handshake on startup through a pre-start hook, that would be awesome.
Or expose a way to customize that pre-start hook! :slight_smile:

Hello again! :wave:
Just thinking out loud here, would it be possible to “push” cloud-init into the base image that is being created/used when using an OCI image file?

No, there is no base image, and OCI images don’t run an init system, so even if cloud-init were present in the image, nothing would start it when the container boots up.

Hello! :wave:
I see that the linked FR has been closed, but I am unsure whether it actually addresses the issue that I am running into?
Any thoughts?

Yeah, we only do DHCP on eth0. Doing DHCP on multiple interfaces concurrently is usually a very bad idea: the two networks may use the same subnet, or may both push conflicting routes or DNS servers. It’s not strictly impossible to make it work, but in 90% of cases it will fail.

Would this be something that would benefit from being customizable?

If we can determine a reasonable recurring pattern where a different behavior would be warranted, possibly. But that would increase the complexity of our DHCP client a fair bit, especially if we then need to have it either handle multiple interfaces directly or coordinate with other DHCP clients.

The DHCP client is quite resource intensive as it stands, so opening up the door for a user to make us spawn an arbitrary number of them could turn into a security issue.

In most cases, we expect that one DHCP client, or none (by not having an interface named eth0), is fine. If you need something more complex, you can usually customize your image or the resulting container to do what you want directly from within the container, avoiding any concerns about host resource usage and abuse, since that would run from within the container.
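For instance, one way to do that (a sketch, assuming the base image includes BusyBox udhcpc) is to rebuild the image with a small wrapper entrypoint that brings up the extra interface before handing over to the original entrypoint:

```shell
#!/bin/sh
# entrypoint-wrapper.sh -- hypothetical wrapper baked into a rebuilt image.
# Brings up the second interface and requests a DHCP lease from inside the
# container, so no extra host-side DHCP client is involved.
ip link set eth1 up
udhcpc -i eth1    # backgrounds itself once a lease is obtained (busybox default)

# Hand control to the image's original entrypoint
# (the path below is an assumption; check the upstream image for the real one)
exec /usr/bin/start.sh "$@"
```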
