The AIs and I keep running in circles. Perhaps it's the way I'm trying to force things, but the situation is:
One physical network port that everything has to go through.
The machine is in a colocation facility, so I can't risk a bridge breaking SSH to the host; the network is set up in netplan.
A bridge br0 built on top of eno1.
I cannot create a NAT'ed internal network with OVN:
root@remote1:/etc/netplan# incus network create internal --type=ovn ipv4.address=10.10.0.1/24 ipv4.nat=true ipv6.address=none network=br0
Error: Option "network" value "br0" is not one of the allowed uplink networks in project
root@remote1:/etc/netplan# incus network create internal --type=ovn ipv4.address=10.10.0.1/24 ipv4.nat=true ipv6.address=none bridge.external_interfaces=br0
Error: Option "network" is required
Ultimately I want to create an internal NAT'ed network for all the containers, and then for external-IP services I will attach a second network device to the container using an SR-IOV VF. At least that is the plan.
Any suggestions or help would be greatly appreciated.
My suggestion would be to let Incus create its own "incusbr0" bridge and use it as the uplink for OVN.
That is my setup on all local systems and avoids any issues with messing around on the single network provided by your ISP. By default it isolates all containers to the Incus bridge or to your OVN network.
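For reference, this is roughly what that looks like on my systems. A minimal sketch rather than your exact commands; the addresses are placeholders, and depending on the Incus version you may not need to set ipv4.ovn.ranges by hand:

incus network create incusbr0 ipv4.address=10.158.97.1/24 ipv4.nat=true ipv6.address=none
incus network set incusbr0 ipv4.ovn.ranges=10.158.97.100-10.158.97.254
incus network create internal --type=ovn network=incusbr0 ipv4.address=10.10.0.1/24 ipv4.nat=true ipv6.address=none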
Or did I interpret your suggestion wrong?
I don't feel like I'm trying to do anything crazy here… really just making an internal 'subnet' that has access to NAT'ed internet, and then just attaching an ethernet device that has a public IP.
After everything I've tried, I cannot use OVN when I can't give it an uplink that Incus controls.
OVN always needs an uplink that Incus fully manages or at least partially controls (like a VLAN or dedicated physical interface without pre-existing IP configuration).
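With a single physical port that option is limited, but for reference, a dedicated physical uplink would look roughly like this; the interface name, addresses and ranges below are made-up placeholders:

incus network create UPLINK --type=physical parent=eno2 ipv4.gateway=192.0.2.1/24 ipv4.ovn.ranges=192.0.2.100-192.0.2.200 dns.nameservers=192.0.2.53
incus network create internal --type=ovn network=UPLINK ipv4.address=10.10.0.1/24 ipv4.nat=true ipv6.address=none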
Yes, this has been mentioned in an earlier forum thread, but as far as I understood it, it is the simplest way to automate and manage this through Incus without requiring much knowledge about TCP/IP and all the bells and whistles that come with it. Lastly, it keeps the code base simple.
Given your experience, you are rather after the advanced options Incus offers. One possible way is to configure all networks in Incus without NAT, and then add routes and masquerading manually.
This results in a setup where OVN can reach incusbr0 and the other way around. The last piece of the puzzle is to add the masquerading manually outside of Incus (as far as I'm aware).
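To make that concrete, here is a minimal sketch of what I mean. The subnets, the OVN range and the outside interface name (eno1) are placeholders, the iptables and ip commands live outside of Incus, and whether you need the manual route depends on your version and setup:

incus network create incusbr0 ipv4.address=10.100.0.1/24 ipv4.nat=false ipv6.address=none
incus network set incusbr0 ipv4.ovn.ranges=10.100.0.200-10.100.0.254
incus network create ovn0 --type=ovn network=incusbr0 ipv4.address=10.10.0.1/24 ipv4.nat=false ipv6.address=none
iptables -t nat -A POSTROUTING -s 10.100.0.0/24 -o eno1 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eno1 -j MASQUERADE
ip route add 10.10.0.0/24 via 10.100.0.200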
I just set it up on a VM and it works using the steps above. A container on the incusbr0 network can reach the outside world and any OVN container, and the same applies the other way around.
In summary, and from my experience, Incus is very flexible and can be configured for your needs. The default settings might not be optimal, but they get you there quickly in an "automated" fashion. As soon as your requirements are more complex, it offers all the knobs you need to reach your goal.
Thank you for taking the time to show this method. I went back to the drawing board, started over, and came up with this solution…
Standard netplan setup on the host, with eno1 set to a static IP.
Created a standard Incus bridge to be a NAT'ed subnet.
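For completeness, the host side looks roughly like this; the addresses are placeholders, not my real ones:

# /etc/netplan/01-eno1.yaml
network:
  version: 2
  ethernets:
    eno1:
      addresses: [203.0.113.10/24]
      routes:
        - to: default
          via: 203.0.113.1
      nameservers:
        addresses: [9.9.9.9]

# and the NAT'ed bridge, letting Incus pick the subnet:
incus network create incusbr0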
(This next part was the hard part.)
Set up the Intel NIC to have 8 SR-IOV VFs.
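The VF creation itself happens outside Incus; on my card it was essentially this (the interface name matches my setup, and this alone does not persist across reboots):

echo 8 > /sys/class/net/enp25s0f0/device/sriov_numvfs
ip link show | grep enp25s0f0v    # the VFs show up as enp25s0f0v0 through enp25s0f0v7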
I then added one of them to my test container as a separate network interface:
incus config device add mytester eth1 nic nictype=physical parent=enp25s0f0v0
One thing I had to work through was an issue with udev when I tried to set the static IP on eth1 with netplan apply from inside the container. I had to disable AppArmor for the container.
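For reference, what I did was roughly this; mytester is my container, and unconfining the whole container is a blunt fix, so there may be a narrower profile tweak:

incus config set mytester raw.lxc "lxc.apparmor.profile=unconfined"
incus restart mytester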
I'm working on how I can pass a cloud-init config that sets up the netplan…
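The shape I'm experimenting with is roughly this, though I haven't settled on it yet; the address is a placeholder, and cloud-init normally only applies network-config on the instance's first boot:

cat <<'EOF' | incus config set mytester cloud-init.network-config -
network:
  version: 2
  ethernets:
    eth1:
      addresses: [203.0.113.50/24]
EOF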