Is it possible to use an instance as a gateway in an OVN network?

Is it mandatory to configure an IP address on an OVN network to route traffic?

What I am trying to do is create an instance that works as a router and DHCP server inside my OVN network, because I need dnsmasq options and specific firewall rules that, as far as I can tell from the documentation, I cannot set up directly on the network with Incus. Right now the instance is configured with the static address 10.254.0.1/8 and a DHCP server listening on that address, but if the network has ipv4.address = none, the other instances never receive an address from that DHCP server.

Yes, it is possible to configure the setup you are asking for. Have a look at [SOLVED] Setting up router container to manage all networking. I think this is the closest recent discussion to give you an idea of what is required.

Thanks for your suggestion, but I already read that and didn't find a solution to my problem, since that thread is about configuring a container to route a home network, not the containers inside my cluster. For reference, this is my OVN network configuration:

config:
  bridge.mtu: "1442"
  ipv4.address: none
  ipv6.address: none
  network: none
description: ""
name: ovn-test
type: ovn
used_by:
- /1.0/instances/hopeful-marlin
- /1.0/instances/mutual-fish
- /1.0/instances/router
- /1.0/instances/wired-aardvark
managed: true
status: Created
locations:
- node1
- node2
- node3
project: default

The router interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:07:6e:82 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.254.0.1/8 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1266:6aff:fe07:6e82/64 scope link 
       valid_lft forever preferred_lft forever
21: eth1@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:0d:0d:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.255.69/22 brd 172.16.255.255 scope global dynamic eth1
       valid_lft 3118sec preferred_lft 3118sec
    inet6 fd42:d720:fa64:fba8:1266:6aff:fe0d:d6a/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::1266:6aff:fe0d:d6a/64 scope link 
       valid_lft forever preferred_lft forever

And the configuration of my DHCP server on the router container (I'm using dnsmasq):

port=0
interface=eth0
bind-interfaces
dhcp-range=10.0.0.2,10.255.255.254,255.0.0.0,12h

Hi

Not an expert, but I think OVN would intercept the DHCP packets from your instances.

Perhaps this might be what you are looking for, which is set per OVN network: ipv4.dhcp.routes (OVN network - Incus documentation).
I think with this you would then have an issue with your router container also getting that route table via the OVN DHCP, so that instance may need a manual IP stack configuration. If it uses netplan, you can merge and override parts of the configuration the primary interface gets from DHCP (and other sources) by adding a file with a larger number prefix in the /etc/netplan directory. Normally there is one starting with 50-, so you would override it with 70-{some-file-name}.yaml, then reboot or run netplan try to test and, if all is good, netplan apply.
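For example, a minimal override file could look like this (a sketch only; the filename is illustrative and the eth0 / 10.254.0.1/8 values are the ones from the question):

# /etc/netplan/70-router-static.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false        # stop taking configuration from the OVN DHCP
      addresses:
        - 10.254.0.1/8    # static address for the router/DHCP container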

Also, you've got DNS settings in the above article, which we use… You can set dns.domain, dns.nameservers and dns.search per OVN network, so that could cover resolution and, if you need it, DDNS.
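Setting them would be something along these lines (a sketch; ovn-test is the network name from the question, and the domain and nameserver values are placeholders):

incus network set ovn-test dns.domain=example.org
incus network set ovn-test dns.nameservers=10.1.0.1
incus network set ovn-test dns.search=example.org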

Otherwise, OVN does support a number of other DHCP options, but I'm not sure which are available via Incus's management of OVN. Maybe you can override the router/gateway address via OVN itself, or configure the networking stack in your instances manually or with Ansible on your OVN network(s). But wait and see if someone with more knowledge advises before going down this route.

Hi,
I am using systemd-networkd inside the container, it's just easier. As for setting the DHCP options on the OVN logical router, I already tried that, but it doesn't allow the same flexibility as dnsmasq. Doing further troubleshooting, I noticed that the containers don't see each other even at layer 2, even though they are connected to the same logical switch. Thank you anyway for your suggestions.

My working configuration sets the OVN network to “none” and allows dnsmasq to provide IP addresses to regular containers. The OVN network is configured with ipv4.address = 10.0.0.2/24.

I create a router node which configures its OVN interface with the static router IP address provided by dnsmasq DHCP (10.0.0.1 in my case). The router node has a second interface for the uplink. In my case, I also configure an OpenVPN interface. SNAT is also employed for my purposes.
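For reference, the SNAT part on a router instance like this typically boils down to something like the following (a sketch only, using iptables for illustration; the 10.0.0.0/24 subnet is the one from this setup, while eth1 as the uplink interface name is an assumption):

# let the container forward packets between its interfaces
sysctl -w net.ipv4.ip_forward=1
# source-NAT traffic from the OVN subnet as it leaves via the uplink
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth1 -j MASQUERADE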

The solution is not elegant, but it works reliably.

So the OVN network still has an address but doesn't provide DHCP?

The ovn network is providing DHCP. The router only forwards packets.

Then how did you configure dnsmasq to provide the network configuration to the containers? Because OVN doesn't use dnsmasq as its DHCP server.

Thanks for the correction. Here is the northbound config.

ovn-nbctl list dhcp_options

_uuid               : b03e4043-2074-45d7-96cb-55aaf6665d57
cidr                : "10.0.0.0/24"
external_ids        : {incus_switch=incus-net40-ls-int}
options             : {dns_server="{10.1.0.1}", domain_name="\"example.org\"", domain_search_list="\"example.org\"", lease_time="3600", mtu="1442", router="10.0.0.1", server_id="10.0.0.2", server_mac="00:11:31:11:11:11"}

Thank you. Just a question: did you configure it through Incus, or did you use the OVN CLI?

The configuration was done through Terraform against the Incus API. The only other piece is on the uplink side, where I configured a route to 10.0.0.0/24.

Can you show me the YAML of your OVN network so I can understand how you did it?

$ incus network show uplink
config:
  ipv4.address: 10.1.0.0/24
  ipv4.dhcp.gateway: 10.1.0.254
  ipv4.routes: 10.0.0.0/24
  ipv6.address: none
  ipv6.nat: "true"
name: uplink
type: bridge
used_by:
- /1.0/instances/router?project=remote_manage

$ incus network show ovn_net
config:
  bridge.mtu: "1442"
  ipv4.address: 10.0.0.2/24
  ipv4.nat: "false"
  ipv6.address: none
  ipv6.nat: "true"
  network: none
name: ovn_net
type: ovn
used_by:
- /1.0/instances/rmanage?project=remote_manage
- /1.0/instances/router?project=remote_manage

Finally found my notes. There seems to be a NAT setting in OVN which interferes with the setup. I removed it with a command similar to this, just with the X's replaced with the pertinent value.

ovn-nbctl lr-nat-del incus-XXXXX-lr dnat_and_snat 10.0.0.1
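If you need to find the logical router name and check which NAT entries exist before deleting anything, something like this should do it (keeping the XXXXX placeholder from above):

ovn-nbctl lr-list
ovn-nbctl lr-nat-list incus-XXXXX-lr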

In your configuration it's the logical router that acts as the DHCP server, right? Not the container that you configured as a router?

Correct, the OVN DHCP is configured with a command similar to:

ovn-nbctl dhcp-options-set-options b03e4043-2074-45d7-96cb-55aaf6665d57 \
dns_server="{10.1.0.1}" \
server_mac=00:11:31:11:11:11 \
router=10.0.0.1 \
server_id=10.0.0.2 \
mtu=1442 \
domain_name=\"example.org\" \
lease_time=3600 \
domain_search_list=\"example.org\"

Thanks for your help, but it doesn't cover my case. I want my container to work not only as a router but as a DHCP server as well. Also, my OVN network is isolated; I don't need my containers to reach the internet.

So basically you are looking at logical switch and logical switch port settings alone. Hopefully you won't need too much micromanagement on the switch ports.
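If it does come to that, the per-port settings can at least be inspected with ovn-nbctl before changing anything. A sketch only; the switch name is the incus_switch value from the dhcp_options output above, and the port name is hypothetical:

ovn-nbctl lsp-list incus-net40-ls-int
ovn-nbctl lsp-get-addresses <router-port-name>
ovn-nbctl lsp-get-port-security <router-port-name>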

Yeah, exactly. And I'd rather not touch the switch ports.