Limit project users to create only private networks with OVN

Hi!

I would like to ask whether it is possible to limit the network ranges for new OVN networks created by project users to private networks only:

  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
  • fc00::/7

The reason for this is that I would like to give project admin users the ability to create new OVN networks themselves, but at the moment they can set a custom CIDR, which basically allows them to grab e.g. a free /64 from my public IPv6 /48 block.
Maybe I also have a bit of a misconfiguration, but for now I simply routed the full /48 to VLAN33, which is attached to the Incus nodes on a second NIC (enp3s0) and used exclusively as the OVN uplink.
Configuring all the /64 chunks manually in the router is something I would like to avoid for now, because MikroTik routers cannot easily be automated with Ansible at the moment :slight_smile:

I’m not sure why this is a problem.

Yes, someone who’s allowed to create an OVN network can use any subnet they want, but the scope of that subnet is limited to their OVN network; it doesn’t suddenly become routable outside of that network.

If they want to route traffic in, then they need to turn off NAT on the OVN router, at which point the existing project restrictions on what can be routed in will kick in (restricted.networks.subnets).

What you’d typically do is:

  • Set ipv4.routes and ipv6.routes on your uplink network to cover any “public” address space that may be used by OVN networks using that uplink
  • Set restricted.networks.subnets in each project to a list of CIDRs that can be used in part or in full within the project
  • Then, in the project, you can use any of the addresses from restricted.networks.subnets: on their own as the address of a forward or load-balancer, routed directly to an instance using ipv4.routes.external or ipv6.routes.external, or even as a bigger chunk used as the main subnet of an OVN network (with NAT disabled on it).
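The three steps above can be sketched with the incus CLI; the network, project and instance names as well as the addresses here are placeholders (2001:db8::/32 is the documentation prefix), not values from any real setup:

```shell
# 1. Allow the public space to be routed via the uplink network:
incus network set UPLINK ipv6.routes=2001:db8:100::/56

# 2. Hand a chunk of it to the project (format is <uplink>:<CIDR>;
#    assumes the project already has restricted=true):
incus project set myproject restricted.networks.subnets=UPLINK:2001:db8:100::/56

# 3. Inside the project, route part of that chunk straight to an instance NIC:
incus config device set myinstance eth0 ipv6.routes.external=2001:db8:100::/64 --project myproject
```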

For this to work at scale, you’d want ovn.ingress_mode=routed and then use BGP to get the next-hops for each chunk advertised to your router. But the default behavior will work fine so long as you’re not dealing with large chunks of address space (as OVN needs to be configured with an entry for each individual address…).

Hi! Thanks for your answer. Unfortunately, this confused me even more and makes me think that I fundamentally misunderstand something here.
All the subnets I create as a user are automatically fully routed outside of OVN. In my router, “2001:XXXX:XXXX::/48” is routed to VLAN33, so I do not have to deal with any of that manual configuration.

This is the network config used to create the Incus cluster:

          network:
            LOCAL:
              type: macvlan
              local_config:
                parent: enp2s0
              description: Directly attach to host networking
            UPLINK:
              type: physical
              config:
                ipv4.gateway: "192.168.33.1/24"
                ipv4.ovn.ranges: "192.168.33.20-192.168.33.99"
                ipv6.gateway: "2001:XXXX:XXXX::0001/48"
                ipv6.routes: '::/0'
              local_config:
                parent: eno1
              description: Physical network for OVN routers
            default:
              type: ovn
              config:
                network: UPLINK
                ipv6.address: "2001:XXXX:XXXX:0001::0001/64"
                #dns.nameservers: 2001:4860:4860::8888,8.8.8.8,2001:4860:4860::8844,8.8.4.4
              default: true
              description: Initial OVN network

And inside every project I create the uplink network for that project with terraform and with a dedicated range from the /48 block:

resource "incus_network" "incus_network" {
  count = length(incus_project.incus_project)

  project     = "${element(incus_project.incus_project.*.name, count.index)}"
  name        = "UPLINK"
  description = "Public network"
  type        = "ovn"
  config = {
    # automatically configure UPLINK network range
    "ipv6.address" = "${var.ipv6_48_block}:${count.index + 1}00::1/64"
    "network"      = "UPLINK"
    "bridge.mtu"   = "1380"
  }
}

That is the only uplink network the project should get, and users should only be able to create additional networks from private IP ranges.

I tried to use restricted.networks.subnets here, but it does not work the way I intended. Project users can no longer create their own networks; instead they can only create networks whose names match those listed in this flag.
If they want to create e.g. two “192.168.0.0/16” networks, I need to add e.g. UPLINK1:192.168.0.0/16,UPLINK2:192.168.0.0/16 to the subnet restrictions, which also means they cannot pick the names themselves any more.

Also, adding the subnet restriction only works when there are routes for these private networks on the uplink, but I intend for there to be no such routes; it should be a LAN-only network for the project.

The scale is quite small: 3 NUC machines with 32GB RAM and 8 cores each. I am trying to build a lab environment at home (thus the MTU of 1380) so some friends get a playground for cloud stuff. The /48 block comes from tunnelbroker.net’s 6in4 tunnel, so I cheat myself static IPs on my consumer internet connection :smiley:.

Here’s an example of a working setup, maybe that will help:

stgraber@castiana:~ (incus:n-cloud/public)$ incus network show OVN-CLOUD --project default
config:
  bgp.peers.fw-wan01.address: 2602:fc62:ef:100::100
  bgp.peers.fw-wan01.asn: "64600"
  dns.nameservers: 45.45.148.195,2602:fc62:ef:8::1
  ipv4.gateway: 172.20.1.1/24
  ipv4.ovn.ranges: 172.20.1.10-172.20.1.254
  ipv4.routes: 45.45.148.200/29,45.45.148.208/29
  ipv4.routes.anycast: "true"
  ipv6.gateway: 2602:fc62:ef:301::1/64
  ipv6.routes: 2602:fc62:ee::/48
  ipv6.routes.anycast: "true"
  ovn.ingress_mode: routed
  volatile.last_state.created: "false"
description: ""
name: OVN-CLOUD
type: physical
managed: true
status: Created
project: default

stgraber@castiana:~ (incus:n-cloud/public)$ incus project show ringzer0ctf
config:
  features.images: "true"
  features.networks: "true"
  features.networks.zones: "true"
  features.profiles: "true"
  features.storage.buckets: "true"
  features.storage.volumes: "true"
  limits.cpu: "256"
  limits.disk: 2TiB
  limits.memory: 256GiB
  limits.networks: "10"
  limits.processes: "10000000"
  restricted: "true"
  restricted.containers.nesting: allow
  restricted.networks.subnets: OVN-CLOUD:45.45.148.200/32, OVN-CLOUD:2602:fc62:ee:8100::/56
  restricted.networks.uplinks: OVN-CLOUD
  restricted.snapshots: allow
  user.target.containers: cloud-containers
  user.target.virtual-machines: default
description: 'CLOUD: Ringzer0 CTF'
name: ringzer0ctf

config:
  bridge.mtu: "1500"
  dns.domain: incus
  ipv4.address: 10.66.241.1/24
  ipv4.nat: "true"
  ipv6.address: 2602:fc62:ee:8100::1/64
  ipv6.nat: "false"
  network: OVN-CLOUD
  volatile.network.ipv4.address: 172.20.1.10
  volatile.network.ipv6.address: 2602:fc62:ef:301:216:3eff:feb3:130d
description: ""
name: default
type: ovn

stgraber@castiana:~ (incus:n-cloud/public)$ incus network forward list default --project ringzer0ctf
+----------------+-------------------+------------------------+-------+----------+
| LISTEN ADDRESS |    DESCRIPTION    | DEFAULT TARGET ADDRESS | PORTS | LOCATION |
+----------------+-------------------+------------------------+-------+----------+
| 45.45.148.200  | Main IPv4 address | 10.66.241.2            | 201   |          |
+----------------+-------------------+------------------------+-------+----------+

So in this case, there is an uplink named OVN-CLOUD which uses 172.20.1.1/24 for the IPv4 subnet for the OVN routers and 2602:fc62:ef:301::1/64 for the IPv6 subnet.

That uplink is set up to support a bunch of public address space, two IPv4 /29s and an IPv6 /48, and it uses BGP to advertise its routes to the relevant router.

Then you have 3 projects using that particular uplink, the example here being ringzer0ctf, a pretty restricted project that’s still allowed to create its own networks. It has access to an IPv4 /32 and an IPv6 /56 from the address space routed to the uplink.

Then in that project, a default network is created; it uses a random IPv4 subnet with NAT, but a public /64 for IPv6. The project’s public IPv4 address is used by a network forward.
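A forward like the one listed above could be created along these lines, reusing the addresses from this example output (a sketch, not a transcript of the actual commands used here):

```shell
# Create the forward on the project's "default" OVN network, listening on the
# project's public IPv4 and defaulting traffic to an internal instance address:
incus network forward create default 45.45.148.200 target_address=10.66.241.2 --project ringzer0ctf

# Optionally expose only specific ports rather than everything:
incus network forward port add default 45.45.148.200 tcp 201 10.66.241.2 --project ringzer0ctf
```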

Thanks a lot for that! Unfortunately I have not been able to figure it out yet. I will try a bit more and post an update here once I manage to get this working :slight_smile: