Container/interface count limit?

Hi, I’m testing a new Incus setup, and after spawning about 520 containers I’m getting this error: Error: Failed to start device "eth1": exchange full. (or eth0, depending on what is configured in the container).

If I delete or stop one container, I can spawn another. It seems I’m hitting some limit, but I can’t tell whether it’s a kernel limit or an Incus internal limit. dmesg and kernel logs show nothing unusual, and the machine itself behaves normally.
I know that 520 containers on a single machine is excessive, but I’m just testing the maximum achievable in my environment. Can anyone point out what’s going on, which limits I should check or increase, or whether I’ve simply hit an Incus-internal limit?
edit: the total number of active veth devices is 2564 (different containers have different numbers of network interfaces).

I’m on Ubuntu Noble, upgraded everything to the latest versions plus the Zabbly packages:
Zabbly kernel: 202511250207~amd64~ubuntu24.04
Zabbly ZFS: 2.3.4-amd64-202511141759-ubuntu24.04
Incus: 1:6.18-ubuntu24.04-202511200741

edit2: Okay, I dug deeper, and it’s obviously the bridge port limit (1024 ports per Linux bridge). I completely forgot about this. I’ll leave it here for future reference.
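A quick way to confirm this is to count the interfaces attached to the bridge via sysfs (the bridge name below is an assumption; substitute whatever your network uses, e.g. incusbr0):

```shell
# Each port attached to a Linux bridge shows up as an entry under brif/.
# Replace incusbr0 with your actual bridge name.
ls /sys/class/net/incusbr0/brif | wc -l
```

The in-kernel bridge driver hard-caps this at 1024 ports (BR_MAX_PORTS in the kernel's bridge code), which matches the failure once enough veth ends are attached to a single bridge.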

Yep, it’s indeed the 1024 ports limit on Linux bridges.

Creating more networks will work to get you past that one.
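For example, you can create a second managed bridge and move some containers' NICs over to it (the network and container names here are illustrative):

```shell
# Create an additional managed bridge network
incus network create incusbr1

# Re-point one container's NIC at the new bridge
# (override promotes the profile-inherited device so it can be changed per-container)
incus config device override c0501 eth0 network=incusbr1
incus restart c0501
```

Spreading NICs across multiple bridges keeps each bridge under the 1024-port cap.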
I believe Open vSwitch doesn’t have that particular limitation, so switching the bridge driver on the network may also be an option to get past it.
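In Incus the bridge driver is a per-network config key, so a sketch of that option (assuming the `bridge.driver` key on managed bridge networks, with openvswitch installed on the host) would be:

```shell
# Create a new managed network backed by Open vSwitch instead of the
# native Linux bridge
incus network create ovsbr0 bridge.driver=openvswitch

# Or switch an existing network's driver (attached instances will
# likely need a restart to reconnect)
incus network set incusbr1 bridge.driver=openvswitch
```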