Correct. You asked for simple, and you did say you currently work this way.
Addresses are assigned statically inside the guest container, or, if you want to use DHCP, on your upstream DHCP server for that subnet.
Probably because without those files it defaults to DHCP on eth0.
You can avoid creating those configs manually by using cloud-init, which is very amenable to scripting.
# incus launch -c user.network-config="version: 2
ethernets:
  eth0:
    dhcp4: false
    accept-ra: false
    addresses:
      - $ADDRESS4
      - $ADDRESS6
    gateway4: $GATEWAY4
    gateway6: $GATEWAY6
    nameservers:
      search: [$DOMAIN]
      addresses: [$NAMESERVERS]
" -c user.user-data="#cloud-config
disable_root: false
users: []
ssh_authorized_keys:
  - $SSHKEY
" -p br0 images:ubuntu/24.04/cloud foo
But if you want incus to manage the networking, then yes, the container or VM needs to sit on an incus bridge. In that case, you need to arrange for routing of inbound traffic.
Regular static routing works fine for that (e.g. give the incus bridge a /28 of real public IPs, and then add a static route on the upstream router) (*). However, your container is then tied to that particular host, unless you start messing with overlay networks and the like, which I’ve not attempted.
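Very roughly, with made-up addresses (198.51.100.16/28 as the routed block, 192.0.2.10 as the incus host), that might look like:

# incus network create br-public ipv4.address=198.51.100.17/28 ipv4.nat=false ipv6.address=none
# incus launch images:ubuntu/24.04/cloud foo --network br-public

plus a static route for 198.51.100.16/28 via 192.0.2.10 on the upstream router (on a Linux router, that would be ip route add 198.51.100.16/28 via 192.0.2.10).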
You could assign a public loopback address to each container, and then announce that into your IGP (or use static routes). That’s a good way to use individual IP addresses efficiently.
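As a rough sketch, with made-up addresses (203.0.113.5/32 as the container's public loopback, 10.0.0.5 as its address on a private incus bridge):

# incus exec foo -- ip addr add 203.0.113.5/32 dev lo
# ip route add 203.0.113.5/32 via 10.0.0.5

and then redistribute that /32 from the incus host into your IGP, or configure the equivalent static route upstream. In practice you'd put the loopback address into the container's own network config (e.g. via cloud-init as above) rather than adding it by hand.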
As has already been suggested, you can use one of the incus proxy modes, so external clients connect to the container host's IP instead of the container itself. Again, you're pretty much tied to the incus host.
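For example, to expose HTTPS on the host and forward it into the container (device name and ports are arbitrary):

# incus config device add foo https proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443

For a container, the connect address is resolved inside the container's network namespace, so the service only needs to listen on localhost there.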
Or, depending on the use case, you can run your own upstream HTTP or SNI reverse proxy.
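For the SNI case, a minimal HAProxy passthrough sketch (hostname and backend address are placeholders; TLS still terminates inside the container):

defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend tls_in
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend be_foo if { req_ssl_sni -i foo.example.com }

backend be_foo
    server foo 10.0.0.5:443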
(*) You don’t want to take the /28 out of a /24 that is already live on an existing subnet, or you’re going to have to start doing proxy ARP nonsense.