For an Incus-managed bridge (e.g. `incusbr0`), Incus itself acts as the default gateway for a private network and performs NAT. This is what a default install gives you. Normally the network device is defined in the default profile, and it looks like this:
```
$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
...
project: default
```
```
$ incus network show incusbr0
config:
  ipv4.address: 10.136.163.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:93af:f187:61fa::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default
```
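If you want to inspect or adjust the managed bridge's settings, a sketch of the relevant commands (the subnet shown is an arbitrary example, not one from your setup):

```shell
# Change the managed bridge's IPv4 subnet (example value)
incus network set incusbr0 ipv4.address 10.10.10.1/24

# Confirm NAT is still enabled for that subnet
incus network get incusbr0 ipv4.nat
```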
There’s no need to supply network details when creating a container: just `incus launch ...` and it picks up the network from the default profile, with an address assigned by DHCP. If you want DHCP to hand out a specific address, set it on the NIC device:
```yaml
devices:
  eth0:
    ipv4.address: 192.168.68.92
    network: incusbr0
    type: nic
```
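For an existing instance you don't need to edit the profile; you can override the NIC on the instance itself. A sketch, assuming an instance named `c1` (the address must fall inside the managed bridge's subnet):

```shell
# Copy the profile's eth0 device into c1's local config and pin its DHCP lease
incus config device override c1 eth0 ipv4.address=192.168.68.92

# Restart so the instance re-requests its lease
incus restart c1
```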
If you don’t want this NAT, but instead want the instance connected directly to whatever network `eno1` is plugged into (i.e. `eno1` connects to the 192.168.68.x network), then normally you would use an unmanaged bridge.
On the host, you’d create a bridge, say `br0`. Optionally you can give the host itself an IP address on that network.
```yaml
# in netplan
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      accept-ra: false
      link-local: []
  bridges:
    br0:
      interfaces: [eno1]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
      accept-ra: false
      # Include all the following if the host itself has an IP on the eno1 network
      # (i.e. configure IP on the bridge, not on eno1)
      addresses: [192.168.68.2]
      routes:
        - to: default
          via: 192.168.68.1
      nameservers:
        addresses: [1.1.1.1]
        search: [example.com]
```
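To apply and verify the bridge (a sketch; `netplan try` is safer than `netplan apply` when reconfiguring the interface you're connected through, since it rolls back if you lose connectivity and don't confirm):

```shell
# Apply the config with automatic rollback on lost connectivity
sudo netplan try

# Verify br0 exists and eno1 is enslaved to it
ip -br link show br0
bridge link show
```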
`incus network list` would show this as an unmanaged bridge. Then you’d create a new profile, e.g.
```
$ incus profile show br0
config: {}
description: Bridge to br0
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: zfs
    type: disk
name: br0
```
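The profile above can be built from the CLI rather than edited as YAML; a sketch (the `zfs` pool name is from the example above and may differ on your host):

```shell
incus profile create br0

# NIC bridged onto the unmanaged host bridge
incus profile device add br0 eth0 nic nictype=bridged parent=br0 name=eth0

# Root disk on the "zfs" storage pool
incus profile device add br0 root disk path=/ pool=zfs
```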
Then `incus launch -p br0 ...` and everything is fine. You can run multiple containers bridged onto this same network. In this case, it’s your upstream DHCP server which assigns the IP (Incus has no knowledge of this). If you want a static IP address, set it in the container’s cloud-init network configuration.
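One way to pass that cloud-init network configuration at launch time, as a sketch: the instance name `c1`, the image alias, and the addresses are all placeholders, and the image must include cloud-init (the `/cloud` variants do).

```shell
# Static IP via cloud-init network-config (v2 format), applied at launch
incus launch images:debian/12/cloud c1 -p br0 \
  --config cloud-init.network-config="$(cat <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.68.50/24]
      routes:
        - to: default
          via: 192.168.68.1
      nameservers:
        addresses: [192.168.68.1]
EOF
)"
```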
However, you show that you are using `nictype: routed`, which implies you are trying to do something much fancier, and you’ll need to explain your topology and what you’re trying to achieve.