Attaching instances from the managed bridge network to the host bridge network

First message here!

I followed the Incus “Getting Started” tutorial.
It shows how to use a managed bridge network called incusbr0.
This way I started two containers (cleverly named pihole and jupyter), each getting an IP address from the embedded DHCP server (in 10.103.149.0/24).
After many attempts, I didn’t manage to make those containers visible from the LAN the host is on (192.168.1.0/24).
So I decided to create a real bridge named br0 between my host and the 192.168.1.0 network, and a corresponding profile for the containers.
I can now launch a test container with this profile, and sure enough, it’s visible from my laptop on my LAN.
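
For reference, br0 itself is defined on the host, outside Incus. How exactly depends on the distribution (netplan, systemd-networkd, NetworkManager, ...); a minimal netplan-style sketch, with ens1f2 standing in for the physical NIC enslaved to the bridge, would look something like this:

# Illustrative sketch only; the NIC name and the exact tool depend on the host setup.
network:
  version: 2
  ethernets:
    ens1f2:
      dhcp4: false          # the physical NIC carries no address of its own
  bridges:
    br0:
      interfaces: [ens1f2]  # enslave the NIC to the bridge
      dhcp4: true           # br0 gets the host's 192.168.1.x address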

[me@myhost ~]$ incus list
+---------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
|  NAME   |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| jupyter | RUNNING | 10.103.149.100 (eth0) | fd42:5627:9b3b:5023:216:3eff:fec1:2390 (eth0) | CONTAINER | 4         |
+---------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| pihole  | RUNNING | 10.103.149.149 (eth0) | fd42:5627:9b3b:5023:216:3eff:feb7:ffd8 (eth0) | CONTAINER | 0         |
+---------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| testbr  | RUNNING | 192.168.1.78 (eth0)   | 2a01:e0a:3e6:4940:216:3eff:fe13:c6c7 (eth0)   | CONTAINER | 0         |
+---------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
[me@myhost ~]$ incus network list
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |      IPV4       |           IPV6            | DESCRIPTION | USED BY |  STATE  |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| br0      | bridge   | false   |                 |                           |             | 2       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| ens1f2   | physical | false   |                 |                           |             | 0       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| incusbr0 | bridge   | true    | 10.103.149.1/24 | fd42:5627:9b3b:5023::1/64 |             | 3       | CREATED |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| lo       | loopback | false   |                 |                           |             | 0       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+
| wlp3s0   | physical | false   |                 |                           |             | 0       |         |
+----------+----------+---------+-----------------+---------------------------+-------------+---------+---------+

My question is: can I (and if so, how) make those already-created containers use the br0 bridge instead of the incusbr0 from the tutorial?

Welcome!

When you launch an instance, its network configuration comes from the default Incus profile:

incus profile show default

In there you should have something like

  eth0:
    name: eth0
    network: incusbr0
    type: nic
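
If you want to see what an existing instance actually ends up with, devices inherited from profiles included, you can look at its expanded configuration. A quick check, using pihole as the example:

# Shows the merged configuration, including the eth0 device inherited from the default profile
incus config show pihole --expanded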

What you need to do is attach another network in its place.

Let’s have a look. I launch a container with the default profile, then attach the br0 network to the instance as a device named eth0, replacing the old eth0 that came from the profile.

$ incus launch images:debian/12/cloud mycontainer
Launching mycontainer
$ incus list mycontainer -cns4tS
+-------------+---------+---------------------+-----------+-----------+
|    NAME     |  STATE  |        IPV4         |   TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------+-----------+
| mycontainer | RUNNING | 10.10.10.218 (eth0) | CONTAINER | 0         |
+-------------+---------+---------------------+-----------+-----------+
$ incus network list
... it shows I have a br0 network and I am attaching that network ...
$ incus network
Usage:
  incus network [flags]
  incus network [command]

Available Commands:
  acl              Manage network ACLs
  attach           Attach network interfaces to instances
  attach-profile   Attach network interfaces to profiles
  create           Create new networks
  delete           Delete networks
  detach           Detach network interfaces from instances
  detach-profile   Detach network interfaces from profiles
  edit             Edit network configurations as YAML
  forward          Manage network forwards
  get              Get values for network configuration keys
  info             Get runtime information on networks
  integration      Manage network integrations
  list             List available networks
  list-allocations List network allocations in use
  list-leases      List DHCP leases
  load-balancer    Manage network load balancers
  peer             Manage network peerings
  rename           Rename networks
  set              Set network configuration keys
  show             Show network configurations
  unset            Unset network configuration keys
  zone             Manage network zones

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number

Use "incus network [command] --help" for more information about a command.
$ incus network attach 
Description:
  Attach new network interfaces to instances

Usage:
  incus network attach [<remote>:]<network> <instance> [<device name>] [<interface name>] [flags]

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number
$ incus network attach br0 mycontainer eth0 eth0
$ incus list mycontainer -cns4tS
+-------------+---------+----------------------+-----------+-----------+
|    NAME     |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+-----------+-----------+
| mycontainer | RUNNING | 192.168.1.180 (eth0) | CONTAINER | 0         |
+-------------+---------+----------------------+-----------+-----------+
$ 
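
In your case, the same attach command should work for the two existing containers. A sketch, assuming you want both pihole and jupyter on br0:

# Attach br0 as eth0 on the existing containers, overriding the eth0
# they inherit from the default profile (incusbr0).
incus network attach br0 pihole eth0 eth0
incus network attach br0 jupyter eth0 eth0
# A restart may be needed before they pick up a 192.168.1.x lease.
incus restart pihole
incus restart jupyter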

Thank you for that welcoming answer.
You found the right angle for someone like me who is not so confident with network concepts.
Not only does your solution work, but I also learned about networking in Incus.
I edited the title of this post, since “moving instances” was not the right expression.


If you plan to launch many containers in the future that should use br0 (or another similar network), you can launch them as follows and get them straight onto br0. The --network br0 parameter is where you specify the network.

$ incus launch images:ubuntu/24.04/cloud myinstance --network br0
Launching myinstance
$ incus list myinstance -cns4tS
+-----------------+---------+----------------------+-----------+-----------+
|      NAME       |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+-----------------+---------+----------------------+-----------+-----------+
| myinstance      | RUNNING | 192.168.1.222 (eth0) | CONTAINER | 0         |
+-----------------+---------+----------------------+-----------+-----------+
$

Or you can create a new profile, which you can also call “br0”, e.g. using incus profile copy default br0; incus profile edit br0, and adjust it to:

devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk

Then you can launch a new container using incus launch images:foo bar -p br0, or apply this profile to existing containers.
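
For example, to switch the two original containers over to the new profile (a sketch; note that incus profile assign replaces the instance’s whole profile list, and a device attached directly to the instance, as done earlier, still takes precedence over the profile):

# Replace the containers' profile list with the new "br0" profile.
incus profile assign pihole br0
incus profile assign jupyter br0
# Restart so they request an address on the 192.168.1.0/24 LAN.
incus restart pihole
incus restart jupyter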