Accessing incus containers from local network

Hi,
I’ve created an Incus container with a managed bridge.
The container can connect to the internet and can also connect to machines on the local network.

But I cannot connect / ping the containers from the local network.
Do I need to do something special to be able to connect to the containers?

I would prefer not to create forwards, since the containers should just act like normal machines on the network.

OS: Debian bookworm

$ incus --version
6.0.3
$ incus network show incusbr0
config:
  ipv4.address: 10.158.133.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:5b8c:1c69:ec5::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/instances/gh
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
project: default

Something went wrong with my other posts (not sure why). So sorry if there are duplicates elsewhere. I needed to create a new account to be able to post again.

Welcome!

The default networking in Incus will create a private bridge (default name is incusbr0). That’s what you got currently.

You can set up your system so that your Incus containers get an IP address from the LAN, just like the host computer does. Here you would be using one of the following networking options: bridged networking, macvlan, ipvlan, routed (and a few more). In addition, if you have a separate and unused network card, you can assign it to a system container.

For example, with bridged networking, if the host has the IP address 192.168.100.x, then your containers would get IP addresses in that range as well. Your containers will be first-class citizens on your LAN.

Among the options

  1. Bridged networking. You need to set up a bridge on the host. Most tutorials explain how to do that, and they often name the bridge br0. The host-side setup can be a bit tricky, but once it is done properly it is very convenient.
  2. macvlan. You do not have to change anything on the host, but with this setup the containers will not be able to communicate with the host itself. They will, of course, still have connectivity with the rest of the systems on the LAN.
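For option 1 on Debian bookworm with ifupdown, a minimal host bridge configuration might look something like the following. This is only a sketch: enp5s0 is an assumed NIC name (adjust to yours), and the bridge-utils package must be installed for the bridge_* options to work.

```
# /etc/network/interfaces (fragment)

# The physical NIC carries no address of its own
auto enp5s0
iface enp5s0 inet manual

# br0 enslaves the NIC and gets the host's IP from the LAN DHCP server
auto br0
iface br0 inet dhcp
    bridge_ports enp5s0
    bridge_stp off
    bridge_fd 0
```

After restarting networking (or rebooting), the host’s LAN address moves to br0, and containers attached to br0 can get their own LAN addresses.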

Thank you :slight_smile:

And thank you very much for your help. That worked.

I’ve added the bridge manually and ran incus admin init again; I didn’t let it create a bridge but instead pointed it to the bridge I’d created, and it worked.

Do you know what the consequences are of the bridge not being ‘managed’ anymore?

This managed private bridge (incusbr0) is a decent default for the majority of your containers and VMs. If your intent is to create mostly containers that get an IP address from the LAN, then your new setup is a good choice.

A typical setup is to keep that default managed private bridge (incusbr0) and when you want to launch a container that gets an IP address from the LAN (br0), then you can specify that information as a parameter. In your case, you can do this the other way around, and it’s OK.
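For instance, a way to attach a single container to the LAN bridge while keeping incusbr0 as the default could look like this. This is a sketch under the assumptions of this thread (the host bridge is named br0, and the container name is illustrative):

```shell
# Create the container without starting it
incus init images:debian/12 mylancontainer1

# Override the eth0 device so it attaches to the host bridge br0
# (nictype=bridged with a parent bridge; names here are assumptions)
incus config device add mylancontainer1 eth0 nic nictype=bridged parent=br0 name=eth0

# Start it; it should then request an address from the LAN's DHCP server
incus start mylancontainer1
```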

Incus supports profiles. You can view them with incus profile list and incus profile show myprofile1 (etc). What is in the default profile is used when you launch containers without extra parameters.

You can create additional profiles (incus profile create) that will cover the other networking setup. Then, when you launch a container, you can specify explicitly the other profile.
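For example, a profile for bridged networking could be created along these lines (a sketch; br0 is the host bridge from earlier in the thread):

```shell
# Create an empty profile
incus profile create bridged

# Add a NIC device named eth0 that attaches to the host bridge br0;
# "name=eth0" is the interface name seen inside the container
incus profile device add bridged eth0 nic nictype=bridged parent=br0 name=eth0
```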
In my case, I have a special profile called bridged, as follows.

$ incus profile show bridged
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: bridged
used_by: []
project: default
$

In this bridged profile I do not specify the other important information that is found in the default profile. However, I make sure that the device is named eth0, which is the same device name the default profile uses for the managed network incusbr0.

Therefore, I would do the following. In this example I specify both the default profile and the new bridged profile. Incus will use everything from the default profile, then add (stack) what is in the bridged profile. In effect, the combined configuration will replace the eth0 / incusbr0 with the eth0 / br0, which is what I want.

$ incus launch images:debian/12 mylancontainer1 --profile default --profile bridged
...
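To confirm that the stacked profiles produced the intended result, the effective configuration and the assigned address can be inspected like this:

```shell
# Show the effective configuration with all profiles expanded;
# eth0 should point at br0, not incusbr0
incus config show mylancontainer1 --expanded

# The container should list a LAN address rather than a 10.158.133.x one
incus list mylancontainer1
```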