Looking for a complete example of a setup for localhost/LAN bridging

I am aware it is easy to set up a NAT bridging network interface (it is the default with incusbr0) and a macvlan network interface.

incusbr0 allows localhost access. macvlan allows LAN access but not localhost access.

I would like to know what specific additional steps are required to add an incus network (and hence interface), with any extra configuration needed, to allow both localhost and LAN access from the same network interface.

I provide examples of the specific steps required for the incusbr0 and macvlan network interfaces at A friendly introduction to virtualising Virtualmin on Ubuntu with Incus - #2 by shoulders - Help! (Home for newbies) - Virtualmin Community. I have reproduced the original post in its entirety below.

To be clear, I am looking for the actual steps that I can test on the setup below. I do not want to be referred to a link; every link I have examined with regard to solving this type of issue has been a dead end for me. I also do not want to reuse or set up a libvirt interface.

The LAN interface name is enp3s0. Its IPv4 address is 192.168.20.17/24 and the gateway/DHCP server is at 192.168.20.1.

Thanks
John Heenan

REPRODUCED POST

Below are notes from setting up four incus instances of Ubuntu 24.04 on a physical server on a LAN, already running Ubuntu 24.04 desktop.

Two are container instances (not Docker) and two are virtual instances.

The container instances comprise one with a NAT setup and one with a LAN-accessible macvlan setup.

The virtual instances likewise comprise one with a NAT setup and one with a LAN-accessible macvlan setup.

A NAT setup instance can only be accessed in a regular manner from the localhost.

A LAN-accessible macvlan setup can be accessed from the LAN but not from the localhost. This is not a bug; it is designed into the Linux kernel.

So can an incus instance be accessed both from the localhost and from the LAN? Not without a level of unfriendly-looking expertise that goes beyond the intended scope of providing a friendly introduction.
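For reference, the usual workaround is to give the host its own macvlan "shim" interface on the same parent, so that host-to-instance traffic does not have to leave and re-enter the physical NIC. A minimal sketch only (the interface name macvlan-shim and the spare address 192.168.20.50 are assumptions, not part of this setup):

ip link add macvlan-shim link enp3s0 type macvlan mode bridge  # host-side macvlan in bridge mode
ip addr add 192.168.20.50/24 dev macvlan-shim                  # spare LAN address (assumption)
ip link set macvlan-shim up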

To keep with the friendly theme, setting up the incus web UI is included, but nothing more, because it is not necessary. When accessing the web UI, follow its instructions to either create a new certificate or use an existing one.

Webmin or Virtualmin, of themselves, should be OK in a pure Linux container; at least Virtualmin installs. If there are problems, it is likely due to utilities they use, which can probably be replaced with utilities that do not call from user space into the kernel (something that cannot be done from a container).

Most rented VPSs won't allow nested virtualization, so a container is likely necessary if you want to try this on a VPS.
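A quick way to check whether a given host or VPS exposes hardware virtualization at all (a sketch using standard tools; if both checks come up empty, only containers will work):

grep -Ec '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU virtualization flags are exposed
ls -l /dev/kvm                       # incus virtual machines need /dev/kvm to be present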

John Heenan

References:

Installing incus:

sudo su -

mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc

sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc

EOF'


apt update
apt -y install incus
apt install incus-ui-canonical
apt -y install qemu-system # for qemu virtual instances managed by incus

echo "INCUS_UI=/opt/incus/ui" >> /etc/default/incus  # or /etc/default/environment

Configuring incus:


# make a default profile with defaults
# for home LAN, only change from defaults was to make server available over the network (yes)
incus admin init
#Would you like to use clustering? (yes/no) [default=no]:
#Do you want to configure a new storage pool? (yes/no) [default=yes]:
#Name of the new storage pool [default=default]:
#Name of the storage backend to use (dir, lvm, lvmcluster) [default=dir]:
#Would you like to create a new local network bridge? (yes/no) [default=yes]:
#What should the new bridge be called? [default=incusbr0]:
#What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
#What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
#Would you like the server to be available over the network? (yes/no) [default=no]: yes
#Address to bind to (not including port) [default=all]:
#Port to bind to [default=8443]:
#Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
#Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

# systemctl restart incus # if below does not work
# browse to https://hostname:8443
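An optional sanity check that the API is actually listening after init (a sketch; both commands are standard):

incus config get core.https_address   # should print the bind address/port chosen above
ss -tlnp | grep 8443                  # confirm something is listening on port 8443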

Installing two different types of Ubuntu instances, using NAT, from the same image:


incus launch images:ubuntu/22.04 c1        # container
incus launch images:ubuntu/22.04 v1 --vm   # virtual

incus list -cns4t
#+------+---------+------------------------+-----------------+
#| NAME |  STATE  |          IPV4          |      TYPE       |
#+------+---------+------------------------+-----------------+
#| c1   | RUNNING | 10.79.158.18 (eth0)    | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| v1   | RUNNING | 10.79.158.219 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+

Creating a macvlan interface and adding two more instances accessible from the LAN:


# add an incus macvlan network to allow access from LAN
ip address show
# choose an interface on LAN, such as enp3s0
incus network create macvlan --type=macvlan parent=enp3s0

incus launch images:ubuntu/22.04 mc1 -n macvlan      # container
incus launch images:ubuntu/22.04 mv1 -n macvlan --vm # virtual

incus list -cns4t
#+------+---------+------------------------+-----------------+
#| NAME |  STATE  |          IPV4          |      TYPE       |
#+------+---------+------------------------+-----------------+
#| c1   | RUNNING | 10.79.158.18 (eth0)    | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mc1  | RUNNING | 192.168.20.36 (eth0)   | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mv1  | RUNNING | 192.168.20.37 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
#| v1   | RUNNING | 10.79.158.219 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+

Ping results:


sh -c 'cat <<EOF >> /etc/hosts
10.79.158.18 c1
192.168.20.36 mc1
192.168.20.37 mv1
10.79.158.219 v1
EOF'


#pinging results from localhost, as expected (not a bug)
ping c1  # ok
ping mc1 # fails
ping mv1 # fails
ping v1  # ok

#pinging from another pc on LAN, as expected (not a bug)
ping c1  # fails
ping mc1 # ok
ping mv1 # ok 
ping v1  # fails


Accessing instances and force deleting instances from the localhost:


# access from localhost, note the gap between -- and bash
incus exec c1  -- bash
incus exec mc1 -- bash
incus exec mv1 -- bash
incus exec v1  -- bash

# forced deletion of instances from localhost
#incus delete c1  --force
#incus delete mc1 --force
#incus delete mv1 --force
#incus delete v1  --force

I have found a working solution for this, using What is a Bridge? - ScottiByte's Discussion Forum as a reference.

Edit netplan and apply the edits:

sh -c 'cat <<EOF >> /etc/netplan/50-cloud-init.yaml
    bridges:
        bridge0:
            interfaces: [enp3s0]
            addresses: [192.168.20.17/24]
            routes:
               - to: default
                 via: 192.168.20.1
            nameservers:
              addresses:
                - 1.1.1.1
                - 1.0.0.1
            parameters:
              stp: true
              forward-delay: 4
            dhcp4: no
EOF'

netplan apply
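A quick sanity check that the bridge came up as intended (a sketch using standard iproute2 tools):

bridge link show          # enp3s0 should be listed as a member of bridge0
ip addr show bridge0      # bridge0 should hold 192.168.20.17/24
ip route show default     # the default route should go via 192.168.20.1 on bridge0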

Add a new incus profile and edit it:

incus profile create bridgeprofile
incus profile edit bridgeprofile

Replace contents of bridgeprofile with:

config: {}
description: Bridge to Main LAN
devices:
  eth0:
    nictype: bridged
    parent: bridge0
    type: nic
name: bridgeprofile
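For reference, the same device can be added from the command line instead of editing the YAML by hand (a sketch; it produces the same eth0 device as the profile above):

incus profile device add bridgeprofile eth0 nic nictype=bridged parent=bridge0
incus profile show bridgeprofile   # confirm the device matches the YAML above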

Launch two new incus instances using bridge0 and list all six instances:

incus launch images:ubuntu/22.04 bc1      --profile default --profile bridgeprofile  # container
incus launch images:ubuntu/22.04 bm1 --vm --profile default --profile bridgeprofile  # virtual

incus list -cns4t
#+------+---------+------------------------+-----------------+
#| NAME |  STATE  |          IPV4          |      TYPE       |
#+------+---------+------------------------+-----------------+
#| bc1  | RUNNING | 192.168.20.40 (eth0)   | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| bm1  | RUNNING | 192.168.20.41 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
#| c1   | RUNNING | 10.79.158.18 (eth0)    | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mc1  | STOPPED |                        | CONTAINER       |
#+------+---------+------------------------+-----------------+
#| mv1  | STOPPED |                        | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+
#| v1   | RUNNING | 10.79.158.219 (enp5s0) | VIRTUAL-MACHINE |
#+------+---------+------------------------+-----------------+

Ping the new instances:

sh -c 'cat <<EOF >> /etc/hosts
192.168.20.40 bc1
192.168.20.41 bm1
EOF'

ping bc1 # ok from everywhere
ping bm1 # ok from everywhere

Fix the dead macvlan instances:

incus network edit macvlan
# edit 'parent: enp3s0' to 'parent: bridge0'
incus start mc1
incus start mv1
# back up again, but their IPv4 addresses are different
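The same change can also be made non-interactively (a sketch; it writes the same parent key that the edit above changes):

incus network set macvlan parent bridge0   # point the macvlan network at the new bridge
incus network show macvlan                 # confirm parent: bridge0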


You found the main issue with this topic. It is really hard to give generic advice. Your solution works for people who use netplan.

That is only a subset of the people looking for a tutorial on the topic. I have not found a good way of covering it.

Luckily Incus OS will help with this.

Hi Team,

Why not simply add more than one interface to the vm/containers?

Is there a reason you cannot add both a bridge (localhost) and a macvlan (physical device)?
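For instance, since c1 already gets eth0 on incusbr0 from the default profile, adding a second NIC might look like this (a sketch only; the instance name and networks are taken from the earlier posts):

incus config device add c1 eth1 nic network=macvlan name=eth1   # second NIC on the LAN macvlan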

Chuck

What is this Incus OS of which you speak?

A tutorial can be split into two parts:

  1. Setting up a normal LAN bridge independent of Incus.
  2. Using the above bridge with Incus.

Netplan is a wrapper with unresolved edge cases that make people want to, and do, rip it out and go back to /etc/network, myself included.

It is not difficult to set up a normal bridge as per 1) with ip commands, as long as the right (but poorly understood) sequence of commands is used.
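For example, a minimal sketch with iproute2 using the addresses from the setup above (not persistent across reboots, which is where netplan or /etc/network comes in; doing this over SSH via enp3s0 will drop the connection while the address moves to the bridge):

ip link add name bridge0 type bridge        # create the bridge first
ip link set bridge0 up
ip link set enp3s0 master bridge0           # enslave the physical NIC
ip addr flush dev enp3s0                    # the LAN address must live on the bridge, not the NIC
ip addr add 192.168.20.17/24 dev bridge0
ip route add default via 192.168.20.1 dev bridge0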

Yes. For example, is there any reason why incusbr0 and macvlan cannot go through another bridge set up through Incus, avoiding the need to set up a general bridge?

It is in active development. When it is ready, you will be able to boot from a thumb drive to install an immutable OS that runs Incus.

The only way to interact with the system will be through the network. There will be no shell access to the host.

Will there be a way to manage the Incus OS servers, like there is for Talos with Omni? At the moment I run Ubuntu as the underlying platform for my Incus hosts and then I need to run Landscape to manage everything in one place (understood, Ansible is another solution but that has other complexities). If you have an immutable OS on several servers, it’s really nice to be able to manage it all in one place.

Incus OS will be completely API driven, no local shell or SSH on those systems.
Application updates (Incus) can be applied automatically if desired, while OS updates will be downloaded and staged for the next reboot.

We’re also working on a new project called Operations Center (under FuturFusion), which should become public in the next month and will be designed for fleet provisioning and maintenance of Incus OS servers.