Should a bridge interface have an IP?

I realize this is a very general networking question but I’m hoping someone can help me wrap my head around it.

I’ve been migrating my homelab from Proxmox to Incus. I wanted to create a bridge similar to what Proxmox has so based on these instructions I created a bridge interface via systemd-networkd.
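
For context, here is roughly the shape of what I created (a minimal sketch; the uplink name enp1s0 is just an example):

    # /etc/systemd/network/br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/enp1s0.network -- enslave the physical NIC
    [Match]
    Name=enp1s0

    [Network]
    Bridge=br0

    # /etc/systemd/network/br0.network -- bring up the bridge itself
    [Match]
    Name=br0

    [Network]
    DHCP=yes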

One difference with this setup compared to Proxmox is that a “standard” Proxmox bridge doesn’t have an IP assigned on the host.

A few questions:

  • What does it mean for this bridge to have an IP? I assume this allows for communication from container → host, but I already have another interface that my Incus host is accessible on (the same interface/IP set for core.https_address).
  • Let’s say I don’t give this bridge an IP. Would this then be equivalent to a macvlan network?

Welcome!

I am not familiar with Proxmox.

Having said that, in Incus the default networking (should you accept it when running incus admin init) is to create a private bridge that is managed by Incus (typically incusbr0). That bridge is routable and your instances will have networking. However, instances on that bridge will not be reachable by default from other hosts on the network. Incus spawns a DHCP server that provides network configuration for each managed network. Your typical NAT networking.
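
For illustration, creating such a managed NAT bridge by hand might look like this (the name incusbr1 and the subnet are made-up examples):

    incus network create incusbr1 ipv4.address=10.10.10.1/24 ipv4.nat=true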

Use the following to get a list of networks, whether they are managed (by Incus) or not.

incus network list

You may delete any managed networks, as long as they are not used in an Incus profile. If they are used in a profile, they first need to be removed from the profile, and can then be deleted with, for example, incus network delete incusbr5.
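
A sketch of that sequence, assuming the network is attached to the default profile as a device named eth0:

    # detach the network from the profile first
    incus profile device remove default eth0
    # then the managed network can be deleted
    incus network delete incusbr5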

However, with macvlan, your instances will get IP addresses from the LAN. Depending on how you configure your instances, the same instance may also have a NIC on the private bridge (incusbr0) and receive network configuration there as well.

Personally, I would prefer to use a managed network. Yes, there is the issue of how to make an instance accessible to other systems, but this can be achieved with a reverse proxy, with Incus proxy devices, etc.
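
For example, a proxy device that forwards port 80 on the host to a web server inside an instance could look like this (mycontainer is a placeholder name):

    incus config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80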


I got pinged from this post, so lemme try to answer. Keep in mind I don’t know too, too much.

a) As far as I can tell, this is equivalent to what Proxmox does under the hood for you. It’s been a while since I set up Proxmox, so I’m not sure what they call “bridged that gets its own MAC/IP” vs “bridged that uses the host’s IP and exposes ports”.

A problem, I think, is that the terminology around Incus is confusing. In Incus there is a managed “bridge network” (made automatically, but it can also be made with incus network create incusbr0) that will not give the container/VM a MAC/IP exposed on your LAN (internally each instance still gets one for Incus’ own routing). That container can be reached on the host’s IP at whatever port is exposed by Incus. Mine was made before I did the enp1s0 device conversion to br0, so I assume my incusbr0 is tied to enp1s0, but maybe not, idk.
Here’s their blurb that explains it pretty well (About networking - Incus documentation):

In Incus context, the bridge network type creates an L2 bridge that connects the instances that use it together into a single network L2 segment. This makes it possible to pass traffic between the instances. The bridge can also provide local DHCP and DNS.

The fabricated br0 is used by Incus as a "bridge nic". They’re both named bridge but they act differently. I got stuck on this for a while (hittin’ rocks together, etc.).
Here’s their blurb on the differences (Type: nic - Incus documentation):

When adding a network device to an instance, there are two methods to specify the type of device that you want to add: through the nictype device option or the network device option.

These two device options are mutually exclusive, and you can specify only one of them when you create a device. However, note that when you specify the network option, the nictype option is derived automatically from the network type.

nictype
When using the nictype device option, you can specify a network interface that is not controlled by Incus. Therefore, you must specify all information that Incus needs to use the network interface.

network
When using the network device option, the NIC is linked to an existing managed network. In this case, Incus has all required information about the network, and you need to specify only the network name when adding the device.

Basically, Incus lacks a network type that gets you the Proxmox equivalent (a unique MAC that can have an assigned IP address, is seen by your router’s DHCP like an independent computer, AND lets the container talk directly to the host), so br0 is fabricated by hand to allow nictype=bridged to work on the specified ethernet port.
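
To make the two quoted options concrete, here is roughly how each form looks when adding a NIC (c1 is a placeholder instance name):

    # managed network: Incus derives the nictype from the network definition
    incus config device add c1 eth0 nic network=incusbr0

    # unmanaged bridge: you name the parent interface yourself
    incus config device add c1 eth0 nic nictype=bridged parent=br0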

On the Q’s:

  1. The containers with nictype=bridged connected to br0 have an independent MAC and IP. The container can talk to the host (likely through container IP <-> router/switch <-> host IP instead of the internal communication that occurs with incusbr0, but I’m not doing anything that would care about that extra hop for speed). nictype=macvlan doesn’t allow that container ↔ host chat, which is a dealbreaker for me. I’m not quite sure what you mean post-comma, but the container is completely independent of the host’s MAC/IP and doesn’t interfere with or impact it in any way.

  2. If your router doesn’t assign an IP to the container’s MAC created by the nictype=bridged NIC connected to br0, the container won’t be able to chat with anything on the network, likely including the host. macvlan would be in the same boat if not given an IP by your router’s DHCP service. If you don’t want to give an independent MAC/IP to a container/VM, use the internal incusbr0 bridge so that the container can be accessed on the host’s IP at whatever port you expose.
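
A quick way to check what addresses each instance actually got (the column letters stand for name, state, IPv4, IPv6):

    incus list -c ns46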

Hope that helps.


Thanks! Yeah I quickly realized that an Incus bridge is not the same thing as a Proxmox bridge, which is when I went searching for a solution and found your guide.

After reading the responses here and thinking about it, I realized that I was overcomplicating things.

I have multiple physical NICs on my host, and when I was using Proxmox I had things set up like this:

  • vmbr0 (attached to my physical NIC enp3s0) with IP 10.8.8.2
    • This was essentially my management interface. I didn’t share this bridge with any containers.
  • vmbr1 (attached to physical NIC enp1s0f0) with no IP assigned
    • This is what I gave to my containers.

Because of this, when I went to set up Incus, I thought that I had to have one interface for management and one interface for my containers, which obviously isn’t true. In Proxmox, I could have just shared vmbr0 with my containers and used it to access Proxmox.

Anyway, I set up Incus trying to do the same thing:

  • Set core.https_address to 10.8.8.2. This is the IP assigned to enp3s0.
    • This is my “management interface”.
  • Create br0 to give to my containers. This gets its IP via DHCP.
    • This is what I was confused about. I was thinking “what is the point of this interface having an IP? I already have an interface I can access Incus with”.

Well, the answer is obvious to me now. This bridge is just another interface I can use to access the host. If I want to, I can give br0 a static DHCP reservation and use that as both my management interface and the interface I give to containers. This is basically what I could have done with vmbr0.

To mimic the setup I had with Proxmox, I can set Address= (empty string) under the [Network] block in /etc/systemd/network/br0.network so it doesn’t get assigned an IP.
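
A minimal sketch of what that file could end up as (since my br0 currently gets its address via DHCP, DHCP=no is the important part):

    # /etc/systemd/network/br0.network
    [Match]
    Name=br0

    [Network]
    DHCP=no
    LinkLocalAddressing=no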

To be honest I don’t know why I typed all that out, maybe it’ll help someone haha

Oh, one of the many ways to get your Incus instances to receive an IP address from the LAN is to change your host’s network configuration and create a bridge that is attached to the main physical network interface. In most guides this new bridge gets the name br0. See, for example, https://www.cyberciti.biz/faq/ubuntu-20-04-add-network-bridge-br0-with-nmcli-command/

Indeed, in that case this br0 interface does not have an IP address.
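
Following that guide’s NetworkManager approach, the no-IP variant would be a sketch like this (enp1s0 as the example uplink):

    nmcli con add type bridge ifname br0 con-name br0 ipv4.method disabled ipv6.method disabled
    nmcli con add type bridge-slave ifname enp1s0 master br0
    nmcli con up br0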

In Incus you can get instances to get an IP address from the LAN using guides like this ancient one, How to make your LXD containers get IP addresses from your LAN using a bridge – Mi blog lah!

When you use Proxmox, does Proxmox create those br0 bridges for you or do you have to do this yourself like in Incus?

Yeah, by default Proxmox creates vmbr0 (basically equivalent to the br0 we’re talking about) which is also assigned a static IP for management.

Both variants, an Incus-managed bridge like incusbr0 or an unmanaged br0, are bridges, and it is totally valid to assign them an IP.

  1. On a bare metal server I always prefer the unmanaged bridges br0-br(x), so the behavior is as discussed for Proxmox (or similar on VMware ESX): every container has its own MAC, is exposed to the bridged network and gets its IP and name resolution from the network.
  2. If the bare metal server is exposed to the internet, I always install a pfsense-vm with 2 or more bridge interfaces (e.g. br0 → LAN, br1 → WAN) and move the public IP to the WAN bridge (br1), leaving the local interface unconfigured. So yes, it sometimes makes sense to give a bridge an IP. You could then even create more bridges in netplan, if you need more subnets or VLANs, and connect the pfsense-vm to these VLANs like:
    devices:
      eth0:
        nictype: bridged
        parent: br0
        type: nic
      eth1:
        nictype: bridged
        parent: br1
        type: nic
    
    The MAC address in the device config is one option to satisfy a hoster that expects a specific MAC address.
    You may also give the other bridges an IP, but that only makes sense if you expect to connect to your Incus host directly from that specific bridged network.
    If a bridged network should not be directly connected to a physical network, your netplan config would just look like:
    bridges:
      br0:
        dhcp4: no
        dhcp6: no
      br1:
        macaddress: 00:50:xx:xx:xx:xx
        dhcp4: no
        dhcp6: no
        addresses: [your.public.ip.here]
        interfaces:
          - eth0
    
    The magic happens when you add a network device to the container (or override eth0) as mentioned above: the container will be added to that bridge. Best practice is to have a profile for each defined bridge, so you just “move” a container to a specific network by adding a profile (see the sketch after this list).
  3. When Incus is running as a VM on VMware, I highly recommend not using transparent bridges like in 1./2., since VMware doesn’t implement full-featured switches / hard-codes the ARP table / VM MAC. There are ugly workarounds (e.g. allowing promiscuous mode on a port group), but after some hard lessons learned I would always recommend using the default bridge (like incusbr0) and making yourself familiar with the proxy protocol when adding a proxy device to your containers/VMs. But be warned:
    When you expose your container/vm to the internet using a proxy device, by default the source IP will always be 127.0.0.1 (!). Especially for mail servers this is a no-go (everybody can use your server as an open relay), and you should check how to activate and handle the proxy protocol to rewrite the packets with the expected source IP (e.g. nginx has built-in support to extract the source IP from the proxy protocol header).
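
As mentioned in 2., a sketch of the profile-per-bridge pattern (profile name lan and instance c1 are examples):

    incus profile create lan
    incus profile device add lan eth0 nic nictype=bridged parent=br0
    incus profile add c1 lan

And for 3., the proxy device has a proxy_protocol option to prepend the PROXY protocol header, so a backend that understands it (like nginx) can recover the real source IP:

    incus config device add c1 web proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443 proxy_protocol=true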

I always thought of the Incus/LXD managed bridges as kind of virtualizing the old physical brouter devices, like some of Cisco Systems’ early switches

Brouter

Also known as a bridging router, it is a device that combines features of both a bridge and a router. It can work at either the data link layer or the network layer. Working as a router, it is capable of routing packets across networks; working as a bridge, it is capable of filtering local area network traffic.


Very interesting approach to separating and securing the traffic. Keeping all the hosting stuff behind a firewall is always a challenge.

How do you solve access to the host OS for management? Depending on your hosting provider, you might not have the luxury of a remote KVM or web interface. Would you allow SSH (obviously on a different port) or go through the pfsense firewall?

I do a mix of both:

  • on the host OS I run sshd on the default port, requiring public key auth
  • ufw on the host OS allows access only from private management networks, and on the public interface it is limited to a few trusted, fixed IPs on port 22 in case the pfsense-vm is down/broken (see the sketch below)
  • pfsense is connected to all bridges/networks and has a separate public IP/MAC on the bridge with the public interface as the gateway, so the world only sees that additional gateway IP on pfsense.
  • pfsense has wireguard/ipsec/openvpn configured to securely connect from office/home/other networks directly to the host or the containers/vms. pfsense is also registered as DNS forwarder for specific subdomains in the remote networks.
  • All traffic from the internet to the running services goes through pfsense (ipv4 via port forward).
  • all internal IPs are controlled by pfsense DHCP, not by Incus.

So in short: by default all traffic goes through pfsense, like in any company network. Only as a fallback can I connect on the public IP from trusted IPs.
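
A hedged ufw sketch of that fallback (the subnet and IP are placeholders):

    ufw default deny incoming
    # management network
    ufw allow from 10.8.8.0/24 to any port 22 proto tcp
    # trusted fixed IP, in case the pfsense-vm is down
    ufw allow from 203.0.113.10 to any port 22 proto tcp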

This approach is not limited to a single host but can easily be extended to a secure multi-host environment. The additional hosts simply share the private networks via VLAN, and pfsense could be moved between the hosts (given you have a shared IP for the gateway address; otherwise the gateway address may change once you move the pfsense, because public IPs are bound to the hardware MAC address / switch ports). I just use this configuration for making extremely fast snapshot backups of all running instances to another host crosswise, so you could load balance and always have a copy on the other host, one snapshot behind.


Thanks, it pretty much confirms what I had in my mind.

So the host owns the public interface (eth0) and you have a separate bridge setup for pfsense with separate IPv4 and IPv6 configured / managed?

Given your network setup looks similar to the netplan above,

  • br1 (bridged with eth0) has the public IP (IPv4/IPv6) assigned by your hoster, only for disaster recovery
  • br0 (and possibly more bridges, having either no or just VLAN interfaces connected) has a fixed IP in the LAN network to connect to Incus from private networks
  • pfsense is connected to all bridges, assigns its IPs itself, and is the gateway for all networks, except for the IPs configured on br1. The additional WAN gateway address is configured in pfsense on the interface connected to br1.

Right, should have read the previous posts again before posting…

You just need to get your head around at which level to configure the correct IP, and thanks for mentioning the MAC address part, as this is what most providers use to assign your IP subnet.

Way too many things you need to take care of to set up a safe and secure internet-facing environment. It is just too dangerous out there to leave default settings and hope for the best.

Appreciate the input