Your default profile does not have any networking configuration. I suppose you used some parameter when launching the container to provide networking?
Here’s an example of a profile with a network managed by Incus; a sketch of how to create one follows the output. Your networks do not have an Incus-managed interface.
$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
...
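If you want your default profile to hand out addresses like in the example, a sketch of one way to do it (incusbr0 is just an example name) would be:
incus network create incusbr0                                          # Incus runs DHCP/DNS on this managed bridge
incus profile device add default eth0 nic network=incusbr0 name=eth0   # add it as eth0 in the default profile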
Thanks, simos! As you say, I manually load a bridge0 on my instances. Regarding the case at hand, I refer you to my recent reply to stgraber:
soporte@ost-demo1:~$ incus launch images:alpine/edge a1
Launching a1
The instance you are starting doesn't have any network attached to it.
To create a new network, use: incus network create
To attach a network to an instance, use: incus network attach
soporte@ost-demo1:~$ incus config device add a1 eth1 nic nictype=bridged parent=bridge0
Device eth1 added to a1
soporte@ost-demo1:~$ incus exec a1 sh
~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 10:66:6a:af:25:e9 brd ff:ff:ff:ff:ff:ff
~ #
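As a side note, I suppose I could add the same bridged NIC to the default profile once, instead of per instance (a sketch, assuming bridge0 already exists on the host):
incus profile device add default eth0 nic nictype=bridged parent=bridge0 name=eth0   # new instances then get bridge0 automatically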
Aside: please can you edit your original post and add three backticks (```) on a line of their own, before and after the console text. This will fix the formatting and make it much easier to read.
Also can you do:
incus version # as opposed to `incus --version`, shows both client and server
incus network show bridge0
incus profile show default
From what you’ve posted already, it looks like your bridge0 bridge is not managed by incus, so it will not give out any IP addresses - you would need an upstream DHCP server connected to your bridge to do that. If that’s what’s supposed to happen, then running tcpdump may give some clues:
tcpdump -i bridge0 -nn -s0 -v udp port 67 or udp port 68
# or inside the container, use -i eth0 or -i eth1
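If nothing shows up on the bridge, it’s also worth confirming that a DHCP client is actually running inside the container; on Alpine you can trigger one by hand (a sketch, assuming BusyBox udhcpc and that the interface is eth0):
incus exec a1 -- ip link set eth0 up   # bring the interface up first
incus exec a1 -- udhcpc -i eth0        # request a lease with BusyBox udhcpc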
I note that conventionally you’d use eth0 rather than eth1 for a single NIC. I’m slightly confused by this:
incus config device add a1 eth1 nic nictype=bridged parent=bridge0
...
incus exec a1 sh
ip a
...
12: eth0@if13:
It looks like the container has created an eth0, not an eth1. I guess this could be the container doing its own renaming of network interfaces…?
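For what it’s worth, the nic device’s name property sets the interface name inside the instance, so if you really want it to be eth1 you can say so explicitly (a sketch):
incus config device add a1 eth1 nic nictype=bridged parent=bridge0 name=eth1   # name= controls the in-container interface name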
On this server, which I also upgraded from Incus v6.13 to v6.14 but fortunately have not yet rebooted (since it hosts my vm-gpsinet1 Router/Firewall/DHCP VM that manages a small test network), the ct-u2504 container no longer picks up an IP address after being restarted. To clarify: DHCP is served over bridge0 to all the machines connected to the LAN, including the physical host itself, and all of those physical machines route their services without problems; only the Incus instances are affected.
As long as the Incus versions do not change the DB schema, you should be able to downgrade freely. You can perform a test in a VM to verify. Downgrading should be possible using your distro’s package manager, e.g.
sudo apt install package_name=package_version
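For example, to see which versions apt still offers before pinning one (assuming the package is simply named incus):
apt-cache policy incus   # lists the installed and candidate versions
# then pin the one you want, e.g. sudo apt install incus=<version from the list>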
Personally I prefer to move forward and sort out any problems by filing issues on Github.
My sincere apologies to everyone! Thanks for your help. candlerb’s tcpdump finally led me down the rabbit hole, and a simple “snapshot restore” of vm-gpsinet1 did the trick. I’m now sticking with Incus v6.14. All good, guys!
Server 1
soporte@gps-ser1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 25.04
Release: 25.04
Codename: plucky
soporte@ost-demo1:~$ uname -a
Linux ost-demo1 6.14.0-23-generic #23-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 13 23:02:20 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
soporte@gps-ser1:~$ incus version
Client version: 6.14
Server version: 6.14
soporte@gps-ser1:~$ incus network show bridge0
config: {}
description: ""
name: bridge0
type: bridge
used_by:
- /1.0/instances/ct-u2504
- /1.0/instances/vm-gpsinet1
managed: false
status: ""
locations: []
project: default
soporte@gps-ser1:~$ incus ls
+-------------+---------+-------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------+---------+-------------------+------+-----------------+-----------+
| ct-u2504 | RUNNING | 10.0.1.207 (eth0) | | CONTAINER | 0 |
+-------------+---------+-------------------+------+-----------------+-----------+
| vm-gpsinet1 | RUNNING | 10.0.1.1 (eth1) | | VIRTUAL-MACHINE | 1 |
+-------------+---------+-------------------+------+-----------------+-----------+
soporte@gps-ser1:~$
soporte@gps-ser1:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/vm-gpsinet1
- /1.0/instances/ct-u2504
project: default
soporte@gps-ser1:~$
I’m glad it’s working, but I’m interested in what you found when debugging.
Just to be clear, are you saying the problem was a separate change to the VM which was running your network’s DHCP server, unrelated to the upgrade to incus 6.14?
Greetings, candlerb! Due to a beginner’s clumsiness, I clicked Solution! on the wrong answer. But I must inform everyone that it was precisely your exquisite tcpdump -i bridge0 -nn -s0 -v udp port 67 or udp port 68 that told me that my self-hosted vm-gpsinet1 Router/Firewall/DHCP VM was still applying some temporary rules that blocked the Incus instances specifically. So all I had to do was a “snapshot restore” to put it back into its previous production state.
Thanks again; but I can no longer correct my (wrong) Solution! error.
And I will happily keep using Incus v6.14: it has my complete trust.