After upgrading to Incus 6.14, my containers and VMs have lost networking

Also, commands I use daily inside the containers have stopped working.

This is my operating context:

soporte@ost-demo1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 25.04
Release: 25.04
Codename: plucky
soporte@ost-demo1:~$ uname -a
Linux ost-demo1 6.14.0-23-generic #23-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 13 23:02:20 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
soporte@ost-demo1:~$ incus --version
6.14
soporte@ost-demo1:~$ incus network list
+---------+----------+---------+------+------+-------------+---------+-------+
|  NAME   |   TYPE   | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+---------+----------+---------+------+------+-------------+---------+-------+
| br-wan  | bridge   | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| bridge0 | bridge   | NO      |      |      |             | 6       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| eno1    | physical | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| enp3s0  | physical | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| lo      | loopback | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
soporte@ost-demo1:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
soporte@ost-demo1:~$ incus exec ct-minecserver -- sudo --user ubuntu --login
ubuntu@ct-minecserver:~$ ip route
ubuntu@ct-minecserver:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 10:66:6a:65:fb:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1266:6aff:fe65:fbd8/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@ct-minecserver:~$ sudo poweroff
Failed to connect to bus: No such file or directory
ubuntu@ct-minecserver:~$ sudo reboot
Failed to set wall message, ignoring: Transaction for systemd-logind.service/start is destructive (systemd-remount-fs.service has 'stop' job queued, but 'start' is included in transaction).
Call to Reboot failed: Transaction for systemd-logind.service/start is destructive (systemd-network-generator.service has 'stop' job queued, but 'start' is included in transaction).
ubuntu@ct-minecserver:~$ exit

Welcome!

Your default profile does not have any networking configuration. I suppose you used some parameter when launching the container to provide networking?

Here’s an example of a profile with an Incus-managed network. None of your networks have an Incus-managed interface.

$ incus profile show default 
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
...
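
If you ever want to switch to an Incus-managed bridge, something along these lines should create one and attach it to the default profile. This is only a sketch, untested on your setup; incusbr0 and eth0 are example names:

$ incus network create incusbr0
$ incus profile device add default eth0 nic network=incusbr0 name=eth0

After that, newly started instances using the default profile would get DHCP from Incus on incusbr0.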

Thanks, simos! As you say, I manually attach a bridge0 to my instances. Regarding the case at hand, I refer you to my recent reply to stgraber:

soporte@ost-demo1:~$ incus launch images:alpine/edge a1
Launching a1

The instance you are starting doesn't have any network attached to it.
  To create a new network, use: incus network create
  To attach a network to an instance, use: incus network attach

soporte@ost-demo1:~$ incus config device add a1 eth1 nic nictype=bridged parent=bridge0
Device eth1 added to a1
soporte@ost-demo1:~$ incus exec a1 sh
~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 10:66:6a:af:25:e9 brd ff:ff:ff:ff:ff:ff
~ #

Related to the above:

soporte@ost-demo1:~$ incus config show --expanded a1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine edge amd64 (20250701_13:00)
  image.os: Alpine
  image.release: edge
  image.requirements.secureboot: "false"
  image.serial: "20250701_13:00"
  image.type: squashfs
  image.variant: default
  volatile.base_image: b51376dc7a10e317642bda21b62b5e18f8b935959690430c0879450b2ed29ae6
  volatile.cloud-init.instance-id: 94489d1a-fa3f-43bd-9cc8-af6d1a0b4edf
  volatile.eth1.hwaddr: 10:66:6a:af:25:e9
  volatile.eth1.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 337a987d-1239-43c5-869a-4e94f46ab2eb
  volatile.uuid.generation: 337a987d-1239-43c5-869a-4e94f46ab2eb
devices:
  eth1:
    nictype: bridged
    parent: bridge0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
soporte@ost-demo1:~$

And these are the networks I have been operating with since Incus v6.8:

soporte@ost-demo1:~$ incus network list
+---------+----------+---------+------+------+-------------+---------+-------+
|  NAME   |   TYPE   | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+---------+----------+---------+------+------+-------------+---------+-------+
| br-wan  | bridge   | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| bridge0 | bridge   | NO      |      |      |             | 8       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| eno1    | physical | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| enp3s0  | physical | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
| lo      | loopback | NO      |      |      |             | 0       |       |
+---------+----------+---------+------+------+-------------+---------+-------+
soporte@ost-demo1:~$ 

Aside: please can you edit your original post and add three backticks (```) on a line of their own, before and after the console text. This will fix the formatting and make it much easier to read.

Also can you do:

incus version     # as opposed to `incus --version`, shows both client and server
incus network show bridge0
incus profile show default

From what you’ve posted already, it looks like your bridge0 bridge is not managed by Incus, so Incus itself will not hand out any IP addresses; you would need an upstream DHCP server connected to the bridge to do that. If that’s what’s supposed to happen, then running tcpdump may give some clues:

tcpdump -i bridge0 -nn -s0 -v udp port 67 or udp port 68
# or inside the container, use -i eth0 or -i eth1
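
If you want to force a DHCP attempt from inside the Alpine container while tcpdump is running on the host, busybox's udhcpc should do it (assuming the interface inside really is eth0, as your ip a output suggests):

ip link set eth0 up
udhcpc -i eth0 -n -q    # -n: give up if no lease, -q: quit once a lease is obtained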

I note that conventionally you’d use eth0 rather than eth1 for a single NIC. I’m slightly confused by this:

incus config device add a1 eth1 nic nictype=bridged parent=bridge0
...
incus exec a1 sh
ip a
...
12: eth0@if13:

It looks like the container has created an eth0, not an eth1. I guess this could be the container doing its own renaming of network interfaces…?
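
Looking at your incus config show --expanded a1 output, volatile.eth1.name: eth0 suggests Incus picked eth0 as the in-container name because the nic device has no explicit name property. If you wanted the interface inside the container to be called eth1 as well, I believe something like this would do it (untested):

incus config device set a1 eth1 name=eth1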

On this other server, which I also upgraded from Incus v6.13 to v6.14 but fortunately have not rebooted yet (it hosts my vm-gpsinet1 router/firewall/DHCP VM that manages a small test network), the ct-u2504 container no longer picks up an IP after being restarted. To clarify: bridge0 carries DHCP for every machine connected to the LAN, including the physical host itself, and all of those physical machines route their services without problems; only the Incus instances are affected.

soporte@gps-ser1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 25.04
Release:        25.04
Codename:       plucky
soporte@gps-ser1:~$ uname -a
Linux gps-ser1 6.14.0-22-generic #22-Ubuntu SMP PREEMPT_DYNAMIC Wed May 21 15:01:51 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
soporte@gps-ser1:~$ incus version 
Client version: 6.14
Server version: 6.14
soporte@gps-ser1:~$ incus network show bridge0 
config: {}
description: ""
name: bridge0
type: bridge
used_by:
- /1.0/instances/ct-u2504
- /1.0/instances/vm-gpsinet1
managed: false
status: ""
locations: []
project: default
soporte@gps-ser1:~$ incus profile show default 
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/vm-gpsinet1
- /1.0/instances/ct-u2504
project: default
soporte@gps-ser1:~$ incus config show vm-gpsinet1 
architecture: x86_64
config:
  boot.autostart: "true"
  boot.autostart.delay: "60"
  boot.autostart.priority: "100"
  limits.cpu: "4"
  limits.memory: 6GB
  migration.stateful: "false"
  security.secureboot: "false"
  volatile.cloud-init.instance-id: 6be7cdd3-6c0b-474c-89e3-d55297dfd9f4
  volatile.eth0.host_name: tap829d4e96
  volatile.eth0.hwaddr: 10:66:6a:e2:95:10
  volatile.eth1.host_name: tapc9cbc1d8
  volatile.eth1.hwaddr: 10:66:6a:65:d5:ca
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: faa56daf-9b45-4219-9bf3-78c76b9f431c
  volatile.uuid.generation: 7a2cd47c-9072-4b39-9a2f-370e9c59cf6d
  volatile.vm.definition: pc-q35-9.0
  volatile.vm.rtc_adjustment: "0"
  volatile.vm.rtc_offset: "0"
  volatile.vsock_id: "2233834054"
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br-wan
    type: nic
  eth1:
    nictype: bridged
    parent: bridge0
    type: nic
  root:
    path: /
    pool: default
    size: 8GiB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
soporte@gps-ser1:~$ incus config show ct-u2504 
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu plucky amd64 (20250628_07:42)
  image.os: Ubuntu
  image.release: plucky
  image.serial: "20250628_07:42"
  image.type: squashfs
  image.variant: cloud
  security.nesting: "true"
  security.privileged: "false"
  security.syscalls.intercept.mknod: "true"
  security.syscalls.intercept.setxattr: "true"
  volatile.base_image: 372220ecb646c94bf445a023b3ec052394616b2ba6fed4ab981d9cf7d48039ba
  volatile.cloud-init.instance-id: 72683af2-842c-49aa-9443-77158f51cd76
  volatile.eth0.host_name: vetha6cc9401
  volatile.eth0.hwaddr: 10:66:6a:b7:f7:03
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 50e11171-ebc5-4014-bc34-6a39599328cb
  volatile.uuid.generation: 50e11171-ebc5-4014-bc34-6a39599328cb
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge0
    type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""
soporte@gps-ser1:~$ 

I was missing this piece:

soporte@gps-ser1:~$ incus ls
+-------------+---------+-----------------+------+-----------------+-----------+
|    NAME     |  STATE  |      IPV4       | IPV6 |      TYPE       | SNAPSHOTS |
+-------------+---------+-----------------+------+-----------------+-----------+
| ct-u2504    | RUNNING |                 |      | CONTAINER       | 0         |
+-------------+---------+-----------------+------+-----------------+-----------+
| vm-gpsinet1 | RUNNING | 10.0.1.1 (eth1) |      | VIRTUAL-MACHINE | 1         |
+-------------+---------+-----------------+------+-----------------+-----------+
soporte@gps-ser1:~$ 

Is there any painless way to revert to Incus v6.13, please?

As long as the Incus versions do not change the DB schema, you should be able to downgrade freely. You can test this in a VM first to verify. Downgrading should be possible using your distro’s package manager, e.g.

sudo apt install package_name=package_version
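
For example, to see which Incus versions your configured repositories still offer and then pin an older one (the package name and version placeholder below are illustrative; adjust them for your repository):

apt-cache policy incus
sudo apt install --allow-downgrades incus=<older_version>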

Personally, I prefer to move forward and sort out any problems by filing issues on GitHub.

My sincere apologies to everyone! Thanks for your help. candlerb’s tcpdump finally led me down the rabbit hole, and a simple “snapshot restore” of vm-gpsinet1 did the trick. I’m sticking with Incus v6.14. All good, guys!

Server number 1
soporte@gps-ser1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 25.04
Release:        25.04
Codename:       plucky
soporte@ost-demo1:~$ uname -a
Linux ost-demo1 6.14.0-23-generic #23-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 13 23:02:20 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
soporte@gps-ser1:~$ incus version 
Client version: 6.14
Server version: 6.14
soporte@gps-ser1:~$ incus network show bridge0
config: {}
description: ""
name: bridge0
type: bridge
used_by:
- /1.0/instances/ct-u2504
- /1.0/instances/vm-gpsinet1
managed: false
status: ""
locations: []
project: default
soporte@gps-ser1:~$ incus ls
+-------------+---------+-------------------+------+-----------------+-----------+
|    NAME     |  STATE  |       IPV4        | IPV6 |      TYPE       | SNAPSHOTS |
+-------------+---------+-------------------+------+-----------------+-----------+
| ct-u2504    | RUNNING | 10.0.1.207 (eth0) |      | CONTAINER       | 0         |
+-------------+---------+-------------------+------+-----------------+-----------+
| vm-gpsinet1 | RUNNING | 10.0.1.1 (eth1)   |      | VIRTUAL-MACHINE | 1         |
+-------------+---------+-------------------+------+-----------------+-----------+
soporte@gps-ser1:~$ 
soporte@gps-ser1:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/vm-gpsinet1
- /1.0/instances/ct-u2504
project: default
soporte@gps-ser1:~$ 

Server number 2

soporte@ost-demo1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 25.04
Release:        25.04
Codename:       plucky
soporte@ost-demo1:~$ uname -a
Linux ost-demo1 6.14.0-23-generic #23-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 13 23:02:20 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
soporte@ost-demo1:~$ 
soporte@ost-demo1:~$ incus version
Client version: 6.14
Server version: 6.14
soporte@ost-demo1:~$ incus network show bridge0
config: {}
description: ""
name: bridge0
type: bridge
used_by:
- /1.0/instances/ct-minecserver
managed: false
status: ""
locations: []
project: default
soporte@ost-demo1:~$ incus ls
+-----------------------+---------+------------------------------+------+-----------------+-----------+
|         NAME          |  STATE  |             IPV4             | IPV6 |      TYPE       | SNAPSHOTS |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
| a1                    | RUNNING | 10.0.1.201 (eth0)            |      | CONTAINER       | 0         |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
| ct-docker             | RUNNING | 192.168.192.9 (zt6ntbnuo4)   |      | CONTAINER       | 1         |
|                       |         | 172.18.0.1 (br-4e6e25eb1b27) |      |                 |           |
|                       |         | 172.17.0.1 (docker0)         |      |                 |           |
|                       |         | 10.8.1.1 (br-28fe405a6fb2)   |      |                 |           |
|                       |         | 10.0.1.205 (eth0)            |      |                 |           |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
| ct-minec              | STOPPED |                              |      | CONTAINER       | 1         |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
| ct-minec2             | STOPPED |                              |      | CONTAINER       | 0         |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
| ct-minecserver        | RUNNING | 192.168.192.9 (zt6ntbnuo4)   |      | CONTAINER       | 1         |
|                       |         | 172.18.0.1 (br-97253901f5ac) |      |                 |           |
|                       |         | 172.17.0.1 (docker0)         |      |                 |           |
|                       |         | 10.0.1.208 (eth0)            |      |                 |           |
+-----------------------+---------+------------------------------+------+-----------------+-----------+
soporte@ost-demo1:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/ct-minec2
- /1.0/instances/ct-minec
- /1.0/instances/ct-minecserver
- /1.0/instances/ct-docker
- /1.0/instances/a1
project: default
soporte@ost-demo1:~$ 

I’m glad it’s working, but I’m interested in what you found when debugging.

Just to be clear, are you saying the problem was a separate change to the VM that was running your network’s DHCP server, unrelated to the upgrade to Incus 6.14?

Greetings, candlerb! Due to a beginner’s clumsiness, I clicked Solution! on the wrong answer. But I must tell everyone that it was precisely your excellent tcpdump -i bridge0 -nn -s0 -v udp port 67 or udp port 68 that showed me that my self-hosted vm-gpsinet1 (router/firewall/DHCP) was still applying temporary rules that locked out the Incus instances specifically. So all I had to do was run a “snapshot restore” to return it to its previous production state.
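
In case it helps anyone else, the restore was just the standard Incus snapshot commands; the snapshot name below is a placeholder for the one I had taken earlier:

incus snapshot list vm-gpsinet1
incus snapshot restore vm-gpsinet1 <snapshot-name>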

Thanks again; but I can no longer correct my (wrong) Solution! error.

And I will happily keep using Incus v6.14: it has my complete trust.