Issues when running netplan in an Ubuntu 24.04 (Noble) container

I tried the following on both an Ubuntu 22.04 host and an Ubuntu 24.04 host, with the same result.

When I install a guest (container) based on Ubuntu 24.04 and then try to run any netplan command (such as netplan apply or netplan try), I get the following error:

Failed to send reload request: No such file or directory
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 63, in run
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/apply.py", line 255, in command_apply
    subprocess.check_call(['udevadm', 'control', '--reload'])
  File "/usr/lib/python3.12/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['udevadm', 'control', '--reload']' returned non-zero exit status 1.
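
The failing call can also be run directly inside the container and produces the same “Failed to send reload request: No such file or directory” message, presumably because the /run/udev/control socket doesn’t exist while systemd-udevd and its control socket unit are inactive (see the status output below):

udevadm control --reload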

This seems to be related to systemd-udevd not running because /sys is mounted read-only inside the container, which fails the unit’s ConditionPathIsReadWrite=/sys check.

 systemctl status udev
○ systemd-udevd.service - Rule-based Manager for Device Events and Files
     Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static)
    Drop-In: /usr/lib/systemd/system/systemd-udevd.service.d
             └─syscall-architecture.conf
     Active: inactive (dead)
TriggeredBy: ○ systemd-udevd-control.socket
             ○ systemd-udevd-kernel.socket
  Condition: start condition unmet at Sun 2024-09-01 20:42:52 UTC; 24min ago
       Docs: man:systemd-udevd.service(8)
             man:udev(7)

Sep 01 20:42:52 plesk2 systemd[1]: systemd-udevd.service - Rule-based Manager for Device Events and Files was skipped because of an unmet condition check (ConditionPathIsReadWrite=/sys).
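
That the container’s /sys really is mounted read-only can be checked with, for example:

findmnt -no OPTIONS /sys

which should list ro among the mount options if this condition is what’s blocking the unit.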

I looked through the existing discussions on this topic and, frankly, didn’t find a definitive answer. Several of them suggest that making the container privileged and setting security.nesting to true should solve the issue, even though that isn’t a recommended approach. Unfortunately, these settings (applied roughly as shown below) made no difference for me.
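
For reference, this is how I set those options, assuming Incus (with LXD the equivalent is lxc config set); c1 is a placeholder for the container name:

incus config set c1 security.privileged true
incus config set c1 security.nesting true
incus restart c1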

It seems it’s possible to work around the issue by starting systemd-udevd manually:

/lib/systemd/systemd-udevd --daemon
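
If that turns out to be enough, a slightly more persistent variant might be to clear the unit’s start condition with a drop-in. This is only a sketch; I haven’t verified that udevd behaves well with a read-only /sys:

mkdir -p /etc/systemd/system/systemd-udevd.service.d
cat > /etc/systemd/system/systemd-udevd.service.d/override.conf <<'EOF'
[Unit]
# an empty assignment resets the unit's ConditionPathIsReadWrite list
ConditionPathIsReadWrite=
EOF
systemctl daemon-reload
systemctl start systemd-udevd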

I don’t have this issue on an Ubuntu 22.04 guest, by the way; only 24.04 is affected.

Your help is appreciated.

Hi!

Can’t seem to replicate.

$ incus launch images:ubuntu/24.04/cloud netplan
Launching netplan
$ incus exec netplan -- netplan apply
$ 

The logs below show a warning about Open vSwitch, but it’s unrelated:

$ incus exec netplan -- tail -f /var/log/syslog
2024-09-01T22:07:09.868658+00:00 netplan systemd[1]: Reloading finished in 116 ms.
2024-09-01T22:07:09.930919+00:00 netplan systemd-networkd[444]: eth0: Reconfiguring with /run/systemd/network/10-netplan-eth0.network.
2024-09-01T22:07:09.930984+00:00 netplan systemd-networkd[444]: eth0: DHCP lease lost
2024-09-01T22:07:09.931463+00:00 netplan systemd-networkd[444]: eth0: DHCPv6 lease lost
2024-09-01T22:07:09.955913+00:00 netplan systemd-networkd[444]: eth0: DHCPv4 address 10.10.10.110/24, gateway 10.10.10.1 acquired from 10.10.10.1
2024-09-01T22:07:09.957398+00:00 netplan systemd-networkd[444]: eth0: Configuring with /run/systemd/network/10-netplan-eth0.network.
2024-09-01T22:07:09.957512+00:00 netplan systemd-networkd[444]: eth0: DHCP lease lost
2024-09-01T22:07:09.961236+00:00 netplan systemd[1]: netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup was skipped because of an unmet condition check (ConditionFileIsExecutable=/usr/bin/ovs-vsctl).
2024-09-01T22:07:09.964921+00:00 netplan systemd-networkd[444]: eth0: DHCPv6 lease lost
2024-09-01T22:07:09.985974+00:00 netplan systemd-networkd[444]: eth0: DHCPv4 address 10.10.10.110/24, gateway 10.10.10.1 acquired from 10.10.10.1
2024-09-01T22:07:40.020011+00:00 netplan systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Ctrl+C
$ 

In your case I see a reference to plesk. Does the simplest possible example shown above work for you?
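
If it does, comparing the configuration of a working container with yours might narrow things down, for example (substitute your own instance name for the second one):

incus config show --expanded netplan
incus config show --expanded your-container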

I’m having a similar issue: I can’t get netplan status --diff to run.

root@unifi:~# netplan status --diff
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan_cli/cli/core.py", line 58, in main
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/status.py", line 77, in run
    self.run_command()
  File "/usr/share/netplan/netplan_cli/cli/utils.py", line 298, in run_command
    self.func()
  File "/usr/share/netplan/netplan_cli/cli/commands/status.py", line 833, in command
    self.state_diff = diff_state.get_diff(self.ifname)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/netplan/netplan_cli/cli/state_diff.py", line 107, in get_diff
    self._analyze_routes(config, iface)
  File "/usr/share/netplan/netplan_cli/cli/state_diff.py", line 325, in _analyze_routes
    netplan_routes = self._normalize_routes(netplan_routes)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/netplan/netplan_cli/cli/state_diff.py", line 438, in _normalize_routes
    if ip_prefix[1] == '32' or ip_prefix[1] == '128':

Could you show the full backtrace, please? You’ve left out the most important part, which is the exception itself. Also, can you show how you created the container, and the netplan YAML file itself?
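
Something along these lines would cover it; unifi is taken from your prompt and assumed to be the instance name, and the paths are the standard netplan locations:

# inside the container
netplan status --diff 2>&1
cat /etc/netplan/*.yaml
# on the host, assuming Incus
incus config show --expanded unifi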

I’ve just tried netplan status --diff in an existing Ubuntu 24.04.1 container and it works for me, so there’s something about your container that’s causing the issue. (The host is 22.04.5 and I’m using incus LTS, version 6.0.2-202409162053-ubuntu22.04.)