Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev mac217

For a few weeks now I have been getting the error below on VMs on Ubuntu 20.04 LTS.

lxc start VM-DA-01
Error: Failed to start device "eth0": Failed to set the MAC address: Failed to run: ip link set dev mac217a72af address 02:00:00:EX:AM:PLE exit status 2 (RTNETLINK answers: Address already in use)

I use MACVLAN with MAC address assignment. Nothing special. This problem has been happening for a while (1 or 2 weeks?), and when it occurs I can’t start the VMs. I have to reboot the host after this error.
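For context, a MACVLAN NIC with a pinned MAC like the one described above is typically attached along these lines (a sketch; the instance name, parent interface, and MAC below are placeholders, not the poster's actual values):

```shell
# Sketch: attach a macvlan NIC with a fixed MAC address to a VM.
# Instance name, parent interface, and MAC are hypothetical placeholders.
mac="02:16:3e:aa:bb:cc"   # locally administered, made-up address

lxc config device add VM-DA-01 eth0 nic \
    nictype=macvlan \
    parent=eno3 \
    hwaddr="$mac"
```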

Any idea?

Hi, not that this helps you, but I’m having the same issue on Ubuntu Server 22.04 x86_64 on a physical server and LXD Snap 5.7-c62733b. Using MACVLAN for instances.

Windows VM disconnects from the network, and when I restart it LXD errors with the same message and it won’t start the VM.

Just restarted the host recently for this, but I will try to find a way around it for now.

I notice that the dev name changes every time I try to start the VM…

Failed to run: ip link set dev macd8b62eeb address 00:16:3e:87:19:1f
Failed to run: ip link set dev macef515ed2 address 00:16:3e:87:19:1f
Failed to run: ip link set dev mac99318f7d address 00:16:3e:87:19:1f
...

You can manually delete the device and start the VM:

ip link show | grep -B 1 '00:16:3e:87:19:1f'
    29: maca35b59f9@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 500
    link/ether 00:16:3e:87:19:1f brd ff:ff:ff:ff:ff:ff

sudo ip link delete maca35b59f9
lxc start vmname
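That workaround can be scripted roughly as follows (a sketch; it assumes the leftover interface still carries the instance's MAC, and the MAC and instance name here are placeholders):

```shell
#!/bin/sh
# Sketch: delete the stale macvlan interface that still holds the VM's MAC,
# then start the instance. The MAC and instance name are placeholders.
mac="00:16:3e:87:19:1f"
instance="vmname"

# `ip -o link show` prints one line per interface, including its link/ether
# MAC, so the owning device name is field 2 (minus any "@parent:" suffix).
dev=$(ip -o link show | awk -v mac="$mac" \
  'tolower($0) ~ mac { sub(/@.*/, "", $2); sub(/:$/, "", $2); print $2 }')

if [ -n "$dev" ]; then
  echo "Deleting stale device $dev"
  sudo ip link delete "$dev"
fi

lxc start "$instance"
```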

All of my VMs are affected, both Windows and even the Ubuntu Desktop image from the default repo images.linuxcontainers.org, and it happens while they are running, so it is not just a restart issue. They are dropping off the network.

Time to raise a ticket.


Yep, this problem is new. Interesting issue.

raised issue:


It happened again today, but I see the root cause has been found. Nice work @tomp!

The problem is not completely solved. The VMs are still unreachable sometimes and a restart is required to fix the connectivity.

Latest LXD snap (main).
Default config
Ubuntu 20.04 LTS
ZFS pool
MACVLAN

Randomly losing connection. Result: downtime. A manual VM restart is required. The IP is still visible in lxc list, but connectivity to the outside world is lost.

The failure to restart is solved.
The connectivity issue is not.
Only VMs are affected.

Are you running LXD 5.9?

lxd 5.9-4e4cdc6 24121 latest/stable canonical✓

OK please provide more info on what you are seeing as “failed connectivity”.

We need to see lxc config show <instance> --expanded for one of the affected instances, along with ip a and ip r on the LXD host and inside the affected instance.

We also need you to confirm whether you can lxc exec into the instance, which would show that it's running and that lxd-agent is working when this happens.

We also need some indication of what triggers it: is the LXD process being restarted?
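For reference, those checks could be collected in one pass along these lines (a sketch; the instance name is a placeholder, and the process name assumes the snap's `lxd` daemon):

```shell
#!/bin/sh
# Sketch: gather the diagnostics requested above for one affected instance.
instance="VM-DA-01"   # placeholder name

lxc config show "$instance" --expanded   # expanded instance config
ip a                                     # addresses on the LXD host
ip r                                     # routes on the LXD host

# If lxc exec works, the VM is running and lxd-agent is responding:
lxc exec "$instance" -- ip a
lxc exec "$instance" -- ip r

# Start time of the LXD daemon, to spot unexpected restarts:
ps -o lstart= -C lxd
```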

architecture: x86_64
config:
  boot.autostart: "false"
  environment.TZ: Europe/Paris
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210223)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210223"
  image.type: disk-kvm.img
  image.version: "20.04"
  limits.cpu: "4"
  limits.memory: 8192MB
  volatile.base_image: a548372a4ccb5fc4fb1243de4ba5e4b130f861bb73f40ad1b6ffb0f534f8d168
  volatile.cloud-init.instance-id: 5783548b-2a17-4bd9-b4ac-91f604bbc691
  volatile.eth0.host_name: mac0db448d4
  volatile.eth0.last_state.created: "false"
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 89b84395
  volatile.vsock_id: "16"
devices:
  eth0:
    hwaddr: 02:0:0:0
    nictype: macvlan
    parent: eno3
    type: nic
  root:
    path: /
    pool: local
    size: 40GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Not sure about lxc exec. I will try it next time.
And no, there were no restarts or special configs. Just a clean 20.04 installation with LXD and some VMs.

OK, next time it happens can you check those things and also the start time of the lxd process on the system, as, if it's not occurring when LXD restarts, it is likely to be something different from the linked issue above.

Make sure all VMs have been manually restarted using lxc restart <instance> since upgrading to LXD 5.9 to get the fix though.
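If there are many VMs, restarting them all could be scripted roughly like this (a sketch; it assumes the csv output of `lxc list` and that every virtual-machine instance on the host should be restarted):

```shell
#!/bin/sh
# Sketch: restart every VM so it picks up the fix after the LXD 5.9 upgrade.
# Assumes all virtual-machine instances on this host should be restarted.
for vm in $(lxc list type=virtual-machine -c n -f csv); do
    lxc restart "$vm"
done
```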
