LXD 3.23 - Seeing strange behavior for VMs with images:ubuntu/18.04

It is possible I am doing something wrong here, but I am seeing some unexpected IP allocation behavior when creating VMs from the images:ubuntu/18.04 image: multiple VMs all end up with the same IP address.

As the command log below shows, creating containers (ubuntu:18.04) and creating VMs from images:ubuntu/16.04 both produce the expected behavior: each newly created container/VM gets a different IP address.

LXD is configured in cluster mode, using fan networking.
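For context, a sketch of how a fan network like lxdfan0 would be created is below (the underlay subnet is illustrative, not the actual value in use here), and lxc network show can be used to confirm the bridge is in fan mode:

    # Sketch only: create a fan-mode bridge for the cluster (underlay subnet illustrative)
    lxc network create lxdfan0 bridge.mode=fan fan.underlay_subnet=10.0.0.0/16

    # Inspect the resulting network configuration
    lxc network show lxdfan0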

--------- command and output log -------------
akriadmin@c4akri01:~$ lxc list
+------+-------+------+------+------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------+-------+------+------+------+-----------+----------+
akriadmin@c4akri01:~$ lxc launch --target c4akri01 ubuntu:18.04 container-1804-1
Creating container-1804-1
Starting container-1804-1
akriadmin@c4akri01:~$ lxc launch --target c4akri01 ubuntu:18.04 container-1804-2
Creating container-1804-2
Starting container-1804-2
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/18.04 vm-1804-1
Creating vm-1804-1
Starting vm-1804-1
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/18.04 vm-1804-2
Creating vm-1804-2
Starting vm-1804-2
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/16.04 vm-1604-1
Creating vm-1604-1
Starting vm-1604-1
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/16.04 vm-1604-2
Creating vm-1604-2
Starting vm-1604-2
akriadmin@c4akri01:~$ lxc version
Client version: 3.23
Server version: 3.23
akriadmin@c4akri01:~$ lxc list
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
|       NAME       |  STATE  |         IPV4          | IPV6 |      TYPE       | SNAPSHOTS | LOCATION |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| container-1804-1 | RUNNING | 240.204.0.82 (eth0)   |      | CONTAINER       | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| container-1804-2 | RUNNING | 240.204.0.127 (eth0)  |      | CONTAINER       | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| vm-1604-1        | RUNNING | 240.204.0.86 (enp5s0) |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| vm-1604-2        | RUNNING | 240.204.0.91 (enp5s0) |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| vm-1804-1        | RUNNING | 240.204.0.31 (enp5s0) |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
| vm-1804-2        | RUNNING | 240.204.0.31 (enp5s0) |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+-----------------------+------+-----------------+-----------+----------+
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/16.04 vm-1604-3
Creating vm-1604-3
Starting vm-1604-3
akriadmin@c4akri01:~$ lxc launch --target c4akri01 --vm images:ubuntu/18.04 vm-1804-3
Creating vm-1804-3
Starting vm-1804-3
akriadmin@c4akri01:~$ lxc list
+------------------+---------+------------------------+------+-----------------+-----------+----------+
|       NAME       |  STATE  |          IPV4          | IPV6 |      TYPE       | SNAPSHOTS | LOCATION |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| container-1804-1 | RUNNING | 240.204.0.82 (eth0)    |      | CONTAINER       | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| container-1804-2 | RUNNING | 240.204.0.127 (eth0)   |      | CONTAINER       | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1604-1        | RUNNING | 240.204.0.86 (enp5s0)  |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1604-2        | RUNNING | 240.204.0.91 (enp5s0)  |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1604-3        | RUNNING | 240.204.0.222 (enp5s0) |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1804-1        | RUNNING | 240.204.0.31 (enp5s0)  |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1804-2        | RUNNING | 240.204.0.31 (enp5s0)  |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
| vm-1804-3        | RUNNING | 240.204.0.31 (enp5s0)  |      | VIRTUAL-MACHINE | 0         | c4akri01 |
+------------------+---------+------------------------+------+-----------------+-----------+----------+
akriadmin@c4akri01:~$
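(For completeness, the addresses handed out for the fan network could also be cross-checked directly on the cluster member; the paths below are the usual dnsmasq lease file locations for non-snap and snap installs of LXD, an assumption about this setup rather than something verified above.)

    # Sketch: inspect the dnsmasq leases for lxdfan0 on the target member
    cat /var/lib/lxd/networks/lxdfan0/dnsmasq.leases
    # or, for a snap install:
    cat /var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.leases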

Can you show lxc config show --expanded NAME for a few of the ones that have the same address?

Attaching the lxc config show output for vm-1804-[1-2] (the ones with the issue) and vm-1604-[1-2] (which work as expected) below:

------ vm-1804-1 -------

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200326_12:29)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200326_12:29"
  image.type: disk-kvm.img
  volatile.base_image: 52626176b67166a3dba32eaffea62f0ec55f23feb4493603953aa5978a3bab47
  volatile.eth0.host_name: tap5f9ce70a
  volatile.eth0.hwaddr: 00:16:3e:f7:2e:5f
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: aa9d1e57-f898-4466-9999-cf342de9dae2
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

------ vm-1804-2 -------

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200326_12:29)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200326_12:29"
  image.type: disk-kvm.img
  volatile.base_image: 52626176b67166a3dba32eaffea62f0ec55f23feb4493603953aa5978a3bab47
  volatile.eth0.host_name: tap5ce02915
  volatile.eth0.hwaddr: 00:16:3e:83:67:fa
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: af98daf2-39b7-4fac-94f0-bd4642ea35f3
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

------ vm-1604-1 -------

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu xenial amd64 (20200326_12:29)
  image.os: Ubuntu
  image.release: xenial
  image.serial: "20200326_12:29"
  image.type: disk-kvm.img
  volatile.base_image: 698c6a81afe85b5201126106491d4223b08c6ec881987d3c49ae57de8c6d7f71
  volatile.eth0.host_name: tap928dfcad
  volatile.eth0.hwaddr: 00:16:3e:c0:49:aa
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: b496285e-5c23-4bac-b402-5b3aba3fb721
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

------ vm-1604-2 -------

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu xenial amd64 (20200326_12:29)
  image.os: Ubuntu
  image.release: xenial
  image.serial: "20200326_12:29"
  image.type: disk-kvm.img
  volatile.base_image: 698c6a81afe85b5201126106491d4223b08c6ec881987d3c49ae57de8c6d7f71
  volatile.eth0.host_name: tapb66c9061
  volatile.eth0.hwaddr: 00:16:3e:46:05:a4
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: d3eccfb1-4bdb-48a1-9bbc-14e2bf0a74ba
devices:
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

@monstermunchkin any ideas?

Possibly related issue here: LXD 3.23 - Cluster setup, lxc exec has different behavior for containers and VMs

Hmm, no, the IP issue was the dbus machine-id thing I fixed over the weekend.

Newer images should have this resolved now.
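In case anyone hits this before re-downloading: the cached copy of the affected image can be deleted so the next launch pulls the rebuilt one. A minimal sketch, using the bionic VM image fingerprint visible in the configs above (vm-1804-4 is just an example name):

    # Sketch: drop the cached images:ubuntu/18.04 VM image, then launch again to fetch the fixed build
    lxc image list
    lxc image delete 52626176b67166a3dba32eaffea62f0ec55f23feb4493603953aa5978a3bab47
    lxc launch --target c4akri01 --vm images:ubuntu/18.04 vm-1804-4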

Thanks. Do you know how that was affecting DHCP?

We use networkd for DHCP, which uses the machine-id as the DHCP client ID.
That machine-id is generated on boot because our images ship an empty machine-id file; unfortunately, the generation protocol is to first look for a dbus machine-id and, if one is present, use it.

Our images were mistakenly shipping with such a dbus machine-id, so all copies of the same image ended up with the same machine-id and therefore the same DHCP client ID, which caused the DHCP server to hand back the same IP to each of them.
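For guests created from the older images, the duplication can be confirmed and worked around from inside the VMs. A rough sketch, assuming standard systemd tooling in the guest; regenerating the machine-id and restarting networkd should trigger a DHCP request with a distinct client ID:

    # Sketch: confirm the duplicated machine-id across the affected VMs
    lxc exec vm-1804-1 -- cat /etc/machine-id
    lxc exec vm-1804-2 -- cat /etc/machine-id

    # Sketch: regenerate the machine-id in one guest and request a new lease
    lxc exec vm-1804-2 -- sh -c 'rm -f /etc/machine-id /var/lib/dbus/machine-id && systemd-machine-id-setup && systemctl restart systemd-networkd'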
