lxd init fails with "Error: Failed to update profile 'default': Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property"

# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (lvm, zfs, dir) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    source: zfs_lxd
  description: ""
  name: lxd_zfs
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd_zfs
      type: disk
  name: default
cluster: null

Error: Failed to update profile 'default': Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property

The bridge lxdbr0 is not up, but it is already configured (from an earlier lxd init run):

# ifconfig
enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.4.36  netmask 255.255.255.0  broadcast 192.168.4.255
        inet6 fe80::46f6:d9e5:60ff:65a8  prefixlen 64  scopeid 0x20<link>
        ether 8c:16:45:6e:c2:6e  txqueuelen 1000  (Ethernet)
        RX packets 96597  bytes 106343019 (101.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 49093  bytes 4867617 (4.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xec200000-ec220000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlp4s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 92:40:64:f5:dd:27  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

What’s wrong here?

Also, every time I run lxd init this way after having already created my ZFS pool, the failed run destroys the pool. For example:

# zpool create zfs_lxd mirror /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img 
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zfs_lxd  9,50G   122K  9,50G        -         -     0%     0%  1.00x    ONLINE  -
# lxd init  
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes   
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    source: zfs_lxd
  description: ""
  name: lxd_zfs
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd_zfs
      type: disk
  name: default
cluster: null

Error: Failed to update profile 'default': Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
# zpool list
no pools available
#

When I delete the existing bridge and restart lxd init, it succeeds. But I would like to know why this is necessary:

# lxc network list       
+-----------+----------+---------+-------------+---------+
|   NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+-----------+----------+---------+-------------+---------+
| enp0s31f6 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| lxdbr0    | bridge   | YES     |             | 0       |
+-----------+----------+---------+-------------+---------+
| wlp4s0    | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
# lxc network delete lxdbr0
Network lxdbr0 deleted
# lxc network list
+-----------+----------+---------+-------------+---------+
|   NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+-----------+----------+---------+-------------+---------+
| enp0s31f6 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| wlp4s0    | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (zfs, dir, lvm) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
storage_pools:
- config:
    source: zfs_lxd
  description: ""
  name: lxd_zfs
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd_zfs
      type: disk
  name: default
cluster: null

Comparing the two YAML outputs, I would be inclined to file a bug report. Other users have encountered similar issues.
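
For reference, the only difference between the two generated default profiles is the eth0 device. The failing run emits:

```yaml
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
```

while the working run emits:

```yaml
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```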

lxd init is really only meant to be run once, or re-run only after you've deleted most objects and are reconfiguring things from scratch.

I suspect the error you're seeing occurs because your default profile already has a device named eth0 that points to lxdbr0 through a network property (left over from the first run, which created lxdbr0 as a managed network).

You then run lxd init again and tell it to use an existing non-managed bridge, which causes it to set nictype=bridged and parent=lxdbr0, conflicting with what is currently set.
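
If you do need to re-apply the configuration, a non-interactive preseed avoids this prompt path entirely: you can pipe corrected YAML into `lxd init --preseed`. As a sketch (pool and bridge names taken from your output, not verified against your system), the profile section would reference the managed network via `network:` rather than `nictype:`/`parent:`, so it no longer conflicts:

```yaml
# Sketch of a corrected preseed, assuming the lxdbr0 managed bridge and
# lxd_zfs pool from the output above already exist. Apply with:
#   cat preseed.yaml | lxd init --preseed
config: {}
networks: []
storage_pools: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0   # managed network reference instead of nictype/parent
      type: nic
    root:
      path: /
      pool: lxd_zfs
      type: disk
  name: default
cluster: null
```

Alternatively, you can repair just the profile in place with `lxc profile device remove default eth0` followed by `lxc profile device add default eth0 nic network=lxdbr0`, without touching lxd init at all.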
