# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (lvm, zfs, dir) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
source: zfs_lxd
description: ""
name: lxd_zfs
driver: zfs
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
nictype: bridged
parent: lxdbr0
type: nic
root:
path: /
pool: lxd_zfs
type: disk
name: default
cluster: null
Error: Failed to update profile 'default': Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
The bridge lxdbr0 is not running, but it is already configured (via lxd init).
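One way to check whether lxdbr0 actually exists at the kernel level (a sketch assuming the iproute2 `ip` tool; the interface name is taken from the transcript above):

```shell
# "lxd init" may have recorded lxdbr0 in the LXD database without the
# kernel interface actually being present or up; check the kernel side:
if ip link show lxdbr0 >/dev/null 2>&1; then
    # Interface exists; print a one-line summary of its state (UP/DOWN)
    ip -br link show lxdbr0
else
    echo "lxdbr0: no such interface"
fi
```

For the LXD side, `lxc network show lxdbr0` prints what LXD itself has recorded for the bridge.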
Also, every time I run lxd init this way after having already created my ZFS pool, the pool gets destroyed. For example:
# zpool create zfs_lxd mirror /var/lib/zfs_img/zfs0.img /var/lib/zfs_img/zfs1.img
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs_lxd  9.50G   122K  9.50G        -         -     0%     0%  1.00x  ONLINE  -
# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
source: zfs_lxd
description: ""
name: lxd_zfs
driver: zfs
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
nictype: bridged
parent: lxdbr0
type: nic
root:
path: /
pool: lxd_zfs
type: disk
name: default
cluster: null
Error: Failed to update profile 'default': Device validation failed "eth0": Cannot use "nictype" property in conjunction with "network" property
# zpool list
no pools available
#
When I delete the existing bridge and re-run lxd init, it succeeds. But I would like to know why this is necessary:
# lxc network list
+-----------+----------+---------+-------------+---------+
|   NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+-----------+----------+---------+-------------+---------+
| enp0s31f6 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| lxdbr0    | bridge   | YES     |             | 0       |
+-----------+----------+---------+-------------+---------+
| wlp4s0    | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
# lxc network delete lxdbr0
Network lxdbr0 deleted
# lxc network list
+-----------+----------+---------+-------------+---------+
|   NAME    |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+-----------+----------+---------+-------------+---------+
| enp0s31f6 | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
| wlp4s0    | physical | NO      |             | 0       |
+-----------+----------+---------+-------------+---------+
# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (zfs, dir, lvm) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
ipv4.address: auto
ipv6.address: auto
description: ""
name: lxdbr0
type: ""
storage_pools:
- config:
source: zfs_lxd
description: ""
name: lxd_zfs
driver: zfs
profiles:
- config: {}
description: ""
devices:
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: lxd_zfs
type: disk
name: default
cluster: null
Comparing the two YAML outputs, I would be inclined to file a bug report. Other users have encountered similar issues.
lxd init is only really meant to be run once, or re-run only if you've deleted most objects and are reconfiguring things from scratch.
I suspect the error you're reporting occurs because your default profile already has a device named eth0 that points to lxdbr0 through a network property.
When you then run lxd init again and tell it to use an existing non-managed bridge, it sets nictype=bridged and parent=lxdbr0, which conflicts with what is currently set.
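Given that explanation, one way to clear the conflict by hand instead of re-running lxd init — a sketch assuming the names from the preseed above (`default` profile, `eth0` device, managed network `lxdbr0`) and a working LXD daemon:

```shell
# Inspect the current default profile to confirm the conflicting eth0 device
lxc profile show default

# Drop the stale eth0 device (the one carrying nictype=bridged / parent=lxdbr0)
lxc profile device remove default eth0

# Re-add eth0 pointing at the managed network, matching the working preseed
lxc profile device add default eth0 nic network=lxdbr0
```

This edits only the profile device, so the storage pool and any existing instances are left alone.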