Error: Only managed networks can be modified

Hi,
Is there a simple way to change managed from false to true?
Thanks.

root@lxdserver01:~# lxc network list
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| enp2s0 | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+

root@lxdserver01:~# lxc --version
3.0.1

Hi!

Managed interfaces are the network interfaces that the LXD service manages for you.
These are interfaces that LXD itself has created, such as the private bridge usually called lxdbr0.

There is no way to convert an interface between managed and unmanaged, which is why the lxc network subcommand has no option for such a change.

One aspect of a managed interface is that LXD creates and runs a DHCP/DNS server on it. That makes good sense for the private bridge interface (lxdbr0).
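For comparison, a managed bridge is one that LXD creates for you, roughly like this (a minimal sketch; the bridge name and subnet are illustrative, not from this thread):

root@lxdserver01:~# lxc network create mybr0 ipv4.address=10.10.10.1/24 ipv4.nat=true
Network mybr0 created

A bridge created this way shows up with MANAGED set to YES in lxc network list, and LXD runs dnsmasq on it to provide DHCP/DNS.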

Thanks for the explanation.

Sorry to dig up this old topic, but I’m in a situation where I thought handing control of an existing network to LXD would solve my problem. Since that’s not possible (which I now know thanks to the explanation above), I’d like to find the best solution for my issue.
I’m on Ubuntu 16.04. I used to have lxd/lxc 2.0.2, and back then, during lxd init, I chose to use an existing network bridge, virbr0 (created and used by kvm, which I also run), with the address 192.168.122.1/24. I then created several containers with static IPs (e.g. 192.168.122.25) that depend on that parent bridge on the host. I also have lots of iptables rules forwarding traffic to those fixed container IPs.
All was fine until I updated to lxd/lxc 3.0.3, which handles networking differently: it renamed lxd-bridge to lxd-bridge.upgraded, and I had to manually define virbr0 as the bridge in the default profile (see the sketch below; I think I even ran lxd init again and chose the existing bridge virbr0 to get networking working). Now everything is fine except that the containers don’t start after a host reboot and I have to start them manually. The log shows:
lvl=eror msg="Failed to start container 'cld': Common start logic: Missing parent 'virbr0' for nic 'eth0'"
If you followed my explanation, the parent bridge virbr0 does exist: all containers can be started manually without problems and use the bridge without any issues. It just seems that virbr0 isn’t ready for the containers during system startup (something about an unavailable resource, according to an explanation by stgraber I read somewhere in the lxc/lxd GitHub issues), hence my idea of having LXD manage my existing bridge.
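For reference, pointing the default profile at the existing bridge looks roughly like this (a sketch assuming the profile’s NIC device is named eth0):

lxc profile device add default eth0 nic nictype=bridged parent=virbr0

or, if the eth0 device already exists in the profile:

lxc profile device set default eth0 parent virbr0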
I don’t want to create a new bridge and attach it to the default profile for all containers, because I would then have to manually change the network configuration inside each of my containers.
Any idea how to fix this issue of containers not starting automatically after a reboot?

Hi!

In your current setup, LXD does not know about the virbr0 bridge, and systemd does not know about this dependency either. It looks like a situation where you want LXD to start later during boot, after the virbr0 interface has been created and configured; at that point, LXD should start fine.

In that case, your question translates to: how can I get LXD to start after the service that creates virbr0?

See Start systemd service after specific service? - Stack Overflow for some hints.
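For example, a systemd drop-in can order LXD after the service that creates virbr0 (a sketch, assuming virbr0 comes from libvirt; the unit is called libvirt-bin.service on Ubuntu 16.04 and libvirtd.service on newer releases):

# systemctl edit lxd.service
# creates /etc/systemd/system/lxd.service.d/override.conf with:
[Unit]
Wants=libvirt-bin.service
After=libvirt-bin.service

# then reload systemd and reboot to test:
systemctl daemon-reload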

Thanks for the advice. I had a look at the link you shared and found it very interesting for other purposes in the future.
I did try the described approach, forcing lxd-containers.service (then lxd.service, in a second attempt) to start after networking.service, but without success. So I ended up creating a new bridge, lxdbr0, through LXD’s network management, with a different private subnet, and then spent the whole night manually changing IPs in numerous config files, reverse proxies, database configs, and iptables rules.
I know I could have specified the exact subnet I wanted when creating the LXD bridge (as sketched below), but that didn’t work out when I tried it a few days ago, probably because it interfered with the kvm virbr0 bridge.
Everything is working fine now, with all the containers starting correctly after a reboot.
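For the record, a subnet can be specified at bridge-creation time, as long as it does not overlap with kvm’s virbr0 network (a sketch; 192.168.100.0/24 below is just an example subnet):

lxc network create lxdbr0 ipv4.address=192.168.100.1/24 ipv4.nat=true ipv6.address=none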