Percentage of containers won't boot anymore

http://sprunge.us/2mNLZv

Note that I had to install it (again). I only installed it after I noticed an error about it in the logs for the non-booting containers; as far as I know it was never installed before.

```
root@procyon:~# ovs-vsctl show
a6100b39-dbe2-4917-9e3c-016357bf22f7
    ovs_version: "2.9.8"
```

Thanks.

Out of interest, can you start the instance if you remove the routed NIC from it?
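For reference, detaching a NIC can be done with `lxc config device remove`; a minimal sketch, assuming a hypothetical container named `c1` with a routed NIC device named `eth1` attached directly to the instance (if the NIC comes from a profile instead, it would need to be removed there):

```shell
# List the instance's devices and look for one with nictype: routed
lxc config device show c1

# Detach the routed NIC (hypothetical device name "eth1")
lxc config device remove c1 eth1

# Try starting the instance without it
lxc start c1
```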

I think this may be a regression in liblxc’s network cleanup code (which is triggered when using the routed NIC type).

I just tried it and sadly it still crashes. The error is gone from the logs however.

Which seems to indicate this was not the reason it wasn’t booting up, but rather something that happens when the container is torn down.

Back to square one I’m afraid :frowning:

Anything useful in `lxc monitor --debug` when starting the container (after that command was started)?
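The idea is to have the monitor running before the container starts so no events are missed; a sketch, assuming a hypothetical container named `c1`:

```shell
# Terminal 1: start the event monitor first so startup events are captured
lxc monitor --debug

# Terminal 2: start the affected container while the monitor is running
lxc start c1
```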

Nothing that hints at anything wrong on the host. It is almost as if these containers all have an issue actually booting into their guest OS. Although, judging from the last access dates of the files inside the container, nothing actually gets that far.

Is there some way to get some kind of logging from the init process inside the container? I tried starting with --console but that just gives an empty console and then returns to the command prompt as the container exits.
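One thing that may help here, assuming an LXD version recent enough to support it: the container's console output is buffered and can be retrieved after the fact with `--show-log`, which may capture whatever init printed before the container exited. Sketch with a hypothetical container named `c1`:

```shell
# Start the short-lived container
lxc start c1

# After it exits, dump whatever was written to its console (init output included)
lxc console c1 --show-log
```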

I guess the topic name is actually not well picked: they do start, for a second or so, and then exit almost right away. If you are fast enough you can see them with status RUNNING in `lxc list`.

any ideas @stgraber @brauner ?

@brauner any ideas on this one?