Containers on my workstation restart on host reboot even if boot.autostart is false

All my containers have "boot.autostart" set to false.
Irrespective of this, if they were running when I shut down my workstation, they are running again after my workstation has started.

I assume this is by design. I can’t find any way of managing this. Any search I do is dominated by autostart.

Do I have to explicitly stop any containers I don’t want restarting?


Can you show output of lxc config show <instance> --expanded for one of the affected containers before you restart please?

This had been happening for months. I confirmed that containers were restarting by rebooting my workstation several times yesterday.

I stopped all the containers from the command line yesterday and rebooted. I can report there are no unsolicited boot.autostarts happening now.
I am both delighted and disappointed :wink:
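For the record, the stop-everything step can be done in one command (a sketch; recent LXD releases support `--all` on `lxc stop`):

```shell
# Stop every running instance so LXD records their last state as STOPPED:
lxc stop --all

# Or stop selected instances only:
lxc stop pydev1
```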

Before reboot today:

% lxc ls -c ns4tS pydev1
|  NAME  |  STATE  |  IPV4  |   TYPE    | SNAPSHOTS |
| pydev1 | RUNNING | (eth0) | CONTAINER | 3         |

% lxc config show --expanded pydev1
architecture: x86_64
config:
  boot.autostart: "false"
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210510)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210510"
  image.type: squashfs
  image.version: "20.04"
  user.user-data: |
    locale: en_AU.UTF-8
    timezone: Australia/Sydney
    package_upgrade: true
    packages:
      - ansible
    ssh_authorized_keys:
      - ssh-rsa <random stuff>
  volatile.base_image: 52c9bf12cbd3b06d591c5f56f8d9a185aca4a9a7da4d6e9f26f0ba44f68867b7
  volatile.eth0.host_name: vethff49b320
  volatile.eth0.hwaddr: 00:16:3e:ef:91:9b
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: a2573771-106d-47a3-b164-f55f48b5dd5d
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- lxc-pydev
stateful: false
description: ""

After reboot today:

% lxc ls -c ns4tS pydev1
|  NAME  |  STATE  | IPV4 |   TYPE    | SNAPSHOTS |
| pydev1 | STOPPED |      | CONTAINER | 3         |

a) Can I assume that if boot.autostart is not explicitly set in a container’s configuration it defaults to “true”?
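One way to see what an instance actually resolves to is to grep the expanded config (a sketch; `pydev1` stands in for any instance name):

```shell
# Print the effective boot.autostart value, merged from profiles and
# instance-local config; no output means the key is unset everywhere.
lxc config show --expanded pydev1 | grep boot.autostart
```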

I can see from my workstation’s shell history that yesterday after posting I ran:

lxc profile set default boot.autostart=false

b) All my containers have the default profile as a base. Does that explain why they all stopped restarting on reboot? I’m used to other systems where state doesn’t mutate.
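If I understand the config layering correctly, a key set on a profile applies to every instance that uses that profile, while an instance-local key overrides the profile value for that instance only, e.g.:

```shell
# Profile-level: applies to all instances using the default profile.
lxc profile set default boot.autostart=false

# Instance-local: overrides the profile value for pydev1 alone.
lxc config set pydev1 boot.autostart=true
```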

Are you using the snap package? If so which version?

Thanks @tomp for taking the time to help the ignorant :slight_smile:

lxd                  4.15                        20806  latest/stable    canonical✓ 

% cat /etc/os-release
VERSION="20.04.2 LTS (Focal Fossa)"

% uname -r            

Can you check your logs to see if the containers are failing to start because a boot-time dependency has not been fulfilled (perhaps a parent network interface or something).
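For example (a sketch; the unit name assumes the snap package):

```shell
# Per-instance start/console log:
lxc info pydev1 --show-log

# LXD daemon log for the current boot (snap package):
journalctl -u snap.lxd.daemon -b
```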

It may be you’re getting affected by: Retry failed instance startup when auto-started · Issue #8858 · lxc/lxd · GitHub