Intermittent Incus launch fails with raw.lxc one-shot commands and oci-docker images

I’ve been experiencing an intermittent issue where `incus launch` fails after displaying the instance name. This occurs when using an oci-docker image along with a one-shot raw.lxc command that exits quickly.
```
incus launch oci-docker:alpine --console --config "raw.lxc=lxc.execute.cmd=/bin/sh -c 'ls -lah'"
Launching the instance
Instance name is: known-duck
To detach from the console, press: <ctrl>+a q
Error: Failed running forkconsole: "container is not running: "known-duck""
```

The command:
```
incus launch oci-docker:alpine --console --config "raw.lxc=lxc.execute.cmd=/bin/sh -c 'ls -lah'"
```

The behavior:
The command sometimes completes successfully, but other times it exits with an error after showing the instance name (e.g., measured-goat).

Debugging steps and findings:
By running `incus info <instance_name> --show-log` on a failed instance, I found the following errors:

```
lxc known-duck 20251018125805.593 ERROR    utils - ../src/lxc/utils.c:safe_mount:1334 - Invalid argument - Failed to mount "none" onto "/opt/incus/lib/lxc/rootfs/run"
lxc 20251018125805.706 ERROR    af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20251018125805.706 ERROR    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_state"
```

The key error appears to be Failed to mount "none" onto ".../rootfs/run", with the Connection reset by peer errors being a symptom of the initial failure.

I could add a sleep command at the beginning to resolve this issue, but I’m wondering if there’s a more elegant solution.
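For reference, the sleep workaround would look something like this (the one-second delay is an arbitrary choice, just long enough for the console to attach before the command exits):

```
incus launch oci-docker:alpine --console --config "raw.lxc=lxc.execute.cmd=/bin/sh -c 'sleep 1; ls -lah'"
```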

Yeah, it’s basically a race and the sleep would be the easiest workaround for it.

Incus doesn’t have a race-free way to start a container AND attach to its console.
So what the CLI does is:

  1. Start the container
  2. Immediately attempt to attach to the live console (also fetches ringbuffer to backfill anything that happened to that point)

The problem arises with very short-lived containers: the container will have stopped before we get to 2), which produces the error that the container isn’t running.
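You can see the same race by splitting the two steps yourself; with a one-shot command, the second step fails whenever the container exits before the console attaches (the instance name here is just a placeholder):

```
incus launch oci-docker:alpine one-shot-test --config "raw.lxc=lxc.execute.cmd=/bin/sh -c 'ls -lah'"
incus console one-shot-test    # fails with "container is not running" if the command already exited
```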

I think we may be able to work around that one by having the CLI (yet again) check the current container state when it gets a console failure. If the container is stopped, then rather than show the error, it can fetch the recorded console output and display that instead (the equivalent of incus console --show-log).
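Until that logic lands in the CLI, the same fallback can be approximated with a small wrapper script. This is just a sketch: the instance name is a placeholder, and it assumes any launch failure is the console race rather than a genuine start failure:

```
#!/bin/sh
name=one-shot-test
# Launch with console attached; if that errors out, assume the instance
# already stopped and fetch the recorded console log instead.
if ! incus launch oci-docker:alpine "$name" --console \
     --config "raw.lxc=lxc.execute.cmd=/bin/sh -c 'ls -lah'"; then
    incus console "$name" --show-log
fi
```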

Feel free to file an issue about that on GitHub and I’ll look into adding that extra logic to the CLI.