LXD Arm64 - Unsupported architecture

Hi.

I am currently in the midst of migrating a power-hungry x86 server over to a Raspberry Pi 4 B, which runs a few lightweight network services in various containers.

I have set this up using UEFI on an SD card, booting a normal aarch64 Ubuntu Server 20.04 from an SSD connected via USB 3 on the Pi. I then configured LXD, which in itself is working fine, but when I try to launch the test template image that I installed (also aarch64 Ubuntu Server 20.04), I get the following errors.

$ lxc info --show-log local:template
Name: template
Location: none
Remote: unix://
Architecture: aarch64
Created: 2021/12/08 23:02 UTC
Status: Stopped
Type: container
Profiles: shared

Log:

lxc template 20211208230512.808 ERROR    seccomp - seccomp.c:parse_config_v2:1070 - Unsupported architecture "[aarch64]"
lxc template 20211208230512.808 ERROR    start - start.c:lxc_init:872 - Failed to read seccomp policy
lxc template 20211208230512.808 ERROR    start - start.c:__lxc_start:2008 - Failed to initialize container "template"
lxc template 20211208230542.524 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_payload_destroy:548 - Uninitialized limit cgroup
lxc template 20211208230542.524 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:868 - Uninitialized monitor cgroup
lxc template 20211208230543.123 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:859 - No such file or directory - Failed to receive the container state
lxc 20211208230543.123 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20211208230543.123 ERROR    af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20211208230543.123 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors
lxc 20211208230543.123 ERROR    commands - commands.c:lxc_cmd_rsp_recv_fds:127 - Failed to receive file descriptors

Not sure what this is; I cannot find anything on it, and I never encountered it on the other server.

$ uname -a
Linux services 5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:44 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

The selected arch should work fine.

What’s the lxc version output?

I’m also surprised by the kernel version as Ubuntu 20.04 should either be 5.4.0 (which won’t boot under UEFI on a rpi4) or be the current HWE kernel (5.11.0-41-generic).

So I’d probably recommend updating to the current linux-generic-hwe-20.04 and then trying again from there.
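A quick way to confirm which kernel series you are on before and after the upgrade (a sketch; the `is_hwe_kernel` helper and the 5.11 pattern are my own illustration, not an official check):

```shell
#!/bin/sh
# Sketch: check whether a kernel release string is in the 20.04 HWE (5.11.x)
# series. Helper name and version pattern are assumptions for illustration.
is_hwe_kernel() {
  case "$1" in
    5.11.*) return 0 ;;
    *)      return 1 ;;
  esac
}

if is_hwe_kernel "$(uname -r)"; then
  echo "Already on the HWE kernel: $(uname -r)"
else
  echo "Not on the HWE kernel; upgrade with:"
  echo "  sudo apt update && sudo apt install linux-generic-hwe-20.04 && sudo reboot"
fi
```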

The LXD team is running 3 Raspberry Pi 4s with the UEFI firmware and external storage using Ubuntu 20.04 but running on that 5.11.0 kernel and ours are behaving :slight_smile:

I tried updating everything, and the issue still persists.

$ uname -a
Linux services 5.11.0-41-generic #45~20.04.1-Ubuntu SMP Wed Nov 10 10:22:58 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

$ lxd version
4.0.8

I also tried deleting the container and installing it again, twice: once with my custom profile and once with the default. Funny enough, the default profile works, so it seems to be a configuration issue, which I find strange. The custom profile works on the x86 server (which could make sense, since the error mentions the architecture), but I cannot find anything arch-specific in it.

$ lxc profile show default 
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: services
    type: disk
name: default

This one works, however …

$ lxc profile show shared 
config:
  limits.cpu.allowance: 65%
  limits.memory: 2560MB
  limits.memory.enforce: soft
  raw.idmap: |-
    gid 100 100
    gid 155 155
    uid 1000 0
    uid 115 115
  security.idmap.base: "100000"
  security.idmap.isolated: "true"
  security.idmap.size: "65536"
  security.syscalls.blacklist_compat: "true"
  security.syscalls.blacklist_default: "true"
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: services
    type: disk
name: shared

… this one produces the Unsupported architecture "[aarch64]" error.
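One way to narrow this down is to list every config key the failing profile sets on top of the empty default and inspect them one by one. A sketch, using the config section quoted above (the file path is arbitrary, and the multi-line `raw.idmap` key is omitted for brevity):

```shell
#!/bin/sh
# Sketch: dump the `shared` profile's config keys. Since the working default
# profile has an empty config ({}), every key here is a breakage candidate.
# Contents copied from the `lxc profile show shared` output above.
cat > /tmp/shared-config.yaml <<'EOF'
limits.cpu.allowance: 65%
limits.memory: 2560MB
limits.memory.enforce: soft
security.idmap.base: "100000"
security.idmap.isolated: "true"
security.idmap.size: "65536"
security.syscalls.blacklist_compat: "true"
security.syscalls.blacklist_default: "true"
EOF

# Keys only, values stripped; the seccomp-related entries stand out.
cut -d: -f1 /tmp/shared-config.yaml
```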

Well, while looking at the profile, I noticed the two syscalls blacklists and remembered that at least one of them was arch-specific. Removing them worked. But if they are not applicable on Arm, I would have thought they would simply be ignored. Also, the error does not really give much to trace the failure back to the configuration.
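For anyone hitting the same thing: assuming an LXD 4.x client, the two keys can be dropped from the profile with `lxc profile unset`. Shown here behind a dry-run wrapper (an illustration device) so the commands are visible without a live LXD install:

```shell
#!/bin/sh
# Sketch: remove the two x86-oriented seccomp keys from the `shared` profile.
# `run` only echoes its arguments; delete the wrapper to actually apply the
# changes on a machine where LXD is running.
run() { echo "+ $*"; }

run lxc profile unset shared security.syscalls.blacklist_compat
run lxc profile unset shared security.syscalls.blacklist_default
```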