Permission denied for Unix proxy device

For some background: I’m running software inside containers. Inside each container is a daemon that accepts connections via a UNIX socket located at /var/run/crated.sock. The daemon runs as root, spawning and managing child processes.

This worked without issues with a proxy device configured like so:

    connect: unix:/var/run/crated.sock
    gid: "1000"
    listen: unix:/var/lib/battlecrate-depot/daemons/inst1-crated.sock
    type: proxy
    uid: "1000"

I’ve noticed, however, that my new install of LXD returns an end-of-stream error when connecting to the host-side UNIX socket. The proxy log looks like this:

Warning: Failed to connect to target: dial unix /var/run/crated.sock: connect: permission denied
Warning: Failed to prepare new listener instance: dial unix /var/run/crated.sock: connect: permission denied

I thought it might be a permissions error on the container side, but even setting the socket’s permissions to 777 didn’t resolve the problem.

srwxrwxrwx  1 root root    0 Oct 14 22:22 crated.sock

Looking around, I can’t find any information on what to try next. Some recent posts suggest it might be AppArmor tightening up some lax security on my part. If I connect with socat inside the container, the UNIX socket works fine. Any ideas? Thanks.
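For anyone wanting to script the same kind of check that socat performs, here is a minimal Python sketch. The socket path and helper name are illustrative (not from this thread); it stands up a throwaway listener in a temp directory rather than touching /var/run/crated.sock:

```python
# Probe a UNIX stream socket the way `socat - UNIX-CONNECT:/path` would:
# attempt a connect and report success or failure.
import os
import socket
import tempfile

def can_connect(path: str) -> bool:
    """Return True if a UNIX stream socket at `path` accepts connections."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        try:
            client.connect(path)
            return True
        except OSError:  # e.g. EACCES (permission denied) or ECONNREFUSED
            return False

# Demo: stand up a listener, probe it, then close it and probe again.
with tempfile.TemporaryDirectory() as tmp:
    sock_path = os.path.join(tmp, "crated.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    print(can_connect(sock_path))   # True: listener is up
    server.close()
    print(can_connect(sock_path))   # False: socket file exists, nothing accepting
```

Note that a connect attempt distinguishes the two failure modes seen in this thread: `EACCES` is what the forkproxy denial looks like, while `ECONNREFUSED` means the path is reachable but no daemon is listening.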


There have been some recent changes regarding forkproxy (the process that implements proxy devices) and its confinement with AppArmor. Can you tell us the LXD versions involved?

I am on mobile now; you can check the release notes for LXD 4.5 (I think) for this. Also, see the LXD proxy documentation if you need to add extra flags for the proxy command.

Please can you show the output of lxc config show <container> --expanded as well?

Thanks for the quick responses; the config output is this:

$ sudo lxc config show bc-crate-6e255c42-6ed0-4a51-b980-2e14ae1f6e57 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian buster amd64 (20200713_05:24) debian-buster-amd64-default-20200713_05:24
  image.os: debian
  image.release: buster
  image.serial: "20200713_05:24"
  image.variant: default
  limits.cpu: "2"
  limits.memory: 1152MB
  security.devlxd: "false"
  volatile.base_image: cd3a100eb55009b592b08508c7f94f877ad6904386f12b97ed767a058a9bdba1
  volatile.eth0.last_state.created: "false"
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  crated:
    connect: unix:/var/run/crated.sock
    gid: "1000"
    listen: unix:/var/lib/battlecrate-depot/daemons/inst1-crated.sock
    type: proxy
    uid: "1000"
  eth0:
    nictype: ipvlan
    parent: enx00e04c6c1d5e
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

I’m using LXD 4.6 17738 both locally and in production. Oddly, this problem doesn’t happen in production, but that might be because I haven’t rebooted that box in a while, so the new version hasn’t applied yet. I can’t reboot to verify.

As for the release notes, I can’t really tell what exactly would have caused AppArmor to interfere. I’m trying to proxy a root-owned UNIX socket in the container to a UNIX socket on the host that is accessible by the host software user. Somehow the proxy seems to have problems connecting to the container-side UNIX socket.

Can you run sudo dmesg on the LXD host when you try to connect to the proxy and see if there are any DENIED AppArmor messages?

The process that provides the proxy, called forkproxy, was placed under AppArmor restrictions for security reasons. It may be that your usage of it is being blocked by the profile and we need to update it.

Yup, looks like something’s going on here.

[344018.372067] audit: type=1400 audit(1602768837.866:14345): apparmor="DENIED" operation="connect" profile="lxd_forkproxy-crated_bc-crate-6e255c42-6ed0-4a51-b980-2e14ae1f6e57_</var/snap/lxd/common/lxd>" name="/run/crated.sock" pid=28525 comm="lxd" requested_mask="w" denied_mask="w" fsuid=1001000 ouid=1000000

@stgraber do you have any idea why the connect statement in the proxy uses /var/run/crated.sock yet AppArmor picks it up as /run/crated.sock? Could this be a symlink issue?

I just changed the configuration to use /run/crated.sock and it works fine, so it looks like a symlink issue.


Great! Is /var/run/crated.sock a symlink in your container?

Yup, /var/run is just a link to /run.

$ ls /var -la
lrwxrwxrwx  1 root root     4 Jul 13 05:26 run -> /run
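A quick way to see the path as the kernel (and thus AppArmor) sees it is to resolve the symlink programmatically. A small sketch that recreates a /var/run-style link in a temp directory (directory and file names are illustrative):

```python
# Recreate a /var/run -> /run style symlink and show that realpath()
# yields the fully resolved path that AppArmor matches against.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    root = os.path.realpath(tmp)  # normalise the temp dir itself first
    os.makedirs(os.path.join(root, "run"))
    os.symlink(os.path.join(root, "run"), os.path.join(root, "var-run"))

    configured = os.path.join(root, "var-run", "crated.sock")  # what the config says
    resolved = os.path.realpath(configured)                    # what AppArmor sees

    print(resolved == os.path.join(root, "run", "crated.sock"))  # True
```

This mirrors the denial above: the device config said /var/run/crated.sock, but the profile was checked against the resolved /run/crated.sock.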

Confirmed with @stgraber: now that we are protecting the forkproxy process with AppArmor, we need to ensure the AppArmor profile specifies the fully resolved path to the UNIX socket, not the symlink.

However, whilst we can resolve this on the host side, resolving it inside the container is more difficult, as it would require entering the container’s mount namespace to resolve the path.

For the time being, these paths should be specified in the config as the resolved path rather than via a symlink.
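Until that is handled automatically, one option is to resolve the path before writing it into the device config. A hypothetical helper (the function name and device dict are illustrative; the dict shape mirrors the proxy device shown earlier, and note this only works for host-side paths, since resolving a container-side path would need the container’s mount namespace):

```python
import os

def resolve_unix_addr(addr: str) -> str:
    """Resolve symlinks in a 'unix:/path' proxy address; leave others as-is."""
    prefix = "unix:"
    if not addr.startswith(prefix):
        return addr  # tcp:/udp: addresses are left untouched
    return prefix + os.path.realpath(addr[len(prefix):])

device = {
    "type": "proxy",
    "connect": "unix:/var/run/crated.sock",
    "listen": "unix:/var/lib/battlecrate-depot/daemons/inst1-crated.sock",
}
device["connect"] = resolve_unix_addr(device["connect"])
# On hosts where /var/run is a symlink to /run, this prints unix:/run/crated.sock
print(device["connect"])
```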