Storage pool being out of space - can't access, nor shutdown container

I have a situation where the storage pool (btrfs on a dedicated partition, an LVM logical volume to be precise) has apparently reached its maximum capacity.
It has gotten to the point where I can no longer enter the container, nor stop it.

The log info does not tell me much. Can anybody advise how to proceed here?

~$ lxc info --show-log [container_name]
Name: [container_name]
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/11/14 15:21 UTC
Status: Running
Type: container
Profiles: default
Pid: 2772295
Ips:
  eth0:	inet	10.76.224.235	veth28bb7611
  eth0:	inet6	fd42:7541:f3cb:242e:216:3eff:fee3:f269	veth28bb7611
  eth0:	inet6	fe80::216:3eff:fee3:f269	veth28bb7611
  lo:	inet	127.0.0.1
  lo:	inet6	::1
Resources:
  Processes: 9
  CPU usage:
    CPU usage (in seconds): 8019
  Memory usage:
    Memory (current): 440.10MB
    Memory (peak): 486.67MB
  Network usage:
    eth0:
      Bytes received: 50.77kB
      Bytes sent: 44.23kB
      Packets received: 370
      Packets sent: 450
    lo:
      Bytes received: 150.70MB
      Bytes sent: 150.70MB
      Packets received: 1381673
      Packets sent: 1381673
Snapshots:
  sn1 (taken at 2021/01/09 11:17 UTC) (stateless)
  sn2 (taken at 2021/01/17 08:34 UTC) (stateless)

Log:

lxc [container_name] 20210129103102.990 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1126 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.[container_name]"
lxc [container_name] 20210129103102.992 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1126 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.[container_name]"
lxc [container_name] 20210129103103.928 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1547 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc [container_name] 20210129103103.381 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 30(dev)
lxc [container_name] 20210129103103.123 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(full)
lxc [container_name] 20210129103103.124 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(null)
lxc [container_name] 20210129103103.124 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(random)
lxc [container_name] 20210129103103.124 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(tty)
lxc [container_name] 20210129103103.124 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(urandom)
lxc [container_name] 20210129103103.124 ERROR    utils - utils.c:__safe_mount_beneath_at:1106 - Function not implemented - Failed to open 33(zero)

I have been in similar situations before (storage pool running out of space) with a loop device as the btrfs pool. It always worked to delete a snapshot, which freed at least enough space to maneuver with lxc commands and not lose the container completely.

I managed to delete a snapshot here as well, but this time it did not help at all. The container is running but cannot be managed in any way, so at the moment it seems to be completely lost.
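For anyone diagnosing the same symptom, a quick way to confirm that the pool really is exhausted is to ask both btrfs and LXD directly. This is a sketch; the pool name "default" and the mount path are placeholders for your own setup (the path shown assumes a snap-installed LXD):

```shell
# btrfs's view of allocation on the filesystem backing the pool
# (substitute your pool's actual mount point)
sudo btrfs filesystem usage /var/snap/lxd/common/lxd/storage-pools/default

# LXD's own view of the pool; "default" is a placeholder pool name
lxc storage info default
```

Note that on btrfs, "df" can report free space even when all chunks are allocated, so `btrfs filesystem usage` is the more trustworthy number here.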

Have you tried a force stop using lxc stop -f <instance> or rebooting?

Rebooting did not work. lxc stop -f [instance] did, though, thanks. I wasn't aware the -f flag existed.
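Since the pool here sits on an LVM logical volume, the longer-term fix after the force stop is to grow the volume and let btrfs claim the new space. A rough sketch, assuming the volume group still has free extents; the container, VG/LV names, and mount path are all placeholders:

```shell
# Force-stop the stuck container (name is a placeholder)
lxc stop -f mycontainer

# Grow the logical volume backing the pool by 5 GiB
# (vg0/lxd-pool are example names; requires free extents in the VG)
sudo lvextend -L +5G /dev/vg0/lxd-pool

# Have btrfs expand into the enlarged device
# (path is the pool's mount point)
sudo btrfs filesystem resize max /var/snap/lxd/common/lxd/storage-pools/default
```

With some headroom restored, the container should start and be manageable again, and deleting snapshots will actually reclaim space.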