HowTo: Delete container with Ceph RBD volume giving Device or resource busy

I've run into this several times with Ceph; this time I was deleting some unused Docker containers.

Log messages look like this:

unmap container_docker-2: rbd: sysfs write failed
rbd: unmap failed: (16) Device or resource busy
t=2019-10-09T21:45:36-0500 lvl=eror msg="Failed to delete RBD storage volume for container \"docker-2\" on storage pool \"remote\""
t=2019-10-09T21:45:36-0500 lvl=eror msg="Failed deleting container storage" err="Failed to delete RBD storage volume for container \"docker-2\" on storage pool \"remote\"" name=docker-2

I found an easy fix by piecing together suggestions from these forums and the ceph-users mailing list.

Here's the process. First, find the /dev path:

cat /proc/*/mountinfo | grep docker-2

1757 346 252:80 / /var/snap/lxd/common/shmounts/storage-pools/remote/containers/docker-2 rw,relatime shared:653 - ext4 /dev/rbd5 rw,discard,stripe=1024,data=ordered

sudo rbd unmap -o force /dev/rbd5
lxc delete docker-2
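
The steps above can be wrapped in a small helper script. This is a minimal sketch, assuming the container's filesystem still shows up in some process's mountinfo; the container name defaults to the example from this thread.

```shell
#!/bin/sh
# Sketch of the workaround above: find the stale /dev/rbdN mapping for a
# container, force-unmap it, then delete the container.
set -eu
name="${1:-docker-2}"

# In mountinfo the source device is the second-to-last field, e.g.:
# ... - ext4 /dev/rbd5 rw,discard,stripe=1024,data=ordered
dev=$(grep "containers/$name" /proc/*/mountinfo 2>/dev/null \
        | awk '{print $(NF-1)}' | grep '^/dev/rbd' | head -n1 || true)

if [ -n "$dev" ]; then
    sudo rbd unmap -o force "$dev"
    lxc delete "$name"
else
    echo "no mapped RBD device found for $name"
fi
```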


Hmm, the fact that the remaining reference is in common/shmounts suggests a bug in the mntns logic we have in the LXD snap. We’ve seen a few issues with that before but need to patch the tool to be more verbose on failures so that we may track those down for good.

Ya, I'm not complaining, just documenting a fix that is reliable and easy.

Pretty sure I hit this same issue when trying to resize disks too. The workaround for that is to set the default volume size and copy the container, if anyone is looking for that answer instead; or maybe this force unmap would work there too. Need to test.
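
For anyone after that resize workaround, a hedged sketch of what I mean. The pool name "remote", the size, and the container names are example values, not from a tested setup, and the snippet is guarded so it is a no-op on machines without the lxc client.

```shell
#!/bin/sh
# Sketch of the resize workaround: raise the pool-wide default volume size,
# then copy the container so the copy is created at the new size.
# "remote", 20GiB, and the container names are examples, untested here.
if command -v lxc >/dev/null 2>&1; then
    lxc storage set remote volume.size 20GiB
    lxc copy docker-2 docker-2-bigger
else
    echo "lxc client not found; commands shown for reference only"
fi
```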

Thank you for the information. It is useful.