Hi team,
I have an LXC container named lxc-ce9e2a76.
It has 1.1 TB of storage allocated, of which 708 GB is consumed.
Output of df -h run inside the container:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop25     1.1T  708G  331G  69% /
none            492K  4.0K  488K   1% /dev
udev             63G     0   63G   0% /dev/fuse
tmpfs           100K     0  100K   0% /dev/lxd
/dev/nvme0n1p2  1.9T  1.4T  391G  79% /dev/nvidia2
tmpfs           100K     0  100K   0% /dev/.lxd-mounts
tmpfs            63G     0   63G   0% /dev/shm
tmpfs            13G  220K   13G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
tmpfs           6.4G     0  6.4G   0% /run/user/1001
As you can see, 300+ GB is available. But when I copy a small file (500 MB) from one directory to a new directory inside the container, I get the error:
No space left on device
The same happens when I create or download a simple 100 MB file. I'm not sure what the issue is, since storage is available both in the container and on the server.
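(Editor's note, not from the original post: on btrfs, "No space left on device" with free space still shown in df usually means all raw space has been allocated to chunks and the metadata chunks are full; df only reports free data space. A hedged diagnostic sketch, to be run as root inside the container; the mount point is illustrative:)

```shell
# Break btrfs space down per type (Data / Metadata / System).
# `df` can show free data space while metadata is exhausted, which
# still produces ENOSPC on every write.
MNT="/"
if command -v btrfs >/dev/null 2>&1; then
    # `btrfs filesystem df` shows per-type usage; `usage` also shows
    # how much raw space remains unallocated to any chunk.
    report=$(btrfs filesystem df "$MNT" 2>&1; btrfs filesystem usage "$MNT" 2>&1)
else
    report="btrfs-progs not installed; install it to inspect chunk allocation"
fi
printf '%s\n' "$report"
```

If metadata is full while data has free space, a rebalance (btrfs balance) is the usual remedy, but that is a general btrfs observation, not something confirmed for this host.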
Please find below some additional information on the volume:
lxc storage info vol-ce9e2a76
info:
  description: ""
  driver: btrfs
  name: vol-ce9e2a76
  space used: 707.75GiB
  total space: 1.01TiB
used by:
  instances:
  - lxc-ce9e2a76
  profiles:
  - pro-ce9e2a76
lxc storage show vol-ce9e2a76
config:
  size: 250GB
  source: /var/snap/lxd/common/lxd/disks/vol-ce9e2a76.img
description: ""
name: vol-ce9e2a76
driver: btrfs
used_by:
- /1.0/instances/lxc-ce9e2a76
- /1.0/profiles/pro-ce9e2a76
status: Created
locations:
- none
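(Editor's note: the config above lists size: 250GB while the reported total is 1.01TiB, so the backing image appears larger than its configured size; the thread does not explain why. A sketch for cross-checking on the host, using the image path from the thread; loop-backed pools are sparse files, so apparent size and actual on-disk usage can differ widely:)

```shell
# Compare the backing image's apparent size with the blocks it actually
# occupies on the host filesystem. Path taken from the thread; on any
# other machine the file will simply not exist.
IMG="/var/snap/lxd/common/lxd/disks/vol-ce9e2a76.img"
if [ -f "$IMG" ]; then
    sizes=$(ls -lh "$IMG"; du -h "$IMG")   # apparent size, then allocated blocks
else
    sizes="image not present on this machine"
fi
printf '%s\n' "$sizes"
```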
Looking forward to getting this resolved quickly. Thanks.
stgraber
(Stéphane Graber)
July 29, 2022, 2:59pm
How are you copying that file? Is it done inside the container or through lxc file push?
Can you show lxc storage volume show for that volume?
stgraber: lxc storage volume show
I tried copying data from one directory to another within the container. It cannot write any new files at all, whether copying, downloading with wget, or allocating with fallocate, because it thinks there is not enough storage available.
Command:
lxc storage volume show vol-ce9e2a76 data
Output:
Error: Storage pool volume not found
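(Editor's note: a minimal sketch of reproducing the reported failure, assuming nothing beyond coreutils; on the affected container a write like this is what returned ENOSPC, while on a healthy filesystem it succeeds:)

```shell
# Try to write a 1 MiB file and report success or failure. dd is used
# rather than fallocate because dd works on every filesystem type.
TARGET=$(mktemp)
if dd if=/dev/zero of="$TARGET" bs=1M count=1 2>/dev/null; then
    result="write ok"
else
    result="write failed (an ENOSPC here would match the report)"
fi
echo "$result"
rm -f "$TARGET"
```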
tomp
(Thomas Parrott)
August 1, 2022, 8:01am
How did you create vol-ce9e2a76? Something doesn't look quite right here. The location of the /var/snap/lxd/common/lxd/disks/vol-ce9e2a76.img file suggests this is a BTRFS storage pool, and yet you appear to be passing the entire storage pool into an instance as a disk device. I'm not sure that is supported.
Can you show me lxc config show lxc-ce9e2a76 --expanded, in case I am misreading this?
Hi @tomp, sorry for the delay.
I think it is being passed as a root volume to the container; I'm not sure whether it is being added as a disk device. We simply add the volume to the container profile.
Please find below the output for:
Command:
lxc config show lxc-ce9e2a76 --expanded
Output:
architecture: x86_64
config:
  image.architecture: x86_64
  image.description: LXC Image V3
  image.os: ubuntu
  image.release: focal
  limits.cpu: "16"
  limits.memory: 68GB
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: b41b71eaefd5412996c58dae791fe356b0e5c1a42be5b7254fb2f96c630d9320
  volatile.eth0.host_name: vethb917460b
  volatile.eth0.hwaddr: 00:16:3e:69:e0:cd
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.net9fe8593a8a.host_name: veth0048b309
  volatile.net9fe8593a8a.hwaddr: 00:16:3e:6b:6f:85
  volatile.net9fe8593a8a.name: eth2
  volatile.new9fe8593a8a.host_name: veth80545080
  volatile.new9fe8593a8a.hwaddr: 00:16:3e:6f:b5:cd
  volatile.new9fe8593a8a.name: eth1
  volatile.uuid: 9362f722-af1d-4cb7-8ea7-980f4be42cc9
devices:
  eth0:
    nictype: bridged
    parent: net9fe8593a8a
    type: nic
  gpu2:
    pci: 0000:b3:00.0
    type: gpu
  gpu3:
    pci: 0000:b4:00.0
    type: gpu
  net9fe8593a8a:
    network: net9fe8593a8a
    type: nic
  new9fe8593a8a:
    network: new9fe8593a8a
    type: nic
  root:
    path: /
    pool: vol-ce9e2a76
    type: disk
ephemeral: false
profiles:
- pro-ce9e2a76
stateful: false
description: ""
tomp
(Thomas Parrott)
August 10, 2022, 8:38am
OK, that's fine, it makes sense now. The naming of the pool as "vol-…" confused me, as it's not a volume (in the LXD sense, anyway).
Can you show me the output of sudo losetup on the host, please?
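(Editor's note: a sketch of what that check looks like; the image filename comes from the thread, and on any other machine the filter will simply match nothing. Listing may need root, hence the sudo in the request above:)

```shell
# List loop devices with their backing files and keep only the one that
# backs the pool image, confirming which /dev/loopN the pool lives on.
loops=$(losetup -l 2>/dev/null | grep 'vol-ce9e2a76.img' || true)
if [ -n "$loops" ]; then
    printf '%s\n' "$loops"
else
    msg="no loop device backed by vol-ce9e2a76.img on this machine"
    echo "$msg"
fi
```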
tomp
(Thomas Parrott)
August 10, 2022, 8:39am
Also, how much disk space do you have on the partition that holds /var/snap/lxd/common/lxd/disks?
Hi @tomp, sorry for the delayed replies. I have a 2 TB disk, of which 300 GB is available on the root partition, i.e. "/".
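(Editor's note: the command behind that answer might look like the following sketch; the directory path comes from the thread, and the fallback to "/" handles machines without the snap LXD layout:)

```shell
# Report free space on the partition that holds the LXD disks directory.
# df resolves the path to its containing filesystem, so this answers
# "how much room is left for the loop-backed pool image to grow".
DIR=/var/snap/lxd/common/lxd/disks
space=$(df -h "$DIR" 2>/dev/null || df -h /)
printf '%s\n' "$space"
```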