How to increase storage size for a container?

I was trying to copy a somewhat large file (a few gigabytes) to my container with:

lxc file push ~/file-on-host my-container/directory-in-container/

but the transfer was interrupted with the error:

Error: write: No space left on device

I have only one container, and there is enough space on the host (Ubuntu 20.04 arm64, LXD 4.0.9 installed via snap).

I already tried resizing the root disk device to 20GiB on the default profile with:

lxc profile device set default root size 20GiB

and overriding the root device size on the individual container:

lxc config device override my-container root size=20GiB
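As far as I can tell, the override was actually applied — I checked it like this (a sketch; I believe `lxc config device get` is available in LXD 4.0):

```shell
# Print the effective size set on the container's root device.
lxc config device get my-container root size

# The same value also shows up in the expanded config dump.
lxc config show my-container -e | grep -A 4 'root:'
```

Both report 20GiB, yet the container still runs out of space.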

Inside the container, the root filesystem still shows 100% used:

user@my-container:~$ df -h
Filesystem                              Size  Used Avail Use% Mounted on
default/containers/my-container         4.4G  4.4G     0 100% /
none                                    492K  4.0K  488K   1% /dev
udev                                     12G     0   12G   0% /dev/tty
tmpfs                                   100K     0  100K   0% /dev/lxd
tmpfs                                   100K     0  100K   0% /dev/.lxd-mounts
tmpfs                                    12G     0   12G   0% /dev/shm
tmpfs                                   2.4G  8.3M  2.4G   1% /run
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                    12G     0   12G   0% /sys/fs/cgroup
snapfuse                                 58M   58M     0 100% /snap/core20/1380
snapfuse                                 58M   58M     0 100% /snap/core20/1408
snapfuse                                 62M   62M     0 100% /snap/lxd/22530
snapfuse                                 62M   62M     0 100% /snap/lxd/22761
snapfuse                                 38M   38M     0 100% /snap/snapd/15183

I am still kind of a Linux noob and very new to LXD. I would be very grateful if someone could give me a hint on how to increase the storage available in the container :slight_smile:

More information:

~$ lxc list
+--------------+---------+-----------+-----------+
|     NAME     |  STATE  |   TYPE    | SNAPSHOTS |
+--------------+---------+-----------+-----------+
| my-container | RUNNING | CONTAINER | 1         |
+--------------+---------+-----------+-----------+
~$ lxc config show my-container -e
architecture: aarch64
config:
  image.architecture: arm64
  image.description: ubuntu 20.04 LTS arm64 (release) (20220322)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20220322"
  image.type: squashfs
  image.version: "20.04"
  security.idmap.isolated: "true"
  volatile.base_image: e7ecfc40fc692e64cb5f8ac588b6c331303e3d0b9f1b59ae7923a0d749a97048
  volatile.eth0.host_name: vethb2bede98
  volatile.eth0.hwaddr: 00:16:3e:9b:3d:63
  volatile.idmap.base: "1065536"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 8b77886e-e3ad-48a8-9d69-7380acd7098f
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  my-ports-device:
    connect: tcp:127.0.0.1:25570
    listen: tcp:0.0.0.0:25570
    type: proxy
  root:
    path: /
    pool: default
    size: 20GiB
    type: disk
  ssh-port-device:
    connect: tcp:127.0.0.1:22
    listen: tcp:0.0.0.0:54022
    type: proxy
ephemeral: false
profiles:
- default
- my-proxy-profile
stateful: false
description: ""

There are two profiles:

~$ lxc profile show default 
config:
  security.idmap.isolated: "false"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 20GiB
    type: disk
name: default
used_by:
- /1.0/instances/my-container
~$ lxc profile show my-proxy-profile 
config:
  security.idmap.isolated: "true"
description: ""
devices:
  my-ports-device:
    connect: tcp:127.0.0.1:25570
    listen: tcp:0.0.0.0:25570
    type: proxy
  ssh-port-device:
    connect: tcp:127.0.0.1:22
    listen: tcp:0.0.0.0:54022
    type: proxy
name: my-proxy-profile
used_by:
- /1.0/instances/my-container

Some storage info:

~$ lxc storage info default
info:
  description: ""
  driver: zfs
  name: default
  space used: 6.78GiB
  total space: 6.78GiB
used by:
  images:
  - e7ecfc40fc692e64cb5f8ac588b6c331303e3d0b9f1b59ae7923a0d749a97048
  instances:
  - my-container
  profiles:
  - default
~$ lxc storage show default
config:
  size: 8GB
  source: /var/snap/lxd/common/lxd/disks/default.img
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/images/e7ecfc40fc692e64cb5f8ac588b6c331303e3d0b9f1b59ae7923a0d749a97048
- /1.0/instances/my-container
- /1.0/profiles/default
status: Created
locations:
- none
~$ lxc storage show default --resources
space:
  used: 7281438208
  total: 7281438208
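If I read the output above correctly, the pool itself is only about 8 GB (`size: 8GB` in `lxc storage show default`, and "space used" equals "total space"), so a 20GiB root-device quota can never be satisfied — the pool is the bottleneck, not the device size. Assuming the ZFS userland tools (e.g. the `zfsutils-linux` package) are installed on the host, this can be double-checked with:

```shell
# List the ZFS pool backing the "default" storage pool;
# SIZE is the capacity of the loop-backed image file.
sudo zpool list default

# Show which loop device the pool's image file is attached to.
sudo losetup -j /var/snap/lxd/common/lxd/disks/default.img
```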

Alright, it seems like this is the solution:

I had to adjust it for my snap installation, so the path to the pool's image file was /var/snap/lxd/common/lxd/disks/default.img.