Unable to restore container

Can someone explain what the problem is? When I try to import a
container instance from a tarball, this error message appears:

# lxc import teampass-06-01-2021.tar.xz teampass27-2 -s www_pool2
Error: Post hook: Failed to run: resize2fs /dev/vg_ct_100/containers_teampass27-2 1953125K: resize2fs 1.44.1 (24-Mar-2018)
resize2fs: New size smaller than minimum (548890)
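For reference, the two numbers in the error are directly comparable: resize2fs reports the minimum size in filesystem blocks (4 KiB for a default ext4 filesystem; this block size is an assumption, confirm with dumpe2fs -h), while LXD requests the new size in KiB. A quick sanity check:

```shell
# resize2fs reports 548890 blocks as the minimum; assuming 4 KiB ext4 blocks
min_kib=$((548890 * 4))     # minimum filesystem size in KiB
target_kib=1953125          # size requested by LXD (~2GB)
echo "minimum: ${min_kib}K, target: ${target_kib}K"
# The minimum (2195560K) exceeds the target, so resize2fs refuses to shrink.
```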

The container backup was created with this command:

# lxc export teampass teampass-$(date +'%m-%d-%Y').tar.xz

But there is no problem importing the instance into the default storage pool (with the dir driver).

Storage pool www_pool2 info:

config:
  lvm.use_thinpool: "false"
  lvm.vg_name: vg_ct_100
  source: vg_ct_100
  volatile.initial_source: vg_ct_100
description: ""
name: www_pool2
driver: lvm

@parallax are you using profiles? Can you show the output of lxc profile show default?

Also, which version of LXD is this?

Finally, can you open the tarball and attach the backup/container/backup.yaml file from inside?

Also, could you import it into a dir pool and then show the output of:

lxc config show <instance> --expanded

lxd 4.14 20450

lxc config show teampass27-3 --expanded

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Centos 7 amd64 (20210211_07:08)
  image.os: Centos
  image.release: "7"
  image.serial: "20210211_07:08"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 8d34dbf5b85f69a4dd73939059b9f676efb39066ca91814761e4f0cb3f44321c
  volatile.eth0.host_name: veth5cb76101
  volatile.eth0.hwaddr: 00:16:3e:68:39:93
  volatile.eth1.host_name: vport16
  volatile.eth1.hwaddr: 00:16:3e:e7:dd:82
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 20e588f7-ae4c-4f69-82da-ea4850aef75a
devices:
  eth0:
    ipv4.address: 10.30.26.100
    name: eth0
    network: lxdbr1
    type: nic
  eth1:
    host_name: vport16
    nictype: bridged
    parent: mybridge
    type: nic
  proxyv4:
    connect: tcp:0.0.0.0:80
    listen: tcp:10.64.0.151:8080
    nat: "true"
    type: proxy
  root:
    path: /
    pool: default
    size: 2GB
    type: disk
  shared1:
    path: /host-data
    source: /home/david/teampass-mount/
    type: disk
ephemeral: false
profiles:
- default
- web2
stateful: false
description: ""

lxc profile show default

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr1
    type: nic
  root:
    path: /
    pool: default
    size: 2GB
    type: disk
name: default
used_by:
- /1.0/instances/nginx
- /1.0/instances/ksm45
- /1.0/instances/c3
- /1.0/instances/teampass
- /1.0/instances/teampass27
- /1.0/instances/teampass27-3

lxc profile show web2

config: {}
description: ""
devices:
  root:
    path: /
    pool: lvm_thinpool1
    size: 2GB
    type: disk
name: web2
used_by:
- /1.0/instances/nginx
- /1.0/instances/ksm45
- /1.0/instances/c3
- /1.0/instances/teampass
- /1.0/instances/teampass27
- /1.0/instances/teampass27-3

cat backup.yaml

container:
  architecture: x86_64
  config:
    image.architecture: amd64
    image.description: Centos 7 amd64 (20210211_07:08)
    image.os: Centos
    image.release: "7"
    image.serial: "20210211_07:08"
    image.type: squashfs
    image.variant: default
    volatile.base_image: 8d34dbf5b85f69a4dd73939059b9f676efb39066ca91814761e4f0cb3f44321c
    volatile.eth0.host_name: veth5cb76101
    volatile.eth0.hwaddr: 00:16:3e:68:39:93
    volatile.eth1.host_name: vport16
    volatile.eth1.hwaddr: 00:16:3e:e7:dd:82
    volatile.eth1.name: eth1
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: RUNNING
    volatile.uuid: 20e588f7-ae4c-4f69-82da-ea4850aef75a
  devices:
    eth0:
      ipv4.address: 10.30.26.100
      name: eth0
      network: lxdbr1
      type: nic
    eth1:
      host_name: vport16
      nictype: bridged
      parent: mybridge
      type: nic
    proxyv4:
      connect: tcp:0.0.0.0:80
      listen: tcp:10.64.0.151:8080
      nat: "true"
      type: proxy
    root:
      path: /
      pool: www_pool2
      size: 2GB
      type: disk
    shared1:
      path: /host-data
      source: /home/david/teampass-mount/
      type: disk
  ephemeral: false
  profiles:
  - default
  - web2
  stateful: false
  description: ""
  created_at: 2021-05-17T09:23:54.743007096+04:00
  expanded_config:
    image.architecture: amd64
    image.description: Centos 7 amd64 (20210211_07:08)
    image.os: Centos
    image.release: "7"
    image.serial: "20210211_07:08"
    image.type: squashfs
    image.variant: default
    volatile.base_image: 8d34dbf5b85f69a4dd73939059b9f676efb39066ca91814761e4f0cb3f44321c
    volatile.eth0.host_name: veth5cb76101
    volatile.eth0.hwaddr: 00:16:3e:68:39:93
    volatile.eth1.host_name: vport16
    volatile.eth1.hwaddr: 00:16:3e:e7:dd:82
    volatile.eth1.name: eth1
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: RUNNING
    volatile.uuid: 20e588f7-ae4c-4f69-82da-ea4850aef75a
  expanded_devices:
    eth0:
      ipv4.address: 10.30.26.100
      name: eth0
      network: lxdbr1
      type: nic
    eth1:
      host_name: vport16
      nictype: bridged
      parent: mybridge
      type: nic
    proxyv4:
      connect: tcp:0.0.0.0:80
      listen: tcp:10.64.0.151:8080
      nat: "true"
      type: proxy
    root:
      path: /
      pool: www_pool2
      size: 2GB
      type: disk
    shared1:
      path: /host-data
      source: /home/david/teampass-mount/
      type: disk
  name: teampass
  status: Running
  status_code: 103
  last_used_at: 2021-05-17T09:24:34.020453344+04:00
  location: none
  type: container
pool:
  config:
    lvm.use_thinpool: "false"
    lvm.vg_name: vg_ct_100
    source: vg_ct_100
    volatile.initial_source: vg_ct_100
  description: ""
  name: www_pool2
  driver: lvm
  used_by: []
  status: Created
  locations:
  - none
volume:
  config:
    block.filesystem: ext4
    block.mount_options: discard
  description: ""
  name: teampass
  type: container
  used_by: []
  location: none
  content_type: filesystem

If you start up and login to the container imported to the dir pool, what does the output of du -h / show?

It looks to me like the container is actually bigger than its configured 2GB size, suggesting that either there’s a file that has been expanded somehow during export or import, or that the original source LVM volume had been manually grown alongside the filesystem without LXD knowing about it.

Do you know if the source LVM volume from which the export was made had been resized manually?

Also what OS and version are you running on the LXD host?

CentOS Linux release 7.9.2009 (Core)

Moving between storage pools works. I moved the container from the default storage pool to www_pool2:

lxc move teampass27-3 teampass27-3 -s www_pool2

And here is the output of df:

Filesystem                            Size  Used Avail Use% Mounted on
/dev/vg_ct_100/containers_teampass27  1.9G  1.2G  531M  70% /
none                                  492K  4.0K  488K   1% /dev
devtmpfs                              7.8G     0  7.8G   0% /dev/tty
tmpfs                                 100K     0  100K   0% /dev/lxd
/dev/mapper/centos-root                17G  9.2G  7.9G  54% /host-data
tmpfs                                 100K     0  100K   0% /dev/.lxd-mounts
none                                   10M     0   10M   0% /sys/fs/cgroup
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
tmpfs                                 7.8G  8.2M  7.8G   1% /run
tmpfs                                 1.6G     0  1.6G   0% /run/user/0

I think LXD sees the actual size of the storage.

vgs

  VG        #PV #LV #SN Attr   VSize    VFree  
...
  vg_ct_100   1   5   0 wz--n- <120.00g <57.59g

lxc storage info www_pool2

info:
  description: ""
  driver: lvm
  name: www_pool2
  space used: 71.01GB
  total space: 128.84GB
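(The two figures are consistent: vgs reports binary GiB, while lxc storage info reports decimal GB. A quick conversion, assuming exactly 120 GiB:)

```shell
# 120 GiB in bytes, then expressed as decimal GB (10^9 bytes)
bytes=$((120 * 1024 * 1024 * 1024))
echo "120 GiB = ${bytes} bytes, about $((bytes / 1000000000)) GB"
# 120 GiB is roughly 128.8 decimal GB, matching the 128.84GB total above.
```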

Great, at least that's a workaround.

Would it be possible to make that container backup file available for me to download somehow and try importing it locally?

A download link has been sent to your email.

The link had expired when I tried to download it.

Please check your email for a new link.

Got it, thanks.

Thanks. I’ve recreated the issue.

I believe the problem is that resize2fs estimates the minimum size to which the ext4 filesystem can safely be shrunk, and that minimum is larger than the originally specified size of 2GB.

I can work around it with this PR, which forces the resize and disables the safety checks:

But I am concerned this introduces the possibility of corrupting filesystems in other scenarios.

One workaround (in addition to the one you used above) is to edit the backup.yaml file in the container directory inside the tarball and increase the root disk size to 3GB.
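A sketch of that workaround (file names follow this thread, and teampass27-4 is just a hypothetical name for the imported copy; the exact tarball layout may vary by LXD version, so check with tar -tf first):

```shell
# Unpack the backup, bump the root disk size in backup.yaml, repack, import.
mkdir restore
tar -C restore -xf teampass-06-01-2021.tar.xz
# Increase the root device size from 2GB to 3GB (path assumed from the thread)
sed -i 's/size: 2GB/size: 3GB/' restore/backup/container/backup.yaml
tar -C restore -cJf teampass-resized.tar.xz backup
lxc import teampass-resized.tar.xz teampass27-4 -s www_pool2
```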

This is fixed in: