LXD export and root file system size

This is not working; the snapshot is still there.

Any other ideas?

I think that the ‘snapshot’ now exists only in the LXD SQL database. I could try to give you queries to zap the rows from the database, but that is a bit outside my current experience and there should be a better way. I can't believe that the LXD developers have not thought of a way to automatically fix the database when things go wrong, but I have not found it yet.
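If you just want to look at what LXD has recorded, a read-only query through the built-in sql command should be harmless; I believe the table is called instances_snapshots on recent LXD versions, but check first, it has changed between releases:

sudo lxd sql global "SELECT * FROM instances_snapshots;"

Deleting rows from there is the part I would not do lightly.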
Did you try rebooting the computer, by the way? Maybe that is the fix.

P.S.

What kernel module should I install?

Hi

First of all, thank you for your responses! I already rebooted multiple times. If you could tell me how to do this, that would be great!

Earlier you told me that a kernel module was missing. What module should I install? (It had nothing to do with the backups.)

I said that a kernel feature is missing, because LXD can't find something in sysfs. But it's not necessarily a module; it may be built into the kernel itself, or only available in a more recent kernel version. I really have no idea, because I have no great interest in the feature.

Ah OK, perfect, nothing to worry about then.

Could you tell me how to zap the rows from the database?

I think that it's a dangerous idea :slight_smile:. If it goes wrong, you could make things worse.


Maybe it is possible to manually create a snapshot in the storage; the LXD snapshot delete command could then (hopefully) work correctly. If this does not work, the LXD devs advise creating a new post to ask them how to fix the database with SQL.

How to create a manual snapshot (don't forget to remove any trailing space(s) from the following line(s), remember the earlier problem!):

1 - create a subdirectory

sudo nsenter -t $(pgrep daemon.start) -m -- mkdir /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/kopano

2 - create a snapshot in this subdirectory

sudo nsenter -t $(pgrep daemon.start) -m -- /snap/lxd/current/bin/btrfs sub snapshot /var/snap/lxd/common/lxd/storage-pools/default/containers/kopano /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/kopano/kopano -c
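3 - delete the snapshot through LXD, which should now (hopefully) clean up both the database row and the new subvolume; assuming the phantom snapshot is named kopano/kopano as above, something like:

lxc delete kopano/kopano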

Thank you.

This worked. Now it's OK again!

Thank you again for your help.

Good, but remember that the first workaround (the sudo mkdir -p /var/snap/lxd/common/lxd/storage-pools/exports/custom/exports_volume) is NOT an innocuous fix: if you ever have to delete the storage pool, do not forget to remove these two directories first, or else the storage pool removal will fail.
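If you do delete that pool one day, removing them first is just a couple of rmdir calls; I am assuming here that the two directories are the custom volume directory and its custom parent:

sudo rmdir /var/snap/lxd/common/lxd/storage-pools/exports/custom/exports_volume
sudo rmdir /var/snap/lxd/common/lxd/storage-pools/exports/custom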

Eww. After the Debian upgrade to buster, the previous workaround:

mount --bind /exptmp/backups /var/snap/lxd/common/lxd/backups

does not work anymore, failing with this message:

Error: Create backup: Backup create: Failed to create mount directory "/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/main/backup": mkdir /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/main/backup: no such file or directory

I created an exptmp volume:

# lxc storage volume create default exptmp
# lxc config set storage.backups_volume=default/exptmp

but it does not work either; lxc export still fails with the same message:

lxc export main /tank/exportfs/monthly/main.backup.tgz
Error: Create backup: Backup create: Failed to create mount directory "/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/main/backup": mkdir /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/main/backup: no such file or directory

Any ideas how to fix it?

Sounds like the issue is with the container’s snapshot mountpoint rather than with the backups directory.

What storage backend are you using?

It seems like creating that missing directory may sort things out.

I’m using zfs.

I just created a “main” directory in /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots and the export went OK.
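For the record, that boils down to something like this, run as root on the host (if the path were only visible inside the snap's mount namespace, the nsenter form from earlier in the thread would be needed instead):

mkdir /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/main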

But then I created a new container and no such directory appeared there, yet the new container's export went smoothly.

So I have a bunch of old containers and they won’t export without creating a directory there. But newly created containers don’t need a directory in containers-snapshots.

What's wrong with my old containers? I can create the directories, no problem, but could it be a sign of something seriously broken?

I suspect one thing: locally created containers seem to export OK, but the problem containers are the ones copied from another host!
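(In the script below, cnt is the array of container names to back up and ns is the remote pointing at the active server.)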

echo Starting backup job at `date`
for c in "${cnt[@]}"; do
    echo Backing up container $c 
    echo ...stopping local copy -- just in case it is running, but it should NOT be running
    lxc stop $c
    echo ...deleting local copy
    lxc delete $c
    echo ...stopping remote container
    lxc stop ns:$c
    echo ...deleting old snapshot
    lxc delete ns:$c/backup
    sleep 1
    echo ...making snapshot
    lxc snapshot ns:$c backup
    echo ...starting remote container
    sleep 1
    lxc start ns:$c
    echo ...copying
    lxc copy ns:$c/backup $c
    echo ...deleting snapshot
    lxc delete ns:$c/backup
    echo ...done backing up $c at `date`
done

This host is a standby host, so live containers are copied over from the active server and later everything is exported for offline backup. So if I create a new container on the active server it will be copied too, and I'd expect the export to work as well.
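The export pass is then roughly the following, using the same cnt array as the copy script and the same target path as the failing command above:

for c in "${cnt[@]}"; do
    lxc export "$c" /tank/exportfs/monthly/"$c".backup.tgz
done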

Finally I can reproduce this problem:

  • Host A: primary host, Debian 9, LXD 4.2 via snap
  • Host B: standby host, Debian 10, LXD 4.2 via snap

If I create a container on host B, I can export it just fine. If I create a container on host A, I can export it locally, no problem.

But if I create a container on host A and copy that container from A to B via lxc copy, THEN I can't export it on host B:

Error: Create backup: Backup create: Failed to create mount directory "/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/test/backup": mkdir /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/test/backup: no such file or directory

If I create the required subdirectory on host B (in this case the container's name was test), then the export works.
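For reference, the reproducer boils down to something like this, assuming ns on host B is the remote pointing at host A (as in the copy script above):

hostA# lxc init images:alpine/3.11 test
hostB# lxc copy ns:test test
hostB# lxc export test /tmp/test.backup.tgz

The export fails with the mkdir error above until containers-snapshots/test is created on host B.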


Please can you log a bug with the reproducer steps over at https://github.com/lxc/lxd/issues

Thanks

Can't reproduce it with new containers anymore. Now if I create new containers on host A, they export OK on host B. Old containers still don't export without a corresponding subdirectory in /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/

The only difference is a recent reboot of host A; it had an uptime of 58 days and I rebooted it today.

HostB:

Non-exportable container:

hostB# lxc config show alp --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.9 amd64 (20190321_13:00)
  image.os: Alpine
  image.release: "3.9"
  image.serial: "20190321_13:00"
  volatile.apply_template: copy
  volatile.base_image: 0cbd911b5a203c7e475241b8b22cc5332d10fd30ae27916bae1558bcb118c9ce
  volatile.eth0.hwaddr: 00:16:3e:0b:21:d1
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Exportable new container:

hostB# lxc config show test --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200612_13:00)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200612_13:00"
  image.type: squashfs
  volatile.apply_template: copy
  volatile.base_image: 4d5d20957dffb6d92ac16c03a7d1a27ece4d8a09d29863e22adb5ae2c15a102c
  volatile.eth0.hwaddr: 00:16:3e:8b:4e:c5
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Diff between them:

4c4
<   image.description: Alpine 3.9 amd64 (20190321_13:00)
---
>   image.description: Alpine 3.11 amd64 (20200612_13:00)
6,7c6,8
<   image.release: "3.9"
<   image.serial: "20190321_13:00"
---
>   image.release: "3.11"
>   image.serial: "20200612_13:00"
>   image.type: squashfs
9,10c10,11
<   volatile.base_image: 0cbd911b5a203c7e475241b8b22cc5332d10fd30ae27916bae1558bcb118c9ce
<   volatile.eth0.hwaddr: 00:16:3e:0b:21:d1
---
>   volatile.base_image: 4d5d20957dffb6d92ac16c03a7d1a27ece4d8a09d29863e22adb5ae2c15a102c
>   volatile.eth0.hwaddr: 00:16:3e:8b:4e:c5
13c14
<   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
---
>   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'

All newly created containers export OK; old containers require creating the subdirectory every time.
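As a stopgap, pre-creating the directories for the old containers before the export run works, something like:

for c in "${cnt[@]}"; do
    mkdir -p /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/"$c"
done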

HostA (source of containers; both containers export okay there):

hostA# lxc config show alp --expanded

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.9 amd64 (20190321_13:00)
  image.os: Alpine
  image.release: "3.9"
  image.serial: "20190321_13:00"
  volatile.base_image: 0cbd911b5a203c7e475241b8b22cc5332d10fd30ae27916bae1558bcb118c9ce
  volatile.eth0.host_name: veth9c50b8f2
  volatile.eth0.hwaddr: 00:16:3e:6b:04:12
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: tank
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

hostA# lxc config show test --expanded

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200612_13:00)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200612_13:00"
  image.type: squashfs
  volatile.base_image: 4d5d20957dffb6d92ac16c03a7d1a27ece4d8a09d29863e22adb5ae2c15a102c
  volatile.eth0.hwaddr: 00:16:3e:64:d2:0a
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: tank
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Diff between them:

4c4
<   image.description: Alpine 3.9 amd64 (20190321_13:00)
---
>   image.description: Alpine 3.11 amd64 (20200612_13:00)
6,10c6,10
<   image.release: "3.9"
<   image.serial: "20190321_13:00"
<   volatile.base_image: 0cbd911b5a203c7e475241b8b22cc5332d10fd30ae27916bae1558bcb118c9ce
<   volatile.eth0.host_name: veth9c50b8f2
<   volatile.eth0.hwaddr: 00:16:3e:6b:04:12
---
>   image.release: "3.11"
>   image.serial: "20200612_13:00"
>   image.type: squashfs
>   volatile.base_image: 4d5d20957dffb6d92ac16c03a7d1a27ece4d8a09d29863e22adb5ae2c15a102c
>   volatile.eth0.hwaddr: 00:16:3e:64:d2:0a
14,15c14,15
<   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
<   volatile.last_state.power: RUNNING
---
>   volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
>   volatile.last_state.power: STOPPED

Something happens when an old container is copied from hostA to hostB via lxc copy.


Thanks, please can you log an issue and I will try to recreate it.

Just logged issue #7532