Spoiler: if I don't create a snapshot in this test, the problem does not occur. Since I am not deleting the ISO after the backup or snapshot, the snapshot stays at 3.8 MB (if I delete the ISO, the snapshot grows to around 20 GB).
Send a POST request to /instances (I have included my profile info at the bottom, though it is not really relevant):
{
    "profiles": [
        "custom-default",
        "custom-nat"
    ],
    "config": {
        "limits.memory": "1GB",
        "limits.cpu": "1"
    },
    "name": "ubuntu-test",
    "source": {
        "type": "image",
        "fingerprint": "cab177ff192c5fcf8342f1433a8a4f2baaf796085598d7650999c23d5846a33b"
    }
}
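For reference, a curl equivalent of this request might look like the following; the socket path assumes a snap install of LXD, and the `lxd` hostname is just a placeholder required by curl:

```shell
# Hypothetical curl equivalent of the instance-creation request.
# The socket path assumes a snap install of LXD.
sock=/var/snap/lxd/common/lxd/unix.socket
payload='{
  "profiles": ["custom-default", "custom-nat"],
  "config": {"limits.memory": "1GB", "limits.cpu": "1"},
  "name": "ubuntu-test",
  "source": {
    "type": "image",
    "fingerprint": "cab177ff192c5fcf8342f1433a8a4f2baaf796085598d7650999c23d5846a33b"
  }
}'
# Sanity-check the payload before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
if [ -S "$sock" ]; then
  curl -s --unix-socket "$sock" -X POST -d "$payload" lxd/1.0/instances
fi
```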
Then send a PATCH request with the following device config to set a size limit on the root disk:
"devices": {
    "root": {
        "path": "/",
        "pool": "default",
        "type": "disk",
        "size": "25GB"
    }
},
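The corresponding curl call would be a PATCH against the instance (again assuming the snap socket path):

```shell
# Hypothetical curl equivalent of the PATCH that sets the root disk size.
sock=/var/snap/lxd/common/lxd/unix.socket
payload='{
  "devices": {
    "root": {"path": "/", "pool": "default", "type": "disk", "size": "25GB"}
  }
}'
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
if [ -S "$sock" ]; then
  curl -s --unix-socket "$sock" -X PATCH -d "$payload" \
    lxd/1.0/instances/ubuntu-test
fi
```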
Inside the container I run head -c 20GB /dev/zero > data.iso to fill most of the disk.
Now let's check ZFS:
$ sudo zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
lxdpool                         26.6G  24.7G    96K  none
lxdpool/containers/ubuntu-test  19.1G  4.20G  19.1G  /var/snap/lxd/common/lxd/storage-pools/default/containers/ubuntu-test
Create a snapshot by sending a POST request to /instances/ubuntu-test/snapshots:
{
    "stateful": false,
    "name": "ubuntu-test-20210218-01"
}
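As a curl sketch (same assumptions about the snap socket as above):

```shell
# Hypothetical curl equivalent of the snapshot request.
sock=/var/snap/lxd/common/lxd/unix.socket
payload='{
  "stateful": false,
  "name": "ubuntu-test-20210218-01"
}'
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
if [ -S "$sock" ]; then
  curl -s --unix-socket "$sock" -X POST -d "$payload" \
    lxd/1.0/instances/ubuntu-test/snapshots
fi
```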
Give ZFS a minute or two after the snapshot is created: initially the snapshot shows around 86K used, and after about a minute it settles at around 3.8 MB.
To create a backup I send a POST request to /instances/ubuntu-test/backups:
{
    "name": "ubuntu-test-20210218",
    "expiry": null,
    "instance_only": false,
    "optimized_storage": false
}
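The backup request as a hedged curl sketch (socket path assumed from the snap install):

```shell
# Hypothetical curl equivalent of the backup request.
sock=/var/snap/lxd/common/lxd/unix.socket
payload='{
  "name": "ubuntu-test-20210218",
  "expiry": null,
  "instance_only": false,
  "optimized_storage": false
}'
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
if [ -S "$sock" ]; then
  curl -s --unix-socket "$sock" -X POST -d "$payload" \
    lxd/1.0/instances/ubuntu-test/backups
fi
```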
Restoring the backup
I rename the existing instance by sending a POST request to /instances/ubuntu-test:
{
    "name": "ubuntu-test-backup"
}
I then export the backup by sending a GET request to /instances/ubuntu-test-backup/backups/ubuntu-test-20210218/export.
I then POST the tarball back to /instances and wait for the response to complete. The synchronous wrapper reports success, but the embedded operation metadata shows that the restore failed:
{
    "type": "sync",
    "status": "Success",
    "status_code": 200,
    "operation": "",
    "error_code": 0,
    "error": "",
    "metadata": {
        "id": "0d8e46e5-8013-4919-bbc9-37a685caab6d",
        "class": "task",
        "description": "Restoring backup",
        "created_at": "2021-02-18T09:48:12.824031955Z",
        "updated_at": "2021-02-18T09:48:12.824031955Z",
        "status": "Failure",
        "status_code": 400,
        "resources": {
            "containers": [
                "/1.0/containers/ubuntu-test"
            ],
            "instances": [
                "/1.0/instances/ubuntu-test"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "Create instance from backup: Error starting unpack: Failed to run: tar -zxf - --xattrs-include=* -C /var/snap/lxd/common/lxd/storage-pools/default/containers/ubuntu-test --strip-components=2 backup/container: tar: rootfs/root/data.iso: Cannot write: No space left on device\ntar: rootfs/root/data.iso: Cannot utime: No space left on device\n\ntar: Exiting with failure status due to previous errors",
        "location": "none"
    }
}
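For completeness, the whole restore sequence (rename, export, re-import) might be sketched with curl like this; the socket path and the exact shape of the import call are assumptions on my part:

```shell
# Hypothetical curl sketch of the restore sequence above.
# Socket path assumes a snap install of LXD.
sock=/var/snap/lxd/common/lxd/unix.socket
rename='{"name": "ubuntu-test-backup"}'
echo "$rename" | python3 -m json.tool > /dev/null && echo "rename payload OK"
if [ -S "$sock" ]; then
  # 1. Rename the existing instance out of the way.
  curl -s --unix-socket "$sock" -X POST -d "$rename" \
    lxd/1.0/instances/ubuntu-test
  # 2. Download the backup tarball.
  curl -s --unix-socket "$sock" -o backup.tar.gz \
    lxd/1.0/instances/ubuntu-test-backup/backups/ubuntu-test-20210218/export
  # 3. Re-import the tarball as a new instance.
  curl -s --unix-socket "$sock" -X POST --data-binary @backup.tar.gz \
    lxd/1.0/instances
fi
```

Each of these POSTs actually returns a background operation, so in practice you would wait on /1.0/operations/&lt;id&gt;/wait between steps rather than firing them back to back.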
My profiles are:
$ lxc profile show custom-default
config: {}
description: ""
devices:
root:
path: /
pool: default
type: disk
name: custom-default
used_by:
- /1.0/instances/mysql
- /1.0/instances/redis
- /1.0/instances/postgres
- /1.0/instances/mariadb
- /1.0/instances/ubuntu-test
$ lxc profile show custom-nat
config: {}
description: Custom NAT Network Profile
devices:
eth0:
name: eth0
nictype: bridged
parent: custombr0
type: nic
name: custom-nat
used_by:
- /1.0/instances/mysql
- /1.0/instances/redis
- /1.0/instances/postgres
- /1.0/instances/mariadb
- /1.0/instances/ubuntu-test