Container export error

Hi all,

When I export my biggest container I get an error. This only happens with this container.

Error: Create backup: remove /var/snap/lxd/common/lxd/backups/dcfs01/backup0: no such file or directory

Any advice?

Is it taking too long, maybe?

I think it has been running for longer than 24 hours.

Yeah, an export that takes longer than 24 hours would be an issue. The CLI tool sets ExpiresAt to 24 hours; this is usually needed to avoid backups accumulating on the server side if the client disconnects.

Understanding why it’s taking so long and seeing if there’s a way to improve the export speed would be the preferred way out of this. An alternative would be to directly create, fetch and delete the backup using lxc query, allowing for expiry-less backups.

stgraber@castiana:~$ lxc query /1.0/instances/a1/backups -X POST -d '{"name": "my-export", "instance_only": true}' --wait
{
	"class": "task",
	"created_at": "2020-02-17T13:35:48.383014875-05:00",
	"description": "Backing up container",
	"err": "",
	"id": "923f5561-c197-4fe0-8b3a-0352a2a7031c",
	"location": "none",
	"may_cancel": false,
	"metadata": null,
	"resources": {
		"backups": [
			"/1.0/backups/my-export"
		],
		"containers": [
			"/1.0/containers/a1"
		],
		"instances": [
			"/1.0/instances/a1"
		]
	},
	"status": "Success",
	"status_code": 200,
	"updated_at": "2020-02-17T13:35:48.383014875-05:00"
}
stgraber@castiana:~$ lxc query /1.0/instances/a1/backups/my-export/export > out.tar.gz
stgraber@castiana:~$ file out.tar.gz
out.tar.gz: gzip compressed data, from Unix, original size modulo 2^32 10618880
stgraber@castiana:~$ lxc query /1.0/instances/a1/backups/my-export -X DELETE --wait
{
	"class": "task",
	"created_at": "2020-02-17T13:36:00.658367812-05:00",
	"description": "Removing container backup",
	"err": "",
	"id": "261f820a-3dce-454c-b8f3-d573afb09513",
	"location": "none",
	"may_cancel": false,
	"metadata": null,
	"resources": {
		"container": [
			"/1.0/container/a1"
		]
	},
	"status": "Success",
	"status_code": 200,
	"updated_at": "2020-02-17T13:36:00.658367812-05:00"
}

I will do it like this.

It's only a backup so I can reinstall my system.

What is the first path?

/1.0/instances/a1/backups

I don't have this path?

It would be /1.0/instances/dcfs01/backups in all the URLs in your case.

a1 was the name of my container in the example.

Perfect, thank you!

One possible optimization for speeding up the export is to set:

lxc config set backups.compression_algorithm pigz
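
You can confirm the setting was applied with lxc config get, e.g.:

lxc config get backups.compression_algorithm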

Could you explain this command?

lxc query /1.0/instances/a1/backups/my-export/export > out.tar.gz

You named the export my-export; why then do you refer to export inside the my-export folder?

GET /1.0/instances/NAME/backups/BACKUP-NAME/export is the endpoint to export the raw backup.

GET /1.0/instances/NAME/backups/BACKUP-NAME just gets you JSON metadata.
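
For example (a quick sketch, reusing the backup name from the transcript above):

lxc query /1.0/instances/a1/backups/my-export                      # JSON metadata about the backup
lxc query /1.0/instances/a1/backups/my-export/export > out.tar.gz  # the raw backup tarball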

Hi stgraber

When I run the second command:

lxc query /1.0/instances/dcfs01/backups/my-export/export > out.tar.gz

The out.tar.gz stays empty.

?

PS: in the dcfs01 folder I see 2 files:

- my-export

and

- my-export.compressed

What does lxc query /1.0/instances/dcfs01/backups/my-export/export show you?

fatal error: runtime: out of memory

runtime stack:
runtime.throw(0xc0d1ee, 0x16)
/snap/go/5364/src/runtime/panic.go:774 +0x72
runtime.sysMap(0xc084000000, 0x80000000, 0x13cf978)
/snap/go/5364/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x13b7360, 0x80000000, 0x0, 0x0)
/snap/go/5364/src/runtime/malloc.go:701 +0x1cd
runtime.(*mheap).grow(0x13b7360, 0x40000, 0xffffffff)
/snap/go/5364/src/runtime/mheap.go:1255 +0xa3
runtime.(*mheap).allocSpanLocked(0x13b7360, 0x40000, 0x13cf988, 0x7ffeec82ca98)
/snap/go/5364/src/runtime/mheap.go:1170 +0x266
runtime.(*mheap).alloc_m(0x13b7360, 0x40000, 0x101, 0x7)
/snap/go/5364/src/runtime/mheap.go:1022 +0xc2
runtime.(*mheap).alloc.func1()
/snap/go/5364/src/runtime/mheap.go:1093 +0x4c
runtime.(*mheap).alloc(0x13b7360, 0x40000, 0x7ffeec010101, 0xc000000180)
/snap/go/5364/src/runtime/mheap.go:1092 +0x8a
runtime.largeAlloc(0x7ffffe00, 0x7ffeec820101, 0x438917)
/snap/go/5364/src/runtime/malloc.go:1138 +0x97
runtime.mallocgc.func1()
/snap/go/5364/src/runtime/malloc.go:1033 +0x46
runtime.systemstack(0x45b504)
/snap/go/5364/src/runtime/asm_amd64.s:370 +0x66
runtime.mstart()
/snap/go/5364/src/runtime/proc.go:1146

goroutine 1 [running]:
runtime.systemstack_switch()
/snap/go/5364/src/runtime/asm_amd64.s:330 fp=0xc00020f788 sp=0xc00020f780 pc=0x45b600
runtime.mallocgc(0x7ffffe00, 0xaf02a0, 0x1, 0x0)
/snap/go/5364/src/runtime/malloc.go:1032 +0x895 fp=0xc00020f828 sp=0xc00020f788 pc=0x40e985
runtime.makeslice(0xaf02a0, 0x7ffffe00, 0x7ffffe00, 0xc000199280)
/snap/go/5364/src/runtime/slice.go:49 +0x6c fp=0xc00020f858 sp=0xc00020f828 pc=0x445e9c
bytes.makeSlice(0x7ffffe00, 0x0, 0x0, 0x0)
/snap/go/5364/src/bytes/buffer.go:229 +0x77 fp=0xc00020f8c0 sp=0xc00020f858 pc=0x4f2417
bytes.(*Buffer).grow(0xc00020f9e8, 0x200, 0x17e00)
/snap/go/5364/src/bytes/buffer.go:142 +0x15b fp=0xc00020f910 sp=0xc00020f8c0 pc=0x4f1d5b
bytes.(*Buffer).ReadFrom(0xc00020f9e8, 0xd18f20, 0xc0001992c0, 0x40b29b, 0xc000012000, 0xb3bda0)
/snap/go/5364/src/bytes/buffer.go:202 +0x4b fp=0xc00020f980 sp=0xc00020f910 pc=0x4f220b
io/ioutil.readAll(0xd18f20, 0xc0001992c0, 0x200, 0x0, 0x0, 0x0, 0x0, 0x0)
/snap/go/5364/src/io/ioutil/ioutil.go:36 +0x100 fp=0xc00020fa20 sp=0xc00020f980 pc=0x59e140
io/ioutil.ReadAll(…)
/snap/go/5364/src/io/ioutil/ioutil.go:45
main.(*cmdQuery).Run(0xc0001f4ba0, 0xc000236dc0, 0xc000246cf0, 0x1, 0x1, 0x0, 0x0)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxc/query.go:120 +0xb21 fp=0xc00020fbb0 sp=0xc00020fa20 pc=0xa50cb1
main.(*cmdQuery).Run-fm(0xc000236dc0, 0xc000246cf0, 0x1, 0x1, 0x0, 0x0)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxc/query.go:56 +0x52 fp=0xc00020fbf8 sp=0xc00020fbb0 pc=0xa78f62
github.com/spf13/cobra.(*Command).execute(0xc000236dc0, 0xc000246cb0, 0x1, 0x1, 0xc000236dc0, 0xc000246cb0)
/build/lxd/parts/lxd/go/src/github.com/spf13/cobra/command.go:840 +0x460 fp=0xc00020fcd0 sp=0xc00020fbf8 pc=0x5d3810
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000bb8c0, 0xc0000aa160, 0x2, 0x2)
/build/lxd/parts/lxd/go/src/github.com/spf13/cobra/command.go:945 +0x317 fp=0xc00020fda8 sp=0xc00020fcd0 pc=0x5d4307
github.com/spf13/cobra.(*Command).Execute(…)
/build/lxd/parts/lxd/go/src/github.com/spf13/cobra/command.go:885
main.main()
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxc/main.go:238 +0x1ca0 fp=0xc00020ff60 sp=0xc00020fda8 pc=0xa2dfe0
runtime.main()
/snap/go/5364/src/runtime/proc.go:203 +0x21e fp=0xc00020ffe0 sp=0xc00020ff60 pc=0x431bbe
runtime.goexit()
/snap/go/5364/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00020ffe8 sp=0xc00020ffe0 pc=0x45d6d1

goroutine 20 [syscall]:
os/signal.signal_recv(0x0)
/snap/go/5364/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/snap/go/5364/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/snap/go/5364/src/os/signal/signal_unix.go:29 +0x41

goroutine 6 [select]:
net/http.(*persistConn).readLoop(0xc000326000)
/snap/go/5364/src/net/http/transport.go:2032 +0x999
created by net/http.(*Transport).dialConn
/snap/go/5364/src/net/http/transport.go:1580 +0xb0d

goroutine 7 [select]:
net/http.(*persistConn).writeLoop(0xc000326000)
/snap/go/5364/src/net/http/transport.go:2210 +0x123
created by net/http.(*Transport).dialConn
/snap/go/5364/src/net/http/transport.go:1581 +0xb32

Oh right, you’re running out of memory :slight_smile:

curl would probably handle this better in this case, since it streams the response to disk rather than buffering the whole thing in memory the way lxc query does.

curl --unix-socket /var/snap/lxd/common/lxd/unix.socket lxd/1.0/instances/dcfs01/backups/my-export/export -o out.tar.gz
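
Once the download finishes, the server-side backup can be deleted over the same socket (same idea as the lxc query DELETE shown earlier; adjust the socket path if you're not using the snap):

curl --unix-socket /var/snap/lxd/common/lxd/unix.socket -X DELETE lxd/1.0/instances/dcfs01/backups/my-export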

Hi @stgraber,

These days I export all my containers to have a backup of each. I now run LXD on a btrfs subvolume instead of a loop device.

What would be the best way to back up all my containers without running into the 24-hour export limitation? I believe in the new version it's 8 hours, by the way.

I don't have a second machine for LXD…

I must also mention that I read the article about, for example, rsyncing /var/snap/lxd/common/lxd to an external device. However, my running containers have databases inside them.

And I am looking for a live backup (so that everything keeps running).

Let's say I create a snapshot of /var/snap/lxd/common/lxd and rsync everything to an external HDD.

If I restore, is it sufficient to reinstall the host with LXD and rsync everything back to /var/snap/lxd/common/lxd?