zotan
(Simon)
June 16, 2022, 10:10am
1
I’ve started upgrading our servers to Jammy and can no longer reliably copy containers between the servers
lxc copy jammy-server:container container
Fails with:
Error: Failed instance creation: Error transferring instance data: Got error reading migration source: websocket: close 1000 (normal)
Is this a known issue? It fails for both push and pull in the same way.
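For clarity, it fails the same way whichever migration mode I use (a rough sketch with my remote and container names; I believe the --mode flag defaults to pull):
```
lxc copy jammy-server:container container --mode pull   # pull mode (default)
lxc copy jammy-server:container container --mode push   # push mode
```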
zotan
(Simon)
June 16, 2022, 10:33am
2
Smaller (~1GB) containers seem to work. The one that fails is ~40GB, and the transfer stops after about 4GB.
tomp
(Thomas Parrott)
June 16, 2022, 12:15pm
3
What versions of LXD are you using?
zotan
(Simon)
June 16, 2022, 2:34pm
4
The focal machines are using 4.0.9 and the jammy machines are on 5.0.0
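(For reference, these come from the usual version checks on each host, e.g.:)
```
lxc version     # prints client and server versions
snap list lxd   # shows the installed snap revision and channel
```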
tomp
(Thomas Parrott)
June 16, 2022, 2:35pm
5
What storage pool types are the source and target using?
zotan
(Simon)
June 16, 2022, 2:37pm
6
Source focal machine is zfs, destination jammy machine is lvm.
I can change the destination.
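(The pool drivers can be confirmed on each host with, for example:)
```
lxc storage list   # the DRIVER column shows zfs / lvm per pool
```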
tomp
(Thomas Parrott)
June 16, 2022, 2:39pm
7
As a test, can you refresh the target to LXD 5.2 edge (using the latest/edge snap channel) and then try copying from LXD 4.0.9 to the LXD 5.2 edge machine?
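Something along these lines on the target should do it (assuming the snap package):
```
snap refresh lxd --channel=latest/edge
```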
There have been quite a few migration changes since LXD 5.0.0, so it would be good to know if it's already been fixed and will then be in LXD 5.0.1.
Be aware you won’t be able to downgrade back to LXD 5.0 LTS on that machine afterwards, so don’t move the container there permanently.
zotan
(Simon)
June 16, 2022, 4:24pm
8
I’ve updated the Jammy machine to git-cc23adc (latest/edge) from 5.0.0 and copies now just hang at the start. It creates the db entry for the new container, but the transfer doesn’t appear to start. There is no feedback in the console after entering the command.
zotan
(Simon)
June 16, 2022, 6:10pm
9
It looks like lxd 5.2-79c3c3b has now been made available on the stable channel. I’ve purged lxd on my test machine and installed that. I still can’t copy containers; the symptoms are the same as with latest/edge.
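(Roughly what I ran on the test machine; the exact channel name here is my assumption:)
```
snap remove --purge lxd
snap install lxd --channel=latest/stable
```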
zotan
(Simon)
June 16, 2022, 6:12pm
10
Actually scratch that, there is network traffic. Just no user feedback.
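(One way I can see that something is going on, besides the traffic, is the background operation on the target; this assumes the lxc operation subcommand is available in 5.2:)
```
lxc operation list   # the instance copy shows up as a running operation
```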
tomp
(Thomas Parrott)
June 17, 2022, 8:13am
11
Did it finish in the end?
tomp
(Thomas Parrott)
June 17, 2022, 9:15am
12
If you’re still experiencing an issue with LXD 5.2 then I suspect it will be related to:
lxc:master ← tomponline:tp-migration-index-header-negotiation (opened 09:01AM - 17 Jun 22 UTC):
This was missed in https://github.com/lxc/lxd/commit/3f4fbf60ef249bad5e88d70f9d441f9a5d206dfb to actually use the negotiated version on the target side.
This allows migration between LXD 4.0.x and LXD current.
Otherwise get an error like:
```
lxc copy c1 v2: --refresh
Error: Failed instance creation: Error transferring instance data: Failed creating instance on target: Failed decoding migration index header: invalid character '\x00' looking for beginning of value
```
Signed-off-by: Thomas Parrott <thomas.parrott@canonical.com>
This will be in LXD 5.3 and LXD 5.0.1.
or this issue (opened 09:12AM - 17 Jun 22 UTC):
Trying to migrate an instance on LXD 4.0 LTS to a LXD current server fails, even after https://github.com/lxc/lxd/pull/10567 has been fixed.
Reproducer:
LXD 4.0 Source v1:
```
lxc launch images:ubuntu/jammy v1
lxc shell v1
snap install lxd --channel=4.0/stable
lxd init # Choose ZFS storage pool
lxc config set core.https_address=[::]:8443 core.trust_password=pw
lxc init images:alpine/3.16 c1
lxc snapshot c1
```
LXD 5.x Target v2:
```
lxc launch images:ubuntu/jammy v2
lxc shell v2
snap install lxd --channel=latest/edge
# If edge doesn't include https://github.com/lxc/lxd/pull/10567 make sure to side load it.
lxd init # Choose ZFS storage pool
lxc config set core.https_address=[::]:8443 core.trust_password=pw
```
On the source v1:
```
lxc copy c1 v2: --refresh
lxc copy c1 v2: --refresh # Works
lxc copy c1 v2: --refresh # Fails
Error: Failed instance creation: Error transferring instance data: Failed creating instance on target: Problem with zfs receive: ([exit status 1 write |1: broken pipe]) cannot receive new filesystem stream: destination has snapshots (eg. default/containers/c1@snapshot-snap0)
must destroy them to overwrite it
# Oops it has left behind a zfs send on the source too. Preventing deletion of the instance.
ps aux | grep zfs
root 4994 0.0 0.4 8704 4500 ? S 09:08 0:00 zfs send -c -L default/containers/c1@migration-9af76474-da35-483b-a063-
kill 4994
```
Sending to a non-optimized storage pool works:
```
lxc shell v2
lxc delete c1
lxc storage create dir dir
exit
lxc shell v1
lxc copy c1 v2: --refresh -s dir # Works
lxc copy c1 v2: --refresh -s dir # Works
```
In the meantime you can use lxc export <instance> <file>, move the file to the new host, and then do lxc import <file>.
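A sketch of that workaround (instance and file names are illustrative):
```
# On the source host:
lxc export c1 c1-backup.tar.gz
# Copy the file to the new host (e.g. with scp), then on the target host:
lxc import c1-backup.tar.gz
```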
zotan
(Simon)
June 20, 2022, 3:08pm
13
Yes, it did finish in the end, which thankfully saved me from resorting to the backups. It was just weird that the progress feedback was gone. Thanks for your help.