Lxc move seems to be stuck

lxd
lxc
networking

(Nativewolf) #1

I have two servers sitting right next to each other, networked via a switch, and I am not able to successfully move a container between them using lxc move.

I have added the two servers as remotes to each other.

I have these servers, with the listed containers:

server1 - Ubuntu 18.04

  • testmove (default Ubuntu container)

server2 - Ubuntu 18.04

  • testmoveb

I then used this command:

root@server1:~# lxc move server2:testmove local:

The command just hangs, and doesn’t give any output. So I started investigating, and this is what I have found:

  1. There is a socket connection between the client and server.

  2. There are packets flowing from server2 to server1 (checked via tcpdump). Here is some example output:

    13:23:35.560959 IP server2.8443 > server1.35888: Flags [.], ack 611, win 243, options
    [nop,nop,TS val 2149446780 ecr 575870376], length 0

Packets keep being sent.

  3. The /var/lib/lxd/storage-pools/default/containers/testmove directory stays at 672 MB this whole time.

  4. When I strace the running lxc move command, I see a futex() system call that sits for a moment, then some pselect() calls, then back to futex(). It seems to loop like this.

If I use rsync -e ssh, I can copy files just fine.

Any idea why lxc move is getting stuck?


#2

Should this be reported as a bug?

(Disclaimer… I work at the same company as the above user, so it’s possible this issue is somehow unique to us.)


(Stéphane Graber) #3

Try running lxc monitor on both hosts as this happens, and see if either captures an error coming from rsync. That's assuming this migration even uses rsync; the behavior depends on the storage driver in use on both hosts.
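lxc monitor is verbose, so a grep filter helps surface the failure. A minimal sketch of the filter, applied here to a captured sample line rather than a live daemon:

```shell
# lxc monitor prints every daemon event; grepping narrows it to the
# rsync failure. LXD spells the error level "eror", so match both.
# The sample line below stands in for live `lxc monitor` output.
sample='message: Rsync receive failed: /var/lib/lxd/containers/testmoveb/: exit status 12'
printf '%s\n' "$sample" | grep -iE 'eror|rsync'
```

On a live host, the equivalent is `lxc monitor | grep -iE 'eror|rsync'`.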


(Nativewolf) #4

Here is the output I get from lxc monitor on server1, after running this command:

lxc copy server2:testmoveb local:

metadata:
context:
ip: ‘@’
method: GET
url: /1.0
level: dbug
message: handling
timestamp: “2019-02-06T08:08:20.608781997-08:00”
type: logging

metadata:
context:
ip: ‘@’
method: GET
url: /1.0/events
level: dbug
message: handling
timestamp: “2019-02-06T08:08:20.813564411-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘New event listener: 9ac1276a-91cd-416a-ae12-d17dd1e784a7’
timestamp: “2019-02-06T08:08:20.813642739-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Responding to container create
timestamp: “2019-02-06T08:08:20.81438811-08:00”
type: logging

metadata:
context:
ip: ‘@’
method: POST
url: /1.0/containers
level: dbug
message: handling
timestamp: “2019-02-06T08:08:20.81436362-08:00”
type: logging

metadata:
context: {}
level: dbug
message: No valid storage pool in the container’s local root disk device and profiles
found
timestamp: “2019-02-06T08:08:20.816575864-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Database error: &errors.errorString{s:“sql: no rows in result set”}’
timestamp: “2019-02-06T08:08:20.816868704-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Database error: &errors.errorString{s:“sql: no rows in result set”}’
timestamp: “2019-02-06T08:08:20.817361751-08:00”
type: logging

metadata:
context:
ephemeral: “false”
name: testmoveb
level: info
message: Creating container
timestamp: “2019-02-06T08:08:20.857552147-08:00”
type: logging

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:20.874942161-08:00”
type: lifecycle

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:20.885870295-08:00”
type: lifecycle

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:20.888423808-08:00”
type: lifecycle

metadata:
context: {}
level: dbug
message: Creating empty DIR storage volume for container “testmoveb” on storage
pool “default”
timestamp: “2019-02-06T08:08:20.889963009-08:00”
type: logging

metadata:
action: container-created
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:20.889954587-08:00”
type: lifecycle

metadata:
context:
ephemeral: “false”
name: testmoveb
level: info
message: Created container
timestamp: “2019-02-06T08:08:20.889941451-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Created empty DIR storage volume for container “testmoveb” on storage pool
“default”
timestamp: “2019-02-06T08:08:20.900941216-08:00”
type: logging

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:20.900924911-08:00”
type: lifecycle

metadata:
context:
name: testmoveb
level: warn
message: Unable to update backup.yaml at this time
timestamp: “2019-02-06T08:08:20.900994597-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Started task operation: 424ec520-cecf-4946-acb4-7bba1e5b05f9’
timestamp: “2019-02-06T08:08:20.901756132-08:00”
type: logging

metadata:
class: task
created_at: “2019-02-06T08:08:20.90111272-08:00”
description: Creating container
err: “”
id: 424ec520-cecf-4946-acb4-7bba1e5b05f9
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/testmoveb
status: Running
status_code: 103
updated_at: “2019-02-06T08:08:20.90111272-08:00”
timestamp: “2019-02-06T08:08:20.901763719-08:00”
type: operation

metadata:
class: task
created_at: “2019-02-06T08:08:20.90111272-08:00”
description: Creating container
err: “”
id: 424ec520-cecf-4946-acb4-7bba1e5b05f9
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/testmoveb
status: Pending
status_code: 105
updated_at: “2019-02-06T08:08:20.90111272-08:00”
timestamp: “2019-02-06T08:08:20.90173642-08:00”
type: operation

metadata:
context: {}
level: dbug
message: ‘New task operation: 424ec520-cecf-4946-acb4-7bba1e5b05f9’
timestamp: “2019-02-06T08:08:20.901720099-08:00”
type: logging

metadata:
context:
ip: ‘@’
method: GET
url: /1.0/operations/424ec520-cecf-4946-acb4-7bba1e5b05f9
level: dbug
message: handling
timestamp: “2019-02-06T08:08:20.902379723-08:00”
type: logging

metadata:
context: {}
level: eror
message: 'Rsync receive failed: /var/lib/lxd/containers/testmoveb/: exit status
12: ’
timestamp: “2019-02-06T08:08:21.164032977-08:00”
type: logging

metadata:
context: {}
level: dbug
message: sending write barrier
timestamp: “2019-02-06T08:08:21.163908821-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Got message barrier, resetting stream
timestamp: “2019-02-06T08:08:21.164502831-08:00”
type: logging

metadata:
context:
err: exit status 12
level: eror
message: Error during migration sink
timestamp: “2019-02-06T08:08:21.164739883-08:00”
type: logging

metadata:
context:
created: 2019-02-06 08:08:20 -0800 PST
ephemeral: “false”
name: testmoveb
used: 1969-12-31 16:00:00 -0800 PST
level: info
message: Deleting container
timestamp: “2019-02-06T08:08:21.164885192-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Deleting DIR storage volume for container “testmoveb” on storage pool “default”
timestamp: “2019-02-06T08:08:21.169015842-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Deleted DIR storage volume for container “testmoveb” on storage pool “default”
timestamp: “2019-02-06T08:08:21.169290288-08:00”
type: logging

metadata:
action: container-deleted
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.205066269-08:00”
type: lifecycle

metadata:
context: {}
level: dbug
message: ‘Failure for task operation: 424ec520-cecf-4946-acb4-7bba1e5b05f9: Error
transferring container data: exit status 12’
timestamp: “2019-02-06T08:08:21.205080657-08:00”
type: logging

metadata:
class: task
created_at: “2019-02-06T08:08:20.90111272-08:00”
description: Creating container
err: ‘Error transferring container data: exit status 12’
id: 424ec520-cecf-4946-acb4-7bba1e5b05f9
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/testmoveb
status: Failure
status_code: 400
updated_at: “2019-02-06T08:08:20.90111272-08:00”
timestamp: “2019-02-06T08:08:21.205090232-08:00”
type: operation

metadata:
context:
created: 2019-02-06 08:08:20 -0800 PST
ephemeral: “false”
name: testmoveb
used: 1969-12-31 16:00:00 -0800 PST
level: info
message: Deleted container
timestamp: “2019-02-06T08:08:21.205052879-08:00”
type: logging

metadata:
context:
ip: ‘@’
method: POST
url: /1.0/containers
level: dbug
message: handling
timestamp: “2019-02-06T08:08:21.205749438-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Responding to container create
timestamp: “2019-02-06T08:08:21.205768776-08:00”
type: logging

metadata:
context: {}
level: dbug
message: No valid storage pool in the container’s local root disk device and profiles
found
timestamp: “2019-02-06T08:08:21.206709359-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Database error: &errors.errorString{s:“sql: no rows in result set”}’
timestamp: “2019-02-06T08:08:21.206860092-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Database error: &errors.errorString{s:“sql: no rows in result set”}’
timestamp: “2019-02-06T08:08:21.207088902-08:00”
type: logging

metadata:
context:
ephemeral: “false”
name: testmoveb
level: info
message: Creating container
timestamp: “2019-02-06T08:08:21.217731918-08:00”
type: logging

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.226541232-08:00”
type: lifecycle

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.249593351-08:00”
type: lifecycle

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.252207526-08:00”
type: lifecycle

metadata:
context: {}
level: dbug
message: Creating empty DIR storage volume for container “testmoveb” on storage
pool “default”
timestamp: “2019-02-06T08:08:21.254517182-08:00”
type: logging

metadata:
context:
ephemeral: “false”
name: testmoveb
level: info
message: Created container
timestamp: “2019-02-06T08:08:21.254478493-08:00”
type: logging

metadata:
action: container-created
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.254503819-08:00”
type: lifecycle

metadata:
action: container-updated
source: /1.0/containers/testmoveb
timestamp: “2019-02-06T08:08:21.267361871-08:00”
type: lifecycle

metadata:
context: {}
level: dbug
message: Created empty DIR storage volume for container “testmoveb” on storage pool
“default”
timestamp: “2019-02-06T08:08:21.267379561-08:00”
type: logging

metadata:
context:
name: testmoveb
level: warn
message: Unable to update backup.yaml at this time
timestamp: “2019-02-06T08:08:21.267438627-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘New task operation: 52ee72a3-2cab-4d4c-8bc5-226c84cc9354’
timestamp: “2019-02-06T08:08:21.268177851-08:00”
type: logging

metadata:
class: task
created_at: “2019-02-06T08:08:21.267574763-08:00”
description: Creating container
err: “”
id: 52ee72a3-2cab-4d4c-8bc5-226c84cc9354
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/testmoveb
status: Pending
status_code: 105
updated_at: “2019-02-06T08:08:21.267574763-08:00”
timestamp: “2019-02-06T08:08:21.268192243-08:00”
type: operation

metadata:
class: task
created_at: “2019-02-06T08:08:21.267574763-08:00”
description: Creating container
err: “”
id: 52ee72a3-2cab-4d4c-8bc5-226c84cc9354
may_cancel: false
metadata: null
resources:
containers:
- /1.0/containers/testmoveb
status: Running
status_code: 103
updated_at: “2019-02-06T08:08:21.267574763-08:00”
timestamp: “2019-02-06T08:08:21.268217024-08:00”
type: operation

metadata:
context: {}
level: dbug
message: ‘Started task operation: 52ee72a3-2cab-4d4c-8bc5-226c84cc9354’
timestamp: “2019-02-06T08:08:21.268209601-08:00”
type: logging

metadata:
context:
ip: ‘@’
method: GET
url: /1.0/operations/52ee72a3-2cab-4d4c-8bc5-226c84cc9354
level: dbug
message: handling
timestamp: “2019-02-06T08:08:21.268876113-08:00”
type: logging

Despite some of those errors, the directory /var/lib/lxd/storage-pools/default/containers/testmoveb is created on server1, but it is empty.

Both hosts are using the same storage driver:

root@server1:~# lxc storage show default
config:
  source: /var/lib/lxd/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/containers/testmove
- /1.0/profiles/default
status: Created
locations:
- none

root@server2:~# lxc storage show default
config:
  source: /var/lib/lxd/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/containers/testmoveb
- /1.0/profiles/default
status: Created
locations:
- none

Here is the lxc monitor output on server2 when I try to initiate the same copy as mentioned above:

metadata:
context:
k: “3”
level: dbug
message: Found cert
timestamp: “2019-02-06T08:19:30.739379292-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42084
method: GET
url: /1.0
level: dbug
message: handling
timestamp: “2019-02-06T08:19:30.73940823-08:00”
type: logging

metadata:
context:
k: “3”
level: dbug
message: Found cert
timestamp: “2019-02-06T08:19:30.739809239-08:00”
type: logging

metadata:
context:
k: “3”
level: dbug
message: Found cert
timestamp: “2019-02-06T08:19:30.784889406-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42086
method: GET
url: /1.0/containers/testmoveb
level: dbug
message: handling
timestamp: “2019-02-06T08:19:30.784929995-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘New event listener: 0085f851-e090-401f-bd97-fc5a9e92f107’
timestamp: “2019-02-06T08:19:30.823900761-08:00”
type: logging

metadata:
context:
k: “3”
level: dbug
message: Found cert
timestamp: “2019-02-06T08:19:30.82380636-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42088
method: GET
url: /1.0/events
level: dbug
message: handling
timestamp: “2019-02-06T08:19:30.823844754-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42090
method: POST
url: /1.0/containers/testmoveb
level: dbug
message: handling
timestamp: “2019-02-06T08:19:30.860392564-08:00”
type: logging

metadata:
context:
k: “3”
level: dbug
message: Found cert
timestamp: “2019-02-06T08:19:30.860363554-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘New websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:30.895704467-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Started websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:30.895782489-08:00”
type: logging

metadata:
class: websocket
created_at: “2019-02-06T08:19:30.862045548-08:00”
description: Migrating container
err: “”
id: 800cf968-a003-4ece-be53-093e1d0aa82f
may_cancel: false
metadata:
control: 7c51b49a149f6ee51221307e1a6e607f24f79b4b2df43bbfaaee26c95204071a
fs: ee87ce239f58811920dcaef3b2db32451b0e44423177f4dcc51d7c7624809fca
resources:
containers:
- /1.0/containers/testmoveb
status: Pending
status_code: 105
updated_at: “2019-02-06T08:19:30.862045548-08:00”
timestamp: “2019-02-06T08:19:30.895745375-08:00”
type: operation

metadata:
class: websocket
created_at: “2019-02-06T08:19:30.862045548-08:00”
description: Migrating container
err: “”
id: 800cf968-a003-4ece-be53-093e1d0aa82f
may_cancel: false
metadata:
control: 7c51b49a149f6ee51221307e1a6e607f24f79b4b2df43bbfaaee26c95204071a
fs: ee87ce239f58811920dcaef3b2db32451b0e44423177f4dcc51d7c7624809fca
resources:
containers:
- /1.0/containers/testmoveb
status: Running
status_code: 103
updated_at: “2019-02-06T08:19:30.862045548-08:00”
timestamp: “2019-02-06T08:19:30.895804919-08:00”
type: operation

metadata:
context: {}
level: dbug
message: ‘Connected websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.021070643-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42092
url: /1.0/operations/800cf968-a003-4ece-be53-093e1d0aa82f/websocket?secret=7c51b49a149f6ee51221307e1a6e607f24f79b4b2df43bbfaaee26c95204071a
level: dbug
message: allowing untrusted GET
timestamp: “2019-02-06T08:19:31.021027105-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Handled websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.021148549-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Connected websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.044423544-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42094
url: /1.0/operations/800cf968-a003-4ece-be53-093e1d0aa82f/websocket?secret=ee87ce239f58811920dcaef3b2db32451b0e44423177f4dcc51d7c7624809fca
level: dbug
message: allowing untrusted GET
timestamp: “2019-02-06T08:19:31.044373432-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Handled websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.044494364-08:00”
type: logging

metadata:
context: {}
level: dbug
message: The other side does not support pre-copy
timestamp: “2019-02-06T08:19:31.04615202-08:00”
type: logging

metadata:
context: {}
level: dbug
message: sending write barrier
timestamp: “2019-02-06T08:19:31.256982148-08:00”
type: logging

metadata:
context: {}
level: dbug
message: Got message barrier, resetting stream
timestamp: “2019-02-06T08:19:31.256866233-08:00”
type: logging

metadata:
context: {}
level: eror
message: |
Rsync send failed: /var/lib/lxd/containers/testmoveb/: exit status 12: rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.2]
timestamp: “2019-02-06T08:19:31.258288421-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Connected websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.348154927-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42096
url: /1.0/operations/800cf968-a003-4ece-be53-093e1d0aa82f/websocket?secret=7c51b49a149f6ee51221307e1a6e607f24f79b4b2df43bbfaaee26c95204071a
level: dbug
message: allowing untrusted GET
timestamp: “2019-02-06T08:19:31.348115637-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Handled websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.348249288-08:00”
type: logging

metadata:
context: {}
level: dbug
message: ‘Connected websocket operation: 800cf968-a003-4ece-be53-093e1d0aa82f’
timestamp: “2019-02-06T08:19:31.370432401-08:00”
type: logging

metadata:
context:
ip: 209.41.68.83:42098
url: /1.0/operations/800cf968-a003-4ece-be53-093e1d0aa82f/websocket?secret=ee87ce239f58811920dcaef3b2db32451b0e44423177f4dcc51d7c7624809fca
level: dbug
message: allowing untrusted GET
timestamp: “2019-02-06T08:19:31.370392015-08:00”
type: logging


(Nativewolf) #5

Possibly this bug is the problem:

https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/988144

Since the rsync commands are masked behind the lxc tools, I don’t suppose there is a way to try the -W flag?

Possible problems that I’ve already checked:

  • There is over 400 GB available on both servers, and these test containers are the default Ubuntu 18.04 containers (682 MB).
  • Both servers have the rsync utility installed (both are on version 3.1.2)
  • SSH is running on both servers, as well as the lxd daemon on port 8443

(Stéphane Graber) #6

Are you using the snap or the deb?

If using the deb, you could temporarily set up a /usr/local/bin/rsync wrapper which effectively contains:

#!/bin/sh
exec /usr/bin/rsync "$@" -W

making all rsync calls get the -W argument.

But this won’t work with the snap as everything is read-only in the snap environment.
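One way to write such a wrapper that avoids two common causes of "exec format error" for shell wrappers (a malformed shebang and non-ASCII quotes pasted from a web page) is to generate it with printf. A sketch, using /tmp so it can be tried without touching /usr/local/bin:

```shell
# Generate the wrapper with printf so every byte is plain ASCII;
# pasting a script from a web page can silently substitute Unicode
# quotes. /tmp/rsync stands in for /usr/local/bin/rsync here.
wrapper=/tmp/rsync
printf '%s\n' '#!/bin/sh' 'exec /usr/bin/rsync "$@" -W' > "$wrapper"
chmod 0755 "$wrapper"

# The kernel needs the literal bytes "#!" at offset 0; anything else
# makes execve() fail with ENOEXEC, reported as "exec format error".
head -c 2 "$wrapper"   # prints: #!
# Confirm there are no non-ASCII bytes (e.g. curly quotes):
LC_ALL=C grep '[^ -~]' "$wrapper" || echo "ASCII only"
```

On the real path, the same printf line with /usr/local/bin/rsync as the target keeps every rsync invocation carrying -W; remove the wrapper again once the test is done.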


(Nativewolf) #7

I’m using the deb package from apt repository.

Good thinking. I set up /usr/local/bin/rsync, and I can use it directly to copy files from server2 to server1 (i.e., I can call that script from bash to copy things over). But using lxc move, I get this error:

Feb 06 09:27:38 server1 lxd[32513]: lvl=warn msg=“Unable to update backup.yaml at this time” name=testmoveb t=2019-02-06T09:27:38-0800
Feb 06 09:27:38 server1 lxd[32513]: err=“fork/exec /usr/local/bin/rsync: exec format error” lvl=eror msg=“Error during migration sink” t=2019-02-06T09:27:38-0800
Feb 06 09:27:39 server1 lxd[32513]: lvl=warn msg=“Unable to update backup.yaml at this time” name=testmoveb t=2019-02-06T09:27:39-0800

I've tried changing the script to use /bin/bash instead of /bin/sh, just in case, and that didn't help.

I changed /usr/local/bin/rsync back to a symlink to /usr/bin/rsync and tried again from server2, this time moving the testmove container from server1 to server2, and I saw this error:

Feb 06 09:39:52 server2 lxd[9768]: t=2019-02-06T09:39:52-0800 lvl=warn msg=“Unable to update backup.yaml at this time” name=testmove
Feb 06 09:39:52 server2 lxd[9768]: t=2019-02-06T09:39:52-0800 lvl=eror msg="Rsync receive failed: /var/lib/lxd/containers/testmove/: exit status 2: "
Feb 06 09:39:52 server2 lxd[9768]: t=2019-02-06T09:39:52-0800 lvl=eror msg=“Error during migration sink” err=“exit status 2”

Looks like rsync’s exit status 2 may be “Protocol incompatibility” (https://lxadm.com/Rsync_exit_codes). Both servers are using the same version of rsync and ssh:

rsync version 3.1.2 protocol version 31
OpenSSH_7.6p1 Ubuntu-4ubuntu0.1, OpenSSL 1.0.2n 7 Dec 2017

So moving testmoveb to server1 gives rsync "exit status 12", and moving testmove to server2 gives "exit status 2".
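For reference, the two codes seen here map to these meanings in the EXIT VALUES section of rsync(1). A small illustrative helper (not part of LXD or rsync):

```shell
# Map the rsync exit codes seen in this thread to their meanings,
# per the EXIT VALUES section of rsync(1).
rsync_exit_meaning() {
  case "$1" in
    0)  echo "success" ;;
    2)  echo "protocol incompatibility" ;;
    12) echo "error in rsync protocol data stream" ;;
    *)  echo "other (see rsync(1) EXIT VALUES)" ;;
  esac
}
rsync_exit_meaning 2    # protocol incompatibility
rsync_exit_meaning 12   # error in rsync protocol data stream
```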


(Stéphane Graber) #8

And you've done the /usr/local/bin/rsync trick on both servers?

rsync is pretty picky about options lining up perfectly on both sides.


(Nativewolf) #9

Yep, I had it set up on both servers, first using the /bin/sh shebang, then /bin/bash. Same result both times: err="fork/exec /usr/local/bin/rsync: exec format error"


(Stéphane Graber) #10

Oh, "exec format error" is odd; that's not an rsync error.

Is the file executable and what’s in it right now?


(Nativewolf) #11

On both servers:

cat /usr/local/bin/rsync

#!/bin/sh
exec /usr/bin/rsync “$@” -W

stat /usr/local/bin/rsync

File: /usr/local/bin/rsync
Size: 38 Blocks: 8 IO Block: 4096 regular file
Device: 902h/2306d Inode: 44309358 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)

I do think it’s some sort of rsync error:

root@server2:~# lxc monitor|grep --color -i rsync

Rsync send failed: /var/lib/lxd/containers/testmoveb/: exit status 12: rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.2]

I am no longer getting the "exec format error", so I suppose I did something wrong before. But I am still getting the exit status 12 and exit status 2 errors.


(Nativewolf) #12

Also, since the lxc monitor gave this error:

message: No valid storage pool in the container’s local root disk device and profiles
found

I also tried this specifying the default profile:

root@server2:~# lxc copy server1:testmove local: -p default
Error: Failed container creation: Duplicate profile found in request

I'm not sure if this is saying that I actually have duplicate default profiles, or if this just isn't the right flag for specifying the default profile. I don't see any evidence of a duplicate default profile, though. Here is the output of lxc profile list:

+---------+---------+
|  NAME   | USED BY |
+---------+---------+
| default | 4       |
+---------+---------+

Maybe there is another place to check?


#13

These are Unicode quotes, not ASCII quotes.


(Nativewolf) #14

The quotes in the actual script are correct; I just redid them to be sure. For some reason they were rendered differently on this site (ah, I used blockquotes instead of preformatted text). Maybe I should have posted it this way:

#!/bin/sh
exec /usr/bin/rsync "$@" -W
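To double-check that no smart quotes survived a paste, you can scan the script for bytes outside the printable ASCII range. A sketch against a deliberately bad throwaway file:

```shell
# Write a known-bad line containing Unicode double quotes (U+201C and
# U+201D, UTF-8 bytes E2 80 9C / E2 80 9D) and show that the scan
# catches them; on a clean script the grep prints nothing.
bad=/tmp/quote-check
printf 'exec /usr/bin/rsync \342\200\234$@\342\200\235 -W\n' > "$bad"
LC_ALL=C grep -n '[^ -~]' "$bad" && echo "found non-ASCII bytes"
```

The same `LC_ALL=C grep -n '[^ -~]' /usr/local/bin/rsync` works on the real wrapper.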