lxc move error: websocket: bad handshake

I understand, but it is hard for me to set up this test configuration right now.
It reminds me a bit of this: Lxc copy to internal IP instead of public IP

OK, hopefully @kriszos can try it.

It's a similar error to that earlier issue, but with a different cause.
The reason you experience these issues is that LXD tries to discover a working set of communication channels by trying all IPs from the source on the destination.

Even before this change (which introduced the timeout), you would likely have experienced a 10 to 20s delay before the transfer started, due to having IPs bound that are not reachable and that drop (rather than reject) connections.
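
If you want to see which addresses are in play, here is a rough sketch (nc and the placeholder address are just examples, not something LXD itself requires):

# On the source, list every bound address; these are the candidates the destination may try:
ip -br addr

# From the destination, probe a candidate on the LXD port (8443 by default).
# A reject ("connection refused") fails immediately; a drop hangs until the timeout.
nc -zv -w 3 <candidate-ip> 8443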


So, just to cover all possibilities, I now have the following rules on both the source and destination nodes:

ip6tables -A INPUT -j REJECT
ip6tables -A FORWARD -j REJECT
ip6tables -A OUTPUT -j REJECT

I still get the bad handshake error.


Can you check whether you get similar errors in the debug log to the ones @TomvB gets? It might tell you which IP it's failing on.

Source node "nazwa1" has IP 172.31.255.6/32.
Destination node "atl1" has IP 172.31.255.10/32.
They are loopback/dummy interfaces, and LXD should use only them to communicate, as specified in my "lxc config show" posted above; it seems that LXD is indeed using them.
The only IPv6 addresses I found are in the YAML that, I believe, contains the profile of the moved container.

The odd lines I did find contain addresses from 172.20.20.92/30, which is configured on my GRE tunnel connecting the source and destination:

t=2021-12-09T20:55:31+0100 lvl=dbug msg="Allowing untrusted GET" ip=172.20.20.94:53604 url="/1.0/operations/3436b227-9ad5-4edf-ab9d-1544250aab61/websocket?secret=e4db324e6313e9a6873084dce2950ec51e675ca79ccf0fb6443e0a93259586fd"
t=2021-12-09T20:55:07+0100 lvl=dbug msg="Handling API request" ip=172.20.20.93:55096 method=GET protocol=cluster url="/1.0/events?project=default&target=atl1" username=
t=2021-12-09T20:55:07+0100 lvl=dbug msg="Handling API request" ip=172.20.20.93:55098 method=POST protocol=cluster url="/1.0/instances?project=default&target=atl1" username=
t=2021-12-09T20:55:35+0100 lvl=dbug msg="Handling API request" ip=172.20.20.93:55106 method=GET protocol=cluster url="/1.0/operations/ed2cf7bb-d67d-4ad3-83ac-6b453c0a9756?project=default&target=atl1" username=

Though I am not very familiar with the syntax of these logs, so I may be missing something.

Full log: https://pastebin.com/ZaqgZYvG

OK, so it sounds like a similar issue, just with extra IPv4 addresses rather than the IPv6 addresses @TomvB has.

Making inbound connections to port 8443 on those other IPs be rejected rather than dropped (assuming LXD on the destination is hitting the source server's firewall) should fix it temporarily, until I can figure out how to make LXD smarter in these situations.
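
For example, a minimal sketch on the source (assuming 172.31.255.6 is the only address LXD should be reachable on there; adjust to your addresses):

# Send an immediate TCP reset for anything hitting port 8443 on any other local address:
iptables -A INPUT -p tcp --dport 8443 ! -d 172.31.255.6 -j REJECT --reject-with tcp-reset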

I don’t think it is that simple. I created the following rules
on nazwa1:

iptables --append INPUT --protocol tcp --dst 172.31.255.6 --dport 8443 --jump ACCEPT
iptables --append INPUT --protocol tcp --dport 8443 --jump REJECT

on atl1:

iptables --append INPUT --protocol tcp --dst 172.31.255.10 --dport 8443 --jump ACCEPT
iptables --append INPUT --protocol tcp --dport 8443 --jump REJECT

Nothing changed, so I added another rule to make sure that each node uses the correct source IP while talking to the other node.

on nazwa1:

iptables -t nat -A POSTROUTING -p tcp --dport 8443 -j SNAT --to 172.31.255.6

on atl1:

iptables -t nat -A POSTROUTING -p tcp --dport 8443 -j SNAT --to 172.31.255.10

Still nothing changed, although 172.20.20.92/30 disappeared from the debug log, so I don't think that was the issue.

@TomvB Can you try setting, for each server:

lxc config set core.https_address <ip address>:8443

to the server IP you want LXD to listen on, rather than the default wildcard?
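
You can then confirm what LXD is actually listening on, for example:

lxc config get core.https_address
# The listener should now be bound to that specific address rather than the wildcard:
ss -tlnp | grep 8443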


Yes, in cluster mode LXD should really only be using the configured member's address rather than trying them all (as in @TomvB's situation).

One thing that is suspicious in the earlier log you sent from the destination host is:

t=2021-12-06T13:40:30+0100 lvl=info msg="Creating container" ephemeral=false instance=lxd-move-of-8ab127d5-46d7-4501-95e0-706e29e41678 instanceType=container project=default
t=2021-12-06T13:40:38+0100 lvl=dbug msg="Database error: protocol.Error{Code:5, Message:\"database is locked\"}"
t=2021-12-06T13:40:38+0100 lvl=dbug msg="Retry failed db interaction (database is locked)"
t=2021-12-06T13:40:50+0100 lvl=info msg="Created container" ephemeral=false instance=lxd-move-of-8ab127d5-46d7-4501-95e0-706e29e41678 instanceType=container project=default
t=2021-12-06T13:40:51+0100 lvl=dbug msg="Database error: Failed to create operation: database is locked"
t=2021-12-06T13:40:51+0100 lvl=dbug msg="Retry failed db interaction (Failed to create operation: database is locked)"
t=2021-12-06T13:40:52+0100 lvl=dbug msg="New task Operation: 4653dcf3-3b33-4d4a-b570-a2a8687876f2"
t=2021-12-06T13:40:52+0100 lvl=dbug msg="Instance operation lock finished" action=create err=nil instance=lxd-move-of-8ab127d5-46d7-4501-95e0-706e29e41678 project=default reusable=false

As well as all the database locking (possibly suggesting slow I/O), the time it's taking is way too long: 20s from "Creating container" to "Created container".

This would be enough time (>10s) for the websocket listener to give up and close, causing the error you see.

And in your most recent log file:

t=2021-12-09T20:55:13+0100 lvl=info msg="Creating container" ephemeral=false instance=lxd-move-of-8ab127d5-46d7-4501-95e0-706e29e41678 instanceType=container project=default
t=2021-12-09T20:55:28+0100 lvl=info msg="Created container" ephemeral=false instance=lxd-move-of-8ab127d5-46d7-4501-95e0-706e29e41678 instanceType=container project=default

Again, over 10s, but no DB retries this time.

Something is making that creation process (which is just adding DB records) take a long time, and this is before the migration has even started.

See https://github.com/lxc/lxd/blob/22d06e2b23d7f653264861c271b75f46a627d5a7/lxd/instances_post.go#L315-L323

So I would first check the load situation on the cluster leader and on the target server, as well as any packet loss or rate limiting between the leader and the target server that might slow down queries.
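
A quick sketch of those checks, using the addresses from earlier in this thread (run the ping from whichever member is the leader):

# Load on the target member:
uptime

# Latency/packet loss from the leader to the target (the sample size is arbitrary):
ping -c 200 172.31.255.10

# Disk pressure on the target while a creation is running:
iostat -x 2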

Yeah, it works!


This log is from @kriszos? (Edit: yes, it is.)
Setting core.https_address works for me, and that is fine with me as a solution. I'll include it in my documentation.

FYI:

host-01:

config:
  core.https_address: 192.168.1.1:8443
  core.trust_password: true

host-02:

config:
  core.https_address: 192.168.1.2:8443
  core.trust_password: true


@TomvB, I am glad your issues are resolved.

@tomp
As it is not so easy to check which node is the current database leader (I'm grateful that this feature is being introduced in 4.21), I did the following:
I stopped the LXD service on all nodes besides the two I am testing on, so that one of them becomes the database leader.
Stopped all running containers.
Checked the load on both nodes, which at this point was almost nonexistent.
Checked IOPS with "iostat", which at this point were also minimal.
Checked random I/O speed on the disks with "fio", which could be better but is nothing that would prevent creating a database record:
source: read 39.1MiB/s, write 13.1MiB/s
destination: read 118MiB/s, write 39.4MiB/s
Checked connection speed between the nodes with "iperf3": 200Mb/s.
Checked packet loss with "ping -f": with a stable 29ms RTT I lost 2/10000 packets, so no significant packet loss.

Trying to move a container still ends with:
Error: Copy instance operation failed: Failed instance creation: Error transferring instance data: websocket: bad handshake

I have no idea what else it could be.

Can you show lxc config show <instance> --expanded for the container you’re trying to move?

Also, if you create a new instance on the target cluster member does it also take a long time?
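
Something like this sketch would show the timing (the image alias and member name are just examples; substitute whatever you normally use):

time lxc init images:debian/bullseye timing-test --target atl1
lxc delete timing-test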

Creating a new instance on the target node takes 29 seconds; on the source node it is near-instantaneous (at most 1 second); on the other nodes it takes 9-14 seconds.

architecture: x86_64
config:
  cluster.evacuate: migrate
  image.architecture: amd64
  image.description: Debian bullseye amd64 (20211213_06:39)
  image.os: Debian
  image.release: bullseye
  image.serial: "20211213_06:39"
  image.type: squashfs
  image.variant: cloud
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: enp5s0
        subnets:
          - type: static
            ipv4: true
            address: 10.1.2.99
            netmask: 32
            gateway: 10.1.1.1
            control: auto
          - type: static6
            ipv6: true
            accept-ra: false
            address: 2001:470:71:fdf::99
            netmask: 128
            gateway: 2001:470:65ec:1011::1
            control: auto
      - type: nameserver
        address:
          - 2001:470:71:fdf::12
          - 10.1.2.12
        search:
          - ad.kriszos.pl
  user.vendor-data: |
    #cloud-config
    #package_upgrade: true
    packages:
      - openssh-server
      - nano
    timezone: Europe/Warsaw
    disable_root: false
    #ssh_pwauth: yes
    users:
      - name: root
        ssh_authorized_keys:
          - ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAwAIXmVfMRgo7EQS/u7kTJ0s0Mr8FZiTfbEsdNhXOUKUuwSyidAo7zuIurEf+xaHLhzW3nfPqhTHZFRbbWORKRkbJsxCzBKr4mxOo/mZ2f0AFqZrTWdxR31aun2Ql3lZPoYAfA+NrfzFVnwAPLzC4jEcnvC+4J/fF0tsRBKSiRQCYR8rBTQWTPvfy2ZipLJn0DgBwCjWXUveJ1M/DI47+W3dgNaM48aWW8Po/UMNIJlsa6+fa4TRdjB1z/HjjTPg0XHgV7W30kPfxhnv86pkO4x9wE4TZBhHGxJotaoe511reCoc6DTMyv6SXg07VlrXI2I/8W1OV/IC+KJwSRpPZUw== rsa-key-20190814
        #lock_passwd: false
        #hashed_passwd: $1$GFy7utiu$Lkmt3eXTBpST8pkzTJCqY1
        shell: /bin/bash
    runcmd:
      #bellow_for_passwords
      #- sed -i -e '/^#PermitRootLogin/s/^.*$/PermitRootLogin yes/' /etc/ssh/sshd_config
      #- systemctl reload ssh
      #
      #below_for_centos
      - ping -c 5 10.1.1.1
      - ping -c 5 2001:470:65ec:1011::1
      - systemctl start sshd
      - systemctl enable sshd
    #power_state:
    #mode: reboot
    #condition: true
  volatile.apply_template: create
  volatile.base_image: 38c4676659ae9dddd19747f2fd92a37798a90f5d09e4d2e49fa615cdb35bf6a7
  volatile.eth0.hwaddr: 00:16:3e:c8:15:cb
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '
  volatile.uuid: 7e6f16ab-4319-4e51-8540-e1f6da1a79a1
devices:
  eth0:
    ipv4.routes.external: 10.1.2.99/32
    ipv6.routes.external: 2001:470:71:fdf::99/128
    name: enp5s0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- def
- ip-99
stateful: false
description: ""

What storage pool driver are you using?

Can you show the debug log output when you try and create an instance on the target member?

zfs

I can’t tell from the logs, but there is certainly some performance problem affecting operations on the target server.

For example, these sorts of errors indicate either contention or slow operations on the database:

t=2021-12-13T23:37:32+0100 lvl=dbug msg="Retry failed db interaction (Error adding configuration item \"volatile.eth0.hwaddr\" = \"00:16:3e:30:e5:59\" to instance 800: database is locked)" 

Also, the time between these 2 entries should be effectively zero, as they are just DB operations, but on your system it's 3 seconds.

t=2021-12-13T23:37:26+0100 lvl=info msg="Creating container" ephemeral=false instance=test1 instanceType=container project=default
t=2021-12-13T23:37:29+0100 lvl=dbug msg="FillInstanceConfig started" driver=zfs instance=test1 pool=local project=default

For comparison, on a 3-member cluster inside VMs on my own system, I see this when creating an instance on a non-leader member:

Dec 14 09:21:36 v1 lxd.daemon[997]: t=2021-12-14T09:21:36+0000 lvl=info msg="Creating container" ephemeral=false instance=c1 instanceType=container project=default
Dec 14 09:21:36 v1 lxd.daemon[997]: t=2021-12-14T09:21:36+0000 lvl=dbug msg="FillInstanceConfig started" driver=dir instance=c1 pool=local project=default
Dec 14 09:21:36 v1 lxd.daemon[997]: t=2021-12-14T09:21:36+0000 lvl=dbug msg="FillInstanceConfig finished" driver=dir instance=c1 pool=local project=default

Given that the timeout for the target to connect back to the source to start the migration is 10s, losing 3s just on DB record creation indicates there is some performance issue.
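
If it helps to measure that gap directly, something like this works (assuming the snap package logs to the journal under snap.lxd.daemon; adjust the unit name, or just grep your existing debug log, otherwise):

journalctl -u snap.lxd.daemon -o short-iso | grep -E 'Creating container|FillInstanceConfig started'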

My recommendation would be to upgrade to LXD 4.21 for the following reasons:

  • It reduces the amount of leader DB activity when idle - so if your systems are resource constrained this may help reduce contention.
  • It allows you to see which member is the leader via lxc cluster ls (see the example below).
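
For example (on 4.21 or newer the leader should appear among the listed roles as database-leader):

lxc cluster list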

I have had the same issue since mid-November (a similar environment to the poster's: two hosts not in a cluster, running LXD 4.21, with the sending host having multiple network cards), and solved it by applying

lxc config set core.https_address <ip address>:8443

on the sender.

Great to hear that works, as that is the correct way to configure LXD to avoid delays and (now) timeouts when not all of the IPs LXD is listening on are reachable from all hosts.