Can you also run this on the host side on lxdbr0 and inside the container (on eth0) so we can see if there are any router advertisements:
sudo tcpdump -l -nn -i lxdbr0 icmp6 and 'ip6[40] = 134'
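For reference, the filter can be kept in a variable and widened to also catch router solicitations (ICMPv6 type 133) alongside advertisements (type 134); `ip6[40]` indexes the first byte after the fixed 40-byte IPv6 header, i.e. the ICMPv6 type. A sketch, assuming the same interfaces as above:

```shell
# Sketch: one filter for both directions of router discovery.
# ICMPv6 type 133 = router solicitation, 134 = router advertisement;
# ip6[40] is the ICMPv6 type byte (right after the 40-byte IPv6 header).
FILTER='icmp6 and (ip6[40] = 133 or ip6[40] = 134)'
echo "$FILTER"
# On the host:      sudo tcpdump -l -nn -i lxdbr0 "$FILTER"
# In the container: sudo tcpdump -l -nn -i eth0 "$FILTER"
```

Seeing solicitations without answering advertisements narrows the problem down to the host side.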
With ipv6.dhcp.stateful = false the container doesn't get a route at all:
root@mutual-hen:~# ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::216:3eff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
root@mutual-hen:~# ip -6 route
fe80::/64 dev eth0 proto kernel metric 256 pref medium
Interesting, so either something is blocking your router solicitation (or your container isn't sending one), or the router advertisements are being blocked.
What container OS/version are you running and what network configuration files do you have in there?
The host is running stock Hetzner Ubuntu 20.04. The guest image is images:ubuntu/focal.
AFAIK networking is done via cloud-init:
asbachb@ubuntu-8gb-nbg1-1:~$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 2a01:4f8:xxxx:xxxx::1/64
            dhcp4: true
            gateway6: fe80::1
            match:
                macaddress: 96:00:00:xx:xx:xx
            set-name: eth0
Is that the container’s netplan config?
I guess it is. At least that's the only file in that folder:
asbachb@ubuntu-8gb-nbg1-1:~$ ls -l /etc/netplan/
total 4
-rw-r--r-- 1 root root 572 May 1 19:04 50-cloud-init.yaml
ubuntu-8gb-nbg1-1 is your host, not your container. What is the network config inside your container?
I guess that might be the problem:
root@mutual-hen:~# cat /etc/netplan/10-lxc.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac
OK, that looks ok to me.
Have you got the output of tcpdump yet?
Also, when the route disappears, can you run netplan apply and see if it re-appears?
With netplan apply the route is back, and IPv6 connectivity too.
I can also see the advertisement on the host:
asbachb@ubuntu-8gb-nbg1-1:~$ sudo tcpdump -l -nn -i lxdbr0 icmp6 and 'ip6[40] = 134' -v
tcpdump: listening on lxdbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:36:10.711396 IP6 (class 0xc0, flowlabel 0x55a2e, hlim 255, next-header ICMPv6 (58) payload length: 88) fe80::c810:xxxx:xxxx:c700 > fe80::216:xxxx:xxxx:694e: [icmp6 sum ok] ICMP6, router advertisement, length 88
    hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 1800s, reachable time 0ms, retrans timer 0ms
      prefix info option (3), length 32 (4): 2a01:4f8:xxxx:xxxx::/120, Flags [onlink], valid time 1800s, pref. time 1800s
      mtu option (5), length 8 (1): 1500
      source link-address option (1), length 8 (1): 56:f7:92:xx:xx:xx
      rdnss option (25), length 24 (3): lifetime 1800s, addr: fe80::c810:f5ff:xxxx:xxxx
OK, so now leave tcpdump running, wait until the route drops off again, and advise whether you still see those periodic router advertisements every few minutes.
There is no traffic when the route expires.
asbachb@ubuntu-8gb-nbg1-1:~$ sudo tcpdump -l -nn -i lxdbr0 icmp6 and 'ip6[40] = 134' -v
tcpdump: listening on lxdbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
root@proud-goldfish:~# tcpdump -l -nn -i eth0 icmp6 and 'ip6[40] = 134' -v
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
I only see traffic on host when restarting the container.
OK thanks, how long are you leaving the capture running? (By default the advertisements only happen every 8 minutes or so, from my observations.)
I guess about 20 minutes.
OK, can you show the output of this on your host (not the container):
ps aux | grep dnsmasq
asbachb@ubuntu-8gb-nbg1-1:~$ ps aux | grep dnsmasq
lxd 5388 0.0 0.0 43628 3624 ? Ss 14:56 0:00 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --no-ping --interface=lxdbr0 --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.254.210.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.254.210.2,10.254.210.254,5m --listen-address=2a01:4f8:xxxx:xxxx::1 --enable-ra --dhcp-range 2a01:4f8:xxxx:xxxx::2,2a01:4f8:xxxx:xxxx::ff,120,5m -s lxd -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd
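The interesting part of that command line turns out to be the second `--dhcp-range` argument: its third comma-separated field is the IPv6 prefix length LXD handed to dnsmasq. A sketch of pulling it out (the range string below is copied from the output above, redactions kept):

```shell
# Sketch: the third field of dnsmasq's v6 --dhcp-range argument is the
# prefix length (here 120), which matters for router advertisements.
range="2a01:4f8:xxxx:xxxx::2,2a01:4f8:xxxx:xxxx::ff,120,5m"
prefix_len=$(echo "$range" | cut -d, -f3)
echo "$prefix_len"
```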
Is it possible to get a login to that system?
For future reference, the issue here is that the OP was using an IPv6 subnet smaller than a /64 for their lxdbr0 interface. dnsmasq apparently does not support router advertisements for subnets smaller than /64.
The docs say:
“The minimum size of the prefix length is 64.”
I have recommended using the routed NIC type for situations where the ISP only provides a single /64.
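The constraint can be sketched as a simple check (numbers taken from the thread; remember that in IPv6 a *longer* prefix length means a *smaller* subnet):

```shell
# Sketch of the constraint: a prefix length greater than 64 means a
# subnet smaller than a /64, which dnsmasq will not advertise via RA.
prefix_len=120   # from the lxdbr0 dhcp-range seen earlier
if [ "$prefix_len" -gt 64 ]; then
    echo "/$prefix_len is smaller than a /64: no router advertisements"
else
    echo "/$prefix_len is fine for router advertisements"
fi
```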
NOTE: This solution works for a Hetzner Cloud instance. It is UNTESTED on dedicated servers.
With the kind help of @tomp we figured out another solution:
asbachb@ubuntu-8gb-nbg1-1:~$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 2a01:4f8:xxxx:xxxx::1/128 # Previously 2a01:4f8:xxxx:xxxx::1/64
            dhcp4: true
            gateway6: fe80::1
            match:
                macaddress: xx:xx:xx:xx:xx:xx
            set-name: eth0
asbachb@ubuntu-8gb-nbg1-1:~$ lxc network show lxdbr0
config:
  ipv4.address: 10.254.210.1/24
  ipv4.nat: "true"
  ipv6.address: 2a01:4f8:xxxx:xxxx::2/64
description: ""
name: lxdbr0
type: bridge
used_by:
- none
managed: true
status: Created
locations:
- none
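In short: the host keeps only a /128 on eth0 and the whole /64 moves to lxdbr0, giving dnsmasq a full /64 to advertise. A quick sanity-check sketch that both addresses sit in the same /64 (the hextets below are illustrative stand-ins for the redacted prefix, and the comparison assumes all four leading hextets are written out explicitly):

```shell
# Sketch: the host's /128 and the bridge's /64 address should share the
# same /64 prefix (first four hextets) so the ISP-routed /64 reaches
# the containers through lxdbr0.
host_addr="2a01:4f8:aaaa:bbbb::1"    # stand-in for the redacted host address
bridge_addr="2a01:4f8:aaaa:bbbb::2"  # stand-in for the redacted bridge address
prefix64() { echo "$1" | cut -d: -f1-4; }
if [ "$(prefix64 "$host_addr")" = "$(prefix64 "$bridge_addr")" ]; then
    echo "same /64"
fi
```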