WireGuard/Tailscale on host breaks networking for all bridged/routed containers

I have tried both the default bridged lxdbr0 network and routed networking via the wifi interface wlp39s0, and neither works. Both bridged and routed containers can reach the external network, but the moment WireGuard/Tailscale is activated, the containers are cut off from the network.

I searched the forum, but all of the past posts dealt with WireGuard inside containers. My issue is that I have WireGuard active on the host and would like all packets from containers to be automatically routed through the WireGuard tunnel.
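For plain WireGuard, my understanding is that this is normally done by setting AllowedIPs = 0.0.0.0/0 on the peer, so that wg-quick installs a default route through the tunnel; NATted container traffic should then simply follow the host's routing. Sketch only, with placeholders rather than my real config:

# /etc/wireguard/wg0.conf (illustrative)
[Peer]
PublicKey = <peer public key>
Endpoint = <server>:51820
AllowedIPs = 0.0.0.0/0    # send all IPv4 traffic through the tunnel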

Thanks for any pointers.

Version: LXD 4.19
Linux: Ubuntu 20.04
Kernel: 5.11.0-37-generic

ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: wlp39s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5c:80:b6:42:f9:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.193/24 brd 192.168.31.255 scope global dynamic noprefixroute wlp39s0
       valid_lft 36059sec preferred_lft 36059sec
    inet6 fe80::7419:4982:6911:ea60/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
16: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:df:b9:06 brd ff:ff:ff:ff:ff:ff
    inet 10.78.73.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedf:b906/64 scope link 
       valid_lft forever preferred_lft forever
38: veth6b6e2ca4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:49:2a:49:d5:4c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.0.1/32 scope global veth6b6e2ca4
       valid_lft forever preferred_lft forever
    inet6 fe80::fc49:2aff:fe49:d54c/64 scope link 
       valid_lft forever preferred_lft forever
41: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.64.70.52/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fd7a:115c:a1e0:ab12:4843:cd96:6240:4634/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3eca:a46d:171d:a10b/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

ip route

default via 192.168.31.1 dev wlp39s0 proto dhcp metric 600
10.78.73.0/24 dev lxdbr0 proto kernel scope link src 10.78.73.1 linkdown
169.254.0.0/16 dev tailscale0 scope link metric 1000
192.168.31.0/24 dev wlp39s0 proto kernel scope link src 192.168.31.193 metric 600
192.168.31.233 dev veth6b6e2ca4 scope link

iptables (the ts-* rules are added by Tailscale, which is a WireGuard wrapper)

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ts-input   all  --  anywhere             anywhere            
ACCEPT     icmp --  anywhere             anywhere             icmp parameter-problem /* generated for LXD network lxdbr0 */
ACCEPT     icmp --  anywhere             anywhere             icmp time-exceeded /* generated for LXD network lxdbr0 */
ACCEPT     icmp --  anywhere             anywhere             icmp destination-unreachable /* generated for LXD network lxdbr0 */
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             multiport dports mdns
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 4000

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ts-forward  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     icmp --  anywhere             anywhere             icmp parameter-problem /* generated for LXD network lxdbr0 */
ACCEPT     icmp --  anywhere             anywhere             icmp time-exceeded /* generated for LXD network lxdbr0 */
ACCEPT     icmp --  anywhere             anywhere             icmp destination-unreachable /* generated for LXD network lxdbr0 */
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps /* generated for LXD network lxdbr0 */

Chain ts-forward (1 references)
target     prot opt source               destination         
MARK       all  --  anywhere             anywhere             MARK set 0x40000
ACCEPT     all  --  anywhere             anywhere             mark match 0x40000
DROP       all  --  100.64.0.0/10        anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain ts-input (1 references)
target     prot opt source               destination         
ACCEPT     all  --  diego-home-amd       anywhere            
RETURN     all  --  100.115.92.0/23      anywhere            
DROP       all  --  100.64.0.0/10        anywhere
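I only captured the filter table; if it helps I can also dump the NAT and mangle tables. I believe these are the standard commands (output not collected yet):

sudo iptables -t nat -L -n -v      # NAT table: LXD's MASQUERADE rule (if ipv4.nat is enabled) plus any ts-* chains Tailscale adds there
sudo iptables -t mangle -L -n -v   # mangle table, in case marks are set outside the filter chains above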

Please use tcpdump -i tailscale0 -nn to confirm that you see traffic from your lxdbr0 containers going over the interface. This will confirm the routing is correct.

Also, can you show the output of lxc network show lxdbr0?

@tomp Interesting result. The ci-1 container is configured to use lxdbr0, set in the default profile. When Tailscale is disabled, networking works. After enabling Tailscale, I ran ping 8.8.8.8 inside the ci-1 container and got no data back. But on the host, the tcpdump you suggested shows the Tailscale tun getting the correct ICMP requests and ICMP replies. So routing out from lxdbr0 to the Tailscale tun is working, but the return packets are not correctly routed back to lxdbr0?
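If it helps narrow this down, I can also check the host's policy routing and reverse-path filtering; I believe these are the relevant commands (table 52 is what tailscaled normally uses, so that part is an assumption on my side):

ip rule show                        # policy-routing rules installed by tailscaled
ip route show table 52              # Tailscale's routing table (52 by default; adjust if different)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.tailscale0.rp_filter   # strict rp_filter can drop asymmetric return traffic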

lxc network show lxdbr0

config:
  ipv4.address: 10.78.73.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/ci-1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
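Since ipv4.nat is "true", I assume the SNAT rule LXD generates could also be checked with something like this (not run here, just for reference):

sudo iptables -t nat -S POSTROUTING | grep lxdbr0   # should show a MASQUERADE rule for 10.78.73.0/24; or nft list ruleset if LXD is on the nftables driver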

tcpdump -i tailscale0 -nn (while ci-1 container performing ping 8.8.8.8)

08:59:34.972964 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 29, length 64
08:59:34.973001 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 29, length 64
08:59:35.251297 IP diego-home-amd.domain > 10.78.73.56.43822: Flags [S.], seq 1838475575, ack 3942048116, win 65084, options [mss 1240,sackOK,TS val 876842216 ecr 3781977159,nop,wscale 10], length 0
08:59:35.251335 IP diego-home-amd.domain > 10.78.73.56.43822: Flags [S.], seq 1838475575, ack 3942048116, win 65084, options [mss 1240,sackOK,TS val 876842216 ecr 3781977159,nop,wscale 10], length 0
08:59:35.795765 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 30, length 64
08:59:35.996426 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 30, length 64
08:59:35.996462 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 30, length 64
08:59:36.819758 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 31, length 64
08:59:37.020873 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 31, length 64
08:59:37.020911 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 31, length 64
08:59:37.267711 IP diego-home-amd.domain > 10.78.73.56.43822: Flags [S.], seq 1838475575, ack 3942048116, win 65084, options [mss 1240,sackOK,TS val 876844232 ecr 3781977159,nop,wscale 10], length 0
08:59:37.267768 IP diego-home-amd.domain > 10.78.73.56.43822: Flags [S.], seq 1838475575, ack 3942048116, win 65084, options [mss 1240,sackOK,TS val 876844232 ecr 3781977159,nop,wscale 10], length 0
08:59:37.843763 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 32, length 64
08:59:38.045089 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 32, length 64
08:59:38.045122 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 32, length 64
08:59:38.867753 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 33, length 64
08:59:39.071338 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 33, length 64
08:59:39.071375 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 33, length 64
08:59:39.891762 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 34, length 64
08:59:40.063549 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 34, length 64
08:59:40.063583 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 34, length 64
08:59:40.915291 IP diego-home-amd > dns.google: ICMP echo request, id 656, seq 35, length 64
08:59:41.116630 IP dns.google > diego-home-amd: ICMP echo reply, id 656, seq 35, length 64
08:59:41.116653 IP dns.google > 10.78.73.56: ICMP echo reply, id 656, seq 35, length 64

lxc list

+------------+---------+--------------------+------+-----------+-----------+
|    NAME    |  STATE  |        IPV4        | IPV6 |   TYPE    | SNAPSHOTS |
+------------+---------+--------------------+------+-----------+-----------+
| ci-1       | RUNNING | 10.78.73.56 (eth0) |      | CONTAINER | 0         |
+------------+---------+--------------------+------+-----------+-----------+

There are a few things I'm still confused about:

  1. I don't think you ran tcpdump with the -nn flag, as the output is performing reverse DNS on the source/target IPs, making it tricky to see precisely which IPs are being used (in order to understand the NAT behaviour).
  2. Although I see ICMP replies going to your container's IP 10.78.73.56, I don't see the outbound requests from it. This is likely because you have NAT enabled on your lxdbr0, so I would expect the outbound ICMP requests to be NATted to your host's IP on the tailscale interface (100.64.70.52).
  3. But if that were the case, then I wouldn't expect to see ICMP replies from Google going directly to your container's IP on the tailscale0 interface (only on the lxdbr0 interface, after the NAT has been undone).

Please can you re-run tcpdump, making sure you use -nn, and run it once with -i tailscale0 and once with -i lxdbr0, making sure there are no other ping operations going on at the same time. See the example invocations below.
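For clarity, something along these lines would do (the icmp filter is optional, it just keeps the captures focused on the ping test):

sudo tcpdump -i tailscale0 -nn icmp
sudo tcpdump -i lxdbr0 -nn icmp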

@tomp You're right, I did not have the correct tcpdump flags. Here is the corrected tcpdump output, captured while ci-1 was running ping 8.8.8.8 with no other program generating network traffic.

10.78.73.56 = container ci-1 IP
100.64.70.52 = Tailscale tun address

20:46:28.845476 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 1, length 64
20:46:29.007421 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 1, length 64
20:46:29.007460 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 1, length 64
20:46:29.651290 IP 10.78.73.1.53 > 10.78.73.56.54750: Flags [S.], seq 2871910905, ack 1981041053, win 65084, options [mss 1240,sackOK,TS val 919256616 ecr 3824377159,nop,wscale 10], length 0
20:46:29.750328 IP 10.78.73.1.53 > 10.78.73.56.42933: 53856 NXDomain 0/0/0 (33)
20:46:29.750359 IP 10.78.73.1.53 > 10.78.73.56.54537: 35126 NXDomain 0/0/0 (33)
20:46:29.875576 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 2, length 64
20:46:30.135173 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 2, length 64
20:46:30.135192 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 2, length 64
20:46:30.181460 IP 52.114.158.162.443 > 100.64.70.52.57340: Flags [P.], seq 1847472552:1847472887, ack 2257760689, win 2047, length 335
20:46:30.184350 IP 100.64.70.52.57340 > 52.114.158.162.443: Flags [P.], seq 1:178, ack 335, win 63, length 177
20:46:30.396603 IP 52.114.158.162.443 > 100.64.70.52.57340: Flags [.], ack 178, win 2046, length 0
20:46:30.899358 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 3, length 64
20:46:31.158248 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 3, length 64
20:46:31.158281 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 3, length 64
20:46:31.923304 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 4, length 64
20:46:32.183348 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 4, length 64
20:46:32.183362 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 4, length 64
20:46:32.947602 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 5, length 64
20:46:33.210198 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 5, length 64
20:46:33.210228 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 5, length 64
20:46:33.971331 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 6, length 64
20:46:34.230838 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 6, length 64
20:46:34.230876 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 6, length 64
20:46:34.995475 IP 100.64.70.52 > 8.8.8.8: ICMP echo request, id 2504, seq 7, length 64
20:46:35.000404 IP 10.78.73.1.53 > 10.78.73.56.54752: Flags [S.], seq 2886974633, ack 3742451963, win 65084, options [mss 1240,sackOK,TS val 919261965 ecr 3824397909,nop,wscale 10], length 0
20:46:35.153643 IP 8.8.8.8 > 100.64.70.52: ICMP echo reply, id 2504, seq 7, length 64
20:46:35.153670 IP 8.8.8.8 > 10.78.73.56: ICMP echo reply, id 2504, seq 7, length 64
20:46:35.443817 IP 100.64.70.52.35962 > 52.96.59.162.443: Flags [F.], seq 2993655676, ack 3896536628, win 63, length 0
20:46:35.444216 IP 100.64.70.52.44056 > 8.8.8.8.53: 61419+ [1au] A? teams-events-data.trafficmanager.net. (65)
20:46:35.444314 IP 100.64.70.52.44828 > 8.8.8.8.53: 47305+ [1au] AAAA? teams-events-data.trafficmanager.net. (65)
20:46:35.539318 IP 10.78.73.1.53 > 10.78.73.56.54744: Flags [S.], seq 4125643167, ack 4222695877, win 65084, options [mss 1240,sackOK,TS val 919262504 ecr 3824366909,nop,wscale 10], length 0
20:46:35.604189 IP 8.8.8.8.53 > 100.64.70.52.44056: 61419 2/0/1 CNAME onedscolprdwus00.westus.cloudapp.azure.com., A 20.189.173.1 (137)
20:46:35.604215 IP 8.8.8.8.53 > 100.64.70.52.44828: 47305 1/1/1 CNAME onedscolprdwus12.westus.cloudapp.azure.com. (191)
20:46:35.604483 IP 100.64.70.52.45768 > 8.8.8.8.53: 38079+ [1au] AAAA? onedscolprdwus12.westus.cloudapp.azure.com. (71)
20:46:35.612830 IP 52.96.59.162.443 > 100.64.70.52.35962: Flags [R.], seq 1, ack 1, win 0, length 0
20:46:35.869208 IP 8.8.8.8.53 > 100.64.70.52.45768: 38079 0/1/1 (141)
20:46:35.869733 IP 100.64.70.52.41594 > 20.189.173.1.443: Flags [S], seq 3733493959, win 64480, options [mss 1240,sackOK,TS val 1299161339 ecr 0,nop,wscale 10], length 0
20:46:35.944895 IP 100.64.70.52.41596 > 20.189.173.1.443: Flags [S], seq 1449856970, win 64480, options [mss 1240,sackOK,TS val 1299161414 ecr 0,nop,wscale 10], length 0
20:46:36.019280 IP 10.78.73.1.53 > 10.78.73.56.54752: Flags [S.], seq 2886974633, ack 3742451963, win 65084, options [mss 1240,sackOK,TS val 919262984 ecr 3824397909,nop,wscale 10], length 0

ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: wlp39s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5c:80:b6:42:f9:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.193/24 brd 192.168.31.255 scope global dynamic noprefixroute wlp39s0
       valid_lft 29166sec preferred_lft 29166sec
    inet6 fe80::7419:4982:6911:ea60/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
16: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:df:b9:06 brd ff:ff:ff:ff:ff:ff
    inet 10.78.73.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedf:b906/64 scope link 
       valid_lft forever preferred_lft forever
41: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 100.64.70.52/32 scope global tailscale0
       valid_lft forever preferred_lft forever
    inet6 fd7a:115c:a1e0:ab12:4843:cd96:6240:4634/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3eca:a46d:171d:a10b/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever
46: vetha0905c11@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 4e:0e:30:72:1e:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0

A correlated effect that is even more obvious: the moment Tailscale/WireGuard becomes active, my container IPs are no longer pingable from the host.
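One way to illustrate it (commands for reference; I have not pasted the output here):

ip route get 10.78.73.56   # shows which interface/route the host would use to reach the container
ping -c 3 10.78.73.56      # fails for me while Tailscale is up, works again once it is down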


Did a solution for this ever arise? I want to use Tailscale on my development laptop (in fact I need it to connect to resources elsewhere), but every time I do, it seems to wreak havoc on my local LXD and/or Juju.

Currently, when I am connected to Tailscale, if I run

juju models against my laptop's LXD model, it simply stops responding.

Just in case someone stumbles upon this thread after a Google search:
I had a slightly similar issue recently; it appears that one of the hosts in the Tailscale network was advertising routes that collided with the IP range assigned to LXD.

As a result, every time the tailscaled daemon started, no traffic was able to leave my containers.
I used tailscale set --accept-routes=false on the host machine to confirm this.
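To check for such a collision, something like this should work (table 52 is the table tailscaled uses by default, adjust if yours differs; the grep uses this thread's lxdbr0 subnet as an example):

ip route show table 52 | grep 10.78.73   # any match means a peer route overlaps lxdbr0's subnet
tailscale status                         # lists peers; with --json it should also show the subnet routes they advertise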