Here's how I would do it:
First, a refresh of my test config:
lxc network show lxdbr1 (I'm using lxdbr1 for IPv4 only):
lxc network show lxdbr1
config:
ipv4.address: 10.0.171.1/24
ipv4.nat: "true"
ipv6.address: none
ipv6.dhcp: "false"
ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge
I've created a test container called crouted and added 2 NICs: one (eth0) bridged to lxdbr1 (for IPv4), and the other (eth1) routed to parent enp3s0 (my external interface on the host).
architecture: x86_64
config:
image.architecture: amd64
image.description: ubuntu 18.04 LTS amd64 (release) (20200317)
image.label: release
image.os: ubuntu
image.release: bionic
image.serial: "20200317"
image.type: squashfs
image.version: "18.04"
volatile.base_image: 98e43d99d83ef1e4d0b28a31fc98e01dd98a2dbace3870e51c5cb03ce908144b
volatile.eth0.hwaddr: 00:16:3e:ec:e2:b5
volatile.eth1.hwaddr: 00:16:3e:83:d2:60
volatile.eth1.name: eth1
volatile.idmap.base: "0"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000}]'
volatile.last_state.idmap: '[]'
volatile.last_state.power: STOPPED
devices:
eth0:
name: eth0
network: lxdbr1
type: nic
eth1:
ipv6.address: 2a02:nnn:76f4:1::1234
nictype: routed
parent: enp3s0
type: nic
ephemeral: false
profiles:
- default
stateful: false
description: ""
I start the container, and check the IPs and routes have taken effect:
lxc start crouted
Wait a couple of seconds.
lxc ls crouted
+---------+---------+--------------------+------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------+---------+--------------------+------------------------------+-----------+-----------+
| crouted | RUNNING | 10.0.171.52 (eth0) | 2a02:nnn:76f4:1::1234 (eth1) | CONTAINER | 0 |
+---------+---------+--------------------+------------------------------+-----------+-----------+
Check routes inside container:
lxc exec crouted -- ip r
default via 10.0.171.1 dev eth0 proto dhcp src 10.0.171.52 metric 100
10.0.171.0/24 dev eth0 proto kernel scope link src 10.0.171.52
10.0.171.1 dev eth0 proto dhcp scope link src 10.0.171.52 metric 100
lxc exec crouted -- ip -6 r
2a02:nnn:76f4:1::1234 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 metric 1024 pref medium
Check ping to external addresses:
lxc exec crouted -- ping 8.8.8.8 -c 5
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=23.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=24.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=23.7 ms
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 23.725/23.885/24.045/0.141 ms
lxc exec crouted -- ping 2606:4700:4700::1111 -c 5
PING 2606:4700:4700::1111(2606:4700:4700::1111) 56 data bytes
64 bytes from 2606:4700:4700::1111: icmp_seq=1 ttl=59 time=23.4 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=2 ttl=59 time=23.4 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=3 ttl=59 time=23.3 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=4 ttl=59 time=23.3 ms
64 bytes from 2606:4700:4700::1111: icmp_seq=5 ttl=59 time=23.3 ms
--- 2606:4700:4700::1111 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 23.325/23.378/23.465/0.200 ms
Now, check DNS resolution manually using the dig tool, rather than relying on the systemd resolver (which you have shown is listening on 127.0.0.1:53 in your container):
Test using external IPv6 resolver:
lxc exec crouted -- dig @2606:4700:4700::1111 www.linuxcontainers.org
; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @2606:4700:4700::1111 www.linuxcontainers.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6433
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;www.linuxcontainers.org. IN A
;; ANSWER SECTION:
www.linuxcontainers.org. 884 IN CNAME rproxy.stgraber.org.
rproxy.stgraber.org. 884 IN A 149.56.148.5
;; Query time: 23 msec
;; SERVER: 2606:4700:4700::1111#53(2606:4700:4700::1111)
;; WHEN: Thu Mar 19 08:54:12 UTC 2020
;; MSG SIZE rcvd: 98
Test using external IPv4 resolver:
lxc exec crouted -- dig @8.8.8.8 www.linuxcontainers.org
; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @8.8.8.8 www.linuxcontainers.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.linuxcontainers.org. IN A
;; ANSWER SECTION:
www.linuxcontainers.org. 899 IN CNAME rproxy.stgraber.org.
rproxy.stgraber.org. 899 IN A 149.56.148.5
;; Query time: 430 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Mar 19 08:54:58 UTC 2020
;; MSG SIZE rcvd: 98
If these didn't work, the next step is to check where the packets are getting lost using tcpdump.
First, let's review how the packets traverse from the container to the host, and how the response comes back:
- The user runs dig against 2606:4700:4700::1111 inside the container; this means UDP packets with a destination port of 53 will leave the container via eth1, destined for the default gateway address fe80::1.
- Both bridged and routed NIC types make use of a Linux network concept called "veth pairs", where a pair of virtual Ethernet devices is created by the OS and any packet that goes in one end comes out of the other. One end of the pair is left on the host and the other end is moved into the container. In this way eth0 and eth1 in the container are connected to their respective pair ends on the host. You can see which are the respective host-side veth interfaces by running lxc info crouted, e.g.
lxc info crouted
Name: crouted
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/03/19 08:43 UTC
Status: Running
Type: container
Profiles: default
Pid: 14160
Ips:
lo: inet 127.0.0.1
lo: inet6 ::1
eth0: inet 10.0.171.52 veth575cd614
eth0: inet6 fe80::216:3eff:feec:e2b5 veth575cd614
eth1: inet6 2a02:nnn:76f4:1::1234 veth9142f1c9
eth1: inet6 fe80::f813:d4ff:fe89:c57f veth9142f1c9
You can see that eth0 has a host-side interface called veth575cd614 and eth1 has a host-side end called veth9142f1c9.
- For bridged NIC types, LXD "connects" the host-side veth interface to the parent LXD bridge (in the case of my container's eth0 this is lxdbr1). For routed NIC types, LXD does not connect the host-side veth interface to anything, and just leaves it "connected" to the host like any other interface.
We can see this by running on the host:
ip a show dev veth9142f1c9
30: veth9142f1c9@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:d4:d1:5e:1c:82 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1/128 scope link
valid_lft forever preferred_lft forever
inet6 fe80::fcd4:d1ff:fe5e:1c82/64 scope link
valid_lft forever preferred_lft forever
LXD has added an IPv6 address of fe80::1/128
to the host-side interface of the veth pair.
- So, packets leaving eth1 in the container destined for fe80::1 will arrive at the host end of the veth pair. After that, it is up to the host to "route" the packets where they need to go.
So an expected packet flow for these UDP packets would be:
- Leave the container's eth1 for fe80::1.
- Arrive at the host's veth interface.
- The host routes the packets out to the Internet via your host's external interface, in my case enp3s0 (check ip r on the host to see your default gateway).
- Response packets arrive from the Internet back at your host's external interface, destined for the container's IPv6 address.
- Your host sees the static route LXD adds for the container's IPv6 address, and sends the response packets down the host-side veth interface.
- Response packets appear in the container at eth1.
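The last two steps depend on host-side settings worth verifying directly (the address and interface names here are from my setup): the static route LXD adds for the container's address, and IPv6 forwarding, which routed NICs rely on.

```shell
# The static route LXD should have added, pointing down the host-side veth:
ip -6 route show 2a02:nnn:76f4:1::1234
# e.g. 2a02:nnn:76f4:1::1234 dev veth9142f1c9

# Routed NICs rely on the host forwarding packets between interfaces;
# this should print 1:
sysctl net.ipv6.conf.all.forwarding
```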
So we can see there are several places we can "attach" a tcpdump session to track the flow of these packets in and out. I would suggest:
- enp3s0 on the host (your external interface).
- veth9142f1c9 on the host (the host-side end of the veth pair).
- eth1 inside the container.
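As an aside, the veth pair behaviour described above is easy to demonstrate by hand (requires root; vp0/vp1 are names I've made up):

```shell
# Create a veth pair; packets entering one end exit the other
ip link add vp0 type veth peer name vp1
ip link set vp0 up
ip link set vp1 up
ip -br link show type veth   # both ends are visible on the host
# Deleting one end removes its peer too
ip link del vp0
```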
Let's set up tcpdump on enp3s0 and then, in a separate window, run dig inside the container again.
On the host:
tcpdump -l -nn -i enp3s0 host 2a02:nnn:76f4:1::1234 and port 53
Now run the dig command:
lxc exec crouted -- dig @2606:4700:4700::1111 www.linuxcontainers.org
The tcpdump results should show:
sudo tcpdump -l -nn -i enp3s0 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp3s0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:13:06.114853 IP6 2a02:nnn:76f4:1::1234.54264 > 2606:4700:4700::1111.53: 6205+ [1au] A? www.linuxcontainers.org. (64)
09:13:06.138527 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.54264: 6205$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)
This shows outbound DNS request packets leaving with a source address of 2a02:nnn:76f4:1::1234, going to 2606:4700:4700::1111, with the query A? www.linuxcontainers.org. Then the response packet comes back with an answer of CNAME rproxy.stgraber.org., A 149.56.148.5.
But this only shows us that the response packets made it back to the host's external interface.
Let's re-run the test, but now with tcpdump running on veth9142f1c9:
sudo tcpdump -l -nn -i veth9142f1c9 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth9142f1c9, link-type EN10MB (Ethernet), capture size 262144 bytes
09:15:57.342239 IP6 2a02:nnn:76f4:1::1234.45415 > 2606:4700:4700::1111.53: 17864+ [1au] A? www.linuxcontainers.org. (64)
09:15:57.366240 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.45415: 17864$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)
Great, so we can see that the host is correctly routing the response packets that are coming in on enp3s0 down veth9142f1c9.
Finally, let's check that the packets are arriving at the container's eth1 interface:
sudo lxc exec crouted -- tcpdump -l -nn -i eth1 host 2a02:nnn:76f4:1::1234 and port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
09:18:39.547749 IP6 2a02:nnn:76f4:1::1234.33122 > 2606:4700:4700::1111.53: 4365+ [1au] A? www.linuxcontainers.org. (64)
09:18:39.571926 IP6 2606:4700:4700::1111.53 > 2a02:nnn:76f4:1::1234.33122: 4365$ 2/0/1 CNAME rproxy.stgraber.org., A 149.56.148.5 (98)
Success! The packets have now been confirmed arriving at the container.
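Once the raw path is confirmed working end-to-end, you can test the container's local systemd-resolved stub the same way (assuming a stock Ubuntu container, where it listens on 127.0.0.53):

```shell
lxc exec crouted -- dig @127.0.0.53 www.linuxcontainers.org
```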
So this is how you can break down the problem to see where the issue lies. Hope that helps.