I might be missing something, but I have a weird issue with networking in a nested hypervisor setup. There are three levels: 1. Ubuntu 20.04 (LXD hypervisor) -> 2. Ubuntu 20.04 in a container (nested LXD hypervisor) -> 3. Debian 10 in a container
The first two levels work just fine, but the last one (Debian 10) has no access to the internet or the local network.
Additionally, when I start the container in the nested hypervisor, the nested hypervisor itself starts losing packets. I suspect some kind of contention with the container is going on, but I'm far from being a networking expert.
The first level actually runs more Debian 10 containers, with networking working just fine, so I thought I had it figured out, but for some reason the same setup nested one level deeper doesn't work. I believe it has something to do with routing. I couldn't find anything online describing such a setup (see the topology sketch below).
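To make the layering easier to follow, this is the topology as configured (addresses match the listings below):

LAN 192.168.7.0/24, gateway 192.168.7.1
 └─ Level 1: Ubuntu 20.04, LXD host (eth1 = 192.168.7.200/24)
     └─ routed veth (host side 169.254.0.1/32)
         Level 2: Ubuntu 20.04, nested LXD host (eth0 = 192.168.7.204/32)
          └─ routed veth (host side 169.254.0.1/32)
              Level 3: Debian 10 (eth0 = 192.168.7.240/32)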
First things first: LXD is at version 4.0.5. I'll group the logs and data by level.
1. Ubuntu 20.04 (LXD hypervisor)
luken@lxd-hypervisor:~$ lxc list
+-------------------+---------+----------------------+------+-----------+-----------+
|       NAME        |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
| hypervisor-nested | RUNNING | 192.168.7.204 (eth0) |      | CONTAINER | 0         |
+-------------------+---------+----------------------+------+-----------+-----------+
| # ... some other irrelevant containers
+-------------------+---------+----------------------+------+-----------+-----------+
luken@lxd-hypervisor:~$ lxc profile show hypervisor-nested
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.7.204/32
        nameservers:
          addresses:
            - 8.8.8.8
          search: []
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
  user.user-data: |
    #cloud-config
    users:
      - name: luken
        gecos: ''
        primary_group: luken
        groups: "sudo"
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - <<<redacted>>>
description: Hypervisor Nested
devices:
  eth0:
    ipv4.address: 192.168.7.204
    nictype: routed
    parent: eth1
    type: nic
name: hypervisor-nested
used_by:
- /1.0/instances/hypervisor-nested
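For reference, the nested hypervisor was launched roughly like this (the exact image alias may differ; security.nesting is required to run LXD inside LXD):

luken@lxd-hypervisor:~$ lxc launch ubuntu:20.04 hypervisor-nested --profile default --profile hypervisor-nested -c security.nesting=true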
luken@lxd-hypervisor:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:be:4a:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 52880sec preferred_lft 52880sec
    inet6 fe80::a00:27ff:febe:4ae8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:dd:13:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.200/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 2002:5306:912c:0:a00:27ff:fedd:1372/64 scope global dynamic mngtmpaddr
       valid_lft 86363sec preferred_lft 86363sec
    inet6 fe80::a00:27ff:fedd:1372/64 scope link
       valid_lft forever preferred_lft forever
# ... some other, irrelevant veths in between
15: vethe184de76@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:5c:38:07:24:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.0.1/32 scope global vethe184de76
       valid_lft forever preferred_lft forever
    inet6 fe80::fc5c:38ff:fe07:242c/64 scope link
       valid_lft forever preferred_lft forever
luken@lxd-hypervisor:~$ ip r
default via 192.168.7.1 dev eth1
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.7.0/24 dev eth1 proto kernel scope link src 192.168.7.200
# ... Some other irrelevant routes related to other containers in between
192.168.7.204 dev vethe184de76 scope link
# Note: I changed the default gateway myself; the whole setup runs in Vagrant, so by default it pointed at Vagrant's network gateway. The 10.0.2.x addresses are Vagrant-related.
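As far as I understand the routed NIC type, LXD publishes the container's /32 to the LAN via a static proxy-ARP entry on the parent interface, so on this level I'd expect the entry for 192.168.7.204 to show up in checks like these (output omitted):

luken@lxd-hypervisor:~$ ip neigh show proxy dev eth1          # proxy-ARP entries added for routed NICs
luken@lxd-hypervisor:~$ sysctl net.ipv4.conf.eth1.forwarding  # must be 1 for routed NICs to work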
2. Ubuntu 20.04 in container (nested LXD hypervisor)
root@hypervisor-nested:~# lxc list
+----------------+---------+----------------------+------+-----------+-----------+
|      NAME      |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+----------------------+------+-----------+-----------+
| test-profile-1 | RUNNING | 192.168.7.240 (eth0) |      | CONTAINER | 0         |
+----------------+---------+----------------------+------+-----------+-----------+
root@hypervisor-nested:~# lxc info test-profile-1
Name: test-profile-1
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/02/25 21:48 UTC
Status: Running
Type: container
Profiles: default, test-profile-1
Pid: 5441
Ips:
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
  eth0: inet    192.168.7.240   veth40a1b521
  eth0: inet6   fe80::ecc9:a2ff:fea9:5f0        veth40a1b521
Resources:
  Processes: 6
  CPU usage:
    CPU usage (in seconds): 1
  Memory usage:
    Memory (current): 21.74MB
    Memory (peak): 69.40MB
  Network usage:
    eth0:
      Bytes received: 446B
      Bytes sent: 11.68kB
      Packets received: 5
      Packets sent: 45
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
root@hypervisor-nested:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:59:18:27:53:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.7.204/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a459:18ff:fe27:53a9/64 scope link
       valid_lft forever preferred_lft forever
4: veth40a1b521@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:fd:66:72:a5:81 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 169.254.0.1/32 scope global veth40a1b521
       valid_lft forever preferred_lft forever
    inet6 fe80::fcfd:66ff:fe72:a581/64 scope link
       valid_lft forever preferred_lft forever
root@hypervisor-nested:~# ip r
default via 169.254.0.1 dev eth0 proto static onlink
192.168.7.240 dev veth40a1b521 scope link
root@hypervisor-nested:~# lxc profile show test-profile-1
config: {}
description: 'Test profile #1'
devices:
  eth0:
    ipv4.address: 192.168.7.240
    nictype: routed
    parent: eth0
    type: nic
name: test-profile-1
used_by:
- /1.0/instances/test-profile-1
root@hypervisor-nested:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=43.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=43.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=43.3 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=116 time=43.0 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=116 time=43.5 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=116 time=43.0 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=116 time=43.2 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=116 time=43.3 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=116 time=43.0 ms
64 bytes from 8.8.8.8: icmp_seq=49 ttl=116 time=1056 ms
64 bytes from 8.8.8.8: icmp_seq=50 ttl=116 time=43.4 ms
64 bytes from 8.8.8.8: icmp_seq=51 ttl=116 time=44.5 ms
^C
--- 8.8.8.8 ping statistics ---
51 packets transmitted, 12 received, 76.4706% packet loss, time 50970ms
rtt min/avg/max/mdev = 42.955/127.738/1056.017/279.886 ms, pipe 2
# ^ the nested hypervisor loses packets while the Debian 10 container (test-profile-1) is running
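To localize where the replies disappear, capturing on each hop seems like the obvious next step (standard tcpdump invocations; interface names as in the listings above):

# Level 1, LAN side: do replies for .240 ever come back from the router?
tcpdump -ni eth1 icmp and host 192.168.7.240
# Level 1, veth toward the nested hypervisor:
tcpdump -ni vethe184de76 icmp
# Level 2, veth toward the Debian container:
tcpdump -ni veth40a1b521 icmp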
3. Debian 10 in container (test-profile-1)
root@test-profile-1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:c9:a2:a9:05:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.7.240/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ecc9:a2ff:fea9:5f0/64 scope link
       valid_lft forever preferred_lft forever
root@test-profile-1:~# ip r
default via 169.254.0.1 dev eth0
169.254.0.1 dev eth0 scope link
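The link-local next hop itself does resolve from inside the container (the successful ping to .204 below goes through it); sanity checks along these lines confirm it (output omitted):

root@test-profile-1:~# ip neigh show dev eth0   # expect 169.254.0.1 ... REACHABLE
root@test-profile-1:~# ping -c 3 169.254.0.1    # the veth peer on the nested hypervisor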
root@test-profile-1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 89ms
root@test-profile-1:~# ping 192.168.7.204
PING 192.168.7.204 (192.168.7.204) 56(84) bytes of data.
64 bytes from 192.168.7.204: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.7.204: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 192.168.7.204: icmp_seq=3 ttl=64 time=0.041 ms
^C
--- 192.168.7.204 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 9ms
rtt min/avg/max/mdev = 0.038/0.042/0.049/0.008 ms
# ^ I can ping the hypervisor directly above it, though.
IP forwarding is, of course, enabled on both hypervisors.
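(Enabled with the usual sysctl on both levels, in case the exact key matters:)

sysctl -w net.ipv4.ip_forward=1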
What would be the correct setup in this case, so that the most deeply nested container can reach the local network and the internet?
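For what it's worth, my working theory is that the first level never learns that 192.168.7.240 lives behind the hypervisor-nested veth (its routing table above only carries the /32 for .204), and nothing answers ARP for .240 on the LAN. Something like the following on the first-level hypervisor is my untested guess at a workaround, though I'd rather learn the proper LXD-native setup:

# Untested guess, run on the first-level hypervisor:
# route the nested container's /32 down the same veth as the nested hypervisor,
# and answer ARP for it on the LAN-facing interface.
ip route add 192.168.7.240/32 dev vethe184de76
ip neigh add proxy 192.168.7.240 dev eth1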