LXC containers cannot ping the outside world: "Network is unreachable"

Hi there,

I have managed to screw up my LXC installation, and I feel increasingly out of my depth trying to work out what is going on. I would really appreciate any help.

Briefly
To sum up the problem, I run:

lxc launch ubuntu: test
lxc exec test ping 8.8.8.8

and I get the response: connect: Network is unreachable. I would like my containers to have access to the internet and to the host device, ideally without having to reinstall Ubuntu on my host machine.

How I got to this point
About a week ago I had the setup that I wanted: my containers could connect to the internet and communicate with the host. After ignoring the project for a few days and coming back to it, I found that any instance I created could no longer connect to the internet; instead of the "Network is unreachable" error above, pings simply lost all their packets.

I did a number of things to try and fix the issue based on advice here and on Stack Overflow, including attempting to edit the LXD networks and installing dnsmasq as my host's DNS service, but one of these attempted fixes somehow got me to the "Network is unreachable" state. I tried a bunch of other things, but no luck.

After all these attempted fixes, I re-ran lxd init and created a new network bridge, hoping this would fix things. It didn't.

I'm really not certain how I ended up in this mess, but I am certain I've done one to five stupid things without realising it in search of a fix.

Potentially relevant information

Here is my default profile (from lxc profile show default):

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr1
    type: nic
  root:
    path: /
    pool: default2
    type: disk
name: default
used_by:
- /1.0/instances/test
joey@joey-ThinkPad-T460:~$ lxc network show lxdbr1 
config:
  ipv4.address: 10.206.9.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:f0bb:e6c5:98b2::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr1
type: bridge
used_by:
- /1.0/instances/test
managed: true
status: Created
locations:
- none

I would really appreciate any help. I'm just trying to get back to the installation I had before I managed to mess it up.

Thanks so much in advance!


Hi,

So let's get a picture of your current setup.

Please can you post the output of the following commands from both the LXD host and inside the container.

ip a
ip r

Thanks
Tom

Hey, thanks for your response!

From the host:

joey@joey-ThinkPad-T460:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether c8:5b:76:26:20:ec brd ff:ff:ff:ff:ff:ff
3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e4:b3:18:cb:63:cb brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.35/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp4s0
       valid_lft 604099sec preferred_lft 604099sec
    inet6 2a02:8109:8280:449:24bb:cf8:46ce:bbc6/64 scope global temporary dynamic 
       valid_lft 5375sec preferred_lft 2675sec
    inet6 2a02:8109:8280:449:56:1bbc:b49d:26e1/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 5375sec preferred_lft 2675sec
    inet6 fe80::7660:b649:4bd3:41aa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: wwp0s20f0u3c2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:1e:10:1f:00:00 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ba:9d:c7:54 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:baff:fe9d:c754/64 scope link 
       valid_lft forever preferred_lft forever
6: br-7a3159c1e87a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:39:cf:1e:68 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-7a3159c1e87a
       valid_lft forever preferred_lft forever
7: br-c14c120c20e2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:48:57:81:b1 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-c14c120c20e2
       valid_lft forever preferred_lft forever
8: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 0e:10:f6:ac:86:47 brd ff:ff:ff:ff:ff:ff
    inet 10.44.85.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:73cc:64fb:b40::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c10:f6ff:feac:8647/64 scope link 
       valid_lft forever preferred_lft forever
9: blahbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ee:a4:16:dd:86:73 brd ff:ff:ff:ff:ff:ff
    inet 10.56.166.1/24 scope global blahbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:6504:6fe8:f393::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::eca4:16ff:fedd:8673/64 scope link 
       valid_lft forever preferred_lft forever
10: lxdbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:46:45:b9:ae:d6 brd ff:ff:ff:ff:ff:ff
    inet 10.206.9.1/24 scope global lxdbr1
       valid_lft forever preferred_lft forever
    inet6 fd42:f0bb:e6c5:98b2::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::4c26:1ff:fe51:167d/64 scope link 
       valid_lft forever preferred_lft forever
12: veth912d21ef@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
    link/ether b6:bd:af:4c:76:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: veth78d2f66e@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr1 state UP group default qlen 1000
    link/ether 52:46:45:b9:ae:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
joey@joey-ThinkPad-T460:~$ ip r
default via 192.168.0.1 dev wlp4s0 proto dhcp metric 600 
10.44.85.0/24 dev lxdbr0 proto kernel scope link src 10.44.85.1 
10.56.166.0/24 dev blahbr0 proto kernel scope link src 10.56.166.1 
10.206.9.0/24 dev lxdbr1 proto kernel scope link src 10.206.9.1 
169.254.0.0/16 dev wlp4s0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-c14c120c20e2 proto kernel scope link src 172.18.0.1 linkdown 
172.20.0.0/16 dev br-7a3159c1e87a proto kernel scope link src 172.20.0.1 linkdown 
192.168.0.0/24 dev wlp4s0 proto kernel scope link src 192.168.0.35 metric 600

From the container:

root@test:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d7:6d:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fed7:6d28/64 scope link 
       valid_lft forever preferred_lft forever
root@test:~# ip r

ip r returns nothing in the container.

Hope that helps; let me know if any more info would be useful.

OK so the issue is that your container has no network config (IP addresses or routes).

What OS & version is your container running?

This particular container is Ubuntu 18.04.

Are you using netplan or /etc/network/interfaces to configure network inside the container?

Can you show me the contents of /etc/network/interfaces and /etc/netplan/50-cloud-init.yaml inside the container please.
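If it's easier, both files can be read from the host in one go with something like this (assuming the container is still called test):

lxc exec test -- cat /etc/network/interfaces /etc/netplan/50-cloud-init.yaml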

Also can you show me the output of ps aux inside the container after it has just booted.

I haven't changed anything inside the container as far as I'm aware, so I'm not personally using either netplan or interfaces.

Here is the output from the container:

root@test:~# cat /etc/network/interfaces 
# ifupdown has been replaced by netplan(5) on this system.  See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
#    sudo apt install ifupdown
root@test:~# cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true

and the output of ps aux:

joey@joey-ThinkPad-T460:~/projects/OPC_UA$ lxc launch ubuntu: test2
Creating test2
Starting test2
(venv) joey@joey-ThinkPad-T460:~/projects/OPC_UA$ lxc exec test2 ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  77560  5764 ?        Ss   10:03   0:00 /sbin/init
root        85  0.0  0.0  78492  7228 ?        Ss   10:03   0:00 /lib/systemd/systemd-journald
root        93  0.0  0.0  33348  2052 ?        Ss   10:03   0:00 /lib/systemd/systemd-udevd
systemd+   205  0.0  0.0  71856  3780 ?        Ss   10:03   0:00 /lib/systemd/systemd-networkd
root       206  0.0  0.0  61828  2012 ?        Ss   10:03   0:00 /lib/systemd/systemd-networkd-wait-online
systemd+   207  0.0  0.0  70636  3976 ?        Ss   10:03   0:00 /lib/systemd/systemd-resolved
root       208  0.0  0.0  37792  1972 ?        Rs+  10:03   0:00 ps aux

OK, so you're using netplan with DHCP, and it looks like systemd is waiting for the system to get a DHCP lease.
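As a quick check inside the container, networkctl should show eth0 still waiting for configuration if that's the case (this assumes systemd-networkd is managing the interface, which your ps output suggests):

lxc exec test -- networkctl status eth0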

Back on the host, can you show the output of:

ps aux | grep dnsmasq

And have you got any firewalls running on the host?
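If you're not sure, something like this will show any iptables rules or ufw state on the host (assuming ufw is installed at all):

sudo iptables -L -v -n
sudo ufw status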

Right, OK! Thanks for all your help.

 ps aux | grep dnsmasq
dnsmasq   1299  0.0  0.0  59980   248 ?        S    10:27   0:00 /usr/sbin/dnsmasq -x /run/dnsmasq/dnsmasq.pid -u dnsmasq -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new --local-service --trust-anchor={deleted} --trust-anchor={deleted}
joey      7037  0.0  0.0  21532  1080 pts/0    S+   12:24   0:00 grep --color=auto dnsmasq

I deleted those arguments because I was worried they might be credentials.

I haven't got any firewalls as far as I'm aware; I certainly don't think I've added any myself.

OK, so the issue looks like your LXD-instantiated dnsmasq instance isn't running.

You'd expect to see something like this in your process list on the host:

 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --no-ping --interface=lxdbr0 --listen-address=10.238.31.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/lib/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.238.31.2,10.238.31.254,1h --listen-address=fd42:be3f:a937:9505::1 --enable-ra --dhcp-range ::,constructor:lxdbr0,ra-stateless,ra-names -s lxd -S /lxd/ --conf-file=/var/lib/lxd/networks/lxdbr0/dnsmasq.raw -u lxd

The key argument is --interface=lxdbr0, or in your case --interface=lxdbr1.

But it looks like you don't have it running at all.

LXD may not have been able to start its own dnsmasq if the separate one you're running is listening on the DHCP and DNS ports on all interfaces.
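If you want to check which process is holding the DNS and DHCP ports, ss can show it; the filters below are standard ss syntax, nothing LXD-specific:

sudo ss -ulpn '( sport = :53 or sport = :67 )'
sudo ss -tlpn 'sport = :53'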

If you stop LXD and start it again, do you see anything in the logs about failing to start dnsmasq?

Sorry for being stupid, but I'm even managing to fail here…

sudo lxd shutdown --verbose
joey@joey-ThinkPad-T460:~$ sudo lxd --verbose
EROR[04-27|12:43:36] Failed to start the daemon: LXD is already running 
INFO[04-27|12:43:36] Starting shutdown sequence 
Error: LXD is already running

Do I shut down and restart using the CLI, or should I use systemctl?

How are you running LXD, is it from the snap?
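If it is the snap, either of these should restart the daemon cleanly (the systemd unit name is the standard one the snap creates):

sudo systemctl restart snap.lxd.daemon

or

sudo snap restart lxd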

I can't remember how I installed it originally, but I'm using lxc from the command line now, if that's what you mean?

Ah yes, I can see it in ~/snap, so I guess I did.

(venv) joey@joey-ThinkPad-T460:~/projects/OPC_UA$ sudo snap stop lxd
Stopped.
(venv) joey@joey-ThinkPad-T460:~/projects/OPC_UA$ sudo snap start lxd
Started.
(venv) joey@joey-ThinkPad-T460:~/projects/OPC_UA$ sudo snap logs lxd
2020-04-27T12:10:34Z lxd.daemon[30234]: - proc_loadavg
2020-04-27T12:10:34Z lxd.daemon[30234]: - proc_meminfo
2020-04-27T12:10:34Z lxd.daemon[30234]: - proc_stat
2020-04-27T12:10:34Z lxd.daemon[30234]: - proc_swaps
2020-04-27T12:10:34Z lxd.daemon[30234]: - proc_uptime
2020-04-27T12:10:34Z lxd.daemon[30234]: - shared_pidns
2020-04-27T12:10:34Z lxd.daemon[30234]: - cpuview_daemon
2020-04-27T12:10:34Z lxd.daemon[30234]: - loadavg_daemon
2020-04-27T12:10:34Z lxd.daemon[30234]: - pidfds
2020-04-27T12:10:36Z systemd[1]: Started Service for snap application lxd.activate.

OK, done this using snap. I can't see any logs for dnsmasq; would these be stored somewhere other than snap logs?

Can you show the output of netstat -tulnp | grep :53 on the host, please?

(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::53                   :::*                    LISTEN      -                   
udp        0      0 0.0.0.0:53              0.0.0.0:*                           -                   
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           -                   
udp6       0      0 :::53                   :::*                                -                   
udp6       0      0 :::5353                 :::*                                -                   

Sorry, I meant as root, so we can see the process names:

sudo netstat -tulnp | grep :53

Although we can see hints of the problem already: the lines showing :::53 and 0.0.0.0:53 are a process listening on the wildcard address (all addresses), which would prevent LXD from starting its own dnsmasq instance.

Ah, sorry about that, here's the root output:

tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      1299/dnsmasq        
tcp6       0      0 :::53                   :::*                    LISTEN      1299/dnsmasq        
udp        0      0 0.0.0.0:53              0.0.0.0:*                           1299/dnsmasq        
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           17106/spotify       
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           1142/avahi-daemon:  
udp6       0      0 :::53                   :::*                                1299/dnsmasq        
udp6       0      0 :::5353                 :::*                                1142/avahi-daemon:

So I would suggest you look at your dnsmasq config to make it listen on a specific address.
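For example, assuming your standalone dnsmasq reads /etc/dnsmasq.conf (it may instead be configured through a file in /etc/dnsmasq.d/), something like this would keep it off the LXD bridge. This is just a sketch; adjust the address to wherever you actually need it to serve DNS:

# serve DNS on the loopback address only
listen-address=127.0.0.1
# bind that address specifically rather than the 0.0.0.0 wildcard,
# leaving port 53 free on lxdbr1 for LXD's own dnsmasq
bind-interfaces

Then restart dnsmasq followed by LXD, so LXD gets a chance to spawn its own dnsmasq on lxdbr1:

sudo systemctl restart dnsmasq
sudo systemctl restart snap.lxd.daemon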
