Using LXD with ROS: exposing the container's IP address on a WiFi network

Hello,

I have been using LXD for a while to run different versions of ROS (Robot Operating System), and the time has finally come to put my stack on a real robot, which uses an RPI4. I would like to run some of the software on the robot itself and some on my laptop, so I need connectivity between my laptop and the container inside the LXD host.

I understand that with MACVLAN the guest can get its own IP, but on a wireless network this has not worked (I tried), and web searches confirm that this method won't work over WiFi. There is also the bridging method, which I have not tried yet.

What I have done is configure a proxy device for the ROS master port 11311:

lxc config device add rosdev proxy11311 proxy connect=tcp:127.0.0.1:11311 listen=tcp:0.0.0.0:11311

and on the remote computer I run:

export ROS_MASTER_URI=http://myrobot.local:11311
export ROS_HOSTNAME=mylaptop.local

This works partially: I can issue rostopic list and get the topic names, but rostopic echo /odom does not work. I can issue commands from my laptop to the robot, but I cannot get any data back from the robot.

I have investigated this problem and found that in order to use rostopic echo, I need to be able to connect to an arbitrary port on the container, like http://myrobot.local:[3XXXX]
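As a quick sanity check, one can probe whether a given robot port is reachable before digging further into ROS. This is a sketch: check_port is a helper function I am making up here (not part of ROS or LXD), and the actual publisher port has to come from rostopic info.

```shell
# check_port: report whether host:port accepts a TCP connection.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are required.
check_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo "open" || echo "closed"
}

check_port myrobot.local 11311    # proxied master port: should be reachable
# check_port myrobot.local 3XXXX  # substitute a publisher port reported by rostopic info
```

With only the master port proxied, the first check succeeds while the dynamically chosen publisher ports do not, which matches the rostopic echo failure.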

So I pretty much need the container's IP address exposed to the network so I can communicate with it. Given that both the robot and the laptop are on WiFi, what options do I have?

Will the bridging method also not work?

Any ideas/recommendations/help greatly appreciated.

Best regards,
C.

Have you tried the routed NIC type? It can allow you to pass an IP from the external network into your container, and it works with WiFi.

See https://linuxcontainers.org/lxd/docs/master/instances#nictype-routed
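For intuition, a routed NIC roughly amounts to the following host-side plumbing. This is a simplified sketch of what LXD sets up internally, not commands to run yourself; the veth name vethXYZ and the container IP 192.168.1.201 are illustrative.

```shell
# Approximate host-side effect of a routed NIC:
sysctl -w net.ipv4.ip_forward=1              # let the host forward between wlan0 and the veth
ip addr add 169.254.0.1/32 dev vethXYZ       # link-local next hop the container routes through
ip route add 192.168.1.201/32 dev vethXYZ    # steer the container's /32 into the veth
ip neigh add proxy 192.168.1.201 dev wlan0   # host answers ARP for the container on the LAN
```

Because the host routes and proxy-ARPs on the container's behalf instead of bridging frames, no new MAC address appears on the wireless side, which is why this works where MACVLAN and bridging fail on WiFi.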

Hello,

I followed https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/ to set up a routed NIC. I am running Ubuntu 18.04 64-bit on a Raspberry Pi. Unfortunately, lxc list shows the container starting with an IP address momentarily, but the second time around it stops showing a network address.

Here is the profile I am using:

config:
  raw.idmap: "both 1000 1000"
  user.user-data: |
    #cloud-config
    package_update: no
    package_upgrade: no
    packages:
      - libnss-mdns
      - net-tools
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
        - 192.168.1.201/32
        nameservers:
          addresses:
          - 8.8.8.8
          search: []
        routes:
        - to: 0.0.0.0/0
          via: 169.254.0.1
          on-link: true
description: routed
devices:
  eth0:
    ipv4.address: 192.168.1.201
    nictype: routed
    parent: wlan0
    type: nic
name: routed_192.168.1.201
used_by: []

Best Regards,
C.

It's likely that the network config inside the container is trying to do DHCP (which doesn't work with a routed NIC) and in the process is wiping out the static config set by LXD.

Try disabling DHCP in the netplan config inside the container, or update it to set the IP address, nameserver and default route statically (this is what the cloud-init config in @simos' blog tries to achieve, but perhaps you've used an image that doesn't have cloud-init installed).

Hello,

I have specifically disabled DHCP in the config file, and it still behaves the same. However, if I put the relevant part of the profile into /etc/netplan/50-cloud-init.yaml manually, it actually works. I am certain I am using an image with cloud-init installed, and I think something in cloud-init is causing the problem.

I can ping the host from the container, and I can actually ping out to the outside network, but I get no reply. I have observed with tcpdump that the pings actually get transmitted to the outside network, but hosts on the outside network fail to resolve an ARP entry for the container's IP address. Is there any way to fix this? Maybe add a static entry to the ARP table of the outside host?
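For reference, the manual workaround mentioned above amounts to something like the following inside the container. This is a sketch; the addresses mirror the profile posted earlier, and note that a file under /etc/netplan needs the top-level network: key that the cloud-init user.network-config form omits.

```shell
# Write the static netplan config that cloud-init was supposed to apply,
# then activate it (run inside the container, as root).
cat > /etc/netplan/50-cloud-init.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      addresses:
      - 192.168.1.201/32
      nameservers:
        addresses:
        - 8.8.8.8
      routes:
      - to: 0.0.0.0/0
        via: 169.254.0.1
        on-link: true
EOF
netplan apply
```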

Please can you show the output of ip a and ip r both on the LXD host and inside the container.

Please also provide the output of ip neigh show proxy on the host.

Thanks

'ip a' on host:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 9c:8e:99:3d:50:d5 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute enp0s25
valid_lft 603824sec preferred_lft 603824sec
inet6 fe80::d604:8982:8378:b629/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a0:88:b4:6f:e1:c4 brd ff:ff:ff:ff:ff:ff
4: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:f0:c5:ff brd ff:ff:ff:ff:ff:ff
inet 10.16.133.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
10: veth5b9d7777@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:59:18:6b:51:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.0.1/32 scope global veth5b9d7777
valid_lft forever preferred_lft forever
inet6 fe80::fc59:18ff:fe6b:5131/64 scope link tentative
valid_lft forever preferred_lft forever

'ip r' on host:

default via 192.168.0.1 dev enp0s25 proto dhcp metric 100
10.16.133.0/24 dev lxdbr0 proto kernel scope link src 10.16.133.1 linkdown
169.254.0.0/16 dev lxdbr0 scope link metric 1000 linkdown
192.168.0.0/24 dev enp0s25 proto kernel scope link src 192.168.0.11 metric 100
192.168.0.200 dev veth5b9d7777 scope link

'ip neigh show proxy' on host:

169.254.0.1 dev veth5b9d7777 proxy
192.168.0.200 dev enp0s25 proxy

'ip a' on container:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6a:6a:ae:85:ce:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::686a:aeff:fe85:ce21/64 scope link
valid_lft forever preferred_lft forever

'ip r' on container:
returns nothing

Best regards,
C.

Ah, so the issue is inside the container: you'd expect to see a default route to 169.254.0.1, and your eth0 inside the container has no IP address either.

Can you show me the netplan config for eth0 please?

Hello,

By netplan config for eth0, you mean the file /etc/netplan/50-cloud-init.yaml, correct?

Here is my /etc/netplan/50-cloud-init.yaml in the container (I deleted commented lines):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

At the beginning the container gets the correct IP; then something happens and this file gets overwritten, or something along those lines.

Best regards, and I really appreciate your help.

PS: I tried both Ubuntu 18.04 and 20.04 as the container, with the same results.

Yeah, so that will be wiping out the container's network config set by LXD: it starts a DHCP client that removes the IP address and routes.

That's why the profile you originally posted had this for the netplan config pushed by cloud-init:

version: 2
ethernets:
  eth0:
    addresses:
    - 192.168.1.201/32
    nameservers:
      addresses:
      - 8.8.8.8
      search: []
    routes:
    - to: 0.0.0.0/0
      via: 169.254.0.1
      on-link: true

But it doesn't look like cloud-init has applied it; if you apply it manually to that file, it should work fine.

Well, is there a log somewhere? This is for a robot configuration, so I kind of need it to work without manual intervention.

I believe cloud-init will only run on first boot, so have you tried it with a new container and that profile?

Here’s an example of it working:

Create a routed profile containing (note the container’s IP 192.168.1.201 needs to be changed in two places):

lxc profile show routed
config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.1.201/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: routed

Create a container using that profile. I'm using the LXD-generated Ubuntu images, which are cloud enabled:

lxc launch images:ubuntu/focal/cloud c1 --profile routed

lxc ls
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+
| NAME |  STATE  |         IPV4          |                      IPV6                       |      TYPE       | SNAPSHOTS |
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+
| c1   | RUNNING | 192.168.1.201 (eth0)  |                                                 | CONTAINER       | 0         |
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+

lxc exec c1 -- ping linuxcontainers.org
PING linuxcontainers.org (149.56.148.5) 56(84) bytes of data.
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=1 ttl=51 time=88.4 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=2 ttl=51 time=88.7 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=3 ttl=51 time=88.9 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=4 ttl=51 time=88.9 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=5 ttl=51 time=89.1 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=6 ttl=51 time=88.5 ms
^C
--- linuxcontainers.org ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5005ms
rtt min/avg/max/mdev = 88.444/88.770/89.142/0.243 ms

Also tried the official Ubuntu cloud images (that include cloud-init):

lxc launch ubuntu:20.04 c1 --profile routed

lxc ls
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+
| NAME |  STATE  |         IPV4          |                      IPV6                       |      TYPE       | SNAPSHOTS |
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+
| c1   | RUNNING | 192.168.1.201 (eth0)  |                                                 | CONTAINER       | 0         |
+------+---------+-----------------------+-------------------------------------------------+-----------------+-----------+

lxc exec c1 -- ip r
default via 169.254.0.1 dev eth0 proto static onlink 

lxc exec c1 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8a:ac:1a:9a:08:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.201/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::88ac:1aff:fe9a:810/64 scope link 
       valid_lft forever preferred_lft forever

lxc exec c1 -- ping linuxcontainers.org
PING linuxcontainers.org (149.56.148.5) 56(84) bytes of data.
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=1 ttl=51 time=88.4 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=2 ttl=51 time=88.7 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=3 ttl=51 time=88.9 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=4 ttl=51 time=88.9 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=5 ttl=51 time=89.1 ms
64 bytes from rproxy.stgraber.org (149.56.148.5): icmp_seq=6 ttl=51 time=88.5 ms
^C
--- linuxcontainers.org ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5005ms
rtt min/avg/max/mdev = 88.444/88.770/89.142/0.243 ms

Hello,

I have replicated the instructions above and confirm that they work! I have tested with both a normal Ethernet parent and a WiFi parent, and it works on both.

Previously, with my routed profile, I did try lxc launch ubuntu:20.04 c1 --profile routed --profile default and it failed. But now the command lxc launch ubuntu:20.04 c1 --profile routed2 works, and not only with the cloud-enabled images but also with plain 20.04 and 18.04.

So I made edits to the profile from your post, importing parts from my previous profile, and launched it, again without problems.

So I think launching the container with --profile default --profile routed was the problem.
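If profile stacking was indeed the culprit, one way to check (my guess at a debugging step; c1 is the container name from the commands above) is to inspect the merged configuration the container actually received:

```shell
# Show the fully expanded config after all profiles are merged; a second
# eth0 device inherited from the default profile would show up here.
lxc config show c1 --expanded
```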

Thank you very much for your help. Now that I have it working on my laptop, I will attempt the same setup on an RPI4.

Hello @tomp,

I have replicated the setup on an RPI4 running Ubuntu Server 20.04, using the Raspberry Pi's wlan0 device. Unfortunately it did not work at first, but then I used tcpdump and obtained:

02:31:00.876208 IP 192.168.0.200 > 192.168.0.10: ICMP echo request, id 334, seq 5, length 64
02:31:00.876322 ARP, Request who-has 192.168.0.200 tell 192.168.0.10, length 28

The RPI4's wlan0 device does not answer the ARP request the way a normal laptop's wireless device does. If, on the laptop, I add a static ARP entry mapping the container's routed IP address to the RPI4's MAC address, such as:

sudo arp -s 192.168.0.200 dc:a6:32:33:32:2e

where the IP address is the container's and the MAC address belongs to the RPI4's wlan0, then I am able to ping and ssh from my laptop to the container inside the RPI4.

But unfortunately, with this approach I am not able to reach the external internet, because the ADSL router at 192.168.0.1 (the default gateway to the internet) does not have that static ARP entry.

Is there a way I can make the RPI4 (host) send ARP replies announcing that it has the IP address of the container?

Best regards,
Can

Strange.

Can you describe your network layout, as I'm not understanding where ADSL comes into this (ADSL normally uses PPPoA or PPPoE)?

Can you show the output of ip a, ip r and ip neigh show proxy on the RPI host?

Hello,

By ADSL I mean the internet router at my house, which is also the WiFi access point. A typical home-user setup. All my hosts use it as the gateway. Since it has no idea about the MAC address of the container, the container fails to reach the outside internet.

I actually ran sudo ifconfig wlan0 promisc on the host (RPI4), and now the container is able to reach the gateway and other hosts without a static entry in their ARP tables. I don't know, however, whether this can have other side effects.

So for the same setup to work with the RPI4 as host, using wlan0, the wlan0 interface must be in promiscuous mode.

Interesting. Although if the RPI isn't receiving ARP packets without being in promiscuous mode, then it wouldn't work even with its own host networking (remember that proxy ARP actually hides the container's IP behind the RPI's own MAC address, so it's not as if another MAC is at play here). It might be that the RPI or the WiFi AP is doing some kind of power saving that prevents the broadcast from the other device from being delivered in a reasonable time.

It would be interesting to disable promiscuous mode and then ping the router from the container, to see if that triggers it into learning the association between the IP and the RPI's MAC.
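The suggested experiment might look like this (a sketch; c1 is a placeholder container name, and the addresses are taken from earlier in the thread):

```shell
# On the RPI4 host: return the WiFi interface to normal mode
sudo ip link set wlan0 promisc off
# From inside the container: ping the router so proxy ARP on the host
# gets a chance to associate the container's IP with the RPI4's MAC
lxc exec c1 -- ping -c 3 192.168.0.1
# Then, from another LAN host without a static ARP entry, try:
#   ping 192.168.0.200
```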

Great idea, will try. It should at least work for a while.