LXD and a DHCP server on the same machine

Hello @tomp. I mentioned you because you usually answer questions about LXD and networking.

I want to turn a laptop into an access point running a DHCP server that hands out IPs from a specific range.

Can this somehow affect LXD and its functionality?

(Also, if there is a guide available on this topic it would help me a lot. So far I have only managed to statically assign IPs.)

With this setup I imagine we can achieve the so-called WiFi Direct.

Thanks in advance for the help.

It can affect LXD if misconfigured.
You’ll want to ensure that both your DHCP and DNS servers outside of LXD are configured to only bind/listen to the interface they’re supposed to serve.

LXD does that for its own dnsmasq instance, but we’ve often seen other DHCP/DNS servers bind a port globally before LXD has a chance to bind it for the one interface it cares about, causing LXD’s networking to break.
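For example, if the hotspot's DHCP/DNS were served by a standalone dnsmasq (an assumption; a NetworkManager hotspot spawns its own dnsmasq instance), restricting it to one interface would look roughly like this:

```ini
# /etc/dnsmasq.conf sketch -- interface name and range are assumptions
interface=wlp6s0            # only answer on the hotspot interface
bind-interfaces             # bind to that interface's address, not 0.0.0.0
dhcp-range=10.42.0.50,10.42.0.150,12h
```

With bind-interfaces set, dnsmasq leaves ports 53/67 free on every other interface, so LXD's own dnsmasq can still bind lxdbr0.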

@stgraber Thanks a lot for the answer.
@tomp @stgraber I want your help please.

I found out that from the Ubuntu WiFi settings (I have Ubuntu 18.04 as the OS) you can turn your laptop into a WiFi hotspot. So I did. (Let's call it laptop1.)

ifconfig showed that my wlp6s0 interface has the inet IP 10.42.0.1.

From another laptop, which also runs Ubuntu 18.04 (let's call it laptop2), I connected to laptop1's hotspot and my wlo1 interface got the inet IP 10.42.0.76.

Ping worked fine.
I executed a Python script (as a client) that sends an image from laptop2 to laptop1; on laptop1 there was a server Python script. That also worked perfectly.

From laptop2 I ran

sudo lxc remote add isolatedlaptop 10.42.0.1

and after that

sudo lxc launch isolatedlaptop:customimage isolatedlaptop:test1

As a result, a container named test1 was created on laptop1 (the WiFi hotspot / access point).

After that, on laptop1 (the WiFi hotspot) I created a profile called routed_10.42.0.200 and configured it with the YAML below.

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 10.42.0.200/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 10.42.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 10.42.0.200
    nictype: routed
    parent: wlp6s0
    type: nic
name: routed_10.42.0.200
used_by:

After that, on laptop1, I ran

sudo lxc launch customImage VAMOS --profile default --profile routed_10.42.0.200

As a result, I created the container VAMOS with 10.42.0.200 as its IPv4 address.

When I run, from laptop2 (IP 10.42.0.76),

ping 10.42.0.200

it works OK.

But sadly, when I run it from inside the container VAMOS, this happens:

root@VAMOS:~# ping 10.42.0.76
PING 10.42.0.76 (10.42.0.76) 56(84) bytes of data.
From 169.254.0.1 icmp_seq=1 Destination Port Unreachable
From 169.254.0.1 icmp_seq=2 Destination Port Unreachable
From 169.254.0.1 icmp_seq=3 Destination Port Unreachable
From 169.254.0.1 icmp_seq=4 Destination Port Unreachable
From 169.254.0.1 icmp_seq=5 Destination Port Unreachable
From 169.254.0.1 icmp_seq=6 Destination Port Unreachable
From 169.254.0.1 icmp_seq=7 Destination Port Unreachable
From 169.254.0.1 icmp_seq=8 Destination Port Unreachable

On the other hand:

root@VAMOS:~# ping 10.42.0.1
PING 10.42.0.1 (10.42.0.1) 56(84) bytes of data.
64 bytes from 10.42.0.1: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 10.42.0.1: icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from 10.42.0.1: icmp_seq=3 ttl=64 time=0.095 ms
64 bytes from 10.42.0.1: icmp_seq=4 ttl=64 time=0.093 ms
64 bytes from 10.42.0.1: icmp_seq=5 ttl=64 time=0.059 ms
64 bytes from 10.42.0.1: icmp_seq=6 ttl=64 time=0.092 ms
64 bytes from 10.42.0.1: icmp_seq=7 ttl=64 time=0.111 ms
64 bytes from 10.42.0.1: icmp_seq=8 ttl=64 time=0.095 ms
64 bytes from 10.42.0.1: icmp_seq=9 ttl=64 time=0.092 ms

I imagine that something must be changed in the YAML file.

Can you help me solve this issue, please?
I want to be able to communicate from inside the container with another laptop on the same subnet (the subnet defined by the laptop acting as a hotspot).

Thanks in advance for any help. It is really important for me that this issue gets solved.

Any help? I have noticed that all the remote operations are done extremely quickly with WiFi Direct.

Please show the output of:

lxc config show VAMOS --expanded

And then inside the container:

ip r
ip a

And on the LXD host:

ip a
ip r

Thanks

Ok tomp, I will post it right away.

@tomp

lxc config show VAMOS --expanded

architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Ubuntu 18.04 LTS server (20200807)
  image.os: ubuntu
  image.release: bionic
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 10.42.0.200/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 10.42.0.1
                on-link: true
  volatile.base_image: e1baf803cb2469f7f91c2062fe4c4de54c27f158517daf329c228d3a2897dfa5
  volatile.eth0.host_name: vethc8c35f6b
  volatile.eth0.hwaddr: 00:16:3e:74:23:f1
  volatile.eth0.last_state.created: "false"
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    ipv4.address: 10.42.0.200
    nictype: routed
    parent: wlp6s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- routed_10.42.0.200
stateful: false
description: ""

root@VAMOS:~# ip r
default via 169.254.0.1 dev eth0
default via 10.42.0.1 dev eth0 proto static onlink

root@VAMOS:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 76:b8:6d:fd:67:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.0.200/32 brd 255.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::74b8:6dff:fefd:67be/64 scope link
valid_lft forever preferred_lft forever

tkasidakis@tkasidakis-Inspiron-5558:~/Desktop/fog$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 34:e6:d7:85:30:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.9/24 brd 192.168.2.255 scope global dynamic noprefixroute enp7s0
valid_lft 85935sec preferred_lft 85935sec
inet6 fe80::24a0:7531:41ac:d2fd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: wlp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e4:f8:9c:18:bc:76 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.1/24 brd 10.42.0.255 scope global noprefixroute wlp6s0
valid_lft forever preferred_lft forever
inet6 fe80::717a:97ea:5c2e:eb2c/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 scope global lxcbr0
valid_lft forever preferred_lft forever
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:35:9e:ff brd ff:ff:ff:ff:ff:ff
inet 10.48.91.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
inet6 fd42:e95:be58:50e::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fe35:9eff/64 scope link
valid_lft forever preferred_lft forever
7: vethd8210fea@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 5e:bb:98:ea:3d:6d brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethc8c35f6b@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:dc:86:9e:42:64 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 169.254.0.1/32 scope global vethc8c35f6b
valid_lft forever preferred_lft forever
inet6 fe80::fcdc:86ff:fe9e:4264/64 scope link
valid_lft forever preferred_lft forever

tkasidakis@tkasidakis-Inspiron-5558:~/Desktop/fog$ ip r
default via 192.168.2.1 dev enp7s0 proto dhcp metric 100
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 linkdown
10.42.0.0/24 dev wlp6s0 proto kernel scope link src 10.42.0.1 metric 600
10.42.0.200 dev vethc8c35f6b scope link
10.48.91.0/24 dev lxdbr0 proto kernel scope link src 10.48.91.1
169.254.0.0/16 dev lxcbr0 scope link metric 1000 linkdown
192.168.2.0/24 dev enp7s0 proto kernel scope link src 192.168.2.9 metric 100

Done, and ready to provide more info.

Thanks a lot for the help

You cannot specify an arbitrary default route address in your cloud-init config.

This is because routed NICs use point-to-point links back to the LXD host, and are not bridged at layer 2 (as the name suggests we are using layer 3 routing only).

As such there is only one default route you can use: the private address 169.254.0.1 (which LXD should set up for you on the host end of the point-to-point link between host and container).

See the example cloud-init config here: Lxd "routed" interface config - problem w Ubuntu 20.04 Host and WiFi?

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.1.201/32
            nameservers:
                addresses:
                - 8.8.8.8
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
devices:
  eth0:
    ipv4.address: 192.168.1.201
    nictype: routed
    parent: enp3s0
    type: nic

At that point the packets will go from container to host and then use the routing table on your LXD host to decide the next ‘hop’ to make.

@tomp Ok. Thanks for the answer.

So if I understand correctly, the problem is here:

routes:
- to: 0.0.0.0/0
  via: 10.42.0.1
  on-link: true

So you suggest changing the via line to via: 169.254.0.1?

Or is what I am trying to achieve not possible?

Sorry for asking so many questions, but I am not 100% familiar with YAML files and the network parameters.

I’m not really clear on what it is you’re trying to achieve.

I’m assuming you want to use the routed NIC type to expose a container into a Wifi network.

In which case, you’ve done the correct setup; it’s just that you’ve specified an incorrect default gateway in your cloud-init config.

The reason for this is that routed NICs use a point-to-point veth pair between the container and the host, without a layer 2 bridge being involved. Therefore it is not possible for the container’s routing table to specify an arbitrary L2 next-hop address; the next hop can only be one place: the LXD host (for which LXD sets up a special private address, 169.254.0.1, to reach it by).

The source address of your container can still be 10.42.0.200 as you have specified, and then when the packets arrive from the container to the LXD host, the LXD host acting as a router (hence the name) will then route packets to the next-hop as specified by its routing table.

Your LXD host’s routing table has an entry for 10.42.0.0/24, so packets destined for 10.42.0.0/24 should go out via the WiFi interface.

10.42.0.0/24 dev wlp6s0 proto kernel scope link src 10.42.0.1 metric 600
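The next-hop decision described above is just longest-prefix matching over the host's routing table. Here is a minimal Python sketch of that lookup, using only the stdlib `ipaddress` module and the `ip r` entries shown earlier in this thread (a simplified model, not how the kernel is actually implemented):

```python
import ipaddress

# Simplified model of the LXD host's routing table from `ip r` above:
# each destination prefix maps to the outgoing interface.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "enp7s0",             # default via 192.168.2.1
    ipaddress.ip_network("10.42.0.0/24"): "wlp6s0",          # the WiFi hotspot subnet
    ipaddress.ip_network("10.42.0.200/32"): "vethc8c35f6b",  # the routed NIC's host end
}

def next_hop_interface(dst: str) -> str:
    """Pick the most specific (longest-prefix) matching route, as the kernel does."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [net for net in routes if dst_ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop_interface("10.42.0.76"))   # laptop2: leaves via the WiFi NIC -> wlp6s0
print(next_hop_interface("10.42.0.200"))  # the container: uses the veth pair
```

This is why the container's packets to 10.42.0.76 should be forwarded out of wlp6s0 once they reach the host, while replies to 10.42.0.200 are steered back into the veth pair by the /32 route LXD added.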

@tomp I changed the YAML file and it now looks like this.

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 10.42.0.200/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 10.42.0.200
    nictype: routed
    parent: wlp6s0
    type: nic
name: routed_10.42.0.200
used_by:

Again

root@VAMOS:~# ping 10.42.0.76
PING 10.42.0.76 (10.42.0.76) 56(84) bytes of data.
From 169.254.0.1 icmp_seq=1 Destination Port Unreachable
From 169.254.0.1 icmp_seq=2 Destination Port Unreachable
From 169.254.0.1 icmp_seq=3 Destination Port Unreachable
From 169.254.0.1 icmp_seq=4 Destination Port Unreachable
From 169.254.0.1 icmp_seq=5 Destination Port Unreachable
From 169.254.0.1 icmp_seq=6 Destination Port Unreachable
From 169.254.0.1 icmp_seq=7 Destination Port Unreachable
From 169.254.0.1 icmp_seq=8 Destination Port Unreachable
From 169.254.0.1 icmp_seq=9 Destination Port Unreachable
From 169.254.0.1 icmp_seq=10 Destination Port Unreachable
From 169.254.0.1 icmp_seq=11 Destination Port Unreachable
From 169.254.0.1 icmp_seq=12 Destination Port Unreachable
From 169.254.0.1 icmp_seq=13 Destination Port Unreachable
From 169.254.0.1 icmp_seq=14 Destination Port Unreachable
From 169.254.0.1 icmp_seq=15 Destination Port Unreachable
From 169.254.0.1 icmp_seq=16 Destination Port Unreachable
From 169.254.0.1 icmp_seq=17 Destination Port Unreachable

What I am trying to do: I have my LXD host acting as an access point (10.42.0.1).

I want to create containers with IPs 10.42.0.X so they can communicate with other laptops that have connected to the access point and have, for example, 10.42.0.76.

The containers are created on the laptop which is simultaneously the AP.

Please show the output of ip a and ip r inside the container. cloud-init normally only applies on the first boot, not on every boot, so you may need to delete and re-create the container for changes to take effect.

@tomp

root@VAMOS:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3a:db:47:10:9f:8a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.0.200/32 brd 255.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::38db:47ff:fe10:9f8a/64 scope link
valid_lft forever preferred_lft forever

root@VAMOS:~# ip r
default via 169.254.0.1 dev eth0
default via 169.254.0.1 dev eth0 proto static onlink

Other laptops can ping the container, as I wrote in my initial post.
The container isn’t able to ping other laptops.

I believe I now understand what 169.254.0.1 is for.

Great.

So now it is just a process of elimination.

First, can you ping the LXD host 10.42.0.1 from the container?

If so, this shows the p2p connection is working.

Now if you cannot ping another host in the 10.42.0.0/24 network, then please can you run tcpdump -i wlp6s0 -nn host 10.42.0.200 on the host, whilst running a ping inside the container to the other host so we can see what is happening (or not happening).

Also, have you checked that it’s not a firewall running on the LXD host?

@tomp

I don’t know how to thank you, Thomas.

Yes I CAN ping 10.42.0.1.

If I understand right:

tkasidakis@tkasidakis-Inspiron-5558:~$ sudo tcpdump -i wlp6s0 -nn host 10.42.0.200
[sudo] password for tkasidakis:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wlp6s0, link-type EN10MB (Ethernet), capture size 262144 bytes

This was on the host.

After that, in the container running on this host (10.42.0.1),

I did

root@VAMOS:~# ping 10.42.0.76
PING 10.42.0.76 (10.42.0.76) 56(84) bytes of data.
From 169.254.0.1 icmp_seq=1 Destination Port Unreachable
From 169.254.0.1 icmp_seq=2 Destination Port Unreachable
From 169.254.0.1 icmp_seq=3 Destination Port Unreachable
From 169.254.0.1 icmp_seq=4 Destination Port Unreachable

After that, nothing appeared in the terminal where I was running tcpdump.

No, I didn’t check the firewall.

Remember, I can ping the container from another laptop (10.42.0.76).

The container can’t ping out.

So, what’s next?

Sounds to me like it is a firewall. Can you show the output of iptables-save from your LXD host?

Also can you re-run the ping and tcpdump test, but this time change the tcpdump command to:

sudo tcpdump -i any -nn host 10.42.0.200

@tomp

tkasidakis@tkasidakis-Inspiron-5558:~$ sudo tcpdump -i any -nn host 10.42.0.200
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:57:12.459041 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 1, length 64
13:57:12.459149 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 55936 unreachable, length 92
13:57:13.462176 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 2, length 64
13:57:13.462317 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 43891 unreachable, length 92
13:57:14.486164 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 3, length 64
13:57:14.486297 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 62228 unreachable, length 92
13:57:15.510168 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 4, length 64
13:57:15.510311 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 11958 unreachable, length 92
13:57:16.534161 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 5, length 64
13:57:16.534291 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 29783 unreachable, length 92
13:57:17.558067 ARP, Request who-has 10.42.0.200 tell 169.254.0.1, length 28
13:57:17.558153 ARP, Request who-has 169.254.0.1 tell 10.42.0.200, length 28
13:57:17.558321 ARP, Reply 169.254.0.1 is-at fe:e5:7b:69:d1:b6, length 28
13:57:17.558276 ARP, Reply 10.42.0.200 is-at 3a:db:47:10:9f:8a, length 28
13:57:17.558355 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 6, length 64
13:57:17.558486 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 6136 unreachable, length 92
13:57:18.582104 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 7, length 64
13:57:18.582183 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 8602 unreachable, length 92
13:57:19.606163 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 8, length 64
13:57:19.606258 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 11835 unreachable, length 92
13:57:20.630193 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 9, length 64
13:57:20.630292 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 24540 unreachable, length 92
13:57:21.654169 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 10, length 64
13:57:21.654260 IP 169.254.0.1 > 10.42.0.200: ICMP 10.42.0.76 protocol 1 port 41853 unreachable, length 92
13:57:22.678107 IP 10.42.0.200 > 10.42.0.76: ICMP echo request, id 802, seq 11, length 64

root@VAMOS:~# ping 10.42.0.76
PING 10.42.0.76 (10.42.0.76) 56(84) bytes of data.
From 169.254.0.1 icmp_seq=1 Destination Port Unreachable
From 169.254.0.1 icmp_seq=2 Destination Port Unreachable
From 169.254.0.1 icmp_seq=3 Destination Port Unreachable
From 169.254.0.1 icmp_seq=4 Destination Port Unreachable
From 169.254.0.1 icmp_seq=5 Destination Port Unreachable
From 169.254.0.1 icmp_seq=6 Destination Port Unreachable
From 169.254.0.1 icmp_seq=7 Destination Port Unreachable
From 169.254.0.1 icmp_seq=8 Destination Port Unreachable

What is the command for iptables-save on the LXD host, Thomas?

So that output shows that your container’s packets are being rejected by the LXD host (most likely a firewall).

iptables-save is the command to run to get the firewall rule output.
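The port-unreachable replies in the tcpdump capture are characteristic of an iptables REJECT rule (icmp-port-unreachable is the default reject type; on Ubuntu, ufw’s restrictive FORWARD policy commonly produces this). As a sketch of what the eventual fix might look like once iptables-save confirms it — interface names are the ones from this thread, and this is not a prescription:

```
# Sketch (iptables-save format): FORWARD rules permitting traffic between
# the container's veth and the WiFi interface. Apply only after inspecting
# the actual iptables-save output.
*filter
-A FORWARD -i vethc8c35f6b -o wlp6s0 -j ACCEPT
-A FORWARD -i wlp6s0 -o vethc8c35f6b -j ACCEPT
COMMIT
```

If ufw is the firewall in play, the equivalent would be ufw route allow rules between wlp6s0 and the veth interface rather than raw iptables edits; which applies here depends on what iptables-save actually shows.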