Public IP in the CT

Hello,

I’m trying to set a public IP into a CT (ipv4 failover I get from OVH).

I also need that IP to be used for the outbound traffic from the CT (and therefore also as it leaves the host).
I mean that the CT's outbound traffic must not carry the host's IP address.

I have read several threads but had no success with what I need.

Any help would be appreciated, thank you

You’d typically do this with:

lxc network set lxdbr0 ipv4.routes 1.2.3.4/32

This adds a route on the host that sends traffic for your container's IP to the right bridge.

Then you need to make sure your container has that IP on its main network interface.
For testing you can just do it with:

ip -4 route add dev eth0 1.2.3.4/32

At which point you should be able to access the container using that IP, but as you mentioned, container traffic may still show up with the host IP.

You can avoid that by completely statically configuring your container with:

auto eth0
iface eth0 inet static
    address 1.2.3.4
    netmask 255.255.255.255
    gateway 10.0.3.1

    pre-up ip -4 route add dev eth0 10.0.3.1/32

The MASQUERADE rule that LXD maintains is scoped so that only traffic using the bridge's subnet is NATed, so if your container sends traffic out using its public IP, it won't get NATed by the rules that LXD added to iptables.
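For reference, the NAT rule LXD maintains looks roughly like this (a sketch assuming lxdbr0's subnet is 10.0.3.0/24, as in the example above; check `iptables -t nat -S` on your host for the exact rule):

```shell
# Only traffic *sourced from* the bridge subnet (and not destined back
# to it) is masqueraded to the host IP:
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
# Packets leaving the container with the public IP as source don't
# match -s 10.0.3.0/24, so they go out un-NATed.
```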


Ok thank you Stéphane, you are doing an amazing job here.


Could we assign the host's public address to lxdbr0 and use only the public IPs, instead of the private ones, for the bridge and inside the CT?
And then route all the traffic (0.0.0.0/0) to lxdbr0?

I ask this because I have several IPv4 blocks to route to the bridge, and I'm not sure the private IPs have any real use for me.

Has anyone succeeded in having a CT use a public IP rather than the IP allocated by the bridge? I'd really like to use OVH failover IPs in containers in order to make them act like public VMs. I've read several discussion threads on the subject, with valuable insights from stgraber, but I personally wasn't able to get a result.

For OVH’s failover IPs, all you really need to do with a recent LXD (2.21 currently) would be:

lxc network set lxdbr0 ipv4.routes PUBLIC-IP/32

If you have multiple IPs, you can either use a CIDR subnet covering the additional failover IPs, or set multiple individual IPs, comma separated.
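For example, with hypothetical documentation addresses, a small shell loop can build the comma-separated value before handing it to `lxc network set`:

```shell
# Hypothetical failover IPs (RFC 5737 documentation addresses):
ips="192.0.2.10 192.0.2.11 192.0.2.12"

# Build "IP/32,IP/32,..." and strip the trailing comma:
routes=$(for ip in $ips; do printf '%s/32,' "$ip"; done)
routes=${routes%,}

echo "$routes"   # 192.0.2.10/32,192.0.2.11/32,192.0.2.12/32
# lxc network set lxdbr0 ipv4.routes "$routes"
```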

Then in the container, all you’ll need to do is add a static IPv4 address, manually with:

ip -4 addr add dev eth0 PUBLIC-IP/32 preferred_lft 0

That line can be put as a post-up of your existing (DHCP) eth0 interface. The container will continue to pull a dynamic local IP from the LXD DHCP server but will then also have its public IP associated with it and will prefer it for outgoing traffic.

root@vorash:~# lxc launch ubuntu:16.04 c1
Creating c1
Starting c1
root@vorash:~# lxc list c1
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.245.12 (eth0) | fd42:3f9b:e713:ce99:216:3eff:fe1f:f9e7 (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
root@vorash:~# ping 149.56.148.6
PING 149.56.148.6 (149.56.148.6) 56(84) bytes of data.
^C
--- 149.56.148.6 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

root@vorash:~# lxc network set lxdbr0 ipv4.routes 149.56.148.6/32
root@vorash:~# lxc exec c1 bash
root@c1:~# ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0
root@c1:~# exit
root@vorash:~# lxc list c1
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 149.56.148.6 (eth0)  | fd42:3f9b:e713:ce99:216:3eff:fe1f:f9e7 (eth0) | PERSISTENT | 0         |
|      |         | 10.178.245.12 (eth0) |                                               |            |           |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
root@vorash:~# ping 149.56.148.6
PING 149.56.148.6 (149.56.148.6) 56(84) bytes of data.
64 bytes from 149.56.148.6: icmp_seq=1 ttl=64 time=0.062 ms
^C
--- 149.56.148.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@vorash:~# 

The above is on an OVH server with failover IPs.


To make this persistent, that ip -4 addr should be added to /etc/network/interfaces as part of the eth0 entry:

auto eth0
iface eth0 inet dhcp
    post-up ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0

Fantastic! It works like a charm. Thanks a lot.

One remark. If the host runs Ubuntu 16.04, and the container runs the same OS, this works in the container:

auto eth0
iface eth0 inet dhcp
    post-up ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0

But if the host runs Ubuntu 17, it doesn't work; it is necessary to define the route manually.
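If you do end up defining the route manually inside the container, a sketch of what that could look like (assuming the bridge gateway is 10.0.3.1, as in the static example earlier; adjust to your lxdbr0 subnet):

```shell
# Make sure the container can reach the bridge gateway, then use it
# as the default route (run inside the container):
ip -4 route add 10.0.3.1/32 dev eth0
ip -4 route add default via 10.0.3.1 dev eth0
```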

As a side note, would you say it's "risky" to operate an LXD host on Ubuntu 17?

I'd certainly strongly recommend sticking to Ubuntu LTS in general; the support length of non-LTS releases hardly ever makes it worth it, and they also don't get quite as many bugfixes as the LTS releases.

With this method, say my LXD host IP is host_ip and my container's public IP is container_ip. If my container network is defined as you suggested, with the standard LXD lxdbr0 bridge, lxc list gives me this:

container_ip (eth0)
10.14.127.98 (eth0)

c1 is reachable from the outside at container_ip. But when the container reaches any host, it is seen as host_ip, not container_ip. Is there any way to make the container appear as container_ip rather than host_ip when initiating connections?
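For reference, one way to check which source address the container actually picks for outgoing traffic is `ip route get` inside the container (8.8.8.8 is just an arbitrary outside address):

```shell
# Ask the kernel which route and source address it would use to reach
# an external host; the "src" field in the output is the address
# outbound connections will carry:
ip -4 route get 8.8.8.8
```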

Regarding the multiple IPs being comma separated, should those IPs include the “/32” part?

So, should it be:

lxc network set lxdbr0 ipv4.routes PUBLIC-IPa/32,PUBLIC-IPb/32,PUBLIC-IPc/32,PUBLIC-IPd/32

or

lxc network set lxdbr0 ipv4.routes PUBLIC-IPa,PUBLIC-IPb,PUBLIC-IPc,PUBLIC-IPd

CIDR should be preferred. I think both syntaxes would work, as they're both accepted by iproute2, but CIDR is more explicit.

I followed this example exactly as well as the configs in your following and previous messages. Result is that I could ping and even ssh to the public IP (as well as the private IP set by LXD) from within the host. However, from outside of the server I was not able to connect to that IP. Using Xenial host & Xenial container.

Adding that IP to my server in /etc/network/interfaces on the host resulted in the host responding to that IP instead of the container when I connect remotely.

I’m guessing that I need to configure /etc/network/interfaces on the host in some way to pass the IPs to lxdbr0, but I think that’s where I’m stuck.

Any idea what part I have wrong?

This no longer works in 18.04

On a vanilla LXD setup where I have both host and guest on Ubuntu 16.04 LTS,
I managed to get the public IP assigned to the container and was able to successfully ping the container's public IP from my local desktop over the public internet. GREAT!
BUT there seems to be something missing; this is what I am facing right now:

H1 can ping C1's internal IP.
H1 can ping C1's public IP.

BUT C1's public IP is not reachable from the local desktop.

So, a couple of things I observed in order to finally get C1's public IP accessible over the internet from my desktop:

  1. The command lxc network set lxdbr0 ipv4.routes 149.56.148.6/32, which is supposed to route traffic to the bridge, works fine from the host when referring to the container's public IP, but does not result in a successful ping response if we ping the container's IP from the desktop via the public internet.
  2. I had to remove the route manually on the host and add it again after the container started to get this to work.
  3. The order in which step 1 is performed, and whether the container's public IP is already set in the container's interfaces.d/c50xxxxx file, made the difference.
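For what it's worth, the remove/re-add cycle in step 2 can be driven through LXD itself instead of raw ip route commands (a sketch; substitute your own failover IP):

```shell
# Clear the route LXD manages on lxdbr0, then set it again once the
# container is up and has its public address configured:
lxc network unset lxdbr0 ipv4.routes
lxc network set lxdbr0 ipv4.routes 149.56.148.6/32
```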

Here are some notes I took.

List containers

root@ubuntu:~# lxc list
+-----------+---------+-------------------+------+------------+-----------+
|   NAME    |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+-------------------+------+------------+-----------+
| webserver | RUNNING | 10.0.8.100 (eth0) |      | PERSISTENT | 0         |
+-----------+---------+-------------------+------+------------+-----------+

Display the network interfaces visible to LXD and what uses them

root@ubuntu:~# lxc network list
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| eno1   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| eno2   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 1       |
+--------+----------+---------+-------------+---------+

Display lxdbr0 as LXD sees it

root@ubuntu:~# lxc network show lxdbr0
config:
  ipv4.address: 10.0.8.1/24
  ipv4.dhcp.ranges: 10.0.8.2-10.0.8.254
  ipv4.nat: "true"
  ipv6.address: fd42:614c:7ebe:916c::1/64
  ipv6.dhcp.stateful: "true"
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/containers/webserver
managed: true
status: Created
locations:
- none

Routes on the host

root@ubuntu:~# ip route show
default via 2xx.1xx.151.49 dev eno1 onlink
10.0.8.0/24 dev lxdbr0  proto kernel  scope link  src 10.0.8.1
2xx.1xx.151.48/28 dev eno1  proto kernel  scope link  src 2xx.1xx.151.50

iptables rules on the host

root@ubuntu:~# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps /* generated for LXD network lxdbr0 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps /* generated for LXD network lxdbr0 */

List containers.

root@ubuntu:~# lxc list
+-----------+---------+-------------------+------+------------+-----------+
|   NAME    |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+-------------------+------+------------+-----------+
| webserver | RUNNING | 10.0.8.100 (eth0) |      | PERSISTENT | 0         |
+-----------+---------+-------------------+------+------------+-----------+

Pinging the public IP to be assigned to the container.

root@ubuntu:~# ping 2xx.1xx.151.51
PING 2xx.1xx.151.51 (2xx.1xx.151.51) 56(84) bytes of data.
^C
--- 2xx.1xx.151.51 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

No response to the ping, so we will use this public IP for the container.

**** This step should be performed after the container's IP is assigned and the container restarted:

root@ubuntu:~# lxc network set lxdbr0 ipv4.routes 2xx.1xx.151.51/32

This needs to be set (manually) after the container's IP is set in its config file!

Log in to the container
root@ubuntu:~# lxc exec webserver bash

Check the interfaces on container.
root@webserver:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:28:0f:17
          inet addr:10.0.8.100  Bcast:10.0.8.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe28:f17/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:138 errors:0 dropped:0 overruns:0 frame:0
          TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:398164 (398.1 KB)  TX bytes:9371 (9.3 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

First, set the container's public IP to persist across reboots by placing the line below in the "50-cloud-init.cfg" file at "/etc/network/interfaces.d":
post-up ip -4 addr add dev eth0 2xx.1xx.151.51/32 preferred_lft 0
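Note that a bare post-up line only takes effect inside an iface stanza, so the file would need to look something like this (a sketch; keep whatever stanza cloud-init already generated for eth0 and append the post-up line to it):

```text
# /etc/network/interfaces.d/50-cloud-init.cfg (sketch)
auto eth0
iface eth0 inet dhcp
    post-up ip -4 addr add dev eth0 2xx.1xx.151.51/32 preferred_lft 0
```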


Pinging the local and public IP addresses from within the container.

root@webserver:~# ping 10.0.8.100
PING 10.0.8.100 (10.0.8.100) 56(84) bytes of data.
64 bytes from 10.0.8.100: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 10.0.8.100: icmp_seq=2 ttl=64 time=0.022 ms
^C
--- 10.0.8.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.022/0.038/0.055/0.017 ms


root@webserver:~# ping 2xx.1xx.151.51
PING 2xx.1xx.151.51 (2xx.1xx.151.51) 56(84) bytes of data.
64 bytes from 2xx.1xx.151.51: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 2xx.1xx.151.51: icmp_seq=2 ttl=64 time=0.022 ms
^C
--- 2xx.1xx.151.51 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.022/0.035/0.048/0.013 ms

Confirming that the web pages are accessible via the public and internal IPs from within the container.

root@webserver:~# cd
root@webserver:~# service apache2 restart
root@webserver:~# wget http://10.0.8.100
--2019-01-21 23:06:10--  http://10.0.8.100/
Connecting to 10.0.8.100:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.3’

index.html.3                            100%[============================================================================>]  11.06K  --.-KB/s    in 0s

2019-01-21 23:06:10 (105 MB/s) - ‘index.html.3’ saved [11321/11321]

root@webserver:~# wget 2xx.1xx.151.51
--2019-01-21 23:06:30--  http://2xx.1xx.151.51/
Connecting to 2xx.1xx.151.51:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.4’

index.html.4                            100%[============================================================================>]  11.06K  --.-KB/s    in 0s

2019-01-21 23:06:30 (118 MB/s) - ‘index.html.4’ saved [11321/11321]

Great

From the host, I tried to ping the container's public and internal IP addresses.

Pinging the internal address was successful.

Pinging the public address was not successful - understood.
So I executed the command:

lxc network set lxdbr0 ipv4.routes 2xx.1xx.151.51/32

This resulted in a successful response from the public IP 2xx.1xx.151.51 when run on the HOST.

WAIT - the container's IP was still not reachable from the local desktop.

root@ubuntu:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         2xx.1xx.151.49  0.0.0.0         UG    0      0        0 eno1
10.0.8.0        0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
2xx.1xx.151.48  0.0.0.0         255.255.255.240 U     0      0        0 eno1
root@ubuntu:~# ip route show
default via 2xx.1xx.151.49 dev eno1 onlink
10.0.8.0/24 dev lxdbr0  proto kernel  scope link  src 10.0.8.1 linkdown
2xx.1xx.151.48/28 dev eno1  proto kernel  scope link  src 2xx.1xx.151.50
root@ubuntu:~# lxc start webserver
root@ubuntu:~# lxc list
+-----------+---------+-----------------------+------+------------+-----------+
|   NAME    |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+------+------------+-----------+
| webserver | RUNNING | 2xx.1xx.151.51 (eth0) |      | PERSISTENT | 0         |
|           |         | 10.0.8.100 (eth0)     |      |            |           |
+-----------+---------+-----------------------+------+------------+-----------+
root@ubuntu:~# ping 2xx.1xx.151.51
PING 2xx.1xx.151.51 (2xx.1xx.151.51) 56(84) bytes of data.
^C
--- 2xx.1xx.151.51 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

root@ubuntu:~# ping 10.0.8.100
PING 10.0.8.100 (10.0.8.100) 56(84) bytes of data.
64 bytes from 10.0.8.100: icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from 10.0.8.100: icmp_seq=2 ttl=64 time=0.029 ms
^C
--- 10.0.8.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.029/0.049/0.069/0.020 ms
root@ubuntu:~# lxc network set lxdbr0 ipv4.routes 2xx.1xx.151.51/32
root@ubuntu:~# ping 2xx.1xx.151.51
PING 2xx.1xx.151.51 (2xx.1xx.151.51) 56(84) bytes of data.
64 bytes from 2xx.1xx.151.51: icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from 2xx.1xx.151.51: icmp_seq=2 ttl=64 time=0.030 ms
^C
--- 2xx.1xx.151.51 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.030/0.066/0.103/0.037 ms


root@ubuntu:~# wget http://10.0.8.100
--2019-01-21 19:02:06--  http://10.0.8.100/
Connecting to 10.0.8.100:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.13’

index.html.13                           100%[============================================================================>]  11.06K  --.-KB/s    in 0.002s

2019-01-21 19:02:06 (6.49 MB/s) - ‘index.html.13’ saved [11321/11321]

root@ubuntu:~# wget http://2xx.1xx.151.51
--2019-01-21 19:02:18--  http://2xx.1xx.151.51/
Connecting to 2xx.1xx.151.51:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.14’

index.html.14                           100%[============================================================================>]  11.06K  --.-KB/s    in 0s

2019-01-21 19:02:18 (195 MB/s) - ‘index.html.14’ saved [11321/11321]

At this point the public IP still cannot be pinged from the local desktop. *******

So I added this address on the HOST, which resulted in the container being reachable from the local desktop:

root@ubuntu:~# ip address add 2xx.1xx.151.51/32 dev lxdbr0
root@ubuntu:~# lxc list
+-----------+---------+-----------------------+------+------------+-----------+
|   NAME    |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+------+------------+-----------+
| webserver | RUNNING | 2xx.1xx.151.51 (eth0) |      | PERSISTENT | 0         |
|           |         | 10.0.8.100 (eth0)     |      |            |           |
+-----------+---------+-----------------------+------+------------+-----------+
root@ubuntu:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         2xx.1xx.151.49  0.0.0.0         UG    0      0        0 eno1
10.0.8.0        0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
2xx.1xx.151.48  0.0.0.0         255.255.255.240 U     0      0        0 eno1
2xx.1xx.151.51  0.0.0.0         255.255.255.255 UH    0      0        0 lxdbr0
root@ubuntu:~# ip route show
default via 2xx.1xx.151.49 dev eno1 onlink
10.0.8.0/24 dev lxdbr0  proto kernel  scope link  src 10.0.8.1
2xx.1xx.151.48/28 dev eno1  proto kernel  scope link  src 2xx.1xx.151.50
2xx.1xx.151.51 dev lxdbr0  proto static  scope link
root@ubuntu:~# wget http://2xx.1xx.151.51
--2019-01-21 19:04:15--  http://2xx.1xx.151.51/
Connecting to 2xx.1xx.151.51:80... failed: Connection refused.   <------ ISSUE HERE!!!
root@ubuntu:~# wget http://10.0.8.100
--2019-01-21 19:06:19--  http://10.0.8.100/
Connecting to 10.0.8.100:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.15’

index.html.15                           100%[============================================================================>]  11.06K  --.-KB/s    in 0s

2019-01-21 19:06:19 (232 MB/s) - ‘index.html.15’ saved [11321/11321]



At this point, the container is visible from the local desktop, but we cannot get the pages served by the container's public IP from the HOST.

Back to deleting and re-adding the route.

root@ubuntu:~# lxc list
+-----------+---------+-----------------------+------+------------+-----------+
|   NAME    |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+------+------------+-----------+
| webserver | RUNNING | 2xx.1xx.151.51 (eth0) |      | PERSISTENT | 0         |
|           |         | 10.0.8.100 (eth0)     |      |            |           |
+-----------+---------+-----------------------+------+------------+-----------+
root@ubuntu:~# cd
root@ubuntu:~# wget http://2xx.1xx.151.51
--2019-01-21 19:09:23--  http://2xx.1xx.151.51/
Connecting to 2xx.1xx.151.51:80... failed: Connection refused.
root@ubuntu:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         2xx.1xx.151.49  0.0.0.0         UG    0      0        0 eno1
10.0.8.0        0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
2xx.1xx.151.48  0.0.0.0         255.255.255.240 U     0      0        0 eno1
2xx.1xx.151.51  0.0.0.0         255.255.255.255 UH    0      0        0 lxdbr0
root@ubuntu:~# ip route show
default via 2xx.1xx.151.49 dev eno1 onlink
10.0.8.0/24 dev lxdbr0  proto kernel  scope link  src 10.0.8.1
2xx.1xx.151.48/28 dev eno1  proto kernel  scope link  src 2xx.1xx.151.50
2xx.1xx.151.51 dev lxdbr0  proto static  scope link
root@ubuntu:~# route del -net 2xx.1xx.151.51 gw 0.0.0.0 netmask 255.255.255.255 dev lxdbr0
root@ubuntu:~# ip address del 2xx.1xx.151.51/32 dev lxdbr0
root@ubuntu:~# ping 2xx.1xx.151.51
PING 2xx.1xx.151.51 (2xx.1xx.151.51) 56(84) bytes of data.
^C
--- 2xx.1xx.151.51 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

root@ubuntu:~# lxc network set lxdbr0 ipv4.routes 2xx.1xx.151.51/32
root@ubuntu:~# wget http://2xx.1xx.151.51
--2019-01-21 19:11:10--  http://2xx.1xx.151.51/
Connecting to 2xx.1xx.151.51:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11321 (11K) [text/html]
Saving to: ‘index.html.16’

index.html.16                           100%[============================================================================>]  11.06K  --.-KB/s    in 0s

2019-01-21 19:11:10 (237 MB/s) - ‘index.html.16’ saved [11321/11321]

root@ubuntu:~#

At this point, it is all as expected. The container's public IP is visible from the local desktop and from the host, and the container's Apache web pages are accessible from the HOST and from the local desktop.

********* If the host is restarted, I had to remove and re-add the routes.
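One way to avoid redoing that by hand after every reboot (a hypothetical sketch, not something tested in this thread) is a root @reboot cron entry that re-applies the route once LXD has had time to start:

```text
# root crontab ("crontab -e" as root); PUBLIC-IP is your failover IP:
@reboot sleep 60 && lxc network unset lxdbr0 ipv4.routes; lxc network set lxdbr0 ipv4.routes PUBLIC-IP/32
```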

For what it’s worth, I also had to:

sysctl -w net.ipv4.conf.all.proxy_arp=1

I was seeing ARP requests, but no replies were sent out by the host. You can probably get away with adding this to just one interface.
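To make that setting survive reboots, it can go into a sysctl drop-in file (the filename here is arbitrary):

```shell
# Persist proxy_arp across reboots:
echo "net.ipv4.conf.all.proxy_arp = 1" > /etc/sysctl.d/90-proxy-arp.conf
sysctl --system   # reload all sysctl configuration now
```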


I am looking into how to give the containers an IP from the access point.
I have read about many ways to do this but I am a little bit confused.
What is the easiest way to do something like this?

I followed this guide https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/
but it seems I am doing something wrong. I can't understand which fields I should change. For example, where should I place my public IP?
Should I reference my wireless interface (wlo1) somewhere?

I am using Wi-Fi and I want to keep working over Wi-Fi.

Thanks in advance for any help 🙂

Please can you describe your network setup? There are various options, as you say, depending on the setup you have.

How is the public IP routed to your LXD host?

I am connected to a WLAN and I have 192.168.2.4 as my IP.
I also have the LXD bridge for the containers at 10.237.243.1.

My public IP is 89.210.18.168.

Tell me what else you need from me about my network.

Thanks a lot for the help 🙂