Public IP in the CT


(Johann) #1

Hello,

I'm trying to set a public IP in a CT (an IPv4 failover IP I get from OVH).

I also need that IP to be used for the outbound traffic from the CT (and then also from the host).
I mean that the CT's outbound traffic must not carry the host's IP address.

I have read several threads but haven't managed to get this working.

Any help would be appreciated, thank you


(Stéphane Graber) #2

You'd typically do this with:

lxc network set lxdbr0 ipv4.routes 1.2.3.4/32

This will have a route added on the host to send traffic for your container's IP to the right bridge.
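
As a quick sanity check (with 1.2.3.4 standing in for your failover IP), the route should now be visible on the host and point at the bridge:

ip -4 route show | grep 1.2.3.4

The output should look something like "1.2.3.4 dev lxdbr0".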

Then you need to make sure your container has that IP on its main network interface.
For testing you can just do it with:

ip -4 addr add dev eth0 1.2.3.4/32

At which point you should be able to access the container using that IP, but as you mentioned, container traffic may still show up with the host IP.

You can avoid that by completely statically configuring your container with:

auto eth0
iface eth0 inet static
    address 1.2.3.4
    netmask 255.255.255.255
    gateway 10.0.3.1

    pre-up ip -4 route add dev eth0 10.0.3.1/32
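
With that in place, a quick way to confirm the source address selection from inside the container (8.8.8.8 is just an arbitrary destination for the lookup) is:

ip -4 route get 8.8.8.8

The output should include "src 1.2.3.4", meaning new outgoing connections will use the public IP.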

The MASQUERADE rule that LXD maintains is scoped so that only traffic using the bridge's subnet is NATed. So if your container sends traffic out using its public IP, it won't get NATed by the rules that LXD added to iptables.
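
For reference, you can inspect those rules on the host; the rule shown below is illustrative and assumes the 10.0.3.0/24 subnet from the example above:

iptables -t nat -S POSTROUTING

This should list something like "-A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE", i.e. NAT only applies to traffic sourced from the bridge's private subnet.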


(Johann) #3

Ok thank you Stéphane, you are doing an amazing job here.


(Johann) #4

Could we imagine assigning the host's public address to lxdbr0 and using only the public IPs, instead of the private ones, for the bridge and inside the CT?
And then routing all the traffic (0.0.0.0/0) to lxdbr0?

I ask this because I have several IPv4 blocks to route to the bridge, and I'm not sure the private IPs have any real use for me.


(Frédéric Demians) #5

Has anyone succeeded in having a CT use a public IP rather than the IP allocated by the bridge? I'd really like to use OVH failover IPs in containers in order to make them act like public VMs. I've read several discussion threads on the subject, with valuable insights from stgraber, but I personally wasn't able to get a result.


(Stéphane Graber) #6

For OVH’s failover IPs, all you really need to do with a recent LXD (2.21 currently) would be:

lxc network set lxdbr0 ipv4.routes PUBLIC-IP/32

If you have multiple IPs, you can either use a CIDR subnet covering the additional failover IPs, or you can just set multiple individual IPs, comma separated.
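
For example, with placeholder addresses:

lxc network set lxdbr0 ipv4.routes 1.2.3.4/32,1.2.3.5/32

or a single CIDR entry such as 1.2.3.8/29 for a routed block.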

Then in the container, all you’ll need to do is add a static IPv4 address, manually with:

ip -4 addr add dev eth0 PUBLIC-IP/32 preferred_lft 0

That line can be put as a post-up of your existing (DHCP) eth0 interface. The container will continue to pull a dynamic local IP from the LXD DHCP server but will then also have its public IP associated with it and will prefer it for outgoing traffic.


(Stéphane Graber) #7
root@vorash:~# lxc launch ubuntu:16.04 c1
Creating c1
Starting c1
root@vorash:~# lxc list c1
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.178.245.12 (eth0) | fd42:3f9b:e713:ce99:216:3eff:fe1f:f9e7 (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
root@vorash:~# ping 149.56.148.6
PING 149.56.148.6 (149.56.148.6) 56(84) bytes of data.
^C
--- 149.56.148.6 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

root@vorash:~# lxc network set lxdbr0 ipv4.routes 149.56.148.6/32
root@vorash:~# lxc exec c1 bash
root@c1:~# ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0
root@c1:~# exit
root@vorash:~# lxc list c1
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 149.56.148.6 (eth0)  | fd42:3f9b:e713:ce99:216:3eff:fe1f:f9e7 (eth0) | PERSISTENT | 0         |
|      |         | 10.178.245.12 (eth0) |                                               |            |           |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
root@vorash:~# ping 149.56.148.6
PING 149.56.148.6 (149.56.148.6) 56(84) bytes of data.
64 bytes from 149.56.148.6: icmp_seq=1 ttl=64 time=0.062 ms
^C
--- 149.56.148.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@vorash:~# 

The above is on an OVH server with failover IPs.


(Stéphane Graber) #8

To make this persistent, that ip -4 addr command should be added to /etc/network/interfaces as part of the eth0 entry:

auto eth0
iface eth0 inet dhcp
    post-up ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0
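
After restarting the container's networking, both addresses should show up (exact output will vary):

ip -4 addr show dev eth0

with the DHCP-assigned address listed alongside the public /32 added by the post-up line.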

(Frédéric Demians) #9

Fantastic! It works like a charm. Thanks a lot.

One remark: if the host runs Ubuntu 16.04, and the container the same OS, this works in the container:

auto eth0
iface eth0 inet dhcp
    post-up ip -4 addr add dev eth0 149.56.148.6/32 preferred_lft 0

But if the host runs under Ubuntu 17, it doesn't work; it is necessary to define the route manually.

As a side note, would you say it's "risky" to operate an LXD host on Ubuntu 17?


(Stéphane Graber) #10

I'd certainly strongly recommend sticking to Ubuntu LTS in general; the shorter support length of non-LTS releases hardly ever makes them worth it, and they also don't get quite as many bugfixes as the LTS releases.


(Frédéric Demians) #11

With this method, say my LXD host IP is host_ip and my container's public IP is container_ip. If my container network is defined as you suggested, with the standard LXD lxdbr0 bridge, lxc list gives me this:

container_ip (eth0)
10.14.127.98 (eth0)

c1 is reachable from the outside at container_ip. But when the container reaches any remote host, it is seen not as container_ip but as host_ip. Is there any way to make the container appear as container_ip rather than host_ip when initiating connections?
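
One way to check which address remote hosts actually see (ifconfig.me is just one example of an echo service) is to run, from inside the container:

curl -4 https://ifconfig.me

which prints the source IP as seen from outside.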


#12

Regarding the multiple IPs being comma separated, should those IPs include the “/32” part?

So, should it be:

lxc network set lxdbr0 ipv4.routes PUBLIC-IPa/32,PUBLIC-IPb/32,PUBLIC-IPc/32,PUBLIC-IPd/32

or

lxc network set lxdbr0 ipv4.routes PUBLIC-IPa,PUBLIC-IPb,PUBLIC-IPc,PUBLIC-IPd


(Stéphane Graber) #13

CIDR should be preferred. I think both syntaxes would work, as they're both accepted by iproute2, but CIDR is more specific.


#14

I followed this example exactly, as well as the configs in your previous and following messages. The result is that I could ping and even SSH to the public IP (as well as the private IP set by LXD) from within the host. However, from outside the server I was not able to connect to that IP. This is with a Xenial host and a Xenial container.

Adding that IP to my server in /etc/network/interfaces on the host resulted in the host responding to that IP instead of the container when I connect remotely.

I’m guessing that I need to configure /etc/network/interfaces on the host in some way to pass the IPs to lxdbr0, but I think that’s where I’m stuck.

Any idea what part I have wrong?