I want to have several containers with public IPs that I have assigned on my Debian host machine in /etc/network/interfaces as static IPs, but I also need a few containers with local IPs only, with just some of their ports exposed. Is that possible?
This is what I tried:
lxc network create publicip # new network
lxc network set publicip ipv4.address 10.3.3.3/30 # set an arbitrary private IP range on it
lxc network set publicip ipv4.routes 138.*.16.151/32 # route the actual public IP I want to assign to the container
lxc network attach publicip myContainer eth0 # attach it as the container's eth0 interface
lxc config device set myContainer eth0 ipv4.address 10.3.3.2 # give the container a local IP from the range created above
But after doing so, my container loses its internet connection and cannot be reached. What did I do wrong?
You create LXD profiles for both types of containers and assign them to each container accordingly.
For the containers that are exposed to the LAN, use bridged, macvlan, or ipvlan. Each has different features, so pick whichever is best for you.
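For example, a minimal sketch of the two profile types (the names, the macvlan choice, and the container names are just illustrative):

# Public-facing containers: a profile whose eth0 is a macvlan NIC on the host's eth0,
# overriding the lxdbr0 NIC that the default profile provides.
lxc profile create public
lxc profile device add public eth0 nic nictype=macvlan parent=eth0
lxc launch images:debian/buster web1 --profile default --profile public

# Internal-only containers just keep the default profile (private IP on lxdbr0):
lxc launch images:debian/buster db1 --profile default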
Would the routed profile be applicable to my issue? Would containers using the routed profile with a public IP be able to use all TCP/UDP ports on that IP, without having to run lxc config device add c1 sshc1 proxy listen=tcp:publicIP:port connect=tcp:privateIP:port for each port?
EDIT: I followed these steps and the container shows the correct IP in lxc list; however, the container cannot be reached at all and has also lost internet access.
Indeed, you can also use routed. The full list is bridged, macvlan, ipvlan, and routed (plus some more advanced types).
Each uses a different feature of the Linux kernel to give a LAN IP address to a LXD container.
By configuring a public IP address on a container, you do not need to use (you cannot, actually) a LXD proxy device for the purposes of port forwarding.
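For instance, a routed profile could be sketched like this (the profile name and the 192.0.2.10 address are placeholders for your public IP):

# Illustrative only: route a public IP straight into the container.
lxc profile create routed
lxc profile device add routed eth0 nic nictype=routed parent=eth0 ipv4.address=192.0.2.10
# Services in the container can then bind any port on that IP; no proxy devices needed.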
If things do not work, then something else is likely at play. For example, if you use a firewall, how are the INPUT, OUTPUT, or POSTROUTING rules affecting your containers?
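You could inspect those chains with, for example:

# Show rules and packet counters for the relevant chains:
iptables -L INPUT -v -n
iptables -L FORWARD -v -n
iptables -t nat -L POSTROUTING -v -n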
My iptables are empty, and no other firewall software is installed. When I try to SSH to the secondary public IP I assigned to the profile, I still get connected to the host instead of the container.
Then I assigned this profile together with “default” to a new container: lxc init images:debian/buster c1 --profile default --profile routed
But the IP still points to the host machine instead of the container, and the container has no internet access at all; ping google.com returns a DNS resolution error.
The differentiating factor here is the datacenter. Typically, a datacenter has some security requirements that you need to abide by if you want to use more than one public IP address.
For example, Hetzner has some specific rules that involve registering MAC addresses.
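If your provider ties an additional IP to a registered MAC address, you could, for instance, pin the MAC on the container's NIC (the address below is a placeholder):

# Illustrative: give the container's NIC a fixed MAC registered with the DC.
lxc config device set c1 eth0 hwaddr 00:16:3e:aa:bb:cc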
Is there special documentation for the case of your specific datacenter?
I am in fact in a Hetzner DC, but their documentation doesn't say anything special about additional IP addresses. It shows more or less the first thing I tried (in my first post, where I tried to create a second bridged network).
So, first things first, let's get your additional IPs working on the host (just to confirm that they are actually routable, and to understand any configuration requirements your DC imposes).
Please can you remove the additional config from /etc/network/interfaces, and any static routes (added manually or by LXD networks), and then run:
ip a add 138.*.16.151/32 dev eth0
This just manually adds the IP alias to your host's eth0 interface. At that point it should be pingable externally; if it's not, then there's no point in going further and you will need to contact your DC provider for assistance.
Then paste the output of ip a and ip r on the LXD host, so I can get a better understanding of your network setup.
Assuming that works, I think the best way to proceed is to use the routed NIC type, as it avoids the need for an intermediate bridge, and it ensures that your container uses the same MAC address as your LXD host (which avoids any MAC restrictions the DC provider may have).
It will also automate the setup of static routes and proxy ARP entries.
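Once a routed NIC is up, you can see those pieces on the host, for example:

# Things a routed NIC relies on, visible on the LXD host:
ip neigh show proxy          # proxy ARP entry for the public IP on eth0
ip r | grep veth             # static /32 route pointing at the container's veth
sysctl net.ipv4.ip_forward   # IP forwarding must be enabled (=1)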
Let me know how you get on, and if there are any other issues we can take it from there. Thanks.
I have removed the configuration from /etc/network/interfaces and then ran service networking restart. After ip a add 138.*.16.151/32 dev eth0 I can ping it.
Output of ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 90:1b:0e:8d:f8:70 brd ff:ff:ff:ff:ff:ff
inet 138.*.16.132/26 brd 138.*.16.191 scope global eth0
valid_lft forever preferred_lft forever
inet 138.*.16.151/32 scope global eth0
valid_lft forever preferred_lft forever
inet 138.*.16.151/26 brd 138.*.16.191 scope global secondary eth0:0
valid_lft forever preferred_lft forever
Output of ip r:
default via 138.*.16.129 dev eth0 onlink
138.*.16.128/26 via 138.*.16.129 dev eth0
138.*.16.128/26 dev eth0 proto kernel scope link src 138.*.16.132
(The IP ending in .132 is the primary IP my machine came with, and .151 is the additional IP I ordered and want to assign to the container.)
OK great, so let's remove the manually added IP alias using:
ip a del 138.*.16.151/32 dev eth0
Now I’ll show you how to use a routed NIC manually, without the need for an additional profile or cloud-init, although the article from @simos is useful for automating some of the manual steps inside the container using cloud-init.
lxc launch images:debian/buster c1
lxc shell c1
Modify /etc/network/interfaces inside the container to disable DHCP:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 138.*.16.151/32
Now exit the container shell, stop the container and add the routed NIC:
exit
lxc stop c1
lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv4.address=138.*.16.151
lxc start c1
# See external IP added to container.
lxc exec c1 -- ip a show dev eth0
2: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:ec:d6:7b:9d:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 138.*.16.151/32 brd 255.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::98ec:d6ff:fe7b:9de0/64 scope link
valid_lft forever preferred_lft forever
# See special default gateway added to container.
lxc exec c1 -- ip r
default via 169.254.0.1 dev eth0
169.254.0.1 dev eth0 scope link
# Check DNS nameservers are left over from your lxdbr0 interface:
lxc exec c1 -- cat /etc/resolv.conf
domain lxd
search lxd
nameserver {lxdbr0 IP}
# Check external connectivity:
lxc exec c1 -- ping 8.8.8.8
lxc exec c1 -- ping linuxcontainers.org
On the LXD host, see the static route added for the external IP alias, pointing at the container's veth interface:
ip r | grep 138.*.16.151
138.*.16.151 dev veth573795cd scope link
You can modify /etc/resolv.conf in your container to use any nameserver you prefer; at the moment it still has the old DHCP config pointing at the nameserver on your lxdbr0 bridge. Since we disabled DHCP, this won’t be changed automatically. There’s nothing wrong with leaving it like that, but you can change it if you don’t want it to depend on the lxdbr0 DNS server.
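For example, to switch to a public resolver (the choice of nameserver is up to you):

# Inside the container, point DNS at a public nameserver instead of lxdbr0:
lxc exec c1 -- sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'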
My iptables-save output:
# Generated by xtables-save v1.8.2 on Mon Oct 19 11:13:04 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Mon Oct 19 11:13:04 2020
Warning: iptables-legacy tables present, use iptables-legacy-save to see them
When I try to ping 10.10.10.10 (the lxdbr0 IP) from the container, I again get the “Destination Host Unreachable” message.
Should I have
net.ipv4.ip_forward=1
in /etc/sysctl.conf?
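In case it helps, this is how I would check and enable it (assuming it turns out to be needed):

# Current value:
sysctl net.ipv4.ip_forward
# Enable at runtime:
sysctl -w net.ipv4.ip_forward=1
# Persist across reboots:
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p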
Also, this is my route -n output:
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref Use Iface
0.0.0.0          138.*.16.129     0.0.0.0          UG    0      0   0   eth0
10.10.0.0        0.0.0.0          255.255.0.0      U     0      0   0   lxdbr0
138.*.16.128     138.*.16.129     255.255.255.192  UG    0      0   0   eth0
138.*.16.128     0.0.0.0          255.255.255.192  U     0      0   0   eth0
138.*.16.132     0.0.0.0          255.255.255.255  UH    0      0   0   lxdbr0
138.*.16.151     0.0.0.0          255.255.255.255  UH    0      0   0   veth4161f417