Public IPv4 for a container using network bridge?

You create LXD profiles for both types of containers and assign the appropriate one to each container.

For the containers that are exposed to the LAN, use either bridged, macvlan or ipvlan. Each has different features, so select whichever is best for you.
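For example, a macvlan profile could be set up along these lines (a sketch; the profile name lanprofile is illustrative, and parent must be your host's external interface):

lxc profile create lanprofile
lxc profile device add lanprofile eth0 nic nictype=macvlan parent=eth0
lxc launch images:debian/buster mycontainer --profile default --profile lanprofile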

Would the routed profile be applicable for my issue? Will containers using a routed profile with a public IP be able to use all TCP/UDP ports on that IP, without needing to run

lxc config device add c1 sshc1 proxy listen=tcp:publicIP:port connect=tcp:privateIP:port

for each port?

EDIT: I followed these steps, and the container shows the correct IP in lxc list; however, the container cannot be reached at all and has also lost internet access.

Indeed, you can also use routed. The list is bridged, macvlan, ipvlan and routed (and some more advanced options).
Each uses different features of the Linux kernel to give a LAN IP address to a LXD container.
By configuring a public IP address on a container, you do not need to use (in fact, you cannot use) a LXD proxy device for port-forwarding.

If things do not work, then something else is likely at play. For example, if you use some firewall, then how are the INPUT or OUTPUT or POSTROUTING rules affecting your containers?
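A quick way to check, assuming the standard tools are installed:

sudo iptables-save     # dumps any iptables rules
sudo nft list ruleset  # dumps any nftables rules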

My iptables are empty, and I have no other firewall software installed. When I try to SSH to the secondary public IP I assigned to the profile, I still get connected to the host instead of the container.

You mention “secondary”; does that mean that you are assigning two public IP addresses to the container?

In any case, I suggest describing a reproducible list of steps that exhibits the issue you are facing. See some advice on how to do that in How to best ask questions on this discussion forum.

No, I am sorry; what I meant by secondary is the spare public IPv4 address assigned to my machine by the datacenter. That is the IP I want for my container.

This is my host /etc/network/interfaces (those 2 IPs are assigned to me by the datacenter):

auto eth0
iface eth0 inet static
    address 138.*.16.132
    netmask 255.255.255.192
    gateway 138.*.16.129
    up route add -net 138.*.16.128 netmask 255.255.255.192 gw 138.*.16.129 eth0

auto eth0:0
iface eth0:0 inet static
    address 138.*.16.151
    netmask 255.255.255.192
    up route add -net 138.*.16.151 netmask 255.255.255.192 eth0:0

I created a routed profile like this:

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 138.201.16.151/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 138.*.16.151
    nictype: routed
    parent: eth0:0
    type: nic
name: routed
used_by:
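For reference, a profile like this can be created and edited with the standard commands:

lxc profile create routed
lxc profile edit routed   # paste the YAML above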

Then I assigned this profile together with “default” to a new container:
lxc init images:debian/buster c1 --profile default --profile routed

But the IP still points to the host machine instead of the container, and the container has no internet access at all; ping google.com returns a DNS resolution error.

The differentiating factor is the datacenter. Typically, a datacenter has security requirements that you need to abide by if you want to use more than one public IP address.
For example, Hetzner has some specific rules that involve registering MAC addresses.

Is there special documentation for the case of your specific datacenter?

I am in fact in a Hetzner DC, but their docs don't say anything special about additional IP addresses. The documentation shows more or less the first thing I tried (in my first post, where I tried to create a second bridged network).

https://docs.hetzner.com/robot/dedicated-server/ip/additional-ip-adresses/

So first things first, let's get your additional IPs working on the host (just to confirm that they are actually routable, and to understand any configuration requirements your DC imposes).

Please can you remove the additional config from /etc/network/interfaces, and any static routes (added manually or by LXD networks), and then run:

ip a add 138.*.16.151/32 dev eth0

This just manually adds the IP alias to your host's eth0 interface. At that point it should be pingable externally; if it's not, then there's no point in going further, and you will need to contact your DC provider for assistance.
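For example, from a machine outside the datacenter:

ping -c 3 138.*.16.151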

Then paste the output of:

ip a and ip r on the LXD host, so I can get a better understanding of your network setup.

Assuming that works, then I think the best way to proceed is to use the routed NIC type, as that avoids the need for an intermediate bridge, and it will ensure that your container uses the same MAC address as your LXD host (which avoids any MAC restrictions the DC provider may have).

It will also automate setting up of static routes and proxy ARP entries.
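Once the container is running, the proxy ARP entries LXD adds can be seen with the standard iproute2 tooling:

ip neigh show proxy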

Let me know how you get on and if there are any other issues we can take it from there. Thanks

I have removed the configuration from /etc/network/interfaces, and after that ran “service networking restart”. After ip a add 138.*.16.151/32 dev eth0 I can ping it.

Output of ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 90:1b:0e:8d:f8:70 brd ff:ff:ff:ff:ff:ff
    inet 138.*.16.132/26 brd 138.*.16.191 scope global eth0
       valid_lft forever preferred_lft forever
    inet 138.*.16.151/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet 138.*.16.151/26 brd 138.*.16.191 scope global secondary eth0:0
       valid_lft forever preferred_lft forever

ip r

default via 138.*.16.129 dev eth0 onlink
138.*.16.128/26 via 138.*.16.129 dev eth0
138.*.16.128/26 dev eth0 proto kernel scope link src 138.*.16.132

(The IP ending in .132 is the primary IP my machine came with, and .151 is the additional IP I ordered and want to assign to the container.)

OK great, so let's remove the manually added IP alias using:

ip a del 138.*.16.151/32 dev eth0

Now I'll show you how to use a routed NIC manually, without the need for an additional profile or cloud-init, although the article from @simos is useful when automating some of the manual steps inside the container using cloud-init.

lxc launch images:debian/buster c1
lxc shell c1

Modify /etc/network/interfaces inside the container to disable DHCP:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address 138.*.16.151/32

Now exit the container shell, stop the container and add the routed NIC:

exit
lxc stop c1
lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv4.address=138.*.16.151
lxc start c1

# See external IP added to container.
lxc exec c1 -- ip a show dev eth0
2: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:ec:d6:7b:9d:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 138.*.16.151/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::98ec:d6ff:fe7b:9de0/64 scope link 
       valid_lft forever preferred_lft forever

# See special default gateway added to container.
lxc exec c1 -- ip r
default via 169.254.0.1 dev eth0 
169.254.0.1 dev eth0 scope link 

# Check DNS nameservers are left over from your lxdbr0 interface:
lxc exec c1 -- cat /etc/resolv.conf 
domain lxd
search lxd
nameserver {lxdbr0 IP}

# Check external connectivity:
lxc exec c1 -- ping 8.8.8.8
lxc exec c1 -- ping linuxcontainers.org

On LXD host, see static route added for external IP alias pointing to container’s interface:

ip r | grep 138.*.16.151
138.*.16.151 dev veth573795cd scope link 

You can modify /etc/resolv.conf in your container to use any nameserver you prefer; at the moment it's got the old DHCP config pointing at the nameserver of your lxdbr0 bridge. But as we disabled DHCP, this won't get changed. There's nothing wrong with leaving it like that, but it can be changed if you want, so it doesn't depend on the lxdbr0 DNS server.
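If you do want to change it, something like this would do (8.8.8.8 is just an example):

lxc exec c1 -- sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'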

When I try to ping 8.8.8.8:

From 138.*.16.151 icmp_seq=3 Destination Host Unreachable

lxc exec c1 -- ip a show dev eth0

2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ca:ee:08:e2:88:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 138.*.16.151/32 brd 255.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever

lxc exec c1 -- ip r

default via 169.254.0.1 dev eth0
169.254.0.1 dev eth0 scope link

lxc exec c1 -- cat /etc/resolv.conf

domain lxd
search lxd
nameserver 10.10.10.10

lxc list

+------+---------+---------------------+------+-----------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE      | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| c1   | RUNNING | 138.*.16.151 (eth0) |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+

Can you ping 10.10.10.10 from the container (to test local connectivity)?

If so, then it's likely you've got a firewall on your host blocking forwarded traffic.

Please paste output of sudo iptables-save.

iptables are empty:

# Generated by xtables-save v1.8.2 on Mon Oct 19 11:13:04 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Mon Oct 19 11:13:04 2020
Warning: iptables-legacy tables present, use iptables-legacy-save to see them

When I try to ping 10.10.10.10 (the lxdbr0 IP) from the container, I again get the “Destination Host Unreachable” message.

Should I have

net.ipv4.ip_forward=1

in /etc/sysctl.conf?

Also this is my route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         138.*.16.129    0.0.0.0         UG    0      0        0 eth0
10.10.0.0       0.0.0.0         255.255.0.0     U     0      0        0 lxdbr0
138.*.16.128    138.*.16.129    255.255.255.192 UG    0      0        0 eth0
138.*.16.128    0.0.0.0         255.255.255.192 U     0      0        0 eth0
138.*.16.132    0.0.0.0         255.255.255.255 UH    0      0        0 lxdbr0
138.*.16.151    0.0.0.0         255.255.255.255 UH    0      0        0 veth4161f417

I think those sysctl settings would have been activated by LXD already.
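You can verify with sysctl; a value of 1 means forwarding is enabled:

sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.all.forwarding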

Have you got any output from iptables-legacy-save or nft list ruleset?

# Generated by iptables-save v1.8.2 on Mon Oct 19 11:17:18 2020
*raw
:PREROUTING ACCEPT [21474120:3294353609]
:OUTPUT ACCEPT [22062429:2427295627]
-A PREROUTING -i veth4161f417 -m rpfilter --invert -m comment --comment "generated for LXD container c1 (eth0) rpfilter" -j DROP
COMMIT
# Completed on Mon Oct 19 11:17:18 2020
# Generated by iptables-save v1.8.2 on Mon Oct 19 11:17:18 2020
*mangle
:PREROUTING ACCEPT [33453954:4892360266]
:INPUT ACCEPT [33453954:4892360266]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [33553621:3647418389]
:POSTROUTING ACCEPT [33553621:3647418389]
-A POSTROUTING -o lxdbr0 -p udp -m udp --dport 68 -m comment --comment "generated for LXD network lxdbr0" -j CHECKSUM --checksum-fill
COMMIT
# Completed on Mon Oct 19 11:17:18 2020
# Generated by iptables-save v1.8.2 on Mon Oct 19 11:17:18 2020
*nat
:PREROUTING ACCEPT [27867:1623879]
:INPUT ACCEPT [27867:1623879]
:OUTPUT ACCEPT [43359:2622532]
:POSTROUTING ACCEPT [43359:2622532]
-A POSTROUTING -s 10.10.0.0/16 ! -d 10.10.0.0/16 -m comment --comment "generated for LXD network lxdbr0" -j MASQUERADE
COMMIT
# Completed on Mon Oct 19 11:17:18 2020
# Generated by iptables-save v1.8.2 on Mon Oct 19 11:17:18 2020
*filter
:INPUT ACCEPT [33453886:4892353076]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [33553601:3647418531]
-A INPUT -i lxdbr0 -p tcp -m tcp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A INPUT -i lxdbr0 -p udp -m udp --dport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -o lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A FORWARD -i lxdbr0 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p tcp -m tcp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 53 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
-A OUTPUT -o lxdbr0 -p udp -m udp --sport 67 -m comment --comment "generated for LXD network lxdbr0" -j ACCEPT
COMMIT
# Completed on Mon Oct 19 11:17:18 2020

nft list ruleset

table ip filter {
        chain INPUT {
                type filter hook input priority 0; policy accept;
        }

        chain FORWARD {
                type filter hook forward priority 0; policy accept;
        }

        chain OUTPUT {
                type filter hook output priority 0; policy accept;
        }
}

OK, can you show the output of ip a on the host, so I can see if the 169.254.0.1 address was added? Also, is it pingable from inside the container?
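For example:

lxc exec c1 -- ping -c 3 169.254.0.1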

I cannot ping 169.254.0.1 from inside the container.

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 90:1b:0e:8d:f8:70 brd ff:ff:ff:ff:ff:ff
    inet 138.*.16.132/26 brd 138.*.16.191 scope global eth0
       valid_lft forever preferred_lft forever
    inet 138.*.16.151/26 brd 138.*.16.191 scope global secondary eth0:0
       valid_lft forever preferred_lft forever
    inet6 2a01:4f8:171:2783::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::921b:eff:fe8d:f870/64 scope link
       valid_lft forever preferred_lft forever
3: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:18:8a:1a brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.10/16 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:1657:a2e5:b7e6::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe18:8a1a/64 scope link
       valid_lft forever preferred_lft forever
7: veth4161f417@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:72:02:00:18:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.0.1/32 scope global veth4161f417
       valid_lft forever preferred_lft forever
    inet6 fe80::fc72:2ff:fe00:1813/64 scope link
       valid_lft forever preferred_lft forever

Should I try to enable net.ipv4.ip_forward=1, just in case?
EDIT: no difference when I enable it.

EDIT 2: I tried to ping 10.10.10.10 and 169.254.0.1 from a new container that is just on lxdbr0, and I can ping both of those IPs.

Yeah, it won't hurt.

What version of LXD and the host OS are you running? Is the container Debian Buster?

It's rather odd that you cannot ping the 169.254.0.1 address, as the static default route is there, and you can see it bound to the veth4161f417 interface on the host side. That doesn't depend on forwarding being allowed; it's just straight veth communication.
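One way to narrow this down would be to watch the host side of the veth pair while pinging 169.254.0.1 from the container (interface name taken from your earlier output):

sudo tcpdump -ni veth4161f417 icmp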

Suffice to say, I tested it here this morning on Debian and it worked fine.