Assign external IP address to container

I'm trying out LXC containers for the first time. Everything seemed clear until I hit a snag with the network setup.

I have an IP address, 212.7.201.17/26, which I want to use on the container.

Here is the bridge configuration:

# lxc network show lxdbr0
config:
  ipv4.address: 10.74.168.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:4ce7:cc43:7ee::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/utilites-container
- /1.0/profiles/default
managed: true
status: Created
locations:

Next, I created the container and manually assigned an IP in ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=212.7.201.17
PREFIX=26
GATEWAY=212.7.201.62
DNS1=8.8.8.8
DNS2=8.8.4.4
HOSTNAME=utilites-container
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=utilites-container
IPV6INIT=yes

Default LXC settings:

cat /etc/lxc/default.conf 
lxc.net.0.type = empty
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1

Current network configuration on the server:

# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug enp2s0f0
iface enp2s0f0 inet static
        address 212.7.***.38/26
        gateway 212.7.***.62

But nothing works. What am I missing? Where did I make a mistake, or what did I fail to configure?

I've never done this manually before.

You have configured an instance to connect to the lxdbr0 bridge.
The lxdbr0 bridge has an IP of 10.74.168.1/24, which means that instances connected to it should have IP addresses in the 10.74.168.0/24 subnet, not the IP address you have manually assigned.

The reason it doesn’t work is because the host OS networking stack has no idea that your container has been manually given an IP address outside of the lxdbr0 network’s subnet, and your container has no idea how to reach the gateway 212.7.201.62 because it is in the private network lxdbr0.

What I think you want is to instead connect your instance to the external network (the network connected to enp2s0f0) rather than lxdbr0.

There are three ways to do this, each with pros and cons:

  1. Set up a new manual bridge, e.g. br0, connect enp2s0f0 to it (so that the bridge is connected to the external network), and move enp2s0f0's IP config onto br0. Then you can connect your instance to the manual bridge using lxc config device add <instance> eth0 nic nictype=bridged parent=br0. See Netplan | Backend-agnostic network configuration in YAML
  2. Use the macvlan NIC type on your instance, connected to the external network via enp2s0f0, using lxc config device add <instance> eth0 nic nictype=macvlan parent=enp2s0f0. This avoids needing to set up the manual bridge, but has the downside that the instance and LXD host will not be able to communicate.
  3. Use the routed NIC type on your instance, connected to the external network via enp2s0f0, using lxc config device add <instance> eth0 nic nictype=routed parent=enp2s0f0 ipv4.address=212.7.201.17. This avoids needing to set up the manual bridge and allows the host and instance to communicate, but doesn't allow the instance to automatically configure its IP from the external network's DHCP server or use broadcast traffic. See How to get LXD containers get IP from the LAN with routed network
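
Option 1 can be sketched roughly as follows, assuming a Debian-style host using ifupdown and reusing the (partly masked) addresses from this thread; interface names, addresses, and the instance name are illustrative, so adjust them to your environment:

```shell
# Sketch only: move the host's IP config from enp2s0f0 onto a new bridge br0.
# Append a bridge stanza to /etc/network/interfaces (and remove the old
# static stanza for enp2s0f0), then restart networking.
cat >> /etc/network/interfaces <<'EOF'
auto br0
iface br0 inet static
        address 212.7.***.38/26
        gateway 212.7.***.62
        bridge_ports enp2s0f0
EOF

# Then attach the instance's NIC to the manual bridge:
lxc config device add utilites-container eth0 nic nictype=bridged parent=br0
```

The key point of this layout is that the host's own IP lives on br0, not on enp2s0f0; the physical interface becomes just a port of the bridge.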

Hey! Thank you!

Currently i’ve configured bridge on machine itself.

Next i’ve created fresh container via lxc launch images:almalinux/8 container-name command

And executed this command:

lxc config device add utilites-container eth1 nic name=eth1 nictype=bridged parent=br0

Then I tried to attach the external IP to the container with this command:

lxc config device set container-name eth1 ipv4.address 212.7.201.17

And got this error:

Error: Invalid devices: Device validation failed for “eth1”: Cannot use manually specified ipv4.address when using unmanaged parent bridge

My steps seem logical to me, so why do I get this error? Or should I just enter the container now and set the proper network configuration inside “ifcfg-eth1”?

When running lxc config device set container-name eth1 ipv4.address 212.7.201.17 you’re instructing LXD to create a static DHCP assignment in its managed DHCP server.

However, if you are connecting your instance's NIC to an external unmanaged bridge (which you are), then LXD has no control over the DHCP server on that network (if there is one at all), and so it would be misleading to accept that IP configuration only to ignore it.
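
For contrast, a sketch of what does work on a LXD-managed bridge such as lxdbr0, where LXD itself runs the DHCP server and can therefore honour a static assignment; the 10.74.168.17 address below is just an assumed example inside the bridge's 10.74.168.0/24 subnet:

```shell
# On a managed bridge, LXD records a static DHCP lease for the NIC
# in its own dnsmasq instance, so this setting is accepted.
lxc config device add utilites-container eth0 nic network=lxdbr0
lxc config device set utilites-container eth0 ipv4.address 10.74.168.17
```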

Okay, understood.

Then I will choose the first option you recommended and connect my instance to the manual bridge I've created. What would be the next step? Do I just go to the eth0 settings inside the instance, set the proper IP configuration, and the magic happens? :D Or do I need to configure something else? Sorry for the dumb questions :)

Yes, configure the network settings inside the container manually.

No luck :(

I’ve executed

lxc config device add utilites-container eth0 nic nictype=bridged parent=br0

Then I entered the container and modified the eth0 settings to this:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=212.7.201.17
PREFIX=26
GATEWAY=212.7.201.62
HOSTNAME=utilites-container
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=utilites-container
IPV6INIT=no

I rebooted the container and still can't access it :(

# lxc ls
+--------------------+---------+---------------------+------+-----------+-----------+
|        NAME        |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+--------------------+---------+---------------------+------+-----------+-----------+
| utilites-container | RUNNING | 212.7.201.17 (eth0) |      | CONTAINER | 0         |
+--------------------+---------+---------------------+------+-----------+-----------+

Bridge config:

# cat interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
        address 212.7.***.38/26
        gateway 212.7.***.62
        bridge_ports enp2s0f0
          bridge_stp off 
          bridge_waitport 0    
          bridge_fd 0

So now you need to do some network debugging using ping and tcpdump.

Answer the following questions:

  1. Can you ping the bridge IP 212.7.201.38 from the container?
  2. Can you ping the bridge IP 212.7.201.38 from a device on the external network/internet?
  3. Can you ping the external gateway IP 212.7.201.62 from the LXD host?
  4. Can you ping the external gateway IP 212.7.201.62 from the container?
  5. Can you ping the container’s IP 212.7.201.17 from the LXD host?
  6. Do you have a firewall running on your LXD host?

  1. Yes
  2. Yes, otherwise the whole server wouldn't be accessible
  3. Yes
  4. No
  5. Yes
  6. No

Great, so your host <-> container connectivity is working.

Can you run sudo tcpdump -i br0 host 212.7.201.17 on the LXD host and in another window try and ping the gateway IP 212.7.201.62 from the container, and then provide the output of the tcpdump window here.
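
As a quick host-side sanity check, it's also worth confirming that br0 actually has the expected ports and that bridged frames aren't being diverted through iptables, which is a known source of silently dropped traffic on hosts where Docker is installed; a sketch:

```shell
# Both enp2s0f0 and the container's vethXXXX device should appear
# as ports of br0:
ip link show master br0

# If the br_netfilter module is loaded, bridged frames traverse the
# iptables FORWARD chain, where e.g. Docker's rules can drop them:
sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null
```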

Also, have you checked that your external network provider allows multiple MAC addresses on a single external port? That is what is going to happen with bridging and macvlan. If they don't allow it, then you'll need to use a routed NIC, which uses the host's MAC address.

This is what I got:

# tcpdump -i br0 host 212.7.201.17
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:57:47.527156 IP 193.3.53.5.59471 > 212.7.201.17.264: Flags [S], seq 3975246200, win 65535, length 0
12:57:55.281376 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 1, length 64
12:57:56.285883 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 2, length 64
12:57:57.293580 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 3, length 64
12:57:58.301692 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 4, length 64
12:57:59.309642 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 5, length 64
12:58:00.317629 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 6, length 64
12:58:00.333614 ARP, Request who-has 212.7.201.62 tell 212.7.201.17, length 28
12:58:00.333871 ARP, Reply 212.7.201.62 is-at 00:1c:73:00:00:08 (oui Unknown), length 46
12:58:01.325692 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 7, length 64
12:58:02.333714 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 8, length 64
12:58:03.341778 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 9, length 64
12:58:04.351710 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 10, length 64
12:58:05.357635 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 11, length 64
12:58:06.365731 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 12, length 64
12:58:06.370067 IP hosting-by.4cloud.mobi.49864 > 212.7.201.17.30727: Flags [S], seq 733674852, win 1024, length 0
12:58:07.373722 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 13, length 64
12:58:08.381625 IP 212.7.201.17 > 212.7.201.62: ICMP echo request, id 741, seq 14, length 64

Leaseweb does not require MAC address filtering.

OK, so we can see ARP requests and replies to/from the gateway, and we can see ping requests going out onto the external network to the gateway.

But the gateway is either choosing not to reply, or its replies are getting filtered out somewhere.

Can you leave the tcpdump running and try to ping 212.7.201.17 from the external internet?

Also please show output of sudo iptables-save and sudo nft list ruleset if available.

Does not require or does not apply?

Here you go

# tcpdump -i br0 host 212.7.201.17
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:03:59.335375 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 0, length 64
13:04:00.340456 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 1, length 64
13:04:01.347102 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 2, length 64
13:04:02.348966 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 3, length 64
13:04:02.725112 IP 185.167.97.229.40732 > 212.7.201.17.9042: Flags [S], seq 3491456802, win 65535, length 0
13:04:03.353058 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 4, length 64
13:04:04.359115 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 5, length 64
13:04:05.363225 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 6, length 64
13:04:06.367807 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 7, length 64
13:04:07.371807 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 8, length 64
13:04:08.374519 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 9, length 64
13:04:09.378606 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 10, length 64
13:04:10.377758 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 11, length 64
13:04:11.383022 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 12, length 64
13:04:12.388782 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 13, length 64
13:04:12.915524 IP hosting-by.4cloud.mobi.51811 > 212.7.201.17.4708: Flags [S], seq 1829834702, win 1024, length 0
13:04:13.393625 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 14, length 64
13:04:14.396877 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 15, length 64
13:04:15.400986 IP 159.148.112.49 > 212.7.201.17: ICMP echo request, id 43381, seq 16, length 64

Iptables # sudo iptables-save # Generated by xtables-save v1.8.2 on Wed Apr 20 13:06:10 - Pastebin.com

Do not require.

So we can see the incoming traffic. Good. But no reply from your instance.

And you do have some firewall rules on your host, including Docker ones in fact, which have been known to cause issues with bridging.

Are you happy to clear the ruleset temporarily (you may need to reboot to restore them)?

If so, try sudo iptables -F followed by sudo iptables-save and see if that helps.
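
One caveat worth noting: iptables -F only flushes the rules, it does not reset the default chain policies, and Docker typically sets the FORWARD chain policy to DROP. A fuller (temporary and deliberately insecure) reset would look something like:

```shell
# Temporary, insecure reset of the firewall (restore by rebooting or
# by restarting the firewall/Docker services afterwards):
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F          # flush all rules in the filter table
iptables -t nat -F   # flush NAT rules too
iptables-save        # verify the ruleset is now effectively empty
```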

Nope, no luck

This is what I got from the iptables-save command after executing iptables -F