Several IPs on one host, several containers per IP

Hello, I’m looking for a way to set up the following:

[GLOBAL_IP_1]-|---[  H O S T  ]
              |---[CONTAINER_1]
              |---[CONTAINER_2]
              |---[CONTAINER_3]

[GLOBAL_IP_2]-|---[CONTAINER_4]
              |---[CONTAINER_5]
              |---[CONTAINER_6]

[GLOBAL_IP_3]-|---[CONTAINER_7]
              |---[CONTAINER_8]
              |---[CONTAINER_9]

and so on.
All traffic for each container group must go only via its corresponding IP.
I’ve googled for a long time and tried a lot of solutions, but haven’t found anything that worked.
What would you advise?

I would suggest using 3 different private LXD managed bridge networks:

e.g.

lxc network create netgroup1 --type=bridge

Then specifying the ipv4.nat.address (and if needed the ipv6.nat.address) settings for that network:

lxc network set netgroup1 ipv4.nat.address=n.n.n.n

Then connecting each container to the required network:

lxc config device override <instance> eth0 network=netgroup1
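Put together, the three steps for one group might look like this (netgroup1, n.n.n.n, and the container names are placeholders; these are LXD configuration commands and need a running LXD daemon):

```shell
# 1. Create a private managed bridge for this container group
lxc network create netgroup1 --type=bridge

# 2. NAT this group's outbound traffic through one of the host's
#    global IPs (n.n.n.n is a placeholder for that IP)
lxc network set netgroup1 ipv4.nat.address=n.n.n.n

# 3. Move each container in the group onto the bridge by overriding
#    the eth0 device it inherits from its profile
lxc config device override ct1 eth0 network=netgroup1
lxc config device override ct2 eth0 network=netgroup1
lxc config device override ct3 eth0 network=netgroup1
```

Repeating this with netgroup2/netgroup3 and the other global IPs gives one isolated bridge per public IP.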

Thank you.
I tried it, but got:

lxc config device override ng2-ct1-test eth0 nic network=netgroup1
Error: No value found in "nic"

UPD_1: This worked (type=nic):

lxc config device override ng2-ct1-test eth0 type=nic network=netgroup1
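To confirm the override took effect, the resulting device and the container's addresses can be inspected (instance name taken from this thread):

```shell
# Show the effective eth0 device after the override
lxc config device show ng2-ct1-test

# Check the address and default route the container got from the new bridge
lxc exec ng2-ct1-test -- ip a
lxc exec ng2-ct1-test -- ip r
```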

UPD_2:
If I understand correctly, I need to unset the separate MAC for this IP and add the IP to the host’s enpXsY device, am I right?
In both cases, “with MAC” and “without MAC”, I have no network inside container ng2-ct1-test [Wait a little bit and it will work --> UPD_3].
What am I doing wrong?

UPD_3: It seems I just needed to wait a little bit. It works now! I finally got what I needed. Using nested containers was not a good idea; it was like a train of bicycles tied together. No more matryoshkas. :slight_smile:


Glad you got it working, I fixed the typo in my original post.

I tried the same on my own server (not Hetzner’s): from inside the container I can ping only the bridge and enp2s0, nothing else.
enp2s0 has two IPs at the moment.
Both the main IP and the secondary one can be pinged from other servers.
The host is connected to the internet via gateway4.
I’ve compared the network configs, profiles, and container configs on my server and on the Hetzner one: they are like twins, nothing differs.
UPD: The only difference is that I added ipv4.address=10.161.0.1/24 while creating netgroup01, because without it the network could not be created (Error: Failed generating auto config: Failed to automatically find an unused IPv4 subnet, manual configuration required).
It seems netgroup01 has no connection with its ipv4.nat.address. But why? Any ideas?
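For reference, the manual-subnet creation would look like this. One thing that may be worth checking (an assumption on my part, not confirmed in this thread) is that ipv4.nat can default to false when ipv4.address is set by hand, in which case ipv4.nat.address would have no effect until NAT is enabled:

```shell
# Create the bridge with an explicit subnet (auto-detection failed here)
lxc network create netgroup01 --type=bridge ipv4.address=10.161.0.1/24

# NAT outbound traffic through the chosen public IP (placeholder n.n.n.n)
lxc network set netgroup01 ipv4.nat.address=n.n.n.n

# Assumption: with a manually set ipv4.address, NAT may need to be
# switched on explicitly
lxc network set netgroup01 ipv4.nat=true
```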

Offtopic: How do I change the default editor for lxd/lxc from vi to nano?
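As for the editor question: lxc honors the standard VISUAL/EDITOR environment variables, so pointing them at nano should be enough:

```shell
# Use nano for `lxc config edit`, `lxc profile edit`, etc.
export EDITOR=nano

# Persist the choice for future shells
echo 'export EDITOR=nano' >> ~/.bashrc
```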

I’m not totally clear on how you’ve configured your network (showing the output of ip a and ip r on the host and containers would be a good start).

But one likely difference between your network and Hetzner’s is that, as far as I understand, Hetzner routes your IPs directly to your host without needing ARP, whereas in your network you most likely need to advertise the external IPs using ARP.
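If ARP advertisement turns out to be the issue, a hedged sketch of the usual options on the host (enp2s0 is the uplink name from this thread; n.n.n.n is a placeholder public IP):

```shell
# If the public IP is already configured as an address on enp2s0,
# the kernel answers ARP for it and nothing more is needed.

# Otherwise, publish a proxy-ARP entry for just that IP:
ip neigh add proxy n.n.n.n dev enp2s0

# ...or enable proxy ARP wholesale on the uplink:
sysctl -w net.ipv4.conf.enp2s0.proxy_arp=1
```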

Thank you @tomp
On my server:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:25:90:91:dc:8a brd ff:ff:ff:ff:ff:ff
3: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 24:4b:fe:cb:a5:4e brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:25:90:91:dc:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.2/8 brd 10.255.255.255 scope global enp1s0f1
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fe91:dc8b/64 scope link 
       valid_lft forever preferred_lft forever
5: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 24:4b:fe:cb:a5:4f brd ff:ff:ff:ff:ff:ff
    inet 123.456.789.164/24 brd 193.19.242.255 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet 123.456.789.166/24 brd 193.19.242.255 scope global secondary enp2s0
       valid_lft forever preferred_lft forever
    inet 123.456.789.167/24 brd 193.19.242.255 scope global secondary enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::9876:wxyz:wxyz:wxyz/64 scope link 
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:36:f9:3d:9f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:36ff:fef9:3d9f/64 scope link 
       valid_lft forever preferred_lft forever
12: vethde9c6ee@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 4a:7d:a7:c5:84:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::487d:a7ff:fec5:8403/64 scope link 
       valid_lft forever preferred_lft forever
13: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f1:e7:40 brd ff:ff:ff:ff:ff:ff
    inet 10.254.254.1/12 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fc00::1/7 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fef1:e740/64 scope link 
       valid_lft forever preferred_lft forever
20: veth77864f71@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 46:97:95:fc:d3:49 brd ff:ff:ff:ff:ff:ff link-netnsid 1
22: veth096da2e6@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 86:69:e0:c7:d2:5a brd ff:ff:ff:ff:ff:ff link-netnsid 4
24: vethebdcaa70@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether e6:ae:b6:c3:1f:8d brd ff:ff:ff:ff:ff:ff link-netnsid 5
27: veth680a14e0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 76:c5:64:4d:8a:df brd ff:ff:ff:ff:ff:ff link-netnsid 6
30: net-166: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:48:32:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.166.0.1/16 scope global net-166
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe48:32d2/64 scope link 
       valid_lft forever preferred_lft forever
34: vethb48e8f13@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master net-166 state UP group default qlen 1000
    link/ether fa:df:d1:e7:cc:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 7



ip r
default via 193.19.242.1 dev enp2s0 proto static 
10.0.0.0/8      dev enp1s0f1 proto kernel scope link src 10.0.1.2 
10.166.0.0/16   dev net-166  proto kernel scope link src 10.166.0.1 
10.240.0.0/12   dev lxdbr0   proto kernel scope link src 10.254.254.1 
172.17.0.0/16   dev docker0  proto kernel scope link src 172.17.0.1 
193.19.242.0/24 dev enp2s0   proto kernel scope link src 123.456.789.164
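One way to check whether LXD actually installed a source-NAT rule for the bridge (a diagnostic suggestion, not something from the thread) is to dump the host's NAT rules:

```shell
# On nftables-driven hosts:
nft list ruleset | grep -i snat

# On iptables-driven hosts, look for SNAT/MASQUERADE covering the
# bridge subnet:
iptables-save -t nat | grep -E 'SNAT|MASQUERADE'
```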

====================
And on the Hetzner server I’m now trying the same thing I already did before, but it fails:

lxc config device override n234-test type=nic eth0 network=n234
Error: The profile device doesn't exist
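For comparison, the invocation that worked earlier in the thread puts the device name immediately after the instance name; in the command above, type=nic comes before eth0, so lxc presumably parses type=nic as the device name:

```shell
# Argument order: instance, then device, then key=value settings
lxc config device override n234-test eth0 type=nic network=n234
```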