How to assign a public IP to a CentOS container with Ubuntu Focal as the hypervisor

Dear Experts,
How do I assign a public IP to a CentOS container when Ubuntu Focal is the hypervisor?

I have rented a bare-metal (dedicated) server on OVHcloud and installed LXD on Ubuntu Focal for the hypervisor role. (Not using OVH's virtual rack (vRack) or any other firewall.)

Objective/Goal
Deploy containers of various Linux distros on the Ubuntu hypervisor, install an application stack in each, purchase public IPs, assign each container its own public IP, and access the applications from the external network (internet).
Since Ubuntu supports the netplan utility, I created a public-IP yaml file, 60-public-init.yaml, defined the public IP there, and it worked; I think the existing 50-cloud-init.yaml file (DHCP) already covers the 10.x.x.x addressing for LXD communication.
But for CentOS I am stuck on assigning a different network IP (the purchased public IP). The steps below were carried out on the Ubuntu Focal hypervisor machine (outside of the CentOS container).

**// Initialize LXD (executed on the Ubuntu Focal host serving as hypervisor) //**
# lxd init --auto --storage-backend=dir
**// Connect the LXD bridge interface to one of the machine's physical interfaces (on the Ubuntu Focal host serving as hypervisor) //**
# lxc network attach-profile lxdbr0 default 51.89.233.107

**// Added an additional IP range to the routing table on the Ubuntu Focal host serving as hypervisor //**
# ip -4 route add 51.195.168.16/28 dev lxdbr0
Now I have 51.195.168.16 to 51.195.168.31 and started deploying containers to consume these IPs. This works as expected with Ubuntu, which, as mentioned, supports netplan; but the deployed CentOS container does not have netplan, so I am stuck on how to assign the public IP 51.195.168.24/32 to it.
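For comparison, the netplan file I used in the Ubuntu containers looked roughly like the sketch below. This is illustrative only: the file name 60-public-init.yaml is from my setup, but the address, gateway and on-link details here are assumptions about what a routed setup like this needs, not a verified copy of my file.

```yaml
# /etc/netplan/60-public-init.yaml inside an Ubuntu container (sketch).
# The gateway is assumed to be the lxdbr0 address; since that address
# is outside the /28, an on-link route is needed for it to be reachable.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 51.195.168.16/28
      nameservers:
        addresses: [10.192.120.1]
      routes:
        - to: 0.0.0.0/0
          via: 10.192.120.1
          on-link: true
```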
// At the CentOS container's rootfs (on the hypervisor): cd /var/snap/lxd/common/lxd/storage-pools/default/containers/still-manatee/rootfs/etc/sysconfig/network-scripts
# vi /var/snap/lxd/common/lxd/storage-pools/default/containers/still-manatee/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
HOSTNAME=still-manatee
NM_CONTROLLED=no
TYPE=Ethernet
MTU=
DHCP_HOSTNAME=still-manatee

Request: in the above ifcfg-eth0 a different network IP (via DHCP) is already assigned, and now I have to assign the public IP to this container. Should it be done in the same ifcfg-eth0 file? If "yes", please help me with how to define 51.195.168.24/32 in that file.
OR
Should I create a new file? If so, what should the file name be (should it be ifcfg-eth0-a?), and should it look like the below?

DEVICE=eth0
BOOTPROTO="STATIC"
DEFROUTE="yes"
IPADDR=51.195.168.24/32
NETMASK=255.255.255.240
GATEWAY= # **Should this be the LXD bridge interface IP (the hypervisor IP)? If that is incorrect, please help with the actual IP**
DNS1=#
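For reference, my understanding is that RHEL/CentOS ifcfg files use KEY=value syntax (no colons), with either NETMASK=255.255.255.240 or the equivalent PREFIX=28 for a /28. A sketch of what I think the static file should contain, with the values I am unsure about left open:

```
# Sketch of a static ifcfg-eth0 in RHEL key=value syntax.
# GATEWAY and DNS1 are deliberately left blank: they are the open question.
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=51.195.168.24
PREFIX=28
GATEWAY=
DNS1=
```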

Additional information: attached below are the ifconfig -a output from the Ubuntu Focal hypervisor and the ip addr show output from within the CentOS container.

Hi,

Please can you show me the output of the following commands on the LXD host:

lxc network show lxdbr0
lxc profile show default
ip a
ip r

Also, do you know if the additional public IPs have been routed to your LXD host using the existing IP as the next-hop or whether your ISP is expecting the LXD host to respond to ARP requests for those IPs?

Thanks

Thank you so much for the reply,
Yes, all the traffic is traversing the same physical interface, which is eno1 (51.89.233.107) at the machine level.
As mentioned - ubuntu focal as Host machine
lxc network attach-profile lxdbr0 default 51.89.233.107

The objective is to assign a public IP to each container and hand them to different teams hosting different applications. I purchased a block of 16 IP addresses, 51.195.168.16/28, that will be routed via ns3163697.ip-51-89-233.eu, i.e. 51.89.233.107.

I added it to the routing table as below:

  1. ip -4 route add 51.195.168.16/28 dev lxdbr0
    (this covers all 16 IPs from 51.195.168.16 to 51.195.168.31)

  2. Created an LXD Ubuntu container, used netplan to configure 51.195.168.16, and can ping from this server to external hosts and from external hosts to this server. Working perfectly, as expected.
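The range claim in step 1 can be double-checked with plain shell arithmetic (nothing LXD-specific; a /28 holds 2^(32-28) = 16 addresses):

```shell
# Enumerate the member addresses of the purchased block 51.195.168.16/28
# using plain shell arithmetic (no LXD required).
prefix=51.195.168
first=16
count=16   # a /28 spans 2^(32-28) = 16 addresses
addrs=""
i=0
while [ "$i" -lt "$count" ]; do
  addrs="$addrs $prefix.$((first + i))"
  i=$((i + 1))
done
echo "range: $prefix.$first - $prefix.$((first + count - 1))"
# prints: range: 51.195.168.16 - 51.195.168.31
```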

But I am stuck achieving the same with CentOS, as manual steps are involved, and I request the experts' support here. Please find the details below, noting which commands were executed on the host hypervisor and which within the container.

-------------------root@ns3163697:~# lxc network show lxdbr0------------------------------------
On the Ubuntu Linux hypervisor machine

root@ns3163697:~# lxc network show lxdbr0
config:
ipv4.address: 10.192.120.1/24
ipv4.nat: "true"
ipv6.address: fd42:f3e3:9bff:fc2f::1/64
ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:

  • /1.0/instances/centos8
  • /1.0/instances/i1focal
  • /1.0/instances/i2focal
  • /1.0/instances/still-manatee
  • /1.0/instances/u1
  • /1.0/instances/u2
  • /1.0/instances/u3
  • /1.0/instances/u4
  • /1.0/instances/u5
  • /1.0/instances/u6focal
  • /1.0/instances/u7focal
  • /1.0/profiles/default
managed: true
status: Created
locations:
  • none

Note: the above i1focal, i2focal and u2 are all assigned public IPs, as these are Ubuntu containers, which support netplan.

--------------lxc profile show default------------------------------
On the Ubuntu Linux hypervisor machine
root@ns3163697:~# lxc profile show default
config: {}
description: Default LXD profile
devices:
51.89.233.107:
nictype: bridged
parent: lxdbr0
type: nic
eth0:
name: eth0
network: lxdbr0
type: nic
root:
path: /
pool: default
type: disk
name: default
used_by:

  • /1.0/instances/i1focal
  • /1.0/instances/i2focal
  • /1.0/instances/u5
  • /1.0/instances/u1
  • /1.0/instances/u2
  • /1.0/instances/u3
  • /1.0/instances/u4
  • /1.0/instances/u6focal
  • /1.0/instances/u7focal
  • /1.0/instances/centos8
  • /1.0/instances/still-manatee
----------------------------------------End-------------------

---------------------------root@ns3163697:~# ip a-------------
On the Ubuntu Linux hypervisor machine
root@ns3163697:~# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether ac:1f:6b:d4:63:ea brd ff:ff:ff:ff:ff:ff
inet 51.89.233.107/24 brd 51.89.233.255 scope global dynamic eno1
valid_lft 78920sec preferred_lft 78920sec
inet6 2001:41d0:800:266b::/56 scope global
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fed4:63ea/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ac:1f:6b:d4:63:eb brd ff:ff:ff:ff:ff:ff
4: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 scope global lxcbr0
valid_lft forever preferred_lft forever
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:e4:0e:93 brd ff:ff:ff:ff:ff:ff
inet 10.192.120.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
inet6 fd42:f3e3:9bff:fc2f::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:fee4:e93/64 scope link
valid_lft forever preferred_lft forever
27: veth2364ac98@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether a6:3b:40:b4:c4:79 brd ff:ff:ff:ff:ff:ff link-netnsid 0
29: veth0d51e834@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 46:39:5c:d0:5e:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
31: veth22facd9c@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether ce:da:2e:eb:70:8f brd ff:ff:ff:ff:ff:ff link-netnsid 1
33: veth1ee76a59@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether ce:4e:0a:a2:9c:0b brd ff:ff:ff:ff:ff:ff link-netnsid 1
35: veth30d9047b@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 46:52:15:15:25:7b brd ff:ff:ff:ff:ff:ff link-netnsid 4
37: veth8582c575@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 5a:c2:89:32:92:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 4
51: veth23871d9d@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether e2:3c:54:9a:4b:93 brd ff:ff:ff:ff:ff:ff link-netnsid 3
53: vethf5b519b9@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 6a:fe:a6:b9:5b:ba brd ff:ff:ff:ff:ff:ff link-netnsid 3
55: veth5b3164e7@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether aa:8d:27:74:06:83 brd ff:ff:ff:ff:ff:ff link-netnsid 5
57: veth0a8f2c09@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether c6:df:6d:65:19:fd brd ff:ff:ff:ff:ff:ff link-netnsid 5
59: vethf3d4d4bf@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 96:b2:c4:b8:42:64 brd ff:ff:ff:ff:ff:ff link-netnsid 6
61: veth365c0187@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether ce:48:b8:0f:91:58 brd ff:ff:ff:ff:ff:ff link-netnsid 6
67: veth902b6e37@if66: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 8e:e6:22:4d:6f:41 brd ff:ff:ff:ff:ff:ff link-netnsid 7
69: veth55404c4d@if68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 9a:a1:f7:49:28:ba brd ff:ff:ff:ff:ff:ff link-netnsid 7
71: vethd0bcff40@if70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether ce:c2:4d:f0:ec:66 brd ff:ff:ff:ff:ff:ff link-netnsid 8
73: vetha8391924@if72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 72:f4:c6:9d:45:2f brd ff:ff:ff:ff:ff:ff link-netnsid 8
79: veth2580d966@if78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether aa:e6:db:8d:72:5e brd ff:ff:ff:ff:ff:ff link-netnsid 9
81: vetha7a8af5a@if80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 7a:f2:ef:f0:4c:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 9
99: vethfea65301@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether d2:a9:48:56:29:53 brd ff:ff:ff:ff:ff:ff link-netnsid 10
101: veth6e2bbbab@if100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
link/ether 46:31:7d:22:8e:ee brd ff:ff:ff:ff:ff:ff link-netnsid 10
root@ns3163697:~#
--------------- ip r---------------------
On the Ubuntu Linux hypervisor machine
root@ns3163697:~# ip r
default via 51.89.233.254 dev eno1 proto dhcp src 51.89.233.107 metric 100
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 linkdown
10.192.120.0/24 dev lxdbr0 proto kernel scope link src 10.192.120.1
51.89.233.0/24 dev eno1 proto kernel scope link src 51.89.233.107
51.89.233.254 dev eno1 proto dhcp scope link src 51.89.233.107 metric 100
51.195.168.16/28 dev lxdbr0 scope link
root@ns3163697:~#

--------------------------------end -----------------

--------- Below is the ifconfig -a output from the Ubuntu Focal hypervisor --------------------
On the Ubuntu Linux hypervisor machine
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 51.89.233.107 netmask 255.255.255.0 broadcast 51.89.233.255
inet6 fe80::ae1f:6bff:fed4:63ea prefixlen 64 scopeid 0x20
inet6 2001:41d0:800:266b:: prefixlen 56 scopeid 0x0
ether ac:1f:6b:d4:63:ea txqueuelen 1000 (Ethernet)
RX packets 7438618 bytes 3180308850 (3.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6543657 bytes 1269805031 (1.2 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno2: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether ac:1f:6b:d4:63:eb txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 32 bytes 3436 (3.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 32 bytes 3436 (3.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lxcbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.0.3.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:16:3e:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lxdbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.192.120.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fd42:f3e3:9bff:fc2f::1 prefixlen 64 scopeid 0x0
inet6 fe80::216:3eff:fee4:e93 prefixlen 64 scopeid 0x20
ether 00:16:3e:e4:0e:93 txqueuelen 1000 (Ethernet)
RX packets 4378293 bytes 1135496760 (1.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5720406 bytes 2141456850 (2.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth0a8f2c09: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether c6:df:6d:65:19:fd txqueuelen 1000 (Ethernet)
RX packets 736987 bytes 376191120 (376.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2124204 bytes 709795469 (709.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth0d51e834: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 46:39:5c:d0:5e:6a txqueuelen 1000 (Ethernet)
RX packets 715412 bytes 128057517 (128.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2098011 bytes 150973955 (150.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1ee76a59: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether ce:4e:0a:a2:9c:0b txqueuelen 1000 (Ethernet)
RX packets 291080 bytes 65322696 (65.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1732863 bytes 102660619 (102.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth22facd9c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether ce:da:2e:eb:70:8f txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1429242 bytes 60919142 (60.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth2364ac98: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether a6:3b:40:b4:c4:79 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1432236 bytes 61047580 (61.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth23871d9d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether e2:3c:54:9a:4b:93 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1389316 bytes 59231314 (59.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth2580d966: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether aa:e6:db:8d:72:5e txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 919307 bytes 39223004 (39.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth30d9047b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 46:52:15:15:25:7b txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1426944 bytes 60820710 (60.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth365c0187: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether ce:48:b8:0f:91:58 txqueuelen 1000 (Ethernet)
RX packets 245874 bytes 18969280 (18.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1647003 bytes 114121716 (114.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth55404c4d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 9a:a1:f7:49:28:ba txqueuelen 1000 (Ethernet)
RX packets 206141 bytes 16107395 (16.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1606832 bytes 110142908 (110.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth5b3164e7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether aa:8d:27:74:06:83 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1388227 bytes 59180510 (59.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth6e2bbbab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 46:31:7d:22:8e:ee txqueuelen 1000 (Ethernet)
RX packets 183 bytes 18470 (18.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 189136 bytes 8074094 (8.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth8582c575: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 5a:c2:89:32:92:b5 txqueuelen 1000 (Ethernet)
RX packets 915874 bytes 415706071 (415.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2274368 bytes 492915715 (492.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth902b6e37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 8e:e6:22:4d:6f:41 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1384775 bytes 59016352 (59.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vetha8391924: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 72:f4:c6:9d:45:2f txqueuelen 1000 (Ethernet)
RX packets 196048 bytes 14863934 (14.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1593656 bytes 106755509 (106.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vetha7a8af5a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 7a:f2:ef:f0:4c:f2 txqueuelen 1000 (Ethernet)
RX packets 165466 bytes 11913422 (11.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1093534 bytes 82852435 (82.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethd0bcff40: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether ce:c2:4d:f0:ec:66 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1381554 bytes 58877350 (58.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethf3d4d4bf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 96:b2:c4:b8:42:64 txqueuelen 1000 (Ethernet)
RX packets 9 bytes 766 (766.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1386363 bytes 59095484 (59.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethf5b519b9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 6a:fe:a6:b9:5b:ba txqueuelen 1000 (Ethernet)
RX packets 897068 bytes 146081292 (146.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2241748 bytes 725202280 (725.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethfea65301: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether d2:a9:48:56:29:53 txqueuelen 1000 (Ethernet)
RX packets 75 bytes 3538 (3.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 189040 bytes 8059766 (8.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
---------------------------------------------end---------------------------

---------------------- Below is the output from the CentOS container ---------------------------------
In the CentOS container

[root@still-manatee ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
98: eth1@if99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:d6:5a:ef brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fd42:f3e3:9bff:fc2f:216:3eff:fed6:5aef/64 scope global mngtmpaddr dynamic
valid_lft 3526sec preferred_lft 3526sec
inet6 fe80::216:3eff:fed6:5aef/64 scope link
valid_lft forever preferred_lft forever
100: eth0@if101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:b5:77:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.192.120.241/24 brd 10.192.120.255 scope global dynamic eth0
valid_lft 2769sec preferred_lft 2769sec
inet6 fd42:f3e3:9bff:fc2f:216:3eff:feb5:7742/64 scope global mngtmpaddr dynamic
valid_lft 3526sec preferred_lft 3526sec
inet6 fe80::216:3eff:feb5:7742/64 scope link
valid_lft forever preferred_lft forever
-----------------------------------------------------end----------------------------------------

Right I see now, thanks.

By the way, do you know about the ipv4.routes setting on the lxdbr0 network? You can add your 51.195.168.16/28 subnet to it, and LXD will automatically re-create the static route you are currently adding manually, so you won't have to run the ip -4 route add 51.195.168.16/28 dev lxdbr0 command on each start-up.

lxc network set <network name> ipv4.routes=<subnet CIDR>

See https://linuxcontainers.org/lxd/docs/master/networks#network-bridge

Back to your main question: how to configure CentOS to use the external IP.

In CentOS 8, this should configure the network how you want:

nmcli connection down "System eth0"
nmcli connection modify "System eth0" IPv4.address 51.195.168.24/28
nmcli connection modify "System eth0" IPv4.method manual
nmcli connection modify "System eth0" IPv4.gateway 10.192.120.1
nmcli connection modify "System eth0" IPv4.dns 10.192.120.1
nmcli connection up "System eth0"

Thank you very much.
I logged in to the CentOS container and executed the commands below. One command (IPv4.method manual) failed at first, but after that the IP, gateway and DNS commands were accepted, so it worked.

[root@centos8 ~]# nmcli connection down "System eth0"
Connection 'System eth0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/1)
[root@centos8 ~]# nmcli connection modify "System eth0" IPv4.method manual
Error: Failed to modify connection 'System eth0': ipv4.addresses: this property cannot be empty for 'method=manual'
[root@centos8 ~]# nmcli connection modify "System eth0" IPv4.address 51.195.168.24/28
[root@centos8 ~]# nmcli connection modify "System eth0" IPv4.gateway 10.192.120.1
[root@centos8 ~]# nmcli connection modify "System eth0" IPv4.dns 10.192.120.1
[root@centos8 ~]# nmcli connection up "System eth0"
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)

I logged out from the container and executed lxc list, and found that eth1 with 10.192.120.158 was newly created, in addition to 51.195.168.24. Hope this is okay.
+---------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| NAME    | STATE   | IPV4                  | IPV6                                           | TYPE      | SNAPSHOTS |
+---------+---------+-----------------------+------------------------------------------------+-----------+-----------+
| centos8 | RUNNING | 51.195.168.24 (eth0)  | fd42:f3e3:9bff:fc2f:9e37:93c2:d7e2:636b (eth1) | CONTAINER | 0         |
|         |         | 10.192.120.35 (eth0)  | fd42:f3e3:9bff:fc2f:216:3eff:fe7a:8d36 (eth0)  |           |           |
|         |         | 10.192.120.158 (eth1) |                                                |           |           |
+---------+---------+-----------------------+------------------------------------------------+-----------+-----------+

Also, may I request the command to run at the LXD host level so that the static route is added permanently and saved? At present I manually enter the following each time the server is rebooted:

#ip -4 route add 51.195.168.16/28 dev lxdbr0


lxc network set <network name> ipv4.routes=<subnet CIDR>

With your values: lxc network set lxdbr0 ipv4.routes=51.195.168.16/28

Try setting nmcli connection modify "System eth0" IPv4.method manual after setting the IP, and then reboot to clear the DHCP-assigned address.

Thank you very much, great support and very valuable help. Thanks once again.
