In addition: the LXD host is running on a KVM virtual machine.
There are no connection issues on the bare-metal server, but the issue does exist on the KVM VM.
Has anyone faced such a problem?
Can you please describe your ISP's setup in a bit more detail?
Firstly, have they allocated you a subnet of public IPs or just a single one?
Do they allow multiple MACs on your external interface?
I can already see some issues with the way you have eth0 and br0 configured (namely, they share the same subnet, so routing is not going to work the way you most likely want), but it would help to understand the external IP setup first before going further into the local config.
By “public IP” I mean the IP address of my LAN.
I need to be able to connect to my LXD container from my LAN, using a LAN IP address inside the container.
In other words, I need to forward the address from the LAN to the container.
br0 on the host possibly needs bridging to a physical NIC? I can't see where you have done that yet.
For traffic to be bridged out from the container through br0, a physical port connected to your LAN needs to be added to the bridge, so that L2 traffic and ARP can pass transparently through your bridge to the containers.
One of your server ports, the one connected to the network, will probably be called something like eth0, eno1, ens18 or similar. I'm not sure about CentOS, as I'd rather throw myself off a bridge (pardon the pun!) than use it.
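On CentOS/RHEL that enslaving is usually done in the ifcfg files. A minimal sketch, assuming the physical port is eth0 and the addresses shown are placeholders for your own LAN settings:

```ini
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10     ; hypothetical: the host's LAN address moves onto the bridge
PREFIX=24
GATEWAY=192.168.1.1     ; hypothetical LAN gateway

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0             ; the physical port connected to your LAN
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0              ; enslave it to br0; no IP address on eth0 itself
```

Note that eth0 carries no IP of its own; the host's address lives on br0, so the host and the containers attached to br0 sit on the same L2 segment as the rest of the LAN.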
Sorry for the late reply.
We have oVirt 4, and there is a default network filter on the vNIC which drops traffic from any MAC except the configured VM MAC.
I chose not to use a network filter, and it worked for me.
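For anyone hitting the same thing: if I recall correctly, oVirt's default vNIC filter is `vdsm-no-mac-spoofing`, which drops frames whose source MAC differs from the vNIC's own MAC. A bridged container inside the guest sends frames with its own MAC, so they get filtered. In the libvirt domain XML it shows up roughly like this (bridge name and MAC are placeholders):

```xml
<interface type='bridge'>
  <mac address='56:6f:00:00:00:01'/>
  <source bridge='ovirtmgmt'/>
  <!-- this filterref is what blocks the containers' MACs -->
  <filterref filter='vdsm-no-mac-spoofing'/>
</interface>
```

Selecting “No Network Filter” in the vNIC profile in the oVirt UI removes the `<filterref>` element, which is what made it work here.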