Containers can not ping host

I can't ping the containers from the host, or the host from the containers.
I can ping the containers from all other hosts.
What is wrong?

Please can you provide your network config and container config.

This is my simple profile:
config:
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: lanprofile

and I create a container with this command:

lxc launch -p lanprofile ubuntu:18.04 test

The routing table on my host is:
route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.74.0.1       0.0.0.0         UG    0      0        0 eth0
10.74.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.74.0.0       0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
and the routing table in the container is:
route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.74.0.1       0.0.0.0         UG    0      0        0 eth0
10.74.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0

I think there is some virbr0 missing on my host.

Ah, so you're using macvlan. I'm afraid that macvlan (and ipvlan) devices do not allow the containers to communicate with the host (and vice versa). This is an inherent characteristic of the macvlan device type in the Linux kernel.

You could try using a bridge instead of macvlan.
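For example, once a host bridge exists, a dedicated profile can attach containers to it. This is only a sketch, not from the thread: the bridge name br0 and the profile name bridgeprofile are assumptions.

```shell
# Create a profile whose NIC is bridged to the host bridge br0 (assumed name)
lxc profile create bridgeprofile
lxc profile device add bridgeprofile eth0 nic nictype=bridged parent=br0 name=eth0
lxc profile device add bridgeprofile root disk path=/ pool=default

# Launch a test container with it
lxc launch -p bridgeprofile ubuntu:18.04 bridgetest
```

With a bridged NIC, the container's veth is plugged into the same L2 segment as the host, so host-to-container traffic works.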

OK, how?
Do you have any docs on how to do that? Thanks.

Take a look at https://openschoolsolutions.org/set-up-network-bridge-lxd/

OK, thanks.
Can I set up a bridge, create a new profile, and add or move all my containers to that profile?

Yep that should be fine.

I followed the doc you sent step by step.
At the final command:
lxc config device add my_containers eth0 nictype=bridged parent=br0 name=eth0
Error: Invalid devices: Device validation failed "eth0": Unsupported device type

You missed the device type, try

lxc config device add my_containers eth0 nic nictype=bridged parent=br0 name=eth0

It's not working. I added the device to container test; ifconfig in test then shows 3 entries: lo with 127.0.0.1, eth0 with no IPv4 address, and br0 with my correct IP, 10.74.0.13, but I can't use it.
I can't ping it, and I can't see any host from this container.

Please can you post all of your network config, profile config and container configs.

Thanks

My host's /etc/network/interfaces (it's Ubuntu, upgraded from 16.04 to 18.04):

auto eth0
iface eth0 inet static
        address 10.74.0.8
        netmask 255.255.255.0
        network 10.74.0.0
        broadcast 10.74.0.255
        gateway 10.74.0.1

and ifconfig shows:

eth0 = 10.74.0.8
lxdbr0 = 10.110.235.1
route -n on the host:
0.0.0.0         10.74.0.1       0.0.0.0         UG    0      0        0 eth0
10.74.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.110.235.0    0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0

I create a container with all defaults:

lxc launch ubuntu:18.04 test

At this point the container has an IP on lxdbr0, 10.233.10.xx-style (10.110.235.xx), and everything works: ping in and out, etc.

route -n in the container is:
0.0.0.0         10.110.235.1    0.0.0.0         UG    100    0        0 eth0
10.110.235.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.110.235.1    0.0.0.0         255.255.255.255 UH    100    0        0 eth0

I want to give the container my IP range,
so I go to /etc/netplan/50… .yaml,
set dhcp to no, and give it IP addresses.
With this IP the container can see all other hosts, but not its own host (the IP is in the same range, 10.74.0.x/24,
and the gateway is 10.74.0.1),
and the host can't see the containers.
I used the doc you sent me and reconfigured 50xxx.yaml to:

network:
    ethernets:
        eth0:
            dhcp4: no
    version: 2
    bridges:
        lxdbr0:
            dhcp4: no
            addresses:
            - 10.74.0.13/24
            gateway4: 10.74.0.1
            nameservers:
                addresses:
                - 10.74.0.1
            interfaces:
            - eth0

and gave the container:

lxc config device add test eth0 nic nictype=bridged parent=lxdbr0 name=eth0

The result is the same, except that now the container can't see the other hosts either.

I’ve modified your post to use code formatting to make it easier to read.

There are several issues here:

  1. You need to create a new bridge, i.e. one not called lxdbr0, as that is the internal LXD bridge. So change your netplan bridges section to create a new bridge; I suggest br0.
  2. Your container needs to be parented to the new bridge, not the lxdbr0 bridge, so the parent= part should be parent=br0.
  3. You should also ensure that eth0 on the host doesn't have any IP addresses, and that your new bridge instead has the IP that eth0 had.
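Putting points 1 and 3 together, the host netplan would look something like the following sketch (the addresses are the ones from this thread; br0 is the suggested new bridge name, and the host keeps its original IP, 10.74.0.8, on the bridge instead of on eth0):

```yaml
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: no          # no address on eth0 itself
    bridges:
        br0:
            dhcp4: no
            addresses:
            - 10.74.0.8/24     # the IP that eth0 had
            gateway4: 10.74.0.1
            nameservers:
                addresses:
                - 10.74.0.1
            interfaces:
            - eth0
```

After `netplan apply`, containers parented to br0 get addresses in 10.74.0.0/24 and can reach both the host and the rest of the LAN.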

Alternatively if you do not need the container to talk to the host you could abandon using bridges and just use macvlan:

lxc config device add test eth0 nic nictype=macvlan parent=eth0 name=eth0

I created a new bridge br0 and added the device to the container:

+------+---------+-----------------+------+-----------+-----------+
| test | RUNNING | 10.74.0.7 (br0) |      | CONTAINER | 0         |
+------+---------+-----------------+------+-----------+-----------+

From the container, ifconfig shows br0 = 10.74.0.7 and eth0 has no address.
It's not working.

Thank you it’s working now.

Hi,
I am trying to do a stress test on LXD containers (running on a Raspbian VM). One of them is running Apache Web Server and the other is running Apache Bench. But I am unable to ping between these two containers.

Please can you provide more info. To start:

  • The output of ip a and ip r in both the containers and the host.
  • The output of lxc config show <container> --expanded for each container.

Thanks for your prompt response. Here:

For Host: ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f5:05:d9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
       valid_lft 85575sec preferred_lft 74775sec
    inet6 fe80::a9bc:a1f3:c223:928b/64 scope link
       valid_lft forever preferred_lft forever
3: bridge_ragini: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:0a:dc:7f brd ff:ff:ff:ff:ff:ff
    inet 10.233.10.1/24 scope global bridge_ragini
       valid_lft forever preferred_lft forever
    inet6 fd42:aac1:e1b9:e3d8::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe0a:dc7f/64 scope link
       valid_lft forever preferred_lft forever
5: veth69b6053b@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge_ragini state UP group default qlen 1000
    link/ether e6:32:89:95:9c:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.164.112/16 brd 169.254.255.255 scope global noprefixroute veth69b6053b
       valid_lft forever preferred_lft forever
7: vethd3bda972@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge_ragini state UP group default qlen 1000
    link/ether 46:34:96:1c:d7:75 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 169.254.2.245/16 brd 169.254.255.255 scope global noprefixroute vethd3bda972
       valid_lft forever preferred_lft forever
9: veth39e8527b@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge_ragini state UP group default qlen 1000
    link/ether 3e:92:d6:a8:2c:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 169.254.67.211/16 brd 169.254.255.255 scope global noprefixroute veth39e8527b
       valid_lft forever preferred_lft forever
11: veth1c78ba01@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master bridge_ragini state UP group default qlen 1000
    link/ether 46:c3:de:19:c4:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet 169.254.235.90/16 brd 169.254.255.255 scope global noprefixroute veth1c78ba01
       valid_lft forever preferred_lft forever

################################################
For Host: ip r:

default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 202
10.0.2.0/24 dev eth0 proto dhcp scope link src 10.0.2.15 metric 202
10.233.10.0/24 dev bridge_ragini proto kernel scope link src 10.233.10.1
169.254.0.0/16 dev veth69b6053b scope link src 169.254.164.112 metric 205
169.254.0.0/16 dev vethd3bda972 scope link src 169.254.2.245 metric 207
169.254.0.0/16 dev veth39e8527b scope link src 169.254.67.211 metric 209
169.254.0.0/16 dev veth1c78ba01 scope link src 169.254.235.90 metric 211

###################################################
For container-1: ip a:

ubuntu@benchmark:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:83:57:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.233.10.37/24 brd 10.233.10.255 scope global dynamic eth0
       valid_lft 2679sec preferred_lft 2679sec
    inet6 fd42:aac1:e1b9:e3d8:216:3eff:fe83:570f/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3256sec preferred_lft 3256sec
    inet6 fe80::216:3eff:fe83:570f/64 scope link
       valid_lft forever preferred_lft forever

##############################################

For Container-1: ip r:

ip r
default via 10.233.10.1 dev eth0 proto dhcp src 10.233.10.37 metric 100
10.233.10.0/24 dev eth0 proto kernel scope link src 10.233.10.37
10.233.10.1 dev eth0 proto dhcp scope link src 10.233.10.37 metric 100
#######################################################
For container-2: ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:4b:e3:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.233.10.74/24 brd 10.233.10.255 scope global dynamic eth0
       valid_lft 2596sec preferred_lft 2596sec
    inet6 fd42:aac1:e1b9:e3d8:216:3eff:fe4b:e3df/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3176sec preferred_lft 3176sec
    inet6 fe80::216:3eff:fe4b:e3df/64 scope link
       valid_lft forever preferred_lft forever

#########################################
For Container-2: ip r:

default via 10.233.10.1 dev eth0 proto dhcp src 10.233.10.74 metric 100
10.233.10.0/24 dev eth0 proto kernel scope link src 10.233.10.74
10.233.10.1 dev eth0 proto dhcp scope link src 10.233.10.74 metric 100

###########################################
Output of lxc config for container-1:

sudo /snap/bin/lxc config show apache1 --expanded

architecture: i686
config:
  image.architecture: i386
  image.description: ubuntu 18.04 LTS i386 (release) (20201014)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20201014"
  image.type: squashfs
  image.version: "18.04"
  volatile.base_image: 968688d6dfb9530463a7ea811beac3efb4dcf2fbc361489a2f7a43975509acb4
  volatile.bridge_ragini.hwaddr: 00:16:3e:80:ef:56
  volatile.eth0.host_name: veth69b6053b
  volatile.eth0.hwaddr: 00:16:3e:4b:e3:df
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge_ragini
    type: nic
  myport80:
    connect: tcp:127.0.0.1:80
    listen: tcp:0.0.0.0:80
    type: proxy
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
#############################################################
Output of lxc config for container-2:

architecture: i686
config:
  image.architecture: i386
  image.description: ubuntu 18.04 LTS i386 (release) (20201014)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20201014"
  image.type: squashfs
  image.version: "18.04"
  volatile.base_image: 968688d6dfb9530463a7ea811beac3efb4dcf2fbc361489a2f7a43975509acb4
  volatile.eth0.host_name: vethd3bda972
  volatile.eth0.hwaddr: 00:16:3e:83:57:0f
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge_ragini
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

So it looks like you're using a proxy device to expose a port from the host into one of the containers?
Is this what you are trying to connect your load test to?

Please show me the ping command that you are not able to get working.
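For reference, one way to check container-to-container reachability from the host is via lxc exec. This is a sketch; the container names (benchmark, apache1) and addresses (10.233.10.37, 10.233.10.74) are taken from the output above:

```shell
# Ping container-2 (apache1, 10.233.10.74) from container-1 (benchmark)
lxc exec benchmark -- ping -c 3 10.233.10.74

# And the reverse direction
lxc exec apache1 -- ping -c 3 10.233.10.37

# If ICMP seems blocked, test the Apache port directly
lxc exec benchmark -- curl -sS http://10.233.10.74/ >/dev/null && echo "port 80 reachable"
```

If these work but Apache Bench against the host's address does not, the problem is likely in the proxy device rather than in bridge_ragini.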