Network is not working in new containers

Hi,

I have been using LXD for a while and everything was fine. When I installed Docker inside an LXD container, Docker on the host stopped working. I deleted /var/run/docker.sock, and Docker on both the host and in the LXD container started working again.

Now a new problem has started: newly created LXD containers are not communicating with the LAN. My configuration is below. Please help me fix this, or guide me on how to troubleshoot it.

root@GL503VM:~# lxc profile list
+---------------+---------+
|     NAME      | USED BY |
+---------------+---------+
| bridgeprofile | 3       |
+---------------+---------+
| default       | 8       |
+---------------+---------+
root@GL503VM:~# lxc profile show bridgeprofile
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge0
    type: nic
name: bridgeprofile
used_by:
- /1.0/containers/node02
- /1.0/containers/ubuntu
- /1.0/containers/ubuntu3
root@GL503VM:~# lxc profile show deafult
Error: Fetch profile: No such object
    root@GL503VM:~# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/gitlab
- /1.0/containers/devops
- /1.0/containers/mediawiki
- /1.0/containers/node02
- /1.0/containers/ubuntu
- /1.0/containers/ubuntu1
- /1.0/containers/ubuntu2
- /1.0/containers/ubuntu3
root@GL503VM:~# lxc list
+-----------+---------+----------------------+------+------------+-----------+
|   NAME    |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+----------------------+------+------------+-----------+
| devops    | RUNNING | 192.168.2.226 (eth0) |      | PERSISTENT | 2         |
|           |         | 172.17.0.1 (docker0) |      |            |           |
+-----------+---------+----------------------+------+------------+-----------+
| gitlab    | RUNNING | 192.168.2.223 (eth0) |      | PERSISTENT | 0         |
|           |         | 172.17.0.1 (docker0) |      |            |           |
+-----------+---------+----------------------+------+------------+-----------+
| mediawiki | STOPPED |                      |      | PERSISTENT | 0         |
+-----------+---------+----------------------+------+------------+-----------+
| ubuntu    | STOPPED |                      |      | PERSISTENT | 0         |
+-----------+---------+----------------------+------+------------+-----------+
| ubuntu1   | STOPPED |                      |      | PERSISTENT | 0         |
+-----------+---------+----------------------+------+------------+-----------+
| ubuntu2   | STOPPED |                      |      | PERSISTENT | 0         |
+-----------+---------+----------------------+------+------------+-----------+
| ubuntu3   | RUNNING |                      |      | PERSISTENT | 0         |
+-----------+---------+----------------------+------+------------+-----------+

The network is fine in the container below:

root@GL503VM:~# lxc config show devops
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20190918)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20190918"
  image.type: squashfs
  image.version: "18.04"
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 9ff5784302bfd6d556ac4c4c1176a37e86d89ac4d1aced14d9388919fa58bee8
  volatile.eth0.host_name: vethc32dedcf
  volatile.eth0.hwaddr: 00:16:3e:d9:1e:68
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  mydirectory:
    path: /data
    source: /data/devops
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

The network is not working at all in the container below:

root@GL503VM:~# lxc config show ubuntu3
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20191003)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20191003"
  image.type: squashfs
  image.version: "18.04"
  volatile.base_image: 09c21c90f975fec9363d6797dff8481ccbcb794f0c1aedb2edbf804590adb01c
  volatile.eth0.host_name: veth2c1197cb
  volatile.eth0.hwaddr: 00:16:3e:97:96:93
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- bridgeprofile
- default
stateful: false
description: ""

Docker on the host is known for messing with firewalling. I’d recommend checking your iptables tables for anything which would prevent the container from talking to the host (including DHCP).
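For example, to see whether the container’s DHCP requests even reach the bridge, you can watch for DHCP traffic on the host while restarting networking in the container (a quick sketch, using the bridge0 name from the profiles above):

# On the host: watch for DHCP requests/replies crossing the LXD bridge
tcpdump -ni bridge0 port 67 or port 68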

Hi Stephane, thank you so much for your response. Actually, I realized the trouble with having a firewall and iptables rules, so I removed them at the very start. Now neither the host nor the container has them.

Container:
root@ubuntu3:~# apt-get remove --purge iptables-persistent
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package iptables-persistent
root@ubuntu3:~# exit

Host:
root@amaris01-GL503VM:~# apt-get remove --purge iptables-persistent
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'iptables-persistent' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
root@amaris01-GL503VM:~# apt-get remove ufw
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'ufw' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.

Please help me with this, as I want to move this setup to production and have not been able to show a demo so far. My intention is to have 4 containers on the same server, each serving as a node for a Kubernetes, OpenShift, OpenStack, and Nomad cluster, but I am stuck at the first part: setting up the Kubernetes node itself. Due to the time difference, I can only respond the next day. Thanks.

Please show iptables -L -n -v

Hi Stephane,

Below is the output.

-GL503VM:~/git/projects/istio$ sudo su -
root@amaris01-GL503VM:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 881K packets, 1222M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 132K packets, 60M bytes)
 pkts bytes target     prot opt in     out     source               destination
 132K   60M DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 132K   60M DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 529K packets, 85M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
 132K   60M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
 132K   60M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Right, so as I said, it’s Docker changing your iptables rules and blocking LXD traffic.

You’ll notice that the policy of the FORWARD chain has been changed to DROP, so only Docker traffic is accepted, with 132K packets having been dropped by that policy so far.
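If you want to keep Docker on the host rather than remove the firewall entirely, one common approach is to allow the LXD bridge in the DOCKER-USER chain, which Docker reserves for user rules and consults before its own (a sketch, again assuming bridge0 as the LXD-facing bridge):

# Accept forwarded traffic to and from the LXD bridge before Docker's rules apply
iptables -I DOCKER-USER -i bridge0 -j ACCEPT
iptables -I DOCKER-USER -o bridge0 -j ACCEPT

iptables rules do not persist across reboots, so these would need to be re-applied at boot.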

Hi Stephane,

Thank you so much for checking my request at an odd hour.

Yes, after flushing and disabling the firewall, everything works normally. Thank you so much for your guidance.
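For reference, the minimal reset is along these lines (a sketch; note it also clears Docker’s rules, so Docker networking may need to be restarted afterwards):

# Restore the FORWARD policy that Docker changed, then flush the filter rules
iptables -P FORWARD ACCEPT
iptables -F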

Regards
Barani

Hi Stephane,

Just another doubt: when installing Kubernetes, even after disabling swap using the command below,

lxc config set master limits.memory.swap false

free -m inside the container still shows the swap (100% unused), and the Kubernetes installation fails because of the presence of swap. Can you please help me with how to manage such a situation?
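One workaround, since the container still sees the host’s swap device, is to let kubelet tolerate swap instead of hiding it; a sketch, assuming a kubeadm-managed kubelet (the paths are typical for the Ubuntu packages and may differ):

# Tell kubelet not to fail when swap is present
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' >> /etc/default/kubelet
systemctl restart kubelet

# kubeadm's preflight check needs the matching exemption
kubeadm init --ignore-preflight-errors=Swap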

Thanks in Advance.

Regards
Barani

After setting the security config as below, the error changed. I understand it’s just me needing to learn Kubernetes and LXC, and that this is not an LXD error. Please point me to the right documentation so that I can try to set up the cluster the correct way. Thanks again.

lxc config set master security.privileged true

Oct 10 06:33:39 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 10 06:33:49 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Oct 10 06:33:49 master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 10 06:33:49 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Oct 10 06:33:49 master systemd[1]: kubelet.service: Failed to reset devices.list: Operation not permitted
Oct 10 06:33:49 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
Oct 10 06:33:49 master kubelet[707]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tas
Oct 10 06:33:49 master kubelet[707]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks
Oct 10 06:33:49 master kubelet[707]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tas
Oct 10 06:33:49 master kubelet[707]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks
Oct 10 06:33:49 master kubelet[707]: I1010 06:33:49.778412 707 server.go:410] Version: v1.16.1
Oct 10 06:33:49 master kubelet[707]: I1010 06:33:49.778653 707 plugins.go:100] No cloud provider specified.
Oct 10 06:33:49 master kubelet[707]: I1010 06:33:49.778666 707 server.go:773] Client rotation is on, will bootstrap in background
Oct 10 06:33:49 master kubelet[707]: I1010 06:33:49.781134 707 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 10 06:33:49 master kubelet[707]: I1010 06:33:49.861496 707 server.go:644] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Oct 10 06:33:49 master kubelet[707]: F1010 06:33:49.862097 707 server.go:271] failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /p
Oct 10 06:33:49 master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Oct 10 06:33:49 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
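The "Failed to reset devices.list: Operation not permitted" line suggests systemd inside the container cannot manage its device cgroup. Guides for running Kubernetes inside LXD typically relax a few more settings than security.privileged alone; a sketch drawn from such guides (verify each key against the LXD documentation for your version):

lxc config set master security.privileged true
lxc config set master security.nesting true
lxc config set master linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
lxc config set master raw.lxc 'lxc.apparmor.profile=unconfined
lxc.cap.drop=
lxc.cgroup.devices.allow=a
lxc.mount.auto=proc:rw sys:rw'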