Container to Host Networking

Hi,

I would like to access parts of my host’s filesystem from multiple LXD containers running on that host. I assumed the best way to do this might be to run an NFS server on the host and mount the exports in the containers.
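
Concretely, I was imagining something like this (the export path, subnet, and mount point are just examples of what I have in mind):

# on the host, in /etc/exports
/srv/shared 192.168.0.0/24(rw,sync,no_subtree_check)

# in each container, after installing the NFS client
mount -t nfs <host-ip>:/srv/shared /mnt/shared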

The problem I have run into is that my containers can’t communicate with the host; they are unable to ping it. The containers are using macvlan successfully, and I attempted to follow http://noyaudolive.net/2012/05/09/lxc-and-macvlan-host-to-guest-connection/ without success. Running sudo ifup macvlan0 fails on the two ‘route del’ commands.

Info:
driver: lxc
driver_version: 3.0.2
kernel: Linux
kernel_architecture: x86_64
kernel_version: 4.4.0-127-generic
server: lxd
server_pid: 22848
server_version: "3.5"
storage: zfs
storage_version: 0.6.5.11-1~trusty
server_clustered: false
server_name: server1

lxc profile show default
config:
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: br1
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: default
used_by:
- /1.0/containers/hass
- /1.0/containers/db1
- /1.0/containers/dvr1
- /1.0/containers/plex

Thanks,
David

Macvlan, whilst simple to get going, can be a pain once you start trying to add routing rules and nested bridges to enable host communication. Another option on the networking side might be to have a bridge interface on the host and have your container use that for its NIC, so it gets direct access to the LAN (if that’s what you want).
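
For example, since your host already has a bridge, switching the profile’s NIC from macvlan to bridged might be all it takes (a sketch based on your posted profile, untested against your setup):

# point the default profile’s eth0 at the existing bridge instead of macvlan
lxc profile device set default eth0 nictype bridged
lxc profile device set default eth0 parent br1
# restart the containers to pick up the change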

Networking aside, if all you want is to access host storage in a container, a simple way is to bind-mount the host directory into the container. Documentation on this is here.

A simple example might be:

lxc config device add MyContainer MyShare disk source=/mnt/usbdrive path=/mnt/usbdrivefromhost

Where:
‘MyContainer’ is the name of the container you want to bind the host directory into
‘MyShare’ is an identifier for the share to reference in configs (a name of your choice)
‘source’ is the path of the directory on the host
‘path’ is where it gets mounted inside the container

This does have its own complexities around file permissions and user-namespace UID/GID mappings, but that’s not something I can confidently talk you through; others have written about it (I found some good guides with a bit of searching). Hope this helps nudge you in the right direction.
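
For the permissions side, one thing that may help (a sketch, assuming an unprivileged container and that your host user is UID/GID 1000) is LXD’s raw.idmap:

# map host UID/GID 1000 to the same IDs inside the container;
# may need matching entries for root in /etc/subuid and /etc/subgid
lxc config set MyContainer raw.idmap "both 1000 1000"
lxc restart MyContainer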

Hi,

Revisiting this problem, since the root issue is now impacting me. (I was able to work around the previous storage-access issues with bind mounts.)

I am able to access LXD containers from my LAN and from other containers (or even KVM VMs running on the host). I am able to access my host from KVM VMs and my LAN. I am unable to access the host from my containers, or the containers from my host.

djwhyte@server1:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br1 state UP group default qlen 1000
    link/ether 00:26:b9:8b:16:17 brd ff:ff:ff:ff:ff:ff
3: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:26:b9:8b:16:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.30/24 brd 192.168.0.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::226:b9ff:fe8b:1617/64 scope link 
       valid_lft forever preferred_lft forever
4: macvlan0@br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 6e:67:c7:0c:4d:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.230/24 brd 192.168.0.255 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c67:c7ff:fe0c:4d7c/64 scope link 
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 3e:65:ec:ef:45:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
13: mac2ef23a72@br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 00:16:3e:68:d4:42 brd ff:ff:ff:ff:ff:ff
31: mac11cd0e68@br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 00:16:3e:1a:a2:d7 brd ff:ff:ff:ff:ff:ff
33: mac345422a8@br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 00:16:3e:1a:a2:d7 brd ff:ff:ff:ff:ff:ff
44: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:0a:fe:7d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe0a:fe7d/64 scope link 
       valid_lft forever preferred_lft forever
46: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:ff:aa:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:feff:aaa9/64 scope link 
       valid_lft forever preferred_lft forever

I personally haven’t configured any iptables rules, but this is what I see:

djwhyte@server1:~$ sudo iptables -L
[sudo] password for djwhyte: 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc

/etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br1
iface br1 inet static
     address 192.168.0.30
     network 192.168.0.0
     netmask 255.255.255.0
     broadcast 192.168.0.255
     gateway 192.168.0.1
     dns-nameserver 192.168.0.1
     bridge_ports em1
     bridge_maxwait 0
     bridge_fd 0

auto macvlan0
iface macvlan0 inet dhcp
    # as br1 and macvlan0 are on the same LAN, we must drop the default route and the LAN route
    # from the br1 configuration to avoid conflicts (they just slow things down).
#    pre-up route del default
    # NOTE: adapt this line to your LAN address and netmask 
#    pre-up route del -net 192.168.0.30 netmask 255.255.255.255
    pre-up ip link add link br1 name macvlan0 type macvlan mode bridge

I wonder if it is related in some way to the macvlan entry in interfaces. I can’t recall what inspired me to configure it like that back when I set up LXD, and I don’t really like that it has its own IP.

Any help is greatly appreciated.

Thanks,
Whytey

This is not exactly a new and original problem; people have complained about this limitation ever since macvlan was created. It seems that you created a macvlan on your host, but you set it up on a bridge you added to the host. I wonder why, since adding the macvlan interface on the physical device (em1, it seems, in your case) is the obvious way to do it. I have used it this way and it works; I never even thought of creating a macvlan your way.
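
In ifupdown terms that means something like this (a minimal sketch, assuming em1 carries the host’s address directly and is not enslaved to a bridge):

auto macvlan0
iface macvlan0 inet dhcp
    # attach the macvlan directly to the physical NIC, not a bridge
    pre-up ip link add link em1 name macvlan0 type macvlan mode bridge
    post-down ip link del dev macvlan0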

You have to replace netplan with ifupdown, of course, since macvlan is still not supported by netplan at this time. Actually, there is a workaround if for some corporate reason you can’t ditch netplan and you can use systemd; netplan and systemd being corporate features, that should not be a problem :slight_smile:
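
With systemd-networkd the workaround looks something like this (a sketch; the file names are arbitrary):

# /etc/systemd/network/25-macvlan0.netdev
[NetDev]
Name=macvlan0
Kind=macvlan

[MACVLAN]
Mode=bridge

# /etc/systemd/network/25-em1.network
[Match]
Name=em1

[Network]
MACVLAN=macvlan0

# /etc/systemd/network/25-macvlan0.network
[Match]
Name=macvlan0

[Network]
DHCP=yes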

Thanks for the reply.

I am a bit confused. I think you’re saying that if I configure things a bit differently, macvlan should work fine. I am not sure if you’re saying I just need to create the macvlan off the physical interface, or that I just need to switch to ifupdown, or both.

I will try and have a look at this when I get home tonight.

Thanks again,
Whytey

Yes, I would try these two changes first to make your config work.
What I think is that, given the latest developments, it may be possible to make netplan work if needed, but I did not check those tricks myself; I used ifupdown with success. What is sure is that netplan still does not work configured straight out of the box.

Thanks again for the reply. As you probably realise, I am fumbling my way through this, so your input is appreciated.

I spent this evening trying to get it working but didn’t progress anywhere. I remember now that the reason I originally configured the br1 interface is that I had it set up for KVM guests to get host ↔ guest networking. Since I have br1 configured (with em1 enslaved to it), I can’t piggyback my macvlan off the em1 device, and I can’t remove br1 until I have moved all my VMs across to LXD, so I had to roll back my changes.

Note: I found the article that I followed when I originally configured my LXD instance some time ago: https://web.archive.org/web/20190628121705/http://noyaudolive.net/2012/05/09/lxc-and-macvlan-host-to-guest-connection/

Do you have any pointers on what I need to do with ifupdown, should I get to that point?

Cheers,
Whytey

Here is an extract of my config on some sort of Ubuntu 18 derivative. It’s possible that something could be done in a better way, but it works.

auto eth0
iface eth0 inet manual

auto macv0
iface macv0 inet manual
   # create the macvlan on top of the physical NIC, not a bridge
   pre-up ip link add link eth0 name macv0 type macvlan mode bridge
   pre-up ip addr add 192.168.20.1/24 broadcast 255.255.255.255 dev macv0
   up ip link set dev macv0 up
   # the || true keeps ifup happy if the route already exists
   post-up ip route add 192.168.20.0/24 dev macv0 || true
   post-up ip route add default via 192.168.20.200 dev macv0
   gateway 192.168.20.200
   dns-nameserver 192.168.20.200
   dns-search localdomain
   post-down ip link del dev macv0
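
To bring it up without a reboot (assuming ifupdown is managing the interface):

sudo ifup macv0              # runs the pre-up/up/post-up steps above
ip addr show macv0           # confirm the address and UP state
ping -c 3 192.168.20.200     # check the gateway is reachable via macv0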

Hi there,

It seems I have it working. I reset my /etc/network/interfaces config back to how I thought I had it before, rebooted, and suddenly noticed that my host was sending monitoring data to my Zabbix container (which it couldn’t do before).

The good config:

# The primary network interface
auto br1
iface br1 inet static
     address 192.168.0.30
     network 192.168.0.0
     netmask 255.255.255.0
     broadcast 192.168.0.255
     gateway 192.168.0.1
     dns-nameserver 192.168.0.1
     bridge_ports em1
     bridge_maxwait 0
     bridge_fd 0

auto macvlan0
iface macvlan0 inet static
     address 192.168.0.31
     netmask 255.255.255.0
     # as br1 and macvlan0 are on the same LAN, we must drop the default route and the LAN route
     # from the br1 configuration to avoid conflicts (they just slow things down).
     pre-up route del default
     # NOTE: adapt this line to your LAN address and netmask 
     pre-up route del -net 192.168.0.0 netmask 255.255.255.0
     pre-up ip link add link br1 name macvlan0 type macvlan mode bridge
     gateway 192.168.0.1

The good routing table:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         mygateway.modem 0.0.0.0         UG    0      0        0 macvlan0
192.168.0.0     *               255.255.255.0   U     0      0        0 macvlan0
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0

I do have some issues that I can improve upon: I can’t use the ifdown command on br1, since it tries to tear down routes that no longer exist, but I can work on that some other time.
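
One thing I may try is ifdown’s --force flag, which I believe pushes through errors like those missing routes (untested on my side):

sudo ifdown --force br1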

Thanks for all of your help,
Whytey