A little help - setting up LXD/LXC for my testing network

Here are the summary details of my network:

  1. Netgear router, wireless capable, connected to the ISP via PPPoE with a dynamic IP; IPv4 and IPv6 are supported.
  2. The lab network has no DHCP server; everything is configured manually. The network is 192.168.0.0/24.
  3. I have an internal DNS setup: primary = 192.168.0.101, secondary = 192.168.0.102. Forward and reverse lookups resolve fine internally.
  4. I have a small server (16 GB RAM, 1 CPU with 4 cores). This server, 192.168.0.2, is running Debian 9.3 amd64, currently with LXD 2.21 installed using snap (thanks stgraber for one of those Canonical seminars).
  5. I have a couple of systems on the network already running different things: chef-server = 192.168.0.111, chefdk on an Ubuntu laptop = 192.168.0.3, debian = 192.168.0.4, solus = 192.168.0.5, etc. All these machines are on the same network; they can ping/ssh each other by name thanks to the internal DNS server.

Goal:

  1. I want to use the Debian server 192.168.0.2 as my main LXD/LXC host, or server (I guess they mean the same thing; correct me if I am wrong).
  2. I want the containers to have manually assigned IP addresses, as if they were ordinary machines like the ones described above, e.g. container1 = 192.168.0.41, container2 = 192.168.0.42, container3 = 192.168.0.43, and so on.
  3. I want the containers to be accessible from the Internet, and to access the Internet themselves, just like the non-container systems mentioned above. Currently, if I want access to one of the non-container systems, I add a port address translation rule on the Netgear router: e.g. to reach “nomachine” on the Ubuntu system (192.168.0.3), I add a NAT/PAT rule so that ssh to the public address on port 12345 goes internally to 192.168.0.3 on port 12345 (see the sketch after this list for what such a rule amounts to). I hope you understand what I mean; I would like to be able to do the same for the containers running on the Debian server 192.168.0.2.
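For reference, the router rule described in goal 3 amounts to a destination NAT. A minimal sketch of the equivalent rule on a Linux gateway, assuming a hypothetical WAN interface ppp0 (the actual rule lives in the Netgear GUI, not on any of these machines):

# Hypothetical DNAT equivalent of the Netgear NAT/PAT rule: traffic arriving
# on the WAN side at port 12345 is forwarded to 192.168.0.3:12345 on the LAN.
iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 12345 \
    -j DNAT --to-destination 192.168.0.3:12345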

Here is what I chose during the initial setup of LXD:

root@server1:~# lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? 
Name of the new storage pool [default=default]: cstorage1                      
Name of the storage backend to use (dir, btrfs, ceph, lvm) [default=btrfs]: 
Create a new BTRFS pool (yes/no) [default=yes]? 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=71GB]: 100 
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Again: 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? no
LXD has been successfully configured.

root@server1:~# lxc network list
+---------+----------+---------+-------------+---------+
|  NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+---------+----------+---------+-------------+---------+
| enp0s25 | physical | NO      |             | 0       |
+---------+----------+---------+-------------+---------+

As you can see, I ended up with no bridge. I am not sure whether I can tell LXD/LXC to use the device enp0s25 as the bridge; this is where I am stuck.

Currently, the network settings look like this:

root@server1:~# lxc network show enp0s25 
config: {}
description: ""
name: enp0s25
type: physical
used_by: []
managed: false

Physical interfaces on the Debian server:

root@server1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f0:de:f1:0b:b2:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global enp0s25
       valid_lft forever preferred_lft forever
    inet6 2405:6580:2cc0:100:c061:74fc:6474:781c/64 scope global temporary dynamic 
       valid_lft 585120sec preferred_lft 66573sec
    inet6 2405:6580:2cc0:100:c832:d98d:ebbf:7fd4/64 scope global temporary deprecated dynamic 
       valid_lft 498895sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:10a9:126:2045:f8b4/64 scope global temporary deprecated dynamic 
       valid_lft 412671sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:fd79:3462:81fa:a233/64 scope global temporary deprecated dynamic 
       valid_lft 326448sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:ad90:d5bd:10ba:4a42/64 scope global temporary deprecated dynamic 
       valid_lft 240223sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:f13b:9a2f:3ba1:caa4/64 scope global temporary deprecated dynamic 
       valid_lft 153998sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:e4d1:b80e:dd23:9c36/64 scope global temporary deprecated dynamic 
       valid_lft 67773sec preferred_lft 0sec
    inet6 2405:6580:2cc0:100:f2de:f1ff:fe0b:b211/64 scope global mngtmpaddr noprefixroute dynamic 
       valid_lft 2591927sec preferred_lft 604727sec
    inet6 fe80::f2de:f1ff:fe0b:b211/64 scope link 
       valid_lft forever preferred_lft forever
3: vboxnet0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.1/24 brd 192.168.56.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever

Storage:

root@server1:~# lxc storage list
+-----------+-------------+--------+----------------------------------------------+---------+
|   NAME    | DESCRIPTION | DRIVER |                    SOURCE                    | USED BY |
+-----------+-------------+--------+----------------------------------------------+---------+
| cstorage1 |             | btrfs  | /var/snap/lxd/common/lxd/disks/cstorage1.img | 1       |
+-----------+-------------+--------+----------------------------------------------+---------+

NOTE: I believe the default storage backend used to be ZFS; I guess that changed to btrfs. Also, I am running VirtualBox on the same server, but I guess that is not an issue; I am actually thinking of replacing it with LXD/LXC.

Thank you in advance for your help.

Hi!

  1. In LXD terminology, you have the host and the many containers inside that host.
    I suppose that, overall, the computer running LXD is a server computer.

  2. You can expose LXD containers to the LAN by using either a bridge or macvlan.
    I have written a tutorial for each; see the index at The LXD tutorials of Simos.
    You do not use DHCP, therefore you would need to assign the IP addresses manually.

  3. The containers will be able to access the Internet.
    To make them accessible from the Internet, you would need to set up port forwarding as you normally do.
    What I have not tried is whether the router is OK with an Ethernet port carrying more than one IP address.
    Over WiFi it is not possible, because the router expects a single IP address coming from a secure WiFi connection.
    That is, the same Ethernet port that carries 192.168.0.2 would also carry the addresses of all the containers.
    It is a simple test: try a port forwarding rule to the given IP address of a container.
    Most likely it will work.


Hello there and thank you so much for your prompt response.

I will check out the tutorials you mentioned, and let you know if I was able to achieve my goal.

Regarding the WiFi test, I am not sure I understand what you want me to do. I use the wireless function on the Netgear router only for devices like mobile phones, the Google Home Mini, and the Chromecast, not for the machines or laptop PCs.

Again, thanks a lot; I will report the results soon.

Sincerely,

I mentioned the WiFi in the sense that if you had the LXD server/host connected over WiFi (not an Ethernet cable), then you would not be able to have multiple containers getting separate LAN IP addresses. The reason is that a WiFi router expects only a single IP address coming from each client. In your case, the LXD server/host is connected over Ethernet, therefore it is OK.


Hello, simos:

Understood, apologies for the confusion, and thank you for confirming.

Regarding the tutorials, I believe that in both of them you need to modify the configuration on the server. For example:

Here is the current configuration of the Debian server that will be used as the LXD/LXC server.

root@server1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f0:de:f1:0b:b2:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global enp0s25
       valid_lft forever preferred_lft forever
    inet6 2405:6580:2cc0:100:c061:74fc:6474:781c/64 scope global temporary dynamic 
       valid_lft 598640sec preferred_lft 80093sec
    inet6 2405:6580:2cc0:100:f2de:f1ff:fe0b:b211/64 scope global mngtmpaddr noprefixroute dynamic 
       valid_lft 2591920sec preferred_lft 604720sec
    inet6 fe80::f2de:f1ff:fe0b:b211/64 scope link 
       valid_lft forever preferred_lft forever
3: vboxnet0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.1/24 brd 192.168.56.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever


root@server1:~# nmcli
enp0s25: connected to Wired connection 1
	"Intel 82577LM Gigabit"
	ethernet (e1000e), F0:DE:F1:0B:B2:11, hw, mtu 1500
	ip4 default, ip6 default
	inet4 192.168.0.2/24
	route4 169.254.0.0/16
	inet6 2405:6580:2cc0:100:c061:74fc:6474:781c/64
	inet6 2405:6580:2cc0:100:f2de:f1ff:fe0b:b211/64
	inet6 fe80::f2de:f1ff:fe0b:b211/64
	route6 2405:6580:2cc0:100::/64

vboxnet0: unmanaged
	ethernet (vboxnet), 0A:00:27:00:00:00, hw, mtu 1500

lo: unmanaged
	loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
servers: 192.168.0.101 192.168.0.102 8.8.8.8
interface: enp0s25

servers: 2404:1a8:7f01:b::3 2404:1a8:7f01:a::3
interface: enp0s25

I am currently using the device enp0s25 as the physical device for the VirtualBox bridge network. Is it possible to configure a bridge that will allow me to expose the containers without removing this device?

Thank you in advance for the help again!

If you go with the bridge, then you need to make changes to the networking configuration of the host.
In my blog post, I mention how to do this using NetworkManager. In fact, I provide a link to another website that shows how to do it through the NetworkManager GUI.
You can also do the bridge configuration on the command line; however, that is rather more involved.
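For illustration, a minimal command-line sketch with nmcli (assuming NetworkManager manages enp0s25, as in your nmcli output; the connection names are made up, and the addresses are the ones from this thread):

# Create a bridge connection and move the host's static address onto it.
nmcli connection add type bridge ifname br0 con-name br0 \
    ipv4.method manual ipv4.addresses 192.168.0.2/24 \
    ipv4.gateway 192.168.0.1 ipv4.dns "192.168.0.101 192.168.0.102"
# Enslave the physical NIC to the bridge.
nmcli connection add type bridge-slave ifname enp0s25 master br0
# Activate the bridge (this replaces the old "Wired connection 1" profile).
nmcli connection up br0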

Alternatively, if you use macvlan, you do not need to make changes to the host; LXD does that for you.
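Roughly, the macvlan route from the tutorials looks like this (a sketch; lanprofile is just an example name, and this assumes the copied profile already has an eth0 nic device — otherwise add one with lxc profile device add):

# Copy the default profile, then point its eth0 NIC at the physical
# interface in macvlan mode.
lxc profile copy default lanprofile
lxc profile device set lanprofile eth0 nictype macvlan
lxc profile device set lanprofile eth0 parent enp0s25
# Launch a container with the new profile:
lxc launch ubuntu:16.04 c1 --profile lanprofile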

1 Like

Hello, simos,

Thank you so much for the great help on this small project I am working on. Following your tutorials, I tried configuring macvlan first, as it was the simplest to test:

root@server1:~# lxc network list
+---------+----------+---------+-------------+---------+
|  NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+---------+----------+---------+-------------+---------+
| enp0s25 | physical | NO      |             | 2       |
+---------+----------+---------+-------------+---------+


root@server1:~# lxc profile list
+------------+---------+
|    NAME    | USED BY |
+------------+---------+
| default    | 0       |
+------------+---------+
| lanprofile | 2       |
+------------+---------+


root@server1:~# lxc profile show lanprofile 
config: {}
description: New Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: enp0s25
    type: nic
  root:
    path: /
    pool: cstorage1
    type: disk
name: lanprofile
used_by:
- /1.0/containers/c1
- /1.0/containers/c2


root@server1:~# lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | STOPPED |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
| c2   | STOPPED |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+

The reason this one will not meet my requirements is that the containers cannot communicate with the other clients/servers (e.g. debian = 192.168.0.4) nor with the host (192.168.0.2) on the 192.168.0.0/24 network. The containers can communicate with each other:

root@server1:~# lxc list
+------+---------+---------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |        IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 192.168.0.23 (eth0) | 2405:6580:2cc0:100:216:3eff:fe74:7fc1 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+----------------------------------------------+------------+-----------+
| c2   | RUNNING | 192.168.0.24 (eth0) | 2405:6580:2cc0:100:216:3eff:fe84:513a (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+----------------------------------------------+------------+-----------+


jair@c1:~$ ping 192.168.0.24
PING 192.168.0.24 (192.168.0.24) 56(84) bytes of data.
64 bytes from 192.168.0.24: icmp_seq=1 ttl=64 time=0.150 ms
64 bytes from 192.168.0.24: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 192.168.0.24: icmp_seq=3 ttl=64 time=0.057 ms

The containers cannot ping the host:

jair@c1:~$ ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
From 192.168.0.23 icmp_seq=1 Destination Host Unreachable
From 192.168.0.23 icmp_seq=2 Destination Host Unreachable
From 192.168.0.23 icmp_seq=3 Destination Host Unreachable

The containers cannot ping the other clients in the network:

jair@c1:~$ ping 192.168.0.4
PING 192.168.0.4 (192.168.0.4) 56(84) bytes of data.
From 192.168.0.23 icmp_seq=1 Destination Host Unreachable
From 192.168.0.23 icmp_seq=2 Destination Host Unreachable
From 192.168.0.23 icmp_seq=3 Destination Host Unreachable

BUT here is the very strange result: we can ping the IP 192.168.0.3, which is running the chefdk. Perhaps that is why?

jair@c1:~$ ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.655 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.344 ms
64 bytes from 192.168.0.3: icmp_seq=3 ttl=64 time=0.204 ms

NOTE: 192.168.0.3 is a simple laptop with Ubuntu 16.04 running the chefdk.

My next test will be to bring a laptop to my lab, install Debian Linux, install VirtualBox and LXC, and implement the LXC bridge solution, then see if I can use the br0 interface for VirtualBox…

I will keep you posted, and then perhaps we will be able to close this topic.

Thank you again for all your support.

Sincerely,

Hi!

It is a known issue with macvlan that the containers cannot communicate over the network with the host.
I think there are some networking settings that can make the containers communicate with the host, though I have not seen clear instructions on how to do that; I would not put much effort in that direction.
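For reference, the workaround usually cited is to give the host its own macvlan interface next to the containers, so that host-to-container traffic has a layer-2 path. A sketch, untested here, with 192.168.0.200 as a hypothetical spare address:

# Add a macvlan interface on the host, attached to the same parent NIC
# as the containers' macvlan interfaces.
ip link add macvlan0 link enp0s25 type macvlan mode bridge
ip addr add 192.168.0.200/24 dev macvlan0
ip link set macvlan0 up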

Because of this known issue of macvlan not letting the host communicate with the containers, you may have to use the bridge instead.

It is weird that the container c1 cannot ping the other systems on the LAN (such as 192.168.0.4), yet it can ping 192.168.0.3.
Connect that laptop (192.168.0.3) to the network with an Ethernet cable and try again.
There are some issues with WiFi, therefore by trying this you will get an idea of whether WiFi makes a difference.

Hello simos,

Thank you for getting back, I have some updates:

  1. Configured the bridge (sorry, but I thought it was simpler to not install NetworkManager; see the note on bridge-utils after this list):

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    source /etc/network/interfaces.d/*

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto br0
    iface br0 inet static
        address 192.168.0.201/24
        gateway 192.168.0.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.0.101 192.168.0.102 8.8.8.8
        dns-search plumswap.com
        # bridge options
        bridge_ports eno1

    # auto eno1
    iface eno1 inet manual
    
  2. Configured the new profile:

    lxc profile list
    +---------------+---------+
    |     NAME      | USED BY |
    +---------------+---------+
    | bridgeprofile | 3       |
    +---------------+---------+
    | default       | 3       |
    +---------------+---------+
    
  3. Configured the profile to use the bridge interface br0:

    lxc profile show bridgeprofile
    config: {}
    description: Bridged Networking LXD profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
    name: bridgeprofile
    used_by:
    - /1.0/containers/c1
    - /1.0/containers/c2
    - /1.0/containers/c3
    
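A note on the bridge configuration in step 1: the bridge_ports option is handled by the bridge-utils package, which is not necessarily installed by default on Debian (an assumption; the thread does not show it being installed). A sketch of installing it and applying the new configuration:

# bridge_ports in /etc/network/interfaces is handled by bridge-utils
apt-get install bridge-utils
# Apply the new configuration; best done from the console rather than over
# SSH, since the primary interface is being reconfigured.
systemctl restart networking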

NOTE: Is it possible to create a container with the “launch” command while assigning, ahead of time, the IP address that it will use?

For example:

+------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 192.168.0.202 (eth0) | 2405:6580:2cc0:100:216:3eff:fe12:90d3 (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c2   | RUNNING | 192.168.0.203 (eth0) | 2405:6580:2cc0:100:216:3eff:fee2:edfd (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c3   | RUNNING | 192.168.0.25 (eth0)  | 2405:6580:2cc0:100:216:3eff:fefb:3f3a (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+

For the c3 container (192.168.0.25 above), the IP address was assigned by my DHCP server; after the deployment completed, I went into the interface configuration and made it static.

Default configuration after the containers are deployed:

cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

Then I manually change it to:
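(The exact file is not shown here; a sketch of what the static configuration inside c3 would look like, with the gateway and DNS values assumed from earlier in the thread:)

auto eth0
iface eth0 inet static
    # static address for c3; gateway/DNS assumed from this thread's LAN
    address 192.168.0.204/24
    gateway 192.168.0.1
    dns-nameservers 192.168.0.101 192.168.0.102

After restarting networking inside the container, the containers show: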

+------+---------+----------------------+----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c1   | RUNNING | 192.168.0.202 (eth0) | 2405:6580:2cc0:100:216:3eff:fe12:90d3 (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c2   | RUNNING | 192.168.0.203 (eth0) | 2405:6580:2cc0:100:216:3eff:fee2:edfd (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+
| c3   | RUNNING | 192.168.0.204 (eth0) | 2405:6580:2cc0:100:216:3eff:fefb:3f3a (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+----------------------------------------------+------------+-----------+

It would be great to have a command that tells the container to use a specific static IP address.

Now I can ping the host, the containers, and the rest of the devices on my 192.168.0.0/24 network.

Thank you very much for all the help. All I need to do now is install VirtualBox on the same machine and use br0 as the interface for its bridged network; I am sure it will work.

Again thank you for all the help.

Sincerely,

(I did some editing above to beautify the markup. If I made a mistake with the formatting, please edit accordingly).

lxc launch takes a container image, initializes a container from the image, and finally starts it.

What you need is to perform just the initialization, without starting, so that you can apply additional settings to the not-yet-started (stopped) container.

Therefore, use lxc init with the parameters you need to create the container, and then, when you are ready, lxc start it.

lxc launch is like lxc init and lxc start combined in a single step.
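For example, a sketch of the whole flow (the container name, image, and pushed file are examples, not from the thread above):

# Initialize but do not start, so the container can be configured first.
lxc init ubuntu:16.04 c4 --profile default --profile bridgeprofile
# Push a prepared static /etc/network/interfaces into the container
# (this assumes the ifupdown setup shown earlier in the thread).
lxc file push interfaces c4/etc/network/interfaces
# Now start it with the static address already in place.
lxc start c4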