Best practices for assigning static IP addresses?

I run a pretty much out-of-the-box default LXD setup.

I quite literally accepted almost all of the defaults during the lxd init phase.

That means, among other things, that networking is managed by LXD + dnsmasq.

This is quite useful in my case but I also need most of my containers to have static IP addresses.

Apparently, there are several ways to do it and I’m wondering if any particular method is preferred or advised as a best practice.

For example, I tried installing ifupdown in one of the containers and configured its network interfaces with static IP addresses as well as static routes.

I also attached an eth0 device to another container and assigned an IP address to it, and that worked as well. Networking inside that container is basically configured via the default netplan configuration, which requests all parameters from a DHCP server. So my understanding is that the ipv4.address assigned to the attached eth0 device acts kind of like a static DHCP lease (please correct me if I’m wrong).

There’s also a macvlan approach, which I haven’t tried yet.

So, is any particular method better or preferred for some reason?

As far as I can see, we are free to mix and match as we find convenient. But is it a good idea? What is regarded as the best practice?

Again, for me having LXD manage container networking is beneficial as it allows for quick and easy creation of containers that are ready for use and can reach the Internet.

If I need any container to have a static IP address without having to configure anything inside the container, I can do this via lxc network attach + lxc config device commands. This is also scriptable, which allows for easy, rapid deployment at scale using some sort of orchestration tool.
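For instance, a rough sketch of that kind of scripted assignment (container names and addresses here are made up, and the subnet must match lxdbr0’s):

# assign a fixed address to a batch of existing containers
for i in 10 11 12; do
    lxc stop "app${i}"
    lxc network attach lxdbr0 "app${i}" eth0 eth0
    lxc config device set "app${i}" eth0 ipv4.address "10.99.10.${i}"
    lxc start "app${i}"
done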

Then, I found that configuring more complex networking (several IP addresses, static routes, SNAT/DNAT iptables rules, etc.) is best done inside the container using ifupdown.
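To give an idea, a stanza along these lines in a container’s /etc/network/interfaces (all addresses and routes here are purely illustrative):

auto eth0
iface eth0 inet static
    address 10.99.10.50
    netmask 255.255.255.0
    gateway 10.99.10.1
    # second address on the same interface
    up ip addr add 10.99.10.51/24 dev eth0
    # static route via another router on the segment
    up ip route add 192.168.50.0/24 via 10.99.10.254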

So, what I’m saying is that in my particular LXD setup all three methods are quite useful.

The question is, is it an optimal approach or is there even a better/more elegant way?

Hi!

First of all, I think it is important to figure out if you really need static IP addresses.
It would help if you can give some use-cases.

Having said that, when you launch a container with managed networking (LXD manages the networking; default settings), the container gets a random but fixed MAC address.
This MAC address is used by LXD’s dnsmasq for the DHCP lease, so as long as your container keeps the same MAC address, it gets the same lease. That is how it currently works in practice, though it might change in the future.

You can, however, instruct LXD’s dnsmasq to hand out a fixed IP address to a specific container with the following commands. This way, you can be 100% sure that the IP address allocation stays static.

lxc stop c1
lxc network attach lxdbr0 c1 eth0 eth0
lxc config device set c1 eth0 ipv4.address 10.99.10.42
lxc start c1
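You can then verify the assignment: lxc config device show prints the device override, and lxc list shows the address once the container has picked up the new lease (the stop/start above takes care of that).

lxc config device show c1
lxc list c1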

Within the containers, you have DNS to access each container: each container has a hostname that is resolvable by all other containers. If you want your host to see these container hostnames as well (to be able to run ping mycontainer.lxd from the host), you need to configure the host to consult LXD’s dnsmasq for DNS. I have a blog post on this.
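For example, on a host running systemd-resolved, a minimal sketch would be something like the following (assuming lxdbr0’s address is 10.99.10.1; check with lxc network get lxdbr0 ipv4.address, minus the subnet suffix). Note that this does not persist across reboots by itself:

resolvectl dns lxdbr0 10.99.10.1
resolvectl domain lxdbr0 '~lxd'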

Everything up to here is what you get with managed networking. There are other options if you do not use managed networking. In fact, you can even use managed networking for a container but have the container ignore LXD’s dnsmasq (not so elegant).

Alternatively, you can expose your containers to your LAN and have them get IP addresses from your LAN’s DHCP server (i.e. your router), or set their IP addresses statically. There are two ways to do this: bridging and macvlan.
Again, this exposes your containers to the LAN, so you should consider whether your use-case is OK with that.
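For example, switching a container over to macvlan can be as simple as something like this (eth0 as the parent is illustrative; use your host’s LAN interface). Keep in mind that with macvlan the host itself typically cannot reach the containers over that same interface:

lxc config device add c2 eth0 nic nictype=macvlan parent=eth0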

Yes, my understanding is that it instructs LXD’s dnsmasq to set a static IP. You might want to verify whether LXD binds the IP address to the container’s MAC address or to the container’s name. Probably the former.

Suppose you have a container mycontainer and you want it to get a specific static IP address.
Suppose you then delete mycontainer, create a new mycontainer and you want it to get the same DHCP lease (same IP address). If you want to do that, then you would need to set the MAC address of the newly created mycontainer to the old MAC address. There was a recent discussion on this.
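A sketch of what that might look like (the MAC address here is illustrative; read it from the old container with lxc config get mycontainer volatile.eth0.hwaddr before deleting it, and set it on the new container while it is stopped):

lxc init ubuntu:18.04 mycontainer
lxc config set mycontainer volatile.eth0.hwaddr 00:16:3e:aa:bb:cc
lxc start mycontainer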


Hi, Simos!

Thank you for the detailed reply.

After I made my post yesterday, I realized that I may run into a problem when assigning static IP addresses in /etc/network/interfaces (using ifupdown) alone. LXD/dnsmasq is most likely unaware of which IP addresses have already been assigned manually inside a container, so chances are dnsmasq may try to assign the same IP address to another container that sends a DHCP request for a lease. Is my reasoning correct?

As for my use case, there are three major types of containers:

  • Throwaway/temporary containers. Containers used for quick tests, with a high rate of turnover. These benefit from DHCP leases the most. It is very convenient to quickly create a container that already has working networking and can reach the Internet.
  • A large group of containers that require static IP addresses. Containers that will live for a significant amount of time (e.g. years). It doesn’t matter whether the IP addresses are assigned by LXD/dnsmasq or by configuring ifupdown or netplan. In fact, it’s both preferred and more convenient to have IP addresses assigned with lxc commands, as that allows for easier orchestration of such containers (e.g. creating a container via an Ansible playbook).
  • A smaller group of containers that require more advanced networking configuration, e.g. 2+ IP addresses on one interface with a specific netmask. Here, configuring ifupdown manually is better than relying on dnsmasq. These are much like the previous group: they will be around for a long time, they just have a more complex networking configuration.

It seems that to ensure no IP address conflicts happen, I would need to assign all static IP addresses with lxc commands and have LXD/dnsmasq manage them. However, the smaller group of containers with more complex networking requirements needs to be configured manually, while making sure the IP address assigned to their main network interface is never accidentally handed out by LXD/dnsmasq to containers from the other groups described above.

As I understand it, I could assign a static IP address to such a container both with lxc commands and by configuring ifupdown inside the container manually. This would ensure that LXD/dnsmasq never assigns the container’s IP address to any other container. However, it’s duplicate effort. I wonder if there is a better way?

Yes, you could get a duplicate IP issue.

You have the lxdbr0 network, but you can create more managed networks in order to avoid conflicts.

First, I create lxdbr1. It will get a different random 10.x.y.[1-254] IP range.

lxc network create lxdbr1

Then, I create a profile that uses this network.

lxc profile copy default defaultnet1
lxc profile edit defaultnet1          # in the profile, replace `lxdbr0` with `lxdbr1`.
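If you would rather script this than edit interactively, something like the following should also work, assuming the profile’s eth0 device points at the bridge through a parent key (depending on how the profile was created, it may use a network key instead, in which case set that one):

lxc profile device set defaultnet1 eth0 parent lxdbr1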

Now you can launch a new container based on lxdbr1.

lxc launch ubuntu:18.04 mycontainer --profile defaultnet1

It’s up to you to create additional LXD networks and configure them so that you assign IPs statically.
By assigning the appropriate profile, you can separate the containers according to their role (temporary, long-term, advanced).


I’m going to bump this thread, as it gives a good introduction to the most common situation for me (and my usage of LXD).

Nota bene: I’m heavily influenced by my past OpenVZ usage, and my expectations may be biased towards the way they organized things.

I also need containers to have static IP addresses, as most of them are supposed to run for a long time, not to be created and deleted quickly.
A fairly standard setup for me (one I’ve used with LXC, but not LXD) was to have:

  • servers with a physical interface in the default VLAN
  • internal networking on a dedicated VLAN, say 4000, so the device name becomes eth0.4000; 192.168.0.0/22 for example
  • apps running on that internal VLAN, for example:
    • DB master listening on 192.168.0.10 on server1
    • DB slave replicating from 192.168.0.10, listening on 192.168.0.11 on server2

Extending this to containers (LXD in particular), say I want to move the databases inside containers, I’d like to do it this way:

  • create a bridge (I do this manually for now), lxbrtest1, with eth0.4000 attached (a sketch of these host-side steps follows this list)
  • remove the IP from the physical interface
  • assign 192.168.0.10/22 to lxbrtest1
  • repeat across the server fleet
  • run lxd init and specify lxbrtest1 as the bridge to use (default profile)
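For reference, the manual host-side bridge setup sketched above could look roughly like this with iproute2 (interface names and addresses as in the example):

ip link add link eth0 name eth0.4000 type vlan id 4000
ip link add lxbrtest1 type bridge
ip link set eth0.4000 master lxbrtest1
ip link set eth0.4000 up
ip addr add 192.168.0.10/22 dev lxbrtest1
ip link set lxbrtest1 up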

In short, all servers in that VLAN are reachable through ARP resolution, and I would like to have the same for the containers.

Where I’m stuck now:

  • creating containers and being able to assign static IPs (with the correct netmask/gateway).
    Assigning an IP this way, as mentioned earlier in the thread, actually works via DHCP leases. So it basically does nothing in my case, as there is no DHCP managed by LXD.
    • In the OpenVZ world, they execute helper scripts inside the containers to configure the network for you, taking into account the distro-specific ways of doing it. So you, as the end user, don’t need to do any manual work inside the containers (VEs, Virtual Environments).

With LXD, I don’t see any good way (cloud-init, maybe?) to achieve such simplicity. Assuming it’s quite a widespread case (I guess Proxmox uses containers in such a way), there should be a more or less simple way to achieve this without needing:

  • to bring up your own DHCP server, and/or
  • to make any manual changes inside the containers

It would be nice to hear your opinions and experiences here, @simos , @iliv

If you’re not using DHCP (with static assignments for MAC addresses), then cloud-init is the recommended approach for configuring static IPs from LXD’s instance config. This way, it works in tandem with each distro’s own network setup system.
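A rough sketch of what that could look like (the config key is cloud-init.network-config on recent LXD releases and user.network-config on older ones; the addresses follow the earlier example, and the config is only applied on the instance’s first boot):

lxc init ubuntu:18.04 c1
cat <<EOF | lxc config set c1 user.network-config -
version: 2
ethernets:
  eth0:
    addresses: [192.168.0.10/22]
    gateway4: 192.168.0.1
    nameservers:
      addresses: [192.168.0.1]
EOF
lxc start c1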

Thanks, @tomp .
My side note: making DHCP highly available is not impossible, but it requires a bit more work for low-end “clusters” of a couple of servers. For such situations, I think the approach OpenVZ chose is better for small setups: zero administration effort.

I started out with LXD’s private networking with NAT on top (the default), but quickly found that:

  • I needed to add routes on the servers for those private nets to be reachable across the server fleet
  • it was NATted, which was bad for ACLs (say container1 with 10.10.10.5 on node1 (192.168.0.10) reaches container2 with 10.10.20.20 on node2 (192.168.0.11); on c2, the source IP of c1 would then appear as node1’s address, 192.168.0.10)

As a long-time former OpenVZ user myself, I am familiar with the venet-style network approach it used (with boot-time templates that rewrote the config inside each container).

The nearest we have in LXD is the routed NIC type (https://linuxcontainers.org/lxd/docs/master/instances#nic-routed), as this will preconfigure a NIC with the IP addresses defined in ipv{n}.address, set up the required default route, and then move the NIC into the container. This uses the same proxy ARP and static-routes-on-the-host approach as OpenVZ’s venet does.
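For instance, something along these lines (addresses taken from the example above; parent is the host interface that should answer proxy ARP for the container’s address):

lxc stop c1
lxc config device add c1 eth0 nic nictype=routed parent=eth0.4000 ipv4.address=192.168.0.10
lxc start c1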

However, in order for this to work properly, you need to ensure that the DHCP client and any other network configuration are disabled inside the container, so that they don’t wipe out the settings pre-configured on the NIC.

You would also still need to configure DNS manually.

See How to get LXD containers get IP from the LAN with routed network