[LXD] API to extract IPAM information

Project LXD
Status Draft
Author(s) @gabrielmougard
Approver(s) @stgraber
Release 5.16

Abstract

This document aims to describe the design and introduction of a new REST API endpoint /1.0/network-allocations. The idea is to allow users to retrieve and parse IPAM information easily, specifically about the addresses consumed by a LXD deployment.

All of these addresses will be reported as CIDR subnets (aggregating the data per individual IP address would be inefficient and produce a lot of useless data, as most of the addresses in a range would see no usage).

Rationale

LXD deployments consume addresses for a variety of purposes, but there’s no simple way for external systems like Netbox to identify and track the usage of these addresses. Although the current system already has logic for BGP and DNS exposure and lease APIs for networks, an API to aggregate this information is missing. Introducing a unified endpoint will eliminate the need for external systems to aggregate the information manually, improving integration efficiency and overall user experience.

Specification

Design

The new REST API endpoint will leverage the existing logic for BGP and DNS exposure and the lease API on the networks, and encapsulate the required information in an easily retrievable and parsable format. The endpoint will return a list of JSON structs, each representing an address in use. Each address record will have a used_by field indicating the consumer (instance, network forward, load balancer, network, etc.) and a type field indicating the nature of the usage, that is, the purpose or role for which the address is consumed within the LXD deployment: whether it is exposed externally, used for forwarding, used by a load balancer, or used for other network-related tasks. This is important information to expose, as it provides context about why a particular address is being consumed and helps users understand and manage their address consumption. For instance, knowing how many addresses are used by load balancers, or how many are used for forwarding, can help with capacity planning, troubleshooting network issues, or network optimization.

API changes

A new REST API endpoint /1.0/network-allocations will be introduced. The endpoint will respond with a list of JSON structs representing the addresses in use. Each struct will have the following structure:

{
	"address": "10.6.105.1/24", // e.g. `10.6.105.1/24` or `fd42:3cce:990:a1fd::1/64`
	"used_by": "/1.0/networks/lxdbr0", // the LXD resource URI of the consumer
	"type": "network", // the type of the entity: `network`, `network-forward`, `network-load-balancer` or `instance`
	"nat": false, // whether the address is NATed
	"hwaddr": "" // only the `instance` type carries a hardware address
}

CLI changes

To interact with the new REST API endpoint, new CLI commands will be needed. An example of such a command might be: lxc network list-allocations. This command can take the --project <PROJECT> flag to get the network allocations of a given project, or the --all-projects flag to get the network allocations from all available projects. Here is a simple example on the default project:

$ lxc network list-allocations
+----------------------+-------------------------------------------+----------+------+-------------------+
|       USED BY        |                  ADDRESS                  |   TYPE   | NAT  | HARDWARE ADDRESS  |
+----------------------+-------------------------------------------+----------+------+-------------------+
| /1.0/networks/lxdbr0 | 10.6.105.1/24                             | network  | true |                   |
+----------------------+-------------------------------------------+----------+------+-------------------+
| /1.0/networks/lxdbr0 | fd42:3cce:990:a1fd::1/64                  | network  | true |                   |
+----------------------+-------------------------------------------+----------+------+-------------------+
| /1.0/instances/u1    | fd42:3cce:990:a1fd:216:3eff:fe04:f095/128 | instance | true | 00:16:3e:04:f0:95 |
+----------------------+-------------------------------------------+----------+------+-------------------+
| /1.0/instances/u1    | 10.6.105.160/32                           | instance | true | 00:16:3e:04:f0:95 |
+----------------------+-------------------------------------------+----------+------+-------------------+

Database changes

No database changes required

Further information

The proposed design was chosen because it leverages the existing logic and allows for easy, efficient integration with external systems. Alternatives might include a more fragmented approach, with different API endpoints for different types of addresses. However, that would require external systems to aggregate the information themselves, contrary to the objective of this work. More details on the design, the API, and the data schema will be provided as the implementation progresses.

This sentence didn’t read well for me; can you clarify what you mean, please?

These are usually URLs, so they include the type and the project, so probably no need for a separate type field.

“Nature of usage” may mean something else, though, so it could do with clarifying here?

Link here would be good.

This bit is duplicated in the Design section, lets make the abstract shorter and keep the nitty gritty details for the Design.

consumer_entity and usage_type will require explanation and definition I think.

We would want to define the specific CLI command that will be added (if indeed we are adding any at all) here.

I think we can just say “No database changes required” here.

Is there any need for this endpoint to support project and if so, then all-projects query parameters, so that only network addresses from certain projects are considered?

@tomp The draft should be a bit more clear now.

Regarding the CLI and your question about the endpoint supporting projects, I must admit that I don’t know yet. But if we decide to support filtering by project, here is a CLI proposal:

lxc network list-addresses <the_consuming_entity_name> [ --instance | --network | --network-forward | --load-balancer] [--project=<project_name>]

|-----------|
| Addresses |
|-----------|
| ...       |
| ...       |

The binary flags at the end would act as a filter on the entity name. Or it could also be something like --type=instance,network,.... What do you think?
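The --type=instance,network,... variant discussed above could be implemented as a simple client-side filter over the allocation list. A minimal sketch, where the Allocation struct and filterByType helper are hypothetical names invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// Allocation is a minimal stand-in for the API struct in this spec.
type Allocation struct {
	Address string
	Type    string
}

// filterByType keeps only allocations whose type appears in the
// comma-separated value given to a hypothetical --type flag.
func filterByType(allocs []Allocation, typeFlag string) []Allocation {
	wanted := map[string]bool{}
	for _, t := range strings.Split(typeFlag, ",") {
		wanted[strings.TrimSpace(t)] = true
	}
	var out []Allocation
	for _, a := range allocs {
		if wanted[a.Type] {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	allocs := []Allocation{
		{"10.6.105.1/24", "network"},
		{"10.6.105.160/32", "instance"},
		{"10.6.105.5/32", "network-forward"},
	}
	for _, a := range filterByType(allocs, "instance,network-forward") {
		fmt.Println(a.Address, a.Type)
	}
}
```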

If we made used_by a URL then it would encompass resource type, resource name, resource owner and project. It would also align with our current used_by fields on other resource responses.

What is this?

What name would you suggest?

As we want to get the addresses from different LXD entities (network, network-forward, etc.), this <the_consuming_entity_name> is just a network name, a network-forward name, and so on. Otherwise, I guess we would need a separate command for each type (e.g. lxc network list-addresses network <network_name>, lxc network list-addresses network-forward <network_forward_name>, etc.)

@stgraber is that your understanding of this feature? There seems to be some overlap with lxc network list-addresses <network name>.

My only thought on this (quite a nice suggestion) is that these used_by values could be a confusing “thing”.

Currently used_by is project/entity, but this suggestion, in my understanding, says it could now be one of the following (or more):

  • project/instance
  • project/instance/network-forward
  • project/profile/network-forward
  • project/load-balancer
  • project/network/network-forward

“Event” based clients may not like this. I am a bit out of date with LXD development, so correct me if I’m wrong.

The used_by field in existing LXD responses can be a URL to any resource (including the optional project and target query string parameters).

Example:

lxc storage show zfs
config:
  size: 30GiB
  source: /var/lib/lxd/disks/zfs.img
  zfs.pool_name: zfs
description: ""
name: zfs
driver: zfs
used_by:
- /1.0/images/60b3778bb7997996a4241afe68956be9bee57d4a8706c7e7affafa1caa6b6df9
- /1.0/instances/v1
- /1.0/instances/v1/snapshots/snap0
- /1.0/instances/v2
- /1.0/instances/v2/snapshots/snap0
- /1.0/profiles/vmdisk
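Since used_by values are plain resource URLs with an optional project query parameter, a consumer can recover both the entity path and the project with the standard library. A minimal sketch, where parseUsedBy is a hypothetical helper name and the "default" fallback is an assumption:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseUsedBy splits a used_by URL into its entity path and the
// optional project query parameter (assumed to fall back to "default").
func parseUsedBy(usedBy string) (entityPath, project string, err error) {
	u, err := url.Parse(usedBy)
	if err != nil {
		return "", "", err
	}
	project = u.Query().Get("project")
	if project == "" {
		project = "default"
	}
	return strings.TrimPrefix(u.Path, "/1.0/"), project, nil
}

func main() {
	for _, s := range []string{
		"/1.0/instances/v1",
		"/1.0/instances/v1/snapshots/snap0",
		"/1.0/profiles/vmdisk?project=dev", // project parameter is illustrative
	} {
		entity, project, _ := parseUsedBy(s)
		fmt.Printf("%s (project: %s)\n", entity, project)
	}
}
```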

Knew I’d forgotten something!

Must be the way the library I use works, thanks for refreshing my memory!

@tomp do you agree with the naming of the endpoint (/1.0/network-addresses), or shall we find something more explicit (e.g. /1.0/network-addresses-in-use, /1.0/net-addresses-usage, …)? I know that @stgraber wanted a better name…

@gabrielmougard how about /1.0/network-allocations?
