Projects configuration

Projects are helpful for creating separate working spaces, yet they add multiple layers of complexity to the whole picture, and with central tools missing (whoami, whereami, lxc list --all(projects)) it is right now a little bit difficult to keep an overview.

  1. The REST API's “?project=” parameter looks like a temporary solution, as if there was no idea where else to put it.
  2. Container names are no longer distinct, which in some cases makes it difficult to deal with duplicate entries in the DHCP server, resolver, etc. You end up with multiple hosts in the same net/subnet sharing the same host name (see the sketch after this list).
  3. The container root is distinct, but translates into project_containername on ZFS, which is not consistent with 2.
  4. The resource limits only target the project as a whole (total number of instances etc.). What is missing, similar to profiles, is a limit that affects each single instance rather than the project total. For example, limits.cpu=1 would give every created instance 1 CPU only, unless explicitly overridden by profile or instance config.
  5. The decisions required for an autonomous project, such as its own storage pool, own images and own profiles, are very hard to get right.
    When I choose separate profiles, I suddenly have no access to any carefully crafted profiles from elsewhere.
    The same goes for images: separate images are desired for some reason, but again every project starts piling up the same Ubuntu image over and over.
    Ideally, despite projects having their own distinct area and their own brain, a set of shared resources would help to avoid storing all those repeated things.
    For example, a set of shared images, profiles and backup storage, with anything else kept separate.
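
For illustration, here is a rough sketch of how the name clash shows up (assuming a ZFS pool, the stock lxdbr0 bridge and an ubuntu:20.04 image; the project name “demo” is just a placeholder):

```
# The project shares the default project's profiles, so the launches below
# work without any extra profile setup.
lxc project create demo -c features.profiles=false

lxc launch ubuntu:20.04 c1                  # "c1" in the default project
lxc launch ubuntu:20.04 c1 --project demo   # another "c1" in project "demo"

# Both instances request a DHCP lease on the same subnet as "c1", while on a
# ZFS pool the second one is stored under a dataset named something like demo_c1.
```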

@kamzar1

Update: Take a look at https://linuxcontainers.org/lxd/docs/master/projects; many of the things you want can already be configured.
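
For example, a project can be created that keeps its own instances but shares the default project's images and profiles, which avoids the duplication described in 5 (a sketch; the project name is just a placeholder):

```
# Own instances, but images and profiles are shared with the default project,
# so existing profiles and cached images are reused instead of duplicated.
lxc project create myproject \
    -c features.images=false \
    -c features.profiles=false

# Switch into the project and check the view from there.
lxc project switch myproject
lxc profile list      # shows the shared (default project) profiles
lxc project list      # overview of all projects, the current one is marked
```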

So you are asking for a different approach to projects, if I understand you correctly.
Your idea sounds interesting, but maybe you could open an Issue (Feature Request) on GitHub instead, with a shorter and clearer summary and a better title.
This way it will be more visible and it is more likely to get a response.

  1. That was done to avoid duplicating the entire LXD API under /1.0/projects/name and is in line with what we already had for cluster members (?target). This has been working great and is backward compatible too (see the example after this list).

  2. Yeah, putting containers with the same name from different projects onto the same network is indeed a source of issues as far as DHCP leases and DNS names go. We’ve started investigating whether we can convince dnsmasq to use sub-domains for this (name.project.lxd), but so far we have been hitting some limitations around that…

  3. Right, that’s to be expected. Since the same instance name can be used in multiple projects, we’ve been namespacing everything stored on disk, as well as things like apparmor kernel namespaces and related resources. We’d have liked to do the DNS record too, but as mentioned, dnsmasq restrictions have been preventing us from doing so.

  4. Limiting all new instances to have 1 CPU only unless overridden is what the default profile is for. If that’s what you want, then set limits.cpu=1 in the default profile. The project limits are there to allocate a fixed TOTAL resource to the project, so limits.cpu=10 in a project means you can have up to 10 instances with limits.cpu=1 or 2 instances with limits.cpu=5 (see the commands after this list). This is extremely useful in shared environments where individual users may be paying for a specific amount of resources or where the administrator is managing expected resource consumption on a per-project basis. When combined with either Canonical RBAC or the newly introduced restricted clients, those users are restricted to specific projects and cannot edit their project configuration to give themselves more resources.

  5. We can probably improve things a bit there by adding --target-project to lxc profile copy (a possible workaround is sketched below). As for images, while LXD does let you get your own view of the images in your project, it doesn’t actually duplicate anything. If the same image is used by two projects, only one copy of it will be present in the images directory and in whatever storage backend you’re using.
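
To illustrate point 1, the project is only a query parameter on the existing endpoints, so the API paths themselves stay the same. A quick sketch (the project name “demo” and the socket path are assumptions and will vary with your install):

```
# Same endpoint, scoped to a project via ?project=
lxc query /1.0/instances                   # default project
lxc query "/1.0/instances?project=demo"    # project "demo"

# Roughly the same over the local UNIX socket (path differs for snap installs):
curl -s --unix-socket /var/lib/lxd/unix.socket \
    "http://lxd/1.0/instances?project=demo"
```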
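For point 4, here are the two mechanisms side by side (a sketch; the project name is a placeholder):

```
# Per-instance default: every new instance gets 1 CPU unless overridden
# by another profile or by instance config.
lxc profile set default limits.cpu 1

# Per-project total: the "customer1" project may use at most 10 CPUs overall,
# e.g. 10 instances with limits.cpu=1 or 2 instances with limits.cpu=5.
# (Project CPU limits require the instances to have limits.cpu set,
# e.g. via a profile as above.)
lxc project set customer1 limits.cpu 10
```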
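And for point 5, until something like --target-project lands in lxc profile copy, one possible workaround is to pipe a profile’s YAML from one project into another (a sketch with made-up profile/project names; treat it as an idea rather than a recipe):

```
# Copy profile "web" from the default project into project "demo".
lxc profile create web --project demo
lxc profile show web --project default | lxc profile edit web --project demo
```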