Resource usage per project

lxc project info projectname or 1.0/projects/name/state
delivers a nice overview of the number of instances, CPU, RAM, disk, …
The numbers are not real usage, but an accumulation of the maximum assigned values across the project's instances.
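
For reference, this is roughly how I pull those numbers today (myproject is just a placeholder name):

```
# Summary table of limits vs. accumulated assigned values for the project
lxc project info myproject

# The same data as JSON, via the REST API
lxc query /1.0/projects/myproject/state
```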

If even one of the containers within that project doesn't have a given limit set through its config or a profile, setting that limit on the project fails!
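
A rough illustration of what that looks like in practice (project name and values are made up, and the exact error text depends on the LXD version):

```
# This errors out until every instance in the project has limits.memory set,
# either directly or via a profile
lxc project set myproject limits.memory=16GiB

# Giving every instance a limit through the project's default profile satisfies the check
lxc profile set default limits.memory=2GiB --project myproject
lxc project set myproject limits.memory=16GiB
```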

Though those numbers don't reflect real-time usage, I can use them for accounting/auditing per project, because an assigned resource is usually the basis of billing.

So, we've got CPU, RAM, disk. What's missing for comprehensive project billing is:

  • Number and total size of images, snapshots and backups
  • Periodic (weekly/monthly) network rx/tx accumulated per project.

With the above, a project's resource usage would be transparent without manually gathering data from multiple API endpoints (single instances, storage, etc.).

Grafana/Prometheus/Loki is too expensive and too much overhead for that purpose, both in implementation and in continuous data pulling; honestly, it eats up more resources than the things it monitors.

Network is something you need to track separately through something like Prometheus and the metrics API, so you can get values over time and do things like 99th percentile billing. LXD itself has no interest in recording historical data in general as that'd almost immediately cripple database performance.
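
A minimal sketch of what pulling those counters could look like, assuming a trusted metrics/client certificate is already set up and that the per-interface counters carry a project label (the metric names should be checked against what your LXD version actually exposes):

```
# Scrape the Prometheus-format metrics exposed by LXD and keep the network counters
curl -s -k --cert metrics.crt --key metrics.key \
  https://127.0.0.1:8443/1.0/metrics |
  grep -E 'lxd_network_(receive|transmit)_bytes_total.*project="myproject"'
```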

Images are a bit problematic too because they're not really owned by a project.
We only keep a single copy on-disk, so only whoever is the first one to request a given image actually causes disk usage. That said, we at least have DB records that would allow a total size to be calculated; it'd just be inaccurate versus the real world.
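
For what it's worth, that inaccurate-but-available total can already be pulled from those records; a rough sketch with jq (project name is a placeholder):

```
# Sum the recorded size (in bytes) of all images visible in the project
lxc image list --project myproject --format json | jq '[.[].size] | add'
```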

Backups, I don't believe we've got a DB record of the size, so it's currently quite expensive to pull the data. This is something we could add though.
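
Until such a record exists, the only way I know to get the numbers is to measure the backup files on disk; a sketch, with the path depending on whether LXD is installed from the snap or from a package:

```
# Snap installs keep backups under /var/snap/lxd/common/lxd/backups,
# package installs under /var/lib/lxd/backups
du -sh /var/snap/lxd/common/lxd/backups 2>/dev/null
du -sh /var/lib/lxd/backups 2>/dev/null
```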

Snapshots are another weird one. Snapshots on their own don't consume disk space; it's the divergence since the snapshot was taken that does. The more you diverge over time, the more costly your snapshots become, but you can't actually compute a fixed disk usage per snapshot, which makes this problematic.
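
That divergence is visible at the storage-driver level rather than through LXD itself. On a ZFS pool, for example, the space uniquely held by each snapshot shows up in the USED column and grows as the instance diverges (pool and dataset names below are placeholders):

```
# USED  = space only this snapshot still references (grows with divergence)
# REFER = size of the dataset at the time the snapshot was taken
zfs list -t snapshot -o name,used,referenced -r default/containers/c1
```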

That's why we have the option to just plain turn snapshots off on projects. We feel that currently the better option is to charge a fee for getting to use snapshots, with the admin understanding that allowing those lets the user far exceed their disk limits.
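
For reference, that per-project switch is part of the restricted project options; something along these lines, with the key names worth double-checking against your LXD version:

```
# Enable restricted mode for the project, then block snapshot creation within it
lxc project set myproject restricted=true
lxc project set myproject restricted.snapshots=block
```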

Can this be a little bit more forgiving?

Then, as you add limits to a project over time, new profiles and instances would have to comply, so you could start with the project config without first going through all the old containers and profiles.

I do regularly check the disk for the size of backups anyway; also, before a web download you need to determine the file size at least once.

Snapshots indeed start cute and harmless at a few kB and grow into monsters over time. Once you have scheduling and a few of them, it's serious usage.

I am aware that images are a shared good, which is smart, but in real-world usage users seldom end up with the same image and almost every time fetch their own, because of the large choice of images and the frequent updating of images/versions.