Limit resources: CPU, disk size, RAM

I am new to this area and I am learning how LXC works. For my experimentation I have built a Debian system and loaded my LXCs. I would like to manually set resource limits for each LXC so that they cannot exceed:
- RAM: 512MB
- CPU: 1 vCPU
- HD: 5GB

I have seen the limits.* configuration keys, e.g.

lxc config set name_of_system limits.memory 256MB

but this sets only one type of resource at a time, not all of the aforementioned.
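For what it's worth, the per-instance limits can simply be set one after another; a sketch, where the instance name `name_of_system` and the storage pool name `default` are placeholders:

```shell
# Set RAM and CPU limits on a single instance (hypothetical name).
lxc config set name_of_system limits.memory=512MB
lxc config set name_of_system limits.cpu=1

# Disk size is not a limits.* key; it is the "size" property
# of the instance's root disk device.
lxc config device set name_of_system root size=5GB
```

Note the asymmetry: CPU and RAM are instance configuration keys, while disk size lives on the root disk device.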

I have seen that I can create a profile and assign values to it. I managed to assign values for CPU and RAM:

lxc profile create cpu1-ram256-hd5

and then I assign the values:

lxc profile set cpu1-ram256-hd5 limits.cpu=1 limits.memory=256MB

However, I cannot find how to assign a disk size per container in this profile.
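As above, the disk size goes onto the profile's root disk device rather than a limits.* key; a sketch, assuming the storage pool is named `default`:

```shell
# Add a root disk device to the profile, capped at 5GiB
# (assumes a storage pool named "default").
lxc profile device add cpu1-ram256-hd5 root disk path=/ pool=default size=5GiB
```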

Welcome!

In many cases you wouldn’t impose restrictions, especially if you have full control of the services that are running in the container. For example, if you are running nginx in a container, its default settings already bound how much processing power it uses. Likewise, you wouldn’t need to worry about disk space unless you allow file uploads.
These restrictions are tricky because, if you do not choose them carefully, you may starve the services as soon as they come under slightly heavier load.

Having said that, this discussion forum provides support for Incus, which is a continuation of LXD. I’ll show you how to set those limits with Incus.

You would create a separate project and put those limits into that project, including limits.disk.
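A sketch of the project approach; the project name `restricted` is just an example, and note that project-level limits.* keys enforce an aggregate cap across all instances in the project, which generally requires each instance to also have the corresponding per-instance limit set:

```shell
# Create a project and cap its aggregate resource usage
# (project name "restricted" is an assumption).
incus project create restricted
incus project set restricted limits.cpu=1 limits.memory=512MiB limits.disk=5GiB
```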

Do you mean disk storage space used by the container? You set the size property on the disk device. If you’re doing this in the profile (so the same limit is applied to multiple containers):

devices:
  ...
  root:
    path: /
    pool: default
    type: disk
    size: 3GiB

However, I believe the quota enforcement only works on zfs or btrfs storage pools. See this thread.
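The same `size` property can also be set from the command line instead of editing the profile YAML; a sketch, assuming the profile is the `default` one and already has a root disk device:

```shell
# Set the size property on the profile's existing root disk device.
incus profile device set default root size=3GiB
```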

I’ve just tested it with incus and zfs, and it works as expected: inside the container, df reports the expected disk size, and trying to write beyond this gives a “Disk quota exceeded” error.

Of course, zfs does compression too. You might end up being able to write more than 3GiB while still using less than 3GiB of disk. (So if you’re testing with dd, use /dev/urandom, not /dev/zero, as the source of data.)
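The quota test mentioned above can be reproduced inside the container along these lines (the file path is arbitrary):

```shell
# Inside the container: attempt to write 4 GiB of incompressible data
# into a 3 GiB quota; the write should eventually fail with
# "Disk quota exceeded".
dd if=/dev/urandom of=/root/fill.bin bs=1M count=4096

# df reports the quota as the filesystem size.
df -h /
```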