HDD size of individual containers

What is the correct way of defining the HDD size of individual containers? Is there any command for it? I am using a dataset from a zpool for storage. Should I manipulate the zpool directly with zfs set quota=100G zpool3/lxd?
Or is there any command like these:

lxc config set xenial limits.cpu 2
lxc config set xenial limits.memory 1024MB


root@pep:~# lxc storage show default
config:
  source: zpool3/lxd
  zfs.pool_name: zpool3/lxd
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/trusty
- /1.0/containers/xenial
- /1.0/containers/xenial-clean

Thanks! You save a lot of our CPU time at the University!

Look at: Adjusting size of root device - #2 by stgraber
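Following that thread, the usual approach is to put a size on the container's root disk device rather than setting a ZFS quota by hand; with the zfs driver LXD then applies the quota on the container's dataset for you. A minimal sketch, reusing your container xenial and the default pool (the 100GB value is just an example):

# If the container already has its own root device entry in its config:
$ lxc config device set xenial root size 100GB

# Otherwise, add a root device at the container level; it overrides the
# one inherited from the default profile:
$ lxc config device add xenial root disk pool=default path=/ size=100GB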

You can also create a profile with limits; for example:

$ lxc profile show limites 
config:
  limits.cpu: "1"
  limits.cpu.allowance: 30%
  limits.memory: 120MB
  limits.memory.enforce: soft
description: profile with limits
devices:
  root:
    path: /
    pool: lxd
    size: 12GB
    type: disk
name: limites
used_by:
- /1.0/containers/alpine01
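If you want to build a profile like that from scratch, something along these lines should work (profile name limites and pool name lxd taken from the output above):

# create the profile and set the resource limits
$ lxc profile create limites
$ lxc profile set limites limits.cpu 1
$ lxc profile set limites limits.cpu.allowance 30%
$ lxc profile set limites limits.memory 120MB
$ lxc profile set limites limits.memory.enforce soft

# add a root disk device with a 12GB size limit
$ lxc profile device add limites root disk path=/ pool=lxd size=12GB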

Now you can launch a container with the desired profiles:

 $ lxc launch images:alpine/3.7 alpine02 -p default -p limites
 $ lxc exec alpine02 -- df -h
Filesystem                Size      Used Available Use% Mounted on
lxd/containers/alpine02
                         12.0G      4.9M     12.0G   0% /
lxd/containers/alpine02
                         12.0G      4.9M     12.0G   0% /
none                    492.0K         0    492.0K   0% /dev
udev                      3.8G         0      3.8G   0% /dev/full
udev                      3.8G         0      3.8G   0% /dev/null
udev                      3.8G         0      3.8G   0% /dev/random
udev                      3.8G         0      3.8G   0% /dev/tty
udev                      3.8G         0      3.8G   0% /dev/urandom
udev                      3.8G         0      3.8G   0% /dev/zero
udev                      3.8G         0      3.8G   0% /dev/fuse
udev                      3.8G         0      3.8G   0% /dev/net/tun
tmpfs                   100.0K         0    100.0K   0% /dev/lxd
tmpfs                   100.0K         0    100.0K   0% /dev/.lxd-mounts
tmpfs                   787.2M     52.0K    787.2M   0% /run

For example, you can have some profiles that apply limits, others for networking, or put everything in one.
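If you later want to attach or detach one of those profiles on an existing container, these commands should do it (container name alpine01 reused from above):

$ lxc profile add alpine01 limites             # append the profile
$ lxc profile remove alpine01 limites          # drop it again
$ lxc profile assign alpine01 default,limites  # replace the whole profile list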
Regards