mjr2015
(mjr2015)
December 19, 2018, 2:08pm
1
I had an issue where I redid my OS and had to reimport my containers. They are set for 50G, but internally they say they only have around 20G. Essentially, I would like to increase the disk space they are allotted.
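For context, my understanding is that the allotment comes from the root disk device, so I assumed something along these lines would raise it (the pool, profile, and container names are from my setup; I'm not sure this is the right approach, which is partly why I'm asking):

sudo lxc profile device set VLAN20 root size 100GB
# or on a single container that has its own root device:
sudo lxc config device set log root size 100GB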
@SERVER:~$ zpool status -v
  pool: DATA
 state: ONLINE
  scan: scrub repaired 0B in 9h24m with 0 errors on Sun Dec 9 09:48:32 2018
config:

        NAME          STATE     READ WRITE CKSUM
        DATA          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0

errors: No known data errors

  pool: LXC
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Dec 9 00:24:59 2018
config:

        NAME                                      STATE     READ WRITE CKSUM
        LXC                                       ONLINE       0     0     0
          /var/snap/lxd/common/lxd/disks/LXC.img  ONLINE       0     0     0

errors: No known data errors
I would also like my logging server to have no disk limit at all.
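My rough plan for that one, assuming that dropping the size key removes the quota entirely (I haven't confirmed this), was simply to edit the profile and delete the size line:

sudo lxc profile edit VLAN20   # then remove the "size: 50GB" line under the root device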
@SERVER:~$ sudo lxc profile show VLAN20
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp4s0.20
    type: nic
  root:
    path: /
    pool: LXC
    size: 50GB
    type: disk
name: VLAN20
used_by:
- /1.0/containers/log
@SERVER:~$ sudo lxc info log
Name: log
Location: none
Remote: unix://
Architecture: x86_64
Created: 2018/12/05 12:18 UTC
Status: Running
Type: persistent
Profiles: default, VLAN20
Pid: 12282
Ips:
  eth0: inet    10.1.20.50
  eth0: inet6   fe80::216:3eff:fe92:8618
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
Resources:
  Processes: 234
  Disk usage:
    root: 2.19GB
  CPU usage:
    CPU usage (in seconds): 16358
  Memory usage:
    Memory (current): 832.92MB
    Memory (peak): 3.06GB
  Network usage:
    eth0:
      Bytes received: 844.61MB
      Bytes sent: 169.19MB
      Packets received: 3005136
      Packets sent: 125270
    lo:
      Bytes received: 1.47GB
      Bytes sent: 1.47GB
      Packets received: 2870337
      Packets sent: 2870337
Snapshots:
  snap0 (taken at 2018/12/06 10:22 UTC) (stateless)
root@log:~# df -h
Filesystem Size Used Avail Use% Mounted on
LXC/containers/log 11G 2.3G 8.0G 22% /
none 492K 0 492K 0% /dev
udev 16G 0 16G 0% /dev/tty
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 144K 16G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
stgraber
(Stéphane Graber)
December 19, 2018, 2:52pm
2
Can you show lxc config show log and lxc config show --expanded log?
mjr2015
(mjr2015)
December 19, 2018, 5:29pm
3
FWIW, I installed via the snap.
@SERVER:~$ sudo lxc config show log
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20181124)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20181124"
  image.version: "18.04"
  volatile.base_image: 7b58622614fa724290eb15c139501394c63641e81411c13d166825cc8c7fae45
  volatile.eth0.hwaddr: 00:16:3e:92:86:18
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  root:
    path: /
    pool: LXC
    size: 100GB
    type: disk
ephemeral: false
profiles:
- default
- VLAN20
stateful: false
description: ""
@SERVER:~$ sudo lxc config show --expanded log
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20181124)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20181124"
  image.version: "18.04"
  volatile.base_image: 7b58622614fa724290eb15c139501394c63641e81411c13d166825cc8c7fae45
  volatile.eth0.hwaddr: 00:16:3e:92:86:18
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp4s0.20
    type: nic
  root:
    path: /
    pool: LXC
    size: 100GB
    type: disk
ephemeral: false
profiles:
- default
- VLAN20
stateful: false
description: ""
stgraber
(Stéphane Graber)
December 19, 2018, 6:30pm
4
Ok, so the quota should be 100GB for that container. Can you show zfs list -t all?
mjr2015
(mjr2015)
December 19, 2018, 7:00pm
5
@SERVER:~$ zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
DATA 5.03T 5.45T 139K /DATA
DATA@20181006 74.6K - 149K -
DATA@prestuff 74.6K - 149K -
DATA@20181114 0B - 139K -
DATA/K 533G 5.45T 521G /DATA/K
DATA/K@20181006 74.6K - 128K -
DATA/K@prestuff 74.6K - 128K -
DATA/K@20181114 11.9G - 445G -
DATA/SHARE 4.51T 5.45T 4.34T /DATA/SHARE
DATA/SHARE@20181006 107K - 149K -
DATA/SHARE@prestuff 2.95G - 16.9G -
DATA/SHARE@20181114 175G - 3.71T -
LXC 11.0G 7.91G 24K none
LXC/containers 7.77G 7.91G 24K none
LXC/containers/ansible 318M 7.91G 485M /var/snap/lxd/common/lxd/storage-pools/LXC/containers/ansible
LXC/containers/ansible@snapshot-snap0 88.5M - 481M -
LXC/containers/log 2.06G 7.91G 2.23G /var/snap/lxd/common/lxd/storage-pools/LXC/containers/log
LXC/containers/log@snapshot-snap0 100M - 1.24G -
LXC/containers/nagios 1.60G 7.91G 1.86G /var/snap/lxd/common/lxd/storage-pools/LXC/containers/nagios
LXC/containers/nagios@snapshot-snap0 54.1M - 430M -
LXC/containers/rproxy 525M 7.91G 737M /var/snap/lxd/common/lxd/storage-pools/LXC/containers/rproxy
LXC/containers/rproxy@snapshot-snap0 2.26M - 335M -
LXC/containers/rproxy@snapshot-snap1 50.2M - 428M -
LXC/containers/rproxy@snapshot-snap2 51.9M - 736M -
LXC/containers/tbx 2.40G 7.91G 2.81G /var/snap/lxd/common/lxd/storage-pools/LXC/containers/tbx
LXC/containers/tbx@snapshot-snap3 158M - 2.73G -
LXC/containers/unifipihole 909M 7.91G 975M /var/snap/lxd/common/lxd/storage-pools/LXC/containers/unifipihole
LXC/containers/unifipihole@snapshot-snap0 230M - 883M -
LXC/custom 24K 7.91G 24K none
LXC/custom-snapshots 24K 7.91G 24K none
LXC/deleted 1002M 7.91G 24K none
LXC/deleted/images 1002M 7.91G 24K none
LXC/deleted/images/51cb67916e21e2d995ef3c0bc0f9aa3133e6603464d99ab1d67b7c04bc66e32c 334M 7.91G 334M none
LXC/deleted/images/51cb67916e21e2d995ef3c0bc0f9aa3133e6603464d99ab1d67b7c04bc66e32c@readonly 0B - 334M -
LXC/deleted/images/7b58622614fa724290eb15c139501394c63641e81411c13d166825cc8c7fae45 334M 7.91G 334M none
LXC/deleted/images/7b58622614fa724290eb15c139501394c63641e81411c13d166825cc8c7fae45@readonly 0B - 334M -
LXC/deleted/images/d72ae2e5073f20450c5260e6f227484c23452a46c6bb553ffe6be55e48602bb4 334M 7.91G 334M none
LXC/deleted/images/d72ae2e5073f20450c5260e6f227484c23452a46c6bb553ffe6be55e48602bb4@readonly 0B - 334M -
LXC/images 2.21G 7.91G 24K none
LXC/images/43df08150a886719a1a413a818274ee4e6b880cdffff4b0f23cfc5e1173c6193 746M 7.91G 746M none
LXC/images/43df08150a886719a1a413a818274ee4e6b880cdffff4b0f23cfc5e1173c6193@readonly 0B - 746M -
LXC/images/84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9 334M 7.91G 334M none
LXC/images/84a71299044bc3c3563396bef153c0da83d494f6bf3d38fecc55d776b1e19bf9@readonly 0B - 334M -
LXC/images/8fe87e6212b0a2ff784dc9de114164c26b434eb3bda3691d4b26f5d92c4bd18a 463M 7.91G 463M none
LXC/images/8fe87e6212b0a2ff784dc9de114164c26b434eb3bda3691d4b26f5d92c4bd18a@readonly 0B - 463M -
LXC/images/999766907124b664050a4f421086c3daeb862077a2f328070997f26695fbe1c1 722M 7.91G 722M none
LXC/images/999766907124b664050a4f421086c3daeb862077a2f328070997f26695fbe1c1@readonly 0B - 722M -
LXC/snapshots 168K 7.91G 24K none
LXC/snapshots/ansible 24K 7.91G 24K none
LXC/snapshots/log 24K 7.91G 24K none
LXC/snapshots/nagios 24K 7.91G 24K none
LXC/snapshots/rproxy 24K 7.91G 24K none
LXC/snapshots/tbx 24K 7.91G 24K none
LXC/snapshots/unifipihole 24K 7.91G 24K none
stgraber
(Stéphane Graber)
December 19, 2018, 7:19pm
6
Ok, so the problem is the size of your zpool then. The quotas likely work as expected, but the entire zpool is only around 20GB, which explains the values you’re seeing.
There are instructions on growing your zpool in our storage documentation:
https://lxd.readthedocs.io/en/latest/storage/#zfs
As you’re using the snap, the path will be /var/snap/lxd/common/lxd/disks instead of /var/lib/lxd/disks/.
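For a loop-backed pool like yours, the procedure is roughly the following (a sketch only; adjust the size increment as needed, pool name and image path taken from your zpool output):

# grow the backing image by 50GB, then let ZFS expand into the new space
sudo truncate -s +50G /var/snap/lxd/common/lxd/disks/LXC.img
sudo zpool set autoexpand=on LXC
sudo zpool online -e LXC /var/snap/lxd/common/lxd/disks/LXC.img
sudo zpool set autoexpand=off LXC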
mjr2015
(mjr2015)
December 19, 2018, 7:34pm
7
So, a quick question about my zpools: is my LXD pool located on my main ZFS disks? I have the three 6 TB disks together and a 64 GB OS drive, and I want to make sure the pool is not on the 64 GB OS drive.
stgraber
(Stéphane Graber)
December 19, 2018, 7:57pm
8
It’s stored in a file on /var/snap/lxd/common/lxd/disks, whatever disk that is.
mjr2015
(mjr2015)
December 19, 2018, 8:04pm
9
Yeah, I’m afraid it may be on my 64 GB OS disk. How do I find out for sure?
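I’m guessing something like this would show which filesystem (and so which disk) that path is sitting on, but I’d like to confirm:

df -h /var/snap/lxd/common/lxd/disks/LXC.img

If that reports the 64 GB root filesystem, I assume the pool file is on the OS drive.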