I’d use the ZFS method contained here for that:
Thank you for your help!
I really appreciate it!
Don’t thank me yet… lol
One last thing.
I added the disks by /dev/disk/by-id/… this is better than /dev/sda because those names could change.
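(If it helps anyone, the by-id names can be listed with something like the following; the grep simply hides the per-partition entries.)
ls -l /dev/disk/by-id/ | grep -v part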
One question:
I see that the block size on the SSD disks is 512. Is this correct, or do I need to adjust it?
On SATA, the standard logical block size is 512 bytes.
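If you want to double-check, something like this shows the logical and physical sector sizes the kernel reports, plus the ashift ZFS picked for the pool (the pool name "default" here is only an example):
lsblk -o NAME,LOG-SEC,PHY-SEC
sudo zpool get ashift default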
Ok, perfect, thank you.
If I gave one disk to LXD, what would be the command to add 3 more disks to get RAID10?
This is accomplished as follows.
Add an extra drive to the LXD pool
When attaching drives for RAID10 you have to specify <drive1> first. Any drive that follows will be mirrored onto the first drive specified. This operation should result in the creation of <mirror-0> from <drive1> and <drive2>.
I suggest a dry-run before committing; note the “-n”:
sudo zpool attach -n <name of lxd pool> <drive1> <drive2>
If that succeeds, you can confidently proceed with adding the drive:
sudo zpool attach <name of lxd pool> <drive1> <drive2>
If this complains at you, you may have to force the operation; note the “-f” option:
sudo zpool attach -f <name of lxd pool> <drive1> <drive2>
This should result in a two-drive mirror vdev in the zpool. Check with:
sudo zpool status
Adding another “mirror” to the pool
sudo zpool add <name of lxd zpool> mirror <drive3> <drive4>
This should result in a pool with two mirror vdevs. Check with:
sudo zpool status
In the output you’re looking for something that resembles:
...
config:
NAME
<name of lxd zpool>
  mirror-0
    <drive1>
    <drive2>
  mirror-1
    <drive3>
    <drive4>
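To make the placeholders concrete, the whole sequence with by-id style paths might look like this (the device names below are made up; substitute your own from /dev/disk/by-id):
sudo zpool attach <name of lxd pool> /dev/disk/by-id/ata-DISK_SERIAL1 /dev/disk/by-id/ata-DISK_SERIAL2
sudo zpool add <name of lxd pool> mirror /dev/disk/by-id/ata-DISK_SERIAL3 /dev/disk/by-id/ata-DISK_SERIAL4
sudo zpool status <name of lxd pool>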
Hi Shimmy,
I did what you thought would be best: reinitialized LXD on one drive and added the 3 drives afterwards.
I enabled ZFS compression. However, inside a container, when I run an apt upgrade it takes 30 minutes to complete.
root@warehouse:~# apt dist-upgrade -y
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
Calculating upgrade… Done
The following packages will be upgraded:
odoo
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 197 MB of archives.
After this operation, 9,396 kB of additional disk space will be used.
Get:1 Index of /16.0/nightly/deb/ ./ odoo 16.0.20230103 [197 MB]
Fetched 197 MB in 3s (75.7 MB/s)
(Reading database … 79509 files and directories currently installed.)
Preparing to unpack …/odoo_16.0.20230103_all.deb …
Unpacking odoo (16.0.20230103) over (16.0.20221222) …
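(For reference, the compression setting and achieved ratio on a pool can be checked with something like the following; the pool name is just an example.)
sudo zfs get compression,compressratio default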
Can you post your sudo zpool status output?
root@esx:~# sudo zpool status
  pool: default
 state: ONLINE
  scan: resilvered 1.89M in 00:00:01 with 0 errors on Tue Jan 3 19:00:09 2023
config:

    NAME                                           STATE   READ WRITE CKSUM
    default                                        ONLINE     0     0     0
      mirror-0                                     ONLINE     0     0     0
        ata-Samsung_SSD_870_QVO_1TB_S5RRNF0R642530P  ONLINE   0     0     0
        ata-Samsung_SSD_870_QVO_1TB_S5RRNF0R644421A  ONLINE   0     0     0
      mirror-1                                     ONLINE     0     0     0
        ata-Samsung_SSD_870_QVO_1TB_S5RRNF0R642539W  ONLINE   0     0     0
        ata-Samsung_SSD_870_QVO_1TB_S5RRNF0R642467A  ONLINE   0     0     0

errors: No known data errors

  pool: downloads
 state: ONLINE
config:

    NAME                                           STATE   READ WRITE CKSUM
    downloads                                      ONLINE     0     0     0
      ata-Samsung_SSD_870_QVO_1TB_S5RRNF0R642514M  ONLINE     0     0     0

errors: No known data errors

  pool: usb_backup
 state: ONLINE
config:

    NAME                                           STATE   READ WRITE CKSUM
    usb_backup                                     ONLINE     0     0     0
      ata-Samsung_SSD_870_QVO_2TB_S5RPNF0R603981X  ONLINE     0     0     0

errors: No known data errors
Are you connecting to the internet over a bridge such as lxdbr0?
Yes,
over a bridge NIC. With an internet connection test I get full speed.
I have a host installation where I created a bridge adapter. In LXD, when asked to create a new bridge, I chose no and gave it the bridge adapter I created on the host.
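(For anyone reproducing this: the manual equivalent of pointing LXD at an existing host bridge is roughly the following; the bridge and device names are from my setup and may differ on yours.)
lxc profile device add default eth0 nic nictype=bridged parent=bridge1 name=eth0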
Please post lxc network list -f compact
root@esx:~# lxc network list -f compact
NAME     TYPE      MANAGED  IPV4  IPV6  DESCRIPTION  USED BY  STATE
bridge0  bridge    NO                                      1
bridge1  bridge    NO                                      9
eno1     physical  NO                                      0
enp4s0   physical  NO                                      0
Did you happen to scrub the drives and tune your ARC settings? If not, please do that and see if it helps.
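Roughly something like this (a sketch; the pool name and the 8 GiB ARC cap are only examples, size the cap to your RAM):
sudo zpool scrub default
sudo zpool status default   # watch scrub progress
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max   # apply the ARC cap now
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf   # persist it
sudo update-initramfs -u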
Ok, I will try it.
I will keep you informed.
I have an update.
If I run
sudo iostat -x 1
I see that utilization on the 4 SSD drives is about 90 percent. Could it have something to do with the sync option in ZFS?
What does zfs get sync default report?
root@esx:~# zfs get sync default
NAME PROPERTY VALUE SOURCE
default sync standard default
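(If sync writes turn out to be the culprit, one way to test is to disable sync temporarily on the container's dataset and rerun the upgrade; the dataset path below is only an example, and sync=disabled risks losing the last few seconds of writes on a power cut, so revert afterwards.)
sudo zfs set sync=disabled default/containers/warehouse
# ... rerun the apt upgrade in the container and compare timings ...
sudo zfs inherit sync default/containers/warehouse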