Performance Tuning

Revision as of 02:12, 21 March 2019
ZFS Performance Tuning
One of the most common tuning tasks for ZFS is setting the size of the ARC cache. If your system has less than 10GB of RAM you should just use the default, but if you have 32GB or more it is a good idea to increase the size of the ARC cache to make maximum use of the available RAM for your storage appliance. Before you set the tuning parameters, run 'top' to verify how much RAM the system has. Next, set the ARC size to a percentage of the available RAM. For example, to set the ARC cache to use a maximum of 80% of the available RAM and a minimum of 50%, run these commands, then reboot:
qs-util setzfsarcmax 80
qs-util setzfsarcmin 50
Example:
sudo -i
qs-util setzfsarcmax 80
INFO: Updating max ARC cache size to 80% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1672478720 bytes (1595 MB)
qs-util setzfsarcmin 50
INFO: Updating min ARC cache size to 50% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1045430272 bytes (997 MB)
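Under the hood, ARC sizing on ZFS systems is controlled by module parameters written to the /etc/modprobe.d/zfs.conf file shown in the output above. As a rough sketch of the arithmetic involved (assuming the standard OpenZFS zfs_arc_max/zfs_arc_min module options; the exact behavior of qs-util may differ):

```shell
# Sketch only: compute ARC bounds as percentages of total RAM,
# the same math shown in the qs-util example output above.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)   # total RAM in kilobytes
arc_max=$(( total_kb * 1024 * 80 / 100 ))               # 80% of RAM, in bytes
arc_min=$(( total_kb * 1024 * 50 / 100 ))               # 50% of RAM, in bytes
# Lines like these would go into /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=$arc_max"
echo "options zfs zfs_arc_min=$arc_min"
```

Because these are module load options, they take effect at the next reboot, which is why the procedure above ends with one.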
To see how many cache hits you are getting, monitor the ARC cache while the system is under load with the qs-iostat command:
qs-iostat -a

Name                        Data
---------------------------------------------
hits                        237841
misses                      1463
c_min                       4194304
c_max                       520984576
size                        16169912
l2_hits                     19839653
l2_misses                   74509
l2_read_bytes               256980043
l2_write_bytes              1056398
l2_cksum_bad                0
l2_size                     9999875
l2_hdr_size                 233044
arc_meta_used               4763064
arc_meta_limit              390738432
arc_meta_max                5713208

ZFS Intent Log (ZIL) / writeback cache statistics

Name                        Data
---------------------------------------------
zil_commit_count            876
zil_commit_writer_count     495
zil_itx_count               857
Descriptions of the different metrics for the ARC, L2ARC, and ZIL are below.
hits = the number of client read requests that were found in the ARC
misses = the number of client read requests that were not found in the ARC
c_min = the minimum size of the ARC allocated in system memory
c_max = the maximum size of the ARC that can be allocated in system memory
size = the current ARC size
l2_hits = the number of client read requests that were found in the L2ARC
l2_misses = the number of client read requests that were not found in the L2ARC
l2_read_bytes = the number of bytes read from the L2ARC SSD devices
l2_write_bytes = the number of bytes written to the L2ARC SSD devices
l2_cksum_bad = the number of checksums that failed verification on an SSD (a growing count here usually indicates a faulty L2ARC SSD device that needs to be replaced)
l2_size = the current L2ARC size
l2_hdr_size = the size of the L2ARC reference headers present in the ARC metadata
arc_meta_used = the amount of ARC memory used for metadata
arc_meta_limit = the maximum limit for the ARC metadata
arc_meta_max = the maximum value that the ARC metadata has reached on this system
zil_commit_count = how many ZIL commits have occurred since bootup
zil_commit_writer_count = how many ZIL writers were used since bootup
zil_itx_count = the number of indirect transaction groups that have occurred since bootup
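A quick way to interpret the hit/miss counters is to turn them into cache hit ratios. For instance, using the sample figures from the qs-iostat output above:

```shell
# Compute ARC and L2ARC hit ratios from the sample counters shown above.
hits=237841; misses=1463
l2_hits=19839653; l2_misses=74509
arc_ratio=$(( hits * 100 / (hits + misses) ))
l2_ratio=$(( l2_hits * 100 / (l2_hits + l2_misses) ))
echo "ARC hit ratio: ${arc_ratio}%"    # 99% - most reads served from RAM
echo "L2ARC hit ratio: ${l2_ratio}%"   # 99% - SSD cache absorbing the spillover
```

A consistently low ARC hit ratio under a steady workload is a sign that a larger ARC (or an L2ARC SSD) could help.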
Pool Performance Profiles
Read-ahead and request queue size adjustments can help tune your storage pool for certain workloads. You can also create new storage pool IO profiles by editing the /etc/qs_io_profiles.conf file. The default profile looks like this; you can duplicate it and edit the copy to customize it.
[default]
name=Default
description=Optimizes for general purpose server application workloads
nr_requests=2048
read_ahead_kb=256
fifo_batch=16
chunk_size_kb=128
scheduler=deadline
If you edit the profiles configuration file, be sure to restart the management service with 'service quantastor restart' so that your new profile is discovered and becomes available in the web interface.
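As an example, a duplicated profile tuned for large sequential-read workloads might look like the following (the profile name and the larger read_ahead_kb value are illustrative, not a profile shipped with QuantaStor):

```
[media-streaming]
name=Media Streaming
description=Optimizes for large sequential read workloads
nr_requests=2048
read_ahead_kb=1024
fifo_batch=16
chunk_size_kb=128
scheduler=deadline
```

After saving the file, restart the management service so the new profile appears in the web interface.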
Storage Pool Tuning Parameters
QuantaStor has a number of tunable parameters in the /etc/quantastor.conf file that can be adjusted to better match the needs of your application. That said, we've spent a considerable amount of time tuning the system to efficiently support a broad set of application types, so we do not recommend adjusting these settings unless you are a highly skilled Linux administrator. The default contents of the /etc/quantastor.conf configuration file are as follows:
[device]
nr_requests=2048
scheduler=deadline
read_ahead_kb=512

[mdadm]
chunk_size_kb=256
parity_layout=left-symmetric
There are tunable settings for device parameters which are applied to the storage media (SSD/SATA/SAS), as well as settings like the MD device array chunk size and parity configuration used with XFS based storage pools. These settings are read from the configuration file dynamically each time one of them is needed, so there is no need to restart the quantastor service; simply edit the file and the changes will be applied to the next operation that uses them. For example, if you adjust the chunk_size_kb setting for mdadm, the next storage pool created will use the new chunk size. Other tunable settings, like the device settings, are automatically applied within a minute or so of your changes because the system periodically checks the disk configuration and updates it to match the tunables. Finally, you can delete the quantastor.conf file entirely and the system will use the defaults listed above.
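The settings in the [device] section correspond to the Linux kernel's per-device queue parameters, so one way to confirm they have been applied is to read them back from sysfs. A rough sketch (this picks the first block device it finds; on a real appliance, substitute a pool member disk such as sdb):

```shell
# Read the live queue settings that the [device] tunables map to.
# Assumption: standard Linux sysfs layout under /sys/block/<dev>/queue.
dev=$(ls /sys/block | head -n 1)
echo "device: $dev"
cat "/sys/block/$dev/queue/nr_requests"
cat "/sys/block/$dev/queue/read_ahead_kb"
cat "/sys/block/$dev/queue/scheduler"   # active scheduler is shown in [brackets]
```

If the values have not converged on your configured settings after a minute or so, re-check the syntax of /etc/quantastor.conf.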