Storage Pools



Storage Pool Management

Storage pools combine or aggregate one or more physical disks (SATA, SAS, or SSD) into a pool of fault-tolerant storage. From the storage pool users can provision both SAN/storage volumes (iSCSI/FC targets) and NAS/network shares (CIFS/NFS). Storage pools can be provisioned using all major RAID types (RAID0/1/10/5/50/6/60/7/70) and in general OSNEXUS recommends RAID10 for the best combination of fault tolerance and performance. That said, the optimal RAID type depends on the applications and workloads that the storage will be used for. For assistance in selecting the optimal layout we recommend engaging the OSNEXUS SE team for advice on configuring your system to get the most out of it.

Pool RAID Layout Selection / General Guidelines

We strongly recommend using RAID10/mirroring for all virtualization workloads and databases, and RAIDZ2/RAID60 for archive and other higher-capacity applications that produce mostly sequential IO patterns. RAID10 performs very well with both sequential and random IO patterns but is more expensive since usable capacity before compression is 50% of the raw capacity. With compression the usable capacity may increase to 75% of the raw capacity or higher. For archival storage or other similar workloads RAIDZ2 or RAIDZ3 is best and provides higher usable capacity. RAIDZ1/5/50 is generally not recommended because it is no longer fault tolerant after a single disk failure, so the Storage Pool is exposed while it heals onto a hot-spare.
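
As a rough illustration of the capacity trade-off described above, the short Python sketch below estimates usable capacity for a mirrored (RAID10) layout versus a RAIDZ2 layout. The disk count, disk size, and 1.5x compression ratio are hypothetical examples, not QuantaStor defaults; real compression ratios depend entirely on the data.

  # Rough usable-capacity estimate for RAID10 vs RAIDZ2 (illustrative only).
  # Disk count, disk size, and compression ratio below are hypothetical examples.

  def usable_raid10(num_disks, disk_tb):
      # Mirror pairs: usable capacity is 50% of raw.
      return (num_disks // 2) * disk_tb

  def usable_raidz2(num_disks, disk_tb, disks_per_vdev=10):
      # Each RAIDZ2 vdev gives up two disks' worth of capacity to parity.
      vdevs = num_disks // disks_per_vdev
      return vdevs * (disks_per_vdev - 2) * disk_tb

  raw_tb = 20 * 8                 # e.g. twenty 8TB disks
  r10 = usable_raid10(20, 8)
  rz2 = usable_raidz2(20, 8)
  ratio = 1.5                     # example compression ratio; varies with the data

  print(f"raw: {raw_tb} TB")
  print(f"RAID10 usable: {r10} TB ({r10 * ratio:.0f} TB effective at {ratio}x compression)")
  print(f"RAIDZ2 usable: {rz2} TB ({rz2 * ratio:.0f} TB effective at {ratio}x compression)")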

Storage Pools Web 6.jpg

Creating Storage Pools

Navigation: Storage Management --> Storage Pools --> Storage Pool --> Create (toolbar)

Create a Storage Pool.

One of the first steps in configuring a system is to create a storage pool. The storage pool is an aggregation of one or more devices into a fault-tolerant "pool" of storage that will continue operating without interruption in the event of disk failures. The amount of fault tolerance depends on multiple factors including hardware type, RAID layout selection and pool configuration. QuantaStor's SAN storage (storage volumes) as well as NAS storage (network shares) are both provisioned from Storage Pools. A given pool of storage can be used for SAN and NAS storage at the same time; additionally, clients can access the storage using multiple protocols at the same time (iSCSI & FC for storage volumes and NFS & SMB/CIFS for network shares). Creating a storage pool is very straightforward: simply name the pool (DefaultPool is fine too), select disks and a layout for the pool, and then press OK. If you're not familiar with RAID levels then choose the RAID10 layout and select an even number of disks (2, 4, 8, 20, etc). There are a number of advanced options available during pool creation but the most important one to consider is encryption as this cannot be turned on after the pool is created.
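
Since QuantaStor's storage pools are ZFS based (see the layout types section below), the RAID10 layout corresponds to an OpenZFS pool of striped mirror vdevs. The sketch below only illustrates what that shape looks like underneath using plain zpool syntax driven from Python; the pool name and device paths are hypothetical, and the web UI or QS CLI is the supported way to create QuantaStor pools.

  import subprocess

  # Illustrative only: the striped-mirror (RAID10-style) shape of an OpenZFS pool.
  # QuantaStor's web UI / QS CLI is the supported way to create pools.
  pool_name = "DefaultPool"                      # hypothetical name
  disks = ["sdb", "sdc", "sdd", "sde"]           # hypothetical devices, even count

  cmd = ["zpool", "create", pool_name]
  for i in range(0, len(disks), 2):              # pair the disks into mirror vdevs
      cmd += ["mirror", disks[i], disks[i + 1]]

  print(" ".join(cmd))                           # zpool create DefaultPool mirror sdb sdc mirror sdd sde
  # subprocess.run(cmd, check=True)              # uncomment to actually run (requires root)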


Enclosure Aware Intelligent Disk Selection

For systems employing multiple JBOD disk enclosures QuantaStor automatically detects which enclosure each disk is sourced from and selects disks in a spanning "round-robin" fashion during the pool provisioning process to ensure fault tolerance at the JBOD level. This means that should a JBOD be powered off the pool will continue to operate in a degraded state. If the detected number of enclosures is insufficient for JBOD fault tolerance the disk selection algorithm switches to a sequential selection mode which groups vdevs by enclosure. For example, a pool with the RAID10 layout provisioned from disks across two or more JBOD units will use the "round-robin" technique so that mirror pairs span the JBODs. In contrast, a storage pool with the RAID50 layout (4d+1p) and two JBOD/disk enclosures will use the sequential selection mode.
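
The difference between the two selection modes described above can be sketched as follows. This is only an illustration of the idea, not QuantaStor's actual implementation, and the enclosure and disk names are hypothetical.

  # Sketch of the two selection modes described above (not QuantaStor's actual code).
  # Enclosure/disk names are hypothetical.
  enclosures = {
      "jbod1": ["e1d1", "e1d2", "e1d3", "e1d4"],
      "jbod2": ["e2d1", "e2d2", "e2d3", "e2d4"],
  }

  def round_robin(enclosures, count):
      # Alternate across enclosures so mirror pairs span JBODs.
      queues = [list(disks) for disks in enclosures.values()]
      picked, i = [], 0
      while len(picked) < count:
          if queues[i % len(queues)]:
              picked.append(queues[i % len(queues)].pop(0))
          i += 1
      return picked

  def sequential(enclosures, count):
      # Fill from one enclosure at a time so each vdev stays within a JBOD.
      flat = [d for disks in enclosures.values() for d in disks]
      return flat[:count]

  print(round_robin(enclosures, 4))   # ['e1d1', 'e2d1', 'e1d2', 'e2d2'] -> mirrors span JBODs
  print(sequential(enclosures, 4))    # ['e1d1', 'e1d2', 'e1d3', 'e1d4'] -> grouped by JBOD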

Enabling Encryption (One-click Encryption)

Enable encryption.

Encryption must be turned on at the time the pool is created. If you need encryption and you currently have a storage pool that is un-encrypted you'll need to copy/migrate the storage to an encrypted storage pool as the data cannot be encrypted in-place after the fact. Other options like compression, cache sync policy, and the pool IO tuning policy can be changed at any time via the Modify Storage Pool dialog. To enable encryption select the Advanced Settings tab within the Create Storage Pool dialog and check the [x] Enable Encryption checkbox.


Passphrase protecting Pool Encryption Keys

For QuantaStor systems used as portable systems or in high-security environments we recommend assigning a passphrase to the encrypted pool, which will be used to encrypt the pool's keys. The drawback of assigning a passphrase is that it will prevent the system from automatically starting the storage pool after a reboot. Passphrase-protected pools require that an administrator log in to the system and choose 'Start Storage Pool..' where the passphrase may be manually entered and the pool subsequently started. If you need the encrypted pool to start automatically or to be used in an HA configuration, do not assign a passphrase. If you have set a passphrase by mistake or would like to change it, you can do so at any time using the Change/Clear Pool Passphrase... dialog. Note that passphrases must be at least 10 characters in length and no more than 80 and may comprise alpha-numeric characters as well as these basic symbols: ".-:_" (period, hyphen, colon, underscore).
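
As a quick reference for the passphrase rules above, here is a small Python check that mirrors the stated constraints (10-80 characters, alpha-numeric plus period, hyphen, colon, and underscore). It is only an illustration of the rules, not QuantaStor's own validation code.

  import re

  # Mirrors the passphrase rules stated above (illustrative, not QuantaStor's code):
  # 10-80 characters, letters/digits plus . - : _
  PASSPHRASE_RE = re.compile(r"^[A-Za-z0-9.\-:_]{10,80}$")

  def passphrase_ok(passphrase: str) -> bool:
      return bool(PASSPHRASE_RE.match(passphrase))

  print(passphrase_ok("backup-pool:2024_key"))  # True
  print(passphrase_ok("short"))                 # False - fewer than 10 characters
  print(passphrase_ok("has spaces in it!"))     # False - space and '!' not allowed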

Enabling Compression

Control use of I/O tuning and/or compression.

Pool compression is enabled by default as it typically increases usable capacity and boosts read/write performance at the same time. This boost in performance is due to the fact that modern CPUs can compress data much faster than the media can read/write data. The reduction in the amount of data to be read/written due to compression subsequently boosts performance. For workloads which are working with compressed data (common in M&E) we recommend turning compression off. The default compression mode is LZ4 but this can be changed at any time via the Modify Storage Pool dialog.
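
To see why already-compressed data gains nothing from pool compression, the short sketch below compresses a highly repetitive buffer and then tries to compress its own output again. It assumes the third-party python lz4 package (pip install lz4), which is not part of QuantaStor, and is only meant to illustrate the behavior described above.

  # Illustration of why compression helps text-like data but not pre-compressed data.
  # Assumes the third-party 'lz4' Python package; not part of QuantaStor.
  import lz4.frame

  text_like = b"sensor=42 status=OK\n" * 5000          # highly repetitive, compresses well
  once = lz4.frame.compress(text_like)
  twice = lz4.frame.compress(once)                     # compressing compressed data

  print(f"original:           {len(text_like)} bytes")
  print(f"lz4 once:           {len(once)} bytes")
  print(f"lz4 on lz4 output:  {len(twice)} bytes (little or no further gain)")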

Boosting Performance with SSD Caching

ZFS based storage pools support the addition of SSD devices for use as read cache or write log. SSD cache devices must be dedicated to a specific storage pool and cannot be shared across multiple storage pools. Some hardware RAID controllers support SSD caching but in our testing we've found that ZFS is more effective at managing the layers of cache than the RAID controllers so we do not recommend using SSD caching at the hardware RAID controller.

Add Cache Device Web 6.jpg

Accelerating Write Performance with SSD Write Log (ZFS SLOG/ZIL)

The ZFS filesystem can use a log device (SLOG/ZIL) where synchronous writes and filesystem metadata are mirrored from system memory to protect against system component or power failure. Writes are not held for long in the SSD SLOG, so the device does not need to be large as it typically holds no more than 16GB before forcing a flush to the backend disks. Because it stores metadata and synchronous writes that have not yet been persisted to the storage pool, the write log SSDs must be mirrored so that redundancy of the ZIL/SLOG is maintained in the event an SSD drive fails. As writes occur constantly on the ZIL/SLOG device, we recommend choosing SSDs with a high endurance rating of 3+ Drive Writes Per Day (DWPD).
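
To put the 3+ DWPD guidance above in context, here is a small worked calculation of how many drive writes per day a SLOG device would see for a given sustained synchronous write rate. The write rate and device size are hypothetical; substitute the real sync-write portion of your own workload.

  # Worked example for the DWPD guidance above. The write rate and device size
  # are hypothetical; only synchronous writes land on the SLOG.
  sync_write_mb_per_s = 20           # average synchronous write rate over the day
  slog_device_gb = 400               # capacity of each mirrored SLOG SSD

  gb_written_per_day = sync_write_mb_per_s * 86400 / 1024
  dwpd_needed = gb_written_per_day / slog_device_gb

  print(f"{gb_written_per_day:,.0f} GB/day -> about {dwpd_needed:.1f} drive writes per day")
  # -> roughly 1,688 GB/day and ~4.2 DWPD at these example numbers, which is why
  #    high-endurance (3+ DWPD) devices are recommended for the write log.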

Accelerating Read Performance with SSD Read Cache (L2ARC)

You can add up to 4x devices for SSD read-cache (L2ARC) to any ZFS based storage pool and these devices do not need to be fault tolerant. The devices can be added directly to the storage pool by selecting 'Add Cache Devices..' after right-clicking on any storage pool. You can also opt to create a RAID0 logical device using the RAID controller out of multiple SSD devices and then add this device to the storage pool as SSD cache. The size of the SSD cache should be roughly the size of the working set for your application, database, or VMs. For most applications a pair of 400GB SSD drives will be sufficient, but for larger configurations you may want to use upwards of 2TB or more of SSD read cache. Note that the SSD read-cache doesn't provide an immediate performance boost because it takes time for it to learn which blocks of data should be cached to provide better read performance.
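
A quick sizing sketch based on the guidance above (match the working set, using at most four cache devices) might look like the following; the working-set size and SSD size are hypothetical examples.

  import math

  # Rough read-cache sizing per the guidance above: match the working set,
  # using at most 4 cache devices. Numbers are hypothetical examples.
  working_set_gb = 1500
  ssd_size_gb = 400

  devices_needed = math.ceil(working_set_gb / ssd_size_gb)
  if devices_needed > 4:
      print("Working set exceeds 4 devices of this size; use larger cache SSDs.")
  else:
      print(f"Use {devices_needed} x {ssd_size_gb}GB SSDs "
            f"({devices_needed * ssd_size_gb}GB of L2ARC) for a ~{working_set_gb}GB working set.")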

RAM Read Cache Configuration (ARC)

ZFS based storage pools use what is called the "ARC" (Adaptive Replacement Cache) as an in-memory read cache, rather than the Linux filesystem buffer cache, to boost disk read performance. Having a good amount of RAM in your system is critical to delivering solid performance. It is very common with disk systems for blocks to be read multiple times. The frequently accessed "hot data" is cached in RAM where it can serve read requests orders of magnitude faster than reading it from spinning disk. Since the cache takes on some of the load it also reduces the load on the disks, and this too leads to additional boosts in read and write performance. It is recommended to have a minimum of 32GB to 64GB of RAM in small systems, 96GB to 128GB of RAM for medium sized systems, and 256GB or more in large systems.
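
If you want to confirm how much RAM the ARC is actually using, a sketch like the one below reads the ZFS on Linux arcstats kstat file. The path and field names shown (size, c_max, hits, misses) are standard OpenZFS counters on Linux, but they can vary by ZFS version, so treat this as an assumption to verify on your own system rather than a supported QuantaStor interface.

  # Read current ARC size and target maximum from the OpenZFS kstat file on Linux.
  # Path and field names are standard for ZFS on Linux but may vary by version.
  def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
      stats = {}
      with open(path) as f:
          for line in f.readlines()[2:]:          # first two lines are kstat headers
              name, _type, value = line.split()
              stats[name] = int(value)
      return stats

  arc = read_arcstats()
  gib = 1024 ** 3
  print(f"ARC in use:  {arc['size'] / gib:.1f} GiB")
  print(f"ARC maximum: {arc['c_max'] / gib:.1f} GiB")
  print(f"Hit rate:    {100 * arc['hits'] / (arc['hits'] + arc['misses']):.1f}%")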

Storage Pool Layout Types (ZFS based)

Navigation: Storage Management --> Storage Pools --> Storage Pool --> Create (toolbar)

RAID options displayed are dependent on the number of disks available.

QuantaStor supports all industry standard RAID levels (RAID1/10/5(Z1)/50/6(Z2)/60), some additional advanced RAID levels (RAID1 triple copy, RAIDZ3 triple-parity), and simple striping with RAID0. Over time all disk media degrades, so we recommend marking at least one device as a hot-spare disk so that the system can automatically heal itself when a bad device needs replacing. Hot-spare disks can be assigned as universal spares for use with any storage pool or pinned to specific storage pools. Finally, RAID0 is not fault-tolerant at all, but it is your only choice if you have only one disk and it can be useful in some scenarios where fault tolerance is not required. Here's a breakdown of the various RAID types and their pros & cons.

  • RAID0 layout is also called 'striping' and it writes data across all the disk drives in the storage pool in a round robin fashion. This has the effect of greatly boosting performance. The drawback of RAID0 is that it is not fault tolerant, meaning that if a single disk in the storage pool fails then all of your data in the storage pool is lost. As such RAID0 is not recommended except in special cases where the potential for data loss is a non-issue.
  • RAID1 is also called 'mirroring' because it achieves fault tolerance by writing the same data to two disk drives so that you always have two copies of the data. If one drive fails, the other has a complete copy and the storage pool continues to run. RAID1 and its variant RAID10 are ideal for databases and other applications which do a lot of small write I/O operations.
  • RAID5/RAIDZ1 achieves fault tolerance via what's called a parity calculation where one of the drives contains an XOR calculation of the bits on the other drives. For example, if you have 4 disk drives and you create a RAID5 storage pool, 3 of the disks will store data and the last disk will contain parity information. This parity information on the 4th drive can be used to recover from any data disk failure. In the event that the parity drive fails, it can be replaced and reconstructed using the data disks. RAID5 (and RAID6) are especially well suited for audio/video streaming, archival, and other applications which do heavy sequential write I/O operations (such as reading/writing large files) and are not as well suited for database applications which do heavy amounts of small random write I/O operations or for large filesystems containing lots of small files with a heavy write load.
  • RAID6/RAIDZ2 improves upon RAID5 in that it can handle two drive failures, but it requires that you have two disk drives dedicated to parity information. For example, if you have a RAID6 storage pool comprised of 5 disks then 3 disks will contain data and 2 disks will contain parity information. In this example, if the disks are all 1TB disks then you will have 3TB of usable disk space for the creation of volumes (see the capacity sketch after this list). So there's some sacrifice of usable storage space to gain the additional fault tolerance. If you have the disks, we always recommend using RAID6 over RAID5. This is because all hard drives eventually fail, and when one fails in a RAID5 storage pool your data is left vulnerable until a spare disk is utilized to recover your storage pool back to a fault-tolerant status. With RAID6 your storage pool is still fault tolerant after the first drive failure. (Note: Fault-tolerant storage pools (RAID1,5,6,10) that have suffered a single disk drive failure are called degraded because they're still operational but they require a spare disk to recover back to a fully fault-tolerant status.)
  • RAID10 (striped mirrors) is similar to RAID1 in that it utilizes mirroring, but RAID10 also does striping over the mirrors. This gives you the fault tolerance of RAID1 combined with the performance of RAID0 striping. The drawback is that half the disks are used for fault tolerance, so if you have 8x 1TB disks utilized to make a RAID10 storage pool, you will have 4TB of usable space for creation of volumes. RAID10 performs very well with both small random IO operations as well as sequential operations, and it is highly fault tolerant as multiple disks can fail as long as they're not from the same mirror-pairing. If you have the disks and you have a mission critical application we highly recommend that you choose the RAID10 layout for your storage pool.
  • RAID60 (striped RAIDZ2) combines the benefits of RAID6 with some of the benefits of RAID10. It is a good compromise when you need better IOPS performance than RAID6 will deliver and more usable storage than RAID10 delivers (50% of raw).
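
The usable-capacity arithmetic in the RAID6 and RAID10 examples above can be generalized with a small sketch like the following; the layouts and the 1TB disk size are just illustrative.

  # Generalizes the usable-capacity arithmetic from the RAID6/RAID10 examples above.
  # Disk counts and the 1TB disk size are illustrative.
  def usable_tb(layout, num_disks, disk_tb=1):
      if layout == "RAID0":
          return num_disks * disk_tb                 # striping, no redundancy
      if layout in ("RAID1", "RAID10"):
          return (num_disks // 2) * disk_tb          # half the disks hold mirror copies
      if layout in ("RAID5", "RAIDZ1"):
          return (num_disks - 1) * disk_tb           # one disk's worth of parity
      if layout in ("RAID6", "RAIDZ2"):
          return (num_disks - 2) * disk_tb           # two disks' worth of parity
      if layout in ("RAID7", "RAIDZ3"):
          return (num_disks - 3) * disk_tb           # three disks' worth of parity
      raise ValueError(f"unknown layout: {layout}")

  print(usable_tb("RAID6", 5))    # 3 TB usable, matching the 5-disk example above
  print(usable_tb("RAID10", 8))   # 4 TB usable, matching the 8-disk example above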

It can be useful to create more than one storage pool so that you have low cost fault-tolerant storage available in RAIDZ2/60 for backup and archive and higher IOPS storage in RAID10/mirrored layout for virtual machines, containers and databases.

Identifying Storage Pools

Navigation: Storage Management --> Controllers & Enclosures --> Identify (right-click)
or
Navigation: Storage Management --> Storage Pools --> Identify (toolbar)

Identify Hardware Controller Device Web 6.jpg

The group of disk devices which comprise a given storage pool can be easily identified within a rack by using the Storage Pool Identify dialog which will blink the LEDs in a pattern.

Deleting Storage Pools

Navigation: Storage Management --> Storage Pools --> Storage Pool --> Delete (toolbar)

Storage pool deletion is final, so be careful and double check to make sure the correct pool is selected. For secure deletion of storage pools please select one of the data scrub options such as the default 4-pass DoD (Department of Defense) procedure.

Delete a Storage Pool.jpg

Data Shredding Options

Choose a shred option.

QuantaStor uses the Linux scrub utility, which is compliant with various government standards, to securely shred data on devices. QuantaStor provides three standards-based data scrub modes (DoD, NNSA, and US ARMY) plus simpler fill-zeros and random-data options:

  • DoD mode (default)
    • 4-pass DoD 5220.22-M section 8-306 procedure (d) for sanitizing removable and non-removable rigid disks which requires overwriting all addressable locations with a character, its complement, a random character, then verify. **
  • NNSA mode
    • 4-pass NNSA Policy Letter NAP-14.1-C (XVI-8) for sanitizing removable and non-removable hard disks, which requires overwriting all locations with a pseudorandom pattern twice and then with a known pattern: random(x2), 0x00, verify. **
  • US ARMY mode
    • US Army AR380-19 method: 0x00, 0xff, random. **
  • Fill-zeros
    • Fills with zeros with a single-pass procedure
  • Random-data
    • Fills with random-data using a single-pass procedure

** Note: short descriptions of the QuantaStor supported scrub procedures are borrowed from the scrub utility manual pages which can be found here.

Importing Storage Pools

Storage pools can be physically moved from one system to another by moving all of the disks associated with a given pool from the old system to the new system. After the devices have been moved over, use the Scan for Disks... option in the web UI so that the devices are immediately discovered by the system. After that the storage pool may be imported using the Import Pool... dialog in the web UI or via QS CLI/REST API commands.

Importing Encrypted Storage Pools

The keys for the pool (qs-xxxx.key) must be available in the /etc/cryptconf/keys directory in order for the pool to be imported. Additionally, there is an XML file for each pool, located in the /etc/cryptconf/metadata directory and called qs-xxxx.metadata (where xxxx is the UUID of the pool), which contains information about which devices comprise the pool and their serial numbers. We highly recommend making a backup of the /etc/cryptconf folder so that one has a backup copy of the encryption keys for all encrypted storage pools on a given system. With these requirements met, the encrypted pool may be imported via the Web Management interface using the Import Storage Pools dialog, the same one used for non-encrypted storage pools.
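
Since the recommendation above is to keep a backup of /etc/cryptconf, a minimal sketch of such a backup might look like the following. The output path is only an example; copy the archive somewhere safe and off the system afterward, and protect it carefully since it contains the pool encryption keys.

  import datetime
  import tarfile

  # Minimal sketch of backing up the /etc/cryptconf keys and metadata as recommended
  # above. The output path is only an example; store the archive off-system and protected.
  stamp = datetime.date.today().isoformat()
  archive = f"/root/cryptconf-backup-{stamp}.tar.gz"

  with tarfile.open(archive, "w:gz") as tar:
      tar.add("/etc/cryptconf", arcname="cryptconf")

  print(f"wrote {archive}")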

Importing 3rd-party OpenZFS Storage Pools

Importing OpenZFS based storage pools from other servers and platforms (Illumos/FreeBSD/ZoL) is made easy with the Import Storage Pool dialog. QuantaStor uses globally unique identifiers (UUIDs) to identify storage pools (zpools) and storage volumes (zvols), so after the import is complete one may notice that the system has made some adjustments as part of the import process. Use the Scan button to scan the available disks to search for pools that are available to be imported. Select the pools to be imported and press OK to complete the process.

Import Storage Pools- Web.jpg

For more complex import scenarios there is a CLI level utility that can be used to do the pool import; it is documented here: Console Level OpenZFS Storage Pool Importing.
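
As background, the generic OpenZFS commands involved in a console-level import are `zpool import` (to list importable pools found on attached disks) and `zpool import <poolname>`. The Python sketch below only illustrates that general shape with a hypothetical pool name; the web UI dialog and the linked instructions remain the supported path.

  import subprocess

  # Illustration of the standard OpenZFS console-level import commands.
  # The web UI Import Storage Pool dialog is the supported path; pool name is hypothetical.

  # List pools that are available for import (reads labels from attached disks).
  subprocess.run(["zpool", "import"], check=False)

  # Import a specific pool by name once it shows up in the list above.
  pool_name = "DefaultPool"    # hypothetical
  subprocess.run(["zpool", "import", pool_name], check=True)

  # Verify the pool and its devices are healthy after the import.
  subprocess.run(["zpool", "status", pool_name], check=True)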

Hot-Spare Management Policies

Select Recovery from the Storage Pool toolbar or right-click on a Storage Pool in the tree view and click on Recover Storage Pool / Add Spares....

Add Hot Spare - Web.jpg

Hot Spares can also be selected from the Modify Storage Pool dialog by clicking on Modify in the Storage Pool toolbar.

Mdfy Stor Pool Hot Spare - Web.jpg

Choose a management policy for the Hot Spares.

Modern versions of QuantaStor include additional options for how Hot Spares are automatically chosen at the time a rebuild needs to occur to replace a faulted disk. Policies can be chosen on a per Storage Pool basis and include:

  • Auto-select best match from assigned or universal spares (default)
  • Auto-select best match from pool assigned spares only
  • Auto-select exact match from assigned or universal spares
  • Auto-select exact match from pool assigned spares only
  • Manual Hot-spare Management Only


If the policy is set to one that includes 'exact match', the Storage Pool will first attempt to replace the failed data drive with a disk that is of the same model and capacity before trying other options. The Manual Hot-spare Management Only mode will disable QuantaStor's hot-spare management system for the pool. This is useful if there are manual/custom reconfiguration steps being run by an administrator via a root SSH session.
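
As an illustration of the 'exact match' behavior described above, the following sketch shows how an exact model-and-capacity match could be preferred before falling back to the best available spare. This is a sketch of the policy logic only, not QuantaStor's implementation, and the disk models and sizes are hypothetical.

  # Sketch of the 'exact match first' spare-selection behavior described above.
  # Not QuantaStor's implementation; models and sizes are hypothetical.
  def pick_spare(failed, spares):
      exact = [s for s in spares
               if s["model"] == failed["model"] and s["size_tb"] == failed["size_tb"]]
      if exact:
          return exact[0]
      # Fall back to the best match: the smallest spare that is still large enough.
      big_enough = [s for s in spares if s["size_tb"] >= failed["size_tb"]]
      return min(big_enough, key=lambda s: s["size_tb"]) if big_enough else None

  failed = {"model": "HUS728T8TAL", "size_tb": 8}
  spares = [{"model": "ST10000NM0016", "size_tb": 10},
            {"model": "HUS728T8TAL", "size_tb": 8}]
  print(pick_spare(failed, spares))   # the exact model/capacity match is chosen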