[[Category:design_guide]]
QuantaStor is Linux-based, and kernel driver updates are released multiple times per year to ensure broad support for the latest hardware from the major vendors including HPE, Dell/EMC, Cisco, Intel, Western Digital, Supermicro and Lenovo.  The hardware listed below has been tested with QuantaStor to ensure system stability, performance and compatibility.  If you have questions about hardware compatibility for components not listed below, please contact support@osnexus.com for assistance.
  
== Servers for QuantaStor SDS Systems ==
  
{| class="wikitable"
! Vendor
! Model
! Boot Controller
! CPU
! Memory (32GB min)
|-
| Cisco
| [https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-s3260-storage-server/index.html UCS S3260 Storage Server], C220, C240 (M7/M6/M5/M4 Series)
| UCS RAID and HBAs or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| DellEMC
| [https://www.dell.com/en-us/work/shop/povw/poweredge-r750 PowerEdge R650/R750], [https://www.dell.com/en-us/work/shop/povw/poweredge-r740 PowerEdge R640/R740/R740xd], R730/R730xd/R720/R630/R620
| Dell BOSS or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.hpe-proliant-dl380-gen10-server.1010026818.html ProLiant DL360/DL380 Gen10], Gen9, Gen8, Gen7
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/apollo-systems/pip.hpe-apollo-4200-gen10-server.1011147097.html Apollo 4200], 4510 Gen9 and Gen10 Series
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Intel
| Intel Server Systems (M50FCP/M50CYP)
| Software RAID1 M.2 NVMe
| Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Lenovo
| [https://www.lenovo.com/us/en/data-center/servers/racks/c/racks ThinkSystem SR650/SR550/SR590 Series], x3650 M5/M4/M4 BD, x3250 M5
| Software RAID1 M.2 NVMe or Hardware Boot RAID1
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Seagate
| [https://www.seagate.com/products/storage/data-storage-systems/application-platforms/exos-ap-2u24/ Seagate 2U24 AP Storage Server], 5U84 AP, 2U12 AP (AMD / Bonneville)
| Internal M.2 NVMe
| AMD EPYC 16 core
| 128GB RAM (Gold), 256GB RAM (Platinum), 512GB RAM (Titanium)
|-
| Supermicro
| X13, X12, X11, X10 & X9 based Intel and H12 based AMD servers
| Software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|}
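As a rough aid for applying the memory guidance in the table above, the short sketch below turns usable capacity and workload type into a suggested RAM figure using the 0.3GB/TB (Archive) to 1GB/TB (OLTP/Virtualization) rule and the 32GB minimum; the function name and workload labels are illustrative and not part of any QuantaStor tooling.

<syntaxhighlight lang="python">
# Rough RAM sizing helper based on the guidance in the server table above:
# 32GB minimum, ~0.3GB RAM per usable TB for archive workloads and
# ~1GB RAM per usable TB for OLTP/virtualization workloads.
# The ratios and the 32GB floor come from this page; the rest is illustrative.

RAM_PER_USABLE_TB = {
    "archive": 0.3,          # GB of RAM per usable TB
    "virtualization": 1.0,   # GB of RAM per usable TB (also OLTP)
}

def suggested_ram_gb(usable_tb: float, workload: str = "virtualization") -> float:
    """Return a suggested RAM size in GB for a QuantaStor server."""
    per_tb = RAM_PER_USABLE_TB[workload]
    return max(32.0, usable_tb * per_tb)

if __name__ == "__main__":
    # Example: 400TB usable archive pool vs. 100TB usable virtualization pool.
    print(suggested_ram_gb(400, "archive"))          # 120.0
    print(suggested_ram_gb(100, "virtualization"))   # 100.0
</syntaxhighlight>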
  
== Disk Expansion Chassis / JBOD ==
  
* QuantaStor supports SAS/SATA & NVMe external Expansion Shelves / JBOD devices from Dell, HP, IBM/Lenovo and SuperMicro.
* All JBODs must have two or more SAS expansion ports per module to allow for SAS multipathing.  For chaining of expansion shelves three or more SAS ports are required per expander.
  
{| class="wikitable"
! Vendor
! JBOD Model
|-
| Cisco
| All Models
|-
| Dell
| [https://www.dell.com/en-us/work/shop/productdetailstxn/storage-md1420 All Models]
|-
| HPE
| All Models ([https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d3000-disk-enclosures.6923837.html D3000 Series], D2000 Series, [https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d6020-enclosure-with-dual-io-modules.1009000694.html D6020])
|-
| IBM/Lenovo
| All Models
|-
| Seagate Exos E
| [https://www.seagate.com/enterprise-storage/systems/exos/?utm_source=eol&utm_medium=redirect&utm_campaign=modular-enclosures All Models]
|-
| Seagate Corvault
| All Models
|-
| Seagate Exos X
| All Models
|-
| SuperMicro
| All Models (HA requires dual expander JBODs)
|-
| Western Digital
| [https://www.westerndigital.com/products/storage-platforms/ultrastar-data60-hybrid-platform?multilink=switch Ultrastar Data60] / Data102 / 4U60-G2 / 2U24 SSD Models
|}
  
== SAS HBA Controllers ==
  
To make a Storage Pool highly-available (ZFS based pools), its media must be connected to two or more servers via SAS (or FC) connectivity.  We recommend Broadcom HBAs and their OEM equivalents.  HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi based.  Make sure each HBA is flashed with the latest firmware to ensure compatibility with the latest storage media.  For OEM-branded HBAs always use the OEM's supplied firmware, even if a newer firmware is available from Broadcom.  SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM on disk devices; for Broadcom/LSI controllers this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.
{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| 9500 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9400 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9300 12Gb series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9200 6Gb series
| SAS HBA
| Yes
|-
| Cisco
| UCS 6Gb & 12Gb HBAs
| SAS HBA
| Yes
|-
| DELL/EMC
| SAS HBA 6Gb & 12Gb
| SAS HBA
| Yes
|-
| HPE
| SmartArray H241 12Gb
| SAS HBA
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx 12Gb
| SAS HBA
| Yes
|}
  
== Boot RAID Controllers ==
QuantaStor supports both software RAID1 and hardware RAID1 for boot using high quality datacenter grade media.  We recommend SSDs in the 240GB to 480GB size range for boot.  If SATADOM SSD devices are used, choose the 128GB devices and mirror them with software RAID1.  Do not use Intel VROC.
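Where software RAID1 is used for boot, the mirror's health can be checked with a short script; the sketch below parses the standard Linux /proc/mdstat file and is a generic convenience check, not an OSNEXUS utility.

<syntaxhighlight lang="python">
# Minimal sketch: verify that Linux software RAID (md) boot mirrors are healthy
# by parsing /proc/mdstat. Assumes md software RAID1 is used for boot, as
# recommended above; array names vary by install.
import re

def degraded_md_arrays(mdstat_path: str = "/proc/mdstat") -> list[str]:
    """Return the names of md arrays whose member status is not all 'U' (up)."""
    degraded = []
    with open(mdstat_path) as f:
        text = f.read()
    # Status lines look like: "123456 blocks super 1.2 [2/2] [UU]"
    for array, block in re.findall(r"^(md\d+)\s*:(.*?)(?=^md\d+\s*:|\Z)",
                                   text, flags=re.M | re.S):
        status = re.search(r"\[([U_]+)\]", block)
        if status and "_" in status.group(1):
            degraded.append(array)
    return degraded

if __name__ == "__main__":
    bad = degraded_md_arrays()
    print("All md mirrors healthy" if not bad else f"Degraded arrays: {bad}")
</syntaxhighlight>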
  
{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| MegaRAID (all models)
| SATA/SAS RAID
| Yes
|-
| Cisco
| UCS RAID
| SATA/SAS RAID
| Yes
|-
| DellEMC
| PERC H7xx, H8xx 6Gb & 12Gb models
| SATA/SAS RAID
| Yes
|-
| DellEMC
| BOSS
| SATA SSD RAID1
| Yes
|-
| HPE
| SmartArray P4xx/P8xx
| SATA/SAS RAID
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx
| SATA/SAS RAID
| Yes
|-
| Microsemi/Adaptec
| 5xxx/6xxx/7xxx/8xxx
| SATA/SAS RAID
| Yes
|}
  
== NVMe RAID Controllers ==
QuantaStor supports the use of NVMe RAID controllers for both scale-out and single-node configurations.
  
=== Single Node Use Cases ===
* High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5 (see the capacity sketch below).
* High performance NVMeoF/FC/iSCSI target passthru of logical drives from NVMe hardware RAID5.
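As a rough illustration of the first layout above, the sketch below estimates usable capacity when ZFS single parity (RAIDZ1) is layered over several NVMe hardware RAID5 logical drives; the drive counts and sizes in the example are hypothetical and metadata overhead is ignored.

<syntaxhighlight lang="python">
# Rough usable-capacity estimate for RAIDZ1 (single parity) layered over
# several NVMe hardware RAID5 logical drives, as in the single-node use case
# above. Drive counts/sizes are hypothetical; metadata overhead is ignored.

def usable_tb(raid5_luns: int, drives_per_lun: int, drive_tb: float) -> float:
    """RAID5 loses one drive per LUN; RAIDZ1 then loses one LUN of parity."""
    lun_tb = (drives_per_lun - 1) * drive_tb      # hardware RAID5 LUN capacity
    return (raid5_luns - 1) * lun_tb              # RAIDZ1 over the LUNs

if __name__ == "__main__":
    # Example: 4 LUNs, each a RAID5 of 6x 7.68TB NVMe drives.
    print(f"{usable_tb(4, 6, 7.68):.1f} TB usable")   # 115.2 TB usable
</syntaxhighlight>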
  
=== Server Virtualization ===
+
=== Scale-out Use Cases ===
* High performance storage for WAL/MDB
* High performance OSDs with local rebuild capability, variable OSD count for higher performance
  
{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1000/ SupremeRAID™ SR-1000]
| NVMe RAID
| Yes
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1010/ SupremeRAID™ SR-1010]
| NVMe RAID
| Yes
|-
| [https://pliops.com/ Pliops]
| [https://pliops.com/raidplus/ XDP-RAIDplus]
| NVMe RAID
| No
|}
  
== Storage Devices/Media ==
  
=== Scale-up Cluster Media (ZFS Storage Pools) ===
  
Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF).  Scale-up clusters using dual-ported NVMe media require the use of special Storage-Bridge-Bay servers from Supermicro.
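A simple way to sanity-check that shared media is presenting the expected dual paths is to group SCSI disk nodes by WWID and count how many /dev/sd* devices share each one; the sketch below reads standard Linux sysfs attributes and is a generic check rather than an OSNEXUS tool.

<syntaxhighlight lang="python">
# Minimal sketch: count paths per physical device by grouping /dev/sd* block
# devices on their SCSI WWID (sysfs attribute). Dual-ported, dual-pathed media
# should show 2 paths per WWID on each server in a scale-up cluster.
# This is a generic Linux check, not an OSNEXUS utility.
from collections import defaultdict
from pathlib import Path

def paths_per_wwid() -> dict[str, list[str]]:
    paths: dict[str, list[str]] = defaultdict(list)
    for dev in sorted(Path("/sys/block").glob("sd*")):
        wwid_file = dev / "device" / "wwid"
        if wwid_file.exists():
            wwid = wwid_file.read_text().strip()
            paths[wwid].append(dev.name)
    return paths

if __name__ == "__main__":
    for wwid, devs in paths_per_wwid().items():
        flag = "" if len(devs) >= 2 else "  <-- single path!"
        print(f"{wwid}: {devs}{flag}")
</syntaxhighlight>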
  
==== Data Media ====
  
Databases typically have IO patterns which resemble random IO of small writes and the database log devices are highly sequential.  IO patterns that look random put a heavy load on HDDs because each IO requires a head seek of the drive heads which kills performance and increases latency.  The solution for this is to use RAID10 and high speed drives.  The high speed drives (SSD, 10K/15K RPM HDD) reduce latency, boost performance, and the RAID10 layout does write transactions in parallel vs parity based RAID levels which serialize them.  If you're separating out your log from the data and index areas you could use a dedicated SSD for the log to boost performance. 
+
{| class="wikitable"
! Vendor
! Model
! Media Type
! Notes
|-
| Western Digital
| Ultrastar
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Seagate
| Exos, Nytro & Enterprise Performance
| Dual-port 12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12/24Gb SAS SSD
|
|-
| Micron
| Enterprise SSDs (S6xxDC)
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| Cisco, HPE, DellEMC, Lenovo
| OEM
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Western Digital
| NVMe SN200 SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}
  
==== Journal Media (ZIL & L2ARC) ====
A mirrored pair of SSDs is required for each storage pool to enable SSD write acceleration.  SSDs can be added to or removed from a given storage pool at any time with zero downtime.  Be sure to select a storage chassis / JBOD with adequate drive slots for both the data drives and the SSD journal devices.  Select 400GB models when purchasing write-intensive SSDs and 800GB models when purchasing mixed-use SSDs.
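When choosing between write-intensive and mixed-use journal SSDs, a quick endurance estimate can help; the sketch below converts an assumed sustained write rate into required drive-writes-per-day (DWPD) for a given SSD capacity, with each SSD in the mirrored pair absorbing the full write stream. The workload figures in the example are hypothetical.

<syntaxhighlight lang="python">
# Rough endurance check for ZIL/journal SSD selection. Every sync write to the
# pool passes through the journal, and each SSD in the mirrored pair sees the
# full stream, so required DWPD = daily writes / SSD capacity.
# The example workload figures are hypothetical.

def required_dwpd(sustained_write_mb_s: float, ssd_capacity_gb: float,
                  busy_hours_per_day: float = 8.0) -> float:
    daily_writes_gb = sustained_write_mb_s * 3600 * busy_hours_per_day / 1000
    return daily_writes_gb / ssd_capacity_gb

if __name__ == "__main__":
    # Example: 500MB/s of sync writes for 8 busy hours/day onto a 400GB SSD.
    print(f"{required_dwpd(500, 400):.1f} DWPD")   # 36.0 DWPD
</syntaxhighlight>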
  
{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| Western Digital
| Ultrastar SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Seagate
| Nytro SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12Gb SAS SSD
|
|-
| Western Digital
| SN200 SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}
  
=== Scale-out Cluster Media (Ceph Storage Pools) ===
We highly recommend NVMe devices over SATA/SAS SSDs due to the higher throughput per device, which reduces overall cost.
  
{| class="wikitable"
! Vendor
! Model
! Type
! Use case
|-
| Intel
| DC series (all models)
| NVMe
| all use cases
|-
| Micron
| 7xxx, 9xxx (all models)
| NVMe
| all use cases
|-
| Seagate
| Nytro Series (all models)
| NVMe / SAS
| all use cases
|-
| ScaleFlux
| CSD 2000/CSD 3000 (all models)
| NVMe
| all use cases
|-
| KIOXIA / Toshiba
| CM6/CM7 Series (all models)
| NVMe
| all use cases
|-
| Western Digital
| Ultrastar DC series (all models)
| NVMe
| all use cases
|}
  
== Fibre Channel HBAs ==
 
  
* QuantaStor SDS only supports QLogic Fibre Channel HBAs for use with the FC SCSI target feature.
{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| QLogic (including all OEM models)
| QLE2742
| 32Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE2692
| 16Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE267x
| 16Gb
|
|-
| QLogic (including all OEM models)
| QLE25xx
| 8Gb
|
|}
  
== Network Interface Cards ==
  
* QuantaStor supports almost all 1GbE and 10/100 network cards on the market.
* High speed 10GbE/25GbE/40GbE/50GbE/100GbE NICs that have been tested for compatibility, stability and performance with QuantaStor are listed below.
  
{| class="wikitable"
! Vendor
! Model
! Type
! Connector
|-
| Intel
| X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810
| 1/10/25/40GbE, 100GbE
| 10GBaseT, SFP28, QSFP, SFP+, QSFP28
|-
| Emulex
| OneConnect OCe14xxx Series
| 10/40GbE
| QSFP
|-
| Mellanox
| ConnectX-6 EN Series
| 25/50/100/200GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-5 EN Series
| 25/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-4 EN Series
| 10/25/40/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-3 EN/VPI Series
| 10/40/56GbE
| SFP+/QSFP+
|-
| Mellanox
| ConnectX-3 Pro EN/VPI Series
| 10GbE/40GbE/56GbE
| SFP+/QSFP+
|-
| Interface Masters
| NIAGARA 32714L (OEM Intel)
| Quad-port 10GbE
| SFP+
|-
| Broadcom
| BCM57508
| 100GbE
| QSFP28
|}
  
== Infiniband Adapters ==
  
QuantaStor has deprecated support for the SRP protocol in favor of the iSCSI Extensions for RDMA protocol (iSER). Mellanox Infiniband controllers may be used in IPoIB mode.
  
{| class="wikitable"
! Vendor
! Model
! Type
|-
| Mellanox
| ConnectX-3 VPI Series
| 40/56GbE (QDR)
|-
| Mellanox
| ConnectX-4 VPI Series
| 25/40/50/100GbE
|-
| Mellanox
| ConnectX-5 VPI Series
| 25/50/100GbE
|-
| Mellanox
| ConnectX-6 VPI Series
| 100/200GbE
|}
== iSCSI Initiators / Client-side ==
{| class="wikitable"
! Vendor
! Operating System
! iSCSI Initiator
|-
| Microsoft
| Windows (all versions, 7 and newer)
| Windows iSCSI Initiator
|-
| Microsoft
| Windows Server (all versions, 2003 and newer)
| Windows iSCSI Initiator
|-
| Apple
| macOS 13, macOS 12, macOS 11, macOS 10.15, OS X 10.x, Mac OS X 10.x
| ATTO Xtend SAN initiator (globalSAN is not supported)
|-
| Citrix
| XenServer 5.x (iSCSI, FC, NFS), XenServer 6.x (iSCSI, FC, NFS), XenServer 7.x (iSCSI, FC, NFS), Citrix Ready Certified
| iSCSI SR, NFS SR, StorageLink SR
|-
| VMware
| VMware ESXi 4.x, 5.x, 6.x (iSCSI, FC, NFS), 7.x (iSCSI, FC, NFS)
| VMware initiator
|-
| Linux
| RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros)
| open-iscsi
|}
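For the Linux open-iscsi initiator listed above, a minimal connection sketch is shown below; it wraps the standard iscsiadm discovery and login commands, and the portal address shown is a placeholder for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: connect a Linux client to a QuantaStor iSCSI target using
# the open-iscsi initiator listed above. Wraps standard iscsiadm commands;
# the portal IP is a placeholder and root privileges are assumed.
import subprocess

PORTAL = "10.0.0.50"   # hypothetical QuantaStor target portal IP

def discover_targets(portal: str) -> list[str]:
    """Return the IQNs advertised by the portal via SendTargets discovery."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True, capture_output=True, text=True).stdout
    # Discovery output lines look like: "<portal>:3260,1 <target IQN>"
    return [line.split()[-1] for line in out.splitlines() if line.strip()]

def login(portal: str, iqn: str) -> None:
    subprocess.run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal,
                    "--login"], check=True)

if __name__ == "__main__":
    for iqn in discover_targets(PORTAL):
        login(PORTAL, iqn)
</syntaxhighlight>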
