From OSNEXUS Online Documentation Site
=  QuantaStor Hardware Compatibility List (HCL) =
[[Category:design_guide]]

Latest revision as of 22:24, 12 March 2024

QuantaStor is Linux-based and has kernel driver updates released multiple times per year to ensure broad support for the latest hardware from the major vendors, including HPE, Dell/EMC, Cisco, Intel, Western Digital, Supermicro and Lenovo. The hardware listed below has been tested with QuantaStor to ensure system stability, performance and compatibility. If you have questions about compatibility for components not listed below, please contact support@osnexus.com for assistance.

== Servers for QuantaStor SDS Systems ==

{| class="wikitable"
! Vendor
! Model
! Boot Controller
! CPU
! Memory (32GB min)
|-
| Cisco
| [https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-s3260-storage-server/index.html UCS S3260 Storage Server], C220, C240 (M7/M6/M5/M4 Series)
| UCS RAID and HBAs or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| DellEMC
| [https://www.dell.com/en-us/work/shop/povw/poweredge-r750 PowerEdge R650/R750], [https://www.dell.com/en-us/work/shop/povw/poweredge-r740 PowerEdge R640/R740/R740xd], R730/R730xd/R720/R630/R620
| Dell BOSS or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.hpe-proliant-dl380-gen10-server.1010026818.html ProLiant DL360/DL380 Gen10], Gen9, Gen8, Gen7
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/apollo-systems/pip.hpe-apollo-4200-gen10-server.1011147097.html Apollo 4200], 4510 Gen9 and Gen10 Series
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Intel
| Intel Server Systems (M50FCP/M50CYP)
| Software RAID1 M.2 NVMe
| Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Lenovo
| [https://www.lenovo.com/us/en/data-center/servers/racks/c/racks ThinkSystem SR650/SR550/SR590 Series], x3650 M5/M4/M4 BD, x3250 M5
| Software RAID1 M.2 NVMe or Hardware Boot RAID1
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Seagate
| [https://www.seagate.com/products/storage/data-storage-systems/application-platforms/exos-ap-2u24/ Seagate 2U24 AP Storage Server], 5U84 AP, 2U12 AP (AMD / Bonneville)
| Internal M.2 NVMe
| AMD EPYC 16 core
| 128GB RAM (Gold), 256GB RAM (Platinum), 512GB RAM (Titanium)
|-
| Supermicro
| X13, X12, X11, X10 & X9 based Intel and H12 based AMD servers
| Software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|}
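As a quick sanity check, the memory sizing rule from the table above can be expressed as a short calculation. This is an illustrative sketch only (the function name and workload labels are ours); the 0.3GB/TB and 1GB/TB ratios and the 32GB minimum come from the table:

```python
def recommended_ram_gb(usable_tb: float, workload: str = "archive") -> float:
    """Estimate server RAM per the HCL sizing rule.

    Assumes 0.3GB RAM per usable TB for archive workloads and
    1GB RAM per usable TB for OLTP/virtualization, with a 32GB floor.
    """
    gb_per_tb = {"archive": 0.3, "oltp": 1.0, "virtualization": 1.0}
    ram = usable_tb * gb_per_tb[workload.lower()]
    return max(ram, 32.0)  # 32GB minimum from the table header

print(recommended_ram_gb(500, "archive"))  # 150.0 -- 500TB archive system
print(recommended_ram_gb(50, "oltp"))      # 50.0
print(recommended_ram_gb(20, "archive"))   # 32.0 -- floor applies
```

For small systems the 32GB floor dominates; the per-TB ratio only matters once usable capacity grows past roughly 100TB (archive) or 32TB (OLTP).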

== Disk Expansion Chassis / JBOD ==

* QuantaStor supports SAS/SATA & NVMe external Expansion Shelves / JBOD devices from Dell, HP, IBM/Lenovo and SuperMicro.
* All JBODs must have two or more SAS expansion ports per module to allow for SAS multipathing. For chaining of expansion shelves, three or more SAS ports are required per expander.

{| class="wikitable"
! Vendor
! JBOD Model
|-
| Cisco
| All Models
|-
| Dell
| [https://www.dell.com/en-us/work/shop/productdetailstxn/storage-md1420 All Models]
|-
| HPE
| All Models ([https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d3000-disk-enclosures.6923837.html D3000 Series], D2000 Series, [https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d6020-enclosure-with-dual-io-modules.1009000694.html D6020])
|-
| IBM/Lenovo
| All Models
|-
| Seagate Exos E
| [https://www.seagate.com/enterprise-storage/systems/exos/?utm_source=eol&utm_medium=redirect&utm_campaign=modular-enclosures All Models]
|-
| Seagate Corvault
| All Models
|-
| Seagate Exos X
| All Models
|-
| SuperMicro
| All Models (HA requires dual expander JBODs)
|-
| Western Digital
| [https://www.westerndigital.com/products/storage-platforms/ultrastar-data60-hybrid-platform?multilink=switch Ultrastar Data60] / Data102 / 4U60-G2 / 2U24 SSD Models
|}

== SAS HBA Controllers ==

To make Storage Pools highly available (ZFS-based pools), media must be connected to two or more servers via SAS (or FC) connectivity. We recommend Broadcom HBAs and their OEM equivalents. HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi-based. Make sure the HBA is flashed with the latest firmware to ensure compatibility with the latest storage media. For OEM-branded HBAs, always use the OEM's supplied firmware even if a newer firmware is available from Broadcom. SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM from disk devices. For Broadcom/LSI controllers, this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.

{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| 9500 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9400 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9300 12Gb series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9200 6Gb series
| SAS HBA
| Yes
|-
| Cisco
| UCS 6Gb & 12Gb HBAs
| SAS HBA
| Yes
|-
| DELL/EMC
| SAS HBA 6Gb & 12Gb
| SAS HBA
| Yes
|-
| HPE
| SmartArray H241 12Gb
| SAS HBA
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx 12Gb
| SAS HBA
| Yes
|}

== Boot RAID Controllers ==

QuantaStor supports both software RAID1 and hardware RAID1 boot devices using high-quality, datacenter-grade media. We recommend SSDs in the 240GB to 480GB size range for boot. If SATADOM SSD devices are used, choose the 128GB devices and mirror them with software RAID1. Do not use Intel VROC.

{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| MegaRAID (all models)
| SATA/SAS RAID
| Yes
|-
| Cisco
| UCS RAID
| SATA/SAS RAID
| Yes
|-
| DellEMC
| PERC H7xx, H8xx 6Gb & 12Gb models
| SATA/SAS RAID
| Yes
|-
| DellEMC
| BOSS
| SATA SSD RAID1
| Yes
|-
| HPE
| SmartArray P4xx/P8xx
| SATA/SAS RAID
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx
| SATA/SAS RAID
| Yes
|-
| Microsemi/Adaptec
| 5xxx/6xxx/7xxx/8xxx
| SATA/SAS RAID
| Yes
|}

== NVMe RAID Controllers ==

QuantaStor supports the use of NVMe RAID controllers for both scale-out and single-node configurations.

=== Single Node Use Cases ===

* High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5.
* High performance NVMeoF/FC/iSCSI target passthrough of logical drives from NVMe hardware RAID5.

=== Scale-out Use Cases ===

* High performance storage for WAL/MDB.
* High performance OSDs with local rebuild capability; variable OSD count for higher performance.

{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1000/ SupremeRAID™ SR-1000]
| NVMe RAID
| Yes
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1010/ SupremeRAID™ SR-1010]
| NVMe RAID
| Yes
|-
| [https://pliops.com/ Pliops]
| [https://pliops.com/raidplus/ XDP-RAIDplus]
| NVMe RAID
| No
|}

== Storage Devices/Media ==

=== Scale-up Cluster Media (ZFS Storage Pools) ===

Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF). Scale-up clusters using dual-ported NVMe media require the use of special Storage-Bridge-Bay servers from Supermicro.

==== Data Media ====

{| class="wikitable"
! Vendor
! Model
! Media Type
! Notes
|-
| Western Digital
| Ultrastar
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Seagate
| Exos, Nytro & Enterprise Performance
| Dual-port 12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12/24Gb SAS SSD
|
|-
| Micron
| Enterprise SSDs (S6xxDC)
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| Cisco, HPE, DellEMC, Lenovo
| OEM
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Western Digital
| SN200, SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}

==== Journal Media (ZIL & L2ARC) ====

A mirrored pair of SSDs is required for each storage pool to enable SSD write acceleration. SSDs can be added to or removed from a given storage pool at any time with zero downtime. Be sure to select a storage chassis / JBOD with adequate drive slots for both the data drives and the SSD journal devices. Select 400GB models when purchasing write-intensive SSDs and 800GB models when purchasing mixed-use SSDs.

{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| Western Digital
| Ultrastar SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Seagate
| Nytro SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12Gb SAS SSD
|
|-
| Western Digital
| SN200, SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}
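The journal purchasing guidance above reduces to a simple rule of thumb, sketched here for illustration (the helper name is ours; the mirrored-pair requirement and the 400GB/800GB capacity recommendations come from the paragraph above):

```python
def journal_ssds_needed(num_pools: int, endurance_class: str) -> tuple:
    """Return (device_count, capacity_gb) for pool journal SSDs.

    Each storage pool needs a mirrored pair (2 SSDs). The HCL recommends
    400GB write-intensive models or 800GB mixed-use models.
    """
    capacity_gb = {"write-intensive": 400, "mixed-use": 800}[endurance_class]
    return 2 * num_pools, capacity_gb

# Three pools using mixed-use SSDs -> six 800GB devices
print(journal_ssds_needed(3, "mixed-use"))  # (6, 800)
```

When choosing a JBOD, add this device count to the data-drive count to confirm the chassis has enough drive slots.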

=== Scale-out Cluster Media (Ceph Storage Pools) ===

We highly recommend NVMe devices over SATA/SAS SSDs due to their higher throughput per device, which reduces overall costs.

{| class="wikitable"
! Vendor
! Model
! Type
! Use case
|-
| Intel
| DC series (all models)
| NVMe
| All use cases
|-
| Micron
| 7xxx, 9xxx (all models)
| NVMe
| All use cases
|-
| Seagate
| Nytro Series (all models)
| NVMe / SAS
| All use cases
|-
| ScaleFlux
| CSD 2000/CSD 3000 (all models)
| NVMe
| All use cases
|-
| KIOXIA / Toshiba
| CM6/CM7 Series (all models)
| NVMe
| All use cases
|-
| Western Digital
| Ultrastar DC series (all models)
| NVMe
| All use cases
|}
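The throughput-per-device argument is easy to see with rough numbers. The per-device figures below are hypothetical, for illustration only; only the recommendation of NVMe over SATA/SAS SSDs comes from this document:

```python
import math

def devices_for_throughput(target_mb_s: float, per_device_mb_s: float) -> int:
    """Devices needed to sustain a target aggregate throughput."""
    return math.ceil(target_mb_s / per_device_mb_s)

# Hypothetical: sustaining 12,000 MB/s of aggregate journal/WAL traffic
print(devices_for_throughput(12000, 500))   # 24 devices at ~500 MB/s (SATA/SAS SSD class)
print(devices_for_throughput(12000, 3000))  # 4 devices at ~3000 MB/s (NVMe class)
```

Fewer devices means fewer drive slots, HBA ports and enclosures, which is where the cost savings come from.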

== Fibre Channel HBAs ==

* QuantaStor SDS only supports QLogic Fibre Channel HBAs for use with the FC SCSI target feature.

{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| QLogic (including all OEM models)
| QLE2742
| 32Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE2692
| 16Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE267x
| 16Gb
|
|-
| QLogic (including all OEM models)
| QLE25xx
| 8Gb
|
|}

== Network Interface Cards ==

* QuantaStor supports almost all 1GbE and 10/100 network cards on the market.
* High speed 10GbE/25GbE/40GbE/50GbE/100GbE NICs that have been tested for compatibility, stability and performance with QuantaStor are listed below.

{| class="wikitable"
! Vendor
! Model
! Type
! Connector
|-
| Intel
| X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810
| 1/10/25/40GbE, 100GbE
| 10GBaseT, SFP28, QSFP, SFP+, QSFP28
|-
| Emulex
| OneConnect OCe14xxx Series
| 10/40GbE
| QSFP
|-
| Mellanox
| ConnectX-6 EN Series
| 25/50/100/200GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-5 EN Series
| 25/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-4 EN Series
| 10/25/40/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-3 EN/VPI Series
| 10/40/56GbE
| SFP+/QSFP+
|-
| Mellanox
| ConnectX-3 Pro EN/VPI Series
| 10GbE/40GbE/56GbE
| SFP+/QSFP+
|-
| Interface Masters
| NIAGARA 32714L (OEM Intel)
| Quad-port 10GbE
| SFP+
|-
| Broadcom
| BCM57508
| 100GbE
| QSFP28
|}

== Infiniband Adapters ==

QuantaStor has deprecated support for the SRP protocol in favor of the iSCSI Extensions for RDMA protocol (iSER). Mellanox Infiniband controllers may be used in IPoIB mode.

{| class="wikitable"
! Vendor
! Model
! Type
|-
| Mellanox
| ConnectX-3 VPI Series
| 40/56GbE (QDR)
|-
| Mellanox
| ConnectX-4 VPI Series
| 25/40/50/100GbE
|-
| Mellanox
| ConnectX-5 VPI Series
| 25/50/100GbE
|-
| Mellanox
| ConnectX-6 VPI Series
| 100/200GbE
|}

== iSCSI Initiators / Client-side ==

{| class="wikitable"
! Vendor
! Operating System
! iSCSI Initiator
|-
| Microsoft
| Windows (all versions, 7 and newer)
| Windows iSCSI Initiator
|-
| Microsoft
| Windows Server (all versions, 2003 and newer)
| Windows iSCSI Initiator
|-
| Apple
| macOS 13, macOS 12, macOS 11, macOS 10.15, OS X 10.x, Mac OS X 10.x
| ATTO Xtend SAN initiator (globalSAN is not supported)
|-
| Citrix
| XenServer 5.x, 6.x, 7.x (iSCSI, FC, NFS), Citrix Ready Certified
| iSCSI SR, NFS SR, StorageLink SR
|-
| VMware
| VMware ESXi 4.x, 5.x, 6.x, 7.x (iSCSI, FC, NFS)
| VMware initiator
|-
| Linux
| RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros)
| open-iscsi
|}