= QuantaStor Hardware Compatibility List (HCL) =
[[Category:design_guide]]
QuantaStor is Linux-based and has kernel driver updates released multiple times per year to ensure broad support for the latest hardware from the major vendors including HPE, Dell/EMC, Cisco, Intel, Western Digital, Supermicro and Lenovo. The hardware listed below has been tested with QuantaStor to ensure system stability, performance and compatibility. If you have questions about hardware compatibility for components not listed below, please contact support@osnexus.com for assistance.

== Servers for QuantaStor SDS Systems ==

{| class="wikitable"
! Vendor
! Model
! Boot Controller
! CPU
! Memory (32GB min)
|-
| Cisco
| [https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-s3260-storage-server/index.html UCS S3260 Storage Server], C220, C240 (M7/M6/M5/M4 Series)
| UCS RAID and HBAs or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| DellEMC
| [https://www.dell.com/en-us/work/shop/povw/poweredge-r750 PowerEdge R650/R750], [https://www.dell.com/en-us/work/shop/povw/poweredge-r740 PowerEdge R640/R740/R740xd], R730/R730xd/R720/R630/R620
| Dell BOSS or software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.hpe-proliant-dl380-gen10-server.1010026818.html ProLiant DL360/DL380 Gen10], Gen9, Gen8, Gen7
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| HPE
| [https://www.hpe.com/us/en/product-catalog/servers/apollo-systems/pip.hpe-apollo-4200-gen10-server.1011147097.html Apollo 4200], 4510 Gen9 and Gen10 Series
| HPE P4xx w/ FBWC
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Intel
| Intel Server Systems (M50FCP/M50CYP)
| Software RAID1 M.2 NVMe
| Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Lenovo
| [https://www.lenovo.com/us/en/data-center/servers/racks/c/racks ThinkSystem SR650/SR550/SR590 Series], x3650 M5/M4/M4 BD, x3250 M5
| Software RAID1 M.2 NVMe or Hardware Boot RAID1
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|-
| Seagate
| [https://www.seagate.com/products/storage/data-storage-systems/application-platforms/exos-ap-2u24/ Seagate 2U24 AP Storage Server], 5U84 AP, 2U12 AP (AMD / Bonneville)
| Internal M.2 NVMe
| AMD EPYC 16 core
| 128GB RAM (Gold), 256GB RAM (Platinum), 512GB RAM (Titanium)
|-
| Supermicro
| X13, X12, X11, X10 & X9 based Intel and H12 based AMD servers
| Software RAID1 M.2 NVMe
| AMD EPYC, Intel Scalable Processors Gen 1/2/3/4
| 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization)
|}
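
The memory guidance in the table above scales with usable capacity and workload type. As a purely illustrative sketch (not an OSNEXUS tool), the following Python snippet restates that guidance; the 32GB floor comes from the table header and the per-TB factors are the Archive and OLTP/Virtualization figures listed above.

<syntaxhighlight lang="python">
# Illustrative sizing helper only; it simply restates the memory guidance above.
def recommended_ram_gb(usable_tb: float, workload: str = "archive") -> float:
    """Return suggested system RAM in GB for a given usable capacity and workload."""
    gb_per_tb = {"archive": 0.3, "oltp": 1.0, "virtualization": 1.0}
    factor = gb_per_tb[workload.lower()]
    return max(32.0, usable_tb * factor)  # 32GB minimum per the table header

if __name__ == "__main__":
    print(recommended_ram_gb(500, "archive"))         # 150.0 GB
    print(recommended_ram_gb(100, "virtualization"))  # 100.0 GB
</syntaxhighlight>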

== Disk Expansion Chassis / JBOD ==

* QuantaStor supports SAS/SATA & NVMe external Expansion Shelves / JBOD devices from Cisco, Dell, HPE, IBM/Lenovo, Seagate, SuperMicro and Western Digital.
* All JBODs must have two or more SAS expansion ports per module to allow for SAS multipathing. For chaining of expansion shelves, three or more SAS ports are required per expander.
{| class="wikitable"
! Vendor
! JBOD Model
|-
| Cisco
| All Models
|-
| Dell
| [https://www.dell.com/en-us/work/shop/productdetailstxn/storage-md1420 All Models]
|-
| HPE
| All Models ([https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d3000-disk-enclosures.6923837.html D3000 Series], D2000 Series, [https://www.hpe.com/us/en/product-catalog/storage/disk-enclosures/pip.hpe-d6020-enclosure-with-dual-io-modules.1009000694.html D6020])
|-
| IBM/Lenovo
| All Models
|-
| Seagate Exos E
| [https://www.seagate.com/enterprise-storage/systems/exos/?utm_source=eol&utm_medium=redirect&utm_campaign=modular-enclosures All Models]
|-
| Seagate Corvault
| All Models
|-
| Seagate Exos X
| All Models
|-
| SuperMicro
| All Models (HA requires dual expander JBODs)
|-
| Western Digital
| [https://www.westerndigital.com/products/storage-platforms/ultrastar-data60-hybrid-platform?multilink=switch Ultrastar Data60] / Data102 / 4U60-G2 / 2U24 SSD Models
|}
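
Since HA configurations depend on each JBOD-attached drive being reachable over two SAS paths, it can be useful to confirm this from the host before creating pools. The sketch below is only an illustration (not a QuantaStor utility): it groups /dev/sd* devices by the SCSI WWID exposed in sysfs on a Linux host and flags anything that shows a single path.

<syntaxhighlight lang="python">
# Illustrative check: group SCSI disks by WWID; dual-path drives show two sd devices.
import glob
import os
from collections import defaultdict

def paths_per_wwid():
    """Map each SCSI WWID to the list of /dev/sd* devices that expose it."""
    paths = defaultdict(list)
    for dev in glob.glob("/sys/block/sd*"):
        wwid_file = os.path.join(dev, "device", "wwid")
        try:
            with open(wwid_file) as f:
                wwid = f.read().strip()
        except OSError:
            continue  # skip devices without a SCSI WWID
        paths[wwid].append(os.path.basename(dev))
    return paths

if __name__ == "__main__":
    for wwid, devs in sorted(paths_per_wwid().items()):
        status = "OK" if len(devs) >= 2 else "SINGLE PATH - check cabling"
        print(f"{wwid}: {devs} -> {status}")
</syntaxhighlight>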

== SAS HBA Controllers ==

To make Storage Pools (ZFS based pools) highly available, the media must be connected to two or more servers via SAS (or FC) connectivity. We recommend Broadcom HBAs and their OEM equivalents. HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi based. Make sure the HBA is flashed with the latest firmware to ensure compatibility with the latest storage media. For OEM-branded HBAs, always use the OEM's supplied firmware even if a newer firmware is available from Broadcom. SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM from disk devices. For Broadcom/LSI controllers, this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.

{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| 9500 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9400 12Gb tri-mode series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9300 12Gb series
| SAS HBA
| Yes
|-
| Broadcom/LSI/Avago
| 9200 6Gb series
| SAS HBA
| Yes
|-
| Cisco
| UCS 6Gb & 12Gb HBAs
| SAS HBA
| Yes
|-
| DELL/EMC
| SAS HBA 6Gb & 12Gb
| SAS HBA
| Yes
|-
| HPE
| SmartArray H241 12Gb
| SAS HBA
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx 12Gb
| SAS HBA
| Yes
|}

== Boot RAID Controllers ==

QuantaStor supports both software RAID1 and hardware RAID1 for boot using high-quality, datacenter-grade media. We recommend SSDs in the 240GB to 480GB size range for boot. If SATADOM SSD devices are used, choose the 128GB devices and mirror them with software RAID1. Do not use Intel VROC.
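
For reference, the following is a minimal sketch of what a software RAID1 boot mirror looks like when built by hand with mdadm on a generic Linux host; the NVMe device names are placeholders, and the QuantaStor installer normally handles boot mirroring itself.

<syntaxhighlight lang="python">
# Illustrative only: a two-device software RAID1 mirror via mdadm.
import subprocess

# Placeholder device names; substitute the actual boot SSDs on your system.
BOOT_DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]

def create_boot_mirror(md_device="/dev/md0"):
    """Create a two-device RAID1 mirror suitable for an OS/boot volume."""
    subprocess.run(
        ["mdadm", "--create", md_device, "--level=1", "--raid-devices=2",
         *BOOT_DEVICES],
        check=True,
    )

if __name__ == "__main__":
    create_boot_mirror()
</syntaxhighlight>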

{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| Broadcom/LSI/Avago
| MegaRAID (all models)
| SATA/SAS RAID
| Yes
|-
| Cisco
| UCS RAID
| SATA/SAS RAID
| Yes
|-
| DellEMC
| PERC H7xx, H8xx 6Gb & 12Gb models
| SATA/SAS RAID
| Yes
|-
| DellEMC
| BOSS
| SATA SSD RAID1
| Yes
|-
| HPE
| SmartArray P4xx/P8xx
| SATA/SAS RAID
| Yes
|-
| Lenovo/IBM
| ServeRAID M5xxx
| SATA/SAS RAID
| Yes
|-
| Microsemi/Adaptec
| 5xxx/6xxx/7xxx/8xxx
| SATA/SAS RAID
| Yes
|}

== NVMe RAID Controllers ==

QuantaStor supports the use of NVMe RAID controllers in both scale-out and single-node configurations.

=== Single Node Use Cases ===

* High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5 (see the sketch after this list).
* High performance NVMeoF/FC/iSCSI target passthrough of logical drives from NVMe hardware RAID5.
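
Purely as an illustration of the first use case above (not a QuantaStor procedure, since QuantaStor creates and manages its pools through its own web UI and CLI), the sketch below shows what a single-parity RAIDZ1 layout over three hardware RAID5 logical drives looks like at the ZFS level; the device names and pool name are placeholders.

<syntaxhighlight lang="python">
# Illustrative only: RAIDZ1 across logical drives exported by an NVMe RAID controller.
import subprocess

# Placeholder logical drive names presented by the NVMe hardware RAID5 controller.
RAID5_LOGICAL_DRIVES = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

def create_single_parity_pool(pool_name="tank"):
    """Create a single-parity (RAIDZ1) ZFS pool across the RAID5 logical drives."""
    subprocess.run(
        ["zpool", "create", pool_name, "raidz1", *RAID5_LOGICAL_DRIVES],
        check=True,
    )

if __name__ == "__main__":
    create_single_parity_pool()
</syntaxhighlight>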

=== Scale-out Use Cases ===

* High performance storage for WAL/MDB.
* High performance OSDs with local rebuild capability, variable OSD count for higher performance.
{| class="wikitable"
! Vendor
! Model
! Type
! QS HW Mgmt Module
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1000/ SupremeRAID™ SR-1000]
| NVMe RAID
| Yes
|-
| [https://www.graidtech.com/how-it-works/ Graid Technology Inc.]
| [https://www.graidtech.com/product/sr-1010/ SupremeRAID™ SR-1010]
| NVMe RAID
| Yes
|-
| [https://pliops.com/ Pliops]
| [https://pliops.com/raidplus/ XDP-RAIDplus]
| NVMe RAID
| No
|}

== Storage Devices/Media ==

=== Scale-up Cluster Media (ZFS Storage Pools) ===

Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF). Scale-up clusters using dual-ported NVMe media require the use of special Storage Bridge Bay (SBB) servers from Supermicro.
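
As an illustration of why dual-ported media matters here, the sketch below queries the reservation state of a shared device. It assumes the fencing is visible as SCSI-3 persistent reservations (the standard mechanism for shared SAS/FC media) and that the sg3_utils package is installed; the device path is a placeholder and this is not part of QuantaStor itself.

<syntaxhighlight lang="python">
# Illustrative only: read the persistent reservation on a shared, dual-ported device.
import subprocess

def read_reservation(device="/dev/sdb"):  # placeholder shared device
    """Report the current SCSI-3 persistent reservation on a dual-ported device."""
    result = subprocess.run(
        ["sg_persist", "--in", "--read-reservation", device],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(read_reservation())
</syntaxhighlight>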

==== Data Media ====

{| class="wikitable"
! Vendor
! Model
! Media Type
! Notes
|-
| Western Digital
| Ultrastar
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Seagate
| Exos, Nytro & Enterprise Performance
| Dual-port 12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12/24Gb SAS SSD
|
|-
| Micron
| Enterprise SSDs (S6xxDC)
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| Cisco, HPE, DellEMC, Lenovo
| OEM
| Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD
|
|-
| Western Digital
| NVMe SN200 SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}

==== Journal Media (ZIL & L2ARC) ====

A mirrored pair of SSDs is required for each storage pool to enable SSD write acceleration. SSDs can be added to or removed from a given storage pool at any time with zero downtime. Be sure to select a storage chassis / JBOD with adequate drive slots for both the data drives and the SSD journal devices. Select 400GB models when purchasing write-intensive SSDs and 800GB models when purchasing mixed-use SSDs.

{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| Western Digital
| Ultrastar SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Seagate
| Nytro SAS Write-Intensive & Mixed-use Models
| Dual-port 12Gb SAS SSD
|
|-
| Samsung
| Enterprise SSDs (PM1643, PM1643a)
| Dual-port 12Gb SAS SSD
|
|-
| Western Digital
| SN200 SN840
| Dual-port NVMe
| Supermicro SBB Server Req
|-
| KIOXIA
| PM6 Enterprise Capacity/Enterprise Performance
| Dual-port 12/24Gb SAS SSD
|
|-
| KIOXIA
| CM5-V, CM6, CM7
| Dual-port NVMe
| '''[ON HOLD - CONTACT SUPPORT]'''
|-
| Micron
| 7300 PRO
| Dual-port NVMe
| Supermicro SBB Server Req
|}

=== Scale-out Cluster Media (Ceph Storage Pools) ===

We highly recommend the use of NVMe devices over SATA/SAS SSDs due to the higher throughput per device, which reduces overall costs.

{| class="wikitable"
! Vendor
! Model
! Type
! Use case
|-
| Intel
| DC series (all models)
| NVMe
| all use cases
|-
| Micron
| 7xxx, 9xxx (all models)
| NVMe
| all use cases
|-
| Seagate
| Nytro Series (all models)
| NVMe / SAS
| all use cases
|-
| ScaleFlux
| CSD 2000/CSD 3000 (all models)
| NVMe
| all use cases
|-
| KIOXIA / Toshiba
| CM6/CM7 Series (all models)
| NVMe
| all use cases
|-
| Western Digital
| Ultrastar DC series (all models)
| NVMe
| all use cases
|}

== Fibre Channel HBAs ==

* QuantaStor SDS only supports QLogic Fibre Channel HBAs for use with the FC SCSI target feature.
{| class="wikitable"
! Vendor
! Model
! Type
! Notes
|-
| QLogic (including all OEM models)
| QLE2742
| 32Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE2692
| 16Gb
| Requires QuantaStor 6
|-
| QLogic (including all OEM models)
| QLE267x
| 16Gb
|
|-
| QLogic (including all OEM models)
| QLE25xx
| 8Gb
|
|}

== Network Interface Cards ==

* QuantaStor supports almost all 1GbE and 10/100 network cards on the market.
* High speed 10GbE/25GbE/40GbE/50GbE/100GbE NICs that have been tested for compatibility, stability and performance with QuantaStor are listed below.
{| class="wikitable"
! Vendor
! Model
! Type
! Connector
|-
| Intel
| X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810
| 1/10/25/40GbE, 100GbE
| 10GBaseT, SFP28, QSFP, SFP+, QSFP28
|-
| Emulex
| OneConnect OCe14xxx Series
| 10/40GbE
| QSFP
|-
| Mellanox
| ConnectX-6 EN Series
| 25/50/100/200GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-5 EN Series
| 25/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-4 EN Series
| 10/25/40/50/100GbE
| SFP28/QSFP28
|-
| Mellanox
| ConnectX-3 EN/VPI Series
| 10/40/56GbE
| SFP+/QSFP+
|-
| Mellanox
| ConnectX-3 Pro EN/VPI Series
| 10GbE/40GbE/56GbE
| SFP+/QSFP+
|-
| Interface Masters
| NIAGARA 32714L (OEM Intel)
| Quad-port 10GbE
| SFP+
|-
| Broadcom
| BCM57508
| 100GbE
| QSFP28
|}

== Infiniband Adapters ==

QuantaStor has deprecated support for the SRP protocol in favor of the iSCSI Extensions for RDMA protocol (iSER). Mellanox Infiniband controllers may be used in IPoIB mode.

{| class="wikitable"
! Vendor
! Model
! Type
|-
| Mellanox
| ConnectX-3 VPI Series
| 40/56GbE (QDR)
|-
| Mellanox
| ConnectX-4 VPI Series
| 25/40/50/100GbE
|-
| Mellanox
| ConnectX-5 VPI Series
| 25/50/100GbE
|-
| Mellanox
| ConnectX-6 VPI Series
| 100/200GbE
|}

== iSCSI Initiators / Client-side ==

{| class="wikitable"
! Vendor
! Operating System
! iSCSI Initiator
|-
| Microsoft
| Windows (all versions, 7 and newer)
| Windows iSCSI Initiator
|-
| Microsoft
| Windows Server (all versions, 2003 and newer)
| Windows iSCSI Initiator
|-
| Apple
| macOS 13, macOS 12, macOS 11, macOS 10.15, OS X 10.x, Mac OS X 10.x
| ATTO Xtend SAN initiator (globalSAN is not supported)
|-
| Citrix
| XenServer 5.x (iSCSI, FC, NFS), XenServer 6.x (iSCSI, FC, NFS), XenServer 7.x (iSCSI, FC, NFS), Citrix Ready Certified
| iSCSI SR, NFS SR, StorageLink SR
|-
| VMware
| VMware ESXi 4.x, 5.x, 6.x (iSCSI, FC, NFS), 7.x (iSCSI, FC, NFS)
| VMware initiator
|-
| Linux
| RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros)
| open-iscsi
|}
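
For the Linux open-iscsi initiator listed above, connecting a client host to a QuantaStor target follows the usual discovery-then-login flow. The sketch below wraps the two iscsiadm commands in Python for illustration; the portal IP and target IQN are placeholders.

<syntaxhighlight lang="python">
# Illustrative client-side example for the open-iscsi initiator.
import subprocess

PORTAL = "192.168.0.10"                                  # hypothetical portal IP
TARGET_IQN = "iqn.2009-10.com.osnexus:example-target"    # hypothetical target IQN

def discover_targets(portal):
    """List targets advertised by the portal (sendtargets discovery)."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def login(target, portal):
    """Log in to a discovered target so its LUNs appear as local block devices."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets(PORTAL))
    login(TARGET_IQN, PORTAL)
</syntaxhighlight>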