Hardware Compatibility List (HCL)
Latest revision as of 22:24, 12 March 2024
QuantaStor is Linux-based, with kernel driver updates released multiple times per year to ensure broad support for the latest hardware from the major vendors, including HPE, Dell/EMC, Cisco, Intel, Western Digital, Supermicro, and Lenovo. The hardware listed below has been tested with QuantaStor to ensure system stability, performance, and compatibility. If you have questions about hardware not listed below, please contact support@osnexus.com for assistance.
Servers for QuantaStor SDS Systems
Vendor | Model | Boot Controller | CPU | Memory (32GB min) |
---|---|---|---|---|
Cisco | UCS S3260 Storage Server, C220, C240 (M7/M6/M5/M4 Series) | UCS RAID and HBAs or software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
DellEMC | PowerEdge R650/R750, PowerEdge R640/R740/R740xd, R730/R730xd/R720/R630/R620 | Dell BOSS or software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
HPE | ProLiant DL360/DL380 Gen10, Gen9, Gen8, Gen7 | HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
HPE | Apollo 4200, 4510 Gen9 and Gen10 Series | HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
Intel | Intel Server Systems (M50FCP/M50CYP) | Software RAID1 M.2 NVMe | Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
Lenovo | ThinkSystem SR650/SR550/SR590 Series, x3650 M5/M4/M4 BD, x3250 M5 | Software RAID1 M.2 NVMe or Hardware Boot RAID1 | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
Seagate | Seagate 2U24 AP Storage Server, 5U84 AP, 2U12 AP (AMD / Bonneville) | Internal M.2 NVMe | AMD EPYC 16 core | 128GB RAM (Gold) 256GB RAM (Platinum), 512GB RAM (Titanium) |
Supermicro | X13, X12, X11, X10 & X9 based Intel and H12 based AMD servers | Software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
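The memory guideline in the table above (0.3GB RAM per usable TB for archive workloads up to 1GB RAM per usable TB for OLTP/virtualization, against the 32GB minimum) can be sketched as a quick shell calculation. The function name and variables are illustrative, not part of any QuantaStor tooling:

```shell
# Sketch of the RAM sizing guideline from the table above.
# Usage: ram_gb_for_pool <usable_tb> <gb_per_tb>
#   gb_per_tb: 0.3 for archive workloads, 1.0 for OLTP/virtualization
ram_gb_for_pool() {
  usable_tb=$1
  per_tb=$2
  # Apply the per-TB ratio, then enforce the 32GB stated minimum.
  awk -v tb="$usable_tb" -v r="$per_tb" 'BEGIN {
    g = tb * r
    if (g < 32) g = 32
    printf "%.0f\n", g
  }'
}

ram_gb_for_pool 100 0.3   # archive, 100TB usable -> 32 (floor applies)
ram_gb_for_pool 100 1.0   # OLTP, 100TB usable    -> 100
```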
Disk Expansion Chassis / JBOD
- QuantaStor supports SAS/SATA & NVMe external expansion shelves / JBOD devices from the vendors listed below.
- All JBODs must have two or more SAS expansion ports per module to allow for SAS multipathing. Chaining expansion shelves requires three or more SAS ports per expander.
Vendor | JBOD Model |
---|---|
Cisco | All Models |
Dell | All Models |
HPE | All Models (D3000 Series, D2000 Series, D6020) |
IBM/Lenovo | All Models |
Seagate Exos E | All Models |
Seagate Corvault | All Models |
Seagate Exos X | All Models |
SuperMicro | All Models (HA requires dual expander JBODs) |
Western Digital | Ultrastar Data60 / Data102 / 4U60-G2 / 2U24 SSD Models |
SAS HBA Controllers
To make Storage Pools highly available (ZFS-based pools), media must be connected to two or more servers via SAS (or FC) connectivity. We recommend Broadcom HBAs and their OEM equivalents. HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi-based. Make sure the HBA is flashed with the latest firmware to ensure compatibility with the latest storage media. For OEM-branded HBAs, always use the OEM's supplied firmware even if newer firmware is available from Broadcom. SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM on disk devices. For Broadcom/LSI controllers, this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.
Vendor | Model | Type | QS HW Mgmt Module |
---|---|---|---|
Broadcom/LSI/Avago | 9500 12Gb tri-mode series | SAS HBA | Yes |
Broadcom/LSI/Avago | 9400 12Gb tri-mode series | SAS HBA | Yes |
Broadcom/LSI/Avago | 9300 12Gb series | SAS HBA | Yes |
Broadcom/LSI/Avago | 9200 6Gb series | SAS HBA | Yes |
Cisco | UCS 6Gb & 12Gb HBAs | SAS HBA | Yes |
DELL/EMC | SAS HBA 6Gb & 12Gb | SAS HBA | Yes |
HPE | SmartArray H241 12Gb | SAS HBA | Yes |
Lenovo/IBM | ServeRAID M5xxx 12Gb | SAS HBA | Yes |
Boot RAID Controllers
QuantaStor supports both software RAID1 and hardware RAID1 for boot using high-quality datacenter-grade media. We recommend SSDs in the 240GB to 480GB range for boot. If SATADOM SSD devices are used, choose the 128GB devices and mirror them with software RAID1. Do not use Intel VROC.
Vendor | Model | Type | QS HW Mgmt Module |
---|---|---|---|
Broadcom/LSI/Avago | MegaRAID (all models) | SATA/SAS RAID | Yes |
Cisco | UCS RAID | SATA/SAS RAID | Yes |
DellEMC | PERC H7xx, H8xx 6Gb & 12Gb models | SATA/SAS RAID | Yes |
DellEMC | BOSS | SATA SSD RAID1 | Yes |
HPE | SmartArray P4xx/P8xx | SATA/SAS RAID | Yes |
Lenovo/IBM | ServeRAID M5xxx | SATA/SAS RAID | Yes |
Microsemi/Adaptec | 5xxx/6xxx/7xxx/8xxx | SATA/SAS RAID | Yes |
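The software-RAID1 boot option described above is typically built with mdadm on Linux. The device names below are hypothetical and the real command requires root and destroys existing data, so this sketch prints the command rather than executing it:

```shell
# Sketch of a software RAID1 boot mirror per the guidance above.
# Device names are placeholders; verify against your own system first.
BOOT_DEV1=/dev/nvme0n1
BOOT_DEV2=/dev/nvme1n1

# Printed rather than executed so the sketch is safe to run as-is;
# on real hardware, run the mdadm command itself as root.
echo mdadm --create /dev/md0 --level=1 --raid-devices=2 "$BOOT_DEV1" "$BOOT_DEV2"
```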
NVMe RAID Controllers
QuantaStor supports the use of NVMe RAID controllers for scale-out configurations and single node configurations.
Single Node Use Cases
- High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5.
- High performance NVMeoF/FC/iSCSI target passthru of logical drives from NVMe hardware RAID5
Scale-out Use Cases
- High performance storage for WAL/MDB
- High performance OSDs with local rebuild capability, variable OSD count for higher performance
Vendor | Model | Type | QS HW Mgmt Module |
---|---|---|---|
Graid Technology Inc. | SupremeRAID™ SR-1000 | NVMe RAID | Yes |
Graid Technology Inc. | SupremeRAID™ SR-1010 | NVMe RAID | Yes |
Pliops | XDP-RAIDplus | NVMe RAID | No |
Storage Devices/Media
Scale-up Cluster Media (ZFS Storage Pools)
Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF). Scale-up clusters using dual-ported NVMe media require the use of special Storage-Bridge-Bay servers from Supermicro.
Data Media
Vendor | Model | Media Type | Notes |
---|---|---|---|
Western Digital | Ultrastar | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
Seagate | Exos, Nytro & Enterprise Performance | Dual-port 12/24Gb SAS SSD & NL-SAS HDD | |
Samsung | Enterprise SSDs (PM1643, PM1643a) | Dual-port 12/24Gb SAS SSD | |
Micron | Enterprise SSDs (S6xxDC) | Dual-port 12/24Gb SAS SSD | |
KIOXIA | PM6 Enterprise Capacity/Enterprise Performance | Dual-port 12/24Gb SAS SSD | |
Cisco, HPE, DellEMC, Lenovo | OEM | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
Western Digital | SN200, SN840 | Dual-port NVMe | Supermicro SBB Server Req |
KIOXIA | CM5-V, CM6, CM7 | Dual-port NVMe | [ON HOLD - CONTACT SUPPORT] |
Micron | 7300 PRO | Dual-port NVMe | Supermicro SBB Server Req |
Journal Media (ZIL & L2ARC)
A mirrored pair of SSDs is required for each storage pool to enable SSD write acceleration. SSDs can be added to or removed from a given storage pool at any time with zero downtime. Be sure to select a storage chassis / JBOD with adequate drive slots for both the data drives and the SSD journal devices. Select 400GB models when purchasing write-intensive SSDs and 800GB models when purchasing mixed-use SSDs.
Vendor | Model | Type | Notes |
---|---|---|---|
Western Digital | Ultrastar SAS Write-Intensive & Mixed-use Models | Dual-port 12Gb SAS SSD | |
Seagate | Nytro SAS Write-Intensive & Mixed-use Models | Dual-port 12Gb SAS SSD | |
Samsung | Enterprise SSDs (PM1643, PM1643a) | Dual-port 12Gb SAS SSD | |
Western Digital | SN200, SN840 | Dual-port NVMe | Supermicro SBB Server Req |
KIOXIA | PM6 Enterprise Capacity/Enterprise Performance | Dual-port 12/24Gb SAS SSD | |
KIOXIA | CM5-V, CM6, CM7 | Dual-port NVMe | [ON HOLD - CONTACT SUPPORT] |
Micron | 7300 PRO | Dual-port NVMe | Supermicro SBB Server Req |
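Attaching journal devices to a ZFS pool, as described above, uses the standard zpool commands. The pool and device names are placeholders and the real commands require root, so this sketch prints them instead of running them:

```shell
# Sketch of adding a mirrored SSD write log (ZIL) and an L2ARC read cache
# to a ZFS pool, matching the journal guidance above.
# Pool and device names are hypothetical; substitute your own.
POOL=tank
LOG1=/dev/sdx
LOG2=/dev/sdy
CACHE=/dev/sdz

# Printed rather than executed; run the zpool commands as root on real hardware.
echo zpool add "$POOL" log mirror "$LOG1" "$LOG2"   # mirrored write log (ZIL)
echo zpool add "$POOL" cache "$CACHE"               # L2ARC read cache
```

Log and cache devices can also be removed from a live pool, which is what allows the zero-downtime changes mentioned above.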
Scale-out Cluster Media (Ceph Storage Pools)
We highly recommend NVMe devices over SATA/SAS SSDs due to their higher throughput per device, which reduces overall costs.
Vendor | Model | Type | Use case |
---|---|---|---|
Intel | DC series (all models) | NVMe | all use cases |
Micron | 7xxx, 9xxx (all models) | NVMe | all use cases |
Seagate | Nytro Series (all models) | NVMe / SAS | all use cases |
ScaleFlux | CSD 2000/CSD 3000 (all models) | NVMe | all use cases |
KIOXIA / Toshiba | CM6/CM7 Series (all models) | NVMe | all use cases |
Western Digital | Ultrastar DC series (all models) | NVMe | all use cases |
Fibre Channel HBAs
- QuantaStor SDS only supports Qlogic Fibre Channel HBAs for use with the FC SCSI target feature.
Vendor | Model | Type | Notes |
---|---|---|---|
QLogic (including all OEM models) | QLE2742 | 32Gb | Requires QuantaStor 6 |
QLogic (including all OEM models) | QLE2692 | 16Gb | Requires QuantaStor 6 |
QLogic (including all OEM models) | QLE267x | 16Gb | |
QLogic (including all OEM models) | QLE25xx | 8Gb |
Network Interface Cards
- QuantaStor supports almost all 1GbE and 10/100 network cards on the market
- High-speed 10GbE/25GbE/40GbE/50GbE/100GbE NICs that have been tested for compatibility, stability, and performance with QuantaStor are listed below.
Vendor | Model | Type | Connector |
---|---|---|---|
Intel | X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810 | 1/10/25/40GbE, 100GbE | 10GBaseT, SFP28, QSFP, SFP+, QSFP28 |
Emulex | OneConnect OCe14xxx Series | 10/40GbE | QSFP |
Mellanox | ConnectX-6 EN Series | 25/50/100/200GbE | SFP28/QSFP28 |
Mellanox | ConnectX-5 EN Series | 25/50/100GbE | SFP28/QSFP28 |
Mellanox | ConnectX-4 EN Series | 10/25/40/50/100GbE | SFP28/QSFP28 |
Mellanox | ConnectX-3 EN/VPI Series | 10/40/56GbE | SFP+/QSFP+ |
Mellanox | ConnectX-3 Pro EN/VPI Series | 10GbE/40GbE/56GbE | SFP+/QSFP+ |
Interface Masters | NIAGARA 32714L (OEM Intel) | Quad-port 10GbE | SFP+ |
Broadcom | BCM57508 | 100GbE | QSFP28 |
InfiniBand Adapters
QuantaStor has deprecated support for the SRP protocol in favor of the iSCSI Extensions for RDMA (iSER) protocol. Mellanox InfiniBand controllers may be used in IPoIB mode.
Vendor | Model | Type |
---|---|---|
Mellanox | ConnectX-3 VPI Series | 40/56GbE (QDR) |
Mellanox | ConnectX-4 VPI Series | 25/40/50/100GbE |
Mellanox | ConnectX-5 VPI Series | 25/50/100GbE |
Mellanox | ConnectX-6 VPI Series | 100/200GbE |
iSCSI Initiators / Client-side
Vendor | Operating System | iSCSI Initiator |
---|---|---|
Microsoft | Windows (all versions, 7 and newer) | Windows iSCSI Initiator |
Microsoft | Windows Server (all versions, 2003 and newer) | Windows iSCSI Initiator |
Apple | macOS 13, macOS 12, macOS 11, macOS 10.15, OS X 10.x, Mac OS X 10.x | ATTO Xtend SAN initiator (globalSAN is not supported) |
Citrix | XenServer 5.x (iSCSI, FC, NFS), XenServer 6.x (iSCSI, FC, NFS), XenServer 7.x (iSCSI, FC, NFS), Citrix Ready Certified | iSCSI SR, NFS SR, StorageLink SR |
VMware | VMware ESXi 4.x, 5.x, 6.x (iSCSI, FC, NFS), 7.x (iSCSI, FC, NFS) | VMware initiator |
Linux | RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros) | open-iscsi |
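Connecting a Linux client with open-iscsi (the initiator listed above) follows the usual discovery-then-login flow. The portal IP is a placeholder, and discovery/login require root and a reachable target, so the commands are printed rather than executed in this sketch:

```shell
# Sketch of the open-iscsi client-side flow referenced in the table above.
# The portal address is a placeholder; substitute your QuantaStor system's IP.
PORTAL=192.168.0.100

# Printed rather than executed; run the iscsiadm commands as root on a real client.
echo iscsiadm -m discovery -t sendtargets -p "$PORTAL"   # discover exported targets
echo iscsiadm -m node -p "$PORTAL" --login               # log in to discovered targets
```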