Hardware Compatibility List (HCL)
QuantaStor is Linux-based and has kernel driver updates released multiple times per year to ensure broad support for the latest hardware from the major vendors including HPE, Dell/EMC, Cisco, Intel, Seagate, Supermicro, Gigabyte and Lenovo. The hardware listed below has been tested for specific compatibility with QuantaStor to ensure system stability, performance and compatibility. If you have questions about specific hardware compatibility for components not listed below, please contact support@osnexus.com for assistance.
Servers for QuantaStor SDS Systems
QuantaStor supports the broad spectrum of AMD and Intel x86 64-bit servers on the market from major vendors. Special configuration files are used to integrate new servers and storage enclosures, including settings for monitoring media, thermals, power supply and fan health. Although QuantaStor ships with configuration settings for most major server models, in some cases we may need to update the configuration files at the time of deployment if your selected server model is not one we have default settings for.
| Vendor | Model | Boot Controller | CPU | Memory (32GB min) |
|---|---|---|---|---|
| Cisco | C220, C240 (M8/M7/M6/M5/M4 Series) | Software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| DellEMC | Gen 7 (e.g. R770/R77x5/R670/R67x5), Gen 6 (e.g. R760/R76x5/R660/R66x5), Gen 5 (e.g. R750), Gen 4 (e.g. R740), Gen 3 (e.g. R730) | Software RAID1 M.2 NVMe or Dell BOSS | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Gigabyte | Intel and AMD servers | Software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| HPE | HP DLxxx Servers (Gen11, Gen10, Gen9, Gen8, Gen7) | Software RAID1 M.2 NVMe or HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| HPE | HP Apollo Servers (Gen9, Gen10 and Gen11) | Software RAID1 M.2 NVMe or HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| MiTAC | Intel Server Systems (M50FCP/M50CYP) | Software RAID1 M.2 NVMe | Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Lenovo | SRxxx V3 & V4 Rack Servers | Software RAID1 M.2 NVMe or Hardware Boot RAID1 | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Seagate | Seagate 2U24 AP Storage Server, 5U84 AP, 2U12 AP (AMD / Bonneville) | Internal M.2 NVMe | AMD EPYC 16 core | 128GB RAM (Gold), 256GB RAM (Platinum), 512GB RAM (Titanium) |
| Supermicro | X14/H14, X13/H13, X12/H12, X11, X10 & X9 based Intel and AMD servers | Software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
Disk Expansion Chassis / JBOD
- QuantaStor Scale-Up
  - supports the use of SAS/NL-SAS media JBODs from most vendors
  - supports external NVMe-oF JBOFs from Western Digital (Data24 42xx series)
  - all JBODs must be dual-expander / dual-controller / dual-IO module (naming conventions vary by vendor)
- QuantaStor Scale-Out
  - supports the use of SAS/NL-SAS and SATA JBODs from most vendors
  - supports external NVMe-oF JBOFs from Western Digital (Data24 41xx series)
  - supports both single- and dual-expander JBODs, but we recommend dual-IO module / dual-expander models here as well (see the verification example after this list)
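To confirm that a JBOD is presenting its media through both expanders / IO modules, each dual-ported disk should be visible on two paths from the QuantaStor server. A minimal check, assuming the standard lsscsi and multipath tools are available on the system:

```
# Each dual-ported disk should appear twice in the SCSI device listing (one entry per expander/IO module)
lsscsi

# With multipathing active, each LUN should report two paths
multipath -ll
```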
Supported Vendors
| Vendor | JBOD Model |
|---|---|
| Cisco | All Models |
| Dell | All Models |
| HPE | All Models (D3000 Series, D2000 Series, D6020) |
| IBM/Lenovo | All Models |
| Seagate Exos E | All Models |
| Seagate Corvault | All Models |
| Seagate Exos X | All Models |
| SuperMicro | All Models (HA requires dual expander JBODs) |
| Western Digital | All Models (including Data60 / Data102 / Data24) |
SAS HBA Controllers
To make Storage Pools (ZFS-based pools) highly-available, media must be connected to two or more servers via SAS (or FC) connectivity. We recommend Broadcom HBAs and their OEM equivalents. HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi based. Make sure the HBA is flashed with the latest firmware to ensure compatibility with the latest storage media. For OEM-branded HBAs, always use the OEM's supplied firmware even if a newer firmware is available from Broadcom. SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM from disk devices. For Broadcom/LSI controllers, this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.
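As a quick way to confirm the firmware level on a Broadcom/LSI SAS3 HBA, the vendor flash utility can list the detected adapters and their firmware/BIOS versions. A minimal sketch, assuming the Broadcom sas3flash utility is installed (OEM-branded HBAs may ship their own equivalent tool):

```
# List all detected SAS3 HBAs with their firmware and BIOS versions
sas3flash -listall

# Show details for the first adapter (controller index 0)
sas3flash -c 0 -list
```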
| Vendor | Model | Type | QS HW Mgmt Module |
|---|---|---|---|
| Broadcom/LSI/Avago | 9600 24Gb tri-mode series | SAS/NVMe HBA | Yes |
| Broadcom/LSI/Avago | 9500 12Gb tri-mode series | SAS HBA | Yes |
| Broadcom/LSI/Avago | 9400 12Gb tri-mode series | SAS HBA | Yes |
| Broadcom/LSI/Avago | 9300 12Gb series | SAS HBA | Yes |
| Cisco | UCS 6Gb & 12Gb HBAs | SAS HBA | Yes |
| DELL/EMC | SAS HBA 6Gb & 12Gb | SAS HBA | Yes |
| HPE | SmartArray H241 12Gb | SAS HBA | Yes |
| Lenovo/IBM | ServeRAID M5xxx 12Gb | SAS HBA | Yes |
Boot RAID Controllers
QuantaStor supports both software RAID1 and hardware RAID1 for boot using high-quality, datacenter-grade media. We recommend SSDs in the 240GB to 480GB size range for boot. If SATADOM SSD devices are used, use the 128GB devices and mirror them with software RAID1. Do not use Intel VROC; vendor-specific dedicated boot controllers such as the Dell BOSS are generally supported.
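When software RAID1 is used for boot, the mirror is managed by the standard Linux md driver, so its health can be checked from the console. A minimal sketch, assuming the boot mirror is /dev/md0 (the device name may differ on your system):

```
# Show the state of all md software RAID devices
cat /proc/mdstat

# Detailed status of the boot mirror (assumed here to be /dev/md0)
mdadm --detail /dev/md0
```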
| Vendor | Model | Type | QS HW Mgmt Module |
|---|---|---|---|
| Broadcom/LSI/Avago | MegaRAID (all models) | SATA/SAS RAID | Yes |
| Cisco | UCS RAID | SATA/SAS RAID | Yes |
| DellEMC | PERC H7xx, H8xx 6Gb & 12Gb models | SATA/SAS RAID | Yes |
| DellEMC | BOSS | SATA SSD RAID1 | Yes |
| HPE | SmartArray P4xx/P8xx | SATA/SAS RAID | Yes |
| Lenovo/IBM | ServeRAID M5xxx | SATA/SAS RAID | Yes |
| Microsemi/Adaptec | 5xxx/6xxx/7xxx/8xxx | SATA/SAS RAID | Yes |
NVMe RAID Controllers
QuantaStor supports the use of NVMe RAID controllers for scale-out configurations and single node configurations. They are not supported for use with scale-up HA (high-availability) configurations.
Single Node Use Cases
- High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5.
- High performance NVMeoF/FC/iSCSI target passthru of logical drives from NVMe hardware RAID5
Scale-out Use Cases
- High performance storage for WAL/MDB
- High performance OSDs with local rebuild capability, variable OSD count for higher performance
Supported Vendors
| Vendor | Model | Type | QS HW Mgmt Module |
|---|---|---|---|
| Graid Technology Inc. | SupremeRAID™ SR-1000 | NVMe RAID | Yes |
| Graid Technology Inc. | SupremeRAID™ SR-1010 | NVMe RAID | Yes |
| Pliops | XDP-RAIDplus | NVMe RAID | No |
| Xinnor | xiRAID | NVMe RAID | Yes |
Storage Devices/Media
Scale-up Cluster Media (ZFS Storage Pools)
Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF). Scale-up clusters using dual-ported NVMe media require the use of special NVMe based Storage-Bridge-Bay servers from Supermicro or the use of Western Digital Data24 42xx JBOFs.
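For dual-ported SAS media, IO fencing relies on SCSI persistent reservations. If you need to confirm that a given device supports them, the sg_persist tool from the sg3_utils package can report its capabilities; a minimal sketch, assuming the device of interest is /dev/sdb:

```
# Report the persistent reservation capabilities of the device
sg_persist --in --report-capabilities /dev/sdb

# Show any registered keys and the current reservation, if present
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb
```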
Data Media
| Vendor | Model | Media Type | Notes |
|---|---|---|---|
| Western Digital | Ultrastar | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
| Seagate | Exos, Nytro & Enterprise Performance | Dual-port 12/24Gb SAS SSD & NL-SAS HDD | |
| Samsung | Enterprise SAS SSDs | Dual-port 12/24Gb SAS SSD | |
| Samsung | Enterprise NVMe SSDs | Dual-port NVMe | Supermicro SBB or WD Data24 |
| KIOXIA | PM Enterprise Capacity/Enterprise Performance | Dual-port 12/24Gb SAS SSD | |
| KIOXIA | CM | Dual-port NVMe | Supermicro SBB or WD Data24 |
| Cisco, HPE, DellEMC, Lenovo | OEM | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
| SanDisk | SN8xx NVMe models with dual-port and IO fencing | Dual-port NVMe | Supermicro SBB or WD Data24 |
Checking for NVMe Persistent Reservation Support
This command shows how to check which IO fencing (persistent reservation) modes a given NVMe device supports. What we're looking for is support for "Exclusive Access - Registrants Only" (bit [4:4]), but as the output below shows, that mode is not supported on this media (SanDisk SN655).
    # nvme id-ns -H /dev/nvme6n1 | grep -A9 rescap
    rescap  : 0
      [7:7] : 0    Ignore Existing Key - Used as defined in revision 1.2.1 or earlier
      [6:6] : 0    Exclusive Access - All Registrants Not Supported
      [5:5] : 0    Write Exclusive - All Registrants Not Supported
      [4:4] : 0    Exclusive Access - Registrants Only Not Supported
      [3:3] : 0    Write Exclusive - Registrants Only Not Supported
      [2:2] : 0    Exclusive Access Not Supported
      [1:1] : 0    Write Exclusive Not Supported
      [0:0] : 0    Persist Through Power Loss Not Supported
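By contrast, a dual-port NVMe device that does support IO fencing for scale-up HA should report a non-zero rescap with at least the Exclusive Access - Registrants Only bit set, for example (illustrative output only; exact formatting varies by nvme-cli version):

```
  [4:4] : 1    Exclusive Access - Registrants Only Supported
```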
Journal Media (Metadata-Offload, ZIL & L2ARC)
- Metadata-offload (special VDEV) is highly recommended for all HDD-based storage pools. Add 3x flash devices that in sum represent 1% of the HDD raw capacity. For example, a 1000TB pool would require 10TB of raw flash, so 3x 3.84TB flash devices is ideal (see the example after this list).
- ZIL/SLOG is not recommended in most modern configurations. It is used to efficiently handle synchronous writes (SYNC_IO) common to databases and VMs; we recommend using all-flash pools for these workloads. If you do have a need for ZIL, add 2x flash devices such as 2x 3.84TB devices.
- L2ARC is not recommended in most modern configurations as its benefits are largely covered by the Metadata-offload (special VDEV). If you do elect to add L2ARC, add one or two flash devices; 1.92TB devices are usually more than sufficient.
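For reference, metadata-offload devices are attached to the ZFS pool as a mirrored special VDEV. Storage pool changes are normally made through the QuantaStor web UI or CLI; the underlying ZFS operation is roughly equivalent to the following minimal sketch, assuming a pool named tank and three hypothetical NVMe devices:

```
# Add a 3-way mirrored metadata-offload (special) VDEV to the pool
zpool add tank special mirror /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```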
Scale-out Cluster Media (Ceph Storage Pools)
QuantaStor supports a broad spectrum of NVMe, SAS, and SATA media when deploying scale-out clusters. When selecting flash media we highly recommend the use of NVMe devices over slower SATA/SAS SSDs due to the higher throughput performance for write logging and metadata operations. All major media vendors are supported including Seagate, Solidigm, Micron, Western Digital, ScaleFlux, SanDisk, Kioxia, Toshiba and more. Note that you must use enterprise or datacenter grade media in all configurations. Desktop grade media is unsuitable and will lead to durability, performance and uptime issues.
Fibre Channel HBAs
- QuantaStor only supports QLogic Fibre Channel HBAs for use as FC target ports. Emulex and other FC cards may be used in initiator mode only. QuantaStor systems with QLogic FC ports in target mode will work with clients using FC initiators of any brand including Emulex and QLogic models and their OEM derivatives.
| Vendor | Model | Type | Notes |
|---|---|---|---|
| QLogic (including all OEM models) | QLE2742 | 32Gb | |
| QLogic (including all OEM models) | QLE2692 | 16Gb | |
| QLogic (including all OEM models) | QLE267x | 16Gb | |
| QLogic (including all OEM models) | QLE25xx | 8Gb |
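To verify which FC ports are visible to the system along with their WWPN, negotiated speed and link state, the standard Linux fc_host sysfs entries can be inspected; a minimal sketch (paths are examples and depend on the installed HBAs):

```
# List FC host ports with their WWPN, speed and link state
for h in /sys/class/fc_host/host*; do
  echo "$h: $(cat $h/port_name) $(cat $h/speed) $(cat $h/port_state)"
done
```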
Network Interface Cards
Being Linux-based, QuantaStor supports the broad spectrum of network cards on the market, but we generally recommend nVidia/Mellanox, Intel, and Broadcom, in that order. The vendor of the onboard 1GbE/2.5GbE/10GbE management interface is less of a concern; these are generally supported as long as the server comes from one of the vendors listed above with which we've done at least one certification test.
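To confirm which driver and firmware a given NIC is running (useful when matching against the table below), ethtool reports both; a minimal sketch, assuming the interface is named eth0 (interface names will vary):

```
# Show driver, driver version and firmware version for the interface
ethtool -i eth0

# Show negotiated link speed and link detection
ethtool eth0
```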
Supported Vendors
| Vendor | Model | Type | Connector |
|---|---|---|---|
| Intel | X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810 | 1/10/25/40GbE, 100GbE | 10GBaseT, SFP28, QSFP, SFP+, QSFP28 |
| Emulex | OneConnect OCe14xxx Series | 10/40GbE | QSFP |
| nVidia | ConnectX-7 EN Series | 100/200/400GbE | QSFP56 |
| nVidia | ConnectX-6 EN Series | 25/50/100/200GbE | SFP28/QSFP28/QSFP56 |
| nVidia/Mellanox | ConnectX-5 EN Series | 25/50/100GbE | SFP28/QSFP28 |
| nVidia/Mellanox | ConnectX-4 EN Series | 10/25/40/50/100GbE | SFP28/QSFP28 |
| nVidia/Mellanox | ConnectX-3 EN/VPI Series | 10/40/56GbE | SFP+/QSFP+ |
| nVidia/Mellanox | ConnectX-3 Pro EN/VPI Series | 10GbE/40GbE/56GbE | SFP+/QSFP+ |
| Interface Masters | NIAGARA 32714L (OEM Intel) | Quad-port 10GbE | SFP+ |
| Broadcom | BCM57508 | 100GbE | QSFP28 |
Infiniband Adapters
QuantaStor supports the use of Infiniband adapters for both scale-up and scale-out cluster configurations. QuantaStor uses the ipoib (IP-over-IB) Linux driver to support TCP/IP over Infiniband. These Infiniband interfaces in IP-over-IB mode will appear in the system with interface names like "ibN", where N is the interface number, such as "ib0", "ib1", "ib2", "ib3" and so on. We recommend the use of Nvidia/Mellanox Infiniband NICs for all systems using IPoIB mode; this requires installing the OFED drivers. See the install script (/opt/osnexus/quantastor/bin/mellanox-ofed-install.sh) and contact OSNexus Support for assistance with installing the latest OFED drivers.
- Note: Support for SRP & iSER has been deprecated, but standard iSCSI is supported with IP-over-IB
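After the OFED drivers have been installed with the script noted above, the Infiniband ports and IPoIB interfaces can be confirmed from the console; a minimal sketch, assuming the first Infiniband port comes up as ib0:

```
# Install the Nvidia/Mellanox OFED drivers (install script ships with QuantaStor)
/opt/osnexus/quantastor/bin/mellanox-ofed-install.sh

# Verify the IB adapters, ports and link state
ibstat

# The IPoIB interface(s) should now be visible
ip link show ib0
```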
| Vendor | Model | Type |
|---|---|---|
| Nvidia/Mellanox | ConnectX-3 VPI Series | 40/56GbE (QDR) |
| Nvidia/Mellanox | ConnectX-4 VPI Series | 25/40/50/100GbE |
| Nvidia/Mellanox | ConnectX-5 VPI Series | 25/50/100GbE |
| Nvidia | ConnectX-6 VPI Series | 100/200GbE |
| Nvidia | ConnectX-7 VPI Series | 200/400GbE |
iSCSI Initiators / Client-side
| Vendor | Operating System | iSCSI Initiator |
|---|---|---|
| Microsoft | Windows (all versions, 7 and newer) | Windows iSCSI Initiator |
| Microsoft | Windows Server (all versions, 2003 and newer) | Windows iSCSI Initiator |
| Apple | macOS (all versions, 11 and newer) | ATTO Xtend SAN initiator (globalSAN is not supported) |
| XenServer | XenServer 7 and newer | iSCSI SR, NFS SR, StorageLink SR |
| VMware | VMware ESXi 6 and newer | VMware initiator |
| Proxmox | Proxmox 8 and newer | Proxmox/Linux iSCSI initiator |
| Linux | RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros) | open-iscsi |
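As an example of the Linux client side, a volume presented over iSCSI can be discovered and attached with open-iscsi; a minimal sketch, assuming the QuantaStor system's target portal is at 10.0.0.10 (substitute your own portal IP and target IQN):

```
# Discover targets exposed by the QuantaStor system
iscsiadm -m discovery -t sendtargets -p 10.0.0.10

# Log in to the discovered targets on that portal (or pass -T <iqn> for a specific one)
iscsiadm -m node -p 10.0.0.10 --login
```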