QuantaStor is Linux-based, and kernel driver updates are released multiple times per year to ensure broad support for the latest hardware from major vendors including HPE, Dell/EMC, Cisco, Intel, Western Digital, Supermicro, and Lenovo. The hardware listed below has been tested with QuantaStor to ensure system stability, performance, and compatibility. If you have questions about hardware compatibility for components not listed below, please contact support@osnexus.com for assistance.
Servers for QuantaStor SDS Systems
| Vendor | Model | Boot Controller | CPU | Memory (32GB min) |
|---|---|---|---|---|
| Cisco | UCS S3260 Storage Server, C220, C240 (M7/M6/M5/M4 Series) | UCS RAID and HBAs or software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| DellEMC | R760/R660, PowerEdge R650/R750, PowerEdge R640/R740/R740xd, R730/R730xd/R720/R630/R620 | Dell BOSS or software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| HPE | Gen11, ProLiant DL360/DL380 Gen10, Gen9, Gen8, Gen7 | HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| HPE | Apollo 4200, 4510 Gen9 and Gen10 Series | HPE P4xx w/ FBWC | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Intel | Intel Server Systems (M50FCP/M50CYP) | Software RAID1 M.2 NVMe | Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Lenovo | ThinkSystem SR650/SR550/SR590 Series, x3650 M5/M4/M4 BD, x3250 M5 | Software RAID1 M.2 NVMe or hardware boot RAID1 | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
| Seagate | Seagate 2U24 AP Storage Server, 5U84 AP, 2U12 AP (AMD / Bonneville) | Internal M.2 NVMe | AMD EPYC 16 core | 128GB RAM (Gold), 256GB RAM (Platinum), 512GB RAM (Titanium) |
| Supermicro | X13/H13, X12/H12, X11, X10 & X9 based Intel and AMD servers | Software RAID1 M.2 NVMe | AMD EPYC, Intel Scalable Processors Gen 1/2/3/4 | 0.3GB RAM (Archive) to 1GB RAM per usable TB (OLTP/Virtualization) |
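As a quick illustration of the memory guideline in the table above, the sketch below turns the 0.3GB-per-usable-TB (archive) and 1GB-per-usable-TB (OLTP/virtualization) factors and the 32GB floor into numbers. The function name and workload labels are illustrative only; the factors come from the table.

```python
# Hypothetical sizing helper illustrating the memory guideline above:
# 0.3GB RAM per usable TB for archive workloads, 1GB RAM per usable TB
# for OLTP/virtualization workloads, with a 32GB minimum per system.

GB_PER_USABLE_TB = {"archive": 0.3, "oltp_virtualization": 1.0}
MINIMUM_GB = 32

def recommended_ram_gb(usable_tb: float, workload: str = "oltp_virtualization") -> float:
    """Return the suggested RAM (in GB) for a QuantaStor server."""
    return max(MINIMUM_GB, usable_tb * GB_PER_USABLE_TB[workload])

if __name__ == "__main__":
    # Example: 400TB usable as OLTP/virtualization -> 400GB RAM;
    # the same capacity as an archive tier -> 120GB RAM.
    print(recommended_ram_gb(400, "oltp_virtualization"))  # 400.0
    print(recommended_ram_gb(400, "archive"))              # 120.0
```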
Disk Expansion Chassis / JBOD
- QuantaStor supports SAS/SATA & NVMe external Expansion Shelves / JBOD devices from Dell, HP, IBM/Lenovo and SuperMicro.
- All JBODs must have two or more SAS expansion ports per module to allow for SAS multipathing. For chaining of expansion shelves, three or more SAS ports are required per expander.
| Vendor | JBOD Model |
|---|---|
| Cisco | All Models |
| Dell | All Models |
| HPE | All Models (D3000 Series, D2000 Series, D6020) |
| IBM/Lenovo | All Models |
| Seagate Exos E | All Models |
| Seagate Corvault | All Models |
| Seagate Exos X | All Models |
| SuperMicro | All Models (HA requires dual expander JBODs) |
| Western Digital | Ultrastar Data60 / Data102 / 4U60-G2 / 2U24 SSD Models |
SAS HBA Controllers
To make Storage Pools (ZFS-based pools) highly available, the media must be connected to two or more servers via SAS (or FC) connectivity. We recommend Broadcom HBAs and their OEM equivalents. HPE is the exception: HPE disk expansion units must be used with HPE HBAs, which are now Microsemi-based. Make sure the HBA is flashed with the latest firmware to ensure compatibility with the latest storage media. For OEM-branded HBAs, always use the OEM-supplied firmware even if newer firmware is available from Broadcom. SAS HBAs with only external ports (8e, 16e models) should be configured to ignore the Boot ROM from disk devices. For Broadcom/LSI controllers, this option is called 'Boot Support' and should be set to 'OS only' in the MPTSAS BIOS.
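The 'Boot Support' setting above is changed in the controller BIOS, not from the OS, but enumerating which SAS HBAs are present and which kernel driver they are bound to is a useful pre-flight check before verifying firmware with the vendor tools (for example Broadcom's sas3flash or storcli). The sketch below is a minimal example and assumes the standard `lspci` utility (pciutils) is installed.

```python
# Minimal sketch: list SAS/RAID controllers and the kernel driver in use.
# Assumes the standard `lspci` utility (pciutils) is available.
import subprocess

def list_sas_controllers() -> list[str]:
    out = subprocess.run(["lspci", "-k"], capture_output=True, text=True, check=True).stdout
    blocks, current = [], []
    for line in out.splitlines():
        if line and not line.startswith("\t"):   # new PCI device entry
            if current:
                blocks.append("\n".join(current))
            current = [line]
        else:
            current.append(line)
    if current:
        blocks.append("\n".join(current))
    # Keep only SAS / RAID storage controllers
    return [b for b in blocks if "SAS" in b or "RAID" in b]

if __name__ == "__main__":
    for controller in list_sas_controllers():
        print(controller, end="\n\n")
```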
| Vendor | Model | Type | QS HW Mgmt Module |
|---|---|---|---|
| Broadcom/LSI/Avago | 9500 12Gb tri-mode series | SAS HBA | Yes |
| Broadcom/LSI/Avago | 9400 12Gb tri-mode series | SAS HBA | Yes |
| Broadcom/LSI/Avago | 9300 12Gb series | SAS HBA | Yes |
| Broadcom/LSI/Avago | 9200 6Gb series | SAS HBA | Yes |
| Cisco | UCS 6Gb & 12Gb HBAs | SAS HBA | Yes |
| DELL/EMC | SAS HBA 6Gb & 12Gb | SAS HBA | Yes |
| HPE | SmartArray H241 12Gb | SAS HBA | Yes |
| Lenovo/IBM | ServeRAID M5xxx 12Gb | SAS HBA | Yes |
Boot RAID Controllers
QuantaStor supports both software RAID1 and hardware RAID1 for boot, using high-quality, datacenter-grade media. We recommend SSDs in the 240GB to 480GB range for boot. If SATADOM SSD devices are used, choose the 128GB devices and mirror them with software RAID1. Do not use Intel VROC.
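For software RAID1 boot devices, the mirror is normally created during installation, but its health can be checked from the running system. The following is a small sketch, assuming the Linux md driver's usual /proc/mdstat layout, where each array's member status appears as e.g. "[UU]" (healthy) or "[U_]" (degraded).

```python
# Minimal health check for software RAID1 boot mirrors (Linux md arrays).
# Assumes the md driver's standard /proc/mdstat format.
import re

def degraded_md_arrays(mdstat_path: str = "/proc/mdstat") -> list[str]:
    degraded, current = [], None
    with open(mdstat_path) as f:
        for line in f:
            name_match = re.match(r"^(md\d+)\s*:", line)
            if name_match:
                current = name_match.group(1)
            status_match = re.search(r"\[([U_]+)\]", line)
            if current and status_match and "_" in status_match.group(1):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_md_arrays()
    print("Degraded arrays:", bad if bad else "none")
```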
| Vendor | Model | Type | QS HW Mgmt Module |
|---|---|---|---|
| Broadcom/LSI/Avago | MegaRAID (all models) | SATA/SAS RAID | Yes |
| Cisco | UCS RAID | SATA/SAS RAID | Yes |
| DellEMC | PERC H7xx, H8xx 6Gb & 12Gb models | SATA/SAS RAID | Yes |
| DellEMC | BOSS | SATA SSD RAID1 | Yes |
| HPE | SmartArray P4xx/P8xx | SATA/SAS RAID | Yes |
| Lenovo/IBM | ServeRAID M5xxx | SATA/SAS RAID | Yes |
| Microsemi/Adaptec | 5xxx/6xxx/7xxx/8xxx | SATA/SAS RAID | Yes |
NVMe RAID Controllers
QuantaStor supports the use of NVMe RAID controllers for both scale-out and single-node configurations.
Single Node Use Cases
- High performance SAN/NAS using software single parity (RAIDZ1) over NVMe hardware RAID5.
- High performance NVMeoF/FC/iSCSI target passthru of logical drives from NVMe hardware RAID5
Scale-out Use Cases
- High performance storage for WAL/MDB
- High performance OSDs with local rebuild capability, variable OSD count for higher performance
Storage Devices/Media
Scale-up Cluster Media (ZFS Storage Pools)
Scale-up clusters require dual-ported media so that QuantaStor's IO fencing system can reserve device access to specific nodes for specific pools. These clusters generally consist of two QuantaStor servers with shared access to the storage media in a separate enclosure (JBOD or JBOF). Scale-up clusters using dual-ported NVMe media require the use of special Storage-Bridge-Bay servers from Supermicro.
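Dual-ported SAS devices typically appear as two block devices (one per path) that share the same WWID, so a quick way to sanity-check the cabling before building an HA pool is to group devices by WWID and confirm each has at least two paths. The sketch below reads the wwid attribute from sysfs; it is an illustrative check only, not the mechanism QuantaStor itself uses for fencing.

```python
# Illustrative check: group SAS block devices by WWID and report path counts.
# A dual-ported device cabled to both expanders should show two paths.
import glob
from collections import defaultdict

def paths_per_wwid() -> dict[str, list[str]]:
    paths = defaultdict(list)
    for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
        dev = wwid_file.split("/")[3]          # e.g. "sdb"
        try:
            with open(wwid_file) as f:
                wwid = f.read().strip()
        except OSError:
            continue                           # attribute missing for this device
        paths[wwid].append(dev)
    return dict(paths)

if __name__ == "__main__":
    for wwid, devs in paths_per_wwid().items():
        flag = "OK" if len(devs) >= 2 else "single-path"
        print(f"{wwid}: {devs} ({flag})")
```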
Data Media
| Vendor | Model | Media Type | Notes |
|---|---|---|---|
| Western Digital | Ultrastar | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
| Seagate | Exos, Nytro & Enterprise Performance | Dual-port 12/24Gb SAS SSD & NL-SAS HDD | |
| Samsung | Enterprise SSDs (PM1643, PM1643a) | Dual-port 12/24Gb SAS SSD | |
| Micron | Enterprise SSDs (S6xxDC) | Dual-port 12/24Gb SAS SSD | |
| KIOXIA | PM6 Enterprise Capacity/Enterprise Performance | Dual-port 12/24Gb SAS SSD | |
| Cisco, HPE, DellEMC, Lenovo | OEM | Dual-port 6/12/24Gb SAS SSD & NL-SAS HDD | |
| Western Digital | NVMe SN200 SN840 | Dual-port NVMe | Supermicro SBB Server Req |
| KIOXIA | CM5-V, CM6, CM7 | Dual-port NVMe | [ON HOLD - CONTACT SUPPORT] |
| Micron | 7300 PRO | Dual-port NVMe | Supermicro SBB Server Req |
Journal Media (ZIL & L2ARC)
A mirrored pair of SSDs is required for each storage pool to enable SSD write acceleration. SSDs can be added to or removed from a given storage pool at any time with zero downtime. Be sure to select a storage chassis / JBOD with adequate drive slots for both the data drives and the SSD journal devices. Select 400GB models when purchasing write-intensive SSDs and 800GB models when purchasing mixed-use SSDs.
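QuantaStor adds and removes journal devices through its own web UI and CLI, but at the ZFS layer the operations correspond to the standard zpool commands sketched below. The pool and device names are placeholders; on a production system let QuantaStor drive these steps.

```python
# Illustrative only: the ZFS-level commands behind adding a mirrored write
# log (ZIL/SLOG) and a read-cache (L2ARC) device to a pool.  Pool and device
# names are placeholders; QuantaStor normally performs these steps itself.
import subprocess

POOL = "pool1"                                                     # placeholder pool name
ZIL_DEVICES = ["/dev/disk/by-id/ssd-a", "/dev/disk/by-id/ssd-b"]   # mirrored pair
L2ARC_DEVICE = "/dev/disk/by-id/ssd-c"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Add the mirrored SSD write log, then the read-cache device.
    run(["zpool", "add", POOL, "log", "mirror", *ZIL_DEVICES])
    run(["zpool", "add", POOL, "cache", L2ARC_DEVICE])
    # Log and cache devices can later be removed online with `zpool remove`.
```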
| Vendor | Model | Type | Notes |
|---|---|---|---|
| Western Digital | Ultrastar SAS Write-Intensive & Mixed-use Models | Dual-port 12Gb SAS SSD | |
| Seagate | Nytro SAS Write-Intensive & Mixed-use Models | Dual-port 12Gb SAS SSD | |
| Samsung | Enterprise SSDs (PM1643, PM1643a) | Dual-port 12Gb SAS SSD | |
| Western Digital | SN200 SN840 | Dual-port NVMe | Supermicro SBB Server Req |
| KIOXIA | PM6 Enterprise Capacity/Enterprise Performance | Dual-port 12/24Gb SAS SSD | |
| KIOXIA | CM5-V, CM6, CM7 | Dual-port NVMe | [ON HOLD - CONTACT SUPPORT] |
| Micron | 7300 PRO | Dual-port NVMe | Supermicro SBB Server Req |
Scale-out Cluster Media (Ceph Storage Pools)
We highly recommend NVMe devices over SATA/SAS SSDs due to their higher per-device throughput, which reduces overall cost.
| Vendor | Model | Type | Use case |
|---|---|---|---|
| Intel | DC series (all models) | NVMe | all use cases |
| Micron | 7xxx, 9xxx (all models) | NVMe | all use cases |
| Seagate | Nytro Series (all models) | NVMe / SAS | all use cases |
| ScaleFlux | CSD 2000/CSD 3000 (all models) | NVMe | all use cases |
| KIOXIA / Toshiba | CM6/CM7 Series (all models) | NVMe | all use cases |
| Western Digital | Ultrastar DC series (all models) | NVMe | all use cases |
Fibre Channel HBAs
- QuantaStor only supports QLogic Fibre Channel HBAs for use as FC target ports. Emulex and other FC cards may be used in initiator mode only. QuantaStor systems with QLogic FC ports in target mode will work with clients using FC initiators of any brand, including Emulex and QLogic models and their OEM derivatives.
| Vendor | Model | Type | Notes |
|---|---|---|---|
| QLogic (including all OEM models) | QLE2742 | 32Gb | Requires QuantaStor 6 |
| QLogic (including all OEM models) | QLE2692 | 16Gb | Requires QuantaStor 6 |
| QLogic (including all OEM models) | QLE267x | 16Gb | |
| QLogic (including all OEM models) | QLE25xx | 8Gb | |
Network Interface Cards
- QuantaStor supports almost all 1GbE and 10/100 network cards on the market.
- High-speed 10GbE/25GbE/40GbE/50GbE/100GbE NICs that have been tested with QuantaStor for compatibility, stability, and performance are listed below.
| Vendor | Model | Type | Connector |
|---|---|---|---|
| Intel | X520, X550, X710, XL710, XXV710 (all 1/10/25/40GbE models), E810 | 1/10/25/40GbE, 100GbE | 10GBaseT, SFP28, QSFP, SFP+, QSFP28 |
| Emulex | OneConnect OCe14xxx Series | 10/40GbE | QSFP |
| Mellanox | ConnectX-6 EN Series | 25/50/100/200GbE | SFP28/QSFP28 |
| Mellanox | ConnectX-5 EN Series | 25/50/100GbE | SFP28/QSFP28 |
| Mellanox | ConnectX-4 EN Series | 10/25/40/50/100GbE | SFP28/QSFP28 |
| Mellanox | ConnectX-3 EN/VPI Series | 10/40/56GbE | SFP+/QSFP+ |
| Mellanox | ConnectX-3 Pro EN/VPI Series | 10/40/56GbE | SFP+/QSFP+ |
| Interface Masters | NIAGARA 32714L (OEM Intel) | Quad-port 10GbE | SFP+ |
| Broadcom | BCM57508 | 100GbE | QSFP28 |
Infiniband Adapters
QuantaStor supports the use of InfiniBand adapters for both scale-up and scale-out cluster configurations. QuantaStor uses the ipoib (IP-over-IB) Linux driver to provide TCP/IP over InfiniBand. In IP-over-IB mode these interfaces appear in the system with names like "ibN", where N is the interface number, for example "ib0", "ib1", "ib2", and so on. We recommend Nvidia/Mellanox InfiniBand NICs for all systems using IPoIB mode; this requires installing the OFED drivers. See /opt/osnexus/quantastor/bin/mellanox-ofed-install.sh and contact OSNexus Support for assistance with installing the latest OFED drivers.
- Note: Support for SRP & iSER has been deprecated, but standard iSCSI mode is supported over IP-over-IB.
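After the OFED drivers are installed, the IPoIB interfaces described above can be confirmed from sysfs. The following is a small sketch that lists ib* network interfaces and their link state; the interface names and sysfs layout are the standard Linux ones, not QuantaStor-specific.

```python
# Minimal sketch: list IP-over-IB interfaces (ib0, ib1, ...) and their state,
# using the standard Linux sysfs layout under /sys/class/net.
import glob
import os

def ipoib_interfaces() -> list[tuple[str, str]]:
    interfaces = []
    for path in sorted(glob.glob("/sys/class/net/ib*")):
        name = os.path.basename(path)
        try:
            with open(os.path.join(path, "operstate")) as f:
                state = f.read().strip()
        except OSError:
            state = "unknown"
        interfaces.append((name, state))
    return interfaces

if __name__ == "__main__":
    for name, state in ipoib_interfaces() or [("(none found)", "-")]:
        print(name, state)
```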
| Vendor | Model | Type |
|---|---|---|
| Mellanox | ConnectX-3 VPI Series | 40/56GbE (QDR) |
| Mellanox | ConnectX-4 VPI Series | 25/40/50/100GbE |
| Nvidia/Mellanox | ConnectX-5 VPI Series | 25/50/100GbE |
| Nvidia/Mellanox | ConnectX-6 VPI Series | 100/200GbE |
| Nvidia/Mellanox | ConnectX-7 VPI Series | 200/400GbE |
iSCSI Initiators / Client-side
| Vendor | Operating System | iSCSI Initiator |
|---|---|---|
| Microsoft | Windows (all versions, 7 and newer) | Windows iSCSI Initiator |
| Microsoft | Windows Server (all versions, 2003 and newer) | Windows iSCSI Initiator |
| Apple | macOS 14, macOS 13, macOS 12, macOS 11, macOS 10.15, OS X 10.x, Mac OS X 10.x | ATTO Xtend SAN initiator (globalSAN is not supported) |
| Citrix | XenServer 5.x/6.x/7.x (iSCSI, FC, NFS), Citrix Ready Certified | iSCSI SR, NFS SR, StorageLink SR |
| VMware | VMware ESXi 4.x, 5.x, 6.x, 7.x (iSCSI, FC, NFS) | VMware initiator |
| Linux | RHEL, AlmaLinux, RockyLinux, SUSE, CentOS, Debian, Ubuntu, Oracle Linux (OEL), etc. (all major distros) | open-iscsi |
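For the Linux open-iscsi entry above, a typical client-side discovery and login sequence against a QuantaStor portal uses the standard iscsiadm commands. The sketch below wraps them with a placeholder portal address and assumes the open-iscsi package is installed on the client.

```python
# Illustrative open-iscsi client workflow against a QuantaStor iSCSI portal.
# The portal IP is a placeholder; run this on the client, not the storage system.
import subprocess

PORTAL = "192.168.0.100"          # placeholder QuantaStor iSCSI portal address

def run(cmd: list[str]) -> str:
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Discover targets exported by the portal, then log in to each discovered IQN.
    discovery = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    for line in discovery.splitlines():
        # Discovery lines look like: "<ip>:<port>,<tpgt> <target-iqn>"
        portal, iqn = line.split()[:2]
        run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal.split(",")[0], "--login"])
```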