Solution Matrix

From OSNEXUS Online Documentation Site
{| class="wikitable" style="width: 100%;"
|-
| style="width: 10%"|<center></center>
| style="width: 30%"|<center>'''Scale-Up'''</center>
| style="width: 30%"|<center>'''Scale-Out'''</center>
| style="width: 30%"|<center>'''Scale-Out'''</center>
|-
! Solution
| SAN / NAS Systems
| Scale-Out Block & Object Storage
| Scale-Out File Storage
|-
! File System
| ZFS
| Ceph
| CephFS
|-
! Protocol Support
| Block Storage (iSCSI / FibreChannel)<br>File Storage (CIFS/NFS/SMB)
| Block Storage (iSCSI)<br>Object Storage (S3/Swift)
| [https://wiki.osnexus.com/index.php?title=File_Storage_Provisioning File Storage] (CIFS/NFS)
|-
! Key Features
| High-availability, Snapshots, Bit-rot protection & correction, Remote Replication, Compression, Encryption
| High-availability, Compression, Encryption
| Single-namespace NAS, High-availability
|-
! System Count
| minimum 1 system<br>(for HA) 2x systems + JBOD
| minimum 3 systems per cluster
| minimum 3 systems per cluster
|-
! Benefits
| Lower latency / Lower $/TB hardware costs
| Greater expandability within a single namespace
| Greater expandability within a single namespace
|-
! Workload
| Virtualization / Databases / HPC<br>Archive / Media Editing
| Virtualization / Archive / Unstructured Data
| Virtualization / Archive / Unstructured Data
|-
! High-availability Architecture
| 2x system + shared SAS JBOD<br>Remote-replication
| Erasure-coding, Replicas
| Erasure-coding, Replicas
|-
! Performance
| Up to 1.6GB/sec R/W throughput per storage pool<br>Network bandwidth limited (dual 10GbE)<br>Hybrid IOPS 10k-30k<br>All-flash IOPS 50k-100k+
| Depends on server count<br>~400MB/sec per server with dual 10GbE<br>Network bandwidth limited (dual 10GbE)
| Depends on server count<br>~400MB/sec per server with dual 10GbE<br>Network bandwidth limited (dual 10GbE)<br>SSDs used to accelerate write performance.
|-
! Storage Grids
| Up to 1.4PB raw per QuantaStor HA Cluster (2x pools)<br>up to 64x systems per grid
| Up to 10PB per cluster<br>up to 64x servers per grid
| Up to 10PB per cluster<br>up to 64x servers per grid
|-
! Solution Design Tool
| [http://www.osnexus.com/zfs-designer Scale-up SAN/NAS Solution Designer<br>(ZFS based)]
| [http://www.osnexus.com/ceph-designer Scale-out File, Block & Object Solution Designer<br>(Ceph based)]
| [http://www.osnexus.com/ceph-designer Scale-out File, Block & Object Solution Designer<br>(Ceph based)]
|}

Latest revision as of 21:15, 15 August 2019

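The scale-out performance figures above (roughly 400MB/sec per server with dual 10GbE, a 3-system minimum, and up to 64 servers per grid) can be turned into a rough aggregate-throughput estimate. The sketch below is an illustration built only from those table figures, not an official sizing tool; real throughput also depends on block size, client concurrency, and erasure-coding vs. replica mode:

```python
# Rough sizing sketch using the scale-out figures from the table above.
# These constants are assumptions taken from the table, not guarantees.
PER_SERVER_MB_S = 400   # ~400MB/sec per server with dual 10GbE
MIN_SERVERS = 3         # minimum systems per cluster
MAX_SERVERS = 64        # maximum servers per grid

def estimated_cluster_throughput_mb_s(servers: int) -> int:
    """Estimate aggregate sequential throughput for a scale-out cluster."""
    if not MIN_SERVERS <= servers <= MAX_SERVERS:
        raise ValueError(f"cluster size must be {MIN_SERVERS}-{MAX_SERVERS} servers")
    return servers * PER_SERVER_MB_S

# A minimum 3-system cluster estimates at 1200 MB/sec, in line with the
# table's ~1.2GB/sec small-cluster figure.
print(estimated_cluster_throughput_mb_s(3))
```

Treat the result as a back-of-the-envelope ceiling; the table notes that both scale-out columns are network-bandwidth limited, so faster NICs rather than more disks per server are what raise the per-server figure.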