Configure Scale-out Storage Cluster

IBM Cloud deployments of QuantaStor scale-out storage clusters provide dedicated and highly available file, block, and object storage that can be deployed in any of the IBM Cloud data centers worldwide. Scale-out clusters use Ceph technology and provide NAS (NFS/SMB), SAN (iSCSI), and S3/SWIFT-compatible object storage support in a single deployment. As the name implies, scale-out clusters may be expanded by adding storage media and/or additional QuantaStor servers.

Each server in a scale-out cluster must contain at least one SSD to accelerate write performance, and each cluster requires a minimum of 3x servers (6+ recommended) and supports a maximum of 32x servers (scalable to over 10PB in the IBM Cloud). QuantaStor’s storage grid technology enables servers to be grouped together and managed as a single storage grid that can span IBM Cloud datacenters. Within a storage grid, one or more storage clusters may be provisioned and managed via the QuantaStor web-based management interface.
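
As a rough planning aid, the sizing rules above (minimum of 3 servers, 6+ recommended, maximum of 32, and at least one SSD per server for write acceleration) can be expressed as a small check. The following is an illustrative Python sketch only; the constants restate this section and are not part of any QuantaStor API.

# Illustrative sketch of the scale-out cluster sizing rules described above.
MIN_SERVERS = 3          # hard minimum per cluster
RECOMMENDED_SERVERS = 6  # 6 or more servers recommended
MAX_SERVERS = 32         # maximum servers per cluster

def check_cluster_plan(server_count, ssds_per_server):
    """Return a list of issues with a planned cluster layout."""
    issues = []
    if server_count < MIN_SERVERS:
        issues.append("below the 3-server minimum")
    elif server_count < RECOMMENDED_SERVERS:
        issues.append("meets the minimum, but 6+ servers are recommended")
    if server_count > MAX_SERVERS:
        issues.append("exceeds the 32-server maximum")
    if ssds_per_server < 1:
        issues.append("each server needs at least one SSD for write acceleration")
    return issues

print(check_cluster_plan(server_count=4, ssds_per_server=1))
# ['meets the minimum, but 6+ servers are recommended']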

Key Features

QuantaStor Scale-out Object Storage

  • S3 Object Storage Compatible REST API support (see the example after this list)
  • Hybrid HDD + SSD configuration to boost write performance
  • Easy web-based management of all systems with built-in QuantaStor storage grid technology
  • Easy expansion by adding systems and/or drives
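
To illustrate the S3-compatible REST API support called out above, here is a minimal sketch using the standard boto3 S3 client pointed at a QuantaStor object storage endpoint. The endpoint URL and access/secret keys below are placeholders chosen for illustration; substitute the values configured for your deployment.

import boto3

# Point a standard S3 client at the cluster's S3-compatible endpoint.
# The endpoint and credentials are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://quantastor.example.com:443",  # hypothetical endpoint
    aws_access_key_id="QS_ACCESS_KEY",                  # placeholder
    aws_secret_access_key="QS_SECRET_KEY",              # placeholder
)

# List the buckets visible to this user as a quick connectivity check.
for bucket in s3.list_buckets().get("Buckets", []):
    print(bucket["Name"])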

Configurations

HYBRID OBJECT STORAGE SOLUTIONS

Capacity: 144TB raw / 96TB usable; QS 5 Image: 4 x MD (48TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workload: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
Bill of Materials, Per Server (4x): Intel Xeon Dual E5-2620 / 12x LFF bay server; 64GB RAM; 2 x 800GB SATA SSD (RAID1/boot); 9 x 4TB SATA HDD (data); 1 x 800GB SATA SSD (journal); 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design (https://bit.ly/2QVqH95)

Capacity: 432TB raw / 288TB usable; QS 5 Image: 4 x LG (128TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workload: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
Bill of Materials, Per Server (4x): Intel Xeon Dual E5-2620 / 12x LFF bay server; 128GB RAM; 2 x 800GB SATA SSD (RAID1/boot); 9 x 12TB SATA HDD (data); 1 x 800GB SATA SSD (journal); 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design (https://bit.ly/2Ir2V0N)

Capacity: 3072TB raw / 2026TB usable; QS 5 Image: 8 x 2XL (384TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workload: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
Bill of Materials, Per Server (8x): Intel Xeon Dual E5-2620 / 36x LFF bay server; 256GB RAM; 2 x 800GB SATA SSD (RAID1/boot); 32 x 12TB SATA HDD (data); 2 x 960GB SATA SSD (journal); 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design (https://bit.ly/2WYhpil)

ALL-FLASH OBJECT STORAGE SOLUTIONS

Capacity: 152TB raw / 76TB usable; QS 5 Image: 4 x MD (48TB)
Media Type: SSD
Protocol Support: S3 / SWIFT
Workload: OpenStack; Virtualization; Data Analytics; Private Cloud Content Delivery Network; Next Generation Applications using S3 back-end
Bill of Materials, Per Server (4x): Intel Xeon Dual E5-2620 / 12x LFF bay server; 64GB RAM; 2 x 800GB SATA SSD (RAID1/boot); 10 x 3.8TB SATA SSD (data); 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design (https://bit.ly/2K4c5Ur)
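
The raw and usable capacities in the tables above follow from the data-protection layout applied to the data drives. As a worked example (the erasure-coding and replication factors below are illustrative assumptions, since the actual layout is chosen when the object storage pool group is created), a 2/3 raw-to-usable ratio such as 144TB raw / 96TB usable matches an erasure-coded layout like 4+2, while the all-flash 152TB raw / 76TB usable matches 2x replication:

# Worked capacity arithmetic for the configurations above.
# The erasure-coding (k+m) and replication factors are illustrative assumptions.

def raw_tb(servers, data_drives_per_server, drive_tb):
    return servers * data_drives_per_server * drive_tb

def usable_ec(raw, k, m):
    # Erasure coding stores k data chunks plus m coding chunks per object.
    return raw * k / (k + m)

def usable_replicated(raw, copies):
    return raw / copies

hybrid_small = raw_tb(4, 9, 4)          # 144 TB raw (4 servers x 9 x 4TB HDD)
print(usable_ec(hybrid_small, 4, 2))    # 96.0 TB usable with an assumed 4+2 layout

hybrid_medium = raw_tb(4, 9, 12)        # 432 TB raw
print(usable_ec(hybrid_medium, 4, 2))   # 288.0 TB usable

all_flash = raw_tb(4, 10, 3.8)          # 152 TB raw (4 servers x 10 x 3.8TB SSD)
print(usable_replicated(all_flash, 2))  # 76.0 TB usable with assumed 2x replication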

Quick Configuration Steps

Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)
Step Five: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)
Step Six: Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK
Step Seven: Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK
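
Once Step Eight completes and the object storage pool group is online, a quick way to exercise the new pool end-to-end is to write and read back a test object through the S3-compatible API. The sketch below uses boto3; the endpoint, credentials, and bucket name are placeholders chosen for illustration.

import boto3

# Placeholders: substitute the endpoint and user credentials for your deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://quantastor.example.com:443",
    aws_access_key_id="QS_ACCESS_KEY",
    aws_secret_access_key="QS_SECRET_KEY",
)

bucket = "smoke-test"                    # hypothetical bucket name
s3.create_bucket(Bucket=bucket)          # create a bucket on the new pool
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"hello from QuantaStor")

# Read the object back and confirm the round trip succeeded.
body = s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read()
assert body == b"hello from QuantaStor"
print("object round trip OK")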