[[Category:ibm_guide]]
IBM Cloud deployments of QuantaStor scale-out storage clusters provide dedicated and highly available file, block, and object storage which can be deployed in any of the IBM Cloud data centers worldwide.  Scale-out clusters use Ceph technology and provide NAS (NFS/SMB), SAN (iSCSI), and S3/SWIFT-compatible object storage support in a single deployment.  As the name implies, scale-out clusters may be expanded by adding storage media and/or additional QuantaStor servers.

Each server in a scale-out cluster must contain at least one SSD to accelerate write performance, and each cluster must contain a minimum of 3x servers (6+ recommended) and a maximum of 32x servers (scalable to over 10PB in the IBM Cloud).  QuantaStor’s storage grid technology enables systems to be grouped together and managed as a storage grid that can span IBM Cloud datacenters.  Within a storage grid, one or more storage clusters may be provisioned and managed via the QuantaStor web-based management interface as well as via QS REST APIs and the QS CLI.

== Key Features ==
 
'''QuantaStor Scale-out Object Storage'''
*S3 Object Storage Compatible REST API support (see the connection sketch below)
*Hybrid HDD + SSD configuration to boost write performance
*Easy web-based management of all systems with built-in QuantaStor storage grid technology
*Easy expansion by adding systems and/or drives
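
Because the REST API is S3-compatible, any standard S3 SDK can talk to the cluster. Below is a minimal sketch using Python with boto3; the endpoint URL and credentials are placeholders for illustration, not values from a real deployment.

<pre>
# Minimal sketch: connect an S3 client to a QuantaStor object storage
# endpoint. The endpoint URL and credentials are placeholders --
# substitute the values for your own deployment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-object.example.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",           # placeholder credential
    aws_secret_access_key="YOUR_SECRET_KEY",       # placeholder credential
)

# List all buckets visible to these credentials.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
</pre>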
  
== Configurations ==
'''HYBRID OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>144TB raw / 96TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 4TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2QVqH95|View Design}}</center>
|-
|<center>432TB raw / 288TB usable<br><br>QS 5 Image: 4 x LG (128TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>128GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 12TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2Ir2V0N|View Design}}</center>
|-
|<center>3072TB raw / 2026TB usable<br><br>QS 5 Image: 8 x 2XL (384TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (8x):<br>Intel Xeon Dual E5-2620 / 36x LFF bay server<br>256GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>32 x 12TB SATA HDD (data)<br>2 x 960GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2WYhpil|View Design}}</center>
|-
|}
  
Add additional drives and/or servers to expand capacity.  Drives can be different sizes and servers can have different numbers of drives, but for optimal performance a uniform drive count and capacity per QuantaStor server is recommended.
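
The usable figures in the tables follow from the data protection layout. As a back-of-the-envelope sketch, usable capacity for an erasure-coded pool is roughly raw capacity × k / (k + m), where k is the number of data chunks and m the number of coding chunks. The k=2, m=1 profile below is an assumption chosen to match the 144TB raw / 96TB usable row, not a prescribed setting.

<pre>
# Rough capacity estimate for an erasure-coded pool: usable = raw * k / (k + m).
# The k=2, m=1 profile is an assumption matching the table's 144TB/96TB row;
# actual profiles vary per deployment.

def usable_capacity_tb(raw_tb: float, k: int, m: int) -> float:
    """Return approximate usable TB for k data + m coding chunks."""
    return raw_tb * k / (k + m)

servers, drives_per_server, drive_tb = 4, 9, 4
raw = servers * drives_per_server * drive_tb   # 4 * 9 * 4 = 144 TB raw
print(usable_capacity_tb(raw, k=2, m=1))       # -> 96.0 TB usable
</pre>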
'''ALL-FLASH OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>152TB raw / 76TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>SSD</center>
|<center>S3 / SWIFT</center>
|OpenStack<br>Virtualization<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>10 x 3.8TB SATA SSD (data)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2K4c5Ur|View Design}}</center>
|-
|}
  
== Expected Performance ==

Adding servers boosts performance as the configuration scales out.  Small 3x server configurations deliver between 400MB/sec and 1.2GB/sec depending on object size and the number of concurrent client connections.  Erasure coding is recommended for best write performance and replica mode for best read performance.  Each server can present S3 access to the object storage, and load balancing of client connections across the servers can be done in a variety of ways, including round-robin DNS.
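
A minimal sketch of what round-robin DNS looks like from the client side, assuming a hypothetical hostname qs-object.example.com that carries one A record per server presenting S3:

<pre>
# Minimal sketch: inspect round-robin DNS distribution for an object storage
# endpoint. "qs-object.example.com" is a hypothetical hostname with one
# A record per QuantaStor server in the cluster.
import socket

addresses = {
    info[4][0]
    for info in socket.getaddrinfo(
        "qs-object.example.com", 443, proto=socket.IPPROTO_TCP
    )
}

# With round-robin DNS, each server's IP should appear in the answer set,
# and clients are spread across them as they resolve the name.
for ip in sorted(addresses):
    print(ip)
</pre>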
== Quick Configuration Steps ==
 
'''Step One:''' Storage Management Tab &rarr; Create Grid &rarr; Click OK<br>
'''Step Two:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 2nd Appliance IP &rarr; Click OK<br>
'''Step Three:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 3rd Appliance IP &rarr; Click OK<br>
'''Step Four:''' Storage Management Tab &rarr; Controllers & Enclosures &rarr; Right-click Controller &rarr; Configure Pass-thru devices &rarr; Select all &rarr; Click OK (do this for each controller on each appliance)<br>
'''Step Five:''' Storage Management Tab &rarr; Modify Storage System &rarr; Set Domain Suffix &rarr; Click OK (do this for each appliance)<br>
'''Step Six:''' Scale-out Object Storage Tab &rarr; Create Ceph Cluster &rarr; Select all appliances &rarr; Click OK<br>
'''Step Seven:''' Scale-out Object Storage Tab &rarr; Multi-Create OSDs &rarr; Select SSDs as journals, HDDs as OSDs &rarr; Click OK<br>
'''Step Eight:''' Scale-out Object Storage Tab &rarr; Create Object Storage Pool Group &rarr; Select Erasure Coding Mode &rarr; Click OK<br>
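
After Step Eight the cluster can be smoke-tested end to end with any S3 client. A minimal sketch with boto3, again using a placeholder endpoint and credentials and a hypothetical bucket name:

<pre>
# Post-configuration smoke test: create a bucket, write an object, and read
# it back over the S3 API. Endpoint, credentials, and bucket name are
# placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-object.example.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="smoke-test")
s3.put_object(Bucket="smoke-test", Key="hello.txt", Body=b"hello, object storage")
body = s3.get_object(Bucket="smoke-test", Key="hello.txt")["Body"].read()
assert body == b"hello, object storage"
print("S3 round trip OK")
</pre>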
 
== Technical Specifications ==
 
 
'''QuantaStor Scale-out Object Storage Appliances'''
 
{| class="wikitable" style="font-size:12px;"
 
|-
 
|'''Hardware'''
 
|'''Networking'''
 
|'''Licenses and Fees'''
 
|-
 
|Small and Medium Tier configurations use 2RU server with 12x drive bays. <br>Large Tier configurations use 4RU server with 24x or 36x drive bays. <br>All configurations include mirrored boot, redundant power.<br>All configurations include 2x 800GB SSD for write acceleration.
 
|2x 10GbE Private Network ports recommended for all configurations. <br>Use LACP bonding for network redundancy and improved throughput.
 
|Select the OSNEXUS operating system QuantaStor license tier based on the amount of raw storage capacity.<br> License, maintenance, upgrades, and support included.
 
|-
 
|}
 
 
== Configuration Options ==
 
 
'''Scale-out Object Storage Configuration Options'''
 
{| class="wikitable"
 
|-
 
|'''Medium Tier - Hybrid Object Storage – 48TB'''
 
|'''Large Tier - Hybrid Object Storage – 128TB'''
 
|'''Extra-large Tier - Hybrid Object Storage – 256TB'''
 
|-
 
|2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, <br>QS Medium Tier License, 1x 800GB SSDs (write log), <br>2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports
 
|2x E5-2650v4, 128GB RAM, 16x 8TB, QS Large Tier License <br> 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot) <br> 2x 10GbE Private Network ports
 
|2x E5-2650v4, 256GB RAM, 32x 8TB, QS XL Tier License <br> 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot) <br> 2x 10GbE Private Network ports <br>
 
|-
 
|}
 
