Configure Scale-out Storage Cluster
 
[[Category:ibm_guide]]

IBM Cloud deployments of QuantaStor scale-out storage clusters provide dedicated and highly available file, block, and object storage which can be deployed in any of the IBM Cloud data centers worldwide.  Scale-out clusters use Ceph technology and provide NAS (NFS/SMB), SAN (iSCSI) and S3/SWIFT compatible object storage support in a single deployment.  As the name implies, scale-out clusters may be expanded by adding storage media and/or additional QuantaStor servers.
  
Each server in a scale-out cluster must contain at least one SSD to accelerate write performance, and each cluster must contain a minimum of 3x servers (6+ recommended) and a maximum of 32x servers (scalable to over 10PB in the IBM Cloud). QuantaStor’s storage grid technology enables systems to be grouped together and managed as a storage grid that can span IBM Cloud datacenters.  Within a storage grid one or more storage clusters may be provisioned and managed via the QuantaStor web-based management interface.
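
These sizing rules are easy to encode when planning a deployment. The sketch below is a trivial planning check that restates the constraints from the paragraph above; it is illustrative only and not part of any QuantaStor tooling.

<syntaxhighlight lang="python">
# Sanity-check a planned cluster against the sizing rules above:
# 3 to 32 servers (6+ recommended), at least one SSD per server.
def check_cluster(servers: int, ssds_per_server: int) -> None:
    assert 3 <= servers <= 32, "clusters require between 3 and 32 servers"
    assert ssds_per_server >= 1, "each server needs an SSD for write acceleration"
    if servers < 6:
        print("note: 6+ servers are recommended")

check_cluster(servers=4, ssds_per_server=1)
</syntaxhighlight>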
  
''' OSNEXUS Videos '''
* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=CwGYaGMuSfU]] [https://www.youtube.com/watch?v=CwGYaGMuSfU Covers QuantaStor 5 S3 Reverse Proxy for IBM Cloud Object Storage (15:43)]
* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=VpfcjZDO3Ys]] [https://www.youtube.com/watch?v=VpfcjZDO3Ys Covers Designing Ceph clusters with the QuantaStor solution design web app (17:44)]

== Key Features ==

'''QuantaStor Scale-out Object Storage'''

*S3 Object Storage Compatible REST API support (see the client sketch below)
*Hybrid HDD + SSD configuration to boost write performance
*Easy web-based management of all systems with built-in QuantaStor storage grid technology.
*Easy expansion by adding systems and/or drives.

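Because the object layer exposes an S3-compatible REST API, any standard S3 SDK or tool can be pointed at a QuantaStor object storage cluster. Below is a minimal Python sketch using boto3; the endpoint hostname and access keys are placeholders for illustration, not QuantaStor defaults.

<syntaxhighlight lang="python">
import boto3
from botocore.client import Config

# Placeholder endpoint and credentials -- substitute the S3 endpoint and
# the access keys configured for your object storage pool group.
s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-node1.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    config=Config(signature_version="s3v4"),
)

# List all buckets visible to this S3 user.
for bucket in s3.list_buckets().get("Buckets", []):
    print(bucket["Name"])
</syntaxhighlight>

The same client works against any appliance in the cluster, or against a DNS name that rotates across appliances for simple load balancing.
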
== Configurations ==

'''HYBRID OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>144TB raw / 96TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 4TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2QVqH95|View Design}}</center>
|-
|<center>432TB raw / 288TB usable<br><br>QS 5 Image: 4 x LG (128TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>128GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 12TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2Ir2V0N|View Design}}</center>
|-
|<center>3072TB raw / 2026TB usable<br><br>QS 5 Image: 8 x 2XL (384TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (8x):<br>Intel Xeon Dual E5-2620 / 36x LFF bay server<br>256GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>32 x 12TB SATA HDD (data)<br>2 x 960GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2WYhpil|View Design}}</center>
|}
'''ALL-FLASH OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>152TB raw / 76TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>SSD</center>
|<center>S3 / SWIFT</center>
|OpenStack<br>Virtualization<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>10 x 3.8TB SATA SSD (data)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2K4c5Ur|View Design}}</center>
|}
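
The raw capacities in the tables above follow directly from each bill of materials (servers × data drives × drive size), while the usable capacities depend on the data-protection layout chosen for the object storage pool group. The short Python model below reproduces two of the rows; the efficiency factors are illustrative assumptions (roughly 2/3 for an erasure-coded hybrid layout, 1/2 for a 2x-replica all-flash layout), not QuantaStor constants.

<syntaxhighlight lang="python">
def capacity_tb(servers, data_drives_per_server, drive_tb, efficiency):
    """Return (raw, usable) capacity in TB for a uniform cluster."""
    raw = servers * data_drives_per_server * drive_tb
    return raw, raw * efficiency

# Hybrid small config: 4 servers x 9 x 4TB = 144TB raw, ~96TB usable
# assuming erasure coding at roughly 2/3 storage efficiency.
print(capacity_tb(4, 9, 4, 2 / 3))     # (144, 96.0)

# All-flash config: 4 servers x 10 x 3.8TB = 152TB raw, 76TB usable
# assuming 2x replication.
print(capacity_tb(4, 10, 3.8, 1 / 2))  # (152.0, 76.0)
</syntaxhighlight>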

== Quick Configuration Steps ==

'''Step One:''' Storage Management Tab &rarr; Create Grid &rarr; Click OK<br>
'''Step Two:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 2nd Appliance IP &rarr; Click OK<br>
'''Step Three:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 3rd Appliance IP &rarr; Click OK<br>
'''Step Four:''' Storage Management Tab &rarr; Controllers & Enclosures &rarr; Right-click Controller &rarr; Configure Pass-thru devices &rarr; Select all &rarr; Click OK (do this for each controller on each appliance)<br>
'''Step Five:''' Storage Management Tab &rarr; Modify Storage System &rarr; Set Domain Suffix &rarr; Click OK (do this for each appliance)<br>
'''Step Six:''' Scale-out Object Storage Tab &rarr; Create Ceph Cluster &rarr; Select all appliances &rarr; Click OK<br>
'''Step Seven:''' Scale-out Object Storage Tab &rarr; Multi-Create OSDs &rarr; Select SSDs as journals, HDDs as OSDs &rarr; Click OK<br>
'''Step Eight:''' Scale-out Object Storage Tab &rarr; Create Object Storage Pool Group &rarr; Select Erasure Coding Mode &rarr; Click OK<br>
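
Once Step Eight completes, each appliance in the cluster can serve S3 requests for the new object storage pool group. A quick end-to-end smoke test in Python (endpoint and keys are again placeholders for your deployment):

<syntaxhighlight lang="python">
import boto3

# Placeholder endpoint and credentials for the newly created pool group.
s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-node1.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "smoke-test"
s3.create_bucket(Bucket=bucket)                      # create a test bucket
s3.put_object(Bucket=bucket, Key="hello.txt",        # write a small object
              Body=b"hello from QuantaStor")
obj = s3.get_object(Bucket=bucket, Key="hello.txt")  # read it back
assert obj["Body"].read() == b"hello from QuantaStor"
print("S3 endpoint is serving objects")
</syntaxhighlight>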
