[[Category:ibm_guide]]
'''PENDING Release 5.4'''
  
QuantaStor’s scale-out block storage configurations provide highly-available iSCSI block storage suitable for a variety of use cases. Each system contains SSD to accelerate write performance, and each cluster must contain a minimum of three systems. QuantaStor’s web-based management interface, REST APIs, and QS CLI make storage provisioning and automation easy.
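
As an illustration of that automation surface, the short Python sketch below enumerates storage volumes over the REST API. It is a minimal sketch, not the official client: the service URL (port 8153 and the <code>/qstorapi/</code> path), the <code>storageVolumeEnum</code> method name, and the credentials are assumptions that should be checked against the API reference for your QuantaStor release.

<syntaxhighlight lang="python">
# Minimal sketch of calling the QuantaStor REST API from Python.
# The endpoint layout, method name, and credentials below are assumptions;
# verify them against your release's API documentation.
import requests

QS_HOST = "10.0.0.10"            # placeholder management IP of one system
QS_AUTH = ("admin", "password")  # placeholder admin credentials

def qs_call(method, **params):
    """Invoke a named API method and return the parsed JSON response."""
    url = "https://{}:8153/qstorapi/{}".format(QS_HOST, method)
    # verify=False tolerates the self-signed certificate most systems ship with
    resp = requests.get(url, params=params, auth=QS_AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for vol in qs_call("storageVolumeEnum"):   # assumed method name
        print(vol.get("name"), vol.get("size"))
</syntaxhighlight>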
  
== Key Features ==
 
*S3 & SWIFT Compatible REST API support
*Hybrid HDD + SSD configuration to boost write performance
*Easy web-based management of all systems with built-in QuantaStor storage grid technology
*Easy expansion by adding systems and/or drives
  
== Reference Configurations ==
  
'''Scale-out iSCSI Block Storage Configuration Options'''
  
Add additional drives and/or additional systems to expand capacity. Drives can be different sizes and systems can have different numbers of drives, but for optimal performance a uniform drive count and capacity per QuantaStor system is recommended.

Adding systems boosts performance as the configuration scales out. A small three-system configuration delivers between 400MB/sec and 1.2GB/sec of sequential throughput, depending on block size and the number of concurrent client connections. Erasure coding is recommended for best write performance and replica mode for best read performance.
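
The capacity side of that trade-off can be estimated with the standard Ceph overhead formulas (usable ≈ raw ÷ replica count for a replicated pool, usable ≈ raw × k/(k+m) for erasure coding). The sketch below is illustrative only: it uses the Small Tier drive counts from the tables that follow, a replica count of 3, and a hypothetical 2+1 erasure-coding profile, and it ignores metadata and free-space overhead.

<syntaxhighlight lang="python">
# Rough usable-capacity comparison for a hypothetical three-system cluster
# built from the Small Tier reference configuration (8x 2TB HDD per system).
systems, drives_per_system, drive_tb = 3, 8, 2.0
raw_tb = systems * drives_per_system * drive_tb      # 48 TB raw

replica_count = 3                                    # replica mode
usable_replica = raw_tb / replica_count              # ~16 TB usable

k, m = 2, 1                                          # assumed 2+1 erasure-coding profile
usable_ec = raw_tb * k / (k + m)                     # ~32 TB usable

print("raw {:.0f} TB | replica-{} {:.0f} TB | EC {}+{} {:.0f} TB".format(
    raw_tb, replica_count, usable_replica, k, m, usable_ec))
</syntaxhighlight>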
{| class="wikitable" style="width: 100%;"
 
+
|-
{{Titlebar|EXPECTED PERFORMANCE}}
+
| style="width: 40%"|'''Small Tier - Hybrid Block Storage – 16TB'''
 
+
| style="width: 30%"|'''Use Cases'''
Adding appliances will boost performance as the configuration scales-out. Small 3x appliance configurations will deliver between 400MB/sec and 1.2GB/sec sequential throughput depending on block size and number of concurrent client connections. Erasure-coding is recommended for best write performance and replica mode is recommended for best read performance.  
+
| style="width: 30%"|'''System View'''
 
+
|-
{{Titlebar|QUICK CONFIGURATION STEPS}}
+
|2x E5-2650v4 or 4110, 64GB RAM, 8x 2TB SATA, <br>QS Small Tier License, 1x 800GB SSDs (write log), <br>2x 1TB SATA (mirrored boot), <br>2x 10GbE Private Network ports
 +
|Highly-available block storage for virtualization (VMware, Hyper-V, XenServer)<br>Scale-out block storage with Cinder integration for KVM based OpenStack deployments<br>Highly-available block storage for databases
 +
|[[File:Svr_supermicro_2u12_6029p.png | center]]
 +
<center>
 +
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
 +
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
 +
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
 +
</center>
 +
|-
 +
|}
 +
{| class="wikitable" style="width: 100%;"
 +
|-
 +
| style="width: 40%"|'''Medium Tier - Hybrid Block Storage – 48TB'''
 +
| style="width: 30%"|'''Use Cases'''
 +
| style="width: 30%"|'''System View'''
 +
|-
 +
|2x E5-2620v4 or 4110, 128GB RAM, <br>8x 6TB, QS Medium Tier License, <br>1x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), <br>2x 10GbE Private Network ports
 +
|Highly-available block storage for virtualization (VMware, Hyper-V, XenServer)<br>Scale-out block storage with Cinder integration for KVM based OpenStack deployments<br>Highly-available block storage for databases
 +
|[[File:Svr_supermicro_2u12_6029p.png | center]]
 +
<center>
 +
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
 +
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
 +
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
 +
</center>
 +
|-
 +
|}
 +
{| class="wikitable" style="width: 100%;"
 +
|-
 +
| style="width: 40%"|'''Large Tier - Hybrid Block Storage – 128TB'''
 +
| style="width: 30%"|'''Use Cases'''
 +
| style="width: 30%"|'''System View'''
 +
|-
 +
|2x E5-2620v4 or 4110, 128GB RAM, <br>16x 8TB SATA, QS Large Tier License, <br>2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), <br>2x 10GbE Private Network ports
 +
|Highly-available block storage for virtualization (VMware, Hyper-V, XenServer)<br>Scale-out block storage with Cinder integration for KVM based OpenStack deployments<br>Highly-available block storage for databases
 +
|[[File:Svr_supermicro_2u12_6029p.png | center]]
 +
<center>
 +
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
 +
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
 +
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
 +
</center>
 +
|-
 +
|}
== Quick Configuration Steps ==
  
 
<span style="font-size:80%; line-height: 1.0em;">
 
'''Step One:''' Storage Management Tab &rarr; Create Grid &rarr; Click OK <br>
'''Step Two:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 2nd System IP &rarr; Click OK <br>
'''Step Three:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 3rd System IP &rarr; Click OK <br>
'''Step Four:''' Storage Management Tab &rarr; Modify Storage System &rarr; Set Domain Suffix &rarr; Click OK (do this for each system) <br>
'''Step Five:''' Storage Management Tab &rarr; Controllers & Enclosures &rarr; right-click Controller &rarr; Configure Pass-thru devices &rarr; Select all &rarr; Click OK (do this for each system) <br>
'''Step Six:''' Scale-out Block & Object Storage Tab &rarr; Ceph Cluster &rarr; Create Ceph Cluster &rarr; Select all systems &rarr; Click OK <br>
'''Step Seven:''' Scale-out Block & Object Storage Tab &rarr; Ceph Cluster &rarr; Multi-Create OSDs &rarr; Select SSDs as journals, HDDs as OSDs &rarr; Click OK <br>
'''Step Eight:''' Scale-out Block & Object Storage Tab &rarr; Storage Pools (Ceph) &rarr; Create Storage Pool &rarr; Select Erasure Coding Mode &rarr; Click OK <br>
'''Step Nine:''' Cluster Resource Management &rarr; Create Site Cluster &rarr; Select All &rarr; Click OK <br>
'''Step Ten:''' Cluster Resource Management &rarr; Add Site Interface &rarr; Enter IP address for iSCSI Access &rarr; Click OK <br>
'''Step Eleven:''' Storage Management Tab &rarr; Hosts &rarr; Add Host &rarr; Enter host IQN &rarr; Click OK (do this for each host) <br>
'''Step Twelve:''' Storage Management Tab &rarr; Storage Volumes &rarr; Create Storage Volume &rarr; Enter volume name, select a size &rarr; Click OK <br>
'''Step Thirteen:''' Storage Management Tab &rarr; Hosts &rarr; Right-click Host &rarr; Assign Volumes &rarr; Select volumes &rarr; Click OK <br>
</span>
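
For repeated deployments, steps eleven through thirteen can also be scripted against the REST API instead of the web UI. The sketch below is a hedged outline only: it reuses the assumed <code>/qstorapi/</code> URL layout from the earlier example, and the method and parameter names (<code>hostAdd</code>, <code>storageVolumeCreate</code>, <code>storageVolumeAclAddRemove</code>) are illustrative guesses that should be confirmed against the API reference or the <code>qs</code> CLI help for your release.

<syntaxhighlight lang="python">
# Hedged sketch of scripting Steps Eleven through Thirteen over the REST API.
# All method and parameter names are assumptions; confirm them before use.
import requests

BASE = "https://10.0.0.10:8153/qstorapi/"   # placeholder management address
AUTH = ("admin", "password")                # placeholder credentials

def call(method, **params):
    resp = requests.get(BASE + method, params=params, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Step Eleven: register a client host by its iSCSI initiator IQN (example IQN).
call("hostAdd", hostname="kvm-node1",
     iqn="iqn.1993-08.org.debian:01:abcdef123456")

# Step Twelve: create a storage volume in the scale-out (Ceph) pool.
call("storageVolumeCreate", name="vm-datastore-1", size="1TB",
     provisionableId="ceph-pool-1")          # assumed pool identifier parameter

# Step Thirteen: assign the new volume to the registered host.
call("storageVolumeAclAddRemove", host="kvm-node1",
     volumeList="vm-datastore-1", modType="add")
</syntaxhighlight>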
 
 
== Technical Specifications ==

'''QuantaStor SAN/NAS Storage Systems'''

{| class="wikitable" style="font-size:12px;"
|-
|'''Hardware'''
|'''Networking'''
|'''Licenses and Fees'''
|-
|Small and Medium Tier configurations use a 2RU server with 12x drive bays. <br>Large Tier configurations use a 4RU server with 24x or 36x drive bays. <br>All configurations include mirrored boot drives and redundant power. <br>All configurations include 800GB SSDs for write acceleration.
|2x 10GbE Private Network ports are recommended for all configurations. <br>Use LACP bonding for network redundancy and improved throughput.
|Select the OSNEXUS QuantaStor license tier based on the amount of raw storage capacity. <br>License, maintenance, upgrades and support are included.
|-
|}
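
Because the license tier is keyed to raw capacity, a quick back-of-the-envelope check like the one below helps confirm where a planned cluster lands. The per-system figures come straight from the reference configurations above; the three-system multiplier is simply the minimum cluster size, not a license threshold.

<syntaxhighlight lang="python">
# Raw capacity per system for the reference configurations above, plus the
# total for a minimum three-system cluster.  Figures are drive count x drive size.
configs = {
    "Small Tier":  (8, 2),    # 8x 2TB SATA  -> 16 TB raw per system
    "Medium Tier": (8, 6),    # 8x 6TB       -> 48 TB raw per system
    "Large Tier":  (16, 8),   # 16x 8TB SATA -> 128 TB raw per system
}

systems = 3  # minimum cluster size
for tier, (drives, tb) in configs.items():
    per_system = drives * tb
    print("{}: {} TB raw per system, {} TB raw across {} systems".format(
        tier, per_system, per_system * systems, systems))
</syntaxhighlight>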
 
 
 
