- 2. IBM Cloud Scale-out Block Storage Cluster

From OSNEXUS Online Documentation Site
== Quick Configuration Steps ==
 
 
 
 
'''Step One:''' Storage Management Tab &rarr; Create Grid &rarr; Click OK <br>
 
'''Step Two:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 2nd Appliance IP &rarr; Click OK <br>
 
'''Step Three:''' Storage Management Tab &rarr; Add Grid Node &rarr; Enter 3rd Appliance IP &rarr; Click OK <br>
 
'''Step Four:''' Storage Management Tab &rarr; Modify Storage System &rarr; Set Domain Suffix &rarr; Click OK (do this for each appliance) <br>
 
'''Step Five:''' Storage Management Tab &rarr; Controllers & Enclosures &rarr; right-click Controller &rarr; Configure Pass-thru devices &rarr; Select all &rarr; Click OK (do this for each appliance) <br>
 
'''Step Six:''' Scale-out Block & Object Storage Tab &rarr; Ceph Cluster &rarr; Create Ceph Cluster &rarr; Select all appliances &rarr; Click OK <br>
 
'''Step Seven:''' Scale-out Block & Object Storage Tab &rarr; Ceph Cluster &rarr; Multi-Create OSDs &rarr; Select SSDs as journals, HDDs as OSDs &rarr; Click OK <br>
 
'''Step Eight:''' Scale-out Block & Object Storage Tab &rarr; Storage Pools (Ceph) &rarr; Create Storage Pool &rarr; Select Erasure Coding Mode &rarr; Click OK <br>
 
'''Step Nine:''' Cluster Resource Management &rarr; Create Site Cluster &rarr; Select All &rarr; Click OK <br>
 
'''Step Ten:''' Cluster Resource Management &rarr; Add Site Interface &rarr; Enter IP address for iSCSI Access &rarr; Click OK <br>
 
'''Step Eleven:''' Storage Management Tab &rarr; Hosts &rarr; Add Host &rarr; Enter host IQN &rarr; Click OK (do this for each host) <br>
 
'''Step Twelve:''' Storage Management Tab &rarr; Storage Volumes &rarr; Create Storage Volume &rarr; Enter volume name, select a size &rarr; Click OK <br>
 
'''Step Thirteen:''' Storage Management Tab &rarr; Hosts &rarr; Right-click Host &rarr; Assign Volumes &rarr; Select volumes &rarr; Click OK <br>
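After steps ten through thirteen, each host reaches its assigned volumes over iSCSI. As a hedged sketch (not part of the guide itself), the Linux client-side attach sequence can be expressed with standard open-iscsi commands; the portal address below is a hypothetical placeholder for the site interface IP entered in Step Ten.

```python
# Hedged sketch of the Linux client-side iSCSI attach sequence that follows
# steps ten through thirteen. Commands are standard open-iscsi (iscsiadm);
# the portal IP is a hypothetical placeholder, not a value from this guide.

def attach_commands(portal: str) -> list[str]:
    """Return the shell commands a Linux initiator runs, in order."""
    return [
        # Show this host's IQN -- the value entered in Step Eleven:
        "cat /etc/iscsi/initiatorname.iscsi",
        # Discover targets exposed on the site interface from Step Ten:
        f"iscsiadm -m discovery -t sendtargets -p {portal}",
        # Log in to the discovered targets; assigned volumes appear as /dev/sdX:
        "iscsiadm -m node --login",
    ]

for cmd in attach_commands("10.0.0.50"):  # placeholder site interface IP
    print(cmd)
```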
 
  
 

Revision as of 10:49, 2 May 2019

== Key Features ==

QuantaStor’s scale-out block storage configurations provide highly-available iSCSI block storage suitable for a variety of use cases. Each appliance contains SSDs to accelerate write performance, and each cluster must contain a minimum of 3x appliances. QuantaStor’s web-based management interface, REST APIs, and QS CLI make storage provisioning and automation easy.

'''QuantaStor Scale-out iSCSI Block Storage Features'''

* S3 & SWIFT compatible REST API support
* Hybrid HDD + SSD configuration to boost write performance
* Easy web-based management of all appliances with built-in QuantaStor storage grid technology
* Easy expansion by adding appliances and/or drives

== Performance ==

Adding appliances boosts performance as the configuration scales out. Small 3x appliance configurations deliver between 400MB/sec and 1.2GB/sec sequential throughput, depending on block size and the number of concurrent client connections. Erasure-coding is recommended for best write performance; replica mode is recommended for best read performance.
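The erasure-coding versus replication choice also changes usable capacity, which is worth weighing alongside the read/write trade-off above. A minimal worked sketch, assuming an illustrative 2+1 erasure profile and 3x replication (neither value is stated by this guide):

```python
# Minimal capacity sketch: erasure coding vs. replication on the same raw space.
# The 2+1 erasure profile and 3x replica count are illustrative assumptions,
# not values taken from this guide.

def usable_tb_ec(raw_tb: float, k: int, m: int) -> float:
    """Erasure coding splits data into k data chunks plus m coding chunks,
    so k/(k+m) of the raw capacity holds data."""
    return raw_tb * k / (k + m)

def usable_tb_replica(raw_tb: float, copies: int) -> float:
    """Replication keeps full copies, so 1/copies of the raw capacity holds data."""
    return raw_tb / copies

RAW_TB = 48  # e.g. one Medium Tier appliance's 8x 6TB of raw HDD

print(usable_tb_ec(RAW_TB, 2, 1))    # 2+1 erasure coding -> 32.0 TB usable
print(usable_tb_replica(RAW_TB, 3))  # 3x replication     -> 16.0 TB usable
```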

== Reference Configurations ==

'''Scale-out iSCSI Block Storage Configuration Options'''

{| class="wikitable"
! Configuration !! System !! Use Cases !! System View
|-
| Small Tier – Hybrid Block Storage – 16TB
| 2x E5-2650v4 or 4110, 64GB RAM, 8x 2TB SATA, QS Small Tier License, 1x 800GB SSD (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
| Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases
| [[File:Svr supermicro 2u12 6029p.png]]
|-
| Medium Tier – Hybrid Block Storage – 48TB
| 2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, QS Medium Tier License, 1x 800GB SSD (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
| Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases
| [[File:Svr supermicro 2u12 6029p.png]]
|-
| Large Tier – Hybrid Block Storage – 128TB
| 2x E5-2620v4 or 4110, 128GB RAM, 16x 8TB SATA, QS Large Tier License, 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
| Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases
| [[File:Svr supermicro 2u12 6029p.png]]
|}