- 2. IBM Cloud Scale-out Block Storage Cluster

From OSNEXUS Online Documentation Site

Revision as of 13:15, 25 March 2019

 KEY FEATURES

QuantaStor’s scale-out block storage configurations provide highly-available iSCSI block storage suitable for a variety of use cases. Each appliance contains SSDs to accelerate write performance, and each cluster must contain a minimum of three appliances. QuantaStor’s web-based management interface, REST API, and QS CLI make storage provisioning and automation easy.

QuantaStor SAN/NAS Storage Appliance Features

  • Provides NFS, SMB and iSCSI storage
  • Compression and AES 256-bit encryption
  • Remote replication between QuantaStor appliances across IBM data centers and on-premises locations for hybrid cloud
  • Snapshot schedules with long-term snapshot retention options
  • Hybrid SSD+HDD configurations provide economical capacity with performance
  • All-Flash configurations deliver maximum IOPS
  • Easy web-based management with Storage Grid technology
  • NIST 800-53, HIPAA, CJIS compliant
  • FIPS 140-2 Level 1 compliant options available Q4/18
  • Advanced Role-Based Access Controls
  • Automatable via QS CLI and REST API
  • VMware certified and VMware VAAI integrated
 USE CASES
  • Highly-available block storage for virtualization (VMware, Hyper-V, XenServer)
  • Scale-out block storage with Cinder integration for KVM-based OpenStack deployments
  • Highly-available block storage for databases
 EXPANDABLE

Add additional drives and/or additional appliances to expand capacity. Drives can be different sizes and appliances can have different drive counts, but for optimal performance a uniform drive count and capacity per QS server/appliance is recommended.

 EXPECTED PERFORMANCE

Adding appliances will boost performance as the configuration scales out. A small three-appliance configuration will deliver between 400MB/sec and 1.2GB/sec of sequential throughput, depending on block size and the number of concurrent client connections. Erasure-coding is recommended for best write performance and replica mode is recommended for best read performance.
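
The choice between replica mode and erasure coding also determines usable capacity. A minimal sketch, assuming a three-appliance Medium Tier cluster (8x 6TB drives per appliance, 48TB raw each); the 3-way replica factor and the 4+2 erasure-coding profile used below are illustrative assumptions, not fixed QuantaStor defaults:

```shell
#!/bin/sh
# Rough usable-capacity comparison for a 3-appliance Medium Tier cluster.
RAW_PER_APPLIANCE_TB=48
APPLIANCES=3
RAW_TOTAL_TB=$(( RAW_PER_APPLIANCE_TB * APPLIANCES ))

# 3-way replication: one usable copy out of every three written
REPLICA_USABLE_TB=$(( RAW_TOTAL_TB / 3 ))

# Erasure coding 4+2: 4 data chunks out of every 6 chunks stored
EC_USABLE_TB=$(( RAW_TOTAL_TB * 4 / 6 ))

echo "raw=${RAW_TOTAL_TB}TB replica3=${REPLICA_USABLE_TB}TB ec4+2=${EC_USABLE_TB}TB"
```

With these assumptions, erasure coding doubles usable capacity (96TB vs 48TB) at the same raw footprint, which is why the mode chosen in the Create Storage Pool step matters beyond performance alone.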

 QUICK CONFIGURATION STEPS

Step One: Storage Management Tab -> Create Grid -> Click OK
Step Two: Storage Management Tab -> Add Grid Node -> Enter 2nd Appliance IP -> Click OK
Step Three: Storage Management Tab -> Add Grid Node -> Enter 3rd Appliance IP -> Click OK
Step Four: Storage Management Tab -> Modify Storage System -> Set Domain Suffix -> Click OK (do this for each appliance)
Step Five: Storage Management Tab -> Controllers & Enclosures -> right-click Controller -> Configure Pass-thru devices -> Select all -> Click OK (do this for each appliance)
Step Six: Scale-out Block & Object Storage Tab -> Ceph Cluster -> Create Ceph Cluster -> Select all appliances -> Click OK
Step Seven: Scale-out Block & Object Storage Tab -> Ceph Cluster -> Multi-Create OSDs -> Select SSDs as journals, HDDs as OSDs -> Click OK
Step Eight: Scale-out Block & Object Storage Tab -> Storage Pools (Ceph) -> Create Storage Pool -> Select Erasure Coding Mode -> Click OK
Step Nine: Cluster Resource Management -> Create Site Cluster -> Select All -> Click OK
Step Ten: Cluster Resource Management -> Add Site Interface -> Enter IP address for iSCSI Access -> Click OK
Step Eleven: Storage Management Tab -> Hosts -> Add Host -> Enter host IQN -> Click OK (do this for each host)
Step Twelve: Storage Management Tab -> Storage Volumes -> Create Storage Volume -> Enter volume name, select a size -> Click OK
Step Thirteen: Storage Management Tab -> Hosts -> Right-click Host -> Assign Volumes -> Select volumes -> Click OK
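
From the client side, Steps Ten through Thirteen can be verified with the standard Linux open-iscsi initiator. A minimal sketch; 10.0.0.50 is a placeholder for the site interface IP entered in Step Ten, and the target IQN reported by discovery will vary per deployment:

```shell
#!/bin/sh
# Placeholder portal address: the site interface IP from Step Ten, port 3260.
PORTAL=10.0.0.50:3260

# Print this client's initiator IQN -- the value to register via
# Add Host in Step Eleven.
cat /etc/iscsi/initiatorname.iscsi

# Discover the targets exposed on the site interface, then log in.
iscsiadm -m discovery -t sendtargets -p "$PORTAL"
iscsiadm -m node -p "$PORTAL" --login

# Confirm the session; the assigned volume appears as a new /dev/sd* device.
iscsiadm -m session
```

Since the site interface IP is a floating cluster resource, the client logs in to a single portal address and the cluster handles failover between appliances.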


 TECHNICAL SPECIFICATIONS

QuantaStor SAN/NAS Storage Appliances

Hardware
  • Small and Medium Tier configurations use a 2RU server with 12 drive bays
  • Large Tier configurations use a 4RU server with 24 or 36 drive bays
  • All configurations include mirrored boot
  • All Hybrid configurations include 2x 800GB SSD for write acceleration
Networking
  • 2x 10GbE Private Network ports recommended for all configurations
  • Use LACP-bonded network ports when NFS-based Datastores are to be configured
Licenses and Fees
  • Select the OSNEXUS QuantaStor license tier based on the amount of raw storage capacity
  • License, maintenance, upgrades, and support included
 CONFIGURATIONS

Scale-out iSCSI Block Storage Configuration Options

Small Tier - Hybrid Block Storage - 16TB per appliance
  2x E5-2650v4 or 4110, 64GB RAM, 8x 2TB SATA, QS Small Tier License, 1x 800GB SSD (write log),
  2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports

Medium Tier - Hybrid Block Storage - 48TB per appliance
  2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, QS Medium Tier License, 1x 800GB SSD (write log),
  2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports

Large Tier - Hybrid Block Storage - 128TB per appliance
  2x E5-2620v4 or 4110, 128GB RAM, 16x 8TB SATA, QS Large Tier License, 2x 800GB SSDs (write log),
  2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports



OSNEXUS QuantaStor Scale-out iSCSI Block Storage