Configure Scale-out Storage Cluster

From OSNEXUS Online Documentation Site
Revision as of 16:37, 6 June 2019 by Qadmin (Talk | contribs)


QuantaStor’s scale-out object configurations provide S3- and SWIFT-compatible REST API support. Configurations scale out by adding drives and systems. Each system contains SSDs to accelerate write performance, and each cluster must contain a minimum of 3 systems and a maximum of 32 systems (10PB maximum). QuantaStor’s storage grid technology enables systems to be grouped together and managed as a single storage grid that can span IBM datacenters. Within a storage grid, one or more object storage clusters may be provisioned and managed from QuantaStor’s web-based management interface as well as via the QS REST API and the QS CLI.
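The cluster-size bounds above (minimum 3 systems, maximum 32) can be expressed as a simple pre-flight check. This is an illustrative sketch; the function name is not part of any QuantaStor API:

```python
MIN_SYSTEMS, MAX_SYSTEMS = 3, 32  # per-cluster bounds stated above

def cluster_size_ok(n_systems: int) -> bool:
    """True if a proposed cluster size is within QuantaStor's stated bounds."""
    return MIN_SYSTEMS <= n_systems <= MAX_SYSTEMS

print(cluster_size_ok(2), cluster_size_ok(8))  # False True
```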

Key Features

QuantaStor Scale-out Object Storage

  • S3-compatible object storage REST API support
  • Hybrid HDD + SSD configuration to boost write performance
  • Easy web-based management of all systems via built-in QuantaStor storage grid technology
  • Easy expansion by adding systems and/or drives

Configurations

HYBRID OBJECT STORAGE SOLUTIONS

Capacity: 144TB raw / 96TB usable
QS 5 Image: 4 x MD (48TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workloads:
  • Highly-available large-scale object storage archive
  • Data analytics
  • Private cloud content delivery network
  • Disk-to-disk backup via S3
  • Next-generation applications using an S3 back-end
Bill of Materials (per server, 4x):
  • Dual Intel Xeon E5-2620 / 12x LFF bay server
  • 64GB RAM
  • 2 x 960GB SATA SSD (RAID10/boot)
  • 9 x 4TB SATA HDD (data)
  • 1 x 960GB SATA SSD (journal)
  • 2 x 10Gbps redundant private network uplinks

Capacity: 432TB raw / 288TB usable
QS 5 Image: 4 x LG (128TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workloads:
  • Highly-available large-scale object storage archive
  • Data analytics
  • Private cloud content delivery network
  • Disk-to-disk backup via S3
  • Next-generation applications using an S3 back-end
Bill of Materials (per server, 4x):
  • Dual Intel Xeon E5-2620 / 12x LFF bay server
  • 128GB RAM
  • 2 x 960GB SATA SSD (RAID10/boot)
  • 9 x 12TB SATA HDD (data)
  • 1 x 960GB SATA SSD (journal)
  • 2 x 10Gbps redundant private network uplinks

Capacity: 3072TB raw / 2026TB usable
QS 5 Image: 8 x 2XL (384TB)
Media Type: HDD + SSD
Protocol Support: S3 / SWIFT
Workloads:
  • Highly-available large-scale object storage archive
  • Data analytics
  • Private cloud content delivery network
  • Disk-to-disk backup via S3
  • Next-generation applications using an S3 back-end
Bill of Materials (per server, 8x):
  • Dual Intel Xeon E5-2620 / 36x LFF bay server
  • 256GB RAM
  • 2 x 960GB SATA SSD (RAID10/boot)
  • 32 x 12TB SATA HDD (data)
  • 2 x 960GB SATA SSD (journal)
  • 2 x 10Gbps redundant private network uplinks
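The capacity figures for the first two hybrid configurations can be reproduced with simple arithmetic. The 2:3 usable-to-raw ratio suggests a 2+1 erasure-coded layout; that layout is an inference for illustration, not something the tables state:

```python
# Capacity math for the 4-node hybrid MD configuration (144TB raw / 96TB usable).
servers = 4
data_drives_per_server = 9
drive_tb = 4

raw_tb = servers * data_drives_per_server * drive_tb   # 4 * 9 * 4 = 144
# Assumed 2+1 erasure coding: k data chunks + m coding chunks per object.
k, m = 2, 1
usable_tb = raw_tb * k / (k + m)                       # 144 * 2/3 = 96.0

print(raw_tb, usable_tb)  # 144 96.0
```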

ALL-FLASH OBJECT STORAGE SOLUTIONS

Capacity: 152TB raw / 76TB usable
QS 5 Image: 4 x MD (48TB)
Media Type: SSD
Protocol Support: S3 / SWIFT
Workloads:
  • OpenStack
  • Virtualization
  • Data analytics
  • Private cloud content delivery network
  • Next-generation applications using an S3 back-end
Bill of Materials (per server, 4x):
  • Dual Intel Xeon E5-2620 / 12x LFF bay server
  • 64GB RAM
  • 2 x 960GB SATA SSD (RAID10/boot)
  • 10 x 3.8TB SATA SSD (data)
  • No dedicated journal SSD (all-flash)
  • 2 x 10Gbps redundant private network uplinks
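The all-flash figures work out the same way, except the 50% raw-to-usable ratio points to 2x replication rather than erasure coding (again an inference from the numbers, not a statement in the table):

```python
# Capacity math for the 4-node all-flash configuration (152TB raw / 76TB usable).
servers = 4
data_ssds_per_server = 10
ssd_tb = 3.8

raw_tb = servers * data_ssds_per_server * ssd_tb  # 4 * 10 * 3.8 TB
replication_factor = 2   # assumed from the 50% usable ratio
usable_tb = raw_tb / replication_factor
```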

Quick Configuration Steps

Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)
Step Five: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)
Step Six: Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK
Step Seven: Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK
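With the object storage pool group in place, the cluster can be reached through its S3-compatible endpoint with any standard S3 client. Below is a minimal sketch using the third-party boto3 library; the endpoint URL, credentials, and bucket name are placeholders, not values from this document:

```python
# Minimal sketch: talking to a QuantaStor S3-compatible endpoint with boto3.
# The endpoint, credentials, and bucket below are placeholders.

def make_s3_client(endpoint_url: str, access_key: str, secret_key: str):
    """Build a boto3 S3 client pointed at an S3-compatible endpoint."""
    import boto3  # third-party dependency, imported lazily
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,      # e.g. "https://qs-cluster.example.com"
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

def upload_and_list(client, bucket: str, key: str, data: bytes):
    """Upload one object, then return all keys currently in the bucket."""
    client.put_object(Bucket=bucket, Key=key, Body=data)
    resp = client.list_objects_v2(Bucket=bucket)
    return [obj["Key"] for obj in resp.get("Contents", [])]

# Usage (requires a live endpoint and valid credentials):
# s3 = make_s3_client("https://qs-cluster.example.com", "ACCESS", "SECRET")
# print(upload_and_list(s3, "backups", "hello.txt", b"hello"))
```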