Configure Scale-out Storage Cluster

IBM Cloud deployments of QuantaStor scale-out storage clusters provide dedicated and highly available file, block, and object storage which can be deployed in any of the IBM Cloud data centers worldwide. Scale-out clusters use Ceph technology and provide NAS (NFS/SMB), SAN (iSCSI), and S3/SWIFT-compatible object storage support in a single deployment. As the name implies, scale-out clusters may be expanded by adding storage media and/or additional QuantaStor servers.

Each server in a scale-out cluster must contain at least one SSD to accelerate write performance, and each cluster must contain a minimum of three servers (six or more recommended) and a maximum of 32 servers (scalable to over 10PB in the IBM Cloud). QuantaStor’s storage grid technology enables systems to be grouped together and managed as a storage grid that can span IBM Cloud data centers. Within a storage grid, one or more storage clusters may be provisioned and managed via the QuantaStor web-based management interface.
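
Because the object layer is S3-compatible, any standard S3 client can be pointed at the cluster once an object storage user and endpoint have been provisioned. The sketch below uses Python's boto3 as an illustration; the endpoint URL and credentials are placeholders, not values from a real deployment.

  # Minimal sketch: connect a stock S3 client to a QuantaStor S3 endpoint.
  # The endpoint URL and credentials below are placeholders.
  import boto3

  s3 = boto3.client(
      "s3",
      endpoint_url="https://qs-object.example.com",  # placeholder endpoint
      aws_access_key_id="ACCESS_KEY",                # placeholder key pair
      aws_secret_access_key="SECRET_KEY",
  )

  # Listing buckets confirms the endpoint answers standard S3 API calls.
  for bucket in s3.list_buckets()["Buckets"]:
      print(bucket["Name"])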


Key Features

QuantaStor Scale-out Object Storage

  • S3-compatible object storage REST API support
  • Hybrid HDD + SSD configurations to boost write performance
  • Easy web-based management of all systems with built-in QuantaStor storage grid technology
  • Easy expansion by adding systems and/or drives

Configurations

HYBRID OBJECT STORAGE SOLUTIONS

144TB raw / 96TB usable (QS 5 Image: 4 x MD (48TB))
  • Media Type: HDD + SSD
  • Protocol Support: S3 / SWIFT
  • Workloads: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
  • Bill of Materials, per server (4x):
      Intel Xeon Dual E5-2620 / 12x LFF bay server
      64GB RAM
      2 x 800GB SATA SSD (RAID1/boot)
      9 x 4TB SATA HDD (data)
      1 x 800GB SATA SSD (journal)
      2 x 10 Gbps Redundant Private Network Uplinks
  • Design: View Design

432TB raw / 288TB usable (QS 5 Image: 4 x LG (128TB))
  • Media Type: HDD + SSD
  • Protocol Support: S3 / SWIFT
  • Workloads: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
  • Bill of Materials, per server (4x):
      Intel Xeon Dual E5-2620 / 12x LFF bay server
      128GB RAM
      2 x 800GB SATA SSD (RAID1/boot)
      9 x 12TB SATA HDD (data)
      1 x 800GB SATA SSD (journal)
      2 x 10 Gbps Redundant Private Network Uplinks
  • Design: View Design

3072TB raw / 2026TB usable (QS 5 Image: 8 x 2XL (384TB))
  • Media Type: HDD + SSD
  • Protocol Support: S3 / SWIFT
  • Workloads: Highly-available Large Scale Object Storage Archive; Data Analytics; Private Cloud Content Delivery Network; Disk-to-Disk Backup via S3; Next Generation Applications using S3 back-end
  • Bill of Materials, per server (8x):
      Intel Xeon Dual E5-2620 / 36x LFF bay server
      256GB RAM
      2 x 800GB SATA SSD (RAID1/boot)
      32 x 12TB SATA HDD (data)
      2 x 960GB SATA SSD (journal)
      2 x 10 Gbps Redundant Private Network Uplinks
  • Design: View Design

ALL-FLASH OBJECT STORAGE SOLUTIONS

152TB raw / 76TB usable (QS 5 Image: 4 x MD (48TB))
  • Media Type: SSD
  • Protocol Support: S3 / SWIFT
  • Workloads: OpenStack; Virtualization; Data Analytics; Private Cloud Content Delivery Network; Next Generation Applications using S3 back-end
  • Bill of Materials, per server (4x):
      Intel Xeon Dual E5-2620 / 12x LFF bay server
      64GB RAM
      2 x 800GB SATA SSD (RAID1/boot)
      10 x 3.8TB SATA SSD (data)
      2 x 10 Gbps Redundant Private Network Uplinks
  • Design: View Design
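
The raw and usable figures above follow from simple arithmetic: raw capacity is servers x data drives x drive size, and usable capacity depends on the data protection layout chosen when the object storage pool group is created (Step Eight in the steps below). The sketch reproduces the figures under assumed layouts: the roughly 2/3 ratio in the hybrid rows is consistent with erasure coding such as k=2, m=1, and the 1/2 ratio in the all-flash row with two-way replication, but the actual layouts are configuration choices, not fixed by these bills of materials.

  # Sketch of the raw vs. usable arithmetic behind the tables above.
  # The protection layouts are assumptions inferred from the ratios,
  # not settings taken from these configurations.

  def raw_tb(servers, data_drives, drive_tb):
      """Raw capacity in TB: data drives only, boot/journal SSDs excluded."""
      return servers * data_drives * drive_tb

  # Hybrid 4 x MD: 4 servers x 9 x 4TB HDD = 144TB raw
  hybrid_raw = raw_tb(4, 9, 4)
  hybrid_usable = hybrid_raw * 2 / 3   # ~96TB with 2/3-efficient erasure coding

  # All-flash 4 x MD: 4 servers x 10 x 3.8TB SSD = 152TB raw
  flash_raw = raw_tb(4, 10, 3.8)
  flash_usable = flash_raw / 2         # 76TB with two-way replication

  print(hybrid_raw, hybrid_usable)     # 144 96.0
  print(flash_raw, flash_usable)       # 152.0 76.0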

Quick Configuration Steps

Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)
Step Five: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)
Step Six: Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK
Step Seven: Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK
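
Once Step Eight completes, the new object storage pool group can be smoke-tested from any S3 client, for example by extending the earlier boto3 sketch with a bucket create / write / read round trip. As before, the endpoint, credentials, and bucket name are placeholders.

  # End-to-end check after Step Eight: create a bucket, write an object,
  # and read it back. Endpoint and credentials are placeholders.
  import boto3

  s3 = boto3.client(
      "s3",
      endpoint_url="https://qs-object.example.com",
      aws_access_key_id="ACCESS_KEY",
      aws_secret_access_key="SECRET_KEY",
  )

  s3.create_bucket(Bucket="smoke-test")
  s3.put_object(Bucket="smoke-test", Key="hello.txt", Body=b"hello")
  body = s3.get_object(Bucket="smoke-test", Key="hello.txt")["Body"].read()
  assert body == b"hello"  # a successful round trip confirms the pool serves S3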