IBM Cloud Scale-out Object Storage Cluster (WIP)

From OSNEXUS Online Documentation Site
Revision as of 20:34, 30 May 2019 by Qadmin (talk | contribs)

QuantaStor’s scale-out object storage configurations provide S3- and SWIFT-compatible REST API support. Configurations scale out by adding drives and systems. Each system contains SSDs to accelerate write performance, and each cluster must contain a minimum of 3 systems and a maximum of 32 systems (10PB maximum). QuantaStor’s storage grid technology enables systems to be grouped together and managed as a single storage grid that can span IBM datacenters. Within a storage grid, one or more object storage clusters may be provisioned and managed from QuantaStor’s web-based management interface as well as via the QS REST API and the QS CLI.
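Because the cluster exposes a standard S3-compatible API, ordinary S3 SDKs can be pointed at it. The following is a minimal sketch using boto3; the endpoint URL, access key, and secret key are placeholders, not values from this document, and must be replaced with the credentials issued by your QuantaStor deployment:

```python
# Sketch: accessing a QuantaStor S3-compatible endpoint with the boto3 SDK.
# All endpoint/credential values below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-object.example.com",  # hypothetical cluster endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder
    aws_secret_access_key="SECRET_KEY",            # placeholder
)

# Create a bucket and upload an object via the standard S3 API.
s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="example.txt", Body=b"hello object storage")

# List the bucket contents to verify the upload.
for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same calls work unchanged against AWS S3 or any other S3-compatible store; only `endpoint_url` and the credentials differ.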

Key Features

QuantaStor Scale-out Object Storage

  • S3-compatible object storage REST API support
  • Hybrid HDD + SSD configuration to boost write performance
  • Easy web-based management of all systems with built-in QuantaStor storage grid technology
  • Easy expansion by adding systems and/or drives

Configurations

SAN/NAS - HYBRID SOLUTIONS

Capacity: 48TB raw / 24TB usable (QS 5 Image: MD (48TB))
Media Type: HDD + SSD
Protocol Support: iSCSI, NFS / CIFS / SMB
Workload: Server Virtualization / VDI, Database Storage
Bill of Materials:
  * Intel Xeon Dual E5-2650 / 36 x LFF bay server
  * 64GB ECC RAM
  * 2 x 960GB SATA SSDs (RAID1/boot)
  * 12 x 4TB Enterprise SATA HDD (data)
  * 2 x 960GB Enterprise SATA SSD (write log)
  * 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design

Capacity: 120TB raw / 60TB usable (QS 5 Image: LG (128TB))
Media Type: HDD + SSD
Protocol Support: iSCSI, NFS / CIFS / SMB
Workload: Server Virtualization / VDI, Database Storage
Bill of Materials:
  * Intel Xeon Dual E5-2650 / 36 x LFF bay server
  * 128GB ECC RAM
  * 2 x 960GB SATA SSDs (RAID1/boot)
  * 20 x 6TB Enterprise SATA HDD (data)
  * 2 x 960GB Enterprise SATA SSD (write log)
  * 2 x 10 Gbps Redundant Private Network Uplinks
Design: View Design
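The raw-to-usable ratios above (48TB raw → 24TB usable, 120TB raw → 60TB usable) correspond to a 2x data-protection overhead, e.g. mirrored storage. A quick sketch of the arithmetic; the 2x factor is an assumption inferred from the table, not a statement of the exact pool layout:

```python
def usable_capacity_tb(drive_count, drive_tb, protection_factor=2):
    """Usable TB given the data drives and a data-protection overhead factor.

    protection_factor=2 models 2x overhead (e.g. mirroring) -- an assumption
    inferred from the raw/usable ratio in the table, not a spec from QuantaStor.
    """
    raw_tb = drive_count * drive_tb
    return raw_tb / protection_factor

# Medium config: 12 x 4TB HDDs -> 48TB raw / 24TB usable
print(usable_capacity_tb(12, 4))  # 24.0
# Large config: 20 x 6TB HDDs -> 120TB raw / 60TB usable
print(usable_capacity_tb(20, 6))  # 60.0
```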

HYBRID OBJECT STORAGE SOLUTIONS



Medium Tier - Hybrid Object Storage – 48TB
System View: 2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, QS Medium Tier License, 1x 800GB SSD (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports
Use Cases:
  * Highly-available Large Scale Object Storage Archive
  * Data Analytics
  * Private Cloud Content Delivery Network
  * Disk-to-Disk Backup via S3
  * Next Generation Applications using S3 back-end

View Design | Provisioning Guide | Deploy

Large Tier - Hybrid Object Storage – 128TB
System View: 2x E5-2650v4, 128GB RAM, 16x 8TB, QS Large Tier License, 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports
Use Cases:
  * Highly-available Large Scale Object Storage Archive
  * Data Analytics
  * Private Cloud Content Delivery Network
  * Disk-to-Disk Backup via S3
  * Next Generation Applications using S3 back-end

View Design | Provisioning Guide | Deploy

Extra Large Tier - Hybrid Object Storage – 256TB
System View: 2x E5-2650v4, 256GB RAM, 32x 8TB, QS XL Tier License, 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE Private Network ports
Use Cases:
  * Highly-available Large Scale Object Storage Archive
  * Data Analytics
  * Private Cloud Content Delivery Network
  * Disk-to-Disk Backup via S3
  * Next Generation Applications using S3 back-end

View Design | Provisioning Guide | Deploy

Quick Configuration Steps

Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)
Step Five: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)
Step Six: Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK
Step Seven: Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK
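Erasure coding (Step Eight) trades rebuild cost for capacity efficiency: each object is split into k data chunks plus m coding chunks, so usable capacity is roughly raw × k / (k + m), versus raw / 2 for 2x replication. A small sketch of that arithmetic; the 4+2 profile below is only an illustrative assumption, and the actual profile is chosen in the Create Object Storage Pool Group dialog:

```python
def ec_usable_tb(raw_tb, k, m):
    """Approximate usable TB for a k+m erasure-coded pool.

    k data chunks + m coding chunks -> efficiency k/(k+m).
    Ignores filesystem and metadata overhead.
    """
    return raw_tb * k / (k + m)

# Example: minimum 3-system cluster at 48TB raw per system = 144TB raw,
# with a hypothetical 4+2 erasure coding profile.
raw = 3 * 48
print(ec_usable_tb(raw, k=4, m=2))  # 96.0 (vs 72.0 usable at 2x replication)
```

The higher k is relative to m, the better the capacity efficiency, at the cost of wider stripes and more systems/devices involved in each read and rebuild.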