{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Medium Tier - Hybrid Object Storage – 48TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, <br>QS Medium Tier License, 1x 800GB SSD (write log), <br>2x 1TB SATA (mirrored boot), 2 x 10 Gbps Redundant Private Network Uplinks
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|[[File:Svr_supermicro_2u12_6029p.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|}

{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Large Tier - Hybrid Object Storage – 128TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2650v4, 128GB RAM, 16x 8TB, QS Large Tier License <br> 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot) <br> 2x 10GbE Private Network ports
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|[[File:Svr_supermicro_2u12_6029p.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|}

{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Extra Large Tier - Hybrid Object Storage – 256TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2650v4, 256GB RAM, 32x 8TB, QS XL Tier License <br> 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot) <br> 2x 10GbE Private Network ports
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|[[File:Svr_supermicro_2u12_6029p.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|}
Revision as of 23:18, 30 May 2019
QuantaStor’s scale-out object storage configurations provide S3- and SWIFT-compatible REST API support. Configurations scale out by adding drives and systems. Each system contains SSDs to accelerate write performance, and each cluster must contain a minimum of 3 systems and a maximum of 32 systems (10PB maximum). QuantaStor’s storage grid technology enables systems to be grouped together and managed as a storage grid that can span IBM datacenters. Within a storage grid, one or more object storage clusters may be provisioned and managed from QuantaStor’s web-based management interface as well as via the QS REST API and the QS CLI.
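Because the REST API is S3-compatible, any standard S3 client library can target the cluster. A minimal sketch using boto3; the endpoint URL and credentials below are placeholders, not real values, and must be replaced with your cluster's S3 endpoint and an S3 user's keys:

```python
ENDPOINT = "https://qs-object.example.com"   # placeholder: your cluster's S3 endpoint
ACCESS_KEY = "YOUR_ACCESS_KEY"               # placeholder S3 user credentials
SECRET_KEY = "YOUR_SECRET_KEY"

def list_buckets():
    """Return the bucket names visible to this S3 user."""
    # boto3 is imported inside the function so the sketch loads
    # even where the dependency is not installed.
    import boto3
    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]
```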
Key Features
QuantaStor Scale-out Object Storage:
* S3 Object Storage compatible REST API support
* Hybrid HDD + SSD configuration to boost write performance
* Easy web-based management of all systems with built-in QuantaStor storage grid technology
* Easy expansion by adding systems and/or drives
Configurations
HYBRID OBJECT STORAGE SOLUTIONS
{| class="wikitable" style="width: 100%;"
|-
|'''Capacity'''
|'''Media Type'''
|'''Protocol Support'''
|'''Workload'''
|'''Bill of Materials'''
|'''Design'''
|-
|3 x 48TB raw<br>QS 5 Image: 3 x MD (48TB)
|HDD + SSD
|S3 / SWIFT
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|2x E5-2620v4 or 4110<br>128GB RAM<br>2 x 960GB SATA SSD (RAID10/boot)<br>1 x 960GB SSD (write log)<br>8 x 6TB SATA HDD<br>2 x 10 Gbps Redundant Private Network Uplinks
|{{QsButton|https://bit.ly/2PfXuVt|View Design}}
|-
|3 x 128TB raw<br>QS 5 Image: 3 x LG (128TB)
|HDD + SSD
|S3 / SWIFT
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|2x E5-2650v4<br>128GB RAM<br>2 x 960GB SATA SSD (RAID10/boot)<br>2 x 960GB SSDs (write log)<br>16 x 8TB SATA HDD<br>2 x 10 Gbps Redundant Private Network Uplinks
|{{QsButton|https://bit.ly/2PfXuVt|View Design}}
|-
|3 x 256TB raw<br>QS 5 Image: 3 x XL (256TB)
|HDD + SSD
|S3 / SWIFT
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|2x E5-2650v4<br>256GB RAM<br>2x 960GB SATA SSD (RAID10/boot)<br>2 x 960GB SSDs (write log)<br>32 x 8TB SATA HDD<br>2 x 10GbE Private Network ports
|{{QsButton|https://bit.ly/2PfXuVt|View Design}}
|}
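The capacities above are raw, before data protection overhead. Usable capacity depends on the protection scheme chosen when the object storage pool group is created. A rough illustration; the 3-way replication factor and 2+1 erasure-coding profile below are assumed examples for the arithmetic, not fixed QuantaStor defaults, and pool metadata and near-full reserve overheads are ignored:

```python
def usable_tb(raw_per_node_tb, nodes, data_k=None, coding_m=None, replicas=None):
    """Estimate usable TB from raw TB under a given protection scheme.

    Pass either a replica count, or an erasure-coding profile of
    data_k data chunks + coding_m coding chunks. Metadata and
    near-full reserve overheads are ignored in this sketch.
    """
    raw = raw_per_node_tb * nodes
    if replicas is not None:
        return raw / replicas
    return raw * data_k / (data_k + coding_m)

# Medium tier: 3 nodes x 48TB raw = 144TB raw.
print(usable_tb(48, 3, replicas=3))            # 3-way replication -> 48.0
print(usable_tb(48, 3, data_k=2, coding_m=1))  # EC 2+1 -> 96.0
```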
Quick Configuration Steps
Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)
Step Five: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)
Step Six: Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK
Step Seven: Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK