Configure Scale-out Storage Cluster
[[Category:ibm_guide]]
IBM Cloud deployments of QuantaStor scale-out storage clusters provide dedicated and highly available file, block, and object storage which can be deployed in any of the IBM Cloud data centers worldwide. Scale-out clusters use Ceph technology and provide NAS (NFS/SMB), SAN (iSCSI), and S3/SWIFT-compatible object storage support in a single deployment. As the name implies, scale-out clusters may be expanded by adding storage media and/or additional QuantaStor servers.
Each server in a scale-out cluster must contain at least one SSD to accelerate write performance, and each cluster must contain a minimum of 3x servers (6+ recommended) and a maximum of 32x servers (scalable to over 10PB in the IBM Cloud). QuantaStor's storage grid technology enables systems to be grouped together and managed as a storage grid that can span IBM Cloud data centers. Within a storage grid, one or more storage clusters may be provisioned and managed via the QuantaStor web-based management interface.
''' OSNEXUS Videos '''
* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=CwGYaGMuSfU]] [https://www.youtube.com/watch?v=CwGYaGMuSfU Covers QuantaStor 5 S3 Reverse Proxy for IBM Cloud Object Storage [15:43]]
* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=VpfcjZDO3Ys]] [https://www.youtube.com/watch?v=VpfcjZDO3Ys Covers Designing Ceph clusters with the QuantaStor solution design web app [17:44]]
== Key Features ==
'''QuantaStor Scale-out Object Storage'''
*S3 Object Storage Compatible REST API support (see the example client sketch below)
*Hybrid HDD + SSD configuration to boost write performance
*Easy web-based management of all systems with built-in QuantaStor storage grid technology.
*Easy expansion by adding systems and/or drives.
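Because the object layer exposes an S3-compatible REST API, standard S3 SDKs and tools can be pointed directly at the cluster's S3 gateway. The snippet below is a minimal, unofficial sketch using Python's boto3; the endpoint URL and access keys are placeholders for the values your own deployment provides.

<pre>
import boto3

# Hypothetical endpoint and credentials -- substitute the S3 gateway URL and
# access keys from your QuantaStor deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-s3-gateway.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Any standard S3 call works against the compatible API, e.g. listing buckets.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
</pre>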
== Configurations ==
'''HYBRID OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>144TB raw / 96TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 4TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2QVqH95|View Design}}</center>
|-
|<center>432TB raw / 288TB usable<br><br>QS 5 Image: 4 x LG (128TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>128GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>9 x 12TB SATA HDD (data)<br>1 x 800GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2Ir2V0N|View Design}}</center>
|-
|<center>3072TB raw / 2026TB usable<br><br>QS 5 Image: 8 x 2XL (384TB)</center>
|<center>HDD + SSD</center>
|<center>S3 / SWIFT</center>
|Highly-available Large Scale Object Storage Archive<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Disk-to-Disk Backup via S3<br>Next Generation Applications using S3 back-end
|Per Server (8x):<br>Intel Xeon Dual E5-2620 / 36x LFF bay server<br>256GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>32 x 12TB SATA HDD (data)<br>2 x 960GB SATA SSD (journal)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2WYhpil|View Design}}</center>
|-
|}
'''ALL-FLASH OBJECT STORAGE SOLUTIONS'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 15%"|<center>'''Capacity'''</center>
| style="width: 10%"|<center>'''Media Type'''</center>
| style="width: 10%"|<center>'''Protocol Support'''</center>
| style="width: 25%"|<center>'''Workload'''</center>
| style="width: 25%"|<center>'''Bill of Materials'''</center>
| style="width: 15%"|<center>'''Design'''</center>
|-
|<center>152TB raw / 76TB usable<br><br>QS 5 Image: 4 x MD (48TB)</center>
|<center>SSD</center>
|<center>S3 / SWIFT</center>
|OpenStack<br>Virtualization<br>Data Analytics<br>Private Cloud Content Delivery Network<br>Next Generation Applications using S3 back-end
|Per Server (4x):<br>Intel Xeon Dual E5-2620 / 12x LFF bay server<br>64GB RAM<br>2 x 800GB SATA SSD (RAID1/boot)<br>10 x 3.8TB SATA SSD (data)<br>2 x 10 Gbps Redundant Private Network Uplinks
|<center>{{QsButton|https://bit.ly/2K4c5Ur|View Design}}</center>
|-
|}
== Quick Configuration Steps ==
'''Step One:''' Storage Management Tab → Create Grid → Click OK<br>
'''Step Two:''' Storage Management Tab → Add Grid Node → Enter 2nd Appliance IP → Click OK<br>
'''Step Three:''' Storage Management Tab → Add Grid Node → Enter 3rd Appliance IP → Click OK<br>
'''Step Four:''' Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each appliance)<br>
'''Step Five:''' Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each appliance)<br>
'''Step Six:''' Scale-out Object Storage Tab → Create Ceph Cluster → Select all appliances → Click OK<br>
'''Step Seven:''' Scale-out Object Storage Tab → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK<br>
'''Step Eight:''' Scale-out Object Storage Tab → Create Object Storage Pool Group → Select Erasure Coding Mode → Click OK<br>
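Once Step Eight completes, a quick way to confirm the object layer is serving requests is a small S3 round trip against the cluster's S3 gateway. The sketch below is a suggested smoke test rather than part of the official procedure; the endpoint and access keys are placeholders for the values shown in the QuantaStor web management interface.

<pre>
import boto3

# Placeholder endpoint/credentials -- use your deployment's S3 gateway URL and
# the access keys of a QuantaStor user with object storage access.
s3 = boto3.client(
    "s3",
    endpoint_url="https://qs-s3-gateway.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket, write an object, and read it back.
s3.create_bucket(Bucket="qs-smoke-test")
s3.put_object(Bucket="qs-smoke-test", Key="hello.txt", Body=b"hello ceph")
print(s3.get_object(Bucket="qs-smoke-test", Key="hello.txt")["Body"].read())
</pre>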