IBM Cloud Scale-out Block Storage Cluster
Latest revision as of 13:27, 28 May 2019
PENDING Release 5.4
QuantaStor’s scale-out block storage configurations provide highly-available iSCSI block storage suitable for a variety of use cases. Each system contains SSDs to accelerate write performance, and each cluster must contain a minimum of three systems. QuantaStor’s web-based management interface, REST APIs, and QS CLI make storage provisioning and automation easy.
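The three-system minimum reflects the majority-quorum requirement of the underlying Ceph cluster (created in the configuration steps below): a cluster stays available only while a strict majority of its monitor nodes is up. A minimal sketch of that arithmetic (the quorum rule is standard Ceph behavior; the function names are illustrative):

```python
def quorum_size(n_systems: int) -> int:
    """Smallest strict majority of n monitor nodes."""
    return n_systems // 2 + 1

def tolerated_failures(n_systems: int) -> int:
    """Nodes that can fail while a majority quorum survives."""
    return n_systems - quorum_size(n_systems)

# A 3-system cluster keeps quorum with one node down;
# 2 systems would tolerate zero failures, hence the 3x minimum.
for n in (2, 3, 5):
    print(n, quorum_size(n), tolerated_failures(n))
```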
Key Features
- S3 & SWIFT Compatible REST API support
- Hybrid HDD + SSD configuration to boost write performance
- Easy web-based management of all systems with built-in QuantaStor storage grid technology
- Easy expansion by adding systems and/or drives
Reference Configurations
Scale-out iSCSI Block Storage Configuration Options
Small Tier - Hybrid Block Storage – 16TB
- Configuration: 2x E5-2650v4 or 4110, 64GB RAM, 8x 2TB SATA, QS Small Tier License, 1x 800GB SSD (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
- Use Cases: Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases

Medium Tier - Hybrid Block Storage – 48TB
- Configuration: 2x E5-2620v4 or 4110, 128GB RAM, 8x 6TB, QS Medium Tier License, 1x 800GB SSD (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
- Use Cases: Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases

Large Tier - Hybrid Block Storage – 128TB
- Configuration: 2x E5-2620v4 or 4110, 128GB RAM, 16x 8TB SATA, QS Large Tier License, 2x 800GB SSDs (write log), 2x 1TB SATA (mirrored boot), 2x 10GbE private network ports
- Use Cases: Highly-available block storage for virtualization (VMware, Hyper-V, XenServer); scale-out block storage with Cinder integration for KVM-based OpenStack deployments; highly-available block storage for databases
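The advertised tier capacities are the raw data-HDD capacity per system (boot mirrors and write-log SSDs excluded). Usable cluster capacity then depends on the data-protection mode chosen when the pool is created; a sketch, assuming the minimum 3-system cluster and treating the 3x replica count as an illustrative default rather than a QuantaStor setting:

```python
# Raw data capacity per system for each tier (data HDDs only).
tiers = {
    "Small":  8 * 2,    # 8x 2TB  = 16 TB
    "Medium": 8 * 6,    # 8x 6TB  = 48 TB
    "Large":  16 * 8,   # 16x 8TB = 128 TB
}

def cluster_raw_tb(per_system_tb: float, n_systems: int = 3) -> float:
    """Raw capacity across the whole cluster."""
    return per_system_tb * n_systems

def usable_tb(raw_tb: float, replicas: int = 3) -> float:
    """Usable capacity under n-way replication (illustrative default)."""
    return raw_tb / replicas

for name, tb in tiers.items():
    raw = cluster_raw_tb(tb)
    print(f"{name}: {raw} TB raw, {usable_tb(raw):.0f} TB usable at 3x replication")
```

Note how 3x replication on a 3-system cluster brings usable capacity back to one system's raw capacity; erasure coding (Step Eight below) improves that ratio.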
Quick Configuration Steps
Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd System IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd System IP → Click OK
Step Four: Storage Management Tab → Modify Storage System → Set Domain Suffix → Click OK (do this for each system)
Step Five: Storage Management Tab → Controllers & Enclosures → Right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each system)
Step Six: Scale-out Block & Object Storage Tab → Ceph Cluster → Create Ceph Cluster → Select all systems → Click OK
Step Seven: Scale-out Block & Object Storage Tab → Ceph Cluster → Multi-Create OSDs → Select SSDs as journals, HDDs as OSDs → Click OK
Step Eight: Scale-out Block & Object Storage Tab → Storage Pools (Ceph) → Create Storage Pool → Select Erasure Coding Mode → Click OK
Step Nine: Cluster Resource Management → Create Site Cluster → Select All → Click OK
Step Ten: Cluster Resource Management → Add Site Interface → Enter IP address for iSCSI Access → Click OK
Step Eleven: Storage Management Tab → Hosts → Add Host → Enter host IQN → Click OK (do this for each host)
Step Twelve: Storage Management Tab → Storage Volumes → Create Storage Volume → Enter volume name, select a size → Click OK
Step Thirteen: Storage Management Tab → Hosts → Right-click Host → Assign Volumes → Select volumes → Click OK
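Step Eight's erasure-coding choice trades capacity efficiency against fault tolerance: a k+m profile stores k data chunks plus m coding chunks, so usable capacity is k/(k+m) of raw and the pool survives the loss of m chunks. A sketch of the trade-off (the profiles shown are common Ceph examples, not QuantaStor defaults):

```python
def ec_efficiency(k: int, m: int) -> float:
    """Fraction of raw capacity that is usable with a k+m EC profile."""
    return k / (k + m)

# Compare common profiles against 3x replication (efficiency 1/3).
for k, m in [(2, 1), (4, 2), (8, 3)]:
    print(f"{k}+{m}: {ec_efficiency(k, m):.0%} usable, tolerates {m} chunk losses")
```

On a minimal 3-system cluster, only small profiles such as 2+1 can place every chunk on a distinct system; wider profiles need more systems (or a relaxed failure domain) to keep chunk placement independent.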