- 4. IBM Cloud Scale-out NAS Archive Storage
'''PENDING Release 5.4'''

QuantaStor Scale-out NAS Archive configurations provide highly-available file storage suitable for archive applications with lower performance requirements. SSD is added to each appliance to accelerate write performance, and each NAS Archive cluster must contain a minimum of 3x appliances.<br>
== Reference Configurations ==

'''Scale-out NAS Archive Storage'''
{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Medium Tier - NAS Archive Storage – 48TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2620v4<br> 128GB RAM<br> 8x 6TB<br> QS Medium Tier License<br> 2x 960GB SSDs (write log)<br> 2x 1TB SATA (mirrored boot)<br> 2x 10GbE Private Network ports
|Highly-available file storage for archive and backup applications<br>Scale-out NAS storage with SMB/NFS access<br>Low-cost erasure-coded capacity storage
|[[File:Svr_supermicro_2u12_6029p.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|-
|}
{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Large Tier - NAS Archive Storage – 128TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2650v4<br> 128GB RAM<br> 16x 8TB<br> QS Large Tier License<br> 2x 960GB SSDs (write log)<br> 2x 1TB SATA (mirrored boot)<br> 2x 10GbE Private Network ports
|Highly-available file storage for archive and backup applications<br>Scale-out NAS storage with SMB/NFS access<br>Low-cost erasure-coded capacity storage
|[[File:Svr_supermicro_4u36.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|-
|}
{| class="wikitable" style="width: 100%;"
|-
| style="width: 40%"|'''Extra-large Tier - NAS Archive Storage – 256TB'''
| style="width: 30%"|'''Use Cases'''
| style="width: 30%"|'''System View'''
|-
|2x E5-2650v4<br> 256GB RAM<br> 32x 8TB<br> QS XL Tier License<br> 2x 960GB SSDs (write log)<br> 2x 1TB SATA (mirrored boot)<br> 2x 10GbE Private Network ports
|Highly-available file storage for archive and backup applications<br>Scale-out NAS storage with SMB/NFS access<br>Low-cost erasure-coded capacity storage
|[[File:Svr_supermicro_4u36.png | center]]
<center>
{{QsButton|https://bit.ly/2PfXuVt|View Design}}
{{QsButton|https://bit.ly/2KXF6CC|Provisioning Guide}}
{{QsButton|https://ibm.co/2DE53AG|Deploy}}
</center>
|-
|}
== Quick Configuration Steps ==

'''Step One:''' Storage Management Tab → Create Grid → Click OK<br>
'''Step Two:''' Storage Management Tab → Add Grid Node → Enter 2nd System IP → Click OK<br>
'''Step Three:''' Storage Management Tab → Add Grid Node → Enter 3rd System IP → Click OK<br>
'''Step Four:''' Storage Management Tab → Controllers & Enclosures → right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each system)<br>
'''Step Five:''' Cluster Resource Management → Create Site Cluster → Select All → Click OK<br>
'''Step Six:''' Cluster Resource Management → Add Site Interface → Enter IP address for SMB/NFS Access → Click OK<br>
'''Step Seven:''' Scale-out Storage Configuration → Create Cluster → Create a Ceph Cluster using all the systems that will be used for the new scale-out NAS Storage Pool<br>
'''Step Eight:''' Scale-out Storage Configuration → Add OSDs & Journals → Add Data Devices & Journals by selecting the "Auto-Config" button, then click OK<br>
'''Step Nine:''' Scale-out Storage Configuration → Scale-out Storage Pools → Create Scale-out File/NAS Storage Pool → Choose an Erasure Coding level → Click OK<br>
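Since Step Seven builds a standard Ceph cluster under the hood, basic post-deployment health checks can be run from an appliance console. This is a hedged sketch using stock Ceph commands only (QuantaStor's own management tools may expose equivalents); the guard lets the script exit cleanly on a host without Ceph installed.

```shell
#!/bin/sh
# Post-deployment sanity checks using the standard Ceph CLI.
# Guarded so the script is harmless on a host without Ceph installed.
if command -v ceph >/dev/null 2>&1; then
    ceph status      # overall health and monitor quorum across the 3+ grid systems
    ceph osd tree    # confirms the OSDs/journals added in Step Eight are up and in
    ceph df          # raw vs. usable capacity after erasure coding is applied
else
    echo "ceph CLI not found on this host"
fi
```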
Latest revision as of 14:45, 19 August 2020