IBM Cloud Scale-out NAS Archive Storage

From OSNEXUS Online Documentation Site

PENDING Release 5.4

QuantaStor Scale-out NAS Archive configurations provide highly-available file storage suitable for archive applications with lower performance requirements. An SSD write log is added to each appliance to accelerate write performance, and each NAS Archive cluster must contain a minimum of 3x appliances.
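For rough capacity planning, the usable space of an erasure-coded cluster can be estimated from the raw drive count. The sketch below uses the Medium Tier configuration (3x appliances, 8x 6TB drives each); the 2+1 erasure-coding layout is an assumption chosen for illustration, not part of the reference configuration.

```shell
#!/bin/sh
# Usable-capacity estimate for a 3-node NAS Archive cluster (Medium Tier).
# Assumed (not from the reference config): erasure-coded layout with
# 2 data fragments + 1 redundancy fragment per stripe.
NODES=3
DRIVES_PER_NODE=8
DRIVE_TB=6
DATA=2          # data fragments per stripe (assumed)
REDUNDANCY=1    # redundancy fragments per stripe (assumed)

RAW=$((NODES * DRIVES_PER_NODE * DRIVE_TB))
USABLE=$((RAW * DATA / (DATA + REDUNDANCY)))
echo "raw: ${RAW} TB, usable: ${USABLE} TB"
```

With these assumptions the cluster holds 144TB raw and roughly 96TB usable; a different erasure-coding ratio changes the usable figure proportionally.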

Reference Configurations

Scale-out NAS Archive Storage

Medium Tier - NAS Archive Storage - 48TB

System configuration:
- 2x E5-2620v4
- 128GB RAM
- 8x 6TB HDDs
- QS Medium Tier License
- 2x 960GB SSDs (write log)
- 2x 1TB SATA (mirrored boot)
- 2x 10GbE private network ports

Use cases:
- Highly-available file storage for virtualization (VMware, Hyper-V, XenServer)
- Scale-out file storage for KVM-based OpenStack deployments
- Highly-available file storage for backup and archive applications

System view: Svr supermicro 2u12 6029p.png


Large Tier - NAS Archive Storage - 128TB

System configuration:
- 2x E5-2650v4
- 128GB RAM
- 16x 8TB HDDs
- QS Large Tier License
- 2x 960GB SSDs (write log)
- 2x 1TB SATA (mirrored boot)
- 2x 10GbE private network ports

Use cases:
- Highly-available file storage for virtualization (VMware, Hyper-V, XenServer)
- Scale-out file storage for KVM-based OpenStack deployments
- Highly-available file storage for backup and archive applications

System view: Svr supermicro 4u36.png


Extra-large Tier - NAS Archive Storage - 256TB

System configuration:
- 2x E5-2650v4
- 256GB RAM
- 32x 8TB HDDs
- QS XL Tier License
- 2x 960GB SSDs (write log)
- 2x 1TB SATA (mirrored boot)
- 2x 10GbE private network ports

Use cases:
- Highly-available file storage for virtualization (VMware, Hyper-V, XenServer)
- Scale-out file storage for KVM-based OpenStack deployments
- Highly-available file storage for backup and archive applications

System view: Svr supermicro 4u36.png


Quick Configuration Steps

Step One: Storage Management Tab → Create Grid → Click OK
Step Two: Storage Management Tab → Add Grid Node → Enter 2nd System IP → Click OK
Step Three: Storage Management Tab → Add Grid Node → Enter 3rd System IP → Click OK
Step Four: Storage Management Tab → Controllers & Enclosures → right-click Controller → Configure Pass-thru devices → Select all → Click OK (do this for each controller on each system)
Step Five: Cluster Resource Management → Create Site Cluster → Select All → Click OK
Step Six: Cluster Resource Management → Add Site Interface → Enter IP address for SMB/NFS Access → Click OK
Step Seven: Storage Management Tab → Storage Pool → Create Storage Pool → Create one pool (XFS type) per device per system
Step Eight: Scale-out File Storage Tab → Gluster Volumes → Peer Setup → Select all systems → Click OK
Step Nine: Scale-out File Storage Tab → Gluster Volumes → Create Volume → Enter name, select all Pools and Erasure Coding mode → Click OK
Step Ten: Storage Management Tab → Network Shares → right-click share for volume → Select Modify Share & SMB Access → Adjust access settings → Click OK
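Steps Eight and Nine drive GlusterFS under the hood. For reference, the equivalent raw Gluster CLI operations look roughly like the following; the hostnames, brick paths, and volume name are placeholders, and the disperse/redundancy counts are illustrative assumptions (QuantaStor performs these operations for you via the web UI):

```shell
# Peer Setup (Step Eight): join the three systems into a trusted pool.
# Run from the first system; node2/node3 are placeholder hostnames.
gluster peer probe node2
gluster peer probe node3
gluster peer status

# Create Volume (Step Nine): one erasure-coded (disperse) volume across
# one brick per system. "disperse 3 redundancy 1" (2 data + 1 redundancy
# fragments) is an assumed layout, not the reference configuration.
gluster volume create archive-vol disperse 3 redundancy 1 \
    node1:/export/pool1/brick \
    node2:/export/pool1/brick \
    node3:/export/pool1/brick
gluster volume start archive-vol
gluster volume info archive-vol
```

Exposing the volume for SMB/NFS access (Step Ten) is then handled through QuantaStor's network share layer rather than the Gluster CLI.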