Scale-out Storage Configuration Section

The Scale-out Storage Configuration section of the web management interface is where Ceph clusters are set up and configured. In this section one may create clusters, configure OSDs & journals, and allocate pools for file, block, and object storage access.

[[File:Qs5 section ceph.png]]
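Under the hood QuantaStor manages standard Ceph, so the operations in this section correspond to familiar upstream concepts. As a rough orientation only (these steps are normally performed through the QuantaStor dialogs, and the pool names and placement-group counts below are hypothetical):

<pre>
# Hypothetical upstream-Ceph equivalents of what the dialogs automate.
ceph osd pool create rbd-pool 128        # pool for scale-out block (RBD) storage
ceph osd pool application enable rbd-pool rbd

ceph osd pool create cephfs-data 128     # data and metadata pools for
ceph osd pool create cephfs-meta 32      # scale-out file (CephFS) storage
ceph fs new qsfs cephfs-meta cephfs-data
</pre>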

=== Cluster Management ===

QuantaStor makes creating a scale-out cluster a simple one-dialog operation: select the servers along with the front-end and back-end network ports to be used for cluster communication. The sketch after the links below shows how these two networks map onto Ceph's configuration.

* [[Create_Ceph_Cluster_Configuration|Create a new Ceph Cluster]]
* [[Delete_Ceph_Cluster_Configuration|Delete a Ceph Cluster]]
* [[Add_Member_to_Ceph_Cluster|Add a Ceph Cluster Member]]
* [[Remove_Ceph_Cluster_Member|Remove a Ceph Cluster Member]]
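In Ceph terms, the front-end ports carry client and monitor traffic (the public network) while the back-end ports carry OSD replication and recovery traffic (the cluster network). A minimal ceph.conf sketch follows; the subnets are hypothetical placeholders, and QuantaStor writes the actual values based on the ports selected in the dialog.

<pre>
[global]
# Front-end: client and monitor traffic (hypothetical subnet)
public network  = 10.0.10.0/24
# Back-end: OSD replication and recovery traffic (hypothetical subnet)
cluster network = 10.0.20.0/24
</pre>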

=== Monitor Management ===

=== OSD & Journal Management ===

OSDs and journal devices are all based on the BlueStore storage backend. QuantaStor also supports existing systems with FileStore-based OSDs, but new OSDs are always configured to use BlueStore.
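For orientation, creating a BlueStore OSD with the upstream tooling looks roughly like the following; the device paths are hypothetical, and QuantaStor carries out the equivalent steps when an OSD is created from the web interface.

<pre>
# Create a BlueStore OSD on a data disk, placing its RocksDB metadata
# (the BlueStore analogue of a FileStore journal) on a faster device.
# /dev/sdb and /dev/nvme0n1p1 are hypothetical device paths.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
</pre>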

=== Scale-out S3 Gateway (CephRGW) Management ===

=== Scale-out File (CephFS) Management ===

=== Metadata Server (MDS) Management ===

=== Scale-out Block (CephRBD) Management ===