Scale-out Storage Configuration Section

The Scale-out Storage Configuration section of the web management interface is where Ceph clusters are set up and configured. In this section one may create clusters, configure OSDs & journals, and allocate pools for file, block, and object storage access.

[[File:Qs5 section ceph.png|Scale-out Storage Configuration section]]
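The pools allocated here are ordinary Ceph pools, so they are also visible through Ceph's own tooling. As an illustration only (not a QuantaStor interface), a minimal sketch using the python3-rados bindings, assuming it runs on a cluster node with the default ceph.conf and admin keyring:

<pre>
import rados

# Connect with the stock Ceph defaults; adjust paths for your deployment.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # List the pools backing file, block, and object storage.
    for pool in cluster.list_pools():
        print(pool)
finally:
    cluster.shutdown()
</pre>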

=== Scale-out Cluster Management ===

QuantaStor makes creating a scale-out cluster a simple one-dialog operation in which the servers are selected along with the front-end & back-end network ports used for cluster communication.
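The front-end ports map to what Ceph calls the public network (client and monitor traffic), and the back-end ports to the cluster network (OSD replication and heartbeat traffic). A minimal sketch of the equivalent ceph.conf fragment; the subnets below are placeholder examples, not values QuantaStor generates:

<pre>
[global]
# Front-end ports: client and monitor traffic
public network = 10.0.1.0/24
# Back-end ports: OSD replication and heartbeat traffic
cluster network = 10.0.2.0/24
</pre>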

=== Service Management ===

QuantaStor automatically configures three servers as Ceph Monitors when the cluster is created. Additional monitors may be added or removed, but all clusters require a minimum of three monitors.

* [[Remove_Ceph_Monitor|Remove a Ceph Monitor from a Cluster.]]
* [[Add_Metadata_Server_To_Ceph_Cluster|Add a '''M'''eta '''D'''ata '''S'''erver ('''MDS''').]]
* [[Add_Rados_Gateway_To_Ceph_Cluster|Add a new S3/SWIFT Gateway and specify the network interface configuration.]]
* [[Remove_Ceph_Rados_Gateway|Remove an S3/SWIFT Gateway.]]
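Three monitors are the minimum because the monitors maintain cluster state by quorum, and three is the smallest set that can lose one member and still hold a majority. To verify quorum outside the web interface, a sketch using the python3-rados bindings (illustrative, not a QuantaStor tool):

<pre>
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Ask the monitors for their current quorum membership.
    cmd = json.dumps({'prefix': 'quorum_status', 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    status = json.loads(out)
    print('monitors in quorum:', status['quorum_names'])
finally:
    cluster.shutdown()
</pre>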

=== OSD & Journal Management ===

OSDs and journal devices are based on the BlueStore storage backend. QuantaStor retains support for existing systems with FileStore-based OSDs, but new OSDs are always configured to use BlueStore.
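Each OSD reports its backend in its metadata as an osd_objectstore value of bluestore or filestore, which makes it easy to audit a mixed cluster. A sketch using the same python3-rados bindings (illustrative; assumes admin access on a cluster node):

<pre>
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # 'osd metadata' with no id returns metadata for every OSD.
    cmd = json.dumps({'prefix': 'osd metadata', 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    for osd in json.loads(out):
        print('osd.%s backend: %s' % (osd['id'], osd['osd_objectstore']))
finally:
    cluster.shutdown()
</pre>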

=== Scale-out S3 Gateway (CephRGW) Management ===

=== Scale-out File (CephFS) Management ===

=== Metadata Server (MDS) Management ===

=== Scale-out Block (CephRBD) Management ===