Scale-out Storage Configuration Section

From OSNEXUS Online Documentation Site
[[CATEGORY:web_guide]]
 
The Scale-out Storage Configuration section of the web management interface is where setup and configuration of Ceph clusters is done.  In this section one may create clusters, configure OSDs & journals, and allocate pools for file, block, and object storage access.
  
[[file:Scale-Out Storage Cnfg.jpg|1024px]]
<br><br><br><br>

== Scale-out Storage Cluster ==

=== Scale-out Cluster Management ===
  
 
QuantaStor makes creation of scale-out clusters a simple one-dialog operation where the servers are selected along with the front-end & back-end network ports for cluster communication.

* [[Create_Ceph_Cluster_Configuration|Create a new Ceph Cluster.]]
* [[Delete_Ceph_Cluster_Configuration|Delete a Ceph Cluster.]]
* [[Add_Member_to_Ceph_Cluster|Add a Ceph Cluster Member.]]
* [[Remove_Ceph_Cluster_Member|Remove a Ceph Cluster Member.]]
  
* [[Modify_Ceph_Cluster|Modify a Ceph Cluster Member.]]
* [[Fix_Ceph_Cluster_Clock_Skew|Modify a Ceph Cluster.]]
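In Ceph terms, the front-end and back-end port selection made in the cluster-create dialog corresponds to the public and cluster networks. A minimal sketch of the equivalent ceph.conf settings, with purely illustrative example subnets (QuantaStor manages this configuration itself; adjust the ranges to your site):

```ini
[global]
# Front-end traffic: clients, monitors, gateways (example subnet)
public network = 10.0.1.0/24
# Back-end traffic: OSD replication, recovery, rebalance (example subnet)
cluster network = 10.0.2.0/24
```

Separating the two networks keeps replication and recovery traffic from competing with client I/O on the front-end ports.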
  
=== Service Management ===
QuantaStor automatically configures three servers as Ceph Monitors when the cluster is created.  Additional monitors may be added or removed, but all clusters require a minimum of three monitors.

* [[Add_Monitor_To_Ceph_Cluster|Add a Ceph Monitor to a Cluster.]]
* [[Remove_Ceph_Monitor|Remove a Ceph Monitor from a Cluster.]]
* [[Add_Rados_Gateway_To_Ceph_Cluster|Add a new S3/SWIFT Gateway and specify the network interface configuration.]]
* [[Remove_Ceph_Rados_Gateway|Remove an S3/SWIFT Gateway.]]
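The three-monitor minimum follows from Ceph's quorum rule: a majority of the monitor set must be up for the cluster to operate, so a cluster of n monitors tolerates floor((n-1)/2) monitor failures. A small sketch of that arithmetic (the function names are ours, not a QuantaStor or Ceph API):

```python
def quorum_size(monitors: int) -> int:
    """Smallest majority of the monitor set."""
    return monitors // 2 + 1

def failures_tolerated(monitors: int) -> int:
    """Monitors that can be lost while a majority remains."""
    return (monitors - 1) // 2

# Two monitors tolerate no failures, the same as one, which is why
# odd counts (3, 5, ...) are used and three is the practical minimum.
for n in (1, 2, 3, 5):
    print(n, quorum_size(n), failures_tolerated(n))
```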
 
=== Storage Media Management ===
  
 
OSDs and Journal devices are all based on the BlueStore storage backend.  QuantaStor also has support for existing systems with FileStore-based OSDs, but new OSDs are always configured to use BlueStore.
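When pools are allocated on top of these OSDs, a common community sizing guideline (a rule of thumb from the Ceph documentation, not a QuantaStor-specific rule) targets roughly 100 placement groups per OSD divided by the replica count, rounded up to a power of two. A hedged sketch of that calculation:

```python
def suggested_pg_num(osds: int, replicas: int = 3, pgs_per_osd: int = 100) -> int:
    """Rule of thumb: (osds * pgs_per_osd) / replicas,
    rounded up to the next power of two."""
    target = osds * pgs_per_osd / replicas
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

# e.g. 12 OSDs with 3x replication -> target 400 -> 512 PGs
print(suggested_pg_num(12))
```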
 
=== Scale-out S3 Gateway (CephRGW) Management ===

* [[HA_Cluster_for_Ceph_Object|S3 Object Storage Setup]]
* [[Add_Rados_Gateway_To_Ceph_Cluster|Add S3 Gateway (RGW) to Cluster Member]]
* [[Remove_Ceph_Rados_Gateway|Remove S3 Gateway (RGW) from Cluster Member]]
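Once a gateway is up, clients address it with standard S3 tooling, and bucket names must be DNS-compatible. A small validation sketch covering the core S3/RGW naming rules (illustrative only, not the gateway's actual validator; the hostname below is a made-up example):

```python
import re

# Core S3 bucket-name rules: 3-63 chars of lowercase letters, digits,
# dots and hyphens; must start and end with a letter or digit; and the
# name must not look like an IPv4 address.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name)) and not _IP_RE.match(name)

def endpoint_url(gateway_host: str, bucket: str, key: str) -> str:
    """Path-style URL for an object behind the gateway (host is illustrative)."""
    return f"http://{gateway_host}/{bucket}/{key}"

print(valid_bucket_name("backups-2022"))   # True
print(valid_bucket_name("Backups"))        # False: uppercase not allowed
print(endpoint_url("rgw.example.com", "backups-2022", "db.dump"))
```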
=== Scale-out File (CephFS) Management ===

==== Metadata Server (MDS) Management ====

=== Scale-out Block (CephRBD) Management ===
Latest revision as of 08:29, 12 October 2022

