[[CATEGORY:web_guide]]

The Scale-out Storage Configuration section of the web management interface is where Ceph clusters are set up and configured.  In this section one may create clusters, configure OSDs & journals, and allocate pools for file, block, and object storage access.

[[file:Scale-Out Storage Cnfg.jpg|1024px]]
<br><br><br><br>

== Scale-out Storage Cluster ==

=== Scale-out Cluster Management ===
QuantaStor makes creation of scale-out clusters a simple one-dialog operation: select the servers to be clustered along with the front-end & back-end network ports used for cluster communication; see the sketch after the list below for how these selections map onto Ceph's network settings.
* [[Create_Ceph_Cluster_Configuration|Create a new Ceph Cluster.]]
* [[Delete_Ceph_Cluster_Configuration|Delete a Ceph Cluster.]]
* [[Add_Member_to_Ceph_Cluster|Add a Ceph Cluster Member.]]
* [[Remove_Ceph_Cluster_Member|Remove a Ceph Cluster Member.]]
* [[Modify_Ceph_Cluster|Modify a Ceph Cluster.]]
* [[Fix_Ceph_Cluster_Clock_Skew|Fix Ceph Cluster Clock Skew.]]
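
Under the hood, the front-end and back-end port selections correspond to Ceph's <code>public_network</code> and <code>cluster_network</code> options. A minimal sketch using the Python <code>rados</code> bindings that reads those values back; running it on a cluster node with <code>/etc/ceph/ceph.conf</code> and an admin keyring is an assumption, not something the dialog requires:

<syntaxhighlight lang="python">
import rados

# Minimal sketch: read back the networks assigned at cluster creation.
# Assumes the python3-rados package, a readable /etc/ceph/ceph.conf,
# and an admin keyring on this node.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Front-end (client) traffic vs. back-end (replication/recovery) traffic.
# conf_get() returns None if the option is not set in the local config.
print('front-end (public_network) :', cluster.conf_get('public_network'))
print('back-end  (cluster_network):', cluster.conf_get('cluster_network'))
cluster.shutdown()
</syntaxhighlight>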
  
=== Service Management ===
  
QuantaStor automatically configures three servers as Ceph Monitors when the cluster is created.  Additional monitors may be added or removed, but all clusters require a minimum of three monitors; see the sketch after the list below for a quick quorum check.
* [[Add_Monitor_To_Ceph_Cluster|Add a Ceph Monitor to a Cluster.]]
* [[Remove_Ceph_Monitor|Remove a Ceph Monitor from a Cluster.]]
* [[Add_Rados_Gateway_To_Ceph_Cluster|Add a new S3/SWIFT Gateway and specify the network interface configuration.]]
* [[Remove_Ceph_Rados_Gateway|Remove an S3/SWIFT Gateway.]]
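
Ceph monitors form a quorum by strict majority, which is why three is the minimum: a three-monitor cluster can lose one monitor and keep serving. A minimal sketch, under the same access assumptions as the earlier example, that asks the monitors who is currently in quorum:

<syntaxhighlight lang="python">
import json
import rados

# Minimal sketch: confirm the monitor quorum. With three monitors,
# losing one still leaves a majority of two.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'quorum_status', 'format': 'json'}), b'')
status = json.loads(out)
print('monitors in quorum:', status['quorum_names'])
print('quorum leader     :', status['quorum_leader_name'])
cluster.shutdown()
</syntaxhighlight>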
  
=== Storage Media Management ===
  
OSDs and journal devices are all based on the BlueStore storage backend.  QuantaStor retains support for existing systems with FileStore-based OSDs, but new OSDs are always configured to use BlueStore; the sketch after the list below shows one way to verify which backend an OSD is using.
  
* [[Ceph_Multi_OSD_Create|Automated OSD and Journal/WAL Device Setup / Multi-OSD Create]]
* [[Create_Bluestore_Ceph_OSD|Create a single OSD]]
* [[Ceph_OSD_Delete|Delete an OSD]]
* [[Create_Ceph_Journal|Create a WAL/Journal Device]]
* [[Delete_Ceph_Journal|Delete a WAL/Journal Device]]
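
To check which backend an existing OSD uses, Ceph's <code>osd metadata</code> command reports <code>osd_objectstore</code> as <code>bluestore</code> or <code>filestore</code>. A minimal sketch with the same Python bindings and access assumptions as above:

<syntaxhighlight lang="python">
import json
import rados

# Minimal sketch: report the storage backend of every OSD. New
# QuantaStor OSDs should show "bluestore"; only legacy OSDs created
# on older systems would show "filestore".
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'osd metadata', 'format': 'json'}), b'')
for osd in json.loads(out):
    print('osd.%s -> %s' % (osd['id'], osd['osd_objectstore']))
cluster.shutdown()
</syntaxhighlight>
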
=== Scale-out S3 Gateway (CephRGW) Management ===
* [[HA_Cluster_for_Ceph_Object|S3 Object Storage Setup]]
* [[Add_Rados_Gateway_To_Ceph_Cluster|Add S3 Gateway (RGW) to Cluster Member]]
* [[Remove_Ceph_Rados_Gateway|Remove S3 Gateway (RGW) from Cluster Member]]
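
Once a gateway is running it speaks the standard S3 API, so stock S3 clients work against it. A hedged sketch using Python's <code>boto3</code>; the endpoint address (7480 is the stock RGW port), bucket name, and the access/secret keys are placeholders for values from your own gateway and S3 user:

<syntaxhighlight lang="python">
import boto3

# Minimal sketch: exercise an RGW endpoint with an ordinary S3 client.
# Endpoint, bucket name, and credentials below are placeholders.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.local:7480',
    aws_access_key_id='ACCESS_KEY_PLACEHOLDER',
    aws_secret_access_key='SECRET_KEY_PLACEHOLDER',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello ceph')
for obj in s3.list_objects_v2(Bucket='demo-bucket').get('Contents', []):
    print(obj['Key'], obj['Size'])
</syntaxhighlight>
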
=== Scale-out File (CephFS) Management ===
==== Metadata Server (MDS) Management ====
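
CephFS requires at least one active Metadata Server (MDS). A minimal sketch, under the same access assumptions as the earlier examples, that lists the configured filesystems and reports MDS state:

<syntaxhighlight lang="python">
import json
import rados

# Minimal sketch: list CephFS filesystems and show MDS daemon state.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'fs ls', 'format': 'json'}), b'')
for fs in json.loads(out):
    print(fs['name'], '- metadata pool:', fs['metadata_pool'])

# Human-readable MDS summary, e.g. "cephfs:1 {0=mds1=up:active}".
ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'mds stat'}), b'')
print(out.decode().strip())
cluster.shutdown()
</syntaxhighlight>
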
=== Scale-out Block (CephRBD) Management ===
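
Scale-out block devices are Ceph RADOS Block Devices (RBD). A minimal sketch using the Python <code>rbd</code> bindings to create and list an image; the pool name <code>rbd</code>, the image name, and the size are placeholders:

<syntaxhighlight lang="python">
import rados
import rbd

# Minimal sketch: create a 10 GiB RBD image and list images in a pool.
# Assumes python3-rbd is installed and the target pool already exists.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # pool name is a placeholder

rbd.RBD().create(ioctx, 'demo-volume', 10 * 1024 ** 3)
print(rbd.RBD().list(ioctx))

ioctx.close()
cluster.shutdown()
</syntaxhighlight>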
