Scale-out File Pool Setup (Ceph FS)

The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. CephFS is made highly available through Ceph metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one metadata server, you can create a Ceph file system.
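The "healthy cluster" prerequisite can be verified from any cluster node with the standard Ceph CLI. The sketch below is only an illustration in Python (it assumes the stock ceph command and an admin keyring are available on the node; nothing here is QuantaStor-specific):

    import subprocess

    # Minimal health check before creating a file system; on a healthy
    # cluster this prints "HEALTH_OK".
    out = subprocess.run(["ceph", "health"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())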

Create a Ceph Cluster

Specify the new Ceph cluster name along with the network interface configuration.
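Under the hood, these choices map onto standard Ceph configuration options. As a rough sketch (the fsid, monitor addresses, and subnets below are placeholders, not values from this guide), the front-end and back-end interface selections correspond to ceph.conf entries such as:

    import configparser

    # Hypothetical ceph.conf fragment, generated here purely for illustration.
    conf = configparser.ConfigParser()
    conf["global"] = {
        "fsid": "00000000-0000-0000-0000-000000000000",  # unique cluster UUID
        "mon_host": "10.0.0.11,10.0.0.12,10.0.0.13",
        "public_network": "10.0.0.0/24",    # front-end / client traffic
        "cluster_network": "10.1.0.0/24",   # back-end replication traffic
    }
    with open("ceph.conf", "w") as f:
        conf.write(f)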

Create an Object Storage Device (OSD)

Multiple object storage daemons (OSDs) can be created at once, with a maximum of thirty journal devices per physical disk.
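For reference, creating OSDs whose journals live on partitions of a shared flash device looks roughly like the following outside the web UI. This uses the stock ceph-volume tool rather than QuantaStor's own mechanism, and all device paths are placeholders:

    import subprocess

    # One FileStore OSD per data disk, each with its journal on a partition
    # of a shared NVMe device (placeholders; adjust to the actual hardware).
    data_disks = ["/dev/sdc", "/dev/sdd", "/dev/sde"]
    journal_parts = ["/dev/nvme0n1p1", "/dev/nvme0n1p2", "/dev/nvme0n1p3"]

    for data, journal in zip(data_disks, journal_parts):
        subprocess.run(
            ["ceph-volume", "lvm", "create", "--filestore",
             "--data", data, "--journal", journal],
            check=True,
        )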

Add a Metadata Server (MDS)

One or two Ceph Metadata Server instances must be allocated to collectively manage the scale-out file system namespace.
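Once the metadata servers are added, their roles can be confirmed with the standard Ceph CLI; a minimal sketch (the sample output is abbreviated):

    import subprocess

    # Query MDS state; with two MDS instances, expect one "up:active" and
    # one "up:standby" in the summary line,
    # e.g. "cephfs:1 {0=mds-a=up:active} 1 up:standby".
    out = subprocess.run(["ceph", "mds", "stat"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())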

Create a Pool Profile (Optional)

Create a custom erasure-coded pool profile to meet specific capacity and high-availability requirements for the file system. This is an optional task.
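As a concrete illustration (the profile name below is a placeholder), a k=4/m=2 profile stripes each object into four data chunks plus two coding chunks. It tolerates two simultaneous failures at a storage overhead of (4+2)/4 = 1.5x raw capacity, versus 3x for triple replication:

    import subprocess

    # Create a hypothetical erasure-code profile named "ec-4-2"; a failure
    # domain of "host" ensures no two chunks of an object land on the same
    # server.
    subprocess.run(
        ["ceph", "osd", "erasure-code-profile", "set", "ec-4-2",
         "k=4", "m=2", "crush-failure-domain=host"],
        check=True,
    )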

Create a File System

Create a new storage pool based on the Ceph file system. Once provisioned, network shares may be created from the new scale-out storage pool.
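The equivalent sequence with the stock Ceph CLI is sketched below. The pool names, placement-group counts, and file system name are all placeholders, and note that the metadata pool must be replicated even when the data pool is erasure-coded:

    import subprocess

    def ceph(*args):
        # Thin wrapper around the standard "ceph" CLI.
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "pool", "create", "cephfs_metadata", "64")  # replicated metadata pool
    ceph("osd", "pool", "create", "cephfs_data", "256")     # data pool
    ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")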

Create a Share

Create one or more network shares for users to access storage via the NFS and SMB protocols.
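From a client's perspective, accessing such a share is an ordinary NFS or SMB mount. A minimal sketch for a Linux NFS client (the hostname, export path, and mount point are placeholders; root privileges and the NFS client utilities are assumed):

    import subprocess

    # Mount a hypothetical NFS export of the scale-out pool.
    subprocess.run(
        ["mount", "-t", "nfs", "qs-node1:/export/share1", "/mnt/share1"],
        check=True,
    )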