Scale-out File Pool Setup (Ceph FS)

From OSNEXUS Online Documentation Site
Revision as of 23:20, 20 September 2019 by Qadmin (talk | contribs)


The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. CephFS achieves high availability through Ceph metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one metadata server, you can create a Ceph file system.
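QuantaStor performs these checks through the web UI, but for reference, a cluster's health and MDS state can also be confirmed with the standard ceph CLI (these commands assume CLI access to a live cluster):

```shell
# Overall cluster health: should report HEALTH_OK before provisioning a file system.
ceph -s

# Metadata server state: at least one MDS must be up and active.
ceph mds stat

# List any file systems that already exist.
ceph fs ls
```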

Create a Ceph Cluster

Specify the new Ceph cluster name along with the network interface configuration.
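The UI collects the cluster name and network interfaces for you. As a hedged sketch of what happens underneath, a hand-bootstrapped cluster would set its front-end and back-end networks roughly like this (the IP addresses and subnets below are placeholders, not values from this guide):

```shell
# Bootstrap a new cluster; the monitor IP sits on the front-end interface.
cephadm bootstrap --mon-ip 10.0.1.10

# Client traffic uses the public network; OSD replication uses the cluster network.
ceph config set global public_network 10.0.1.0/24
ceph config set global cluster_network 10.0.2.0/24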

Create an Object Storage Device (OSD)

Multiple object storage daemons (OSDs) can be created in a single operation, with a maximum of thirty journal devices per physical disk.
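The web UI creates OSDs and assigns journal devices automatically. For context, the equivalent manual step with the stock ceph tooling uses `ceph-volume` (device paths here are hypothetical examples):

```shell
# Create an OSD backed by a whole data disk.
ceph-volume lvm create --data /dev/sdb

# Create an OSD with its metadata/journal (RocksDB) placed on a faster device,
# e.g. an NVMe partition shared by several OSDs.
ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p1
```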

Add an MDS

One or two Ceph metadata server instances must be allocated to collectively manage the scale-out file system namespace.
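QuantaStor allocates the MDS instances through the UI. With the Ceph orchestrator, the comparable step is to request an MDS placement for the file system (the file system name `cephfs` below is a placeholder):

```shell
# Deploy two MDS daemons for the file system: one active, one standby.
ceph orch apply mds cephfs --placement="2"

# Confirm the daemons came up.
ceph orch ps --daemon-type mds
```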

Create a Pool Profile (Optional)

Optionally, create a custom erasure-coded pool profile to meet specific capacity and high-availability requirements for the file system.
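As a sketch of what an erasure-coded profile expresses, the stock ceph CLI version looks like the following (profile and pool names are hypothetical). A k=4, m=2 profile stores each object as 4 data chunks plus 2 coding chunks, tolerating two simultaneous failures at a raw-space overhead of 1.5x, versus 3x for triple replication:

```shell
# Define a 4+2 erasure-coding profile with host-level failure isolation.
ceph osd erasure-code-profile set ec42profile k=4 m=2 crush-failure-domain=host

# Create a data pool using that profile; CephFS data on EC pools
# requires overwrites to be enabled.
ceph osd pool create cephfs_data erasure ec42profile
ceph osd pool set cephfs_data allow_ec_overwrites true
```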

Create a File System

Create a new storage pool based on the Ceph file system. Once provisioned, network shares may be created from the new scale-out storage pool.
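The storage pool creation step wraps the underlying CephFS provisioning. For reference, the hand-rolled equivalent pairs a replicated metadata pool with a data pool and binds them into a file system (pool and file system names are placeholders):

```shell
# CephFS metadata should live on a replicated pool.
ceph osd pool create cephfs_metadata

# Bind the metadata and data pools into a new file system.
ceph fs new cephfs cephfs_metadata cephfs_data

# Verify the file system and its active MDS.
ceph fs status cephfs
```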

Create a Share

Create one or more network shares for users to access storage via the NFS and SMB protocols.
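QuantaStor creates and exports the shares from the UI. As an illustration of what an NFS share over CephFS involves, a gateway host would mount the file system and export it, roughly as follows (monitor address, secret file path, and client subnet are hypothetical):

```shell
# Mount CephFS on the gateway host.
mount -t ceph 10.0.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Export the mount over NFS to clients on the front-end network.
echo "/mnt/cephfs 10.0.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
```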