Scale-out File Pool Setup (Ceph FS)
The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. CephFS achieves high availability through its metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one Ceph metadata server, you can create a Ceph file system.
Create a Ceph Cluster
Specify the new Ceph cluster name along with the network interface configuration.
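Once the cluster is up, its health can be confirmed programmatically. The minimal sketch below assumes the python3-rados binding is installed and that the cluster configuration lives at the default path /etc/ceph/ceph.conf; adjust both for your environment.

 import json
 import rados

 # Connect using the default config path (an assumption; change as needed).
 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()

 # Ask the monitors for overall cluster status.
 ret, outbuf, errs = cluster.mon_command(
     json.dumps({'prefix': 'status', 'format': 'json'}), b'')
 status = json.loads(outbuf)
 # 'health.status' holds e.g. 'HEALTH_OK' on current releases.
 print('Cluster health:', status['health']['status'])

 cluster.shutdown()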
Create an Object Storage Device (OSD)
Multiple storage daemons can be created with a maximum of thirty journal devices per physical disk.
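After adding OSDs, you can verify how many are up and in. This sketch again assumes python3-rados and the default configuration path; the exact JSON key layout of 'osd stat' varies slightly between Ceph releases, so the keys below are hedged with lookups that tolerate absence.

 import json
 import rados

 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()

 # 'osd stat' reports how many OSDs exist and how many are up/in.
 ret, outbuf, errs = cluster.mon_command(
     json.dumps({'prefix': 'osd stat', 'format': 'json'}), b'')
 osd_stat = json.loads(outbuf)
 # Key names vary across releases; print the raw output if these are empty.
 print('OSDs total/up/in:', osd_stat.get('num_osds'),
       osd_stat.get('num_up_osds'), osd_stat.get('num_in_osds'))

 cluster.shutdown()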
Add a Metadata Server (MDS)
One or two Ceph Metadata Server instances must be allocated to collectively manage the scale-out file system namespace.
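The state of the metadata servers can be checked the same way. The sketch below (assuming python3-rados and the default config path) prints the MDS map summary, which shows active ranks and standby daemons.

 import json
 import rados

 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()

 # 'mds stat' summarizes MDS ranks, e.g.
 # 'myfs:1 {0=mds-a=up:active} 1 up:standby'.
 ret, outbuf, errs = cluster.mon_command(
     json.dumps({'prefix': 'mds stat'}), b'')
 print(outbuf.decode())

 cluster.shutdown()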
Create a Pool Profile (Optional)
Create a custom erasure-coded pool profile to meet specific capacity and high-availability requirements for the file system. This is an optional task.
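An equivalent erasure-code profile can also be defined through the monitor command interface. In this sketch the profile name and the 4+2 layout are placeholders, not values the product requires; python3-rados and the default config path are assumed.

 import json
 import rados

 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()

 # Define a hypothetical 4+2 profile: 4 data chunks, 2 coding chunks,
 # failure domain of 'host'. Name and layout are illustrative.
 ret, outbuf, errs = cluster.mon_command(json.dumps({
     'prefix': 'osd erasure-code-profile set',
     'name': 'ec-4-2-example',
     'profile': ['k=4', 'm=2', 'crush-failure-domain=host'],
 }), b'')
 assert ret == 0, errs

 cluster.shutdown()

With k=4 and m=2, each object is split into four data chunks plus two coding chunks, so usable capacity is 4/6 of raw and the pool tolerates two simultaneous failures within the chosen failure domain.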
Create a File System
Create a new storage pool based on the Ceph file system. Once provisioned, network shares may be created from the new scale-out storage pool.
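Under the hood, a CephFS file system is built from a metadata pool and a data pool. The sketch below creates both and then the file system itself via the monitor command interface; the pool names, file system name, and PG counts are placeholders to size for your own cluster.

 import json
 import rados

 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()

 def mon(cmd):
     """Run a monitor command and fail loudly on error."""
     ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b'')
     assert ret == 0, errs
     return outbuf

 # Names and pg_num values are placeholders; recent releases can
 # also size placement groups automatically via the autoscaler.
 mon({'prefix': 'osd pool create', 'pool': 'cephfs_metadata', 'pg_num': 32})
 mon({'prefix': 'osd pool create', 'pool': 'cephfs_data', 'pg_num': 128})
 mon({'prefix': 'fs new', 'fs_name': 'scaleoutfs',
      'metadata': 'cephfs_metadata', 'data': 'cephfs_data'})

 cluster.shutdown()

Note that the metadata pool must be replicated; only the data pool may use an erasure-code profile such as the one defined in the previous step, and on recent releases that additionally requires BlueStore with erasure-coded overwrites enabled.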
Create a Network Share
Create one or more network shares for users to access storage via the NFS and SMB protocols.
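The directory that backs a share can also be created directly in the file system with the python3-cephfs binding, as in this minimal sketch; the paths and mode are illustrative, and the NFS or SMB export itself is then configured on top of that directory.

 import cephfs

 # Mount the file system at its root and create a directory to back
 # the share. mkdir raises an error if the directory already exists.
 fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
 fs.mount()
 fs.mkdir('/shares', 0o755)
 fs.mkdir('/shares/projects', 0o755)
 fs.shutdown()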