Latest revision as of 22:21, 16 October 2023
Scale-out File Pool Setup Using Web Management App (Ceph FS)
The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. Ceph file systems are highly available using metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one Ceph metadata server, you can create a Ceph file system.
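The prerequisites above (a healthy cluster with at least one MDS) can be checked from any cluster node. As a rough sketch for reference, the upstream Ceph CLI reports overall health and metadata server availability:

```shell
# Confirm the storage cluster is healthy before creating a file system
ceph -s

# Confirm at least one metadata server (MDS) is available
ceph mds stat
```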
Create a Ceph Cluster
Specify the new Ceph cluster name along with the network interface configuration.
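The web management app performs cluster creation automatically. For reference only, a minimal sketch of bootstrapping a new upstream Ceph cluster with cephadm looks roughly like the following; the monitor IP is an example value and must match the node's address on the chosen cluster network:

```shell
# Bootstrap a new Ceph cluster on the first node
# (10.0.0.10 is an example monitor IP -- substitute your own)
cephadm bootstrap --mon-ip 10.0.0.10
```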
Create an Object Storage Device (OSD)
Multiple storage daemons can be created with a maximum of thirty journal devices per physical disk.
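OSD creation is likewise handled through the web UI. As a hedged reference, the roughly equivalent upstream Ceph CLI steps are to list candidate devices and then create an OSD on one of them; the hostname and device path below are example values:

```shell
# List devices available for use as OSDs across cluster nodes
ceph orch device ls

# Create an OSD on a specific device (example host and device path)
ceph orch daemon add osd ceph-node1:/dev/sdb
```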
Add a Metadata Server (MDS)
One or two Ceph Metadata Server instances must be allocated to collectively manage the scale-out file system namespace.
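For reference, deploying MDS instances via the upstream Ceph CLI looks roughly like this; the file system name and placement count are example values matching the one-or-two-MDS guidance above:

```shell
# Deploy two MDS daemons to serve the file system named "cephfs"
# ("cephfs" and the placement count are example values)
ceph orch apply mds cephfs --placement=2
```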
Create a Pool Profile (Optional)
Create a custom erasure-coded pool profile to meet specific capacity and high-availability requirements for the file system. This is an optional task.
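As a sketch of what such a profile contains, the upstream Ceph CLI equivalent defines the number of data chunks (k), coding chunks (m), and the failure domain; the profile name and k/m values below are example choices:

```shell
# Define an erasure-coded profile with 4 data chunks and 2 coding chunks,
# tolerating the loss of any 2 hosts (name and values are examples)
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host

# Review the resulting profile
ceph osd erasure-code-profile get ec-4-2
```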
Create a File System
Create a new storage pool based on the Ceph file system. Once provisioned, network shares may be created from the new scale-out storage pool.
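For reference, the roughly equivalent upstream Ceph CLI steps create a metadata pool and a data pool, then bind them into a file system; the pool and file system names are example values:

```shell
# Create the metadata and data pools, then the file system itself
# (pool and file system names are example values)
ceph osd pool create cephfs_metadata
ceph osd pool create cephfs_data
ceph fs new cephfs cephfs_metadata cephfs_data

# Verify the new file system is up
ceph fs status cephfs
```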
Create a Share
Create one or more network shares for users to access storage via the NFS and SMB protocols.
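From a client's point of view, accessing a share created above is a standard NFS or SMB mount. A minimal sketch, in which the server address, share name, credentials, and mount points are all example values:

```shell
# Mount a share exported over NFS (server address and export path are examples)
mount -t nfs 10.0.0.10:/export/share1 /mnt/share1

# Or mount the same share over SMB/CIFS (share name and user are examples)
mount -t cifs //10.0.0.10/share1 /mnt/share1 -o username=admin
```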