[[Category:start_guide]]
 
The dialogs for each step can be found through the Getting Started dialog or from the toolbar dialogs; each path is demonstrated in the sections below. We recommend the Getting Started path for initial creation since it walks you through the steps in the proper order.

''' OSNEXUS Videos '''

* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=VpfcjZDO3Ys]] [https://www.youtube.com/watch?v=VpfcjZDO3Ys Designing Ceph clusters with the QuantaStor solution design web app [17:44]]
* [[Image:youtube_icon.png|50px|link=https://www.youtube.com/watch?v=oxSJLoqOVGA]] [https://www.youtube.com/watch?v=oxSJLoqOVGA QuantaStor 6: Deploying a Ceph-based S3 Object Storage Cluster in minutes [19:16]]
=== Create Object Storage Cluster ===

QuantaStor leverages open storage Ceph technology to deliver Scale-out Object Storage (S3/SWIFT) and Scale-out Block Storage (iSCSI). Before provisioning scale-out storage solutions, one must set up one or more Ceph Clusters within a QuantaStor storage grid. Ceph Clusters should be comprised of at least three (3) Storage Systems within the grid. It is possible to create a single-node Ceph Cluster, but these are recommended only for test configurations.

Note also that one should set up the network ports on each System so that a dedicated high-speed back-end network may be used for inter-node communication. This back-end network is recommended to have double the bandwidth of the front-end network; for example, if clients connect over a 25GbE front-end, a 50GbE (or 2x25GbE bonded) back-end is ideal. Both the front-end and back-end network ports should be bonded ports for improved performance and high availability. The same network may be used for both the front-end and back-end, but this is not optimal.

[[File:Get Started Create Clstr.jpg|1024px]]

''- or -''

[[File:Create Ceph Cluster 6.jpg|1024px]]
  
 
=== Select Data Devices ===

Scale-out Storage Pools and Zones store their data in object storage devices called '''OSD'''s ('''O'''bject '''S'''torage '''D'''aemons). Each Physical Disk assigned to the Ceph Cluster as a data device becomes an OSD. To accelerate write performance, each System must have at least one SSD device assigned as a '''WAL''' ('''W'''rite-'''A'''head '''L'''ogging) Device. All writes are logged to an OSD's associated WAL Device first. Press the button to the left to add OSDs and WAL devices to the Ceph Cluster. Note that the "Auto Config" button, highlighted in red, makes the proper selections for you.

[[File:Get Started Create OSD.jpg|1024px]]

''- or -''

[[File:Create OSDs & Journals Web 6.jpg|1024px]]
=== Create S3 Object Storage Zone ===

To access the cluster via the S3 object storage protocol, a Zone must be created. Press the "Create Object Storage Pool" button to create an S3 object storage pool and a zone within the cluster.

[[File:Get Start Create Obj Strg Pool.jpg|1024px]]

''- or -''

[[File:Create Obj Strg Pool.jpg|1024px]]
=== Add S3 Gateways ===

Select the Ceph Cluster to add a new S3 Gateway to and specify the network interface configuration.

[[File:Get Start Add S3 Gateway.jpg|1024px]]

''- or -''

[[File:Add S3 Gateway Web Page.jpg|1024px]]
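Once a gateway is online, S3 clients reach the zone over HTTP(S) through the gateway's network interface. As a minimal sketch (the hostname below is a made-up placeholder, and the default port assumes a Ceph RADOS Gateway listening on its standard port 7480; use whatever port was chosen when the gateway was created):

```python
def gateway_endpoint(host: str, port: int = 7480, secure: bool = False) -> str:
    """Build the base URL an S3 client would use to reach a gateway.

    7480 is the Ceph RADOS Gateway default port; adjust it to match
    the gateway's actual network interface configuration.
    """
    scheme = "https" if secure else "http"
    return f"{scheme}://{host}:{port}"

# Hypothetical gateway host for illustration only.
print(gateway_endpoint("qs-node1.example.com"))             # http://qs-node1.example.com:7480
print(gateway_endpoint("qs-node1.example.com", 443, True))  # https://qs-node1.example.com:443
```

This base URL is what S3 client libraries typically accept as a custom endpoint setting when pointing them at an on-premises object store instead of a public cloud.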
=== Add S3 Users ===

After the Zone has been created, users must be given an access key and secret key in order to create buckets and objects via the S3 protocol. Press the button to the left to add an S3 Object User Access Entry to the Zone.

[[File:Get Start Create S3 User.jpg|1024px]]

''- or -''

[[File:Create S3 User-Web.jpg|924px]]
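The access key and secret key pair issued here is what S3 client libraries use to sign requests (AWS Signature Version 4). To illustrate the role the secret key plays, the stdlib-only sketch below derives a SigV4 signing key; the credential and region values are made-up placeholders, and in practice an S3 SDK performs this chain for you:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive an AWS Signature Version 4 signing key via chained HMAC-SHA256.

    date is YYYYMMDD; the chain is dated key -> region -> service -> "aws4_request".
    """
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder secret key and region for illustration only.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240220", "default")
print(key.hex())  # 32-byte signing key, printed as 64 hex digits
```

Because every signature depends on the secret key, anyone holding the key pair can create buckets and objects in the Zone, so keys should be distributed and stored carefully.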
=== Create S3 Bucket ===

Create one or more buckets for storing objects via the S3 protocol.

[[File:Get Start Create Bucket.jpg|1024px]]

''- or -''

[[File:Create S3 Bckt.jpg|1024px]]
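S3 imposes naming rules on buckets (3-63 characters; lowercase letters, digits, hyphens, and dots; must begin and end with a letter or digit; must not be formatted like an IP address). A small stdlib-only checker, useful for validating names before handing them to any S3 client:

```python
import re

# 3-63 chars total: one leading and one trailing [a-z0-9], 1-61 in between.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the common S3 naming rules."""
    if not BUCKET_RE.match(name):
        return False
    if IPV4_RE.match(name):   # names must not be formatted like an IP address
        return False
    if ".." in name:          # no consecutive dots
        return False
    return True

print(is_valid_bucket_name("my-backups-01"))  # True
print(is_valid_bucket_name("My_Bucket"))      # False (uppercase and underscore)
print(is_valid_bucket_name("192.168.1.10"))   # False (IP-formatted)
```

Sticking to these rules also keeps bucket names usable as DNS labels, which matters for clients that address buckets in virtual-hosted style.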
  
  
 
{{Template:ReturnToWebGuide}}

[[Category:WebUI Dialog]]
[[Category:QuantaStor6]]
[[Category:Requires Review]]

''Latest revision as of 13:05, 20 February 2024''