Overview (Getting Started)

From OSNEXUS Online Documentation Site

Getting Started/Configuration Guide

Navigation: Storage Management --> Storage System --> Storage System --> Getting Started (toolbar)
 
==System Setup==
[[File:Getting Started - Sys Setup 5.4.jpg|356px|thumb|Initial Storage System setup procedures.]]

This procedure will assist with the initial setup items, including adding license keys, changing the 'admin' user password, and making initial network configuration changes.

-Step one guides you through [[Create_Management_Grid|creating a Storage Grid]].<br>
-Step two guides you through activating a License.<br>
-Step three guides you through changing the Administrator's Password.<br>
-Step four guides you through modifying the Network Configuration.<br>

==Storage Grid Setup==
[[File:Getting Started - Stor Grid Setup 5.4.jpg|356px|thumb|Storage Grid setup procedures.]]

This procedure will assist with setting up a Storage Grid. Storage Grid technology makes it easy to manage large numbers of Storage Systems and enables access to features like remote-replication and scale-out storage.

-Step one guides you through creating a Storage Grid.<br>
-Step two guides you through adding Storage Systems to the Storage Grid.<br>
-Step three guides you through modifying common network settings.<br>
-Step four guides you through configuring the Alert Manager to send alert notifications to various devices.<br>
-Step five (Optional) guides you through configuring the password policy settings.<br>

==Highly-Available Pool Setup (ZFS)==
[[File:Getting Started - Highly-Available Pool Setup (ZFS) 5.4.jpg|356px|thumb|Highly-Available Pool (ZFS) setup procedures.]]

Storage pools formed from devices which are dual-connected to two storage systems may be made highly-available. This is done by creating a storage pool high-availability group to which one or more virtual network interfaces (VIFs) are added. All access to storage pools must be done via the associated VIFs to ensure continuous data availability in the event the pool is moved (failed-over) to another system. A minimal client-side sketch of VIF access follows the step list.

-Step one guides you through creating a Site Cluster.<br>
-Step two guides you through adding a Cluster Ring.<br>
-Step three guides you through creating a Storage Pool.<br>
-Step four guides you through creating a Storage Pool High-Availability Group.<br>
-Step five guides you through adding a Pool Interface.<br>

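The sketch below is an illustrative client-side check (not part of QuantaStor) showing that clients should always address the pool's VIF rather than a physical node address; the VIF IP and port are placeholder assumptions.

<syntaxhighlight lang="python">
import socket

# Placeholder VIF address assigned to the storage pool HA group (assumption).
POOL_VIF = "10.0.0.50"
NFS_PORT = 2049  # standard NFS port; SMB would use 445


def vif_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the pool VIF succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Clients mount and connect against the VIF, so a pool failover to the
# other node is transparent: the VIF moves and this check keeps passing.
print(f"Pool VIF {POOL_VIF} reachable: {vif_reachable(POOL_VIF, NFS_PORT)}")
</syntaxhighlight>

The same principle applies to NFS, SMB, and iSCSI clients: configure them against the VIF, never against a node's physical address.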
  
==Scale-out S3 Pool Setup (Ceph RGW)==
[[File:Getting Started - Scale-out S3 Pool Setup (Ceph RGW) 5.4.jpg|356px|thumb|Setup Object Storage (Ceph RGW, '''R'''ADOS '''G'''ate'''w'''ay).]]

Scale-out Object and Block storage configurations enable S3/SWIFT object storage access as well as iSCSI storage access to scale-out Storage Volumes. Scale-out storage uses replicas and/or erasure-coding technologies to ensure fault-tolerance and high-availability. Ceph technology is used within the platform, so a Ceph Cluster must first be created to enable this functionality.

-Step one guides you through creating an Object Storage Cluster.<br>
-Step two guides you through creating '''O'''bject '''S'''torage '''D'''evices (OSDs).<br>
-Step three guides you through creating an S3/SWIFT Object Storage Zone.<br>
-Step four guides you through selecting a Ceph Cluster for the S3/SWIFT Gateway.<br>
-Step five guides you through creating User Access Entries.<br>
-Step six guides you through creating Buckets for writing object storage via the S3/SWIFT protocols (see the client sketch below).<br>

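After steps five and six, an S3 client can write to the new bucket through the gateway. The sketch below uses the boto3 library; the endpoint URL, access keys, and bucket name are placeholder assumptions rather than values produced by the dialogs above.

<syntaxhighlight lang="python">
# Requires the third-party 'boto3' package: pip install boto3
import boto3

# Placeholder RGW endpoint and credentials from a User Access Entry (assumptions).
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESSKEY",
    aws_secret_access_key="SECRETKEY",
)

# Create a bucket and write a small object via the S3 protocol.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")

# List what landed in the bucket.
for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
</syntaxhighlight>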
  
 
==Scale-out File Pool Setup (CephFS)==
[[File:Getting Started - Scale-out File Pool Setup (Ceph FS).jpg|356px|thumb|Setup a Ceph File System (CephFS).]]

The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. Ceph file systems are highly available using metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one Ceph metadata server, you can create a Ceph file system. A short sketch of what POSIX semantics give clients appears after the step list.

-Step one guides you through creating a Scale-out Storage Cluster.<br>
-Step two guides you through creating a Ceph '''O'''bject '''S'''torage '''D'''evice (OSD).<br>
-Step three guides you through creating a Ceph '''M'''etadata '''S'''erver (MDS).<br>
-Step four (Optional) guides you through creating a Pool Profile.<br>
-Step five guides you through creating a Ceph File System.<br>
-Step six guides you through creating one or more Network Shares via the NFS and SMB protocols.<br>

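As a small illustration of what POSIX compliance means for applications, the sketch below uses only the Python standard library against an assumed client-side mount point; /mnt/cephfs is a placeholder, and in practice clients reach the file system through the Network Shares created in step six.

<syntaxhighlight lang="python">
import os
from pathlib import Path

# Placeholder mount point for a CephFS-backed share on the client (assumption).
MOUNT = Path("/mnt/cephfs")

scratch = MOUNT / "scratch"
scratch.mkdir(parents=True, exist_ok=True)

# Ordinary file I/O works exactly as on a local file system.
tmp = scratch / "report.tmp"
tmp.write_text("nightly totals: 42\n")

# POSIX rename is atomic, so readers never observe a half-written file.
os.replace(tmp, scratch / "report.txt")

# Hard links and standard file metadata are available too.
os.link(scratch / "report.txt", scratch / "report-latest.txt")
print(os.stat(scratch / "report.txt").st_nlink)  # -> 2
</syntaxhighlight>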
  
 
==Scale-Out Block Pool Setup (Ceph RBD)==
[[File:Getting Started - Scale-out Block Pool Setup (Ceph RBD).jpg|356px|thumb|Setup Scale-out object and block storage.]]

Scale-out object and block storage configurations enable S3/SWIFT object storage access as well as iSCSI storage access to scale-out storage volumes. Scale-out storage uses replicas and/or erasure-coding technologies to ensure fault-tolerance and high-availability. Ceph technology is used within the platform, so a Ceph cluster must first be created to enable this functionality.

-Step one guides you through creating a Storage Cluster for Ceph-based file, block, or S3 object storage.<br>
-Step two guides you through creating '''O'''bject '''S'''torage '''D'''evices (OSDs).<br>
-Step three guides you through allocating one or two '''C'''eph '''M'''etadata '''S'''erver (MDS) instances.<br>
-Step four (Optional) guides you through creating a Block Storage Pool.<br>
-Step five guides you through creating one or more Storage Volumes (an underlying-technology sketch follows the step list).<br>

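QuantaStor provisions scale-out Storage Volumes through the web UI dialogs referenced above. Purely as an illustration of the underlying Ceph RBD technology, the sketch below uses the upstream rados and rbd Python bindings to create and write a small image; the ceph.conf path, pool name, and image name are placeholder assumptions.

<syntaxhighlight lang="python">
# Requires the Ceph Python bindings (python3-rados, python3-rbd).
import rados
import rbd

# Connect to the Ceph cluster using a placeholder ceph.conf path (assumption).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Open an I/O context on an existing RADOS pool (placeholder name).
    ioctx = cluster.open_ioctx("rbd")
    try:
        # Create a 4 GiB RBD image; scale-out block Storage Volumes are
        # built on images like this.
        rbd.RBD().create(ioctx, "demo-volume", 4 * 1024**3)

        # Write a few bytes at offset 0 to show block-level access.
        with rbd.Image(ioctx, "demo-volume") as image:
            image.write(b"hello rbd", 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
</syntaxhighlight>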
  
 
==Provision File Storage (NFS/CIFS)==
[[File:Getting Started - Provision File Storage (NFS-CIFS) 5.4.jpg|356px|thumb|Storage Pools (ZFS or CephFS) must be created before Network Shares may be provisioned.]]

File storage folders are referred to as Network Shares. Network Shares may be accessed via both the NFS and the SMB protocols; a minimal client-side access sketch follows the step list.

-Step one (Optional) guides you through configuring an '''A'''ctive '''D'''irectory (AD) domain to provide '''S'''erver '''M'''essage '''B'''lock (SMB) access to AD users and groups.<br>
-Step two guides you through creating a Network Share in a storage pool, which provides users with storage via the '''S'''erver '''M'''essage '''B'''lock (SMB) and '''N'''etwork '''F'''ile '''S'''ystem (NFS) protocols.<br>
-Step three guides you through modifying a Network Share, which may have one or more NFS client access entries.<br>
-Step four guides you through modifying the Network Share to assign access to specific users.<br>
-Step five (Optional) guides you through configuring Network Share Namespaces.<br>

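Once a Network Share exists and a user has been granted access, a client can reach it over SMB. The sketch below uses the third-party smbprotocol package (not part of QuantaStor); the server name, share name, and credentials are placeholder assumptions.

<syntaxhighlight lang="python">
# Requires the third-party 'smbprotocol' package: pip install smbprotocol
import smbclient

# Placeholder server (or pool VIF), share, and credentials (assumptions).
SERVER = "qs-pool-vif.example.com"
SHARE = rf"\\{SERVER}\projects"

smbclient.register_session(SERVER, username="DOMAIN\\alice", password="secret")

# Write a file into the Network Share and list the share contents.
with smbclient.open_file(rf"{SHARE}\hello.txt", mode="w") as fh:
    fh.write("written over SMB\n")

print(smbclient.listdir(SHARE))
</syntaxhighlight>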
  
 
==Provision Block Storage (iSCSI/FC)==
[[File:Getting Started - Provision Block Storage (iSCSI-FC) 5.4.jpg|356px|thumb|Setup Block Storage Provisioning (iSCSI/FC).]]

This procedure will assist with creating Storage Volumes and assigning them to one or more Hosts.

-Step one guides you through creating a Storage Volume.<br>
-Step two guides you through adding one or more Hosts.<br>
-Step three guides you through assigning Volumes to Hosts.<br>
  
 
==Remote-Replication Setup==
[[File:Getting Started - Remote-Replication Setup 5.4.jpg|356px|thumb|Remote Replication Setup]]

Remote-Replication enables you to asynchronously replicate storage volumes and network shares from any storage pool to any destination storage pool within the storage grid. Replication is usually automated via a replication schedule so that it may be used as part of a business continuity and DR plan.

-Step one guides you through creating a Storage System Link.<br>
-Step two guides you through creating a Replication Schedule.<br>
-Step three guides you through triggering a Replication Schedule.<br>
 
{{Template:ReturnToWebGuide}}

[[Category:Incomplete]]
[[Category:QuantaStor5]]
[[Category:WebUI Dialog]]
