+ Getting Started Overview

From OSNEXUS Online Documentation Site
[[Category:start_guide]]
This guide assumes that you have already installed QuantaStor and have successfully logged into QuantaStor Manager.  If you have not yet installed QuantaStor on your server, please see the [[QuantaStor Installation Guide|Installation Guide]] for more details.
  
== First Time Login ==

The default administrator user name for your storage system is simply 'admin'.  This user account is present on all QuantaStor systems.  At the login dialog the admin user account name is pre-populated as this is the most commonly used login.  If QuantaStor was installed for you by a CSP or VAR, the password for the system(s) should have been emailed to you or made available via the CSP's server management panel.  If you've installed QuantaStor on a new server or VM, the 'admin' account defaults to a password of 'password' (without the single quotes).  '''IMPORTANT:''' Please change this immediately after you first login via the Users & Groups section.
  
=== Change Administrator Password ===
  
[[File:Set Password UI 5.5.jpg|640px|Change the Administrator's Password.]]
  
For security, you will want to create a secure password for your storage system. This can be completed quickly using the [[User Set Password | Change Admin Password]] button in the Manager, or via the ''Set Password'' button in the toolbar under "Users & Groups".

For more information on the password, refer to the [[User Set Password]] page.
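As a rough illustration of what a "secure password" can mean in practice, here is a small strength check in Python. The specific rules (minimum length, character classes) are example assumptions for illustration only, not a policy QuantaStor enforces:

```python
import re

def is_strong_password(password: str) -> bool:
    """Illustrative strength check: length plus character variety.

    These rules are an example policy, not one enforced by QuantaStor.
    """
    if len(password) < 12:
        return False
    required_patterns = [
        r"[a-z]",          # at least one lowercase letter
        r"[A-Z]",          # at least one uppercase letter
        r"[0-9]",          # at least one digit
        r"[^a-zA-Z0-9]",   # at least one symbol
    ]
    return all(re.search(p, password) for p in required_patterns)

# The factory default is rejected immediately:
print(is_strong_password("password"))           # False
print(is_strong_password("N0t-the-Default!x"))  # True
```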
  
== Step by Step Configuration Procedure ==
  
=== [[QuantaStor_Getting_Started_Guide|Additional Information for the Getting Started Dialog]] ===
  
[[File:Getting Started Sys Set.jpg|512px|Getting Started]]
On your first login, the Getting Started dialog provides step-by-step guidance to set up your desired Storage System.
  
== Configuration Procedure without Getting Started dialog ==
=== [[Grid_Configuration|Grid Configuration]] ===
Combine systems together to form a storage grid. <br>This enables a whole host of additional features including remote-replication, clustering and more.
  
=== [[Template:Editing_Highly-Available_Pool_Setup_(ZFS)|Highly-Available Pool Setup (ZFS)]] ===
Storage pools formed from devices which are dual-connected to two storage systems may be made highly-available. This is done by creating a storage pool high-availability group to which one or more virtual network interfaces (VIFs) are added. All access to storage pools must be done via the associated VIFs to ensure continuous data availability in the event the pool is moved (failed-over) to another system.
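The role of the VIF can be sketched with a small model: clients keep a single, stable address while the system owning the pool changes underneath it. Everything here (the <code>HaPoolGroup</code> class, the node names, the address) is illustrative, not a QuantaStor API object:

```python
class HaPoolGroup:
    """Toy model of a storage pool high-availability group with one
    virtual network interface (VIF).

    Clients always connect to the VIF address; on failover the VIF
    moves with the pool to the standby system, so the client-visible
    address never changes.
    """

    def __init__(self, vif_address: str, active: str, standby: str):
        self.vif_address = vif_address
        self.active = active
        self.standby = standby

    def serving_system(self) -> str:
        # The VIF is bound to whichever system currently owns the pool.
        return self.active

    def failover(self) -> None:
        # The pool, and its VIF, move over to the standby system.
        self.active, self.standby = self.standby, self.active

group = HaPoolGroup("10.0.0.50", active="node-a", standby="node-b")
print(group.vif_address, group.serving_system())  # 10.0.0.50 node-a
group.failover()
print(group.vif_address, group.serving_system())  # 10.0.0.50 node-b
```

This is why clients must mount or connect via the VIF rather than a system's own address: a connection made to node-a directly would be lost when the pool fails over.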
  
=== [[Template:Scale-out_S3_Pool_Setup_(Ceph_RGW)|Scale-out S3 Pool Setup (Ceph RGW)]] ===
Scale-out object and block storage configurations enable S3/SWIFT object storage access as well as iSCSI storage access to scale-out storage volumes. Scale-out storage uses replicas and/or erasure-coding technologies to ensure fault-tolerance and high-availability. Ceph technology is used within the platform so a Ceph cluster must first be created to enable this functionality.
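The capacity trade-off between replicas and erasure coding can be sketched with simple arithmetic. This is a rough model (real Ceph clusters reserve additional headroom, and the function name is illustrative): with N-way replication each object is stored N times, while k+m erasure coding stores k data chunks plus m parity chunks.

```python
def usable_capacity_tb(raw_tb: float, *, replicas: int = 0,
                       ec_k: int = 0, ec_m: int = 0) -> float:
    """Approximate usable capacity of a scale-out pool.

    With N-way replication each object is stored N times; with k+m
    erasure coding, k chunks carry data and m chunks carry parity.
    Rough arithmetic only; real clusters keep extra free headroom.
    """
    if replicas:
        return raw_tb / replicas
    if ec_k and ec_m:
        return raw_tb * ec_k / (ec_k + ec_m)
    raise ValueError("specify either replicas or ec_k/ec_m")

print(usable_capacity_tb(300, replicas=3))      # 100.0
print(usable_capacity_tb(300, ec_k=4, ec_m=2))  # 200.0
```

Both layouts above tolerate two simultaneous failures, but the erasure-coded pool yields twice the usable capacity at the cost of more CPU work per write.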
  
=== [[Template:Scale-out_File_Pool_Setup_(Ceph_FS)|Scale-out File Pool Setup (Ceph FS)]] ===
The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. Ceph file systems are highly available using metadata servers (MDS). Once you have a healthy Ceph storage cluster with at least one Ceph metadata server, you can create a Ceph file system.
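The two prerequisites named above can be summarized as a small check. This is illustrative only; QuantaStor validates the cluster state itself when you create the file system, and the health string follows Ceph's reporting convention:

```python
def cephfs_ready(cluster_health: str, mds_count: int) -> bool:
    """The two prerequisites from the text: a healthy Ceph cluster
    ("HEALTH_OK") and at least one metadata server (MDS) daemon.
    Illustrative sketch; QuantaStor performs these checks for you.
    """
    return cluster_health == "HEALTH_OK" and mds_count >= 1

print(cephfs_ready("HEALTH_OK", 2))    # True
print(cephfs_ready("HEALTH_WARN", 2))  # False
print(cephfs_ready("HEALTH_OK", 0))    # False
```

Deploying more than one MDS is worthwhile in practice, since the extra daemons act as standbys and keep the file system available if the active MDS fails.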
  
=== [[Object Storage Setup|Scale-out S3 Object Storage Setup]] ===
If your primary use case is to set up an object storage cluster with three or more QuantaStor servers, start here.
  
=== [[File Storage Provisioning|Provisioning File Storage]] ===
If your primary use case is to set up your QuantaStor as a NAS filer, start here to learn how to provision Network Shares.
  
=== [[Block Storage Provisioning|Provisioning Block Storage]] ===
If your primary use case is to set up your QuantaStor as a SAN, start here to learn how to provision Storage Volumes.
  
=== [[Remote-Replication Setup]] ===
The recovery manager enables you to quickly and easily recover the storage system database after a full system reinstall.
  
For further information, see [https://wiki.osnexus.com/index.php?title=Remote-replication_(DR) Remote Replication].
  
<br>
  
{{Template:ReturnToWebGuide}}
 
[[Category:WebUI Dialog]]

Latest revision as of 11:30, 16 June 2021
