Getting Started Overview

From OSNEXUS Online Documentation Site
Revision as of 13:51, 6 July 2010 by 192.168.0.1 (Talk)

This guide assumes that you have already installed QuantaStor and have successfully logged into QuantaStor Manager. If you have not yet installed the QuantaStor SSP software on your server, please see the Installation Guide for more details. It is best to follow along with this Getting Started Guide while configuring the system using the Getting Started checklist, which appears when you first log into the QuantaStor Manager web interface. The checklist can be accessed again at any time by pressing the 'System Checklist' button in the toolbar.

The default administrator user name for your storage system is simply 'admin', and it will automatically appear in the username field of the login screen. The initial password for the 'admin' account is 'password' (without the quotes). You will want to change this after you first log in; doing so is one of the steps in the checklist.

License Key Management

Once you have the software installed, the first thing you must do is enter your license key block. Your license key block can be added using the License Manager dialog which can be accessed by pressing the License Manager button in the toolbar. It's also presented as the first step in the 'Getting Started' checklist. The key block you received via email is contained within markers like so:

--------START KEY BLOCK--------

---------END KEY BLOCK--------- 

Note that when you add the key using the 'Add License' dialog, you can include or omit the START/END KEY BLOCK markers; it makes no difference. Once your key block has been entered you'll want to activate your key, which can be done in just a few seconds using the online Activation dialog. If your storage system is not connected to the internet, select the 'Activate via Email' dialog and send the information contained within to support@osnexus.com. You have a 7-day grace period for license activation, so you can begin configuring and utilizing the system before it is activated. That said, if you do not activate within those 7 days, the storage system will no longer allow any additional configuration changes until an activation key is supplied.

Creating Storage Pools

Storage pools combine or aggregate one or more physical disks (SATA, SAS, or SSD) into a single pool of storage from which storage volumes (iSCSI targets) can be created. Storage pools can be created using any of the following RAID levels: RAID0, RAID1, RAID5, RAID6, or RAID10. Choosing the optimal RAID level depends on your target application, the number of disks you have, and the amount of fault tolerance you need. Fault tolerance is just a fancy way of saying how many disks can fail within the storage pool before you lose data. RAID1 and RAID5 allow one disk to fail without interrupting disk IO. When a disk fails you can remove it, and you should add a spare disk to the 'degraded' storage pool as soon as possible in order to restore it to a fault-tolerant status.

  • RAID0 layout is also called 'striping' and it writes data across all the disk drives in the storage pool in a round-robin fashion. This has the effect of greatly boosting performance. The drawback of RAID0 is that it is not fault tolerant: if a single disk fails, all of the data in the entire pool is lost. As such, RAID0 is not recommended except in special cases where the potential for data loss is a non-issue.
  • RAID1 is also called 'mirroring' because it achieves fault tolerance by writing the same data to two disk drives, so that you always have two copies of the data. If one drive fails, the other has a complete copy and the storage pool continues to run. RAID1 and its variant RAID10 are ideal for databases and other applications which do a lot of small I/O operations.
  • RAID5 achieves fault tolerance via what's called a parity calculation, where one of the drives contains an XOR calculation of the bits on the other drives. For example, if you have 4 disk drives and you create a RAID5 storage pool, 3 of the disks will store data and the last disk will contain parity information. This parity information on the 4th drive can be used to recover from any data disk failure. In the event that the parity drive fails, it can be replaced and reconstructed using the data disks. RAID5 (and RAID6) are especially well suited for audio/video streaming, archival, and other applications which do heavy sequential I/O (such as reading/writing large files), and are not as well suited for database applications which do heavy amounts of small random I/O operations (lots of small files).
  • RAID6 improves upon RAID5 in that it can handle two drive failures but it requires that you have two disk drives dedicated to parity information. For example, if you have a RAID6 storage pool comprised of 5 disks then 3 disks will contain data, and 2 disks will contain parity information. If the disks are all 1TB disks then you will have 3TB of usable disk space for the creation of volumes.
  • RAID10 is similar to RAID1 in that it utilizes mirroring, but RAID10 also does striping over the mirrors. This gives you the fault tolerance of RAID1 combined with the performance of RAID0. The drawback is that half the disks are used for fault tolerance, so if you have eight 1TB disks utilized to make a RAID10 storage pool, you will have 4TB of usable space for creation of volumes. RAID10 performs very well with both small random IO operations and sequential operations, and it is highly fault tolerant, as multiple disks can fail as long as they're not from the same mirror-pairing. If you have the disks and you have a mission-critical application, we highly recommend that you choose the RAID10 layout for your storage pool.
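The XOR parity idea behind RAID5, described above, can be sketched in a few lines of Python. This is purely illustrative (the disk contents and function name are invented for the example, not part of QuantaStor): three "data disks" hold byte strings, the "parity disk" stores their bytewise XOR, and the XOR of any three surviving members reconstructs the fourth.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR across a list of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three hypothetical "data disks" in a 4-disk RAID5 set.
data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]

# The 4th disk stores the parity: the XOR of the data disks.
parity = xor_blocks(data_disks)

# Simulate losing data disk 1: XOR of the surviving disks plus
# the parity disk reconstructs the missing data exactly.
recovered = xor_blocks([data_disks[0], data_disks[2], parity])
assert recovered == data_disks[1]
```

The same property is why a failed parity drive can be rebuilt from the data drives: parity is just the XOR of everything else.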

In many cases it is useful to create more than one storage pool, so that you have both basic, lower-cost fault-tolerant storage (for example a RAID5 pool) as well as a highly fault-tolerant RAID10 or RAID6 pool for mission-critical applications.
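The usable-capacity arithmetic from the RAID descriptions above can be summarized in a short sketch. This is a back-of-the-envelope calculation only (the function is my own, not a QuantaStor API, and real pools have some additional metadata overhead):

```python
def usable_capacity_tb(raid_level, disk_count, disk_size_tb):
    """Approximate usable capacity for the RAID levels discussed above."""
    if raid_level == "RAID0":
        return disk_count * disk_size_tb            # striping, no redundancy
    if raid_level == "RAID1":
        return disk_size_tb                         # two-disk mirror
    if raid_level == "RAID5":
        return (disk_count - 1) * disk_size_tb      # one disk's worth of parity
    if raid_level == "RAID6":
        return (disk_count - 2) * disk_size_tb      # two disks' worth of parity
    if raid_level == "RAID10":
        return (disk_count // 2) * disk_size_tb     # half lost to mirroring
    raise ValueError("unsupported RAID level: " + raid_level)

# The examples from the text: five 1TB disks in RAID6 yield 3TB,
# and eight 1TB disks in RAID10 yield 4TB.
print(usable_capacity_tb("RAID6", 5, 1))    # 3
print(usable_capacity_tb("RAID10", 8, 1))   # 4
```

Running the numbers like this before creating a pool makes the capacity/fault-tolerance trade-off between, say, RAID5 and RAID10 concrete for your disk count.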