The QuantaStor Administrator Guide is intended for all IT administrators working to set up or maintain a QuantaStor system or grid of systems, as well as for those looking to get a deeper understanding of how the QuantaStor software-defined storage platform works.

[[Category:admin_guide]]

== Administrator Guide Topic Links ==

[[Web UI Definition]]

[[Navigation for Dialog Access]]

[[Navigation using Right Click for Dialog Access]]

[[Storage System]]

[[Grid Configuration]]

[[License Management]]

=== Hardware Configuration ===

[[Network Port Configuration]]

[[Physical Disk/Device Management]]

[[Hardware Controller & Enclosure Management]]

[[Multipath Configuration]]

=== Storage Provisioning ===

[[Storage Pool Management]]

[[Storage Volume Management]]

[[Network Share Management]]

[[Cloud Containers/NAS Gateway]]

=== Security, Alerting & Upgrades ===

[[Call-home/Alert Management]]

[[Security Configuration]]

[[Upgrade Manager]]

=== Snapshots & Replication ===

[[Snapshot Schedules]]

[[Backup Policies]]

[[Remote-replication (DR)]]

=== Cluster Configuration ===

[[HA Cluster Setup (JBODs)]]

[[HA Cluster Setup (external SAN)]]

[[Scale-out_Block_Setup_(ceph)|Scale-out Block Setup (ceph)]]

[[Scale-out Object Setup (ceph)|Scale-out Object Setup (ceph)]]

[[Scale-out File Setup (ceph)|Scale-out File Setup (ceph)]]

=== Optimization ===

[[Performance Tuning]]

[[Performance Monitoring]]

=== System Internals ===

[[QuantaStor systemd Services]]

[[QuantaStor Configuration Files]]

[[QuantaStor Shell Utilities]]

== Storage System Management Operations ==

When you initially connect to QuantaStor Manager you'll see a toolbar (aka ribbon-bar) at the top of the screen and a stack view / tree view on the left-hand side of the screen.  By selecting different areas of the tree view (Storage Volumes, Hosts, etc.) the ribbon-bar / toolbar will change accordingly to indicate the operations available for that section.  The following diagram shows these two sections:

[[File:qs_scrn_tree.png|Main Tree View & Ribbon-bar / Toolbar]]

Note also that you can right-click on the title-bar for each stack item in the tree view to access a pop-up menu, and you can right-click on any object anywhere in the UI to access a context-sensitive pop-up menu for that item.
=== License Management ===

QuantaStor has two different categories of license keys: 'System' licenses and 'Feature' licenses.  The 'System' license specifies all the base features and capacity limits for your storage appliance and most systems have just a single 'System' license.  'Feature' licenses stack on top of an existing 'System' license and allow you to add features and capacity to an existing 'System'.  In this way you can start small and add more capacity as you need it.

Note also that everything is license key controlled with QuantaStor so you do not need to reinstall to go from a Trial Edition license to a Silver/Gold/Platinum license.  Simply add your new license key and it will replace the old one automatically.
=== Recovery Manager ===

The 'Recovery Manager' is accessible from the ribbon-bar at the top of the screen when you log in to your QuantaStor system and it allows you to recover all of the system metadata from a prior installation.  The system metadata includes user accounts, storage assignments, host entries, storage clouds, custom roles and more.  To use the 'Recovery Manager' just select it, then select the database you want to recover and press OK.  If you choose the 'network configuration recovery' option it will also recover the network configuration.  Be careful with that as it will most likely drop your current connection to QuantaStor when the IP address changes, and if something goes wrong you'll need to log in at the console to find out what the new IP addresses are.  In the worst case scenario you may need to manually edit the /etc/network/interfaces file as per the same procedure one would use with any Debian/Ubuntu server.

[[File:recovery_manager.png]]
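
If you do end up repairing the network configuration by hand, a minimal static configuration in /etc/network/interfaces follows the standard Debian/Ubuntu format shown below (the interface name and addresses are examples only; substitute the values for your environment):

<pre>
# /etc/network/interfaces -- minimal static configuration (example values)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
</pre>

After saving the file, restart networking or reboot the appliance for the change to take effect.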
=== Upgrade Manager ===

The Upgrade Manager handles the process of upgrading your system to the next available minor release version.  Note that the Upgrade Manager will not upgrade QuantaStor from a v2 to a v3 version; that requires a re-installation of the QuantaStor OS and then recovery of metadata using the 'Recovery Manager'.  The Upgrade Manager displays the available versions for the four key packages: the core services, web manager, web server, and SCSI target drivers.  You can upgrade any of the packages at any time and it will not block iSCSI or NFS access to your appliance.  With upgrades to the SCSI target driver package you will need to restart your storage system/appliance for the new drivers to become active.

Note also that you should always upgrade both the manager and service packages together; never upgrade just one or the other as this may cause problems when you try to login to the QuantaStor web management interface.

On occasion we'll see problems with an upgrade and so we've written a troubleshooting section on how to work out those issues here:

[[QuantaStor_Troubleshooting_Guide#Login_.26_Upgrade_Issues | Troubleshooting Upgrade Issues]]
=== System Checklist ===

The 'System Checklist' (aka 'Getting Started') will appear automatically when you login whenever there is no license key assigned to the system.  After that you can still bring up the System Checklist by selecting it from the ribbon-bar.  As the name implies, it will help you configure your system and in the process help you get acquainted with QuantaStor.
=== System Hostname & DNS management ===

To change the name of your system you can simply right-click on the storage system in the tree stack on the left side of the screen and then choose 'Modify Storage System'.  This will bring up a screen where you can specify your DNS server(s) and change the hostname for your system as well as control other global network settings like the ARP filtering policy.
== Physical Disk Management ==

=== Identifying physical disks in an enclosure ===

When you right-click on a physical disk you can choose 'Identify' to force the lights on the disk to blink in a pattern, which it accomplishes by reading sector 0 on the drive.  This is very helpful when trying to identify which disk is which within the chassis.  Note that this technique doesn't work for logical drives exposed by your RAID controller(s), so there is a separate 'Identify' option for the hardware disks attached to your RAID controller which you'll find in the 'Hardware Controllers & Enclosures' section.
=== Scanning for physical disks ===

When new disks have been added to the system you can scan for them using the 'Scan for Disks' command.  To access this command from the QuantaStor Manager web interface simply right-click where it says 'Physical Disks' and then choose 'Scan for Disks'.  Disks are typically named sdb, sdc, sdd, sde, sdf and so on.  The 'sd' part just indicates SCSI disk and the letter uniquely identifies the disk within the system.  If you've added a new disk or created a new hardware RAID unit you'll typically see the new disk arrive and show up automatically, but the rescan operation explicitly re-executes the disk discovery process.
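
If you want to double-check from the console that the operating system can see a newly added disk, standard Linux tools work as expected; for example (a general Linux command, not QuantaStor-specific):

<pre>
# List block devices with their sizes and models; a newly discovered disk
# typically shows up as the next free device letter (e.g. sdf)
lsblk -o NAME,SIZE,TYPE,MODEL
</pre>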
  
=== Importing disks from an Open-ZFS pool configuration ===

A script included with QuantaStor called qs-zconvert may assist you with importing a storage pool from other Open-ZFS based solutions.  QuantaStor has a special naming convention for the ''storage pools'' (ZPOOLs) and ''storage volumes'' (ZVOLs) which the conversion script will take care of.  Note that this command must be run from the console/ssh while logged in as root ('sudo -i').

<pre>
qs-zconvert is a helper utility for importing ZFS pools and converting them to QuantaStor naming format.
This makes it so that foreign zpools can be managed as QuantaStor Storage Pools while retaining all
their original data.
WARNING: Please backup your data before converting. This tool has been shown to be reliable but not every
         combination of ZFS to ZFSonLinux has been tested or is guaranteed to work.

Usage:

    qs-zconvert list                   : Displays a list of all the pools available for importing
    qs-zconvert listall                : Displays a detailed list of all the pools available for importing
    qs-zconvert import POOLNAME        : Imports and converts the zpool and associated zvols into QuantaStor naming format
    qs-zconvert convertvols POOLNAME   : Converts just the ZVOLs to QuantaStor UUID format naming conventions
    qs-zconvert importhg HGFILE        : Creates host groups and host entries using the 'stmfadm list-hg -v' output in HGFILE
    qs-zconvert importlumap LUFILE     : Assigns logical units to host groups.
</pre>
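
A typical import session, using only the subcommands listed above, might look like the following (the pool name 'tank' is just an example; run these as root per the note above):

<pre>
# List foreign pools that are visible and available for import
qs-zconvert list

# Import the pool named 'tank' and convert it to the QuantaStor naming format
qs-zconvert import tank
</pre>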

Note also that the particular features and version information for your OpenZFS system can be found by running the 'zpool upgrade -v' command.  When run on QuantaStor you will see the feature list shown below.  If your OpenZFS based system has more features and you're using them with your zpool then you may have import problems and the qs-zconvert script may not work for you.

<pre>
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.

The following legacy versions are also supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
</pre>
== Hardware Controller & Enclosure Integration ==

QuantaStor has custom integration modules ('plug-ins') for a number of major RAID controller cards which monitor the health and status of your hardware RAID units, disks, enclosures, and controllers.  When a disk failure occurs within a hardware RAID group, QuantaStor detects this and sends you an email through the QuantaStor alert management system.  Note that QuantaStor also has software RAID support for RAID levels 1, 5, 6 & 10 so you do not need a hardware RAID card, but hardware RAID can boost performance and offer you additional RAID configuration options.  Also, you can use any RAID controller that works with Ubuntu Server, but QuantaStor will only detect alerts and discover the configuration details of those controllers for which there is a QuantaStor hardware controller plug-in.

Note that the plug-in discovery logic is triggered every couple of minutes so in some cases you will find that there is a small delay before the information in the web interface is updated.

QuantaStor has broad support for integrated hardware management including the following controllers:

* LSI MegaRAID & Nytro MegaRAID (all models)
* Adaptec 5xxx/6xxx/7xxx/8xxx (all models)
* IBM ServeRAID (LSI derivative)
* DELL PERC H7xx/H8xx (LSI derivative)
* Intel RAID/SSD RAID (LSI derivative)
* HP SmartArray P4xx/P8xx
* LSI 3ware 9xxx
* LSI HBAs
* Fusion IO PCIe
=== Adaptec RAID integration ===

Adaptec controllers are automatically detected and can be managed via the QuantaStor web management interface.
=== Fusion IO integration ===

The Fusion IO integration requires that the fio-util and iomemory-vsl packages are installed.  Once installed, the Fusion IO control and logical devices will automatically show up in the Hardware Enclosures & Controllers view within QuantaStor Manager.
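
Once those packages are installed, you can typically confirm the card is visible from the console with the fio-status utility that ships with fio-util (shown as a general example; output varies by card and driver version):

<pre>
# Show the status of all Fusion IO devices detected on the system
sudo fio-status -a
</pre>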
  
=== LSI 3ware integration ===

3ware controllers are automatically discovered and can be managed via the QuantaStor web management interface.

Note that if you arbitrarily remove a disk that was being utilized by a 3ware RAID unit, there are additional steps required before you can re-add it to the appliance.  3ware writes configuration data on the disk in what's called a Disk Control Block (DCB) and this needs to be scrubbed before you can use the disk again as a hot spare or within another unit.  There is a great article written [http://www.finnie.org/2010/06/07/howto-delete-a-3ware-dcb/ here] on how to scrub the DCB on a disk so that you can use it again with your LSI 3ware controller.  Formatting the disk in another system will also suffice.  You can then add it back into the old system and designate it as a spare, and if you have a unit that is degraded it will automatically adopt the spare and begin rebuilding the unit back to a fully fault-tolerant status.  Of course if you pulled the disk because it was faulty you'll want to RMA it to the manufacturer for a warranty replacement.
=== LSI MegaRAID / DELL PERC integration ===

LSI MegaRAID, DELL PERC, IBM ServeRAID and Intel RAID controllers are fully supported by QuantaStor and can be managed via the web management interface.  Note also that QuantaStor includes a command line utility called qs-util which assists with some MegaRAID maintenance operations.  These include:

<pre>
    qs-util megawb                   : Set Write-Back cache mode on all LSI MR units w/ BBU
    qs-util megaforcewb              : (W!) Force Write-Back cache mode on all LSI MR units
    qs-util megaclearforeign         : (W!) Clear the foreign config info from all LSI controllers
    qs-util megaccsetup              : Setup MegaRAID consistency check and patrol read settings
    qs-util megalsiget               : Generates a LSIget log report and sends it to support@osnexus.com
</pre>
==== Common Configuration Settings ====
===== Disable Copyback =====

The MegaRAID controller will auto-heal a RAID unit using an available hot-spare in case of a drive failure.  When the bad drive is pulled and a new drive is inserted and marked as hot-spare, the location of your hot-spare drive will have changed.  In fact it will change every time a bad drive is replaced.  Generally speaking there is no impact to performance by having your hot-spare in a new location each time but over time it leads to a less organized chassis.  As such there is a 'Copy Back' feature which copies the data from the hot-spare back to the original location after a new hot-spare has been inserted where the failed disk was located.  Copy back does add time to the rebuild process so some prefer to disable it and just deal with the less organized drive placement in the chassis.  To disable copy back on all controllers run this command at the QuantaStor console or via ssh as root:

<code>MegaCli -AdpSetProp -copybackdsbl -1 -aall</code>

To enable the CopyBack feature on all controllers run this command:

<code>MegaCli -AdpSetProp -copybackdsbl -0 -aall</code>
===== Increasing the RAID unit Rebuild Rate =====

The default rebuild rate is 30% which can lead to some long rebuilds depending on the size of your RAID unit and the amount of load on it.  To increase the rate you can issue the following command at the console or via ssh to increase it to 75% or higher:

<code>MegaCli -AdpSetProp RebuildRate 75 -aall</code>
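
To confirm the current setting before or after changing it, MegaCli can also read the property back (a general MegaCli usage sketch; adapter numbering depends on your system):

<pre>
# Display the current rebuild rate on all adapters
MegaCli -AdpGetProp RebuildRate -aALL
</pre>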
===== Disabling the Alarm =====

If your server is in a datacenter then the alarm is not going to help much in identifying the problematic controller card and will only serve to cause grief.  As such, you might want to disable the alarms:

<code>MegaCli -AdpSetProp AlarmDsbl -aall</code>

To just silence the current alarm run this:

<code>MegaCli -AdpSetProp AlarmSilence -aall</code>

The two most common causes for an alarm are that a disk needs to be replaced or the battery backup unit is not functioning properly.  You can also silence all alarms using the web management interface.
===== Auto Import Foreign RAID units =====

The MegaRAID controllers can be a little troublesome if you're moving disks and/or disk chassis around as the disk drives will appear as 'foreign' to the controller when you move them.  Most of the time you'll just want to import these foreign units automatically so that you don't have to press space-bar at boot time to continue the boot process.  To avoid this, set the policy on the controllers to automatically import foreign units with this command:

<code>sudo MegaCli -AdpSetProp AutoEnhancedImportEnbl -aALL</code>

Here's an example of what that looks like:

<pre>
qadmin@qs-testing:~$ sudo MegaCli -AdpSetProp AutoEnhancedImportEnbl -aALL
[sudo] password for qadmin:

Adapter 0: Set Auto Enhanced Import to Enable success.
Adapter 1: Set Auto Enhanced Import to Enable success.

Exit Code: 0x00
</pre>
==== Installing the MegaRAID CLI on older QuantaStor v2 systems ====

QuantaStor v3 and newer systems work with LSI MegaRAID controllers with no additional software to be installed.  For older v2 systems, first login to your QuantaStor system at the console.  You'll need to make sure that your system is network connected with internet access as it will be downloading some necessary files and packages.  Next, run the following two commands to install:

<pre>
cd /opt/osnexus/quantastor/raid-tools
sudo lsimegaraid-install.sh
</pre>

It will take a couple of minutes for the QuantaStor service to detect that the MegaRAID CLI is now installed, but then you'll see the hardware configuration show up automatically in the web interface.  Note also that this script will have upgraded the megaraid_sas driver included with QuantaStor.  As such you must restart the system using the "Restart Storage System" option in the QuantaStor web management interface.

Last, new firmware is required for 3TB and larger drives, so if you have an older 9260 or 9280 controller be sure to download and apply the latest firmware.  Here's an example of how to upgrade MegaRAID firmware using MegaCli:

<pre>
MegaCli -AdpFwFlash -f FW1046E.rom -a0

Adapter 0: PERC H800 Adapter
Vendor ID: 0x1000, Device ID: 0x0079

FW version on the controller: 2.0.03-0772
FW version of the image file: 2.100.03-1046
Download Completed.
Flashing image to adapter...
Adapter 0: Flash Completed.

Exit Code: 0x00
</pre>
=== HP SmartArray RAID integration ===

HP SmartArray controllers are supported out-of-the-box with no additional software to be installed.  You can manage your HP RAID controller via the QuantaStor web management interface where you can create RAID units, mark hot-spares, replace drives, etc.
== Managing Storage Pools ==

Storage pools combine or aggregate one or more physical disks (SATA, SAS, or SSD) into a single pool of storage from which storage volumes (iSCSI/FC targets) and network shares (CIFS/NFS) can be created.  Storage pools can be created using any of the following software RAID types: RAID0, RAID1, RAID5, RAID6, RAID10, RAID50, or RAID60.  The optimal RAID type for your workload depends on the I/O access patterns of your target application, the number of disks you have, and the amount of fault-tolerance you require.  As a general guideline we recommend using RAID10 for all virtualization workloads and databases, and RAID6 for applications that require high-performance sequential IO.  RAID10 performs very well with sequential IO and random IO patterns but is more expensive since you get 50% usable space from the raw storage due to mirroring.  For archival storage or other similar workloads RAID6 is best and provides higher utilization with only two drives used for parity/fault tolerance.  RAID5 is not recommended for any deployments because it is not fault tolerant after a single disk failure.  If you decide to use RAID6 with virtualization or other workloads that can produce a fair amount of random IO, we strongly recommend that you use a RAID controller with at least 1GB of RAM and a super-capacitor so that you can safely enable the write-cache.  RAID6 and other parity RAID mechanisms generally do not perform well when you have many workloads (virtual machines) using the storage due to the serialization of I/O that happens because of parity calculations and updates.
=== SSD Caching ===

ZFS based storage pools support the addition of SSD devices for use as read or write cache.  SSD cache devices must be dedicated to a specific storage pool and cannot be shared across multiple storage pools.  Some hardware RAID controllers support SSD caching, but in our testing we've found that ZFS is more effective at managing its layers of cache than the RAID controllers, so we do not recommend using SSD caching at the hardware RAID controller unless you're creating an older style XFS storage pool which does not have native SSD caching features.
==== SSD Write Cache Configuration (ZIL) ====

The write cache is actually a log device (ZIL) for the filesystem where writes can be stored temporarily, coalesced, and written to the storage pool more efficiently.  Because it is storing writes that have not yet been persisted to the storage pool, the write cache must be mirrored so that there is no data loss in the event that an SSD drive fails.  To enable write caching you should always allocate 2x SSD devices, typically 200GB in size.  Note that the write cache / log device only holds writes for at most several seconds before flushing the data into the storage pool.  Since the data is held for a relatively short time, the SSD write cache never uses more than about 10GB of storage within the SSD cache layer.  As such, large SSD devices are not needed for the SSD write cache, but we do recommend using 200GB or larger enterprise grade SSD devices because they are more effective at wear leveling the heavy write IO load placed on the write caching tier.  The SSD write cache tier is limited to two (2) devices per storage pool as this is a limit of ZFS.  In special cases you may consider using 4x or 6x SSD cache devices, but to do so requires using hardware RAID10 so that only a single logical device is added to the ZFS storage pool, thereby working around the 2x ZIL log device limitation.

* Summary: SSD drives for write-cache should always be added to storage pools for write intensive applications like virtualization and databases.  A pair of 200GB enterprise grade SSD devices is suitable for a broad set of applications and workloads.  Larger capacities (>200GB) will not yield benefits unless they also deliver higher IOPS and higher throughput performance.
==== SSD Read Cache Configuration (L2ARC) ====

You can add up to 4x devices for SSD read-cache (L2ARC) to any ZFS based storage pool and these devices do not need to be fault tolerant.  You can add the devices directly to the storage pool by selecting 'Add Cache Devices..' after right-clicking on any storage pool.  You can also opt to create a RAID0 logical device using the RAID controller out of multiple SSD devices and then add this device to the storage pool as SSD cache.  The size of the SSD cache should be roughly the size of the working set for your application, database, or VMs.  For most applications a pair of 400GB SSD drives will be sufficient, but for larger configurations you may want to use upwards of 2TB or more of SSD read cache.  Note that the SSD read-cache doesn't provide an immediate performance boost because it takes time for it to learn which blocks of data should be cached to provide better read performance.
==== RAM Read Cache Configuration (ARC) ====

ZFS based storage pools use what is called the "ARC" as an in-memory read cache rather than the Linux filesystem buffer cache to boost disk read performance.  Having a good amount of RAM in your system is critical to delivering solid performance as it is very common with disk systems for blocks to be read multiple times.  When they are cached in RAM it reduces the load on the disks and greatly boosts performance.  As such it is recommended to have 32-64GB of RAM for small systems, 96-128GB of RAM for medium sized systems, and for large appliances you'll want to have upwards of 256GB or more of RAM.  To see the stats on cache hits for both the read and write cache layers you'll need to use the command line and run 'sudo qs-iostat -af' which will print an updated status report on cache utilization every couple of seconds.
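
If you prefer to look at the raw counters that ZFS on Linux exposes, the ARC statistics are also available under /proc (a general ZFS-on-Linux mechanism rather than a QuantaStor command):

<pre>
# Dump raw ARC statistics (hits, misses, current size, target size, etc.)
cat /proc/spl/kstat/zfs/arcstats
</pre>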
=== RAID Levels ===

RAID1 & RAID5 allow you to have one disk fail without it interrupting disk IO.  When a disk fails you can remove it, and you should add a spare disk to the 'degraded' storage pool as soon as possible in order to restore it to a fault-tolerant status.  You can also assign spare disks to storage pools ahead of time so that the recovery happens automatically.  RAID6 allows for up to two disks to fail and will keep running, whereas RAID10 can allow for one disk failure per mirror pair.  Finally, RAID0 is not fault tolerant at all, but it is your only choice if you have only one disk and it can be useful in some scenarios where fault-tolerance is not required.  Here's a breakdown of the various RAID types and their pros & cons.

* '''RAID0''' layout is also called 'striping' and it writes data across all the disk drives in the storage pool in a round-robin fashion.  This has the effect of greatly boosting performance.  The drawback of RAID0 is that it is not fault tolerant, meaning that if a single disk in the storage pool fails then all of your data in the storage pool is lost.  As such RAID0 is not recommended except in special cases where the potential for data loss is a non-issue.
* '''RAID1''' is also called 'mirroring' because it achieves fault tolerance by writing the same data to two disk drives so that you always have two copies of the data.  If one drive fails, the other has a complete copy and the storage pool continues to run.  RAID1 and its variant RAID10 are ideal for databases and other applications which do a lot of small write I/O operations.
* '''RAID5''' achieves fault tolerance via what's called a parity calculation where one of the drives contains an XOR calculation of the bits on the other drives.  For example, if you have 4 disk drives and you create a RAID5 storage pool, 3 of the disks will store data and the last disk will contain parity information.  This parity information on the 4th drive can be used to recover from any data disk failure.  In the event that the parity drive fails, it can be replaced and reconstructed using the data disks.  RAID5 (and RAID6) are especially well suited for audio/video streaming, archival, and other applications which do heavy sequential write I/O operations (such as reading/writing large files) and are not as well suited for database applications which do heavy amounts of small random write I/O operations or for large file-systems containing lots of small files with a heavy write load.
* '''RAID6''' improves upon RAID5 in that it can handle two drive failures, but it requires that you have two disk drives dedicated to parity information.  For example, if you have a RAID6 storage pool comprised of 5 disks then 3 disks will contain data and 2 disks will contain parity information.  In this example, if the disks are all 1TB disks then you will have 3TB of usable disk space for the creation of volumes.  So there's some sacrifice of usable storage space to gain the additional fault tolerance.  If you have the disks, we always recommend using RAID6 over RAID5.  This is because all hard drives eventually fail, and when one fails in a RAID5 storage pool your data is left vulnerable until a spare disk is utilized to recover your storage pool back to a fault tolerant status.  With RAID6 your storage pool is still fault tolerant after the first drive failure.  (Note: Fault-tolerant storage pools (RAID1,5,6,10) that have suffered a single disk drive failure are called '''degraded''' because they're still operational but they require a spare disk to recover back to a fully fault-tolerant status.)
* '''RAID10''' is similar to RAID1 in that it utilizes mirroring, but RAID10 also does striping over the mirrors.  This gives you the fault tolerance of RAID1 combined with the striping performance of RAID0.  The drawback is that half the disks are used for fault-tolerance, so if you have 8x 1TB disks utilized to make a RAID10 storage pool, you will have 4TB of usable space for the creation of volumes.  RAID10 performs very well with both small random IO operations as well as sequential operations, and it is highly fault tolerant as multiple disks can fail as long as they're not from the same mirror-pairing.  If you have the disks and you have a mission critical application we '''highly''' recommend that you choose the RAID10 layout for your storage pool.
* '''RAID60''' combines the benefits of RAID6 with some of the benefits of RAID10.  It is a good compromise when you need better IOPS performance than RAID6 will deliver and more usable storage than RAID10 delivers (50% of raw).

In some cases it can be useful to create more than one storage pool so that you have low cost fault-tolerant storage available in RAID6 for archive and higher IOPS storage in RAID10 for virtual machines, databases, MS Exchange, or similar workloads.

Once you have created a storage pool it will take some time to 'rebuild'.  Once the 'rebuild' process has reached 1% you will see the storage pool appear in QuantaStor Manager and you can begin to create new storage volumes.
<blockquote>
WARNING:  Although you can begin using the pool at 1% rebuild completion, your storage pool is not fault-tolerant until the rebuild process has completed.
</blockquote>
== Target Port Configuration ==

Target ports are simply the network ports (NICs) through which your client hosts (initiators) access your storage volumes (aka targets).  The terms 'target' and 'initiator' are SCSI terms that are synonymous with 'server' and 'client' respectively.  QuantaStor supports both statically assigned IP addresses as well as dynamically assigned (DHCP) addresses.  If you selected automatic network configuration when you initially installed QuantaStor then you'll have one port setup with DHCP and the others are likely offline.

We recommend that you always use static IP addresses unless you have your DHCP server setup to specifically assign an IP address to your NICs as identified by MAC address.  If you don't set the target ports up with static IP addresses you risk the IP address changing and losing access to your storage when the dynamically assigned address expires.

To modify the configuration of a target port, first select the tree section named "Storage System" under the "Storage Management" tab on the left-hand side of the screen.  After that, select the "Target Ports" tab in the center of the screen to see the list of target ports that were discovered.  To modify the configuration of one of the ports, simply right-click on it and choose "Modify Target Port" from the pop-up menu.  Alternatively you can press the "Modify" button in the tool bar at the top of the screen in the "Target Ports" section.

Once the "Modify Target Port" dialog appears you can select the target port type for the selected port (static), and enter the IP address, subnet mask, and gateway for the port.  You can also set the MTU to 9000 for jumbo packet support, but we recommend that you get your network configuration up and running with standard 1500 byte frames first, as jumbo packet support requires that you configure your host side NICs and network switch with 9K frames as well.
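
If you do move to jumbo frames end to end, a quick way to verify the path from a Linux client is to send a large ping that is not allowed to fragment (standard Linux ping options; the IP address is an example):

<pre>
# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000 byte frame;
# -M do forbids fragmentation, so the ping only succeeds if jumbo frames work end to end
ping -M do -s 8972 192.168.10.50
</pre>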
=== NIC Bonding / Trunking ===

QuantaStor supports NIC bonding, also called trunking, which allows you to combine multiple NICs together to improve performance and reliability.  If you combine two or more ports together into a virtual port you'll need to make sure that all the bonded ports are connected to the same network switch; there are very few exceptions to this rule.  For example, if you have two networks and 4 ports (p1, p2, p3, p4) you'll want to create two separate virtual ports, each bonding two NIC ports together (p1, p2 / p3, p4), with each pair connected to a separate network (p1, p2 -> network A / p3, p4 -> network B).  This type of configuration is highly recommended as you have both improved bandwidth and no single point of failure in the network or in the storage system.  Of course you'll need your host to have at least 2 NIC ports and they'll each need to connect to the separate networks.  For very simple configurations you can just connect everything to one switch but again, the more redundancy you can work into your SAN the better.

By default, QuantaStor uses Linux bonding mode-0, a round-robin policy.  This mode provides load balancing and fault tolerance by transmitting packets in sequential order from the first available interface through the last.  QuantaStor also supports LACP 802.3ad dynamic link aggregation.  Use the 'Modify Storage System' dialog in the web management interface to change the default bonding mode for your appliance.

* [[Changing Network Bonding Mode | Enable LACP Port Bonding]]
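
Once a bond has been created, its health can be checked from the console through the standard Linux bonding interface (the bond device name below is an example; yours may differ):

<pre>
# Show the bonding mode, active slaves, and per-slave link status
cat /proc/net/bonding/bond0
</pre>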
=== 10GbE NIC support ===

QuantaStor works with all the major 10GbE cards from Chelsio, Intel and others.  We recommend the Intel 10GbE cards, and you can use NIC bonding in conjunction with 10GbE to further increase bandwidth.  If you are using 10GbE we recommend that you designate your slower 1GbE ports as iSCSI disabled so that they are only used for management traffic.
== Pool Remote-Replication Configuration ==

Pool remote-replication is only supported with XFS based storage pools.  With ZFS based storage pools remote-replication is handled at the storage volume and network share level so you can replicate just the data that you want and not have to replicate the entire pool.  The other advantage is that ZFS based storage pools replicate using smart replication where only the changes are sent and the data is compressed.

In both cases (ZFS and XFS replication), you must start by creating a grid.  To create the grid you'll need to right-click on the storage system, choose 'Create Grid..' and then give it a name.  Once you have a grid you can use 'Add Grid Node..' to add another storage appliance to the grid.  Once added you will have a 2 node grid and you can now setup a remote replication policy for volumes & shares if you're using a ZFS based pool.  Or, if you're using an XFS based storage pool, you can setup a pool level replication link.
=== Setting up a DRBD based Storage Pool replication link (XFS based pools only) ===

Next you'll need to create one storage pool on each system, with the storage pool on the secondary being at least as large as the source storage pool on the primary.  If the primary already exists, great; in this case you'll just need to create an empty storage pool on the target/remote secondary.  Once you have that created, right-click on the primary pool and choose 'Create Pool Replication Link..'.  It will bring up a dialog where you can choose the source/primary pool, and the designated target/secondary pool at the bottom.  Note also that you'll need to select the IP address through which the network traffic will flow between the primary and the secondary.  Once you have that selected, be sure to review that the pool, IP, and storage system selection is correct for the primary and the secondary/target and press OK.

You've now setup replication, though you'll need to give the system about 2 minutes to get everything properly setup.  In the end you'll see that the primary storage pool will have a new object underneath it in the tree view that says 'Primary/Secondary' and the target storage pool will have a new object that says 'Secondary/Primary'.

The first part indicates the role of the local pool, and the second part indicates the role of the remote storage pool.

When you select this object, which is also called the 'Pool Replication Link' or 'Pool Replication Configuration', you'll also see the progress of the replication activity in the properties page on the right.  Once the replication has reached 100% you'll be able to fail-over to your secondary / DR site.  Note also that the initial replication is a one time process but it can take up to a couple of days for larger 16TB storage pools.

Note also that it is important to have at least 10MB/sec of bandwidth minimum between your primary system and your secondary system or else you'll see a big drop in performance under write load.  Better is to have 50MB/sec or more of bandwidth between sites.  You can test that by doing a simple FTP of a large file or using a performance test tool to check how much bandwidth you have on your network between the two systems; one example is shown after the checklist below.

* Summary review of the pool replication setup process:
** Create a grid by right-clicking on the storage system.  (n.b. Grid support is not available in the Community Edition)
** Add the target storage system to the grid as a new node by right-clicking on the grid in the tree and choosing 'Add Grid Node..'
** Now that you have a 2 node grid, you can start replicating storage from the primary pool on node 'A' to the target secondary pool on node 'B' in your DR site.
** You must have two pools that are the same size in order to create a link.
** You must also make sure that the secondary/target storage pool has no volumes/shares in it.
** You must make sure that there is at least 10-20 MB/sec of bandwidth between the source and target storage system, ideally 50MB/sec or more.
** If the storage pool on your target storage system node has not been created you'll need to do that now.
** Next, right-click on the primary storage pool on node 'A' and choose 'Create Pool Replication Link..'.  Once you create the link QuantaStor will do the rest.

After the link is created you must wait, potentially several hours or days depending on the size of the storage pool and the speed of your link.  For a storage pool that is 8TB it will take about 28 hours to replicate to the secondary storage pool at your target node 'B' over an 80MB/sec link.  For 16TB, it will take a little over 2 days.  Note that this is a one time hit and you will not need to resync after this, because a map is maintained of all writes to both storage pools so that re-sync can be done quickly and efficiently even if the two storage pools have been disconnected for weeks.
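
For example, one common way to measure the available bandwidth between the two sites is with iperf (iperf may need to be installed separately; the hostname is a placeholder):

<pre>
# On the DR/target system, start an iperf server
iperf -s

# On the primary system, run a 30 second bandwidth test against the DR system
iperf -c dr-site.example.com -t 30
</pre>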
=== Activating DR Fail-over ===

When the initial replication has completed you'll be able to 'Promote Pool to Replication Primary' at any time.  To do this, just right-click on the Pool Replication Link object on your secondary storage pool, then choose 'Promote Pool to Replication Primary'.

[[File:promote_pool.png|Promote Pool]]

After that you'll be presented with a dialog that looks like this:

[[File:promote_pool_dialog.png|Promote Pool Dialog]]

The pool you see selected will be promoted to Primary status and in doing so all of the volumes and shares in that pool will become available.  Note that when a remote storage pool is activated, QuantaStor will automatically rename the device IQNs and IDs to make them unique so that they don't collide with the device IDs of the original primary.  This means that you'll need to setup your host (Windows, VMware, XenServer, etc.) to login to these new iSCSI IQNs and not the old IQNs used at the primary site.

Note also that activation of the remote DR site's storage pool doesn't in any way affect the status of the source/original primary.  In fact it is a common and necessary scenario to activate the failover site to verify that it works (failover testing) and then just demote the pool when you're done testing; any changed blocks will be resent from the primary pool to bring it back up to date.  In such a testing scenario the primary site never goes offline and the workloads continue to work from the primary site, but you have a complete copy that's active on the secondary site for testing or possibly even off-host backup.

In this state you will see both storage pools in the 'Primary/Unknown' state, which means that they're both primary and neither knows the current state of its remote storage pool counterpart, but the driver is keeping track of which blocks have changed for easy/quick resync later.
=== Deactivating the DR Fail-over pool (Test Scenario) ===

If you have two sites, NY as primary and Seattle as secondary, you can promote the Seattle storage pool to Primary status while NY is still active as a primary.  This is how you can do failover testing without interrupting any of the workloads that are actively using the storage pool in NY.

* Summary of this procedure:
** Promote the DR site to 'Primary' status and it will change from 'Secondary/Primary' to 'Primary/Unknown' as outlined in the prior section.
** With the storage pool active you can now boot/attach workloads to the iSCSI volumes and network shares.  The iSCSI volumes will have a suffix of _dr000 to indicate they are DR site replica copies.
** Now you can test that the workloads/VMs successfully start and are working, but you do not want to change any of the global DNS entries for your workloads as that would be a complete failover and NY would become out-of-date / stale.
** Once DR site testing is complete in Seattle you simply 'Demote Pool to Replication Secondary' on the storage pool link in Seattle.

Demoting the storage pool in Seattle will cause it to be overwritten with blocks from NY to bring it back up to date.  The replication driver (DRBD) keeps track of all changes made to the Seattle storage pool and all changes made to the NY storage pool while they were disconnected.  When the secondary is demoted, this information about which blocks have changed enables it to quickly and efficiently complete the resynchronization in seconds or minutes instead of hours.  Note also that once the Seattle site is demoted it will be in the 'Secondary/Primary' state again.  You'll also see it synchronizing for a short time as Seattle is brought back to the 'Up To Date' state.  If it says 'Inconsistent' then it is either disconnected or still synchronizing.
=== Reversing the flow / DR Fail-back (Live Scenario) ===

Let's take as an example that you have two sites; New York is your primary and Seattle is your secondary / DR site.  You had a power outage in New York and activated the DR site in Seattle, then started up all the VMs/workloads and they ran out of the Seattle site for, say, one week.

At this point Seattle now has the most current copy of your data and New York can be overwritten as it is stale.  Here's the process to recover your data back to NY and reactivate it as the primary site:

* First you'll need to demote the NY storage pool to 'Secondary' status.  This will start the flow of data from Seattle to New York.  Before you demote NY both pools will have a link in 'Primary/Unknown' status.  After you demote NY the pool in NY will show 'Secondary/Primary' and the pool in Seattle will show 'Primary/Secondary'.
* Second, you need to wait.  Look at the link object on either side and it will show you the progress of the replication to bring the changes over from Seattle to New York.  Once both sides say 'Up To Date' in their status then you're ready to activate New York.
* Now that the pool in New York is an exact copy of the pool in Seattle, which is and has been the primary site for the last week, you can orchestrate a fail-back to New York.
* Fail-back will require some amount of transition time because once you break the link between Seattle and NY you'll need to redirect your DNS entries for your apps/web servers back to NY and that can take some time.  You'll also need to boot the VMs in NY, and if the Seattle site is still online during this time then any transactions written to Seattle will be lost.  So the best thing to do is to schedule some downtime and then move the global IP addresses for your VMs over first.  Once they switch over, your Seattle site is offline and won't record any more transactions to your databases/app servers, and you should probably suspend or stop the VMs in Seattle.  Now is the time to promote the storage pool in NY to 'Primary' status, which will just take a few seconds.
* At this point both storage pools are back in 'Primary/Unknown' status and both have exactly the same data.  Now that the NY pool is active and the DNS work is done for your workloads, you can boot the VMs in NY on the original primary storage pool which is now active.
* If all the VMs/workloads are started and all has gone well then you've successfully completed the fail-back.
* We recommend that you wait an hour to make sure everything checks out and then demote the storage pool in Seattle to secondary status so that new changes to the pool in NY will replicate over to Seattle as your DR site again.
* If there was any problem with the fail-back to NY you can simply restart the VMs in Seattle and update the DNS entries accordingly.  This is why you want to leave Seattle in the 'Primary/Unknown' state until you are absolutely sure all of the workloads have come back online successfully in NY.  With Seattle left alone, the worst case scenario is to just reactivate Seattle and reschedule the fail-back to NY for another day.

Given the complexity of DR failover we highly recommend testing your DR fail-over site on a regular basis, and we recommend exercising the failover / failback process outlined above in a test environment to become more familiar with the process.  Trial Edition keys are available on our main web site and include the DR features, so setting up a couple of QuantaStor Virtual Storage Appliances is an easy way to become an expert without having to dedicate hardware.
== DR with Volume & Share Remote-Replication ==

Volume and share remote-replication within QuantaStor allows you to copy a volume or network share from one QuantaStor storage system to another and is a great tool for migrating volumes and network shares between systems and for using a remote system as a DR site.  Remote replication is done asynchronously, which means that changes to volumes and network shares on the original/source system are replicated to the remote system as often as every hour.

Once a given set of volumes and/or network shares have been replicated from one system to another, the subsequent periodic replication operations send only the changes, and all information sent over the network is compressed to minimize network bandwidth and encrypted for security.  ZFS based storage pools use the ZFS send/receive mechanism which efficiently sends just the changes so it works well over limited bandwidth networks.  Also, if your storage pool has compression enabled, the changes sent over the network are also compressed, which further reduces your WAN network load.

XFS based storage pools do not have advanced replication mechanisms like ZFS send/receive, so we employ more brute force techniques for replication.  Specifically, when you replicate an XFS based storage volume or network share QuantaStor uses the Linux rsync utility.  It does have compression and it will only send changes, but it doesn't work well with large files because the entire file must be scanned and in some cases resent over the network.  Because of this we highly recommend using ZFS based storage pools for all deployments unless you specifically need the high sequential IO performance of XFS for a specific application.
=== Creating a Storage System Link ===

The first step in setting up DR/remote-replication between two systems is to create a Storage System Link between the two.  This is accomplished through the QuantaStor Manager web interface by selecting the 'Remote Replication' tab and then pressing the 'Create Storage System Link' button in the tool bar to bring up the dialog.  To create a storage system link you must provide the IP address of the remote system and the admin username and password for that remote system.  You must also indicate the local IP address that the remote system will utilize for communication between the remote and local systems.  If both systems are on the same network then you can simply select one of the IP addresses from one of the local ports, but if the remote system is in the cloud or at a remote location then most likely you will need to specify the external IP address for your QuantaStor system.  Note that the two systems communicate over ports 22 and 5151, so you will need to open these ports in your firewall in order for the QuantaStor systems to link up properly.
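
A quick way to confirm that the required ports are reachable from one system to the other is a simple TCP connect test with netcat (the IP address below is a placeholder):

<pre>
# Verify that TCP ports 22 and 5151 on the remote system accept connections
nc -zv 203.0.113.25 22
nc -zv 203.0.113.25 5151
</pre>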
=== Creating a Remote Replica ===

Once you have a Storage System Link created between two systems you can replicate volumes and network shares in either direction.  Simply login to the system that you want to replicate volumes from, right-click on the volume to be replicated, then choose 'Create Remote Replica'.  Creating a remote replica is much like creating a local clone, only the data is being copied over to a storage pool in a remote storage system.  As such, when you create a remote replica you must specify which storage system you want to replicate to (only systems which have established and online storage system links will be displayed) and which storage pool within that system should be utilized to hold the remote replica.  If you have already replicated the specified volume to the remote storage system then you can re-sync the remote volume by selecting the remote-replica association in the web interface and choosing 'resync'.  This can also be done via the 'Create Remote Replica' dialog by choosing the option to replicate to an existing target if available.
== Alert Settings ==

QuantaStor allows you to thin-provision and over-provision storage, but that feature comes with the associated risk of running out of disk space.  As such, you will want to make sure that you configure and test your alert configuration settings in the Alert Manager.  The Alert Manager allows you to specify the thresholds at which you want to receive email regarding low disk space alerts for your storage pools.  It also lets you specify the SMTP settings for routing email.

[[File:qs_scrn_alert_manager.png|Alert Manager]]
== Managing Hosts ==

Hosts represent the client computers to which you assign storage volumes.  In SCSI terminology the host computers ''initiate'' the communication with your storage volumes (target devices) and so they are called initiators.  Each host entry can have one or more initiators associated with it; the reason for this is that an iSCSI initiator (host) can be identified by IP address or IQN or both at the same time.  We recommend using the IQN (iSCSI Qualified Name) at all times, as you can have login problems when you try to identify a host by IP address, especially when that host has multiple NICs and they're not all specified.
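
For reference, on a typical Linux client using open-iscsi the initiator's IQN can be read from the initiator name file (this is the open-iscsi default location, not something QuantaStor-specific):

<pre>
# Print the iSCSI Qualified Name (IQN) that this client presents to the storage system
cat /etc/iscsi/initiatorname.iscsi
</pre>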
=== Managing Host Groups ===

Sometimes you'll have multiple hosts that need to be assigned the same storage volume(s), such as with a VMware or XenServer resource pool.  In such cases we recommend making a Host Group object which includes all of the hosts in your cluster/resource pool.  With a host group you can assign the volume to the group once and save a lot of time.  Also, when you add another host to the host group it automatically gets access to all the volumes assigned to the group, so it makes it very easy to add nodes to your cluster and manage storage from a group perspective rather than as individual hosts, which can be cumbersome especially for larger clusters.
== Managing Snapshot Schedules ==

Snapshot schedules enable you to have your storage volumes automatically protected on a regular schedule by creating snapshots of them.  You can have more than one snapshot schedule, and each schedule can be associated with any storage volumes, even those utilized in other snapshot schedules.  In fact, this is something we recommend.  For storage volumes containing critical data you should create a snapshot schedule that makes a snapshot of your volumes at least once a day, and we recommend that you keep around 10-20 snapshots so that you have a week or two of snapshots that you can recover from.  A second schedule that creates a single snapshot on the weekend of your critical volumes is also recommended.  If you set that schedule to retain 10 snapshots that will give you over two months of historical snapshots from which you can recover data.
=== Near Continuous Data Protection (N-CDP) ===

What all this boils down to is a feature we in the storage industry refer to as continuous data protection or CDP.  True CDP solutions allow you to recover to any prior point in time at the granularity of seconds.  So if you wanted to see what a storage volume looked like at 5:14am on Saturday you could look at a 'point-in-time' view of that storage volume at that exact moment.  Storage systems that allow you to create a large number of snapshots, thereby giving you the ability to roll back or recover from a snapshot that was created perhaps every hour, are referred to as NCDP or "near continuous data protection" solutions, and that's exactly what QuantaStor is.  This NCDP capability is achieved through ''snapshot schedules'' which run at a maximum granularity of once per hour.  Using a snapshot schedule you can automatically protect your critical volumes and network shares so that you can recover data from previous points in time.
== Managing iSCSI Sessions ==

A list of active iSCSI sessions can be found by selecting the 'Storage Volume' tree-tab in QuantaStor Manager and then selecting the 'Sessions' tab in the center view.  Here's a screenshot of a list of active sessions as shown in QuantaStor Manager.

[[File:qs_session.png|640px|Session List]]

=== Dropping Sessions ===

To drop an iSCSI session, just right-click on it and choose 'Drop Session' from the menu.

[[File:qs_session_drop.png|640px|Drop Session Dialog]]

Keep in mind that some initiators will automatically re-establish a new iSCSI session if one is dropped by the storage system.  To prevent this, just unassign the storage volume from the host so that the host cannot re-login.
== Managing Network Shares ==

QuantaStor ''Network Shares'' provide NAS access to your storage via the NFSv3, NFSv4, and CIFS protocols.  Note that you must have first created a ''Storage Pool'' before you create ''Network Shares'' as they are created within a specific ''Storage Pool''.  ''Storage Pools'' can be used to provision NAS storage (''Network Shares'') and SAN storage (''Storage Volumes'') at the same time.

=== Creating Network Shares ===

To create a ''network share'' simply right-click on a Storage Pool and select 'Create Network Share...', or select the '''Network Shares''' section and then choose '''Create Network Share''' from the toolbar or right-click for the pop-up menu.  Network Shares can be concurrently accessed via both the NFS and CIFS protocols.

[[File:qs_create_network_share.png]]

After providing a name and optional description for the share, and selecting the ''storage pool'' in which the ''network share'' will be created, there are a few other options you can set including protocol access types and a share level quota.
==== Enable Quota ====
+
 
+
If you have created a ZFS based storage pool then you can set specific quotas on each ''network share''.  By default there are no quotas assigned and ''network shares'' with no quotas are allowed to use any free space that's available in the ''storage pool'' in which they reside.
+
 
+
==== Enable CIFS/SMB Access ====
+
 
+
Select this check-box to enable CIFS access to the ''network share''.  When you first select to enable CIFS access the default is to make the share public with read/write access.  To adjust this so that you can assign access to specific users or to turn on special features you can adjust the CIFS settings further by pressing the '''CIFS/SMB Advanced Settings''' button.
+
 
+
==== Enable Public NFS Access ====
+
 
+
By default public NFS access is enabled, you can un-check this option to turn off NFS access to this share. Later you can add NFS access rules by right-clicking on the share and choosing 'Add NFS Client Access..'.
+
 
+
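As a quick sanity check after creating a share, you can mount it from a Linux client using standard tools.  The hostname, export path, share name, and credentials below are placeholders for illustration only.

<pre>
# NFS (requires the nfs-common package on Ubuntu/Debian clients)
sudo mount -t nfs quantastor-host:/export/share1 /mnt/share1

# CIFS/SMB (requires the cifs-utils package)
sudo mount -t cifs //quantastor-host/share1 /mnt/share1 -o username=shareuser,password=sharepass
</pre>
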
=== Modify Network Shares ===

After a ''network share'' has been created you can modify it via the '''Network Share Modify''' dialog.

[[File:qs_mod_gen_network_share.png]]

==== Compression ====

''Network Shares'' and ''Storage Volumes'' inherit the compression mode and type set for the ''storage pool''.  You can also customize the compression level for each given ''network share''.  For network shares that contain heavily compressible files you might increase the compression level to gzip (gzip6), but note that higher compression levels use more CPU.  For network shares that contain data which is already compressed, you may opt to turn compression 'off'.
Note, this feature is specific to ZFS based Storage Pools.

==== Sync Policy ====

The Sync Policy indicates how writes to the network share are handled.  Standard mode is the default; it uses a combination of synchronous and asynchronous writes to ensure consistency while optimizing for performance.  If a write request is tagged as "SYNC_IO" then the IO is first sent to the filesystem intent log (ZIL) and then staged out to disk; otherwise the data is written directly to disk without first staging to the intent log.  In "Always" mode the data is always sent to the filesystem intent log first, which is a bit slower but technically safer.  If you have a write intensive workload it is a good idea to assign a pair of SSD drives to the ''storage pool'' for use as write cache so that writes to the log and overall IOPS performance are accelerated.  Note, this feature is specific to ZFS based ''Storage Pools'' and the policy for each ''network share'' is by default inherited from the ''storage pool''.

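For readers familiar with ZFS, these modes correspond conceptually to the values of the ZFS 'sync' dataset property.  On a plain ZFS system the equivalent settings would be applied as shown below; the pool/filesystem name is a placeholder, and on QuantaStor you should use the Sync Policy dialog rather than setting the property by hand.

<pre>
zfs set sync=standard tank/share1   # default: honor SYNC_IO requests via the ZIL
zfs set sync=always tank/share1     # always commit writes to the intent log first
</pre>
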
=== NFS Configuration ===

==== Configuring NFS Services ====

[[File:nfsServicesConfig.png|300px]]

The default NFS mode is NFSv3, but this can be changed to NFSv4 from within the "NFS Services Configuration" dialog.  To open this dialog navigate to the "Network Shares" tab and select "Configure NFS" from the ribbon bar at the top, or right-click the open space under the "Network Share" section and choose "Configure NFS Services" from the context menu.

==== Controlling NFS Access ====

NFS share access is filtered by IP address.  To configure it, right-click on a network share and select "Add Host Access".  By default the share is set to public access.  This dialog allows you to grant access to a single IP address or to a range of IP addresses.

[[File:addShareClientAccess.png|300px]]

==== NFS Custom Options ====

You can also specify custom options from within the "Modify Network Share Client Access" dialog.  To open it, right-click on the share's host access entry (defaults to public) and select "Modify Host Access".  In this dialog you can set options such as "Read Only", "Insecure", etc.  You can also add custom options such as "no_root_squash" in the space provided below.

[[File:contextMenuNfs.png|250px]] [[File:modifyShareClientAccess.png|300px]]

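The options selected in this dialog correspond to standard Linux NFS export options.  For example, a host access rule granting read/write access to a lab subnet with the "Insecure" flag plus the custom "no_root_squash" option would be roughly equivalent to an exports entry along these lines (the path and subnet are illustrative only):

<pre>
/export/share1  192.168.10.0/24(rw,insecure,no_root_squash)
</pre>
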
=== CIFS Configuration ===
QuantaStor v3 uses Samba 3.6, which provides CIFS access to ''Network Shares'' via the SMB2 protocol.  There is also beta support for Samba 4, but as of Q2/14 it does not have good support for joining existing AD Domains.  As such, Samba 4 is not planned to become the default until late 2014/early 2015.

==== Modifying CIFS Access ====

There are a number of custom options that can be set to adjust CIFS access to your ''network share'' for different use cases.  The 'Public' option makes the ''network share'' public so that all users can access it.  The 'Writable' option makes the share writable as opposed to read-only, and the 'Browseable' option makes the share visible when you browse for it from your Windows server or desktop.

[[File:qs_mod_user_network_share.png]]

==== Modifying CIFS Configuration Options ====

===== Hide Unreadable & Hide Unwriteable =====

To show users only the folders and files to which they have access, set these options so that items they do not have read and/or write access to are hidden.

===== Media Harmony Support =====

Media Harmony is a special VFS module for Samba which allows multiple Avid users to edit content at the same time on the same network share.  To do this the Media Harmony module maintains separate copies of the Avid meta-data temporary files on a per-user, per-network-client basis.

===== Disable Snapshot Browsing =====

Snapshots can be used to recover data, and by default your snapshots are visible under a special ShareName_snaps folder.  If you don't want users to see these snapshot folders you can disable this.  Note that you can still access the snapshots for easy file recovery via the Previous Versions section of the Properties page for the share in Windows.

===== MMC Share Management =====

QuantaStor ''network shares'' can be managed directly from the Share Management section of the MMC console on Windows Server.  This is often useful in heterogeneous environments where multiple filers from different vendors are in use.  To turn on this capability for your ''network share'' simply select this option.
If you want to enable this capability for all network shares in the appliance you can do so by [http://www.vionblog.com/manage-samba-permissions-from-windows/ manually editing the smb.conf] file to add these settings to the [global] section.
<pre>
vfs objects = acl_xattr
map acl inherit = Yes
store dos attributes = Yes
</pre>

===== Extended Attributes =====

Extended attributes are a filesystem feature where extra metadata can be associated with files.  This is useful for enabling security controls (ACLs) for DOS and OS/X clients.  Extended attributes can also be used by a variety of other applications, so if you need this capability simply enable it by checking the box(es) for DOS, OS/X, and/or plain Extended Attribute support.

==== Active Directory Configuration ====

QuantaStor appliances can be joined to your AD domain so that CIFS access can be granted to specific AD users and AD groups.

===== Joining an AD Domain =====

To join a domain, first navigate to the "Network Shares" section.  Now select "Configure CIFS" in the top ribbon bar, or right-click in the "Network Shares" space and select "Configure CIFS Services" from the context menu.  Check the box to enable Active Directory and provide the necessary information.  The KDC is most likely your domain controller's FQDN (DC.DOMAIN.COM).
<br>
Note: Your storage system name must be <= 15 characters long.
<br>
If there are any problems joining the domain, verify that you can ping the IP address of the domain controller and that you are also able to ping the domain itself (see the connectivity check example below).

[[File:contextMenu.png|400px]]  [[File:addDomain.png|300px]]

You can now see the QuantaStor system on the domain controller under the Computers entry.

[[File:adComputerEntry.png|400px]]

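If the join fails, a quick connectivity check from the appliance console (or over ssh) can narrow down the problem.  The addresses and domain name below are placeholders.

<pre>
# verify the domain controller is reachable by IP
ping -c 3 10.0.0.10

# verify the domain name itself resolves and responds
ping -c 3 mydomain.local

# verify DNS resolution of the KDC / domain controller FQDN
nslookup dc.mydomain.local
</pre>
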
==== Leaving an AD Domain ====

To leave a domain, first navigate to the "Network Shares" section.  Now select "Configure CIFS" in the top ribbon bar, or right-click in the "Network Shares" space and select "Configure CIFS Services" from the context menu.  Un-check the checkbox to disable Active Directory integration.  If you would like to remove the computer entry from the domain controller you must also specify the domain administrator and password.  After clicking "OK" QuantaStor will leave the domain.

[[File:removeDomain.png|400px]]

==== Controlling CIFS Access ====

CIFS access can be controlled on a per user basis.  When you are not in a domain, the users you can choose from are the users you have created within QuantaStor.  Access can be assigned during share creation by selecting "CIFS/SMB Advanced Settings", or while modifying a share under the "CIFS User Access" tab.  If you are in a domain, you will also be able to select the users/groups that are present within the domain.  This is done the same way as with QuantaStor users, but by selecting "AD Users" or "AD Groups".  You can set the access to "Valid User", "Admin User", or "Invalid User".

[[File:qs_mod_user_network_share.png|300px]]  [[File:qs_mod_aduser_network_share.png|300px]]

===== Verifying Users Have CIFS Passwords =====

Before using a QuantaStor user for CIFS/SMB access you must first verify that the user has a CIFS password.  To check whether a user can be used for CIFS/SMB, go to the "Users & Groups" section, select the user, and look for the "CIFS Ready" property.  If the user is ready to be used with CIFS/SMB it will say "Yes".  If the property says "Password Change Required" then one more step is needed before that user can be used: right-click the user and select "Set Password".  If you are signed in as an administrator the old password is not required, and when setting the password for CIFS/SMB you can re-use the same password it was set to before.  The user should then show up as CIFS ready.

===== Setting CIFS Options =====

You can modify some of the share options during share creation or while modifying the share.  Most of the options are set by selecting/unselecting checkboxes.  You can also set the file and directory permissions in the modify share dialog under the "CIFS File Permissions" tab.

[[File:qs_mod_perm_network_share.png|300px]]

== Managing Scale-out NAS (GlusterFS) Volumes ==

QuantaStor provides scale-out NAS capabilities with access via traditional protocols like CIFS/SMB and NFS as well as via the GlusterFS client.  For those not familiar with it, the Gluster filesystem is a scale-out filesystem that combines multiple underlying filesystems across appliances and presents them in aggregate as a single filesystem.
In QuantaStor appliances Gluster is layered on top of our Storage Pool architecture, which is filesystem based.  In this way you can use your QuantaStor appliances for file, block, and scale-out file storage needs all at the same time.
[[File:gluster0.png|none]]

=== Multi-protocol Access via CIFS, NFS and GlusterFS Client ===

Scale-out NAS storage can be accessed via the major NAS protocols (CIFS and NFS) as well as via native Gluster clients.  Best performance is achieved when the storage is accessed from Linux based clients which mount the storage using the native GlusterFS client.  Native clients communicate directly with the servers containing the bricks for a given volume, which allows performance and capacity to scale much more linearly as new appliances are added.
[[File:gluster1.png|none]]

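For Linux clients, mounting a scale-out volume with the native GlusterFS client is a one-line operation once the glusterfs-client package is installed.  The node name and volume name below are placeholders; any node that the volume spans can be used as the mount server.

<pre>
sudo apt-get install glusterfs-client
sudo mount -t glusterfs qs-node1:/myglustervol /mnt/myglustervol
</pre>
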
=== Primary Use Cases ===

Scale-out NAS using GlusterFS technology is great for unstructured data, archive, and many media use cases.  It is not well suited to high IOPS workloads like databases and virtual machines; for those you'll want to create traditional Storage Volumes or Network Shares in your appliances that can supply the necessary write performance.  GlusterFS read/write performance via CIFS/NFS is moderate and can be improved with SSD caching when used with ZFS based storage pools.  Performance is better when you can use the native GlusterFS client, but note that the native client is only available on Linux based platforms.

* Good Use Cases
** Large-scale Media Archive
** Large-scale Unstructured Data Repository
* Poor Use Cases
** Virtual Machine boot devices
** Databases

We expect the breadth of good use cases for GlusterFS technology to increase over time, but conservatively the above represent our recommendations for 2014.

=== Grid Setup Procedure ===

A couple of additional steps are required to set up your appliances before you can start provisioning scale-out NAS shares.  The first step is to create a management Grid by right-clicking on the Storage System icon in the tree stack view in the Web Management user interface (WUI) and choosing 'Create Grid...'.

[[File:grid0.png]]

After you create the grid you'll need to add your other appliances to it by right-clicking on the Grid icon and choosing 'Add Grid Node...' from the menu.  It will ask for the IP address and password of the appliance to be added, and once all appliances are added you'll be able to manage all the nodes from a single login to the WUI.

[[File:grid1.png]]

Be aware that the management user accounts across the appliances will be merged.  In the event that there are duplicate user accounts, the accounts on the node elected as the primary/master node of the grid take precedence.

[[File:grid2.png]]

At this point your Grid should be set up and you should see all the appliance nodes in the WUI.  If not, please double-check the configuration and/or contact support for more assistance.

=== Network Setup Procedure ===

You have a number of options for tuning the network setup for Gluster.  If you plan to use the native GlusterFS client from Linux servers that connect directly to QuantaStor nodes, you should set up network bonding to bind multiple network ports on each appliance, providing additional bandwidth and automatic fail-over in the event a network cable is pulled.  If you plan to use CIFS/NFS as the primary access protocols, you could use bonding or you could separate your ports into a front-end network for client access and a back-end network for inter-node communication.  When in doubt start with a simple configuration like LACP bonded ports, but ideally have an expert review your configuration before it goes into production.  Getting the networking right is important for long term reliability and optimal performance, so be sure to review your configuration with your reseller to make sure it's ideal for your needs.

=== Peer Setup ===

Setting up QuantaStor appliances into a grid allows them to intercommunicate, but it doesn't automatically set up the GlusterFS peer relationships between the appliances.  For that you'll want to bring up the 'Peer Setup' dialog in the web interface and then select the IP address on each node that Gluster should use for inter-node communication.

[[File:gluster3.png]]

This sets up a "hosts" file (/etc/hosts) on each appliance so that each node can refer to the other nodes in the grid by name.  You can also do this via DNS, but by using Peer Setup in QuantaStor the configuration is kept in sync across the nodes and the nodes can resolve names even if DNS server access is down.  Gluster volumes span appliances, and on each appliance a volume spans it places a brick.  These gluster bricks are referenced with a brick path that looks much like the URL of a web page.  By setting up the IP to hostname mappings, QuantaStor is able to create brick paths using hostnames rather than IP addresses, which makes it much easier to change the IP address of a node in the future.  Finally, the Peer Setup dialog has a check box to set up the Gluster peer relationships.  This runs a series of 'gluster peer probe' commands to link the nodes together so that gluster volumes can be created across the appliances.  Once the peers are attached you'll see them appear in the Gluster Peers section of the WUI, and you can then begin provisioning Gluster Volumes.  Alternatively you can add the peers one at a time using the ''Peer Attach'' dialog like so.

[[File:gluster4.png]]

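Under the hood, the Peer Setup dialog runs standard Gluster commands on your behalf.  If you need to verify the peer relationships from the console, the equivalent commands look like the following (the host names are placeholders that would normally come from the /etc/hosts entries written by Peer Setup):

<pre>
# attach peers (run from any node already in the trusted pool)
gluster peer probe qs-node2
gluster peer probe qs-node3

# verify the peer relationships
gluster peer status
</pre>
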
=== Provisioning Gluster Volumes ===

Gluster Volumes are provisioned from the 'Gluster Management' tab in the web user interface.  To make a new Gluster Volume simply right-click on the Gluster Volumes section or choose ''Create Gluster Volume'' from the tool bar.

[[File:Gluster5.png]]

To make your Gluster Volume highly-available with two copies of each file, choose a replica count of two (2).  If you only need fault tolerance against disk failures, that is already supplied by the storage pools and you can use a replica count of one (1).  With a replica count of two (2) you have full read/write access to your scale-out Network Share even if one of the appliances is turned off.  With a replica count of one (1) you will lose read access to some of your data if one of the appliances is turned off.  When the appliance is turned back on it will automatically synchronize with the other nodes to bring itself up to the current state via auto-healing.

=== Gluster Virtual Network Interfaces ===

Once you have a Gluster Volume provisioned you can allocate one or more virtual network interfaces to it.  This is not necessary if you are using the native Gluster client, where failover is automatic because the client communicates with all nodes directly.  But if you're using CIFS or NFS clients to access your scale-out NAS share, you'll want an IP address that automatically floats to another node in the event that the node it is currently attached to is disabled or turned off.  To create a virtual interface for a volume simply right-click on a Gluster Volume and choose Create Gluster Virtual Interface.  You'll need to provide a static IP address, netmask, and other basic network configuration information.  Note that to use Gluster Virtual Network Interfaces you must first create a Grid virtual interface for your grid.  This can be done by right-clicking on the Grid in the tree view and choosing 'Modify Grid...' from the pop-up menu.  Creating a Grid virtual interface enables the cluster resource management components QuantaStor uses for IP address failover, so it is a prerequisite.

=== Configuring NFS and CIFS access to Gluster Volumes ===

A Network Share will appear in the Network Shares section of the Web Management interface for each Gluster volume that is created.  From there you manage the Gluster Volume as a network share just as you would a standard single pool network share.  QuantaStor takes care of synchronizing configuration changes across nodes to provide CIFS/NFS access on every node that a given gluster volume spans.  For example, if you have a grid with 5 nodes (A, B, C, D, E) and a Gluster Volume which spans nodes A & B, then CIFS/NFS access to that Gluster Volume will only be provided via nodes A & B.

==== Snapshots & Quotas ====

At this time we do not provide support for snapshots and quotas of Gluster volumes.  That said, when used with ZFS based Storage Pools QuantaStor allocates the Gluster bricks as filesystems so that we can provide more advanced functionality like snapshots & quotas in a future release.

== Managing Storage Volumes ==
Each storage volume is a unique block device/target (a.k.a. a 'LUN' as it is often referred to in the storage industry) which can be accessed via iSCSI, Fibre Channel, or Infiniband/SRP.  A ''storage volume'' is essentially a virtual disk drive on the network (the SAN) that you can assign to any host in your environment.  Storage volumes are provisioned from a ''storage pool'', so you must first create a ''storage pool'' before you can start provisioning volumes.

=== Creating Storage Volumes ===
Storage volumes can be provisioned 'thick' or 'thin', which indicates whether the storage for the volume is fully reserved up front (thick) or not (thin).  As an example, a thin provisioned 100GB storage volume in a 1TB storage pool will use only about 4KB of disk space in the pool when it is initially created, leaving almost the entire 1TB available for other volumes and additional provisioning.  In contrast, if you choose 'thick' provisioning by unchecking the 'thin provisioning' option then the entire 100GB is pre-reserved.  The advantage is that the volume can never run out of disk space due to low storage availability in the pool, but since the space is reserved up front only 900GB remains free in the 1TB pool after allocation, so thick provisioning can use up your available disk space fairly rapidly.  As such, we recommend thin provisioning and it is the default.

=== Deleting Storage Volumes ===

There are two separate dialogs in QuantaStor Manager for deleting storage volumes.  If you press the "Delete Volume(s)" button in the ribbon bar you will be presented with a dialog that lets you delete multiple volumes at once, and you can even search for volumes based on a partial name match, which can save a lot of time when you're deleting many volumes.  You can also right-click on a storage volume and choose 'Delete Volume', which brings up a dialog for deleting just that volume.
If there are snapshots of the volume you are deleting they are not deleted; rather, they are promoted.  For example, if you have snapshots S1 and S2 of volume A1, those snapshots become root/primary storage volumes after A1 is deleted.  Once a storage volume is deleted all of its data is gone, so use extreme caution to make sure you're deleting the right volumes.  Technically, storage volumes are stored internally as files on an ext4 or btrfs filesystem, so it is possible that a filesystem file recovery tool could recover a lost volume, but generally speaking you would need to hire a company that specializes in data recovery to get this data back.

=== Resizing Storage Volumes ===

QuantaStor supports increasing the size of storage volumes, but due to the high probability of data loss we do not support shrink.  (n.b. all storage volumes are raw files within the storage pool filesystem (usually XFS), so you could theoretically experiment by making a copy of your storage volume file, manually truncating it, renaming the old one and then renaming the truncated version back into place.  This is not recommended, but it's an example of some of the low-level things you could try in a real pinch given the open nature of the platform.)

=== Creating Snapshots ===

QuantaStor snapshots are probably not like the snapshots you've used with other storage vendors on the market.  Some key features of QuantaStor volume snapshots include:

* massive scalability
** create hundreds of snapshots in just seconds
* support for snapshots of snapshots
** you can create snapshots of snapshots of snapshots, ad infinitum
* snapshots are R/W by default; read-only snapshots are also supported
* snapshots perform extremely well even when large numbers exist
* snapshots can be converted into primary storage volumes instantly
* you can delete snapshots at any time and in any order
* snapshots are 'thin', that is, they are a copy of the meta-data associated with the original volume and not a full copy of all the data blocks

All of these advanced snapshot capabilities make QuantaStor ideally suited for virtual desktop solutions, off-host backup, and near continuous data protection (NCDP).  If you're looking to get NCDP functionality, just create a 'snapshot schedule' and snapshots can be created for your storage volumes as frequently as every hour.

To create a snapshot or a batch of snapshots, select the storage volume that you wish to snap, right-click on it and choose 'Snapshot Storage Volume' from the menu.

If you do not supply a name then QuantaStor will automatically choose one for you by appending the suffix "_snap" to the original volume's name.  So if you have a storage volume named 'vol1' and you create a snapshot of it, you'll have a snapshot named 'vol1_snap000'.  If you create many snapshots the system will increment the number at the end so that each snapshot has a unique name.

=== Creating Clones ===

Clones are complete copies of the data blocks in the original storage volume, and a clone can be created in any storage pool in your storage system, whereas a snapshot can only be created within the same storage pool as the original.  You can create a clone at any time, even while the source volume is in use, because QuantaStor creates a temporary snapshot in the background to facilitate the clone process.  The temporary snapshot is automatically deleted once the clone operation completes.  Note also that you cannot use a cloned storage volume until the data copy completes.  You can monitor the progress of the cloning in the Task bar at the bottom of the QuantaStor Manager screen.  In contrast to clones, snapshots are created nearly instantly and do not involve data movement, so you can use them immediately.

=== Restoring from Snapshots ===

If you've accidentally lost some data by inadvertently deleting files in one of your storage volumes, you can recover your data quickly and easily using the 'Restore Storage Volume' operation.  To restore your original storage volume to a previous point in time, first select the original, then right-click on it and choose "Restore Storage Volume" from the pop-up menu.  When the dialog appears you will be presented with all the snapshots of that original from which you can recover.  Just select the snapshot that you want to restore to and press OK.  Note that you cannot have any active sessions to the original or the snapshot storage volume when you restore; if you do you'll get an error.  This prevents the restore from taking place while the OS has the volume in use or mounted, which would lead to data corruption.
<pre>
WARNING: When you restore, the data in the original is replaced with the data in
the snapshot.  As such, there's a possibility of losing data as everything that
was written to the original since the time the snapshot was created will be lost.
Remember, you can always create a snapshot of the original before you restore it
to a previous point-in-time snapshot.
</pre>

=== Converting a Snapshot into a Primary (btrfs only) ===

A primary volume is simply a storage volume that's not a snapshot of any other storage volume.  With QuantaStor you can turn any snapshot into a primary storage volume very easily.  Just select the storage volume in QuantaStor Manager, then right-click and choose 'Modify Storage Volume' from the pop-up menu.  Once you're in the dialog, just un-check the box marked "Is Snapshot?".  If the snapshot has snapshots of its own, those snapshots will be re-attached to the previous parent volume of the snapshot.  This conversion of a snapshot to a primary does not involve data movement, so it's nearly instantaneous.  After the snapshot becomes a primary it will still have data blocks in common with the storage volume it was previously a snapshot of, but that relationship is cleared from a management perspective.

== Managing Backup Policies ==

Within QuantaStor you can create ''backup policies'' whereby data from any NFS or CIFS share on your network is automatically backed up to your QuantaStor appliance.  To create a ''backup policy'' simply right-click on the ''Network Share'' to which you want the data backed up and choose the 'Create Backup Policy..' option from the pop-up menu.
A backup policy does a CIFS/NFS mount of the specified NAS share on your network locally to the appliance in order to access the data to be archived.  When a backup starts it creates a Backup Job object which you will see in the web interface; you can monitor the progress of any given ''backup job'' in the '''Backup Jobs''' tab in the center pane of the web interface after you select the ''Network Share'' to which the backup policy is attached.

[[File:qs_bp_menu.png]]

=== Creating Backup Policies ===

Backup policies in QuantaStor support heavy parallelism so that very large NAS filers with 100m+ files can be scanned for changes efficiently.  The default level of parallelism is 32 concurrent scan+copy threads, and this can be reduced or increased up to 64 concurrent threads.

==== Backup to Network Share ====

This is where you indicate where you want the data to be backed up to on your QuantaStor appliance.  With QuantaStor backup policies your data is copied from a NAS share on the network to a ''network share'' on your QuantaStor appliance.

==== Policy Name ====

This is a friendly name for your backup policy.  If you are going to have multiple policies backing up to the same ''network share'' then each policy will be associated with a directory named after the policy.  For example, if your share is called ''media-backups'' and you have a policy called 'project1' and a policy called 'project2', there will be sub-directories under the ''media-backups'' share for ''project1'' and ''project2''.  In order to support multiple policies per ''Network Share'' you must select the option which says '''Backup files to policy specific subdirectory'''.  If that is not selected then only one policy can be associated with the network share and the backups will go into the root of the share to form an exact mirror copy.

==== Selecting the Backup Source ====

In the section labeled '''Hostname / IP Address:''' enter the IP address of the NAS filer or server which is sharing the NAS folder you want to back up.  For NFS shares you should enter the IP address and press the '''Scan''' button; if NFS shares are found they'll show up in the CIFS/NFS Export: list.  For CIFS share backups you'll need to enter the network path to the share in a special format starting with double forward slashes like so: '''//username%password@ipaddress'''.  For example, you might scan for shares on a filer located at 10.10.5.5 using the SMB credentials of 'admin' and password 'password123' using this path: '''//admin%password123@10.10.5.5'''.  In AD environments you can also include the domain in the SMB path like so: '''//DOMAIN/username%password@ipaddress'''.

[[File:qs_bp_create.png]]

==== Policy Type ====

You can indicate that you want the backup policy to back up everything by selecting 'Backup All Files', or you can do a 'Sliding Window Backup'.  For backing up data from huge filers with 100m+ files it is sometimes useful to only back up and maintain a ''sliding window'' of the most recently modified or created files.  If you set the ''Retention Period'' to 60 days then all files that have been created or modified within the last 60 days will be retained; files older than that will be purged from the backup folder.

Be careful with the ''Backup All Files'' mode.  If you have a Purge Policy enabled it will remove any files from the ''network share'' which were not found on the source NAS share being backed up.  If you attach such a backup policy to an existing share which has data on it, the purge policy will remove any data/files in your QuantaStor Network Share that are not on the source NAS share on the remote filer.  So use caution: ''Backup All Files'' really means ''maintain a mirror copy of the remote NAS share''.

==== Purge Policy ====

Backup policies may run many times per day to quickly back up new and modified files.  A scan to determine what needs purging is typically less important, so it is more efficient to run it nightly rather than with each and every backup job.  For ''Sliding Window'' policies the purge phase throws out any files that are older than the retention period.  For ''Backup All Files'' policies a comparison is done and any files that are no longer present in the NAS source share are removed from the backup.  The Purge Policy can also be set to 'Never delete files', which backs up files to your Network Share but never removes them.

==== Backup Logs ====

If you select 'Maintain a log of each backup' then a backup log file will be written out after each backup.  Backup logs can be found on your QuantaStor appliance in the /var/log/backup-log/POLICY-NAME directory.  The purge process produces a log with the .purgelog suffix and the backup process produces a log with the .changelog suffix.

=== pwalk ===

pwalk is an open source command line utility included with QuantaStor (see /usr/bin/pwalk).  It was originally written by John Dey as a parallelized version of the 'du -a' unix utility suitable for scanning filesystems with hundreds of millions of files.  It was then reworked at OSNEXUS to support backups, sliding window backups, additional output formats, and more.  If you type 'pwalk' by itself at the QuantaStor ssh or console window you'll see the following usage page / documentation.  The pwalk utility has three modes: 'walk', which does a parallelized crawl of a directory; 'copy', which does a backup from a SOURCEDIR to a specified --targetdir; and 'purge', which removes files in the PURGEDIR which are not found in the --comparedir.  In general you should never need to use pwalk directly, but the documentation is provided here to support special use cases like custom backup or replication cron jobs.

<pre>
pwalk version 3.1 Oct 22nd 2013 - John F Dey john@fuzzdog.com, OSNEXUS, eng@osnexus.com

Usage :
pwalk --help --version
         Common Args :
            --dryrun : use this to test commands
                       without making any changes to the system
      --maxthreads=N : indicates the number of threads (default=32)
          --nototals : disables printing of totals after the scan
              --dots : prints a dot and total every 1000 files scanned.
             --quiet : no chatter, speeds up the scan.
            --nosnap : Ignore directories with name .snapshot
             --debug : Verbose debug spam
       Output Format : CSV
              Fields : DateStamp,"inode","filename","fileExtension","UID",
                       "GID","st_size","st_blocks","st_mode","atime",
                       "mtime","ctime","File Count","Directory Size"

Walk Usage :
pwalk SOURCEDIR
        Command Args :
           SOURCEDIR : Fully qualified path to the directory to walk

Copy/Backup Usage :
pwalk --targetdir=TARGETDIR SOURCEDIR
pwalk --retain=30 --targetdir=TARGETDIR SOURCEDIR
        Command Args :
         --targetdir : copy files to specified TARGETDIR
             --atime : copy if access time change (default=no atime)
 --backuplog=LOGFILE : log all files that were copied.
 --status=STATUSFILE : write periodic status updates to specified file
            --retain : copy if file ctime or mtime within retention period
                       specified in days. eg: --retain=60
           --nomtime : ignore mtime (default=use mtime)
           SOURCEDIR : Fully qualified path to the directory to walk

Delete/Purge Usage :
pwalk --purge [--force] --comparedir=COMPAREDIR PURGEDIR
pwalk --purge [--force] --retain=N PURGEDIR
        Command Args :
        --comparedir : compare against this dir but dont touch any files
                       in it. comparedir is usually the SOURCEDIR from
                       a prior copy/sync stage.
             --purge : !!WARNING!! this deletes files older than the
                       retain period -OR- if retain is not specified
                       --comparedir is required. The comparedir is
                       compared against the specified dir and any files
                       not found in the comparedir are purged.
             --force : !NOTE! default is a *dry-run* for purge, you must
                       specify --force option to actually purge files
             --atime : keep if access time within retain period
            --retain : keep if file ctime or mtime within retention period
                       specified in days. eg: --retain=60
</pre>

The OSNEXUS modified version of the C source code for pwalk is available here: [[pwalk.c]].  The original version is available
[https://github.com/fizwit/filesystem-reporting-tools/blob/master/pwalk.c here].

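As an example of the custom cron job use case mentioned above, the following sketch runs a nightly sliding-window copy followed by a purge of files older than 60 days.  The source and target paths are placeholders, and the flags used are only those documented in the usage text above.

<pre>
# /etc/cron.d/nightly-pwalk-sync (illustrative only)
# copy files modified within the last 60 days, logging what was copied
30 1 * * * root /usr/bin/pwalk --retain=60 --backuplog=/var/log/pwalk-copy.log --targetdir=/export/backup-share /mnt/source-share

# purge files in the backup copy older than 60 days (remove --force for a dry run)
30 4 * * * root /usr/bin/pwalk --purge --force --retain=60 /export/backup-share
</pre>
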
== SNMP Agent Configuration ==
QuantaStor v3.10.1 and newer comes with an SNMP agent so that you can remotely monitor your system via SNMP and be notified of system alerts via the SNMP trap mechanism.

=== SNMP MIB ===

The full SNMP MIB for QuantaStor can be found [http://www.osnexus.com/storage/snmpagent/QUANTASTOR-SYS-STATS.mib here].

=== qs-util SNMP Utility Commands ===

The qs-util command line utility has a number of helper commands to make enabling SNMP and verifying the configuration easier.  Here's a list of those commands; you can also run 'qs-util' at the console to see the full list.  Note that many of these commands must be run as root, so be sure to do a 'sudo -i' before running them.

<pre>
  SNMP Commands
    qs-util snmpenable               : Configures the SNMP agent to startup automatically at system startup.
    qs-util snmpdisable              : Configures the SNMP agent to not start automatically at system startup (default).
    qs-util snmpactivate             : Turns on the SNMP agent
    qs-util snmprestart              : Restarts the SNMP service and agent
    qs-util snmpwalkall              : Walks the entire SNMP mib
    qs-util snmpwalkvolumes          : Walks the volumes via the SNMP mib
    qs-util snmpwalkalerts           : Walks the alerts via the SNMP mib
    qs-util snmpmib                  : Displays the contents of the SNMP mib
</pre>

=== Enabling the SNMP Agent ===
By default the QuantaStor SNMP agent is turned off, but you can enable it at the console with a couple of commands:

<pre>
sudo qs-util snmpenable
sudo qs-util snmpactivate
</pre>

The snmpenable command sets up the appliance so that the SNMP agent starts automatically when the appliance boots.  The snmpactivate command starts the snmpd and qs_snmpagent services.  You should also install the snmp package, which contains the snmpwalk and snmpget utilities you can use for testing the agent:

<pre>
sudo apt-get install snmp
</pre>

=== Configuring the SNMP Agent user account ===

You must edit the /etc/snmp/snmpd.conf configuration file so that it contains the plain text username and password of the account that will be used for communication between the SNMP agent and the QuantaStor core services.  We recommend creating a 'snmpuser' account with the 'System Monitor' role so that even if someone obtains the plain text password for the SNMP agent they still cannot make configuration changes to the appliance.
If you are not logged into the web management interface you can create a new management user account at the command line like so:

<pre>
qs user-add snmpuser snmppass "System Monitor" server=localhost,admin,password
</pre>

In the /etc/snmp/snmpd.conf file you will see lines that look like this:

<pre>
createUser snmpuser MD5 snmppass DES
group nmsGroup usm snmpuser
</pre>

Edit these lines ('nano /etc/snmp/snmpd.conf') to match the username and password of the account you created in the previous step.  For example, replace 'snmpuser' with the username of the account you created via the QuantaStor Manager web interface, and replace 'snmppass' with the password you gave that account.  When the SNMP agent starts up it uses the credentials from the first createUser entry in snmpd.conf for all communication with the QuantaStor service.  So even if you have multiple createUser entries in snmpd.conf, such as "admin", if the first createUser entry is "snmpuser" then the "snmpuser" credentials are used for all SNMP agent to qs_service communication.

Now restart the SNMP daemon and agent like so:
<pre>
sudo qs-util snmprestart
</pre>

=== Testing the SNMP Agent ===

Now that you have the SNMP agent enabled with an account associated with it, it's time to test it to make sure it is working.  To do this, use the qs-util commands for doing an SNMP walk, for example:

<pre>
qs-util snmpwalkvolumes snmpuser snmppass
qs-util snmpwalkalerts snmpuser snmppass
qs-util snmpwalkall snmpuser snmppass
</pre>

Alternatively you can run snmpwalk directly like so:

<pre>
snmpwalk -v 3 -u snmpuser -a MD5 -A snmppass -x DES -X "snmppass" -l authPriv localhost QUANTASTOR-SYS-STATS::storageVolume
</pre>

Be sure to replace snmpuser and snmppass with the user account you set up and specified in the /etc/snmp/snmpd.conf configuration file.  If you're not able to get any data from the snmpwalk commands, try running a simple qs command to verify that the credentials for the account are correct, like so:
<pre>
qs alert-list server=localhost,snmpuser,snmppass
</pre>
If that doesn't work then either the quantastor service is not running (service quantastor start) or the user account username or password isn't correct.

=== Configuring SNMP Agent Trap Settings ===
Alerts within QuantaStor have a severity of error, warning, or informational, and via the /etc/qs_snmptrapd.conf configuration file you can turn off these categories of alerts to fit your needs.  In general you should never ignore error messages, but it may be handy to disable informational alerts in some cases.  Here are the default contents of the /etc/qs_snmptrapd.conf file.  Note that if you delete this file, the SNMP agent will automatically re-create it for you with the defaults:

<pre>
poll-interval=120
ignore-error-alerts=false
ignore-warn-alerts=false
ignore-info-alerts=false
</pre>

If you make any changes to this file, be sure to restart the agent like so:
<pre>
service snmpagent restart
</pre>
Or you can restart both the agent and the SNMP service like so:
<pre>
qs-util snmprestart
</pre>

=== Testing SNMP Trap Settings ===
By default the SNMP agent only pushes out traps every 120 seconds, so you will have to wait a while for the trap to be generated after you raise a test alert.  QuantaStor only raises traps for Alert objects, so anything that you see in the Alert status bar in the web interface or in 'qs alert-list' will be sent out as a trap.  Traps are only sent a single time; the agent keeps track of which alerts have been sent by writing the alert UUIDs to '/var/log/qs_snmpraisedtraps.dat'.  If you delete that file then all the alerts will be raised again after the agent restarts.  To generate a test alert which will be converted into an SNMP trap use this command:

<pre>
qs alert-raise --message="snmp test message" --alert-severity=warning --server=localhost,admin,password
</pre>

After you create the test alert you can then look in the log to see whether it has been raised:

<pre>
qs-showlog -snmp
</pre>

An easier way to do that is to leave the log open with 'tail -f /var/log/qs_snmpagent.log', then hit Ctrl-C to stop monitoring the log once you see the trap generated.  By default the /etc/snmp/snmpd.conf file is configured to only raise traps to the local host.  To raise traps outside of the local host you'll need to add additional lines to the snmpd.conf file like this:

<pre>
trap2sink 127.0.0.1 public
trap2sink 192.168.10.123 public
trap2sink 10.10.50.134 public
</pre>

You can also monitor traps using the snmptrapd utility like so:

<pre>
snmptrapd -P -F "%02.2h:%02.2j TRAP%w.%q from %A %v %W\n"
</pre>

== IO Tuning ==

=== ZFS Performance Tuning ===

One of the most common tuning tasks for ZFS is setting the size of the ARC cache.  If your system has less than 10GB of RAM you should just use the default, but if you have 32GB or more it is a good idea to increase the size of the ARC cache to make maximum use of the available RAM in your storage appliance.  Before you set the tuning parameters, run 'top' to verify how much RAM the system has.  Next, run these commands to set the ARC limits to a percentage of the available RAM.  For example, to set the ARC cache to use a maximum of 80% and a minimum of 50% of the available RAM in the system, run these, then reboot:
<pre>
qs-util setzfsarcmax 80
qs-util setzfsarcmin 50
</pre>

Example:
<pre>
sudo -i
qs-util setzfsarcmax 80
INFO: Updating max ARC cache size to 80% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1672478720 bytes (1595 MB)
qs-util setzfsarcmin 50
INFO: Updating min ARC cache size to 50% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1045430272 bytes (997 MB)
</pre>

To see how many cache hits you are getting you can monitor the ARC cache while the system is under load with the qs-iostat command:

<pre>
qs-iostat -af

ZFS Adaptive Replacement Cache (ARC) / read cache statistics

Name                              Data
---------------------------------------------
hits                              1099360191
misses                            65808011
c_min                             67108864
c_max                             1045925888
size                              26101960
arc_meta_used                     11552968
arc_meta_limit                    261481472
arc_meta_max                      28478856

ZFS Intent Log (ZIL) / writeback cache statistics

Name                              Data
---------------------------------------------
zil_commit_count                  25858
zil_commit_writer_count           25775
zil_itx_count                     12945
</pre>

=== Pool Performance Profiles ===

Read-ahead and request queue size adjustments can help tune your storage pool for certain workloads.  You can also create new storage pool IO profiles by editing the /etc/qs_io_profiles.conf file.  The default profile looks like this, and you can duplicate and edit it to create a custom profile.

<pre>
[default]
name=Default
description=Optimizes for general purpose server application workloads
nr_requests=2048
read_ahead_kb=256
fifo_batch=16
chunk_size_kb=128
scheduler=deadline
</pre>

If you edit the profiles configuration file, be sure to restart the management service with 'service quantastor restart' so that your new profile is discovered and becomes available in the web interface.

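A custom profile is simply another section in the same file.  For example, a hypothetical profile for large-file sequential streaming workloads might raise the read-ahead value; the section name, description, and values below are illustrative only, not recommendations:

<pre>
[mediastream]
name=Media Streaming
description=Larger read-ahead for large-file sequential workloads
nr_requests=2048
read_ahead_kb=1024
fifo_batch=16
chunk_size_kb=128
scheduler=deadline
</pre>
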
=== XFS Tuning Parameters ===

QuantaStor has a number of tunable parameters in the /etc/quantastor.conf file that can be adjusted to better match the needs of your application.  That said, we've spent a considerable amount of time tuning the system to efficiently support a broad set of application types, so we do not recommend adjusting these settings unless you are a highly skilled Linux administrator.
The default contents of the /etc/quantastor.conf configuration file are as follows:
<pre>
[device]
nr_requests=2048
scheduler=deadline
read_ahead_kb=512

[mdadm]
chunk_size_kb=256
parity_layout=left-symmetric
</pre>

There are tunable settings for device parameters, md array chunk-size and parity configuration settings, as well as some settings for btrfs.  These configuration settings are read from the file dynamically each time one of them is needed, so there's no need to restart the quantastor service.  Simply edit the file and the changes will be applied to the next operation that uses them.  For example, if you adjust the chunk_size_kb setting for mdadm then the next storage pool created will use the new chunk size.  Other settings, like the device settings, are applied automatically within a minute or so of your changes because the system periodically checks the disk configuration and updates it to match the tunable settings.
Also, you can delete the quantastor.conf file and the system will automatically use the defaults listed above.

== PagerDuty ==

PagerDuty is an alarm aggregation and dispatching service for system administrators and support teams.  It collects alerts from your monitoring tools, gives you an overall view of all of your monitoring alarms, and alerts an on-duty engineer if there's a problem.

QuantaStor can be set up to trigger PagerDuty alerts whenever it encounters an alert with a severity of "Error", "Warning", or "Critical".  Getting set up only requires a few simple steps (internet connection required).

=== Adding a New Service in PagerDuty ===

{|
|-
|
[[File:pagerDutySetup1.png|400px]]
|
After logging into your PagerDuty account click on the "Services" tab along the top.
From here click on the "Add New Service" button.

This service is what all of the QuantaStor alerts will be kept under, keeping them separate from the other programs that may be sending alerts to PagerDuty.
|-
|
[[File:pagerDutySetup2.png|400px]]
|
For the "Service Name" field we recommend something that describes the box or grid being monitored.  Also make sure to select "Generic API System" under service type; QuantaStor uses PagerDuty's API to post the alert to PagerDuty.  After everything is set click "Add Service".
|-
|
[[File:pagerDutySetup3.png|400px]]
|
Everything on the PagerDuty side should now be set up.  Copy the "Service API Key" and set it aside.  This key is the input parameter that tells QuantaStor where to post the alert.
|}

=== Adding PagerDuty to QuantaStor ===

{|
|-
|
[[File:pagerduty3.png|400px]]
|
Open the web interface for the QuantaStor system.  Right-click on the storage system or grid and select "Alert Manager".
|-
|
[[File:pagerduty2.png|400px]]
|
In the text box titled "PagerDuty.com Service Key" paste the service key from before.  Then click on "Apply".
|-
|
[[File:pagerduty1.png|400px]]
|
To test that the integration is working, generate a test alert.  Make sure to select a severity level of "Error", "Warning", or "Critical" and then click OK.  If everything is set up correctly a test alert will be generated and sent to PagerDuty.
|}

=== Example Alerts ===

When QuantaStor sends an alert to PagerDuty it also sends a list of details to make solving the issue easier.  These details include:
* The serial number of the system
* The startup time of the system
* The location
* The title of the alert
* The version of the QuantaStor service
* The time at which the alert was sent
* The name of the system
* The id of the system
* The current firmware version
* The severity of the alert

{|
|-
|
[[File:pagerduty5.png|400px]]
|
[[File:pagerduty4.png|400px]]
|}

== Librato Metrics ==

Librato Metrics takes away the headaches of traditional server based monitoring solutions that take time to set up, require investments in hardware, and take effort to maintain.  Metrics is delivered as a service, so you don't have to worry about storage, reliability, redundancy, or scalability.

=== Setup for Librato Metrics ===

{|
|-
|
[[File:MetricsAccount.png|700px]]
|
To post data to Librato Metrics you first must have a Librato Metrics account, which can be created through their website at https://metrics.librato.com.  Next, go to your account settings page.  This is where you will find your username (the email used to create the account) and your API token, which will be used to post data.  On this screen you can also do other things such as change your password or generate a new API token.
|-
|
[[File:ApiTokenSettings.png|700px]]
|
When you create the API token, make sure that it is set to "Full Access".  This allows QuantaStor to create the different Instruments and Dashboards.
|-
|
[[File:MetricsSettings.png|700px]]
|
The next step is to configure QuantaStor to post data to Librato Metrics using that API token.  Right-click on the storage system you wish to post data from and select the Librato Metrics settings.  In the dialog that appears, set your username to the email you use to log into Librato Metrics and paste the token from the Librato Metrics site into the token field.  The post interval allows you to change how often QuantaStor sends data to Librato Metrics; the default value is 60 seconds.  Click "OK" and QuantaStor should begin posting data.
|}

=== Viewing the Metrics ===

To view the data, first sign into your Librato Metrics account.  After signing in, click on the "Metrics" tab along the top.  This brings you to a list of all the metrics that have been posted to your account.  QuantaStor uses a naming convention of:
"<storage system/grid name> - <gauge name>"

QuantaStor creates the following gauges:

{|
|- valign="top"
|
Metrics
|- valign="top"
|
* CPU Load Average
* Storage Pool Free Space
* Storage Pool Reads Per Sec
* Storage Pool Read kB Per Sec
* Storage Pool Writes Per Sec
* Storage Pool Write kB Per Sec
|- valign="top"
|
Instruments
|- valign="top"
|
* Storage Pool Read:Write
* Storage Pool Read:Write kBps
|}

=== Examples ===

{|
|-
|
The picture on the left shows an example of a gauge Metric.  This graph is the CPU Load Average Metric.  In the top right corner of the graph you can change the window of time that is currently being viewed.

To the right of that is an example of an Instrument.  An Instrument is a combination of different Metrics.  In this Instrument the Storage Pool Read kBps and Write kBps have been combined into one graph.
|-
|
[[File:metricsCPU.png|400px]]  [[File:MetricsInstrument.png|400px]]
|}

== Gladinet Enterprise Configuration (Secure Private Dropbox-like Solution) ==
+
Gladinet Enterprise is Dropbox-like software that allows you to store the data on a local SAN/NAS appliance like your QuantaStor SDS appliance.  Gladinet provides remote secure access to folder and files for your users via an "M:" drive (default).  Gladinet works much like Dropbox(tm) and adds advanced features like encryption on-the-wire and at rest, user management, and team folders.  With all the data stored securely in your datacenter on your QuantaStor appliance(s) you can also ensure physical security of the data and deploy appliances for high security deployments to meet government standards like HIPAA compliance.
+
 
+
=== QuantaStor Configuration ===
+
Setup up QuantaStor to be used with Gladinet was very simple. Here are the steps to getting everything setup:
+
 
+
#Create a user account to be used by Gladinet via the QuantaStor web management interface under the '''Users & Groups''' section. In the example below you can see that we created a user named 'gladinet' in QuantaStor and then we use that for configuring Gladinet.
+
#Create a ''network share'' in the ''storage pool'' you would like to use.
+
##When you create the share, be sure to check the '''Enable CIFS/SMB Access''' option.
+
##In the '''CIFS/SMB Advanced Settings''' section click on the 'None' setting for the ''gladinet'' or other user account that you created and set it as a '''Valid User'''.
+
#Finally, verify that you can access the ''network share'' using this user account from your Windows host before configuring it in Gladinet (a sample 'net use' command is shown below).  After you have verified connectivity you can disconnect from the share in Windows.
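For example, from a Windows command prompt you can map the share temporarily with the built-in 'net use' command; the drive letter and the placeholders in angle brackets are illustrative, and the '*' causes Windows to prompt for the password of the 'gladinet' user:

<pre>
net use Z: \\<appliance-ip-address>\<share-name> * /user:gladinet
</pre>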
+
 
+
=== Navigation to Attach Local Storage ===
+
 
+
This part of the guide covers only the storage configuration, so a few of the earlier steps of a from-scratch Gladinet Enterprise installation are skipped.  When you see the storage configuration screen, follow these steps to complete the configuration of Gladinet for use with your QuantaStor appliance.  If you've already installed Gladinet you'll need to navigate to the '''Attach Local Storage''' section. Note that the title '''Attach Local Storage''' shown in Gladinet is something of a misnomer, as this section covers connecting to NAS storage as well as local filesystems.
+
+
#Navigate to the Management Console
+
#Select Collaboration from the left hand menu
+
#Select the Storage Manager tab from the top menu
+
#Click on Attach Local Storage link in the upper right
+
 
+
[[File:attachLocalStorage.png|1024px]]
+
 
+
=== Creating Storage Attachment ===
+
Once the '''Attach Local Storage''' dialog appears, follow these steps to connect to your QuantaStor ''network share''.
+
#The '''Root Folder Name''' is an arbitrary friendly name by which the storage share will be referred to in Gladinet.
+
#For the '''Local Storage Location''' provide the full SMB path to the CIFS share in the form ''\\hostname\sharename'' (an IP address also works). In this example the '''network share''' was named '''test''' when it was created in the QuantaStor appliance, so the path is the IP address of the QuantaStor appliance followed by the share name (see the example summary after these steps).
+
#For the '''Username''' enter the QuantaStor user that was given access to the share; in this example a user named '''gladinet''' was created in the QuantaStor appliance. Also make sure to put a '\' in front of the username so the local Windows server's domain is not used as part of it.
+
#The password is the password to the QuantaStor user account; in this example it is the password to the '''gladinet''' user.
+
#Make sure to select both check-boxes and then click '''Create'''.  QuantaStor is Linux based, so Gladinet needs this information to properly interface with Samba-based shares.  Also, all access to the QuantaStor appliance should flow through the '''gladinet''' user account specified here.
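Putting the example values from these steps together, the dialog fields might look like the following sketch (the IP address and root folder name are hypothetical examples):

<pre>
Root Folder Name:       QuantaStorShare          (any friendly name)
Local Storage Location: \\192.168.0.10\test      (appliance IP or hostname + share name)
Username:               \gladinet                (leading '\' avoids the local Windows domain)
Password:               <password for the 'gladinet' QuantaStor user>
</pre>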
+
 
+
[[File:attachDialog.png]]
+
 
+
== Nagios Integration / Support ==
+
 
+
[http://askubuntu.com/questions/145518/how-do-i-install-nagios This article] has some good detail on setting up Nagios, but the installation requires running just a couple of commands:
+
 
+
<pre>
+
sudo apt-get update
+
sudo apt-get install -y nagios3
+
</pre>
+
 
+
When installing Nagios for use with QuantaStor, note that you must change the default apache port to something other than port 80, which conflicts with the QuantaStor web management service. For more information on changing the apache port numbers, please see this [http://www.cyberciti.biz/faq/linux-apache2-change-default-port-ipbinding/ article] which has more detail. To change the port numbers, edit '/etc/apache2/ports.conf' and change the default port 80 to something like 8001 and 443 to something like 4431.  Finally, restart apache with 'service apache2 restart'.
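For example, on a default Ubuntu apache2 installation the relevant lines in /etc/apache2/ports.conf would end up looking roughly like this after the change (exact file contents vary by apache version, and depending on your configuration you may also need to update the port in any VirtualHost entries under /etc/apache2/sites-enabled/ as described in the linked article):

<pre>
# /etc/apache2/ports.conf -- port 80 changed to 8001, 443 changed to 4431
NameVirtualHost *:8001
Listen 8001

<IfModule mod_ssl.c>
    Listen 4431
</IfModule>
</pre>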
+
 
+
After the port number has been changed you can then access Nagios via your web browser at the new port number like so:
+
 
+
<pre>
+
http://your-appliance-ip-address:8001/nagios3/
+
</pre>
+
 
+
== Zabbix Integration / Support ==
+
 
+
To enable the Zabbix agent directly within your QuantaStor appliance you'll need to install the agent as per the Zabbix documentation on how to install into Ubuntu Server 12.04 (Precise) which can be found [https://www.zabbix.com/documentation/2.0/manual/installation/install_from_packages here].
+
 
+
Here is a quick summary of the commands to run as detailed on the [https://www.zabbix.com/documentation/2.2/manual/installation/install#installing_zabbix_daemons Zabbix web site]:
+
<pre>
+
sudo -i
+
wget http://repo.zabbix.com/zabbix/2.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_2.0-1precise_all.deb
+
dpkg -i zabbix-release_2.0-1precise_all.deb
+
apt-get update
+
apt-get install zabbix-server-mysql zabbix-frontend-php
+
</pre>
+
 
+
Note that Zabbix uses the apache2 web server for its web management interface.  Apache uses port 80 by default which conflicts with the Tomcat service QuantaStor uses for its web management interface.  As such, you must edit the /etc/apache2/ports.conf file to change the default port numbers.  For example you can change 80 to 8001 and 443 to 4431, then restart the apache service with 'service apache2 restart'. 
+
This will eliminate the port conflict with the QuantaStor manager web interface.  For more information on changing the apache port numbers, please see this [http://www.cyberciti.biz/faq/linux-apache2-change-default-port-ipbinding/ article] which has more detail.
+
 
+
After the port number has been changed you can then access Zabbix via your web browser at the new port number like so:
+
 
+
<pre>
+
http://your-appliance-ip-address:8001/zabbix/
+
</pre>
+
 
+
== Security Configuration ==
+
 
+
=== Change Your Passwords ===
+
 
+
One of the most important steps in the configuration of a new QuantaStor appliance is to change the admin passwords for the appliance to something other than the defaults.  Start by logging into the console using the 'qadmin' account and 'qadmin' password, then type 'passwd' and change the password from 'qadmin' to something else.  Next, log in to the web management interface and change the 'admin' account password from 'password' to something else.
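For example, assuming SSH access to the appliance is enabled, the console account password can be changed like so (you will be prompted for the current and new passwords):

<pre>
ssh qadmin@<appliance-ip-address>
passwd
</pre>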
+
 
+
=== Port Lock-down via IP Tables configuration ===
+
 
+
QuantaStor comes with non-encrypted port 80 / http access to the appliance enabled.  For more secure installations it is recommended that port 80 and non-essential services be blocked.  To disable port 80 access run this command:
+
<pre>
+
sudo qs-util disablehttp
+
</pre>
+
To re-enable port 80 access use:
+
<pre>
+
sudo qs-util enablehttp
+
</pre>
+
Note that the web management interface will still be accessible via https on port 443 after you disable http access.
+
 
+
=== Changing the SSL Key for QuantaStor Web Management Interface ===
+
 
+
The SSL key provided with QuantaStor is a common self-signed SSL key that is pre-generated and included with all deployments. This is generally OK for most deployments on private networks but for increased security it is recommended to generate a new SSL keystore for the Apache Tomcat server used to serve the QuantaStor web management interface. 
+
 
+
==== Keystore Password Selection ====
+
'''IMPORTANT NOTE''' You must set the password for the keystore to 'changeit' (without the quotes) as this is the default password that Tomcat uses to unlock the keystore.  If you do not want to use the default password ('changeit') you can select a password of your choice but you will also need to manually edit the connector section of the /opt/osnexus/quantastor/tomcat/conf/server.xml file to add a line containing the keystore password (example: keystorePass="YOURPASSWORD").  Here's an example of what that will look like if you select the password "YOURPASSWORD".
+
 
+
<pre>
+
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
+
              maxThreads="150" scheme="https" secure="true"
+
              keystoreFile="/opt/osnexus/quantastor/tomcat/conf/keystore"
+
              keystorePass="YOURPASSWORD"
+
              clientAuth="false" sslProtocol="TLS" />
+
</pre>
+
 
+
==== New Keystore Generation ====
+
 
+
To generate a new keystore you'll need to do the following steps.
+
 
+
* Login to QuantaStor via the console or via SSH, then generate a keystore using the keytool utility. It will prompt you to enter a bunch of data including name, company, location, etc., and will produce a new .keystore file in your home directory (the commands below assume you run them from that directory).  Remember to use the default Tomcat 'changeit' password for the keystore unless you plan to edit the /opt/osnexus/quantastor/tomcat/conf/server.xml file to add your custom keystore password.
+
<pre>
+
keytool -genkey -alias tomcat -keyalg RSA -validity 365
+
</pre>
+
* Next, backup the original keystore file and then overwrite the original with your newly generated keystore file:
+
<pre>
+
cp /opt/osnexus/quantastor/tomcat/conf/keystore ./keystore.qs.conf
+
cp .keystore /opt/osnexus/quantastor/tomcat/conf/keystore
+
mv .keystore keystore.custom
+
</pre>
+
* Finally, restart tomcat services so that the new key is loaded.
+
<pre>
+
service tomcat restart
+
</pre>
+
 
+
'''IMPORTANT NOTE''' If you are using Firefox as your browser, you must clear the browser history in order to clear the old cached key information.  If you don't clear the history you'll see that the "Confirm Security Exception" button will be greyed out and you won't be able to login to your QuantaStor appliance via https. IE and Chrome do not have this issue.
+
 
+
That's the whole process.  Here's an example of what we enter into these fields as OSNEXUS Engineering; you'll want to put your own company name and other details here:
+
 
+
<pre>
+
keytool -genkey -alias qs-tomcat -keyalg RSA -validity 365
+
 
+
Enter keystore password:
+
Re-enter new password:
+
What is your first and last name?
+
  [Unknown]:  OSNEXUS
+
What is the name of your organizational unit?
+
  [Unknown]:  OSNEXUS Engineering
+
What is the name of your organization?
+
  [Unknown]:  OSNEXUS, Inc.
+
What is the name of your City or Locality?
+
  [Unknown]:  Bellevue
+
What is the name of your State or Province?
+
  [Unknown]:  Washington
+
What is the two-letter country code for this unit?
+
  [Unknown]:  US
+
Is CN=OSNEXUS, OU=OSNEXUS Engineering, O="OSNEXUS, Inc.", L=Bellevue, ST=Washington, C=US correct?
+
  [no]:  yes
+
</pre>
+
 
+
== Internal SAS Device Multi-path Configuration ==
+
If your appliance is dual-path connected to a SAS JBOD or has an internal SAS expander with SAS disks you have the option of setting up multiple paths to the SAS devices for redundancy and in some cases improved performance.  If you are not familiar with ZFS or using the Linux shell utilities we strongly recommend getting help with these steps from customer support.
+
=== Multi-path Configuration with RAID Controllers ===
+
If you are using a RAID controller it will internally manage the multiple paths to the device automatically so there is no additional configuration required except to make sure that you have two cables connecting the controller to the SAS expander.
+
=== Multi-path Configuration with HBAs ===
+
For appliances with SAS HBAs there are some additional steps required to set up the QuantaStor appliance for multipath access.  Specifically, you must add entries to the /etc/multipath.conf file and then restart the multipath services.
+
==== Configuring the /etc/multipath.conf File ====
+
QuantaStor, being Linux based, uses the DMMP (Device Mapper Multi-Path) driver to manage multipathing.  The multipath service can be restarted at any time at the command line using the command 'service multipath-tools restart'.  Configuration of this service is managed via the configuration file located at /etc/multipath.conf, which contains a set of rules indicating which devices (identified by Vendor / Model) should be managed by the multipath service and which should be ignored. The base configuration is set up so that no multipath management is done for SAS devices, as this is the most common and simplest configuration mode.  To enable multipath management you must add a section to the 'blacklist_exceptions' area of the file indicating the vendor and model of your SAS devices.  The vendor/model information for your SAS devices can be found using the command 'grep Vendor /proc/scsi/scsi'.  To summarize:
+
 
+
* grep Vendor /proc/scsi/scsi
+
** Returns the vendor / model information for your SAS devices
+
* nano /etc/multipath.conf
+
** Add a section to the blacklist_exceptions area for your SAS device, example (note the use of a wildcard '*') :
+
 
+
    device {
+
            vendor "SEAGATE"
+
            model "ST33000*"
+
    }
+
 
+
* service multipath-tools restart
+
** Restarts the multipath service
+
* multipath -ll
+
** Shows your devices with multiple paths to them
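Putting the steps above together, the blacklist_exceptions section of your /etc/multipath.conf might look like the following sketch (the vendor and model shown are examples; substitute the values reported by 'grep Vendor /proc/scsi/scsi'):

<pre>
blacklist_exceptions {
    device {
            vendor "SEAGATE"
            model "ST33000*"
    }
}
</pre>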
+
 
+
=== Pool Configuration ===
+
 
+
Once all the above is done you'll need to go into the QuantaStor web management interface and choose 'Scan for Disks' to make the new device mapper paths appear.  If you have already created a storage pool using standard paths rather than the /dev/mapper/mpath* paths then you'll need to run a zpool export/import operation to re-import the pool using the device mapper paths.  To do this you will need to first do a 'Stop Storage Pool' then at the command line / console you'll need to run these commands:
+
* zpool export qs-POOLID
+
* zpool import -d /dev/mapper qs-POOLID
+
Note that you must replace qs-POOLID with the actual ID of the storage pool.  You can also get this ID by running the 'zpool status' command. 
+
 
+
=== Troubleshooting Multipath Configurations ===
+
* Only Single Path
+
** If you only see one path to your device but the multipath driver is recognizing your device by displaying it in the output of 'multipath -ll' then you may have a cabling problem that is only providing the appliance with a single path to the device. 
+
* No Paths Appear
+
** If you don't see any devices in the output of 'multipath -ll' then there's probably something wrong with the device entry you added to the multipath.conf file into the blacklist_exceptions for your particular vendor/model of SAS device.  Double check the output from 'cat /proc/scsi/scsi' to make sure that you have a correct rule added to the multipath.conf file.
+
 
+
== Samba v4 / SMB3 Support ==
+
 
+
QuantaStor versions 3.8.2 and newer have support for Samba v4 but an additional configuration step is required to upgrade your system from the default Samba server (Samba v3.6.3) to Samba v4.  The command you need to run as root at the console/SSH is:
+
 
+
<pre>
+
sudo samba4-install
+
</pre>
+
 
+
It will ask you a few questions about your Active Directory configuration.  Your answers might look similar to these (note that you must use the default 'dc' mode; we do not yet support the other modes). Note also that you must provide a strong password at the 'Administrator password' prompt or the script will fail and you'll need to retry using the procedure outlined below.
+
 
+
<pre>
+
Realm [UNASSIGNED-DOMAIN]: osnexus.net
+
Domain [osnexus]:
+
Server Role (dc, member, standalone) [dc]:
+
DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]:
+
DNS forwarder IP address (write 'none' to disable forwarding) [192.168.0.1]: none
+
Administrator password:
+
Retype password:
+
</pre>
+
 
+
 
+
If you make a mistake and need to change the AD configuration settings, just re-run the installer and it will prompt you for them again.  In some cases you will have to uninstall samba4, clean up the remnants of the failed install, and then try again like so:
+
 
+
<pre>
+
sudo -i
+
apt-get remove samba4
+
rm -rf /opt/samba4
+
samba4-install
+
</pre>
+
 
+
As of 12/19/2013 we only support the default 'dc' mode and have not yet completed testing of the other modes, namely 'standalone' and 'member'.  After the installation completes you can run these commands to verify that the samba4 services are running:
+
 
+
<pre>
+
service samba4 status
+
smbstatus -V
+
</pre>
+
 
+
Starting in QuantaStor v3.9 the samba4-install script turns off the enforcement of strong passwords, but you can manually adjust this to meet your company's security requirements by running the command below.  For strong passwords you'd want a minimum password length of 10 with the complexity requirement turned 'on' rather than 'off'.  Note also that any existing 'local' user accounts will need to have their passwords re-applied when you upgrade to Samba4; this does not apply to AD accounts.  If you have strong passwords enabled and a given user still has a weak password left over from a prior configuration, the login will be blocked when they attempt to access the share from their Windows host.
+
 
+
<pre>
+
samba-tool domain passwordsettings set --min-pwd-length=1 --complexity=off
+
</pre>
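For example, to enforce the stronger settings described above you could run the same command with stricter values:

<pre>
samba-tool domain passwordsettings set --min-pwd-length=10 --complexity=on
</pre>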
+
 
+
If you have any questions please feel free to contact us at support (at) osnexus.com or via the Community Support Forum.
+
 
+
== Custom Scripting / Application Extensions ==
+
 
+
QuantaStor has script call-outs which you can use to extend the functionality of the appliance for integration with custom applications.  For example, you may have an application which needs to be notified before or after a storage pool starts or stops, or you may need to call a script before an automated snapshot policy starts in order to quiesce applications. 
+
 
+
==== Security Issues ====
+
 
+
Scripts are called from the root user account, so you must be careful not to allow anyone but the root user write access to create files under /var/opt/osnexus/custom. By default the scripts directory has permissions '755'.  Your scripts should be given file permissions using the command 'chmod 755 scriptname.sh' to prevent non-root user accounts from modifying them. Additionally, if you have sensitive information like a plain-text password in your custom script, be sure to set the permissions to 700 rather than 755 so only the root user account can read the script.
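For example, to lock down a custom script that contains a plain-text password (using the pool-poststart.sh call-out described later in this section):

<pre>
chmod 700 /var/opt/osnexus/custom/pool-poststart.sh
</pre>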
+
 
+
==== Timeouts ====
+
 
+
Scripts must complete within 120 seconds; scripts taking longer are automatically terminated.
+
 
+
=== Where to install custom scripts ===
+
 
+
Custom script call-outs are hard-wired to specific file names and must be placed in the custom scripts
+
directory '/var/opt/osnexus/custom' within your QuantaStor appliance.  If you have a grid of
+
appliances you'll need to install your script onto all of the appliances.
+
 
+
Custom Scripts Directory:
+
<pre>
+
/var/opt/osnexus/custom
+
</pre>
+
 
+
=== Storage System Custom Scripts ===
+
 
+
Scripts related to the startup / shutdown of the appliance.
+
 
+
==== system-poststart.sh ====
+
 
+
The system poststart script is only called one time when the system boots up.  If the management services are
+
restarted it will check against the timestamp in /var/opt/osnexus/quantastor/qs_lastreboot and only call the
+
system-poststart.sh script if it has changed.  If you want your poststart script to run every time the
+
management service is restarted you can just delete the qs_lastreboot file in your script.
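Here is a minimal system-poststart.sh sketch, assuming you want the script to run again on every management service restart as described above (the logger message is just an example):

<pre>
#!/bin/bash
# Minimal system-poststart.sh sketch: record the call in syslog and remove
# the timestamp file so this script runs again on every management service
# restart (per the note above).
logger "custom system-poststart.sh executed"
rm -f /var/opt/osnexus/quantastor/qs_lastreboot
</pre>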
+
 
+
==== system-prestop.sh ====
+
 
+
Called when the user initiates a shutdown or a restart via the web management interface (or CLI).  Note that
+
if the admin bypasses the normal shutdown procedure and restarts the appliance at the console
+
using 'reboot' or 'shutdown -P now' or similar command your script won't get called.
+
 
+
=== Storage Pool Custom Scripts ===
+
 
+
If you have custom applications running within the appliance which need to attach/detach from
+
the pool or specific directories within a given storage pool these scripts may be helpful to you.
+
 
+
==== pool-poststart.sh ====
+
 
+
Called just after a storage pool is started.  The UUID of the pool is provided as an input argument to the
+
script as '--pool=<POOLUUID>'.  You can use 'qs pool-get <POOLUUID> --server=localhost,admin,password --xml' to get more detail about the storage pool from
+
within your script.  The --xml flag is optional, and you'll need to provide the correct admin password.
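Here is a minimal pool-poststart.sh sketch based on the above; the log file path is just an example, and you'll need to substitute your actual admin password for 'password':

<pre>
#!/bin/bash
# Example pool-poststart.sh sketch: extract the pool UUID from the --pool=
# argument QuantaStor passes in, then query pool details with the qs CLI.
for arg in "$@"; do
    case "$arg" in
        --pool=*) POOLUUID="${arg#--pool=}" ;;
    esac
done
qs pool-get "$POOLUUID" --server=localhost,admin,password --xml >> /var/log/qs-pool-poststart.log 2>&1
</pre>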
+
 
+
==== pool-prestop.sh ====
+
 
+
Called just before the pool is stopped.
+
 
+
=== Snapshot & Replication Schedule Custom Scripts ===
+
 
+
==== schedule-prestart.sh ====
+
 
+
Called just before a snapshot or replication schedule is triggered / executed.  This script is helpful for calling over to applications like databases to tell them to flush writes in preparation for a snapshot image being taken.  Individual snapshots are atomic, but snapshots taken of multiple volumes or network shares are not atomic as a group.  That's where this script can help guide an application spanning multiple Storage Volumes (LUNs) to flush and briefly quiesce IO to give you atomicity across volume snapshots for your application.
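Here is a heavily simplified schedule-prestart.sh sketch; the flush command is a placeholder since the quiesce mechanism is entirely application specific:

<pre>
#!/bin/bash
# Simplified schedule-prestart.sh sketch: ask the application to flush its
# writes before the snapshot/replication schedule runs.  The flush command
# below is a placeholder -- substitute whatever quiesce mechanism your
# application provides.
logger "custom schedule-prestart.sh: flushing application writes before snapshots"
# ssh db-server 'application-flush-command'    # placeholder / assumption
</pre>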
+
