[[Category:admin_guide]]
The QuantaStor Administrator Guide is intended for all IT administrators working to set up or maintain a QuantaStor system or grid of systems, as well as for those looking to get a deeper understanding of how the QuantaStor software-defined storage platform works.

== Administrator Guide Topic Links ==
* [[Storage System]]
* [[Grid Configuration]]
* [[License Management]]

=== Hardware Configuration ===
* [[Network Port Configuration]]
* [[Physical Disk/Device Management]]
* [[Hardware Controller & Enclosure Management]]
* [[Multipath Configuration]]

=== Storage Provisioning ===
* [[Storage Pool Management]]
* [[Storage Volume Management]]
* [[Network Share Management]]
* [[Cloud Containers/NAS Gateway]]

=== Security, Alerting & Upgrades ===
* [[Call-home/Alert Management]]
* [[Security Configuration]]
* [[Upgrade Manager]]

=== Snapshots & Replication ===
* [[Snapshot Schedules]]
* [[Backup Policies]]
* [[Remote-replication (DR)]]

=== Cluster Configuration ===
* [[HA Cluster Setup (JBODs)]]
* [[HA Cluster Setup (external SAN)]]
* [[Scale-out_Block_Setup_(ceph)|Scale-out Block Setup (ceph)]]
* [[Scale-out Object Setup (ceph)|Scale-out Object Setup (ceph)]]
* [[Scale-out File Setup (glusterfs)|Scale-out File Setup (glusterfs)]]

=== Optimization ===
* [[Performance Tuning]]

== Navigating the QuantaStor Web Management User Interface ==

=== Tab Sections ===
When you initially connect to QuantaStor Manager you will see the feature management tabs across the top of the screen (shown in the orange box in the diagram below), including tabs named ''Storage Management'', ''Users & Groups'', ''Remote Replication'', etc.  These main tabs divide the user interface into functional sections.  The most common activities are provisioning file storage (NAS) in the Network Shares section and provisioning block storage (SAN) in the Storage Volumes section.  The most common management and configuration tasks are all accessible from the toolbars and pop-up menus (right-click) in the Storage Management tab.
=== Ribbon/Toolbar Section ===
The toolbar (aka ribbon bar, shown in the red box) is just below the feature management tabs and is organized into group sections including ''Storage System'', ''Network Port'', ''Storage System Grid'', etc.  The toolbar section is dynamic, so as you select different tabs or different sections in the tree stack area the toolbar will automatically change to display options relevant to that area.
=== Main Tree Stack ===
The tree stack panel appears on the left side of the screen (shown in the blue box in the screenshot/diagram below) and shows elements of the system in a tree hierarchy. All elements in the tree have menus associated with them that are accessible by right-clicking on the element in the tree.
=== Center Dashboards & Tables/Grids ===
The center of the screen typically shows lists of elements based on the tree stack section that is currently selected.  This area also often has a dashboard showing information about the selected item, be it a storage pool, storage system, or other element of the system.
[[File:qs_initial_scrn.png|1000px|Main Tree View & Ribbon-bar / Toolbar]]
=== 100% Native HTML5/JS with Desktop Application Ease-of-Use ===
As you select different tabs and sections within the web management interface, the items in the toolbar, tree stack panel, and center panel will change to reflect the available options for the selected area.  Although the QuantaStor web UI is all browser-native HTML5 it has many of the ease-of-use features one would find in a desktop application.  Most notably, one can right-click on most items within the web user interface to access context-specific pop-up menus.  This includes right-clicking on tree items, the tree stack headers, and items in the center grid/table views.
== Storage System & Grid Management ==
=== License Management ===
QuantaStor has two different categories of license keys: 'System' licenses and 'Feature' licenses.  A 'System' license specifies all the base features and capacity limits for your storage appliance, and most systems have just a single 'System' license.  'Feature' licenses stack on top of an existing 'System' license and allow you to add features and capacity to an existing 'System'.  In this way you can start small and add more capacity as you need it.
Note also that everything is license key controlled with QuantaStor so you do not need to re-install to go from a Trial Edition license to a Silver/Gold/Platinum license.  Simply add your new license key and it will replace the old one automatically.
[[File:License Manager.png|1200px]]
=== Recovery Manager ===
The 'Recovery Manager' is accessible from the ribbon-bar at the top of the screen when you login to your QuantaStor system and is used to recover the internal appliance database should the system be reinstalled from scratch.  It allows you to recover all of the metadata for all elements of the system from a prior installation, including user accounts, storage assignments, host entries, storage clouds, custom roles and more.  To use the 'Recovery Manager' just select it, then select the database you want to recover and press OK.  If you choose the 'network configuration recovery' option it will also recover the network configuration.  Be careful with IP address changes as it is possible to change settings such that your current connection to QuantaStor is dropped when the IP address changes.  In some cases you will need to use the console to make adjustments if all network ports are inaccessible due to no configuration or mis-configuration.  Newly installed QuantaStor units default to using DHCP on eth0 to facilitate initial access to the system so that static IPs can then be assigned.
[[File:Storage Recovery Manager.png|500px]]
=== Upgrade Manager ===
The Upgrade Manager handles the process of upgrading your system to the next available minor release version.  Note that the Upgrade Manager will not upgrade QuantaStor from a v2 to a v3 version; that requires a re-installation of the QuantaStor OS and then recovery of metadata using the 'Recovery Manager'.  The Upgrade Manager displays the available versions for the four key packages: the core services, web manager, web server, and SCSI target drivers.  You can upgrade any of the packages at any time and it will not block iSCSI or NFS access to your appliance.  With upgrades to the SCSI target driver package you will need to restart your storage system/appliance for the new drivers to become active.

Note also that you should always upgrade the manager and service packages together; upgrading just one or the other may cause problems when you try to login to the QuantaStor web management interface.

On occasion we'll see problems with an upgrade, so we've written a troubleshooting section on how to work out those issues here:

[[QuantaStor_Troubleshooting_Guide#Login_.26_Upgrade_Issues | Troubleshooting Upgrade Issues]]
[[File:Upgrade Manager.png|800px]]
=== System Checklist ===
The 'System Checklist' (aka 'Getting Started') will appear automatically when you login anytime there is no license key applied to the appliance.  Once a license has been added the System Checklist is available by selecting it from the ribbon-bar.  As the name implies, it provides a basic checklist covering elements like modifying network settings to help new admins get acquainted with QuantaStor.
=== System Hostname & DNS management ===
To change the name of your system you can simply right-click on the storage system in the tree stack on the left side of the screen and then choose 'Modify Storage System'.  This will bring up a screen where you can specify your DNS server(s) and change the hostname for your system as well as control other global network settings like the ARP Filtering policy.
[[File:Modify System.png|800px]]
{{Template:GridSetupProceedure}}
== Physical Disk Management ==
=== Identifying physical disks in an enclosure ===
When you right-click on a physical disk you can choose 'Identify' to force the lights on the disk to blink in a pattern, which it accomplishes by reading sector 0 on the drive.  This is very helpful when trying to identify which disk is which within the chassis.  Note that this technique doesn't work for logical drives exposed by your RAID controller(s), so there is a separate 'Identify' option for the hardware disks attached to your RAID controller which you'll find in the 'Hardware Controllers & Enclosures' section.
[[File:Identify Physical Disk.png|800px]]
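If the web UI is unavailable, the same effect can be approximated from the console by generating read activity against the device.  This is a minimal sketch, not a QuantaStor command, and it assumes the disk in question is /dev/sdb (adjust the device name to match your system):

<pre>
# Repeatedly read the first block of /dev/sdb with direct I/O so the drive's
# activity LED blinks; press Ctrl-C once you've located the disk in the chassis.
while true; do sudo dd if=/dev/sdb of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null; sleep 0.2; done
</pre>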
=== Scanning for physical disks ===
When new disks have been added to the system you can discover them using the 'Scan for Disks' command.  To access this command from the QuantaStor Manager web interface simply right-click where it says 'Physical Disks' and then choose 'Scan for Disks...'.  Disks are typically named sdb, sdc, sdd, sde, sdf and so on.  The 'sd' part just indicates SCSI disk and the letter uniquely identifies the disk within the system.  If you've added a new disk or created a new Hardware RAID Unit you'll typically see the new disk show up automatically, but the rescan operation explicitly re-executes the disk discovery process.
[[File:Scan Disk.png|800px]]
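To cross-check what the operating system itself sees after a scan, a quick console-level listing can be helpful.  This is a minimal sketch using standard Linux tooling rather than a QuantaStor-specific command:

<pre>
# List block devices with size, model and serial number to match new sdX entries to physical drives
lsblk -d -o NAME,SIZE,MODEL,SERIAL
</pre>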
=== Formatting Disks ===
Sometimes disks will have partitions or other metadata on them which can prevent their use within QuantaStor for pool creation or other operations.  To clear/format a disk simply right-click on the disk and choose ''Format Disk..''. 
[[File:qs_format_disk.png|800px]]
=== Copying LUNs/Disks from 3rd-Party SANs ===
Please see the [[QuantaStor_Administrators_Guide#Disk_Migration_.2F_LUN_Copy_to_Storage_Volume | section below on Disk Migration]] which outlines how to copy a FC/iSCSI attached block device from a 3rd-party SAN directly to a QuantaStor appliance.
=== Importing disks from an Open-ZFS pool configuration ===
Please see the section below on importing Storage Pools which includes a section on [[QuantaStor_Administrators_Guide#Importing_3rd-party_OpenZFS_Storage_Pools | how to import OpenZFS based storage pools from other systems]].
== Hardware Controller & Enclosure Management ==
QuantaStor has custom integration modules covering all the major HBA and RAID controller card models.  QuantaStor provides integrated management and monitoring of your HBA/RAID attached hardware RAID units, disks, enclosures, and controllers.  It is also integrated with QuantaStor's alerting / call-home system, so when a disk failure occurs within a hardware RAID group an email/PagerDuty alert is sent automatically, no different than if it were a software RAID disk failure or any other alert condition detected by the system.

QuantaStor's hardware integration modules include support for the following HBAs & RAID controllers:
* LSI HBAs (92xx, 93xx, 94xx and all matching OEM derivative models)
* LSI MegaRAID (all models)
* DELL PERC H7xx/H8xx (all models, LSI/Broadcom derivative)
* Intel RAID/SSD RAID (all models, LSI/Broadcom derivative)
* Fujitsu RAID (all models, LSI/Broadcom derivative)
* IBM ServeRAID (all models, LSI/Broadcom derivative)
* Adaptec 5xxx/6xxx/7xxx/8xxx (all models)
* HP SmartArray P4xx/P8xx
* HP HBAs
Note: Special tools are available for some models which are helpful for triage of hardware issues; these are [[HBA/RAID Utilities | outlined here]].
=== Identifying Devices in the Enclosure View ===
QuantaStor presents an enclosure view of devices that helps to identify where disks are physically located within a server or external disk enclosure chassis (JBOD).  By default QuantaStor assumes a 4x column, bottom-to-top, left-to-right ordering of the drive slot numbers which is common to SuperMicro hardware.  Most other manufacturers have other slot layout schemes, so it is important that one select the proper vendor/model of disk enclosure chassis within the ''Modify Enclosure..'' dialog.  Once the proper chassis model has been selected the layout within the web UI will automatically update.  If the vendor/model of external enclosure is not available and there is no suitable alternative, one may add new enclosure layout types via a configuration file; [[Adding_Hardware_Enclosure_Layouts | instructions for that can be found here]].
[[File:qs_hw_enclosure_layout.png|800px]]
==== Selecting the Enclosure Vendor/Model ====
By right-clicking on the enclosure and choosing ''Modify Enclosure..'' one can set the enclosure to a specific vendor/model type so that the layout of the drive slots matches the hardware appropriately. 
[[File:Qs_hw_enclosure_modify.png|800px]]
=== Hardware RAID Unit Management ===
QuantaStor has a powerful hardware RAID unit management system integrated into the platform so that one can create, delete, modify, encrypt, manage hot-spares, and manage SSD caching for hardware RAID units via the QuantaStor Web UI, REST API, and CLI commands.  OSNEXUS recommends the use of hardware RAID with scale-out object storage configurations and the use of HBAs for scale-up storage pool configurations (ZFS).  In all cases, OSNEXUS recommends the use of a hardware RAID controller to manage the boot devices for an appliance.  The following sections cover how to manage controllers and view the enclosure layout of devices for a selected appliance within a QuantaStor storage appliance grid.
==== Creating Hardware RAID Units ====
A logical grouping of drives using hardware RAID controller technology is referred to as a hardware RAID unit within QuantaStor.  Since management of the controller is fully integrated into QuantaStor one can create RAID units directly through the web UI or via the CLI/REST API.  Creation of hardware RAID units is accessible from the ''Hardware Controllers & Enclosures'' section in the main Storage Management tab.  For more detailed information on how to create a hardware RAID unit please see the web management interface page on [[Hardware_Controller_Create_RAID_Unit | hardware RAID unit creation here]].
==== Importing Hardware RAID Units ====
If a group of devices representing a hardware RAID unit has been added to the system they may not appear automatically.  This is because most RAID controllers treat these devices as a ''foreign configuration'' that must be explicitly imported to the system.  If a system is rebooted with the attached JBOD storage enclosures powered-off this can also lead to a previously imported configuration being identified as a ''foreign configuration''.  To resolve these scenarios simply select ''Import Unit(s)'' from the ''Hardware Controllers & Enclosures'' section of the web management interface.  More information on how to [[Hardware_Controller_Import_RAID_Units | import units using the web UI is available here]].
==== Silencing Audible alarms ====
We recommend disabling audible alarms on all appliances since in the most common deployment scenario the appliance is in a data-center where the alarm will not be heard by the owner of the appliance.  If for some reason the audible alarm is turned on, one can disable all alarms for a given controller or appliance via the [[Hardware_Controller_Silence_Alarm | web user interface as outlined here]].
=== Managing Hardware RAID Hot-spares ===
When hardware RAID units are configured for use with storage pools one must indicate hot-spares at the hardware RAID unit level as that is where the automatic repair needs to take place.  QuantaStor provides management of hot-spares via the web management interface, CLI and REST API.  Simply go to the ''Hardware Controllers & Enclosures'' section, then right-click on the disk and choose ''Mark/Unmark Hot-spare..''.  This will allow one to toggle the hot-spare marker on the device.  If a hardware RAID unit is in a degraded state due to one or more failed devices one need only mark one or more devices as hot-spares and the controller will automatically consume the spare and start the rebuild/repair process to restore the affected RAID unit to a fully healthy state.  More information on [[Hardware_Disk_Mark_Hotspare | marking/unmarking disks as hot-spares is available here]].
=== Hardware RAID SSD Read/Write Caching ===
QuantaStor supports the configuration and management of hardware controller SSD caching technology for both Adaptec and LSI/Avago RAID Controllers. QuantaStor automatically detects that these features are enabled on the hardware controllers and presents configuration options within the QuantaStor web management interface to configure caching on a per RAID unit basis.

Hardware SSD caching technologies work by having I/O requests leverage high-performance SSDs as a caching layer, allowing for large performance improvements in small block IO.  SSD caching technology works in tandem with the NVRAM based write-back cache technologies.  NVRAM cache is typically limited to 1GB-4GB of cache whereas the addition of SSD can boost the size of the cache layer to 1TB or more.  The performance boost will depend heavily on the SSD device selected and the application workload.  For SSDs to be selected to boost write performance, high-endurance SSDs with a drive-writes-per-day (DWPD) rating of at least 10 are required, and only enterprise grade / data-center (DC) grade SSDs are certified for use in QuantaStor appliances.  Desktop grade SSDs lead to unstable performance and possibly serious outages as they are not designed for continuous sustained loads; as such OSNEXUS does not support their use.
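As a rough illustration of what the DWPD rating means (example numbers only): a 400GB SSD rated at 10 DWPD is designed to absorb roughly 400GB x 10 = 4TB of writes per day over its rated lifetime, which is the kind of sustained write load a cache device can see, whereas a desktop-class drive rated well below 1 DWPD would wear out many times faster under the same load.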
==== Creating a Hardware SSD Cache Unit ====
You can create a Hardware SSD Cache Unit for your RAID controller by right-clicking on your RAID controller in the Hardware Enclosures and Controllers section of the Web Manager as shown in the below screenshot. If you do not see this option please verify with your Hardware RAID controller manufacturer that the SSD Caching technology offered for your RAID Controller platform is enabled.  If you are unsure how to confirm this functionality, please contact OSNEXUS support for assistance.
[[File:Create_SSD_Cache_Unit_Menu.png|300px]]
You will now be presented with the Create SSD Cache Unit Dialog:
[[File:Create_SSD_Cache_Unit.png|300px]]
Please select the SSDs you would like to use for creating your RAID unit. Please note that not all SSDs are supported by the RAID Controller manufacturers for their SSD caching technology. If you cannot create your SSD Cache Unit, please refer to your Hardware RAID Controller manufacturer's Hardware Compatibility/Interoperability list.

The SSD Cache Technology from Adaptec and LSI/Avago for their RAID controllers can be configured in one of two ways:
* RAID0 - SSD READ Cache only
* RAID1/10 - Combined SSD READ and WRITE Cache.
Please choose the option you would like and click the 'OK' button to create the SSD Cache Unit. The SSD Cache Unit will now appear alongside the other Hardware RAID units for your Hardware RAID Controller as shown in the below screenshot:
[[File:SSD_Cachecade_Unit_created.png|300px]]
==== Enabling the Hardware SSD Cache Unit for your Virtual Drive(s) ====
Now that you have created your Hardware SSD Cache Unit as detailed above, you can enable it for the specific Virtual Drive(s) you would like to have cached.

To enable the SSD Cache Unit for a particular Virtual Drive, locate the Virtual Drive in the Hardware Enclosures and Controllers section of the Web Interface, then right-click and choose the 'Enable SSD Caching' option.
[[File:Enable_SSD_Caching_on_RAID_Unit_Menu.png|300px]]
This will open the Enable SSD Caching on RAID Unit dialog where you can confirm your selection and click 'OK'.
[[File:Enable_SSD_Caching_on_RAID_Unit.png|300px]]
The SSD Cache will now be associated with the chosen Virtual Drive, enabling the read or read/write cache function that you specified when creating your SSD Caching RAID Unit.
== Storage Pool Management ==
Storage pools combine or aggregate one or more physical disks (SATA, SAS, or SSD) into a pool of fault tolerant storage.  From the storage pool users can provision both SAN/storage volumes (iSCSI/FC targets) and NAS/network shares (CIFS/NFS).  Storage pools can be provisioned using all major RAID types (RAID0/1/10/5/50/6/60/7/70) and in general OSNEXUS recommends RAID10 for the best combination of fault-tolerance and performance.  That said, the optimal RAID type depends on the applications and workloads that the storage will be used for.  For assistance in selecting the optimal layout we recommend engaging the OSNEXUS SE team to get advice on configuring your system to get the most out of it.
==== Pool RAID Layout Selection / General Guidelines ====
We strongly recommend using RAID10 for all virtualization workloads and databases, and RAID60 for higher-capacity archive applications that produce mostly sequential IO patterns.  RAID10 performs very well with both sequential and random IO patterns but is more expensive since usable capacity before compression is 50% of the raw capacity.  With compression the usable capacity may increase to 75% of the raw capacity or higher.
For archival storage or other similar workloads RAID60 is best and provides higher utilization with only two drives used for parity/fault tolerance per RAID set (ZFS VDEV).  RAID5/50 is not recommended because it is not fault tolerant after a single disk failure.
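As a quick worked example of the capacity trade-off described above (illustrative numbers only): 24 x 8TB drives provide 192TB raw, which yields roughly 96TB usable in RAID10 (50% of raw) before compression, versus roughly 160TB usable in a RAID60 layout built from two 12-drive RAID6 sets (10 data + 2 parity each), also before compression.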
[[File:qs4_pool_graph.png|800px]]
===Creating Storage Pools===
One of the first steps in configuring an appliance is to create a storage pool.  The storage pool is an aggregation of one or more devices into a fault-tolerant "pool" of storage that will continue operating without interruption in the event of disk failures.  The amount of fault-tolerance is dependent on multiple factors including hardware type, RAID layout selection and pool configuration.  QuantaStor's SAN storage (storage volumes) as well as NAS storage (network shares) are both provisioned from ''Storage Pools''.  A given pool of storage can be used for SAN and NAS storage at the same time; additionally, clients can access the storage using multiple protocols at the same time (iSCSI & FC for storage volumes and NFS & SMB/CIFS for network shares).  Creating a storage pool is very straightforward: simply name the pool (DefaultPool is fine too), select disks and a layout for the pool, and then press OK.  If you're not familiar with RAID levels then choose the RAID10 layout and select an even number of disks (2, 4, 8, 20, etc).  There are a number of advanced options available during pool creation but the most important one to consider is encryption as this cannot be turned on after the pool is created.
[[File:qs_pool_create.png]]
====Intelligent Disk Selection====
For systems employing multiple JBOD disk enclosures QuantaStor automatically detects which enclosure each disk is sourced from and selects disks in a spanning "round-robin" fashion during the pool provisioning process to ensure fault-tolerance at the JBOD level.  This means that should a JBOD be powered off the pool will continue to operate in a degraded state.  If the detected number of enclosures is insufficient for JBOD fault-tolerance the disk selection algorithm switches to a sequential selection mode which groups vdevs by enclosure.  For example a pool with RAID10 layout provisioned from disks from two or more JBOD units will use the "round-robin" technique so that mirror pairs span the JBODs.  In contrast a storage pool with the RAID50 layout (4d+1p) with two JBOD/disk enclosures will use the sequential selection mode.
=====Enabling Encryption (One-click Encryption)=====
Encryption must be turned on at the time the pool is created.  If you need encryption and you currently have a storage pool that is un-encrypted you'll need to copy/migrate the storage to an encrypted storage pool as the data cannot be encrypted in-place after the fact.  Other options like compression, cache sync policy, and the pool IO tuning policy can be changed at any time via the ''Modify Storage Pool'' dialog.  To enable encryption select the Advanced Settings tab within the Create Storage Pool dialog and check the ''[x] Enable Encryption'' checkbox.
[[File:qs_pool_create_adv.png]]
=====Passphrase protecting Pool Encryption Keys=====
For QuantaStor used in portable appliances or in high-security environments we recommend assigning a passphrase to the encrypted pool which will be used to encrypt the pool's keys.  The drawback of assigning a passphrase is that it will prevent the appliance from automatically starting the storage pool after a reboot.  Passphrase protected pools require that an administrator login to the appliance and choose 'Start Storage Pool..' where the passphrase may be manually entered and the pool subsequently started.  If you need the encrypted pool to start automatically or to be used in an HA configuration, do not assign a passphrase.  If you have set a passphrase by mistake or would like to change it, you can change it at any time using the ''Change/Clear Pool Passphrase...'' dialog.  Note that passphrases must be at least 10 characters in length and no more than 80 and may comprise alpha-numeric characters as well as these basic symbols: '''.-:_'''  (period, hyphen, colon, underscore).
====Enabling Compression====
Pool compression is enabled by default as it typically increases usable capacity and boosts read/write performance at the same time.  This boost in performance is due to the fact that modern CPUs can compress data much faster than the media can read/write data.  The reduction in the amount of data to be read/written due to compression subsequently boosts performance.  For workloads which are working with compressed data (common in M&E) we recommend turning compression off.  The default compression mode is LZ4 but this can be changed at any time via the ''Modify Storage Pool'' dialog.
====Pool Type====
In almost all cases one should select ZFS as the storage pool type.  XFS should only be used for scale-out configurations where Ceph or Gluster scale-out object/NAS technology will be layered on top.  XFS can be used to directly provision basic Storage Volumes and Network Shares but it is a very limited Storage Pool type lacking most of the advanced features supported by the ZFS pool type (snapshots, HA, DR/remote replication, etc).
=== Storage Pool SSD Caching ===
ZFS based storage pools support the addition of SSD devices for use as read or write cache.  SSD cache devices must be dedicated to a specific storage pool and cannot be shared across multiple storage pools.  Some hardware RAID controllers support SSD caching but in our testing we've found that ZFS is more effective at managing the layers of cache than the RAID controllers, so we do not recommend using SSD caching at the hardware RAID controller level unless you're creating an older style XFS storage pool which does not have native SSD caching features.
==== Configuring a ZFS ZIL SLOG SSD (ZFS log/journal) ====
The ZFS filesystem can use a log device (SLOG/ZIL) where filesystem metadata and sync-based I/O writes are mirrored from system memory to protect against system component or power failure.  Writes are not held for long in the ZIL SSD SLOG, so the device does not need to be large; it typically holds no more than 16GB before forcing a flush to the backend disks.  Because it is storing metadata and sync-based I/O writes that have not yet been persisted to the storage pool, the log SSD must be mirrored so that redundancy of the ZIL SLOG is maintained in the event an SSD drive fails.  As writes are occurring constantly on the ZIL SLOG device, we recommend choosing SSDs that have a high endurance rating of 3+ Drive Writes Per Day (DWPD).
==== SSD Read Cache Configuration (L2ARC) ====
You can add up to 4x devices for SSD read-cache (L2ARC) to any ZFS based storage pool and these devices do not need to be fault tolerant.  You can add up to 4x devices directly to the storage pool by selecting 'Add Cache Devices..' after right-clicking on any storage pool.  You can also opt to create a RAID0 logical device using the RAID controller out of multiple SSD devices and then add this device to the storage pool as SSD cache.  The size of the SSD Cache should be roughly the size of the working set for your application, database, or VMs.  For most applications a pair of 400GB SSD drives will be sufficient but for larger configurations you may want to use upwards of 2TB or more of SSD read cache.  Note that the SSD read-cache doesn't provide an immediate performance boost because it takes time for it to learn which blocks of data should be cached to provide better read performance.
[[File:Add Cache.png|800px]]
==== RAM Read Cache Configuration (ARC) ====
ZFS based storage pools use an in-memory read cache called the "ARC" rather than the Linux filesystem buffer cache to boost disk read performance.  Having a good amount of RAM in your system is critical to deliver solid performance.  It is very common with disk systems for blocks to be read multiple times; when they are cached in RAM it reduces the load on the disks and greatly boosts performance.  As such it is recommended to have 32-64GB of RAM for small systems and 96-128GB of RAM for medium sized systems; for large appliances you'll want to have upwards of 256GB or more of RAM.  To see the stats on cache hits for both the read and write cache layers, use the command line and run 'sudo qs-iostat -af', which will print an updated status report on cache utilization every couple of seconds.
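In addition to the 'qs-iostat' report mentioned above, the raw ARC counters are exposed by ZFS-on-Linux itself.  This is a minimal console-level sketch using the standard OpenZFS kstat interface rather than a QuantaStor-specific tool:

<pre>
# Show the current ARC size, its maximum target size and the overall hit/miss counters
awk '$1 ~ /^(size|c_max|hits|misses)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
</pre>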
=== Pool RAID Levels ===
QuantaStor supports all industry standard RAID levels (RAID1/10/5/50/6/60), some additional advanced RAID levels (RAID1 triple copy, RAID7/70 triple parity), and simple striping with RAID0.  Over time all disk media degrades and as such we recommend marking at least one device as a hot-spare disk so that the system can automatically heal itself when a bad device needs replacing.  One can assign hot-spare disks as universal spares for use with any storage pools as well as pinning of hot-spares to specific storage pools.  Finally, RAID0 is not fault-tolerant at all but it is your only choice if you have only one disk and it can be useful in some scenarios where fault-tolerance is not required.  Here's a breakdown of the various RAID types and their pros & cons.
* '''RAID0''' layout is also called 'striping' and it writes data across all the disk drives in the storage pool in a round robin fashion.  This has the effect of greatly boosting performance.  The drawback of RAID0 is that it is not fault tolerant, meaning that if a single disk in the storage pool fails then all of your data in the storage pool is lost.  As such RAID0 is not recommended except in special cases where the potential for data loss is a non-issue.
* '''RAID1''' is also called 'mirroring' because it achieves fault tolerance by writing the same data to two disk drives so that you always have two copies of the data.  If one drive fails, the other has a complete copy and the storage pool continues to run.  RAID1 and its variant RAID10 are ideal for databases and other applications which do a lot of small write I/O operations.
* '''RAID5''' achieves fault tolerance via what's called a parity calculation where one of the drives contains an XOR calculation of the bits on the other drives.  For example, if you have 4 disk drives and you create a RAID5 storage pool, 3 of the disks will store data, and the last disk will contain parity information.  This parity information on the 4th drive can be used to recover from any data disk failure.  In the event that the parity drive fails, it can be replaced and reconstructed using the data disks.  RAID5 (and RAID6) are especially well suited for audio/video streaming, archival, and other applications which do heavy sequential write I/O operations (such as reading/writing large files) and are not as well suited for database applications which do heavy amounts of small random write I/O operations or for large file-systems containing lots of small files with a heavy write load.
* '''RAID6''' improves upon RAID5 in that it can handle two drive failures but it requires that you have two disk drives dedicated to parity information.  For example, if you have a RAID6 storage pool comprised of 5 disks then 3 disks will contain data, and 2 disks will contain parity information.  In this example, if the disks are all 1TB disks then you will have 3TB of usable disk space for the creation of volumes.  So there's some sacrifice of usable storage space to gain the additional fault tolerance.  If you have the disks, we always recommend using RAID6 over RAID5.  This is because all hard drives eventually fail and when one fails in a RAID5 storage pool your data is left vulnerable until a spare disk is utilized to recover your storage pool back to a fault tolerant status.  With RAID6 your storage pool is still fault tolerant after the first drive failure. (Note: Fault-tolerant storage pools (RAID1,5,6,10) that have suffered a single disk drive failure are called '''degraded''' because they're still operational but they require a spare disk to recover back to a fully fault-tolerant status.)
* '''RAID10''' is similar to RAID1 in that it utilizes mirroring, but RAID10 also does striping over the mirrors.  This gives you the fault tolerance of RAID1 combined with the striping performance of RAID0.  The drawback is that half the disks are used for fault-tolerance, so if you have 8 1TB disks utilized to make a RAID10 storage pool, you will have 4TB of usable space for creation of volumes.  RAID10 will perform very well with both small random IO operations as well as sequential operations and it is highly fault tolerant as multiple disks can fail as long as they're not from the same mirror-pairing.  If you have the disks and you have a mission critical application we '''highly''' recommend that you choose the RAID10 layout for your storage pool.
* '''RAID60''' combines the benefits of RAID6 with some of the benefits of RAID10.  It is a good compromise when you need better IOPS performance than RAID6 will deliver and more useable storage than RAID10 delivers (50% of raw).
In some cases it can be useful to create more than one storage pool so that you have low cost fault-tolerant storage available in RAID6 for archive and higher IOPS storage in RAID10 for virtual machines, databases, MS Exchange, or similar workloads.

If you have created an XFS based storage pool with a RAID level it will take some time to 'rebuild'.  Once the 'rebuild' process has reached 1% you will see the storage pool appear in QuantaStor Manager and you can begin to create new storage volumes.
<blockquote>
WARNING:  Although you can begin using the pool at 1% rebuild completion, your XFS storage pool is not fault-tolerant until the rebuild process has completed.
</blockquote>
===Importing Storage Pools===
Storage pools can be physically moved from one appliance to another by moving all of the disks associated with the given pool from the old appliance to the new appliance.  After the devices have been moved over, use the ''Scan for Disks...'' option in the web UI so that the devices are immediately discovered by the appliance.  After that the storage pool may be imported using the ''Import Pool...'' dialog in the web UI or via QS CLI/REST API commands.
====Importing Encrypted Storage Pools====
The keys for the pool (qs-xxxx.key) must be available in the /etc/cryptconf/keys directory in order for the pool to be imported.  Additionally, there is an XML file for each pool, located in the /etc/cryptconf/metadata directory and named (qs-xxxx.metadata) where xxxx is the UUID of the pool, which contains information about which devices comprise the pool and their serial numbers.  We highly recommend making a backup of the /etc/cryptconf folder so that one has a backup copy of the encryption keys for all encrypted storage pools on a given appliance.  With these requirements met, the encrypted pool may be imported via the Web Management interface using the Import Storage Pools dialog, the same one used for non-encrypted storage pools.
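A minimal sketch of making such a backup from the console using standard Linux commands (the destination host and path in the copy step are placeholders; send the archive to a secure off-appliance location of your choosing):

<pre>
# Archive the encryption keys and pool metadata, then copy the archive off the appliance
sudo tar czf /root/cryptconf-backup-$(hostname)-$(date +%Y%m%d).tar.gz /etc/cryptconf
scp /root/cryptconf-backup-*.tar.gz admin@backuphost:/secure/backups/
</pre>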
====Importing 3rd-party OpenZFS Storage Pools====
Importing OpenZFS based storage pools from other servers and platforms (Illumos/FreeBSD/ZoL) is made easy with the Import Storage Pool dialog.  QuantaStor uses globally unique identifiers (UUIDs) to identify storage pools (zpools) and storage volumes (zvols) so after the import is complete one would notice that the system will have made some adjustments as part of the import process.  Use the scan button to scan the available disks to search for pools that are available to be imported.  Select the pools to be imported and press OK to complete the process.
[[File:qs4_pool_import.png|800px]]
For more complex import scenarios there is a CLI level utility that can be used to do the pool import which is documented here [[Console Level OpenZFS Storage Pool Importing]].
===Pool Hot-Spare Policies ===
Modern versions of QuantaStor include additional options for how Hot Spares are automatically chosen at the time a rebuild needs to occur to replace a faulted disk.

These Policies can be chosen on a per Storage Pool basis. The below screenshot shows the Policies.
[[file:Modify Storage Pools.png|800px|ZFS Storage Pool Hot Spare Policies]]
Note: If the policy is set to one that includes 'exact match', the Storage Pool will first attempt to replace the failed data drive with a disk that is of the same model and capacity before trying other options.
== Network Port Configuration ==
Network ports (NICs) (also called Target Ports) are the interfaces through which your appliance is managed and client hosts (initiators) access your storage volumes (aka targets).  The terms 'target' and 'initiator' are SCSI terms that are synonymous with 'server' and 'client' respectively.  QuantaStor supports both statically assigned IP addresses as well as dynamically assigned (DHCP) addresses.  If you selected automatic network configuration when you initially installed QuantaStor then you'll have one port setup with DHCP and the others are likely offline. 
We recommend that you always use static IP addresses with your appliances unless you have your DHCP server setup to specifically assign a static IP address to your NICs as identified by MAC address.  If you don't set the network ports up with static IP addresses you risk the IP address changing and losing access to your storage when the dynamically assigned address expires.
To modify the configuration of a network port first select the tree section named "Storage System" under the "Storage Management" tab on the left hand side of the screen.  After that, select the "Network Ports" tab in the center of the screen to see the list of network ports in each appliance.  To modify the configuration of one of the ports, simply right-click on it and choose "Modify Network Port" from the pop-up menu.  Alternatively you can press the "Modify" button in the tool bar at the top of the screen in the "Network Ports" section. 
Once the "Modify Network Port" dialog appears you can select the port type for the selected port (static), enter the IP address for the port, subnet mask, and gateway for the port.  You can also set the MTU to 9000 for jumbo packet support, but we recommend that you get your network configuration up and running with standard 1500 byte frames as jumbo packet support requires that you custom configure your host side NICs and network switch with 9K frames as well.
[[File:Modify Network Port.png|800px]]
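If you do enable jumbo frames, it is worth verifying that the full path honors the larger MTU before putting the port into production.  This is a minimal sketch using a standard Linux ping test from a client host (replace the example address with your appliance's IP):

<pre>
# Send an 8972-byte payload with the don't-fragment flag set; 8972 bytes of payload plus
# 28 bytes of IP/ICMP headers = 9000, so this only succeeds if every hop supports 9000 MTU.
ping -M do -s 8972 -c 4 10.0.8.10
</pre>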
=== NIC Bonding / Trunking ===
QuantaStor supports NIC bonding, also called trunking, which allows you to combine multiple NICs together to improve performance and reliability.  If you combine two or more ports together into a virtual port you'll need to make sure that all the bonded ports are connected to the same network switch.  There are very few exceptions to this rule.  For example, if you have two networks and 4 ports (p1, p2, p3, p4) you'll want to create two separate virtual ports each bonding two NIC ports (p1, p2 / p3, p4) together and each pair connected to a separate network (p1, p2 -> network A /  p3, p4 -> network B).  This type of configuration is highly recommended as you have both improved bandwidth and no single point of failure in the network or in the storage system.  Of course you'll need your host to have at least 2 NIC ports and they'll each need to connect to the separate networks.  For very simple configurations you can just connect everything to one switch but again, the more redundancy you can work into your SAN the better.

By default, QuantaStor uses Linux bonding mode-0, a round-robin policy. This mode provides load balancing and fault tolerance by transmitting packets in sequential order from the first available interface through the last. QuantaStor also supports LACP 802.3ad Dynamic Link aggregation.  Use the 'Modify Storage System' dialog in the web management interface to change the default bonding mode for your appliance.
* [[Changing Network Bonding Mode | Enable LACP Port Bonding]]
=== 10GbE NIC support ===
QuantaStor works with all the major 10GbE cards from Chelsio, Intel and others.  We recommend the Intel 10GbE cards and you can use NIC bonding in conjunction with 10GbE to further increase bandwidth.  If you are using 10GbE we recommend that you designate your slower 1GbE ports as iSCSI disabled so that they are only used for management traffic.
== Volume & Share Remote-Replication (Disaster Recovery / DR Setup) ==
Volume and Share Remote-replication within QuantaStor allows you to copy a volume or network share from one QuantaStor storage system to another and is a great tool for migrating volumes and network shares between systems and for using a remote system as a DR site.  Remote replication is done asynchronously which means that changes/deltas to volumes and network shares on the source volume or share are replicated up to every hour with calendar based schedules, and up to every 15 minutes with timer based schedules. 
Once a given set of the volumes and/or network shares have been replicated from one system to another the subsequent periodic replication operations send only the changes and all information sent over the network is compressed to minimize network bandwidth and encrypted for security.  ZFS based storage pools use the ZFS send/receive mechanism which efficiently sends just the changes so it works well over limited bandwidth networks.  Also, if your storage pool has compression enabled the changes sent over the network are also compressed which further reduces your WAN network load.
==== Limits of XFS based Volume/Network Share Replication ====
XFS based storage pools do not have the advanced replication mechanisms like ZFS send/receive so we employ more brute force techniques for replication.  Specifically, when you replicate an XFS based storage volume or network share QuantaStor uses the linux rsync utility.  It does have compression and it will only send changes but it doesn't work well with large files because the entire file must be scanned and in some cases resent over the network.  Because of this we highly recommend using ZFS based storage pools for all deployments unless you specifically need the high sequential IO performance of XFS for a specific application.
=== Creating a Storage System Link ===
The first step in setting up DR/remote-replication between two systems is to have at least two nodes (storage appliances) configured into a Grid ([http://wiki.osnexus.com/index.php?title=QuantaStor_Administrators_Guide#Grid_Setup_Procedure link]).  QuantaStor has a grid communication mechanism that connects appliances (nodes) together so that they can share information, coordinate activities like remote-replication, and simplify management operations.  After you create the grid you'll need to setup a Storage System Link between the two or more nodes between which you want to replicate data (volumes and/or shares).  The Storage System Link represents a low level security key exchange between the two nodes so that they can send data to each other.  Creation of the Storage System Link is done through the QuantaStor Manager web interface by selecting the 'Remote Replication' tab, and then pressing the 'Create Storage System Link' button in the tool bar to bring up the dialog.
[[File:Create Storage Link.png|800px]]
Select the IP address on each system to be utilized for communication of remote replication network traffic.  If both systems are on the same network then you can simply select one of the IP addresses from one of the local ports but if the remote system is in the cloud or remote location then most likely you will need to specify the external IP address for your QuantaStor system.  Note that the two systems communicate over ports 22 and 5151 so you will need to open these ports in your firewall in order for the QuantaStor systems to link up properly.
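Before troubleshooting a link that will not come online, it can help to confirm that both ports are actually reachable from each side.  This is a minimal sketch using netcat from the console of one appliance (replace the example address with the peer system's IP):

<pre>
# Verify that SSH (22) and the QuantaStor replication/grid port (5151) are reachable on the peer system
nc -zv 203.0.113.25 22
nc -zv 203.0.113.25 5151
</pre>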
=== Creating a Remote Replica ===
Once you have a Storage System Link created between two systems you can replicate volumes and network shares in either direction. Simply login to the system that you want to replicate volumes from, right-click on the volume to be replicated, then choose 'Create Remote Replica'.  Creating a remote replica is much like creating a local clone, only the data is being copied over to a storage pool in a remote storage system.  As such, when you create a remote-replica you must specify which storage system you want to replicate to (only systems which have established and online storage system links will be displayed) and which storage pool within that system should be utilized to hold the remote replica.  If you have already replicated the specified volume to the remote storage system then you can re-sync the remote volume by choosing the remote-replica association in the web interface and choosing 'resync'.  This can also be done via the 'Create Remote Replica' dialog by choosing the option to replicate to an existing target if available.
=== Creating a Remote Replication Schedule / DR Replication Policy ===
Remote replication schedules provide a mechanism for replicating the changes to your volumes to a matching checkpoint volume on a remote appliance automatically on a timer or a fixed schedule.  To create a schedule navigate to the Remote Replication Schedules section after selecting the Remote Replication tab at the top of the screen.  Right-click on the section header and choose 'Create Replication Schedule'. 
[[File:Drsetup1.png|1000px]]
Besides selection of the volumes and/or shares to be replicated you must select the number of snapshot checkpoints to be maintained on the local and remote systems.  You can use these snapshots for off-host backup and other data recovery purposes as well, so there is no need to have a Snapshot Schedule which would be redundant with the snapshots which will be created by your replication schedule.  If you choose a Max Replicas of 5 then up to 5 snapshot checkpoints will be retained.  If, for example, you were replicating nightly at 1am each day of the week from Monday to Friday then you will have a week's worth of snapshots as data recovery points.  If you are replicating 4 times each day and need a week of snapshots then you would need 5x4, or a Max Replicas setting of 20.
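Put generally (following the arithmetic in the example above): Max Replicas = (replications per day) x (days of recovery history required), rounded up to the nearest whole number.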
=== Remote Replication Bandwidth Throttling ===
WAN links are often limited in bandwidth to a range between 2-60MBytes/sec for on-premises deployments and 20-100MBytes/sec or higher in datacenters, depending on the service provider.  QuantaStor does automatic load balancing of replication activities to limit the impact to active workloads and to limit the use of your available WAN or LAN bandwidth.  By default QuantaStor comes pre-configured to limit replication bandwidth to 50MB/sec but you can increase or decrease this to better match the bandwidth and network throughput limits of your environment.  This is a good default for datacenter deployments, but hybrid cloud deployments where data is replicating to/from an on-premises site(s) should be configured to take up no more than 50% of your available WAN bandwidth so as to not disrupt other activities and workloads.

Here are the CLI commands available for adjusting the replication rate limit.  To get the current limit use 'qs-util rratelimitget' and to set the rate limit to a new value (for example, 4MB/sec) you can set the limit like so: 'qs-util rratelimitset 4'.
<pre>
  Replication Load Balancing
    qs-util rratelimitget            : Current max bandwidth available for all remote replication streams.
    qs-util rratelimitset NN         : Sets the max bandwidth available in MB/sec across all replication streams.
    qs-util rraterebalance           : Rebalances all active replication streams to evenly share the configured limit.
                                       Example: If the rratelimit (NN) is set to 100 (MB/sec) and there are 5 active
                                       replication streams then each stream will be limited to 20MBytes/sec (100/5)
                                       QuantaStor automatically rebalances replication streams every minute unless
                                       the file /etc/rratelimit.disable is present.
</pre>
To run the above mentioned commands you must login to your storage appliance via SSH or via the console.  Here's an example of setting the rate limit to 50MB/sec.
<pre>sudo qs-util rratelimitset 50</pre>
At any given time you can adjust the rate limit and all active replication jobs will automatically adjust to this new limit within a minute.  This means that you can dynamically adjust the rate limit using the 'qs-util rratelimitset NN' command to set different replication rates for different times of day and days of the week using a cron job.  If you need that functionality and need help configuring cron to run the 'qs-util rratelimitset NN' command please contact Customer Support.
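As a sketch of the cron-based approach described above (the schedule, file name and limits shown are assumptions to adapt to your environment, and 'qs-util' is assumed to be on root's PATH):

<pre>
# /etc/cron.d/replication-throttle  (hypothetical example file)
# Cap replication at 10MB/sec during weekday business hours, raise it back to 50MB/sec overnight
0 8  * * 1-5  root  qs-util rratelimitset 10
0 18 * * 1-5  root  qs-util rratelimitset 50
</pre>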
=== Permanently Promoting a Replicated Storage Volume or Network Share ===
The below process details how to Promote a _chkpnt Storage Volume/Network Share in the event of a failure of the primary node. This same procedure can be used to permanently migrate data to a Storage Pool on a different QuantaStor appliance using remote replication.

If the Replication Source system is offline due to a hardware failure of the appliance, you can skip directly to Step 3.
Step 1) Ensure that all client I/O to the current source Storage Volume or Network Share has been stopped and that one final replication of any data modified since the last replication has been performed using the replication links/schedules.
Step 2) Remove all Hosts and Host Group Associations from the source Storage Volume.
Step 3) Right Click on the Replication Schedule associated with the source and destination Storage Volume/Network Share and click 'Delete Schedule'.
Step 4) Right click on the Replication Link associated with the source and destination Storage Volume/Network Share and select the 'Delete Replica Association' option, which will open the 'Delete Remote Replication Link' dialog. You will want to use the defaults in this dialog and click 'OK'.
[[File:Delete_Remote_Replication_Link.png|800px]]
At this stage there is no longer a replication link or association between the source and destination _chkpnt Storage Volume/Network Share. Both the original source and the destination _chkpnt Storage Volume/Network Share can be renamed using the Modify Storage Volume or Modify Network Share dialogs and mapped for client access as required.

Please note: If you are looking to use the same name for the _chkpnt Storage Volume/Network Share as was used on the Source system and the Source QuantaStor appliance is offline/unavailable, you may need to remove it from the grid at this stage as it will not be accessible to perform the rename operation using the Modify Storage Volume or Modify Network Share dialog. In this event, after removal of the offline QuantaStor node from the grid, you can skip directly to Step B below.

Renaming the _chkpnt Storage Volume/Network Share to be the same as the original source Storage Volume/Network Share:
Step A) Right click on the original Storage Volume/Network Share and choose the 'Modify Storage Volume' or 'Modify Network Share' option. In the dialog box, rename the Storage Volume or Network Share to add '_bak' or any other unique postfix to the end and click 'OK'. Once you are done with the Promotion/Migration you can remove this backup (_bak) version and its associated snapshots. Our multi-delete feature is useful for this sort of batch deletion process.

Example screenshot below showing the Modify Storage Volume dialog renaming the source Storage Volume with a _bak postfix.
[[File:Modify_Storage_Volume_rename_bak.png|800px]]
Step B) Right click on the replicated _chkpnt Storage Volume/Network Share and choose the 'Modify Storage Volume' or 'Modify Network Share' option. In the dialog box, rename the Storage Volume or Network Share as you see fit and click 'OK'.

Example screenshot below showing the Modify Storage Volume dialog renaming the destination _chkpnt Storage Volume to the name originally used by the source volume.
[[File:Modify_Storage_Volume_rename.png|800px]]
Step C) Map client access to the promoted Storage Volume / Network Share.

For Storage Volumes, map LUN access to your clients using the Host or Host Groups option detailed here: [[QuantaStor_Administrators_Guide#Managing_Hosts|Managing Hosts]]

For Network Shares, map them out using the CIFS/NFS access permissions as detailed here: [[QuantaStor_Administrators_Guide#Managing_Network_Shares|Managing Network Shares]]

Please note: If this procedure was performed for disaster recovery of a failed primary QuantaStor node, then once the original primary node is brought back online the old, out-of-date Storage Volume/Network Share will need to be renamed with a '_bak' or other preferred postfix (or removed to free up space) and the node re-added to the grid.  Replication can then be configured from the new primary source QuantaStor system to the recovered QuantaStor appliance acting as the secondary replication destination target.
== Disk Migration / LUN Copy to Storage Volume ==
Migrating LUNs (iSCSI and FC block storage) from legacy systems can be time consuming and potentially error prone as it generally involves mapping the new storage and the old storage to a host, ensuring that the newly allocated LUN is equal to or larger than the old LUN, and then the data makes two hops from Legacy SAN -> host -> New SAN, so it uses more network bandwidth and can take more time.

QuantaStor has a built-in data migration feature to help make this process easier and faster.  If your legacy SAN is FC based then you'll need to put an Emulex or Qlogic FC adapter into your QuantaStor appliance and will need to make sure that it is in initiator mode.  Using the WWPN of this FC initiator you'll then setup the zoning in the switch and the storage access in the legacy SAN so that the QuantaStor appliance can connect directly to the storage in the legacy SAN with no host in-between.  Once you've assigned some storage from the legacy SAN to the QuantaStor appliance's initiator WWPN you'll need to do a 'Scan for Disks' in the QuantaStor appliance and you will then see your LUNs appear from the legacy SAN (they will appear with device names like sdd, sde, sdf, sdg, etc).
To copy a LUN to the QuantaStor appliance right-click on the disk device and choose 'Migrate Disk...' from the pop-up menu.
[[File:Physical Disk Migration.png|800px]]
+
 
+
You will see a dialog like the one above and it will show the details of the source device to be copied on the left.  On the right it shows the destination information which will be one of the storage pools on the appliance where the LUN is connected.  Enter the name for the new storage volume to be allocated which will be the destination for the copy.  A new storage volume will be allocated with that name which is exactly the same size as the source volume.  It will then copy all the blocks of the source LUN to the new destination Storage Volume. 
+
 
+
=== Data migration via iSCSI ===
+
QuantaStor v4 and newer includes iSCSI Software Adapter support so that one can directly connect to and access iSCSI LUNs from a SAN without having to use the CLI commands outlined below.  The option to create a new iSCSI Software Adapter is in the Hardware Enclosures & Controllers section within the web management interface. 
+
 
+
The process for copying a LUN via iSCSI is similar to that for FC except that iSCSI requires an initiator login from the QuantaStor appliance to the remote iSCSI SAN to initially establish access to the remote LUN(s).  This can be done via the QuantaStor console/SSH using the 'qs-util iscsiinstall', 'qs-util iscsiiqn', and 'qs-util iscsilogin' commands.  Here's the step-by-step process:
+
 
+
<code>sudo qs-util iscsiinstall</code>
+
 
+
This command will install the iSCSI initiator software (open-iscsi).
+
 
+
<code>sudo qs-util iscsiiqn</code>
+
 
+
This command will show you the iSCSI IQN for the QuantaStor appliance.  You'll need to assign the LUNs in the legacy SAN that you want to copy over to your QuantaStor appliance to this IQN.  If your legacy SAN supports snapshots it's a good idea to assign a snapshot LUN to the QuantaStor appliance so that the data isn't changing during the copy.
+
 
+
<code>sudo qs-util iscsilogin 10.10.10.10</code>
+
 
+
In the command above, replace the example 10.10.10.10 IP address with the IP address of the legacy SAN which has the LUNs you're going to migrate over.  Alternatively, you can use the iscsiadm command line utility directly to do this step.  There are several of these iscsi helper commands, type 'qs-util' for a full listing. 
+
Once you've logged into the devices you can see information about them by running the 'cat /proc/scsi/scsi' command, or just go back to the QuantaStor web management interface and use the 'Scan for Disks' command to make the disks appear.  Once they appear in the 'Physical Disks' section you can right-click on them and do a 'Migrate Disk...' operation.
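
The individual steps above can be strung together as a short console session.  The sketch below simply consolidates the qs-util commands already described in this section; the 10.10.10.10 portal address is an illustrative placeholder.
<pre>
# Install the open-iscsi initiator package on the QuantaStor appliance
sudo qs-util iscsiinstall

# Print the appliance's initiator IQN, then assign the legacy SAN LUNs to that IQN
sudo qs-util iscsiiqn

# Log in to the legacy SAN's iSCSI portal (replace with your SAN's IP address)
sudo qs-util iscsilogin 10.10.10.10

# Verify that the remote LUNs are now visible as local devices
cat /proc/scsi/scsi
</pre>
After the devices are visible, use 'Scan for Disks' in the web interface and then 'Migrate Disk...' as described above.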
+
 
+
=== FC Initiator vs Target ===
+
Note that for FC data migration you can use either a Qlogic or an Emulex FC HBA in initiator mode, but QuantaStor v3 only supports Qlogic QLE24xx/25xx series cards in FC Target mode.  You can also use OEM versions of the Qlogic cards.  As such it is best to use Qlogic cards as you can then switch the card from initiator to target mode using the ''FC Port Enable'' command once you're done migrating LUNs.
+
 
+
== Managing Call-home / Alert Configuration Settings ==
+
 
+
QuantaStor has a number of mechanisms for remote monitoring of system alerts, IO performance and other metrics via traditional protocols like SNMP and cloud services like Librato Metrics and CopperEgg.  Appliances report alerts at various severity levels, ranging from 'ERROR' alerts such as a disk or battery that needs to be replaced down to minor informational 'INFO' alerts such as automatic resource cleanup.  The Alert Manager dialog also allows you to set thresholds for when you should be sent warnings that the appliance has reached a low space condition.  After configuring the call-home mechanism for your environment be sure that you ''test'' your alert configuration settings in the Alert Manager by sending a test alert to verify the mechanism(s) you set up are properly receiving alerts.
+
 
+
[[File:qs_scrn_alert_manager.png|Alert Manager Dialog]]
+
 
+
The Alert Manager allows you to specify at which thresholds you want to receive email regarding low disk space alerts for your storage pools.  It also lets you specify the SMTP settings for routing email and the token for your PagerDuty account if you have one.  For more information on the configuration of PagerDuty, Librato Metrics and other monitoring mechanisms see the [http://wiki.osnexus.com/index.php?title=QuantaStor_Monitoring_%26_Cloud_Metrics_Integration_Guide Monitoring and Cloud Metrics Integration guide here.]
+
 
+
== Managing Hosts ==
+
 
+
Hosts represent the client computers that you assign storage volumes to.  In SCSI terminology the host computers ''initiate'' the communication with your storage volumes (target devices) and so they are called initiators.  Each host entry can have one or more initiators associated with it because an iSCSI initiator (Host) can be identified by IP address, by IQN, or by both at the same time.  We recommend always using the IQN (iSCSI Qualified Name), as identifying a host by IP address can cause login problems, especially when that host has multiple NICs and they're not all specified.
+
 
+
For installation details please refer to the QuantaStor Users Guide under iSCSI Initiator Configuration.
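
For reference, when adding a host entry by IQN you can look up a Linux client's initiator IQN with the open-iscsi tools.  This is a client-side example (not a QuantaStor command) and the output shown is format-only.
<pre>
# On a Linux host with open-iscsi installed, the initiator IQN is stored here:
cat /etc/iscsi/initiatorname.iscsi
# Example output (format only):
# InitiatorName=iqn.1993-08.org.debian:01:abcdef123456
</pre>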
+
 
+
=== Managing Host Groups ===
+
 
+
Sometimes you'll have multiple hosts that need to be assigned the same storage volume(s), such as with a VMware or a XenServer resource pool.  In such cases we recommend creating a Host Group object which contains all of the hosts in your cluster/resource pool.  With a host group you can assign the volume to the group once and save a lot of time.  Also, when you add another host to the host group it automatically gets access to all the volumes assigned to the group, which makes it very easy to add nodes to your cluster and to manage storage from a group perspective rather than host by host, which can be cumbersome for larger clusters.
+
 
+
== Managing Snapshot Schedules ==
+
 
+
Snapshot schedules enable one to create a space efficient point-in-time copy of Storage Volumes and Network Shares automatically on a regular schedule. 
+
 
+
[[File:qs4_create_snapshot_schedule_toolbar.png|1024px]]
+
 
+
Snapshots are automatically expired on a rotation by setting the max retained snapshots value.  One can have more than one snapshot schedule, and each schedule can be associated with any Storage Volumes and Network Shares, even those utilized in other Snapshot Schedules.  This is something we recommend in cases where a near term high granularity of recovery points is needed along with a long term weekly set of retention points which may span months.  For storage volumes and network shares containing critical data one should create a Snapshot Schedule that will create snapshots of them at least once a day.  A second schedule that creates a single snapshot on the weekend of your critical volumes is also recommended.
+
 
+
[[File:qs4_create_snapshot_schedule.png|800px]]
+
 
+
=== Near Continuous Data Protection (N-CDP) ===
+
 
+
What all this boils down to is a feature we in the storage industry refer to as continuous data protection or CDP.  Full CDP solutions allow you to recover to any prior point in time at the granularity of seconds.  Storage systems that snapshot at a coarser level of granularity (15min/hourly) are often referred to as NCDP or "near continuous data protection" solutions, which is what QuantaStor provides.  This NCDP capability is achieved through ''Snapshot Schedules'' which run at a maximum granularity of once per hour.  Using Snapshot Schedules one can automatically protect critical data, making it easy to recover data from previous points in time.
+
 
+
== Managing iSCSI Sessions ==
+
 
+
A list of active iSCSI sessions can be found by selecting the 'Storage Volume' tree-tab in QuantaStor Manager then selecting the 'Sessions' tab in the center view.  Here's a screenshot of a list of active sessions as shown in QuantaStor Manager.
+
 
+
[[File:qs_session.png|640px|Session List]]
+
 
+
=== Dropping Sessions ===
+
 
+
To drop an iSCSI session, just right-click on it and choose 'Drop Session' from the menu. 
+
 
+
[[File:qs_session_drop.png|640px|Drop Session Dialog]]
+
 
+
Keep in mind that some initiators will automatically re-establish a new iSCSI session if one is dropped by the storage system.  To prevent this, just unassign the storage volume from the host so that the host cannot re-login.
+
 
+
== Managing Network Shares ==
+
 
+
QuantaStor ''Network Shares'' provide NAS access to your storage via the NFSv3, NFSv4, and CIFS protocols. Note that you must have first created a ''Storage Pool'' before you create ''Network Shares'' as they are created within a specific ''Storage Pool''.  ''Storage Pools'' can be used to provision NAS storage (''Network Shares'') and SAN storage (''Storage Volumes'') at the same time.
+
 
+
=== Creating Network Shares ===
+
 
+
To create a ''network share'' simply right-click on a Storage Pool and select 'Create Network Share...' or select the '''Network Shares''' section and then choose '''Create Network Share''' from the toolbar or right-click for the pop-up menu.  Network Shares can be concurrently accessed via both NFS and CIFS protocols. 
+
 
+
[[File:Create Network Share.png]]
+
 
+
After providing a name and an optional description for the share, and selecting the ''storage pool'' in which the ''network share'' will be created, there are a few other options you can set including protocol access types and a share level quota.
+
 
+
==== Enable Quota ====
+
 
+
If you have created a ZFS based storage pool then you can set specific quotas on each ''network share''.  By default there are no quotas assigned and ''network shares'' with no quotas are allowed to use any free space that's available in the ''storage pool'' in which they reside.
+
 
+
==== Enable CIFS/SMB Access ====
+
 
+
Select this check-box to enable CIFS access to the ''network share''.  When you first select to enable CIFS access the default is to make the share public with read/write access.  To adjust this so that you can assign access to specific users or to turn on special features you can adjust the CIFS settings further by pressing the '''CIFS/SMB Advanced Settings''' button.
+
 
+
==== Enable Public NFS Access ====
+
 
+
By default public NFS access is enabled; un-check this option to turn off NFS access to this share. Later you can add NFS access rules by right-clicking on the share and choosing 'Add NFS Client Access..'.
+
 
+
=== Modify Network Shares ===
+
 
+
After a ''network share'' has been created you can modify it via the '''Modify Network Share ''' dialog.
+
 
+
[[File:Modify Network Access.png|700 px]]
+
 
+
==== Compression ====
+
 
+
''Network Shares'' and ''Storage Volumes'' inherit the compression mode and type from whatever is set for the ''storage pool''.  You can also customize the compression level to something specific for each given ''network share''.  For network shares that contain files which are highly compressible you might increase the compression level to gzip (gzip-6), but note that higher compression levels use more CPU power.  For network shares that contain data that is already compressed, you may opt to turn compression 'off'.
+
Note, this feature is specific to ZFS based Storage Pools.
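
For reference, on ZFS based pools the per-share compression setting corresponds to the standard ZFS 'compression' property on the share's underlying dataset.  The sketch below is a CLI illustration only; the qs-POOLID/SHARENAME dataset name is an assumed placeholder following the qs-POOLID naming shown elsewhere in this guide, and the web interface remains the recommended way to change this setting.
<pre>
# Inspect the current compression setting for a share's dataset (placeholder names)
zfs get compression qs-POOLID/SHARENAME

# Heavier compression for highly compressible data (uses more CPU)
sudo zfs set compression=gzip-6 qs-POOLID/SHARENAME

# Turn compression off for data that is already compressed
sudo zfs set compression=off qs-POOLID/SHARENAME
</pre>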
+
 
+
==== Sync Policy ====
+
 
+
The Sync Policy indicates how to handle writes to the network share.  Standard mode is the default and it uses a combination of synchronous and asynchronous writes to ensure consistency and optimize for performance. If the write requests have been tagged as "SYNC_IO" then all of the IO is first sent to the filesystem intent log (ZIL) and then staged out to disk, otherwise the data can be written directly to disk without first staging to the intent log.  In the "Always" mode the data is always sent to the filesystem intent log first and this is a bit slower but technically safer.  If you have a workload that is write intensive it is a good idea to assign a pair of SSD drives to the ''storage pool'' for use as write cache so that the writes to the log and overall IOPs performance can be accelerated.  Note, this feature is specific to ZFS based ''Storage Pools'' and the policy for each ''network share'' is by default inherited from the ''storage pool''.
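
Similarly, the Sync Policy maps to the standard ZFS 'sync' property on the underlying dataset.  This is a sketch for reference only; the dataset name is an assumed placeholder and the policy is normally set from the Modify Network Share dialog.
<pre>
# Show the current sync policy for a share's dataset (placeholder names)
zfs get sync qs-POOLID/SHARENAME

# "Always" mode: every write goes through the intent log first (safer, slower)
sudo zfs set sync=always qs-POOLID/SHARENAME
</pre>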
+
 
+
=== NFS Configuration ===
+
 
+
 
+
==== Configuring NFS Services ====
+
 
+
[[File:NFS Services Config.png|300px]]
+
 
+
The default NFS mode is NFSv3 but this can be changed from within the "NFS Services Configuration" dialog to NFSv4. To open this dialog navigate to the "Network Shares" tab, and select "Configure NFS" from the ribbon bar at the top, or "Configure NFS Services..." by right clicking the open space under the "Network Share" section to bring up the context menu.
+
 
+
==== Controlling NFS Access ====
+
 
+
NFS share access is filtered by IP address. Access rules can be added by right clicking on a network share and selecting "Add NFS Access...". By default the share is set to have public access. This dialog allows you to specify access for a single IP address or a range of IP addresses.
+
 
+
[[File:Add NFS Access.png|300px]]
+
 
+
==== NFS Custom Options ====
+
 
+
You can also specify different custom options from within the "Modify NFS Client Access" dialog. To open this menu, right click on the share's host access (defaults to public), and select "Modify NFS Client Access". In this dialog you can set different options such as "Read Only", "Insecure", etc. You can also add custom options such as "no_root_squash" in the space provided below.
+
 
+
Modify Network Share can be accessed either by selecting "Modify" from "Network Share" in the ribbon or right clicking on the network share in the left tree panel and selecting "Modify Share/CIFS Access" and choosing the "User Access" tab.
+
 
+
[[File:Modify NFS Access.png|600px]]
+
 
+
=== CIFS Configuration ===
+
QuantaStor v3 uses Samba 3.6 which provides CIFS access to ''Network Shares'' via the SMB2 protocol.  There is also beta support for Samba 4 but as of Q2/14 it does not have good support for joining existing AD Domains.  As such, Samba4 is not planned to be the default until late 2014/early 2015.
+
 
+
==== Active Directory Configuration ====
+
 
+
QuantaStor appliances can be joined to your AD domain so that CIFS access can be applied to specific AD users and AD groups.
+
 
+
===== Joining an AD Domain =====
+
 
+
To join a domain first navigate to the "Network Shares" section. Now select "Configure CIFS" in the top ribbon bar, or right click in the "Network Shares" space and select "Configure CIFS Services..." from the context window. Check the box to enable Active Directory and provide the necessary information. The KDC is most likely your domain controller's FQDN (DC.DOMAIN.COM).
+
<br>
+
Note: Your storage system name must be <= 15 characters long.
+
<br>
+
If there are any problems joining the domain please verify that you can ping the IP address of the domain controller, and that you are also able to ping the domain itself.
+
 
+
[[File:CIFS_Configuration.png|600px]]
+
 
+
You can now see QuantaStor on the domain controller under the Computer entry tab.
+
 
+
[[File:adComputerEntry.png|400px]]
+
 
+
==== Active Directory Caching ====
+
 
+
QuantaStor caches AD user names and associated Unix user ID and group ID (UID/GID) information within the service so that when using the QuantaStor Manager web interface you can quickly 'Search' for users and groups and assign them access to shares using the '''User Access''' tab in the '''[http://wiki.osnexus.com/index.php?title=Modify_Network_Share_Dialog Modify Network Share]''' dialog.  If you've recently added new users or groups to your Active Directory environment the QuantaStor cache may be out of date and may require updating in order for them to show up.  To do this, choose the 'Search & Clear Cache' option which forces the QuantaStor service to refresh its cache of users and their associated UID/GID mappings.
+
 
+
[[File:Qs_network_share_modify_user_access.jpg]]
+
 
+
===== Active Directory Caching for Large Enterprise Deployments =====
+
 
+
For large Active Directory environments (10K-100K+ users and groups) it can take a long time for QuantaStor to gather information from AD to populate the cache.  If it takes too long the scan will time out and the in-memory AD cache will be empty.
+
 
+
[[File:qs_adcachetimeout.jpg]]
+
 
+
As an example, for configurations with 60K users+groups we've seen it take upwards of 15 minutes to populate the cache, so an alternative approach is needed for these configurations.  That alternative is an on-disk cache of the AD user list and UID/GID mapping information which the QuantaStor service can use in lieu of scanning that information directly from AD, allowing QuantaStor to work quickly and efficiently in large environments.
+
 
+
When the on-disk AD cache is present, using the 'Search & Clear Cache' option from the web UI does not clear the on-disk AD cache.  The on-disk cache can only be created, cleared, and updated using the qs-util command line adcache commands like so:
+
 
+
* To generate/create the QuantaStor service on-disk AD cache
+
sudo qs-util adcachegenall
+
* To clear all QuantaStor service on-disk AD cache information
+
sudo qs-util adcacheclearall
+
 
+
Here is the full list of on-disk AD cache management commands:
+
 
+
  Active Directory Commands
+
    qs-util adcachelistfiles        : List the files in the Active Directory cache.
+
    qs-util adcachegenall            : Generates a cache of Active Directory users and groups.
+
    qs-util adcacheclearall          : Clears a cache of Active Directory users and groups.
+
    qs-util adusercachegen          : Generates a cache of Active Directory users.
+
    qs-util adusercacheclear        : Clears a cache of Active Directory users.
+
    qs-util adgroupcachegen          : Generates a cache of Active Directory groups.
+
    qs-util adgroupcacheclear        : Clears a cache of Active Directory groups.
+
 
+
Note that when new users are added to your AD environment the on-disk AD cache information within QuantaStor will be out of date.  To correct this you'll need to run the command to update all the cache files using 'qs-util adcachegenall'.  To automatically update the AD cache on a nightly basis it is recommended to set up a simple cron script like so:
+
 
+
echo "qs-util adcacheclearall" > /etc/cron.daily/adcacheupdate
+
echo "qs-util adcachegenall" >> /etc/cron.daily/adcacheupdate
+
chmod 755 /etc/cron.daily/adcacheupdate
+
 
+
==== Leaving a AD Domain ====
+
 
+
To leave a domain first navigate to the "Network Shares" section. Now select "Configure CIFS" in the top ribbon bar, or right click in the "Network Shares" space and select "Configure CIFS Services" from the context window. Unselect the checkbox to disable Active Directory integration. If you would like to remove the computer entry from the domain controller you must also specify the domain administrator and password. After clicking "OK" QuantaStor will then leave the domain.
+
 
+
[[File:CIFS Services Configuration Services.png|600px]]
+
 
+
==== Modifying CIFS Access ====
+
 
+
There are a number of custom options that can be set to adjust the CIFS access to your ''network share'' for different use cases.  The 'Public' option makes the ''network share'' public so that all users can access it.  The 'Writable' option makes the share writable as opposed to read-only and the 'Browseable' option makes it so that you can see the share when you browse for it from your Windows server or desktop.
+
 
+
[[File:Modify Network Access.png|600px]]
+
 
+
==== Modifying CIFS Configuration Options ====
+
 
+
===== Hide Unreadable & Hide Unwriteable =====
+
 
+
To show users only those folders and files to which they have access, you can set these options so that items they do not have read and/or write access to are hidden.
+
 
+
===== Media Harmony Support =====
+
 
+
Media Harmony is a special VFS module for Samba which provides a mechanism for multiple Avid users to edit content at the same time on the same network share.  To do this the Media Harmony module maintains separate copies of the Avid meta-data temporary files on a per-user, per-network client basis.
+
 
+
===== Disable Snapshot Browsing =====
+
 
+
Snapshots can be used to recover data and by default your snapshots are visible under a special ShareName_snaps folder.  If you don't want users to see these snapshot folders you can disable snapshot browsing with this option.  Note that you can still access the snapshots for easy file recovery via the Previous Versions section of the Properties page for the share in Windows.
+
 
+
===== MMC Share Management =====
+
 
+
QuantaStor ''network shares'' can be managed directly from the MMC console Share Management section from Windows Server.  This is often useful in heterogeneous environments where a combination of multiple different filers from multiple different vendors is being used.  To turn on this capability for your ''network share'' simply select this option. 
+
If you want to enable this capability for all network shares in the appliance you can do so by [http://www.vionblog.com/manage-samba-permissions-from-windows/ manually editing the smb.conf] file to add these settings to the [global] section.
+
<pre>
+
vfs objects = acl_xattr
+
map acl inherit = Yes
+
store dos attributes = Yes
+
</pre>
+
 
+
===== Extended Attributes =====
+
 
+
Extended attributes are a filesystem feature where extra metadata can be associated with files.  This is useful for enabling security controls (ACLs) for DOS and OS/X.  Extended attributes can also be used by a variety of other applications, so if you need this capability simply enable it by checking the box(es) for DOS, OS/X and/or plain Extended Attribute support.
+
 
+
 
+
==== Controlling CIFS Access ====
+
 
+
CIFS access can be controlled on a per user basis. When you are not in a domain, the users you can choose from are the different users you have within QuantaStor. This can be done during share creation by selecting "Advanced Settings", or while modifying a share under the tab "User Access". If you are in a domain, you will also be able to select the different users/groups that are present within the domain. This can be done the same way as using the QuantaStor users, but by selecting "AD Users" or "AD Groups". You can set the access to either "Valid User", "Admin User", or "Invalid User".
+
 
+
[[File:Mod NetShr General.png|300px]]  [[File:AD Users.png|300px]]  [[File:File Perm.png|300px]]
+
 
+
 
+
Under "User Access" tab choices are "Users", "AD Users", and "AD Groups".
+
 
+
[[File:UA Users.png|300px]]  [[File:UA AD Users.png|300px]]  [[File:UA AD Groups.png|300px]]
+
 
+
===== Verifying Users Have CIFS Passwords =====
+
 
+
Before using a QuantaStor user for CIFS/SMB access you must first verify that the user has a CIFS password. To check if the user can be used for CIFS/SMB first go to the "Users & Groups" section. Now select a user and look for the property "CIFS Ready". If the user is ready to be used with CIFS/SMB it will say "Yes". If the property says "Password Change Required" then one more step is required before that user can be used: right click the user and select "Set Password". If you are signed in as an administrator the old password is not required, and when setting the password for CIFS/SMB you can reuse the same password as before. The user should now show up as CIFS ready.
+
 
+
===== Setting Network Share Permissions=====
+
 
+
You can modify some of the share options during share creation, or while modifying the share. Most of the options are set by selecting/unselecting the checkboxes. You can also set the file and directory permissions in the modify share dialog under the "File Permissions" tab.
+
 
+
[[File:Modify Network Share - File Permis .png|300px]]
+
 
+
== [[Scale-out_NAS_(Gluster_based)_Storage_Setup|Managing Scale-out NAS File Storage (GlusterFS) Volumes]] ==
+
 
+
See special section on GlusterFS setup and configuration [[Scale-out_NAS_(Gluster_based)_Storage_Setup|here]].
+
 
+
== Managing Storage Volumes ==
+
Each storage volume is a unique block device/target (a.k.a a 'LUN' as it is often referred to in the storage industry) which can be accessed via iSCSI, Fibre Channel, or Infiniband/SRP.  A ''storage volume'' is essentially a virtual disk drive on the network (the SAN) that you can assign to any host in your environment.  Storage volumes are provisioned from a ''storage pool'' so you must first create a ''storage pool'' before you can start provisioning volumes.
+
 
+
=== Creating Storage Volumes ===
+
Storage volumes can be provisioned 'thick' or 'thin' which indicates whether the storage for the volume should be fully reserved (thick) or not (thin).  As an example, a thin-provisioned 100GB storage volume in a 1TB storage pool will initially only use 4KB of disk space in the pool when it is created, leaving 99.99% of the disk space available for other volumes and additional volume provisioning.  In contrast, if you choose 'thick' provisioning by unchecking the 'thin provisioning' option then the entire 100GB will be pre-reserved.  The advantage there is that the volume can never run out of disk space due to low storage availability in the pool, but since it is reserved up front you will only have 900GB free in your 1TB storage pool after it has been allocated, so you can end up using up your available disk space fairly rapidly with thick provisioning.  As such, we recommend using thin provisioning and it is the default.  The other problem with 100% thick provisioning is that it doesn't leave any room for snapshots, and snapshots are also required to do remote replication.  If you've provisioned your volumes with thick provisioning by mistake don't worry, the reserved space can be adjusted at the command line using the command 'zfs set refreservation=1G qs-POOLID/VOLUMEID', which customer support can assist you with.
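
Building on the command mentioned above, a minimal console sketch for checking and reducing the reservation of a thick-provisioned volume is shown below; the qs-POOLID/VOLUMEID names are placeholders and the 1G value is only an example, so please involve customer support before changing reservations on production volumes.
<pre>
# Show how much space is currently reserved for the volume (placeholder names)
zfs get refreservation qs-POOLID/VOLUMEID

# Reduce the reservation, effectively converting the volume to thin provisioning
sudo zfs set refreservation=1G qs-POOLID/VOLUMEID
</pre>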
+
 
+
=== Deleting Storage Volumes ===
+
 
+
There are two separate dialogs in QuantaStor Manager for deleting storage volumes.  If you press the "Delete Volume(s)" button in the ribbon bar you will be presented with a dialog that will allow you to delete multiple volumes all at once, and you can even search for volumes based on a partial name match.  This can save a lot of time when you're trying to delete multiple volumes.  You can also right-click on a storage volume and choose 'Delete Volume' which will bring up a dialog that will allow you to delete just that volume.
+
If there are snapshots of the volume you are deleting they are not deleted; rather, they are promoted.  For example, if you have snapshots S1 and S2 of volume A1 then the snapshots will become root/primary storage volumes after A1 is deleted.  Once a storage volume is deleted all the data is gone, so use extreme caution when deleting your storage volumes to make sure you're deleting the right ones.  Technically, storage volumes are internally stored as files in XFS based storage pools so it is possible that you could use a filesystem file recovery tool to recover a lost volume, but generally speaking one would need to hire a company that specializes in data recovery to get this data back.
+
 
+
=== Quality of Service (QoS) Controls ===
+
 
+
When QuantaStor appliances are in a shared or multi-tenancy environment it is important to be able to limit the maximum read and write bandwidth allowed for specific storage volumes so that a given user or application cannot unfairly consume an excessive amount of the storage appliance's available bandwidth.  This bandwidth limiting feature is often referred to as Quality of Service (QoS) controls which limit the maximum throughput for reads and writes to ensure a reliable and predictable QoS for all applications and users of a given appliance.
+
 
+
[[File:osn_qos_controls.png|800px]]
+
 
+
Once you've set up QoS controls on a given Storage Volume the settings will be visible in the main center table.
+
 
+
[[File:osn_qos_controls_grid.png|800px]]
+
 
+
==== QoS Support Requirements ====
+
* Appliance must be running QuantaStor v3.16 or newer
+
* QoS controls can only be applied to Storage Volumes (not Network Shares)
+
* Storage Volume must be in a ZFS or Ceph based Storage Pool in order to adjust QoS controls for it
+
 
+
==== QoS Policies ====
+
In some cases you may want to assign Storage Volumes to a QoS level by policy.  This makes it easy to quickly adjust the QoS for all Storage Volumes using a given QoS policy.  To create a QoS Policy using the QuantaStor CLI run the following command.
+
 
+
qs qos-policy-create high-performance --bw-read=300MB --bw-write=300MB
+
 
+
You can then later adjust the policy at any time and the changes will be immediately applied to all Storage Volumes in the grid associated with the policy.
+
 
+
qs qos-policy-create high-performance --bw-read=400MB --bw-write=400MB
+
 
+
In this way you could dynamically change the performance limits to increase or decrease the maximums for certain hours of the day where storage IO loads are expected to be lower or higher using a script or cron job. 
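
As a sketch of that approach, a cron entry can re-apply the policy shown above with different limits at different times of day.  The schedule times and bandwidth values below are illustrative only, and this assumes the qs CLI on the appliance is already configured with credentials to reach the grid.
<pre>
# /etc/cron.d/qs-qos-schedule -- illustrative off-peak/peak QoS adjustment
# Raise the limits during off-peak hours (8pm) ...
0 20 * * * root qs qos-policy-create high-performance --bw-read=600MB --bw-write=600MB
# ... and restore them before the workday starts (6am)
0 6 * * * root qs qos-policy-create high-performance --bw-read=300MB --bw-write=300MB
</pre>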
+
 
+
Note: If you set specific non-policy QoS settings for a Storage Volume these will override and remove any QoS policy setting associated with the Storage Volume.  The reverse is also true: if you have specific QoS settings for a Storage Volume (eg: 200MB/sec reads, 100MB/sec writes) and then you apply a QoS policy to the volume, the limits set in the policy will override the Storage Volume specific settings.
+
 
+
=== Resizing Storage Volumes ===
+
 
+
QuantaStor supports increasing the size of storage volumes but due to the high probability of data-loss we do not support shrink.  (n.b. all storage volumes are raw files within the storage pool filesystem (usually XFS) so you could theoretically experiment by making a copy of your storage volume file, manually truncate it, rename the old one and then rename the truncated version back into place.  This is not recommended, but it's an example of some of the low-level things you could try in a real pinch given the open nature of the platform.)
+
 
+
=== Creating Snapshots ===
+
 
+
Some key features of QuantaStor volume snapshots include:
+
 
+
* massive scalability
+
** create snapshots in just seconds
+
* supports snapshots of snapshots
+
** you can create snapshots of snapshots of snapshots, ad infinitum.
+
* snapshots are R/W by default, read-only snapshots are also supported
+
* snapshots perform well even when large numbers exist
+
* snapshots are 'thin', that is they are a copy of the meta-data associated with the original volume and not a full copy of all the data blocks.
+
 
+
All of these advanced snapshot capabilities make QuantaStor ideally suited for virtual desktop solutions, off-host backup, and near continuous data protection (NCDP).  If you're looking to get NCDP functionality, just create a 'snapshot schedule' and snapshots can be created for your storage volumes as frequently as every hour.
+
 
+
To create a snapshot or a batch of snapshots, select the storage volume that you wish to snap, right-click on it and choose 'Snapshot Storage Volume' from the menu.
+
 
+
If you do not supply a name then QuantaStor will automatically choose a name for you by appending a "_snap" suffix to the original volume's name. So if you have a storage volume named 'vol1' and you create a snapshot of it, you'll have a snapshot named 'vol1_snap000'.  If you create many snapshots the system will increment the number at the end so that each snapshot has a unique name.
+
 
+
=== Creating Clones ===
+
 
+
Clones represent complete copies of the data blocks in the original storage volume, and a clone can be created in any storage pool in your storage system whereas a snapshot can only be created within the same storage pool as the original.  You can create a clone at any time and while the source volume is in use because QuantaStor creates a temporary snapshot in the background to facilitate the clone process.  The temporary snapshot is automatically deleted once the clone operation completes.  Note also that you cannot use a cloned storage volume until the data copy completes.  You can monitor the progress of the cloning by looking at the Task bar at the bottom of the QuantaStor Manager screen.  (Note: In contrast to clones (complete copies), snapshots are created near instantly and do not involve data movement so you can use them immediately.)
+
 
+
By default QuantaStor systems are set up with clone bandwidth throttled to 200MB/sec across all clone operations on the appliance.  The automatic load balancing ensures minimal impact to workloads due to active clone operations.  The following CLI documentation covers how to adjust the cloning rate limit so that you can increase or decrease it.
+
 
+
<pre>
+
  Volume Clone Load Balancing
+
    qs-util clratelimitget            : Current max bandwidth setting to be divided among active clone operations.
+
    qs-util clratelimitset NN        : Sets the max bandwidth available in MB/sec shared across all clone operations.
+
    qs-util clraterebalance          : Rebalances all active volume clone operations to use the shared limit (default 200).
+
                                      QuantaStor automatically rebalances active clone streams every minute unless
+
                                      the file /etc/clratelimit.disable is present.
+
</pre>
+
 
+
For example, to set the clone operations rate limit to 300MBytes/sec you would need to login to the appliance via SSH or via the console and run this command:
+
 
+
<pre>sudo qs-util clratelimitset 300</pre>
+
 
+
The new rate is applied automatically to all active clone operations.  QuantaStor automatically rebalances clone operation bandwidth, so if you have a limit of 300MB/sec with 3x clone operations active, then each clone operation will be replicating data at 100MB/sec. If one of them completes first then the other two will accelerate up to 150MB/sec, and when a second one completes the last clone operation will accelerate to 300MB/sec.  Note also that cloning a storage volume to a new storage volume in the same storage pool will be rate limited to whatever the current setting is but because the source and destination are the same it will have double the impact on workloads running in that storage pool.  As such, if you are frequently cloning volumes within the same storage pool and it is impacting workloads you will want to decrease the clone rate to something lower than the default of 200MB/sec.  In other cases where you're using pure SSD storage you will want to increase the clone rate limit.
+
 
+
=== Restoring from Snapshots ===
+
 
+
If you've accidentally lost some data by inadvertently deleting files in one of your storage volumes, you can recover your data quickly and easily using the 'Restore Storage Volume' operation.  To restore your original storage volume to a previous point in time, first select the original, then right-click on it and choose "Restore Storage Volume" from the pop-up menu.  When the dialog appears you will be presented with all the snapshots of that original from which you can recover.  Just select the snapshot that you want to restore to and press OK.  Note that you cannot have any active sessions to the original or the snapshot storage volume when you restore; if you do you'll get an error.  This is to prevent the restore from taking place while the OS has the volume in use or mounted, as this would lead to data corruption.
+
<pre>
+
WARNING: When you restore, the data in the original is replaced with the data in
+
the snapshot.  As such, there's a possibility of losing data as everything that
+
was written to the original since the time the snapshot was created will be lost. 
+
Remember, you can always create a snapshot of the original before you restore it
+
to a previous point-in-time snapshot.
+
</pre>
+
 
+
=== Converting a Snapshot into a Primary (btrfs only) ===
+
 
+
A primary volume is simply a storage volume that's not a snapshot of any other storage volume.  With BTRFS based Storage Pools you can take any snapshot and make it a primary storage volume very easily.  Just select the storage volume in QuantaStor Manager, then right-click and choose 'Modify Storage Volume' from the pop-up menu.  Once you're in the dialog, just un-check the box marked "Is Snapshot?".  If the snapshot itself has snapshots then those snapshots will be connected to the previous parent volume of the snapshot.  This conversion of snapshot to primary does not involve data movement so it's near instantaneous.  After the snapshot becomes a primary it will still have data blocks in common with the storage volume it was previously a snapshot of, but that relationship is cleared from a management perspective.  BTRFS is not yet ready for production use so at this time we recommend using ZFS based storage pools.  ZFS based storage volume snapshots must be cloned to make them a 'primary'.
+
 
+
== Managing Storage Provisioning Tiers (Storage Pool Groups) ==
+
Storage Tiers provide a way of grouping Storage Pools together for simplified provisioning from automated systems and frameworks like OpenStack Cinder.  A common problem in cloud environments is that you may have many storage appliances in a grid each with one or more storage pools.  This creates a conundrum when you go to provision a Storage Volume or Network Share as several questions must be answered to find the ideal placement for your new volume or share including:
+
* Which Storage Pool has the most free space?
+
* Which Storage Pool has the least number of storage volumes in it?
+
* Which Storage Pool has the least amount of over-provisioning?
+
* Does the Storage Pool meet the IOPS and throughput requirements of my workload?
+
When you create a Storage Tier you select one or more Storage Pools to associate it with and then set various attributes on the Storage Tier so that your provisioning system can make intelligent decisions about which Storage Tier to provision from based on the needs of the workload.  Placement is automatic so you don't have to think about which storage pool within the storage tier the storage volume should be provisioned from.
+
 
+
[[File:Storagetier1.png]]
+
 
+
All REST APIs and CLI commands which take a Storage Pool name or ID as an argument can have the pool identifier substituted with the name or ideally the UUID of a Storage Tier instead.
+
 
+
=== Over-provisioning with Storage Tiers ===
+
 
+
Note that Storage Tiers provide a convenient grouping mechanism and intelligent placement but you cannot create a storage volume which is larger than the pool with the largest available free space in the Storage Tier.  As an example, if you have three storage pools in your storage tier, Pool1 (10TB free), Pool2 (20TB free), Pool3 (4TB free) and you request allocation of a new Storage Volume which is 22TB the provisioning will fail because there are no pools available with that much free space.  Note however that you could allocate 10x thin-provisioned storage volumes which are 6TB in size because newly thin-provisioned volumes use a negligible amount of space until data has been written to them.  So Storage Tiers do provide support for over-provisioning storage pools with some limits.
+
 
+
== Managing Cloud Containers ==
+
 
+
QuantaStor can be configured to be a NAS gateway to object storage provided by OpenStack SWIFT, SoftLayer Object Storage, Amazon S3, and Google Cloud Storage.  This is done by creating one or more Cloud Containers on your QuantaStor appliance which then show up as Network Shares which can be accessed via the standard NFS and CIFS protocols.  Cloud Containers use the [https://bitbucket.org/nikratio/s3ql/overview s3ql] filesystem which provides compression, encryption and deduplication of data so that you get maximum utilization of your cloud storage.
+
 
+
Each Cloud Container can have a unique Passphrase/Encryption key. Each Cloud Container is represented by a bucket in your public or private object storage cloud that starts with a qs-bucket-* prefix. Because the data placed into the Cloud Container is written in a reduced and encrypted format, the data is secure and cannot be accessed directly via REST APIs.
+
 
+
Note that a given Cloud Container can only be used by one QuantaStor appliance at a time. If you need to add it to another appliance, please disable it from the first appliance before activating the container on another.
+
 
+
=== Adding Cloud Provider Credentials ===
+
 
+
To begin using the Cloud Container feature, you will need to provide the QuantaStor appliance access to your Object Storage using the '[[Add_Credentials|Add Cloud Provider Credentials]]' dialog available in the Cloud Container tab of the WebUI as shown in the example below:
+
 
+
[[File:Adding_Cloud_Provider_Credentials.png]]
+
 
+
The credentials for your object storage cloud can be found in the security/authentication pages of your Amazon S3, Google Cloud Storage, or SoftLayer Cloud Storage accounts. Once your credentials have been added you can begin creating cloud containers within QuantaStor.
+
 
+
=== Creating a Cloud Container ===
+
 
+
Create a Cloud Container using the '[[Create_Cloud_Storage_Container|Create Cloud Storage Container]]' dialog. In the dialog, specify a name for the Cloud Container, the Cloud Provider, the Location for the Cloud Provider object storage you wish to use (public or private), which of the appliances in your grid you wish to attach to the Cloud Container, and the Passphrase/Encryption Key to secure the object storage, then click OK to create your Cloud Container.
+
 
+
This is shown in the example below:
+
 
+
[[File:example_Create_Cloud_Storage_Container.png]]
+
 
+
 
+
Once the Cloud Container has been created, you can configure Network Share users and permissions via the [[QuantaStor_Administrators_Guide#Managing_Network_Shares|Network Shares]] section of the Web interface: [[QuantaStor_Administrators_Guide#Managing_Network_Shares|Managing Network Shares]]
+
 
+
=== Advanced Cloud Container Topics ===
+
 
+
Offline/Disable access to a Cloud Container on a QuantaStor Appliance: [[Disable_Cloud_Storage_Container|Disable Cloud Container]]
+
 
+
Enabling access to an offline Cloud Container on a QuantaStor Appliance: [[Enable_Cloud_Storage_Container|Enable Cloud Container]]
+
 
+
Troubleshooting and Repairing a Cloud Container if a Cloud Container does not mount: [[Repair_Cloud_Storage_Container|Repair Cloud Container]]
+
 
+
Exporting/removing access to a Cloud Container from a QuantaStor Appliance: [[Remove_Cloud_Storage_Container|Export/Remove Cloud Container]]
+
 
+
Importing existing Cloud Containers to a QuantaStor Appliance: [[Add_Cloud_Storage_Container|Import/Add Cloud Container]]
+
 
+
Permanently deleting a Cloud Container, its objects and its bucket in the Object Storage: [[Delete_Cloud_Storage_Container|Delete Cloud Container]]
+
 
+
== Managing Backup Policies ==
+
 
+
Within QuantaStor you can create ''backup policies'' where data from any NFS or CIFS share on your network can be automatically backed up for you to your QuantaStor appliance.  To create a ''backup policy'' simply right-click on the ''Network Share'' where you want the data to be backed up to and choose the 'Create Backup Policy..' option from the pop-up menu.
+
Backup policies will do a CIFS/NFS mount of the specified NAS share on your network locally to the appliance in order to access the data to be archived.  When the backup starts it creates a Backup Job object which you will see in the web interface and you can see the progress of any given ''backup job'' by monitoring it in the '''Backup Jobs''' tab in the center-pane of the web interface after you select the ''Network Share'' to which the backup policy is attached.
+
 
+
[[File:Create Backup Policy.png|300px]]
+
 
+
=== Creating Backup Policies ===
+
 
+
Backup policies in QuantaStor support heavy parallelism so that very large NAS filers with 100m+ files can be easily scanned for changes.  The default level of parallelism is 32 concurrent scan+copy threads but this can be reduced or increased to 64 concurrent threads. 
+
 
+
==== Backup to Network Share ====
+
 
+
This is where you indicate where you want the data to be backed up to on your QuantaStor appliance. With QuantaStor backup policies your data is copied from a NAS share on the network to a ''network share'' on your QuantaStor appliance.
+
 
+
==== Policy Name ====
+
 
+
This is a friendly name for your backup policy.  If you are going to have multiple policies doing backups to the same ''network share'' then each policy will be associated with a directory with the name of the policy.  For example, if your share is called ''media-backups'' and you have a policy called 'project1' and a policy called 'project2' then there will be sub-directories under the ''media-backups'' share for ''project1'' and ''project2''.  In order to support multiple policies per ''Network Share'' you must select the option which says '''Backup files to policy specific subdirectory'''.  If that is not selected then only one policy can be associated with the network share and the backups will go into the root of the share to form an exact mirror copy.
+
 
+
==== Selecting the Backup Source ====
+
 
+
In the section which says '''Hostname / IP Address:''' enter the IP address of the NAS filer or server which is sharing the NAS folder you want to backup.  For NFS shares you should enter the IP address and press the '''Scan''' button.  If NFS shares are found they'll show up in the CIFS/NFS Export: list.  For CIFS share backups you'll need to enter the network path to the share in a special format starting with double forward slashes like so:  '''//username%password@ipaddress'''.  For example, you might scan for shares on a filer located at 10.10.5.5 using the SMB credentials of 'admin' and password 'password123' using this path: '''//admin%password123@10.10.5.5'''.  In AD environments you can also include the domain in the SMB path like so '''//DOMAIN/username%password@ipaddress'''.
+
 
+
[[File:qs_bp_create.png]]
+
 
+
==== Policy Type ====
+
 
+
You can indicate that you want the backup policy to backup everything by selecting 'Backup All Files' or you can do a 'Sliding Window Backup'.  For backing up data from huge filers with 100m+ files it is sometimes useful to only backup and maintain a ''sliding window'' of the most recently modified or created files.  If you set the ''Retention Period'' to 60 days then all files that have been created or modified within the last 60 days will be retained.  Files that are older than that will be purged from the backup folder.
+
 
+
Be careful with the ''Backup All Files'' mode.  If you have a Purge Policy enabled it will remove any files from the ''network share'' which were not found on the source NAS share that's being backed up.  If you attach such a backup policy to an existing share which has data on it, the purge policy will remove any data/files that exist in your QuantaStor Network Share which are not on the source NAS share on the remote filer.  So use caution with this as ''Backup All Files'' really means ''maintain a mirror copy of the remote NAS share''.
+
 
+
==== Purge Policy ====
+
 
+
Backup policies may run many times per day to quickly backup new and modified files.  A scan to determine what needs purging is typically less important so it is more efficient to run it nightly rather than with each and every backup job.  For the ''Sliding Window'' policies the purge phase will throw out any files that are older than the retention period.  For the ''Backup All Files'' policies there is a comparison that is done and any files that are no longer present in the NAS source share are removed from the backup.  The Purge Policy can also be set to 'Never delete files' which will backup files to your Network Share but never remove them. 
+
 
+
==== Backup Logs ====
+
 
+
If you select 'Maintain a log of each backup' then a backup log file will be written out after each backup.  Backup logs can be found on your QuantaStor appliance in the /var/log/backup-log/POLICY-NAME directory.  The purge process produces a log with the .purgelog suffix and the backup process produces a log with the .changelog suffix.
+
 
+
=== pwalk ===
+
 
+
pwalk is an open source command line utility included with QuantaStor (see /usr/bin/pwalk).  It was originally written by John Dey to work as a parallelized version of the 'du -a' unix utility suitable for scanning filesystems with 100s of millions of files.  It was then reworked at OSNEXUS to support backups, sliding window backups, additional output formats, etc.  If you type 'pwalk' by itself at the QuantaStor ssh or console window you'll see the following usage page / documentation.  The pwalk utility has three modes: 'walk' which does a parallelized crawl of a directory, 'copy' which does a backup from a SOURCEDIR to a specified --targetdir, and 'purge' which removes files in the PURGEDIR which are not found in the --comparedir.  In general you would never need to use pwalk directly but the documentation is provided here to support special use cases like custom backup or replication cron jobs.
+
 
+
<pre>
+
pwalk version 3.1 Oct 22nd 2013 - John F Dey john@fuzzdog.com, OSNEXUS, eng@osnexus.com
+
 
+
Usage :
+
pwalk --help --version
+
          Common Args :
+
            --dryrun : use this to test commands
+
                        without making any changes to the system
+
      --maxthreads=N : indicates the number of threads (default=32)
+
          --nototals : disables printing of totals after the scan
+
              --dots : prints a dot and total every 1000 files scanned.
+
              --quiet : no chatter, speeds up the scan.
+
            --nosnap : Ignore directories with name .snapshot
+
              --debug : Verbose debug spam
+
        Output Format : CSV
+
              Fields : DateStamp,"inode","filename","fileExtension","UID",
+
                        "GID","st_size","st_blocks","st_mode","atime",
+
                        "mtime","ctime","File Count","Directory Size"
+
 
+
Walk Usage :
+
pwalk SOURCEDIR
+
        Command Args :
+
            SOURCEDIR : Fully qualified path to the directory to walk
+
 
+
Copy/Backup Usage :
+
pwalk --targetdir=TARGETDIR SOURCEDIR
+
pwalk --retain=30 --targetdir=TARGETDIR SOURCEDIR
+
        Command Args :
+
          --targetdir : copy files to specified TARGETDIR
+
              --atime : copy if access time change (default=no atime)
+
  --backuplog=LOGFILE : log all files that were copied.
+
  --status=STATUSFILE : write periodic status updates to specified file
+
            --retain : copy if file ctime or mtime within retention period
+
                        specified in days. eg: --retain=60
+
            --nomtime : ignore mtime (default=use mtime)
+
            SOURCEDIR : Fully qualified path to the directory to walk
+
 
+
Delete/Purge Usage :
+
pwalk --purge [--force] --comparedir=COMPAREDIR PURGEDIR
+
pwalk --purge [--force] --retain=N PURGEDIR
+
        Command Args :
+
        --comparedir : compare against this dir but dont touch any files
+
                        in it. comparedir is usually the SOURCEDIR from
+
                        a prior copy/sync stage.
+
              --purge : !!WARNING!! this deletes files older than the
+
                        retain period -OR- if retain is not specified
+
                        --comparedir is required. The comparedir is
+
                        compared against the specified dir and any files
+
                        not found in the comparedir are purged.
+
              --force : !NOTE! default is a *dry-run* for purge, you must
+
                        specify --force option to actually purge files
+
              --atime : keep if access time within retain period
+
            --retain : keep if file ctime or mtime within retention period
+
                        specified in days. eg: --retain=60
+
</pre>
+
 
+
OSNEXUS modified version of the C source code for pwalk is available here [[pwalk.c]]. The original version is available
+
[https://github.com/fizwit/filesystem-reporting-tools/blob/master/pwalk.c here].
+
 
+
== IO Tuning ==
+
 
+
=== ZFS Performance Tuning ===
+
 
+
One of the most common tuning tasks for ZFS is setting the size of the ARC cache.  If your system has less than 10GB of RAM you should just use the default, but if you have 32GB or more then it is a good idea to increase the size of the ARC cache to make maximum use of the available RAM in your storage appliance.  Before you set the tuning parameters you should run 'top' to verify how much RAM you have in the system.  Next, run the following commands to set the ARC cache size as a percentage of the available RAM.  For example, to set the ARC cache to use a maximum of 80% of the available RAM and a minimum of 50% of the available RAM in the system, run these, then reboot:
+
<pre>
+
qs-util setzfsarcmax 80
+
qs-util setzfsarcmin 50
+
</pre>
+
 
+
Example:
+
<pre>
+
sudo -i
+
qs-util setzfsarcmax 80
+
INFO: Updating max ARC cache size to 80% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1672478720 bytes (1595 MB)
+
qs-util setzfsarcmin 50
+
INFO: Updating min ARC cache size to 50% of total RAM 1994 MB in /etc/modprobe.d/zfs.conf to: 1045430272 bytes (997 MB)
+
</pre>
+
 
+
 
+
To see how many cache hits you are getting you can monitor the ARC cache while the system is under load with the qs-iostat command:
+
 
+
<pre>
+
qs-iostat -a
+
 
+
Name                              Data
+
---------------------------------------------
+
hits                              237841
+
misses                            1463
+
c_min                            4194304
+
c_max                            520984576
+
size                              16169912
+
l2_hits                          19839653
+
l2_misses                        74509
+
l2_read_bytes                    256980043
+
l2_write_bytes                    1056398
+
l2_cksum_bad                      0
+
l2_size                          9999875
+
l2_hdr_size                      233044
+
arc_meta_used                    4763064
+
arc_meta_limit                    390738432
+
arc_meta_max                      5713208
+
 
+
 
+
ZFS Intent Log (ZIL) / writeback cache statistics
+
 
+
Name                              Data
+
---------------------------------------------
+
zil_commit_count                  876
+
zil_commit_writer_count          495
+
zil_itx_count                    857
+
</pre>
+
 
+
A description of the different metrics for ARC, L2ARC and ZIL are below.
+
 
+
<pre>
+
hits = the number of client read requests that were found in the ARC
+
misses                            = the number of client read requests that were not found in the ARC
+
c_min = the minimum size of the ARC allocated in the system memory.
+
c_max = the maximum size of the ARC that can be allocated in the system memory.
+
size                              = the current ARC size
+
l2_hits = the number of client read requests that were found in the L2ARC
+
l2_misses                         = the number of client read requests that were not found in the L2ARC
+
l2_read_bytes                     = The number of bytes read from the L2ARC ssd devices.
+
l2_write_bytes = The number of bytes written to the L2ARC ssd devices.
+
l2_cksum_bad                      = The number of checksums that failed the check on an SSD (a number of these occurring on the L2ARC usually indicates a fault for an SSD device that needs to be replaced)
+
l2_size = the current L2ARC size
+
l2_hdr_size = The size of the L2ARC reference headers that are present in ARC Metadata
+
arc_meta_used = The amount of ARC memory used for Metadata
+
arc_meta_limit =  The maximum limit for the ARC Metadata
+
arc_meta_max = The maximum value that the ARC Metadata has achieved on this system
+
 
+
zil_commit_count = How many ZIL commits have occurred since bootup
+
zil_commit_writer_count = How many ZIL writers were used since bootup
+
zil_itx_count  = the number of indirect transaction groups that have occurred since bootup
+
</pre>
+
 
+
=== Pool Performance Profiles ===
+
 
+
Read-ahead and request queue size adjustments can help tune your storage pool for certain workloads.  You can also create new storage pool IO profiles by editing the /etc/qs_io_profiles.conf file.  The default profile looks like this and you can duplicate it and edit it to customize it.
+
 
+
<pre>
+
[default]
+
name=Default
+
description=Optimizes for general purpose server application workloads
+
nr_requests=2048
+
read_ahead_kb=256
+
fifo_batch=16
+
chunk_size_kb=128
+
scheduler=deadline
+
</pre>
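For example, a hypothetical custom profile for large-block streaming workloads might look like the following (the section name 'media-streaming' and the values shown are illustrative only; tune them for your own hardware and workload):

<pre>
[media-streaming]
name=Media Streaming
description=Example profile favoring large sequential reads
nr_requests=2048
read_ahead_kb=1024
fifo_batch=32
chunk_size_kb=256
scheduler=deadline
</pre>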
+
 
+
If you edit the profiles configuration file be sure to restart the management service with 'service quantastor restart' so that your new profile is discovered and is available in the web interface.
+
 
+
=== Storage Pool Tuning Parameters ===
+
 
+
QuantaStor has a number of tunable parameters in the /etc/quantastor.conf file that can be adjusted to better match the needs of your application.  That said, we've spent a considerable amount of time tuning the system to efficiently support a broad set of application types so we do not recommend adjusting these settings unless you are a highly skilled Linux administrator.
+
The default contents of the /etc/quantastor.conf configuration file are as follows:
+
<pre>
+
[device]
+
nr_requests=2048
+
scheduler=deadline
+
read_ahead_kb=512
+
 
+
[mdadm]
+
chunk_size_kb=256
+
parity_layout=left-symmetric
+
</pre>
+
 
+
There are tunable settings for device parameters which are applied to the storage media (SSD/SATA/SAS), as well as settings like the MD device array chunk-size and parity configuration settings used with XFS based storage pools.  These configuration settings are read from the configuration file dynamically each time one of the settings is needed so there's no need to restart the quantastor service.  Simply edit the file and the changes will be applied to the next operation that utilizes them.  For example, if you adjust the chunk_size_kb setting for mdadm then the next time a storage pool is created it will use the new chunk size.  Other tunable settings like the device settings will automatically be applied within a minute or so of your changes because the system periodically checks the disk configuration and updates it to match the tunable settings. 
+
Also, you can delete the quantastor.conf file entirely and the system will automatically fall back to the defaults listed above.
+
 
+
== Security Configuration ==
+
 
+
=== Change Your Passwords ===
+
 
+
One of the most important steps in configuring a new QuantaStor appliance is changing the admin password to something other than the default.  Start by logging into the console using the 'qadmin' account and 'qadmin' password, then type 'passwd' and change the password from 'qadmin' to something else.  Next, log in to the web management interface and change the 'admin' account password from 'password' to something else.
+
 
+
=== Port Lock-down via IP Tables configuration ===
+
 
+
QuantaStor ships with non-encrypted port 80 / HTTP access to the appliance enabled.  For more secure installations it is recommended that port 80 and non-essential services be blocked.  To disable port 80 access run this command:
+
<pre>
+
sudo qs-util disablehttp
+
</pre>
+
To re-enable port 80 access use:
+
<pre>
+
sudo qs-util enablehttp
+
</pre>
+
Note that the web management interface will still be accessible via https on port 443 after you disable http access.
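To quickly verify that the change took effect you can probe both ports from another machine; for example (replace the hostname with your appliance's address, this is just an illustrative check):

<pre>
curl -I  http://quantastor-appliance/     # should now fail / be refused
curl -kI https://quantastor-appliance/    # should still return response headers
</pre>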
+
 
+
=== Changing the SSL Key for QuantaStor Web Management Interface ===
+
 
+
The SSL key provided with QuantaStor is a common self-signed SSL key that is pre-generated and included with all deployments. This is generally OK for most deployments on private networks but for increased security it is recommended to generate a new SSL keystore for the Apache Tomcat server used to serve the QuantaStor web management interface. 
+
 
+
==== Keystore Password Selection ====
+
'''IMPORTANT NOTE''' You must set the password for the keystore to 'changeit' (without the quotes) as this is the default password that Tomcat uses to unlock the keystore.  If you do not want to use the default password ('changeit') you can select a password of your choice but you will also need to manually edit the connector section of the /opt/osnexus/quantastor/tomcat/conf/server.xml file to add a line containing the keystore password (example: keystorePass="YOURPASSWORD").  Here's an example of what that will look like if you select the password "YOURPASSWORD".
+
 
+
<pre>
+
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
+
              maxThreads="150" scheme="https" secure="true"
+
              keystoreFile="/opt/osnexus/quantastor/tomcat/conf/keystore"
+
              keystorePass="YOURPASSWORD"
+
              clientAuth="false" sslProtocol="TLS" />
+
</pre>
+
 
+
==== New Keystore Generation ====
+
 
+
To generate a new keystore you'll need to do the following steps.
+
 
+
* Log in to QuantaStor via the console or via SSH, then generate a keystore using the keytool utility. It will prompt you for information including your name, organization, and location, and will produce a new .keystore file in the current directory.  Remember to use the default Tomcat 'changeit' password for the keystore unless you plan to edit the /opt/osnexus/quantastor/tomcat/conf/server.xml file to add your custom keystore password.
+
<pre>
+
keytool -genkey -alias tomcat -keyalg RSA -validity 365
+
</pre>
+
* Next, backup the original keystore file and then overwrite the original with your newly generated keystore file:
+
<pre>
+
cp /opt/osnexus/quantastor/tomcat/conf/keystore ./keystore.qs.conf
+
cp .keystore /opt/osnexus/quantastor/tomcat/conf/keystore
+
mv .keystore keystore.custom
+
</pre>
+
* Finally, restart tomcat services so that the new key is loaded.
+
<pre>
+
service tomcat restart
+
</pre>
+
 
+
'''IMPORTANT NOTE''' If you are using Firefox as your browser, you must clear the browser history in order to clear the old cached key information.  If you don't clear the history you'll see that the "Confirm Security Exception" button will be greyed out and you won't be able to login to your QuantaStor appliance via https. IE and Chrome do not have this issue.
+
 
+
That's the whole process.  Here's an example of what we enter into these fields as OSNEXUS Engineering; you'll want to put your own company name and other details here:
+
 
+
<pre>
+
keytool -genkey -alias qs-tomcat -keyalg RSA -validity 365
+
 
+
Enter keystore password:
+
Re-enter new password:
+
What is your first and last name?
+
  [Unknown]:  OSNEXUS
+
What is the name of your organizational unit?
+
  [Unknown]:  OSNEXUS Engineering
+
What is the name of your organization?
+
  [Unknown]:  OSNEXUS, Inc.
+
What is the name of your City or Locality?
+
  [Unknown]:  Bellevue
+
What is the name of your State or Province?
+
  [Unknown]:  Washington
+
What is the two-letter country code for this unit?
+
  [Unknown]:  US
+
Is CN=OSNEXUS, OU=OSNEXUS Engineering, O="OSNEXUS, Inc.", L=Bellevue, ST=Washington, C=US correct?
+
  [no]:  yes
+
</pre>
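Optionally, you can verify the contents of the keystore that Tomcat will load by using the keytool -list command against the deployed keystore file (it will prompt for the keystore password, i.e. 'changeit' unless you customized server.xml):

<pre>
keytool -list -keystore /opt/osnexus/quantastor/tomcat/conf/keystore
</pre>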
+
 
+
== Encryption Support ==
+
QuantaStor supports both software and hardware encryption.  Software encryption leverages the Linux-based LUKS key management system, which is hardware independent, supports a broad set of encryption algorithms, and provides flexible configuration options for key location.  For hardware-based encryption you must use an LSI MegaRAID or equivalent OEM version (IBM, Dell, SuperMicro, etc.) hardware RAID controller with the SafeStore license key applied, along with one or more enterprise SED SAS drives.
+
 
+
=== Hardware Encryption ===
+
 
+
There are three CLI commands for setting up hardware encryption using the 'qs' command line utility.  They are 'hw-unit-encrypt', 'hw-controller-create-security-key', and 'hw-controller-change-security-key'.  The process for setting up encryption is as follows:
+
 
+
1) Create a hardware RAID unit using the 'Create Unit..' dialog in the QuantaStor web management interface as per your workload requirements (RAID10, RAID6, etc). 
+
 
+
2) Go to the console/ssh window and assign a security key to the controller if one is not already set.
+
<pre>
+
    hw-controller-create-security-key [hwc-create-security-key]
+
      :: Create the security key for encryption on SED/FDE-enabled drives on hardware RAID
+
        controller.
+
        <--controller>  :: Name or ID of a hardware RAID controller.
+
        <--security-key> :: Security key on HW Controller card for encryption on FDE-enabled secure
+
                            disk drives.
+
</pre>
+
 
+
3) Encrypt the hardware RAID unit that you created in step one. 
+
<pre>
+
    hw-unit-encrypt [hwu-encrypt]
+
      :: Enable hardware SED/FDE encryption for the specified hardware RAID unit.
+
        <--unit>        :: Name of a hardware RAID unit or its unique ID.
+
        [--options]      :: Special options to hardware encryption policy.
+
</pre>
+
 
+
4) Create a new storage pool using the now encrypted RAID unit
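Putting steps 2 and 3 together, a command-line session might look like the following sketch; the controller name, unit name, and security key shown here are placeholders, so substitute the values from your own system:

<pre>
qs hw-controller-create-security-key --controller=controller-0 --security-key=Quanta5tor-Key1
qs hw-unit-encrypt --unit=unit-0
</pre>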
+
 
+
Note that your system will be set up so that no pass-phrase is required at boot time.  In this mode you are protected against someone taking all the hard drives from your system, but if they can take the entire server and/or RAID controller along with the disks then the drives can be decrypted without a password.  In general the no-pass-phrase option is preferred so that the system can be rebooted without administrative involvement, but it is less secure.
+
 
+
==== Setting Up Boot Passphrase ====
+
As noted above, the hw-controller-create-security-key command will set up the hardware RAID controller so that no pass-phrase is required at boot time.  To change the keys so that a pass-phrase is required at boot time you'll need to use the MegaCli CreateSecurityKey command to set a security key for the controller that includes a pass-phrase. Here's a snippet of the LSI documentation on how to create a key.
+
 
+
<pre>
+
Syntax: MegaCli -CreateSecurityKey -SecurityKey sssssssssss | [-Passphrase sssssssssss] |[-KeyID kkkkkkkkkkk] -aN
+
 
+
Description:
+
        Command enables security feature on specified controller.
+
        The possible parameters are:
+
        SecurityKey: Security key will be used to generate lock key when drive security is enabled.
+
        Passphrase: Pass phrase to provide additional security.
+
        KeyID: Security key Id.
+
 
+
Convention:
+
          -aN        N specifies the adapter number for the command.
+
        Note:
+
        -      Security key is mandatory and pass phrase is optional.
+
        -      Security key and pass phrase have special requirements.
+
        Security key & pass phrase should have 8 - 32 chars, case-sensitive; 1 number, 1 lowercase letter, 1 uppercase letter, 1 non-alphanumeric character (no spaces).
+
      - In case of Unix based systems, if the character '!' is used as one of the input characters in the value of Security key or pass phrase, it must be preceded by a back slash character('\').
+
</pre>
+
 
+
A good way to generate a secure passphrase and/or security key is to use the uuidgen tool as follows:
+
 
+
<pre>
+
uuidgen | cut -c 25-
+
</pre>
+
 
+
This will output a randomly generated string of characters that looks like '6bb45eb7b615'.  You can then run the tool like so but be sure to replace the generated text '1dabc3b0d467' and '6bb45eb7b615' with your own unique keys generated by the uuidgen tool:
+
 
+
<pre>
+
MegaCli -CreateSecurityKey -SecurityKey 1dabc3b0d467 -Passphrase 6bb45eb7b615 -a0
+
</pre>
+
 
+
Be sure to write down both keys someplace safe.  The pass-phrase will be needed every time the system boots and the security key will be needed in the event that you need to replace the RAID controller.
+
 
+
=== Software Encryption ===
+
 
+
QuantaStor uses the [http://en.wikipedia.org/wiki/Linux_Unified_Key_Setup LUKS (Linux Unified Key Setup)] system for key management but also comes with tools to greatly simplify the configuration and setup of encryption.  Specifically, the qs-util CLI utility comes with a series of additional commands to encrypt disk devices including ''cryptformat'', ''cryptopen'', ''cryptclose'', ''cryptdestroy'', and ''cryptswap''.  There's also a ''devicemap'' command which will scan for and display a list of devices available on the system.
+
 
+
<pre>
+
  Device Encryption Commands
+
    qs-util cryptformat <device> [keyfile]  : Encrypts the specified device using LUKS format, generates key if needed.
+
    qs-util cryptopen <device> [/path/to/keyfile.key] [updatecrypttab] : Opens the specified LUKS encrypted device.
+
    qs-util cryptopenall            : Opens all encrypted devices using keys in /etc/cryptconf/keys
+
    qs-util cryptclose <device>      : Closes the specified LUKS encrypted device.
+
    qs-util cryptcloseall            : Closes all LUKS encrypted devices.
+
    qs-util cryptdestroy <device>    : Closes the LUKS device and deletes the keys and header backups.
+
    qs-util cryptswap <device>      : Enables swap device encryption, updates /etc/fstab and /etc/crypttab.
+
    qs-util crypttabrepair          : Recreates /etc/crypttab by trying all keys with all LUKS devices
+
 
+
</pre>
+
 
+
==== Summary of Software Encryption Setup Procedure ====
+
 
+
1. qs-util cryptswap
+
 
+
2. qs-util devicemap
+
* Shows a list of the devices on the system
+
 
+
3. qs-util cryptformat <device>
+
* Formats the specified device with an encryption header and sets up /etc/crypttab to automatically open the device at boot time
+
 
+
4. Use the ''Scan for Disks..'' command and your encrypted (''dm-name-enc-'') device(s) will appear in the QuantaStor web management interface.
+
* There is a known issue where you must log out and log back in to the web interface after running Scan for Disks; doing so will clean up the list of disks.
+
 
+
5. Create a new storage pool using your encrypted device(s)
+
 
+
Note also that if you are using SSD caching you must also cryptformat your SSD read/write cache devices before using them with an encrypted pool.  If you don't, you will be creating a security hole.
+
 
+
==== Setup: Selecting Drives for Encryption ====
+
The first step in setting up software encryption is selecting the drives to be encrypted.  Use the 'qs-util devicemap' command and you'll see a list of devices that will look something like this:
+
 
+
<pre>
+
/dev/sdj        /dev/disk/by-id/scsi-350000393a8c96130, TOSHIBA, MK1001TRKB, Y1M0A01HFM16
+
/dev/sdv        /dev/disk/by-id/scsi-350000393a8c960c4, TOSHIBA, MK1001TRKB, Y1M0A011FM16
+
/dev/sdt        /dev/disk/by-id/scsi-350000393a8c960b4, TOSHIBA, MK1001TRKB, Y1M0A00XFM16
+
/dev/sdu        /dev/disk/by-id/scsi-350000393a8c960a4, TOSHIBA, MK1001TRKB, Y1M0A00TFM16
+
/dev/sdr        /dev/disk/by-id/scsi-350000393a8c960bc, TOSHIBA, MK1001TRKB, Y1M0A00ZFM16
+
</pre>
+
 
+
Cross-check the device list against the Physical Disks section of the QuantaStor web management interface so that you can be sure you're selecting the correct disks. 
+
'''NOTE: You cannot encrypt drives that already have data on them.'''  You must set up encryption on the drives ''before'' you create the storage pool on top of them.
+
 
+
==== Setup: Formatting Drives with Encryption Header ====
+
 
+
Use the ''qs-util cryptformat <device>'' command to encrypt devices.  WARNING: Any data on these drives will be lost, as cryptformat imprints the device with an encryption header.  For example, to encrypt the devices listed in the previous section, run these commands:
+
 
+
<pre>
+
qs-util cryptformat /dev/sdj
+
qs-util cryptformat /dev/sdv
+
qs-util cryptformat /dev/sdt
+
qs-util cryptformat /dev/sdu
+
qs-util cryptformat /dev/sdr
+
</pre>
+
 
+
The ''cryptformat'' command does several things:
+
# Generates a new key (1MB) for the device using /dev/urandom and stores it in /etc/cryptconf/keys/
+
# Formats (luksFormat) the device with the new encryption header using the generated key (default is AES 256-bit encryption)
+
# Makes a backup of the encryption header and stores it in /etc/cryptconf/headers/
+
# Updates the /etc/crypttab configuration file so that the appliance automatically opens the device at boot time
+
# Opens (luksOpen) the device using the generated key so that you can start using it
+
 
+
Note that even though the devices are specified by their short names (/dev/sdj), the utility automatically looks up the correct persistent device name with the ''scsi-'' prefix so that the encryption driver can locate the correct device even if the device lettering changes after a reboot.
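If you want to double-check what ''cryptformat'' produced, a couple of standard LUKS/device-mapper commands can be used (shown here against the /dev/sdj example device from above; your device names will differ):

<pre>
cryptsetup luksDump /dev/sdj        # shows the LUKS header and key slots on the device
ls /dev/mapper/ | grep enc-scsi-    # lists the opened encrypted device mappings
</pre>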
+
 
+
==== Hardware Accelerated Encryption (AES-NI) ====
+
 
+
QuantaStor supports Intel's AES-NI hardware accelerated encryption by default.  In testing we found AES-NI performance on a basic server to be around 1.1GB/sec when enabled (the default), while the same encrypted device with the AES-NI drivers removed dropped to 145MB/sec, so AES-NI boosts the performance of encrypted storage pools by roughly 7.5x. 
+
 
+
There are no special steps needed to activate AES-NI support.  You can run this command to ensure that the drivers are properly loaded: <pre>lsmod | grep aesni_intel</pre>
+
You should see output which includes the aesni_intel driver like this:
+
<pre>
+
aesni_intel          172032  2
+
aes_x86_64            20480  1 aesni_intel
+
ablk_helper            16384  5 aesni_intel,twofish_avx_x86_64,serpent_avx2,serpent_avx_x86_64,serpent_sse2_x86_64
+
cryptd                24576  4 aesni_intel,ghash_clmulni_intel,ablk_helper
+
lrw                    16384  6 aesni_intel,twofish_avx_x86_64,twofish_x86_64_3way,serpent_avx2,serpent_avx_x86_64,serpent_sse2_x86_64
+
glue_helper            16384  6 aesni_intel,twofish_avx_x86_64,twofish_x86_64_3way,serpent_avx2,serpent_avx_x86_64,serpent_sse2_x86_64
+
</pre>
+
If you don't see the aesni_intel driver loaded then you may be using hardware that doesn't have AES-NI support or has it disabled in the BIOS.  For more information on Intel and AMD processors that support AES-NI please see the Wikipedia article on it [https://en.wikipedia.org/wiki/AES_instruction_set#Intel_and_AMD_x86_achitecture here].
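For a rough idea of the raw encryption throughput your CPU can sustain with AES-NI in play, recent versions of cryptsetup include a built-in benchmark (it does not touch any devices; the aes results are the ones relevant to LUKS):

<pre>
cryptsetup benchmark
</pre>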
+
 
+
==== Setup: Custom Key Generation (Optional) ====
+
 
+
You can use the lower-level cryptsetup commands like 'cryptsetup luksFormat' to prepare and open your devices using your own custom generated keys and decryption script.  That is ok, but you must follow the naming convention where devices named ''/dev/disk/by-id/scsi-'' are given the same name with the ''enc-'' prefix added.  For example, the encrypted target name for device /dev/disk/by-id/scsi-35001517bb282023a must be set to enc-scsi-35001517bb282023a.  This is automatically setup for you in the /etc/crypttab file when you use the 'qs-util cryptformat' command.
+
Note also that you don't need to use the /etc/crypttab file to open your devices at boot time.  You can have a custom script that is run from /etc/rc.local or you can have a script that requires a passphrase in order to unlock the keys and open the devices.
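As a sketch of what that naming convention looks like in practice with the lower-level tools (the device ID is the example from above and the key path is a placeholder, substitute your own):

<pre>
cryptsetup luksFormat /dev/disk/by-id/scsi-35001517bb282023a /path/to/mykey.key
cryptsetup luksOpen --key-file /path/to/mykey.key /dev/disk/by-id/scsi-35001517bb282023a enc-scsi-35001517bb282023a
</pre>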
+
 
+
==== Setup: Backing up your Encryption Keys ====
+
You should immediately make a backup of your encryption keys and headers to somewhere secure (NOTE: you can use a tool like FileZilla to SFTP into the appliance and copy the keys from /etc/cryptconf).  You can make an encrypted backup of your key files and headers using this command:
+
 
+
<pre>
+
tar -cjf - /etc/cryptconf/ | openssl des3 -salt > mykeys.tar.enc
+
</pre>
+
 
+
You can then decrypt it later using this command:
+
 
+
<pre>
+
cat mykeys.tar.enc | openssl des3 -d -salt | tar -xvjf -
+
</pre>
+
 
+
You'll see output that looks like this:
+
<pre>
+
enter des-ede3-cbc decryption password:
+
etc/cryptconf/
+
etc/cryptconf/keys/
+
etc/cryptconf/keys/enc-scsi-350000393a8c960c8.key
+
etc/cryptconf/keys/enc-scsi-35001517bb282023a.key
+
etc/cryptconf/keys/enc-scsi-35001517bb2820591.key
+
etc/cryptconf/headers/
+
etc/cryptconf/headers/enc-scsi-35001517bb2820591.header
+
etc/cryptconf/headers/enc-scsi-350000393a8c960c8.header
+
etc/cryptconf/headers/enc-scsi-35001517bb282023a.header
+
</pre>
+
 
+
==== Setup: Securing your Encryption Keys ====
+
The ''qs-util cryptformat'' command places your keys on the appliance boot/system drive in /etc/cryptconf.  On most systems this is a vulnerable place to keep the keys, as the boot drives can be removed from most servers along with the data drives.  If the boot devices have physical locks so that the devices containing the keys cannot be stolen, or if the boot devices are on the interior of the storage appliance chassis where they cannot be easily removed or accessed, then that may be sufficient for your needs.  That said, in most cases you'll want to copy the generated keys somewhere safer and shred the local copies.  Some strategies you might use include:
+
 
+
* Connect a small capacity USB stick/media to the inside of the server and have the /etc/fstab setup to mount the device to /etc/cryptconf.  Now when you cryptformat devices the keys are stored on removable media which cannot be taken from the server without physical access to open the chassis to take out the USB device.
+
* Connect a small capacity USB stick/media to the outside of the server and have the /etc/fstab setup to mount the device to /etc/cryptconf.  Now when you cryptformat devices the keys are stored on removable media which you can remove from the appliance after it has been booted and the devices have been opened.
+
* Copy the keys to a Key Server and have the appliance reference the keys via a custom script which calls 'cryptsetup luksOpen'.  The process for setting this up will depend on your key server software, etc.
+
* Put the keys on a secure NFS/CIFS share which is only accessible to the appliance and which is mounted to /etc/cryptconf automatically at boot time.  In the event that the hardware is stolen the keys are not on the appliance so the data cannot be accessed/decrypted. 
+
* After the system starts, manually run a script that decrypts your key backup (mykeys.tar.enc), which requires a pass-phrase, and places the keys into /tmp.  The script should then use 'cryptsetup luksOpen' to open each device using the decrypted keys and then delete the decrypted keys from /tmp automatically.  Finally, run 'qs pool-start poolname' to start the encrypted storage pool and expose access to the encrypted resources (a minimal sketch of such a script is shown below).
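The following is a minimal sketch of such an unlock script.  It assumes the encrypted key backup created in the backup section above is stored at /root/mykeys.tar.enc and that the pool is named 'encpool'; all paths and names are examples only.

<pre>
#!/bin/bash
# Prompt for the backup pass-phrase and unpack the keys into a temporary directory
mkdir -p /tmp/enckeys
cat /root/mykeys.tar.enc | openssl des3 -d -salt | tar -xvjf - -C /tmp/enckeys

# Open each encrypted device using its matching key file
for keyfile in /tmp/enckeys/etc/cryptconf/keys/enc-scsi-*.key; do
    name=$(basename "$keyfile" .key)
    dev=/dev/disk/by-id/${name#enc-}
    cryptsetup luksOpen --key-file "$keyfile" "$dev" "$name"
done

# Remove the decrypted keys, then start the encrypted storage pool
shred -u /tmp/enckeys/etc/cryptconf/keys/*.key
rm -rf /tmp/enckeys
qs pool-start encpool
</pre>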
+
 
+
Whatever strategy you choose to secure the keys, you'll want to make sure you have backup copies and you'll want to be sure to adjust the /etc/crypttab file accordingly if you change the location of the keys.  The /etc/crypttab is configured automatically when you use the 'qs-util cryptformat <device>' command and it'll look something like this:
+
 
+
<pre>
+
# <target name> <source device>        <key file>      <options>
+
enc-scsi-35001517bb282023a      /dev/disk/by-id/scsi-35001517bb282023a  /etc/cryptconf/keys/enc-scsi-35001517bb282023a.key      luks
+
enc-scsi-35001517bb2820591      /dev/disk/by-id/scsi-35001517bb2820591  /etc/cryptconf/keys/enc-scsi-35001517bb2820591.key      luks
+
swap    /dev/disk/by-id/scsi-SATA_ST32000641AS_9WM288LS-part5  /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256
+
</pre>
+
 
+
If you have copied the keys to a new location such as /mykeys you'll need to update the paths in /etc/crypttab to replace /etc/cryptconf/keys/ with /mykeys/.  Note also that you'll want to shred the old keys after you have copied them to the new location and made backups.  You can do that securely with the ''shred'' command like so:  <pre>shred /etc/cryptconf/keys/enc-scsi-35001517bb282023a.key</pre> which will ensure there are no remnants of the old key left on the boot drive.
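For example, a one-liner like the following (adjust the paths to match your setup) rewrites every key path in /etc/crypttab in place:

<pre>
sed -i 's|/etc/cryptconf/keys/|/mykeys/|g' /etc/crypttab
</pre>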
+
 
+
==== Setup: Swap Device Encryption ====
+
 
+
QuantaStor appliances use swap devices which can temporarily contain a cache of unencrypted data that was in memory.  This is a security risk which is easily resolved by encrypting your swap device like so:
+
 
+
<pre>qs-util cryptswap</pre>
+
 
+
This command updates /etc/fstab and /etc/crypttab so that the system will have an encrypted swap device after the next reboot.  Note that a new random key is generated for the swap device every time the system boots. 
+
 
+
:: '''NOTE:''' You will see an entry in the /etc/crypttab that looks like this:
+
:: <pre>swap    /dev/disk/by-id/scsi-SATA_ST32000641AS_9WM288LS-part5  /dev/urandom    swap,cipher=aes-cbc-essiv:sha256,size=256</pre>
+
:: You will also see an updated entry in the /etc/fstab for your swap device that looks like this:
+
:: <pre>/dev/mapper/swap swap swap defaults 0 0</pre>
+
 
+
==== Setup: Scan for Devices & Create Storage Pool(s) ====
+
 
+
At this point your encrypted devices are all set up. If you have restarted the system you'll see new devices in the '''Physical Disks''' section of the web management interface with device names prefixed with 'dm-name-enc-'; these are the encrypted devices from which you can now create a storage pool.  If you haven't rebooted after completing the above steps you can find your newly encryption-formatted devices by using the 'Scan for Disks..' command in the web management interface, accessible by right-clicking on the Physical Disks section header.
+
 
+
==== Setup: Verifying Encrypted Storage Pool(s) ====
+
 
+
After you have created your encrypted storage pool(s) it is recommended that you reboot to make sure that the swap device is encrypted and that your storage pools have come back online automatically. You can check the swap device by running the swapon command like so:
+
 
+
<pre>swapon -s</pre>
+
 
+
The output should look like this, with the swap device path shown under /dev/mapper/swap:
+
 
+
<pre>
+
Filename                                Type            Size    Used    Priority
+
/dev/mapper/swap                        partition      6281212 0      -1
+
</pre>
+
 
+
Next, look at the encrypted storage pool in the web interface; all of the devices for your pool should start with the ''dm-name-enc-'' prefix.  If the pool is offline, try starting it using 'Start Pool..'.  For the pool to start, the encrypted devices must already have been opened using the naming convention noted above.
+
 
+
== Internal SAS Device Multi-path Configuration ==
+
If your appliance is dual-path connected to a SAS JBOD or has an internal SAS expander with SAS disks you have the option of setting up multiple paths to the SAS devices for redundancy and in some cases improved performance.  If you are not familiar with ZFS or using the Linux shell utilities we strongly recommend getting help with these steps from customer support.
+
=== Multi-path Configuration with RAID Controllers ===
+
If you are using a RAID controller it will internally manage the multiple paths to the device automatically so there is no additional configuration required except to make sure that you have two cables connecting the controller to the SAS expander.
+
=== Multi-path Configuration with HBAs ===
+
For appliances with SAS HBAs there are some additional steps required to setup the QuantaStor appliance for multipath access.  Specifically, you must add entries to the /etc/multipath.conf file then restart the multipath services.
+
==== Configuring the /etc/multipath.conf File ====
+
Because QuantaStor is Linux based, it uses the DMMP (Device Mapper Multi-Path) driver to manage multipathing.  The multipath service can be restarted at any time from the command line using 'service multipath-tools restart'.  Configuration of this service is managed via the configuration file located at /etc/multipath.conf, which contains a set of rules indicating which devices (identified by Vendor / Model) should be managed by the multipath service and which should be ignored. The base configuration is set up so that no multipath management is done for SAS devices, as this is the most common and simplest configuration mode.  To enable multipath management you must add a section to the 'blacklist_exceptions' area of the file indicating the vendor and model of your SAS devices.  The vendor/model information for your SAS devices can be found using the command 'grep Vendor /proc/scsi/scsi'.  To summarize:
+
 
+
* grep Vendor /proc/scsi/scsi
+
** Returns the vendor / model information for your SAS devices
+
* nano /etc/multipath.conf
+
** Add a section to the blacklist_exceptions area for your SAS device, for example (note the use of the wildcard '*'); a fuller sketch of the surrounding blacklist_exceptions block is shown after this list:
+
 
+
    device {
+
            vendor "SEAGATE"
+
            model "ST33000*"
+
    }
+
 
+
* service multipath-tools restart
+
** Restarts the multipath service
+
* multipath -ll
+
** Shows your devices with multiple paths to them
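For reference, here is a sketch of how the device entry above sits inside the blacklist_exceptions section of /etc/multipath.conf (the SEAGATE/ST33000* values are the example from the list; substitute the vendor/model reported by 'grep Vendor /proc/scsi/scsi'):

<pre>
blacklist_exceptions {
    device {
            vendor "SEAGATE"
            model "ST33000*"
    }
}
</pre>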
+
 
+
=== Pool Configuration ===
+
 
+
Once all of the above is done, go into the QuantaStor web management interface and choose 'Scan for Disks' to make the new device mapper paths appear.  If you have already created a storage pool using standard paths rather than the /dev/mapper/mpath* paths, you'll need to run a zpool export/import operation to re-import the pool using the device mapper paths.  To do this, first do a 'Stop Storage Pool' in the web interface, then run these commands at the command line / console:
+
* zpool export qs-POOLID
+
* zpool import -d /dev/mapper qs-POOLID
+
Note that you must replace qs-POOLID with the actual ID of the storage pool.  You can also get this ID by running the 'zpool status' command. 
+
 
+
=== Troubleshooting Multipath Configurations ===
+
* Only Single Path
+
** If you only see one path to your device but the multipath driver is recognizing your device by displaying it in the output of 'multipath -ll' then you may have a cabling problem that is only providing the appliance with a single path to the device. 
+
* No Paths Appear
+
** If you don't see any devices in the output of 'multipath -ll' then there's probably something wrong with the device entry you added to the multipath.conf file into the blacklist_exceptions for your particular vendor/model of SAS device.  Double check the output from 'cat /proc/scsi/scsi' to make sure that you have a correct rule added to the multipath.conf file.
+
 
+
== Samba v4 / SMB3 Support ==
+
 
+
Newer releases of QuantaStor include support for installing Samba v4 and the SMB 3 protocol, but an additional configuration step is required to upgrade your system from the default Samba server (Samba v3.6.3) to Samba v4.
+
 
+
We have an article in our knowledge base that details how to install Samba 4:
+
 
+
https://support.osnexus.com/hc/en-us/articles/209284106-Samba-4-install-
+
