Change Log Archive
- 1 v4.7.2.021 (April 4th 2019)
- 2 v4.7.1.010 (Dec 20th 2018)
- 3 v184.108.40.206 (Nov 5th 2018)
- 4 v4.6.3.001 (Oct 17th 2018)
- 5 v4.6.2.017 (July 26th 2018)
- 6 v4.6.1.008 (June 7th 2018)
- 7 v4.6.0.077 (May 22nd 2018)
- 8 v4.5.3.002 (May 1st 2018)
- 9 v4.5.2.001 (April 17th 2018)
- 10 v4.5.1.003 (March 16th 2018)
- 11 v220.127.116.11 (March 9th 2018)
- 12 v4.4.3.006 (February 1st 2018)
- 13 v4.4.2.004 (January 19th 2018)
- 14 v4.4.1.011 (December 19th 2017)
- 15 v18.104.22.168 (November 13th 2017)
- 16 v4.3.3.016 (September 12 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- 17 v4.3.2.025 (August 3rd 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- 18 v4.3.1.007 (June 30th 2017)
- 19 v22.214.171.1245 (June 22nd 2017)
- 20 v4.2.4.004 (May 3rd 2017) DRIVER UPGRADE AVAILABLE REBOOT REQUIRED
- 21 v4.2.3.007 (April 21st 2017)
- 22 v4.2.2.045 (April 5th 2017)
- 23 v4.2.1.018 (March 3rd 2017) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 24 v126.96.36.1995 (Feb 17th 2017)
- 25 v188.8.131.526 (Feb 6th 2017)
- 26 v184.108.40.2064 (Jan 18th 2017)
- 27 v220.127.116.114 (Dec 20th 2016)
- 28 v18.104.22.1688 (Dec 8th 2016)
- 29 v22.214.171.1247 (Dec 6th 2016) DRIVER UPGRADE AVAILABLE
- 30 v126.96.36.1990 (Nov 29th 2016)
- 31 v188.8.131.528 (Nov 23rd 2016)
- 32 v184.108.40.2064 (Nov 18th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 33 v220.127.116.110 (Oct 28th 2016)
- 34 v18.104.22.1687 (Oct 14th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 35 v22.214.171.1244 (August 17th 2016)
- 36 v126.96.36.1993 (August 10th 2016)
- 37 v188.8.131.529 (July 20th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 38 v184.108.40.2069 (April 29th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 39 v220.127.116.118 (April 7th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 40 v18.104.22.1683 (March 31st 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- 41 v22.214.171.12490 (February 15th 2016)
- 42 v126.96.36.19972 (February 3rd 2016)
- 43 v188.8.131.5268 (February 1st 2016)
- 44 v184.108.40.20663 (January 29th 2016)
- 45 v220.127.116.1149 (January 28th 2016)
- 46 v18.104.22.16889 (December 23rd 2015)
- 47 v22.214.171.12484 (December 16th 2015)
- 48 v126.96.36.19982 (November 25th 2015)
- 49 v188.8.131.5279 (November 19th 2015)
- 50 v184.108.40.20670 (November 3rd 2015)
- 51 v220.127.116.1196 (October 16th 2015) REBOOT REQUIRED
- 52 v18.104.22.16819 (September 17th 2015)
- 53 v22.214.171.12406 (September 14th 2015)
- 54 v126.96.36.19925 (August 7th 2015) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
- 55 v188.8.131.5209 (July 21st 2015)
- 56 v184.108.40.20660 (June 5th 2015) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
- 57 v220.127.116.1162 (May 1st 2015)
- 58 v18.104.22.16890 (January 14th 2015)
- 59 v22.214.171.12493 (December 30th 2014) KERNEL and DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- 60 v126.96.36.19940 (November 23rd 2014)
- 61 v188.8.131.5237 (November 14th 2014)
- 62 v184.108.40.20627 (October 30th 2014)
- 63 v220.127.116.1115 (October 21st 2014)
- 64 v18.104.22.16891 (October 10th 2014)
- 65 v22.214.171.12452 (September 23rd 2014) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- 66 v126.96.36.19911 (July 30th 2014)
- 67 v188.8.131.5284 (July 22nd 2014)
- 68 v184.108.40.20629 (June 27th 2014)
- 69 v220.127.116.1177 (May 16th 2014)
- 70 v18.104.22.16830 (May 6th 2014)
- 71 v22.214.171.12420 (April 25th 2014)
- 72 v126.96.36.19951 (April 4th 2014)
- 73 v188.8.131.5288 (March 14th 2014)
- 74 v184.108.40.20670 (March 7th 2014)
- 75 v220.127.116.1198 (February 2014)
- 76 v18.104.22.16885 (January 2014)
- 77 v22.214.171.12441 (December 19th 2013)
- 78 v126.96.36.19960 (December 3rd 2013)
- 79 v188.8.131.5235 (November 22nd 2013)
- 80 v184.108.40.20665 (November 5th 2013)
- 81 v220.127.116.1126 (October 21st 2013) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
- 82 v18.104.22.16861 (October 5th, 2013)
- 83 v22.214.171.12478 (August 20th, 2013)
- 84 v126.96.36.19952 (August 8th, 2013)
- 85 v188.8.131.5211 (August 1st, 2013)
- 86 v184.108.40.20680 (July 25th, 2013)
- 87 v220.127.116.1165 (June 26th, 2013)
- 88 v18.104.22.16889 (June 26th, 2013)
- 89 v22.214.171.12404 (May 6th, 2013)
- 90 v126.96.36.19985 (April 29th, 2013)
- 91 v188.8.131.5290 (April 2nd, 2013)
- 92 v184.108.40.20642 (March 16th, 2013)
- 93 v220.127.116.1121 (March 12th, 2013)
- 94 v18.104.22.16806 (March 4th, 2013) 3.8 KERNEL UPGRADE AVAILABLE
- 95 v22.214.171.12421 (January 12th, 2013)
- 96 v126.96.36.19995 (January 5th, 2013)
- 97 v188.8.131.5267 (December 2nd, 2012) Official v3 Release
- 98 v184.108.40.20632 (November 24th, 2012) v3 Release Candidate
- 99 v220.127.116.1168 (December 2nd, 2012)
- 100 v18.104.22.16852 (November 2nd, 2012)
- 101 v22.214.171.12442 (October 29th, 2012)
- 102 v126.96.36.19926 RC1 (October 22nd, 2012)
- 103 v188.8.131.5212 (October 22nd, 2012)
- 104 v184.108.40.20634 (October 2nd, 2012)
- 105 v220.127.116.1104 (July 23rd, 2012)
- 106 v18.104.22.16879 (July 6th, 2012)
- 107 v22.214.171.12422 Tech Preview (May 6th, 2012)
- 108 v126.96.36.19922 (March 28th, 2012)
- 109 v188.8.131.5279 (Nov 28th, 2011)
- 110 v184.108.40.20672
- 111 v220.127.116.1152
- 112 v18.104.22.16861
- 113 v22.214.171.12449
- 114 v126.96.36.19902
- 115 v188.8.131.522
- 116 v184.108.40.2060
- 117 v220.127.116.110 **
- 118 v18.104.22.1687
- 119 v22.214.171.1249
- 120 v126.96.36.1993 **
- 121 v188.8.131.529
- 122 v184.108.40.2066
- 123 v220.127.116.116
- 124 v18.104.22.1680
- 125 v22.214.171.1244*
- 126 v126.96.36.1997
- 127 v188.8.131.524
- 128 v184.108.40.2062 *
- 129 v220.127.116.110
- 130 v18.104.22.1680
v4.7.2.021 (April 4th 2019)
Base Platform Upgrade Available
- Enabled platform upgrades from Trusty to Xenial. To upgrade the platform, run qs-distupgrade after upgrading fully to v4.7.2. This will bring your system to the latest QuantaStor 5.x version and supported Linux kernel on the Xenial platform. Please contact firstname.lastname@example.org for any questions or upgrade assistance.
- Fixed an issue with updating the zpool resilver percent completion value.
- Fixed an issue with ensuring VDEV redundancy across multiple JBODs when creating encrypted storage pools.
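The platform upgrade step above can be sketched as a short shell session. `qs-distupgrade` is the utility named in the release note; the guard below is only there to make the sketch safe to paste on a host that is not a QuantaStor node.

```shell
# Sketch of the Trusty -> Xenial platform upgrade described above.
# Prerequisite (not shown): bring the system fully up to v4.7.2 first
# using your normal QuantaStor upgrade procedure.
if command -v qs-distupgrade >/dev/null 2>&1; then
  qs-distupgrade   # migrates the base platform from Trusty to Xenial
else
  echo "qs-distupgrade not found: run this on a QuantaStor v4.7.2 node"
fi
```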
v4.7.1.010 (Dec 20th 2018)
- Added the 'Create Ceph Pool Profile' dialog to allow detailed tuning of erasure coded pool CRUSH map rules. [ QSTOR-5049 ]
- Added text to recommend increasing the OSD count for your cluster if your cluster is in the 'too few PGs per OSD' Ceph Health warning state. [ QSTOR-5340 ]
- Added additional properties for Ceph Cluster members to the grid view. [ QSTOR-5391 ]
- Created a new multi-add dialog for adding multiple New Members to a Ceph Cluster. [ QSTOR-5381 ]
- Enable all types of erasure coded ceph pool profiles including: reed_sol_van, reed_sol_r6_op, cauchy_orig, cauchy_good, liberation, blaum_roth, and liber8tion [ QSTOR-5404 ]
- Fixed an issue that could prevent the deletion of Ceph OSD's. [ QSTOR-5350 ]
- Fixed an issue that was preventing users from querying the .rgw.root Ceph Pool via the 'qs ceph-pool-get' cli command. [ QSTOR-5276 ]
- Fixed an issue where Ceph Journal devices were being rediscovered and recreated as new objects inside of the QuantaStor management layer. [ QSTOR-5421 ]
- Fixed an issue with ceph journal device rediscovery if the QuantaStor node has been reinstalled and disks were not cleanly reformatted. [ QSTOR-5412 ]
- Removed duplicate entries in the Profile combo box of the Ceph Zone Create dialog. [ QSTOR-5406 ]
- Fixed an issue where the isWalDevice flag was not being updated after ceph cluster tear down. [ QSTOR-5367 ]
- Added logic to throttle automatic disk scans for faulty devices that rapidly and repeatedly remove/add themselves to the system over a short period of time. [ QSTOR-5437 ]
- Added the 'qs-util devinfo' command that can be used to check the queue depth and other tuning options of a disk as set by a storage pool profile. [ QSTOR-5435 ]
- Fixed an issue that could cause the disk identify LED for a user initiated identify task to be shut off early. [ QSTOR-5374 ]
- Fixed: launching the Disk Identify dialog in the Storage Pool and Physical Disks section will automatically select the disk that was right clicked on in the grid or tree view. [ QSTOR-5114 ]
- Fixed an issue where physical disks were still associated with deleted OSD's. [ QSTOR-5376 ]
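The tuning values that `qs-util devinfo` reports can also be cross-checked directly from sysfs on the node. The device name `sda` below is an example, and the sysfs paths are standard Linux attributes, not QuantaStor-specific.

```shell
# Cross-check the disk queue tuning that a storage pool profile applies.
# On a QuantaStor node you can instead run:  qs-util devinfo <device>
dev=sda   # example device name
if [ -d "/sys/block/$dev/queue" ]; then
  echo "nr_requests: $(cat "/sys/block/$dev/queue/nr_requests")"
  echo "scheduler:   $(cat "/sys/block/$dev/queue/scheduler")"
else
  echo "device $dev not present on this host"
fi
```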
Hotspare Disk Management
- Fixed: Global Spare disks will be iofenced when joining a storage pool as a replacement disk for a failed drive. [ QSTOR-5453 ]
- Fixed: Global Spare disks will automatically be unmarked for use as a spare when consumed to repair or grow a storage pool. [ QSTOR-5455 ]
- Changed Storage Pool Export to retain I/O fencing of the physical disks by default. [ QSTOR-3227 ]
- Added a checkbox to the Storage Pool Export dialog that allows forced clearing of disk I/O fencing. [ QSTOR-3227 ]
- Fixed an issue that was preventing removing Hot spare disks from a storage pool if they were no longer physically accessible. [ QSTOR-4678 ]
- Fixed an issue with the Storage Pool tuning profiles that could have led to larger than expected queue depths. [ QSTOR-5435 ]
- Fixed: deleting a Storage Pool that cannot be deleted because all of its physical disks and VDEVs are missing will now suggest exporting the pool instead, which removes it from the QuantaStor configuration if the user truly wishes to remove the pool permanently. [ QSTOR-2435 ]
- Fixed an issue where the Storage Pool Profile i/o scheduler tuning was not being applied to disk devices. [ QSTOR-5428 ]
- Increased polling cycles for the dashboard views to 5 seconds. [ QSTOR-5418 ]
- Fixed an issue where the Network Share dashboard was not refreshing if a different object type (storage volume or pool) was selected. [ QSTOR-5065 ]
- Fixed: the Multi-factor Authentication checkbox in the user create/modify dialogs is now disabled by default if multi-factor auth has not been configured on the QuantaStor grid. [ QSTOR-5416 ]
- Fixed: Widened the 'Create Volume Remote Replica Dialog' to more clearly show the source and destination selection for long storage system names. [ QSTOR-5467 ]
- Fixed an issue where the License Key combo box was not being auto-populated by default. [ QSTOR-5033 ]
Hardware Enclosures and Controllers
- Fixed enclosure layout automatic discovery for the Cisco S3260/C3X60, Dell MD2060e and Supermicro SBB. [ QSTOR-5436 ]
- Fixed an issue that would cause the disk identify LED to light up every minute for Disks on LSI HBA's in IR mode. [ QSTOR-5387 ]
- Fixed: added a check to the Add/Import Cloud Container dialog to ensure that you cannot accidentally import a bucket/container using the wrong Cloud Provider Credentials. [ QSTOR-5345 ]
- Added further ipmi sensor support for Dell, HPE, and Supermicro server nodes. [ QSTOR-5452 ]
- Updated SNMP MIB [ QSTOR-5488 ]
- Added SNMP Object mappings for Ceph Scale-Out Cluster and a few other object types. [ QSTOR-5215 ]
v22.214.171.124 (Nov 5th 2018)
- Updated megaraid_sas driver to 07.706.03.00
- Cisco Duo Multifactor Authentication Support - MFA framework
- Cloud Container support for mapping Dropbox folders as NAS Gateway shares
- Improved support for Network Share Quotas with alert threshold
- Getting Started dialog simplifies common configuration tasks like setting up object storage and provisioning file and block storage.
- Automatic Storage Pool device selection in the web UI selects drives in sets to ensure disk enclosure fault tolerance
- Grid-wide configuration of DNS/NTP server settings in a single dialog
- Storage Pools now show RAID set (vdevs) groupings to simplify management of large pools with 100s of drives
- Improved pool and device health alerting, management and LED blink activation system
- Integrated server hardware monitoring and platform detection with alerting on server hardware issues including failed fans, power supplies, and high temperature.
- Added initial functionality for an Ansible storage module to communicate with QuantaStor [ QSTOR-4960 ]
- Fixed an issue where Backup Policies were not transferring changed files. [ QSTOR-4290 ]
- Updated Ceph Access Key and Secret Key fields to be hidden by default in the Modify S3/SWIFT Object User Access dialog. [ QSTOR-5271 ]
- Updated task descriptions for various Ceph related tasks. [ QSTOR-5153 ]
- Updated instances of 'Ceph Journal' to 'Ceph WAL Device' or 'Write-Ahead Log Device' [ QSTOR-5169 ]
- Updated some UI elements to show more information for Ceph. [ QSTOR-5208 ]
- Updated Ceph Journal Create dialog [ QSTOR-4914 ]
- Organized the Web manager ribbon bar for ceph operations. [ QSTOR-5156 ]
- Auto-enable S3/SWIFT for S3/SWIFT Gateway Interface Target Ports when created and auto-disable S3/SWIFT for the Target Port on S3/SWIFT Gateway deletion. [ QSTOR-5308 ]
- Changed the multi-create OSD and create OSD dialogs to be more generic to account for BlueStore. [ QSTOR-5157 ]
- Consolidated PG count calculation into one function and added 'Primary Use' radio buttons to the Create Object Storage Zone dialog [ QSTOR-5178 ]
- Added warning text to the Add S3/SWIFT Gateway dialog to inform the user that the configuration of ports 80 and 443 will change and the web manager will no longer be available on those ports. [ QSTOR-5338 ]
- Added the OSD to WAL device mapping in the Ceph Cluster tree view. [ QSTOR-5204 ]
- Added new column "Storage URL" that details the Endpoint address to the S3/SWIFT Gateways grid view [ QSTOR-5288 ]
- Added support for pmem devices to be used as ceph journal device. [ QSTOR-4916 ]
- Added support for enclosure disk slot identification for Ceph OSD's. [ QSTOR-4885 ]
- Added Ceph Pool Profile options for customizing Erasure coding profiles for Ceph Storage Pools. [ QSTOR-5080 ]
- Added optimizations for Reserved Ceph pool type creation. [ QSTOR-5161 ]
- Added support for pmem devices to be used as ceph journal. [ QSTOR-4963 ]
- Improved messages and images for Ceph Scale-out Storage Pool operations in the web UI. [ QSTOR-5111 ]
- Improved the Add Member to Ceph Cluster dialog by scoping the options for the given IP Addresses available for the Client interface and the Backend interface. [ QSTOR-5023 ]
- Improved the naming of OSDs in the Delete Ceph Object Storage Daemon/Device (OSD) dialog. [ QSTOR-5051 ]
- Moved creation of ceph default object user to after setting up RadosGW [ QSTOR-5079 ]
- Moved creation of the Ceph default object user to after creation of the RadosGW. [ QSTOR-5083 ]
- Enabled the ability to manage RadosGW for Ceph Cluster members. [ QSTOR-4882 ]
- Enhanced QuantaStor to still support custom Ceph cluster names while keeping the underlying constructs compatible with Ceph Luminous. [ QSTOR-4876 ]
- Enhanced Ceph OSD creation (for new clusters created with 4.7.0 and above) to set each OSD's initial weight based on its capacity. [ QSTOR-4697 ]
- Fixed an issue where creating a new ceph cluster would throw an error and fail the task. [ QSTOR-5263 ]
- Fixed an issue where the osd journal sizes were 0 in the ceph configuration file. [ QSTOR-5078 ]
- Fixed combo box auto selection in the Create Ceph Object Storage Daemon/Device (OSD) dialog. [ QSTOR-5008 ]
- Fixed an intermittent issue with Ceph Storage Pool association for a newly created S3/Swift Gateway. [ QSTOR-5240 ]
- Fixed an issue with cleaning up ceph-mgr related processes when associated ceph cluster is deleted. [ QSTOR-4859 ]
- Fixed an issue where a ceph monitor was not detected on a system running on a trusty platform after reboot. [ QSTOR-3158 ]
- Fixed an issue where a segmentation fault would occur when a host which was assigned an rbd rebooted. [ QSTOR-5244 ]
- Fixed an issue where the ceph version field was unpopulated [ QSTOR-5213 ]
- Fixed an issue with Ceph cluster overallStatus update. [ QSTOR-4858 ]
- Fixed an issue with deleting OSDs on xenial. [ QSTOR-4900 ]
- Fixed cleanup after Ceph Bluestore OSD Delete on xenial. [ QSTOR-5205 ]
- Fixed clearing out ceph.conf of rados gateway entries when a gateway is deleted. [ QSTOR-5177 ]
- Fixed issue where ceph-mgr was not started on trusty platform after reboot. [ QSTOR-5146 ]
- Fixed issue where pg create during object store setup gets stuck in 'creating+incomplete' state. [ QSTOR-4897 ]
- Fixed OSD-journal/WAL device correlation inconsistencies during discovery on Xenial. [ QSTOR-4992 ]
- Fixed an issue where Ceph OSD dashboards were not being cleared before displaying new data when toggling between various OSDs in a cluster. [ QSTOR-5326 ]
- Fixed an issue where creating new OSDs on trusty was failing. [ QSTOR-5253 ]
- Fixed an issue where physical disks which are being used as OSDs were being filtered out of the physical disk grid view and the cephOsdId fields were not being cleared from memory after ceph cluster tear down. [ QSTOR-5264 ]
- Fixed an issue where the associated WAL device ID was not showing up in the UI for Ceph OSDs. [ QSTOR-5181 ]
- Fixed an issue where the health status of ceph clusters was not appearing. [ QSTOR-4954 ]
- Fixed issue where physical disks which are being used as OSDs were not appearing in the physical disk list on trusty. [ QSTOR-5245 ]
- Updated windows CLI. [ QSTOR-5210 ]
- Added Cloud Container support for Dropbox. [ QSTOR-4835 ]
- Added ability to delete dropbox folders and their contents from Quantastor [ QSTOR-5321 ]
- Fixed issue with creating new cloud containers or importing cloud containers for AWS [ QSTOR-5230 ]
- Fixed an issue where deleting cloud container on Google Cloud would fail. [ QSTOR-5344 ]
- Fixed an issue with the Cloud Container disable and remove options. [ QSTOR-5233 ]
- Fixed an issue where entries were being written to the S3QL authinfo2 file when they shouldn't be. [ QSTOR-5329 ]
- Created new CLI commands to check the health of Storage Pools, Network Shares, and Storage Volumes. [ QSTOR-5052 ]
- Added: Storage Volumes created on top of an SSD-based Storage Pool will now correctly show the rotational SCSI flag for SSD/Flash storage. [ QSTOR-5134 ]
- Added the option to clear iofencing during Physical disk format. [ QSTOR-4484 ]
- Added: QuantaStor now automatically turns on the slot identify LED for a failed disk with a faulted status. [ QSTOR-4896 ]
- Fixed: Removing and reinserting a multipath disk will now perform a multipath rescan and recreate the multipath device links [ QSTOR-5371 ]
- Fixed: The rotational flag for Block devices will now correctly set the SSD flag for Physical Disks. [ QSTOR-5134 ]
- Fixed an issue where new disks were not being added to the multipath configuration settings when auto-config multipath was set to enabled in the storage system properties. [ QSTOR-4981 ]
- Fixed: Physical Disk Identify will now enable the Enclosure Disk Identify LED if present for the Hardware Disk and Hardware Controller. [ QSTOR-5081 ]
- Fixed an issue where the physical devices were showing up under the wrong path on the xenial platform [ QSTOR-5346 ]
- Fixed filtering of physical disks in the Format Physical Disk dialog [ QSTOR-5209 ]
- Changed iofencing error messages to "not supported" warning messages when all disks are VMware virtual disks. [ QSTOR-5077 ]
High Availability Failover
- Fixed an issue with Fibre Channel ALUA not immediately being presented out for standby paths on the passive HA Cluster node. This corrects a regression introduced in 4.6.0 [ QSTOR-5028 ]
- Made some small optimizations to the Fibre Channel ALUA HA failover process [ QSTOR-5028 ]
- Fixed: Reduced the number of LIPs issued to the FC fabric. A LIP will now only be issued on node first boot to register the FC ports on the QuantaStor node to the fabric. [ QSTOR-5166 ]
- Fixed an issue where offline gluster bricks were reporting as online. [ QSTOR-5191 ]
Hardware Enclosures and Controllers
- Added disk identification support for devices that support SAS Enclosure Services via the SCSI generic driver. [ QSTOR-4311 ]
- Added automatic Hardware Enclosure discovery and configuration for supported enclosure models. [ QSTOR-4864 ]
- Fixed an issue where the Identify Hardware Controller Disk Device dialog was not appearing. [ QSTOR-5330 ]
- Fixed: Redirected mpt3sas driver messages to its own log file under /var/log/mpt3sas.log [ QSTOR-5055 ]
- Fixed an issue preventing some hardware raid controllers from allowing raid unit creation for disks selected in the 'Create Pass-thru Units' Dialog. [ QSTOR-4709 ]
- Added share quota threshold alerts, along with a column to indicate the quota percentage utilized by the share. [ QSTOR-5024 ]
- Fixed an issue where you could not set share ownership to nobody/nogroup on Network shares after it had previously been set to root. [ QSTOR-4967 ]
- Added Alerts for when Network Share Quota thresholds are exceeded. [ QSTOR-4938 ]
- Updated web server and REST service ssl ciphers to use recommended secure defaults. [ QSTOR-5036 ]
- The REST qstorapi is now only available via https://SERVER:8153/qstorapi/ and https://SERVER/qstorapi/ or for insecure access at the http://SERVERNAME/qstorapi/ location. [ QSTOR-5036 ]
- Fixed an issue with the quantastor rest service restarting periodically. [ QSTOR-4997 ]
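The endpoint layout above can be illustrated with a quick sketch. `SERVER` and the method name `storageSystemGet` are placeholders for illustration (consult the qstorapi documentation for real method names and parameters); the curl invocation is shown as a comment only.

```shell
# Build the three qstorapi endpoint forms listed above for a given server.
SERVER=qs-node1.example.com   # placeholder hostname
METHOD=storageSystemGet       # hypothetical API method name
echo "https://$SERVER:8153/qstorapi/$METHOD"   # secure, dedicated port
echo "https://$SERVER/qstorapi/$METHOD"        # secure, default HTTPS port
echo "http://$SERVER/qstorapi/$METHOD"         # insecure access
# e.g.:  curl -k -u admin:PASSWORD "https://$SERVER:8153/qstorapi/$METHOD"
```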
SCSI Target driver
- Removed ib_srpt support from SCSI Target. RDMA over Infiniband is recommended via the continued iSER Target support. [ QSTOR-4393 ]
- Added wiki pages for Multi-Factor Authentication [ QSTOR-5090 ]
- Added Cisco Duo Multifactor Authentication Support - MFA framework. [ QSTOR-4654 ]
- Added initial auto-configure and smart select features for Storage Pool creation. [ QSTOR-5360 ]
- Added the ability to turn on the disk identify LED for enclosures for all the disks in a selected ZFS Storage Pool VDEV. [ QSTOR-3235 ]
- Adds vdev device grouping tree view organization to ZFS Storage Pools in the Web UI. [ QSTOR-5016 ]
- Added initial auto-configure and smart select features to Storage Pool creation. [ QSTOR-4929 ]
- Added warning/alert symbol to Storage Pool Device icon when there is a problem. [ QSTOR-5043 ]
- Added a check to Storage Pool create for instances when selected disks have ZFS metadata from having been used for a previous storage pool. This allows users to confirm with a force flag that they do intend to re-use the disks and force storage pool creation. [ QSTOR-4837 ]
- Fix issue with starting an encrypted storage pool after reboot. [ QSTOR-4790 ]
- Refactored updating storage pool cache devices to check scsi-reservation before encrypting the disk. [ QSTOR-3199 ]
- Fixed: Storage Pool state will now be updated as soon as a disk failure is detected. [ QSTOR-5119 ]
- Fixed an issue with parsing multiple pools of type RAID0. [ QSTOR-5150 ]
- Fixed issue where the UI was not updating the Storage Pool context menu immediately after creating an HA group [ QSTOR-4604 ]
- Fixed an issue with automatic disk enclosure redundancy mapping when creating storage pools. [ QSTOR-5152 ]
- Fixed an issue where Storage Pool Device Groups were being retained after removal. [ QSTOR-5013 ]
- Fixed an issue where the web UI was not showing additions to storage pools after a Storage Pool Grow operation. [ QSTOR-5339 ]
- Fixed an issue where not all Storage Pools appeared in the Storage Pool tree view and grid view. [ QSTOR-5045 ]
- Fixed an issue with creating ZFS storage pools with the correct ashift for the virtual disks on Virtualbox VM's. [ QSTOR-5222 ]
- Fixed an issue where removing cache devices from ZFS pools failed. [ QSTOR-5012 ]
- Fixed a bug with removing cache devices from a storage pool. [ QSTOR-5342 ]
- Deprecated the 'isThin' property from Storage Volume objects. [ QSTOR-5147 ]
- Further optimized select Web Interface dialogs with this release. [ QSTOR-5107 ]
- Added a column for QuantaStor Service version for the System Information grid view. [ QSTOR-5335 ]
- Added new Getting Started Guide / Configuration Checklist dialog to replace older workflow dialogs. [ QSTOR-4717 ]
- Fixed an issue where the Scale-Out Block and File tabs could appear as disabled on Enterprise Edition licenses with Silver Support keys. [ QSTOR-5306 ]
- Fixed an issue with the drop down menu for the disk shred option in the Format Physical Disk dialog. [ QSTOR-5298 ]
- Fixed an issue with the Right Click context menu opening correctly in the snapshot grid view. [ QSTOR-5162 ]
- Add the distro and kernel version to the System Info Grid [ QSTOR-5323 ]
- Added an Apply button to the Modify Storage System and Modify Target Port dialogs. [ QSTOR-5324 ]
- Added the 'Modify Grid Network Settings' dialog to easily apply the same DNS and NTP network settings to all nodes in the grid. [ QSTOR-5311 ]
- Fixed an issue where the Grid Dashboard was not rendering when there are only ceph pools present. [ QSTOR-5236 ]
- Added the root user command 'qs-util resetadmin' to allow the admin user password to be reset to factory defaults with fewer steps. [ QSTOR-5110 ]
- Add an offline upgrade script for upgrading trusty systems with no network connection. [ QSTOR-4976 ]
- Add support for disabling alerts [ QSTOR-4839 ]
- Fixed an issue where the NTP servers were being cleared. [ QSTOR-5234 ]
- Ensure the preferred grid management port is never a floating IP. [ QSTOR-4979 ]
- Greatly Optimized QuantaStor service startup and overall task performance. [ QSTOR-5144 ]
- Fixed an issue causing long delays during service startup on large configurations. [ QSTOR-5144 ]
- Added PSU, Fan and Temperature sensor reporting to QuantaStor for supported hardware partners. [ QSTOR-4681 ]
- Fixed task status to correctly show as completed when issuing a storage system restart task. [ QSTOR-5108 ]
- Fixed an issue where the share clone operation would sometimes fail. [ QSTOR-5060 ]
- Fixed issue with interpreting sizes when the size is zero. [ QSTOR-5138 ]
- Updated SNMP MIB [ QSTOR-5163 ]
v4.6.3.001 (Oct 17th 2018)
- Fixed an issue with grub-pc upgrade prompt and error during install. [ QSTOR-5254 ]
- Updates installer kernel to 4.4.0 [ QSTOR-5231 ]
- Adds updated drivers to install time kernel
i40e: 2.4.3, ixgbe: 5.3.4, igb: 126.96.36.199, e1000e: 188.8.131.52, ena: 1.5.0, megaraid_sas: 07.706.03.00, mpt3sas: 25.00.00.00, aacraid: 1.2.1-55022, mlnx_en: 3.4, hpsa: 3.4.20, sfc: 184.108.40.2061, arcmsr: 1.30.0X.27-20170206, smartpqi: 1.1.2-125
- Fixes an issue with netboot installations. [ QSTOR-5231 ]
- Fixed an issue where custom named snapshots were not inheriting NFS access from the parent share. [ QSTOR-5127 ]
- Fixed an issue where the snapshot options in the right click context menu for Network Shares would not always show. [ QSTOR-5143 ]
v4.6.2.017 (July 26th 2018)
- Added updated megaraid_sas 07.706.03.00 driver to trusty platform installer kernel. [ QSTOR-4836 ]
- Fixed an issue that could cause a blank screen to appear during boot instead of the expected QuantaStor splash screen. [ QSTOR-4950 ]
- Fixed an issue where new installs were not getting auto-multipath for new disks enabled by default. [ QSTOR-4972 ]
- Added 'Flat' Namespace options for Windows client DFS access. This option when configured shows Network Shares for other nodes in the QuantaStor grid for access from any configured node. [ QSTOR-4930 ]
- Added additional error code information for the new Ldap AD user/group search dialog added in 4.6.0 [ QSTOR-4792 ]
- Fixed an issue where a passive HA node would advertise Network Shares that are no longer physically present on the system via SMB on its local IP address(es). [ QSTOR-4980 ]
- Fixed an issue where the Network Share Modify dialog would fail to open if local user groups are assigned to a share. [ QSTOR-4905 ]
- Fixed an issue with changing a share's assigned group ownership to that of a user's group if the share's ownership had previously been set to root. [ QSTOR-4764 ]
- Fixed an issue with duplicate Share Clone options shown in the UI for the right-click context menu on Storage Pools. [ QSTOR-4907 ]
- Fixed: Removing an Active Directory configuration is now immediately reflected upon task completion. [ QSTOR-4793 ]
- Updated included Ceph version to 12.2.7 [ QSTOR-4953 ]
- Increased the minimum memory requirement for a Ceph node (VM) to 4GB [ QSTOR-3400 ]
- Added a check preventing creation of erasure coded Ceph pools for object storage setup in Ceph clusters with fewer than 3 nodes. Erasure coding for object storage is not available on single-node Ceph clusters; please choose one of the mirror profiles instead. [ QSTOR-4915 ]
- Added the ability to remove monitors and go below the minimum required number of monitors in a ceph cluster by adding the force flag. [ QSTOR-4872 ]
- Fixed an issue with refreshing to display the correct number of OSDs in the Ceph Cluster dashboard cluster after adding OSD's. [ QSTOR-4850 ]
Hardware Enclosures and Controllers
- Added a toggle for the Hardware Enclosures and Controllers Enclosure View grid view to collapse the Enclosure graphic panel. This provides the option to show even more of the Enclosure view on smaller displays. [ QSTOR-4936 ]
- Adjusted the height and width of disk slots in the Enclosure view. This makes it easier to view the Enclosure layouts on lower resolution displays. [ QSTOR-4917 ]
- Changed default page size in the Physical Disk view to show 400 physical disks per page by default. [ QSTOR-4924 ]
- Fixed an issue that would require scrubbing a disk's SCSI3-PR io-fencing reservations twice to remove the reservation completely. [ QSTOR-4948 ]
- Fixed an issue with SMART disk temperature reporting. This corrects a regression introduced in 4.6.0. [ QSTOR-4949 ]
Remote Replication and Snapshots
- Added a tooltip to the Storage System Link Dialog Bandwidth limit slider to indicate a setting of 0 uses the default limit of 100mb/s [ QSTOR-4868 ]
- Fixed an issue with character validation when changing the name of a schedule via a replication-schedule-modify qs CLI command. [ QSTOR-4727 ]
- Fixed: an alert will now be raised when a snapshot cannot be manually triggered during a cool down period. [ QSTOR-4877 ]
Network Port Management
- Added Create Bonded Port option to the ribbon bar at the top of the Storage System section of the QuantaStor web Manager. [ QSTOR-4928 ]
- Added maximum advertised link speed information to ports with an offline status. [ QSTOR-4941 ]
- Fixed an issue with iSCSI discovery on Manual Virtual interfaces added to Bond and VLAN devices. [ QSTOR-4687 ]
- Added: Automatically Clean up cache files after removing or deleting a cloud container. [ QSTOR-4530 ]
- Fixed an issue where Cloud Containers could sometimes fill up the /tmp directory resulting in not enough free space on the OS disk. [ QSTOR-4529 ]
Storage Pool Management
- Fixed an issue that could cause a Storage Pool Remove Cache device task to leave cache devices attached to the selected pool. [ QSTOR-4899 ]
- Fixed an issue where removing the ZIL log mirror from a pool would leave one disk iofenced to the pool. [ QSTOR-3214 ]
- Fixed: the multi-delete confirmation popup window now correctly shows the child snapshots in the list. [ QSTOR-4878 ]
- Fixed: launching dialogs from right-click context menus will now have the correct system selected in the drop-down menu by default. [ QSTOR-4951 ]
Service Core & CLI
- Adds filtering to the preferred grid port to ensure HA, Site or Grid VIF's on a node are not used for grid communication.
- Fixed: added a check to create a iSCSI initiator name if it is not present for the QuantaStor Software iSCSI Adapter. [ QSTOR-4909 ]
- Fixed: the metrics-set command for the qs cli will now return back the expected response confirming the change. [ QSTOR-4568 ]
- Increased default RAM limits of management services to allow scaling for larger grids and higher grid object counts. [ QSTOR-4910 ]
v4.6.1.008 (June 7th 2018)
- Fixed an issue where storage volume groups were listed in the storage volume section of the Storage Volume group central grid view. [ QSTOR-3753 ]
- Fixed: Selecting a volume in the Storage Volume list of the Storage Volume Group section of the central grid view now selects the associated Storage Volume Group. [ QSTOR-3757 ]
- Fixed an issue with the Password Policy rules for 'Days until password expires' in the Security Manager. [ QSTOR-4482 ]
- Added the Placement Group count property to the Ceph cluster object. [ QSTOR-4840 ]
- Fixed an issue where the task would never complete when creating an Object Storage Gateway on a single-node Ceph configuration. [ QSTOR-4820 ]
- Fixed: The Ceph Dashboard now correctly scopes Ceph cluster health to the individual Ceph cluster's OSDs and Pools. [ QSTOR-4823 ]
- Fixed an issue with Ceph Journal replacement on Ceph Luminous deployments. [ QSTOR-4828 ]
- Fixed an issue when creating a Ceph Object Storage Gateway and Pool on Grids with multiple Ceph clusters configured. [ QSTOR-4821 ]
- Fixed an issue with subshares on Gluster volumes not coming back after reboot. [ QSTOR-4799 ]
- Fixed: Deleting a sub-share on a Gluster Volume Network Share is now supported. [ QSTOR-4763 ]
- Fixed: The Network Share Owner property is now correctly shown on all Gluster peers in the same Gluster Volume configuration [ QSTOR-4718 ]
- Added subshare permission management support for Gluster Volume Network Shares. [ QSTOR-4690 ]
- Fixed an issue with Fibre Channel WWN address display when the address starts with a zero. [ QSTOR-4827 ]
Hardware Enclosures and Controllers
- Added new CBLERR status for Hardware Enclosures that are cabled in ways that are not optimal. Please contact OSNEXUS Support for assistance with proper Enclosure cabling if you see this error state. [ QSTOR-4795 ]
- Fixed: HPE H241 controllers in HBA mode now correctly fail the disk identify task, as it is not supported via the HPE CLI utilities. [ QSTOR-4170 ]
- Fixed: Selecting a RAID controller in the center grid pane of the Hardware Enclosures and Controllers section now correctly updates the Properties view. [ QSTOR-3780 ]
- Fixed: Updated udev rules for encrypted devices to ensure they import after a reboot with the correct device path. [ QSTOR-4833 ]
v4.6.0.077 (May 22nd 2018)
- Enabled Ceph Management for Migration Edition. [ QSTOR-4695 ]
- Enabled Ceph Management, Gluster Management, and Site Management for Community Edition. [ QSTOR-4695 ]
Ceph Scale-out block and object
- Added: HTTPS support for the Ceph Object Storage Gateway is now enabled by default on new Object Gateway creation. [ QSTOR-4234 ]
- Added support for Single Node deployments for Community Edition, Developer, and Migration Edition. [ QSTOR-4696 ]
- Added initial FC ALUA support for Ceph Storage Volumes (rbd's) [ QSTOR-4751 ]
- Added support for Ceph Luminous [ QSTOR-4363 ]
- Added: Site VIFs can now be created and used to enforce a VIF for iSCSI Portal access for Ceph Scale-out RBD Storage Volumes. [ QSTOR-4626 ]
- Fixed an issue that caused a vertical scroll bar to appear in the Ceph dashboard. [ QSTOR-4772 ]
- Fixed an issue with Ceph RBD Storage Volume State toggling between "missing" and "normal". [ QSTOR-4753 ]
- Fixed an issue with disks being filtered out from the list of available disks in the Ceph multi-OSD create dialogs. [ QSTOR-4777 ]
- Fixed Multi-Create of OSDs and journals with more than 10 disks failing due to timeout. [ QSTOR-4774 ]
- Fixed: In Grids with multiple Ceph Clusters configured, only the disks and journals available on the nodes of the selected Ceph cluster are shown as available in the OSD create dialog. [ QSTOR-4783 ]
- Improved the Ceph multi-OSD create to use a single NVMe/SSD device to create up to 30 journals for large setups. [ QSTOR-4755 ]
- Added support to detect Fibre Channel Link Down to the HA Failover Group Policy configuration. [ QSTOR-4214 ]
- Fixed an issue with HA pools not starting properly if both HA nodes lose power and power is restored. [ QSTOR-4754 ]
- Fixed an issue with the Modify HA Failover Group dialog where it was not retaining its settings after a change. [ QSTOR-4675 ]
- Fixed an issue where Gluster volumes deployed with Gluster version 3.10 would expose additional shares via the SMB protocol beyond those available from the QuantaStor management interface. [ QSTOR-4775 ]
- Added CLI commands to provide Gluster peer attach / remove capabilities. [ QSTOR-4637 ]
Active Directory and Network Shares
- Improved Active Directory user and group search to use LDAP search syntax. [ QSTOR-4646 ]
- Enabled secure LDAP search for Active Directory user and group lookup for Network Shares. [ QSTOR-4782 ]
- Updated the Network Share Quota dialog to use the new LDAP based AD Search function for specifying users/groups for Quotas. [ QSTOR-3694 ]
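The LDAP-based AD search used above boils down to standard LDAP filter syntax; a minimal sketch of building such a filter (the attribute names follow common Active Directory conventions, and the exact filters QuantaStor issues are an assumption):

```shell
# Build an LDAP filter matching AD user accounts whose logon name starts
# with the given prefix. The (&(...)(...)) form ANDs the two conditions.
build_user_filter() {
    printf '(&(objectClass=user)(sAMAccountName=%s*))\n' "$1"
}
build_user_filter jsmi
# With OpenLDAP tools such a filter could be passed to ldapsearch, e.g.:
#   ldapsearch -H ldaps://dc.example.com -b "dc=example,dc=com" \
#       "$(build_user_filter jsmi)" sAMAccountName cn
```

The server name and base DN in the comment are placeholders only.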
- Added "Copy from Share" feature for duplicating access mode settings for Network Shares. [ QSTOR-4806 ]
- Added the ability to assign Network Share user / group ownership to root (0:0) [ QSTOR-4685 ]
- Added: You can now query and display a list of open file locks for a chosen Network Share with the 'View Network Share File Locks' dialog and 'qs shr-lock-list' CLI command. [ QSTOR-4441 ]
- Fixed an issue with Network Share Global Namespace link update during ha failover or pool export. [ QSTOR-4734 ]
- Fixed: Network Shares created with a custom Compression option will have that option set as expected. [ QSTOR-3548 ]
- Removed active directory cache commands from qs-util as the direct ldap query better fulfills the needs for AD searching. [ QSTOR-4760 ]
- Removed: Alias/Subshare support for Network Shares on XFS pools. These features are only for ZFS and Gluster going forward. [ QSTOR-4800 ]
Replication, Snapshots and Cloning
- Added Local-to-Local replication scheduling to replicate Storage Volumes and Network Shares between storage pools on the same node. [ QSTOR-3762 ]
- Fixed an issue where Snapshot Schedules were not taking snapshots for all systems and volumes/shares in the schedule. [ QSTOR-4735 ]
- Fixed an issue with Remote Replication tasks showing a pending status if a manual replication or snapshot schedule has also occurred within 60 seconds of the replication schedule trigger. [ QSTOR-4621 ]
- Fixed: Canceling a Clone task for Storage Volume or Network Share cloning now terminates the clone process as expected. [ QSTOR-4403 ]
- Added minimum and maximum file age support for backup policies [ QSTOR-4421 ]
Hardware Enclosures and Controllers
- Added a 'Create Passthrough Disks' dialog for RAID controllers without a built-in passthrough mode. This dialog creates RAID0 units for each individual disk selected in the dialog. [ QSTOR-4704 ]
- Added sasAddress property to SAS Enclosure objects. [ QSTOR-4794 ]
- Fixed a discovery issue for disks directly attached to SAS HBA hardware controllers. [ QSTOR-4776 ]
- Fixed a discovery issue with an Adaptec controller that caused the QuantaStor service to hang at startup. [ QSTOR-4784 ]
- Fixed: Single-digit disk slot numbers for SAS HBAs are now padded with a leading 0 to ensure proper sorting in the tree and grid views. [ QSTOR-4594 ]
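The leading-zero padding matters because plain string sorting places slot 10 before slot 2; a quick illustration (the slot numbers are made up):

```shell
# Unpadded labels sort lexicographically: "Slot10" lands before "Slot2".
printf 'Slot%s\n' 10 2 1 | sort
# -> Slot1 Slot10 Slot2
# Padding single digits with a leading zero restores the natural order.
for n in 10 2 1; do printf 'Slot%02d\n' "$n"; done | sort
# -> Slot01 Slot02 Slot10
```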
- Added Automatic Multipath configuration for disk device types that advertise multipathing. This is enabled by default on new installs and can also be added to existing configuration by enabling the automatic multipathing option in the Storage System Modify Dialog and rebooting the system. [ QSTOR-4267 ]
- Fixed CLI output around Global Spare Add / Remove [ QSTOR-4551 ]
- Added support for active-backup bond mode for bonded interfaces. Note: due to the nature of the MAC address duplication with Active-Backup mode it is not supported in combination with Virtual Interface addresses. [ QSTOR-4664 ]
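For reference, an active-backup bond in the classic Ubuntu ifupdown style looks like the following sketch (interface names and addresses are examples only, and this is not QuantaStor's generated configuration):

```
# /etc/network/interfaces fragment (ifenslave style) -- illustrative only
auto bond0
iface bond0 inet static
    address 10.0.0.50
    netmask 255.255.255.0
    bond-mode active-backup
    bond-miimon 100
    bond-primary eth1
    bond-slaves eth1 eth2
```

Only one slave carries traffic at a time; on link failure the backup takes over with the same MAC address, which is why stacking Virtual Interface addresses on top of this mode is not supported.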
- Fixed a kernel boot issue where network interfaces were not coming up as the expected ethX device. [ QSTOR-4360 ]
- Fixed an issue where Virtual Interfaces would not come online after create on top of a VLAN interface. [ QSTOR-4638 ]
- Fixed an issue with network port modify of a virtual interface on top of vlan. [ QSTOR-4639 ]
- Fixed: If a network port's IP address is changed on a system in a Network Share Namespace, the new IP address information is now updated on all the nodes in the namespace. [ QSTOR-4628 ]
- Fixed: It is now possible to configure network ports on Ceph cluster member nodes if the port starts in an offline state. Previously, all network configuration changes were blocked if a node was configured in a Ceph cluster. [ QSTOR-2520 ]
- Fixed an issue with XFS metadata corruption that resulted in newly created pools not starting upon a system reboot. This fixes a regression introduced in v4.5.0. [ QSTOR-4737 ]
- Fixed an issue with starting a newly created encrypted storage pool after reboot. [ QSTOR-4721 ]
- Fixed: Added a validation check to the 'qs pool-create' CLI command to ensure all selected disks are on the same storage system before initiating the pool create task. [ QSTOR-3845 ]
- Fixed: Deleting a Storage Pool that has shared disks between multiple nodes now cleans up partition information in the kernel partition table for those disks on both nodes. [ QSTOR-4472 ]
Fibre Channel Target
- Fixed an issue with the Fibre Channel Target Port reverting back to initiator mode upon reboot, if previously configured in target mode. [ QSTOR-1622 ]
- Added improvements to the paging toolbar to make the page size configurable in the UI. [ QSTOR-4619 ]
- Fixed an issue with inconsistent Properties panel views depending on the item selected in the tree or central grid. [ QSTOR-4725 ]
- Fixed an issue where the selection set for a Multi-Delete dialog would retain the first selected item on dialog open even if the selection was changed and item was unchecked. [ QSTOR-4741 ]
- Fixed an issue with the Web Interface hanging or becoming unresponsive in Firefox. [ QSTOR-4669 ]
- Fixed: Multi-select based dialogs now retain their selections if overall set filters are used, such as hide snapshots in the Network Share or Storage Volume Multi-delete dialogs. [ QSTOR-4749 ]
- Fixed an issue causing the WebUI to hang in the network interfaces section after bonded interface create. [ QSTOR-4724 ]
Service Core and Installer
- Added iozone3 and fio packages to the ISO for availability on new installs. [ QSTOR-4522 ]
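With fio available on new installs, a quick synthetic test can be run directly on an appliance; a minimal random-read job file sketch (the target path, size, and runtime are illustrative values, not recommended defaults):

```
; randread.fio -- illustrative fio job file; filename/size/runtime are examples
[randread-test]
filename=/export/pool1/fio-test.dat
rw=randread
bs=4k
size=1G
ioengine=libaio
direct=1
iodepth=16
runtime=30
time_based
```

Run it with `fio randread.fio` once the target path exists; delete the test file afterwards to reclaim pool space.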
- Added an Alert for Low disk space on the OS Disk drive. [ QSTOR-4436 ]
- Improved log collection with automatic PII scrubbing for GDPR compliance. [ QSTOR-4796 ]
- Masked further password fields in CLI output. [ QSTOR-4728 ]
- The qs-upgrade CLI command now updates the Ceph version on the system to the Luminous release. [ QSTOR-4771 ]
- Removed 'qs-util megalsiget' as it has been replaced by better options provided by Broadcom/Avago/LSI support. [ QSTOR-4711 ]
- Updated the Red Hat-compatible QuantaStor CLI .rpm [ QSTOR-2885 ]
- Updated SNMP MIB [ QSTOR-4797 ]
- Improved connection retry logic in QuantaStor dashboard on a scaled setup. [ QSTOR-4713 ]
v4.5.3.002 (May 1st 2018)
- Updated influxdb to 1.5.2 [ QSTOR-4257 ]
- Updated telegraf to 1.5.3 [ QSTOR-4257 ]
- Upgraded sg3 utils to 1.42 [ QSTOR-4571 ]
High Availability Shared Storage Pool
- Fixed an issue where IO-fencing checks could return an incomplete list if a faulted disk was present that could not respond to SCSI inquiry commands. [ QSTOR-4672 ]
- Fixed an issue where, after a reboot, the standby node in a High Availability shared storage pool configuration took 30 minutes to reach a ready state to accept client network connections. It is now ready immediately after synchronizing with its partner node on boot. Previously this could have caused a temporary loss of client connection if a failover was triggered before the node was ready. [ QSTOR-4688 ]
Hardware Enclosure Management
- Added: SES disk identify commands are now sent down all available SAS paths to a JBOD and its Enclosure Service Modules (ESMs). This helps ensure disk identification continues to work in the event of a faulty ESM. [ QSTOR-4706 ]
- Fixed: added the 'qs task-clear-all' command back. This corrects a regression from 4.4.0. [ QSTOR-4683 ]
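The multi-path SES identify behavior above maps to standard sg3_utils operations; an illustrative sketch (the /dev/sg device names and element index are examples, not QuantaStor's actual invocation):

```
# Light the identify/locate LED for enclosure element 7 via one SES path,
# then repeat against a second path for ESM redundancy.
sg_ses --index=7 --set=ident /dev/sg4
sg_ses --index=7 --set=ident /dev/sg5
# Clear it again on both paths when done.
sg_ses --index=7 --clear=ident /dev/sg4
sg_ses --index=7 --clear=ident /dev/sg5
```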
v4.5.2.001 (April 17th 2018)
- Fixed the qs-distupgrade command for Precise -> Trusty upgrades. This corrects an issue where the system would fail to boot, stopping at a grub rescue prompt after upgrading. [ QSTOR-4674 ]
v4.5.1.003 (March 16th 2018)
Encrypted Storage Pool
- Fixed an issue with starting Passphrase Protected Encrypted Storage after upgrading from 4.4.3 or earlier releases. [ QSTOR-4598 ]
Storage Pool Management
- Added a widget to the Pool Stats to indicate short hand Millisecond numbers from the raw Nanosecond numbers provided with 4.5.0. [ QSTOR-4597 ]
- Fixed: The Storage Pool Create Dialog was filtering out Hardware RAID unit based Physical Disks that have the same serial # but different SCSI Device IDs. This corrects a regression introduced in 4.5.0. [ QSTOR-4600 ]
- Added new column search filtering in several of the Management grid views that deal with large sets of items such as Volumes, Network Shares, Pools, and Disks. [ QSTOR-4210 ]
- Expanded the size of the Ceph Cluster create dialog to show more nodes. [ QSTOR-4584 ]
Core Service and CLI
- Updated Windows CLI for QuantaStor [ QSTOR-4592 ]
v220.127.116.11 (March 9th 2018)
For a quick overview on the new Features and key changes in this release, please read the article on the OSNEXUS blog.
- Updated Linux Kernel to 4.4.0-112
- Updated SCST FC and iSCSI target driver 3.3.0
- Updated i40e driver to 2.4.3
- Updated igb driver to 18.104.22.168
- Updated ixgbe driver to 5.3.4
- Updated e1000e driver to 22.214.171.124
- Updated ena driver to 1.5.0
- Updated megaraid_sas driver to 07.704.04.00
- Updated mpt3sas driver to 25.00.00.00
- Updated aacraid driver to 1.2.1-55022
- Updated mlnx-ofed-kernel driver to 3.4
- Updated hpsa driver to 3.4.20
- Updated sfc driver to 126.96.36.1991
- Updated arcmsr driver to 1.30.0X.27-20170206
- Updated smartpqi driver to 1.1.2-125
- Kernel 4.4.0-112 includes fixes for the below Security items:
Spectre - Variant 1 - Bounds Check Bypass - CVE-2017-5753
Meltdown - Variant 3 - Rogue Data Cache Load - CVE-2017-5754
Note: Spectre Variant 2 (CVE-2017-5715) is a firmware-level issue and can only be addressed with updated microcode via a motherboard BIOS update or a firmware update from the processor manufacturer.
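On a running system the kernel reports mitigation status under /sys/devices/system/cpu/vulnerabilities/; a small sketch that interprets those status strings (the sample strings are illustrative of the kernel's reporting format):

```shell
# Interpret a kernel vulnerability status string such as those found in
# /sys/devices/system/cpu/vulnerabilities/{spectre_v1,spectre_v2,meltdown}.
mitigation_state() {
    case "$1" in
        "Not affected")  echo "ok (not affected)" ;;
        Mitigation:*)    echo "ok (mitigated)" ;;
        Vulnerable*)     echo "VULNERABLE" ;;
        *)               echo "unknown" ;;
    esac
}
mitigation_state "Mitigation: __user pointer sanitization"  # -> ok (mitigated)
mitigation_state "Vulnerable"                               # -> VULNERABLE
# Real usage on a live system:
#   for f in /sys/devices/system/cpu/vulnerabilities/*; do
#       printf '%s: %s\n' "$(basename "$f")" "$(mitigation_state "$(cat "$f")")"
#   done
```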
- Fixed an issue where unassigned Ceph journal devices were not showing up in the Ceph journal tab. [ QSTOR-4520 ]
Cloud Alerting Integrations:
- Added Slack Cloud alert support for alerts of Warning or higher severity. If you are interested in adding this Slack integration to a channel on your workspace, please contact email@example.com for assistance. [ QSTOR-4400 ]
- Added support for the Community Edition license to have a QuantaStor management grid of up to 3 nodes. [ QSTOR-4405 ]
- Added enhanced support for NVMe hot-swap devices with kernel 4.4.0-112-generic on Trusty-based deployments. NVMe devices now have a better device path with a unique serial number identifier. [ QSTOR-4445 ]
- Added: Disk Format now supports cleaning up disk entries for multipath disks that had partitions. [ QSTOR-4465 ]
- Optimized Physical Disk discovery and correlation. This reduces the time of HA Failover, Pool Startup and Physical Disk Scan tasks. [ QSTOR-4493 ]
- Added: Pools created with partitions instead of raw block devices (such as those without multipathing or encryption configured) now show the -part1 partition on the Storage Pool Disk and Physical Disk objects. This provides better visibility into the Storage Pool's device management and matches the output from 'zpool status -P'. Note: QuantaStor fully supports pools which include both partitioned disks and raw disk devices. For example, a configuration without multipathing would use the partition disk scheme during pool creation; later, when multipathing is added and the pool is grown, the new disks would utilize the raw-disk multipath devices in addition to the -part1 partitioned devices. HA is fully supported in this scenario. [ QSTOR-4528 ]
- Fixed: removed grouping in the Central Physical Disk grid of the WebUI. This fixes an issue where disks with similar names would be on different pages when a user had a filter for serial or name in place. [ QSTOR-4473 ]
- Added monitoring and automatic startup of glusterfs service if it is not running. [ QSTOR-4464 ]
- Added Network Share Alias and sub-share support to Scale-out NAS Gluster Volumes. [ QSTOR-4406 ]
- Added the ability to create and manage multiple Scale-out Gluster clusters in the same grid. [ QSTOR-4406 ]
Hardware Enclosures and Controllers:
- Added support for HGST 4U60 Bay G2 and G3 enclosure services and enclosure slot layout. [ QSTOR-4461 ]
- Fixed a race condition with very large disk configurations and Disk Identify LED blinking via the SES protocol. [ QSTOR-4553 ]
- Fixed an issue that could cause slow response in the Hardware Enclosure and Controllers section of the WebUI. [ QSTOR-4451 ]
- Fixed an issue with Network Share cloning to local storage Pools or the same storage pool. [ QSTOR-4560 ]
- Added support for creating non-HA Virtual Interfaces on top of VLAN Interfaces. [ QSTOR-4385 ]
- Added: VLAN interfaces can now be configured directly on top of an unconfigured network port. Previously, creating VLAN interfaces required a network interface to be active and have IPs configured. [ QSTOR-1383 ]
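In ifupdown terms, a VLAN interface on a raw port needs only the parent device name, not an address on the parent; a sketch (the interface name, VLAN ID, and address are examples only, not QuantaStor's generated configuration):

```
# /etc/network/interfaces fragment -- VLAN 100 on unconfigured port eth2
auto eth2.100
iface eth2.100 inet static
    address 192.168.100.10
    netmask 255.255.255.0
    vlan-raw-device eth2
```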
QuantaStor ISO Installer
- Fixed an issue with the disc integrity test that could falsely report a checksum failure for md5sum.txt. [ QSTOR-4479 ]
Remote Replication and Snapshots:
- Fixed an issue where a _chkpnt share marked as an active replica checkpoint could cause a schedule to fail to trigger even if the _chkpnt share and its source share were removed from the schedule. [ QSTOR-4476 ]
- Fixed an issue with volume snapshot rollback failing due to an open file handle from the SCSI Target for VAAI [ QSTOR-4563 ]
SCSI Target Driver:
- Removed Infiniband srpt support in favor of iSER with SCST 3.3.0 driver. [ QSTOR-4557 ]
Storage Pool High Availability Failover:
- Fixed an issue where pool failover could sometimes fail in non-multipath environments where a pool is created or grown with a large number of disks. This also resolves the same issue in scenarios where a pool is created without multipathing and multipathing is configured at a later date and the pool is failed over. [ QSTOR-3211 ]
Storage Pool Encryption:
- Added NVME disks to the list of devices that can be used with Storage Pool Encryption. [ QSTOR-4379 ]
- Fixed an issue where encrypted disk information was not being updated on the standby node in an HA Failover pair after pool grow, spare add or cache device add actions.
- Fixed: Encrypted disk open during a failover has been optimized to greatly reduce failover times in the event a pool's encrypted disks have not yet been opened after a pool grow or initial HA pool creation and failover testing. [ QSTOR-4496 ]
Storage Pool Hotspare Manager:
- Added support for auto-spare replacement of a failed disk with Global spares on Encrypted Storage Pools. [ QSTOR-4523 ]
- Fixed an issue where a hot spare disk replacement action could fail for spare disk devices in pools on systems where multipathing was enabled after pool creation. [ QSTOR-4485 ]
- Fixed an issue where a hot spare replace action would fail for dedicated spare disks. [ QSTOR-4486 ]
- Fixed an issue where a hotspare that was just added could be marked as faulted after failover or adding another hotspare to the same pool. [ QSTOR-4544 ]
- Fixed an issue where adding a spare would raise a warning alert regarding pool repair completion on a healthy pool that was never degraded. [ QSTOR-4552 ]
- Added new 'Enclosure Redundancy' property that appears in the Storage Pool section of the webUI and qs pool-list CLI output that is used to indicate if a Storage Pool is currently redundant across JBODs in multiple disk enclosure configurations. [ QSTOR-4422 ]
- Added the ability to create Storage Pools with RAID 6 / Z3 / 60 / Z3+0 on odd or uneven balanced JBOD enclosures (10 disks in one JBOD and 15 in the other) while retaining enclosure level redundancy. [ QSTOR-4426 ]
- Fixed: Better scoped the right-click menu options for XFS Storage Pools to remove items that only apply to ZFS selections. [ QSTOR-4550 ]
- Removed XFS Storage Pool Replication menu options from the WebUI. These options continue to be available from the CLI for legacy users. [ QSTOR-4409 ]
- Fixed a regression where the Storage Pool I/O profile was not being set to the default profile if no other selection was chosen. [ QSTOR-4477 ]
- Storage Pool Disk devices now persist with the Storage Pool parent even if they are missing from the system. Missing Storage Pool Disks have a red exclamation next to them in the Storage Pool web view to indicate the missing status. The Storage Pool Disk object contains details in its properties for the disk serial # and its last known enclosure and slot location. This makes it easy to track down a disk that has failed completely and gone dark to all system discovery, identity, or presence commands. [ QSTOR-4298 ]
- Added: VAAI is now supported for Storage Volume Remote Replica _chkpnts and Snapshots. [ QSTOR-4567 ]
- Added a Performance view to the Dashboard for Storage Volumes and Storage Pools to show Read and Write graphs for: IOPS, Throughput and I/O Time [ QSTOR-4123, QSTOR-4467 ]
- Added new Web Interface Customization tab to the Create User and Modify User Dialogs. This allows for customizing what sections of the webUI are visible on a per user basis. [ QSTOR-4433 ]
- Added: New Icons in the webUI to indicate when a physical disk is fenced for use by a Storage Pool. New option columns in the Physical Disk Grid view can provide further details. [ QSTOR-4024 ]
- Added: The Grid Dashboard now shows a counter in the System tile for Share and Volume Snapshots. Previously, snapshots had been included in the overall Volume and Share count. [ QSTOR-4572 ]
- Fixed: Improved responsiveness in the Central Web Manager grid views for Enclosure and Ceph. [ QSTOR-4457 ]
- Moved the memory graph in the Storage System Dashboard to its own chart. [ QSTOR-4490 ]
- Removed grouping sections in Storage Pool device list for data / cache and spare disks in favor of the older sorting and filtering method as it takes up less room on smaller display resolutions. [ QSTOR-4475 ]
- Added the alert-StorageSystemName property to SNMP alerts to show the hostname of the originating Storage System raising the alert. [ QSTOR-4502 ]
- Fixed: The 'qs pool-remove-write-log' command now supports removing disks by name; previously it required passing in the ID of the disk. [ QSTOR-4407 ]
- Fixed: electing a grid master via the 'qs grid-set-master' CLI command now correctly returns a response reflecting the new grid master. [ QSTOR-4543 ]
- Fixed an issue where modifying a Host Description field with host-modify would rename the host. [ QSTOR-4565 ]
- Added additional backups of samba configuration file on AD join/leave. [ QSTOR-4413 ]
- Added auto-repair from backup of the global section of the samba configuration if it is found to be missing during a share modify or Management service startup. [ QSTOR-4412 ]
- Fixed an issue with correctly filtering out rbd and zvol devices in udev on trusty. [ QSTOR-4460 ]
v4.4.3.006 (February 1st 2018)
- Added Active Replica Checkpoint flag to Network shares and Storage Volumes that will temporarily disable Remote replication schedules or Replication tasks when active client access is enabled. This ensures that users who assign access to a Network Share or Storage Volume for a Replica Checkpoint or Checkpoint Snapshot at a particular time do not have the data set change due to an enabled replication schedule or manually triggered replication.
- Fixed: The RFC2307 option now correctly always appears in the Active Directory Configuration dialog if the selected server is running the Precise platform with the optional Samba4 version installed.
Hardware Enclosures and Controllers
- Fixed an error with the new SES SAS Hardware Enclosure discovery in QuantaStor deployments on Precise platforms.
- Fixed an incorrect SMART alert on QuantaStor deployments running on Precise that would set disks to a warning status. This issue does not occur on deployments running on the Trusty platform.
- Fixed: Resolved a dependency issue with the install media that was preventing EFI/UEFI BIOS installs when internet access was not available.
v4.4.2.004 (January 19th 2018)
High Availability VIF
- Fixed an issue where Gluster and Site Cluster VIFs could not be manually failed over with the 'Move HA Virtual Interface' option in the WebUI.
- Fixed an issue where a Gluster VIF would not automatically move to the next available active node in the event of a node failure.
Note: You will need to remove and recreate your Gluster VIF for this change to take effect.
- Fixed: Gluster Peer Setup now attaches to the selected set of nodes.
- Fixed an issue with the qs-util resetids command for resetting the Gluster service unique ID.
- Fixed an issue with the Remote Replicated volume target _chkpnt where it would not appear in the WebUI. This was a regression introduced with the 4.4.1 release. The fix renames the _chkpnt to use the correct UUID so that remote replication displays all associated target child snapshots and continues as expected.
- Added new SnmpTrapType to SNMP alertEntry:
OID: .188.8.131.52.4.1.393184.108.40.206.1.1.22 .iso.org.dod.internet.private.enterprises.osnexus.quantastor.sysStats.alert.alertTable.alertEntry.alert-SnmpTrapType
- Updated SNMP MIB
Hardware Controllers and Enclosures
- Fixed an issue with the disk warning alert where the count would report '0 of N' disks. The alert now reports the correct number of disks. If you see this alert after upgrading, it means that the stated number of disks are in a warning state due to SMART health or over-temperature alert states. Further investigation of disk health states can then be performed under the Hardware Controller > Disks section of the QuantaStor management interface to determine any corrective hardware actions that need to be performed.
Pass-thru Storage Volumes
- Fixed a Management service crash that could occur with Pass-thru Storage Volumes presented via Fibre Channel.
v4.4.1.011 (December 19th 2017)
The 4.4.1 upgrade has been deprecated in favor of 4.4.2.
- Updated: new Backup Policies have the Backup Concurrency set for Parallel Backup with 12 streams. Previously the default was Serialized Backups.
- Ceph packages updated to Jewel 10.2.10 for QuantaStor appliances running on Trusty.
- Added status icons for Monitors, OSDs, and other important items to the Ceph Dashboard. This provides a quick at-a-glance health overview of the most important Ceph Scale-out Cluster items.
- Added checks to ensure that Ceph nodes added to existing clusters are all running the same version.
- Fixed the Health tooltip for the Ceph Cluster in the Ceph Dashboard to show brief and more useful information about the Ceph PG states.
- Fixed: A newly created Ceph Cluster will show a status of Initializing and transition to Normal once all the Monitors specified during cluster create are online.
- Fixed an issue with Ceph RBD Storage Volume resize to ensure that the corresponding iSCSI Target LUN size is also updated.
- Fixed: Added additional validations to the Create Ceph Journal Dialog to limit the maximum partition size to 8.
- Updated Windows CLI available at https://www.osnexus.com/downloads/
- Fixed: Updated the qs share-list CLI output to show text for record size instead of block size.
- Added filtering to Dialogs that list physical disk objects to filter out disks already in use in an active Pool Create, Grow, Add Spare or Add Cache device task.
- Added support for Persistent Memory devices to be used as Physical Disks.
- Fixed: Further optimization for the speed of Physical Disk Scan.
- Fixed: Parallelized Encryption disk format to improve creation time for Storage Pools in larger configurations.
- Fixed: Changed Dell MD3060e Enclosure to use SES standard for Enclosure discovery and management.
- Fixed: Corrected the task description for disk identify when setting to on and off instead of duration.
- Added: FC LUN IDs will now be allocated only on Host Assignment. Previously, LUN IDs were allocated on Storage Volume object creation. When this upgrade is installed, all unassigned Storage Volumes and Snapshots will release their LUN IDs back to the unassigned pool. All Volumes currently assigned to hosts will retain their existing LUN ID assignments.
- Added a checkbox to the Host/Volume assignment dialogs to release unused LUN IDs back to the unassigned pool. By default, LUN IDs are retained on the Storage Volume to allow for temporarily unassigning or changing host assignments while keeping the already assigned LUN ID.
- Added: Storage Volumes that have no Host assignment will now show 'Unassigned' for the FC LUN Property in the WebUI.
- Fixed an issue with Storage Volume resize setting a size not compatible with FC ALUA standby device initialization. All size operations now round up to the nearest megabyte if the size is provided in bytes via the WebUI slider or the CLI --size option.
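The round-up behavior can be sketched with integer arithmetic; this sketch assumes binary megabytes (MiB, 1048576 bytes), though the release note does not specify MB vs MiB:

```shell
# Round a byte count up to the next whole mebibyte boundary.
round_up_mib() {
    echo $(( ( $1 + 1048575 ) / 1048576 * 1048576 ))
}
round_up_mib 1048576   # already aligned -> 1048576
round_up_mib 1048577   # one byte over  -> 2097152
```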
- Fixed an issue with VAAI Primitive Support on FC ALUA configurations.
- Fixed: The LUN property for Storage Volumes is now labeled 'FC LUN' in the WebUI to indicate that the LUN IDs refer to the Fibre Channel LUN ID.
- Fixed an issue where Resizing a Storage Volume would not be reflected to the FC or iSCSI target LUN object. This corrects a regression introduced in the 4.3.3 release.
- Fixed an issue with the disk format when Adding spares to a ZFS Storage Pool.
- Added a New dashboard at the top of the Network Share section that quickly shows Network Share used space from Storage Pool used space.
- Added: The right-click context menu Delete option for Storage Volumes and Network Shares now opens the multi-delete dialog with the Share or volume pre-selected.
- Fixed an issue with Web browser support for IE and Firefox. This corrects a regression introduced in the 4.4.0 release.
- Fixed: the Multi-OSD Create Dialog has a larger section for the Journal selected list to always show 3 or more entries.
- Fixed: added some minor text clarification for the Storage Pool section of a system in the new Grid Dashboard.
- Fixed: Selected items now persist through searches and filters for the Create Storage Pool, Format Physical Disk, Identify Hardware Disk, and other dialogs.
- Fixed: Dialogs that select Physical Disks now have counts labeled "Total", "Found", and "Selected" to help clarify the number of disks listed, searched, and selected. Total now always represents the total number of available disks on the system.
- Fixed the spacing and default height for some of the Split sections of the Central Grid views to better support smaller screen resolutions.
- Fixed: Combined some of the split sections in the Ceph Cluster section of the webUI to ensure that all important items are visible.
- Fixed: Right clicking on an enclosure and choosing Modify Enclosure in the Enclosure View will now correctly bring up the specific enclosure you right-clicked on.
High Availability Failover
- Added additional corner case protection for HA failover in the event both nodes of an HA pair are rebooted or lose power at the same time.
- Fixed an issue where a newly added cache, spare, or disk device in a non-multipath configured HA pool could be marked as missing/unavailable after a failover.
- Added Estimated Time and Estimated Transfer to Remote Replication Reports for Replication tasks in the Synchronizing state.
- Fixed an issue with Create Remote Replica for Network Shares that prevented full replication when a custom name was specified.
- Fixed: Replication tasks for replicating Storage Volumes now show the storage Volume name instead of object ID.
- Fixed: Added a check to Network Share Snapshot delete to verify that the Snapshot is not in use by a Replication or Snapshot Schedule or being retained for a retention requirement on the destination. You can use the force flag during the deletion to force deletion of the snapshot if required.
- Fixed: The Replica Associations For Network Shares now correctly show text pertaining to 'Shares' in the Properties fields.
- Fixed an issue with the Interval settings slider in the Snapshot, Remote Replication and Backup Policy Schedule dialogs where the slider would not initialize at the shown value.
- Fixed an issue where the Remote Replica Associations would not appear in their central grid view.
- Fixed an issue where some Network devices would be renamed on reboot. This was due to the devices not reporting a unique BiosDevName to the biosdevname kernel mapping logic; we now read and enumerate the devices based on their ifnames.
- Updated MIB
v220.127.116.11 (November 13th 2017)
- Added New Grid Dashboard tab to the Web Manager that provides a quick at a glance overview of Resource, Cluster, and System Health and Status for nodes in a QuantaStor Grid.
- Added qs-distupgrade script to provide Distribution Upgrade support to migrate QuantaStor Appliances from 12.04 Precise to 14.04 Trusty. Note: For Upgrading HA Clusters or Scale-Out Configurations, Please contact OSNEXUS support for assistance.
- Fixed: User Access assignments will persist after Active Directory server configuration has been removed. Previously these settings would have been removed on leaving the Active Directory domain.
- Fixed a rare case where the join domain task could fail if the Active Directory server was slow to respond.
Remote Replication and Snapshots
- Added features to Snapshot Schedules and Backup Policies to bring them in line with those already available in Remote Replication Schedules. This includes new interval-based timers and long-term retention tagging and policies.
- Fixed a filtering issue with Remote Replication and Snapshot Schedules that could have limited the Volumes and Shares available for selection. Now only Volumes and Shares on the destination pool of a Remote Replication Schedule are filtered out from the list of available replication sources.
- Fixed an issue where nodes would sometimes report being out of sync during a remote replication.
- Fixed: Increased the maximum retries and wait time for Remote Replication snapshot discovery to better support slower replication target systems.
- Fixed: Corrected an issue where the Schedule ownership was not being set on a Remote Replication Association if the same association was used for Manual Remote replication as well as via a Remote Replication Schedule.
- Fixed: The associated Replication Task will now fail as expected when a replication process between nodes is terminated due to a network or system stability issue.
- Fixed: The Remote Replication Task now provides better logic for the Information used by the Remote Replication Reports to show the start-times, end-times and replication speeds.
- Fixed: Replica _chkpnt snapshots will now have their share parentId set correctly for their parent target _chkpnt of a remote replication.
- Fixed: the Remote Replication task status will correctly update with a failed status if the replication process is terminated due to a communication or stability problem on the source and/or replication target systems.
- Fixed: Create Remote Replica Volume tasks will now correctly detect when there is no common snapshot for deltas between the source and target and fail the task. Previously the task would stay at 0% in a running status and never complete or fail.
- Fixed: Replication Network Share snapshots on the source system could sometimes not be correctly associated with the Replication Schedule that created them. This is now fixed.
- Fixed the descriptive and title text in the Snapshot Modify Dialog.
Ceph Scale-out Block and Object
- Added new 'Enter/Exit Ceph Maintenance Mode' Dialog to allow Administrators to set the Maintenance Mode state on a Ceph Cluster.
- Added the 'qs ceph-osd-replace-journal' command to allow Administrators to replace a Journal device in an OSD live. Note: the Ceph cluster must be in maintenance mode to allow this option, and a restart of the OSD receiving the journal change will occur.
- Added 'qs enter-system-maintenance' and 'qs exit-system-maintenance' CLI commands. This is currently implemented only to set Ceph Scale-out Block and Object clusters into maintenance mode but will be expanded to support maintenance mode management of other QuantaStor Cluster/scale-out solutions.
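The maintenance-mode workflow for a live journal replacement could be sketched as follows; only the command names come from these notes, and the option names passed to 'qs ceph-osd-replace-journal' are assumptions for illustration.

```shell
# Put the Ceph cluster into maintenance mode before touching OSD journals.
qs enter-system-maintenance

# Replace the journal device for one OSD live; the --osd and --journal-device
# argument names are assumptions, and the OSD will restart as noted above.
qs ceph-osd-replace-journal --osd=osd.12 --journal-device=/dev/nvme0n1

# Return the cluster to normal operation.
qs exit-system-maintenance
```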
- Fixed an issue where a Ceph Cluster Node that had RBD devices mapped via iSCSI could not be removed from the Ceph Cluster.
- Fixed: The Ceph Cluster Dashboard will now display correctly if you select a Journal Device that is not assigned to an OSD.
- Fixed: Selecting an object in a grid view in the Ceph Scale Out Block and Object section of the Web Manager will now select the parent object in the tree.
- Added a larger 4GB zfs_dirty_data_max setting for systems with 16GB of RAM or more. For systems with less than 16GB of RAM, the default 1GB cache setting will be used.
- Fixed: Create Storage Pool Tasks on Encrypted disks now provides a more detailed Task description while at the Encrypt Disks stage.
- Added a status column to 'qs pool-list' to show the Storage Pools reported Health Status.
- Added an alert recommending a change of sync policy to 'standard' if a ZIL configuration is removed from a ZFS Storage Pool that has had sync=always configured. This is to ensure an expected level of performance can be maintained after removal of the high IOPS ZIL SLOG (Sync Log) SSDs. A policy of sync=standard is generally recommended for all ZFS configurations unless advised otherwise by OSNEXUS Support due to the needs of a specific use case and workload.
- Fixed an issue where a spare device could be rediscovered and added back in after removal from an XFS pool.
- Fixed: Alerts will correctly be triggered when the Storage Pool Used Capacity Thresholds are Exceeded.
- Fixed: XFS Storage Pools will correctly show the add/remove hotspare context menu in the Storage Pool section for adding dedicated hot spares to XFS Software RAIDed pools.
- Fixed: The Color Thresholds for Utilized % in the Storage Pool section of the Web Manager is now based off of the values defined in the Alert Manager Pool free space thresholds.
- Fixed an issue where the RAID levels drop down in the Storage Pool Create Dialog would not always update to reflect the specific options available for the ZFS or XFS filesystem type chosen. This also corrects an issue where the RAID levels listed would not always update based on the number of physical disks in a selected system in the dialog.
- Added an additional LACP bonding mode that enables support for Layer 3+4 xmit_hash_policy. The original LACP mode is still identified as lacp via the CLI and will show 'LACP Layer 2' in the Web Manager.
- Added: The Format Physical Disk Dialog now provides the ability to select multiple disks. The Disks listed are from the detected available disks on a Storage System that are not associated with a Storage Pool.
- Added: The Hardware Disk Column is now visible by default for all dialogs that interact with Physical disk Objects.
- Added a Search field to the Physical Disk Tree View. The disks can be searched and filtered based on Disk logical Name and Serial Number.
- Added: Newly inserted Hardware disk devices will automatically trigger a Physical Disk Scan and be immediately ready for use.
- Fixed: Increased the fs.aio-max-nr value to 1048576 to support larger initial multipath configurations by default.
- Fixed: Added paging to the central grid pane in the Physical Disk View Section of the WebUI.
- Fixed: added further performance optimizations to the Physical Disk Section of the Web Manager.
- Fixed: Further Enhancements to the Format Disk functionality.
- Fixed: Text in the Physical Disk Grid view can be selected allowing for easier copy/paste of information.
- Fixed: The Serial No. field is now visible by default in the grid view.
Hardware Enclosures and Controllers
- Added SMART health status in the Hardware Controllers and Enclosures section for disks connected to SAS HBAs.
- Added: The Hardware Disk Identify Dialog in the Hardware Enclosures and Controllers section of the Web Manager now supports selecting multiple Hardware Disks at the same time.
- Added: the Hardware Enclosure view in the Hardware Enclosures and Controllers section of the Web Manager now will display the pool the disk is associated with.
- Added: The Mark/Unmark Hotspare Dialog in the Hardware Enclosures and Controllers section of the WebUI Now presents a dialog capable of performing multiple selections at once.
- Fixed an issue where LSI 9400 series HBAs would not show disk temperatures or trigger overtemp alerts.
- Fixed: The Enclosure grid view in the center pane will now update as expected based on the selected Hardware Controller in the Tree view.
- Added support for newest HPE RAID controller management via ssacli 18.104.22.168.0
- Fixed: Changed the storcli discovery support for SAS HBAs to only be enabled for LSI/Broadcom branded 9400 series controllers.
- Added further Search filtering criteria (Vendor, Product, Controller, Status) for the Hardware Disk Identify and Mark Hot Spare Dialogs.
- Added: Hardware Disk Identify Dialog is now available as a right click context menu item for controller, enclosure and other items in the tree view of the Hardware Enclosures and Controllers section of the Web Manager.
- Added: Hardware Disk Identify now supports 'On' and 'Off' modes in addition to the existing Duration mode previously offered. Now you can set a disk to indicate its location without having to race the clock.
- Fixed an exception where the Hardware Controller->Create Raid Unit, Software Controller->Remove Adapter, and Software Controller->Scan Targets dialogs would not open if there were no available controller objects to perform actions on.
- Fixed: Added logic to handle a corner case where a failing Hardware RAID controller, or one in an errored state, would return a status of 'Failed' via its RAID utilities but still provide valid parsing data on the state of Enclosures, Disks and RAID units behind the controller. Previously QuantaStor trusted the 'Failed' status code returned from the card and stopped discovery at that stage.
- Added support for NIST 800-53 R4 AC2(2) compliant 'emergency' and 'temporary' account types. These new user types may be created via the user-add qs CLI command.
Additional NIST AU-12 Compliance
- QuantaStor v4.4 now outputs to the qs_audit.log file in CEE JSON reference profile conformant to RFC 4627.
- Added: QuantaStor 4.4.0 now deploys SSL certificates on new installs with sha256-signed 2048-bit SSL keys. It is recommended that you upgrade all older deployments to use the newer SSL certificates using the 'qs-util cacertusedefault' command from the CLI. If you have a deployment that needs to continue using the older SSL certificates, they are still available by running 'qs-util cacertuselegacy'. All QuantaStor grid systems must be running the same type of signed keys for grid communication to function.
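Based on the commands named above, upgrading a node's certificates might look like this (run consistently across all grid nodes, since mixed key types break grid communication):

```shell
# Switch this node to the newer sha256-signed 2048-bit SSL certificates.
qs-util cacertusedefault

# Or, for deployments that must keep the older certificates:
# qs-util cacertuselegacy
```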
- Added: System Swap space is now automatically encrypted when an Encrypted Storage Pool is created or detected on a system after updating to the 4.4.0 release.
- Added: Tree View Search Filter now returns results where child objects matched.
- Added Further Localization enhancements to Dialog text.
- Fixed an issue with some dialogs containing lots of columns, where you could not always scroll fully to the right.
- Fixed: The Dialogs involving Physical Disk and Hardware Disk objects now show a disk total at the bottom of the dialog that reflects the number of disks found from the current search filter. If there is no search filter, it will show the total number of disks available in the system.
- Fixed an issue where the Utilized % Column in the grid view would not show 100% Utilized when a Pool was full.
- Fixed an issue where the Utilized % Column would not update as expected for Storage Volumes after a discovery cycle.
- Fixed: The Add Grid Management Virtual IP Dialog now has the correct help web page link specific to creating a Grid Management Virtual IP.
- Fixed: The right-click context menu for Gluster Volumes has been re-ordered to ensure a clearer order of possible operations.
- Fixed: Updated the Workflow Manager help links and images in the Documentation wiki.
- Updated SNMP MIB
- Fixed: Changes to the Password and Security Policies no longer fail if a node does not yet have an active QuantaStor License.
- Fixed an issue where Task description text was not updating as intended for running tasks.
- Fixed an issue where the sync policy setting on ZFS Network Shares was not being set if a user wanted something other than default inherited from the ZFS Storage Pools setting.
- Fixed: Improved logging to qs_service.log for Active Directory join.
- Added ZFS Event Daemon (ZED) output to rsyslog for event logging.
- Fixed an issue where the ZFS Event Daemon (ZED) was not starting on startup. ZED will now start and be kept alive by the normal QuantaStor keep-alive scripts.
v4.3.3.016 (September 12 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- New SCST SCSI Target driver that allows iSCSI and FC ALUA luns to be presented from the same QuantaStor HA head nodes.
- Increased the size of the stat cache used for S3FS based one-to-one Cloud Containers.
- Fixed a problem where using multi delete volume would leave the cloud backup of a volume behind in the cloud container's mount directory.
High Availability Failover
- Fixed: Added standby path ALUA Target Portal Group information to active node. This change accelerates path recovery for FC ALUA Storage Volumes.
- Added support for FC ALUA for VMware clients.
- Added HA Storage Pool support for disks and devices that provide unique device serial number identification via Vendor identification SCSI page 83
- Fixed an issue where encrypted disk devices may not be automatically opened on HA failover of a Encrypted Storage Pool without a passphrase.
- Fixed: Improved management around HA virtual interface location constraints.
Hardware Enclosures and Controllers:
- Added new Broadcom HBA discovery support for the new IT mode controller section in the storcli raid utility.
- Fixed: Updated sas3ircu utility that corrects a system crash when a system has a Broadcom 9400 series controller installed.
- Fixed an issue with RAID Unit creation on Areca Hardware RAID controllers.
- Added support for latest HPE hpssacli 22.214.171.124
- Added: Disk Temperatures now supported on HPE Hardware RAID and SAS HBA controllers
- Added support for the SCSI Temperature field for disks on SAS HBAs.
- Updated the Broadcom storcli64 utility to version 007.0204.0000.0000 to support the latest LSI SAS HBAs and MegaRAID controllers.
- Fixed the QS CLI share-disable and share-enable commands to return the accurate value indicating whether the share is active.
- Fixed an issue with changing the description field in a Network Share Modify for shares and subshares.
- Fixed: Share Quota support will now be greyed out in the WebUI dialogs for Network Shares created on Filesystems and Storage Pools that do not support quotas.
- Added: Raised SMB limits to allow up to 1 million open files.
- Fixed: Disabled ZFS only sync CLI flags in share-modify and share-create for non ZFS shares.
- Fixed an issue with share and volume associations to Remote Replication Schedule objects if a node is removed and re-added to the grid.
- All nodes involved with the replication schedule need to be updated in the same downtime / maintenance to prevent ownership problems.
- Resolved an issue with ownership of the Remote Replication Schedule object in a High Availability configuration.
- Added 1KB, 2KB and 4KB block size support to ZFS Storage Volumes.
- Fixed: Added additional locking protection to Storage volume multi-delete for mixed cloud XFS and ZFS deployments to ensure all volumes are removed as expected.
- Fixed: Optimized SCSI target to only launch processes for mapped LUNs.
- Disabled SSH port forwarding in the sshd service.
- Configured nginx to correctly send the X-Frame-Options header.
- Added Internationalization support to the Configuration manager Dialog.
- Fixed a problem where the login failure dialog was closing before the user could read it
- Added support for more than one ZIL SLOG mirror per pool; with multiple mirrors the SLOG can provide higher performance by utilizing 4 or more SSD devices.
- Added enclosure awareness for DELL MD* devices to Storage Pool disk device RAID redundancy balancing.
- Fixed an issue with creating XFS Storage Pools on Multipath disk devices.
- Fixed an issue with removing a spare disk from a XFS Storage Pool md RAID configuration.
- Fixed: Greatly reduced the number of alerts triggered for a storage pool repair action when global spares are not configured.
- Fixed: qs-pool-create will now generate a unique pool name when no pool name is provided.
- Fixed: Clearer error messages for invalid pool names when creating a pool.
- Added new --secure=1 option to the qs-logreport command.
- Added Samba audit logging feature for logging client access of Network Shares. This new option is available under the advanced tab of the Network Share Create and Modify Dialogs.
- Removed telegraf statistic gathering for disk devices that are not being graphed.
v4.3.2.025 (August 3rd 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- Updated hpsa driver 3.4.18-105
- Updated mpt3sas driver 21.00.00.00
- Added arcmsr driver v1.30.0X.27-20170206
- Fixed an issue where a failed Ceph OSD could cause other OSDs on the same node to fail to start.
- Fixed: Added a check so that Device Mapper based disk devices are blocked when creating Journal and OSD devices. These devices have a different dev partition path and will be supported in a future release.
- Fixed: Optimized Ceph Journal device discovery for faster service startup.
- Fixed: You can now remove Cloud Backups Schedules if there are no Cloud Containers present.
- Fixed: Trusty deployments will now support dm multipath disk configuration for SAS Disks when there is only a single path from one SAS cable physically connected.
- Fixed: Added some improvements for multipath and disk discovery on service startup and disk rescans after multipath configuration changes. This change also helps ensure that if a disk or other hardware device is faulty or slow to respond that disk discovery continues on all other devices.
- Fixed an issue with the physical disk and storage pool device identify logic where the Hardware Enclosure identify disk logic to blink the enclosures disk slot ID LED was not consistently used.
Hardware Controller Support
- Added Broadcom LSI 9400 series HBA controller support.
- Added Hardware Enclosure and Controller management support for Areca Controllers in JBOD mode and RAID0/1+0/5/6 arrays.
- Added a Disk Temperature alert for disks that go above a specific centigrade value. The default for this threshold is 50C. This is currently supported on LSI MegaRAID and Areca RAID controllers, SAS HBA and other controller support will be available in a future release. The alert threshold can be customized with the addition of a 'disk_temp_alert_threshold=NN' definition under the [hw_controller] section in the /etc/quantastor.conf config file.
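A minimal sketch of the threshold override described above, assuming a target of 55C (any value other than the 50C default is just an example):

```shell
# Append the override to /etc/quantastor.conf; merge it into an existing
# [hw_controller] section instead if one is already present.
cat >> /etc/quantastor.conf <<'EOF'
[hw_controller]
disk_temp_alert_threshold=55
EOF
```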
- Added a new Drive Temperature column to the Controller Disks tab grid view in the Hardware Enclosure and Controllers section of the Web Manager.
- Added additional Enclosure Layouts and Enclosure images for HPE, Dell and Supermicro enclosures.
- Added: Drives that go above the temperature threshold will now show an OVER-TEMP status in the Controller Disks grid view.
- Updated MotionFX chassis enclosure images and details to reflect new Acromove branding.
- Added additional Enclosure Layouts and Enclosure images for HPE, Dell, HGST, and Supermicro enclosures.
High Availability Failover
- Added Site Cluster VIFs to the Web Manager. Site Cluster VIFs can be used to create a virtual interface tied to a particular Site Cluster, which is recommended for cases such as Network Share Namespaces.
- Fixed: Added a check for HA Storage Pools to ensure that grow checks if the disk is available from both nodes before performing the pool grow.
- Fixed: Added a check for HA Storage Pools to ensure that selecting a new disk to be used as a cache device or spare checks if the disk is available from both nodes before performing the operation.
- Fixed: Added a check for HA Storage Pools when adding a Hotspare disk that the drives are available from both systems.
- Added the ability to failover Site Cluster VIFs to a specific node in the Site Cluster.
- Fixed an issue with deleting Site Cluster HA VIFs on Precise platforms; this was a regression from improvements introduced in the 4.3.0 release.
- Fixed an issue where deleting an encrypted HA storage pool would leave encrypted devices open on the standby node.
- Fixed an issue with Storage Pool device verification on failover of HA Encrypted storage pools.
- Fixed an issue where the /etc/crypttab key entries were not cleaned up on the standby node after a HA Encrypted Storage Pool was deleted.
- Fixed an issue where a newly created Storage System Link would not have the reverse link persist after a reboot or management service restart.
- Fixed: Storage System Links that do not have a bandwidth limit will now correctly show the default limit of 100 MB/s when upgrading from an older release.
- Updated the Central Grid view of the Enclosure and Controllers section in the Web Manager to provide a more concise and easier to navigate display for enclosure layout and controller and enclosure selection.
- Updated the Central grid views in the Web Manager to provide a clearer layout that shows more information all at the same time.
- Added a Snapshots tab to the central grid view for Network Shares. This allows for a snapshot specific view and list for the selected network share.
- Added support for AD groups in the Network Share User and Group Quota dialog.
- Fixed a help documentation link for the Remove Heartbeat Cluster Dialog.
- Fixed an issue where the Dashboard in the Ceph Tab of the Web Manager could collapse to a hidden state unexpectedly.
- Fixed the Help Documentation for the FC Target port enable/Initiator Mode Enable dialogs.
- Fixed: The Multipath Configurator scan drop down list will have single entries for each unique multipath capable device found.
- Fixed: The Status field in the Controller Disks tab of the Hardware Enclosures and Controllers section is now wider to encompass most common disk status states.
- Fixed: Updated the Execute Storage Pool Failover dialog to clarify the checkbox and its effect for ensuring pool failover succeeds to the selected node in the event the original node fails to export the pool.
- Fixed: Task progress for a storage pool delete when a disk scrub is selected will now correctly show the progress for the overall task, including the scrub portion. Previously, individual scrub operations were reporting their process progress percentage as an update to the overall task, which resulted in the overall task progress being incorrect at times.
- Fixed an issue with Pool deletion where the pool delete would fail if there was unexpected data in the pool mount path.
"* Added new CLI support for adding and removing Network Share Quotas per user and group. Previously this was only available in the Network Share User and Group Quota Manager dialog.
qs share-group-quota-add qs share-group-quota-remove qs share-user-quota-add qs share-user-quota-remove"
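A hypothetical usage sketch of the new quota commands; only the command names come from these notes, while the share name, principal, and quota-size arguments are assumptions:

```shell
# Per-user and per-group quotas on a share named 'projects' (example names).
qs share-user-quota-add projects bsmith 100G
qs share-group-quota-add projects engineering 1T

# Remove them again.
qs share-user-quota-remove projects bsmith
qs share-group-quota-remove projects engineering
```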
"* Added password token support for the .qs.cnf file in the root home directory on QuantaStor systems, this provides localhost authentication without needing to have the password in a authentication file for the root user.
Users can enable this by echoing the special token into the root users .qs.cnf file: echo ""localhost,admin,[QSCLITKN]"" > /root/.qs.cnf And then enabling token based authentication using the CLI command for the admin user shown below. qs user-modify admin --cli-auth=yes --server=localhost,admin,PASSWORD Note, if you have created a new Administrative role user, replace 'admin' with the name of your admin user."
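The token setup above, written out as the two steps run on the QuantaStor system:

```shell
# Step 1: place the special token in the root user's .qs.cnf file.
echo "localhost,admin,[QSCLITKN]" > /root/.qs.cnf

# Step 2: enable token-based CLI authentication for the admin user.
# Replace 'admin' with your administrative user's name if different.
qs user-modify admin --cli-auth=yes --server=localhost,admin,PASSWORD
```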
v4.3.1.007 (June 30th 2017)
- New nginx-light 1.12.0-1+trusty1 package for trusty deployments.
- Fixed an issue where the encryption keys would sometimes not be copied to the secondary node of an HA Failover Group.
- Changed the Password Policy Manager dialog to be the Security Manager Dialog.
- Added http to https redirect option to the Security Policy Manager Dialog
- Added option to disable http port 80 access to the Security Policy Manager Dialog
- Added logging of manual and automatic user logout from the QuantaStor Web Manager to the '/var/log/qs_audit.log' file
- Fixed an issue where the Password Policy changes could sometimes not be applied to all nodes in the grid.
- Fixed a Security issue with bad password responses. Fixes items found related to CVE-2017-9978
- Fixed the Rest API response for when a method is unsupported. Fixes items found related to CVE-2017-9979
- Fixed: Right-click context menus will now show the same list of menu options in the tree and grid view.
v126.96.36.1995 (June 22nd 2017)
Ceph Scale-out Block and Object
- Updated Ceph Packages available:
- 10.2.7 Jewel for Trusty based installs
- 0.94.10 Hammer for Precise based installs
- Fixed Ceph Scale-out Block startup discovery on grid nodes that are still part of a Ceph cluster when one of the Ceph nodes is removed from the grid.
- New IBM Softlayer S3 Endpoints added to the default Cloud Providers configuration.
- Fixed an issue where changes to /etc/qs_cloud_providers.conf on the master node were not propagating to secondary nodes.
- Fixed an issue where the entry fields for Tag and End-Point in the Cloud Containers > Add Provider Location Dialog and CLI were swapped.
- Fixed a missing python-six dependency issue with the aws CLI tool.
- Improved pool failover times when network interface connectivity between nodes is lost.
- Fixed an issue where high-availability virtual network interfaces could try to start on a system that did not currently have the QuantaStor service running.
- Fixed an issue where deleting a high-availability virtual network interface could cause temporary outages on other high-availability virtual network interfaces.
- Fixed an issue where deleting a high-availability virtual network interface could cause an unnecessary failover to occur.
- Fixed an issue where failover would fail due to not finding the disks correctly on the secondary node.
- Fixed an issue with nginx web service starting during qstormanager package install.
- Fixed: HA Storage Pool failover is now kernel-panic aware and will trigger a failover to the secondary node.
- Fixed: Deleting a HA VIF from the Network port list in the Storage System View now correctly cleans up the High Availability Failover Group HA VIF object.
- Fixed: A Site Cluster can now be deleted using the force flag if a Cluster node is Permanently offline and will not be returning.
- Added a Multipath Configuration Dialog to the Physical Disk section of the Web Manager, this allows for administrators to scan for SAS, FC, iSCSI and other multipath/multiport capable devices and add their white-listing rules to the multipath configuration. This functionality is also available via the new qs CLI 'disk-multipath-config-list, disk-multipath-config-scan, disk-multipath-config-add and disk-multipath-config-remove' commands.
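The CLI side of the multipath configuration workflow could be sketched as below; only the command names come from these notes, and the argument passed to the add command is an assumption:

```shell
# Scan for SAS, FC, iSCSI and other multipath/multiport capable devices.
qs disk-multipath-config-scan

# Review the current white-listing rules.
qs disk-multipath-config-list

# Add a discovered device to the multipath configuration
# (the --disk argument name is an assumption for illustration).
qs disk-multipath-config-add --disk=<device-serial-or-wwid>
```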
- Fixed an issue where the /dev/disk/by-id/ata-* devices would be removed under the device mapper path after a Storage Pool is deleted by a user. Previously, a udevadm-trigger command would be required to bring the ata-* device back.
- Fixed an issue with physical disk scan on multipath configurations that could cause the multipath devices to not appear in the QuantaStor WebUI and CLI once the scan completes.
- Fixed: the qs disk-scan command now has a force flag
- Fixed an issue where old disk encryption keys would not be cleaned up after a storage pool is deleted.
- Added earlier validation checks for Storage Pool Grow Operations for Encrypted Storage Pool configurations to ensure that the disks to be added are not encrypted before the RAID set size and resultant configuration is confirmed to be valid.
- Fixed an issue during storage pool create where user specified RAID set sizes would not be used.
- Fixed a few scenarios where creating a Storage Pool with XFS, Software RAID and Encryption could fail.
- Fixed an issue where a failed disk device was not automatically removed from a ZFS Storage Pool in Multipath configured environments.
- Fixed an issue that could occur when adding hot spares to ZFS Storage Pools in Multipath configured environments.
Remote Replication and Snapshots
- Added a Remote Replication Report tab to the Remote Replication Schedules section of the Web Manager that shows the results of past replication tasks. Statistics in the report include: completion status, average throughput, the start and end time of the replication and many more. This is also available via the qs CLI with the 'qs replica-report-summary-list' and 'qs replica-report-entry-list' commands.
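Per the commands named above, the same report data could be pulled from the CLI; both invocations are shown without arguments since none are documented here:

```shell
# One summary row per past replication task (status, throughput, start/end times).
qs replica-report-summary-list

# Per-entry detail for the replication reports.
qs replica-report-entry-list
```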
- Added new snapshot retention options to the Create Remote Replication Schedule Dialog to allow for Daily, Weekly, Monthly and Quarterly Snapshots for Historical Storage Volume and Network Share snapshots.
- Added new snapshot tags for Daily, Weekly, Monthly and Quarterly Snapshots that correspond with the retention policy picked for a particular Storage Volume or Network Share snapshot.
- Added Compression support to Storage System Links, this allows for improved performance over slow WAN links.
- Added: Remote Replication bandwidth throttling has been moved to the Storage System Link object. The qs link-create and link-modify commands and Web Manager Create Storage System Link and Modify Storage System Link Dialogs now allow for setting the bandwidth throttling.
- Added the ability to turn on Unencrypted support for Remote Replication Storage System Links. This uses mBuffer to provide a high performance unencrypted channel for Remote Replication between QuantaStor nodes.
- Added the ability to configure the Bandwidth Limiter in the Create and Modify Storage System Link Dialogs.
- Added more information to indicate replication schedule health/state and the cause of failures due to any misconfiguration or network communication error.
- Added: Snapshot Schedules can now be created for snapshots (allows Snapshot of Snapshot Scheduling)
- Fixed an issue where remote replication could fail for manually initiated replications.
- Fixed: Max Replicas in Replication Schedules are now referred to more correctly as Max Delta Points, this clarifies more precisely how many Intermittent and hourly scheduled replica snapshot points are retained between a source and target replication association.
- Fixed: The Remote Replication Offset interval for Hourly/Daily Replication in the Schedule Interval tab of the Create and Modify Replication Schedule dialogs now defaults to 0 minutes and can go to a max of 59.
- Fixed: ZFS snapshots will once more be correctly removed upon the deletion of a Network Share snapshot. This corrects a regression from the 4.2.2 release.
- Fixed: resolved an issue with updating the Timestamps for running remote replication tasks that could result in the remote replication link having incorrect progress information.
Gluster Scale-out File
- Fixed an issue that could cause Gluster Peer and Volume objects to display incorrectly when Gluster is deployed in the same QuantaStor management grid as non-Gluster nodes.
- Fixed an issue that would prevent cleanup of Gluster Volumes in a configuration where the Gluster bricks or underlying pools had already been removed.
- Fixed: Re-ordered the Ribbon bar icons in the Scale-out File Storage tab.
- Added the ability to see SMB session information for Network Shares under the new Web Manager Network Share>SMB Sessions tab and the 'qs share-session-list' and 'qs share-get' CLI commands.
- Added the ability for users to create a snapshot of an existing Network Share snapshot (snapshot of a snapshot). This support is limited to custom named snapshots and not snapshots created by a schedule that has @GMT in the name.
- Fixed an issue on Precise platforms with Network Share Snapshots for Windows SMB Shadow Copy and File Versioning support.
- Fixed an issue where removing NFS access from a Network Share would collapse the tree to the first level in the Network Share tree view. Now the tree stays in place as expected when an NFS Access object is removed from the share.
- Fixed an issue in the Namespace Add/Remove Network Shares Dialog where changing the namespace in the drop-down would not always update the available selections.
- Fixed an issue with the search filtering in the Add/Remove Network Shares to/from Namespaces dialog.
- Fixed an issue with creating shares that have multiple '$' characters in the name.
- Fixed an issue with snapshot mount directory cleanup after a Network Share snapshot has been deleted. Note: this prevents the issue from occurring going forward. If users ran into this case prior to upgrading to the 4.3 release, they may need to manually remove old @GMT snapshot mount directories from the network share _snaps directory.
- Added a new Password Policy dialog available under the Users and Groups tab in the Web Manager that allows Administrators to enforce password requirements, including:
- Minimum password character length.
- Password expiration (in days).
- Number of allowed login attempts.
- Minimum days to wait before a password change is allowed.
- Number of unique passwords before reusing a password is allowed.
- Added: the Storage Volume Close Session dialog now shows a selectable list of the current sessions; clicking OK will close all the selected sessions on the SCSI Target.
- Added: Network Share and Storage Volume Multi-Delete from the web manager can be used to delete share/volume and its child snapshots (select Delete Child Snapshots).
- Added: The Network Shares and Storage Volumes Multi-Delete dialog now has the option to "Hide Snapshots", making it easier to select the parent share/volume to be deleted.
- Added Passthru Storage Volume support to the Web Manager; Passthru Volumes can be created by right-clicking on a Physical Disk and choosing the 'Create Passthru Volume' option.
- Added new columns to the Create Storage Pool Dialog showing the Source System and Source Storage Volume, making it easier for customers to pick specific Passthru physical disks when using QuantaStor appliances as backend storage for front-end QuantaStor appliances.
- Fixed: General responsiveness and UX performance improvements for the Web Manager on larger scale configurations.
- Fixed: Provided a clearer message in the Create Pool dialog for when users do not provide matching passwords for the 'Encrypt Storage pool with Passphrase' fields.
- Fixed: the Dashboards now enforce the use of https for their REST calls when the Web Manager is using https.
- Updated the Create and Modify Remote Replication Schedule Dialogs for a better workflow.
- Fixed a problem under the Remote Replication tab in the Web Manager that could lead to a slow unresponsive Web Interface.
- Fixed an issue where the Replication Targets tab under the Storage System Links could show as empty.
- Fixed an issue where a newly created Host Group would not show the selected hosts in that group. Previously, a browser refresh was required to show them.
- Fixed an issue with the Web Manager that would cause object status or task objects to not update or show as completed when a large number of events were received. Previously, a refresh of the Web Browser would have been required if this occurred.
- In the Migration Edition Workflow Manager, the View Network Shares has been replaced with View SMB Connections.
- Added a Task list counter to the Task list at the lower part of the Web Manager.
- Added Block Size column option to the Storage Volume grid view.
- Fixed: Removed the unsupported cloning options in the context menu for Network Share Alias or Subshare. Cloning should only occur at the Parent Share level.
- Fixed an issue in the Physical Disk view that could cause the Firefox and IE Web browsers to report an unresponsive script warning.
- Fixed a few areas in the Web Manager where large numbers of objects or incoming events could result in an unresponsive script warning in some Web Browsers.
- Fixed an issue in the Storage Volume tree view where scrolling down and selecting a volume could cause the tree view to 'jump' up to the top of the list.
- Fixed an issue with the tree view for Network shares that could sometimes show the NFS client access out of order with the associated network share snapshot.
- Fixed an issue with truncation of some of the options in the Create and Modify User dialogs.
- Fixed field, text, scroll and other alignment issues in various dialogs.
- Fixed miscellaneous spellings in various Dialogs.
- Fixed the Add user dialog descriptive text.
- Fixed: Added a check to the Storage Volume Advanced Settings CHAP Username/Password to ensure that both Username and Password are supplied before clicking on OK.
- Fixed: Corrected an issue under the Remote Replication Schedule View in the Web Manager where some items under the left-hand tree view could not be selected if the same Network Share or Volume was also under another schedule.
- Fixed: Re-ordered the Ribbon bar icons in the Scale-out Block and Object Storage tab.
- Fixed: Re-ordered the Ribbon bar icons in the Storage System tab.
- Fixed: References for CIFS protocol in the Web Manager have been renamed or further clarified to SMB.
- Fixed: The CIFS Configuration Dialog has been renamed to Active Directory Configuration.
- Updated Web Manager splash to detail how to properly refresh the QuantaStor Web Manager on OS X.
- Improved Web Manager responsiveness and performance.
Hardware RAID Support
- Creating a Hardware RAID unit under Hardware Enclosures and Controllers will now create the HwUnit object for display immediately with discovery of properties occurring in the background. This provides better User Interface feedback when creating a large number of Hardware RAID units.
- Fixed: added a guard to the Hardware RAID Controller SSD Cache Unit delete to prevent removal of the SSD cache if it is actively in use by any other Hardware RAID units on that same RAID Controller.
Installer and Packaging
- Fixed an issue with the .iso install media that required internet access for an install to finish.
- Fixed: The Installer will now show eth* devices for UEFI BIOS installs
- Fixed a problem with the latest qstortarget package compatibility with the older 3.19.0-29-quantastor kernel.
- Updated Japanese, Chinese, French, Spanish, and Italian localizations for the QuantaStor Web Manager.
- Added: Users who are inactive for 30 minutes are automatically logged out of the Web Manager.
- Added: Auto Logout clears all state information from the Web Browser.
- Added CJIS Section 5.5.4 Compliance:
- System Use Notification is available with System Usage Notification Message field under Password Policy Dialog.
- Added CJIS Section 5.5.5 Compliance:
- Session Lock is available with Auto Logout value under Password Policy Dialog.
- Added CJIS Section 188.8.131.52 Compliance.
CJIS Section 184.108.40.206 events are logged in /var/log/qs_audit.log for:
- Successful and unsuccessful system log-on attempts.
- Successful and unsuccessful attempts to access, create, write, delete or change permissions on a user account or other system resource.
- Successful and unsuccessful attempts to change account passwords.
- Successful and unsuccessful actions by privileged accounts.
- Added CJIS Section 220.127.116.11.1 Compliance.
Note: CJIS default password requirements compliance can be enabled under the Password Policy dialog in the Users and Groups tab. In the dialog, select Suggested Defaults and change password complexity to strong. Per CJIS Section 18.104.22.168.1, passwords shall:
- Be a minimum length of eight (8) characters on all systems. (Compliant & Enforced)
- Not be a dictionary word or proper name. (Compliant & Enforced, since QS v4.1.1)
- Not be the same as the Userid. (Compliant & Enforced)
- Expire within a maximum of 90 calendar days. (Compliant & Enforced, since QS v4.1.1)
- Not be identical to the previous ten (10) passwords. (Compliant & Enforced, since QS v4.1.1)
- Not be transmitted in the clear outside the secure location. (Compliant & Enforced)
- Not be displayed when entered. (Compliant & Enforced)
- Erase cached information when a UI session is terminated.
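The CJIS audit events above land in /var/log/qs_audit.log on the appliance. A minimal shell sketch for pulling failed-access entries; the grep pattern is an assumption, since the exact log line format is not shown in these notes:

```shell
#!/bin/sh
# Hypothetical query against the QuantaStor audit log; adjust PATTERN to
# match the actual log format on your appliance.
AUDIT_LOG="/var/log/qs_audit.log"
PATTERN="unsuccessful"
# Build the command string rather than running it, so the sketch is safe
# to copy onto systems where the log does not exist yet.
QUERY="grep -i $PATTERN $AUDIT_LOG"
echo "$QUERY"
```

Run the composed command on the appliance itself to review the entries.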
- Fixed: Users created with the Cloud Admin and Cloud User Role can now change their own passwords.
- Updated Samba packages are available to address CVE-2017-7494. Please upgrade your system using the qs-upgrade CLI command to install these packages and bring your system current with all other security and stability fixes available on the package repository. Note: for customers who installed the sernet-samba4 packages on the older Precise platform using the samba4-install script, a workaround to address this security alert is detailed in the KB article: Sernet Samba4 CVE-2017-7494
- Added Logic to terminate SMB sessions for users who have had their user access removed.
- Added support for RFC2307 configuration to the Configure Active Directory Dialog in the Web Manager for Trusty platforms or Precise installs that have the optional sernet-samba4 packages installed.
- Added Trusted Domain checkbox to the Configure Active Directory Dialog box for Trusty platforms or Precise installs that have the optional sernet-samba4 packages installed. This enables trusted domain support for the CIFS/SMB service
- Fixed: The Sernet Samba4 winbindd service will now be automatically started if it is detected that it is not running. This brings the Sernet samba service management inline with the standard Ubuntu Precise and Trusty Samba services.
- The qs CLI 'system-shutdown', 'system-restart' and 'system-upgrade' commands can accept a '--sys-list' argument with a comma-delimited list of storage systems to perform shutdown, restart or upgrade tasks on multiple grid nodes at once.
- Fixed: 'qs set-tag' now allows the use of object UUIDs to set tags. Prior to this fix, only names were allowed to set tags.
- Fixed: replaced the Parent Share ID with the human-readable Parent Share Name in the 'qs share-list' command output for snapshots.
- Added clarity to the help for qs commands such as 'pool-create', 'pool-grow' and others that have the "--disk-list" argument.
- Updated the Help text and error responses for qs pool-create.
- Fixed qs CLI cluster-ring-member-get command to provide better context with the --cluster-ring-member argument.
- Fixed: Changed CLI management of Network Ports to use Network Port instead of Target Port naming convention. Legacy commands (tp-list, tp-modify, tp-get) will continue to be supported but will point to the new Network port naming convention commands for CLI help output.
- Added: Network Shares and Storage Volumes can now be deleted from the qs CLI using the flag '--delete-child-snaps'. Adding this flag will delete all the child snapshots. If snapshots are used by schedules then the additional '--flags=force' option should be used.
- Added a new QuantaStor Log collection tool with the below new features:
- Adds support for uploading via https.
- The tool will fetch an updated json definitions file if available from the OSNEXUS update servers before gathering logging data. This allows up-to-date fetching of diagnostics when working with the OSNEXUS support team.
- The Send Log Report task will now show more detailed status on the log gather script's progress.
- Fixed: Added validation to correct an issue where a Grid Node Object and associated child objects were unexpectedly removed from the Grid Master if the QuantaStor Grid Node came onto the network with a new Storage System ID but using the same IP. This corrects a scenario where a system reinstalled in place due to a Hardware issue could cause an unexpected grid or configuration change.
- Fixed an issue where DNS entries added in the Web Manager Storage System Modify Dialog were not being reflected in the /etc/resolv.conf nameserver settings file.
- Fixed an issue with the Web Manager Send Log Report task where the log would fail to upload but the task would return success. The task will now return as failed if the upload or gathering of the logs fails for any reason.
- Fixed an issue that was blocking https access when the 'qs-util disablehttp' tool was used to turn off http port 80.
- Updated SNMP MIB.
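The multi-node '--sys-list' argument and the '--delete-child-snaps' flag noted in this section can be sketched as follows; node and share names, and any command spelling beyond the quoted flags, are assumptions for illustration:

```shell
#!/bin/sh
# Compose the commands rather than executing them, since the qs CLI is
# only available on a QuantaStor appliance. Names are hypothetical.
NODES="node-a,node-b,node-c"
UPGRADE_CMD="qs system-upgrade --sys-list=$NODES"

# Delete a Network Share along with all of its child snapshots; per the
# notes, add '--flags=force' if any snapshots are used by schedules.
DELETE_CMD="qs share-delete projects --delete-child-snaps"
echo "$UPGRADE_CMD"
echo "$DELETE_CMD"
```

The same '--sys-list' form applies to 'qs system-shutdown' and 'qs system-restart'.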
v4.2.4.004 (May 3rd 2017) DRIVER UPGRADE AVAILABLE REBOOT REQUIRED
- Adds new ZFS filesystem driver 0.6.5.8-osn-4. This driver addresses a rare issue where a kernel memory allocation in the ZFS kernel driver could result in a kernel panic instead of a warning message.
- Fixed an issue where the Web Manager may not refresh as expected on some grid events.
v4.2.3.007 (April 21st 2017)
- Added checks on system startup of passive nodes to ensure encrypted devices are available for HA failover to reduce failover time.
- Fixed: Corrected a behavior where iofencing would sometimes not be released from a cache device that is removed from the Storage Pool. This would cause a device that was removed to still be locked to the old pool.
- Fixed: Corrected an issue where some disks would not be included in the Storage Pool device list for iofencing for a storage pool during a failover. This would intermittently cause a failover to not succeed.
- Fixed an issue with the refresh of the site cluster view in the Web manager after a site cluster configuration is removed by a User.
High Availability Fibre Channel Target
- Fixed: Now uses Standby instead of transitioning mode during failover. This addresses the ALUA failover "flapping" issues which would cause the devices to not come back online without a reboot.
- Fixed: Optimized the use of issue LIP to limit disturbance to FC fabrics.
- Fixed: closed a small time window where the Relative Target Portal Group ID for FC ALUA devices was not set early in an HA Failover. This would cause issues where devices would not come back online without a reboot.
- Fixed: a Network Share Modify will now correctly apply the recordsize change to Network Shares on ZFS Storage Pools.
- Fixed: Encrypted disks are now opened concurrently to better support large configurations (80-200 disks). This reduces failover and pool startup time for Encrypted disks by ~30%.
v4.2.2.045 (April 5th 2017)
- Added a better Cluster site overview to the Cluster Resource Management section of the WebUI. Now when a site cluster is selected, the central grid view shows all details regarding status for the Site Cluster nodes and services. Previously this information was available in separate tabs in the grid view and not always apparent.
- Added: The Add Cluster Heartbeat Ring Dialog now selects all nodes in the selected Site Cluster by default, reducing the number of clicks to create additional site cluster heartbeat rings.
- Added the new Restart Site Cluster Services dialog and 'qs site-cluster-restart-services' CLI command, which allow Administrators to restart the heartbeat ring and site cluster service on a chosen node.
- Fixed an issue where a Site Cluster would remain in a Warning state after the heartbeat rings and nodes were brought back to a Healthy state. It will now report a Healthy state as expected.
- Fixed: Highly Available Storage Pools now add a protection lock on pool Import to ensure that they are not re-imported if they had previously failed to export on an automatic or manually initiated failover. Previously this check only occurred on pool export.
- Added a check to Network Share Delete to ensure that any Network Share Aliases/Subshares are removed before the parent Network Share can be removed.
- Fixed: Network Share Aliases now report a share type of alias. As they are an alias of the parent Network Share, they will now report N/A or '0' for their Logical Used/Physical Used to avoid confusion.
- Fixed: The Network Share Logical Used and Physical Used reporting in the WebUI now matches the same precision with less rounding as the 'qs share-list' CLI output.
- Fixed: Changes to the NFS exports for deletion or disabling of a Network Share object now use a safe reload method for updating the NFS exports table. The Create Network Share and Create Network Share Snapshot functions have been using this reload method for some time.
- Fixed an issue where subshare/aliases selected for removal in the MultiDelete Network Share Dialog would sometimes fail to be removed.
- Fixed an issue where a newly created local clone of a Network Share would inherit the mountpoint property of the source Network Share. Previously this could lead to the source Network Share being taken offline if the clone share is disabled or removed.
- Fixed an issue where disabling or deleting a Network Share Alias could unmount the Parent Network Share.
- Fixed: Lazy Deleted Network Shares will now correctly be cleaned up on system boot or the next Storage Pool discovery cycle.
- Added new 'Hardware' Column to the disk selection section of the Storage Pool Create Dialog that provides a way to sort and select the disks based on disk location.
- Fixed an issue where growing a ZFS Storage Pool was not retaining enclosure-level redundancy as expected.
- Fixed an issue where the Snapshot Physical used capacity would incorrectly appear in the other category in the Storage Pool Dashboard.
- Fixed an issue with pool import on disks with multipath devices.
- Updated the Storage Volume Group icon with a new icon that provides a clearer difference between Storage Volumes and Volume Groups in the Storage Volume tree view.
Scale-out Block and Object (Ceph)
- New icons used for Ceph RBD Storage Volumes.
- Fixed an issue with creating a Ceph Scale-out Object Storage Pool Group.
- Fixed an issue where OSD's could sometimes not start after reboot for Ceph cluster nodes on the Trusty Platform.
- New Workflow Manager with easy workflows for common initial setup tasks. This replaces the previous System Checklist.
- New Workflow Manager splash screen when logging into the Web Manager for Migration Edition. This new window presents common initial tasks for the Migration Edition such as opening/starting the encrypted pool, viewing the share mount commands, shutting down the storage appliance and other common tasks.
- Added a validation check to the 'qs host-initiator-add' command to ensure correct IQN formatting.
- Changed the Link State column in 'qs target-port-list' output to show Link Up/Link Down instead of 'Normal'. Verbose output for the 'qs target-port-list' and 'qs target-port-get' commands also shows Link Up/Link Down. XML output will continue to report an enum of '0' or '1' as previously established.
- Fixed the dependencies for the qstorservice so that the samba-client package is suggested and not a hard dependency. This is required to allow the upcoming Precise-to-Trusty platform upgrade path.
v4.2.1.018 (March 3rd 2017) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- Adds new aacraid driver 22.214.171.124005src. This driver corrects an issue where Array units would not appear with the correct /dev/disk/by-id/scsi-* device paths.
- Adds new ZFS filesystem Driver 0.6.5.8-osn-3. This driver addresses a latency issue with ZFS Block Storage Volumes (zvols) under some workload conditions.
- Fixed: During a Manual Storage Pool Failover operation, the failover will now continue if the original owner of the Pool is unresponsive or unable to export the pool. This is now equivalent to using the force flag in the Execute Storage Pool Failover Dialog, which is now checked by default.
- Added support for 256K, 512K and 1024K Record Sizes in Network Shares.
- Fixed: New Network Share Namespaces are browseable and public by default.
- Fixed an issue with renaming Network Shares that contain special characters such as $.
- Fixed: a check has been added to ensure a Network Share is not renamed while it is part of an existing namespace, as this can lead to unexpected behavior. If you wish to rename a share, please remove it from the namespace configuration, rename it, and add it back.
- Fixed an issue with the Modify NFS Client Access dialog under the Network Share > NFS Client Access tab so that the correct Network Share is automatically selected and the rule to be modified can be selected from the drop-down menu.
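The new 256K/512K/1024K record sizes for Network Shares correspond to the ZFS 'recordsize' property on the share's underlying dataset. A sketch of the equivalent low-level zfs command, assuming a hypothetical pool/dataset name (QuantaStor normally manages this for you through the Web Manager):

```shell
#!/bin/sh
# Hypothetical dataset name; on a real system the share's dataset lives
# under its ZFS Storage Pool. The command is composed rather than run so
# the sketch works without a zpool present.
DATASET="qs-pool/media-share"
# 1024K is expressed as 1M in ZFS property shorthand.
SET_CMD="zfs set recordsize=1M $DATASET"
echo "$SET_CMD"
```

Record sizes above 128K require the ZFS large_blocks pool feature to be enabled.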
Remote Replication and Snapshots
- Fixed: Network Share Subshares and Aliases are now correctly filtered from selection as a remote replication or snapshot source.
- Fixed an issue with Replication of Shares that include a $ in the name.
- Added: the 'qs pool-create' argument '--disk-list' now supports specifying [n] number of disks or [*] to use all available disks when creating the Storage Pool.
- Fixed: Updated Storage Pool Create, Modify, Grow and other Dialogs to be much more elastic.
- Fixed: Storage Pool Modify, Grow and other dialogs now display more useful details regarding the pool RAID type, RAID set size and other properties.
- Fixed: Operations on Encrypted Storage Pools that require access to the Encryption key will now Fail with a clear error message prompting for the Pool to be opened with the Passphrase so that the operation can be performed.
- Fixed a few small items that would cause the Storage Pool Dashboard to not display when selecting a different Storage Pool.
- Fixed an issue where the wrong device path location was being used for ZFS Storage Pools when adding/removing cache devices and spares.
- Added support for 256K, 512K and 1024K Record Sizes in Storage Volumes.
- Adds new Storage Volume Dashboard in the Storage Volume section of the WebUI. The Storage Volume Dashboard provides a detailed view of the Logical and Physical Used capacity.
- Fixed: The Storage Volume Modify Advanced Settings Dialog now correctly shows the Block Size that was chosen when the Storage Volume was Created. Previously this information was only available via the Properties view.
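The '--disk-list' shorthand for 'qs pool-create' described above accepts an explicit disk list, a disk count '[n]', or '[*]' for all available disks. A sketch of the three forms; pool and disk names, and anything beyond the quoted '--disk-list' argument, are assumptions:

```shell
#!/bin/sh
# Three forms of the --disk-list argument; names are hypothetical, and the
# commands are composed rather than executed.
EXPLICIT="qs pool-create pool1 --disk-list=sdb,sdc,sdd"
BY_COUNT='qs pool-create pool2 --disk-list=[8]'
ALL_DISKS='qs pool-create pool3 --disk-list=[*]'
echo "$EXPLICIT"
echo "$BY_COUNT"
echo "$ALL_DISKS"
```

Single quotes keep '[8]' and '[*]' from being glob-expanded by the shell.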
Scale-Out File Storage (Gluster)
- Added a health check for the Selected Gluster Peers before a Gluster Volume Create, Modify or Grow operations can be executed.
- Fixed: Gluster Volumes now correctly report type gvol in 'qs share-list' output.
- Fixed: Gluster Volumes now include a Logical Used attribute to show the logically used capacity before mirroring or erasure coding.
- Fixed: Storage Pool type now shows N/A for Gluster Shares as there is no direct mapping to the underlying pool for this Share type.
Hardware Enclosures and Controllers
- Fixed an issue where Write Caching was shown as enabled for a RAID unit when the RAID controller BBU was failed or not present and the RAID controller was defaulting to Write Through mode.
- Fixed an Issue where the default enclosure layout view was not being selected on newly added Enclosures.
- Fixed an issue in the WebUI where new items added to a tree view would not show up until a discovery cycle or Browser reload has occurred.
- Fixed an issue where Virtual Interfaces could not be created from the WebUI if the gateway field was empty.
- Added: Enterprise License keys now support License Capacity Passthrough when using LUNs presented from QuantaStor Backend Storage Appliances.
- Updated API and CLI Documentation for the 4.1 and newer releases.
- Fixed an issue where UEFI installs would incorrectly show the Base OS grub splash screen settings instead of those for QuantaStor.
- Fixed an issue that was preventing the hourly automatic management database backups from occurring in some scenarios.
v126.96.36.1995 (Feb 17th 2017)
- Added support for the '$' character in Network Share names, which allows Windows clients to automatically hide these Network Shares from browsing.
- Added the advanced recordsize option for Network Shares created on ZFS Storage Pools.
- Added support to Network Shares for presenting a Secondary path (Alias) and/or Sub-folder via CIFS and NFS.
- Fixed: Network Share Snapshots inherit the parent share's security and access list settings.
- Added a new One-to-One Cloud Container that uses S3FS to provide a direct Object mapping for every file written to the Cloud Container Network Share.
- Added support for custom S3 endpoints.
- Added new qs CLI commands to allow for management of Cloud Provider Locations, Cloud Providers, and Cloud Provider Credentials.
- Fixed an issue where the S3/Swift bucket at the Cloud Provider would not be removed during a cloud container delete.
- Fixed: Cloud Containers now report a Type of 'cloud' in their share list properties.
- QuantaStor now uses awscli for all internal S3 endpoint management.
- Added additional Columns and Properties to the Storage Volume Section of the WebUI to better show the PhysicalUsed capacity(after compression) on disk, Logical Used capacity(what the client has allocated) and child Snapshot Physical used capacity.
- Added the qs volume-create-passthru command to allow for passthrough of Raw Storage devices such as NVMe disks as Storage Volumes.
Hardware Enclosures and Controllers
- Added new Custom Chassis Tag for Hardware Disk Enclosures. This allows custom names for the Disk Enclosures to match any real-world location/naming scheme used in your organization. If the same Custom Tag is used on multiple enclosures, QuantaStor will refer to them as the same enclosure; this is helpful for some vendor enclosures that have a SAS Expander Backplane in the front and back of their JBOD chassis, which would normally appear as separate enclosures.
- Added further enhancements to the Hardware unit to Physical disk correlation.
- Enhanced the iSCSI Software Adapter Create Dialog.
- Fixed: the iSCSI SW Adapter now logs in to its remote targets much faster.
- Fixed an issue that could prevent the Disk Locator light function from working on some Hardware Disk Enclosures.
Storage Pool and Disk Management
- Added Hardware Disk Correlation in the Physical Disk view of the WebUI.
- ZFS is now the default Storage pool type for 'qs pool-create' if a pool type is not specified.
- Fixed: ZFS Storage pools comprised of Physical Disks which are Hardware RAID units, will now show a Combined RAID level property of (HWRAID+ZFSRAID). For instance, if underlying Hardware RAID 6 is used alongside ZFS RAID 0 the Value would report as (RAID6+0) or if HW RAID10 with ZFS RAIDZ2(6) the result would be (RAID10+6).
- Fixed an issue that was preventing growing a Storage Pool if a Remote Replication was running for a Storage Volume/Network Share on that pool.
- Fixed an issue where the suggested RAID level for a chosen number of disks would be incorrect.
- Fixed an issue where multipath disks could sometimes appear as dm-name-mpathN device identifier instead of the always unique dm-UUID device identifier.
- Fixed an issue where the physical disk multipath flag was not being inherited by encrypted device objects. This would result in a warning flag appearing on the device in the WebUI and CLI properties.
- Fixed an issue where Storage Pools created without multipath device IDs would not automatically import on boot up once multipathing is enabled for the disk devices and the system rebooted.
- Fixed an issue where ZFS Storage pool imports could take a much longer time than expected to import during an HA Pool Failover.
- Fixed an issue that could sometimes occur after a QuantaStor HA node is upgraded and a Storage Pool Failover occurs where the Network Share user and group access list information could be removed.
- Fixed: the FC-ALUA standby path devices will correctly appear on the passive node after a HA Storage Pool has been taken over by a node filling the active role. This fixes an issue introduced in the 4.1.5 release.
- Fixed an issue with HA failover that could sometimes occur if the designated grid port was not available. Now the HA nodes try communicating via the Heartbeat ring interfaces if the normal grid communication port is unavailable.
Disk Encryption / Security
- Added support for custom Encrypted Storage Pool key Passphrases. This allows for workflows where the Encrypted Storage Pool remains locked for access on bootup unless an Admin starts the Storage Pool and enters the Passphrase. The Passphrase can be changed if needed from the Modify Storage Pool dialog's advanced options.
- Fixed an issue that would cause the DoD shred option to fail on Storage Pool with Encrypted disks.
- Fixed: Encrypted Disk devices formatted using the Format Disk tool will now properly close out the dm-enc-* device releasing the underlying physical disk device for use.
- Fixed: the 'qs-util crypttabrepair' utility will now try all available encryption keys instead of defaulting to the enc-scsi-*.key file that matches the enc-scsi-* device name.
- Various fixes for Encrypted Storage Pool management.
- Added a search bar to the tree view in various sections to allow for faster navigation.
- Added New Dashboard to the Ceph Scale-out section in the WebUI that shows a more detailed picture of how the physical storage is being used.
- Added New Dashboard to the Storage pool section in the WebUI that shows a more detailed picture of how the physical storage is being used.
- Added support for creating custom Cloud Provider and Cloud Provider Locations(endpoints) in the WebUI.
- QuantaStor now allows for custom UID/GID settings for Local QuantaStor users.
- Added groups to Local user management in QuantaStor web interface. This includes managing the local POSIX group and GID.
Remote Replication and Snapshots
- Fixed: Large and long running replication transfers in the same schedule with other pending replications could result in a serialization lock error causing the pending replication tasks to fail.
- Fixed an issue where Manually triggering a Snapshot schedule could sometimes result in a silent failure.
Ceph Scale-out Block and Object
- Fixed an issue with Ceph Journal device discovery on System Boot.
- Throttled the Storage Pool Low Free Space alerts, which could sometimes occur at 10-minute intervals, to every two months at the Warning level, monthly at the Alert level and weekly at the Critical level.
- Fixed an issue where the 'samba4-install' script could not connect to the update servers that contained the samba4 update packages.
- It is now possible to use the samba4-install script on precise platforms to upgrade from Samba 3.x to Samba 4.x without needing to leave the AD domain to perform the upgrade.
- Added new 'qs grid-send-supportlogs' and improved Send Support Logs dialog to allow customers to easily send logs to the OSNEXUS support team from multiple nodes in the grid.
- Additional Grid performance and service improvements.
- Updated SNMP MIB for 4.2
- Updated VSS Provider.
v188.8.131.526 (Feb 6th 2017)
- Fixed an issue with the package update server list file that was preventing customers from performing future upgrades who installed from the 4.1.5 ISO media.
- Fixed an issue that caused the WebUI to be unavailable on systems where the http port was disabled with the 'qs-util disablehttp' command. Note: disabling http port 80 will block the dashboard view from other systems; this will be addressed in a future release.
v184.108.40.2064 (Jan 18th 2017)
- Added descriptive text to Network Share Users Access tab Search field. Added example text to tooltip.
- Fixed an issue that prevented wildcard searches for users.
Hardware Enclosures and Controllers
- Added support for Cisco branded SAS HBAs.
- Fixed an issue where the Enclosure View could appear blank.
- Various small fixes for the iSCSI Software Adapter login/logout dialogs.
- Fixed a rare issue where the first unit created on an LSI RAID Controller may not appear in the WebUI.
Fibre Channel Target
- Fixed an issue where a LIP would sometimes not be issued on the target FC ports during add/remove Host access for Storage Volumes.
High Availability Failover
- Added a faster failover check so that a secondary node can more quickly take ownership of the Storage Pool, Storage Volumes, and Network Shares in instances where an active node is powered off or loses all network connectivity to its network switch and standby nodes.
- Added: FC ALUA paths now report standby status instead of unavailable for the secondary standby node. This corrects an issue that would cause some clients to report dead/failed paths.
- Added for FC ALUA a check to issue a LIP after failover of the Storage Pool on the standby node so that the standby paths are rediscovered.
- Added for FC ALUA: a LIP is now issued when a secondary node comes online from a power-off or reboot state and goes into standby status.
- Fixed an issue where some third-party FC SAN arrays would not respond to a SCSI Persistent Reservation request for full status including keys and reservations. qs-iofence now requests these items individually to support these FC array models.
- Added checks in the Create Storage Pool Dialog to detect the number of available disks on a system and provide suggested RAID levels at the top of the RAID selection list. For example, this ensures RAID60 is listed before RAID6.
- Added checks in the Create Storage Pool Dialog to prefer RAID+striping levels and remove single RAID levels based on the number of drives in the system. This ensures the best performance and capacity options are chosen during pool creation and discourages non-best-practice, extremely large single RAID levels, such as a twenty-drive RAID5.
- Added: Default compression to lz4 on ZFS storage pool create, this applies to all editions.
- Added: When clicking on the Create Storage pool ribbon button, the first system selected is now a system that has available disks.
- Fixed: When no free disks are available for pool creation, the options in the pool create dialog are now greyed out.
- Fixed: enabled storage pool compression support for Community Edition licenses.
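The new lz4 default noted above can be confirmed with standard ZFS tooling once a pool exists; this is a generic ZFS check, not a QuantaStor-documented procedure, and the pool name 'qs-pool' is a hypothetical placeholder.

```shell
# Verify the compression property on a ZFS storage pool (pool name is hypothetical)
zfs get compression qs-pool
# The VALUE column should show lz4 for pools created with this release
```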
- Added: the qs_checkservice will now log any warnings or errors to the /var/log/qs_checkservice.log file instead of sending mail.
- Fixed an issue where the new qs_restd service was not being monitored correctly by the qs_checkservice.
- Fixed: Corrected an issue with object name caching. This fixes an error that could sometimes occur after deleting and then recreating a snapshot or storage pool with the same name.
- There is a new SNMP MIB available with this release. You can use 'qs-util snmpmib' to review it.
- Fixed an issue where an SNMP Walk would return no objects.
- Fixed an issue where the snmpagent was unable to start on 12.04 precise platforms.
- Fixed: Addressed SSL concern CVE-2016-2183 (SWEET32) with updated qsciphers file to remove DES and 3DES ciphers.
- Fixed: disabled tomcat web port 8443.
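To verify that 3DES is no longer negotiable after the SWEET32 cipher update above, a standard OpenSSL probe can be run from any client; the hostname is a hypothetical placeholder and this is an illustrative check, not an OSNEXUS-documented procedure.

```shell
# Attempt a TLS handshake offering only the 3DES cipher (hostname is hypothetical).
# With DES/3DES removed from the qsciphers file, the handshake should fail.
openssl s_client -connect quantastor.example.com:443 -cipher 'DES-CBC3-SHA' < /dev/null
```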
v220.127.116.114 (Dec 20th 2016)
- The Trusty platform install media now includes an updated megaraid_sas driver to support the LSI MegaRAID 3316 ROC.
- Fixed an issue that could cause problems with XFS based Storage Volumes after reboot.
- Fixed an issue where ZFS Storage Volume snapshots and replicated snapshots could sometimes not become writable clones after a snapshot or replication operation.
Scale-out Block and Object (Ceph)
- Fixed a permissions issue for the ceph startup scripts.
- Fixed an issue where the OSD device may not be properly associated with its Journal device in the QuantaStor management interface.
- Various small Ceph implementation fixes.
- Enabled upgrades for 4.1 series features and improvements on Precise platform deployments at IBM SoftLayer and other locations that use their own update repositories.
v18.104.22.1688 (Dec 8th 2016)
- Fixed an issue that was preventing the new nginx web service from starting on system boot.
Scale-out Block and Object (Ceph)
- Fixed an issue where rebooting a Ceph node and then the Ceph Master node could result in Journal devices showing up as offline and owned by the Ceph master Node.
- Fixed: Lowered logging level on Metrics Dashboard InfluxDB
- Fixed: Lowered Logging Levels on nginx Web server
- Added Additional log files to qs-sendlogs log gathering scripts.
v22.214.171.1247 (Dec 6th 2016) DRIVER UPGRADE AVAILABLE
- Intel 40GBe Network Adapters i40e 1.5.25
- Added further optimizations to speed up Pool Failover times around Storage Pool startup and discovery tasks.
- Added: ZFS Storage Pools now support NVMe SSD devices for ZIL and L2ARC in stand-alone appliance deployments.
- Fixed an issue where some multipath or encryption devices could not be used to grow a ZFS or XFS Storage Pool.
- Fixed an issue where some multipath or encryption devices could not be used as a spare in a ZFS Storage Pool.
Scale-out Block and Object (Ceph)
- Fixed a rare issue where multi-osd create would fail to create an OSD due to a failure in the XFS Pool creation step.
- Various small Ceph management fixes.
- Adds vfs_unityed_media support for better Avid integration on CIFS/SMB Shares. This replaces the previous media_harmony plugin support.
Snapshots and Remote Replication
- Added: Remote Replication on Trusty 14.04 Platform deployments can now be enabled to use AES-NI accelerated Ciphers for SSH tunneling between QuantaStor appliances with the 'qs-util aesni' command.
- Added some enhancements to reduce the time it takes when performing large numbers of snapshots all at the same time.
- Added some optimizations to better batch cleanup Storage Volume and Network Share Snapshots marked for deletion.
- Fixed: The slider bar for the minute interval option in the Create and Modify Remote Replication Schedule dialogs now correctly shows 15 minutes as the minimum available option when the slider is all the way to the left.
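The AES-NI replication item above can be sketched as a check-then-enable sequence; the CPU flag check is a generic Linux step rather than a QuantaStor command.

```shell
# Confirm the CPU advertises the AES-NI instruction set (generic Linux check)
grep -m1 -o aes /proc/cpuinfo
# Enable AES-NI accelerated ciphers for the replication SSH tunnel (Trusty 14.04)
qs-util aesni
```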
Physical Disk Management
- Added new 'qs disk-format' command and Format Disk Dialog to the Physical Disk section of the Web Manager. This allows for the removal of any unwanted encryption or disk formatting prior to a disk being used in a Storage Pool.
- Added a new property to indicate the Distribution version to the Storage System Properties view.
- Fixed: The Remove grid member dialog was missing the force flag option checkbox.
- Fixed an issue that would prevent the Dashboard from showing when logged into the Web Manager via https or port 8080.
- Added the '--flags=' option to the qs grid-remove command.
- Added logic to remove the bucket from the cloud provider during Cloud Container deletion. Note: very large multi-terabyte buckets may need to be removed manually with swift/s3cmd commands.
Scale-Out File (Gluster)
- Fixed: Added a scaling timeout for Gluster Peer setup operations based on the number of Gluster Peers selected for the operation.
- Added further optimizations to speed up service startup around Storage Pool startup and discovery tasks.
v126.96.36.1990 (Nov 29th 2016)
Storage Pool Management
- Fixes a rare issue that could prevent Storage Pools on 12.04 Precise platforms from starting on System Boot.
v188.8.131.528 (Nov 23rd 2016)
New Platform for new ISO deployments
- 4.1 now uses Ubuntu 14.04 Trusty as the base platform by default.
- If you require 4.1 install media based on the 3.x/4.x 12.04 Precise platform, it is available here (md5 checksum provided).
- Intel 40GBe Network Adapters i40e 1.5.16
Ceph Scale-out Block and Object
- New Ceph version 10.2 (Jewel) available with QuantaStor 14.04 Trusty based deployments.
- Added: Ceph Jewel now supports reporting RBD disk storage utilization statistics. This is reflected in the Utilized property in QuantaStor for Ceph-based Storage Volumes for iSCSI and RBD access.
- Added: Pool Replica Count can now be modified via the Web Manager.
- Added the ability to list custom pool create profiles in the Ceph Object Store create dialog.
- Added a Force Scrub checkbox to the Ceph OSD multi-create dialog.
- Added a minimum hardware/virtual hardware check for Ceph cluster creation and adding Ceph cluster members. Minimum requirements for a VM or server to demo or run minimal cluster member services are 2 CPU cores and 2GB of memory.
- Added new logic to better suggest a placement group count based on the OSD count when creating a Ceph cluster on minimal configurations (3 nodes, 3-6 OSDs).
- Added logic to ensure a minimum size of 1GB for Ceph Journal partitions during journal create and multi-osd create.
- Added better support for NVME devices when used as Ceph journal devices.
- Added new Ceph Erasure Coded pool profile management.
- Added: Ceph Object Storage Pools can now be created as Erasure Coded in addition to using different Replica counts.
- Fixed an issue where a target port would not correctly have its firewall forwarding rules from port 80 to 7480 removed when it had S3/Swift Object gateway access disabled.
- Fixed an issue during OSD delete where the mount points under mtab were not updated to reflect the correct unmounted status.
- Fixed an issue where the Web Manager Ceph Dashboard would reflect stale capacity and information when a cluster capacity is reduced from a Ceph OSD remove event.
- Fixed: Ceph Cluster Members now group by Ceph Cluster when viewed in the central grid view in the Web Manager.
Gluster Scale-out File
- Fixed: QuantaStor now provides a more accurate view of the current Gluster Volume and Brick status.
- Fixed an issue to ensure the GlusterFS client mount on QuantaStor used to provide NFS and CIFS access is correctly mounted and not accidentally providing a mount point to the root filesystem.
High Availability Failover
- Added: HA Failover tasks will now show more detailed status during failover tasks.
- Added improved iofencing tool that greatly improves SCSI-3 Persistent reservation verification and assignment during HA Failover tasks.
- Added a discrete ARP ping event to occur on HA Failover for each HA VIF configured on an HA Failover Group.
- Added: HA Failover groups will now automatically be activated when a HA VIF is first created on it.
- Added an HA Failover Policy based on the link status of the ports in the machine.
- Added iSCSI SAN Configuration feature for simplifying the configuration of Tiered QuantaStor High Availability Failover based deployments. This feature allows for automatic iSCSI interconnect configuration of front-end QuantaStor appliances to back-end QuantaStor appliances providing iSCSI Storage Volumes.
- New Network Share Namespaces feature that allows NFSv4 and CIFS clients to see all shares accessible in a configured namespace on QuantaStor appliances and the Network Shares added to that namespace.
- Fixed: QuantaStor now uses the reload command instead of restart for the Samba CIFS/SMB service.
- Fixed an issue where Network Shares could reflect incorrect Utilized statistics until a discovery cycle occurs.
- Fixed an issue where a local user's default group could appear in the AD group list.
Storage Pool Management
- New security options available in Storage Pool Delete dialog allow for securely erasing the disks when the storage pool is decommissioned.
- Added a new option to the Create Storage Pool Dialog that will clean the partition label and ensure a disk is available for use prior to creating a Storage Pool with it.
- Added: Backup Policies now support pushing data from a QuantaStor Network Share to an external CIFS/NFS share on a third-party server/appliance.
- Fixed an issue that would allow users to delete a Backup job while it was running resulting in errors.
- Fixed an issue where the Backup Job status would not update during single threaded rsync based transfers.
- Fixed an issue where the Created, Modified, and Start Date timestamps were all updated to the current time when the qs CLI command 'backup-policy-modify' was executed.
- Fixed an issue that would still provide the option to cancel an already-completed backup job.
- Backup Policy settings are now shared between nodes in a High Availability Failover Group.
- Fixed an issue that would cause backup policies to fail if the target share type was changed between CIFS/SMB or NFS.
Cloud Containers / Cloud Backup
- Fixed: Cloud Backup Schedules will now correctly trigger an immediate backup when manually triggered.
- Added various performance improvements to the QuantaStor service and backend Database.
- Fixed: Reduced the number of grid events triggered by snapshot grid objects. This will improve Web Manager responsiveness and overall performance for deployments that have a large number of snapshots.
- Fixed a small timing issue on system startup with the iSCSI Target driver and service that would cause a false positive with the QuantaStor service startup requiring a manual service restart in some instances.
- Fixed a rare case where adding grid nodes with existing modified admin accounts could result in the new node's admin account being retained and multiple 'admin' accounts appearing in the QuantaStor user list.
- Fixed: Various Pool startup and management service startup performance improvements.
- Changed: qs system-modify commands now require the storage system name or id be passed in for command execution.
- Added missing feature flags for the qs backup-policy-create CLI tool to bring it inline with the Backup Policy Create Dialog.
- Added shorthand flags for common qs commands: '-u' for '--user', '-s=' for '--server=', and '-h' for '--help'. More information is available in the qs command help.
- Added a new --noheader option for qs commands to show the output for list commands without the column headers.
- Fixed: the qs tp-modify --port=ethX --port-type=disabled command now correctly removes the static or dhcp networking configuration from the port and sets the port state to disabled.
- Fixed up the output of the qs disk-list, volume-list, share-list, target-port-list, and pool-list commands so that they include the storage system name as an earlier column and the UUID as the last column.
- Fixed an issue where the QuantaStor user credentials in %USERPROFILE%\.qs.cnf on Windows were not being read properly for use with the qs command line tool.
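A sketch of the shorthand flags and --noheader option from the CLI items above; the server address and credentials are hypothetical placeholders, and the exact value syntax for each short flag is shown by 'qs -h'.

```shell
# Long form
qs volume-list --server=10.0.0.50 --user=admin:password
# Shorthand equivalents ('-s=' for '--server=', '-u' for '--user'), headers suppressed
qs volume-list -s=10.0.0.50 -u admin:password --noheader
```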
Hardware Enclosure and Controllers
- Added support for HP HBA series controllers in Hardware Enclosures and Controllers Module.
- Added better SAS HBA Enclosure correlation between hardware controllers and nodes. Enclosures correlated this way will have the same unique id number.
- Added correlation for SAS disk devices presented from SAS hardware controllers in the Physical Disk view. This makes it easier to identify physical disk objects with their SAS disk counterparts in SAS HBA configurations.
- Added logic to check for network availability before performing an iSCSI Login on a iSCSI Software Adapter.
- Fixed: Improved Device Multipathing discovery logic for Physical Disk objects.
- Fixed some object properties that were not being shown correctly for HP Smart Array Controllers.
- Fixed: Improved Correlation between Physical Disk objects and Hardware Disk Objects for Adaptec controllers.
- New Dashboard feature adds real-time statistics for Storage System Memory, CPU, Load, and Networking for a selected Storage System in the System Management section. Additional statistic dashboards will be added in upcoming QuantaStor releases for other sections such as Storage Volumes, Storage Pools, and many more.
- Added name search to the assign/unassign storage volume dialog.
- Added Client Connectivity check IP addresses to columns in the grid for the HA Failover Group section of the Cluster Management tab in the WebUI.
- Added new 'Source Volume Size' property to the Remote Replication Links for Storage Volumes.
- Change: Moved Host Groups to the Hosts section and Volume Groups to the Volumes section and removed their discrete sections from the left-hand accordion tree view navigation.
- Fixed: Performance improvements to initial Web Manager load times.
- Fixed: The options for switching between target and initiator only mode on a FC controller now more clearly show 'Enable FC Target Mode' and 'Enable FC Initiator Mode'.
- Fixed: Bonded ports can now be selected in the Create VLAN Interface Dialog.
- Changed Network Target Port disable/enable to offline/online to more clearly indicate the desired link status.
- Changed the property sidebar so that it is collapsed by default.
- Changed the Restart NFS and CIFS services Dialog to now auto select the current Storage System by default.
v184.108.40.2064 (Nov 18th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
Kernel and Drivers
- Adds new 3.19.0-73 Linux kernel that includes updates and a security patch to address CVE-2016-5195 (Dirty COW)
- Adds new ZFS filesystem driver 0.6.5.8-osn-2; please review the ZFS 0.6.5.x changelogs for further detail: v0.6.5, v0.6.5.1, v0.6.5.2, v0.6.5.3, v0.6.5.4, v0.6.5.5, v0.6.5.6, v0.6.5.7, v0.6.5.8, v0.6.5.8-osn-2.
- Fixed an issue where some systems would not use the latest quantastor provided hardware drivers included with the qstortarget package.
- Fixed Task list cleanup for remote replication and snapshot schedule tasks so that they are not immediately cleaned up on long running tasks.
- Fixed Task list cleanup so that they are cleared in the order of their timestamp, previously these were sorted and cleaned up by id.
- Fixed an issue where the log files for the core quantastor services would sometimes become truncated.
- The optional Samba 4 packages available via the samba4-install script are now hosted on the packages.osnexus.com mirror.
v220.127.116.110 (Oct 28th 2016)
- Fixed: The FC ALUA state now remains in the transitioning state while the Storage Pool and Storage Volumes are being moved between the nodes. This addresses a small window on some clients where a sync-based write could have found the Storage Volume LUN in an unavailable state and not retried.
- Fixed: Many base command execution performance improvements. This improves HA failover times, Storage Pool creation task times and many other operations.
- Fixed: Tasks are now cleaned up via the order of their timestamp instead of the previous ordering method.
CIFS / SMB:
- Fixed: Removed Sernet Samba Enterprise external repo from samba4-install script. Samba4 packages now come from OSNEXUS repository servers.
v18.104.22.1687 (Oct 14th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- Adds latest ZFS v0.6.5.8 filesystem drivers and v0.6.5.7 user mode tools; please review the ZFS 0.6.5.x changelogs for further detail: v0.6.5, v0.6.5.1, v0.6.5.2, v0.6.5.3, v0.6.5.4, v0.6.5.5, v0.6.5.6, v0.6.5.7, v0.6.5.8.
- Configures ZFS ARC Max at 50% of system memory by default to provide better default performance for mixed workloads. Please consult with an OSNEXUS Reseller or Sales Engineer regarding advanced ARC tunings for task or use case specific workloads.
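The 50% ARC cap above can be inspected from the shell on a running system; these are standard ZFS-on-Linux kernel interfaces, not QuantaStor-specific commands.

```shell
# Current ARC maximum in bytes, as set via the zfs_arc_max module parameter
cat /sys/module/zfs/parameters/zfs_arc_max
# Effective ARC limit (c_max) and current ARC size from the ARC statistics
awk '$1 == "c_max" || $1 == "size" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```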
- Added the serialized backup option as the default Backup Concurrency option. Serialized backup provides the most economical form of backup and is less I/O intensive on the source and destination shares in comparison to the Parallelized backup options.
- Fixed an issue where the Backup Job object's status and properties would not correctly update in the WebUI or on other nodes when a Backup Job changes status.
- Fixed: Backup Jobs that fail will correctly show a Failed state instead of showing Initializing.
- Fixed: Backup Jobs will raise an alert and transition to Failed status if the source share failed to mount or if the QuantaStor target/destination Network Share is disabled.
- Fixed: Backup Jobs will transition to a Failed state when using NFS and the source NFS share becomes inaccessible.
- Corrected syntax and argument help for the qs backup-policy-modify command. You can now rename a policy via the CLI like you can via the WebUI with the 'qs backup-policy-modify --policy=POLICYNAMEorID --name=NEWNAME' command.
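The corrected rename syntax above, with hypothetical policy names substituted for the placeholders:

```shell
# Rename an existing backup policy via the CLI (policy names are hypothetical)
qs backup-policy-modify --policy=nightly-backup --name=nightly-backup-smb
```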
Ceph Scale-out Block
- Fixed an issue where mapped iSCSI LUNS on Ceph Scale-Out Block were not presented from all QuantaStor nodes in the Ceph Cluster.
Core Service and CLI
- Fixed an issue where the optional Samba 4 upgrade would not correctly report the service status as online in the QuantaStor system properties.
Disk Device Multipathing
- Fixed an issue that could prevent a multipathed Hotspare disk being used to replace a failed disk in a ZFS Storage Pool.
- Fixed a disk mapping issue for Encrypted Multipathed devices to ensure that all disk paths receive SCSI-3 reservations.
- Encrypted Multipathed devices will now appear in the WebUI and CLI as having all of their path associations.
- Fixed an issue where Storage Volumes on a FC ALUA deployment could sometimes not initialize properly on system boot or when first created and presented to a Host.
- Adds new license types for HA pairing and Support Renewal only Licenses.
- Fixed an issue where two HA nodes with Multipathed disk devices were incorrectly reporting double the license capacity used.
- Fixed an issue where some SSD devices incorrectly counted towards licensed capacity.
- Fixed an issue where hotspares in use repairing a ZFS Storage Pool could be incorrectly counted towards License capacity.
- Fixed: The Ownership Setting>Assigned Group will now correctly show the AD group name in addition to the Group ID (gid) in the Network Share Dialog.
- Disabled IPv6 address discovery for Network devices by default.
- Updated SNMP MIB.
v22.214.171.1244 (August 17th 2016)
- Corrected an issue with mapping of devices for iofencing. This affected devices that had dm Multipathing and/or LUKS Encryption.
v126.96.36.1993 (August 10th 2016)
- Added support for Fibre Channel ALUA High Availability.
- Added Legacy SCSI Target USN support for upgrades from QuantaStor 4.0.3 and older releases.
- Fixed: resolved an issue with creating XFS Storage Pools with LUKS Encryption enabled.
v188.8.131.529 (July 20th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- Added Trusted Domain support for Customers who have installed Samba4. Users and Groups from Trusted Domains can now be added by searching in the Network Share User Access>AD User or AD Group section.
- Removed getent as a dependency for Active Directory UID/GID lookups. UID and GIDs are now shown for users that have CIFS access assigned under the Network Share User access Tab.
- Fixed an issue where the idmap selection was not visible in the joining Active Directory domain section of the CIFS Configuration Dialog.
- Fixed qs-util adcachegenall Active Directory caching used for very large (100,000+ users/groups) environments. Generating the Active Directory cache is now much faster.
- Fixed: the idmap ranges for autorid mode were reduced, as the values shipped with 4.0 were too high, preventing uid/gid generation from the Active Directory SID.
- Changed the purge policy function for single-thread mode to use rsync `--delete-after` instead of running pwalk after the transfer completes.
- Added a lower CPU priority for Backup Policy tasks. Added a lower CPU priority for Remote Replication tasks.
- Fixed an issue where the Daily Purge Policy would trigger at the end of the Backup Policy instead of only once a day.
Ceph Scale-out Block and Object
- Added updated ceph-install script for customers upgrading from 3.x releases who are interested in installing and testing the QuantaStor Ceph scale-out block and object features.
Cloud Containers and Cloud Backup
- Fixed an issue with creating a cloud backup without a Cloud Storage Container. This scenario will now properly error out and raise an alert indicating that a Cloud Storage Container should be created.
- Fixed an issue where the Cloud Container Repair task would not complete due to a short timeout value on the process.
- Fixed: Restore from Cloud backup will now only list Storage Pools local to the QuantaStor system where the Cloud Container is mounted.
Hardware RAID Modules
- Added Cisco UCS C3260 enclosure layout support.
- Added new qs hw-unit-auto-create CLI command that will take different inputs to be used as rules to setup Hardware RAID units automatically. More details are in the `qs help=hw-unit-auto-create` output.
- Fixed an issue where dedicated RAID controller hot-spares would show as a warning state when they are perfectly healthy.
- Updated included Adaptec controller utilities for Adaptec Hardware Module support.
- Added new HA failover feature to perform Client Connectivity testing. This feature is available in the Modify Storage Pool HA Failover Group Dialog and will ping a specified set of client IP Addresses and then execute a failover if a chosen policy for the failure is met.
- Added improvements to HA Cluster Storage Pool failover speed for cases where the Node is failed due to a power loss or will not be able to communicate with the node that is taking ownership of the pool.
- Fixed an issue with SCSI-3 Reservations and registrations used by the HA Clustered ZFS Storage Pool feature. Any customers running the HA Clustered ZFS Storage Pool feature are advised to upgrade to 4.0.3 or newer.
- Fixed an issue with the HA heartbeat rings where a ring member would be in an offline/warning state.
- Fixed an issue where the heartbeat cluster service would start on a node that had no Cluster heartbeat rings configured.
- Fixed an issue that prevented the creation of HA Virtual Network Interfaces on top of VLAN tagged interfaces.
- Fixed an issue where VAAI SCSI target support could prevent a Storage pool export during HA Clustered Storage Pool Failover.
- Fixed a corner case with HA Storage Pool startup when both primary and secondary nodes are powered on at the same time.
- Fixed an issue where objects related to an HA Cluster Storage pool would not be updated if the Grid Master node is unavailable and an HA Storage Pool failover occurs.
- Fixed: Alert messages related to heartbeat ring status changes now correctly identify the heartbeat ring as the source of the alert with a clearer message. Previously the alert would state the node was offline, which was incorrect.
- Added new Create and Modify Network Share Dialogs. CIFS User access, ACL Permissions and Share Owner settings are now on the User Access Tab. Advanced settings such as compression mode, ACL and xattr features have been moved to a new Advanced Tab.
- Added: the quota options in the Network Share Create and Modify Dialogs now allow for the exclusion of snapshot used capacity from the Quota.
- Fixed: The Network Share User Access tab grid view in the WebUI now correctly sorts on username and supports sorting by User Access Mode.
- Fixed an issue that would prevent the modification of a Network Share name that included the '-', '_', or '.' characters.
- Fixed an issue that could sometimes cause a Network Share creation to fail if 'nobody' and 'nogroup' were specified as the share owner and group.
- Fixed an issue that could sometimes occur where the Network Share Create or Modify dialog would generate an error about the share owner/group not being set when an AD user was selected.
- Added Consistency Groups for Remote Replication. Replication Schedules now quickly take the snapshots for all Volumes or Network Shares in the schedule at the same point in time, and the snapshots are then transferred serially for best performance.
- Fixed an issue where a lock was not placed on a Network Share replication link; this could lead to Remote Replication Schedules containing only Network Shares running in parallel instead of serially.
- Fixed conflict between VMware VAAI extended copy feature when there was remote replication for Storage Volumes.
- Fixed: QuantaStor will now do more to auto re-create a replica-assoc if it is missing or was removed and there is a good source/target match.
- Fixed: the Enable and Disable Remote Replication schedule dialogs now include more detail regarding the number of shares in the selected schedule.
- Added support for disperse Gluster Volumes to span an uneven number of systems that do not match the disperse configuration. Previously a 4D+1P configuration required 5 or 10 systems; this configuration can now be deployed on 5, 6, 7, or any number of nodes, as long as enough bricks are available to meet the conditions of the Gluster disperse configuration.
- Fixed: there was an issue where Gluster tasks would not succeed due to another Gluster task or command transaction being in progress; this has been corrected with additional retry logic.
- Fixed an issue that would allow removal of a QuantaStor node from the grid while it was still in use serving Gluster Volume access and bricks. If you determine you do have a need to perform a grid node removal while a Gluster configuration is present on that node, you can do so via the force flag.
- Fixed: Removed disperse configuration options from the WebUI that Gluster does not natively support.
- Added: SCSI Target USNs now match the Storage Volume objects' unique IDs.
Core Service and CLI
- Added further detail to the ZFS Storage Pool Resilver property to show how much time the Storage Pool reports as remaining for a resilver.
- Added qs pool-preimport-scan command that can now be used to get a list of available pools for importing.
- Added new 'timezone-list' and 'timezone-set' commands to the qs CLI. These commands allow users to change the timezone of a QuantaStor system in the event the system is relocated or an incorrect timezone is chosen on system startup. More information is available via the 'qs help=timezone-list' and 'qs help=timezone-set' commands.
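A minimal sketch of the new timezone commands; the arguments accepted by timezone-set are not listed in this entry, so the sketch defers to the built-in help rather than assuming flag names.

```shell
# List the timezones the system knows about
qs timezone-list
# Review the accepted arguments before changing the timezone
qs help=timezone-set
```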
- Removed auto import logic on QuantaStor service startup for Storage Pools that were not local to or owned by the Storage System. This corrects a behavior where a storage pool would be imported incorrectly on systems where shared disk access is possible from multiple head nodes. Customers who wish to import foreign Storage Pools from other QuantaStor systems or Open-ZFS based pools should continue to use the Pool Import Dialog.
- Fixed: qs import-pool command to allow importing of storage pools on a remote grid member.
- Fixed: qs pool-import now requires the foreign pool name to import a specific storage pool.
- Fixed an issue where the QuantaStor iSCSI Software Adapter (initiator) would sometimes not automatically login to configured targets on system reboot.
- Fixed an issue where the QuantaStor iSCSI Software Adapter (initiator) would not immediately scan for remote iSCSI targets on startup. In some cases this would cause a Storage Pool to be slow to import or not complete importing properly until the disks were rescanned and Storage Pool started manually.
- Fixed: the qs license-list command output now provides verbose license details by default.
- Fixed an issue at system startup that could lead to an alert regarding a problem for discovery of the iSCSI Target service running state.
- Fixed a conflict with latest SCST driver and Instant rollback from snapshot feature that would sometimes prevent snapshot rollback of Storage Volumes.
- Fixed an issue where deletion of a user created via the QuantaStor Management interfaces would not also remove the corresponding local linux user account.
- Fixed an issue that can sometimes occur where a Stop Storage Pool task would not correctly stop an XFS storage Pool.
- Fixed an issue that could sometimes occur where a Storage pool resilver would complete, but the failed disk would not be removed automatically.
- Added updated Storage Pool Create dialog to provide better detail on when to choose XFS or ZFS storage Pool options.
- Fixed: The Rollback Storage Volume dialog will now tell a user if there are no available snapshot recovery points.
- Fixed: The grid view in the center of the Web manager for Volumes and Network Shares can now be correctly sorted based on any chosen column sorting.
- Added the Alert tab in the Web Manager will not show a count for the number of alerts.
- Fixed an issue where the Storage Pool % Utilized property was not updating as often as the grid view or other Utilized percentage information.
- Fixed an issue where the About box in the Web Manager would not correctly show the versioning information for the system you are accessing via the WebUI.
- Fixed an issue where the ribbon bar would not always appear in the Web Manager on smaller resolution screens.
- Fixed: Storage Volumes that have their % Reserved changed to 0% from a higher value will now correctly report as Thin Provisioned.
- Fixed an issue where the Name field in the Resource Group -> Add/Remove Users dialog would sometimes not be populated.
- Fixed an issue where HTML formatting tags would be present in some Localizations.
v184.108.40.2069 (April 29th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
New Driver releases:
- HP SmartArray RAID Controllers hpsa 3.4.10-0
- Mellanox Infiniband Adapters mlx4_ib 3.2-2.0.0
- Mellanox Converged Ethernet Adapters mlx4_en 3.2-2.0.0
- Added logic to ensure HA failover would succeed during manual failover if the iptables firewall was unresponsive.
- Added a timeout to qs-util rraterebalance
- Fixed a conflict between the VMware VAAI extended copy feature and remote replication of Storage Volumes.
- Changed default replication throttle rate from 10MB/s to 30MB/s
- Added further grid communication optimizations.
- Fixed a bug that caused grid events to be sent for objects that didn't change.
- Fixed a compatibility issue with IE11 where user-entered names in a text field would not be accepted.
iSCSI Target Driver
- Fixed an issue where removing or adding a physical block device to the system would cause the iSCSI target driver to deadlock.
v4.0.1 (April 7th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
- Adds a kernel upgrade to the Linux 3.19-0.58 kernel (latest stable LTS release). This kernel update addresses a potential stability issue introduced with the previous 3.19-0.51 LTS kernel included with QuantaStor v4.0.0. The issue does not affect data integrity in any way but could lead to an instability that would require a reboot.
Scale-out Block and Object
- Fixed: Ceph Cluster create now only allows Alpha-Numeric and underscore '_' characters in the cluster name. The 'qs ceph-cluster-create' CLI help has been updated to reflect this.
- Fixed: Corrected an issue that would cause the removal of Scale-out Ceph Storage Volume to fail.
- Fixed: The Cloud Container tab will now correctly appear on Community Edition keys that have the Cloud Backup feature enabled on the license key.
v4.0.0 (March 31st 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED
QuantaStor 4.0.0 was superseded by the QuantaStor 4.0.1 release on April 7th 2016. Please click here for the QuantaStor 4.0.1 release notes and upgrade instructions.
- Adds kernel upgrade to the Linux 3.19-0.51 kernel (latest stable LTS release)
- It is now even easier to Deploy QuantaStor via PXE/Kickstart solutions such as RedHat Kickstart or Cobbler.
- New Driver releases:
- Dell PERC and Avago/LSI MegaRAID controllers megaraid_sas 06.810.08.00
- Avago/LSI 12Gb/s SAS HBAs mpt3sas 12.00.00.00
- HP SmartArray RAID Controllers hpsa 3.4.14
- HP Broadcom tg3 3.137k
- Adaptec RAID Controllers aacraid 1.2-1.41010
- Intel 40GbE Network Adapters i40e 1.4.25
- Intel 10GbE Network Adapters ixgbe 4.3.13
- Intel 1GbE Network Adapters igb 22.214.171.124
- Intel 1GbE Network Adapters e1000e 3.3.3
- SolarFlare Network Adapters sfc 126.96.36.1991
- Mellanox Infiniband Adapters mlnx4-en 3.2
- Qlogic FC Adapters (supports 16Gb Qlogic Gen 5 26xx controllers) qla2x00tgt 3.1.0
- Scale-out Block and Object Storage (Ceph integration)
- Added new Scale-out Ceph Object Storage support
- A new Ceph Object Storage will have a default 'objadmin' user account with S3 and Swift access keys. This user is intended for diagnostics and resolution of ACL issues. This user can be disabled.
- Added Ceph User access model for management of Secret and Access keys for Scale-out Object Storage S3 and Swift access. Users can be enabled and disabled and can have different ACL access.
- Adds initial support to 'qs ceph-pool-create' CLI for custom crush maps and additional Ceph Storage Pools. Contact OSNEXUS support if you need assistance with creating and deploying a custom crushmap.
- Added the Add and Remove Ceph Monitors dialogs to the Web Manager.
- Added a new Ceph Member status tab to the Web Manager
- Added support to remove a Ceph Monitor configuration in the Ceph Cluster from nodes that are offline or will be permanently unavailable.
- Added: Multi-OSD create now has the option to use available journal partitions on existing journal devices.
- Added: Scale-out Block and Object Ceph clusters will now allow for 48 hours before initiating an auto-heal to rebalance data onto the remaining OSDs. This helps ensure a rebalance does not occur if a node was taken offline due to a quickly corrected hardware component failure or temporary power failure.
- Added: You can now use the 'qs ceph-pool-modify' CLI command with the --max-replicas=X option to modify an existing Storage Pool replica count level and initiate a rebalance of the Placement Groups to the new level.
- Added enhancements to 'qs ceph-monitor-remove' command that allows for discovery of the ceph monitor to be removed with the use of the storage system name or storage system id.
- Added protections to the Modify Network dialog and 'qs tp-modify' CLI to warn about changing the network configuration for network ports used with a Ceph Cluster. Please contact OSNEXUS support for assistance if you determine you need to change the network configuration on a node.
- Added additional warning health status for the Ceph Cluster to reflect error or warning states of underlying Monitors or OSDs.
- Fixed: The 'ceph-install' command, which can be run on older deployments to enable scale-out block, now also installs all of the dependencies required for scale-out object.
- Fixed: Scale-out Block Ceph Pools now correctly show their individual used capacities. Previously all Ceph Pools reported a combined used capacity.
- Fix for rare condition that could cause a QuantaStor node to halt during shutdown or reboot when a scale-out Storage Volume/RBD has active client access.
- Fix to ensure newly created Ceph pool appears with all properties in the Storage pool list in the Storage Management tab.
- Fixed an issue that can sometimes occur when removing a Ceph Monitor.
- Fixed an issue where the client and backend network settings provided during Ceph Cluster Creation were not correctly set.
- Fixes to 'qs ceph-cluster-*' CLI commands to clarify help messages and command arguments.
- Fixed: The Ceph Cluster status now shows a more accurate health status of Initializing when a Ceph Cluster is first created.
- Fixed an issue with Ceph Scale-out Block Storage Volumes where host access assignment events would be rebroadcast.
- Fixes and Various small updates for Ceph Cluster deployment and management
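The 48-hour auto-heal window described in the Ceph notes above is essentially an elapsed-time gate before rebalancing begins. A minimal sketch of that decision, assuming a simple hours-based check (illustrative only, not QuantaStor code):

```python
# Illustrative sketch of the 48-hour auto-heal grace window described in
# the Ceph notes above: only begin rebalancing data off a down OSD once
# it has been offline longer than the window. The structure below is an
# assumption for illustration; only the 48-hour figure is from the notes.
AUTOHEAL_WINDOW_HOURS = 48

def should_rebalance(osd_down_hours, window=AUTOHEAL_WINDOW_HOURS):
    """True once an OSD has been down longer than the grace window."""
    return osd_down_hours > window

print(should_rebalance(2))   # -> False  (brief maintenance outage)
print(should_rebalance(72))  # -> True   (node is not coming back soon)
```

The point of the grace window is that a quickly repaired node rejoins without triggering an expensive cluster-wide rebalance.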
- Scale-out File Storage (Gluster integration)
- Added: Removing a Gluster Brick now performs additional checks to ensure the action does not compromise data availability. Please contact OSNEXUS support for assistance with removing gluster bricks that are not allowed for removal via the qs CLI or Web Manager.
- Added: Gluster Peer Setup now allows for selection of specific peers in a grid for use in a Gluster configuration. This allows for multiple Gluster peer configurations to be available on the same QuantaStor Management grid. Previously all grid nodes were included in the Gluster Peer setup.
- Added firewall rules to ensure client access is allowed for Gluster version 3.4 and higher.
- Added: Storage pools created with the one click Encryption feature are now supported as Shared Storage pools with the High Availability Storage Pool Cluster feature.
- Added: When creating a HA Failover Group, selection of the second node is now scoped to the nodes available in the site cluster of the primary node.
- Added: Storage pools can now be created with LUKS encryption enabled on the underlying disk devices. This automates the manual tasks that had previously only been available via the qs-util crypt* utility.
- Storage Pool
- Added: The Import Storage Pool Dialog has been expanded to allow the selection of any detected Storage pools that are not already imported and managed by QuantaStor. This allows for the easy import of Storage Pools from other OpenZFS based storage solutions.
- Added: Storage Pool creation can now map Storage Pool RAID redundancy for RAIN/RBOD configurations across backend SANs when LUNs presented from Legacy or Third Party SANs include a Serial, SCSIid or Enclosure ID. This helps ensure that there is no single point of failure for FC or iSCSI LUNs presented to a QuantaStor Storage Controller from HP MSA, QuantaStor SDS, IBM N Series or other certified SAN solutions.
- Fixed a rare issue that could occur on some hardware deployments where a ZFS Storage pool would come online before the multipathing driver finished creating all of the device mapper devices.
- Fixed: Adding a Hotspare to a Storage Pool that is degraded now immediately begins the resilver/rebuild process.
- Fixed: Failed drives that showed as UNAVAIL with a numerical ID will now be properly removed from the pool once a hot spare resilver has completed to replace the disk.
- Fixed a rare case where a resilver/rebuild of a Storage Pool RAID would not start when there were available Global or Pool assigned Hot Spares.
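The enclosure-aware redundancy mapping noted above can be pictured as drawing each mirror pair from two different backend enclosures, so an enclosure failure never claims both halves of a mirror. A simplified two-enclosure sketch, with the data layout assumed purely for illustration (not QuantaStor's algorithm):

```python
# Simplified sketch of enclosure-aware mirror mapping as described in
# the Storage Pool notes above: each RAID1 mirror pair draws its two
# LUNs from different backend enclosures so that losing one enclosure
# never takes out both halves of a mirror. Two-enclosure case only,
# for brevity; data layout is an assumption for illustration.

def mirror_pairs(luns_by_enclosure):
    """luns_by_enclosure: dict of enclosure_id -> list of LUN names."""
    enclosures = sorted(luns_by_enclosure)
    a, b = enclosures[0], enclosures[1]
    pairs = []
    while luns_by_enclosure[a] and luns_by_enclosure[b]:
        pairs.append((luns_by_enclosure[a].pop(0),
                      luns_by_enclosure[b].pop(0)))
    return pairs

luns = {"msa-1": ["lun0", "lun1"], "msa-2": ["lun2", "lun3"]}
print(mirror_pairs(luns))  # -> [('lun0', 'lun2'), ('lun1', 'lun3')]
```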
- Network Shares
- Fixed: A Network Share created for CIFS/SMB with NFS disabled in the Network Share Create Dialog now has the active option available and will by default be created in an active state.
- Remote Replication
- Added logic to prevent accidental user-initiated CLI, API or Web Manager deletion of replica snapshots that are required by replication schedules for successful delta replication.
- Improved Storage System Link pre-check logic to ensure that remote replication pre-check of System Link exchanged SSH keys succeeds in the event of a temporary network problem or slow WAN link.
- Fixed an issue where replication to a _chkpnt replica that has a manually created or other block snapshot could fail silently.
- Fixed: Remote replication now verifies that replica Parent _chkpnt and all child snapshots have the correct createdBySchedule association and corrects if not present.
- Cloud NAS Gateway
- Added further discovery logic for re-discovering existing Cloud Backups of Storage Volumes if a Cloud Container needs to be added to a QuantaStor for recovery.
- Added the gsutil packages to the Installation ISO for Google Cloud Container support. These packages can now be installed via 'apt-get install python-gsutil' for existing deployments.
- Added: The Web Manager now includes the ability to specify the Google Cloud Storage project name when creating a Cloud Container using Google Cloud Storage. Previously this had to be manually entered in a config file.
- Fixed: Cloud Backups will no longer be incorrectly listed in the Instant Rollback Snapshot dialog for a Storage Volume. Cloud Backups must be restored using the Restore Cloud Backup dialog.
- Fixed an issue that prevented the repair, or the removal and re-addition, of a Cloud Container that had experienced a lengthy network outage or loss of access to the Object Storage.
- Fix for error state after adding or creating a Cloud Container with Google Cloud Storage or Amazon S3.
- Backup Policies
- Added: The Web Manager now shows the Backup Policy name and finish date in Backup Job properties.
- Fix for Backup Policy Job launcher for pwalk and rsync. Previously there could be a process that would not be properly closed and reported as 'defunct'.
- Fix for inconsistent Backup Policy job detail in the Web Manager
- Fixed an issue with creating a Backup Policy of a remote NAS share served by a Windows AD Server.
- Web Manager:
- Added: The Web Manager has a new modern theme and branding for the 4.0 QuantaStor release.
- Added: The Web Manager has a new Utilized % column in some views with a bar showing utilization for Storage Pools, Storage Volumes, Ceph Storage Pools and Ceph OSDs.
- Added: The Web Manager now has additional connection retry logic that will reduce the need to re-login if there was a temporary network issue between the web browser and the QuantaStor management services.
- Added support for renaming the hostname of a Host in the Web Manager Host Modify dialog and with the 'qs host-modify' CLI --hostname flag. Renaming a host will not affect client access as it is just a human-readable property for the object.
- Added: There is a new tab in the Physical disk view that lists any Global Hot Spares configured for Physical Disk objects.
- Improved many dialogs with grid controls. The dialogs are now horizontally elastic making it possible to easily view more columns.
- Fixed an issue where the Web Manager could sometimes log the user out automatically if there was considerable UTC clock skew between the browser and the QuantaStor management service.
- Fixed: Dialogs that list @GMT Snapshots of Network Shares now include the parent replica or Network Share name to provide more clarity on the snapshot being selected for the operation.
- Fixed: Dialogs that previously referenced the IP address of a Target Port for configuration now also show the Physical port name.
- Fixed an issue to correctly remove a Host iqn child object if the associated Host object was removed or no longer exists.
- Fixed an issue where properties fields could sometimes not be selected to allow copying of their contents.
- Fixed an issue where some objects on secondary nodes would not show the master node.
- Fixed an issue where the browser Locale setting would sometimes not be used to automatically select the correct Language Localization.
- Fixed an issue where the Web Manager was not showing the corresponding size in Decimal Bytes (Terabyte[TB], Gigabyte[GB], etc.) alongside the Binary Bytes (Tebibyte[TiB], Gibibyte[GiB], etc.). More information on the differences is available here: https://en.wikipedia.org/wiki/Tebibyte
- Fixed: The Disk Type column in the Hardware Controller Create Unit dialog, which shows SAS/SATA/etc., will now appear by default.
- Fixed an issue where a Resource Group would not be automatically selected in the drop down when using the Add/Remove Resource Users & User Groups dialog.
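The decimal-vs-binary size display fix above hinges on the TB/TiB distinction; a quick sketch of the standard conversions (standard unit definitions only, not QuantaStor code):

```python
# Decimal (SI) vs binary (IEC) byte units, as in the Web Manager
# size-display fix above. These are the standard unit definitions;
# the formatting helper is an illustrative sketch, not QuantaStor code.

TB = 10**12   # terabyte (decimal, SI)
TiB = 2**40   # tebibyte (binary, IEC)

def format_both(num_bytes):
    """Return a byte count expressed in both TB and TiB."""
    return "%.2f TB / %.2f TiB" % (num_bytes / TB, num_bytes / TiB)

# A "4 TB" drive holds noticeably fewer binary units:
print(format_both(4 * TB))  # -> 4.00 TB / 3.64 TiB
```

Showing both forms side by side avoids the perennial "missing capacity" confusion between vendor (decimal) and OS (binary) reporting.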
- iSCSI Target Driver
- Fixed an issue with the SCST SCSI Target driver where an iSCSI client that unexpectedly closed a connection due to client stability or network related issues could lead to a rare crash.
- Added Migration edition license support.
- Core Service
- Added further Grid communication improvements.
- Added direct query of replication target storage volumes prior to starting replication or removing excess snapshots.
- Fixed: The SNMP-MIB file will now correctly reflect the release date code for the currently installed QuantaStor release.
- REST API Service
- Fixed a corner case where some URL strings passed via a REST call were not decoded.
- Fixed: Addressed CVE-2015-4000 (Logjam) in the Web Server package by increasing the default modulus length to 2048-bit and removing weak DHE Diffie-Hellman ciphers.
- Added: New QuantaStor users created via the Users and Groups section of the Web Manager or 'qs user-add' CLI command will now have the same User ID on all QuantaStor nodes. The new UID range is 100000000-199999999.
- Fixed: An unexpected web request to the Web Server will now correctly route to a 404 error page.
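The grid-wide consistent User ID behavior above can be pictured as a deterministic allocation into the 100000000-199999999 range. The hash-based scheme below is purely an illustrative assumption; only the UID range comes from the release notes:

```python
import hashlib

# Hypothetical sketch: derive a deterministic UID in the new
# 100000000-199999999 range from the user name, so every node in the
# grid computes the same UID for the same user. The hashing scheme is
# an assumption for illustration only; only the range is from the notes.
UID_BASE = 100000000
UID_SPAN = 100000000   # covers 100000000..199999999

def grid_uid(username):
    digest = hashlib.sha256(username.encode("utf-8")).hexdigest()
    return UID_BASE + int(digest, 16) % UID_SPAN

uid = grid_uid("backupadmin")
assert UID_BASE <= uid <= UID_BASE + UID_SPAN - 1
print(uid)  # same value on every node for the same user name
```

A deterministic scheme like this is one way to keep file ownership consistent across grid members without a central ID counter.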
- Hardware Modules
- Added: The Adaptec CLI utility 'arcconf' has been updated to v1.7-21229
- Added Multi-Shelf SAS JBOD enclosure support; this includes enclosures such as the Dell MD1280.
- Added: Mark Disk as Good in the Web Manager and 'qs hw-disk-mark-good' CLI will now initialize/convert RAW and Passthrough devices on Adaptec Controllers for use with creating RAID units.
- Added: Raw Passthrough disks on Adaptec controllers will now be initialized on operations for Hardware Controller Create Unit in Web manager and 'qs hw-unit-create' CLI command
- Added: RAID units marked as a system device or marked with a boot flag in a RAID Controller configuration can now be deleted with the force flag.
- Added: An exception will now be raised if a Hardware RAID unit is selected for deletion that has an Active Storage Pool. This includes delete operations for the Hardware Controller Delete Unit dialog in the Web Manager or 'qs hw-unit-delete' CLI operation.
- Fixed: Adaptec RAID Controllers with Super Cap BBUs now correctly show their health status.
- Fixed an issue where some third party LSI based HBA controllers would not appear in the Hardware Enclosures and Controllers section of the Web Manager or for the 'qs hw-controller-list' CLI command.
- Fixed: Logical RAID units that have a Hardware SSD Cache unit assigned now correctly show the cache enabled icon and property.
- Fixed: LSI/Avago controllers can mis-report a temperature anomaly/differential with some firmware releases; this is now filtered and treated as informational.
- There is a new QuantaStor 4.0 qs CLI available for Windows at http://www.osnexus.com/downloads/
- Fixed: You can now list the associations between Snapshot Schedules and snapshots with the 'qs scha-list' command
- The 'qs license-get' command now returns the license of the local system the qs command is issued against by default if no other arguments are given.
- 'qs-sendlogs' utility now collects additional scale-out block and scale-out object log details.
v188.8.131.5290 (February 15th 2016)
- Fixes a rare problem where triggered replications would fail with a 'No matching snapshots' warning message.
v184.108.40.20672 (February 3rd 2016)
- Enhanced the Storage Volume replication schedule pre-verification checks introduced in QuantaStor 3.16.6 to more efficiently batch the operations.
- Enhanced retry logic to Storage Volume replication schedule pre-verification checks. This helps reduce the possibility that a bad network connection or latency spike would cause a replication to be rescheduled due to a failure to sync the list of Storage Volume snapshots between replica partners.
v3.16.8 (February 1st 2016)
- Increased security token default timeout to address larger grids and WAN link latency.
v3.16.7 (January 29th 2016)
QuantaStor 3.16.7 was superseded by the QuantaStor 3.16.8 release on February 1st 2016. Please click here for the QuantaStor 3.16.8 release notes and upgrade instructions.
- Fixed an issue related to Grid Communication Event handling.
v3.16.6 (January 28th 2016)
QuantaStor 3.16.6 was superseded by the QuantaStor 3.16.7 release on January 29th 2016. Please click here for the QuantaStor 3.16.7 release notes and upgrade instructions.
- Added enhancements for Grid Communication Event handling to make communication more reliable across WAN/unstable networks.
- Added pre-check for remote replication that verifies the list of snapshots currently on the source and target of the replication link. This helps correct a rare grid sync issue that could lead to left-over Snapshot objects in the database of the Grid Master that do not physically exist on the source or target QuantaStor nodes in a replication link. This also corrects behavior where the reference snapshot needed to perform delta transfers could mistakenly have been deleted by a retention policy, requiring that a full transfer be initiated to re-establish the replication link.
- Fix for calendar replication schedules that used offsets, where they would sometimes trigger both on the hour and on the offset. For example, at 1:00AM and 1:20AM.
- Adds single threaded rsync mode for Backup Policies.
- Adds check for Backup Policy trigger to ensure that the destination network share or cloud container is mounted before performing the data copy.
- Fix to ensure Backup policy object status is updated if Backup Policy process is completed or failed.
- Fix for Backup Policy mounting of source Network Shares with Active Directory Credentials.
- Fix to update Network Share status property field if there is a change to the Associated Gluster Volume.
- Fix for rare instance where Gluster Brick could not be removed.
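The calendar-offset fix noted above (a schedule with a 20-minute offset firing at both 1:00AM and 1:20AM) reduces to the trigger test. A sketch of the corrected check, with structure assumed purely for illustration:

```python
# Illustrative sketch of the calendar-offset fix described above: a
# schedule set for 1:00 AM with a 20-minute offset must fire only at
# 1:20, not at both 1:00 and 1:20. The function shape is an assumption
# for illustration, not the actual scheduler code.

def should_trigger(hour, minute, sched_hour, offset_minutes):
    # The buggy behavior effectively fired when minute == 0 OR
    # minute == offset; the corrected check fires only at the offset.
    return hour == sched_hour and minute == offset_minutes

print(should_trigger(1, 0, sched_hour=1, offset_minutes=20))   # -> False
print(should_trigger(1, 20, sched_hour=1, offset_minutes=20))  # -> True
```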
High Availability Storage Pool
- Adds support for HA Virtual interfaces to be created on Bonded VLAN interfaces.
iSCSI Software Adapter
- Added qs-util iscsrelogin CLI command to allow for management of iSCSI Software Adapters target logins from the QuantaStor CLI.
Back-end Storage Integration
- Added support for automatic disk group mapping of RAIN architecture for HP MSA Backend storage devices when creating ZFS Storage Pools. This functionality helps provide automatic RAID mirroring or RAIDZ1/Z2 parity of LUN's between the HP MSA enclosures to provide enclosure failure protection for the LUN's used in the Storage Pool.
v3.16.5 (December 23rd 2015)
- Fixes issue where Remote Replication target snapshots would not be discovered. This corrects behavior where remote replica snapshots copied to the destination target would not be used, resulting in a larger delta snapshot on subsequent replication tasks.
v3.16.4 (December 16th 2015)
QuantaStor 3.16.4 was superseded by the QuantaStor 3.16.5 release on December 23rd 2015. Please click here for the QuantaStor 3.16.5 release notes and upgrade instructions.
- QuantaStor now uses larger idmap ranges when configuring new Active Directory configurations
- Adds validation of users and groups when added to the cifsUserAccessList or cifsGroupAccessList via the qs share-modify command
- Fix for Disable Snapshot Browsing in Modify Network Share dialog, there was a regression in the 3.14.1 release that prevented this from working as expected.
- Fix for Network Share multi-delete to ensure snapshots are removed first. Previously there was a chance that an error would occur when removing a parent Network Share that still had a child snapshot.
- Fix to correctly show Backup Policy status in the Web manager when backup policy pwalk processes complete.
Scale-out Block (Ceph)
- Adds qs CLI ceph-monitor-add and ceph-monitor-delete commands for adding and removing scale-out block Ceph cluster monitors.
- Adds Multi-OSD create feature that allows for quick deployments of Scale-out Block storage. Multi-OSD create will create the XFS Storage Pools and Journal Devices needed for an OSD based on the disks selected.
Scale-out File (Gluster)
- Fix to ensure Add Gluster Brick raises an alert and cleans up if a brick cannot be added to the Gluster Volume selected.
- Fixes filtering of Network Shares in Remote Replication dialogs to ensure previously replicated Network Shares appear.
- Fixes for remote replication qs_zfsreplicate script that ensure metadata is always applied to _chkpnt Storage Volumes. Previously there was a small chance of a replication snapshot not having the correct name and other metadata, resulting in an orphaned snapshot on the destination replica target named with a GUID.
- Additional Enhancements for better logging of error conditions in the remote replication qs_zfsreplicate script.
- Fix to ensure metadata of remote replica Network Share or Storage Volume is set correctly on destination replica target.
- Fix for showing Network Shares correctly with object type of Network Share under Remote Replication tab.
- Reordered columns in the Replication Schedule section of the Remote Replication tab to more clearly show the Destination Storage Pool and system.
- Adds metadata and discovery for Cloud Containers to ensure that Add Cloud Container will reuse the original Cloud Container Name and other properties when importing a previously removed Cloud Container.
- Added the ability to manually trigger Cloud Backup Schedules via the CLI and Web Manager.
- Fix for hanging Storage Volume restore from a Cloud Container that was in use by another process.
- Cloud backups triggered on XFS Storage Volumes now raise an alert warning of possibly inconsistent data if active iSCSI/FC client sessions are detected.
- Fix to ensure Cloud Container rename also renames the associated Cloud Container presented Network Share.
- Added logic to validate that a UID/GID is available when adding a new user via the QuantaStor Web Manager or qs CLI. This corrects a conflict some users have run into when creating their own users on the QuantaStor system via the adduser/useradd Linux CLI commands for IT administration or for tape backup/other software.
- Fix for case where Grid sync event for Deleted Storage Volume Snapshot was not propagated to all of the grid nodes.
- Fix for Modify User dialog where changes could not be saved. This is a fix for a regression introduced in 3.16.1.
- Fixes to ensure QuantaStor user modifications and updates are synced properly across grid members.
- Fixes rules used for XenServer VSA VXDB disk discovery. This corrects an issue where creating Storage Pools on vxdb disks would fail.
- Lowered the alert level to Informational for the 'System startup and service initialization completed successfully' message that occurs when the QuantaStor management service is started or restarted.
Hardware RAID Support
- Adds logic to correlate QuantaStor Physical disk objects to LSI/Avago RAID units.
- Fix for LSI/Avago controllers where RAID units incorrectly reported RAID0 for the Raid Type property.
- Fix for LSI/Avago controllers where the Token Size for storcli RAID create was exceeded; QuantaStor now batches the disks into groups and validates against the token size.
- Adds Firmware property to Disks on RAID controllers.
- Adds more detail to show underlying disks on LSI/Avago Controllers in the Tree view of the Web manager.
- Fix for rare case where Mark/Unmark as Hotspare Dialog would not succeed in marking/unmarking the disk as a hot spare.
- Adds support for 3PAR Arrays in the multipath config file.
- Enables QuantaStor iSCSI/FC devices in multipath config file by default for RBOD Architecture configurations.
- Adds custom tags for objects, available in the Properties dialog. This allows for custom key/value strings to be added to an object; a typical use case is adding Employee IDs or Organizational information to QuantaStor Users.
- Fix for Alert Manager dialog, in some cases the dialog would not show the current settings.
- Fix to preserve DNS Server order in Storage System Modify dialog.
- Fix to detect and correctly remove conflicting DKMS aacraid drivers when running qs_kernelupgrade.sh script.
- Fix for qs help=share-modify output to correctly show syntax for adding domain users and groups.
- qs system-modify CLI command now supports configuration of a list of DNS and NTP servers.
- Fixes for qs quota-create and quota-modify CLI commands to correctly set limits.
- Adds Cloud Edition licensing pass-through for QuantaStor RBOD storage architecture.
v184.108.40.20682 (November 25th 2015)
- Reduces the snapshot clean-up timer from 15 minutes to 30 seconds. This corrects the behavior some customers using short replication intervals would observe where snapshots above their max replica count would sometimes be retained on the source and destination of the replication link.
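The retention behavior corrected above amounts to pruning replica snapshots beyond the configured max replica count on both sides of the link. A minimal sketch, with names and data shapes assumed for illustration (not the actual cleanup code):

```python
# Illustrative sketch of max-replica-count retention as described in
# the fix above: keep only the newest N replica snapshots on each side
# of the replication link. Names and structure are assumptions.

def prune_replica_snapshots(snapshots, max_replicas):
    """snapshots: list of (name, created_ts). Returns (kept, to_delete)."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return ordered[:max_replicas], ordered[max_replicas:]

snaps = [("vol1_chkpnt_%d" % t, t) for t in (100, 200, 300, 400)]
kept, to_delete = prune_replica_snapshots(snaps, max_replicas=2)
print([s[0] for s in kept])       # -> ['vol1_chkpnt_400', 'vol1_chkpnt_300']
print([s[0] for s in to_delete])  # -> ['vol1_chkpnt_200', 'vol1_chkpnt_100']
```

Running this prune every 30 seconds rather than every 15 minutes keeps the snapshot count close to the configured maximum even with short replication intervals.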
v220.127.116.1179 (November 19th 2015)
- Fix for ZFS Storage Volume discovery that occurs after replication or during large batch creation. Previously, Storage Volumes could appear as missing or offline for a short period of time until a later discovery process occurred.
- Adds clearer logging for grid communication where a TCP connection could not be re-used and a retry should be tried via a new connection. Previously the log contained an unclear 'broken pipe' message.
- Adds clearer logging for grid communication TCP connection timeout or connection error states so that errors are raised only if a connection retry fails permanently.
Snapshots and Replication
- Adds check for ZFS snapshotting to verify there is available Storage Pool free space for the snapshot to succeed.
v18.104.22.16870 (November 3rd 2015)
- Optimized to leverage server side object cache which boosts scalability and reduces overall grid sync time for 16 nodes to less than 30 seconds
- Optimized to leverage server side object cache which boosts WUI login speed by up to 10x on busy systems
- Fixes a hang seen in Web UI dialogs when executing a command / clicking OK on systems under heavy load
- Enhanced caching layer logic to boost overall system scalability and performance
- Fixes to SOAP communication which resolve most instances of the 'Broken Pipe' errors that cause slowdowns in grid communication
- Fix for support of NetApp C-Mode LUN devices so that they can be used in HA storage pools
- Fix to issue LIP to FC targets on reboot
Cloud Backup / Backup to Cloud Containers
- Now supported for ZFS based Storage Volumes
Storage Volume Utilization
- Optimized DB updates of volume utilization stats
- Fix to ensure EUI is listed with other properties in Parent Storage Volume object above child objects.
v22.214.171.12496 (October 16th 2015) REBOOT REQUIRED
- Adds kernel upgrade to the Linux 3.19 kernel (latest stable LTS release)
- New Driver releases:
- Dell PERC and Avago/LSI MegaRAID controllers megaraid_sas 06.805.06.01-rc1
- Avago/LSI 12Gb/s SAS HBAs mpt3sas 04.100.00.00
- HP SmartArray RAID Controllers hpsa 3.4.10-0
- Adaptec RAID Controllers aacraid 1.2-0-ms
- Intel 10GbE Network Adapters ixgbe 4.1.2
- SolarFlare Network Adapters sfc 126.96.36.1990
- Mellanox Infiniband Adapters mellanox 3.0-1.0.1
- Qlogic FC Adapters (supports 16Gb Qlogic Gen 5 26xx controllers) qla2x00tgt 3.0.2
- Driver updates for OS Installer to support 93xx series Avago/LSI HBAs and RAID controllers.
Scale-out Block Storage (Ceph integration)
- Adds support for active-active scale-out block storage over iSCSI or via native Ceph RBD Client
- Adds support for Microsoft VSS to enable integration with backup applications
- Fixes issue with network interface reconfiguration where port restart did not work properly on virtual ports or interfaces containing virtual ports
Cloud NAS Gateway
- Adds support for Cloud Containers at additional IBM SoftLayer datacenter locations
Scale-out File Storage (Gluster integration)
- Upgrades to Gluster 3.6.5
- Adds support for erasure coding
- Adds smart provisioning support.
- Adds enhanced monitoring and Web Interface improvements to display brick count, replica count, and brick set number.
- Adds major improvements to Gluster volume provisioning performance through parallelization of brick provisioning.
- When provisioning Gluster Volumes, the replica or disperse count may not divide evenly into the number of storage pools which the volume spans. For example, with 6 appliances and 12 pools and a disperse count of 5, the number of bricks required to evenly disperse the data is 60, with 5 bricks per brick set. QuantaStor now does this smart provisioning (LCM+round-robin) so that the replica/disperse count no longer needs to divide evenly into the storage pool count.
- Adds check to ensure that Gluster Volumes are using XFS or ZFS version 0.6.4.2 or newer.
- If you're using ZFS with Gluster be sure to do a full upgrade of all packages including the kernel and driver packages, which includes the ZFS upgrade to 0.6.4.2
- Fixes issue with /etc/hosts updates when IP addresses are changed in Gluster configurations
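The brick-count arithmetic in the smart provisioning note above is a least-common-multiple calculation; the LCM+round-robin wording is from the notes, while the code itself is only an illustrative sketch:

```python
from math import gcd

# Sketch of the LCM-based brick count from the smart provisioning note
# above: the total brick count must be divisible by both the number of
# storage pools and the disperse (or replica) count. Illustrative only.

def bricks_needed(pool_count, disperse_count):
    lcm = pool_count * disperse_count // gcd(pool_count, disperse_count)
    return lcm, lcm // pool_count  # total bricks, bricks per pool

# The example from the notes: 12 pools with a disperse count of 5.
total, per_pool = bricks_needed(12, 5)
print(total, per_pool)  # -> 60 5
```

Round-robin placement of those 60 bricks across the 12 pools then spreads each 5-brick disperse set over distinct pools.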
- Adds support for RID (Relative ID) mode which auto assigns UID/GIDs to users based on their Windows SID
- Adds caching of AD user and group list information to support large AD domains with 10K to 100K or more users (see 'qs-util adcachegen' command which creates the persistent AD cache within the appliance)
- Fix to the join AD domain process to quote passwords which may contain symbols
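RID mode derives a deterministic UID/GID from the trailing RID component of a Windows SID. A minimal sketch of that mapping, assuming an illustrative local ID range (the SID value, range bounds, and function name are examples, not QuantaStor's implementation):

```python
def sid_to_uid(sid, range_start=100000, range_end=199999):
    """Map a Windows SID to a UID by offsetting its RID (the final
    dash-separated component) into a configured local ID range, so the
    same SID always yields the same UID with no central allocator."""
    rid = int(sid.rsplit("-", 1)[1])
    uid = range_start + rid
    if not range_start <= uid <= range_end:
        raise ValueError("RID falls outside the configured ID range")
    return uid

print(sid_to_uid("S-1-5-21-3623811015-3361044348-30300820-1013"))  # 101013
```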
- Adds support for using QuantaStor as an iSCSI storage gateway appliance in front of other QuantaStor or 3rd party storage appliances like NetApp and EMC systems. Right-click in the Controllers & Enclosures section to add a Software iSCSI Adapter to the appliance which will log in to the 3rd party storage system/array.
- Adds HA support for Gateway mode tested with NetApp E Series storage systems
- Changes default hash algorithm for encrypted devices from SHA-1 to SHA-256
- Fix to stop using DES encryption (too weak) of passwords when new users are created
- Adds additional checks to verify new user name length (must be less than 31 chars)
- Adds commands to qs-util to make it easy to start encrypted pools even if the key file names do not match the device IDs/serial numbers (cryptopenall, crypttabrepair, etc.)
- Verifies support for AES-NI. Testing shows performance improvement of 7.5x when using Intel AES-NI hardware acceleration vs plain software encryption
- Adds encryption support for XFS based storage pools which can also be used with Gluster/Ceph
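The qs-util encryption helpers noted above might be used like this on the appliance console (an illustrative sequence; only the command names come from the entry above, and the comments describe their stated purpose):

```shell
# Illustrative recovery sequence for an encrypted pool whose key file
# names no longer match the device IDs/serial numbers:
qs-util crypttabrepair   # repair the key-file/device mappings
qs-util cryptopenall     # open all encrypted devices so the pool can start
```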
- Fixes issues with dropped grid connections between nodes during heavy load or when multiple active replication streams are going
- Adds support for NVMe SSD devices
- Adds enhanced support for hierarchical device discovery (multipath, encryption devices, etc)
- Improves device discovery and scan speed
- Improved support for Samba4 (use install-samba4 to upgrade to SMB3)
- Fixes issue where @GMT based snapshots were not getting cleaned up in rotation schedules
- Fixes issue where samba service was getting cycled when snapshots were taken
- Adds support for enabling/disabling NFSv4 browsing
- Fixes Adaptec BBU discovery issue
- Adds discovery of hardware RAID unit caching mode/policy
- Adds discovery of disk firmware versions
- Adds support for serial numbers with special characters, for example (-/?<>)
- Adds checks to ensure serial numbers available in RAID CLI utilities are included in hardware controller disk objects.
- Improves date/time parsing of LSI event logs
- Adds discovery of predictive error count for RAID controller disk objects
- Adds detection of JBOD disconnection and triggers automatic pool fail-over
Quality of Service (QoS) Controls
- Adds support for adjusting QoS controls on Storage Volumes.
- Adds support for QoS policies which can be applied to Storage Volumes. Policies make it easy to change QoS settings categorically for large groups of Storage Volumes within a QuantaStor appliance or grid of appliances.
- Adds NTP server management to the Storage System Modify dialog
- Fix to preserve ordering of DNS entries in Storage System Modify dialog
- Fixes issue with adding or using hot spares on ZFS pools where the device was not pre-initialized with a GPT partition table.
- Adds --reserved option to volume-modify to adjust reserved space on existing Storage Volumes
- Adds support for QoS commands and creation of QoS policies which can limit MB/s for reads and writes.
- Improvements to share-modify and share-client-modify CLI commands.
- Adds automatic logging of iSCSI/FC session open/close events into the /var/log/qs_volsession.log file.
- Internal database backups and TDB backups are now placed in .qsbackups hidden folder in Storage Pools.
- Fixes audit log rotation to only rotate when it hits 40MB
- Fixes core service log rotation to only rotate at service start-up when the log file is at least 8K
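The two rotation rules above amount to a simple size/start-up gate; a minimal sketch (constant and function names are illustrative):

```python
AUDIT_LIMIT_BYTES = 40 * 1024 * 1024  # audit log rotates only past 40 MB
CORE_MIN_BYTES = 8 * 1024             # core log needs at least 8 KB

def should_rotate_audit(size_bytes):
    # Audit log: rotate whenever it reaches the 40 MB cap.
    return size_bytes >= AUDIT_LIMIT_BYTES

def should_rotate_core(size_bytes, at_startup):
    # Core service log: rotate only at service start-up, and only if
    # the file has grown to at least 8 KB.
    return at_startup and size_bytes >= CORE_MIN_BYTES
```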
v188.8.131.5219 (September 17th 2015)
- Adds additional validation for structure of NFS /etc/exports file, ensuring that all lines are comments or export entries managed by the QuantaStor service. Customers who wish to have their own custom NFS export entries can place them in a /etc/exports.custom file. The QuantaStor service will automatically append any valid NFS export entries or comments in /etc/exports.custom to the bottom of the /etc/exports file.
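For example, a custom entry kept out of the managed file might look like this (the export path and network are hypothetical; QuantaStor appends valid lines from this file to /etc/exports):

```shell
# /etc/exports.custom -- hand-managed entries appended to /etc/exports
/export/archive 192.168.10.0/24(ro,sync,no_subtree_check)
```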
- Adds logic to reinforce Share Owner User and Share Owner Group when shares are modified. This corrects a behavior where share default ACLs were not being re-applied on some share modifications.
- Corrects a behavior on Network Shares where users or groups specified with admin access would not have valid share access preventing them from logging into the share. This corrects behavior that affected 3.14.2 and newer releases.
- Fix to periodically check for and clean up @GMT snapshots that were lazy deleted.
v184.108.40.20606 (September 14th 2015)
QuantaStor 3.15.4 was superseded by the QuantaStor 3.15.5 release on September 17th 2015. Please see the QuantaStor 3.15.5 release notes and upgrade instructions.
- Adds support for more complex passwords when joining an Active Directory. For the --ad-password flag on the qs share-join-domain command, passwords should now be wrapped in single quotes (') to ensure the BASH shell does not interpret the string.
- Adds driver to install media to allow OS installation on LSI 12Gb/s SAS HBAs.
- Fix for Adaptec controllers to correctly report Cache Policy and Battery Backup Property.
- Adds filtering for NFS Network rules to ensure that host and IP NFS export entries are before network or domain rules. This allows for more complicated rules such as read only access for all hosts on a network and read/write access for a specific host on the same network.
- Enhancements to the CLI for qs share-modify user access list management. The new system allows specifying just the user name, group name, or list option that you would like to add or remove, reducing the number of flags required for an individual share modification.
[--user-access-list] :: List of users with permission to access the network share, for example 'user1:valid,user2:invalid,user3:none,~user7,~user33'; prepend with tilde (~) to remove fields/properties.
[--group-access-list] :: List of groups with permission to access the network share, for example 'group1+DOMAIN:valid,~group2+DOMAIN'; prepend with tilde (~) to remove access for specific users or groups.
[--cifs-options] :: CIFS/Samba configuration options specified as 'key=value,key2=value2,~key3,...'; prepend with tilde (~) to remove fields/properties.
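Putting the flags above together, a single invocation can grant and revoke access in one step (the share name is hypothetical; the list syntax follows the flag descriptions above):

```shell
# Grant user1 valid access and revoke user7 on share 'share1'
qs share-modify share1 --user-access-list='user1:valid,~user7'
```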
- Removed redundant CIFS service restart when snapshots are taken. This resolves a temporary timeout that would occur on CIFS network shares when files were accessed via a stream, for example music or video files.
- Adds logic to ensure a rename of a Network Share on a ZFS Storage Pool with active clients will raise an alert and be canceled early, not affecting the active client connections.
- Adds support for modifying Thin/Thick provisioning(referenced capacity) for Storage Volumes
- Adds support for fast coordinated VSS snapshots for upcoming VSS plugin.
Scale-out File Storage
- Enhanced the Gluster Create High-Availability Virtual Interface dialog to show member nodes of the specified Gluster Volume.
- SHA512 is now the default password hash for local Linux users created via the QuantaStor WebUI and qs CLI.
Remote Replication and Snapshots
- Fix for key exchange during Storage System Links in scenarios where storage system link was removed and re-created between the same systems.
- Fix for a behavior where @GMT snapshots would not be deleted. Behavior affected 3.15.2 and 3.15.3 releases.
Web Manager UI
- Fix to the property window on the right hand side of the Web Manager to ensure it correctly displays properties for the selected object at all times.
- Adds improvement for grid communication over long-distance or heavily congested networks; this feature can be enabled using the below steps:
Grid communication Auto Linger enable (for congested/long-distance network communication):
touch /etc/qs_autolinger
service quantastor restart
v220.127.116.1125 (August 7th 2015) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
- adds latest ZFS v0.6.4.2 filesystem drivers; please review the ZFS changelogs (v0.6.4, v0.6.4.1, v0.6.4.2) for further detail
- adds udev device management logic to resolve a conflict that can occur for customers who install the lvm2 and/or udisks packages
- adds automatic checking and correction for Storage Volumes that have missing dev paths
- fixes issue with User Groups that could prevent the addition or removal of users.
v18.104.22.16809 (July 21st 2015)
- adds support to the Add NFS Client Access dialog for a comma-separated list of IPs, hostnames, or network definitions. This allows for the batch creation of NFS client access rules.
- fix for Network Share rename on ZFS Storage Pools so that share export directory is automatically mounted with new name
- fix to ensure force flag forces removal of Network Shares on ZFS Storage Pools with Active client connections. Default behavior without force flag remains where share removal is aborted if active clients may be connected.
Hardware RAID Integration
- adds new disk property fields that show media error counters and disk firmware version reported by RAID controller
- adds new properties for RAID units on LSI RAID controllers that show consistency status and cache policy settings
- adds new property for LSI RAID controller capacitor based BBU solution to indicate health status
- fix for old LSI controller alerts being relayed via alert manager if the system is rebooted
- fix to correctly show background init progress for new RAID units created on LSI controllers
- fix for RAID unit creation on newer LSI RAID controllers when setting custom stripe size
- fix for Mark as Hot Spare to ensure spares are always correctly set.
- adds latest SoftLayer Object Storage library for Cloud Container provisioning.
- adds latest release of pwalk which includes bug fixes and a new --exclude option to exclude specified directories
HA Storage Pools
- fixed a rare instance that could cause the QuantaStor service to crash when exporting a storage pool during a failover operation
- fix to allow the addition of multiple Host Initiators to existing Hosts in multi-node grid configurations
v22.214.171.12460 (June 5th 2015) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
- adds support for encrypted hot-spares
- adds verification logic to require use of software encrypted disks with encrypted pools and similarly non-enc disks with non-enc pools
Remote Replication / DR
- fixes issue where there was no NFS access to the snapshots of the target checkpoint network shares
- adds support for configuring the NFS Kerberos mode (krb5, krb5i, krb5p) via the Configure NFS Services dialog
- fixes issue where /etc/exports was not immediately updated on change from NFSv3 to NFSv4 or vice-versa
- adds configurable export root for NFSv4 via /etc/qs_nfsv4rootoptions, default is fsid=0,ro
- adds support to turn off browsing of NFSv4 shares via the /export root via the Configure NFS Services dialog
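Per the stated default, the NFSv4 export root is read-only; the configuration file holds just the option string (a config fragment showing the documented default):

```shell
# /etc/qs_nfsv4rootoptions -- options applied to the NFSv4 export root
fsid=0,ro
```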
- fix to automatically set ZFS shares to posixacl mode when modified
- adds support for caching AD user and group information which greatly improves support for large AD domains (verified with 30K users and groups)
- adds support for configuring idmap mode (tdb, rid) via the WebUI using the Configure CIFS Services dialog
- fixes grid election logic corner cases where all nodes have no primary or multiple primaries are designated
- fixes startup order sequencing to support gluster volumes on encrypted pools
- enables Gluster POSIX ACL support by default, use /etc/qs_gluster_posixmode_disable touch file to disable
- adds checks which will move files and directories out of the way if they're blocking storage pool or Gluster brick mount points
- adds improved brick and volume health status checks
- adds automatic fixup of brick mount points which have blocking content preventing them from mounting
- fix for Samba support to use POSIX ACLs and block XATTRS
- adds qs CLI automatic session management which will transparently reconnect if the connection is lost while running an API command
- adds qs-util tab based command completion
- improves CLI list output detail to include system and pool names with proper sorting
Site Cluster Management
- adds Gluster HA VIF and Storage Pool HA VIF information and management into the Site Cluster tab section of the Web UI
- improves and fixes port validation checks when multiple site clusters are created
Hardware RAID Integration
- fixes support for LSI MegaRAID controller event log processing, no longer raises alerts for events that have already been raised
- fixes support for DELL PERC based hardware integration
- updates storage system link state timestamp periodically to auto-fix link down condition when grid communication is disrupted
- fixes up hostname information in replica associations and storage system links when appliance hostname changes
- adds firewall support for disabling access to unused storage services
- fix to support creation of roles with no permissions
- fix to support creation of roles where permissions are copied from other Roles using wildcards (*) for object type/operation type
- adds latest ZFS v0.6.4.1 filesystem drivers; please review the ZFS changelogs (v0.6.4, v0.6.4.1) for further detail
- adds latest Intel ixgbe v126.96.36.199 driver
- adds latest HP hpsa v3.4.8-140 RAID controller driver
- fixes issue with port bonding enslave so it only does this at system startup
- fixes issue where duplicate bond ports would show with one online and one offline
HA Storage Pools
- fixes issue with system shutdown to automatically move HA pool(s) to secondary node
- adjusts fs.aio-max-nr to support larger configurations using SAS multipathing
- adds command line argument validation to qs_service
- adds improved swap device checks so that they are not spammy when utilization is high
- fix for swap space check to support configs with no swap devices
- adds support for variable thin provisioning (0-100%) for ZFS based Storage Volumes
- updated MIB
v188.8.131.5262 (May 1st 2015)
SAN / Storage Volumes
- adds and updates accessTimeStamp on storage volume objects
- adds CLI command to allow for updating the createByScheduleId field on storage volumes
- fix to storage volume assignment by host in large grid configurations
- fix for user level CHAP credentials multi-node auto-update logic
- fix to T10 device descriptor format to use pool UUID rather than system UUID. Only applies to newly created storage volumes, existing volumes are unchanged.
- fix to Storage Volume Utilization entries to show the volume name and date stamp in the 'name' field
- fix for FC port startup at boot time
- fix to adding/removing an initiator IQN to/from a host to update ACLs when using host groups
- fix to allow deleting storage volumes which are a member of a storage volume group
Network Shares / NAS
- adds user/group ownership control settings to Network Share Create/Modify dialogs
- adds share r/w/x permissions control to Network Share Modify dialog
- adds posixUid/posixGid to user objects
- adds setting to enable POSIX acls by default on ZFS pool create, XFS pools already had posix ACLs enabled by default
- adds improved support for AD integration and Network Share quota management
- fix to allow for duplicate share names as long as they're on separate systems
- fix to auto-repair stale NFS file handles after system/appliance reboot
- fix for open file handle issue to winbind with services restart logic
Storage Pool Management
- adds smart hotspare selection to prefer hotspare which is in the same enclosure as the failed device
- adds smart pool creation which makes multi-enclosure chassis deployments highly-available by round-robin selection of disks across enclosures for all RAID levels
- adds updates for improved detection and display of SED disks / sets isEncrypted flag.
- fix to auto start XFS based pool on pool-create
HA Storage Pools & Cluster Heartbeat Management
- revamped HA system adds Site Cluster, Cluster Ring management, and active cluster monitoring into WUI
- adds major optimizations for HA failover speed
- fix for replication checkpoint snapshot rotation where oldest device wasn't being rotated/expired
- fix to skip expired replicas in cleanup stage if the snapshot has snapshots
- fix to XFS based replication
- fix to do license checks before processing or triggering remote replication schedules
- adds support for customizing the pem files for all services (core qs_service, REST service, and Tomcat)
- adds support for customizing the SSL ciphers, applies strong cipher limits automatically
- adds SSL cert generation script which deposits custom certs into /var/opt/osnexus/quantastor/ssl which are automatically picked up by REST and core services
- adds script command to upgrade from Java 6 to Java 7 (qs-util java7upgrade), which allows browsers to connect via https using stronger ciphers / TLS 1.2
- fix to disable all use of SSLv3 across all internal services (Core service, Tomcat, REST API service) in favor of TLS for improved security / HIPAA compliance
- fix to allow removal of duplicate 'admin' users
- fix to remove duplicate user entries in Samba config when user assigned as 'Admin' on a share
- fix to password length enforcement (8-34 char)
Cloud Containers / NAS Gateway
- adds new SoftLayer Object Storage locations for mon,mel,mex,fra,par,syd,tok
- fix to auto start Cloud Containers at system startup
- fix to Cloud Container create to setup CIFS/NFS settings and to auto-enable the container when the share is enabled
- fix to improve web UI connection and sync time
- fix to show nested snapshots of snapshots in WUI
- fix to obscure CHAP user/pass in web UI
- adds improvement to daily utilization graph to allow selection of multiple days
- adds many improvements to Japanese localization ( add '/?locale=ja' to URL )
- adds support for HP P431 and related RAID controllers (except install time boot driver)
- adds fixes to management and monitoring for latest LSI HBAs including IBM OEM variants
- adds support for latest LSI MegaRAID and related OEM hardware
- adds support for installing via USB media with new ISO
- adds enclosure view for Adaptec RAID controllers
- fix for LSI MegaRAID controller support where duplicate alerts were being generated and old alerts would have incorrect timestamps showing as new alerts
- fix to remove blank 'Disk ()' entries on virtual LSI MPT SAS controllers presented by VMware & VBox
- fix to HP P4xx/8xx Series RAID unit creation to pass 'force' option as needed
- adds VMware certification, iSCSI w/ ESX 5.1
- adds SNMP fixes for SNMP v3 traps and adds new SNMP types
- fix to resolve intermittent issue which was causing the SNMP agent to restart
- adds improved ethernet port vendor/model detection, and ports now show as "Disabled" if unconfigured
- fix to bonded port creation to auto ifenslave ports after creation
- adds support for Gluster 3.6, upgrading from 3.5 to 3.6 can be done incrementally node by node without downtime
- fix for gluster brick add operation also adds checks to prevent bricks from overlaying on existing pools
- fix for gluster rebalance corner case, allowing more time for it to start
- fix to Alert Manager to allow for setting the SMTP port number
- fix to Alert Manager SMTP send logic to properly handle STARTTLS
- fix to CLI to allow clearing fields with empty argument value "", for example --description=""
- fix for CLI bug in volume-clone and network-share-snapshot commands
- fix to CLI so that boolean args default to true (eg. --somearg is equivalent to --somearg=true)
- fix to CLI to allow specifying network share names starting with '@' symbols
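The boolean-default rule above (a bare --somearg is equivalent to --somearg=true) can be sketched as a small parser (illustrative only; not the qs CLI's actual argument handling):

```python
def flag_value(argv, name):
    """Return the boolean value of --name: a bare --name defaults to
    true, an explicit --name=false disables it, and an absent flag is
    false."""
    for arg in argv:
        if arg == "--" + name:
            return True
        if arg.startswith("--" + name + "="):
            return arg.split("=", 1)[1].lower() == "true"
    return False

print(flag_value(["--force"], "force"))        # True
print(flag_value(["--force=false"], "force"))  # False
```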
- adds major optimizations (~8x faster!) for pool scan and device import speed
- adds major optimizations (~8x faster!) for service startup time
- adds major optimizations (~10x faster!) for grid synchronization speed and scalability
- fix to qs_service --reset-password
- fix to gSOAP library / upgrade to 2.8.17 resolves management service memory leak seen in multi-node configurations
- fix to increase default max log size to 25MB before auto-rotate
v184.108.40.20690 (January 14th 2015)
- fix to Create Hardware RAID unit API where task would stay in Queued state (.7090 hot-fix)
- fix to multi-node NFS config update when Gluster based Network Share NFS client access is changed
- fix SCST warning in kern.log due to missing dir
- fix to allow upgrading to 3.14 without making the 3.13 linux kernel upgrade mandatory
- fix to deletion of custom named network share snapshot cleanup
- fix to HA manager startup to support configs where grid is torn down
- fix http redirect for lsiget log generation
- fix to delete grid where grid IP field is not cleared
- fix to NFS server config to remove RPCMOUNTDOPTS=--manage-gids option by default
- adds support for storage volume / network share instant rollback from snapshot
- fix to rollback network share from remote replica
- fix to syslog startup in 3.14
- fix to CLI output when commands are run with --async to show task information
- fix to CLI share-client-add command to create Network shares in r/w mode by default.
- fix to CHAP settings update when changed on user account
- fix HTTP header of QuantaStor Manager to make sure it is validator.w3.org compliant
- adds support for multiple Network Shares with the same name as long as they're on separate appliances in the grid
- adds QuantaStor appliance host name to the browser tab title area
- adds lazy-clone option to Storage Volume snapshot which delays making the snapshot writable until it is assigned
- adds delayed/lazy-clone as default mode for remote replication snapshots which greatly reduces system CPU, memory, and filesystem load
- fixes issue with Backup Policies to handle special characters in file names (see pwalk)
QuantaStor 3.14.1 update via Upgrade Manager
Log in to the WebUI with an admin account and run the Upgrade Manager; click Check for Updates and install the below QuantaStor update packages via the Upgrade Manager.
Web Management 220.127.116.1190-1
Core Services 18.104.22.16890-1
Web Server 7.0.7089-1
If you are upgrading from a release older than 3.14.0, you will need to install the iSCSI target Driver and schedule a reboot of the system.
iSCSI Target Driver 22.214.171.12493-1
QuantaStor 3.14.1 Manual install
The below 2 commands can be run from the console of a QuantaStor appliance to explicitly install the 3.14.1 release on any QuantaStor running a v3 release.
apt-get update
apt-get install -y libpython2.7=2.7.3-0ubuntu3.6 zfsutils libzpool2 libzfs2 lsscsi pv qstormanager=126.96.36.19990-1 qstorservice=188.8.131.5290-1 qstortomcat=7.0.7089-1 qstortarget=184.108.40.20693-1
If you wish to upgrade to the 3.13 linux kernel provided with the QuantaStor 3.14 release, please follow the instructions here.
v220.127.116.1193 (December 30th 2014) KERNEL and DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- adds 3.13 linux kernel and SCST driver stack upgrade
- adds support for Micron PCIe SSD cards
- adds universal hot-spare management system for ZFS based pools
- adds support for FC session management and session iostats collection
- adds disk search/filtering to Storage Pool Create/Grow dialogs in web interface
- adds configurable replication schedule start offset to replication schedule create/modify dialogs
- adds support for cascading replication schedules so that you can replicate volumes across appliances A->B->C->D->etc
- adds wiki documentation for CopperEgg
- adds significantly more stats/instruments to Librato Metrics integration
- adds dual mode FC support where FC ports can now be in Target+Initiator mode
- adds support for management API connection session management to CLI and REST API interfaces
- adds storage volume instant rollback dialog to web management interface
- adds sysstats to send logs report
- adds swap device utilization monitoring and alerting on high swap utilization
- adds support for unlimited users / removes user count limit license checks for all license editions
- adds support for scale-out block storage via Ceph FS/RBDs (pilot program only)
- deprecates DRBD continuous XFS pool replication
- fix for CLI host-modify command
- fix for pool discovery reverting IO profile selection back to default at pool start
- fix for web interface to hide 'Delete Unit' for units used for system/boot
- fix for alert threshold slider setting in web interface 'Alert Manager' dialog
- fix for sending email alerts to multiple accounts.
- fix to accelerate pool start/stop operations for FC based systems
- fix to disk/pool correlation logic
- fix to allow IO profiles to have spaces and other special characters in the profile name
- fix to FC ACL removal
- fix to storage system link setup to use management network IPs
- fix to remove replication association dialog to greatly simplify it
- fix to CLI disk and pool operations to allow referencing disks by short names
- fix for replication schedule create to fixup and validate storage system links
- fix for replication schedule delta snapshot cleanup logic which ensures that the last delta between source and target is not removed
- fix for stop replication to support terminating zfs based replication jobs
- fix for pool freespace detection and alert management
- fix license checks to support sum of vol, snap, cloud limits across all grid nodes
- fix to create gluster volume to use round-robin brick allocation across grid nodes/appliances to ensure brick pairs do not land on the same node
- fix to storage volume snapshot space utilization calculation
- fix to iSCSI close session logic for when multiple sessions are created between the same pair of target/initiator IP addresses
- fix to auto update user specific CHAP settings across all grid nodes when modified
- fix to allow udev more time to generate block device links, resolves issue exposed during high load with replication
- fix to IO fencing logic to reduce load and make it work better with udev
v18.104.22.16840 (November 23rd 2014)
- fix for user limit to combine across grid licenses; adds a clearer log message when approaching limits
- adds SoftLayer object storage support for London and Toronto locations
v22.214.171.12437 (November 14th 2014)
- adds new SoftLayer datacenter Cloud Container locations
- fix for user limit count calculation.
- fix grid compatibility with older QuantaStor versions.
v126.96.36.19927 (October 30th 2014)
- reduce CPU usage during remote replication.
- further replication throttling improvements.
- fix reported size usage for remote replications.
v188.8.131.5215 (October 21st 2014)
- adds load balancing to remote replication, use 'qs-util rratelimitset NN' to configure, 50MB/sec is the default limit.
- adds load balancing to volume cloning, use 'qs-util clratelimitset NN' to configure, 200MB/sec is the default limit.
- adds additional Storage Volume delete confirmation checks in web management interface to show count of any active iSCSI sessions
- adds option to offset the start of a replication schedule by NN minutes for staggered replication. With scheduled replication at 1am, 4am, 7am and a 10 minute offset, replication will start at 1:10am, 4:10am and 7:10am respectively. CLI only feature in this update:
- qs replication-schedule-modify SCHEDULE-NAME --offset-minutes=10
- fix to Create Remote Replication Schedule dialog to allow adjusting max replicas and sets min replicas to 3.
- fix to audit logging
v184.108.40.20691 (October 10th 2014)
- fix to volume check logic for remote-replication schedules
- fix to volume batch create and delete operations to allow more time for udev rules to run on slow systems
- fix to network share enum user/group quotas to API redirect to correct node owner of share
- fix to SAS disk discovery for Adaptec controllers
- fix to scheduler for interval based replication schedules which only contain Network Shares
- fix to read/write TX counters on network ports where in some cases they were not getting updated
- fix to show MB rather than MiB for read/write TX counters in QuantaStor Manager
- fix to show a suffix of "(Disconnected)" in WUI to make it clear when a grid node is offline/disconnected
- adds qs-util CLI enhancement for ZFS meta cache limit configuration. see qs-util setzfsarcmax auto
- adds qs-util CLI commands for checkswap and clearcache
- adds qs-iostat CLI enhancement ZFS L2ARC information. see qs-iostat -a
- adds qs CLI utility commands for replication schedule and replica assoc management
- adds qs CLI utility command for trigger snapshot schedule
- adds additional swap and cache stats information to log report
- updated wiki CLI documentation
- updates object naming of Storage Cloud to be more aptly named Multitenant Resource Group
v220.127.116.1152 (September 23rd 2014) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED
- upgrades ZFS to latest maintenance release v0.6.3 (included in qstortarget package and requires reboot)
- upgrades GlusterFS to latest maintenance release v3.5.2
- adds hardware encryption support via qs CLI for LSI SafeStore SED/FDE hardware encryption
- adds software encryption (LUKS based)
- adds GlusterFS automated peer setup w/ /etc/hosts management
- adds support for triple parity RAID-Z3 layout for ZFS based storage pools
- adds new qs-iofence utility, deprecates use of zpoolfence, adds support for multipath HA configurations
- adds support for SAS multipath device detection and path associations
- adds improved device naming to use friendly name plus the boot resilient name in parentheses
- adds LSI SafeStore key management operations to the web management interface
- adds additional options to qs-iostat
- fix to remote replication configuration setup where replica-associations/storage-system-links could get dropped during reconfiguration in larger grids
- fix to SNMP MIB, overhauled MIB design and snmpagent is now compliant to various MIB certification tests
- fix to storage pool device naming convention to map through the exact device names shown in 'zpool status'
- fix to gluster peer detach to add 'force' option
- fix to service port 5151 http GET to return 404 for invalid requests
- fix to various WUI dialog error messages
- fix to remove unnecessary spl package dependency
- fix to filter out partitions from the Physical Disk list
- fix to log rotation to no longer rotate empty log files
- fix to block sending email alerts when there are no recipients
- fix to prevent grid dual primary link condition
- fix to cloud container import for swift based containers
v18.104.22.16811 (July 30th 2014)
- fix to DNS lookup issue for Gluster configurations
- fix to prune large MegaSAS.log file (> 50MB) and to not log MegaCli discovery operations
- fix to corner case in HPNBufferSize editing of /etc/ssh/sshd_config for HPN SSH support
- adds /etc/hosts configuration management via Gluster Peer Setup dialog
- adds qs CLI commands qs grid-get-hosts, grid-set-hosts for /etc/hosts configuration (qs grid-set-hosts eth0)
- adds hardware module for LSI MPT SAS HBAs
v22.214.171.12484 (July 22nd 2014)
- fix to show gluster brick/volume warning icon if brick/volume is unhealthy
- fix to add gluster logs into 'Send Logs..' report
- fix to gluster volume discovery logic and brick free-space updates
- fix to grid 'set master node' error handling
- fix to network share attribute updates (compression level, etc)
- fix to detect HPN version of ssh and to auto configure /etc/ssh/sshd_config accordingly
- fix to network config management logic which was removing additional line(s) in /etc/network/interfaces
- fix to storage pool free-space and percent provisioned calc
- fix to BBU discovery for Adaptec controllers
- fix to network share delete on ZFS pools to not use 'rm' as it is slow and redundant
- fix to update user password on all grid nodes
- fix to gluster volume delete logic
- fix to set password error message to show 8 to 40 characters required
- fix to delete vlan dialog in web interface
- fix to schedule manager which was preventing hour/week based schedules from firing
- adds network event alert throttling
- fix to grid rename and grid IP address change logic
v126.96.36.19929 (June 27th 2014)
- adds support for OpenStack Cinder (see here for more detail)
- adds support for interval based replication down to 15 minute cycles
- adds support for data-migration / 3rd party LUN copy to new QuantaStor Storage Volume
- adds support for user quotas on network shares (AD group quotas not yet available)
- adds support for storage tiers / tiers are groups of storage pools which provide smart placement of newly provisioned storage volumes
- adds redesigned network bond management logic, now allows selection of teaming mode on a per bond basis
- adds new https keystore for web management interface (be sure to clear your browser cache)
- adds 'qs-util disablehttp' secure mode to enable/disable http access (port 80), forcing admins to use https for web management
- adds info on posix UID/GID to user properties page
- adds session management to qs CLI with automatic retry logic to handle broken network pipe conditions, improves scripting/automation
- adds new password minimum length of 8 characters (was 6)
- adds acl mount option to gluster client loopback connections for Samba/NFS access
- adds xattr=sa option for network shares by default; it is also enabled with the 'Enable MMC Management' option.
- adds additional zones to SoftLayer cloud provider location list for Hong Kong and Singapore
- adds alert filtering via '/etc/qs_alertfilters.conf' file. To filter alerts echo the name of the alert like so:
- echo "[Service Update]" > /etc/qs_alertfilters.conf
- adds nightly check for MCE errors (memory check exception) which can indicate bad RAM.
- adds automatic tdb backup for SMB configuration data
- fix to gluster volume delete/modify for SMB config synchronization
- fix for Gluster peer attach to use hostnames whenever possible (/etc/hosts recommended as a fallback for name resolution if DNS fails)
- fix to network share restore operation to remove files that are not in the snapshot and to restore extended attributes
- fix for remote replication create/modify dialog for grids with 3 or more nodes
- fix to speed up pool start logic for configurations with many share snapshots
- fix to network share multi-delete to show more progress detail
- fix to Cloud Container location code so that you can select a location and create a container from the context menu
- fix for various network share CLI commands
- fix invalid trace messages
- fix for DR support to do additional replication checks
- fix to core service to allow for changing openssl pem files
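The /etc/qs_alertfilters.conf mechanism described above amounts to an exact-line match of the bracketed alert name against the filter file. A sketch of that check, using a temp file in place of the real path and a hypothetical helper name:

```shell
# Stand-in for /etc/qs_alertfilters.conf; the helper below is illustrative.
FILTER_FILE=$(mktemp)
echo "[Service Update]" > "$FILTER_FILE"   # suppress this alert by name

alert_is_filtered() {
    # success (0) if the bracketed alert name appears verbatim in the file
    grep -qxF "[$1]" "$FILTER_FILE"
}

if alert_is_filtered "Service Update"; then first=suppressed; else first=delivered; fi
if alert_is_filtered "Disk Failure";   then second=suppressed; else second=delivered; fi
echo "$first $second"
rm -f "$FILTER_FILE"
```

Appending further `[Alert Name]` lines to the file suppresses additional alert types.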
v188.8.131.5277 (May 16th 2014)
- adds option to disable ALUA support (needed for VMware HA configurations)
- adds support for storage tier management. Tiers are groups of storage pools for easy automated provisioning. (currently CLI only)
- adds alert when HA port failover occurs
- adds new basic discovery module for mptsas LSI Fusion HBAs
- adds SAS address info to HW disk properties
- adds HW controller cache memory size information
- fix to allow for clearing network port configurations. Do this via the Modify Network Port dialog and set the port to 0.0.0.0 or choose 'disabled'.
- fix to identify HA virtual interfaces as 'static' rather than 'unknown'
- fix for Network Share free-space updates / previously was generating too much system load
- fix to clone operation, adds more progress detail to task status
- fix for cli command host-group-host-remove and volume-modify
- fix to recovery management to additionally auto-recover samba configuration
- fix to rename user to update samba configuration
- fix to CIFS management Network Share Modify dialog
- fix to UI to not show empty 1969 timestamps and other unpopulated fields
- fix to backup policies to backup files with non-ASCII characters in the file name
v184.108.40.20630 (May 6th 2014)
- fix for Network Share used space check (resolves performance / CPU utilization issue)
- fix to HA device descriptor generation
- fix to clone operation, adds more progress detail to task status
- fix for qs-util megalsiget utility
- adds new driver for mpt3sas LSI SAS3 HBAs
v220.127.116.1120 (April 25th 2014)
- adds support for ALUA on iSCSI for HA
- adds new CIFS options for extended attributes to Network Share Create/Modify dialogs
- adds support for additional compression options
- adds support for LSI mpt2sas based HBA discovery and enclosure services integration
- adds SNMP support and full MIB
- adds SNMP commands to qs-util
- adds support for custom qs_init_share.sh in /var/opt/osnexus/quantastor
- adds performance test to qs-util
- adds the SNMP tools and iozone performance tool packages
- fixes and optimization for HA failover support
- fix for iSCSI session write/update issue
- fix for System Monitor role
- fix for setting bind address / gridIP attachment to non-eth0
- fix for gluster version check for 3.4
- fix for SSD storage pool IO profile
v18.104.22.16851 (April 4th 2014)
- adds additional trace for alert logging
- adds support for GlusterFS 3.4.2
- adds configurable compression level and sync policy settings to Storage Volume Modify, Network Share Modify, and Storage Pool Modify
- adds logic to automatically set ZIL cache policy to always when ZIL SSD cache devices are added
- adds ZFS dataset creation for gluster bricks
- adds gluster volume auto-start after creation
- adds grid status monitoring logic to core service via --grid-stat option
- fixes and optimizations for grid scalability
- fix for cloud container CIFS access disabled at service startup
- fix for inaccurate utilized space on container's associated network share
- fix for inaccurate utilized space on gluster volume's associated network share
- fix for SoftLayer cloud container creation
- fix for accessing @GMT snapshots via NFS
- fix to allow creating storage system links between virtual and VLAN interfaces
- fix for MTU setting on VLAN ports, MTU of VLAN interfaces must be less than or equal to the parent interface's
- fix for gluster delete volume
- fix for delete/export pool to disconnect associated gluster mounts
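The VLAN MTU constraint noted above (a VLAN interface's MTU may not exceed its parent's, since the tagged frame rides inside the parent's frame) reduces to a simple validation; the parent MTU value below is a hypothetical jumbo-frame example:

```shell
# Sketch of the VLAN MTU check; 9000 is an assumed parent-port MTU.
parent_mtu=9000
vlan_mtu_ok() { [ "$1" -le "$parent_mtu" ]; }   # VLAN MTU must not exceed parent

vlan_mtu_ok 9000 && v1=ok || v1=rejected   # equal to parent: allowed
vlan_mtu_ok 9216 && v2=ok || v2=rejected   # larger than parent: rejected
echo "$v1 $v2"
```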
v22.214.171.12488 (March 14th 2014)
- hot-fix for host group management, was not managing assignments properly in grid configurations
- fix for network share management of CIFS settings on Gluster shares
- fix for low level grid reset logic with /opt/osnexus/quantastor/bin/qs_service --reset-grid
- fix to set SSD optimized state to true when ZIL/L2ARC is enabled for a pool
- adds minor web UI enhancements to show compression ratios and quotas in the table view
- adds more checks to block Community Edition appliances from being added to a grid
- adds qs CLI commands for marking hardware RAID disks as good or hot-spare, and for importing foreign RAID units
- adds support for mark good API with Adaptec controllers
v126.96.36.19970 (March 7th 2014)
- added cloud container support for Google Cloud Storage
- this requires installation and configuration of gsutil via console using 'sudo install-gsutil'
- added support for accessing cloud containers via CIFS, see Modify Network Share to enable
- added AD group and user search/filtering features to QuantaStor web interface in Modify Network Share
- added support for network share quota with ZFS based storage pools in Modify Network Share
- added support for disabling browsing for network share _snaps directories in Modify Network Share
- added enhanced tabs for hardware RAID units, disks, events to filter on selected controller
- adds convenience Select All buttons to the network share permissions tab in Modify Network Share
- moved tab for iSCSI sessions to Storage Volume section, now only shows iSCSI sessions for selected storage volume
- fixed bug in cloud container create which would occasionally set the container state to error
- minor updates to EULA
- changed policy to have iSCSI redirection disabled by default as there are issues in grid configurations with VLANs where redirection could point to an inaccessible network
- changed default max ARC size to 70% after initial system installation
- added cluster configuration information to send log report
- adds qs-util megasettime to set the clock on MegaRAID controllers
- adds warning alert that additional configuration is required when NFSv4 w/ Kerberos mode is enabled.
- fix to update '/etc/issue' automatically after network configuration changes
- fix for HA custom callout script support
- fix to cleanup HA groups on storage pool export
- fix to skip schedule execution if no volumes/shares are present
- fix to MegaRAID SCSI inquiry page parsing which swapped the serial number and model number in some cases
- fix to MegaRAID to show proper drive status when marked as 'Failed'
- fix to AD domain leave operation to remove AD computer entry
- fix for network share delete snapshot / unmount filesystem issue
v188.8.131.5298 (February 2014)
- added optimizations to DR / remote-replication to efficiently handle import
- added optimizations for Grid join process
- added additional license capacity 'wiggle room' to allow for 1TB of additional space for SSD caching
- fixed a race condition import problem seen with multiple replication policies all running concurrently
- fixed disk device correlation problem seen when cloning VSAs in Virtual Box
- fixed Cancel button in dialog for Add/Remove Shares from Quota
- fixed AD join process to support Domain Administrator accounts with passwords containing spaces
v184.108.40.20685 (January 2014)
- adds support for SAS HBA based HA support
- adds support for Gluster HA virtual network interfaces, now CIFS/NFS access to Gluster volumes is HA
- adds customizable storage volume block size (ZVOL block size) in 'Create Storage Volume..' dialog under 'Advanced Settings'.
- adds multipath support for dual-path SAS HBA connectivity to SAS JBOD
- adds SNMP agent with get/walk/trap support
- adds 'Attach Gluster Peer' dialog for customizing Gluster peer connections to use specific ports/interfaces
- adds compression ratio information to volume, share, and pool properties
- adds secondary port discovery for manually created virtual interfaces which show up in 'ip addr' but not 'ifconfig'
- adds revised layout (grid aware) for all network management dialogs
- fix for local user synchronization across grid nodes for Gluster/CIFS support
- fix for CIFS/NFS configuration synchronization across grid nodes for Gluster
- fixes for Japanese localization
- fixes reboot/shutdown hang due to missing pacemaker K01 shutdown directive in /etc/rc6.d
- fixes for Gluster 3.4 integration
- optimizations to speed up the create grid operation
- deprecates / removes btrfs pool type option from Create Storage Pool in web UI, still available from CLI
- tested/certified LSI MegaRAID 93xx / 12G RAID Controller
v220.127.116.1141 (December 19th 2013)
- adds initial support for Samba4/SMB3.
- Note that an additional installation step is required to upgrade.
- adds 'zvolutil repair' command for fixing bad blocks/checksums in ZFS ZVOLs, more info here.
- fix to RBAC role modify operation
- fix to multi-tenancy support to add Network Shares as a cloud resource type
- fix to Amazon S3 / add custom locations support
v18.104.22.16860 (December 3rd 2013)
- fixes for storage volume clone operation
- fixes for manual HA failover support
- fixes for grid synchronization logic
v22.214.171.12435 (November 22nd 2013)
- adds support for specifying block size and stripe leg length for hardware RAID unit creation (Adaptec / LSI MegaRAID)
- adds unit build/initialization status information for Adaptec controllers
- adds hardware controller configuration options to the toolbar
- adds qs-util setzfsarcmin / setzfsarcmax commands for adjusting ZFS ARC global settings.
- to configure your system to use 80% of available RAM for ARC cache use the command 'sudo qs-util setzfsarcmax 80'
- reboot is required for new ARC settings to take effect
- adds alert and task annotations to Librato Metrics integration
- adds auto config adjustment to reserve at least 128M of RAM for the system
- adds improved qs CLI help page
- adds support for network share replication
- fixes for remote-replication / DR
- fixes for manual HA failover support
- fixes Adaptec unit creation for single drive, JBOD type is now Simple Volume in 7xxx series
- fixes pool start issue with ZFS pools where network shares would not auto-enable
- fixes Adaptec 7xxx series device correlation
- fixes web UI issue with duplicate physical disks associated to RAID unit
- fixes 'Impacted' Adaptec unit state to be categorized as busy rather than warning
- fixes auto-floating of pacemaker owned virtual ports
- fixes NFS/CIFS export discovery issue with Scan function in Backup Policy create/modify dialogs
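The setzfsarcmax percentage above ultimately becomes a byte count, since ZFS takes its ARC cap through the zfs_arc_max module parameter. A sketch of the conversion, assuming a 32 GiB system rather than probing actual RAM:

```shell
# Sketch of the arithmetic behind 'sudo qs-util setzfsarcmax 80':
# convert a percentage of total RAM into the byte value ZFS expects.
# The 32 GiB figure is an assumed example, not a probe of this machine.
mem_total_bytes=$((32 * 1024 * 1024 * 1024))
arc_pct=80
zfs_arc_max=$((mem_total_bytes * arc_pct / 100))
echo "$zfs_arc_max"   # value of the kind written to the zfs_arc_max parameter
```

As the entries above note, a reboot is required before new ARC settings take effect.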
v126.96.36.19965 (November 5th 2013)
- adds support for parallelized Backup Policies which can ingest data from any NFS/CIFS sources to Network Shares
- adds tab completion for qs CLI
- adds VMware EUI column and Storage Volume property to web management interface
- adds 'Mark Disk as Good' dialog to simplify disk replacement with MegaRAID controllers
- adds 'qs-util megalsiget' command to assist with LSI support information requests
- fixes backup policy expired job cleanup
- fixes cciss / HP smart array device discovery issue when used with ZFS
- fixes MegaRAID issue for embedded LSI ROC chips which have no serial number
- fixes password dialog error message to say length isn't between 6 and 40 chars
- fixes issues with enclosure layout view and pop-up menu in web management interface
- fixes MegaRAID patrol read warnings to be informational (use 'qs-util megaccsetup' to setup proper cron job for MegaRAID controller scans)
- fixes pool percent provisioned property to exclude the thin-provisioned space of snapshots
v188.8.131.5226 (October 21st 2013) DRIVER UPDATE AVAILABLE - REBOOT REQUIRED
Note: if you upgrade the SCSI target driver package it will stop any ZFS pools and require an immediate reboot after the install completes. Sorry for the inconvenience with this update. Note also that you can optionally upgrade the core service and manager packages without a reboot, then upgrade the iSCSI target package later when you have an available maintenance window.
- adds core service optimizations to further reduce CPU utilization
- adds automatic backup of MegaRAID controller config data
- adds new pwalk utility for parallelized backup
- adds qs-zconvert utility which simplifies importing & converting foreign ZFS pools and ZVOLs into QuantaStor
- adds qs-util helper utility with common maintenance commands for megaraid, networking, etc
- fix for MegaRAID Patrol Read Aborted warnings by running 'sudo qs-util megaccsetup', which reschedules the LD consistency check so the patrol read and the consistency check run at different times
- fix for BBU and cache discovery logic for HP P800 controller module
- fix to zpoolfence causing issues with automatic ZFS pool startup at boot time
- fix for gluster volume discovery
v184.108.40.20661 (October 5th, 2013)
- adds CIFS/NFS support for scale-out NAS (Gluster)
- adds 'qs-showlog -e' option which just shows any errors or warnings in the log
- adds additional network configuration information into 'qs-sendlogs'
- adds preferred network port option for grid communication
- adds usage info and ZFS iostat, ZIL and ARC stats to 'qs-iostat'
- adds support for Gluster 3.4
- adds ability to choose RAID group leg size in pool create
- adds support for RAIDZ10, RAIDZ20
- adds qs-crm script to assist with HA triage tasks
- adds hadoop-install script to support latest CloudEra Hadoop
- adds HA module support configurable via /etc/qs_ha_modules.conf
- adds support to control which target drivers are loaded via /etc/iscsi-target.modules
- adds corosync and pacemaker packages for HA support and automatic grid IP failover
- adds fix for bad 'Missing Physical Disk' error
- adds CIFS User Access tab to the web management interface in the Network Shares section
- adds alpha-level support for HA failover using HBAs
- adds SAS HBA discovery module
- adds mcelog package
- fix to LSI MegaRAID hardware event discovery
- fix to pool discovery logic for failed pool devices
- fix so you can set IP to 0.0.0.0 even if this is also set on another port
- fix for ZFS storage pool import to prefer the resilient /dev/disk/by-id rather than /dev/disk/by-path
- fix to allow enabling network shares when no NFS access is present (only CIFS)
- fix for hardware unit to hardware disk association
- fix to pool rescan logic to prevent auto-import unless the UUID of the pool is specified in /etc/qs_zpool_autoimport
- fix remove cache for mirrored ZIL
- fix to AD Domain Join logic to verify hostname is less than or equal to 15 characters in length for netbios compatibility
- fix for zfs pool manager so the discovery cycle fast-detects status and configuration changes
- fix to service startup message to be clearer
- fix for volume resize so that the iSCSI session is not dropped
- optimization, changes default swappiness to 10 for better performance and turns off unused services like postfix
v220.127.116.1178 (August 20th, 2013)
- fix for ZFS storage volume resize
- fix for ZFS log device detection
- fix for User Create dialog to require Password + Repeat Password to ensure correctly entered passwords
- added support for /etc/exports.custom so that custom NFS mounts could be specified and not overwritten by core service
- added custom script call-outs for pool start/stop. Place scripts in these locations to do custom actions at pool start/stop
- added custom script call-out for post system startup. This is called once per system boot.
v18.104.22.16852 (August 8th, 2013)
- fix to role creation
- fix to session synchronization in grid environments
- added support for disabling iSCSI redirection via stanza in /etc/quantastor.conf
- fix to allow many-to-many relationship between hardware disks and units
- fix for volume snapshot to skip freespace check as ZFS snapshots are thin-provisioned
- added color highlighting to hot-spares in Enclosure View
- disabled SAS switch manager tab
- enhanced pool delete to require a force option when iSCSI sessions are present
- adds default /etc/apt/preferences.default configuration which allows for non-mainline packages from ubuntu
- adds support for monitoring ZFS pool scrub
- adds dialogs to web interface to start/stop scrub
- adds zpoolscrub command which can be used to set up an automatic monthly scrub on the last Saturday of the month via 'sudo zpoolscrub --cron'
- adds cli support for creating custom RBAC roles using wildcards, for example:
- qs role-add "Volume Administrator" --permissions=*:view,StorageVolume:*,NetworkShare:*
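A sketch of how a wildcard permission string like the role-add example above could be evaluated: each comma-separated `type:operation` entry matches when both halves are either `*` or an exact match. The matching logic here is illustrative, not QuantaStor's actual RBAC code:

```shell
set -f   # keep '*' entries literal (disable filename globbing)
perms='*:view,StorageVolume:*,NetworkShare:*'

permitted() {  # permitted <object-type> <operation>
    for entry in $(printf '%s' "$perms" | tr ',' ' '); do
        t=${entry%%:*}   # object type half
        o=${entry##*:}   # operation half
        if { [ "$t" = '*' ] || [ "$t" = "$1" ]; } && \
           { [ "$o" = '*' ] || [ "$o" = "$2" ]; }; then
            return 0
        fi
    done
    return 1
}

permitted StorageVolume delete && a=allowed || a=denied   # StorageVolume:* matches
permitted StoragePool   delete && b=allowed || b=denied   # only *:view applies, so denied
echo "$a $b"
```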
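For the monthly last-Saturday scrub mentioned above, a cron job can simply fire every Saturday and let the script decide whether it is the month's last one: the day-of-month must fall within the final seven days. The date test below is an illustrative sketch, not the zpoolscrub implementation:

```shell
# Hypothetical helper: true only on the last Saturday of the month.
# Takes explicit values (weekday 1-7 with Sat=6, day-of-month, days in month)
# so the logic is testable without depending on today's date.
is_last_saturday() {
    [ "$1" -eq 6 ] && [ "$2" -gt $(( $3 - 7 )) ]
}

is_last_saturday 6 28 31 && r1=yes || r1=no   # Sat the 28th of a 31-day month: last
is_last_saturday 6 21 31 && r2=yes || r2=no   # Sat the 21st: a later Saturday exists
echo "$r1 $r2"
```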
v22.214.171.12411 (August 1st, 2013)
- adds support for volume replica rollback for easy DR failback
- adds detection for when a replica checkpoint volume is in use (has iSCSI session) and auto marks it as an 'Active Replica Checkpoint'
- this flag can be toggled using the 'Modify Storage Volume' dialog
- when set all replica operations to the active checkpoint are blocked to protect the data
- adds enclosure view which allows you to view the layout of disks within the chassis and their state. This greatly simplifies drive replacement.
- custom enclosure/chassis drive layouts can be defined in /etc/qs_enclosure_layout.conf
- adds support for developing custom cloud providers for QuantaStor's cloud backup system which can backup to OpenStack SWIFT based object storage
- custom providers are registered using the /etc/qs_cloud_providers.conf file
- adds zfscleanupsnaps helper/maintenance script to cleanup orphaned snapshots which have no associated clone
- adds button to Upgrade Manager dialog to link to this ChangeLog page
v126.96.36.19980 (July 25th, 2013)
- fixed sorting by slot number
- fixed sorting by disk name
- fixed Create Unit to show the number of selected disks
- fixed Delete Unit so you can delete units with the boot flag (requires checking the 'Force' option) and added a confirmation check
- fixed table views to show alternate/custom name for controller
- fixed unit table to show 'Disks' which is the count of disks in that unit
- fixed issue with alert send using wrong sender name
- fixed replication schedules to allow replicating from zfs volumes with active iSCSI sessions
- adds dialog for multi-delete of network shares
- fixes RAIDz detection
- fixed corner case in pool creation dialog where the filesystem choices needed to update to include ZFS
- fixed snapshot schedule triggering logic to support schedules with a mix of volumes from different systems
v188.8.131.5265 (June 26th, 2013)
- fixes for multi-node ZFS remote-replication & replication schedules
- added initial support for enclosure layout selection
- fix to minor corner cases in grid synchronization logic
- fix to dns update logic
- adds support for enabling the write-back cache at the SCSI target driver level
- adds ALB and TLB bonding modes
- adds storage pool blacklist (/etc/qs_poolblacklist.conf) to exclude storage which should not be managed as a Quantastor pool
- fix for compression enable/disable on ZFS based pools
- fix to an alert that would spam in the web UI in some cases
- fix to zfs share rename
- fix to Adaptec 7xxx module for disk size discovery
- changed ZVOL block size policy to be adjustable via /etc/quantastor.conf
- adds check for volume snapshots to report a nice error if snapshots exist and the parent volume delete is requested
v184.108.40.20689 (June 26th, 2013)
- adds support for ZFS storage pools
- smart replication
- SSD write log device management (ZIL)
- SSD read cache device management (L2ARC)
- online grow pool with zero downtime
- smart cloning
- data and metadata-checksums
- cloning volumes and share to/from other storage pool types
- enhanced snapshot schedules
- added qs-iostat CLI command for monitoring performance
- added support for sharing with Avid MediaComposer using MediaHarmony samba module
- added avidlog and avidupdate commands for Media Harmony log viewing and auto-update of Avid bins when changes are made
- added support for Network Share cloning
- added grid column for showing % Provisioned on storage pools
- added SMART health PASS/FAIL reporting for MegaRAID disk devices
- fix for Adaptec discovery logic
- added Trigger Snapshot Schedule dialog for immediate activation of policy
- fix to set default rwx for samba shares
- fix to allow single node Gluster volume creation
- added optional lftp based replication
- fix for btrfs pool grow
- changed btrfs to 'Experimental' in web interface
- enhanced clone and restore operations to be sparse aware to recover unused space
- adds support for DRBD proxy
- fix for Chelsio 10GbE support
- fixed MegaRAID mark hot-spare to first mark disk as Good.
v220.127.116.1104 (May 6th, 2013)
- fix - setting default drbd replication verify protocol to crc32c
- fix - added logic to ensure volume change operations are blocked rather than attempted on offline pools
- fix - block internal database auto-backup to pool if it is in the process of being deleted
- fix - minor web UI spacing fixes
- fix - create storage system link between grid nodes doesn't require specifying password if you're logged in as admin
- fix - adjusted host group add/remove host logic to resolve a corner case where storage volume access control wasn't properly updated
- fix - web UI was showing ZFS and HA support when these features are in beta, these are removed
- fix - fixed apt-get upgrade so that you can run it safely without it stomping on the QuantaStor kernel
- fix - re-added new Adaptec 7xxx CLI
- fix - fix to DRBD reconnect/demote logic
- fix - fixed disable snapshot schedule so that they're not triggered when disabled
- fix - added logic to drop sessions on volume resize so that the initiator will relogin and discover the new volume size
- fix - added option to change the default replication account from qadmin to something else
- fix - added logic to ensure grid doesn't use the loopback interface / 127.x.x.x for peer communication
- fix - added script to change the device timeout settings for Adaptec RAID controller compatibility
- fix - adds support for external host name for grid communication for systems with separate internal/external hostname/IPs
- fix - fixes blank entries from being created in smb.conf which would cause warnings
- fix - set/change user password doesn't require old password if you're logged-in as admin
- fix - fixes to hard password reset logic 'qs_service --reset-password=XXX', duplicate admin account was being created
- added option to restart Samba and/or NFS and not just NFS in web UI
v18.104.22.16885 (April 29th, 2013)
- maintenance upgrade for v2 backports some fixes and enhancements from v3
- adds Storage Pool Replication verify dialog to web interface
- replica verification should be run periodically and does a checksum of all the blocks on the primary pool and verifies the secondary pool is correct
- adds performance enhancement to clone operation (~5x-10x faster) and makes it sparse aware
- fixes issue with grid creation in web UI when the default 'password' is used
- fixes issue with pool demote logic for Storage Pool Replication
v22.214.171.12490 (April 2nd, 2013)
- boosts clone performance by 10x and maintains sparseness of the copy so its utilization level is the same as the source's
- fixes ISO installation issue that was causing the installer to hang during the last 'preseed' stage.
- fixes issue with brick space utilization calc
- adds option to set alternate name on the HW enclosure
- fixes PAM configuration security problem
- fixes issue with StorageLink ID generation
- fixes cloned volume free space reporting
- adds install logging to preseed to write data into /var/log/qs_install.log
- adds new script to do a quick service status check qs_status.sh
- adds -f option to qs_showlog.sh for tailing the service log
- fixes issue with gluster brick discovery
- fixes PAM password authentication issue / defines default PAM configuration for common-auth and common-password
- fixes ISO installer/preseed bug
- fixes gluster context and toolbar menu
- fixes dynamic gluster peer attach issue
- fixes gluster re-ID process to auto-restart gluster service after changing the internal peer ID
- adds option to modify hardware enclosure name/description
- adds dialogs for rebalance gluster volume and add-bricks to gluster volume
- adds detail on internal services status
- adds dialog for restarting gluster service
- adds UI support for gluster NFS mode
v126.96.36.19942 (March 16th, 2013)
- fixes local CIFS / SMB user password issue, password was not set correctly at time of new user creation
- fixes printcap related logging errors seen in Samba log.smbd, new version disables Samba printer config
- adds automatic btrfs rebalancing and associated rebalancing options to quantastor.conf file
- fixes problem where changing the IP address on a port was requiring disable/enable iSCSI to reenable iSCSI access after changes
- fixes bonding logic issues and an infinite loop bug in the base Ubuntu ifenslave script
- fixes postfix log error messages by adding an empty main.cf config
- adds quantastor User-Agent setting to Librato metric post operations
- fixes ActiveDirectory join domain to auto-rollback and report proper error if the domain join does not succeed
- fixes bug in DNS domain search suffix where it would get duplicated
- fixes bug in DNS configuration where UI would not reflect accurate DNS configuration after removing entries
v188.8.131.5221 (March 12th, 2013)
- fixes CIFS support regression introduced by new Ubuntu 12.04.2 base
- adds MediaHarmony support to CIFS for multi-client Avid editing
- fixes force flag so you can resize while volume is in use. iSCSI session relogin is required.
- fixes to new AD support
v184.108.40.20606 (March 4th, 2013) 3.8 KERNEL UPGRADE AVAILABLE
- adds support for Adaptec 7000 Series RAID controllers
- enhanced CIFS support with ActiveDirectory integration
- adds qs_sendlogs and qs_upgrade scripts for CLI upgrades and log reports
- adds new IO profiles for media production
- adds support for custom tags/names for hardware RAID units, disks, and controllers
- fix to report error when trying to grow software RAID10/RAID1 which isn't supported by mdadm layer
- adds new option to silence alarms on all controllers
- various fixes for grid management plus updated menu in web manager for grid management
- enhanced license activation logic to verify DNS and gateway configuration settings
- fix password check on add grid node
- added force flag option to Resize Volume dialog
- added logic to auto-disable the HDDs write cache when you create a MegaRAID unit
- added support for custom alert handlers
- adds ethtool, ipmitool, and winbind to standard install
- fix for disk identification of KVM/proxmox virtual disk devices
- adds ATA over Ethernet driver
- adds latest Adaptec 7000 driver
- adds STEC PCIe support and latest driver
- added logic to block Gluster volume creation on pools with DR async replication links
- adds support for Windows shadowcopy / snapshot support with CIFS shares
- upgrades Linux kernel to v3.8
- upgrades SCST drivers to latest v3 release
v220.127.116.1121 (January 12th, 2013)
- adds CIFS support
- fix to network share rename cleanup
- fix to tomcat restart logic
- various fixes to pool remote-replication failover
- fix to bonded port create, now cycles the port down and up beforehand
- improved grid sync performance
v18.104.22.16895 (January 5th, 2013)
- fixes Upgrade Manager, to upgrade from previous releases you must upgrade from the command line with these commands:
- sudo apt-get update
- sudo apt-get install qstormanager qstorservice
- various fixes to pool remote-replication failover
- enhanced grid synchronization performance
- adds support for Nytro MegaRAID
- minor enhancements to qs CLI
- fix to cloud container creation
- fix to license check corner case
v22.214.171.12467 (December 2nd, 2012) Official v3 Release
- adds Gluster support
- fixes licensing check issue in grid configurations
- fixes alert propagation issue in grid configurations
- qs cli improvements
- adds help search with partial command names like so 'qs help=glu'
- adds support for loading login credentials eg: 'localhost,admin,password' into ~/.qs.cnf
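The ~/.qs.cnf credential line above is a single comma-separated `host,user,password` triple. A sketch of parsing it, with a temp file standing in for the real path:

```shell
# Stand-in for ~/.qs.cnf; the variable names here are illustrative.
CNF=$(mktemp)
echo 'localhost,admin,password' > "$CNF"

# Split the first line on commas into host / user / password fields
IFS=, read -r qs_host qs_user qs_pass < "$CNF"
echo "$qs_host $qs_user"
rm -f "$CNF"
```

With the credentials loaded this way, qs CLI invocations can omit the explicit login arguments.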
v126.96.36.19932 (November 24th, 2012) v3 Release Candidate
- fixed 4TB limit on pool remote-replication, now there is no limit.
- adds VLAN interface support, see 'Create VLAN Interface', 'Delete VLAN Interface' in web manager
- upgraded web interface graphics
- fixed icons for pool replication
- added initial support for Gluster, this will be officially supported in v3.1
- added samba to installation process, this will be officially supported in v3.1
v188.8.131.5268 (December 2nd, 2012)
- fixed 4TB limit on pool remote-replication, now there is no limit.
- adds VLAN interface support, see 'Create VLAN Interface', 'Delete VLAN Interface' in web manager
- adds improved volume/share level remote replication
- communicates better progress information
- replication process can now be cancelled at any time
- replication is not interrupted by an upgrade or restart of the QuantaStor core service
- fixed problem with virtual interface floating
- fixes licensing check issue in grid configurations
- fixes alert propagation issue in grid configurations
v184.108.40.20652 (November 2nd, 2012)
- fix to automatically update IP addresses in DRBD resource files on source and target side
- fix to volume naming on DRBD secondary
- fix to upgrade process messages
- fix for DNS issue, resolvconf needed to be restarted after config changes
- fix to DRBD based pool resync logic
- fix to storage pool start ( no longer needs --update-summaries in array startup )
- adds support to allow cancelling Storage Volume clone operations / tasks while in progress
- adds HP SmartArray management support
- removed old btrfs warnings from web interface
- changed btrfs default block size to 32K and made it customizable via /etc/quantastor.conf
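A minimal sketch of the block-size override described above; the key name below is a hypothetical assumption, since the entry states only that the value is customizable via /etc/quantastor.conf:

```shell
# Written beside the real /etc/quantastor.conf path so nothing is
# overwritten; the key name 'btrfsBlockSize' is hypothetical.
cat > quantastor.conf.example <<'EOF'
btrfsBlockSize=32K
EOF
grep 'btrfsBlockSize' quantastor.conf.example
```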
v220.127.116.1142 (October 29th, 2012)
- fix to DRBD based pool startup logic
- fix to DRBD based pool resync logic
- adds support to allow cancelling Storage Volume clone operations / tasks while in progress
v18.104.22.16826 RC1 (October 22nd, 2012)
- based on Ubuntu 12.04.1
- incorporates all enhancements made up through QuantaStor v2.7.2
- fixes bug in virtual interface (vif) float logic so that when you explicitly disable a vif it won't try to float it.
- fixes NIC bonding issues found in v3.0.0 Tech Preview
- fixes minor bug in Chrome browser support
- fix to apply DNS changes to resolvconf base file
- fixes bonded port speed to show the sum of the speed of slave ports
- upgrades kernel to v3.5.5
- upgrades SCST driver support to v3.0.0
- expanded browser support for the latest browsers (Chrome, Firefox, Opera, IE, etc.)
- adds support for German locale, just add /?locale=de to the URL when you connect to the QuantaStor Manager web management interface
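For example, assuming a placeholder host name, the German-locale URL from the entry above would look like:

```shell
# The /?locale=de parameter is from the entry above; the host name is
# a placeholder, not a real system.
QS_HOST="quantastor.example.com"
echo "http://${QS_HOST}/?locale=de" | tee locale-url.txt
```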
v22.214.171.12412 (October 22nd, 2012)
- fixes issue with preseed that was causing boot issues after installation
- adds array and hot-spare management support for HP SmartArray RAID controllers
v126.96.36.19934 (October 2nd, 2012)
- adds support for pool level remote replication
- adds support for Citrix StorageLink with XenServer 6.0 & 6.1
- adds qstorutil.py to simplify StorageLink management operations
- adds REST API support on port 8153
- changed default boot filesystem type to ext3 so that QuantaStor can be para-virtualized under XenServer
- fix for MegaRAID discovery logic (was creating zombie processes)
- fix to storage system ID field on Alert
- fix to rare database lock issue
- expanded CLI command support
- various fixes to CLI commands
- added preferred LUN number logic for multi-protocol VMware configurations
- fix to tomcat initd script
v188.8.131.5204 (July 23rd, 2012)
- fixes to init.d scripts for tomcat and core service ensures proper restart/stop operations
- adds patch to Cloud Container support for SoftLayer private cloud access within the SoftLayer private network
- adds y-axis labels to Librato metrics posts
- fix to BBU status
- adds LSI CacheVault support
- adds Brocade 1020 CNA network card detection
v184.108.40.20679 (July 6th, 2012)
- adds support for SoftLayer Cloud Storage
- adds support for metrics posting to Librato Metrics
- adds support for PagerDuty.com
- adds support for MegaRAID RAID10 units >16 disks
- adds support for cloning volumes while in use w/ force flag
- various fixes to grid scale-out management support
- upgrade from Tomcat 6 to Tomcat 7 for web management interface web server
- adds check to block volume resize when storage is assigned to XenServer hosts
- fix to UI for replicate network share
v220.127.116.1122 Tech Preview (May 6th, 2012)
- This version is based on the same code base as v18.104.22.16822 but is built on top of Ubuntu Server 12.04 LTS and the SCST v3.0 driver.
v22.214.171.12422 (March 28th, 2012)
- adds support for scale-out management
- adds support for HA failover groups utilizing integrated LSI SAS 6160 switch management for storage motion
- adds support for NFS shares backed by a cloud container / infinite S3 storage network shares
- adds support for Infiniband / Mellanox controllers
- adds support for LSI CacheCade SSD cache management
- adds support for Adaptec MaxCache and Adaptec 6xxx series RAID controllers
- adds logic to automatically update the system clock via ntp.ubuntu.com twice daily
- adds support for i2o device discovery
- adds support for Adaptec enclosure discovery
- adds SAS switch management module for LSI 616x
- adds close window button to upper-right corner of dialogs
- adds support for toggling the hot spare flag with Adaptec disk devices using the Mark Hot Spare dialog.
- adds integrated online help documentation for most dialogs via [?] button
- adds check for FC/IB controllers and removes tabs if none are present.
- adds Apply button to alert manager dialog
- adds system memory check
- adds 500GB of flexibility to license key check
- adds core services and tomcat server monitoring
- enhancement to temporarily shutdown FC ports on storage pool stop
- fix to exclude spares from unit create/grow dialogs
- fix to iSCSI target driver status detection logic
- fix to 3ware discovery logic to exclude non-existent units reported as INOPERABLE
- fix to tomcat service start/stop logic, adds status
- fixup of various web interface grid layouts/sizing
- upgrades to version 1.9 of s3ql filesystem
- fix to Modify Storage System so that DNS entries are not removed
- fix to hardware event detection logic to prevent events from being re-raised
- fixes network interface management terminology to make it consistent
- fixes bug in 3ware module flush/rescan operations
- fixes default CLI output so that nested objects are not printed (was constantly scrolling off the screen), use --verbose now for that
- adds support for setting the bonding policy
- fixed BBU discovery for 3ware
- fix to pool grow logic
- fixed storage volume assignment logic to remove iSCSI assignment by IP address. Only IQN is supported.
- fixed vendor/model information on network ports
- fixed ellipsis on context menus to be ...
- fix to send alert email script so that it logs to /var/log/qs_service.log
- fixed up HP ACU CLI install script
- fix to show "Linear" software RAID type in management interface rather than "RAID2"
v126.96.36.19979 (Nov 28th, 2011)
- adds send log report command to web interface
- adds support for hardware RAID unit grow operation (LSI MegaRAID & Adaptec)
- adds the Linear type Storage Pool layout
- adds support for dynamic Storage Pool grow(concatenate) & expand operations for single disk RAID0 and Linear types
- fixes license TB utilization calculation
- adds support for virtual network interfaces, virtual interfaces allow you to create virtual network ports on different networks
- adds support for virtual network interface floating. In the event that the physical port that the virtual interface is attached to goes offline, the virtual interface will automatically float to another online/active physical network interface. Virtual interfaces can also be pinned to specific physical network ports.
- adds link state and link speed information to target port / network interface information
- added hardware unit busy state to indicate when a relayout/build is active
- added logic to pre-populate create virtual interface / create bonded interface dialog with subnet/network
- fix create/grow unit dialog to filter disk selection based on selected hw controller
- fixes create/modify cloud backup schedule dialog so that you can change the selected cloud container
- fixes add cloud provider credentials so that you can have an amazon secret key that includes symbols like '+'
- adds CLI command for upgrade 'qs upgrade' so that you can remotely upgrade quantastor instances
- fixes various localized text for Japanese; add /?locale=ja to the URL to see the web interface in Japanese
- fixes various cloud backup volume object state issues and adds automatic metadata and data cache flushing at the end of each volume backup
- fixes cloud container activation logic and upgrades QuantaStor to s3ql v1.0.1
- minor change to the ordering of stack items in the web interface
- adds support for MegaRAID 9265 & 9285 controllers
- adds hardware controller event filtering to remove nonsensical information and limits the event display to the last 50 messages plus any warnings/errors up to 100 hardware event messages total
- adds initial API and model changes for grid management support
- adds battery backup unit status information and detail to the HW controller properties
- adds NFS export path information to the table view for Network Shares
- disables browser caching so that the browser will load a new version of QuantaStor Manager on reconnect rather than using the cached version
- optimized RAID50 and RAID60 hardware unit creation
- adds IO profiles to Storage Pool creation
- fix to /etc/init.d/iptables configuration script to allow connections on QuantaStor web and SOAP ports
- fix to Recovery Manager logic to allow recovery of bonded network port configurations
- removed bad tooltips from various web dialogs
- added password reset logic so that you can reset the admin password using qs_service --reset-password=mynewpassword
- added iSCSI redirect logic for grid support
- created qscli debian package and posted on Downloads page
- enhanced the qs_bug_report.sh script to auto upload bug report
- fixed web interface to show volume and pool size in both TB and TiB
- adds support for MegaRAID 9265 and 9285 controllers
- improved web interface support for MegaRAID operations (silence alarm, import foreign units)
- adds MegaRAID and 3ware drivers to installer to assist with installing Quantastor onto a hardware RAID1 mirror
- adds hardware controller detection logic to base activation of a plugin based on the hardware driver being loaded and the availability of the hardware vendor's API/CLI
- qs Command Line Utility now supports having a QS_SERVER environment variable so that credentials like --server=localhost,admin,password do not have to be specified for every command
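A quick sketch of the QS_SERVER convenience described above; the variable name and credential format come from the entry itself, and the follow-on qs command is shown only as a comment since it requires a live system:

```shell
# Set once per shell session instead of passing --server on every command.
export QS_SERVER=localhost,admin,password
echo "QS_SERVER is set to: $QS_SERVER" | tee qs-env.txt
# Subsequent commands can then omit credentials, e.g. (not run here):
#   qs pool-list
```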
- minor fix to EULA dialog
- minor fix to menu popup so that the minimal menu shows when the "No storage pools detected." message is in the tree.
- fix to network share quota support
- includes upgraded driver - ixgbe v3.4.24 - Adds support for the latest Intel 10GbE NICs
- includes upgraded driver - megaraid_sas v5.30 - Adds support for the 92x5 series
- includes upgraded driver - aacraid v1.1-7 - Adds support for the Adaptec 6xxx series
- adds Japanese localization
- enhanced Storage System Link so that user can specify a remote IP address for replication to remote WAN locations in the cloud
- adds additional license checks to the web interface
- fixes memory leak in the remote replication monitoring logic
- renamed the xfs pool type from 'Archive' to 'Enterprise'
- changes the default storage pool type to 'Enterprise (XFS)'
- adds additional xfs tuning parameters to address fragmentation
- adds hw controller discovery dialog to preemptively rediscover/rescan controllers
- adds OEM Cloud Edition license management support
- adds RAID10/50/60 support for Adaptec controllers
- certification completed for Adaptec 5805 series controllers
- adds async operation support for NFS restart and License management operations
- various enhancements to the CLI to support scripted automation of new system deployment
- adds item "No physical disks detected" to tree in web interface when no disks are detected. Similar items added for volumes, pools, hw controllers.
- fixes licensing issue with Cloud Backup feature
- fixes lsi 3ware installation script
- adds NFS support to Community Edition
- summary: QuantaStor v2.0.4 adds support for IO profiles, remote-replication of network shares and volumes residing in any pool type, and additional Adaptec hardware support.
- adds CLI support for hardware management commands
- adds Japanese localization
- adds support for remote replication of network shares
- adds support for replication of non-btrfs volumes and shares
- adds icons and dialog changes for network share replication
- adds support for adaptec single disk / JBOD type units
- adds 'use user CHAP' setting as the default in the UI
- adds support for Storage Pool IO profiles, custom IO profiles can be created in /etc/qs_io_profiles.conf
- adds 'disk-list=ALL' option to storage pool-create command using the qs CLI. This will auto-select all available non-utilized disks.
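Custom IO profiles are file-driven per the entries above; the following is a hypothetical sketch of a profile stanza — the changelog names only the file /etc/qs_io_profiles.conf, so the section and key names here are assumptions:

```shell
# Written beside the real /etc/qs_io_profiles.conf path so nothing is
# overwritten; the stanza layout below is a hypothetical illustration.
cat > qs_io_profiles.conf.example <<'EOF'
[backup-example]
name=backup-example
description=Hypothetical custom IO profile
EOF
grep -c '=' qs_io_profiles.conf.example
```

A storage pool could then reference such a profile at creation time, e.g. alongside the 'disk-list=ALL' pool-create option noted above.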
- summary: QuantaStor v2.0.2 adds support for Adaptec controllers, cloud licensing and includes some minor maintenance fixes.
- fixes issue with network interface changes to ports with gateway set
- adds icon to identify spare hardware units
- fixes issue where duplicate disks were shown under a hardware unit after it is created
- fixes RAID level correlation / Storage Pool now shows hardware RAID level if hardware RAID is used
- fixed web interface rendering bug seen in Firefox 4.
- added logic to flush the write buffer cache of hardware RAID controllers automatically when storage pools are stopped. (also part of system shutdown)
- fixes pool identification logic to route 'flash led' request to hardware controller as necessary
- adds logic to discover boot/system physical disks but filter them from the web interface. This information is used to identify the hardware RAID unit on which the core QuantaStor OS is installed so that we can prevent users from deleting it.
- improved correlation of physical disks with hardware RAID units
- adds support for OEM edition branding
- adds support for Adaptec 5405 controller
- adds flag to allow network configuration recovery as part of database rollback
- adds support for network share client modify commands to CLI
- summary: QuantaStor v2.0.1 adds support for NFSv4 and includes some minor maintenance fixes.
- adds support for NFSv4
- adds simplified NFS export path format which eliminates the full storage pool GUID from the path
- adds customizable ARP filtering policy setting to Modify Storage System dialog
- adds NFS configuration dialog so you can set the system to NFSv3 or NFSv4 mode
- fixes ribbon bar browser layout issue with IE
- fixes bug in CLI command system-shutdown
- fixes Windows CLI so you can run it from anywhere rather than just from the CLI installed location
- fixes issue with Add Host dialog where host entries tied to a bad DNS lookup would get an invalid IP address associated with them
- fixes for Citrix StorageLink 2.4 support
- fixes upgrade from 1.x to 2.x so that it can be done and the kernel upgrade is automatic
- fixes pool start logic to automatically re-enable network shares when a storage pool is brought back online
- fixes pool modify configuration to preserve the enable write optimizations flag which was getting cleared when a pool was start/stopped
- summary: QuantaStor v2.0 adds support for Fibre Channel via Qlogic 24xx and 25xx series adapters and adds integrated support for the LSI MegaRAID and DELL PERC families of RAID controllers.
- fixes invalid warning when scsi target driver is restarted
- adds install script for LSI MegaRAID CLI to /opt/osnexus/quantastor/raid-tools/lsimegaraid-install.sh
- adds plug-in support for LSI MegaRAID discovery and management
- adds support for Fibre Channel via Qlogic 24xx and 25xx cards
- add host/initiator dialogs updated to support FC initiators
- new kernel 2.6.35-27 with FC support and latest LSI MegaRAID driver
- multiple simultaneous hardware alerts are now coalesced into a single alert email
- enhanced storage volume create dialog to support partial reservation of space
- added additional RAID info message filtering
- fixes /dev/fd0 warning messages via floppy driver blacklist
- fixes make initrd issue with missing pango library
- fixes missing target ID field in UI which was always reporting 0
- cleanup of alert email to remove non-IP addresses like 'bond0' from the list.
- added logic to remove the replication and cloud backup tabs as per licensing
- SCST scsi target driver optimized for performance
- fixes corner case with IQN target ID generation with 'volname.x' overlapping with 'volname:x'
- fixes UI issue with partially expanded tree on User, Cloud, and Replication tabs
- adds XenServer to drop-down list of supported host types
- adds cloud container repair command to fixup container in cases where the system is shutdown while cloud replication is in progress
- adds network share enable/disable dialogs rather than using the network share modify dialog for this
- adds config file option to permanently enable/disable arp filtering. Default is auto where arp filtering is enabled only when NIC ports are bonded.
- adds IP Address to host modify dialog so this can be changed when the detected IP address is wrong.
- minor fix to export storage pool command
- minor UI fix to virtual port / bond creation
- minor UI fix to create volume dialog when pool has no free space
- cleanup of storage quota create dialog layout
- target IQN generation now includes part of the storage volume UUID
- fix to cloud container enable / repair logic
- adds logic to automatically remove completed tasks and old alerts (more than 50, by default)
- fixes CHAP authentication bug where old entries were not removed on reconfiguration of CHAP settings
- fixes Windows MPIO issue with t10 device descriptor format
- fixes license key activation error detail
- fix to db restore logic
- fix to service upgrade logic
- fix to modify cloud container dialog
- fixed issue with db rollback to a configuration database with ports configured with NIC bonding, new behavior is to flush the bond configuration.
- fixed issue with VMware physical disks being identified as XenServer
- web interface redesign to move tabs to the top
- adds hardware RAID device correlation from physical disk -> hardware unit -> hardware disk
- we now enable ARP filtering when NIC bonding / virtual ports are used
- adds 3ware support for unit create, unit delete, unit identify, disk mark spare, disk remove, and controller rescan
- fix up of web interface column widths
- adds alert severity to the subject line of emailed alerts
- summary: 2239 is a hot-fix release for some issues caught after the initial 1.5.0 release.
- fixes bug in session monitoring logic
- fixes bug in alert manager email generation
- fixes alert manager dialog so that the SNMP password is hidden text ******
- summary: QuantaStor 1.5 introduces Cloud Backup to Amazon S3 and adds support for SCSI-3PR which is needed for Hyper-V live migration
- fixes installer so that an internet connection is no longer required to install from CD/ISO
- fixes XFS thick provisioning w/large volumes
- fixes Cancel Task dialog to show proper "Are you sure?" text
- fixes Storage Volume Resize dialog max size setting for thick provisioning
- changes iSCSI driver from IET to SCST
- fixes 'View Share' dialog, missing space in mount path
- fixes text validation in dialogs so that the cursor doesn't jump to the end
- fixes upgrade logic to upgrade the target driver first and tomcat last
- fixes VMWare/XenServer virtual appliance device discovery so that devices are found without allow_unident=1 flag
- adds option to create/modify a pool to 'Enable Write Optimizations'. This uses the nobarrier filesystem option to leverage write optimizations in systems with battery backup units.
- adds new LSI 3ware operations to web interface: controller rescan, add hot spare, identify disk, identify unit
- adds JFS pool type (create via command line only)
- adds support for SCSI3 persistent reservations / Windows Hyper-V live migration
- adds cloud backup support (Amazon S3)
- adds a warning to BTRFS pool type creation, this pool type is not yet ready for production use
- adds support for FusionIO cards
- adds support for new XFS based 'Archive' storage pool type. The Standard (ext4) pools are limited to 16TB, which is sub-optimal for some disk-to-disk backup applications like Atempo, so the new 'Archive (xfs)' pool type with an 8EB limit solves that.
- adds automatic audit logging, see /var/log/qs_audit.log
- adds logic to expunge old complete or failed task entries automatically
- adds logic to do an automatic internal database backup anytime a new pool is created
- adds properties page for iSCSI session in web interface
- adds LSI 3ware logical unit discovery
- fixes size reporting on Standard (ext4) thin provisioned volumes
- fixes service update logic
- fixes bug that was preventing new user role creation
- fixes bug in target port modify dialog
- fixes to new SCST support beta
- fixes bug in modify chap user/password where they were reversed
- remote replication support officially released
- adds Upgrade Manager for upgrading QuantaStor via the web interface
- adds Recovery Manager for recovering the QuantaStor database via the web interface
- login screen uses default password 'password' if no password is specified
- additional i18n work to prepare for localized editions
- remote-replication beta updates
- improved connection state logic
- various enhancements to enable whole file transfers on initial push as well as options to disable this mode in quantastor.conf
- bandwidth fixes for storage system link dialogs
- fixed CHAP bug in web interface
- added device exclusion list option to /etc/quantastor.conf
- additional i18n work to prepare for localized editions
- fixed bug in PhysicalDisk query where invalid characters in the serialNumber field were causing SOAP API issues
- fixed installer to allow more control over partitioning
- fixed bug in filesystem resize logic
- adds support for HP P400 RAID controllers via hpacucli
- fixed up firmware level detection
- adds beta support for remote replication schedules
- adds beta support for remote replication
- adds dialog for modifying network share client entries
- extends Storage System modify dialog to allow setting the network domain and search suffix
- additional i18n work to prepare for localized editions
- added storage system ID reset when /etc/qs_reset_sysid is present
- upgraded base platform from Ubuntu Server 10.04 to Ubuntu Server 10.10
- adds support for NAS / network shares / NFSv3
- adds support for network share snapshots
- adds support for restoring from network share snapshot
- enhances snapshot schedules to include support for automatic network share snapshots
- added additional license checks
- added more detail to storage pool properties page
- added dynamic permission definition upgrade logic
- additional i18n work to prepare for German and Japanese localization
- fixed bug in pool free-space calculation
- fix to target port state, was showing offline in some cases where it was online
- fix to storage pool state, also showing offline in some cases where it was online
- added logic to skip the startup device arrival delay if the system has been up for more than a couple of minutes
- enhanced CLI to add network share management commands
- some optimization of web UI synchronization logic
- fixed up the send alert logic to handle SSL SMTP connections w/ retry logic
- added remote replication tab and initial code for replication schedules
- added support for changing DNS entries to the 'Modify Storage System' dialog
- added support for sending email with SMTP user/password
- fixed bug in license activate dialog to show license activation state
- added initial support for remote replication
- fixed memory leak in storage pool state and health monitor
- additional work on Japanese localization
- added properties pages for hardware enclosure / controller types
- added initial support for LSI 3ware 9690SA hardware discovery / alert integration
- added support for gathering tx/rx data per network adapter
- added file-system optimizations to reduce metadata footprint
- added support for Fusion IO device names
- fixed storage pool delete logic when deleting a pool with missing disk devices
- initial support for Japanese locale
- added option to QuantaStor Manager UI to allow one to 'force' a storage pool to start even when degraded
- improved device identification logic
- fixed minor bug in HP RAID controller support detection logic
- added support for large >2TB hardware RAID devices
- minor fix to storage pool create logic so that physical disks don't show the storage pool association as a GUID
- adds support for configuring ports as 'iSCSI enabled' so that ports can be designated as management only
- fixed physical disk discovery bug in XS VM
- adjusted read/transmit settings in iSCSI target driver to improve performance
- fixed storage pool start-up issue which would cause pools to be started as degraded or not at all. This was more common with large pools with 6 or more disks.
- fixed various edge conditions around create/delete virtual port
- fixed bug in port state discovery
- fixed bug where network access to QS server was not available after boot because networking needed to be restarted after port bonding was completed. Non-issue with systems not utilizing port bonding.
- added Mac OS X option to the host add/modify dialog
- minor fixes to volume cloning logic to support cloning across pool types
- minor fixes to alert email generation logic
- added support for automatic backup of QuantaStor database to /var/opt/osnexus/quantastor/osn.db.backup
- added initial changes for HP server / HP RAID controller support
- updated device ID collection logic to store scsi VDP 0x83 descriptor information for Physical Disk objects
- fixed bug in stripe-width/stride calculations for 'Standard' type pools
- fixed UI bug preventing role creation / RBAC security configuration changes
- fix to device discovery logic to support running QuantaStor in a VMWare ESX virtual machine
- fix to storage pool deletion logic to properly clear physical device to storage pool association
- added support for 'Standard' type storage pools which utilizes ext4 rather than btrfs as used with the 'Advanced' storage pool type.
- updated storage volume create dialog to include storage pool RAID level info
- updated QuantaStor kernel to be based off 2.6.35 to resolve ENOSPC edge conditions with Advanced storage pool type
- minor fix to storage pool diagram colors to improve contrast
- improved quantastor shutdown logic to bring down storage pools more cleanly
- updated storage pool partitioning logic to align storage pools to physical disk cylinder boundaries
- added filtering logic to prevent snapshot selection of volumes in Standard type storage pools