QuantaStor Version ChangeLog

ChangeLog

The change log contains a detailed summary of the changes made in each new release of QuantaStor. For information on how to upgrade your storage system, please see the Upgrade Guide.

Versioning System

QuantaStor version numbers have four (4) parts: a major number (M), minor number (N), maintenance update number (U), and build number (B), in the form M.N.U.B, such as 4.1.1.1050. QuantaStor upgrades go directly from whatever version you are running to the latest version with no interim steps; for example, if you are at v3.16 you will upgrade directly to the latest version, which may be v4.1.

Major Version Number

QuantaStor v4.x is the current Major release version.

Upgrading from QuantaStor 3.x releases can be done inline with minimal downtime and a scheduled work window to reboot for any new kernel and driver updates.

Legacy upgrades from QuantaStor 2.x releases require exporting any Storage Pools, reloading the OS with QuantaStor 4.x install media, importing the Storage Pools, and then restoring the configuration database using the Storage System Recovery Manager dialog.

Minor Version Number

The minor version number increments with each minor product update release of QuantaStor, which comes out every 2 - 4 months. Releases typically include a combination of new features and some maintenance fixes. Most releases can be applied with zero downtime and without a reboot, as they rarely include driver changes. If a release does require a reboot we mark it with a large "REBOOT REQUIRED" tag so that you can find an appropriate maintenance window in which to apply the upgrade. In general, reboots are only required when the qstortarget package has been upgraded, so if you see that a new version of that package is available, know that a reboot will be required to complete the upgrade (a quick way to check for a pending qstortarget update is sketched below).
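
QuantaStor runs on Ubuntu, so a pending qstortarget update can be checked with standard apt tooling before scheduling a maintenance window. This is a minimal sketch and assumes the package is published under the name qstortarget on your configured repository:

 # Refresh the package index, then simulate an upgrade (no changes are made)
 sudo apt-get update -qq
 # If a line beginning with 'Inst qstortarget' is printed, a reboot will be needed after upgrading
 sudo apt-get --simulate dist-upgrade | grep -i '^Inst qstortarget'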

Maintenance Update

If the release includes a maintenance version number like 4.1.1 or 4.1.2, it represents an update to address one or more support tickets. These updates generally do not contain new features, only fixes to address specific issues. Maintenance releases generally ship once a month.

Build Number

The build number can be largely ignored; it simply increments with each commit made to the source tree.

v4.4.2.004 (January 19th 2018)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.4.2.

ISO/DVD/USB Boot Install Image

Release Notes

High Availability VIF

  • Fixed an issue where Gluster and Site Cluster VIFs could not be manually failed over with the 'Move HA Virtual Interface' option in the WebUI.
  • Fixed an issue where a Gluster VIF would not automatically move to the next available active node in the event of a node failure.
Note: You will need to remove and recreate your Gluster VIF for this change to take effect.

Gluster

  • Fixed: Gluster Peer Setup now performs peer attach operations against the selected set of nodes.
  • Fixed an issue with the qs-util resetids command for resetting the Gluster service unique ID.

Remote Replication

  • Fixed an issue with the Remote Replicated volume target _chkpnt where it would not appear in the WebUI. This was a regression introduced with the 4.4.1 release. The fix renames the _chkpnt to have the correct UUID so that remote replication displays all associated target child snapshots and continues as expected.

SNMP

  • Added a new SnmpTrapType to the SNMP alertEntry (an example query is shown after these notes):
 OID: .1.3.6.1.4.1.39324.1.1.19.1.1.22
 Name: .iso.org.dod.internet.private.enterprises.osnexus.quantastor.sysStats.alert.alertTable.alertEntry.alert-SnmpTrapType
  • Updated SNMP MIB
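
The new trap type value can be read directly from a QuantaStor system's SNMP agent with standard net-snmp tooling. A minimal sketch, assuming SNMP v2c and a read community of 'public' (substitute your own host and community string):

 # Walk the alert-SnmpTrapType column of the alertTable using the OID from the note above
 snmpwalk -v2c -c public quantastor-host.example.com .1.3.6.1.4.1.39324.1.1.19.1.1.22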

Hardware Controllers and Enclosures

  • Fixed an issue with the disk warning alert where the count would report '0 of N' disks; the alert now reports the correct number of disks. If you see this alert after upgrading, it means that some number of disks are in a warning state due to SMART health or over-temperature alert states. Further investigation of disk health states can then be performed under the Hardware Controller > Disks section of the QuantaStor management interface to determine any corrective hardware actions that need to be taken.

Pass-thru Storage Volumes

  • Fixed a Management service crash that could occur with Pass-thru Storage Volumes presented via Fibre Channel.

v4.4.1.011 (December 19th 2017)

Upgrade Instructions

The 4.4.1 upgrade has been deprecated in favor of 4.4.2.

ISO/DVD/USB Boot Install Image

The 4.4.1 ISO has been deprecated in favor of 4.4.2.

Release Notes

Backup Policies

  • Updated: new Backup Policies have the Backup Concurrency set for Parallel Backup with 12 streams. Previously the default was Serialized Backups.

Ceph

  • Ceph packages updated to Jewel 10.2.10 for QuantaStor appliances running on Trusty.
  • Added status icons for Monitors, OSDs and other important items to the Ceph Dashboard. This provides a quick, at-a-glance health overview of the most important Ceph Scale-out Cluster items.
  • Added checks to ensure that Ceph nodes being added to an existing cluster are all running the same version.
  • Fixed the Health tooltip for the Ceph Cluster in the Ceph Dashboard to show brief and more useful information about the Ceph PG states.
  • Fixed: A newly created Ceph Cluster will show a status of Initializing and transition to Normal once all the Monitors specified during cluster create are online.
  • Fixed an issue with Ceph RBD Storage Volume to ensure that the corresponding iSCSI Target LUN size is also updated.
  • Fixed: Added additional validations to the Create Ceph Journal Dialog to limit the maximum partition size to 8.

CLI

Disk Management

  • Added filtering to Dialogs that list physical disk objects to filter out disks already in use in an active Pool Create, Grow, Add Spare or Add Cache device task.
  • Added support for Persistent Memory devices to be used as Physical Disks.
  • Fixed: Further optimization for the speed of Physical Disk Scan.
  • Fixed: Parallelized Encryption disk format to improve creation time for Storage Pools in larger configurations.
  • Fixed: Changed Dell MD3060e Enclosure to use SES standard for Enclosure discovery and management.
  • Fixed: Corrected the task description for disk identify when setting to on and off instead of duration.

Fibre Channel

  • Added: FC LUN IDs will now be allocated only on Host Assignment. Previously LUN IDs were allocated on Storage Volume object creation. When this upgrade is installed, all unassigned Storage Volumes and Snapshots will release their LUN IDs back to the unassigned pool. All Volumes currently assigned to hosts will retain their existing LUN ID assignments.
  • Added a checkbox to the Host/Volume assignment dialogs to release unused LUN IDs back to the unassigned pool. By default, LUN IDs are retained on the Storage Volume to allow for temporarily unassigning or changing host assignments while keeping the already assigned LUN ID.
  • Added: Storage Volumes that have no Host assignment will now show 'Unassigned' for the FC LUN property in the WebUI.
  • Fixed an issue with Storage Volume Resize setting a size not compatible with FC ALUA standby device initialization. All resize operations now round up to the nearest megabyte when a size is provided via the WebUI slider or the CLI --size argument in bytes.
  • Fixed an issue with VAAI Primitive Support on FC ALUA configurations.
  • Fixed: The LUN property for Storage Volumes is now labeled 'FC LUN' in the WebUI to clarify that the LUN IDs are Fibre Channel LUN IDs.

Storage Volumes

  • Fixed an issue where Resizing a Storage Volume would not be reflected to the FC or iSCSI target LUN object. This corrects a regression introduced in the 4.3.3 release.

Storage Pools

  • Fixed an issue with the disk format when Adding spares to a ZFS Storage Pool.

Web Manager

  • Added a new dashboard at the top of the Network Share section that quickly shows Network Share used space relative to Storage Pool used space.
  • Added: The right-click context menu Delete option for Storage Volumes and Network Shares now opens the multi-delete dialog with the Share or volume pre-selected.
  • Fixed an issue with Web browser support for IE and Firefox. This corrects a regression introduced in the 4.4.0 release.
  • Fixed: the Multi-OSD Create Dialog now has a larger section for the Journal selected list so that it always shows three or more entries.
  • Fixed: added some minor text clarification for the Storage Pool section of a system in the new Grid Dashboard.
  • Fixed: Selected items now persist through searches and filters in the Create Storage Pool, Format Physical Disk, Identify Hardware Disk and other dialogs.
  • Fixed: Dialogs that select Physical Disks now have counts labeled "Total", "Found" and "Selected" to help clarify the number of disks that were listed, searched for and selected. Total now always represents the total number of available disks on the system.
  • Fixed the spacing and default height for some of the Split sections of the Central Grid views to better support smaller screen resolutions.
  • Fixed: Combined some of the split sections in the Ceph Cluster section of the webUI to ensure that all important items are visible.
  • Fixed: Right clicking on an enclosure and choosing Modify Enclosure in the Enclosure View will now correctly bring up the specific enclosure you right-clicked on.

High Availability Failover

  • Added additional corner case protection for HA failover in the event both nodes of an HA pair are rebooted or lose power at the same time.
  • Fixed an issue where a cache, spare, or disk device newly added to a non-multipath configured HA pool could be marked as missing/unavailable after a failover.

Remote Replication

  • Added Estimated Time and Estimated Transfer to Remote Replication Reports for Replication tasks in the Synchronizing state.
  • Fixed an issue with Create Remote Replica for Network Shares that prevented full replication when a custom name was specified.
  • Fixed: Replication tasks for replicating Storage Volumes now show the Storage Volume name instead of the object ID.
  • Fixed: Added a check to Network Share Snapshot delete to verify that the Snapshot is not in use by a Replication or Snapshot Schedule or being retained for a retention requirement on the destination. You can use the force flag during the deletion to force deletion of the snapshot if required.
  • Fixed: The Replica Associations For Network Shares now correctly show text pertaining to 'Shares' in the Properties fields.
  • Fixed an issue with the Interval settings slider in the Snapshot, Remote Replication and Backup Policy Schedule dialogs where the slider would not initialize at the shown value.
  • Fixed an issue where the Remote Replica Associations would not appear in their central grid view.

Service Core

  • Fixed an issue where some network devices would be renamed on reboot. This was due to the devices not having a unique BiosDevName reported to the biosdevname kernel mapping logic; the devices are now read and enumerated based on their ifnames.
  • Updated MIB

v4.4.0.174 (November 13th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.4.0.

ISO/DVD/USB Boot Install Image

Release Notes

New Features

  • Added New Grid Dashboard tab to the Web Manager that provides a quick at a glance overview of Resource, Cluster, and System Health and Status for nodes in a QuantaStor Grid.
  • Added the qs-distupgrade script to provide Distribution Upgrade support for migrating QuantaStor appliances from 12.04 Precise to 14.04 Trusty. Note: for upgrading HA Clusters or Scale-Out configurations, please contact OSNEXUS support for assistance.

Active Directory

  • Fixed: User Access assignments will persist after Active Directory server configuration has been removed. Previously these settings would have been removed on leaving the Active Directory domain.
  • Fixed a rare case where a slow response from the Active Directory server could cause the join domain task to fail.

Remote Replication and Snapshots

  • Added features to Snapshot Schedules and Backup Policies to bring them in line with those already available in Remote Replication Schedules. This includes new interval based timers and Long Term Retention tagging and policies.
  • Fixed a filtering issue with Remote Replication and Snapshot schedules that could have limited the Volumes and Shares available for selection. Now only Volumes and Shares on the destination pool of a Remote Replication schedule are filtered out from the list of available replication sources.
  • Fixed an issue where nodes would sometimes report being out of sync during a remote replication.
  • Fixed: Increased the maximum retries and wait time for Remote Replication snapshot discovery to better support slower replication target systems.
  • Fixed: Corrected an issue where the Schedule ownership was not being set on a Remote Replication Association if the same association was used for Manual Remote replication as well as via a Remote Replication Schedule.
  • Fixed: The associated Replication Task will now fail as expected when a replication process between nodes is terminated due to a network or system stability issue.
  • Fixed: The Remote Replication Task now provides better logic for the Information used by the Remote Replication Reports to show the start-times, end-times and replication speeds.
  • Fixed: Replica _chkpnt snapshots will now have their share parentId set correctly for their parent target _chkpnt of a remote replication.
  • Fixed: the Remote Replication task status will now correctly update with a failed status if the replication process is terminated due to a communication or stability problem on the source and/or replication target systems.
  • Fixed: Create Remote Replica Volume tasks will now correctly detect that there is no common snapshot for deltas between the source and target and fail the task. Previously the task would stay at 0% in a running status and never complete or fail.
  • Fixed: Replication Network Share snapshots on the source system could sometimes not be correctly associated with the Replication Schedule that created them. This is now fixed.
  • Fixed the descriptive and title text in the Snapshot Modify Dialog.

Ceph Scale-out Block and Object

  • Added new 'Enter/Exit Ceph Maintenance Mode' Dialog to allow Administrators to set the Maintenance Mode state on a Ceph Cluster.
  • Added the 'qs ceph-osd-replace-journal' command to allow Administrators to replace a Journal device in an OSD live. Note: The Ceph cluster must be in maintenance mode to allow this operation, and a restart of the OSD receiving the journal change will occur.
  • Added 'qs enter-system-maintenance' and 'qs exit-system-maintenance' CLI commands. This is currently implemented only to set Ceph Scale-out Block and Object clusters into maintenance mode but will be expanded to support maintenance mode management of other QuantaStor cluster/scale-out solutions (a usage sketch follows this list).
  • Fixed an issue where a Ceph Cluster Node that had RBD devices mapped via iSCSI could not be removed from the Ceph Cluster.
  • Fixed: The Ceph Cluster Dashboard will now correctly display if you select a Journal Device that is not assigned to an OSD.
  • Fixed: Selecting an object in a grid view in the Ceph Scale Out Block and Object section of the Web Manager will now select the parent object in the tree.
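
A minimal sketch of the maintenance-mode workflow described in the notes above. The release notes do not list the arguments each command accepts (for example a cluster or OSD identifier), so treat these invocations as a shape rather than a literal recipe and check the qs CLI help before use:

 # Put the scale-out cluster into maintenance mode before touching journal devices
 qs enter-system-maintenance
 # Replace the journal device for an OSD; the cluster must be in maintenance mode
 # and the OSD receiving the new journal will restart
 qs ceph-osd-replace-journal
 # Return the cluster to normal operation once the change is complete
 qs exit-system-maintenance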

Storage Pool

  • Added a larger 4GB zfs_dirty_data_max setting for systems with 16GB of RAM or more. For systems with less than 16GB of RAM, the default 1GB cache setting will be used (a quick way to verify these ZFS settings is sketched after this list).
  • Fixed: Create Storage Pool Tasks on Encrypted disks now provides a more detailed Task description while at the Encrypt Disks stage.
  • Added a status column to 'qs pool-list' to show the Storage Pools reported Health Status.
  • Added an alert recommending a change of sync policy to 'standard' if a ZIL configuration is removed from a ZFS Storage Pool that has sync=always configured. This ensures an expected level of performance can be maintained after removal of the high-IOPS ZIL SLOG (sync log) SSDs. A policy of sync=standard is generally recommended for all ZFS configurations unless advised otherwise by OSNEXUS Support for a specific use case and workload.
  • Fixed an issue where a spare device could be rediscovered and added back in after removal from an XFS pool.
  • Fixed: Alerts will correctly be triggered when the Storage Pool Used Capacity Thresholds are Exceeded.
  • Fixed: XFS Storage Pools will correctly show the add/remove hotspare context menu in the Storage Pool section for adding dedicated hot spares to XFS Software RAIDed pools.
  • Fixed: The Color Thresholds for Utilized % in the Storage Pool section of the Web Manager is now based off of the values defined in the Alert Manager Pool free space thresholds.
  • Fixed an issue where the RAID levels drop down in the Storage Pool Create Dialog would not always update to reflect the specific options available for the ZFS or XFS filesystem type chosen. This also corrects an issue where the RAID levels listed would not always update based on the number of physical disks in a selected system in the dialog.
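
A quick way to verify the two ZFS settings mentioned above from the console. This is a sketch that assumes a ZFS-based Storage Pool; replace 'qs-pool' with your pool's zpool/dataset name:

 # Show the current dirty data limit in bytes (4294967296 = 4GB on systems with 16GB+ of RAM)
 cat /sys/module/zfs/parameters/zfs_dirty_data_max
 # Show, and if recommended by the alert, change the sync policy on the pool
 zfs get sync qs-pool
 zfs set sync=standard qs-pool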

Networking

  • Added an additional LACP bonding mode that enables support for Layer 3+4 xmit_hash_policy. The original LACP mode is still identified as lacp via the CLI and will show 'LACP Layer 2' in the Web Manager.

Disk Management

  • Added: The Format Physical Disk Dialog now provides the ability to select multiple disks. The Disks listed are from the detected available disks on a Storage System that are not associated with a Storage Pool.
  • Added: The Hardware Disk Column is now visible by default for all dialogs that interact with Physical disk Objects.
  • Added a Search field to the Physical Disk Tree View. The disks can be searched and filtered based on Disk logical Name and Serial Number.
  • Added: Newly inserted Hardware disk devices will automatically trigger a Physical Disk Scan and be immediately ready for use.
  • Fixed: Increased the fs.aio-max-nr value to 1048576 by default to support larger initial multipath configurations (the current value can be checked as shown after this list).
  • Fixed: Added paging to the central grid pane in the Physical Disk View section of the WebUI.
  • Fixed: added further performance optimizations to the Physical Disk Section of the Web Manager.
  • Fixed: Further Enhancements to the Format Disk functionality.
  • Fixed: Text in the Physical Disk Grid view can be selected allowing for easier copy/paste of information.
  • Fixed: the SerialNo. field is now visible by default in the grid view.
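
The asynchronous I/O limit mentioned above can be confirmed on a running system with standard Linux sysctl; nothing QuantaStor specific is involved:

 # Display the current kernel AIO limit; expect 1048576 after this update
 sysctl fs.aio-max-nr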

Hardware Enclosures and Controllers

  • Added SMART health status in the Hardware Controllers and Enclosures section for disks connected to SAS HBA's.
  • Added: The Hardware Disk Identify Dialog in the Hardware Enclosures and Controllers section of the Web Manager now supports selecting multiple Hardware Disks at the same time.
  • Added: the Hardware Enclosure view in the Hardware Enclosures and Controllers section of the Web Manager will now display the pool each disk is associated with.
  • Added: The Mark/Unmark Hotspare Dialog in the Hardware Enclosures and Controllers section of the WebUI Now presents a dialog capable of performing multiple selections at once.
  • Fixed an issue where LSI 9400 series HBA's would not show disk temperatures or trigger overtemp alerts.
  • Fixed: The Enclosure grid view in the center pane will now update as expected based on the selected Hardware Controller in the Tree view.
  • Added support for newest HPE RAID controller management via ssacli 3.3.10.3.0
  • Fixed: Changed the storcli discovery support for SAS HBA's to only be enabled for LSI/Broadcom Branded 9400 series controllers.
  • Added further Search filtering criteria (Vendor, Product, Controller, Status) for the Hardware Disk Identify and Mark Hot Spare Dialogs.
  • Added: Hardware Disk Identify Dialog is now available as a right click context menu item for controller, enclosure and other items in the tree view of the Hardware Enclosures and Controllers section of the Web Manager.
  • Added: Hardware Disk Identify now supports 'On' and 'Off' modes in addition to the existing Duration mode previously offered. Now you can set a disk to indicate its location without having to race the clock.
  • Fixed an exception where the Hardware Controller->Create Raid Unit, Software Controller->Remove Adapter, and Software Controller->Scan Targets dialogs would not open if there were no available controller objects to perform actions on.
  • Fixed: added logic to handle a corner case where a failing Hardware RAID controller, or one in an errored state, would return a status of 'Failed' via its RAID utilities but still provide valid parsing data on the state of Enclosures, Disks and RAID units behind the controller. Previously QuantaStor trusted the 'Failed' status code returned from the card and stopped discovery at that stage.


Security

  • Added support for NIST 800-53 R4 AC2(2) compliant 'emergency' and 'temporary' account types. These new user types may be created via the qs CLI user-add command.
  • Additional NIST AU-12 Compliance:
    • QuantaStor v4.4 now outputs to the qs_audit.log file in CEE JSON reference profile conformant to RFC 4627.
  • Added: QuantaStor 4.4.0 now deploys SSL certificates on new installs with sha256-signed 2048-bit SSL keys. It is recommended that you upgrade all older deployments to use the newer SSL certificates using the 'qs-util cacertusedefault' command from the CLI (see the example after this list). If you have a deployment that needs to continue to use the older SSL certificates, they are still available by running 'qs-util cacertuselegacy'. All QuantaStor grid systems must be running the same type of signed keys for grid communication to function.
  • Added: System Swap space is now automatically encrypted when an Encrypted Storage Pool is created or detected on a system after updating to the 4.4.0 release.
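
A minimal sketch of moving to the newer certificates using only the commands named above; run the same command on every node, since all grid systems must use the same type of signed keys:

 # Switch this node to the new sha256-signed 2048-bit SSL certificate
 qs-util cacertusedefault
 # Or, if this deployment must keep the legacy certificate for now
 qs-util cacertuselegacy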

Web Manager

  • Added: Tree View Search Filter now returns results where child objects matched.
  • Added Further Localization enhancements to Dialog text.
  • Fixed an issue with some dialogs containing lots of columns, where you could not always scroll fully to the right.
  • Fixed: The Dialogs involving Physical Disk and Hardware Disk objects now show a disk total at the bottom of the dialog that reflects the number of disks found from the current search filter. If there is no search filter, it will show the total number of disks available in the system.
  • Fixed an issue where the Utilized % Column in the grid view would not show 100% Utilized when a Pool was full.
  • Fixed an issue where the Utilized % Column would not update as expected for Storage Volumes after a discovery cycle.
  • Fixed: The Add Grid Management Virtual IP Dialog now has the correct help web page link specific to creating a Grid Management Virtual IP.
  • Fixed: the right-click context menu for Gluster Volumes has been re-ordered to ensure a clearer order of possible operations.
  • Fixed: Updated the Workflow Manager help links and images in the Documentation wiki.

Service Core

  • Updated SNMP MIB
  • Fixed: Changes to the Password and Security Policies no longer fail if a node does not yet have an active QuantaStor License.
  • Fixed an issue where Task description text was not updating as intended for running tasks.
  • Fixed an issue where the sync policy setting on ZFS Network Shares was not being set if a user wanted something other than the default inherited from the ZFS Storage Pool's setting.
  • Fixed: Improved logging to qs_service.log for Active Directory join.
  • Added ZFS Event Daemon (ZED) events to rsyslog for event logging.
  • Fixed an issue where the ZFS Event Daemon (ZED) was not starting on startup. ZED will now start and be kept alive by the normal QuantaStor keep-alive scripts.

v4.3.3.016 (September 12 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.3.3.

ISO/DVD/USB Boot Install Image

Release Notes

Drivers

  • New SCST SCSI Target driver that allows iSCSI and FC ALUA LUNs to be presented from the same QuantaStor HA head nodes.

Cloud Containers

  • Increased the size of the stat cache used for S3FS based one-to-one Cloud Containers.
  • Fixed a problem where using multi delete volume would leave the cloud backup of a volume behind in the cloud container's mount directory.

High Availability Failover

  • Fixed: Added standby path ALUA Target Portal Group information to active node. This change accelerates path recovery for FC ALUA Storage Volumes.
  • Added support for FC ALUA for VMware clients.
  • Added HA Storage Pool support for disks and devices that provide unique device serial number identification via vendor identification in SCSI page 83.
  • Fixed an issue where encrypted disk devices may not be automatically opened on HA failover of an Encrypted Storage Pool without a passphrase.
  • Fixed: Improved management around HA virtual interface location constraints.


Hardware Enclosures and Controllers:

  • Added new Broadcom HBA discovery support for the new IT mode controller section in the storcli raid utility.
  • Fixed: Updated sas3ircu utility that corrects a system crash when a system has a Broadcom 9400 series controller installed.
  • Fixed an issue with RAID Unit creation on Areca Hardware RAID controllers.
  • Added support for latest HPE hpssacli 2.40.13.0
  • Added: Disk Temperatures now supported on HPE Hardware RAID and SAS HBA controllers
  • Added support for the SCSI Temperature field for disks on SAS HBA's.
  • Updated Broadcom storcli64 utility to version 007.0204.0000.0000 to support the latest LSI SAS HBA's and MegaRAID controllers.


Network Shares

  • Fixed the QS CLI share-disable and share-enable commands to return the accurate value indicating whether the share is active.
  • Fixed an issue with changing the description field in a Network Share Modify for shares and subshares.
  • Fixed: Share Quota support will now be greyed out in the WebUI dialogs for Network Shares created on Filesystems and Storage Pools that do not support quotas.
  • Added: Raised SMB limits to allow up to 1 million open files.
  • Fixed: Disabled ZFS only sync CLI flags in share-modify and share-create for non ZFS shares.


Remote Replication

  • Fixed an issue with share and volume associations to Remote Replication Schedule objects if a node is removed and re-added to the grid.
  • All nodes involved with the replication schedule need to be updated in the same downtime / maintenance to prevent ownership problems.
  • Resolved an issue with ownership of the Remote Replication Schedule object in a High Availability configuration.


Storage Volumes

  • Added 1KB, 2KB and 4KB block size support to ZFS Storage Volumes.
  • Fixed: Added additional locking protection to Storage volume multi-delete for mixed cloud XFS and ZFS deployments to ensure all volumes are removed as expected.


SCSI Target:

  • Fixed: Optimized SCSI target to only launch processes for mapped LUNs.


Security

  • Disabled SSH port forwarding in the sshd service.
  • Configured nginx to correctly send the X-Frame-Options header.


Web Manager

  • Added Internationalization support to the Configuration manager Dialog.
  • Fixed a problem where the login failure dialog was closing before the user could read it.


Storage Pools

  • Added support for more than one ZIL SLOG mirror per pool; with multiple mirrors the SLOG can provide higher performance by utilizing 4 or more SSD devices.
  • Added enclosure awareness for DELL MD* devices to Storage Pool disk device RAID redundancy balancing.
  • Fixed an issue with creating XFS Storage Pools on Multipath disk devices.
  • Fixed an issue with removing a spare disk from a XFS Storage Pool md RAID configuration.
  • Fixed: Greatly reduced the number of alerts triggered for a storage pool repair action when global spares are not configured.
  • Fixed: qs-pool-create will now generate a unique pool name when no pool name is provided.
  • Fixed: Clearer error messages for invalid pool names when creating a pool.


Service Core

  • Added new --secure=1 option to the qs-logreport command.
  • Added Samba audit logging feature for logging client access of Network Shares. This new option is available under the advanced tab of the Network Share Create and Modify Dialogs.
  • Removed telegraf statistic gathering for disk devices that are not being graphed.

v4.3.2.025 (August 3rd 2017) DRIVER UPGRADE AVAILABLE - REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.3.2.

ISO/DVD/USB Boot Install Image


Release Notes

Drivers

  • Updated hpsa driver 3.4.18-105
  • Updated mpt3sas driver 21.00.00.00
  • Added arcmsr driver v1.30.0X.27-20170206

Ceph

  • Fixed an issue where a failed Ceph OSD could cause other OSDs on the same node to fail to start.
  • Fixed: Added a check so that Device Mapper based disk devices are blocked when creating Journal and OSD devices. These devices have a different dev partition path and will be supported in a future release.
  • Fixed: Optimized Ceph Journal device discovery for faster service startup.

Cloud Containers

  • Fixed: You can now remove Cloud Backup Schedules if there are no Cloud Containers present.

Disk Manager

  • Fixed: Trusty deployments will now support dm multipath disk configuration for SAS Disks when there is only a single path from one SAS cable physically connected.
  • Fixed: Added some improvements for multipath and disk discovery on service startup and disk rescans after multipath configuration changes. This change also helps ensure that if a disk or other hardware device is faulty or slow to respond that disk discovery continues on all other devices.
  • Fixed an issue with the physical disk and storage pool device identify logic where the Hardware Enclosure logic to blink the enclosure's disk slot ID LED was not consistently used.

Hardware Controller Support

  • Added Broadcom LSI 9400 series HBA controller support.
  • Added Hardware Enclosure and Controller management support for Areca Controllers in JBOD mode and RAID0/1+0/5/6 arrays.
  • Added a Disk Temperature alert for disks that go above a specific centigrade value. The default for this threshold is 50C. This is currently supported on LSI MegaRAID and Areca RAID controllers; SAS HBA and other controller support will be available in a future release. The alert threshold can be customized by adding a 'disk_temp_alert_threshold=NN' definition under the [hw_controller] section of the /etc/quantastor.conf config file (see the example after this list).
  • Added a new Drive Temperature column to the Controller Disks tab grid view in the Hardware Enclosure and Controllers section of the Web Manager.
  • Added: Drives that go above the temperature threshold will now show an OVER-TEMP status in the Controller Disks grid view.
  • Updated MotionFX chassis enclosure images and details to reflect new Acromove branding.
  • Added additional Enclosure Layouts and Enclosure images for HPE, Dell, HGST, and Supermicro enclosures.
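
A sketch of the /etc/quantastor.conf override described above. The section and key names come from the release note; the value 55 is only an example threshold in degrees centigrade:

 [hw_controller]
 # Raise the disk over-temperature alert threshold from the 50C default to 55C
 disk_temp_alert_threshold=55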

High Availability Failover

  • Added Site Cluster VIFs to the Web Manager. A Site Cluster VIF can be used to create a virtual interface tied to a specific Site Cluster; this is recommended for cases such as Network Share Namespaces.
  • Fixed: Added a check for HA Storage Pools to ensure that grow checks if the disk is available from both nodes before performing the pool grow.
  • Fixed: Added a check for HA Storage Pools to ensure that selecting a new disk to be used as a cache device or spare checks if the disk is available from both nodes before performing the operation.
  • Fixed: Added a check for HA Storage Pools when adding a Hotspare disk that the drives are available from both systems.
  • Added the ability to failover Site Cluster VIF's to a specific node on the Site Cluster.
  • Fixed an issue with deleting Site Cluster HA VIF's on precise platforms, this was a regression from improvements introduced in the 4.3.0 release.
  • Fixed an issue where deleting an encrypted HA storage pool would leave encrypted devices open on the standby node.
  • Fixed an issue with Storage Pool device verification on failover of HA Encrypted storage pools.
  • Fixed an issue where the /etc/crypttab key entries were not cleaned up on the standby node after a HA Encrypted Storage Pool was deleted.

Remote Replication

  • Fixed an issue where a newly created storage system link would not have the reverse link persist after a reboot or management service restart.
  • Fixed: Storage System links that do not have a bandwidth limit will now correctly show the default limit of 100 MB/s when upgrading from an older release.

Web Manager

  • Updated the Central Grid view of the Enclosure and Controllers section in the Web Manager to provide a more concise and easier to navigate display for enclosure layout and controller and enclosure selection.
  • Updated the Central grid views in the Web Manager to provide a clearer layout that shows more information all at the same time.
  • Added a Snapshots tab to the central grid view for Network Shares. This allows for a snapshot specific view and list for the selected network share.
  • Added support for AD groups in the Network Share User and Group Quota dialog.
  • Fixed a help documentation link for the Remove Heartbeat Cluster Dialog.
  • Fixed an issue where the Dashboard in the Ceph Tab of the Web Manager could collapse to a hidden state unexpectedly.
  • Fixed the Help Documentation for the FC Target port enable/Initiator Mode Enable dialogs.
  • Fixed: The Multipath Configurator scan drop down list will have single entries for each unique multipath capable device found.
  • Fixed: The Status field in the Controller Disks tab of the Hardware Enclosures and Controllers section is now wider to encompass most common disk status states.
  • Fixed: updated the Execute Storage Pool Failover dialog to clarify the checkbox and its effect in ensuring pool failover succeeds to the selected node in the event the original node fails to export the pool.

Storage Pool

  • Fixed: Task progress for a storage pool delete when a disk scrub is selected will now correctly show the progress for the overall task, including the scrub portion. Previously, individual scrub operations reported their own progress percentage as an update to the overall task, which resulted in the overall task progress being incorrect at times.
  • Fixed an issue with Pool deletion where the pool delete would fail if there was unexpected data in the pool mount path.


QS CLI

"* Added new CLI support for adding and removing Network Share Quotas per user and group. Previously this was only available in the Network Share User and Group Quota Manager dialog.

qs share-group-quota-add 
qs share-group-quota-remove 
qs share-user-quota-add 
qs share-user-quota-remove"
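
A hypothetical usage sketch for the quota commands above. The argument names shown (--share, --user, --quota) are illustrations only and are not taken from the release notes; consult the qs CLI help for the actual options:

 # Hypothetical example: grant user 'jsmith' a 50GB quota on share 'projects'
 qs share-user-quota-add --share=projects --user=jsmith --quota=50G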

"* Added password token support for the .qs.cnf file in the root home directory on QuantaStor systems, this provides localhost authentication without needing to have the password in a authentication file for the root user.

Users can enable this by echoing the special token into the root users .qs.cnf file: 

echo ""localhost,admin,[QSCLITKN]"" > /root/.qs.cnf 

And then enabling token based authentication using the CLI command for the admin user shown below. 

qs user-modify admin --cli-auth=yes --server=localhost,admin,PASSWORD 

Note, if you have created a new Administrative role user, replace 'admin' with the name of your admin user."

v4.3.1.007 (June 30th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.3.1.

ISO/DVD/USB Boot Install Image


Release Notes

Package updates

  • New nginx-light 1.12.0-1+trusty1 package for trusty deployments.

High Availability

  • Fixed an issue where the encryption keys would sometimes not be copied to the secondary node of an HA Failover Group.

Security

  • Changed the Password Policy Manager dialog to be the Security Manager Dialog.
  • Added an http to https redirect option to the Security Policy Manager Dialog.
  • Added an option to disable http port 80 access to the Security Policy Manager Dialog.
  • Added logging of manual and automatic user logout from the QuantaStor Web Manager to the '/var/log/qs_audit.log' file.
  • Fixed an issue where the Password Policy changes could sometimes not be applied to all nodes in the grid.
  • Fixed a Security issue with bad password responses. Fixes items found related to CVE-2017-9978
  • Fixed the Rest API response for when a method is unsupported. Fixes items found related to CVE-2017-9979

Web Manager

  • Fixed: Right-click context menus will now show the same list of menu options in the tree and grid views.

v4.3.0.335 (June 22nd 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.3.0.

ISO/DVD/USB Boot Install Image


Release Notes

Ceph Scale-out Block and Object

  • Updated Ceph Packages available:
- 10.2.7 Jewel for Trusty based installs 
- 0.94.10 Hammer for Precise based installs
  • Fixed Ceph Scale-out Block startup discovery on grid nodes that are still part of a Ceph cluster when one of the Ceph nodes in the grid is removed from the grid.


Cloud Containers

  • New IBM Softlayer S3 Endpoints added to the default Cloud Providers configuration.
  • Fixed an issue where changes to the /etc/qs_cloud_providers.conf file on the master node were not propagating to secondary nodes.
  • Fixed an issue where the entry fields for Tag and End-Point in the Cloud Containers > Add Provider Location Dialog and CLI were swapped.
  • Fixed a missing python-six dependency issue with the aws CLI tool.


High Availability

  • Improved pool failover times when network interface connectivity between nodes is lost.
  • Fixed an issue where high-availability virtual network interfaces could try to start on a system that did not currently have the QuantaStor service running.
  • Fixed an issue where deleting a high-availability virtual network interface could cause temporary outages on other high-availability virtual network interfaces.
  • Fixed an issue where deleting a high-availability virtual network interface could cause an unnecessary failover to occur.
  • Fixed an issue where failover would fail due to not finding the disks correctly on the secondary node.
  • Fixed an issue with nginx web service starting during qstormanager package install.
  • Fixed: HA Storage Pool Failover is now kernel panic aware and will trigger a failover to the secondary node.
  • Fixed: Deleting a HA VIF from the Network port list in the Storage System View now correctly cleans up the High Availability Failover Group HA VIF object.
  • Fixed: A Site Cluster can now be deleted using the force flag if a Cluster node is Permanently offline and will not be returning.


Disk Management

  • Added a Multipath Configuration Dialog to the Physical Disk section of the Web Manager; this allows administrators to scan for SAS, FC, iSCSI and other multipath/multiport capable devices and add their white-listing rules to the multipath configuration. This functionality is also available via the new qs CLI 'disk-multipath-config-list', 'disk-multipath-config-scan', 'disk-multipath-config-add' and 'disk-multipath-config-remove' commands (a brief sketch follows this list).
  • Fixed an issue where the /dev/disk/by-id/ata-* devices would be removed under the device mapper path after a Storage Pool is deleted by a user. Previously, a udevadm-trigger command would be required to bring the ata-* device back.
  • Fixed an issue with physical disk scan on multipath configurations that could cause the multipath devices to not appear in the QuantaStor WebUI and CLI once the scan completes.
  • Fixed: the qs disk-scan command now has a force flag.
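
A brief sketch of the multipath configuration commands named above. The release note does not specify what arguments the add/remove variants take, so only the read-only commands are shown; check the qs CLI help before scripting against them:

 # List the current multipath white-listing rules
 qs disk-multipath-config-list
 # Scan for SAS, FC, iSCSI and other multipath/multiport capable devices
 qs disk-multipath-config-scan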


Storage Pool

  • Fixed an issue where old disk encryption keys would not be cleaned up after a storage pool is deleted.
  • Added earlier validation checks for Storage Pool Grow Operations for Encrypted Storage Pool configurations to ensure that the disks to be added are not encrypted before the RAID set size and resultant configuration is confirmed to be valid.
  • Fixed an issue during storage pool create where user specified RAID set sizes would not be used.
  • Fixed a few scenarios where creating a Storage Pool with XFS, Software RAID and Encryption could fail.
  • Fixed an issue where a failed disk device was not automatically removed from a ZFS Storage Pool in Multipath configured environments.
  • Fixed an issue that could occur when adding hot spares to ZFS Storage Pools in Multipath configured environments.

Remote Replication and Snapshots

  • Added a Remote Replication Report tab to the Remote Replication Schedules section of the Web Manager that shows the results of past replication tasks. Statistics in the report include completion status, average throughput, the start and end time of the replication, and more. This is also available via the qs CLI with the 'qs replica-report-summary-list' and 'qs replica-report-entry-list' commands (a brief sketch follows this list).
  • Added new snapshot retention options to the Create Remote Replication Schedule Dialog to allow for Daily, Weekly, Monthly and Quarterly Snapshots for Historical Storage Volume and Network Share snapshots.
  • Added new snapshot tags for Daily, Weekly, Monthly and Quarterly Snapshots that correspond with the retention policy picked for a particular Storage Volume or Network Share snapshot.
  • Added Compression support to Storage System Links, this allows for improved performance over slow WAN links.
  • Added: Remote Replication bandwidth throttling has been moved to the Storage System Link object. The qs link-create and link-modify commands and Web Manager Create Storage System Link and Modify Storage System Link Dialogs now allow for setting the bandwidth throttling.
  • Added the ability to turn on Unencrypted support for Remote Replication Storage System Links. This uses mBuffer to provide a high performance unencrypted channel for Remote Replication between QuantaStor nodes.
  • Added the ability to configure the Bandwidth Limiter in the Create and Modify Storage System Link Dialogs.
  • Added more information to indicate replication schedule health/state and the cause of failures due to any misconfiguration or networking communication error.
  • Added: Snapshot Schedules can now be created for snapshots (allows Snapshot of Snapshot Scheduling)
  • Fixed an issue where remote replication could fail for manually initiated replications.
  • Fixed: Max Replicas in Replication Schedules are now referred to more correctly as Max Delta Points, this clarifies more precisely how many Intermittent and hourly scheduled replica snapshot points are retained between a source and target replication association.
  • Fixed: The Remote Replication Offset interval for Hourly/Daily Replication in the Schedule Interval tab of the Create and Modify Replication Schedule dialogs now defaults to 0 minutes and can go to a max of 59.
  • Fixed: ZFS snapshots will once more be correctly removed upon the deletion of a Network Share snapshot. This corrects a regression from the 4.2.2 release.
  • Fixed: resolved an issue with updating the Timestamps for running remote replication tasks that could result in the remote replication link having incorrect progress information.


Gluster Scale-out File

  • Fixed an issue that could cause Gluster Peer and Volume objects to not display correctly when Gluster is deployed in the same QuantaStor management grid as non-Gluster nodes.
  • Fixed an issue that would prevent cleanup of Gluster Volumes in a configuration where the gluster bricks or underlying pools had been already removed.
  • Fixed: Re-ordered the Ribbon bar icons in the Scale-out File Storage tab.


Network Share:

  • Added the ability to see SMB session information for Network Shares under the new Web Manager Network Share > SMB Sessions tab and the 'qs share-session-list' and 'qs share-get' CLI commands (see the sketch after this list).
  • Added the ability for users to create a snapshot of an existing Network Share snapshot (snapshot of a snapshot). This support is limited to custom named snapshots and not snapshots created by a schedule that has @GMT in the name.
  • Fixed an issue on Precise platforms with Network Share Snapshots for Windows SMB Shadow Copy and File Versioning support.
  • Fixed an issue where removing NFS access from a Network Share would collapse the tree to the first level in the Network Share tree view. Now the tree will stay as expected when an NFS Access object is removed from the share.
  • Fixed an issue in the Namespace Add/Remove Network Shares Dialog where changing the namespace in the drop down would not always update the available selections.
  • Fixed an issue with the search filtering in the Add/Remove Network Shares to/from Namespaces.
  • Fixed an issue with creating shares that have multiple '$' characters in the name.
  • Fixed an issue with snapshot mount directory cleanup after a network share snapshot has been deleted. Note: this prevents the issue from occurring going forward. If users ran into this case prior to upgrading to the 4.3 release, they may need to manually remove old @GMT snapshot mount directories from the network share _snaps directory.
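
A minimal sketch of viewing SMB session information from the CLI using the command named above; 'qs share-get' can also be used for a specific share, though its argument syntax is not listed in these notes:

 # List active SMB sessions for Network Shares
 qs share-session-list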

Web Manager

  • Added a new Password Policy dialog available under the Users and Groups tab in the Web Manager that allows Administrators to enforce password Requirements. This includes:
- Minimum password character length 
- Password expiration (in days). 
- Number of allowed login attempts 
- Minimum days to wait before password change is allowed. 
- Number of unique passwords before reusing a password is allowed.
  • Added: the Storage Volume Close Session dialog now shows a selectable list of the current sessions; clicking OK will close all of the selected sessions on the SCSI Target.
  • Added: Network Share and Storage Volume Multi-Delete from the web manager can be used to delete share/volume and its child snapshots (select Delete Child Snapshots).
  • Added: The Network Shares and Storage Volumes Multi-Delete dialog now has the option to "Hide Snapshots", making it easier to select the parent share/volume to be deleted.
  • Added Passthru Storage Volume support to the Web Manager, Passthru Volumes can be created by right clicking on a Physical Disk and choosing the 'Create Passthru Volume' option.
  • Added new columns to the Create Storage Pool Dialog to show the Source System and Source Storage Volume to better allow customers to pick specific Passthru physical disks using QuantaStor appliances as backend storage for front end QuantaStor appliances.
  • Fixed: General responsiveness and UX performance improvements for the Web Manager on larger scale configurations.
  • Fixed: Provided a clearer message in the Create Pool dialog for when users do not provide matching passwords for the 'Encrypt Storage pool with Passphrase' fields.
  • Fixed: the Dashboards will enforce the use of https for their REST calls when the Web Manager is using https.
  • Updated the Create and Modify Remote Replication Schedule Dialogs for a better workflow.
  • Fixed a problem under the Remote Replication tab in the Web Manager that could lead to a slow unresponsive Web Interface.
  • Fixed an issue where the Replication Targets tab under the Storage System Links could show as empty.
  • Fixed an issue where a newly created Host Group would not show the selected hosts in that group. Previously, a browser refresh was required to show them.
  • Fixed an issue with the Web Manager that would cause object status or task objects to not update or show as completed when a large number of events were received. Previously, a refresh of the Web Browser would have been required if this occurred.
  • In the Migration Edition Workflow Manager, the View Network Shares has been replaced with View SMB Connections.
  • Added a Task list counter to the Task list at the lower part of the Web Manager.
  • Added Block Size column option to the Storage Volume grid view.
  • Fixed: Removed the unsupported cloning options in the context menu for Network Share Alias or Subshare. Cloning should only occur at the Parent Share level.
  • Fixed an issue in the Physical Disk view that could cause the Firefox and IE Web browsers to report an unresponsive script warning.
  • Fixed a few areas in the Web Manager where large numbers of objects or events coming in could result in an unresponsive script warning for some Web Browsers.
  • Fixed an issue in the Storage Volume tree view where scrolling down and selecting a volume could cause the tree view to 'jump' up to the top of the list.
  • Fixed an issue with the tree view for Network shares that could sometimes show the NFS client access out of order with the associated network share snapshot.
  • Fixed an issue with truncation of some of the options in the Create and Modify User dialogs.
  • Fixed field, text, scroll and other alignment issues in various dialogs.
  • Fixed miscellaneous spellings in various Dialogs.
  • Fixed the Add user dialog descriptive text.
  • Fixed: Added a check to the Storage Volume Advanced Settings CHAP Username/Password to ensure that both Username and Password are supplied before clicking on OK.
  • Fixed: Corrected an issue under the Remote Replication Schedule View in the Web Manager where some items under the left hand tree view could not be selected if the same Network Share or Volume was also under another schedule.
  • Fixed: Re-ordered the Ribbon bar icons in the Scale-out Block and Object Storage tab.
  • Fixed: Re-ordered the Ribbon bar icons in the Storage System tab.
  • Fixed: References for CIFS protocol in the Web Manager have been renamed or further clarified to SMB.
  • Fixed: The CIFS Configuration Dialog has been renamed to Active Directory Configuration.
  • Updated Web Manager splash to detail how to properly refresh the QuantaStor Web Manager on OS X.
  • Improved Web Manager responsiveness and performance.


Hardware RAID Support

  • Creating a Hardware RAID unit under Hardware Enclosures and Controllers will now create the HwUnit object for display immediately with discovery of properties occurring in the background. This provides better User Interface feedback when creating a large number of Hardware RAID units.
  • Fixed: added a guard to the Hardware RAID Controller SSD Cache Unit delete to prevent removal of the SSD cache if it is actively in use by any other Hardware RAID units on that same RAID Controller.


Installer and Packaging

  • Fixed an issue with the .iso install media that required internet access for an install to finish.
  • Fixed: The Installer will now show eth* devices for UEFI BIOS installs.
  • Fixed a problem with the latest qstortarget package compatibility with the older 3.19.0-29-quantastor kernel.


Localization

  • Updated Japanese, Chinese, French, Spanish, and Italian localizations for the QuantaStor Web Manager.

Security

  • Added: Users who are inactive for 30 minutes are automatically logged out of the Web Manager.
  • Added: Auto Logout clears all state information from the Web Browser.
  • Added CJIS Section 5.5.4 Compliance:
- System Use Notification is available with System Usage Notification Message field under Password Policy Dialog. 
  • Added CJIS Section 5.5.5 Compliance:
- Session Lock is available with Auto Logout value under Password Policy Dialog.
  • Added CJIS Section 5.4.1.1 Compliance.
CJIS Section 5.4.1.1 Events logging in the /var/log/qs_audit.log for: 
- Successful and unsuccessful system log-on attempts. 
- Successful and unsuccessful attempts to use access, create, write, delete or change permission on a user account or other system resource. 
- Successful and unsuccessful attempts to change account passwords. 
- Successful and unsuccessful actions by privileged accounts.
  • Added CJIS Section 5.6.2.1.1 Compliance.
Note: CJIS default password requirements compliance can be enabled under the Password Policy dialog in the Users and Groups tab. In the dialog, select Suggested Defaults and change password complexity to strong.
Detailed CJIS Section 5.6.2.1.1 Compliance, Password Shall:
- Be a minimum length of eight (8) characters on all systems. (Compliant & Enforced)
- Not be a dictionary word or proper name. (Compliant & Enforced, since QS v4.1.1)
- Not be the same as the Userid. (Compliant & Enforced)
- Expire within a maximum of 90 calendar days. (Compliant & Enforced, since QS v4.1.1)
- Not be identical to the previous ten (10) passwords. (Compliant & Enforced, since QS v4.1.1)
- Not be transmitted in the clear outside the secure location. (Compliant & Enforced)
- Not be displayed when entered. (Compliant & Enforced)
- Erase cached information when a UI session is terminated.
  • Fixed: Users created with the Cloud Admin and Cloud User Role can now change their own passwords.
  • Updated Samba packages are available to address CVE-2017-7494, please upgrade your system using the qs-upgrade CLI command to install these packages and bring your system current with all other security and stability fixes available on the package repository. Note: for customers who installed the sernet-samba4 packages on the older precise platform using the samba4-install script, a workaround to address this security alert is detailed in the KB article here: Sernet Samba4 CVE-2017-7494
  • Added Logic to terminate SMB sessions for users who have had their user access removed.


Active Directory

  • Added support for RFC2307 configuration to the Configure Active Directory Dialog in the Web Manager for Trusty platforms or Precise installs that have the optional sernet-samba4 packages installed.
  • Added Trusted Domain checkbox to the Configure Active Directory Dialog box for Trusty platforms or Precise installs that have the optional sernet-samba4 packages installed. This enables trusted domain support for the CIFS/SMB service
  • Fixed: The Sernet Samba4 winbindd service will now be automatically started if it is detected that it is not running. This brings the Sernet samba service management inline with the standard Ubuntu Precise and Trusty Samba services.


CLI

  • The qs CLI 'system-shutdown', 'system-restart' and 'system-upgrade' commands can accept a '--sys-list' argument with a comma-delimited list of storage systems to perform shutdown, restart or upgrade tasks on multiple grid nodes at once (see the example after this list).
  • Fixed: 'qs set-tag' now allows the use of object UUIDs to set tags. Prior to this fix, only names were allowed to set tags.
  • Fixed: replaced the Parent Share ID with the human readable Parent Share Name in the 'qs share-list' command output for snapshots.
  • Added clarity to the help for qs commands such as 'pool-create', 'pool-grow' and others that have the "--disk-list" argument.
  • Updated the Help text and error responses for qs pool-create.
  • Fixed qs CLI cluster-ring-member-get command to provide better context with the --cluster-ring-member argument.
  • Fixed: Changed CLI management of Network Ports to use Network Port instead of Target Port naming convention. Legacy commands (tp-list, tp-modify, tp-get) will continue to be supported but will point to the new Network port naming convention commands for CLI help output.
  • Added: Network Shares and Storage Volumes can now be deleted from the qs CLI using the flag '--delete-child-snaps'. Adding this flag will delete all the child snapshots. If snapshots are used by schedules then the additional '--flags=force' option should be used.
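
A sketch of the grid-wide operations and snapshot-aware delete described above. The node names and share name are hypothetical placeholders, and the 'share-delete' command name is an assumption since the release note only specifies the flag:

 # Restart several grid nodes with one command (node names are examples)
 qs system-restart --sys-list=qs-node1,qs-node2,qs-node3
 # Delete a Network Share and all of its child snapshots; add --flags=force if any
 # snapshots are still used by a schedule
 qs share-delete projects --delete-child-snaps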


Core Service

  • Added a new QuantaStor Log collection tool with the below new features:
- adds support for uploading via https. 
- the tool will fetch an updated json definitions file if available from the OSNEXUS update servers before gathering logging data. This allows up to date fetching of diagnostics when working with the OSNEXUS support team. 
- The Send Log Report task will now show more detailed status on the log gather scripts progress. 
  • Fixed: Added validation to correct an issue where a Grid Node Object and associated child objects were unexpectedly removed from the Grid Master if the QuantaStor Grid Node came onto the network with a new Storage System ID but using the same IP. This corrects a scenario where a system reinstalled in place due to a Hardware issue could cause an unexpected grid or configuration change.
  • Fixed an issue where DNS entries added in the Web Manager Storage System Modify Dialog were not being reflected in the /etc/resolv.conf nameserver settings file.
  • Fixed an issue with the Web Manager Send Log Report task where the log would fail to upload but the task would return success. The task will now return as failed if the upload or gathering of the logs fails for any reason.
  • Fixed an issue that was blocking https access when the 'qs-util disablehttp' tool was used to turn off http port 80.
  • Updated SNMP MIB.

v4.2.4.004 (May 3rd 2017) DRIVER UPGRADE AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.2.4.

ISO/DVD/USB Boot Install Image


Release Notes

ZFS Driver


Web Manager

  • Fixed an issue where the Web Manager may not refresh as expected on some grid events.

v4.2.3.007 (April 21st 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.2.3.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

High Availability

  • Added checks on system startup of passive nodes to ensure encrypted devices are available for HA failover to reduce failover time.
  • Fixed: Corrected a behavior where iofencing would sometimes not be released from a cache device that is removed from the Storage Pool. This would cause a device that was removed to still be locked to the old pool.
  • Fixed: Corrected an issue where some disks would not be included in the Storage Pool device list for iofencing for a storage pool during a failover. This would intermittently cause a failover to not succeed.
  • Fixed an issue with the refresh of the site cluster view in the Web manager after a site cluster configuration is removed by a User.

High Availability Fibre Channel Target

  • Fixed: Now uses Standby instead of transitioning mode during failover. This addresses the ALUA failover "flapping" issues which would cause the devices to not come back online without a reboot.
  • Fixed: Optimized the use of issue LIP to limit disturbance to FC fabrics.
  • Fixed: Closed a small time window where the Relative Target Port Group ID for FC ALUA devices was not set early in an HA Failover. This would cause issues where devices would not come back online without a reboot.

Network Shares

  • Fixed: a Network Share Modify will now correctly apply the recordsize change to Network Shares on ZFS Storage Pools.

Storage Pools

  • Fixed: Encrypted disks are now opened concurrently to better support large configurations (80-200 disks). This reduces failover and pool startup time for Encrypted disks by ~30%.

v4.2.2.045 (April 5th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.2.2.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

High Availability

  • Added a better Cluster site overview to the Cluster Resource Management section of the WebUI. Now when a site cluster is selected the central grid view will show all details regarding status for the Site Cluster nodes and services. Previously this information was available in separate tabs in the grid view and not always apparent.
  • Added: The Add Cluster Heartbeat Ring Dialog now selects all nodes in the selected Site Cluster by default, reducing the number of clicks to create additional site cluster heartbeat rings.
  • Added the new Restart Site Cluster Services dialog and 'qs site-cluster-restart-services' CLI command that allows Administrators to restart the heartbeat ring and site cluster service on a chosen node.
  • Fixed an issue where a Site Cluster would remain in a Warning state after the heartbeat rings and nodes were brought back to a Healthy state. It will now report a Healthy state as expected.
  • Fixed: Highly Available Storage Pools now add a protection lock on pool Import to ensure that they are not re-imported if they had previously failed to export on an automatic or manually initiated failover. Previously this check only occurred on pool export.


Network Shares

  • Added a check to Network Share Delete to ensure that any Network Share Aliases/Subshares are removed before the parent Network Share can be removed.
  • Fixed: Network Share Aliases now report a share type of alias. As they are an alias of the parent Network Share, they will now report N/A or '0' for their Logical Used/Physical Used to avoid confusion.
  • Fixed: The Network Share Logical Used and Physical Used reporting in the WebUI now matches the precision of the 'qs share-list' CLI output, with less rounding.
  • Fixed: Changes to the NFS exports for deletion or disabling of a Network Share object now use a safe reload method for updating the NFS exports table. The Create Network Share and Create Network Share Snapshot functions have been using this reload method for some time.
  • Fixed an issue where subshare/aliases selected for removal in the MultiDelete Network Share Dialog would sometimes fail to be removed.
  • Fixed an issue where a newly created local clone of a Network Share would inherit the mountpoint property of the source Network Share. Previously this could lead to the source Network Share being taken offline if the clone share is disabled or removed.
  • Fixed an issue where disabling or deleting a Network Share Alias could unmount the Parent Network Share.
  • Fixed: Lazy Deleted Network Shares will now correctly be cleaned up on system boot or the next Storage Pool discovery cycle.


Storage Pools

  • Added new 'Hardware' Column to the disk selection section of the Storage Pool Create Dialog that provides a way to sort and select the disks based on disk location.
  • Fixed an issue where growing a ZFS Storage Pool was not retaining enclosure level redundancy as expected.
  • Fixed an issue where the Snapshot Physical used capacity would incorrectly appear in the other category in the Storage Pool Dashboard.
  • Fixed an issue with pool import on disks with multipath devices.


Storage Volumes

  • Updated the Storage Volume Group icon with a new icon that provides a clearer difference between Storage Volumes and Volume Groups in the Storage Volume tree view.

Scale-out Block and Object (Ceph)

  • New icons used for Ceph RBD Storage Volumes.
  • Fixed an issue with creating a Ceph Scale-out Object Storage Pool Group.
  • Fixed an issue where OSD's could sometimes not start after reboot for Ceph cluster nodes on the Trusty Platform.

Web Manager

  • New Workflow Manager with easy workflows for common initial setup tasks. This replaces the previous System Checklist.
  • New Workflow Manager splash screen when logging into the Web Manager for Migration Edition. This new window presents common initial tasks for the Migration Edition such as opening/starting the encrypted pool, viewing the share mount commands, shutting down the storage appliance and other common tasks.


CLI

  • Added a validation check to ensure correct IQN formatting in the 'qs host-initiator-add' command (see the example after this list).
  • Changed the Link State column in the 'qs target-port-list' output to show Link Up/Link Down instead of 'Normal'. Verbose output for the 'qs target-port-list' and 'qs target-port-get' commands also shows Link Up/Link Down instead of 'Normal'. XML output will continue to report an enum of '0' or '1' as previously established.
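
A minimal sketch; the '--host' and '--iqn' argument names are illustrative, but a well-formed IQN follows the iqn.<yyyy-mm>.<reversed-domain>[:<identifier>] pattern that the new validation check is intended to enforce:

  # Add an initiator entry using a correctly formatted IQN (argument names illustrative)
  qs host-initiator-add --host=esx-host1 --iqn=iqn.1998-01.com.vmware:esx-host1-12345678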


Core Service

  • Fixed the dependencies for the qstorservice so that the samba-client package is suggested and not a hard dependency. This is required to allow the upcoming Precise to Trusty platform upgrade path.

v4.2.1.018 (March 3rd 2017) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.2.1.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

Drivers

High Availability

  • Fixed: During a Manual Storage Pool Failover operation, the failover will now continue if the original owner of the Pool is unresponsive or unable to export the pool. This is now equivalent to using the force flag in the Execute Storage Pool Failover Dialog, which is now checked by default.

Network Shares

  • Added support for 256K, 512K and 1024K Record Sizes in Network Shares.
  • Fixed: New Network Share Namespaces are browseable and public by default.
  • Fixed an issue with renaming Network Shares that contain special characters such as $.
  • Fixed: a check has been added to ensure a Network Share is not renamed when it is part of an existing namespace, as this can lead to unexpected behavior. If you wish to rename a share, please remove it from the namespace configuration, rename it and add it back.
  • Fixed an issue with the Modify NFS Client Access dialog under the Network Share > NFS Client Access tab so that the correct Network Share is automatically selected and the rule to be modified can be selected from the drop-down menu.


Remote Replication and Snapshots

  • Fixed: Network Share Subshares and Aliases are now correctly filtered from selection as a remote replication or snapshot source.
  • Fixed an issue with Replication of Shares that include a $ in the name.

Storage Pools

  • Added: the 'qs pool-create' argument '--disk-list' now supports specifying [n] number of disks or [*] to use all available disks when creating the Storage Pool (see the sketch after this list).
  • Fixed: Updated Storage Pool Create, Modify, Grow and other Dialogs to be much more elastic.
  • Fixed: Storage Pool Modify, Grow and other dialogs now include more useful details displayed regarding the pool RAID type, RAID set size and other properties.
  • Fixed: Operations on Encrypted Storage Pools that require access to the Encryption key will now Fail with a clear error message prompting for the Pool to be opened with the Passphrase so that the operation can be performed.
  • Fixed a few small items that would cause the Storage Pool Dashboard to not display when selecting a different Storage Pool.
  • Fixed an issue where the wrong device path location was being used for ZFS Storage Pools when adding/removing cache devices and spares.
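
A minimal sketch of the expanded '--disk-list' syntax; the pool names and disk identifiers are illustrative:

  # Create a pool from four specific disks
  qs pool-create --name=pool1 --disk-list=sdb,sdc,sdd,sde
  # Let QuantaStor pick any eight available disks
  qs pool-create --name=pool2 --disk-list=[8]
  # Use every available disk
  qs pool-create --name=pool3 --disk-list=[*]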


Storage Volumes

  • Added support for 256K, 512K and 1024K Record Sizes in Storage Volumes.
  • Adds new Storage Volume Dashboard in the Storage Volume section of the WebUI. The Storage Volume Dashboard provides a detailed view of the Logical and Physical Used capacity.
  • Fixed: The Storage Volume Modify Advanced Settings Dialog now correctly shows the Block Size that was chosen when the Storage Volume was Created. Previously this information was only available via the Properties view.

Scale-Out File Storage (Gluster)

  • Added a health check for the Selected Gluster Peers before a Gluster Volume Create, Modify or Grow operations can be executed.
  • Fixed: Gluster Volumes now correctly show a type of gvol in 'qs share-list' output.
  • Fixed: Gluster Volumes now include a Logical Used attribute to show the logically used capacity before mirroring or erasure coding.
  • Fixed: Storage Pool type now shows N/A for Gluster Shares as there is no direct mapping to the underlying pool for this Share type.

Hardware Enclosures and Controllers

  • Fixed an issue where Write Caching was shown as enabled for a RAID unit when the RAID controller BBU was failed or not present and the RAID controller was defaulting to Write Through mode.
  • Fixed an Issue where the default enclosure layout view was not being selected on newly added Enclosures.

Web Manager

  • Fixed an issue in the WebUI where new items added to a tree view would not show up until a discovery cycle or Browser reload has occurred.
  • Fixed an issue where Virtual Interfaces could not be created from the WebUI if the gateway field was empty.

Core Service

  • Added: Enterprise License keys now support License Capacity Passthrough when using LUNs presented from QuantaStor Backend Storage Appliances.
  • Updated API and CLI Documentation for the 4.1 and newer releases.
  • Fixed an issue where UEFI installs would incorrectly show the Base OS grub splash screen settings instead of those for QuantaStor.
  • Fixed an issue that was preventing the hourly automatic management database backups from occurring in some scenarios.

v4.2.0.375 (Feb 17th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.2.0.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

Network Shares

  • Added support for the '$' character in Network Share names, allowing Windows clients to automatically hide these Network Shares from browsing.
  • Added the advanced recordsize option for Network Shares created on ZFS Storage Pools.
  • Added support to Network Shares for presenting a Secondary path (Alias) and/or Sub-folder via CIFS and NFS.
  • Fixed: Network Share Snapshots inherit the parent share's Security and access list settings.

Cloud Containers

  • Added a new One-to-One Cloud Container that uses S3FS to provide a direct Object mapping for every file written to the Cloud Container Network Share.
  • Added support for custom S3 endpoints.
  • Added new qs CLI commands to allow for management of Cloud Provider Locations, Cloud Providers, and Cloud Provider Credentials.
  • Fixed an issue where the S3/Swift bucket at the Cloud Provider would not be removed during a cloud container delete.
  • Fixed: Cloud Containers now report a Type of 'cloud' in their share list properties.
  • QuantaStor now uses awscli for all internal S3 endpoint management.

Storage Volumes

  • Added additional Columns and Properties to the Storage Volume section of the WebUI to better show the Physical Used capacity (after compression) on disk, Logical Used capacity (what the client has allocated) and child Snapshot Physical Used capacity.
  • Added the qs volume-create-passthru command to allow for passthrough of Raw Storage devices such as NVMe disks as Storage Volumes.

Hardware Enclosures and Controllers

  • Added new Custom Chassis Tag for Hardware Disk Enclosures. This allows for custom names for the Disk Enclosures to match any real world location/naming scheme used in your organization. If the same Custom Tag is used on multiple enclosures, QuantaStor will refer to them as the same enclosure. This is helpful for some vendor enclosures that have a SAS Expander Backplane in the front and back of their JBOD chassis that would normally appear as separate enclosures.
  • Added further enhancements to the Hardware unit to Physical disk correlation.
  • Enhanced the iSCSI Software Adapter Create Dialog.
  • Fixed: the iSCSI SW Adapter now logs in to its remote targets much faster.
  • Fixed an issue that could prevent the Disk Locator light function from working on some Hardware Disk Enclosures.

Storage Pool and Disk Management

  • Added Hardware Disk Correlation in the Physical Disk view of the WebUI.
  • ZFS is now the default Storage pool type for 'qs pool-create' if a pool type is not specified.
  • Fixed: ZFS Storage pools comprised of Physical Disks which are Hardware RAID units, will now show a Combined RAID level property of (HWRAID+ZFSRAID). For instance, if underlying Hardware RAID 6 is used alongside ZFS RAID 0 the Value would report as (RAID6+0) or if HW RAID10 with ZFS RAIDZ2(6) the result would be (RAID10+6).
  • Fixed an issue that was preventing growing a Storage Pool if a Remote Replication was running for a Storage Volume/Network Share on that pool.
  • Fixed an issue where the suggested RAID level for a chosen number of disks would be incorrect.
  • Fixed an issue where multipath disks could sometimes appear as dm-name-mpathN device identifier instead of the always unique dm-UUID device identifier.
  • Fixed an issue where the physical disk multipath flag was not inheriting to encrypted device objects. This would result in a warning flag appearing on the device in the WebUI and CLI properties.
  • Fixed an issue where Storage Pools created without multipath device IDs would not automatically import on boot up once multipathing is enabled for the disk devices and the system rebooted.

High Availability

  • Fixed an issue where ZFS Storage pool imports could take a much longer time than expected to import during an HA Pool Failover.
  • Fixed an issue that could sometimes occur after a QuantaStor HA node is upgraded and a Storage Pool Failover occurs where the Network Share user and group access list information could be removed.
  • Fixed: the FC-ALUA standby path devices will correctly appear on the passive node after a HA Storage Pool has been taken over by a node filling the active role. This fixes an issue introduced in the 4.1.5 release.
  • Fixed an issue with HA failover that could sometimes occur if the designated grid port was not available. Now the HA nodes try communicating via the Heartbeat ring interfaces if the normal grid communication port is unavailable.

Disk Encryption / Security

  • Added support for custom Encrypted Storage Pool key Passphrases. This allows for workflows where the Encrypted Storage Pool remains locked for access on bootup unless an Admin starts the Storage Pool and enters the Passphrase. The Passphrase can be changed if needed from the Modify Storage Pool dialog advanced options.
  • Fixed an issue that would cause the DoD shred option to fail on Storage Pools with Encrypted disks.
  • Fixed: Encrypted Disk devices formatted using the Format Disk tool will now properly close out the dm-enc-* device releasing the underlying physical disk device for use.
  • Fixed: the 'qs-util crypttabrepair' utility will now try all available encryption keys instead of defaulting to the enc-scsi-*.key file that matches the enc-scsi-* device name.
  • Various fixes for Encrypted Storage Pool management.

Web Manager

  • Added a search bar to the tree view in various sections to allow for faster navigation.
  • Added New Dashboard to the Ceph Scale-out section in the WebUI that shows a more detailed picture of how the physical storage is being used.
  • Added New Dashboard to the Storage pool section in the WebUI that shows a more detailed picture of how the physical storage is being used.
  • Added support for creating custom Cloud Provider and Cloud Provider Locations(endpoints) in the WebUI.

User Management

  • QuantaStor now allows for custom UID/GID settings for Local QuantaStor users.
  • Added groups to Local user management in QuantaStor web interface. This includes managing the local POSIX group and GID.

Remote Replication and Snapshots

  • Fixed: Large and long running replication transfers in the same schedule with other pending replications could result in a serialization lock error causing the pending replication tasks to fail.
  • Fixed an issue where Manually triggering a Snapshot schedule could sometimes result in a silent failure.

Ceph Scale-out Block and Object

  • Fixed an issue with Ceph Journal device discovery on System Boot.

Core Service

  • Throttled the Storage Pool Low Free Space alerts, which could previously occur at 10-minute intervals, to every two months at the Warning level, monthly at the Alert level and weekly at the Critical level.
  • Fixed an issue where the 'samba4-install' script could not connect to the update servers that contained the samba4 update packages.
  • It is now possible to use the samba4-install script on precise platforms to upgrade from Samba 3.x to Samba 4.x without needing to leave the AD domain to perform the upgrade.
  • Added new 'qs grid-send-supportlogs' and improved Send Support Logs dialog to allow customers to easily send logs to the OSNEXUS support team from multiple nodes in the grid.
  • Additional Grid performance and service improvements.

SNMP

  • Updated SNMP MIB for 4.2

VSS

  • Updated VSS Provider.

v4.1.6.896 (Feb 6th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.6.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

Install Media

  • Fixed an issue with the package update server list file that was preventing customers who installed from the 4.1.5 ISO media from performing future upgrades.

Web Server

  • Fixed an issue that caused the WebUI to be unavailable on systems where the http port was disabled with the 'qs-util disablehttp' command. Note: disabling http port 80 will block the dashboard view from other systems; this will be addressed in a future release.

v4.1.5.894 (Jan 18th 2017)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.5.

Release Notes

Network Shares

  • Added descriptive text to Network Share Users Access tab Search field. Added example text to tooltip.
  • Fixed an issue that prevented wildcard searches for users.

Hardware Enclosures and Controllers

  • Added support for Cisco branded SAS HBA's
  • Fixed an issue where the Enclosure View could appear blank.
  • Various small fixes for the iSCSI Software Adapter login/logout dialogs.
  • Fixed a rare issue where the first unit created on an LSI RAID Controller may not appear in the WebUI.

Fibre Channel Target

  • Fixed an issue where a LIP would sometimes not be issued on the target FC ports during add/remove Host access for Storage Volumes.

High Availability Failover

  • Added a faster failover check so that a secondary node can more quickly take ownership of the Storage Pool, Storage Volumes and Network Shares for instances where an active node is powered off or loses all network connectivity to its network switch and standby nodes.
  • Added: FC ALUA paths now report standby status instead of unavailable for the secondary standby node. This corrects an issue that would cause some clients to report dead/failed paths.
  • Added for FC ALUA a check to issue a LIP after failover of the Storage Pool on the Standby node so that the standby paths are rediscovered.
  • Added for FC ALUA an issue of a LIP when a secondary node comes online from a powered-off or reboot state and goes into standby status.
  • Fixed an issue where some third party FC SAN arrays would not respond to a SCSI Persistent Reservation request for full status including keys and reservations. qs-iofence now requests these items individually to support these FC array models.

Storage Pools

  • Added checks in the Create Storage Pool Dialog to detect the number of available disks on a system and provide suggested RAID levels at the top of the RAID selection list. For Example, this will ensure RAID60 is listed before RAID6.
  • Added checks in the Create Storage Pool Dialog to prefer RAID+striping levels and remove single RAID levels based on the number of drives in the system. This ensures that the best performance and capacity options are chosen during pool creation and discourages non-best-practice, extremely large single RAID levels such as a twenty drive RAID5.
  • Added: Default compression to lz4 on ZFS storage pool create, this applies to all editions.
  • Added: When clicking on the Create Storage pool ribbon button, the first system selected is now a system that has available disks.
  • Fixed: When no free disks are available to create a pool, the options in the pool create dialog are now greyed out.
  • Fixed: enabled storage pool compression support for Community Edition licenses.

Core Service

  • Added: the qs_checkservice will now log any warnings or errors to the /var/log/qs_checkservice.log file instead of sending mail.
  • Fixed an issue where the new qs_restd service was not being monitored correctly by the qs_checkservice.
  • Fixed: Corrected an issue with object name caching; this corrects an error that could sometimes occur after deleting and then recreating a snapshot or storage pool with the same name.

SNMP

  • There is a new SNMP MIB available with this release. You can use qs-util snmpmib to review it (see the example after this list).
  • Fixed an issue where an SNMP Walk would return no objects.
  • Fixed an issue where the snmpagent was unable to start on 12.04 precise platforms.
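
For example, the new MIB can be reviewed and the agent verified with a basic walk; the community string and host address are illustrative and the net-snmp tools are assumed to be installed:

  # Print the bundled QuantaStor MIB
  qs-util snmpmib
  # Confirm the agent now returns objects on a walk
  snmpwalk -v2c -c public 127.0.0.1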

Security

  • Fixed: Addressed SSL concern CVE-2016-2183 (SWEET32) with updated qsciphers file to remove DES and 3DES ciphers.
  • Fixed: disabled tomcat web port 8443.

v4.1.4.884 (Dec 20th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.4.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

Installer

  • The Trusty platform install media now includes an updated megaraid_sas driver to support the LSI MegaRAID 3316 ROC.

Storage Volumes

  • Fixed an issue that could cause problems with XFS based Storage Volumes after reboot
  • Fixed an issue where ZFS Storage Volume snapshots and replicated snapshots could sometimes not become writable clones after a snapshot or replication operation.

Scale-out Block and Object (Ceph)

  • Fixed a permissions issue for the ceph startup scripts.
  • Fixed an issue where the OSD device may not be properly associated with its Journal device in the QuantaStor management interface.
  • Various small Ceph implementation fixes.

Upgrade Support

  • Enabled Upgrades for 4.1 series features and improvements based on the Precise Platform at IBM SoftLayer and other locations that use their own update repositories.

v4.1.3.878 (Dec 8th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.3.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

Web Server

  • Fixed an issue that was preventing the new nginx web service from starting on system boot.

Scale-out Block and Object (Ceph)

  • Fixed an issue where rebooting a Ceph node and then the Ceph Master node could result in Journal devices showing up as offline and owned by the Ceph master Node.

Core Service

  • Fixed: Lowered logging level on Metrics Dashboard InfluxDB
  • Fixed: Lowered Logging Levels on nginx Web server
  • Added Additional log files to qs-sendlogs log gathering scripts.

v4.1.2.877 (Dec 6th 2016) DRIVER UPGRADE AVAILABLE

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.2.

ISO/DVD/USB Boot Install Image

Note: precise media is available here md5

Release Notes

New Drivers

  • Intel 40GBe Network Adapters i40e 1.5.25

High Availability

  • Added further optimizations to speed up Pool Failover times around Storage Pool startup and discovery tasks.

Storage Pools

  • Added: ZFS Storage Pools now support NVMe SSD devices for ZIL and L2ARC in stand-alone appliance deployments.
  • Fixed an issue where some multipath or encryption devices could not be used to grow a ZFS or XFS Storage Pool.
  • Fixed an issue where some multipath or encryption devices could not be used as a spare in a ZFS Storage Pool.

Scale-out Block and Object (Ceph)

  • Fixed a rare issue where multi-osd create would fail to create an OSD due to a failure in the XFS Pool creation step.
  • Various small Ceph management fixes.

Network Shares

  • Adds vfs_unityed_media support for better Avid integration on CIFS/SMB Shares. This replaces the previous media_harmony plugin support.

Snapshots and Remote Replication

  • Added: Remote Replication on Trusty 14.04 Platform deployments can now be enabled to use AES-NI accelerated Ciphers for SSH tunneling between QuantaStor appliances with the 'qs-util aesni' command.
  • Added some enhancements to reduce the time it takes when performing large numbers of snapshots all at the same time.
  • Added some optimizations to better batch cleanup Storage Volume and Network Share Snapshots marked for deletion.
  • Fixed: The slider bar for the minute interval option in the Create and Modify Remote Replication Schedule dialogs now correctly shows 15 minutes as the minimum available option when the slider is all the way to the left.

Physical Disk Management

  • Added new 'qs disk-format' command and Format Disk Dialog to the Physical Disk section of the Web Manager. This allows for the removal of any unwanted encryption or disk formatting prior to a disk being used in a Storage Pool (see the sketch below).
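
A minimal sketch; the '--disk' argument name and the device identifier are illustrative, check the qs CLI help for the exact syntax:

  # Remove any prior encryption headers or filesystem formatting from a disk
  # so that it can be reused in a new Storage Pool
  qs disk-format --disk=scsi-35000c50084a1b2c3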

Web Manager

  • Added a new property to indicate the Distribution version to the Storage System Properties view.
  • Fixed: The Remove grid member dialog was missing the force flag option checkbox.
  • Fixed an issue that would prevent the Dashboard from showing when logged into the Web Manager via https or port 8080.

CLI

  • Added the '--flags=' option to the qs grid-remove command.

Cloud Containers

  • Added logic to remove the bucket from the cloud provider during Cloud Container deletion. Note: Very large multi-terabyte buckets may need to be removed manually with swift/s3cmd commands (see the sketch below).
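
For a very large bucket, a manual cleanup along these lines could be used; the bucket/container names are illustrative and the s3cmd/swift tools must already be configured with the provider credentials:

  # S3: remove the objects first, then the empty bucket
  s3cmd del --recursive s3://my-cloud-container-bucket/
  s3cmd rb s3://my-cloud-container-bucket
  # Swift: delete the container and all of its objects
  swift delete my-cloud-container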

Scale-Out File (Gluster)

  • Fixed: Added a scaling timeout for Gluster Peer setup operations based on the number of Gluster Peers selected for the operation.

Core Service

  • Added further optimizations to speed up service startup around Storage Pool startup and discovery tasks.

v4.1.1.870 (Nov 29th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.1.

Release Notes

Storage Pool Management

  • Fixes a rare issue that could prevent Storage Pools on 12.04 Precise platforms from starting on System Boot.

v4.1.0.868 (Nov 23rd 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.1.0.

ISO/DVD/USB Boot Install Image

Release Notes

New Platform for new ISO deployments

  • 4.1 now uses Ubuntu 14.04 Trusty as the base platform by default.
  • If you require a 4.1 install media based on the 3.x/4.x 12.04 Precise platform it is available here md5

New Drivers

  • Intel 40GBe Network Adapters i40e 1.5.16

Ceph Scale-out Block and Object

  • New Ceph version 10.2 (Jewel) available with QuantaStor 14.04 Trusty based deployments.
  • Added: Ceph Jewel now supports reporting RBD disk storage utilization statistics. This is reflected in the utilized property in QuantaStor for Ceph based Storage Volumes accessed via iSCSI and RBD.
  • Added: Pool Replica Count can now be modified via the Web Manager. Added the ability to list custom pool create profiles in the Ceph Object Store create dialog. Added a Force Scrub checkbox to the Ceph OSD multi-create dialog.
  • Added a minimum hardware/virtual hardware check for Ceph cluster creation and adding ceph cluster members. Minimum requirements for a VM or server to demo or run minimal cluster member services is 2 CPU cores and 2GB of memory.
  • Added new logic to better support creating a ceph cluster with a better suggested-placement-group count based on the osd count on minimal ceph configurations (3 nodes 3-6 OSD's)
  • Added logic to ensure a minimum size of 1GB for Ceph Journal partitions during journal create and multi-osd create.
  • Added better support for NVME devices when used as Ceph journal devices.
  • Added new Ceph Erasure Coded pool profile management.
  • Added: Ceph Object Storage Pools can now be created as Erasure Coded in addition to using different Replica counts.
  • Fixed an issue where a target port would not correctly have its firewall forwarding rules from port 80 to 7480 removed when it had S3/Swift Object gateway access disabled.
  • Fixed an issue during OSD delete where the mount points under mtab were not updated to reflect the correct unmounted status.
  • Fixed an issue where the Web Manager Ceph Dashboard would reflect stale capacity and information when a cluster capacity is reduced from a Ceph OSD remove event.
  • Fixed: Ceph Cluster Members now group by Ceph Cluster when viewed in the central grid view in the Web Manager.


Gluster Scale-out File

  • Fixed: QuantaStor now provides a more accurate view of the current Gluster Volume and Brick status.
  • Fixed an issue to ensure the Glusterfs client mount on QuantaStor used to provide NFS and CIFS access is correctly mounted and not accidentally providing a mountpoint to the root filesystem.


High Availability Failover

  • Added: HA Failover tasks will now show more detailed status during failover tasks.
  • Added improved iofencing tool that greatly improves SCSI-3 Persistent reservation verification and assignment during HA Failover tasks.
  • Added a discrete ARP ping event on HA Failover for each HA VIF configured on an HA Failover Group.
  • Added: HA Failover Groups will now automatically be activated when an HA VIF is first created on them.
  • Added an HA Failover Policy based on the link status of the ports in the machine.
  • Added iSCSI SAN Configuration feature for simplifying the configuration of Tiered QuantaStor High Availability Failover based deployments. This feature allows for automatic iSCSI interconnect configuration of front-end QuantaStor appliances to back-end QuantaStor appliances providing iSCSI Storage Volumes.


Network Shares

  • New Network Share Namespaces feature that allows NFSv4 and CIFS clients to see all shares accessible in a configured namespace on QuantaStor appliances and Network Shares added to that namespace.
  • Fixed: QuantaStor now uses the reload command instead of restart for the Samba CIFS/SMB service.
  • Fixed an issue where Network Shares could reflect incorrect Utilized statistics until a discovery cycle occurs.
  • Fixed an issue where a local user's default group could appear in the AD group list.


Storage Pool Management

  • New security options available in Storage Pool Delete dialog allow for securely erasing the disks when the storage pool is decommissioned.
  • Added a new option to the Create Storage Pool Dialog that will clean the partition label and ensure a disk is available for use prior to creating a Storage Pool with it.

Backup Policies

  • Added: Backup Policies now support pushing data from a QuantaStor Network Share to a external CIFS/NFS share on a third party server/appliance.
  • Fixed an issue that would allow users to delete a Backup job while it was running resulting in errors.
  • Fixed an issue where the Backup Job status would not update during single threaded rsync based transfers.
  • Fixed an issue where Timestamps for Created, Modified and Start Date were all updated to the 'current time' when the qs CLI command 'backup-policy-modify' was executed.
  • Fixed an issue that would still provide the option to cancel an already completed backup job.
  • Backup Policy settings are now shared between nodes in a High Availability Failover Group.
  • Fixed an issue that would cause backup policies to fail if the target share type was changed between CIFS/SMB or NFS.

Cloud Containers / Cloud Backup

  • Fixed: Cloud Backup Schedules will now correctly trigger an immediate backup when manually triggered.


Core Service

  • Added various performance improvements to the QuantaStor service and backend Database.
  • Fixed: Reduced the number of grid events triggered by snapshot grid objects. This will improve Web Manager responsiveness and overall performance for deployments that have a large number of snapshots.
  • Fixed a small timing issue on system startup with the iSCSI Target driver and service that would cause a false positive with the QuantaStor service startup requiring a manual service restart in some instances.
  • Fixed a rare case where adding grid nodes with existing modified admin accounts could result in the new nodes admin account being retained and multiple 'admin' accounts appearing in the QuantaStor user list.
  • Fixed: Various Pool startup and management service startup performance improvements.


QS CLI

  • Changed: qs system-modify commands now require the storage system name or id be passed in for command execution.
  • Added missing feature flags for the qs backup-policy-create CLI tool to bring it inline with the Backup Policy Create Dialog.
  • Added shorthand flags for common qs command options: '-u' for '--user', '-s=' for '--server=' and '-h' for '--help'. More information is available in the qs command help.
  • Added a new --noheader option to show the output of qs list commands without the column headers (see the example after this list).
  • Fixed: the qs tp-modify --port=ethX --port-type=disabled command now correctly removes the static or dhcp networking configuration from the port and sets the port state to disabled.
  • Fixed up the output of the qs disk-list, volume-list, share-list, target-port-list, and pool-list commands so that they include the storage system name as an earlier column and the UUID as the last column.
  • Fixed an issue where the QuantaStor user credentials in %USERPROFILE%\.qs.cnf on Windows were not being read properly for use with the qs command line tool.
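
For example; the server address, credentials and the chosen list command are illustrative:

  # List Storage Volumes from a remote system without column headers
  qs volume-list --server=10.0.8.10 --user=admin --noheader
  # The same call using the new shorthand flags
  qs volume-list -s=10.0.8.10 -u=admin --noheader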


Hardware Enclosure and Controllers

  • Added support for HP HBA series controllers in Hardware Enclosures and Controllers Module.
  • Added better SAS HBA Enclosure correlation between hardware controllers and nodes. Enclosures correlated this way will have the same unique id number.
  • Added correlation for SAS Disk devices presented from SAS Hardware controllers in the Physical Disk view. This makes it easier to identify physical disk objects with their SAS disk counterpart in SAS HBA configurations.
  • Added logic to check for network availability before performing an iSCSI Login on a iSCSI Software Adapter.
  • Fixed: Improved Device Multipathing discovery logic for Physical Disk objects.
  • Fixed some object properties that were not being shown correctly for HP Smart Array Controllers.
  • Fixed: Improved Correlation between Physical Disk objects and Hardware Disk Objects for Adaptec controllers.


Web Manager

  • New Dashboard feature adds real time statistics for Storage System Memory, CPU, Load, and Networking for a selected Storage System in the System Management section. Additional statistics dashboards will be added in upcoming QuantaStor releases for other sections such as Storage Volumes, Storage Pools and many more.
  • Added name search to the assign/unassign storage volume dialog.
  • Added Client Connectivity check IP addresses to columns in the grid for the HA Failover Group section of the Cluster Management tab in the WebUI.
  • Added new 'Source Volume Size' property to the Remote Replication Links for Storage Volumes.
  • Change: Moved Host Groups to the Hosts section and Volume Groups to the Volumes section and removed their discrete sections from the left hand accordion tree view navigation.
  • Fixed: Performance improvements to initial Web Manager load times.
  • Fixed: The options for switching between target and initiator only mode on a FC controller now more clearly show 'Enable FC Target Mode' and 'Enable FC Initiator Mode'.
  • Fixed: Bonded ports can now be selected in the Create VLAN Interface Dialog.
  • Changed Network Target Port disable/enable to offline/online to more clearly indicate the desired link status.
  • Changed the property sidebar so that it is collapsed by default.
  • Changed the Restart NFS and CIFS services Dialog to now auto select the current Storage System by default.

v4.0.8.1194 (Nov 18th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.8.

Release Notes

Kernel and Drivers

  • Adds new 3.19.0-73 Linux kernel that includes updates and a security patch to address CVE-2016-5195 (Dirty COW)
  • Fixed an issue where some systems would not use the latest quantastor provided hardware drivers included with the qstortarget package.

Core Service:

  • Fixed Task list cleanup for remote replication and snapshot schedule tasks so that they are not immediately cleaned up on long running tasks.
  • Fixed Task list cleanup so that they are cleared in the order of their timestamp, previously these were sorted and cleaned up by id.
  • Fixed an issue where the log files for the core quantastor services would sometimes become truncated.

Network Shares:

  • The optional Samba 4 packages available via the samba4-install script are now hosted on the packages.osnexus.com mirror.

v4.0.7.1190 (Oct 28th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.7.

Release Notes

High Availability:

  • Fixed: The FC ALUA state now remains in transitioning state while the Storage Pool and Storage Volumes are being moved between the nodes. This addresses a small window on some clients where a sync based write could have found the Storage Volume LUN in an unavailable state and not retried.

Core Service:

  • Fixed: Many base command execution performance improvements. This improves HA failover times, Storage Pool creation task times and many other operations.
  • Fixed: Tasks are now cleaned up via the order of their timestamp instead of the previous ordering method.

CIFS / SMB:

  • Fixed: Removed Sernet Samba Enterprise external repo from samba4-install script. Samba4 packages now come from OSNEXUS repository servers.

v4.0.6.1187 (Oct 14th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.6.

ISO/DVD/USB Boot Install Image

Release Notes

Drivers

  • Configures ZFS ARC Max at 50% of system memory as the default to provide better default performance for mixed workloads. Please consult with an OSNEXUS Reseller or Sales Engineer in regards to advanced ARC tunings for task or use case specific workloads.


Backup Policies

  • Added the serialized backup option as the default Backup Concurrency option. Serialized backup provides the most economical form of backup and is less I/O intensive on the source and destination shares in comparison to the Parallelized backup options.
  • Fixed an issue where the Backup Job object's status and properties would not correctly update in the WebUI or on other nodes when a Backup Job changes status.
  • Fixed: Backup Jobs that fail will correctly show a Failed state instead of showing Initializing
  • Fixed: Backup Jobs will raise an alert and transition to Failed status if the source share failed to mount or if the QuantaStor target/destination Network Share is disabled.
  • Fixed: Backup Jobs will transition to a Failed state when using NFS and the source NFS share becomes inaccessible.
  • Corrected syntax and argument help for the qs backup-policy-modify command. You can now rename a policy via the CLI, as you can via the WebUI, with the 'qs backup-policy-modify --policy=POLICYNAMEorID --name=NEWNAME' command.


Ceph Scale-out Block

  • Fixed an issue where mapped iSCSI LUNS on Ceph Scale-Out Block were not presented from all QuantaStor nodes in the Ceph Cluster.


Core Service and CLI

  • Fixed an issue where the optional Samba 4 upgrade would not correctly report the service status as online in the QuantaStor system properties.


Disk Device Multipathing

  • Fixed an issue that could prevent a multipathed Hotspare disk being used to replace a failed disk in a ZFS Storage Pool.
  • Fixed a disk mapping issue for Encrypted Multipathed devices to ensure that all disk paths receive SCSI3 reservations.
  • Encrypted Multipathed devices will now appear in the WebUI and CLI as having all of their path associations.


High Availability

  • Fixed an issue where Storage Volumes on a FC ALUA deployment could sometimes not initialize properly on system boot or when first created and presented to a Host


Licensing

  • Adds new license types for HA pairing and Support Renewal only Licenses.
  • Fixed an issue where two HA nodes with Multipathed disk devices were incorrectly reporting double the license capacity used.
  • Fixed an issue where some SSD devices incorrectly counted towards licensed capacity.
  • Fixed an issue where hotspares in use repairing a ZFS Storage Pool could be incorrectly counted towards License capacity.


Network Shares

  • Fixed: The Ownership Setting>Assigned Group will now correctly show the AD group name in addition to the Group ID (gid) in the Network Share Dialog.


Networking

  • Disabled IPv6 address discovery for Network devices by default.


SNMP

  • Updated MIB.

v4.0.5.1174 (August 17th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.5.

ISO/DVD/USB Boot Install Image

Release Notes

High Availability

  • Corrected an issue with mapping of devices for iofencing. This affected devices that had dm Multipathing and/or LUKS Encryption.

v4.0.4.1173 (August 10th 2016)

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.4.

Release Notes

High Availability

  • Added support for Fibre Channel ALUA High Availability.

iSCSI/FC Target

  • Added Legacy SCSI Target USN support for upgrades from QuantaStor 4.0.3 and older releases.

Storage Pools

  • Fixed: resolved an issue with creating XFS Storage Pools with LUKS Encryption enabled.

v4.0.3.1169 (July 20th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.3.

ISO/DVD/USB Boot Install Image

Release Notes

Active Directory

  • Added Trusted Domain support for Customers who have installed Samba4. Users and Groups from Trusted Domains can now be added by searching in the Network Share User Access>AD User or AD Group section.
  • Removed getent as a dependency for Active Directory UID/GID lookups. UID and GIDs are now shown for users that have CIFS access assigned under the Network Share User access Tab.
  • Fixed an issue where the idmap selection was not visible in the joining Active Directory domain section of the CIFS Configuration Dialog.
  • Fixed qs-util adcachegenall Active Directory caching used for very large (100,000+ users/groups) environments. Generating the Active Directory cache is now much faster.
  • Fixed: the idmap ranges for autorid mode were reduced as the values shipped with 4.0 were too high, preventing uid/gid generation from the Active Directory SID.

Backup Policies

  • Changed the purge policy function for single thread mode to use rsync '--delete-after' instead of running pwalk after the transfer completes (see the sketch after this list).
  • Added a lower CPU priority for Backup Policy tasks. Added a lower CPU priority for Remote Replication tasks.
  • Fixed an issue where the Daily Purge Policy would trigger at the end of the Backup Policy instead of only once a day.
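
The single-threaded purge now behaves like the following rsync form, removing files that no longer exist on the source only after the transfer completes rather than via a separate pwalk pass; the paths shown are illustrative:

  rsync -a --delete-after /mnt/shares/source-share/ /mnt/backup-target/source-share/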

Ceph Scale-out Block and Object

  • Added updated ceph-install script for customers upgrading from 3.x releases who are interested in installing and testing the QuantaStor Ceph scale-out block and object features.

Cloud Containers and Cloud Backup

  • Fixed an issue with creating a cloud backup without a Cloud Storage Container. This scenario will now properly error out and raise an alert indicating a Cloud Storage Container should be created.
  • Fixed an issue where the Cloud Container Repair task would not complete due to a short timeout value on the process.
  • Fixed: Restore from Cloud backup will now only list Storage Pools local to the QuantaStor system where the Cloud Container is mounted.

Hardware RAID Modules

  • Added Cisco UCS C3260 enclosure layout support.
  • Added new qs hw-unit-auto-create CLI command that will take different inputs to be used as rules to setup Hardware RAID units automatically. More details are in the `qs help=hw-unit-auto-create` output.
  • Fixed an issue where dedicated RAID controller hot-spares would show as a warning state when they are perfectly healthy.
  • Updated included Adaptec controller utilities for Adaptec Hardware Module support.

High Availability

  • Added new HA failover feature to perform Client Connectivity testing. This feature is available in the Modify Storage Pool HA Failover Group Dialog and will ping a specified set of client IP Addresses and then execute a failover if a chosen policy for the failure is met.
  • Added improvements to HA Cluster Storage Pool failover speed for cases where the Node is failed due to a power loss or will not be able to communicate with the node that is taking ownership of the pool.
  • Fixed an issue with SCSI-3 Reservations and registrations used by the HA Clustered ZFS Storage Pool feature. Any customers running the HA Clustered ZFS Storage Pool feature are advised to upgrade to 4.0.3 or newer.
  • Fixed an issue with the HA heartbeat rings where a ring member would be in an offline/warning state.
  • Fixed an issue where the heartbeat cluster service would start on a node that had no Cluster heartbeat rings configured.
  • Fixed an issue that prevented the creation of HA Virtual Network Interfaces on top of VLAN tagged interfaces.
  • Fixed an issue where VAAI SCSI target support could prevent a Storage pool export during HA Clustered Storage Pool Failover.
  • Fixed a corner case with HA Storage Pool startup when both primary and secondary nodes are powered at the same time.
  • Fixed an issue where objects related to an HA Cluster Storage pool would not be updated if the Grid Master node is unavailable and an HA Storage Pool failover occurs.
  • Fixed: Alert messages related to heartbeat ring status changes now correctly identify the heartbeat ring as the source of the alert with a clearer message. Previously the alert would state the node was offline, which was incorrect.

Network Shares

  • Added new Create and Modify Network Share Dialogs. CIFS User access, ACL Permissions and Share Owner settings are now on the User Access Tab. Advanced settings such as compression mode, ACL and xattr features have been moved to a new Advanced Tab.
  • Added: the quota options in the Network Share Create and Modify Dialogs now allow for the exclusion of snapshot used capacity from the Quota.
  • Fixed: The Network Share User Access tab grid view in the WebUI now correctly sorts on username and supports sorting by User Access Mode.
  • Fixed an issue that would prevent the modification of a Network Share name that included the '-', '_' or '.' characters.
  • Fixed an issue that could sometimes cause a Network Share creation to fail if 'nobody' and 'nogroup' were specified as the share owner and group.
  • Fixed an issue that could sometimes occur where the Network Share Create or Modify dialog would generate an error regarding the share owner/group not being set when an AD user was selected.

Remote Replication

  • Added Consistency Groups for Remote Replication. Replication Schedules now quickly take the snapshots for all Volumes or Network Shares in the schedule at the same point in time; the snapshots are then transferred serially for best performance.
  • Fixed an issue where a lock was not placed on a Network Share replication link, this could lead to Remote Replication Schedules containing only Network Shares running in parallel instead of serially.
  • Fixed a conflict with the VMware VAAI extended copy feature when remote replication was running for Storage Volumes.
  • Fixed: QuantaStor will now do more to auto re-create a replica-assoc if it is missing or was removed and there is a good source/target match.
  • Fixed: the Enable and Disable Remote Replication schedule dialogs now include more detail regarding the number of shares in the selected schedule.

Scale-out File

  • Added support for disperse Gluster Volumes to span the disperse volume over an uneven number of systems that do not match the disperse configuration. Previously, a 4D+1P configuration would require 5 or 10 systems; now this configuration can be deployed on 5, 6, 7, or any number of nodes as long as enough bricks are available to meet the conditions of the Gluster disperse configuration.
  • Fixed: there was an issue where Gluster tasks would not succeed due to another Gluster task or command transaction being in progress; this has been corrected with additional retry logic.
  • Fixed an issue that would allow removal of a QuantaStor node from the grid while it was still in use serving Gluster Volume access and bricks. If you determine you do have a need to perform a grid node removal while a Gluster configuration is present on that node, you can do so via the force flag.
  • Fixed: Removed disperse configuration options from the WebUI that Gluster does not natively support.

SCSI Target

  • Added: SCSI Target USN's now match the Storage Volume object unique ID's.

Core Service and CLI

  • Added further detail to the ZFS Storage Pool Resilver property to show how much time the Storage Pool reports as remaining for a resilver.
  • Added qs pool-preimport-scan command that can now be used to get a list of available pools for importing.
  • Added new 'timezone-list' and 'timezone-set' commands to the qs CLI. These commands allow users to change the timezone of a QuantaStor system in the event the system is relocated or an incorrect timezone was chosen on system startup. More information is available via the 'qs help=timezone-list' and 'qs help=timezone-set' commands (see the example after this list).
  • Removed auto import logic on QuantaStor service startup for Storage Pools that were not local or owned by the Storage System. This corrects a behavior where a storage pool would be imported incorrectly on systems where shared disk access is possible from multiple head nodes. Customers who wish to import foreign Storage Pools from other QuantaStor systems or for Open-ZFS based pools should continue to use the Pool Import Dialog.
  • Fixed: qs import-pool command to allow importing of storage pools on a remote grid member.
  • Fixed: qs pool-import now requires the foreign pool name to import a specific storage pool.
  • Fixed an issue where the QuantaStor iSCSI Software Adapter (initiator) would sometimes not automatically login to configured targets on system reboot.
  • Fixed an issue where the QuantaStor iSCSI Software Adapter (initiator) would not immediately scan for remote iSCSI targets on startup. In some cases this would cause a Storage Pool to be slow to import or not complete importing properly until the disks were rescanned and Storage Pool started manually.
  • Fixed: the qs license-list command output now provides verbose license details by default.
  • Fixed an issue at system startup that could lead to an alert regarding a problem for discovery of the iSCSI Target service running state.
  • Fixed a conflict with latest SCST driver and Instant rollback from snapshot feature that would sometimes prevent snapshot rollback of Storage Volumes.
  • Fixed an issue where deletion of a user created via the QuantaStor Management interfaces would not also remove the corresponding local linux user account.
  • Fixed an issue that can sometimes occur where a Stop Storage Pool task would not correctly stop an XFS storage Pool.
  • Fixed an issue that could sometimes occur where a Storage pool resilver would complete, but the failed disk would not be removed automatically.
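
A minimal sketch of the new timezone commands; the '--timezone' argument name and value are illustrative, see the qs help output for the exact syntax:

  # Show the available timezone identifiers
  qs timezone-list
  # Set the system timezone
  qs timezone-set --timezone=America/Los_Angeles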

Web Manager

  • Added updated Storage Pool Create dialog to provide better detail on when to choose XFS or ZFS storage Pool options.
  • Fixed: The Rollback Storage Volume dialog will now tell a user if there are no available snapshot recovery points.
  • Fixed: The grid view in the center of the Web manager for Volumes and Network Shares can now be correctly sorted based on any chosen column sorting.
  • Added: the Alert tab in the Web Manager will now show a count of the number of alerts.
  • Fixed an issue where the Storage Pool % Utilized property was not updating as often as the grid view or other Utilized percentage information.
  • Fixed an issue where the About box in the Web Manager would not correctly show the versioning information for the system you are accessing via the WebUI.
  • Fixed an issue where the ribbon bar would not always appear in the Web Manager on smaller resolution screens.
  • Fixed: Storage Volumes that have their % Reserved changed to 0 % from a higher % value will now correctly report as Thin Provisioned
  • Fixed an issue where the Name field in the Resource Group -> Add/Remove Users dialog would sometimes not be populated.

Localization

  • Fixed an issue where HTML formatting tags would be present in some Localizations.

v4.0.2.1139 (April 29th 2016) DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.2.

ISO/DVD/USB Boot Install Image

Release Notes

New Driver releases:

  • HP SmartArray RAID Controllers hpsa 3.4.10-0
  • Mellanox Infiniband Adapters mlx4_ib 3.2-2.0.0
  • Mellanox Converged Ethernet Adapters mlx4_en 3.2-2.0.0

High Availability

  • Added logic to ensure HA failover would succeed during manual failover if the iptables firewall was unresponsive.

Remote replication

  • Added a timeout to qs-util rraterebalance
  • Fixed a conflict with the VMware VAAI extended copy feature when there was remote replication for Storage Volumes.
  • Changed default replication throttle rate from 10MB/s to 30MB/s

Core Service

  • Added Further grid communication optimizations.
  • Fixed a bug that caused grid events to be sent for objects that didn't change.

Web Manager

  • Fixed a compatibility issue with IE11 where user entered names in a textfield would not be accepted.

iSCSI Target Driver

  • Fixed an issue where removing or adding a physical block device to the system would cause the iSCSI target driver to deadlock.

v4.0.1.1128 (April 7th 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

Click here for instructions on upgrading to QuantaStor v4.0.1.

ISO/DVD/USB Boot Install Image

Release Notes

  • Adds a kernel upgrade to the Linux 3.19-0.58 kernel (latest stable LTS release). This kernel update addresses a potential stability issue introduced with the previous 3.19-0.51 LTS kernel included with QuantaStor v4.0.0. The issue does not affect data integrity in any way but could lead to an instability that would require a reboot.

Scale-out Block and Object

  • Fixed: Ceph Cluster create now only allows alphanumeric and underscore '_' characters in the cluster name. The 'qs ceph-cluster-create' CLI help has been updated to reflect this.
  • Fixed: Corrected an issue that would cause the removal of Scale-out Ceph Storage Volume to fail.

Web Manager

  • Fixed: The Cloud Container tab will now correctly appear on Community Edition keys that have the Cloud Backup feature enabled on the license key.

v4.0.0.1123 (March 31st 2016) KERNEL AND DRIVER UPGRADES AVAILABLE REBOOT REQUIRED

Upgrade Instructions

QuantaStor 4.0.0 was superseded by the QuantaStor 4.0.1 release on April 7th 2016. Please click here for the QuantaStor 4.0.1 release notes and upgrade instructions.

Release Notes

  • Adds kernel upgrade to the Linux 3.19-0.51 kernel (latest stable LTS release)
  • It is now even easier to deploy QuantaStor via PXE/Kickstart solutions such as Red Hat Kickstart or Cobbler.
  • New Driver releases:
    • Dell PERC and Avago/LSI MegaRAID controllers megaraid_sas 06.810.08.00
    • Avago/LSI 12Gb/s SAS HBAs mpt3sas 12.00.00.00
    • HP SmartArray RAID Controllers hpsa 3.4.14
    • HP Broadcom tg3 3.137k
    • Adaptec RAID Controllers aacraid 1.2-1.41010
    • Intel 40GbE Network Adapters i40e 1.4.25
    • Intel 10GbE Network Adapters ixgbe 4.3.13
    • Intel 1GbE Network Adapters igb 5.3.4.4
    • Intel 1GbE Network Adapters e1000e 3.3.3
    • SolarFlare Network Adapters sfc 4.7.0.1031
    • Mellanox Infiniband Adapters mlnx4-en 3.2
    • Qlogic FC Adapters (supports 16Gb Qlogic Gen 5 26xx controllers) qla2x00tgt 3.1.0
  • Scale-out Block and Object Storage (Ceph integration)
    • Added new Scale-out Ceph Object Storage support
      • A new Ceph Object Storage will have a default 'objadmin' user account with S3 and Swift access keys. This user is intended for diagnostics and resolution of ACL issues. This user can be disabled.
      • Added Ceph User access model for management of Secret and Access keys for Scale-out Object Storage S3 and Swift access. Users can be enabled and disabled and can have different ACL access.
    • Adds initial support to the 'qs ceph-pool-create' CLI for custom crush maps and additional Ceph Storage Pools. Contact OSNEXUS support if you need assistance with creating and deploying a custom crush map.
    • Added the Add and Remove Ceph Monitors dialogs to the Web Manager.
    • Added a new Ceph Member status tab to the Web Manager
    • Added support to remove a Ceph Monitor configuration in the Ceph Cluster from nodes that are offline or will be permanently unavailable.
    • Added: Multi-OSD create now has the option to use available journal partitions on existing journal devices.
    • Added: Scale-out Block and Object Ceph clusters will now allow 48 hours before initiating an auto-heal to rebalance data on the remaining OSDs. This helps ensure a rebalance does not occur if a node was taken offline due to a quickly corrected hardware component failure or a temporary power failure.
    • Added: You can now use the 'qs ceph-pool-modify' CLI command with the --max-replicas=X option to modify the replica count of an existing Storage Pool and initiate a rebalance of the Placement Groups to the new level (see the example at the end of these release notes).
    • Added enhancements to the 'qs ceph-monitor-remove' command that allow the Ceph Monitor being removed to be identified by its storage system name or storage system ID.
    • Added protections to the Modify Network dialog and the 'qs tp-modify' CLI to warn about changing the network configuration of network ports used with a Ceph Cluster. Please contact OSNEXUS support for assistance if you determine you need to change the network configuration on a node.
    • Added an additional warning health status for the Ceph Cluster to reflect the error or warning state of underlying Monitors or OSDs.
    • Fixed: The 'ceph-install' command, which can be run on older deployments to enable scale-out block, now also installs all of the dependencies required for scale-out object.
    • Fixed: Scale-out Block Ceph Pools now correctly show their individual used capacities. Previously all Ceph Pools reported a combined used capacity.
    • Fixed a rare condition that could cause a QuantaStor node to halt during shutdown or reboot when a scale-out Storage Volume/RBD has active client access.
    • Fixed: A newly created Ceph Pool now appears with all properties in the Storage Pool list in the Storage Management tab.
    • Fixed an issue that can sometimes occur when removing a Ceph Monitor.
    • Fixed an issue where the client and backend network settings provided during Ceph Cluster Creation were not correctly set.
    • Fixes to 'qs ceph-cluster-*' CLI commands to clarify help messages and command arguments.
    • Fixed: The Ceph Cluster status now shows a more accurate health status of Initializing when a Ceph Cluster is first created.
    • Fixed an issue with Ceph Scale-out Block Storage Volumes where host access assignment events would be rebroadcast.
    • Fixes and various small updates for Ceph Cluster deployment and management.
  • Scale-out File Storage (Gluster integration)
    • Added: Removing a Gluster Brick now performs additional checks to ensure the action does not compromise data availability. Please contact OSNEXUS support for assistance with removing Gluster bricks whose removal is not allowed via the qs CLI or Web Manager.
    • Added: Gluster Peer Setup now allows for selection of specific peers in a grid for use in a Gluster configuration. This allows multiple Gluster peer configurations to exist on the same QuantaStor management grid. Previously all grid nodes were included in the Gluster Peer setup.
    • Added firewall rules to ensure access is allowed for Gluster version 3.4 and higher clients.
  • High-Availability
    • Added: Storage Pools created with the one-click Encryption feature are now supported as Shared Storage Pools with the High Availability Storage Pool Cluster feature.
    • Added: When creating a HA Failover Group, selection of the second node is now scoped to the nodes available in the site cluster of the primary node.
  • Encryption
    • Added: Storage pools can now be created with LUKS encryption enabled on the underlying disk devices. This automates the manual tasks that had previously only been available via the qs-util crypt* utility.
  • Storage Pool
    • Added: The Import Storage Pool Dialog has been expanded to allow the selection of any detected Storage pools that are not already imported and managed by QuantaStor. This allows for the easy import of Storage Pools from other OpenZFS based storage solutions.
    • Added: Storage Pool creation can now map Storage Pool RAID redundancy for RAIN/RBOD configurations across backend SANs when LUNs presented from legacy or third-party SANs include a Serial, SCSI ID or Enclosure ID. This helps ensure that there is no single point of failure for FC or iSCSI LUNs presented to a QuantaStor Storage Controller from HP MSA, QuantaStor SDS, IBM N Series or other certified SAN solutions.
    • Fixed a rare issue that could occur on some hardware deployments where a ZFS Storage pool would come online before the multipathing driver finished creating all of the device mapper devices.
    • Fixed: Adding a Hotspare to a Storage Pool that is degraded now immediately begins the resilver/rebuild process.
    • Fixed: Failed drives that showed as UNAVAIL with a numerical ID will now be properly removed from the pool once a hot spare resilver has completed to replace the disk.
    • Fixed a rare case where a resilver/rebuild of a Storage Pool RAID would not start when there were available Global or Pool assigned Hot Spares.
  • Network Shares
    • Fixed: A Network Share created for CIFS/SMB with NFS disabled in the Network Share Create dialog now has the active option available and will be created in an active state by default.
  • Remote Replication
    • Added logic to prevent accidental user-initiated deletion (via the CLI, API or Web Manager) of replica snapshots that are required by replication schedules for successful delta replication.
    • Improved Storage System Link pre-check logic to ensure that remote replication pre-check of System Link exchanged SSH keys succeeds in the event of a temporary network problem or slow WAN link.
    • Fixed an issue where replication to a _chkpnt replica that has a manually created or other block snapshot could fail silently.
    • Fixed: Remote replication now verifies that replica Parent _chkpnt and all child snapshots have the correct createdBySchedule association and corrects if not present.
  • Cloud NAS Gateway
    • Added further discovery logic for re-discovering existing Cloud Backups of Storage Volumes if a Cloud Container needs to be added to a QuantaStor for recovery.
    • Added the gsutil packages to the Installation ISO for Google Cloud Container support. These packages can now be installed via 'apt-get install python-gsutil' for existing deployments.
    • Added: The Web Manager now includes the ability to specify the Google Cloud Storage project name when creating a Cloud Container using Google Cloud Storage. Previously this had to be manually entered in a config file.
    • Fixed: Cloud Backups will no longer be incorrectly listed in the Instant Rollback Snapshot dialog for a Storage Volume. Cloud Backups must be restored using the Restore Cloud Backup dialog.
    • Fixed an issue that prevented the repair, or the removal and re-adding, of a Cloud Container that has experienced a lengthy network outage or loss of access to the Object Storage.
    • Fixed an error state after adding or creating a Cloud Container with Google Cloud Storage or Amazon S3.
  • Backup Policies
    • Added: The Web Manager now shows the Backup Policy name and finish date in the Backup Job properties.
    • Fixed the Backup Policy Job launcher for pwalk and rsync. Previously a process could be left improperly closed and reported as 'defunct'.
    • Fixed inconsistent Backup Policy Job detail in the Web Manager.
    • Fixed an issue with creating a Backup Policy of a remote NAS share served by a Windows AD Server.
  • Web Manager:
    • Added: The Web Manager has a new modern theme and branding for the 4.0 QuantaStor release.
    • Added: The Web Manager has a new Utilized% column in some views that has a Bar showing Utilized % for Storage pools, Storage Volumes, Ceph Storage pools and Ceph OSD's.
    • Added: The Web Manager now has additional connection retry logic that will reduce the need to re-login if there was a temporary network issue between the web browser and the QuantaStor management services.
    • Added support for renaming the hostname of a Host in the Web Manager Host Modify dialog and with the 'qs host-modify' CLI --hostname flag (see the example at the end of these release notes). Renaming a host will not affect client access as the hostname is simply a human-readable property of the object.
    • Added: There is a new tab in the Physical disk view that lists any Global Hot Spares configured for Physical Disk objects.
    • Improved many dialogs with grid controls. The dialogs are now horizontally elastic making it possible to easily view more columns.
    • Fixed an issue where the Web Manager could sometimes log the user out automatically if there was considerable UTC clock skew between the browser and the QuantaStor management service.
    • Fixed: Dialogs that list @GMT Snapshots of Network Shares now include the parent replica or Network Share name to provide more clarity on the snapshot being selected for the operation.
    • Fixed: Dialogs that previously referenced the IP address of a Target Port for configuration now also show the physical port name.
    • Fixed an issue to correctly remove a Host IQN child object if the associated Host object was removed or no longer exists.
    • Fixed an issue where properties fields could sometimes not be selected to allow copying of their contents.
    • Fixed an issue where some objects on secondary nodes would not show the master node.
    • Fixed an issue where the browser Locale setting would sometimes not be used to automatically select the correct Language Localization.
    • Fixed an issue where the Web Manager was not showing the corresponding size in Decimal Bytes (Terabyte [TB], Gigabyte [GB], etc.) alongside the Binary Bytes (Tebibyte [TiB], Gibibyte [GiB], etc.). More information on the differences is available here: https://en.wikipedia.org/wiki/Tebibyte
    • Fixed: The Disk Type column in the Hardware Controller Create Unit dialog, which shows SAS/SATA/etc., will now appear by default.
    • Fixed an issue where a Resource Group would not be automatically selected in the drop down when using the Add/Remove Resource Users & User Groups dialog.
  • iSCSI Target Driver
    • Fixed an issue with the SCST SCSI Target driver where an iSCSI client that unexpectedly closed a connection due to client stability or network related issues could lead to a rare crash.
  • Licensing
    • Added Migration edition license support.
  • Core Service
    • Added further Grid communication improvements.
    • Added direct query of replication target storage volumes prior to starting replication or removing excess snapshots.
    • Fixed: The SNMP-MIB file will now correctly reflect the release date code for the currently installed QuantaStor release.
  • REST API Service
    • Fixed a corner case where some URL strings passed via a REST call were not decoded.
  • Security
    • Fixed: Addressed CVE-2015-4000 (Logjam) in the Web Server Package with increase of the default Modulus length to 2048-bit and removal of weak DHE Diffie-Hellman ciphers.
    • Added: New QuantaStor users created via the Users and Groups section of the Web Manager or 'qs user-add' CLI command will now have the same User ID on all QuantaStor nodes. The new UID range is 100000000-199999999.
    • Fixed: An unexpected web request to the Web Server will now correctly route to a 404 error page.
  • Hardware Modules
    • Added: The Adaptec CLI utility 'arcconf' has been updated to v1.7-21229
    • Added Multi-Shelf SAS JBOD enclosure support, including enclosures such as the Dell MD1280.
    • Added: Mark Disk as Good in the Web Manager and 'qs hw-disk-mark-good' CLI will now initialize/convert RAW and Passthrough devices on Adaptec Controllers for use with creating RAID units.
    • Added: Raw Passthrough disks on Adaptec controllers will now be initialized on operations for Hardware Controller Create Unit in Web manager and 'qs hw-unit-create' CLI command
    • Added: RAID units marked as a system device or marked with a boot flag in a RAID Controller configuration can now be deleted with the force flag.
    • Added: An exception will now be raised if a Hardware RAID unit is selected for deletion that has an Active Storage Pool. This includes delete operations for the Hardware Controller Delete Unit dialog in the Web Manager or 'qs hw-unit-delete' CLI operation.
    • Fixed: Adaptec RAID Controllers with Super Cap BBUs now correctly show their health status.
    • Fixed an issue where some third party LSI based HBA controllers would not appear in the Hardware Enclosures and Controllers section of the Web Manager or for the 'qs hw-controller-list' CLI command.
    • Fixed: Logical RAID units that have a Hardware SSD Cache unit assigned now correctly show the cache enabled icon and property.
    • Fixed: LSI/Avago controllers can misreport a temperature anomaly/differential with some firmware releases; this is now filtered and treated as informational.
  • CLI
    • There is a new QuantaStor 4.0 qs CLI available for Windows at http://www.osnexus.com/downloads/
    • Fixed: You can now list the associations between Snapshot Schedules and snapshots with the 'qs scha-list' command
    • The 'qs license-get' command now returns, by default, the license of the local system the qs command is issued against when no other arguments are given (see the example at the end of these release notes).
  • Logging
    • 'qs-sendlogs' utility now collects additional scale-out block and scale-out object log details.
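
The sketch below illustrates the 'qs ceph-pool-modify' replica-count change noted in the Scale-out Block and Object section above. Only the --max-replicas option is taken from the release notes; the pool-selection placeholder and the value 3 are assumptions for illustration — consult 'qs ceph-pool-modify --help' for the exact syntax.

  # Hypothetical example: raise the replica count of an existing scale-out
  # Ceph Storage Pool to 3; QuantaStor then initiates a rebalance of the
  # Placement Groups to the new replica level.
  qs ceph-pool-modify <pool-name-or-id> --max-replicas=3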
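
Similarly, a minimal sketch of renaming a host via the 'qs host-modify' CLI as described in the Web Manager section. Only the --hostname flag is from the release notes; the host-selection placeholder and the new name are illustrative assumptions.

  # Hypothetical example: rename the hostname property of a Host object.
  # This only changes a human-readable label and does not affect client
  # access to the host's assigned Storage Volumes.
  qs host-modify <host-name-or-id> --hostname=qs-node-01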
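
Finally, the default behavior of 'qs license-get' noted in the CLI section: with no other arguments the command reports the license of the local system it is issued against.

  # With no other arguments, return the license details of the local
  # system the qs command is issued against.
  qs license-get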


Change Log Archive

Select the link above to see the Change Log Archive of older revisions.