CLI Guide Overview

From OSNEXUS Online Documentation Site
 
== General Commands ==

=== help [h] ===

Help page for all commands. 'qs help' by itself gives a verbose listing of all commands with their input parameters; try 'qs help --min' for a compacted list. Optionally you can get help on a specific category or command (for example, 'qs help --category=volume'). You can also search for commands with 'qs help=partialcommandname', which returns all CLI commands that match the specified full or partial command name.

<pre> qs help|h [--command=value ] [--category=value ] [--wiki=value ] [--api=value ] </pre>

<pre>
[--command]      :: A specific CLI command to get detailed help with.
[--category]     :: Category of commands to get help with (acl, alert, host, host-group,
                            license, role, session, schedule, pool, quota, system, volume,
                            volume-group, task, user, user-group).
[--wiki]         :: Generates help output in a format that's importable into MediaWiki.
[--api]          :: Generates help output in a parsable format for automating the API.
</pre>

=== tc-commands ===

Lists all CLI commands; used to drive tab completion.

<pre> qs tc-commands [--command=value ] </pre>

<pre>
[--command]      :: Command to show method args for.
</pre>
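As a quick illustration of the help forms above, the snippet below just assembles the common invocations as strings so the syntax is visible; actually running them requires a QuantaStor system with the 'qs' CLI installed, and 'volume' is only an example category.

```shell
# Common help invocations described above, assembled as strings.
HELP_ALL="qs help"                    # verbose listing of every command
HELP_MIN="qs help --min"              # compacted command list
HELP_CAT="qs help --category=volume"  # help for a single category
echo "$HELP_CAT"
```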
  
== Alert Management ==

=== alert-clear [a-clear] ===

Deletes the alert with the specified ID.

<pre> qs alert-clear|a-clear --id=value </pre>

<pre>
<--id>           :: Unique identifier (GUID) for the object.
</pre>

=== alert-clear-all [a-clear-all] ===

Clears all the alerts.

<pre> qs alert-clear-all|a-clear-all </pre>

=== alert-config-get [ac-get] ===

Alert configuration settings indicate where alert notifications should be sent.

<pre> qs alert-config-get|ac-get </pre>
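A typical cleanup flow is to look up an alert's GUID with alert-list and then pass it to alert-clear. The sketch below assembles such a call; the GUID shown is hypothetical, and on a real system it would come from 'qs alert-list' output.

```shell
# Hypothetical alert GUID for illustration only.
ALERT_ID="5f1e9c2a-7b41-4d6e-9a0f-3c8d2b1e4f67"
CLEAR_CMD="qs alert-clear --id=$ALERT_ID"
echo "$CLEAR_CMD"
```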
=== alert-config-set [ac-set] ===

Sets the alert configuration settings such as the administrator email address, SMTP server address, etc. Be sure to use the alert-raise command to generate a test alert after you modify the alert configuration settings to ensure that emails are being properly sent.

<pre> qs alert-config-set|ac-set [--sender-email=value ] [--smtp-server=value ] [--smtp-port=value ] [--smtp-user=value ] [--smtp-password=value ] [--smtp-auth=value ] [--support-email=value ] [--freespace-warning=value ] [--freespace-alert=value ] [--freespace-critical=value ] [--pagerduty-key=value ] </pre>

<pre>
[--sender-email] :: Sender email address to be used for alert emails sent from the storage
                            system.
[--smtp-server]  :: IP address of the SMTP service through which emails should be routed.
[--smtp-port]    :: Port number of the SMTP service (e.g. 25, 465, 587) through which emails
                            should be routed (0 indicates auto-select).
[--smtp-user]    :: SMTP user name to use for secure SMTP access to your SMTP server for
                            sending alert emails.
[--smtp-password] :: SMTP password to use for secure SMTP access to your SMTP server.
[--smtp-auth]    :: SMTP security mode.
[--support-email] :: Email address for local customer support.
[--freespace-warning] :: The percentage of free space left in a pool at which a warning is sent
                            to indicate pool growth plans should be made.
[--freespace-alert] :: The percentage (default=10) of free space left in a pool at which alerts
                            are generated.
[--freespace-critical] :: The percentage (default=5) of free space left in a pool at which all
                            provisioning operations are blocked.
[--pagerduty-key] :: Specifies the service key to which alerts will be posted to PagerDuty.
                            Please see www.pagerduty.com for more detail on service keys.
</pre>

=== alert-get [a-get] ===

Gets information about a specific alert.

<pre> qs alert-get|a-get --id=value </pre>

<pre>
<--id>           :: Unique identifier (GUID) for the object.
</pre>

=== alert-list [a-list] ===

Returns a list of all the alerts from all systems in the grid. Adjust the settings for the 'admin' or custom user account(s) to indicate which alerts should be sent out via the email call-home mechanism.

<pre> qs alert-list|a-list [--filtered=value ] </pre>

<pre>
[--filtered]     :: Returns just the specified number of most recently created alerts.
</pre>

=== alert-raise [a-raise] ===

Raises a user-generated alert for testing the SMTP server configuration settings.

<pre> qs alert-raise|a-raise --title=value --message=value [--alert-severity=value ] </pre>

<pre>
<--title>        :: Title string to be echoed back from the server.
<--message>      :: Message string to be echoed back from the server.
[--alert-severity] :: Severity of the user-generated alert to be raised. [always, critical,
                            error, *info, warning]
</pre>

=== event-list [ev-list] ===

Returns a list of all the internal events in the event queue (used for service monitoring).

<pre> qs event-list|ev-list [--index=value ] [--max=value ] [--time-stamp=value ] </pre>

<pre>
[--index]        :: Starting event index of where to begin listing events from.
[--max]          :: Maximum number of events to enumerate.
[--time-stamp]   :: Time-stamp of the service.
</pre>


== Backup Policy Management ==

=== backup-job-list [bj-list] ===

Returns a list of backup jobs in the system.

<pre> qs backup-job-list|bj-list </pre>

=== backup-policy-create [bp-create] ===

Creates an ingest backup policy which pulls data from NFS shares to the QuantaStor appliance.

<pre> qs backup-policy-create|bp-create --name=value --desc=value --network-share=value --remote-hostname=value --remote-export-path=value [--remote-export-type=value ] [--retain-rules=value ] [--policy-type=value ] [--scan-threads=value ] [--hours=value ] [--days=value ] [--retain-period=value ] [--purge-policy=value ] [--backup-to-root=value ] [--maintain-logs=value ] [--start-date=value ] [--enabled=value ] </pre>

<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--desc>         :: A description for the object.
<--network-share> :: Name or ID of a CIFS/NFS network share.
<--remote-hostname> :: Name of the remote host containing NFS shares to be backed up.
<--remote-export-path> :: Remote export path to be mounted to access the data to be backed up.
[--remote-export-type] :: Remote mount type (currently NFS only) [*nfs]
[--retain-rules] :: Retention rules (atime, ctime, mtime) [atime, ctime, *mtime]
[--policy-type]  :: Backup policy type [*inbound]
[--scan-threads] :: Number of concurrent threads for walking/scanning the specified target
                            export.
[--hours]        :: For the specified days of the week, snapshots will be created at the
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
[--days]         :: The days of the week on which this schedule should create snapshots.
                            [fri, mon, sat, *sun, thu, tue, wed]
[--retain-period] :: Number of days of file history on the specified filer export to back up.
[--purge-policy] :: Indicates how old files should be cleaned up / purged from the backup.
                            [after-backup, *daily, never, weekly]
[--backup-to-root] :: Indicates that the backups should go into the root folder of the
                            Network Share.
[--maintain-logs] :: Maintain logs of all the files that were backed up in the
                            /var/log/backup-log folder within the appliance.
[--start-date]   :: Start date at which the system will begin using a given schedule.
[--enabled]      :: Set to enabled to activate the backup policy.
</pre>

=== backup-policy-delete [bp-delete] ===

Deletes the specified backup policy.

<pre> qs backup-policy-delete|bp-delete --policy=value </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
</pre>

=== backup-policy-disable [bp-disable] ===

Disables the specified backup policy so jobs are not automatically run.

<pre> qs backup-policy-disable|bp-disable --policy=value </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
</pre>

=== backup-policy-enable [bp-enable] ===

Enables a backup policy that was previously disabled.

<pre> qs backup-policy-enable|bp-enable --policy=value </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
</pre>

=== backup-policy-get [bp-get] ===

Returns detailed information on a specific backup policy.

<pre> qs backup-policy-get|bp-get --policy=value </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
</pre>
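Tying the alert commands together: after changing SMTP settings with alert-config-set, the documentation recommends raising a test alert to verify delivery. The snippet below assembles that two-step sequence as strings; the sender address, SMTP server, and port are made-up example values, and a live 'qs' CLI is assumed for actually running the commands.

```shell
# Example values only; substitute your own SMTP relay and addresses.
SENDER="alerts@example.com"
SMTP="192.168.0.25"
PORT=587
SET_CMD="qs alert-config-set --sender-email=$SENDER --smtp-server=$SMTP --smtp-port=$PORT"
TEST_CMD="qs alert-raise --title=test --message=smtp-check --alert-severity=info"
echo "$SET_CMD"
echo "$TEST_CMD"
```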
  
=== backup-policy-list [bp-list] ===

Returns a list of backup policies in the system.

<pre> qs backup-policy-list|bp-list </pre>

=== backup-policy-modify [bp-modify] ===

Modifies the specified backup policy settings.

<pre> qs backup-policy-modify|bp-modify --policy=value --name=value --desc=value --network-share=value --remote-hostname=value --remote-export-path=value [--remote-export-type=value ] [--retain-rules=value ] [--policy-type=value ] [--scan-threads=value ] [--hours=value ] [--days=value ] [--retain-period=value ] [--purge-policy=value ] [--maintain-logs=value ] [--start-date=value ] [--enabled=value ] </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--desc>         :: A description for the object.
<--network-share> :: Name or ID of a CIFS/NFS network share.
<--remote-hostname> :: Name of the remote host containing NFS shares to be backed up.
<--remote-export-path> :: Remote export path to be mounted to access the data to be backed up.
[--remote-export-type] :: Remote mount type (currently NFS only) [*nfs]
[--retain-rules] :: Retention rules (atime, ctime, mtime) [atime, ctime, *mtime]
[--policy-type]  :: Backup policy type [*inbound]
[--scan-threads] :: Number of concurrent threads for walking/scanning the specified target
                            export.
[--hours]        :: For the specified days of the week, snapshots will be created at the
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
[--days]         :: The days of the week on which this schedule should create snapshots.
                            [fri, mon, sat, *sun, thu, tue, wed]
[--retain-period] :: Number of days of file history on the specified filer export to back up.
[--purge-policy] :: Indicates how old files should be cleaned up / purged from the backup.
                            [after-backup, *daily, never, weekly]
[--maintain-logs] :: Maintain logs of all the files that were backed up in the
                            /var/log/backup-log folder within the appliance.
[--start-date]   :: Start date at which the system will begin using a given schedule.
[--enabled]      :: Set to enabled to activate the backup policy.
</pre>

=== backup-policy-trigger [bp-trigger] ===

Triggers the specified backup policy, which in turn starts a backup job.

<pre> qs backup-policy-trigger|bp-trigger --policy=value </pre>

<pre>
<--policy>       :: A backup policy name or ID which is associated with a network share to
                            do ingest backups.
</pre>


== Ceph Management ==

=== ceph-cluster-add-member [cc-amn] ===

Adds a new storage system to an existing Ceph Cluster.

<pre> qs ceph-cluster-add-member|cc-amn --ceph-cluster=value --storage-system=value --port=value [--public-network=value ] [--cluster-network=value ] [--enable-object-store=value ] [--flags=value ] </pre>

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
<--storage-system> :: Name or ID of a storage system in a management grid.
<--port>         :: Name or unique ID of a physical network port/target port.
[--public-network] :: Ceph public/client network CIDR (e.g. x.x.x.x/YY). Default: computed
                            from the public/client interface.
[--cluster-network] :: Ceph cluster/backend network CIDR (e.g. x.x.x.x/YY). Default: computed
                            from the cluster/backend interface.
[--enable-object-store] :: Enable the Ceph Object Store (if configured in the cluster;
                            default: false).
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-cluster-create [cc-create] ===

Creates a new Ceph Cluster using the specified QuantaStor node(s).

<pre> qs ceph-cluster-create|cc-create --name=value --client-interface=value --backend-interface=value [--desc=value ] [--public-network=value ] [--cluster-network=value ] [--osd-default-pool-size=value ] [--auth-cluster-required=value ] [--auth-service-required=value ] [--auth-client-required=value ] [--filestore-xattr-use-omap=value ] [--flags=value ] </pre>

<pre>
<--name>         :: Name of the Ceph Cluster.
<--client-interface> :: Interface used by clients to connect to the Ceph Cluster for I/O.
<--backend-interface> :: Interface used by the Ceph Cluster for its backend communication.
[--desc]         :: A description for the object.
[--public-network] :: Ceph public/client network CIDR (e.g. x.x.x.x/YY). Default: computed
                            from the public/client interface.
[--cluster-network] :: Ceph cluster/backend network CIDR (e.g. x.x.x.x/YY). Default: computed
                            from the cluster/backend interface.
[--osd-default-pool-size] :: Minimum number of written replicas for objects in the pool in
                            order to ack a write operation to the client (default=2).
[--auth-cluster-required] :: Authentication mode for the ceph cluster (default=cephx).
[--auth-service-required] :: Authentication mode for ceph services (default=cephx).
[--auth-client-required] :: Authentication mode for clients managing the ceph cluster
                            (default=cephx).
[--filestore-xattr-use-omap] :: Use the object map for XATTRS. Set to true (default) for ext4
                            file systems.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-cluster-delete [cc-delete] ===

Deletes the specified Ceph Cluster.

<pre> qs ceph-cluster-delete|cc-delete --ceph-cluster=value [--flags=value ] </pre>

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-cluster-fix-clock-skew [cc-fcs] ===

Adjusts the clocks on all the member nodes in the specified Ceph Cluster in order to address any clock skew issues.

<pre> qs ceph-cluster-fix-clock-skew|cc-fcs --ceph-cluster=value [--flags=value ] </pre>

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-cluster-get [cc-get] ===

Gets information about a specific Ceph Cluster.

<pre> qs ceph-cluster-get|cc-get --ceph-cluster=value </pre>

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
</pre>

=== ceph-cluster-list [cc-list] ===

Returns a list of all the Ceph Clusters.

<pre> qs ceph-cluster-list|cc-list </pre>

=== ceph-cluster-member-get [ccm-get] ===

Gets information about a specific Ceph Cluster Member.

<pre>
<--ceph-cluster-member> :: Ceph Cluster Member name or ID.
</pre>

=== ceph-cluster-member-list [ccm-list] ===

Returns a list of all the Ceph Cluster Members.

=== ceph-cluster-modify [cc-modify] ===

Modifies a Ceph Cluster using the specified QuantaStor node(s).

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
[--desc]         :: A description for the object.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-cluster-remove-member [cc-rmn] ===

Removes a storage system from an existing Ceph Cluster.

<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
<--storage-system> :: Name or ID of a storage system in a management grid.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== ceph-journal-create [cj-create] ===

Creates a group of Ceph Journal Devices on the specified Physical Disk.

<pre>
<--disk>         :: Name of the physical disk or its unique ID/serial number.
<--storage-system> :: Name or ID of a storage system in a management grid.
[--device-count] :: Number of journal devices to create on the specified high-performance
                            media/SSD device (1-8; default is 4 if not specified).
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>
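To illustrate the Ceph cluster membership commands above, the snippet assembles a ceph-cluster-add-member call as a string; the cluster, system, and port names ('ceph1', 'qs-node2', 'eth1') are hypothetical placeholders, and a live QuantaStor grid with the 'qs' CLI is assumed for actually running it.

```shell
# Hypothetical names; real values would come from your management grid.
CLUSTER="ceph1"
SYSTEM="qs-node2"
PORT="eth1"
ADD_CMD="qs ceph-cluster-add-member --ceph-cluster=$CLUSTER --storage-system=$SYSTEM --port=$PORT"
echo "$ADD_CMD"
```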
  
=== ceph-journal-delete [cj-delete] ===
+
===CEPH  Ceph Management===
Deletes the specified Ceph Journal and all journal partitions on the given physical disk, if a physical disk is specified. Note: This call will fail if there are one or more Journals still being used by OSDs in the system.
+
----
<pre>
+
<div class='mw-collapsible mw-collapsed'>
<--ceph-journal-device> :: Ceph Journal Device name or ID.
+
<div class='mw-collapsible-content'>
<--storage-system> :: Name or ID of a storage system in a management grid.
+
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
+
</pre>
+
  
 +
; ceph-cluster-add-member : Add a new storage system to existing Ceph Cluster.
  
=== ceph-journal-get [cj-get] ===
+
<pre> qs ceph-cluster-add-member|cc-amn --ceph-cluster=value --storage-system=value --port=value [--public-network=value ] [--cluster-network=value ] [--enable-object-store=value ] [--flags=value ] </pre>
Returns details of a specific Ceph Journal Device
+
{| cellspacing='0' cellpadding='5'
<pre>
+
|-
<--ceph-journal-device> :: Ceph Journal Device name or ID.
+
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
</pre>
+
|-
 +
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
 +
|-
 +
| &nbsp; || <tt>port</tt> || Name of unique ID of a physical network port/target port.
 +
|-
 +
| &nbsp; || <tt>public-network</tt> || Ceph public/client network CIDR (e.g. x.x.x.x/YY). Default: Computed from public/client interface.
 +
|-
 +
| &nbsp; || <tt>cluster-network</tt> || Ceph cluster/backend network CIDR (e.g. x.x.x.x/YY). Default: Computed from cluster/backend interface.
 +
|-
 +
| &nbsp; || <tt>enable-object-store</tt> || Enable ceph Object Store (if configured in the cluster. Default: false)
 +
|-
 +
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
 +
|}
  
  
=== ceph-journal-list [cj-list] ===
+
; ceph-cluster-create : Create a new Ceph Cluster using the specified QuantaStor nodes(s)
Returns a list of all the Ceph Journal Devices
+
  
=== ceph-monitor-add [cmon-add] ===
+
<pre> qs ceph-cluster-create|cc-create --name=value --client-interface=value --backend-interface=value [--desc=value ] [--public-network=value ] [--cluster-network=value ] [--osd-default-pool-size=value ] [--auth-cluster-required=value ] [--auth-service-required=value ] [--auth-client-required=value ] [--filestore-xattr-use-omap=value ] [--flags=value ] </pre>
Configure a new monitor for the Ceph Cluster
+
{| cellspacing='0' cellpadding='5'
<pre>
+
|-
<--ceph-cluster> :: Ceph Cluster name or ID.
+
| &nbsp; || <tt>name</tt> || Name of the Ceph Cluster.
<--ceph-cluster-member> :: Ceph Cluster Member name or ID.
+
|-
<--ceph-cluster-monitor-ipaddress> :: IPAddress used by ceph monitor daemon for communication.
+
| &nbsp; || <tt>client-interface</tt> || Interface used by client to connect to ceph cluster for I/O.
[--ceph-cluster-monitor-port] :: Port used by ceph monitor daemon for communication (default:6789).
+
|-
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
+
| &nbsp; || <tt>backend-interface</tt> || Interface used by ceph cluster for its backend communication.
</pre>
+
|-
 +
| &nbsp; || <tt>desc</tt> || A description for the object.
 +
|-
 +
| &nbsp; || <tt>public-network</tt> || Ceph public/client network CIDR (e.g. x.x.x.x/YY). Default: Computed from public/client interface.
 +
|-
 +
| &nbsp; || <tt>cluster-network</tt> || Ceph cluster/backend network CIDR (e.g. x.x.x.x/YY). Default: Computed from cluster/backend interface.
 +
|-
 +
| &nbsp; || <tt>osd-default-pool-size</tt> || Minimum number of written replicas for objects in the pool in order to ack a write operation to the client.(default=2).
 +
|-
 +
| &nbsp; || <tt>auth-cluster-required</tt> || Authentication mode for ceph cluster (default=cephx).
 +
|-
 +
| &nbsp; || <tt>auth-service-required</tt> || Authentication mode for ceph services (default=cephx).
 +
|-
 +
| &nbsp; || <tt>auth-client-required</tt> || Authentication mode for client managing the ceph cluster (default=cephx).
 +
|-
 +
| &nbsp; || <tt>filestore-xattr-use-omap</tt> || To use object map for XATTRS. Set to true (default) for ext4 file systems.
 +
|-
 +
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
 +
|}
  
  
=== ceph-monitor-delete [cmon-delete] ===
+
; ceph-cluster-delete : Deletes the specified Ceph Cluster
Deletes the specified Ceph Monitor
+
<pre>
+
<--ceph-cluster> :: Ceph Cluster name or ID.
+
<--ceph-monitor> :: Ceph Monitor service name or ID.
+
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
+
</pre>
+
  
 +
<pre> qs ceph-cluster-delete|cc-delete --ceph-cluster=value [--flags=value ] </pre>
 +
{| cellspacing='0' cellpadding='5'
 +
|-
 +
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
 +
|-
 +
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
 +
|}
  
; ceph-cluster-fix-clock-skew : Adjusts the clocks on all the member nodes in the specified Ceph Cluster to address clock skew issues.

<pre> qs ceph-cluster-fix-clock-skew|cc-fcs --ceph-cluster=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
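
A typical invocation simply names the cluster (the cluster name here is illustrative):

<pre> qs ceph-cluster-fix-clock-skew --ceph-cluster=cceph1 </pre>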
  
; ceph-cluster-get : Gets information about a specific Ceph Cluster.

<pre> qs ceph-cluster-get|cc-get --ceph-cluster=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|}


; ceph-cluster-list : Returns a list of all the Ceph Clusters.

<pre> qs ceph-cluster-list|cc-list </pre>
  
; ceph-cluster-member-get : Gets information about a specific Ceph Cluster Member.

<pre> qs ceph-cluster-member-get|ccm-get --ceph-cluster-member=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster-member</tt> || Ceph Cluster Member name or ID.
|}


; ceph-cluster-member-list : Returns a list of all the Ceph Cluster Members.

<pre> qs ceph-cluster-member-list|ccm-list </pre>
  
; ceph-cluster-modify : Modifies a Ceph Cluster using the specified QuantaStor node(s).

<pre> qs ceph-cluster-modify|cc-modify --ceph-cluster=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
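
For example, to update a cluster's description (names and text here are illustrative):

<pre> qs ceph-cluster-modify --ceph-cluster=cceph1 --desc="Lab Ceph cluster" </pre>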
  
; ceph-cluster-remove-member : Removes a storage system from an existing Ceph Cluster.

<pre> qs ceph-cluster-remove-member|cc-rmn --ceph-cluster=value --storage-system=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
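
For example, to remove a node from a cluster (the cluster and node names are illustrative):

<pre> qs ceph-cluster-remove-member --ceph-cluster=cceph1 --storage-system=qs-node2 </pre>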
  
  
; ceph-journal-create : Creates a group of Ceph Journal Devices on the specified Physical Disk.

<pre> qs ceph-journal-create|cj-create --disk=value --storage-system=value [--device-count=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Name of the physical disk or its unique ID/serial number.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|-
| &nbsp; || <tt>device-count</tt> || Number of journal devices to create on the specified high-performance media/SSD device (1-8; default is 4 if not specified).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
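
For example, to carve four journal devices out of an SSD (the disk and node names are illustrative):

<pre> qs ceph-journal-create --disk=sdb --storage-system=qs-node1 --device-count=4 </pre>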
  
  
; ceph-journal-delete : Deletes the specified Ceph Journal and, if a physical disk is specified, all journal partitions on that disk. Note: this call will fail if one or more Journals are still being used by OSDs in the system.

<pre> qs ceph-journal-delete|cj-delete --ceph-journal-device=value --storage-system=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-journal-device</tt> || Ceph Journal Device name or ID.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
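
For example (the journal device and node names are illustrative):

<pre> qs ceph-journal-delete --ceph-journal-device=journal-dev0 --storage-system=qs-node1 </pre>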
  
; ceph-journal-get : Returns details of a specific Ceph Journal Device.

<pre> qs ceph-journal-get|cj-get --ceph-journal-device=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-journal-device</tt> || Ceph Journal Device name or ID.
|}
  
  
; ceph-journal-list : Returns a list of all the Ceph Journal Devices.

<pre> qs ceph-journal-list|cj-list </pre>


; ceph-monitor-add : Configures a new monitor for the Ceph Cluster.

<pre> qs ceph-monitor-add|cmon-add --ceph-cluster=value --ceph-cluster-member=value --ceph-cluster-monitor-ipaddress=value [--ceph-cluster-monitor-port=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-cluster-member</tt> || Ceph Cluster Member name or ID.
|-
| &nbsp; || <tt>ceph-cluster-monitor-ipaddress</tt> || IP address used by the Ceph monitor daemon for communication.
|-
| &nbsp; || <tt>ceph-cluster-monitor-port</tt> || Port used by the Ceph monitor daemon for communication (default: 6789).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
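
For example, to add a monitor on a third node using the default port (cluster, node, and IP values are illustrative):

<pre> qs ceph-monitor-add --ceph-cluster=cceph1 --ceph-cluster-member=qs-node3 --ceph-cluster-monitor-ipaddress=10.0.8.13 </pre>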
  
  
=== ceph-rbd-snapshot [rbd-snap] ===
Creates an instant snapshot of the specified Ceph block device (RBD).
<pre>
<--ceph-cluster> :: Ceph Cluster name or ID.
<--ceph-rbd>    :: Ceph RBD/Block Device name or ID.
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--desc]        :: A description for the object.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>


; ceph-monitor-get : Gets information about a specific Ceph Monitor.

<pre> qs ceph-monitor-get|cmon-get --ceph-monitor=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-monitor</tt> || Ceph Monitor service name or ID.
|}
  
; ceph-monitor-list : Returns a list of all the Ceph Monitors.

<pre> qs ceph-monitor-list|cmon-list </pre>

== Cloud Container Management ==

=== cloud-backup-container-add [cbc-add] ===
Recovers a cloud backup container that was previously removed or used with a prior installation.
<pre>
<--provider-creds> :: Credentials for accessing your cloud storage provider account.  With
                            Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
<--location>    :: Cloud storage provider endpoint location.
<--encryption-key> :: A passphrase which is used to encrypt all the data sent to the cloud
                            container.
<--storage-url>  :: The Amazon S3 or other storage URL used to identify a cloud backup
                            container.
<--enable-nfs>  :: Enable NFS sharing of the cloud container.
</pre>
  
=== cloud-backup-container-create [cbc-create] ===
Creates a cloud backup container into which cloud backups of storage volumes can be made.
<pre>
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
<--desc>        :: A description for the object.
<--provider-creds> :: Credentials for accessing your cloud storage provider account.  With
                            Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
<--location>    :: Cloud storage provider endpoint location.
<--encryption-key> :: A passphrase which is used to encrypt all the data sent to the cloud
                            container.
<--enable-nfs>  :: Enable NFS sharing of the cloud container.
</pre>


; ceph-monitor-remove : Removes the specified Ceph Monitor.

<pre> qs ceph-monitor-remove|cmon-remove --ceph-cluster=value --ceph-monitor=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-monitor</tt> || Ceph Monitor service name or ID.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
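
For example, to remove the monitor running on a given node (names are illustrative):

<pre> qs ceph-monitor-remove --ceph-cluster=cceph1 --ceph-monitor=qs-node3 </pre>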
  
=== cloud-backup-container-delete [cbc-delete] ===
 
Deletes the specified cloud backup container. WARNING, all data in the container will be destroyed.
 
<pre>
 
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
 
                            cloud backup container is like an unlimited storage pool in the cloud.
 
</pre>
 
  
=== cloud-backup-container-disable [cbc-disable] ===
Disables access to the specified cloud container without having to remove it.
<pre>
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
                            cloud backup container is like an unlimited storage pool in the cloud.
</pre>


; ceph-osd-create : Creates a new Ceph OSD on the specified storage pool.

<pre> qs ceph-osd-create|osd-create --ceph-cluster=value --data-pool=value --journal-device=value [--desc=value ] [--weight=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>data-pool</tt> || Name of the storage pool or its unique ID (GUID) to allocate the OSD Data store.
|-
| &nbsp; || <tt>journal-device</tt> || Name of the storage device or its unique ID (GUID) to allocate the OSD Journal store.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>weight</tt> || Weight associated with the OSD (default 0.3).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
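
For example, to create an OSD backed by a storage pool and a journal device (all object names are illustrative):

<pre> qs ceph-osd-create --ceph-cluster=cceph1 --data-pool=osd-pool1 --journal-device=journal-dev0 </pre>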
  
  
=== cloud-backup-container-enable [cbc-enable] ===
Enables a cloud container that was previously disabled or was inaccessible due to network connection issues.
<pre>
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
                            cloud backup container is like an unlimited storage pool in the cloud.
</pre>


; ceph-osd-delete : Deletes the specified Ceph OSD.

<pre> qs ceph-osd-delete|osd-delete --ceph-cluster=value --ceph-osd=value [--delete-data=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-osd</tt> || Ceph Object Storage Daemon name or ID.
|-
| &nbsp; || <tt>delete-data</tt> || Flag to specify deletion of user data along with this operation (default: false).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
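
For example, to delete an OSD along with its user data (the cluster and OSD names are illustrative):

<pre> qs ceph-osd-delete --ceph-cluster=cceph1 --ceph-osd=osd.3 --delete-data=true --flags=force </pre>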
  
=== cloud-backup-container-get [cbc-get] ===
 
Returns detailed information on a specific cloud backup container.
 
<pre>
 
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
 
                            cloud backup container is like an unlimited storage pool in the cloud.
 
</pre>
 
  
; ceph-osd-get : Gets information about a specific Ceph Object Storage Daemon.

<pre> qs ceph-osd-get|osd-get --ceph-osd=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-osd</tt> || Ceph Object Storage Daemon name or ID.
|}


=== cloud-backup-container-list [cbc-list] ===
Returns a list of cloud backup containers in the system.
  
=== cloud-backup-container-modify [cbc-modify] ===
 
Modifies the specified cloud backup container settings.
 
<pre>
 
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
<--desc>        :: A description for the object.
 
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
 
                            cloud backup container is like an unlimited storage pool in the cloud.
 
<--encryption-key> :: A passphrase which is used to encrypt all the data sent to the cloud
 
                            container.
 
<--enable-nfs>  :: Enable NFS sharing of the cloud container.
 
</pre>
 
  
; ceph-osd-list : Returns a list of all the Ceph Object Storage Daemons.

<pre> qs ceph-osd-list|osd-list </pre>


; ceph-osd-modify : Modifies the specified Ceph OSD.

<pre> qs ceph-osd-modify|osd-modify --ceph-cluster=value --ceph-osd=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-osd</tt> || Ceph Object Storage Daemon name or ID.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}


=== cloud-backup-container-remove [cbc-remove] ===
Removes the specified cloud backup container from the system but does not delete any backup data in the cloud.
<pre>
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
                            cloud backup container is like an unlimited storage pool in the cloud.
</pre>


=== cloud-backup-container-repair [cbc-repair] ===
Repairs the specified cloud backup container.
<pre>
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
                            cloud backup container is like an unlimited storage pool in the cloud.
</pre>
  
  
=== cloud-backup-credentials-add [cbcred-add] ===
Adds cloud provider credentials to enable cloud backup to cloud backup containers.
<pre>
<--access-key>  :: The access key provided by your cloud storage provider for accessing
                            your cloud storage account.
<--secret-key>  :: The secret key provided by your cloud storage provider for accessing
                            your cloud storage account.
</pre>


; ceph-osd-multi-create : Creates multiple new Ceph OSDs on the specified Ceph cluster.

<pre> qs ceph-osd-multi-create|osd-multi-create --ceph-cluster=value --physical-disk-list=value [--journal-ssd-list=value ] [--use-unused-journal-partitions=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>physical-disk-list</tt> || Comma-separated list of physical disk IDs to be used for OSD data disks during OSD create.
|-
| &nbsp; || <tt>journal-ssd-list</tt> || Comma-separated list of SSD disk IDs to be used for OSD journal devices during OSD create.
|-
| &nbsp; || <tt>use-unused-journal-partitions</tt> || Use unused journal partitions on the storage system for OSD create.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
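
For example, to create OSDs on three data disks journaled to one SSD (all disk names are illustrative):

<pre> qs ceph-osd-multi-create --ceph-cluster=cceph1 --physical-disk-list=sdc,sdd,sde --journal-ssd-list=sdb </pre>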
  
=== cloud-backup-credentials-get [cbcred-get] ===
 
Returns information about the specified cloud provider credential.
 
<pre>
 
<--provider-creds> :: Credentials for accessing your cloud storage provider account.  With
 
                            Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
 
</pre>
 
  
=== cloud-backup-credentials-list [cbcred-list] ===
Returns a list of all the cloud provider credentials in the system. Passwords are masked.


; ceph-pool-create : Creates a new Ceph pool using the specified OSDs or Storage Pools.

<pre> qs ceph-pool-create|cpool-create --name=value --ceph-cluster=value [--osd-list=value ] [--desc=value ] [--placement-groups=value ] [--max-replicas=value ] [--min-replicas=value ] [--pool-type=value ] [--crush-ruleset-name=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>osd-list</tt> || List of one or more ceph-osd. (Default: select all OSDs for ceph pool create.) User selection is currently not supported.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>placement-groups</tt> || Number of placement groups to assign to the Ceph pool.
|-
| &nbsp; || <tt>max-replicas</tt> || Maximum number of replicas (redundancy level) to be created of each block in the Ceph pool.
|-
| &nbsp; || <tt>min-replicas</tt> || Minimum number of replicas to have of each block while still allowing write access to the Ceph pool.
|-
| &nbsp; || <tt>pool-type</tt> || Type of Ceph pool, 'replicated' (mirror based, default) or 'erasure' which is like network RAID5.
|-
| &nbsp; || <tt>crush-ruleset-name</tt> || Ceph CRUSH map rule set name (default: 'default', if not specified).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
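
For example, to create a triple-replicated pool that stays writable with two copies (the pool and cluster names are illustrative):

<pre> qs ceph-pool-create --name=cpool1 --ceph-cluster=cceph1 --max-replicas=3 --min-replicas=2 --pool-type=replicated </pre>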
  
=== cloud-backup-credentials-remove [cbcred-remove] ===
 
Removes the specified cloud provider credentials
 
<pre>
 
<--provider-creds> :: Credentials for accessing your cloud storage provider account.  With
 
                            Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
 
</pre>
 
  
; ceph-pool-delete : Deletes the specified Ceph pool.

<pre> qs ceph-pool-delete|cpool-delete --ceph-cluster=value --ceph-pool=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-pool</tt> || Ceph Pool name or ID.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}


; ceph-pool-get : Gets information about a specific Ceph storage pool.

<pre> qs ceph-pool-get|cpool-get --ceph-pool=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-pool</tt> || Ceph Pool name or ID.
|}


=== cloud-backup-provider-get [cbp-get] ===
Returns detailed information about the specified cloud provider.
<pre>
<--provider>     :: A cloud provider is a storage provider like Amazon S3 which QuantaStor
                            utilizes for backup in the cloud.
</pre>


=== cloud-backup-provider-list [cbp-list] ===
Returns the list of supported cloud providers.

== Cloud Backup Schedule Management ==
  
=== cloud-backup-schedule-create [cbs-create] ===
 
Creates a new schedule to automate backups to a cloud backup container.
 
<pre>
 
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
<--container>    :: A cloud backup container into which storage volumes can be backed up.  A
 
                            cloud backup container is like an unlimited storage pool in the cloud.
 
[--volume-list]  :: A list of one or more storage volumes.
 
[--start-date]  :: Start date at which the system will begin creating snapshots for a given
 
                            schedule.
 
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
 
                            times.
 
[--desc]        :: A description for the object.
 
[--max-backups]  :: Maximum number of backups to do of a given volume in a given backup
 
                            schedule before the oldest backup is removed.
 
[--days]        :: The days of the week on which this schedule should create snapshots.
 
                            [fri, mon, sat, *sun, thu, tue, wed]
 
[--hours]        :: For the specified days of the week, snapshots will be created at the
 
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
 
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
 
[--flags]        :: Optional flags for the operation. [async]
 
</pre>
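
For example, a schedule that backs up two volumes three nights a week, keeping the seven most recent backups (the schedule, container, and volume names are illustrative):

<pre> qs cloud-backup-schedule-create --name=nightly --container=cbc1 --volume-list=vol1,vol2 --days=mon,wed,fri --hours=3am --max-backups=7 </pre>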
 
  
; ceph-pool-list : Returns a list of all the Ceph storage pools.

<pre> qs ceph-pool-list|cpool-list </pre>


; ceph-pool-modify : Sets the display name and/or description field for the Ceph pool.

<pre> qs ceph-pool-modify|cpool-modify --ceph-cluster=value --ceph-pool=value [--name=value ] [--desc=value ] [--max-replicas=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-pool</tt> || Ceph Pool name or ID.
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>max-replicas</tt> || Maximum number of replicas (redundancy level) to be created of each block in the Ceph pool.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}


=== cloud-backup-schedule-delete [cbs-delete] ===
Deletes the specified cloud backup schedule.
<pre>
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
                            volumes into the cloud.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>


=== cloud-backup-schedule-disable [cbs-disable] ===
Disables a cloud backup schedule so that it does not trigger backups.
<pre>
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
                            volumes into the cloud.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>
  
  
=== cloud-backup-schedule-enable [cbs-enable] ===
Enables a cloud backup schedule that was previously disabled.
<pre>
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
                            volumes into the cloud.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>


; ceph-rbd-create : Creates a new Ceph block device (RBD) in the specified Ceph pool.

<pre> qs ceph-rbd-create|rbd-create --name=value --ceph-cluster=value --ceph-pool=value --size=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-pool</tt> || Ceph Pool name or ID.
|-
| &nbsp; || <tt>size</tt> || Size may be specified in MiB, GiB, or TiB. Examples: 4G, 100M, 1.4T
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
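
For example, to create a 100 GiB block device in a pool (the device, cluster, and pool names are illustrative):

<pre> qs ceph-rbd-create --name=rbd1 --ceph-cluster=cceph1 --ceph-pool=cpool1 --size=100G </pre>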
  
=== cloud-backup-schedule-get [cbs-get] ===
 
Gets detailed information about a specific cloud backup schedule.
 
<pre>
 
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
 
                            volumes into the cloud.
 
</pre>
 
  
; ceph-rbd-delete : Deletes the specified Ceph block device (RBD).

<pre> qs ceph-rbd-delete|rbd-delete --ceph-cluster=value --ceph-rbd=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-rbd</tt> || Ceph RBD/Block Device name or ID.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}


=== cloud-backup-schedule-list [cbs-list] ===
Lists all the cloud backup schedules in the system.
  
=== cloud-backup-schedule-modify [cbs-modify] ===
 
Modifies the settings for the specified cloud backup schedule.
 
<pre>
 
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
 
                            volumes into the cloud.
 
[--name]        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
[--start-date]  :: Start date at which the system will begin creating snapshots for a given
 
                            schedule.
 
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
 
                            times.
 
[--desc]        :: A description for the object.
 
[--container]    :: A cloud backup container into which storage volumes can be backed up.  A
 
                            cloud backup container is like an unlimited storage pool in the cloud.
 
[--max-backups]  :: Maximum number of backups to do of a given volume in a given backup
 
                            schedule before the oldest backup is removed.
 
[--days]        :: The days of the week on which this schedule should create snapshots.
 
                            [fri, mon, sat, *sun, thu, tue, wed]
 
[--hours]        :: For the specified days of the week, snapshots will be created at the
 
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
 
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
 
[--flags]        :: Optional flags for the operation. [async]
 
</pre>
 
  
; ceph-rbd-get : Gets information about a specific Ceph RBD / block device.

<pre> qs ceph-rbd-get|rbd-get --ceph-rbd=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-rbd</tt> || Ceph RBD/Block Device name or ID.
|}


=== cloud-backup-schedule-trigger [cbs-trigger] ===
Immediately triggers the specified cloud backup schedule to start a backup.
<pre>
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
                            volumes into the cloud.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>
  
  
; ceph-rbd-list : Returns a list of all the Ceph RBD / block devices.

<pre> qs ceph-rbd-list|rbd-list </pre>


=== cloud-backup-schedule-volume-add [cbs-v-add] ===
Adds storage volumes to an existing cloud backup schedule.
<pre>
<--backup-sched> :: A cloud backup schedule periodically does a backup of a specified set of
                            volumes into the cloud.
[--volume-list]  :: A list of one or more storage volumes.
</pre>
  
; ceph-rbd-modify : Sets the display name and/or description field for the Ceph block device (RBD).

<pre> qs ceph-rbd-modify|rbd-modify --ceph-cluster=value --ceph-rbd=value --name=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-rbd</tt> || Ceph RBD/Block Device name or ID.
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}

; ceph-rbd-resize : Resizes the specified Ceph block device (RBD) to a larger size.

<pre> qs ceph-rbd-resize|rbd-resize --ceph-cluster=value --ceph-rbd=value --size=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-rbd</tt> || Ceph RBD/Block Device name or ID.
|-
| &nbsp; || <tt>size</tt> || Size may be specified in MiB, GiB, or TiB. Examples: 4G, 100M, 1.4T
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
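As an illustration of the resize syntax above, the following hypothetical invocation grows a block device to 2 TiB; the cluster and device names are placeholders, not objects that exist on your system.

```shell
# Hypothetical example; 'ceph1' and 'vmstore-rbd1' are placeholder names.
# The size value accepts MiB/GiB/TiB suffixes, e.g. 100M, 4G, 1.4T.
qs ceph-rbd-resize --ceph-cluster=ceph1 --ceph-rbd=vmstore-rbd1 --size=2T
```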
; ceph-rbd-snapshot : Creates an instant snapshot of the specified Ceph block device (RBD).

<pre> qs ceph-rbd-snapshot|rbd-snap --ceph-cluster=value --ceph-rbd=value --name=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ceph-cluster</tt> || Ceph Cluster name or ID.
|-
| &nbsp; || <tt>ceph-rbd</tt> || Ceph RBD/Block Device name or ID.
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}

</div>
</div>

===CLOUD-BACKUP  Cloud Container Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>

; cloud-backup-container-add : Recovers a cloud backup container that was previously removed or used with a prior installation.

<pre> qs cloud-backup-container-add|cbc-add --provider-creds=value --encryption-key=value --storage-url=value --enable-nfs=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>provider-creds</tt> || Credentials for accessing your cloud storage provider account.  With Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
|-
| &nbsp; || <tt>encryption-key</tt> || A passphrase which is used to encrypt all the data sent to the cloud container.
|-
| &nbsp; || <tt>storage-url</tt> || The Amazon S3 or other storage URL used to identify a cloud backup container.
|-
| &nbsp; || <tt>enable-nfs</tt> || Enable NFS sharing of the cloud container.
|}

=== disk-global-spare-marker-list [hsm-list] ===
Enumerates marker records for the dedicated hot-spare physical disks in the global hotspare pool.
<pre>
[--flags]        :: Optional flags for the operation. [min]
</pre>

; cloud-backup-container-create : Creates a cloud backup container into which cloud backups of storage volumes can be made.

<pre> qs cloud-backup-container-create|cbc-create --name=value --provider-creds=value --location=value --encryption-key=value --enable-nfs=value [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>provider-creds</tt> || Credentials for accessing your cloud storage provider account.  With Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
|-
| &nbsp; || <tt>location</tt> || Cloud storage provider endpoint location.
|-
| &nbsp; || <tt>encryption-key</tt> || A passphrase which is used to encrypt all the data sent to the cloud container.
|-
| &nbsp; || <tt>enable-nfs</tt> || Enable NFS sharing of the cloud container.
|}
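For illustration, a hypothetical container-create invocation might look like the following. Every value shown is a placeholder (the credential object must already exist, and the exact accepted forms for the location and enable-nfs values are assumptions, not confirmed by this page).

```shell
# Hypothetical example; all names and values below are placeholders.
qs cloud-backup-container-create --name=offsite-backups \
   --provider-creds=my-s3-creds --location=us-west-2 \
   --encryption-key='long-random-passphrase' --enable-nfs=false \
   --desc='Encrypted offsite backup container'
```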
=== disk-global-spare-remove [hsm-remove] ===
Removes one or more dedicated hot-spares from the global hotspare pool.
<pre>
<--disk-list>    :: Comma delimited list of drives (no spaces) to be used for the operation.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-container-delete : Deletes the specified cloud backup container. WARNING: all data in the container will be destroyed.

<pre> qs cloud-backup-container-delete|cbc-delete --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== disk-identify [pd-id] ===
Pulses the disk activity light so that the specified disk can be identified in the chassis.
<pre>
<--disk>         :: Name of the physical disk or its unique ID/serial number.
[--pattern]      :: Pattern to flash the disk LED lights in, p = short pulse, P = long
                            pulse, d = short delay, D = long delay, ex: pattern=pppD
[--duration]     :: Duration in seconds to repeat the disk identification pattern.
</pre>
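The pattern string described above can be combined with a duration, as in this hypothetical invocation (the disk serial number is a placeholder):

```shell
# Hypothetical example: flash the LED of one disk for 60 seconds using
# three short pulses followed by a long delay (pattern 'pppD').
qs disk-identify --disk=350011731001d0b32 --pattern=pppD --duration=60
```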
; cloud-backup-container-disable : Disables access to the specified cloud container without having to remove it.

<pre> qs cloud-backup-container-disable|cbc-disable --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== disk-list [pd-list] ===
Enumerates all physical disks.

=== disk-scan [pd-scan] ===
Scans for any new physical disks that may have been hot-plugged into the storage system.
<pre>
[--storage-system] :: Name or ID of a storage system in a management grid.
</pre>

; cloud-backup-container-enable : Enables a cloud container that was previously disabled or was inaccessible due to network connection issues.

<pre> qs cloud-backup-container-enable|cbc-enable --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== disk-spare-list [pd-spare-list] ===
Enumerates dedicated hot-spare physical disks in the global hotspare pool.
<pre>
[--flags]        :: Optional flags for the operation. [min]
</pre>

; cloud-backup-container-get : Returns detailed information on a specific cloud backup container.

<pre> qs cloud-backup-container-get|cbc-get --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== path-list [pdp-list] ===
Enumerates all physical disk paths for multipath devices.

== Gluster Management ==

=== gluster-brick-get [gb-get] ===
Gets information about a specific Gluster brick.
<pre>
<--gluster-brick> :: Name or ID of a Gluster brick
</pre>

; cloud-backup-container-list : Returns a list of cloud backup containers in the system.

<pre> qs cloud-backup-container-list|cbc-list </pre>
{| cellspacing='0' cellpadding='5'
|}

=== gluster-brick-list [gb-list] ===
Returns a list of all the Gluster bricks.

=== gluster-ha-intf-get [gi-get] ===
Gets information about a specific Gluster HA failover interface definition.
<pre>
<--ha-interface> :: High-availability interface for a specific Gluster Volume.
</pre>

; cloud-backup-container-modify : Modifies the specified cloud backup container settings.

<pre> qs cloud-backup-container-modify|cbc-modify --name=value --desc=value --container=value --encryption-key=value --enable-nfs=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|-
| &nbsp; || <tt>encryption-key</tt> || A passphrase which is used to encrypt all the data sent to the cloud container.
|-
| &nbsp; || <tt>enable-nfs</tt> || Enable NFS sharing of the cloud container.
|}

=== gluster-ha-intf-list [gi-list] ===
Returns a list of all the Gluster HA failover interface definitions.

=== gluster-peer-get [gp-get] ===
Gets information about a specific Gluster peer system.
<pre>
<--gluster-peer> :: Name or ID of the Gluster peer
</pre>

; cloud-backup-container-remove : Removes the specified cloud backup container from the system but does not delete any backup data in the cloud.

<pre> qs cloud-backup-container-remove|cbc-remove --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== gluster-peer-list [gp-list] ===
Returns a list of all the Gluster peer systems.

=== gluster-volume-add-bricks [gv-add-bricks] ===
Adds one or more bricks to the specified volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
<--pool-list>    :: List of one or more storage pools.
<--restripe-volume> :: Restripe files across the new Gluster bricks
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-container-repair : Repairs the specified cloud backup container.

<pre> qs cloud-backup-container-repair|cbc-repair --container=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up. A cloud backup container is like an unlimited storage pool in the cloud.
|}

=== gluster-volume-create [gv-create] ===
Create a new Gluster volume with new Bricks on the specified Storage Pools.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
<--pool-list>    :: List of one or more storage pools.
<--replica-count> :: Number of replica copies to have of each file in the Gluster volume (1 =
                            no redundancy, 2 = mirroring)
<--disperse-count> :: Dispersed is a parity based volume layout similar to RAID5 (disperse 5 =
                            4 data + 1 parity, disperse 3 = 2 data + 1 parity). Optionally supply a
                            replica count to indicate the number of parity bricks per stripe where 2
                            is like RAID6 and 1 is like RAID5.
<--stripe-volume> :: Enable striping of files across Gluster bricks
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

=== gluster-volume-delete [gv-delete] ===
Deletes the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
</pre>

; cloud-backup-credentials-add : Adds cloud provider credentials to enable cloud backup to cloud backup containers.

<pre> qs cloud-backup-credentials-add|cbcred-add --provider=value --access-key=value --secret-key=value [--project-id=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>provider</tt> || A cloud provider is a storage provider like Amazon S3 which QuantaStor utilizes for backup in the cloud.
|-
| &nbsp; || <tt>access-key</tt> || The access key provided by your cloud storage provider for accessing your cloud storage account.
|-
| &nbsp; || <tt>secret-key</tt> || The secret key provided by your cloud storage provider for accessing your cloud storage account.
|-
| &nbsp; || <tt>project-id</tt> || The project Id provided by your cloud storage provider for accessing your cloud storage account.
|}
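A hypothetical credentials-add invocation is shown below. The keys are placeholders, and the exact provider identifier is an assumption; the valid provider names can be listed with 'qs cloud-backup-provider-list'.

```shell
# Hypothetical example; the provider name and both keys are placeholders.
qs cloud-backup-credentials-add --provider=AmazonS3 \
   --access-key=AKIAEXAMPLEKEY --secret-key=EXAMPLESECRETKEY
```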
=== gluster-volume-get [gv-get] ===
Gets information about the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
</pre>

; cloud-backup-credentials-get : Returns information about the specified cloud provider credential.

<pre> qs cloud-backup-credentials-get|cbcred-get --provider-creds=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>provider-creds</tt> || Credentials for accessing your cloud storage provider account.  With Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
|}

=== gluster-volume-list [gv-list] ===
Returns a list of all the Gluster volumes in the grid.

=== gluster-volume-modify [gv-modify] ===
Modifies the name and/or description of the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-credentials-list : Returns a list of all the cloud provider credentials in the system. Passwords are masked.

<pre> qs cloud-backup-credentials-list|cbcred-list </pre>
{| cellspacing='0' cellpadding='5'
|}

=== gluster-volume-rebalance [gv-rebalance] ===
Rebalances files across the bricks in the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-credentials-remove : Removes the specified cloud provider credentials.

<pre> qs cloud-backup-credentials-remove|cbcred-remove --provider-creds=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>provider-creds</tt> || Credentials for accessing your cloud storage provider account.  With Amazon S3 this is your 'Access Key ID' and your 'Secret Access Key'.
|}

=== gluster-volume-start [gv-start] ===
Starts the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-provider-get : Returns detailed information about the specified cloud provider.

<pre> qs cloud-backup-provider-get|cbp-get --provider=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>provider</tt> || A cloud provider is a storage provider like Amazon S3 which QuantaStor utilizes for backup in the cloud.
|}

=== gluster-volume-stop [gv-stop] ===
Stops the specified Gluster volume.
<pre>
<--gluster-volume> :: Name or ID of the Gluster volume
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; cloud-backup-provider-list : Returns the list of supported cloud providers.

<pre> qs cloud-backup-provider-list|cbp-list </pre>
{| cellspacing='0' cellpadding='5'
|}

</div>
</div>

== Storage System Grid Management ==

=== grid-add [mg-add] ===
Adds the specified storage system to the management grid.
<pre>
<--node-ipaddress> :: IP address of a storage system node.
[--node-username] :: Admin user account with permissions to add/remove nodes from the grid.
[--node-password] :: Admin user account password.
</pre>
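A hypothetical grid-add invocation follows; the address and credentials are placeholders for a real peer appliance.

```shell
# Hypothetical example: join a peer storage system to the management grid.
qs grid-add --node-ipaddress=10.0.0.12 \
   --node-username=admin --node-password=secret
```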
=== grid-assoc-get [mg-aget] ===
Get general information about the associated storage system management grid.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--storage-system> :: Name or ID of a storage system in a management grid.
[--flags]        :: Optional flags for the operation. [min]
</pre>

===CLOUDBACKUPSCHEDULE  Cloud Backup Schedule Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>

=== grid-assoc-list [mg-alist] ===
Returns a list of the associated storage system nodes in the grid.

; cloud-backup-schedule-create : Creates a new schedule to automate backups to a cloud backup container.

<pre> qs cloud-backup-schedule-create|cbs-create --name=value --container=value [--volume-list=value ] [--start-date=value ] [--enabled=value ] [--desc=value ] [--max-backups=value ] [--days=value ] [--hours=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|-
| &nbsp; || <tt>volume-list</tt> || A list of one or more storage volumes.
|-
| &nbsp; || <tt>start-date</tt> || Start date at which the system will begin using a given schedule.
|-
| &nbsp; || <tt>enabled</tt> || While the schedule is enabled snapshots will be taken at the designated times.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>max-backups</tt> || Maximum number of backups to do of a given volume in a given backup schedule before the oldest backup is removed.
|-
| &nbsp; || <tt>days</tt> || The days of the week on which this schedule should create snapshots. [fri, mon, sat, *sun, thu, tue, wed]
|-
| &nbsp; || <tt>hours</tt> || For the specified days of the week, snapshots will be created at the specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm, *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async]
|}
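For illustration, a hypothetical schedule-create invocation is shown below. The schedule, container, and volume names are placeholders, and the comma-separated form of the days and volume lists is an assumption based on the list parameters documented elsewhere on this page.

```shell
# Hypothetical example: weeknight backups of two volumes at 11pm,
# keeping at most 7 backups per volume before the oldest is removed.
qs cloud-backup-schedule-create --name=nightly --container=offsite-backups \
   --volume-list=vol1,vol2 --days=mon,tue,wed,thu,fri --hours=11pm \
   --max-backups=7
```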
=== grid-create [mg-create] ===
Creates a new management grid.  A given storage system can only be a member of one grid at a time.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
</pre>

; cloud-backup-schedule-delete : Deletes the specified cloud backup schedule.

<pre> qs cloud-backup-schedule-delete|cbs-delete --backup-sched=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}

=== grid-delete [mg-delete] ===
Deletes the management grid. After the grid is deleted each node in the grid operates independently again.

=== grid-get [mg-get] ===
Get general information about the storage system management grid.

; cloud-backup-schedule-disable : Disables a cloud backup schedule so that it does not trigger backups.

<pre> qs cloud-backup-schedule-disable|cbs-disable --backup-sched=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}

=== grid-get-hosts [mg-get-hosts] ===
Returns the /etc/hosts configuration for grid nodes.

=== grid-modify [mg-modify] ===
Modify the management grid properties.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
</pre>

; cloud-backup-schedule-enable : Enables a cloud backup schedule that was previously disabled.

<pre> qs cloud-backup-schedule-enable|cbs-enable --backup-sched=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}

=== grid-remove [mg-remove] ===
Removes the specified storage system from the management grid.
<pre>
<--storage-system> :: Name or ID of a storage system in a management grid.
</pre>

=== grid-set-hosts [mg-set-hosts] ===
Configures the /etc/hosts file on all systems to facilitate host name based gluster volume configurations.
<pre>
<--portid-list>  :: List of UUIDs for the ethernet ports to be used for grid wide /etc/hosts
                            file configuration.
</pre>

; cloud-backup-schedule-get : Gets detailed information about a specific cloud backup schedule.

<pre> qs cloud-backup-schedule-get|cbs-get --backup-sched=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|}

=== grid-set-master [mg-set] ===
Sets the specified storage system as the master node of the management grid.
<pre>
<--storage-system> :: Name or ID of a storage system in a management grid.
</pre>

; cloud-backup-schedule-list : Lists all the cloud backup schedules in the system.

<pre> qs cloud-backup-schedule-list|cbs-list </pre>
{| cellspacing='0' cellpadding='5'
|}

== Storage Pool HA Failover Management ==

=== ha-group-activate [hag-activate] ===
Activates/enables the specified high-availability group so that failover can occur during a system outage.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
</pre>

; cloud-backup-schedule-modify : Modifies the settings for the specified cloud backup schedule.

<pre> qs cloud-backup-schedule-modify|cbs-modify --backup-sched=value [--name=value ] [--start-date=value ] [--enabled=value ] [--desc=value ] [--container=value ] [--max-backups=value ] [--days=value ] [--hours=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>start-date</tt> || Start date at which the system will begin using a given schedule.
|-
| &nbsp; || <tt>enabled</tt> || While the schedule is enabled snapshots will be taken at the designated times.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>container</tt> || A cloud backup container into which storage volumes can be backed up.  A cloud backup container is like an unlimited storage pool in the cloud.
|-
| &nbsp; || <tt>max-backups</tt> || Maximum number of backups to do of a given volume in a given backup schedule before the oldest backup is removed.
|-
| &nbsp; || <tt>days</tt> || The days of the week on which this schedule should create snapshots. [fri, mon, sat, *sun, thu, tue, wed]
|-
| &nbsp; || <tt>hours</tt> || For the specified days of the week, snapshots will be created at the specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm, *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async]
|}

=== ha-group-create [hag-create] ===
Creates a new storage pool high-availability group.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--pool>         :: Name of the storage pool or its unique ID (GUID).
<--sys-secondary> :: Storage system associated with the failover group secondary node
<--sys-primary>  :: Storage system associated with the failover group primary node
[--desc]         :: A description for the object.
[--ha-module]    :: Name or UUID of a storage pool high-availability module
</pre>

=== ha-group-deactivate [hag-deactivate] ===
Deactivates the specified high-availability group so that failover policies are disabled.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
</pre>

; cloud-backup-schedule-trigger : Immediately triggers the specified cloud backup schedule to start a backup.

<pre> qs cloud-backup-schedule-trigger|cbs-trigger --backup-sched=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}
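As an illustration of the trigger syntax above, this hypothetical invocation starts an immediate backup run without waiting for the schedule's next designated time (the schedule name is a placeholder):

```shell
# Hypothetical example; 'nightly' is a placeholder schedule name.
qs cloud-backup-schedule-trigger --backup-sched=nightly
```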
=== ha-group-delete [hag-delete] ===
Deletes the specified high-availability group.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
</pre>

; cloud-backup-schedule-volume-add : Adds storage volumes to an existing cloud backup schedule.

<pre> qs cloud-backup-schedule-volume-add|cbs-v-add --backup-sched=value [--volume-list=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>volume-list</tt> || A list of one or more storage volumes.
|}

=== ha-group-failover [hag-failover] ===
Manually triggers a failover of the specified storage pool using the associated storage pool HA group policy.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
<--storage-system> :: Name or ID of a storage system in a management grid.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>
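A hypothetical manual failover invocation follows; the HA group and storage system names are placeholders for objects that would already exist in your grid.

```shell
# Hypothetical example: fail the pool's HA group over to node 'qs-node2'.
qs ha-group-failover --ha-group=pool1-ha --storage-system=qs-node2
```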
; cloud-backup-schedule-volume-remove : Removes storage volumes from an existing cloud backup schedule.

<pre> qs cloud-backup-schedule-volume-remove|cbs-v-remove --backup-sched=value [--volume-list=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>backup-sched</tt> || A cloud backup schedule periodically does a backup of a specified set of volumes into the cloud.
|-
| &nbsp; || <tt>volume-list</tt> || A list of one or more storage volumes.
|}

=== ha-group-get [hag-get] ===
Gets information about the specified storage pool HA group.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
</pre>

</div>
</div>

===DISK  Physical Disk Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>

=== ha-group-list [hag-list] ===
Returns a list of all the HA groups.

=== ha-group-modify [hag-modify] ===
Modifies the settings for the specified high-availability group.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
[--name]         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
[--sys-secondary] :: Storage system associated with the failover group secondary node
[--ha-module]    :: Name or UUID of a storage pool high-availability module
</pre>

; disk-get : Gets information about a specific physical disk.

<pre> qs disk-get|pd-get --disk=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Name of the physical disk or its unique ID/serial number.
|}

=== ha-interface-create [hai-create] ===
Creates a new virtual network interface for the specified HA failover group.
<pre>
<--ha-group>     :: Name or UUID of a storage pool high-availability group
<--parent-port>  :: Parent network port like 'eth0' which the virtual interface should be
                            attached to.  On failover the virtual interface will attach to the port
                            with the same name on the failover/secondary node.
<--ip-address>   :: IP Address of the host being added; if unspecified the service will look
                            it up.
[--netmask]      :: Subnet IP mask (ex: 255.255.255.0)
[--interface-tag] :: An alpha-numeric tag which is appended onto a given HA virtual
                            interface for easy identification.
[--desc]         :: A description for the object.
[--gateway]      :: IP address of the network gateway
[--mac-address]  :: MAC Address
[--iscsi-enable] :: Enables or disables iSCSI access to the specified port(s).
</pre>

; disk-global-spare-add : Adds one or more dedicated hotspares to the global hotspare pool.

<pre> qs disk-global-spare-add|hsm-add --disk-list=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk-list</tt> || Comma delimited list of drives (no spaces) to be used for the operation.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}

=== ha-interface-delete [hai-delete] ===
Deletes the specified virtual network interface resource from the HA group.
<pre>
<--ha-interface> :: Name or UUID of a storage pool high-availability virtual network
                            interface
</pre>

=== ha-interface-get [hai-get] ===
Gets information about the specified storage pool HA virtual network interface.
<pre>
<--ha-interface> :: Name or UUID of a storage pool high-availability virtual network
                            interface
</pre>

=== ha-interface-list [hai-list] ===
Returns a list of all the HA interfaces on the specified group

; disk-global-spare-marker-cleanup : Cleans up invalid marker records for the dedicated hotspare physical disks in the global hotspare pool.

<pre> qs disk-global-spare-marker-cleanup|hsm-cleanup [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
=== ha-module-get [ham-get] ===
 
Gets information about the specified storage pool HA module
 
<pre>
 
<--ha-module>    :: Name or UUID of a storage pool high-availability module
 
</pre>
 
  
=== ha-module-list [ham-list] ===
Returns a list of all the HA failover modules

; disk-global-spare-marker-delete : Deletes the specified global hotspare marker object.

<pre> qs disk-global-spare-marker-delete|hsm-del --name=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
== Host Group Management ==
 
  
=== host-group-create [hg-create] ===
Creates a new host group with the specified name.
<pre>
<--name>         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
<--host-list>    :: A list of one or more hosts by name or ID.
[--desc]         :: A description for the object.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; disk-global-spare-marker-get : Returns information about the specified global hotspare marker object.

<pre> qs disk-global-spare-marker-get|hsm-get --name=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [min]
|}
  
=== host-group-delete [hg-delete] ===
 
Removes the specified host group.
 
<pre>
 
<--host-group>  :: An arbitrary collection of hosts used to simplify volume ACL management
 
                            for grids and other groups of hosts.
 
[--flags]        :: Optional flags for the operation. [async, force]
 
</pre>
 
  
=== host-group-get [hg-get] ===
Gets information about a specific host group.
<pre>
<--host-group>   :: An arbitrary collection of hosts used to simplify volume ACL management
                            for grids and other groups of hosts.
</pre>

; disk-global-spare-marker-list : Enumerates marker records for the dedicated hotspare physical disks in the global hotspare pool.

<pre> qs disk-global-spare-marker-list|hsm-list [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [min]
|}
  
  
=== host-group-host-add [hg-host-add] ===
Adds a host to the specified host group.
<pre>
<--host-group>   :: An arbitrary collection of hosts used to simplify volume ACL management
                            for grids and other groups of hosts.
<--host-list>    :: A list of one or more hosts by name or ID.
</pre>

; disk-global-spare-remove : Removes one or more dedicated hotspares from the global hotspare pool.

<pre> qs disk-global-spare-remove|hsm-remove --disk-list=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk-list</tt> || Comma delimited list of drives (no spaces) to be used for the operation.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
=== host-group-host-remove [hg-host-remove] ===
 
Removes a host from the specified host group.
 
<pre>
 
<--host-group>  :: An arbitrary collection of hosts used to simplify volume ACL management
 
                            for grids and other groups of hosts.
 
<--host-list>    :: A list of one or more hosts by name or ID.
 
</pre>
 
  
=== host-group-list [hg-list] ===
Returns a list of all the host groups.

; disk-identify : Pulses the disk activity light so that the specified disk can be identified in the chassis.

<pre> qs disk-identify|pd-id --disk=value [--pattern=value ] [--duration=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Name of the physical disk or its unique ID/serial number.
|-
| &nbsp; || <tt>pattern</tt> || Pattern to flash the disk LED lights in: p = short pulse, P = long pulse, d = short delay, D = long delay (ex: pattern=pppD)
|-
| &nbsp; || <tt>duration</tt> || Duration in seconds to repeat the disk identification pattern.
|}
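
For example, to flash a disk with three short pulses followed by a long delay for 60 seconds (the disk name 'sdc' is a hypothetical placeholder):
<pre>
qs disk-identify --disk=sdc --pattern=pppD --duration=60
</pre>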
  
=== host-group-modify [hg-modify] ===
 
Modifies the properties of a host group such as its name and/or description.
 
<pre>
 
<--host-group>  :: An arbitrary collection of hosts used to simplify volume ACL management
 
                            for grids and other groups of hosts.
 
[--name]        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
[--desc]        :: A description for the object.
 
</pre>
 
  
; disk-list : Enumerates all physical disks.

<pre> qs disk-list|pd-list </pre>

== Host Management ==
  
=== host-add [h-add] ===
Adds a host entry. The username/password fields are optional and are not yet leveraged by the QuantaStor system. Later this may be used to provide additional levels of integration such as automatic host-side configuration of your iSCSI initiator.
<pre>
<--hostname>     :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--iqn]          :: IQN (iSCSI Qualified Name) of the host's iSCSI initiator
[--ip-address]   :: IP Address of the host being added; if unspecified the service will look
                            it up.
[--desc]         :: A description for the object.
[--username]     :: Administrator user name for the host, typically 'Administrator' for
                            Windows hosts.
[--password]     :: Administrator password for the host; enables auto-configuration of
                            host's iSCSI initiator.
[--host-type]    :: Operating system type of the host. [aix, hpux, linux, other, solaris,
                            vmware, *windows, xenserver]
[--flags]        :: Optional flags for the operation. [async]
</pre>

; disk-scan : Scans for any new physical disks that may have been hot-plugged into the storage system.

<pre> qs disk-scan|pd-scan [--storage-system=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|}
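
As an illustration of the host-add syntax described above (the host name and initiator IQN shown are hypothetical examples):
<pre>
qs host-add --hostname=vmhost01 --iqn=iqn.1998-01.com.vmware:vmhost01 --host-type=vmware
</pre>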
  
=== host-get [h-get] ===
 
Gets information about a specific host.
 
<pre>
 
<--host>        :: Name of the host or its unique ID (GUID).
 
</pre>
 
  
=== host-initiator-add [hi-add] ===
Adds an additional iSCSI host initiator IQN to the specified host.
<pre>
<--host>         :: Name of the host or its unique ID (GUID).
<--iqn>          :: IQN (iSCSI Qualified Name) of the host's iSCSI initiator
</pre>

; disk-spare-list : Enumerates dedicated hotspare physical disks in the global hotspare pool.

<pre> qs disk-spare-list|pd-spare-list [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [min]
|}
  
  
=== host-initiator-get [hi-get] ===
Gets information about a specific host identified by its initiator IQN.
<pre>
<--iqn>          :: IQN (iSCSI Qualified Name) of the host's iSCSI initiator
</pre>

; path-list : Enumerates all physical disk paths for multi-path devices.

<pre> qs path-list|pdp-list </pre>

</div>
</div>
  
=== host-initiator-list [hi-list] ===
Returns a list of all the initiators (IQN) of the specified host
<pre>
<--host>         :: Name of the host or its unique ID (GUID).
</pre>

===GLUSTER  Gluster Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
  
 +
=== host-initiator-remove [hi-remove] ===
Removes an iSCSI host initiator (IQN) from the specified host.
<pre>
<--host>         :: Name of the host or its unique ID (GUID).
<--iqn>          :: IQN (iSCSI Qualified Name) of the host's iSCSI initiator
</pre>

; gluster-brick-get : Gets information about a specific Gluster brick.

<pre> qs gluster-brick-get|gb-get --gluster-brick=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-brick</tt> || Name or ID of a Gluster brick
|}
  
  
=== host-list [h-list] ===
Returns a list of all the hosts that you have added to the QuantaStor system. Host groups allow you to assign storage to multiple hosts all at once. This is especially useful when you have a VMware or Windows cluster, as you can assign and unassign storage to all nodes in the cluster in one operation.

; gluster-brick-list : Returns a list of all the Gluster bricks

<pre> qs gluster-brick-list|gb-list </pre>

=== host-modify [h-modify] ===
Modifies a host entry which contains a list of WWN/IQN or IB GIDs for a given host.
<pre>
<--host>         :: Name of the host or its unique ID (GUID).
[--desc]         :: A description for the object.
[--ip-address]   :: IP Address of the host being added; if unspecified the service will look
                            it up.
[--username]     :: Administrator user name for the host, typically 'Administrator' for
                            Windows hosts.
[--password]     :: Administrator password for the host; enables auto-configuration of
                            host's iSCSI initiator.
[--host-type]    :: Operating system type of the host. [aix, hpux, linux, other, solaris,
                            vmware, *windows, xenserver]
[--flags]        :: Optional flags for the operation. [async]
</pre>
  
=== host-remove [h-remove] ===
Removes the specified host. *WARNING* the host's active iSCSI sessions will be dropped.
<pre>
<--host>         :: Name of the host or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; gluster-ha-intf-get : Gets information about a specific Gluster HA failover interface definition.

<pre> qs gluster-ha-intf-get|gi-get --ha-interface=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-interface</tt> || High-availability interface for a specific Gluster Volume.
|}
  
  
; gluster-ha-intf-list : Returns a list of all the Gluster HA failover interface definitions.

<pre> qs gluster-ha-intf-list|gi-list </pre>

== Hardware RAID Management ==

=== hw-alarm-clear-all [hwa-clear-all] ===
Clears all the hardware alarms that have been recorded for the specified hardware RAID controller.
<pre>
<--controller>   :: Name or ID of a hardware RAID controller.
</pre>
  
=== hw-alarm-get [hwa-get] ===
Returns information about a specific hardware alarm.
<pre>
<--id>           :: Unique identifier (GUID) for the object.
</pre>

; gluster-peer-get : Gets information about a specific Gluster peer system

<pre> qs gluster-peer-get|gp-get --gluster-peer=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-peer</tt> || Name or ID of the Gluster peer
|}
  
  
=== hw-alarm-list [hwa-list] ===
Returns a list of all the current hardware alarms/alert messages generated from the controller.
<pre>
[--controller]   :: Name or ID of a hardware RAID controller.
</pre>

; gluster-peer-list : Returns a list of all the Gluster peer systems.

<pre> qs gluster-peer-list|gp-list </pre>
  
=== hw-controller-change-security-key [hwc-change-security-key] ===
Change the security key for encryption on SED/FDE-enabled drives on a hardware RAID controller.
<pre>
<--controller>   :: Name or ID of a hardware RAID controller.
<--old-security-key> :: Prior security key on HW Controller card, for changing key, for
                            encryption on FDE-enabled secure disk drives.
<--security-key> :: Security key on HW Controller card for encryption on FDE-enabled secure
                            disk drives.
</pre>

; gluster-volume-add-bricks : Adds one or more bricks to the specified volume.

<pre> qs gluster-volume-add-bricks|gv-add-bricks --gluster-volume=value --pool-list=value --restripe-volume=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|-
| &nbsp; || <tt>pool-list</tt> || List of one or more storage pools.
|-
| &nbsp; || <tt>restripe-volume</tt> || Restripe files across the new Gluster bricks
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
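
For example, to add bricks from two storage pools to a volume and restripe existing files across them (the volume and pool names are hypothetical, and the boolean value format for restripe-volume is illustrative):
<pre>
qs gluster-volume-add-bricks --gluster-volume=gvol1 --pool-list=pool3,pool4 --restripe-volume=true
</pre>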
  
=== hw-controller-create-security-key [hwc-create-security-key] ===
 
Create the security key for encryption on SED/FDE-enabled drives on hardware RAID controller.
 
<pre>
 
<--controller>  :: Name or ID of a hardware RAID controller.
 
<--security-key> :: Security key on HW Controller card for encryption on FDE-enabled secure
 
                            disk drives.
 
</pre>
 
  
=== hw-controller-get [hwc-get] ===
Returns information about a specific hardware RAID controller.
<pre>
<--controller>   :: Name or ID of a hardware RAID controller.
</pre>

; gluster-volume-create : Creates a new Gluster volume with new bricks on the specified storage pools.

<pre> qs gluster-volume-create|gv-create --name=value --pool-list=value --replica-count=value --disperse-count=value --stripe-volume=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>pool-list</tt> || List of one or more storage pools.
|-
| &nbsp; || <tt>replica-count</tt> || Number of replica copies to have of each file in the Gluster volume (1 = no redundancy, 2 = mirroring)
|-
| &nbsp; || <tt>disperse-count</tt> || Dispersed is a parity-based volume layout similar to RAID5 (disperse 5 = 4 data + 1 parity, disperse 3 = 2 data + 1 parity). Optionally supply a replica count to indicate the number of parity bricks per stripe, where 2 is like RAID6 and 1 is like RAID5.
|-
| &nbsp; || <tt>stripe-volume</tt> || Enable striping of files across Gluster bricks
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
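
For example, to create a mirrored (two-copy) volume across two storage pools (the names are hypothetical placeholders, and the values shown for the non-mirrored layout options are illustrative):
<pre>
qs gluster-volume-create --name=gvol1 --pool-list=pool1,pool2 --replica-count=2 --disperse-count=0 --stripe-volume=false
</pre>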
  
  
=== hw-controller-group-get [hwcg-get] ===
Returns information about all the supported hardware RAID controller group types.
<pre>
<--controller-group> :: Name or ID of a hardware RAID controller group.
</pre>

; gluster-volume-delete : Deletes the specified Gluster volume.

<pre> qs gluster-volume-delete|gv-delete --gluster-volume=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|}
  
=== hw-controller-group-list [hwcg-list] ===
 
Returns a list of all the hardware controller groups.
 
  
=== hw-controller-import-units [hwc-import-units] ===
Scans and imports foreign disks associated with RAID groups that were attached to another RAID controller or that require re-importing to the local appliance.
<pre>
<--controller>   :: Name or ID of a hardware RAID controller.
</pre>

; gluster-volume-get : Gets information about the specified Gluster volume

<pre> qs gluster-volume-get|gv-get --gluster-volume=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|}
  
=== hw-controller-list [hwc-list] ===
 
Returns a list of all the hardware controllers.
 
<pre>
 
[--controller-group] :: Name or ID of a hardware RAID controller group.
 
</pre>
 
  
=== hw-controller-rescan [hwc-rescan] ===
Rescans the hardware controller to look for new disks and RAID units.
<pre>
<--controller>   :: Name or ID of a hardware RAID controller.
</pre>

; gluster-volume-list : Returns a list of all the Gluster volumes in the grid

<pre> qs gluster-volume-list|gv-list </pre>
  
=== hw-disk-delete [hwd-delete] ===
Marks the specified disk so that it can be removed from the enclosure.  Disks marked as hot-spares will return to normal status after being deleted.
<pre>
<--disk>         :: Specifies a physical disk connected to a hardware RAID controller.
[--duration]     :: Duration in seconds to repeat the disk identification pattern.
</pre>

; gluster-volume-modify : Modifies the name and/or description of the specified Gluster volume.

<pre> qs gluster-volume-modify|gv-modify --gluster-volume=value --name=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
  
=== hw-disk-get [hwd-get] ===
Returns information about a specific disk managed by a hardware RAID controller.
<pre>
<--disk>         :: Specifies a physical disk connected to a hardware RAID controller.
</pre>

; gluster-volume-rebalance : Rebalances files across the bricks in the specified Gluster volume

<pre> qs gluster-volume-rebalance|gv-rebalance --gluster-volume=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
=== hw-disk-identify [hwd-identify] ===
 
Flashes the LED indicator light on the specified disk so that it can be identified in the enclosure chassis.
 
<pre>
 
<--unit>         :: Name of a hardware RAID unit or its unique ID.
 
[--duration]    :: Duration in seconds to repeat the disk identification pattern.
 
</pre>
 
  
=== hw-disk-list [hwd-list] ===
Returns a list of all the disks managed by the specified hardware controller.
<pre>
[--controller]   :: Name or ID of a hardware RAID controller.
</pre>

; gluster-volume-start : Starts the specified Gluster volume

<pre> qs gluster-volume-start|gv-start --gluster-volume=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
  
=== hw-disk-mark-good [hwd-mark-good] ===
Marks the specified disk as 'good' or 'ready'.  You can use this to correct the disk status for good disks that the controller has in 'bad' or 'failed' state.
<pre>
<--disk>         :: Specifies a physical disk connected to a hardware RAID controller.
</pre>

; gluster-volume-stop : Stops the specified Gluster volume.

<pre> qs gluster-volume-stop|gv-stop --gluster-volume=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>gluster-volume</tt> || Name or ID of the Gluster volume
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
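
A typical stop/start cycle for a Gluster volume (the volume name 'gvol1' is a hypothetical placeholder) might look like:
<pre>
qs gluster-volume-stop --gluster-volume=gvol1
qs gluster-volume-start --gluster-volume=gvol1
</pre>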
  
=== hw-disk-mark-spare [hwd-mark-spare] ===
Marks the specified disk as a universal hot spare within the group of RAID units managed by the controller to which the disk is attached.
<pre>
<--disk>         :: Specifies a physical disk connected to a hardware RAID controller.
</pre>

</div>
</div>

===GRID  Storage System Grid Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
  
=== hw-enclosure-get [hwe-get] ===
Returns information about a specific enclosure managed by the specified hardware RAID controller.
<pre>
<--enclosure>    :: Name of a hardware RAID enclosure or its unique ID.
</pre>

; grid-add : Adds the specified storage system to the management grid.

<pre> qs grid-add|mg-add --node-ipaddress=value [--node-username=value ] [--node-password=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>node-ipaddress</tt> || IP address of a storage system node.
|-
| &nbsp; || <tt>node-username</tt> || Admin user account with permissions to add/remove nodes from the grid.
|-
| &nbsp; || <tt>node-password</tt> || Admin user account password.
|}
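
For example, to join a node to the grid (the IP address and credentials shown are hypothetical):
<pre>
qs grid-add --node-ipaddress=10.0.0.12 --node-username=admin --node-password=secret
</pre>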
  
=== hw-enclosure-list [hwe-list] ===
 
Returns a list of all the enclosures managed by the specified hardware RAID controller.
 
<pre>
 
[--controller]  :: Name or ID of a hardware RAID controller.
 
</pre>
 
  
=== hw-unit-create [hwu-create] ===
Creates a new hardware RAID unit using the specified controller.
<pre>
<--raid>         :: Hardware RAID type for a hardware RAID unit. [*AUTO, RAID0, RAID1,
                            RAID10, RAID5, RAID50, RAID6, RAID60]
<--disk-list>    :: Specifies one or more physical disks connected to a hardware RAID
                            controller. Use 'all' to indicate all unused disks.
[--controller]   :: Name or ID of a hardware RAID controller.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; grid-assoc-get : Get general information about the associated storage system management grid.

<pre> qs grid-assoc-get|mg-aget --name=value --storage-system=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [min]
|}
  
  
=== hw-unit-delete [hwu-delete] ===
Deletes the specified RAID unit.  Note that you must first delete the Storage Pool before you delete the RAID unit.
<pre>
<--unit>         :: Name of a hardware RAID unit or its unique ID.
[--duration]     :: Duration in seconds to repeat the disk identification pattern.
</pre>

; grid-assoc-list : Returns a list of the associated storage system nodes in the grid.

<pre> qs grid-assoc-list|mg-alist </pre>
  
=== hw-unit-encrypt [hwu-encrypt] ===
Enables hardware SED/FDE encryption for the specified hardware RAID unit.
<pre>
<--unit>         :: Name of a hardware RAID unit or its unique ID.
[--options]      :: Special options for the hardware encryption policy.
</pre>

; grid-create : Creates a new management grid. A given storage system can only be a member of one grid at a time.

<pre> qs grid-create|mg-create --name=value [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|}
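
For example (the grid name here is an arbitrary placeholder):
<pre>
qs grid-create --name=grid1 --desc=primary-datacenter-grid
</pre>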
  
=== hw-unit-get [hwu-get] ===
 
Returns information about a specific RAID unit managed by the specified hardware RAID controller.
 
<pre>
 
<--unit>         :: Name of a hardware RAID unit or its unique ID.
 
</pre>
 
  
=== hw-unit-identify [hwu-identify] ===
Flashes the LED indicator light on all the disks in the RAID unit so that it can be identified in the enclosure.
<pre>
<--unit>         :: Name of a hardware RAID unit or its unique ID.
[--duration]     :: Duration in seconds to repeat the disk identification pattern.
</pre>

; grid-delete : Deletes the management grid.  After the grid is deleted each node in the grid operates independently again.

<pre> qs grid-delete|mg-delete </pre>
  
=== hw-unit-list [hwu-list] ===
Returns a list of all the RAID units managed by the specified hardware controller.
<pre>
[--controller]   :: Name or ID of a hardware RAID controller.
</pre>

; grid-get : Get general information about the storage system management grid.

<pre> qs grid-get|mg-get </pre>

; grid-get-hosts : Returns the /etc/hosts configuration file contents for grid nodes.

<pre> qs grid-get-hosts|mg-get-hosts </pre>

== SAS Switch Management ==
  
=== hw-switch-adapter-get [hwsa-get] ===
Returns information about the specified HW switch management module.
<pre>
<--switch-adapter> :: Storage switch adapter module ID
</pre>

; grid-modify : Modify the management grid properties.

<pre> qs grid-modify|mg-modify --name=value [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|}
  
=== hw-switch-adapter-list [hwsa-list] ===
 
Returns a list of all the storage switch management adapters
 
  
=== hw-switch-cred-add [hwsc-add] ===
Adds storage switch login credentials for a specific switch management adapter
<pre>
<--username>     :: Administrator user name for the host, typically 'Administrator' for
                            Windows hosts.
<--password>     :: Administrator password for the host; enables auto-configuration of
                            host's iSCSI initiator.
<--domain-password> :: Password for committing zoneset changes to a storage switch.
<--ip-address>   :: IP Address of the host being added; if unspecified the service will look
                            it up.
[--switch-adapter] :: Storage switch adapter module ID
[--primary]      :: Primary storage system responsible for managing and discovering the
                            switch(es)
[--secondary]    :: Secondary storage system responsible for managing and discovering the
                            switch(es)
</pre>

; grid-remove : Removes the specified storage system from the management grid.

<pre> qs grid-remove|mg-remove --storage-system=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|}
  
=== hw-switch-cred-get [hwsc-get] ===
 
Returns information about specific storage switch login credentials
 
<pre>
 
<--creds>        :: Storage switch credentials (user/pass)
 
</pre>
 
  
=== hw-switch-cred-list [hwsc-list] ===
Returns a list of all the storage switch login credentials
<pre>
[--switch-adapter] :: Storage switch adapter module ID
</pre>

; grid-set-hosts : Configures the /etc/hosts file on all systems to facilitate host-name-based Gluster volume configurations.

<pre> qs grid-set-hosts|mg-set-hosts --portid-list=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>portid-list</tt> || List of UUIDs for the ethernet ports to be used for grid-wide /etc/hosts file configuration.
|}
  
  
=== hw-switch-cred-remove [hwsc-remove] ===
Removes storage switch login credentials
<pre>
<--creds>        :: Storage switch credentials (user/pass)
</pre>

; grid-set-master : Sets the master node for a storage system.

<pre> qs grid-set-master|mg-set --storage-system=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|}
  
=== hw-switch-failover-group-activate [hwsfg-activate] ===
Activates the pools in a switch failover group on the specified storage system
<pre>
<--failover-group> :: Name/ID of a storage switch failover group
<--storage-system> :: Name or ID of a storage system in a management grid.
</pre>

</div>
</div>

===HA-FAILOVER  Storage Pool HA Failover Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
  
=== hw-switch-failover-group-create [hwsfg-create] ===
Creates a new switch failover group
<pre>
<--name>         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
<--pool-list>    :: List of one or more storage pools.
<--sys-primary>  :: Storage system associated with the failover group primary node
<--zoneset-primary> :: Zoneset to be associated with the failover group primary node
<--sys-secondary> :: Storage system associated with the failover group secondary node
<--zoneset-secondary> :: Zoneset to be associated with the failover group secondary node
[--ip-address]   :: IP Address of the host being added; if unspecified the service will look
                            it up.
[--netmask]      :: Subnet IP mask (ex: 255.255.255.0)
[--gateway]      :: IP address of the network gateway
[--desc]         :: A description for the object.
</pre>

; ha-group-activate : Activates/enables the specified high-availability group so that failover can occur on a system outage.

<pre> qs ha-group-activate|hag-activate --ha-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
|}
  
=== hw-switch-failover-group-delete [hwsfg-delete] ===
 
Deletes a failover group
 
<pre>
 
<--failover-group> :: Name/ID of a storage switch failover group
 
</pre>
 
  
=== hw-switch-failover-group-get [hwsfg-get] ===
Returns information about a specific switch failover group
<pre>
<--failover-group> :: Name/ID of a storage switch failover group
</pre>

; ha-group-create : Creates a new storage pool high-availability group.

<pre> qs ha-group-create|hag-create --name=value --pool=value --sys-secondary=value --sys-primary=value [--desc=value ] [--ha-module=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>pool</tt> || Name of the storage pool or its unique ID (GUID).
|-
| &nbsp; || <tt>sys-secondary</tt> || Storage system associated with the failover group secondary node
|-
| &nbsp; || <tt>sys-primary</tt> || Storage system associated with the failover group primary node
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>ha-module</tt> || Name or UUID of a storage pool high-availability module
|}
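
For example, to create an HA group for a pool shared between two grid nodes (the pool and system names are hypothetical placeholders):
<pre>
qs ha-group-create --name=hagroup1 --pool=pool1 --sys-primary=qs-node1 --sys-secondary=qs-node2
</pre>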
  
  
=== hw-switch-failover-group-list [hwsfg-list] ===
Returns a list of all the switch failover groups
<pre>
[--switch]       :: Name or ID of a SAS/FC storage switch
</pre>

; ha-group-deactivate : Deactivates the specified high-availability group so that failover policies are disabled.

<pre> qs ha-group-deactivate|hag-deactivate --ha-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
|}
  
=== hw-switch-failover-group-modify [hwsfg-modify] ===
 
Modifies the properties of a failover group
 
<pre>
 
<--failover-group> :: Name/ID of a storage switch failover group
 
[--name]        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
[--ip-address]  :: IP Address of the host being added; if unspecified the service will look
 
                            it up.
 
[--netmask]      :: Subnet IP mask (ex: 255.255.255.0)
 
[--gateway]      :: IP address of the network gateway
 
[--pool-list]    :: List of one or more storage pools.
 
[--sys-primary]  :: Storage system associated with the failover group primary node
 
[--zoneset-primary] :: Zoneset to be associated with the failover group primary node
 
[--sys-secondary] :: Storage system associated with the failover group secondary node
 
[--zoneset-secondary] :: Zoneset to be associated with the failover group secondary node
 
[--desc]        :: A description for the object.
 
</pre>
 
  
 +
; ha-group-delete : Deletes the specified high-availability group
  
=== hw-switch-get [hws-get] ===
+
<pre> qs ha-group-delete|hag-delete --ha-group=value </pre>
Returns detailed information about a storage switch
+
{| cellspacing='0' cellpadding='5'
<pre>
+
|-
<--switch>       :: Name or ID of a SAS/FC storage switch
+
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
</pre>
+
|}
  
  
=== hw-switch-list [hws-list] ===
+
; ha-group-failover : Manually triggers a failover of the specified storage pool using the associated storage pool HA group policy.
Returns a list of all the discovered storage switches
+
<pre>
+
[--switch-adapter] :: Storage switch adapter module ID
+
</pre>
+
  
 +
<pre> qs ha-group-failover|hag-failover --ha-group=value --storage-system=value [--flags=value ] </pre>
 +
{| cellspacing='0' cellpadding='5'
 +
|-
 +
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
 +
|-
 +
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
 +
|-
 +
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
 +
|}
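
As a worked sketch of the HA group commands above, the group, pool, and system names below are illustrative placeholders only, not values from this guide:

<pre>
# Create an HA group for pool 'pool0' spanning systems 'qs-node1' and 'qs-node2' (placeholder names)
qs ha-group-create --name=pool0-ha --pool=pool0 --sys-primary=qs-node1 --sys-secondary=qs-node2
# Later, manually fail the pool over to the secondary system
qs ha-group-failover --ha-group=pool0-ha --storage-system=qs-node2
</pre>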
  
=== hw-switch-zoneset-activate [hwsz-activate] ===
Activates a specific storage switch zoneset.
<pre>
<--zoneset>      :: Name or ID of a storage switch zoneset
[--switch]       :: Name or ID of a SAS/FC storage switch
</pre>

=== hw-switch-zoneset-get [hwsz-get] ===
Returns information about a specific switch zoneset.
<pre>
<--zoneset>      :: Name or ID of a storage switch zoneset
</pre>

=== hw-switch-zoneset-list [hwsz-list] ===
Returns a list of all the discovered zonesets.
<pre>
[--switch]       :: Name or ID of a SAS/FC storage switch
</pre>

== License Management ==

=== license-activate [lic-act] ===
Activates the system using an activation key received from customer support.
<pre>
<--activation-key> :: Activation key you'll receive from customer support after you send the
                            activation request code.
</pre>

=== license-activate-online [lic-aon] ===
Requests automatic activation via the online activation service.
<pre>
[--key]          :: Unique license key identifier, use license-list to get a list of these.
</pre>

; ha-group-get : Gets information about the specified storage pool HA group.

<pre> qs ha-group-get|hag-get --ha-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
|}

; ha-group-list : Returns a list of all the HA groups.

<pre> qs ha-group-list|hag-list </pre>

; ha-group-modify : Modifies the settings for the specified high-availability group.

<pre> qs ha-group-modify|hag-modify --ha-group=value [--name=value ] [--desc=value ] [--sys-secondary=value ] [--ha-module=value ] [--verify-client-ips=value ] [--client-connectivity-check-policy=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>sys-secondary</tt> || Storage system associated with the failover group secondary node
|-
| &nbsp; || <tt>ha-module</tt> || Name or UUID of a storage pool high-availability module
|-
| &nbsp; || <tt>verify-client-ips</tt> || IP addresses of hosts that should be pinged to verify connectivity.  If there's no connectivity a preemptive failover is attempted.
|-
| &nbsp; || <tt>client-connectivity-check-policy</tt> || Client connectivity failover policy. [all-nonresponsive, *disabled, majority-nonresponsive]
|}

; ha-interface-create : Creates a new virtual network interface for the specified HA failover group.

<pre> qs ha-interface-create|hai-create --ha-group=value --parent-port=value --ip-address=value [--netmask=value ] [--interface-tag=value ] [--desc=value ] [--gateway=value ] [--mac-address=value ] [--iscsi-enable=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-group</tt> || Name or UUID of a storage pool high-availability group
|-
| &nbsp; || <tt>parent-port</tt> || Parent network port like 'eth0' which the virtual interface should be attached to.  On failover the virtual interface will attach to the port with the same name on the failover/secondary node.
|-
| &nbsp; || <tt>ip-address</tt> || IP Address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>netmask</tt> || Subnet IP mask (ex: 255.255.255.0)
|-
| &nbsp; || <tt>interface-tag</tt> || An alpha-numeric tag which is appended onto a given HA virtual interface for easy identification.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>gateway</tt> || IP address of the network gateway
|-
| &nbsp; || <tt>mac-address</tt> || MAC Address
|-
| &nbsp; || <tt>iscsi-enable</tt> || Enables or disables iSCSI access to the specified port(s).
|}
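
Creating a floating virtual HA interface as described above might look like the following sketch; the group name, port, and addresses are illustrative values only:

<pre>
# Attach a floating virtual IP to eth0 for an HA group (all values are placeholders)
qs ha-interface-create --ha-group=pool0-ha --parent-port=eth0 --ip-address=10.0.8.50 --netmask=255.255.255.0 --gateway=10.0.8.1
</pre>

On failover the virtual IP moves to the port of the same name on the secondary node, so clients keep connecting to the same address.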
  
=== license-add [lic-add] ===
Adds a license key using a license key block specified in a key file. In general, you have 7 days to activate your license using online activation or activation via email. If you do not activate within the 7 days the system will continue to run but you will not be able to make configuration changes.
<pre>
[--storage-system] :: Name or ID of a storage system in a management grid.
<--key-file>     :: Key file you received which contains a key block section.
</pre>

=== license-get [lic-get] ===
Shows the current license key info, and any activation request code.
<pre>
<--key>          :: Unique license key identifier, use license-list to get a list of these.
</pre>

=== license-list [lic-list] ===
Returns a list of all the registered license keys.

=== license-remove [lic-remove] ===
Removes the specified license key.
<pre>
<--key>          :: Unique license key identifier, use license-list to get a list of these.
</pre>

== Cloud I/O Stats Management ==

=== metrics-get [lm-get] ===
Gets the current username, token, and interval settings for Librato Metrics.
<pre>
<--storage-system> :: Name or ID of a storage system in a management grid.
</pre>

=== metrics-set [lm-set] ===
Sets the username, token, and interval for Librato Metrics posting.
<pre>
[--storage-system] :: Name or ID of a storage system in a management grid.
<--username>     :: The username/email of the Librato Metrics account.
<--token>        :: The API token associated with the Librato Metrics account.
[--interval]     :: The interval in seconds of how often QuantaStor should post data to
                            Librato Metrics.
[--dashboards]   :: Automatically create the QuantaStor system dashboard in Librato Metrics
[--alert-anno]   :: Add alert annotations to the Librato Metrics postings.
[--config-anno]  :: Add config annotations to the Librato Metrics postings.
</pre>

; ha-interface-delete : Deletes the specified virtual network interface resource from the HA group.

<pre> qs ha-interface-delete|hai-delete --ha-interface=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-interface</tt> || Name or UUID of a storage pool high-availability virtual network interface
|}

; ha-interface-get : Gets information about the specified storage pool HA virtual network interface.

<pre> qs ha-interface-get|hai-get --ha-interface=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-interface</tt> || Name or UUID of a storage pool high-availability virtual network interface
|}

; ha-interface-list : Returns a list of all the HA interfaces on the specified group.

<pre> qs ha-interface-list|hai-list </pre>

; ha-module-get : Gets information about the specified storage pool HA module.

<pre> qs ha-module-get|ham-get --ha-module=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>ha-module</tt> || Name or UUID of a storage pool high-availability module
|}
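
For instance, posting metrics every 60 seconds might be configured as follows; the account name and token are placeholders for your own Librato Metrics credentials:

<pre>
# Hypothetical values; use the API token from your Librato Metrics account
qs metrics-set --username=ops@example.com --token=LIBRATO-API-TOKEN --interval=60
</pre>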
  
  
== Storage Pool Management ==

=== pool-add-spare [p-add] ===
Adds a dedicated hot-spare to the specified storage pool.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
<--disk-list>    :: Comma delimited list of drives (no spaces) to be used for the operation.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-create [p-create] ===
Creates a new storage pool from which storage volumes can be created.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--disk-list>    :: Comma delimited list of drives (no spaces) to be used for the operation.
[--raid-type]    :: RAID type for the storage pool. [*AUTO, LINEAR, RAID0, RAID1, RAID10,
                            RAID5, RAID6]
[--pool-type]    :: The type of storage pool to be created. [btrfs, ceph, ext4, jfs, *xfs,
                            zfs]
[--desc]         :: A description for the object.
[--is-default]   :: Indicates that this pool should be utilized as the default storage pool.
[--ssd]          :: Enable solid state disk (SSD) storage pool optimizations.
[--compress]     :: Enable storage volume compression on the pool; this boosts both read and
                            write performance on most IO loads.
[--nobarriers]   :: Enable storage pool write optimizations.  This requires that you have a
                            hardware controller with a battery backup unit.
[--profile]      :: Specifies an optional IO optimization profile for the storage pool.
                            Storage pool profiles control elements like read-ahead, queue depth and
                            other device configurable settings.
[--raid-set-size] :: The number of disks to use in each set of disks when creating a RAID50
                            or RAID60 storage pool.
[--encrypt]      :: Enables encryption on all devices in the storage pool. Encryption can
                            only be enabled at the time of pool creation.
[--encryption-type] :: Sets the encryption algorithm, currently only the default aes256 is
                            supported. [aes256, *default]
[--passphrase]   :: Locks up the storage keys into an encrypted file after the pool has been
                            created.  After this the passphrase must be provided to start the storage
                            pool.  If no passphrase is provided then the pool is started
                            automatically at system startup time.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-destroy [p-destroy] ===
Deletes a storage pool, *WARNING* any data in the pool will be lost.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

=== pool-device-get [spd-get] ===
Gets information about a specific storage pool device.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
</pre>

; ha-module-list : Returns a list of all the HA failover modules.

<pre> qs ha-module-list|ham-list </pre>

; ping-check : Pings the specified list of IP addresses and returns the list of IPs that responded to the ping check.

<pre> qs ping-check|ping --verify-client-ips=value [--storage-system=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>verify-client-ips</tt> || IP addresses of hosts that should be pinged to verify connectivity.  If there's no connectivity a preemptive failover is attempted.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|}

</div>
</div>

===HOST  Host Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>

; host-add : Adds a host entry. The username/password fields are optional and are not yet leveraged by the QuantaStor system. Later this may be used to provide additional levels of integration such as automatic host side configuration of your iSCSI initiator.

<pre> qs host-add|h-add --hostname=value [--iqn=value ] [--ip-address=value ] [--desc=value ] [--username=value ] [--password=value ] [--host-type=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>hostname</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>iqn</tt> || IQN (iSCSI Qualified Name) of the host's iSCSI initiator
|-
| &nbsp; || <tt>ip-address</tt> || IP Address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>username</tt> || Administrator user name for the host, typically 'Administrator' for Windows hosts.
|-
| &nbsp; || <tt>password</tt> || Administrator password for the host; enables auto-configuration of host's iSCSI initiator.
|-
| &nbsp; || <tt>host-type</tt> || Operating system type of the host. [aix, hpux, linux, other, solaris, vmware, *windows, xenserver]
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async]
|}
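
Putting the pool-create parameters together, a four-drive ZFS pool might be created as follows; the pool name and device names are placeholders for the drives present on your system:

<pre>
# Create a RAID10 ZFS pool from four drives (names are illustrative)
qs pool-create --name=pool0 --disk-list=sdb,sdc,sdd,sde --raid-type=RAID10 --pool-type=zfs
</pre>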
  
  
=== pool-device-list [spd-list] ===
Returns a list of all the storage pool devices.

=== pool-expand [p-expand] ===
Expands a storage pool after the underlying hardware RAID unit has been grown underneath.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-export [p-export] ===
Deactivates and removes the storage pool from the storage system database so that it can be exported and used on another system.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async]
</pre>

; host-get : Gets information about a specific host.

<pre> qs host-get|h-get --host=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|}

; host-initiator-add : Adds an additional iSCSI host initiator IQN to the specified host.

<pre> qs host-initiator-add|hi-add --host=value --iqn=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|-
| &nbsp; || <tt>iqn</tt> || IQN (iSCSI Qualified Name) of the host's iSCSI initiator
|}
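
For a host with more than one initiator port, additional IQNs can be registered one at a time; the host name and IQN below are illustrative only:

<pre>
# Register a second initiator IQN for a (hypothetical) host 'esx1'
qs host-initiator-add --host=esx1 --iqn=iqn.1998-01.com.vmware:esx1-2nd-port
</pre>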
  
=== pool-get [p-get] ===
Gets information about a specific storage pool.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
</pre>

=== pool-grow [p-grow] ===
Grows the specified storage pool by adding an additional disk.  You can only grow storage pools that are using the RAID5 or RAID6 layout.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
<--disk-list>    :: Comma delimited list of drives (no spaces) to be used for the operation.
[--raid-type]    :: RAID type for the storage pool. [*AUTO, LINEAR, RAID0, RAID1, RAID10,
                            RAID5, RAID6]
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-identify [p-id] ===
Pulses the disk activity lights for all disks in the pool so they can be identified in the chassis.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--pattern]      :: Pattern to flash the disk LED lights in, p = short pulse, P = long
                            pulse, d = short delay, D = long delay, ex: pattern=pppD
[--duration]     :: Duration in seconds to repeat the disk identification pattern.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-import [p-import] ===
Imports the named storage pool(s) which have been previously exported.
<pre>
[--pool-list]    :: List of storage pools.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-list [p-list] ===
Returns a list of all the storage pools.

; host-initiator-get : Gets information about a specific host identified by its initiator IQN.

<pre> qs host-initiator-get|hi-get --iqn=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>iqn</tt> || IQN (iSCSI Qualified Name) of the host's iSCSI initiator
|}

; host-initiator-list : Returns a list of all the initiators (IQN) of the specified host.

<pre> qs host-initiator-list|hi-list --host=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|}

; host-initiator-remove : Removes an iSCSI host initiator (IQN) from the specified host.

<pre> qs host-initiator-remove|hi-remove --host=value --iqn=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|-
| &nbsp; || <tt>iqn</tt> || IQN (iSCSI Qualified Name) of the host's iSCSI initiator
|}
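
As an example of the LED pattern syntax described under pool-identify, the following would repeat three short pulses and a long delay for 60 seconds; the pool name is a placeholder:

<pre>
# Blink the pool's drive LEDs with pattern 'pppD' (three short pulses, long delay)
qs pool-identify --pool=pool0 --pattern=pppD --duration=60
</pre>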
  
=== pool-modify [p-modify] ===
Modifies the properties of the storage pool such as its name and description.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--name]         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--is-default]   :: Indicates that this pool should be utilized as the default storage pool.
[--ssd]          :: Enable solid state disk (SSD) storage pool optimizations.
[--compress]     :: Enable storage volume compression on the pool; this boosts both read and
                            write performance on most IO loads.
[--nobarriers]   :: Enable storage pool write optimizations.  This requires that you have a
                            hardware controller with a battery backup unit.
[--profile]      :: Specifies an optional IO optimization profile for the storage pool.
                            Storage pool profiles control elements like read-ahead, queue depth and
                            other device configurable settings.
[--desc]         :: A description for the object.
[--sync]         :: Synchronization policy to use for handling writes to the storage pool
                            (standard, always, disabled).  standard mode is a hybrid of write-through
                            and write-back caching based on the O_SYNC flag, always mode is
                            write-through to ZIL which could be SSD cache, and disabled indicates to
                            always use async writes. [always, disabled, *standard]
[--compression-type] :: Type of compression to be used. (on | off | lzjb | gzip | gzip-[1-9] |
                            zle | lz4)
[--repair-policy] :: Type of automatic hotspare repair action to be applied.
                            (assigned-and-global | assigned-only | assigned-and-global-exact |
                            assigned-only-exact | manual-repair) [*assigned-and-global,
                            assigned-and-global-exact, assigned-only, assigned-only-exact, manual]
[--approve-repair] :: Set flag to approve a pending storage pool repair action that is
                            currently deferred and requires explicit manual approval by an
                            operator/admin to proceed.
[--copies]       :: Indicates the number of copies of each block that should be maintained in
                            the storage pool.  This is a way of getting duplicates for bit-rot
                            protection on a single device.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-preimport-scan [ppi-scan] ===
Returns a list of pools that are available to import but that are not yet discovered.
<pre>
[--storage-system] :: Name or ID of a storage system in a management grid.
[--flags]        :: Optional flags for the operation. [min]
</pre>

=== pool-profile-get [pp-get] ===
Gets information about a specific storage pool profile.
<pre>
<--profile>      :: Specifies an optional IO optimization profile for the storage pool.
                            Storage pool profiles control elements like read-ahead, queue depth and
                            other device configurable settings.
</pre>

=== pool-profile-list [pp-list] ===
Returns a list of all the storage pool profiles.

=== pool-remove-spare [p-remove] ===
Removes the specified hot-spare from the specified pool.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
<--disk-list>    :: Comma delimited list of drives (no spaces) to be used for the operation.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; host-list : Returns a list of all the hosts that you have added to the QuantaStor system. Host groups allow you to assign storage to multiple hosts all at once. This is especially useful when you have a VMware or Windows cluster, as you can assign and unassign storage to all nodes in the cluster in one operation.

<pre> qs host-list|h-list </pre>

; host-modify : Modifies a host entry which contains a list of WWN/IQN or IB GIDs for a given host.

<pre> qs host-modify|h-modify --host=value [--desc=value ] [--ip-address=value ] [--username=value ] [--password=value ] [--host-type=value ] [--hostname=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>ip-address</tt> || IP Address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>username</tt> || Administrator user name for the host, typically 'Administrator' for Windows hosts.
|-
| &nbsp; || <tt>password</tt> || Administrator password for the host; enables auto-configuration of host's iSCSI initiator.
|-
| &nbsp; || <tt>host-type</tt> || Operating system type of the host. [aix, hpux, linux, other, solaris, vmware, *windows, xenserver]
|-
| &nbsp; || <tt>hostname</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async]
|}

; host-remove : Removes the specified host, *WARNING* the host's active iSCSI sessions will be dropped.

<pre> qs host-remove|h-remove --host=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host</tt> || Name of the host or its unique ID (GUID).
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}

</div>
</div>
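
To illustrate the pool-modify sync and compression options, the following sketch switches a pool to write-through (always) sync with lz4 compression; the pool name is a placeholder:

<pre>
# Force write-through to the ZIL and switch to lz4 compression (pool name is illustrative)
qs pool-modify --pool=pool0 --sync=always --compression-type=lz4
</pre>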
  
=== pool-scan [p-scan] ===
Rescans the specified storage system for storage pools.
<pre>
[--storage-system] :: Name or ID of a storage system in a management grid.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-scrub-start [p-scrub-start] ===
Starts a zpool scrub/verify operation which verifies data integrity and prevents bit-rot.  Use 'zpoolscrub --cron' to set up an automatic monthly scrub.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-scrub-stop [p-scrub-stop] ===
Stops the zpool scrub/verify operation if it is currently active on the storage pool.  Only applies to ZFS based storage pools.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async]
</pre>

===HOST-GROUP  Host Group Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>

; host-group-create : Creates a new host group with the specified name.

<pre> qs host-group-create|hg-create --name=value --host-list=value [--desc=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>host-list</tt> || A list of one or more hosts by name or ID.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async]
|}

; host-group-delete : Removes the specified host group.

<pre> qs host-group-delete|hg-delete --host-group=value [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host-group</tt> || An arbitrary collection of hosts used to simplify volume ACL management for grids and other groups of hosts.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force]
|}
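
A cluster-wide host group as described above could be created like this; the group and host names are placeholders, and a comma-delimited host list is assumed:

<pre>
# Group three (hypothetical) ESXi hosts so volumes can be assigned to all of them at once
qs host-group-create --name=vmware-cluster1 --host-list=esx1,esx2,esx3
</pre>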
  
=== pool-start [p-start] ===
Starts up a previously stopped storage pool.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--passphrase]   :: If the Storage Pool was created with an encryption passphrase then it
                            must be specified in order to temporarily unlock the keys and start the
                            Storage Pool.
[--flags]        :: Optional flags for the operation. [async]
</pre>

=== pool-stop [p-stop] ===
Stops all volume activity to the pool and disables it for maintenance.
<pre>
<--pool>         :: Name of the storage pool or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async]
</pre>

== QoS Policy Management ==

=== qos-policy-create [qos-create] ===
Creates a new Quality-of-Service (QoS) policy template which can be used to apply performance limits to Storage Volumes.
<pre>
<--name>         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
<--bw-read>      :: Sets the maximum read bandwidth (eg: 100MB) per second as a Quality of
                            Service (QoS) control on the storage volume, 0 (default) indicates
                            unlimited.
<--bw-write>     :: Sets the maximum write bandwidth (eg: 100MB) per second as a Quality of
                            Service (QoS) control on the storage volume, 0 (default) indicates
                            unlimited.
[--desc]         :: A description for the object.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; host-group-get : Gets information about a specific host group.

<pre> qs host-group-get|hg-get --host-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host-group</tt> || An arbitrary collection of hosts used to simplify volume ACL management for grids and other groups of hosts.
|}

; host-group-host-add : Adds a host to the specified host group.

<pre> qs host-group-host-add|hg-host-add --host-group=value --host-list=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host-group</tt> || An arbitrary collection of hosts used to simplify volume ACL management for grids and other groups of hosts.
|-
| &nbsp; || <tt>host-list</tt> || A list of one or more hosts by name or ID.
|}
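
For example, a QoS policy capping reads at 200MB/sec and writes at 100MB/sec might be defined as follows; the policy name is a placeholder:

<pre>
# Create a QoS policy template (name is illustrative)
qs qos-policy-create --name=qos-gold --bw-read=200MB --bw-write=100MB
</pre>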
  
  
=== qos-policy-delete [qos-delete] ===
Deletes a given QoS Policy and clears the QoS performance limits on all Storage Volumes associated with the policy.
<pre>
<--qos-policy>   :: Specifies the name or ID of a Quality of Service (QoS) policy.  QoS
                            policies limit the throughput and IOPs of a storage volume which is
                            especially useful in multi-tenant environments.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

=== qos-policy-get [qos-get] ===
Returns detailed information about a specific QoS policy.
<pre>
<--qos-policy>   :: Specifies the name or ID of a Quality of Service (QoS) policy.  QoS
                            policies limit the throughput and IOPs of a storage volume which is
                            especially useful in multi-tenant environments.
</pre>

=== qos-policy-list [qos-list] ===
Returns details on the list of all QoS policy objects in the storage system grid.

=== qos-policy-modify [qos-modify] ===
Modifies an existing QoS policy with a new name, description, or performance limits.  Changes are applied immediately to all volumes.
<pre>
<--qos-policy>   :: Specifies the name or ID of a Quality of Service (QoS) policy.  QoS
                            policies limit the throughput and IOPs of a storage volume which is
                            especially useful in multi-tenant environments.
[--name]         :: Names may include any alpha-numeric characters plus '_' and '-'; spaces
                            are not allowed.
[--desc]         :: A description for the object.
[--bw-read]      :: Sets the maximum read bandwidth (eg: 100MB) per second as a Quality of
                            Service (QoS) control on the storage volume, 0 (default) indicates
                            unlimited.
[--bw-write]     :: Sets the maximum write bandwidth (eg: 100MB) per second as a Quality of
                            Service (QoS) control on the storage volume, 0 (default) indicates
                            unlimited.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; host-group-host-remove : Removes a host from the specified host group.

<pre> qs host-group-host-remove|hg-host-remove --host-group=value --host-list=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host-group</tt> || An arbitrary collection of hosts used to simplify volume ACL management for grids and other groups of hosts.
|-
| &nbsp; || <tt>host-list</tt> || A list of one or more hosts by name or ID.
|}

; host-group-list : Returns a list of all the host groups.

<pre> qs host-group-list|hg-list </pre>

; host-group-modify : Modifies the properties of a host group such as its name and/or description.

<pre> qs host-group-modify|hg-modify --host-group=value [--name=value ] [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>host-group</tt> || An arbitrary collection of hosts used to simplify volume ACL management for grids and other groups of hosts.
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|}
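
Host group membership changes are each a single operation; the sketch below uses placeholder group and host names:

<pre>
# Add a (hypothetical) new cluster node to a group, then retire an old one
qs host-group-host-add --host-group=vmware-cluster1 --host-list=esx4
qs host-group-host-remove --host-group=vmware-cluster1 --host-list=esx1
</pre>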
  
== Resource Group Quota Management ==

=== provisioning-quota-create [pq-create] ===
Creates a new storage provisioning quota on a pool for the specified tenant resource cloud.
<pre>
<--name>         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
<--pool>         :: Name of the storage pool or its unique ID (GUID).
<--cloud>        :: Name of a Storage Cloud or its unique id.
[--policy]       :: Indicates the type of quota to be created. [hard, *soft]
[--desc]         :: A description for the object.
[--psize]        :: The total thin-provisionable space allowed by this provisioning quota.
[--usize]        :: The total utilizable space allowed by this provisioning quota which may
                            be less than the provisionable space.
[--max-volumes]  :: The maximum number of volumes that can be created using this quota,
                            specify 0 for no limit.
[--max-shares]   :: The maximum number of shares that can be created using this quota,
                            specify 0 for no limit.
[--flags]        :: Optional flags for the operation. [async]
</pre>

</div>
</div>

===HWRAID  Hardware RAID Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
=== provisioning-quota-delete [pq-delete] ===
Deletes a storage provisioning quota; the associated volumes are not deleted.
<pre>
<--quota>        :: Name or ID of a storage provisioning quota.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-alarm-clear-all : Clears all the hardware alarms that have been recorded for the specified hardware RAID controller.

<pre> qs hw-alarm-clear-all|hwa-clear-all --controller=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
  
=== provisioning-quota-get [pq-get] ===
Returns information about a specific storage provisioning quota.
<pre>
<--quota>        :: Name or ID of a storage provisioning quota.
</pre>

; hw-alarm-get : Returns information about a specific hardware alarm.

<pre> qs hw-alarm-get|hwa-get --id=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>id</tt> || Unique identifier (GUID) for the object.
|}
  
=== provisioning-quota-list [pq-list] ===
 
Returns a list of all the storage provisioning quotas.
 
  
=== provisioning-quota-modify [pq-modify] ===
Modifies the name, description, or other properties of a storage provisioning quota.
<pre>
<--quota>        :: Name or ID of a storage provisioning quota.
<--cloud>        :: Name of a Storage Cloud or its unique id.
[--name]         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--desc]         :: A description for the object.
[--psize]        :: The total thin-provisionable space allowed by this provisioning quota.
[--usize]        :: The total utilizable space allowed by this provisioning quota which may
                            be less than the provisionable space.
[--max-volumes]  :: The maximum number of volumes that can be created using this quota,
                            specify 0 for no limit.
[--max-shares]   :: The maximum number of shares that can be created using this quota,
                            specify 0 for no limit.
[--policy]       :: Indicates the type of quota to be created. [hard, *soft]
[--flags]        :: Optional flags for the operation. [async]
</pre>

; hw-alarm-list : Returns a list of all the current hardware alarms/alert messages generated from the controller.

<pre> qs hw-alarm-list|hwa-list [--controller=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
=== provisioning-quota-share-add [pqs-add] ===
 
Adds one or more shares to the specified provisioning quota.
 
<pre>
 
<--quota>        :: Name or ID of a storage provisioning quota.
 
<--share-list>  :: A list of one or more network shares.
 
</pre>
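As a worked example, the following pair of commands would create a soft provisioning quota and then place two existing shares under it. The quota, pool, cloud, and share names are illustrative placeholders, and the size and list-separator syntax should be confirmed against your installed CLI version:
<pre>
qs provisioning-quota-create --name=tenant1-quota --pool=pool0 --cloud=tenant1-cloud --policy=soft --psize=10TB
qs provisioning-quota-share-add --quota=tenant1-quota --share-list=share1,share2
</pre>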
 
  
=== provisioning-quota-share-assoc-list [pqs-alist] ===
Returns a list of all the associated provisioning quotas of a specified share.
<pre>
<--share>        :: Name or ID of a network share.
</pre>

; hw-controller-change-security-key : Change the security key for encryption on SED/FDE-enabled drives on a hardware RAID controller.

<pre> qs hw-controller-change-security-key|hwc-change-security-key --controller=value --old-security-key=value --security-key=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|-
| &nbsp; || <tt>old-security-key</tt> || Prior security key on the HW controller card, used when changing the key for encryption on FDE-enabled secure disk drives.
|-
| &nbsp; || <tt>security-key</tt> || Security key on the HW controller card for encryption on FDE-enabled secure disk drives.
|}
  
  
=== provisioning-quota-share-remove [pqs-remove] ===
Removes one or more shares from the specified provisioning quota.
<pre>
<--quota>        :: Name or ID of a storage provisioning quota.
<--share-list>   :: A list of one or more network shares.
</pre>

; hw-controller-create-security-key : Create the security key for encryption on SED/FDE-enabled drives on a hardware RAID controller.

<pre> qs hw-controller-create-security-key|hwc-create-security-key --controller=value --security-key=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|-
| &nbsp; || <tt>security-key</tt> || Security key on the HW controller card for encryption on FDE-enabled secure disk drives.
|}
  
=== provisioning-quota-volume-add [pqv-add] ===
 
Adds one or more volumes to the specified provisioning quota.
 
<pre>
 
<--quota>        :: Name or ID of a storage provisioning quota.
 
<--volume-list>  :: A list of one or more storage volumes.
 
</pre>
 
  
=== provisioning-quota-volume-assoc-list [pqv-alist] ===
Returns a list of all the associated provisioning quotas of a specified volume.
<pre>
<--volume>       :: Name of the storage volume or its unique ID (GUID).
</pre>

; hw-controller-get : Returns information about a specific hardware RAID controller.

<pre> qs hw-controller-get|hwc-get --controller=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
  
=== provisioning-quota-volume-remove [pqv-remove] ===
Removes one or more volumes from the specified provisioning quota.
<pre>
<--quota>        :: Name or ID of a storage provisioning quota.
<--volume-list>  :: A list of one or more storage volumes.
</pre>

; hw-controller-group-get : Returns information about all the supported hardware RAID controller group types.

<pre> qs hw-controller-group-get|hwcg-get --controller-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller-group</tt> || Name or ID of a hardware RAID controller group.
|}
  
== Remote Replication Management ==
 
  
=== replica-assoc-delete [rep-assoc-delete] ===
Deletes the specified replication association between a source/target pair of volumes or shares.
<pre>
<--replica-assoc> :: Name or ID of a replica association between a source/target volume or
                            share
</pre>

; hw-controller-group-list : Returns a list of all the hardware controller groups.

<pre> qs hw-controller-group-list|hwcg-list </pre>
  
=== replica-assoc-get [rep-assoc-get] ===
Returns details of the specified replication association.
<pre>
<--replica-assoc> :: Name or ID of a replica association between a source/target volume or
                            share
</pre>

; hw-controller-import-units : Scan and import foreign disks associated with RAID groups that were attached to another RAID controller or that require re-importing to the local appliance.

<pre> qs hw-controller-import-units|hwc-import-units --controller=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
=== replica-assoc-list [rep-assoc-list] ===
 
Returns a list of all the replication associations.
 
  
=== replica-assoc-rollback [rep-assoc-rollback] ===
Reverses the replication to send the changes on the target back to the source volume/share. Requires the --force flag.
<pre>
<--replica-assoc> :: Name or ID of a replica association between a source/target volume or
                            share
</pre>

; hw-controller-list : Returns a list of all the hardware controllers.

<pre> qs hw-controller-list|hwc-list [--controller-group=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller-group</tt> || Name or ID of a hardware RAID controller group.
|}
  
=== replica-assoc-stop [rep-assoc-stop] ===
 
Attempts to stop the replication process between a source/target pair of volumes or shares.
 
<pre>
 
<--replica-assoc> :: Name or ID of a replica association between a source/target volume or
 
                            share
 
</pre>
 
  
=== replica-assoc-sync [rep-assoc-sync] ===
Restarts the replication process between a source/target pair of volumes or shares.
<pre>
<--replica-assoc> :: Name or ID of a replica association between a source/target volume or
                            share
</pre>

; hw-controller-rescan : Rescans the hardware controller to look for new disks and RAID units.

<pre> qs hw-controller-rescan|hwc-rescan --controller=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
  
=== replication-schedule-add [rsch-add] ===
Adds one or more volumes/shares to the specified schedule.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
[--volume-list]  :: A list of one or more storage volumes.
[--share-list]   :: A list of one or more network shares.
</pre>

; hw-disk-delete : Marks the specified disk so that it can be removed from the enclosure.  Disks marked as hot-spares will return to normal status after being deleted.

<pre> qs hw-disk-delete|hwd-delete --disk=value [--duration=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Specifies a physical disk connected to a hardware RAID controller.
|-
| &nbsp; || <tt>duration</tt> || Duration in seconds to repeat the disk identification pattern.
|}
  
=== replication-schedule-create [rsch-create] ===
 
Creates a new replication schedule to replicate the specified storage volumes and shares to the specified target pool on a schedule.
 
<pre>
 
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
<--target-pool>  :: Target storage pool on remote system to replicate to.
 
[--volume-list]  :: A list of one or more storage volumes.
 
[--share-list]  :: A list of one or more network shares.
 
[--start-date]  :: Start date at which the system will begin creating snapshots for a given
 
                            schedule.
 
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
 
                            times.
 
[--desc]        :: A description for the object.
 
[--cloud]        :: Name of a Storage Cloud or its unique id.
 
[--max-replicas] :: Maximum number of replica snapshot checkpoints to retain for this
 
                            schedule, after which the oldest snapshot is removed before a new one is
 
                            created.
 
[--days]        :: The days of the week on which this schedule should create snapshots.
 
                            [fri, mon, sat, *sun, thu, tue, wed]
 
[--hours]        :: For the specified days of the week, snapshots will be created at the
 
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
 
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
 
[--interval]    :: Interval in minutes between replications, minimum is 15 minutes.
 
[--offset-minutes] :: Delay the scheduled start time by specified minutes. For example, a
 
                            30min offset with scheduled trigger time of 1am and 4am will trigger at
 
                            1:30am and 4:30am respectively.
 
[--flags]        :: Optional flags for the operation. [async]
 
</pre>
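Putting the parameters above together, a schedule that replicates two volumes to a remote pool at 3am on weekdays and retains five replica checkpoints might look like the following sketch. All names are illustrative placeholders, and the exact value syntax for list parameters should be verified with 'qs help --command=replication-schedule-create' on your installed version:
<pre>
qs replication-schedule-create --name=nightly-dr --target-pool=remote-pool1 --volume-list=vol1,vol2 --days=mon,tue,wed,thu,fri --hours=3am --max-replicas=5
</pre>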
 
  
=== replication-schedule-delete [rsch-delete] ===
Deletes a replication schedule; snapshots associated with the schedule are not removed.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-disk-get : Returns information about a specific disk managed by a hardware RAID controller.

<pre> qs hw-disk-get|hwd-get --disk=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Specifies a physical disk connected to a hardware RAID controller.
|}
  
  
=== replication-schedule-disable [rsch-disable] ===
Disables the specified replication schedule.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-disk-identify : Flashes the LED indicator light on the specified disk so that it can be identified in the enclosure chassis.

<pre> qs hw-disk-identify|hwd-identify --unit=value [--duration=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>unit</tt> || Name of a hardware RAID unit or its unique ID.
|-
| &nbsp; || <tt>duration</tt> || Duration in seconds to repeat the disk identification pattern.
|}
  
=== replication-schedule-enable [rsch-enable] ===
 
Enables the specified replication schedule.
 
<pre>
 
<--schedule>    :: Name or ID of a replication schedule.
 
[--flags]        :: Optional flags for the operation. [async, force]
 
</pre>
 
  
=== replication-schedule-get [rsch-get] ===
Returns information about a specific replication schedule.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
</pre>

; hw-disk-list : Returns a list of all the disks managed by the specified hardware controller.

<pre> qs hw-disk-list|hwd-list [--controller=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
  
=== replication-schedule-list [rsch-list] ===
Returns a list of all the replication schedules.

=== replication-schedule-modify [rsch-modify] ===
Modifies the name, description, or other properties of a replication schedule.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
[--name]         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--start-date]   :: Start date at which the system will begin creating snapshots for a given
                            schedule.
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
                            times.
[--desc]         :: A description for the object.
[--cloud]        :: Name of a Storage Cloud or its unique id.
[--max-replicas] :: Maximum number of replica snapshot checkpoints to retain for this
                            schedule, after which the oldest snapshot is removed before a new one is
                            created.
[--days]         :: The days of the week on which this schedule should create snapshots.
                            [fri, mon, sat, *sun, thu, tue, wed]
[--hours]        :: For the specified days of the week, snapshots will be created at the
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
[--interval]     :: Interval in minutes between replications, minimum is 15 minutes.
[--offset-minutes] :: Delay the scheduled start time by specified minutes. For example, a
                            30min offset with scheduled trigger time of 1am and 4am will trigger at
                            1:30am and 4:30am respectively.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; hw-disk-mark-good : Marks the specified disk as 'good' or 'ready'.  You can use this to correct the disk status for good disks that the controller has in 'bad' or 'failed' state.

<pre> qs hw-disk-mark-good|hwd-mark-good --disk=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Specifies a physical disk connected to a hardware RAID controller.
|}
  
  
=== replication-schedule-remove [rsch-remove] ===
Removes one or more volumes/shares from the specified schedule.
<pre>
<--schedule>     :: Name or ID of a replication schedule.
[--volume-list]  :: A list of one or more storage volumes.
[--share-list]   :: A list of one or more network shares.
</pre>

; hw-disk-mark-spare : Marks the specified disk as a universal hot spare within the group of RAID units managed by the controller to which the disk is attached.

<pre> qs hw-disk-mark-spare|hwd-mark-spare --disk=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>disk</tt> || Specifies a physical disk connected to a hardware RAID controller.
|}
  
=== replication-schedule-trigger [rsch-trigger] ===
 
Triggers the specified schedule to run immediately.
 
<pre>
 
<--schedule>    :: Name or ID of a replication schedule.
 
[--flags]        :: Optional flags for the operation. [async, force]
 
</pre>
 
  
; hw-enclosure-get : Returns information about a specific enclosure managed by the specified hardware RAID controller.

<pre> qs hw-enclosure-get|hwe-get --enclosure=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>enclosure</tt> || Name of a hardware RAID enclosure or its unique ID.
|}

== Resource Domain Management ==
  
=== resource-domain-create [rd-create] ===
 
Creates a new resource domain which identifies a site, building or rack of equipment.
 
<pre>
 
<--name>        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
[--desc]        :: A description for the object.
 
[--resource-type] :: Type of the domain resource which can be a site, building, rack, or
 
                            server. [building, *rack, region, server, site]
 
[--resource-parent] :: Parent domain resource name or ID.  For example a rack resource can have
 
                            a parent resource of type site or building.
 
[--flags]        :: Optional flags for the operation. [async]
 
</pre>
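Because resource domains nest via --resource-parent, a typical sequence creates the site first and then attaches a rack to it. The names below are illustrative placeholders:
<pre>
qs resource-domain-create --name=site-west --resource-type=site
qs resource-domain-create --name=rack-a1 --resource-type=rack --resource-parent=site-west
</pre>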
 
  
=== resource-domain-delete [rd-delete] ===
Deletes the specified resource domain.
<pre>
<--domain-resource> :: ID or name of a domain resource.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-enclosure-list : Returns a list of all the enclosures managed by the specified hardware RAID controller.

<pre> qs hw-enclosure-list|hwe-list [--controller=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
  
=== resource-domain-get [rd-get] ===
Returns information about the specified resource domain. Resource failure domains identify physical equipment, sites, and racks so that data can be dispersed in such a way as to ensure fault-tolerance and high availability across sites and racks.
<pre>
<--domain-resource> :: ID or name of a domain resource.
</pre>

; hw-unit-auto-create : Creates new hardware RAID units automatically using available disk resources according to the selection criteria.

<pre> qs hw-unit-auto-create|hwu-auto-create --raid=value --disks-per-unit=value [--disk-category=value ] [--min-size=value ] [--max-size=value ] [--unit-count=value ] [--options=value ] [--storage-system=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>raid</tt> || Hardware RAID type for a hardware RAID unit. [*AUTO, RAID0, RAID1, RAID10, RAID5, RAID50, RAID6, RAID60]
|-
| &nbsp; || <tt>disks-per-unit</tt> || Number of disks to select for each new hardware RAID unit to be created.  For example, 5 disks indicates 4d+1p when selected with the RAID5 layout type.
|-
| &nbsp; || <tt>disk-category</tt> || Any, SSD, or HDD; this allows filtering by disk type so that new hardware RAID units are created using the correct category of disks. [*ANY, HDD, SSD]
|-
| &nbsp; || <tt>min-size</tt> || Minimum size of the disks to be selected for creation of new hardware RAID units.
|-
| &nbsp; || <tt>max-size</tt> || Maximum size of the disks to be selected for creation of new hardware RAID units, useful for limiting the selection to a specific group of drives.
|-
| &nbsp; || <tt>unit-count</tt> || Maximum number of units to create; if 0 then all available disks will be used after filtering.
|-
| &nbsp; || <tt>options</tt> || Special options for the hardware encryption policy.
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
=== resource-domain-list [rd-list] ===
 
Returns a list of all the defined resource domain objects which can include sites, buildings, racks, and servers.
 
  
=== resource-domain-modify [rd-modify] ===
Modifies an existing resource domain to change properties like the name and description.
<pre>
<--domain-resource> :: ID or name of a domain resource.
[--name]         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--desc]         :: A description for the object.
[--resource-type] :: Type of the domain resource which can be a site, building, rack, or
                            server. [building, *rack, region, server, site]
[--resource-parent] :: Parent domain resource name or ID.  For example a rack resource can have
                            a parent resource of type site or building.
[--flags]        :: Optional flags for the operation. [async]
</pre>

; hw-unit-create : Creates a new hardware RAID unit using the specified controller.

<pre> qs hw-unit-create|hwu-create --raid=value --disk-list=value [--controller=value ] [--flags=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>raid</tt> || Hardware RAID type for a hardware RAID unit. [*AUTO, RAID0, RAID1, RAID10, RAID5, RAID50, RAID6, RAID60]
|-
| &nbsp; || <tt>disk-list</tt> || Specifies one or more physical disks connected to a hardware RAID controller. Use 'all' to indicate all unused disks.
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|-
| &nbsp; || <tt>flags</tt> || Optional flags for the operation. [async, force, min, *none]
|}
  
== Multitenant Resource Group Management ==
 
  
=== resource-group-create [rg-create] ===
Creates a new tenant resource group comprised of the specified users, resources, and CHAP information.
<pre>
<--name>         :: Names may include any alpha-numeric characters '_' and '-', spaces are
                            not allowed.
[--desc]         :: A description for the object.
[--subject-list] :: A list of subjects in the following format name:type. Ex:
                            userName:user,groupName:user_group...
[--resource-list] :: A list of resources in the following format name:type. Ex:
                            vol:volume,hostname:host...
[--parent-cloud] :: The name or unique id of a tenant resource cloud.
[--tier]         :: The tier of the storage cloud.
[--organization] :: The name of the organization this tenant resource cloud is assigned to.
[--chap-user]    :: An optional iSCSI CHAP username.
[--chap-pass]    :: An optional iSCSI CHAP password.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; hw-unit-delete : Deletes the specified RAID unit. Note that you must first delete the Storage Pool before you delete the RAID unit.

<pre> qs hw-unit-delete|hwu-delete --unit=value [--duration=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>unit</tt> || Name of a hardware RAID unit or its unique ID.
|-
| &nbsp; || <tt>duration</tt> || Duration in seconds to repeat the disk identification pattern.
|}
  
=== resource-group-delete [rg-delete] ===
 
Deletes a tenant's resource group; the resources and users will not be deleted.
 
<pre>
 
<--cloud>        :: Name of a Storage Cloud or its unique id.
 
[--flags]        :: Optional flags for the operation. [async, force]
 
</pre>
 
  
=== resource-group-get [rg-get] ===
Returns information about the specified tenant resource group.
<pre>
<--cloud>        :: Name of a Storage Cloud or its unique id.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-unit-encrypt : Enable hardware SED/FDE encryption for the specified hardware RAID unit.

<pre> qs hw-unit-encrypt|hwu-encrypt --unit=value [--options=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>unit</tt> || Name of a hardware RAID unit or its unique ID.
|-
| &nbsp; || <tt>options</tt> || Special options for the hardware encryption policy.
|}
  
  
=== resource-group-list [rg-list] ===
Returns a list of all the tenant resource groups.
<pre>
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-unit-get : Returns information about a specific RAID unit managed by the specified hardware RAID controller.

<pre> qs hw-unit-get|hwu-get --unit=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>unit</tt> || Name of a hardware RAID unit or its unique ID.
|}
  
=== resource-group-modify [rg-modify] ===
 
Modifies the name, description, parent resource group, tier, organization, and chap information of a resource group.
 
<pre>
 
<--cloud>        :: Name of a Storage Cloud or its unique id.
 
[--name]        :: Names may include any alpha-numeric characters '_' and '-', spaces are
 
                            not allowed.
 
[--desc]        :: A description for the object.
 
[--parent-cloud] :: The name or unique id of a tenant resource cloud.
 
[--tier]        :: The tier of the storage cloud.
 
[--organization] :: The name of the organization this tenant resource cloud is assigned to.
 
[--chap-user]    :: An optional iSCSI CHAP username.
 
[--chap-pass]    :: An optional iSCSI CHAP password.
 
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
 
</pre>
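For example, rotating the iSCSI CHAP credentials on an existing resource group could be done as follows; the cloud name and credentials are illustrative placeholders:
<pre>
qs resource-group-modify --cloud=tenant1-cloud --chap-user=tenant1 --chap-pass=NewSecretPass1
</pre>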
 
  
=== resource-group-resource-add [rgr-add] ===
Add one or more resources to the specified tenant resource group.
<pre>
<--cloud>        :: Name of a Storage Cloud or its unique id.
<--resource-list> :: A list of resources in the following format name:type. Ex:
                            vol:volume,hostname:host...
</pre>

; hw-unit-identify : Flashes the LED indicator light on all the disks in the RAID unit so that it can be identified in the enclosure.

<pre> qs hw-unit-identify|hwu-identify --unit=value [--duration=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>unit</tt> || Name of a hardware RAID unit or its unique ID.
|-
| &nbsp; || <tt>duration</tt> || Duration in seconds to repeat the disk identification pattern.
|}
  
  
=== resource-group-resource-mode [rgr-mode] ===
Sets the access mode of a specified resource in the tenant resource group.
<pre>
<--cloud>        :: Name of a Storage Cloud or its unique id.
<--resource>     :: The unique id of a volume, volume group, share, host, or host group.
<--access-mode>  :: Access mode for the volume.
[--flags]        :: Optional flags for the operation. [async, force, min, *none]
</pre>

; hw-unit-list : Returns a list of all the RAID units managed by the specified hardware controller.

<pre> qs hw-unit-list|hwu-list [--controller=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>controller</tt> || Name or ID of a hardware RAID controller.
|}
  
=== resource-group-resource-remove [rgr-remove] ===
Remove one or more resources from the specified tenant resource group.
<pre>
<--cloud>        :: Name of a Storage Cloud or its unique id.
<--resource-list> :: A list of resources in the following format name:type. Ex:
                            vol:volume,hostname:host...
</pre>

</div>
</div>

===HWSWITCH  SAS Switch Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
  
=== resource-group-subject-assoc-list [rgsub-alist] ===
Returns a list of the tenant resource groups associated with the specified subject (user or user group).
<pre>
<--subject>      :: The unique id of a user or user group.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>

; hw-switch-adapter-get : Returns information about the specified HW switch management module.

<pre> qs hw-switch-adapter-get|hwsa-get --switch-adapter=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch-adapter</tt> || Storage switch adapter module ID.
|}
  
=== resource-group-user-add [rgu-add] ===
 
Add one or more users to the specified tenant resource group.
 
<pre>
 
<--cloud>        :: Name of a Storage Cloud or its unique id.
 
<--subject-list> :: A list of subjects in the following format name:type. Ex:
 
                            userName:user,groupName:user_group...
 
</pre>
 
  
=== resource-group-user-remove [rgu-remove] ===
Remove one or more users from the specified tenant resource group.
<pre>
<--cloud>        :: Name of a Storage Cloud or its unique id.
<--subject-list> :: A list of subjects in the following format name:type. Ex:
                            userName:user,groupName:user_group...
</pre>

; hw-switch-adapter-list : Returns a list of all the storage switch management adapters.

<pre> qs hw-switch-adapter-list|hwsa-list </pre>

; hw-switch-cred-add : Adds storage switch login credentials for a specific switch management adapter.

<pre> qs hw-switch-cred-add|hwsc-add --username=value --password=value --domain-password=value --ip-address=value [--switch-adapter=value ] [--primary=value ] [--secondary=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>username</tt> || Administrator user name for the host, typically 'Administrator' for Windows hosts.
|-
| &nbsp; || <tt>password</tt> || Administrator password for the host; enables auto-configuration of the host's iSCSI initiator.
|-
| &nbsp; || <tt>domain-password</tt> || Password for committing zoneset changes to a storage switch.
|-
| &nbsp; || <tt>ip-address</tt> || IP address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>switch-adapter</tt> || Storage switch adapter module ID.
|-
| &nbsp; || <tt>primary</tt> || Primary storage system responsible for managing and discovering the switch(es).
|-
| &nbsp; || <tt>secondary</tt> || Secondary storage system responsible for managing and discovering the switch(es).
|}

== RBAC Role Management ==
  
=== role-add [r-add] ===
Adds a new role to the role based access control (RBAC) system.
<pre>
<--name>         :: Names may include any alpha-numeric characters, '_' and '-'; spaces are
                            not allowed.
[--desc]         :: A description for the object.
[--permissions]  :: List of permissions and/or permission groups to add to the specified
                            role.
</pre>
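
For example, a role for backup operators might be created as follows (the role name, description, and permission group name here are hypothetical):

<pre> qs role-add --name=backup-admin --desc='Backup operators' --permissions=snapshot_permission_group </pre>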
 
  
=== role-get [r-get] ===
Gets information about the specified role.
<pre>
<--role>         :: Name of a security role or its unique ID (GUID).
</pre>


; hw-switch-cred-get : Returns information about specific storage switch login credentials

<pre> qs hw-switch-cred-get|hwsc-get --creds=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>creds</tt> || Storage switch credentials (user/pass)
|}
  
  
=== role-list [r-list] ===
Returns a list of all the defined roles in the RBAC system.


=== role-modify [r-modify] ===
Modifies the name and/or description of a role.
<pre>
<--role>         :: Name of a security role or its unique ID (GUID).
[--name]         :: Names may include any alpha-numeric characters, '_' and '-'; spaces are
                            not allowed.
[--desc]         :: A description for the object.
</pre>


; hw-switch-cred-list : Returns a list of all the storage switch login credentials

<pre> qs hw-switch-cred-list|hwsc-list [--switch-adapter=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch-adapter</tt> || Storage switch adapter module ID
|}
  
  
=== role-permission-add [rp-add] ===
Adds additional permissions and/or permission groups to the specified role.
<pre>
<--role>         :: Name of a security role or its unique ID (GUID).
<--permissions>  :: List of permissions and/or permission groups to add to the specified
                            role.
</pre>


; hw-switch-cred-remove : Removes storage switch login credentials

<pre> qs hw-switch-cred-remove|hwsc-remove --creds=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>creds</tt> || Storage switch credentials (user/pass)
|}
  
=== role-permission-def-list [rpd-list] ===
Returns a list of all the defined permissions available to be assigned to roles in the RBAC system.
 
  
=== role-permission-remove [rp-remove] ===
Removes one or more permissions and/or permission groups from the specified role.
<pre>
<--role>         :: Name of a security role or its unique ID (GUID).
<--permissions>  :: List of permissions and/or permission groups to remove from the
                            specified role.
</pre>


; hw-switch-failover-group-activate : Activates the pools in a switch failover group on the specified storage system

<pre> qs hw-switch-failover-group-activate|hwsfg-activate --failover-group=value --storage-system=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>failover-group</tt> || Name/ID of a storage switch failover group
|-
| &nbsp; || <tt>storage-system</tt> || Name or ID of a storage system in a management grid.
|}
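
For example, to activate the pools in a failover group on the secondary system after a switch outage (the group and system names here are hypothetical):

<pre> qs hw-switch-failover-group-activate --failover-group=fg1 --storage-system=qs-node2 </pre>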
  
=== role-remove [r-remove] ===
Removes the specified role identified by name or ID.
<pre>
<--role>         :: Name of a security role or its unique ID (GUID).
</pre>
 
  
; hw-switch-failover-group-create : Creates a new switch failover group

<pre> qs hw-switch-failover-group-create|hwsfg-create --name=value --pool-list=value --sys-primary=value --zoneset-primary=value --sys-secondary=value --zoneset-secondary=value [--ip-address=value ] [--netmask=value ] [--gateway=value ] [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>pool-list</tt> || List of one or more storage pools.
|-
| &nbsp; || <tt>sys-primary</tt> || Storage system associated with the failover group primary node
|-
| &nbsp; || <tt>zoneset-primary</tt> || Zoneset to be associated with the failover group primary node
|-
| &nbsp; || <tt>sys-secondary</tt> || Storage system associated with the failover group secondary node
|-
| &nbsp; || <tt>zoneset-secondary</tt> || Zoneset to be associated with the failover group secondary node
|-
| &nbsp; || <tt>ip-address</tt> || IP Address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>netmask</tt> || Subnet IP mask (ex: 255.255.255.0)
|-
| &nbsp; || <tt>gateway</tt> || IP address of the network gateway
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|}


== Snapshot Schedule Management ==
  
=== snap-schedule-add [sch-add] ===
Adds one or more volumes/shares to the specified schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--volume-list]  :: A list of one or more storage volumes.
[--share-list]   :: A list of one or more network shares.
</pre>
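
For example, to add two volumes to an existing schedule (the schedule and volume names here are hypothetical):

<pre> qs snap-schedule-add --schedule=nightly --volume-list=vol1,vol2 </pre>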
 
  
=== snap-schedule-create [sch-create] ===
Creates a new snapshot schedule comprised of the specified storage volumes.
<pre>
<--name>         :: Names may include any alpha-numeric characters, '_' and '-'; spaces are
                            not allowed.
[--volume-list]  :: A list of one or more storage volumes.
[--share-list]   :: A list of one or more network shares.
[--start-date]   :: Start date at which the system will begin creating snapshots for a given
                            schedule.
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
                            times.
[--desc]         :: A description for the object.
[--cloud]        :: Name of a Storage Cloud or its unique id.
[--max-snaps]    :: Maximum number of snapshots to retain for this schedule, after which the
                            oldest snapshot is removed before a new one is created.
[--days]         :: The days of the week on which this schedule should create snapshots.
                            [fri, mon, sat, *sun, thu, tue, wed]
[--hours]        :: For the specified days of the week, snapshots will be created at the
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
[--flags]        :: Optional flags for the operation. [async]
</pre>


=== snap-schedule-delete [sch-delete] ===
Deletes a snapshot schedule; snapshots associated with the schedule are not removed.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async, force]
</pre>


; hw-switch-failover-group-delete : Deletes a failover group

<pre> qs hw-switch-failover-group-delete|hwsfg-delete --failover-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>failover-group</tt> || Name/ID of a storage switch failover group
|}


; hw-switch-failover-group-get : Returns information about a specific switch failover group

<pre> qs hw-switch-failover-group-get|hwsfg-get --failover-group=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>failover-group</tt> || Name/ID of a storage switch failover group
|}
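
As an illustration of snap-schedule-create above (the schedule and volume names here are hypothetical), the following creates a schedule that snapshots two volumes at 3am on weekdays and retains at most 10 snapshots:

<pre> qs snap-schedule-create --name=nightly --volume-list=vol1,vol2 --days=mon,tue,wed,thu,fri --hours=3am --max-snaps=10 </pre>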
  
=== snap-schedule-disable [sch-disable] ===
Disables the specified snapshot schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async, force]
</pre>
 
  
=== snap-schedule-enable [sch-enable] ===
Enables the specified snapshot schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--flags]        :: Optional flags for the operation. [async, force]
</pre>


; hw-switch-failover-group-list : Returns a list of all the switch failover groups

<pre> qs hw-switch-failover-group-list|hwsfg-list [--switch=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch</tt> || Name or ID of a SAS/FC storage switch
|}
  
  
=== snap-schedule-get [sch-get] ===
Returns information about a specific snapshot schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
</pre>


; hw-switch-failover-group-modify : Modifies the properties of a failover group

<pre> qs hw-switch-failover-group-modify|hwsfg-modify --failover-group=value [--name=value ] [--ip-address=value ] [--netmask=value ] [--gateway=value ] [--pool-list=value ] [--sys-primary=value ] [--zoneset-primary=value ] [--sys-secondary=value ] [--zoneset-secondary=value ] [--desc=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>failover-group</tt> || Name/ID of a storage switch failover group
|-
| &nbsp; || <tt>name</tt> || Names may include any alpha-numeric plus '_' and '-' characters; spaces are not allowed.
|-
| &nbsp; || <tt>ip-address</tt> || IP Address of the host being added; if unspecified the service will look it up.
|-
| &nbsp; || <tt>netmask</tt> || Subnet IP mask (ex: 255.255.255.0)
|-
| &nbsp; || <tt>gateway</tt> || IP address of the network gateway
|-
| &nbsp; || <tt>pool-list</tt> || List of one or more storage pools.
|-
| &nbsp; || <tt>sys-primary</tt> || Storage system associated with the failover group primary node
|-
| &nbsp; || <tt>zoneset-primary</tt> || Zoneset to be associated with the failover group primary node
|-
| &nbsp; || <tt>sys-secondary</tt> || Storage system associated with the failover group secondary node
|-
| &nbsp; || <tt>zoneset-secondary</tt> || Zoneset to be associated with the failover group secondary node
|-
| &nbsp; || <tt>desc</tt> || A description for the object.
|}
  
=== snap-schedule-list [sch-list] ===
Returns a list of all the snapshot schedules.
 
  
=== snap-schedule-modify [sch-modify] ===
Modifies the name, description, or other properties of a snapshot schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--name]         :: Names may include any alpha-numeric characters, '_' and '-'; spaces are
                            not allowed.
[--start-date]   :: Start date at which the system will begin creating snapshots for a given
                            schedule.
[--enabled]      :: While the schedule is enabled snapshots will be taken at the designated
                            times.
[--desc]         :: A description for the object.
[--cloud]        :: Name of a Storage Cloud or its unique id.
[--max-snaps]    :: Maximum number of snapshots to retain for this schedule, after which the
                            oldest snapshot is removed before a new one is created.
[--days]         :: The days of the week on which this schedule should create snapshots.
                            [fri, mon, sat, *sun, thu, tue, wed]
[--hours]        :: For the specified days of the week, snapshots will be created at the
                            specified hours. [10am, 10pm, 11am, 11pm, 12am, 12pm, 1am, 1pm, 2am, 2pm,
                            *3am, 3pm, 4am, 4pm, 5am, 5pm, 6am, 6pm, 7am, 7pm, 8am, 8pm, 9am, 9pm]
[--flags]        :: Optional flags for the operation. [async]
</pre>


; hw-switch-get : Returns detailed information about a storage switch

<pre> qs hw-switch-get|hws-get --switch=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch</tt> || Name or ID of a SAS/FC storage switch
|}
  
=== snap-schedule-remove [sch-remove] ===
Removes one or more volumes/shares from the specified schedule.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
[--volume-list]  :: A list of one or more storage volumes.
[--share-list]   :: A list of one or more network shares.
</pre>
 
  
=== snap-schedule-trigger [sch-trigger] ===
Triggers the specified schedule to run immediately.
<pre>
<--schedule>     :: Name of a snapshot schedule or its unique ID (GUID).
</pre>


; hw-switch-list : Returns a list of all the discovered storage switches

<pre> qs hw-switch-list|hws-list [--switch-adapter=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch-adapter</tt> || Storage switch adapter module ID
|}
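
For example, to run a schedule immediately rather than waiting for its next scheduled time (the schedule name here is hypothetical):

<pre> qs snap-schedule-trigger --schedule=nightly </pre>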
  
  
; hw-switch-zoneset-activate : Activates a specific storage switch zoneset

<pre> qs hw-switch-zoneset-activate|hwsz-activate --zoneset=value [--switch=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>zoneset</tt> || Name or ID of a storage switch zoneset
|-
| &nbsp; || <tt>switch</tt> || Name or ID of a SAS/FC storage switch
|}


== iSCSI Session Management ==

=== session-close [sn-close] ===
Forcibly closes the specified iSCSI session; generally not recommended, use acl-remove instead.
<pre>
<--session>      :: iSCSI session identifier for an active iSCSI session.
</pre>
  
  
=== session-get [sn-get] ===
Returns detailed information on a specific iSCSI session.
<pre>
<--session>      :: iSCSI session identifier for an active iSCSI session.
</pre>


; hw-switch-zoneset-get : Returns information about a specific switch zoneset

<pre> qs hw-switch-zoneset-get|hwsz-get --zoneset=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>zoneset</tt> || Name or ID of a storage switch zoneset
|}
  
=== session-list [sn-list] ===
Returns a list of all the active iSCSI sessions.
<pre>
[--volume]       :: Name of the storage volume or its unique ID (GUID).
[--host]         :: Name of the host or its unique ID (GUID).
</pre>
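
For example, to list only the active iSCSI sessions for a single volume (the volume name here is hypothetical):

<pre> qs session-list --volume=vol1 </pre>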
 
  
; hw-switch-zoneset-list : Returns a list of all the discovered zonesets

<pre> qs hw-switch-zoneset-list|hwsz-list [--switch=value ] </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>switch</tt> || Name or ID of a SAS/FC storage switch
|}
</div>
</div>


== Network Share Management ==

=== share-ad-user-group-list [shr-ad-ug-list] ===
Returns a list of all the users and groups in the Active Directory domain that the appliance is a member of.
<pre>
<--storage-system> :: Name or ID of a storage system in a management grid.
[--share]        :: Name or ID of a network share.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>


===LICENSE  License Management===
----
<div class='mw-collapsible mw-collapsed'>
<div class='mw-collapsible-content'>
  
; license-activate : Activates the system using an activation key received from customer support.

<pre> qs license-activate|lic-act --activation-key=value </pre>
{| cellspacing='0' cellpadding='5'
|-
| &nbsp; || <tt>activation-key</tt> || Activation key you'll receive from customer support after you send the activation request code.
|}


=== share-client-add [shr-cadd] ===
Adds an NFS client access rule for the specified network share.
<pre>
<--share>        :: Name or ID of a network share.
<--filter>       :: A filter string for the client
[--async]        :: Use asynchronous communication between NFS server and client
[--secure]       :: Requires the requests to originate from an Internet port less than
                            IPPORT_RESERVED
[--subtree]      :: Enables subtree checking
[--rdonly]       :: Allow only read requests for the NFS volume
[--options]      :: Set of custom NFS options as a comma delimited list such as
                            no_root_squash,wdelay,ro etc.
[--flags]        :: Optional flags for the operation. [async, force]
</pre>
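
For example, to grant a client subnet read-only NFS access to a share (the share name, client filter, and exact flag syntax here are hypothetical):

<pre> qs share-client-add --share=projects --filter=10.0.0.0/24 --rdonly=true </pre>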
  
=== share-client-get [shr-cget] ===