QuantaStor Internal Architecture

This section provides an overview of the terminology used in the QuantaStor storage model and how elements of the model relate.

== Storage System ==

A QuantaStor server is a physical or virtual server with the QuantaStor operating system installed on it.  You'll also see it referred to by other terms such as "node", "QuantaStor box", or simply "system".  A group of systems combined to form a grid is called a Storage Grid.

== Storage Grid Technology ==

QuantaStor's built-in storage grid technology enables multiple QuantaStor servers to be combined so that they can be managed as one.  A grid is formed by creating a Storage Grid on a first QuantaStor server and then adding additional QuantaStor servers to it, either in a batch or one at a time.  Once a Storage Grid is formed, a number of advanced features are enabled, including Remote Replication and Clustering.  No additional software is required to manage a grid of systems, as the management functions for all systems are linked.  QuantaStor's grid technology maintains a distributed configuration database that spans all systems, so configuration changes made on any system are communicated to all systems within a few seconds.  QuantaStor maintains this information internally in a SQLite database and uses an encrypted SOAP (XML-based) protocol to communicate configuration changes between the nodes of a given Storage Grid.  Each Storage Grid has what is referred to as a Primary or Master Node: the system responsible for communicating update events to the Secondary nodes within the grid.  Any system within a Storage Grid may be elected as the Primary/Master node using the 'Set Grid Master' dialog in the Web UI, or via the QS CLI or QS REST API.  Automatic election of a new Primary/Master node is enabled by adding a floating "gridIP" interface to the grid.
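
As a rough illustration of this update flow, the sketch below models a Primary node committing a configuration change to its local SQLite database and then fanning the change out to Secondary nodes.  This is a conceptual sketch only: the GridNode/PrimaryNode classes and their methods are illustrative assumptions, not QuantaStor's internal API, and a direct method call stands in for the encrypted SOAP channel between nodes.

<syntaxhighlight lang="python">
# Conceptual sketch only: these classes and methods are illustrative
# assumptions, not QuantaStor's internal API. A direct method call
# stands in for the encrypted SOAP (XML-based) channel between nodes.
import sqlite3


class GridNode:
    """A grid member holding its own copy of the configuration database."""

    def __init__(self, name):
        self.name = name
        self.db = sqlite3.connect(":memory:")  # stand-in for the local SQLite file
        self.db.execute("CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)")

    def apply_change(self, key, value):
        # Every node applies the change to its local database, keeping
        # the grid-wide configuration in sync.
        self.db.execute("INSERT OR REPLACE INTO config (key, value) VALUES (?, ?)", (key, value))
        self.db.commit()


class PrimaryNode(GridNode):
    """The elected Primary/Master node fans update events out to Secondaries."""

    def __init__(self, name, secondaries=()):
        super().__init__(name)
        self.secondaries = list(secondaries)

    def update_config(self, key, value):
        self.apply_change(key, value)   # commit locally first
        for node in self.secondaries:   # then notify every Secondary node
            node.apply_change(key, value)


# Usage: a change made on the master reaches every member of the grid.
master = PrimaryNode("qs-node-1", [GridNode("qs-node-2"), GridNode("qs-node-3")])
master.update_config("grid.name", "prod-grid")
</syntaxhighlight>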

=== System Communication ===

Every API call to a QuantaStor system, irrespective of protocol or client (Web UI, CLI, REST API, SOAP, etc.), is authenticated and authorized every time.  QuantaStor is session-less but does provide an authentication token system.  Communication between QuantaStor systems uses the WSDL-described SOAP protocol, and all API calls are automatically forwarded to the appropriate system within the grid to execute any given operation/task.  QuantaStor systems perform resource locking so that multiple tasks/operations can be queued and executed as the necessary resources become available.  The QuantaStor CLI blocks until a given operation completes and then prints the results of the server-side task; to have the CLI run in a non-blocking mode like the Web UI, pass the --async flag to the CLI.
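
The per-call security model can be illustrated with a short sketch: every request presents a token that is checked before the call is either executed locally or forwarded to the grid node that owns the target resource.  The token store, resource map, and function names below are illustrative assumptions, not QuantaStor's actual request-handling code.

<syntaxhighlight lang="python">
# Conceptual sketch only: the token store, resource map, and function
# below are illustrative assumptions, not QuantaStor's request handling.

VALID_TOKENS = {"abc123": "admin"}          # auth token -> user account
RESOURCE_OWNERS = {"pool-1": "qs-node-2"}   # resource -> owning grid node


def handle_api_call(token, resource, operation, local_node="qs-node-1"):
    # 1. Session-less model: every call is authenticated and authorized anew.
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid or expired authentication token")

    # 2. Calls are auto-forwarded to whichever grid node owns the resource.
    owner = RESOURCE_OWNERS.get(resource, local_node)
    if owner != local_node:
        return "forwarded '%s' on %s to %s" % (operation, resource, owner)
    return "executed '%s' on %s locally as %s" % (operation, resource, user)


print(handle_api_call("abc123", "pool-1", "snapshot-create"))
</syntaxhighlight>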
  
 
[[File:grid.png|600px]]

== Storage Pools ==

Storage Pools are formed from one or more SSD or HDD disk devices.  QuantaStor supports both scale-up Storage Pools (ZFS-based) and scale-out Storage Pools (Ceph-based).  Once a Storage Pool is created, storage may be provisioned from it, and each pool type allows for provisioning of specific types of storage.  ZFS-based Storage Pools allow one to provision both file storage (Network Shares) and block storage (Storage Volumes).  Ceph-based Storage Pools are each specific to a single storage type (file, block, or object), but these scale-out pools can share a common group of disks within the scale-out cluster.
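
The mapping from pool type to the kinds of storage it can provision can be summarized in a small sketch.  The capability table below mirrors the terminology above, but the data model itself is an assumption made for exposition, not QuantaStor's actual schema.

<syntaxhighlight lang="python">
# Illustrative mapping only: the names mirror the terminology above, but
# this data model is an assumption for exposition, not QuantaStor's schema.
POOL_CAPABILITIES = {
    "zfs":         {"file", "block"},  # scale-up pools serve Network Shares and Storage Volumes
    "ceph-file":   {"file"},           # each scale-out Ceph pool is specific to one
    "ceph-block":  {"block"},          # storage type, though Ceph pools may share a
    "ceph-object": {"object"},         # common group of disks within the cluster
}


def can_provision(pool_type, storage_kind):
    """True if a pool of the given type can provision that kind of storage."""
    return storage_kind in POOL_CAPABILITIES.get(pool_type, set())


print(can_provision("zfs", "block"))         # True
print(can_provision("ceph-object", "file"))  # False
</syntaxhighlight>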
  
[[File:pool.png|300px]]