QuantaStor Setup Workflows

The following workflows are intended as a go-to guide that outlines the basic steps for initial appliance and grid configuration. More detailed information on each specific area can be found in the Administrator's Guide.

Setup Workflow for DR / Remote-replication of SAN/NAS Storage (ZFS based)

Minimum Hardware Requirements

  • 2x QuantaStor storage appliances each with a ZFS based storage pool
    • Storage pools do not need to be the same size, and the hardware and disk types on the appliances can be asymmetrical (non-matching)
    • Replication can be cascaded across many appliances.
    • Replication can be N-way, replicating from one appliance to many or from many appliances to one.
    • Replication is incremental, so only the changes since the last pass are sent (see the sketch after this list)
    • Replication is supported for both Storage Volumes and Network Shares
    • Replication interval can be set to as low as 15 minutes for a near-CDP configuration, or scheduled to run at specific hours on specific days
    • All data is AES-256 encrypted on the wire.
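
QuantaStor's ZFS-based replication is driven entirely from the replication schedule described below, but conceptually it works like a ZFS snapshot followed by an incremental send over an encrypted transport. The following is a minimal sketch of that underlying idea only, with hypothetical dataset and host names; it is not how you operate QuantaStor, which automates these steps for you.

  import subprocess

  # Hypothetical names for illustration only -- QuantaStor chooses and manages
  # the snapshots, transport, and scheduling itself.
  SRC = "pool1/vol1"        # source ZFS dataset on the local appliance
  DST_HOST = "qs-remote"    # destination appliance (hypothetical hostname)
  DST = "pool2/vol1"        # destination dataset on the remote pool

  def replicate(prev_snap: str, new_snap: str) -> None:
      """Conceptual equivalent of one incremental replication pass."""
      # 1. Take a new point-in-time snapshot of the source dataset.
      subprocess.run(["zfs", "snapshot", f"{SRC}@{new_snap}"], check=True)
      # 2. Send only the blocks that changed between the two snapshots over an
      #    encrypted SSH transport into the destination pool.
      send = subprocess.Popen(
          ["zfs", "send", "-i", f"{SRC}@{prev_snap}", f"{SRC}@{new_snap}"],
          stdout=subprocess.PIPE)
      subprocess.run(["ssh", DST_HOST, "zfs", "receive", "-F", DST],
                     stdin=send.stdout, check=True)
      send.stdout.close()
      if send.wait() != 0:
          raise RuntimeError("zfs send failed")

  # Assumes an earlier pass already left SRC@repl-0001 on both sides.
  replicate("repl-0001", "repl-0002")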

Setup Process

  • Select the remote-replication tab and choose 'Create Storage System Link'. This will exchange keys between the two appliances so that a replication schedule can be created. You can create an unlimited number of links. The link also stores information about the ports to be used for remote-replication traffic
  • Select the Volume & Share Replication Schedules section and choose Create in the toolbar to bring up the dialog to create a new remote replication schedule
    • Select the replication link, which determines the direction of replication.
    • Select the storage pool on the destination system where the replicated shares and volumes will reside
    • Select the times of day or interval at which replication will be run
    • Select the volumes and shares to be replicated
    • Click OK to create the schedule
  • The Remote-Replication/DR Schedule is now created. If you chose an interval-based replication schedule it will start momentarily. If you chose one that runs at specific times of day, it will not trigger until the scheduled time.
  • You can test the schedule by triggering it to start immediately (see the sketch below for one way to confirm that new snapshots are arriving on the destination).
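
After triggering the schedule, the simplest health check is that each replication pass leaves a new point-in-time snapshot of the replicated dataset on the destination pool. A minimal sketch of that check from the destination appliance's console, assuming a hypothetical destination dataset name (the web interface shows the replicated volumes and shares under the target pool):

  import subprocess

  # Hypothetical destination dataset; the real name is determined by the
  # replication schedule and the target storage pool.
  DEST_DATASET = "pool2/vol1"

  # List snapshots of the replicated dataset, oldest first; a new entry should
  # appear after each completed replication pass.
  out = subprocess.run(
      ["zfs", "list", "-t", "snapshot", "-o", "name,creation", "-s", "creation",
       "-r", DEST_DATASET],
      capture_output=True, text=True, check=True)
  print(out.stdout)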


Setup Workflow for Highly-Available SAN/NAS Storage (ZFS based)

Minimum Hardware Requirements

  • 2x QuantaStor storage appliances acting as storage pool controllers
  • 1x SAS JBOD connected to both storage appliances (or optionally use 2x or more additional QuantaStor storage appliances acting as FC/iSCSI JBOD)
  • 1x hardware RAID controller in each appliance for mirrored boot drives (drives should be 100GB to 1TB in size, usually 2x RAM capacity; see the sizing sketch after this list)
  • 2x 500GB HDDs (mirrored boot drives for QuantaStor OS)
  • 2x to 100x SAS HDDs or SAS SSDs for pool storage (or optionally use SATA storage in separate QuantaStor storage appliances acting as FC/iSCSI JBOD)
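
The boot-drive guideline above (100GB to 1TB, typically 2x the installed RAM) can be expressed as a quick sizing rule. A minimal sketch of that rule of thumb; the RAM figure is only an example:

  def boot_mirror_size_gb(ram_gb: int) -> int:
      """Rule of thumb from the requirements above: ~2x RAM, kept within 100GB-1TB."""
      return min(max(2 * ram_gb, 100), 1000)

  print(boot_mirror_size_gb(128))   # e.g. 128GB RAM -> 256GB boot mirror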

Setup Process

  • Login to the QuantaStor web management interface on each appliance
  • Add your license keys, one unique key for each appliance
  • Setup static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
  • Right-click each network port and choose Modify Network Port to disable iSCSI access on ports that should not serve iSCSI. If you have 10GbE, be sure to disable iSCSI access on the slower 1GbE ports used for management access and/or remote-replication.
  • Right-click on the storage system and set the DNS IP address (e.g. 8.8.8.8) and your NTP server IP address
  • Make sure that eth0 is on the same network on both appliances
  • Make sure that eth1 is on the same network on both appliances, but on a separate network from eth0
  • Create the Site Cluster with Ring 1 on the first network and Ring 2 on the second network. This establishes a redundant (dual ring) heartbeat between the appliances.
  • Create a ZFS based storage pool on the first appliance.
  • Create a Storage Pool HA Group for the pool created in the previous step. If the storage is not accessible to both appliances, creation of the group will be blocked.
  • Create a Storage Pool Virtual Interface for the Storage Pool HA Group. All NFS/iSCSI access to the pool must be through the Virtual Interface IP address to ensure highly-available access to the storage for the clients.
  • Enable the Storage Pool HA Group. Automatic Storage Pool failover to the passive node will now occur if the active node is disabled or the heartbeat between the nodes is lost.
  • Test pool failover: right-click on the Storage Pool HA Group and choose 'Manual Failover' to fail the pool over to the other node (see the sketch after this list for one way to watch the virtual interface during the test).
  • Create one or more Network Shares (CIFS/NFS) and Storage Volumes (iSCSI/FC)
  • Create one or more Host entries with the iSCSI initiator IQN or FC WWPN of your client hosts/servers that will be accessing block storage.
  • Assign Storage Volumes to client Host entries created in the previous step to enable iSCSI/FC access to Storage Volumes.
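
Because clients reach the pool only through the Storage Pool Virtual Interface, a manual failover can be observed from any client machine by watching that address while the pool moves between nodes. A minimal sketch of such a watcher, assuming a hypothetical virtual interface IP and a Linux client:

  import subprocess, time

  VIF_IP = "10.0.4.50"   # hypothetical Storage Pool Virtual Interface address

  def reachable(ip: str) -> bool:
      # One ICMP echo request with a 1 second timeout (Linux ping syntax).
      return subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            stdout=subprocess.DEVNULL).returncode == 0

  last = None
  while True:   # press Ctrl+C to stop the watcher
      state = reachable(VIF_IP)
      if state != last:
          print(time.strftime("%H:%M:%S"), "virtual interface is",
                "UP" if state else "DOWN")
          last = state
      time.sleep(1)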

Diagram of Completed Configuration

Setup Workflow for Scale-out NFS/CIFS File Storage (Gluster based)

Minimum Hardware Requirements

  • 3x QuantaStor storage appliances (up to 32x appliances)
  • 5x to 100x HDDs or SSD for data storage per appliance
  • 1x hardware RAID controller
  • 2x 500GB HDDs (for mirrored hardware RAID device for QuantaStor OS boot/system disk)

Setup Process

  • Login to the QuantaStor web management interface on each appliance
  • Add your license keys, one unique key for each appliance
  • Setup static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
  • Right-click on the storage system, choose 'Modify Storage System..' and set the DNS IP address (e.g. 8.8.8.8) and the NTP server IP address (important!)
  • Use the same Modify Storage System dialog to set a unique host name for each appliance.
  • Setup separate front-end and back-end network ports (e.g. eth0 = 10.0.4.5/16, eth1 = 10.55.4.5/16) for NFS/CIFS traffic and Gluster traffic respectively (a single network will work but is not optimal)
  • Create a Grid out of the 3 or more appliances (use Create Grid then add the other two nodes using the Add Grid Node dialog)
  • Create hardware RAID5 units using 5 disks per RAID unit (4d + 1p) on each node until all HDDs have been used (see Hardware Enclosures & Controllers section, right-click on the controller to create new hardware RAID units)
  • Create an XFS based Storage Pool using the Create Storage Pool dialog for each Physical Disk that comes from the hardware RAID controllers. There will be one XFS based storage pool for each hardware RAID5 unit.
  • Select the Scale-out File Storage tab and choose 'Peer Setup' from the toolbar. In the dialog check the box for Autoconfigure Gluster Peer Connections. It will take a minute for all the connections to appear in the Gluster Peers section.
  • Now that the peers are all connected, we can provision scale-out NAS shares by using the Create Gluster Volume dialog.
  • Gluster Volumes also appear as Network Shares in the Network Shares section and can be further configured to apply CIFS/NFS specific settings (see the client-side mount sketch after this list).
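
Once a Gluster Volume has been provisioned, clients can consume it over CIFS/NFS like any other Network Share, or mount it directly with the GlusterFS native client. A minimal sketch of the native-client mount on a Linux machine, assuming hypothetical appliance and volume names and that the GlusterFS client packages are installed:

  import os
  import subprocess

  APPLIANCE = "qs-node1"      # any grid node hosting the volume (hypothetical name)
  VOLUME = "scaleout-nas1"    # Gluster Volume name (hypothetical)
  MOUNTPOINT = "/mnt/scaleout-nas1"

  os.makedirs(MOUNTPOINT, exist_ok=True)
  # Native GlusterFS mount; the client discovers the remaining bricks/nodes
  # automatically from the node it contacts first.
  subprocess.run(["mount", "-t", "glusterfs", f"{APPLIANCE}:/{VOLUME}", MOUNTPOINT],
                 check=True)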

Diagram of Completed Configuration

Setup Workflow for Scale-out iSCSI Block Storage (Ceph based)

Minimum Hardware Requirements

  • 3x QuantaStor storage appliances minimum (up to 64x appliances)
  • 1x write-endurance SSD device per appliance to make journal devices from; plan on 1x SSD device for every 4x Ceph OSDs (see the sketch after this list)
  • 5x to 100x HDDs or SSD for data storage per appliance
  • 1x hardware RAID controller for OSDs (SAS HBA can also be used but RAID is faster)
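
The 1x SSD per 4x OSDs guideline translates directly into a per-appliance parts count. A minimal sketch of that arithmetic; the OSD count is only an example:

  import math

  def journal_ssds_needed(osds_per_appliance: int) -> int:
      """One journal SSD for every 4 Ceph OSDs, per the guideline above."""
      return math.ceil(osds_per_appliance / 4)

  print(journal_ssds_needed(6))   # e.g. 6 OSDs per appliance -> 2 journal SSDs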

Setup Process

  • Login to the QuantaStor web management interface on each appliance
  • Add your license keys, one unique key for each appliance
  • Setup static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
  • Right-click on the storage system, choose 'Modify Storage System..' and set the DNS IP address (e.g. 8.8.8.8) and the NTP server IP address (important!)
  • Setup separate front-end and back-end network ports (e.g. eth0 = 10.0.4.5/16, eth1 = 10.55.4.5/16) for iSCSI and Ceph traffic respectively
  • Create a Grid out of the 3 appliances (use Create Grid then add the other two nodes using the Add Grid Node dialog)
  • Create hardware RAID5 units using 5 disks per RAID unit (4d + 1p) on each node until all HDDs have been used (see Hardware Enclosures & Controllers section for Create Hardware RAID Unit)
  • Create the Ceph Cluster using all the appliances in your grid that will be part of the Ceph cluster; in this example of 3 appliances you'll select all three.
  • Create an XFS based Storage Pool using the Create Storage Pool dialog for each Physical Disk that comes from the hardware RAID controllers. There will be one pool for each hardware RAID5 unit, and each pool will be used to create one Ceph OSD.
  • Go to the Physical Disks section, right-click on one SSD drive on each appliance and choose Create Journal Device. This will slice up the SSD device to make many journals that can be used to accelerate the write performance of the OSDs.
  • Create one OSD for each Storage Pool created from the previous step and be sure to select a journal device for each OSD. The Create Ceph OSD dialog is in the 'Scale-out Block Storage' section.
  • Create a scale-out storage pool by going to the Scale-out Block Storage section, choosing the Ceph Storage Pool section, and creating a pool. It will automatically select all the available OSDs.
  • Create a scale-out block storage device (RBD / RADOS Block Device) by choosing 'Create Block Device/RBD', or by going to the Storage Management tab, choosing 'Create Storage Volume' and then selecting the Ceph Pool created in the previous step (see the sketch after this list for what this corresponds to at the Ceph layer).
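
Under the covers, the scale-out Storage Volume created in the last step is an RBD image in the Ceph pool. QuantaStor handles this entirely through its dialogs, but the equivalent operation at the Ceph layer can be illustrated with the standard Ceph Python bindings. A minimal sketch for illustration only, assuming hypothetical pool and image names and that the rados/rbd Python modules and a ceph.conf are available on the node:

  import rados, rbd

  POOL = "qs-ceph-pool1"     # hypothetical Ceph storage pool name
  IMAGE = "vol-demo"         # hypothetical RBD image (Storage Volume) name
  SIZE = 100 * 1024**3       # 100 GiB

  # Connect to the cluster using the local ceph.conf and keyring.
  cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx(POOL)
      try:
          # Create a thin-provisioned RBD image in the scale-out pool, then list images.
          rbd.RBD().create(ioctx, IMAGE, SIZE)
          print("images in pool:", rbd.RBD().list(ioctx))
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()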

At this point everything is configured and a block device has been provisioned. To assign that block device to a host, you'll follow the same steps as you would for non-scale-out storage.

  • Go to the Hosts section then choose Add Host and enter the Initiator IQN or WWPN of the host that will be accessing the block storage.
  • Right-click on the Host and choose Assign Volumes... to assign the scale-out storage volume(s) to the host.

Repeat the Storage Volume Create and Assign Volumes steps to provision additional storage and to assign it to one or more hosts.
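
From the client side, the assigned Storage Volume is reached over iSCSI just as with volumes from non-scale-out pools. A minimal sketch using the standard open-iscsi tools on a Linux client, assuming a hypothetical portal IP (use the appliance address that serves iSCSI in your configuration):

  import subprocess

  PORTAL = "10.0.4.10"   # hypothetical iSCSI portal IP on a QuantaStor appliance

  # Discover the targets exported to this initiator, then log in to them.
  subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
                 check=True)
  subprocess.run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"], check=True)
  # The new block device(s) appear under /dev; verify with lsblk or dmesg.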

Diagram of Completed Configuration