Clustered HA SAN/NAS Solutions using ZFS

Overview

QuantaStor's clustered storage pool configurations provide high availability (HA) in the event of a node outage or a loss of storage connectivity to the active node. From a hardware perspective, a deployment of one or more clustered storage pools requires at least two QuantaStor appliances so that automatic fail-over can occur if the active appliance goes offline. These configurations also require that the disk devices for the highly-available pool not reside inside either server unit. Instead, the storage must come from an external source, which can be as simple as one or more SAS JBOD enclosures or, for higher-performance configurations, a SAN delivering block storage (LUNs) to the QuantaStor front-end appliances over FC (preferred) or iSCSI. The following two sections outline the minimum hardware requirements for each.

Qs clustered san.png

Highly-Available SAN/NAS Storage (Tiered SAN, ZFS based Storage Pools)

In this configuration the QuantaStor front-end controller appliances act as a gateway to the storage in the SANs on the back-end. QuantaStor has been tested with NetApp and HP MSA 3rd-party SANs as back-end storage as well as with QuantaStor SDS as a back-end SAN. Please contact support@osnexus.com for the latest HCL for 3rd-party SAN support or to have additional SANs added to the HCL.

Qs clustered san minimum hardware.png

Minimum Hardware Requirements

  • 2x (or more) QuantaStor appliances which will be configured as front-end controller nodes
  • 2x (or more) QuantaStor appliances configured as back-end data nodes with SAS or SATA disks
  • High-performance SAN (FC/iSCSI) connectivity between front-end controller nodes and back-end data nodes

Setup Process

All Appliances

  • Add your license keys, one unique key for each appliance
  • Set up static IP addresses on each appliance (DHCP is the default and should only be used to get the appliance initially set up)
  • Right-click on the storage system and set the DNS IP address (e.g. 8.8.8.8) and your NTP server IP address
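
Once the static IPs, DNS, and NTP settings have been applied, they can be double-checked from the console or an SSH session on each appliance. This is only a quick sanity check using standard Linux commands; it assumes shell access to the appliance OS.

  ip addr show            # confirm the static IP assigned to each network port
  nslookup osnexus.com    # confirm name resolution through the configured DNS server
  ntpq -p                 # if ntpd is present on the base OS, list the NTP peers being used for time sync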

Back-end Appliances (Data Nodes)

  • Set up each back-end data node appliance as per the basic appliance configuration, with one or more storage pools and one storage volume per pool.
    • The ideal pool size is 10 to 20 drives, so you may need to create multiple pools per back-end appliance.
    • SAS drives are recommended, but enterprise SATA drives can also be used
    • HBA(s) or Hardware RAID controller(s) can be used for storage connectivity

Front-end Appliances (Controller Nodes)

  • Connectivity between the front-end and back-end nodes can be FC or iSCSI
FC SAN Back-end Configuration
  • Create Host entries, one for each front-end appliance, and add the WWPN of each FC port on the front-end appliances that will be used for intercommunication between the front-end and back-end nodes.
  • In smaller configurations, direct point-to-point physical cabling can be used to avoid the cost of an FC switch. For larger configurations using a back-end fabric, follow an FC zoning guide for advice on zone setup.
  • If you are using an FC switch, use a fabric topology that provides fault tolerance.
  • Back-end appliances must use QLogic QLE 8Gb or 16Gb FC cards, as QuantaStor can only present Storage Volumes as FC target LUNs via QLogic cards.
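
If you need to collect the WWPNs of the front-end FC ports for the Host entries described above, they can be read from sysfs on any Linux host with an FC HBA installed; the value shown in the comment is only an example.

  cat /sys/class/fc_host/host*/port_name    # prints one WWPN per FC port, e.g. 0x21000024ff3d6e80
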
iSCSI SAN Back-end Configuration
  • It is recommended, though not required, to separate the front-end network (client communication) from the back-end network (communication between the controller and data appliances).
  • For iSCSI connectivity to the back-end nodes, create a Software iSCSI Adapter by going to the Hardware Controllers & Enclosures section and adding an iSCSI Adapter. This adapter takes care of logging into and accessing the back-end storage. The back-end storage appliances must assign their Storage Volumes to the Host entries for the front-end nodes using their associated iSCSI IQNs.
  • Right-click and choose Modify Network Port on each port you want to disable client iSCSI access on. If you have 10GbE, be sure to disable iSCSI access on the slower 1GbE ports used for management access and/or remote-replication.
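
The Software iSCSI Adapter in the WebUI handles discovery and login to the back-end nodes for you. If you want to independently confirm that the back-end targets are reachable, the standard open-iscsi tools can be used from a shell; the IP address below is a hypothetical back-end data node address.

  iscsiadm -m discovery -t sendtargets -p 10.0.30.11    # list the iSCSI targets presented by a back-end node
  iscsiadm -m session                                   # list the iSCSI sessions that are currently logged in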

HA Network Setup

  • Make sure that eth0 is on the same network on both appliances
  • Make sure that eth1 is also on the same network on both appliances, but on a separate network from eth0
  • Create the Site Cluster with Ring 1 on the first network and Ring 2 on the second network; both front-end nodes should be in the Site Cluster, and the back-end nodes can be left out. This establishes a redundant (dual-ring) heartbeat between the front-end appliances which is used to detect hardware problems and, in turn, trigger a fail-over of the pool to the passive node.
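
Before creating the Site Cluster it is worth confirming that each front-end appliance can reach its partner on both heartbeat networks. The addresses below are hypothetical examples for the eth0 and eth1 networks; the last command applies only if the heartbeat service on your release is corosync-based, which is an assumption.

  ping -c 3 10.0.10.12    # partner appliance on the Ring 1 (eth0) network
  ping -c 3 10.0.11.12    # partner appliance on the Ring 2 (eth1) network
  corosync-cfgtool -s     # if corosync is in use, shows the status of both heartbeat rings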

HA Storage Pool Setup

  • Create a Storage Pool on the first front-end appliance (ZFS based) using the physical disks which have arrived from the back-end appliances.
    • QuantaStor will automatically analyze the disks from the back-end appliances and stripe across the appliances to ensure proper fault-tolerance across the back-end nodes.
  • Create a Storage Pool HA Group for the pool created in the previous step; if the storage is not accessible to both appliances, the dialog will block you from creating the group.
  • Create a Storage Pool Virtual Interface for the Storage Pool HA Group. All NFS/iSCSI access to the pool must be through the Virtual Interface IP address to ensure highly-available access to the storage for the clients.
  • Enable the Storage Pool HA Group. Automatic Storage Pool fail-over to the passive node will now occur if the active node is disabled or heartbeat between the nodes is lost.
  • Test pool fail-over: right-click on the Storage Pool HA Group and choose 'Manual Failover' to fail the pool over to the other node.
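
After a manual fail-over, ownership of the pool can be confirmed from the shell of each front-end node. These are generic checks that rely on the pool being ZFS based; 'pool1' and the address are placeholders for your pool name and Virtual Interface IP.

  zpool list              # run on each front-end node; the pool (e.g. pool1) is listed only on the node that currently owns it
  ip addr show            # the pool's Virtual Interface IP should be plumbed on the same (active) node
  ping -c 3 10.0.20.50    # from a client, confirm the pool Virtual Interface IP still responds after the fail-over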

Standard Storage Provisioning

  • Create one or more Network Shares (CIFS/NFS) and Storage Volumes (iSCSI/FC)
  • Create one or more Host entries with the iSCSI initiator IQN or FC WWPN of your client hosts/servers that will be accessing block storage.
  • Assign Storage Volumes to client Host entries created in the previous step to enable iSCSI/FC access to Storage Volumes.
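
As an illustration of client access through the pool Virtual Interface, the commands below show an iSCSI login and an NFS mount from a Linux client. The Virtual Interface IP and export path are hypothetical, and the client's IQN must already be assigned to the Storage Volume as described above.

  iscsiadm -m discovery -t sendtargets -p 10.0.20.50    # discover targets through the pool Virtual Interface
  iscsiadm -m node -p 10.0.20.50 --login                # log in to the discovered targets
  mount -t nfs 10.0.20.50:/export/share1 /mnt/share1    # mount a Network Share via the Virtual Interface (example export path)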

Diagram of Completed Configuration

Osn clustered san config.png

Highly-Available SAN/NAS Storage (Shared JBOD, ZFS based Storage Pools)

Qs clustered jbod minimum hardware.png

Minimum Hardware Requirements

  • 2x QuantaStor storage appliances acting as storage pool controllers
  • 1x (or more) SAS JBOD connected to both storage appliances
  • 2x to 100x SAS HDDs and/or SAS SSDs for pool storage; all data drives must be placed in the external shared JBOD. Drives must be SAS models that support multi-port and SCSI-3 reservations; SATA drives are not supported.
  • 1x hardware RAID controller (for the mirrored boot drives used by the QuantaStor OS)
  • 2x 500GB HDDs (mirrored boot drives for the QuantaStor SDS operating system)
    • Boot drives should be 100GB to 1TB in size; both enterprise HDDs and DC-grade SSDs are suitable

Storage Bridge Bay Support

Using a cluster-in-a-box or SBB (Storage Bridge Bay) system, one can set up QuantaStor in a highly-available cluster configuration in a single 2U rack-mount unit. SBB units contain two hot-swap servers and a JBOD in one chassis, and QuantaStor supports all SuperMicro-based SBB units. For more information on hardware options and SBB, please contact your OSNEXUS reseller or our solution engineering team at sdr@osnexus.com.

Setup Process

Basics

  • Login to the QuantaStor web management interface on each appliance
  • Add your license keys, one unique key for each appliance
  • Set up static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
  • Right-click and choose Modify Network Port on each port you want to disable iSCSI access on. If you have 10GbE, be sure to disable iSCSI access on the slower 1GbE ports used for management access and/or remote-replication.

HA Network Config

  • Right-click on the storage system and set the DNS IP address (e.g. 8.8.8.8) and your NTP server IP address
  • Make sure that eth0 is on the same network on both appliances
  • Make sure that eth1 is also on the same network on both appliances, but on a separate network from eth0
  • Create the Site Cluster with Ring 1 on the first network and Ring 2 on the second network. This establishes a redundant (dual ring) heartbeat between the appliances.

HA Storage Pool Creation

  • Create a Storage Pool (ZFS based) on the first appliance using only disk drives that are in the external shared JBOD
  • Create a Storage Pool HA Group for the pool created in the previous step; if the storage is not accessible to both appliances, the dialog will block you from creating the group.
  • Create a Storage Pool Virtual Interface for the Storage Pool HA Group. All NFS/iSCSI access to the pool must be through the Virtual Interface IP address to ensure highly-available access to the storage for the clients. This ensures that connectivity is maintained in the event of fail-over to the other node.
  • Enable the Storage Pool HA Group. Automatic Storage Pool fail-over to the passive node will now occur if the active node is disabled or heartbeat between the nodes is lost.
  • Test pool fail-over: right-click on the Storage Pool HA Group and choose 'Manual Failover' to fail the pool over to the other node.

Storage Provisioning

  • Create one or more Network Shares (CIFS/NFS) and Storage Volumes (iSCSI/FC)
  • Create one or more Host entries with the iSCSI initiator IQN or FC WWPN of your client hosts/servers that will be accessing block storage.
  • Assign Storage Volumes to client Host entries created in the previous step to enable iSCSI/FC access to Storage Volumes.

Diagram of Completed Configuration

Osn clustered jbod config.png


Site Cluster Management

Storage Pool High Availability Group Management

To make a storage pool highly available there are two steps that must be completed. First, a Site Cluster must be created which includes the two appliances that will be used in the cluster. The Site Cluster sets up the heartbeat mechanism so that automatic fail-over can occur in the event that the active node goes offline. Second, the storage pool must have an HA group attached along with one or more pool virtual IP addresses. The virtual IPs move with the storage pool so that client connectivity (NFS, iSCSI) is not lost when the pool is moved (manually or via automatic fail-over).

Hardware Pre-checks

Storage pools used with the High Availability Group feature require special consideration regarding their hardware configuration.

The Storage Pool used for the High Availability Group must be built on shared physical storage devices that both head nodes can see. These shared devices can be either of the following:

  • SAS disks in a SAS JBOD that both nodes are connected to via SAS HBAs.
  • FC or iSCSI storage LUNs presented to both nodes from a legacy SAN device that supports SCSI persistent reservations.

Head node requirements:

  • Two QuantaStor appliances
    • Each QuantaStor appliance will need adequate memory, processor, and network controller hardware capable of handling the client connections and load required for access to the Storage Volume or Network Share resources intended for use in the deployment.
    • Each QuantaStor appliance will need a dedicated connection to the shared storage devices via the appropriate initiator or HBA device. Multiple paths/connections from each head node to the shared storage devices are recommended.

Ha requirements.png

Shared Storage via SAS disk and SAS JBOD requirements:

  • Connection to both QuantaStor appliances via SAS HBAs.
  • All drives must be SAS or near-line SAS drives that support multi-port and SCSI-3 reservations. SAS interposers have proven to be unreliable in our testing and are not supported.
  • The SAS JBOD should have at least two SAS expansion ports. A JBOD with three or more expansion ports and redundant SAS Expander/Environmental Service Modules (ESMs) is preferred.
  • The SAS JBOD should be within standard SAS cable length (typically under 15 meters) of the SAS HBAs installed in the QuantaStor appliances.
  • For best performance, it is recommended that faster disks such as SSDs or 10K/15K RPM platter disks be placed in separate enclosures and SAS expansion chains from slower, larger disks.
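
To spot-check that the JBOD disks meet these requirements, the standard lsscsi and sg3_utils packages can be used from the appliance shell. These are generic Linux checks, not QuantaStor-specific tools; replace /dev/sdX with one of the shared disks.

  lsscsi -t                 # confirm the shared disks report a SAS transport (sas:0x5000...)
  sg_persist -c /dev/sdX    # report the persistent reservation capabilities of a shared disk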

Cluster-in-a-Box solutions provide these hardware requirements in a single HA QuantaStor appliance; please contact a reseller if you are interested.

Configuration Summary

  • Install QuantaStor on both appliances
  • Configure the SAS hardware and verify connectivity of the SAS cables from the HBAs to the JBOD
  • Configure the network and cluster heartbeat and verify network connectivity
  • Create a Storage Pool using only drives from the shared JBOD
  • Create a Storage Pool High-availability Group
  • Create one or more Storage Pool HA virtual interfaces
  • Activate the Storage Pool
  • Test failover

Installation

  • Install both QuantaStor Appliances with the most recent release of QuantaStor.
  • Apply the QuantaStor Gold, Platinum or Cloud Edition license key to each appliance. Each key must be unique.
  • Create a Grid and join both QuantaStor Appliances to the grid.

SAS Hardware Configuration

  • LSI SAS HBAs must have the 'Boot Support' MPT BIOS option set to 'OS only' mode.
  • Verify that the shared SAS disks appear to both head nodes in the WebUI Hardware Enclosures & Controllers section.
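
The same check can be made from the shell of each head node by comparing persistent device identifiers; the shared JBOD disks should show identical wwn-/scsi- entries on both systems.

  ls -l /dev/disk/by-id/ | grep -v part    # run on both head nodes and compare the wwn-/scsi- entries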

Network and Cluster Heartbeat Configuration

For the High Availability Group feature, primary client/host access is provided via one or more Virtual Network Interfaces on the node that has ownership of the Shared Storage Pool.

Modify Grid.png

A Grid Virtual Network Interface must be configured in the 'Modify Grid' dialog. If a dedicated management network is used, configure the Grid Virtual Network Interface on the management network subnet.

  • Each Virtual Network Interface requires three IP addresses configured in the same subnet: one for the Virtual Network Interface itself and one for the corresponding network device on each QuantaStor appliance.
    • A management network and multiple data networks can be configured.
    • Both QuantaStor appliances must have unique IP addresses for their network devices.
    • It is preferred that the network devices be configured the same way on both QuantaStor appliances (i.e. bond0 on both nodes). However, this is not a requirement; the virtual network interface fails over to whichever network device on the partner node is configured in the same subnet, which is why each network must be on its own subnet.
    • Each management and data network must be on a separate subnet to allow for proper routing of requests from clients.
    • A network gateway is best configured on the management network.
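
As an example of the three-address rule, a single data network and a separate management network might be laid out as follows. All addresses and device names are hypothetical.

  # Data network (one Virtual Network Interface plus one address per appliance):
  #   Virtual Network Interface . . . 10.0.20.50/24
  #   Appliance A, bond0 . . . . . .  10.0.20.11/24
  #   Appliance B, bond0 . . . . . .  10.0.20.12/24
  # Management network on its own subnet, with the default gateway configured there:
  #   Appliance A, eth0  . . . . . .  192.168.0.11/24
  #   Appliance B, eth0  . . . . . .  192.168.0.12/24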

Shared Storage Pool Creation

  • Verify in the Physical Disks section of the WebUI that all of the shared storage disks appear to both head nodes.
  • Configure the Storage Pool on one of the nodes using the Create Storage Pool dialog.
    • Provide a Name for the Storage Pool
    • Choose the Pool Type of Default (zfs)
    • Choose the RAID type and I/O profile that best suit your use case; more details are available in the Solution Design Guide.
    • Select the shared storage disks that suit your chosen RAID type and that were previously confirmed to be accessible to both QuantaStor appliances.
    • Click 'OK' to create the Storage Pool once all of the settings are configured correctly.
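
Once the pool has been created, its layout can be confirmed from the shell of the node that owns it; 'pool1' is a placeholder for the name you chose in the dialog.

  zpool status pool1    # verify the vdev layout matches the RAID type selected in the Create Storage Pool dialog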

Ha pool.png

High Availability Group creation

High Availability Groups can be created by right-clicking on the Storage Pool and choosing the 'Create High Availability Group' option.

Ha group.png

HA Virtual Network Interface creation

HA Virtual Network Interfaces can be created by right-clicking on the High Availability Group and choosing 'Create High Availability Network Interface'.

  • Configure the IP address and subnet mask for the Virtual Network Interface and choose the Ethernet device that is on the matching subnet

Ha group vif.png
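
After the interface is created (and the group is activated in the next step), the virtual IP should appear on the node that currently owns the pool. A quick check from the shell, using a hypothetical address:

  ip addr show | grep 10.0.20.50    # the HA virtual interface IP is plumbed on the active node only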

HA Group Activation

The High Availability Group can be activated by right-clicking on the High Availability Group and choosing 'Activate High Availability Group'.

Manual HA Failover Steps / Testing Failover

The manual failover process gracefully fails the Shared Storage Pool over from the node that currently has ownership to the partner node, which takes ownership of the Shared Storage Pool and provides client access to the Storage Volume and/or Network Share resources.

To trigger a manual failover for maintenance or testing, right-click on the High Availability Group and choose the 'Failover High Availability Group' option. In the dialog, choose the node you would like to fail over to and click 'OK' to start the manual failover.
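
A simple way to observe the failover from a client's perspective is to run a continuous ping against the pool's HA virtual interface while the failover is in progress; a brief pause in replies is expected while the pool and virtual IP move to the partner node. The address is a placeholder.

  ping -i 0.5 10.0.20.50    # run from a client host during the manual failover; replies resume once the partner node takes ownership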

Automatic HA Failover

In the event that a failure is detected on the node that has ownership of the Shared Storage Pool, an automatic HA failover event is triggered. This automatic event releases ownership of the Shared Storage Pool from the affected node, and its partner node takes ownership and provides client access to the Storage Volume and/or Network Share resources.

Triage/Troubleshooting

The following QuantaStor utilities can be run from the console or an SSH session on the head nodes to assist with triage of shared-device and fencing issues:

  • qs-iofence devstatus
  • qs-util devicemap
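
A typical triage pass from the shell of a head node might look like the sketch below. The qs-* utilities are the QuantaStor tools listed above (their output details vary by release); the remaining commands are generic Linux checks.

  qs-iofence devstatus    # QuantaStor fencing utility listed above
  qs-util devicemap       # QuantaStor device-mapping utility listed above
  lsscsi -t               # confirm the shared devices are still visible with a SAS/FC transport
  dmesg | tail -n 50      # look for recent SAS/FC link errors or SCSI reservation conflicts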