QuantaStor Setup Workflows

The following workflows are intended as a go-to guide outlining the basic steps for initial appliance and grid configuration.  More detailed information on each specific area can be found in the Administrators Guide.


== Setup Workflow for Highly-Available SAN/NAS Storage (ZFS-based) ==
=== Minimum Hardware Requirements ===
* 2x QuantaStor storage appliances acting as storage pool controllers
* 1x SAS JBOD connected to both storage appliances (or optionally use 2x or more additional QuantaStor storage appliances acting as FC/iSCSI JBOD)
* 1x hardware RAID controller in each appliance for mirrored boot drives (drives should be 100GB to 1TB in size, usually 2x RAM capacity; see the sizing sketch after this list)
* 2x 500GB HDDs (mirrored boot drives for QuantaStor OS)
* 2x to 100x SAS HDDs or SAS SSDs for pool storage (or optionally use SATA storage in separate QuantaStor storage appliances acting as FC/iSCSI JBOD)
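To make the boot-drive guideline concrete, below is a trivial sizing sketch.  The 2x RAM rule and the 100GB to 1TB bounds come from the list above; the function name and the 128GB example are illustrative only.

<syntaxhighlight lang="python">
def boot_mirror_size_gb(ram_gb: int) -> int:
    """Suggested capacity for each mirrored boot drive: roughly 2x RAM,
    kept within the 100GB to 1TB range recommended above."""
    return min(max(2 * ram_gb, 100), 1000)

# Example: an appliance with 128GB of RAM -> 256GB boot drives
print(boot_mirror_size_gb(128))
</syntaxhighlight>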
=== Setup Process ===
* Login to the QuantaStor web management interface on each appliance
* Add your license keys, one unique key for each appliance
* Set up static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
* Right-click on each network port where iSCSI access should be disabled and choose ''Modify Network Port''.  If you have 10GbE ports, be sure to disable iSCSI access on the slower 1GbE ports used for management access and/or remote replication.
* Right-click on the storage system and set the DNS IP address (e.g. 8.8.8.8) and your NTP server IP address
* Make sure that eth0 is on the same network on both appliances
* Make sure that eth1 is on the same network on both appliances, and that this network is separate from the eth0 network
* Create the Site Cluster with Ring 1 on the first network and Ring 2 on the second network.  This establishes a redundant (dual ring) heartbeat between the appliances. 
* Create a ZFS-based storage pool on the first appliance.
* Create a Storage Pool HA Group for the pool created in the previous step; if the pool's storage is not accessible to both appliances you will be blocked from creating the group.
* Create a Storage Pool Virtual Interface for the Storage Pool HA Group.  All NFS/iSCSI access to the pool must go through the Virtual Interface IP address so that clients have highly-available access to the storage (a client-side connection sketch follows this list).
* Enable the Storage Pool HA Group.  Automatic Storage Pool fail-over to the passive node will now occur if the active node is disabled or heartbeat between the nodes is lost.
* Test pool failover: right-click on the Storage Pool HA Group and choose 'Manual Failover' to fail the pool over to another node.
* Create one or more ''Network Shares'' (CIFS/NFS) and ''Storage Volumes'' (iSCSI/FC)
* Create one or more ''Host'' entries with the iSCSI initiator IQN or FC WWPN of your client hosts/servers that will be accessing block storage.
* Assign Storage Volumes to the client ''Host'' entries created in the previous step to enable iSCSI/FC access to those volumes.
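For reference, the sketch below shows how a Linux client might discover and log in to iSCSI targets through the pool's Virtual Interface IP.  This is a minimal sketch, assuming a Linux initiator with the open-iscsi tools (iscsiadm) installed and run with root privileges; the portal address 10.0.4.50 is a placeholder, not a value from this guide.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
"""Minimal client-side sketch: discover and log in to iSCSI targets through
the Storage Pool Virtual Interface IP (placeholder address below).
Assumes a Linux host with the open-iscsi tools (iscsiadm) installed."""
import subprocess

VIP = "10.0.4.50"  # placeholder: the Storage Pool Virtual Interface IP

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover the targets exported through the Virtual Interface
discovery = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", VIP])
print(discovery)

# Log in to every discovered target on that portal
for line in discovery.splitlines():
    portal, _, iqn = line.partition(" ")
    if iqn:
        run(["iscsiadm", "-m", "node", "-T", iqn, "-p", VIP, "--login"])
</syntaxhighlight>

Because the client sessions point at the Virtual Interface rather than at a physical port, they continue to work after the pool fails over to the other appliance.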
== Setup Workflow for Scale-out iSCSI Block Storage (Ceph-based) ==
=== Minimum Hardware Requirements ===
* 3x QuantaStor storage appliances minimum (up to 64x appliances)
* 1x high write-endurance SSD per appliance to create journal devices from; plan on 1x SSD for every 4x Ceph OSDs (see the sizing sketch after this list)
* 5x to 100x HDDs or SSDs for data storage per appliance
* 1x hardware RAID controller for the OSDs (a SAS HBA can also be used, but hardware RAID is faster)
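The sketch below turns the ratios above into a rough per-appliance plan, assuming the 5-disk RAID5 layout (4d + 1p) used in the setup steps that follow.  It is a back-of-the-envelope sketch only; the function name and the 20-disk example are illustrative.

<syntaxhighlight lang="python">
import math

def ceph_appliance_plan(hdd_count: int) -> dict:
    """Rough per-appliance layout: 5-disk RAID5 units (4d + 1p), one XFS pool
    and one OSD per unit, and one journal SSD for every 4 OSDs."""
    raid5_units = hdd_count // 5           # 5 disks per RAID5 unit (4d + 1p)
    osds = raid5_units                     # one OSD per RAID5 unit
    journal_ssds = math.ceil(osds / 4)     # 1x SSD for every 4x OSDs
    return {
        "raid5_units": raid5_units,
        "osds": osds,
        "journal_ssds": journal_ssds,
        "data_disks_per_unit": 4,          # one disk per unit holds parity
    }

# Example: an appliance with 20 HDDs -> 4 RAID5 units, 4 OSDs, 1 journal SSD
print(ceph_appliance_plan(20))
</syntaxhighlight>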
=== Setup Process ===
* Login to the QuantaStor web management interface on each appliance
* Add your license keys, one unique key for each appliance
* Set up static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up)
* Right-click on the storage system, choose 'Modify Storage System..' and set the DNS IP address (e.g. 8.8.8.8) and the NTP server IP address (important!)
* Set up separate front-end and back-end network ports (e.g. eth0 = 10.0.4.5/16, eth1 = 10.55.4.5/16) for iSCSI and Ceph traffic, respectively
* Create a Grid out of the 3 appliances (use ''Create Grid'' then add the other two nodes using the ''Add Grid Node'' dialog)
* Create hardware RAID5 units using 5 disks per RAID unit (4d + 1p) on each node until all HDDs have been used (see the ''Hardware Enclosures & Controllers'' section for ''Create Hardware RAID Unit'')
* ''Create Ceph Cluster'' using all the appliances in your grid that will be part of the Ceph cluster; in this example of 3 appliances you'll select all three.
* Create an XFS-based Storage Pool using the ''Create Storage Pool'' dialog for each Physical Disk presented by the hardware RAID controllers.  There will be one pool for each hardware RAID5 unit, and each pool will be used to create one Ceph OSD.
* Go to the Physical Disks section, right-click on one SSD drive on each appliance and choose ''Create Journal Device''; this slices up the SSD to create multiple journal devices that can be used to accelerate the write performance of the OSDs.
* Create one OSD for each Storage Pool created in the previous step and be sure to select a journal device for each OSD.  The ''Create Ceph OSD'' dialog is in the 'Scale-out Block Storage' section.
* Create a scale-out storage pool by going to the ''Scale-out Block Storage'' section, choosing the ''Ceph Storage Pool'' section, and creating a pool.  It will automatically select all the available OSDs (a quick health-check sketch follows this list).
* Create a scale-out block storage device (RBD / RADOS Block Device) by choosing 'Create Block Device/RBD', or by going to the ''Storage Management'' tab, choosing 'Create Storage Volume', and then selecting the Ceph Pool created in the previous step.
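As a quick sanity check after the OSDs and the Ceph Storage Pool have been created, the sketch below prints the cluster status and the OSD counts.  It is a minimal sketch that assumes console or SSH access to one of the appliances and that the standard ceph CLI is available there; healthy output shows HEALTH_OK with all OSDs up and in.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
"""Minimal sketch: print Ceph cluster status and OSD counts after setup.
Assumes console/SSH access to an appliance where the standard ceph CLI is
available."""
import subprocess

for cmd in (["ceph", "-s"], ["ceph", "osd", "stat"]):
    print("+", " ".join(cmd))
    print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
</syntaxhighlight>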
At this point everything is configured and a block device has been provisioned.  To assign that block device to a host, you'll follow the same steps as you would for non-scale-out storage.
* Go to the Hosts section, choose ''Add Host'' and enter the initiator IQN or WWPN of the host that will be accessing the block storage (the sketch after these steps shows one way to read the IQN on a Linux client).
* Right-click on the Host and choose ''Assign Volumes...'' to assign the scale-out storage volume(s) to the host.
Repeat the Storage Volume Create and Assign Volumes steps to provision additional storage and to assign it to one or more hosts.
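When filling in the ''Add Host'' dialog, the client's IQN can be read directly from the initiator itself.  The sketch below is one way to do that on a Linux client; it assumes the open-iscsi package, which stores the IQN in /etc/iscsi/initiatorname.iscsi.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
"""Minimal sketch: read the local iSCSI initiator IQN on a Linux client so it
can be entered in the Add Host dialog. Assumes open-iscsi is installed."""

def local_iqn(path="/etc/iscsi/initiatorname.iscsi"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("InitiatorName="):
                return line.split("=", 1)[1]
    raise RuntimeError("no InitiatorName found; is open-iscsi installed?")

print(local_iqn())  # e.g. iqn.1993-08.org.debian:01:abc123def456
</syntaxhighlight>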
=== Diagram of Completed Configuration ===
[[File:osn_ceph_workflow2.png|700px]]
