Scale-out Object Setup (ceph)

Overview

QuantaStor supports scale-out object storage via the S3- and SWIFT-compatible REST protocols, with scalability to 64PB of storage and 64 appliances per grid. QuantaStor integrates with and extends Ceph storage technology to deliver scale-out block storage (iSCSI, Ceph RBD) and object storage (S3/SWIFT). Ceph is a highly-available, elastic storage technology that can scale from a small 3x appliance configuration up to hyper-scale. Within a QuantaStor grid, up to 20x individual Ceph clusters can be managed through a single pane of glass by logging into any appliance in the grid. Further, QuantaStor provides web UI management for all configuration and management operations, making it possible to set up large, complex configurations in minutes. The following guide covers QuantaStor and Ceph terminology, walks through the Ceph cluster configuration and setup process, and finishes with day-to-day management operations.
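
As a quick illustration of the S3-compatible REST interface, the sketch below uses the standard boto3 client against a QuantaStor object storage endpoint. The endpoint address, port (the Ceph object gateway default), access key, and secret key are placeholder assumptions; substitute the values configured for your own object storage group and user object access entry.

  import boto3

  # Endpoint and credentials are placeholders; use your appliance's S3 gateway
  # address and the keys from your user object access entry.
  s3 = boto3.client(
      's3',
      endpoint_url='http://10.0.4.5:7480',
      aws_access_key_id='YOUR_ACCESS_KEY',
      aws_secret_access_key='YOUR_SECRET_KEY',
  )

  # Create a bucket, upload an object, then list the bucket's contents.
  s3.create_bucket(Bucket='demo-bucket')
  s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello object storage')
  for obj in s3.list_objects_v2(Bucket='demo-bucket').get('Contents', []):
      print(obj['Key'], obj['Size'])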

Minimum Hardware Requirements

To achieve quorum, a minimum of three appliances is required. The storage in each appliance can be SAS or SATA HDDs or SSDs, but a minimum of 1x SSD per appliance is required for use as a journal (write log) device. Appliances must use a hardware RAID controller for the QuantaStor boot/system devices, and we recommend using a hardware RAID controller for the storage pools as well.

  • Intel Xeon or AMD Opteron CPU
  • 64 GB RAM
  • 3x QuantaStor storage appliances minimum (up to 64x appliances)
  • 1x write-endurance SSD per appliance from which to create journal devices; plan on 1x SSD for every 4x Ceph OSDs (see the sizing sketch after this list)
  • 5x to 100x HDDs or SSDs for data storage per appliance
  • 1x hardware RAID controller for OSDs (a SAS HBA can also be used, but hardware RAID is faster)
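
The journal-to-OSD ratio above can be sanity-checked with a small calculation. The sketch below is illustrative only; the appliance and OSD counts are assumptions to be replaced with your own values.

  import math

  appliances = 3             # three-appliance minimum for monitor quorum
  osds_per_appliance = 5     # one OSD per data device or hardware RAID unit
  osds_per_journal_ssd = 4   # guideline above: 1x journal SSD per 4x Ceph OSDs

  journal_ssds = math.ceil(osds_per_appliance / osds_per_journal_ssd)
  print("Journal SSDs needed per appliance:", journal_ssds)
  print("Total OSDs across the cluster:", appliances * osds_per_appliance)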

Setup Process

The following section is a step-by-step guide to setting up scale-out S3/SWIFT object storage with a grid of 3x or more QuantaStor appliances.

  • Log in to the QuantaStor web management interface on each appliance; the default username and password are 'admin' / 'password' (without the quotes). If the appliance was pre-configured, use the credentials provided by your service provider to log in as admin.
  • Add your license keys via the License Manager, one unique key per appliance. Scale-out storage configurations require Cloud Edition or Enterprise Edition license keys. Journal devices do not count against the licensed capacity.
  • Set up static IP addresses on each node (DHCP is the default and should only be used to get the appliance initially set up).
  • Right-click on the storage system, choose 'Modify Storage System...' and set the DNS IP address (e.g. 8.8.8.8) and the NTP server IP address (important!).
  • Set up separate front-end and back-end network ports (e.g. eth0 = 10.0.4.5/16, eth1 = 10.55.4.5/16) for iSCSI and Ceph back-end traffic, respectively.
  • Create a grid out of the three appliances (use Create Grid, then add the other two nodes using the Add Grid Node dialog).
  • Create hardware RAID5 units using 5 disks per RAID unit (4 data + 1 parity) on each node until all HDDs have been used (see the Hardware Enclosures & Controllers section for Create Hardware RAID Unit).
  • Create the Ceph Cluster, selecting all the appliances in your grid that will be part of the cluster; in this three-appliance example you'll select all three.
  • Use OSD Multi-Create to set up all the storage. In that dialog you'll select the SSDs to be used as journal devices and the HDDs to be used for data storage across the cluster nodes. Once selected, click OK and QuantaStor will do the rest.
  • Create a scale-out storage pool by going to the Scale-out Block Storage section, choosing the Ceph Storage Pool section, and creating a pool. It will automatically select all the available OSDs.
  • Create a scale-out block storage device (RBD / RADOS Block Device) by choosing 'Create Block Device/RBD', or by going to the Storage Management tab, choosing 'Create Storage Volume', and selecting the Ceph pool created in the previous step (see the verification sketch after these steps).
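
QuantaStor drives all of the above through the web UI, but because the underlying technology is standard Ceph, the resulting pool and block device can also be inspected from any cluster node with the stock Ceph Python bindings (python3-rados / python3-rbd). This is purely an optional verification sketch; the pool name 'ceph-pool-1' is an assumption, so substitute the name you chose when creating the pool.

  import rados
  import rbd

  # Connect to the cluster as client.admin using its configuration file.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      print("Pools:", cluster.list_pools())

      # Open the scale-out pool created above ('ceph-pool-1' is an assumed name).
      ioctx = cluster.open_ioctx('ceph-pool-1')
      try:
          print("RBD images:", rbd.RBD().list(ioctx))
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()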

At this point everything is configured and a block device has been provisioned. To assign that block device to a host for access via iSCSI, you'll follow the same steps as you would for non-scale-out Storage Volumes.

  • Go to the Hosts section, choose Add Host, and enter the Initiator IQN or WWPN of the host that will be accessing the block storage (see the sketch below for finding the initiator IQN on a Linux host).
  • Right-click on the Host and choose Assign Volumes... to assign the scale-out storage volume(s) to the host.

Repeat the Storage Volume Create and Assign Volumes steps to provision additional storage and to assign it to one or more hosts.
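
The Add Host step above needs the initiator IQN of the client that will mount the volume. On a Linux host with open-iscsi installed, the sketch below reads that IQN and then discovers and logs in to the QuantaStor target; the portal IP address is an assumption (use your appliance's front-end IP).

  import subprocess

  # The host's initiator IQN -- enter this value in the Add Host dialog.
  with open('/etc/iscsi/initiatorname.iscsi') as f:
      print(f.read().strip())

  portal = '10.0.4.5'  # assumed front-end IP of one of the QuantaStor appliances

  # Discover the targets exported to this host, then log in to them.
  subprocess.run(['iscsiadm', '-m', 'discovery', '-t', 'sendtargets', '-p', portal], check=True)
  subprocess.run(['iscsiadm', '-m', 'node', '-p', portal, '--login'], check=True)

After login, the assigned Storage Volume appears on the host as a standard SCSI block device and can be partitioned and formatted as usual.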

Terminology

What is a Ceph Cluster?

What is a Ceph Monitor?

What is a journal device?

What is a Ceph OSD?

What is a Placement Group / PG?

What is an Object Storage Group?

What are User Object Access Entries?