
 Chapter 1. Block Storage Overview

Several concepts are helpful in understanding administration of the OpenStack Block Storage service. There are many choices when configuring the Block Storage service in OpenStack, but most of them come down to one decision: a single-node or multi-node install. You can read a longer discussion about storage decisions in the OpenStack Operations Guide.

The OpenStack Block Storage service works through the interaction of a set of daemon processes named cinder-* that run persistently on one or more host machines. The binaries can all be run from a single node or spread across multiple nodes. They can also be run on the same node as other OpenStack services.

The current services available in OpenStack Block Storage are:

  • cinder-api - The cinder-api service is a WSGI app that authenticates and routes requests throughout the Block Storage system. It supports only the OpenStack APIs, although requests can also be translated through Nova's EC2 interface, which calls into the cinderclient.

  • cinder-scheduler - The cinder-scheduler service is responsible for scheduling and routing requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling across the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default as of Grizzly and enables filtering on criteria such as capacity, availability zone, volume types, and capabilities, as well as custom filters (see the configuration sketch after this list).

  • cinder-volume - The cinder-volume service is responsible for managing Block Storage devices, specifically the back-end devices themselves.

  • cinder-backup - The cinder-backup service provides a means to back up a Cinder volume to OpenStack Object Storage (Swift).
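
As a rough illustration of the Filter Scheduler mentioned above, the following cinder.conf fragment shows what an explicit scheduler configuration might look like. The option names reflect the Grizzly release and the filter list is illustrative only, so verify the exact names against the release you are running:

    # cinder.conf (scheduler settings) -- illustrative values only
    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    # Filters applied to candidate back ends, in order
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter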

 Introduction to OpenStack Block Storage

OpenStack Block Storage provides persistent, high-performance block storage resources that can be consumed by OpenStack Compute instances. This includes secondary attached storage similar to Amazon's Elastic Block Storage (EBS). In addition, images can be written to a Block Storage device and used by OpenStack Compute to boot a persistent instance.

There are some differences from Amazon's EBS to be aware of. OpenStack Block Storage is not a shared storage solution like NFS; it is currently designed so that a device is attached to, and in use by, a single instance at a time.
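
As a rough sketch of the boot-from-volume workflow described above, the commands below create a bootable volume from an existing image and boot an instance from it. The image ID, flavor, volume ID, and size are placeholders, and the exact novaclient flag spelling can vary between releases:

    $ cinder create --image-id <IMAGE_ID> --display-name bootable-vol 10
    $ nova boot --flavor <FLAVOR> \
        --block-device-mapping vda=<VOLUME_ID>:::0 my-persistent-instance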

 Backend Storage Devices

OpenStack Block Storage requires some form of back-end storage on which the service is built. The default implementation uses LVM on a local volume group named "cinder-volumes". In addition to the base driver implementation, OpenStack Block Storage also provides the means to add support for other storage devices, such as external RAID arrays or other storage appliances.
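
For reference, a minimal local LVM back end might be prepared as sketched below, assuming a spare block device at /dev/sdb (a placeholder) and option names from the Grizzly release:

    # Create the volume group the default LVM driver expects
    $ pvcreate /dev/sdb
    $ vgcreate cinder-volumes /dev/sdb

    # Relevant cinder.conf settings (illustrative)
    volume_group = cinder-volumes
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver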

 Users and Tenants (Projects)

The OpenStack Block Storage system is designed to be used by many different cloud computing consumers or customers (essentially tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but the system administrator can change this by editing the appropriate policy.json file that maintains the rules. A user's access to particular volumes is limited by tenant, but the username and password are assigned per user. Key pairs granting access to a volume are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant.

For each tenant, quota controls are available to limit the following (an example of inspecting and updating these quotas follows the list):

  • Number of volumes which may be created

  • Number of snapshots which may be created

  • Total number of gigabytes allowed per tenant (shared between snapshots and volumes)
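
A sketch of how these quotas can be inspected and adjusted with the cinder client follows; the tenant ID is a placeholder and flag names may vary slightly by release:

    $ cinder quota-show <TENANT_ID>
    $ cinder quota-update --volumes 20 --snapshots 20 --gigabytes 1000 <TENANT_ID>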

 Volumes, Snapshots, and Backups

This introduction provides a high-level overview of the basic resources offered by the OpenStack Block Storage service: Volumes, Snapshots (which are derived from Volumes), and Backups.

Volumes

Volumes are allocated block storage resources that can be attached to instances as secondary storage or used as the root store to boot instances. Volumes are persistent, read/write block storage devices, most commonly attached to the compute node via iSCSI.
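
For example, a 10 GB volume could be created and inspected roughly as follows; the display name is arbitrary and the volume ID is a placeholder:

    $ cinder create --display-name my-volume 10
    $ cinder show <VOLUME_ID>     # status moves from "creating" to "available"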

Snapshots

A Snapshot in OpenStack Block Storage is a read-only, point-in-time copy of a Volume. A Snapshot can be created from a Volume that is currently in use (with '--force True') or in an available state. The Snapshot can then be used to create a new Volume via "create from snapshot".
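
Taking a snapshot and creating a new volume from it might look like the following sketch; IDs, names, and the size are placeholders:

    $ cinder snapshot-create --force True --display-name my-snap <VOLUME_ID>
    $ cinder create --snapshot-id <SNAPSHOT_ID> --display-name vol-from-snap 10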

Backups

A Backup is an archived copy of a Volume currently stored in Object Storage (Swift).
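
A backup of a volume can be created with the cinder client, roughly as follows; the volume generally needs to be in the available state, and the ID is a placeholder:

    $ cinder backup-create --display-name my-backup <VOLUME_ID>
    $ cinder backup-list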

 Managing Volumes

Cinder is the OpenStack service that allows you to give extra block-level storage to your OpenStack Compute instances. You may recognize this as a similar offering from Amazon EC2 known as Elastic Block Storage (EBS). The default Cinder implementation is an iSCSI solution that employs Logical Volume Manager (LVM) for Linux. Note that a volume may only be attached to one instance at a time; this is not a 'shared storage' solution like a SAN or NFS, to which multiple servers can attach. Cinder also includes a number of drivers that allow you to use other vendors' back-end storage devices in addition to, or instead of, the base LVM implementation.

Here is a brief walk-through of a simple create/attach sequence; keep in mind this requires proper configuration of both OpenStack Compute (via nova.conf) and OpenStack Block Storage (via cinder.conf). The command-line equivalent of the sequence is sketched after the list.

  1. The volume is created via cinder create, which creates a logical volume (LV) in the volume group (VG) "cinder-volumes"

  2. The volume is attached to an instance via nova volume-attach, which creates a unique iSCSI IQN that is exposed to the compute node

  3. The compute node that runs the instance now has an active iSCSI session and new local storage (usually a /dev/sdX device)

  4. libvirt uses that local storage as storage for the instance; the instance gets a new disk (usually a /dev/vdX disk)
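
The command-line equivalent of the sequence above might look like the following sketch; instance and volume IDs are placeholders, and the device names reported on the compute node and inside the guest can differ from those shown:

    # 1. Create a 1 GB volume in the "cinder-volumes" VG
    $ cinder create --display-name walkthrough-vol 1

    # 2. Attach it to a running instance
    $ nova volume-attach <INSTANCE_ID> <VOLUME_ID> /dev/vdb

    # 3. On the compute node, confirm the active iSCSI session
    $ sudo iscsiadm -m session

    # 4. Inside the instance, the new disk appears (usually /dev/vdX)
    $ fdisk -l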
