API reference for storage drivers

class DataDriverBase(conf, cache, control_driver)[source]

Interface definition for storage drivers.

Data plane storage drivers are responsible for implementing the core functionality of the system.

Connection information and driver-specific options are loaded from the config file or the pool catalog.

Parameters:
  • conf (oslo_config.ConfigOpts) – Configuration containing options for this driver.

  • cache (dogpile.cache.region.CacheRegion) – Cache instance to use for reducing latency for certain lookups.

  • control_driver – Control plane storage driver instance.

abstract property capabilities

Returns storage’s capabilities.

abstract property claim_controller

Returns the driver’s claim controller.

abstract close()[source]

Close connections to the backend.

gc()[source]

Perform manual garbage collection of claims and messages.

This method can be overridden in order to provide a trigger that can be called by so-called “garbage collection” scripts that are required by some drivers.

By default, this method does nothing.

health()[source]

Return the health status of the service.

abstract is_alive()[source]

Check whether the storage is ready.

abstract property message_controller

Returns the driver’s message controller.

abstract property subscription_controller

Returns the driver’s subscription controller.

property topic_controller

Returns the driver’s topic controller.
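
For illustration, the sketch below shows caller-side use of the interface above. It assumes driver is any concrete DataDriverBase implementation obtained from Zaqar's driver loading machinery; it is a sketch, not a reference implementation.

    def check_and_use(driver):
        # `driver` is assumed to be a concrete DataDriverBase implementation.
        if not driver.is_alive():
            raise RuntimeError("storage backend is not ready")

        print("capabilities:", driver.capabilities)
        print("health:", driver.health())

        messages = driver.message_controller  # Message controller (see below)
        claims = driver.claim_controller      # Claim controller (see below)

        # ... message and claim operations go through the controllers ...

        driver.gc()     # manual garbage collection; a no-op by default
        driver.close()  # release connections to the backend when done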

class ControlDriverBase(conf, cache)[source]

Interface definition for control plane storage drivers.

Storage drivers that work at the control plane layer allow one to modify aspects of the functionality of the system. This is ideal for administrative purposes.

Allows access to the pool registry through a catalogue and a pool controller.

Parameters:
  • conf (oslo_config.ConfigOpts) – Configuration containing options for this driver.

  • cache (dogpile.cache.region.CacheRegion) – Cache instance to use for reducing latency for certain lookups.

abstract property catalogue_controller

Returns the driver’s catalogue controller.

abstract close()[source]

Close connections to the backend.

abstract property flavors_controller

Returns storage’s flavor management controller.

abstract property pools_controller

Returns storage’s pool management controller.

abstract property queue_controller

Returns the driver’s queue controller.

abstract property topic_controller

Returns the driver’s topic controller.
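
As an illustration, the sketch below accesses the control plane controllers documented above; control_driver is assumed to be any concrete ControlDriverBase implementation.

    def inspect_control_plane(control_driver):
        # Administrative access goes through the control plane controllers.
        pools = control_driver.pools_controller
        flavors = control_driver.flavors_controller
        catalogue = control_driver.catalogue_controller
        queues = control_driver.queue_controller
        topics = control_driver.topic_controller

        # ... pool, flavor and catalogue management would happen here ...

        control_driver.close()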

class Queue(driver)[source]

This class is responsible for managing queues.

Queue operations include CRUD, monitoring, etc.

Storage driver implementations of this class should be capable of handling high workloads and huge numbers of queues.

calculate_resource_count(project=None)[source]

Base method for calculating the number of queues.

Parameters:

project – Project id

Returns:

The number of queues.

create(name, metadata=None, project=None)[source]

Base method for queue creation.

Parameters:
  • name – The queue name

  • project – Project id

Returns:

True if a queue was created and False if it was updated.

delete(name, project=None)[source]

Base method for deleting a queue.

Parameters:
  • name – The queue name

  • project – Project id

exists(name, project=None)[source]

Base method for testing queue existence.

Parameters:
  • name – The queue name

  • project – Project id

Returns:

True if a queue exists and False if it does not.

get(name, project=None)[source]

Base method for queue metadata retrieval.

Parameters:
  • name – The queue name

  • project – Project id

Returns:

Dictionary containing queue metadata

Raises:

DoesNotExist – if queue metadata does not exist

get_metadata(name, project=None)[source]

Base method for queue metadata retrieval.

Parameters:
  • name – The queue name

  • project – Project id

Returns:

Dictionary containing queue metadata

Raises:

DoesNotExist – if queue metadata does not exist

list(project=None, kfilter={}, marker=None, limit=10, detailed=False, name=None)[source]

Base method for listing queues.

Parameters:
  • project – Project id

  • kfilter – Metadata key-value pairs to filter queues by

  • marker – The name of the last queue returned by a previous listing (pagination marker)

  • limit – (Default 10) Max number of queues to return

  • detailed – Whether metadata is included

  • name – A queue name to filter the listing by

Returns:

An iterator giving a sequence of queues and the marker of the next page.

set_metadata(name, metadata, project=None)[source]

Base method for updating queue metadata.

Parameters:
  • name – The queue name

  • metadata – Queue metadata as a dict

  • project – Project id

Raises:

DoesNotExist – if the queue metadata cannot be updated

stats(name, project=None)[source]

Base method for queue stats.

Parameters:
  • name – The queue name

  • project – Project id

Returns:

Dictionary with the queue stats
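
The sketch below exercises the queue controller interface documented above. queue_ctrl is assumed to be a concrete implementation; the queue name, metadata, and project id are placeholders, and the two-step consumption of list() (queue sequence first, then the next-page marker) follows the return value described above.

    def demo_queue_ops(queue_ctrl, project="example-project"):
        # create() returns True on creation, False if the queue was updated.
        created = queue_ctrl.create("orders", metadata={"ttl": 300},
                                    project=project)

        if queue_ctrl.exists("orders", project=project):
            queue_ctrl.set_metadata("orders", {"ttl": 600}, project=project)
            print(queue_ctrl.get_metadata("orders", project=project))
            print(queue_ctrl.stats("orders", project=project))

        # list() yields the queue sequence first, then the next-page marker.
        results = queue_ctrl.list(project=project, limit=10, detailed=True)
        queues = list(next(results))
        marker = next(results)
        print(len(queues), "queues; next marker:", marker)

        queue_ctrl.delete("orders", project=project)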

class Message(driver)[source]

This class is responsible for managing message CRUD.

abstract bulk_delete(queue, message_ids, project=None, claim_ids=None)[source]

Base method for deleting multiple messages.

Parameters:
  • queue – Name of the queue to delete messages from.

  • message_ids – A sequence of message IDs to be deleted.

  • project – Project id

  • claim_ids – claim IDs passed in by the delete request

abstract bulk_get(queue, message_ids, project=None)[source]

Base method for getting multiple messages.

Parameters:
  • queue – Name of the queue to get the messages from.

  • project – Project id

  • message_ids – A sequence of message IDs.

Returns:

An iterable, yielding dicts containing message details

abstract delete(queue, message_id, project=None, claim=None)[source]

Base method for deleting a single message.

Parameters:
  • queue – Name of the queue to delete the message from.

  • message_id – Message to be deleted

  • project – Project id

  • claim – Claim this message belongs to. When specified, claim must be valid and message_id must belong to it.

abstract first(queue, project=None, sort=1)[source]

Get first message in the queue (including claimed).

Parameters:
  • queue – Name of the queue to list

  • sort – (Default 1) Sort order for the listing. Pass 1 for ascending (oldest message first), or -1 for descending (newest message first).

Returns:

First message in the queue, or None if the queue is empty

abstract get(queue, message_id, project=None)[source]

Base method for getting a message.

Parameters:
  • queue – Name of the queue to get the message from.

  • project – Project id

  • message_id – Message ID

Returns:

Dictionary containing message data

Raises:

DoesNotExist – if the message data cannot be retrieved

abstract list(queue, project=None, marker=None, limit=10, echo=False, client_uuid=None, include_claimed=False, include_delayed=False)[source]

Base method for listing messages.

Parameters:
  • queue – Name of the queue to list messages from.

  • project – Project id

  • marker – Tail identifier; the pagination marker returned by a previous listing

  • limit (int or None) – (Default 10) Max number of messages to return.

  • echo – (Default False) Boolean expressing whether or not this client should receive its own messages.

  • client_uuid – A UUID object. Required when echo=False.

  • include_claimed (bool) – (Default False) Whether to include claimed messages in the listing.

  • include_delayed (bool) – (Default False) Whether to include delayed messages in the listing.

Returns:

An iterator giving a sequence of messages and the marker of the next page.

abstract pop(queue, limit, project=None)[source]

Base method for popping messages.

Parameters:
  • queue – Name of the queue to pop messages from.

  • limit – Number of messages to pop.

  • project – Project id

abstract post(queue, messages, client_uuid, project=None)[source]

Base method for posting one or more messages.

Implementations of this method should preserve the order of incoming messages in the returned list of message IDs.

Parameters:
  • queue – Name of the queue to post message to.

  • messages – Messages to post to queue, an iterable yielding 1 or more elements. An empty iterable results in undefined behavior.

  • client_uuid – A UUID object.

  • project – Project id

Returns:

List of message ids
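
As an illustration of the message interface above, the sketch below posts, reads, lists, and deletes messages. msg_ctrl is assumed to be a concrete Message controller, the queue name and project id are placeholders, and the {'ttl', 'body'} message shape is the conventional one.

    import uuid

    def demo_message_ops(msg_ctrl, project="example-project"):
        client = uuid.uuid4()

        # post() preserves the order of incoming messages in the returned IDs.
        ids = msg_ctrl.post(
            "orders",
            messages=[{"ttl": 300, "body": {"event": "created"}},
                      {"ttl": 300, "body": {"event": "paid"}}],
            client_uuid=client,
            project=project,
        )

        first = msg_ctrl.get("orders", ids[0], project=project)

        # List without echoing our own messages; the iterator yields the
        # message sequence first, then the marker for the next page.
        results = msg_ctrl.list("orders", project=project, limit=10,
                                echo=False, client_uuid=client)
        page = list(next(results))
        marker = next(results)

        msg_ctrl.bulk_delete("orders", ids, project=project)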

class Claim(driver)[source]
abstract create(queue, metadata, project=None, limit=10)[source]

Base method for creating a claim.

Parameters:
  • queue – Name of the queue this claim belongs to.

  • metadata – Claim’s parameters to be stored.

  • project – Project id

  • limit – (Default 10) Max number of messages to claim.

Returns:

(Claim ID, claimed messages)

abstract delete(queue, claim_id, project=None)[source]

Base method for deleting a claim.

Parameters:
  • queue – Name of the queue this claim belongs to.

  • claim_id – Claim to be deleted

  • project – Project id

abstract get(queue, claim_id, project=None)[source]

Base method for getting a claim.

Parameters:
  • queue – Name of the queue this claim belongs to.

  • claim_id – The claim id

  • project – Project id

Returns:

(Claim’s metadata, claimed messages)

Raises:

DoesNotExist – if the claimed messages cannot be retrieved

abstract update(queue, claim_id, metadata, project=None)[source]

Base method for updating a claim.

Parameters:
  • queue – Name of the queue this claim belongs to.

  • claim_id – Claim to be updated

  • metadata – Claim’s parameters to be updated.

  • project – Project id
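
The sketch below walks a claim through its lifecycle using the interface above. claim_ctrl is assumed to be a concrete Claim controller; the queue name, project id, and the {'ttl', 'grace'} claim metadata values are placeholders.

    def demo_claim_ops(claim_ctrl, project="example-project"):
        # create() returns the claim ID together with the claimed messages.
        claim_id, messages = claim_ctrl.create(
            "orders", metadata={"ttl": 120, "grace": 60},
            project=project, limit=5)

        # Inspect the claim, then extend it before it expires.
        meta, claimed = claim_ctrl.get("orders", claim_id, project=project)
        claim_ctrl.update("orders", claim_id,
                          metadata={"ttl": 300, "grace": 60},
                          project=project)

        # Deleting the claim releases any unprocessed messages it holds.
        claim_ctrl.delete("orders", claim_id, project=project)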

MongoDB Driver

MongoDB Storage Driver for Zaqar.

About the store

MongoDB is a NoSQL, eventually consistent, reliable database that supports horizontal scaling and can handle a wide range of throughput levels.

Supported Features

  • FIFO

  • Unlimited horizontal-scaling [1]

  • Reliability [2]

Supported Deployments

MongoDB can be deployed in three different ways. The first and simplest is a standalone mongod node. The second is a replica set, which provides a master-slave style deployment but cannot be scaled without limit. The third is a sharded cluster.

The second and third methods are recommended for production environments where durability and scalability are a must. The driver enforces this by checking whether it is talking to a replica set or a sharded cluster; this enforcement can be disabled by running Zaqar in unreliable mode.

Replica Sets

When running against a replica set, Zaqar does not try to be smart; it relies as much as possible on the database and pymongo.
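
For illustration, the snippet below shows the kind of delegation this implies when using plain pymongo: connect to a replica set and let MongoDB provide durability through its write concern. The host names, replica-set name, and database name are placeholders, not Zaqar defaults.

    import pymongo

    # Connect to a three-member replica set; writes are acknowledged only
    # after a majority of members have applied them.
    client = pymongo.MongoClient(
        "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=zaqar",
        w="majority",
    )

    db = client["zaqar_example"]
    db["sanity"].insert_one({"ping": True})  # durable once a majority acks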

Sharded Cluster

TBD