The ceilometer.pipeline.base Module

class ceilometer.pipeline.base.InterimPublishContext(conf, mgr)[source]

Bases: object

Publisher to hash/shard data to pipelines

static hash_grouping(datapoint, grouping_keys)[source]
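A hedged sketch of what a hash_grouping helper like this can do: concatenate the datapoint's values for the grouping keys and hash the result, so datapoints sharing those values land in the same pipeline shard. The implementation details below are an assumption for illustration, not the exact ceilometer code.

```python
def hash_grouping(datapoint, grouping_keys):
    """Hash a datapoint on its grouping-key values (illustrative sketch)."""
    value = ''
    for key in grouping_keys or []:
        # Concatenate the value of each grouping key; missing keys
        # contribute the string 'None' rather than raising.
        value += str(datapoint.get(key))
    return hash(value)

# Two datapoints with the same resource_id hash to the same shard:
a = hash_grouping({'resource_id': 'r1', 'volume': 1}, ['resource_id'])
b = hash_grouping({'resource_id': 'r1', 'volume': 2}, ['resource_id'])
assert a == b
```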
class ceilometer.pipeline.base.MainNotificationEndpoint(conf, publisher)[source]

Bases: ceilometer.pipeline.base.NotificationEndpoint

Listens to queues at all priority levels and, by default, consumes and drops the notifications.

classmethod audit(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod critical(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod debug(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod error(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod info(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod sample(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

classmethod warn(notifications)

RPC endpoint for an unused notification level; messages are consumed and dropped.

class ceilometer.pipeline.base.NotificationEndpoint(conf, publisher)[source]

Bases: object

Base Endpoint for plugins that support the notification API.

event_types = []

List of strings to filter messages on.

process_notifications(priority, notifications)[source]

Return a sequence of Counter instances for the given messages.

Parameters:
    priority – Priority level of the notifications.
    notifications – Messages to process.

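The endpoint pattern described above can be sketched as a standalone class: a plugin declares event_types patterns to filter on, and process_notifications turns matching messages into data for the pipeline. All names below are illustrative and not the actual ceilometer classes; wildcard matching via fnmatch is an assumption.

```python
import fnmatch

class CpuEndpoint:
    """Illustrative endpoint plugin (hypothetical, not ceilometer's)."""

    # Patterns to filter incoming messages on, as in event_types above.
    event_types = ['compute.instance.*']

    def interested(self, event_type):
        # A message passes the filter if it matches any declared pattern.
        return any(fnmatch.fnmatch(event_type, p) for p in self.event_types)

    def process_notifications(self, priority, notifications):
        # Yield one processed record per matching notification.
        for msg in notifications:
            if self.interested(msg['event_type']):
                yield {'priority': priority, 'payload': msg['payload']}

ep = CpuEndpoint()
out = list(ep.process_notifications('info', [
    {'event_type': 'compute.instance.create.end', 'payload': {'id': 'x'}},
    {'event_type': 'image.upload', 'payload': {}},
]))
assert len(out) == 1  # only the matching event_type survives the filter
```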
class ceilometer.pipeline.base.Pipeline(conf, source, sink)[source]

Bases: object

Represents a coupling between a sink and a corresponding source.

default_grouping_key

Attribute to hash data on; a no-op when partitioning is disabled.

flush()[source]
get_grouping_key()[source]
publish_data(data)[source]

Publish data from pipeline.

publishers
serializer(data)[source]

Serialize data for interim transport; a no-op when partitioning is disabled.

supported(data)[source]

Attribute to filter data on; a no-op when partitioning is disabled.

exception ceilometer.pipeline.base.PipelineException(message, cfg)[source]

Bases: ceilometer.agent.ConfigException

class ceilometer.pipeline.base.PipelineManager(conf, cfg_file, transformer_manager, partition)[source]

Bases: ceilometer.agent.ConfigManagerBase

Pipeline Manager

The pipeline manager sets up pipelines according to the configuration file.
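The configuration file follows a sources/sinks layout, where each source names the sinks it feeds. A minimal sketch (the meter pattern and publisher URL are illustrative, not defaults):

```yaml
---
sources:
    - name: meter_source
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - gnocchi://
```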

NOTIFICATION_IPC = 'ceilometer_ipc'
get_interim_endpoints()[source]

Return endpoints for interim pipeline queues.

get_main_endpoints()[source]

Return endpoints for main queue.

get_main_publisher()[source]

Return the publishing context to use.

interim_publisher()[source]

Build publishing context for IPC.

pm_pipeline

Pipeline class

pm_sink

Pipeline sink class

pm_source

Pipeline source class

pm_type

Pipeline manager type.

publisher()[source]

Build publisher for pipeline publishing.

class ceilometer.pipeline.base.PipelineSource(cfg)[source]

Bases: ceilometer.agent.Source

Represents a source of samples or events.

check_sinks(sinks)[source]
class ceilometer.pipeline.base.PublishContext(pipelines)[source]

Bases: object

class ceilometer.pipeline.base.PublisherManager(conf, purpose)[source]

Bases: object

get(url)[source]
class ceilometer.pipeline.base.Sink(conf, cfg, transformer_manager, publisher_manager)[source]

Bases: object

Represents a sink for the transformation and publication of data.

Each sink config is concerned only with the transformation rules and publication conduits for data.

In effect, a sink describes a chain of handlers. The chain starts with zero or more transformers and ends with one or more publishers.

The first transformer in the chain is passed data from the corresponding source, takes some action such as deriving rate of change, performing unit conversion, or aggregating, and then passes the modified data to the next step.

The subsequent transformers, if any, handle the data similarly.

At the end of the chain, publishers publish the data. The exact publishing method depends on the publisher type: for example, data may be pushed into storage via the message bus, which provides guaranteed delivery, or sent over UDP when the data is loss-tolerant.

If no transformers are included in the chain, the publishers are passed data directly from the sink and publish it unchanged.
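The handler chain described above can be sketched in a few lines: data flows through each transformer in order, and the final result is handed to every publisher. The class and method names below are hypothetical stand-ins, not the real Sink API.

```python
class ScaleTransformer:
    """Example transformer: unit conversion by a constant factor."""

    def __init__(self, factor):
        self.factor = factor

    def handle(self, samples):
        return [s * self.factor for s in samples]

class ListPublisher:
    """Example publisher that simply records what it receives."""

    def __init__(self):
        self.received = []

    def publish(self, samples):
        self.received.extend(samples)

def run_sink(samples, transformers, publishers):
    # Pass data through each transformer in order...
    for t in transformers:
        samples = t.handle(samples)
    # ...then hand the (possibly unchanged) result to every publisher.
    for p in publishers:
        p.publish(samples)

pub = ListPublisher()
run_sink([1, 2, 3], [ScaleTransformer(2)], [pub])
assert pub.received == [2, 4, 6]

# With no transformers, publishers receive the data unchanged:
pub2 = ListPublisher()
run_sink([1, 2, 3], [], [pub2])
assert pub2.received == [1, 2, 3]
```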

static flush()[source]

Flush data after all events have been injected into the pipeline.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.