For a list and description of the configuration options used to set up the Ceilometer services, see the Telemetry section of the OpenStack Manuals Configuration Reference.
The sample configuration file for Ceilometer, named etc/ceilometer/ceilometer.conf.sample, was removed from version control after the Icehouse release. You can generate this sample configuration file by running tox -e genconfig.
Polling rules are defined by the polling.yaml file. It defines the pollsters to enable and the interval at which they should be polled. See Writing Agent Plugins for details on how to write and plug in your plugins.
Each source configuration encapsulates meter name matching, which is matched against the entry point of a pollster. It also includes the polling interval and, optionally, static resource enumeration or resource discovery.
All samples generated by polling are placed on the queue to be handled by the pipeline configuration loaded in the notification agent.
The polling definition may look like the following:
---
sources:
    - name: 'source name'
      interval: 'how often the samples should be generated'
      meters:
          - 'meter filter'
      resources:
          - 'list of resource URLs'
      discovery:
          - 'list of discoverers'
The interval parameter in the sources section should be defined in seconds. It determines the cadence of sample generation.
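For illustration, a minimal polling.yaml that polls the cpu meter every 600 seconds might look like the following (the source name is arbitrary and chosen here for the example):

```yaml
---
sources:
    - name: cpu_source
      # Samples are generated every 600 seconds (10 minutes).
      interval: 600
      meters:
          - cpu
```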
A given polling plugin is invoked by every source section whose meters parameter matches the plugin's meter name. The matching source sections are combined by union, not intersection, of the prescribed time series: the plugin is polled on each matching cadence. Meter matching works the same way as pipeline filtering.
The optional resources section of a pipeline source allows a list of static resource URLs to be configured. An amalgamated list of all statically configured resources for a set of pipeline sources with a common interval is passed to individual pollsters matching those pipelines.
The optional discovery section of a pipeline source contains the list of discoverers. These discoverers can be used to dynamically discover the resources to be polled by the pollsters defined in this pipeline. The name of the discoverers should be the same as the related names of plugins in setup.cfg.
If the resources or discovery section is not set, it defaults to an empty list. If both resources and discovery are set, the resources passed to the pollsters are the union of the dynamic resources returned by the discoverers and the static resources defined in the resources section. Any duplicates between the two lists are removed before the resources are passed to the pollsters.
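The merge described above can be sketched as follows. This is an illustrative Python snippet, not Ceilometer's actual implementation; the function name and resource URLs are hypothetical:

```python
def merge_resources(static_resources, discovered_resources):
    """Union discovered and static resources, dropping duplicates
    while preserving first-seen order (illustrative sketch)."""
    seen = set()
    merged = []
    for res in list(discovered_resources) + list(static_resources):
        if res not in seen:
            seen.add(res)
            merged.append(res)
    return merged

static = ['http://host-a/', 'http://host-b/']
discovered = ['http://host-b/', 'http://host-c/']
# host-b appears in both lists but only once in the merged result.
print(merge_resources(static, discovered))
```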
There are three ways a pollster can get a list of resources to poll, listed below in descending order of precedence:
- From the per-pipeline configured discovery and/or static resources.
- From the per-pollster default discovery.
- From the per-agent default discovery.
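The precedence order above amounts to picking the first non-empty source of resources. A minimal sketch (illustrative only; the function and parameter names are hypothetical, not Ceilometer's API):

```python
def resources_for_pollster(per_pipeline, per_pollster_default, per_agent_default):
    """Return the resource list from the highest-precedence source
    that is non-empty (illustrative sketch)."""
    for candidate in (per_pipeline, per_pollster_default, per_agent_default):
        if candidate:
            return candidate
    return []

# Pipeline-level resources win over both defaults.
print(resources_for_pollster(['http://host-a/'], ['default-1'], ['agent-1']))
```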
Pipelines describe a coupling between sources of samples and the corresponding sinks for transformation and publication of the samples.
A source is a set of samples that should be passed to specified sinks. These samples may come from polling agents or service notifications.
A sink on the other hand is a consumer of samples, providing logic for the transformation and publication of samples emitted from related sources. Each sink configuration is concerned only with the transformation rules and publication conduits for samples.
In effect, a sink describes a chain of handlers. The chain starts with zero or more transformers and ends with one or more publishers. The first transformer in the chain receives samples from the corresponding source, takes some action such as deriving a rate of change, performing unit conversion, or aggregating, and then passes the modified sample to the next step.
The chains end with one or more publishers. This component makes it possible to persist the data into storage through the message bus or to send it to one or more external consumers. One chain can contain multiple publishers, see the Pipeline Manager section.
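The handler chain described above can be pictured with a small sketch. This is illustrative Python, not Ceilometer's actual classes; the Sink class, the sample dictionary shape, and the unit-conversion transformer are all assumptions made for the example:

```python
class Sink:
    """Minimal sketch of a sink: a transformer chain ending in
    one or more publishers (illustrative, not Ceilometer code)."""

    def __init__(self, transformers, publishers):
        self.transformers = transformers
        self.publishers = publishers

    def consume(self, sample):
        # Run the sample through each transformer in order.
        for transform in self.transformers:
            sample = transform(sample)
            if sample is None:  # a transformer may swallow a sample
                return
        # Every publisher in the chain receives the final sample.
        for publish in self.publishers:
            publish(sample)

received = []
sink = Sink(
    # Hypothetical unit conversion: MB -> KB.
    transformers=[lambda s: {**s, 'volume': s['volume'] * 1024}],
    publishers=[received.append],
)
sink.consume({'name': 'disk.usage', 'volume': 2})
```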
By default, the pipeline configuration is stored in a separate configuration file, called pipeline.yaml, next to the ceilometer.conf file. The pipeline configuration file location can be set with the pipeline_cfg_file parameter in ceilometer.conf. Multiple chains can be defined in one configuration file.
The chain definition looks like the following:
---
sources:
    - name: 'source name'
      sinks:
          - 'sink name'
sinks:
    - name: 'sink name'
      transformers: 'definition of transformers'
      publishers:
          - 'list of publishers'
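As an illustration, a concrete pipeline.yaml following this template could look like the following (the source and sink names, and the choice of the notifier:// publisher, are example assumptions):

```yaml
---
sources:
    - name: meter_source
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          # Publish samples over the message bus.
          - notifier://
```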
The name parameter of a source is unrelated to anything else; nothing references a source by name, and a source’s name does not have to match anything.
There are several ways to define the list of meters for a pipeline source. The list of valid meters can be found in the Measurements section. A source can be defined to operate on all meters, or only on included or excluded meters:
The above definition methods can be used in the following combinations:
At least one of the above variations must appear in the meters section. Included and excluded meters cannot co-exist in the same pipeline, and wildcard and included meters cannot co-exist in the same pipeline definition section.
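The valid meters variations might look like the following (a sketch; the meter names are illustrative, and the "!" prefix is used here to denote exclusion):

```yaml
# Wildcard: the source operates on all meters.
meters:
    - "*"

# Included meters only.
meters:
    - cpu
    - memory.usage

# Excluded meters: everything except cpu.
meters:
    - "!cpu"
```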
The transformers section of a pipeline sink provides the possibility to add a list of transformer definitions. The names of the transformers should be the same as the names of the related extensions in setup.cfg. For a more detailed description, please see the transformers section of the Administrator Guide of Ceilometer.
The publishers section contains the list of publishers to which the sample data should be sent after any transformations. The names of the publishers should be the same as the related names of the plugins in setup.cfg.
On large workloads, multiple notification agents can be deployed to handle the flood of incoming messages from monitored services. If transformations are enabled in the pipeline, the notification agents must be coordinated to ensure related messages are routed to the same agent. To enable coordination, set the workload_partitioning value in the notification section.
To distribute messages across agents, set the pipeline_processing_queues option. This value defines how many pipeline queues to create; the queues are then distributed across the active notification agents. It is recommended that the number of processing queues at least match the number of agents.
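One way to picture the partitioning is a stable hash mapping each resource to a queue, so that related messages always land on the same queue and therefore the same agent. This is an illustrative sketch, not Ceilometer's actual partitioning scheme:

```python
import hashlib


def queue_for_resource(resource_id, num_queues):
    """Map a resource to one of num_queues pipeline queues via a
    stable hash (illustrative; Ceilometer's scheme differs in detail)."""
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return int(digest, 16) % num_queues


# The same resource always maps to the same queue, so related
# messages are handled by the same notification agent.
q1 = queue_for_resource('instance-0001', 10)
q2 = queue_for_resource('instance-0001', 10)
print(q1 == q2)
```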
Increasing the number of processing queues improves the distribution of messages across the agents. It also helps batching, which minimises requests to the Gnocchi storage backend.
Decreasing the number of processing queues may result in lost data, as previously created queues may no longer be assigned to active agents. For this reason, it is recommended that you only ever increase the number of processing queues.