Prometheus Datasource

This document describes how to configure the Prometheus datasource. Prometheus is an open-source systems monitoring and alerting toolkit. Exporters export different metrics to Prometheus, and the Alertmanager handles alerts sent by the Prometheus server.

Datasource configuration

In order to connect Vitrage and Prometheus, configuration is needed on both sides.

Step 1 - Configure Alertmanager

Vitrage receives Prometheus alerts through its event API, using basic authentication with an OpenStack administrator user. In order to configure Prometheus to send alerts to Vitrage, add a webhook receiver in Alertmanager.

Vitrage webhook receiver example

- name: 'vitrage'
  webhook_configs:
  - url: ''
    send_resolved: true
    http_config:
      basic_auth:
        username: 'admin'
        password: 'admin'

url is the URL of the Vitrage event API. It can be fetched with the openstack endpoint list command. Set send_resolved to true so that Vitrage is also notified when alerts are resolved.

Step 2 - Configure Vitrage

In /etc/vitrage/vitrage.conf, add prometheus to the list of active datasources:

types = nova.instance,aodh,static,cinder.volume,neutron.port,prometheus

Under the [prometheus] section, add the HTTP URL of the Prometheus Alertmanager alerts API and the receiver name from the previous step:

alertmanager_url = http://localhost:9093/api/v2/alerts
receiver = vitrage

Note: Both the v1 and v2 Alertmanager APIs are supported.

Step 3 - Map Prometheus alerts to Vitrage resources

A configuration file maps the Prometheus alert labels to a corresponding Vitrage resource with specific properties (id or other unique properties). The mapping will most likely be defined by the alert name and other labels. Set the location of the alerts mapping file in /etc/vitrage/vitrage.conf under the [prometheus] section:

config_file = /path/to/alert/mapping

For example

config_file = /etc/vitrage/prometheus_conf.yaml

Prometheus configuration file structure

The configuration file contains a list of alerts. Each alert contains a key and a resource.

The key contains labels which uniquely identify each alert.

The resource specifies how to identify, in Vitrage, the resource that the alert is raised on. It contains one or more Vitrage property names and the corresponding Prometheus alert labels.
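The general shape of a mapping entry can be sketched as follows (the angle-bracket placeholders are illustrative, not real label or property names):

```yaml
- key:
    <alert label>: <expected label value>
  resource:
    <vitrage property name>: <alert label to take the value from>
```

An alert matches an entry when all the labels listed under key appear in the alert with the given values; the resource section then tells Vitrage which alert labels hold the identifying property values.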

For example, consider the following Prometheus alert:

  "status": "firing",
  "version": "4",
  "groupLabels": {
    "alertname": "HighCpuOnVmAlert"
  "commonAnnotations": {
    "description": "Test alert to test libvirt exporter.\n",
    "title": "High cpu usage on vm"
  "groupKey": "{}:{alertname=\"HighCpuOnVmAlert\"}",
  "receiver": "vitrage",
  "externalURL": "",
  "alerts": [
      "status": "firing",
      "labels": {
        "instance": "",
        "domain": "instance-00000004",
        "job": "libvirt",
        "alertname": "HighCpuOnVmAlert",
        "severity": "critical"
      "endsAt": "2019-01-16T12:26:05.91446215Z",
      "generatorURL": "",
      "startsAt": "2019-01-16T12:11:50.91446215Z",
      "annotations": {
        "description": "Test alert to test libvirt exporter.\n",
        "title": "High cpu usage on vm"
  "commonLabels": {
    "instance": "",
    "job": "libvirt",
    "severity": "critical",
    "alertname": "HighCpuOnVmAlert"

The alerts mapping file looks like this:

- key:
    alertname: HighCpuOnVmAlert
    job: libvirt
  resource:
    instance_name: domain
    host_id: instance

This is an example of an alert generated from libvirt metrics. The alert is on a virtual machine with the libvirt name 'instance-00000004', running on top of a host with the ip ''. The alert is identified by its name, taken from the alertname label, and by its job label; therefore the key in the mapping file contains those two labels and their values. This alert will be mapped to a resource with the following properties:

instance_name: "instance-00000004"
host_id: ""
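The resolution above can be sketched in a few lines of Python. This is a minimal illustration of the matching logic, not the actual Vitrage implementation; the names MAPPING and resolve_resource are invented for this sketch.

```python
# Mapping entries as they would be parsed from prometheus_conf.yaml.
MAPPING = [
    {
        "key": {"alertname": "HighCpuOnVmAlert", "job": "libvirt"},
        "resource": {"instance_name": "domain", "host_id": "instance"},
    },
]

def resolve_resource(alert_labels, mapping):
    """Return Vitrage resource properties for the first mapping entry
    whose key labels all match the alert's labels, or None."""
    for entry in mapping:
        if all(alert_labels.get(k) == v for k, v in entry["key"].items()):
            # Each Vitrage property takes its value from the
            # corresponding Prometheus alert label.
            return {prop: alert_labels.get(label)
                    for prop, label in entry["resource"].items()}
    return None  # no mapping entry matched this alert

# Labels of the example alert above (the instance label is elided here,
# as in the alert payload).
labels = {
    "instance": "",
    "domain": "instance-00000004",
    "job": "libvirt",
    "alertname": "HighCpuOnVmAlert",
    "severity": "critical",
}
print(resolve_resource(labels, MAPPING))
# → {'instance_name': 'instance-00000004', 'host_id': ''}
```

The key acts as a filter on the alert's labels, while the resource section renames label values into Vitrage properties.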