os_performance_tools.counters2statsd module

class os_performance_tools.counters2statsd.AttachmentResult

Bases: StreamResult

Keeps track of top-level results that StreamToDict drops.

We use a SpooledTemporaryFile so that smaller files stay fast in memory while we avoid consuming large amounts of RAM. Anything over 1 MB is spooled out to disk.
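A minimal sketch of the spooling behaviour described above, using the standard library directly (the 1 MB threshold is taken from the text; the variable names are illustrative, not from this module):

```python
import tempfile

# Small writes stay in RAM; once the total exceeds max_size, the buffer
# is transparently rolled over to a real temporary file on disk.
buf = tempfile.SpooledTemporaryFile(max_size=1024 * 1024)
buf.write(b"x" * 100)                 # well under 1 MB: held in memory
buf.write(b"y" * (2 * 1024 * 1024))   # pushes past the limit: spooled to disk
buf.seek(0)
data = buf.read()
buf.close()
```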

classmethod enabled()
status(test_id=None, test_status=None, test_tags=None, runnable=True, file_name=None, file_bytes=None, eof=False, mime_type=None, route_code=None, timestamp=None)

Inform the result about a test status.

Parameters:
  • test_id – The test whose status is being reported. None to report status about the test run as a whole.

  • test_status

    The status for the test. There are two sorts of status - interim and final status events. Any number of interim events may be generated, but only one final event. After a final status event, any further file or status events from the same test_id+route_code may be discarded or associated with a new test by the StreamResult. (But no exception will be thrown).

    Interim states:
    • None - no particular status is being reported, or status being reported is not associated with a test (e.g. when reporting on stdout / stderr chatter).

    • inprogress - the test is currently running. Emitted by tests when they start running and at any intermediary point they might choose to indicate their continual operation.

    Final states:
    • exists - the test exists. This is used when a test is not being executed. Typically this is when querying what tests could be run in a test run (which is useful for selecting tests to run).

    • xfail - the test failed but that was expected. This is purely informative - the test is not considered to be a failure.

    • uxsuccess - the test passed but was expected to fail. The test will be considered a failure.

    • success - the test has finished without error.

    • fail - the test failed (or errored). The test will be considered a failure.

    • skip - the test was selected to run but chose to be skipped, e.g. a test dependency was missing. This is purely informative - the test is not considered to be a failure.

  • test_tags – Optional set of tags to apply to the test. Tags have no intrinsic meaning - that is up to the test author.

  • runnable – Allows status reports to mark that they are for tests which are not able to be explicitly run. For instance, subtests will report themselves as non-runnable.

  • file_name – The name for the file_bytes. Any unicode string may be used. While there is no semantic value attached to the name of any attachment, the names ‘stdout’ and ‘stderr’ and ‘traceback’ are recommended for use only for output sent to stdout, stderr and tracebacks of exceptions. When file_name is supplied, file_bytes must be a bytes instance.

  • file_bytes – A bytes object containing content for the named file. This can be just a single chunk of the file; more can be emitted in later file events. Must be None unless a file_name is supplied.

  • eof – True if this chunk is the last chunk of the file; any additional chunks with the same name should be treated as an error and discarded. Ignored unless file_name has been supplied.

  • mime_type – An optional MIME type for the file. stdout and stderr will generally be “text/plain; charset=utf8”. If None, defaults to application/octet-stream. Ignored unless file_name has been supplied.
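The event sequence described above - interim events, optional file chunks, then exactly one final event - can be sketched against a minimal stand-in result. `RecordingResult` below is illustrative only; it mirrors the documented status() signature but is not the real AttachmentResult:

```python
class RecordingResult:
    """Minimal stand-in for a StreamResult that records status() calls."""

    def __init__(self):
        self.events = []

    def status(self, test_id=None, test_status=None, test_tags=None,
               runnable=True, file_name=None, file_bytes=None, eof=False,
               mime_type=None, route_code=None, timestamp=None):
        self.events.append((test_id, test_status, file_name, eof))


result = RecordingResult()
# One interim event, one file chunk, then exactly one final event.
result.status(test_id='test_foo', test_status='inprogress')
result.status(test_id='test_foo', file_name='stdout', file_bytes=b'hello\n',
              mime_type='text/plain; charset=utf8', eof=True)
result.status(test_id='test_foo', test_status='success')
```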

stopTestRun()

Stop a test run.

This informs the result that no more test updates will be received. At this point any test ids that have started and not completed can be considered failed-or-hung.

class os_performance_tools.counters2statsd.Pipeline(pipeline, dynamic_prefix=None)

Bases: object

Wrapper for statsd.Pipeline

statsd’s API doesn’t say whether the prefix can be changed on the fly, so we assume it cannot be and provide a wrapper that handles the prefix itself.
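The wrapper idea can be sketched as follows. This is an illustrative reimplementation, not the module's actual code: rather than mutating the underlying pipeline's prefix, it prepends a dynamic prefix to each metric name before delegating. Here `pipeline` can be any object with incr/timing/send methods:

```python
class PrefixingPipeline:
    """Sketch of a pipeline wrapper that applies a dynamic prefix."""

    def __init__(self, pipeline, dynamic_prefix=None):
        self._pipeline = pipeline
        self._prefix = dynamic_prefix

    def _name(self, stat):
        # Prepend the dynamic prefix, if one was supplied.
        if self._prefix:
            return '%s.%s' % (self._prefix, stat)
        return stat

    def incr(self, stat, *args, **kwargs):
        return self._pipeline.incr(self._name(stat), *args, **kwargs)

    def timing(self, stat, *args, **kwargs):
        return self._pipeline.timing(self._name(stat), *args, **kwargs)

    def send(self):
        return self._pipeline.send()
```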

incr(*args, **kwargs)
send()
timing(*args, **kwargs)
os_performance_tools.counters2statsd.get_statsd_client()