Database Layer

This section contains common information useful for developers who need to make database changes or execute queries using the oslo.db API.

Difference between ‘default’ and ‘server_default’ parameters for columns

For columns it is possible to set ‘default’ or ‘server_default’. What is the difference between them and why should they be used?

The explanation is quite simple:

  • default - the default value that SQLAlchemy will specify in queries for creating instances of a given model;

  • server_default - the default value for a column that SQLAlchemy will specify in DDL.

Summarizing, ‘default’ is useless in migrations and only ‘server_default’ should be used. To keep migrations in sync with the models, the ‘server_default’ parameter should also be added to the model. If a default value in the database is not needed, ‘server_default’ should not be used. The declarative approach can be bypassed (i.e. ‘default’ may be omitted in the model) if the default is enforced through business logic.
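To make the distinction concrete, here is a minimal sketch using Python's sqlite3 module (not SQLAlchemy itself; table and column names are illustrative). It contrasts a database-side DEFAULT clause in the DDL (what ‘server_default’ produces) with the absence of one (what you get when only a client-side ‘default’ is set on the model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 'server_default' ends up in the DDL: the database itself fills in the
# value when an INSERT omits the column.
cur.execute("CREATE TABLE ports "
            "(id INTEGER PRIMARY KEY, admin_state_up INTEGER DEFAULT 1)")
cur.execute("INSERT INTO ports (id) VALUES (1)")

# 'default' is applied client-side: SQLAlchemy would inject the value
# into the INSERT statement it generates. The DDL carries no DEFAULT
# clause, so a raw INSERT that omits the column stores NULL.
cur.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, shared INTEGER)")
cur.execute("INSERT INTO networks (id) VALUES (1)")

print(cur.execute("SELECT admin_state_up FROM ports").fetchone()[0])  # 1
print(cur.execute("SELECT shared FROM networks").fetchone()[0])       # None
```

This is why a migration that relies on ‘default’ alone leaves the database schema without any default: only ‘server_default’ changes the DDL.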

Database migrations

For details on the neutron-db-manage wrapper and alembic migrations, see Alembic Migrations.

Tests to verify that database migrations and models are in sync

class neutron.tests.functional.db.test_migrations._TestModelsMigrations

Test for checking that the models state and the migrations are in sync.

For the opportunistic testing you need to set up a db named ‘openstack_citest’ with user ‘openstack_citest’ and password ‘openstack_citest’ on localhost. The test will then use that db and user/password combo to run the tests.

For PostgreSQL on Ubuntu this can be done with the following commands:

sudo -u postgres psql
postgres=# create user openstack_citest with createdb login password 'openstack_citest';
postgres=# create database openstack_citest with owner openstack_citest;

For MySQL on Ubuntu this can be done with the following commands:

mysql -u root
>create database openstack_citest;
>grant all privileges on openstack_citest.* to openstack_citest@localhost identified by 'openstack_citest';

Output is a list that contains information about differences between db and models. Output example:

  [('add_table',
    Table('bat', MetaData(bind=None),
          Column('info', String(), table=<bat>), schema=None)),
   ('remove_table',
    Table(u'bar', MetaData(bind=None),
          Column(u'data', VARCHAR(), table=<bar>), schema=None)),
   ('add_column',
    None,
    'foo',
    Column('data', Integer(), table=<foo>)),
   ('remove_column',
    None,
    'foo',
    Column(u'old_data', VARCHAR(), table=None)),
   [('modify_nullable',
     None,
     'foo',
     u'x',
     {'existing_server_default': None,
      'existing_type': INTEGER()},
     True,
     False)]]
  • remove_* means that there is extra table/column/constraint in db;

  • add_* means that it is missing in db;

  • modify_* means that a column in the db has a wrong type/nullable/server_default set. The element contains the following information:

    • what should be modified,

    • schema,

    • table,

    • column,

    • existing correct column parameters,

    • right value,

    • wrong value.
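As an illustration, one such modify_* element could be unpacked as follows (the values are hypothetical; the field order follows the list above):

```python
# Hypothetical 'modify_nullable' difference element, following the
# field order described in the list above.
diff_element = (
    'modify_nullable',                  # what should be modified
    None,                               # schema
    'foo',                              # table
    'x',                                # column
    {'existing_server_default': None,   # existing correct column
     'existing_type': 'INTEGER'},       # parameters
    True,                               # right value (from the model)
    False,                              # wrong value (found in the db)
)

op, schema, table, column, existing, right, wrong = diff_element
print("%s: %s.%s should have nullable=%s, not %s"
      % (op, table, column, right, wrong))
```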

This class also contains tests for branches, e.g. that the correct operations are used in the contract and expand branches.


db_sync(engine)

Run migration scripts with the given engine instance.

This method must be implemented in subclasses and run migration scripts for a DB the given engine is connected to.


filter_metadata_diff(diff)

Filter changes before assert in test_models_sync().

Allow subclasses to whitelist/blacklist changes. By default, no filtering is performed, changes are returned as is.


Parameters: diff – a list of differences (see compare_metadata() docs for details on format)


Returns: a list of differences


get_engine()

Return the engine instance to be used when running tests.

This method must be implemented in subclasses and return an engine instance to be used when running tests.


get_metadata()

Return the metadata instance to be used for schema comparison.

This method must be implemented in subclasses and return the metadata instance attached to the BASE model.

include_object(object_, name, type_, reflected, compare_to)

Return True for objects that should be compared.

  • object – a SchemaItem object such as a Table or Column object

  • name – the name of the object

  • type – a string describing the type of object (e.g. “table”)

  • reflected – True if the given object was produced based on table reflection, False if it’s from a local MetaData object

  • compare_to – the object being compared against, if available, else None
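For instance, a subclass could override include_object() to skip tables that are intentionally absent from the models (a minimal sketch; the table names are illustrative):

```python
# Tables that exist in the database but are deliberately not part of
# the models under test (names are illustrative).
EXTERNAL_TABLES = {'alembic_version'}

def include_object(object_, name, type_, reflected, compare_to):
    # Skip known external tables; compare everything else.
    if type_ == 'table' and name in EXTERNAL_TABLES:
        return False
    return True

print(include_object(None, 'alembic_version', 'table', True, None))  # False
print(include_object(None, 'ports', 'table', True, None))            # True
```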

The Standard Attribute Table

There are many attributes that we would like to store in the database which are common across many Neutron objects (e.g. tags, timestamps, rbac entries). We have previously been handling this by duplicating the schema to every table via model mixins. This means that a DB migration is required for each object that wants to adopt one of these common attributes. This becomes even more cumbersome when the relationship between the attribute and the object is many-to-one because each object then needs its own table for the attributes (assuming referential integrity is a concern).

To address this issue, the ‘standardattribute’ table is available. Any model can add support for this table by inheriting the ‘HasStandardAttributes’ mixin in neutron.db.standard_attr. This mixin will add a standard_attr_id BigInteger column to the model with a foreign key relationship to the ‘standardattribute’ table. The model will then be able to access any columns of the ‘standardattribute’ table and any tables related to it.

A model that inherits HasStandardAttributes must implement the property ‘api_collections’, which is a list of API resources that the new object may appear under. In most cases, this will only be one (e.g. ‘ports’ for the Port model). This is used by all of the service plugins that add standard attribute fields to determine which API responses need to be populated.

A model that supports the tag mechanism must implement the property ‘collection_resource_map’, which is a dict mapping ‘collection_name’ to ‘resource_name’ for API resources. The model must also set ‘tag_support’ to True.
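The class-level contract described above can be sketched in plain Python (the real mixin lives in neutron.db.standard_attr and builds the actual column and relationship; the stub mixin and the ‘Widget’ model below are purely illustrative):

```python
class HasStandardAttributes:
    """Stub that only checks the class-level contract of the real mixin."""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Every model adopting the mixin must declare its API resources.
        if not getattr(cls, 'api_collections', None):
            raise TypeError("%s must define 'api_collections'" % cls.__name__)
        # Tag support additionally requires the collection/resource map.
        if getattr(cls, 'tag_support', False) and \
                not getattr(cls, 'collection_resource_map', None):
            raise TypeError("%s must define 'collection_resource_map' "
                            "to support tags" % cls.__name__)


class Widget(HasStandardAttributes):
    api_collections = ['widgets']                    # API resources
    collection_resource_map = {'widgets': 'widget'}  # needed for tags
    tag_support = True


print(Widget.api_collections)  # ['widgets']
```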

The introduction of a new standard attribute only requires one column addition to the ‘standardattribute’ table for one-to-one relationships or a new table for one-to-many or one-to-zero relationships. Then all of the models using the ‘HasStandardAttributes’ mixin will automatically gain access to the new attribute.

Any attributes that will apply to every neutron resource (e.g. timestamps) can be added directly to the ‘standardattribute’ table. For things that will frequently be NULL for most entries (e.g. a column to store an error reason), a new table should be added and joined to in a query to prevent a bunch of NULL entries in the database.

Session handling

The main reference is the oslo.db Usage documentation, which provides an initial picture of how to use oslo.db in Neutron. Any request call to the Neutron server API must have a “neutron_context” parameter, which is an instance of Context. This context holds a sqlalchemy.orm.session.Session instance that “manages persistence operations for ORM-mapped objects” (from the SQLAlchemy documentation). A Session establishes all conversations with the database and represents a “holding zone” for all loaded or associated objects during its lifespan.

A Session instance establishes a transaction to the database using the defined Engine. This transaction represents an SQL transaction that is “a logical unit of work that contains one or more SQL statements”. Regardless of the number of statements this transaction may have, the execution is atomic; if the transaction fails, any previous SQL statement already executed that implies a change in the database is undone (rollback).

Database transactions

Any Neutron database operation, regardless of the type and the amount, should be executed inside a transaction. There are two types of transactions:

  • Reader: for reading operations.

  • Writer: for any operation that implies a change in the database, like a register creation, modification or deletion.

The neutron-lib library provides an API wrapper for the oslo.db operations. The CONTEXT_READER and CONTEXT_WRITER context managers can be used both as decorators or context managers. For example:

from neutron_lib.db import api as db_api
from neutron.db import models_v2

def get_ports(context, network_id):
    with db_api.CONTEXT_READER.using(context):
        query = context.session.query(models_v2.Port)
        query = query.filter(models_v2.Port.network_id == network_id)
        return query.all()

def delete_port(context, port_id):
    with db_api.CONTEXT_WRITER.using(context):
        query = context.session.query(models_v2.Port)
        query = query.filter(models_v2.Port.id == port_id)
        query.delete()

The transaction contexts can be nested. For example, if a decorated method is called from inside a context, the current transaction is preserved. There is only one exception to this rule: a reader context cannot be upgraded to a writer. That means that inside a reader context it is not possible to start a writer context; the following exception will be raised:

TypeError: Can't upgrade a READER transaction to a WRITER mid-transaction
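The nesting rule can be mimicked with a toy model (the real logic lives in oslo.db's enginefacade; this sketch only reproduces the behavior described above):

```python
import contextlib

class Context:
    """Toy stand-in for a request context carrying a transaction mode."""
    mode = None

@contextlib.contextmanager
def context_reader(ctx):
    # Nested readers (or a reader inside a writer) are allowed.
    prev, ctx.mode = ctx.mode, ctx.mode or 'reader'
    try:
        yield
    finally:
        ctx.mode = prev

@contextlib.contextmanager
def context_writer(ctx):
    # A reader transaction cannot be upgraded to a writer.
    if ctx.mode == 'reader':
        raise TypeError("Can't upgrade a READER transaction to a "
                        "WRITER mid-transaction")
    prev, ctx.mode = ctx.mode, 'writer'
    try:
        yield
    finally:
        ctx.mode = prev

ctx = Context()
with context_reader(ctx):
    with context_reader(ctx):   # nesting preserves the transaction
        pass
    try:
        with context_writer(ctx):  # reader -> writer is forbidden
            pass
    except TypeError as exc:
        print(exc)
```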

Another consideration to take into account when implementing or reviewing new code is that, as mentioned before, a transaction is an atomic operation on the database. If the database layer (SQLAlchemy, oslo.db) raises a database exception, the current active transaction should end. In other words, we can catch the raised exception if needed and retry any needed operation, but any further database command should be executed in a new context. This allows the context wrapper (writer, reader) to properly finish the operation, for example rolling back the already executed commands. Check the patch as an example of how to handle database exceptions.

Retry decorators

This is an appendix for Retrying Operations

This is also related to the previous section. The neutron-lib library provides a decorator called retry_if_session_inactive that can be used to retry any method if the context session is not active; in other words, if there is no active transaction when the method is called. The session is retrieved from the “context” parameter passed into the method (the method signature must include this parameter).

This retry decorator can be used along with a transaction decorator, but the retry decorator must be declared before the context one. If we first declare the database context (writer or reader) and then the retry decorator, the retry decorator would always be called from inside an active transaction, making it useless. An example of a good implementation (first the retry decorator and then the reader one):

@db_api.retry_if_session_inactive()
@db_api.CONTEXT_READER
def get_ports_count(self, context, filters=None):
    return self._get_ports_query(context, filters).count()