keystone.common.sql package

keystone.common.sql.core module

SQL backends for the various services.

Before using this module, call initialize(). This has to be done before CONF() because it sets up configuration options.

class keystone.common.sql.core.DateTimeInt(*args, **kwargs)[source]

Bases: sqlalchemy.sql.type_api.TypeDecorator

A column that automatically converts a datetime object to an Int.

Keystone relies on accurate (sub-second) datetime objects. Some RDBMS backends drop sub-second accuracy (e.g. some versions of MySQL). This field automatically converts the value to an INT when storing the data and back to a datetime object when it is loaded from the database.

NOTE: Any datetime object that has timezone data will be converted to UTC.
Any datetime object that has no timezone data will be assumed to be UTC and loaded from the DB as such.
epoch = datetime.datetime(1970, 1, 1, 0, 0, tzinfo=<UTC>)

impl

alias of BigInteger

process_bind_param(value, dialect)[source]
process_result_value(value, dialect)[source]
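The conversion that `process_bind_param` and `process_result_value` perform can be sketched without SQLAlchemy as a pair of plain helper functions (the helper names are illustrative, not part of the class):

```python
import datetime

# EPOCH matches the class's epoch attribute: midnight 1 Jan 1970, UTC.
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def datetime_to_int(value):
    """Bind side: convert a datetime to microseconds since the epoch."""
    if value is None:
        return None
    if value.tzinfo is None:
        # Naive datetimes are assumed to already be UTC.
        value = value.replace(tzinfo=datetime.timezone.utc)
    # Integer microseconds survive backends that truncate DATETIME
    # columns to whole seconds (e.g. some versions of MySQL).
    return (value - EPOCH) // datetime.timedelta(microseconds=1)

def int_to_datetime(value):
    """Result side: convert the stored integer back to a UTC datetime."""
    if value is None:
        return None
    return EPOCH + datetime.timedelta(microseconds=value)
```

Note that a naive datetime round-trips to an equivalent timezone-aware UTC datetime, matching the behaviour described in the note above.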
class keystone.common.sql.core.JsonBlob(*args, **kwargs)[source]

Bases: sqlalchemy.sql.type_api.TypeDecorator


impl

alias of Text

process_bind_param(value, dialect)[source]
process_result_value(value, dialect)[source]
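JsonBlob stores arbitrary Python structures as JSON text, serializing on bind and parsing on load. The logic can be sketched without SQLAlchemy as two plain functions (illustrative names, not the real methods):

```python
import json

def json_bind_param(value):
    """Bind side: serialize a Python structure to JSON text."""
    return json.dumps(value)

def json_result_value(value):
    """Result side: parse stored JSON text back into Python data."""
    if value is None:
        return None
    return json.loads(value)
```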
class keystone.common.sql.core.ModelDictMixin[source]

Bases: oslo_db.sqlalchemy.models.ModelBase

classmethod from_dict(d)[source]

Return a model instance from a dictionary.


to_dict()[source]

Return the model’s attributes as a dictionary.
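The mixin's round-trip behaviour can be sketched as a small stand-in class (illustrative only; the real mixin subclasses oslo.db's ModelBase, and `Project` here is a hypothetical model):

```python
class ModelDictMixinSketch:
    """Illustrative stand-in for ModelDictMixin (not the real class)."""

    # Subclasses list the column names they expose.
    attributes = []

    @classmethod
    def from_dict(cls, d):
        """Return a model instance built from a dictionary."""
        # Keys not listed in `attributes` are ignored.
        return cls(**{k: v for k, v in d.items() if k in cls.attributes})

    def to_dict(self):
        """Return the model's attributes as a dictionary."""
        return {name: getattr(self, name) for name in self.attributes}

class Project(ModelDictMixinSketch):
    attributes = ['id', 'name']

    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name
```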

class keystone.common.sql.core.ModelDictMixinWithExtras[source]

Bases: oslo_db.sqlalchemy.models.ModelBase

Mixin that gives a model a dict-like interface, including the legacy ‘extra’ column.

NOTE: DO NOT USE THIS FOR FUTURE SQL MODELS. “Extra” column is a legacy
concept that should not be carried forward with new SQL models as the concept of “arbitrary” properties is not in line with the design philosophy of Keystone.
attributes = []
classmethod from_dict(d)[source]

Return a model instance from a dictionary.

to_dict(include_extra_dict=False)[source]

Return the model’s attributes as a dictionary.

If include_extra_dict is True, ‘extra’ attributes are literally included in the resulting dictionary twice, for backwards-compatibility with a broken implementation.
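How the ‘extra’ column catches unknown keys, and the double-inclusion quirk of include_extra_dict, can be sketched with a toy class (illustrative only, not the real mixin):

```python
class WithExtrasSketch:
    """Illustrative model with a legacy 'extra' catch-all column."""

    attributes = ['id']

    def __init__(self, id=None, extra=None):
        self.id = id
        self.extra = extra or {}

    @classmethod
    def from_dict(cls, d):
        """Known columns become attributes; everything else is swept
        into the catch-all 'extra' blob."""
        d = dict(d)
        known = {k: d.pop(k) for k in list(d) if k in cls.attributes}
        return cls(extra=d, **known)

    def to_dict(self, include_extra_dict=False):
        d = {'id': self.id}
        d.update(self.extra)
        if include_extra_dict:
            # Backwards-compatible quirk: extras appear twice, both
            # flattened into the dict and nested under 'extra'.
            d['extra'] = dict(self.extra)
        return d
```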

keystone.common.sql.core.filter_limit_query(model, query, hints)[source]

Apply filtering and limit to a query.

Parameters:
  • model – table model
  • query – query to apply filters to
  • hints – contains the list of filters and limit details. This may be None, indicating that there are no filters or limits to be applied. If it’s not None, then any filters satisfied here will be removed so that the caller will know if any filters remain.

Returns:query updated with any filters and limits satisfied
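The contract that satisfied filters are consumed from the hints can be sketched over in-memory rows instead of a SQLAlchemy query (both `Hints` and `filter_limit_rows` here are toy stand-ins, not keystone's real classes):

```python
class Hints:
    """Toy stand-in for the hints object: a list of (name, value)
    equality filters plus an optional limit."""

    def __init__(self, filters=None, limit=None):
        self.filters = list(filters or [])
        self.limit = limit

def filter_limit_rows(rows, hints):
    """Apply hint filters and a limit to in-memory rows.

    Each satisfied filter is removed from hints so the caller can
    tell whether any filters remain unapplied, mirroring the
    contract described above.
    """
    if hints is None:
        return rows
    for name, value in list(hints.filters):
        rows = [r for r in rows if r.get(name) == value]
        hints.filters.remove((name, value))
    if hints.limit is not None:
        rows = rows[:hints.limit]
    return rows
```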


keystone.common.sql.core.handle_conflicts(conflict_type='object')[source]

Convert select sqlalchemy exceptions into HTTP 409 Conflict.


keystone.common.sql.core.initialize()[source]

Initialize the module.


Ensure that the lengths of string fields do not exceed their limits.

This decorator checks the initialization arguments to make sure that no string field exceeds its length limit, raising a ‘StringLengthExceeded’ exception otherwise.

A decorator is used instead of inheritance because the metaclass checks the __tablename__, primary key columns, etc. at class definition time.
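A minimal sketch of such a decorator, assuming a hypothetical per-field limit table (keystone derives the real limits from the column definitions, and the names here are illustrative):

```python
import functools

class StringLengthExceeded(Exception):
    """Raised when a string argument is longer than its column allows."""

# Hypothetical per-field limits; not keystone's real configuration.
FIELD_LIMITS = {'name': 64}

def check_string_lengths(init):
    """Decorator for __init__ that rejects over-long string kwargs."""
    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        for field, limit in FIELD_LIMITS.items():
            value = kwargs.get(field)
            if isinstance(value, str) and len(value) > limit:
                raise StringLengthExceeded(
                    '%s exceeds %d characters' % (field, limit))
        return init(self, *args, **kwargs)
    return wrapper

class User:
    @check_string_lengths
    def __init__(self, name=None):
        self.name = name
```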


keystone.common.sql.upgrades module

class keystone.common.sql.upgrades.Repository(engine, repo_name)[source]

Bases: object

upgrade(version=None, current_schema=None)[source]

keystone.common.sql.upgrades.contract_schema()[source]

Contract the database.

This is run manually by the keystone-manage command once the keystone nodes have been upgraded to the latest release and will remove any old tables/columns that are no longer required.


keystone.common.sql.upgrades.expand_schema()[source]

Expand the database schema ahead of data migration.

This is run manually by the keystone-manage command before the first keystone node is migrated to the latest release.


keystone.common.sql.upgrades.find_repo(repo_name)[source]

Return the absolute path to the named repository.

keystone.common.sql.upgrades.get_constraints_names(table, column_name)[source]

keystone.common.sql.upgrades.get_init_version(abs_path=None)[source]

Get the initial version of a migrate repository.

Parameters:abs_path – Absolute path to migrate repository.
Returns:initial version number or None, if DB is empty.

keystone.common.sql.upgrades.migrate_data()[source]

Migrate data to match the new schema.

This is run manually by the keystone-manage command once the keystone schema has been expanded for the new release.


keystone.common.sql.upgrades.offline_sync_database_to_version(version=None)[source]

Perform an off-line sync of the database.

Migrate the database up to the latest version, doing the equivalent of the cycle of --expand, --migrate and --contract, for when an offline upgrade is being performed.

If a version is specified then only migrate the database up to that version. Downgrading is not supported. If a version is specified, then only the main database migration is carried out - and the expand, migrate and contract phases will NOT be run.

keystone.common.sql.upgrades.rename_tables_with_constraints(renames, constraints, engine)[source]

Rename tables with foreign key constraints.

Tables are renamed after first removing constraints. The constraints are replaced after the rename is complete.

This works on databases that don’t support renaming tables that have constraints on them (DB2).

renames is a dict, mapping {‘to_table_name’: from_table, …}
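The drop/rename/re-create ordering can be illustrated on a toy schema model (a plain dict standing in for the real database; this is not how the function manipulates SQLAlchemy metadata):

```python
def rename_tables(schema, renames, constraints):
    """Toy model of the rename dance: drop constraints, rename the
    tables, then re-create the constraints - the order required by
    backends (e.g. DB2) that cannot rename constrained tables."""
    # 1. Drop every foreign key constraint first.
    for c in constraints:
        schema['constraints'].discard(c)
    # 2. Rename: the mapping is {'to_table_name': from_table_name}.
    for to_name, from_name in renames.items():
        schema['tables'].remove(from_name)
        schema['tables'].add(to_name)
    # 3. Re-create the constraints once the renames are done.
    for c in constraints:
        schema['constraints'].add(c)
    return schema
```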

keystone.common.sql.upgrades.validate_upgrade_order(repo_name, target_repo_version=None)[source]

Validate the state of the migration repositories.

This is run before allowing the db_sync command to execute. It ensures that the upgrade step and version specified by the operator remain consistent with the upgrade process, i.e. expand’s version is greater than or equal to migrate’s, and migrate’s version is greater than or equal to contract’s.

Parameters:
  • repo_name – The name of the repository that the user is trying to upgrade.
  • target_repo_version – The version to upgrade the repo to. If not specified, the repo will be upgraded to the latest version available.
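The ordering invariant (expand ≥ migrate ≥ contract) can be sketched with a small checker; `versions` here is a hypothetical stand-in for querying the real migrate repositories:

```python
PHASES = ('expand', 'migrate', 'contract')  # required upgrade order

def check_upgrade_order(versions, repo_name, target_version=None):
    """Verify that every earlier phase is already at or beyond the
    version repo_name is about to be upgraded to."""
    if target_version is None:
        # Illustrative default: "latest" is one past the current version.
        target_version = versions[repo_name] + 1
    for earlier in PHASES[:PHASES.index(repo_name)]:
        if versions[earlier] < target_version:
            raise RuntimeError(
                '%s must be upgraded to %d before %s' %
                (earlier, target_version, repo_name))
```

For example, with expand at version 5 and migrate at 4, migrating to 5 is allowed, but contracting to 5 is rejected because migrate has not caught up.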

Module contents

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.