cinder.manager module

Base Manager class.

Managers are responsible for a certain aspect of the system. A manager is a logical grouping of code relating to a portion of the system. In general, other components should use the manager to make changes to the components it is responsible for.

For example, other components that need to deal with volumes in some way should do so by calling methods on the VolumeManager instead of directly changing fields in the database. This allows us to keep all of the code relating to volumes in the same place.

We have adopted a basic strategy of "smart managers and dumb data": rather than attaching methods to data objects, components should call manager methods that act on the data.
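
As a rough illustration of this pattern (the class and method names here are hypothetical, not the actual VolumeManager API), callers invoke a manager method rather than reaching into the database or the data object themselves:

    # Hypothetical sketch of "smart manager, dumb data".
    class ExampleVolumeManager:
        def __init__(self, db):
            self.db = db

        def rename_volume(self, context, volume_id, new_name):
            # All volume-related logic lives in the manager; the data
            # record itself carries no behavior.
            self.db.volume_update(context, volume_id,
                                  {'display_name': new_name})

    # Callers ask the manager to act on the data:
    #     volume_manager.rename_volume(context, volume_id, 'backups')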

Methods on managers that can be executed locally should be called directly. If a particular method must execute on a remote host, this should be done via RPC to the service that wraps the manager.
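
As a simplified sketch of that split (not the actual cinder RPC plumbing), a caller might route a request like this, assuming an oslo.messaging-style RPC client:

    # Hypothetical routing helper; the real code lives in the *rpcapi modules.
    def delete_volume(context, volume, manager, rpc_client, local_host):
        if volume.host == local_host:
            # The method can run here, so call the manager directly.
            manager.delete_volume(context, volume)
        else:
            # The method must run on another host, so send it over RPC to
            # the service that wraps the manager there.
            rpc_client.prepare(server=volume.host).cast(
                context, 'delete_volume', volume=volume)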

Managers should be responsible for most of the database access and for non-implementation-specific data. Anything implementation-specific that can't be generalized should be handled by the driver.
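
A minimal sketch of that division of labor (names are illustrative) might look like:

    # The manager owns DB access and generic state handling; the driver
    # does the backend-specific work.
    class ExampleManager:
        def __init__(self, db, driver):
            self.db = db
            self.driver = driver

        def create_volume(self, context, volume):
            self.db.volume_update(context, volume.id, {'status': 'creating'})
            self.driver.create_volume(volume)  # implementation-specific
            self.db.volume_update(context, volume.id, {'status': 'available'})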

In general, we prefer to have one manager with multiple drivers for different implementations, but sometimes it makes sense to have multiple managers. You can think of it this way: abstract different overall strategies at the manager level (FlatNetwork vs. VlanNetwork) and different implementations at the driver level (LinuxNetDriver vs. CiscoNetDriver).
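
In practice this usually means one manager class that instantiates whichever driver class is configured, roughly like the following sketch (the real volume manager's option handling is more involved):

    # Illustrative driver loading; cinder configures the real driver
    # class name through cinder.conf.
    from oslo_utils import importutils

    class ExampleManager:
        def __init__(self, driver_class_name):
            # e.g. an LVM driver vs. a vendor driver; the manager-level
            # strategy stays the same regardless of the implementation.
            self.driver = importutils.import_object(driver_class_name)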

Managers will often provide methods for the initial setup of a host or for periodic tasks to a wrapping service.
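
For instance, a subclass might expose a recurring task to the wrapping service along these lines (a sketch assuming the oslo.service periodic_task decorator, which the PeriodicTasks base class below builds on):

    # Hedged sketch; the method body is a placeholder.
    from oslo_service import periodic_task

    from cinder import manager

    class ExampleManager(manager.Manager):
        @periodic_task.periodic_task(spacing=60)
        def _publish_stats(self, context):
            # Invoked periodically by the wrapping service.
            pass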

This module provides Manager, a base class for managers.

class CleanableManager

Bases: object

do_cleanup(context: RequestContext, cleanup_request: CleanupRequest) → None
init_host(service_id, added_to_cluster=None, **kwargs)
class Manager(host: HostAddress | None = None, cluster=None, **_kwargs)

Bases: Base, PeriodicTasks

RPC_API_VERSION = '1.0'
get_log_levels(context, log_request)
init_host(service_id, added_to_cluster=None)

Handle initialization if this is a standalone service.

A hook point for services to execute tasks before the services are made available (i.e. showing up on RPC and starting to accept RPC calls) to other components. Child classes should override this method.

Parameters:
  • service_id – ID of the service where the manager is running.

  • added_to_cluster – True when a host’s cluster configuration has changed from not being defined or being ‘’ to any other value and the DB service record reflects this new value.
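
An override in a child class might look roughly like this (an illustrative sketch only; real managers do service-specific recovery here):

    from cinder import manager

    class ExampleManager(manager.Manager):
        def init_host(self, service_id, added_to_cluster=None):
            # Runs before the service is exposed over RPC, so it is safe
            # to repair local state for this host here.
            self.initialized = True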

init_host_with_rpc()

A hook for the service to do jobs after RPC is ready.

Like init_host(), this method is a hook where services get a chance to execute tasks that need RPC. Child classes should override this method.

is_working()

Method indicating if the service is working correctly.

This method is supposed to be overridden by subclasses and should return whether the manager is working correctly.

reset()

Method executed when SIGHUP is caught by the process.

We use it to reset RPC API version pins to avoid restarting the service when a rolling upgrade is completed.

property service_topic_queue
set_log_levels(context, log_request)
target = <Target version=1.0>
class PeriodicTasks

Bases: PeriodicTasks

class SchedulerDependentManager(host=None, service_name='undefined', cluster=None, *args, **kwargs)

Bases: ThreadPoolManager

Periodically send capability updates to the Scheduler services.

Services that need to inform the Scheduler of their capabilities should derive from this class. Otherwise, they can derive from manager.Manager directly. Updates are only sent after update_service_capabilities is called with non-None values.
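
A derived service typically remembers its stats and lets the periodic update push them to the scheduler, roughly like this sketch (capability keys and values are illustrative):

    from cinder import manager

    class ExampleBackendManager(manager.SchedulerDependentManager):
        def _refresh_stats(self):
            # Remembered here and sent on the next periodic update.
            self.update_service_capabilities({
                'total_capacity_gb': 100,
                'free_capacity_gb': 80,
            })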

reset()

Method executed when SIGHUP is caught by the process.

We use it to reset RPC API version pins to avoid restarting the service when a rolling upgrade is completed.

update_service_capabilities(capabilities)

Remember these capabilities to send on the next periodic update.

class ThreadPoolManager(*args, **kwargs)

Bases: Manager