Ocata Series Release Notes

3.0.1

New Features

  • A new recovery action, “REBOOT”, has been added to the health policy (see the first sketch after this list).
  • Added support for listening to Heat event notifications for stack failure detection.
  • The numeric properties in a scaling policy spec are now validated more strictly (see the second sketch after this list).
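
For illustration, the following is a minimal sketch of a health policy spec using the new REBOOT recovery action, written as a Python dictionary. The field names follow the senlin.policy.health-1.0 schema as we understand it; consult the policy type documentation for the authoritative layout:

    # A hypothetical health policy spec using the new REBOOT recovery action.
    # Recovery actions are given as a list of dictionaries (see the 3.0.0
    # notes below).
    health_policy_spec = {
        "type": "senlin.policy.health",
        "version": "1.0",
        "properties": {
            "detection": {
                "type": "NODE_STATUS_POLLING",  # poll node status periodically
                "options": {"interval": 60},    # seconds between polls
            },
            "recovery": {
                "actions": [{"name": "REBOOT"}],
            },
        },
    }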
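
For the stricter numeric validations, the sketch below marks the numeric properties of a senlin.policy.scaling-1.0 spec that are affected; the commented constraints are assumptions based on this note, not the exact rules:

    # A hypothetical scaling policy spec; the constraints in the comments are
    # what the stricter validation is assumed to check.
    scaling_policy_spec = {
        "type": "senlin.policy.scaling",
        "version": "1.0",
        "properties": {
            "event": "CLUSTER_SCALE_OUT",
            "adjustment": {
                "type": "CHANGE_IN_CAPACITY",
                "number": 1,          # must be a valid (positive) number
                "min_step": 1,        # must be a non-negative integer
                "best_effort": True,
                "cooldown": 120,      # must be a non-negative integer
            },
        },
    }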

Bug Fixes

  • Fixed a bug where the availability zone information of a nova server was not available.
  • Fixed a bug where parameter checking for the cluster update operation could incorrectly parse the provided value(s).

3.0.0

New Features

  • A new API, “cluster-op”, has been introduced for triggering a profile-type-specific operation on all nodes in a cluster. It is available since API micro-version 1.4 (see the first sketch after this list).
  • The Docker container profile now supports operations such as restart, pause and unpause.
  • A new optional parameter, “destroy_after_deletion”, has been added to the cluster-del-nodes request since API micro-version 1.4 (see the second sketch after this list).
  • The health manager has been improved to use dynamic timers instead of fixed-interval timers when polling a cluster’s status.
  • Error messages returned from API requests are now unified: all parameter validation failures with the same root cause return a similar message.
  • A configuration option, “exclude_derived_actions”, has been introduced into the “dispatchers” group to control whether derived actions lead to event notifications and/or database records (see the configuration excerpt after this list).
  • An event_purge subcommand has been added to the senlin-manage tool for purging events generated in a specific project.
  • Health policy recovery actions are now specified as a list of dictionaries instead of a list of simple names, to make room for future workflow invocations (see the before/after sketch after this list).
  • Many new operations have been added to the os.nova.server profile type. These operations can be listed using the “profile-type-ops” API.
  • A new API, “node-op”, has been introduced for triggering profile-type-specific operations on a single node. It is available since API micro-version 1.4 (see the first sketch after this list).
  • Versioned event notifications have been added so that the senlin-engine can send out messaging events when configured. The old event repository has been adapted to follow the same design.
  • Added versioned request support in the API, RPC and engine layers.
  • Added basic support for events/notifications.
  • Added a Rally plugin for cluster scale-in.
  • Added batch policy support for cluster actions.
  • Added an integration test for the message receiver.
  • A new API, “profile-type-ops”, has been introduced to expose the schema of profile-type-specific operations to end users (see the first sketch after this list).
  • Integrated OSProfiler into Senlin so that it can be used to measure the service’s performance.
  • The profile-type list and policy-type list now include the support status of each type since API micro-version 1.5.
  • RPC requests from the API service to the engine service are now fully managed using versioned objects. This will enable smooth upgrades of the service in the future.
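
The sketch below shows how the new operation APIs (“profile-type-ops”, “cluster-op” and “node-op”) might be exercised over raw HTTP with the requests library. The endpoint URL, token handling and request bodies are illustrative assumptions; the official clients should be preferred:

    import requests

    SENLIN = "http://senlin-api:8778/v1"   # assumed endpoint
    HEADERS = {
        "X-Auth-Token": "<token>",
        # The operation APIs require micro-version 1.4 or later.
        "OpenStack-API-Version": "clustering 1.4",
    }

    # 1. Discover which operations a profile type supports.
    ops = requests.get(SENLIN + "/profile-types/os.nova.server-1.0/ops",
                       headers=HEADERS).json()

    # 2. Trigger an operation on every node in a cluster.
    requests.post(SENLIN + "/clusters/my-cluster/ops",
                  headers=HEADERS,
                  json={"reboot": {"type": "SOFT"}})

    # 3. Trigger the same kind of operation on a single node.
    requests.post(SENLIN + "/nodes/my-node/ops",
                  headers=HEADERS,
                  json={"pause": {}})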
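
Continuing the sketch above, a cluster-del-nodes request carrying the new “destroy_after_deletion” parameter might look as follows (the action body format is an assumption):

    # Remove a node and destroy it instead of leaving it orphaned.
    requests.post(SENLIN + "/clusters/my-cluster/actions",
                  headers=HEADERS,
                  json={"del_nodes": {
                      "nodes": ["node-1234"],
                      "destroy_after_deletion": True,
                  }})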
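
For illustration, the new option is set in the “dispatchers” group of senlin.conf; the value shown here is an assumption:

    [dispatchers]
    # When enabled, derived actions (actions spawned by other actions) do
    # not produce event notifications or database records.
    exclude_derived_actions = True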
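
The change to health policy recovery actions can be pictured as the following before/after sketch (hypothetical values):

    # Before: a list of simple action names.
    recovery = {"actions": ["REBUILD"]}

    # After: a list of dictionaries, leaving room for parameters and,
    # later, workflow invocations.
    recovery = {"actions": [{"name": "REBUILD", "params": {}}]}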

Upgrade Notes

  • For resources that have user, project and domain properties, the lengths of these columns have been increased from 32 to 64 characters for better conformance with Keystone.
  • New configuration options are provided to enable “message” and/or “database” event generation.

Critical Issues

  • Fixed the problem where clusters or nodes could remain locked by actions executed by a dead engine.

Security Issues

  • Multi-tenancy is enhanced so that a user with the admin role has to respect project isolation unless explicitly requesting an exception.

Bug Fixes

  • Fixed the problem that health-manager-related configuration options were not properly exposed.
  • Removed LB_STATUS_POLLING from the health policy since LBaaS still cannot provide reliable node status updates.
  • The health policy recovery actions property is designed to be a list, but the current implementation can only handle one action; this is now explicitly checked.
  • Fixed the problem that the “updated_at” timestamp of a node was not correctly updated.
  • The notifications of profile-type-specific operations were not properly reporting the operation’s name. This has been fixed.
  • Fixed the notification logic so that it uses the proper transport obtained from oslo.messaging.
  • Fixed a bug in the cluster-collect API where the path parameter was None.
  • When attaching a policy (especially a health policy) to a cluster, users may choose to keep the policy disabled. The health manager and other components now take a disabled policy into account.
  • A nova server booted from a volume does not return a valid image ID. This situation is now handled.

Other Notes

  • The retrieval of some resources, such as actions and policies, has been optimized to avoid unnecessary object instantiation.

2.0.0

New Features

  • Added dependents to clusters and nodes for recording other clusters/nodes that depend on them.
  • Added a new receiver type, “message”, based on the Zaqar message queue (see the first sketch after this list).
  • The ‘image’, ‘flavor’, ‘key_name’ and ‘networks’ properties of a nova server profile can now be validated via the profile-validate API (see the second sketch after this list).
  • Optimized nova server updates so that the password and server name can be updated with or without an image-based rebuild.
  • Added ‘template_url’ support to the heat stack profile (see the third sketch after this list).
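
A minimal sketch of creating a “message” receiver over raw HTTP follows; the endpoint and body format are assumptions based on this note. Unlike webhook receivers, a message receiver is not bound to a specific cluster or action:

    import requests

    resp = requests.post(
        "http://senlin-api:8778/v1/receivers",      # assumed endpoint
        headers={"X-Auth-Token": "<token>"},
        json={"receiver": {"name": "my-receiver", "type": "message"}})

    # The response is expected to describe the Zaqar queue that consumers
    # should post messages to.
    print(resp.json())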
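
The second sketch validates a nova server profile spec via the profile-validate API; the spec fields follow the os.nova.server-1.0 schema, and the request path and format are assumptions:

    nova_server_spec = {
        "type": "os.nova.server",
        "version": "1.0",
        "properties": {
            "name": "my-server",
            "image": "cirros-0.3.5",
            "flavor": "m1.tiny",
            "key_name": "my_key",
            "networks": [{"network": "private"}],
        },
    }

    # Ask the engine to validate the spec (including image, flavor,
    # key_name and networks) without creating a profile.
    requests.post("http://senlin-api:8778/v1/profiles/validate",
                  headers={"X-Auth-Token": "<token>"},
                  json={"profile": {"spec": nova_server_spec}})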
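
Finally, a heat stack profile spec using the new ‘template_url’ property might look like this (field names follow the os.heat.stack-1.0 schema as we understand it):

    # A hypothetical stack profile spec referencing a remote template
    # instead of an inline one.
    stack_profile_spec = {
        "type": "os.heat.stack",
        "version": "1.0",
        "properties": {
            "template_url": "https://example.com/templates/my_stack.yaml",
            "parameters": {"key_name": "my_key"},
            "timeout": 60,  # minutes before the stack operation times out
        },
    }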

Deprecation Notes

  • Deprecated ‘block_device_mapping’ in the nova server profile since it was never supported by the OpenStack SDK.

Bug Fixes

  • Fixed a bug in the affinity policy where the calls to the nova driver were wrong.
  • Fixed a bug in the desired_capacity calculation. The base number used is now the current capacity of the cluster instead of the previous ‘desired’ capacity. This change covers all actions that change cluster capacity and all related policies.
  • Fixed cluster/node status setting after a cluster/node check operation.
  • Various fixes to the user doc, developer doc and API documentation.
  • Removed ‘metadata’ from profile query parameters because the current support is known to have issues.
  • Added validation of key_name, flavor, image and networks when updating a nova server.
  • Fixed bugs related to receiver creation when the type is set to ‘message’.
  • Fixed the dead-service cleanup logic so that the cleanup operation can be retried.