Pike Series Release Notes

16.1.8-57

Security Issues

  • OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433)

    This release contains a security fix for bug 1837877 where users without the admin role can be exposed to sensitive error details in the server resource fault message.

    There is a behavior change where non-nova exceptions will only record the exception class name in the fault message field which is exposed to all users, regardless of the admin role.

    The fault details, which are only exposed to users with the admin role, will continue to include the traceback and will also include the exception value, which for non-nova exceptions is what used to be exposed in the fault message field. In other words, the information that admins could see for server faults is still available, but the exception value may now be in the details rather than the message.

Bug Fixes

  • Bug 1811726 is fixed by deleting the resource provider (in placement) associated with each compute node record managed by a nova-compute service when that service is deleted via the DELETE /os-services/{service_id} API. This is particularly important for compute services managing ironic baremetal nodes.

  • The os-simple-tenant-usage pagination has been fixed. In some cases, nova usage-list would have returned incorrect results because of this. See bug https://launchpad.net/bugs/1796689 for details.

16.1.8

Bug Fixes

  • It is now possible to configure the [cinder] section of nova.conf to allow setting admin-role credentials for scenarios where a user token is not available to perform actions on a volume. For example, when reclaim_instance_interval is a positive integer, instances are soft deleted until the nova-compute service periodic task removes them. If a soft deleted instance has volumes attached, the compute service needs to be able to detach and possibly delete the associated volumes, otherwise they will be orphaned in the block storage service. Similarly, if running_deleted_instance_poll_interval is set and running_deleted_instance_action = reap, then the compute service will need to be able to detach and possibly delete volumes attached to instances that are reaped. See bug 1733736 and bug 1734025 for more details.
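    A minimal illustration of such a configuration follows. This is a sketch only: all endpoint and credential values are placeholders, and exact option availability should be checked against the Pike configuration reference.

```ini
[DEFAULT]
# Keep soft-deleted instances for one hour before the periodic task reclaims them
reclaim_instance_interval = 3600

[cinder]
# Admin-role credentials used when no user token is available (values are placeholders)
auth_type = password
auth_url = http://controller/identity
username = nova
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```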

  • Fixes an issue with cold migrating (resizing) an instance from an Ocata to a Pike compute by correcting the parameter order in the resize_instance rpcapi call to the destination compute.

16.1.7

Bug Fixes

  • When testing whether direct IO is possible on the backing storage for an instance, Nova now uses a block size of 4096 bytes instead of 512 bytes, avoiding issues when the underlying block device has sectors larger than 512 bytes. See bug https://launchpad.net/bugs/1801702 for details.

16.1.5

Upgrade Notes

  • The nova-api service now requires the [placement] section to be configured in nova.conf if you are using a separate config file just for that service. This is because the nova-api service now needs to talk to the placement service in order to delete resource provider allocations when deleting an instance and the nova-compute service on which that instance is running is down. This change is idempotent if [placement] is not configured in nova-api but it will result in new warnings in the logs until configured. See bug https://bugs.launchpad.net/nova/+bug/1679750 for more details.
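    For example, a [placement] section in the nova-api service's configuration file might look like the following sketch (credential values are placeholders):

```ini
[placement]
auth_type = password
auth_url = http://controller/identity
username = placement
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
os_region_name = RegionOne
```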

  • The default list of non-inherited image properties to pop when creating a snapshot has been extended to include image signature properties. The properties img_signature_hash_method, img_signature, img_signature_key_type and img_signature_certificate_uuid are no longer inherited by the snapshot image, as they would otherwise result in Glance attempting to verify the snapshot image with the signature of the original.

  • A new online data migration has been added to populate missing instance.availability_zone values for instances older than Pike whose availability_zone was not specified during boot time. This can be run during the normal nova-manage db online_data_migrations routine. This fixes bug 1768876.

Security Issues

  • A new policy rule, os_compute_api:servers:create:zero_disk_flavor, has been introduced which defaults to rule:admin_or_owner for backward compatibility, but can be configured to make the compute API enforce that server create requests using a flavor with zero root disk must be volume-backed or fail with a 403 HTTPForbidden error.

    Allowing image-backed servers with a zero root disk flavor can be potentially hazardous if users are allowed to upload their own images, since an instance created with a zero root disk flavor gets its size from the image, which can be unexpectedly large and exhaust local disk on the compute host. See https://bugs.launchpad.net/nova/+bug/1739646 for more details.

    While this is introduced in a backward-compatible way, the default will be changed to rule:admin_api in a subsequent release. It is advised that you communicate this change to your users before turning on enforcement since it will result in a compute API behavior change.
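    Turning on enforcement ahead of the default change could look like this fragment of a policy file (assuming a JSON-format policy file; adapt to your policy file format):

```json
{
    "os_compute_api:servers:create:zero_disk_flavor": "rule:admin_api"
}
```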

  • The 'SSBD' and 'VIRT-SSBD' cpu flags have been added to the list of available choices for the [libvirt]/cpu_model_extra_flags config option. These are important for proper mitigation of the Spectre 3a and 4 CVEs. Note that the use of either of these flags require updated packages below nova, including libvirt, qemu (specifically >=2.9.0 for virt-ssbd), linux, and system firmware. For more information see https://www.us-cert.gov/ncas/alerts/TA18-141A
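    As a sketch, enabling one of these flags requires a custom CPU model; the model name below is only an example and must be one that actually exists on your hosts (and virt-ssbd additionally requires qemu >= 2.9.0, as noted above):

```ini
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = ssbd
```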

Bug Fixes

  • The DELETE /os-services/{service_id} compute API will now return a 409 HTTPConflict response when trying to delete a nova-compute service which is still hosting instances. This is because doing so would orphan the compute node resource provider in the placement service on which those instances have resource allocations, which affects scheduling. See https://bugs.launchpad.net/nova/+bug/1763183 for more details.

16.1.2

Prelude

This release includes fixes for security vulnerabilities.

Security Issues

  • [CVE-2017-18191] Swapping encrypted volumes can lead to data loss and a possible compute host DOS attack.

Bug Fixes

  • The libvirt driver now allows specifying individual CPU feature flags for guests, via a new configuration attribute [libvirt]/cpu_model_extra_flags -- only with custom as the [libvirt]/cpu_model. Refer to its documentation in nova.conf for usage details.

    One of the motivations for this is to alleviate the performance degradation (caused as a result of applying the "Meltdown" CVE fixes) for guests running with certain Intel-based virtual CPU models. This guest performance impact is reduced by exposing the CPU feature flag 'PCID' ("Process-Context ID") to the guest CPU, assuming that it is available in the physical hardware itself.

    Note that besides custom, Nova's libvirt driver has two other CPU modes: host-model (which is the default), and host-passthrough. Refer to the [libvirt]/cpu_model_extra_flags documentation for what to do when you are using either of those CPU modes in context of 'PCID'.
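    For example, exposing PCID with a custom CPU model could be sketched as follows (the model name is illustrative and must match hardware that actually provides PCID):

```ini
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_model_extra_flags = pcid
```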

16.1.1

Bug Fixes

  • The nova-manage discover_hosts command now has a --by-service option which allows discovering hosts in a cell purely by the presence of a nova-compute binary. At this point, there is no need to use this unless you're using ironic, as it is less efficient. However, if you are using ironic, this allows discovery and mapping of hosts even when no ironic nodes are present.

  • The swap_volume action is now prevented if the instance is in the SUSPENDED, STOPPED or SOFT_DELETED state. A conflict (409) will now be raised, whereas previously the operation failed silently.

16.1.0

Upgrade Notes

  • On AArch64 architecture cpu_mode for libvirt is set to host-passthrough by default.

    AArch64 currently lacks host-model support because neither libvirt nor QEMU are able to tell what the host CPU model exactly is and there is no CPU description code for ARM(64) at this point.

    Warning

    host-passthrough mode will completely break live migration, unless all the Compute nodes (running libvirtd) have identical CPUs.

  • Starting in Ocata, there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling for the FilterScheduler. Instead, overcommit values must be set on a per-compute-node basis in the Nova configuration files.

    If you have been relying on per-aggregate overcommit, during your upgrade, you must change to using per-compute-node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered.

    You can safely remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter from your [filter_scheduler]/enabled_filters and you do not need to replace them with any other core/ram/disk filters. The placement query in the FilterScheduler takes care of the core/ram/disk filtering, so CoreFilter, RamFilter, and DiskFilter are redundant.

    Please see the mailing list thread for more information: http://lists.openstack.org/pipermail/openstack-operators/2018-January/014748.html

  • This release contains a schema migration for the nova_api database in order to address bug 1738094:

    https://bugs.launchpad.net/nova/+bug/1738094

    The migration is optional and can be postponed if you have not been affected by the bug. The bug manifests itself through "Data too long for column 'spec'" database errors.

Bug Fixes

  • The delete_host command has been added in nova-manage cell_v2 to delete a host from a cell (host mappings). The force option has been added in nova-manage cell_v2 delete_cell. If the force option is specified, a cell can be deleted even if the cell has hosts.

  • If scheduling fails during a rebuild, the server instance will go to ERROR state and a fault will be recorded. See bug 1744325.

16.0.4

Known Issues

  • In the 16.0.0 Pike release, quota limits are checked in a new fashion after change 5c90b25e49d47deb7dc6695333d9d5e46efe8665, and a new config option [quota]/recheck_quota has been added in change eab1d4b5cc6dd424c5c7dfd9989383a8e716cae5 to recheck quota after resource creation to prevent allowing quota to be exceeded as a result of racing requests. These changes could lead to requests blocked by over-quota failures resulting in instances in the ERROR state, rather than no instance records as before. Refer to https://bugs.launchpad.net/nova/+bug/1716706 for the detailed bug report.

Security Issues

  • OSSA-2017-006: Nova FilterScheduler doubles resource allocations during rebuild with new image (CVE-2017-17051)

    By repeatedly rebuilding an instance with new images, an authenticated user may consume untracked resources on a hypervisor host leading to a denial of service. This regression was introduced with the fix for OSSA-2017-005 (CVE-2017-16239), however, only Nova stable/pike or later deployments with that fix applied and relying on the default FilterScheduler are affected.

    The fix is in the nova-api and nova-scheduler services.

    Note

    The fix for errata in OSSA-2017-005 (CVE-2017-16239) will need to be applied in addition to this fix.

Bug Fixes

  • The fix for OSSA-2017-005 (CVE-2017-16239) was too far-reaching in that rebuilds can now fail based on scheduling filters that should not apply to rebuild. For example, a rebuild of an instance on a disabled compute host could fail whereas it would not before the fix for CVE-2017-16239. Similarly, rebuilding an instance on a host that is at capacity for vcpu, memory or disk could fail since the scheduler filters would treat it as a new build request even though the rebuild is not claiming new resources.

    Therefore this release contains a fix for those regressions in scheduling behavior on rebuild while maintaining the original fix for CVE-2017-16239.

    Note

    The fix relies on a RUN_ON_REBUILD variable which is checked for all scheduler filters during a rebuild. The reasoning behind the value for that variable depends on each filter. If you have out-of-tree scheduler filters, you will likely need to assess whether or not they need to override the default value (False) for the new variable.

  • This release includes a fix for bug 1733886 which was a regression introduced in the 2.36 API microversion where the force parameter was missing from the PUT /os-quota-sets/{tenant_id} API request schema so users could not force quota updates with microversion 2.36 or later. The bug is now fixed so that the force parameter can once again be specified during quota updates. There is no new microversion for this change since it is an admin-only API.

16.0.3

Security Issues

  • OSSA-2017-005: Nova Filter Scheduler bypass through rebuild action

    By rebuilding an instance, an authenticated user may be able to circumvent the FilterScheduler bypassing imposed filters (for example, the ImagePropertiesFilter or the IsolatedHostsFilter). All setups using the FilterScheduler (or CachingScheduler) are affected.

    The fix is in the nova-api and nova-conductor services.

Bug Fixes

  • Fixes bug 1695861, in which the aggregate API accepted requests with availability zone names including ':'. With this fix, creating an availability zone whose name includes ':' results in a 400 BadRequest error response.

  • Fixes a bug preventing ironic nodes without VCPUs, memory or disk in their properties from being picked by nova.

16.0.2

Upgrade Notes

  • A new keystone configuration section has been added, which makes it possible to configure session link attributes for communicating with keystone. This allows custom certificates to be used to secure the link between Nova and Keystone.

Other Notes

  • The ironic driver will automatically migrate instance flavors for resource classes at runtime. If you are not able to run the compute and ironic services at Pike because you are automating an upgrade past this release, you can use the nova-manage db ironic_flavor_migration command to push the migration manually. This is only for advanced users taking on the risk of automating the process of upgrading through Pike and is not recommended for normal users.

16.0.1

Upgrade Notes

  • The nova-conductor service now needs access to the Placement service in the case of forcing a destination host during a live migration. Ensure the [placement] section of nova.conf for the nova-conductor service is filled in.

Bug Fixes

  • When forcing a specified destination host during live migration, the scheduler is bypassed but resource allocations will still be made in the Placement service against the forced destination host. If the resource allocation against the destination host fails, the live migration operation will fail, regardless of the force flag being specified in the API. The guest will be unchanged on the source host. For more details, see bug 1712008.

  • When forcing a specified destination host during evacuate, the scheduler is bypassed but resource allocations will still be made in the Placement service against the forced destination host. If the resource allocation against the destination host fails, the evacuate operation will fail, regardless of the force flag being specified in the API. The guest will be unchanged on the source host. For more details, see bug 1713786.

  • It is now possible to unset the [vnc]keymap and [spice]keymap configuration options. These were known to cause issues for some users with non-US keyboards and may be deprecated in the future.

16.0.0

Prelude

This release includes fixes for security vulnerabilities.

The 16.0.0 release includes many new features and bug fixes. It is difficult to cover all the changes that have been introduced. Please at least read the upgrade section which describes the required actions to upgrade your cloud from 15.0.0 (Ocata) to 16.0.0 (Pike).

That said, a few major changes are worth mentioning. This is not an exhaustive list:

  • The latest Compute API microversion supported for Pike is v2.53. Details on REST API microversions added since the 15.0.0 Ocata release can be found in the REST API Version History page.

  • The FilterScheduler driver now provides allocations to the Placement API, which helps concurrent schedulers to verify resource consumptions directly without waiting for compute services to ask for a reschedule in case of a race condition. That is an important performance improvement that includes allowing one to use more than one scheduler worker if there are capacity concerns. For more details, see the Pike Upgrade Notes for Placement.

  • Nova now supports a Cells v2 multi-cell deployment. The default deployment is a single cell. There are known limitations with multiple cells. Refer to the Cells v2 Layout page for more information about deploying multiple cells.

  • Cells v1 is now deprecated in favor of Cells v2.

  • The quota system has been reworked to count resources at the point of creation rather than using a reserve/commit/rollback approach. No operator impacts are expected.

  • Compute-specific documentation has moved from http://docs.openstack.org to https://docs.openstack.org/nova/ and the layout of the Nova developer documentation has been restructured. If you find that something is missing or hit a broken bookmark, please report a bug.

New Features

  • The versioned_notifications_topic configuration option has been added; this enables one to configure the topics used for versioned notifications.

  • Added support for attaching and detaching network interfaces for bare metal nodes using the ironic virt driver. Updating the instance info cache relies on receiving network-changed events from neutron or on the periodic task that refreshes the cache. This means that nova's cached network information (for example, what is returned in GET /servers responses) will not be updated immediately after an attach or detach.

  • Added support for the LAN9118 NIC as a valid value of the hw_vif_model image property for qemu guests.

  • The locked and display_description fields have been added to InstancePayload. Versioned instance action notifications will contain these fields.

    Some examples of versioned notifications that use InstancePayload:

    • instance.create

    • instance.delete

    • instance.resize

    • instance.pause

  • Added a PCIWeigher weigher, which can be used to ensure that non-PCI instances do not occupy resources on hosts with PCI devices. It can be configured using the [filter_scheduler] pci_weight_multiplier configuration option.

  • The network.json metadata format has been amended for IPv6 networks under Neutron control. The type that is shown has been changed from being always set to ipv6_dhcp to correctly reflecting the ipv6_address_mode option in Neutron, so the type now will be ipv6_slaac, ipv6_dhcpv6-stateless or ipv6_dhcpv6-stateful.
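    A network entry in network.json for a SLAAC network might now look roughly like the following sketch (keys are trimmed to the relevant ones and the identifiers are invented):

```json
{
  "networks": [
    {
      "id": "network0",
      "link": "interface0",
      "type": "ipv6_slaac",
      "network_id": "00000000-0000-0000-0000-000000000000"
    }
  ]
}
```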

  • Booting instances from an iSCSI volume is now possible with the ironic virt driver. This feature requires an ironic service supporting API version 1.32 or later, i.e. an ironic release newer than 8.0. It also requires python-ironicclient >= 1.14.0.

  • The model name vhostuser_vrouter_plug is set by the neutron contrail plugin during VM network port creation. The libvirt compute driver now supports plugging virtual interfaces of type "contrail_vrouter", which are provided by the contrail-nova-vif-driver plugin [1]. [1] https://github.com/Juniper/contrail-nova-vif-driver

  • The VM diagnostics response has been standardized with microversion v2.48. There are fields that each hypervisor populates; if a hypervisor driver cannot provide a specific field, that field is reported as 'None'.

  • Microversion 2.53 changes service and hypervisor IDs to UUIDs to ensure uniqueness across cells. Prior to this, ID collisions were possible in multi-cell deployments. See the REST API Version History and Compute API reference for details.

  • The nova-compute worker can automatically disable itself in the service database if consecutive build failures exceed a set threshold. The [compute]/consecutive_build_service_disable_threshold configuration option allows setting the threshold for this behavior, or disabling it entirely if desired. The intent is that an admin will examine the issue before manually re-enabling the service, which will avoid that compute node becoming a black hole build magnet.
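    A sketch of the relevant configuration (the threshold value below is illustrative, not a recommendation):

```ini
[compute]
# Auto-disable this nova-compute service after 10 consecutive failed builds;
# set to 0 to turn the behavior off entirely.
consecutive_build_service_disable_threshold = 10
```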

  • Supports a new method for deleting all inventory for a resource provider

    • DELETE /resource_providers/{uuid}/inventories

    Return codes:

    • 204 NoContent on success

    • 404 NotFound for missing resource provider

    • 405 MethodNotAllowed if a microversion is specified that is before this change (1.5)

    • 409 Conflict if the inventory is in use or if some other request is concurrently updating this resource provider

    Requires OpenStack-API-Version placement 1.5

  • The discover_hosts_in_cells_interval periodic task in the scheduler is now more efficient in that it can specifically query unmapped compute nodes from the cell databases instead of having to query them all and compare against existing host mappings.

  • A new 2.47 microversion has been added to the Compute API. Users specifying this microversion or later will see the flavor information as a dict when viewing server details via the servers REST API endpoints. If the user is prevented by policy from viewing extra_specs, then extra_specs will not be included in the flavor information.

  • The libvirt compute driver now supports attaching volumes of type "drbd". See the DRBD documentation for more information.

  • Granularity has been added to the os_compute_api:os-flavor-manage policy by separating the create and delete actions:

    • os_compute_api:os-flavor-manage:create

    • os_compute_api:os-flavor-manage:delete

    To address backwards compatibility, the new rules added to the flavor_manage.py policy file default to the existing rule, os_compute_api:os-flavor-manage, if it is set to a non-default value.

  • Some hypervisors add a signature to their guests, e.g. KVM adds KVMKVMKVM\0\0\0, Xen: XenVMMXenVMM. The existence of a hypervisor signature enables some paravirtualization features in the guest, and also prevents certain drivers that test for the hypervisor from loading, e.g. the Nvidia driver [1]: "The latest Nvidia driver (337.88) specifically checks for KVM as the hypervisor and reports Code 43 for the driver in a Windows guest when found. Removing or changing the KVM signature is sufficient for the driver to load and work."

    The new img_hide_hypervisor_id image metadata property hides the hypervisor signature for the guest.

    Currently, only the libvirt compute driver can hide the hypervisor signature for guests.

    To verify whether hiding the hypervisor id works on a Linux-based system:

    $ cpuid | grep -i hypervisor_id
    

    The result should not be (for KVM hypervisor):

    hypervisor_id = KVMKVMKVM\0\0\0
    

    You can enable this feature by setting the img_hide_hypervisor_id=true property in a Glance image.

    [1]: http://git.qemu.org/?p=qemu.git;a=commitdiff;h=f522d2a

  • The 1.7 version of the placement API changes handling of PUT /resource_classes/{name} to be a create or verification of the resource class with {name}. If the resource class is a custom resource class and does not already exist it will be created and a 201 response code returned. If the class already exists the response code will be 204. This makes it possible to check or create a resource class in one request.

  • The virtio-forwarder VNIC type has been added to the list of supported VNIC types. This VNIC type is intended to request a low-latency virtio port inside the instance, likely backed by hardware acceleration. Currently the Agilio OVS external Neutron and OS-VIF plugins provide support for this VNIC mode.

  • It is now possible to signal and perform an online volume size change as of the 2.51 microversion using the volume-extended external event. Nova will perform the volume extension so the host can detect its new size. It will also resize the device in QEMU so the instance can detect the new disk size without rebooting.

    Currently, only the libvirt compute driver with iSCSI and FC volumes supports online volume size change.

  • The 2.51 microversion exposes the events field in the response body for the GET /servers/{server_id}/os-instance-actions/{request_id} API. This is useful for API users to monitor when a volume extend operation completes for the given server instance. By default only users with the administrator role will be able to see event traceback details.

  • Nova now uses oslo.middleware for request_id processing. This means that there is now a new X-OpenStack-Request-ID header returned on every request which mirrors the content of the existing X-Compute-Request-ID. The expected existence of this header is signaled by Microversion 2.46. If server version >= 2.46, you can expect to see this header in your results (regardless of microversion requested).

  • A new microversion 1.10 has been added to the Placement API. This microversion adds support for the GET /allocation_candidates resource endpoint, which returns information about possible allocation requests that satisfy a set of resource constraints supplied in the query string.

  • The placement API service can now be configured to support CORS. If a cors configuration group is present in the service's configuration file (currently nova.conf), with allowed_origin configured, the values within will be used to configure the middleware. If cors.allowed_origin is not set, the middleware will not be used.
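    For instance, a minimal sketch (the origin value is a placeholder):

```ini
[cors]
allowed_origin = https://dashboard.example.com
```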

  • Traits have been added to the placement API with microversion 1.6.

    • GET /traits: Return all traits.

    • PUT /traits/{name}: Insert a single custom trait.

    • GET /traits/{name}: Check whether a trait name exists.

    • DELETE /traits/{name}: Delete the specified trait.

    • GET /resource_providers/{uuid}/traits: Return all traits associated with a specific resource provider.

    • PUT /resource_providers/{uuid}/traits: Set all the traits for a specific resource provider.

    • DELETE /resource_providers/{uuid}/traits: Remove any existing trait associations for the specified resource provider.

  • A new configuration option, [quota]/recheck_quota, has been added to recheck quota after resource creation to prevent allowing quota to be exceeded as a result of racing requests. It defaults to True, which makes it impossible for a user to exceed their quota. However, a REST API user could be rejected with an over-quota 403 error response in the event of a collision just before reaching the quota limit, even if enough quota was available at the time of the request. Operators may wish to set the option to False to avoid additional load on the system, if allowing quota to be exceeded because of racing requests is considered acceptable.
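    The check-create-recheck pattern that recheck_quota enables can be sketched with a toy model. This is illustrative only, not nova's actual quota code: the QuotaChecker class and the _race hook are invented for the example to simulate a racing request landing between the initial check and the recheck.

```python
class OverQuota(Exception):
    """Raised when a request would exceed the quota limit."""


class QuotaChecker:
    """Toy model of check -> create -> recheck quota handling."""

    def __init__(self, limit):
        self.limit = limit
        self.resources = []

    def _check(self, requested=0):
        # Fail if current usage plus the requested amount exceeds the limit.
        if len(self.resources) + requested > self.limit:
            raise OverQuota()

    def create(self, name, recheck_quota=True, _race=None):
        self._check(requested=1)       # initial check before creating anything
        if _race is not None:
            _race()                    # simulate a racing request landing in the window
        self.resources.append(name)    # create the resource record
        if recheck_quota:
            try:
                # Recheck: racing requests may have pushed usage over the limit.
                self._check()
            except OverQuota:
                self.resources.remove(name)  # roll back our record
                raise
        return name
```

    With the recheck enabled, a creation that races past the initial check is rolled back and rejected with OverQuota; with it disabled, both records survive and the quota ends up exceeded, mirroring the trade-off described above.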

  • A new configuration option, reserved_host_cpus, has been added for the Compute service. It gives operators a way to specify how many physical CPUs to reserve for the hypervisor, separate from what instances consume.

  • A versioned instance.update notification will be sent when the server's tags field is updated.

  • Added support for direct ports (SR-IOV) with the OVS vif type. Using this OVS acceleration mode requires openvswitch 2.8.0 and Linux kernel 4.8. This feature improves Open vSwitch performance by controlling SR-IOV virtual functions via the OpenFlow control plane. Note that in the Pike release there is no distinction between SR-IOV hardware and OVS offload on the same host; this limitation will be resolved when the enable-sriov-nic feature is completed. Until then, operators can use host aggregates to ensure that instances are scheduled to specific hosts based on hardware.

  • Added support for applying tags when creating a server. The tag schema is the same as in the 2.26 microversion.

  • Added support for the Keystone middleware feature for Nova's interaction with the Glance API. With this, if a service token is sent along with the user token, the expiration of the user token is ignored. To use this feature, a service user must first be created; then add the service user configuration under the [service_user] group in nova.conf and set the send_service_user_token flag to True.

    Note

    This feature is already implemented for Nova interaction with the Cinder and Neutron APIs in Ocata.
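    A sketch of the service user configuration described above (credential values are placeholders to be replaced with your own):

```ini
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://controller/identity
username = nova
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```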

  • The libvirt compute driver now supports connecting to Veritas HyperScale volume backends.

  • Microversion 2.49 brings device role tagging to the attach operation of volumes and network interfaces. Both network interfaces and volumes can now be attached with an optional tag parameter. The tag is then exposed to the guest operating system through the metadata API. Unlike the original device role tagging feature, tagged attach does not support the config drive. Because the config drive was never designed to be dynamic, it only contains device tags that were set at boot time with API 2.32. Any changes made to tagged devices with API 2.49 while the server is running will only be reflected in the metadata obtained from the metadata API. Because of metadata caching, changes may take up to metadata_cache_expiration to appear in the metadata API. The default value for metadata_cache_expiration is 15 seconds.

    Tagged volume attachment is not supported for shelved-offloaded instances. Tagged device attachment (both volumes and network interfaces) is not supported for Cells V1 deployments.

  • The following volume attach and volume detach versioned notifications have been added to the nova-compute service:

    • instance.volume_attach.start

    • instance.volume_attach.end

    • instance.volume_attach.error

    • instance.volume_detach.start

    • instance.volume_detach.end

  • The XenAPI compute driver now supports creating servers with virtual interface and block device tags which was introduced in the 2.32 microversion.

    Note that multiple paths will exist for a tagged disk for the following reasons:

    1. HVM guests may not have the paravirtualization (PV) drivers installed, in which case the disk will be accessible on the ide bus. When the PV drivers are installed the disk will be accessible on the xen bus.

    2. Windows guests with PV drivers installed expose devices in a different way to Linux guests with PV drivers. Linux systems will see disk paths under /sys/devices/, but Windows guests will see them in the registry, for example HKLM\System\ControlSet001\Enum\SCSIDisk. These two disks are both on the xen bus.

    See the following XenAPI documentation for details: http://xenbits.xen.org/docs/4.2-testing/misc/vbd-interface.txt

Known Issues

  • Due to bug 1707256, the shared storage model in Placement is not supported by the scheduler. This means that in the Pike release series, operators cannot create shared storage pools among two or more compute hosts using the Placement service for scheduling and resource tracking.

    This is not a regression, just a note about functionality that is not yet available. Support for modeling shared storage providers will be worked on in the Queens release.

  • It was discovered that the live migration progress timeout, controlled by the configuration option [libvirt]/live_migration_progress_timeout, frequently caused live migrations to fail with a progress timeout even though they were still making good progress. To minimize the problems caused by these checks, the default has been changed to 0, which means the timeout will never trigger. To address live migrations failing with a timeout error, look at [libvirt]/live_migration_completion_timeout and [libvirt]/live_migration_downtime.
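    The options involved can be sketched as follows (the non-zero values are illustrative, not recommendations):

```ini
[libvirt]
# 0 (the new default) disables the progress-based timeout entirely
live_migration_progress_timeout = 0
# Tune these instead if migrations fail with timeout errors
live_migration_completion_timeout = 800
live_migration_downtime = 500
```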

  • Due to scheduling changes for bare metal nodes, extra resources may be reported as free to Placement. This can happen in two cases:

    • An instance is deployed with a flavor smaller than the node (only possible when exact filters are not used).

    • Node properties were modified in ironic for a deployed node.

    When such instances were deployed without using a custom resource class, it is possible for the scheduler to try deploying another instance on the same node. This will cause a failure in the compute service and a scheduling retry.

    The recommended workaround is to assign a resource class to all ironic nodes, and use it for scheduling of bare metal instances.

  • In deployments with multiple (v2) cells, upcalls from the computes to the scheduler (or other control services) cannot occur. This prevents certain things from happening, such as the track_instance_changes updates, as well as the late affinity checks for server groups. See the related documentation on the scheduler.track_instance_changes and workarounds.disable_group_policy_check_upcall configuration options for more details. Single-cell deployments without any MQ isolation will continue to operate as they have for the time being.

Upgrade Notes

  • Interface attachment/detachment for the ironic virt driver was implemented for in-tree network interfaces in ironic version 8.0, and that release is required for nova's interface attachment feature to work. Prior to that release, calling VIF attach on an active ironic node using in-tree network interfaces was essentially a no-op. This should not be an issue during the upgrade, though, as ironic is required to be upgraded before nova.

  • The default_floating_pool configuration option has been added to the [neutron] group. The existing default_floating_pool option in the [DEFAULT] group is retained and is used by nova-network users. Neutron users should migrate to the new option.

  • The information in the network.json metadata has been amended, for IPv6 networks under Neutron control, the type field has been changed from being always set to ipv6_dhcp to correctly reflecting the ipv6_address_mode option in Neutron.

  • The required ironic API version is updated to 1.32. The ironic service must be upgraded to an ironic release newer than 8.0 before nova is upgraded, otherwise all ironic integration will fail.

  • The type of the following config options has been changed from string to URI. They are now validated for URI format and scheme:

    • api_endpoint in the ironic group

    • mksproxy_base_url in the mks group

    • html5_proxy_base_url in the rdp group

    • serial_port_proxy_uri in the vmware group

  • The os-volume_attachments APIs no longer check os_compute_api:os-volumes policy. They do still check os_compute_api:os-volumes-attachments policy rules. Deployers who have customized policy should confirm that their settings for os-volume_attachments policy checks are sufficient.

  • The new configuration option [compute]/consecutive_build_service_disable_threshold defaults to a nonzero value, which means multiple failed builds will result in a compute node auto-disabling itself.
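
    As a sketch, a deployment that wants to retain the previous never-auto-disable behavior can set the threshold to 0, while a positive value N auto-disables the service after N consecutive failed builds:

    ```ini
    [compute]
    # 0 disables the auto-disable behavior entirely
    consecutive_build_service_disable_threshold = 0
    ```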

  • The nova-manage project quota_usage_refresh and its alias nova-manage account quota_usage_refresh commands have been renamed to nova-manage quota refresh. Aliases are provided, but these are marked as deprecated and will be removed in the next release of nova.

  • The default value of the [xenserver]/vif_driver configuration option has been changed to nova.virt.xenapi.vif.XenAPIOpenVswitchDriver to match the default setting of [DEFAULT]/use_neutron=True.

  • The libvirt driver port filtering feature will now ignore the use_ipv6 config option.

    The libvirt driver provides port filtering capability. This capability is enabled when the following is true:

    • The nova.virt.libvirt.firewall.IptablesFirewallDriver firewall driver is enabled

    • Security groups are disabled

    • Neutron port filtering is disabled/unsupported

    • An IPTables-compatible interface is used, e.g. OVS VIF hybrid mode, where the VIF is a tap device connected to OVS via a bridge

    When enabled, libvirt applies IPTables rules to all interface ports; these rules provide MAC, IP, and ARP spoofing protection.

    Previously, setting the use_ipv6 config option to False prevented the generation of IPv6 rules even when there were IPv6 subnets available. This was fine when using nova-network, where the same config option was used to control generation of these subnets. However, a mismatch between this nova option and equivalent IPv6 options in neutron would have resulted in IPv6 packets being dropped.

    Seeing as there was no apparent reason for not allowing IPv6 traffic when the network is IPv6-capable, we now ignore this option. Instead, we use the availability of IPv6-capable subnets as an indicator that IPv6 rules should be added.

  • The libvirt driver port filtering feature will now ignore the allow_same_net_traffic config option.

    The libvirt driver provides port filtering capability. This capability is enabled when the following is true:

    • The nova.virt.libvirt.firewall.IptablesFirewallDriver firewall driver is enabled

    • Security groups are disabled

    • Neutron port filtering is disabled/unsupported

    • An IPTables-compatible interface is used, e.g. OVS VIF hybrid mode, where the VIF is a tap device connected to OVS via a bridge

    When enabled, libvirt applies IPTables rules to all interface ports; these rules provide MAC, IP, and ARP spoofing protection.

    Previously, setting the allow_same_net_traffic config option to True allowed for same network traffic when using these port filters. This was the default case and was the only case tested. Setting this to False disabled same network traffic when using the libvirt driver port filtering functionality only, however, this was neither tested nor documented.

    Given that there are other better documented and better tested ways to approach this, such as through use of neutron's native port filtering or security groups, this functionality has been removed. Users should instead rely on one of these alternatives.

  • Three live-migration related configuration options have been restricted by minimum values since 16.0.0, and values below the minimum will now raise a ValueError instead of merely logging a warning as before. These configuration options are:

    • live_migration_downtime (minimum value: 100)

    • live_migration_downtime_steps (minimum value: 3)

    • live_migration_downtime_delay (minimum value: 10)
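
    A configuration that satisfies the new minimums might look like the following; the specific values are illustrative only:

    ```ini
    [libvirt]
    # minimum 100 (milliseconds of allowed downtime)
    live_migration_downtime = 500
    # minimum 3 (number of increments)
    live_migration_downtime_steps = 10
    # minimum 10 (seconds between increments)
    live_migration_downtime_delay = 75
    ```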

  • The ssl options were only used by Nova code that interacts with the Glance client. These options are now defined and read by keystoneauth. The api_insecure option from the glance group has been renamed to insecure. The following ssl options have been moved to the glance group:

    • ca_file is now cafile

    • cert_file is now certfile

    • key_file is now keyfile
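
    A deployment that previously set these under the ssl group would carry them over to the glance group roughly as follows (all file paths are placeholders):

    ```ini
    [glance]
    insecure = false
    cafile = /etc/ssl/certs/ca.pem
    certfile = /etc/nova/glance-client.pem
    keyfile = /etc/nova/glance-client.key
    ```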

  • Injected network templates will now ignore the use_ipv6 config option.

    Nova supports file injection of network templates. Putting these in a config drive is the only way to configure networking without DHCP.

    Previously, setting the use_ipv6 config option to False prevented the generation of IPv6 network info, even if there were IPv6 networks available. This was fine when using nova-network, where the same config option is used to control generation of these subnets. However, a mismatch between this nova option and equivalent IPv6 options in neutron would have resulted in IPv6 packets being dropped.

    Seeing as there was no apparent reason for not including IPv6 network info when IPv6 capable networks are present, we now ignore this option. Instead, we include info for all available networks in the template, be they IPv4 or IPv6.

  • In Ocata, the nova-scheduler would fall back to not calling the placement service during instance boot if old computes were running. That compatibility mode is no longer present in Pike, and as such, the scheduler fully depends on the placement service. This effectively means that in Pike Nova requires Placement API version 1.4 (Ocata).

  • The default policy on os-server-tags has been changed from RULE_ANY (allow all) to RULE_ADMIN_OR_OWNER. This is because server tags should only be manipulated on servers owned by the user or admin. This doesn't have any effect on how the API works.

  • The default value of the [DEFAULT]/firewall_driver configuration option has been changed to nova.virt.firewall.NoopFirewallDriver to coincide with the default value of [DEFAULT]/use_neutron=True.

  • The minimum version of libvirt required by the nova-compute service is now 1.2.9. The minimum version of QEMU required by the nova-compute service is now 2.1.0. When using the libvirt compute driver, failing to meet these minimum versions will prevent the nova-compute service from starting.

  • Parts of the compute REST API are now relying on getting information from cells via their mappings in the nova_api database. This is to support multiple cells. For example, when listing compute hosts or services, all cells will be iterated in the API and the results will be returned.

    This change can have impacts, however, to deployment tooling that relies on parts of the API, like listing compute hosts, before the compute hosts are mapped using the nova-manage cell_v2 discover_hosts command.

    If you were using nova hypervisor-list after starting new nova-compute services to tell when to run nova-manage cell_v2 discover_hosts, you should change your tooling to instead use one of the following commands:

    nova service-list --binary nova-compute [--host <hostname>]
    
    openstack compute service list --service nova-compute [--host <host>]
    

    There is also the [scheduler]/discover_hosts_in_cells_interval configuration option, which can be used to automatically discover hosts from the nova-scheduler service.
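
    For example, periodic discovery from the scheduler could be enabled with a setting like this; the 300-second interval is only an example, and a negative value (the default) disables it:

    ```ini
    [scheduler]
    # Look for unmapped compute hosts every 5 minutes
    discover_hosts_in_cells_interval = 300
    ```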

  • Quota limits and classes are being moved to the API database for Cells v2. In this release, the online data migrations will move any quota limits and classes you have in your main database to the API database, retaining all attributes.

    Note

    Quota limits and classes can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted quota limits and classes are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed.

  • The default policy for os_compute_api:os-quota-sets:detail has been changed to permit listing of quotas with details to project users, not only to admins.

  • The deprecated nova cert daemon is now removed. The /os-certificates API endpoint that depended on this service now returns 410 whenever it is called.

  • The deprecated /os-cloudpipe API endpoint has been removed. Whenever calls are made to that endpoint it now returns a 410 response.

  • The console_driver configuration option in the DEFAULT group was deprecated in the Ocata release and has now been removed.

  • The deprecated configuration options extensions_blacklist and extensions_whitelist, which enabled or disabled API extensions, have been removed. This means that all API extensions are now always enabled. If you have customized policy, double-check that your policy settings are correct for all APIs.

  • All policy rules following the naming scheme os_compute_api:{extension_alias}:discoverable have been removed. These policy rules were used to hide enabled extensions from the list of active API extensions. Since API extensions can no longer be disabled, the option to hide the fact that an API extension is active makes no sense, so all such policy rules have been removed.

  • The nova.virt.libvirt.volume.glusterfs.LibvirtGlusterfsVolumeDriver volume driver has been removed. The GlusterFS volume driver in Cinder was deprecated during the Newton release and was removed from Cinder in the Ocata release so it is effectively not maintained and therefore no longer supported.

    The following configuration options, previously found in the libvirt group, have been removed:

    • glusterfs_mount_point_base

    • qemu_allowed_storage_drivers

    These were used by the now-removed LibvirtGlusterfsVolumeDriver volume driver and therefore no longer had any effect.

  • The cells topic configuration option has been removed. Please make sure your cells related message queue topic is 'cells'.

  • The nova.virt.libvirt.volume.scality.LibvirtScalityVolumeDriver volume driver has been removed. The Scality volume driver in Cinder was deprecated during the Newton release and was removed from Cinder in the Ocata release so it is effectively not maintained and therefore no longer supported.

  • The following configuration options, previously found in the libvirt group, have been removed:

    • scality_sofs_config

    • scality_sofs_mount_point

    These were used by the now-removed LibvirtScalityVolumeDriver volume driver and therefore no longer had any effect.

  • The configuration options for RPC topics, deprecated in a past release, have now been removed entirely. Users never needed to choose the RPC topics for each service; doing so provided almost no benefit, and changing a topic option's value could easily break Nova.

    The following options are removed:

    • compute_topic

    • console_topic

    • consoleauth_topic

    • scheduler_topic

    • network_topic

  • Policy rule with name os_compute_api:os-admin-actions has been removed as it was never used by any API.

  • The [vmware] wsdl_location configuration option has been removed after being deprecated in 15.0.0. It was unused and should have no impact.

  • The configuration options related to image file URLs have been removed. They had been marked deprecated because the feature of downloading images from glance via the filesystem is unused. The following options were removed:

    • image_file_url.filesystems

    • image_file_url.FS.id

    • image_file_url.FS.mountpoint

  • The libvirt.num_iscsi_scan_tries option has been renamed to libvirt.num_volume_scan_tries, as the previous name suggested that the option only concerned devices connected using the iSCSI interface. It also concerns devices connected using fibrechannel, scaleio and disco.

  • A new request_log middleware has been created to log REST HTTP requests even when the Nova API is not running under eventlet.wsgi. Because this is an api-paste.ini change, you must update api-paste.ini manually to get this feature. The new request log is only emitted when nova-api detects that it is not running under eventlet, and it includes the request microversion in addition to all previously logged information.

  • The nova-manage api_db sync and nova-manage db sync commands previously took an optional --version parameter to determine which version to sync to. For example:

    $ nova-manage api_db sync --version some-version
    

    This is now an optional positional argument. For example:

    $ nova-manage api_db sync some-version
    

    Aliases are provided, but these are deprecated and will be removed in the next release.

  • The scheduler now requests allocation candidates from the Placement service during scheduling. The allocation candidates information was introduced in the Placement API 1.10 microversion, so you should upgrade the placement service before the Nova scheduler service so that the scheduler can take advantage of the allocation candidate information.

  • An online data migration now populates the services.uuid column of the nova database for service records that have not been deleted. Listing or showing services via the os-services API has the same effect.

  • Nova is now configured to use the v3 version of the Cinder API. You need to ensure that the v3 version of the Cinder API is available and listed in the service catalog in order to use Nova with the default configuration option.

    The base 3.0 version is identical to v2 and it was introduced in the Newton release of OpenStack. In case you need Nova to continue using the v2 version you can point it towards that by setting the catalog_info option in the nova.conf file under the cinder section, like:

    [cinder]
    catalog_info = volumev2:cinderv2:publicURL
    
  • Since we now use Placement to verify basic CPU/RAM/disk resources when using the FilterScheduler, the RamFilter and DiskFilter entries are being removed from the default value for the enabled_filters config option in the [filter_scheduler] group. If you are overriding this option, you probably should remove them from your version. If you are using CachingScheduler you may wish to enable these filters as we will not use Placement in that case.
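
    An overridden filter list with the removed entries dropped could look like the following; the remaining filters are only an example set, not a recommendation:

    ```ini
    [filter_scheduler]
    # RamFilter and DiskFilter removed; Placement now performs RAM/disk checks
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    ```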

  • WSGI application scripts nova-api-wsgi and nova-metadata-wsgi are now available. They allow running the compute and metadata APIs using a WSGI server of choice (for example nginx and uwsgi, apache2 with mod_proxy_uwsgi or gunicorn). The eventlet-based servers are still available, but the WSGI options will allow greater deployment flexibility.
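
    As a rough sketch, the compute API script could be served with uwsgi using a configuration along these lines; the socket, process counts, and script path are illustrative, not prescriptive:

    ```ini
    [uwsgi]
    wsgi-file = /usr/local/bin/nova-api-wsgi
    socket = 127.0.0.1:8774
    processes = 4
    threads = 2
    master = true
    ```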

Deprecation Notes

  • TypeAffinityFilter is deprecated for removal in the 17.0.0 Queens release. There is no replacement planned for this filter. It is fundamentally flawed in that it relies on the flavors.id primary key and if a flavor "changed", i.e. deleted and re-created with new values, it will result in this filter thinking it is a different flavor, thus breaking the usefulness of this filter.

  • The [api]/allow_instance_snapshots configuration option is now deprecated for removal. To disable snapshots in the createImage server action API, change the os_compute_api:servers:create_image and os_compute_api:servers:create_image:allow_volume_backed policies.

  • The configuration options baremetal_enabled_filters and use_baremetal_filters are deprecated in Pike and should only be used if your deployment still contains nodes that have not had their resource_class attribute set. See Ironic release notes for upgrade concerns.

  • The following scheduler filters are deprecated in Pike: ExactRamFilter, ExactCoreFilter and ExactDiskFilter and should only be used if your deployment still contains nodes that have not had their resource_class attribute set. See Ironic release notes for upgrade concerns.

  • [libvirt]/live_migration_progress_timeout has been deprecated because the feature was found not to work. See bug 1644248 for details.

  • The following options in DEFAULT were only used to configure nova-network and, like nova-network itself, are now deprecated:

    • default_floating_pool (neutron users should use neutron.default_floating_pool)

    • ipv6_backend

    • firewall_driver

    • metadata_host

    • metadata_port

    • iptables_top_regex

    • iptables_bottom_regex

    • iptables_drop_action

    • ldap_dns_url

    • ldap_dns_user

    • ldap_dns_password

    • ldap_dns_soa_hostmaster

    • ldap_dns_servers

    • ldap_dns_base_dn

    • ldap_dns_soa_refresh

    • ldap_dns_soa_retry

    • ldap_dns_soa_expiry

    • ldap_dns_soa_minimum

    • dhcpbridge_flagfile

    • dhcpbridge

    • dhcp_lease_time

    • dns_server

    • use_network_dns_servers

    • dnsmasq_config_file

    • ebtables_exec_attempts

    • ebtables_retry_interval

    • fake_network

    • send_arp_for_ha

    • send_arp_for_ha_count

    • dmz_cidr

    • force_snat_range

    • linuxnet_interface_driver

    • linuxnet_ovs_integration_bridge

    • use_single_default_gateway

    • forward_bridge_interface

    • ovs_vsctl_timeout

    • networks_path

    • public_interface

    • routing_source_ip

    • use_ipv6

    • allow_same_net_traffic

  • When using neutron polling mode with the XenAPI driver, booting a VM would time out because nova-compute could not receive the network-vif-plugged event. This is because vif['id'] (i.e. the neutron port UUID) was set on two different OVS ports: the XenServer VIF and the tap device qvo-XXXX. Setting 'nicira-iface-id' on the XenServer VIF is not correct, so this behavior is deprecated.

  • Many nova-manage commands are now deprecated. The commands and the reasons for their deprecation are as follows:

    account

    This allows for the creation, deletion, update and listing of user and project quotas. Operators should use the equivalent resources in the REST API instead.

    The quota_usage_refresh sub-command has been renamed to nova-manage quota refresh. This new command should be used instead.

    agent

    This allows for the creation, deletion, update and listing of "agent builds". Operators should use the equivalent resources in the REST API instead.

    host

    This allows for the listing of compute hosts. Operators should use the equivalent resources in the REST API instead.

    log

    This allows for the filtering of errors from nova's logs and extraction of all logs from syslog. This command has not been actively maintained in a long time, is not tested, and can be achieved using journalctl or by simply grepping through /var/log. It will simply be removed.

    project

    This is an alias for account and has been deprecated for the same reasons.

    shell

    This starts the Python interactive interpreter. It is a clone of the same functionality found in Django's django-manage command. This command hasn't been actively maintained in a long time and is not tested. It will simply be removed.

    These commands will be removed in their entirety during the Queens cycle.

  • Nova support for using the Block Storage (Cinder) v2 API is now deprecated and will be removed in the 17.0.0 Queens release. The v3 API is now the default and is backward compatible with the v2 API.

  • The [xenserver]/vif_driver configuration option has been deprecated for removal. The XenAPIOpenVswitchDriver vif driver is used with Neutron, while the XenAPIBridgeDriver vif driver is used with nova-network, which is itself deprecated. In the future, the use_neutron configuration option will be used to determine which vif driver to load.

  • The TrustedFilter scheduler filter has been experimental since its existence on May 18, 2012. Due to the lack of tests and activity with it, it's now deprecated and set for removal in the 17.0.0 Queens release.

  • Some unused policies have been deprecated. These are:

    • os_compute_api:os-server-groups

    • os_compute_api:flavors

    Please note you should remove these from your policy file(s).

  • The wsgi_log_format configuration option has been deprecated. It only applies when running nova-api under eventlet, which is no longer the preferred deployment mode.

  • The following APIs which are considered as proxies of Neutron networking API, are deprecated and will result in a 404 error response in microversion 2.44:

    POST /servers/{server_uuid}/action
    {
        "addFixedIp": {...}
    }
    POST /servers/{server_uuid}/action
    {
        "removeFixedIp": {...}
    }
    POST /servers/{server_uuid}/action
    {
        "addFloatingIp": {...}
    }
    POST /servers/{server_uuid}/action
    {
        "removeFloatingIp": {...}
    }
    

    Those server actions can be replaced by calling the Neutron API directly.

    The nova-network specific API to query the server's interfaces is deprecated:

    GET /servers/{server_uuid}/os-virtual-interfaces
    

    To query attached neutron interfaces for a specific server, the API GET /servers/{server_uuid}/os-interface can be used.

  • Scheduling bare metal (ironic) instances using standard resource classes (VCPU, memory, disk) is deprecated and will no longer be supported in Queens. Custom resource classes should be used instead. Please refer to the ironic documentation for a detailed explanation.

  • The os-hosts API is deprecated as of the 2.43 microversion. Requests made with microversion >= 2.43 will result in a 404 error. To list and show host details, use the os-hypervisors API. To enable or disable a service, use the os-services API. There is no replacement for the shutdown, startup, reboot, or maintenance_mode actions as those are system-level operations which should be outside of the control of the compute service.

  • The nova-manage quota refresh command has been deprecated and is now a no-op since quota usage is counted from resources instead of being tracked separately. The command will be removed during the Queens cycle.

  • The --version parameters of the nova-manage api_db sync and nova-manage db sync commands have been deprecated in favor of positional arguments.

  • The CachingScheduler and ChanceScheduler drivers are deprecated in Pike. These are not integrated with the placement service, and their primary purpose (speed over correctness) should be addressed by the default FilterScheduler going forward. If ChanceScheduler behavior is desired (i.e. speed trumps correctness) then configuring the FilterScheduler with no enabled filters should approximate that behavior.

Security Issues

  • [CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets

Bug Fixes

  • In the 2.50 microversion, the following fields are added to the GET /os-quota-class-sets and PUT /os-quota-class-sets/{id} API response:

    • server_groups

    • server_group_members

    The following fields are removed from the same APIs in the same microversion:

    • fixed_ips

    • floating_ips

    • security_groups

    • security_group_rules

    • networks

  • The POST and DELETE operations on the os-assisted-volume-snapshots API will now fail with a 400 error if the related instance is undergoing a task state transition or does not have a host, i.e. is shelved offloaded.

  • Fixed bug 1662699, a regression in the v2.1 API where the block_device_mapping_v2.boot_index validation performed by the legacy v2 API was no longer enforced. With this fix, a request to create a server with boot_index=None is treated as if boot_index was not specified, which means the block device is not bootable by default.

  • Fixed bug 1670522, a regression in the 15.0.0 Ocata release. Without this fix, server creation fails by default with libvirt >= 1.3.3 and QEMU >= 2.7.0 on compute nodes running the libvirt driver with a virt_type that is not "kvm" or "qemu", for example "xen".

  • Includes the fix for bug 1673613 which could cause issues when upgrading and running nova-manage cell_v2 simple_cell_setup or nova-manage cell_v2 map_cell0 where the database connection is read from config and has special characters in the URL.

  • There was a significant increase in database connections because of the way connections to cell databases were established, fixed in bug 1691545. With this fix, database connection objects in the API service are cached and reused rather than a new connection being created each time a cell database is contacted.

  • Custom scheduler drivers can now be used correctly by specifying the name of the custom driver entry point in the [scheduler]/driver configuration option. The entry point in setup.cfg must also be updated.

  • The I/O performance for Quobyte volumes has been increased significantly by disabling xattrs.

  • The ironic virt driver no longer reports an empty inventory for bare metal nodes that have instances on them. Instead the custom resource class, VCPU, memory and disk are reported as they are configured on the node.

  • The /os-quota-sets API calls now validate that the project_id exists in keystone. If the user token has sufficient authority to perform GET /v3/projects/{project_id} and the keystone project does not exist, a 400 BadRequest is returned, preventing invalid project data from being stored in the Nova database. This fixes a silent failure whereby quota data would be stored even for an invalid project_id.

  • Fixed bug 1581230 by removing the internal check_attach call from the Nova code, since it could cause race conditions and the checks are handled by Cinder's reserve_volume. reserve_volume is called for every volume attachment to provide the necessary checks and volume state validation on the Cinder side.

  • The physical network name will be retrieved from a multi-segment network. The current implementation retrieves the physical network name from the first segment that provides it. This is mostly intended to support a combination of vxlan and vlan segments. Additional work will be required to support the case of multiple vlan segments associated with different physical networks.

Other Notes

  • instance.shutdown.end versioned notification will have an empty ip_addresses field since the network resources associated with the instance are deallocated before this notification is sent, which is actually more accurate. Consumers should rely on the instance.shutdown.start notification if they need the network information for the instance when it is being deleted.

  • The PUT /os-services/disable, PUT /os-services/enable and PUT /os-services/force-down APIs to enable, disable, or force-down a service will now only work with nova-compute services. If you are using those APIs to try and disable a non-compute service, like nova-scheduler or nova-conductor, those APIs will result in a 404 response.

    There really never was a good reason to disable or enable non-compute services anyway since that would not do anything. The nova-scheduler and nova-api services are checking the status and forced_down fields to see if instance builds can be scheduled to a compute host or if instances can be evacuated from a downed compute host. There is nothing that relies on a disabled or downed nova-conductor or nova-scheduler service.

  • The [DEFAULT]/enable_new_services configuration option will now only be used to auto-disable new nova-compute services. Other services like nova-conductor, nova-scheduler and nova-osapi_compute will not be auto-disabled since disabling them does nothing functionally, and starting in Pike the PUT /os-services/enable REST API will not be able to find non-compute services to enable them.

  • The 2.45 microversion is introduced which changes the response for the createImage and createBackup server action APIs to no longer return a Location response header. With microversion 2.45 those APIs now return a json dict in the response body with a single image_id key whose value is the snapshot image ID (a uuid). The old Location header in the response before microversion 2.45 is most likely broken and inaccessible by end users since it relies on the internal Glance API server configuration and does not take into account Glance API versions.

  • The Placement API service can be configured to connect to a specific keystone endpoint using the os_interface option in the [placement] section of nova.conf. This value is not required, but it can be used when you want to connect to the Placement service over a non-default endpoint interface. By default, keystoneauth connects to the "public" endpoint.

  • The filter scheduler will now attempt to claim a number of resources in the placement API after determining a list of potential hosts. We attempt to claim these resources for each instance in the build request, and if a claim does not succeed, we try this claim against the next potential host the scheduler selected. This claim retry process can potentially attempt claims against a large number of hosts, and we do not limit the number of hosts to attempt claims against. Claims for resources may fail due to another scheduler process concurrently claiming resources against the same compute node. This concurrent resource claim is normal and the retry of a claim request should be unusual but harmless.

  • With the XenAPI driver, bittorrent support has been deprecated since 15.0.0, so all bittorrent-related files and unit tests have been removed.

  • A small behavior change was introduced by removing the internal check_attach call from Nova.

    reserve_volume call was added to the boot from volume scenario. In case a failure occurs while building the instance, the instance goes into ERROR state while the volume stays in attaching state. The volume state will be set back to available when the instance gets deleted.

    An additional availability zone check has been added to the volume attach flow. The availability zone check is also performed when an instance is unshelved. If you are not affected by availability zones and are not using the AvailabilityZoneFilter scheduler filter, then with the current default setting (cross_az_attach=True) unshelve works as it did before this change, without additional configuration.

  • The disabled os-pci API has been removed. This API was originally added to the v3 API which over time finally became the v2.1 API and the initial microversion is backward compatible with the v2.0 API, where the os-pci extension did not exist. The os-pci API was never enabled as a microversion in the v2.1 API and at this time no longer aligns with Nova strategically and is therefore just technical debt, so it has been removed. Since it was never enabled or exposed out of the compute REST API endpoint there was no deprecation period for this.