Stein Series Release Notes

1.0.0-21

New Features

  • Added a new driver for handling network policy support as introduced in this blueprint.

    To enable it, the following configuration is required:

    [kubernetes]
    enabled_handlers=vif,lb,lbaasspec,namespace,pod_label,policy,kuryrnetpolicy
    pod_subnets_driver=namespace
    pod_security_groups_driver=policy
    

1.0.0

New Features

  • Added the possibility to ensure all OpenStack resources created by Kuryr are tagged. For Neutron the regular tags field is used. If Octavia supports tagging (from Octavia API 2.5, i.e. Stein), its tags field is used as well; otherwise the tags are put into the description field. All of this is controlled by the [neutron_defaults]resource_tags config option, which can hold a list of tags to be put on resources. This feature is useful to correctly identify any leftovers in OpenStack after the K8s cluster Kuryr was serving gets deleted. See the example below.
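
    For example, to tag every Kuryr-created resource (a minimal sketch; the tag values below are placeholders, not defaults):

    [neutron_defaults]
    resource_tags=kuryr,my-k8s-cluster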

  • It is now possible to use the same pool_driver for different pod_vif_drivers when using the MultiVIFPool driver.

    A new config option, vif_pool.vif_pool_mapping, is introduced. It is a dict/mapping from pod_vif_driver to pool_driver, so different pod_vif_drivers can be configured to use the same pool_driver.

    [vif_pool]
    vif_pool_mapping=nested-vlan:nested,neutron-vif:neutron
    

    Earlier, each instance of a pool_driver was mapped to a single pod_vif_driver, thus requiring a unique pool_driver for each pod_vif_driver.

Upgrade Notes

  • As announced, the possibility of running Kuryr-Kubernetes without the kuryr-daemon service has now been removed from the project and is considered unsupported.

  • If the vif_pool.pools_vif_drivers config option is used, the new vif_pool.vif_pool_mapping config option should be populated with the inverted mapping of the present value of vif_pool.pools_vif_drivers, as shown below.
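
    For example, a deployment that previously had (using the driver names from the examples elsewhere in these notes):

    [vif_pool]
    pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif

    would now set the inverted value:

    [vif_pool]
    vif_pool_mapping=nested-vlan:nested,neutron-vif:neutron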

Deprecation Notes

  • Configuration option vif_pool.pools_vif_drivers has been deprecated in favour of vif_pool.vif_pool_mapping to allow reuse of pool_drivers for different pod_vif_drivers.

    If vif_pool_mapping is not configured, pools_vif_drivers will still continue to work for now, but pools_vif_drivers will be completely removed in a future release.

0.6.0

New Features

  • Added support for using cri-o (and podman & buildah) as the container engine in both the container images and DevStack.

Upgrade Notes

  • Before upgrading to T (0.7.x), run kuryr-k8s-status upgrade check to verify that the upgrade is possible. In case of a negative result, refer to the kuryr-kubernetes documentation for mitigation steps.

Deprecation Notes

  • The scripts/run_server.py script has been removed, as it is no longer used by the DevStack plugin.

0.5.0

New Features

  • Kuryr-Kubernetes now supports running the kuryr-controller service in Active/Passive HA mode. This is only possible when running the service as Pods on a Kubernetes cluster, as Kubernetes is used for leader election. It is also required to add a leader-elector container to the kuryr-controller Pods. HA is controlled by the [kubernetes]controller_ha option, which defaults to False; see the sketch below.
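
    A minimal sketch of enabling it in kuryr.conf (only the controller_ha option from this note is shown; deployments also need the leader-elector container described above):

    [kubernetes]
    controller_ha=True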

  • An OpenShift route is a way to expose a service by giving it an externally-reachable hostname like www.example.com. A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications. Each route consists of a route name and target service details. To enable it, the following handlers should be added:

    [kubernetes]
    enabled_handlers=vif,lb,lbaasspec,ingresslb,ocproute
    
  • The CNI daemon now provides health checks allowing the deployer or the orchestration layer to probe it for readiness and liveness.

    These health checks are served and executed by a Manager that runs as part of CNI daemon, and offers two endpoints indicating whether it is ready and alive.

    The Manager validates the presence of NET_ADMIN capabilities, the health status of a transactional database, connectivity with the Kubernetes API, the number of CNI add failures, the health of CNI components and the amount of memory being consumed. The health check fails if any of these checks does not pass, causing the orchestration layer to restart the daemon. More information can be found in the kuryr-kubernetes documentation.

  • Introduced a pluggable interface for the Kuryr controller handlers. Each controller handler associates itself with a specific Kubernetes object kind and is expected to process the events of the watched Kubernetes API endpoints. The pluggable handlers framework enables both using externally provided handlers in the Kuryr Controller and controlling which handlers should be active.

    To control which Kuryr Controller handlers should be active, the selected handlers need to be included in kuryr.conf in the ‘kubernetes’ section. If not specified, the Kuryr Controller will run the default handlers. For example, to enable only the ‘vif’ controller handler, set the following in kuryr.conf:

    [kubernetes]
    enabled_handlers=vif
    
  • Adds a new multi pool driver to support hybrid environments where some nodes are Bare Metal while others are running inside VMs, therefore having different VIF drivers (e.g., neutron-vif and nested-vlan).

    This new multi pool driver is the default pool driver, used even if a different vif_pool_driver is set in the config option. However, if the mapping between the different pools and pod VIF drivers is not provided in the pools_vif_drivers config option of the vif_pool configuration section, only one pool driver will be loaded, using the standard vif_pool_driver and pod_vif_driver config options, i.e., the ones selected in kuryr.conf.

    To enable the option of having different pools depending on the node’s pod vif types, you need to state the type of pool that you want for each pod vif driver, e.g.:

    [vif_pool]
    pools_vif_drivers=nested:nested-vlan,neutron:neutron-vif
    

    This will use a pool driver named nested to handle the pods whose VIF driver is nested-vlan, and a pool driver named neutron to handle the pods whose VIF driver is neutron-vif. When the controller requests a VIF for a pod on node X, it will first read the node’s annotation about which pod_vif driver to use, e.g., pod_vif: nested-vlan, and then use the corresponding pool driver – which has the right pod_vif driver set.

    Note that if no annotation is set on a node, the default pod_vif_driver is used.

  • Introduced a new subnet driver that is able to create a new subnet (including the network and its connection to the router) for each namespace creation event.

    To enable it, the namespace subnet driver must be selected and the namespace handler needs to be enabled:

    [kubernetes]
    enabled_handlers=vif,lb,lbaasspec,namespace
    pod_subnets_driver = namespace
    
  • Migrated all upstream gates to the Zuul V3 [1] native format. This change also introduces several new, for now experimental, gates, such as multinode and centos-7 based ones. These will be moved to check and made voting once they have been behaving stably for some time.

Upgrade Notes

  • Legacy Kuryr deployment without running kuryr-daemon is now considered deprecated. That possibility will be completely removed in one of the next releases. Please note that this means that the [cni_daemon]daemon_enabled option will default to True.

  • Legacy Kuryr deployment relying on neutron-lbaas as the LBaaSv2 endpoint is now deprecated. The possibility of using it as Kuryr’s lbaasv2 endpoint will be totally removed in one of the next releases.

  • For the Kuryr Kubernetes watcher, a new option ‘watch_retry_timeout’ has been added. The following should be modified in kuryr.conf:

    [kubernetes]
    # 'watch_retry_timeout' field is optional,
    # default = 60 if not set.
    watch_retry_timeout = <Time in seconds>
    
  • For the external services (type=LoadBalancer) case, a new field ‘external_svc_net’ was added and the ‘external_svc_subnet’ field became optional. The following should be modified in kuryr.conf:

    [neutron_defaults]
    external_svc_net= <id of external network>
    # 'external_svc_subnet' field is optional, set this field in case
    # multiple subnets are attached to 'external_svc_net'
    external_svc_subnet= <id of external subnet>
    
  • As OpenStack performance differs between production environments, a fixed timeout for LBaaS activation might cause kuryr-kubernetes errors. In order to adapt to the environment, a new option, [neutron_defaults]lbaas_activation_timeout, was added; see the sketch below.
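
    A minimal sketch for kuryr.conf (the placeholder mirrors the style of the other examples here; no default value is implied):

    [neutron_defaults]
    # How long Kuryr waits for LBaaS activation.
    lbaas_activation_timeout = <Time in seconds>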

Deprecation Notes

  • Running Kuryr-Kubernetes without the kuryr-daemon service is now deprecated. Motivations for that move include:

    • Discoveries of bugs that are much easier to fix in kuryr-daemon.

    • Further improvements in Kuryr scalability (e.g. moving the choice of a VIF from the pool into kuryr-daemon) are only possible when kuryr-daemon is present.

    The possibility of running Kuryr-Kubernetes without kuryr-daemon will be removed in one of the future releases.

  • Running Kuryr-Kubernetes with neutron-lbaasv2 is now deprecated. The main motivation for this is the deprecation of the neutron-lbaas implementation in favour of Octavia.

    The possibility of running Kuryr-Kubernetes with the lbaas handler pointing to anything but Octavia or SDN lbaas implementations will be removed in future releases.

Bug Fixes

  • In production environments the K8s API server is often temporarily down and restored soon after. Since kuryr-kubernetes watches K8s resources by connecting to the K8s API server, the watcher fails to watch the resources if the API server is down. To fix this, the watcher now retries connecting to the K8s API server for a specific time duration when an exception is raised.

  • It is very common for production environments to only allow access to the public network and not to the associated public subnets. In that case, allocating a floating IP for the LoadBalancer service type fails. To fix this, an option for specifying the network ID was added and the subnet config option was made optional.

0.4.0

New Features

  • Kuryr can now be run in containers on top of the K8s cluster it is providing networking for. A tool to generate the K8s resource definitions is provided. More information can be found in the kuryr-kubernetes documentation.

  • Introduced the kuryr-daemon service. The daemon is an optional service that should run on every Kubernetes node. It is responsible for watching pod events on the node it’s running on, answering calls from the CNI driver and attaching VIFs when they are ready. This helps to limit the number of processes spawned when creating multiple Pods, as a single watcher is enough for each node and the CNI driver will only wait on a local network socket for a response from the daemon. See the configuration sketch below.
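
    To opt in, the [cni_daemon]daemon_enabled option mentioned in the 0.5.0 upgrade notes above is set in kuryr.conf; a minimal sketch:

    [cni_daemon]
    daemon_enabled=True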

0.3.0

New Features

  • oslo.cache support has been added to the default_subnet driver. This allows skipping unnecessary calls to Neutron to retrieve network and subnet information that can be cached and retrieved faster. It includes the generic oslo caching options to enable/disable caching as well as to specify the backend to use (dogpile.cache.memory by default). In addition it includes specific options for the subnet driver, to enable/disable caching just for this driver, as well as to specify the cache time. To change the default configuration, the following can be modified in kuryr.conf:

    [cache]
    enable=True
    backend=dogpile.cache.memory
    
    [subnet_caching]
    caching=True
    cache_time=3600
    

Other Notes

  • Started using reno for release notes.