Victoria Series Release Notes

11.2.0

New Features

  • Support the hyperkube_prefix label, which defaults to k8s.gcr.io/. Users now have the option to define an alternative hyperkube image source, since the default source has discontinued publication of hyperkube images for kube_tag greater than 1.18.x. Note that if the container_infra_prefix label is defined, it still takes precedence over this label.
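
    For example, a cluster can point at an alternative hyperkube image source
    through this label (a minimal sketch; the registry shown is only an
    illustration, not a recommendation):

      openstack coe cluster create my-cluster \
        --cluster-template my-template \
        --labels hyperkube_prefix=docker.io/rancher/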

11.1.0

Upgrade Notes

  • The default admission controller list has been updated to "NodeRestriction, PodSecurityPolicy, NamespaceLifecycle, LimitRanger, ServiceAccount, ResourceQuota, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass".

  • The default containerd version is updated to 1.4.3.

Bug Fixes

  • Fixes a regression which left behind trustee user accounts and certificates when a cluster is deleted.

  • Fixes database migrations with SQLAlchemy 1.3.20.

  • Fixes an issue with cluster deletion if load balancers do not exist. See story 2008548 <https://storyboard.openstack.org/#!/story/2008548> for details.

11.0.0

New Features

  • Users can enable or disable master_lb_enabled when creating a cluster.
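
    A sketch of overriding the template setting at creation time, assuming the
    client's --master-lb-enabled flag for coe cluster create (names are
    illustrations only):

      openstack coe cluster create my-cluster \
        --cluster-template my-template \
        --master-lb-enabled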

  • The default 10-second health polling interval is too frequent for most cases, so it has been changed to 60 seconds. A new config option, health_polling_interval, makes the interval configurable. Cloud admins can disable health polling entirely by setting a negative value for this option.
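
    A minimal magnum.conf sketch, using the [kubernetes] section named in the
    upgrade notes below (the first value is the new default; the commented
    line shows how polling could be disabled):

      [kubernetes]
      health_polling_interval = 60
      # health_polling_interval = -1   (a negative value disables polling)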

  • Expose autoscaler Prometheus metrics on the pod port named metrics (8085).

  • Add a new label named master_lb_allowed_cidrs to control the IP ranges which can access the Kubernetes API and etcd load balancers of the master nodes. To use this feature, the minimum version of Heat is stable/ussuri and the minimum version of Octavia is stable/train.
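
    A sketch of restricting API and etcd access at cluster creation time (the
    CIDR is only an illustration):

      openstack coe cluster create my-cluster \
        --cluster-template my-template \
        --labels master_lb_allowed_cidrs=192.168.0.0/16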

  • A new boolean flag is introduced in the Cluster and Nodegroup create API calls. Using this flag, users can override label values when clusters or nodegroups are created without having to specify all the inherited values. To do that, users have to specify the labels with their new values and use the flag --merge-labels. At the same time, three new fields are added to the cluster and nodegroup show outputs, showing the differences between the actual and the inherited labels.
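
    For example, overriding a single label while keeping every other label
    inherited from the cluster template (a sketch; the label and its value are
    illustrations only):

      openstack coe cluster create my-cluster \
        --cluster-template my-template \
        --labels auto_scaling_enabled=true \
        --merge-labels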

  • Magnum now cascade-deletes all load balancers before deleting the cluster, including not only the load balancers for cluster services and ingresses but also those for the Kubernetes API and etcd endpoints.

  • Support the Helm v3 client for installing helm charts. To use this feature, users will need to use helm_client_tag>=v3.0.0 (default helm_client_tag=v3.2.1). All the existing charts that used to depend on Helm v2, e.g. the nginx ingress controller, metrics server, prometheus operator and prometheus adapter, are now also installable using the v3 client. The labels helm_client_sha256 and helm_client_url are also introduced so that users can install a non-default helm client version (https://github.com/helm/helm/releases).
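
    A sketch of selecting a non-default helm client through these labels (the
    URL and checksum are placeholders, not real values):

      openstack coe cluster create my-cluster \
        --cluster-template my-template \
        --labels helm_client_tag=v3.2.1 \
        --labels helm_client_url=https://example.com/helm-v3.2.1-linux-amd64.tar.gz \
        --labels helm_client_sha256=<sha256-of-that-binary> \
        --merge-labels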

  • Kubernetes cluster owners can now rotate the CA certificate to regenerate the CA of the cluster; the service account keys and the certificates of all nodes are regenerated as well. Cluster users need to fetch a new kubeconfig to access the Kubernetes API. This function is only supported by the Fedora CoreOS driver.
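
    A sketch of the rotation from the CLI, assuming the existing coe ca
    commands (the cluster name is an illustration):

      openstack coe ca rotate my-cluster
      # afterwards, fetch a fresh kubeconfig
      openstack coe cluster config my-cluster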

  • Cloud admin users can now perform a rolling upgrade on behalf of an end user in order to apply urgent security patches when necessary.
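
    The upgrade itself is the usual rolling-upgrade call, now also available to
    cloud admins acting on a user's cluster (a sketch; names are illustrations):

      openstack coe cluster upgrade my-cluster my-new-cluster-template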

  • Add the cluster_uuid label to the metrics exported for Prometheus federation.

Upgrade Notes

  • If a 10-second health polling interval is still preferred for Kubernetes clusters, it can be set via the health_polling_interval config option under the [kubernetes] section.

  • The cinder_csi_enabled label defaults to True from the Victoria cycle.

  • The default version of the Kubernetes dashboard has been upgraded to v2.0.0, and metrics-server is now supported by the Kubernetes dashboard.

  • The default tiller_tag is set to v2.16.7. The charts remain compatible, but helm_client_tag will also need to be set to the same value as tiller_tag, i.e. v2.16.7. In this case, the user will also need to provide helm_client_sha256 for the helm client binary intended for use.

  • Bumped prometheus-operator chart tag to 8.12.13. Added container_infra_prefix to missing prometheusOperator images.

Deprecation Notes

  • Deprecate the in-tree Cinder volume driver for removal in the X cycle in favour of the out-of-tree Cinder CSI plugin.

  • The devicemapper and overlay storage drivers are deprecated in favor of overlay2 in Docker, and will be removed from Docker in a future release. Users of the devicemapper and overlay storage drivers are recommended to migrate to a different storage driver, such as overlay2. overlay2 will be set as the default storage driver from the Victoria cycle.
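
    A sketch of selecting overlay2 explicitly on a new cluster template (the
    image and network names are illustrations, and other optional arguments
    are omitted):

      openstack coe cluster template create my-template \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --docker-storage-driver overlay2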

  • Support for the Helm v2 client will be removed in the X release.

Bug Fixes

  • Deploy traefik from the heat-agent

    Use kubectl from the heat agent to apply the traefik deployment. The previous behaviour was to create a systemd unit to send the manifests to the API.

    This way there is only one method for applying manifests to the API.

    This change addresses the kubectl change [0] that no longer uses 127.0.0.1:8080 as the default Kubernetes API address.

    [0] https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#kubectl

  • Fixes an edge case where patching a cluster with additional nodegroups with health_status and health_status_reason led to the default-worker nodegroup being resized.

  • The label fixed_network_cidr has been renamed to fixed_subnet_cidr, and it can now be passed in and set correctly.

  • Fixes an issue with private clusters getting stuck in the CREATE_IN_PROGRESS status when floating_ip_enabled=True in the cluster template but floating IPs are disabled when the cluster is created.

  • There was a corner case where floating_ip_enabled=False, master_lb_enabled=True and master_lb_floating_ip_enabled=False were set in the cluster template, but floating_ip_enabled=True was set when creating the cluster, which caused the api_address of the cluster to be missing an IP address. This issue has now been fixed.

  • The Prometheus server now scrapes metrics from the traefik proxy and from the cluster autoscaler.

  • Scrape metrics from kube-controller-manager and kube-scheduler. Disable the PrometheusRule for etcd.