
Multinode

Overview

In order to drive towards a production-ready OpenStack solution, our goal is to provide containerized, yet stable persistent volumes that Kubernetes can use to schedule applications that require state, such as MariaDB (Galera). Although the project takes a “batteries included” approach towards persistent storage, we want to allow operators to define their own solutions as well. Examples of this work will be documented in another section; however, evidence of it can be found throughout the project. If you find any issues or gaps, please create a story to track what can be done to improve our documentation.

Note

Please see the supported application versions outlined in the source variable file.

Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.

The installation procedures below will take an administrator from a new kubeadm installation to an OpenStack-Helm deployment.

Note

Many of the default container images that are referenced across OpenStack-Helm charts are not intended for production use; for example, while LOCI and Kolla can be used to produce production-grade images, their public reference images are not production-grade. In addition, some of the default images use latest or master tags, which are moving targets and can lead to unpredictable behavior. For production-like deployments, we recommend building custom images, or at minimum caching a set of known images, and incorporating them into OpenStack-Helm via values overrides.
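
For illustration, image overrides follow the same values pattern used throughout this guide. A minimal sketch is below; the registry host and tags are placeholders, and the available keys for each chart are defined under images.tags in its values.yaml:

tee /tmp/glance-images.yaml << EOF
images:
  tags:
    # Hypothetical registry and tags; substitute images you have built or cached
    glance_api: registry.example.com/openstack/glance:pinned
    glance_registry: registry.example.com/openstack/glance-registry:pinned
EOF
# Then pass the file at deploy time with --values=/tmp/glance-images.yaml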

Warning

Until the Ubuntu kernel shipped with 16.04 supports CephFS subvolume mounts by default, the HWE kernel is required to use CephFS.

Kubernetes Preparation

You can use any Kubernetes deployment tool to bring up a working Kubernetes cluster for use with OpenStack-Helm. For production deployments, please choose (and tune appropriately) a highly-resilient Kubernetes distribution, e.g.:

  • Airship, a declarative open cloud infrastructure platform

  • KubeADM, the foundation of a number of Kubernetes installation solutions

For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts can be used to quickly deploy a multinode Kubernetes cluster using KubeADM and Ansible. Please refer to the deployment guide here.

Managing and configuring a Kubernetes cluster is beyond the scope of OpenStack-Helm and this guide.
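
Whichever tool is used, note that the deployment scripts in this guide place components using Kubernetes node labels: the Ceph OSD chart targets nodes labelled ceph-osd=enabled, and several singleton agents target openstack-helm-node-class=primary. A minimal sketch of applying these labels, with hypothetical node names:

kubectl label nodes node-one node-two node-three ceph-osd=enabled
kubectl label nodes node-one openstack-helm-node-class=primary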

Deploy OpenStack-Helm

Note

The following commands all assume that they are run from the /opt/openstack-helm directory.

Set up clients on the host and assemble the charts

Installing the OpenStack clients and Kubernetes RBAC rules, along with assembling the charts, can be performed by running the following commands:

#!/bin/bash
set -xe

sudo -H -E pip install "cmd2<=0.8.7"
sudo -H -E pip install \
-c${UPPER_CONSTRAINTS_FILE:=https://releases.openstack.org/constraints/upper/master} \
python-openstackclient python-heatclient --ignore-installed

sudo -H mkdir -p /etc/openstack
sudo -H chown -R $(id -un): /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

#NOTE: Build charts
make all

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/010-setup-client.sh

Deploy the ingress controller

export OSH_DEPLOY_MULTINODE=True

#!/bin/bash
set -xe

#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_INGRESS:="$(./tools/deployment/common/get-values-overrides.sh ingress)"}

#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} ingress

#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
tee /tmp/ingress-kube-system.yaml << EOF
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF

touch /tmp/ingress-component.yaml

if [ -n "${OSH_DEPLOY_MULTINODE}" ]; then
  tee --append /tmp/ingress-kube-system.yaml << EOF
pod:
  replicas:
    error_page: 2
EOF

  tee /tmp/ingress-component.yaml << EOF
pod:
  replicas:
    ingress: 2
    error_page: 2
EOF
fi

helm upgrade --install ingress-kube-system ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=kube-system \
  --values=/tmp/ingress-kube-system.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_KUBE_SYSTEM}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh kube-system

#NOTE: Display info
helm status ingress-kube-system

#NOTE: Deploy namespace ingress
helm upgrade --install ingress-openstack ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=openstack \
  --values=/tmp/ingress-component.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_OPENSTACK}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Display info
helm status ingress-openstack

helm upgrade --install ingress-ceph ${HELM_CHART_ROOT_PATH}/ingress \
  --namespace=ceph \
  --values=/tmp/ingress-component.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_CEPH}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh ceph

#NOTE: Display info
helm status ingress-ceph

Alternatively, this step can be performed by running the script directly:

OSH_DEPLOY_MULTINODE=True ./tools/deployment/component/common/ingress.sh
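
Beyond the helm status output above, the controller pods themselves can be inspected; the application=ingress label used below follows the chart's labelling convention, which the RadosGW network policy later in this guide also relies on:

kubectl get pods --namespace=kube-system -l application=ingress -o wide
kubectl get pods --namespace=openstack -l application=ingress -o wide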

Deploy Ceph

The script below configures Ceph to use filesystem directory-based storage. To configure a custom block device-based backend, please refer to the ceph-osd values.yaml.
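
As a sketch, a block device-based OSD takes the following shape in the same conf.storage.osd list used below. The block-logical type follows the ceph-osd values.yaml; the device paths are examples only, and any data on the named devices will be destroyed:

conf:
  storage:
    osd:
      - data:
          type: block-logical
          location: /dev/sdb
        journal:
          type: block-logical
          location: /dev/sdc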

Additional information on Kubernetes Ceph-based integration can be found in the documentation for the CephFS and RBD storage provisioners, as well as for the alternative NFS provisioner.

Warning

The upstream Ceph image repository does not currently pin tags to specific Ceph point releases. This can lead to unpredictable results in long-lived deployments. In production scenarios, we strongly recommend overriding the Ceph images to use either custom built images or controlled, cached images.

Note

The ./tools/deployment/multinode/kube-node-subnet.sh script requires Docker to run.

#!/bin/bash
set -xe

#NOTE: Deploy command
[ -s /tmp/ceph-fs-uuid.txt ] || uuidgen > /tmp/ceph-fs-uuid.txt
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
CEPH_FS_ID="$(cat /tmp/ceph-fs-uuid.txt)"
#NOTE(portdirect): to use RBD devices with kernels < 4.5 this should be set to 'hammer'
LOWEST_CLUSTER_KERNEL_VERSION=$(kubectl get node -o go-template='{{range .items}}{{.status.nodeInfo.kernelVersion}}{{"\n"}}{{ end }}' | sort -V | head -1)
KERNEL_MAJOR="$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $1 }')"
KERNEL_MINOR="$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $2 }')"
if [ "${KERNEL_MAJOR}" -lt "4" ] || { [ "${KERNEL_MAJOR}" -eq "4" ] && [ "${KERNEL_MINOR}" -lt "15" ]; }; then
  echo "Using hammer crush tunables"
  CRUSH_TUNABLES=hammer
else
  CRUSH_TUNABLES=null
fi
NUMBER_OF_OSDS="$(kubectl get nodes -l ceph-osd=enabled --no-headers | wc -l)"
tee /tmp/ceph.yaml << EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  storage_secrets: true
  ceph: true
  rbd_provisioner: true
  cephfs_provisioner: false
  client_secrets: false
bootstrap:
  enabled: true
conf:
  ceph:
    global:
      fsid: ${CEPH_FS_ID}
  pool:
    crush:
      tunables: ${CRUSH_TUNABLES}
    target:
      osd: ${NUMBER_OF_OSDS}
      pg_per_osd: 100
  storage:
    osd:
      - data:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/osd-one
        journal:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/journal-one
storageclass:
  cephfs:
    provision_storage_class: false
manifests:
  deployment_cephfs_provisioner: false
  job_cephfs_client_key: false
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
for CHART in ceph-mon ceph-osd ceph-client ceph-provisioners; do
  helm upgrade --install ${CHART} ${OSH_INFRA_PATH}/${CHART} \
    --namespace=ceph \
    --values=/tmp/ceph.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_CEPH_DEPLOY}

  #NOTE: Wait for deploy
  ./tools/deployment/common/wait-for-pods.sh ceph 1200

  #NOTE: Validate deploy
  MON_POD=$(kubectl get pods \
    --namespace=ceph \
    --selector="application=ceph" \
    --selector="component=mon" \
    --no-headers | awk '{ print $1; exit }')
  kubectl exec -n ceph ${MON_POD} -- ceph -s
done

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/030-ceph.sh
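
Once all four charts have deployed, it is worth confirming that the RBD provisioner registered its StorageClass (named general in the default values), since the MariaDB chart deployed later will request volumes from it:

kubectl get storageclass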

Activate the openstack namespace to be able to use Ceph

#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
tee /tmp/ceph-openstack-config.yaml <<EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: false
  rbd_provisioner: false
  cephfs_provisioner: false
  client_secrets: true
bootstrap:
  enabled: false
storageclass:
  cephfs:
    provision_storage_class: false
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install ceph-openstack-config ${OSH_INFRA_PATH}/ceph-provisioners \
  --namespace=openstack \
  --values=/tmp/ceph-openstack-config.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CEPH_NS_ACTIVATE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status ceph-openstack-config

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/040-ceph-ns-activate.sh
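
As a quick check that the client secrets were generated, the Ceph client key secret in the openstack namespace can be inspected; the secret name below reflects the chart's defaults:

kubectl get secret --namespace=openstack pvc-ceph-client-key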

Deploy MariaDB

#!/bin/bash
set -xe

#NOTE: Deploy command
tee /tmp/mariadb.yaml << EOF
pod:
  replicas:
    server: 3
    ingress: 3
EOF
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install mariadb ${OSH_INFRA_PATH}/mariadb \
    --namespace=openstack \
    --values=/tmp/mariadb.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_MARIADB}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status mariadb

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/050-mariadb.sh
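
MariaDB is the first chart in this guide to request persistent storage, so this is also a good point to confirm that its volume claims were bound by the Ceph-backed provisioner deployed earlier:

kubectl get pvc --namespace=openstack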

Deploy RabbitMQ

#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install rabbitmq ${OSH_INFRA_PATH}/rabbitmq \
    --namespace=openstack \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_RABBITMQ}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status rabbitmq

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/060-rabbitmq.sh

Deploy Memcached

#!/bin/bash
set -xe

#NOTE: Get the over-rides to use
export HELM_CHART_ROOT_PATH="${HELM_CHART_ROOT_PATH:="${OSH_INFRA_PATH:="../openstack-helm-infra"}"}"
: ${OSH_EXTRA_HELM_ARGS_MEMCACHED:="$(./tools/deployment/common/get-values-overrides.sh memcached)"}

#NOTE: Lint and package chart
make -C ${HELM_CHART_ROOT_PATH} memcached

#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install memcached ${HELM_CHART_ROOT_PATH}/memcached \
    --namespace=openstack \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_MEMCACHED}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status memcached

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/070-memcached.sh

Deploy Keystone

#!/bin/bash
set -xe

#NOTE: Deploy command
helm upgrade --install keystone ./keystone \
    --namespace=openstack \
    --set pod.replicas.api=2 \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_KEYSTONE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
# Delete the test pod if it still exists
kubectl delete pods -l application=keystone,release_group=keystone,component=test --namespace=openstack --ignore-not-found
helm test keystone --timeout 900

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/080-keystone.sh
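
With the clouds.yaml written during client setup and the ingress rules now in place, requesting a token gives a quick end-to-end check of the identity service:

export OS_CLOUD=openstack_helm
openstack token issue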

Deploy Rados Gateway for object store

#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
tee /tmp/radosgw-openstack.yaml <<EOF
endpoints:
  identity:
    namespace: openstack
  object_store:
    namespace: openstack
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: true
bootstrap:
  enabled: false
conf:
  rgw_ks:
    enabled: true
network_policy:
  ceph:
    ingress:
      - from:
        - podSelector:
            matchLabels:
              application: glance
        - podSelector:
            matchLabels:
              application: cinder
        - podSelector:
            matchLabels:
              application: libvirt
        - podSelector:
            matchLabels:
              application: nova
        - podSelector:
            matchLabels:
              application: ceph
        - podSelector:
            matchLabels:
              application: ingress
        ports:
        - protocol: TCP
          port: 8088
manifests:
  network_policy: true
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install radosgw-openstack ${OSH_INFRA_PATH}/ceph-rgw \
  --namespace=openstack \
  --set manifests.network_policy=true \
  --values=/tmp/radosgw-openstack.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CEPH_RGW}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status radosgw-openstack

#NOTE: Run Tests
export OS_CLOUD=openstack_helm
helm test radosgw-openstack

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/090-ceph-radosgateway.sh
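
Because rgw_ks is enabled in the values above, RadosGW registers with Keystone as the object-store endpoint, so the standard client commands can exercise it directly; the container name is arbitrary:

export OS_CLOUD=openstack_helm
openstack container create test-container
openstack container list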

Deploy Glance

#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_OPENSTACK_RELEASE:="newton"}
#NOTE(portdirect): this could be: radosgw, rbd, swift or pvc
: ${GLANCE_BACKEND:="swift"}
tee /tmp/glance.yaml <<EOF
storage: ${GLANCE_BACKEND}
pod:
  replicas:
    api: 2
    registry: 2
EOF
if [ "x${OSH_OPENSTACK_RELEASE}" == "xnewton" ]; then
# NOTE(portdirect): glance APIv1 is required for heat in Newton
  tee -a /tmp/glance.yaml <<EOF
conf:
  glance:
    DEFAULT:
      enable_v1_api: true
      enable_v2_registry: true
manifests:
  deployment_registry: true
  ingress_registry: true
  pdb_registry: true
  service_ingress_registry: true
EOF
fi
helm upgrade --install glance ./glance \
  --namespace=openstack \
  --values=/tmp/glance.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_GLANCE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'
# Delete the test pod if it still exists
kubectl delete pods -l application=glance,release_group=glance,component=test --namespace=openstack --ignore-not-found
helm test glance --timeout 900

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/100-glance.sh
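
Beyond listing the bootstrap image, uploading an image of your own exercises the selected storage backend end to end; for example, with a downloaded CirrOS image:

export OS_CLOUD=openstack_helm
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros-test" \
  --disk-format qcow2 \
  --container-format bare \
  --public \
  --file cirros-0.3.5-x86_64-disk.img
openstack image list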

Deploy Cinder

#!/bin/bash
set -xe

#NOTE: Deploy command
tee /tmp/cinder.yaml << EOF
pod:
  replicas:
    api: 2
    volume: 1
    scheduler: 1
    backup: 1
conf:
  cinder:
    DEFAULT:
      backup_driver: cinder.backup.drivers.swift
EOF
helm upgrade --install cinder ./cinder \
  --namespace=openstack \
  --values=/tmp/cinder.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CINDER}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list
# Delete the test pod if it still exists
kubectl delete pods -l application=cinder,release_group=cinder,component=test --namespace=openstack --ignore-not-found
helm test cinder --timeout 900

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/110-cinder.sh
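
To exercise the volume service against the Ceph backend, a small test volume can be created and then removed:

export OS_CLOUD=openstack_helm
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume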

Deploy OpenvSwitch

#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install openvswitch ${OSH_INFRA_PATH}/openvswitch \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_OPENVSWITCH}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status openvswitch

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/120-openvswitch.sh

Deploy Libvirt

#!/bin/bash
set -xe

#NOTE: Deploy libvirt
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install libvirt ${OSH_INFRA_PATH}/libvirt \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_LIBVIRT}

#NOTE(portdirect): We don't wait for libvirt pods to come up, as they depend
# on the neutron agents being up.

#NOTE: Validate Deployment info
helm status libvirt

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/130-libvirt.sh

Deploy Compute Kit (Nova and Neutron)

#!/bin/bash
set -xe

#NOTE: Deploy nova
tee /tmp/nova.yaml << EOF
labels:
  api_metadata:
    node_selector_key: openstack-helm-node-class
    node_selector_value: primary
pod:
  replicas:
    api_metadata: 1
    placement: 2
    osapi: 2
    conductor: 2
    consoleauth: 2
    scheduler: 1
    novncproxy: 1
EOF

function kvm_check () {
  POD_NAME="tmp-$(cat /dev/urandom | env LC_CTYPE=C tr -dc a-z | head -c 5; echo)"
  cat <<EOF | kubectl apply -f - 1>&2;
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
spec:
  hostPID: true
  restartPolicy: Never
  containers:
  - name: util
    securityContext:
      privileged: true
    image: docker.io/busybox:latest
    command:
      - sh
      - -c
      - |
        nsenter -t1 -m -u -n -i -- sh -c "kvm-ok >/dev/null && echo yes || echo no"
EOF
  end=$(($(date +%s) + 900))
  until kubectl get pod/${POD_NAME} -o go-template='{{.status.phase}}' | grep -q Succeeded; do
    now=$(date +%s)
    [ $now -gt $end ] && echo containers failed to start. && \
        kubectl get pod/${POD_NAME} -o wide && exit 1
  done
  kubectl logs pod/${POD_NAME}
  kubectl delete pod/${POD_NAME} 1>&2;
}

if [ "x$(kvm_check)" == "xyes" ]; then
  echo 'OSH is not being deployed in a virtualized environment'
  helm upgrade --install nova ./nova \
      --namespace=openstack \
      --values=/tmp/nova.yaml \
      ${OSH_EXTRA_HELM_ARGS} \
      ${OSH_EXTRA_HELM_ARGS_NOVA}
else
  echo 'OSH is being deployed in a virtualized environment, using qemu for nova'
  helm upgrade --install nova ./nova \
      --namespace=openstack \
      --values=/tmp/nova.yaml \
      --set conf.nova.libvirt.virt_type=qemu \
      --set conf.nova.libvirt.cpu_mode=none \
      ${OSH_EXTRA_HELM_ARGS} \
      ${OSH_EXTRA_HELM_ARGS_NOVA}
fi

#NOTE: Deploy neutron; for simplicity we will assume the default route device
# should be used for tunnels
function network_tunnel_dev () {
  POD_NAME="tmp-$(cat /dev/urandom | env LC_CTYPE=C tr -dc a-z | head -c 5; echo)"
  cat <<EOF | kubectl apply -f - 1>&2;
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: util
    image: docker.io/busybox:latest
    command:
    - 'ip'
    - '-4'
    - 'route'
    - 'list'
    - '0/0'
EOF
  end=$(($(date +%s) + 900))
  until kubectl get pod/${POD_NAME} -o go-template='{{.status.phase}}' | grep -q Succeeded; do
    now=$(date +%s)
    [ $now -gt $end ] && echo containers failed to start. && \
        kubectl get pod/${POD_NAME} -o wide && exit 1
  done
  kubectl logs pod/${POD_NAME} | awk '{ print $5; exit }'
  kubectl delete pod/${POD_NAME} 1>&2;
}

NETWORK_TUNNEL_DEV="$(network_tunnel_dev)"
tee /tmp/neutron.yaml << EOF
network:
  interface:
    tunnel: "${NETWORK_TUNNEL_DEV}"
labels:
  agent:
    dhcp:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    l3:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    metadata:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
pod:
  replicas:
    server: 2
conf:
  neutron:
    DEFAULT:
      l3_ha: False
      max_l3_agents_per_router: 1
      l3_ha_network_type: vxlan
      dhcp_agents_per_network: 1
  plugins:
    ml2_conf:
      ml2_type_flat:
        flat_networks: public
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
      ovs:
        bridge_mappings: public:br-ex
EOF

if [ -n "$OSH_OPENSTACK_RELEASE" ]; then
  if [ -e "./neutron/values_overrides/${OSH_OPENSTACK_RELEASE}.yaml" ] ; then
    echo "Adding release overrides for ${OSH_OPENSTACK_RELEASE}"
    OSH_RELEASE_OVERRIDES_NEUTRON="--values=./neutron/values_overrides/${OSH_OPENSTACK_RELEASE}.yaml"
  fi
fi

helm upgrade --install neutron ./neutron \
    --namespace=openstack \
    --values=/tmp/neutron.yaml \
    ${OSH_RELEASE_OVERRIDES_NEUTRON} \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_NEUTRON}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack compute service list
openstack network agent list
# Delete the test pods if they still exist
kubectl delete pods -l application=nova,release_group=nova,component=test --namespace=openstack --ignore-not-found
kubectl delete pods -l application=neutron,release_group=neutron,component=test --namespace=openstack --ignore-not-found

timeout=${OSH_TEST_TIMEOUT:-900}
helm test nova --timeout $timeout
helm test neutron --timeout $timeout

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/140-compute-kit.sh
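
In addition to the service listings above, confirming that each compute node has registered a hypervisor is a useful sanity check before launching workloads:

export OS_CLOUD=openstack_helm
openstack hypervisor list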

Deploy Heat

#!/bin/bash
set -xe

#NOTE: Deploy command
tee /tmp/heat.yaml << EOF
pod:
  replicas:
    api: 2
    cfn: 2
    cloudwatch: 2
    engine: 2
EOF
helm upgrade --install heat ./heat \
  --namespace=openstack \
  --values=/tmp/heat.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_HEAT}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack orchestration service list
# Delete the test pod if it still exists
kubectl delete pods -l application=heat,release_group=heat,component=test --namespace=openstack --ignore-not-found
helm test heat --timeout 900

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/150-heat.sh
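
A minimal stack create gives a functional check of the orchestration engine; the template below uses only an OS::Heat::RandomString resource, so it does not depend on any network or image configuration:

tee /tmp/test-stack.yaml << EOF
heat_template_version: 2015-04-30
resources:
  random_str:
    type: OS::Heat::RandomString
EOF
export OS_CLOUD=openstack_helm
openstack stack create -t /tmp/test-stack.yaml test-stack
openstack stack list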

Deploy Barbican

#!/bin/bash
set -xe

#NOTE: Deploy command
helm upgrade --install barbican ./barbican \
  --namespace=openstack \
  --set pod.replicas.api=2 \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_BARBICAN}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
# Delete the test pod if it still exists
kubectl delete pods -l application=barbican,release_group=barbican,component=test --namespace=openstack --ignore-not-found
helm test barbican

Alternatively, this step can be performed by running the script directly:

./tools/deployment/multinode/160-barbican.sh
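
Exercising Barbican from the CLI requires the python-barbicanclient plugin, which was not part of the client setup earlier in this guide, so install it first; the secret name and payload below are arbitrary:

sudo -H -E pip install python-barbicanclient
export OS_CLOUD=openstack_helm
openstack secret store --name test-secret --payload 'demo-payload'
openstack secret list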

Configure OpenStack

Configuring OpenStack for a particular production use-case is beyond the scope of this guide. Please refer to the OpenStack Configuration documentation for your selected version of OpenStack to determine what additional values overrides should be provided to the OpenStack-Helm charts to ensure appropriate networking, security, etc. is in place.