6.12.1.1. Keystone DB / cache operations analysis

6.12.1.1.1. Environment description

The Keystone performance testing (test case #1) was executed on an all-in-one virtual environment created with Oracle VM VirtualBox Manager.

6.12.1.1.1.1. Hardware

All-in-one installation in a virtual environment.

Parameter   Value
CPU         4 CPUs out of 8 (2.7 GHz Intel Core i5)
RAM         4 GB

6.12.1.1.1.2. Software

This section describes the installed software.

Parameter   Value
OS          Ubuntu 15.04

6.12.1.1.2. Execution

To issue all control plane requests against the environment, the following set of commands needs to be executed:

openstack --profile SECRET_KEY token issue
openstack --profile SECRET_KEY user list
openstack --profile SECRET_KEY endpoint list
openstack --profile SECRET_KEY service list
openstack --profile SECRET_KEY server create --image <image_id> --flavor <flavor_id> <server_name>
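
Each profiled command prints a trace ID that can be used to retrieve the collected profiling information; a typical retrieval command (exact options may vary between osprofiler versions) is:

osprofiler trace show --html <trace_id>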

6.12.1.1.3. Short summary

Detailed information about each of the 8 examined topologies can be found in the Reports section that follows.

Every topology was examined with 5 different profiled control plane requests to determine the number of DB / Memcached cache operations and the relative time spent on them.

All collected data still needs deeper examination, but several interesting observations can already be highlighted for special attention in further research.

The collected profiling information clearly shows that Keystone changed significantly during the Mitaka timeframe, and many of the changes appear to be related to the following:

  • Federation support, which introduced more complexity at the DB level
  • The move from local dogpile.cache usage to oslo.cache in Liberty, and the introduction of a local context cache layer for per-request caching (a minimal sketch of this pattern follows this list)
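
The following minimal sketch illustrates the oslo.cache pattern referred to in the second item above. It shows the general approach rather than Keystone's actual code; the get_user function is a hypothetical example.

from oslo_cache import core as cache
from oslo_config import cfg

CONF = cfg.CONF

# Register the [cache] options (enabled, backend, memcache_servers, ...).
cache.configure(CONF)

# A dogpile.cache region whose backend is taken from the [cache] section.
region = cache.create_region()
cache.configure_cache_region(CONF, region)

# Decorator that caches function results in the configured backend.
MEMOIZE = cache.get_memoization_decorator(CONF, region, group='cache')

@MEMOIZE
def get_user(user_id):
    # Hypothetical body: on a cache miss this code runs and the result is
    # stored in Memcached; on a hit the stored value is returned instead.
    return {'id': user_id}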

Federation support introduced the use of multiple SQL JOINs in Keystone and made the database schema somewhat more complex. Further multinode research is planned to check how this influences DB operation performance when, for instance, a Galera cluster is used.
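
To illustrate what such a join looks like, the sketch below builds a simplified query in SQLAlchemy 1.x style (as used in the Mitaka timeframe); the table definitions are deliberately reduced and are not Keystone's exact schema. Resolving a federation protocol together with its identity provider spans two tables where a single-table SELECT used to be enough.

import sqlalchemy as sa

metadata = sa.MetaData()

identity_provider = sa.Table(
    'identity_provider', metadata,
    sa.Column('id', sa.String(64), primary_key=True),
    sa.Column('enabled', sa.Boolean))

federation_protocol = sa.Table(
    'federation_protocol', metadata,
    sa.Column('id', sa.String(64), primary_key=True),
    sa.Column('idp_id', sa.String(64),
              sa.ForeignKey('identity_provider.id')),
    sa.Column('mapping_id', sa.String(64)))

# One logical lookup now spans two tables:
# SELECT ... FROM federation_protocol
#   JOIN identity_provider ON federation_protocol.idp_id = identity_provider.id
query = sa.select([federation_protocol, identity_provider]).select_from(
    federation_protocol.join(identity_provider))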

As for caching layer usage, one specific issue is clearly visible in how Mitaka Keystone caches operations. Although the local context cache should reduce the number of calls to Memcached by storing data already fetched for a specific API request in the local thread, this was not observed: duplicated function calls within one request still went to Memcached. A Keystone bug was filed to investigate this behaviour.
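
The expected behaviour can be modelled as a two-level lookup; the sketch below is an illustrative model, not Keystone's actual implementation (request_local_memoize and local_cache are hypothetical names). With such a scheme, repeated calls within one request would be served from the per-request dict and never reach Memcached.

import functools

def request_local_memoize(region, local_cache):
    # region: a configured dogpile.cache region backed by Memcached.
    # local_cache: a plain dict living only for the current request.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = '%s:%s' % (func.__name__, args)
            if key in local_cache:
                # Per-request hit: no Memcached round trip at all.
                return local_cache[key]
            # Local miss: fall through to the Memcached-backed region.
            value = region.get_or_create(key, lambda: func(*args))
            local_cache[key] = value
            return value
        return wrapper
    return decorator

The collected profiling data shows the Memcached branch being taken even for duplicated calls within the same request, which is what the filed bug is about.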

One more interesting observation is also related to cache usage: if caching is explicitly turned off in the Keystone configuration, profiling still shows data being fetched from Memcached.
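
For reference, caching is controlled through the [cache] section of keystone.conf (option names as defined by oslo.cache); the "explicitly turned off" configuration described above corresponds to something like:

[cache]
enabled = false
backend = dogpile.cache.memcached
memcache_servers = 127.0.0.1:11211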