Configure Identity service for Networking

To configure the Identity service for use with Networking

  1. Create the get_id() function

    The get_id() function captures the ID of a newly created object, which removes the need to copy and paste object IDs in later steps:

    1. Add the following function to your .bashrc file:

      function get_id () {
          echo `"$@" | awk '/ id / { print $4 }'`
      }
    2. Source the .bashrc file:

      $ source .bashrc
  2. Create the Networking service entry

    Networking must be available in the Compute service catalog. Create the service:

    $ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name \
    neutron --type network --description 'OpenStack Networking Service')
  3. Create the Networking service endpoint entry

    The way that you create a Networking endpoint entry depends on whether you are using the SQL or the template catalog driver:

    • If you are using the SQL driver, run the following command with the specified region ($REGION), IP address of the Networking server ($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in the previous step).

      $ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \
       --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' \
       --internalurl 'http://$IP:9696/'

      For example:

      $ keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
       --publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" \
       --internalurl "http://10.211.55.17:9696/"
    • If you are using the template driver, specify the following parameters in your Compute catalog template file (default_catalog.templates), along with the region ($REGION) and IP address of the Networking server ($IP).

      catalog.$REGION.network.publicURL = http://$IP:9696
      catalog.$REGION.network.adminURL = http://$IP:9696
      catalog.$REGION.network.internalURL = http://$IP:9696
      catalog.$REGION.network.name = Network Service

      For example:

      catalog.$Region.network.publicURL = http://10.211.55.17:9696
      catalog.$Region.network.adminURL = http://10.211.55.17:9696
      catalog.$Region.network.internalURL = http://10.211.55.17:9696
      catalog.$Region.network.name = Network Service
  4. Create the Networking service user

    You must provide admin user credentials that Compute and some internal Networking components can use to access the Networking API. Create a special service tenant and a neutron user within this tenant, and assign an admin role to this user.

    1. Create the admin role:

      $ ADMIN_ROLE=$(get_id keystone role-create --name admin)
    2. Create the neutron user:

      $ NEUTRON_USER=$(get_id keystone user-create --name neutron \
      --pass "$NEUTRON_PASSWORD" --email demo@example.com --tenant-id service)
    3. Create the service tenant:

      $ SERVICE_TENANT=$(get_id keystone tenant-create --name \
      service --description "Services Tenant")
    4. Establish the relationship among the tenant, user, and role:

      $ keystone user-role-add --user_id $NEUTRON_USER \
      --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
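The get_id() helper used throughout these steps can be exercised without a running Identity service by piping it canned, keystone-style table output. A sketch (the stub command and the ID value are made up for illustration; the helper is written in portable sh form):

```shell
# Stand-in for a real keystone command: prints the table-style
# output that keystone clients emit (the ID value is hypothetical).
fake_keystone () {
    printf '| %s | %s |\n' "name" "neutron"
    printf '| %s | %s |\n' "id" "9f2dd55a41a14551b02a2f2e0315bace"
}

# The helper from step 1: print field 4 of the row containing " id ".
get_id () {
    echo `"$@" | awk '/ id / { print $4 }'`
}

NEUTRON_SERVICE_ID=$(get_id fake_keystone)
echo "$NEUTRON_SERVICE_ID"
```

Because awk selects only the row whose second column is literally id, the name row is ignored and the variable ends up holding just the ID string.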

For information about how to create service entries and users, see the OpenStack Installation Guide for your distribution.


If you use Networking, do not run the Compute nova-network service (like you do in traditional Compute deployments). Instead, Compute delegates most network-related decisions to Networking.


Uninstall nova-network and reboot any physical nodes that have been running nova-network before using them to run Networking. Inadvertently running the nova-network process while using Networking can cause problems, as can stale iptables rules pushed down by previously running nova-network.

Compute proxies tenant-facing API calls for managing security groups and floating IPs to the Networking API. However, operator-facing tools, such as nova-manage, are not proxied and should not be used.


When you configure networking, you must use this guide. Do not rely on Compute networking documentation or past experience with Compute. If a nova command or configuration option related to networking is not mentioned in this guide, the command is probably not supported for use with Networking. In particular, you cannot use CLI tools like nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, with Networking.

To ensure that Compute works properly with Networking (rather than the legacy nova-network mechanism), you must adjust settings in the nova.conf configuration file.

Networking API and credential configuration

Each time you provision or de-provision a VM in Compute, nova-* services communicate with Networking using the standard API. For this to happen, you must configure the following items in the nova.conf file (used by each nova-compute and nova-api instance).

nova.conf API and credential settings

Item Configuration
[DEFAULT] use_neutron Modify from the default to True to indicate that Networking should be used rather than the traditional nova-network networking model.
[neutron] url Update to the host name/IP and port of the neutron-server instance for this deployment.
[neutron] auth_strategy Keep the default keystone value for all production deployments.
[neutron] admin_tenant_name Update to the name of the service tenant created in the above section on Identity configuration.
[neutron] admin_username Update to the name of the user created in the above section on Identity configuration.
[neutron] admin_password Update to the password of the user created in the above section on Identity configuration.
[neutron] admin_auth_url Update to the Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port.
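The rows above translate directly into configuration lines. A minimal sketch that emits them from the shell, assuming placeholder values for the controller IP, the neutron password, and the conventional keystone admin port (35357):

```shell
# Placeholder values; substitute your deployment's controller IP
# and the neutron service user's real password.
IP=192.168.1.2
NEUTRON_PASSWORD=secret

# Emit the settings from the table above into a snippet file.
cat > nova.conf.snippet <<EOF
[DEFAULT]
use_neutron = True

[neutron]
url = http://$IP:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = $NEUTRON_PASSWORD
admin_auth_url = http://$IP:35357/v2.0
EOF

cat nova.conf.snippet
```

The heredoc expands the shell variables, so the resulting snippet carries concrete values rather than $IP placeholders.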

Configure security groups

The Networking service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into Compute. Therefore, if you use Networking, you should always disable built-in security groups and proxy all security group calls to the Networking API. If you do not, security policies will conflict by being simultaneously applied by both services.

To proxy security groups to Networking, use the following configuration values in the nova.conf file:

nova.conf security group settings

Item Configuration
firewall_driver Update to nova.virt.firewall.NoopFirewallDriver, so that nova-compute does not perform iptables-based filtering itself.

Configure metadata

The Compute service allows VMs to query metadata associated with a VM by making a web request to a special address. Networking supports proxying those requests to nova-api, even when the requests are made from isolated networks, or from multiple networks that use overlapping IP addresses.

To enable proxying the requests, you must update the following fields in the [neutron] section of the nova.conf file.

nova.conf metadata settings

Item Configuration
service_metadata_proxy Update to true, otherwise nova-api will not properly respond to requests from the neutron-metadata-agent.

metadata_proxy_shared_secret Update to a string "password" value. You must also configure the same value in the metadata_agent.ini file, to authenticate requests made for metadata.

The default value of an empty string in both files will allow metadata to function, but will not be secure if any non-trusted entities have access to the metadata APIs exposed by nova-api.
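Because nova.conf and metadata_agent.ini must carry an identical secret, one way to avoid a mismatch is to generate the value once and write it into both places. A sketch (the file paths are illustrative, and in a real deployment the line belongs in each file's proper section):

```shell
# Generate one random secret and append the same
# metadata_proxy_shared_secret line to both files.
SECRET=$(openssl rand -hex 16)
echo "metadata_proxy_shared_secret = $SECRET" >> nova.conf
echo "metadata_proxy_shared_secret = $SECRET" >> metadata_agent.ini
```

Generating the secret once, instead of typing it twice, ensures the two files can never drift apart.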


As a precaution, even when using metadata_proxy_shared_secret, we recommend that you do not expose metadata using the same nova-api instances that are used for tenants. Instead, you should run a dedicated set of nova-api instances for metadata that are available only on your management network. Whether a given nova-api instance exposes metadata APIs is determined by the value of enabled_apis in its nova.conf.

Example nova.conf (for nova-compute and nova-api)

Example values for the above settings, assuming a cloud controller node running Compute and Networking:

use_neutron = True
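A fuller sketch of the same nova.conf, combining all the settings discussed in this section (the 192.168.1.2 address and the password values are placeholders, not defaults):

```ini
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
url = http://192.168.1.2:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = password
admin_auth_url = http://192.168.1.2:35357/v2.0
service_metadata_proxy = true
metadata_proxy_shared_secret = password
```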

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.