Chapter 10. Managing Projects and Users

An OpenStack cloud does not have much value without users. This chapter covers topics that relate to managing users, projects, and quotas.

 Projects or Tenants?

In OpenStack user interfaces and documentation, a group of users is referred to as a project or tenant. These terms are interchangeable.

The initial implementation of the OpenStack Compute Service (nova) had its own authentication system and used the term project. When authentication moved into the OpenStack Identity Service (keystone) project, it used the term tenant to refer to a group of users. Because of this legacy, some of the OpenStack tools refer to projects and some refer to tenants.

This guide uses the term project, unless an example shows interaction with a tool that uses the term tenant.

 Managing Projects

Users must be associated with at least one project, though they may belong to many. Therefore, you should add at least one project before adding users.

 Adding Projects

To create a project through the dashboard:

  1. Log in as an administrative user.

  2. Select the "Projects" link in the left hand navigation bar.

  3. Click on the "Create Project" button at the top right.

You are prompted for a project name and an optional, but recommended, description. The check box at the bottom of the form controls whether the project is enabled; it is selected by default.

It is also possible to add project members and adjust the project quotas. We'll discuss those later, but in practice it can be quite convenient to deal with all these operations at one time.

To create a project through the command-line interface (CLI), use the keystone utility, which uses the term "tenant" in place of "project":

# keystone tenant-create --name=demo

This command creates a project named "demo". Optionally, you can add a description string by appending --description <tenant-description>, which can be very useful. You can also create a project in a disabled state by appending --enabled false to the command. By default, projects are created in an enabled state.
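Putting these options together, a hypothetical invocation that creates a disabled project with a description might look like the following (the name and description are examples):

```
# keystone tenant-create --name=demo \
    --description="OpenStack Demo Project" \
    --enabled false
```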


 Quotas

OpenStack provides a number of quotas, which are all enforced at the project (rather than user) level. As an administrative user in the Dashboard, you can see (but not edit) the default quotas using the "Quotas" link in the navigation sidebar. These default project quotas are specified in the nova.conf file on your cloud controller.

If you do not make quota-related changes, the system uses the following defaults.

Table 10.1. Description of nova.conf file configuration options for quotas
Configuration option=Default value    (Type) Description

  quota_cores=20
      (IntOpt) number of instance cores allowed per project (tenant)

  quota_floating_ips=10
      (IntOpt) number of floating ips allowed per project (tenant)

  quota_fixed_ips=-1
      (IntOpt) number of fixed ips allowed per project (this should be at least the number of instances allowed); -1 is unlimited

  quota_gigabytes=1000
      (IntOpt) number of volume gigabytes allowed per project (tenant)

  quota_injected_file_content_bytes=10240
      (IntOpt) number of bytes allowed per injected file

  quota_injected_file_path_bytes=255
      (IntOpt) number of bytes allowed per injected file path

  quota_injected_files=5
      (IntOpt) number of injected files allowed

  quota_instances=10
      (IntOpt) number of instances allowed per project (tenant)

  quota_key_pairs=100
      (IntOpt) number of key pairs allowed per user

  quota_metadata_items=128
      (IntOpt) number of metadata items allowed per instance

  quota_ram=51200
      (IntOpt) megabytes of instance ram allowed per project (tenant)

  quota_security_group_rules=20
      (IntOpt) number of security rules per security group

  quota_security_groups=10
      (IntOpt) number of security groups per project (tenant)

  quota_volumes=10
      (IntOpt) number of volumes allowed per project (tenant)

Configuration table excerpted from

The simplest way to change the default project quotas is to edit the nova.conf file on your cloud controller. Quotas are enforced by the nova-scheduler service, so you must restart that service once you change these options.
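For example, a hypothetical excerpt from /etc/nova/nova.conf that raises a few of the defaults might look like this (the values here are purely illustrative, not recommendations):

```
# /etc/nova/nova.conf, [DEFAULT] section -- illustrative values only
quota_instances=20
quota_cores=40
quota_ram=102400
```

Remember that these options only take effect once the nova-scheduler service has been restarted.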

If your site implementation varies from our example architecture, ensure any changes you make to quota default options in /etc/nova/nova.conf are applied to the host(s) running the nova-scheduler service. It is critical for consistent quota enforcement that all schedulers have identical quotas. Assuming you are following the best practices recommended in this guide, your configuration management system will automatically ensure all your schedulers have consistent configurations.

To view and edit quotas for an individual project through the Dashboard:

  1. Use the "Projects" navigation link to get a list of your existing projects.

  2. Locate the project you want to modify and select "Modify Quotas" from the "Actions" drop down menu at the end of the line.

To view and edit quotas for an individual project through the CLI, follow these steps. The process is a bit involved: you use the keystone tool to look up the project's ID, and then nova-manage to work with its quotas.

  1. To list a project's quota, you must first find its ID using the Keystone CLI tool.

    # keystone tenant-list | grep <tenant-name> 
    | 98333a1a28e746fa8c629c83a818ad57 | <tenant-name> | True | 
  2. Recall that the Keystone CLI tool uses "tenant" where the Nova CLI tool uses "project" for the same concept.

    To show the quota for the project, for the example above, we must use the ID 98333a1a28e746fa8c629c83a818ad57:

    # nova-manage project quota 98333a1a28e746fa8c629c83a818ad57
    metadata_items: 128
    volumes: 10
    gigabytes: 1000
    ram: 6291456
    security_group_rules: 20
    instances: 1024
    security_groups: 10
    injected_file_content_bytes: 10240
    floating_ips: 10
    injected_files: 5
    cores: 2048

    Confusingly, nova-manage project quota silently accepts any string at the end of the command and reports the default quotas. In particular, if you enter the project name rather than the ID, nova-manage does not complain, it just lies.

    To change these values, append --key and --value flags to the above command. To increase the project's quota of floating IPs from 10 to 20:

    # nova-manage project quota 98333a1a28e746fa8c629c83a818ad57 --key floating_ips --value 20 
    metadata_items: 128 
    volumes: 10
    gigabytes: 1000
    ram: 6291456
    security_group_rules: 20
    instances: 1024
    security_groups: 10
    injected_file_content_bytes: 10240
    floating_ips: 20
    injected_files: 5
    cores: 2048
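If you script these lookups, the "key: value" listing shown above is easy to post-process. The following is a small sketch (not part of OpenStack) that parses such output into a dictionary; in practice you would capture the text with subprocess rather than hard-coding it as done here:

```python
# Sketch: parse the "key: value" quota listing printed by
# `nova-manage project quota <id>` into a dictionary.
sample_output = """\
metadata_items: 128
volumes: 10
gigabytes: 1000
ram: 6291456
security_group_rules: 20
instances: 1024
security_groups: 10
injected_file_content_bytes: 10240
floating_ips: 10
injected_files: 5
cores: 2048
"""

def parse_quotas(text):
    quotas = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        # keep only lines that look like "name: integer"
        if sep and value.strip().lstrip("-").isdigit():
            quotas[key.strip()] = int(value)
    return quotas

quotas = parse_quotas(sample_output)
print(quotas["floating_ips"])  # 10
```

A dictionary like this makes it straightforward to diff a project's quotas against your site defaults before and after a change.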

 User Management

The command line tools for managing users are inconvenient to use directly. They require issuing multiple commands to complete a single task, and they use UUIDs rather than symbolic names for many items. In practice, humans typically do not use these tools directly. Fortunately, the OpenStack Dashboard provides a reasonable interface to this. In addition, many sites write custom tools for local needs to enforce local policies and provide levels of self service to users that aren't currently available with packaged tools.

 Creating New Users

To create a user, you need the following information:

  • Username

  • Email address

  • Password

  • Primary project

  • Role

Username and email address are self-explanatory, though your site may have local conventions you should observe. Setting and changing passwords in the Identity Service requires administrative privileges. As of the Folsom release, users cannot change their own passwords. This is a large driver for creating local custom tools, and must be kept in mind when assigning and distributing passwords. The primary project is simply the first project the user is associated with and must exist prior to creating the user. Role is almost always going to be "member". Out of the box, OpenStack comes with two roles defined:

  • "member":  a typical user.

  • "admin":  an administrative super user which has full permissions across all projects and should be used with great care.

It is possible to define other roles, but doing so is uncommon.

Once you've gathered this information, creating the user in the Dashboard is just another web form similar to the ones we've seen before: select the "Users" link in the "Admin" navigation bar, then click the "Create User" button at the top right.

Modifying users is also done from this "Users" page. If you have a large number of users, this page can get quite crowded. The "Filter" search box at the top of the page can be used to limit the users listing. A form very similar to the user creation dialog can be pulled up by selecting "Edit" from the actions drop down menu at the end of the line for the user you are modifying.

 Associating Users with Projects

Many sites run with users being associated with only one project. This is a more conservative and simpler choice, both for administration and for users. Administratively, if a user reports a problem with an instance or quota, it is obvious which project it relates to. Users needn't worry about what project they are acting in if they are only in one project. However, note that, by default, any user can affect the resources of any other user within their project. It is also possible to associate users with multiple projects if that makes sense for your organization.

Associating existing users with an additional project or removing them from an older project is done from the "Projects" page of the Dashboard by selecting "Modify Users" from the "Actions" column.

From this view you can do a number of useful and a few dangerous things.

The first column of this form, titled "All Users", includes a list of all the users in your cloud who are not already associated with this project; the second lists all the users who are. These lists can be quite long, but they can be limited by typing a substring of the username you are looking for in the filter field at the top of the column.

From here, click the + icon to add users to the project. Click the - to remove them.

The dangerous possibility comes with the ability to change member roles. This is the drop down list after the user name in the "Project Members" list. In virtually all cases this value should be set to "Member". This example purposefully shows an administrative user for whom this value is "admin".


The "admin" role is global, not per project, so granting a user the admin role in any project gives that user administrative rights across the whole cloud.

Typical use is to only create administrative users in a single project, by convention the "admin" project, which is created by default during cloud setup. If your administrative users also use the cloud to launch and manage instances, it is strongly recommended that you use separate user accounts for administrative access and normal operations, and that they be in distinct projects.

 Customizing Authorization

The default authorization settings allow only administrative users to create resources on behalf of a different project. OpenStack handles two kinds of authorization policies:

  • Operation-based: policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes.

  • Resource-based: whether access to a specific resource might be granted or not according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in an OpenStack service vary from deployment to deployment.

The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution; for nova, it is typically in /etc/nova/policy.json. You can update entries while the system is running, and you do not have to restart services. Currently, the only way to update such policies is to edit the policy file.

The OpenStack service's policy engine matches a policy directly. A rule is an element of a policy that is evaluated when the policy is checked. For instance, in a compute:create: [["rule:admin_or_owner"]] statement, the policy is compute:create, and the rule is admin_or_owner.

Policies are triggered by an OpenStack policy engine whenever one of them matches an OpenStack API operation or a specific attribute being used in a given operation. For instance, the engine tests the compute:create policy every time a user sends a POST /v2/{tenant_id}/servers request to the OpenStack Compute API server. Policies can also be related to specific API extensions. For instance, if a user needs an extension like compute_extension:rescue, the attributes defined by the provider extensions trigger the rule test for that operation.

An authorization policy can be composed of one or more rules. If multiple rules are specified, evaluation of the policy is successful if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Authorization rules are also recursive: once a rule is matched, it can resolve to another rule, until a terminal rule is reached. The following kinds of rules are defined:

  • Role-based rules: evaluate successfully if the user submitting the request has the specified role. For instance, "role:admin" is successful if the user submitting the request is an administrator.

  • Field-based rules: evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance "field:networks:shared=True" is successful if the attribute shared of the network resource is set to true.

  • Generic rules: compare an attribute in the resource with an attribute extracted from the user's security credentials, and evaluate successfully if the comparison is successful. For instance, "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request.

Here are snippets of the default nova policy.json file:

    "context_is_admin": [["role:admin"]],
    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]], [1]
    "default": [["rule:admin_or_owner"]], [2]
    "compute:create": [ ],
    "compute:create:attach_network": [ ],
    "compute:create:attach_volume": [ ],
    "compute:get_all": [ ],
    "admin_api": [["is_admin:True"]],
    "compute_extension:accounts": [["rule:admin_api"]],
    "compute_extension:admin_actions": [["rule:admin_api"]],
    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
    "compute_extension:aggregates": [["rule:admin_api"]],
    "compute_extension:certificates": [ ],
    "compute_extension:flavorextraspecs": [ ],
    "compute_extension:flavormanage": [["rule:admin_api"]],  [3]

[1] Shows a rule which evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal).

[2] Shows the default policy which is always evaluated if an API operation does not match any of the policies in policy.json.

[3] Shows a policy restricting the ability to manipulate flavors to administrators using the Admin API only.

In some cases, some operations should be restricted to administrators only. Therefore, as a further example, let us consider how this sample policy file could be modified in a scenario where we enable users to create their own flavors:

"compute_extension:flavormanage": [ ],
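To make the rule semantics above concrete, here is a minimal model of policy evaluation. It is a sketch, not the real OpenStack policy engine: a policy passes if any of its rule lists passes, each atom in a list must pass, and "rule:" atoms recurse into other named rules. Note that in the real policy.json an empty policy such as "compute:create": [ ] allows everyone; here compute:create delegates to admin_or_owner purely for illustration.

```python
# Simplified model of policy.json rule evaluation; not the actual engine.
POLICY = {
    "context_is_admin": [["role:admin"]],
    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
    "compute:create": [["rule:admin_or_owner"]],
}

def check(name, creds, target):
    # A policy passes if ANY of its rule lists passes; a rule list
    # passes if ALL of its atoms pass.
    return any(all(check_atom(atom, creds, target) for atom in rule)
               for rule in POLICY[name])

def check_atom(atom, creds, target):
    kind, _, match = atom.partition(":")
    if kind == "rule":      # recursive: resolve to another named rule
        return check(match, creds, target)
    if kind == "role":      # role-based rule
        return match in creds.get("roles", [])
    if kind == "is_admin":
        return str(creds.get("is_admin", False)) == match
    # generic rule: compare a credential with a target attribute,
    # e.g. "project_id:%(project_id)s"
    return str(creds.get(kind)) == match % target

member = {"roles": ["member"], "project_id": "demo", "is_admin": False}
print(check("compute:create", member, {"project_id": "demo"}))   # True: owner
print(check("compute:create", member, {"project_id": "other"}))  # False
```

Tracing the first call shows the recursion described earlier: compute:create resolves to rule:admin_or_owner, which fails its is_admin check but passes the generic project_id comparison.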

 Users that Disrupt Other Users

Users on your cloud can disrupt other users, sometimes intentionally and maliciously and other times by accident. Understanding the situation allows you to make a better decision on how to handle the disruption.

For example: a group of users have instances that are utilizing a large amount of compute resources for very compute-intensive tasks. This is driving the load up on compute nodes and affecting other users. In this situation, review your user use cases. You may find that such high-compute scenarios are common, and should then plan for proper segregation in your cloud, such as host aggregates or regions.

Another example is a user consuming a very large amount of bandwidth. Again, the key is to understand what the user is doing. If they naturally need a high amount of bandwidth, you might have to limit their transmission rate so as not to affect other users, or move them to an area with more bandwidth available. On the other hand, maybe the user's instance has been hacked and is part of a botnet launching DDoS attacks. Resolution of this issue is the same as for any other server on your network that has been hacked. Contact the user and give them time to respond. If they don't respond, shut the instance down.

A final example is a user hammering cloud resources repeatedly. Contact the user and learn what they are trying to do. Maybe they don't understand that what they're doing is inappropriate, or maybe there is an issue with the resource they are trying to access that is causing their requests to queue or lag.

One key element of systems administration that is often overlooked is that end users are the reason why systems administrators exist. Don't go the BOFH route and terminate every user who causes an alert to go off. Work with them to understand what they're trying to accomplish and see how your environment can better assist them in achieving their goals.
