Factors affecting OpenStack deployment¶
When deploying OpenStack in an enterprise as a private cloud, it is usually behind a firewall and within the trusted network, alongside existing systems. Users are employees who are bound by the company's security requirements. This tends to push most of the security domains towards a more trusted model. However, when deploying OpenStack in a public-facing role, no such assumptions can be made, and the number of attack vectors significantly increases.
Consider the following security implications and requirements:
Managing the users for both public and private clouds. The Identity service allows for LDAP to be part of the authentication process. This may ease user management if integrating into existing systems.
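As a sketch of what such integration can look like, the Identity service supports per-domain identity back ends; a domain-specific configuration file can point one domain at an existing LDAP directory. All host names, DNs, and credentials below are illustrative assumptions, not values from any reference deployment:

```ini
[identity]
# Use the LDAP identity back end for this domain only.
driver = ldap

[ldap]
# Illustrative values; substitute your directory details.
url = ldap://ldap.example.com
user = cn=openstack,ou=service,dc=example,dc=com
password = REPLACE_ME
suffix = dc=example,dc=com

# Where and how to find user entries.
user_tree_dn = ou=people,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = uid
user_name_attribute = uid

# Keep the Identity service read-only against the corporate directory.
user_allow_create = false
user_allow_update = false
user_allow_delete = false
```

Keeping the back end read-only, as sketched here, leaves user lifecycle management with the existing enterprise directory rather than with the cloud.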
User authentication requests include sensitive information including usernames, passwords, and authentication tokens. It is strongly recommended to place API services behind hardware that performs SSL termination.
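Whether termination happens in dedicated hardware or in a software load balancer, the pattern is the same. As an illustrative sketch (the front-end name, addresses, and certificate path are assumptions), an HAProxy front end can terminate TLS for the Identity API and forward traffic to back-end controllers on the management network:

```
frontend identity_api
    # Terminate TLS here; api.pem bundles the certificate and private key.
    bind 0.0.0.0:5000 ssl crt /etc/haproxy/certs/api.pem
    default_backend identity_backend

backend identity_backend
    balance roundrobin
    # Back-end Identity API workers on the management network.
    server controller1 192.0.2.11:5000 check
    server controller2 192.0.2.12:5000 check
```

With this arrangement, usernames, passwords, and tokens are encrypted on the untrusted side of the terminator, while the back-end segment remains within the management security domain.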
Negative or hostile users who would attack or compromise the security of your deployment regardless of firewalls or security agreements.
Attack vectors increase further in a public facing OpenStack deployment. For example, the API endpoints and the software behind it become vulnerable to hostile entities attempting to gain unauthorized access or prevent access to services. You should provide appropriate filtering and periodic security auditing.
Be mindful of consistency when utilizing third-party clouds to explore authentication options.
For more information on OpenStack security, see the OpenStack Security Guide.
A security domain comprises users, applications, servers, or networks that share common trust requirements and expectations within a system. Typically, the members of a security domain have the same authentication and authorization requirements and users.
Security domains include:
- Public security domains
The public security domain can refer to the internet as a whole or networks over which you have no authority. This domain is considered untrusted. For example, in a hybrid cloud deployment, any information traversing between and beyond the clouds is in the public domain and untrustworthy.
- Guest security domains
The guest security domain handles compute data generated by instances on the cloud, but not services that support the operation of the cloud, such as API calls. Public cloud providers and private cloud providers who do not have stringent controls on instance use or who allow unrestricted internet access to instances should consider this domain to be untrusted. Private cloud providers may want to consider this network as internal and therefore trusted only if they have controls in place to assert that they trust instances and all their tenants.
- Management security domains
The management security domain is where services interact. Sometimes referred to as the control plane, the networks in this domain transport confidential data such as configuration parameters, user names, and passwords. In most deployments this domain is considered trusted when it is behind an organization’s firewall.
- Data security domains
The data security domain is primarily concerned with information pertaining to the storage services within OpenStack. The data that crosses this network has high integrity and confidentiality requirements and, depending on the type of deployment, may also have strong availability requirements. The trust level of this network is heavily dependent on other deployment decisions.
These security domains can be individually or collectively mapped to an OpenStack deployment. The cloud operator should be aware of the appropriate security concerns. Security domains should be mapped out against your specific OpenStack deployment topology. The domains and their trust requirements depend upon whether the cloud instance is public, private, or hybrid.
The hypervisor also requires a security assessment. In a public cloud, organizations typically do not have control over the choice of hypervisor. Properly securing your hypervisor is important. An attack in which a compromised or malicious instance breaks out of the resource controls of the hypervisor and gains access to the bare-metal operating system and hardware resources is known as a hypervisor breakout.
Hypervisor security is not an issue if the security of instances is not important. However, enterprises can minimize vulnerability by avoiding hardware sharing with others in a public cloud.
There are other services worth considering that provide a bare-metal instance instead of a cloud, such as providers that host a bare-metal public cloud instance whose hardware is dedicated to a single customer. In other cases, it is possible to replicate a second private cloud by integrating with a private Cloud-as-a-Service deployment; the organization does not buy the hardware, but also does not share it with other tenants.
Each cloud implements services differently. Understand the security requirements of every cloud that handles the organization’s data or workloads.
Consider security implications and requirements before designing the physical and logical network topologies. Make sure that the networks are properly segregated and traffic flows are going to the correct destinations without crossing through locations that are undesirable. Consider the following factors:
- Overlay interconnects for joining separated tenant networks
- Routing through or avoiding specific networks
How networks attach to hypervisors can expose security vulnerabilities. To mitigate hypervisor breakouts, separate networks from other systems and schedule instances for the network onto dedicated Compute nodes. This prevents attackers from having access to the networks from a compromised instance.
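One way to realize this scheduling separation is to group the dedicated Compute nodes into a host aggregate and steer matching flavors onto it. The aggregate, host, and flavor names below are illustrative assumptions, and the extra-spec matching assumes the AggregateInstanceExtraSpecsFilter is enabled in the Compute scheduler:

```
# Create a host aggregate for Compute nodes dedicated to sensitive networks.
openstack aggregate create dedicated-net-nodes

# Tag the aggregate and add the dedicated hosts (names are illustrative).
openstack aggregate set --property dedicated=true dedicated-net-nodes
openstack aggregate add host dedicated-net-nodes compute07
openstack aggregate add host dedicated-net-nodes compute08

# Instances using a flavor with the matching extra spec are scheduled
# only onto hosts in this aggregate.
openstack flavor create --ram 4096 --vcpus 2 --disk 40 dedicated.medium
openstack flavor set \
    --property aggregate_instance_extra_specs:dedicated=true dedicated.medium
```

Instances launched with other flavors then never land on the dedicated nodes, keeping the sensitive networks out of reach of a breakout elsewhere.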
Securing a multi-site OpenStack installation brings several challenges. Tenants may expect a tenant-created network to be secure. In a multi-site installation the use of a non-private connection between sites may be required. This may mean that traffic would be visible to third parties and, in cases where an application requires security, this issue requires mitigation. In these instances, install a VPN or encrypted connection between sites to conceal sensitive traffic.
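As one illustrative option for such a tunnel (WireGuard is used here purely as an example; the keys, addresses, and endpoint are placeholder assumptions), a site-to-site configuration that conceals inter-site traffic can be as small as:

```
[Interface]
# Site A end of the tunnel; keys and addresses are placeholders.
Address = 10.99.0.1/24
PrivateKey = <site-A-private-key>
ListenPort = 51820

[Peer]
# Site B: route its tunnel address and tenant ranges through the tunnel.
PublicKey = <site-B-public-key>
Endpoint = site-b.example.com:51820
AllowedIPs = 10.99.0.2/32, 172.16.0.0/16
PersistentKeepalive = 25
```

Any equivalent encrypted transport (IPsec, TLS-based VPNs) serves the same purpose; the point is that inter-site tenant traffic never crosses the public domain in the clear.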
Identity is another security consideration. Authentication centralization provides a single authentication point for users across the deployment, and a single administration point for traditional create, read, update, and delete operations. Centralized authentication is also useful for auditing purposes because all authentication tokens originate from the same source.
Tenants in multi-site installations need isolation from each other. The main challenge is ensuring that tenant networks function across regions, which OpenStack Networking (neutron) does not currently support. Therefore, an external system may be required to manage the mapping. Tenant networks may contain sensitive information, requiring accurate and consistent mapping to ensure that a tenant in one site does not connect to a different tenant in another site.
Using remote resources for collection, processing, storage, and retrieval provides potential benefits to businesses. With the rapid growth of data within organizations, businesses need to be proactive about their data storage strategies from a compliance point of view.
Most countries have legislative and regulatory requirements governing the storage and management of data in cloud environments. This is particularly relevant for public, community and hybrid cloud models, to ensure data privacy and protection for organizations using a third party cloud provider.
Common areas of regulation include:
- Data retention policies ensuring storage of persistent data and records management to meet data archival requirements.
- Data ownership policies governing the possession and responsibility for data.
- Data sovereignty policies governing the storage of data in foreign countries or otherwise separate jurisdictions.
- Data compliance policies governing certain types of information that must reside in certain locations due to regulatory issues, and, just as importantly, must not reside in other locations for the same reason.
- Data location policies ensuring that the services deployed to the cloud are used according to laws and regulations in place for the employees, foreign subsidiaries, or third parties.
- Disaster recovery policies ensuring regular data backups and relocation of cloud applications to another supplier in scenarios where a provider may go out of business, or their data center could become inoperable.
- Security breach policies governing the ways to notify individuals through the cloud provider's systems or other means if their personal data is compromised in any way.
- Industry standards policies governing additional requirements on what type of cardholder data may or may not be stored and how it is to be protected.
Examples of such legal frameworks include the following industry-specific laws and regulations covering privacy and security:

- Health Insurance Portability and Accountability Act (HIPAA)
- Gramm-Leach-Bliley Act (GLBA)
- Payment Card Industry Data Security Standard (PCI DSS)
- Family Educational Rights and Privacy Act (FERPA)
Cloud security architecture¶
A cloud security architecture should recognize the issues that arise in security management and address them with security controls. Cloud security controls are put in place to safeguard any weaknesses in the system and reduce the effect of an attack.

Security controls fall into the following categories:
- Deterrent controls:
Typically reduce the threat level by informing potential attackers that there will be adverse consequences for them if they proceed.
- Preventive controls:
Strengthen the system against incidents, generally by reducing if not actually eliminating vulnerabilities.
- Detective controls:
Intended to detect and react appropriately to any incidents that occur. System and network security monitoring, including intrusion detection and prevention arrangements, are typically employed to detect attacks on cloud systems and the supporting communications infrastructure.
- Corrective controls:
Reduce the consequences of an incident, normally by limiting the damage. They come into effect during or after an incident. Restoring system backups in order to rebuild a compromised system is an example of a corrective control.
For more information, see NIST Special Publication 800-53.
The many different forms of license agreements for software are often written with the use of dedicated hardware in mind. This model is relevant for the cloud platform itself, including the hypervisor operating system and supporting software for items such as databases, RPC, and backups. Consideration must be given when offering Compute service instances and applications to end users of the cloud, since the license terms for that software may need some adjustment to allow it to operate economically in the cloud.
Multi-site OpenStack deployments present additional licensing considerations over and above regular OpenStack clouds, particularly where site licenses are in use to provide cost-efficient access to software licenses. The licensing for host operating systems, guest operating systems, OpenStack distributions (if applicable), software-defined infrastructure including network controllers and storage systems, and even individual applications needs to be evaluated.
Topics to consider include:
- The definition of what constitutes a site in the relevant licenses, as the term does not necessarily denote a geographic or otherwise physically isolated location.
- Differentiations between "hot" (active) and "cold" (inactive) sites, where significant savings may be made in situations where one site is a cold standby for disaster recovery purposes only.
- Certain locations might require local vendors to provide support and services for each site, which may vary with the licensing agreement in place.