This page highlights some of the most prominent features available in sahara. The guidance provided here is primarily focused on the runtime aspects of sahara. For discussions about configuring the sahara server processes please see the Sahara Configuration Guide and Sahara Advanced Configuration Guide.
One of the problems with running data processing applications on OpenStack is the inability to control where an instance is actually running. It is not always possible to ensure that two new virtual machines are started on different physical machines. As a result, any replication within the cluster is not reliable because all replicas may be co-located on one physical machine. To remedy this, sahara provides the anti-affinity feature to explicitly command all instances of the specified processes to spawn on different Compute nodes. This is especially useful for Hadoop data node processes to increase HDFS replica reliability.
Starting with the Juno release, sahara can create server groups with the anti-affinity policy to enable this feature. Sahara creates one server group per cluster and assigns all instances with affected processes to this server group. Refer to the Nova documentation on how server groups work.
This feature is supported by all plugins out of the box, and can be enabled during the cluster template creation.
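As an illustration, anti-affinity is requested per process in the cluster template. The sketch below uses the field names of the sahara REST API; the plugin name, version, process name, and template IDs are assumptions, not values from this guide.

```python
# Illustrative sketch: a cluster template body that asks sahara to spawn all
# instances running the "datanode" process on different Compute nodes.
# The exact process name depends on the plugin in use.
cluster_template = {
    "name": "anti-affinity-cluster-template",
    "plugin_name": "vanilla",          # assumed plugin
    "hadoop_version": "2.7.1",         # assumed version
    "anti_affinity": ["datanode"],     # processes whose instances must not share a host
    "node_groups": [
        {"name": "master", "node_group_template_id": "<master-template-id>", "count": 1},
        {"name": "worker", "node_group_template_id": "<worker-template-id>", "count": 3},
    ],
}
```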
OpenStack Block Storage (cinder) can be used as an alternative to ephemeral drives on instances. Using Block Storage volumes increases the reliability of data, which is important for HDFS services.
A user can set how many volumes will be attached to each instance in a node group and the size of each volume. All volumes are attached during cluster creation and scaling operations.
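For example, a node group template can request volumes roughly as in the following sketch (field names volumes_per_node and volumes_size as used by the sahara REST API; the plugin, version, flavor, and sizes are assumptions):

```python
# Illustrative sketch: a node group template that attaches two 100 GB
# cinder volumes to every instance in the node group.
node_group_template = {
    "name": "worker-with-volumes",
    "plugin_name": "vanilla",             # assumed plugin
    "hadoop_version": "2.7.1",            # assumed version
    "flavor_id": "<flavor-id>",           # assumed flavor
    "node_processes": ["datanode", "nodemanager"],
    "volumes_per_node": 2,                # number of volumes attached to each instance
    "volumes_size": 100,                  # size of each volume, in GB
}
```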
If volumes are used for HDFS storage, it is important to make sure that linear read/write throughput and the IOPS level are high enough to handle the workload. Volumes placed on the same compute host as the instance provide a higher level of performance.
In some cases cinder volumes can be backed by distributed storage such as Ceph. In this type of installation it is important to make sure that network latency and throughput do not become a bottleneck for HDFS. Distributed storage solutions usually provide their own replication mechanism; HDFS replication should be disabled so that it does not generate redundant traffic across the cloud.
Cluster scaling allows users to change the number of running instances in a cluster without needing to recreate the cluster. Users may increase or decrease the number of instances in node groups or add new node groups to existing clusters. If a cluster fails to scale properly, all changes will be rolled back.
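As a sketch, a scaling request against the sahara REST API can both resize an existing node group and add a new one. The node group names, counts, and template ID below are assumptions used only for illustration.

```python
# Illustrative sketch: a cluster scaling request body. "resize_node_groups"
# changes the instance count of existing node groups, while "add_node_groups"
# attaches new node groups to the running cluster.
scale_request = {
    "resize_node_groups": [
        {"name": "worker", "count": 5},                   # grow an existing group
    ],
    "add_node_groups": [
        {
            "name": "extra-worker",
            "node_group_template_id": "<template-id>",    # hypothetical template
            "count": 2,
        },
    ],
}
```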
For optimal performance, it is best for data processing applications to work on data local to the same rack, OpenStack Compute node, or virtual machine. Hadoop supports a data locality feature and can schedule jobs to task tracker nodes that are local for the input stream. In this manner the task tracker nodes can communicate directly with the local data nodes.
Sahara supports topology configuration for HDFS and Object Storage data sources. For more information on configuring this option please see the Data-locality configuration documentation.
Having an instance and an attached volume on the same physical host can be very helpful in order to achieve high-performance disk I/O operations. To achieve this, sahara provides access to the Block Storage volume instance locality functionality.
For more information on using volume instance locality with sahara, please see the Volume instance locality configuration documentation.
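A minimal sketch, assuming the node group template field volume_local_to_instance from the sahara REST API (the plugin, version, flavor, and sizes are assumptions):

```python
# Illustrative sketch: request that each attached volume be placed on the
# same physical host as the instance that uses it.
node_group_template = {
    "name": "local-volume-worker",
    "plugin_name": "vanilla",           # assumed plugin
    "hadoop_version": "2.7.1",          # assumed version
    "flavor_id": "<flavor-id>",
    "node_processes": ["datanode"],
    "volumes_per_node": 1,
    "volumes_size": 200,                # GB, assumed
    "volume_local_to_instance": True,   # keep volume and instance on the same host
}
```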
The Sahara Installation Guide suggests launching sahara in distributed mode with sahara-api and sahara-engine processes potentially running on several machines simultaneously. Running in distributed mode allows sahara to offload intensive tasks to the engine processes while keeping the API process free to handle requests.
For an expanded discussion of configuring sahara to run in distributed mode please see the Distributed mode configuration documentation.
Currently, HDFS and YARN HA are supported with the HDP 2.4 and CDH 5.7 plugins.
Hadoop HDFS and YARN High Availability provide an architecture that ensures HDFS or YARN will continue to work in the event of an active NameNode or ResourceManager failure. They use two NameNodes and two ResourceManagers in an active/passive configuration to provide this availability.
In the HDP 2.4 plugin, the feature can be enabled through the dashboard in the Cluster Template creation form. High availability is achieved by using a set of journalnodes, ZooKeeper servers, and ZooKeeper Failover Controllers (ZKFC), as well as additional configuration changes to HDFS and other services that use HDFS.
In the CDH 5.7 plugin, HA for HDFS and YARN is enabled by adding several HDFS_JOURNALNODE roles in the node group templates of the cluster template. HDFS HA is enabled when HDFS_JOURNALNODE roles are added and their placement satisfies the plugin's requirements. In this case, the original SecondaryNameNode will be used as the standby NameNode.
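As a sketch, enabling HDFS HA in the CDH plugin amounts to carrying the HDFS_JOURNALNODE process in enough node groups; the node group template below is only illustrative (the plugin name, version string, and flavor are assumptions):

```python
# Illustrative sketch: a CDH node group template that carries the
# HDFS_JOURNALNODE role needed for HDFS high availability.
journalnode_group = {
    "name": "journalnode",
    "plugin_name": "cdh",                  # assumed plugin name
    "hadoop_version": "5.7.0",             # assumed version string
    "flavor_id": "<flavor-id>",
    "node_processes": ["HDFS_JOURNALNODE"],
}
```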
Sahara supports both the nova-network and neutron implementations of OpenStack Networking. By default sahara is configured to behave as if the nova-network implementation is available. For OpenStack installations that are using the neutron project please see Networking configuration.
Sahara can use OpenStack Object Storage (swift) to store job binaries and data sources utilized by its job executions and clusters. In order to leverage this support within Hadoop, including using Object Storage for data sources for EDP, Hadoop requires the application of a patch. For additional information about enabling this support, including patching Hadoop and configuring sahara, please refer to the Swift Integration documentation.
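For example, a data source pointing at Object Storage could be defined roughly as follows. This is a sketch of an EDP data source body; the container, object path, and credentials are assumptions.

```python
# Illustrative sketch: an EDP data source backed by swift.
data_source = {
    "name": "input-logs",
    "type": "swift",
    "url": "swift://demo-container/input/logs.txt",   # assumed container and object
    "credentials": {
        "user": "demo",          # assumed swift user
        "password": "secret",    # assumed password
    },
}
```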
Sahara can resolve hostnames of cluster instances by using DNS. For this purpose, sahara uses designate. For additional details see the Sahara Advanced Configuration Guide.
You can protect your HDP or CDH cluster using MIT Kerberos security. For more details, please see the documentation for the appropriate plugin.
Security groups are sets of IP filter rules that are applied to an instance's networking. They are project specific, and project members can edit the default rules for their group and add new rule sets. All projects have a "default" security group, which is applied to instances that have no other security group defined. Unless changed, this security group denies all incoming traffic.
Sahara allows you to control which security groups will be used for created instances. This can be done by providing the security_groups parameter for the node group or node group template. The default for this option is an empty list, which will result in the default project security group being used for the instances.
Sahara may also create a security group for instances in the node group automatically. This security group will only contain open ports for required instance processes and the sahara engine. This option is useful for development and for when your installation is secured from outside environments. For production environments we recommend controlling the security group policy manually.
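A hedged sketch of both options in a node group template, using the field names security_groups and auto_security_group from the sahara REST API (the group name, plugin, version, and flavor are assumptions):

```python
# Illustrative sketch: explicitly listing security groups for a node group,
# and also letting sahara create one automatically with the required ports open.
node_group_template = {
    "name": "secured-worker",
    "plugin_name": "vanilla",               # assumed plugin
    "hadoop_version": "2.7.1",              # assumed version
    "flavor_id": "<flavor-id>",
    "node_processes": ["datanode", "nodemanager"],
    "security_groups": ["hadoop-workers"],  # existing groups to apply (assumed name)
    "auto_security_group": True,            # additionally create an automatic group
}
```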
Sahara supports special strings that can be used in data source URLs. These strings will be replaced with appropriate values during job execution, which allows the same data source to be used as an output multiple times.
There are two types of string currently supported:
* %JOB_EXEC_ID% - this string will be replaced with the job execution ID.
* %RANDSTR(len)% - this string will be replaced with a random string of lowercase letters of length len.
After placeholders are replaced, the real URLs are stored in the data_source_urls field of the job execution object. This is used later to find objects created by a particular job run.
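For illustration only, the substitution behaves roughly like the following sketch. This is not sahara's implementation, and the container name in the example URL is an assumption.

```python
import random
import re
import string

def expand_placeholders(url, job_exec_id):
    """Rough illustration of how placeholder strings in a data source URL
    could be expanded; sahara performs the real substitution internally."""
    url = url.replace("%JOB_EXEC_ID%", job_exec_id)

    # %RANDSTR(len)% becomes `len` random lowercase letters.
    def rand(match):
        length = int(match.group(1))
        return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

    return re.sub(r"%RANDSTR\((\d+)\)%", rand, url)

# e.g. swift://demo-container/output-2f3c4d5e-kqzvmd
print(expand_placeholders("swift://demo-container/output-%JOB_EXEC_ID%-%RANDSTR(6)%",
                          "2f3c4d5e"))
```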