
Amazon EMR release 6.0.0

6.0.0 application versions

The following applications are supported in this release: Ganglia, HBase, HCatalog, Hadoop, Hive, Hudi, Hue, JupyterHub, Livy, MXNet, Oozie, Phoenix, Presto, Spark, TensorFlow, Tez, Zeppelin, and ZooKeeper.

The table below lists the application versions available in this release of Amazon EMR and the application versions in the preceding three Amazon EMR releases (when applicable).

For a comprehensive history of application versions for each release of Amazon EMR, see the application version topics in the Amazon EMR Release Guide.

Application version information
Application               emr-6.1.1                emr-6.1.0                emr-6.0.1                emr-6.0.0
AWS SDK for Java          1.11.828                 1.11.828                 1.11.711                 1.11.711
Python                    2.7, 3.7                 2.7, 3.7                 2.7, 3.7                 2.7, 3.7
Scala                     2.12.10                  2.12.10                  2.12.10                  2.11.12
AmazonCloudWatchAgent     -                        -                        -                        -
Delta                     -                        -                        -                        -
Flink                     1.11.0                   1.11.0                   -                        -
Ganglia                   3.7.2                    3.7.2                    3.7.2                    3.7.2
HBase                     2.2.5                    2.2.5                    2.2.3                    2.2.3
HCatalog                  3.1.2                    3.1.2                    3.1.2                    3.1.2
Hadoop                    3.2.1                    3.2.1                    3.2.1                    3.2.1
Hive                      3.1.2                    3.1.2                    3.1.2                    3.1.2
Hudi                      0.5.2-incubating-amzn-2  0.5.2-incubating-amzn-2  0.5.0-incubating-amzn-1  0.5.0-incubating-amzn-1
Hue                       4.7.1                    4.7.1                    4.4.0                    4.4.0
Iceberg                   -                        -                        -                        -
JupyterEnterpriseGateway  -                        -                        -                        -
JupyterHub                1.1.0                    1.1.0                    1.0.0                    1.0.0
Livy                      0.7.0                    0.7.0                    0.6.0                    0.6.0
MXNet                     1.6.0                    1.6.0                    1.5.1                    1.5.1
Mahout                    -                        -                        -                        -
Oozie                     5.2.0                    5.2.0                    5.1.0                    5.1.0
Phoenix                   5.0.0                    5.0.0                    5.0.0                    5.0.0
Pig                       0.17.0                   0.17.0                   -                        -
Presto                    0.232                    0.232                    0.230                    0.230
Spark                     3.0.0                    3.0.0                    2.4.4                    2.4.4
Sqoop                     1.4.7                    1.4.7                    -                        -
TensorFlow                2.1.0                    2.1.0                    1.14.0                   1.14.0
Tez                       0.9.2                    0.9.2                    0.9.2                    0.9.2
Trino (PrestoSQL)         338                      338                      -                        -
Zeppelin                  0.9.0                    0.9.0                    0.9.0                    0.9.0
ZooKeeper                 3.4.14                   3.4.14                   3.4.14                   3.4.14

6.0.0 release notes

The following release notes include information for Amazon EMR release 6.0.0.

Initial release date: March 10, 2020

Supported applications
  • AWS SDK for Java version 1.11.711

  • Ganglia version 3.7.2

  • Hadoop version 3.2.1

  • HBase version 2.2.3

  • HCatalog version 3.1.2

  • Hive version 3.1.2

  • Hudi version 0.5.0-incubating

  • Hue version 4.4.0

  • JupyterHub version 1.0.0

  • Livy version 0.6.0

  • MXNet version 1.5.1

  • Oozie version 5.1.0

  • Phoenix version 5.0.0

  • Presto version 0.230

  • Spark version 2.4.4

  • TensorFlow version 1.14.0

  • Zeppelin version 0.9.0-SNAPSHOT

  • Zookeeper version 3.4.14

  • Connectors and drivers: DynamoDB Connector 4.14.0

Note

Flink, Sqoop, Pig, and Mahout are not available in Amazon EMR version 6.0.0.

New features
  • YARN Docker Runtime Support - YARN applications, such as Spark jobs, can now run in the context of a Docker container. This allows you to easily define dependencies in a Docker image without the need to install custom libraries on your Amazon EMR cluster. For more information, see Configure Docker Integration and Run Spark applications with Docker using Amazon EMR 6.0.0, and the sketch that follows this list.

  • Hive LLAP Support - Hive now supports the LLAP execution mode for improved query performance. For more information, see Using Hive LLAP.
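
As an illustration of the Docker runtime feature, the following sketch shows one way such a Spark job submission might look. It assumes the cluster has already been configured for Docker as described in Configure Docker Integration; the ECR image URI and the S3 script path are placeholders, not values shipped with this release.

  # Placeholder image URI; use an image from a registry that your cluster's
  # container-executor configuration trusts.
  DOCKER_IMAGE=123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example

  # Run the YARN application master and the Spark executors inside the Docker image.
  spark-submit --master yarn --deploy-mode cluster \
    --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
    --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=$DOCKER_IMAGE \
    --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
    --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=$DOCKER_IMAGE \
    s3://amzn-s3-demo-bucket/scripts/pi.py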

Changes, enhancements, and resolved issues
  • This release fixes issues with Amazon EMR scaling when it fails to scale a cluster up or down successfully or causes application failures.

  • Fixed an issue where scaling requests failed for a large, highly utilized cluster when Amazon EMR on-cluster daemons were running health checking activities, such as gathering YARN node state and HDFS node state. This was happening because on-cluster daemons were not able to communicate the health status data of a node to internal Amazon EMR components.

  • Improved EMR on-cluster daemons to correctly track node states when IP addresses are reused, which improves reliability during scaling operations.

  • SPARK-29683. Fixed an issue where job failures occurred during cluster scale-down as Spark was assuming all available nodes were deny-listed.

  • YARN-9011. Fixed an issue where job failures occurred due to a race condition in YARN decommissioning when the cluster tried to scale up or down.

  • Fixed issue with step or job failures during cluster scaling by ensuring that the node states are always consistent between the Amazon EMR on-cluster daemons and YARN/HDFS.

  • Fixed an issue where cluster operations such as scale down and step submission failed for Amazon EMR clusters enabled with Kerberos authentication. This was because the Amazon EMR on-cluster daemon did not renew the Kerberos ticket, which is required to securely communicate with HDFS/YARN running on the primary node.

  • Newer Amazon EMR releases fix the issue with a lower "Max open files" limit on older AL2 in Amazon EMR. Amazon EMR releases 5.30.1, 5.30.2, 5.31.1, 5.32.1, 6.0.1, 6.1.1, 6.2.1, 5.33.0, 6.3.0 and later now include a permanent fix with a higher "Max open files" setting.

  • Amazon Linux

    • Amazon Linux 2 is the operating system for the EMR 6.x release series.

    • systemd is used for service management instead of upstart, which was used in Amazon Linux 1.

  • Java Development Kit (JDK)

    • Corretto JDK 8 is the default JDK for the EMR 6.x release series.

  • Scala

    • Scala 2.12 is used with Apache Spark and Apache Livy.

  • Python 3

    • Python 3 is now the default version of Python in EMR.

  • YARN node labels

    • Beginning with the Amazon EMR 6.x release series, the YARN node labels feature is disabled by default. The application master processes can run on both core and task nodes by default. You can enable the YARN node labels feature by configuring the following properties: yarn.node-labels.enabled and yarn.node-labels.am.default-node-label-expression. For more information, see Understanding Primary, Core, and Task Nodes.
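
      The following create-cluster sketch shows one way to turn node labels back on through the yarn-site classification. The cluster name, instance settings, and the CORE label expression are illustrative assumptions, not values required by this release.

      # Hypothetical cluster launch that re-enables YARN node labels and pins
      # application masters to CORE-labeled nodes.
      aws emr create-cluster --name "node-labels-example" \
        --release-label emr-6.0.0 \
        --applications Name=Hadoop Name=Spark \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles \
        --configurations '[{"Classification":"yarn-site","Properties":{
          "yarn.node-labels.enabled":"true",
          "yarn.node-labels.am.default-node-label-expression":"CORE"}}]'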

Known issues
  • Lower "Max open files" limit on older AL2 [fixed in newer releases]. Amazon EMR releases: emr-5.30.x, emr-5.31.0, emr-5.32.0, emr-6.0.0, emr-6.1.0, and emr-6.2.0 are based on older versions ofAmazon Linux 2 (AL2), which have a lower ulimit setting for "Max open files" when Amazon EMR clusters are created with the default AMI. Amazon EMR releases 5.30.1, 5.30.2, 5.31.1, 5.32.1, 6.0.1, 6.1.1, 6.2.1, 5.33.0, 6.3.0 and later include a permanent fix with a higher "Max open files" setting. Releases with the lower open file limit causes a "Too many open files" error when submitting Spark job. In the impacted releases, the Amazon EMR default AMI has a default ulimit setting of 4096 for "Max open files," which is lower than the 65536 file limit in the latestAmazon Linux 2 AMI. The lower ulimit setting for "Max open files" causes Spark job failure when the Spark driver and executor try to open more than 4096 files. To fix the issue, Amazon EMR has a bootstrap action (BA) script that adjusts the ulimit setting at cluster creation.

    If you are using an older Amazon EMR version that doesn't have the permanent fix for this issue, the following workaround lets you explicitly set the instance-controller ulimit to a maximum of 65536 files.

    Explicitly set a ulimit from the command line
    1. Edit /etc/systemd/system/instance-controller.service to add the following parameters to the Service section.

      LimitNOFILE=65536

      LimitNPROC=65536

    2. Restart InstanceController

      $ sudo systemctl daemon-reload

      $ sudo systemctl restart instance-controller

    Set a ulimit using bootstrap action (BA)

    You can also use a bootstrap action (BA) script to configure the instance-controller ulimit to 65536 files at cluster creation.

#!/bin/bash
for user in hadoop spark hive; do
  sudo tee /etc/security/limits.d/$user.conf << EOF
$user - nofile 65536
$user - nproc 65536
EOF
done

for proc in instancecontroller logpusher; do
  sudo mkdir -p /etc/systemd/system/$proc.service.d/
  sudo tee /etc/systemd/system/$proc.service.d/override.conf << EOF
[Service]
LimitNOFILE=65536
LimitNPROC=65536
EOF
  pid=$(pgrep -f aws157.$proc.Main)
  sudo prlimit --pid $pid --nofile=65535:65535 --nproc=65535:65535
done

sudo systemctl daemon-reload

  • Spark interactive shell, including PySpark, SparkR, and spark-shell, does not support using Docker with additional libraries.

  • To use Python 3 with Amazon EMR version 6.0.0, you must add PATH to yarn.nodemanager.env-whitelist.
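
    As a sketch only, one way to do this at cluster creation is through the yarn-site configuration classification. The <existing-whitelist> placeholder stands for the whitelist value already configured on your cluster and must be replaced; the instance settings are illustrative.

    # Hypothetical example; keep the existing whitelist entries and append PATH.
    aws emr create-cluster --release-label emr-6.0.0 --applications Name=Hadoop Name=Spark \
      --instance-type m5.xlarge --instance-count 3 --use-default-roles \
      --configurations '[{"Classification":"yarn-site","Properties":{"yarn.nodemanager.env-whitelist":"<existing-whitelist>,PATH"}}]'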

  • The Live Long and Process (LLAP) functionality is not supported when you use the AWS Glue Data Catalog as the metastore for Hive.

  • When using Amazon EMR 6.0.0 with Spark and Docker integration, you need to configure the instances in your cluster with the same instance type and the same number of EBS volumes to avoid failure when submitting a Spark job with the Docker runtime.

  • In Amazon EMR 6.0.0, HBase on Amazon S3 storage mode is impacted by the HBASE-24286 issue. The HBase master cannot initialize when the cluster is created using existing S3 data.

  • Known issue in clusters with multiple primary nodes and Kerberos authentication

    If you run clusters with multiple primary nodes and Kerberos authentication in Amazon EMR releases 5.20.0 and later, you may encounter problems with cluster operations such as scale down or step submission after the cluster has been running for some time. The time period depends on the Kerberos ticket validity period that you defined. The scale-down problem impacts both automatic scale-down and explicit scale-down requests that you submit. Additional cluster operations can also be impacted.

    Workaround:

    • Connect using SSH as the hadoop user to the lead primary node of the EMR cluster with multiple primary nodes.

    • Run the following command to renew the Kerberos ticket for the hadoop user.

      kinit -kt <keytab_file> <principal>

      Typically, the keytab file is located at /etc/hadoop.keytab and the principal is in the form of hadoop/<hostname>@<REALM>.

    Note

    This workaround will be effective for the time period the Kerberos ticket is valid. This duration is 10 hours by default, but it can be configured in your Kerberos settings. You must re-run the above command once the Kerberos ticket expires.

6.0.0 component versions

The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.

Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.

Component  Version  Description
aws-sagemaker-spark-sdk  1.2.6  Amazon SageMaker Spark SDK
emr-ddb  4.14.0  Amazon DynamoDB connector for Hadoop ecosystem applications.
emr-goodies  3.0.0  Extra convenience libraries for the Hadoop ecosystem.
emr-kinesis  3.5.0  Amazon Kinesis connector for Hadoop ecosystem applications.
emr-s3-dist-cp  2.14.0  Distributed copy application optimized for Amazon S3.
emr-s3-select  1.5.0  EMR S3Select Connector
emrfs  2.39.0  Amazon S3 connector for Hadoop ecosystem applications.
ganglia-monitor  3.7.2  Embedded Ganglia agent for Hadoop ecosystem applications along with the Ganglia monitoring agent.
ganglia-metadata-collector  3.7.2  Ganglia metadata collector for aggregating metrics from Ganglia monitoring agents.
ganglia-web  3.7.1  Web application for viewing metrics collected by the Ganglia metadata collector.
hadoop-client  3.2.1-amzn-0  Hadoop command-line clients such as 'hdfs', 'hadoop', or 'yarn'.
hadoop-hdfs-datanode  3.2.1-amzn-0  HDFS node-level service for storing blocks.
hadoop-hdfs-library  3.2.1-amzn-0  HDFS command-line client and library
hadoop-hdfs-namenode  3.2.1-amzn-0  HDFS service for tracking file names and block locations.
hadoop-hdfs-journalnode  3.2.1-amzn-0  HDFS service for managing the Hadoop filesystem journal on HA clusters.
hadoop-httpfs-server  3.2.1-amzn-0  HTTP endpoint for HDFS operations.
hadoop-kms-server  3.2.1-amzn-0  Cryptographic key management server based on Hadoop's KeyProvider API.
hadoop-mapred  3.2.1-amzn-0  MapReduce execution engine libraries for running a MapReduce application.
hadoop-yarn-nodemanager  3.2.1-amzn-0  YARN service for managing containers on an individual node.
hadoop-yarn-resourcemanager  3.2.1-amzn-0  YARN service for allocating and managing cluster resources and distributed applications.
hadoop-yarn-timeline-server  3.2.1-amzn-0  Service for retrieving current and historical information for YARN applications.
hbase-hmaster  2.2.3  Service for an HBase cluster responsible for coordination of Regions and execution of administrative commands.
hbase-region-server  2.2.3  Service for serving one or more HBase regions.
hbase-client  2.2.3  HBase command-line client.
hbase-rest-server  2.2.3  Service providing a RESTful HTTP endpoint for HBase.
hbase-thrift-server  2.2.3  Service providing a Thrift endpoint to HBase.
hcatalog-client  3.1.2-amzn-0  The 'hcat' command line client for manipulating hcatalog-server.
hcatalog-server  3.1.2-amzn-0  Service providing HCatalog, a table and storage management layer for distributed applications.
hcatalog-webhcat-server  3.1.2-amzn-0  HTTP endpoint providing a REST interface to HCatalog.
hive-client  3.1.2-amzn-0  Hive command line client.
hive-hbase  3.1.2-amzn-0  Hive-hbase client.
hive-metastore-server  3.1.2-amzn-0  Service for accessing the Hive metastore, a semantic repository storing metadata for SQL on Hadoop operations.
hive-server2  3.1.2-amzn-0  Service for accepting Hive queries as web requests.
hudi  0.5.0-incubating-amzn-1  Incremental processing framework to power data pipelines at low latency and high efficiency.
hudi-presto  0.5.0-incubating-amzn-1  Bundle library for running Presto with Hudi.
hue-server  4.4.0  Web application for analyzing data using Hadoop ecosystem applications
jupyterhub  1.0.0  Multi-user server for Jupyter notebooks
livy-server  0.6.0-incubating  REST interface for interacting with Apache Spark
nginx  1.12.1  nginx [engine x] is an HTTP and reverse proxy server
mxnet  1.5.1  A flexible, scalable, and efficient library for deep learning.
mariadb-server  5.5.64+  MariaDB database server.
nvidia-cuda  9.2.88  Nvidia drivers and Cuda toolkit
oozie-client  5.1.0  Oozie command-line client.
oozie-server  5.1.0  Service for accepting Oozie workflow requests.
opencv  3.4.0  Open Source Computer Vision Library.
phoenix-library  5.0.0-HBase-2.0  The phoenix libraries for server and client
phoenix-query-server  5.0.0-HBase-2.0  A light weight server providing JDBC access as well as Protocol Buffers and JSON format access to the Avatica API
presto-coordinator  0.230  Service for accepting queries and managing query execution among presto-workers.
presto-worker  0.230  Service for executing pieces of a query.
presto-client  0.230  Presto command-line client which is installed on an HA cluster's stand-by masters where Presto server is not started.
r  3.4.3  The R Project for Statistical Computing
spark-client  2.4.4  Spark command-line clients.
spark-history-server  2.4.4  Web UI for viewing logged events for the lifetime of a completed Spark application.
spark-on-yarn  2.4.4  In-memory execution engine for YARN.
spark-yarn-slave  2.4.4  Apache Spark libraries needed by YARN slaves.
tensorflow  1.14.0  TensorFlow open source software library for high performance numerical computation.
tez-on-yarn  0.9.2  The tez YARN application and libraries.
webserver  2.4.41+  Apache HTTP server.
zeppelin-server  0.9.0-SNAPSHOT  Web-based notebook that enables interactive data analytics.
zookeeper-server  3.4.14  Centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
zookeeper-client  3.4.14  ZooKeeper command line client.

6.0.0 configuration classifications

Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.
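
For example, a cluster might be launched with a hive-site classification that points Hive at the AWS Glue Data Catalog. This is a sketch only; the file name and cluster settings are placeholders, and any classification listed in the table below can be supplied the same way.

# Hypothetical configurations file using the hive-site classification.
cat > hive-glue.json << 'EOF'
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  }
]
EOF

# Pass the classification when creating the cluster.
aws emr create-cluster --release-label emr-6.0.0 --applications Name=Hive \
  --instance-type m5.xlarge --instance-count 3 --use-default-roles \
  --configurations file://hive-glue.json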

emr-6.0.0 classifications
Classifications Description

capacity-scheduler

Change values in Hadoop's capacity-scheduler.xml file.

container-executor

Change values in Hadoop YARN's container-executor.cfg file.

container-log4j

Change values in Hadoop YARN's container-log4j.properties file.

core-site

Change values in Hadoop's core-site.xml file.

emrfs-site

Change EMRFS settings.

hadoop-env

Change values in the Hadoop environment for all Hadoop components.

hadoop-log4j

Change values in Hadoop's log4j.properties file.

hadoop-ssl-server

Change Hadoop SSL server configuration.

hadoop-ssl-client

Change Hadoop SSL client configuration.

hbase

Amazon EMR-curated settings for Apache HBase.

hbase-env

Change values in HBase's environment.

hbase-log4j

Change values in HBase's hbase-log4j.properties file.

hbase-metrics

Change values in HBase's hadoop-metrics2-hbase.properties file.

hbase-policy

Change values in HBase's hbase-policy.xml file.

hbase-site

Change values in HBase's hbase-site.xml file.

hdfs-encryption-zones

Configure HDFS encryption zones.

hdfs-env

Change values in the HDFS environment.

hdfs-site

Change values in HDFS's hdfs-site.xml.

hcatalog-env

Change values in HCatalog's environment.

hcatalog-server-jndi

Change values in HCatalog's jndi.properties.

hcatalog-server-proto-hive-site

Change values in HCatalog's proto-hive-site.xml.

hcatalog-webhcat-env

Change values in HCatalog WebHCat's environment.

hcatalog-webhcat-log4j2

Change values in HCatalog WebHCat's log4j2.properties.

hcatalog-webhcat-site

Change values in HCatalog WebHCat's webhcat-site.xml file.

hive

Amazon EMR-curated settings for Apache Hive.

hive-beeline-log4j2

Change values in Hive's beeline-log4j2.properties file.

hive-parquet-logging

Change values in Hive's parquet-logging.properties file.

hive-env

Change values in the Hive environment.

hive-exec-log4j2

Change values in Hive's hive-exec-log4j2.properties file.

hive-llap-daemon-log4j2

Change values in Hive's llap-daemon-log4j2.properties file.

hive-log4j2

Change values in Hive's hive-log4j2.properties file.

hive-site

Change values in Hive's hive-site.xml file

hiveserver2-site

Change values in Hive Server2's hiveserver2-site.xml file

hue-ini

Change values in Hue's ini file

httpfs-env

Change values in the HTTPFS environment.

httpfs-site

Change values in Hadoop's httpfs-site.xml file.

hadoop-kms-acls

Change values in Hadoop's kms-acls.xml file.

hadoop-kms-env

Change values in the Hadoop KMS environment.

hadoop-kms-log4j

Change values in Hadoop's kms-log4j.properties file.

hadoop-kms-site

Change values in Hadoop's kms-site.xml file.

jupyter-notebook-conf

Change values in Jupyter Notebook's jupyter_notebook_config.py file.

jupyter-hub-conf

Change values in JupyterHubs's jupyterhub_config.py file.

jupyter-s3-conf

Configure Jupyter Notebook S3 persistence.

jupyter-sparkmagic-conf

Change values in Sparkmagic's config.json file.

livy-conf

Change values in Livy's livy.conf file.

livy-env

Change values in the Livy environment.

livy-log4j

Change Livy log4j.properties settings.

mapred-env

Change values in the MapReduce application's environment.

mapred-site

Change values in the MapReduce application's mapred-site.xml file.

oozie-env

Change values in Oozie's environment.

oozie-log4j

Change values in Oozie's oozie-log4j.properties file.

oozie-site

Change values in Oozie's oozie-site.xml file.

phoenix-hbase-metrics

Change values in Phoenix's hadoop-metrics2-hbase.properties file.

phoenix-hbase-site

Change values in Phoenix's hbase-site.xml file.

phoenix-log4j

Change values in Phoenix's log4j.properties file.

phoenix-metrics

Change values in Phoenix's hadoop-metrics2-phoenix.properties file.

presto-log

Change values in Presto's log.properties file.

presto-config

Change values in Presto's config.properties file.

presto-password-authenticator

Change values in Presto's password-authenticator.properties file.

presto-env

Change values in Presto's presto-env.sh file.

presto-node

Change values in Presto's node.properties file.

presto-connector-blackhole

Change values in Presto's blackhole.properties file.

presto-connector-cassandra

Change values in Presto's cassandra.properties file.

presto-connector-hive

Change values in Presto's hive.properties file.

presto-connector-jmx

Change values in Presto's jmx.properties file.

presto-connector-kafka

Change values in Presto's kafka.properties file.

presto-connector-localfile

Change values in Presto's localfile.properties file.

presto-connector-memory

Change values in Presto's memory.properties file.

presto-connector-mongodb

Change values in Presto's mongodb.properties file.

presto-connector-mysql

Change values in Presto's mysql.properties file.

presto-connector-postgresql

Change values in Presto's postgresql.properties file.

presto-connector-raptor

Change values in Presto's raptor.properties file.

presto-connector-redis

Change values in Presto's redis.properties file.

presto-connector-redshift

Change values in Presto's redshift.properties file.

presto-connector-tpch

Change values in Presto's tpch.properties file.

presto-connector-tpcds

Change values in Presto's tpcds.properties file.

ranger-kms-dbks-site

Change values in dbks-site.xml file of Ranger KMS.

ranger-kms-site

Change values in ranger-kms-site.xml file of Ranger KMS.

ranger-kms-env

Change values in the Ranger KMS environment.

ranger-kms-log4j

Change values in kms-log4j.properties file of Ranger KMS.

ranger-kms-db-ca

Change values for CA file on S3 for MySQL SSL connection with Ranger KMS.

recordserver-env

Change values in the EMR RecordServer environment.

recordserver-conf

Change values in EMR RecordServer's server.properties file.

recordserver-log4j

Change values in EMR RecordServer's log4j.properties file.

spark

Amazon EMR-curated settings for Apache Spark.

spark-defaults

Change values in Spark's spark-defaults.conf file.

spark-env

Change values in the Spark environment.

spark-hive-site

Change values in Spark's hive-site.xml file

spark-log4j

Change values in Spark's log4j.properties file.

spark-metrics

Change values in Spark's metrics.properties file.

tez-site

Change values in Tez's tez-site.xml file.

yarn-env

Change values in the YARN environment.

yarn-site

Change values in YARN's yarn-site.xml file.

zeppelin-env

Change values in the Zeppelin environment.

zookeeper-config

Change values in ZooKeeper's zoo.cfg file.

zookeeper-log4j

Change values in ZooKeeper's log4j.properties file.