Apache Spark
Spark natively supports applications written in Scala, Python, and Java. It also includes several tightly integrated libraries for SQL (Spark SQL), machine learning (MLlib), stream processing (Spark Streaming), and graph processing (GraphX).
You can install Spark on an Amazon EMR cluster along with other Hadoop applications, and it can also leverage the Amazon EMR file system (EMRFS) to directly access data in Amazon S3. Hive is also integrated with Spark so that you can use a HiveContext object to run Hive scripts using Spark. A Hive context is included in the spark-shell as sqlContext.
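Because the Hive context is already exposed in the spark-shell as sqlContext, you can run HiveQL against Hive tables directly from the shell. A minimal sketch, assuming an EMR cluster with Spark and Hive installed (the table name and columns below are hypothetical placeholders):

```scala
// Inside spark-shell on the cluster, sqlContext is a Hive-enabled context,
// so HiveQL statements and Hive-managed tables work directly.
// "logs" and its columns are illustrative, not part of any real cluster.
sqlContext.sql("CREATE TABLE IF NOT EXISTS logs (id INT, msg STRING)")

// List Hive tables visible to Spark.
sqlContext.sql("SHOW TABLES").show()

// Query the table into a DataFrame and display a few rows.
val df = sqlContext.sql("SELECT id, msg FROM logs LIMIT 10")
df.show()
```

In Spark 2.x and later, the same capability is available through a SparkSession built with Hive support enabled; sqlContext remains in the shell for compatibility.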
For an example tutorial on setting up an EMR cluster with Spark and analyzing a sample data set, see Tutorial: Getting started with Amazon EMR on the AWS News blog.
Important
Apache Spark version 2.3.1, available beginning with Amazon EMR release 5.16.0, addresses CVE-2018-8024. We recommend that you migrate earlier versions of Spark to Spark version 2.3.1 or later.
The following table lists the version of Spark included in the latest release of the Amazon EMR 7.x series, along with the components that Amazon EMR installs with Spark.
For the version of components installed with Spark in this release, see Release 7.5.0 Component Versions.
| Amazon EMR Release Label | Spark Version | Components Installed With Spark |
|---|---|---|
| emr-7.5.0 | Spark 3.5.2 | delta, emrfs, emr-goodies, emr-ddb, emr-s3-select, hadoop-client, hadoop-hdfs-datanode, hadoop-hdfs-library, hadoop-hdfs-namenode, hadoop-httpfs-server, hadoop-kms-server, hadoop-yarn-nodemanager, hadoop-yarn-resourcemanager, hadoop-yarn-timeline-server, hudi, hudi-spark, iceberg, livy-server, nginx, r, spark-client, spark-history-server, spark-on-yarn, spark-yarn-slave |
The following table lists the version of Spark included in the latest release of the Amazon EMR 6.x series, along with the components that Amazon EMR installs with Spark.
For the version of components installed with Spark in this release, see Release 6.15.0 Component Versions.
| Amazon EMR Release Label | Spark Version | Components Installed With Spark |
|---|---|---|
| emr-6.15.0 | Spark 3.4.1 | aws-sagemaker-spark-sdk, delta, emrfs, emr-goodies, emr-ddb, emr-s3-select, hadoop-client, hadoop-hdfs-datanode, hadoop-hdfs-library, hadoop-hdfs-namenode, hadoop-httpfs-server, hadoop-kms-server, hadoop-yarn-nodemanager, hadoop-yarn-resourcemanager, hadoop-yarn-timeline-server, hudi, hudi-spark, iceberg, livy-server, nginx, r, spark-client, spark-history-server, spark-on-yarn, spark-yarn-slave |
Note
Amazon EMR release 6.8.0 comes with Apache Spark 3.3.0. This Spark release uses Apache Log4j 2 and the log4j2.properties file to configure Log4j in Spark processes. If you use Spark in the cluster or create EMR clusters with custom configuration parameters, and you want to upgrade to Amazon EMR release 6.8.0, you must migrate to the new spark-log4j2 configuration classification and key format for Apache Log4j 2. For more information, see Migrating from Apache Log4j 1.x to Log4j 2.x.
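After migrating, Log4j settings move into the spark-log4j2 classification and use Log4j 2 property key names. A hedged sketch of what the configuration JSON supplied when creating a cluster might look like (the specific key shown, rootLogger.level, is illustrative of the Log4j 2 key format):

```json
[
  {
    "Classification": "spark-log4j2",
    "Properties": {
      "rootLogger.level": "warn"
    }
  }
]
```

With the older spark-log4j classification, the equivalent setting used Log4j 1.x keys such as log4j.rootLogger; those keys are not valid under spark-log4j2.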
The following table lists the version of Spark included in the latest release of the Amazon EMR 5.x series, along with the components that Amazon EMR installs with Spark.
For the version of components installed with Spark in this release, see Release 5.36.2 Component Versions.
| Amazon EMR Release Label | Spark Version | Components Installed With Spark |
|---|---|---|
| emr-5.36.2 | Spark 2.4.8 | aws-sagemaker-spark-sdk, emrfs, emr-goodies, emr-ddb, emr-s3-select, hadoop-client, hadoop-hdfs-datanode, hadoop-hdfs-library, hadoop-hdfs-namenode, hadoop-httpfs-server, hadoop-kms-server, hadoop-yarn-nodemanager, hadoop-yarn-resourcemanager, hadoop-yarn-timeline-server, hudi, hudi-spark, livy-server, nginx, r, spark-client, spark-history-server, spark-on-yarn, spark-yarn-slave |
Topics
- Create a cluster with Apache Spark
- Run Spark applications with Docker on Amazon EMR 6.x
- Use AWS Glue Data Catalog with Spark on Amazon EMR
- Working with a multi-catalog hierarchy in AWS Glue Data Catalog with Spark on Amazon EMR
- Configure Spark
- Optimize Spark performance
- Spark Result Fragment Caching
- Use the Nvidia RAPIDS Accelerator for Apache Spark
- Access the Spark shell
- Use Amazon SageMaker Spark for machine learning
- Write a Spark application
- Improve Spark performance with Amazon S3
- Add a Spark step
- View Spark application history
- Access the Spark web UIs
- Using the Spark structured streaming Amazon Kinesis Data Streams connector
- Using Amazon Redshift integration for Apache Spark with Amazon EMR
- Spark release history