Use Apache Spark in Amazon Athena
Amazon Athena makes it easy to interactively run data analytics and exploration using Apache Spark without the need to plan for, configure, or manage resources. Running Apache Spark applications on Athena means submitting Spark code for processing and receiving the results directly, without additional configuration. You can use the simplified notebook experience in the Amazon Athena console to develop Apache Spark applications using Python or the Athena notebook APIs. Apache Spark on Amazon Athena is serverless and provides automatic, on-demand scaling that delivers instant-on compute to meet changing data volumes and processing requirements.
Amazon Athena offers the following features:
- Console usage – Submit your Spark applications from the Amazon Athena console.
- Scripting – Quickly and interactively build and debug Apache Spark applications in Python.
- Dynamic scaling – Amazon Athena automatically determines the compute and memory resources needed to run a job and continuously scales those resources accordingly, up to the maximums that you specify. This dynamic scaling reduces cost without affecting speed.
- Notebook experience – Use the Athena notebook editor to create, edit, and run computations using a familiar interface. Athena notebooks are compatible with Jupyter notebooks and contain a list of cells that are executed in order as calculations. Cell content can include code, text, Markdown, mathematics, plots, and rich media.
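Beyond the console and notebook editor, Spark code can also be submitted programmatically through the Athena notebook APIs. The sketch below, using the AWS SDK for Python (boto3), starts a session in a Spark-enabled workgroup, submits one calculation, and polls until it finishes. The workgroup name and DPU limit are placeholder assumptions, and valid AWS credentials are assumed to be configured in the environment.

```python
# Sketch: run one Spark calculation via the Athena notebook APIs.
# Assumptions: a Spark-enabled workgroup exists (name is a placeholder),
# and AWS credentials/region are configured for boto3.
import time

# States in which a calculation has finished and polling should stop.
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELED"}


def run_calculation(workgroup: str, code: str) -> dict:
    """Start an Athena Spark session, run one calculation, return its final status."""
    import boto3  # imported lazily so the helper above works without the SDK installed

    athena = boto3.client("athena")

    # Start a session in the Spark-enabled workgroup.
    session = athena.start_session(
        WorkGroup=workgroup,
        EngineConfiguration={"MaxConcurrentDpus": 20},  # placeholder capacity limit
    )
    session_id = session["SessionId"]

    # Submit the Spark code as a calculation in that session.
    calc = athena.start_calculation_execution(
        SessionId=session_id,
        CodeBlock=code,
    )
    calc_id = calc["CalculationExecutionId"]

    # Poll until the calculation reaches a terminal state.
    while True:
        status = athena.get_calculation_execution(CalculationExecutionId=calc_id)
        if status["Status"]["State"] in TERMINAL_STATES:
            return status
        time.sleep(5)


if __name__ == "__main__":
    result = run_calculation(
        "my-spark-workgroup",  # hypothetical workgroup name
        "print(spark.version)",
    )
    print(result["Status"]["State"])
```

Because Athena sessions are serverless, there is no cluster to provision first; the session request itself allocates capacity up to the DPU maximum you specify.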
For additional information, see Run Spark SQL on Amazon Athena.
Topics
- Considerations and limitations
- Get started
- Manage notebook files
- Notebook editor
- Non-Hive table formats
- Python library support
- Specify custom configuration
- Supported data and storage formats
- Monitor Apache Spark calculations
- Enable requester pays buckets
- Enable Spark encryption
- Cross-account catalog access
- Service quotas
- Athena notebook APIs
- Troubleshoot