Apache Spark with Amazon SageMaker AI
Amazon SageMaker AI Spark is an open source Spark library that helps you build Spark machine learning (ML) pipelines with SageMaker AI. It simplifies the integration of Spark ML stages with SageMaker AI stages, such as model training and hosting. For more information about SageMaker AI Spark, see the SageMaker AI Spark GitHub repository.
The SageMaker AI Spark library is available in Python and Scala. You can use SageMaker AI Spark to train models in SageMaker AI using `org.apache.spark.sql.DataFrame` data frames in your Spark clusters. After model training, you can also host the model using SageMaker AI hosting services.
The SageMaker AI Spark library, `com.amazonaws.services.sagemaker.sparksdk`, provides the following classes, among others:
- `SageMakerEstimator`—Extends the `org.apache.spark.ml.Estimator` interface. You can use this estimator for model training in SageMaker AI.
- `KMeansSageMakerEstimator`, `PCASageMakerEstimator`, and `XGBoostSageMakerEstimator`—Extend the `SageMakerEstimator` class.
- `SageMakerModel`—Extends the `org.apache.spark.ml.Model` class. You can use this `SageMakerModel` for model hosting and getting inferences in SageMaker AI.
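For example, a `KMeansSageMakerEstimator` can be constructed like the following minimal Scala sketch. The role ARN, instance types, and hyperparameter values here are placeholder assumptions, not prescriptions:

```scala
import com.amazonaws.services.sagemaker.sparksdk.IAMRole
import com.amazonaws.services.sagemaker.sparksdk.algorithms.KMeansSageMakerEstimator

// Placeholder IAM role ARN; replace with a role that SageMaker AI can assume
// to read training data from S3 and write model artifacts.
val roleArn = "arn:aws:iam::123456789012:role/my-sagemaker-role"

// The estimator runs training on SageMaker AI-managed instances; after fit(),
// it returns a SageMakerModel backed by a hosted endpoint.
val estimator = new KMeansSageMakerEstimator(
    sagemakerRole = IAMRole(roleArn),
    trainingInstanceType = "ml.m4.xlarge",
    trainingInstanceCount = 1,
    endpointInstanceType = "ml.m4.xlarge",
    endpointInitialInstanceCount = 1)
  .setK(10)           // number of clusters (example value)
  .setFeatureDim(784) // dimensionality of the features vector (example value)
```

Because `SageMakerEstimator` extends `org.apache.spark.ml.Estimator`, this object can also be used as a stage in a standard Spark ML `Pipeline`.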
You can download the source code for both the Python Spark (PySpark) and Scala libraries from the SageMaker AI Spark GitHub repository.
For installation and examples of the SageMaker AI Spark library, see SageMaker AI Spark for Scala examples or Resources for using SageMaker AI Spark for Python (PySpark) examples.
If you use Amazon EMR on AWS to manage Spark clusters, see Apache Spark in the Amazon EMR documentation.
Integrate your Apache Spark application with SageMaker AI
The following is a high-level summary of the steps for integrating your Apache Spark application with SageMaker AI.
- Continue data preprocessing using the Apache Spark library that you are familiar with. Your dataset remains a `DataFrame` in your Spark cluster. Load your data into a `DataFrame` and preprocess it so that you have a `features` column with `org.apache.spark.ml.linalg.Vector` of `Double`s, and an optional `label` column with values of `Double` type.
- Use the estimator in the SageMaker AI Spark library to train your model. For example, if you choose the k-means algorithm provided by SageMaker AI for model training, call the `KMeansSageMakerEstimator.fit` method. Provide your `DataFrame` as input. The estimator returns a `SageMakerModel` object.

  Note: `SageMakerModel` extends the `org.apache.spark.ml.Model` class.

  The `fit` method does the following:

  - Converts the input `DataFrame` to the protobuf format by selecting the `features` and `label` columns from the input `DataFrame`, then uploads the protobuf data to an Amazon S3 bucket. The protobuf format is efficient for model training in SageMaker AI.
  - Starts model training in SageMaker AI by sending a SageMaker AI `CreateTrainingJob` request. After model training has completed, SageMaker AI saves the model artifacts to an S3 bucket. SageMaker AI assumes the IAM role that you specified for model training to perform tasks on your behalf. For example, it uses the role to read training data from an S3 bucket and to write model artifacts to a bucket.
  - Creates and returns a `SageMakerModel` object. The constructor performs the following tasks, which are related to deploying your model to SageMaker AI:
    - Sends a `CreateModel` request to SageMaker AI.
    - Sends a `CreateEndpointConfig` request to SageMaker AI.
    - Sends a `CreateEndpoint` request to SageMaker AI, which then launches the specified resources and hosts the model on them.
- You can get inferences from your model hosted in SageMaker AI with the `SageMakerModel.transform` method. Provide an input `DataFrame` with features. The `transform` method transforms it to a `DataFrame` containing inferences. Internally, the `transform` method sends a request to the `InvokeEndpoint` SageMaker API to get inferences, then appends them to the input `DataFrame`.
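Put together, the steps above can be sketched in Scala as follows. The S3 path, role ARN, and instance settings are hypothetical placeholders, and the sketch assumes LibSVM-formatted training data, which Spark loads into the expected `label` and `features` columns:

```scala
import com.amazonaws.services.sagemaker.sparksdk.IAMRole
import com.amazonaws.services.sagemaker.sparksdk.algorithms.KMeansSageMakerEstimator
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("sagemaker-spark-example").getOrCreate()

// Step 1: load and preprocess. LibSVM data yields "label" (Double) and
// "features" (Vector) columns; the S3 path is a placeholder.
val trainingData = spark.read.format("libsvm")
  .option("numFeatures", "784")
  .load("s3://amzn-s3-demo-bucket/train/")

// Step 2: train. fit() converts the DataFrame to protobuf, uploads it to S3,
// runs a SageMaker AI training job, and deploys the model to an endpoint.
val estimator = new KMeansSageMakerEstimator(
    sagemakerRole = IAMRole("arn:aws:iam::123456789012:role/my-sagemaker-role"),
    trainingInstanceType = "ml.m4.xlarge",
    trainingInstanceCount = 1,
    endpointInstanceType = "ml.m4.xlarge",
    endpointInitialInstanceCount = 1)
  .setK(10)
  .setFeatureDim(784)
val model = estimator.fit(trainingData)

// Step 3: transform() calls the InvokeEndpoint API and appends the inferences
// to the input DataFrame as new columns.
val results = model.transform(trainingData)
results.show()
```

Note that the endpoint created by `fit` continues to incur charges until you delete it, so clean up the endpoint, endpoint configuration, and model when you are done experimenting.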