Examples and More Information: Use Your Own Algorithm or Model

The following Jupyter notebooks and additional information show how to use your own algorithms or pretrained models from an Amazon SageMaker notebook instance. For links to the GitHub repositories with the prebuilt Dockerfiles for the TensorFlow, MXNet, Chainer, and PyTorch frameworks, and instructions on using the AWS SDK for Python (Boto3) estimators to run your own training algorithms on SageMaker training and your own models on SageMaker hosting, see Prebuilt SageMaker Docker images for deep learning.
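
In its simplest form, that estimator workflow looks like the following sketch, which runs a custom training image and deploys the result to a hosting endpoint. The image URI, role ARN, and S3 paths are placeholders; see the linked repositories for complete, framework-specific examples.

```python
# A minimal sketch, not taken from the linked notebooks: running your own
# training image with the SageMaker Python SDK. All names are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role ARN

estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",  # placeholder
    sagemaker_session=session,
)

# Run the training job on SageMaker training.
estimator.fit({"training": "s3://my-bucket/training-data"})  # placeholder channel

# Deploy the trained model to a real-time endpoint on SageMaker hosting.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```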

Setup

  1. Create a SageMaker notebook instance. For instructions on how to create and access Jupyter notebook instances, see Amazon SageMaker Notebook Instances.

  2. Open the notebook instance you created.

  3. Choose the SageMaker Examples tab for a list of all SageMaker example notebooks.

  4. Open the sample notebooks from the Advanced Functionality section in your notebook instance or from GitHub using the provided links. To open a notebook, choose its Use tab, then choose Create copy.

Host models trained in Scikit-learn

To learn how to host models trained in scikit-learn for making predictions in SageMaker by injecting them into first-party k-means and XGBoost containers, see the following sample notebooks.
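
As a rough illustration of the XGBoost variant of this pattern, the following sketch packages a locally trained booster and serves it from the first-party XGBoost container. The bucket, role, container version, and expected model file name are assumptions; the sample notebooks show the authoritative steps for each container.

```python
# A hedged sketch of injecting a locally trained model into the
# first-party XGBoost container. Names and paths are placeholders.
import tarfile
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

# Assume a locally trained booster was saved as "xgboost-model"
# (an assumed file name), then archived for SageMaker hosting:
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("xgboost-model")

model_data = session.upload_data("model.tar.gz", bucket=session.default_bucket())

# Look up the first-party XGBoost container image for this Region.
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.7-1"
)

model = Model(image_uri=image_uri, model_data=model_data, role=role)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```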

Package TensorFlow and Scikit-learn models for use in SageMaker

To learn how to package algorithms that you have developed in TensorFlow and scikit-learn frameworks for training and deployment in the SageMaker environment, see the following notebooks. They show you how to build, register, and deploy your own Docker containers using Dockerfiles.
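
The register-and-train portion of that workflow might look like the following boto3 sketch, assuming you have already built the image locally with the Docker CLI (docker build / docker push). All names, ARNs, and S3 paths are placeholders; the notebooks cover the full build, register, and deploy cycle.

```python
# A rough sketch of registering an image in Amazon ECR and launching a
# training job with it through the low-level boto3 API.
import boto3

region = "us-east-1"
account = boto3.client("sts").get_caller_identity()["Account"]
repo = "my-scikit-algorithm"  # placeholder repository name
image_uri = f"{account}.dkr.ecr.{region}.amazonaws.com/{repo}:latest"

# Create the repository if it does not exist yet; the image itself is
# built and pushed separately with the Docker CLI.
ecr = boto3.client("ecr", region_name=region)
try:
    ecr.create_repository(repositoryName=repo)
except ecr.exceptions.RepositoryAlreadyExistsException:
    pass

sm = boto3.client("sagemaker", region_name=region)
sm.create_training_job(
    TrainingJobName="my-scikit-training-job",  # placeholder
    AlgorithmSpecification={"TrainingImage": image_uri, "TrainingInputMode": "File"},
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    InputDataConfig=[{
        "ChannelName": "training",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/training-data",  # placeholder
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output"},  # placeholder
    ResourceConfig={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 10},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```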

Train and deploy a neural network on SageMaker

To learn how to train a neural network locally using MXNet or TensorFlow, and then create an endpoint from the trained model and deploy it on SageMaker, see the following notebooks. The MXNet model is trained to recognize handwritten numbers from the MNIST dataset. The TensorFlow model is trained to classify irises.
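
The deployment half of that workflow might look like the following sketch, which points a framework Model at trained artifacts in S3 and creates an endpoint. The artifact path, role, entry point script, and framework version are placeholders; the notebooks show the exact model format each framework expects.

```python
# A hedged sketch: deploying a model trained outside SageMaker (for
# example, on the notebook instance) from its archived artifacts in S3.
from sagemaker.mxnet import MXNetModel

model = MXNetModel(
    model_data="s3://my-bucket/model/model.tar.gz",  # placeholder artifact location
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    entry_point="inference.py",  # hypothetical script implementing the model-loading hooks
    framework_version="1.9.0",   # placeholder framework version
    py_version="py38",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict([[0.0] * 784]))  # for example, a flattened 28x28 MNIST digit
```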

Training using pipe mode

To learn how to use a Dockerfile to build a container that calls the train.py script and uses pipe mode to train a custom algorithm, see the following notebook. In pipe mode, the input data is streamed to the algorithm while it is training. This can decrease training time compared to using file mode.
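
Requesting pipe mode from the SageMaker Python SDK might look like the following minimal sketch; the image URI, role, and data locations are placeholders.

```python
# A minimal sketch of launching a pipe mode training job with a custom image.
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/pipe-mode-algo:latest",  # placeholder
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    input_mode="Pipe",  # stream data to the container instead of downloading it first
)

# Inside the container, channel data arrives through FIFOs such as
# /opt/ml/input/data/training_0, which train.py reads sequentially.
estimator.fit({"training": TrainingInput("s3://my-bucket/training-data")})  # placeholder
```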

Bring your own R model

To learn how to use a custom R image to build and train a model in an Amazon SageMaker notebook, see the following blog post. The blog post uses a sample R Dockerfile from the library of SageMaker Studio Classic Custom Image Samples.
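
For orientation, the following sketch shows the boto3 calls that typically attach a custom image to a Studio domain once the image is pushed to Amazon ECR. The image name, kernel spec, domain ID, and role are placeholders; follow the blog post for the exact steps.

```python
# A rough sketch, assuming the custom R image is already pushed to ECR.
import boto3

sm = boto3.client("sagemaker")
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

# Register the image and a version pointing at the ECR location.
sm.create_image(ImageName="custom-r", RoleArn=role)
sm.create_image_version(
    ImageName="custom-r",
    BaseImage="111122223333.dkr.ecr.us-east-1.amazonaws.com/custom-r:latest",  # placeholder
)

# Describe the kernel the image provides ("ir" assumes IRkernel).
sm.create_app_image_config(
    AppImageConfigName="custom-r-config",
    KernelGatewayImageConfig={"KernelSpecs": [{"Name": "ir", "DisplayName": "R"}]},
)

# Make the image available to users of the domain.
sm.update_domain(
    DomainId="d-xxxxxxxxxxxx",  # placeholder domain ID
    DefaultUserSettings={
        "KernelGatewayAppSettings": {
            "CustomImages": [{
                "ImageName": "custom-r",
                "AppImageConfigName": "custom-r-config",
            }],
        },
    },
)
```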

Extend a prebuilt PyTorch container image

To learn how to extend a prebuilt SageMaker PyTorch container image when you have additional functional requirements for your algorithm or model that the prebuilt Docker image doesn't support, see the following notebook.

For more information about extending a container, see Extend a Pre-built Container.
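
Once the extended image is pushed to Amazon ECR, pointing an estimator at it is a small change, as in the following sketch. The URIs, role, and entry point are placeholders; the notebook walks through writing the Dockerfile that extends the prebuilt image.

```python
# A hedged sketch: training with a custom image that extends the prebuilt
# SageMaker PyTorch training image. All names are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",  # hypothetical training script
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/extended-pytorch:latest",  # placeholder
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
)

estimator.fit({"training": "s3://my-bucket/training-data"})  # placeholder
```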

Train and debug training jobs on a custom container

To learn how to train and debug training jobs using SageMaker Debugger, see the following notebook. The training script provided through this example uses the TensorFlow Keras ResNet50 model and the CIFAR-10 dataset. A custom Docker container is built with the training script and pushed to Amazon ECR. While the training job is running, Debugger collects tensor outputs and identifies debugging problems. With the smdebug client library, you can create an smdebug trial object that loads the training job's debugging information, check the status of the training job and the Debugger rules, and retrieve tensors saved in an Amazon S3 bucket to analyze training issues.
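
The analysis side might look like the following sketch, which opens an smdebug trial on the job's debug output and reads back saved tensors. The S3 path and the tensor name "loss" are assumptions; the notebook shows the full debugging workflow.

```python
# A minimal sketch of inspecting Debugger output with the smdebug library.
from smdebug.trials import create_trial

# Point the trial at the job's debug output location (placeholder path).
trial = create_trial("s3://my-bucket/debug-output/my-training-job")

print(trial.tensor_names())  # tensors the Debugger hook collected

# Read a tensor's value at each saved step; assumes a tensor named "loss"
# was included in the saved collections.
for step in trial.steps():
    print(step, trial.tensor("loss").value(step))
```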