TensorFlow Elastic Inference with Python
With Elastic Inference TensorFlow Serving, the standard TensorFlow Serving interface remains unchanged. The only difference is that the entry point is a different binary named `amazonei_tensorflow_model_server`.
TensorFlow Serving and Predictor are the only inference modes that Elastic Inference supports. If you haven't tried TensorFlow Serving before, we recommend that you try the TensorFlow Serving tutorial first.
This release of Elastic Inference TensorFlow Serving has been tested to perform well and provide cost-saving benefits with the following deep learning use cases and network architectures (and similar variants):
| Use Case | Example Network Topology |
|---|---|
| Image Recognition | Inception, ResNet, MVCNN |
| Object Detection | SSD, RCNN |
| Neural Machine Translation | GNMT |
These tutorials assume that you are using a DLAMI v26 or later with Elastic Inference enabled TensorFlow.
Topics

- Activate the TensorFlow Elastic Inference Environment
- Use Elastic Inference with TensorFlow Serving
- Use Elastic Inference with the TensorFlow EIPredictor API
- Use Elastic Inference with the TensorFlow Keras API

Activate the TensorFlow Elastic Inference Environment

- (Option for Python 3) Activate the Python 3 TensorFlow Elastic Inference environment:

  ```
  $ source activate amazonei_tensorflow_p36
  ```

- (Option for Python 2) Activate the Python 2.7 TensorFlow Elastic Inference environment:

  ```
  $ source activate amazonei_tensorflow_p27
  ```

The remaining parts of this guide assume you are using the `amazonei_tensorflow_p27` environment.
If you are switching between Elastic Inference enabled MXNet, TensorFlow, or PyTorch environments, you must stop and then start your instance to reattach the Elastic Inference accelerator. Rebooting is not sufficient because the process requires a complete shutdown.
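If you want to script the stop and start cycle, a minimal sketch using the AWS CLI might look like the following (the instance ID is a placeholder):

```
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 wait instance-stopped --instance-ids i-1234567890abcdef0
aws ec2 start-instances --instance-ids i-1234567890abcdef0
```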
Use Elastic Inference with TensorFlow Serving
The following is an example of serving a Single Shot Detector (SSD) with a ResNet backbone.
Serve and Test Inference with an SSD Model
- Download the model.

  ```
  curl -O https://s3-us-west-2.amazonaws.com/aws-tf-serving-ei-example/ssd_resnet.zip
  ```

- Unzip the model.

  ```
  unzip ssd_resnet.zip -d /tmp
  ```

- Download a picture of three dogs to your home directory.

  ```
  curl -O https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/3dogs.jpg
  ```
- Use the built-in EI Tool to get the device ordinal number of all attached Elastic Inference accelerators. For more information on EI Tool, see Monitoring Elastic Inference Accelerators.

  ```
  /opt/amazon/ei/ei_tools/bin/ei describe-accelerators --json
  ```

  Your output should look like the following:

  ```
  {
    "ei_client_version": "1.5.0",
    "time": "Fri Nov 1 03:09:38 2019",
    "attached_accelerators": 2,
    "devices": [
      {
        "ordinal": 0,
        "type": "eia1.xlarge",
        "id": "eia-679e4c622d584803aed5b42ab6a97706",
        "status": "healthy"
      },
      {
        "ordinal": 1,
        "type": "eia1.xlarge",
        "id": "eia-6c414c6ee37a4d93874afc00825c2f28",
        "status": "healthy"
      }
    ]
  }
  ```
- Navigate to the folder where AmazonEI_TensorFlow_Serving is installed and run the following command to launch the server. Set `EI_VISIBLE_DEVICES` to the device ordinal or device ID of the attached Elastic Inference accelerator that you want to use. This device will then be accessible using id 0. For more information on `EI_VISIBLE_DEVICES`, see Monitoring Elastic Inference Accelerators. Note that `model_base_path` must be an absolute path.

  ```
  EI_VISIBLE_DEVICES=<ordinal number> amazonei_tensorflow_model_server --model_name=ssdresnet --model_base_path=/tmp/ssd_resnet50_v1_coco --port=9000
  ```
- While the server is running in the foreground, open a new terminal session and activate the TensorFlow environment.

  ```
  source activate amazonei_tensorflow_p27
  ```
- Use your preferred text editor to create a script that has the following content. Name it `ssd_resnet_client.py`. This script takes an image filename as a parameter and gets a prediction result from the pretrained model.

  ```python
  from __future__ import print_function
  import grpc
  import os
  import numpy as np
  import tensorflow as tf
  from PIL import Image
  from tensorflow_serving.apis import predict_pb2
  from tensorflow_serving.apis import prediction_service_pb2_grpc

  tf.app.flags.DEFINE_string('server', 'localhost:9000',
                             'PredictionService host:port')
  tf.app.flags.DEFINE_string('image', '', 'path to image in JPEG format')
  FLAGS = tf.app.flags.FLAGS

  coco_classes_txt = "https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-paper.txt"
  local_coco_classes_txt = "/tmp/coco-labels-paper.txt"
  # Download the COCO class labels to a local file
  os.system("curl -o %s -O %s" % (local_coco_classes_txt, coco_classes_txt))
  NUM_PREDICTIONS = 5
  with open(local_coco_classes_txt) as f:
      classes = ["No Class"] + [line.strip() for line in f.readlines()]


  def main(_):
      channel = grpc.insecure_channel(FLAGS.server)
      stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

      # Send request
      with Image.open(FLAGS.image) as f:
          f.load()
          # See prediction_service.proto for gRPC request/response details.
          data = np.asarray(f)
          data = np.expand_dims(data, axis=0)

          request = predict_pb2.PredictRequest()
          request.model_spec.name = 'ssdresnet'
          request.inputs['inputs'].CopyFrom(
              tf.contrib.util.make_tensor_proto(data, shape=data.shape))
          result = stub.Predict(request, 60.0)  # 60 secs timeout

          outputs = result.outputs
          detection_classes = outputs["detection_classes"]
          detection_classes = tf.make_ndarray(detection_classes)
          num_detections = int(tf.make_ndarray(outputs["num_detections"])[0])
          print("%d detection[s]" % (num_detections))
          class_label = [classes[int(x)]
                         for x in detection_classes[0][:num_detections]]
          print("SSD Prediction is ", class_label)


  if __name__ == '__main__':
      tf.app.run()
  ```
- Now run the script, passing the server location, port, and the dog photo's file name as parameters.

  ```
  python ssd_resnet_client.py --server=localhost:9000 --image 3dogs.jpg
  ```
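If the client cannot reach the server, a quick way to check connectivity (using only the standard grpc API; host and port as in the example above) is to wait for the channel to become ready:

```python
import grpc

channel = grpc.insecure_channel('localhost:9000')
try:
    # Raises grpc.FutureTimeoutError if the server is not reachable in time
    grpc.channel_ready_future(channel).result(timeout=10)
    print('Server is ready')
except grpc.FutureTimeoutError:
    print('Could not connect to the model server')
```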
Use Elastic Inference with the TensorFlow EIPredictor API
Elastic Inference TensorFlow packages for Python 2 and 3 provide the EIPredictor API. This API gives you a flexible way to run models on Elastic Inference accelerators as an alternative to TensorFlow Serving, and a simple interface for performing repeated inference on a pretrained model. The following code sample shows the available parameters.
`accelerator_id` should be set to the device's ordinal number, not its ID.

```python
ei_predictor = EIPredictor(model_dir,
                           signature_def_key=None,
                           signature_def=None,
                           input_names=None,
                           output_names=None,
                           tags=None,
                           graph=None,
                           config=None,
                           use_ei=True,
                           accelerator_id=<device ordinal number>)

output_dict = ei_predictor(feed_dict)
```
EIPredictor can be used in the following ways:
```python
# EIPredictor picks inputs and outputs from the default serving
# signature def with the tag "serve" (similar to TF Predictor).
ei_predictor = EIPredictor(model_dir)

# EIPredictor picks inputs and outputs from the signature def
# selected with signature_def_key (similar to TF Predictor).
ei_predictor = EIPredictor(model_dir, signature_def_key='predict')

# A signature_def can be provided directly (similar to TF Predictor).
ei_predictor = EIPredictor(model_dir, signature_def=sig_def)

# You provide the input_names and output_names dicts
# (similar to TF Predictor).
ei_predictor = EIPredictor(model_dir, input_names, output_names)

# A tag is used to get the correct signature def (similar to TF Predictor).
ei_predictor = EIPredictor(model_dir, tags='serve')
```
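For reference, the `feed_dict` passed to the predictor maps input names to NumPy arrays, and the returned `output_dict` maps output names to arrays. A minimal sketch, assuming the SSD SavedModel used later in this topic and a dummy input batch:

```python
import numpy as np
from tensorflow.contrib.ei.python.predictor.ei_predictor import EIPredictor

# SavedModel directory from the SSD example in this topic
ei_predictor = EIPredictor(model_dir='/tmp/ssd_resnet50_v1_coco/1/')

# Keys in feed_dict are the input names from the serving signature def
feed_dict = {'inputs': np.zeros((1, 224, 224, 3), dtype=np.uint8)}
output_dict = ei_predictor(feed_dict)
print(list(output_dict.keys()))  # e.g. detection_classes, num_detections, ...
```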
Additional EIPredictor functionality includes:
- Support for frozen models:

  ```python
  # For frozen graphs, model_dir takes a file name;
  # input_names and output_names must be provided in this case.
  ei_predictor = EIPredictor(model_dir,
                             input_names=input_names,
                             output_names=output_names)
  ```
- Ability to disable use of Elastic Inference with the `use_ei` flag, which defaults to `True`. This is useful for testing EIPredictor against TensorFlow Predictor; see the sketch after this list.
- EIPredictor can also be created from a TensorFlow Estimator. Given a trained Estimator, you can first export a SavedModel. See the SavedModel documentation for more details. The following shows example usage:

  ```python
  saved_model_dir = estimator.export_savedmodel(my_export_dir, serving_input_fn)
  ei_predictor = EIPredictor(export_dir=saved_model_dir)

  # Once the EIPredictor is created, inference is done using the following:
  output_dict = ei_predictor(feed_dict)
  ```
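As a sketch of the `use_ei` comparison mentioned in the list above (reusing the SSD SavedModel path and the dummy input from the earlier sketch), you could run the same model with and without the accelerator and compare outputs:

```python
import numpy as np
from tensorflow.contrib.ei.python.predictor.ei_predictor import EIPredictor

model_dir = '/tmp/ssd_resnet50_v1_coco/1/'
feed_dict = {'inputs': np.zeros((1, 224, 224, 3), dtype=np.uint8)}

# Runs on the Elastic Inference accelerator (use_ei defaults to True)
ei_out = EIPredictor(model_dir, accelerator_id=0)(feed_dict)

# Runs locally, behaving like a plain TensorFlow Predictor
local_out = EIPredictor(model_dir, use_ei=False)(feed_dict)

print(ei_out['num_detections'], local_out['num_detections'])
```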
TensorFlow Predictor Example
Installing Elastic Inference TensorFlow
Elastic Inference enabled TensorFlow comes bundled in the AWS Deep Learning AMI. You can also download pip wheels for Python 2 and 3 from the Elastic Inference S3 bucket. Follow these instructions to download and install the pip package:
- Choose the tar file for the Python version and operating system of your choice from the S3 bucket:

  ```
  curl -O [URL of the tar file of your choice]
  ```

- Open the tar file:

  ```
  tar -xvzf [name of tar file]
  ```
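- Install the extracted pip wheel. This step assumes, as the introduction above describes, that the archive contains a pip wheel for your Python version; the exact file name depends on the file you chose:

  ```
  pip install [name of whl file]
  ```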
Try the following example to run inference with a Single Shot Detector (SSD) model that uses a ResNet backbone.
Test Inference with an SSD Model
- Download the model. If you already downloaded the model in the Serving example, skip this step.

  ```
  curl -O https://s3-us-west-2.amazonaws.com/aws-tf-serving-ei-example/ssd_resnet.zip
  ```

- Unzip the model. Again, you may skip this step if you already have the model.

  ```
  unzip ssd_resnet.zip -d /tmp
  ```

- Download a picture of three dogs to your current directory.

  ```
  curl -O https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/3dogs.jpg
  ```
- Use the built-in EI Tool to get the device ordinal number of all attached Elastic Inference accelerators. For more information on EI Tool, see Monitoring Elastic Inference Accelerators.

  ```
  /opt/amazon/ei/ei_tools/bin/ei describe-accelerators --json
  ```

  Your output should look like the following:

  ```
  {
    "ei_client_version": "1.5.0",
    "time": "Fri Nov 1 03:09:38 2019",
    "attached_accelerators": 2,
    "devices": [
      {
        "ordinal": 0,
        "type": "eia1.xlarge",
        "id": "eia-679e4c622d584803aed5b42ab6a97706",
        "status": "healthy"
      },
      {
        "ordinal": 1,
        "type": "eia1.xlarge",
        "id": "eia-6c414c6ee37a4d93874afc00825c2f28",
        "status": "healthy"
      }
    ]
  }
  ```

  You use the device ordinal of your desired Elastic Inference accelerator to create a Predictor.
- Open a text editor, such as vim, and paste the following inference script. Replace the `accelerator_id` value with the device ordinal of the desired Elastic Inference accelerator. This value must be an integer. Save the file as `ssd_resnet_predictor.py`.

  ```python
  from __future__ import absolute_import
  from __future__ import division
  from __future__ import print_function

  import os
  import numpy as np
  import tensorflow as tf
  import matplotlib.image as mpimg
  from tensorflow.contrib.ei.python.predictor.ei_predictor import EIPredictor

  tf.app.flags.DEFINE_string('image', '', 'path to image in JPEG format')
  FLAGS = tf.app.flags.FLAGS

  coco_classes_txt = "https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-paper.txt"
  local_coco_classes_txt = "/tmp/coco-labels-paper.txt"
  # Download the COCO class labels to a local file
  os.system("curl -o %s -O %s" % (local_coco_classes_txt, coco_classes_txt))
  NUM_PREDICTIONS = 5
  with open(local_coco_classes_txt) as f:
      classes = ["No Class"] + [line.strip() for line in f.readlines()]


  def get_output(eia_predictor, test_input):
      # Run several predictions; keep the last result
      pred = None
      for curpred in range(NUM_PREDICTIONS):
          pred = eia_predictor(test_input)

      num_detections = int(pred["num_detections"])
      print("%d detection[s]" % (num_detections))
      detection_classes = pred["detection_classes"][0][:num_detections]
      print([classes[int(i)] for i in detection_classes])


  def main(_):
      img = mpimg.imread(FLAGS.image)
      img = np.expand_dims(img, axis=0)
      ssd_resnet_input = {'inputs': img}

      print('Running SSD Resnet on EIPredictor using specified input and outputs')
      eia_predictor = EIPredictor(
          model_dir='/tmp/ssd_resnet50_v1_coco/1/',
          input_names={"inputs": "image_tensor:0"},
          output_names={"detection_classes": "detection_classes:0",
                        "num_detections": "num_detections:0",
                        "detection_boxes": "detection_boxes:0"},
          accelerator_id=<device ordinal>)
      get_output(eia_predictor, ssd_resnet_input)

      print('Running SSD Resnet on EIPredictor using default Signature Def')
      eia_predictor = EIPredictor(
          model_dir='/tmp/ssd_resnet50_v1_coco/1/',
      )
      get_output(eia_predictor, ssd_resnet_input)


  if __name__ == "__main__":
      tf.app.run()
  ```
- Run the inference script.

  ```
  python ssd_resnet_predictor.py --image 3dogs.jpg
  ```
For more tutorials and examples, see the TensorFlow Python API.
Use Elastic Inference with the TensorFlow Keras API
The Keras API has become an integral part of the machine learning development cycle because of its simplicity and ease of use. Keras enables rapid prototyping and development of machine learning constructs. Elastic Inference provides an API that offers native support for Keras. Using this API, you can directly use your Keras model, h5 file, and weights to instantiate a Keras-like object. This object supports the native Keras prediction APIs, while fully utilizing Elastic Inference in the backend. The following code sample shows the available parameters:
```python
EIKerasModel(model,
             weights=None,
             export_dir=None):
  """Constructs an `EIKerasModel` instance.

  Args:
    model: A model object that either has its weights already set, or will
      have them set with the weights argument. Can also be a model file
      that can be loaded.
    weights (Optional): A weights object, or a weights file that can be
      loaded, which will be set on the model object.
    export_dir: A folder location to save your model as a SavedModelBundle.

  Raises:
    RuntimeError: If eager execution is enabled.
  """
```
EIKerasModel can be used as follows:
```python
# Loading from a Keras Model object
from tensorflow.contrib.ei.python.keras.ei_keras import EIKerasModel

model = Model()  # Build the Keras model in the normal fashion
x = ...  # input data
ei_model = EIKerasModel(model)  # Only additional step to use EI
res = ei_model.predict(x)

# Loading from a Keras h5 file
ei_model = EIKerasModel("keras_model.h5")  # Only additional step to use EI
res = ei_model.predict(x)

# Loading from a Keras JSON model file and an h5 weights file
ei_model = EIKerasModel("keras_model.json", weights="keras_weights.h5")  # Only additional step to use EI
res = ei_model.predict(x)
```
Additionally, Elastic Inference enabled Keras includes Predict API Support:
```python
# tf.keras
def predict(x,
            batch_size=None,
            verbose=0,
            steps=None,
            max_queue_size=10,           # Not supported
            workers=1,                   # Not supported
            use_multiprocessing=False):  # Not supported

# Native Keras
def predict(x,
            batch_size=None,
            verbose=0,
            steps=None,
            callbacks=None):  # callbacks not supported
```
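In practice, the supported arguments are passed exactly as in native Keras. A minimal sketch, reusing `ei_model` and `x` from the snippets above:

```python
# batch_size, verbose, and steps behave as in native Keras;
# the arguments marked "Not supported" above should be left at their defaults
res = ei_model.predict(x, batch_size=1, verbose=1)
```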
TensorFlow Keras API Example
In this example, you use a trained ResNet-50 model to classify an image of an African Elephant from ImageNet.
Test Inference with a Keras Model
- Activate the Elastic Inference TensorFlow Conda environment.

  ```
  source activate amazonei_tensorflow_p27
  ```

- Download an image of an African elephant to your current directory.

  ```
  curl -O https://upload.wikimedia.org/wikipedia/commons/5/59/Serengeti_Elefantenbulle.jpg
  ```

- Open a text editor, such as vim, and paste the following inference script. Save the file as `test_keras.py`.

  ```python
  # ResNet example: compare vanilla Keras and EIKerasModel predictions
  from tensorflow.keras.applications.resnet50 import ResNet50
  from tensorflow.keras.preprocessing import image
  from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
  from tensorflow.contrib.ei.python.keras.ei_keras import EIKerasModel
  import numpy as np
  import time
  import os

  ITERATIONS = 20

  model = ResNet50(weights='imagenet')
  ei_model = EIKerasModel(model)
  folder_name = os.path.dirname(os.path.abspath(__file__))
  img_path = folder_name + '/Serengeti_Elefantenbulle.jpg'
  img = image.load_img(img_path, target_size=(224, 224))
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis=0)
  x = preprocess_input(x)

  # Warm up both models
  _ = model.predict(x)
  _ = ei_model.predict(x)

  # Benchmark both models
  for each in range(ITERATIONS):
      start = time.time()
      preds = model.predict(x)
      print("Vanilla iteration %d took %f" % (each, time.time() - start))
  for each in range(ITERATIONS):
      start = time.time()
      ei_preds = ei_model.predict(x)
      print("EI iteration %d took %f" % (each, time.time() - start))

  # Decode the results into a list of tuples (class, description, probability)
  # (one such list for each sample in the batch)
  print('Predicted:', decode_predictions(preds, top=3)[0])
  print('EI Predicted:', decode_predictions(ei_preds, top=3)[0])
  ```
- Run the inference script.

  ```
  python test_keras.py
  ```
- Your output should be a list of predictions, as well as their respective confidence scores.

  ```
  ('Predicted:', [(u'n02504458', u'African_elephant', 0.9081173), (u'n01871265', u'tusker', 0.07836755), (u'n02504013', u'Indian_elephant', 0.011482777)])
  ('EI Predicted:', [(u'n02504458', u'African_elephant', 0.90811676), (u'n01871265', u'tusker', 0.07836751), (u'n02504013', u'Indian_elephant', 0.011482781)])
  ```
For more tutorials and examples, see the TensorFlow Python API.