AWS DeepLens
Developer Guide

mo.optimize Method

Converts model artifacts from a Caffe (.prototxt or .caffemodel), MXNet (.json and .params), or TensorFlow (.pb) representation to an AWS DeepLens representation and performs any necessary optimization.

Syntax

import mo

res = mo.optimize(model_name, input_width, input_height, platform, aux_inputs)
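As a minimal sketch, the call below optimizes a model using the default platform (MXNet) and default auxiliary inputs. The model name "my_model" and the 224x224 input size are placeholders; the mo module exists only on the AWS DeepLens device, so the import is guarded here:

```python
# Minimal sketch: optimize a deployed model with default settings.
# "my_model" is a placeholder, not a real artifact name.
try:
    import mo  # available only on the AWS DeepLens device

    # 224x224 is a common input size; use your model's actual dimensions.
    res = mo.optimize("my_model", 224, 224)
except ImportError:
    res = None  # running off-device; the mo module is not installed
```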

Request Parameters

  • model_name: The name of the model to optimize.

    Type: string

    Required: Yes

  • input_width: The width of the input image in pixels. The value must be a non-negative integer less than or equal to 1024.

    Type: integer

    Required: Yes

  • input_height: The height of the input image in pixels. The value must be a non-negative integer less than or equal to 1024.

    Type: integer

    Required: Yes

  • platform: The source platform for the optimization. For valid values, see the following table.

    Type: string

    Required: No

    Valid platform Values:

    Value                          Description
    Caffe or caffe                 Converts Caffe model artifacts (.prototxt or .caffemodel files) to AWS DeepLens model artifacts.
    MXNet, mxnet, or mx            Converts Apache MXNet model artifacts (.json and .params files) to AWS DeepLens model artifacts. This is the default.
    TensorFlow, tensorflow, or tf  Converts TensorFlow model artifacts (frozen graph .pb files) to AWS DeepLens model artifacts.
  • aux_inputs: A Python dictionary object that contains auxiliary inputs, including entries common to all platforms and entries specific to individual platforms.

    Type: Dict

    Required: No

    Valid aux_inputs Dictionary Entries

    Item Name              Applicable Platforms  Description
    --img-format           All                   Image format. The default value is BGR.
    --img-channels         All                   Number of image channels. The default value is 3.
    --precision            All                   Image data type. The default value is FP16.
    --fuse                 All                   A switch that turns fusing of linear operations to convolution on (ON) or off (OFF). The default value is ON.
    --models-dir           All                   Model directory. The default is /opt/awscam/artifacts.
    --output-dir           All                   Output directory. The default is /opt/awscam/artifacts.
    --input_proto          Caffe                 The .prototxt file path. The default value is an empty string ("").
    --epoch                MXNet                 Epoch number. The default value is 0.
    --input_model_is_text  TensorFlow            A Boolean flag that indicates whether the input model file is in text protobuf format (True) or binary format (False). The default value is False.
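For illustration, the dictionary below builds aux_inputs for a Caffe model, overriding a few of the defaults listed above. The model name, input size, and prototxt path are hypothetical, and the keyword-argument names follow the parameter names in the Syntax section (an assumption; the parameters can also be passed positionally):

```python
# Hypothetical aux_inputs for a Caffe model; the keys come from the table above.
aux_inputs = {
    "--img-format": "BGR",      # image format (BGR is the default)
    "--precision": "FP16",      # image data type (FP16 is the default)
    "--input_proto": "/opt/awscam/artifacts/my_model.prototxt",  # Caffe only
}

try:
    import mo  # available only on the AWS DeepLens device

    res = mo.optimize("my_model", 300, 300,
                      platform="caffe", aux_inputs=aux_inputs)
except ImportError:
    res = None  # running off-device; the mo module is not installed
```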

Returns

The optimize function returns a result that contains the following:

  • model_path: The path to the optimized model artifacts when optimization succeeds.

    Type: string

  • status: Operational status of the function. For possible causes of failure and corrective actions when the method call fails, see the following table.

    Type: integer

    status Values, Causes, and Corrective Actions:

      • 0: Model optimization succeeded. No action needed.

      • 1: Model optimization failed because the requested platform is not supported.

        • Choose a supported platform.

        • Make sure that the platform name is spelled correctly.

      • 2: Model optimization failed because you are using inconsistent platform versions.

        • Make sure that you are running the latest version of the platform. For example, to upgrade MXNet to the latest version, run pip install --upgrade mxnet at a command prompt.

        • Make sure that there are no unsupported layers in the model for the target platform.

        • Make sure that your awscam software is up to date.

        • See Troubleshooting the Model Optimizer for recommended actions for error messages reported in the CloudWatch Logs for AWS DeepLens and on your AWS DeepLens device.

To load the optimized model for inference, call the awscam.Model API and specify the model_path returned from this function.
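Putting the pieces together, the sketch below maps the status codes from the table above to short messages and shows, in comments, how the on-device loading step might look. The unpacking order of the result and the loading configuration dictionary follow AWS sample code and are assumptions, not part of this reference:

```python
# Status codes from the table above, mapped to short descriptions.
STATUS_MESSAGES = {
    0: "Model optimization succeeded.",
    1: "Requested platform is not supported.",
    2: "Inconsistent platform versions.",
}

def describe_status(status):
    """Return a human-readable message for an mo.optimize status code."""
    return STATUS_MESSAGES.get(status, "Unknown status code.")

# On the AWS DeepLens device, the full flow might look like this
# (assuming the result unpacks to (status, model_path), as in AWS samples):
#
#   import mo, awscam
#   status, model_path = mo.optimize("my_model", 224, 224)
#   if status == 0:
#       model = awscam.Model(model_path, {"GPU": 1})
```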