AWS IoT Greengrass
Developer Guide

How to Configure Optimized Machine Learning Inference Using the AWS Management Console

To follow the steps in this tutorial, you must be using AWS IoT Greengrass Core v1.6 or later.

You can use the Amazon SageMaker Neo deep learning compiler to optimize native machine learning inference models from many frameworks for more efficient prediction. You can then download the optimized model, install the Amazon SageMaker Neo deep learning runtime, and deploy both to your AWS IoT Greengrass devices for faster inference.

This tutorial describes how to use the AWS Management Console to configure a Greengrass group to run a Lambda inference example that recognizes images from a camera locally, without sending data to the cloud. The inference example accesses the camera module on a Raspberry Pi. In this tutorial, you download a prepackaged model that is trained with Resnet-50 and optimized by the Neo deep learning compiler. You then use the model to perform local image classification on your AWS IoT Greengrass device.

The tutorial contains the following high-level steps:

  1. Configure the Raspberry Pi
  2. Install the Amazon SageMaker Neo Deep Learning Runtime
  3. Create an Inference Lambda Function
  4. Add the Lambda Function to the Greengrass Group
  5. Add an Amazon SageMaker Neo-Optimized Model Resource to the Greengrass Group
  6. Add Your Camera Device Resource to the Greengrass Group
  7. Add Subscriptions to the Greengrass Group
  8. Deploy the Greengrass Group
  9. Test the Inference Example

Prerequisites

To complete this tutorial, you need:

Note

This tutorial uses a Raspberry Pi, but AWS IoT Greengrass supports other platforms, such as Intel Atom and NVIDIA Jetson TX2.

Step 1: Configure the Raspberry Pi

In this step, you install updates to the Raspbian operating system, install the camera module software and Python dependencies, and enable the camera interface.

Run the following commands in your Raspberry Pi terminal.

  1. Install updates to Raspbian.

    sudo apt-get update
    sudo apt-get dist-upgrade
  2. Install the picamera interface for the camera module and other Python libraries that are required for this tutorial.

    sudo apt-get install -y python-dev python-setuptools python-pip python-picamera
  3. Reboot the Raspberry Pi.

    sudo reboot
  4. Open the Raspberry Pi configuration tool.

    sudo raspi-config
  5. Use the arrow keys to open Interfacing Options and enable the camera interface. If prompted, allow the device to reboot.

  6. Use the following command to test the camera setup.

    raspistill -v -o test.jpg

    This opens a preview window on the Raspberry Pi, saves a picture named test.jpg to your current directory, and displays information about the camera in the Raspberry Pi terminal.
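
    If you prefer to test the camera from Python, the picamera library installed in step 2 can capture a still programmatically. The following is a minimal sketch (the function name and output path are illustrative, not part of the tutorial's code); the import is deferred into the function because picamera is only available on the Raspberry Pi.

    ```python
    # Minimal picamera capture sketch; runs only on a Raspberry Pi with the
    # camera interface enabled, so the import is deferred into the function.
    def capture_image(path='test.jpg'):
        import picamera  # installed earlier via python-picamera
        with picamera.PiCamera() as camera:
            camera.capture(path)  # saves a still image, like raspistill -o

    # Usage on the Pi:  capture_image('test.jpg')
    ```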

Step 2: Install the Amazon SageMaker Neo Deep Learning Runtime

In this step, you download the Neo deep learning runtime and install it onto your Raspberry Pi.

  1. On the AWS IoT Greengrass Machine Learning Runtimes and Precompiled Libraries downloads page, locate the Deep Learning Runtime version 1.0.0 for Raspberry Pi. Choose Download.

  2. Transfer the downloaded dlr-1.0-py2-armv7l.tar.gz file from your computer to your Raspberry Pi. You can use the following scp command with a path to save your file, such as /home/pi/:

    scp dlr-1.0-py2-armv7l.tar.gz pi@your-device-ip-address:path-to-save-file
  3. Use the following commands to remotely sign in to your Raspberry Pi and extract the installer files.

    ssh pi@your-device-ip-address
    cd path-to-save-file
    tar -xvzf dlr-1.0-py2-armv7l.tar.gz
  4. Install the Neo deep learning runtime.

    cd dlr-1.0-py2-armv7l/
    chmod 755 install-dlr.sh
    sudo ./install-dlr.sh

    This package contains an examples directory that contains several files you use to run this tutorial. This directory also contains version 1.2.0 of the AWS IoT Greengrass Core SDK for Python. You can also download the latest version of the SDK from the AWS IoT Greengrass Core SDK downloads page.

Step 3: Create an Inference Lambda Function

In this step, you create a deployment package and a Lambda function that is configured to use the deployment package. Then, you publish a function version and create an alias.

  1. On your computer, unzip the downloaded dlr-1.0-py2-armv7l.tar.gz file you previously copied to your Raspberry Pi.

    cd path-to-downloaded-runtime
    tar -xvzf dlr-1.0-py2-armv7l.tar.gz
  2. The resulting dlr-1.0-py2-armv7l directory contains an examples folder. It contains inference.py, the example code used in this tutorial for inference. You can view this code as a usage example to create your own inference code.

    Compress the files in the examples folder into a file named optimizedImageClassification.zip.

    Note

    When you create the .zip file, verify that the .py files and dependencies are in the root of the directory.

    cd path-to-downloaded-runtime/dlr-1.0-py2-armv7l/examples
    zip -r optimizedImageClassification.zip .

    This .zip file is your deployment package. This package contains the function code and dependencies, including the code example that invokes the Neo deep learning runtime Python APIs to perform inference with the Neo deep learning compiler models. You upload this deployment package later.

  3. Now, create the Lambda function.

    In the AWS IoT console, in the navigation pane, choose Greengrass, and then choose Groups.

    
            The navigation pane in the AWS IoT console with Groups highlighted.
  4. Choose the Greengrass group where you want to add the Lambda function.

  5. On the group configuration page, choose Lambdas, and then choose Add Lambda.

    
            The group page with Lambdas and Add Lambda highlighted.
  6. On the Add a Lambda to your Greengrass Group page, choose Create new Lambda. This opens the AWS Lambda console.

    
            The Add a Lambda to your Greengrass Group page with Create new Lambda highlighted.
  7. Choose Author from scratch and use the following values to create your function:

    • For Function name, enter optimizedImageClassification.

    • For Runtime, choose Python 2.7.

    For Permissions, keep the default setting. This creates an execution role that grants basic Lambda permissions. This role isn't used by AWS IoT Greengrass.

    
            The Basic information section of the Create function page.
  8. Choose Create function.

 

Now, upload your Lambda function deployment package and register the handler.

  1. On the Configuration tab for the optimizedImageClassification function, for Function code, use the following values:

    • For Code entry type, choose Upload a .zip file.

    • For Runtime, choose Python 2.7.

    • For Handler, enter inference.handler.

  2. Choose Upload.

    
            The Function code section with Upload highlighted.
  3. Choose your optimizedImageClassification.zip deployment package.

  4. Choose Save.

 

Next, publish the first version of your Lambda function. Then, create an alias for the version.

Note

Greengrass groups can reference a Lambda function by alias (recommended) or by version. Using an alias makes it easier to manage code updates because you don't have to change your subscription table or group definition when the function code is updated. Instead, you just point the alias to the new function version.

  1. From the Actions menu, choose Publish new version.

    
            The Publish new version option in the Actions menu.
  2. For Version description, enter First version, and then choose Publish.

  3. On the optimizedImageClassification: 1 configuration page, from the Actions menu, choose Create alias.

    
            The Create alias option in the Actions menu.
  4. On the Create a new alias page, use the following values:

    • For Name, enter mlTestOpt.

    • For Version, enter 1.

    Note

    AWS IoT Greengrass doesn't support Lambda aliases for $LATEST versions.

  5. Choose Create.

    Now, add the Lambda function to your Greengrass group.
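
The inference code you packaged in the deployment package drives the Neo deep learning runtime. The following sketch shows one plausible shape of that flow; the DLRModel class, its constructor argument, and the run call are assumptions about the runtime's Python API, and the helper names are illustrative. See the packaged examples/inference.py for the exact code this tutorial uses.

```python
# Hypothetical sketch of the inference flow, not the tutorial's exact code.

def top_prediction(scores, labels):
    """Pure helper: return the (label, score) pair with the highest score."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

def run_inference(image, model_dir='/ml_model'):
    # The DLR runtime is only available on the core device after
    # install-dlr.sh has run; DLRModel and run() are assumed APIs.
    from dlr import DLRModel
    model = DLRModel(model_dir)      # loads model.json/.params/.so
    scores = model.run(image)[0]     # forward pass on the image tensor
    with open(model_dir + '/synset.txt') as f:
        labels = f.read().splitlines()
    return top_prediction(scores, labels)
```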

Step 4: Add the Lambda Function to the Greengrass Group

In this step, you add the Lambda function to the group, and then configure its lifecycle.

First, add the Lambda function to your Greengrass group.

  1. On the Add a Lambda to your Greengrass Group page, choose Use existing Lambda.

    
            The Add a Lambda to your Greengrass Group page with Use existing Lambda highlighted.
  2. Choose optimizedImageClassification, and then choose Next.

  3. On the Select a Lambda version page, choose Alias:mlTestOpt, and then choose Finish.

 

Next, configure the lifecycle of the Lambda function.

  1. On the Lambdas page, choose the optimizedImageClassification Lambda function.

    
            The Lambdas page with the optimizedImageClassification Lambda function highlighted.
  2. On the optimizedImageClassification configuration page, choose Edit.

  3. On the Group-specific Lambda configuration page, use the following values:

    • For Memory limit, enter 1024 MB.

    • For Timeout, enter 10 seconds.

    • For Lambda lifecycle, choose Make this function long-lived and keep it running indefinitely.

    • For Read access to /sys directory, choose Enable.

    For more information, see Lifecycle Configuration for Greengrass Lambda Functions.

  4. Choose Update.

Step 5: Add an Amazon SageMaker Neo-Optimized Model Resource to the Greengrass Group

In this step, you create a resource for the optimized ML inference model and upload it to an Amazon S3 bucket. Then, you locate the uploaded model in the Amazon S3 bucket from the AWS IoT Greengrass console and affiliate the newly created resource with the Lambda function. This makes it possible for the function to access its resources on the core device.

  1. On your computer, navigate to the Neo deep learning runtime installer package that you unpacked earlier. Navigate to the resnet50 directory.

    cd path-to-downloaded-runtime/dlr-1.0-py2-armv7l/models/resnet50

    This directory contains precompiled model artifacts for an image classification model trained with Resnet-50. Compress the files inside the resnet50 directory to create resnet50.zip.

    zip -r resnet50.zip .
  2. On the group configuration page for your AWS IoT Greengrass group, choose Resources. Navigate to the Machine Learning section and choose Add machine learning resource. On the Create a machine learning resource page, for Resource name, enter resnet50_model.

    
            The Add Machine Learning Model page with updated properties.
  3. For Model source, choose Upload a model in S3.

  4. Under Model from S3, choose Select.

    Note

    Currently, optimized Amazon SageMaker models are stored automatically in Amazon S3. You can find your optimized model in your Amazon S3 bucket using this option. For more information about model optimization in Amazon SageMaker, see the Amazon SageMaker Neo documentation.

  5. Choose Upload a model.

  6. On the Amazon S3 console tab, upload your zip file to an Amazon S3 bucket. For information, see How Do I Upload Files and Folders to an S3 Bucket?

    Note

    Your bucket name must contain the string greengrass. Choose a unique name (such as greengrass-dlr-bucket-user-id-epoch-time). Don't use a period (.) in the bucket name.

  7. In the AWS IoT Greengrass console tab, locate and choose your Amazon S3 bucket. Locate your uploaded resnet50.zip file, and choose Select. You might need to refresh the page to update the list of available buckets and files.

  8. In Local path, enter /ml_model.

    
            The updated local path.

    This is the destination for the local model in the Lambda runtime namespace. When you deploy the group, AWS IoT Greengrass retrieves the source model package and then extracts the contents to the specified directory.

    Note

    We strongly recommend that you use the exact path provided for your local path. Using a different local model destination path in this step causes some troubleshooting commands provided in this tutorial to be inaccurate. If you use a different path, you must set up a MODEL_PATH environment variable that uses the exact path you provide here. For information about environment variables, see AWS Lambda Environment Variables.

  9. Under Lambda function affiliations, choose Select.

  10. Choose optimizedImageClassification, choose Read-only access, and then choose Done.

  11. Choose Save.
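
If you used a local path other than /ml_model, the note above says to set a MODEL_PATH environment variable. A sketch of how the function code might resolve the model directory from that variable (the helper name is illustrative, not part of the tutorial's code):

```python
import os

# Resolve the ML resource directory: prefer the MODEL_PATH environment
# variable, falling back to the tutorial's recommended path /ml_model.
def model_dir():
    return os.environ.get('MODEL_PATH', '/ml_model')
```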

Step 6: Add Your Camera Device Resource to the Greengrass Group

In this step, you create a resource for the camera module and affiliate it with the Lambda function, allowing the resource to be accessible on the AWS IoT Greengrass core.

  1. On the group configuration page, choose Resources.

    
            The group configuration page with Resources highlighted.
  2. On the Local tab, choose Add local resource.

  3. On the Create a local resource page, use the following values:

    • For Resource name, enter videoCoreSharedMemory.

    • For Resource type, choose Device.

    • For Device path, enter /dev/vcsm.

      The device path is the local absolute path of the device resource. This path can refer only to a character device or block device under /dev.

    • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

      The Group owner file access permission option lets you grant additional file access permissions to the Lambda process. For more information, see Group Owner File Access Permission.

    
            The Create a local resource page with edited resource properties.
  4. Under Lambda function affiliations, choose Select.

  5. Choose optimizedImageClassification, choose Read and write access, and then choose Done.

    
            Lambda function affiliation properties with Done highlighted.

    Next, you add a local device resource for the camera interface.

  6. At the bottom of the page, choose Add another resource.

  7. On the Create a local resource page, use the following values:

    • For Resource name, enter videoCoreInterface.

    • For Resource type, choose Device.

    • For Device path, enter /dev/vchiq.

    • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

    
            The Create a local resource page with edited resource properties.
  8. Under Lambda function affiliations, choose Select.

  9. Choose optimizedImageClassification, choose Read and write access, and then choose Done.

  10. Choose Save.

Step 7: Add Subscriptions to the Greengrass Group

In this step, you add subscriptions to the group. These subscriptions enable the Lambda function to send prediction results to AWS IoT by publishing to an MQTT topic.

  1. On the group configuration page, choose Subscriptions, and then choose Add Subscription.

    
            The group page with Subscriptions and Add Subscription highlighted.
  2. On the Select your source and target page, configure the source and target, as follows:

    1. In Select a source, choose Lambdas, and then choose optimizedImageClassification.

    2. In Select a target, choose Services, and then choose IoT Cloud.

    3. Choose Next.

      
                The Select your source and target page with Next highlighted.
  3. On the Filter your data with a topic page, in Optional topic filter, enter /resnet-50/predictions, and then choose Next.

    
            The Filter your data with a topic page with Next highlighted.
  4. Choose Finish.

  5. Add a second subscription. On the Select your source and target page, configure the source and target, as follows:

    1. In Select a source, choose Services, and then choose IoT Cloud.

    2. In Select a target, choose Lambdas, and then choose optimizedImageClassification.

    3. Choose Next.

  6. On the Filter your data with a topic page, in Optional topic filter, enter /resnet-50/test, and then choose Next.

  7. Choose Finish.
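
Inside the Lambda function, results reach the first subscription by publishing to the predictions topic through the AWS IoT Greengrass Core SDK. The following sketch shows the pattern; the payload field names and helper are assumptions (the packaged example defines the actual message format), and the SDK call is shown as a comment because greengrasssdk only resolves on the core device.

```python
import json

PREDICTIONS_TOPIC = '/resnet-50/predictions'  # must match the subscription above

def build_payload(class_name, probability, peak_memory_kb):
    """Serialize a prediction result: predicted class name, probability,
    and peak memory usage, as reported by the tutorial's test step."""
    return json.dumps({
        'class': class_name,
        'probability': probability,
        'peak_memory_kb': peak_memory_kb,
    })

# On the core device, the function would publish via the Greengrass Core SDK:
#   import greengrasssdk
#   client = greengrasssdk.client('iot-data')
#   client.publish(topic=PREDICTIONS_TOPIC,
#                  payload=build_payload('cat', 0.93, 30564))
```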

Step 8: Deploy the Greengrass Group

In this step, you deploy the current version of the group definition to the Greengrass core device. The definition contains the Lambda function, resources, and subscription configurations that you added.

  1. Make sure that the AWS IoT Greengrass core is running. Run the following commands in your Raspberry Pi terminal, as needed.

    1. To check whether the daemon is running:

      ps aux | grep -E 'greengrass.*daemon'

      If the output contains a root entry for /greengrass/ggc/packages/latest-core-version/bin/daemon, then the daemon is running.

    2. To start the daemon:

      cd /greengrass/ggc/core/
      sudo ./greengrassd start
  2. On the group configuration page, choose Deployments, and from the Actions menu, choose Deploy.

    
            The group page with Deployments and Deploy highlighted.
  3. On the Configure how devices discover your core page, choose Automatic detection.

    This enables devices to automatically acquire connectivity information for the core, such as IP address, DNS, and port number. Automatic detection is recommended, but AWS IoT Greengrass also supports manually specified endpoints. You're only prompted for the discovery method the first time that the group is deployed.

    
            The Configure how devices discover your core page with Automatic detection highlighted.

    Note

    If prompted, grant permission to create the Greengrass service role and associate it with your AWS account in the current AWS Region. This role allows AWS IoT Greengrass to access your resources in AWS services.

    The Deployments page shows the deployment timestamp, version ID, and status. When completed, the status displayed for the deployment should be Successfully completed.

    
            The Deployments page with a successful deployment status highlighted.

    For troubleshooting help, see Troubleshooting AWS IoT Greengrass.

Test the Inference Example

Now you can verify whether the deployment is configured correctly. To test, you subscribe to the /resnet-50/predictions topic and publish any message to the /resnet-50/test topic. This triggers the Lambda function to take a photo with your Raspberry Pi and perform inference on the image it captures.

Note

If a monitor is attached to the Raspberry Pi, the live camera feed is displayed in a preview window.

  1. On the AWS IoT console home page, choose Test.

    
            The navigation pane in the AWS IoT console with Test highlighted.
  2. For Subscriptions, choose Subscribe to a Topic. Use the following values. Leave the remaining options at their defaults.

    • For Subscription topic, enter /resnet-50/predictions.

    • For MQTT payload display, choose Display payloads as strings.

  3. Choose Subscribe to topic.

  4. On the /resnet-50/predictions page, specify the /resnet-50/test topic to publish to. Choose Publish to topic.

  5. If the test is successful, the published message causes the Raspberry Pi camera to capture an image. A message from the Lambda function appears at the bottom of the page. This message contains the prediction result of the image, using the format: predicted class name, probability, and peak memory usage.

    
            The Subscriptions page showing test results with message data.

Configuring an Intel Atom

To run this tutorial on an Intel Atom device, you provide source images and configure the Lambda function. To use the GPU for inference, you must have OpenCL version 1.0 or later installed on your device. You must also add a local device resource.

  1. Download static PNG or JPG images for the Lambda function to use for image classification. The example works best with small image files.

    Save your image files in the directory that contains the inference.py file (or in a subdirectory of this directory). This is in the Lambda function deployment package that you upload in Step 3: Create an Inference Lambda Function.

    Note

    If you are using AWS DeepLens, you can choose to instead use the onboard camera or mount your own camera to capture images and perform inference on them. However, we strongly recommend you start with static images first.

  2. Edit the configuration of the Lambda function. Follow the procedure in Step 4: Add the Lambda Function to the Greengrass Group.

    1. Increase the Memory limit value to 3000 MB.

    2. Increase the Timeout value to 2 minutes. This ensures that the request does not time out too early. It takes a few minutes after setup to run inference.

    3. For Read access to /sys directory, choose Enable.

    4. For Lambda lifecycle, choose Make this function long-lived and keep it running indefinitely.

  3. Add the required local device resource.

    1. On the group configuration page, choose Resources.

      
                The group configuration page with Resources highlighted.
    2. On the Local tab, choose Add a local resource.

    3. Define the resource:

      • For Resource name, enter renderD128.

      • For Resource type, choose Device.

      • For Device path, enter /dev/dri/renderD128.

      • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

      • For Lambda function affiliations, grant Read and write access to your Lambda function.

Configuring an NVIDIA Jetson TX2

To run this tutorial on an NVIDIA Jetson TX2, you provide source images and configure the Lambda function. To use the GPU for inference, you must install CUDA 9.0 and cuDNN 7.0 on your device when you image your board with Jetpack 3.3. You must also add local device resources.

To learn how to configure your Jetson so you can install the AWS IoT Greengrass Core software, see Setting Up Other Devices.

  1. Download static PNG or JPG images for the Lambda function to use for image classification. The example works best with small image files.

    Save your image files in the directory that contains the inference.py file (or in a subdirectory of this directory). This is in the Lambda function deployment package that you upload in Step 3: Create an Inference Lambda Function.

    Note

    You can instead attach a camera to the Jetson board to capture the source images. However, we strongly recommend that you start with static images first.

  2. Edit the configuration of the Lambda function. Follow the procedure in Step 4: Add the Lambda Function to the Greengrass Group.

    1. Increase the Memory limit value. To use the provided model in GPU mode, use 2048 MB.

    2. Increase the Timeout value to 5 minutes. This ensures that the request does not time out too early. It takes a few minutes after setup to run inference.

    3. For Lambda lifecycle, choose Make this function long-lived and keep it running indefinitely.

    4. For Read access to /sys directory, choose Enable.

  3. Add the required local device resources.

    1. On the group configuration page, choose Resources.

      
                The group configuration page with Resources highlighted.
    2. On the Local tab, choose Add a local resource.

    3. Define each resource:

      • For Resource name and Device path, use the values in the following table. Create one device resource for each row in the table.

      • For Resource type, choose Device.

      • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

      • For Lambda function affiliations, grant Read and write access to your Lambda function.

         

        Name

        Device path

        nvhost-ctrl

        /dev/nvhost-ctrl

        nvhost-gpu

        /dev/nvhost-gpu

        nvhost-ctrl-gpu

        /dev/nvhost-ctrl-gpu

        nvhost-dbg-gpu

        /dev/nvhost-dbg-gpu

        nvhost-prof-gpu

        /dev/nvhost-prof-gpu

        nvmap

        /dev/nvmap

Troubleshooting AWS IoT Greengrass ML Inference

If the test is not successful, you can try the following troubleshooting steps. Run the commands in your Raspberry Pi terminal.

Check error logs

  1. Switch to the root user and navigate to the log directory. Access to AWS IoT Greengrass logs requires root permissions.

    sudo su
    cd /greengrass/ggc/var/log
  2. Check runtime.log for any errors.

    cat system/runtime.log | grep 'ERROR'

    You can also look in your user-defined Lambda function log for any errors:

    cat user/your-region/your-account-id/lambda-function-name.log | grep 'ERROR'

    For more information, see Troubleshooting with Logs.

 

Verify the Lambda function is successfully deployed

  1. List the contents of the deployed Lambda in the /lambda directory. Replace the placeholder values before you run the command.

    cd /greengrass/ggc/deployment/lambda/arn:aws:lambda:region:account:function:function-name:function-version
    ls -la
  2. Verify that the directory contains the same content as the optimizedImageClassification.zip deployment package that you uploaded in Step 3: Create an Inference Lambda Function.

    Make sure that the .py files and dependencies are in the root of the directory.

 

Verify the inference model is successfully deployed

  1. Find the process identification number (PID) of the Lambda runtime process:

    ps aux | grep lambda-function-name

    In the output, the PID appears in the second column of the line for the Lambda runtime process.

  2. Enter the Lambda runtime namespace. Be sure to replace the placeholder pid value before you run the command.

    Note

    This directory and its contents are in the Lambda runtime namespace, so they aren't visible in a regular Linux namespace.

    sudo nsenter -t pid -m /bin/bash
  3. List the contents of the local directory that you specified for the ML resource.

    Note

    If your ML resource path is something other than ml_model, you must substitute that here.

    cd /ml_model
    ls -ls

    You should see the following files:

    56     -rw-r--r-- 1 ggc_user ggc_group     56703 Oct 29 20:07 model.json
    196152 -rw-r--r-- 1 ggc_user ggc_group 200855043 Oct 29 20:08 model.params
    256    -rw-r--r-- 1 ggc_user ggc_group    261848 Oct 29 20:07 model.so
    32     -rw-r--r-- 1 ggc_user ggc_group     30564 Oct 29 20:08 synset.txt

Lambda function cannot find /dev/dri/renderD128

This can occur if OpenCL cannot connect to the GPU devices it needs. You must create device resources for the necessary devices for your Lambda function.

Next Steps

Next, explore other optimized models. For information, see the Amazon SageMaker Neo documentation.