
How to Configure Machine Learning Inference Using the AWS Management Console

This feature is available for AWS Greengrass Core v1.5.0 and later.

You can perform machine learning (ML) inference locally on a Greengrass core device using data from connected devices. For information, including requirements and constraints, see Perform Machine Learning Inference.

This tutorial describes how to use the AWS Management Console to configure a Greengrass group to run a Lambda inference app that recognizes images from a camera locally, without sending data to the cloud. The inference app accesses the camera module on a Raspberry Pi and runs inference using the open source SqueezeNet model.

The tutorial contains the following high-level steps:

  1. Configure the Raspberry Pi
  2. Install the MXNet Framework
  3. Create an MXNet Model Package
  4. Create and Publish a Lambda Function
  5. Add the Lambda Function to the Greengrass Group
  6. Add Resources to the Greengrass Group
  7. Add a Subscription to the Greengrass Group
  8. Deploy the Greengrass Group

Prerequisites

To complete this tutorial, you need:

  • A Raspberry Pi set up as a Greengrass core device, with a Greengrass group configured. To create a group and core, see the Getting Started section.

  • A Raspberry Pi camera module.

Note

This tutorial uses a Raspberry Pi, but AWS Greengrass supports other platforms, such as Intel Atom and NVIDIA Jetson TX2.

Step 1: Configure the Raspberry Pi

In this step, you update the Raspbian operating system, install the camera module software and Python dependencies, and enable the camera interface. Run the following commands in your Raspberry Pi terminal.

  1. Update Raspbian Jessie.

    sudo apt-get update
    sudo apt-get dist-upgrade
  2. Install the picamera interface for the camera module and other Python libraries that are required for this tutorial.

    sudo apt-get install -y python-dev python-setuptools python-pip python-picamera
  3. Reboot the Raspberry Pi.

    sudo reboot
  4. Open the Raspberry Pi configuration tool.

    sudo raspi-config
  5. Use the arrow keys to open Interfacing Options and enable the camera interface. If prompted, allow the device to reboot.

  6. Use the following command to test the camera setup.

    raspistill -v -o test.jpg

    This opens a preview window on the Raspberry Pi, saves a picture named test.jpg to your /home/pi directory, and displays information about the camera in the Raspberry Pi terminal.
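
    You can also verify that Python can access the camera through the picamera library that you installed earlier. The following is a minimal sketch; the resolution and output file name are arbitrary examples.

      import picamera

      # Capture one still image to confirm that Python can use the camera module.
      with picamera.PiCamera() as camera:
          camera.resolution = (1024, 768)  # example resolution
          camera.capture('/home/pi/test_python.jpg')  # example output path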

Step 2: Install the MXNet Framework

In this step, you download precompiled Apache MXNet libraries and install them on your Raspberry Pi.

Note

This tutorial uses libraries for the MXNet ML framework, but libraries for TensorFlow are also available. For more information, including limitations, see Precompiled Libraries for ML Frameworks.

  1. On your computer, open the AWS IoT console.

  2. In the left pane, choose Software.

  3. In the Machine learning libraries section, for MXNet/TensorFlow precompiled libraries, choose Configure download.

  4. On the Machine learning libraries page, under Software configurations, for MXNet Raspberry Pi version 0.11.0, choose Download.

    Note

By downloading this software, you agree to the Apache License 2.0.

  5. Transfer the downloaded ggc-mxnet-v0.11.0-python-raspi.tar.gz file from your computer to your Raspberry Pi.

    Note

    For ways that you can do this on different platforms, see this step in the Getting Started section. For example, you might use the following scp command:

    scp ggc-mxnet-v0.11.0-python-raspi.tar.gz pi@IP-address:/home/pi
  6. In your Raspberry Pi terminal, unpack the transferred file.

    tar -xzf ggc-mxnet-v0.11.0-python-raspi.tar.gz
  7. Install the MXNet framework.

    ./mxnet_installer.sh

    Note

    You can continue to Step 3: Create an MXNet Model Package while the framework is installing, but you must wait for the installation to complete before proceeding to Step 4: Create and Publish a Lambda Function.

    You can optionally run unit tests to verify the installation. To do so, add the -u option to the previous command. If successful, each test logs a line in the terminal that ends with ok. If all tests are successful, the final log statement contains OK. Note that running the unit tests increases the installation time.

    The script also creates a Lambda function deployment package named greengrassObjectClassification.zip. This package contains the function code and dependencies, including the mxnet Python module that Greengrass Lambda functions need to work with MXNet models. You upload this deployment package later.

  8. When the installation is complete, transfer greengrassObjectClassification.zip to your computer. Depending on your environment, you can use the scp command or a utility such as WinSCP.

Step 3: Create an MXNet Model Package

In this step, you download files for a sample pretrained MXNet model, and then save them as a .zip file. AWS Greengrass can use models from Amazon S3, provided that they use the tar.gz or .zip format.

  1. Download the following files for the SqueezeNet model to your computer:

    • squeezenet_v1.1-0000.params. The model parameters (weights).

    • squeezenet_v1.1-symbol.json. The model definition (symbol).

    • synset.txt. A text file that maps prediction results to class names.

    Note

    All MXNet model packages use these three file types, but the contents of TensorFlow model packages vary.

  2. Zip the three files, and name the compressed file squeezenet.zip. You upload this model package to Amazon S3 in Step 6: Add Resources to the Greengrass Group.
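
    If you prefer to create the package from a script, the following minimal Python sketch produces an equivalent squeezenet.zip. It assumes the three model files are in the current directory.

      import zipfile

      # Package the three model files at the root of the archive, which is
      # the layout that the inference function in this tutorial expects.
      with zipfile.ZipFile('squeezenet.zip', 'w', zipfile.ZIP_DEFLATED) as archive:
          for name in ('squeezenet_v1.1-0000.params',
                       'squeezenet_v1.1-symbol.json',
                       'synset.txt'):
              archive.write(name)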

Step 4: Create and Publish a Lambda Function

In this step, you create a Lambda function and configure it to use the deployment package that was created in Step 2: Install the MXNet Framework. Then, you publish a function version and create an alias.

The Lambda function deployment package is named greengrassObjectClassification.zip. It contains an inference app that performs common tasks, such as loading models, importing Apache MXNet, and taking actions based on predictions. The app contains the following key components:

  • App logic:

    • load_model.py. Loads MXNet models.

    • greengrassObjectClassification.py. Runs predictions on images that are streamed from the camera.

  • Dependencies:

    • greengrasssdk. Required library for all Python Lambda functions.

    • mxnet. Required library for Python Lambda functions that run local inference using MXNet.

  • License:

    • license. Contains the required Greengrass Core Software License Agreement.

Note

You can reuse these dependencies and license when you create new MXNet inference Lambda functions.
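
The following listing is a simplified sketch of this structure, not the actual function code. It illustrates the pattern the app follows: because the function is long-lived (configured in Step 5), module-level initialization runs once at startup, and the function then publishes prediction results to the hello/world topic through the greengrasssdk. The run_inference helper and its placeholder payload are assumptions for illustration.

    import threading

    import greengrasssdk

    # Module-level code runs once when the long-lived function's container
    # starts, so expensive setup (such as loading the model) happens one time.
    client = greengrasssdk.client('iot-data')

    def run_inference():
        # Placeholder for capturing a camera frame and running the model.
        payload = 'probability, predicted class ID, class name'
        client.publish(topic='hello/world', payload=payload)
        # Reschedule so the long-lived function keeps publishing indefinitely.
        threading.Timer(1.0, run_inference).start()

    run_inference()

    def function_handler(event, context):
        # Registered handler entry point; the loop above does the real work.
        return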

First, create the Lambda function.

  1. In the AWS IoT console, in the left pane, choose Greengrass, and then choose Groups.

  2. Choose the Greengrass group where you want to add the Lambda function.

  3. On the group configuration page, choose Lambdas, and then choose Add Lambda.

  4. On the Add a Lambda to your Greengrass Group page, choose Create new Lambda. This takes you to the AWS Lambda console.

  5. Choose Author from scratch.

  6. In the Author from scratch section, use the following values:

    Name: greengrassObjectClassification
    Runtime: Python 2.7
    Role: Create new role from template(s)
    Role name: Greengrass_role_does_not_matter (this role isn't used by AWS Greengrass)

  7. At the bottom of the page, choose Create function.


 

Now, upload your Lambda function deployment package and register the handler.

  1. On the Configuration tab for the greengrassObjectClassification function, use the following values for Function code:

    Code entry type: Upload a .ZIP file
    Runtime: Python 2.7
    Handler: greengrassObjectClassification.function_handler

  2. Choose Upload.

  3. Choose your greengrassObjectClassification.zip deployment package.

  4. At the top of the page, choose Save.

 

Next, publish the first version of your Lambda function. Then, create an alias for the version.

Note

Greengrass groups can reference a Lambda function by alias (recommended) or by version. Using an alias makes it easier to manage code updates because you don't have to change your subscription table or group definition when the function code is updated. Instead, you just point the alias to the new function version.

  1. From the Actions menu, choose Publish new version.

  2. For Version description, type First version, and then choose Publish.

  3. On the greengrassObjectClassification: 1 configuration page, from the Actions menu, choose Create alias.

  4. On the Create a new alias page, use the following values:

    Name: mlTest
    Version: 1

    Note

    AWS Greengrass doesn't support Lambda aliases for $LATEST versions.

  5. Choose Create.


    Now, add the Lambda function to your Greengrass group.

Step 5: Add the Lambda Function to the Greengrass Group

In this step, you add the Lambda function to the group and then configure its lifecycle.

First, add the Lambda function to your Greengrass group.

  1. In the AWS IoT console, open the group configuration page.

  2. Choose Lambdas, and then choose Add Lambda.

  3. On the Add a Lambda to your Greengrass Group page, choose Use existing Lambda.

  4. On the Use existing Lambda page, choose greengrassObjectClassification, and then choose Next.

  5. On the Select a Lambda version page, choose Alias:mlTest, and then choose Finish.

 

Next, configure the lifecycle of the Lambda function.

  1. On the Lambdas page, choose the greengrassObjectClassification Lambda function.

  2. On the greengrassObjectClassification configuration page, choose Edit.

  3. On the Group-specific Lambda configuration page, use the following values:

    Memory limit: 96 MB
    Timeout: 10 seconds
    Lambda lifecycle: Make this function long-lived and keep it running indefinitely
    Read access to /sys directory: Enable

    For more information, see Lifecycle Configuration for Greengrass Lambda Functions.

  4. At the bottom of the page, choose Update.

Step 6: Add Resources to the Greengrass Group

In this step, you create resources for the camera module and the ML inference model. You also affiliate the resources with the Lambda function, which enables the function to access the resources on the core device.

First, create two local device resources for the camera: one for shared memory and one for the device interface. For more information about local resource access, see Access Local Resources with Lambda Functions.

  1. On the group configuration page, choose Resources.

  2. On the Local Resources tab, choose Add local resource.

  3. On the Create a local resource page, use the following values:

    Resource name: videoCoreSharedMemory
    Resource type: Device
    Device path: /dev/vcsm
    Group owner file access permission: Automatically add OS group permissions of the Linux group that owns the resource

    The Device path is the local absolute path of the device resource. This path can only refer to a character device or block device under /dev.

    The Group owner file access permission option lets you grant additional file access permissions to the Lambda process. For more information, see Group Owner File Access Permission.

  4. Under Lambda function affiliations, choose Select.

  5. Choose greengrassObjectClassification, choose Read and write access, and then choose Done.


    Next, you add a local device resource for the camera interface.

  6. At the bottom of the page, choose Add another resource.

  7. On the Create a local resource page, use the following values:

    Property

    Value

    Resource name

    videoCoreInterface

    Resource type

    Device

    Device path

    /dev/vchiq

    Group owner file access permission

    Automatically add OS group permissions of the Linux group that owns the resource

  8. Under Lambda function affiliations, choose Select.

  9. Choose greengrassObjectClassification, choose Read and write access, and then choose Done.

  10. At the bottom of the page, choose Save.

 

Now, add the inference model as a machine learning resource. This step includes uploading the squeezenet.zip model package to Amazon S3.

  1. On the Machine Learning tab, choose Add machine learning resource.

  2. On the Create a machine learning resource page, for Resource name, type squeezenet_model.

  3. For Model source, choose Locate or upload a model in S3.

  4. Under Model from S3, choose Select, and then choose Create S3 bucket.

  5. For Bucket name, type a name that contains the string greengrass (such as greengrass-datetime), and then choose Create.

    Note

    Don't use a period (".") in the bucket name.

  6. Choose Upload a model, and then choose the squeezenet.zip package that you created in Step 3: Create an MXNet Model Package.

  7. For Local path, type /greengrass-machine-learning/mxnet/squeezenet.

    This is the destination for the local model in the Lambda runtime namespace. When you deploy the group, AWS Greengrass retrieves the source model package and then extracts the contents to the specified directory. The sample Lambda function for this tutorial is already configured to use this path (in the model_path variable). A sketch after this procedure shows how a function can load the model from this path.

  8. Under Lambda function affiliations, choose Select.

  9. Choose greengrassObjectClassification, choose Read-only access, and then choose Done.

  10. At the bottom of the page, choose Save.
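
As an illustration of how a function might consume the local path from step 7, the following hedged sketch loads the SqueezeNet checkpoint using MXNet's standard checkpoint API. The 224x224 input shape is the typical value for SqueezeNet and is an assumption here, as is this particular way of binding the module.

    import os

    import mxnet as mx

    # The local destination path configured for the ML resource.
    model_path = '/greengrass-machine-learning/mxnet/squeezenet'

    # load_checkpoint takes a file prefix and an epoch number, so this reads
    # squeezenet_v1.1-symbol.json and squeezenet_v1.1-0000.params.
    sym, arg_params, aux_params = mx.model.load_checkpoint(
        os.path.join(model_path, 'squeezenet_v1.1'), 0)

    # Bind the module for inference on a single 3-channel 224x224 image.
    mod = mx.mod.Module(symbol=sym, label_names=None)
    mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
    mod.set_params(arg_params, aux_params, allow_missing=True)

    # Class names for the predictions come from synset.txt in the same directory.
    with open(os.path.join(model_path, 'synset.txt')) as f:
        labels = [line.rstrip() for line in f]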

Using Amazon SageMaker Trained Models

This tutorial uses a model that's stored in Amazon S3, but you can easily use Amazon SageMaker models too. The Greengrass console has built-in Amazon SageMaker integration, so you don't need to manually upload these models to Amazon S3. For requirements and limitations for using Amazon SageMaker models, see Supported Model Sources.

To use an Amazon SageMaker model:

  • For Model source, choose Use an existing SageMaker model, and then choose the name of the model's training job.

  • For Local path, type the path to the directory where your Lambda function looks for the model.

Step 7: Add a Subscription to the Greengrass Group

In this step, you add a subscription to the group. This subscription enables the Lambda function to send prediction results to AWS IoT by publishing to an MQTT topic.

  1. On the group configuration page, choose Subscriptions, and then choose Add Subscription.

  2. On the Select your source and target page, configure the source and target, as follows:

    1. In Select a source, choose Lambdas, and then choose greengrassObjectClassification.

    2. In Select a target, choose Services, and then choose IoT Cloud.

    3. Choose Next.

  3. On the Filter your data with a topic page, in the Optional topic filter field, type hello/world, and then choose Next.

  4. Choose Finish.

Step 8: Deploy the Greengrass Group

In this step, you deploy the current version of the group definition to the Greengrass core device. The definition contains the Lambda function, resources, and subscription configurations that you added.

  1. Make sure that the AWS Greengrass core is running. Run the following commands in your Raspberry Pi terminal, as needed.

    1. To check whether the daemon is running:

      ps aux | grep -E 'greengrass.*daemon'

      If the output contains a root entry for /greengrass/ggc/packages/1.6.0/bin/daemon, then the daemon is running.

      Note

      The version in the path depends on the AWS Greengrass Core software version that's installed on your core device.

    2. To start the daemon:

      cd /greengrass/ggc/core/
      sudo ./greengrassd start
  2. On the group configuration page, choose Deployments, and from the Actions menu, choose Deploy.

  3. On the Configure how devices discover your core page, choose Automatic detection.

    This enables devices to automatically acquire connectivity information for the core, such as IP address, DNS, and port number. Automatic detection is recommended, but AWS Greengrass also supports manually specified endpoints. You're only prompted for the discovery method the first time that the group is deployed.


    Note

    If prompted, grant permission to create the AWS Greengrass service role on your behalf, which allows AWS Greengrass to access other AWS services. You need to do this only one time per account.

    The Deployments page shows the deployment time stamp, version ID, and status. When completed, the deployment should show a Successfully completed status.

    For help troubleshooting any issues that you encounter, see Troubleshooting AWS Greengrass Applications.

Test the Inference App

Now you can verify whether the deployment is configured correctly. To test, you subscribe to the hello/world topic and view the prediction results that are published by the Lambda function.

Note

If a monitor is attached to the Raspberry Pi, the live camera feed is displayed in a preview window.

  1. On the AWS IoT console home page, choose Test.

  2. For Subscriptions, use the following values:

    Subscription topic: hello/world
    MQTT payload display: Display payloads as strings

  3. Choose Subscribe to topic.

    If the test is successful, the messages from the Lambda function appear at the bottom of the page. Each message contains the top five prediction results of the image, using the format: probability, predicted class ID, and corresponding class name.


Troubleshooting AWS Greengrass ML Inference

If the test is not successful, you can try the following troubleshooting steps. Run the commands in your Raspberry Pi terminal.

Check Error Logs

  1. Switch to the root user.

    sudo su
  2. Navigate to the /log directory.

    cd /greengrass/ggc/var/log
  3. Check runtime.log or python_runtime.log.

    For more information, see Troubleshooting with Logs.

"Unpacking" Error in runtime.log

If runtime.log contains an error similar to the following, ensure that your tar.gz source model package has a parent directory.

Greengrass deployment error: unable to download the artifact model-arn: Error while processing. Error while unpacking the file from /tmp/greengrass/artifacts/model-arn/path to /greengrass/ggc/deployment/path/model-arn, error: open /greengrass/ggc/deployment/path/model-arn/squeezenet/squeezenet_v1.1-0000.params: no such file or directory

If your package doesn't have a parent directory that contains the model files, try repackaging the model using the following command:

tar -zcvf model.tar.gz ./model

For example:

$ tar -zcvf test.tar.gz ./test
./test
./test/some.file
./test/some.file2
./test/some.file3

Note

Don't include trailing /* characters in this command.
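
If you package the model from Python instead, the following minimal sketch (using the same hypothetical model directory) preserves the required parent directory:

    import tarfile

    # Adding the directory itself (not just its contents) keeps the parent
    # directory, so extraction yields model/... as Greengrass expects.
    with tarfile.open('model.tar.gz', 'w:gz') as archive:
        archive.add('./model')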

 

Verify That the Lambda Function Is Successfully Deployed

  1. List the contents of the deployed Lambda in the /lambda directory. Replace the placeholder values before running the command.

    cd /greengrass/ggc/deployment/lambda/arn:aws:lambda:region:account:function:function-name:function-version
    ls -la
  2. Verify that the directory contains the same content as the greengrassObjectClassification.zip deployment package that you uploaded in Step 4: Create and Publish a Lambda Function.

    Also make sure that the .py files and dependencies are in the root of the directory.

 

Verify That the Inference Model Is Successfully Deployed

  1. Find the process identification number (PID) of the Lambda runtime process:

    ps aux | grep lambda-function-name

    In the output, the PID appears in the second column of the line for the Lambda runtime process.

  2. Enter the Lambda runtime namespace. Be sure to replace the placeholder pid value before running the command.

    Note

    This directory and its contents are in the Lambda runtime namespace, so they aren't visible in a regular Linux namespace.

    sudo nsenter -t pid -m /bin/bash
  3. List the contents of the local directory that you specified for the ML resource.

    cd /greengrass-machine-learning/mxnet/squeezenet/
    ls -ls

    You should see the following files:

      32 -rw-r--r-- 1 ggc_user ggc_group   31675 Nov 18 15:19 synset.txt
      32 -rw-r--r-- 1 ggc_user ggc_group   28707 Nov 18 15:19 squeezenet_v1.1-symbol.json
    4832 -rw-r--r-- 1 ggc_user ggc_group 4945062 Nov 18 15:19 squeezenet_v1.1-0000.params

Next Steps

Next, explore other inference apps. AWS Greengrass provides other Lambda functions that you can use to try out local inference. You can find the examples package in the precompiled libraries folder that you downloaded in Step 2: Install the MXNet Framework.

Configuring an NVIDIA Jetson TX2

To run this tutorial on the GPU of an NVIDIA Jetson TX2, you must add additional local device resources and configure access for the Lambda function.

Note

Your Jetson must be configured before you can install the AWS Greengrass Core software. For more information, see Configuring NVIDIA Jetson TX2 for AWS Greengrass.

  1. Add the following local device resources. Follow the procedure in Step 6: Add Resources to the Greengrass Group.

    For each resource:

    • For Resource type, choose Device.

    • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

    • For Lambda function affiliations, grant Read and write access to your Lambda function.

    Name               Device path

    nvhost-ctrl        /dev/nvhost-ctrl
    nvhost-gpu         /dev/nvhost-gpu
    nvhost-ctrl-gpu    /dev/nvhost-ctrl-gpu
    nvhost-dbg-gpu     /dev/nvhost-dbg-gpu
    nvhost-prof-gpu    /dev/nvhost-prof-gpu
    nvmap              /dev/nvmap

  2. Edit the configuration of the Lambda function to increase Memory limit to 1000 MB. Follow the procedure in Step 5: Add the Lambda Function to the Greengrass Group.