
Create AWS IoT Greengrass V2 Components

AWS IoT Greengrass uses components, software modules that are deployed to and run on an AWS IoT Greengrass core device. You need at least three components:

  1. A public Edge Manager Agent AWS IoT Greengrass component.

  2. A model component that is autogenerated when you package your machine learning model with either the AWS SDK for Python (Boto3) API or with the SageMaker console.

  3. A private, custom component for the inference application.

The Edge Manager Agent AWS IoT Greengrass component (1) deploys the Edge Manager Agent binary.

The model component (2) is autogenerated when you create an edge packaging job with either the AWS SDK for Python (Boto3) API or the SageMaker console. For information on how to generate the model component, see Autogenerated Component.

The private custom component (3) is the application that you use to implement the Edge Manager Agent client application, as well as do any preprocessing and post-processing of the inference results. For more information about how to create a custom component, see Autogenerated Component or Create custom AWS IoT Greengrass components.
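When you later deploy to a core device, all three components appear together in the deployment's components map. The following is a minimal sketch of such a deployment document; the model component name (SagemakerEdgeManager-model), the custom component name (com.example.InferenceApp), the thing group, and all versions are hypothetical placeholders, not values from this guide.

```json
{
  "targetArn": "arn:aws:iot:<region>:<account>:thinggroup/MyGreengrassGroup",
  "components": {
    "aws.greengrass.SageMakerEdgeManager": { "componentVersion": "1.0.0" },
    "SagemakerEdgeManager-model": { "componentVersion": "1.0.0" },
    "com.example.InferenceApp": { "componentVersion": "1.0.0" }
  }
}
```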

Autogenerated Component

Generate the model component with the CreateEdgePackagingJob API and specify GreengrassV2Component for the SageMaker Edge Manager packaging job API field PresetDeploymentType. When you call the CreateEdgePackagingJob API, Edge Manager takes your SageMaker Neo–compiled model in Amazon S3 and creates a model component. The model component is automatically stored in your account.

You can view any of your components by navigating to the AWS IoT console at https://console.aws.amazon.com/iot/. Select Greengrass and then select Core devices. The page lists the AWS IoT Greengrass core devices associated with your account.

If you do not specify a model component name in PresetDeploymentConfig, the default name consists of "SagemakerEdgeManager" followed by the name of your SageMaker Edge Manager packaging job. The following example demonstrates how to direct Edge Manager to create an AWS IoT Greengrass V2 component with the CreateEdgePackagingJob API.

```python
import json

import boto3

# Create a SageMaker client object to interact with SageMaker APIs.
sagemaker_client = boto3.client('sagemaker', region_name='<YOUR_REGION>')

# Replace with your IAM role ARN.
sagemaker_role_arn = "arn:aws:iam::<account>:role/*"

# Replace with the name of an S3 bucket you have already created.
bucket = 'edge-manager-demo-bucket'

# Specify a name for your edge packaging job.
edge_packaging_name = "edge-packaging-job-demo"

# Replace with the name you used for the SageMaker Neo compilation job.
compilation_job_name = "getting-started-demo"

# The name of the model and the model version.
model_name = "sample-model"
model_version = "1.1"

# Output directory in S3 where you want to store the packaged model.
packaging_output_dir = 'packaged_models'
packaging_s3_output = 's3://{}/{}'.format(bucket, packaging_output_dir)

# The name you want your Greengrass component to have.
component_name = "SagemakerEdgeManager" + edge_packaging_name

sagemaker_client.create_edge_packaging_job(
    EdgePackagingJobName=edge_packaging_name,
    CompilationJobName=compilation_job_name,
    RoleArn=sagemaker_role_arn,
    ModelName=model_name,
    ModelVersion=model_version,
    OutputConfig={
        "S3OutputLocation": packaging_s3_output,
        "PresetDeploymentType": "GreengrassV2Component",
        "PresetDeploymentConfig": json.dumps(
            {"ComponentName": component_name, "ComponentVersion": "1.0.2"}
        ),
    },
)
```
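The PresetDeploymentConfig field is a JSON string, which is easy to get wrong with manual escaping. A small sketch of building it with json.dumps follows; the helper name is our own, not part of the SageMaker API.

```python
import json

def build_preset_deployment_config(component_name, component_version=None):
    """Build the JSON string expected in the PresetDeploymentConfig field.

    ComponentVersion is optional; when omitted, Edge Manager picks a default.
    (Illustrative helper, not part of the SageMaker API.)
    """
    config = {"ComponentName": component_name}
    if component_version is not None:
        config["ComponentVersion"] = component_version
    return json.dumps(config)

print(build_preset_deployment_config("sample-component-name", "1.0.2"))
```

Passing the result of json.dumps avoids hand-writing escaped quotes inside a Python string literal.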

You can also create the autogenerated component with the SageMaker console. Follow steps 1-6 in Package a Model (Amazon SageMaker Console).

Enter the Amazon S3 bucket URI where you want to store the output of the packaging job and an optional encryption key.

Complete the following to create the model component:

  1. Choose Preset deployment.

  2. Specify the name of the component for the Component name field.

  3. Optionally, provide a component description, component version, platform OS, and platform architecture in the Component description, Component version, Platform OS, and Platform architecture fields, respectively.

  4. Choose Submit.

Create a Hello World custom component

The custom application component is used to perform inference on the edge device. The component is responsible for loading models into SageMaker Edge Manager, invoking the Edge Manager agent for inference, and unloading the model when the component shuts down. Before you create your component, make sure that the agent and the application can communicate with SageMaker Edge Manager. To do this, configure gRPC. The SageMaker Edge Manager agent uses methods defined in Protocol Buffers, along with a gRPC server, to establish communication with the client application on the edge device and with the cloud.

To use gRPC, you must:

  1. Create a gRPC stub using the .proto file provided when you download the Edge Manager agent from the Amazon S3 release bucket.

  2. Write client code with the language you prefer.

You do not need to define the service in a .proto file (1). The service .proto files are included in the compressed TAR file when you download the SageMaker Edge Manager agent release binary from the Amazon S3 release bucket.

Install gRPC and other necessary tools on your host machine and create gRPC stubs agent_pb2_grpc.py and agent_pb2.py in Python. Make sure you have agent.proto in your local directory.

```shell
pip install grpcio
pip install grpcio-tools
python3 -m grpc_tools.protoc --proto_path=. --python_out=. --grpc_python_out=. agent.proto
```

The preceding code generates the gRPC client and server interfaces from your .proto service definition (2). In other words, it creates the gRPC model in Python. The API folder contains the Protobuf specification for communicating with the agent.

Next, use the gRPC API to write a client and server for your service (2). The following example script, edge_manager_python_example.py, uses Python to load, list, and unload a yolov3 model on the edge device.

```python
import grpc
from PIL import Image

import agent_pb2
import agent_pb2_grpc

model_path = '<PATH-TO-SagemakerEdgeManager-COMPONENT>'

agent_socket = 'unix:///tmp/aws.greengrass.SageMakerEdgeManager.sock'
agent_channel = grpc.insecure_channel(agent_socket, options=(('grpc.enable_http_proxy', 0),))
agent_client = agent_pb2_grpc.AgentStub(agent_channel)


def list_models():
    return agent_client.ListModels(agent_pb2.ListModelsRequest())


def list_model_tensors(models):
    return {
        model.name: {
            'inputs': model.input_tensor_metadatas,
            'outputs': model.output_tensor_metadatas,
        }
        for model in list_models().models
    }


def load_model(model_name, model_path):
    load_request = agent_pb2.LoadModelRequest()
    load_request.url = model_path
    load_request.name = model_name
    return agent_client.LoadModel(load_request)


def unload_model(name):
    unload_request = agent_pb2.UnLoadModelRequest()
    unload_request.name = name
    return agent_client.UnLoadModel(unload_request)


def predict_image(model_name, image_path):
    image_tensor = agent_pb2.Tensor()
    image_tensor.byte_data = Image.open(image_path).tobytes()
    image_tensor_metadata = list_model_tensors(list_models())[model_name]['inputs'][0]
    image_tensor.tensor_metadata.name = image_tensor_metadata.name
    image_tensor.tensor_metadata.data_type = image_tensor_metadata.data_type
    for shape in image_tensor_metadata.shape:
        image_tensor.tensor_metadata.shape.append(shape)
    predict_request = agent_pb2.PredictRequest()
    predict_request.name = model_name
    predict_request.tensors.append(image_tensor)
    predict_response = agent_client.Predict(predict_request)
    return predict_response


def main():
    # Unload first in case a previous run left the model loaded.
    try:
        unload_model('your-model')
    except Exception:
        pass

    print('LoadModel...', end='')
    try:
        load_model('your-model', model_path)
        print('done.')
    except Exception as e:
        print()
        print(e)
        print('Model already loaded!')

    print('ListModels...', end='')
    try:
        print(list_models())
        print('done.')
    except Exception as e:
        print()
        print(e)
        print('List model failed!')

    print('Unload model...', end='')
    try:
        unload_model('your-model')
        print('done.')
    except Exception as e:
        print()
        print(e)
        print('Unload model failed!')


if __name__ == '__main__':
    main()
```

If you use this client code example, make sure that model_path points to the name of the AWS IoT Greengrass component that contains the model.
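Before appending byte_data to a Predict request, it can help to check that the raw image bytes match the size implied by the model's input tensor metadata, since a size mismatch typically fails only at the agent. A minimal sketch follows; the helper name is illustrative and not part of the agent API, and the bytes-per-element value depends on the tensor's data type.

```python
def expected_byte_len(shape, bytes_per_element=1):
    """Number of bytes the agent expects for a tensor of the given shape.

    `shape` is the repeated shape field from the input tensor metadata;
    `bytes_per_element` depends on the tensor data type (for example,
    1 for UINT8, 4 for FLOAT32). Illustrative helper, not part of the
    agent API.
    """
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# A 416x416 RGB image as UINT8 (a common yolov3 input size):
print(expected_byte_len([1, 3, 416, 416]))  # 519168
```

Comparing this value against `len(Image.open(image_path).tobytes())` before calling Predict surfaces shape or resize mistakes early.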

Once you have generated your gRPC stubs and have your Hello World code ready, you can create your AWS IoT Greengrass V2 Hello World component. To do so:

  • Upload edge_manager_python_example.py, agent_pb2_grpc.py, and agent_pb2.py to your Amazon S3 bucket and note their Amazon S3 paths.

  • Create a private component in the AWS IoT Greengrass V2 console and define the recipe for your component. Specify the Amazon S3 URI to your Hello World application and gRPC stub in the following recipe.

```yaml
---
RecipeFormatVersion: 2020-01-25
ComponentName: com.sagemaker.edgePythonExample
ComponentVersion: 1.0.0
ComponentDescription: Sagemaker Edge Manager Python example
ComponentPublisher: Amazon Web Services, Inc.
ComponentDependencies:
  aws.greengrass.SageMakerEdgeManager:
    VersionRequirement: '>=1.0.0'
    DependencyType: HARD
Manifests:
  - Platform:
      os: linux
      architecture: "/amd64|x86/"
    Lifecycle:
      install: |-
        apt-get install python3-pip
        pip3 install grpcio
        pip3 install grpcio-tools
        pip3 install protobuf
        pip3 install Pillow
      run:
        script: |-
          python3 {{artifacts:path}}/edge_manager_python_example.py
    Artifacts:
      - URI: <code-s3-path>
      - URI: <pb2-s3-path>
      - URI: <pb2-grpc-s3-path>
```

For detailed information about creating a Hello World recipe, see Create your first component in the AWS IoT Greengrass documentation.