Run AWS Lambda functions

Note

AWS IoT Greengrass doesn't currently support this feature on Windows core devices.

You can import AWS Lambda functions as components that run on AWS IoT Greengrass core devices. You might want to do this when you have existing application code in Lambda functions that you want to deploy to core devices, or when you have AWS IoT Greengrass V1 applications that use Lambda functions.

Lambda function components depend on the following Lambda system components. You don't need to define these components as dependencies when you import the function. When you deploy the Lambda function component, the deployment includes these Lambda component dependencies.

  • Lambda launcher component (aws.greengrass.LambdaLauncher)

  • Lambda manager component (aws.greengrass.LambdaManager)

  • Lambda runtimes component (aws.greengrass.LambdaRuntimes)
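For illustration, the following minimal sketch shows one way you might import an existing Lambda function as a Greengrass component by using the AWS SDK for Python (Boto3) and its create_component_version operation. The Lambda function ARN, component name, and component version are placeholder values for your own resources.

    import boto3

    greengrassv2 = boto3.client("greengrassv2")

    # Import an existing Lambda function as a Greengrass component.
    # The Lambda ARN must include the function version to import.
    # All names below are placeholders.
    response = greengrassv2.create_component_version(
        lambdaFunction={
            "lambdaArn": "arn:aws:lambda:us-west-2:123456789012:function:MyFunction:1",
            "componentName": "com.example.MyLambdaComponent",
            "componentVersion": "1.0.0",
        }
    )
    print(response["componentName"], response["componentVersion"])

You can also import Lambda functions from the AWS IoT Greengrass console or with the AWS CLI.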

Requirements

Your core devices and Lambda functions must meet the following requirements for you to run the functions on the AWS IoT Greengrass Core software:

  • Your core device must meet the requirements to run Lambda functions. If you want the core device to run containerized Lambda functions, the device must meet the requirements to do so. For more information, see Lambda function requirements.

  • You must install the programming languages that the Lambda function uses on your core devices.

    Tip

    You can create a component that installs the programming language, and then specify that component as a dependency of your Lambda function component (see the dependency sketch after this list). Greengrass supports all versions of the Python, Node.js, and Java runtimes that Lambda supports. Greengrass doesn't apply any additional restrictions on deprecated Lambda runtime versions. You can run Lambda functions that use these deprecated runtimes on AWS IoT Greengrass, but you can't create them in AWS Lambda. For more information about AWS IoT Greengrass support for Lambda runtimes, see Run AWS Lambda functions.
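For example, if you created a hypothetical component named com.example.PythonRuntime that installs Python on the core device, you could declare it as a dependency when you import the function. The following sketch shows the shape of that declaration for the Boto3 create_component_version call from the earlier import sketch; the component name and version requirement are assumptions for illustration.

    # Hypothetical dependency on a component that installs Python on the core device.
    # Pass this dictionary as the componentDependencies field inside the lambdaFunction
    # argument of create_component_version.
    python_runtime_dependency = {
        "com.example.PythonRuntime": {
            "versionRequirement": ">=3.9.0 <4.0.0",
            "dependencyType": "HARD",  # restart the function if the dependency changes state
        }
    }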

Configure Lambda function lifecycle

The Greengrass Lambda function lifecycle determines when a function starts and how it creates and uses containers. The lifecycle also determines how the AWS IoT Greengrass Core software retains variables and preprocessing logic that are outside of the function handler.

AWS IoT Greengrass supports on-demand (default) and long-lived lifecycles:

  • On-demand functions start when they are invoked and stop when there are no tasks left to run. Each invocation of the function creates a separate container, also called a sandbox, to process invocations, unless an existing container is available for reuse. Any of the containers might process data that you send to the function.

    Multiple invocations of an on-demand function can run simultaneously.

    Variables and preprocessing logic that you define outside of the function handler are not retained when new containers are created.

  • Long-lived (or pinned) functions start when the AWS IoT Greengrass Core software starts and run in a single container. The same container processes all data that you send to the function.

    Multiple invocations are queued until the AWS IoT Greengrass Core software runs earlier invocations.

    Variables and preprocessing logic that you define outside of the function handler are retained for every invocation of the handler.

    Use long-lived Lambda functions when you need to start doing work without any initial input. For example, a long-lived function can load and start processing a machine learning model so that it's ready when the function receives device data (see the handler sketch later in this section).

    Note

    Long-lived functions have timeouts that are associated with each invocation of their handler. If you want to invoke code that runs indefinitely, you must start it outside of the handler. Make sure that there's no blocking code outside of the handler that might prevent the function from initializing.

    These functions run until the AWS IoT Greengrass Core software stops, such as during a deployment or reboot. These functions won't run if the function encounters an uncaught exception, exceeds its memory limits, or enters an error state, such as a handler timeout.

For more information about container reuse, see Understanding Container Reuse in AWS Lambda in the AWS Compute Blog.
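To make the distinction concrete, the following minimal handler sketch shows initialization outside the handler. For a long-lived (pinned) function, this module-scope code runs once when the AWS IoT Greengrass Core software creates the function's single container, so the initialized state is reused by every invocation. For an on-demand function, each new container repeats the initialization. The load_model helper is a placeholder for your own preprocessing logic, and you choose the lifecycle with the function's pinned setting when you import or deploy it.

    import json
    import time

    # Module-scope code runs once per container. For a pinned function, the single
    # container is created when the AWS IoT Greengrass Core software starts, so the
    # value of MODEL is retained across invocations of the handler.
    def load_model():
        # Placeholder for expensive preprocessing, such as loading a machine learning model.
        time.sleep(5)
        return {"loaded_at": time.time()}

    MODEL = load_model()

    def handler(event, context):
        # Each invocation reuses MODEL instead of reloading it.
        return json.dumps({"model_loaded_at": MODEL["loaded_at"]})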

Configure Lambda function containerization

By default, Lambda functions run inside of an AWS IoT Greengrass container. Greengrass containers provide isolation between your functions and the host. This isolation increases security for both the host and the functions in the container.

We recommend that you run Lambda functions in a Greengrass container, unless your use case requires them to run without containerization. By running your Lambda functions in a Greengrass container, you have more control over how you restrict access to resources.

You might run a Lambda function without containerization in the following cases:

  • You want to run AWS IoT Greengrass on a device that doesn't support container mode, for example because the device uses a special Linux distribution or runs a kernel version that is too old.

  • You want to run your Lambda function in another container environment with its own OverlayFS, but encounter OverlayFS conflicts when you run in a Greengrass container.

  • You need access to local resources with paths that can't be determined at deployment time, or whose paths can change after deployment. An example of this resource would be a pluggable device.

  • You have an earlier application that was written as a process, and you encounter issues when you run it in a Greengrass container.

Containerization differences
The following notes describe the differences for each containerization option.

Greengrass container

  • All AWS IoT Greengrass features are available when you run a Lambda function in a Greengrass container.

  • Lambda functions that run in a Greengrass container don't have access to the deployed code of other Lambda functions, even if they run with the same system group. In other words, your Lambda functions run with increased isolation from one another.

  • Because the AWS IoT Greengrass Core software runs all child processes in the same container as the Lambda function, the child processes stop when the Lambda function stops.

No container

  • The following features aren't available to non-containerized Lambda functions:

    • Lambda function memory limits.

    • Local device and volume resources. You must access these resources using their file paths on the core device instead of as Lambda function resources.

  • If your non-containerized Lambda function accesses a machine learning resource, you must identify a resource owner and set access permissions on the resource, not on the Lambda function.

  • Non-containerized Lambda functions have read-only access to the deployed code of other Lambda functions that run with the same system group.

If you change the containerization for a Lambda function when you deploy it, the function might not work as expected. If the Lambda function uses local resources that are no longer available with the new containerization setting, deployment fails.

  • When you change a Lambda function from running in a Greengrass container to running without containerization, the function's memory limits are discarded. You must access the file system directly instead of using attached local resources. You must remove any attached resources before you deploy the Lambda function.

  • When you change a Lambda function from running without containerization to running in a container, your Lambda function loses direct access to the file system. You must define a memory limit for each function or accept the default 16 MB memory limit. You can configure these settings for each Lambda function when you deploy it.

To change containerization settings for a Lambda function component, set the value of the containerMode configuration parameter to one of the following options when you deploy the component (see the deployment sketch after the following list).

  • NoContainer – The component doesn't run in an isolated runtime environment.

  • GreengrassContainer – The component runs in an isolated runtime environment inside the AWS IoT Greengrass container.
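For example, the following minimal sketch uses the AWS SDK for Python (Boto3) to create a deployment that merges a configuration update setting containerMode to NoContainer for a Lambda function component. The target ARN, component name, and component version are placeholders for your own values.

    import json
    import boto3

    greengrassv2 = boto3.client("greengrassv2")

    # Deploy the Lambda function component with a configuration update that
    # switches it to run without containerization. All names are placeholders.
    response = greengrassv2.create_deployment(
        targetArn="arn:aws:iot:us-west-2:123456789012:thinggroup/MyGreengrassGroup",
        deploymentName="lambda-container-mode-update",
        components={
            "com.example.MyLambdaComponent": {
                "componentVersion": "1.0.0",
                "configurationUpdate": {
                    "merge": json.dumps({"containerMode": "NoContainer"})
                },
            }
        },
    )
    print(response["deploymentId"])

You can apply the same kind of configuration update from the AWS IoT Greengrass console or with the AWS CLI.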

For more information about how to deploy and configure components, see Deploy AWS IoT Greengrass components to devices and Update component configurations.