Package software.amazon.awscdk.services.lambda



AWS Lambda Construct Library

This construct library allows you to define AWS Lambda Functions.

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 

Handler Code

The lambda.Code class includes static convenience methods for various types of runtime code.

  • lambda.Code.fromBucket(bucket, key[, objectVersion]) - specify an S3 object that contains the archive of your runtime code.
  • lambda.Code.fromInline(code) - inline the handler code as a string. This is limited to supported runtimes.
  • lambda.Code.fromAsset(path) - specify a directory or a .zip file in the local filesystem which will be zipped and uploaded to S3 before deployment. See also bundling asset code.
  • lambda.Code.fromDockerBuild(path, options) - use the result of a Docker build as code. The runtime code is expected to be located at /asset in the image and will be zipped and uploaded to S3 as an asset.
  • lambda.Code.fromCustomCommand(output, command, customCommandOptions) - supply a command that is invoked during cdk synth. That command is meant to direct the generated code to output (a zip file or a directory), which is then used as the code for the created AWS Lambda.
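
For instance, a minimal sketch using lambda.Code.fromInline (the handler body here is a hypothetical Node.js one-liner):

```java
// Inline code is embedded in the CloudFormation template, so it is limited
// to runtimes that support it and to relatively small handlers.
Function.Builder.create(this, "InlineFunction")
        .runtime(Runtime.NODEJS_18_X)
        .handler("index.handler")
        .code(Code.fromInline("exports.handler = async () => 'hello';"))
        .build();
```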

The following example shows how to define a Python function and deploy the code from the local directory my-lambda-handler to it:

 Function.Builder.create(this, "MyLambda")
         .code(Code.fromAsset(join(__dirname, "my-lambda-handler")))
         .handler("index.main")
         .runtime(Runtime.PYTHON_3_9)
         .build();
 

When deploying a stack that contains this code, the directory will be zip-archived and uploaded to an S3 bucket, and the exact location of the S3 object will be passed to the function when the stack is deployed.

During synthesis, the CDK expects to find a directory on disk at the asset directory specified. Note that we are referencing the asset directory relative to our CDK project directory. This is especially important when we want to share this construct through a library. Different programming languages will have different techniques for bundling resources into libraries.

Docker Images

Lambda functions allow specifying their handlers within docker images. The docker image can be an image from ECR or a local asset that the CDK will package and load into ECR.

The following DockerImageFunction construct uses a local folder with a Dockerfile as the asset that will be used as the function handler.

 DockerImageFunction.Builder.create(this, "AssetFunction")
         .code(DockerImageCode.fromImageAsset(join(__dirname, "docker-handler")))
         .build();
 

You can also specify an image that already exists in ECR as the function handler.

 import software.amazon.awscdk.services.ecr.*;
 
 Repository repo = new Repository(this, "Repository");
 
 DockerImageFunction.Builder.create(this, "ECRFunction")
         .code(DockerImageCode.fromEcr(repo))
         .build();
 

The props for these docker image resources allow overriding the image's CMD, ENTRYPOINT, and WORKDIR configurations as well as choosing a specific tag or digest. See their docs for more information.

To deploy a DockerImageFunction on Lambda arm64 architecture, specify Architecture.ARM_64 in architecture. This will bundle docker image assets for the arm64 architecture with --platform linux/arm64 even if built on an x86_64 host.

 DockerImageFunction.Builder.create(this, "AssetFunction")
         .code(DockerImageCode.fromImageAsset(join(__dirname, "docker-arm64-handler")))
         .architecture(Architecture.ARM_64)
         .build();
 

Execution Role

Lambda functions assume an IAM role during execution. By default, the CDK uses an autogenerated Role if one is not provided.

The autogenerated Role is automatically given permissions to execute the Lambda function. To reference the autogenerated Role:

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 
 IRole role = fn.getRole();
 

You can also provide your own IAM role. Provided IAM roles will not automatically be given permissions to execute the Lambda function. To provide a role and grant it appropriate permissions:

 Role myRole = Role.Builder.create(this, "My Role")
         .assumedBy(new ServicePrincipal("lambda.amazonaws.com"))
         .build();
 
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .role(myRole)
         .build();
 
 myRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName("service-role/AWSLambdaBasicExecutionRole"));
 myRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName("service-role/AWSLambdaVPCAccessExecutionRole"));
 

Function Timeout

AWS Lambda functions have a default timeout of 3 seconds, but this can be increased up to 15 minutes. The timeout is available as a property of Function so that you can reference it elsewhere in your stack. For instance, you could use it to create a CloudWatch alarm to report when your function timed out:

 import software.amazon.awscdk.*;
 import software.amazon.awscdk.services.cloudwatch.*;
 
 
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .timeout(Duration.minutes(5))
         .build();
 
 if (fn.getTimeout() != null) {
     Alarm.Builder.create(this, "MyAlarm")
             .metric(fn.metricDuration().with(MetricOptions.builder()
                     .statistic("Maximum")
                     .build()))
             .evaluationPeriods(1)
             .datapointsToAlarm(1)
             .threshold(fn.getTimeout().toMilliseconds())
             .treatMissingData(TreatMissingData.IGNORE)
             .alarmName("My Lambda Timeout")
             .build();
 }
 

Advanced Logging

You can gain more control over your function logs by specifying the log format (JSON or plain text), the system log level, the application log level, and the log group:

 import software.amazon.awscdk.services.logs.ILogGroup;
 
 ILogGroup logGroup;
 
 
 Function.Builder.create(this, "Lambda")
         .code(new InlineCode("foo"))
         .handler("index.handler")
         .runtime(Runtime.NODEJS_18_X)
         .loggingFormat(LoggingFormat.JSON)
         .systemLogLevelV2(SystemLogLevel.INFO)
         .applicationLogLevelV2(ApplicationLogLevel.INFO)
         .logGroup(logGroup)
         .build();
 

To use applicationLogLevelV2 and/or systemLogLevelV2 you must set loggingFormat to LoggingFormat.JSON.

Resource-based Policies

AWS Lambda supports resource-based policies for controlling access to Lambda functions and layers on a per-resource basis. In particular, this allows you to give permission to AWS services, AWS Organizations, or other AWS accounts to modify and invoke your functions.

Grant function access to AWS services

 // Grant permissions to a service
 Function fn;
 
 ServicePrincipal principal = new ServicePrincipal("my-service");
 
 fn.grantInvoke(principal);
 
 // Equivalent to:
 fn.addPermission("my-service Invocation", Permission.builder()
         .principal(principal)
         .build());
 

You can also restrict permissions given to AWS services by providing a source account or ARN (representing the account and identifier of the resource that accesses the function or layer).
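
As a sketch, such a restriction can be expressed through the sourceAccount and sourceArn properties of Permission (the service, account, and bucket ARN below are placeholders):

```java
Function fn;

// Only allow invocations that originate from this bucket in this account
fn.addPermission("s3-invocation", Permission.builder()
        .principal(new ServicePrincipal("s3.amazonaws.com"))
        .sourceAccount("111122223333")
        .sourceArn("arn:aws:s3:::amzn-s3-demo-bucket")
        .build());
```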

Important:

By default fn.grantInvoke() grants permission to the principal to invoke any version of the function, including all past ones. If you only want the principal to be granted permission to invoke the latest version or the unqualified Lambda ARN, use grantInvokeLatestVersion(grantee).

 Function fn;
 
 ServicePrincipal principal = new ServicePrincipal("my-service");
 // Grant invoke only to latest version and unqualified lambda arn
 fn.grantInvokeLatestVersion(principal);
 

If you want to grant access for invoking a specific version of Lambda function, you can use fn.grantInvokeVersion(grantee, version)

 Function fn;
 IVersion version;
 
 ServicePrincipal principal = new ServicePrincipal("my-service");
 // Grant invoke only to the specific version
 fn.grantInvokeVersion(principal, version);
 

For more information, see Granting function access to AWS services in the AWS Lambda Developer Guide.

Grant function access to an AWS Organization

 // Grant permissions to an entire AWS organization
 Function fn;
 
 OrganizationPrincipal org = new OrganizationPrincipal("o-xxxxxxxxxx");
 
 fn.grantInvoke(org);
 

In the above example, the principal will be * and all users in the organization o-xxxxxxxxxx will get function invocation permissions.

You can restrict permissions given to the organization by specifying an AWS account or role as the principal:

 // Grant permission to an account ONLY IF they are part of the organization
 Function fn;
 
 AccountPrincipal account = new AccountPrincipal("123456789012");
 
 fn.grantInvoke(account.inOrganization("o-xxxxxxxxxx"));
 

For more information, see Granting function access to an organization in the AWS Lambda Developer Guide.

Grant function access to other AWS accounts

 // Grant permission to other AWS account
 Function fn;
 
 AccountPrincipal account = new AccountPrincipal("123456789012");
 
 fn.grantInvoke(account);
 

For more information, see Granting function access to other accounts in the AWS Lambda Developer Guide.

Grant function access to unowned principals

Providing an unowned principal (such as account principals, generic ARN principals, service principals, and principals in other accounts) to a call to fn.grantInvoke will result in a resource-based policy being created. If the principal in question has conditions limiting the source account or ARN of the operation (see above), these conditions will be automatically added to the resource policy.

 Function fn;
 
 ServicePrincipal servicePrincipal = new ServicePrincipal("my-service");
 String sourceArn = "arn:aws:s3:::my-bucket";
 String sourceAccount = "111122223333";
 PrincipalBase servicePrincipalWithConditions = servicePrincipal.withConditions(Map.of(
         "ArnLike", Map.of(
                 "aws:SourceArn", sourceArn),
         "StringEquals", Map.of(
                 "aws:SourceAccount", sourceAccount)));
 
 fn.grantInvoke(servicePrincipalWithConditions);
 

Grant function access to a CompositePrincipal

To grant invoke permissions to a CompositePrincipal use the grantInvokeCompositePrincipal method:

 Function fn;
 
 CompositePrincipal compositePrincipal = new CompositePrincipal(
         new OrganizationPrincipal("o-zzzzzzzzzz"),
         new ServicePrincipal("apigateway.amazonaws.com"));
 
 fn.grantInvokeCompositePrincipal(compositePrincipal);
 

Versions

You can use versions to manage the deployment of your AWS Lambda functions. For example, you can publish a new version of a function for beta testing without affecting users of the stable production version.

The function version includes the following information:

  • The function code and all associated dependencies.
  • The Lambda runtime that executes the function.
  • All of the function settings, including the environment variables.
  • A unique Amazon Resource Name (ARN) to identify this version of the function.

You can create a version of your Lambda function using the Version construct.

 Function fn;
 
 Version version = Version.Builder.create(this, "MyVersion")
         .lambda(fn)
         .build();
 

The major caveat to know here is that a function version always points to a specific snapshot of the function. When the function is modified, the version continues to point to the function as it was at the time the version was created.

One way to ensure that the lambda.Version always points to the latest version of your lambda.Function is to set an environment variable which changes at least as often as your code does. This makes sure the function always has the latest code. For instance:

 String codeVersion = "stringOrMethodToGetCodeVersion";
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .environment(Map.of(
                 "CodeVersionString", codeVersion))
         .build();
 

The fn.latestVersion property returns a lambda.IVersion which represents the $LATEST pseudo-version.

However, most AWS services require a specific AWS Lambda version, and won't allow you to use $LATEST. Therefore, you would normally want to use fn.currentVersion.

The fn.currentVersion property can be used to obtain a lambda.Version resource that represents the AWS Lambda function defined in your application. Any change to your function's code or configuration will result in the creation of a new version resource. You can specify options for this version through the currentVersionOptions property.
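
For example, an alias can be pointed at the current version (getter names below follow the Java binding):

```java
Function fn;

// A new Version resource is created whenever the function's code or
// configuration changes; the alias tracks it.
Version current = fn.getCurrentVersion();
Alias.Builder.create(this, "LiveAlias")
        .aliasName("live")
        .version(current)
        .build();
```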

NOTE: The currentVersion property is only supported when your AWS Lambda function uses either lambda.Code.fromAsset or lambda.Code.fromInline. Other types of code providers (such as lambda.Code.fromBucket) require that you define a lambda.Version resource directly since the CDK is unable to determine if their contents had changed.

currentVersion: Updated hashing logic

To produce a new Lambda version each time the Lambda function is modified, the currentVersion property computes, under the hood, a new logical ID based on the properties of the function. This informs CloudFormation that a new AWS::Lambda::Version resource should be created, pointing to the updated Lambda function.

However, a bug was introduced in this calculation that caused the logical ID to change when it was not required (for example, when the Function's Tags property or its DependsOn clause was modified). This caused the deployment to fail, since the Lambda service does not allow creating duplicate versions.

This has been fixed in the AWS CDK but existing users need to opt-in via a feature flag. Users who have run cdk init since this fix will be opted in, by default.

Otherwise, you will need to enable the feature flag @aws-cdk/aws-lambda:recognizeVersionProps. Since CloudFormation does not allow duplicate versions, you will also need to make some modification to your function so that a new version can be created. To efficiently and trivially modify all your lambda functions at once, you can attach the FunctionVersionUpgrade aspect to the stack, which slightly alters the function description. This aspect is intended for one-time use to upgrade the version of all your functions at the same time, and can safely be removed after deploying once.

 Stack stack = new Stack();
 Aspects.of(stack).add(new FunctionVersionUpgrade(LAMBDA_RECOGNIZE_VERSION_PROPS));
 

When the new logic is in effect, you may occasionally come across the following error: The following properties are not recognized as version properties. This typically occurs when property overrides are used and a property newly introduced in AWS::Lambda::Function, which the CDK is not yet aware of, is set.

To overcome this error, use the Function.classifyVersionProperty() API to record whether a new version should be generated when the property is changed. This can typically be determined by checking whether the property can be modified using the UpdateFunctionConfiguration API.
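
A minimal sketch, assuming a hypothetical property name MyNewProperty that the CDK does not yet recognize:

```java
// Record that changes to "MyNewProperty" should produce a new version
// (pass false instead if the property does not affect the version).
Function.classifyVersionProperty("MyNewProperty", true);
```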

currentVersion: Updated hashing logic for layer versions

An additional update to the hashing logic fixes two issues surrounding layers. Prior to this change, updating the lambda layer version would have no effect on the function version. Also, the order of lambda layers provided to the function was unnecessarily baked into the hash.

This has been fixed in the AWS CDK starting with version 2.27. If you ran cdk init with an earlier version, you will need to opt-in via a feature flag. If you run cdk init with v2.27 or later, this fix will be opted in, by default.

Existing users will need to enable the feature flag @aws-cdk/aws-lambda:recognizeLayerVersion. Since CloudFormation does not allow duplicate versions, they will also need to make some modification to their function so that a new version can be created. To efficiently and trivially modify all your lambda functions at once, users can attach the FunctionVersionUpgrade aspect to the stack, which slightly alters the function description. This aspect is intended for one-time use to upgrade the version of all your functions at the same time, and can safely be removed after deploying once.

 Stack stack = new Stack();
 Aspects.of(stack).add(new FunctionVersionUpgrade(LAMBDA_RECOGNIZE_LAYER_VERSION));
 

Aliases

You can define one or more aliases for your AWS Lambda function. A Lambda alias is like a pointer to a specific Lambda function version. Users can access the function version using the alias ARN.

The version.addAlias() method can be used to define an AWS Lambda alias that points to a specific version.

The following example defines an alias named live which will always point to a version that represents the function as defined in your CDK app. When you change your lambda code or configuration, a new resource will be created. You can specify options for the current version through the currentVersionOptions property.

 Function fn = Function.Builder.create(this, "MyFunction")
         .currentVersionOptions(VersionOptions.builder()
                 .removalPolicy(RemovalPolicy.RETAIN) // retain old versions
                 .retryAttempts(1)
                 .build())
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 
 fn.addAlias("live");
 

Function URL

A function URL is a dedicated HTTP(S) endpoint for your Lambda function. When you create a function URL, Lambda automatically generates a unique URL endpoint for you. Function URLs can be created for the latest version of a Lambda Function, or for Function Aliases (but not for other Versions).

Function URLs are dual-stack enabled, supporting IPv4 and IPv6, and support cross-origin resource sharing (CORS) configuration. After you configure a function URL for your function, you can invoke your function through its HTTP(S) endpoint via a web browser, curl, Postman, or any HTTP client. To invoke a function using IAM authentication, your HTTP client must support SigV4 signing.

See the Invoking Function URLs section of the AWS Lambda Developer Guide for more information on the input and output payloads of Functions invoked in this way.

IAM-authenticated Function URLs

To create a Function URL which can be called by an IAM identity, call addFunctionUrl(), followed by grantInvokeFunctionUrl():

 // Can be a Function or an Alias
 Function fn;
 Role myRole;
 
 
 FunctionUrl fnUrl = fn.addFunctionUrl();
 fnUrl.grantInvokeUrl(myRole);
 
 CfnOutput.Builder.create(this, "TheUrl")
         // The .url attributes will return the unique Function URL
         .value(fnUrl.getUrl())
         .build();
 

Calls to this URL need to be signed with SigV4.

Anonymous Function URLs

To create a Function URL which can be called anonymously, pass authType: FunctionUrlAuthType.NONE to addFunctionUrl():

 // Can be a Function or an Alias
 Function fn;
 
 
 FunctionUrl fnUrl = fn.addFunctionUrl(FunctionUrlOptions.builder()
         .authType(FunctionUrlAuthType.NONE)
         .build());
 
 CfnOutput.Builder.create(this, "TheUrl")
         .value(fnUrl.getUrl())
         .build();
 

CORS configuration for Function URLs

If you want your Function URLs to be invokable from a web page in browser, you will need to configure cross-origin resource sharing to allow the call (if you do not do this, your browser will refuse to make the call):

 Function fn;
 
 
 fn.addFunctionUrl(FunctionUrlOptions.builder()
         .authType(FunctionUrlAuthType.NONE)
         .cors(FunctionUrlCorsOptions.builder()
                 // Allow this to be called from websites on https://example.com.
                 // Can also be List.of("*") to allow all domains.
                 .allowedOrigins(List.of("https://example.com"))
                 .build())
         .build());
 

Invoke Mode for Function URLs

Invoke mode determines how AWS Lambda invokes your function. You can configure the invoke mode when creating a Function URL using the invokeMode property:

 Function fn;
 
 
 fn.addFunctionUrl(FunctionUrlOptions.builder()
         .authType(FunctionUrlAuthType.NONE)
         .invokeMode(InvokeMode.RESPONSE_STREAM)
         .build());
 

If the invokeMode property is not specified, the default BUFFERED mode will be used.

Layers

The lambda.LayerVersion class can be used to define Lambda layers and manage granting permissions to other AWS accounts or organizations.

 LayerVersion layer = LayerVersion.Builder.create(stack, "MyLayer")
         .code(Code.fromAsset(join(__dirname, "layer-code")))
         .compatibleRuntimes(List.of(Runtime.NODEJS_LATEST))
         .license("Apache-2.0")
         .description("A layer to test the L2 construct")
         .build();
 
 // To grant usage by other AWS accounts
 String awsAccountId;
 layer.addPermission("remote-account-grant", LayerVersionPermission.builder().accountId(awsAccountId).build());
 
 // To grant usage to all accounts in some AWS Organization
 // layer.grantUsage({ accountId: '*', organizationId });
 
 Function.Builder.create(stack, "MyLayeredLambda")
         .code(new InlineCode("foo"))
         .handler("index.handler")
         .runtime(Runtime.NODEJS_LATEST)
         .layers(List.of(layer))
         .build();
 

By default, updating a layer creates a new layer version, and CloudFormation will delete the old version as part of the stack update.

Alternatively, a removal policy can be used to retain the old version:

 LayerVersion.Builder.create(this, "MyLayer")
         .removalPolicy(RemovalPolicy.RETAIN)
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 

Architecture

Lambda functions, by default, run on compute systems that have the 64-bit x86 architecture.

The AWS Lambda service also runs compute on the ARM architecture, which can reduce cost for some workloads.

A Lambda function can be configured to run on one of these platforms:

 Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .architecture(Architecture.ARM_64)
         .build();
 

Similarly, Lambda layer versions can also be tagged with the architectures they are compatible with.

 LayerVersion.Builder.create(this, "MyLayer")
         .removalPolicy(RemovalPolicy.RETAIN)
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .compatibleArchitectures(List.of(Architecture.X86_64, Architecture.ARM_64))
         .build();
 

Lambda Insights

Lambda functions can be configured to use CloudWatch Lambda Insights, which provides low-level runtime metrics for Lambda functions.

 Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .insightsVersion(LambdaInsightsVersion.VERSION_1_0_98_0)
         .build();
 

If the version of Insights is not yet available in the CDK, you can also provide the layer ARN directly, like so:

 String layerArn = "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14";
 Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .insightsVersion(LambdaInsightsVersion.fromInsightVersionArn(layerArn))
         .build();
 

If you are deploying an ARM_64 Lambda Function, you must specify a Lambda Insights Version >= 1_0_119_0.

 Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .architecture(Architecture.ARM_64)
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .insightsVersion(LambdaInsightsVersion.VERSION_1_0_119_0)
         .build();
 

Parameters and Secrets Extension

Lambda functions can be configured to use the Parameters and Secrets Extension. The Parameters and Secrets Extension can be used to retrieve and cache secrets from Secrets Manager or parameters from Parameter Store in Lambda functions without using an SDK.

 import software.amazon.awscdk.services.secretsmanager.*;
 import software.amazon.awscdk.services.ssm.*;
 
 
 Secret secret = new Secret(this, "Secret");
 StringParameter parameter = StringParameter.Builder.create(this, "Parameter")
         .parameterName("mySsmParameterName")
         .stringValue("mySsmParameterValue")
         .build();
 
 ParamsAndSecretsLayerVersion paramsAndSecrets = ParamsAndSecretsLayerVersion.fromVersion(ParamsAndSecretsVersions.V1_0_103, ParamsAndSecretsOptions.builder()
         .cacheSize(500)
         .logLevel(ParamsAndSecretsLogLevel.DEBUG)
         .build());
 
 Function lambdaFunction = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .architecture(Architecture.ARM_64)
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .paramsAndSecrets(paramsAndSecrets)
         .build();
 
 secret.grantRead(lambdaFunction);
 parameter.grantRead(lambdaFunction);
 

If the version of the Parameters and Secrets Extension is not yet available in the CDK, you can also provide the ARN directly, like so:

 import software.amazon.awscdk.services.secretsmanager.*;
 import software.amazon.awscdk.services.ssm.*;
 
 
 Secret secret = new Secret(this, "Secret");
 StringParameter parameter = StringParameter.Builder.create(this, "Parameter")
         .parameterName("mySsmParameterName")
         .stringValue("mySsmParameterValue")
         .build();
 
 String layerArn = "arn:aws:lambda:us-east-1:177933569100:layer:AWS-Parameters-and-Secrets-Lambda-Extension:4";
 ParamsAndSecretsLayerVersion paramsAndSecrets = ParamsAndSecretsLayerVersion.fromVersionArn(layerArn, ParamsAndSecretsOptions.builder()
         .cacheSize(500)
         .build());
 
 Function lambdaFunction = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .architecture(Architecture.ARM_64)
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .paramsAndSecrets(paramsAndSecrets)
         .build();
 
 secret.grantRead(lambdaFunction);
 parameter.grantRead(lambdaFunction);
 

Event Rule Target

You can use an AWS Lambda function as a target for an Amazon CloudWatch event rule:

 import software.amazon.awscdk.services.events.*;
 import software.amazon.awscdk.services.events.targets.*;
 
 Function fn;
 
 Rule rule = Rule.Builder.create(this, "Schedule Rule")
         .schedule(Schedule.cron(CronOptions.builder().minute("0").hour("4").build()))
         .build();
 rule.addTarget(new LambdaFunction(fn));
 

Event Sources

AWS Lambda supports a variety of event sources.

In most cases, it is possible to trigger a function as a result of an event by using one of the add<Event>Notification methods on the source construct. For example, the s3.Bucket construct has an addEventNotification method which can be used to trigger a Lambda when an event, such as PutObject, occurs on an S3 bucket.
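
As a sketch of this notification-method style, an S3 bucket can invoke a function directly (the LambdaDestination class lives in the s3.notifications package; bucket name is a placeholder):

```java
import software.amazon.awscdk.services.s3.*;
import software.amazon.awscdk.services.s3.notifications.*;

Function fn;

Bucket bucket = new Bucket(this, "NotifyingBucket");
// Invoke the function whenever an object is created in the bucket
bucket.addEventNotification(EventType.OBJECT_CREATED, new LambdaDestination(fn));
```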

An alternative way to add event sources to a function is to use function.addEventSource(source). This method accepts an IEventSource object. The module @aws-cdk/aws-lambda-event-sources includes classes for the various event sources supported by AWS Lambda.

For example, the following code adds an SQS queue as an event source for a function:

 import software.amazon.awscdk.services.lambda.eventsources.*;
 import software.amazon.awscdk.services.sqs.*;
 
 Function fn;
 
 Queue queue = new Queue(this, "Queue");
 fn.addEventSource(new SqsEventSource(queue));
 

The following code adds an S3 bucket notification as an event source:

 import software.amazon.awscdk.services.lambda.eventsources.*;
 import software.amazon.awscdk.services.s3.*;
 
 Function fn;
 
 Bucket bucket = new Bucket(this, "Bucket");
 fn.addEventSource(S3EventSource.Builder.create(bucket)
         .events(List.of(EventType.OBJECT_CREATED, EventType.OBJECT_REMOVED))
         .filters(List.of(NotificationKeyFilter.builder().prefix("subdir/").build()))
         .build());
 

The following code adds a DynamoDB stream as an event source, filtering for INSERT events:

 import software.amazon.awscdk.services.lambda.eventsources.*;
 import software.amazon.awscdk.services.dynamodb.*;
 
 Function fn;
 
 Table table = Table.Builder.create(this, "Table")
         .partitionKey(Attribute.builder()
                 .name("id")
                 .type(AttributeType.STRING)
                 .build())
         .stream(StreamViewType.NEW_IMAGE)
         .build();
 fn.addEventSource(DynamoEventSource.Builder.create(table)
         .startingPosition(StartingPosition.LATEST)
         .filters(List.of(FilterCriteria.filter(Map.of("eventName", FilterRule.isEqual("INSERT")))))
         .build());
 

See the documentation for the @aws-cdk/aws-lambda-event-sources module for more details.

Imported Lambdas

When referencing an imported lambda in the CDK, use fromFunctionArn() for most use cases:

 IFunction fn = Function.fromFunctionArn(this, "Function", "arn:aws:lambda:us-east-1:123456789012:function:MyFn");
 

The fromFunctionAttributes() API is available for more specific use cases:

 IFunction fn = Function.fromFunctionAttributes(this, "Function", FunctionAttributes.builder()
         .functionArn("arn:aws:lambda:us-east-1:123456789012:function:MyFn")
         // The following are optional properties for specific use cases and should be used with caution:
 
         // Use Case: imported function is in the same account as the stack. This tells the CDK that it
         // can modify the function's permissions.
         .sameEnvironment(true)
 
         // Use Case: imported function is in a different account and user commits to ensuring that the
         // imported function has the correct permissions outside the CDK.
         .skipPermissions(true)
         .build());
 

Function.fromFunctionArn() and Function.fromFunctionAttributes() will attempt to parse the Function's Region and Account ID from the ARN. addPermission will only work on the Function object if the Region and Account ID are deterministically the same as the scope of the Stack the referenced Function object is created in. If the containing Stack is environment-agnostic or the Function ARN is a Token, this comparison will fail, and calls to Function.addPermission will do nothing. If you know Function permissions can safely be added, you can use Function.fromFunctionName() instead, or pass sameEnvironment: true to Function.fromFunctionAttributes().

 IFunction fn = Function.fromFunctionName(this, "Function", "MyFn");
 

Lambda with DLQ

A dead-letter queue can be automatically created for a Lambda function by setting the deadLetterQueueEnabled: true configuration. In that case, the CDK creates an sqs.Queue as the deadLetterQueue.

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"))
         .deadLetterQueueEnabled(true)
         .build();
 

It is also possible to provide a dead-letter queue instead of getting a new queue created:

 import software.amazon.awscdk.services.sqs.*;
 
 
 Queue dlq = new Queue(this, "DLQ");
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"))
         .deadLetterQueue(dlq)
         .build();
 

You can also use an sns.Topic instead of an sqs.Queue as the dead-letter queue:

 import software.amazon.awscdk.services.sns.*;
 
 
 Topic dlt = new Topic(this, "DLQ");
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("// your code here"))
         .deadLetterTopic(dlt)
         .build();
 

See the AWS documentation to learn more about AWS Lambda and DLQs.

Lambda with X-Ray Tracing

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"))
         .tracing(Tracing.ACTIVE)
         .build();
 

See the AWS documentation to learn more about AWS Lambda's X-Ray support.

Lambda with AWS Distro for OpenTelemetry layer

To have automatic integration with X-Ray without having to add dependencies or change your code, you can use the AWS Distro for OpenTelemetry (ADOT) Lambda layer. Consuming the latest ADOT layer can be done with the following snippet:

 import software.amazon.awscdk.services.lambda.AdotLambdaExecWrapper;
 import software.amazon.awscdk.services.lambda.AdotLayerVersion;
 import software.amazon.awscdk.services.lambda.AdotLambdaLayerJavaScriptSdkVersion;
 
 
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"))
         .adotInstrumentation(AdotInstrumentationConfig.builder()
                 .layerVersion(AdotLayerVersion.fromJavaScriptSdkLayerVersion(AdotLambdaLayerJavaScriptSdkVersion.LATEST))
                 .execWrapper(AdotLambdaExecWrapper.REGULAR_HANDLER)
                 .build())
         .build();
 

To use a different layer version, use one of the following helper functions for the layerVersion prop:

  • AdotLayerVersion.fromJavaScriptSdkLayerVersion
  • AdotLayerVersion.fromPythonSdkLayerVersion
  • AdotLayerVersion.fromJavaSdkLayerVersion
  • AdotLayerVersion.fromJavaAutoInstrumentationSdkLayerVersion
  • AdotLayerVersion.fromGenericSdkLayerVersion

Each helper function expects a version value from a corresponding enum-like class as below:

  • AdotLambdaLayerJavaScriptSdkVersion
  • AdotLambdaLayerPythonSdkVersion
  • AdotLambdaLayerJavaSdkVersion
  • AdotLambdaLayerJavaAutoInstrumentationSdkVersion
  • AdotLambdaLayerGenericSdkVersion

For more examples, see the integration test.

If you want to retrieve the ARN of the ADOT Lambda layer without enabling ADOT in a Lambda function:

 Function fn;
 
 String layerArn = AdotLambdaLayerJavaSdkVersion.V1_19_0.layerArn(fn.getStack(), fn.getArchitecture());
 

When using AdotLambdaLayerPythonSdkVersion, the AdotLambdaExecWrapper needs to be AdotLambdaExecWrapper.INSTRUMENT_HANDLER, as per AWS Distro for OpenTelemetry Lambda Support For Python.
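
As a sketch, a Python function consuming the latest ADOT Python SDK layer could look like this (the asset path is illustrative):

 Function fn = Function.Builder.create(this, "MyPythonFunction")
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .code(Code.fromAsset("lambda-handler"))
         .adotInstrumentation(AdotInstrumentationConfig.builder()
                 .layerVersion(AdotLayerVersion.fromPythonSdkLayerVersion(AdotLambdaLayerPythonSdkVersion.LATEST))
                 // INSTRUMENT_HANDLER is required for the Python SDK layer
                 .execWrapper(AdotLambdaExecWrapper.INSTRUMENT_HANDLER)
                 .build())
         .build();
 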

Lambda with Profiling

The following code configures a Lambda function with CodeGuru profiling. By default, this creates a new CodeGuru profiling group:

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .code(Code.fromAsset("lambda-handler"))
         .profiling(true)
         .build();
 

The profilingGroup property can be used to configure an existing CodeGuru profiler group.
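
For example, a sketch that passes an existing profiling group defined with the codeguruprofiler module (the group name is illustrative):

 import software.amazon.awscdk.services.codeguruprofiler.ProfilingGroup;
 
 
 ProfilingGroup profilingGroup = new ProfilingGroup(this, "ProfilingGroup");
 Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .code(Code.fromAsset("lambda-handler"))
         // use the existing group instead of creating a new one
         .profilingGroup(profilingGroup)
         .build();
 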

CodeGuru profiling is supported for all Java runtimes and Python 3.6+ runtimes.

See the AWS documentation to learn more about AWS Lambda's Profiling support.

Lambda with Reserved Concurrent Executions

 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromInline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"))
         .reservedConcurrentExecutions(100)
         .build();
 

See the AWS documentation on managing concurrency.

Lambda with SnapStart

SnapStart is currently supported only on the Java 11 and Java 17 runtimes. SnapStart does not support provisioned concurrency, the arm64 architecture, Amazon Elastic File System (Amazon EFS), or ephemeral storage greater than 512 MB. After you enable SnapStart for a particular Lambda function, publishing a new version of the function triggers an optimization process.

See the AWS documentation to learn more about AWS Lambda SnapStart.

 Function fn = Function.Builder.create(this, "MyFunction")
         .code(Code.fromAsset(join(__dirname, "handler.zip")))
         .runtime(Runtime.JAVA_11)
         .handler("example.Handler::handleRequest")
         .snapStart(SnapStartConf.ON_PUBLISHED_VERSIONS)
         .build();
 
 Version version = fn.getCurrentVersion();
 

AutoScaling

You can use Application AutoScaling to automatically configure the provisioned concurrency for your functions. AutoScaling can be set to track utilization or be based on a schedule. To configure AutoScaling on a function alias:

 import software.amazon.awscdk.services.autoscaling.*;
 
 Function fn;
 
 Alias alias = fn.addAlias("prod");
 
 // Create AutoScaling target
 IScalableFunctionAttribute as = alias.addAutoScaling(AutoScalingOptions.builder().maxCapacity(50).build());
 
 // Configure Target Tracking
 as.scaleOnUtilization(UtilizationScalingOptions.builder()
         .utilizationTarget(0.5)
         .build());
 
 // Configure Scheduled Scaling
 as.scaleOnSchedule("ScaleUpInTheMorning", ScalingSchedule.builder()
         .schedule(Schedule.cron(CronOptions.builder().hour("8").minute("0").build()))
         .minCapacity(20)
         .build());
 

 import software.amazon.awscdk.services.applicationautoscaling.*;
 import software.amazon.awscdk.*;
 import cx.api.LAMBDA_RECOGNIZE_LAYER_VERSION;
 
 /**
 * Stack verification steps:
 * aws application-autoscaling describe-scalable-targets --service-namespace lambda --resource-ids function:<function name>:prod
 * has a minCapacity of 3 and maxCapacity of 50
 */
 public class TestStack extends Stack {
     public TestStack(App scope, String id) {
         super(scope, id);
 
         Function fn = Function.Builder.create(this, "MyLambda")
                 .code(new InlineCode("exports.handler = async () => { console.log('hello world'); };"))
                 .handler("index.handler")
                 .runtime(Runtime.NODEJS_LATEST)
                 .build();
 
         Version version = fn.getCurrentVersion();
 
         Alias alias = Alias.Builder.create(this, "Alias")
                 .aliasName("prod")
                 .version(version)
                 .build();
 
         IScalableFunctionAttribute scalingTarget = alias.addAutoScaling(AutoScalingOptions.builder().minCapacity(3).maxCapacity(50).build());
 
         scalingTarget.scaleOnUtilization(UtilizationScalingOptions.builder()
                 .utilizationTarget(0.5)
                 .build());
 
         scalingTarget.scaleOnSchedule("ScaleUpInTheMorning", ScalingSchedule.builder()
                 .schedule(Schedule.cron(CronOptions.builder().hour("8").minute("0").build()))
                 .minCapacity(20)
                 .build());
 
         scalingTarget.scaleOnSchedule("ScaleDownAtNight", ScalingSchedule.builder()
                 .schedule(Schedule.cron(CronOptions.builder().hour("20").minute("0").build()))
                 .maxCapacity(20)
                 .build());
 
         CfnOutput.Builder.create(this, "FunctionName")
                 .value(fn.getFunctionName())
                 .build();
     }
 }
 
 App app = new App();
 
 TestStack stack = new TestStack(app, "aws-lambda-autoscaling");
 
 // Changes the function description when the feature flag is present
 // to validate the changed function hash.
 Aspects.of(stack).add(new FunctionVersionUpgrade(LAMBDA_RECOGNIZE_LAYER_VERSION));
 
 app.synth();
 

See the AWS documentation on autoscaling lambda functions.

Log Group

By default, Lambda functions automatically create a log group with the name /aws/lambda/<function-name> upon first execution, with log data set to never expire. This is convenient, but prevents you from changing any of the properties of this auto-created log group using the AWS CDK. For example, you cannot set log retention or assign a data protection policy.

To fully customize the logging behavior of your Lambda function, create a logs.LogGroup ahead of time and use the logGroup property to instruct the Lambda function to send logs to it. This way you can use the full feature set supported by Amazon CloudWatch Logs.

 import software.amazon.awscdk.services.logs.LogGroup;
 
 
 LogGroup myLogGroup = LogGroup.Builder.create(this, "MyLogGroupWithLogGroupName")
         .logGroupName("customLogGroup")
         .build();
 
 Function.Builder.create(this, "Lambda")
         .code(new InlineCode("foo"))
         .handler("index.handler")
         .runtime(Runtime.NODEJS_18_X)
         .logGroup(myLogGroup)
         .build();
 

Providing a user-controlled log group was rolled out to commercial regions on 2023-11-16. If you are deploying to another type of region, please check regional availability first.

Legacy Log Retention

As an alternative to providing a custom, user-controlled log group, the legacy logRetention property can be used to set a different expiration period. This feature uses a Custom Resource to change the log retention of the automatically created log group.

By default, CDK uses the AWS SDK retry options when creating a log group. The logRetentionRetryOptions property allows you to customize the maximum number of retries and base backoff duration.
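
A sketch combining both properties (the retention period and retry values are illustrative):

 import software.amazon.awscdk.Duration;
 import software.amazon.awscdk.services.logs.RetentionDays;
 
 
 Function.Builder.create(this, "Function")
         .code(new InlineCode("foo"))
         .handler("index.handler")
         .runtime(Runtime.NODEJS_18_X)
         // expire log events after one week instead of never
         .logRetention(RetentionDays.ONE_WEEK)
         .logRetentionRetryOptions(LogRetentionRetryOptions.builder()
                 .base(Duration.millis(200))
                 .maxRetries(10)
                 .build())
         .build();
 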

Note that a CloudFormation custom resource is added to the stack that pre-creates the log group as part of the stack deployment, if it doesn't already exist, and sets the correct log retention period (never expire, by default). This Custom Resource will also create a log group to log events of the custom resource. The log retention period for this additional log group is hard-coded to 1 day.

Further note that, if the log group already exists and the logRetention is not set, the custom resource will reset the log retention to never expire even if it was configured with a different value.

FileSystem Access

You can configure a function to mount an Amazon Elastic File System (Amazon EFS) to a directory in your runtime environment with the filesystem property. To access Amazon EFS from a Lambda function, an Amazon EFS access point is required.

The following sample allows the Lambda function to mount the Amazon EFS access point at /mnt/msg in the runtime environment and access the filesystem with the POSIX identity defined in posixUser.

 import software.amazon.awscdk.services.ec2.*;
 import software.amazon.awscdk.services.efs.*;
 
 
 // create a new VPC
 Vpc vpc = new Vpc(this, "VPC");
 
 // create a new Amazon EFS filesystem
 FileSystem fileSystem = FileSystem.Builder.create(this, "Efs").vpc(vpc).build();
 
 // create a new access point from the filesystem
 AccessPoint accessPoint = fileSystem.addAccessPoint("AccessPoint", AccessPointOptions.builder()
         // set /export/lambda as the root of the access point
         .path("/export/lambda")
         // as /export/lambda does not exist in a new efs filesystem, the efs will create the directory with the following createAcl
         .createAcl(Acl.builder()
                 .ownerUid("1001")
                 .ownerGid("1001")
                 .permissions("750")
                 .build())
         // enforce the POSIX identity so lambda function will access with this identity
         .posixUser(PosixUser.builder()
                 .uid("1001")
                 .gid("1001")
                 .build())
         .build());
 
 Function fn = Function.Builder.create(this, "MyLambda")
         // mount the access point to /mnt/msg in the lambda runtime environment
         .filesystem(FileSystem.fromEfsAccessPoint(accessPoint, "/mnt/msg"))
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .vpc(vpc)
         .build();
 

IPv6 support

You can configure IPv6 connectivity for a Lambda function by setting ipv6AllowedForDualStack to true. This allows a Lambda function to specify whether IPv6 traffic should be allowed when using dual-stack VPCs. To access an IPv6 network from Lambda, a dual-stack VPC is required; with a dual-stack VPC, a function can communicate with its subnet over either IPv4 or IPv6.

 import software.amazon.awscdk.services.ec2.*;
 
 
 NatProvider natProvider = NatProvider.gateway();
 
 // create dual-stack VPC
 Vpc vpc = Vpc.Builder.create(this, "DualStackVpc")
         .ipProtocol(IpProtocol.DUAL_STACK)
         .subnetConfiguration(List.of(SubnetConfiguration.builder()
                 .name("Ipv6Public1")
                 .subnetType(SubnetType.PUBLIC)
                 .build(), SubnetConfiguration.builder()
                 .name("Ipv6Public2")
                 .subnetType(SubnetType.PUBLIC)
                 .build(), SubnetConfiguration.builder()
                 .name("Ipv6Private1")
                 .subnetType(SubnetType.PRIVATE_WITH_EGRESS)
                 .build()))
         .natGatewayProvider(natProvider)
         .build();
 
 String natGatewayId = natProvider.getConfiguredGateways().get(0).getGatewayId();
 ((PrivateSubnet)vpc.getPrivateSubnets().get(0)).addIpv6Nat64Route(natGatewayId);
 
 Function fn = Function.Builder.create(this, "Lambda_with_IPv6_VPC")
         .code(new InlineCode("def main(event, context): pass"))
         .handler("index.main")
         .runtime(Runtime.PYTHON_3_9)
         .vpc(vpc)
         .ipv6AllowedForDualStack(true)
         .build();
 

Ephemeral Storage

You can configure ephemeral storage on a function to control the amount of storage it gets for reading or writing data, allowing you to use AWS Lambda for ETL jobs, ML inference, or other data-intensive workloads. The ephemeral storage will be accessible in the function's /tmp directory.

 import software.amazon.awscdk.Size;
 
 
 Function fn = Function.Builder.create(this, "MyFunction")
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .ephemeralStorageSize(Size.mebibytes(1024))
         .build();
 

Read more about using this feature in this AWS blog post.

Singleton Function

The SingletonFunction construct guarantees that a Lambda function is part of the stack once and only once, irrespective of how many times the construct is declared in the stack. This holds as long as the uuid property and the optional lambdaPurpose property stay the same whenever they're declared in the stack.

A typical use case is a higher-level construct that needs to declare a Lambda function as part of itself, but must guarantee that the function is declared only once, even though a user may declare the higher-level construct any number of times and with different properties. Using SingletonFunction with a fixed uuid guarantees this.

For example, the AwsCustomResource construct requires only a single Lambda function for all the API calls that it makes.
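
A minimal sketch of declaring a SingletonFunction (the uuid is an arbitrary fixed value; generate your own once and keep it stable):

 SingletonFunction fn = SingletonFunction.Builder.create(this, "MySingleton")
         // a fixed, arbitrary UUID that identifies this function across the stack;
         // declaring this construct again with the same uuid reuses the same function
         .uuid("f7d4f730-4ee1-11e8-9c2d-fa7ae01bbebc")
         .lambdaPurpose("MyPurpose")
         .code(new InlineCode("def main(event, context): pass"))
         .handler("index.main")
         .runtime(Runtime.PYTHON_3_9)
         .build();
 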

Bundling Asset Code

When using lambda.Code.fromAsset(path) it is possible to bundle the code by running a command in a Docker container. The asset path will be mounted at /asset-input. The Docker container is responsible for putting content at /asset-output. The content at /asset-output will be zipped and used as Lambda code.

Example with Python:

 Function.Builder.create(this, "Function")
         .code(Code.fromAsset(join(__dirname, "my-python-handler"), AssetOptions.builder()
                 .bundling(BundlingOptions.builder()
                         .image(Runtime.PYTHON_3_9.getBundlingImage())
                         .command(List.of("bash", "-c", "pip install -r requirements.txt -t /asset-output && cp -au . /asset-output"))
                         .build())
                 .build()))
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .build();
 

Runtimes expose a bundlingImage property that points to the AWS SAM build image.

Use cdk.DockerImage.fromRegistry(image) to use an existing image or cdk.DockerImage.fromBuild(path) to build a specific image:

 Function.Builder.create(this, "Function")
         .code(Code.fromAsset("/path/to/handler", AssetOptions.builder()
                 .bundling(BundlingOptions.builder()
                         .image(DockerImage.fromBuild("/path/to/dir/with/DockerFile", DockerBuildOptions.builder()
                                 .buildArgs(Map.of(
                                         "ARG1", "value1"))
                                 .build()))
                         .command(List.of("my", "cool", "command"))
                         .build())
                 .build()))
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .build();
 

Language-specific APIs

Language-specific higher-level constructs, such as those for Node.js, Python, and Go functions, are provided in separate modules.

Code Signing

Code signing for AWS Lambda helps to ensure that only trusted code runs in your Lambda functions. When enabled, AWS Lambda checks every code deployment and verifies that the code package is signed by a trusted source. For more information, see Configuring code signing for AWS Lambda. The following code configures a function with code signing.

 import software.amazon.awscdk.services.signer.*;
 
 
 SigningProfile signingProfile = SigningProfile.Builder.create(this, "SigningProfile")
         .platform(Platform.AWS_LAMBDA_SHA384_ECDSA)
         .build();
 
 CodeSigningConfig codeSigningConfig = CodeSigningConfig.Builder.create(this, "CodeSigningConfig")
         .signingProfiles(List.of(signingProfile))
         .build();
 
 Function.Builder.create(this, "Function")
         .codeSigningConfig(codeSigningConfig)
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 

Runtime updates

Lambda runtime management controls help reduce the risk of impact to your workloads in the rare event of a runtime version incompatibility. For more information, see Runtime management controls

 Function.Builder.create(this, "Lambda")
         .runtimeManagementMode(RuntimeManagementMode.AUTO)
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 

If you want to set the "Manual" setting, use the ARN of the runtime version as the argument.

 Function.Builder.create(this, "Lambda")
         .runtimeManagementMode(RuntimeManagementMode.manual("runtimeVersion-arn"))
         .runtime(Runtime.NODEJS_18_X)
         .handler("index.handler")
         .code(Code.fromAsset(join(__dirname, "lambda-handler")))
         .build();
 

Exclude Patterns for Assets

When using lambda.Code.fromAsset(path), the exclude property allows you to ignore particular files for assets by providing patterns for file paths to exclude. Note that this has no effect on assets bundled using the bundling property.

The ignoreMode property can be used with the exclude property to specify the file paths to ignore based on the .gitignore specification or the .dockerignore specification. The default behavior is to ignore file paths based on simple glob patterns.

 Function.Builder.create(this, "Function")
         .code(Code.fromAsset(join(__dirname, "my-python-handler"), AssetOptions.builder()
                 .exclude(List.of("*.ignore"))
                 .ignoreMode(IgnoreMode.DOCKER)
                 .build()))
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .build();
 

You can also include only certain files by using negation patterns.

 Function.Builder.create(this, "Function")
         .code(Code.fromAsset(join(__dirname, "my-python-handler"), AssetOptions.builder()
                 .exclude(List.of("*", "!index.py"))
                 .build()))
         .runtime(Runtime.PYTHON_3_9)
         .handler("index.handler")
         .build();