Lambda event source mappings


If you want to send data to a target other than a Lambda function or enrich the data before sending it, see Amazon EventBridge Pipes.

An event source mapping is a Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don't invoke Lambda functions directly. This page describes the services for which Lambda provides event source mappings and how to fine-tune batching behavior.

An event source mapping uses permissions in the function's execution role to read and manage items in the event source. Permissions, event structure, settings, and polling behavior vary by event source. For more information, see the linked topic for the service that you use as an event source.

To manage an event source with the AWS Command Line Interface (AWS CLI) or an AWS SDK, you can use the following API operations:

  • CreateEventSourceMapping

  • ListEventSourceMappings

  • GetEventSourceMapping

  • UpdateEventSourceMapping

  • DeleteEventSourceMapping

Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center.

Creating an event source mapping

To create a mapping between an event source and a Lambda function, create a trigger in the console or use the create-event-source-mapping command.

To add permissions and create a trigger
  1. Add the required permissions to your execution role. Some services, such as Amazon SQS, have an AWS managed policy that includes the permissions that Lambda needs to read from your event source.

  2. Open the Functions page of the Lambda console.

  3. Choose the name of a function.

  4. Under Function overview, choose Add trigger.

  5. Choose a trigger type.

  6. Configure the required options, and then choose Add.
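Step 1 above can also be done from the command line. As a sketch, the following attaches the AWS managed policy for Amazon SQS event sources to an execution role; the role name my-function-role is a placeholder for your function's actual execution role.

```shell
# Attach the AWS managed policy that grants the permissions Lambda needs
# to read from an Amazon SQS event source.
# "my-function-role" is a placeholder role name.
aws iam attach-role-policy \
    --role-name my-function-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaSQSQueueExecutionRole
```

Other event sources have similar managed policies, such as AWSLambdaKinesisExecutionRole and AWSLambdaDynamoDBExecutionRole.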

To create an event source mapping (AWS CLI)

The following example uses the AWS CLI to map a function named my-function to a DynamoDB stream specified by its Amazon Resource Name (ARN), with a batch size of 500.

aws lambda create-event-source-mapping \
    --function-name my-function \
    --batch-size 500 \
    --maximum-batching-window-in-seconds 5 \
    --starting-position LATEST \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table/stream/2023-06-10T19:26:16.525

You should see the following output:

{
    "UUID": "14e0db71-5d35-4eb5-b481-8945cf9d10c2",
    "BatchSize": 500,
    "MaximumBatchingWindowInSeconds": 5,
    "ParallelizationFactor": 1,
    "EventSourceArn": "arn:aws:dynamodb:us-east-2:123456789012:table/my-table/stream/2023-06-10T19:26:16.525",
    "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function",
    "LastModified": 1560209851.963,
    "LastProcessingResult": "No records processed",
    "State": "Creating",
    "StateTransitionReason": "User action",
    "DestinationConfig": {},
    "MaximumRecordAgeInSeconds": 604800,
    "BisectBatchOnFunctionError": false,
    "MaximumRetryAttempts": 10000
}
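Because the mapping is returned in the Creating state, you may want to confirm that it becomes active before sending traffic. One way to check, using the UUID from the output above:

```shell
# Check the mapping's state; it transitions from "Creating" to "Enabled"
# once Lambda begins polling the event source.
aws lambda get-event-source-mapping \
    --uuid 14e0db71-5d35-4eb5-b481-8945cf9d10c2 \
    --query 'State'
```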

Updating an event source mapping

To update an event source mapping (console)
  1. Open the Functions page of the Lambda console.

  2. Choose a function.

  3. Choose Configuration, then choose Triggers.

  4. Select the trigger and then choose Edit.

To update an event source mapping (AWS CLI)

Use the update-event-source-mapping command. The following example configures maximum concurrency for an Amazon SQS event source.

aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --scaling-config '{"MaximumConcurrency":5}'
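If you don't know the UUID of the mapping you want to update, you can look it up by function. A sketch using the example function name from earlier:

```shell
# List all event source mappings for a function to find the UUID that
# update-event-source-mapping and delete-event-source-mapping require.
aws lambda list-event-source-mappings \
    --function-name my-function \
    --query 'EventSourceMappings[].{UUID:UUID,Source:EventSourceArn,State:State}'
```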

Deleting an event source mapping

When you delete a function, Lambda doesn't delete associated event source mappings. You can delete event source mappings in the console or using the DeleteEventSourceMapping API action.

To delete event source mappings (console)
  1. Open the Event source mappings page of the Lambda console.

  2. Select the event source mappings that you want to delete.

  3. In the Delete event source mappings dialog box, enter delete, and then choose Delete.

To delete an event source mapping (AWS CLI)

Use the delete-event-source-mapping command.

aws lambda delete-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-11111EXAMPLE

Batching behavior

Event source mappings read items from a target event source. By default, an event source mapping batches records together into a single payload that Lambda sends to your function. To fine-tune batching behavior, you can configure a batching window (MaximumBatchingWindowInSeconds) and a batch size (BatchSize). A batching window is the maximum amount of time to gather records into a single payload. A batch size is the maximum number of records in a single batch. Lambda invokes your function when one of the following three criteria is met:

  • The batching window reaches its maximum value. Default batching window behavior varies depending on the specific event source.

    • For Kinesis, DynamoDB, and Amazon SQS event sources: The default batching window is 0 seconds. This means that Lambda sends batches to your function only when either the batch size is met or the payload size limit is reached. To set a batching window, configure MaximumBatchingWindowInSeconds. You can set this parameter to any value from 0 to 300 seconds in increments of 1 second. If you configure a batching window, the next window begins as soon as the previous function invocation completes.

    • For Amazon MSK, self-managed Apache Kafka, Amazon MQ, and Amazon DocumentDB event sources: The default batching window is 500 ms. You can configure MaximumBatchingWindowInSeconds to any value from 0 to 300 seconds in 1-second increments. A batching window begins as soon as the first record arrives.


      Because you can change MaximumBatchingWindowInSeconds only in 1-second increments, you cannot revert to the 500 ms default batching window after you have changed it. To restore the default batching window, you must create a new event source mapping.

  • The batch size is met. The minimum batch size is 1. The default and maximum batch size depend on the event source. For details about these values, see the BatchSize specification for the CreateEventSourceMapping API operation.

  • The payload size reaches 6 MB. You cannot modify this limit.

The following diagram illustrates these three conditions. Suppose a batching window begins at t = 7 seconds. In the first scenario, the batching window reaches its 40-second maximum at t = 47 seconds after accumulating 5 records. In the second scenario, the batch size reaches 10 before the batching window expires, so the batching window ends early. In the third scenario, the maximum payload size is reached before the batching window expires, so the batching window ends early.

        Diagram: a batching window expires when one of three criteria is met — the batching window reaches its maximum value, the batch size is met, or the payload size reaches 6 MB.

The following example shows an event source mapping that reads from a Kinesis stream. If a batch of events fails all processing attempts, the event source mapping sends details about the batch to an SQS queue.

        Diagram: an event source mapping reads from a Kinesis stream and sends details about failed batches to an SQS queue.

The event batch is the event that Lambda sends to the function. It is a batch of records or messages compiled from the items that the event source mapping reads up until the current batching window expires.

For Kinesis and DynamoDB streams, an event source mapping creates an iterator for each shard in the stream and processes items in each shard in order. You can configure the event source mapping to read only new items that appear in the stream, or to start with older items. Processed items aren't removed from the stream, and other functions or consumers can process them.
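The choice between reading only new items and starting with older items is controlled by the starting position. As a sketch, the following variant of the earlier create command reads from the oldest record still in the stream's retention window rather than only records that arrive after the mapping is created (the stream ARN reuses the example account and stream name from this page):

```shell
# TRIM_HORIZON starts at the oldest available record in each shard;
# LATEST (used in the earlier example) reads only newly arriving records.
aws lambda create-event-source-mapping \
    --function-name my-function \
    --starting-position TRIM_HORIZON \
    --event-source-arn arn:aws:kinesis:us-east-2:123456789012:stream/mystream
```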

Lambda doesn't wait for any configured Lambda extensions to complete before sending the next batch for processing. In other words, your extensions may continue to run as Lambda processes the next batch of records. This can cause throttling issues if you breach any of your account's concurrency settings or limits. To detect whether this is a potential issue, monitor your functions and check whether you're seeing higher concurrency metrics than expected for your event source mapping. Because of the short time between invocations, Lambda may briefly report higher concurrency usage than the number of shards. This can be true even for Lambda functions without extensions.

By default, if your function returns an error, the event source mapping reprocesses the entire batch until the function succeeds, or the items in the batch expire. To ensure in-order processing, the event source mapping pauses processing for the affected shard until the error is resolved. You can configure the event source mapping to discard old events or process multiple batches in parallel. If you process multiple batches in parallel, in-order processing is still guaranteed for each partition key, but the event source mapping simultaneously processes multiple partition keys in the same shard.

For stream sources (DynamoDB and Kinesis), you can configure the maximum number of times that Lambda retries when your function returns an error. Service errors or throttles where the batch does not reach your function do not count toward retry attempts.
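These error-handling settings can be applied with update-event-source-mapping. A sketch using the placeholder UUID from the earlier examples:

```shell
# Limit retries to 2 and discard records older than one hour.
# Bisecting a failing batch splits it in half on error, which isolates
# a single bad record after repeated failures.
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --maximum-retry-attempts 2 \
    --maximum-record-age-in-seconds 3600 \
    --bisect-batch-on-function-error
```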

You can also configure the event source mapping to send an invocation record to another service when it discards an event batch. Lambda supports the following destinations for event source mappings.

  • Amazon SQS – An SQS queue.

  • Amazon SNS – An SNS topic.

The invocation record contains details about the failed event batch in JSON format.
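A failure destination can be attached with update-event-source-mapping. In the following sketch, the queue ARN is a placeholder; the function's execution role also needs sqs:SendMessage permission on that queue.

```shell
# Send details about discarded event batches to an SQS queue.
# "dlq-for-my-function" is a placeholder queue name.
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-east-2:123456789012:dlq-for-my-function"}}'
```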

The following example shows an invocation record for a Kinesis stream.

Example invocation record
{
    "requestContext": {
        "requestId": "c9b8fa9f-5a7f-xmpl-af9c-0c604cde93a5",
        "functionArn": "arn:aws:lambda:us-east-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KinesisBatchInfo": {
        "shardId": "shardId-000000000001",
        "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
        "endSequenceNumber": "49601189658422359378836298522902373528957594348623495186",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:38:04.835Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:38:05.580Z",
        "batchSize": 500,
        "streamArn": "arn:aws:kinesis:us-east-2:123456789012:stream/mystream"
    }
}

Lambda also supports in-order processing for FIFO (first-in, first-out) queues, scaling up to the number of active message groups. For standard queues, items aren't necessarily processed in order. Lambda scales up to process a standard queue as quickly as possible. When an error occurs, Lambda returns batches to the queue as individual items and might process them in a different grouping than the original batch. Occasionally, the event source mapping might receive the same item from the queue twice, even if no function error occurred. Lambda deletes items from the queue after they're processed successfully. You can configure the source queue to send items to a dead-letter queue or a destination if Lambda can't process them.
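For comparison with the stream example earlier, mapping a function to an SQS queue needs no starting position, because consumed messages are removed from the queue. A sketch, with a placeholder queue name:

```shell
# Map the function to a standard SQS queue. For SQS, batch size can be
# 1-10 without a batching window, or up to 10,000 with one configured.
aws lambda create-event-source-mapping \
    --function-name my-function \
    --batch-size 10 \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue
```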

For information about services that invoke Lambda functions directly, see Using AWS Lambda with other services.

Configuring destinations for event source mapping invocations

To retain records of failed event source mapping invocations, add a destination to your function's event source mapping. Configuring destinations for event source mapping invocations is supported for Kinesis, DynamoDB, and Kafka-based event sources only. Each record sent to the destination is a JSON document with details about the invocation. Like error handling settings, you can configure destinations on a function, function version, or alias.


For event source mapping invocations, you can retain records for failed invocations only. For other asynchronous invocations, you can retain records for both successful and failed invocations. For more information, see Configuring destinations for asynchronous invocation.

You can configure any Amazon SNS topic or any Amazon SQS queue as a destination. For these destination types, Lambda sends the record metadata to the destination. For Kafka-based event sources only, you can also choose an Amazon S3 bucket as the destination. If you specify an S3 bucket, Lambda sends the entire invocation record along with the metadata to the destination.
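For a Kafka-based event source, pointing the on-failure destination at an S3 bucket looks like the following sketch; the bucket name is a placeholder, and the execution role needs write access (for example, s3:PutObject) on the bucket.

```shell
# For Kafka event sources only, an S3 bucket can be the on-failure
# destination; Lambda then stores the full invocation record there.
# "my-failed-invocations-bucket" is a placeholder bucket name.
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:s3:::my-failed-invocations-bucket"}}'
```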

The following table summarizes the types of supported destinations for event source mapping invocations. For Lambda to successfully send records to your chosen destination, ensure that your function's execution role also contains the relevant permissions. The table also describes how each destination type receives the JSON invocation record.

Destination type: Amazon SQS queue

  • Supported event sources: Kinesis, DynamoDB, self-managed Apache Kafka and Amazon MSK

  • Required permissions: sqs:SendMessage

  • Destination-specific JSON format: Lambda passes the invocation record metadata as the Message to the destination.

Destination type: Amazon SNS topic

  • Supported event sources: Kinesis, DynamoDB, self-managed Apache Kafka and Amazon MSK

  • Required permissions: sns:Publish

  • Destination-specific JSON format: Lambda passes the invocation record metadata as the Message to the destination.

Destination type: Amazon S3 bucket

  • Supported event sources: self-managed Apache Kafka and Amazon MSK

  • Required permissions: s3:PutObject and s3:ListBucket

  • Destination-specific JSON format: Lambda stores the invocation record along with its metadata at the destination.

The following example shows what Lambda sends to an SQS queue or SNS topic for a failed Kinesis event source invocation. Since Lambda sends only the metadata for these destination types, use the streamArn, shardId, startSequenceNumber, and endSequenceNumber fields to obtain the full original record.

{
    "requestContext": {
        "requestId": "c9b8fa9f-5a7f-xmpl-af9c-0c604cde93a5",
        "functionArn": "arn:aws:lambda:us-east-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KinesisBatchInfo": {
        "shardId": "shardId-000000000001",
        "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
        "endSequenceNumber": "49601189658422359378836298522902373528957594348623495186",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:38:04.835Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:38:05.580Z",
        "batchSize": 500,
        "streamArn": "arn:aws:kinesis:us-east-2:123456789012:stream/mystream"
    }
}

For a DynamoDB example, see Error handling. For Kafka examples, see on-failure destinations for self-managed Apache Kafka or on-failure destinations for Amazon MSK.