EventSourceMappingProps

class aws_cdk.aws_lambda.EventSourceMappingProps(*, batch_size=None, bisect_batch_on_error=None, enabled=None, event_source_arn=None, filter_encryption=None, filters=None, kafka_bootstrap_servers=None, kafka_consumer_group_id=None, kafka_topic=None, max_batching_window=None, max_concurrency=None, max_record_age=None, on_failure=None, parallelization_factor=None, report_batch_item_failures=None, retry_attempts=None, source_access_configurations=None, starting_position=None, starting_position_timestamp=None, support_s3_on_failure_destination=None, tumbling_window=None, target)

Bases: EventSourceMappingOptions

Properties for declaring a new event source mapping.

Parameters:
  • batch_size (Union[int, float, None]) – The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. Your function receives an event with all the retrieved records. Valid Range: Minimum value of 1. Maximum value of 10000. Default: - 100 records for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK; 10 messages for Amazon SQS. For standard SQS queues, the maximum is 10,000; for FIFO SQS queues, the maximum is 10.

  • bisect_batch_on_error (Optional[bool]) – If the function returns an error, split the batch in two and retry. Default: false

  • enabled (Optional[bool]) – Set to false to disable the event source upon creation. Default: true

  • event_source_arn (Optional[str]) – The Amazon Resource Name (ARN) of the event source. Any record added to this stream can invoke the Lambda function. Default: - not set if using a self-managed Kafka cluster; an error is thrown otherwise

  • filter_encryption (Optional[IKey]) – Add a customer managed KMS key to encrypt the filter criteria. Default: - none

  • filters (Optional[Sequence[Mapping[str, Any]]]) – Add filter criteria to Event Source. Default: - none

  • kafka_bootstrap_servers (Optional[Sequence[str]]) – A list of host and port pairs that are the addresses of the Kafka brokers in a self-managed “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself. They are in the format abc.example.com:9096. Default: - none

  • kafka_consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 characters and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’. For more information, see Customizable consumer group ID. Default: - none

  • kafka_topic (Optional[str]) – The name of the Kafka topic. Default: - no topic

  • max_batching_window (Optional[Duration]) – The maximum amount of time to gather records before invoking the function. Maximum of Duration.minutes(5) Default: Duration.seconds(0)

  • max_concurrency (Union[int, float, None]) – The maximum concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. Default: - No specific limit.

  • max_record_age (Optional[Duration]) – The maximum age of a record that Lambda sends to a function for processing. Valid Range: - Minimum value of 60 seconds - Maximum value of 7 days Default: - infinite or until the record expires.

  • on_failure (Optional[IEventSourceDlq]) – An Amazon SQS queue or Amazon SNS topic destination for discarded records. Default: discarded records are ignored

  • parallelization_factor (Union[int, float, None]) – The number of batches to process from each shard concurrently. Valid Range: - Minimum value of 1 - Maximum value of 10 Default: 1

  • report_batch_item_failures (Optional[bool]) – Allow functions to return partially successful responses for a batch of records. Default: false

  • retry_attempts (Union[int, float, None]) – The maximum number of times to retry when the function returns an error. Set to None if you want Lambda to keep retrying infinitely or until the record expires. Valid Range: - Minimum value of 0 - Maximum value of 10000 Default: - infinite or until the record expires.

  • source_access_configurations (Optional[Sequence[Union[SourceAccessConfiguration, Dict[str, Any]]]]) – Specific settings like the authentication protocol or the VPC components to secure access to your event source. Default: - none

  • starting_position (Optional[StartingPosition]) – The position in the DynamoDB, Kinesis or MSK stream where AWS Lambda should start reading. Default: - no starting position

  • starting_position_timestamp (Union[int, float, None]) – The time from which to start reading, in Unix time seconds. Default: - no timestamp

  • support_s3_on_failure_destination (Optional[bool]) – Whether an Amazon S3 on-failure destination is supported. Currently only Amazon MSK and self-managed Kafka event sources support S3 on-failure destinations. Default: false

  • tumbling_window (Optional[Duration]) – The size of the tumbling windows to group records sent to DynamoDB or Kinesis. Default: - None

  • target (IFunction) – The target AWS Lambda function.

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk as cdk
from aws_cdk import aws_kms as kms
from aws_cdk import aws_lambda as lambda_

# event_source_dlq: lambda_.IEventSourceDlq
# filters: Any
# function_: lambda_.Function
# key: kms.Key
# source_access_configuration_type: lambda_.SourceAccessConfigurationType

event_source_mapping_props = lambda_.EventSourceMappingProps(
    target=function_,

    # the properties below are optional
    batch_size=123,
    bisect_batch_on_error=False,
    enabled=False,
    event_source_arn="eventSourceArn",
    filter_encryption=key,
    filters=[{
        "filters_key": filters
    }],
    kafka_bootstrap_servers=["kafkaBootstrapServers"],
    kafka_consumer_group_id="kafkaConsumerGroupId",
    kafka_topic="kafkaTopic",
    max_batching_window=cdk.Duration.minutes(5),
    max_concurrency=123,
    max_record_age=cdk.Duration.minutes(30),
    on_failure=event_source_dlq,
    parallelization_factor=123,
    report_batch_item_failures=False,
    retry_attempts=123,
    source_access_configurations=[lambda_.SourceAccessConfiguration(
        type=source_access_configuration_type,
        uri="uri"
    )],
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    starting_position_timestamp=123,
    support_s3_on_failure_destination=False,
    tumbling_window=cdk.Duration.minutes(15)
)

Attributes

batch_size

The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.

Your function receives an event with all the retrieved records.

Valid Range: Minimum value of 1. Maximum value of 10000.

Default:

  • 100 records for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK.

  • 10 messages for Amazon SQS. For standard SQS queues, the maximum is 10,000; for FIFO SQS queues, the maximum is 10.

bisect_batch_on_error

If the function returns an error, split the batch in two and retry.

Default:

false

enabled

Set to false to disable the event source upon creation.

Default:

true

event_source_arn

The Amazon Resource Name (ARN) of the event source.

Any record added to this stream can invoke the Lambda function.

Default:
  • not set if using a self-managed Kafka cluster; an error is thrown otherwise

filter_encryption

Add a customer managed KMS key to encrypt the filter criteria.

Default:
  • none

See:

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk

filters

Add filter criteria to Event Source.

Default:
  • none

See:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html
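As a sketch of the shape the filters property expects: each entry is a mapping with a pattern key holding a JSON-encoded event-filter pattern (this mirrors what the FilterCriteria.filter() helper produces; the field names body, status and the value "accepted" below are purely illustrative, not part of any real event source).

```python
import json

# Hypothetical filter: deliver only records whose JSON body has
# status == "accepted". Field names here are illustrative.
pattern = {"body": {"status": ["accepted"]}}

# The `filters` property takes a sequence of mappings; each entry
# carries the pattern as a JSON string under the "pattern" key.
filters = [{"pattern": json.dumps(pattern)}]
```

A list built this way can be passed directly as the filters keyword argument.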

kafka_bootstrap_servers

A list of host and port pairs that are the addresses of the Kafka brokers in a self-managed “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself.

They are in the format abc.example.com:9096.

Default:
  • none
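A small sanity check for the host:port format described above (the hostnames below are placeholders, not real brokers):

```python
# Hypothetical bootstrap server list in the host:port format
# described above (hostnames are placeholders).
kafka_bootstrap_servers = ["b-1.example.com:9096", "b-2.example.com:9096"]

def is_host_port(server: str) -> bool:
    """Return True when the string looks like host:port."""
    host, sep, port = server.rpartition(":")
    return bool(host) and sep == ":" and port.isdigit()

all_valid = all(is_host_port(s) for s in kafka_bootstrap_servers)
```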

kafka_consumer_group_id

The identifier for the Kafka consumer group to join.

The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 characters and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’. For more information, see Customizable consumer group ID.

Default:
  • none

See:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-eventsourcemapping-selfmanagedkafkaeventsourceconfig.html

kafka_topic

The name of the Kafka topic.

Default:
  • no topic

max_batching_window

The maximum amount of time to gather records before invoking the function.

Maximum of Duration.minutes(5)

Default:

Duration.seconds(0)

max_concurrency

The maximum concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke.

Default:
  • No specific limit.

See:

https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-max-concurrency

Valid Range: Minimum value of 2. Maximum value of 1000.

max_record_age

The maximum age of a record that Lambda sends to a function for processing.

Valid Range:

  • Minimum value of 60 seconds

  • Maximum value of 7 days

Default:
  • infinite or until the record expires.

on_failure

An Amazon SQS queue or Amazon SNS topic destination for discarded records.

Default:

discarded records are ignored

parallelization_factor

The number of batches to process from each shard concurrently.

Valid Range:

  • Minimum value of 1

  • Maximum value of 10

Default:

1
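For stream sources, the concurrency per stream is roughly the shard count multiplied by this factor. A sketch of that arithmetic under the standard Lambda scaling model (the shard count is an arbitrary example):

```python
# With a Kinesis or DynamoDB stream source, Lambda processes up to
# parallelization_factor batches per shard concurrently.
shards = 4                    # example shard count
parallelization_factor = 10   # the maximum allowed value

concurrent_batches = shards * parallelization_factor
```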

report_batch_item_failures

Allow functions to return partially successful responses for a batch of records.

Default:

false

See:

https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-ddb-batchfailurereporting

retry_attempts

The maximum number of times to retry when the function returns an error.

Set to None if you want Lambda to keep retrying infinitely or until the record expires.

Valid Range:

  • Minimum value of 0

  • Maximum value of 10000

Default:
  • infinite or until the record expires.

source_access_configurations

Specific settings like the authentication protocol or the VPC components to secure access to your event source.

Default:
  • none

See:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-eventsourcemapping-sourceaccessconfiguration.html

starting_position

The position in the DynamoDB, Kinesis or MSK stream where AWS Lambda should start reading.

Default:
  • no starting position

See:

https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html#Kinesis-GetShardIterator-request-ShardIteratorType

starting_position_timestamp

The time from which to start reading, in Unix time seconds.

Default:
  • no timestamp
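Since this property takes plain Unix time seconds, a sketch of deriving such a value from a calendar date (the date is arbitrary):

```python
from datetime import datetime, timezone

# An arbitrary cut-over date; starting_position_timestamp expects
# whole Unix time seconds, interpreted in UTC.
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
starting_position_timestamp = int(start.timestamp())
```

In practice this value is typically paired with starting_position set to StartingPosition.AT_TIMESTAMP.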

support_s3_on_failure_destination

Whether an Amazon S3 on-failure destination is supported.

Currently only Amazon MSK and self-managed Kafka event sources support S3 on-failure destinations.

Default:

false

target

The target AWS Lambda function.

tumbling_window

The size of the tumbling windows to group records sent to DynamoDB or Kinesis.

Default:
  • None

See:

https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-ddb-windows

Valid Range: 0 - 15 minutes