DestinationS3BackupProps¶
class aws_cdk.aws_kinesisfirehose_destinations.DestinationS3BackupProps(*, buffering_interval=None, buffering_size=None, compression=None, data_output_prefix=None, encryption_key=None, error_output_prefix=None, bucket=None, logging=None, log_group=None, mode=None)¶
Bases: aws_cdk.aws_kinesisfirehose_destinations.CommonDestinationS3Props
(experimental) Properties for defining an S3 backup destination.
S3 backup is available for all destinations, regardless of whether the final destination is S3 or not.
- Parameters
buffering_interval (Optional[Duration]) – (experimental) The length of time that Firehose buffers incoming data before delivering it to the S3 bucket. Minimum: Duration.seconds(60) Maximum: Duration.seconds(900) Default: Duration.seconds(300)
buffering_size (Optional[Size]) – (experimental) The size of the buffer that Kinesis Data Firehose uses for incoming data before delivering it to the S3 bucket. Minimum: Size.mebibytes(1) Maximum: Size.mebibytes(128) Default: Size.mebibytes(5)
compression (Optional[Compression]) – (experimental) The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket. Default: UNCOMPRESSED
data_output_prefix (Optional[str]) – (experimental) A prefix that Kinesis Data Firehose evaluates and adds to records before writing them to S3. This prefix appears immediately following the bucket name. Default: “YYYY/MM/DD/HH”
encryption_key (Optional[IKey]) – (experimental) The AWS KMS key used to encrypt the data that it delivers to your Amazon S3 bucket. Default: Data is not encrypted.
error_output_prefix (Optional[str]) – (experimental) A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. Default: “YYYY/MM/DD/HH”
bucket (Optional[IBucket]) – (experimental) The S3 bucket that will store data and failed records. Default: If mode is set to BackupMode.ALL or BackupMode.FAILED, a bucket will be created for you.
logging (Optional[bool]) – (experimental) If true, log errors when data transformation or data delivery fails. If logGroup is provided, this will be implicitly set to true. Default: true - errors are logged.
log_group (Optional[ILogGroup]) – (experimental) The CloudWatch log group where log streams will be created to hold error logs. Default: If logging is set to true, a log group will be created for you.
mode (Optional[BackupMode]) – (experimental) Indicates the mode by which incoming records should be backed up to S3, if any. If bucket is provided, this will be implicitly set to BackupMode.ALL. Default: If bucket is provided, the default will be BackupMode.ALL. Otherwise, source records are not backed up to S3.
- Stability
experimental
- ExampleMetadata
lit=../aws-kinesisfirehose-destinations/test/integ.s3-bucket.lit.ts infused
Example:
import os.path as path

import aws_cdk.aws_kinesisfirehose as firehose
import aws_cdk.aws_kms as kms
import aws_cdk.aws_lambda_nodejs as lambdanodejs
import aws_cdk.aws_logs as logs
import aws_cdk.aws_s3 as s3
import aws_cdk.core as cdk
import aws_cdk.aws_kinesisfirehose_destinations as destinations

app = cdk.App()

stack = cdk.Stack(app, "aws-cdk-firehose-delivery-stream-s3-all-properties")

bucket = s3.Bucket(stack, "Bucket",
    removal_policy=cdk.RemovalPolicy.DESTROY,
    auto_delete_objects=True
)

backup_bucket = s3.Bucket(stack, "BackupBucket",
    removal_policy=cdk.RemovalPolicy.DESTROY,
    auto_delete_objects=True
)

log_group = logs.LogGroup(stack, "LogGroup",
    removal_policy=cdk.RemovalPolicy.DESTROY
)

# Lambda function used to transform records before they are delivered.
data_processor_function = lambdanodejs.NodejsFunction(stack, "DataProcessorFunction",
    entry=path.join(path.dirname(__file__), "lambda-data-processor.js"),
    timeout=cdk.Duration.minutes(1)
)

processor = firehose.LambdaFunctionProcessor(data_processor_function,
    buffer_interval=cdk.Duration.seconds(60),
    buffer_size=cdk.Size.mebibytes(1),
    retries=1
)

key = kms.Key(stack, "Key",
    removal_policy=cdk.RemovalPolicy.DESTROY
)

backup_key = kms.Key(stack, "BackupKey",
    removal_policy=cdk.RemovalPolicy.DESTROY
)

# Primary S3 destination, with a full backup of source records to a second bucket.
firehose.DeliveryStream(stack, "Delivery Stream",
    destinations=[destinations.S3Bucket(bucket,
        logging=True,
        log_group=log_group,
        processor=processor,
        compression=destinations.Compression.GZIP,
        data_output_prefix="regularPrefix",
        error_output_prefix="errorPrefix",
        buffering_interval=cdk.Duration.seconds(60),
        buffering_size=cdk.Size.mebibytes(1),
        encryption_key=key,
        s3_backup=destinations.DestinationS3BackupProps(
            mode=destinations.BackupMode.ALL,
            bucket=backup_bucket,
            compression=destinations.Compression.ZIP,
            data_output_prefix="backupPrefix",
            error_output_prefix="backupErrorPrefix",
            buffering_interval=cdk.Duration.seconds(60),
            buffering_size=cdk.Size.mebibytes(1),
            encryption_key=backup_key
        )
    )]
)

app.synth()
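A much smaller configuration is also possible. The sketch below is illustrative only (it is not part of the integration test above): it backs up all source records and relies on the documented defaults, so no backup bucket is supplied and one is created automatically.

import aws_cdk.aws_kinesisfirehose as firehose
import aws_cdk.aws_s3 as s3
import aws_cdk.core as cdk
import aws_cdk.aws_kinesisfirehose_destinations as destinations

app = cdk.App()
stack = cdk.Stack(app, "minimal-s3-backup-example")  # hypothetical stack name
bucket = s3.Bucket(stack, "Bucket")

# Back up every source record; since no backup bucket is supplied, the
# construct creates one (see the `bucket` default documented below).
firehose.DeliveryStream(stack, "Delivery Stream",
    destinations=[destinations.S3Bucket(bucket,
        s3_backup=destinations.DestinationS3BackupProps(
            mode=destinations.BackupMode.ALL
        )
    )]
)

app.synth()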
Attributes
bucket¶
(experimental) The S3 bucket that will store data and failed records.
- Default
If mode is set to BackupMode.ALL or BackupMode.FAILED, a bucket will be created for you.
- Stability
experimental
- Return type
Optional[IBucket]
buffering_interval¶
(experimental) The length of time that Firehose buffers incoming data before delivering it to the S3 bucket.
Minimum: Duration.seconds(60) Maximum: Duration.seconds(900)
- Default
Duration.seconds(300)
- Stability
experimental
- Return type
Optional[Duration]
buffering_size¶
(experimental) The size of the buffer that Kinesis Data Firehose uses for incoming data before delivering it to the S3 bucket.
Minimum: Size.mebibytes(1) Maximum: Size.mebibytes(128)
- Default
Size.mebibytes(5)
- Stability
experimental
- Return type
Optional[Size]
compression¶
(experimental) The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
- Default
UNCOMPRESSED
- Stability
experimental
- Return type
Optional[Compression]
data_output_prefix¶
(experimental) A prefix that Kinesis Data Firehose evaluates and adds to records before writing them to S3.
This prefix appears immediately following the bucket name.
- Default
“YYYY/MM/DD/HH”
- See
https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
- Stability
experimental
- Return type
Optional[str]
encryption_key¶
(experimental) The AWS KMS key used to encrypt the data that it delivers to your Amazon S3 bucket.
- Default
Data is not encrypted.
- Stability
experimental
- Return type
Optional[IKey]
error_output_prefix¶
(experimental) A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3.
This prefix appears immediately following the bucket name.
- Default
“YYYY/MM/DD/HH”
- See
https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
- Stability
experimental
- Return type
Optional[str]
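As an illustrative sketch (not from the integration test above), both prefixes are passed to Kinesis Data Firehose verbatim, so the expression syntax described in the linked S3 prefixes documentation can be used; bucket and backup_bucket refer to the buckets defined in the example above.

destinations.S3Bucket(bucket,
    s3_backup=destinations.DestinationS3BackupProps(
        bucket=backup_bucket,
        # Expression-based prefixes from the S3 prefixes documentation;
        # verify the exact expressions against that page before relying on them.
        data_output_prefix="backup/!{timestamp:yyyy/MM/dd}/",
        error_output_prefix="backupError/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/"
    )
)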
log_group¶
(experimental) The CloudWatch log group where log streams will be created to hold error logs.
- Default
If logging is set to true, a log group will be created for you.
- Stability
experimental
- Return type
Optional[ILogGroup]
logging¶
(experimental) If true, log errors when data transformation or data delivery fails.
If logGroup is provided, this will be implicitly set to true.
- Default
true - errors are logged.
- Stability
experimental
- Return type
Optional[bool]
mode¶
(experimental) Indicates the mode by which incoming records should be backed up to S3, if any.
If bucket is provided, this will be implicitly set to BackupMode.ALL.
- Default
If bucket is provided, the default will be BackupMode.ALL. Otherwise, source records are not backed up to S3.
- Stability
experimental
- Return type
Optional[BackupMode]
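Because a provided bucket implies BackupMode.ALL, the mode can be left out when a backup bucket is supplied. A minimal sketch, reusing bucket and backup_bucket from the example above:

# Supplying a backup bucket implicitly sets the backup mode to BackupMode.ALL.
destinations.S3Bucket(bucket,
    s3_backup=destinations.DestinationS3BackupProps(
        bucket=backup_bucket
    )
)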