CfnApplication
- class aws_cdk.aws_kinesisanalytics.CfnApplication(scope, id, *, inputs, application_code=None, application_description=None, application_name=None)
Bases:
CfnResource
A CloudFormation AWS::KinesisAnalytics::Application.

The AWS::KinesisAnalytics::Application resource creates an Amazon Kinesis Data Analytics application. For more information, see the Amazon Kinesis Data Analytics Developer Guide.

- CloudformationResource:
AWS::KinesisAnalytics::Application
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

cfn_application = kinesisanalytics.CfnApplication(self, "MyCfnApplication",
    inputs=[kinesisanalytics.CfnApplication.InputProperty(
        input_schema=kinesisanalytics.CfnApplication.InputSchemaProperty(
            record_columns=[kinesisanalytics.CfnApplication.RecordColumnProperty(
                name="name",
                sql_type="sqlType",

                # the properties below are optional
                mapping="mapping"
            )],
            record_format=kinesisanalytics.CfnApplication.RecordFormatProperty(
                record_format_type="recordFormatType",

                # the properties below are optional
                mapping_parameters=kinesisanalytics.CfnApplication.MappingParametersProperty(
                    csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
                        record_column_delimiter="recordColumnDelimiter",
                        record_row_delimiter="recordRowDelimiter"
                    ),
                    json_mapping_parameters=kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
                        record_row_path="recordRowPath"
                    )
                )
            ),

            # the properties below are optional
            record_encoding="recordEncoding"
        ),
        name_prefix="namePrefix",

        # the properties below are optional
        input_parallelism=kinesisanalytics.CfnApplication.InputParallelismProperty(
            count=123
        ),
        input_processing_configuration=kinesisanalytics.CfnApplication.InputProcessingConfigurationProperty(
            input_lambda_processor=kinesisanalytics.CfnApplication.InputLambdaProcessorProperty(
                resource_arn="resourceArn",
                role_arn="roleArn"
            )
        ),
        kinesis_firehose_input=kinesisanalytics.CfnApplication.KinesisFirehoseInputProperty(
            resource_arn="resourceArn",
            role_arn="roleArn"
        ),
        kinesis_streams_input=kinesisanalytics.CfnApplication.KinesisStreamsInputProperty(
            resource_arn="resourceArn",
            role_arn="roleArn"
        )
    )],

    # the properties below are optional
    application_code="applicationCode",
    application_description="applicationDescription",
    application_name="applicationName"
)
Create a new AWS::KinesisAnalytics::Application.

- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Scoped id of the resource.
inputs (Union[IResolvable, Sequence[Union[InputProperty, Dict[str, Any], IResolvable]]]) – Use this parameter to configure the application input. You can configure your application to receive input from a single streaming source. In this configuration, you map this streaming source to an in-application stream that is created. Your application code can then query the in-application stream like a table (you can think of it as a constantly updating table). For the streaming source, you provide its Amazon Resource Name (ARN) and the format of data on the stream (for example, JSON, CSV, etc.). You also must provide an IAM role that Amazon Kinesis Analytics can assume to read this stream on your behalf. To create the in-application stream, you need to specify a schema to transform your data into a schematized version used in SQL. In the schema, you provide the necessary mapping of the data elements in the streaming source to record columns in the in-app stream.
application_code (Optional[str]) – One or more SQL statements that read input data, transform it, and generate output. For example, you can write a SQL statement that reads data from one in-application stream, generates a running average of the number of advertisement clicks by vendor, and inserts resulting rows into another in-application stream using pumps. For more information about the typical pattern, see Application Code. You can provide a series of SQL statements, where the output of one statement can be used as the input for the next statement. You store intermediate results by creating in-application streams and pumps. Note that the application code must create the streams with names specified in the Outputs. For example, if your Outputs defines output streams named ExampleOutputStream1 and ExampleOutputStream2, then your application code must create these streams.
application_description (Optional[str]) – Summary description of the application.
application_name (Optional[str]) – Name of your Amazon Kinesis Analytics application (for example, sample-app).
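For orientation, a purely illustrative sketch is shown below: a single Kinesis stream as input with a two-column CSV schema. The stream name, ARNs, account ID, and SQL snippet are hypothetical placeholders, not values prescribed by this reference.

import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Illustrative only; replace the ARNs, names, and SQL with your own values.
app = kinesisanalytics.CfnApplication(self, "ClickApp",
    application_name="click-aggregator",
    application_code="CREATE OR REPLACE STREAM ...",  # your SQL statements
    inputs=[kinesisanalytics.CfnApplication.InputProperty(
        name_prefix="SOURCE_SQL_STREAM",
        kinesis_streams_input=kinesisanalytics.CfnApplication.KinesisStreamsInputProperty(
            resource_arn="arn:aws:kinesis:us-east-1:111122223333:stream/clicks",
            role_arn="arn:aws:iam::111122223333:role/kinesis-analytics-read"
        ),
        input_schema=kinesisanalytics.CfnApplication.InputSchemaProperty(
            record_format=kinesisanalytics.CfnApplication.RecordFormatProperty(
                record_format_type="CSV",
                mapping_parameters=kinesisanalytics.CfnApplication.MappingParametersProperty(
                    csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
                        record_column_delimiter=",",
                        record_row_delimiter="\n"
                    )
                )
            ),
            record_columns=[
                kinesisanalytics.CfnApplication.RecordColumnProperty(name="vendor", sql_type="VARCHAR(16)"),
                kinesisanalytics.CfnApplication.RecordColumnProperty(name="clicks", sql_type="INTEGER")
            ]
        )
    )]
)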
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).

- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource)
- Return type:
None
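A minimal usage sketch (illustrative; log_stream stands for any other CfnResource defined in the same stack):

# Provision the other resource before this application.
cfn_application.add_depends_on(log_stream)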
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str)
value (Any)
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
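A short illustrative sketch (key names and values are arbitrary; any JSON-serializable value is accepted):

# Both entries end up under this resource's Metadata section in the template.
cfn_application.add_metadata("Owner", "analytics-team")
cfn_application.add_metadata("CostCenter", {"id": 42, "name": "streaming"})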
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).

If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.

To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.

For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")

would add the overrides

Example:

"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.

- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).

- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
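As an illustration, the two calls below set the same CloudFormation property on this resource (the value is a placeholder):

# Equivalent: add_property_override prefixes the path with "Properties." for you.
cfn_application.add_property_override("ApplicationDescription", "overridden description")
cfn_application.add_override("Properties.ApplicationDescription", "overridden description")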
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN).

- Parameters:
policy (Optional[RemovalPolicy])
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
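A hedged usage sketch, assuming the core module is available as aws_cdk.core:

import aws_cdk.core as cdk

# Keep the application in the account (and mirror the setting onto
# UpdateReplacePolicy by default) instead of deleting it.
cfn_application.apply_removal_policy(cdk.RemovalPolicy.RETAIN)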
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.

Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.

- Parameters:
attribute_name (str) – The name of the attribute.
- Return type:
Reference
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.

- Parameters:
key (str)
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
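For example, to pin the logical ID so it matches an ID from an existing template (the ID shown is hypothetical):

# The synthesized template will use this logical ID for the resource.
cfn_application.override_logical_id("LegacyAnalyticsApplication")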
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::KinesisAnalytics::Application'
- application_code
One or more SQL statements that read input data, transform it, and generate output.
For example, you can write a SQL statement that reads data from one in-application stream, generates a running average of the number of advertisement clicks by vendor, and inserts resulting rows into another in-application stream using pumps. For more information about the typical pattern, see Application Code.

You can provide a series of SQL statements, where the output of one statement can be used as the input for the next statement. You store intermediate results by creating in-application streams and pumps.

Note that the application code must create the streams with names specified in the Outputs. For example, if your Outputs defines output streams named ExampleOutputStream1 and ExampleOutputStream2, then your application code must create these streams.
- application_description
Summary description of the application.
- application_name
Name of your Amazon Kinesis Analytics application (for example, sample-app).
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
return:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- inputs
Use this parameter to configure the application input.
You can configure your application to receive input from a single streaming source. In this configuration, you map this streaming source to an in-application stream that is created. Your application code can then query the in-application stream like a table (you can think of it as a constantly updating table).
For the streaming source, you provide its Amazon Resource Name (ARN) and format of data on the stream (for example, JSON, CSV, etc.). You also must provide an IAM role that Amazon Kinesis Analytics can assume to read this stream on your behalf.
To create the in-application stream, you need to specify a schema to transform your data into a schematized version used in SQL. In the schema, you provide the necessary mapping of the data elements in the streaming source to record columns in the in-app stream.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).

- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- node
The construct tree node associated with this construct.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.

If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
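A small illustrative sketch of consuming ref, assuming the core module is available as aws_cdk.core:

import aws_cdk.core as cdk

# Surface the resolved Ref value of this resource as a stack output.
cdk.CfnOutput(self, "ApplicationRef", value=cfn_application.ref)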
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).

Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.

- Parameters:
x (Any)
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (IConstruct)
- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (Any)
- Return type:
bool
CSVMappingParametersProperty
- class CfnApplication.CSVMappingParametersProperty(*, record_column_delimiter, record_row_delimiter)
Bases:
object
Provides additional mapping information when the record format uses delimiters, such as CSV.
For example, the following sample records use CSV format, where the records use the '\n' as the row delimiter and a comma (",") as the column delimiter:
"name1", "address1"
"name2", "address2"
- Parameters:
record_column_delimiter (str) – Column delimiter. For example, in a CSV format, a comma (",") is the typical column delimiter.
record_row_delimiter (str) – Row delimiter. For example, in a CSV format, '\n' is the typical row delimiter.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

c_sVMapping_parameters_property = kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
    record_column_delimiter="recordColumnDelimiter",
    record_row_delimiter="recordRowDelimiter"
)
Attributes
- record_column_delimiter
Column delimiter.
For example, in a CSV format, a comma (“,”) is the typical column delimiter.
- record_row_delimiter
Row delimiter.
For example, in a CSV format, '\n' is the typical row delimiter.
InputLambdaProcessorProperty
- class CfnApplication.InputLambdaProcessorProperty(*, resource_arn, role_arn)
Bases:
object
An object that contains the Amazon Resource Name (ARN) of the AWS Lambda function that is used to preprocess records in the stream, and the ARN of the IAM role that is used to access the AWS Lambda function.
- Parameters:
resource_arn (str) – The ARN of the AWS Lambda function that operates on records in the stream. Note: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda.
role_arn (str) – The ARN of the IAM role that is used to access the AWS Lambda function.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_lambda_processor_property = kinesisanalytics.CfnApplication.InputLambdaProcessorProperty(
    resource_arn="resourceArn",
    role_arn="roleArn"
)
Attributes
- resource_arn
The ARN of the AWS Lambda function that operates on records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
- role_arn
The ARN of the IAM role that is used to access the AWS Lambda function.
InputParallelismProperty
- class CfnApplication.InputParallelismProperty(*, count=None)
Bases:
object
Describes the number of in-application streams to create for a given streaming source.
For information about parallelism, see Configuring Application Input .
- Parameters:
count (Union[int, float, None]) – Number of in-application streams to create. For more information, see Limits.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_parallelism_property = kinesisanalytics.CfnApplication.InputParallelismProperty(
    count=123
)
Attributes
InputProcessingConfigurationProperty
- class CfnApplication.InputProcessingConfigurationProperty(*, input_lambda_processor=None)
Bases:
object
Provides a description of a processor that is used to preprocess the records in the stream before being processed by your application code.
Currently, the only input processor available is AWS Lambda .
- Parameters:
input_lambda_processor (Union[IResolvable, InputLambdaProcessorProperty, Dict[str, Any], None]) – The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_processing_configuration_property = kinesisanalytics.CfnApplication.InputProcessingConfigurationProperty(
    input_lambda_processor=kinesisanalytics.CfnApplication.InputLambdaProcessorProperty(
        resource_arn="resourceArn",
        role_arn="roleArn"
    )
)
Attributes
- input_lambda_processor
The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.
InputProperty
- class CfnApplication.InputProperty(*, input_schema, name_prefix, input_parallelism=None, input_processing_configuration=None, kinesis_firehose_input=None, kinesis_streams_input=None)
Bases:
object
When you configure the application input, you specify the streaming source, the in-application stream name that is created, and the mapping between the two.
For more information, see Configuring Application Input .
- Parameters:
input_schema (Union[IResolvable, InputSchemaProperty, Dict[str, Any]]) – Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. Also used to describe the format of the reference data source.
name_prefix (str) – Name prefix to use when creating an in-application stream. Suppose that you specify a prefix “MyInApplicationStream.” Amazon Kinesis Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with names “MyInApplicationStream_001,” “MyInApplicationStream_002,” and so on.
input_parallelism (Union[IResolvable, InputParallelismProperty, Dict[str, Any], None]) – Describes the number of in-application streams to create. Data from your source is routed to these in-application input streams. See Configuring Application Input.
input_processing_configuration (Union[IResolvable, InputProcessingConfigurationProperty, Dict[str, Any], None]) – The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application’s SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.
kinesis_firehose_input (Union[IResolvable, KinesisFirehoseInputProperty, Dict[str, Any], None]) – If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the delivery stream’s ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf. Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
kinesis_streams_input (Union[IResolvable, KinesisStreamsInputProperty, Dict[str, Any], None]) – If the streaming source is an Amazon Kinesis stream, identifies the stream’s Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf. Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_property = kinesisanalytics.CfnApplication.InputProperty(
    input_schema=kinesisanalytics.CfnApplication.InputSchemaProperty(
        record_columns=[kinesisanalytics.CfnApplication.RecordColumnProperty(
            name="name",
            sql_type="sqlType",

            # the properties below are optional
            mapping="mapping"
        )],
        record_format=kinesisanalytics.CfnApplication.RecordFormatProperty(
            record_format_type="recordFormatType",

            # the properties below are optional
            mapping_parameters=kinesisanalytics.CfnApplication.MappingParametersProperty(
                csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
                    record_column_delimiter="recordColumnDelimiter",
                    record_row_delimiter="recordRowDelimiter"
                ),
                json_mapping_parameters=kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
                    record_row_path="recordRowPath"
                )
            )
        ),

        # the properties below are optional
        record_encoding="recordEncoding"
    ),
    name_prefix="namePrefix",

    # the properties below are optional
    input_parallelism=kinesisanalytics.CfnApplication.InputParallelismProperty(
        count=123
    ),
    input_processing_configuration=kinesisanalytics.CfnApplication.InputProcessingConfigurationProperty(
        input_lambda_processor=kinesisanalytics.CfnApplication.InputLambdaProcessorProperty(
            resource_arn="resourceArn",
            role_arn="roleArn"
        )
    ),
    kinesis_firehose_input=kinesisanalytics.CfnApplication.KinesisFirehoseInputProperty(
        resource_arn="resourceArn",
        role_arn="roleArn"
    ),
    kinesis_streams_input=kinesisanalytics.CfnApplication.KinesisStreamsInputProperty(
        resource_arn="resourceArn",
        role_arn="roleArn"
    )
)
Attributes
- input_parallelism
Describes the number of in-application streams to create.
Data from your source is routed to these in-application input streams.
- input_processing_configuration
The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application’s SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor .
- input_schema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
Also used to describe the format of the reference data source.
- kinesis_firehose_input
If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the delivery stream’s ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
- kinesis_streams_input
If the streaming source is an Amazon Kinesis stream, identifies the stream’s Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
- name_prefix
Name prefix to use when creating an in-application stream.
Suppose that you specify a prefix “MyInApplicationStream.” Amazon Kinesis Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with names “MyInApplicationStream_001,” “MyInApplicationStream_002,” and so on.
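The generated example above uses placeholder strings; in practice the ARNs usually come from other constructs. A hedged sketch, assuming the aws_kinesis and aws_iam modules are also available:

import aws_cdk.aws_iam as iam
import aws_cdk.aws_kinesis as kinesis
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Illustrative wiring: feed an L2 Kinesis stream into the application input
# through a role that Kinesis Analytics can assume.
stream = kinesis.Stream(self, "SourceStream")
role = iam.Role(self, "ReadRole",
    assumed_by=iam.ServicePrincipal("kinesisanalytics.amazonaws.com")
)
stream.grant_read(role)

kinesis_input = kinesisanalytics.CfnApplication.KinesisStreamsInputProperty(
    resource_arn=stream.stream_arn,
    role_arn=role.role_arn
)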
InputSchemaProperty
- class CfnApplication.InputSchemaProperty(*, record_columns, record_format, record_encoding=None)
Bases:
object
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
Also used to describe the format of the reference data source.
- Parameters:
record_columns (Union[IResolvable, Sequence[Union[IResolvable, RecordColumnProperty, Dict[str, Any]]]]) – A list of RecordColumn objects.
record_format (Union[IResolvable, RecordFormatProperty, Dict[str, Any]]) – Specifies the format of the records on the streaming source.
record_encoding (Optional[str]) – Specifies the encoding of the records in the streaming source. For example, UTF-8.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_schema_property = kinesisanalytics.CfnApplication.InputSchemaProperty(
    record_columns=[kinesisanalytics.CfnApplication.RecordColumnProperty(
        name="name",
        sql_type="sqlType",

        # the properties below are optional
        mapping="mapping"
    )],
    record_format=kinesisanalytics.CfnApplication.RecordFormatProperty(
        record_format_type="recordFormatType",

        # the properties below are optional
        mapping_parameters=kinesisanalytics.CfnApplication.MappingParametersProperty(
            csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
                record_column_delimiter="recordColumnDelimiter",
                record_row_delimiter="recordRowDelimiter"
            ),
            json_mapping_parameters=kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
                record_row_path="recordRowPath"
            )
        )
    ),

    # the properties below are optional
    record_encoding="recordEncoding"
)
Attributes
- record_columns
A list of RecordColumn objects.
- record_encoding
Specifies the encoding of the records in the streaming source.
For example, UTF-8.
- record_format
Specifies the format of the records on the streaming source.
JSONMappingParametersProperty
- class CfnApplication.JSONMappingParametersProperty(*, record_row_path)
Bases:
object
Provides additional mapping information when JSON is the record format on the streaming source.
- Parameters:
record_row_path (str) – Path to the top-level parent that contains the records.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

j_sONMapping_parameters_property = kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
    record_row_path="recordRowPath"
)
Attributes
- record_row_path
Path to the top-level parent that contains the records.
KinesisFirehoseInputProperty
- class CfnApplication.KinesisFirehoseInputProperty(*, resource_arn, role_arn)
Bases:
object
Identifies an Amazon Kinesis Firehose delivery stream as the streaming source.
You provide the delivery stream’s Amazon Resource Name (ARN) and an IAM role ARN that enables Amazon Kinesis Analytics to access the stream on your behalf.
- Parameters:
resource_arn (str) – ARN of the input delivery stream.
role_arn (str) – ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to make sure that the role has the necessary permissions to access the stream.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

kinesis_firehose_input_property = kinesisanalytics.CfnApplication.KinesisFirehoseInputProperty(
    resource_arn="resourceArn",
    role_arn="roleArn"
)
Attributes
- resource_arn
ARN of the input delivery stream.
- role_arn
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf.
You need to make sure that the role has the necessary permissions to access the stream.
KinesisStreamsInputProperty
- class CfnApplication.KinesisStreamsInputProperty(*, resource_arn, role_arn)
Bases:
object
Identifies an Amazon Kinesis stream as the streaming source.
You provide the stream’s Amazon Resource Name (ARN) and an IAM role ARN that enables Amazon Kinesis Analytics to access the stream on your behalf.
- Parameters:
resource_arn (str) – ARN of the input Amazon Kinesis stream to read.
role_arn (str) – ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

kinesis_streams_input_property = kinesisanalytics.CfnApplication.KinesisStreamsInputProperty(
    resource_arn="resourceArn",
    role_arn="roleArn"
)
Attributes
- resource_arn
ARN of the input Amazon Kinesis stream to read.
- role_arn
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf.
You need to grant the necessary permissions to this role.
MappingParametersProperty
- class CfnApplication.MappingParametersProperty(*, csv_mapping_parameters=None, json_mapping_parameters=None)
Bases:
object
When configuring application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- Parameters:
csv_mapping_parameters (Union[IResolvable, CSVMappingParametersProperty, Dict[str, Any], None]) – Provides additional mapping information when the record format uses delimiters (for example, CSV).
json_mapping_parameters (Union[IResolvable, JSONMappingParametersProperty, Dict[str, Any], None]) – Provides additional mapping information when JSON is the record format on the streaming source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

mapping_parameters_property = kinesisanalytics.CfnApplication.MappingParametersProperty(
    csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
        record_column_delimiter="recordColumnDelimiter",
        record_row_delimiter="recordRowDelimiter"
    ),
    json_mapping_parameters=kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
        record_row_path="recordRowPath"
    )
)
Attributes
- csv_mapping_parameters
Provides additional mapping information when the record format uses delimiters (for example, CSV).
- json_mapping_parameters
Provides additional mapping information when JSON is the record format on the streaming source.
RecordColumnProperty
- class CfnApplication.RecordColumnProperty(*, name, sql_type, mapping=None)
Bases:
object
Describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream.
Also used to describe the format of the reference data source.
- Parameters:
name (str) – Name of the column created in the in-application input stream or reference table.
sql_type (str) – Type of column created in the in-application input stream or reference table.
mapping (Optional[str]) – Reference to the data element in the streaming input or the reference data source. This element is required if the RecordFormatType is JSON.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

record_column_property = kinesisanalytics.CfnApplication.RecordColumnProperty(
    name="name",
    sql_type="sqlType",

    # the properties below are optional
    mapping="mapping"
)
Attributes
- mapping
Reference to the data element in the streaming input or the reference data source.
This element is required if the RecordFormatType is JSON.
- name
Name of the column created in the in-application input stream or reference table.
- sql_type
Type of column created in the in-application input stream or reference table.
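When the record format is JSON, the mapping is typically a JSONPath-style expression that points at the element to read; a hedged sketch with hypothetical field names:

import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Illustrative columns for JSON records; each mapping selects a JSON element.
ticker_column = kinesisanalytics.CfnApplication.RecordColumnProperty(
    name="ticker_symbol",
    sql_type="VARCHAR(4)",
    mapping="$.ticker_symbol"
)
price_column = kinesisanalytics.CfnApplication.RecordColumnProperty(
    name="price",
    sql_type="REAL",
    mapping="$.price"
)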
RecordFormatProperty
- class CfnApplication.RecordFormatProperty(*, record_format_type, mapping_parameters=None)
Bases:
object
Describes the record format and relevant mapping information that should be applied to schematize the records on the stream.
- Parameters:
record_format_type (str) – The type of record format.
mapping_parameters (Union[IResolvable, MappingParametersProperty, Dict[str, Any], None]) – When configuring application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

record_format_property = kinesisanalytics.CfnApplication.RecordFormatProperty(
    record_format_type="recordFormatType",

    # the properties below are optional
    mapping_parameters=kinesisanalytics.CfnApplication.MappingParametersProperty(
        csv_mapping_parameters=kinesisanalytics.CfnApplication.CSVMappingParametersProperty(
            record_column_delimiter="recordColumnDelimiter",
            record_row_delimiter="recordRowDelimiter"
        ),
        json_mapping_parameters=kinesisanalytics.CfnApplication.JSONMappingParametersProperty(
            record_row_path="recordRowPath"
        )
    )
)
Attributes
- mapping_parameters
When configuring application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- record_format_type
The type of record format.