
Class: Aws::KinesisAnalyticsV2::Types::Input

Inherits: Struct

Overview

Note:

When passing Input as input to an Aws::Client method, you can use a vanilla Hash:

{
  name_prefix: "InAppStreamName", # required
  input_processing_configuration: {
    input_lambda_processor: { # required
      resource_arn: "ResourceARN", # required
    },
  },
  kinesis_streams_input: {
    resource_arn: "ResourceARN", # required
  },
  kinesis_firehose_input: {
    resource_arn: "ResourceARN", # required
  },
  input_parallelism: {
    count: 1,
  },
  input_schema: { # required
    record_format: { # required
      record_format_type: "JSON", # required, accepts JSON, CSV
      mapping_parameters: {
        json_mapping_parameters: {
          record_row_path: "RecordRowPath", # required
        },
        csv_mapping_parameters: {
          record_row_delimiter: "RecordRowDelimiter", # required
          record_column_delimiter: "RecordColumnDelimiter", # required
        },
      },
    },
    record_encoding: "RecordEncoding",
    record_columns: [ # required
      {
        name: "RecordColumnName", # required
        mapping: "RecordColumnMapping",
        sql_type: "RecordColumnSqlType", # required
      },
    ],
  },
}

When you configure the application input for a SQL-based Kinesis Data Analytics application, you specify the streaming source, the in-application stream name that is created, and the mapping between the two.
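
An Input hash like the one above is passed as the :input parameter of a client operation, for example Aws::KinesisAnalyticsV2::Client#add_application_input. The following is a minimal sketch; the application name, version ID, region, and stream ARN are placeholder values:

# Minimal sketch of adding an input to an existing SQL-based application.
# All literal values below are placeholders for illustration.
require 'aws-sdk'

client = Aws::KinesisAnalyticsV2::Client.new(region: 'us-east-1')

client.add_application_input(
  application_name: 'my-sql-application',   # placeholder
  current_application_version_id: 1,        # placeholder
  input: {
    name_prefix: 'InAppStreamName',
    kinesis_streams_input: {
      resource_arn: 'arn:aws:kinesis:us-east-1:123456789012:stream/ExampleStream' # placeholder ARN
    },
    input_schema: {
      record_format: {
        record_format_type: 'JSON',
        mapping_parameters: {
          json_mapping_parameters: { record_row_path: '$' }
        }
      },
      record_columns: [
        { name: 'ticker', sql_type: 'VARCHAR(4)', mapping: '$.ticker' }
      ]
    }
  }
)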



Instance Attribute Details

#input_parallelism ⇒ Types::InputParallelism

Describes the number of in-application streams to create.

Returns:

  • (Types::InputParallelism)

    Describes the number of in-application streams to create.

#input_processing_configuration ⇒ Types::InputProcessingConfiguration

The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.

#input_schema ⇒ Types::SourceSchema

Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.

Also used to describe the format of the reference data source.

Returns:

  • (Types::SourceSchema)

    Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.

#kinesis_firehose_input ⇒ Types::KinesisFirehoseInput

If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream's ARN.

Returns:

  • (Types::KinesisFirehoseInput)

    If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream's ARN.

#kinesis_streams_input ⇒ Types::KinesisStreamsInput

If the streaming source is an Amazon Kinesis data stream, identifies the stream's Amazon Resource Name (ARN).

Returns:

  • (Types::KinesisStreamsInput)

    If the streaming source is an Amazon Kinesis data stream, identifies the stream's Amazon Resource Name (ARN).

#name_prefix ⇒ String

The name prefix to use when creating an in-application stream. Suppose that you specify a prefix "MyInApplicationStream." Kinesis Data Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with the names "MyInApplicationStream_001," "MyInApplicationStream_002," and so on.

Returns:

  • (String)

    The name prefix to use when creating an in-application stream.
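
    For instance, a name prefix combined with an input_parallelism count determines how many in-application streams are created and how they are named. The fragment below is illustrative; the prefix and count are placeholder values:

    # Illustrative fragment of an Input hash (values are placeholders).
    {
      name_prefix: "MyInApplicationStream",
      input_parallelism: { count: 3 },
      # ... remaining Input members ...
    }
    # Per the naming scheme described above, Kinesis Data Analytics would create three
    # in-application streams: "MyInApplicationStream_001", "MyInApplicationStream_002",
    # and "MyInApplicationStream_003".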