CfnFlow

class aws_cdk.aws_appflow.CfnFlow(scope, id, *, destination_flow_config_list, flow_name, source_flow_config, tasks, trigger_config, description=None, flow_status=None, kms_arn=None, metadata_catalog_config=None, tags=None)

Bases: CfnResource

The AWS::AppFlow::Flow resource is an Amazon AppFlow resource type that specifies a new flow.

If you want to use AWS CloudFormation to create a connector profile for connectors that implement OAuth (such as Salesforce, Slack, Zendesk, and Google Analytics), you must fetch the access and refresh tokens. You can do this by implementing your own UI for OAuth, or by retrieving the tokens from elsewhere. Alternatively, you can use the Amazon AppFlow console to create the connector profile, and then use that connector profile in the flow creation CloudFormation template.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-appflow-flow.html

CloudformationResource:

AWS::AppFlow::Flow

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

cfn_flow = appflow.CfnFlow(self, "MyCfnFlow",
    destination_flow_config_list=[appflow.CfnFlow.DestinationFlowConfigProperty(
        connector_type="connectorType",
        destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
            custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty(
                entity_name="entityName",

                # the properties below are optional
                custom_properties={
                    "custom_properties_key": "customProperties"
                },
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            ),
            event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
                object="object",

                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
                object="object"
            ),
            marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
                object="object",

                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
                intermediate_bucket_name="intermediateBucketName",
                object="object",

                # the properties below are optional
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
                bucket_name="bucketName",

                # the properties below are optional
                bucket_prefix="bucketPrefix",
                s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
                    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                        aggregation_type="aggregationType",
                        target_file_size=123
                    ),
                    file_type="fileType",
                    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                        path_prefix_hierarchy=["pathPrefixHierarchy"],
                        prefix_format="prefixFormat",
                        prefix_type="prefixType"
                    ),
                    preserve_source_data_typing=False
                )
            ),
            salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
                object="object",

                # the properties below are optional
                data_transfer_api="dataTransferApi",
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            ),
            sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
                object_path="objectPath",

                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix"
                ),
                write_operation_type="writeOperationType"
            ),
            snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
                intermediate_bucket_name="intermediateBucketName",
                object="object",

                # the properties below are optional
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
                bucket_name="bucketName",
                s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
                    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                        path_prefix_hierarchy=["pathPrefixHierarchy"],
                        prefix_format="prefixFormat",
                        prefix_type="prefixType"
                    ),

                    # the properties below are optional
                    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                        aggregation_type="aggregationType",
                        target_file_size=123
                    ),
                    file_type="fileType"
                ),

                # the properties below are optional
                bucket_prefix="bucketPrefix"
            ),
            zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
                object="object",

                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            )
        ),

        # the properties below are optional
        api_version="apiVersion",
        connector_profile_name="connectorProfileName"
    )],
    flow_name="flowName",
    source_flow_config=appflow.CfnFlow.SourceFlowConfigProperty(
        connector_type="connectorType",
        source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
            amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
                object="object"
            ),
            custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty(
                entity_name="entityName",

                # the properties below are optional
                custom_properties={
                    "custom_properties_key": "customProperties"
                },
                data_transfer_api=appflow.CfnFlow.DataTransferApiProperty(
                    name="name",
                    type="type"
                )
            ),
            datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
                object="object"
            ),
            dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
                object="object"
            ),
            google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
                object="object"
            ),
            infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
                object="object"
            ),
            marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
                object="object"
            ),
            pardot=appflow.CfnFlow.PardotSourcePropertiesProperty(
                object="object"
            ),
            s3=appflow.CfnFlow.S3SourcePropertiesProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",

                # the properties below are optional
                s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
                    s3_input_file_type="s3InputFileType"
                )
            ),
            salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
                object="object",

                # the properties below are optional
                data_transfer_api="dataTransferApi",
                enable_dynamic_field_update=False,
                include_deleted_records=False
            ),
            sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
                object_path="objectPath",

                # the properties below are optional
                pagination_config=appflow.CfnFlow.SAPODataPaginationConfigProperty(
                    max_page_size=123
                ),
                parallelism_config=appflow.CfnFlow.SAPODataParallelismConfigProperty(
                    max_parallelism=123
                )
            ),
            service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
                object="object"
            ),
            singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
                object="object"
            ),
            slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
                object="object"
            ),
            trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
                object="object"
            ),
            veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
                object="object",

                # the properties below are optional
                document_type="documentType",
                include_all_versions=False,
                include_renditions=False,
                include_source_files=False
            ),
            zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
                object="object"
            )
        ),

        # the properties below are optional
        api_version="apiVersion",
        connector_profile_name="connectorProfileName",
        incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
            datetime_type_field_name="datetimeTypeFieldName"
        )
    ),
    tasks=[appflow.CfnFlow.TaskProperty(
        source_fields=["sourceFields"],
        task_type="taskType",

        # the properties below are optional
        connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
            amplitude="amplitude",
            custom_connector="customConnector",
            datadog="datadog",
            dynatrace="dynatrace",
            google_analytics="googleAnalytics",
            infor_nexus="inforNexus",
            marketo="marketo",
            pardot="pardot",
            s3="s3",
            salesforce="salesforce",
            sapo_data="sapoData",
            service_now="serviceNow",
            singular="singular",
            slack="slack",
            trendmicro="trendmicro",
            veeva="veeva",
            zendesk="zendesk"
        ),
        destination_field="destinationField",
        task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
            key="key",
            value="value"
        )]
    )],
    trigger_config=appflow.CfnFlow.TriggerConfigProperty(
        trigger_type="triggerType",

        # the properties below are optional
        trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
            schedule_expression="scheduleExpression",

            # the properties below are optional
            data_pull_mode="dataPullMode",
            first_execution_from=123,
            flow_error_deactivation_threshold=123,
            schedule_end_time=123,
            schedule_offset=123,
            schedule_start_time=123,
            time_zone="timeZone"
        )
    ),

    # the properties below are optional
    description="description",
    flow_status="flowStatus",
    kms_arn="kmsArn",
    metadata_catalog_config=appflow.CfnFlow.MetadataCatalogConfigProperty(
        glue_data_catalog=appflow.CfnFlow.GlueDataCatalogProperty(
            database_name="databaseName",
            role_arn="roleArn",
            table_prefix="tablePrefix"
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
Parameters:
  • scope (Construct) – Scope in which this resource is defined.

  • id (str) – Construct identifier for this resource (unique in its scope).

  • destination_flow_config_list (Union[IResolvable, Sequence[Union[IResolvable, DestinationFlowConfigProperty, Dict[str, Any]]]]) – The configuration that controls how Amazon AppFlow places data in the destination connector.

  • flow_name (str) – The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

  • source_flow_config (Union[IResolvable, SourceFlowConfigProperty, Dict[str, Any]]) – Contains information about the configuration of the source connector used in the flow.

  • tasks (Union[IResolvable, Sequence[Union[IResolvable, TaskProperty, Dict[str, Any]]]]) – A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.

  • trigger_config (Union[IResolvable, TriggerConfigProperty, Dict[str, Any]]) – The trigger settings that determine how and when Amazon AppFlow runs the specified flow.

  • description (Optional[str]) – A user-entered description of the flow.

  • flow_status (Optional[str]) – Sets the status of the flow. You can specify one of the following values: - Active - The flow runs based on the trigger settings that you defined. Active scheduled flows run as scheduled, and active event-triggered flows run when the specified change event occurs. However, active on-demand flows run only when you manually start them by using Amazon AppFlow. - Suspended - You can use this option to deactivate an active flow. Scheduled and event-triggered flows will cease to run until you reactivate them. This value only affects scheduled and event-triggered flows; it has no effect on on-demand flows. If you omit the FlowStatus parameter, Amazon AppFlow creates the flow with a default status. The default status for on-demand flows is Active. The default status for scheduled and event-triggered flows is Draft, which means they’re not yet active.

  • kms_arn (Optional[str]) – The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don’t provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.

  • metadata_catalog_config (Union[IResolvable, MetadataCatalogConfigProperty, Dict[str, Any], None]) – Specifies the configuration that Amazon AppFlow uses when it catalogs your data. When Amazon AppFlow catalogs your data, it stores metadata in a data catalog.

  • tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – The tags used to organize, track, or control access for your flow.
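For orientation, the following is a smaller, more concrete sketch (with hypothetical bucket and connector profile names) of an on-demand flow that copies the Salesforce Account object to Amazon S3. It assumes a Salesforce connector profile named "my-salesforce-profile" already exists, for example one created in the Amazon AppFlow console as described above.

from aws_cdk import aws_appflow as appflow

minimal_flow = appflow.CfnFlow(self, "SalesforceAccountToS3",
    flow_name="salesforce-account-to-s3",
    trigger_config=appflow.CfnFlow.TriggerConfigProperty(
        trigger_type="OnDemand"
    ),
    source_flow_config=appflow.CfnFlow.SourceFlowConfigProperty(
        connector_type="Salesforce",
        # Hypothetical profile name; the profile itself is not created by this template.
        connector_profile_name="my-salesforce-profile",
        source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
            salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
                object="Account"
            )
        )
    ),
    destination_flow_config_list=[appflow.CfnFlow.DestinationFlowConfigProperty(
        connector_type="S3",
        destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
            s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
                bucket_name="my-appflow-destination-bucket"
            )
        )
    )],
    # Map every source field straight through; AppFlow typically expects an
    # EXCLUDE_SOURCE_FIELDS_LIST task property alongside a Map_all task.
    tasks=[appflow.CfnFlow.TaskProperty(
        task_type="Map_all",
        source_fields=[],
        task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
            key="EXCLUDE_SOURCE_FIELDS_LIST",
            value="[]"
        )]
    )]
)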

Methods

add_deletion_override(path)

Syntactic sugar for addOverride(path, undefined).

Parameters:

path (str) – The path of the value to delete.

Return type:

None
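For example, to drop the optional Description property from the synthesized AWS::AppFlow::Flow resource (using the cfn_flow instance from the example above):

cfn_flow.add_deletion_override("Properties.Description")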

add_dependency(target)

Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.

This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.

Parameters:

target (CfnResource) –

Return type:

None
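A minimal sketch, assuming destination_bucket is another CfnResource (for example an s3.CfnBucket) defined in the same app:

# Ensure the destination bucket is created before the flow.
cfn_flow.add_dependency(destination_bucket)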

add_depends_on(target)

(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.

Parameters:

target (CfnResource) –

Deprecated:

use addDependency

Stability:

deprecated

Return type:

None

add_metadata(key, value)

Add a value to the CloudFormation Resource Metadata.

Parameters:
  • key (str) –

  • value (Any) –

See:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html

Return type:

None

Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
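For example, to attach a hypothetical metadata entry to the synthesized resource and read it back later with get_metadata:

cfn_flow.add_metadata("Purpose", "nightly-salesforce-sync")
purpose = cfn_flow.get_metadata("Purpose")  # returns "nightly-salesforce-sync"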

add_override(path, value)

Adds an override to the synthesized CloudFormation resource.

To add a property override, either use addPropertyOverride or prefix path with “Properties.” (e.g. Properties.TopicName).

If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.

To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.

For example:

cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")

would add the following overrides:

"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}

The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.

Parameters:
  • path (str) –

    • The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.

  • value (Any) –

    • The value. Could be primitive or complex.

Return type:

None
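Applied to this resource, a brief sketch that overrides the connector type of the first destination entry, passing a raw string with CloudFormation capitalization in the path:

cfn_flow.add_override("Properties.DestinationFlowConfigList.0.ConnectorType", "S3")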

add_property_deletion_override(property_path)

Adds an override that deletes the value of a property from the resource definition.

Parameters:

property_path (str) – The path to the property.

Return type:

None

add_property_override(property_path, value)

Adds an override to a resource property.

Syntactic sugar for addOverride("Properties.<...>", value).

Parameters:
  • property_path (str) – The path of the property.

  • value (Any) – The value.

Return type:

None
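For example, to set or delete top-level properties on the synthesized flow without touching the construct props:

# Force the flow to be created in the Suspended state.
cfn_flow.add_property_override("FlowStatus", "Suspended")

# Remove the metadata catalog configuration entirely.
cfn_flow.add_property_deletion_override("MetadataCatalogConfig")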

apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)

Sets the deletion policy of the resource based on the removal policy specified.

The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.

The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:

Parameters:
  • policy (Optional[RemovalPolicy]) –

  • apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true

  • default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.

See:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options

Return type:

None
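A brief sketch that retains the flow in the account when it is removed from the stack:

from aws_cdk import RemovalPolicy

cfn_flow.apply_removal_policy(RemovalPolicy.RETAIN)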

get_att(attribute_name, type_hint=None)

Returns a token for a runtime attribute of this resource.

Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.

Parameters:
  • attribute_name (str) – The name of the attribute.

  • type_hint (Optional[ResolutionTypeHint]) –

Return type:

Reference
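For example, to resolve the FlowArn attribute as a string token (equivalent to the generated attr_flow_arn accessor):

from aws_cdk import Token

flow_arn = Token.as_string(cfn_flow.get_att("FlowArn"))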

get_metadata(key)

Retrieve a value from the CloudFormation Resource Metadata.

Parameters:

key (str) –

See:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html

Return type:

Any

Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.

inspect(inspector)

Examines the CloudFormation resource and discloses attributes.

Parameters:

inspector (TreeInspector) – tree inspector to collect and process attributes.

Return type:

None

obtain_dependencies()

Retrieves an array of resources this resource depends on.

This assembles dependencies on resources across stacks (including nested stacks) automatically.

Return type:

List[Union[Stack, CfnResource]]

obtain_resource_dependencies()

Get a shallow copy of dependencies between this resource and other resources in the same stack.

Return type:

List[CfnResource]

override_logical_id(new_logical_id)

Overrides the auto-generated logical ID with a specific ID.

Parameters:

new_logical_id (str) – The new logical ID to use for this stack element.

Return type:

None
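For example, to pin the logical ID (a hypothetical name) so that refactoring the construct tree does not replace the resource:

cfn_flow.override_logical_id("SalesforceSyncFlow")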

remove_dependency(target)

Indicates that this resource no longer depends on another resource.

This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.

Parameters:

target (CfnResource) –

Return type:

None

replace_dependency(target, new_target)

Replaces one dependency with another.

Parameters:
  • target (CfnResource) – The dependency to replace.

  • new_target (CfnResource) – The new dependency.

Return type:

None

to_string()

Returns a string representation of this construct.

Return type:

str

Returns:

a string representation of this resource

Attributes

CFN_RESOURCE_TYPE_NAME = 'AWS::AppFlow::Flow'

The CloudFormation resource type name for this resource class.

attr_flow_arn

The flow’s Amazon Resource Name (ARN).

CloudformationAttribute:

FlowArn
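A sketch that exports the flow ARN as a stack output:

from aws_cdk import CfnOutput

CfnOutput(self, "FlowArn", value=cfn_flow.attr_flow_arn)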

cfn_options

Options for this resource, such as condition, update policy etc.

cfn_resource_type

AWS resource type.

creation_stack

Returns:

the stack trace of the point where this resource was created, sourced from the metadata entry typed aws:cdk:logicalId, with the bottom-most internal entries filtered out.

description

A user-entered description of the flow.

destination_flow_config_list

The configuration that controls how Amazon AppFlow places data in the destination connector.

flow_name

The specified name of the flow.

flow_status

Sets the status of the flow.

You can specify Active or Suspended; see the flow_status parameter above for details.

kms_arn

The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption.

logical_id

The logical ID for this CloudFormation stack element.

The logical ID of the element is calculated from the path of the resource node in the construct tree.

To override this value, use overrideLogicalId(newLogicalId).

Returns:

the logical ID as a stringified token. This value will only get resolved during synthesis.

metadata_catalog_config

Specifies the configuration that Amazon AppFlow uses when it catalogs your data.

node

The tree node.

ref

Return a string that will be resolved to a CloudFormation { Ref } for this element.

If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).

source_flow_config

Contains information about the configuration of the source connector used in the flow.

stack

The stack in which this element is defined.

CfnElements must be defined within a stack scope (directly or indirectly).

tags

Tag Manager which manages the tags for this resource.

tags_raw

The tags used to organize, track, or control access for your flow.

tasks

A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.

trigger_config

The trigger settings that determine how and when Amazon AppFlow runs the specified flow.

Static Methods

classmethod is_cfn_element(x)

Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).

Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.

Parameters:

x (Any) –

Return type:

bool

Returns:

true if x is a stack element (part of the synthesized CloudFormation template); false otherwise.

classmethod is_cfn_resource(x)

Check whether the given object is a CfnResource.

Parameters:

x (Any) –

Return type:

bool

classmethod is_construct(x)

Checks if x is a construct.

Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.

Parameters:

x (Any) – Any object.

Return type:

bool

Returns:

true if x is an object created from a class which extends Construct.
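As a quick illustration of the type-testing helpers (using the cfn_flow instance from the example above):

assert appflow.CfnFlow.is_construct(cfn_flow)
assert appflow.CfnFlow.is_cfn_resource(cfn_flow)
assert appflow.CfnFlow.is_cfn_element(cfn_flow)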

AggregationConfigProperty

class CfnFlow.AggregationConfigProperty(*, aggregation_type=None, target_file_size=None)

Bases: object

The aggregation settings that you can use to customize the output format of your flow data.

Parameters:
  • aggregation_type (Optional[str]) – Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.

  • target_file_size (Union[int, float, None]) – The desired file size, in MB, for each output file that Amazon AppFlow writes to the flow destination. For each file, Amazon AppFlow attempts to achieve the size that you specify. The actual file sizes might differ from this target based on the number and size of the records that each file contains.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-aggregationconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

aggregation_config_property = appflow.CfnFlow.AggregationConfigProperty(
    aggregation_type="aggregationType",
    target_file_size=123
)

Attributes

aggregation_type

Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-aggregationconfig.html#cfn-appflow-flow-aggregationconfig-aggregationtype

target_file_size

The desired file size, in MB, for each output file that Amazon AppFlow writes to the flow destination.

For each file, Amazon AppFlow attempts to achieve the size that you specify. The actual file sizes might differ from this target based on the number and size of the records that each file contains.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-aggregationconfig.html#cfn-appflow-flow-aggregationconfig-targetfilesize
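As a concrete sketch, aggregating each flow run into a single output file with a target size of 128 MB (SingleFile is one of the values the underlying AggregationType property accepts):

aggregation = appflow.CfnFlow.AggregationConfigProperty(
    aggregation_type="SingleFile",
    target_file_size=128
)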

AmplitudeSourcePropertiesProperty

class CfnFlow.AmplitudeSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Amplitude is being used as a source.

Parameters:

object (str) – The object specified in the Amplitude flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-amplitudesourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

amplitude_source_properties_property = appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Amplitude flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-amplitudesourceproperties.html#cfn-appflow-flow-amplitudesourceproperties-object

ConnectorOperatorProperty

class CfnFlow.ConnectorOperatorProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)

Bases: object

The operation to be performed on the provided source fields.

Parameters:
  • amplitude (Optional[str]) – The operation to be performed on the provided Amplitude source fields.

  • custom_connector (Optional[str]) – Operators supported by the custom connector.

  • datadog (Optional[str]) – The operation to be performed on the provided Datadog source fields.

  • dynatrace (Optional[str]) – The operation to be performed on the provided Dynatrace source fields.

  • google_analytics (Optional[str]) – The operation to be performed on the provided Google Analytics source fields.

  • infor_nexus (Optional[str]) – The operation to be performed on the provided Infor Nexus source fields.

  • marketo (Optional[str]) – The operation to be performed on the provided Marketo source fields.

  • pardot (Optional[str]) – The operation to be performed on the provided Salesforce Pardot source fields.

  • s3 (Optional[str]) – The operation to be performed on the provided Amazon S3 source fields.

  • salesforce (Optional[str]) – The operation to be performed on the provided Salesforce source fields.

  • sapo_data (Optional[str]) – The operation to be performed on the provided SAPOData source fields.

  • service_now (Optional[str]) – The operation to be performed on the provided ServiceNow source fields.

  • singular (Optional[str]) – The operation to be performed on the provided Singular source fields.

  • slack (Optional[str]) – The operation to be performed on the provided Slack source fields.

  • trendmicro (Optional[str]) – The operation to be performed on the provided Trend Micro source fields.

  • veeva (Optional[str]) – The operation to be performed on the provided Veeva source fields.

  • zendesk (Optional[str]) – The operation to be performed on the provided Zendesk source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

connector_operator_property = appflow.CfnFlow.ConnectorOperatorProperty(
    amplitude="amplitude",
    custom_connector="customConnector",
    datadog="datadog",
    dynatrace="dynatrace",
    google_analytics="googleAnalytics",
    infor_nexus="inforNexus",
    marketo="marketo",
    pardot="pardot",
    s3="s3",
    salesforce="salesforce",
    sapo_data="sapoData",
    service_now="serviceNow",
    singular="singular",
    slack="slack",
    trendmicro="trendmicro",
    veeva="veeva",
    zendesk="zendesk"
)

Attributes

amplitude

The operation to be performed on the provided Amplitude source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-amplitude

custom_connector

Operators supported by the custom connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-customconnector

datadog

The operation to be performed on the provided Datadog source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-datadog

dynatrace

The operation to be performed on the provided Dynatrace source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-dynatrace

google_analytics

The operation to be performed on the provided Google Analytics source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-googleanalytics

infor_nexus

The operation to be performed on the provided Infor Nexus source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-infornexus

marketo

The operation to be performed on the provided Marketo source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-marketo

pardot

The operation to be performed on the provided Salesforce Pardot source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-pardot

s3

The operation to be performed on the provided Amazon S3 source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-s3

salesforce

The operation to be performed on the provided Salesforce source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-salesforce

sapo_data

The operation to be performed on the provided SAPOData source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-sapodata

service_now

The operation to be performed on the provided ServiceNow source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-servicenow

singular

The operation to be performed on the provided Singular source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-singular

slack

The operation to be performed on the provided Slack source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-slack

trendmicro

The operation to be performed on the provided Trend Micro source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-trendmicro

veeva

The operation to be performed on the provided Veeva source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-veeva

zendesk

The operation to be performed on the provided Zendesk source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-connectoroperator.html#cfn-appflow-flow-connectoroperator-zendesk
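As a concrete sketch, the operator is typically set alongside a task. For example, a Salesforce projection that keeps only the Name field is expressed as a Filter task with the PROJECTION operator:

projection_task = appflow.CfnFlow.TaskProperty(
    task_type="Filter",
    source_fields=["Name"],
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        salesforce="PROJECTION"
    )
)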

CustomConnectorDestinationPropertiesProperty

class CfnFlow.CustomConnectorDestinationPropertiesProperty(*, entity_name, custom_properties=None, error_handling_config=None, id_field_names=None, write_operation_type=None)

Bases: object

The properties that are applied when the custom connector is being used as a destination.

Parameters:
  • entity_name (str) – The entity specified in the custom connector as a destination in the flow.

  • custom_properties (Union[IResolvable, Mapping[str, str], None]) – The custom properties that are specific to the connector when it’s used as a destination in the flow.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.

  • id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.

  • write_operation_type (Optional[str]) – Specifies the type of write operation to be performed in the custom connector when it’s used as destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

custom_connector_destination_properties_property = appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty(
    entity_name="entityName",

    # the properties below are optional
    custom_properties={
        "custom_properties_key": "customProperties"
    },
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)

Attributes

custom_properties

The custom properties that are specific to the connector when it’s used as a destination in the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html#cfn-appflow-flow-customconnectordestinationproperties-customproperties

entity_name

The entity specified in the custom connector as a destination in the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html#cfn-appflow-flow-customconnectordestinationproperties-entityname

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html#cfn-appflow-flow-customconnectordestinationproperties-errorhandlingconfig

id_field_names

The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html#cfn-appflow-flow-customconnectordestinationproperties-idfieldnames

write_operation_type

Specifies the type of write operation to be performed in the custom connector when it’s used as destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectordestinationproperties.html#cfn-appflow-flow-customconnectordestinationproperties-writeoperationtype

CustomConnectorSourcePropertiesProperty

class CfnFlow.CustomConnectorSourcePropertiesProperty(*, entity_name, custom_properties=None, data_transfer_api=None)

Bases: object

The properties that are applied when the custom connector is being used as a source.

Parameters:
  • entity_name (str) – The entity specified in the custom connector as a source in the flow.

  • custom_properties (Union[IResolvable, Mapping[str, str], None]) – Custom properties that are required to use the custom connector as a source.

  • data_transfer_api (Union[IResolvable, DataTransferApiProperty, Dict[str, Any], None]) – The API of the connector application that Amazon AppFlow uses to transfer your data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectorsourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

custom_connector_source_properties_property = appflow.CfnFlow.CustomConnectorSourcePropertiesProperty(
    entity_name="entityName",

    # the properties below are optional
    custom_properties={
        "custom_properties_key": "customProperties"
    },
    data_transfer_api=appflow.CfnFlow.DataTransferApiProperty(
        name="name",
        type="type"
    )
)

Attributes

custom_properties

Custom properties that are required to use the custom connector as a source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectorsourceproperties.html#cfn-appflow-flow-customconnectorsourceproperties-customproperties

data_transfer_api

The API of the connector application that Amazon AppFlow uses to transfer your data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectorsourceproperties.html#cfn-appflow-flow-customconnectorsourceproperties-datatransferapi

entity_name

The entity specified in the custom connector as a source in the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-customconnectorsourceproperties.html#cfn-appflow-flow-customconnectorsourceproperties-entityname

DataTransferApiProperty

class CfnFlow.DataTransferApiProperty(*, name, type)

Bases: object

The API of the connector application that Amazon AppFlow uses to transfer your data.

Parameters:
  • name (str) – The name of the connector application API.

  • type (str) – You can specify one of the following types: - AUTOMATIC - The default. Optimizes a flow for datasets that fluctuate in size from small to large. For each flow run, Amazon AppFlow chooses to use the SYNC or ASYNC API type based on the amount of data that the run transfers. - SYNC - A synchronous API. This type of API optimizes a flow for small to medium-sized datasets. - ASYNC - An asynchronous API. This type of API optimizes a flow for large datasets.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-datatransferapi.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

data_transfer_api_property = appflow.CfnFlow.DataTransferApiProperty(
    name="name",
    type="type"
)

Attributes

name

The name of the connector application API.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-datatransferapi.html#cfn-appflow-flow-datatransferapi-name

type

You can specify one of the following types:

  • AUTOMATIC - The default. Optimizes a flow for datasets that fluctuate in size from small to large. For each flow run, Amazon AppFlow chooses to use the SYNC or ASYNC API type based on the amount of data that the run transfers.

  • SYNC - A synchronous API. This type of API optimizes a flow for small to medium-sized datasets.

  • ASYNC - An asynchronous API. This type of API optimizes a flow for large datasets.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-datatransferapi.html#cfn-appflow-flow-datatransferapi-type
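For example, letting Amazon AppFlow choose the API type automatically for a hypothetical custom connector API named queryRecords:

data_transfer_api = appflow.CfnFlow.DataTransferApiProperty(
    name="queryRecords",
    type="AUTOMATIC"
)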

DatadogSourcePropertiesProperty

class CfnFlow.DatadogSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Datadog is being used as a source.

Parameters:

object (str) – The object specified in the Datadog flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-datadogsourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

datadog_source_properties_property = appflow.CfnFlow.DatadogSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Datadog flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-datadogsourceproperties.html#cfn-appflow-flow-datadogsourceproperties-object

DestinationConnectorPropertiesProperty

class CfnFlow.DestinationConnectorPropertiesProperty(*, custom_connector=None, event_bridge=None, lookout_metrics=None, marketo=None, redshift=None, s3=None, salesforce=None, sapo_data=None, snowflake=None, upsolver=None, zendesk=None)

Bases: object

This stores the information that is required to query a particular connector.

Parameters:
  • custom_connector (Union[IResolvable, CustomConnectorDestinationPropertiesProperty, Dict[str, Any], None]) – The properties that are required to query the custom Connector.

  • event_bridge (Union[IResolvable, EventBridgeDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon EventBridge.

  • lookout_metrics (Union[IResolvable, LookoutMetricsDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon Lookout for Metrics.

  • marketo (Union[IResolvable, MarketoDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Marketo.

  • redshift (Union[IResolvable, RedshiftDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon Redshift.

  • s3 (Union[IResolvable, S3DestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon S3.

  • salesforce (Union[IResolvable, SalesforceDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Salesforce.

  • sapo_data (Union[IResolvable, SAPODataDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query SAPOData.

  • snowflake (Union[IResolvable, SnowflakeDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Snowflake.

  • upsolver (Union[IResolvable, UpsolverDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Upsolver.

  • zendesk (Union[IResolvable, ZendeskDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Zendesk.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

destination_connector_properties_property = appflow.CfnFlow.DestinationConnectorPropertiesProperty(
    custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty(
        entity_name="entityName",

        # the properties below are optional
        custom_properties={
            "custom_properties_key": "customProperties"
        },
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    ),
    event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
        object="object",

        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
        object="object"
    ),
    marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
        object="object",

        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
        intermediate_bucket_name="intermediateBucketName",
        object="object",

        # the properties below are optional
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
        bucket_name="bucketName",

        # the properties below are optional
        bucket_prefix="bucketPrefix",
        s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
            aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                aggregation_type="aggregationType",
                target_file_size=123
            ),
            file_type="fileType",
            prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                path_prefix_hierarchy=["pathPrefixHierarchy"],
                prefix_format="prefixFormat",
                prefix_type="prefixType"
            ),
            preserve_source_data_typing=False
        )
    ),
    salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
        object="object",

        # the properties below are optional
        data_transfer_api="dataTransferApi",
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    ),
    sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
        object_path="objectPath",

        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix"
        ),
        write_operation_type="writeOperationType"
    ),
    snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
        intermediate_bucket_name="intermediateBucketName",
        object="object",

        # the properties below are optional
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
        bucket_name="bucketName",
        s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
            prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                path_prefix_hierarchy=["pathPrefixHierarchy"],
                prefix_format="prefixFormat",
                prefix_type="prefixType"
            ),

            # the properties below are optional
            aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                aggregation_type="aggregationType",
                target_file_size=123
            ),
            file_type="fileType"
        ),

        # the properties below are optional
        bucket_prefix="bucketPrefix"
    ),
    zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
        object="object",

        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    )
)

Attributes

custom_connector

The properties that are required to query the custom Connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-customconnector

event_bridge

The properties required to query Amazon EventBridge.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-eventbridge

lookout_metrics

The properties required to query Amazon Lookout for Metrics.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-lookoutmetrics

marketo

The properties required to query Marketo.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-marketo

redshift

The properties required to query Amazon Redshift.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-redshift

s3

The properties required to query Amazon S3.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-s3

salesforce

The properties required to query Salesforce.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-salesforce

sapo_data

The properties required to query SAPOData.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-sapodata

snowflake

The properties required to query Snowflake.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-snowflake

upsolver

The properties required to query Upsolver.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-upsolver

zendesk

The properties required to query Zendesk.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationconnectorproperties.html#cfn-appflow-flow-destinationconnectorproperties-zendesk

DestinationFlowConfigProperty

class CfnFlow.DestinationFlowConfigProperty(*, connector_type, destination_connector_properties, api_version=None, connector_profile_name=None)

Bases: object

Contains information about the configuration of destination connectors present in the flow.

Parameters:
  • connector_type (str) – The type of destination connector, such as Salesforce, Amazon S3, and so on.

  • destination_connector_properties (Union[IResolvable, DestinationConnectorPropertiesProperty, Dict[str, Any]]) – This stores the information that is required to query a particular connector.

  • api_version (Optional[str]) – The API version that the destination connector uses.

  • connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationflowconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

destination_flow_config_property = appflow.CfnFlow.DestinationFlowConfigProperty(
    connector_type="connectorType",
    destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
        custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty(
            entity_name="entityName",

            # the properties below are optional
            custom_properties={
                "custom_properties_key": "customProperties"
            },
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        ),
        event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
            object="object",

            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
            object="object"
        ),
        marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
            object="object",

            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
            intermediate_bucket_name="intermediateBucketName",
            object="object",

            # the properties below are optional
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
            bucket_name="bucketName",

            # the properties below are optional
            bucket_prefix="bucketPrefix",
            s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
                aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                    aggregation_type="aggregationType",
                    target_file_size=123
                ),
                file_type="fileType",
                prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                    path_prefix_hierarchy=["pathPrefixHierarchy"],
                    prefix_format="prefixFormat",
                    prefix_type="prefixType"
                ),
                preserve_source_data_typing=False
            )
        ),
        salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
            object="object",

            # the properties below are optional
            data_transfer_api="dataTransferApi",
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        ),
        sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
            object_path="objectPath",

            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix"
            ),
            write_operation_type="writeOperationType"
        ),
        snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
            intermediate_bucket_name="intermediateBucketName",
            object="object",

            # the properties below are optional
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
            bucket_name="bucketName",
            s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
                prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                    path_prefix_hierarchy=["pathPrefixHierarchy"],
                    prefix_format="prefixFormat",
                    prefix_type="prefixType"
                ),

                # the properties below are optional
                aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                    aggregation_type="aggregationType",
                    target_file_size=123
                ),
                file_type="fileType"
            ),

            # the properties below are optional
            bucket_prefix="bucketPrefix"
        ),
        zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
            object="object",

            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        )
    ),

    # the properties below are optional
    api_version="apiVersion",
    connector_profile_name="connectorProfileName"
)

Attributes

api_version

The API version that the destination connector uses.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationflowconfig.html#cfn-appflow-flow-destinationflowconfig-apiversion

connector_profile_name

The name of the connector profile.

This name must be unique for each connector profile in the AWS account.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationflowconfig.html#cfn-appflow-flow-destinationflowconfig-connectorprofilename

connector_type

The type of destination connector, such as Salesforce, Amazon S3, and so on.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationflowconfig.html#cfn-appflow-flow-destinationflowconfig-connectortype

destination_connector_properties

This stores the information that is required to query a particular connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-destinationflowconfig.html#cfn-appflow-flow-destinationflowconfig-destinationconnectorproperties
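
The generated example above wires up every destination type at once. In practice a flow usually sets exactly one destination per DestinationFlowConfigProperty entry. The sketch below shows a minimal S3-only configuration; the "S3" connector type string, bucket name, and prefix are illustrative assumptions rather than values taken from this reference.

from aws_cdk import aws_appflow as appflow

# Minimal sketch: a single destination entry that writes flow output to Amazon S3.
# "S3" as the connector type and the bucket/prefix values are assumptions for illustration.
s3_destination_config = appflow.CfnFlow.DestinationFlowConfigProperty(
    connector_type="S3",
    destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
        s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
            bucket_name="my-appflow-output-bucket",  # hypothetical bucket
            bucket_prefix="exports/accounts"  # hypothetical key prefix
        )
    )
)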

DynatraceSourcePropertiesProperty

class CfnFlow.DynatraceSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Dynatrace is being used as a source.

Parameters:

object (str) – The object specified in the Dynatrace flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-dynatracesourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

dynatrace_source_properties_property = appflow.CfnFlow.DynatraceSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Dynatrace flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-dynatracesourceproperties.html#cfn-appflow-flow-dynatracesourceproperties-object

ErrorHandlingConfigProperty

class CfnFlow.ErrorHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None, fail_on_first_error=None)

Bases: object

The settings that determine how Amazon AppFlow handles an error when placing data in the destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

Parameters:
  • bucket_name (Optional[str]) – Specifies the name of the Amazon S3 bucket.

  • bucket_prefix (Optional[str]) – Specifies the Amazon S3 bucket prefix.

  • fail_on_first_error (Union[bool, IResolvable, None]) – Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-errorhandlingconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

error_handling_config_property = appflow.CfnFlow.ErrorHandlingConfigProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",
    fail_on_first_error=False
)

Attributes

bucket_name

Specifies the name of the Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-errorhandlingconfig.html#cfn-appflow-flow-errorhandlingconfig-bucketname

bucket_prefix

Specifies the Amazon S3 bucket prefix.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-errorhandlingconfig.html#cfn-appflow-flow-errorhandlingconfig-bucketprefix

fail_on_first_error

Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-errorhandlingconfig.html#cfn-appflow-flow-errorhandlingconfig-failonfirsterror
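
To make the two error-handling modes described above concrete, the sketch below builds one config that stops the flow run on the first failed record and one that keeps attempting every record while collecting failures in an S3 location. The bucket name and prefix are hypothetical.

from aws_cdk import aws_appflow as appflow

# Stop the flow run as soon as a single record fails to insert.
fail_fast = appflow.CfnFlow.ErrorHandlingConfigProperty(
    fail_on_first_error=True
)

# Keep attempting every record and write failures to a (hypothetical) S3 location.
collect_errors = appflow.CfnFlow.ErrorHandlingConfigProperty(
    bucket_name="my-appflow-error-bucket",
    bucket_prefix="errors/marketo",
    fail_on_first_error=False
)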

EventBridgeDestinationPropertiesProperty

class CfnFlow.EventBridgeDestinationPropertiesProperty(*, object, error_handling_config=None)

Bases: object

The properties that are applied when Amazon EventBridge is being used as a destination.

Parameters:
  • object (str) – The object specified in the Amazon EventBridge flow destination.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-eventbridgedestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

event_bridge_destination_properties_property = appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)

Attributes

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-eventbridgedestinationproperties.html#cfn-appflow-flow-eventbridgedestinationproperties-errorhandlingconfig

object

The object specified in the Amazon EventBridge flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-eventbridgedestinationproperties.html#cfn-appflow-flow-eventbridgedestinationproperties-object

GlueDataCatalogProperty

class CfnFlow.GlueDataCatalogProperty(*, database_name, role_arn, table_prefix)

Bases: object

Specifies the AWS Glue Data Catalog settings that Amazon AppFlow uses when it catalogs your flow data.

Parameters:
  • database_name (str) – The name of the AWS Glue Data Catalog database that stores the metadata tables that Amazon AppFlow creates for the flow in your AWS account.

  • role_arn (str) – The Amazon Resource Name (ARN) of an IAM role that grants Amazon AppFlow the permissions it needs to create Data Catalog tables, databases, and partitions.

  • table_prefix (str) – A naming prefix for each Data Catalog table that Amazon AppFlow creates for the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-gluedatacatalog.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

glue_data_catalog_property = appflow.CfnFlow.GlueDataCatalogProperty(
    database_name="databaseName",
    role_arn="roleArn",
    table_prefix="tablePrefix"
)

Attributes

database_name

The name of the AWS Glue Data Catalog database that stores the metadata tables that Amazon AppFlow creates for the flow in your AWS account.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-gluedatacatalog.html#cfn-appflow-flow-gluedatacatalog-databasename

role_arn

The Amazon Resource Name (ARN) of an IAM role that grants Amazon AppFlow the permissions it needs to create Data Catalog tables, databases, and partitions.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-gluedatacatalog.html#cfn-appflow-flow-gluedatacatalog-rolearn

table_prefix

A naming prefix for each Data Catalog table that Amazon AppFlow creates for the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-gluedatacatalog.html#cfn-appflow-flow-gluedatacatalog-tableprefix

GoogleAnalyticsSourcePropertiesProperty

class CfnFlow.GoogleAnalyticsSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Google Analytics is being used as a source.

Parameters:

object (str) – The object specified in the Google Analytics flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-googleanalyticssourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

google_analytics_source_properties_property = appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Google Analytics flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-googleanalyticssourceproperties.html#cfn-appflow-flow-googleanalyticssourceproperties-object

IncrementalPullConfigProperty

class CfnFlow.IncrementalPullConfigProperty(*, datetime_type_field_name=None)

Bases: object

Specifies the configuration used when importing incremental records from the source.

Parameters:

datetime_type_field_name (Optional[str]) – A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-incrementalpullconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

incremental_pull_config_property = appflow.CfnFlow.IncrementalPullConfigProperty(
    datetime_type_field_name="datetimeTypeFieldName"
)

Attributes

datetime_type_field_name

A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-incrementalpullconfig.html#cfn-appflow-flow-incrementalpullconfig-datetimetypefieldname
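
As a concrete illustration of the setting above, the sketch below points incremental pulls at a timestamp field. "LastModifiedDate" is a hypothetical field name (a common choice on Salesforce objects), not a value prescribed by this reference.

from aws_cdk import aws_appflow as appflow

# Import only records whose timestamp field changed since the last flow run.
# "LastModifiedDate" is a hypothetical field name used for illustration.
incremental_pull = appflow.CfnFlow.IncrementalPullConfigProperty(
    datetime_type_field_name="LastModifiedDate"
)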

InforNexusSourcePropertiesProperty

class CfnFlow.InforNexusSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Infor Nexus is being used as a source.

Parameters:

object (str) – The object specified in the Infor Nexus flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-infornexussourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

infor_nexus_source_properties_property = appflow.CfnFlow.InforNexusSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Infor Nexus flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-infornexussourceproperties.html#cfn-appflow-flow-infornexussourceproperties-object

LookoutMetricsDestinationPropertiesProperty

class CfnFlow.LookoutMetricsDestinationPropertiesProperty(*, object=None)

Bases: object

The properties that are applied when Amazon Lookout for Metrics is used as a destination.

Parameters:

object (Optional[str]) – The object specified in the Amazon Lookout for Metrics flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-lookoutmetricsdestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

lookout_metrics_destination_properties_property = appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Amazon Lookout for Metrics flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-lookoutmetricsdestinationproperties.html#cfn-appflow-flow-lookoutmetricsdestinationproperties-object

MarketoDestinationPropertiesProperty

class CfnFlow.MarketoDestinationPropertiesProperty(*, object, error_handling_config=None)

Bases: object

The properties that Amazon AppFlow applies when you use Marketo as a flow destination.

Parameters:
  • object (str) – The object specified in the Marketo flow destination.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-marketodestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

marketo_destination_properties_property = appflow.CfnFlow.MarketoDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)

Attributes

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-marketodestinationproperties.html#cfn-appflow-flow-marketodestinationproperties-errorhandlingconfig

object

The object specified in the Marketo flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-marketodestinationproperties.html#cfn-appflow-flow-marketodestinationproperties-object

MarketoSourcePropertiesProperty

class CfnFlow.MarketoSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Marketo is being used as a source.

Parameters:

object (str) – The object specified in the Marketo flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-marketosourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

marketo_source_properties_property = appflow.CfnFlow.MarketoSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Marketo flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-marketosourceproperties.html#cfn-appflow-flow-marketosourceproperties-object

MetadataCatalogConfigProperty

class CfnFlow.MetadataCatalogConfigProperty(*, glue_data_catalog=None)

Bases: object

Specifies the configuration that Amazon AppFlow uses when it catalogs your data.

When Amazon AppFlow catalogs your data, it stores metadata in a data catalog.

Parameters:

glue_data_catalog (Union[IResolvable, GlueDataCatalogProperty, Dict[str, Any], None]) – Specifies the configuration that Amazon AppFlow uses when it catalogs your data with the AWS Glue Data Catalog.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-metadatacatalogconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

metadata_catalog_config_property = appflow.CfnFlow.MetadataCatalogConfigProperty(
    glue_data_catalog=appflow.CfnFlow.GlueDataCatalogProperty(
        database_name="databaseName",
        role_arn="roleArn",
        table_prefix="tablePrefix"
    )
)

Attributes

glue_data_catalog

Specifies the configuration that Amazon AppFlow uses when it catalogs your data with the AWS Glue Data Catalog.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-metadatacatalogconfig.html#cfn-appflow-flow-metadatacatalogconfig-gluedatacatalog
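
As a sketch of how this object is typically wired up, the snippet below builds a metadata catalog config backed by AWS Glue; it would be passed as the metadata_catalog_config parameter of CfnFlow. The database name, role ARN, and table prefix are placeholder assumptions.

from aws_cdk import aws_appflow as appflow

# Catalog flow output in the AWS Glue Data Catalog. All values below are
# illustrative placeholders; the role must grant Amazon AppFlow permission
# to create Data Catalog tables, databases, and partitions.
catalog_config = appflow.CfnFlow.MetadataCatalogConfigProperty(
    glue_data_catalog=appflow.CfnFlow.GlueDataCatalogProperty(
        database_name="appflow_metadata",
        role_arn="arn:aws:iam::111122223333:role/appflow-glue-access",
        table_prefix="sales_flow"
    )
)
# Pass catalog_config as the metadata_catalog_config argument when constructing CfnFlow.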

PardotSourcePropertiesProperty

class CfnFlow.PardotSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Salesforce Pardot is being used as a source.

Parameters:

object (str) – The object specified in the Salesforce Pardot flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-pardotsourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

pardot_source_properties_property = appflow.CfnFlow.PardotSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Salesforce Pardot flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-pardotsourceproperties.html#cfn-appflow-flow-pardotsourceproperties-object

PrefixConfigProperty

class CfnFlow.PrefixConfigProperty(*, path_prefix_hierarchy=None, prefix_format=None, prefix_type=None)

Bases: object

Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.

Parameters:
  • path_prefix_hierarchy (Optional[Sequence[str]]) – Specifies whether the destination file path includes either or both of the following elements: - EXECUTION_ID - The ID that Amazon AppFlow assigns to the flow run. - SCHEMA_VERSION - The version number of your data schema. Amazon AppFlow assigns this version number. The version number increases by one when you change any of the following settings in your flow configuration: - Source-to-destination field mappings - Field data types - Partition keys

  • prefix_format (Optional[str]) – Determines the level of granularity for the date and time that’s included in the prefix.

  • prefix_type (Optional[str]) – Determines the format of the prefix, and whether it applies to the file name, file path, or both.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-prefixconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

prefix_config_property = appflow.CfnFlow.PrefixConfigProperty(
    path_prefix_hierarchy=["pathPrefixHierarchy"],
    prefix_format="prefixFormat",
    prefix_type="prefixType"
)

Attributes

path_prefix_hierarchy

Specifies whether the destination file path includes either or both of the following elements:

  • EXECUTION_ID - The ID that Amazon AppFlow assigns to the flow run.

  • SCHEMA_VERSION - The version number of your data schema. Amazon AppFlow assigns this version number. The version number increases by one when you change any of the following settings in your flow configuration:

    • Source-to-destination field mappings

    • Field data types

    • Partition keys

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-prefixconfig.html#cfn-appflow-flow-prefixconfig-pathprefixhierarchy

prefix_format

Determines the level of granularity for the date and time that’s included in the prefix.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-prefixconfig.html#cfn-appflow-flow-prefixconfig-prefixformat

prefix_type

Determines the format of the prefix, and whether it applies to the file name, file path, or both.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-prefixconfig.html#cfn-appflow-flow-prefixconfig-prefixtype
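
To make the options above concrete, the sketch below requests both path elements described for path_prefix_hierarchy plus a daily, path-level date prefix. EXECUTION_ID and SCHEMA_VERSION come from the description above; the "PATH" and "DAY" strings for prefix_type and prefix_format are assumptions about the accepted values, so check the linked CloudFormation reference before relying on them.

from aws_cdk import aws_appflow as appflow

# Include both the flow-run ID and the schema version in the destination path,
# and add a date prefix with day-level granularity applied to the file path.
# "PATH" and "DAY" are assumed values; EXECUTION_ID / SCHEMA_VERSION are the
# elements documented for path_prefix_hierarchy.
prefix_config = appflow.CfnFlow.PrefixConfigProperty(
    path_prefix_hierarchy=["EXECUTION_ID", "SCHEMA_VERSION"],
    prefix_format="DAY",
    prefix_type="PATH"
)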

RedshiftDestinationPropertiesProperty

class CfnFlow.RedshiftDestinationPropertiesProperty(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)

Bases: object

The properties that are applied when Amazon Redshift is being used as a destination.

Parameters:
  • intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.

  • object (str) – The object specified in the Amazon Redshift flow destination.

  • bucket_prefix (Optional[str]) – The object key for the bucket in which Amazon AppFlow places the destination files.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-redshiftdestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

redshift_destination_properties_property = appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
    intermediate_bucket_name="intermediateBucketName",
    object="object",

    # the properties below are optional
    bucket_prefix="bucketPrefix",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)

Attributes

bucket_prefix

The object key for the bucket in which Amazon AppFlow places the destination files.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-redshiftdestinationproperties.html#cfn-appflow-flow-redshiftdestinationproperties-bucketprefix

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-redshiftdestinationproperties.html#cfn-appflow-flow-redshiftdestinationproperties-errorhandlingconfig

intermediate_bucket_name

The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-redshiftdestinationproperties.html#cfn-appflow-flow-redshiftdestinationproperties-intermediatebucketname

object

The object specified in the Amazon Redshift flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-redshiftdestinationproperties.html#cfn-appflow-flow-redshiftdestinationproperties-object

S3DestinationPropertiesProperty

class CfnFlow.S3DestinationPropertiesProperty(*, bucket_name, bucket_prefix=None, s3_output_format_config=None)

Bases: object

The properties that are applied when Amazon S3 is used as a destination.

Parameters:
  • bucket_name (str) – The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.

  • bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.

  • s3_output_format_config (Union[IResolvable, S3OutputFormatConfigProperty, Dict[str, Any], None]) – The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3destinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s3_destination_properties_property = appflow.CfnFlow.S3DestinationPropertiesProperty(
    bucket_name="bucketName",

    # the properties below are optional
    bucket_prefix="bucketPrefix",
    s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
        aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
            aggregation_type="aggregationType",
            target_file_size=123
        ),
        file_type="fileType",
        prefix_config=appflow.CfnFlow.PrefixConfigProperty(
            path_prefix_hierarchy=["pathPrefixHierarchy"],
            prefix_format="prefixFormat",
            prefix_type="prefixType"
        ),
        preserve_source_data_typing=False
    )
)

Attributes

bucket_name

The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3destinationproperties.html#cfn-appflow-flow-s3destinationproperties-bucketname

bucket_prefix

The object key for the destination bucket in which Amazon AppFlow places the files.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3destinationproperties.html#cfn-appflow-flow-s3destinationproperties-bucketprefix

s3_output_format_config

The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3destinationproperties.html#cfn-appflow-flow-s3destinationproperties-s3outputformatconfig

S3InputFormatConfigProperty

class CfnFlow.S3InputFormatConfigProperty(*, s3_input_file_type=None)

Bases: object

When you use Amazon S3 as the source, the configuration format that you provide for the flow input data.

Parameters:

s3_input_file_type (Optional[str]) – The file type that Amazon AppFlow gets from your Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3inputformatconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s3_input_format_config_property = appflow.CfnFlow.S3InputFormatConfigProperty(
    s3_input_file_type="s3InputFileType"
)

Attributes

s3_input_file_type

The file type that Amazon AppFlow gets from your Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3inputformatconfig.html#cfn-appflow-flow-s3inputformatconfig-s3inputfiletype

S3OutputFormatConfigProperty

class CfnFlow.S3OutputFormatConfigProperty(*, aggregation_config=None, file_type=None, prefix_config=None, preserve_source_data_typing=None)

Bases: object

The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.

Parameters:
  • aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.

  • file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.

  • prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any], None]) – Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date.

  • preserve_source_data_typing (Union[bool, IResolvable, None]) – If your file output format is Parquet, use this parameter to set whether Amazon AppFlow preserves the data types in your source data when it writes the output to Amazon S3. - true : Amazon AppFlow preserves the data types when it writes to Amazon S3. For example, an integer or 1 in your source data is still an integer in your output. - false : Amazon AppFlow converts all of the source data into strings when it writes to Amazon S3. For example, an integer of 1 in your source data becomes the string "1" in the output.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3outputformatconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s3_output_format_config_property = appflow.CfnFlow.S3OutputFormatConfigProperty(
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="aggregationType",
        target_file_size=123
    ),
    file_type="fileType",
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        path_prefix_hierarchy=["pathPrefixHierarchy"],
        prefix_format="prefixFormat",
        prefix_type="prefixType"
    ),
    preserve_source_data_typing=False
)

Attributes

aggregation_config

The aggregation settings that you can use to customize the output format of your flow data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3outputformatconfig.html#cfn-appflow-flow-s3outputformatconfig-aggregationconfig

file_type

Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3outputformatconfig.html#cfn-appflow-flow-s3outputformatconfig-filetype

prefix_config

Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket.

You can name folders according to the flow frequency and date.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3outputformatconfig.html#cfn-appflow-flow-s3outputformatconfig-prefixconfig

preserve_source_data_typing

If your file output format is Parquet, use this parameter to set whether Amazon AppFlow preserves the data types in your source data when it writes the output to Amazon S3.

  • true : Amazon AppFlow preserves the data types when it writes to Amazon S3. For example, an integer or 1 in your source data is still an integer in your output.

  • false : Amazon AppFlow converts all of the source data into strings when it writes to Amazon S3. For example, an integer of 1 in your source data becomes the string "1" in the output.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3outputformatconfig.html#cfn-appflow-flow-s3outputformatconfig-preservesourcedatatyping
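
The sketch below ties the options above together for a Parquet output that keeps the source data types and aggregates each flow run into a single file. The "PARQUET" and "SingleFile" strings are assumptions about the accepted values for file_type and aggregation_type, not values taken from this reference.

from aws_cdk import aws_appflow as appflow

# Write Parquet output, keep source data types (only meaningful for Parquet),
# and aggregate all records of a flow run into a single output file.
# "PARQUET" and "SingleFile" are assumed values for the respective fields.
s3_output_format = appflow.CfnFlow.S3OutputFormatConfigProperty(
    file_type="PARQUET",
    preserve_source_data_typing=True,
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="SingleFile"
    ),
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        path_prefix_hierarchy=["EXECUTION_ID"]
    )
)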

S3SourcePropertiesProperty

class CfnFlow.S3SourcePropertiesProperty(*, bucket_name, bucket_prefix, s3_input_format_config=None)

Bases: object

The properties that are applied when Amazon S3 is being used as the flow source.

Parameters:
  • bucket_name (str) – The Amazon S3 bucket name where the source files are stored.

  • bucket_prefix (str) – The object key for the Amazon S3 bucket in which the source files are stored.

  • s3_input_format_config (Union[IResolvable, S3InputFormatConfigProperty, Dict[str, Any], None]) – When you use Amazon S3 as the source, the configuration format that you provide for the flow input data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3sourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s3_source_properties_property = appflow.CfnFlow.S3SourcePropertiesProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",

    # the properties below are optional
    s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
        s3_input_file_type="s3InputFileType"
    )
)

Attributes

bucket_name

The Amazon S3 bucket name where the source files are stored.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3sourceproperties.html#cfn-appflow-flow-s3sourceproperties-bucketname

bucket_prefix

The object key for the Amazon S3 bucket in which the source files are stored.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3sourceproperties.html#cfn-appflow-flow-s3sourceproperties-bucketprefix

s3_input_format_config

When you use Amazon S3 as the source, the configuration format that you provide for the flow input data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-s3sourceproperties.html#cfn-appflow-flow-s3sourceproperties-s3inputformatconfig
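
A minimal sketch of an S3 source that reads CSV files follows; the bucket, prefix, and the "CSV" file-type string are illustrative assumptions.

from aws_cdk import aws_appflow as appflow

# Read CSV objects from a (hypothetical) bucket and prefix as the flow source.
# "CSV" is an assumed value for s3_input_file_type.
s3_source = appflow.CfnFlow.S3SourcePropertiesProperty(
    bucket_name="my-appflow-input-bucket",
    bucket_prefix="imports/accounts",
    s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
        s3_input_file_type="CSV"
    )
)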

SAPODataDestinationPropertiesProperty

class CfnFlow.SAPODataDestinationPropertiesProperty(*, object_path, error_handling_config=None, id_field_names=None, success_response_handling_config=None, write_operation_type=None)

Bases: object

The properties that are applied when using SAPOData as a flow destination.

Parameters:
  • object_path (str) – The object path specified in the SAPOData flow destination.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

  • id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.

  • success_response_handling_config (Union[IResolvable, SuccessResponseHandlingConfigProperty, Dict[str, Any], None]) – Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data. For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.

  • write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s_aPOData_destination_properties_property = appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
    object_path="objectPath",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix"
    ),
    write_operation_type="writeOperationType"
)

Attributes

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html#cfn-appflow-flow-sapodatadestinationproperties-errorhandlingconfig

id_field_names

A list of field names that can be used as an ID field when performing a write operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html#cfn-appflow-flow-sapodatadestinationproperties-idfieldnames

object_path

The object path specified in the SAPOData flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html#cfn-appflow-flow-sapodatadestinationproperties-objectpath

success_response_handling_config

Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.

For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html#cfn-appflow-flow-sapodatadestinationproperties-successresponsehandlingconfig

write_operation_type

The possible write operations in the destination connector.

When this value is not provided, this defaults to the INSERT operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatadestinationproperties.html#cfn-appflow-flow-sapodatadestinationproperties-writeoperationtype
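
As an illustration of the write-operation and response-handling settings above, the sketch below performs an upsert keyed on a hypothetical ID field and lands success responses in an S3 location. The object path, field name, bucket values, and the "UPSERT" string are assumptions rather than values from this reference.

from aws_cdk import aws_appflow as appflow

# Upsert records into a (hypothetical) SAP OData entity, keyed on an assumed ID
# field, and write the connector's success responses to an S3 location.
sapo_destination = appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
    object_path="/sap/opu/odata/sap/ZMY_SERVICE/Accounts",  # hypothetical path
    write_operation_type="UPSERT",  # assumed enum value
    id_field_names=["AccountID"],  # hypothetical key field
    success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
        bucket_name="my-appflow-response-bucket",
        bucket_prefix="sap/success"
    )
)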

SAPODataPaginationConfigProperty

class CfnFlow.SAPODataPaginationConfigProperty(*, max_page_size)

Bases: object

SAP Source connector page size.

Parameters:

max_page_size (Union[int, float]) – The maximum page size for the SAP source connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatapaginationconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s_aPOData_pagination_config_property = appflow.CfnFlow.SAPODataPaginationConfigProperty(
    max_page_size=123
)

Attributes

max_page_size

The maximum page size for the SAP source connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatapaginationconfig.html#cfn-appflow-flow-sapodatapaginationconfig-maxpagesize

SAPODataParallelismConfigProperty

class CfnFlow.SAPODataParallelismConfigProperty(*, max_parallelism)

Bases: object

SAP Source connector parallelism factor.

Parameters:

max_parallelism (Union[int, float]) – The maximum parallelism factor for the SAP source connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodataparallelismconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s_aPOData_parallelism_config_property = appflow.CfnFlow.SAPODataParallelismConfigProperty(
    max_parallelism=123
)

Attributes

max_parallelism

The maximum parallelism factor for the SAP source connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodataparallelismconfig.html#cfn-appflow-flow-sapodataparallelismconfig-maxparallelism

SAPODataSourcePropertiesProperty

class CfnFlow.SAPODataSourcePropertiesProperty(*, object_path, pagination_config=None, parallelism_config=None)

Bases: object

The properties that are applied when using SAPOData as a flow source.

Parameters:
  • object_path (str) – The object path specified in the SAPOData flow source.

  • pagination_config (Union[IResolvable, SAPODataPaginationConfigProperty, Dict[str, Any], None]) – SAP Source connector page size.

  • parallelism_config (Union[IResolvable, SAPODataParallelismConfigProperty, Dict[str, Any], None]) – SAP Source connector parallelism factor.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatasourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

s_aPOData_source_properties_property = appflow.CfnFlow.SAPODataSourcePropertiesProperty(
    object_path="objectPath",

    # the properties below are optional
    pagination_config=appflow.CfnFlow.SAPODataPaginationConfigProperty(
        max_page_size=123
    ),
    parallelism_config=appflow.CfnFlow.SAPODataParallelismConfigProperty(
        max_parallelism=123
    )
)

Attributes

object_path

The object path specified in the SAPOData flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatasourceproperties.html#cfn-appflow-flow-sapodatasourceproperties-objectpath

pagination_config

SAP Source connector page size.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatasourceproperties.html#cfn-appflow-flow-sapodatasourceproperties-paginationconfig

parallelism_config

SAP Source connector parallelism factor.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sapodatasourceproperties.html#cfn-appflow-flow-sapodatasourceproperties-parallelismconfig

SalesforceDestinationPropertiesProperty

class CfnFlow.SalesforceDestinationPropertiesProperty(*, object, data_transfer_api=None, error_handling_config=None, id_field_names=None, write_operation_type=None)

Bases: object

The properties that are applied when Salesforce is being used as a destination.

Parameters:
  • object (str) – The object specified in the Salesforce flow destination.

  • data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

  • id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.

  • write_operation_type (Optional[str]) – This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then idFieldNames is required.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

salesforce_destination_properties_property = appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    data_transfer_api="dataTransferApi",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)

Attributes

data_transfer_api

Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce.

  • AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.

Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.

By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.

  • BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.

Note that Bulk API 2.0 does not transfer Salesforce compound fields.

  • REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html#cfn-appflow-flow-salesforcedestinationproperties-datatransferapi

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html#cfn-appflow-flow-salesforcedestinationproperties-errorhandlingconfig

id_field_names

The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html#cfn-appflow-flow-salesforcedestinationproperties-idfieldnames

object

The object specified in the Salesforce flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html#cfn-appflow-flow-salesforcedestinationproperties-object

write_operation_type

This specifies the type of write operation to be performed in Salesforce.

When the value is UPSERT, then idFieldNames is required.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcedestinationproperties.html#cfn-appflow-flow-salesforcedestinationproperties-writeoperationtype
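
Pulling the settings above together, the sketch below upserts into a Salesforce object over Bulk API 2.0 and names the ID field that UPSERT requires. The object and field names are hypothetical, while "UPSERT" and "BULKV2" are the values referenced in the descriptions above.

from aws_cdk import aws_appflow as appflow

# Upsert into a (hypothetical) Salesforce object using Bulk API 2.0, with the
# ID field that UPSERT requires and errors collected in an assumed S3 location.
salesforce_destination = appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
    object="Account",  # hypothetical Salesforce object
    data_transfer_api="BULKV2",
    write_operation_type="UPSERT",
    id_field_names=["External_Id__c"],  # hypothetical external ID field
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="my-appflow-error-bucket",
        bucket_prefix="errors/salesforce",
        fail_on_first_error=False
    )
)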

SalesforceSourcePropertiesProperty

class CfnFlow.SalesforceSourcePropertiesProperty(*, object, data_transfer_api=None, enable_dynamic_field_update=None, include_deleted_records=None)

Bases: object

The properties that are applied when Salesforce is being used as a source.

Parameters:
  • object (str) – The object specified in the Salesforce flow source.

  • data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.

  • enable_dynamic_field_update (Union[bool, IResolvable, None]) – The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.

  • include_deleted_records (Union[bool, IResolvable, None]) – Indicates whether Amazon AppFlow includes deleted files in the flow run.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcesourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

salesforce_source_properties_property = appflow.CfnFlow.SalesforceSourcePropertiesProperty(
    object="object",

    # the properties below are optional
    data_transfer_api="dataTransferApi",
    enable_dynamic_field_update=False,
    include_deleted_records=False
)

Attributes

data_transfer_api

Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce.

  • AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.

Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.

By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.

  • BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.

Note that Bulk API 2.0 does not transfer Salesforce compound fields.

  • REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcesourceproperties.html#cfn-appflow-flow-salesforcesourceproperties-datatransferapi

enable_dynamic_field_update

The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcesourceproperties.html#cfn-appflow-flow-salesforcesourceproperties-enabledynamicfieldupdate

include_deleted_records

Indicates whether Amazon AppFlow includes deleted files in the flow run.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcesourceproperties.html#cfn-appflow-flow-salesforcesourceproperties-includedeletedrecords

object

The object specified in the Salesforce flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-salesforcesourceproperties.html#cfn-appflow-flow-salesforcesourceproperties-object
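
For illustration, a minimal sketch of a Salesforce source with concrete values, assuming a standard "Account" object (a placeholder) and forcing Bulk API 2.0 as described under data_transfer_api above:

from aws_cdk import aws_appflow as appflow

# Illustrative sketch only; "Account" is a placeholder Salesforce object name.
salesforce_source = appflow.CfnFlow.SalesforceSourcePropertiesProperty(
    object="Account",
    # BULKV2 gives consistent output formatting but is tuned for large transfers
    data_transfer_api="BULKV2",
    enable_dynamic_field_update=True,
    include_deleted_records=False
)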

ScheduledTriggerPropertiesProperty

class CfnFlow.ScheduledTriggerPropertiesProperty(*, schedule_expression, data_pull_mode=None, first_execution_from=None, flow_error_deactivation_threshold=None, schedule_end_time=None, schedule_offset=None, schedule_start_time=None, time_zone=None)

Bases: object

Specifies the configuration details of a schedule-triggered flow as defined by the user.

Currently, these settings only apply to the Scheduled trigger type.

Parameters:
  • schedule_expression (str) – The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes) .

  • data_pull_mode (Optional[str]) – Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.

  • first_execution_from (Union[int, float, None]) – Specifies the date range for the records to import from the connector in the first flow run.

  • flow_error_deactivation_threshold (Union[int, float, None]) – Defines how many times a scheduled flow fails consecutively before Amazon AppFlow deactivates it.

  • schedule_end_time (Union[int, float, None]) – The time at which the scheduled flow ends. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-27T13:00:00-07:00 .

  • schedule_offset (Union[int, float, None]) – Specifies the optional offset that is added to the time interval for a schedule-triggered flow.

  • schedule_start_time (Union[int, float, None]) – The time at which the scheduled flow starts. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-26T13:00:00-07:00 .

  • time_zone (Optional[str]) – Specifies the time zone used when referring to the dates and times of a scheduled flow, such as America/New_York . This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow. If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the America/New_York timezone are -04:00 EDT and -05:00 EST .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

scheduled_trigger_properties_property = appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
    schedule_expression="scheduleExpression",

    # the properties below are optional
    data_pull_mode="dataPullMode",
    first_execution_from=123,
    flow_error_deactivation_threshold=123,
    schedule_end_time=123,
    schedule_offset=123,
    schedule_start_time=123,
    time_zone="timeZone"
)

Attributes

data_pull_mode

Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-datapullmode

first_execution_from

Specifies the date range for the records to import from the connector in the first flow run.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-firstexecutionfrom

flow_error_deactivation_threshold

Defines how many times a scheduled flow fails consecutively before Amazon AppFlow deactivates it.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-flowerrordeactivationthreshold

schedule_end_time

The time at which the scheduled flow ends.

The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-27T13:00:00-07:00 .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-scheduleendtime

schedule_expression

The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes) .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-scheduleexpression

schedule_offset

Specifies the optional offset that is added to the time interval for a schedule-triggered flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-scheduleoffset

schedule_start_time

The time at which the scheduled flow starts.

The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-26T13:00:00-07:00 .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-schedulestarttime

time_zone

Specifies the time zone used when referring to the dates and times of a scheduled flow, such as America/New_York .

This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow.

If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the America/New_York timezone are -04:00 EDT and -05:00 EST .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-scheduledtriggerproperties.html#cfn-appflow-flow-scheduledtriggerproperties-timezone
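
For illustration, a minimal sketch of a scheduled trigger with concrete values. The schedule_start_time and schedule_end_time fields take numbers in the CDK signature; this sketch assumes they are supplied as Unix epoch seconds corresponding to the ISO 8601 instants mentioned above, and the data_pull_mode value is likewise an assumption:

from aws_cdk import aws_appflow as appflow

# Illustrative sketch only; the epoch values are assumed conversions, not defaults.
scheduled_trigger_properties = appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
    schedule_expression="rate(5minutes)",
    data_pull_mode="Incremental",
    schedule_start_time=1651003200,  # assumed epoch seconds for 2022-04-26T13:00:00-07:00
    schedule_end_time=1651089600,    # assumed epoch seconds for 2022-04-27T13:00:00-07:00
    time_zone="America/New_York"
)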

ServiceNowSourcePropertiesProperty

class CfnFlow.ServiceNowSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when ServiceNow is being used as a source.

Parameters:

object (str) – The object specified in the ServiceNow flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-servicenowsourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

service_now_source_properties_property = appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the ServiceNow flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-servicenowsourceproperties.html#cfn-appflow-flow-servicenowsourceproperties-object

SingularSourcePropertiesProperty

class CfnFlow.SingularSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Singular is being used as a source.

Parameters:

object (str) – The object specified in the Singular flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-singularsourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

singular_source_properties_property = appflow.CfnFlow.SingularSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Singular flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-singularsourceproperties.html#cfn-appflow-flow-singularsourceproperties-object

SlackSourcePropertiesProperty

class CfnFlow.SlackSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when Slack is being used as a source.

Parameters:

object (str) – The object specified in the Slack flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-slacksourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

slack_source_properties_property = appflow.CfnFlow.SlackSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Slack flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-slacksourceproperties.html#cfn-appflow-flow-slacksourceproperties-object

SnowflakeDestinationPropertiesProperty

class CfnFlow.SnowflakeDestinationPropertiesProperty(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)

Bases: object

The properties that are applied when Snowflake is being used as a destination.

Parameters:
  • intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.

  • object (str) – The object specified in the Snowflake flow destination.

  • bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-snowflakedestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

snowflake_destination_properties_property = appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
    intermediate_bucket_name="intermediateBucketName",
    object="object",

    # the properties below are optional
    bucket_prefix="bucketPrefix",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)

Attributes

bucket_prefix

The object key for the destination bucket in which Amazon AppFlow places the files.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-snowflakedestinationproperties.html#cfn-appflow-flow-snowflakedestinationproperties-bucketprefix

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-snowflakedestinationproperties.html#cfn-appflow-flow-snowflakedestinationproperties-errorhandlingconfig

intermediate_bucket_name

The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-snowflakedestinationproperties.html#cfn-appflow-flow-snowflakedestinationproperties-intermediatebucketname

object

The object specified in the Snowflake flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-snowflakedestinationproperties.html#cfn-appflow-flow-snowflakedestinationproperties-object

SourceConnectorPropertiesProperty

class CfnFlow.SourceConnectorPropertiesProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)

Bases: object

Specifies the information that is required to query a particular connector.

Parameters:
  • amplitude (Union[IResolvable, AmplitudeSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amplitude.

  • custom_connector (Union[IResolvable, CustomConnectorSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when the custom connector is being used as a source.

  • datadog (Union[IResolvable, DatadogSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Datadog.

  • dynatrace (Union[IResolvable, DynatraceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Dynatrace.

  • google_analytics (Union[IResolvable, GoogleAnalyticsSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Google Analytics.

  • infor_nexus (Union[IResolvable, InforNexusSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Infor Nexus.

  • marketo (Union[IResolvable, MarketoSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Marketo.

  • pardot (Union[IResolvable, PardotSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Salesforce Pardot.

  • s3 (Union[IResolvable, S3SourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amazon S3.

  • salesforce (Union[IResolvable, SalesforceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Salesforce.

  • sapo_data (Union[IResolvable, SAPODataSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when using SAPOData as a flow source.

  • service_now (Union[IResolvable, ServiceNowSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying ServiceNow.

  • singular (Union[IResolvable, SingularSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Singular.

  • slack (Union[IResolvable, SlackSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Slack.

  • trendmicro (Union[IResolvable, TrendmicroSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Trend Micro.

  • veeva (Union[IResolvable, VeevaSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Veeva.

  • zendesk (Union[IResolvable, ZendeskSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Zendesk.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

source_connector_properties_property = appflow.CfnFlow.SourceConnectorPropertiesProperty(
    amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
        object="object"
    ),
    custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty(
        entity_name="entityName",

        # the properties below are optional
        custom_properties={
            "custom_properties_key": "customProperties"
        },
        data_transfer_api=appflow.CfnFlow.DataTransferApiProperty(
            name="name",
            type="type"
        )
    ),
    datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
        object="object"
    ),
    dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
        object="object"
    ),
    google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
        object="object"
    ),
    infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
        object="object"
    ),
    marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
        object="object"
    ),
    pardot=appflow.CfnFlow.PardotSourcePropertiesProperty(
        object="object"
    ),
    s3=appflow.CfnFlow.S3SourcePropertiesProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",

        # the properties below are optional
        s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
            s3_input_file_type="s3InputFileType"
        )
    ),
    salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
        object="object",

        # the properties below are optional
        data_transfer_api="dataTransferApi",
        enable_dynamic_field_update=False,
        include_deleted_records=False
    ),
    sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
        object_path="objectPath",

        # the properties below are optional
        pagination_config=appflow.CfnFlow.SAPODataPaginationConfigProperty(
            max_page_size=123
        ),
        parallelism_config=appflow.CfnFlow.SAPODataParallelismConfigProperty(
            max_parallelism=123
        )
    ),
    service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
        object="object"
    ),
    singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
        object="object"
    ),
    slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
        object="object"
    ),
    trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
        object="object"
    ),
    veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
        object="object",

        # the properties below are optional
        document_type="documentType",
        include_all_versions=False,
        include_renditions=False,
        include_source_files=False
    ),
    zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
        object="object"
    )
)

Attributes

amplitude

Specifies the information that is required for querying Amplitude.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-amplitude

custom_connector

The properties that are applied when the custom connector is being used as a source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-customconnector

datadog

Specifies the information that is required for querying Datadog.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-datadog

dynatrace

Specifies the information that is required for querying Dynatrace.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-dynatrace

google_analytics

Specifies the information that is required for querying Google Analytics.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-googleanalytics

infor_nexus

Specifies the information that is required for querying Infor Nexus.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-infornexus

marketo

Specifies the information that is required for querying Marketo.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-marketo

pardot

Specifies the information that is required for querying Salesforce Pardot.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-pardot

s3

Specifies the information that is required for querying Amazon S3.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-s3

salesforce

Specifies the information that is required for querying Salesforce.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-salesforce

sapo_data

The properties that are applied when using SAPOData as a flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-sapodata

service_now

Specifies the information that is required for querying ServiceNow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-servicenow

singular

Specifies the information that is required for querying Singular.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-singular

slack

Specifies the information that is required for querying Slack.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-slack

trendmicro

Specifies the information that is required for querying Trend Micro.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-trendmicro

veeva

Specifies the information that is required for querying Veeva.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-veeva

zendesk

Specifies the information that is required for querying Zendesk.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceconnectorproperties.html#cfn-appflow-flow-sourceconnectorproperties-zendesk

SourceFlowConfigProperty

class CfnFlow.SourceFlowConfigProperty(*, connector_type, source_connector_properties, api_version=None, connector_profile_name=None, incremental_pull_config=None)

Bases: object

Contains information about the configuration of the source connector used in the flow.

Parameters:
  • connector_type (str) – The type of connector, such as Salesforce, Amplitude, and so on.

  • source_connector_properties (Union[IResolvable, SourceConnectorPropertiesProperty, Dict[str, Any]]) – Specifies the information that is required to query a particular source connector.

  • api_version (Optional[str]) – The API version of the connector when it’s used as a source in the flow.

  • connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account .

  • incremental_pull_config (Union[IResolvable, IncrementalPullConfigProperty, Dict[str, Any], None]) – Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

source_flow_config_property = appflow.CfnFlow.SourceFlowConfigProperty(
    connector_type="connectorType",
    source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
        amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
            object="object"
        ),
        custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty(
            entity_name="entityName",

            # the properties below are optional
            custom_properties={
                "custom_properties_key": "customProperties"
            },
            data_transfer_api=appflow.CfnFlow.DataTransferApiProperty(
                name="name",
                type="type"
            )
        ),
        datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
            object="object"
        ),
        dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
            object="object"
        ),
        google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
            object="object"
        ),
        infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
            object="object"
        ),
        marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
            object="object"
        ),
        pardot=appflow.CfnFlow.PardotSourcePropertiesProperty(
            object="object"
        ),
        s3=appflow.CfnFlow.S3SourcePropertiesProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",

            # the properties below are optional
            s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
                s3_input_file_type="s3InputFileType"
            )
        ),
        salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
            object="object",

            # the properties below are optional
            data_transfer_api="dataTransferApi",
            enable_dynamic_field_update=False,
            include_deleted_records=False
        ),
        sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
            object_path="objectPath",

            # the properties below are optional
            pagination_config=appflow.CfnFlow.SAPODataPaginationConfigProperty(
                max_page_size=123
            ),
            parallelism_config=appflow.CfnFlow.SAPODataParallelismConfigProperty(
                max_parallelism=123
            )
        ),
        service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
            object="object"
        ),
        singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
            object="object"
        ),
        slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
            object="object"
        ),
        trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
            object="object"
        ),
        veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
            object="object",

            # the properties below are optional
            document_type="documentType",
            include_all_versions=False,
            include_renditions=False,
            include_source_files=False
        ),
        zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
            object="object"
        )
    ),

    # the properties below are optional
    api_version="apiVersion",
    connector_profile_name="connectorProfileName",
    incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
        datetime_type_field_name="datetimeTypeFieldName"
    )
)

Attributes

api_version

The API version of the connector when it’s used as a source in the flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html#cfn-appflow-flow-sourceflowconfig-apiversion

connector_profile_name

The name of the connector profile.

This name must be unique for each connector profile in the AWS account .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html#cfn-appflow-flow-sourceflowconfig-connectorprofilename

connector_type

The type of connector, such as Salesforce, Amplitude, and so on.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html#cfn-appflow-flow-sourceflowconfig-connectortype

incremental_pull_config

Defines the configuration for a scheduled incremental data pull.

If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html#cfn-appflow-flow-sourceflowconfig-incrementalpullconfig

source_connector_properties

Specifies the information that is required to query a particular source connector.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-sourceflowconfig.html#cfn-appflow-flow-sourceflowconfig-sourceconnectorproperties
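
For illustration, a minimal sketch of a Salesforce source flow configuration with an incremental pull, assuming an existing connector profile named "my-salesforce-profile" and a "LastModifiedDate" datetime field (both placeholders):

from aws_cdk import aws_appflow as appflow

# Illustrative sketch only; the profile, object, and field names are placeholders.
source_flow_config = appflow.CfnFlow.SourceFlowConfigProperty(
    connector_type="Salesforce",
    connector_profile_name="my-salesforce-profile",
    source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
        salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
            object="Account"
        )
    ),
    incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
        datetime_type_field_name="LastModifiedDate"
    )
)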

SuccessResponseHandlingConfigProperty

class CfnFlow.SuccessResponseHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None)

Bases: object

Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.

For example, this setting would determine where to write the response from the destination connector upon a successful insert operation.

Parameters:
  • bucket_name (Optional[str]) – The name of the Amazon S3 bucket.

  • bucket_prefix (Optional[str]) – The Amazon S3 bucket prefix.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-successresponsehandlingconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

success_response_handling_config_property = appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix"
)

Attributes

bucket_name

The name of the Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-successresponsehandlingconfig.html#cfn-appflow-flow-successresponsehandlingconfig-bucketname

bucket_prefix

The Amazon S3 bucket prefix.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-successresponsehandlingconfig.html#cfn-appflow-flow-successresponsehandlingconfig-bucketprefix

TaskPropertiesObjectProperty

class CfnFlow.TaskPropertiesObjectProperty(*, key, value)

Bases: object

A map used to store task-related information.

The execution service looks for particular information based on the TaskType .

Parameters:
  • key (str) – The task property key.

  • value (str) – The task property value.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-taskpropertiesobject.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

task_properties_object_property = appflow.CfnFlow.TaskPropertiesObjectProperty(
    key="key",
    value="value"
)

Attributes

key

The task property key.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-taskpropertiesobject.html#cfn-appflow-flow-taskpropertiesobject-key

value

The task property value.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-taskpropertiesobject.html#cfn-appflow-flow-taskpropertiesobject-value

TaskProperty

class CfnFlow.TaskProperty(*, source_fields, task_type, connector_operator=None, destination_field=None, task_properties=None)

Bases: object

A class for modeling different types of tasks.

Task implementation varies based on the TaskType .

Parameters:
  • source_fields (Sequence[str]) – The source fields to which a particular task is applied.

  • task_type (str) – Specifies the particular task implementation that Amazon AppFlow performs. Allowed values : Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate

  • connector_operator (Union[IResolvable, ConnectorOperatorProperty, Dict[str, Any], None]) – The operation to be performed on the provided source fields.

  • destination_field (Optional[str]) – A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.

  • task_properties (Union[IResolvable, Sequence[Union[IResolvable, TaskPropertiesObjectProperty, Dict[str, Any]]], None]) – A map used to store task-related information. The execution service looks for particular information based on the TaskType .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

task_property = appflow.CfnFlow.TaskProperty(
    source_fields=["sourceFields"],
    task_type="taskType",

    # the properties below are optional
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        amplitude="amplitude",
        custom_connector="customConnector",
        datadog="datadog",
        dynatrace="dynatrace",
        google_analytics="googleAnalytics",
        infor_nexus="inforNexus",
        marketo="marketo",
        pardot="pardot",
        s3="s3",
        salesforce="salesforce",
        sapo_data="sapoData",
        service_now="serviceNow",
        singular="singular",
        slack="slack",
        trendmicro="trendmicro",
        veeva="veeva",
        zendesk="zendesk"
    ),
    destination_field="destinationField",
    task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
        key="key",
        value="value"
    )]
)

Attributes

connector_operator

The operation to be performed on the provided source fields.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html#cfn-appflow-flow-task-connectoroperator

destination_field

A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html#cfn-appflow-flow-task-destinationfield

source_fields

The source fields to which a particular task is applied.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html#cfn-appflow-flow-task-sourcefields

task_properties

A map used to store task-related information.

The execution service looks for particular information based on the TaskType .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html#cfn-appflow-flow-task-taskproperties

task_type

Specifies the particular task implementation that Amazon AppFlow performs.

Allowed values : Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html#cfn-appflow-flow-task-tasktype
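
For illustration, a minimal sketch of a Map task that copies a single field to a destination field of the same name, assuming "Email" exists on the chosen Salesforce object (the NO_OP operator passes the field through unchanged, and the task property shown is an assumed example, not a required entry):

from aws_cdk import aws_appflow as appflow

# Illustrative sketch only; field names and the task property are placeholders.
map_task = appflow.CfnFlow.TaskProperty(
    source_fields=["Email"],
    task_type="Map",
    destination_field="Email",
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        salesforce="NO_OP"
    ),
    task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
        key="DESTINATION_DATA_TYPE",
        value="string"
    )]
)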

TrendmicroSourcePropertiesProperty

class CfnFlow.TrendmicroSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when using Trend Micro as a flow source.

Parameters:

object (str) – The object specified in the Trend Micro flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-trendmicrosourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

trendmicro_source_properties_property = appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Trend Micro flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-trendmicrosourceproperties.html#cfn-appflow-flow-trendmicrosourceproperties-object

TriggerConfigProperty

class CfnFlow.TriggerConfigProperty(*, trigger_type, trigger_properties=None)

Bases: object

The trigger settings that determine how and when Amazon AppFlow runs the specified flow.

Parameters:
  • trigger_type (str) – Specifies the type of flow trigger. This can be OnDemand , Scheduled , or Event .

  • trigger_properties (Union[IResolvable, ScheduledTriggerPropertiesProperty, Dict[str, Any], None]) – Specifies the configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-triggerconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

trigger_config_property = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="triggerType",

    # the properties below are optional
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="scheduleExpression",

        # the properties below are optional
        data_pull_mode="dataPullMode",
        first_execution_from=123,
        flow_error_deactivation_threshold=123,
        schedule_end_time=123,
        schedule_offset=123,
        schedule_start_time=123,
        time_zone="timeZone"
    )
)

Attributes

trigger_properties

Specifies the configuration details of a schedule-triggered flow as defined by the user.

Currently, these settings only apply to the Scheduled trigger type.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-triggerconfig.html#cfn-appflow-flow-triggerconfig-triggerproperties

trigger_type

Specifies the type of flow trigger.

This can be OnDemand , Scheduled , or Event .

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-triggerconfig.html#cfn-appflow-flow-triggerconfig-triggertype
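
For illustration, two minimal sketches covering the documented trigger types: an on-demand trigger, which needs no trigger properties, and a scheduled trigger that reuses the rate(5minutes) expression shown earlier (the data_pull_mode value is an assumption):

from aws_cdk import aws_appflow as appflow

# On-demand: the flow runs only when it is started explicitly.
on_demand_trigger = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="OnDemand"
)

# Scheduled: the flow runs according to the schedule expression.
scheduled_trigger = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="Scheduled",
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="rate(5minutes)",
        data_pull_mode="Incremental"
    )
)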

UpsolverDestinationPropertiesProperty

class CfnFlow.UpsolverDestinationPropertiesProperty(*, bucket_name, s3_output_format_config, bucket_prefix=None)

Bases: object

The properties that are applied when Upsolver is used as a destination.

Parameters:
  • bucket_name (str) – The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.

  • s3_output_format_config (Union[IResolvable, UpsolverS3OutputFormatConfigProperty, Dict[str, Any]]) – The configuration that determines how data is formatted when Upsolver is used as the flow destination.

  • bucket_prefix (Optional[str]) – The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolverdestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

upsolver_destination_properties_property = appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
    bucket_name="bucketName",
    s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
        prefix_config=appflow.CfnFlow.PrefixConfigProperty(
            path_prefix_hierarchy=["pathPrefixHierarchy"],
            prefix_format="prefixFormat",
            prefix_type="prefixType"
        ),

        # the properties below are optional
        aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
            aggregation_type="aggregationType",
            target_file_size=123
        ),
        file_type="fileType"
    ),

    # the properties below are optional
    bucket_prefix="bucketPrefix"
)

Attributes

bucket_name

The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolverdestinationproperties.html#cfn-appflow-flow-upsolverdestinationproperties-bucketname

bucket_prefix

The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolverdestinationproperties.html#cfn-appflow-flow-upsolverdestinationproperties-bucketprefix

s3_output_format_config

The configuration that determines how data is formatted when Upsolver is used as the flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolverdestinationproperties.html#cfn-appflow-flow-upsolverdestinationproperties-s3outputformatconfig

UpsolverS3OutputFormatConfigProperty

class CfnFlow.UpsolverS3OutputFormatConfigProperty(*, prefix_config, aggregation_config=None, file_type=None)

Bases: object

The configuration that determines how Amazon AppFlow formats the flow output data when Upsolver is used as the destination.

Parameters:
  • prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any]]) – Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.

  • aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.

  • file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolvers3outputformatconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

upsolver_s3_output_format_config_property = appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        path_prefix_hierarchy=["pathPrefixHierarchy"],
        prefix_format="prefixFormat",
        prefix_type="prefixType"
    ),

    # the properties below are optional
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="aggregationType",
        target_file_size=123
    ),
    file_type="fileType"
)

Attributes

aggregation_config

The aggregation settings that you can use to customize the output format of your flow data.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolvers3outputformatconfig.html#cfn-appflow-flow-upsolvers3outputformatconfig-aggregationconfig

file_type

Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolvers3outputformatconfig.html#cfn-appflow-flow-upsolvers3outputformatconfig-filetype

prefix_config

Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-upsolvers3outputformatconfig.html#cfn-appflow-flow-upsolvers3outputformatconfig-prefixconfig

VeevaSourcePropertiesProperty

class CfnFlow.VeevaSourcePropertiesProperty(*, object, document_type=None, include_all_versions=None, include_renditions=None, include_source_files=None)

Bases: object

The properties that are applied when using Veeva as a flow source.

Parameters:
  • object (str) – The object specified in the Veeva flow source.

  • document_type (Optional[str]) – The document type specified in the Veeva document extract flow.

  • include_all_versions (Union[bool, IResolvable, None]) – Boolean value to include All Versions of files in Veeva document extract flow.

  • include_renditions (Union[bool, IResolvable, None]) – Boolean value to include file renditions in Veeva document extract flow.

  • include_source_files (Union[bool, IResolvable, None]) – Boolean value to include source files in Veeva document extract flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

veeva_source_properties_property = appflow.CfnFlow.VeevaSourcePropertiesProperty(
    object="object",

    # the properties below are optional
    document_type="documentType",
    include_all_versions=False,
    include_renditions=False,
    include_source_files=False
)

Attributes

document_type

The document type specified in the Veeva document extract flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html#cfn-appflow-flow-veevasourceproperties-documenttype

include_all_versions

Boolean value to include All Versions of files in Veeva document extract flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html#cfn-appflow-flow-veevasourceproperties-includeallversions

include_renditions

Boolean value to include file renditions in Veeva document extract flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html#cfn-appflow-flow-veevasourceproperties-includerenditions

include_source_files

Boolean value to include source files in Veeva document extract flow.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html#cfn-appflow-flow-veevasourceproperties-includesourcefiles

object

The object specified in the Veeva flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-veevasourceproperties.html#cfn-appflow-flow-veevasourceproperties-object

ZendeskDestinationPropertiesProperty

class CfnFlow.ZendeskDestinationPropertiesProperty(*, object, error_handling_config=None, id_field_names=None, write_operation_type=None)

Bases: object

The properties that are applied when Zendesk is used as a destination.

Parameters:
  • object (str) – The object specified in the Zendesk flow destination.

  • error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

  • id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.

  • write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendeskdestinationproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

zendesk_destination_properties_property = appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)

Attributes

error_handling_config

The settings that determine how Amazon AppFlow handles an error when placing data in the destination.

For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendeskdestinationproperties.html#cfn-appflow-flow-zendeskdestinationproperties-errorhandlingconfig

id_field_names

A list of field names that can be used as an ID field when performing a write operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendeskdestinationproperties.html#cfn-appflow-flow-zendeskdestinationproperties-idfieldnames

object

The object specified in the Zendesk flow destination.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendeskdestinationproperties.html#cfn-appflow-flow-zendeskdestinationproperties-object

write_operation_type

The possible write operations in the destination connector.

When this value is not provided, this defaults to the INSERT operation.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendeskdestinationproperties.html#cfn-appflow-flow-zendeskdestinationproperties-writeoperationtype

ZendeskSourcePropertiesProperty

class CfnFlow.ZendeskSourcePropertiesProperty(*, object)

Bases: object

The properties that are applied when using Zendesk as a flow source.

Parameters:

object (str) – The object specified in the Zendesk flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendesksourceproperties.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_appflow as appflow

zendesk_source_properties_property = appflow.CfnFlow.ZendeskSourcePropertiesProperty(
    object="object"
)

Attributes

object

The object specified in the Zendesk flow source.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-zendesksourceproperties.html#cfn-appflow-flow-zendesksourceproperties-object