CfnFlow¶
- class aws_cdk.aws_appflow.CfnFlow(scope, id, *, destination_flow_config_list, flow_name, source_flow_config, tasks, trigger_config, description=None, kms_arn=None, tags=None)¶
Bases: aws_cdk.core.CfnResource
A CloudFormation AWS::AppFlow::Flow.
The AWS::AppFlow::Flow resource is an Amazon AppFlow resource type that specifies a new flow.
If you want to use AWS CloudFormation to create a connector profile for connectors that implement OAuth (such as Salesforce, Slack, Zendesk, and Google Analytics), you must fetch the access and refresh tokens. You can do this by implementing your own UI for OAuth, or by retrieving the tokens from elsewhere. Alternatively, you can use the Amazon AppFlow console to create the connector profile, and then use that connector profile in the flow creation CloudFormation template.
- CloudformationResource
AWS::AppFlow::Flow
- Link
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-appflow-flow.html
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

cfn_flow = appflow.CfnFlow(self, "MyCfnFlow",
    destination_flow_config_list=[appflow.CfnFlow.DestinationFlowConfigProperty(
        connector_type="connectorType",
        destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
            event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
                object="object",
                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
                object="object"
            ),
            marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
                object="object",
                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
                intermediate_bucket_name="intermediateBucketName",
                object="object",
                # the properties below are optional
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
                bucket_name="bucketName",
                # the properties below are optional
                bucket_prefix="bucketPrefix",
                s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
                    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                        aggregation_type="aggregationType"
                    ),
                    file_type="fileType",
                    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                        prefix_format="prefixFormat",
                        prefix_type="prefixType"
                    )
                )
            ),
            salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
                object="object",
                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            ),
            sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
                object_path="objectPath",
                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix"
                ),
                write_operation_type="writeOperationType"
            ),
            snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
                intermediate_bucket_name="intermediateBucketName",
                object="object",
                # the properties below are optional
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                )
            ),
            upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
                bucket_name="bucketName",
                s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
                    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                        prefix_format="prefixFormat",
                        prefix_type="prefixType"
                    ),
                    # the properties below are optional
                    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                        aggregation_type="aggregationType"
                    ),
                    file_type="fileType"
                ),
                # the properties below are optional
                bucket_prefix="bucketPrefix"
            ),
            zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
                object="object",
                # the properties below are optional
                error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                    bucket_name="bucketName",
                    bucket_prefix="bucketPrefix",
                    fail_on_first_error=False
                ),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            )
        ),
        # the properties below are optional
        connector_profile_name="connectorProfileName"
    )],
    flow_name="flowName",
    source_flow_config=appflow.CfnFlow.SourceFlowConfigProperty(
        connector_type="connectorType",
        source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
            amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
                object="object"
            ),
            datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
                object="object"
            ),
            dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
                object="object"
            ),
            google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
                object="object"
            ),
            infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
                object="object"
            ),
            marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
                object="object"
            ),
            s3=appflow.CfnFlow.S3SourcePropertiesProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                # the properties below are optional
                s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
                    s3_input_file_type="s3InputFileType"
                )
            ),
            salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
                object="object",
                # the properties below are optional
                enable_dynamic_field_update=False,
                include_deleted_records=False
            ),
            sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
                object_path="objectPath"
            ),
            service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
                object="object"
            ),
            singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
                object="object"
            ),
            slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
                object="object"
            ),
            trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
                object="object"
            ),
            veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
                object="object",
                # the properties below are optional
                document_type="documentType",
                include_all_versions=False,
                include_renditions=False,
                include_source_files=False
            ),
            zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
                object="object"
            )
        ),
        # the properties below are optional
        connector_profile_name="connectorProfileName",
        incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
            datetime_type_field_name="datetimeTypeFieldName"
        )
    ),
    tasks=[appflow.CfnFlow.TaskProperty(
        source_fields=["sourceFields"],
        task_type="taskType",
        # the properties below are optional
        connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
            amplitude="amplitude",
            datadog="datadog",
            dynatrace="dynatrace",
            google_analytics="googleAnalytics",
            infor_nexus="inforNexus",
            marketo="marketo",
            s3="s3",
            salesforce="salesforce",
            sapo_data="sapoData",
            service_now="serviceNow",
            singular="singular",
            slack="slack",
            trendmicro="trendmicro",
            veeva="veeva",
            zendesk="zendesk"
        ),
        destination_field="destinationField",
        task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
            key="key",
            value="value"
        )]
    )],
    trigger_config=appflow.CfnFlow.TriggerConfigProperty(
        trigger_type="triggerType",
        # the properties below are optional
        trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
            schedule_expression="scheduleExpression",
            # the properties below are optional
            data_pull_mode="dataPullMode",
            schedule_end_time=123,
            schedule_offset=123,
            schedule_start_time=123,
            time_zone="timeZone"
        )
    ),
    # the properties below are optional
    description="description",
    kms_arn="kmsArn",
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
Create a new AWS::AppFlow::Flow.
- Parameters
scope (Construct) – scope in which this resource is defined.
id (str) – scoped id of the resource.
destination_flow_config_list (Union[IResolvable, Sequence[Union[IResolvable, DestinationFlowConfigProperty]]]) – The configuration that controls how Amazon AppFlow places data in the destination connector.
flow_name (str) – The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.
source_flow_config (Union[IResolvable, SourceFlowConfigProperty]) – Contains information about the configuration of the source connector used in the flow.
tasks (Union[IResolvable, Sequence[Union[IResolvable, TaskProperty]]]) – A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.
trigger_config (Union[IResolvable, TriggerConfigProperty]) – The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
description (Optional[str]) – A user-entered description of the flow.
kms_arn (Optional[str]) – The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
tags (Optional[Sequence[CfnTag]]) – The tags used to organize, track, or control access for your flow.
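The generated example above exercises every connector at once. For orientation, here is a smaller hedged sketch of an on-demand Salesforce-to-S3 flow; the profile name, bucket name, Salesforce object, and the EXCLUDE_SOURCE_FIELDS_LIST task property are illustrative assumptions, not resources or settings this snippet verifies:
import aws_cdk.aws_appflow as appflow

# Minimal on-demand Salesforce -> S3 flow. "my-salesforce-profile" and
# "my-appflow-bucket" are hypothetical and must exist already.
flow = appflow.CfnFlow(self, "SalesforceToS3Flow",
    flow_name="salesforce-accounts-to-s3",
    source_flow_config=appflow.CfnFlow.SourceFlowConfigProperty(
        connector_type="Salesforce",
        connector_profile_name="my-salesforce-profile",
        source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
            salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
                object="Account"
            )
        )
    ),
    destination_flow_config_list=[appflow.CfnFlow.DestinationFlowConfigProperty(
        connector_type="S3",
        destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
            s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
                bucket_name="my-appflow-bucket"
            )
        )
    )],
    # Map_all copies every source field to the destination unchanged.
    tasks=[appflow.CfnFlow.TaskProperty(
        task_type="Map_all",
        source_fields=[],
        task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
            key="EXCLUDE_SOURCE_FIELDS_LIST",
            value="[]"
        )]
    )],
    trigger_config=appflow.CfnFlow.TriggerConfigProperty(
        trigger_type="OnDemand"
    )
)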
Methods
- add_deletion_override(path)¶
Syntactic sugar for addOverride(path, undefined).
- Parameters
path (str) – The path of the value to delete.
- Return type
None
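For instance, a minimal sketch (the path is illustrative and assumes the cfn_flow instance from the example above):
# Delete the synthesized Description property from the template.
cfn_flow.add_deletion_override("Properties.Description")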
- add_depends_on(target)¶
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stack (or nested stack) boundaries, and the dependency will automatically be transferred to the relevant scope.
- Parameters
target (CfnResource) –
- Return type
None
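A brief sketch, assuming a hypothetical low-level S3 bucket defined in the same stack:
import aws_cdk.aws_s3 as s3

# Hypothetical destination bucket; the flow is provisioned only after it.
cfn_bucket = s3.CfnBucket(self, "FlowBucket", bucket_name="my-appflow-bucket")
cfn_flow.add_depends_on(cfn_bucket)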
- add_metadata(key, value)¶
Add a value to the CloudFormation Resource Metadata.
- Parameters
key (str) –
value (Any) –
- See
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type
None
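For example, a minimal sketch (the key and value are illustrative):
# Ends up under the resource's Metadata section in the template.
cfn_flow.add_metadata("Purpose", "Nightly Salesforce sync")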
- add_override(path, value)¶
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters
path (str) – The path of the property; you can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type
None
- add_property_deletion_override(property_path)¶
Adds an override that deletes the value of a property from the resource definition.
- Parameters
property_path (str) – The path to the property.
- Return type
None
- add_property_override(property_path, value)¶
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters
property_path (str) – The path of the property.
value (Any) – The value.
- Return type
None
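A short sketch combining both property-level escape hatches; the values are illustrative:
# Equivalent to add_override("Properties.FlowName", ...).
cfn_flow.add_property_override("FlowName", "my-renamed-flow")
# Remove the optional KMSArn property from the resource definition.
cfn_flow.add_property_deletion_override("KMSArn")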
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)¶
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you've removed it from the CDK application or because you've made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN).
- Parameters
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- Return type
None
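For instance, a sketch that retains the flow when it is removed from the CDK app:
import aws_cdk.core as cdk

# Keep the flow in the account on stack deletion.
cfn_flow.apply_removal_policy(cdk.RemovalPolicy.RETAIN)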
- get_att(attribute_name)¶
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters
attribute_name (str) – The name of the attribute.
- Return type
Reference
- get_metadata(key)¶
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters
key (str) –
- See
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type
Any
- inspect(inspector)¶
Examines the CloudFormation resource and discloses attributes.
- Parameters
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type
None
- override_logical_id(new_logical_id)¶
Overrides the auto-generated logical ID with a specific ID.
- Parameters
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type
None
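For example, to pin a stable logical ID before refactoring the construct tree:
# Keep a stable logical ID regardless of the construct tree path.
cfn_flow.override_logical_id("SalesforceSyncFlow")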
- to_string()¶
Returns a string representation of this construct.
- Return type
str
- Returns
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::AppFlow::Flow'¶
- attr_flow_arn¶
The flow's Amazon Resource Name (ARN).
- CloudformationAttribute
FlowArn
- Return type
str
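A short usage sketch that surfaces the ARN as a stack output:
import aws_cdk.core as cdk

# Export the flow's ARN from the stack.
cdk.CfnOutput(self, "FlowArnOutput", value=cfn_flow.attr_flow_arn)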
- cfn_options¶
Options for this resource, such as condition, update policy, etc.
- Return type
ICfnResourceOptions
- cfn_resource_type¶
AWS resource type.
- Return type
str
- creation_stack¶
- Returns
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- Return type
List[str]
- description¶
A user-entered description of the flow.
- Return type
Optional[str]
- destination_flow_config_list¶
The configuration that controls how Amazon AppFlow places data in the destination connector.
- Return type
Union[IResolvable, List[Union[IResolvable, DestinationFlowConfigProperty]]]
- flow_name¶
The specified name of the flow.
Spaces are not allowed. Use underscores (_) or hyphens (-) only.
- Return type
str
- kms_arn¶
The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption.
This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
- Return type
Optional[str]
- logical_id¶
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Return type
str
- Returns
the logical ID as a stringified token. This value will only get resolved during synthesis.
- node¶
The construct tree node associated with this construct.
- Return type
ConstructNode
- ref¶
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- Return type
str
- source_flow_config¶
Contains information about the configuration of the source connector used in the flow.
- Return type
Union[IResolvable, SourceFlowConfigProperty]
- stack¶
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- Return type
Stack
- tags¶
The tags used to organize, track, or control access for your flow.
- Return type
TagManager
- tasks¶
A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.
- Return type
Union[IResolvable, List[Union[IResolvable, TaskProperty]]]
- trigger_config¶
The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
- Return type
Union[IResolvable, TriggerConfigProperty]
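Beyond the placeholder values in the generated example, a hedged sketch of a scheduled trigger might look like this; the rate expression and pull mode are illustrative values taken from the AppFlow ScheduleExpression and DataPullMode documentation, and aws_cdk.aws_appflow is assumed to be imported as appflow:
# Run the flow every five minutes, pulling only records changed
# since the last run.
scheduled_trigger = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="Scheduled",
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="rate(5minutes)",
        data_pull_mode="Incremental"
    )
)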
Static Methods
- classmethod is_cfn_element(x)¶
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters
x (Any) –
- Return type
bool
- Returns
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)¶
Check whether the given construct is a CfnResource.
- Parameters
construct (IConstruct) –
- Return type
bool
- classmethod is_construct(x)¶
Return whether the given object is a Construct.
- Parameters
x (Any) –
- Return type
bool
AggregationConfigProperty¶
- class CfnFlow.AggregationConfigProperty(*, aggregation_type=None)¶
Bases: object
The aggregation settings that you can use to customize the output format of your flow data.
- Parameters
aggregation_type (Optional[str]) – Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

aggregation_config_property = appflow.CfnFlow.AggregationConfigProperty(
    aggregation_type="aggregationType"
)
Attributes
- aggregation_type¶
Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.
AmplitudeSourcePropertiesProperty¶
- class CfnFlow.AmplitudeSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Amplitude is being used as a source.
- Parameters
object (str) – The object specified in the Amplitude flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

amplitude_source_properties_property = appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Amplitude flow source.
ConnectorOperatorProperty¶
- class CfnFlow.ConnectorOperatorProperty(*, amplitude=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)¶
Bases: object
The operation to be performed on the provided source fields.
- Parameters
amplitude (Optional[str]) – The operation to be performed on the provided Amplitude source fields.
datadog (Optional[str]) – The operation to be performed on the provided Datadog source fields.
dynatrace (Optional[str]) – The operation to be performed on the provided Dynatrace source fields.
google_analytics (Optional[str]) – The operation to be performed on the provided Google Analytics source fields.
infor_nexus (Optional[str]) – The operation to be performed on the provided Infor Nexus source fields.
marketo (Optional[str]) – The operation to be performed on the provided Marketo source fields.
s3 (Optional[str]) – The operation to be performed on the provided Amazon S3 source fields.
salesforce (Optional[str]) – The operation to be performed on the provided Salesforce source fields.
sapo_data (Optional[str]) – The operation to be performed on the provided SAPOData source fields.
service_now (Optional[str]) – The operation to be performed on the provided ServiceNow source fields.
singular (Optional[str]) – The operation to be performed on the provided Singular source fields.
slack (Optional[str]) – The operation to be performed on the provided Slack source fields.
trendmicro (Optional[str]) – The operation to be performed on the provided Trend Micro source fields.
veeva (Optional[str]) – The operation to be performed on the provided Veeva source fields.
zendesk (Optional[str]) – The operation to be performed on the provided Zendesk source fields.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

connector_operator_property = appflow.CfnFlow.ConnectorOperatorProperty(
    amplitude="amplitude",
    datadog="datadog",
    dynatrace="dynatrace",
    google_analytics="googleAnalytics",
    infor_nexus="inforNexus",
    marketo="marketo",
    s3="s3",
    salesforce="salesforce",
    sapo_data="sapoData",
    service_now="serviceNow",
    singular="singular",
    slack="slack",
    trendmicro="trendmicro",
    veeva="veeva",
    zendesk="zendesk"
)
Attributes
- amplitude¶
The operation to be performed on the provided Amplitude source fields.
- datadog¶
The operation to be performed on the provided Datadog source fields.
- dynatrace¶
The operation to be performed on the provided Dynatrace source fields.
- google_analytics¶
The operation to be performed on the provided Google Analytics source fields.
- infor_nexus¶
The operation to be performed on the provided Infor Nexus source fields.
- marketo¶
The operation to be performed on the provided Marketo source fields.
- s3¶
The operation to be performed on the provided Amazon S3 source fields.
- salesforce¶
The operation to be performed on the provided Salesforce source fields.
- sapo_data¶
The operation to be performed on the provided SAPOData source fields.
- service_now¶
The operation to be performed on the provided ServiceNow source fields.
- singular¶
The operation to be performed on the provided Singular source fields.
- slack¶
The operation to be performed on the provided Slack source fields.
- trendmicro¶
The operation to be performed on the provided Trend Micro source fields.
- veeva¶
The operation to be performed on the provided Veeva source fields.
- zendesk¶
The operation to be performed on the provided Zendesk source fields.
DatadogSourcePropertiesProperty¶
- class CfnFlow.DatadogSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Datadog is being used as a source.
- Parameters
object (str) – The object specified in the Datadog flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

datadog_source_properties_property = appflow.CfnFlow.DatadogSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Datadog flow source.
DestinationConnectorPropertiesProperty¶
- class CfnFlow.DestinationConnectorPropertiesProperty(*, event_bridge=None, lookout_metrics=None, marketo=None, redshift=None, s3=None, salesforce=None, sapo_data=None, snowflake=None, upsolver=None, zendesk=None)¶
Bases: object
This stores the information that is required to query a particular connector.
- Parameters
event_bridge (Union[IResolvable, EventBridgeDestinationPropertiesProperty, None]) – The properties required to query Amazon EventBridge.
lookout_metrics (Union[IResolvable, LookoutMetricsDestinationPropertiesProperty, None]) – The properties required to query Amazon Lookout for Metrics.
marketo (Union[IResolvable, MarketoDestinationPropertiesProperty, None]) – The properties required to query Marketo.
redshift (Union[IResolvable, RedshiftDestinationPropertiesProperty, None]) – The properties required to query Amazon Redshift.
s3 (Union[IResolvable, S3DestinationPropertiesProperty, None]) – The properties required to query Amazon S3.
salesforce (Union[IResolvable, SalesforceDestinationPropertiesProperty, None]) – The properties required to query Salesforce.
sapo_data (Union[IResolvable, SAPODataDestinationPropertiesProperty, None]) – The properties required to query SAPOData.
snowflake (Union[IResolvable, SnowflakeDestinationPropertiesProperty, None]) – The properties required to query Snowflake.
upsolver (Union[IResolvable, UpsolverDestinationPropertiesProperty, None]) – The properties required to query Upsolver.
zendesk (Union[IResolvable, ZendeskDestinationPropertiesProperty, None]) – The properties required to query Zendesk.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

destination_connector_properties_property = appflow.CfnFlow.DestinationConnectorPropertiesProperty(
    event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
        object="object",
        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
        object="object"
    ),
    marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
        object="object",
        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
        intermediate_bucket_name="intermediateBucketName",
        object="object",
        # the properties below are optional
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
        bucket_name="bucketName",
        # the properties below are optional
        bucket_prefix="bucketPrefix",
        s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
            aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                aggregation_type="aggregationType"
            ),
            file_type="fileType",
            prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                prefix_format="prefixFormat",
                prefix_type="prefixType"
            )
        )
    ),
    salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
        object="object",
        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    ),
    sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
        object_path="objectPath",
        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix"
        ),
        write_operation_type="writeOperationType"
    ),
    snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
        intermediate_bucket_name="intermediateBucketName",
        object="object",
        # the properties below are optional
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        )
    ),
    upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
        bucket_name="bucketName",
        s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
            prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                prefix_format="prefixFormat",
                prefix_type="prefixType"
            ),
            # the properties below are optional
            aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                aggregation_type="aggregationType"
            ),
            file_type="fileType"
        ),
        # the properties below are optional
        bucket_prefix="bucketPrefix"
    ),
    zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
        object="object",
        # the properties below are optional
        error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            fail_on_first_error=False
        ),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    )
)
Attributes
- event_bridge¶
The properties required to query Amazon EventBridge.
- lookout_metrics¶
The properties required to query Amazon Lookout for Metrics.
- marketo¶
The properties required to query Marketo.
- redshift¶
The properties required to query Amazon Redshift.
- s3¶
The properties required to query Amazon S3.
- salesforce¶
The properties required to query Salesforce.
- sapo_data¶
The properties required to query SAPOData.
- snowflake¶
The properties required to query Snowflake.
- upsolver¶
The properties required to query Upsolver.
- zendesk¶
The properties required to query Zendesk.
DestinationFlowConfigProperty¶
- class CfnFlow.DestinationFlowConfigProperty(*, connector_type, destination_connector_properties, connector_profile_name=None)¶
Bases: object
Contains information about the configuration of destination connectors present in the flow.
- Parameters
connector_type (str) – The type of destination connector, such as Salesforce, Amazon S3, and so on. Allowed Values: EventBridge | Redshift | S3 | Salesforce | Snowflake
destination_connector_properties (Union[IResolvable, DestinationConnectorPropertiesProperty]) – This stores the information that is required to query a particular connector.
connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

destination_flow_config_property = appflow.CfnFlow.DestinationFlowConfigProperty(
    connector_type="connectorType",
    destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty(
        event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
            object="object",
            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
            object="object"
        ),
        marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty(
            object="object",
            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
            intermediate_bucket_name="intermediateBucketName",
            object="object",
            # the properties below are optional
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        s3=appflow.CfnFlow.S3DestinationPropertiesProperty(
            bucket_name="bucketName",
            # the properties below are optional
            bucket_prefix="bucketPrefix",
            s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
                aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                    aggregation_type="aggregationType"
                ),
                file_type="fileType",
                prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                    prefix_format="prefixFormat",
                    prefix_type="prefixType"
                )
            )
        ),
        salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
            object="object",
            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        ),
        sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
            object_path="objectPath",
            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix"
            ),
            write_operation_type="writeOperationType"
        ),
        snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
            intermediate_bucket_name="intermediateBucketName",
            object="object",
            # the properties below are optional
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            )
        ),
        upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
            bucket_name="bucketName",
            s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
                prefix_config=appflow.CfnFlow.PrefixConfigProperty(
                    prefix_format="prefixFormat",
                    prefix_type="prefixType"
                ),
                # the properties below are optional
                aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
                    aggregation_type="aggregationType"
                ),
                file_type="fileType"
            ),
            # the properties below are optional
            bucket_prefix="bucketPrefix"
        ),
        zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
            object="object",
            # the properties below are optional
            error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                fail_on_first_error=False
            ),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        )
    ),
    # the properties below are optional
    connector_profile_name="connectorProfileName"
)
Attributes
- connector_profile_name¶
The name of the connector profile.
This name must be unique for each connector profile in the AWS account.
- connector_type¶
The type of destination connector, such as Salesforce, Amazon S3, and so on.
Allowed Values: EventBridge | Redshift | S3 | Salesforce | Snowflake
- destination_connector_properties¶
This stores the information that is required to query a particular connector.
DynatraceSourcePropertiesProperty¶
- class CfnFlow.DynatraceSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Dynatrace is being used as a source.
- Parameters
object (str) – The object specified in the Dynatrace flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

dynatrace_source_properties_property = appflow.CfnFlow.DynatraceSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Dynatrace flow source.
ErrorHandlingConfigProperty¶
- class CfnFlow.ErrorHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None, fail_on_first_error=None)¶
Bases: object
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- Parameters
bucket_name (Optional[str]) – Specifies the name of the Amazon S3 bucket.
bucket_prefix (Optional[str]) – Specifies the Amazon S3 bucket prefix.
fail_on_first_error (Union[bool, IResolvable, None]) – Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

error_handling_config_property = appflow.CfnFlow.ErrorHandlingConfigProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",
    fail_on_first_error=False
)
Attributes
- bucket_name¶
Specifies the name of the Amazon S3 bucket.
- bucket_prefix¶
Specifies the Amazon S3 bucket prefix.
- fail_on_first_error¶
Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
EventBridgeDestinationPropertiesProperty¶
- class CfnFlow.EventBridgeDestinationPropertiesProperty(*, object, error_handling_config=None)¶
Bases: object
The properties that are applied when Amazon EventBridge is being used as a destination.
- Parameters
object (str) – The object specified in the Amazon EventBridge flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

event_bridge_destination_properties_property = appflow.CfnFlow.EventBridgeDestinationPropertiesProperty(
    object="object",
    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)
Attributes
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
- object¶
The object specified in the Amazon EventBridge flow destination.
GoogleAnalyticsSourcePropertiesProperty¶
- class CfnFlow.GoogleAnalyticsSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Google Analytics is being used as a source.
- Parameters
object (str) – The object specified in the Google Analytics flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

google_analytics_source_properties_property = appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Google Analytics flow source.
IncrementalPullConfigProperty¶
- class CfnFlow.IncrementalPullConfigProperty(*, datetime_type_field_name=None)¶
Bases: object
Specifies the configuration used when importing incremental records from the source.
- Parameters
datetime_type_field_name (Optional[str]) – A field that specifies the datetime or timestamp field to use as the criteria when importing incremental records from the source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

incremental_pull_config_property = appflow.CfnFlow.IncrementalPullConfigProperty(
    datetime_type_field_name="datetimeTypeFieldName"
)
Attributes
- datetime_type_field_name¶
A field that specifies the datetime or timestamp field to use as the criteria when importing incremental records from the source.
InforNexusSourcePropertiesProperty¶
- class CfnFlow.InforNexusSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Infor Nexus is being used as a source.
- Parameters
object (str) – The object specified in the Infor Nexus flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

infor_nexus_source_properties_property = appflow.CfnFlow.InforNexusSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Infor Nexus flow source.
LookoutMetricsDestinationPropertiesProperty¶
- class CfnFlow.LookoutMetricsDestinationPropertiesProperty(*, object=None)¶
Bases: object
The properties that are applied when Amazon Lookout for Metrics is used as a destination.
- Parameters
object (Optional[str]) – The object specified in the Amazon Lookout for Metrics flow destination.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

lookout_metrics_destination_properties_property = appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Amazon Lookout for Metrics flow destination.
MarketoDestinationPropertiesProperty¶
- class CfnFlow.MarketoDestinationPropertiesProperty(*, object, error_handling_config=None)¶
Bases: object
The properties that Amazon AppFlow applies when you use Marketo as a flow destination.
- Parameters
object (str) – The object specified in the Marketo flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

marketo_destination_properties_property = appflow.CfnFlow.MarketoDestinationPropertiesProperty(
    object="object",
    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)
Attributes
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- object¶
The object specified in the Marketo flow destination.
MarketoSourcePropertiesProperty¶
- class CfnFlow.MarketoSourcePropertiesProperty(*, object)¶
Bases: object
The properties that are applied when Marketo is being used as a source.
- Parameters
object (str) – The object specified in the Marketo flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

marketo_source_properties_property = appflow.CfnFlow.MarketoSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Marketo flow source.
PrefixConfigProperty¶
- class CfnFlow.PrefixConfigProperty(*, prefix_format=None, prefix_type=None)¶
Bases: object
Determines the prefix that Amazon AppFlow applies to the destination folder name.
You can name your destination folders according to the flow frequency and date.
- Parameters
prefix_format (Optional[str]) – Determines the level of granularity that's included in the prefix.
prefix_type (Optional[str]) – Determines the format of the prefix, and whether it applies to the file name, file path, or both.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

prefix_config_property = appflow.CfnFlow.PrefixConfigProperty(
    prefix_format="prefixFormat",
    prefix_type="prefixType"
)
Attributes
- prefix_format¶
Determines the level of granularity that's included in the prefix.
- prefix_type¶
Determines the format of the prefix, and whether it applies to the file name, file path, or both.
RedshiftDestinationPropertiesProperty¶
- class CfnFlow.RedshiftDestinationPropertiesProperty(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)¶
Bases: object
The properties that are applied when Amazon Redshift is being used as a destination.
- Parameters
intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
object (str) – The object specified in the Amazon Redshift flow destination.
bucket_prefix (Optional[str]) – The object key for the bucket in which Amazon AppFlow places the destination files.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

redshift_destination_properties_property = appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
    intermediate_bucket_name="intermediateBucketName",
    object="object",
    # the properties below are optional
    bucket_prefix="bucketPrefix",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)
Attributes
- bucket_prefix¶
The object key for the bucket in which Amazon AppFlow places the destination files.
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- intermediate_bucket_name¶
The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
- object¶
The object specified in the Amazon Redshift flow destination.
S3DestinationPropertiesProperty¶
- class CfnFlow.S3DestinationPropertiesProperty(*, bucket_name, bucket_prefix=None, s3_output_format_config=None)¶
Bases: object
The properties that are applied when Amazon S3 is used as a destination.
- Parameters
bucket_name (str) – The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
s3_output_format_config (Union[IResolvable, S3OutputFormatConfigProperty, None]) – The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s3_destination_properties_property = appflow.CfnFlow.S3DestinationPropertiesProperty(
    bucket_name="bucketName",
    # the properties below are optional
    bucket_prefix="bucketPrefix",
    s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty(
        aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
            aggregation_type="aggregationType"
        ),
        file_type="fileType",
        prefix_config=appflow.CfnFlow.PrefixConfigProperty(
            prefix_format="prefixFormat",
            prefix_type="prefixType"
        )
    )
)
Attributes
- bucket_name¶
The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix¶
The object key for the destination bucket in which Amazon AppFlow places the files.
- s3_output_format_config¶
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
S3InputFormatConfigProperty¶
- class CfnFlow.S3InputFormatConfigProperty(*, s3_input_file_type=None)¶
Bases: object
When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
- Parameters
s3_input_file_type (Optional[str]) – The file type that Amazon AppFlow gets from your Amazon S3 bucket.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s3_input_format_config_property = appflow.CfnFlow.S3InputFormatConfigProperty(
    s3_input_file_type="s3InputFileType"
)
Attributes
- s3_input_file_type¶
The file type that Amazon AppFlow gets from your Amazon S3 bucket.
S3OutputFormatConfigProperty¶
- class CfnFlow.S3OutputFormatConfigProperty(*, aggregation_config=None, file_type=None, prefix_config=None)¶
Bases: object
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- Parameters
aggregation_config (Union[IResolvable, AggregationConfigProperty, None]) – The aggregation settings that you can use to customize the output format of your flow data.
file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
prefix_config (Union[IResolvable, PrefixConfigProperty, None]) – Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s3_output_format_config_property = appflow.CfnFlow.S3OutputFormatConfigProperty(
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="aggregationType"
    ),
    file_type="fileType",
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        prefix_format="prefixFormat",
        prefix_type="prefixType"
    )
)
Attributes
- aggregation_config¶
The aggregation settings that you can use to customize the output format of your flow data.
- file_type¶
Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
- prefix_config¶
Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket.
You can name folders according to the flow frequency and date.
S3SourcePropertiesProperty¶
- class CfnFlow.S3SourcePropertiesProperty(*, bucket_name, bucket_prefix, s3_input_format_config=None)¶
Bases: object
The properties that are applied when Amazon S3 is being used as the flow source.
- Parameters
bucket_name (str) – The Amazon S3 bucket name where the source files are stored.
bucket_prefix (str) – The object key for the Amazon S3 bucket in which the source files are stored.
s3_input_format_config (Union[IResolvable, S3InputFormatConfigProperty, None]) – When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s3_source_properties_property = appflow.CfnFlow.S3SourcePropertiesProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",
    # the properties below are optional
    s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
        s3_input_file_type="s3InputFileType"
    )
)
Attributes
- bucket_name¶
The Amazon S3 bucket name where the source files are stored.
- bucket_prefix¶
The object key for the Amazon S3 bucket in which the source files are stored.
- s3_input_format_config¶
When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
SAPODataDestinationPropertiesProperty¶
- class CfnFlow.SAPODataDestinationPropertiesProperty(*, object_path, error_handling_config=None, id_field_names=None, success_response_handling_config=None, write_operation_type=None)¶
Bases: object
The properties that are applied when using SAPOData as a flow destination.
- Parameters
object_path (str) – The object path specified in the SAPOData flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
success_response_handling_config (Union[IResolvable, SuccessResponseHandlingConfigProperty, None]) – Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data. For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s_aPOData_destination_properties_property = appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
    object_path="objectPath",
    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix"
    ),
    write_operation_type="writeOperationType"
)
Attributes
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- id_field_names¶
A list of field names that can be used as an ID field when performing a write operation.
- object_path¶
The object path specified in the SAPOData flow destination.
- success_response_handling_config¶
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
- write_operation_type¶
The possible write operations in the destination connector.
When this value is not provided, this defaults to the INSERT operation.
SAPODataSourcePropertiesProperty¶
- class CfnFlow.SAPODataSourcePropertiesProperty(*, object_path)¶
Bases: object
The properties that are applied when using SAPOData as a flow source.
- Parameters
object_path (str) – The object path specified in the SAPOData flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

s_aPOData_source_properties_property = appflow.CfnFlow.SAPODataSourcePropertiesProperty(
    object_path="objectPath"
)
Attributes
- object_path¶
The object path specified in the SAPOData flow source.
SalesforceDestinationPropertiesProperty¶
-
class
CfnFlow.
SalesforceDestinationPropertiesProperty
(*, object, error_handling_config=None, id_field_names=None, write_operation_type=None)¶ Bases: object
The properties that are applied when Salesforce is being used as a destination.
- Parameters
object (str) – The object specified in the Salesforce flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
write_operation_type (Optional[str]) – This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then idFieldNames is required.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

salesforce_destination_properties_property = appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)
Attributes
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- id_field_names¶
The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
- object¶
The object specified in the Salesforce flow destination.
- write_operation_type¶
This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then idFieldNames is required.
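Because idFieldNames is required whenever the write operation is UPSERT, a Salesforce upsert destination might look like the following sketch; the Account object and External_Id__c field are hypothetical placeholders.

# A sketch of a Salesforce upsert destination; "Account" and "External_Id__c"
# are hypothetical placeholders for your own object and external-ID field.
import aws_cdk.aws_appflow as appflow

salesforce_upsert = appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
    object="Account",
    write_operation_type="UPSERT",      # UPSERT requires idFieldNames
    id_field_names=["External_Id__c"],  # field used to match existing records
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        fail_on_first_error=True        # stop the flow on the first failed record
    )
)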
SalesforceSourcePropertiesProperty¶
-
class
CfnFlow.
SalesforceSourcePropertiesProperty
(*, object, enable_dynamic_field_update=None, include_deleted_records=None)¶ Bases: object
The properties that are applied when Salesforce is being used as a source.
- Parameters
object (str) – The object specified in the Salesforce flow source.
enable_dynamic_field_update (Union[bool, IResolvable, None]) – The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
include_deleted_records (Union[bool, IResolvable, None]) – Indicates whether Amazon AppFlow includes deleted files in the flow run.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

salesforce_source_properties_property = appflow.CfnFlow.SalesforceSourcePropertiesProperty(
    object="object",

    # the properties below are optional
    enable_dynamic_field_update=False,
    include_deleted_records=False
)
Attributes
- enable_dynamic_field_update¶
The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
- include_deleted_records¶
Indicates whether Amazon AppFlow includes deleted files in the flow run.
- object¶
The object specified in the Salesforce flow source.
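When both optional flags are left unset, the flow uses the connector's default behavior. A minimal sketch of a source that also picks up newly added fields and deleted records ("Contact" is a hypothetical object name):

# A sketch of a Salesforce source with both optional flags enabled;
# "Contact" is a hypothetical placeholder object.
import aws_cdk.aws_appflow as appflow

salesforce_source = appflow.CfnFlow.SalesforceSourcePropertiesProperty(
    object="Contact",
    enable_dynamic_field_update=True,  # fetch newly added fields at run time
    include_deleted_records=True       # include deleted records in each run
)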
ScheduledTriggerPropertiesProperty¶
-
class
CfnFlow.
ScheduledTriggerPropertiesProperty
(*, schedule_expression, data_pull_mode=None, schedule_end_time=None, schedule_offset=None, schedule_start_time=None, time_zone=None)¶ Bases: object
Specifies the configuration details of a schedule-triggered flow as defined by the user.
Currently, these settings only apply to the Scheduled trigger type.
- Parameters
schedule_expression (str) – The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes).
data_pull_mode (Optional[str]) – Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
schedule_end_time (Union[int, float, None]) – Specifies the scheduled end time for a schedule-triggered flow.
schedule_offset (Union[int, float, None]) – Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
schedule_start_time (Union[int, float, None]) – Specifies the scheduled start time for a schedule-triggered flow.
time_zone (Optional[str]) – Specifies the time zone used when referring to the date and time of a schedule-triggered flow, such as America/New_York.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

scheduled_trigger_properties_property = appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
    schedule_expression="scheduleExpression",

    # the properties below are optional
    data_pull_mode="dataPullMode",
    schedule_end_time=123,
    schedule_offset=123,
    schedule_start_time=123,
    time_zone="timeZone"
)
Attributes
- data_pull_mode¶
Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
- schedule_end_time¶
Specifies the scheduled end time for a schedule-triggered flow.
- schedule_expression¶
The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes).
- schedule_offset¶
Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
- schedule_start_time¶
Specifies the scheduled start time for a schedule-triggered flow.
- time_zone¶
Specifies the time zone used when referring to the date and time of a schedule-triggered flow, such as America/New_York.
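Putting these settings together, an hourly incremental schedule might look like the sketch below; the rate expression and the epoch-second timestamps are hypothetical placeholders.

# A sketch of an incremental schedule; schedule_start_time and
# schedule_end_time are numbers of seconds since the Unix epoch.
import aws_cdk.aws_appflow as appflow

hourly_schedule = appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
    schedule_expression="rate(1hours)",  # hypothetical rate expression
    data_pull_mode="Incremental",        # transfer only new or changed records
    schedule_start_time=1672531200,      # 2023-01-01T00:00:00Z
    schedule_end_time=1704067200,        # 2024-01-01T00:00:00Z
    time_zone="America/New_York"
)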
ServiceNowSourcePropertiesProperty¶
-
class
CfnFlow.
ServiceNowSourcePropertiesProperty
(*, object)¶ Bases: object
The properties that are applied when ServiceNow is being used as a source.
- Parameters
object (str) – The object specified in the ServiceNow flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

service_now_source_properties_property = appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the ServiceNow flow source.
SingularSourcePropertiesProperty¶
-
class
CfnFlow.
SingularSourcePropertiesProperty
(*, object)¶ Bases: object
The properties that are applied when Singular is being used as a source.
- Parameters
object (str) – The object specified in the Singular flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

singular_source_properties_property = appflow.CfnFlow.SingularSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Singular flow source.
SlackSourcePropertiesProperty¶
-
class
CfnFlow.
SlackSourcePropertiesProperty
(*, object)¶ Bases: object
The properties that are applied when Slack is being used as a source.
- Parameters
object (str) – The object specified in the Slack flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

slack_source_properties_property = appflow.CfnFlow.SlackSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Slack flow source.
SnowflakeDestinationPropertiesProperty¶
-
class
CfnFlow.
SnowflakeDestinationPropertiesProperty
(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)¶ Bases: object
The properties that are applied when Snowflake is being used as a destination.
- Parameters
intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
object (str) – The object specified in the Snowflake flow destination.
bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

snowflake_destination_properties_property = appflow.CfnFlow.SnowflakeDestinationPropertiesProperty(
    intermediate_bucket_name="intermediateBucketName",
    object="object",

    # the properties below are optional
    bucket_prefix="bucketPrefix",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    )
)
Attributes
- bucket_prefix¶
The object key for the destination bucket in which Amazon AppFlow places the files.
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- intermediate_bucket_name¶
The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
- object¶
The object specified in the Snowflake flow destination.
SourceConnectorPropertiesProperty¶
-
class
CfnFlow.
SourceConnectorPropertiesProperty
(*, amplitude=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)¶ Bases: object
Specifies the information that is required to query a particular connector.
- Parameters
amplitude (Union[IResolvable, AmplitudeSourcePropertiesProperty, None]) – Specifies the information that is required for querying Amplitude.
datadog (Union[IResolvable, DatadogSourcePropertiesProperty, None]) – Specifies the information that is required for querying Datadog.
dynatrace (Union[IResolvable, DynatraceSourcePropertiesProperty, None]) – Specifies the information that is required for querying Dynatrace.
google_analytics (Union[IResolvable, GoogleAnalyticsSourcePropertiesProperty, None]) – Specifies the information that is required for querying Google Analytics.
infor_nexus (Union[IResolvable, InforNexusSourcePropertiesProperty, None]) – Specifies the information that is required for querying Infor Nexus.
marketo (Union[IResolvable, MarketoSourcePropertiesProperty, None]) – Specifies the information that is required for querying Marketo.
s3 (Union[IResolvable, S3SourcePropertiesProperty, None]) – Specifies the information that is required for querying Amazon S3.
salesforce (Union[IResolvable, SalesforceSourcePropertiesProperty, None]) – Specifies the information that is required for querying Salesforce.
sapo_data (Union[IResolvable, SAPODataSourcePropertiesProperty, None]) – The properties that are applied when using SAPOData as a flow source.
service_now (Union[IResolvable, ServiceNowSourcePropertiesProperty, None]) – Specifies the information that is required for querying ServiceNow.
singular (Union[IResolvable, SingularSourcePropertiesProperty, None]) – Specifies the information that is required for querying Singular.
slack (Union[IResolvable, SlackSourcePropertiesProperty, None]) – Specifies the information that is required for querying Slack.
trendmicro (Union[IResolvable, TrendmicroSourcePropertiesProperty, None]) – Specifies the information that is required for querying Trend Micro.
veeva (Union[IResolvable, VeevaSourcePropertiesProperty, None]) – Specifies the information that is required for querying Veeva.
zendesk (Union[IResolvable, ZendeskSourcePropertiesProperty, None]) – Specifies the information that is required for querying Zendesk.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

source_connector_properties_property = appflow.CfnFlow.SourceConnectorPropertiesProperty(
    amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
        object="object"
    ),
    datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
        object="object"
    ),
    dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
        object="object"
    ),
    google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
        object="object"
    ),
    infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
        object="object"
    ),
    marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
        object="object"
    ),
    s3=appflow.CfnFlow.S3SourcePropertiesProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",

        # the properties below are optional
        s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
            s3_input_file_type="s3InputFileType"
        )
    ),
    salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
        object="object",

        # the properties below are optional
        enable_dynamic_field_update=False,
        include_deleted_records=False
    ),
    sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
        object_path="objectPath"
    ),
    service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
        object="object"
    ),
    singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
        object="object"
    ),
    slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
        object="object"
    ),
    trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
        object="object"
    ),
    veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
        object="object",

        # the properties below are optional
        document_type="documentType",
        include_all_versions=False,
        include_renditions=False,
        include_source_files=False
    ),
    zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
        object="object"
    )
)
Attributes
- amplitude¶
Specifies the information that is required for querying Amplitude.
- datadog¶
Specifies the information that is required for querying Datadog.
- dynatrace¶
Specifies the information that is required for querying Dynatrace.
- google_analytics¶
Specifies the information that is required for querying Google Analytics.
- infor_nexus¶
Specifies the information that is required for querying Infor Nexus.
- marketo¶
Specifies the information that is required for querying Marketo.
- s3¶
Specifies the information that is required for querying Amazon S3.
- salesforce¶
Specifies the information that is required for querying Salesforce.
- sapo_data¶
The properties that are applied when using SAPOData as a flow source.
- service_now¶
Specifies the information that is required for querying ServiceNow.
- singular¶
Specifies the information that is required for querying Singular.
- slack¶
Specifies the information that is required for querying Slack.
- trendmicro¶
Specifies the information that is required for querying Trend Micro.
- veeva¶
Specifies the information that is required for querying Veeva.
- zendesk¶
Specifies the information that is required for querying Zendesk.
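In practice a flow typically sets only the property that matches its connector_type; the generated example above populates every connector purely for illustration. A single-connector sketch (the Marketo object name is hypothetical):

# A sketch of source connector properties for a Marketo-only flow;
# "leads" is a hypothetical object name.
import aws_cdk.aws_appflow as appflow

marketo_only = appflow.CfnFlow.SourceConnectorPropertiesProperty(
    marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
        object="leads"
    )
)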
SourceFlowConfigProperty¶
-
class
CfnFlow.
SourceFlowConfigProperty
(*, connector_type, source_connector_properties, connector_profile_name=None, incremental_pull_config=None)¶ Bases: object
Contains information about the configuration of the source connector used in the flow.
- Parameters
connector_type (str) – The type of source connector, such as Salesforce, Amplitude, and so on. Allowed Values: S3 | Amplitude | Datadog | Dynatrace | Googleanalytics | Infornexus | Salesforce | Servicenow | Singular | Slack | Trendmicro | Veeva | Zendesk
source_connector_properties (Union[IResolvable, SourceConnectorPropertiesProperty]) – Specifies the information that is required to query a particular source connector.
connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
incremental_pull_config (Union[IResolvable, IncrementalPullConfigProperty, None]) – Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

source_flow_config_property = appflow.CfnFlow.SourceFlowConfigProperty(
    connector_type="connectorType",
    source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
        amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty(
            object="object"
        ),
        datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty(
            object="object"
        ),
        dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty(
            object="object"
        ),
        google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty(
            object="object"
        ),
        infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty(
            object="object"
        ),
        marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty(
            object="object"
        ),
        s3=appflow.CfnFlow.S3SourcePropertiesProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",

            # the properties below are optional
            s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
                s3_input_file_type="s3InputFileType"
            )
        ),
        salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
            object="object",

            # the properties below are optional
            enable_dynamic_field_update=False,
            include_deleted_records=False
        ),
        sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty(
            object_path="objectPath"
        ),
        service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty(
            object="object"
        ),
        singular=appflow.CfnFlow.SingularSourcePropertiesProperty(
            object="object"
        ),
        slack=appflow.CfnFlow.SlackSourcePropertiesProperty(
            object="object"
        ),
        trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
            object="object"
        ),
        veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty(
            object="object",

            # the properties below are optional
            document_type="documentType",
            include_all_versions=False,
            include_renditions=False,
            include_source_files=False
        ),
        zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty(
            object="object"
        )
    ),

    # the properties below are optional
    connector_profile_name="connectorProfileName",
    incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
        datetime_type_field_name="datetimeTypeFieldName"
    )
)
Attributes
- connector_profile_name¶
The name of the connector profile. This name must be unique for each connector profile in the AWS account.
- connector_type¶
The type of source connector, such as Salesforce, Amplitude, and so on. Allowed Values: S3 | Amplitude | Datadog | Dynatrace | Googleanalytics | Infornexus | Salesforce | Servicenow | Singular | Slack | Trendmicro | Veeva | Zendesk
- incremental_pull_config¶
Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
- source_connector_properties¶
Specifies the information that is required to query a particular source connector.
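Combining these pieces, a Salesforce source with a scheduled incremental pull might look like the following sketch; the profile name, object, and timestamp field are hypothetical placeholders, and the connector profile must already exist.

# A sketch of a Salesforce source flow config with an incremental pull;
# all names are hypothetical placeholders.
import aws_cdk.aws_appflow as appflow

salesforce_source_config = appflow.CfnFlow.SourceFlowConfigProperty(
    connector_type="Salesforce",
    connector_profile_name="my-salesforce-profile",
    source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
        salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
            object="Account",
            include_deleted_records=True
        )
    ),
    incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
        datetime_type_field_name="LastModifiedDate"  # field that bounds each pull
    )
)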
SuccessResponseHandlingConfigProperty¶
-
class
CfnFlow.
SuccessResponseHandlingConfigProperty
(*, bucket_name=None, bucket_prefix=None)¶ Bases: object
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from the destination connector upon a successful insert operation.
- Parameters
bucket_name (Optional[str]) – The name of the Amazon S3 bucket.
bucket_prefix (Optional[str]) – The Amazon S3 bucket prefix.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

success_response_handling_config_property = appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix"
)
Attributes
- bucket_name¶
The name of the Amazon S3 bucket.
- bucket_prefix¶
The Amazon S3 bucket prefix.
TaskPropertiesObjectProperty¶
-
class
CfnFlow.
TaskPropertiesObjectProperty
(*, key, value)¶ Bases: object
A map used to store task-related information.
The execution service looks for particular information based on the TaskType.
- Parameters
key (str) – The task property key. Allowed Values: VALUE | VALUES | DATA_TYPE | UPPER_BOUND | LOWER_BOUND | SOURCE_DATA_TYPE | DESTINATION_DATA_TYPE | VALIDATION_ACTION | MASK_VALUE | MASK_LENGTH | TRUNCATE_LENGTH | MATH_OPERATION_FIELDS_ORDER | CONCAT_FORMAT | SUBFIELD_CATEGORY_MAP | EXCLUDE_SOURCE_FIELDS_LIST
value (str) – The task property value.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

task_properties_object_property = appflow.CfnFlow.TaskPropertiesObjectProperty(
    key="key",
    value="value"
)
Attributes
- key¶
The task property key. Allowed Values: VALUE | VALUES | DATA_TYPE | UPPER_BOUND | LOWER_BOUND | SOURCE_DATA_TYPE | DESTINATION_DATA_TYPE | VALIDATION_ACTION | MASK_VALUE | MASK_LENGTH | TRUNCATE_LENGTH | MATH_OPERATION_FIELDS_ORDER | CONCAT_FORMAT | SUBFIELD_CATEGORY_MAP | EXCLUDE_SOURCE_FIELDS_LIST
- value¶
The task property value.
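As a concrete illustration, the task properties for a hypothetical Mask task could pin the mask character and length; both keys come from the allowed list above.

# A sketch of task properties for a Mask task; the values are hypothetical.
import aws_cdk.aws_appflow as appflow

mask_properties = [
    appflow.CfnFlow.TaskPropertiesObjectProperty(key="MASK_VALUE", value="*"),
    appflow.CfnFlow.TaskPropertiesObjectProperty(key="MASK_LENGTH", value="5"),
]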
TaskProperty¶
-
class
CfnFlow.
TaskProperty
(*, source_fields, task_type, connector_operator=None, destination_field=None, task_properties=None)¶ Bases: object
A class for modeling different types of tasks.
Task implementation varies based on the TaskType.
- Parameters
source_fields (Sequence[str]) – The source fields to which a particular task is applied.
task_type (str) – Specifies the particular task implementation that Amazon AppFlow performs. Allowed values: Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
connector_operator (Union[IResolvable, ConnectorOperatorProperty, None]) – The operation to be performed on the provided source fields.
destination_field (Optional[str]) – A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
task_properties (Union[IResolvable, Sequence[Union[IResolvable, TaskPropertiesObjectProperty]], None]) – A map used to store task-related information. The execution service looks for particular information based on the TaskType.
- Link
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

task_property = appflow.CfnFlow.TaskProperty(
    source_fields=["sourceFields"],
    task_type="taskType",

    # the properties below are optional
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        amplitude="amplitude",
        datadog="datadog",
        dynatrace="dynatrace",
        google_analytics="googleAnalytics",
        infor_nexus="inforNexus",
        marketo="marketo",
        s3="s3",
        salesforce="salesforce",
        sapo_data="sapoData",
        service_now="serviceNow",
        singular="singular",
        slack="slack",
        trendmicro="trendmicro",
        veeva="veeva",
        zendesk="zendesk"
    ),
    destination_field="destinationField",
    task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
        key="key",
        value="value"
    )]
)
Attributes
- connector_operator¶
The operation to be performed on the provided source fields.
- destination_field¶
A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
- source_fields¶
The source fields to which a particular task is applied.
- task_properties¶
A map used to store task-related information. The execution service looks for particular information based on the TaskType.
- Return type
Union[IResolvable, List[Union[IResolvable, TaskPropertiesObjectProperty]], None]
- task_type¶
Specifies the particular task implementation that Amazon AppFlow performs. Allowed values: Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
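For instance, a Map task that renames a single field and records its data type might look like the sketch below; the field names are hypothetical, and NO_OP simply passes the value through unchanged.

# A sketch of a Map task; "Email" and "email_address" are hypothetical fields.
import aws_cdk.aws_appflow as appflow

map_task = appflow.CfnFlow.TaskProperty(
    task_type="Map",
    source_fields=["Email"],
    destination_field="email_address",
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        salesforce="NO_OP"  # pass the Salesforce value through unchanged
    ),
    task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty(
        key="DATA_TYPE",    # one of the allowed task property keys
        value="string"
    )]
)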
TrendmicroSourcePropertiesProperty¶
-
class
CfnFlow.
TrendmicroSourcePropertiesProperty
(*, object)¶ Bases: object
The properties that are applied when using Trend Micro as a flow source.
- Parameters
object (str) – The object specified in the Trend Micro flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

trendmicro_source_properties_property = appflow.CfnFlow.TrendmicroSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Trend Micro flow source.
TriggerConfigProperty¶
-
class
CfnFlow.
TriggerConfigProperty
(*, trigger_type, trigger_properties=None)¶ Bases: object
The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
- Parameters
trigger_type (str) – Specifies the type of flow trigger. This can be OnDemand, Scheduled, or Event.
trigger_properties (Union[IResolvable, ScheduledTriggerPropertiesProperty, None]) – Specifies the configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

trigger_config_property = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="triggerType",

    # the properties below are optional
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="scheduleExpression",

        # the properties below are optional
        data_pull_mode="dataPullMode",
        schedule_end_time=123,
        schedule_offset=123,
        schedule_start_time=123,
        time_zone="timeZone"
    )
)
Attributes
- trigger_properties¶
Specifies the configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type.
- trigger_type¶
Specifies the type of flow trigger. This can be OnDemand, Scheduled, or Event.
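Since the trigger properties apply only to the Scheduled type, OnDemand and Event triggers omit them. A minimal sketch reusing the rate expression from the documentation above:

# A sketch of a Scheduled trigger that runs an incremental pull every
# five minutes.
import aws_cdk.aws_appflow as appflow

scheduled_trigger = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="Scheduled",
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="rate(5minutes)",
        data_pull_mode="Incremental"
    )
)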
UpsolverDestinationPropertiesProperty¶
-
class
CfnFlow.
UpsolverDestinationPropertiesProperty
(*, bucket_name, s3_output_format_config, bucket_prefix=None)¶ Bases: object
The properties that are applied when Upsolver is used as a destination.
- Parameters
bucket_name (str) – The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
s3_output_format_config (Union[IResolvable, UpsolverS3OutputFormatConfigProperty]) – The configuration that determines how data is formatted when Upsolver is used as the flow destination.
bucket_prefix (Optional[str]) – The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

upsolver_destination_properties_property = appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
    bucket_name="bucketName",
    s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
        prefix_config=appflow.CfnFlow.PrefixConfigProperty(
            prefix_format="prefixFormat",
            prefix_type="prefixType"
        ),

        # the properties below are optional
        aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
            aggregation_type="aggregationType"
        ),
        file_type="fileType"
    ),

    # the properties below are optional
    bucket_prefix="bucketPrefix"
)
Attributes
- bucket_name¶
The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix¶
The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
- s3_output_format_config¶
The configuration that determines how data is formatted when Upsolver is used as the flow destination.
UpsolverS3OutputFormatConfigProperty¶
-
class
CfnFlow.
UpsolverS3OutputFormatConfigProperty
(*, prefix_config, aggregation_config=None, file_type=None)¶ Bases: object
The configuration that determines how Amazon AppFlow formats the flow output data when Upsolver is used as the destination.
- Parameters
prefix_config (Union[IResolvable, PrefixConfigProperty]) – Determines the prefix that Amazon AppFlow applies to the destination folder name. You can name your destination folders according to the flow frequency and date.
aggregation_config (Union[IResolvable, AggregationConfigProperty, None]) – The aggregation settings that you can use to customize the output format of your flow data.
file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

upsolver_s3_output_format_config_property = appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        prefix_format="prefixFormat",
        prefix_type="prefixType"
    ),

    # the properties below are optional
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="aggregationType"
    ),
    file_type="fileType"
)
Attributes
- aggregation_config¶
The aggregation settings that you can use to customize the output format of your flow data.
- file_type¶
Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
- prefix_config¶
Determines the prefix that Amazon AppFlow applies to the destination folder name. You can name your destination folders according to the flow frequency and date.
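Assuming the usual AppFlow enum values (PARQUET for the file type, PATH_AND_FILENAME and DAY for the prefix, SingleFile for aggregation), daily-partitioned Parquet output might be configured like the sketch below; verify these values against the AppFlow API for your use case.

# A sketch of Upsolver output formatting; the enum values are assumptions
# based on the AppFlow API and should be verified.
import aws_cdk.aws_appflow as appflow

upsolver_format = appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        prefix_type="PATH_AND_FILENAME",  # prefix both folder and file names
        prefix_format="DAY"               # partition output by day
    ),
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="SingleFile"     # aggregate each run into one file
    ),
    file_type="PARQUET"
)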
VeevaSourcePropertiesProperty¶
-
class
CfnFlow.
VeevaSourcePropertiesProperty
(*, object, document_type=None, include_all_versions=None, include_renditions=None, include_source_files=None)¶ Bases: object
The properties that are applied when using Veeva as a flow source.
- Parameters
object (str) – The object specified in the Veeva flow source.
document_type (Optional[str]) – The document type specified in the Veeva document extract flow.
include_all_versions (Union[bool, IResolvable, None]) – Boolean value to include all versions of files in the Veeva document extract flow.
include_renditions (Union[bool, IResolvable, None]) – Boolean value to include file renditions in the Veeva document extract flow.
include_source_files (Union[bool, IResolvable, None]) – Boolean value to include source files in the Veeva document extract flow.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

veeva_source_properties_property = appflow.CfnFlow.VeevaSourcePropertiesProperty(
    object="object",

    # the properties below are optional
    document_type="documentType",
    include_all_versions=False,
    include_renditions=False,
    include_source_files=False
)
Attributes
- document_type¶
The document type specified in the Veeva document extract flow.
- include_all_versions¶
Boolean value to include all versions of files in the Veeva document extract flow.
- include_renditions¶
Boolean value to include file renditions in the Veeva document extract flow.
- include_source_files¶
Boolean value to include source files in the Veeva document extract flow.
- object¶
The object specified in the Veeva flow source.
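A document extract source typically combines the object with a document type and the three include flags; every value below is a hypothetical placeholder.

# A sketch of a Veeva document extract source; all values are hypothetical.
import aws_cdk.aws_appflow as appflow

veeva_docs = appflow.CfnFlow.VeevaSourcePropertiesProperty(
    object="documents",
    document_type="Promotional",  # hypothetical Veeva document type
    include_all_versions=True,    # pull every version of each document
    include_renditions=True,      # also pull renditions
    include_source_files=True     # also pull the original source files
)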
ZendeskDestinationPropertiesProperty¶
-
class
CfnFlow.
ZendeskDestinationPropertiesProperty
(*, object, error_handling_config=None, id_field_names=None, write_operation_type=None)¶ Bases: object
The properties that are applied when Zendesk is used as a destination.
- Parameters
object (str) – The object specified in the Zendesk flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

zendesk_destination_properties_property = appflow.CfnFlow.ZendeskDestinationPropertiesProperty(
    object="object",

    # the properties below are optional
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)
Attributes
- error_handling_config¶
The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- id_field_names¶
A list of field names that can be used as an ID field when performing a write operation.
- object¶
The object specified in the Zendesk flow destination.
- write_operation_type¶
The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
ZendeskSourcePropertiesProperty¶
-
class
CfnFlow.
ZendeskSourcePropertiesProperty
(*, object)¶ Bases: object
The properties that are applied when using Zendesk as a flow source.
- Parameters
object (str) – The object specified in the Zendesk flow source.
- ExampleMetadata
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_appflow as appflow

zendesk_source_properties_property = appflow.CfnFlow.ZendeskSourcePropertiesProperty(
    object="object"
)
Attributes
- object¶
The object specified in the Zendesk flow source.