Class: Aws::Appflow::Client

Inherits: Seahorse::Client::Base
Class hierarchy: Object → Seahorse::Client::Base → Aws::Appflow::Client
Defined in: (unknown)

Overview

An API client for Amazon AppFlow. To construct a client, you need to configure a :region and :credentials.
appflow = Aws::Appflow::Client.new(
region: region_name,
credentials: credentials,
# ...
)
See #initialize for a full list of supported configuration options.
Region
You can configure a default region in the following locations:

- ENV['AWS_REGION']
- Aws.config[:region]

See the AWS General Reference for a list of supported regions.
Credentials
Default credentials are loaded automatically from the following locations:

- ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
- Aws.config[:credentials]
- The shared credentials ini file at ~/.aws/credentials
- An instance profile, when running on EC2

You can also construct a credentials object from one of the following classes:

- Aws::Credentials
- Aws::SharedCredentials
- Aws::InstanceProfileCredentials

Alternatively, you can configure credentials with :access_key_id and :secret_access_key:
# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))
Aws::Appflow::Client.new(
access_key_id: creds['access_key_id'],
secret_access_key: creds['secret_access_key']
)
Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.
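For example, a minimal sketch that builds a static Aws::Credentials object from environment variables and passes it to the client (the region shown is illustrative):

# Read keys from the environment rather than hard-coding secrets in source.
creds = Aws::Credentials.new(
  ENV['AWS_ACCESS_KEY_ID'],
  ENV['AWS_SECRET_ACCESS_KEY']
)
appflow = Aws::Appflow::Client.new(region: 'us-east-1', credentials: creds)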
Instance Attribute Summary
Attributes inherited from Seahorse::Client::Base
Constructor

- #initialize(options = {}) ⇒ Aws::Appflow::Client (constructor)
  Constructs an API client.

API Operations

- #create_connector_profile(options = {}) ⇒ Types::CreateConnectorProfileResponse
  Creates a new connector profile associated with your AWS account.
- #create_flow(options = {}) ⇒ Types::CreateFlowResponse
  Enables your application to create a new flow using Amazon AppFlow.
- #delete_connector_profile(options = {}) ⇒ Struct
  Enables you to delete an existing connector profile.
- #delete_flow(options = {}) ⇒ Struct
  Enables your application to delete an existing flow.
- #describe_connector_entity(options = {}) ⇒ Types::DescribeConnectorEntityResponse
  Provides details regarding the entity used with the connector, with a description of the data model for each entity.
- #describe_connector_profiles(options = {}) ⇒ Types::DescribeConnectorProfilesResponse
  Returns a list of connector-profile details matching the provided connector-profile names and connector-types.
- #describe_connectors(options = {}) ⇒ Types::DescribeConnectorsResponse
  Describes the connectors vended by Amazon AppFlow for specified connector types.
- #describe_flow(options = {}) ⇒ Types::DescribeFlowResponse
  Provides a description of the specified flow.
- #describe_flow_execution_records(options = {}) ⇒ Types::DescribeFlowExecutionRecordsResponse
  Fetches the execution history of the flow.
- #list_connector_entities(options = {}) ⇒ Types::ListConnectorEntitiesResponse
  Returns the list of available connector entities supported by Amazon AppFlow.
- #list_flows(options = {}) ⇒ Types::ListFlowsResponse
  Lists all of the flows associated with your account.
- #list_tags_for_resource(options = {}) ⇒ Types::ListTagsForResourceResponse
  Retrieves the tags that are associated with a specified flow.
- #start_flow(options = {}) ⇒ Types::StartFlowResponse
  Activates an existing flow.
- #stop_flow(options = {}) ⇒ Types::StopFlowResponse
  Deactivates the existing flow.
- #tag_resource(options = {}) ⇒ Struct
  Applies a tag to the specified flow.
- #untag_resource(options = {}) ⇒ Struct
  Removes a tag from the specified flow.
- #update_connector_profile(options = {}) ⇒ Types::UpdateConnectorProfileResponse
  Updates a given connector profile associated with your account.
- #update_flow(options = {}) ⇒ Types::UpdateFlowResponse
  Updates an existing flow.

Instance Method Summary

- #wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean
  Waiters poll an API operation until a resource enters a desired state.
- #waiter_names ⇒ Array<Symbol>
  Returns the list of supported waiters.
Methods inherited from Seahorse::Client::Base
add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options = {}) ⇒ Aws::Appflow::Client
Constructs an API client.
Options Hash (options):
- :access_key_id (String) — Used to set credentials statically. See Plugins::RequestSigner for more details.
- :active_endpoint_cache (Boolean) — When set to true, a background thread polls for endpoints every 60 seconds (the default interval). Defaults to false. See Plugins::EndpointDiscovery for more details.
- :convert_params (Boolean) — default: true — When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.
- :credentials (required, Credentials) — Your AWS credentials. The following locations will be searched in order for credentials:
  - The :access_key_id, :secret_access_key, and :session_token options
  - ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  - The shared credentials file at HOME/.aws/credentials
  - EC2 instance profile credentials
  See Plugins::RequestSigner for more details.
- :disable_host_prefix_injection (Boolean) — Set to true to disable the SDK from automatically adding a host prefix to the default service endpoint when available. See Plugins::EndpointPattern for more details.
- :endpoint (String) — A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.
- :endpoint_cache_max_entries (Integer) — The maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.
- :endpoint_cache_max_threads (Integer) — The maximum number of threads used to poll for endpoints to cache. Defaults to 10. See Plugins::EndpointDiscovery for more details.
- :endpoint_cache_poll_interval (Integer) — When :endpoint_discovery and :active_endpoint_cache are enabled, the time interval in seconds between requests that fetch endpoint information. Defaults to 60 seconds. See Plugins::EndpointDiscovery for more details.
- :endpoint_discovery (Boolean) — When set to true, endpoint discovery is enabled for operations that support it. Defaults to false. See Plugins::EndpointDiscovery for more details.
- :http_continue_timeout (Float) — default: 1 — See Seahorse::Client::Plugins::NetHttp for more details.
- :http_idle_timeout (Integer) — default: 5 — See Seahorse::Client::Plugins::NetHttp for more details.
- :http_open_timeout (Integer) — default: 15 — See Seahorse::Client::Plugins::NetHttp for more details.
- :http_proxy (String) — See Seahorse::Client::Plugins::NetHttp for more details.
- :http_read_timeout (Integer) — default: 60 — See Seahorse::Client::Plugins::NetHttp for more details.
- :http_wire_trace (Boolean) — default: false — See Seahorse::Client::Plugins::NetHttp for more details.
- :log_level (Symbol) — default: :info — The log level at which messages are sent to the logger. See Plugins::Logging for more details.
- :log_formatter (Logging::LogFormatter) — The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.
- :logger (Logger) — default: nil — The Logger instance to send log messages to. If this option is not set, logging is disabled. See Plugins::Logging for more details.
- :profile (String) — Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.
- :raise_response_errors (Boolean) — default: true — When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.
- :region (required, String) — The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks AMAZON_REGION and AWS_DEFAULT_REGION. See Plugins::RegionalEndpoint for more details.
- :retry_limit (Integer) — default: 3 — The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, and auth errors from expired credentials. See Plugins::RetryErrors for more details.
- :secret_access_key (String) — Used to set credentials statically. See Plugins::RequestSigner for more details.
- :session_token (String) — Used to set credentials statically. See Plugins::RequestSigner for more details.
- :ssl_ca_bundle (String) — See Seahorse::Client::Plugins::NetHttp for more details.
- :ssl_ca_directory (String) — See Seahorse::Client::Plugins::NetHttp for more details.
- :ssl_ca_store (String) — See Seahorse::Client::Plugins::NetHttp for more details.
- :ssl_verify_peer (Boolean) — default: true — See Seahorse::Client::Plugins::NetHttp for more details.
- :stub_responses (Boolean) — default: false — Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information. Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled. See Plugins::StubResponses for more details.
- :validate_params (Boolean) — default: true — When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.
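As an illustration of the :stub_responses option, the sketch below builds a client that returns canned data without making any HTTP requests; the stubbed flow name is invented for the example:

client = Aws::Appflow::Client.new(
  region: 'us-east-1',
  stub_responses: true
)

# Provide canned data for list_flows; without this, empty fake data is returned.
client.stub_responses(:list_flows, flows: [{ flow_name: 'example_flow' }])

resp = client.list_flows
resp.flows.map(&:flow_name) #=> ["example_flow"]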
Instance Method Details
#create_connector_profile(options = {}) ⇒ Types::CreateConnectorProfileResponse
Creates a new connector profile associated with your AWS account. There is a soft quota of 100 connector profiles per AWS account. If you need more connector profiles than this quota allows, you can submit a request to the Amazon AppFlow team through the Amazon AppFlow support channel.
Examples:
Request syntax with placeholder values
resp = client.create_connector_profile({
connector_profile_name: "ConnectorProfileName", # required
kms_arn: "KMSArn",
connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connection_mode: "Public", # required, accepts Public, Private
connector_profile_config: { # required
connector_profile_properties: { # required
amplitude: {
},
datadog: {
instance_url: "InstanceUrl", # required
},
dynatrace: {
instance_url: "InstanceUrl", # required
},
google_analytics: {
},
infor_nexus: {
instance_url: "InstanceUrl", # required
},
marketo: {
instance_url: "InstanceUrl", # required
},
redshift: {
database_url: "DatabaseUrl", # required
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
role_arn: "RoleArn", # required
},
salesforce: {
instance_url: "InstanceUrl",
is_sandbox_environment: false,
},
service_now: {
instance_url: "InstanceUrl", # required
},
singular: {
},
slack: {
instance_url: "InstanceUrl", # required
},
snowflake: {
warehouse: "Warehouse", # required
stage: "Stage", # required
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
private_link_service_name: "PrivateLinkServiceName",
account_name: "AccountName",
region: "Region",
},
trendmicro: {
},
veeva: {
instance_url: "InstanceUrl", # required
},
zendesk: {
instance_url: "InstanceUrl", # required
},
},
connector_profile_credentials: { # required
amplitude: {
api_key: "ApiKey", # required
secret_key: "SecretKey", # required
},
datadog: {
api_key: "ApiKey", # required
application_key: "ApplicationKey", # required
},
dynatrace: {
api_token: "ApiToken", # required
},
google_analytics: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
refresh_token: "RefreshToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
infor_nexus: {
access_key_id: "AccessKeyId", # required
user_id: "Username", # required
secret_access_key: "Key", # required
datakey: "Key", # required
},
marketo: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
redshift: {
username: "Username", # required
password: "Password", # required
},
salesforce: {
access_token: "AccessToken",
refresh_token: "RefreshToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
client_credentials_arn: "ClientCredentialsArn",
},
service_now: {
username: "Username", # required
password: "Password", # required
},
singular: {
api_key: "ApiKey", # required
},
slack: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
snowflake: {
username: "Username", # required
password: "Password", # required
},
trendmicro: {
api_secret_key: "ApiSecretKey", # required
},
veeva: {
username: "Username", # required
password: "Password", # required
},
zendesk: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
},
},
})
Response structure
resp.connector_profile_arn #=> String
Options Hash (options):
- :connector_profile_name (required, String) — The name of the connector profile. The name is unique for each ConnectorProfile in your AWS account.
- :kms_arn (String) — The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
- :connector_type (required, String) — The type of connector, such as Salesforce, Amplitude, and so on.
- :connection_mode (required, String) — Indicates the connection mode and specifies whether it is public or private. Private flows use AWS PrivateLink to route data over AWS infrastructure without exposing it to the public internet.
- :connector_profile_config (required, Types::ConnectorProfileConfig) — Defines the connector-specific configuration and credentials.

Returns:

- (Types::CreateConnectorProfileResponse) — Returns a response object which responds to the following methods:
  - #connector_profile_arn => String
See Also:
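As a concrete illustration, here is a minimal sketch that creates a Datadog connector profile; the profile name, instance URL, and environment variable names are placeholders:

resp = client.create_connector_profile({
  connector_profile_name: "my_datadog_profile", # placeholder name
  connector_type: "Datadog",
  connection_mode: "Public",
  connector_profile_config: {
    connector_profile_properties: {
      datadog: { instance_url: "https://api.datadoghq.com" }, # placeholder URL
    },
    connector_profile_credentials: {
      datadog: {
        api_key: ENV['DATADOG_API_KEY'],         # placeholder variable
        application_key: ENV['DATADOG_APP_KEY'], # placeholder variable
      },
    },
  },
})
resp.connector_profile_arn #=> String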
#create_flow(options = {}) ⇒ Types::CreateFlowResponse
Enables your application to create a new flow using Amazon AppFlow. You must create a connector profile before calling this API. Note that the Request Syntax below shows syntax for multiple destinations; however, you can only transfer data to one item in this list at a time. Amazon AppFlow does not currently support flows to multiple destinations at once.
Examples:
Request syntax with placeholder values
resp = client.create_flow({
flow_name: "FlowName", # required
description: "FlowDescription",
kms_arn: "KMSArn",
trigger_config: { # required
trigger_type: "Scheduled", # required, accepts Scheduled, Event, OnDemand
trigger_properties: {
scheduled: {
schedule_expression: "ScheduleExpression", # required
data_pull_mode: "Incremental", # accepts Incremental, Complete
schedule_start_time: Time.now,
schedule_end_time: Time.now,
timezone: "Timezone",
},
},
},
source_flow_config: { # required
connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connector_profile_name: "ConnectorProfileName",
source_connector_properties: { # required
amplitude: {
object: "Object", # required
},
datadog: {
object: "Object", # required
},
dynatrace: {
object: "Object", # required
},
google_analytics: {
object: "Object", # required
},
infor_nexus: {
object: "Object", # required
},
marketo: {
object: "Object", # required
},
s3: {
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
},
salesforce: {
object: "Object", # required
enable_dynamic_field_update: false,
include_deleted_records: false,
},
service_now: {
object: "Object", # required
},
singular: {
object: "Object", # required
},
slack: {
object: "Object", # required
},
trendmicro: {
object: "Object", # required
},
veeva: {
object: "Object", # required
},
zendesk: {
object: "Object", # required
},
},
incremental_pull_config: {
datetime_type_field_name: "DatetimeTypeFieldName",
},
},
destination_flow_config_list: [ # required
{
connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connector_profile_name: "ConnectorProfileName",
destination_connector_properties: { # required
redshift: {
object: "Object", # required
intermediate_bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
s3: {
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
s3_output_format_config: {
file_type: "CSV", # accepts CSV, JSON, PARQUET
prefix_config: {
prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
},
aggregation_config: {
aggregation_type: "None", # accepts None, SingleFile
},
},
},
salesforce: {
object: "Object", # required
id_field_names: ["Name"],
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
},
snowflake: {
object: "Object", # required
intermediate_bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
event_bridge: {
object: "Object", # required
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
},
},
],
tasks: [ # required
{
source_fields: ["String"], # required
connector_operator: {
amplitude: "BETWEEN", # accepts BETWEEN
datadog: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
dynatrace: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
google_analytics: "PROJECTION", # accepts PROJECTION, BETWEEN
infor_nexus: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
marketo: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
s3: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
salesforce: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
service_now: "PROJECTION", # accepts PROJECTION, CONTAINS, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
singular: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
slack: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
trendmicro: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
veeva: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
zendesk: "PROJECTION", # accepts PROJECTION, GREATER_THAN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
},
destination_field: "DestinationField",
task_type: "Arithmetic", # required, accepts Arithmetic, Filter, Map, Mask, Merge, Truncate, Validate
task_properties: {
"VALUE" => "Property",
},
},
],
tags: {
"TagKey" => "TagValue",
},
})
Response structure
resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.
- :description (String) — A description of the flow you want to create.
- :kms_arn (String) — The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
- :trigger_config (required, Types::TriggerConfig) — The trigger settings that determine how and when the flow runs.
- :source_flow_config (required, Types::SourceFlowConfig) — The configuration that controls how Amazon AppFlow retrieves data from the source connector.
- :destination_flow_config_list (required, Array<Types::DestinationFlowConfig>) — The configuration that controls how Amazon AppFlow places data in the destination connector.
- :tasks (required, Array<Types::Task>) — A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.
- :tags (Hash<String,String>) — The tags used to organize, track, or control access for your flow.

Returns:

- (Types::CreateFlowResponse) — Returns a response object which responds to the following methods:
  - #flow_arn => String
  - #flow_status => String
See Also:
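As a rough sketch under stated assumptions, the example below creates an on-demand flow that copies data from one S3 bucket to another; the flow name, bucket names, and field name are placeholders, and a real flow typically also needs Map tasks for the fields it transfers:

resp = client.create_flow({
  flow_name: "example_s3_to_s3", # placeholder
  trigger_config: { trigger_type: "OnDemand" },
  source_flow_config: {
    connector_type: "S3",
    source_connector_properties: {
      s3: { bucket_name: "source-bucket" }, # placeholder bucket
    },
  },
  destination_flow_config_list: [{
    connector_type: "S3",
    destination_connector_properties: {
      s3: { bucket_name: "dest-bucket" }, # placeholder bucket
    },
  }],
  tasks: [{
    source_fields: ["field_a"],               # placeholder field
    connector_operator: { s3: "PROJECTION" }, # select the listed fields
    task_type: "Filter",
  }],
})
resp.flow_arn #=> String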
#delete_connector_profile(options = {}) ⇒ Struct
Enables you to delete an existing connector profile.
Examples:
Request syntax with placeholder values
resp = client.delete_connector_profile({
connector_profile_name: "ConnectorProfileName", # required
force_delete: false,
})
Options Hash (options):
- :connector_profile_name (required, String) — The name of the connector profile. The name is unique for each ConnectorProfile in your account.
- :force_delete (Boolean) — Indicates whether Amazon AppFlow should delete the profile, even if it is currently in use in one or more flows.

Returns:

- (Struct) — Returns an empty response.
See Also:
#delete_flow(options = {}) ⇒ Struct
Enables your application to delete an existing flow. Before deleting the flow, Amazon AppFlow validates the request by checking the flow configuration and status. You can delete flows one at a time.
Examples:
Request syntax with placeholder values
resp = client.delete_flow({
flow_name: "FlowName", # required
force_delete: false,
})
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.
- :force_delete (Boolean) — Indicates whether Amazon AppFlow should delete the flow, even if it is currently in use.

Returns:

- (Struct) — Returns an empty response.
See Also:
#describe_connector_entity(options = {}) ⇒ Types::DescribeConnectorEntityResponse
Provides details regarding the entity used with the connector, with a description of the data model for each entity.
Examples:
Request syntax with placeholder values
resp = client.describe_connector_entity({
connector_entity_name: "Name", # required
connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connector_profile_name: "ConnectorProfileName",
})
Response structure
resp.connector_entity_fields #=> Array
resp.connector_entity_fields[0].identifier #=> String
resp.connector_entity_fields[0].label #=> String
resp.connector_entity_fields[0].supported_field_type_details.v1.field_type #=> String
resp.connector_entity_fields[0].supported_field_type_details.v1.filter_operators #=> Array
resp.connector_entity_fields[0].supported_field_type_details.v1.filter_operators[0] #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "CONTAINS", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.connector_entity_fields[0].supported_field_type_details.v1.supported_values #=> Array
resp.connector_entity_fields[0].supported_field_type_details.v1.supported_values[0] #=> String
resp.connector_entity_fields[0].description #=> String
resp.connector_entity_fields[0].source_properties.is_retrievable #=> true/false
resp.connector_entity_fields[0].source_properties.is_queryable #=> true/false
resp.connector_entity_fields[0].destination_properties.is_creatable #=> true/false
resp.connector_entity_fields[0].destination_properties.is_nullable #=> true/false
resp.connector_entity_fields[0].destination_properties.is_upsertable #=> true/false
resp.connector_entity_fields[0].destination_properties.is_updatable #=> true/false
resp.connector_entity_fields[0].destination_properties.supported_write_operations #=> Array
resp.connector_entity_fields[0].destination_properties.supported_write_operations[0] #=> String, one of "INSERT", "UPSERT", "UPDATE"
Options Hash (options):
- :connector_entity_name (required, String) — The entity name for that connector.
- :connector_type (String) — The type of connector application, such as Salesforce, Amplitude, and so on.
- :connector_profile_name (String) — The name of the connector profile. The name is unique for each ConnectorProfile in the AWS account.

Returns:

- (Types::DescribeConnectorEntityResponse) — Returns a response object which responds to the following methods:
  - #connector_entity_fields => Array<Types::ConnectorEntityField>
See Also:
#describe_connector_profiles(options = {}) ⇒ Types::DescribeConnectorProfilesResponse
Returns a list of connector-profile details matching the provided connector-profile names and connector-types. Both input lists are optional, and you can use them to filter the result.

If no names or connector-types are provided, this operation returns all connector profiles in a paginated form. If there is no match, it returns an empty list.
Examples:
Request syntax with placeholder values
resp = client.describe_connector_profiles({
connector_profile_names: ["ConnectorProfileName"],
connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
max_results: 1,
next_token: "NextToken",
})
Response structure
resp.connector_profile_details #=> Array
resp.connector_profile_details[0].connector_profile_arn #=> String
resp.connector_profile_details[0].connector_profile_name #=> String
resp.connector_profile_details[0].connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.connector_profile_details[0].connection_mode #=> String, one of "Public", "Private"
resp.connector_profile_details[0].credentials_arn #=> String
resp.connector_profile_details[0].connector_profile_properties.datadog.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.dynatrace.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.infor_nexus.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.marketo.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.database_url #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.bucket_name #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.bucket_prefix #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.role_arn #=> String
resp.connector_profile_details[0].connector_profile_properties.salesforce.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.salesforce.is_sandbox_environment #=> true/false
resp.connector_profile_details[0].connector_profile_properties.service_now.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.slack.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.warehouse #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.stage #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.bucket_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.bucket_prefix #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.private_link_service_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.account_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.region #=> String
resp.connector_profile_details[0].connector_profile_properties.veeva.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.zendesk.instance_url #=> String
resp.connector_profile_details[0].created_at #=> Time
resp.connector_profile_details[0].last_updated_at #=> Time
resp.next_token #=> String
Options Hash (options):
- :connector_profile_names (Array<String>) — The name of the connector profile. The name is unique for each ConnectorProfile in the AWS account.
- :connector_type (String) — The type of connector, such as Salesforce, Amplitude, and so on.
- :max_results (Integer) — Specifies the maximum number of items that should be returned in the result set. The default for maxResults is 20 (for all paginated API operations).
- :next_token (String) — The pagination token for the next page of data.

Returns:

- (Types::DescribeConnectorProfilesResponse) — Returns a response object which responds to the following methods:
  - #connector_profile_details => Array<Types::ConnectorProfile>
  - #next_token => String
See Also:
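Because results are paginated, a common pattern is to loop until next_token comes back nil; a minimal sketch:

# Collect every connector profile, one page at a time.
profiles = []
resp = client.describe_connector_profiles(max_results: 20)
loop do
  profiles.concat(resp.connector_profile_details)
  break if resp.next_token.nil?
  resp = client.describe_connector_profiles(
    max_results: 20,
    next_token: resp.next_token
  )
end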
#describe_connectors(options = {}) ⇒ Types::DescribeConnectorsResponse
Describes the connectors vended by Amazon AppFlow for specified connector types. If you don't specify a connector type, this operation describes all connectors vended by Amazon AppFlow. If there are more connectors than can be returned in one page, the response contains a nextToken object, which can be passed in to the next call to the DescribeConnectors API operation to retrieve the next page.
Examples:
Request syntax with placeholder values
resp = client.describe_connectors({
connector_types: ["Salesforce"], # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
next_token: "NextToken",
})
Response structure
resp.connector_configurations #=> Hash
resp.connector_configurations["ConnectorType"].can_use_as_source #=> true/false
resp.connector_configurations["ConnectorType"].can_use_as_destination #=> true/false
resp.connector_configurations["ConnectorType"].supported_destination_connectors #=> Array
resp.connector_configurations["ConnectorType"].supported_destination_connectors[0] #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.connector_configurations["ConnectorType"].supported_scheduling_frequencies #=> Array
resp.connector_configurations["ConnectorType"].supported_scheduling_frequencies[0] #=> String, one of "BYMINUTE", "HOURLY", "DAILY", "WEEKLY", "MONTHLY", "ONCE"
resp.connector_configurations["ConnectorType"].is_private_link_enabled #=> true/false
resp.connector_configurations["ConnectorType"].is_private_link_endpoint_url_required #=> true/false
resp.connector_configurations["ConnectorType"].supported_trigger_types #=> Array
resp.connector_configurations["ConnectorType"].supported_trigger_types[0] #=> String, one of "Scheduled", "Event", "OnDemand"
resp.connector_configurations["ConnectorType"].connector_metadata.google_analytics.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"].connector_metadata.google_analytics.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"].connector_metadata.salesforce.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"].connector_metadata.salesforce.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"].connector_metadata.slack.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"].connector_metadata.slack.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"].connector_metadata.snowflake.supported_regions #=> Array
resp.connector_configurations["ConnectorType"].connector_metadata.snowflake.supported_regions[0] #=> String
resp.connector_configurations["ConnectorType"].connector_metadata.zendesk.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"].connector_metadata.zendesk.o_auth_scopes[0] #=> String
resp.next_token #=> String
Options Hash (options):
- :connector_types (Array<String>) — The type of connector, such as Salesforce, Amplitude, and so on.
- :next_token (String) — The pagination token for the next page of data.

Returns:

- (Types::DescribeConnectorsResponse) — Returns a response object which responds to the following methods:
  - #connector_configurations => Hash<String,Types::ConnectorConfiguration>
  - #next_token => String
See Also:
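For example, a short sketch that lists which connector types can act as a source or a destination:

resp = client.describe_connectors({})
resp.connector_configurations.each do |type, config|
  puts "#{type}: source=#{config.can_use_as_source}, " \
       "destination=#{config.can_use_as_destination}"
end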
#describe_flow(options = {}) ⇒ Types::DescribeFlowResponse
Provides a description of the specified flow.
Examples:
Request syntax with placeholder values
resp = client.describe_flow({
flow_name: "FlowName", # required
})
Response structure
resp.flow_arn #=> String
resp.description #=> String
resp.flow_name #=> String
resp.kms_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.flow_status_message #=> String
resp.source_flow_config.connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.source_flow_config.connector_profile_name #=> String
resp.source_flow_config.source_connector_properties.amplitude.object #=> String
resp.source_flow_config.source_connector_properties.datadog.object #=> String
resp.source_flow_config.source_connector_properties.dynatrace.object #=> String
resp.source_flow_config.source_connector_properties.google_analytics.object #=> String
resp.source_flow_config.source_connector_properties.infor_nexus.object #=> String
resp.source_flow_config.source_connector_properties.marketo.object #=> String
resp.source_flow_config.source_connector_properties.s3.bucket_name #=> String
resp.source_flow_config.source_connector_properties.s3.bucket_prefix #=> String
resp.source_flow_config.source_connector_properties.salesforce.object #=> String
resp.source_flow_config.source_connector_properties.salesforce.enable_dynamic_field_update #=> true/false
resp.source_flow_config.source_connector_properties.salesforce.include_deleted_records #=> true/false
resp.source_flow_config.source_connector_properties.service_now.object #=> String
resp.source_flow_config.source_connector_properties.singular.object #=> String
resp.source_flow_config.source_connector_properties.slack.object #=> String
resp.source_flow_config.source_connector_properties.trendmicro.object #=> String
resp.source_flow_config.source_connector_properties.veeva.object #=> String
resp.source_flow_config.source_connector_properties.zendesk.object #=> String
resp.source_flow_config.incremental_pull_config.datetime_type_field_name #=> String
resp.destination_flow_config_list #=> Array
resp.destination_flow_config_list[0].connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.destination_flow_config_list[0].connector_profile_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.intermediate_bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.fail_on_first_destination_error #=> true/false
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.file_type #=> String, one of "CSV", "JSON", "PARQUET"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.prefix_config.prefix_type #=> String, one of "FILENAME", "PATH", "PATH_AND_FILENAME"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.prefix_config.prefix_format #=> String, one of "YEAR", "MONTH", "DAY", "HOUR", "MINUTE"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.aggregation_config.aggregation_type #=> String, one of "None", "SingleFile"
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.id_field_names #=> Array
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.id_field_names[0] #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.fail_on_first_destination_error #=> true/false
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.write_operation_type #=> String, one of "INSERT", "UPSERT", "UPDATE"
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.intermediate_bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.fail_on_first_destination_error #=> true/false
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.fail_on_first_destination_error #=> true/false
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.bucket_name #=> String
resp.last_run_execution_details.most_recent_execution_message #=> String
resp.last_run_execution_details.most_recent_execution_time #=> Time
resp.last_run_execution_details.most_recent_execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.trigger_config.trigger_type #=> String, one of "Scheduled", "Event", "OnDemand"
resp.trigger_config.trigger_properties.scheduled.schedule_expression #=> String
resp.trigger_config.trigger_properties.scheduled.data_pull_mode #=> String, one of "Incremental", "Complete"
resp.trigger_config.trigger_properties.scheduled.schedule_start_time #=> Time
resp.trigger_config.trigger_properties.scheduled.schedule_end_time #=> Time
resp.trigger_config.trigger_properties.scheduled.timezone #=> String
resp.tasks #=> Array
resp.tasks[0].source_fields #=> Array
resp.tasks[0].source_fields[0] #=> String
resp.tasks[0].connector_operator.amplitude #=> String, one of "BETWEEN"
resp.tasks[0].connector_operator.datadog #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.dynatrace #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.google_analytics #=> String, one of "PROJECTION", "BETWEEN"
resp.tasks[0].connector_operator.infor_nexus #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.marketo #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.s3 #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.salesforce #=> String, one of "PROJECTION", "LESS_THAN", "CONTAINS", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.service_now #=> String, one of "PROJECTION", "CONTAINS", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.singular #=> String, one of "PROJECTION", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.slack #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.trendmicro #=> String, one of "PROJECTION", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.veeva #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "CONTAINS", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.zendesk #=> String, one of "PROJECTION", "GREATER_THAN", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].destination_field #=> String
resp.tasks[0].task_type #=> String, one of "Arithmetic", "Filter", "Map", "Mask", "Merge", "Truncate", "Validate"
resp.tasks[0].task_properties #=> Hash
resp.tasks[0].task_properties["OperatorPropertiesKeys"] #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.created_by #=> String
resp.last_updated_by #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

- (Types::DescribeFlowResponse) — Returns a response object which responds to the following methods:
  - #flow_arn => String
  - #description => String
  - #flow_name => String
  - #kms_arn => String
  - #flow_status => String
  - #flow_status_message => String
  - #source_flow_config => Types::SourceFlowConfig
  - #destination_flow_config_list => Array<Types::DestinationFlowConfig>
  - #last_run_execution_details => Types::ExecutionDetails
  - #trigger_config => Types::TriggerConfig
  - #tasks => Array<Types::Task>
  - #created_at => Time
  - #last_updated_at => Time
  - #created_by => String
  - #last_updated_by => String
  - #tags => Hash<String,String>
See Also:
#describe_flow_execution_records(options = {}) ⇒ Types::DescribeFlowExecutionRecordsResponse
Fetches the execution history of the flow.
Examples:
Request syntax with placeholder values
resp = client.describe_flow_execution_records({
flow_name: "FlowName", # required
max_results: 1,
next_token: "NextToken",
})
Response structure
resp.flow_executions #=> Array
resp.flow_executions[0].execution_id #=> String
resp.flow_executions[0].execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.flow_executions[0].execution_result.error_info.put_failures_count #=> Integer
resp.flow_executions[0].execution_result.error_info.execution_message #=> String
resp.flow_executions[0].execution_result.bytes_processed #=> Integer
resp.flow_executions[0].execution_result.bytes_written #=> Integer
resp.flow_executions[0].execution_result.records_processed #=> Integer
resp.flow_executions[0].started_at #=> Time
resp.flow_executions[0].last_updated_at #=> Time
resp.next_token #=> String
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.
- :max_results (Integer) — Specifies the maximum number of items that should be returned in the result set. The default for maxResults is 20 (for all paginated API operations).
- :next_token (String) — The pagination token for the next page of data.

Returns:

- (Types::DescribeFlowExecutionRecordsResponse) — Returns a response object which responds to the following methods:
  - #flow_executions => Array<Types::ExecutionRecord>
  - #next_token => String
See Also:
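For example, a small sketch that prints a one-line summary per run (the flow name is a placeholder):

resp = client.describe_flow_execution_records(flow_name: "example_s3_to_s3")
resp.flow_executions.each do |run|
  result = run.execution_result
  puts "#{run.execution_id} #{run.execution_status} " \
       "records=#{result && result.records_processed}"
end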
#list_connector_entities(options = {}) ⇒ Types::ListConnectorEntitiesResponse
Returns the list of available connector entities supported by Amazon AppFlow. For example, you can query Salesforce for Account and Opportunity entities, or query ServiceNow for the Incident entity.
Examples:
Request syntax with placeholder values
resp = client.list_connector_entities({
connector_profile_name: "ConnectorProfileName",
connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
entities_path: "EntitiesPath",
})
Response structure
resp.connector_entity_map #=> Hash
resp.connector_entity_map["Group"] #=> Array
resp.connector_entity_map["Group"][0].name #=> String
resp.connector_entity_map["Group"][0].label #=> String
resp.connector_entity_map["Group"][0].has_nested_entities #=> true/false
Options Hash (options):
- :connector_profile_name (String) — The name of the connector profile. The name is unique for each ConnectorProfile in the AWS account, and is used to query the downstream connector.
- :connector_type (String) — The type of connector, such as Salesforce, Amplitude, and so on.
- :entities_path (String) — This optional parameter is specific to the connector implementation. Some connectors support multiple levels or categories of entities. You can find out the list of roots for such providers by sending a request without the entitiesPath parameter. If the connector supports entities at different roots, this initial request returns the list of roots. Otherwise, this request returns all entities supported by the provider.

Returns:

- (Types::ListConnectorEntitiesResponse) — Returns a response object which responds to the following methods:
  - #connector_entity_map => Hash<String,Array<Types::ConnectorEntity>>
See Also:
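For example, the sketch below issues the initial request without entitiesPath and prints each entity group; the profile name is a placeholder:

resp = client.list_connector_entities(
  connector_profile_name: "my_salesforce_profile" # placeholder
)
resp.connector_entity_map.each do |group, entities|
  entities.each do |entity|
    puts "#{group}/#{entity.name} nested=#{entity.has_nested_entities}"
  end
end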
#list_flows(options = {}) ⇒ Types::ListFlowsResponse
Lists all of the flows associated with your account.
Examples:
Request syntax with placeholder values
resp = client.list_flows({
max_results: 1,
next_token: "NextToken",
})
Response structure
resp.flows #=> Array
resp.flows[0].flow_arn #=> String
resp.flows[0].description #=> String
resp.flows[0].flow_name #=> String
resp.flows[0].flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.flows[0].source_connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.flows[0].destination_connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge"
resp.flows[0].trigger_type #=> String, one of "Scheduled", "Event", "OnDemand"
resp.flows[0].created_at #=> Time
resp.flows[0].last_updated_at #=> Time
resp.flows[0].created_by #=> String
resp.flows[0].last_updated_by #=> String
resp.flows[0].tags #=> Hash
resp.flows[0].tags["TagKey"] #=> String
resp.flows[0].last_run_execution_details.most_recent_execution_message #=> String
resp.flows[0].last_run_execution_details.most_recent_execution_time #=> Time
resp.flows[0].last_run_execution_details.most_recent_execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.next_token #=> String
Options Hash (options):
- :max_results (Integer) — Specifies the maximum number of items that should be returned in the result set.
- :next_token (String) — The pagination token for the next page of data.

Returns:

- (Types::ListFlowsResponse) — Returns a response object which responds to the following methods:
  - #flows => Array<Types::FlowDefinition>
  - #next_token => String
See Also:
#list_tags_for_resource(options = {}) ⇒ Types::ListTagsForResourceResponse
Retrieves the tags that are associated with a specified flow.
Examples:
Request syntax with placeholder values
resp = client.list_tags_for_resource({
resource_arn: "ARN", # required
})
Response structure
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
Options Hash (options):
- :resource_arn (required, String) — The Amazon Resource Name (ARN) of the specified flow.

Returns:

- (Types::ListTagsForResourceResponse) — Returns a response object which responds to the following methods:
  - #tags => Hash<String,String>
See Also:
#start_flow(options = {}) ⇒ Types::StartFlowResponse
Activates an existing flow. For on-demand flows, this operation runs the flow immediately. For schedule and event-triggered flows, this operation activates the flow.
Examples:
Request syntax with placeholder values
resp = client.start_flow({
flow_name: "FlowName", # required
})
Response structure
resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.execution_id #=> String
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

- (Types::StartFlowResponse) — Returns a response object which responds to the following methods:
  - #flow_arn => String
  - #flow_status => String
  - #execution_id => String
See Also:
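A minimal sketch that starts an on-demand flow and then looks the run up in the execution history (the flow name is a placeholder):

resp = client.start_flow(flow_name: "example_s3_to_s3")
execution_id = resp.execution_id

records = client.describe_flow_execution_records(flow_name: "example_s3_to_s3")
run = records.flow_executions.find { |e| e.execution_id == execution_id }
run && run.execution_status #=> String, e.g. "InProgress"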
#stop_flow(options = {}) ⇒ Types::StopFlowResponse
Deactivates the existing flow. For on-demand flows, this operation returns an unsupportedOperationException error message. For schedule and event-triggered flows, this operation deactivates the flow.
Examples:
Request syntax with placeholder values
resp = client.stop_flow({
flow_name: "FlowName", # required
})
Response structure
resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
Options Hash (options):
- :flow_name (required, String) — The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

- (Types::StopFlowResponse) — Returns a response object which responds to the following methods:
  - #flow_arn => String
  - #flow_status => String
See Also:
#tag_resource(options = {}) ⇒ Struct
Applies a tag to the specified flow.
Examples:
Request syntax with placeholder values
resp = client.tag_resource({
resource_arn: "ARN", # required
tags: { # required
"TagKey" => "TagValue",
},
})
Options Hash (options):
- :resource_arn (required, String) — The Amazon Resource Name (ARN) of the flow that you want to tag.
- :tags (required, Hash<String,String>) — The tags used to organize, track, or control access for your flow.

Returns:

- (Struct) — Returns an empty response.
See Also:
#untag_resource(options = {}) ⇒ Struct
Removes a tag from the specified flow.
Examples:
Request syntax with placeholder values
resp = client.untag_resource({
resource_arn: "ARN", # required
tag_keys: ["TagKey"], # required
})
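Removing tags (hedged sketch)
The following sketch is not part of the generated reference; the ARN and tag keys are placeholders:
# Hedged sketch; ARN and tag keys are placeholders.
client.untag_resource(
  resource_arn: "arn:aws:appflow:us-east-1:111122223333:flow/my-flow",
  tag_keys: ["environment", "owner"],
)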
Options Hash (options):
-
:resource_arn
(required, String)
—
The Amazon Resource Name (ARN) of the flow that you want to untag.
-
:tag_keys
(required, Array<String>)
—
The tag keys associated with the tag that you want to remove from your flow.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
#update_connector_profile(options = {}) ⇒ Types::UpdateConnectorProfileResponse
Updates a given connector profile associated with your account.
Examples:
Request syntax with placeholder values
resp = client.update_connector_profile({
connector_profile_name: "ConnectorProfileName", # required
connection_mode: "Public", # required, accepts Public, Private
connector_profile_config: { # required
connector_profile_properties: { # required
amplitude: {
},
datadog: {
instance_url: "InstanceUrl", # required
},
dynatrace: {
instance_url: "InstanceUrl", # required
},
google_analytics: {
},
infor_nexus: {
instance_url: "InstanceUrl", # required
},
marketo: {
instance_url: "InstanceUrl", # required
},
redshift: {
database_url: "DatabaseUrl", # required
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
role_arn: "RoleArn", # required
},
salesforce: {
instance_url: "InstanceUrl",
is_sandbox_environment: false,
},
service_now: {
instance_url: "InstanceUrl", # required
},
singular: {
},
slack: {
instance_url: "InstanceUrl", # required
},
snowflake: {
warehouse: "Warehouse", # required
stage: "Stage", # required
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
private_link_service_name: "PrivateLinkServiceName",
account_name: "AccountName",
region: "Region",
},
trendmicro: {
},
veeva: {
instance_url: "InstanceUrl", # required
},
zendesk: {
instance_url: "InstanceUrl", # required
},
},
connector_profile_credentials: { # required
amplitude: {
api_key: "ApiKey", # required
secret_key: "SecretKey", # required
},
datadog: {
api_key: "ApiKey", # required
application_key: "ApplicationKey", # required
},
dynatrace: {
api_token: "ApiToken", # required
},
google_analytics: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
refresh_token: "RefreshToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
infor_nexus: {
access_key_id: "AccessKeyId", # required
user_id: "Username", # required
secret_access_key: "Key", # required
datakey: "Key", # required
},
marketo: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
redshift: {
username: "Username", # required
password: "Password", # required
},
salesforce: {
access_token: "AccessToken",
refresh_token: "RefreshToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
client_credentials_arn: "ClientCredentialsArn",
},
service_now: {
username: "Username", # required
password: "Password", # required
},
singular: {
api_key: "ApiKey", # required
},
slack: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
snowflake: {
username: "Username", # required
password: "Password", # required
},
trendmicro: {
api_secret_key: "ApiSecretKey", # required
},
veeva: {
username: "Username", # required
password: "Password", # required
},
zendesk: {
client_id: "ClientId", # required
client_secret: "ClientSecret", # required
access_token: "AccessToken",
o_auth_request: {
auth_code: "AuthCode",
redirect_uri: "RedirectUri",
},
},
},
},
})
Response structure
resp.connector_profile_arn #=> String
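Updating a single connector profile (hedged sketch)
The request syntax above covers every connector; in practice you typically supply only the blocks for the connector your profile uses. The following sketch is not part of the generated reference and uses placeholder Datadog values:
# Hedged sketch; the profile name, URL, and keys are placeholders.
resp = client.update_connector_profile(
  connector_profile_name: "my-datadog-profile",
  connection_mode: "Public",
  connector_profile_config: {
    connector_profile_properties: {
      datadog: { instance_url: "https://api.datadoghq.com" },
    },
    connector_profile_credentials: {
      datadog: {
        api_key: "REDACTED_API_KEY",
        application_key: "REDACTED_APPLICATION_KEY",
      },
    },
  },
)
puts resp.connector_profile_arn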
Options Hash (options):
-
:connector_profile_name
(required, String)
—
The name of the connector profile. The name is unique for each
ConnectorProfile
in the AWS account.
-
:connection_mode
(required, String)
—
Indicates the connection mode and whether it is public or private.
-
:connector_profile_config
(required, Types::ConnectorProfileConfig)
—
Defines the connector-specific profile configuration and credentials.
Returns:
-
(Types::UpdateConnectorProfileResponse)
—
Returns a response object which responds to the following methods:
- #connector_profile_arn => String
See Also:
#update_flow(options = {}) ⇒ Types::UpdateFlowResponse
Updates an existing flow.
Examples:
Request syntax with placeholder values
resp = client.update_flow({
flow_name: "FlowName", # required
description: "FlowDescription",
trigger_config: { # required
trigger_type: "Scheduled", # required, accepts Scheduled, Event, OnDemand
trigger_properties: {
scheduled: {
schedule_expression: "ScheduleExpression", # required
data_pull_mode: "Incremental", # accepts Incremental, Complete
schedule_start_time: Time.now,
schedule_end_time: Time.now,
timezone: "Timezone",
},
},
},
source_flow_config: {
connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connector_profile_name: "ConnectorProfileName",
source_connector_properties: { # required
amplitude: {
object: "Object", # required
},
datadog: {
object: "Object", # required
},
dynatrace: {
object: "Object", # required
},
google_analytics: {
object: "Object", # required
},
infor_nexus: {
object: "Object", # required
},
marketo: {
object: "Object", # required
},
s3: {
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
},
salesforce: {
object: "Object", # required
enable_dynamic_field_update: false,
include_deleted_records: false,
},
service_now: {
object: "Object", # required
},
singular: {
object: "Object", # required
},
slack: {
object: "Object", # required
},
trendmicro: {
object: "Object", # required
},
veeva: {
object: "Object", # required
},
zendesk: {
object: "Object", # required
},
},
incremental_pull_config: {
datetime_type_field_name: "DatetimeTypeFieldName",
},
},
destination_flow_config_list: [ # required
{
connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge
connector_profile_name: "ConnectorProfileName",
destination_connector_properties: { # required
redshift: {
object: "Object", # required
intermediate_bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
s3: {
bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
s3_output_format_config: {
file_type: "CSV", # accepts CSV, JSON, PARQUET
prefix_config: {
prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
},
aggregation_config: {
aggregation_type: "None", # accepts None, SingleFile
},
},
},
salesforce: {
object: "Object", # required
id_field_names: ["Name"],
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
},
snowflake: {
object: "Object", # required
intermediate_bucket_name: "BucketName", # required
bucket_prefix: "BucketPrefix",
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
event_bridge: {
object: "Object", # required
error_handling_config: {
fail_on_first_destination_error: false,
bucket_prefix: "BucketPrefix",
bucket_name: "BucketName",
},
},
},
},
],
tasks: [ # required
{
source_fields: ["String"], # required
connector_operator: {
amplitude: "BETWEEN", # accepts BETWEEN
datadog: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
dynatrace: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
google_analytics: "PROJECTION", # accepts PROJECTION, BETWEEN
infor_nexus: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
marketo: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
s3: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
salesforce: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
service_now: "PROJECTION", # accepts PROJECTION, CONTAINS, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
singular: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
slack: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
trendmicro: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
veeva: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
zendesk: "PROJECTION", # accepts PROJECTION, GREATER_THAN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
},
destination_field: "DestinationField",
task_type: "Arithmetic", # required, accepts Arithmetic, Filter, Map, Mask, Merge, Truncate, Validate
task_properties: {
"VALUE" => "Property",
},
},
],
})
Response structure
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
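Updating a flow's schedule (hedged sketch)
The request syntax above is exhaustive; the following smaller sketch is not part of the generated reference. It updates a hypothetical S3-to-S3 flow, and every name, bucket, field, and schedule expression is a placeholder:
# Hedged sketch; all values are placeholders.
resp = client.update_flow(
  flow_name: "my-s3-flow",
  description: "Hourly incremental pull",
  trigger_config: {
    trigger_type: "Scheduled",
    trigger_properties: {
      scheduled: {
        schedule_expression: "rate(1hour)", # placeholder expression
        data_pull_mode: "Incremental",
      },
    },
  },
  source_flow_config: {
    connector_type: "S3",
    source_connector_properties: {
      s3: { bucket_name: "my-source-bucket", bucket_prefix: "in/" },
    },
  },
  destination_flow_config_list: [
    {
      connector_type: "S3",
      destination_connector_properties: {
        s3: { bucket_name: "my-destination-bucket" },
      },
    },
  ],
  tasks: [
    {
      source_fields: ["id"], # placeholder field
      task_type: "Map",
      destination_field: "id",
    },
  ],
)
puts resp.flow_status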
Options Hash (options):
-
:flow_name
(required, String)
—
The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.
-
:description
(String)
—
A description of the flow.
-
:trigger_config
(required, Types::TriggerConfig)
—
The trigger settings that determine how and when the flow runs.
-
:source_flow_config
(Types::SourceFlowConfig)
—
Contains information about the configuration of the source connector used in the flow.
-
:destination_flow_config_list
(required, Array<Types::DestinationFlowConfig>)
—
The configuration that controls how Amazon AppFlow transfers data to the destination connector.
-
:tasks
(required, Array<Types::Task>)
—
A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.
Returns:
-
(Types::UpdateFlowResponse)
—
Returns a response object which responds to the following methods:
- #flow_status => String
See Also:
#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean
A waiter polls an API operation until a resource enters a desired state.
Basic Usage
Waiters will poll until they are successful, until they fail by entering a terminal state, or until a maximum number of attempts are made.
# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)
Configuration
You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:
# poll for ~25 seconds
client.wait_until(...) do |w|
w.max_attempts = 5
w.delay = 5
end
Callbacks
You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.
started_at = Time.now
client.wait_until(...) do |w|
# disable max attempts
w.max_attempts = nil
# poll for 1 hour, instead of a number of attempts
w.before_wait do |attempts, response|
throw :failure if Time.now - started_at > 3600
end
end
Handling Errors
When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.
begin
client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
# resource did not enter the desired state in time
end
Parameters:
-
waiter_name
(Symbol)
—
The name of the waiter. See #waiter_names for a full list of supported waiters.
-
params
(Hash)
(defaults to: {})
—
Additional request parameters. See #waiter_names for a list of supported waiters and the requests they call. The called request determines the list of accepted parameters.
Yield Parameters:
-
waiter
(Waiters::Waiter)
—
Yields a Waiter object that can be configured prior to waiting.
Returns:
-
(Boolean)
—
Returns true if the waiter was successful.
Raises:
-
(Errors::FailureStateError)
—
Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.
-
(Errors::TooManyAttemptsError)
—
Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.
-
(Errors::UnexpectedError)
—
Raised when an unexpected error is encountered while polling for a resource.
-
(Errors::NoSuchWaiterError)
—
Raised when you request to wait for an unknown state.
#waiter_names ⇒ Array<Symbol>
Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:
| Waiter Name | Client Method | Default Delay | Default Max Attempts |
| --- | --- | --- | --- |
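For example, the following hedged sketch enumerates the supported waiters; because the table above is empty, it may print nothing for this client:
# Hedged sketch: print every waiter this client defines.
client.waiter_names.each { |name| puts name }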
Returns:
-
(Array<Symbol>)
—
the list of supported waiters.