Class: Aws::Appflow::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb

Overview

An API client for Appflow. To construct a client, you need to configure a :region and :credentials.

client = Aws::Appflow::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials see the developer guide.

See #initialize for a full list of supported configuration options.
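For example, a minimal construction with static credentials (the region and key values below are placeholders):

require 'aws-sdk-appflow'

client = Aws::Appflow::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
)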

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, and :session_token options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts (see the sketch after this options list).
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a background thread polls for endpoints every 60 seconds (by default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError instead of sleeping and will not be retried.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, the Client Side Monitoring Agent Publisher is used.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to disable the SDK from automatically adding a host prefix to the default service endpoint when one is available.

  • :endpoint (String)

    The client endpoint is normally constructed from the :region option. You should only configure an :endpoint when connecting to test or custom endpoints. This should be a valid HTTP(S) URI.

  • :endpoint_cache_max_entries (Integer) — default: 1000

    Sets the maximum size of the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Sets the maximum number of threads used to poll for endpoints to be cached. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 results in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at ~/.aws/credentials. When not specified, 'default' is used.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See: https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500-level server errors and certain ~400-level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery errors, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :secret_access_key (String)
  • :session_token (String)
  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled (see the stubbing sketch after this options list).

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_open_timeout (Float) — default: 15

    The number of seconds to wait when opening an HTTP session before raising a Timeout::Error.

  • :http_read_timeout (Integer) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has an "Expect" header set to "100-continue". Set to nil to disable this behaviour. This value can safely be set per request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.
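
As an illustration, a hedged configuration sketch combining several of the options above (all values are placeholders, not recommendations):

require 'logger'

client = Aws::Appflow::Client.new(
  region: 'us-east-1',
  # Constructing the instance-profile provider directly enables retries and
  # extended timeouts, as noted under :credentials above.
  credentials: Aws::InstanceProfileCredentials.new(retries: 3, http_open_timeout: 2.5),
  retry_mode: 'standard',
  max_attempts: 5,
  logger: Logger.new($stdout),
  log_level: :info
)

# In tests, enable response stubbing so no HTTP requests are made:
test_client = Aws::Appflow::Client.new(region: 'us-east-1', stub_responses: true)
test_client.stub_responses(:describe_flow, flow_name: 'my_flow', flow_status: 'Active')
test_client.describe_flow(flow_name: 'my_flow').flow_status #=> "Active"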



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 324

def initialize(*args)
  super
end

Instance Method Details

#create_connector_profile(params = {}) ⇒ Types::CreateConnectorProfileResponse

Creates a new connector profile associated with your Amazon Web Services account. There is a soft quota of 100 connector profiles per Amazon Web Services account. If you need more connector profiles than this quota allows, you can submit a request to the Amazon AppFlow team through the Amazon AppFlow support channel.

Examples:

Request syntax with placeholder values


resp = client.create_connector_profile({
  connector_profile_name: "ConnectorProfileName", # required
  kms_arn: "KMSArn",
  connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
  connection_mode: "Public", # required, accepts Public, Private
  connector_profile_config: { # required
    connector_profile_properties: { # required
      amplitude: {
      },
      datadog: {
        instance_url: "InstanceUrl", # required
      },
      dynatrace: {
        instance_url: "InstanceUrl", # required
      },
      google_analytics: {
      },
      honeycode: {
      },
      infor_nexus: {
        instance_url: "InstanceUrl", # required
      },
      marketo: {
        instance_url: "InstanceUrl", # required
      },
      redshift: {
        database_url: "DatabaseUrl", # required
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        role_arn: "RoleArn", # required
      },
      salesforce: {
        instance_url: "InstanceUrl",
        is_sandbox_environment: false,
      },
      service_now: {
        instance_url: "InstanceUrl", # required
      },
      singular: {
      },
      slack: {
        instance_url: "InstanceUrl", # required
      },
      snowflake: {
        warehouse: "Warehouse", # required
        stage: "Stage", # required
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        private_link_service_name: "PrivateLinkServiceName",
        account_name: "AccountName",
        region: "Region",
      },
      trendmicro: {
      },
      veeva: {
        instance_url: "InstanceUrl", # required
      },
      zendesk: {
        instance_url: "InstanceUrl", # required
      },
      sapo_data: {
        application_host_url: "ApplicationHostUrl", # required
        application_service_path: "ApplicationServicePath", # required
        port_number: 1, # required
        client_number: "ClientNumber", # required
        logon_language: "LogonLanguage",
        private_link_service_name: "PrivateLinkServiceName",
        o_auth_properties: {
          token_url: "TokenUrl", # required
          auth_code_url: "AuthCodeUrl", # required
          o_auth_scopes: ["OAuthScope"], # required
        },
      },
    },
    connector_profile_credentials: { # required
      amplitude: {
        api_key: "ApiKey", # required
        secret_key: "SecretKey", # required
      },
      datadog: {
        api_key: "ApiKey", # required
        application_key: "ApplicationKey", # required
      },
      dynatrace: {
        api_token: "ApiToken", # required
      },
      google_analytics: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      honeycode: {
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      infor_nexus: {
        access_key_id: "AccessKeyId", # required
        user_id: "Username", # required
        secret_access_key: "Key", # required
        datakey: "Key", # required
      },
      marketo: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      redshift: {
        username: "Username", # required
        password: "Password", # required
      },
      salesforce: {
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
        client_credentials_arn: "ClientCredentialsArn",
      },
      service_now: {
        username: "Username", # required
        password: "Password", # required
      },
      singular: {
        api_key: "ApiKey", # required
      },
      slack: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      snowflake: {
        username: "Username", # required
        password: "Password", # required
      },
      trendmicro: {
        api_secret_key: "ApiSecretKey", # required
      },
      veeva: {
        username: "Username", # required
        password: "Password", # required
      },
      zendesk: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      sapo_data: {
        basic_auth_credentials: {
          username: "Username", # required
          password: "Password", # required
        },
        o_auth_credentials: {
          client_id: "ClientId", # required
          client_secret: "ClientSecret", # required
          access_token: "AccessToken",
          refresh_token: "RefreshToken",
          o_auth_request: {
            auth_code: "AuthCode",
            redirect_uri: "RedirectUri",
          },
        },
      },
    },
  },
})

Response structure


resp.connector_profile_arn #=> String
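
A minimal, concrete sketch for a single connector type (the profile name, URL, and keys below are hypothetical placeholders):


resp = client.create_connector_profile({
  connector_profile_name: 'my-datadog-profile',
  connector_type: 'Datadog',
  connection_mode: 'Public',
  connector_profile_config: {
    connector_profile_properties: {
      datadog: { instance_url: 'https://api.datadoghq.com' },
    },
    connector_profile_credentials: {
      datadog: { api_key: 'DD_API_KEY', application_key: 'DD_APP_KEY' },
    },
  },
})
resp.connector_profile_arn #=> String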

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :connector_profile_name (required, String)

    The name of the connector profile. The name is unique for each ConnectorProfile in your Amazon Web Services account.

  • :kms_arn (String)

    The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.

  • :connector_type (required, String)

    The type of connector, such as Salesforce, Amplitude, and so on.

  • :connection_mode (required, String)

    Indicates the connection mode and specifies whether it is public or private. Private flows use Amazon Web Services PrivateLink to route data over Amazon Web Services infrastructure without exposing it to the public internet.

  • :connector_profile_config (required, Types::ConnectorProfileConfig)

    Defines the connector-specific configuration and credentials.

Returns:

  • (Types::CreateConnectorProfileResponse)

    Returns a response object with the method #connector_profile_arn (String).

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 560

def create_connector_profile(params = {}, options = {})
  req = build_request(:create_connector_profile, params)
  req.send_request(options)
end

#create_flow(params = {}) ⇒ Types::CreateFlowResponse

Enables your application to create a new flow using Amazon AppFlow. You must create a connector profile before calling this API. Please note that the request syntax below shows syntax for multiple destinations; however, you can transfer data to only one item in this list at a time. Amazon AppFlow does not currently support flows to multiple destinations at once.

Examples:

Request syntax with placeholder values


resp = client.create_flow({
  flow_name: "FlowName", # required
  description: "FlowDescription",
  kms_arn: "KMSArn",
  trigger_config: { # required
    trigger_type: "Scheduled", # required, accepts Scheduled, Event, OnDemand
    trigger_properties: {
      scheduled: {
        schedule_expression: "ScheduleExpression", # required
        data_pull_mode: "Incremental", # accepts Incremental, Complete
        schedule_start_time: Time.now,
        schedule_end_time: Time.now,
        timezone: "Timezone",
        schedule_offset: 1,
        first_execution_from: Time.now,
      },
    },
  },
  source_flow_config: { # required
    connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
    connector_profile_name: "ConnectorProfileName",
    source_connector_properties: { # required
      amplitude: {
        object: "Object", # required
      },
      datadog: {
        object: "Object", # required
      },
      dynatrace: {
        object: "Object", # required
      },
      google_analytics: {
        object: "Object", # required
      },
      infor_nexus: {
        object: "Object", # required
      },
      marketo: {
        object: "Object", # required
      },
      s3: {
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        s3_input_format_config: {
          s3_input_file_type: "CSV", # accepts CSV, JSON
        },
      },
      salesforce: {
        object: "Object", # required
        enable_dynamic_field_update: false,
        include_deleted_records: false,
      },
      service_now: {
        object: "Object", # required
      },
      singular: {
        object: "Object", # required
      },
      slack: {
        object: "Object", # required
      },
      trendmicro: {
        object: "Object", # required
      },
      veeva: {
        object: "Object", # required
        document_type: "DocumentType",
        include_source_files: false,
        include_renditions: false,
        include_all_versions: false,
      },
      zendesk: {
        object: "Object", # required
      },
      sapo_data: {
        object_path: "Object",
      },
    },
    incremental_pull_config: {
      datetime_type_field_name: "DatetimeTypeFieldName",
    },
  },
  destination_flow_config_list: [ # required
    {
      connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
      connector_profile_name: "ConnectorProfileName",
      destination_connector_properties: { # required
        redshift: {
          object: "Object", # required
          intermediate_bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        s3: {
          bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          s3_output_format_config: {
            file_type: "CSV", # accepts CSV, JSON, PARQUET
            prefix_config: {
              prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
              prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
            },
            aggregation_config: {
              aggregation_type: "None", # accepts None, SingleFile
            },
          },
        },
        salesforce: {
          object: "Object", # required
          id_field_names: ["Name"],
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
          write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
        },
        snowflake: {
          object: "Object", # required
          intermediate_bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        event_bridge: {
          object: "Object", # required
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        lookout_metrics: {
        },
        upsolver: {
          bucket_name: "UpsolverBucketName", # required
          bucket_prefix: "BucketPrefix",
          s3_output_format_config: { # required
            file_type: "CSV", # accepts CSV, JSON, PARQUET
            prefix_config: { # required
              prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
              prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
            },
            aggregation_config: {
              aggregation_type: "None", # accepts None, SingleFile
            },
          },
        },
        honeycode: {
          object: "Object", # required
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        customer_profiles: {
          domain_name: "DomainName", # required
          object_type_name: "ObjectTypeName",
        },
        zendesk: {
          object: "Object", # required
          id_field_names: ["Name"],
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
          write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
        },
      },
    },
  ],
  tasks: [ # required
    {
      source_fields: ["String"], # required
      connector_operator: {
        amplitude: "BETWEEN", # accepts BETWEEN
        datadog: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        dynatrace: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        google_analytics: "PROJECTION", # accepts PROJECTION, BETWEEN
        infor_nexus: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        marketo: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        s3: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        salesforce: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        service_now: "PROJECTION", # accepts PROJECTION, CONTAINS, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        singular: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        slack: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        trendmicro: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        veeva: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        zendesk: "PROJECTION", # accepts PROJECTION, GREATER_THAN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        sapo_data: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
      },
      destination_field: "DestinationField",
      task_type: "Arithmetic", # required, accepts Arithmetic, Filter, Map, Map_all, Mask, Merge, Truncate, Validate
      task_properties: {
        "VALUE" => "Property",
      },
    },
  ],
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
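
A minimal sketch of an on-demand S3-to-S3 flow (flow and bucket names are hypothetical placeholders; exact task configuration varies by connector and task type):


resp = client.create_flow({
  flow_name: 'my_s3_copy_flow',
  trigger_config: { trigger_type: 'OnDemand' },
  source_flow_config: {
    connector_type: 'S3',
    source_connector_properties: {
      s3: { bucket_name: 'my-source-bucket', bucket_prefix: 'input' },
    },
  },
  destination_flow_config_list: [
    {
      connector_type: 'S3',
      destination_connector_properties: {
        s3: { bucket_name: 'my-destination-bucket' },
      },
    },
  ],
  tasks: [
    {
      source_fields: [], # Map_all maps all fields, so no explicit list is given here
      connector_operator: { s3: 'NO_OP' },
      task_type: 'Map_all',
    },
  ],
})
resp.flow_arn #=> String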

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

  • :description (String)

    A description of the flow you want to create.

  • :kms_arn (String)

    The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don't provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.

  • :trigger_config (required, Types::TriggerConfig)

    The trigger settings that determine how and when the flow runs.

  • :source_flow_config (required, Types::SourceFlowConfig)

    The configuration that controls how Amazon AppFlow retrieves data from the source connector.

  • :destination_flow_config_list (required, Array<Types::DestinationFlowConfig>)

    The configuration that controls how Amazon AppFlow places data in the destination connector.

  • :tasks (required, Array<Types::Task>)

    A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.

  • :tags (Hash<String,String>)

    The tags used to organize, track, or control access for your flow.

Returns:

  • (Types::CreateFlowResponse)

    Returns a response object with the methods #flow_arn (String) and #flow_status (String).

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 831

def create_flow(params = {}, options = {})
  req = build_request(:create_flow, params)
  req.send_request(options)
end

#delete_connector_profile(params = {}) ⇒ Struct

Enables you to delete an existing connector profile.

Examples:

Request syntax with placeholder values


resp = client.delete_connector_profile({
  connector_profile_name: "ConnectorProfileName", # required
  force_delete: false,
})

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :connector_profile_name (required, String)

    The name of the connector profile. The name is unique for each ConnectorProfile in your account.

  • :force_delete (Boolean)

    Indicates whether Amazon AppFlow should delete the profile, even if it is currently in use in one or more flows.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 859

def delete_connector_profile(params = {}, options = {})
  req = build_request(:delete_connector_profile, params)
  req.send_request(options)
end

#delete_flow(params = {}) ⇒ Struct

Enables your application to delete an existing flow. Before deleting the flow, Amazon AppFlow validates the request by checking the flow configuration and status. You can delete flows one at a time.

Examples:

Request syntax with placeholder values


resp = client.delete_flow({
  flow_name: "FlowName", # required
  force_delete: false,
})

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

  • :force_delete (Boolean)

    Indicates whether Amazon AppFlow should delete the flow, even if it is currently in use.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 889

def delete_flow(params = {}, options = {})
  req = build_request(:delete_flow, params)
  req.send_request(options)
end

#describe_connector_entity(params = {}) ⇒ Types::DescribeConnectorEntityResponse

Provides details regarding the entity used with the connector, with a description of the data model for each entity.

Examples:

Request syntax with placeholder values


resp = client.describe_connector_entity({
  connector_entity_name: "Name", # required
  connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
  connector_profile_name: "ConnectorProfileName",
})

Response structure


resp.connector_entity_fields #=> Array
resp.connector_entity_fields[0].identifier #=> String
resp.connector_entity_fields[0].label #=> String
resp.connector_entity_fields[0].supported_field_type_details.v1.field_type #=> String
resp.connector_entity_fields[0].supported_field_type_details.v1.filter_operators #=> Array
resp.connector_entity_fields[0].supported_field_type_details.v1.filter_operators[0] #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "CONTAINS", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.connector_entity_fields[0].supported_field_type_details.v1.supported_values #=> Array
resp.connector_entity_fields[0].supported_field_type_details.v1.supported_values[0] #=> String
resp.connector_entity_fields[0].description #=> String
resp.connector_entity_fields[0].source_properties.is_retrievable #=> Boolean
resp.connector_entity_fields[0].source_properties.is_queryable #=> Boolean
resp.connector_entity_fields[0].destination_properties.is_creatable #=> Boolean
resp.connector_entity_fields[0].destination_properties.is_nullable #=> Boolean
resp.connector_entity_fields[0].destination_properties.is_upsertable #=> Boolean
resp.connector_entity_fields[0].destination_properties.is_updatable #=> Boolean
resp.connector_entity_fields[0].destination_properties.supported_write_operations #=> Array
resp.connector_entity_fields[0].destination_properties.supported_write_operations[0] #=> String, one of "INSERT", "UPSERT", "UPDATE"
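
For instance, a usage sketch that lists which fields of an entity are queryable (the entity and profile names are hypothetical):


resp = client.describe_connector_entity({
  connector_entity_name: 'Account',
  connector_type: 'Salesforce',
  connector_profile_name: 'my-salesforce-profile',
})
resp.connector_entity_fields.each do |field|
  # source_properties may be absent for some fields, hence the safe navigation
  puts "#{field.identifier} queryable=#{field.source_properties&.is_queryable}"
end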

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :connector_entity_name (required, String)

    The entity name for that connector.

  • :connector_type (String)

    The type of connector application, such as Salesforce, Amplitude, and so on.

  • :connector_profile_name (String)

    The name of the connector profile. The name is unique for each ConnectorProfile in the Amazon Web Services account.

Returns:

  • (Types::DescribeConnectorEntityResponse)

    Returns a response object with the method #connector_entity_fields (Array).

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 944

def describe_connector_entity(params = {}, options = {})
  req = build_request(:describe_connector_entity, params)
  req.send_request(options)
end

#describe_connector_profiles(params = {}) ⇒ Types::DescribeConnectorProfilesResponse

Returns a list of connector-profile details matching the provided connector-profile names and connector-types. Both input lists are optional, and you can use them to filter the result.

If no names or connector-types are provided, returns all connector profiles in a paginated form. If there is no match, this operation returns an empty list.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
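
For example, a hedged sketch that enumerates every page of results:


client.describe_connector_profiles(max_results: 20).each_page do |page|
  page.connector_profile_details.each do |profile|
    puts profile.connector_profile_name
  end
end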

Examples:

Request syntax with placeholder values


resp = client.describe_connector_profiles({
  connector_profile_names: ["ConnectorProfileName"],
  connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.connector_profile_details #=> Array
resp.connector_profile_details[0].connector_profile_arn #=> String
resp.connector_profile_details[0].connector_profile_name #=> String
resp.connector_profile_details[0].connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.connector_profile_details[0].connection_mode #=> String, one of "Public", "Private"
resp.connector_profile_details[0].credentials_arn #=> String
resp.connector_profile_details[0].connector_profile_properties.datadog.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.dynatrace.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.infor_nexus.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.marketo.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.database_url #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.bucket_name #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.bucket_prefix #=> String
resp.connector_profile_details[0].connector_profile_properties.redshift.role_arn #=> String
resp.connector_profile_details[0].connector_profile_properties.salesforce.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.salesforce.is_sandbox_environment #=> Boolean
resp.connector_profile_details[0].connector_profile_properties.service_now.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.slack.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.warehouse #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.stage #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.bucket_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.bucket_prefix #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.private_link_service_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.account_name #=> String
resp.connector_profile_details[0].connector_profile_properties.snowflake.region #=> String
resp.connector_profile_details[0].connector_profile_properties.veeva.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.zendesk.instance_url #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.application_host_url #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.application_service_path #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.port_number #=> Integer
resp.connector_profile_details[0].connector_profile_properties.sapo_data.client_number #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.logon_language #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.private_link_service_name #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.o_auth_properties.token_url #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.o_auth_properties.auth_code_url #=> String
resp.connector_profile_details[0].connector_profile_properties.sapo_data.o_auth_properties.o_auth_scopes #=> Array
resp.connector_profile_details[0].connector_profile_properties.sapo_data.o_auth_properties.o_auth_scopes[0] #=> String
resp.connector_profile_details[0].created_at #=> Time
resp.connector_profile_details[0].last_updated_at #=> Time
resp.connector_profile_details[0].private_connection_provisioning_state.status #=> String, one of "FAILED", "PENDING", "CREATED"
resp.connector_profile_details[0].private_connection_provisioning_state.failure_message #=> String
resp.connector_profile_details[0].private_connection_provisioning_state.failure_cause #=> String, one of "CONNECTOR_AUTHENTICATION", "CONNECTOR_SERVER", "INTERNAL_SERVER", "ACCESS_DENIED", "VALIDATION"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :connector_profile_names (Array<String>)

    The name of the connector profile. The name is unique for each ConnectorProfile in the Amazon Web Services account.

  • :connector_type (String)

    The type of connector, such as Salesforce, Amplitude, and so on.

  • :max_results (Integer)

    Specifies the maximum number of items that should be returned in the result set. The default for maxResults is 20 (for all paginated API operations).

  • :next_token (String)

    The pagination token for the next page of data.

Returns:

  • (Types::DescribeConnectorProfilesResponse)

    Returns a response object with the methods #connector_profile_details (Array) and #next_token (String).

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1038

def describe_connector_profiles(params = {}, options = {})
  req = build_request(:describe_connector_profiles, params)
  req.send_request(options)
end

#describe_connectors(params = {}) ⇒ Types::DescribeConnectorsResponse

Describes the connectors vended by Amazon AppFlow for specified connector types. If you don't specify a connector type, this operation describes all connectors vended by Amazon AppFlow. If there are more connectors than can be returned in one page, the response contains a nextToken object, which can be passed in to the next call to the DescribeConnectors API operation to retrieve the next page.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
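
For example, a hedged sketch that pages through results manually with next_token (you can also simply enumerate the pageable response):


next_token = nil
loop do
  resp = client.describe_connectors(next_token: next_token)
  resp.connector_configurations.each_key { |connector_type| puts connector_type }
  next_token = resp.next_token
  break if next_token.nil?
end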

Examples:

Request syntax with placeholder values


resp = client.describe_connectors({
  connector_types: ["Salesforce"], # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
  next_token: "NextToken",
})

Response structure


resp.connector_configurations #=> Hash
resp.connector_configurations["ConnectorType"].can_use_as_source #=> Boolean
resp.connector_configurations["ConnectorType"].can_use_as_destination #=> Boolean
resp.connector_configurations["ConnectorType"].supported_destination_connectors #=> Array
resp.connector_configurations["ConnectorType"].supported_destination_connectors[0] #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.connector_configurations["ConnectorType"].supported_scheduling_frequencies #=> Array
resp.connector_configurations["ConnectorType"].supported_scheduling_frequencies[0] #=> String, one of "BYMINUTE", "HOURLY", "DAILY", "WEEKLY", "MONTHLY", "ONCE"
resp.connector_configurations["ConnectorType"].is_private_link_enabled #=> Boolean
resp.connector_configurations["ConnectorType"].is_private_link_endpoint_url_required #=> Boolean
resp.connector_configurations["ConnectorType"].supported_trigger_types #=> Array
resp.connector_configurations["ConnectorType"].supported_trigger_types[0] #=> String, one of "Scheduled", "Event", "OnDemand"
resp.connector_configurations["ConnectorType"]..google_analytics.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"]..google_analytics.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"]..salesforce.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"]..salesforce.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"]..slack.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"]..slack.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"]..snowflake.supported_regions #=> Array
resp.connector_configurations["ConnectorType"]..snowflake.supported_regions[0] #=> String
resp.connector_configurations["ConnectorType"]..zendesk.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"]..zendesk.o_auth_scopes[0] #=> String
resp.connector_configurations["ConnectorType"]..honeycode.o_auth_scopes #=> Array
resp.connector_configurations["ConnectorType"]..honeycode.o_auth_scopes[0] #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :connector_types (Array<String>)

    The type of connector, such as Salesforce, Amplitude, and so on.

  • :next_token (String)

    The pagination token for the next page of data.

Returns:

  • (Types::DescribeConnectorsResponse)

    Returns a response object with the methods #connector_configurations (Hash) and #next_token (String).

See Also:



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1102

def describe_connectors(params = {}, options = {})
  req = build_request(:describe_connectors, params)
  req.send_request(options)
end

#describe_flow(params = {}) ⇒ Types::DescribeFlowResponse

Provides a description of the specified flow.

Examples:

Request syntax with placeholder values


resp = client.describe_flow({
  flow_name: "FlowName", # required
})

Response structure


resp.flow_arn #=> String
resp.description #=> String
resp.flow_name #=> String
resp.kms_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.flow_status_message #=> String
resp.source_flow_config.connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.source_flow_config.connector_profile_name #=> String
resp.source_flow_config.source_connector_properties.amplitude.object #=> String
resp.source_flow_config.source_connector_properties.datadog.object #=> String
resp.source_flow_config.source_connector_properties.dynatrace.object #=> String
resp.source_flow_config.source_connector_properties.google_analytics.object #=> String
resp.source_flow_config.source_connector_properties.infor_nexus.object #=> String
resp.source_flow_config.source_connector_properties.marketo.object #=> String
resp.source_flow_config.source_connector_properties.s3.bucket_name #=> String
resp.source_flow_config.source_connector_properties.s3.bucket_prefix #=> String
resp.source_flow_config.source_connector_properties.s3.s3_input_format_config.s3_input_file_type #=> String, one of "CSV", "JSON"
resp.source_flow_config.source_connector_properties.salesforce.object #=> String
resp.source_flow_config.source_connector_properties.salesforce.enable_dynamic_field_update #=> Boolean
resp.source_flow_config.source_connector_properties.salesforce.include_deleted_records #=> Boolean
resp.source_flow_config.source_connector_properties.service_now.object #=> String
resp.source_flow_config.source_connector_properties.singular.object #=> String
resp.source_flow_config.source_connector_properties.slack.object #=> String
resp.source_flow_config.source_connector_properties.trendmicro.object #=> String
resp.source_flow_config.source_connector_properties.veeva.object #=> String
resp.source_flow_config.source_connector_properties.veeva.document_type #=> String
resp.source_flow_config.source_connector_properties.veeva.include_source_files #=> Boolean
resp.source_flow_config.source_connector_properties.veeva.include_renditions #=> Boolean
resp.source_flow_config.source_connector_properties.veeva.include_all_versions #=> Boolean
resp.source_flow_config.source_connector_properties.zendesk.object #=> String
resp.source_flow_config.source_connector_properties.sapo_data.object_path #=> String
resp.source_flow_config.incremental_pull_config.datetime_type_field_name #=> String
resp.destination_flow_config_list #=> Array
resp.destination_flow_config_list[0].connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.destination_flow_config_list[0].connector_profile_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.intermediate_bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.redshift.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.file_type #=> String, one of "CSV", "JSON", "PARQUET"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.prefix_config.prefix_type #=> String, one of "FILENAME", "PATH", "PATH_AND_FILENAME"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.prefix_config.prefix_format #=> String, one of "YEAR", "MONTH", "DAY", "HOUR", "MINUTE"
resp.destination_flow_config_list[0].destination_connector_properties.s3.s3_output_format_config.aggregation_config.aggregation_type #=> String, one of "None", "SingleFile"
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.id_field_names #=> Array
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.id_field_names[0] #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.salesforce.write_operation_type #=> String, one of "INSERT", "UPSERT", "UPDATE"
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.intermediate_bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.snowflake.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.event_bridge.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.s3_output_format_config.file_type #=> String, one of "CSV", "JSON", "PARQUET"
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.s3_output_format_config.prefix_config.prefix_type #=> String, one of "FILENAME", "PATH", "PATH_AND_FILENAME"
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.s3_output_format_config.prefix_config.prefix_format #=> String, one of "YEAR", "MONTH", "DAY", "HOUR", "MINUTE"
resp.destination_flow_config_list[0].destination_connector_properties.upsolver.s3_output_format_config.aggregation_config.aggregation_type #=> String, one of "None", "SingleFile"
resp.destination_flow_config_list[0].destination_connector_properties.honeycode.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.honeycode.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.honeycode.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.honeycode.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.customer_profiles.domain_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.customer_profiles.object_type_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.object #=> String
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.id_field_names #=> Array
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.id_field_names[0] #=> String
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.error_handling_config.fail_on_first_destination_error #=> Boolean
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.error_handling_config.bucket_prefix #=> String
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.error_handling_config.bucket_name #=> String
resp.destination_flow_config_list[0].destination_connector_properties.zendesk.write_operation_type #=> String, one of "INSERT", "UPSERT", "UPDATE"
resp.last_run_execution_details.most_recent_execution_message #=> String
resp.last_run_execution_details.most_recent_execution_time #=> Time
resp.last_run_execution_details.most_recent_execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.trigger_config.trigger_type #=> String, one of "Scheduled", "Event", "OnDemand"
resp.trigger_config.trigger_properties.scheduled.schedule_expression #=> String
resp.trigger_config.trigger_properties.scheduled.data_pull_mode #=> String, one of "Incremental", "Complete"
resp.trigger_config.trigger_properties.scheduled.schedule_start_time #=> Time
resp.trigger_config.trigger_properties.scheduled.schedule_end_time #=> Time
resp.trigger_config.trigger_properties.scheduled.timezone #=> String
resp.trigger_config.trigger_properties.scheduled.schedule_offset #=> Integer
resp.trigger_config.trigger_properties.scheduled.first_execution_from #=> Time
resp.tasks #=> Array
resp.tasks[0].source_fields #=> Array
resp.tasks[0].source_fields[0] #=> String
resp.tasks[0].connector_operator.amplitude #=> String, one of "BETWEEN"
resp.tasks[0].connector_operator.datadog #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.dynatrace #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.google_analytics #=> String, one of "PROJECTION", "BETWEEN"
resp.tasks[0].connector_operator.infor_nexus #=> String, one of "PROJECTION", "BETWEEN", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.marketo #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.s3 #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.salesforce #=> String, one of "PROJECTION", "LESS_THAN", "CONTAINS", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.service_now #=> String, one of "PROJECTION", "CONTAINS", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.singular #=> String, one of "PROJECTION", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.slack #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.trendmicro #=> String, one of "PROJECTION", "EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.veeva #=> String, one of "PROJECTION", "LESS_THAN", "GREATER_THAN", "CONTAINS", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.zendesk #=> String, one of "PROJECTION", "GREATER_THAN", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].connector_operator.sapo_data #=> String, one of "PROJECTION", "LESS_THAN", "CONTAINS", "GREATER_THAN", "BETWEEN", "LESS_THAN_OR_EQUAL_TO", "GREATER_THAN_OR_EQUAL_TO", "EQUAL_TO", "NOT_EQUAL_TO", "ADDITION", "MULTIPLICATION", "DIVISION", "SUBTRACTION", "MASK_ALL", "MASK_FIRST_N", "MASK_LAST_N", "VALIDATE_NON_NULL", "VALIDATE_NON_ZERO", "VALIDATE_NON_NEGATIVE", "VALIDATE_NUMERIC", "NO_OP"
resp.tasks[0].destination_field #=> String
resp.tasks[0].task_type #=> String, one of "Arithmetic", "Filter", "Map", "Map_all", "Mask", "Merge", "Truncate", "Validate"
resp.tasks[0].task_properties #=> Hash
resp.tasks[0].task_properties["OperatorPropertiesKeys"] #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.created_by #=> String
resp.last_updated_by #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

  • (Types::DescribeFlowResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1267

def describe_flow(params = {}, options = {})
  req = build_request(:describe_flow, params)
  req.send_request(options)
end

#describe_flow_execution_records(params = {}) ⇒ Types::DescribeFlowExecutionRecordsResponse

Fetches the execution history of the flow.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_flow_execution_records({
  flow_name: "FlowName", # required
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.flow_executions #=> Array
resp.flow_executions[0].execution_id #=> String
resp.flow_executions[0].execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.flow_executions[0].execution_result.error_info.put_failures_count #=> Integer
resp.flow_executions[0].execution_result.error_info.execution_message #=> String
resp.flow_executions[0].execution_result.bytes_processed #=> Integer
resp.flow_executions[0].execution_result.bytes_written #=> Integer
resp.flow_executions[0].execution_result.records_processed #=> Integer
resp.flow_executions[0].started_at #=> Time
resp.flow_executions[0].last_updated_at #=> Time
resp.flow_executions[0].data_pull_start_time #=> Time
resp.flow_executions[0].data_pull_end_time #=> Time
resp.next_token #=> String
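
Because the response is pageable and Enumerable, each page can be processed in turn. A minimal sketch, assuming a flow named "my_flow" exists:


client.describe_flow_execution_records(flow_name: "my_flow").each do |page|
  page.flow_executions.each do |execution|
    puts "#{execution.execution_id}: #{execution.execution_status}"
  end
end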

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

  • :max_results (Integer)

    Specifies the maximum number of items that should be returned in the result set. The default for maxResults is 20 (for all paginated API operations).

  • :next_token (String)

    The pagination token for the next page of data.

Returns:

  • (Types::DescribeFlowExecutionRecordsResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1321

def describe_flow_execution_records(params = {}, options = {})
  req = build_request(:describe_flow_execution_records, params)
  req.send_request(options)
end

#list_connector_entities(params = {}) ⇒ Types::ListConnectorEntitiesResponse

Returns the list of available connector entities supported by Amazon AppFlow. For example, you can query Salesforce for Account and Opportunity entities, or query ServiceNow for the Incident entity.

Examples:

Request syntax with placeholder values


resp = client.list_connector_entities({
  connector_profile_name: "ConnectorProfileName",
  connector_type: "Salesforce", # accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
  entities_path: "EntitiesPath",
})

Response structure


resp.connector_entity_map #=> Hash
resp.connector_entity_map["Group"] #=> Array
resp.connector_entity_map["Group"][0].name #=> String
resp.connector_entity_map["Group"][0].label #=> String
resp.connector_entity_map["Group"][0].has_nested_entities #=> Boolean

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connector_profile_name (String)

    The name of the connector profile. The name is unique for each ConnectorProfile in the Amazon Web Services account, and is used to query the downstream connector.

  • :connector_type (String)

    The type of connector, such as Salesforce, Amplitude, and so on.

  • :entities_path (String)

    This optional parameter is specific to the connector implementation. Some connectors support multiple levels or categories of entities. You can find the list of roots for such providers by sending a request without the entitiesPath parameter. If the connector supports entities at different roots, this initial request returns the list of roots; otherwise, it returns all entities supported by the provider.

Returns:

  • (Types::ListConnectorEntitiesResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1371

def list_connector_entities(params = {}, options = {})
  req = build_request(:list_connector_entities, params)
  req.send_request(options)
end

#list_flows(params = {}) ⇒ Types::ListFlowsResponse

Lists all of the flows associated with your account.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_flows({
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.flows #=> Array
resp.flows[0].flow_arn #=> String
resp.flows[0].description #=> String
resp.flows[0].flow_name #=> String
resp.flows[0].flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.flows[0].source_connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.flows[0].destination_connector_type #=> String, one of "Salesforce", "Singular", "Slack", "Redshift", "S3", "Marketo", "Googleanalytics", "Zendesk", "Servicenow", "Datadog", "Trendmicro", "Snowflake", "Dynatrace", "Infornexus", "Amplitude", "Veeva", "EventBridge", "LookoutMetrics", "Upsolver", "Honeycode", "CustomerProfiles", "SAPOData"
resp.flows[0].trigger_type #=> String, one of "Scheduled", "Event", "OnDemand"
resp.flows[0].created_at #=> Time
resp.flows[0].last_updated_at #=> Time
resp.flows[0].created_by #=> String
resp.flows[0].last_updated_by #=> String
resp.flows[0].tags #=> Hash
resp.flows[0].tags["TagKey"] #=> String
resp.flows[0].last_run_execution_details.most_recent_execution_message #=> String
resp.flows[0].last_run_execution_details.most_recent_execution_time #=> Time
resp.flows[0].last_run_execution_details.most_recent_execution_status #=> String, one of "InProgress", "Successful", "Error"
resp.next_token #=> String
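
Because the response is pageable and Enumerable, results can be gathered across all pages in one pass; a minimal sketch:


all_flow_names = client.list_flows(max_results: 50).flat_map do |page|
  page.flows.map(&:flow_name)
end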

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)

    Specifies the maximum number of items that should be returned in the result set.

  • :next_token (String)

    The pagination token for the next page of data.

Returns:

  • (Types::ListFlowsResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1424

def list_flows(params = {}, options = {})
  req = build_request(:list_flows, params)
  req.send_request(options)
end

#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse

Retrieves the tags that are associated with a specified flow.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "ARN", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the specified flow.

Returns:

  • (Types::ListTagsForResourceResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1453

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end

#start_flow(params = {}) ⇒ Types::StartFlowResponse

Activates an existing flow. For on-demand flows, this operation runs the flow immediately. For schedule-triggered and event-triggered flows, this operation activates the flow.

Examples:

Request syntax with placeholder values


resp = client.start_flow({
  flow_name: "FlowName", # required
})

Response structure


resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
resp.execution_id #=> String
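
For an on-demand flow, the returned execution_id identifies the run that was just started; for schedule-triggered and event-triggered flows it is not set. A sketch, assuming "my_flow" is an existing on-demand flow:


resp = client.start_flow(flow_name: "my_flow")
resp.execution_id #=> ID of the run that was just started

# The new run should then appear in the flow's execution history
# (possibly after a short delay):
records = client.describe_flow_execution_records(flow_name: "my_flow")
records.flow_executions.map(&:execution_id)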

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

  • (Types::StartFlowResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1488

def start_flow(params = {}, options = {})
  req = build_request(:start_flow, params)
  req.send_request(options)
end

#stop_flow(params = {}) ⇒ Types::StopFlowResponse

Deactivates the existing flow. For on-demand flows, this operation returns an unsupportedOperationException error. For schedule-triggered and event-triggered flows, this operation deactivates the flow.

Examples:

Request syntax with placeholder values


resp = client.stop_flow({
  flow_name: "FlowName", # required
})

Response structure


resp.flow_arn #=> String
resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
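
Because stopping an on-demand flow is unsupported, callers may want to rescue the corresponding SDK error. A sketch, assuming "my_scheduled_flow" is an existing schedule-triggered flow:


begin
  resp = client.stop_flow(flow_name: "my_scheduled_flow")
  resp.flow_status #=> e.g. "Suspended"
rescue Aws::Appflow::Errors::UnsupportedOperationException => e
  # Raised when the flow is an on-demand flow and cannot be deactivated.
  puts e.message
end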

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

Returns:

  • (Types::StopFlowResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1521

def stop_flow(params = {}, options = {})
  req = build_request(:stop_flow, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Applies a tag to the specified flow.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "ARN", # required
  tags: { # required
    "TagKey" => "TagValue",
  },
})
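
A round-trip sketch that applies a tag and reads it back with #list_tags_for_resource; the ARN below is a hypothetical placeholder:


arn = "arn:aws:appflow:us-east-1:123456789012:flow/my_flow"
client.tag_resource(resource_arn: arn, tags: { "environment" => "production" })
client.list_tags_for_resource(resource_arn: arn).tags
#=> {"environment"=>"production"}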

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the flow that you want to tag.

  • :tags (required, Hash<String,String>)

    The tags used to organize, track, or control access for your flow.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1549

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#untag_resource(params = {}) ⇒ Struct

Removes a tag from the specified flow.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "ARN", # required
  tag_keys: ["TagKey"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the flow that you want to untag.

  • :tag_keys (required, Array<String>)

    The tag keys associated with the tag that you want to remove from your flow.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1576

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end

#update_connector_profile(params = {}) ⇒ Types::UpdateConnectorProfileResponse

Updates a given connector profile associated with your account.

Examples:

Request syntax with placeholder values


resp = client.update_connector_profile({
  connector_profile_name: "ConnectorProfileName", # required
  connection_mode: "Public", # required, accepts Public, Private
  connector_profile_config: { # required
    connector_profile_properties: { # required
      amplitude: {
      },
      datadog: {
        instance_url: "InstanceUrl", # required
      },
      dynatrace: {
        instance_url: "InstanceUrl", # required
      },
      google_analytics: {
      },
      honeycode: {
      },
      infor_nexus: {
        instance_url: "InstanceUrl", # required
      },
      marketo: {
        instance_url: "InstanceUrl", # required
      },
      redshift: {
        database_url: "DatabaseUrl", # required
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        role_arn: "RoleArn", # required
      },
      salesforce: {
        instance_url: "InstanceUrl",
        is_sandbox_environment: false,
      },
      service_now: {
        instance_url: "InstanceUrl", # required
      },
      singular: {
      },
      slack: {
        instance_url: "InstanceUrl", # required
      },
      snowflake: {
        warehouse: "Warehouse", # required
        stage: "Stage", # required
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        private_link_service_name: "PrivateLinkServiceName",
        account_name: "AccountName",
        region: "Region",
      },
      trendmicro: {
      },
      veeva: {
        instance_url: "InstanceUrl", # required
      },
      zendesk: {
        instance_url: "InstanceUrl", # required
      },
      sapo_data: {
        application_host_url: "ApplicationHostUrl", # required
        application_service_path: "ApplicationServicePath", # required
        port_number: 1, # required
        client_number: "ClientNumber", # required
        logon_language: "LogonLanguage",
        private_link_service_name: "PrivateLinkServiceName",
        o_auth_properties: {
          token_url: "TokenUrl", # required
          auth_code_url: "AuthCodeUrl", # required
          o_auth_scopes: ["OAuthScope"], # required
        },
      },
    },
    connector_profile_credentials: { # required
      amplitude: {
        api_key: "ApiKey", # required
        secret_key: "SecretKey", # required
      },
      datadog: {
        api_key: "ApiKey", # required
        application_key: "ApplicationKey", # required
      },
      dynatrace: {
        api_token: "ApiToken", # required
      },
      google_analytics: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      honeycode: {
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      infor_nexus: {
        access_key_id: "AccessKeyId", # required
        user_id: "Username", # required
        secret_access_key: "Key", # required
        datakey: "Key", # required
      },
      marketo: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      redshift: {
        username: "Username", # required
        password: "Password", # required
      },
      salesforce: {
        access_token: "AccessToken",
        refresh_token: "RefreshToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
        client_credentials_arn: "ClientCredentialsArn",
      },
      service_now: {
        username: "Username", # required
        password: "Password", # required
      },
      singular: {
        api_key: "ApiKey", # required
      },
      slack: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      snowflake: {
        username: "Username", # required
        password: "Password", # required
      },
      trendmicro: {
        api_secret_key: "ApiSecretKey", # required
      },
      veeva: {
        username: "Username", # required
        password: "Password", # required
      },
      zendesk: {
        client_id: "ClientId", # required
        client_secret: "ClientSecret", # required
        access_token: "AccessToken",
        o_auth_request: {
          auth_code: "AuthCode",
          redirect_uri: "RedirectUri",
        },
      },
      sapo_data: {
        basic_auth_credentials: {
          username: "Username", # required
          password: "Password", # required
        },
        o_auth_credentials: {
          client_id: "ClientId", # required
          client_secret: "ClientSecret", # required
          access_token: "AccessToken",
          refresh_token: "RefreshToken",
          o_auth_request: {
            auth_code: "AuthCode",
            redirect_uri: "RedirectUri",
          },
        },
      },
    },
  },
})

Response structure


resp.connector_profile_arn #=> String
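
As a minimal sketch, rotating the stored credentials on an existing Salesforce profile might look like the following; the profile name, instance URL, and token values are hypothetical:


resp = client.update_connector_profile({
  connector_profile_name: "my_salesforce_profile",
  connection_mode: "Public",
  connector_profile_config: {
    connector_profile_properties: {
      salesforce: {
        instance_url: "https://example.my.salesforce.com",
      },
    },
    connector_profile_credentials: {
      salesforce: {
        access_token: "rotated-access-token",
        refresh_token: "rotated-refresh-token",
      },
    },
  },
})
resp.connector_profile_arn #=> String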

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :connector_profile_name (required, String)

    The name of the connector profile. The name is unique for each ConnectorProfile in the Amazon Web Services account.

  • :connection_mode (required, String)

    Indicates the connection mode and whether it is public or private.

  • :connector_profile_config (required, Types::ConnectorProfileConfig)

    Defines the connector-specific profile configuration and credentials.

Returns:

  • (Types::UpdateConnectorProfileResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 1793

def update_connector_profile(params = {}, options = {})
  req = build_request(:update_connector_profile, params)
  req.send_request(options)
end

#update_flow(params = {}) ⇒ Types::UpdateFlowResponse

Updates an existing flow.

Examples:

Request syntax with placeholder values


resp = client.update_flow({
  flow_name: "FlowName", # required
  description: "FlowDescription",
  trigger_config: { # required
    trigger_type: "Scheduled", # required, accepts Scheduled, Event, OnDemand
    trigger_properties: {
      scheduled: {
        schedule_expression: "ScheduleExpression", # required
        data_pull_mode: "Incremental", # accepts Incremental, Complete
        schedule_start_time: Time.now,
        schedule_end_time: Time.now,
        timezone: "Timezone",
        schedule_offset: 1,
        first_execution_from: Time.now,
      },
    },
  },
  source_flow_config: { # required
    connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
    connector_profile_name: "ConnectorProfileName",
    source_connector_properties: { # required
      amplitude: {
        object: "Object", # required
      },
      datadog: {
        object: "Object", # required
      },
      dynatrace: {
        object: "Object", # required
      },
      google_analytics: {
        object: "Object", # required
      },
      infor_nexus: {
        object: "Object", # required
      },
      marketo: {
        object: "Object", # required
      },
      s3: {
        bucket_name: "BucketName", # required
        bucket_prefix: "BucketPrefix",
        s3_input_format_config: {
          s3_input_file_type: "CSV", # accepts CSV, JSON
        },
      },
      salesforce: {
        object: "Object", # required
        enable_dynamic_field_update: false,
        include_deleted_records: false,
      },
      service_now: {
        object: "Object", # required
      },
      singular: {
        object: "Object", # required
      },
      slack: {
        object: "Object", # required
      },
      trendmicro: {
        object: "Object", # required
      },
      veeva: {
        object: "Object", # required
        document_type: "DocumentType",
        include_source_files: false,
        include_renditions: false,
        include_all_versions: false,
      },
      zendesk: {
        object: "Object", # required
      },
      sapo_data: {
        object_path: "Object",
      },
    },
    incremental_pull_config: {
      datetime_type_field_name: "DatetimeTypeFieldName",
    },
  },
  destination_flow_config_list: [ # required
    {
      connector_type: "Salesforce", # required, accepts Salesforce, Singular, Slack, Redshift, S3, Marketo, Googleanalytics, Zendesk, Servicenow, Datadog, Trendmicro, Snowflake, Dynatrace, Infornexus, Amplitude, Veeva, EventBridge, LookoutMetrics, Upsolver, Honeycode, CustomerProfiles, SAPOData
      connector_profile_name: "ConnectorProfileName",
      destination_connector_properties: { # required
        redshift: {
          object: "Object", # required
          intermediate_bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        s3: {
          bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          s3_output_format_config: {
            file_type: "CSV", # accepts CSV, JSON, PARQUET
            prefix_config: {
              prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
              prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
            },
            aggregation_config: {
              aggregation_type: "None", # accepts None, SingleFile
            },
          },
        },
        salesforce: {
          object: "Object", # required
          id_field_names: ["Name"],
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
          write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
        },
        snowflake: {
          object: "Object", # required
          intermediate_bucket_name: "BucketName", # required
          bucket_prefix: "BucketPrefix",
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        event_bridge: {
          object: "Object", # required
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        lookout_metrics: {
        },
        upsolver: {
          bucket_name: "UpsolverBucketName", # required
          bucket_prefix: "BucketPrefix",
          s3_output_format_config: { # required
            file_type: "CSV", # accepts CSV, JSON, PARQUET
            prefix_config: { # required
              prefix_type: "FILENAME", # accepts FILENAME, PATH, PATH_AND_FILENAME
              prefix_format: "YEAR", # accepts YEAR, MONTH, DAY, HOUR, MINUTE
            },
            aggregation_config: {
              aggregation_type: "None", # accepts None, SingleFile
            },
          },
        },
        honeycode: {
          object: "Object", # required
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
        },
        customer_profiles: {
          domain_name: "DomainName", # required
          object_type_name: "ObjectTypeName",
        },
        zendesk: {
          object: "Object", # required
          id_field_names: ["Name"],
          error_handling_config: {
            fail_on_first_destination_error: false,
            bucket_prefix: "BucketPrefix",
            bucket_name: "BucketName",
          },
          write_operation_type: "INSERT", # accepts INSERT, UPSERT, UPDATE
        },
      },
    },
  ],
  tasks: [ # required
    {
      source_fields: ["String"], # required
      connector_operator: {
        amplitude: "BETWEEN", # accepts BETWEEN
        datadog: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        dynatrace: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        google_analytics: "PROJECTION", # accepts PROJECTION, BETWEEN
        infor_nexus: "PROJECTION", # accepts PROJECTION, BETWEEN, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        marketo: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        s3: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        salesforce: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        service_now: "PROJECTION", # accepts PROJECTION, CONTAINS, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        singular: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        slack: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        trendmicro: "PROJECTION", # accepts PROJECTION, EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        veeva: "PROJECTION", # accepts PROJECTION, LESS_THAN, GREATER_THAN, CONTAINS, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        zendesk: "PROJECTION", # accepts PROJECTION, GREATER_THAN, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
        sapo_data: "PROJECTION", # accepts PROJECTION, LESS_THAN, CONTAINS, GREATER_THAN, BETWEEN, LESS_THAN_OR_EQUAL_TO, GREATER_THAN_OR_EQUAL_TO, EQUAL_TO, NOT_EQUAL_TO, ADDITION, MULTIPLICATION, DIVISION, SUBTRACTION, MASK_ALL, MASK_FIRST_N, MASK_LAST_N, VALIDATE_NON_NULL, VALIDATE_NON_ZERO, VALIDATE_NON_NEGATIVE, VALIDATE_NUMERIC, NO_OP
      },
      destination_field: "DestinationField",
      task_type: "Arithmetic", # required, accepts Arithmetic, Filter, Map, Map_all, Mask, Merge, Truncate, Validate
      task_properties: {
        "VALUE" => "Property",
      },
    },
  ],
})

Response structure


resp.flow_status #=> String, one of "Active", "Deprecated", "Deleted", "Draft", "Errored", "Suspended"
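
Because trigger_config, source_flow_config, destination_flow_config_list, and tasks are all required, even a small change (such as putting a flow on a schedule) re-supplies the full configuration. A compact S3-to-S3 sketch with hypothetical flow and bucket names:


resp = client.update_flow({
  flow_name: "my_flow",
  trigger_config: {
    trigger_type: "Scheduled",
    trigger_properties: {
      scheduled: {
        schedule_expression: "rate(5minutes)",
        data_pull_mode: "Incremental",
      },
    },
  },
  source_flow_config: {
    connector_type: "S3",
    source_connector_properties: {
      s3: { bucket_name: "my-source-bucket", bucket_prefix: "input" },
    },
  },
  destination_flow_config_list: [
    {
      connector_type: "S3",
      destination_connector_properties: {
        s3: { bucket_name: "my-destination-bucket" },
      },
    },
  ],
  tasks: [
    {
      source_fields: [],
      task_type: "Map_all",
      connector_operator: { s3: "NO_OP" },
      task_properties: {},
    },
  ],
})
resp.flow_status #=> e.g. "Active"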

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :flow_name (required, String)

    The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.

  • :description (String)

    A description of the flow.

  • :trigger_config (required, Types::TriggerConfig)

    The trigger settings that determine how and when the flow runs.

  • :source_flow_config (required, Types::SourceFlowConfig)

    Contains information about the configuration of the source connector used in the flow.

  • :destination_flow_config_list (required, Array<Types::DestinationFlowConfig>)

    The configuration that controls how Amazon AppFlow transfers data to the destination connector.

  • :tasks (required, Array<Types::Task>)

    A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.

Returns:

  • (Types::UpdateFlowResponse)

See Also:

  • AWS API Documentation



# File 'gems/aws-sdk-appflow/lib/aws-sdk-appflow/client.rb', line 2044

def update_flow(params = {}, options = {})
  req = build_request(:update_flow, params)
  req.send_request(options)
end