Class: Aws::Glue::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb

Overview

An API client for Glue. To construct a client, you need to configure a :region and :credentials.

client = Aws::Glue::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring a region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws sso login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]
    • The :access_key_id, :secret_access_key, and :session_token options.
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • ~/.aws/credentials
    • ~/.aws/config
    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_key_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a background thread polls for endpoints every 60 seconds (by default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError instead of sleeping, and will not be retried.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, the Client Side Monitoring Agent Publisher is used.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when available.

  • :endpoint (String)

    The client endpoint is normally constructed from the :region option. You should only configure an :endpoint when connecting to test or custom endpoints. This should be a valid HTTP(S) URI.

  • :endpoint_cache_max_entries (Integer) — default: 1000

    The maximum number of entries in the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    The maximum number of threads used to poll for endpoints to cache. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at $HOME/.aws/credentials. When not specified, 'default' is used.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomizer function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full; otherwise provide a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500 level server errors and certain ~400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery errors, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :secret_access_key (String)
  • :session_token (String)
  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. This option is useful when you want to ensure the highest level of performance by avoiding the overhead of walking request parameters and response data structures.

    When :simple_json is enabled, the request parameters hash must be formatted exactly as the API expects.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled.

  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws sso login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS-compatible endpoints will be used if available. When a FIPS region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::Glue::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters), where parameters is a Struct similar to Aws::Glue::EndpointParameters.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_open_timeout (Float) — default: 15

    The number of seconds to wait when opening an HTTP session before raising a Timeout::Error.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has the "Expect" header set to "100-continue". This value can safely be set per request on the session.

  • :ssl_timeout (Float) — default: nil

    Sets the SSL timeout in seconds.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.
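
As a hedged sketch of how these options compose (the region, key values, and timeouts below are placeholders, not recommendations), a client using the standard retry mode and basic logging might be constructed like this:

require 'logger'
require 'aws-sdk-glue'

client = Aws::Glue::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('EXAMPLE_KEY_ID', 'EXAMPLE_SECRET'),
  retry_mode: 'standard',       # opt out of the legacy retry behavior
  max_attempts: 5,              # the initial attempt plus up to 4 retries
  http_open_timeout: 15,        # seconds to wait when opening a session
  http_read_timeout: 60,        # seconds to wait for response data
  logger: Logger.new($stdout),
  log_level: :info
)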



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 375

def initialize(*args)
  super
end
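
Because :stub_responses disables HTTP entirely, a test can pair it with ClientStubs#stub_responses. A minimal sketch, assuming a hypothetical connection name:

client = Aws::Glue::Client.new(stub_responses: true)
client.stub_responses(:batch_delete_connection, {
  succeeded: ['my-connection'], # hypothetical name
  errors: {}
})
resp = client.batch_delete_connection(connection_name_list: ['my-connection'])
resp.succeeded #=> ["my-connection"]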

Instance Method Details

#batch_create_partition(params = {}) ⇒ Types::BatchCreatePartitionResponse

Creates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_create_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_input_list: [ # required
    {
      values: ["ValueString"],
      last_access_time: Time.now,
      storage_descriptor: {
        columns: [
          {
            name: "NameString", # required
            type: "ColumnTypeString",
            comment: "CommentString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
        ],
        location: "LocationString",
        additional_locations: ["LocationString"],
        input_format: "FormatString",
        output_format: "FormatString",
        compressed: false,
        number_of_buckets: 1,
        serde_info: {
          name: "NameString",
          serialization_library: "NameString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
        bucket_columns: ["NameString"],
        sort_columns: [
          {
            column: "NameString", # required
            sort_order: 1, # required
          },
        ],
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        skewed_info: {
          skewed_column_names: ["NameString"],
          skewed_column_values: ["ColumnValuesString"],
          skewed_column_value_location_maps: {
            "ColumnValuesString" => "ColumnValuesString",
          },
        },
        stored_as_sub_directories: false,
        schema_reference: {
          schema_id: {
            schema_arn: "GlueResourceArn",
            schema_name: "SchemaRegistryNameString",
            registry_name: "SchemaRegistryNameString",
          },
          schema_version_id: "SchemaVersionIdString",
          schema_version_number: 1,
        },
      },
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      last_analyzed_time: Time.now,
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be created. Currently, this should be the Amazon Web Services account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be created.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be created.

  • :partition_input_list (required, Array<Types::PartitionInput>)

    A list of PartitionInput structures that define the partitions to be created.

Returns:

  • (Types::BatchCreatePartitionResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 485

def batch_create_partition(params = {}, options = {})
  req = build_request(:batch_create_partition, params)
  req.send_request(options)
end
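
Beyond the placeholder syntax above, a concrete sketch (the database, table, and partition values are hypothetical) that creates one partition and surfaces partial failures:

resp = client.batch_create_partition({
  database_name: 'my_database',   # hypothetical
  table_name: 'my_table',         # hypothetical
  partition_input_list: [
    { values: ['2023-01-01'] }
  ],
})
resp.errors.each do |e|
  warn "#{e.partition_values.inspect}: #{e.error_detail.error_message}"
end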

#batch_delete_connection(params = {}) ⇒ Types::BatchDeleteConnectionResponse

Deletes a list of connection definitions from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_connection({
  catalog_id: "CatalogIdString",
  connection_name_list: ["NameString"], # required
})

Response structure


resp.succeeded #=> Array
resp.succeeded[0] #=> String
resp.errors #=> Hash
resp.errors["NameString"].error_code #=> String
resp.errors["NameString"].error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connections reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :connection_name_list (required, Array<String>)

    A list of names of the connections to delete.

Returns:

  • (Types::BatchDeleteConnectionResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 523

def batch_delete_connection(params = {}, options = {})
  req = build_request(:batch_delete_connection, params)
  req.send_request(options)
end
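
A concrete sketch (the connection names are hypothetical) that deletes two connections and reports per-name failures:

resp = client.batch_delete_connection({
  connection_name_list: ['conn-a', 'conn-b'], # hypothetical
})
puts "deleted: #{resp.succeeded.join(', ')}"
resp.errors.each do |name, detail|
  warn "#{name}: #{detail.error_code} - #{detail.error_message}"
end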

#batch_delete_partition(params = {}) ⇒ Types::BatchDeletePartitionResponse

Deletes one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_delete: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table that contains the partitions to be deleted.

  • :partitions_to_delete (required, Array<Types::PartitionValueList>)

    A list of PartitionValueList structures that define the partitions to be deleted.

Returns:

  • (Types::BatchDeletePartitionResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 575

def batch_delete_partition(params = {}, options = {})
  req = build_request(:batch_delete_partition, params)
  req.send_request(options)
end
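
A concrete sketch (the names and partition values are hypothetical) that deletes two date partitions:

resp = client.batch_delete_partition({
  database_name: 'my_database',
  table_name: 'my_table',
  partitions_to_delete: [
    { values: ['2023-01-01'] },
    { values: ['2023-01-02'] },
  ],
})
resp.errors.each do |e|
  warn "#{e.partition_values.inspect}: #{e.error_detail.error_message}"
end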

#batch_delete_table(params = {}) ⇒ Types::BatchDeleteTableResponse

Deletes multiple tables at once.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling BatchDeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  tables_to_delete: ["NameString"], # required
  transaction_id: "TransactionIdString",
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the tables to delete reside. For Hive compatibility, this name is entirely lowercase.

  • :tables_to_delete (required, Array<String>)

    A list of the tables to delete.

  • :transaction_id (String)

    The transaction ID at which to delete the table contents.

Returns:

  • (Types::BatchDeleteTableResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 633

def batch_delete_table(params = {}, options = {})
  req = build_request(:batch_delete_table, params)
  req.send_request(options)
end
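
A concrete sketch (the database and table names are hypothetical) that drops two staging tables and logs any per-table errors:

resp = client.batch_delete_table({
  database_name: 'my_database',
  tables_to_delete: ['staging_events', 'staging_users'], # hypothetical
})
resp.errors.each do |e|
  warn "#{e.table_name}: #{e.error_detail.error_message}"
end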

#batch_delete_table_version(params = {}) ⇒ Types::BatchDeleteTableVersionResponse

Deletes a specified batch of versions of a table.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_ids: ["VersionString"], # required
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].version_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_ids (required, Array<String>)

    A list of the IDs of versions to be deleted. A VersionId is a string representation of an integer. Each version is incremented by 1.

Returns:

  • (Types::BatchDeleteTableVersionResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 681

def batch_delete_table_version(params = {}, options = {})
  req = build_request(:batch_delete_table_version, params)
  req.send_request(options)
end
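
A concrete sketch (the names and version IDs are hypothetical) that prunes three old versions of a table:

resp = client.batch_delete_table_version({
  database_name: 'my_database',
  table_name: 'my_table',
  version_ids: ['1', '2', '3'], # string representations of integers
})
resp.errors.each do |e|
  warn "#{e.table_name} v#{e.version_id}: #{e.error_detail.error_message}"
end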

#batch_get_blueprints(params = {}) ⇒ Types::BatchGetBlueprintsResponse

Retrieves information about a list of blueprints.

Examples:

Request syntax with placeholder values


resp = client.batch_get_blueprints({
  names: ["OrchestrationNameString"], # required
  include_blueprint: false,
  include_parameter_spec: false,
})

Response structure


resp.blueprints #=> Array
resp.blueprints[0].name #=> String
resp.blueprints[0].description #=> String
resp.blueprints[0].created_on #=> Time
resp.blueprints[0].last_modified_on #=> Time
resp.blueprints[0].parameter_spec #=> String
resp.blueprints[0].blueprint_location #=> String
resp.blueprints[0].blueprint_service_location #=> String
resp.blueprints[0].status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED"
resp.blueprints[0].error_message #=> String
resp.blueprints[0].last_active_definition.description #=> String
resp.blueprints[0].last_active_definition.last_modified_on #=> Time
resp.blueprints[0].last_active_definition.parameter_spec #=> String
resp.blueprints[0].last_active_definition.blueprint_location #=> String
resp.blueprints[0].last_active_definition.blueprint_service_location #=> String
resp.missing_blueprints #=> Array
resp.missing_blueprints[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :names (required, Array<String>)

    A list of blueprint names.

  • :include_blueprint (Boolean)

    Specifies whether or not to include the blueprint in the response.

  • :include_parameter_spec (Boolean)

    Specifies whether or not to include the parameters, as a JSON string, for the blueprint in the response.

Returns:

  • (Types::BatchGetBlueprintsResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 735

def batch_get_blueprints(params = {}, options = {})
  req = build_request(:batch_get_blueprints, params)
  req.send_request(options)
end
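
A concrete sketch (the blueprint name is hypothetical) that fetches full blueprint details and checks for misses:

resp = client.batch_get_blueprints({
  names: ['my-blueprint'], # hypothetical
  include_blueprint: true,
  include_parameter_spec: true,
})
resp.blueprints.each { |b| puts "#{b.name}: #{b.status}" }
warn "missing: #{resp.missing_blueprints.join(', ')}" if resp.missing_blueprints.any?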

#batch_get_crawlers(params = {}) ⇒ Types::BatchGetCrawlersResponse

Returns a list of resource metadata for a given list of crawler names. After calling the ListCrawlers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_crawlers({
  crawler_names: ["NameString"], # required
})

Response structure


resp.crawlers #=> Array
resp.crawlers[0].name #=> String
resp.crawlers[0].role #=> String
resp.crawlers[0].targets.s3_targets #=> Array
resp.crawlers[0].targets.s3_targets[0].path #=> String
resp.crawlers[0].targets.s3_targets[0].exclusions #=> Array
resp.crawlers[0].targets.s3_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.s3_targets[0].connection_name #=> String
resp.crawlers[0].targets.s3_targets[0].sample_size #=> Integer
resp.crawlers[0].targets.s3_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.s3_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.jdbc_targets #=> Array
resp.crawlers[0].targets.jdbc_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets[0].path #=> String
resp.crawlers[0].targets.jdbc_targets[0].exclusions #=> Array
resp.crawlers[0].targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata #=> Array
resp.crawlers[0].targets.jdbc_targets[0].enable_additional_metadata[0] #=> String, one of "COMMENTS", "RAWTYPES"
resp.crawlers[0].targets.mongo_db_targets #=> Array
resp.crawlers[0].targets.mongo_db_targets[0].connection_name #=> String
resp.crawlers[0].targets.mongo_db_targets[0].path #=> String
resp.crawlers[0].targets.mongo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets #=> Array
resp.crawlers[0].targets.dynamo_db_targets[0].path #=> String
resp.crawlers[0].targets.dynamo_db_targets[0].scan_all #=> Boolean
resp.crawlers[0].targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawlers[0].targets.catalog_targets #=> Array
resp.crawlers[0].targets.catalog_targets[0].database_name #=> String
resp.crawlers[0].targets.catalog_targets[0].tables #=> Array
resp.crawlers[0].targets.catalog_targets[0].tables[0] #=> String
resp.crawlers[0].targets.catalog_targets[0].connection_name #=> String
resp.crawlers[0].targets.catalog_targets[0].event_queue_arn #=> String
resp.crawlers[0].targets.catalog_targets[0].dlq_event_queue_arn #=> String
resp.crawlers[0].targets.delta_targets #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables #=> Array
resp.crawlers[0].targets.delta_targets[0].delta_tables[0] #=> String
resp.crawlers[0].targets.delta_targets[0].connection_name #=> String
resp.crawlers[0].targets.delta_targets[0].write_manifest #=> Boolean
resp.crawlers[0].database_name #=> String
resp.crawlers[0].description #=> String
resp.crawlers[0].classifiers #=> Array
resp.crawlers[0].classifiers[0] #=> String
resp.crawlers[0].recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY", "CRAWL_EVENT_MODE"
resp.crawlers[0].schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawlers[0].schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawlers[0].lineage_configuration.crawler_lineage_settings #=> String, one of "ENABLE", "DISABLE"
resp.crawlers[0].state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawlers[0].table_prefix #=> String
resp.crawlers[0].schedule.schedule_expression #=> String
resp.crawlers[0].schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawlers[0].crawl_elapsed_time #=> Integer
resp.crawlers[0].creation_time #=> Time
resp.crawlers[0].last_updated #=> Time
resp.crawlers[0].last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawlers[0].last_crawl.error_message #=> String
resp.crawlers[0].last_crawl.log_group #=> String
resp.crawlers[0].last_crawl.log_stream #=> String
resp.crawlers[0].last_crawl.message_prefix #=> String
resp.crawlers[0].last_crawl.start_time #=> Time
resp.crawlers[0].version #=> Integer
resp.crawlers[0].configuration #=> String
resp.crawlers[0].crawler_security_configuration #=> String
resp.crawlers[0].lake_formation_configuration.use_lake_formation_credentials #=> Boolean
resp.crawlers[0].lake_formation_configuration.account_id #=> String
resp.crawlers_not_found #=> Array
resp.crawlers_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :crawler_names (required, Array<String>)

    A list of crawler names, which might be the names returned from the ListCrawlers operation.

Returns:

  • (Types::BatchGetCrawlersResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 834

def batch_get_crawlers(params = {}, options = {})
  req = build_request(:batch_get_crawlers, params)
  req.send_request(options)
end
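
A concrete sketch (the crawler name is hypothetical) that inspects crawler state:

resp = client.batch_get_crawlers({
  crawler_names: ['my-crawler'], # e.g. a name returned by list_crawlers
})
resp.crawlers.each { |c| puts "#{c.name}: #{c.state}" }
warn "not found: #{resp.crawlers_not_found.join(', ')}" if resp.crawlers_not_found.any?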

#batch_get_custom_entity_types(params = {}) ⇒ Types::BatchGetCustomEntityTypesResponse

Retrieves the details for the custom patterns specified by a list of names.

Examples:

Request syntax with placeholder values


resp = client.batch_get_custom_entity_types({
  names: ["NameString"], # required
})

Response structure


resp.custom_entity_types #=> Array
resp.custom_entity_types[0].name #=> String
resp.custom_entity_types[0].regex_string #=> String
resp.custom_entity_types[0].context_words #=> Array
resp.custom_entity_types[0].context_words[0] #=> String
resp.custom_entity_types_not_found #=> Array
resp.custom_entity_types_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :names (required, Array<String>)

    A list of names of the custom patterns that you want to retrieve.

Returns:

  • (Types::BatchGetCustomEntityTypesResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 870

def batch_get_custom_entity_types(params = {}, options = {})
  req = build_request(:batch_get_custom_entity_types, params)
  req.send_request(options)
end
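
A concrete sketch (the pattern name is hypothetical) that prints each custom pattern's regex:

resp = client.batch_get_custom_entity_types({
  names: ['MY_CUSTOM_PATTERN'], # hypothetical
})
resp.custom_entity_types.each do |t|
  puts "#{t.name}: /#{t.regex_string}/ (context: #{t.context_words.inspect})"
end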

#batch_get_data_quality_result(params = {}) ⇒ Types::BatchGetDataQualityResultResponse

Retrieves a list of data quality results for the specified result IDs.

Examples:

Request syntax with placeholder values


resp = client.batch_get_data_quality_result({
  result_ids: ["HashString"], # required
})

Response structure


resp.results #=> Array
resp.results[0].result_id #=> String
resp.results[0].score #=> Float
resp.results[0].data_source.glue_table.database_name #=> String
resp.results[0].data_source.glue_table.table_name #=> String
resp.results[0].data_source.glue_table.catalog_id #=> String
resp.results[0].data_source.glue_table.connection_name #=> String
resp.results[0].data_source.glue_table.additional_options #=> Hash
resp.results[0].data_source.glue_table.additional_options["NameString"] #=> String
resp.results[0].ruleset_name #=> String
resp.results[0].evaluation_context #=> String
resp.results[0].started_on #=> Time
resp.results[0].completed_on #=> Time
resp.results[0].job_name #=> String
resp.results[0].job_run_id #=> String
resp.results[0].ruleset_evaluation_run_id #=> String
resp.results[0].rule_results #=> Array
resp.results[0].rule_results[0].name #=> String
resp.results[0].rule_results[0].description #=> String
resp.results[0].rule_results[0].evaluation_message #=> String
resp.results[0].rule_results[0].result #=> String, one of "PASS", "FAIL", "ERROR"
resp.results_not_found #=> Array
resp.results_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :result_ids (required, Array<String>)

    A list of unique result IDs for the data quality results.

Returns:

  • (Types::BatchGetDataQualityResultResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 921

def batch_get_data_quality_result(params = {}, options = {})
  req = build_request(:batch_get_data_quality_result, params)
  req.send_request(options)
end
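
A concrete sketch (the result ID is hypothetical) that summarizes scores and per-rule outcomes:

resp = client.batch_get_data_quality_result({
  result_ids: ['dqresult-123abc'], # hypothetical
})
resp.results.each do |r|
  puts "#{r.ruleset_name}: score #{r.score}"
  r.rule_results.each { |rr| puts "  #{rr.name}: #{rr.result}" }
end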

#batch_get_dev_endpoints(params = {}) ⇒ Types::BatchGetDevEndpointsResponse

Returns a list of resource metadata for a given list of development endpoint names. After calling the ListDevEndpoints operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_dev_endpoints({
  dev_endpoint_names: ["GenericString"], # required
})

Response structure


resp.dev_endpoints #=> Array
resp.dev_endpoints[0].endpoint_name #=> String
resp.dev_endpoints[0].role_arn #=> String
resp.dev_endpoints[0].security_group_ids #=> Array
resp.dev_endpoints[0].security_group_ids[0] #=> String
resp.dev_endpoints[0].subnet_id #=> String
resp.dev_endpoints[0].yarn_endpoint_address #=> String
resp.dev_endpoints[0].private_address #=> String
resp.dev_endpoints[0].zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoints[0].public_address #=> String
resp.dev_endpoints[0].status #=> String
resp.dev_endpoints[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X"
resp.dev_endpoints[0].glue_version #=> String
resp.dev_endpoints[0].number_of_workers #=> Integer
resp.dev_endpoints[0].number_of_nodes #=> Integer
resp.dev_endpoints[0].availability_zone #=> String
resp.dev_endpoints[0].vpc_id #=> String
resp.dev_endpoints[0].extra_python_libs_s3_path #=> String
resp.dev_endpoints[0].extra_jars_s3_path #=> String
resp.dev_endpoints[0].failure_reason #=> String
resp.dev_endpoints[0].last_update_status #=> String
resp.dev_endpoints[0].created_timestamp #=> Time
resp.dev_endpoints[0].last_modified_timestamp #=> Time
resp.dev_endpoints[0].public_key #=> String
resp.dev_endpoints[0].public_keys #=> Array
resp.dev_endpoints[0].public_keys[0] #=> String
resp.dev_endpoints[0].security_configuration #=> String
resp.dev_endpoints[0].arguments #=> Hash
resp.dev_endpoints[0].arguments["GenericString"] #=> String
resp.dev_endpoints_not_found #=> Array
resp.dev_endpoints_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :dev_endpoint_names (required, Array<String>)

    The list of DevEndpoint names, which might be the names returned from the ListDevEndpoints operation.

Returns:

  • (Types::BatchGetDevEndpointsResponse)

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 985

def batch_get_dev_endpoints(params = {}, options = {})
  req = build_request(:batch_get_dev_endpoints, params)
  req.send_request(options)
end
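
A concrete sketch (the endpoint name is hypothetical) that reports each development endpoint's status:

resp = client.batch_get_dev_endpoints({
  dev_endpoint_names: ['my-dev-endpoint'], # hypothetical
})
resp.dev_endpoints.each { |d| puts "#{d.endpoint_name}: #{d.status}" }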

#batch_get_jobs(params = {}) ⇒ Types::BatchGetJobsResponse

Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_jobs({
  job_names: ["NameString"], # required
})

Response structure


resp.jobs #=> Array
resp.jobs[0].name #=> String
resp.jobs[0].description #=> String
resp.jobs[0].log_uri #=> String
resp.jobs[0].role #=> String
resp.jobs[0].created_on #=> Time
resp.jobs[0].last_modified_on #=> Time
resp.jobs[0].execution_property.max_concurrent_runs #=> Integer
resp.jobs[0].command.name #=> String
resp.jobs[0].command.script_location #=> String
resp.jobs[0].command.python_version #=> String
resp.jobs[0].default_arguments #=> Hash
resp.jobs[0].default_arguments["GenericString"] #=> String
resp.jobs[0].non_overridable_arguments #=> Hash
resp.jobs[0].non_overridable_arguments["GenericString"] #=> String
resp.jobs[0].connections.connections #=> Array
resp.jobs[0].connections.connections[0] #=> String
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].allocated_capacity #=> Integer
resp.jobs[0].timeout #=> Integer
resp.jobs[0].max_capacity #=> Float
resp.jobs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X"
resp.jobs[0].number_of_workers #=> Integer
resp.jobs[0].security_configuration #=> String
resp.jobs[0].notification_property.notify_delay_after #=> Integer
resp.jobs[0].glue_version #=> String
resp.jobs[0].code_gen_configuration_nodes #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.schema_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].athena_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.filter_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.partition_column #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.lower_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.upper_bound #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.num_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.job_bookmark_keys_sort_order #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.additional_options.data_type_mapping["JDBCDataType"] #=> String, one of "DATE", "STRING", "TIMESTAMP", "INT", "FLOAT", "LONG", "BIGDECIMAL", "BYTE", "SHORT", "DOUBLE"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_source.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.separator #=> String, one of "comma", "ctrla", "pipe", "semicolon", "tab"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.escaper #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.quote_char #=> String, one of "quote", "quillemet", "single_quote", "disabled"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.with_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.write_header #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.skip_first #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.optimize_performance #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_csv_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.compression_type #=> String, one of "gzip", "bzip2"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.json_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.multiline #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_json_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.paths[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.compression_type #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.exclusions[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_size #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.group_files #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.recurse #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.max_files_in_band #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.enable_sample_path #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.additional_options.sample_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_parquet_source.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].relational_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamo_db_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].jdbc_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connector_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.connection_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options #=> Hash
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.additional_options["EnclosedInStringProperty"] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_connector_target.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.redshift_tmp_dir #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.tmp_dir_iam_role #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.table_location #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].redshift_target.upsert_redshift_options.upsert_keys[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.compression #=> String, one of "snappy", "lzo", "gzip", "uncompressed", "none"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_glue_parquet_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.compression #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.format #=> String, one of "json", "csv", "avro", "orc", "parquet"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].s3_direct_target.schema_change_policy.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_key #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].from_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].to_type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].dropped #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].apply_mapping.mapping[0].children #=> Types::Mappings
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.source_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].rename_field.target_path[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.topk #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spigot.prob #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.join_type #=> String, one of "equijoin", "left", "right", "outer", "leftsemi", "leftanti"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].join.columns[0].keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].split_fields.paths[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].select_from_collection.index #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.imputed_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].fill_missing_values.filled_path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.logical_operator #=> String, one of "AND", "OR"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].operation #=> String, one of "EQ", "LT", "GT", "LTE", "GTE", "REGEX", "ISNULL"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].negated #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].type #=> String, one of "COLUMNEXTRACTED", "CONSTANT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].filter.filters[0].values[0].value[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.code #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.class_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].custom_code.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_query #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].from #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.sql_aliases[0].alias #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].spark_sql.output_schemas[0].columns[0].type #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].direct_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.endpoint_url #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.starting_position #=> String, one of "latest", "trim_horizon", "earliest"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_time_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_fetch_records_per_shard #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_record_per_read #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.add_idle_time_between_reads #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.idle_time_between_reads_in_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.describe_shard_interval #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.max_retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.avoid_empty_batches #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.stream_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_arn #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.streaming_options.role_session_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kinesis_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.window_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.detect_schema #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.bootstrap_servers #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.security_protocol #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.connection_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.topic_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.assign #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.subscribe_pattern #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.classification #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.delimiter #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.starting_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.ending_offsets #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.poll_timeout_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.num_retries #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.retry_interval_ms #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.max_offsets_per_trigger #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.streaming_options.min_partitions #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.polling_time #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].catalog_kafka_source.data_preview_options.record_polling_limit #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_empty #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_null_string #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_check_box_list.is_neg_one #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.id #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_null_fields.null_text_list[0].datatype.label #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.source #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].merge.primary_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].union.union_type #=> String, one of "ALL", "DISTINCT"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.pii_type #=> String, one of "RowAudit", "RowMasking", "ColumnAudit", "ColumnMasking"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.entity_types_to_detect[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.output_column_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.sample_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.threshold_fraction #=> Float
resp.jobs[0].code_gen_configuration_nodes["NodeId"].pii_detection.mask_value #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.groups[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].column[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].aggregate.aggs[0].agg_func #=> String, one of "avg", "countDistinct", "count", "first", "last", "kurtosis", "max", "min", "skewness", "stddev_samp", "stddev_pop", "sum", "sumDistinct", "var_samp", "var_pop"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].drop_duplicates.columns[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0] #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.partition_keys[0][0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.enable_update_catalog #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_target.schema_change_policy.update_behavior #=> String, one of "UPDATE_IN_DATABASE", "LOG"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.partition_predicate #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_size #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].governed_catalog_source.additional_options.bounded_files #=> Integer
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_source.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].microsoft_sql_server_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].my_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].oracle_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.database #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].postgre_sql_catalog_target.table #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.transform_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_rule #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].validation_message #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].value[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].list_type #=> String, one of "str", "int", "float", "complex", "bool", "list", "null"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.parameters[0].is_optional #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.function_name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.path #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].dynamic_transform.version #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.name #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs #=> Array
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.inputs[0] #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.ruleset #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.output #=> String, one of "PrimaryInput", "EvaluationResults"
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.evaluation_context #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_s3_prefix #=> String
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.cloud_watch_metrics_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.publishing_options.results_publishing_enabled #=> Boolean
resp.jobs[0].code_gen_configuration_nodes["NodeId"].evaluate_data_quality.stop_job_on_failure_options.stop_job_on_failure_timing #=> String, one of "Immediate", "AfterDataLoad"
resp.jobs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.jobs[0].source_control_details.provider #=> String, one of "GITHUB", "AWS_CODE_COMMIT"
resp.jobs[0].source_control_details.repository #=> String
resp.jobs[0].source_control_details.owner #=> String
resp.jobs[0].source_control_details.branch #=> String
resp.jobs[0].source_control_details.folder #=> String
resp.jobs[0].source_control_details.last_commit_id #=> String
resp.jobs[0].source_control_details.auth_strategy #=> String, one of "PERSONAL_ACCESS_TOKEN", "AWS_SECRETS_MANAGER"
resp.jobs[0].source_control_details.auth_token #=> String
resp.jobs_not_found #=> Array
resp.jobs_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_names (required, Array<String>)

    A list of job names, which may be the names returned from the ListJobs operation.

Returns:

  • (Types::BatchGetJobsResponse) with the fields #jobs and #jobs_not_found, as shown in the response structure above.

See Also:

  • AWS API Documentation


# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1561

def batch_get_jobs(params = {}, options = {})
  req = build_request(:batch_get_jobs, params)
  req.send_request(options)
end
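
A usage sketch pairing this operation with ListJobs, as the :job_names description above suggests. The require, region, and page size are assumptions, and the printed field is illustrative:


require 'aws-sdk-glue'

client = Aws::Glue::Client.new(region: "us-east-1") # assumed region

# Fetch one page of job names, then hydrate the full definitions in a single batch call.
page = client.list_jobs(max_results: 25)
resp = client.batch_get_jobs(job_names: page.job_names)

resp.jobs.each { |job| puts job.name }
warn "not found: #{resp.jobs_not_found.join(', ')}" unless resp.jobs_not_found.empty?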

#batch_get_partition(params = {}) ⇒ Types::BatchGetPartitionResponse

Retrieves partitions in a batch request.

Examples:

Request syntax with placeholder values


resp = client.batch_get_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_get: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.partitions #=> Array
resp.partitions[0].values #=> Array
resp.partitions[0].values[0] #=> String
resp.partitions[0].database_name #=> String
resp.partitions[0].table_name #=> String
resp.partitions[0].creation_time #=> Time
resp.partitions[0].last_access_time #=> Time
resp.partitions[0].storage_descriptor.columns #=> Array
resp.partitions[0].storage_descriptor.columns[0].name #=> String
resp.partitions[0].storage_descriptor.columns[0].type #=> String
resp.partitions[0].storage_descriptor.columns[0].comment #=> String
resp.partitions[0].storage_descriptor.columns[0].parameters #=> Hash
resp.partitions[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.location #=> String
resp.partitions[0].storage_descriptor.additional_locations #=> Array
resp.partitions[0].storage_descriptor.additional_locations[0] #=> String
resp.partitions[0].storage_descriptor.input_format #=> String
resp.partitions[0].storage_descriptor.output_format #=> String
resp.partitions[0].storage_descriptor.compressed #=> Boolean
resp.partitions[0].storage_descriptor.number_of_buckets #=> Integer
resp.partitions[0].storage_descriptor.serde_info.name #=> String
resp.partitions[0].storage_descriptor.serde_info.serialization_library #=> String
resp.partitions[0].storage_descriptor.serde_info.parameters #=> Hash
resp.partitions[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.bucket_columns #=> Array
resp.partitions[0].storage_descriptor.bucket_columns[0] #=> String
resp.partitions[0].storage_descriptor.sort_columns #=> Array
resp.partitions[0].storage_descriptor.sort_columns[0].column #=> String
resp.partitions[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partitions[0].storage_descriptor.parameters #=> Hash
resp.partitions[0].storage_descriptor.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partitions[0].storage_descriptor.stored_as_sub_directories #=> Boolean
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partitions[0].parameters #=> Hash
resp.partitions[0].parameters["KeyString"] #=> String
resp.partitions[0].last_analyzed_time #=> Time
resp.partitions[0].catalog_id #=> String
resp.unprocessed_keys #=> Array
resp.unprocessed_keys[0].values #=> Array
resp.unprocessed_keys[0].values[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions' table.

  • :partitions_to_get (required, Array<Types::PartitionValueList>)

    A list of partition values identifying the partitions to retrieve.

Returns:

  • (Types::BatchGetPartitionResponse) with the fields #partitions and #unprocessed_keys, as shown in the response structure above.

See Also:

  • AWS API Documentation


# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1657

def batch_get_partition(params = {}, options = {})
  req = build_request(:batch_get_partition, params)
  req.send_request(options)
end
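
A usage sketch; the database, table, and partition values below are placeholders, and any keys the service skips in the first batch are simply resubmitted once:


to_get = [{ values: ["2023", "01"] }] # placeholder partition values

resp = client.batch_get_partition(
  database_name: "my_database", # assumed database
  table_name: "my_table",       # assumed table
  partitions_to_get: to_get,
)

resp.partitions.each { |part| puts "#{part.values.join('/')} => #{part.storage_descriptor.location}" }

# Unprocessed keys can be requested again in a follow-up batch.
unless resp.unprocessed_keys.empty?
  retry_keys = resp.unprocessed_keys.map { |k| { values: k.values } }
  resp = client.batch_get_partition(
    database_name: "my_database", table_name: "my_table", partitions_to_get: retry_keys
  )
end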

#batch_get_triggers(params = {}) ⇒ Types::BatchGetTriggersResponse

Returns a list of resource metadata for a given list of trigger names. After calling the ListTriggers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_triggers({
  trigger_names: ["NameString"], # required
})

Response structure


resp.triggers #=> Array
resp.triggers[0].name #=> String
resp.triggers[0].workflow_name #=> String
resp.triggers[0].id #=> String
resp.triggers[0].type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.triggers[0].state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.triggers[0].description #=> String
resp.triggers[0].schedule #=> String
resp.triggers[0].actions #=> Array
resp.triggers[0].actions[0].job_name #=> String
resp.triggers[0].actions[0].arguments #=> Hash
resp.triggers[0].actions[0].arguments["GenericString"] #=> String
resp.triggers[0].actions[0].timeout #=> Integer
resp.triggers[0].actions[0].security_configuration #=> String
resp.triggers[0].actions[0].notification_property.notify_delay_after #=> Integer
resp.triggers[0].actions[0].crawler_name #=> String
resp.triggers[0].predicate.logical #=> String, one of "AND", "ANY"
resp.triggers[0].predicate.conditions #=> Array
resp.triggers[0].predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.triggers[0].predicate.conditions[0].job_name #=> String
resp.triggers[0].predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING"
resp.triggers[0].predicate.conditions[0].crawler_name #=> String
resp.triggers[0].predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.triggers[0].event_batching_condition.batch_size #=> Integer
resp.triggers[0].event_batching_condition.batch_window #=> Integer
resp.triggers_not_found #=> Array
resp.triggers_not_found[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :trigger_names (required, Array<String>)

    A list of trigger names, which may be the names returned from the ListTriggers operation.

Returns:

  • (Types::BatchGetTriggersResponse) with the fields #triggers and #triggers_not_found, as shown in the response structure above.

See Also:

  • AWS API Documentation


# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1717

def batch_get_triggers(params = {}, options = {})
  req = build_request(:batch_get_triggers, params)
  req.send_request(options)
end
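
A usage sketch; the trigger names come from ListTriggers as described above, and only the first page of names is used here for brevity:


trigger_names = client.list_triggers.trigger_names # first page only

unless trigger_names.empty?
  resp = client.batch_get_triggers(trigger_names: trigger_names)
  resp.triggers.each do |trigger|
    puts "#{trigger.name} (#{trigger.type}) state=#{trigger.state}"
  end
end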

#batch_get_workflows(params = {}) ⇒ Types::BatchGetWorkflowsResponse

Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_workflows({
  names: ["NameString"], # required
  include_graph: false,
})

Response structure


resp.workflows #=> Array
resp.workflows[0].name #=> String
resp.workflows[0].description #=> String
resp.workflows[0].default_run_properties #=> Hash
resp.workflows[0].default_run_properties["IdString"] #=> String
resp.workflows[0].created_on #=> Time
resp.workflows[0].last_modified_on #=> Time
resp.workflows[0].last_run.name #=> String
resp.workflows[0].last_run.workflow_run_id #=> String
resp.workflows[0].last_run.previous_run_id #=> String
resp.workflows[0].last_run.workflow_run_properties #=> Hash
resp.workflows[0].last_run.workflow_run_properties["IdString"] #=> String
resp.workflows[0].last_run.started_on #=> Time
resp.workflows[0].last_run.completed_on #=> Time
resp.workflows[0].last_run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.workflows[0].last_run.error_message #=> String
resp.workflows[0].last_run.statistics.total_actions #=> Integer
resp.workflows[0].last_run.statistics.timeout_actions #=> Integer
resp.workflows[0].last_run.statistics.failed_actions #=> Integer
resp.workflows[0].last_run.statistics.stopped_actions #=> Integer
resp.workflows[0].last_run.statistics.succeeded_actions #=> Integer
resp.workflows[0].last_run.statistics.running_actions #=> Integer
resp.workflows[0].last_run.statistics.errored_actions #=> Integer
resp.workflows[0].last_run.statistics.waiting_actions #=> Integer
resp.workflows[0].last_run.graph.nodes #=> Array
resp.workflows[0].last_run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].last_run.graph.nodes[0].name #=> String
resp.workflows[0].last_run.graph.nodes[0].unique_id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].last_run.graph.edges #=> Array
resp.workflows[0].last_run.graph.edges[0].source_id #=> String
resp.workflows[0].last_run.graph.edges[0].destination_id #=> String
resp.workflows[0].last_run.starting_event_batch_condition.batch_size #=> Integer
resp.workflows[0].last_run.starting_event_batch_condition.batch_window #=> Integer
resp.workflows[0].graph.nodes #=> Array
resp.workflows[0].graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].graph.nodes[0].name #=> String
resp.workflows[0].graph.nodes[0].unique_id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND", "EVENT"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_size #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.event_batching_condition.batch_window #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT", "ERROR", "WAITING"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X", "G.025X"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].dpu_seconds #=> Float
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].execution_class #=> String, one of "FLEX", "STANDARD"
resp.workflows[0].graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED", "ERROR"
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].graph.edges #=> Array
resp.workflows[0].graph.edges[0].source_id #=> String
resp.workflows[0].graph.edges[0].destination_id #=> String
resp.workflows[0].max_concurrent_runs #=> Integer
resp.workflows[0].blueprint_details.blueprint_name #=> String
resp.workflows[0].blueprint_details.run_id #=> String
resp.missing_workflows #=> Array
resp.missing_workflows[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :names (required, Array<String>)

    A list of workflow names, which may be the names returned from the ListWorkflows operation.

  • :include_graph (Boolean)

    Specifies whether to include a graph when returning the workflow resource metadata.

Returns:

  • (Types::BatchGetWorkflowsResponse) with the fields #workflows and #missing_workflows, as shown in the response structure above.

See Also:

  • AWS API Documentation


# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1918

def batch_get_workflows(params = {}, options = {})
  req = build_request(:batch_get_workflows, params)
  req.send_request(options)
end
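
A usage sketch; the names come from ListWorkflows, and a workflow that has never run has no last_run, hence the safe navigation:


names = client.list_workflows.workflows # ListWorkflows returns workflow names

unless names.empty?
  resp = client.batch_get_workflows(names: names, include_graph: false)
  resp.workflows.each do |wf|
    puts "#{wf.name}: #{wf.last_run&.status || 'never run'}"
  end
end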

#batch_stop_job_run(params = {}) ⇒ Types::BatchStopJobRunResponse

Stops one or more job runs for a specified job definition.

Examples:

Request syntax with placeholder values


resp = client.batch_stop_job_run({
  job_name: "NameString", # required
  job_run_ids: ["IdString"], # required
})

Response structure


resp.successful_submissions #=> Array
resp.successful_submissions[0].job_name #=> String
resp.successful_submissions[0].job_run_id #=> String
resp.errors #=> Array
resp.errors[0].job_name #=> String
resp.errors[0].job_run_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})


Options Hash (params):

  • :job_name (required, String)

    The name of the job definition for which to stop job runs.

  • :job_run_ids (required, Array<String>)

    A list of the JobRunIds that should be stopped for that job definition.

Returns:

  • (Types::BatchStopJobRunResponse) with the fields #successful_submissions and #errors, as shown in the response structure above.

See Also:

  • AWS API Documentation


# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 1959

def batch_stop_job_run(params = {}, options = {})
  req = build_request(:batch_stop_job_run, params)
  req.send_request(options)
end
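
A usage sketch that stops every currently running run of one job; the job name is a placeholder, and GetJobRuns is used here only to discover run IDs:


job_name = "my-etl-job" # assumed job name

run_ids = client.get_job_runs(job_name: job_name).job_runs
                .select { |run| run.job_run_state == "RUNNING" }
                .map(&:id)

unless run_ids.empty?
  resp = client.batch_stop_job_run(job_name: job_name, job_run_ids: run_ids)
  resp.errors.each { |e| warn "#{e.job_run_id}: #{e.error_detail.error_message}" }
end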

#batch_update_partition(params = {}) ⇒ Types::BatchUpdatePartitionResponse

Updates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_update_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  entries: [ # required
    {
      partition_value_list: ["ValueString"], # required
      partition_input: { # required
        values: ["ValueString"],
        last_access_time: Time.now,
        storage_descriptor: {
          columns: [
            {
              name: "NameString", # required
              type: "ColumnTypeString",
              comment: "CommentString",
              parameters: {
                "KeyString" => "ParametersMapValue",
              },
            },
          ],
          location: "LocationString",
          additional_locations: ["LocationString"],
          input_format: "FormatString",
          output_format: "FormatString",
          compressed: false,
          number_of_buckets: 1,
          serde_info: {
            name: "NameString",
            serialization_library: "NameString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
          bucket_columns: ["NameString"],
          sort_columns: [
            {
              column: "NameString", # required
              sort_order: 1, # required
            },
          ],
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
          skewed_info: {
            skewed_column_names: ["NameString"],
            skewed_column_values: ["ColumnValuesString"],
            skewed_column_value_location_maps: {
              "ColumnValuesString" => "ColumnValuesString",
            },
          },
          stored_as_sub_directories: false,
          schema_reference: {
            schema_id: {
              schema_arn: "GlueResourceArn",
              schema_name: "SchemaRegistryNameString",
              registry_name: "SchemaRegistryNameString",
            },
            schema_version_id: "SchemaVersionIdString",
            schema_version_number: 1,
          },
        },
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        last_analyzed_time: Time.now,
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_value_list #=> Array
resp.errors[0].partition_value_list[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be updated. Currently, this should be the Amazon Web Services account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be updated.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be updated.

  • :entries (required, Array<Types::BatchUpdatePartitionRequestEntry>)

    A list of up to 100 BatchUpdatePartitionRequestEntry objects to update.
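
A minimal sketch, assuming a client configured as shown in #initialize; the database, table, and S3 location are hypothetical. Each entry pairs the values that identify an existing partition with the replacement PartitionInput; for more than 100 partitions, slice the entries (for example, with each_slice(100)).


entries = [
  {
    partition_value_list: ["2023", "01"], # identifies the existing partition
    partition_input: {
      values: ["2023", "01"],
      storage_descriptor: { location: "s3://my-bucket/orders/2023/01/" }
    }
  }
]

resp = client.batch_update_partition(
  database_name: "sales_db",
  table_name: "orders",
  entries: entries
)
resp.errors.each do |e|
  warn "#{e.partition_value_list.inspect}: #{e.error_detail.error_message}"
end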

Returns:

  • (Types::BatchUpdatePartitionResponse)

    Returns a response object which responds to the following methods:

    • #errors => Array<Types::BatchUpdatePartitionFailureEntry>

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2071

def batch_update_partition(params = {}, options = {})
  req = build_request(:batch_update_partition, params)
  req.send_request(options)
end

#cancel_data_quality_rule_recommendation_run(params = {}) ⇒ Struct

Cancels the specified recommendation run that was being used to generate rules.

Examples:

Request syntax with placeholder values


resp = client.cancel_data_quality_rule_recommendation_run({
  run_id: "HashString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2094

def cancel_data_quality_rule_recommendation_run(params = {}, options = {})
  req = build_request(:cancel_data_quality_rule_recommendation_run, params)
  req.send_request(options)
end

#cancel_data_quality_ruleset_evaluation_run(params = {}) ⇒ Struct

Cancels a run where a ruleset is being evaluated against a data source.

Examples:

Request syntax with placeholder values


resp = client.cancel_data_quality_ruleset_evaluation_run({
  run_id: "HashString", # required
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :run_id (required, String)

    The unique run identifier associated with this run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2117

def cancel_data_quality_ruleset_evaluation_run(params = {}, options = {})
  req = build_request(:cancel_data_quality_ruleset_evaluation_run, params)
  req.send_request(options)
end

#cancel_ml_task_run(params = {}) ⇒ Types::CancelMLTaskRunResponse

Cancels (stops) a task run. Machine learning task runs are asynchronous tasks that Glue runs on your behalf as part of various machine learning workflows. You can cancel a machine learning task run at any time by calling CancelMLTaskRun with the TransformID of the task run's parent transform and the task run's TaskRunId.

Examples:

Request syntax with placeholder values


resp = client.cancel_ml_task_run({
  transform_id: "HashString", # required
  task_run_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.task_run_id #=> String
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :task_run_id (required, String)

    A unique identifier for the task run.
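
A minimal sketch, assuming a client configured as shown in #initialize; both identifiers are hypothetical. The returned status shows the cancellation taking effect.


resp = client.cancel_ml_task_run(
  transform_id: "tfm-0123456789abcdef", # hypothetical transform ID
  task_run_id: "tsk-0123456789abcdef"   # hypothetical task run ID
)
puts "task run #{resp.task_run_id} is now #{resp.status}" # e.g. "STOPPING"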

Returns:

  • (Types::CancelMLTaskRunResponse)

    Returns a response object which responds to the following methods:

    • #transform_id => String
    • #task_run_id => String
    • #status => String

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2157

def cancel_ml_task_run(params = {}, options = {})
  req = build_request(:cancel_ml_task_run, params)
  req.send_request(options)
end

#cancel_statement(params = {}) ⇒ Struct

Cancels the statement.

Examples:

Request syntax with placeholder values


resp = client.cancel_statement({
  session_id: "NameString", # required
  id: 1, # required
  request_origin: "OrchestrationNameString",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :session_id (required, String)

    The Session ID of the statement to be cancelled.

  • :id (required, Integer)

    The ID of the statement to be cancelled.

  • :request_origin (String)

    The origin of the request to cancel the statement.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2187

def cancel_statement(params = {}, options = {})
  req = build_request(:cancel_statement, params)
  req.send_request(options)
end

#check_schema_version_validity(params = {}) ⇒ Types::CheckSchemaVersionValidityResponse

Validates the supplied schema. This call has no side effects; it simply validates the schema, using the supplied DataFormat as the format. Since it does not take a schema set name, no compatibility checks are performed.

Examples:

Request syntax with placeholder values


resp = client.check_schema_version_validity({
  data_format: "AVRO", # required, accepts AVRO, JSON, PROTOBUF
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.valid #=> Boolean
resp.error #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :data_format (required, String)

    The data format of the schema definition. Currently AVRO, JSON and PROTOBUF are supported.

  • :schema_definition (required, String)

    The definition of the schema that has to be validated.
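
A minimal sketch, assuming a client configured as shown in #initialize: build an Avro record schema as JSON and check whether it is well formed. No registry state is read or written.


require "json"

avro_schema = {
  type: "record",
  name: "User",
  fields: [
    { name: "id",    type: "long"   },
    { name: "email", type: "string" }
  ]
}.to_json

resp = client.check_schema_version_validity(
  data_format: "AVRO",
  schema_definition: avro_schema
)
puts resp.valid ? "schema is valid" : "invalid: #{resp.error}"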

Returns:

  • (Types::CheckSchemaVersionValidityResponse)

    Returns a response object which responds to the following methods:

    • #valid => Boolean
    • #error => String

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2225

def check_schema_version_validity(params = {}, options = {})
  req = build_request(:check_schema_version_validity, params)
  req.send_request(options)
end

#create_blueprint(params = {}) ⇒ Types::CreateBlueprintResponse

Registers a blueprint with Glue.

Examples:

Request syntax with placeholder values


resp = client.create_blueprint({
  name: "OrchestrationNameString", # required
  description: "Generic512CharString",
  blueprint_location: "OrchestrationS3Location", # required
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    The name of the blueprint.

  • :description (String)

    A description of the blueprint.

  • :blueprint_location (required, String)

    Specifies a path in Amazon S3 where the blueprint is published.

  • :tags (Hash<String,String>)

    The tags to be applied to this blueprint.
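
A minimal sketch, assuming a client configured as shown in #initialize and a blueprint archive already uploaded to Amazon S3; the name, bucket, and tags are hypothetical.


resp = client.create_blueprint(
  name: "partitioned_crawlers", # hypothetical blueprint name
  description: "Creates one crawler per S3 prefix",
  blueprint_location: "s3://my-blueprints/partitioned_crawlers.zip",
  tags: { "team" => "data-platform" }
)
puts "registered blueprint #{resp.name}"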

Returns:

  • (Types::CreateBlueprintResponse)

    Returns a response object which responds to the following methods:

    • #name => String

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2267

def create_blueprint(params = {}, options = {})
  req = build_request(:create_blueprint, params)
  req.send_request(options)
end

#create_classifier(params = {}) ⇒ Struct

Creates a classifier in the user's account. This can be a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field of the request is present.

Examples:

Request syntax with placeholder values


resp = client.create_classifier({
  grok_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    grok_pattern: "GrokPattern", # required
    custom_patterns: "CustomPatterns",
  },
  xml_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    row_tag: "RowTag",
  },
  json_classifier: {
    name: "NameString", # required
    json_path: "JsonPath", # required
  },
  csv_classifier: {
    name: "NameString", # required
    delimiter: "CsvColumnDelimiter",
    quote_symbol: "CsvQuoteSymbol",
    contains_header: "UNKNOWN", # accepts UNKNOWN, PRESENT, ABSENT
    header: ["NameString"],
    disable_value_trimming: false,
    allow_single_column: false,
    custom_datatype_configured: false,
    custom_datatypes: ["NameString"],
  },
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :grok_classifier (Types::CreateGrokClassifierRequest)

    A GrokClassifier object specifying the classifier to create.

  • :xml_classifier (Types::CreateXMLClassifierRequest)

    An XMLClassifier object specifying the classifier to create.

  • :json_classifier (Types::CreateJsonClassifierRequest)

    A JsonClassifier object specifying the classifier to create.

  • :csv_classifier (Types::CreateCsvClassifierRequest)

    A CsvClassifier object specifying the classifier to create.
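
A minimal sketch, assuming a client configured as shown in #initialize; the classifier name and header are hypothetical. Exactly one of the four classifier fields is supplied, here a CSV classifier for files with a known header row.


client.create_classifier(
  csv_classifier: {
    name: "orders_csv", # hypothetical classifier name
    delimiter: ",",
    contains_header: "PRESENT",
    header: ["order_id", "customer_id", "amount"]
  }
)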
Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2325

def create_classifier(params = {}, options = {})
  req = build_request(:create_classifier, params)
  req.send_request(options)
end

#create_connection(params = {}) ⇒ Struct

Creates a connection definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_connection({
  catalog_id: "CatalogIdString",
  connection_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK, MARKETPLACE, CUSTOM
    match_criteria: ["NameString"],
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    physical_connection_requirements: {
      subnet_id: "NameString",
      security_group_id_list: ["NameString"],
      availability_zone: "NameString",
    },
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the connection. If none is provided, the Amazon Web Services account ID is used by default.

  • :connection_input (required, Types::ConnectionInput)

    A ConnectionInput object defining the connection to create.

  • :tags (Hash<String,String>)

    The tags you assign to the connection.
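
A minimal sketch, assuming a client configured as shown in #initialize; the endpoint, credentials, and VPC identifiers are hypothetical. In practice, prefer referencing a secret over hard-coding a password in connection properties.


client.create_connection(
  connection_input: {
    name: "orders_postgres", # hypothetical connection name
    connection_type: "JDBC",
    connection_properties: {
      "JDBC_CONNECTION_URL" => "jdbc:postgresql://db.example.com:5432/orders",
      "USERNAME" => "glue_user",
      "PASSWORD" => "example-only" # placeholder; do not hard-code real passwords
    },
    physical_connection_requirements: {
      subnet_id: "subnet-0abc1234",
      security_group_id_list: ["sg-0abc1234"],
      availability_zone: "us-east-1a"
    }
  }
)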

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2371

def create_connection(params = {}, options = {})
  req = build_request(:create_connection, params)
  req.send_request(options)
end

#create_crawler(params = {}) ⇒ Struct

Creates a new crawler with the specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in the s3Targets, jdbcTargets, or DynamoDBTargets field.

Examples:

Request syntax with placeholder values


resp = client.create_crawler({
  name: "NameString", # required
  role: "Role", # required
  database_name: "DatabaseName",
  description: "DescriptionString",
  targets: { # required
    s3_targets: [
      {
        path: "Path",
        exclusions: ["Path"],
        connection_name: "ConnectionName",
        sample_size: 1,
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    jdbc_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        exclusions: ["Path"],
        enable_additional_metadata: ["COMMENTS"], # accepts COMMENTS, RAWTYPES
      },
    ],
    mongo_db_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        scan_all: false,
      },
    ],
    dynamo_db_targets: [
      {
        path: "Path",
        scan_all: false,
        scan_rate: 1.0,
      },
    ],
    catalog_targets: [
      {
        database_name: "NameString", # required
        tables: ["NameString"], # required
        connection_name: "ConnectionName",
        event_queue_arn: "EventQueueArn",
        dlq_event_queue_arn: "EventQueueArn",
      },
    ],
    delta_targets: [
      {
        delta_tables: ["Path"],
        connection_name: "ConnectionName",
        write_manifest: false,
      },
    ],
  },
  schedule: "CronExpression",
  classifiers: ["NameString"],
  table_prefix: "TablePrefix",
  schema_change_policy: {
    update_behavior: "LOG", # accepts LOG, UPDATE_IN_DATABASE
    delete_behavior: "LOG", # accepts LOG, DELETE_FROM_DATABASE, DEPRECATE_IN_DATABASE
  },
  recrawl_policy: {
    recrawl_behavior: "CRAWL_EVERYTHING", # accepts CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY, CRAWL_EVENT_MODE
  },
  lineage_configuration: {
    crawler_lineage_settings: "ENABLE", # accepts ENABLE, DISABLE
  },
  lake_formation_configuration: {
    use_lake_formation_credentials: false,
    account_id: "AccountId",
  },
  configuration: "CrawlerConfiguration",
  crawler_security_configuration: "CrawlerSecurityConfiguration",
  tags: {
    "TagKey" => "TagValue",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :name (required, String)

    Name of the new crawler.

  • :role (required, String)

    The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.

  • :database_name (String)

    The Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*.

  • :description (String)

    A description of the new crawler.

  • :targets (required, Types::CrawlerTargets)

    A collection of targets to crawl.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

  • :classifiers (Array<String>)

    A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • :table_prefix (String)

    The table prefix used for catalog tables that are created.

  • :schema_change_policy (Types::SchemaChangePolicy)

    The policy for the crawler's update and deletion behavior.

  • :recrawl_policy (Types::RecrawlPolicy)

    A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • :lineage_configuration (Types::LineageConfiguration)

    Specifies data lineage configuration settings for the crawler.

  • :lake_formation_configuration (Types::LakeFormationConfiguration)

    Specifies Lake Formation configuration settings for the crawler.

  • :configuration (String)

    Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.

  • :crawler_security_configuration (String)

    The name of the SecurityConfiguration structure to be used by this crawler.

  • :tags (Hash<String,String>)

    The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
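
A minimal sketch, assuming a client configured as shown in #initialize and an existing IAM role and Glue database; all names are hypothetical. It crawls a single S3 prefix daily at 12:15 UTC and updates the catalog in place.


client.create_crawler(
  name: "orders_crawler",
  role: "AWSGlueServiceRole-orders", # hypothetical role with read access to the bucket
  database_name: "sales_db",
  targets: {
    s3_targets: [{ path: "s3://my-bucket/orders/" }]
  },
  schedule: "cron(15 12 * * ? *)", # daily at 12:15 UTC
  schema_change_policy: {
    update_behavior: "UPDATE_IN_DATABASE",
    delete_behavior: "LOG"
  }
)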

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'gems/aws-sdk-glue/lib/aws-sdk-glue/client.rb', line 2538

def create_crawler(params = {}, options = {})
  req = build_request(:create_crawler, params)
  req.send_request(options)
end

#create_custom_entity_type(params = {}) ⇒ Types::CreateCustomEntityTypeResponse

Creates a custom pattern that is used to detect sensitive data across the columns and rows of your structured data.

Each custom pattern you create specifies a regular expression and an optional list of context words. If no context words are passed, only the regular expression is checked.

Examples:

Request syntax with placeholder values


resp = client.create_custom_entity_type({
  name: "NameString", # required
  regex_string: "NameString", # required
  context_words: ["NameString"],
})
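
Request with concrete values (all hypothetical): detect employee IDs of the form "EMP-" plus six digits, narrowed by optional context words


resp = client.create_custom_entity_type({
  name: "EMPLOYEE_ID",
  regex_string: "EMP-[0-9]{6}",
  context_words: ["employee", "badge", "staff"],
})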

Response structure


resp.name #=> String

Parameters:

  • params (Hash) (defaults to: {})