You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.

Class: Aws::Glue::Client

Inherits:
Seahorse::Client::Base show all
Defined in:
(unknown)

Overview

An API client for AWS Glue. To construct a client, you need to configure a :region and :credentials.

glue = Aws::Glue::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

  • ENV['AWS_REGION']
  • Aws.config[:region]

Go here for a list of supported regions.

Credentials

Default credentials are loaded automatically from the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Aws.config[:credentials]
  • The shared credentials ini file at ~/.aws/credentials (more information)
  • From an instance profile when running on EC2

You can also construct a credentials object from one of the following classes:

Alternatively, you configure credentials with :access_key_id and :secret_access_key:

# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))

Aws::Glue::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Constructor collapse

API Operations collapse

Instance Method Summary collapse

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options = {}) ⇒ Aws::Glue::Client

Constructs an API client.

Options Hash (options):

  • :access_key_id (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :active_endpoint_cache (Boolean)

    When set to true, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.

  • :credentials (required, Credentials)

    Your AWS credentials. The following locations will be searched in order for credentials:

    • :access_key_id, :secret_access_key, and :session_token options
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • HOME/.aws/credentials shared credentials file
    • EC2 instance profile credentials See Plugins::RequestSigner for more details.
  • :disable_host_prefix_injection (Boolean)

    Set to true to disable SDK automatically adding host prefix to default service endpoint when available. See Plugins::EndpointPattern for more details.

  • :endpoint (String)

    A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.

  • :endpoint_cache_max_entries (Integer)

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_max_threads (Integer)

    Used for the maximum threads in use for polling endpoints to be cached, defaults to 10. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_poll_interval (Integer)

    When :endpoint_discovery and :active_endpoint_cache is enabled, Use this option to config the time interval in seconds for making requests fetching endpoints information. Defaults to 60 sec. See Plugins::EndpointDiscovery for more details.

  • :endpoint_discovery (Boolean)

    When set to true, endpoint discovery will be enabled for operations when available. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :http_continue_timeout (Float) — default: 1

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_idle_timeout (Integer) — default: 5

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_open_timeout (Integer) — default: 15

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_proxy (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_read_timeout (Integer) — default: 60

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_wire_trace (Boolean) — default: false

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the logger at. See Plugins::Logging for more details.

  • :log_formatter (Logging::LogFormatter)

    The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.

  • :logger (Logger) — default: nil

    The Logger instance to send log messages to. If this option is not set, logging will be disabled. See Plugins::Logging for more details.

  • :profile (String)

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.

  • :region (required, String)

    The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks AMAZON_REGION and AWS_DEFAULT_REGION. See Plugins::RegionalEndpoint for more details.

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors and auth errors from expired credentials. See Plugins::RetryErrors for more details.

  • :secret_access_key (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :session_token (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disable response data type conversions. This option is useful when you want to ensure the highest level of performance by avoiding overhead of walking request parameters and response data structures.

    When :simple_json is enabled, the request parameters hash must be formatted exactly as the DynamoDB API expects. See Plugins::Protocols::JsonRpc for more details.

  • :ssl_ca_bundle (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_directory (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_store (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_verify_peer (Boolean) — default: true

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note When response stubbing is enabled, no HTTP requests are made, and retries are disabled. See Plugins::StubResponses for more details.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.

Instance Method Details

#batch_create_partition(options = {}) ⇒ Types::BatchCreatePartitionResponse

Creates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_create_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_input_list: [ # required
    {
      values: ["ValueString"],
      last_access_time: Time.now,
      storage_descriptor: {
        columns: [
          {
            name: "NameString", # required
            type: "ColumnTypeString",
            comment: "CommentString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
        ],
        location: "LocationString",
        input_format: "FormatString",
        output_format: "FormatString",
        compressed: false,
        number_of_buckets: 1,
        serde_info: {
          name: "NameString",
          serialization_library: "NameString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
        bucket_columns: ["NameString"],
        sort_columns: [
          {
            column: "NameString", # required
            sort_order: 1, # required
          },
        ],
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        skewed_info: {
          skewed_column_names: ["NameString"],
          skewed_column_values: ["ColumnValuesString"],
          skewed_column_value_location_maps: {
            "ColumnValuesString" => "ColumnValuesString",
          },
        },
        stored_as_sub_directories: false,
        schema_reference: {
          schema_id: {
            schema_arn: "GlueResourceArn",
            schema_name: "SchemaRegistryNameString",
            registry_name: "SchemaRegistryNameString",
          },
          schema_version_id: "SchemaVersionIdString",
          schema_version_number: 1,
        },
      },
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      last_analyzed_time: Time.now,
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be created. Currently, this should be the AWS account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be created.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be created.

  • :partition_input_list (required, Array<Types::PartitionInput>)

    A list of PartitionInput structures that define the partitions to be created.

Returns:

See Also:

#batch_delete_connection(options = {}) ⇒ Types::BatchDeleteConnectionResponse

Deletes a list of connection definitions from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_connection({
  catalog_id: "CatalogIdString",
  connection_name_list: ["NameString"], # required
})

Response structure


resp.succeeded #=> Array
resp.succeeded[0] #=> String
resp.errors #=> Hash
resp.errors["NameString"].error_code #=> String
resp.errors["NameString"].error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connections reside. If none is provided, the AWS account ID is used by default.

  • :connection_name_list (required, Array<String>)

    A list of names of the connections to delete.

Returns:

See Also:

#batch_delete_partition(options = {}) ⇒ Types::BatchDeletePartitionResponse

Deletes one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_delete: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_values #=> Array
resp.errors[0].partition_values[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table that contains the partitions to be deleted.

  • :partitions_to_delete (required, Array<Types::PartitionValueList>)

    A list of PartitionInput structures that define the partitions to be deleted.

Returns:

See Also:

#batch_delete_table(options = {}) ⇒ Types::BatchDeleteTableResponse

Deletes multiple tables at once.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. AWS Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling BatchDeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  tables_to_delete: ["NameString"], # required
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the tables to delete reside. For Hive compatibility, this name is entirely lowercase.

  • :tables_to_delete (required, Array<String>)

    A list of the table to delete.

Returns:

See Also:

#batch_delete_table_version(options = {}) ⇒ Types::BatchDeleteTableVersionResponse

Deletes a specified batch of versions of a table.

Examples:

Request syntax with placeholder values


resp = client.batch_delete_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_ids: ["VersionString"], # required
})

Response structure


resp.errors #=> Array
resp.errors[0].table_name #=> String
resp.errors[0].version_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_ids (required, Array<String>)

    A list of the IDs of versions to be deleted. A VersionId is a string representation of an integer. Each version is incremented by 1.

Returns:

See Also:

#batch_get_crawlers(options = {}) ⇒ Types::BatchGetCrawlersResponse

Returns a list of resource metadata for a given list of crawler names. After calling the ListCrawlers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that uses tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_crawlers({
  crawler_names: ["NameString"], # required
})

Response structure


resp.crawlers #=> Array
resp.crawlers[0].name #=> String
resp.crawlers[0].role #=> String
resp.crawlers[0].targets.s3_targets #=> Array
resp.crawlers[0].targets.s3_targets[0].path #=> String
resp.crawlers[0].targets.s3_targets[0].exclusions #=> Array
resp.crawlers[0].targets.s3_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.s3_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets #=> Array
resp.crawlers[0].targets.jdbc_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets[0].path #=> String
resp.crawlers[0].targets.jdbc_targets[0].exclusions #=> Array
resp.crawlers[0].targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.mongo_db_targets #=> Array
resp.crawlers[0].targets.mongo_db_targets[0].connection_name #=> String
resp.crawlers[0].targets.mongo_db_targets[0].path #=> String
resp.crawlers[0].targets.mongo_db_targets[0].scan_all #=> true/false
resp.crawlers[0].targets.dynamo_db_targets #=> Array
resp.crawlers[0].targets.dynamo_db_targets[0].path #=> String
resp.crawlers[0].targets.dynamo_db_targets[0].scan_all #=> true/false
resp.crawlers[0].targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawlers[0].targets.catalog_targets #=> Array
resp.crawlers[0].targets.catalog_targets[0].database_name #=> String
resp.crawlers[0].targets.catalog_targets[0].tables #=> Array
resp.crawlers[0].targets.catalog_targets[0].tables[0] #=> String
resp.crawlers[0].database_name #=> String
resp.crawlers[0].description #=> String
resp.crawlers[0].classifiers #=> Array
resp.crawlers[0].classifiers[0] #=> String
resp.crawlers[0].recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY"
resp.crawlers[0].schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawlers[0].schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawlers[0].state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawlers[0].table_prefix #=> String
resp.crawlers[0].schedule.schedule_expression #=> String
resp.crawlers[0].schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawlers[0].crawl_elapsed_time #=> Integer
resp.crawlers[0].creation_time #=> Time
resp.crawlers[0].last_updated #=> Time
resp.crawlers[0].last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawlers[0].last_crawl.error_message #=> String
resp.crawlers[0].last_crawl.log_group #=> String
resp.crawlers[0].last_crawl.log_stream #=> String
resp.crawlers[0].last_crawl.message_prefix #=> String
resp.crawlers[0].last_crawl.start_time #=> Time
resp.crawlers[0].version #=> Integer
resp.crawlers[0].configuration #=> String
resp.crawlers[0].crawler_security_configuration #=> String
resp.crawlers_not_found #=> Array
resp.crawlers_not_found[0] #=> String

Options Hash (options):

  • :crawler_names (required, Array<String>)

    A list of crawler names, which might be the names returned from the ListCrawlers operation.

Returns:

See Also:

#batch_get_dev_endpoints(options = {}) ⇒ Types::BatchGetDevEndpointsResponse

Returns a list of resource metadata for a given list of development endpoint names. After calling the ListDevEndpoints operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that uses tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_dev_endpoints({
  dev_endpoint_names: ["GenericString"], # required
})

Response structure


resp.dev_endpoints #=> Array
resp.dev_endpoints[0].endpoint_name #=> String
resp.dev_endpoints[0].role_arn #=> String
resp.dev_endpoints[0].security_group_ids #=> Array
resp.dev_endpoints[0].security_group_ids[0] #=> String
resp.dev_endpoints[0].subnet_id #=> String
resp.dev_endpoints[0].yarn_endpoint_address #=> String
resp.dev_endpoints[0].private_address #=> String
resp.dev_endpoints[0].zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoints[0].public_address #=> String
resp.dev_endpoints[0].status #=> String
resp.dev_endpoints[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.dev_endpoints[0].glue_version #=> String
resp.dev_endpoints[0].number_of_workers #=> Integer
resp.dev_endpoints[0].number_of_nodes #=> Integer
resp.dev_endpoints[0].availability_zone #=> String
resp.dev_endpoints[0].vpc_id #=> String
resp.dev_endpoints[0].extra_python_libs_s3_path #=> String
resp.dev_endpoints[0].extra_jars_s3_path #=> String
resp.dev_endpoints[0].failure_reason #=> String
resp.dev_endpoints[0].last_update_status #=> String
resp.dev_endpoints[0].created_timestamp #=> Time
resp.dev_endpoints[0].last_modified_timestamp #=> Time
resp.dev_endpoints[0].public_key #=> String
resp.dev_endpoints[0].public_keys #=> Array
resp.dev_endpoints[0].public_keys[0] #=> String
resp.dev_endpoints[0].security_configuration #=> String
resp.dev_endpoints[0].arguments #=> Hash
resp.dev_endpoints[0].arguments["GenericString"] #=> String
resp.dev_endpoints_not_found #=> Array
resp.dev_endpoints_not_found[0] #=> String

Options Hash (options):

  • :dev_endpoint_names (required, Array<String>)

    The list of DevEndpoint names, which might be the names returned from the ListDevEndpoint operation.

Returns:

See Also:

#batch_get_jobs(options = {}) ⇒ Types::BatchGetJobsResponse

Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that uses tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_jobs({
  job_names: ["NameString"], # required
})

Response structure


resp.jobs #=> Array
resp.jobs[0].name #=> String
resp.jobs[0].description #=> String
resp.jobs[0].log_uri #=> String
resp.jobs[0].role #=> String
resp.jobs[0].created_on #=> Time
resp.jobs[0].last_modified_on #=> Time
resp.jobs[0].execution_property.max_concurrent_runs #=> Integer
resp.jobs[0].command.name #=> String
resp.jobs[0].command.script_location #=> String
resp.jobs[0].command.python_version #=> String
resp.jobs[0].default_arguments #=> Hash
resp.jobs[0].default_arguments["GenericString"] #=> String
resp.jobs[0].non_overridable_arguments #=> Hash
resp.jobs[0].non_overridable_arguments["GenericString"] #=> String
resp.jobs[0].connections.connections #=> Array
resp.jobs[0].connections.connections[0] #=> String
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].allocated_capacity #=> Integer
resp.jobs[0].timeout #=> Integer
resp.jobs[0].max_capacity #=> Float
resp.jobs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.jobs[0].number_of_workers #=> Integer
resp.jobs[0].security_configuration #=> String
resp.jobs[0].notification_property.notify_delay_after #=> Integer
resp.jobs[0].glue_version #=> String
resp.jobs_not_found #=> Array
resp.jobs_not_found[0] #=> String

Options Hash (options):

  • :job_names (required, Array<String>)

    A list of job names, which might be the names returned from the ListJobs operation.

Returns:

See Also:

#batch_get_partition(options = {}) ⇒ Types::BatchGetPartitionResponse

Retrieves partitions in a batch request.

Examples:

Request syntax with placeholder values


resp = client.batch_get_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partitions_to_get: [ # required
    {
      values: ["ValueString"], # required
    },
  ],
})

Response structure


resp.partitions #=> Array
resp.partitions[0].values #=> Array
resp.partitions[0].values[0] #=> String
resp.partitions[0].database_name #=> String
resp.partitions[0].table_name #=> String
resp.partitions[0].creation_time #=> Time
resp.partitions[0].last_access_time #=> Time
resp.partitions[0].storage_descriptor.columns #=> Array
resp.partitions[0].storage_descriptor.columns[0].name #=> String
resp.partitions[0].storage_descriptor.columns[0].type #=> String
resp.partitions[0].storage_descriptor.columns[0].comment #=> String
resp.partitions[0].storage_descriptor.columns[0].parameters #=> Hash
resp.partitions[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.location #=> String
resp.partitions[0].storage_descriptor.input_format #=> String
resp.partitions[0].storage_descriptor.output_format #=> String
resp.partitions[0].storage_descriptor.compressed #=> true/false
resp.partitions[0].storage_descriptor.number_of_buckets #=> Integer
resp.partitions[0].storage_descriptor.serde_info.name #=> String
resp.partitions[0].storage_descriptor.serde_info.serialization_library #=> String
resp.partitions[0].storage_descriptor.serde_info.parameters #=> Hash
resp.partitions[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.bucket_columns #=> Array
resp.partitions[0].storage_descriptor.bucket_columns[0] #=> String
resp.partitions[0].storage_descriptor.sort_columns #=> Array
resp.partitions[0].storage_descriptor.sort_columns[0].column #=> String
resp.partitions[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partitions[0].storage_descriptor.parameters #=> Hash
resp.partitions[0].storage_descriptor.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partitions[0].storage_descriptor.stored_as_sub_directories #=> true/false
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partitions[0].parameters #=> Hash
resp.partitions[0].parameters["KeyString"] #=> String
resp.partitions[0].last_analyzed_time #=> Time
resp.partitions[0].catalog_id #=> String
resp.unprocessed_keys #=> Array
resp.unprocessed_keys[0].values #=> Array
resp.unprocessed_keys[0].values[0] #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :partitions_to_get (required, Array<Types::PartitionValueList>)

    A list of partition values identifying the partitions to retrieve.

Returns:

See Also:

#batch_get_triggers(options = {}) ⇒ Types::BatchGetTriggersResponse

Returns a list of resource metadata for a given list of trigger names. After calling the ListTriggers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that uses tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_triggers({
  trigger_names: ["NameString"], # required
})

Response structure


resp.triggers #=> Array
resp.triggers[0].name #=> String
resp.triggers[0].workflow_name #=> String
resp.triggers[0].id #=> String
resp.triggers[0].type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.triggers[0].state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.triggers[0].description #=> String
resp.triggers[0].schedule #=> String
resp.triggers[0].actions #=> Array
resp.triggers[0].actions[0].job_name #=> String
resp.triggers[0].actions[0].arguments #=> Hash
resp.triggers[0].actions[0].arguments["GenericString"] #=> String
resp.triggers[0].actions[0].timeout #=> Integer
resp.triggers[0].actions[0].security_configuration #=> String
resp.triggers[0].actions[0].notification_property.notify_delay_after #=> Integer
resp.triggers[0].actions[0].crawler_name #=> String
resp.triggers[0].predicate.logical #=> String, one of "AND", "ANY"
resp.triggers[0].predicate.conditions #=> Array
resp.triggers[0].predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.triggers[0].predicate.conditions[0].job_name #=> String
resp.triggers[0].predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.triggers[0].predicate.conditions[0].crawler_name #=> String
resp.triggers[0].predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.triggers_not_found #=> Array
resp.triggers_not_found[0] #=> String

Options Hash (options):

  • :trigger_names (required, Array<String>)

    A list of trigger names, which may be the names returned from the ListTriggers operation.

Returns:

See Also:

#batch_get_workflows(options = {}) ⇒ Types::BatchGetWorkflowsResponse

Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that uses tags.

Examples:

Request syntax with placeholder values


resp = client.batch_get_workflows({
  names: ["NameString"], # required
  include_graph: false,
})

Response structure


resp.workflows #=> Array
resp.workflows[0].name #=> String
resp.workflows[0].description #=> String
resp.workflows[0].default_run_properties #=> Hash
resp.workflows[0].default_run_properties["IdString"] #=> String
resp.workflows[0].created_on #=> Time
resp.workflows[0].last_modified_on #=> Time
resp.workflows[0].last_run.name #=> String
resp.workflows[0].last_run.workflow_run_id #=> String
resp.workflows[0].last_run.previous_run_id #=> String
resp.workflows[0].last_run.workflow_run_properties #=> Hash
resp.workflows[0].last_run.workflow_run_properties["IdString"] #=> String
resp.workflows[0].last_run.started_on #=> Time
resp.workflows[0].last_run.completed_on #=> Time
resp.workflows[0].last_run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.workflows[0].last_run.error_message #=> String
resp.workflows[0].last_run.statistics.total_actions #=> Integer
resp.workflows[0].last_run.statistics.timeout_actions #=> Integer
resp.workflows[0].last_run.statistics.failed_actions #=> Integer
resp.workflows[0].last_run.statistics.stopped_actions #=> Integer
resp.workflows[0].last_run.statistics.succeeded_actions #=> Integer
resp.workflows[0].last_run.statistics.running_actions #=> Integer
resp.workflows[0].last_run.graph.nodes #=> Array
resp.workflows[0].last_run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].last_run.graph.nodes[0].name #=> String
resp.workflows[0].last_run.graph.nodes[0].unique_id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].last_run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].last_run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].last_run.graph.edges #=> Array
resp.workflows[0].last_run.graph.edges[0].source_id #=> String
resp.workflows[0].last_run.graph.edges[0].destination_id #=> String
resp.workflows[0].graph.nodes #=> Array
resp.workflows[0].graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflows[0].graph.nodes[0].name #=> String
resp.workflows[0].graph.nodes[0].unique_id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflows[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflows[0].graph.nodes[0].job_details.job_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflows[0].graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls #=> Array
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflows[0].graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflows[0].graph.edges #=> Array
resp.workflows[0].graph.edges[0].source_id #=> String
resp.workflows[0].graph.edges[0].destination_id #=> String
resp.workflows[0].max_concurrent_runs #=> Integer
resp.missing_workflows #=> Array
resp.missing_workflows[0] #=> String

Options Hash (options):

  • :names (required, Array<String>)

    A list of workflow names, which may be the names returned from the ListWorkflows operation.

  • :include_graph (Boolean)

    Specifies whether to include a graph when returning the workflow resource metadata.

Returns:

See Also:

#batch_stop_job_run(options = {}) ⇒ Types::BatchStopJobRunResponse

Stops one or more job runs for a specified job definition.

Examples:

Request syntax with placeholder values


resp = client.batch_stop_job_run({
  job_name: "NameString", # required
  job_run_ids: ["IdString"], # required
})

Response structure


resp.successful_submissions #=> Array
resp.successful_submissions[0].job_name #=> String
resp.successful_submissions[0].job_run_id #=> String
resp.errors #=> Array
resp.errors[0].job_name #=> String
resp.errors[0].job_run_id #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition for which to stop job runs.

  • :job_run_ids (required, Array<String>)

    A list of the JobRunIds that should be stopped for that job definition.

Returns:

See Also:

#batch_update_partition(options = {}) ⇒ Types::BatchUpdatePartitionResponse

Updates one or more partitions in a batch operation.

Examples:

Request syntax with placeholder values


resp = client.batch_update_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  entries: [ # required
    {
      partition_value_list: ["ValueString"], # required
      partition_input: { # required
        values: ["ValueString"],
        last_access_time: Time.now,
        storage_descriptor: {
          columns: [
            {
              name: "NameString", # required
              type: "ColumnTypeString",
              comment: "CommentString",
              parameters: {
                "KeyString" => "ParametersMapValue",
              },
            },
          ],
          location: "LocationString",
          input_format: "FormatString",
          output_format: "FormatString",
          compressed: false,
          number_of_buckets: 1,
          serde_info: {
            name: "NameString",
            serialization_library: "NameString",
            parameters: {
              "KeyString" => "ParametersMapValue",
            },
          },
          bucket_columns: ["NameString"],
          sort_columns: [
            {
              column: "NameString", # required
              sort_order: 1, # required
            },
          ],
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
          skewed_info: {
            skewed_column_names: ["NameString"],
            skewed_column_values: ["ColumnValuesString"],
            skewed_column_value_location_maps: {
              "ColumnValuesString" => "ColumnValuesString",
            },
          },
          stored_as_sub_directories: false,
          schema_reference: {
            schema_id: {
              schema_arn: "GlueResourceArn",
              schema_name: "SchemaRegistryNameString",
              registry_name: "SchemaRegistryNameString",
            },
            schema_version_id: "SchemaVersionIdString",
            schema_version_number: 1,
          },
        },
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
        last_analyzed_time: Time.now,
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].partition_value_list #=> Array
resp.errors[0].partition_value_list[0] #=> String
resp.errors[0].error_detail.error_code #=> String
resp.errors[0].error_detail.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the catalog in which the partition is to be updated. Currently, this should be the AWS account ID.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be updated.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be updated.

  • :entries (required, Array<Types::BatchUpdatePartitionRequestEntry>)

    A list of up to 100 BatchUpdatePartitionRequestEntry objects to update.

Returns:

See Also:

#cancel_ml_task_run(options = {}) ⇒ Types::CancelMLTaskRunResponse

Cancels (stops) a task run. Machine learning task runs are asynchronous tasks that AWS Glue runs on your behalf as part of various machine learning workflows. You can cancel a machine learning task run at any time by calling CancelMLTaskRun with a task run's parent transform's TransformID and the task run's TaskRunId.

Examples:

Request syntax with placeholder values


resp = client.cancel_ml_task_run({
  transform_id: "HashString", # required
  task_run_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.task_run_id #=> String
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :task_run_id (required, String)

    A unique identifier for the task run.

Returns:

See Also:

#check_schema_version_validity(options = {}) ⇒ Types::CheckSchemaVersionValidityResponse

Validates the supplied schema. This call has no side effects, it simply validates using the supplied schema using DataFormat as the format. Since it does not take a schema set name, no compatibility checks are performed.

Examples:

Request syntax with placeholder values


resp = client.check_schema_version_validity({
  data_format: "AVRO", # required, accepts AVRO
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.valid #=> true/false
resp.error #=> String

Options Hash (options):

  • :data_format (required, String)

    The data format of the schema definition. Currently only AVRO is supported.

  • :schema_definition (required, String)

    The definition of the schema that has to be validated.

Returns:

See Also:

#create_classifier(options = {}) ⇒ Struct

Creates a classifier in the user's account. This can be a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field of the request is present.

Examples:

Request syntax with placeholder values


resp = client.create_classifier({
  grok_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    grok_pattern: "GrokPattern", # required
    custom_patterns: "CustomPatterns",
  },
  xml_classifier: {
    classification: "Classification", # required
    name: "NameString", # required
    row_tag: "RowTag",
  },
  json_classifier: {
    name: "NameString", # required
    json_path: "JsonPath", # required
  },
  csv_classifier: {
    name: "NameString", # required
    delimiter: "CsvColumnDelimiter",
    quote_symbol: "CsvQuoteSymbol",
    contains_header: "UNKNOWN", # accepts UNKNOWN, PRESENT, ABSENT
    header: ["NameString"],
    disable_value_trimming: false,
    allow_single_column: false,
  },
})

Options Hash (options):

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_connection(options = {}) ⇒ Struct

Creates a connection definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_connection({
  catalog_id: "CatalogIdString",
  connection_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK
    match_criteria: ["NameString"],
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    physical_connection_requirements: {
      subnet_id: "NameString",
      security_group_id_list: ["NameString"],
      availability_zone: "NameString",
    },
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the connection. If none is provided, the AWS account ID is used by default.

  • :connection_input (required, Types::ConnectionInput)

    A ConnectionInput object defining the connection to create.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_crawler(options = {}) ⇒ Struct

Creates a new crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in the s3Targets field, the jdbcTargets field, or the DynamoDBTargets field.

Examples:

Request syntax with placeholder values


resp = client.create_crawler({
  name: "NameString", # required
  role: "Role", # required
  database_name: "DatabaseName",
  description: "DescriptionString",
  targets: { # required
    s3_targets: [
      {
        path: "Path",
        exclusions: ["Path"],
        connection_name: "ConnectionName",
      },
    ],
    jdbc_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        exclusions: ["Path"],
      },
    ],
    mongo_db_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        scan_all: false,
      },
    ],
    dynamo_db_targets: [
      {
        path: "Path",
        scan_all: false,
        scan_rate: 1.0,
      },
    ],
    catalog_targets: [
      {
        database_name: "NameString", # required
        tables: ["NameString"], # required
      },
    ],
  },
  schedule: "CronExpression",
  classifiers: ["NameString"],
  table_prefix: "TablePrefix",
  schema_change_policy: {
    update_behavior: "LOG", # accepts LOG, UPDATE_IN_DATABASE
    delete_behavior: "LOG", # accepts LOG, DELETE_FROM_DATABASE, DEPRECATE_IN_DATABASE
  },
  recrawl_policy: {
    recrawl_behavior: "CRAWL_EVERYTHING", # accepts CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY
  },
  configuration: "CrawlerConfiguration",
  crawler_security_configuration: "CrawlerSecurityConfiguration",
  tags: {
    "TagKey" => "TagValue",
  },
})

Options Hash (options):

  • :name (required, String)

    Name of the new crawler.

  • :role (required, String)

    The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.

  • :database_name (String)

    The AWS Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*.

  • :description (String)

    A description of the new crawler.

  • :targets (required, Types::CrawlerTargets)

    A list of collection of targets to crawl.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers. For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

  • :classifiers (Array<String>)

    A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • :table_prefix (String)

    The table prefix used for catalog tables that are created.

  • :schema_change_policy (Types::SchemaChangePolicy)

    The policy for the crawler\'s update and deletion behavior.

  • :recrawl_policy (Types::RecrawlPolicy)

    A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • :configuration (String)

    Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler\'s behavior. For more information, see Configuring a Crawler.

  • :crawler_security_configuration (String)

    The name of the SecurityConfiguration structure to be used by this crawler.

  • :tags (Hash<String,String>)

    The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_database(options = {}) ⇒ Struct

Creates a new database in a Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_database({
  catalog_id: "CatalogIdString",
  database_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    location_uri: "URI",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    target_database: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
    },
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the database. If none is provided, the AWS account ID is used by default.

  • :database_input (required, Types::DatabaseInput)

    The metadata for the database.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_dev_endpoint(options = {}) ⇒ Types::CreateDevEndpointResponse

Creates a new development endpoint.

Examples:

Request syntax with placeholder values


resp = client.create_dev_endpoint({
  endpoint_name: "GenericString", # required
  role_arn: "RoleArn", # required
  security_group_ids: ["GenericString"],
  subnet_id: "GenericString",
  public_key: "GenericString",
  public_keys: ["GenericString"],
  number_of_nodes: 1,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X
  glue_version: "GlueVersionString",
  number_of_workers: 1,
  extra_python_libs_s3_path: "GenericString",
  extra_jars_s3_path: "GenericString",
  security_configuration: "NameString",
  tags: {
    "TagKey" => "TagValue",
  },
  arguments: {
    "GenericString" => "GenericString",
  },
})

Response structure


resp.endpoint_name #=> String
resp.status #=> String
resp.security_group_ids #=> Array
resp.security_group_ids[0] #=> String
resp.subnet_id #=> String
resp.role_arn #=> String
resp.yarn_endpoint_address #=> String
resp.zeppelin_remote_spark_interpreter_port #=> Integer
resp.number_of_nodes #=> Integer
resp.worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.glue_version #=> String
resp.number_of_workers #=> Integer
resp.availability_zone #=> String
resp.vpc_id #=> String
resp.extra_python_libs_s3_path #=> String
resp.extra_jars_s3_path #=> String
resp.failure_reason #=> String
resp.security_configuration #=> String
resp.created_timestamp #=> Time
resp.arguments #=> Hash
resp.arguments["GenericString"] #=> String

Options Hash (options):

  • :endpoint_name (required, String)

    The name to be assigned to the new DevEndpoint.

  • :role_arn (required, String)

    The IAM role for the DevEndpoint.

  • :security_group_ids (Array<String>)

    Security group IDs for the security groups to be used by the new DevEndpoint.

  • :subnet_id (String)

    The subnet ID for the new DevEndpoint to use.

  • :public_key (String)

    The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.

  • :public_keys (Array<String>)

    A list of public keys to be used by the development endpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.

    If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.

  • :number_of_nodes (Integer)

    The number of AWS Glue Data Processing Units (DPUs) to allocate to this DevEndpoint.

  • :worker_type (String)

    The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

    Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.

  • :glue_version (String)

    Glue version determines the versions of Apache Spark and Python that AWS Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.

    For more information about the available AWS Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.

    Development endpoints that are created without specifying a Glue version default to Glue 0.9.

    You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated to the development endpoint.

    The maximum number of workers you can define are 299 for G.1X, and 149 for G.2X.

  • :extra_python_libs_s3_path (String)

    The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint. Multiple values must be complete paths separated by a comma.

    You can only use pure Python libraries with a DevEndpoint. Libraries that rely on C extensions, such as the pandas Python data analysis library, are not yet supported.

  • :extra_jars_s3_path (String)

    The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this DevEndpoint.

  • :tags (Hash<String,String>)

    The tags to use with this DevEndpoint. You may use tags to limit access to the DevEndpoint. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

  • :arguments (Hash<String,String>)

    A map of arguments used to configure the DevEndpoint.

Returns:

See Also:

#create_job(options = {}) ⇒ Types::CreateJobResponse

Creates a new job definition.

Examples:

Request syntax with placeholder values


resp = client.create_job({
  name: "NameString", # required
  description: "DescriptionString",
  log_uri: "UriString",
  role: "RoleString", # required
  execution_property: {
    max_concurrent_runs: 1,
  },
  command: { # required
    name: "GenericString",
    script_location: "ScriptLocationString",
    python_version: "PythonVersionString",
  },
  default_arguments: {
    "GenericString" => "GenericString",
  },
  non_overridable_arguments: {
    "GenericString" => "GenericString",
  },
  connections: {
    connections: ["GenericString"],
  },
  max_retries: 1,
  allocated_capacity: 1,
  timeout: 1,
  max_capacity: 1.0,
  security_configuration: "NameString",
  tags: {
    "TagKey" => "TagValue",
  },
  notification_property: {
    notify_delay_after: 1,
  },
  glue_version: "GlueVersionString",
  number_of_workers: 1,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name you assign to this job definition. It must be unique in your account.

  • :description (String)

    Description of the job being defined.

  • :log_uri (String)

    This field is reserved for future use.

  • :role (required, String)

    The name or Amazon Resource Name (ARN) of the IAM role associated with this job.

  • :execution_property (Types::ExecutionProperty)

    An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.

  • :command (required, Types::JobCommand)

    The JobCommand that executes this job.

  • :default_arguments (Hash<String,String>)

    The default arguments for this job.

    You can specify arguments here that your own job-execution script consumes, as well as arguments that AWS Glue itself consumes.

    For information about how to specify and consume your own Job arguments, see the Calling AWS Glue APIs in Python topic in the developer guide.

    For information about the key-value pairs that AWS Glue consumes to set up your job, see the Special Parameters Used by AWS Glue topic in the developer guide.

  • :non_overridable_arguments (Hash<String,String>)

    Non-overridable arguments for this job, specified as name-value pairs.

  • :connections (Types::ConnectionsList)

    The connections used for this job.

  • :max_retries (Integer)

    The maximum number of times to retry this job if it fails.

  • :allocated_capacity (Integer)

    This parameter is deprecated. Use MaxCapacity instead.

    The number of AWS Glue data processing units (DPUs) to allocate to this Job. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

  • :timeout (Integer)

    The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :max_capacity (Float)

    The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

    Do not set Max Capacity if using WorkerType and NumberOfWorkers.

    The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:

    • When you specify a Python shell job (JobCommand.Name=\"pythonshell\"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.

    • When you specify an Apache Spark ETL job (JobCommand.Name=\"glueetl\") or Apache Spark streaming ETL job (JobCommand.Name=\"gluestreaming\"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this job.

  • :tags (Hash<String,String>)

    The tags to use with this job. You may use tags to limit access to the job. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

  • :notification_property (Types::NotificationProperty)

    Specifies configuration properties of a job notification.

  • :glue_version (String)

    Glue version determines the versions of Apache Spark and Python that AWS Glue supports. The Python version indicates the version supported for jobs of type Spark.

    For more information about the available AWS Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.

    Jobs that are created without specifying a Glue version default to Glue 0.9.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when a job runs.

    The maximum number of workers you can define are 299 for G.1X, and 149 for G.2X.

  • :worker_type (String)

    The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

    • For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.

Returns:

See Also:

#create_ml_transform(options = {}) ⇒ Types::CreateMLTransformResponse

Creates an AWS Glue machine learning transform. This operation creates the transform and all the necessary parameters to train it.

Call this operation as the first step in the process of using a machine learning transform (such as the FindMatches transform) for deduplicating data. You can provide an optional Description, in addition to the parameters that you want to use for your algorithm.

You must also specify certain parameters for the tasks that AWS Glue runs on your behalf as part of learning from your data and creating a high-quality machine learning transform. These parameters include Role, and optionally, AllocatedCapacity, Timeout, and MaxRetries. For more information, see Jobs.

Examples:

Request syntax with placeholder values


resp = client.create_ml_transform({
  name: "NameString", # required
  description: "DescriptionString",
  input_record_tables: [ # required
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
      catalog_id: "NameString",
      connection_name: "NameString",
    },
  ],
  parameters: { # required
    transform_type: "FIND_MATCHES", # required, accepts FIND_MATCHES
    find_matches_parameters: {
      primary_key_column_name: "ColumnNameString",
      precision_recall_tradeoff: 1.0,
      accuracy_cost_tradeoff: 1.0,
      enforce_provided_labels: false,
    },
  },
  role: "RoleString", # required
  glue_version: "GlueVersionString",
  max_capacity: 1.0,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X
  number_of_workers: 1,
  timeout: 1,
  max_retries: 1,
  tags: {
    "TagKey" => "TagValue",
  },
  transform_encryption: {
    ml_user_data_encryption: {
      ml_user_data_encryption_mode: "DISABLED", # required, accepts DISABLED, SSE-KMS
      kms_key_id: "NameString",
    },
    task_run_security_configuration_name: "NameString",
  },
})

Response structure


resp.transform_id #=> String

Options Hash (options):

  • :name (required, String)

    The unique name that you give the transform when you create it.

  • :description (String)

    A description of the machine learning transform that is being defined. The default is an empty string.

  • :input_record_tables (required, Array<Types::GlueTable>)

    A list of AWS Glue table definitions used by the transform.

  • :parameters (required, Types::TransformParameters)

    The algorithmic parameters that are specific to the transform type used. Conditionally dependent on the transform type.

  • :role (required, String)

    The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both AWS Glue service role permissions to AWS Glue resources, and Amazon S3 permissions required by the transform.

    • This role needs AWS Glue service role permissions to allow access to resources in AWS Glue. See Attach a Policy to IAM Users That Access AWS Glue.

    • This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.

  • :glue_version (String)

    This value determines which version of AWS Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see AWS Glue Versions in the developer guide.

  • :max_capacity (Float)

    The number of AWS Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

    MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.

    • If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.

    • If MaxCapacity is set then neither NumberOfWorkers or WorkerType can be set.

    • If WorkerType is set, then NumberOfWorkers is required (and vice versa).

    • MaxCapacity and NumberOfWorkers must both be at least 1.

    When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.

    When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.

  • :worker_type (String)

    The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.

    • For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.

    MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.

    • If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.

    • If MaxCapacity is set then neither NumberOfWorkers or WorkerType can be set.

    • If WorkerType is set, then NumberOfWorkers is required (and vice versa).

    • MaxCapacity and NumberOfWorkers must both be at least 1.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when this task runs.

    If WorkerType is set, then NumberOfWorkers is required (and vice versa).

  • :timeout (Integer)

    The timeout of the task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :max_retries (Integer)

    The maximum number of times to retry a task for this transform after a task run fails.

  • :tags (Hash<String,String>)

    The tags to use with this machine learning transform. You may use tags to limit access to the machine learning transform. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

  • :transform_encryption (Types::TransformEncryption)

    The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.

Returns:

See Also:

#create_partition(options = {}) ⇒ Struct

Creates a new partition.

Examples:

Request syntax with placeholder values


resp = client.create_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_input: { # required
    values: ["ValueString"],
    last_access_time: Time.now,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    last_analyzed_time: Time.now,
  },
})

Options Hash (options):

  • :catalog_id (String)

    The AWS account ID of the catalog in which the partition is to be created.

  • :database_name (required, String)

    The name of the metadata database in which the partition is to be created.

  • :table_name (required, String)

    The name of the metadata table in which the partition is to be created.

  • :partition_input (required, Types::PartitionInput)

    A PartitionInput structure defining the partition to be created.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_registry(options = {}) ⇒ Types::CreateRegistryResponse

Creates a new registry which may be used to hold a collection of schemas.

Examples:

Request syntax with placeholder values


resp = client.create_registry({
  registry_name: "SchemaRegistryNameString", # required
  description: "DescriptionString",
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.registry_arn #=> String
resp.registry_name #=> String
resp.description #=> String
resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Options Hash (options):

  • :registry_name (required, String)

    Name of the registry to be created of max length of 255, and may only contain letters, numbers, hyphen, underscore, dollar sign, or hash mark. No whitespace.

  • :description (String)

    A description of the registry. If description is not provided, there will not be any default value for this.

  • :tags (Hash<String,String>)

    AWS tags that contain a key value pair and may be searched by console, command line, or API.

Returns:

See Also:

#create_schema(options = {}) ⇒ Types::CreateSchemaResponse

Creates a new schema set and registers the schema definition. Returns an error if the schema set already exists without actually registering the version.

When the schema set is created, a version checkpoint will be set to the first version. Compatibility mode "DISABLED" restricts any additional schema versions from being added after the first schema version. For all other compatibility modes, validation of compatibility settings will be applied only from the second version onwards when the RegisterSchemaVersion API is used.

When this API is called without a RegistryId, this will create an entry for a "default-registry" in the registry database tables, if it is not already present.

Examples:

Request syntax with placeholder values


resp = client.create_schema({
  registry_id: {
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  schema_name: "SchemaRegistryNameString", # required
  data_format: "AVRO", # required, accepts AVRO
  compatibility: "NONE", # accepts NONE, DISABLED, BACKWARD, BACKWARD_ALL, FORWARD, FORWARD_ALL, FULL, FULL_ALL
  description: "DescriptionString",
  tags: {
    "TagKey" => "TagValue",
  },
  schema_definition: "SchemaDefinitionString",
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.schema_name #=> String
resp.schema_arn #=> String
resp.description #=> String
resp.data_format #=> String, one of "AVRO"
resp.compatibility #=> String, one of "NONE", "DISABLED", "BACKWARD", "BACKWARD_ALL", "FORWARD", "FORWARD_ALL", "FULL", "FULL_ALL"
resp.schema_checkpoint #=> Integer
resp.latest_schema_version #=> Integer
resp.next_schema_version #=> Integer
resp.schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
resp.schema_version_id #=> String
resp.schema_version_status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"

Options Hash (options):

  • :registry_id (Types::RegistryId)

    This is a wrapper shape to contain the registry identity fields. If this is not provided, the default registry will be used. The ARN format for the same will be: arn:aws:glue:us-east-2:<customer id>:registry/default-registry:random-5-letter-id.

  • :schema_name (required, String)

    Name of the schema to be created of max length of 255, and may only contain letters, numbers, hyphen, underscore, dollar sign, or hash mark. No whitespace.

  • :data_format (required, String)

    The data format of the schema definition. Currently only AVRO is supported.

  • :compatibility (String)

    The compatibility mode of the schema. The possible values are:

    • NONE: No compatibility mode applies. You can use this choice in development scenarios or if you do not know the compatibility mode that you want to apply to schemas. Any new version added will be accepted without undergoing a compatibility check.

    • DISABLED: This compatibility choice prevents versioning for a particular schema. You can use this choice to prevent future versioning of a schema.

    • BACKWARD: This compatibility choice is recommended as it allows data receivers to read both the current and one previous schema version. This means that for instance, a new schema version cannot drop data fields or change the type of these fields, so they can\'t be read by readers using the previous version.

    • BACKWARD_ALL: This compatibility choice allows data receivers to read both the current and all previous schema versions. You can use this choice when you need to delete fields or add optional fields, and check compatibility against all previous schema versions.

    • FORWARD: This compatibility choice allows data receivers to read both the current and one next schema version, but not necessarily later versions. You can use this choice when you need to add fields or delete optional fields, but only check compatibility against the last schema version.

    • FORWARD_ALL: This compatibility choice allows data receivers to read written by producers of any new registered schema. You can use this choice when you need to add fields or delete optional fields, and check compatibility against all previous schema versions.

    • FULL: This compatibility choice allows data receivers to read data written by producers using the previous or next version of the schema, but not necessarily earlier or later versions. You can use this choice when you need to add or remove optional fields, but only check compatibility against the last schema version.

    • FULL_ALL: This compatibility choice allows data receivers to read data written by producers using all previous schema versions. You can use this choice when you need to add or remove optional fields, and check compatibility against all previous schema versions.

  • :description (String)

    An optional description of the schema. If description is not provided, there will not be any automatic default value for this.

  • :tags (Hash<String,String>)

    AWS tags that contain a key value pair and may be searched by console, command line, or API. If specified, follows the AWS tags-on-create pattern.

  • :schema_definition (String)

    The schema definition using the DataFormat setting for SchemaName.

Returns:

See Also:

#create_script(options = {}) ⇒ Types::CreateScriptResponse

Transforms a directed acyclic graph (DAG) into code.

Examples:

Request syntax with placeholder values


resp = client.create_script({
  dag_nodes: [
    {
      id: "CodeGenIdentifier", # required
      node_type: "CodeGenNodeType", # required
      args: [ # required
        {
          name: "CodeGenArgName", # required
          value: "CodeGenArgValue", # required
          param: false,
        },
      ],
      line_number: 1,
    },
  ],
  dag_edges: [
    {
      source: "CodeGenIdentifier", # required
      target: "CodeGenIdentifier", # required
      target_parameter: "CodeGenArgName",
    },
  ],
  language: "PYTHON", # accepts PYTHON, SCALA
})

Response structure


resp.python_script #=> String
resp.scala_code #=> String

Options Hash (options):

  • :dag_nodes (Array<Types::CodeGenNode>)

    A list of the nodes in the DAG.

  • :dag_edges (Array<Types::CodeGenEdge>)

    A list of the edges in the DAG.

  • :language (String)

    The programming language of the resulting code from the DAG.

Returns:

See Also:

#create_security_configuration(options = {}) ⇒ Types::CreateSecurityConfigurationResponse

Creates a new security configuration. A security configuration is a set of security properties that can be used by AWS Glue. You can use a security configuration to encrypt data at rest. For information about using security configurations in AWS Glue, see Encrypting Data Written by Crawlers, Jobs, and Development Endpoints.

Examples:

Request syntax with placeholder values


resp = client.create_security_configuration({
  name: "NameString", # required
  encryption_configuration: { # required
    s3_encryption: [
      {
        s3_encryption_mode: "DISABLED", # accepts DISABLED, SSE-KMS, SSE-S3
        kms_key_arn: "KmsKeyArn",
      },
    ],
    cloud_watch_encryption: {
      cloud_watch_encryption_mode: "DISABLED", # accepts DISABLED, SSE-KMS
      kms_key_arn: "KmsKeyArn",
    },
    job_bookmarks_encryption: {
      job_bookmarks_encryption_mode: "DISABLED", # accepts DISABLED, CSE-KMS
      kms_key_arn: "KmsKeyArn",
    },
  },
})

Response structure


resp.name #=> String
resp.created_timestamp #=> Time

Options Hash (options):

  • :name (required, String)

    The name for the new security configuration.

  • :encryption_configuration (required, Types::EncryptionConfiguration)

    The encryption configuration for the new security configuration.

Returns:

See Also:

#create_table(options = {}) ⇒ Struct

Creates a new table definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    owner: "NameString",
    last_access_time: Time.now,
    last_analyzed_time: Time.now,
    retention: 1,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    partition_keys: [
      {
        name: "NameString", # required
        type: "ColumnTypeString",
        comment: "CommentString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
    ],
    view_original_text: "ViewTextString",
    view_expanded_text: "ViewTextString",
    table_type: "TableTypeString",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_table: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      name: "NameString",
    },
  },
  partition_indexes: [
    {
      keys: ["NameString"], # required
      index_name: "NameString", # required
    },
  ],
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the Table. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The catalog database in which to create the new table. For Hive compatibility, this name is entirely lowercase.

  • :table_input (required, Types::TableInput)

    The TableInput object that defines the metadata table to create in the catalog.

  • :partition_indexes (Array<Types::PartitionIndex>)

    A list of partition indexes, PartitionIndex structures, to create in the table.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_trigger(options = {}) ⇒ Types::CreateTriggerResponse

Creates a new trigger.

Examples:

Request syntax with placeholder values


resp = client.create_trigger({
  name: "NameString", # required
  workflow_name: "NameString",
  type: "SCHEDULED", # required, accepts SCHEDULED, CONDITIONAL, ON_DEMAND
  schedule: "GenericString",
  predicate: {
    logical: "AND", # accepts AND, ANY
    conditions: [
      {
        logical_operator: "EQUALS", # accepts EQUALS
        job_name: "NameString",
        state: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT
        crawler_name: "NameString",
        crawl_state: "RUNNING", # accepts RUNNING, CANCELLING, CANCELLED, SUCCEEDED, FAILED
      },
    ],
  },
  actions: [ # required
    {
      job_name: "NameString",
      arguments: {
        "GenericString" => "GenericString",
      },
      timeout: 1,
      security_configuration: "NameString",
      notification_property: {
        notify_delay_after: 1,
      },
      crawler_name: "NameString",
    },
  ],
  description: "DescriptionString",
  start_on_creation: false,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name of the trigger.

  • :workflow_name (String)

    The name of the workflow associated with the trigger.

  • :type (required, String)

    The type of the new trigger.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers. For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

    This field is required when the trigger type is SCHEDULED.

  • :predicate (Types::Predicate)

    A predicate to specify when the new trigger should fire.

    This field is required when the trigger type is CONDITIONAL.

  • :actions (required, Array<Types::Action>)

    The actions initiated by this trigger when it fires.

  • :description (String)

    A description of the new trigger.

  • :start_on_creation (Boolean)

    Set to true to start SCHEDULED and CONDITIONAL triggers when created. True is not supported for ON_DEMAND triggers.

  • :tags (Hash<String,String>)

    The tags to use with this trigger. You may use tags to limit access to the trigger. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

Returns:

See Also:

#create_user_defined_function(options = {}) ⇒ Struct

Creates a new function definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.create_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_input: { # required
    function_name: "NameString",
    class_name: "NameString",
    owner_name: "NameString",
    owner_type: "USER", # accepts USER, ROLE, GROUP
    resource_uris: [
      {
        resource_type: "JAR", # accepts JAR, FILE, ARCHIVE
        uri: "URI",
      },
    ],
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which to create the function. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which to create the function.

  • :function_input (required, Types::UserDefinedFunctionInput)

    A FunctionInput object that defines the function to create in the Data Catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_workflow(options = {}) ⇒ Types::CreateWorkflowResponse

Creates a new workflow.

Examples:

Request syntax with placeholder values


resp = client.create_workflow({
  name: "NameString", # required
  description: "GenericString",
  default_run_properties: {
    "IdString" => "GenericString",
  },
  tags: {
    "TagKey" => "TagValue",
  },
  max_concurrent_runs: 1,
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name to be assigned to the workflow. It should be unique within your account.

  • :description (String)

    A description of the workflow.

  • :default_run_properties (Hash<String,String>)

    A collection of properties to be used as part of each execution of the workflow.

  • :tags (Hash<String,String>)

    The tags to be used with this workflow.

  • :max_concurrent_runs (Integer)

    You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.

Returns:

See Also:

#delete_classifier(options = {}) ⇒ Struct

Removes a classifier from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_classifier({
  name: "NameString", # required
})

Options Hash (options):

  • :name (required, String)

    Name of the classifier to remove.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_column_statistics_for_partition(options = {}) ⇒ Struct

Delete the partition column statistics of a column.

The Identity and Access Management (IAM) permission required for this operation is DeletePartition.

Examples:

Request syntax with placeholder values


resp = client.delete_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_name (required, String)

    Name of the column.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_column_statistics_for_table(options = {}) ⇒ Struct

Retrieves table statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is DeleteTable.

Examples:

Request syntax with placeholder values


resp = client.delete_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :column_name (required, String)

    The name of the column.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_connection(options = {}) ⇒ Struct

Deletes a connection from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_connection({
  catalog_id: "CatalogIdString",
  connection_name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the AWS account ID is used by default.

  • :connection_name (required, String)

    The name of the connection to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_crawler(options = {}) ⇒ Struct

Removes a specified crawler from the AWS Glue Data Catalog, unless the crawler state is RUNNING.

Examples:

Request syntax with placeholder values


resp = client.delete_crawler({
  name: "NameString", # required
})

Options Hash (options):

  • :name (required, String)

    The name of the crawler to remove.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_database(options = {}) ⇒ Struct

Removes a specified database from a Data Catalog.

After completing this operation, you no longer have access to the tables (and all table versions and partitions that might belong to the tables) and the user-defined functions in the deleted database. AWS Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling DeleteDatabase, use DeleteTableVersion or BatchDeleteTableVersion, DeletePartition or BatchDeletePartition, DeleteUserDefinedFunction, and DeleteTable or BatchDeleteTable, to delete any resources that belong to the database.

Examples:

Request syntax with placeholder values


resp = client.delete_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides. If none is provided, the AWS account ID is used by default.

  • :name (required, String)

    The name of the database to delete. For Hive compatibility, this must be all lowercase.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_dev_endpoint(options = {}) ⇒ Struct

Deletes a specified development endpoint.

Examples:

Request syntax with placeholder values


resp = client.delete_dev_endpoint({
  endpoint_name: "GenericString", # required
})

Options Hash (options):

  • :endpoint_name (required, String)

    The name of the DevEndpoint.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_job(options = {}) ⇒ Types::DeleteJobResponse

Deletes a specified job definition. If the job definition is not found, no exception is thrown.

Examples:

Request syntax with placeholder values


resp = client.delete_job({
  job_name: "NameString", # required
})

Response structure


resp.job_name #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition to delete.

Returns:

See Also:

#delete_ml_transform(options = {}) ⇒ Types::DeleteMLTransformResponse

Deletes an AWS Glue machine learning transform. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by AWS Glue. If you no longer need a transform, you can delete it by calling DeleteMLTransforms. However, any AWS Glue jobs that still reference the deleted transform will no longer succeed.

Examples:

Request syntax with placeholder values


resp = client.delete_ml_transform({
  transform_id: "HashString", # required
})

Response structure


resp.transform_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the transform to delete.

Returns:

See Also:

#delete_partition(options = {}) ⇒ Struct

Deletes a specified partition.

Examples:

Request syntax with placeholder values


resp = client.delete_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table that contains the partition to be deleted.

  • :partition_values (required, Array<String>)

    The values that define the partition.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_registry(options = {}) ⇒ Types::DeleteRegistryResponse

Delete the entire registry including schema and all of its versions. To get the status of the delete operation, you can call the GetRegistry API after the asynchronous call. Deleting a registry will disable all online operations for the registry such as the UpdateRegistry, CreateSchema, UpdateSchema, and RegisterSchemaVersion APIs.

Examples:

Request syntax with placeholder values


resp = client.delete_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.status #=> String, one of "AVAILABLE", "DELETING"

Options Hash (options):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

Returns:

See Also:

#delete_resource_policy(options = {}) ⇒ Struct

Deletes a specified policy.

Examples:

Request syntax with placeholder values


resp = client.delete_resource_policy({
  policy_hash_condition: "HashString",
  resource_arn: "GlueResourceArn",
})

Options Hash (options):

  • :policy_hash_condition (String)

    The hash value returned when this policy was set.

  • :resource_arn (String)

    The ARN of the AWS Glue resource for the resource policy to be deleted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_schema(options = {}) ⇒ Types::DeleteSchemaResponse

Deletes the entire schema set, including the schema set and all of its versions. To get the status of the delete operation, you can call GetSchema API after the asynchronous call. Deleting a registry will disable all online operations for the schema, such as the GetSchemaByDefinition, and RegisterSchemaVersion APIs.

Examples:

Request syntax with placeholder values


resp = client.delete_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.status #=> String, one of "AVAILABLE", "PENDING", "DELETING"

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

Returns:

See Also:

#delete_schema_versions(options = {}) ⇒ Types::DeleteSchemaVersionsResponse

Remove versions from the specified schema. A version number or range may be supplied. If the compatibility mode forbids deleting of a version that is necessary, such as BACKWARDS_FULL, an error is returned. Calling the GetSchemaVersions API after this call will list the status of the deleted versions.

When the range of version numbers contain check pointed version, the API will return a 409 conflict and will not proceed with the deletion. You have to remove the checkpoint first using the DeleteSchemaCheckpoint API before using this API.

You cannot use the DeleteSchemaVersions API to delete the first schema version in the schema set. The first schema version can only be deleted by the DeleteSchema API. This operation will also delete the attached SchemaVersionMetadata under the schema versions. Hard deletes will be enforced on the database.

If the compatibility mode forbids deleting of a version that is necessary, such as BACKWARDS_FULL, an error is returned.

Examples:

Request syntax with placeholder values


resp = client.delete_schema_versions({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  versions: "VersionsString", # required
})

Response structure


resp.schema_version_errors #=> Array
resp.schema_version_errors[0].version_number #=> Integer
resp.schema_version_errors[0].error_details.error_code #=> String
resp.schema_version_errors[0].error_details.error_message #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :versions (required, String)

    A version range may be supplied which may be of the format:

    • a single version number, 5

    • a range, 5-8 : deletes versions 5, 6, 7, 8

Returns:

See Also:

#delete_security_configuration(options = {}) ⇒ Struct

Deletes a specified security configuration.

Examples:

Request syntax with placeholder values


resp = client.delete_security_configuration({
  name: "NameString", # required
})

Options Hash (options):

  • :name (required, String)

    The name of the security configuration to delete.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_table(options = {}) ⇒ Struct

Removes a table definition from the Data Catalog.

After completing this operation, you no longer have access to the table versions and partitions that belong to the deleted table. AWS Glue deletes these "orphaned" resources asynchronously in a timely manner, at the discretion of the service.

To ensure the immediate deletion of all related resources, before calling DeleteTable, use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or BatchDeletePartition, to delete any resources that belong to the table.

Examples:

Request syntax with placeholder values


resp = client.delete_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :name (required, String)

    The name of the table to be deleted. For Hive compatibility, this name is entirely lowercase.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_table_version(options = {}) ⇒ Struct

Deletes a specified version of a table.

Examples:

Request syntax with placeholder values


resp = client.delete_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_id: "VersionString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_id (required, String)

    The ID of the table version to be deleted. A VersionID is a string representation of an integer. Each version is incremented by 1.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_trigger(options = {}) ⇒ Types::DeleteTriggerResponse

Deletes a specified trigger. If the trigger is not found, no exception is thrown.

Examples:

Request syntax with placeholder values


resp = client.delete_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name of the trigger to delete.

Returns:

See Also:

#delete_user_defined_function(options = {}) ⇒ Struct

Deletes an existing function definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.delete_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be deleted is located. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function is located.

  • :function_name (required, String)

    The name of the function definition to be deleted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_workflow(options = {}) ⇒ Types::DeleteWorkflowResponse

Deletes a workflow.

Examples:

Request syntax with placeholder values


resp = client.delete_workflow({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    Name of the workflow to be deleted.

Returns:

See Also:

#get_catalog_import_status(options = {}) ⇒ Types::GetCatalogImportStatusResponse

Retrieves the status of a migration operation.

Examples:

Request syntax with placeholder values


resp = client.get_catalog_import_status({
  catalog_id: "CatalogIdString",
})

Response structure


resp.import_status.import_completed #=> true/false
resp.import_status.import_time #=> Time
resp.import_status.imported_by #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the catalog to migrate. Currently, this should be the AWS account ID.

Returns:

See Also:

#get_classifier(options = {}) ⇒ Types::GetClassifierResponse

Retrieve a classifier by name.

Examples:

Request syntax with placeholder values


resp = client.get_classifier({
  name: "NameString", # required
})

Response structure


resp.classifier.grok_classifier.name #=> String
resp.classifier.grok_classifier.classification #=> String
resp.classifier.grok_classifier.creation_time #=> Time
resp.classifier.grok_classifier.last_updated #=> Time
resp.classifier.grok_classifier.version #=> Integer
resp.classifier.grok_classifier.grok_pattern #=> String
resp.classifier.grok_classifier.custom_patterns #=> String
resp.classifier.xml_classifier.name #=> String
resp.classifier.xml_classifier.classification #=> String
resp.classifier.xml_classifier.creation_time #=> Time
resp.classifier.xml_classifier.last_updated #=> Time
resp.classifier.xml_classifier.version #=> Integer
resp.classifier.xml_classifier.row_tag #=> String
resp.classifier.json_classifier.name #=> String
resp.classifier.json_classifier.creation_time #=> Time
resp.classifier.json_classifier.last_updated #=> Time
resp.classifier.json_classifier.version #=> Integer
resp.classifier.json_classifier.json_path #=> String
resp.classifier.csv_classifier.name #=> String
resp.classifier.csv_classifier.creation_time #=> Time
resp.classifier.csv_classifier.last_updated #=> Time
resp.classifier.csv_classifier.version #=> Integer
resp.classifier.csv_classifier.delimiter #=> String
resp.classifier.csv_classifier.quote_symbol #=> String
resp.classifier.csv_classifier.contains_header #=> String, one of "UNKNOWN", "PRESENT", "ABSENT"
resp.classifier.csv_classifier.header #=> Array
resp.classifier.csv_classifier.header[0] #=> String
resp.classifier.csv_classifier.disable_value_trimming #=> true/false
resp.classifier.csv_classifier.allow_single_column #=> true/false

Options Hash (options):

  • :name (required, String)

    Name of the classifier to retrieve.

Returns:

See Also:

#get_classifiers(options = {}) ⇒ Types::GetClassifiersResponse

Lists all classifier objects in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_classifiers({
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.classifiers #=> Array
resp.classifiers[0].grok_classifier.name #=> String
resp.classifiers[0].grok_classifier.classification #=> String
resp.classifiers[0].grok_classifier.creation_time #=> Time
resp.classifiers[0].grok_classifier.last_updated #=> Time
resp.classifiers[0].grok_classifier.version #=> Integer
resp.classifiers[0].grok_classifier.grok_pattern #=> String
resp.classifiers[0].grok_classifier.custom_patterns #=> String
resp.classifiers[0].xml_classifier.name #=> String
resp.classifiers[0].xml_classifier.classification #=> String
resp.classifiers[0].xml_classifier.creation_time #=> Time
resp.classifiers[0].xml_classifier.last_updated #=> Time
resp.classifiers[0].xml_classifier.version #=> Integer
resp.classifiers[0].xml_classifier.row_tag #=> String
resp.classifiers[0].json_classifier.name #=> String
resp.classifiers[0].json_classifier.creation_time #=> Time
resp.classifiers[0].json_classifier.last_updated #=> Time
resp.classifiers[0].json_classifier.version #=> Integer
resp.classifiers[0].json_classifier.json_path #=> String
resp.classifiers[0].csv_classifier.name #=> String
resp.classifiers[0].csv_classifier.creation_time #=> Time
resp.classifiers[0].csv_classifier.last_updated #=> Time
resp.classifiers[0].csv_classifier.version #=> Integer
resp.classifiers[0].csv_classifier.delimiter #=> String
resp.classifiers[0].csv_classifier.quote_symbol #=> String
resp.classifiers[0].csv_classifier.contains_header #=> String, one of "UNKNOWN", "PRESENT", "ABSENT"
resp.classifiers[0].csv_classifier.header #=> Array
resp.classifiers[0].csv_classifier.header[0] #=> String
resp.classifiers[0].csv_classifier.disable_value_trimming #=> true/false
resp.classifiers[0].csv_classifier.allow_single_column #=> true/false
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

The maximum number of classifiers to return (optional).

  • :next_token (String)

    An optional continuation token.

Returns:

See Also:
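
Because results are paginated, a minimal sketch (assuming client is a configured Aws::Glue::Client) walks every page and prints each classifier's name:


params = { max_results: 25 }
loop do
  resp = client.get_classifiers(params)
  resp.classifiers.each do |c|
    # Exactly one of the four classifier members is populated per entry.
    clf = c.grok_classifier || c.xml_classifier || c.json_classifier || c.csv_classifier
    puts clf.name
  end
  break unless resp.next_token
  params[:next_token] = resp.next_token
end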

#get_column_statistics_for_partition(options = {}) ⇒ Types::GetColumnStatisticsForPartitionResponse

Retrieves partition statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is GetPartition.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_names: ["NameString"], # required
})

Response structure


resp.column_statistics_list #=> Array
resp.column_statistics_list[0].column_name #=> String
resp.column_statistics_list[0].column_type #=> String
resp.column_statistics_list[0].analyzed_time #=> Time
resp.column_statistics_list[0].statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> IO
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> IO
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors #=> Array
resp.errors[0].column_name #=> String
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

The name of the partitions' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_names (required, Array<String>)

    A list of the column names.

Returns:

See Also:
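
As a usage sketch (the database, table, partition, and column names below are illustrative), note that the statistics_data member that is populated depends on statistics_data.type:


resp = client.get_column_statistics_for_partition(
  database_name: "sales_db",          # illustrative
  table_name: "events",               # illustrative
  partition_values: ["2020-01-01"],   # identifies a single partition
  column_names: ["user_id", "amount"]
)
resp.column_statistics_list.each do |cs|
  data = cs.statistics_data
  case data.type
  when "LONG"
    puts "#{cs.column_name}: #{data.long_column_statistics_data.number_of_distinct_values} distinct values"
  when "DOUBLE"
    puts "#{cs.column_name}: max #{data.double_column_statistics_data.maximum_value}"
  end
end
# Columns whose statistics could not be retrieved come back in errors.
resp.errors.each { |e| warn "#{e.column_name}: #{e.error.error_message}" }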

#get_column_statistics_for_table(options = {}) ⇒ Types::GetColumnStatisticsForTableResponse

Retrieves table statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is GetTable.

Examples:

Request syntax with placeholder values


resp = client.get_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_names: ["NameString"], # required
})

Response structure


resp.column_statistics_list #=> Array
resp.column_statistics_list[0].column_name #=> String
resp.column_statistics_list[0].column_type #=> String
resp.column_statistics_list[0].analyzed_time #=> Time
resp.column_statistics_list[0].statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.column_statistics_list[0].statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> IO
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> IO
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.column_statistics_list[0].statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.average_length #=> Float
resp.column_statistics_list[0].statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors #=> Array
resp.errors[0].column_name #=> String
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

The name of the partitions' table.

  • :column_names (required, Array<String>)

    A list of the column names.

Returns:

See Also:

#get_connection(options = {}) ⇒ Types::GetConnectionResponse

Retrieves a connection definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_connection({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  hide_password: false,
})

Response structure


resp.connection.name #=> String
resp.connection.description #=> String
resp.connection.connection_type #=> String, one of "JDBC", "SFTP", "MONGODB", "KAFKA", "NETWORK"
resp.connection.match_criteria #=> Array
resp.connection.match_criteria[0] #=> String
resp.connection.connection_properties #=> Hash
resp.connection.connection_properties["ConnectionPropertyKey"] #=> String
resp.connection.physical_connection_requirements.subnet_id #=> String
resp.connection.physical_connection_requirements.security_group_id_list #=> Array
resp.connection.physical_connection_requirements.security_group_id_list[0] #=> String
resp.connection.physical_connection_requirements.availability_zone #=> String
resp.connection.creation_time #=> Time
resp.connection.last_updated_time #=> Time
resp.connection.last_updated_by #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the AWS account ID is used by default.

  • :name (required, String)

    The name of the connection definition to retrieve.

  • :hide_password (Boolean)

    Allows you to retrieve the connection metadata without returning the password. For instance, the AWS Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the AWS KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.

Returns:

See Also:
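
For instance, a minimal sketch (the connection name is illustrative) that reads connection metadata without requiring access to the KMS key that encrypts the password:


conn = client.get_connection(name: "my-jdbc-connection", hide_password: true).connection
puts "#{conn.name} (#{conn.connection_type})"
conn.connection_properties.each { |key, value| puts "  #{key}: #{value}" }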

#get_connections(options = {}) ⇒ Types::GetConnectionsResponse

Retrieves a list of connection definitions from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_connections({
  catalog_id: "CatalogIdString",
  filter: {
    match_criteria: ["NameString"],
    connection_type: "JDBC", # accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK
  },
  hide_password: false,
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.connection_list #=> Array
resp.connection_list[0].name #=> String
resp.connection_list[0].description #=> String
resp.connection_list[0].connection_type #=> String, one of "JDBC", "SFTP", "MONGODB", "KAFKA", "NETWORK"
resp.connection_list[0].match_criteria #=> Array
resp.connection_list[0].match_criteria[0] #=> String
resp.connection_list[0].connection_properties #=> Hash
resp.connection_list[0].connection_properties["ConnectionPropertyKey"] #=> String
resp.connection_list[0].physical_connection_requirements.subnet_id #=> String
resp.connection_list[0].physical_connection_requirements.security_group_id_list #=> Array
resp.connection_list[0].physical_connection_requirements.security_group_id_list[0] #=> String
resp.connection_list[0].physical_connection_requirements.availability_zone #=> String
resp.connection_list[0].creation_time #=> Time
resp.connection_list[0].last_updated_time #=> Time
resp.connection_list[0].last_updated_by #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connections reside. If none is provided, the AWS account ID is used by default.

  • :filter (Types::GetConnectionsFilter)

    A filter that controls which connections are returned.

  • :hide_password (Boolean)

    Allows you to retrieve the connection metadata without returning the password. For instance, the AWS Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the AWS KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of connections to return in one response.

Returns:

See Also:

#get_crawler(options = {}) ⇒ Types::GetCrawlerResponse

Retrieves metadata for a specified crawler.

Examples:

Request syntax with placeholder values


resp = client.get_crawler({
  name: "NameString", # required
})

Response structure


resp.crawler.name #=> String
resp.crawler.role #=> String
resp.crawler.targets.s3_targets #=> Array
resp.crawler.targets.s3_targets[0].path #=> String
resp.crawler.targets.s3_targets[0].exclusions #=> Array
resp.crawler.targets.s3_targets[0].exclusions[0] #=> String
resp.crawler.targets.s3_targets[0].connection_name #=> String
resp.crawler.targets.jdbc_targets #=> Array
resp.crawler.targets.jdbc_targets[0].connection_name #=> String
resp.crawler.targets.jdbc_targets[0].path #=> String
resp.crawler.targets.jdbc_targets[0].exclusions #=> Array
resp.crawler.targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawler.targets.mongo_db_targets #=> Array
resp.crawler.targets.mongo_db_targets[0].connection_name #=> String
resp.crawler.targets.mongo_db_targets[0].path #=> String
resp.crawler.targets.mongo_db_targets[0].scan_all #=> true/false
resp.crawler.targets.dynamo_db_targets #=> Array
resp.crawler.targets.dynamo_db_targets[0].path #=> String
resp.crawler.targets.dynamo_db_targets[0].scan_all #=> true/false
resp.crawler.targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawler.targets.catalog_targets #=> Array
resp.crawler.targets.catalog_targets[0].database_name #=> String
resp.crawler.targets.catalog_targets[0].tables #=> Array
resp.crawler.targets.catalog_targets[0].tables[0] #=> String
resp.crawler.database_name #=> String
resp.crawler.description #=> String
resp.crawler.classifiers #=> Array
resp.crawler.classifiers[0] #=> String
resp.crawler.recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY"
resp.crawler.schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawler.schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawler.state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawler.table_prefix #=> String
resp.crawler.schedule.schedule_expression #=> String
resp.crawler.schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawler.crawl_elapsed_time #=> Integer
resp.crawler.creation_time #=> Time
resp.crawler.last_updated #=> Time
resp.crawler.last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawler.last_crawl.error_message #=> String
resp.crawler.last_crawl.log_group #=> String
resp.crawler.last_crawl.log_stream #=> String
resp.crawler.last_crawl.message_prefix #=> String
resp.crawler.last_crawl.start_time #=> Time
resp.crawler.version #=> Integer
resp.crawler.configuration #=> String
resp.crawler.crawler_security_configuration #=> String

Options Hash (options):

  • :name (required, String)

    The name of the crawler to retrieve metadata for.

Returns:

See Also:
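
A common pattern is to poll until the crawler returns to the READY state; a minimal sketch (the crawler name and polling interval are illustrative):


loop do
  crawler = client.get_crawler(name: "my-crawler").crawler
  break if crawler.state == "READY"   # other states: RUNNING, STOPPING
  sleep 30
end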

#get_crawler_metrics(options = {}) ⇒ Types::GetCrawlerMetricsResponse

Retrieves metrics about specified crawlers.

Examples:

Request syntax with placeholder values


resp = client.get_crawler_metrics({
  crawler_name_list: ["NameString"],
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.crawler_metrics_list #=> Array
resp.crawler_metrics_list[0].crawler_name #=> String
resp.crawler_metrics_list[0].time_left_seconds #=> Float
resp.crawler_metrics_list[0].still_estimating #=> true/false
resp.crawler_metrics_list[0].last_runtime_seconds #=> Float
resp.crawler_metrics_list[0].median_runtime_seconds #=> Float
resp.crawler_metrics_list[0].tables_created #=> Integer
resp.crawler_metrics_list[0].tables_updated #=> Integer
resp.crawler_metrics_list[0].tables_deleted #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :crawler_name_list (Array<String>)

    A list of the names of crawlers about which to retrieve metrics.

  • :max_results (Integer)

The maximum number of crawler metrics to return in one response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:
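
As a usage sketch (the crawler name is illustrative), printing a one-line summary per crawler:


resp = client.get_crawler_metrics(crawler_name_list: ["my-crawler"])
resp.crawler_metrics_list.each do |m|
  puts "#{m.crawler_name}: last run #{m.last_runtime_seconds}s, " \
       "created #{m.tables_created}, updated #{m.tables_updated}, deleted #{m.tables_deleted}"
end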

#get_crawlers(options = {}) ⇒ Types::GetCrawlersResponse

Retrieves metadata for all crawlers defined in the customer account.

Examples:

Request syntax with placeholder values


resp = client.get_crawlers({
  max_results: 1,
  next_token: "Token",
})

Response structure


resp.crawlers #=> Array
resp.crawlers[0].name #=> String
resp.crawlers[0].role #=> String
resp.crawlers[0].targets.s3_targets #=> Array
resp.crawlers[0].targets.s3_targets[0].path #=> String
resp.crawlers[0].targets.s3_targets[0].exclusions #=> Array
resp.crawlers[0].targets.s3_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.s3_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets #=> Array
resp.crawlers[0].targets.jdbc_targets[0].connection_name #=> String
resp.crawlers[0].targets.jdbc_targets[0].path #=> String
resp.crawlers[0].targets.jdbc_targets[0].exclusions #=> Array
resp.crawlers[0].targets.jdbc_targets[0].exclusions[0] #=> String
resp.crawlers[0].targets.mongo_db_targets #=> Array
resp.crawlers[0].targets.mongo_db_targets[0].connection_name #=> String
resp.crawlers[0].targets.mongo_db_targets[0].path #=> String
resp.crawlers[0].targets.mongo_db_targets[0].scan_all #=> true/false
resp.crawlers[0].targets.dynamo_db_targets #=> Array
resp.crawlers[0].targets.dynamo_db_targets[0].path #=> String
resp.crawlers[0].targets.dynamo_db_targets[0].scan_all #=> true/false
resp.crawlers[0].targets.dynamo_db_targets[0].scan_rate #=> Float
resp.crawlers[0].targets.catalog_targets #=> Array
resp.crawlers[0].targets.catalog_targets[0].database_name #=> String
resp.crawlers[0].targets.catalog_targets[0].tables #=> Array
resp.crawlers[0].targets.catalog_targets[0].tables[0] #=> String
resp.crawlers[0].database_name #=> String
resp.crawlers[0].description #=> String
resp.crawlers[0].classifiers #=> Array
resp.crawlers[0].classifiers[0] #=> String
resp.crawlers[0].recrawl_policy.recrawl_behavior #=> String, one of "CRAWL_EVERYTHING", "CRAWL_NEW_FOLDERS_ONLY"
resp.crawlers[0].schema_change_policy.update_behavior #=> String, one of "LOG", "UPDATE_IN_DATABASE"
resp.crawlers[0].schema_change_policy.delete_behavior #=> String, one of "LOG", "DELETE_FROM_DATABASE", "DEPRECATE_IN_DATABASE"
resp.crawlers[0].state #=> String, one of "READY", "RUNNING", "STOPPING"
resp.crawlers[0].table_prefix #=> String
resp.crawlers[0].schedule.schedule_expression #=> String
resp.crawlers[0].schedule.state #=> String, one of "SCHEDULED", "NOT_SCHEDULED", "TRANSITIONING"
resp.crawlers[0].crawl_elapsed_time #=> Integer
resp.crawlers[0].creation_time #=> Time
resp.crawlers[0].last_updated #=> Time
resp.crawlers[0].last_crawl.status #=> String, one of "SUCCEEDED", "CANCELLED", "FAILED"
resp.crawlers[0].last_crawl.error_message #=> String
resp.crawlers[0].last_crawl.log_group #=> String
resp.crawlers[0].last_crawl.log_stream #=> String
resp.crawlers[0].last_crawl.message_prefix #=> String
resp.crawlers[0].last_crawl.start_time #=> Time
resp.crawlers[0].version #=> Integer
resp.crawlers[0].configuration #=> String
resp.crawlers[0].crawler_security_configuration #=> String
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

    The number of crawlers to return on each call.

  • :next_token (String)

    A continuation token, if this is a continuation request.

Returns:

See Also:

#get_data_catalog_encryption_settings(options = {}) ⇒ Types::GetDataCatalogEncryptionSettingsResponse

Retrieves the security configuration for a specified catalog.

Examples:

Request syntax with placeholder values


resp = client.get_data_catalog_encryption_settings({
  catalog_id: "CatalogIdString",
})

Response structure


resp.data_catalog_encryption_settings.encryption_at_rest.catalog_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.data_catalog_encryption_settings.encryption_at_rest.sse_aws_kms_key_id #=> String
resp.data_catalog_encryption_settings.connection_password_encryption.return_connection_password_encrypted #=> true/false
resp.data_catalog_encryption_settings.connection_password_encryption.aws_kms_key_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog to retrieve the security configuration for. If none is provided, the AWS account ID is used by default.

Returns:

See Also:
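
For example, a minimal sketch that checks whether encryption at rest is enabled for the calling account's catalog:


settings = client.get_data_catalog_encryption_settings.data_catalog_encryption_settings
if settings.encryption_at_rest.catalog_encryption_mode == "SSE-KMS"
  puts "Encrypted with key #{settings.encryption_at_rest.sse_aws_kms_key_id}"
else
  puts "Catalog encryption at rest is disabled"
end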

#get_database(options = {}) ⇒ Types::GetDatabaseResponse

Retrieves the definition of a specified database.

Examples:

Request syntax with placeholder values


resp = client.get_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
})

Response structure


resp.database.name #=> String
resp.database.description #=> String
resp.database.location_uri #=> String
resp.database.parameters #=> Hash
resp.database.parameters["KeyString"] #=> String
resp.database.create_time #=> Time
resp.database.create_table_default_permissions #=> Array
resp.database.create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.database.create_table_default_permissions[0].permissions #=> Array
resp.database.create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.database.target_database.catalog_id #=> String
resp.database.target_database.database_name #=> String
resp.database.catalog_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the database resides. If none is provided, the AWS account ID is used by default.

  • :name (required, String)

    The name of the database to retrieve. For Hive compatibility, this should be all lowercase.

Returns:

See Also:

#get_databases(options = {}) ⇒ Types::GetDatabasesResponse

Retrieves all databases defined in a given Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_databases({
  catalog_id: "CatalogIdString",
  next_token: "Token",
  max_results: 1,
  resource_share_type: "FOREIGN", # accepts FOREIGN, ALL
})

Response structure


resp.database_list #=> Array
resp.database_list[0].name #=> String
resp.database_list[0].description #=> String
resp.database_list[0].location_uri #=> String
resp.database_list[0].parameters #=> Hash
resp.database_list[0].parameters["KeyString"] #=> String
resp.database_list[0].create_time #=> Time
resp.database_list[0].create_table_default_permissions #=> Array
resp.database_list[0].create_table_default_permissions[0].principal.data_lake_principal_identifier #=> String
resp.database_list[0].create_table_default_permissions[0].permissions #=> Array
resp.database_list[0].create_table_default_permissions[0].permissions[0] #=> String, one of "ALL", "SELECT", "ALTER", "DROP", "DELETE", "INSERT", "CREATE_DATABASE", "CREATE_TABLE", "DATA_LOCATION_ACCESS"
resp.database_list[0].target_database.catalog_id #=> String
resp.database_list[0].target_database.database_name #=> String
resp.database_list[0].catalog_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog from which to retrieve Databases. If none is provided, the AWS account ID is used by default.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of databases to return in one response.

  • :resource_share_type (String)

    Allows you to specify that you want to list the databases shared with your account. The allowable values are FOREIGN or ALL.

    • If set to FOREIGN, lists the databases shared with your account.

    • If set to ALL, lists the databases shared with your account, as well as the databases in your local account.

Returns:

See Also:
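
A pagination sketch that lists every database name visible to the account, including databases shared from other accounts:


params = { resource_share_type: "ALL" }
loop do
  resp = client.get_databases(params)
  resp.database_list.each { |db| puts db.name }
  break unless resp.next_token
  params[:next_token] = resp.next_token
end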

#get_dataflow_graph(options = {}) ⇒ Types::GetDataflowGraphResponse

Transforms a Python script into a directed acyclic graph (DAG).

Examples:

Request syntax with placeholder values


resp = client.get_dataflow_graph({
  python_script: "PythonScript",
})

Response structure


resp.dag_nodes #=> Array
resp.dag_nodes[0].id #=> String
resp.dag_nodes[0].node_type #=> String
resp.dag_nodes[0].args #=> Array
resp.dag_nodes[0].args[0].name #=> String
resp.dag_nodes[0].args[0].value #=> String
resp.dag_nodes[0].args[0].param #=> true/false
resp.dag_nodes[0].line_number #=> Integer
resp.dag_edges #=> Array
resp.dag_edges[0].source #=> String
resp.dag_edges[0].target #=> String
resp.dag_edges[0].target_parameter #=> String

Options Hash (options):

  • :python_script (String)

    The Python script to transform.

Returns:

See Also:
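
As a usage sketch (the script path is illustrative), printing the edges of the generated DAG:


script = File.read("my_etl_script.py")   # illustrative path to an ETL script
resp = client.get_dataflow_graph(python_script: script)
resp.dag_edges.each { |edge| puts "#{edge.source} -> #{edge.target}" }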

#get_dev_endpoint(options = {}) ⇒ Types::GetDevEndpointResponse

Retrieves information about a specified development endpoint.

When you create a development endpoint in a virtual private cloud (VPC), AWS Glue returns only a private IP address, and the public IP address field is not populated. When you create a non-VPC development endpoint, AWS Glue returns only a public IP address.

Examples:

Request syntax with placeholder values


resp = client.get_dev_endpoint({
  endpoint_name: "GenericString", # required
})

Response structure


resp.dev_endpoint.endpoint_name #=> String
resp.dev_endpoint.role_arn #=> String
resp.dev_endpoint.security_group_ids #=> Array
resp.dev_endpoint.security_group_ids[0] #=> String
resp.dev_endpoint.subnet_id #=> String
resp.dev_endpoint.yarn_endpoint_address #=> String
resp.dev_endpoint.private_address #=> String
resp.dev_endpoint.zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoint.public_address #=> String
resp.dev_endpoint.status #=> String
resp.dev_endpoint.worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.dev_endpoint.glue_version #=> String
resp.dev_endpoint.number_of_workers #=> Integer
resp.dev_endpoint.number_of_nodes #=> Integer
resp.dev_endpoint.availability_zone #=> String
resp.dev_endpoint.vpc_id #=> String
resp.dev_endpoint.extra_python_libs_s3_path #=> String
resp.dev_endpoint.extra_jars_s3_path #=> String
resp.dev_endpoint.failure_reason #=> String
resp.dev_endpoint.last_update_status #=> String
resp.dev_endpoint.created_timestamp #=> Time
resp.dev_endpoint.last_modified_timestamp #=> Time
resp.dev_endpoint.public_key #=> String
resp.dev_endpoint.public_keys #=> Array
resp.dev_endpoint.public_keys[0] #=> String
resp.dev_endpoint.security_configuration #=> String
resp.dev_endpoint.arguments #=> Hash
resp.dev_endpoint.arguments["GenericString"] #=> String

Options Hash (options):

  • :endpoint_name (required, String)

    Name of the DevEndpoint to retrieve information for.

Returns:

See Also:
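
Given the VPC behavior described above, a minimal sketch (the endpoint name is illustrative) that picks whichever address is populated:


ep = client.get_dev_endpoint(endpoint_name: "my-dev-endpoint").dev_endpoint
address = ep.public_address || ep.private_address   # private only for VPC endpoints
puts "#{ep.endpoint_name} (#{ep.status}): #{address}"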

#get_dev_endpoints(options = {}) ⇒ Types::GetDevEndpointsResponse

Retrieves all the development endpoints in this AWS account.

When you create a development endpoint in a virtual private cloud (VPC), AWS Glue returns only a private IP address, and the public IP address field is not populated. When you create a non-VPC development endpoint, AWS Glue returns only a public IP address.

Examples:

Request syntax with placeholder values


resp = client.get_dev_endpoints({
  max_results: 1,
  next_token: "GenericString",
})

Response structure


resp.dev_endpoints #=> Array
resp.dev_endpoints[0].endpoint_name #=> String
resp.dev_endpoints[0].role_arn #=> String
resp.dev_endpoints[0].security_group_ids #=> Array
resp.dev_endpoints[0].security_group_ids[0] #=> String
resp.dev_endpoints[0].subnet_id #=> String
resp.dev_endpoints[0].yarn_endpoint_address #=> String
resp.dev_endpoints[0].private_address #=> String
resp.dev_endpoints[0].zeppelin_remote_spark_interpreter_port #=> Integer
resp.dev_endpoints[0].public_address #=> String
resp.dev_endpoints[0].status #=> String
resp.dev_endpoints[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.dev_endpoints[0].glue_version #=> String
resp.dev_endpoints[0].number_of_workers #=> Integer
resp.dev_endpoints[0].number_of_nodes #=> Integer
resp.dev_endpoints[0].availability_zone #=> String
resp.dev_endpoints[0].vpc_id #=> String
resp.dev_endpoints[0].extra_python_libs_s3_path #=> String
resp.dev_endpoints[0].extra_jars_s3_path #=> String
resp.dev_endpoints[0].failure_reason #=> String
resp.dev_endpoints[0].last_update_status #=> String
resp.dev_endpoints[0].created_timestamp #=> Time
resp.dev_endpoints[0].last_modified_timestamp #=> Time
resp.dev_endpoints[0].public_key #=> String
resp.dev_endpoints[0].public_keys #=> Array
resp.dev_endpoints[0].public_keys[0] #=> String
resp.dev_endpoints[0].security_configuration #=> String
resp.dev_endpoints[0].arguments #=> Hash
resp.dev_endpoints[0].arguments["GenericString"] #=> String
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

The maximum number of development endpoints to return in one response.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#get_job(options = {}) ⇒ Types::GetJobResponse

Retrieves an existing job definition.

Examples:

Request syntax with placeholder values


resp = client.get_job({
  job_name: "NameString", # required
})

Response structure


resp.job.name #=> String
resp.job.description #=> String
resp.job.log_uri #=> String
resp.job.role #=> String
resp.job.created_on #=> Time
resp.job.last_modified_on #=> Time
resp.job.execution_property.max_concurrent_runs #=> Integer
resp.job.command.name #=> String
resp.job.command.script_location #=> String
resp.job.command.python_version #=> String
resp.job.default_arguments #=> Hash
resp.job.default_arguments["GenericString"] #=> String
resp.job.non_overridable_arguments #=> Hash
resp.job.non_overridable_arguments["GenericString"] #=> String
resp.job.connections.connections #=> Array
resp.job.connections.connections[0] #=> String
resp.job.max_retries #=> Integer
resp.job.allocated_capacity #=> Integer
resp.job.timeout #=> Integer
resp.job.max_capacity #=> Float
resp.job.worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.job.number_of_workers #=> Integer
resp.job.security_configuration #=> String
resp.job.notification_property.notify_delay_after #=> Integer
resp.job.glue_version #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition to retrieve.

Returns:

See Also:
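
For example, a minimal sketch (the job name is illustrative) that prints a job's command and capacity settings:


job = client.get_job(job_name: "my-etl-job").job
puts "#{job.name}: #{job.command.name} at #{job.command.script_location}"
puts "capacity: #{job.number_of_workers} x #{job.worker_type}, timeout #{job.timeout} min"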

#get_job_bookmark(options = {}) ⇒ Types::GetJobBookmarkResponse

Returns information on a job bookmark entry.

Examples:

Request syntax with placeholder values


resp = client.get_job_bookmark({
  job_name: "JobName", # required
  run_id: "RunId",
})

Response structure


resp.job_bookmark_entry.job_name #=> String
resp.job_bookmark_entry.version #=> Integer
resp.job_bookmark_entry.run #=> Integer
resp.job_bookmark_entry.attempt #=> Integer
resp.job_bookmark_entry.previous_run_id #=> String
resp.job_bookmark_entry.run_id #=> String
resp.job_bookmark_entry.job_bookmark #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job in question.

  • :run_id (String)

    The unique run identifier associated with this job run.

Returns:

See Also:

#get_job_run(options = {}) ⇒ Types::GetJobRunResponse

Retrieves the metadata for a given job run.

Examples:

Request syntax with placeholder values


resp = client.get_job_run({
  job_name: "NameString", # required
  run_id: "IdString", # required
  predecessors_included: false,
})

Response structure


resp.job_run.id #=> String
resp.job_run.attempt #=> Integer
resp.job_run.previous_run_id #=> String
resp.job_run.trigger_name #=> String
resp.job_run.job_name #=> String
resp.job_run.started_on #=> Time
resp.job_run.last_modified_on #=> Time
resp.job_run.completed_on #=> Time
resp.job_run.job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.job_run.arguments #=> Hash
resp.job_run.arguments["GenericString"] #=> String
resp.job_run.error_message #=> String
resp.job_run.predecessor_runs #=> Array
resp.job_run.predecessor_runs[0].job_name #=> String
resp.job_run.predecessor_runs[0].run_id #=> String
resp.job_run.allocated_capacity #=> Integer
resp.job_run.execution_time #=> Integer
resp.job_run.timeout #=> Integer
resp.job_run.max_capacity #=> Float
resp.job_run.worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.job_run.number_of_workers #=> Integer
resp.job_run.security_configuration #=> String
resp.job_run.log_group_name #=> String
resp.job_run.notification_property.notify_delay_after #=> Integer
resp.job_run.glue_version #=> String

Options Hash (options):

  • :job_name (required, String)

    Name of the job definition being run.

  • :run_id (required, String)

    The ID of the job run.

  • :predecessors_included (Boolean)

    True if a list of predecessor runs should be returned.

Returns:

See Also:
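
One common pattern is to start a run and poll it to completion; a minimal sketch (the job name is illustrative, and start_job_run is assumed to be available on the same client):


run_id = client.start_job_run(job_name: "my-etl-job").job_run_id
terminal_states = %w[STOPPED SUCCEEDED FAILED TIMEOUT]
run = nil
loop do
  run = client.get_job_run(job_name: "my-etl-job", run_id: run_id).job_run
  break if terminal_states.include?(run.job_run_state)
  sleep 30
end
puts "Run #{run.id} finished as #{run.job_run_state}"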

#get_job_runs(options = {}) ⇒ Types::GetJobRunsResponse

Retrieves metadata for all runs of a given job definition.

Examples:

Request syntax with placeholder values


resp = client.get_job_runs({
  job_name: "NameString", # required
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.job_runs #=> Array
resp.job_runs[0].id #=> String
resp.job_runs[0].attempt #=> Integer
resp.job_runs[0].previous_run_id #=> String
resp.job_runs[0].trigger_name #=> String
resp.job_runs[0].job_name #=> String
resp.job_runs[0].started_on #=> Time
resp.job_runs[0].last_modified_on #=> Time
resp.job_runs[0].completed_on #=> Time
resp.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.job_runs[0].arguments #=> Hash
resp.job_runs[0].arguments["GenericString"] #=> String
resp.job_runs[0].error_message #=> String
resp.job_runs[0].predecessor_runs #=> Array
resp.job_runs[0].predecessor_runs[0].job_name #=> String
resp.job_runs[0].predecessor_runs[0].run_id #=> String
resp.job_runs[0].allocated_capacity #=> Integer
resp.job_runs[0].execution_time #=> Integer
resp.job_runs[0].timeout #=> Integer
resp.job_runs[0].max_capacity #=> Float
resp.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.job_runs[0].number_of_workers #=> Integer
resp.job_runs[0].security_configuration #=> String
resp.job_runs[0].log_group_name #=> String
resp.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.job_runs[0].glue_version #=> String
resp.next_token #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition for which to retrieve all job runs.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

The maximum number of job runs to return in one response.

Returns:

See Also:
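
A pagination sketch (the job name is illustrative) that collects the error messages of failed runs:


params = { job_name: "my-etl-job" }
loop do
  resp = client.get_job_runs(params)
  resp.job_runs.each do |run|
    puts "#{run.id}: #{run.error_message}" if run.job_run_state == "FAILED"
  end
  break unless resp.next_token
  params[:next_token] = resp.next_token
end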

#get_jobs(options = {}) ⇒ Types::GetJobsResponse

Retrieves all current job definitions.

Examples:

Request syntax with placeholder values


resp = client.get_jobs({
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.jobs #=> Array
resp.jobs[0].name #=> String
resp.jobs[0].description #=> String
resp.jobs[0].log_uri #=> String
resp.jobs[0].role #=> String
resp.jobs[0].created_on #=> Time
resp.jobs[0].last_modified_on #=> Time
resp.jobs[0].execution_property.max_concurrent_runs #=> Integer
resp.jobs[0].command.name #=> String
resp.jobs[0].command.script_location #=> String
resp.jobs[0].command.python_version #=> String
resp.jobs[0].default_arguments #=> Hash
resp.jobs[0].default_arguments["GenericString"] #=> String
resp.jobs[0].non_overridable_arguments #=> Hash
resp.jobs[0].non_overridable_arguments["GenericString"] #=> String
resp.jobs[0].connections.connections #=> Array
resp.jobs[0].connections.connections[0] #=> String
resp.jobs[0].max_retries #=> Integer
resp.jobs[0].allocated_capacity #=> Integer
resp.jobs[0].timeout #=> Integer
resp.jobs[0].max_capacity #=> Float
resp.jobs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.jobs[0].number_of_workers #=> Integer
resp.jobs[0].security_configuration #=> String
resp.jobs[0].notification_property.notify_delay_after #=> Integer
resp.jobs[0].glue_version #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

The maximum number of job definitions to return in one response.

Returns:

See Also:

#get_mapping(options = {}) ⇒ Types::GetMappingResponse

Creates mappings from a source table to a list of target tables.

Examples:

Request syntax with placeholder values


resp = client.get_mapping({
  source: { # required
    database_name: "NameString", # required
    table_name: "NameString", # required
  },
  sinks: [
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
    },
  ],
  location: {
    jdbc: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    s3: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    dynamo_db: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
  },
})

Response structure


resp.mapping #=> Array
resp.mapping[0].source_table #=> String
resp.mapping[0].source_path #=> String
resp.mapping[0].source_type #=> String
resp.mapping[0].target_table #=> String
resp.mapping[0].target_path #=> String
resp.mapping[0].target_type #=> String

Options Hash (options):

  • :source (required, Types::CatalogEntry)

    Specifies the source table.

  • :sinks (Array<Types::CatalogEntry>)

    A list of target tables.

  • :location (Types::Location)

    Parameters for the mapping.

Returns:

See Also:
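
As a usage sketch (the database and table names are illustrative), generating default mappings from one catalog table to another:


resp = client.get_mapping(
  source: { database_name: "sales_db", table_name: "raw_events" },
  sinks: [{ database_name: "sales_db", table_name: "clean_events" }]
)
resp.mapping.each do |m|
  puts "#{m.source_path} (#{m.source_type}) -> #{m.target_path} (#{m.target_type})"
end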

#get_ml_task_run(options = {}) ⇒ Types::GetMLTaskRunResponse

Gets details for a specific task run on a machine learning transform. Machine learning task runs are asynchronous tasks that AWS Glue runs on your behalf as part of various machine learning workflows. You can check the status of any task run by calling GetMLTaskRun with the TaskRunID and its parent transform's TransformID.

Examples:

Request syntax with placeholder values


resp = client.get_ml_task_run({
  transform_id: "HashString", # required
  task_run_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.task_run_id #=> String
resp.status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.log_group_name #=> String
resp.properties.task_type #=> String, one of "EVALUATION", "LABELING_SET_GENERATION", "IMPORT_LABELS", "EXPORT_LABELS", "FIND_MATCHES"
resp.properties.import_labels_task_run_properties.input_s3_path #=> String
resp.properties.import_labels_task_run_properties.replace #=> true/false
resp.properties.export_labels_task_run_properties.output_s3_path #=> String
resp.properties.labeling_set_generation_task_run_properties.output_s3_path #=> String
resp.properties.find_matches_task_run_properties.job_id #=> String
resp.properties.find_matches_task_run_properties.job_name #=> String
resp.properties.find_matches_task_run_properties.job_run_id #=> String
resp.error_string #=> String
resp.started_on #=> Time
resp.last_modified_on #=> Time
resp.completed_on #=> Time
resp.execution_time #=> Integer

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :task_run_id (required, String)

    The unique identifier of the task run.

Returns:

See Also:

#get_ml_task_runs(options = {}) ⇒ Types::GetMLTaskRunsResponse

Gets a list of runs for a machine learning transform. Machine learning task runs are asynchronous tasks that AWS Glue runs on your behalf as part of various machine learning workflows. You can get a sortable, filterable list of machine learning task runs by calling GetMLTaskRuns with their parent transform's TransformID and other optional parameters as documented in this section.

This operation returns a list of historic runs and must be paginated.

Examples:

Request syntax with placeholder values


resp = client.get_ml_task_runs({
  transform_id: "HashString", # required
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    task_run_type: "EVALUATION", # accepts EVALUATION, LABELING_SET_GENERATION, IMPORT_LABELS, EXPORT_LABELS, FIND_MATCHES
    status: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT
    started_before: Time.now,
    started_after: Time.now,
  },
  sort: {
    column: "TASK_RUN_TYPE", # required, accepts TASK_RUN_TYPE, STATUS, STARTED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
})

Response structure


resp.task_runs #=> Array
resp.task_runs[0].transform_id #=> String
resp.task_runs[0].task_run_id #=> String
resp.task_runs[0].status #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.task_runs[0].log_group_name #=> String
resp.task_runs[0].properties.task_type #=> String, one of "EVALUATION", "LABELING_SET_GENERATION", "IMPORT_LABELS", "EXPORT_LABELS", "FIND_MATCHES"
resp.task_runs[0].properties.import_labels_task_run_properties.input_s3_path #=> String
resp.task_runs[0].properties.import_labels_task_run_properties.replace #=> true/false
resp.task_runs[0].properties.export_labels_task_run_properties.output_s3_path #=> String
resp.task_runs[0].properties.labeling_set_generation_task_run_properties.output_s3_path #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_id #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_name #=> String
resp.task_runs[0].properties.find_matches_task_run_properties.job_run_id #=> String
resp.task_runs[0].error_string #=> String
resp.task_runs[0].started_on #=> Time
resp.task_runs[0].last_modified_on #=> Time
resp.task_runs[0].completed_on #=> Time
resp.task_runs[0].execution_time #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :next_token (String)

    A token for pagination of the results. The default is empty.

  • :max_results (Integer)

    The maximum number of results to return.

  • :filter (Types::TaskRunFilterCriteria)

    The filter criteria, in the TaskRunFilterCriteria structure, for the task run.

  • :sort (Types::TaskRunSortCriteria)

    The sorting criteria, in the TaskRunSortCriteria structure, for the task run.

Returns:

See Also:
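
For example, a minimal sketch (the transform ID is illustrative) listing the most recently started failed task runs first:


resp = client.get_ml_task_runs(
  transform_id: "0123456789abcdef0123456789abcdef",  # illustrative ID
  filter: { status: "FAILED" },
  sort: { column: "STARTED", sort_direction: "DESCENDING" }
)
resp.task_runs.each { |t| puts "#{t.task_run_id}: #{t.error_string}" }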

#get_ml_transform(options = {}) ⇒ Types::GetMLTransformResponse

Gets an AWS Glue machine learning transform artifact and all its corresponding metadata. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed from examples provided by humans. These transformations are then saved by AWS Glue. You can retrieve their metadata by calling GetMLTransform.

Examples:

Request syntax with placeholder values


resp = client.get_ml_transform({
  transform_id: "HashString", # required
})

Response structure


resp.transform_id #=> String
resp.name #=> String
resp.description #=> String
resp.status #=> String, one of "NOT_READY", "READY", "DELETING"
resp.created_on #=> Time
resp.last_modified_on #=> Time
resp.input_record_tables #=> Array
resp.input_record_tables[0].database_name #=> String
resp.input_record_tables[0].table_name #=> String
resp.input_record_tables[0].catalog_id #=> String
resp.input_record_tables[0].connection_name #=> String
resp.parameters.transform_type #=> String, one of "FIND_MATCHES"
resp.parameters.find_matches_parameters.primary_key_column_name #=> String
resp.parameters.find_matches_parameters.precision_recall_tradeoff #=> Float
resp.parameters.find_matches_parameters.accuracy_cost_tradeoff #=> Float
resp.parameters.find_matches_parameters.enforce_provided_labels #=> true/false
resp.evaluation_metrics.transform_type #=> String, one of "FIND_MATCHES"
resp.evaluation_metrics.find_matches_metrics.area_under_pr_curve #=> Float
resp.evaluation_metrics.find_matches_metrics.precision #=> Float
resp.evaluation_metrics.find_matches_metrics.recall #=> Float
resp.evaluation_metrics.find_matches_metrics.f1 #=> Float
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_positives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_positives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_negatives #=> Integer
resp.evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_negatives #=> Integer
resp.label_count #=> Integer
resp.schema #=> Array
resp.schema[0].name #=> String
resp.schema[0].data_type #=> String
resp.role #=> String
resp.glue_version #=> String
resp.max_capacity #=> Float
resp.worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.number_of_workers #=> Integer
resp.timeout #=> Integer
resp.max_retries #=> Integer
resp.transform_encryption.ml_user_data_encryption.ml_user_data_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.transform_encryption.ml_user_data_encryption.kms_key_id #=> String
resp.transform_encryption.task_run_security_configuration_name #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the transform, generated at the time that the transform was created.

Returns:

See Also:
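
As a usage sketch (the transform ID is illustrative), reading the FindMatches quality metrics:


resp = client.get_ml_transform(transform_id: "0123456789abcdef0123456789abcdef")
metrics = resp.evaluation_metrics.find_matches_metrics
puts "precision=#{metrics.precision} recall=#{metrics.recall} f1=#{metrics.f1}"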

#get_ml_transforms(options = {}) ⇒ Types::GetMLTransformsResponse

Gets a sortable, filterable list of existing AWS Glue machine learning transforms. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed from examples provided by humans. These transformations are then saved by AWS Glue, and you can retrieve their metadata by calling GetMLTransforms.

Examples:

Request syntax with placeholder values


resp = client.get_ml_transforms({
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    name: "NameString",
    transform_type: "FIND_MATCHES", # accepts FIND_MATCHES
    status: "NOT_READY", # accepts NOT_READY, READY, DELETING
    glue_version: "GlueVersionString",
    created_before: Time.now,
    created_after: Time.now,
    last_modified_before: Time.now,
    last_modified_after: Time.now,
    schema: [
      {
        name: "ColumnNameString",
        data_type: "ColumnTypeString",
      },
    ],
  },
  sort: {
    column: "NAME", # required, accepts NAME, TRANSFORM_TYPE, STATUS, CREATED, LAST_MODIFIED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
})

Response structure


resp.transforms #=> Array
resp.transforms[0].transform_id #=> String
resp.transforms[0].name #=> String
resp.transforms[0].description #=> String
resp.transforms[0].status #=> String, one of "NOT_READY", "READY", "DELETING"
resp.transforms[0].created_on #=> Time
resp.transforms[0].last_modified_on #=> Time
resp.transforms[0].input_record_tables #=> Array
resp.transforms[0].input_record_tables[0].database_name #=> String
resp.transforms[0].input_record_tables[0].table_name #=> String
resp.transforms[0].input_record_tables[0].catalog_id #=> String
resp.transforms[0].input_record_tables[0].connection_name #=> String
resp.transforms[0].parameters.transform_type #=> String, one of "FIND_MATCHES"
resp.transforms[0].parameters.find_matches_parameters.primary_key_column_name #=> String
resp.transforms[0].parameters.find_matches_parameters.precision_recall_tradeoff #=> Float
resp.transforms[0].parameters.find_matches_parameters.accuracy_cost_tradeoff #=> Float
resp.transforms[0].parameters.find_matches_parameters.enforce_provided_labels #=> true/false
resp.transforms[0].evaluation_metrics.transform_type #=> String, one of "FIND_MATCHES"
resp.transforms[0].evaluation_metrics.find_matches_metrics.area_under_pr_curve #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.precision #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.recall #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.f1 #=> Float
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_positives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_positives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_true_negatives #=> Integer
resp.transforms[0].evaluation_metrics.find_matches_metrics.confusion_matrix.num_false_negatives #=> Integer
resp.transforms[0].label_count #=> Integer
resp.transforms[0].schema #=> Array
resp.transforms[0].schema[0].name #=> String
resp.transforms[0].schema[0].data_type #=> String
resp.transforms[0].role #=> String
resp.transforms[0].glue_version #=> String
resp.transforms[0].max_capacity #=> Float
resp.transforms[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.transforms[0].number_of_workers #=> Integer
resp.transforms[0].timeout #=> Integer
resp.transforms[0].max_retries #=> Integer
resp.transforms[0].transform_encryption.ml_user_data_encryption.ml_user_data_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.transforms[0].transform_encryption.ml_user_data_encryption.kms_key_id #=> String
resp.transforms[0].transform_encryption.task_run_security_configuration_name #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A paginated token to offset the results.

  • :max_results (Integer)

    The maximum number of results to return.

  • :filter (Types::TransformFilterCriteria)

    The filter transformation criteria.

  • :sort (Types::TransformSortCriteria)

    The sorting criteria.

Returns:

See Also:

#get_partition(options = {}) ⇒ Types::GetPartitionResponse

Retrieves information about a specified partition.

Examples:

Request syntax with placeholder values


resp = client.get_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
})

Response structure


resp.partition.values #=> Array
resp.partition.values[0] #=> String
resp.partition.database_name #=> String
resp.partition.table_name #=> String
resp.partition.creation_time #=> Time
resp.partition.last_access_time #=> Time
resp.partition.storage_descriptor.columns #=> Array
resp.partition.storage_descriptor.columns[0].name #=> String
resp.partition.storage_descriptor.columns[0].type #=> String
resp.partition.storage_descriptor.columns[0].comment #=> String
resp.partition.storage_descriptor.columns[0].parameters #=> Hash
resp.partition.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partition.storage_descriptor.location #=> String
resp.partition.storage_descriptor.input_format #=> String
resp.partition.storage_descriptor.output_format #=> String
resp.partition.storage_descriptor.compressed #=> true/false
resp.partition.storage_descriptor.number_of_buckets #=> Integer
resp.partition.storage_descriptor.serde_info.name #=> String
resp.partition.storage_descriptor.serde_info.serialization_library #=> String
resp.partition.storage_descriptor.serde_info.parameters #=> Hash
resp.partition.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partition.storage_descriptor.bucket_columns #=> Array
resp.partition.storage_descriptor.bucket_columns[0] #=> String
resp.partition.storage_descriptor.sort_columns #=> Array
resp.partition.storage_descriptor.sort_columns[0].column #=> String
resp.partition.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partition.storage_descriptor.parameters #=> Hash
resp.partition.storage_descriptor.parameters["KeyString"] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partition.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partition.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partition.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partition.storage_descriptor.stored_as_sub_directories #=> true/false
resp.partition.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partition.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partition.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partition.storage_descriptor.schema_reference.schema_version_id #=> String
resp.partition.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partition.parameters #=> Hash
resp.partition.parameters["KeyString"] #=> String
resp.partition.last_analyzed_time #=> Time
resp.partition.catalog_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition in question resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partition resides.

  • :table_name (required, String)

The name of the partition's table.

  • :partition_values (required, Array<String>)

    The values that define the partition.

Returns:

See Also:
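
For example, a minimal sketch (names and values are illustrative) that looks up where a single partition's data is stored:


partition = client.get_partition(
  database_name: "sales_db",
  table_name: "events",
  partition_values: ["2020-01-01"]
).partition
puts partition.storage_descriptor.location   # e.g. an s3:// path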

#get_partition_indexes(options = {}) ⇒ Types::GetPartitionIndexesResponse

Retrieves the partition indexes associated with a table.

Examples:

Request syntax with placeholder values


resp = client.get_partition_indexes({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  next_token: "Token",
})

Response structure


resp.partition_index_descriptor_list #=> Array
resp.partition_index_descriptor_list[0].index_name #=> String
resp.partition_index_descriptor_list[0].keys #=> Array
resp.partition_index_descriptor_list[0].keys[0].name #=> String
resp.partition_index_descriptor_list[0].keys[0].type #=> String
resp.partition_index_descriptor_list[0].index_status #=> String, one of "ACTIVE"
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The catalog ID where the table resides.

  • :database_name (required, String)

    Specifies the name of a database from which you want to retrieve partition indexes.

  • :table_name (required, String)

    Specifies the name of a table for which you want to retrieve the partition indexes.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

Returns:

See Also:
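
As a usage sketch (names are illustrative), listing each index and its key columns:


resp = client.get_partition_indexes(database_name: "sales_db", table_name: "events")
resp.partition_index_descriptor_list.each do |idx|
  key_names = idx.keys.map(&:name).join(", ")
  puts "#{idx.index_name} (#{idx.index_status}): #{key_names}"
end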

#get_partitions(options = {}) ⇒ Types::GetPartitionsResponse

Retrieves information about the partitions in a table.

Examples:

Request syntax with placeholder values


resp = client.get_partitions({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  expression: "PredicateString",
  next_token: "Token",
  segment: {
    segment_number: 1, # required
    total_segments: 1, # required
  },
  max_results: 1,
})

Response structure


resp.partitions #=> Array
resp.partitions[0].values #=> Array
resp.partitions[0].values[0] #=> String
resp.partitions[0].database_name #=> String
resp.partitions[0].table_name #=> String
resp.partitions[0].creation_time #=> Time
resp.partitions[0].last_access_time #=> Time
resp.partitions[0].storage_descriptor.columns #=> Array
resp.partitions[0].storage_descriptor.columns[0].name #=> String
resp.partitions[0].storage_descriptor.columns[0].type #=> String
resp.partitions[0].storage_descriptor.columns[0].comment #=> String
resp.partitions[0].storage_descriptor.columns[0].parameters #=> Hash
resp.partitions[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.location #=> String
resp.partitions[0].storage_descriptor.input_format #=> String
resp.partitions[0].storage_descriptor.output_format #=> String
resp.partitions[0].storage_descriptor.compressed #=> true/false
resp.partitions[0].storage_descriptor.number_of_buckets #=> Integer
resp.partitions[0].storage_descriptor.serde_info.name #=> String
resp.partitions[0].storage_descriptor.serde_info.serialization_library #=> String
resp.partitions[0].storage_descriptor.serde_info.parameters #=> Hash
resp.partitions[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.bucket_columns #=> Array
resp.partitions[0].storage_descriptor.bucket_columns[0] #=> String
resp.partitions[0].storage_descriptor.sort_columns #=> Array
resp.partitions[0].storage_descriptor.sort_columns[0].column #=> String
resp.partitions[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.partitions[0].storage_descriptor.parameters #=> Hash
resp.partitions[0].storage_descriptor.parameters["KeyString"] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.partitions[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.partitions[0].storage_descriptor.stored_as_sub_directories #=> true/false
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.partitions[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.partitions[0].parameters #=> Hash
resp.partitions[0].parameters["KeyString"] #=> String
resp.partitions[0].last_analyzed_time #=> Time
resp.partitions[0].catalog_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :expression (String)

    An expression that filters the partitions to be returned.

    The expression uses SQL syntax similar to the SQL WHERE filter clause. The SQL statement parser JSQLParser parses the expression.

    Operators: The following are the operators that you can use in the Expression API call:

    =

    : Checks whether the values of the two operands are equal; if yes, then the condition becomes true.

    Example: Assume \'variable a\' holds 10 and \'variable b\' holds 20.

    (a = b) is not true.

    < >

    Checks whether the values of two operands are equal; if the values are not equal, then the condition becomes true.

    Example: (a < > b) is true.

    >

    Checks whether the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.

    Example: (a > b) is not true.

    <

    Checks whether the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.

    Example: (a < b) is true.

    >=

    : Checks whether the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.

    Example: (a >= b) is not true.

    <=

    : Checks whether the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.

    Example: (a <= b) is true.

    AND, OR, IN, BETWEEN, LIKE, NOT, IS NULL

    Logical operators.

    Supported Partition Key Types: The following are the supported partition keys.

    • string

    • date

    • timestamp

    • int

    • bigint

    • long

    • tinyint

    • smallint

    • decimal

    If an invalid type is encountered, an exception is thrown.

    The following list shows the valid operators on each type. When you define a crawler, the partitionKey type is created as a STRING, to be compatible with the catalog partitions.

    Sample API Call:

  • :next_token (String)

    A continuation token, if this is not the first call to retrieve these partitions.

  • :segment (Types::Segment)

    The segment of the table\'s partitions to scan in this request.

  • :max_results (Integer)

    The maximum number of partitions to return in a single response.

Returns:

See Also:

#get_plan(options = {}) ⇒ Types::GetPlanResponse

Gets code to perform a specified mapping.

Examples:

Request syntax with placeholder values


resp = client.get_plan({
  mapping: [ # required
    {
      source_table: "TableName",
      source_path: "SchemaPathString",
      source_type: "FieldType",
      target_table: "TableName",
      target_path: "SchemaPathString",
      target_type: "FieldType",
    },
  ],
  source: { # required
    database_name: "NameString", # required
    table_name: "NameString", # required
  },
  sinks: [
    {
      database_name: "NameString", # required
      table_name: "NameString", # required
    },
  ],
  location: {
    jdbc: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    s3: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
    dynamo_db: [
      {
        name: "CodeGenArgName", # required
        value: "CodeGenArgValue", # required
        param: false,
      },
    ],
  },
  language: "PYTHON", # accepts PYTHON, SCALA
  additional_plan_options_map: {
    "GenericString" => "GenericString",
  },
})

Response structure


resp.python_script #=> String
resp.scala_code #=> String

Options Hash (options):

  • :mapping (required, Array<Types::MappingEntry>)

    The list of mappings from a source table to target tables.

  • :source (required, Types::CatalogEntry)

    The source table.

  • :sinks (Array<Types::CatalogEntry>)

    The target tables.

  • :location (Types::Location)

    The parameters for the mapping.

  • :language (String)

    The programming language of the code to perform the mapping.

  • :additional_plan_options_map (Hash<String,String>)

    A map to hold additional optional key-value parameters.

    Currently, these key-value pairs are supported:

    • inferSchema  —  Specifies whether to set inferSchema to true or false for the default script generated by an AWS Glue job. For example, to set inferSchema to true, pass the following key value pair:

      --additional-plan-options-map '`{"inferSchema":"true"}`'

Returns:

See Also:

#get_registry(options = {}) ⇒ Types::GetRegistryResponse

Describes the specified registry in detail.

Examples:

Request syntax with placeholder values


resp = client.get_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.description #=> String
resp.status #=> String, one of "AVAILABLE", "DELETING"
resp.created_time #=> String
resp.updated_time #=> String

Options Hash (options):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

Returns:

See Also:

#get_resource_policies(options = {}) ⇒ Types::GetResourcePoliciesResponse

Retrieves the security configurations for the resource policies set on individual resources, and also the account-level policy.

This operation also returns the Data Catalog resource policy. However, if you enabled metadata encryption in Data Catalog settings, and you do not have permission on the AWS KMS key, the operation can't return the Data Catalog resource policy.

Examples:

Request syntax with placeholder values


resp = client.get_resource_policies({
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.get_resource_policies_response_list #=> Array
resp.get_resource_policies_response_list[0].policy_in_json #=> String
resp.get_resource_policies_response_list[0].policy_hash #=> String
resp.get_resource_policies_response_list[0].create_time #=> Time
resp.get_resource_policies_response_list[0].update_time #=> Time
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

Returns:

See Also:

#get_resource_policy(options = {}) ⇒ Types::GetResourcePolicyResponse

Retrieves a specified resource policy.

Examples:

Request syntax with placeholder values


resp = client.get_resource_policy({
  resource_arn: "GlueResourceArn",
})

Response structure


resp.policy_in_json #=> String
resp.policy_hash #=> String
resp.create_time #=> Time
resp.update_time #=> Time

Options Hash (options):

  • :resource_arn (String)

    The ARN of the AWS Glue resource for the resource policy to be retrieved. For more information about AWS Glue resource ARNs, see the AWS Glue ARN string pattern

Returns:

See Also:

#get_schema(options = {}) ⇒ Types::GetSchemaResponse

Describes the specified schema in detail.

Examples:

Request syntax with placeholder values


resp = client.get_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String
resp.schema_name #=> String
resp.schema_arn #=> String
resp.description #=> String
resp.data_format #=> String, one of "AVRO"
resp.compatibility #=> String, one of "NONE", "DISABLED", "BACKWARD", "BACKWARD_ALL", "FORWARD", "FORWARD_ALL", "FULL", "FULL_ALL"
resp.schema_checkpoint #=> Integer
resp.latest_schema_version #=> Integer
resp.next_schema_version #=> Integer
resp.schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.created_time #=> String
resp.updated_time #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

Returns:

See Also:

#get_schema_by_definition(options = {}) ⇒ Types::GetSchemaByDefinitionResponse

Retrieves a schema by the SchemaDefinition. The schema definition is sent to the Schema Registry, canonicalized, and hashed. If the hash is matched within the scope of the SchemaName or ARN (or the default registry, if none is supplied), that schema’s metadata is returned. Otherwise, a 404 or NotFound error is returned. Schema versions in Deleted statuses will not be included in the results.

Examples:

Request syntax with placeholder values


resp = client.get_schema_by_definition({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.schema_version_id #=> String
resp.schema_arn #=> String
resp.data_format #=> String, one of "AVRO"
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.created_time #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.

  • :schema_definition (required, String)

    The definition of the schema for which schema details are required.

Returns:

See Also:

#get_schema_version(options = {}) ⇒ Types::GetSchemaVersionResponse

Get the specified schema by its unique ID assigned when a version of the schema is created or registered. Schema versions in Deleted status will not be included in the results.

Examples:

Request syntax with placeholder values


resp = client.get_schema_version({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_id: "SchemaVersionIdString",
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
})

Response structure


resp.schema_version_id #=> String
resp.schema_definition #=> String
resp.data_format #=> String, one of "AVRO"
resp.schema_arn #=> String
resp.version_number #=> Integer
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.created_time #=> String

Options Hash (options):

  • :schema_id (Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

  • :schema_version_id (String)

    The SchemaVersionId of the schema version. This field is required for fetching by schema ID. Either this or the SchemaId wrapper has to be provided.

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

Returns:

See Also:

#get_schema_versions_diff(options = {}) ⇒ Types::GetSchemaVersionsDiffResponse

Fetches the schema version difference in the specified difference type between two stored schema versions in the Schema Registry.

This API allows you to compare two schema versions between two schema definitions under the same schema.

Examples:

Request syntax with placeholder values


resp = client.get_schema_versions_diff({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  first_schema_version_number: { # required
    latest_version: false,
    version_number: 1,
  },
  second_schema_version_number: { # required
    latest_version: false,
    version_number: 1,
  },
  schema_diff_type: "SYNTAX_DIFF", # required, accepts SYNTAX_DIFF
})

Response structure


resp.diff #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.

  • :first_schema_version_number (required, Types::SchemaVersionNumber)

    The first of the two schema versions to be compared.

  • :second_schema_version_number (required, Types::SchemaVersionNumber)

    The second of the two schema versions to be compared.

  • :schema_diff_type (required, String)

    Refers to SYNTAX_DIFF, which is the currently supported diff type.

Returns:

See Also:

#get_security_configuration(options = {}) ⇒ Types::GetSecurityConfigurationResponse

Retrieves a specified security configuration.

Examples:

Request syntax with placeholder values


resp = client.get_security_configuration({
  name: "NameString", # required
})

Response structure


resp.security_configuration.name #=> String
resp.security_configuration.created_time_stamp #=> Time
resp.security_configuration.encryption_configuration.s3_encryption #=> Array
resp.security_configuration.encryption_configuration.s3_encryption[0].s3_encryption_mode #=> String, one of "DISABLED", "SSE-KMS", "SSE-S3"
resp.security_configuration.encryption_configuration.s3_encryption[0].kms_key_arn #=> String
resp.security_configuration.encryption_configuration.cloud_watch_encryption.cloud_watch_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configuration.encryption_configuration.cloud_watch_encryption.kms_key_arn #=> String
resp.security_configuration.encryption_configuration.job_bookmarks_encryption.job_bookmarks_encryption_mode #=> String, one of "DISABLED", "CSE-KMS"
resp.security_configuration.encryption_configuration.job_bookmarks_encryption.kms_key_arn #=> String

Options Hash (options):

  • :name (required, String)

    The name of the security configuration to retrieve.

Returns:

See Also:

#get_security_configurations(options = {}) ⇒ Types::GetSecurityConfigurationsResponse

Retrieves a list of all security configurations.

Examples:

Request syntax with placeholder values


resp = client.get_security_configurations({
  max_results: 1,
  next_token: "GenericString",
})

Response structure


resp.security_configurations #=> Array
resp.security_configurations[0].name #=> String
resp.security_configurations[0].created_time_stamp #=> Time
resp.security_configurations[0].encryption_configuration.s3_encryption #=> Array
resp.security_configurations[0].encryption_configuration.s3_encryption[0].s3_encryption_mode #=> String, one of "DISABLED", "SSE-KMS", "SSE-S3"
resp.security_configurations[0].encryption_configuration.s3_encryption[0].kms_key_arn #=> String
resp.security_configurations[0].encryption_configuration.cloud_watch_encryption.cloud_watch_encryption_mode #=> String, one of "DISABLED", "SSE-KMS"
resp.security_configurations[0].encryption_configuration.cloud_watch_encryption.kms_key_arn #=> String
resp.security_configurations[0].encryption_configuration.job_bookmarks_encryption.job_bookmarks_encryption_mode #=> String, one of "DISABLED", "CSE-KMS"
resp.security_configurations[0].encryption_configuration.job_bookmarks_encryption.kms_key_arn #=> String
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

    The maximum number of results to return.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#get_table(options = {}) ⇒ Types::GetTableResponse

Retrieves the Table definition in a Data Catalog for a specified table.

Examples:

Request syntax with placeholder values


resp = client.get_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  name: "NameString", # required
})

Response structure


resp.table.name #=> String
resp.table.database_name #=> String
resp.table.description #=> String
resp.table.owner #=> String
resp.table.create_time #=> Time
resp.table.update_time #=> Time
resp.table.last_access_time #=> Time
resp.table.last_analyzed_time #=> Time
resp.table.retention #=> Integer
resp.table.storage_descriptor.columns #=> Array
resp.table.storage_descriptor.columns[0].name #=> String
resp.table.storage_descriptor.columns[0].type #=> String
resp.table.storage_descriptor.columns[0].comment #=> String
resp.table.storage_descriptor.columns[0].parameters #=> Hash
resp.table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table.storage_descriptor.location #=> String
resp.table.storage_descriptor.input_format #=> String
resp.table.storage_descriptor.output_format #=> String
resp.table.storage_descriptor.compressed #=> true/false
resp.table.storage_descriptor.number_of_buckets #=> Integer
resp.table.storage_descriptor.serde_info.name #=> String
resp.table.storage_descriptor.serde_info.serialization_library #=> String
resp.table.storage_descriptor.serde_info.parameters #=> Hash
resp.table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table.storage_descriptor.bucket_columns #=> Array
resp.table.storage_descriptor.bucket_columns[0] #=> String
resp.table.storage_descriptor.sort_columns #=> Array
resp.table.storage_descriptor.sort_columns[0].column #=> String
resp.table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table.storage_descriptor.parameters #=> Hash
resp.table.storage_descriptor.parameters["KeyString"] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table.storage_descriptor.stored_as_sub_directories #=> true/false
resp.table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table.partition_keys #=> Array
resp.table.partition_keys[0].name #=> String
resp.table.partition_keys[0].type #=> String
resp.table.partition_keys[0].comment #=> String
resp.table.partition_keys[0].parameters #=> Hash
resp.table.partition_keys[0].parameters["KeyString"] #=> String
resp.table.view_original_text #=> String
resp.table.view_expanded_text #=> String
resp.table.table_type #=> String
resp.table.parameters #=> Hash
resp.table.parameters["KeyString"] #=> String
resp.table.created_by #=> String
resp.table.is_registered_with_lake_formation #=> true/false
resp.table.target_table.catalog_id #=> String
resp.table.target_table.database_name #=> String
resp.table.target_table.name #=> String
resp.table.catalog_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :name (required, String)

    The name of the table for which to retrieve the definition. For Hive compatibility, this name is entirely lowercase.

Returns:

See Also:

#get_table_version(options = {}) ⇒ Types::GetTableVersionResponse

Retrieves a specified version of a table.

Examples:

Request syntax with placeholder values


resp = client.get_table_version({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  version_id: "VersionString",
})

Response structure


resp.table_version.table.name #=> String
resp.table_version.table.database_name #=> String
resp.table_version.table.description #=> String
resp.table_version.table.owner #=> String
resp.table_version.table.create_time #=> Time
resp.table_version.table.update_time #=> Time
resp.table_version.table.last_access_time #=> Time
resp.table_version.table.last_analyzed_time #=> Time
resp.table_version.table.retention #=> Integer
resp.table_version.table.storage_descriptor.columns #=> Array
resp.table_version.table.storage_descriptor.columns[0].name #=> String
resp.table_version.table.storage_descriptor.columns[0].type #=> String
resp.table_version.table.storage_descriptor.columns[0].comment #=> String
resp.table_version.table.storage_descriptor.columns[0].parameters #=> Hash
resp.table_version.table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.location #=> String
resp.table_version.table.storage_descriptor.input_format #=> String
resp.table_version.table.storage_descriptor.output_format #=> String
resp.table_version.table.storage_descriptor.compressed #=> true/false
resp.table_version.table.storage_descriptor.number_of_buckets #=> Integer
resp.table_version.table.storage_descriptor.serde_info.name #=> String
resp.table_version.table.storage_descriptor.serde_info.serialization_library #=> String
resp.table_version.table.storage_descriptor.serde_info.parameters #=> Hash
resp.table_version.table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.bucket_columns #=> Array
resp.table_version.table.storage_descriptor.bucket_columns[0] #=> String
resp.table_version.table.storage_descriptor.sort_columns #=> Array
resp.table_version.table.storage_descriptor.sort_columns[0].column #=> String
resp.table_version.table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_version.table.storage_descriptor.parameters #=> Hash
resp.table_version.table.storage_descriptor.parameters["KeyString"] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_version.table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_version.table.storage_descriptor.stored_as_sub_directories #=> true/false
resp.table_version.table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_version.table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_version.table.partition_keys #=> Array
resp.table_version.table.partition_keys[0].name #=> String
resp.table_version.table.partition_keys[0].type #=> String
resp.table_version.table.partition_keys[0].comment #=> String
resp.table_version.table.partition_keys[0].parameters #=> Hash
resp.table_version.table.partition_keys[0].parameters["KeyString"] #=> String
resp.table_version.table.view_original_text #=> String
resp.table_version.table.view_expanded_text #=> String
resp.table_version.table.table_type #=> String
resp.table_version.table.parameters #=> Hash
resp.table_version.table.parameters["KeyString"] #=> String
resp.table_version.table.created_by #=> String
resp.table_version.table.is_registered_with_lake_formation #=> true/false
resp.table_version.table.target_table.catalog_id #=> String
resp.table_version.table.target_table.database_name #=> String
resp.table_version.table.target_table.name #=> String
resp.table_version.table.catalog_id #=> String
resp.table_version.version_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :version_id (String)

    The ID value of the table version to be retrieved. A VersionID is a string representation of an integer. Each version is incremented by 1.

Returns:

See Also:

#get_table_versions(options = {}) ⇒ Types::GetTableVersionsResponse

Retrieves a list of strings that identify available versions of a specified table.

Examples:

Request syntax with placeholder values


resp = client.get_table_versions({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.table_versions #=> Array
resp.table_versions[0].table.name #=> String
resp.table_versions[0].table.database_name #=> String
resp.table_versions[0].table.description #=> String
resp.table_versions[0].table.owner #=> String
resp.table_versions[0].table.create_time #=> Time
resp.table_versions[0].table.update_time #=> Time
resp.table_versions[0].table.last_access_time #=> Time
resp.table_versions[0].table.last_analyzed_time #=> Time
resp.table_versions[0].table.retention #=> Integer
resp.table_versions[0].table.storage_descriptor.columns #=> Array
resp.table_versions[0].table.storage_descriptor.columns[0].name #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].type #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].comment #=> String
resp.table_versions[0].table.storage_descriptor.columns[0].parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.location #=> String
resp.table_versions[0].table.storage_descriptor.input_format #=> String
resp.table_versions[0].table.storage_descriptor.output_format #=> String
resp.table_versions[0].table.storage_descriptor.compressed #=> true/false
resp.table_versions[0].table.storage_descriptor.number_of_buckets #=> Integer
resp.table_versions[0].table.storage_descriptor.serde_info.name #=> String
resp.table_versions[0].table.storage_descriptor.serde_info.serialization_library #=> String
resp.table_versions[0].table.storage_descriptor.serde_info.parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.bucket_columns #=> Array
resp.table_versions[0].table.storage_descriptor.bucket_columns[0] #=> String
resp.table_versions[0].table.storage_descriptor.sort_columns #=> Array
resp.table_versions[0].table.storage_descriptor.sort_columns[0].column #=> String
resp.table_versions[0].table.storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_versions[0].table.storage_descriptor.parameters #=> Hash
resp.table_versions[0].table.storage_descriptor.parameters["KeyString"] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_versions[0].table.storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_versions[0].table.storage_descriptor.stored_as_sub_directories #=> true/false
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_versions[0].table.storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_versions[0].table.partition_keys #=> Array
resp.table_versions[0].table.partition_keys[0].name #=> String
resp.table_versions[0].table.partition_keys[0].type #=> String
resp.table_versions[0].table.partition_keys[0].comment #=> String
resp.table_versions[0].table.partition_keys[0].parameters #=> Hash
resp.table_versions[0].table.partition_keys[0].parameters["KeyString"] #=> String
resp.table_versions[0].table.view_original_text #=> String
resp.table_versions[0].table.view_expanded_text #=> String
resp.table_versions[0].table.table_type #=> String
resp.table_versions[0].table.parameters #=> Hash
resp.table_versions[0].table.parameters["KeyString"] #=> String
resp.table_versions[0].table.created_by #=> String
resp.table_versions[0].table.is_registered_with_lake_formation #=> true/false
resp.table_versions[0].table.target_table.catalog_id #=> String
resp.table_versions[0].table.target_table.database_name #=> String
resp.table_versions[0].table.target_table.name #=> String
resp.table_versions[0].table.catalog_id #=> String
resp.table_versions[0].version_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_name (required, String)

    The name of the table. For Hive compatibility, this name is entirely lowercase.

  • :next_token (String)

    A continuation token, if this is not the first call.

  • :max_results (Integer)

    The maximum number of table versions to return in one response.

Returns:

See Also:

#get_tables(options = {}) ⇒ Types::GetTablesResponse

Retrieves the definitions of some or all of the tables in a given Database.

Examples:

Request syntax with placeholder values


resp = client.get_tables({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  expression: "FilterString",
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.table_list #=> Array
resp.table_list[0].name #=> String
resp.table_list[0].database_name #=> String
resp.table_list[0].description #=> String
resp.table_list[0].owner #=> String
resp.table_list[0].create_time #=> Time
resp.table_list[0].update_time #=> Time
resp.table_list[0].last_access_time #=> Time
resp.table_list[0].last_analyzed_time #=> Time
resp.table_list[0].retention #=> Integer
resp.table_list[0].storage_descriptor.columns #=> Array
resp.table_list[0].storage_descriptor.columns[0].name #=> String
resp.table_list[0].storage_descriptor.columns[0].type #=> String
resp.table_list[0].storage_descriptor.columns[0].comment #=> String
resp.table_list[0].storage_descriptor.columns[0].parameters #=> Hash
resp.table_list[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.location #=> String
resp.table_list[0].storage_descriptor.input_format #=> String
resp.table_list[0].storage_descriptor.output_format #=> String
resp.table_list[0].storage_descriptor.compressed #=> true/false
resp.table_list[0].storage_descriptor.number_of_buckets #=> Integer
resp.table_list[0].storage_descriptor.serde_info.name #=> String
resp.table_list[0].storage_descriptor.serde_info.serialization_library #=> String
resp.table_list[0].storage_descriptor.serde_info.parameters #=> Hash
resp.table_list[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.bucket_columns #=> Array
resp.table_list[0].storage_descriptor.bucket_columns[0] #=> String
resp.table_list[0].storage_descriptor.sort_columns #=> Array
resp.table_list[0].storage_descriptor.sort_columns[0].column #=> String
resp.table_list[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_list[0].storage_descriptor.parameters #=> Hash
resp.table_list[0].storage_descriptor.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_list[0].storage_descriptor.stored_as_sub_directories #=> true/false
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_list[0].partition_keys #=> Array
resp.table_list[0].partition_keys[0].name #=> String
resp.table_list[0].partition_keys[0].type #=> String
resp.table_list[0].partition_keys[0].comment #=> String
resp.table_list[0].partition_keys[0].parameters #=> Hash
resp.table_list[0].partition_keys[0].parameters["KeyString"] #=> String
resp.table_list[0].view_original_text #=> String
resp.table_list[0].view_expanded_text #=> String
resp.table_list[0].table_type #=> String
resp.table_list[0].parameters #=> Hash
resp.table_list[0].parameters["KeyString"] #=> String
resp.table_list[0].created_by #=> String
resp.table_list[0].is_registered_with_lake_formation #=> true/false
resp.table_list[0].target_table.catalog_id #=> String
resp.table_list[0].target_table.database_name #=> String
resp.table_list[0].target_table.name #=> String
resp.table_list[0].catalog_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the tables reside. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The database in the catalog whose tables to list. For Hive compatibility, this name is entirely lowercase.

  • :expression (String)

    A regular expression pattern. If present, only those tables whose names match the pattern are returned.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :max_results (Integer)

    The maximum number of tables to return in a single response.

Returns:

See Also:

#get_tags(options = {}) ⇒ Types::GetTagsResponse

Retrieves a list of tags associated with a resource.

Examples:

Request syntax with placeholder values


resp = client.get_tags({
  resource_arn: "GlueResourceArn", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Options Hash (options):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the resource for which to retrieve tags.

Returns:

See Also:

#get_trigger(options = {}) ⇒ Types::GetTriggerResponse

Retrieves the definition of a trigger.

Examples:

Request syntax with placeholder values


resp = client.get_trigger({
  name: "NameString", # required
})

Response structure


resp.trigger.name #=> String
resp.trigger.workflow_name #=> String
resp.trigger.id #=> String
resp.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.trigger.description #=> String
resp.trigger.schedule #=> String
resp.trigger.actions #=> Array
resp.trigger.actions[0].job_name #=> String
resp.trigger.actions[0].arguments #=> Hash
resp.trigger.actions[0].arguments["GenericString"] #=> String
resp.trigger.actions[0].timeout #=> Integer
resp.trigger.actions[0].security_configuration #=> String
resp.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.trigger.actions[0].crawler_name #=> String
resp.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.trigger.predicate.conditions #=> Array
resp.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.trigger.predicate.conditions[0].job_name #=> String
resp.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.trigger.predicate.conditions[0].crawler_name #=> String
resp.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"

Options Hash (options):

  • :name (required, String)

    The name of the trigger to retrieve.

Returns:

See Also:

#get_triggers(options = {}) ⇒ Types::GetTriggersResponse

Gets all the triggers associated with a job.

Examples:

Request syntax with placeholder values


resp = client.get_triggers({
  next_token: "GenericString",
  dependent_job_name: "NameString",
  max_results: 1,
})

Response structure


resp.triggers #=> Array
resp.triggers[0].name #=> String
resp.triggers[0].workflow_name #=> String
resp.triggers[0].id #=> String
resp.triggers[0].type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.triggers[0].state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.triggers[0].description #=> String
resp.triggers[0].schedule #=> String
resp.triggers[0].actions #=> Array
resp.triggers[0].actions[0].job_name #=> String
resp.triggers[0].actions[0].arguments #=> Hash
resp.triggers[0].actions[0].arguments["GenericString"] #=> String
resp.triggers[0].actions[0].timeout #=> Integer
resp.triggers[0].actions[0].security_configuration #=> String
resp.triggers[0].actions[0].notification_property.notify_delay_after #=> Integer
resp.triggers[0].actions[0].crawler_name #=> String
resp.triggers[0].predicate.logical #=> String, one of "AND", "ANY"
resp.triggers[0].predicate.conditions #=> Array
resp.triggers[0].predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.triggers[0].predicate.conditions[0].job_name #=> String
resp.triggers[0].predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.triggers[0].predicate.conditions[0].crawler_name #=> String
resp.triggers[0].predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :dependent_job_name (String)

    The name of the job to retrieve triggers for. The trigger that can start this job is returned, and if there is no such trigger, all triggers are returned.

  • :max_results (Integer)

    The maximum size of the response.

Returns:

See Also:

#get_user_defined_function(options = {}) ⇒ Types::GetUserDefinedFunctionResponse

Retrieves a specified function definition from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
})

Response structure


resp.user_defined_function.function_name #=> String
resp.user_defined_function.database_name #=> String
resp.user_defined_function.class_name #=> String
resp.user_defined_function.owner_name #=> String
resp.user_defined_function.owner_type #=> String, one of "USER", "ROLE", "GROUP"
resp.user_defined_function.create_time #=> Time
resp.user_defined_function.resource_uris #=> Array
resp.user_defined_function.resource_uris[0].resource_type #=> String, one of "JAR", "FILE", "ARCHIVE"
resp.user_defined_function.resource_uris[0].uri #=> String
resp.user_defined_function.catalog_id #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be retrieved is located. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function is located.

  • :function_name (required, String)

    The name of the function.

Returns:

See Also:

#get_user_defined_functions(options = {}) ⇒ Types::GetUserDefinedFunctionsResponse

Retrieves multiple function definitions from the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.get_user_defined_functions({
  catalog_id: "CatalogIdString",
  database_name: "NameString",
  pattern: "NameString", # required
  next_token: "Token",
  max_results: 1,
})

Response structure


resp.user_defined_functions #=> Array
resp.user_defined_functions[0].function_name #=> String
resp.user_defined_functions[0].database_name #=> String
resp.user_defined_functions[0].class_name #=> String
resp.user_defined_functions[0].owner_name #=> String
resp.user_defined_functions[0].owner_type #=> String, one of "USER", "ROLE", "GROUP"
resp.user_defined_functions[0].create_time #=> Time
resp.user_defined_functions[0].resource_uris #=> Array
resp.user_defined_functions[0].resource_uris[0].resource_type #=> String, one of "JAR", "FILE", "ARCHIVE"
resp.user_defined_functions[0].resource_uris[0].uri #=> String
resp.user_defined_functions[0].catalog_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the functions to be retrieved are located. If none is provided, the AWS account ID is used by default.

  • :database_name (String)

    The name of the catalog database where the functions are located. If none is provided, functions from all the databases across the catalog will be returned.

  • :pattern (required, String)

    An optional function-name pattern string that filters the function definitions returned.

  • :next_token (String)

    A continuation token, if this is a continuation call.

  • :max_results (Integer)

    The maximum number of functions to return in one response.

Returns:

See Also:

#get_workflow(options = {}) ⇒ Types::GetWorkflowResponse

Retrieves resource metadata for a workflow.

Examples:

Request syntax with placeholder values


resp = client.get_workflow({
  name: "NameString", # required
  include_graph: false,
})

Response structure


resp.workflow.name #=> String
resp.workflow.description #=> String
resp.workflow.default_run_properties #=> Hash
resp.workflow.default_run_properties["IdString"] #=> String
resp.workflow.created_on #=> Time
resp.workflow.last_modified_on #=> Time
resp.workflow.last_run.name #=> String
resp.workflow.last_run.workflow_run_id #=> String
resp.workflow.last_run.previous_run_id #=> String
resp.workflow.last_run.workflow_run_properties #=> Hash
resp.workflow.last_run.workflow_run_properties["IdString"] #=> String
resp.workflow.last_run.started_on #=> Time
resp.workflow.last_run.completed_on #=> Time
resp.workflow.last_run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.workflow.last_run.error_message #=> String
resp.workflow.last_run.statistics.total_actions #=> Integer
resp.workflow.last_run.statistics.timeout_actions #=> Integer
resp.workflow.last_run.statistics.failed_actions #=> Integer
resp.workflow.last_run.statistics.stopped_actions #=> Integer
resp.workflow.last_run.statistics.succeeded_actions #=> Integer
resp.workflow.last_run.statistics.running_actions #=> Integer
resp.workflow.last_run.graph.nodes #=> Array
resp.workflow.last_run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflow.last_run.graph.nodes[0].name #=> String
resp.workflow.last_run.graph.nodes[0].unique_id #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflow.last_run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs #=> Array
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflow.last_run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflow.last_run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflow.last_run.graph.edges #=> Array
resp.workflow.last_run.graph.edges[0].source_id #=> String
resp.workflow.last_run.graph.edges[0].destination_id #=> String
resp.workflow.graph.nodes #=> Array
resp.workflow.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.workflow.graph.nodes[0].name #=> String
resp.workflow.graph.nodes[0].unique_id #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.id #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.workflow.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.workflow.graph.nodes[0].trigger_details.trigger.description #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.workflow.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.workflow.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflow.graph.nodes[0].job_details.job_runs #=> Array
resp.workflow.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.workflow.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.workflow.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.workflow.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.workflow.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.workflow.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.workflow.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.workflow.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls #=> Array
resp.workflow.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.workflow.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.workflow.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.workflow.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.workflow.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.workflow.graph.edges #=> Array
resp.workflow.graph.edges[0].source_id #=> String
resp.workflow.graph.edges[0].destination_id #=> String
resp.workflow.max_concurrent_runs #=> Integer

Options Hash (options):

  • :name (required, String)

    The name of the workflow to retrieve.

  • :include_graph (Boolean)

    Specifies whether to include a graph when returning the workflow resource metadata.

Returns:

See Also:

#get_workflow_run(options = {}) ⇒ Types::GetWorkflowRunResponse

Retrieves the metadata for a given workflow run.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
  include_graph: false,
})

Response structure


resp.run.name #=> String
resp.run.workflow_run_id #=> String
resp.run.previous_run_id #=> String
resp.run.workflow_run_properties #=> Hash
resp.run.workflow_run_properties["IdString"] #=> String
resp.run.started_on #=> Time
resp.run.completed_on #=> Time
resp.run.status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.run.error_message #=> String
resp.run.statistics.total_actions #=> Integer
resp.run.statistics.timeout_actions #=> Integer
resp.run.statistics.failed_actions #=> Integer
resp.run.statistics.stopped_actions #=> Integer
resp.run.statistics.succeeded_actions #=> Integer
resp.run.statistics.running_actions #=> Integer
resp.run.graph.nodes #=> Array
resp.run.graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.run.graph.nodes[0].name #=> String
resp.run.graph.nodes[0].unique_id #=> String
resp.run.graph.nodes[0].trigger_details.trigger.name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.id #=> String
resp.run.graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.run.graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.run.graph.nodes[0].trigger_details.trigger.description #=> String
resp.run.graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.run.graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.run.graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.run.graph.nodes[0].job_details.job_runs #=> Array
resp.run.graph.nodes[0].job_details.job_runs[0].id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.run.graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.run.graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.run.graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.run.graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.run.graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.run.graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.run.graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.run.graph.nodes[0].crawler_details.crawls #=> Array
resp.run.graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.run.graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.run.graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.run.graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.run.graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.run.graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.run.graph.edges #=> Array
resp.run.graph.edges[0].source_id #=> String
resp.run.graph.edges[0].destination_id #=> String

Options Hash (options):

  • :name (required, String)

    Name of the workflow being run.

  • :run_id (required, String)

    The ID of the workflow run.

  • :include_graph (Boolean)

    Specifies whether to include the workflow graph in response or not.

Returns:

See Also:

#get_workflow_run_properties(options = {}) ⇒ Types::GetWorkflowRunPropertiesResponse

Retrieves the workflow run properties which were set during the run.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_run_properties({
  name: "NameString", # required
  run_id: "IdString", # required
})

Response structure


resp.run_properties #=> Hash
resp.run_properties["IdString"] #=> String

Options Hash (options):

  • :name (required, String)

    Name of the workflow which was run.

  • :run_id (required, String)

    The ID of the workflow run whose run properties should be returned.

Returns:

See Also:

#get_workflow_runs(options = {}) ⇒ Types::GetWorkflowRunsResponse

Retrieves metadata for all runs of a given workflow.

Examples:

Request syntax with placeholder values


resp = client.get_workflow_runs({
  name: "NameString", # required
  include_graph: false,
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.runs #=> Array
resp.runs[0].name #=> String
resp.runs[0].workflow_run_id #=> String
resp.runs[0].previous_run_id #=> String
resp.runs[0].workflow_run_properties #=> Hash
resp.runs[0].workflow_run_properties["IdString"] #=> String
resp.runs[0].started_on #=> Time
resp.runs[0].completed_on #=> Time
resp.runs[0].status #=> String, one of "RUNNING", "COMPLETED", "STOPPING", "STOPPED", "ERROR"
resp.runs[0].error_message #=> String
resp.runs[0].statistics.total_actions #=> Integer
resp.runs[0].statistics.timeout_actions #=> Integer
resp.runs[0].statistics.failed_actions #=> Integer
resp.runs[0].statistics.stopped_actions #=> Integer
resp.runs[0].statistics.succeeded_actions #=> Integer
resp.runs[0].statistics.running_actions #=> Integer
resp.runs[0].graph.nodes #=> Array
resp.runs[0].graph.nodes[0].type #=> String, one of "CRAWLER", "JOB", "TRIGGER"
resp.runs[0].graph.nodes[0].name #=> String
resp.runs[0].graph.nodes[0].unique_id #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.workflow_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.id #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.runs[0].graph.nodes[0].trigger_details.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.runs[0].graph.nodes[0].trigger_details.trigger.description #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.schedule #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions #=> Array
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].job_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments #=> Hash
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].arguments["GenericString"] #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].timeout #=> Integer
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].security_configuration #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.runs[0].graph.nodes[0].trigger_details.trigger.actions[0].crawler_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions #=> Array
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].job_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawler_name #=> String
resp.runs[0].graph.nodes[0].trigger_details.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.runs[0].graph.nodes[0].job_details.job_runs #=> Array
resp.runs[0].graph.nodes[0].job_details.job_runs[0].id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].attempt #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].previous_run_id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].trigger_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].started_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].last_modified_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].completed_on #=> Time
resp.runs[0].graph.nodes[0].job_details.job_runs[0].job_run_state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].arguments #=> Hash
resp.runs[0].graph.nodes[0].job_details.job_runs[0].arguments["GenericString"] #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].error_message #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs #=> Array
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].job_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].predecessor_runs[0].run_id #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].allocated_capacity #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].execution_time #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].timeout #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].max_capacity #=> Float
resp.runs[0].graph.nodes[0].job_details.job_runs[0].worker_type #=> String, one of "Standard", "G.1X", "G.2X"
resp.runs[0].graph.nodes[0].job_details.job_runs[0].number_of_workers #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].security_configuration #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].log_group_name #=> String
resp.runs[0].graph.nodes[0].job_details.job_runs[0].notification_property.notify_delay_after #=> Integer
resp.runs[0].graph.nodes[0].job_details.job_runs[0].glue_version #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls #=> Array
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].started_on #=> Time
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].completed_on #=> Time
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].error_message #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].log_group #=> String
resp.runs[0].graph.nodes[0].crawler_details.crawls[0].log_stream #=> String
resp.runs[0].graph.edges #=> Array
resp.runs[0].graph.edges[0].source_id #=> String
resp.runs[0].graph.edges[0].destination_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :name (required, String)

    Name of the workflow whose metadata of runs should be returned.

  • :include_graph (Boolean)

    Specifies whether to include the workflow graph in response or not.

  • :next_token (String)

    The maximum size of the response.

  • :max_results (Integer)

    The maximum number of workflow runs to be included in the response.

Returns:

See Also:

#import_catalog_to_glue(options = {}) ⇒ Struct

Imports an existing Amazon Athena Data Catalog to AWS Glue

Examples:

Request syntax with placeholder values


resp = client.import_catalog_to_glue({
  catalog_id: "CatalogIdString",
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the catalog to import. Currently, this should be the AWS account ID.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#list_crawlers(options = {}) ⇒ Types::ListCrawlersResponse

Retrieves the names of all crawler resources in this AWS account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved.

Examples:

Request syntax with placeholder values


resp = client.list_crawlers({
  max_results: 1,
  next_token: "Token",
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.crawler_names #=> Array
resp.crawler_names[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

    The maximum size of a list to return.

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

#list_dev_endpoints(options = {}) ⇒ Types::ListDevEndpointsResponse

Retrieves the names of all DevEndpoint resources in this AWS account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved.

Examples:

Request syntax with placeholder values


resp = client.list_dev_endpoints({
  next_token: "GenericString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.dev_endpoint_names #=> Array
resp.dev_endpoint_names[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

#list_jobs(options = {}) ⇒ Types::ListJobsResponse

Retrieves the names of all job resources in this AWS account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved.

Examples:

Request syntax with placeholder values


resp = client.list_jobs({
  next_token: "GenericString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.job_names #=> Array
resp.job_names[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

#list_ml_transforms(options = {}) ⇒ Types::ListMLTransformsResponse

Retrieves a sortable, filterable list of existing AWS Glue machine learning transforms in this AWS account, or the resources with the specified tag. This operation takes the optional Tags field, which you can use as a filter of the responses so that tagged resources can be retrieved as a group. If you choose to use tag filtering, only resources with the tags are retrieved.

Examples:

Request syntax with placeholder values


resp = client.list_ml_transforms({
  next_token: "PaginationToken",
  max_results: 1,
  filter: {
    name: "NameString",
    transform_type: "FIND_MATCHES", # accepts FIND_MATCHES
    status: "NOT_READY", # accepts NOT_READY, READY, DELETING
    glue_version: "GlueVersionString",
    created_before: Time.now,
    created_after: Time.now,
    last_modified_before: Time.now,
    last_modified_after: Time.now,
    schema: [
      {
        name: "ColumnNameString",
        data_type: "ColumnTypeString",
      },
    ],
  },
  sort: {
    column: "NAME", # required, accepts NAME, TRANSFORM_TYPE, STATUS, CREATED, LAST_MODIFIED
    sort_direction: "DESCENDING", # required, accepts DESCENDING, ASCENDING
  },
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.transform_ids #=> Array
resp.transform_ids[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :filter (Types::TransformFilterCriteria)

    A TransformFilterCriteria used to filter the machine learning transforms.

  • :sort (Types::TransformSortCriteria)

    A TransformSortCriteria used to sort the machine learning transforms.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

#list_registries(options = {}) ⇒ Types::ListRegistriesResponse

Returns a list of registries that you have created, with minimal registry information. Registries in the Deleting status will not be included in the results. Empty results will be returned if there are no registries available.

Examples:

Request syntax with placeholder values


resp = client.list_registries({
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.registries #=> Array
resp.registries[0].registry_name #=> String
resp.registries[0].registry_arn #=> String
resp.registries[0].description #=> String
resp.registries[0].status #=> String, one of "AVAILABLE", "DELETING"
resp.registries[0].created_time #=> String
resp.registries[0].updated_time #=> String
resp.next_token #=> String

Options Hash (options):

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#list_schema_versions(options = {}) ⇒ Types::ListSchemaVersionsResponse

Returns a list of schema versions that you have created, with minimal information. Schema versions in Deleted status will not be included in the results. Empty results will be returned if there are no schema versions available.

Examples:

Request syntax with placeholder values


resp = client.list_schema_versions({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.schemas #=> Array
resp.schemas[0].schema_arn #=> String
resp.schemas[0].schema_version_id #=> String
resp.schemas[0].version_number #=> Integer
resp.schemas[0].status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"
resp.schemas[0].created_time #=> String
resp.next_token #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#list_schemas(options = {}) ⇒ Types::ListSchemasResponse

Returns a list of schemas with minimal details. Schemas in Deleting status will not be included in the results. Empty results will be returned if there are no schemas available.

When the RegistryId is not provided, all the schemas across registries will be part of the API response.

Examples:

Request syntax with placeholder values


resp = client.list_schemas({
  registry_id: {
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp.schemas #=> Array
resp.schemas[0].registry_name #=> String
resp.schemas[0].schema_name #=> String
resp.schemas[0].schema_arn #=> String
resp.schemas[0].description #=> String
resp.schemas[0].schema_status #=> String, one of "AVAILABLE", "PENDING", "DELETING"
resp.schemas[0].created_time #=> String
resp.schemas[0].updated_time #=> String
resp.next_token #=> String

Options Hash (options):

  • :registry_id (Types::RegistryId)

    A wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#list_triggers(options = {}) ⇒ Types::ListTriggersResponse

Retrieves the names of all trigger resources in this AWS account, or the resources with the specified tag. This operation allows you to see which resources are available in your account, and their names.

This operation takes the optional Tags field, which you can use as a filter on the response so that tagged resources can be retrieved as a group. If you choose to use tags filtering, only resources with the tag are retrieved.

Examples:

Request syntax with placeholder values


resp = client.list_triggers({
  next_token: "GenericString",
  dependent_job_name: "NameString",
  max_results: 1,
  tags: {
    "TagKey" => "TagValue",
  },
})

Response structure


resp.trigger_names #=> Array
resp.trigger_names[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :dependent_job_name (String)

    The name of the job for which to retrieve triggers. The trigger that can start this job is returned. If there is no such trigger, all triggers are returned.

  • :max_results (Integer)

    The maximum size of a list to return.

  • :tags (Hash<String,String>)

    Specifies to return only these tagged resources.

Returns:

See Also:

#list_workflows(options = {}) ⇒ Types::ListWorkflowsResponse

Lists names of workflows created in the account.

Examples:

Request syntax with placeholder values


resp = client.list_workflows({
  next_token: "GenericString",
  max_results: 1,
})

Response structure


resp.workflows #=> Array
resp.workflows[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    A continuation token, if this is a continuation request.

  • :max_results (Integer)

    The maximum size of a list to return.

Returns:

See Also:

#put_data_catalog_encryption_settings(options = {}) ⇒ Struct

Sets the security configuration for a specified catalog. After the configuration has been set, the specified encryption is applied to every catalog write thereafter.

Examples:

Request syntax with placeholder values


resp = client.put_data_catalog_encryption_settings({
  catalog_id: "CatalogIdString",
  data_catalog_encryption_settings: { # required
    encryption_at_rest: {
      catalog_encryption_mode: "DISABLED", # required, accepts DISABLED, SSE-KMS
      sse_aws_kms_key_id: "NameString",
    },
    connection_password_encryption: {
      return_connection_password_encrypted: false, # required
      aws_kms_key_id: "NameString",
    },
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog to set the security configuration for. If none is provided, the AWS account ID is used by default.

  • :data_catalog_encryption_settings (required, Types::DataCatalogEncryptionSettings)

    The security configuration to set.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#put_resource_policy(options = {}) ⇒ Types::PutResourcePolicyResponse

Sets the Data Catalog resource policy for access control.

Examples:

Request syntax with placeholder values


resp = client.put_resource_policy({
  policy_in_json: "PolicyJsonString", # required
  resource_arn: "GlueResourceArn",
  policy_hash_condition: "HashString",
  policy_exists_condition: "MUST_EXIST", # accepts MUST_EXIST, NOT_EXIST, NONE
  enable_hybrid: "TRUE", # accepts TRUE, FALSE
})

Response structure


resp.policy_hash #=> String

Options Hash (options):

  • :policy_in_json (required, String)

    Contains the policy document to set, in JSON format.

  • :resource_arn (String)

    The ARN of the AWS Glue resource for the resource policy to be set. For more information about AWS Glue resource ARNs, see the AWS Glue ARN string pattern

  • :policy_hash_condition (String)

    The hash value returned when the previous policy was set using PutResourcePolicy. Its purpose is to prevent concurrent modifications of a policy. Do not use this parameter if no previous policy has been set.

  • :policy_exists_condition (String)

    A value of MUST_EXIST is used to update a policy. A value of NOT_EXIST is used to create a new policy. If a value of NONE or a null value is used, the call will not depend on the existence of a policy.

  • :enable_hybrid (String)

    Allows you to specify if you want to use both resource-level and account/catalog-level resource policies. A resource-level policy is a policy attached to an individual resource such as a database or a table.

    The default value of NO indicates that resource-level policies cannot co-exist with an account-level policy. A value of YES means the use of both resource-level and account/catalog-level resource policies is allowed.

Returns:

See Also:

#put_schema_version_metadata(options = {}) ⇒ Types::PutSchemaVersionMetadataResponse

Puts the metadata key value pair for a specified schema version ID. A maximum of 10 key value pairs will be allowed per schema version. They can be added over one or more calls.

Examples:

Request syntax with placeholder values


resp = client.({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_key_value: { # required
    metadata_key: "MetadataKeyString",
    metadata_value: "MetadataValueString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String
resp.latest_version #=> true/false
resp.version_number #=> Integer
resp.schema_version_id #=> String
resp. #=> String
resp. #=> String

Options Hash (options):

Returns:

See Also:

#put_workflow_run_properties(options = {}) ⇒ Struct

Puts the specified workflow run properties for the given workflow run. If a property already exists for the specified run, then it overrides the value otherwise adds the property to existing properties.

Examples:

Request syntax with placeholder values


resp = client.put_workflow_run_properties({
  name: "NameString", # required
  run_id: "IdString", # required
  run_properties: { # required
    "IdString" => "GenericString",
  },
})

Options Hash (options):

  • :name (required, String)

    Name of the workflow which was run.

  • :run_id (required, String)

    The ID of the workflow run for which the run properties should be updated.

  • :run_properties (required, Hash<String,String>)

    The properties to put for the specified run.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#query_schema_version_metadata(options = {}) ⇒ Types::QuerySchemaVersionMetadataResponse

Queries for the schema version metadata information.

Examples:

Request syntax with placeholder values


resp = client.({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_list: [
    {
      metadata_key: "MetadataKeyString",
      metadata_value: "MetadataValueString",
    },
  ],
  max_results: 1,
  next_token: "SchemaRegistryTokenString",
})

Response structure


resp. #=> Hash
resp.["MetadataKeyString"]. #=> String
resp.["MetadataKeyString"].created_time #=> String
resp.schema_version_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :schema_id (Types::SchemaId)

    A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

  • :schema_version_id (String)

    The unique version ID of the schema version.

  • :metadata_list (Array<Types::MetadataKeyValuePair>)

    Search key-value pairs for metadata, if they are not provided all the metadata information will be fetched.

  • :max_results (Integer)

    Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.

  • :next_token (String)

    A continuation token, if this is a continuation call.

Returns:

See Also:

#register_schema_version(options = {}) ⇒ Types::RegisterSchemaVersionResponse

Adds a new version to the existing schema. Returns an error if new version of schema does not meet the compatibility requirements of the schema set. This API will not create a new schema set and will return a 404 error if the schema set is not already present in the Schema Registry.

If this is the first schema definition to be registered in the Schema Registry, this API will store the schema version and return immediately. Otherwise, this call has the potential to run longer than other operations due to compatibility modes. You can call the GetSchemaVersion API with the SchemaVersionId to check compatibility modes.

If the same schema definition is already stored in Schema Registry as a version, the schema ID of the existing schema is returned to the caller.

Examples:

Request syntax with placeholder values


resp = client.register_schema_version({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_definition: "SchemaDefinitionString", # required
})

Response structure


resp.schema_version_id #=> String
resp.version_number #=> Integer
resp.status #=> String, one of "AVAILABLE", "PENDING", "FAILURE", "DELETING"

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

    • SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.

  • :schema_definition (required, String)

    The schema definition using the DataFormat setting for the SchemaName.

Returns:

See Also:

#remove_schema_version_metadata(options = {}) ⇒ Types::RemoveSchemaVersionMetadataResponse

Removes a key value pair from the schema version metadata for the specified schema version ID.

Examples:

Request syntax with placeholder values


resp = client.({
  schema_id: {
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  schema_version_id: "SchemaVersionIdString",
  metadata_key_value: { # required
    metadata_key: "MetadataKeyString",
    metadata_value: "MetadataValueString",
  },
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String
resp.latest_version #=> true/false
resp.version_number #=> Integer
resp.schema_version_id #=> String
resp. #=> String
resp. #=> String

Options Hash (options):

  • :schema_id (Types::SchemaId)

    A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).

  • :schema_version_number (Types::SchemaVersionNumber)

    The version number of the schema.

  • :schema_version_id (String)

    The unique version ID of the schema version.

  • :metadata_key_value (required, Types::MetadataKeyValuePair)

    The value of the metadata key.

Returns:

See Also:

#reset_job_bookmark(options = {}) ⇒ Types::ResetJobBookmarkResponse

Resets a bookmark entry.

Examples:

Request syntax with placeholder values


resp = client.reset_job_bookmark({
  job_name: "JobName", # required
  run_id: "RunId",
})

Response structure


resp.job_bookmark_entry.job_name #=> String
resp.job_bookmark_entry.version #=> Integer
resp.job_bookmark_entry.run #=> Integer
resp.job_bookmark_entry.attempt #=> Integer
resp.job_bookmark_entry.previous_run_id #=> String
resp.job_bookmark_entry.run_id #=> String
resp.job_bookmark_entry.job_bookmark #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job in question.

  • :run_id (String)

    The unique run identifier associated with this job run.

Returns:

See Also:

#resume_workflow_run(options = {}) ⇒ Types::ResumeWorkflowRunResponse

Restarts selected nodes of a previous partially completed workflow run and resumes the workflow run. The selected nodes and all nodes that are downstream from the selected nodes are run.

Examples:

Request syntax with placeholder values


resp = client.resume_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
  node_ids: ["NameString"], # required
})

Response structure


resp.run_id #=> String
resp.node_ids #=> Array
resp.node_ids[0] #=> String

Options Hash (options):

  • :name (required, String)

    The name of the workflow to resume.

  • :run_id (required, String)

    The ID of the workflow run to resume.

  • :node_ids (required, Array<String>)

    A list of the node IDs for the nodes you want to restart. The nodes that are to be restarted must have a run attempt in the original run.

Returns:

See Also:

#search_tables(options = {}) ⇒ Types::SearchTablesResponse

Searches a set of tables based on properties in the table metadata as well as on the parent database. You can search against text or filter conditions.

You can only get tables that you have access to based on the security policies defined in Lake Formation. You need at least a read-only access to the table for it to be returned. If you do not have access to all the columns in the table, these columns will not be searched against when returning the list of tables back to you. If you have access to the columns but not the data in the columns, those columns and the associated metadata for those columns will be included in the search.

Examples:

Request syntax with placeholder values


resp = client.search_tables({
  catalog_id: "CatalogIdString",
  next_token: "Token",
  filters: [
    {
      key: "ValueString",
      value: "ValueString",
      comparator: "EQUALS", # accepts EQUALS, GREATER_THAN, LESS_THAN, GREATER_THAN_EQUALS, LESS_THAN_EQUALS
    },
  ],
  search_text: "ValueString",
  sort_criteria: [
    {
      field_name: "ValueString",
      sort: "ASC", # accepts ASC, DESC
    },
  ],
  max_results: 1,
  resource_share_type: "FOREIGN", # accepts FOREIGN, ALL
})

Response structure


resp.next_token #=> String
resp.table_list #=> Array
resp.table_list[0].name #=> String
resp.table_list[0].database_name #=> String
resp.table_list[0].description #=> String
resp.table_list[0].owner #=> String
resp.table_list[0].create_time #=> Time
resp.table_list[0].update_time #=> Time
resp.table_list[0].last_access_time #=> Time
resp.table_list[0].last_analyzed_time #=> Time
resp.table_list[0].retention #=> Integer
resp.table_list[0].storage_descriptor.columns #=> Array
resp.table_list[0].storage_descriptor.columns[0].name #=> String
resp.table_list[0].storage_descriptor.columns[0].type #=> String
resp.table_list[0].storage_descriptor.columns[0].comment #=> String
resp.table_list[0].storage_descriptor.columns[0].parameters #=> Hash
resp.table_list[0].storage_descriptor.columns[0].parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.location #=> String
resp.table_list[0].storage_descriptor.input_format #=> String
resp.table_list[0].storage_descriptor.output_format #=> String
resp.table_list[0].storage_descriptor.compressed #=> true/false
resp.table_list[0].storage_descriptor.number_of_buckets #=> Integer
resp.table_list[0].storage_descriptor.serde_info.name #=> String
resp.table_list[0].storage_descriptor.serde_info.serialization_library #=> String
resp.table_list[0].storage_descriptor.serde_info.parameters #=> Hash
resp.table_list[0].storage_descriptor.serde_info.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.bucket_columns #=> Array
resp.table_list[0].storage_descriptor.bucket_columns[0] #=> String
resp.table_list[0].storage_descriptor.sort_columns #=> Array
resp.table_list[0].storage_descriptor.sort_columns[0].column #=> String
resp.table_list[0].storage_descriptor.sort_columns[0].sort_order #=> Integer
resp.table_list[0].storage_descriptor.parameters #=> Hash
resp.table_list[0].storage_descriptor.parameters["KeyString"] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_names[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values #=> Array
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_values[0] #=> String
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps #=> Hash
resp.table_list[0].storage_descriptor.skewed_info.skewed_column_value_location_maps["ColumnValuesString"] #=> String
resp.table_list[0].storage_descriptor.stored_as_sub_directories #=> true/false
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_arn #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.schema_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_id.registry_name #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_id #=> String
resp.table_list[0].storage_descriptor.schema_reference.schema_version_number #=> Integer
resp.table_list[0].partition_keys #=> Array
resp.table_list[0].partition_keys[0].name #=> String
resp.table_list[0].partition_keys[0].type #=> String
resp.table_list[0].partition_keys[0].comment #=> String
resp.table_list[0].partition_keys[0].parameters #=> Hash
resp.table_list[0].partition_keys[0].parameters["KeyString"] #=> String
resp.table_list[0].view_original_text #=> String
resp.table_list[0].view_expanded_text #=> String
resp.table_list[0].table_type #=> String
resp.table_list[0].parameters #=> Hash
resp.table_list[0].parameters["KeyString"] #=> String
resp.table_list[0].created_by #=> String
resp.table_list[0].is_registered_with_lake_formation #=> true/false
resp.table_list[0].target_table.catalog_id #=> String
resp.table_list[0].target_table.database_name #=> String
resp.table_list[0].target_table.name #=> String
resp.table_list[0].catalog_id #=> String

Options Hash (options):

  • :catalog_id (String)

    A unique identifier, consisting of account_id.

  • :next_token (String)

    A continuation token, included if this is a continuation call.

  • :filters (Array<Types::PropertyPredicate>)

    A list of key-value pairs, and a comparator used to filter the search results. Returns all entities matching the predicate.

    The Comparator member of the PropertyPredicate struct is used only for time fields, and can be omitted for other field types. Also, when comparing string values, such as when Key=Name, a fuzzy match algorithm is used. The Key field (for example, the value of the Name field) is split on certain punctuation characters, for example, -, :, #, etc. into tokens. Then each token is exact-match compared with the Value member of PropertyPredicate. For example, if Key=Name and Value=link, tables named customer-link and xx-link-yy are returned, but xxlinkyy is not returned.

  • :search_text (String)

    A string used for a text search.

    Specifying a value in quotes filters based on an exact match to the value.

  • :sort_criteria (Array<Types::SortCriterion>)

    A list of criteria for sorting the results by a field name, in an ascending or descending order.

  • :max_results (Integer)

    The maximum number of tables to return in a single response.

  • :resource_share_type (String)

    Allows you to specify that you want to search the tables shared with your account. The allowable values are FOREIGN or ALL.

    • If set to FOREIGN, will search the tables shared with your account.

    • If set to ALL, will search the tables shared with your account, as well as the tables in yor local account.

Returns:

See Also:

#start_crawler(options = {}) ⇒ Struct

Starts a crawl using the specified crawler, regardless of what is scheduled. If the crawler is already running, returns a CrawlerRunningException.

Examples:

Request syntax with placeholder values


resp = client.start_crawler({
  name: "NameString", # required
})

Options Hash (options):

  • :name (required, String)

    Name of the crawler to start.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#start_crawler_schedule(options = {}) ⇒ Struct

Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED.

Examples:

Request syntax with placeholder values


resp = client.start_crawler_schedule({
  crawler_name: "NameString", # required
})

Options Hash (options):

  • :crawler_name (required, String)

    Name of the crawler to schedule.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#start_export_labels_task_run(options = {}) ⇒ Types::StartExportLabelsTaskRunResponse

Begins an asynchronous task to export all labeled data for a particular transform. This task is the only label-related API call that is not part of the typical active learning workflow. You typically use StartExportLabelsTaskRun when you want to work with all of your existing labels at the same time, such as when you want to remove or change labels that were previously submitted as truth. This API operation accepts the TransformId whose labels you want to export and an Amazon Simple Storage Service (Amazon S3) path to export the labels to. The operation returns a TaskRunId. You can check on the status of your task run by calling the GetMLTaskRun API.

Examples:

Request syntax with placeholder values


resp = client.start_export_labels_task_run({
  transform_id: "HashString", # required
  output_s3_path: "UriString", # required
})

Response structure


resp.task_run_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :output_s3_path (required, String)

    The Amazon S3 path where you export the labels.

Returns:

See Also:

#start_import_labels_task_run(options = {}) ⇒ Types::StartImportLabelsTaskRunResponse

Enables you to provide additional labels (examples of truth) to be used to teach the machine learning transform and improve its quality. This API operation is generally used as part of the active learning workflow that starts with the StartMLLabelingSetGenerationTaskRun call and that ultimately results in improving the quality of your machine learning transform.

After the StartMLLabelingSetGenerationTaskRun finishes, AWS Glue machine learning will have generated a series of questions for humans to answer. (Answering these questions is often called 'labeling' in the machine learning workflows). In the case of the FindMatches transform, these questions are of the form, “What is the correct way to group these rows together into groups composed entirely of matching records?” After the labeling process is finished, users upload their answers/labels with a call to StartImportLabelsTaskRun. After StartImportLabelsTaskRun finishes, all future runs of the machine learning transform use the new and improved labels and perform a higher-quality transformation.

By default, StartMLLabelingSetGenerationTaskRun continually learns from and combines all labels that you upload unless you set Replace to true. If you set Replace to true, StartImportLabelsTaskRun deletes and forgets all previously uploaded labels and learns only from the exact set that you upload. Replacing labels can be helpful if you realize that you previously uploaded incorrect labels, and you believe that they are having a negative effect on your transform quality.

You can check on the status of your task run by calling the GetMLTaskRun operation.

Examples:

Request syntax with placeholder values


resp = client.start_import_labels_task_run({
  transform_id: "HashString", # required
  input_s3_path: "UriString", # required
  replace_all_labels: false,
})

Response structure


resp.task_run_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :input_s3_path (required, String)

    The Amazon Simple Storage Service (Amazon S3) path from where you import the labels.

  • :replace_all_labels (Boolean)

    Indicates whether to overwrite your existing labels.

Returns:

See Also:

#start_job_run(options = {}) ⇒ Types::StartJobRunResponse

Starts a job run using a job definition.

Examples:

Request syntax with placeholder values


resp = client.start_job_run({
  job_name: "NameString", # required
  job_run_id: "IdString",
  arguments: {
    "GenericString" => "GenericString",
  },
  allocated_capacity: 1,
  timeout: 1,
  max_capacity: 1.0,
  security_configuration: "NameString",
  notification_property: {
    notify_delay_after: 1,
  },
  worker_type: "Standard", # accepts Standard, G.1X, G.2X
  number_of_workers: 1,
})

Response structure


resp.job_run_id #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition to use.

  • :job_run_id (String)

    The ID of a previous JobRun to retry.

  • :arguments (Hash<String,String>)

    The job arguments specifically for this run. For this job run, they replace the default arguments set in the job definition itself.

    You can specify arguments here that your own job-execution script consumes, as well as arguments that AWS Glue itself consumes.

    For information about how to specify and consume your own Job arguments, see the Calling AWS Glue APIs in Python topic in the developer guide.

    For information about the key-value pairs that AWS Glue consumes to set up your job, see the Special Parameters Used by AWS Glue topic in the developer guide.

  • :allocated_capacity (Integer)

    This field is deprecated. Use MaxCapacity instead.

    The number of AWS Glue data processing units (DPUs) to allocate to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

  • :timeout (Integer)

    The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.

  • :max_capacity (Float)

    The number of AWS Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

    Do not set Max Capacity if using WorkerType and NumberOfWorkers.

    The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, or an Apache Spark ETL job:

    • When you specify a Python shell job (JobCommand.Name=\"pythonshell\"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.

    • When you specify an Apache Spark ETL job (JobCommand.Name=\"glueetl\"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.

  • :security_configuration (String)

    The name of the SecurityConfiguration structure to be used with this job run.

  • :notification_property (Types::NotificationProperty)

    Specifies configuration properties of a job run notification.

  • :worker_type (String)

    The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.

    • For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when a job runs.

    The maximum number of workers you can define are 299 for G.1X, and 149 for G.2X.

Returns:

See Also:

#start_ml_evaluation_task_run(options = {}) ⇒ Types::StartMLEvaluationTaskRunResponse

Starts a task to estimate the quality of the transform.

When you provide label sets as examples of truth, AWS Glue machine learning uses some of those examples to learn from them. The rest of the labels are used as a test to estimate quality.

Returns a unique identifier for the run. You can call GetMLTaskRun to get more information about the stats of the EvaluationTaskRun.

Examples:

Request syntax with placeholder values


resp = client.start_ml_evaluation_task_run({
  transform_id: "HashString", # required
})

Response structure


resp.task_run_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

Returns:

See Also:

#start_ml_labeling_set_generation_task_run(options = {}) ⇒ Types::StartMLLabelingSetGenerationTaskRunResponse

Starts the active learning workflow for your machine learning transform to improve the transform's quality by generating label sets and adding labels.

When the StartMLLabelingSetGenerationTaskRun finishes, AWS Glue will have generated a "labeling set" or a set of questions for humans to answer.

In the case of the FindMatches transform, these questions are of the form, “What is the correct way to group these rows together into groups composed entirely of matching records?”

After the labeling process is finished, you can upload your labels with a call to StartImportLabelsTaskRun. After StartImportLabelsTaskRun finishes, all future runs of the machine learning transform will use the new and improved labels and perform a higher-quality transformation.

Examples:

Request syntax with placeholder values


resp = client.start_ml_labeling_set_generation_task_run({
  transform_id: "HashString", # required
  output_s3_path: "UriString", # required
})

Response structure


resp.task_run_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    The unique identifier of the machine learning transform.

  • :output_s3_path (required, String)

    The Amazon Simple Storage Service (Amazon S3) path where you generate the labeling set.

Returns:

See Also:

#start_trigger(options = {}) ⇒ Types::StartTriggerResponse

Starts an existing trigger. See Triggering Jobs for information about how different types of trigger are started.

Examples:

Request syntax with placeholder values


resp = client.start_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name of the trigger to start.

Returns:

See Also:

#start_workflow_run(options = {}) ⇒ Types::StartWorkflowRunResponse

Starts a new run of the specified workflow.

Examples:

Request syntax with placeholder values


resp = client.start_workflow_run({
  name: "NameString", # required
})

Response structure


resp.run_id #=> String

Options Hash (options):

  • :name (required, String)

    The name of the workflow to start.

Returns:

See Also:

#stop_crawler(options = {}) ⇒ Struct

If the specified crawler is running, stops the crawl.

Examples:

Request syntax with placeholder values


resp = client.stop_crawler({
  name: "NameString", # required
})

Options Hash (options):

  • :name (required, String)

    Name of the crawler to stop.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#stop_crawler_schedule(options = {}) ⇒ Struct

Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running.

Examples:

Request syntax with placeholder values


resp = client.stop_crawler_schedule({
  crawler_name: "NameString", # required
})

Options Hash (options):

  • :crawler_name (required, String)

    Name of the crawler whose schedule state to set.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#stop_trigger(options = {}) ⇒ Types::StopTriggerResponse

Stops a specified trigger.

Examples:

Request syntax with placeholder values


resp = client.stop_trigger({
  name: "NameString", # required
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    The name of the trigger to stop.

Returns:

See Also:

#stop_workflow_run(options = {}) ⇒ Struct

Stops the execution of the specified workflow run.

Examples:

Request syntax with placeholder values


resp = client.stop_workflow_run({
  name: "NameString", # required
  run_id: "IdString", # required
})

Options Hash (options):

  • :name (required, String)

    The name of the workflow to stop.

  • :run_id (required, String)

    The ID of the workflow run to stop.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#tag_resource(options = {}) ⇒ Struct

Adds tags to a resource. A tag is a label you can assign to an AWS resource. In AWS Glue, you can tag only certain resources. For information about what resources you can tag, see AWS Tags in AWS Glue.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "GlueResourceArn", # required
  tags_to_add: { # required
    "TagKey" => "TagValue",
  },
})

Options Hash (options):

  • :resource_arn (required, String)

    The ARN of the AWS Glue resource to which to add the tags. For more information about AWS Glue resource ARNs, see the AWS Glue ARN string pattern.

  • :tags_to_add (required, Hash<String,String>)

    Tags to add to this resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#untag_resource(options = {}) ⇒ Struct

Removes tags from a resource.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "GlueResourceArn", # required
  tags_to_remove: ["TagKey"], # required
})

Options Hash (options):

  • :resource_arn (required, String)

    The Amazon Resource Name (ARN) of the resource from which to remove the tags.

  • :tags_to_remove (required, Array<String>)

    Tags to remove from this resource.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_classifier(options = {}) ⇒ Struct

Modifies an existing classifier (a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field is present).

Examples:

Request syntax with placeholder values


resp = client.update_classifier({
  grok_classifier: {
    name: "NameString", # required
    classification: "Classification",
    grok_pattern: "GrokPattern",
    custom_patterns: "CustomPatterns",
  },
  xml_classifier: {
    name: "NameString", # required
    classification: "Classification",
    row_tag: "RowTag",
  },
  json_classifier: {
    name: "NameString", # required
    json_path: "JsonPath",
  },
  csv_classifier: {
    name: "NameString", # required
    delimiter: "CsvColumnDelimiter",
    quote_symbol: "CsvQuoteSymbol",
    contains_header: "UNKNOWN", # accepts UNKNOWN, PRESENT, ABSENT
    header: ["NameString"],
    disable_value_trimming: false,
    allow_single_column: false,
  },
})

Options Hash (options):

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_column_statistics_for_partition(options = {}) ⇒ Types::UpdateColumnStatisticsForPartitionResponse

Creates or updates partition statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is UpdatePartition.

Examples:

Request syntax with placeholder values


resp = client.update_column_statistics_for_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_values: ["ValueString"], # required
  column_statistics_list: [ # required
    {
      column_name: "NameString", # required
      column_type: "TypeString", # required
      analyzed_time: Time.now, # required
      statistics_data: { # required
        type: "BOOLEAN", # required, accepts BOOLEAN, DATE, DECIMAL, DOUBLE, LONG, STRING, BINARY
        boolean_column_statistics_data: {
          number_of_trues: 1, # required
          number_of_falses: 1, # required
          number_of_nulls: 1, # required
        },
        date_column_statistics_data: {
          minimum_value: Time.now,
          maximum_value: Time.now,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        decimal_column_statistics_data: {
          minimum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          maximum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        double_column_statistics_data: {
          minimum_value: 1.0,
          maximum_value: 1.0,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        long_column_statistics_data: {
          minimum_value: 1,
          maximum_value: 1,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        string_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        binary_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
        },
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].column_statistics.column_name #=> String
resp.errors[0].column_statistics.column_type #=> String
resp.errors[0].column_statistics.analyzed_time #=> Time
resp.errors[0].column_statistics.statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> IO
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> IO
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :partition_values (required, Array<String>)

    A list of partition values identifying the partition.

  • :column_statistics_list (required, Array<Types::ColumnStatistics>)

    A list of the column statistics.

Returns:

See Also:

#update_column_statistics_for_table(options = {}) ⇒ Types::UpdateColumnStatisticsForTableResponse

Creates or updates table statistics of columns.

The Identity and Access Management (IAM) permission required for this operation is UpdateTable.

Examples:

Request syntax with placeholder values


resp = client.update_column_statistics_for_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  column_statistics_list: [ # required
    {
      column_name: "NameString", # required
      column_type: "TypeString", # required
      analyzed_time: Time.now, # required
      statistics_data: { # required
        type: "BOOLEAN", # required, accepts BOOLEAN, DATE, DECIMAL, DOUBLE, LONG, STRING, BINARY
        boolean_column_statistics_data: {
          number_of_trues: 1, # required
          number_of_falses: 1, # required
          number_of_nulls: 1, # required
        },
        date_column_statistics_data: {
          minimum_value: Time.now,
          maximum_value: Time.now,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        decimal_column_statistics_data: {
          minimum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          maximum_value: {
            unscaled_value: "data", # required
            scale: 1, # required
          },
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        double_column_statistics_data: {
          minimum_value: 1.0,
          maximum_value: 1.0,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        long_column_statistics_data: {
          minimum_value: 1,
          maximum_value: 1,
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        string_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
          number_of_distinct_values: 1, # required
        },
        binary_column_statistics_data: {
          maximum_length: 1, # required
          average_length: 1.0, # required
          number_of_nulls: 1, # required
        },
      },
    },
  ],
})

Response structure


resp.errors #=> Array
resp.errors[0].column_statistics.column_name #=> String
resp.errors[0].column_statistics.column_type #=> String
resp.errors[0].column_statistics.analyzed_time #=> Time
resp.errors[0].column_statistics.statistics_data.type #=> String, one of "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "LONG", "STRING", "BINARY"
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_trues #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_falses #=> Integer
resp.errors[0].column_statistics.statistics_data.boolean_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.minimum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.maximum_value #=> Time
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.date_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.unscaled_value #=> IO
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.minimum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.unscaled_value #=> IO
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.maximum_value.scale #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.decimal_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.minimum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.maximum_value #=> Float
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.double_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.minimum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.maximum_value #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.long_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].column_statistics.statistics_data.string_column_statistics_data.number_of_distinct_values #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.maximum_length #=> Integer
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.average_length #=> Float
resp.errors[0].column_statistics.statistics_data.binary_column_statistics_data.number_of_nulls #=> Integer
resp.errors[0].error.error_code #=> String
resp.errors[0].error.error_message #=> String

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partitions in question reside. If none is supplied, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the partitions reside.

  • :table_name (required, String)

    The name of the partitions\' table.

  • :column_statistics_list (required, Array<Types::ColumnStatistics>)

    A list of the column statistics.

Returns:

See Also:

#update_connection(options = {}) ⇒ Struct

Updates a connection definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_connection({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  connection_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    connection_type: "JDBC", # required, accepts JDBC, SFTP, MONGODB, KAFKA, NETWORK
    match_criteria: ["NameString"],
    connection_properties: { # required
      "HOST" => "ValueString",
    },
    physical_connection_requirements: {
      subnet_id: "NameString",
      security_group_id_list: ["NameString"],
      availability_zone: "NameString",
    },
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the connection resides. If none is provided, the AWS account ID is used by default.

  • :name (required, String)

    The name of the connection definition to update.

  • :connection_input (required, Types::ConnectionInput)

    A ConnectionInput object that redefines the connection in question.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_crawler(options = {}) ⇒ Struct

Updates a crawler. If a crawler is running, you must stop it using StopCrawler before updating it.

Examples:

Request syntax with placeholder values


resp = client.update_crawler({
  name: "NameString", # required
  role: "Role",
  database_name: "DatabaseName",
  description: "DescriptionStringRemovable",
  targets: {
    s3_targets: [
      {
        path: "Path",
        exclusions: ["Path"],
        connection_name: "ConnectionName",
      },
    ],
    jdbc_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        exclusions: ["Path"],
      },
    ],
    mongo_db_targets: [
      {
        connection_name: "ConnectionName",
        path: "Path",
        scan_all: false,
      },
    ],
    dynamo_db_targets: [
      {
        path: "Path",
        scan_all: false,
        scan_rate: 1.0,
      },
    ],
    catalog_targets: [
      {
        database_name: "NameString", # required
        tables: ["NameString"], # required
      },
    ],
  },
  schedule: "CronExpression",
  classifiers: ["NameString"],
  table_prefix: "TablePrefix",
  schema_change_policy: {
    update_behavior: "LOG", # accepts LOG, UPDATE_IN_DATABASE
    delete_behavior: "LOG", # accepts LOG, DELETE_FROM_DATABASE, DEPRECATE_IN_DATABASE
  },
  recrawl_policy: {
    recrawl_behavior: "CRAWL_EVERYTHING", # accepts CRAWL_EVERYTHING, CRAWL_NEW_FOLDERS_ONLY
  },
  configuration: "CrawlerConfiguration",
  crawler_security_configuration: "CrawlerSecurityConfiguration",
})

Options Hash (options):

  • :name (required, String)

    Name of the new crawler.

  • :role (String)

    The IAM role or Amazon Resource Name (ARN) of an IAM role that is used by the new crawler to access customer resources.

  • :database_name (String)

    The AWS Glue database where results are stored, such as: arn:aws:daylight:us-east-1::database/sometable/*.

  • :description (String)

    A description of the new crawler.

  • :targets (Types::CrawlerTargets)

    A list of targets to crawl.

  • :schedule (String)

    A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers. For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

  • :classifiers (Array<String>)

    A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • :table_prefix (String)

    The table prefix used for catalog tables that are created.

  • :schema_change_policy (Types::SchemaChangePolicy)

    The policy for the crawler\'s update and deletion behavior.

  • :recrawl_policy (Types::RecrawlPolicy)

    A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • :configuration (String)

    Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler\'s behavior. For more information, see Configuring a Crawler.

  • :crawler_security_configuration (String)

    The name of the SecurityConfiguration structure to be used by this crawler.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_crawler_schedule(options = {}) ⇒ Struct

Updates the schedule of a crawler using a cron expression.

Examples:

Request syntax with placeholder values


resp = client.update_crawler_schedule({
  crawler_name: "NameString", # required
  schedule: "CronExpression",
})

Options Hash (options):

  • :crawler_name (required, String)

    The name of the crawler whose schedule to update.

  • :schedule (String)

    The updated cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers. For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_database(options = {}) ⇒ Struct

Updates an existing database definition in a Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_database({
  catalog_id: "CatalogIdString",
  name: "NameString", # required
  database_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    location_uri: "URI",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    create_table_default_permissions: [
      {
        principal: {
          data_lake_principal_identifier: "DataLakePrincipalString",
        },
        permissions: ["ALL"], # accepts ALL, SELECT, ALTER, DROP, DELETE, INSERT, CREATE_DATABASE, CREATE_TABLE, DATA_LOCATION_ACCESS
      },
    ],
    target_database: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
    },
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog in which the metadata database resides. If none is provided, the AWS account ID is used by default.

  • :name (required, String)

    The name of the database to update in the catalog. For Hive compatibility, this is folded to lowercase.

  • :database_input (required, Types::DatabaseInput)

    A DatabaseInput object specifying the new definition of the metadata database in the catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_dev_endpoint(options = {}) ⇒ Struct

Updates a specified development endpoint.

Examples:

Request syntax with placeholder values


resp = client.update_dev_endpoint({
  endpoint_name: "GenericString", # required
  public_key: "GenericString",
  add_public_keys: ["GenericString"],
  delete_public_keys: ["GenericString"],
  custom_libraries: {
    extra_python_libs_s3_path: "GenericString",
    extra_jars_s3_path: "GenericString",
  },
  update_etl_libraries: false,
  delete_arguments: ["GenericString"],
  add_arguments: {
    "GenericString" => "GenericString",
  },
})

Options Hash (options):

  • :endpoint_name (required, String)

    The name of the DevEndpoint to be updated.

  • :public_key (String)

    The public key for the DevEndpoint to use.

  • :add_public_keys (Array<String>)

    The list of public keys for the DevEndpoint to use.

  • :delete_public_keys (Array<String>)

    The list of public keys to be deleted from the DevEndpoint.

  • :custom_libraries (Types::DevEndpointCustomLibraries)

    Custom Python or Java libraries to be loaded in the DevEndpoint.

  • :update_etl_libraries (Boolean)

    True if the list of custom libraries to be loaded in the development endpoint needs to be updated, or False if otherwise.

  • :delete_arguments (Array<String>)

    The list of argument keys to be deleted from the map of arguments used to configure the DevEndpoint.

  • :add_arguments (Hash<String,String>)

    The map of arguments to add the map of arguments used to configure the DevEndpoint.

    Valid arguments are:

    • "--enable-glue-datacatalog": ""

    • "GLUE_PYTHON_VERSION": "3"

    • "GLUE_PYTHON_VERSION": "2"

    You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_job(options = {}) ⇒ Types::UpdateJobResponse

Updates an existing job definition.

Examples:

Request syntax with placeholder values


resp = client.update_job({
  job_name: "NameString", # required
  job_update: { # required
    description: "DescriptionString",
    log_uri: "UriString",
    role: "RoleString",
    execution_property: {
      max_concurrent_runs: 1,
    },
    command: {
      name: "GenericString",
      script_location: "ScriptLocationString",
      python_version: "PythonVersionString",
    },
    default_arguments: {
      "GenericString" => "GenericString",
    },
    non_overridable_arguments: {
      "GenericString" => "GenericString",
    },
    connections: {
      connections: ["GenericString"],
    },
    max_retries: 1,
    allocated_capacity: 1,
    timeout: 1,
    max_capacity: 1.0,
    worker_type: "Standard", # accepts Standard, G.1X, G.2X
    number_of_workers: 1,
    security_configuration: "NameString",
    notification_property: {
      notify_delay_after: 1,
    },
    glue_version: "GlueVersionString",
  },
})

Response structure


resp.job_name #=> String

Options Hash (options):

  • :job_name (required, String)

    The name of the job definition to update.

  • :job_update (required, Types::JobUpdate)

    Specifies the values with which to update the job definition.

Returns:

See Also:

#update_ml_transform(options = {}) ⇒ Types::UpdateMLTransformResponse

Updates an existing machine learning transform. Call this operation to tune the algorithm parameters to achieve better results.

After calling this operation, you can call the StartMLEvaluationTaskRun operation to assess how well your new parameters achieved your goals (such as improving the quality of your machine learning transform, or making it more cost-effective).

Examples:

Request syntax with placeholder values


resp = client.update_ml_transform({
  transform_id: "HashString", # required
  name: "NameString",
  description: "DescriptionString",
  parameters: {
    transform_type: "FIND_MATCHES", # required, accepts FIND_MATCHES
    find_matches_parameters: {
      primary_key_column_name: "ColumnNameString",
      precision_recall_tradeoff: 1.0,
      accuracy_cost_tradeoff: 1.0,
      enforce_provided_labels: false,
    },
  },
  role: "RoleString",
  glue_version: "GlueVersionString",
  max_capacity: 1.0,
  worker_type: "Standard", # accepts Standard, G.1X, G.2X
  number_of_workers: 1,
  timeout: 1,
  max_retries: 1,
})

Response structure


resp.transform_id #=> String

Options Hash (options):

  • :transform_id (required, String)

    A unique identifier that was generated when the transform was created.

  • :name (String)

    The unique name that you gave the transform when you created it.

  • :description (String)

    A description of the transform. The default is an empty string.

  • :parameters (Types::TransformParameters)

    The configuration parameters that are specific to the transform type (algorithm) used. Conditionally dependent on the transform type.

  • :role (String)

    The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.

  • :glue_version (String)

    This value determines which version of AWS Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see AWS Glue Versions in the developer guide.

  • :max_capacity (Float)

    The number of AWS Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the AWS Glue pricing page.

    When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.

  • :worker_type (String)

    The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.

    • For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.

    • For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.

    • For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.

  • :number_of_workers (Integer)

    The number of workers of a defined workerType that are allocated when this task runs.

  • :timeout (Integer)

    The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).

  • :max_retries (Integer)

    The maximum number of times to retry a task for this transform after a task run fails.

Returns:

See Also:

#update_partition(options = {}) ⇒ Struct

Updates a partition.

Examples:

Request syntax with placeholder values


resp = client.update_partition({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_name: "NameString", # required
  partition_value_list: ["ValueString"], # required
  partition_input: { # required
    values: ["ValueString"],
    last_access_time: Time.now,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    last_analyzed_time: Time.now,
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the partition to be updated resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table in question resides.

  • :table_name (required, String)

    The name of the table in which the partition to be updated is located.

  • :partition_value_list (required, Array<String>)

    List of partition key values that define the partition to update.

  • :partition_input (required, Types::PartitionInput)

    The new partition object to update the partition to.

    The Values property can\'t be changed. If you want to change the partition key values for a partition, delete and recreate the partition.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_registry(options = {}) ⇒ Types::UpdateRegistryResponse

Updates an existing registry which is used to hold a collection of schemas. The updated properties relate to the registry, and do not modify any of the schemas within the registry.

Examples:

Request syntax with placeholder values


resp = client.update_registry({
  registry_id: { # required
    registry_name: "SchemaRegistryNameString",
    registry_arn: "GlueResourceArn",
  },
  description: "DescriptionString", # required
})

Response structure


resp.registry_name #=> String
resp.registry_arn #=> String

Options Hash (options):

  • :registry_id (required, Types::RegistryId)

    This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).

  • :description (required, String)

    A description of the registry. If description is not provided, this field will not be updated.

Returns:

See Also:

#update_schema(options = {}) ⇒ Types::UpdateSchemaResponse

Updates the description, compatibility setting, or version checkpoint for a schema set.

For updating the compatibility setting, the call will not validate compatibility for the entire set of schema versions with the new compatibility setting. If the value for Compatibility is provided, the VersionNumber (a checkpoint) is also required. The API will validate the checkpoint version number for consistency.

If the value for the VersionNumber (checkpoint) is provided, Compatibility is optional and this can be used to set/reset a checkpoint for the schema.

This update will happen only if the schema is in the AVAILABLE state.

Examples:

Request syntax with placeholder values


resp = client.update_schema({
  schema_id: { # required
    schema_arn: "GlueResourceArn",
    schema_name: "SchemaRegistryNameString",
    registry_name: "SchemaRegistryNameString",
  },
  schema_version_number: {
    latest_version: false,
    version_number: 1,
  },
  compatibility: "NONE", # accepts NONE, DISABLED, BACKWARD, BACKWARD_ALL, FORWARD, FORWARD_ALL, FULL, FULL_ALL
  description: "DescriptionString",
})

Response structure


resp.schema_arn #=> String
resp.schema_name #=> String
resp.registry_name #=> String

Options Hash (options):

  • :schema_id (required, Types::SchemaId)

    This is a wrapper structure to contain schema identity fields. The structure contains:

    • SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.

    • SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.

  • :schema_version_number (Types::SchemaVersionNumber)

    Version number required for check pointing. One of VersionNumber or Compatibility has to be provided.

  • :compatibility (String)

    The new compatibility setting for the schema.

  • :description (String)

    The new description for the schema.

Returns:

See Also:

#update_table(options = {}) ⇒ Struct

Updates a metadata table in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_table({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  table_input: { # required
    name: "NameString", # required
    description: "DescriptionString",
    owner: "NameString",
    last_access_time: Time.now,
    last_analyzed_time: Time.now,
    retention: 1,
    storage_descriptor: {
      columns: [
        {
          name: "NameString", # required
          type: "ColumnTypeString",
          comment: "CommentString",
          parameters: {
            "KeyString" => "ParametersMapValue",
          },
        },
      ],
      location: "LocationString",
      input_format: "FormatString",
      output_format: "FormatString",
      compressed: false,
      number_of_buckets: 1,
      serde_info: {
        name: "NameString",
        serialization_library: "NameString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
      bucket_columns: ["NameString"],
      sort_columns: [
        {
          column: "NameString", # required
          sort_order: 1, # required
        },
      ],
      parameters: {
        "KeyString" => "ParametersMapValue",
      },
      skewed_info: {
        skewed_column_names: ["NameString"],
        skewed_column_values: ["ColumnValuesString"],
        skewed_column_value_location_maps: {
          "ColumnValuesString" => "ColumnValuesString",
        },
      },
      stored_as_sub_directories: false,
      schema_reference: {
        schema_id: {
          schema_arn: "GlueResourceArn",
          schema_name: "SchemaRegistryNameString",
          registry_name: "SchemaRegistryNameString",
        },
        schema_version_id: "SchemaVersionIdString",
        schema_version_number: 1,
      },
    },
    partition_keys: [
      {
        name: "NameString", # required
        type: "ColumnTypeString",
        comment: "CommentString",
        parameters: {
          "KeyString" => "ParametersMapValue",
        },
      },
    ],
    view_original_text: "ViewTextString",
    view_expanded_text: "ViewTextString",
    table_type: "TableTypeString",
    parameters: {
      "KeyString" => "ParametersMapValue",
    },
    target_table: {
      catalog_id: "CatalogIdString",
      database_name: "NameString",
      name: "NameString",
    },
  },
  skip_archive: false,
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the table resides. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.

  • :table_input (required, Types::TableInput)

    An updated TableInput object to define the metadata table in the catalog.

  • :skip_archive (Boolean)

    By default, UpdateTable always creates an archived version of the table before updating it. However, if skipArchive is set to true, UpdateTable does not create the archived version.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_trigger(options = {}) ⇒ Types::UpdateTriggerResponse

Updates a trigger definition.

Examples:

Request syntax with placeholder values


resp = client.update_trigger({
  name: "NameString", # required
  trigger_update: { # required
    name: "NameString",
    description: "DescriptionString",
    schedule: "GenericString",
    actions: [
      {
        job_name: "NameString",
        arguments: {
          "GenericString" => "GenericString",
        },
        timeout: 1,
        security_configuration: "NameString",
        notification_property: {
          notify_delay_after: 1,
        },
        crawler_name: "NameString",
      },
    ],
    predicate: {
      logical: "AND", # accepts AND, ANY
      conditions: [
        {
          logical_operator: "EQUALS", # accepts EQUALS
          job_name: "NameString",
          state: "STARTING", # accepts STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, TIMEOUT
          crawler_name: "NameString",
          crawl_state: "RUNNING", # accepts RUNNING, CANCELLING, CANCELLED, SUCCEEDED, FAILED
        },
      ],
    },
  },
})

Response structure


resp.trigger.name #=> String
resp.trigger.workflow_name #=> String
resp.trigger.id #=> String
resp.trigger.type #=> String, one of "SCHEDULED", "CONDITIONAL", "ON_DEMAND"
resp.trigger.state #=> String, one of "CREATING", "CREATED", "ACTIVATING", "ACTIVATED", "DEACTIVATING", "DEACTIVATED", "DELETING", "UPDATING"
resp.trigger.description #=> String
resp.trigger.schedule #=> String
resp.trigger.actions #=> Array
resp.trigger.actions[0].job_name #=> String
resp.trigger.actions[0].arguments #=> Hash
resp.trigger.actions[0].arguments["GenericString"] #=> String
resp.trigger.actions[0].timeout #=> Integer
resp.trigger.actions[0].security_configuration #=> String
resp.trigger.actions[0].notification_property.notify_delay_after #=> Integer
resp.trigger.actions[0].crawler_name #=> String
resp.trigger.predicate.logical #=> String, one of "AND", "ANY"
resp.trigger.predicate.conditions #=> Array
resp.trigger.predicate.conditions[0].logical_operator #=> String, one of "EQUALS"
resp.trigger.predicate.conditions[0].job_name #=> String
resp.trigger.predicate.conditions[0].state #=> String, one of "STARTING", "RUNNING", "STOPPING", "STOPPED", "SUCCEEDED", "FAILED", "TIMEOUT"
resp.trigger.predicate.conditions[0].crawler_name #=> String
resp.trigger.predicate.conditions[0].crawl_state #=> String, one of "RUNNING", "CANCELLING", "CANCELLED", "SUCCEEDED", "FAILED"

Options Hash (options):

  • :name (required, String)

    The name of the trigger to update.

  • :trigger_update (required, Types::TriggerUpdate)

    The new values with which to update the trigger.

Returns:

See Also:

#update_user_defined_function(options = {}) ⇒ Struct

Updates an existing function definition in the Data Catalog.

Examples:

Request syntax with placeholder values


resp = client.update_user_defined_function({
  catalog_id: "CatalogIdString",
  database_name: "NameString", # required
  function_name: "NameString", # required
  function_input: { # required
    function_name: "NameString",
    class_name: "NameString",
    owner_name: "NameString",
    owner_type: "USER", # accepts USER, ROLE, GROUP
    resource_uris: [
      {
        resource_type: "JAR", # accepts JAR, FILE, ARCHIVE
        uri: "URI",
      },
    ],
  },
})

Options Hash (options):

  • :catalog_id (String)

    The ID of the Data Catalog where the function to be updated is located. If none is provided, the AWS account ID is used by default.

  • :database_name (required, String)

    The name of the catalog database where the function to be updated is located.

  • :function_name (required, String)

    The name of the function.

  • :function_input (required, Types::UserDefinedFunctionInput)

    A FunctionInput object that redefines the function in the Data Catalog.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#update_workflow(options = {}) ⇒ Types::UpdateWorkflowResponse

Updates an existing workflow.

Examples:

Request syntax with placeholder values


resp = client.update_workflow({
  name: "NameString", # required
  description: "GenericString",
  default_run_properties: {
    "IdString" => "GenericString",
  },
  max_concurrent_runs: 1,
})

Response structure


resp.name #=> String

Options Hash (options):

  • :name (required, String)

    Name of the workflow to be updated.

  • :description (String)

    The description of the workflow.

  • :default_run_properties (Hash<String,String>)

    A collection of properties to be used as part of each execution of the workflow.

  • :max_concurrent_runs (Integer)

    You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.

Returns:

See Also:

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

Waiters polls an API operation until a resource enters a desired state.

Basic Usage

Waiters will poll until they are succesful, they fail by entering a terminal state, or until a maximum number of attempts are made.

# polls in a loop, sleeping between attempts client.waiter_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Parameters:

  • waiter_name (Symbol)

    The name of the waiter. See #waiter_names for a full list of supported waiters.

  • params (Hash) (defaults to: {})

    Additional request parameters. See the #waiter_names for a list of supported waiters and what request they call. The called request determines the list of accepted parameters.

Yield Parameters:

Returns:

  • (Boolean)

    Returns true if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an error is encounted while polling for a resource that is not expected.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

#waiter_namesArray<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter NameClient MethodDefault Delay:Default Max Attempts:

Returns:

  • (Array<Symbol>)

    the list of supported waiters.