get-crawler

Description

Retrieves metadata for a specified crawler.

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

Synopsis

  get-crawler
--name <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]

Options

--name (string)

Name of the crawler to retrieve metadata for.
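
For example, assuming a crawler named my-crawler already exists in your account (the name here is hypothetical), you might retrieve its metadata with:

  aws glue get-crawler --name my-crawler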

--cli-input-json (string) Performs service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values.
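
As a sketch, assuming a local file named get-crawler.json (a hypothetical file) containing {"Name": "my-crawler"}, the same call could be made as:

  aws glue get-crawler --cli-input-json file://get-crawler.json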

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
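
For example, to write a sample input skeleton to a file that you can then edit and pass back with --cli-input-json (for this command the skeleton should contain only the Name field):

  aws glue get-crawler --generate-cli-skeleton input > get-crawler.json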

See 'aws help' for descriptions of global parameters.

Output

Crawler -> (structure)

The metadata for the specified crawler.

Name -> (string)

The crawler name.

Role -> (string)

The IAM role (or ARN of an IAM role) used to access customer resources, such as data in Amazon S3.

Targets -> (structure)

A collection of targets to crawl.

S3Targets -> (list)

Specifies Amazon S3 targets.

(structure)

Specifies a data store in Amazon S3.

Path -> (string)

The path to the Amazon S3 target.

Exclusions -> (list)

A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

(string)

JdbcTargets -> (list)

Specifies JDBC targets.

(structure)

Specifies a JDBC data store to crawl.

ConnectionName -> (string)

The name of the connection to use to connect to the JDBC target.

Path -> (string)

The path of the JDBC target.

Exclusions -> (list)

A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

(string)

DatabaseName -> (string)

The database where metadata is written by this crawler.

Description -> (string)

A description of the crawler.

Classifiers -> (list)

A list of custom classifiers associated with the crawler.

(string)

SchemaChangePolicy -> (structure)

Sets the behavior when the crawler finds a changed or deleted object.

UpdateBehavior -> (string)

The update behavior when the crawler finds a changed schema.

DeleteBehavior -> (string)

The deletion behavior when the crawler finds a deleted object.

State -> (string)

Indicates whether the crawler is running, or whether a run is pending.

TablePrefix -> (string)

The prefix added to the names of tables that are created.

Schedule -> (structure)

For scheduled crawlers, the schedule when the crawler runs.

ScheduleExpression -> (string)

A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

State -> (string)

The state of the schedule.

CrawlElapsedTime -> (long)

If the crawler is running, contains the total time elapsed since the last crawl began.

CreationTime -> (timestamp)

The time when the crawler was created.

LastUpdated -> (timestamp)

The time the crawler was last updated.

LastCrawl -> (structure)

The status of the last crawl, and potentially error information if an error occurred.

Status -> (string)

Status of the last crawl.

ErrorMessage -> (string)

If an error occurred, the error information about the last crawl.

LogGroup -> (string)

The log group for the last crawl.

LogStream -> (string)

The log stream for the last crawl.

MessagePrefix -> (string)

The prefix for a message about this crawl.

StartTime -> (timestamp)

The time at which the crawl started.

Version -> (long)

The version of the crawler.

Configuration -> (string)

Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior.

You can use this field to force partitions to inherit metadata such as classification, input format, output format, serde information, and schema from their parent table, rather than detect this information separately for each partition. Use the following JSON string to specify that behavior:

Example: '{ "Version": 1.0, "CrawlerOutput": { "Partitions": { "AddOrUpdateBehavior": "InheritFromTable" } } }'
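
For illustration, an abridged response for a hypothetical crawler with a single Amazon S3 target might look like the following (all values are placeholders, and optional fields such as LastCrawl are omitted):

  {
      "Crawler": {
          "Name": "my-crawler",
          "Role": "AWSGlueServiceRole-Default",
          "Targets": {
              "S3Targets": [
                  {
                      "Path": "s3://my-bucket/my-prefix",
                      "Exclusions": []
                  }
              ],
              "JdbcTargets": []
          },
          "DatabaseName": "my-database",
          "Classifiers": [],
          "SchemaChangePolicy": {
              "UpdateBehavior": "UPDATE_IN_DATABASE",
              "DeleteBehavior": "DEPRECATE_IN_DATABASE"
          },
          "State": "READY",
          "Schedule": {
              "ScheduleExpression": "cron(15 12 * * ? *)",
              "State": "SCHEDULED"
          },
          "CrawlElapsedTime": 0,
          "CreationTime": 1553789235.0,
          "LastUpdated": 1553789235.0,
          "Version": 1,
          "Configuration": "{\"Version\":1.0,\"CrawlerOutput\":{\"Partitions\":{\"AddOrUpdateBehavior\":\"InheritFromTable\"}}}"
      }
  }

To pull a single field out of this structure, the global --query and --output parameters can be combined with the command, for example:

  aws glue get-crawler --name my-crawler --query Crawler.State --output text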